Guest post by Louise Lubetkin
A while ago this sardonic little vignette, titled Medicine Through the Ages, was doing the rounds on the Internet:
2000 B.C. – Here, eat this root.
1000 A.D. – That root is pagan. Here, say this prayer.
1750 A.D. – That prayer is superstition. Here, drink this potion.
1900 A.D. – That potion is snake oil. Here, swallow this pill.
1985 A.D. – That pill is ineffective. Here, take this antibiotic.
2000 A.D. – That antibiotic is unnatural. Here, eat this root.
We seem to have come full circle. The idea of health as a personal goal, something that can be achieved by taking nutritional supplements such as herbal preparations, vitamins and minerals, is a fundamental tenet of alternative medicine. Consequently the rise of alternative medicine has been accompanied by a parallel rise in the use of dietary supplements.
Most people assume that dietary supplements, like pharmaceuticals, are thoroughly tested before being allowed onto the market, and that in any case because they are “natural” they are ipso facto safe.
Neither assumption is correct.
First of all, it is a great mistake to assume that all “natural” substances are harmless and therefore fit for consumption. (Fugu anyone? Perhaps with some sautéed Amanita mushrooms?) Secondly, unlike pharmaceuticals, which must undergo protracted pre-market testing for safety and efficacy, dietary supplements need not undergo even rudimentary testing before being sold over the counter to the public.
Supplement usage is at an all-time high. Currently, almost 50 percent of us regularly take supplements. The older you are, the more likely you are to take them: usage climbs to 70 percent amongst people 70 years and older. Similarly, the more formal education you have had, and the higher your income level, the more likely you are to be a regular consumer of dietary supplements.
Our collective enthusiasm for taking supplements has undoubtedly done considerably more for the health of the supplement industry than it has for that of the public. There is mounting research evidence to suggest that taking dietary supplements may be neither as safe nor as beneficial to health as has previously been assumed (more on this in another post). Nevertheless, physicians seem to be just as vulnerable as the rest of us to the blandishments of the supplement industry. According to one study published in the Journal of Nutrition, 75 percent of dermatologists, 73 percent of orthopedists and 57 percent of cardiologists reported personally using dietary supplements. An earlier study by the same research group found that a staggering 79 percent of physicians and 82 percent of nurses reported recommending dietary supplements to their patients. Of course the fact that supplements come with a personal recommendation by a physician only serves to reinforce the public’s ill-founded presumption of safety.
The manufacture and sale of supplements is a hugely profitable business, generating more than $25 billion in annual sales and contributing an estimated $60 billion to the US economy. While most other industries have languished during the current economic downturn, the supplement industry has grown steadily: overall, between 2008 and 2012, sales of supplements rose by 31.7 percent.
(Ironically, the huge popularity of these so-called “natural” health products has not escaped the notice of agribusiness and pharmaceutical giants such as Kellogg’s, Pfizer, Monsanto and others, all of which have now begun manufacturing and marketing nutritional supplements of their own.)
None of this would have been possible had it not been for the 1994 enactment by the US Congress of the Dietary Supplement Health and Education Act (DSHEA), an extraordinarily ill-conceived piece of legislation that drastically weakened the FDA’s regulatory control over vitamins, minerals, herbal, botanical and other “traditional” medical products. Prior to DSHEA, these products were classified as drugs and were therefore subject to FDA regulation. By reclassifying them as foods rather than drugs DSHEA effectively removed dietary supplements from FDA regulatory oversight. As a result, supplement manufacturers became exempt from any obligation to perform pre-market testing for purity, safety or effectiveness, and it became infinitely harder for the FDA to detect unsafe products and quickly remove them from the market.
While the FDA does have the authority to insist that manufacturers refrain from making unsubstantiated health claims, it no longer has the power to mandate removal of unsafe products from the market without first clearing the almost insurmountable legal hurdle of proving significant risk. In other words, DSHEA inverts the responsibility for ensuring safety. Before the FDA can act, consumers must first be harmed sufficiently seriously, and in sufficient numbers, to trigger an investigation.
In one fell legislative swoop, DSHEA dished up a profit bonanza to the supplement industry while simultaneously robbing the public of any meaningful protection. Thus disencumbered of all but token regulation, the dietary supplement industry quickly burgeoned. In 1994, when DSHEA was enacted, there were just 4,000 dietary supplements on the market. Today there are more than 75,000.
The brave new world spawned by DSHEA is well exemplified by the ephedra case. Herbal weight loss supplements containing the plant alkaloids ephedra and ephedrine were linked to a string of over 150 deaths and countless other serious adverse events. Metabolife, the manufacturer of the supplement, received 15,000 complaints of adverse events – including deaths – related to the product, yet was under no obligation to alert the FDA, and (not surprisingly) chose not to do so.
It took a full 10 years of intense legal fighting for the FDA to succeed in getting ephedra-containing supplements removed from the market. Undeterred, powerful industry lobbying groups and vociferous opponents of regulation mounted a successful appeal challenging the legality of the FDA ban, and ephedra supplements once again went on sale in several states. The ruling against the FDA was eventually overturned on appeal and the ephedra ban was upheld, but the cost, difficulty and duration of the legal process of restricting access to this dangerous “natural” supplement was staggering. Yet even now, despite the FDA’s hard-won ban on ephedra, it is perfectly legal to buy ephedrine hydrochloride – an extract of ephedra – over the counter in the US, where it is marketed for sale without prescription as a bronchodilator and nasal decongestant. The only restriction on its sale is that it must be presented in pill form with dosage not exceeding 8mg, and the label cannot promote it as a weight loss aid – a restriction which can be sidestepped with the greatest of ease, as this website, with the in-your-face domain name ephedrinediet.org, vividly demonstrates.
Perhaps not surprisingly, the ephedra case is the only time the FDA has attempted to force the removal of a dangerous supplement from the market. Hamstrung by DSHEA, the FDA can do little more than exert its limited authority over the wording on supplement labels to ensure that manufacturers make no explicit claims that their products may be used to prevent, cure or treat a specific disease. However, the lack of seriousness with which an increasingly confident supplement industry takes the FDA and its semantic policing powers is well illustrated by the following statement, which appears in a recent report published by the Natural Products Foundation, an industry umbrella and lobbying group:
Healthy consumers use supplements to decrease their risk of heart disease, boost their immune systems, prevent vision loss, build strong bones, or prevent birth defects. Less healthy or ill consumers turn to supplements as an alternative to traditional medical treatments, to either complement prescription drugs they may be taking or substitute supplements for prescription drugs they either cannot afford or do not trust.
There are encouraging signs that concern about the dangers posed by a largely unregulated supplement industry may at last be growing, although industry and grass-roots opposition to attempts to repeal DSHEA has been well organized, well funded and vociferous. Even so, in 2007, largely as a result of public unease over the FDA’s protracted struggle to ban ephedra, DSHEA was amended to make the reporting of serious adverse events (death, life-threatening emergencies, inpatient hospitalizations, or significant, persistent incapacities) mandatory. As a result of this amendment, in the first 9 months of 2008 alone, the FDA received almost 600 reports of serious adverse events arising from the use of dietary supplements. Moreover, the FDA believes that adverse events are being seriously under-reported, and that the annual number of supplement-related adverse events in the US is close to 50,000.
Perhaps it will take another ephedra disaster to make us rethink DSHEA, take the handcuffs off the FDA and begin looking more critically at the notion that dietary supplements are intrinsically beneficial and harmless.
In the meantime, here’s the 2013 addendum to Medicine Through the Ages:
2000 A.D. – That antibiotic is unnatural. Here, eat this root.
2013 A.D. – Has that root been assayed for adulterants and standardized for potency and purity? Has that root been approved by the FDA following clinical trials to establish dosage, efficacy and safety? Is the use of that root evidence-based? Is it safe to take that root concurrently with other roots? Are there any contraindications? My diet already contains roots; will taking more be too much?
There are few subjects in the area of alternative medicine which are more deceptive than the now fashionable topic of “integrated medicine” (or integrative medicine, healthcare etc.). According to its proponents, integrated medicine (IM) is based mainly on two concepts. The first is that of “whole person care”, and the second is often called “the best of both worlds”. Attractive concepts, one might think – why then do I find IM superfluous, deeply misguided and plainly wrong?
Whole patient care or holism
Integrated healthcare practitioners, we are being told, do not just treat the physical complaints of a patient but look after the whole individual: body, mind and soul. On the surface, this approach seems most laudable. Yet a closer look reveals major problems.
The truth is that all good medicine is, was, and always will be holistic: today’s GPs, for instance, should care for their patients as whole individuals, dealing as best they can with physical problems as well as social and spiritual issues. I said “should” because many doctors seem to neglect the holistic aspect of care. If that is so, they are, by definition, not good doctors. And, if the deficit is widespread, we should reform conventional healthcare. But delegating holism to IM-practitioners would be tantamount to abandoning an essential element of good healthcare; it would be a serious disservice to today’s patients and a detriment to the healthcare of tomorrow.
It follows that the promotion of IM under the banner of holism is utter nonsense. Either holism is, in fact, a hallmark of any good healthcare, in which case promoting it as an exclusive feature of IM is superfluous and misleads patients. Or, if holism is neglected or absent in a particular branch of conventional medicine, the IM label merely distracts us from the important task of remedying this deficit. We simply must not allow a core value of medicine to be hijacked.
The best of both worlds
The second concept of IM is often described as “the best of both worlds”. Proponents of IM claim to use the “best” of the world of alternative medicine and combine it with the “best” of conventional healthcare. Again, this concept looks commendable at first glance but, on closer inspection, serious doubts emerge.
They hinge, in my view, on the use of the term “best”. We have to ask, what does “best” stand for in the context of healthcare? Surely it cannot mean the most popular or fashionable – and certainly “best” is not by decree of HRH Prince Charles. Best can only signify “the most effective” or more precisely “being associated with the most convincingly positive risk/benefit balance”.
If we understand “the best of both worlds” in this way, the concept becomes synonymous with the concept of evidence-based medicine (EBM) which represents the currently accepted thinking in healthcare. According to the principles of EBM, treatments must be shown to be safe as well as effective. When treating their patients, doctors should, according to EBM-principles, combine the best external evidence with their own experience as well as with the preferences of their patients.
If “the best of both worlds” is synonymous with EBM, we clearly don’t need this confusing duplication of concepts in the first place; it would only distract from the auspicious efforts of EBM to continuously improve healthcare. In other words, the second axiom of IM is as nonsensical as the first.
The practice of integrated medicine
So, on the basis of these somewhat theoretical considerations, IM is a superfluous, misleading and counterproductive distraction. But the most powerful argument against IM is really an entirely practical one: namely the nonsensical, bogus and dangerous things that are happening every day in its name and under its banner.
If we look around us, go on the internet, read the relevant literature, or walk into an IM clinic in our neighbourhood, we are sure to find that behind all these politically correct slogans of holism and “the best of both worlds” lies the coal face of pure quackery. Perhaps you don’t believe me, so go and look for yourself. I promise you will discover any unproven and disproven therapy that you can think of, anything from crystal healing to Reiki, and from homeopathy to urine-therapy.
What follows is depressingly simple: IM is a front of half-baked concepts behind which boundless quackery and bogus treatments are being promoted to unsuspecting consumers.
“Don’t take this therapy lightly. Multiple sclerosis, colitis, lupus, rheumatoid arthritis, cancer, hepatitis, hyperactivity, pancreatic insufficiency, psoriasis, eczema, diabetes, herpes, mononucleosis, adrenal failure, allergies and so many other ailments have been relieved through use of this therapy. After you overcome your initial gag response (I know I had one), you will realize that something big is going on, and if you are searching for health, this is an area to investigate. There are numerous reports and double blind studies which go back to the turn of the century supporting the efficacy of using urine for health”. This quote refers to a treatment that I, and probably most readers of this blog, find truly amazing – even in the realm of alternative medicine, we do not often come across a therapy as bizarre as this one: urine therapy.
Urine therapy enthusiasts claim that your own urine administered either externally, internally or both, has a long history of use, that most medical cultures have usefully employed it, that many VIPs swear by it, that it can cure almost all diseases and that it can save lives. What was new to me is the claim that it is supported by numerous double-blind studies.
Such trials would, of course, be entirely feasible; all you need to do is to give one group of patients the experimental treatment, while the other takes a placebo. Recruitment might be a bit of a problem, and the ethics committee might raise one or two eyebrows but, in theory, it certainly seems doable. So where are the “numerous” studies?
A quick, rough-and-ready Medline search found several unfortunate authors with the last name “URINE”, yet no clinical trials of urine therapy emerged. A somewhat more time-consuming search through my books on alternative medicine revealed nothing that remotely resembled evidence. At this point, I arrived at the conclusion that the clinical trials were either non-existent or extremely well hidden. Further searches of the proponents’ literature, websites etc. made me settle for the former explanation.
All this could be entirely irrelevant, perhaps even slightly amusing, did it not reveal a pattern that is painfully common in alternative medicine: anyone can claim anything without fear of any type of retribution; gullible consumers are attracted by the exotic flair, the VIP endorsements, the long history of use and so on, and follow in droves [yes, amazingly, urine therapy seems to have plenty of followers]; and, consequently, lives are put at risk whenever someone starts truly believing the bogus, irresponsible claims that are being made.
I do apologise for the rudeness of my words but I really do think THEY ARE TAKING THE PISS!
In my last post, we discussed the “A+B versus B” trial design as a tool to produce false positive results. This method is currently very popular in alternative medicine, yet it is by no means the only approach that can mislead us. Today, let’s look at other popular options with a view to protecting ourselves against trialists who might naively or willfully fool us.
The crucial flaw of the “A+B versus B” design is that it fails to account for non-specific effects. If the patients in the experimental group experience better outcomes than those in the control group, this difference could well be due to effects that are unrelated to the experimental treatment. There are, of course, several further ways to ignore non-specific effects in clinical research. The simplest option is to include no control group at all. Homeopaths, for instance, are very proud of studies which show that ~70% of their patients experience benefit after taking their remedies. This type of result tends to impress journalists, politicians and other people who fail to realise that such a result might be due to a host of factors, e.g. the placebo-effect, the natural history of the disease, regression towards the mean or treatments which patients self-administered while taking the homeopathic remedies. It is therefore misleading to make causal inferences from such data.
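To see how far these non-specific factors can carry an uncontrolled study, here is a minimal simulation in Python; every number in it is hypothetical, but the logic is general: patients enrol on a bad day, the “treatment” does nothing at all, and regression towards the mean plus a little natural improvement still produce an impressive response rate.

```python
# Minimal sketch, hypothetical numbers: an uncontrolled before/after study
# of a treatment with ZERO effect. Patients enrol when their symptom score
# is unusually high, so most of them "improve" anyway.
import random

random.seed(1)

def observed_score(true_severity):
    """Observed symptom score = stable severity + day-to-day fluctuation."""
    return true_severity + random.gauss(0, 2)

improved, enrolled = 0, 0
while enrolled < 1000:
    severity = random.gauss(10, 2)            # patient's underlying severity
    baseline = observed_score(severity)
    if baseline < 12:
        continue                              # patients seek help on a bad day
    enrolled += 1
    follow_up = observed_score(severity - 1)  # mild natural improvement, no treatment
    if follow_up < baseline:
        improved += 1

print(f"{100 * improved / enrolled:.0f}% 'responded' to a treatment that does nothing")
```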
Another easy method to generate false positive results is to omit blinding. The purpose of blinding the patient, the therapist and the evaluator of the outcomes in clinical trials is to make sure that expectation is not the cause of, or a contributor to, the outcome. They say that expectation can move mountains; this might be an exaggeration, but it can certainly influence the result of a clinical trial. Patients who hope for a cure regularly do get better even if the therapy they receive is useless, and therapists as well as evaluators of the outcomes tend to view the results through rose-tinted spectacles if they have preconceived ideas about the experimental treatment. Similarly, the parents of a child or the owners of an animal can transfer their expectations, and this is one of several reasons why it is incorrect to claim that children and animals are immune to placebo-effects.
Failure to randomise is another source of bias which can make an ineffective therapy look like an effective one when tested in a clinical trial. If we allow patients or trialists to select or choose which patients receive the experimental and which get the control-treatment, it is likely that the two groups differ in a number of variables. Some of these variables might, in turn, impact on the outcome. If, for instance, doctors allocate their patients to the experimental and control groups, they might select those who will respond to the former and those who don’t to the latter. This may not happen with malicious intent but through intuition or instinct: responsible health care professionals want those patients who, in their experience, have the best chances to benefit from a given treatment to receive that treatment. Only randomisation can, when done properly, make sure we are comparing comparable groups of patients, and non-randomisation is likely to produce misleading findings.
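A similar toy simulation (again, every number is invented) shows how allocation by clinical judgement alone can manufacture a “treatment effect”: both arms receive completely inert care, but the better-prognosis patients tend to end up in the experimental group.

```python
# Minimal sketch, hypothetical numbers: non-random allocation. Neither
# "treatment" does anything, but good-prognosis patients are steered
# towards the experimental arm, which therefore looks superior.
import random

random.seed(3)

experimental, control = [], []
for _ in range(2000):
    prognosis = random.gauss(0, 1)  # patient's prognosis score
    # the clinician allocates good-prognosis patients to the new therapy more often
    if random.random() < (0.7 if prognosis > 0 else 0.3):
        experimental.append(prognosis)
    else:
        control.append(prognosis)

def outcome(prognosis):
    """Outcome depends only on prognosis; the treatments themselves do nothing."""
    return prognosis + random.gauss(0, 1)

exp_outcomes = [outcome(p) for p in experimental]
ctl_outcomes = [outcome(p) for p in control]
print(f"experimental arm mean outcome: {sum(exp_outcomes) / len(exp_outcomes):+.2f}")
print(f"control arm mean outcome:      {sum(ctl_outcomes) / len(ctl_outcomes):+.2f}")
```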
While these options for producing false positives are all too obvious, the next possibility is slightly more intriguing. It refers to studies which do not test whether an experimental treatment is superior to another one (so-called superiority trials), but which attempt to assess whether it is equivalent to a therapy that is generally accepted to be effective. The idea is that, if both treatments produce the same or similarly positive results, both must be effective. For instance, such a study might compare the effects of acupuncture to those of a common pain-killer. Such trials are called equivalence or non-inferiority trials, and they offer a wide range of possibilities for misleading us. If, for example, such a trial does not have enough patients, it might show no difference where, in fact, there is one. Let’s consider a deliberately silly example: someone comes up with the idea to compare antibiotics to acupuncture as treatments of bacterial pneumonia in elderly patients. The researchers recruit 10 patients for each group, and the results reveal that, in one group, 2 patients died, while, in the other, the number was 3. The statistical tests show that the difference of just one patient is not statistically significant, and the authors therefore conclude that acupuncture is just as good for bacterial infections as antibiotics.
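Anyone who wants to check the arithmetic of this deliberately silly example can do so in a few lines (this sketch assumes the scipy library is available):

```python
# Fisher's exact test on the toy pneumonia example: 2 of 10 deaths versus
# 3 of 10 deaths. The resulting p-value is ~1.0; with samples this small,
# "no significant difference" is a foregone conclusion.
from scipy.stats import fisher_exact

#              died  survived
antibiotic  = [2, 8]
acupuncture = [3, 7]

_, p_value = fisher_exact([antibiotic, acupuncture])
print(f"p = {p_value:.2f}")  # far above 0.05: the trial cannot detect anything
```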
Even trickier is the option to under-dose the treatment given to the control group in an equivalence trial. In our hypothetical example, the investigators might subsequently recruit hundreds of patients in an attempt to overcome the criticism of their first study; they then decide to administer a sub-therapeutic dose of the antibiotic in the control group. The results would then apparently confirm the researchers’ initial finding, namely that acupuncture is as good as the antibiotic for pneumonia. Acupuncturists might then claim that their treatment has been proven in a very large randomised clinical trial to be effective for treating this condition, and people who do not happen to know the correct dose of the antibiotic could easily be fooled into believing them.
Obviously, the results would be more impressive, if the control group in an equivalence trial received a therapy which is not just ineffective but actually harmful. In such a scenario, the most useless or even slightly detrimental treatment would appear to be effective simply because it is equivalent to or less harmful than the comparator.
A variation of this theme is the plethora of controlled clinical trials which compare one unproven therapy to another unproven treatment. Predictably, the results indicate that there is no difference in the clinical outcome experienced by the patients in the two groups. Enthusiastic researchers then tend to conclude that this proves both treatments to be equally effective.
Another option for creating misleadingly positive findings is to cherry-pick the results. Most trials have many outcome measures; for instance, a study of acupuncture for pain-control might quantify pain in half a dozen different ways. It might also measure the length of time until the pain has subsided, the amount of medication the patients took in addition to receiving acupuncture, the days off work because of pain, the partner’s impression of the patient’s health status, the patient’s quality of life, the frequency of sleep being disrupted by pain, etc. If the researchers then evaluate all the results, they are likely to find that one or two of them have changed in the direction they wanted. This can well be a chance finding: with the typical statistical tests, one in 20 outcome measures will produce a significant result purely by chance. In order to mislead us, the researchers only need to “forget” about all the negative results and focus their publication on the ones which, by chance, have come out as they had hoped.
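The “one in 20” figure is easy to verify with a short simulation (all numbers hypothetical): give a useless therapy 20 independent outcome measures, and most trials will deliver at least one “positive” finding to publish.

```python
# Minimal sketch of outcome cherry-picking: a null therapy is tested on 20
# independent outcomes at the 5% significance level. Expected fraction of
# trials with at least one "hit": 1 - 0.95**20, roughly 64%.
import random

random.seed(0)

def one_outcome(n=30):
    """Crude z-test comparing two groups that received identical (null) care."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / (2 / n) ** 0.5
    return abs(z) > 1.96  # "significant" at p < 0.05

trials = 2000
lucky = sum(any(one_outcome() for _ in range(20)) for _ in range(trials))
print(f"{100 * lucky / trials:.0f}% of null trials find at least one 'significant' outcome")
```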
One foolproof method for misleading the public is to draw conclusions which are not supported by the data. Imagine you have generated squarely negative data with a trial of homeopathy. As an enthusiast of homeopathy, you are far from happy with your own findings; in addition, you might have a sponsor who puts pressure on you. What can you do? The solution is simple: you only need to highlight at least one positive message in the published article. In the case of homeopathy, you could, for instance, make a major issue of the fact that the treatment was remarkably safe and cheap: not a single patient died, and most were very pleased with the treatment, which was not even very expensive.
And finally, there is always the possibility of overt cheating. Researchers are only human and are thus not immune to temptation. They may have conflicts of interest or may know that positive results are much easier to publish than negative ones. Certainly they want to publish their work – “publish or perish”! So, faced with disappointing results of a study, they might decide to prettify them or even invent new ones which are more pleasing to them, their peers, or their sponsors.
Am I claiming that this sort of thing only happens in alternative medicine? No! Obviously, the way to minimise the risk of such misconduct is to train researchers properly and make sure they are able to think critically. Am I suggesting that investigators of alternative medicine are often not well-trained and almost always uncritical? Yes.
Would it not be nice to have a world where everything is positive? No negative findings ever! A dream? No, it’s not a dream; it is reality, albeit a reality that exists mostly in the narrow realm of alternative medicine research. Quite a while ago, we demonstrated that journals of alternative medicine never publish negative results. Meanwhile, my colleagues investigating acupuncture, homeopathy, chiropractic etc. seem to have perfected their strategy of avoiding the embarrassment of a negative finding.
For several years now, researchers in this field have been using a study-design which is virtually guaranteed to generate nothing but positive results. It is being employed widely by enthusiasts of placebo-therapies, and it is easy to understand why: it allows them to conduct seemingly rigorous trials which can impress decision-makers and which invariably suggest that even the most useless treatment works wonders.
One of the latest examples of this type of approach is a trial where acupuncture was tested as a treatment of cancer-related fatigue. Most cancer patients suffer from this symptom which can seriously reduce their quality of life. Unfortunately there is little conventional oncologists can do about it, and therefore alternative practitioners have a field-day claiming that their interventions are effective. It goes without saying that desperate cancer victims fall for this.
In this new study, cancer patients who were suffering from fatigue were randomised to receive usual care or usual care plus regular acupuncture. The researchers then monitored the patients’ experience of fatigue and found that the acupuncture group did better than the control group. The effect was statistically significant, and an editorial in the journal where it was published called this evidence “compelling”.
Thanks to a cleverly overstated press release, the news spread fast, and the study was celebrated worldwide as a major breakthrough in cancer-care. Finally, most commentators felt, research had identified an effective therapy for this debilitating symptom which affects so many of the most desperate patients. Few people seemed to realise that this trial tells us next to nothing about what effects acupuncture really has on cancer-related fatigue.
In order to understand my concern, we need to look at the trial-design a little closer. Imagine you have an amount of money A and your friend owns the same sum plus another amount B. Who has more money? Simple, it is, of course, your friend: A+B will always be more than A [unless B is a negative amount]. For the same reason, such “pragmatic” trials will always generate positive results [unless the treatment in question does actual harm]. Treatment as usual plus acupuncture is more than treatment as usual, and the former is therefore more than likely to produce a better result. This will be true even if acupuncture is no more than a placebo – after all, a placebo is more than nothing, and the placebo effect will impact on the outcome, particularly if we are dealing with a highly subjective symptom such as fatigue.
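If you prefer numbers to analogies, here is a toy simulation of the design (everything about it is invented): both groups get the same usual care, the “acupuncture” group additionally receives nothing but a modest placebo response, and the A+B arm wins every time.

```python
# Minimal sketch, hypothetical numbers: the "A+B versus B" design. The
# add-on is completely inert; a small placebo response on a subjective
# outcome is enough to make A+B reliably beat B alone.
import random

random.seed(42)

def fatigue_improvement(placebo_boost=0.0):
    usual_care = random.gauss(1.0, 1.0)  # improvement from usual care alone
    return usual_care + placebo_boost

n = 100
b_only   = [fatigue_improvement() for _ in range(n)]
a_plus_b = [fatigue_improvement(placebo_boost=0.5) for _ in range(n)]

print(f"usual care alone:          {sum(b_only) / n:.2f}")
print(f"usual care + inert add-on: {sum(a_plus_b) / n:.2f}")  # reliably higher
```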
I can be fairly confident that this is more than a theory because, some time ago, we analysed all acupuncture studies with such an “A+B versus B” design. Our hypothesis was that none of these trials would generate a negative result. I probably do not need to tell you that our hypothesis was confirmed by the findings of our analysis. Theory and fact are in perfect harmony.
You might say that the above-mentioned acupuncture trial does still provide important information. Its authors certainly think so and firmly conclude that “acupuncture is an effective intervention for managing the symptom of cancer-related fatigue and improving patients’ quality of life”. Authors of similarly designed trials will most likely arrive at similar conclusions. But, if they are true, they must be important!
Are they true? Such studies appear to be rigorous – e.g. they are randomised – and thus can fool a lot of people, but they do not allow conclusions about cause and effect; in other words, they fail to show that the therapy in question has led to the observed result.
Acupuncture might be utterly ineffective as a treatment of cancer-related fatigue, and the observed outcome might be due to the extra care, to a placebo-response or to other non-specific effects. And this is much more than a theoretical concern: rolling out acupuncture across all oncology centres at high cost to us all might be entirely the wrong solution. Providing good care and warm sympathy could be much more effective as well as less expensive. Adopting acupuncture on a grand scale would also stop us looking for a treatment that is truly effective beyond a placebo – and that surely would not be in the best interest of the patient.
I have seen far too many of those bogus studies to have much patience left. They do not represent an honest test of anything, simply because we know their result even before the trial has started. They are not science but thinly disguised promotion. They are not just a waste of money, they are dangerous – because they produce misleading results – and they are thus also unethical.
Even though I have not yet posted a single article on this subject, it has already proved to be most controversial in the comments section. A new analysis of the evidence has just been published and, in view of the news just out of a Royal Charter for the UK College of Chiropractors, it is time to dedicate some real attention to this important issue.
The analysis comes in the form of a systematic review authored by an international team of chiropractors (so we need not fear that the authors have an “anti-chiro” bias). Their declared aim was “to determine whether conclusive evidence of a strong association [between neck manipulation and vascular accidents] exists”. The authors make it clear that they only considered case-control studies and omitted all other articles.
They found 4 such publications, all of which had methodological limitations. Two studies were of acceptable quality, and one of these seemed to show an association between neck manipulation and stroke, while the other one did not. The authors’ conclusion is ambivalent: “Conclusive evidence is lacking for a strong association between neck manipulation and stroke, but it is also lacking for no association”.
The 4 case-control studies, with their strengths and weaknesses, are, of course, well known and have been discussed several times before. It was also known that the totality of these data fails to provide a clear picture. I would therefore argue that, in such a situation, we need to include further evidence in an attempt to advance the discussion.
Generally speaking, whenever we assess therapeutic safety, we must not ignore case-reports. A single case-report might be next to meaningless, but collectively such reports can provide strong indicators of risk. In drug research, for instance, they send invaluable signals about potential problems, and many drugs have been withdrawn from the market purely on the basis of case-reports. If we include case-reports in an analysis of the risks of neck manipulations, the evidence generated by the existing case-control studies appears in a very different light. There are hundreds of documented cases where neck manipulations have seriously injured patients, and many have suffered permanent neurological deficits or worse. Whenever causation is validated by experts who are not chiropractors, and thus not burdened with a professional bias, investigators find that most of the criteria for a causal relationship are fulfilled.
While the omission of case-reports from the new review is regrettable, I find many of the statements of the authors helpful and commendable, particularly considering that they are chiropractors. They seem to be aware that, when there is genuine uncertainty, we ought to err on the safe side [the precautionary principle]. Crucially, they comment on the practical implications of our existing knowledge: “Considering this uncertainty, informed consent is warranted for cervical spinal manipulative therapy that advises patients of a possible increase in the risk of a rare form of stroke…” A little later, in their discussion, they write: “As the possibility of an association between cervical spinal manipulative therapy and vascular accidents cannot be ruled out, practitioners of cervical spinal manipulative therapy are obliged to take all reasonable steps that aim to minimise the potential risk of stroke. There is evidence that cervical rotation places greater stresses on vertebral arteries than other movements such as lateral flexion, and so it would seem wise to avoid techniques that involve full rotation of the head.”
At this point it is, I think, important to note that UK chiropractors tend not to obtain informed consent from their patients. This is, of course, a grave breach of medical ethics. It becomes even graver, when we consider that the GCC seems to do nothing about it, even though it has been known for many years.
Is this profession really worthy of a Royal Charter? This and the other questions raised here require some serious consideration and discussion which, no doubt, will follow this short post.
Boiron, the world’s largest manufacturer of homeopathic products, has recently been in the headlines more often and less favourably than their PR-team may have hoped for. Now they have added to this attention by publishing a large and seemingly impressive multi-national study of homeopathy.
Its objective was “to evaluate the effectiveness of homeopathic medicine for the prevention and treatment of migraine in children”. For this purpose, the researchers recruited 59 homeopaths from 12 countries who enrolled in the study a total of 168 children with “definite or probable” migraine. The homeopaths had complete freedom to individualise their treatments according to the distinct characteristics of their patients.
The primary study-endpoints were the frequency, severity and duration of migraine attacks during 3 months of homeopathic treatment compared to the 3 months prior to that period. The secondary outcome measure was the number of days off school. The results were fairly clear-cut and demonstrated that all of these variables improved during the period of homeopathic care.
This study is remarkable but possibly not in the way Boiron intended it to be. The first thing to notice is that each homeopath in this study treated barely 3 patients. I wonder why anyone would go to the trouble of setting up a multi-national trial with dozens of homeopaths from around the globe when, in the end, the total sample size is no higher than that achievable in one single well-organised, one-centre study. A multitude of countries, cultures and homeopaths is an asset for a study only if justified by the recruitment of a large patient sample; otherwise, it is just an unwelcome source of confounding and bias.
But the main concern I have with this study lies elsewhere. Its stated objective was “…to evaluate the effectiveness of homeopathic medicines…” This aim cannot possibly be tackled with a study of this nature. As it stands, this study merely investigated what happens in 3 months while children receive 3 months of homeopathic care. The observed findings are not necessarily due to the homeopathic medicines; they might be due to the passage of time, the tender loving care received by their homeopaths, the expectation of the homeopaths and/or the parents, a regression towards the mean, the natural history of the (in some cases only “probable”) migraine, any concomitant treatments administered during the 3 months, a change in life-style, a placebo-effect, a Hawthorne-effect, or the many other factors that I have not thought of.
To put the result of the Boiron-researchers into the right context, we should perhaps remember that even the most outspoken promoters of homeopathy on the planet concluded from an evaluation of the evidence that homeopathy is ineffective as a treatment of migraine. Therefore it seems surprising to publish the opposite result on the basis of such flimsy evidence made to look impressive by its multi-national nature.
I have been accused of going out of my way to comment on bogus evidence in the realm of homeopathy. If this claim were true, I would not be able to do much else. Debunking flawed homeopathy studies is not what I aim for or spend my time on. Yet this study, I thought, does deserve a brief comment.
Why? Because it has exemplary flaws, because it reflects on homeopathy as a whole as well as on the journal it was published in (the top publication in this field), because it is Boiron-authored, because it produced an obviously misleading result, because it could lead many migraine-sufferers up the garden path and – let’s be honest – because Dana Ullman will start foaming at the mouth again, thus proving to the world that homeopathy is ineffective against acute anger and anguish.
Joking apart, the Boiron-authors conclude that “the results of this study demonstrate the interest of homeopathic medicines for this prevention and treatment of migraine attacks in children”. This is an utterly bizarre statement, as it does not follow from the study’s data at all.
But what can possibly be concluded from this article that is relevant to anyone? I did think hard about this question, and here is my considered answer: nothing (other than perhaps the suspicion that homeopathy-research is in a dire state).
How do you fancy playing a little game? Close your eyes, relax, take a minute or two and imagine the newspaper headlines which new medical discoveries might make within the next 100 years or so. I know, this is a slightly silly and far from serious game but, I promise, it’s quite good fun.
Personally, I see the following headlines emerging in front of my eyes:
MEASLES ERADICATED
VACCINATION AGAINST AIDS READY FOR ROUTINE USE
IDENTIFICATION OF THE CAUSE OF DEMENTIA LEADS TO FIRST EFFECTIVE CURE
GENE-THERAPY BEGINS TO SAVE LIVES IN EVERY DAY PRACTICE
CANCER, A NON-FATAL DISEASE
HEALTHY AGEING BECOMES REALITY
Yes, I know this is nothing but naïve conjecture mixed with wishful thinking, and there is hardly anything truly surprising in my list.
But, hold on, is it not remarkable that I visualise considerable advances in conventional healthcare but no similarly spectacular headlines relating to alternative medicine? After all, alternative medicine is my area of expertise. Why do I not see the following announcements?
YET ANOTHER HOMEOPATH WINS THE NOBEL PRIZE
CHIROPRACTIC SUBLUXATION CONFIRMED AS THE SOLE CAUSE OF MANY DISEASES
CHRONICALLY ILL PATIENTS CAN RELY ON BACH FLOWER REMEDIES
CHINESE HERBS CURE PROSTATE CANCER
ACUPUNCTURE MAKES PAIN-KILLERS OBSOLETE
ROYAL DETOX-TINCTURE PROLONGS LIFE
CRANIOSACRAL THERAPY PROVEN EFFECTIVE FOR CEREBRAL PALSY
IRIDOLOGY, A VALID DIAGNOSTIC TEST
How can I be so confident that such headlines about alternative medicine will not, one day, become reality?
Simple: because I only need to study the past and realise which breakthroughs have occurred within the previous 100 years. Mainstream scientists and doctors have discovered insulin therapy, which turned diabetes from a death sentence into a chronic disease; they have developed antibiotics which saved millions of lives; they have created vaccines against deadly infections; they have invented diagnostic techniques that made early treatment of many life-threatening conditions possible; etc., etc.
None of the many landmarks in the history of medicine has ever been in the realm of alternative medicine.
“What about herbal medicine?” some might ask. Aspirin, vincristine, taxol and other drugs originated from the plant kingdom, and I am sure there will be similar success-stories in the future.
But were these truly developments driven by traditional herbalists? No! They were discoveries entirely based on systematic research and rigorous science.
Progress in healthcare will not come from clinging to a dogma, nor from adhering to yesterday’s implausibilities, nor from claiming that clinical experience is more important than scientific research.
I am not saying, of course, that all of alternative medicine is useless. I am saying, however, that it is time to get realistic about what alternative treatments can do and what they cannot achieve. They will not save many lives, for instance; an alternative cure for anything is a contradiction in terms. The strength of some alternative therapies lies in palliative and supportive care, not in changing the natural history of diseases.
Yet proponents of alternative medicine tend to ignore this all too obvious fact and go way beyond the line that divides responsible from irresponsible behaviour. The result is a plethora of bogus claims – and this is clearly not right. It raises false hopes which, in a nutshell, are always unethical and often cruel.
Since it was first published, the “Swiss government report” on homeopathy has been celebrated as the most convincing proof so far that homeopathy works. On the back of this news, all sorts of strange stories have emerged. Their aim seems to be to convince consumers that homeopathy is based on compelling evidence.
Readers of this blog might therefore benefit from a brief and critical evaluation of this “evidence” in support of homeopathy. Recently, not one, two, three but four independent critiques of this document have become available.
Collectively, these articles [only one of which is mine] suggest that the “Swiss report” is hardly worth the paper it was written on; one of the critiques published in the Swiss Medical Weekly even stated that it amounted to “research misconduct”! Compared to such outspoken language, my own paper concluded much more conservatively: “this report [is] methodologically flawed, inaccurate and biased”.
So what is wrong with it? Why is this document not an accurate summary of the existing evidence? I said this would be a brief post, so I will only mention some of the most striking flaws.
The report is not, as often claimed, a product of the Swiss government; in fact, it was produced by 13 authors who have no connection to any government and who are known proponents of homeopathy. For some unimaginable reason, they decided to invent their very own criteria for what constitutes evidence. For instance, they included case-reports and case-series, re-defined what is meant by effectiveness, were highly selective in choosing the articles they happened to like [presumably because of the direction of the result] while omitting lots of data that did not seem to confirm their prior belief, and assessed only a very narrow range of indications.
The report quotes several of my own reviews of homeopathy but, intriguingly, it omitted others for no conceivable reason. I was baffled to realise that the authors reported my conclusions differently from the original published text in my articles. If this had occurred once or twice, it might have been a forgivable error – but this happened in 10 of 22 instances.
Negative conclusions in my original reviews were thus repeatedly turned into positive verdicts, and evidence against homeopathy suddenly appeared to support it. This is, of course, a serious problem: if someone is too busy to look up my original articles, she is very unlikely to notice this extraordinary attempt to cheat.
To me, this approach seems similar to that of an accountant who produces a balance sheet where debts appear as credits. It is a simple yet dishonest way to generate a positive result where there is none!
The final straw for me came when I realised that the authors of this dubious report had declared that they were free of conflicts of interest. This notion is demonstrably wrong; several of them earn their living through homeopathy!
Knowing all this, sceptics might take any future praise of this “Swiss government report” with more than just a pinch of salt. Once we are aware of the full, embarrassing details, it is not difficult to understand how the final verdict turned out to be in favour of homeopathy: if we convert much of the negative data on any subject into positive evidence, any rubbish will come out smelling of roses – even homeopathy.
What is and what isn’t evidence, and why is the distinction important?
In the area of alternative medicine, we tend to engage in seemingly endless discussions around the subject of evidence; the relatively few comments on this new blog already confirm this impression. Many practitioners claim that their very own clinical experience is at least as important and generalizable as scientific evidence. It is therefore relevant to analyse in a little more detail some of the issues related to evidence as they apply to the efficacy of alternative therapies.
To prevent the debate from instantly deteriorating into a dispute about the value of this or that specific treatment, I will abstain from mentioning any alternative therapy by name and urge all commentators to do the same. The discussion on this post should not be about the value of homeopathy or any other alternative treatment; it is about more fundamental issues which, in my view, often get confused in the usually heated arguments for or against a specific alternative treatment.
My aim here is to outline the issues more fully than would be possible in the comments section of this blog. Readers and commentators can subsequently be referred to this post whenever appropriate. My hope is that, in this way, we might avoid repeating the same arguments ad nauseam.
Clinical experience is notoriously unreliable
Clinicians often feel quite strongly that their daily experience holds important information about the efficacy of their interventions. In this assumption, alternative practitioners are usually entirely united with healthcare professionals working in conventional medicine.
When their patients get better, they assume this to be the result of their treatment, especially if the experience is repeated over and over again. As an ex-clinician, I do sympathise with this notion which might even prevent practitioners from losing faith in their own work. But is the assumption really correct?
The short answer is NO. Two events [the treatment and the improvement] that follow each other in time are not necessarily causally related; we all know that, of course. So, we ought to consider alternative explanations for a patient’s improvement after therapy.
Even the most superficial scan of the possibilities discloses several options: the natural history of the condition, regression towards the mean, the placebo-effect, concomitant treatments, social desirability to name but a few. These and other phenomena can contribute to or determine the clinical outcome such that inefficacious treatments appear to be efficacious.
What follows is simple, undeniable and plausible for scientists, yet intensely counter-intuitive for clinicians: the prescribed treatment is only one of many influences on the clinical outcome. Thus even the most impressive clinical experience of the perceived efficacy of a treatment can be totally misleading. In fact, experience might just reflect the fact that we repeat the same mistake over and over again. Put differently, the plural of anecdote is anecdotes, not evidence!
Clinicians tend to get quite miffed when anyone tries to explain to them how multifactorial the situation really is and how little their much-treasured experience tells us about therapeutic efficacy. Here are seven of the counter-arguments I hear most frequently:
1) The improvement was so direct and prompt that it was obviously caused by my treatment [this notion is not very convincing; placebo-effects can be just as prompt and direct].
2) I have seen it so many times that it cannot be a coincidence [some clinicians are very caring, charismatic, and empathetic; they will thus regularly generate powerful placebo-responses, even when using placebos].
3) A study with several thousand patients shows that 75% of them improved with my treatment [such response rates are not uncommon, even for ineffective treatments, if patient-expectation was high].
4) Surely chronic conditions don’t suddenly get better; my treatment therefore cannot be a placebo [this is incorrect; many chronic conditions do eventually improve, if only temporarily].
5) I had a patient with a serious condition, e.g. cancer, who received my treatment and was cured [if one investigates such cases, one often finds that the patient also took a conventional treatment; or, in rare instances, even cancer-patients show spontaneous remissions].
6) I have tried the treatment myself and had a positive outcome [clinicians are not immune to the multifactorial nature of the perceived clinical response].
7) Even children and animals respond very well to my treatment, surely they are not prone to placebo-effects [animals can be conditioned to respond; and then there is, of course, the natural history of the disease].
Is all this to say that clinical experience is useless? Clearly not! I am merely pointing out that, when it comes to therapeutic efficacy, clinical experience is no replacement for evidence. It is invaluable for a lot of other things, but it can at best provide a hint and never a proof of efficacy.
What then is reliable evidence?
As the clinical outcomes after treatments always have many determinants, we need a different approach for verifying therapeutic efficacy. Essentially, we need to know what would have happened, if our patients had not received the treatment in question.
The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.
Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.
Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The over-riding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.
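As a toy illustration of this principle (with entirely invented effect sizes), the sketch below gives both groups the same natural history and placebo response; only the between-group difference recovers the specific effect of the treatment, while the naive before/after change grossly overstates it.

```python
# Minimal sketch, hypothetical numbers: why the control group matters.
# Both arms share the non-specific influences; only the treated arm gets
# the specific effect. Comparing the arms isolates that specific effect.
import random

random.seed(7)

NATURAL_HISTORY, PLACEBO, SPECIFIC = 1.0, 0.5, 0.3

def improvement(specific=0.0):
    return NATURAL_HISTORY + PLACEBO + specific + random.gauss(0, 0.5)

n = 500
treated = [improvement(specific=SPECIFIC) for _ in range(n)]
controls = [improvement() for _ in range(n)]

naive = sum(treated) / n                       # before/after change: ~1.8
causal = sum(treated) / n - sum(controls) / n  # between-group difference: ~0.3
print(f"naive before/after 'effect': {naive:.2f}")
print(f"controlled estimate:         {causal:.2f}")
```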
Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.
Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their shortcomings, they are far superior to any other method for determining the efficacy of medical interventions.
There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.
Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.
In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.
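For readers curious about the mechanics, here is the core arithmetic of the simplest form of pooling, fixed-effect inverse-variance weighting; the effect sizes below are made up purely for illustration.

```python
# Minimal sketch of fixed-effect, inverse-variance pooling: each study is
# weighted by 1/SE^2, so large precise trials count for more than small
# noisy ones. Effect sizes are invented for illustration.
studies = [        # (effect estimate, standard error)
    (0.40, 0.30),  # small, imprecise trial
    (0.10, 0.10),  # large, precise trial
    (0.25, 0.20),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
```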
Why is evidence important?
In a way, this question has already been answered: only with reliable evidence can we tell with any degree of certainty that it was the treatment per se – and not any of the other factors mentioned above – that caused the clinical outcome we observe in routine practice. Only if we have such evidence can we be sure about cause and effect. And only then can we make sure that patients receive the best possible treatments currently available.
There are, of course, those who say that causality does not matter all that much. What is important, they claim, is to help the patient, and if it was a placebo-effect that did the trick, who cares? However, I know of many reasons why this attitude is deeply misguided. To mention just one: we probably all might agree that the placebo-effect can benefit many patients, yet it would be a fallacy to assume that we need a placebo treatment to generate a placebo-response.
If a clinician administers an efficacious therapy [one that generates benefit beyond placebo] with compassion, time, empathy and understanding, she will generate a placebo-response PLUS a response to the therapy administered. In this case, the patient benefits twice. It follows that, merely administering a placebo is less than optimal; in fact it usually means cheating the patient of the effect of an efficacious therapy.
The frequently voiced counter-argument is that there are many patients who are ill without an exact diagnosis and who therefore cannot receive a specific treatment. This may be true, but even those patients’ symptoms can usually be alleviated with efficacious symptomatic therapy, and I fail to see how the administration of an ineffective treatment might be preferable to using an effective symptomatic therapy.
Conclusion
We all agree that helping the patient is the most important task of a clinician. This task is best achieved by maximising the non-specific effects [e.g. placebo], while also making sure that the patient benefits from the specific effects of what medicine has to offer. If that is our goal in clinical practice, we need reliable evidence as well as experience; one cannot be a substitute for the other, and scientific evidence is an essential precondition for good medicine.