Many cancer patients use some form of complementary and alternative medicine (CAM), mostly as an adjunct to conventional cancer therapies, to improve the symptoms of the disease or to alleviate the side-effects of often harsh cancer treatments. The hope is that this approach leads to less suffering and perhaps even longer survival – but is this really so?
In a recently published study, Korean researchers evaluated whether CAM-use influenced the survival and health-related quality of life (HRQOL) of terminal cancer patients. From July 2005 to October 2006, they prospectively studied a cohort of 481 cancer patients. During a follow-up of 163.8 person-years, they identified 466 deceased patients. Their multivariate analyses of these data showed that, compared with non-users, CAM-users did not have better survival. Using mind-body interventions or prayer was even associated with significantly worse survival. CAM users reported significantly worse cognitive functioning and more fatigue than non-users. In sub-group analyses, users of alternative medical treatments, prayer, vitamin supplements, mushrooms, or rice and cereal reported significantly worse HRQOL. The authors conclude that “CAM did not provide any definite survival benefit, CAM users reported clinically significant worse HRQOLs.”
Most proponents of CAM would find this result counter-intuitive and might think it is a one-off coincidence or a fluke. But, in fact, it is not; similar data have been reported before. For instance, a Norwegian study from 2003 examined the association between CAM-use and cancer survival. Survival data were obtained with a follow-up of 8 years for 515 cancer patients. A total of 112 patients used CAM. During the follow-up period, 350 patients died. Death rates were higher in CAM-users (79%) than in those who did not use CAM (65%). The hazard ratio of death for CAM-use compared with no use was 1.30. The authors of this paper concluded that “use of CAM seems to predict a shorter survival from cancer.”
I imagine that, had the results been the opposite (i.e. showing that CAM-users live longer and have a better quality of life), most CAM-enthusiasts would not have hesitated to claim a cause-effect relationship (i.e. that the result was due to the use of alternative medicine). Critical thinkers, however, are more careful; after all, correlation is not causation! So, how can these findings be explained?
There are, of course, several possibilities, for example:
1) Some patients might use ineffective alternative therapies instead of effective cancer treatments thus shortening their life and reducing their quality of life.
2) Other patients might employ alternative treatments which cause direct harm; for this, there are numerous options; for instance, if they self-medicate with St John’s Wort, they would decrease the effectiveness of many mainstream medications, including some cancer drugs.
3) Patients who elect to use alternative medicine as an adjunct to their conventional cancer treatment might, on average, be sicker than those who steer clear of alternative medicine.
The available data do not allow us to say which explanation applies. But things are rarely black or white, and I would not be surprised if a complex combination of all three possibilities came closest to the truth.
Reiki is a form of healing which rests on the assumption that some form of “energy” determines our health. In this context, I tend to put energy in inverted commas because it is not the energy a physicist might have in mind. It is a much more mystical entity, a form of vitality that is supposed to be essential for life and to keep us going. Nobody has been able to define or quantify this “energy”; it defies scientific measurement and is biologically implausible. These circumstances render Reiki one of the least plausible therapies in the tool kit of alternative medicine.
Reiki-healers (they prefer to be called “masters”) claim to channel “energy” into their patients which, in turn, is thought to stimulate the healing process of whatever condition is being treated. In the eyes of those who believe in this sort of thing, Reiki is therefore a true panacea: it can heal everything.
The clinical evidence for or against Reiki is fairly clear – as one would expect after realising how ‘far out’ its underlying concepts are. Numerous studies are available, but most are of very poor quality. Their results tend to suggest that patients experience benefit after having Reiki but they rarely exclude the possibility that this is due to placebo or other non-specific effects. Those that are rigorous show quite clearly that Reiki is a placebo. Our own review therefore concluded that “the evidence is insufficient to suggest that Reiki is an effective treatment for any condition… the value of Reiki remains unproven.”
Since the publication of our article, a number of new investigations have become available. In a brand-new study, for instance, the researchers wanted to explore a Reiki therapy-training program for the caregivers of paediatric patients. A series of Reiki training classes were offered by a Reiki master. At the completion of the program, interviews were conducted to elicit participants’ feedback regarding its effectiveness.
Seventeen families agreed to participate and 65% of them attended three Reiki training sessions. They reported that Reiki had benefited their child by improving their comfort (76%), providing relaxation (88%) and pain relief (41%). All caregivers thought that becoming an active participant in their child’s care was a major gain. The authors of this investigation conclude that “a hospital-based Reiki training program for caregivers of hospitalized pediatric patients is feasible and can positively impact patients and their families. More rigorous research regarding the benefits of Reiki in the pediatric population is needed.”
Trials like this one abound in the parallel world of “energy” medicine. In my view, such investigations do untold damage: they convince uncritical thinkers that “energy” healing is a rational and effective approach – so much so that even the military is beginning to use it.
The flaws in trials such as the one above are too obvious to mention. Like most studies in this area, this new investigation proves nothing except the fact that poor-quality research will mislead those who believe in its findings.
Some might say, so what? If a patient experiences benefit from a bogus yet harmless therapy, why not? I would strongly disagree with this increasingly popular view. Reiki and similarly bizarre forms of “energy” healing are well capable of causing harm.
Some fanatics might use these placebo-treatments as a true alternative to effective therapies. This would mean that the condition at hand remains untreated which, in a worst-case scenario, might even lead to the death of patients. More important, in my view, is an entirely different risk: making people believe in mystic “energies” undermines rationality in a much more general sense. If this happens, the harm to society would be incalculable and would extend far beyond health care.
There probably is no area in health care that produces more surveys than alternative medicine. I estimate that about 500 surveys are published every year; this amounts to about two every working day which is substantially more than the number of clinical trials in this field.
I have long been critical of this ‘survey-mania’. The reason is simple: most of these articles are of such poor quality that they tell us nothing of value.
The vast majority of these surveys attempt to evaluate the prevalence of use of alternative medicine, and it is this type of investigation that I intend to discuss here.
For a typical prevalence survey, a team of enthusiastic researchers might put together a few questions and design a questionnaire to find out what percentage of a group of individuals have tried alternative medicine in the past. Subsequently, the investigators might get one or two hundred responses. They then calculate simple descriptive statistics and demonstrate that xy% (let’s assume it is 45%) use alternative medicine. This finding eventually gets published in one of the many alternative medicine journals, and everyone is happy – well, almost everyone.
How can I be such a spoil-sport and claim that this result tells us nothing of value? At the very minimum, some might argue, it shows that enthusiasts of alternative medicine are interested in and capable of conducting research. I beg to differ: this is not research, it is pseudo-research which ignores most of the principles of survey-design.
The typical alternative medicine prevalence survey has none of the features that would render it a scientific investigation:
1) It lacks an accepted definition of what is being surveyed. There is no generally accepted definition of alternative medicine, and even if the researchers address specific therapies, they run into huge problems. Take prayer, for instance – some see this as alternative medicine, while others would, of course, argue that it is a religious pursuit. Or take herbal medicine – many consumers confuse it with homeopathy, some might think that drinking tea is herbal medicine, while others would probably disagree.
2) The questionnaires used for such surveys are almost never validated. Essentially, this means that we cannot be sure they evaluate what we think they evaluate. We all know that the way we formulate a question can determine the answer. There are many potential sources of bias here, and they are rarely taken into consideration.
3) Enthusiastic researchers of alternative medicine usually use a small convenience sample of participants for their surveys. This means they ask a few people who happen to be around to fill in their questionnaire. As a consequence, there is no way the survey is representative of the population in question.
4) The typical survey has a low response rate; sometimes the response rate is not even provided or remains unknown even to the investigators. This means we do not know how the majority of patients/consumers who received but did not return the questionnaire would have answered. Often there is good reason to suspect that those who hold a certain attitude did respond, while those with a different opinion did not. This self-selection process is likely to produce misleading findings, as the little simulation below illustrates.
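Here is a minimal sketch (in Python; all the numbers are my own illustrative assumptions, not data from any real survey) of how differential response rates alone can turn a modest true prevalence into an impressive headline figure:

```python
# A toy simulation of survey self-selection. All rates are illustrative
# assumptions: CAM users are keener to return the questionnaire than non-users.
import random

random.seed(1)

TRUE_PREVALENCE = 0.20      # assumed true share of CAM users in the population
RESPONSE_IF_USER = 0.60     # assumed response rate among users
RESPONSE_IF_NONUSER = 0.20  # assumed response rate among non-users

population = [random.random() < TRUE_PREVALENCE for _ in range(100_000)]

responses = [
    is_user
    for is_user in population
    if random.random() < (RESPONSE_IF_USER if is_user else RESPONSE_IF_NONUSER)
]

observed = sum(responses) / len(responses)
print(f"true prevalence:     {TRUE_PREVALENCE:.0%}")  # 20%
print(f"observed prevalence: {observed:.0%}")         # roughly 43%
```

Under these assumptions, a true prevalence of 20% gets reported as roughly 43% – without a single dishonest answer being given.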
And why am I so sure about all of these limitations? To my embarrassment, I know about them not least because I have made most of these mistakes myself at some time in my career. You might also ask why this is important: what’s the harm in publishing a few flimsy surveys?
In my view, these investigations are regrettably counter-productive because:
they tend to grossly over-estimate the popularity of alternative medicine,
they distract money, manpower and attention from the truly important research questions in this field,
they give a false impression of a buoyant research activity,
and their results are constantly misused.
The last point is probably the most important one. The argument that is all too often spun around such survey data goes roughly as follows: a large percentage of the population uses alternative medicine; people pay out of their own pocket for these treatments; they are satisfied with them (if not, they would not pay for them). BUT THIS IS GROSSLY UNFAIR! Why should only those individuals who are rich enough to afford alternative medicine benefit from it? ALTERNATIVE MEDICINE SHOULD BE MADE AVAILABLE FOR ALL.
I rest my case.
Clinical trials of acupuncture can be quite challenging. In particular, it is often difficult to make sure that any observed outcome is truly due to the treatment and not caused by some other factor(s). A recently published study shows just how tricky this can be.
A new RCT has all (well, almost all) the features of a rigorous study. It tested the effects of acupuncture in patients suffering from hay fever. The German investigators recruited 46 specialized physicians in 6 hospital clinics and 32 private outpatient clinics. In total, 422 patients with IgE sensitization to birch and grass pollen were randomized into three groups: 1) acupuncture plus rescue medication (RM) (n = 212), 2) sham acupuncture plus RM (n = 102), or 3) RM alone (n = 108). Twelve acupuncture sessions were provided in groups 1 and 2 over 8 weeks. The outcome measures included changes in the Rhinitis Quality of Life Questionnaire (RQLQ) overall score and the rescue medication score (RMS) from baseline to weeks 7, 8 and 16 in the first year as well as week 8 in the second year after randomization.
Compared with sham acupuncture and with RM, acupuncture was associated with improvement in RQLQ score and RMS. There were no differences after 16 weeks in the first year. After the 8-week follow-up phase in the second year, small improvements favoring real acupuncture over sham were noted.
Based on these results, the authors concluded that “acupuncture led to statistically significant improvements in disease-specific quality of life and antihistamine use measures after 8 weeks of treatment compared with sham acupuncture and with RM alone, but the improvements may not be clinically significant.”
The popular media were full of claims that this study proves the efficacy of acupuncture. However, I fear that this conclusion is hopelessly over-optimistic.
It might not have been the acupuncture itself that led to the observed improvements; they could well have been caused by several factors unrelated to the treatment itself. To understand my concern, we need to look closer at the actual interventions employed by the investigators.
The real acupuncture was done on acupuncture points thought to be indicated for hay fever. The needling was performed as one would normally do it, and the acupuncturists were asked to treat the patients in group 1 in such a way that they were likely to experience the famous ‘de-qi’ feeling.
The sham acupuncture, by contrast, was performed on non-acupuncture points; acupuncturists were asked to use shallow needling only and they were instructed to try not to produce ‘de-qi’.
This means that the following factors in combination or alone could have caused [and in my view probably did cause] the observed differences in outcomes between the acupuncture and the sham group:
1) verbal or non-verbal communication between the acupuncturists and the patient [previous trials have shown this factor to be of crucial importance]
2) the visibly less deep needling in the sham-group
3) the lack of ‘de-qi’ experience in the sham-group.
Sham-treatments in clinical trials serve the purpose of a placebo. They are thus meant to be indistinguishable from the verum. If that is not the case [as in the present study], the trial cannot be accepted as being patient-blind. If a trial is not patient-blind, the expectations of patients will most certainly influence the results.
Therefore I believe that the marginal differences noted in this study were not due to the effects of acupuncture per se, but were an artifact caused by the de-blinding of the patients. De facto, neither the patients nor the acupuncturists were blinded in this study.
If that is true, the effects were not just not clinically relevant, as noted by the authors, they also had nothing to do with acupuncture. In other words, acupuncture is not of proven efficacy for this condition – a verdict which is also supported by our systematic review of the subject which concluded that “the evidence for the effectiveness of acupuncture for the symptomatic treatment or prevention of allergic rhinitis is mixed. The results for seasonal allergic rhinitis failed to show specific effects of acupuncture…”
Once again, we have before us a study which looks impressive at first glance. At closer scrutiny, we find, however, that it had important design flaws which led to false positive results and conclusions. In my view, it would have been the responsibility of the authors to discuss these limitations in full detail and to draw conclusions that take them into account. Moreover, it would have been the duty of the peer-reviewers and journal editors to pick up on these points. Instead the editors even commissioned an accompanying editorial which displays an exemplary lack of critical thinking.
Having failed to do any of this, they are in my opinion all guilty of misleading the world media who reported extensively and often uncritically on this new study thus misleading us all. Sadly, the losers in this bonanza of incompetence are the many hay fever sufferers who will now be trying (and paying for) useless treatments.
Still in the spirit of ACUPUNCTURE AWARENESS WEEK, I have another critical look at a recent paper. If you trust some of the conclusions of this new article, you might think that acupuncture is an evidence-based treatment for coronary heart disease. I think this would be a recipe for disaster.
This condition affects millions and eventually kills a frighteningly large percentage of the population. Essentially, it is caused by the fact that, as we get older, the blood vessels supplying the heart also change, become narrower and get partially or even totally blocked. This causes a lack of oxygen in the heart which causes the pain known as angina pectoris. Angina is a most important warning sign indicating that a full-blown heart attack might not be far away.
The treatment of coronary heart disease consists of trying to let more blood flow through the narrowed coronaries, either by drugs or by surgery. At the same time, one attempts to reduce the oxygen demand of the heart, if possible. Normalisation of risk factors like hypertension and hypercholesterolaemia is a key preventative strategy. It is not immediately clear to me how acupuncture might help in all this - but I have been wrong before!
The new meta-analysis included 16 individual randomised clinical trials. All had a high or moderate risk of bias. Acupuncture combined with conventional drugs (AC+CD) turned out to be superior to conventional drugs alone in reducing the incidence of acute myocardial infarction (AMI). AC+CD was superior to conventional drugs in reducing angina symptoms as well as in improving electrocardiography (ECG). Acupuncture by itself was also superior to conventional drugs for angina symptoms and ECG improvement. AC+CD was superior to conventional drugs in shortening the time to onset of angina relief. However, the time to onset was significantly longer for acupuncture treatment than for conventional treatment alone.
From these results, the authors [who are from the Chengdu University of Traditional Chinese Medicine in Sichuan, China] conclude that “AC+CD reduced the occurrence of AMI, and both acupuncture and AC+CD relieved angina symptoms and improved ECG. However, compared with conventional treatment, acupuncture showed a longer delay before its onset of action. This indicates that acupuncture is not suitable for emergency treatment of heart attack. Owing to the poor quality of the current evidence, the findings of this systematic review need to be verified by more RCTs to enhance statistical power.”
As in the meta-analysis discussed in my previous post, the studies are mostly Chinese, flawed, and not obtainable for an independent assessment. As in the previous article, I fail to see a plausible mechanism by which acupuncture might bring about the effects. This is not just a trivial or coincidental observation – I could cite dozens of systematic reviews for which the same criticism applies.
What is different, however, from the last post on gout is simple and important: if you treat gout with a therapy that is ineffective, you have more pain and eventually might opt for an effective one. If you treat coronary heart disease with a therapy that does not work, you might not have time to change, you might be dead.
Therefore I strongly disagree with the authors of this meta-analysis; “the findings of this systematic review need NOT to be verified by more RCTs to enhance statistical power” — foremost, I think, the findings need to be interpreted with much more caution and re-written. In fact, the findings show quite clearly that there is no good evidence to use acupuncture for coronary heart disease. To pretend otherwise is, in my view, not responsible.
There might be an important lesson here: A SEEMINGLY SLIGHT CORRECTION OF CONCLUSIONS OF SUCH SYSTEMATIC REVIEWS MIGHT SAVE LIVES.
This week is acupuncture awareness week, and I will use this occasion to continue focusing on this therapy. This first-ever event is supported by the British Acupuncture Council who state that it aims to “help better inform people about the ancient practice of traditional acupuncture. With 2.3 million acupuncture treatments carried out each year, acupuncture is one of the most popular complementary therapies practised in the UK today.”
Right, let’s inform people about acupuncture then! Let’s show them that there is often more to acupuncture research than meets the eye.
My team and I have done lots of research into acupuncture and probably published more papers on this than any other subject. We had prominent acupuncturists on board from the UK, Korea, China and Japan, we ran conferences, published books and are proud to have been innovative and productive in our multidisciplinary research. But here I do not intend to dwell on our own achievements, rather I will highlight several important new papers in this area.
Korean authors just published a meta-analysis to assess the effectiveness of acupuncture as therapy for gouty arthritis. Ten RCTs involving 852 gouty arthritis patients were included. Six studies of 512 patients reported a significant decrease in uric acid in the treatment group compared with a control group, while two studies of 120 patients reported no such effect. The remaining four studies of 380 patients reported a significant decrease in pain in the treatment group.
The authors conclude “that acupuncture is efficacious as complementary therapy for gouty arthritis patients”.
We should be delighted with such a positive and neat result! Why then do I hesitate and have doubts?
I believe that this paper reveals several important issues in relation to systematic reviews of Chinese acupuncture trials and studies of other TCM interventions. In fact, this is my main reason for discussing the new meta-analysis here. The following three points are crucial, in my view:
1) All the primary studies were from China, and 8 of the 10 were only available in Chinese.
2) All of them had major methodological flaws.
3) It has been shown repeatedly that all acupuncture-trials from China are positive.
Given this situation, the conclusions of any review for which there are only Chinese acupuncture studies might as well be written before the actual research has started. If the authors are pro-acupuncture, as those of the present article clearly are, they will conclude that “acupuncture is efficacious”. If the research team has some critical thinkers on board, the same evidence will lead to an entirely different conclusion, such as “due to the lack of rigorous trials, the evidence is less than compelling.”
Systematic reviews are supposed to be the best type of evidence we currently have; they are supposed to guide therapeutic decisions. I find it unacceptable that one and the same set of data could be systematically analysed to generate such dramatically different outcomes. This is confusing and counter-productive!
So what is there to do? How can we prevent being misled by such articles? I think that medical journals should refuse to publish systematic reviews which so clearly lack sufficient critical input. I also believe that reviewers of predominantly Chinese studies should provide English translations of these texts so that they can be independently assessed by those who are not able to read Chinese – and for the sake of transparency, journal editors should insist on this point.
And what about the value of acupuncture for gouty arthritis? I think I will let readers draw their own conclusions.
“They would say that, wouldn’t they?” is the quote attributed to Mandy Rice-Davies giving evidence in the Profumo affair. I think it aptly highlights some of the issues related to conflicts of interest in health care.
These days, when a researcher publishes a paper, he will in all likelihood have to disclose all conflicts of interest he might have. The aim of this exercise is to be as transparent as possible; if someone has received support from a commercial company, for example, it does not necessarily follow that his paper is biased, but it is important to lay the fact open so that readers can make up their own minds.
The questionnaires that authors have to complete prior to publication of their article focus almost exclusively on financial issues. For instance, one has to disclose any sponsorship, fees, travel support or shares that one might own in a company. In conventional medicine, these matters are deemed to be the most important sources for potential conflicts of interest.
In alternative medicine, financial issues are generally thought to be far less critical; the field is widely seen as an area where there is so little money that it is hardly worth bothering. Perhaps this is the reason why few journals in this field insist on declarations of conflicts of interest and few authors disclose them.
After having been a full-time researcher of alternative medicine for two decades, I have become convinced that conflicts of interest are at least as prevalent and powerful in this field as in any other area of health care. Sure, there is less money at stake, but this fact is more than compensated by non-financial issues. Quasi-evangelic convictions abound in alternative medicine and it is, I think, obvious that they can amount to significant conflicts of interest.
During their training, alternative practitioners are taught many things which are unproven, have no basis in fact or are just plain wrong. Eventually this schooling can create a belief system which often is adhered to regardless of the scientific evidence and which tends to be defended at all cost. As some of my readers are bound to object to this remark, I had better cite an example: during their training, students of chiropractic develop an increasingly firm stance against immunization, which in all likelihood is due to the type of information they receive at chiropractic college. There is no question in my mind that creeds can represent an even more powerful conflict of interest than financial matters.
Moreover, this belief is indivisibly intertwined with existential issues. In alternative medicine, there may not be huge amounts of money at stake but practitioners’ livelihoods are perceived to be at risk. If an acupuncturist, for instance, argues in favour of his therapy, he is also, consciously or subconsciously, trying to protect his income.
Some might say that this is no different from conventional medicine, but I disagree: if we take away one specific therapy from a doctor because it turns out to be useless or unsafe, he will be able to use another one; if we take the acupuncture needle away from an acupuncturist, we have deprived him of his livelihood.
This is why conflicts of interest in alternative medicine tend to be very acute, powerful and personal. And this is why enthusiasts of alternative medicine are incapable or unwilling to look upon any type of critical assessment of their area as anything other than an attack on their income, their beliefs, their status, their training or their person. If anyone should doubt it, I recommend studying the comments I received on previous posts of this blog.
When Mandy Rice-Davies gave evidence during the trial of Stephen Ward, the osteopath who had introduced her to influential clients, the prosecuting counsel noted that Lord Astor denied having had an affair with her. Mrs Rice-Davies allegedly replied “Well, he would say that, wouldn’t he?” (Actually, she did not say these exact words but something rather similar.) When I read the comments following my posts on this blog, I am often reminded of this now classic quote.
When chiropractors deny that neck manipulations carry a risk, when herbalists insist that traditional herbalism is based on good evidence, when homeopaths claim that their remedies are more than placebos, I believe we should ask who, in these debates, might have a conflict of interest.
Does one party in the discussion stand to benefit personally from the argument? Who is more likely to be objective, the person whose livelihood is endangered or the independent expert who has studied the subject in depth but has no axe to grind? If you ask these questions, you might conclude, as I frequently do: “they would say that, wouldn’t they?”
During the last decade, Professor Claudia Witt and co-workers from the Charité in Berlin have published more studies of homeopathy than any other research group. Many of their conclusions are over-optimistic and worryingly uncritical, in my view. Their latest article is on homeopathy as a treatment of eczema. As it happens, I have recently published a systematic review of this subject; it concluded that “the evidence from controlled clinical trials… fails to show that homeopathy is an efficacious treatment for eczema”. The question therefore arises whether the latest publication of the Berlin team changes my conclusion in any way.
Their new article describes a prospective multi-centre study which included 135 children with mild to moderate atopic eczema. The parents of the kids enrolled in this trial were able to choose either homeopathic or conventional doctors for their children, who then treated them as they saw fit. The article gives only scant details about the actual treatments administered. The main outcome of the study was a validated symptom score at 36 months. Further endpoints included quality of life, conventional medicine consumption, safety and disease-related costs at six, 12 and 36 months.
The results showed no significant differences between the groups at 36 months. However, the children treated conventionally seemed to improve more quickly than those in the homeopathy group. The total costs were about twice as high in the homeopathic as in the conventional group. The authors conclude as follows: “Taking patient preferences into account, while being unable to rule out residual confounding, in this long-term observational study, the effects of homoeopathic treatment were not superior to conventional treatment for children with mild to moderate atopic eczema, but involved higher costs”.
At least one previous report of this study has been available for some time and had thus been included in my systematic review. It is therefore unlikely that this new analysis might change my conclusion, particularly as the trial by Witt et al has many flaws. Here are just some of the most obvious ones:
Patients were selected according to parents’ preferences.
This means expectations could have played an important role.
It also means that the groups were not comparable in various, potentially important prognostic variables.
Even though much of the article reads as though the homeopaths exclusively employed homeopathic remedies, the truth is that both groups received similar amounts of conventional care and treatments. In other words, the study followed an ‘A+B versus B’ design (here is the sentence that best gives the game away: “At 36 months the frequency of daily basic skin care was… comparable in both groups, as was the number of different medications (including corticosteroids and antihistamines)…”). I have previously stated that this type of study-design can never produce a negative result because A+B is always more than B.
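To see why the deck is stacked, consider a minimal simulation (in Python; all the numbers are my own illustrative assumptions, not data from this or any other trial). Treatment A has no specific effect whatsoever, yet the non-specific effects that come with any add-on are enough to make A+B beat B:

```python
# Toy 'A+B versus B' trial: A is inert, but receiving an extra treatment
# brings non-specific effects (attention, expectation), so A+B always wins.
import random
import statistics

random.seed(42)

N = 200                  # patients per arm (assumption)
NATURAL_RECOVERY = 5.0   # mean improvement everyone shows anyway (assumption)
NONSPECIFIC_BOOST = 1.5  # extra improvement from attention/expectation (assumption)
SPECIFIC_EFFECT_A = 0.0  # the specific effect of A: none

def improvement(gets_a: bool) -> float:
    extra = (NONSPECIFIC_BOOST + SPECIFIC_EFFECT_A) if gets_a else 0.0
    return NATURAL_RECOVERY + extra + random.gauss(0, 3)

b_alone = [improvement(False) for _ in range(N)]
a_plus_b = [improvement(True) for _ in range(N)]

print(f"mean improvement, B alone: {statistics.mean(b_alone):.2f}")
print(f"mean improvement, A+B:     {statistics.mean(a_plus_b):.2f}")
# A+B comes out ahead although A itself contributed nothing specific.
```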
Yet, at first glance, this new study seems to prove my thesis wrong: even though the parents chose their preferred options, and even though all patients were treated conventionally, the addition of homeopathy to conventional care failed to produce a better clinical outcome. On the contrary, the homeopathically treated kids had to wait longer for their symptoms to ease. The only significant difference was that the addition of homeopathy to conventional eczema treatments was much more expensive than conventional therapy alone (this finding is less than remarkable: even the most useless additional intervention costs money).
So, is my theory about ‘A+B versus B’ study-designs wrong? I don’t think so. If A equals zero, one would expect exactly the finding Witt et al produced: 0+B=B. In turn, this is not a compliment for the homeopaths of this study: they seem to have been incapable of even generating a placebo-response. And this might indicate that homeopathy was not even useful as a means of generating a placebo-response. Whatever interpretation one adopts, this study tells us very little of value (as children often grow out of eczema, we cannot even be sure whether the results are not simply a reflection of the natural history of the disease); in my view, it merely demonstrates that weak study designs can only create weak findings which, in this particular case, are next to useless.
The study was sponsored by the Robert Bosch Stiftung, an organisation which claims to be dedicated to excellence in research and which has, in the past, spent millions on researching homeopathy. It seems doubtful that trials of this calibre can live up to any claim of excellence. In any case, the new analysis is certainly no reason to change the conclusion of my systematic review.
To their credit, Witt et al are well aware of the many weaknesses of their study. Perhaps in an attempt to make them appear less glaring, they stress that “the aim of this study was to reflect the real world situation”. Usually I do not accept the argument that pragmatic trials cannot be rigorous - but I think Witt et al do have a point here: the real world tells us that homeopathic remedies are pure placebos!
As I am drafting this post, I am in a plane flying back from Finland. The in-flight meal reminded me of the fact that no food is so delicious that it cannot be spoilt by the addition of too many capers. In turn, this made me think about the paper I happened to be reading at the time, and I arrived at the following theory: no trial design is so rigorous that it cannot be turned into something utterly nonsensical by the addition of a few amateur researchers.
The paper I was reading when this idea occurred to me was a randomised, triple-blind, placebo-controlled cross-over trial of homeopathy. Sounds rigorous and top quality? Yes, but wait!
Essentially, the authors recruited 86 volunteers who all claimed to be suffering from “mental fatigue” and treated them with Kali-Phos 6X or placebo for one week (X-potencies signify dilution steps of 1:10, and 6X therefore means that the salt had been diluted 1:1,000,000). Subsequently, the volunteers were crossed over to receive the other treatment for one week.
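Just to spell the potency arithmetic out (a trivial check in Python):

```python
# Each X-potency step is a 1:10 dilution, so n X-steps give a factor of 10**n.
steps = 6
print(f"6X = 1:{10 ** steps:,}")  # 6X = 1:1,000,000 (one part per million)
```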
The results failed to show that the homeopathic medication had any effect (not even homeopaths can be surprised about this!). The authors concluded that Kali-Phos was not effective but cautioned that, because of the possibility of a type-2-error, they might have missed an effect which, in truth, does exist.
In my view, this article provides an almost classic example of how time, money and other resources can be wasted in a pretence of conducting reasonable research. As we all know, clinical trials usually are for testing hypotheses. But what is the hypothesis tested here?
According to the authors, the aim was to “assess the effectiveness of Kali-Phos 6X for attention problems associated with mental fatigue”. In other words, their hypothesis was that this remedy is effective for treating the symptom of mental fatigue. This notion, I would claim, is not a scientific hypothesis, it is a foolish conjecture!
Arguably any hypothesis about the effectiveness of a highly diluted homeopathic remedy is mere wishful thinking. But, if there were at least some promising data, some might conclude that a trial was justified. By way of justification for the RCT in question, the authors inform us that one previous trial had suggested an effect; however, this study did not employ just Kali-Phos but a combined homeopathic preparation which contained Kalium-Phos as one of several components. Thus the authors’ “hypothesis” does not even amount to a hunch, not even to a slight inkling! To me, it is less than a shot in the dark fired by blind optimists - nobody should be surprised that the bullet failed to hit anything.
It could even be that the investigators themselves dimly realised that something was amiss with the basis of their study; this might be the reason why they called it an “exploratory trial”. But an exploratory study is one without a hypothesis, and the trial in question does have a hypothesis of sorts – only that it is rubbish. And what exactly did the authors mean to explore anyway?
That self-reported mental fatigue in healthy volunteers is a condition that can be medicalised such that it merits treatment?
That the test they used for quantifying its severity is adequate?
That a homeopathic remedy with virtually no active ingredient generates outcomes which are different from placebo?
That Hahnemann’s teaching of homeopathy was nonsense and can thus be discarded (he would have sharply condemned the approach of treating all volunteers with the same remedy, as it contradicts many of his concepts)?
That funding bodies can be fooled to pay for even the most ridiculous trial?
That ethics-committees might pass applications which are pure nonsense and which are thus unethical?
A scientific hypothesis should be more than a vague hunch; at its simplest, it aims to explain an observation or phenomenon, and it ought to have certain features which many alt med researchers seem to have never heard of. If they test nonsense, the result can only be nonsense.
The issue of conducting research that does not make much sense is far from trivial, particularly as so much (I would say most) of alt med research is of such or even worse calibre (if you do not believe me, please go on Medline and see for yourself how many of the recent articles in the category “complementary alternative medicine” truly contribute to knowledge worth knowing). It would therefore be easy to cite more hypothesis-free trials of homeopathy.
One recent example from Germany will have to suffice: in this trial, the only justification for conducting a full-blown RCT was that the manufacturer of the remedy allegedly knew of a few unpublished case-reports which suggested the treatment worked – and, of course, the results of the RCT eventually showed that it didn’t. Anyone with a background in science might have predicted that outcome – which is why such trials are so deplorably wasteful.
Research-funds are increasingly scarce, and they must not be spent on nonsensical projects! The money and time should be invested more fruitfully elsewhere. Participants of clinical trials give their cooperation willingly; but if they learn that their efforts have been wasted unnecessarily, they might think twice next time they are asked. Thus nonsensical research may have knock-on effects with far-reaching consequences.
Being a researcher is at least as serious a profession as most other occupations; perhaps we should stop allowing total amateurs to waste money while playing at being professional. If someone driving a car does something seriously wrong, we take away his licence; why is there no similar mechanism for inadequate researchers, funders and ethics-committees which prevents them from doing further damage?
At the very minimum, we should critically evaluate the hypotheses that applicants for research-funds propose to test. Had someone done this properly in relation to the two above-named studies, we would have saved about £150,000 per trial (my estimate). But as it stands, the authors will probably claim that they have produced fascinating findings which urgently need further investigation – and we (normally you and I) will have to spend three times the above-named amount (again, my estimate) to finance a “definitive” trial. Nonsense, I am afraid, tends to beget more nonsense.
In my last post, we discussed the “A+B versus B” trial design as a tool to produce false positive results. This method is currently very popular in alternative medicine, yet it is by no means the only approach that can mislead us. Today, let’s look at other popular options with a view to protecting ourselves against trialists who naively or wilfully might fool us.
The crucial flaw of the “A+B versus B” design is that it fails to account for non-specific effects. If the patients in the experimental group experience better outcomes than those in the control group, this difference could well be due to effects that are unrelated to the experimental treatment. There are, of course, several further ways to ignore non-specific effects in clinical research. The simplest option is to include no control group at all. Homeopaths, for instance, are very proud of studies which show that ~70% of their patients experience benefit after taking their remedies. This type of result tends to impress journalists, politicians and other people who fail to realise that such a result might be due to a host of factors, e.g. the placebo-effect, the natural history of the disease, regression towards the mean or treatments which patients self-administered while taking the homeopathic remedies. It is therefore misleading to make causal inferences from such data.
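One of these factors deserves a quick demonstration. Here is a toy simulation (in Python; the numbers are illustrative assumptions only) of regression towards the mean: patients enrol on an unusually bad day, receive no treatment whatsoever, and most of them nevertheless ‘improve’:

```python
# Regression towards the mean: higher score = worse symptoms; patients enrol
# only when scoring worse than their personal average, then get no treatment.
import random

random.seed(7)

def symptom_score(personal_mean: float) -> float:
    # a single measurement fluctuates around the patient's usual severity
    return personal_mean + random.gauss(0, 2)

improved = enrolled = 0
for _ in range(10_000):
    personal_mean = random.gauss(10, 2)       # this patient's usual severity
    baseline = symptom_score(personal_mean)
    if baseline < personal_mean + 2:          # enrol only on a bad day
        continue
    enrolled += 1
    follow_up = symptom_score(personal_mean)  # no treatment at all
    if follow_up < baseline:
        improved += 1

print(f"{improved / enrolled:.0%} of untreated patients 'improved'")
# roughly 90% under these assumptions - no therapy needed
```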
Another easy method to generate false positive results is to omit blinding. The purpose of blinding the patient, the therapist and the evaluator of the outcomes in clinical trials is to make sure that expectation is not the cause of or contributor to the outcome. They say that expectation can move mountains; this might be an exaggeration, but it can certainly influence the result of a clinical trial. Patients who hope for a cure regularly do get better even if the therapy they receive is useless, and therapists as well as evaluators of the outcomes tend to view the results through rose-tinted spectacles, if they have preconceived ideas about the experimental treatment. Similarly, the parents of a child or the owners of an animal can transfer their expectations, and this is one of several reasons why it is incorrect to claim that children and animals are immune to placebo-effects.
Failure to randomise is another source of bias which can make an ineffective therapy look like an effective one when tested in a clinical trial. If we allow patients or trialists to select or choose which patients receive the experimental and which get the control-treatment, it is likely that the two groups differ in a number of variables. Some of these variables might, in turn, impact on the outcome. If, for instance, doctors allocate their patients to the experimental and control groups, they might select those who will respond for the former and those who won’t for the latter. This may not happen with malicious intent but through intuition or instinct: responsible health care professionals want those patients who, in their experience, have the best chances to benefit from a given treatment to receive that treatment. Only randomisation can, when done properly, make sure we are comparing comparable groups of patients, and non-randomisation is likely to produce misleading findings.
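Again, a toy simulation (in Python; assumed, illustrative numbers) makes the point: if clinicians steer patients with a good prognosis towards the experimental arm, an entirely inert therapy comes out on top:

```python
# Allocation bias without randomisation: the treatment adds nothing, but
# good-prognosis patients are preferentially given the experimental therapy.
import random
import statistics

random.seed(11)

experimental, control = [], []
for _ in range(1_000):
    prognosis = random.gauss(0, 1)            # latent chance of recovery
    # the doctor, consciously or not, steers promising patients to the new therapy
    p_experimental = 0.8 if prognosis > 0 else 0.2
    outcome = prognosis + random.gauss(0, 1)  # the therapy itself adds nothing
    if random.random() < p_experimental:
        experimental.append(outcome)
    else:
        control.append(outcome)

print(f"mean outcome, experimental arm: {statistics.mean(experimental):.2f}")
print(f"mean outcome, control arm:      {statistics.mean(control):.2f}")
# the 'effect' is pure selection: randomisation would have removed it
```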
While these options for producing false positives are all too obvious, the next possibility is slightly more intriguing. It refers to studies which do not test whether an experimental treatment is superior to another one (often called superiority trials), but to investigations attempting to assess whether it is equivalent to a therapy that is generally accepted to be effective. The idea is that, if both treatments produce the same or similarly positive results, both must be effective. For instance, such a study might compare the effects of acupuncture to a common pain-killer. Such trials are aptly called non-inferiority or equivalence trials, and they offer a wide range of possibilities for misleading us. If, for example, such a trial has too few patients, it might show no difference where, in fact, there is one. Let’s consider a deliberately silly example: someone comes up with the idea to compare antibiotics to acupuncture as treatments of bacterial pneumonia in elderly patients. The researchers recruit 10 patients for each group, and the results reveal that, in one group, 2 patients died, while, in the other, the number was 3. The statistical tests show that the difference of just one patient is not statistically significant, and the authors therefore conclude that acupuncture is just as good for bacterial infections as antibiotics.
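The arithmetic of this deliberately silly example is worth checking. A quick sketch (in Python, using scipy’s Fisher exact test; scipy is assumed to be installed):

```python
# 2/10 versus 3/10 deaths: far too little data to show anything either way.
from scipy.stats import fisher_exact

#                died  survived
antibiotics = [2, 8]
acupuncture = [3, 7]

_, p_value = fisher_exact([antibiotics, acupuncture])
print(f"p = {p_value:.2f}")  # p = 1.00: 'no significant difference'
# With 10 patients per arm, even a large real difference in death rates would
# very likely be missed - absence of evidence, not evidence of equivalence.
```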
Even trickier is the option to under-dose the treatment given to the control group in an equivalence trial. In our hypothetical example, the investigators might subsequently recruit hundreds of patients in an attempt to overcome the criticism of their first study; they then decide to administer a sub-therapeutic dose of the antibiotic in the control group. The results would then apparently confirm the researchers’ initial finding, namely that acupuncture is as good as the antibiotic for pneumonia. Acupuncturists might then claim that their treatment has been proven in a very large randomised clinical trial to be effective for treating this condition, and people who do not happen to know the correct dose of the antibiotic could easily be fooled into believing them.
Obviously, the results would be more impressive, if the control group in an equivalence trial received a therapy which is not just ineffective but actually harmful. In such a scenario, the most useless or even slightly detrimental treatment would appear to be effective simply because it is equivalent to or less harmful than the comparator.
A variation of this theme is the plethora of controlled clinical trials which compare one unproven therapy to another unproven treatment. Predictably, the results indicate that there is no difference in the clinical outcome experienced by the patients in the two groups. Enthusiastic researchers then tend to conclude that this proves both treatments to be equally effective.
Another option for creating misleadingly positive findings is to cherry-pick the results. Most trials have many outcome measures; for instance, a study of acupuncture for pain-control might quantify pain in half a dozen different ways, it might also measure the length of the treatment until pain has subsided, the amount of medication the patients took in addition to receiving acupuncture, the days off work because of pain, the partner’s impression of the patient’s health status, the quality of life of the patient, the frequency of sleep being disrupted by pain etc. If the researchers then evaluate all the results, they are likely to find that one or two of them have changed in the direction they wanted. This can well be a chance finding: with the typical statistical tests, one in 20 outcome measures would produce a significant result purely by chance. In order to mislead us, the researchers only need to “forget” about all the negative results and focus their publication on the ones which, by chance, have come out as they had hoped.
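The ‘one in 20’ figure actually understates how easy this is: with 20 outcome measures and no real effect at all, most trials will contain at least one ‘significant’ result to cherry-pick. A quick simulation (in Python, illustrative only):

```python
# Under the null hypothesis every outcome has a 5% chance of reaching p < 0.05.
# With 20 outcomes per trial, how many trials offer something to cherry-pick?
import random

random.seed(3)

N_TRIALS, N_OUTCOMES, ALPHA = 10_000, 20, 0.05

lucky = sum(
    any(random.random() < ALPHA for _ in range(N_OUTCOMES))
    for _ in range(N_TRIALS)
)
print(f"trials with at least one 'significant' outcome: {lucky / N_TRIALS:.0%}")
# ~64%, i.e. 1 - 0.95**20: cherry-picking almost guarantees a positive result
```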
One foolproof method for misleading the public is to draw conclusions which are not supported by the data. Imagine you have generated squarely negative data with a trial of homeopathy. As an enthusiast of homeopathy, you are far from happy with your own findings; in addition, you might have a sponsor who puts pressure on you. What can you do? The solution is simple: you only need to highlight at least one positive message in the published article. In the case of homeopathy, you could, for instance, make a major issue of the fact that the treatment was remarkably safe and cheap: not a single patient died, most were very pleased with the treatment, which was not even very expensive.
And finally, there is always the possibility of overt cheating. Researchers are only human and are thus not immune to temptation. They may have conflicts of interest or may know that positive results are much easier to publish than negative ones. Certainly they want to publish their work – “publish or perish”! So, faced with disappointing results of a study, they might decide to prettify them or even invent new ones which are more pleasing to them, their peers, or their sponsors.
Am I claiming that this sort of thing only happens in alternative medicine? No! Obviously, the way to minimise the risk of such misconduct is to train researchers properly and make sure they are able to think critically. Am I suggesting that investigators of alternative medicine are often not well-trained and almost always uncritical? Yes.