A paper entitled ‘Real world research: a complementary method to establish the effectiveness of acupuncture’ caught my attention recently. I find it quite remarkable and think it might stimulate some discussion on this blog. Here is its abstract:
Acupuncture has been widely used in the management of a variety of diseases for thousands of years, and many relevant randomized controlled trials have been published. In recent years, many randomized controlled trials have provided controversial or less-than-convincing evidence for the efficacy of acupuncture. The clinical effectiveness of acupuncture in Western countries remains controversial.
Acupuncture is a complex intervention involving needling components, specific non-needling components, and generic components. Common problems that have contributed to the equivocal findings in acupuncture randomized controlled trials were imperfections regarding acupuncture treatment and inappropriate placebo/sham controls. In addition, some inherent limitations were also present in the design and implementation of current acupuncture randomized controlled trials, such as weak external validity. The current designs of randomized controlled trials of acupuncture need to be further developed. In contrast to examining efficacy and adverse reactions in a “sterilized” environment in a narrowly defined population, real world research assesses the effectiveness and safety of an intervention in a much wider population in real world practice. For this reason, real world research might be a feasible and meaningful method for acupuncture assessment. Randomized controlled trials are important in verifying the efficacy of acupuncture treatment, but the authors believe that real world research, if designed and conducted appropriately, can complement randomized controlled trials to establish the effectiveness of acupuncture. Furthermore, an integrative model incorporating both randomized controlled trials and real world research could allow the two approaches to complement each other and potentially provide more objective and persuasive evidence.
In the article itself, the authors list seven criteria for what they consider good research into acupuncture:
Acupuncture should be regarded as complex and individualized treatment;
The study aim (whether to assess the efficacy of acupuncture needling or the effectiveness of acupuncture treatment) should be clearly defined and differentiated;
Pattern identification should be clearly specified, and non-needling components should also be considered;
The treatment protocol should have some degree of flexibility to allow for individualization;
The placebo or sham acupuncture should be appropriate: knowing “what to avoid” and “what to mimic” in placebos/shams;
In addition to “hard evidence”, one should consider patient-reported outcomes, economic evaluations, patient preferences and the effect of expectancy;
The use of qualitative research (e.g., interview) to explore some missing areas (e.g., experience of practitioners and patient-practitioner relationship) in acupuncture research.
Furthermore, the authors list the advantages of their RWR-concept:
In RWR, interventions are tailored to the patients’ specific conditions, in contrast to standardized treatment. As a result, conclusions based on RWR consider all aspects of acupuncture that affect its effectiveness.
At an operational level, patients’ choice of the treatment(s) decreases the difficulties in recruiting and retaining patients during the data collection period.
The study sample in RWR is much more representative of the real world situation (similar to the section of the population that receives the treatment). The study, therefore, has higher external validity.
RWR tends to have a larger sample size and longer follow-up period than RCT, and thus is more appropriate for assessing the safety of acupuncture.
The authors make much of their notion that acupuncture is a COMPLEX INTERVENTION; specifically, they claim the following: Acupuncture treatment includes three aspects: needling, specific non-needling components driven by acupuncture theory, and generic components not unique to acupuncture treatment. In addition, acupuncture treatment should be performed on the basis of the patient’s condition and traditional Chinese medicine (TCM) theory.
There is so much BS here that it is hard to decide where to begin refuting. As the assumption that acupuncture and other alternative therapies are COMPLEX INTERVENTIONS (and therefore exempt from rigorous testing) is highly prevalent in this field, let me briefly try to tackle this one.
The last time I saw a patient and prescribed a drug treatment I did all of the following:
- I greeted her, asked her to sit down and tried to make her feel relaxed.
- I first had a quick chat about something trivial.
- I then asked why she had come to see me.
- I started to take notes.
- I inquired about the exact nature and the history of her problem.
- I then asked her about her general medical history, family history and her life-style.
- I also asked about any psychological problems that might relate to her symptoms.
- I then conducted a physical examination.
- Subsequently we discussed what her diagnosis might be.
- I told her what my working diagnosis was.
- I ordered a few tests to either confirm or refute it and explained them to her.
- We decided that she should come back and see me in a few days when her tests had come back.
- In order to ease her symptoms in the meanwhile, I gave her a prescription for a drug.
- We discussed this treatment, how and when she should take it, adverse effects etc.
- We also discussed other therapeutic options, in case the prescribed treatment was in any way unsatisfactory.
- I reassured her by telling her that her condition did not seem to be serious and stressed that I was confident I would be able to help her.
- She left my office.
The point I am trying to make is: prescribing an entirely straightforward drug treatment is also a COMPLEX INTERVENTION. In fact, I know of no treatment that is NOT complex.
Does that mean that drugs and all other interventions are exempt from being tested in rigorous RCTs? Should we allow drug companies to adopt the RWR too? Any old placebo would pass that test and could be made to look effective using RWR. In the example above, my compassion, care and reassurance would alleviate my patient’s symptoms, even if the prescription I gave her was complete rubbish.
So why should acupuncture (or any other alternative therapy) not be tested in proper RCTs? I fear the reason is that RCTs might show that it is not as effective as its proponents had hoped. The conclusion about the RWR is thus embarrassingly simple: proponents of alternative medicine want double standards because single standards would risk disclosing the truth.
You may feel that homeopaths are bizarre, irrational, perhaps even stupid – but you cannot deny their tenacity. For 200 years, they have been trying to convince us that their treatments are effective beyond placebo. And they seem to be getting increasingly bold with their claims: while they used to suggest that homeopathy was effective for trivial conditions like the common cold, they now have their eyes on much more ambitious things. Two recent studies, for instance, claim that homeopathic remedies can help cancer patients.
The aim of the first study was to evaluate whether homeopathy influenced global health status and subjective wellbeing when used as an adjunct to conventional cancer therapy.
In this pragmatic randomized controlled trial, 410 patients, who were treated by standard anti-neoplastic therapy, were randomized to receive or not receive classical homeopathic adjunctive therapy in addition to standard therapy. The main outcome measures were global health status and subjective wellbeing as assessed by the patients. At each of three visits (one baseline, two follow-up visits), patients filled in two questionnaires for quantification of these endpoints.
The results show that 373 patients yielded at least one of three measurements. The improvement of global health status between visits 1 and 3 was significantly stronger in the homeopathy group by 7.7 (95% CI 2.3-13.0, p=0.005) when compared with the control group. A significant group difference was also observed with respect to subjective wellbeing by 14.7 (95% CI 8.5-21.0, p<0.001) in favor of the homeopathic as compared with the control group. Control patients showed a significant improvement only in subjective wellbeing between their first and third visits.
Our homeopaths concluded that the results suggest that the global health status and subjective wellbeing of cancer patients improve significantly when adjunct classical homeopathic treatment is administered in addition to conventional therapy.
The second study is a little more modest; its aim was to explore the benefits of a three-month course of individualised homeopathy (IH) for survivors of cancer.
Fifteen survivors of any type of cancer were recruited by a walk-in cancer support centre. Conventional treatment had to have taken place within the last three years. Patients scored their total, physical and emotional wellbeing using the Functional Assessment of Chronic Illness Therapy for Cancer (FACIT-G) before and after receiving four IH sessions.
The results showed that 11 women had statistically positive results for emotional, physical and total wellbeing based on FACIT-G scores.
And the conclusion: Findings support previous research, suggesting CAM or individualised homeopathy could be beneficial for survivors of cancer.
As I said: one has to admire their tenacity, perhaps also their chutzpah – but not their understanding of science or their intelligence. If they were able to think critically, they could only arrive at one conclusion: STUDY DESIGNS THAT ARE WIDE OPEN TO BIAS ARE LIKELY TO DELIVER BIASED RESULTS.
The second study is a mere observation without a control group. The reported outcomes could be due to placebo, expectation, extra attention or social desirability. We obviously need an RCT! But the first study was an RCT!!! Its results are therefore more convincing, aren’t they?
No, not at all. I can repeat my sentence from above: The reported outcomes could be due to placebo, expectation, extra attention or social desirability. And if you don’t believe it, please read what I have posted about the infamous ‘A+B versus B’ trial design (here and here and here and here and here for instance).
My point is that such a study, while looking rigorous to the naïve reader (after all, it’s an RCT!!!), is just as inconclusive when it comes to establishing cause and effect as a simple case series which (almost) everyone knows by now to be utterly useless for that purpose. The fact that the A+B versus B design is nevertheless being used over and over again in alternative medicine for drawing causal conclusions amounts to deceit – and deceit is unethical, as we all know.
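For readers who like to see the numbers, here is a toy simulation (with invented effect sizes – not data from any real trial) of the ‘A+B versus B’ design. Even when the add-on treatment A is completely inert, the non-specific effects of extra attention and expectation reliably push the A+B arm ahead on a subjective outcome:

```python
import random
import statistics

random.seed(1)

def simulate_trial(n_per_arm=100, placebo_boost=3.0):
    """One hypothetical 'A+B versus B' trial on a subjective outcome
    (e.g. a quality-of-life change score).  Treatment A is assumed to be
    inert: its entire contribution is the non-specific boost from extra
    attention, expectation and pampering."""
    b_arm = [random.gauss(5.0, 10.0) for _ in range(n_per_arm)]                   # usual care (B) only
    ab_arm = [random.gauss(5.0 + placebo_boost, 10.0) for _ in range(n_per_arm)]  # usual care + inert add-on (A+B)
    return statistics.mean(ab_arm) - statistics.mean(b_arm)

# Repeat the trial many times: the A+B arm comes out ahead almost every
# time, although treatment A does nothing specific whatsoever.
diffs = [simulate_trial() for _ in range(2000)]
wins = sum(d > 0 for d in diffs) / len(diffs)
print(f"mean advantage of the A+B arm: {statistics.mean(diffs):.2f}")
print(f"fraction of trials favouring A+B: {wins:.0%}")
```

With these hypothetical numbers, virtually every simulated trial favours the A+B arm – a ‘positive’ result guaranteed by the design, not by the treatment.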
My overall conclusion about all this:
QUACKS LOVE THIS STUDY DESIGN BECAUSE IT NEVER FAILS TO PRODUCE FALSE POSITIVE RESULTS.
This is a question which I have asked myself more often than I care to remember. The reason is probably that, in alternative medicine, I feel surrounded by so much dodgy research that I simply cannot avoid asking it.
In particular, the so-called ‘pragmatic’ trials which are so much ‘en vogue’ at present are, in my view, a reason for concern. Take a study of cancer patients, for instance, where one group is randomized to get the usual treatments and care, while the experimental group receives the same plus several alternative treatments in addition. These treatments are carefully selected to be agreeable and pleasant; each patient can choose the ones he/she likes best, had always wanted to try, or has heard many good things about. The outcome measure of our fictitious study would, of course, be some subjective parameter such as quality of life.
In this set-up, the patients in our experimental group thus have high expectations, are delighted to get something extra, even more happy to get it for free, receive plenty of attention and lots of empathy, care, time, attention etc. By contrast, our poor patients in the control group would be a bit miffed to have drawn the ‘short straw’ and receive none of this.
What result do we expect?
Will the quality of life after all this be equal in both groups?
Will it be better in the miffed controls?
Or will it be higher in those lucky ones who got all this extra pampering?
I don’t think I need to answer these questions; the answers are too obvious and too trivial.
But the real and relevant question is the following, I think: IS SUCH A TRIAL JUST SILLY AND MEANINGLESS OR IS IT UNETHICAL?
I would argue the latter!
Because the results of such a study are known before the first patient has even been recruited. This means that the trial is unnecessary; the money, time and effort have been wasted. Crucially, patients have been misled into thinking that they are giving their time, co-operation, patience etc. because there is a question of sufficient importance to be answered.
But, in truth, there is no question at all!
Perhaps you believe that nobody in their right mind would design, fund and conduct such a daft trial. If so, you are mistaken. Such studies are currently being published by the dozen. Here is the abstract of the most recent one I could find:
The aim of this study was to evaluate the effectiveness of an additional, individualized, multi-component complementary medicine treatment offered to breast cancer patients at the Merano Hospital (South Tyrol) on health-related quality of life compared to patients receiving usual care only. A randomized pragmatic trial with two parallel arms was performed. Women with confirmed diagnoses of breast cancer were randomized (stratified by usual care treatment) to receive individualized complementary medicine (CM group) or usual care alone (usual care group). Both groups were allowed to use conventional treatment for breast cancer. Primary endpoint was the breast cancer-related quality of life FACT-B score at 6 months. For statistical analysis, we used analysis of covariance (with factors treatment, stratum, and baseline FACT-B score) and imputed missing FACT-B scores at 6 months with regression-based multiple imputation. A total of 275 patients were randomized between April 2011 and March 2012 to the CM group (n = 136, 56.3 ± 10.9 years of age) or the usual care group (n = 139, 56.0 ± 11.0). After 6 months from randomization, adjusted means for health-related quality of life were higher in the CM group (FACT-B score 107.9; 95 % CI 104.1-111.7) compared to the usual care group (102.2; 98.5-105.9) with an adjusted FACT-B score difference between groups of 5.7 (2.6-8.7, p < 0.001). Thus, an additional individualized and complex complementary medicine intervention improved quality of life of breast cancer patients compared to usual care alone. Further studies evaluating specific effects of treatment components should follow to optimize the treatment of breast cancer patients.
The key sentence in this abstract is, of course: complementary medicine intervention improved quality of life of breast cancer patients… It provides the explanation as to why these trials are so popular with alternative medicine researchers: they are not real research, they are quite simply promotion! The next step would be to put a few of those pseudo-scientific trials together and claim that there is solid proof that integrating alternative treatments into conventional health care produces better results. At that stage, few people will bother asking whether this is really due to the treatments in question or to the additional attention, pampering etc.
My question is ARE SUCH TRIALS ETHICAL?
I would very much appreciate your opinion.
A new study of homeopathic arnica suggests efficacy. How come?
Subjects scheduled for rhinoplasty surgery with nasal bone osteotomies by a single surgeon were prospectively randomized to receive either oral perioperative arnica or placebo in a double-blinded fashion. A commercially available preparation containing 12 capsules was used: one 500 mg capsule of arnica 1M was given preoperatively on the morning of surgery and two more later that day after surgery. Thereafter, arnica was administered in the 12C potency three times daily for the next 3 days (“C” indicates a 100-fold serial dilution; and “M”, a 1000-fold dilution).
Ecchymosis was measured in digital “three-quarter”-view photographs at three postoperative time points. Each bruise was outlined with Adobe Photoshop and the extent was scaled to a standardized reference card. Cyan, magenta, yellow, black, and luminosity were analyzed in the bruised and control areas to calculate change in intensity.
Compared with 13 subjects receiving placebo, 9 taking arnica had 16.2%, 32.9%, and 20.4% less extent of ecchymosis on postoperative days 2/3, 7, and 9/10 respectively, a statistically significant difference on day 7. Color change initially showed a 13.1% increase in intensity with arnica, but 10.9% and 36.3% decreases on days 7 and 9/10, a statistically significant difference on day 9/10. One subject experienced mild itching and rash with the study drug that resolved during the study period.
The authors concluded that Arnica montana seems to accelerate postoperative healing, with quicker resolution of the extent and the intensity of ecchymosis after osteotomies in rhinoplasty surgery, which may dramatically affect patient satisfaction.
Why are the results positive? Previous systematic reviews confirm that homeopathic arnica is a pure placebo. At first, I thought the answer lay in the 1M potency: it could well still contain active molecules. But then I realised that the answer is much simpler: if we apply the conventional level of statistical significance, there are no statistically significant differences from placebo at all! I had not noticed the little sentence by the authors: a P value of 0.1 was set as a meaningful difference with statistical significance. In fact, none of the effects called significant by the authors passes the conventionally used probability level of 5%.
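How much difference does relaxing the significance threshold from 5% to 10% make? A quick simulation of null trials (purely illustrative – these are not the arnica data) shows that chance alone clears the 10% bar about twice as often as the 5% bar:

```python
import math
import random

random.seed(7)

def null_trial_p_value(n=30):
    """One trial in which the remedy truly does nothing: both arms are
    drawn from the same distribution.  Returns a two-sided p-value from
    a large-sample z approximation."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > z) for a standard normal

p_values = [null_trial_p_value() for _ in range(5000)]
rates = {}
for alpha in (0.05, 0.1):
    rates[alpha] = sum(p < alpha for p in p_values) / len(p_values)
    print(f"alpha = {alpha}: {rates[alpha]:.1%} of null trials come out 'significant'")
```

Doubling the threshold doubles the rate of false-positive ‘significant’ findings, which is precisely why the conventional 5% level matters.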
So, what do the results of this new study truly mean? In my view, they show what was known all along: HOMEOPATHIC REMEDIES ARE PLACEBOS.
In my last post, I claimed that researchers of alternative medicine tend to be less than rigorous. I did not link this statement to any evidence at all. Perhaps I should have at least provided an example!? As it happens, I just came across a brand new paper which nicely demonstrates what I meant.
According to its authors, this non-interventional study was performed to generate data on safety and treatment effects of a complex homeopathic drug. They treated 1050 outpatients suffering from common cold with a commercially available homeopathic remedy for 8 days. The study was conducted in 64 German outpatient practices of medical doctors trained in CAM. Tolerability, compliance and the treatment effects were assessed by the physicians and by patient diaries. Adverse events were collected and assessed with specific attention to homeopathic aggravation and proving symptoms. Each adverse effect was additionally evaluated by an advisory board of experts.
The physicians detected 60 adverse events from 46 patients (4.4%). Adverse drug reactions occurred in 14 patients (1.3%). Six patients showed proving symptoms (0.57%) and only one homeopathic aggravation (0.1%) appeared. The rate of compliance was 84% for all groups. The global assessment of the treatment effects resulted in the verdict “good” and “very good” in 84.9% of all patients.
The authors concluded that the homeopathic complex drug was shown to be safe and effective for children and adults alike. Adverse reactions specifically related to homeopathic principles are very rare. All observed events recovered quickly and were of mild to moderate intensity.
So why do I think this is ‘positively barmy’?
The study had no control group. This means that there is no way anyone can attribute the observed ‘treatment effects’ to the homeopathic remedy. There are many other phenomena that may have caused or contributed to them, e.g.:
- a placebo effect
- the natural history of the condition
- regression to the mean
- other treatments which the patients took but did not declare
- the empathic encounter with the physician
- social desirability
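Regression to the mean, in particular, is easy to demonstrate with a short simulation (the numbers below are invented for illustration): if patients are enrolled when their fluctuating symptom scores happen to be bad, their average score will ‘improve’ at follow-up even though nobody receives any treatment at all:

```python
import random
import statistics

random.seed(42)

# Each patient's symptom score fluctuates around a stable personal mean;
# there is no treatment effect anywhere in this model.
true_means = [random.gauss(50, 10) for _ in range(10000)]
baseline   = [m + random.gauss(0, 15) for m in true_means]  # score on the day of enrolment
follow_up  = [m + random.gauss(0, 15) for m in true_means]  # score later, with no treatment given

# Enrol only those who scored badly (>= 70) on the day they walked in,
# just as an uncontrolled observational study implicitly does.
enrolled = [(b, f) for b, f in zip(baseline, follow_up) if b >= 70]
mean_before = statistics.mean(b for b, _ in enrolled)
mean_after = statistics.mean(f for _, f in enrolled)
print(f"baseline mean of the enrolled patients: {mean_before:.1f}")
print(f"follow-up mean (no treatment at all):   {mean_after:.1f}")
```

The enrolled patients’ average score falls substantially at follow-up purely because extreme baseline values tend to be followed by more typical ones – no remedy required.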
To plan a study with the aim as stated above and to draw the conclusion as cited above is naïve and unprofessional (to say the least) on the part of the researchers (I often wonder where, in such cases, the boundary between incompetence and research misconduct might lie). To pass such a paper through the peer review process is negligent on the part of the reviewers. To publish the article is irresponsible on the part of the editor.
In a nutshell: COLLECTIVELY, THIS IS ‘POSITIVELY BARMY’!!!
In the realm of homeopathy there is no shortage of irresponsible claims. I am therefore used to a lot – but this new proclamation takes the biscuit, particularly as it currently is being disseminated in various forms worldwide. It is so outrageously unethical that I decided to reproduce it here [in a slightly shortened version]:
“Homeopathy has given rise to a new hope to patients suffering from dreaded HIV, tuberculosis and the deadly blood disease Hemophilia. In a pioneering two-year long study, city-based homeopath Dr Rajesh Shah has developed a new medicine for AIDS patients, sourced from human immunodeficiency virus (HIV) itself.
The drug has been tested on humans for safety and efficacy and the results are encouraging, said Dr Shah. Larger studies with and without concomitant conventional ART (Antiretroviral therapy) can throw more light in future on the scope of this new medicine, he said. Dr Shah’s scientific paper for debate has just been published in Indian Journal of Research in Homeopathy…
The drug resulted in improvement of blood count (CD4 cells) of HIV patients, which is a very positive and hopeful sign, he said and expressed the hope that this will encourage an advanced research into the subject. Sourcing of medicines from various virus and bacteria has been a practise in the homeopathy stream long before the prevailing vaccines came into existence, said Dr Shah, who is also organising secretary of Global Homeopathy Foundation (GHF)…
Dr Shah, who has been campaigning for the integration of homeopathy and allopathic treatments, said this combination has proven to be useful for several challenging diseases. He teamed up with noted virologist Dr Abhay Chowdhury and his team at the premier Haffkine Institute and developed a drug sourced from TB germs of MDR-TB patients.”
So, where is the study? It is not on Medline, but I found it on the journal’s website. This is what the abstract tells us:
“Thirty-seven HIV-infected persons were registered for the trial; ten participants dropped out of the study, so the effect of the HIV nosode 30C and 50C was assessed on the 27 participants remaining in the trial.
Results: Out of 27 participants, 7 (25.93%) showed a sustained reduction in the viral load from 12 to 24 weeks. Similarly, 9 participants (33.33%) showed an increase in the CD4+ count by 20% altogether in the 12th and 24th weeks. Significant weight gain was observed at week 12 (P = 0.0206). 63% and 55% showed an overall increase in either appetite or weight. The viral load increased from baseline to week 24 through week 12, although the increase was not statistically significant (P > 0.05). 52% (14 of 27) of participants showed either stability or improvement in CD4% at the end of 24 weeks, of which 37% showed improvement (1.54-48.35%) in CD4+ count and 15% had a stable CD4+ percentage count until week 24. 16 out of 27 participants had a decrease (1.8-46.43%) in CD8 count. None of the adverse events led to discontinuation of the study.
Conclusion: The study results revealed improvement in immunological parameters, treatment satisfaction, reported by an increase in weight, relief in symptoms, and an improvement in health status, which opens up possibilities for future studies.”
In other words, the study did not even have a control group. This means that the observed ‘effects’ are most likely just the normal fluctuations one would expect, without any clinical significance whatsoever.
The homeopathic Ebola cure was bad enough, I thought, but, considering the global importance of AIDS, the homeopathic HIV treatment is clearly worse.
Today, I had a great day: two wonderful book reviews, one in THE TIMES HIGHER EDUCATION and one in THE SPECTATOR. But then I did something that I shouldn’t have done – I looked whether someone had already written a review on the Amazon site. There were three reviews; the first was nice, the second was very stupid, and the third almost made me angry. Here it is:
I was at Exeter when Ernst took over what was already a successful Chair in CAM. I am afraid this part of it appears to be fiction. It was embarrassing for those of us CAM scientists trying to work there, but the university nevertheless supported his right to freedom of speech through all the one-sided attacks he made on CAM. Sadly, it became impossible to do genuine CAM research at Exeter, as one had to either agree with him that CAM is rubbish, or go elsewhere. He was eventually asked to leave the university, having spent the £2.M charity pot set up by Maurice Laing to help others benefit from osteopathy. CAM research funding is so tiny (in fact it is pretty much non-existent) and the remedies so cheap to make, that there is not the kind of corruption you find in multi-billion dollar drug companies (such as that recently in China) or the intrigue described. Subsequently it is not possible to become a big name in CAM in the UK (which may explain the ‘about face’ from the author when he found that out?). The book bears no resemblance to what I myself know about the field of CAM research, which is clearly considerably more than the author, and I would recommend anyone not to waste time and money on this particular account.
I know, I should just ignore it, but outright lies have always made me cross!
Here are just some of the ‘errors’ in the above text:
- There was no chair when I came.
- All the CAM scientists – not sure what that is supposed to mean.
- I was never asked to leave.
- The endowment was not £2 million.
- It was not set up to help others benefit from osteopathy.
It is a pity that this ‘CAM-expert’ hides behind a pseudonym. Perhaps he/she will tell us on this blog who he/she is. And then we might find out how well-informed he/she truly is and how he/she was able to insert so many lies into such a short text.
Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?
Here is a brand new one which might stand for dozens of others.
In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA) and followed them prospectively over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).
The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out; one did not achieve an improvement of 80% and was therefore also counted as a treatment failure. The cost of homeopathic treatment was 41% of the projected equivalent conventional treatment.
Good news then for enthusiasts of homeopathy? 91% improvement!
Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… until I read the authors’ conclusions that is:
Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.
Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:
- How on earth can we take this and so many other articles on homeopathy seriously?
- When does this sort of article cross the line between wishful thinking and scientific misconduct?
Guest post by Nick Ross
If you’re a fan of Edzard Ernst – and who with a rational mind would not be – then you will be a fan of HealthWatch.
Edzard is a distinguished supporter. Do join us. I can’t promise much in return except that you will be part of a small and noble organisation that campaigns for treatments that work – in other words for evidence based medicine. Oh, and you get a regular Newsletter, which is actually rather good.
HealthWatch was inspired 25 years ago by Professor Michael Baum, the breast cancer surgeon who was incandescent that so many women presented to his clinic late, doomed and with suppurating sores, because they had been persuaded to try ‘alternative treatment’ rather than the real thing.
But like Edzard (and indeed like Michael Baum), HealthWatch keeps an open mind. If there are reliable data to show that an apparently weirdo treatment works, hallelujah. If there is evidence that an orthodox one doesn’t then it deserves a raspberry. HealthWatch has worked to expose quacks and swindlers and to get the Advertising Standards Authority to do its job regulating against false claims and flimflam. It has fought the NHS to have women given fair and balanced advice about the perils of mass screening. It has campaigned with Sense About Science, English Pen and Index to protect whistleblowing scientists from vexatious libel laws, and it has joined the AllTrials battle for transparency in drug trials. It has an annual competition for medical and nursing students to encourage critical analysis of clinical research protocols, and it stages the annual HealthWatch Award and Lecture which has featured Edzard (in 2005) and a galaxy of other champions of scepticism and good evidence including Sir Iain Chalmers, Richard Smith, David Colquhoun, Tim Harford, John Diamond, Richard Doll, Peter Wilmshurst, Ray Tallis, Ben Goldacre, Fiona Godlee and, last year, Simon Singh. We are shortly to sponsor a national debate on Lord Saatchi’s controversial Medical Innovation Bill.
But we need new blood. Do please check us out. Be careful, because since we first registered our name a host of brazen copycats have emerged, not least Her Majesty’s Government with ‘Healthwatch England’ which is part of the Care Quality Commission. We have had to put ‘uk’ at the end of our web address to retain our identity. So take the link to http://www.healthwatch-uk.org/, or better still take out a (very modestly priced) subscription.
As Edmund Burke might well have said, all it takes for quackery to flourish is that good men and women do nothing.
As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.
To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):
A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.
The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.
Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
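For readers unfamiliar with the statistic quoted above, here is a minimal sketch of how an odds ratio and its 95% confidence interval are computed from a 2×2 outcome table. The numbers are invented purely for illustration and are not data from the review:

```python
import math

# Invented 2x2 table for illustration (NOT data from the review):
# rows = treatment arm, columns = improved / not improved
a, b = 30, 20   # homeopathy arm: improved, not improved
c, d = 20, 30   # placebo arm:    improved, not improved

odds_ratio = (a * d) / (b * c)            # OR > 1 favours the treatment arm

# 95% CI via the standard error of log(OR)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# → OR = 2.25, 95% CI 1.01 to 5.01
```

Note how wide such intervals can be with small samples: a lower bound only just above 1 is precisely the sort of marginal result that is sensitive to which trials are included in a pooled analysis.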
Since my team had published an RCT of individualised homeopathy, it seemed only natural that my interest focussed on why our study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.
I was convinced that this trial had been rigorous and was therefore puzzled why the reviewers, despite giving it ‘full marks’, had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.
It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data provided by us would not lend themselves to meta-analysis. By selecting not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to reject it from their meta-analysis.
Why did they do that?
The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO“. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).
By rigidly following their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?
Well, I think they committed several serious mistakes.
- Firstly, they wrote the protocol which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is flawed, following it becomes the very opposite of a virtue. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall result would not have been in favour of homeopathy.
- Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgement, our RCT would have been the only one with an ‘A’ rating. This would have highlighted very clearly the nonsense of excluding the best-rated trial from the meta-analysis.
There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.
And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:
I have reason to believe that this review and meta-analysis is biased in favour of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analysis. Jacobs’ results were in favour of homeopathy, Walach’s were not.
For the domains where the rating of Walach’s study was less than that of the Jacobs study, please find citations from the original studies or my short summaries for the point in question.
Domain I: Sequence generation:
Walach 1997: “The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (medium risk of bias)
Jacobs 1994: “For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (low risk of bias)
Domain IIIb: Blinding of outcome assessor
Walach 1997: “The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study.”
Rating: UNCLEAR (medium risk of bias)
Jacobs 1994: “All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (low risk of bias)
Domain V: Selective outcome reporting
Walach 1997: The study protocol was published in 1991, prior to enrolment of participants; all primary outcome variables were reported with respect to all participants and endpoints.
Rating: NO (high risk of bias)
Jacobs 1994: No prior publication of a protocol, but a pilot study exists. However, this was published only in 1993, after the trial was performed in 1991. The primary outcome (duration of diarrhea) was defined and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems to have been defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)
Domain VI: Other sources of bias:
Walach 1997:
Rating: NO (high risk of bias), no details given
Jacobs 1994: Imbalance of group properties (size, weight and age of children) that might have some impact on the course of the disease; high impact of parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment.
Rating: YES (low risk of bias), no details given
In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias on the part of the authors of the review.
So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof of misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying.
Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.