
On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because

  1. the assumptions which underpin homeopathy fly in the face of science,
  2. the clinical evidence fails to show that it works beyond a placebo effect.

But was I correct?

A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that of placebo.

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.
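As an aside for readers wondering what the ‘arithmetic transformation for continuous data’ involves: a standardised mean difference (SMD) can be converted to an approximate odds ratio by rescaling with π/√3, the standard logistic-distribution conversion. The sketch below is purely illustrative – the numbers are invented and this is not the review authors’ own code:

```python
import math

def smd_to_odds_ratio(smd, se_smd=None):
    """Convert a standardised mean difference (SMD) to an approximate
    odds ratio via ln(OR) ~= SMD * pi / sqrt(3), with an optional 95%
    confidence interval if the SMD's standard error is supplied."""
    factor = math.pi / math.sqrt(3)          # ~1.81
    log_or = smd * factor
    if se_smd is None:
        return math.exp(log_or), None
    se_log_or = se_smd * factor
    ci = (math.exp(log_or - 1.96 * se_log_or),
          math.exp(log_or + 1.96 * se_log_or))
    return math.exp(log_or), ci

# Hypothetical example: an SMD of 0.25 with a standard error of 0.10
odds_ratio, ci = smd_to_odds_ratio(0.25, 0.10)
print(f"OR ≈ {odds_ratio:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```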

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
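For readers who want to see how a pooled OR and its confidence interval are obtained, here is a minimal inverse-variance sketch on the log-odds scale. The per-trial figures are made up for illustration only; the review’s own data and its protocol-specified model are not reproduced here:

```python
import math

# Hypothetical per-trial odds ratios and standard errors of ln(OR);
# these numbers are illustrative only, not taken from the review.
trials = [(1.40, 0.30), (1.80, 0.45), (1.25, 0.25)]

log_ors = [math.log(or_) for or_, _ in trials]
weights = [1.0 / se**2 for _, se in trials]      # inverse-variance weights

pooled_log_or = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

pooled_or = math.exp(pooled_log_or)
ci = (math.exp(pooled_log_or - 1.96 * pooled_se),
      math.exp(pooled_log_or + 1.96 * pooled_se))
print(f"Pooled OR = {pooled_or:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```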

The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

One does not need to be a prophet to predict that the world of homeopathy will hail this article as the ultimate proof of homeopathy’s efficacy beyond placebo. Already the ‘British Homeopathic Association’ has issued the following press release:

Clinical evidence for homeopathy published

Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.

The paper, published in the peer-reviewed journal Systematic Reviews, reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.

The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.

The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.

“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”

Surprised? I was stunned, and so I studied the article in great detail (luckily the full-text version is available online). I then entered into an email exchange with the first author, whom I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to better understand the review’s methodology; but it also left me very much underwhelmed by the reliability of the authors’ conclusion.

Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.

SO PLEASE TAKE SOME TIME TO STUDY THIS PAPER AND TELL US WHAT YOU THINK.

Many proponents of alternative medicine seem somewhat suspicious of research; they have obviously understood that it might not produce the positive result they had hoped for; after all, good research tests hypotheses and does not necessarily confirm beliefs. At the same time, they are often tempted to conduct research: this is perceived as being good for the image and, provided the findings are positive, also good for business.

Therefore they seem to be tirelessly looking for a study design that cannot ‘fail’, i.e. one that avoids the risk of negative results but looks respectable enough to be accepted by ‘the establishment’. For these enthusiasts, I have good news: here is the study design that cannot fail.

It is perhaps best outlined with a concrete example; for reasons that will become clear very shortly, I have chosen reflexology as a treatment of diabetic neuropathy, but you can, of course, replace both the treatment and the condition to suit your needs. Here is the outline:

  • recruit a group of patients suffering from diabetic neuropathy – say 58, that will do nicely,
  • randomly allocate them to two groups,
  • the experimental group receives regular treatments by a motivated reflexologist,
  • the controls get no such therapy,
  • both groups also receive conventional treatments for their neuropathy,
  • the follow-up is 6 months,
  • the following outcome measures are used: pain reduction, glycemic control, nerve conductivity, and thermal and vibration sensitivities,
  • the results show that the reflexology group experiences greater improvements in all outcome measures than the control group,
  • your conclusion: This study exhibited the efficient utility of reflexology therapy integrated with conventional medicines in managing diabetic neuropathy.

Mission accomplished!

This method is fool-proof; trust me, I have seen it tested often enough, and it has never generated disappointment. It cannot fail because it follows the notorious A+B versus B design (I know, I have mentioned this several times before on this blog, but it is really important, I think): both patient groups receive the essential mainstream treatment, and the experimental group receives a useless but pleasant alternative treatment in addition. The alternative treatment involves touch, time, compassion, empathy, expectations, etc. All of these elements will inevitably have positive effects, and they can even increase the patients’ compliance with the conventional treatment that is being applied in parallel. Thus all outcome measures will be better in the experimental group than in the control group.
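To illustrate the point, here is a small simulation of the A+B versus B scenario. It assumes the add-on therapy has zero specific effect and merely contributes a modest non-specific (attention/placebo) benefit on top of the improvement both groups get from conventional care; the effect sizes and the sample size of 58 are invented for illustration only:

```python
import random
import statistics

random.seed(1)

def simulate_trial(n_per_group=29, nonspecific_benefit=0.5):
    """A+B vs B: both groups improve under conventional care (mean 1.0 SD);
    the add-on group gets an extra non-specific benefit, even though the
    add-on therapy itself has no specific effect at all."""
    control = [random.gauss(1.0, 1.0) for _ in range(n_per_group)]
    addon = [random.gauss(1.0 + nonspecific_benefit, 1.0) for _ in range(n_per_group)]
    return statistics.mean(addon) - statistics.mean(control)

# Repeat many trials: the add-on group "wins" the vast majority of the time
diffs = [simulate_trial() for _ in range(1000)]
favouring_addon = sum(d > 0 for d in diffs) / len(diffs)
print(f"Mean extra improvement in the add-on group: {statistics.mean(diffs):.2f} SD")
print(f"Proportion of simulated trials favouring the add-on: {favouring_addon:.0%}")
```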

The overall effect is pure magic: even an utterly ineffective treatment will appear to be effective – the perfect method for producing false-positive results.

And now we hopefully all understand why this study design is so very popular in alternative medicine. It looks solid – after all, it’s an RCT!!! – and it thus convinces even mildly critical experts that the useless treatment is worthwhile. Consequently the useless treatment will become accepted as ‘evidence-based’, will be used more widely and will perhaps even be reimbursed from the public purse. Business will be thriving!

And why did I employ reflexology for diabetic neuropathy? Is that example not far-fetched? Not a bit! I used it because it describes precisely a study that has just been published. Of course, I could also have taken the chiropractic trial from my last post, or dozens of other studies following the A+B versus B design – it is so brilliantly suited for misleading us all.
