Edzard Ernst MD, PhD, MAE, FMedSci, FRSB, FRCP, FRCPEd.

On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because

  1. the assumptions which underpin homeopathy fly in the face of science,
  2. the clinical evidence fails to show that it works beyond a placebo effect.

But was I correct?

A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of an individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that of placebos.

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.
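A side note on the ‘arithmetic transformation’: the summary does not spell out how continuous outcomes were converted to odds ratios. One widely used conversion (cf. Chinn, 2000; not necessarily the method these authors applied) rescales a standardised mean difference by π/√3, the standard deviation of the logistic distribution. A minimal sketch, assuming that method:

```python
import math

def smd_to_or(d):
    """Convert a standardised mean difference (Cohen's d) to an odds
    ratio via the logistic-distribution approximation:
    ln(OR) = d * pi / sqrt(3)."""
    return math.exp(d * math.pi / math.sqrt(3))

# Illustrative only: a small continuous-outcome effect of d = 0.2
print(round(smd_to_or(0.2), 2))  # -> 1.44, i.e. an OR of roughly 1.4
```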

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
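For readers who want to check these numbers: assuming the confidence intervals were constructed symmetrically on the log-odds scale (the usual approach; consult the paper for the authors’ exact method), the standard error and z statistic can be recovered from the reported interval. A minimal sketch:

```python
import math

def z_from_or_ci(or_point, ci_low, ci_high):
    """Back out the approximate z statistic for a reported odds ratio,
    assuming a 95% CI of the form exp(log-OR +/- 1.96 * SE)."""
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    return math.log(or_point) / se

# Pooled estimate over the 22 meta-analysed trials:
print(z_from_or_ci(1.53, 1.22, 1.91))  # z ~ 3.7
# Sensitivity analysis over the 3 'reliable evidence' trials:
print(z_from_or_ci(1.98, 1.16, 3.38))  # z ~ 2.5
```

Both z values clear the conventional 1.96 threshold, which is why the results are reported as statistically significant; this says nothing, of course, about the risk-of-bias problems discussed below.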

The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

One does not need to be a prophet to predict that the world of homeopathy will declare this article as the ultimate proof of homeopathy’s efficacy beyond placebo. Already the ‘British Homeopathic Association’ has issued the following press release:

Clinical evidence for homeopathy published

Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.

The paper, published in the peer-reviewed journal Systematic Reviews,1 reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.

The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.

The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.

“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”

Surprised? I was stunned and thus studied the article in great detail (luckily, the full-text version is available online). Then I entered into an email exchange with the first author, whom I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to understand the review’s methodology better; but it also left me very much underwhelmed by the reliability of the authors’ conclusion.

Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.

SO PLEASE TAKE SOME TIME TO STUDY THIS PAPER AND TELL US WHAT YOU THINK.

58 Responses to “Proof of concept that homeopathic medicines have clinical treatment effects.” A challenge for experts to comment

  • My impression:

    Thirty-two RCTs were studied, and no fewer than twenty of them appeared to be complete rubbish even by homeopathic standards. Only three trials out of 32 made a little bit of sense.
    But luckily that did not stop the authors from carrying out a meta-analysis in which they included 22 trials, of which only 3 were even slightly sensible.

  • I’ve not worked my way through all of it yet, but is it just me or is the criterion for inclusion in the meta-analysis somewhat unclear? Table 3(a) lists the included trials and Table 3(b) the excluded ones. It says:

    Table 3(b): Trials that were deficient in domain V (selective outcome reporting) included the ten whose data were not extractable for meta-analysis and which were thus ‘C’-rated by default; seven of these ten trials were already ‘C’-rated due to deficiency in at least one other domain of assessment.

    Is that simply saying that they couldn’t extract data from these ten, therefore they were not included? But they included everything else, regardless of how poor the trial was?

    But the striking feature is that they were not able to rate any of the trials as having low risk of bias overall.

    • I THINK YOU ARE ON TO SOMETHING HERE.

    • “No trial was ‘A’-rated (low risk of bias overall)—i.e. none fulfilled the criteria for
      all seven domains of assessment”

      By their own criteria, not a single study was found that had a low risk of bias. The review should have ended there.

      • by their own criteria, none of the trials was completely free of bias – very few studies ever are; and that applies to all types of medicine.

  • Oh you silly people. There is no need to do any analysis here. The paper tells us itself that it is ‘rigorous’ and ‘focused’. So that’s that then. Like the Democratic People’s Republic of Korea tells us it is democratic.

  • In computing we call this “GIGO”: Garbage In, Garbage Out.

    • I believe that this is not entirely true; some of these studies were clearly not ‘garbage’.

      • But it is entirely true. Any clinical trial of homeopathy is “garbage” (“cargo cult science”) for reasons given here. You say that “the assumptions which underpin homeopathy fly in the face of science” but then do, or give credence to, [analyses of the results of] ‘experiments’ which would only be appropriate if homeopathy didn’t fly in the face of science!

  • I just read the provisional PDF (about 27 pages).
    I think the author’s conclusion is that the evidence from homeopathic research is full of uncertainty due to bias,
    because as the risk of bias increased, the odds ratio increased too,
    so we need more reliable RCTs.

    But I think my English may be terrible, because Dr Robert Mathie read that paper and interprets it in the opposite way to me…. He is British, so I think he may read English research more accurately than I do.

  • Could any effect from “individualized” homeopathy be due to the extended counseling session that comes with it, which has already been shown to have a positive effect?

    • no, because the placebo patients received the same counselling

      • Same counselling, but with whom? Other studies – well, one at least, if I can find it – have shown that the placebo effect also varies with the counsellor and how they present themselves to the patient.

      • Strictly speaking, these are therefore not tests of ‘individualised homeopathy’. If you consider the therapeutic system of homeopathy to be a complex intervention comprising, amongst other components, the homeopathic medicine, the homeopathic consultation and the application of homeopathic principles, then what is being tested is the specific effect of just one component, because both groups are getting the other components.

        • …an often voiced fallacy!
          “the homeopathic medicine, the homeopathic consultation and application of homeopathic principles” ???
          the same applies to any other medicine.
          THE CONVENTIONAL MEDICINE, THE CONVENTIONAL CONSULTATION AND APPLICATION OF CONVENTIONAL PRINCIPLES… and in surgery, physical medicine, we even have many more components.
          are you claiming that no form of health care should be submitted to scientific (‘reductionist’) tests?

          • No, that is not my claim at all. However MRC guidelines for complex interventions recommend testing the effectiveness of the whole intervention first, and then the efficacy of individual components only after effectiveness is established, the opposite path to first testing the specific effects of a medicine before seeing whether these translate into patient benefits.

            What I am suggesting is that these so-called ‘efficacy’ studies cannot in fact test efficacy (i.e. the variable tested in optimum conditions), because for any complex intervention, optimum conditions involve the optimum interaction of the constituent variables. What is being tested here is the efficacy of the homeopathic medicine, but under compromised conditions, since the other variables’ contributions are not included.

            One expects specific effects from most pharmaceutical medicines, therefore it makes sense to test for those and compare with placebo. However, yes, I would argue that we need to be doing far more pragmatic, comparative trials to see whether the efficacy of medicines translates into effectiveness in clinical practice, and what helps patients most, which is after all what matters.

            The issue with homeopathy as I see it is that trials are just not reflecting clinical practice: population studies tend to show improvements, RCTs do not. With pharmaceutical medicine the opposite tends to occur: medicines which show efficacy in tightly controlled conditions are sometimes less effective in clinical practice, and over the long term.
            So what’s the issue with RCTs of the specific effects of homeopathic medicines? My suggestion, based on the MRC recommendations, is that a) they’re not appropriate at this juncture, and b) they’re not allowing optimum action of the tested variable. In short, the wrong kind of trials are being performed to answer the question ‘does homeopathy work?’ The question to which these trials are the answer is: do homeopathic medicines, prescribed under compromised conditions, work?

          • having done ‘reductionistic’ RCTs myself, I can assure you that one can conduct them such that the boundary conditions are optimal and not in any way compromised.

  • Here’s what Dr Mathie should have said:
    “We planned to do a meta-analysis of studies investigating the benefit of individualised homeopathic treatment. Unfortunately, all such clinical trials that were done so far were crappy in one way or another and we have not found a single one worth including in our meta-analysis. They’re all methodologically biased, you see. We should really have halted our little project there and spent some more time with our family or gone fishing or something. Instead we thought: in for a penny, in for a pound. So we kinda randomly defined “reliable evidence” in a way that suited our agenda (‘if the uncertainty in its risk of bias was for one of domains IV, V or VI only’), but at the same time included lots of misleading reference to Cochrane handbooks and stuff that these skeptics around this Edzard Ernst usually like. These references are only straw-men of course – our used definition of “reliable evidence” isn’t actually found there, is it? But anyway: so we generated a publication that will give people the impression of positive evidence for homeopathy. A lot of people will see these headlines and hardly anyone will realise that the only actual finding in our “research” is that previous trials have been crap.”

  • I think the 5th paragraph of the discussion puts the conclusions into context. The main conclusion is based on three trials with reliable evidence, and as I understand it, this was borderline reliability. “Two of the three trials used medicines that were diluted beyond the Avogadro limit.” The authors admit that the pooled effect estimate for these three trials may therefore be a false positive. They go on to say that “One of these trials displayed evidence of vested interest”.

    It is a pity that most people, especially those with a leaning towards homeopathy, will not read beyond the abstract and will therefore miss the authors’ own reservations. (The arithmetic behind the ‘Avogadro limit’ point is sketched after this comment.)
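For anyone unsure what ‘diluted beyond the Avogadro limit’ means in practice, the back-of-envelope arithmetic is simple; the starting quantity below is an illustrative assumption, not a figure from the trials:

```python
# How many molecules of the original substance survive a 30C remedy
# (30 serial 1:100 dilutions, i.e. a factor of 10**-60)?
AVOGADRO = 6.022e23              # molecules per mole
moles_of_mother_tincture = 1.0   # generous illustrative starting amount
dilution_30c = 10.0 ** -60

print(moles_of_mother_tincture * AVOGADRO * dilution_30c)  # ~6e-37
# i.e. essentially zero chance that even one molecule remains.
```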

  • ‘Judgment in seven assessment domains enabled a trial’s risk of bias to be designated’

    Is this Mathie’s subjective domain assessment?

    Anyway, as is admitted, no trials were clearly without risk of bias. So the main conclusion must be that further analysis is precluded. The secondary conclusion then follows: homeopaths need to learn about clinical trials and scientific method, not that they should do more.

  • The critical thing here is that the finding can only stand if *all* such trials were published. And we can be pretty confident that they weren’t.

    You’ll be aware of Ioannidis’ work, which predicts that publication bias and other confounders will result in a small net positive evidence base for an inert treatment, and the chances of a positive result being false are dependent on prior plausibility (which in this case is as close to zero as makes no odds).

    The result is sufficiently weak that it is still fully consistent with the null hypothesis; and in the absence of any credible theoretical framework (including any persuasive evidence that like actually does cure like, since Hahnemann’s own basis for this was refuted in the 19th century), there is no reason yet to tear up the physics books. (A toy calculation of Ioannidis’ point follows this comment.)
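Ioannidis’ point can be made quantitative with his positive predictive value formula (Ioannidis, 2005). Below is a toy calculation; the prior probabilities are illustrative assumptions, not figures from the review, and Ioannidis’ additional bias term is omitted:

```python
def ppv(prior_prob, power=0.8, alpha=0.05):
    """Post-study probability that a nominally positive finding is true
    (Ioannidis 2005, bias term omitted):
    PPV = (1 - beta) * R / ((1 - beta) * R + alpha),
    where R is the pre-study odds that the tested effect is real."""
    R = prior_prob / (1.0 - prior_prob)
    return (power * R) / (power * R + alpha)

print(ppv(0.5))    # plausible conventional drug: ~0.94
print(ppv(0.001))  # highly implausible treatment: ~0.016
```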

  • Might have lost my way in this paper (in telling exactly which studies got picked), but I fail to see whether the individual studies had statistically significant odds ratios or not. Given that only 3 of the 32 studies had a statistically significant result, I would be surprised if those were exactly the ones used.

    The other points (the high number of useless studies) have already been brought up.

  • None of the trials they reviewed was rated as category A for risk of bias. More than half of the included studies had a significant risk of bias. Also, given that homeopathy (individualised or not) has not established efficacy for any one condition, combining the studies on 12 different conditions is not warranted. The rationale for this study is fundamentally wrong, as it uses terrible-quality evidence and meta-analyses it in an invalid way.

  • Perhaps the author will next do a study comparing the results of Tooth Fairy visits in different countries? The point being that any study at all of something that has no prior scientific plausibility is rather a waste of time. Studying homeopathy (or prayer, for example) is simply an unfortunate side effect of living in a free society that promotes religious freedom. Bah, humbug!

  • The phrase “proof of concept” is, surely, associated with engineering and not the biological sciences. A working model of a new type of hang glider would be an example. But in the natural sciences we are always extremely cautious about using the p word. It would be an unusual scientist who would stand up and say there was “proof” of anything. Statistical analysis does not “prove” anything; it just gives us a steer on how likely it is that the results are pure fluke. So, without scrutinising the paper, the use of this term would immediately cause me to seriously doubt its credibility.

    • I think you are right.
      the term ‘proof of concept’ STUDY is sometimes used in medicine – but the notion of a ‘proof of concept’ meta-analysis is odd, to say the least.
      but none of this would be my major criticism [which I will post in a day or two].

    • It’s a term that’s widely used in drug development, for phase IIa studies. Those who use it accept that it’s simply an indication that the drug has some clinically significant effect. It’s not in my view an appropriate term to use for a modality that has been struggling for scientific credibility for 200 years.

  • The ‘British Homeopathic Association’ has confirmed two paragraphs of my guest post:

    In medicine, [the] testing process is performed via systematic reviews of multiple, independent, double-blind, placebo-controlled trials — every trial that is insufficiently powered to add meaningfully to the result is rightfully excluded from the aggregation.

    Alt-med relies on a diametrically opposed testing process. It performs a plethora of only underpowered tests; presents those that just happen to show a positive result (just as a random die could’ve produced); and sweeps under the carpet the overwhelming number of tests that produced a negative result. It publishes only the ‘successes’, not its failures. By sweeping its failures under the carpet it feels justified in making the very bold claim: Our plethora of collected evidence shows clearly that it mostly ‘works’ and, when it doesn’t, it causes no harm.

    • YES, IT’S VERY TEMPTING TO THINK THAT!
      but if you were correct, their test for publication bias would have confirmed your suspicion. so, there is no evidence for it.

      • Whenever a p-value threshold is used to determine which results get published, it produces a statistically significant publication bias, the effect of which ripples through to meta-analyses and systematic reviews. There are endless scholarly articles addressing this issue; the AllTrials project addresses this and other issues. (A toy simulation of this selection effect is sketched after this comment.)

        The effect of this bias is likely to be small (non-dominant) when the prior plausibility and prior probability are both high.

        The effect of this bias will be large (likely dominant) when the prior plausibility and prior probability are both close to zero, as is the case with homeopathy. The study to which you linked suggested that the statistically significant result it found reflects a causal effect of homeopathy; it failed to stress that there are more probable causal explanations; it also failed to stress that statistical significance DOES NOT AUTOMATICALLY IMPLY clinical significance for the treatment(s).

        The abstract states: “We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.” Of course it showed a statistically significant result because that is what this systematic review was intended to produce. Had the review been intended to properly test whether or not the hypothesis was *in*distinguishable from that of placebos it would have produced results geared towards finding truth that is useful to the public rather than useful to the purveyors of homeopathy.

        As far as I’m aware the ‘British Homeopathic Association’ is not well renowned for previously issuing unbiased publications along the following lines:

        Clinical evidence for homeopathy published.

        Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy have no specific effects whatsoever for any known illness.
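The selection mechanism described in the comment above is easy to demonstrate. Below is a deliberately crude, hypothetical simulation (not drawn from the review or from any real trial data) of many small trials of a completely inert treatment, in which only nominally significant positive results are ‘published’:

```python
import random
import statistics

def published_effects(n_trials=1000, n_per_arm=25, seed=1):
    """Simulate placebo-vs-placebo trials of a null treatment and keep
    only those whose positive result reaches two-sided p < 0.05."""
    random.seed(seed)
    kept = []
    for _ in range(n_trials):
        treat = [random.gauss(0, 1) for _ in range(n_per_arm)]
        ctrl = [random.gauss(0, 1) for _ in range(n_per_arm)]
        diff = statistics.mean(treat) - statistics.mean(ctrl)
        se = (statistics.variance(treat) / n_per_arm +
              statistics.variance(ctrl) / n_per_arm) ** 0.5
        if diff / se > 1.96:  # positive and nominally 'significant'
            kept.append(diff)
    return kept

effects = published_effects()
print(len(effects), statistics.mean(effects))
# Only a few dozen trials survive the filter, but their average effect
# is substantially positive, even though the true effect is exactly zero.
```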

  • “The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.” This seems to be a very fanciful way of saying: “Nothing to see here, yet.”

  • So these trials compare one placebo against another. Any difference between the two is likely to be just noise, then.

  • In case anyone isn’t aware, this is a follow-up to a previous paper by the same authors:

    Randomised controlled trials of homeopathy in humans: characterising the research journal literature for systematic review

    It’s well worth reading in full as well, but here’s the abstract:

    BACKGROUND:
    A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.

    METHODS:
    The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

    RESULTS:
    Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

    CONCLUSIONS:
    Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

  • It may take me some time to complete my review of this paper. But at first glance I would question the method of rating for risk of bias. See, one of the best studies of homeopathy, Walach’s headache study from 1997 – which found placebo somewhat superior to homeopathic drugs in this indication – received a C2.2 rating and was not included in the meta-analysis.

    I will have to follow up and find out why this is the case….

  • I find this paper difficult to comment upon. Usually, such analyses assess individual treatments for specific conditions, with similar objective outcome measures. However, in this case the analysis is of an individualised treatment approach for a variety of conditions. Is it possible to reliably carry out a meta-analysis of studies with such varied conditions and outcome measures? Otherwise, surely it is analogous to carrying out a meta-analysis of the use of “drugs” to treat “diseases”?

    However, the take-home message should be that any future trials must be correctly designed. As the paper had a contribution from the British Homeopathic Association, perhaps they could pass this message on to their members, or even establish a central trials registry by which the design of all future trials in this area would be rigorously scrutinised prior to adoption. This could be carried out in collaboration with the UK CRN.

    • homeopaths are fond of such ‘global reviews’, as they call them. Linde’s review and Shang’s [both published in the Lancet] were ‘global’.

  • I have reason to believe that this review and meta-analysis is biased in favour of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analysis. Jacobs’ results were in favour of homeopathy, Walach’s were not.

    For the domains where the rating of Walach’s study was lower than that of the Jacobs study, please find below citations from the original studies, or my short summaries, for the point in question.

    Domain I: Sequence generation:
    Walach:
    “The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
    Rating: UNCLEAR (Medium risk of bias)

    Jacobs:
    “For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
    Rating: YES (Low risk of bias)

    Domain IIIb: Blinding of outcome assessor
    Walach:
    “The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study. ”
    Rating: UNCLEAR (Medium risk of bias)

    Jacobs:
    “All statistical analyses were done before breaking the randomisation code, using the program …”
    Rating: YES (Low risk of bias)

    Domain V: Selective outcome reporting

    Walach:
    Study protocol was published in 1991, prior to enrollment of participants; all primary outcome variables were reported with respect to all participants and the endpoints.
    Rating: NO (high risk of bias)

    Jacobs:
    No prior publication of the protocol, but a pilot study exists. However, this was published in 1993, only after the trial was performed in 1991. Primary outcome defined (duration of diarrhea) and reported, but the table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
    Rating: YES (low risk of bias)

    Domain VI: Other sources of bias:

    Walach:
    Rating: NO (high risk of bias), no details given

    Jacobs:
    Imbalance of group properties (size, weight and age of the children) that might have had some impact on the course of the disease; high impact of a parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment
    Rating: YES (low risk of bias), no details given

    In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias on the part of the review’s authors.

    • very well done!
      I did a similar exercise with our study in this lot [white et al], and there the surprises were even greater – I will report details about this later.

  • I have concerns with the choices for ‘main outcome’. According to the protocol, the ‘main outcome’ taken from each study was not necessarily the primary outcome. Your [white et al] primary outcome was the active quality of living subscale from the Childhood Asthma Questionnaire, but this review has used the symptom severity subscale as the main outcome and then excluded the study from the meta-analyses, presumably because they couldn’t extract enough data from the paper to include it. The authors report choosing the main outcome based upon a hierarchical ranking order derived from the WHO-ICF and justify this by saying that, “The WHO approach is an internationally accepted method to ensure that a selected outcome is the most vital to the functioning and health of the patient: it thus ensured our consistent selection of the most important and objective outcome per trial.” As far as I am aware, the WHO-ICF was designed as a system to produce comparative international health data and to assist in health planning, and is very complicated. It was not designed to provide a checklist to determine the most important health outcomes for inclusion in a systematic review, and I haven’t seen it used like this (although I haven’t conducted anything more than a casual search).

    Anyway, deriving ‘main outcomes’ may be a reasonable thing to do, but I think in this paper their derivation is poorly explained and there is a real risk of introducing author bias. What do you think?

  • MY OWN CRITICISM IS NOW PUBLISHED AS A SUBSEQUENT POST ON THIS BLOG

  • No matter the outcome or methodology, what is the point of studying the results of the Tooth Fairy vs. Nothing, when the Tooth Fairy is in fact, also Nothing? It’s clearly Mum and Dad, folks, so save some time and money and stop funding the study of fantasy.

    Nevertheless, I appreciate your effort to expose the shortcomings of these “studies”.
