MD, PhD, FMedSci, FSB, FRCP, FRCPEd

Dodgy science abounds in alternative medicine; this is perhaps particularly true for homeopathy. A brand-new trial seems to confirm this view.

The aim of this study was to test the hypothesis that homeopathy (H) enhances the effects of scaling and root planing (SRP) in patients with chronic periodontitis (CP).

The researchers, dentists from Brazil, randomised 50 patients with CP to one of two treatment groups: SRP (C-G) or SRP + H (H-G). Assessments were made at baseline and after 3 and 12 months of treatment. The local and systemic responses to the treatments were evaluated after one year of follow-up. The results showed that both groups displayed significant improvements; however, the H-G group performed significantly better than the C-G group.

The authors concluded that homeopathic medicines, as an adjunctive to SRP, can provide significant local and systemic improvements for CP patients.

Really? I am afraid I disagree!

Homeopathic medicines might have nothing whatsoever to do with this result. Much more likely is the possibility that the findings are caused by other factors such as:

  • placebo-effects,
  • patients’ expectations,
  • improved compliance with other health-related measures,
  • the researchers’ expectations,
  • the extra attention given to the patients in the H-G group,
  • disappointment of the C-G patients for not receiving the additional care,
  • a mixture of all or some of the above.

I should stress that it would not have been difficult to plan the study in such a way that these factors were eliminated as sources of bias or confounding. But this study was conducted according to the A+B versus B design which we have discussed repeatedly on this blog. In such trials, A is the experimental treatment (homeopathy) and B is the standard care (scaling and root planing). Unless A is an overtly harmful therapy, it is simply not conceivable that A+B does not generate better results than B alone. The simplest way to comprehend this argument is to imagine A and B are two different amounts of money: it is impossible that A+B is not more than B!
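The point can be made concrete with a toy simulation (my own sketch, not data from the trial; the effect sizes and outcome measure are illustrative assumptions). Give both groups the same real SRP effect, give the adjunct group a small extra nonspecific boost (placebo, attention, expectation), and the A+B arm "wins" even though A itself is inert:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_trial(n_per_group=100, srp_effect=2.0, nonspecific=0.5, noise=0.5):
    """Toy 'A+B versus B' trial.

    Outcome: improvement in some clinical score (arbitrary units).
    Both groups receive B (SRP). The H-G group also receives A, which is
    modelled as inert except for a small nonspecific boost from placebo
    responses, expectations, and extra attention. All numbers here are
    assumptions for illustration only.
    """
    control = [srp_effect + random.gauss(0, noise)
               for _ in range(n_per_group)]
    adjunct = [srp_effect + nonspecific + random.gauss(0, noise)
               for _ in range(n_per_group)]
    return sum(control) / n_per_group, sum(adjunct) / n_per_group

c_mean, h_mean = simulate_trial()
print(f"B alone (C-G):  mean improvement {c_mean:.2f}")
print(f"A+B    (H-G):  mean improvement {h_mean:.2f}")
```

Because the nonspecific boost is baked into the A+B arm, the design is guaranteed to flatter A; no amount of statistical significance in such a comparison tells us anything about whether A has a specific effect.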

It is unclear to me what relevant research question such a study design actually does answer (if anyone knows, please tell me). It seems obvious, however, that it cannot test the hypothesis that homeopathy (H) enhances the effects of scaling and root planing (SRP). This does not necessarily mean that the design is useless. But at the very minimum, one would need an adequate research question (one that matches this design) and adequate conclusions based on the findings.

The fact that the conclusions drawn from a dodgy trial are inadequate and misleading could be seen as merely a mild irritation. The facts that, in homeopathy, such poor science and misleading conclusions emerge all too regularly, and that journals continue to publish such rubbish are not just mildly irritating; they are annoying and worrying – annoying because such pseudo-science constitutes an unethical waste of scarce resources; worrying because it almost inevitably leads to wrong decisions in health care.

8 Responses to Another dodgy study of homeopathy

  • Prof Ernst,
    I’m confused by your statement: “Unless A is an overtly harmful therapy, it is simply not conceivable that A+B does not generate better results than B alone.”
    If A is a placebo and the study is double blinded does this statement still hold?

  • Supposing you are correct that this is an A+B study where A is a placebo effect. This means that the placebo effect (A) can, according to the study, facilitate significant gains and reductions in HDL, LDL and total cholesterol, triglycerides, glucose and uric acid. These are clinical parameters that you would not expect to be changed in patients who have had a few extra nice chats with a homeopath and a placebo sugar pill.
    I would suggest, then, more research on this placebo effect, as quite obviously the full potential of the standard B treatment was not being utilised.

    • I would suggest, then, more research on this placebo effect, as quite obviously the full potential of the standard B treatment was not being utilised.

      Firstly, the placebo effect is well understood. That is why studies need to be designed in such a way as to eliminate its effects. The B treatment in this case is the real treatment; A the placebo adjunct.

      Secondly, deliberately prescribing inert substances is a form of lying to patients and is considered unethical these days… Except of course, by charlatans, who do it all the time. If a patient wants to take some form of alt-med placebo in addition to the real treatment, the doctor’s response tends to be “Take it if it makes you feel better”, providing, of course, that the placebo treatment is not inherently risky: such as IV injections, with the risk of infection, or absorption of a chemical that may interfere with the treatment.

      • Teapot
        This trial shows evidence that a placebo treatment has produced significant changes in clinical parameters – not just a feel-good factor. My point is how best to maximise treatment B. Your approach is just to dismiss a significant result as the well-known placebo effect. I would suggest that we need to find out more about this placebo effect and how to utilise it without lying to a patient.

  • Some call it the “white coat” effect. If you receive your prescription from a caring doctor you will get a placebo effect; no sugar pill is needed for that. Studies have shown this – it is why “remote healing” and similar practices can create a placebo effect without even any visible treatment. The problem, obviously, is that not all doctors have the time to be “caring”, or they are simply bored or exhausted (being human).
    From memory, I recall that the placebo effect might come from dopamine release, and some people may be more or less sensitised to it, just as some people are more or less prone to depression. Just type placebo – dopamine into Medline and you will find papers about it.
    (http://archpsyc.jamanetwork.com/article.aspx?articleid=210854)
    (http://www.ncbi.nlm.nih.gov/pubmed/17017561)
    (http://www.ncbi.nlm.nih.gov/pubmed/12449082)

    “So that is repeatable and predictable?”

    It is, or we wouldn’t design studies with complicated blinding methods.
