The HRI is an innovative international charity created to address the need for high quality scientific research in homeopathy… HRI is dedicated to promoting cutting-edge research in homeopathy, using the most rigorous methods available, and communicating the results of such work beyond the usual academic circles… HRI aims to bring academically reliable information to a wide international audience, in an easy to understand form. This audience includes the general public, scientists, healthcare providers, healthcare policy makers, government and the media.

This sounds absolutely brilliant!

I should be a member of the HRI!

For years, I have pursued similar aims!

Hold on, perhaps not?

This article makes me wonder:


… By the end of 2014, 189 randomised controlled trials of homeopathy on 100 different medical conditions had been published in peer-reviewed journals. Of these, 104 papers were placebo-controlled and were eligible for detailed review:
41% were positive (43 trials) – finding that homeopathy was effective
5% were negative (5 trials) – finding that homeopathy was ineffective
54% were inconclusive (56 trials)

How does this compare with evidence for conventional medicine?

An analysis of 1016 systematic reviews of RCTs of conventional medicine had strikingly similar findings:
44% were positive – the treatments were likely to be beneficial
7% were negative – the treatments were likely to be harmful
49% were inconclusive – the evidence did not support either benefit or harm.


The implication here is that the evidence base for homeopathy is strikingly similar to that of real medicine.

Nice try! But sadly, it has nothing to do with ‘reliable information’!

In fact, it is grossly (and I suspect deliberately) misleading.

Regular readers of this blog will have spotted the reason, because we discussed (part of) it before. Let me remind you:


A clinical trial is a research tool for testing hypotheses; strictly speaking, it tests the ‘null-hypothesis’: “the experimental treatment generates the same outcomes as the treatment of the control group”. If the trial shows no difference between the outcomes of the two groups, the null-hypothesis stands. In this case, we commonly speak of a negative result. If the experimental treatment was better than the control treatment, the null-hypothesis is rejected, and we commonly speak of a positive result. In other words, clinical trials can only generate positive or negative results, because the null-hypothesis must either stand or be rejected – there are no grey tones between the black of a negative and the white of a positive study.
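The binary nature of a trial’s verdict can be sketched with a toy significance test. (The numbers and the 5% threshold below are hypothetical illustrations, not data from any actual homeopathy trial.)

```python
import math

def trial_verdict(success_t, n_t, success_c, n_c):
    """Two-proportion z-test: returns 'positive' or 'negative' -- nothing in between.

    'positive' = null hypothesis rejected (treatment outcomes differed from control)
    'negative' = null hypothesis stands (no statistically detectable difference)
    """
    p_t, p_c = success_t / n_t, success_c / n_c
    p_pool = (success_t + success_c) / (n_t + n_c)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # two-sided test at the 5% level: |z| must exceed 1.96
    return "positive" if abs(z) > 1.96 else "negative"

# Hypothetical numbers: patients improved on treatment vs on placebo
print(trial_verdict(60, 100, 50, 100))  # -> 'negative': difference too small
print(trial_verdict(75, 100, 50, 100))  # -> 'positive': null hypothesis rejected
```

Note that the function has only two possible return values; a ‘grey’ third category has to be invented after the fact.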

For enthusiasts of alternative medicine, this can create a dilemma, particularly if there are lots of published studies with negative results. In this case, the totality of the available trial evidence is negative which means the treatment in question cannot be characterised as effective. It goes without saying that such an overall conclusion rubs the proponents of that therapy the wrong way. Consequently, they might look for ways to avoid this scenario.

One fairly obvious way of achieving this aim is to simply re-categorise the results. What if we invented a new category? What if we called some of the negative studies by a different name? What about INCONCLUSIVE?

That would be brilliant, wouldn’t it? We might end up with a simple statistic where the majority of the evidence is, after all, positive. And this, of course, would give the impression that the ineffective treatment in question is effective!

How exactly do we do this? We continue to call positive studies POSITIVE; we then call studies where the experimental treatment generated worse results than the control treatment (usually a placebo) NEGATIVE; and finally we call those studies where the experimental treatment created outcomes which were not different from placebo INCONCLUSIVE.
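The arithmetic of this trick can be made explicit using the HRI’s own trial counts quoted above (a sketch; the labels ‘honest’ and ‘relabelled’ are mine):

```python
# The HRI's own counts: 43 trials better than placebo, 5 worse than placebo,
# 56 no different from placebo -- 104 placebo-controlled trials in total.
better, worse, no_difference = 43, 5, 56
total = better + worse + no_difference  # 104

# Honest reading: 'no different from placebo' means the null hypothesis
# stood, so those trials count as negative.
honest = {"positive": better, "negative": worse + no_difference}

# The relabelling trick: call the 'no difference' trials INCONCLUSIVE.
relabelled = {"positive": better, "negative": worse, "inconclusive": no_difference}

print(f"honest:     {honest['negative'] / total:.0%} negative")
print(f"relabelled: {relabelled['positive'] / total:.0%} positive, "
      f"only {relabelled['negative'] / total:.0%} negative")
```

The same 104 trials thus yield either “59% negative” or “41% positive, only 5% negative”, depending purely on the labelling.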

In the realm of alternative medicine, this ‘non-conclusive result’ method has recently become incredibly popular. Take homeopathy, for instance. The Faculty of Homeopathy proudly claim the following about clinical trials of homeopathy: Up to the end of 2011, there have been 164 peer-reviewed papers reporting randomised controlled trials (RCTs) in homeopathy. This represents research in 89 different medical conditions. Of those 164 RCT papers, 71 (43%) were positive, 9 (6%) negative and 80 (49%) non-conclusive.

This misleading nonsense was, of course, warmly received by homeopaths. The British Homeopathic Association, like many other organisations and individuals with an axe to grind, lapped up the message and promptly repeated it: The body of evidence that exists shows that much more investigation is required – 43% of all the randomised controlled trials carried out have been positive, 6% negative and 49% inconclusive.

Let’s be clear what has happened here: the true percentage figures seem to show that 43% of studies (mostly of poor quality) suggest a positive result for homeopathy, while 57% of them (on average the ones of better quality) were negative. In other words, the majority of this evidence is negative. If we conducted a proper systematic review of this body of evidence, we would, of course, have to account for the quality of each study, and in this case we would have to conclude that homeopathy is not supported by sound evidence of effectiveness.

The little trick of applying the ‘INCONCLUSIVE’ method has thus turned this overall result upside down: black has become white! No wonder that it is so popular with proponents of all sorts of bogus treatments.


But one trick is not enough for the HRI! To thoroughly misinform the public, they have a second one up their sleeve.

And that is ‘comparing apples with pears’ – RCTs with systematic reviews, in their case.

In contrast to RCTs, systematic reviews can be (and often are) INCONCLUSIVE. As they evaluate the totality of all RCTs on a given subject, it is possible that some RCTs are positive, while others are negative. When, for example, the number of high-quality, positive studies included in a systematic review is similar to the number of high-quality, negative trials, the overall result of that review would be INCONCLUSIVE. And this is one of the reasons why the findings of systematic reviews cannot be compared in this way to those of RCTs.
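The difference can be sketched as a toy aggregation step (the trial list and the quality flag are hypothetical; in a real review, quality would come from a formal risk-of-bias assessment):

```python
# A systematic review weighs ALL the RCTs on one question, so -- unlike a
# single RCT -- it genuinely can come out inconclusive.
def review_verdict(trials):
    """trials: list of (result, high_quality) tuples; result is 'pos' or 'neg'."""
    pos = sum(1 for result, high_quality in trials if result == "pos" and high_quality)
    neg = sum(1 for result, high_quality in trials if result == "neg" and high_quality)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "inconclusive"  # balanced high-quality evidence: a legitimate grey zone

# One high-quality positive, one high-quality negative, one low-quality positive:
print(review_verdict([("pos", True), ("neg", True), ("pos", False)]))  # -> 'inconclusive'
```

‘Inconclusive’ is thus a meaningful verdict for a review of many trials, but not for a single trial – which is precisely why the two sets of percentages cannot be laid side by side.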

I suspect that the people at the HRI know all this. They are not daft! In fact, they are quite clever. But unfortunately, they seem to employ their cleverness not for informing but for misleading their ‘wide international audience’.

19 Responses to The HOMEOPATHY RESEARCH INSTITUTE: bringing unreliable information to a wide international audience

  • Wow! Over 100 trials conducted? I wonder how many people were in each trial? Only a few, perhaps, and also being treated by conventional medicine too?

    Gosh, they must have really good trials!

    • How many times have you heard “there’s absolutely no evidence” from skeptic trolls?

      I suggest you sit down and take a calming cup of tea.

      • who is the ‘troll’ here?

      • Yeah, but when skeptics talk about evidence, they have standards; for homeopaths, any old thing will do, it seems…

      • A skeptic would say that there is no relevant reliable evidence that xxx works. In the case of homeopathy the lack of a plausible mechanism for action complements the lack of said evidence.

        Furthermore, if P values are being used for the statistical analysis then, strictly speaking, rejecting the null hypothesis for a given P value does that and only that. Failing to reject it does not prove the treatment and control are equivalent, just that no statistically significant difference between them was found.

  • Ernst, dear chap, just get out your grey crayon and scribble over the red bits of pie (which are just information, quite informative without being overwhelming). Best not do it on the computer screen.

    Then what do we have, by Ernst-measure?

    – A comparison of many systematic reviews of RCTs of conventional medicine
    really did show (2014) strikingly similar proportionate findings to 104 available controlled RCTs of homeopathic interventions (rather more now; the pie needs an update. To date there’s a lot more good-quality positive homeopathic trials, so I understand. See Core-Hom.) –

    grey/red not-positive . . homeopathy 59% . . . . “convention” 56%
    lime-green positive . . . . homeopathy 41% . . . . “convention” 44%

    roughly (at five points or so per glyph)
    ############++++++++ . . . hom cures
    ###########+++++++++ . . . con treatments

    There, that wasn’t so hard, was it?

  • I think this tripartite division of evidence is claimed by Robert Mathie to be ‘correct’ practice and I’m sure he has cited to me some source like the BMJ to support his contention.

    But I agree with you, it flies in the face of the common sense that most people use when discussing medical trials. And, yes, the alleged ‘positive’ trials for homeopathy are of uniformly execrable quality. Positive trials for real medicine are of variable quality, but due to the inherently higher prior probability of the hypothesis under test, the results form a more reliable basis for clinical decisions.

    A frequent strawman implicit in criticism of conventional medicine is that unless the evidence is perfect it is useless. Real medicine used honestly accepts the variability of data quality and understands that we must always work from incomplete data to make the best decisions that we can.

  • Two side notes:
    1) The title of chart A should obviously be “104 RCTs of (…)” instead of “189 RCTs of (…)”.
    I wonder… could it be true that the brilliant scientists at the HOMEOPATHY RESEARCH INSTITUTE have a problem with even very simple numbers?!

    2) Could it be true that chap Will is one of these brilliant minds? His weird comments would fit very nicely.

  • This article by Professor Ernst is why homeopathy proponents can benefit from sceptics keeping a critical eye on what is being put out there.

    I have written some critical comments about Ernst’s character in previous posts (ad homs) but his untiring pursuit in his investigation of homeopathy is astonishing. The High Court judgement for NHS did not even prompt him to take a holiday; a very determined person, I must say!

    Dr Rawlins, let the debate finally BEGIN.

  • There’s another reason why these two graphs are highly misleading: the first one is based on information aggregated by homeopaths, who are not exactly known for their skills in scientific research and data analysis, and for their avoidance of bias, to put it mildly.
    The second one is from Cochrane, an institute known for applying the most rigorous criteria for scientific research.

    It is a bit like comparing kids’ drawings based on grades awarded with, say, assessments of Rembrandt’s works, and concluding that both are comparable art-wise and should fetch similar prices.

    • I do not know who ‘RichardR’ is, but I am the only ‘Dr Rawlins’ who ever posts on this blog.

      When ‘RichardR’ responds to a post mentioning me, I charge that he is a troll and should identify himself more clearly or be barred from this blog.

      Thank you.
      Richard Rawlins

  • No one has yet pointed out the most hilarious fact:

    “By the end of 2014, 189 randomised controlled trials of homeopathy on 100 different medical conditions had been published in peer-reviewed journals. Of these, 104 papers were placebo-controlled and were eligible for detailed review:”

    After almost exactly 200 years, homeopaths have only about 100 trials through whose entrails they endlessly trawl.

  • The Faculty of Homeopathy has a list of systematic reviews of homeopathy up to 2014.

    9 positive, 10 “little or no evidence”, 16 non-conclusive, 1 negative.

    If it’s not positive and it’s not non-conclusive and shows little or no evidence of benefit, is it not negative? I think it’s negative. So systematic reviews of homeopathy are more negative than positive but mostly inconclusive.
