MD, PhD, FMedSci, FRSB, FRCP, FRCPEd

According to its authors, this RCT was aimed at investigating 1) the specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression. In particular, the second research question is intriguing, I think – so let’s have a closer look at this trial.

The study was designed as a randomized, partially double-blind, placebo-controlled, four-armed, 2×2 factorial trial with a 6-week study duration. A total of 44 patients were randomized (2:1:2:1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment, and was thus underpowered for the pre-planned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance. The mean difference for the Hamilton-D after 6 weeks was 2.0 (95% CI -1.2;5.2) for Q-potencies vs. placebo, and -3.1 (-5.9;-0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant results between homeopathic Q-potencies versus placebo and homeopathic versus conventional case taking were observed. The frequency of adverse events was comparable for all groups.

The conclusions were remarkable: “although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting”.

Alright, the authors encountered problems in recruiting enough patients and they therefore decided to stop the trial early. This sort of thing happens. Most researchers would then not publish any data at all. This team, however, did publish a report, and the decision to do so might be perfectly fine: other investigators might learn from the problems which led to early termination of the study.

But why do they conclude that the results were INCONCLUSIVE? I think the results were not inconclusive but non-existent; there were no results to report other than those related to the recruitment problems. And even if one insists on presenting outcome data as an exploratory analysis, one cannot honestly say they were INCONCLUSIVE; all one might state in this case is that the results failed to show an effect of the remedy or the consultation. This is far less favourable for homeopathy than stating the results were INCONCLUSIVE.

And why on earth do the authors conclude “we cannot recommend undertaking a further trial addressing this question in a similar setting”? This does not make the slightest sense to me. If the trialists encountered recruitment problems, others might find ways of overcoming them. The research question asking whether the effects of an extensive homeopathic case taking differ from those of a shorter conventional one seems important. If answered accurately, it could disentangle much of the confusion that surrounds clinical trials of homeopathy.

I have repeatedly commented on the odd conclusions drawn by proponents of alternative medicine on the basis of data that did not quite fulfil their expectations, and I often ask myself at what point this ‘prettification’ of the results via false positive conclusions crosses the line to scientific misconduct. My theory is that these conclusions appear odd to those capable of critical analysis because the authors bend over backwards in order to conclude more positively than the data would seem to permit. If we see it this way, such conclusions might even prove useful as a fairly sensitive ‘bullshit-detector’.

3 Responses to A ‘bullshit-detector’ for clinical trials? The example of a recent trial of homeopathy

  • Quinquagintamillesimal potencies… Oh my god! I tried to get my head around this and do some calculations, but MS Excel refuses to discuss anything higher than 10 to the power of 309. So 10 to the power of 50,000 is plain ridiculous.

    The universe is believed to contain ten to the power of 80 molecules. So to understand a Q-potency, you will have to imagine thinning a drop of some stuff in a ridiculous number of universes!
    Here’s an explanation of Q-potentisation in German. You can have Google Chrome translate the text.
    http://www.remedia.at/de-at/homoeopathie/qpotenz.html

    This is an even better gag than remedies made from Vacuum: https://www.helios.co.uk/cgi-bin/store.cgi?action=linkrem&sku=vacu&uid=465

  • I think inconclusive means that there is not enough data (with acceptable statistical variance) for the researchers to evaluate whether the treatment is placebo or has a real effect.

    If one cannot judge based on the available data, s/he states: inconclusive. If the data do not support his/her hypothesis after comparing it to placebo, one can write “the results failed to show an effect of the remedy or the consultation”.

    If there is not enough data to support either of the above conclusions, stating “the results failed to show an effect of the remedy or the consultation” is a false and (maybe dishonest) statement.
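As an aside on the first commenter’s arithmetic: the Excel limit he ran into is a floating-point one, since spreadsheet cells hold IEEE 754 double-precision numbers, which overflow just above 10^308. The exponent itself is no obstacle to exact integer arithmetic. A minimal sketch in Python (standard library only, just to make the overflow concrete):

```python
# Doubles (the number format Excel uses) overflow just above 1e308.
def float_power_overflows() -> bool:
    """Return True if 10.0 ** 50000 cannot be represented as a float."""
    try:
        10.0 ** 50000
        return False
    except OverflowError:
        return True

print(float_power_overflows())  # True: the float computation gives up

# Python integers are arbitrary-precision, so the number itself is computable:
q = 10 ** 50000        # a 1 followed by 50,000 zeros
print(len(str(q)))     # 50001 digits
```

None of this rescues the plausibility of a Q-potency, of course; it only shows that 10^50,000 dwarfs anything a spreadsheet (or, as the commenter notes, the ~10^80 particles in the observable universe) can accommodate.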
