
As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):

BACKGROUND:

A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.

METHODS:

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

RESULTS:

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

CONCLUSIONS:

Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
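As an aside for readers unfamiliar with how such figures are produced: a pooled odds ratio of this kind usually comes from standard inverse-variance weighting of the individual trials’ log odds ratios, with continuous outcomes first converted to the OR scale (e.g. via log OR ≈ SMD × π/√3, as in Chinn’s method). The sketch below illustrates the principle only; the 2×2 tables are invented and are not the data analysed by Mathie et al.

    # Minimal sketch of fixed-effect, inverse-variance pooling of odds ratios.
    # The trial counts below are invented purely for illustration.
    import math

    # (events_treatment, n_treatment, events_control, n_control) per trial
    trials = [(12, 40, 8, 40), (20, 60, 15, 58), (9, 30, 5, 31)]

    sum_w, sum_w_logor = 0.0, 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c                 # non-events in each arm
        log_or = math.log((a * d) / (b * c))  # log odds ratio of this trial
        var = 1/a + 1/b + 1/c + 1/d           # variance of the log OR
        w = 1 / var                           # inverse-variance weight
        sum_w += w
        sum_w_logor += w * log_or

    pooled = sum_w_logor / sum_w
    se = math.sqrt(1 / sum_w)
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"pooled OR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")

A random-effects model adds a between-trial variance term to each weight but follows the same pattern.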

Since my team had published an RCT of individualised homeopathy, it seems only natural that my interest focussed on why this study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.

I was convinced that this trial had been rigorous and was thus puzzled why, despite its receiving ‘full marks’ from the reviewers, it had not been included in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data we provided would not lend themselves to meta-analysis. By selecting not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they were unable to use our study and to exclude it from their meta-analysis.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO”. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).

By rigidly following their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol, which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is even the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analyses, it is most likely that their overall results would not have been in favour of homeopathy.
  • Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials, and I take the liberty of quoting here again the comments he posted previously:

I have reason to believe that this review and meta-analysis is biased in favor of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analyses. Jacobs’ results were in favour of homeopathy, Walach’s were not.

For the domains where the rating of Walach’s study was lower than that of the Jacobs study, please find below citations from the original studies, or my short summaries of the point in question.

Domain I: Sequence generation:
Walach:
“The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (Low risk of bias)

Domain IIIb: Blinding of outcome assessor
Walach:
“The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study. ”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (Low risk of bias)

Domain V: Selective outcome reporting

Walach:
The study protocol was published in 1991, prior to enrollment of participants; all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)

Jacobs:
No prior publication of a protocol, but a pilot study exists. However, this was published in 1993, only after the trial was performed in 1991. The primary outcome (duration of diarrhea) was defined and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Walach:
Rating: NO (high risk of bias), no details given

Jacobs:
Imbalance of group properties (size, weight and age of children) that might have some impact on the course of disease; high impact of parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias on the part of the authors of the review.

Conclusion

So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof for misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying. 

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.

199 Responses to HOMEOPATHY: proof of concept or proof of misconduct?

  • I must confess I’m troubled by the depth of trickery here. I have repeatedly reported as a journalist about the efforts of the homeopathy crowd to make their stuff look respectable, but I’m pretty sure I wouldn’t have been able to see through the charade here, even if I’d read the paper front to back. How can non-experts assess the value of medical information, if bad science/pseudoscience goes to lengths like these to obfuscate the bias at work?

    • good question – I am not sure I know the answer.

      • Neither would I ever have been able to see through this “charade” all by myself. Like most people I simply do not have the relevant knowledge to do this. My enthusiasm for critical thinking however causes the outcome so far to be no surprise whatsoever… surely that’s one of the beauties of science: one does not need to know everything about everything in order to find the right conclusions, one only has to know how to find a reliable source.
        So perhaps the truth of this question is that there is no answer in the short term.
        In the long term the answer surely has to lie in our schools – children really do need to grow up with critical thinking skills and some awareness of how modern science works.

        • It is for this reason we trust the journal and its editor to review papers thoroughly before publication. Poor quality research should be screened out before it gets anywhere near to being published. Otherwise the journal should be regarded with as much contempt as those such as “Homeopathy” are.

          • yes, of course, but you must admit that the ‘mistakes’ were very well hidden. you could not find them by just reading that paper, it required some background reading, and very few reviewers bother to do that.

  • So; allowing for the clear impossibility of any vested interest bias in a paper written by a group where the principal author is a member of the British Homeopathic Association and the study is funded by a grant from the Manchester Homeopathic Hospital, the meta-analysis is a perfect example of the way pseudo-dispassionate scholarship can find pseudo-justification for excluding trials that don’t support a positive conclusion for the whacky “magic healing club”? We look forward agog to similar positive pseudo-meta-analyses of prayer, faith healing, reiki and other fluid extracts of serpents.

  • Were the meta-analysis authors blinded as to the selected papers’ results and origins (authors etc.) while assessing them for bias? If not, should this be standard for this type of research?

  • I suppose I’m not surprised by all the ‘author bias’ nonsense that’s being batted about here. One sniff of a high-quality piece of homeopathy research that cautiously offers an interesting result, and sceptics immediately assume bad scientific practice! As a PhD physiologist, with over 35 years of peer-reviewed publications to my name, I am aggrieved by these unworthy accusations that are based purely on biased sceptical opinion. I suggest that sceptics simply READ THE FULL PAPER INCLUDING ADDITIONAL FILES. There, you will see that the Cochrane risk-of-bias method (which is not intended to be a precise science) has been applied rigorously, consistently and fairly: in fact, it is probably more stringent than Shang’s. You will also note the consistently cautious tone in accounting for our findings, and you will see that, for the eight trials that we studied in common, our risk-of-bias data are in fact very similar to Shang’s. Our use of the WHO classification approach was to ensure we selected the most clinically important and the most objective outcome per trial – in other words, we made it as difficult as possible to attain a clear treatment effect. Our meta-analysis methods are state-of-the-art, AND THEY WERE NOT APPLIED RETROSPECTIVELY. We found a small treatment effect collectively from the trials with reliable evidence (which would be designated ‘higher methodological quality’ by Shang) and, given the low quality of the trials overall, we formed a reasonable and cautious scientific conclusion. It is an honest and novel statistical finding that deserves open-minded scientific attention.

    • Dr. Mathie, you certainly got my open-minded scientific attention. Based on what I’ve read, it seems like a clear case of Mathie doth protest too much, methinks.

    • Robert

      Can you tell us more about the WHO outcome measures classification and why they compiled it?

    • Dr. Mathie, you imply that a weak positive result, in line with your acknowledged vested interest, based entirely on trials of uncertain bias, is a “high quality” result. Can you see why others might be unconvinced, especially given the lack of any credible evidence that like cures like (so giving no reason to suppose homeopathy should work) and the absence of any remotely plausible mechanism by which it might work?

      Your paper appears to confuse a result that may be consistent with your (vested) hypothesis, with a result that refutes the null hypothesis. I think you will readily acknowledge that your result does not, in fact, refute the null hypothesis – in fact it is entirely consistent with it, especially once other research into the role of publication bias, and the acknowledged uncertain bias in the included studies, is taken into consideration.

    • Interesting.
      Call yourself a “PhD physiologist” and understand neither the first thing about the simple effects of dilution nor the scrutiny of evidence. How was the scholarly title earned?

    • Wow you sure convinced me with that indignation. Surely no fraud would go so far as to promise that she was unbiased in choosing studies with CAPS LOCK ON.

    • Robert

      Can you also say why you chose the WHO classification for your meta-analysis if the WHO didn’t intend it for use in meta-analyses, and why you don’t seem to have mentioned that you were intending to use it for this purpose in your previous paper (Randomised controlled trials of homeopathy in humans: characterising the research journal literature for systematic review)?

    • So even after excluding some high quality research for reasons that are still unclear, you only found a small treatment effect collectively. This was sufficient for homeopath Anthony Campbell to conclude: “At its best there is evidence for only a small effect, and when an effect is as small as this it may not be there at all.” And that was in 2008!

    • Robert,
      just because you call it ‘author bias nonsense’ does not mean it truly is nonsense. in fact, in non-homeopathic circles, one would call it ‘critical analysis’!
      you seem very good at using fallacies instead of arguments when you write things like “a PhD physiologist, with over 35 years of peer-reviewed publications to my name” [= appeal to authority]. why would your 35 years as a physiologist or your publication record matter, and if it mattered, would it also be relevant that I published about 10 times as many systematic reviews as you did? surely not!!! in fact, this type of response is only going to make critics more critical.
      more importantly, you did not really respond to the criticism I voiced:
      – yes, you followed your protocol but it seems to have been flawed by the introduction of the WHO list which was never meant for that purpose. WHY DID YOU DO THAT?
      – you rated our trial as being guilty of ‘selective outcome reporting’ even though it reported all the outcomes, albeit one of them not in the manner you would have wanted. WHY DID YOU DO THAT?
      – you felt our trial was burdened with ‘vested interests’ which is not the case. WHY DID YOU DO THAT?
      and please do respond to Aust’s points about your judgements of the other trials.
      even more importantly, your meta-analysis managed to generate an overall result which seems to comply with your methodology [if one is very generous and forgets about the points raised above] but which does not reflect the published evidence.
      as you know, science is about approaching the truth as closely as possible [and not about pleasing the ones who happen to pay your salary]. you know as well as I do that the Walach trial and our study were rigorous, regardless of whether they do or don’t pass your ‘box-ticking’ exercise. yet you ignore them when arriving at your conclusions; even if you had to exclude them from the meta-analysis [which I dispute], you should have mentioned them and their findings in the discussion of the paper. this is what one does in a decent systematic review/meta-analysis – and I am sure you know it.

    • Dr Mathie.
      Coming into this situation having not read the paper, but based upon what I do read in the article above, I have no problem attributing a high risk of bias to your study. No amount of indignation or foot-stomping from you will alter that judgement. It surprises me how willing you are to be blind to this.

    • I have read the full paper and the supplementary material. I have also read the protocol and skimmed the paper describing the search strategy. I have found nothing that adequately explains how you used the WHO ICF system to determine main outcomes. Unlike your review, both Shang and Linde preferentially used each study’s primary outcome measure (and Shang doesn’t mention the WHO ICF at all; Linde might but I can’t access the full text). I feel that it would have been helpful to have provided a table detailing how the choice of main outcome was made for each study. According to your protocol, “In cases where, in the judgment of the reviewers, there are two or more outcome measures of equal greatest importance within the WHO ICF rank order, the designated ‘main outcome measure’ will be selected randomly.” By flipping a coin. How many studies did you have to do this for in the end? Why is this superior to using the study-defined primary outcome?

      Even if this were a suitable method for choosing the main outcome, what justification was there for specifically stating in your protocol that you were only going to use data provided in the papers and not attempt to contact authors? You must have been aware that you could end up defining some secondary outcomes as main outcomes, and that this increases the chance that there wouldn’t be enough available data to include these studies in the meta-analyses. If your review is supposed to clarify matters, what possible justification is there for making absolutely no attempt to acquire all the available data?

      It doesn’t matter how great your statistics are, if selection of the included studies is flawed (perhaps you prefer that to bias) then the results will be flawed.

      (For those unfamiliar with the ICF this gives an overview: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3104216/#!po=47.5000)

      • THANK YOU! I needed that confirmed by a 3rd party.
        any sensible meta-analyst uses the outcome measure which the primary authors have defined as their primary endpoint and which they used for designing their study. anything else is ludicrous.
        even if they chose a different one, they ought to choose one that provides sufficient data for meta-analysis.
        even if they don’t do either, they cannot possibly claim that a study was burdened with ‘selective outcome reporting’ and rate it as ‘high risk of bias’ only because of their nonsensical choice.
        I find this far too close to scientific misconduct not to call for withdrawal of the entire paper.
        ROBERT: WILL YOU VOLUNTARILY WITHDRAW THIS ARTICLE OR SHALL I WRITE TO THE EDITOR ASKING FOR THIS TO BE DONE?

      • It seems that problems with using the WHO classification were brought up by one of the reviewers of the original manuscript. Wayne Jonas commented:

        9. Page 5, lines 144 to 147. While the Linde approach to selecting the main outcome measures is reasonable, the World Health Organization approach does not seem to have been validated. The authors need to justify its use here. Who did the selection of the outcomes and was it conducted in duplicate with a third reviewer confirming agreement? This needs to be detailed in the methodology.

        To which the authors replied:

        The WHO approach is a robust, internationally accepted, method to ensure that the selected outcome is the most important to the functioning and health of the patient, and so it ensures consistent selection of the most objective outcome per trial. We have inserted a statement of this nature in ‘Outcome definitions’, and we have clarified our consensus approach in ‘Data extraction’.

        After revision, Jonas stated that his concerns had been met, but I still don’t see that they have justified the use of the WHO classification as validated in the context of a meta-analysis.

    • “Read the full paper”: I don’t have the in-depth knowledge to do just that; I would have to read far more than that.

      It really doesn’t need more (I say common sense is enough) to conclude that you are trying to get everyone upset over some supposed “personal insult” instead of addressing criticism. Criticism is something you as a researcher should be grateful for, because it’s a good way to see your work from different angles.

      If your work is based on deductive logic, then the premise is something rather important. Sensible authors would discuss the weaknesses of their methods themselves. If you don’t do that and, even worse, attack the ones who do raise this discussion with ‘sceptic bias’ nonsense, then I don’t even have to “read the full paper” to understand that your intentions aren’t in rigorous research.

    • “As a PhD physiologist, with over 35 years of peer-reviewed publications to my name, I am aggrieved by these unworthy accusations that are based purely on biased sceptical opinion.”

      Taking a stance that is biased in favour of a sceptical outlook is the keystone of the scientific method. I am likewise ‘aggrieved’ that, during your 35+ years of peer-reviewed publications, nobody has insisted that you are long overdue for not just reading, but fully understanding, the book The Demon-Haunted World: Science as a Candle in the Dark, by astrophysicist Carl Sagan.

      Most PhD courses omit the keystone “Ph” aspect of PhD doctorate-level certification. Critical thinking skills are founded upon the branch of philosophy (supposedly the “Ph” aspect of the acquired PhD) called epistemology. I suggest that it would be more reasonable for you to address your complaints to the provider of your PhD than to criticise those of us who have painstakingly learnt critical thinking skills, the scientific method, and dedicate ourselves to really caring about the health and welfare of our fellow humans, other animals, and our planet.

      The best scientists, medics, and engineers apologise for their inevitable human errors with humility and grace rather than belittle those who reveal their errors.

    • Mathie is right. There is scientific, unbiased, evidence in support of his conclusion. I endorse this.

  • @ Robert Mathie

    ” I suggest that sceptics simply READ THE FULL PAPER INCLUDING ADDITIONAL FILES. There, you will see that the Cochrane risk-of-bias method (…) has been applied rigorously, consistently and fairly”

    I did, more than once, but, for instance, I did not find any clue about the additional high risk of bias in the Walach paper (Domain VI).

    And I did not see the consistency and fairness of the ratings. Why don’t you just give some information on how the ratings in the above examples came about? They look mighty inconsistent to me, don’t they?

  • I think there is a risk of forgetting that, even if we granted Mathie the supposed positive result eked out of this meta-analysis, we are still dealing with an effect at the margins of statistical noise that is clinically insignificant. This is sufficient in itself to bust the main claim of homeopathy to be a ‘complete system’ of medicine that many of its adherents assert to be an effective substitute for conventional medicine. Those same believers are daily making claims that they can reliably cure serious diseases including cancers.

    Raking over the old coals of the same set of trials does not rescue homeopathy. If even a strong adherent like Mathie cannot in conscience produce anything more convincing it really is all over for magical sugarpillery.

  • “Dr. Mathie, you imply that a weak positive result, in line with your acknowledged vested interest, based entirely on trials of uncertain bias, is a “high quality” result.”

    This conflates two separate issues, the quality of the original studies and the quality of the meta-analysis.

    Researchers doing a meta-analysis can’t do anything about the quality of the available research. It is also not their fault if the results are in line with what they would like to believe is true. We all expect or hope for certain results in our research, and the protocols and standardized tools are there, in part, to make it harder to deceive ourselves.

    Unfortunately, the response to this article reminds us that many “skeptics” are just believers of another stripe. I don’t believe in homeopathy. These results are interesting, but like all really surprising results, the likelihood is there’s a mistake somewhere.

    But instead of calmly assessing the results, you and other commenters here have decided to categorically reject these results you don’t like, and attack the scientist presenting them, claiming (with no evidence) fraud, misconduct, and obvious bias.

    It’s the same thing that happens when you post climate science on a denier blog or real medical research on a “natural” childbirth forum. Which is rather ironic.

    • “But instead of calmly assessing the results, you and other commenters here have decided to categorically reject these results…”
      No!
      I have gone to great lengths explaining the errors and defects in the analysis.

      • “ROBERT: WILL YOU VOLUNTARILY WITHDRAW THIS ARTICLE OR SHALL I WRITE TO THE EDITOR ASKING FOR THIS TO BE DONE?”

        Your explanations sound more like rationalizations for your passionate dislike of the result.

        You’ve demonstrated that the author made different choices than you think you would have made in their stead. This is not an “error” or a “defect.” The method used is standard and has been applied transparently, so anybody can go ahead and apply a different standard retrospectively and see if it gives them a result more to their liking.

        • let me assure you: the method is neither standard nor was it applied correctly. our study should have been ‘A’ rated – even according to the authors’ criteria.

        • Your method is neither standard nor transparent and I am going to repeat my specific concerns because I think these need answering and you keep ignoring them.

          In brief, you haven’t adequately explained how you derived your main outcomes.
          You must have identified the primary and secondary outcomes for each study and you must have produced some kind of checklist from the WHO ICF to score them against. For the sake of transparency it would have been really helpful to have included a table detailing this. (you’ve got 10 supplementary files, why not include an eleventh? And if you weren’t allowed more than 10 then you could drop the PRISMA one – that’s freely available and easy for people to look up.)

          You haven’t justified your protocol which specifically stated that you were not going to contact authors to make sure all your (derived) main outcomes had enough data to include in the meta-analyses. The Cochrane Handbook recommends that where data is missing reviewers should “attempt to contact original investigators”. Not doing so risks bias/error in your results because you haven’t been able to include all available studies in the meta-analyses.

          • Not to blow your mind, Clare, but there’s more than one Robert in the world. I’m not the author of this study.

            I do find the emotional arguments, free use of accusations of fraud and misconduct, and inability to tolerate the existence of evidence not supporting your arguments, ironically similar to climate denial, vaccine truthers and the like.

            Ask yourself: what if a good RCT finds strong benefit from homeopathy? Are you prepared to alter your conclusions about it? If you aren’t, then your position is neither rational nor skeptical. You’re just another believer.

        • …rationalizations for your passionate dislike of the result.

          For us to dislike “the result” would require that there is a result for us to dislike.
          We are not ‘disliking results’, we are disgusted by mendacity.
           
          Let’s have a look at the section of this piece of homeopathic self-gratification titled CONCLUSIONS and see how well it sums up the “result”. I have bolded the relevant keywords. (Those familiar with poor, unethical and otherwise worthless research will recognise most of these keywords as recurring memes in such conclusions.)
           

          Medicines prescribed…

          Did the paper examine evidence about medicine? No, it is about homeopathy, which does not use medicines but “remedies” made in principle by shaking an awful lot of water. Medicines contain known substances with known mechanisms of effect. Remedies all contain the same inert diluents. All attempts have failed at finding or demonstrating active substances or evidence of effective qualities in homeopathic remedies. (Now please refrain from insulting this community by starting on about people like Benveniste, Montagnier, Emoto or that Indian bloke who had been playing with an electron microscope. It’s all been dealt with).
           

          …in individualised homeopathy

          “Individualised homeopathy” means literally: “Entertain and soothe the patient with smalltalk, caring and compassion for at least an hour and then pick any bottle in the heap with a name that fits your fantasies, they are all the same anyway”.
           

          may have small, specific treatment effects.

          Yeah, right. That actually sums up the importance of the purported “result”.
          About as solid as saying the missing Malaysian plane may have been found on the moon.
           

          Findings are consistent with sub-group data available in a previous ‘global’ systematic review.

          Well… if you sub- and sub-subdivide your data and analyse enough subgroups you are bound to find an interesting correlation or two sooner or later.
           

          The low or unclear overall quality of the evidence prompts caution in interpreting the findings.

          …low or unclear overall quality… Yup, you got that right. Homeoresearch needs to be of low and unclear quality to stumble upon a false positive. We are long past the point of cautious interpretation. It is time to stop pretending.
           

          New high-quality RCT research is necessary to enable more decisive interpretation.

           
          Nope. Homeopaths had their two centuries to prove its effect. Nothing new is forthcoming so let’s do something useful instead.

          • Nope. Homeopaths had their two centuries to prove its effect. Nothing new is forthcoming so let’s do something useful instead.

            Which goes back to what I said earlier. The fact that homeopaths are obliged to try to squeeze out tiny effects at the margins of statistical significance against a vanishingly small prior probability is itself sufficient to defeat their central claim that homeopathy is a medical therapy that creates clinically meaningful effects from their pills and drops.

    • @ Robert (assuming you are Robert Mathie)

      ” … and attack the scientist presenting them, claiming (with no evidence) fraud, misconduct, and obvious bias.”

      Please reread my analysis of the ratings you and your team gave to the Jacobs and Walach papers respectively, and please review the citations and short summaries that I gave to back my statements. Check them against the original papers and your own to see if I reflected the true content.

      At least, this is the best I can do to bring forward evidence of some inconsistency in the work of your team. Unless, of course, we have two different understandings of what evidence of obvious bias is all about.

      • “assuming you are Robert Mathie”

        That would be a dumb assumption, considering Robert Mathie is a member of the “British Homeopathic Association” and I said above that I don’t believe in homeopathy.

        Is this the careful weighing of the facts and scrupulous attention to detail you brought to your “debunking” above? It hardly inspires confidence.

        • Sorry, this piece of information slipped past me. One point for you.

          So please consider my response rephrased to sound like this:

          “Please reread my analysis of the ratings of the Jacobs and Walach papers respectively, and please review the citations and short summaries that I gave to back my statements. Check them against the original papers to see if I reflected the true content.

          At least, this is the best I can do to bring forward evidence of some inconsistency in the work of the rating team. Unless, of course, we have two different understandings of what evidence of obvious bias is all about.”

          Of course you picked the most vital point already (my assumption that you are identical with RM), which is about to bring my fortress of cards to collapse. But still, I would be interested in your comments about the smaller trifles.

    • The elephant in the room is that we already know that a weak net positive evidence base is expected for an inert treatment, and that the chances of a positive result being a false positive are strongly influenced by prior plausibility (the excellent Ioannidis); a rough illustration of this point appears after this comment.

      Rather than endlessly slicing and dicing trials that are ethically dubious in the absence of plausible mechanism, and cannot by their very nature refute the null hypothesis anyway, homeopaths need to go back to square one and produce a valid and verifiable underpinning for their beliefs.

      Memory of water and suchlike are a sideshow, since there is no actual evidence that symptomatic similarity is a valid basis of cure, and the means by which similarity is assessed are almost comically poor.

      Come back when you have robust evidence that like cures like, a plausible mechanism by which it might work, and objective tests that verify bioavailability. You know, just like everyone else has to.
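To make the prior-plausibility point above concrete, here is a rough sketch (the alpha and power values are conventional assumptions, not figures from any of the papers discussed): the lower the prior probability that a treatment works, the more likely a “statistically significant” result is to be a false positive.

    # Positive predictive value of a p < 0.05 finding, given prior plausibility.
    def prob_finding_is_true(prior, alpha=0.05, power=0.8):
        true_pos = power * prior          # truly effective and detected
        false_pos = alpha * (1 - prior)   # ineffective but "significant" by chance
        return true_pos / (true_pos + false_pos)

    for prior in (0.5, 0.1, 0.01, 0.001):
        print(f"prior {prior}: P(effect is real | p < 0.05) = "
              f"{prob_finding_is_true(prior):.2f}")

With a prior of 0.5 a significant result is very probably real; with a prior of 0.001 it almost certainly is not.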

  • I dare say Mathie’s review is not addressed to the scientific community, to be discussed and to ultimately increase our knowledge about homeopathy. It looks pretty much like something intended to support the marketing of homeopathy by improving its sciency looks.

    Imagine, there is a scientist who publishes a paper on such a controversially discussed matter as Homeopathy. What would he do? Of course he would use methods in his research that are above any doubt and publish a flawless paper that would address any open question that critics may raise.

    But what do we have here?

    At first glance the authors followed all the steps that would make it look like first-rate research: prepublishing the protocol, declaring the conflicting interests, using the Cochrane methodology to rate the studies, or at least stating that they did.

    A closer look however reveals quite a lot of shortcomings:

    – no steps to prevent the bias of the rating team from affecting the result, and no acknowledgement of this as an issue. (Rating was done by a team that consisted of three persons affiliated with organisations that support homeopathy and a fourth (JRTD) who has been publishing about homeopathy for more than twenty years now, while two of the three non-homeopaths just do statistics.)
    – modifying the main outcome of at least one of the studies in order to discard it, which disposes of the problem that the authors would otherwise have had to report that the ‘reliable evidence’ is not in favor of homeopathy.
    – inconsistent rating of risk of bias, which disposes of another unfavorable study from the top ranks.
    – treating three studies with unclear risk of bias as ‘reliable evidence’, which was not indicated in the protocol and was thus most likely introduced post hoc. Two of the three ‘reliable evidence’ studies are underpowered pilot studies.
    – pooling data across a big variety of indications to build statistical significance out of nowhere, which still is without any consequence for a patient who suffers from only one of these conditions.

    I guess this justifies an ‘extremely high risk of bias’ rating for this review.

    But look how this paper can be used to support marketing. This is taken from the post on the British Homeopathic Association’s website:

    “The paper reporting the findings of this review (Mathie et al. 2014) has been published in the journal Systematic Reviews: A statistically significant overall effect of individualised homeopathic treatment was identified over that of placebos.
    These overall findings suggest that homeopathic medicines have specific treatment effects. Such effects may be difficult to observe readily in any one given placebo-controlled trial of individualised homeopathy, in which both groups of participants have had exactly the same type of empathetic homeopathic consultation, followed by the homeopathic prescription or the placebo.
    The overall quality of the RCT evidence was judged to be low or unclear: only 3 of the trials in the systematic review were judged to comprise ‘reliable evidence’ (Bell et al. 2004; Jacobs et al. 1994; Jacobs et al. 2001). This indicates the need for caution in interpreting the review’s findings, and an obvious requirement for new and higher-quality research in this field.”

    (emphases by me – if they work, that is)

    See why there had to be favorable studies to provide ‘reliable evidence’, and how the White and Walach papers would have spoiled this picture?

  • Interesting that Mathie conducted and published a meta-analysis that did not find any evidence of benefit from homeopathy: http://www.researchgate.net/publication/267044270_Veterinary_homeopathy_systematic_review_of_medical_conditions_studied_by_randomised_placebo-controlled_trials.

    Hard to square with your portrait of a mustache-twirling homeopathy scammer.

    I wonder, if the positions were reversed, if the author of this piece would submit a study that found positive results from homeopathy or some other seemingly pseudoscientific method. Based on his performance here, I sincerely doubt it.

    • “… with your portrait …”

      No, this discussion is about the paper, not about the individuals who published it.

    • Hmm… I would say that homeopaths find it useful to be seen doing something that looks a bit like science. And the usual “more high-quality research is needed” conclusion is a ‘motherhood and apple-pie’ statement, disagreement with which can too readily be portrayed as merely churlish.

      There’s quite a set of these ‘Meta’ papers emerging from Mathie and his colleagues. I don’t mean just meta-analyses. There’s quite a number of papers about the methods of research itself. They all give the appearance of creating the foundations for future high-quality work which has never appeared. They also publish pilot studies that never lead anywhere yet generate favourable press releases that in my view give an impression quite at odds with the underlying reality.

      • nicely observed!
        alt med is full of pilot studies which are not followed by a definitive trial.

        • alt med is full of pilot studies which are not followed by a definitive trial.

          And pilot studies?
          Typically have small n.
          Small n? Means that confounders are less well evened out by randomisation.
          Poorly controlled confounders? Lead to spurious group differences emerging from frequentist statistical tests.
          Spurious group differences? Generate p-values that do not reflect genuine systematic differences between treatment and controls. p-values? Generate press releases.
          Press releases? Generate positive coverage from credulous media.

          Thus, pilot studies –> positive coverage from credulous media.

          Job done.

          The antidotes? 1. Bayes and 2. replication
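The chain of reasoning above can be illustrated with a toy simulation (purely hypothetical numbers, not data from any of the trials discussed): even when a treatment has no effect at all, the few small pilot trials that cross p < 0.05 by chance do so with large, spurious effect sizes, which is exactly the kind of number that ends up in a press release.

    # Toy simulation: many small "pilot trials" of a treatment with NO true effect.
    import math, random, statistics

    def simulate_pilot(n=15):
        treat = [random.gauss(0, 1) for _ in range(n)]   # no true effect
        ctrl = [random.gauss(0, 1) for _ in range(n)]
        diff = statistics.mean(treat) - statistics.mean(ctrl)
        sd = math.sqrt((statistics.variance(treat) + statistics.variance(ctrl)) / 2)
        t = diff / (sd * math.sqrt(2 / n))
        return abs(diff / sd), abs(t) > 2.05             # |effect size|, rough p < 0.05

    random.seed(1)
    runs = [simulate_pilot() for _ in range(10_000)]
    hits = [d for d, significant in runs if significant]
    print(f"'positive' pilots: {len(hits) / len(runs):.1%}")             # roughly 5% by chance
    print(f"mean |effect size| among them: {statistics.mean(hits):.2f}")  # clearly inflated

Replication with an adequately powered trial, or a Bayesian analysis with a realistic prior, deflates these chance findings; a press release does not.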

      • “They also publish pilot studies that never lead anywhere”

        – Five out of the six studies with the best ratings (B1) are pilot studies.
        – Two out of the three ‘reliable evidence’-studies are pilots.

        Especially rating pilot studies as ‘reliable evidence’ is more than a little strange, isn’t it?

        • Oh, excuse me if I have misunderstood the purpose of a pilot study… I was under the impression that a pilot study is never meant to investigate an issue as such, but to evaluate whether a certain procedure seems a good one for investigating that issue?
          So a p value obtained in a pilot study, however appealing it may be to one’s cognitive bias, should never, ever be used as an assessment of the issue itself, but only as a sign that it might be worthwhile to carry on with a real study of the issue.
          Or have I overinterpreted my statistical training?

          • Well, this is the same understanding I have about pilot studies.

            10 out of the 32 studies included in the review are mere pilot studies without an RCT to follow, even after many years. Why didn’t they follow up on these promising leads? You would think it should be fairly easy to raise funds from parties interested in providing evidence that homeopathy is, after all, effective. I can only think of two reasons (apart from a personal inability to carry on with research work, of course):

            (1) They did perform a full-fledged PCT, but this proved negative and was therefore not published (publication bias in action).

            or

            (2) The target of the pilot study was not a scientific one but a marketing one. If you have had your share of positive notice in public – what is to be gained by the additional effort?

    • I wonder, if the positions were reversed, if the author of this piece would submit a study that found positive results from homeopathy or some other seemingly pseudoscientific method. Based on his performance here, I sincerely doubt it.

      Perhaps if you had done a little background reading, even if only as far as the archives of this blog, your doubts might have been dispelled. You would only have had to go back a couple of weeks.

  • It’s tricky to present an exhaustive set of examples to illustrate the points I’ve just been making while away from home using just my phone, but here is one link:

    http://www.britishhomeopathic.org/our-research-strategy/

    and look particularly at how the “pilot study” on canine atopic dermatitis is described

    • Thanks for the link, Simon. I noticed the reference[2] (freely available online) which states:
      “Conflict of interest There is no conflict of interest. The author is a research physiologist, employed by the British Homeopathic Association to clarify and extend an evidence base in homeopathy.
      Robert T Mathie, …”

      • Conflict of interest There is no conflict of interest. The author is a research physiologist, employed by the British Homeopathic Association to clarify and extend an evidence base in homeopathy.
        Robert T Mathie, …”

        Saying makes it so? There’s so much in homeopathy that is magical. Perhaps this is another example.

  • I work with facts rather than opinion. Here are two facts that relate to the discussion above: (1) The Cochrane Handbook’s risk-of-bias tabulation for Selective Reporting includes the following criterion for the judgment of high risk of bias: ‘One or more outcomes of interest in the review are reported incompletely so that they cannot be entered in a meta-analysis’. Non-extractable data for meta-analysis was the case for 10 trials, including those by White et al and Walach et al. By strict Cochrane standards, each trial is therefore at high risk of bias overall (‘C-rated’ in our nomenclature); as we state in our paper, 7 of these 10 trials were already C-rated due to deficiency in at least one other domain of assessment.

    (2) As per cautionary comments in the Cochrane Handbook, we did not contact the authors of any of the original papers because such contact is intrinsically prone to positive bias. In a review whose earliest trial was published in 1991, it was unlikely to be possible to contact (or expect a reply from) the original author in every case, and so our findings would potentially have been biased (and positively so) by information that was provided by only a sub-set of the total. We preferred to be stringent but even-handed across the entire set of papers. If a matter is unclear in a published paper, then our strict approach serves to highlight the difficulties that ensue in its assessment and/or data extraction, and thus flags the need for improved primary reporting in future. Our study was explicitly based on published material only.

    • Robert

      Can you answer the questions about the WHO classification?

    • Dr. Mathie,

      Sorry, your explanation regarding the Walach paper does not seem satisfactory.

      You claim ‘non-extractable data’ for the rating as ‘high risk of bias’ and for the exclusion from your meta-analysis.

      In your paper you indicate that you needed the mean, standard deviation, and number of subjects to be able, in the end, to compute the OR.

      The main outcome you selected for the Walach paper was ‘frequency of headache per month’ (Table 2 of your paper).

      Walach gives the values for this main outcome at baseline as a median with a max/min range (Table 1 of his paper) and gives a definition of how he computed his outcome from his data (‘number of days with headache in weeks 8 to 12 minus number of days with headache in weeks 4 to 1’). The result is reported as a censored median with 95% confidence interval for both groups, together with the number of participants in each group.

      Could you please indicate what fact rendered this approach ‘selective outcome reporting’?

      Please indicate which data you were unable to extract, to justify the exclusion of this paper. I am not such a specialist in statistics, but I am under the impression that the SD could very well be estimated from Walach’s data, assuming the median is pretty much the same as the mean, which seems justified from the plot Walach gives for the number of patients with headaches per day (Fig. 1 of his paper). A sketch of such an estimate appears at the end of this comment.

      Or am I mistaken?

      There is also the WHO hierarchy issue Alan is asking about.

      You claim strict application of the Cochrane Handbook. Could you then provide a rationale for why you treated three studies rated B1 as ‘reliable evidence’, contrary to the Cochrane recommendations, which regard only ‘A’ studies as such? Two of your ‘reliable evidence’ studies are mere pilot studies that did not even reach statistical significance, as can be seen in your own Table 2 and Figure 3.

      As far as I can see, the Cochrane Handbook is very strict about pooling data in a meta-analysis, in that it requires that the data should not show too great a heterogeneity – and that is for studies of the same indication. The idea of pooling data across such a variety of indications as you did in your paper does not seem to be supported by Cochrane, does it?
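For what it is worth, the standard approximations alluded to above look roughly like this (a sketch only, with placeholder numbers rather than Walach’s actual figures; it assumes the distribution is roughly symmetric so that the median can stand in for the mean): the Cochrane Handbook describes recovering an SD from a 95% confidence interval of a mean, and a common rule of thumb (Hozo et al.) recovers it from a min/max range.

    # Two standard approximations for recovering an SD from summary statistics.
    # Placeholder numbers only; not Walach's actual data.
    import math

    def sd_from_ci(lower, upper, n):
        # 95% CI of a mean: SE = width / 3.92, SD = SE * sqrt(n)
        return math.sqrt(n) * (upper - lower) / 3.92

    def sd_from_range(minimum, maximum):
        # rough rule of thumb: SD is approximately range / 4
        return (maximum - minimum) / 4

    # hypothetical group of n = 30 with a reported change of -2.0 days (95% CI -3.1 to -0.9)
    print(f"SD estimated from CI:    {sd_from_ci(-3.1, -0.9, 30):.2f}")
    # hypothetical baseline range of 4 to 24 headache days per month
    print(f"SD estimated from range: {sd_from_range(4, 24):.2f}")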

  • Let’s cut to the chase: Having read the above, I feel that two pertinent things are apparent:
    * We can never know whether or not the selection criteria were applied retrospectively.
    * Some high-quality studies were omitted.

    On the first point: Without direct evidence to the contrary (and I contend that there is none), we must assume that Dr Mathie acted in good faith and must accept his word on this.

    On the second point: It is rather pointless picking over the bones of why various studies were excluded or included; I doubt it will lead anywhere. However, Dr Mathie must, by now, be fully aware that some studies of better quality than some of the included ones were excluded. The pertinent question is: what, if anything, does he intend to do about this?

    I suggest that, in the circumstances, the only proper thing to do would be exactly what Prof. Ernst requested, i.e. withdraw the article.

    • Stephen, there is no reason that I can think of for an author, seemingly employed by the British Homeopathic Association to clarify and extend an evidence base in homeopathy, to withdraw an article that supports his/her employer — even if the article deliberately excluded contrary evidence. Simplistically, if an employee of, say, Amazon refused to ship quack products to customers who’ve ordered them, then the employee would likely be dismissed rather than rewarded.

      Employees of Amazon, the British Homeopathic Association, … are not medical doctors treating patients with serious health conditions; therefore the opinions and medical ethics of the employer and their employees are completely irrelevant to the consumers of their products and services. Buyer beware!

      NB: We must refrain from referring to Robert Mathie as “Dr Mathie” because in the UK (and other jurisdictions) this implies that he acquired an MD and is/was licenced to practice medicine — unlike Prof Ernst, Mathie has not acquired an MD.

      • There is no reason that I can think of for an author [snip] to withdraw an article that supports his/her employer

        Common decency on having it revealed that the article is misleading? I agree, however, that an author would be unlikely to be rewarded by his employers for so doing.

        We must refrain from referring to Robert Mathie as “Dr Mathie” because in the UK (and other jurisdictions) this implies that he acquired an MD and is/was licenced to practice medicine

        This is simply untrue. The title “Dr” merely implies either that somebody has a doctorate or that somebody is licensed to practice medicine. An MD is not required to practice medicine in the UK (e.g. my GP has a BM & BS; a friend who is a consultant anaesthetist has a BS; neither has an MD). Also I would have more sympathy with the notion that titles/honorifics should not be used outside the field for which they were awarded if it was applied uniformly. Until such a time, I see it as a matter of courtesy and see no reason to be discourteous (as are, for example, those such as the WDDTY brigade, who refer to Dr Simon Singh as “Mr”).

          • Then write “Dr Simon Singh PhD” to make it clear that you are not inadvertently implying a medical doctor. You are correct in that an MD is not required to practice medicine; however, the rules are complex. The ASA adjudications on the (in)correct use of titles form a useful guide.

          • Pete, I think you are massively distracting from the point here. It is absolutely customary to refer to people with a PhD as “Dr. Last Name” in British academic circles, without any other qualifiers to indicate whether that is a medical or science degree. As an example, from the University of Edinburgh on the recent discovery of a new fossil:

            During the time of dinosaurs, the waters of Scotland were prowled by big reptiles the size of motor boats. Their fossils are very rare, and only now, for the first time we’ve found a new species that was uniquely Scottish.

            Dr Steve Brusatte
            School of GeoSciences

            and later in the same press release

            Not only is this a very special discovery, but it also marks the beginning of a major new collaboration involving some of the most eminent palaeontologists in Scotland … We are excited by the programme of work and are already working on additional new finds. This is a rich heritage for Scotland.

            Dr Nick Fraser
            National Museums Scotland

            Neither of them is a medical doctor…

          • Catherina, you are generally correct, but you are missing the point. Using what is customary can easily lead to confusing the general public — either accidentally or deliberately.

            Suppose that there exists an Association of Alternative Palaeontologists whose website states that Dr X refutes the findings of the eminent palaeontologists in Scotland: a layperson would conclude that there may be good reasons to doubt the findings of said palaeontologists. If Dr X has a PhD in, say, astrophysics, this would be an example of deliberate misdirection.

            For matters concerning health, safety, and medicine this type of misdirection violates UK law — as illustrated in various ASA adjudications. When discussing medical issues, the general public associates the title “Dr” with “medical doctor” i.e. a person who is properly qualified to give responsible medical advice.

            When I’m discussing medical issues with members of the public, I refer to Edzard Ernst as Professor Ernst rather than Dr Ernst in order to pre-empt the onslaught of counter-arguments along the lines “Well, Dr X says…”, where Dr X is an ND or some other doctor of quackery rather than a doctor of medicine.

            I apologise to the readers for my appalling lack of clarity in my earlier comments on this issue. No disrespect was intended.

          • This is a diversionary side issue but….

            Suppose that there exists an Association of Alternative Palaeontologists whose website states that Dr X refutes the findings of the eminent palaeontologists in Scotland: a layperson would conclude that there may be good reasons to doubt the findings of said palaeontologists. If Dr X has a PhD in, say, astrophysics, this would be an example of deliberate misdirection.

            So do you consider it to be “deliberate misdirection” if someone like Prof. Brian Cox, whose qualifications are in particle physics, uses his title when making pronouncements on (say) observational astronomy or evolutionary biology? Or if anyone whose academic title was not awarded in climatology makes a pronouncement on anthropogenic climate change?

          • Professor Brian Cox supports using the scientific method. My fictional Association of Alternative Palaeontologists is an example of an organisation that has nefarious reasons to reject using the scientific method. Likewise, the BHA has nefarious reasons for rejecting the results of large-scale RCTs and the results of the current gold standard of soundly-conducted systematic reviews.

            I sincerely hope that I have answered your question in terms of delineating between science and anti-science, and the deliberate misuse of titles in anti-scientific propaganda.

    • > We can never know whether or not the selection criteria were applied retrospectively.

      Well, we may not know about the selection criteria – but in comparison with the published protocol there are some things that keep me thinking:
      – the procedure to upgrade B1 studies to the status of ‘reliable evidence’ was not included in the pre-published protocol.
      – studies where homeopathy was combined with other therapies – either CAM or EBM – should have been excluded according to the protocol. This was not done; doing so would have cost the B1 studies their only full RCT (i.e. Jacobs 1994).
      – Mathie’s model validation tool was to be applied, but this did not happen, once again violating the protocol.
      – according to the protocol, studies reporting an attrition rate exceeding 20 % were at least to be rated ‘unclear’ in the domain of selective outcome reporting. However, this was not done in six cases (the studies of Cavallcanti, Jacobs 2005, Fisher, Sajedi, Siebenswirth, Brien).

      I do not know what impact these items would have had if carried out – but it would surely have given a better impression if they had been.

  • Common decency on having it revealed that the article is misleading? I agree, however, that an author would be unlikely to be rewarded by his employers for doing so.

    This is simply untrue. The title “Dr” merely implies either that somebody has a doctorate or that somebody is licensed to practice medicine. An MD is not required to practice medicine in the UK (e.g. my GP has a BM & BS; a friend who is a consultant anaesthetist has a BS; neither has an MD). Also I would have more sympathy with the notion that titles/honorifics should not be used outside the field for which they were awarded if it was applied uniformly. Until such a time, I see it as a matter of courtesy and see no reason to be discourteous (as are, for example, those such as the WDDTY brigade, who refer to Dr Simon Singh as “Mr”).

    • the use of titles such as “Dr” in the UK never ceases to confuse me. if there is someone out there who can write a definitive piece on this subject, please contact me: I would invite him/her to do a guest blog on the topic.

      • Dr. Ernst:

        the use of titles such as “Dr” in the UK never ceases to confuse me

        Well, there is always a Wikipedia.
        Their piece on the title “Doctor”, and particularly the one on “Doctor of Medicine (MD)”, are generally quite enlightening, but I agree that the UK situation, described in separate sections, does seem rather complex.

      • I’m not sure this is all that complex. In the UK, as anywhere else, to have the formal title of “Dr” a person needs to have obtained the degree of PhD or MD: i.e. a qualification at doctorate level, as Wikipedia states. (BTW, for confused readers, “i.e.” means “that is”, not “for example” as a lot of people nowadays seem to think.) Robert Mathie has a PhD and is therefore correctly referred to as Dr Mathie.

        In the UK, the basic medical qualification is MBChB or MBBS, neither of which is a qualification at doctorate level. However, because medically qualified people exercise the role of doctor to their patients, they are called “Doctor” as a courtesy. Only medics who’ve gone the extra mile for an MD qualification have truly earned that title.

        To add a touch of confusion, there is a convention (originally a derogatory one) that surgeons in the UK are called “mister”, not “doctor”, because their profession evolved from that of barber.

        In the USA, where medics qualify with a degree labelled “MD”, all physicians and surgeons have the official title of Doctor. The view that non-medics (including arts and science PhDs) should not be called Doctor is nicely embodied in the American public’s attitude to the PhD degree, which defines it as standing for “phoney doctor”. I suspect that’s the thinking behind Pete Atkins’s mistaken comment.

        • Interestingly, I cannot readily find any mention of the PhD dissertation, CV or other academic information for Robert Mathie.
          I admit I only gave it a ten-minute stint on Google, LinkedIn and other common sites and search engines, as well as an attempt at the main homeopathic websites, so I might have missed the motherlode.
          One might think that a seasoned researcher who counts 123 papers to his credit (ResearchGate) would openly flaunt his vital information.
          Perhaps he can enlighten us himself? It would be interesting to see.

          • I cannot readily find any mention of the PhD dissertation, CV or other academic information for Robert Mathie.

            Apparently he was awarded a PhD in Physiology from the University of Glasgow. But surely this is getting uncomfortably close to an “Appeal to (absence of) Authority” fallacy? It’s not really relevant to the quality of his arguments, which is what we should be using to assess their worth.

            The whole title/honorific thing is a red herring anyway: just about everybody has a different view of what a particular title means. For example, “Professor” in the UK used to mean someone who held the chair of a university department or who was of similar academic status; nowadays we seem to be gradually adopting the US practice of bestowing it upon everyone from an assistant college lecturer upwards.

          • No reason for conspiracy theories or herring-waving here. I was simply curious about the scientist and his work. I found it interesting that I could not, as readily as is usual for a productive scientist, find any CV or academic credentials. I do not have time for a more in-depth search of my own; that is why I asked.

  • Four more facts: (1) There is no gold-standard approach to selecting the single ‘main outcome’ per trial for the purposes of a ‘global’ meta-analysis of this nature. To avoid being subject to the vagaries of the original authors’ decisions about primary outcome measure (20 of the 31 RCT papers did not even define such an outcome), we elected prospectively to use the robust WHO approach that identified the most vital measure of the functioning and health of the patient. Consistent with the overarching hypothesis we were testing, it was thus possible to examine the impact of individualised homeopathic intervention on the basis of what was most crucial per trial in terms of health, and to apply that evenly across all of them.

    (2) The Walach paper was one of two that presented non-parametric data only, without sufficient information to allow the data distribution to be assessed. If there had been some description of the distribution (e.g. histogram, ranges), we would have assessed it, but there was not. The Walach paper presented a non-parametric 95% confidence interval for the median/median change, but this provides no help in the matter.

    (3) We rated the Walach paper ‘unclear’ in domain I (sequence generation) for the following reason: it is not clear what the notary did with the dice and how it was related to assignment. We expect it was 1,3,5 = experimental / 2,4,6 = comparator assignment. And we regard it as anomalous to have 61 and 37 in the two groups, which suggests a problem. The notary does have the opportunity to mess around with the assignment, should he choose to. So there is a case to be made for rating domain II (allocation concealment) also as ‘unclear’, but we gave the authors the benefit of that doubt.

    (4) We rated none of our trials as ‘low risk of bias’ overall, which is why we consistently emphasise important caveats to the statistical conclusions of our paper. Aware nevertheless that Cochrane guidelines state that a trial’s overall ‘low risk of bias’ should relate to all ‘key’ domains, and that we would have been criticised for not using a risk-of-bias categorisation comparable to that adopted by Shang et al (the most recent of the small number of other ‘global’ systematic reviews in homeopathy), we designated a trial as ‘reliable evidence’ if the sole doubt occurred in domain IV, domain V or domain VI; this approach is more stringent than Shang’s criteria for a trial of ‘higher methodological quality’. We have not noted concern by homeopathy’s sceptics over Shang’s use of this method. Nor have we noted such concern about Shang’s analysis of a heterogeneous set of trials.
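
    For readers trying to keep track of the rule just described, here is a minimal sketch (in Python, and emphatically not the authors’ own code) of how such a ‘reliable evidence’ designation can be expressed, assuming the seven assessment domains are simply numbered 1–7 so that domains IV–VI become 4–6:

      # Sketch of the 'reliable evidence' rule described above (illustrative only).
      # A trial qualifies if every domain is rated 'low' risk of bias, except that a
      # single 'unclear' rating is tolerated in domain IV, V or VI (here 4, 5 or 6).
      LOW, UNCLEAR, HIGH = "low", "unclear", "high"

      def is_reliable_evidence(ratings):
          """ratings: dict mapping domain number (1-7) to LOW / UNCLEAR / HIGH."""
          if any(r == HIGH for r in ratings.values()):
              return False
          unclear = [d for d, r in ratings.items() if r == UNCLEAR]
          if not unclear:
              return True                              # low risk of bias in all domains
          return len(unclear) == 1 and unclear[0] in (4, 5, 6)

      # Example: a single 'unclear' in domain V still qualifies; any 'high' does not.
      example = {1: LOW, 2: LOW, 3: LOW, 4: LOW, 5: UNCLEAR, 6: LOW, 7: LOW}
      print(is_reliable_evidence(example))             # True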

    • Robert Mathie said:

      we elected prospectively to use the robust WHO approach that identified the most vital measure of the functioning and health of the patient.

      Can you address the criticism that the WHO criteria were not intended to be used for meta-analyses?

    • Robert

      I still find this all to be a storm in a really rather small teacup. Given that you have sought out the best evidence for homeopathy and yet again found so little, it is pretty safe to conclude, even if we did not have the vast panoply of basic science to support our position, that homeopathy is a null treatment. I wonder that you do not respond to this point.

      If homeopathy really worked as its adherents insist then the effects would be screamingly obvious even in small underpowered trials. They are not, so it is not.

      QED

      And it is long overdue for homeopaths to shut up shop and go and get proper jobs. For pity’s sake, some homeopaths claim they can cure rabies!! This is not something that should be tolerated and bargained with but should be outlawed.

      • Beautifully put!

      • Eloquently put, Simon. Here’s a quote from the British Homeopathic Association website that I find to be so absurd that it is not even wrong (why this type of ‘advice’ has not been outlawed is far beyond my comprehension):

        Chronic fatigue syndrome
        I have made frequent references to CFS throughout this article, and it is in this sphere that I use Phosphoric acid the most. I use it a bit like a “homeopathic tonic” in low potency, and find that if the symptoms described above are present in the case, it works extremely well to lift the fatigue. In cases of grief, I tend to use higher potencies. — Janet Gray MA MB BCh FFHom MRCGP DRCOG, a GP for over 25 years, uses homeopathy in her Bristol practice.
        http://www.britishhomeopathic.org/bha-charity/how-we-can-help/medicine-a-z/phosphoric-acid/

    • Thanks for your response, Dr. Mathie, but I am still not satisfied with your rating of the Walach paper compared to that of the Jacobs 1994 paper.

      > and without sufficient information to allow the data distribution to be assessed

      As I said, I am no specialist in statistics, but Shang et al. in 2005 apparently were able to do so. They arrived at OR = 0.59 (95 % CI 0.28-1.28) when transformed to your OR scheme. Link: http://www.ispm.ch/fileadmin/doc_download/1433.Study_characteristics_of_homoeopathy_studies_corrected.pdf

      > If there had been some description of the distribution (e.g. histogram, ranges)…

      There was none of this in the Jacobs 1994 paper either. The only data given for the main outcome were mean and SD. Walach gave median and 95 % CI. It is hard to believe that this warrants a verdict of incomplete data reporting, with the study excluded from meta-analysis, while the other is rated reliable evidence.

      > We rated the Walach paper ‘unclear’ in domain I (sequence generation) for the following reason: it is not clear what the notary did with the dice and how it was related to assignment.

      The Jacobs-paper:
      ‘… there was a box of tubes in sequentially numbered order which had been previously randomized into treatment and control medication using a random numbers table in blocks of four’

      To my understanding, this gives much less information than the description provided by Walach et al. There is not even an indication of who performed the randomisation or when this was done. It could have been Jacobs herself for all we know. That is reason enough to rate domain II as ‘unclear’ as well. BTW: there is no hint as to how the random numbers were assigned to the sequentially numbered tubes.

      > And we regard it as anomalous to have 61 and 37 in the two groups, which suggests a problem

      Yes, there is a point to that. But the baseline data show Walach’s groups to be very well matched, so this might have some impact on the statistical power. Whereas there seems to be a slight problem in the Jacobs paper regarding the distribution of age and weight of the children included in the study.

      > The notary does have the opportunity to mess around with the assignment, should he choose to.

      … so did anybody who did the randomisation for Jacobs.

      > we designated a trial as ‘reliable evidence’ if the sole doubt occurred in domain IV or domain V or domain VI

      Let me see if I have this right: ‘High risk of bias’ in domain V, incomplete data reporting, is crucial and important enough to exclude a paper from analysis, but an ‘unclear’ (i.e. ‘medium’) risk of bias in the same domain is in line with an overall rating of ‘reliable evidence’?

      > Nor have we noted such concern about Shang’s analysis of a heterogeneous set of trials.

      No need for the skeptics to repeat this:

      http://www.researchgate.net/publication/24282526_The_2005_meta-analysis_of_homeopathy_the_importance_of_post-publication_data/file/5046352652e88babc4.pdf

      http://www.karger.com/Article/FullText/355916

  • My reply to Björn Geir: Ad-hominem jibes like these are deeply hurtful. As all clinical science researchers are aware, arguably the most prestigious bibliographic database is PubMed, where you will currently find 106 of my peer-reviewed publications catalogued.
    My reply to Alan Henness: You have missed the key point about our hypothesis-driven aim to ensure we extracted data for the most clinically important outcome per trial. And you have missed the essence of the WHO ICF document, which includes the following: ‘HOW CAN ICF BE USED? Because of its flexible framework, the detail and completeness of its classifications and the fact that each domain is operationally defined, with inclusions and exclusions, it is expected that ICF, like its predecessor, will be used for a myriad of uses to answer a wide range of questions involving clinical, research and policy development issues. (For specific examples of the uses of ICF in the area of service provision, and the kinds of practical issues that can be addressed, see the box below.)’. The box (‘ICF Applications’) includes: ‘For the evaluation of treatment and other interventions: What are the outcomes of the treatment? How useful were the interventions?’ That is meta-analysis. And here is an example of a systematic review in the Cochrane Library that importantly uses the ICF approach: Aas RW, Tuntland H, Holte KA, Røe C, Lund T, Marklund S,Moller A. Workplace interventions for neck pain in workers. Cochrane Database of Systematic Reviews 2011, Issue 4. Art. No.: CD008160.
    My reply to Simon Baker: I think you may not have read all of our paper’s Discussion, in which we say: ‘Though our conclusions can be made most securely from three trials with reliable evidence, this sub-set of studies is too small to enable a decisive answer to our tested hypothesis. Equivocal RCT evidence of this nature is not unusual in medical science, in which conclusions are commonly based on just two eligible RCTs per systematic review [17]. Given the specific focus of our study, a statistically significant OR of 1.98 may be interpreted as a small ‘effect size’ for these three trials collectively and does not differ significantly from the ‘effect size’ observed in our analysis of 22 trials (OR = 1.53). Such ‘effect sizes’ seem comparable with, for example, sumatriptan for migraine, fluoxetine for major depressive disorder and cholinesterase inhibitors for dementia [18]. The detection of a small yet significant pooled OR, with the perspective that only a few single trials showed statistically significant effects, supports conjecture that the impact of an individualised homeopathic prescription may be difficult to observe readily in the context of any one particular placebo-controlled trial [19,20].’

    • Robert
      two short questions:
      1) do you think a study is not ‘reliable’ because its authors did not report their secondary endpoint in a form fit for meta-analysis?
      2) do you believe your meta-analysis reflects the published evidence or does it merely comply with the methodology you aimed to adhere to?

    • @Robert Mathie
      I am surprised that you take my question as a personal attack. Or was it my outspoken analysis of the conclusion?
      Anyone who openly admits his or her belief in possible healing effects of shaken water should keep prepared for critical opinion. Especially from clinicians who from hard experience see disease and healing as matters not to be trifled with. I see no reason to obscure my sentiments towards those who advocate fantastical and incredible medical practices when it is hard enough to deal with reality.
      You wish to portray yourself as a scientist and a productive researcher. I have, admittedly, found this hard to respect, seeing that you defend what knowledge and reasoning say should be pure nonsense, but I am willing to try. Someone with a PhD and 100+ papers should be able to be taken seriously. Should your efforts produce credible proof and evidence for homeopathy, I will bow and applaud. Until then I will be honest about my perception of reality and forthright about it.
      I simply asked for help with finding out more about you and your career. I have in text referred to your person as a scientist and researcher, which I consider showing due respect.
      I do not need help with navigating PubMed. ResearchGate actually counts markedly more publications; I did find that list rather repetitive in places, so that might explain the discrepancy.
      Do you have any reason to be secretive about your curriculum vitae and information on where and how your academic title was earned? Productive scientists are usually rather proud of those; they often display them online and readily inform, instead of responding in a defensive manner to queries about their life’s work.
      Or is there reason for doubt?

    • Robert

      My criticism of you and homeopaths is that for evidence to support the sugar pill retail trade you squirrel about in the margins of the statistical noise looking for an effect. This contrasts with the routine claims made for homeopathy that it is a reliable and complete system of medicine that offers genuine cures for the whole range of medical conditions even including cancer and rabies.

      In response you present some examples where you say the effects of conventional medicine are hard to perceive and confirm. Do you really fail to see that this is not an appropriate response to the point I make?

      You conclude by making the bald assertion that “the impact of an individualised homeopathic prescription may be difficult to observe readily in the context of any one particular placebo-controlled trial”.

      There are only two possible explanations for that contention, both of which tacitly concede my point:

      1. Homeopathic remedies are sugar pills that do nothing at all.
      2. Homeopathic remedies have tiny evanescent clinical effects that render them unsuitable for routine clinical use and these effects are so small that their size absolutely contradicts the claims made that homeopathy is a reliable and complete system of medicine.

      Try again.

  • ‘Such ‘effect sizes’ seem comparable with, for example, sumatriptan for migraine, fluoxetine for major depressive disorder and cholinesterase inhibitors for dementia [18].’

    Well, the main difference is that those other products have a rationale, unlike homeopathy. Statistics can say whatever you want if you don’t take a deeper view of the problem: the very basis of homeopathy contradicts all the physics and chemistry we know, so if you find a very small effect like this, it is more likely to be a false positive or a bias, don’t you agree?

    Don’t forget the three kinds of lies: big lies, goddamned lies, and statistics.
    I feel that you are fooling yourself.

  • Edzard: Such a trial does not contain sufficiently reliable evidence to be tested by meta-analysis under our strict hypothesis, in which we examined solely the most objective and most important clinical outcome per trial. As it worked out, only 3 of the trials with high risk of bias overall (and excluded from meta-analysis) were rated in that way solely because of domain V; only one of those 3 trials (yours, as it happened) would have become ‘reliable evidence’ if we had been prepared to accept an outcome of lesser clinical importance that was data-extractable. Our analysis reflects the up-to-date published literature in individualised homeopathic treatment at least as well as Shang’s did 10 years ago for the same type of trial; and our sensitivity analysis contains overall higher-quality trials and more robust outcome measures than Shang’s ‘trials of higher methodological quality’.
    Björn: Of course I understand scepticism about homeopathy: as a scientist and a researcher, I am approaching the clinical review work in isolation and with an open mind, fully aware that mechanism of action is of equally key importance. Your suggestion that I am being ‘secretive’ about my CV etc. is derogatory, and so inappropriate that it does not merit any courteous answer other than to say it is no secret and that there is no suitable website available for its inclusion, even if I wanted to post it online. I shall not be responding further to you, and I hope that Edzard will effectively moderate any more of your remarks on the matter. In fact, as a result of your persistent comments, I am not sure if I shall contribute further to this blog.
    Simon: Our cautious conclusions are based solely on the facts of our quality assessments and the statistical findings. I do understand that your position reflects your perspective of scientific plausibility and its impact on interpretation of research findings. But I wonder how you view the N=6 meta-analysis results that Shang reported for conventional medicine trials? Their odds ratio (OR) was 0.58 (95% CI, 0.39 to 0.85); taking reciprocals to enable the direction and magnitude of treatment effect to be compared directly with ours, that calculates as OR = 1.72 (95% CI, 1.18 to 2.56). The similarity of their result to ours is worth your (and Quark’s) reflective thoughts.
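
    For anyone wanting to check that reciprocal transformation, it is just a matter of inverting the point estimate and both confidence limits (the upper limit becomes the lower and vice versa). A minimal sketch in Python, using the figures quoted above:

      # Flip an odds ratio so that OR > 1 favours the treatment of interest.
      # Figures are Shang's N=6 conventional-medicine result quoted above.
      or_point, ci_low, ci_high = 0.58, 0.39, 0.85

      flipped_or = 1 / or_point      # ~1.72
      flipped_low = 1 / ci_high      # ~1.18 (reciprocal of the upper limit)
      flipped_high = 1 / ci_low      # ~2.56 (reciprocal of the lower limit)

      print(f"OR = {flipped_or:.2f} (95% CI {flipped_low:.2f} to {flipped_high:.2f})")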

    • thanks
      but you have not answered my 2nd question: DO YOU THINK THAT YOUR RESULTS REFLECT THE BEST AVAILABLE EVIDENCE?

    • Robert

      I’m not sure whether you are deliberately avoiding my point or do not understand it, but you persist in a tu quoque fallacy that does nothing to address the problem created for homeopathy by its faith community persistently declaring that it is a robust and reliable system of medicine.

      Homeopaths claim to cure rabies and cancer. Your footling about with these meta-analyses contributes nothing useful.

      It is not adequate to respond to Bjorn by airily waving away the issue of mechanism when very straightforward facts from physics and chemistry tell us that the prior probability of an effect from homeopathic sugar pills asymptotically approaches zero.

      I do wonder why you have chosen to waste your professional life on this nonsense. The science of the real world is so much more interesting than the jejune fictions spun by the tateandlylers who pay your salary. It’s all rather sad.

      The one positive thing I can say about homeopathy is that it is a useful testbed for showing how people can kid themselves into thinking a null intervention has biological effects; this is a lesson that practitioners of real medicine need to take to heart by applying proper scepticism to claims of efficacy in all types of therapy. But enough is enough, and we really don’t now need actual homeopaths to keep driving that message home.

  • Robert Mathie,
    Congratulations on your achievements and prodigious output. I specifically congratulate you for your intricate filter design that has managed to detect a very weak signal that is deeply buried in both stochastic noise and experimental errors — this is a truly remarkable achievement. It reminds me of the SETI project, which managed to have one Eureka! moment.

    Unlike yourself, I have no expertise in physiology, however, one area of my expertise is test & measurement. If you give me multiple datasets, and tell me what you expect to find in the data, then I could easily design a set of filters to extract the signal(s) that you are looking for.

    But being the generally incompetent ass that I am, I’ve always had my work independently tested against the null hypothesis well before attempting to deliver my work to my taskmaster and/or client(s).

    In my few and very narrow fields of engineering, peer review most definitely does not mean: a consensus reached between one’s peers in that field. It means: a consensus reached between experts in other (even if remotely) relevant fields who always start from the position of rejecting the hypothesis and properly questioning the supplied evidence.

    One very obvious difference between homeopathy and a computer is that one doesn’t work and the other works quite well. This difference in efficacy is caused by the difference between clinging to outdated belief systems and rejecting them in favour of the scientific method.

    I shall leave you, Robert Mathie, with three things to ponder:
    1. The hard disk drive in your computer has an error rate in the region of 1E-15 to 1E-18 throughout most of its working life. The only reason that we know this, and that we rely on these devices, is because they were designed and tested using the scientific method rather than by the principles of Samuel Hahnemann as subsequently refined by the advocates of ‘water memory’.

    2. Demonstrate to the JREF that homeopathy is efficacious — you will not only receive a large sum of money, you will be eligible for a Nobel Prize.

    3. And last, but not least, please clearly demonstrate that your PhD fully included critical thinking skills. Thus far, you have amply demonstrated that either it didn’t or that you have forgotten these skills.

    • In my few and very narrow fields of engineering, peer review most definitely does not mean: a consensus reached between one’s peers in that field.

      I think in homeopathy there’s a mis-spelling. It’s pier review: The work is sent down a blind-end directed out over the ocean to no useful destination.

    • Pete Attkins and Simon Baker quite succinctly sum up the matter of current homeopathic research.
      I have been struggling to understand what causes a person with formal scientific training (I still presume it is genuine) and high ambitions for research to completely ignore the fundamental laws of nature, logic and reasoning.
      I do not think I will ever understand it completely, but I have learned a lot from this discourse and from the perusal of provings and other demonstrations of the insanity involved in much of the make-believe manufacture of homeopathic nostrums.
       
      One ubiquitous driving force fuelling any marketing of nonsense is of course money. An awful lot of people and many multi-million-dollar mega-industries rely on shaken water for their income. Homeopath organisations help their disciples by producing advertising material of all kinds, reliant on misinformation like this misleading tweet, that undoubtedly helps to reinforce the religion. Research-like efforts like the one being discussed here of course provide ideal material for the ever-ongoing campaign.
      There is one piece of homeopathic writing that I think is quite illustrative of the central error, and of the mindset and mechanisms behind it, that have helped Dr Mathie and other sciency homeopaths up the wrong research path.
      An article from 2012 entitled “Plausibility and evidence: the case of homeopathy” by Lex Rutter was co-authored by our own RT Mathie and the well-known Peter Fisher.
      In it, a group of research-happy homeopaths, happily oblivious to their own a priori fallacy, declare that:

      Prior disbelief in homeopathy is rooted in the perceived implausibility of any conceivable mechanism of action. Using the ‘crossword analogy’, we demonstrate that plausibility bias impedes assessment of the clinical evidence. Sweeping statements about the scientific impossibility of homeopathy are themselves unscientific: scientific statements must be precise and testable.

      Further…

      We concur with Hansen and Kappel that the disagreement concerning the interpretation of reviews of randomised controlled trials (RCTs) is rooted in prior beliefs and their influence on the perception of evidence. We do not concur, however, with their assumption that the homeopathy community’s positive view of the evidence is due to a rejection of the naturalistic scientific outlook. We ourselves, for example, do not reject any part of the naturalistic outlook.

      Blind to the blatant incongruity of their own arguments, they turn the principle of testability on its head and reject any possibility of absence of effect of shaken water. They (try to) lecture the scientific community on their “naturalistic” conviction that despite absence of demonstrable plausibility, effect or mechanism of action (all of which they in effect admit), it must be there because… well, they want it to be there and a lot of people have seen and felt it and… so on.
       
      As expected, this epic article became the subject of considerable critique and ridicule.

      • And in all of those mental contortions that ToothFairy Scientists put themselves through to try to justify their pursuit of homeopathic research they simply ignore the point that I keep returning to and to which Robert Mathie has steadfastly failed to respond.

        Homeopathy is claimed to be a complete, reliable and robust system of medicine that can achieve everything that conventional medicine does and more. Homeopaths are liberal with their use of the word ‘cure’. They regularly claim cures of many dread diseases. If any of that were true, even a handful of well-observed case reports would be enough to establish homeopathy as a valid hypothesis, and small controlled trials following up on those case reports would provide very strong evidence despite our views, rooted in basic science, about the low prior probability.

        No such strong evidence exists and the homeopaths are left fishing for tiny signals in noisy data.

        However, once the Strong Homeopathic Hypothesis has been overturned, it is completely legitimate for us to invoke prior probability and in homeopathy the considerations of basic science are quite sufficient to tell us that homeopathic clinical trials are not a legitimate pursuit. Indeed they risk giving a cloak of respectability to a null intervention.

        In no other field of human activity is so much effort devoted to shoring up a defeated hypothesis. We can see that there is nothing in the hypothesis that merits this effort so the answer must lie in the people doing it and their motivations.

        The doggedness of their pursuit and their absolute inability to accept correction in an area that directly affects human morbidity and mortality is what makes them so dangerous. In rich societies we have effective medical systems to which people routinely turn and homeopaths are little more than clowns providing amusement but causing no great harm. But in Africa we see what they would like to do if there are no restrictions and the population has limited access to real medicine. There the homeopath is not fleecing the worried well but preying on sick people with serious diseases. It is quite despicable.

  • By the way, Robert. I do have a modicum of experience of cases treated with homeopathy. They all stayed sick and some died. It’s not objective controlled data but it’s interesting to see the contrast between the public claims where the word ‘cure’ is bandied about with gay abandon and the routine reality.

    Just in the last two weeks I was confronted with a renal-failure cat that had recently gone blind through retinal detachment. It was presented in a moribund, emaciated state, and the owners excused this by saying that it had been treated with homeopathy for a year. We manage renal failure very well these days and we routinely control the hypertension that leads to retinal detachment; I have never yet seen such a case in an animal that had been started on treatment. Sadly, assuming that nothing more effective could be done, they watched their blind cat for two weeks.

    Homeopathy gives an excuse for accepting animal suffering, and this makes me deeply, viscerally angry with those who promote such idiocy. In the human medical world we have that lethal tropical parasite, the homeopath in Africa, engaged on their mission of grief tourism, pretending that they can treat HIV, TB and malaria. Examples like this are what really goes on once we boil this whole issue down to its basics, away from the abstracted rules of meta-analysis.

  • I must declare that I don’t have medical training, training in statistics, or the protocols of testing, so I have read this to see how the thread develops. I must also declare that I am a total rationalist and sceptic (in the true meaning of the word), and, although having failed mechanical engineering (in the days when most did), I do have an understanding of science. Right, that said, I have read this thread with some amusement and, mostly, horror. There has been a consistency, though, and for reasons which weaken one side of the discussion and not the other. Edzard wrote an interesting article about a study and, to my surprise, one of the authors joined in the discussion.

    The consistency has been rationality on the part of the critics of the study, resorting as they do to science, while the co-author has resorted to logical fallacies, defensiveness, and histrionics. Most of the latter has been couched in language of respectability but the intention is clear: to discredit those who question either the study or the credibility of the researchers. From all I’ve read (just a bit), science holds itself open to question, because that is the nature of science. If there are serious questions, they must be addressed, because to do otherwise is unscientific.

    My first problem with homeopathy is that it is based on a nebulous premise that has remained untested, and therefore unproven, for over two hundred years; the second is that it ignores Avogadro’s Law. Neither of these has been addressed by Robert Mathie, so the premise of any study is flawed from the outset. Rather than attacking his critics, he should address these aspects. Despite claims to scientific thinking, the only thing Mathie reveals is a lack of it.

    Until Mathie demonstrates that he and his co-authors have an understanding of basic science, all of their studies will be for nought. Ignore the basics and you get what you deserve: derision. I apologise if this post lacks sophistication, but I also trust it lacks sophistry, something Mathie assumes is a worthy skill.

  • The (flawed) paper concluded:

    Conclusions
    Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

    How this has just been interpreted by a homeopath, Nick Thompson, on Twitter:

    STOP PRESS: High quality #metanalysis evidence shows #homeopathy to be significantly better than #placebo http://buff.ly/17KH8NI

    (It was also Tweeted by the British Association of Homeopathic Veterinary Surgeons.)

    When challenged to say where the paper said that, he replied:

    No! ‘Statistically significant treatment effects’ is what it says. And its a good study. I rest my case. #homeopathy

    • As much as you can try to rationalise with true believers, there is a screw loose somewhere in them, and this cheering for what is, essentially, a negative outcome is to be expected. It scares me that they are allowed to vote.

    • For me, homeopathic research is “Going through the motions”.
       
      I learned this term when I was a surgical resident at a large county hospital on the east coast in the eighties. We saw a lot of trauma and terrible situations. Of course we tried everything and went on with life-saving efforts for as long as any hope was left, and then some. Like a case I will never forget: a ten-year-old girl we did open-chest CPR on in a desperate effort to get her circulation going. I had her little heart in my palm, massaging it much longer than was really meaningful.
      But we also tried resuscitating cases that were clearly hopeless before they even came through the doors. Still, we always felt obliged to “go through the motions”, as it was called, in effect pretending to try to resuscitate hopeless cases, e.g. an octogenarian with severe head trauma and pre-hospital cardiac arrest. We knew it was utterly hopeless but still dutifully “went through the motions” of resuscitation for a while.
      We did not do this to be able to brag about it or make up reports of apparent success. Part was being able to say to the relatives that we had tried anyway, part was for a supposed training benefit, and part, of course, was due to the cover-your-ass culture.
      When I moved back to Europe it was somewhat difficult to adapt again to the more rational, realistic approach.
       
      Anyway…
      Homeopathy is a two-century-old corpse, embalmed and dressed up, partly for religious devotion but mostly for economic reasons.
      Homeopaths “go through the motions” of research to reiterate the appearance of evidence.
      They need to fuel their confirmation bias, to convince themselves that a lot of “research” has been done and that, because some of the results point in a “positive” direction, this must be proof that shaken water has a genuine effect.
       
      Here is another Tweet that demonstrates the real purpose of the hand waving called homeopathic research:

      British Homeopathic
      @bhahomeopathy
      Reliable clinical evidence shows individualised #homeopathic remedies are twice as likely as placebos to be effective http://goo.gl/yjucMr

       
      My reply was: “No, not reliable (with a quote from this blog post), and no, that’s not the conclusion of this analysis”.

  • Edzard, we have used an efficient, well-recognised, and commonly used scientific technique to answer the question we posed. The answer we got is what we reported, cautiously and without biased opinion, in our paper. We found a small treatment effect that was robust to sensitivity analysis based on reliable evidence, and that was similar to Shang’s findings for the same type of homeopathy trials. It is also robust to allowing the use of original authors’ ‘primary outcome’ if the data for our own selected ‘main outcome measure’ were not extractable; the details can be seen in a document now posted on the BHA website (http://www.britishhomeopathic.org/wp-content/uploads/2015/01/BHA-16-Jan-2015.pdf). Being in the business of clarifying facts, we shall continue to publish our research results – whether favourable to homeopathy or not – in the normal objective and scientific way, without fear from those who resort to ad-hominem and anti-scientific comments in the face of data they do not like.

    • publish our research results – whether favourable to homeopathy or not

      Hmmm.

      I see you have still not replied to my point about the disjunction between the grand claims made for homeopathy and your persistent fiddling in the statistical margins of small and rather elderly trials of dubious quality.

      Perhaps you will now address this rather more relevant point.

    • Dr. Mathie,

      In your systematic review you found the OR of the pooled ‘reliable evidence’ to be 1.98 with 95% CI 1.16 – 3.38. The OR for the primary outcome of the White paper was 1.08 (CI 0.47 – 2.48). Including this in your selection of reliable evidence would yield a pooled OR of 1.66 with CI 1.06 – 2.60, as you disclosed in the document you linked.

      What would happen if you included the Walach paper in the selection of reliable evidence, with the OR of 0.59 and CI (0.28 – 1.28) that Shang was able to extract (notation adapted to your usage)? This would spoil the statistical significance of your outcome, wouldn’t it?
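
      To make my question concrete, here is a minimal back-of-the-envelope sketch (fixed-effect inverse-variance pooling of log odds ratios on the published summary figures, not a re-run of your actual analysis) of how much one such trial could move the pooled estimate:

        import math

        # Rough fixed-effect pooling of two summary odds ratios (illustrative only).
        # Each entry: (OR, lower 95% limit, upper 95% limit) as quoted in this thread.
        trials = [
            (1.98, 1.16, 3.38),   # the pooled 'reliable evidence' result, treated as one estimate
            (0.59, 0.28, 1.28),   # Walach, as re-expressed from Shang's extraction
        ]

        num = den = 0.0
        for or_, lo, hi in trials:
            se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE recovered from the 95% CI
            w = 1 / se ** 2                                   # inverse-variance weight
            num += w * math.log(or_)
            den += w

        pooled = math.exp(num / den)
        ci_lo = math.exp(num / den - 1.96 / math.sqrt(den))
        ci_hi = math.exp(num / den + 1.96 / math.sqrt(den))
        print(f"pooled OR = {pooled:.2f} (95% CI {ci_lo:.2f} to {ci_hi:.2f})")

      With these particular summary figures the pooled interval spans 1 (roughly OR ≈ 1.3, 95% CI ≈ 0.86 to 2.05), which is exactly the concern I am raising; a proper re-analysis would of course pool the individual trials rather than a combined summary estimate.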

  • (2) The target of the pilot study was not a scientific one but a marketing issue. If you had your share of positive notice in the public – what is to be gained by the additional effort?

    Funny you should say that, but a few years ago I engaged in email correspondence with a conventional practitioner who had foolishly entered into a trial of homeopathy with an ardent supporter.
    I warned him that their ‘pilot’ study was guaranteed to yield meaningless results, that these would be seized upon by homeopathic advocacy groups and spun to look as if useful treatment effects had been found, and that no definitive study would then follow.
    He got very irate, was horrified that I doubted the motives and honesty of his collaborator, and broke off the exchange.
    In the event I was proven right on each point. No meaningful effects were shown. The study was given a huge positive spin in publicity material and no definitive study has been conducted. Indeed, I believe the effort was formally abandoned, leaving just that pilot study as a permanent fixture in the literature.
    So, call me a bit cynical when I see homeopaths pretending to carry out science. I think my cynicism is justified.

  • I wonder whether Robert Mathie is coming back…

  • Hadn’t seen this;

    http://www.nightingale-collaboration.org/news/169-nhs-lanarkshire-to-end-referrals-to-glasgow-homeopathic-hospital.html

    until just now.

    Mathie’s review was too little, too late, (and too wrong) to save homeopathy on the NHS in Lanarkshire.

  • Simon: We don’t make ‘grand claims’. Our paper makes reasonable, opinion-free, statements based on factual evidence: ‘Two of the three trials with reliable evidence used medicines that were diluted beyond the Avogadro limit. Our pooled effects estimate for the three trials, therefore, is either a false positive or it reflects the relevance of new hypotheses about the biological mechanism of action of homeopathic dilutions [22,23].’ By the way, I don’t recall seeing your reply to my point for discussion that Shang’s OR for N=6 conventional medicine trials is very similar to our OR for N=22 or for N=3 homeopathy trials. Small effect sizes are not uncommon in medicine, whether the intervention’s mechanism of action is known or unknown: we are not ‘fiddling in the statistical margins’.
    Norbert: Even discounting the non-parametric data presentation of the Walach paper, which prevented data extraction for our analysis, it failed to achieve low risk of bias in a total of four domains of assessment: I explained this on 6th January. So it is nowhere near our required standard for reliable evidence to enable its inclusion in that sensitivity analysis. We used objective, protocol-based, prospective inclusion criteria throughout, uninfluenced therefore by whether a given trial had previously been noted by anyone as ‘positive’, ‘negative’ or ‘non-conclusive’, or whether Shang had somehow managed to derive a distribution in a case of non-parametric data presentation.

  • Simon: We don’t make ‘grand claims’.

    Are you being deliberately obtuse? You know full well the claims that are made for homeopathy and I did not say that you personally made them. I find it hard to believe that you sit in a hermetically sealed room at the BHA isolated from the activities of homeopaths.

    By the way, I don’t recall seeing your reply to my point for discussion that Shang’s OR for N=6 conventional medicine trials is very similar to our OR for N=22 or for N=3 homeopathy trials.

    Your recollection is correct. I think I have made it clear that fiddling in the statistical margins is beside the point. There are weak effects with some real pharmaceuticals. We know that. But real medicine generally has high prior probability and homeopathy does not, which affects the interpretation, though you won’t accept this. But it doesn’t matter, because there are also many real drugs that have dramatic effects, and there are homeopaths constantly claiming big effects on serious diseases with their little sugar pills, so marginal fiddled effects really are uninteresting. But the curious thing is that these strong claims for homeopathy are not tested in the objective, reliable literature. Instead we get the feeble stuff that you keep meta-analysing as if some new truth will emerge if you torture the numbers enough. Actually, that’s probably not quite true. I suspect that when homeopaths set up their trials they are expecting the dramatic results that they routinely claim to achieve in routine daily practice. The fact is, however, that they end up with feeble results and you try to knit the fog into something more substantial by churning it through meta-analysis.

    we are not ‘fiddling in the statistical margins’.

    You may think that you are not. I think the whole substance of this page rather indicates that you are.

    I see you have not responded to my comments about the enthusiastic PR spinning of feeble results that we see so often from homeopathic organisations. Perhaps you will offer your thoughts. I may come up with some concrete examples if you like.

  • Robert

    I think it is time for you to give an explicit answer to a question that has been implicit in all the foregoing.

    Do you agree that homeopaths make frequent claims for cures of serious diseases including cancer, HIV, TB, malaria, rabies and Ebola?

  • Norbert: The Jacobs 1994 paper gives sufficient information to judge low risk of bias in domain I and to enable data extraction for the endpoint of the trial. The Walach 1997 paper does neither.
    Simon: Yes, some individual homeopaths make claims for cures of serious diseases including cancer, HIV, TB, malaria, rabies and Ebola. These claims are anecdotal and do not contribute in any way whatsoever to my systematic review work on peer-reviewed RCTs.

    • @Robert Mathie
      There are some grand claims here:
      http://www.britishhomeopathic.org/how-does-homeopathy-work/
      “One of the leading current proposals for how such ‘ultramolecular’ dilutions work is that water is capable of storing information relating to substances with which it has previously been in contact.”
      “The Swiss chemist, Louis Rey, found that the structure of hydrogen bonds in homeopathic dilutions of salt solutions is very different from that in pure water.”
      “Viewing the evidence overall, there is some experimental support for the idea that ultra-molecular homeopathic dilutions may possess unique physical properties and can exert physiological effects.”
      Just a couple of claims that homeopathic solutions are different to normal dilutions. More fanciful nonsense from your own website.

      Why not go back to the basics of homeo-witchcraft and prove that “proving” works, or that successive dilutions make an ingredient “stronger” even when none exists, or that shaking something makes it more effective? I’ve tried shaking my beer but it doesn’t get me drunk any faster.

      • ” I’ve tried shaking my beer but it doesn’t get me drunk any faster.”

        Well, whatever you’re doing, it seems to be working.

        • @jm,
          Righto, as I assumed a degree of intelligence on the part of posters, I didn’t think this would need any clarification. I am, however, wrong, so the explanation, specifically for you jm, is this from the same website:
          “Succussion might also be responsible for creating very tiny bubbles (nanobubbles) that could contain gaseous inclusions of oxygen, nitrogen, carbon dioxide and possibly the homeopathic source material.”
          Succussion is the fancy homeo name for shaking the sugar water to “potentise” it. That was the basis of the joke about shaking the beer, to increase its strength.

          I hope I’m not forced to draw a picture with coloured crayons for other posters.

        • I just have to say that I love reading this blog. The random people and comments never disappoint.
          Thank you Edzard for providing the platform and thank you everyone that comments.

    • Robert

      Is it acceptable for homeopaths to claim cures of the serious diseases I listed?

      I wonder why no reliable published evidence exists for these cures while you continue sifting through the statistical noise looking for weak effects at the margins.

    • Robert Mathie, the BHA website makes bold claims for homeopathy plus it gives MEDICAL ADVICE e.g. (titles of PDF documents on the website):
      Flu advice.
      Homeopathy for animals — getting treatment for your pets.
      Homeopathy and dental care — your guide to treatment.
      Homeopathy for pregnancy and childbirth.
      Homeopathy and immunisation.
      Homeopathy for babies and children.
      Homeopathy for eczema.

      The BHA boasts: “There are over 200 articles on this website packed with useful information, written by doctors and other healthcare professionals who practise homeopathy.” These articles would seriously misguide a layperson seeking treatment for a medical condition because HOMEOPATHY DOES NOT WORK, it is only a theatrical placebo.

      Here’s an extract from the influenza article:
      “Homeopathy, however, has a lot to offer, both in the prevention and treatment of influenza. A homeopathic remedy which has been proven in clinical trials to reduce both the duration and intensity of attacks of influenza is Oscillococcinum (also known as Anas barb), which has been used in France for many years and can be bought at nearly any chemist in France. It is not licensed for sale over the counter in the UK but can be prescribed by a registered medical practitioner for individual patients.

      Homeopathy, therefore, can be extremely effective in both the prevention and treatment of influenza and as always, is a cheap and safe alternative to the allopathic approach. Perhaps homeopathy should be the approach promoted in the National Health Service for the treatment of ’flu rather than the new and very expensive anti-viral drugs that are coming into vogue!”
      http://www.britishhomeopathic.org/bha-charity/how-we-can-help/conditions-a-z/influenza/

      You wrote: “Simon: Yes, some individual homeopaths make claims for cures of serious diseases… These claims are anecdotal and do not contribute in any way whatsoever to my systematic review work on peer-reviewed RCTs.” What? The BHA is claiming that homeopathy is efficacious for a wide range of diseases, some/many of which are serious diseases. Are we to conclude that the plethora of claims being made by the BHA are only anecdotal, including your systematic review and your comments here?

      • Those are useful points, Pete.

        It seems to me that we see a very drawn out exposition of the No True Scotsman fallacy from Robert Mathie.

        Where does the ‘true homeopath’ reside? I think he or she may be a mere myth.
        🙂

    • I see. Apparently I tend to see things in Walach’s paper that are not present. I should become a researcher in homeopathy then – or this rare talent of mine for seeing things that others cannot will go to waste.

  • You may be interested in the Society of Homeopaths’ Research Ebulletin for February 2015, which they devote to the Mathie et al. paper. It’s well worth reading in full (it’s quite short), but it includes this:

    One such study where this selection method caused a study to be excluded was by Edzard Ernst’s team investigating the effectiveness of adjunctive homeopathy for asthma sufferers. That team had selected ‘wellbeing’ as their primary outcome measurement rather than asthma severity, meaning the study couldn’t be included in Mathie’s final analysis due to insufficient data being extractable for the asthma severity outcome. Needless to say Ernst took umbrage at his trial’s exclusion.

    However Mathie has now performed a sensitivity analysis where the primary outcomes selected by the researchers are measured rather than the WHO ones (http://www.britishhomeopathic.org/wp-content/uploads/2015/01/BHA-16-Jan-2015.pdf). With Ernst’s trial included in the meta-analysis, the result still favours homeopathy: OR = 1.66 (CI, 1.06 to 2.60).

    • The following is a continuous text from the source @Alan quotes, which I have broken down for “translation”:

      The published evidence is beginning to accumulate showing how important it is
      to test homeopathy as practiced by homeopaths.

      => It is the homeopaths, not the remedies that have an “effect”.
       

      Shang’s review shows how
      unlikely non-individualised remedies are to be effective.

      => The remedies do not work. Non-individualised means a homeopath was not personally involved in making the choice.
       

      Mathie’s review shows
      how likely individualised remedies are to be effective.

      => Eh… incorrect inference. Again it is not the remedy but the attention of the homeopath that exerts a detectable “effect”. ‘Individualised’ means the homeopath personally entertains the patient with empathy and attention at an extended session and then chooses remedies from the heap on philosophical, not biological grounds. The system of choosing ‘individualised’ remedies is based on a method of “testing” called ‘provings’ in homeospeak. The choice of remedy is totally irrelevant to the purported effect.*
       

      Homeopaths have long
      argued that research doesn’t represent what they do.

      => A seemingly insightful revelation coming from a homeopath. However, this is not what it looks like; it represents ‘special pleading’, i.e. homeopaths argue that scientific research is not the correct way to find out about homeopathy because the kind of science necessary has not been developed – or whatever…?
       
       
      * The choice of remedies in ‘individualised’ homeopathy is based on so-called ‘provings’, which are the collated results of homeopaths’ own special testing ceremonies, in which a small group of subjects take the remedy and then write a comprehensive journal of everything that subsequently happens, including dreams, thoughts, desires and fears, for days or even weeks afterwards. These journals are then collated, culled and sorted into a system of symptoms that forms the basis for choosing a remedy that seems to fit what the homeopath finds characteristic of the patient’s complaints.
      This approach to testing remedies can certainly be considered an “alternative” method of fact-finding.
       —–
      As I am slothfully procrastinating other chores at the moment, and the subject fascinates, I beg to elaborate a bit on the subject of ‘provings’. I hope I am forgiven for this puny attempt at shedding light on an important aspect. I find the following quite relevant to the subject of this thread. At least it may prove informative for the stray audience not previously privy to the less discussed details of homeopathic science and knowledge.
       
      If you have the time and energy, reports of typical homeopathic remedy testing, i.e. ‘provings’, can be found online for anyone to evaluate.
      I warn the audience though, not to engage in the perusal of such documents while eating or drinking as they can on occasion induce explosive sentiments, generally joyful ones.
      Also, be warned that the text may contain graphic representations of the provers’ sexual experiences and thoughts. Uncensored details of bodily functions usually effectuated behind closed doors may also be on record, potentially affecting appetite and mood.
       
      One should keep in mind when evaluating the utility of this method of testing potential healthcare-products that the subjects involved in the proving process (the ‘provers’) are not randomly chosen from the population but regularly consist of a small convenience sample of homeopathy devotees – often students of the particular scholar responsible for the proving ritual. The provers are consequently, at least in a mental and cultural sense, far from representative of the target population, which inevitably calls into question the applicability of the proving results for individualised remedy choice.
       
      An example of an unusually concise (for a proving) report is one of a housefly remedy that starts at page one here. Note that this is not a remedy intended for killing houseflies but derived from the particular insect.
      For some further examples,
      here you will find three links to proving reports. Be sure to at least browse the one on the remedy based on the light reflected from the planet Venus[sic]. Remedies are being made and used from the most interesting and eccentric sources. They all show effects of different kinds in provings and are therefore, according to the principles of homeopathy, applicable. I have not found one report of a remedy rejected after proving. The elaborate system of collating, culling and organising the results seems to be a demanding task. I gather the procedure must be based on knowledge extracted from the voluminous volumes of Hahnemann’s writings. But to a layman reader like me, the culling and sorting of the provers’ journal entries, i.e. the “raw data”, appears quite subjective and arbitrary.
      An interesting observation from reading a number of proving-protocols is that the remedies often seem to have severely “negative” effects on the provers, at times necessitating the administration of ‘antidote remedies’. How these are found and chosen is beyond my understanding but they invariably seem to work to satisfaction.
      Recognising that remedies contain no detectable or deducible (by modern, scientific methods) functional ingredients, reading the listings of journal entries (raw data) recorded by the provers reinforces the universal impression I have developed from the perusal of many proving protocols, that many or most of the subjects involved in homeopathic provings must be eccentric, high-strung, bewildered and even at times, mentally unstable individuals.
       
      The scientific efforts presented by the ‘Society of Homeopaths’ and discussed here appear uncannily incompatible with these universally applied methods of selecting, testing and verifying the material being “researched”, i.e. the remedies.
      It would therefore be extremely interesting to see what an experienced homeopath scientist like Robert Mathie would have to say about the validity and applicability of homeopathic provings as the experimental methodology of choice for finding and classifying remedies for use in ‘individualised’ homeopathy.

      • Just while we are at it…
        I would invite you to join in a little game. Following, you will find a small list of symptoms, and I would ask you to guess what the provers took. The list is taken from a PCT about the homeopathic proving process. I will post the solution in a few days.

        Here is the list:

        Mind – Difficulty concentrating
        Mind – Slight problems with language, stuttering
        Mind – My inner hectic feeling has gone completely
        Mind – I make mistakes, loose my stuff, afterwards feeling to be flying; I am thinking: will the others notice?
        Mind – Deaf; I loose oversight; clumsy when eating
        Head – Very strong headache, frontal, extending into the eyes with nausea
        Head – Headache, right, frontal, dull, pressing, extending to upper mandibles
        Eyes – Both eyes red, right worse than left
        Vision – Worse when reading or writing
        Vision – Improved again
        Ears – Left ear suddeenly free, I had not realised it was blocked
        Ear – Pressure in right ear
        Nose – Tickling, coryza, right worse than left

        This is the complete list as it was published.

        • @Norbert
          I will not break your little quiz yet but with a simple trick it was easy to find the answer. It proves to be a hilarious parody of a scientific “trial”! I guess it is difficult to find a better example of how homeopathic research in general is utterly ridiculous? 😀
          The author is, if I am not mistaken, actually well known for his efforts at proving the existence of healing effects procured by shaking water 😉

        • Okay, I see, this is all too difficult 😉

          When I first read this list of symptoms, I wondered why I spend money on whisky and beer. I would have guessed at a bottle of booze shared between a few participants, just to celebrate the end of the proving.

          No, Björn, not Mathie, if that is whom you thought of.
          This is the source (Table 1, bottom):
          Moellinger, H., Rainer Schneider, and Harald Walach. “Homeopathic pathogenetic trials produce specific symptoms different from placebo.” Forschende Komplementärmedizin (2006) 16.2 (2009): 105.
          Link: http://www.researchgate.net/profile/Harald_Walach/publication/24407006_Homeopathic_pathogenetic_trials_produce_specific_symptoms_different_from_placebo/links/0fcfd50e4668a31d26000000.pdf

          Ah, yes, I forgot to solve the quiz:
          These were the symptoms encountered after taking placebo, i.e. blank sugar.

          • Hehe… no, I was alluding to Harald Walach. You mentioned him several times in your comment that I linked to as a hint.
             
            I simply presented the entire list to my trusted friend “uncle Google” and he promptly presented at the top of his list this link:
            http://www.academia.edu/1489345/Homeopathic_pathogenetic_trials_produce_specific_symptoms_different_from_placebo._M%C3%B6llinger_H._Schneider_R._and_Walach_H._2009_Forschende_Komplement%C3%A4rmedizin_16_105-110

            This is an incredible example of tooth fairy science. That these eminent gentlemen do not see the futility of the whole effort is beyond comprehension.

            It seems they had 25 purportedly “healthy” MDs take super-diluted sugar pills of arsenic, table salt and something undefined they call Placebo. I would challenge the notion that 25 MDs who chose to spend time on a course in homeopathy can be healthy. At least not entirely sane.
            OK, so what.
            There is no control group that took nothing and wrote down their experiences. That would have been a good idea. The placebo might have been contaminated by water being shaken in copper pipes, or whatever?
            Perhaps they produced their “Placebo” the same way Nelsons homeopathic factory in the UK did when the FDA came to inspect. Nelsons apparently produce a placebo remedy in every sixth bottle:

            b. The investigator also observed for Batch #36659 that one out of every six bottles did not receive the dose of active homeopathic drug solution due to the wobbling and vibration of the bottle assembly during filling of the active ingredient. The active ingredient was instead seen dripping down the outside of the vial assembly. Your firm lacked controls to ensure that the active ingredient is delivered to every bottle.

            Strangely, no one ever complained or submitted a report of ineffective remedies from Nelsons. Nor of over-effective ones either. 😀

            OK…
            Walach et al. do not declare how they prepared the “placebo”. So what; the other remedies are by definition indistinguishable from pure sugar anyway, so if they are indistinguishable from the placebo…
            I digress.

            Lets attack the main question.
            What I cannot get my head around is why the authors did not see the logical absurdity in their setup:
            They are comparing two remedies with one another and a placebo. They had a carefully blinded symptom expert quantify the typicality of symptoms in the groups receiving the two ultra-avogadro-diluted remedies made from table salt and arsenic (or the sugar pills made to retain a distant memory of those chemicals).
            They ALSO produce/transform, in the same mysterious way, the heap of symptoms in the placebo group into a numerical representation?! Eh… sorry… what did you say the “symptom expert” used as a reference for “typical” placebo symptoms, to be able to count them?
            Does the magic book contain listings of symptoms typical for placebo remedies?
            Figure one shows mean values and error bars for placebo symptom counts. What do those values relate to? How is the typicality of remedy and placebo symptoms validated so they can be transformed into comparable numerical values?

            And then they pour all this into SPSS (an advanced statistics program that I use almost daily) and have it produce a P-value in a fancy-named calculation.
            Sure, I was once playing with a deck of cards when I was young. I divided it blindly into two heaps and got almost entirely red in one and black in the other. Nothing mysterious about that; the likelihood of that combination is as large as that of any other combination. The likelihood of getting the exact outcome I got in that little experiment is x. The same applies to the outcome of Walach et al.’s experiment: the likelihood of getting any other randomly selected outcome is also x, and so is any of the millions of possible outcomes. What I am saying is that the particular pattern in the count of “typical” symptoms they reported is very possibly due to chance, i.e. the outcome is random – if not simply affected by wishful thinking, as in poor Professor Benveniste’s case.
            If I had been able to repeat my card experiment (I have tried numerous times) and gotten the same or a similar outcome, then we could start talking about p-values. No, I had a lucky strike, that’s all. If Walach et al. had not had a lucky strike, would they have written it up in a fancy article? No, I guess not.
            This silly experiment of Walach et al. is in no way valid until they can demonstrate that the result is solidly reproducible. A result from a group of 25 split into three arms does not say diddly-squat as it is.
            Each arm contains only 8.33 homeopathy-student MDs (or maybe it was 9+8+8?). There is no way to rely on the power of such a puny study to give a trustworthy estimate, even if they used a non-parametric calculation.
            They would have had to at least do a crossover to see if the results hold after a washout period and switching the group allotment …
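
            To put the power point above in rough numbers, here is a small simulation sketch (the group size of 8 per arm, the assumed half-standard-deviation effect and the plain two-sample t-test are illustrative assumptions, not details taken from the paper): even if a genuine, moderate difference in symptom counts existed, a trial this small would detect it only a small fraction of the time.

            ```python
            # Rough power sketch for ~8 subjects per arm.
            # Group size, the assumed 0.5 SD effect and the plain t-test are
            # illustrative assumptions, not details from the Walach paper.
            import numpy as np
            from scipy.stats import ttest_ind

            rng = np.random.default_rng(0)
            n_per_arm, effect, n_sims = 8, 0.5, 20000

            hits = 0
            for _ in range(n_sims):
                placebo = rng.normal(0.0, 1.0, n_per_arm)
                verum = rng.normal(effect, 1.0, n_per_arm)
                if ttest_ind(verum, placebo).pvalue < 0.05:
                    hits += 1

            print(f"Estimated power: {hits / n_sims:.2f}")  # roughly 0.15
            ```

            With so little power, a single nominally significant result is far more likely to reflect chance and selective write-up than a reproducible effect.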

            I could go on ranting but I just realised it is half past midnight… best to take a swig of the mineral water bottle and hope it was in contact with a little caffeine at some point and got thoroughly shaken on the delivery lorry on its way to the store…
            Good night.

      • ‘Research doesn’t represent what homeopaths do’. Homeopaths prescribe ‘individualised’ remedies according to the totality of the patient’s symptoms, not according to diagnosis. The majority of trials to date do the latter – i.e. give everyone the same remedy for the same condition. Homeopaths would consider such studies unlikely to demonstrate results, due to poor external validity.
        Trials testing individually prescribed remedies are closer to representing homeopathy as practiced by homeopaths in the real world.

        “It is the homeopaths, not the remedies, that have an ‘effect’.” In the studies in Mathie’s review, both arms receive homeopathic consultations, so it patently can’t be the homeopaths having the effect. The only difference between the groups is the remedies (placebos or active).

        • please read again: in these trials, treatments are individualised and not standardised according to diagnoses.

          • Agreed. These trials are individualised. I was explaining why the majority of trials (41 individualised v 96 non-individualised) don’t represent what homeopaths do.

          • So, Philippa, do you reject all trials of non-individualised homeopathy that have been promoted by homeopaths because of their apparently positive results?

            What are you personally doing to correct those misguided homeopaths?

        • Philippa, what homeopaths actually “do” with their clients during their one-to-one private consultations invariably goes far beyond the practitioner’s level of competence in 21st Century medicine — including pharmacy, clinical psychology, and psychiatry. FFS, most homeopaths aren’t even qualified in Counselling & Listening Skills level 1.

          Individualizing homeopathic remedies to the client is absurd: one just has to study the homeopathic proving of each remedy to conclude that the whole system of homeopathy is nothing other than the hopelessly deluded preying on the most gullible and/or most vulnerable members of society.

          Even the dreams of the deluded provers form a significant part of the description and application of each homeopathic ‘remedy’. The only accolade I can think of giving to individualized homeopathy is for being by far the most absurdly deluded nonsense in the whole field of sCAM.

        • Philippa Fibert – some information about Ms Fibert
          http://www.cease-therapy.com/make-appointment/practitioner/philippafibert

          She works for, or in association with, “Dr” Tinus Smits who claims that vaccinations “in children and adults can result in a variety of chronic health problems.” Dr Smits also believes that vaccines contribute to the development of autism and claims success in “curing” autism.
          http://www.tinussmits.com/3734/home.aspx

          Fibert also believes that a range of problems, including ADHD, can be alleviated by homeopathy. I suspect that she has reached the status of “True Believer” and no amount of rational, logical discussion or scientific evidence will dissuade her from the belief in witchcraft.

          • Frank, one would hope that Fibert no longer works for Smits, the latter having been dead for 5 years. His legacy is the lucrative branch of quackery through which the vulnerable and desperate (in this instance, the parents of autistic children) can be exploited.

          • @Stephen Tonkin
            Thanks, I didn’t realise that Smits is dead, having died at age 63 from cancer on 1st of April, 2010. One can only wonder whether he sought treatment from so-called “allopathic” doctors who, possibly, may have saved, or prolonged, his life, or whether he persisted with his own type of witchcraft as sole treatment. Is it tasteless to point out the irony of his date of death?

            Fibert’s website makes note of Smits without mentioning his death, so I assumed he was alive. I also assume that Fibert is indeed doing this; “His legacy is the lucrative branch of quackery through which the vulnerable and desperate (in this instance, the parents of autistic children) can be exploited.”.

            What horrifies me is that Fibert claims she is doing a Master of Science in homeopathy at a British university, and then going on to a PhD. That a university would offer such a thing must surely devalue every other qualification it confers.

          • I see plenty of UK unis that would do this sort of thing – not least Exeter !!!

          • @Frank, I too find it appalling, not only that academic institutions actively promote pseudomedicine, but also that some make life difficult for those on their staff who oppose such things.

            An “interesting” thing I saw on Fibert’s web page is that she has done an Advanced CEASE course that covered “dealing with aggravations, reatment (sic) of chronic diseases, especially autoimmune diseases and hormonal suppression, matridonal remedies”. Could such a course be completed in the mere weeks or months one would expect for an advanced medical course? Nope; in the surreal world that is pseudomedical magic, this course took – wait for it – a whole day!

          • Fibert is currently at the University of Sheffield: Philippa Fibert BEd (Hons, Cantab), BSc, MSc

            After a career working with children with special needs – first as a teacher (having studied English and Education at Cambridge University) then as a parent educator – I came across a treatment method more effective than anything I had encountered before. However few think it can work, apart from those who have experienced it first hand. So, to start untangling what is possible, I embarked on research into this area: a B.Sc [sic] in Homeopathy at Thames Valley University, where I conducted a literature review of the trials of Homeopathy for ADHD; and then a research M.Sc at Goldsmiths University examining the effectiveness of this treatment for children with ADHD. I have come to Sheffield, to the School of Health and Related Research, to design and implement a pragmatic randomised clinical trial in order to provide more substantial information for public health decision makers about what homeopathy can achieve for these children, and how cost effective it is. ADHD trial funding has again been provided by the Homeopathic Research Institute and PhD fees have been covered by a University of Sheffield Faculty fee scholarship.

            Research Interests

            My PhD question centres around what information is required to demonstrate that Homeopathy is an effective intervention for children with behavioural disorders.

            I am interested in pragmatic trial design and outcome measurement for non-specific interventions. I am particularly interested in applying these to therapeutic interventions for children with emotional and behavioural difficulties.

            I am committed to contributing to provision of an evidence base for the therapies that parents of children with emotional and behavioural difficulties turn to for their children.

  • Homeopaths would consider such studies unlikely to demonstrate results, due to poor external validity.

    Strange. I never heard this statement when such a study came to a favourable conclusion. Can you tell me why that is?

  • I entered this debate to defend my paper in the best scientific tradition, not to respond to ad hoc mud slinging at every aspect of homeopathy, and myself.
    The paper in question by Mathie et al. addresses two aspects: it gets closer to representing homeopathy as practiced by homeopaths, and it addresses the question whether any positive responses are down to practitioner effects, which it cautiously refutes since either both or neither arm(s) received consultations: ie that variable was equally balanced between groups. And since the trials are blinded any Hawthorne effects are also equally distributed. That there are so few trials of this kind is an issue in need of redress.
    You ask ‘what am I am personally doing to correct misguided homeopaths’. I attempt to translate scientific findings to homeopaths; I attempt to translate homeopathy to scientists, and I attempt to translate clinical practice into research.
    You mention my training in CEASE therapy. For many years I worked in education in what was then termed ‘special needs’. I spent several years with St George’s Hospital psychiatric unit working with school refusers. We were a team of 16 well qualified professionals dealing with 4 children at a time, and yet to my observation we achieved very little, beyond the occasional diagnosis, for these children. Subsequent to that experience I trained as a homeopath and found it more effective than anything I had achieved in my previous work, but unacceptable to stakeholders.
    My research is fuelled by a desire to improve outcomes for this group, not from a priori beliefs in any particular therapy.
    Homeopathy presents a model of holism which may be helpful in the management of chronic conditions with multiple diagnoses, currently managed by many different medicines dealing with constituent parts. I believe we need an honest debate and pragmatic research into alternatives for chronic conditions particularly. Your personal jibes do not facilitate open discussion and re-inforce the defensive trenches which both sides dig into. Please can we stick to the debate at hand, which is Mathie’s recent paper.

    • James Randi — many years ago — suggested a trial method that would specifically test the efficacy of the medicines, regardless of all the other factors people throw in to complain about clinical trials in this arena.

      The idea is that you enroll a small panel of interested homeopaths. They see and treat their patients in their usual way, but each patient is randomized by a neutral third party, blind to the homeopath, to receive the actual prescribed medicine or an indistinguishable placebo labelled as the prescribed medicine. Thus, neither the patient nor the homeopath knows which medicine the patient is getting. If there are return visits, the consultation and treatment are completely as usual, but once again, a “treatment” patient gets the true homeopathic medicine while a “placebo” patient gets the sham — but only the third party knows which is which.

      After a suitable period of time (one month? two months? let the homeopath choose) the homeopath records whether he/she believes the patient was treated with the real thing or the placebo.

      A statistician can easily work out how many correct calls are required to indicate a result beyond chance.
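
      To illustrate how simple that calculation is, here is a minimal sketch (the panel size of 50 patients and the 0.05 threshold are illustrative assumptions, not figures from Randi’s proposal): it finds the smallest number of correct real-versus-placebo calls that pure guessing would match or beat less than 5% of the time.

      ```python
      # Minimal sketch: how many correct calls would beat chance?
      # n = 50 patients and alpha = 0.05 are illustrative assumptions,
      # not figures from Randi's proposal.
      from scipy.stats import binom

      n = 50        # patients whose allocation the homeopath must call
      alpha = 0.05  # one-sided significance threshold

      for k in range(n + 1):
          p = binom.sf(k - 1, n, 0.5)  # P(at least k correct by guessing)
          if p < alpha:
              print(f"{k} correct calls out of {n} "
                    f"(P = {p:.4f} under pure guessing)")
              break
      ```

      With 50 patients this comes out at 32 correct calls; the threshold simply scales with the panel size, which is the only design choice the statistician needs from the homeopath.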

      So far as I know, no homeopath has ever been willing to sign up for this procedure.

    • @Philippa Fibert
      “I entered this debate to defend my paper in the best scientific tradition, not to respond to ad hoc mud slinging at every aspect of homeopathy, and myself.”
      The irony of this is your stated intention to use “the best scientific tradition”, yet you ignore basic secondary-school science – including Avogadro’s number – in accepting the unproven methods of homeopathy. The so-called potentisation by banging the dilution on a leather-bound book (why not another cover or, for that matter, an electronic copy on a smartphone?) is nothing other than fanciful witchcraft.

      “The paper in question by Mathie et al. addresses two aspects: it gets closer to representing homeopathy as practiced by homeopaths, and it addresses the question whether any positive responses are down to practitioner effects, which it cautiously refutes since either both or neither arm(s) received consultations: ie that variable was equally balanced between groups. And since the trials are blinded any Hawthorne effects are also equally distributed. That there are so few trials of this kind is an issue in need of redress.”
      The ‘result’ is statistical noise; if this were research by a pharmaceutical company, how do you think it would be received by the scientific community if, like Mathie’s, it were trumpeted as a strong positive?

      “You mention my training in CEASE therapy. For many years I worked in education in what was then termed ‘special needs’. I spent several years with St George’s Hospital psychiatric unit working with school refusers. We were a team of 16 well qualified professionals dealing with 4 children at a time, and yet to my observation we achieved very little, beyond the occasional diagnosis, for these children. Subsequent to that experience I trained as a homeopath and found it more effective than anything I had achieved in my previous work, but unacceptable to stakeholders.
      My research is fuelled by a desire to improve outcomes for this group, not from a priori beliefs in any particular therapy.”
      This doesn’t demonstrate anything other than confirmation bias; you wanted to see a positive outcome and, bingo, you found it. You haven’t said what prompted you to study homeopathy, or why you didn’t find a problem with the complete ignorance of basic science it displays.

      “Homeopathy presents a model of holism which may be helpful in the management of chronic conditions with multiple diagnoses, currently managed by many different medicines dealing with constituent parts. I believe we need an honest debate and pragmatic research into alternatives for chronic conditions particularly. Your personal jibes do not facilitate open discussion and re-inforce the defensive trenches which both sides dig into. Please can we stick to the debate at hand, which is Mathie’s recent paper.”
      Mathie’s paper is fatally flawed and no amount of “debate” will save it, nor homeopathy from scrutiny that shows it too is fundamentally nonsense. Tooth Fairy science is a pointless waste of time. What do you hope to discover: that the basic laws of chemistry and physics, and the scientific method, are all wrong?

    • Homeopathy presents a model of holism…

      No it doesn’t. It doesn’t consider the patient as anything other than a collection of symptoms.

    • Philippa Fibert wrote: “I attempt to translate scientific findings to homeopaths; I attempt to translate homeopathy to scientists, and I attempt to translate clinical practice into research.”

      1. Homeopathy doesn’t work — please translate this scientific finding to homeopaths.
      2. The chief medical officer has already translated homeopathy to scientists and the general public by stating that homeopathy is ‘rubbish’.
      3. The practice of homeopathy is best described as witchcraft: attempting to translate this into research is an excellent idea; you’ll be able to reveal to the world many of the tricks used in the promotion and practice of this craft.

  • Oh dear.

    Mathie’s paper is being reported by What Doctors Don’t Tell You as: Homeopathy is twice as effective as placebo:

    Monday, February 16, 2015
    Homeopathy is dismissed as being nothing more than placebo; in other words, any benefits are entirely in the mind of the patient and not in the remedy. But a new review has found that homeopathic remedies are almost twice as effective as placebo.

    People given a homeopathic remedy are, on average, 1.98 times more likely to improve compared to those given only a placebo, or dummy pill. But the key is that the participants were given individualised remedies, which tend to be the type prescribed by a homeopath when face-to-face with the patient.

    Earlier studies—that found no benefit over placebo—had evaluated the effectiveness of general homeopathic remedies, such as arnica for muscle pains, or oscillococcinum for flu symptoms.

    In the new analysis, researcher Robert Mathie from the British Homeopathic Association considered 32 trials, covering 24 medical conditions, but decided to work with only three, which he found to be the most robust.

    (Source: Systematic Reviews, 2014; 3: 142)

    • @Alan
      This is in line with other utterings from the BHA. Like their tweet I referred to here.

      The raison d’être for research in homeopathy is of course to be able to use the results for promotion of said discipline.
      Being homeopaths, they seem to have subliminal powers to potentise these results.

    • Oh dear.
      Mathie’s paper is being reported by What Doctors Don’t Tell You as: Homeopathy is twice as effective as placebo:

      Q: 2 x 0 = ?

      [The answer is zero – for all practical purposes, for many diseases that matter, and for all diseases that can actually kill us.]

  • It’s interesting to read what Mathie himself says about his review. In the Winter 2014/2015 issue of the British Homeopathic Association’s magazine, Health and Homeopathy [sic], he comments on both this review and another ‘exciting development’ in homeopathic research, his systematic review of trials of veterinary homeopathy. It’s worth quoting this extract:

    The BHA’s reviews have begun to see the light of day in papers published in high-profile peer-reviewed journals. And the key findings — based on our strict and objective review methods — are cautiously positive in their interpretations. Certainly, they identify the RCTs that comprise reliable evidence. They point the way to new and better research. They reaffirm the RCT as a suitable method for homeopathy research. And our first review of human RCTs does appear to challenge some of the findings of the Shang paper.

    Our work that reports risk-of-bias assessments and the meta-analysis of 32 placebo-controlled trials of individualised homeopathic treatment was published in the journal Systematic Reviews in December 2014. Two months earlier, on 18 October, the Veterinary Record published our systematic review of placebo-controlled trials in veterinary homeopathy. Each of these journals is high-ranking in its own specialist field.

    The Systematic Reviews paper is the first of its type to present statistical evidence that medicines prescribed in individualised homeopathic treatment can produce effects measurably greater than those of placebos — which makes it something of a “landmark” publication. Our Veterinary Record manuscript went through two full rounds of peer-review before it was accepted by the editor, so it was thoroughly scrutinised! Its finding that the RCT literature in animals comprises a very small amount of reliable and positive evidence is a helpful clarification of the facts in veterinary homeopathy research.

    Importantly too, each paper concentrates mostly on the high-quality RCTs in its respective field. The fact that such high quality research evidence is the exception, rather than the rule, makes it not surprising that the conclusions of both these reviews are indecisive. And it must now act as a spur for new RCT research that satisfies the best of methodological and reporting standards. By the time this edition of your magazine reaches you, we shall all be aware of the impact these two papers are beginning to have in the scientific, medical, veterinary and homeopathy communities, as well as among the general public.

    Whatever opinions are then expressed about homeopathy’s clinical evidence, we know with certainty that our strong review methods, our clear scientific reporting and our careful conclusions will stand us in good stead. Additional facets of our programme of reviews in human and veterinary homeopathy are continuing apace.

    There is much that can be commented on here (although most of it is covered above), but elsewhere in the magazine, there is a report on NHS Lanarkshire’s decision to end referrals to the Glasgow Homeopathic Hospital (see NHS Lanarkshire to end referrals to Glasgow Homeopathic Hospital for background).

    The article is just the usual expected excuses: it says Dr Harpreet Kohli, NHS Lanarkshire’s Director of Public Health, has been accused by some of misunderstanding or misrepresenting the evidence and of being predisposed to withdrawing funding. The article itself accuses Dr Kohli:

    For it seems that Dr Kohli failed to properly consider the results from the latest systematic review and meta-analysis of individualised homeopathy conducted by a team led by the British Homeopathic Association’s (BHA) research development adviser, Dr Robert Mathie. This study found that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos. The paper was published in the peer-reviewed journal Systematic Reviews only days before the health board’s meeting. Nevertheless, the BHA went to great lengths to ensure that the board had a copy of the paper before it convened to make its decision.

    The Mathie paper was published long after the NHS Lanarkshire consultation and long after the report had been written. In fact, Mathie’s paper was published just three days before NHS Lanarkshire’s Extraordinary Board meeting.

    But, even if the Board had seen the paper and had time to consider it fully before making their decision, we’re left to wonder whether they would have been swayed by its conclusion:

    Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

    Would the Board have been doing its duty if it procured a service on such a flimsy basis?

    Of course, the homeopaths aren’t happy with this:

    The BHA, along with other supporters of the [the Glasgow Homeopathic Hospital], is already planning its first moves in the campaign to get NHS Lanarkshire to reinstate its homeopathy service.

    I wait with bated breath.

  • RCTs conducted without a clear idea of what they are testing, or of the peculiar features of the whole system being tried, would only yield deceptive results.

    The protocols suitable to test individual medicines against specific conditions are obviously not appropriate for checking the veracity of a system as such. The reason is: homeopathic cures especially are transformations. Here the subjective symptoms are the guides for selection of remedies. Hence – even otherwise – homeo cures are accompanied by a change of the patient’s perspective, which is why after the effect he has a tendency to under-report the effect, so as to make good his supposed error in his own original report of the subjectives.

    The objective diseases, on the other hand, aren’t causally well behaved. It is well accepted that a consequence is removed even before its own cause. This is a change of basis.

    Now the important question it begs is whether these two vital aspects are considered before review of the effects?
    The answer seems to be a definite ‘No’.
    It is also worth noting that the higher the quality of an invalid method, the more such flaws are amplified. This fact accounts for the positive results becoming negative or indifferent when rigidity is imposed.

    • There’s a saying: “Better to remain silent and be thought a fool than to open one’s mouth and remove all doubt” (I can never remember who said it originally – happy to be educated on this!). In his comment here, just like he frequently does on Twitter, Venkatesh has succeeded in removing all doubt.
      What evidence is there to support his assertion that all homeopathic “cures” are transformations? Even though it’s been shown to him repeatedly, homeopathy is just a placebo that cures nothing – except possibly a hole in the homeopath’s bank balance. He attempts to deceive the reader with classical pseudoscientific gobbledygook and frequently resorts to attempting to bamboozle with nonsensical mathematical equations and Chopra-esque use of the word “quantum”.
      His comment here makes no sense whatsoever and his assertions have no basis in reality. I think it was Stephen Law who used the phrase “pseudo-profundity” to describe the way that people such as Deepak Chopra use words to make it seem to the unthinking that what they’re saying is deeply intellectual in some way. Venkatesh is a repeat offender when it comes to pseudo-profundity, as can be seen from his comment here. His denial of the scientific method sadly typifies the way homeopaths attempt to deceive their customers (not patients, but possibly “marks”?).

    • RCTs are independent of the system of treatment, its mechanism and indeed its history; an RCT tests the claim as formulated in the null hypothesis. Therefore your first sentence is completely wrong. This makes the second paragraph irrelevant.

      The third paragraph contains undefined terms. Interestingly, no matter how you define ‘objective disease’, the paragraph makes no logical sense.

      As nothing of either substance or sense has been established, no question can be begged, even if you are using the phrase correctly.

      As you have been unable to establish any flaws, any conclusions on that basis are irrelevant and invalid.

      A clinical trial is in essence a very simple device. It answers the question of whether a treatment is more effective than no treatment. Homeopathy consistently fails that test when clinical trials are carried out properly. The only answer we have from homeopaths is to cherry-pick data by their results rather than their quality, or to attempt to discredit clinical trials with gibberish. Neither technique works.

    • @venkatesh on Thursday 01 October 2015 at 15:29

      Your “word-salad” explanation of why RCTs cannot measure the effect of homeopathy is about as valid as saying that you cannot use the same method for counting apples as you use for counting oranges, because they are different.
      Please respect the fact that you are discussing this with people of normal intelligence and mental capacity.

    • Venkatesh

      The mental contortions that you go through are extraordinary and all in the service of avoiding the obvious conclusion. Homeopathic pills do nothing. Customers might feel a bit better after spending an hour drinking tea with a homeopath but this has no significant effect on the natural history of their disease process.

      It’s all very sad that you devote your life to this nonsense.

  • I had expected more logical replies, but instead prejudice was held uppermost rather than reason.

    Paul Morgan contradicted himself by opening his mouth and showed that, even if he doesn’t understand (with due respect, he doesn’t), it is his prerogative to denounce whatever is said in support of homeopathy, maybe also because no one would dare to question him anyway. That this is not conducive to the scientific method has either escaped him, or he is too overconfident of being right.

    Aceloron is trying to cling to the notion that a trial of efficacy is a simple test involving only two possible outcomes oblivious that a third possibility of getting a bizarre outcome with no definite answer to the question of efficacy. Which can happen if the method is unsuitable for the purpose.

    And resorting to ad hominems does not serve any purpose other than proving their own hatred towards the system, which further widens the misunderstanding between skeptics and homeopaths.

    Casting doubt on the basis of the absence of a plausible mechanism is violative of fairness, because the very purpose of conducting clinical trials is to establish efficacy, which should then form a basis for research into such a mechanism. Also, I wish to reiterate that my attempt to hypothesise a plausible mechanism – the conservation of information, with this information acting on the Hilbert space to effect orthogonal vector rotations – has not been substantially shown improbable by anyone so far.

    I have given a model of homeopathy which is falsifiable (but this has not been done).
    I know it is quite difficult to comprehend for skeptics and homeopaths alike.
    But it is the model by which homeopathy is going to be known, if at all, in scientific circles, and upon which any and every further explanation or refutation would be based.

    I’m not likely to alter the scientific model of homeopathy since every law of homeopathy is consistent with it and only with it.

    The two paradigms I have raised here, change of the patient’s perspective and acausality, are only further explanation of the hilbert space behavior of remedy actions, and two major reasons why RCT outcomes are highly unreliable and variable in assessing the efficacy of homeopathic remedies.

    • venkatesh(sic) is, most likely, this person; https://www.practo.com/bangalore/doctor/dr-venkatesh-homeopath-2 who engages in homoeopathy in Bangalore, India.

      It is also the person who responded to James Randi’s million dollar homoeopathy challenge (https://sciencebasedpharmacy.wordpress.com/2011/02/06/win-1-million-if-you-can-prove-homeopathy-works/) in these terms;

      “venkatesh
      FEBRUARY 13, 2011 AT 5:21 AM
      HELLO,

      I , MYSELF IS A HOMOEOPATHY PRACTITIONER AT BANGALORE, KARNATAKA, INDIA.
      I CHALLENGE YOU THAT HOMOEOPATHY IS MOST RFFECTIVE HOLISTIC NATURAL TREATMENT WITHOUT ANY SIDE EFFECTS WHAT SO EVER.
      MY FATHER IS 86 YRS OLD,DIABETIC & HEART PATIENT SINCE 30YRS. HE WAS SUFFERING FROM SEVERE ASTHAMA AND WAS TO BE HOSPITALISED IN EVERY WINTER DUE TO SEVERE COUGH & COLD & BREATHING DIFFICULTY. HE WAS RELUCTENT TO TRY HOMOEOPATHY, THINKING HIS IS SERIOUS HEALTH CONDITION , EVEN HOMOEOPATHY ALSO CAN NOT HELP HIM. HOWEVER , I CONVINCED HIM AND MYSELF TREATED HIM WITH HOMOEO MEDICINES CONTINIUOSLY FOR 3 YRS , NOW EVEN AT THIS AGE HE IS FREE FROM ASTHAMA AND HIS DIABETES ALSO UNDER CONTROL. HE IS VERY HAPPY AND INSIST FOR ONLY HOMOEO MEDICINES.

      LIKE THIS I HAVE MANY CASES TO SHARE WITH YOU THAT MANY CASES LIKE SINUSITIS, THROAT PAIN, TONSILITIS, ULCER, GAS , R A , BP IS TREATED AND 90% CASES CURED EVEN !

      HOW HOMOEOPATHY WORKS?

      TO TEST PLEASE FOLLOW STRICTLY THE FOLLOWING PROCEEDURES :-
      PERSON WHO IS DISEASED – SHOULD FIRST KEEP HIM FROM TOTAL ALCHOHAL FREE, NO SMOKING, NO ALCHOHOL DRINKS, LIMITED NON-VEG CONSUMPTION, NO COFFEE AND FIRST HE OR SHE SHOULD BE CLEANSED HIS/HER BODY FROM TOXIINES AND FEW MORE DESCIPLINARY MEASURES SHOULD BE FOLLOWED BEFORE ADMINISTERING HOMOEO MEDICINES.

      PLEASE NOTE – NATURE IS GOD , GOD IS NATURE. RESPECT THE NATURE AND HAVE FULL FAITH IN NATURE CURE AND THEN TAKE NATURAL REMEDIES. THEN SEE HOW MIRACLES HAPPENS! POLLUTING YOUR BODY FROM ALL SORTS OF TOXIC MEDICINES AND JUNK FOODS, BAD LIFE STYLE, HOW YOU CAN EXPECT NATURE CAN HELP YOU?

      SCIENCE IS NOTHING BUT HIIDEN KNOWLEDGE IN UNIVERSAL POWER, WHICH HUMAN HAVE EXPLORED. UNVERSE ITSELF IS DIVINE POWER.

      YOUR SCIENTIFIC MEDICINES , CAN IT CURE EVEN ORDINERY COLD, ASTHAMA? CAN IT PREVENT HEAR ATTACK? SCIENCE CANNOT CURE TYPE-1 DIABETES !

      YES SCIENCE IS GOOD TO KNOW WHAT IS WHAT, BUT NOT PANECEA FOR ALL. ONE SHOULD SURRENDER , RESPECT AND HAVE STRONG FAITH N GOD AND UNIVERSAL POWER, THEN ONLY YOU WILL BE BLESSED.

      LAST BUT NOT THE LEAST – ANY DISEASE NEEDS TO BE ADDRESSED TO PHYSICAL, MENTAL, EMOTIONAL AND CAUSEL ASPECTS OF ALL HUMAN’S. THAT MEANS HOLISTIC NATURAL APPROACH, SURELY ANYONE WILL BE REWARDED WITH GOOD HEALTH.

      TEST AS PER MY ADVICE AND PLEASE TELL ME WETHER HOMOEOPATHY IS EFFECTIVE OR NOT?

      VENKATESH,
      BANGALORE
      INDIA”

      As s/he has invoked god, we can discount any sense in his/her posts, as is done with the equally senseless rantings of luminaries in this sphere, such as D. D. Palmer.

      As for the contention that “hilbert space behavior of remedy actions” may be responsible for the anecdotally observed “efficacy” of homoeopathy, this is the Wikipedia link for that branch of mathematics;
      https://en.wikipedia.org/wiki/Hilbert_space

      To quote, in part, from the page;
      “The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used.

      Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces. The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer)—and ergodic theory, which forms the mathematical underpinning of thermodynamics. John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Apart from the classical Euclidean spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions.”

      How this relates to the nonsense that is homoeopathy is beyond me, as is venkatesh’s understanding of this practical field of mathematics. Maybe it is that no one accepts the “quantum” explanation anymore and this is yet another straw at which to grasp.

      Note to venkatesh; don’t criticise anyone’s logic after this piffle.

    • ‘Aceloron is trying to cling to the notion that a trial of efficacy is a simple test involving only two possible outcomes oblivious that a third possibility of getting a bizarre outcome with no definite answer to the question of efficacy. Which can happen if the method is unsuitable for the purpose.’

      There are many clinical trials by homeopaths with low numbers and thus low power that produce no result; the homeopath will nevertheless conclude efficacy. This is why selection of high-quality trials is performed, so that a definite answer is obtained. The result shows no difference between the high-priced sugar and water of the homeopath and food-grade sugar and water. Nothing bizarre about it at all.

    • venkatesh said:

      I had expected more logical replies. but instead the prejudices were held uppermost than reason.

      You started it!

    • Oh yes, he didn’t win Randi’s one million, which would have made him very wealthy in India. Wonder why?

  • “Aceloron is trying to cling to the notion that a trial of efficacy is a simple test involving only two possible outcomes oblivious that a third possibility of getting a bizarre outcome with no definite answer to the question of efficacy.”

    Oh dear! Like so many touts for pseudomedicine, you misrepresent the nature of RCTs: there are only two possible outcomes, which may be summarised in lay-person’s language as follows:
    * POSITIVE: The substance being trialled IS shown to have a statistically significant benefit compared to the control substance.
    * NEGATIVE: The substance being trialled IS NOT shown to have a statistically significant benefit compared to the control substance.

    There are no possible outcomes other than IS and IS NOT: your “third possibility” falls fully into the latter category.

  • Frank has mistaken my identity. I’m not the venkatesh he is suggesting

    The discussion is about RCT results being deceptive, not about mechanism. I happened to show incidentally that homeopathy is amenable to Hilbert space formulations, only because someone pointed out the lack of a mechanism.

    Just citing a link to the Wikipedia section is again not logical, nor is a copy of the same, unless it is pointed out exactly how it supports your denial. On the contrary, I’m afraid it is suggestive of the legitimacy of my explanations.

    For your kind information, homeopathy is amenable to Hilbert space formulations due to the remedy relationships, and I have already shown this as follows.

    If A represents arsenic alb and B, belladonna, they are representable as vectors in an abstract vector space:
    A = (0, 1, 1)
    B = (0, -1, 1)
    their dot product will be as per

    A•B = Σ (i = 1 to n) Ai•Bi = A1•B1 + A2•B2 + … + An•Bn

    (0, 1, 1)•(0, -1, 1) = 0 - 1 + 1 = 0
    i.e. cos 90°

    If possible, and if understood, you may try to refute this logically.

    • This is pathetic.

      You have not shown that any dot product of two vectors shows anything about homeopathy. Some handwaving ignorance of the subject is insufficient. You have not shown that a vector can represent homeopathic water or sugar, or which vector should be chosen. In effect you have incanted some words and then claimed some nonsense.

      A Canticle for Leibowitz was recently broadcast on BBC Radio 4 Extra; you might usefully listen to it. It shows how such witterings as yours are nonsense.

    • Let’s examine what you’ve done… In an arbitrary 3-dimensional space, you have placed two vectors; neither of which occupy the first dimension (the first value is zero for each vector). Therefore, all you have is two vectors in a 2-dimensional space, which have equal magnitude and a phase shift of 90 degrees. Then you show us that they do indeed have a phase shift of 90 degrees. The magnitude and phase of the vectors are:
      A: 1.414 at +45 degrees
      B: 1.414 at -45 degrees

      The dot product (aka scalar product) = 2 at 90 degrees. Your statement that it “=0 ie 90” is obviously false.

      In your 3-dimensional space, only the first dimension is 0 (because it is unoccupied by either vector, *by your original definition*); the other two dimensions are both occupied and non-zero. However, your comment clearly reveals your pathetic attempt at misdirection. Obviously, vectors A (arsenic alb) and B (belladonna) do indeed have a third dimension — it is called time! I suggest that we let time t=0 when the patient is given the remedy and t progresses in the usual fashion after that instant. Again obviously, when t is 0 or negative neither vector can possibly influence the patient. RCTs have shown that when t is positive, neither vector (in homeopathic potencies) influence the patient *because* both vectors have zero magnitude, i.e., both vectors contain zero molecules.

      You know full well that the vast majority of people are unable to spot your deliberate misdirections. There is a commonly applied term for this type of misdirection: fraud. In the context of promoting and/or vending medicine it has a legal term “health fraud” or “medical fraud” (it varies between jurisdictions).

    • Well, I am eager to learn

      Venkatesh, could you please explain why belladonna is represented as (0, -1, 1) and not, for instance, (-1, 0, 1)? Then the dot product would be 1.

      While you are at it, please explain the significance of a dot product of 1 or 0 with respect to the patient.

      Please be as logical as possible in your explanation.
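
      For anyone who wants to check the arithmetic under discussion, here is a short verification sketch (the component values are simply the ones asserted in the comments above, plus the hypothetical alternative just suggested; none of them derive from any measured property of a remedy): the claimed “orthogonality” depends entirely on which slot the ±1 is placed in.

      ```python
      # Check of the dot products discussed above. The components are the
      # ones asserted in the comments (plus the alternative suggested just
      # above); nothing is derived from any measured property of a remedy.
      import numpy as np

      A = np.array([0, 1, 1])       # "arsenic alb", as assigned above
      B = np.array([0, -1, 1])      # "belladonna", as assigned above
      B_alt = np.array([-1, 0, 1])  # the equally arbitrary alternative

      print(np.dot(A, B))      # 0 -> "orthogonal"
      print(np.dot(A, B_alt))  # 1 -> not orthogonal
      ```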

  • Norbert alone expressed a logical doubt. Others either don’t understand what I have written or are not happy about it, and so are not willing to think.

    the unit vectors are not chosen arbitrarily. both Arsenic alb patient and belladonna pt drink water frequently. arsenic alb
    pt has thirst and loves to drink . hence given ➕ 1 in that dimension. but belladonna pt does so because he has thirst yet dreads drinking . the corresponding vector dimension is hence ➖ 1
    Whole of homeopathy is full of such remedy relationships where giving one remedy alters (rotates) the state vector to align with an orthogonal vector according to the orientation.

    thank you. I’m not going to respond to further skeptical and stupid replies but it is not an acceptance by any means.

    • My reply to you was a solid mathematical refutation. Very obviously, you are completely unable to defend your abject bullshit. Please continue to reside only in your contrived Hilbert space, rather than trying to interact with people who reside in the four dimensions of space-time. Please don’t respond with yet more asinine comments: the only person who will end up looking ever more stupid (if that’s even possible) is yourself. Goodbye.

    • Hhhmmm, what I feel is not a logical doubt but complete ignorance about what you are doing here. That is: what your vectors are all about, what the dimensions are, how many dimensions there are anyway, how the components are established, and what a mathematical operation and its result should resemble. Why the scalar product and not the vector product? Or addition or subtraction?

      I am also more than a little confused that you mix vectors of two remedies, not of remedy and patient.

      In a nutshell: As far as I can see your maths has nothing to do with anything that happens in homeopathy – neither in fiction nor in real physics.

      • “I’m not going to respond to further skeptical and stupid replies”

        Because you can’t. The ridiculous assertion has been destroyed.

    • “the unit vectors are not chosen arbitrarily. both Arsenic alb patient and belladonna pt drink water frequently. arsenic alb
      pt has thirst and loves to drink . hence given ➕ 1 in that dimension. but belladonna pt does so because he has thirst yet dreads drinking . the corresponding vector dimension is hence ➖ 1
      Whole of homeopathy is full of such remedy relationships where giving one remedy alters (rotates) the state vector to align with an orthogonal vector according to the orientation.”

      Not chosen arbitrarily? The whole lot is arbitrary, conjectural bullshit. I should have posted sooner about this, but it has already been torn to shreds. Alan exposed the nonsense for what it is.

      Admittedly though, your posts have provided a bit of humour in this thread. They are truly laughable, yet you seem to take them seriously. There is no accounting for some.

  • Dr. Ernst, I enjoyed your forensic analysis of the otherwise apparently (without the background investigation) well constructed SR. I recently happened to be at an “evidence” overview presentation in Toronto given by the Queen’s homeopath (P. Fisher). I took the opportunity to ask him about the White et al. exclusion, to which he pointed out the issue of the “ceiling effect” relevant to the primary outcome. Could the selective-reporting accusation toward your paper be in response to this? White did admit this issue in a reply to letters; it was not, in my opinion, suitably highlighted as a limitation in your RCT report. I am certainly not in the pro-homeopathy camp, but I am interested in your thoughts on this (the ceiling effect) and on other aspects of Fisher’s response.

    • Mathie gave other reasons, and the ceiling effect would not be an exclusion criterion according to their methodology.

    • As Prof Ernst says, this was not a criterion in Mathie’s methodology, so it’s odd that Fisher resorted to this ad hoc reasoning. Perhaps Fisher is another one who realises the problems with Mathie’s stated exclusion criteria?

      • I suppose the greater issue with their meta-analysis may be in trying to force an answer to a question (specific effects?) with a method not particularly well suited to the task. I was taught to first consider the appropriateness of pooling studies. If there is too much clinical heterogeneity (on the micro level: mild asthma vs severe asthma; on the macro level: asthma vs pain), you shouldn’t pool. The summary effect of a meta-analysis of studies from such vastly different clinical contexts is likely meaningless.

  • Mathie et al. just published a related paper [http://www.ncbi.nlm.nih.gov/pubmed/27062959]:
    BACKGROUND:

    To date, our programme of systematic reviews has assessed randomised controlled trials (RCTs) of individualised homeopathy separately for risk of bias (RoB) and for model validity of homeopathic treatment (MVHT).

    OBJECTIVES:

    The purpose of the present paper was to bring together our published RoB and MVHT findings and, using an approach based on GRADE methods, to merge the quality appraisals of these same RCTs, examining the impact on meta-analysis results.

    DESIGN:

    Systematic review with meta-analysis.

    METHODS:

    As previously, 31 papers (reporting a total of 32 RCTs) were eligible for systematic review and were the subject of study.

    MAIN OUTCOME MEASURES:

    For each trial, the separate ratings for RoB and MVHT were merged to obtain a single overall quality designation (‘high’, ‘moderate’, ‘low’, ‘very low’), based on the GRADE principle of ‘downgrading’.

    RESULTS:

    Merging the assessment of MVHT and RoB identified three trials of ‘high quality’, eight of ‘moderate quality’, 18 of ‘low quality’ and three of ‘very low quality’. There was no association between a trial’s MVHT and its RoB or its direction of treatment effect (P>0.05). The three ‘high quality’ trials were those already labelled ‘reliable evidence’ based on RoB, and so no change was found in meta-analysis based on best-quality evidence: a small, statistically significant, effect favouring homeopathy.

    CONCLUSION:

    Accommodating MVHT in overall quality designation of RCTs has not modified our pre-existing conclusion that the medicines prescribed in individualised homeopathy may have small, specific, treatment effects.

  • Ernst claims that had his study been included in the meta-analysis, the outcome would probably not have favoured homeopathy. This example highlights the challenge of critically appraising the evidence. However, in Ernst’s study he states that homeopathic medicines were used as an adjunct to conventional treatment. It is not clear at first glance whether the samples in Mathie’s data were trialling only homeopathic remedies or homeopathics as an adjunct to conventional treatment. If you read on, however, you will see in the last paragraph under the sub-heading “Search strategy, data sources and trial eligibility” that in this particular review they excluded trials of “homeopathy combined with other (complementary or conventional) intervention.”

    • I wonder why they did that.

      In the real world with real diseases, homeopathy is often used in a complementary fashion. Most of the time homeopaths bang on about ‘real world effectiveness’ which is a proxy for looking at uncontrolled observations as if they prove anything. In this instance, ‘real world’ complementary use of homeopathy was excluded from the analysis and just so happened to affect the outcome. You will have to consider whether the exclusion criteria were chosen while blind to the trials that would be affected by that choice. You will also need to recall that the literature base for homeopathy is tiny and the main studies are very well known to workers in the field.

      …but: at least some of the studies included in Mathie’s review are about adjunct homeopathic treatment:
      Frass [A14 in Mathie’s review] covers adjunct homeopathic treatment of patients with severe sepsis in the ICU
      Jacobs [A18, A19] covers homeopathic treatment adjunct to rehydration treatment in children with diarrhea
      White [A39], about asthma in children. This study was not excluded for being an adjunct treatment.
      There may be more.

      • Absolutely true; and concomitant treatments were not the reason for excluding our study.

      • I’d missed that fact.

        So, what they said and what they did were different?

        Am I surprised?

      • @Norbert Aust

        You beat me to it.

        It’s also worth looking further at the three trials Mathie deemed to be ‘reliable evidence’: Jacobs 1994 (referenced as A19 in Mathie et al.), Jacobs 2001 (A20) and Bell 2004 (A05).

        The numbers of participants were 81, 75 and 62 respectively. Jacobs 2001 self-describes as ‘preliminary’ and Bell 2004 as ‘a pilot study’.

        To describe these as ‘reliable evidence’ takes, I think, a particularly peculiar mindset.

        • Alan, yes it does.

          Then you should consider that this upgrading to ‘reliable evidence’ is in conflict with the Cochrane Handbook, which the authors state they would follow. And second, it was not described in the protocol but was introduced – I would say post hoc, to save face – in the paper without any further comment.

          But to consider two pilot studies as ‘reliable evidence’ is really special.

  • Ernst has the right to be butt hurt.
