critical thinking


We have discussed this notorious problem before: numerous charities (such as one that treats HIV and malaria with homeopathy in Botswana, or the one claiming that homeopathy can reverse cancer) are a clear danger to public health. I have previously chosen the example of ‘YES TO LIFE’ and explained that they promote unproven and disproven alternative therapies as cures for cancer (and, if you want to get really sickened, look at who acts as their supporters and advisors). It is clear to me that such behaviour can hasten the death of many vulnerable patients.

Yet, many such charities get tax and reputational benefits by being registered charities in the UK. The question is CAN THIS SITUATION BE JUSTIFIED?

Currently, the UK Charity Commission wants to answer it. Specifically, it is asking you the following questions:

  • Question 1: What level and nature of evidence should the Commission require to establish the beneficial impact of CAM therapies?
  • Question 2: Can the benefit of the use or promotion of CAM therapies be established by general acceptance or recognition, without the need for further evidence of beneficial impact? If so, what level of recognition, and by whom, should the Commission consider as evidence?
  • Question 3: How should the Commission consider conflicting or inconsistent evidence of beneficial impact regarding CAM therapies?
  • Question 4: How, if at all, should the Commission’s approach be different in respect of CAM organisations which only use or promote therapies which are complementary, rather than alternative, to conventional treatments?
  • Question 5: Is it appropriate to require a lesser degree of evidence of beneficial impact for CAM therapies which are claimed to relieve symptoms rather than to cure or diagnose conditions?
  • Question 6: Do you have any other comments about the Commission’s approach to registering CAM organisations as charities?

I am sure that most readers of this blog have something to say about these questions. So, please carefully study the full document, go on the commission’s website, and email your response to: . Don’t delay it; do it now!


On this blog, we have had (mostly unproductive) discussions with homeopaths so often that sometimes they sound like a broken record. I don’t want to add to this kerfuffle; what I hope to do today is to summarise a certain line of argument which, from the homeopaths’ point of view, seems entirely logical. I do this in the form of a fictitious conversation between a scientist (S) and a classical homeopath (H). My aim is to help readers understand homeopaths better so that future debates might be better informed.


S: I have studied the evidence from studies of homeopathy in some detail, and I have to tell you, it fails to show that homeopathy works.

H: This is not true! We have plenty of evidence to prove that patients get better after seeing a homeopath.

S: Yes, but this is not because of the remedy; it is due to non-specific effects such as the empathetic consultation with a homeopath. If one controls for these factors in adequately designed trials, the result usually is negative.

I will re-phrase my claim: the evidence fails to show that highly diluted homeopathic remedies are more effective than placebos.

H: I disagree; there are positive studies as well.

S: Let’s not cherry pick. We must always consider the totality of the reliable evidence. We now have a meta-analysis published by homeopaths that demonstrates the ineffectiveness of homeopathy quite clearly.

H: This is because homeopathy was not used correctly in the primary trials. Homeopathy must be individualised for each unique patient; no two cases are alike! Remember: homeopathy is based on the principle that like cures like!!!

S: Are you saying that all other forms of using homeopathy are wrong?

H: They are certainly not adhering to what Hahnemann told us to do; therefore you cannot take their ineffectiveness as proof that homeopathy does not work.

S: This means that much, if not most, of homeopathy as it is used today is to be condemned as fake.

H: I would not go that far, but it is definitely not the real thing; it does not obey the law of similars.

S: Let’s leave this to one side for the moment. If you insist on individualised homeopathy, I must tell you that this approach can also be tested in clinical trials.

H: I know; and there is a meta-analysis which proves that it is effective.

S: Not quite; it concluded that medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

If you call this a proof of efficacy, I would have to disagree with you. The effect was tiny and at least two of the best studies relevant to the subject were left out. If anything, this paper is yet another proof that homeopathy is useless!

H: You simply don’t understand homeopathy enough to say that. I tried to tell you that the remedy must be carefully chosen to fit each unique patient. This is a very difficult task, and sometimes it is not successful – mainly because the homeopaths employed in clinical trials are not skilled enough to find it. This means that, in these studies, we will always have a certain failure rate which, in turn, is responsible for the small average effect size.

S: But these studies are always conducted by experienced homeopaths, and only the very best, most experienced homeopaths were chosen to cooperate in them. Your argument that the trials are negative because of the ineffectiveness of the homeopaths – rather than the ineffectiveness of homeopathy – is therefore nonsense.

H: This is what you say because you don’t understand homeopathy!

S: No, it is what you say because you don’t understand science. How else would you prove that your hypothesis is correct?

H: Simple! Just look at individual cases from the primary studies within this meta-analysis. You will see that there are always patients who did improve. These cases are the proof we need. The method of the RCT is only good for defining average effects; this is not what we should be looking at, and it is certainly not what homeopaths are interested in.

S: Are you saying that the method of the RCT is wrong?

H: It is not always wrong. Some RCTs of homeopathy are positive and do very clearly prove that homeopathy works. These are obviously the studies where homeopathy has been applied correctly. We have to make a meta-analysis of such trials, and you will see that the result turns out to be positive.

S: So, you claim that all the positive studies have used the correct method, while all the negative ones have used homeopathy incorrectly.

H: If you insist on putting it like that, yes.

S: I see, you define a trial to have used homeopathy correctly by its result. Essentially you accept science only if it generates the outcome you like.

H: Yes, that sounds odd to you – because you don’t understand enough of homeopathy.

S: No, what you seem to insist on is nothing short of double standards. Or would you accept a drug company claiming: some patients did feel better after taking our new drug, and this is proof that it works?

H: You see, not understanding homeopathy leads to serious errors.

S: I give up.

A new survey from the Fraser Institute, an independent, non-partisan Canadian public policy think-tank, suggests that more and more Canadians are using alternative therapies. In 2016, massage was the most common type of therapy that Canadians used over their lifetime with 44 percent having tried it, followed by chiropractic care (42%), yoga (27%), relaxation techniques (25%), and acupuncture (22%). Nationally, the most rapidly expanding therapies over the past two decades or so (rate of change between 1997 and 2016) were massage, yoga, acupuncture, chiropractic care, osteopathy, and naturopathy. High dose/mega vitamins, herbal therapies, and folk remedies appear to be in declining use over that same time period.

“Alternative treatments are playing an increasingly important role in Canadians’ overall health care, and understanding how all the parts of the health-care system fit together is vital if policymakers are going to find ways to improve it,” said Nadeem Esmail, Fraser Institute senior fellow and co-author of Complementary and Alternative Medicine: Use and Public Attitudes, 1997, 2006 and 2016.

The updated survey of 2,000 Canadians finds more than three-quarters of Canadians — 79 per cent — have used at least one complementary or alternative medicine (CAM) or therapy sometime in their lives. That’s an increase from 74 per cent in 2006 and 73 per cent in 1997, when two previous similar surveys were conducted. In fact, more than one in two Canadians (56 per cent) used at least one complementary or alternative medicine or therapy in the previous 12 months, an increase from 54 per cent in 2006 and 50 per cent in 1997.

And Canadians are using those services more often, averaging 11.1 visits in 2016, compared to fewer than nine visits a year in both 2006 and 1997. In total, Canadians spent $8.8 billion on complementary and alternative medicines and therapies last year, up from $8 billion (inflation adjusted) in 2006.

The majority of respondents — 58 per cent — support paying for alternative treatments privately and don’t want them included in provincial health plans. Support for private payment is highest (at 69 per cent) among 35- to 44-year-olds. “Complementary and alternative therapies play an increasingly important role in Canadians’ overall health care, but policy makers should not see this as an invitation to expand government coverage — the majority of Canadians believe alternative therapies should be paid for privately,” Esmail said.

This seems to be a good survey, and it offers a host of interesting information. Yet, it also leaves many pertinent questions unanswered. The most important one might be WHY?

Why are so many people trying treatments which clearly are unproven or disproven?

Enthusiasts would obviously say this is because they are useful in some way. I would, however, point out that the true reason might well be that consumers are systematically misled about the value of alternative therapies, as I have shown on this blog so many times.

Nevertheless, this seems to be a good survey – there are hundreds, if not thousands of surveys in the realm of alternative medicine which are of such deplorable quality that they do not deserve to be published at all – but even with a relatively good survey, we need to be cautious. For instance, I have no difficulty designing a questionnaire that would guarantee a result of 100% prevalence of alternative medicine usage. All I would need to do is to include the following two questions:

  • Have you ever used plant-based products for your well-being or comfort?
  • Have you ever prayed while being ill?

Drinking a cup of tea would already have to prompt a positive reply to the 1st question. And if you answer yes to the 2nd question, it would be interpreted as using prayer as a therapy.

I think I rest my case.

The question whether spinal manipulative therapy (SMT) is effective for acute low back pain remains controversial. Chiropractors (who use SMT more regularly than other professionals) try everything to make us believe it works, while the evidence is far less certain. It is therefore worth considering the best and most up-to-date data.

The aim of this paper was to systematically review studies of the effectiveness and harms of SMT for acute (≤6 weeks) low back pain. The research question was straightforward: is the use of SMT in the management of acute (≤6 weeks) low back pain associated with improvements in pain or function?

A thorough literature search was conducted to locate all relevant papers. Study quality was assessed using the Cochrane Back and Neck (CBN) Risk of Bias tool. The evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) criteria. The main outcome measures were pain (measured by either the 100-mm visual analog scale, 11-point numeric rating scale, or other numeric pain scale), function (measured by the 24-point Roland Morris Disability Questionnaire or Oswestry Disability Index [range, 0-100]), or any harms measured within 6 weeks.

Of 26 eligible RCTs identified, 15 RCTs (1711 patients) provided moderate-quality evidence that SMT has a statistically significant association with improvements in pain (pooled mean improvement in the 100-mm visual analog pain scale, −9.95 [95% CI, −15.6 to −4.3]). Twelve RCTs (1381 patients) produced moderate-quality evidence that SMT has a statistically significant association with improvements in function (pooled mean effect size, −0.39 [95% CI, −0.71 to −0.07]). Heterogeneity was not explained by type of clinician performing SMT, type of manipulation, study quality, or whether SMT was given alone or as part of a package of therapies. No RCT reported any serious adverse event. Minor transient adverse events such as increased pain, muscle stiffness, and headache were reported 50% to 67% of the time in large case series of patients treated with SMT.

The authors concluded that among patients with acute low back pain, spinal manipulative therapy was associated with modest improvements in pain and function at up to 6 weeks, with transient minor musculoskeletal harms. However, heterogeneity in study results was large.

This meta-analysis has been celebrated by chiropractors around the world as a triumph for their hallmark therapy, SMT. But there have also been more cautionary voices – not least from the lead author of the paper. Patients undergoing spinal manipulation experienced a decline of 1 point in their pain rating, says Dr. Paul Shekelle, an internist with the West Los Angeles Veterans Affairs Medical Center and the Rand Corporation who headed the study. That’s about the same amount of pain relief as from NSAIDs, over-the-counter nonsteroidal anti-inflammatory medication, such as ibuprofen. The study also found spinal manipulation modestly improved function. On average, patients reported greater ease and comfort engaging in two day-to-day activities — such as finding they could walk more quickly, were having less difficulty turning over in bed or were sleeping more soundly.
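Shekelle’s “1 point” figure is easy to verify from the pooled estimate: a −9.95 mm change on a 100-mm visual analog scale maps onto roughly 1 point on the familiar 0–10 pain scale. A quick sketch of the arithmetic (the 2-point benchmark for a clinically meaningful change is a commonly used rule of thumb, not a figure from this study):

```python
# Pooled SMT estimate from the meta-analysis: -9.95 mm on a 100-mm
# visual analog scale (VAS). Rescale it to the 0-10 numeric scale.
vas_improvement_mm = 9.95   # pooled mean improvement, 100-mm VAS
scale_mm = 100.0

nrs_points = vas_improvement_mm / scale_mm * 10  # map onto 0-10 scale
print(round(nrs_points, 1))  # ~1.0 point, matching Shekelle's figure

# A commonly cited threshold for a clinically meaningful change on a
# 0-10 pain scale is about 2 points; the pooled effect falls short.
threshold = 2.0
print(nrs_points >= threshold)  # False
```

This is the gap between statistical and clinical significance that Ingraham’s comment below turns on.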

It’s not clear exactly how spinal manipulation relieves back pain. But it may reposition the small joints in the spine in a way that causes less pain, according to Dr. Richard Deyo, an internist and professor of evidence-based medicine at the Oregon Health and Science University. Deyo wrote an editorial published along with the study. Another possibility, Deyo says, is that spinal manipulation may restore some material in the disk between the vertebrae, or it may simply relax muscles, which could be important. There may also be mind-body interaction that comes from the “laying of hands” or a trusting relationship between patients and their health care provider, he says.

Deyo notes that there are many possible treatments for lower back pain, including oral medicine, injected medicine, corsets, traction, surgery, acupuncture and massage therapy. But of about 200 treatment options, “no single treatment is clearly superior,” he says.

In another comment by Paul Ingraham the critical tone was much clearer: “Claiming it as a victory is one of the best examples I’ve ever seen of making lemonade out of science lemons! But I can understand the mistake, because the review itself does seem positive at first glance: the benefits of SMT are disingenuously summarized as “statistically significant” in the abstract, with no mention of clinical significance (effect size; see Statistical Significance Abuse). So the abstract sounds like good news to anyone but the most wary readers, while deep in the main text the same results are eventually conceded to be “clinically modest.” But even that seems excessively generous: personally, I need at least a 2-point improvement in pain on a scale of 10 to consider it a “modest” improvement! This is not a clearly positive review: it shows weak evidence of minor efficacy, based on “significant unexplained heterogeneity” in the results. That is, the results were all over the place — but without any impressive benefits reported by any study — and the mixture can’t be explained by any obvious, measurable factor. This probably means there’s just a lot of noise in the data, too many things that are at least as influential as the treatment itself. Or — more optimistically — it could mean that SMT is “just” disappointingly mediocre on average, but might have more potent benefits in a minority of cases (that no one seems to be able to reliably identify). Far from being good news, this review continues a strong trend (eg Rubinstein 2012) of damning SMT with faint praise, and also adds evidence of backfiring to the mix. Although fortunately “no RCT reported any serious adverse event,” it seems that minor harms were legion: “increased pain, muscle stiffness, and headache were reported 50% to 67% of the time in large case series of patients treated with SMT.” That’s a lot of undesirable outcomes.
So the average patient has a roughly fifty-fifty chance of up to roughly maybe a 20% improvement… or feeling worse to some unknown degree! That does not sound like a good deal to me. It certainly doesn’t sound like good medicine.”


As I have made clear in many previous posts, I do fully agree with these latter statements and would add just three points:

  1. We know that many of the SMT studies completely neglect reporting adverse effects. Therefore it is hardly surprising that no serious complications were on record. Yet, we know that they do occur with sad regularity.
  2. None of the studies controlled for placebo effects. It is therefore possible – I would say even likely – that a large chunk of the observed benefit is not due to SMT per se but to a placebo response.
  3. It seems more than questionable whether the benefits of SMT outweigh its risks.

The aim of this pragmatic study was “to investigate the effectiveness of acupuncture in addition to routine care in patients with allergic asthma compared to treatment with routine care alone.”

Patients with allergic asthma were included in a controlled trial and randomized to receive up to 15 acupuncture sessions over 3 months plus routine care, or to a control group receiving routine care alone. Patients who did not consent to randomization received acupuncture treatment for the first 3 months and were followed as a cohort. All trial patients were allowed to receive routine care in addition to study treatment. The primary endpoint was the asthma quality of life questionnaire (AQLQ, range: 1–7) at 3 months. Secondary endpoints included general health related to quality of life (Short-Form-36, SF-36, range 0–100). Outcome parameters were assessed at baseline and at 3 and 6 months.

A total of 1,445 patients were included in the analysis (184 patients randomized to acupuncture plus routine care, 173 randomized to routine care alone, and 1,088 in the nonrandomized acupuncture plus routine care group). In the randomized part, acupuncture was associated with an improvement in the AQLQ score compared to the control group (difference acupuncture vs. control group 0.7 [95% confidence interval (CI) 0.5–1.0]) as well as in the physical component scale and the mental component scale of the SF-36 (physical: 2.5 [1.0–4.0]; mental 4.0 [2.1–6.0]) after 3 months. Treatment success was maintained throughout 6 months. Patients not consenting to randomization showed similar improvements as the randomized acupuncture group.

The authors concluded that in patients with allergic asthma, additional acupuncture treatment to routine care was associated with increased disease-specific and health-related quality of life compared to treatment with routine care alone.

We have been over this so many times (see for instance here, here and here) that I am almost a little embarrassed to explain it again: it is fairly easy to design an RCT such that it can only produce a positive result. The currently most popular way to achieve this aim in alternative medicine research is to run an ‘A+B versus B’ study, where A = the experimental treatment, and B = routine care. As A always amounts to more than nothing – in the above trial acupuncture would have placebo effects and the extra attention would also amount to something – A+B must always be more than B alone. The easiest way of thinking of this is to imagine that A and B are both finite amounts of money; everyone can understand that A+B must always be more than B!
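The point can be made concrete with a toy simulation (all numbers are invented for illustration; the added “treatment” in the A+B group is, by construction, completely inert apart from a small placebo/attention bonus):

```python
import random

# Toy 'A+B versus B' trial. Both groups receive routine care (B);
# the A+B group additionally gets a treatment whose ONLY effect is
# a small placebo/attention response. Group sizes and effect sizes
# are illustrative assumptions, not data from any real trial.
random.seed(1)

def outcomes(placebo_bonus=0.0, n=200):
    # symptom improvement = routine-care effect + noise (+ placebo bonus)
    return [1.0 + placebo_bonus + random.gauss(0, 0.5) for _ in range(n)]

b_only   = outcomes()                       # routine care alone
a_plus_b = outcomes(placebo_bonus=0.3)      # routine care + inert add-on

def mean(xs):
    return sum(xs) / len(xs)

# A+B beats B even though A, by construction, does nothing specific.
print(mean(a_plus_b) > mean(b_only))  # True
```

Any inert add-on – sugar pills, extra attention, ritual – produces the same “positive” result, which is precisely why this design cannot test whether the treatment itself works.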

Why then do acupuncture researchers not get the point? Are they that stupid? I happen to know some of the authors of the above paper personally, and I can assure you, they are not stupid!

So, why?

I am afraid there is only one reason I can think of: they know perfectly well that such an RCT can only produce a positive finding, and precisely that is their reason for conducting such a study. In other words, they are not using science to test a hypothesis, they deliberately abuse it to promote their pet therapy or hypothesis.

As I stated above, it is fairly easy to design an RCT such that it can only produce a positive result. Yet, it is arguably also unethical, perhaps even fraudulent, to do this. In my view, such RCTs amount to pseudoscience and scientific misconduct.

Charlotte Leboeuf-Yde, DC, MPH, PhD, is professor in Clinical Biomechanics at the University of Southern Denmark and works at the French-European Institute of Chiropractic in Paris. She is a chiropractor with extensive research experience; for example, she was one of the first chiropractors to have studied adverse reactions to spinal manipulation.

Charlotte certainly knows a thing or two about adverse effects of spinal manipulation, and I have always found her work interesting. Therefore, I was delighted to find a recent blog post where she discussed the Cassidy study of 2008 and two opposed views on the validity of this much-discussed paper.

One team (Paulus & Thaler) argued, Charlotte explained, that the Cassidy case-control study is faulty, because vertebro-basilar stroke in general was not separated from stroke specifically caused by vertebral artery dissections, the presumed culprit in cervical spinal manipulation. According to Paulus & Thaler, this would potentially result in a dilution of ‘real’ manipulation-related strokes among all other causes of stroke that are much more common. They argue that the Cassidy-analyses therefore were polluted by this misclassification, whereas the other team (Murphy et al) vehemently disagrees.

The final word is clearly not yet pronounced on this issue, Charlotte concluded, and both teams agree that research has to address various methodological challenges to obtain a trustworthy answer. Nevertheless, without an international collaboration involving prospective cases this seems an almost impossible task, particularly in view of the rarity of the condition; problems in capturing all cases (going from the reversible to the permanent injuries); the likely large anatomical and physiological variations between individuals; and the daunting task of obtaining relevant and precise descriptions of treatments from a multitude of practitioners.

In the meantime, Charlotte concluded, “practitioners and patients have to make a decision, similarly to judging risk in other walks of life, such as, should I take the plane or stay at home?”

I have always thought highly of Charlotte’s work; however, her conclusion made me doubt whether my high opinion of her reasoning was justified.

Should I take the plane or stay at home?

This question is not remotely similar to the question “should I have chiropractic upper neck manipulation or not?”

Here are the two main reasons why:

  • Taking the plane is demonstrably effective in transporting you from A to B, while neck manipulation is not demonstrably effective for anything.
  • If you want to go from A to B [assuming B is far away], you need to fly. If you have neck pain or other symptoms, you can employ plenty of therapies other than neck manipulations.

Charlotte Leboeuf-Yde, DC, MPH, PhD, may be a professor in Clinical Biomechanics etc., etc.; however, logical and critical thinking do not seem to be her forte.

So, how should we deal with the risks of chiropractic neck manipulations? I think we should deal with them as responsible healthcare professionals deal with any other suspected therapeutic risks: we must ask whether the known risks of the treatment outweigh the known benefits (as they do with spinal manipulation). If that is so, we have an ethical, legal and moral duty not to employ the therapy in question in routine care. At the same time, we must focus our research efforts on producing full clarity about the open questions. It’s called the precautionary principle!

The ‘SOCIETY OF HOMEOPATHS’ (SoH) have published an official complaint they recently filed with the BBC. As it gives an intriguing insight into their mind-set, I could not resist reproducing it here (warts and all):

“Prompted by the interview with Simon Stevens of NHS England on the Today Programme, on 31st March, the Society of Homeopaths deplores the lack of balance in the BBC’s coverage of Homeopathy and urges you to review your approach to coverage of the subject.

During the Today interview, following wide-ranging discussion of issues around the future of the NHS, Sarah Montague suddenly threw in a question about the amount spent on Homeopathy within the NHS, evidently catching Mr Stevens unawares.

The annual budget of the NHS is approximately £110billion.  Of this, £4million per year (0.0036 of the NHS budget) is spent on Homeopathy.  This hardly justifies the unbalanced and hectoring approach from Sarah Montague.

We acknowledge that it is not always possible or necessary to achieve balance on a particular topic within a single programme but the BBC seems to have a consistent line across all of its platforms of opposition to, and disparagement of, Homeopathy.  A recent example is a piece on the Health section of the BBC website in October 2106 by Nick Tiggle which displayed no balance at all and denigrated Homeopathy and Homeopaths with little or no space given to alternative views.

From these and other instances, it seems clear that the BBC has a biased attitude towards Homeopathy, which may be the result of relying too heavily on a small number of ‘experts’, who openly and persistently campaign against complementary and alternative medicine. These ‘experts’ operate in a similar way to climate change deniers, referring to a limited range of research, often of poor quality, to support their claims that there is ‘no evidence for homeopathy’.

We look forward to BBC programmes which fulfill its mission to explain and provide balance and coverage of the positive effects of Homeopathy.

Mark Taylor Chief Executive Society of Homeopaths”


This hardly needs a comment – perhaps just 6 short points:

  • To the best of my knowledge, the BBC has a policy of not being seen to be biased. The discussion referred to above was about the NHS ceasing to pay for treatments that are either not effective (e.g. cough syrups) or cheaper to buy OTC than on prescription (e.g. paracetamol). Homeopathy is both. Therefore it would even have been biased NOT to bring homeopathy into the discussion.
  • To claim the BBC-interviewer caught Stevens off guard is just silly: when you go on the radio to discuss such issues, homeopathy MUST be on your mind.
  • To claim that the BBC is generally biased against homeopathy (on the basis of two anecdotes) is equally silly. The SoH should have done some systematic research on this – perhaps they did and found it failed to support their point? – this would have shown that there is plenty of (far too much) pro-homeopathy stuff on the BBC.
  • To say or imply that homeopathy is of debatable or even no value to the NHS does not disclose bias; on the contrary, it is a reflection of the scientific truth which the BBC has an obligation to report.
  • With their complaint, the SoH disclose an embarrassing degree of naivety and an alarming detachment from reality.
  • Whichever way a rational observer might look at this, the BBC should in future become a much more outspoken defender of the scientific truth – on homeopathy and everything else!!!

The recent meta-analysis by Mathie et al of non-individualised homeopathy (recently discussed here) identified just 3 RCTs that were rated as ‘reliable evidence’. But just how rigorous are these ‘best’ studies? Let’s find out!


The objective of the first trial was “to evaluate the efficacy of the non-hormonal treatment BRN-01 in reducing hot flashes in menopausal women.” It was designed as a multicentre (35 centres in France), randomized, double-blind, placebo-controlled trial. One hundred and eight menopausal women, ≥50 years of age, were enrolled in the study. The eligibility criteria included menopause for <24 months and ≥5 hot flashes per day with a significant negative effect on the women’s professional and/or personal life.

Treatment was either BRN-01 tablets, a registered homeopathic medicine [not registered in the UK] containing Actaea racemosa (4 centesimal dilutions [4CH]), Arnica montana (4CH), Glonoinum (4CH), Lachesis mutus (5CH), and Sanguinaria canadensis (4CH), or placebo tablets, prepared by Laboratoires Boiron according to European Pharmacopoeia standards [available OTC in France]. Oral treatment (2 to 4 tablets per day) was started on day 3 after study enrolment and was continued for 12 weeks.

The main outcome measure was the hot flash score (HFS) compared before, during, and after treatment. Secondary outcome criteria were quality of life (QoL) [measured using the Hot Flash Related Daily Interference Scale (HFRDIS)], severity of symptoms (measured using the Menopause Rating Scale), evolution of the mean dosage, and compliance. All adverse events (AEs) were recorded.

One hundred and one women were included in the final analysis (intent-to-treat population: BRN-01, n = 50; placebo, n = 51). The global HFS over the 12 weeks, assessed as the area under the curve (AUC) adjusted for baseline values, was significantly lower in the BRN-01 group than in the placebo group (mean ± SD 88.2 ± 6.5 versus 107.2 ± 6.4; p = 0.0411). BRN-01 was well tolerated; the frequency of AEs was similar in the two treatment groups, and no serious AEs were attributable to BRN-01. The authors concluded that BRN-01 seemed to have a significant effect on the HFS, compared with placebo. According to the results of this clinical trial, BRN-01 may be considered a new therapeutic option with a safe profile for hot flashes in menopausal women who do not want or are not able to take hormone replacement therapy or other recognized treatments for this indication.

Laboratoires Boiron provided BRN-01, its matching placebo, and financial support for the study. Randomization and allocation were carried out centrally by Laboratoires Boiron.

I would argue that the treatment time in this study was way too short for generating a therapeutic response. The evolution of the HFS in the two groups was assessed by analysis of the area under the curve (AUC) of the mean scores recorded weekly from each patient in each group over the duration of the study, including those at enrollment (before any treatment). I wonder whether this method was chosen only when the researchers noted that the HFS at the pre-defined time points did not yield a significant result, or whether it was pre-determined (elsewhere in the methods section we are told that “The primary evaluation criterion was the effect of BRN-01 on the HFS, compared with placebo. The HFS was defined as the product of the daily frequency and intensity of all hot flashes experienced by the patient, graded by the women from 1 to 4 (1 = mild; 2 = moderate; 3 = strong; 4 = very strong). These data were recorded by the women on a self-administered questionnaire, assisted by a telephone call from a clinical research associate. Data were collected (i) during the first 2 days after enrolment and before any medication had been taken; (ii) then every Tuesday and Wednesday of each week until the 11th week of treatment, inclusive; and (iii) finally, every day of the 12th week of treatment.”).

Two of the authors of this paper are employees of Boiron.


The second trial was aimed at finding out “whether a well-known and frequently prescribed homeopathic preparation could mitigate post-operative pain.” It was a randomized, double-blind, placebo-controlled trial to evaluate the efficacy of the homeopathic preparation Traumeel S® in minimizing post-operative pain and analgesic consumption following surgical correction of hallux valgus. Eighty consecutive patients were randomized to receive either Traumeel tablets or an indistinguishable placebo, and took primary and rescue oral analgesics as needed. Maximum numerical pain scores at rest and consumption of oral analgesics were recorded on the day of surgery and for 13 days following surgery. Traumeel was not found superior to placebo in minimizing pain or analgesic consumption over the 14 days of the trial; however, a transient reduction in the daily maximum post-operative pain score favouring the Traumeel arm was observed on the day of surgery, a finding supported by a treatment-time interaction test (p = 0.04). The authors concluded that Traumeel was not superior to placebo in minimizing pain or analgesic consumption over the 14 days of the trial. A transient reduction in the daily maximum post-operative pain score on the day of surgery is of questionable clinical importance.

Traumeel is a mixture of 6 ingredients, 4 of which are in the D2 potency. Thus it is neither administered as a homeopathic remedy (no ‘like cures like’) nor highly diluted. In fact, it is not homeopathy at all but belongs to a weird offspring of homeopathy called ‘homotoxicology’ [this is an explanation from my book: Homotoxicology is a method inspired by homeopathy which was developed by Hans Heinrich Reckeweg (1905 – 1985). He believed that all or most illness is caused by an overload of toxins in the body. The toxins originate, according to Reckeweg, both from the environment and from the malfunction of physiological processes within the body. His treatment consists mainly in applying homeopathic remedies, which usually consist of combinations of single remedies, because health cannot be achieved without ridding the body of toxins. The largest manufacturer and promoter of remedies used in homotoxicology is the German firm Heel.] The HEEL Company (Baden-Baden, Germany) provided funding for the performance and monitoring of this project, supplied the study medication and placebo, and prepared the randomization list. The positive outcome mentioned in the authors’ conclusion refers to a secondary endpoint. I would argue that the authors should not have noted it there and should have made it clear that the trial generated a negative result.


Finally, the third of the 3 ‘rigorous’ studies “evaluated the effectiveness of the homeopathic preparation Plumbum Metallicum (PM) in reducing the blood lead levels of workers exposed to this metal.” The Brazilian researchers recruited 131 workers to this RCT who took PM in the CH15 potency or placebo for 35 days (10 drops twice daily). Thereafter, the percentage of workers whose lead level had fallen by at least 25% did not differ between the groups, in both intention-to-treat and per-protocol analyses. The authors concluded that PM “had no effect in this study in terms of reducing serum lead in workers exposed to lead.”

This study lacks a power calculation, and the treatment period was arguably too short to show an effect. The trial was published in the journal HOMEOPATHY which, some might argue, does not have the most rigorous of peer-review procedures.
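To see what a power calculation would have looked like, here is a standard sample-size estimate for comparing two proportions (two-sided α = 0.05, 80% power). This is my own illustration: the response rates in the example are hypothetical, not taken from the paper.

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm needed to detect a difference
    between two proportions p1 and p2, using the common normal-
    approximation formula: n = (z_a + z_b)^2 * [p1(1-p1) + p2(1-p2)]
    / (p1 - p2)^2. Defaults: two-sided alpha = 0.05, power = 80%."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical scenario: 30% of placebo workers vs 50% of treated
# workers achieve a 25% fall in lead levels (invented numbers)
required = n_per_group(0.3, 0.5)
```

With these invented assumptions the trial would need roughly 91 workers per arm; the point is simply that, absent such a pre-specified calculation, a null result cannot be distinguished from an underpowered one.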


The third study seems the most rigorous by far, in my view. The other two trials are seriously underwhelming in several respects, primarily because we cannot be sure how much influence the commercial interests of the sponsor had on their findings. I am sure others will spot weaknesses in all three trials that I failed to see.

Mathie et al partly disagree with my assessment when they write in their paper: “We report separately our model validity assessments of these trials, evaluating consequently their overall quality based on a GRADE-like principle of ‘downgrading’ [14]: two trials [23, 25] rated here as reliable evidence were downgraded to ‘low quality’ overall due to the inadequacy of their model validity; the remaining trial with reliable evidence [24] was judged to have adequate model validity. The latter study [24] thus comprises the sole RCT that can be designated ‘high quality’ overall by our approach, a stark finding that reveals further important aspects of the preponderantly low quality of the current body of evidence in non-individualised homeopathy.”

References 23, 24 and 25 are Padilha (the paper on Plumbum Metallicum), Colau (the RCT on menopausal women) and Singer (the Traumeel trial) respectively. This means that – as per Mathie’s assessment – just the Colau study remains as the sole trial with ‘reliable evidence’ for non-individualised homeopathy.

What Mathie et al seem to forget entirely is that none of the 3 RCTs is a trial of homeopathy as defined by treatment according to the ‘like cures like’ principle. The authors of the second study acknowledge this fact by stating: “Homeopathic purists may find fault in the administration of a standardized combination homeopathic formula to all patients, based upon clinical diagnosis – as opposed to the individualized manner dictated by standard homeopathic practice.”

So, whichever way we look at this evidence, we cannot possibly deny that the evidence for non-individualised homeopathy is rubbish.


This new systematic review by proponents of homeopathy (and supported by a grant from the Manchester Homeopathic Clinic) tested the null hypothesis that “the main outcome of treatment using a non-individualised (standardised) homeopathic medicine is indistinguishable from that of placebo“. An additional aim was to quantify any condition-specific effects of non-individualised homeopathic treatment. In reporting this paper, I will stay very close to the published text hoping that this avoids both misunderstandings and accusations of bias on my side:

Literature search strategy, data extraction and statistical analysis followed the methods described in a pre-published protocol. A trial comprised ‘reliable evidence’ if its risk of bias was low or it was unclear in one specified domain of assessment. ‘Effect size’ was reported as standardised mean difference (SMD), with arithmetic transformation for dichotomous data carried out as required; a negative SMD indicated an effect favouring homeopathy.
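To make the SMD concrete: it is conventionally the between-group difference in means divided by a pooled standard deviation, and dichotomous outcomes can be mapped onto the same scale via the log odds ratio. The sketch below is my own illustration, not the authors’ code; the log-odds conversion shown (Chinn’s method) is one common choice, and I am assuming, not asserting, that the review used something equivalent.

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d with pooled SD).
    With outcomes where lower scores are better, a negative value
    favours the treatment group."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def smd_from_odds_ratio(odds_ratio):
    """One widely used arithmetic transformation of dichotomous data
    to the SMD scale: SMD = ln(OR) * sqrt(3) / pi."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi
```

For example, `smd(8, 2, 10, 10, 2, 10)` yields −1.0: the treatment group scores one pooled standard deviation below the control group.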

The authors excluded the following types of trials: studies of crossover design; of radionically prepared homeopathic medicines; of homeopathic prophylaxis; of homeopathy combined with other (complementary or conventional) intervention; for other specified reasons. The final explicit exclusion criterion was that there was obviously no blinding of participants and practitioners to the assigned intervention.

Forty-eight different clinical conditions were represented in 75 eligible RCTs; 49 were classed as ‘high risk of bias’ and 23 as ‘uncertain risk of bias’; the remaining three trials displayed sufficiently low risk of bias to be designated reliable evidence. Fifty-four trials had extractable data: pooled SMD was -0.33 (95% confidence interval (CI) -0.44, -0.21), which was attenuated to -0.16 (95% CI -0.31, -0.02) after adjustment for publication bias. The three trials with reliable evidence yielded a non-significant pooled SMD: -0.18 (95% CI -0.46, 0.09). There was no single clinical condition for which meta-analysis produced reliable evidence.
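A pooled SMD with a confidence interval of the kind quoted above is typically obtained by inverse-variance weighting of the per-trial effect sizes, with a random-effects adjustment for between-trial heterogeneity. Here is a simplified DerSimonian-Laird sketch with invented trial data; it is not the authors’ analysis and the numbers do not reproduce their results.

```python
import math

def pool_smd_random_effects(effects, variances):
    """Inverse-variance pooling of per-trial SMDs with a
    DerSimonian-Laird random-effects adjustment.
    Returns (pooled SMD, (lower, upper) 95% CI)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q measures heterogeneity around the fixed-effect estimate
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-trial variance estimate
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical trials (effect sizes and variances are invented)
pooled, ci = pool_smd_random_effects([-0.25, -0.10, -0.20],
                                     [0.02, 0.03, 0.025])
```

Note how quickly the confidence interval widens when only a handful of small trials contribute: with few reliable studies, a pooled CI that crosses zero (as with the three reliable trials above, −0.46 to 0.09) is exactly what one would expect.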

A meta-regression was performed to test specifically for within-group differences for each sub-group. The results showed that there were no significant differences between studies that were and were not:

  • included in previous meta-analyses (p = 0.447);
  • pilot studies (p = 0.316);
  • greater than the median sample (p = 0.298);
  • potency ≥ 12C (p = 0.221);
  • imputed for meta-analysis (p = 0.384);
  • free from vested interest (p = 0.391);
  • acute/chronic (p = 0.796);
  • different types of homeopathy (p = 0.217).

After removal of ‘C’-rated trials, the pooled SMD still favoured homeopathy for all sub-groups, but was statistically non-significant for 10 of the 18 (included in previous meta-analysis; pilot study; sample size > median; potency ≥12C; data imputed; free of vested interest; not free of vested interest; combination medicine; single medicine; chronic condition). There remained no significant differences between sub-groups—with the exception of the analysis for sample size > median (p = 0.028).

Meta-analyses were possible for eight clinical conditions, each analysis comprising two to five trials. A statistically significant pooled SMD, favouring homeopathy, was observed for influenza (N = 2), irritable bowel syndrome (N = 2), and seasonal allergic rhinitis (N = 5). Each of the other five clinical conditions (allergic asthma, arsenic toxicity, infertility due to amenorrhoea, muscle soreness, post-operative pain) showed non-significant findings. Removal of ‘C’-rated trials negated the statistically significant effect for seasonal allergic rhinitis and left the non-significant effect for post-operative pain unchanged; no higher-rated trials were available for additional analysis of arsenic toxicity, infertility due to amenorrhoea or irritable bowel syndrome. There were no ‘C’-rated trials to remove for allergic asthma, influenza, or muscle soreness. Thus, influenza was the only clinical condition for which higher-rated trials indicated a statistically significant effect; neither of its contributing trials, however, comprised reliable evidence.

The authors concluded that the quality of the body of evidence is low. A meta-analysis of all extractable data leads to rejection of our null hypothesis, but analysis of a small sub-group of reliable evidence does not support that rejection. Reliable evidence is lacking in condition-specific meta-analyses, precluding relevant conclusions. Better designed and more rigorous RCTs are needed in order to develop an evidence base that can decisively provide reliable effect estimates of non-individualised homeopathic treatment.

I am sure that this paper will lead to lively discussions in the comments section of this blog. I will therefore restrict my comments to a bare minimum.

In my view, this new meta-analysis essentially yields a negative result and confirms most previous, similar reviews.

  • It confirms Linde’s conclusion that there is “insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition”.
  • It confirms Linde’s conclusion that “there was clear evidence that studies with better methodological quality tended to yield less positive results”.
  • It confirms Kleinjen’s conclusion that “most trials are of low methodological quality”.
  • It also confirms the result of the meta-analysis by Shang et al (much maligned by homeopaths) that this “finding is compatible with the notion that the clinical effects of homoeopathy are placebo effects.”
  • Finally, it confirms the conclusion of the analysis of the Australian National Health and Medical Research Council: “Homeopathy should not be used to treat health conditions that are chronic, serious, or could become serious. People who choose homeopathy may put their health at risk if they reject or delay treatments for which there is good evidence for safety and effectiveness. People who are considering whether to use homeopathy should first get advice from a registered health practitioner. Those who use homeopathy should tell their health practitioner and should keep taking any prescribed treatments.”

Another not entirely unimportant point that often gets missed in these discussions is this: even if we believe (which I do not) the most optimistic interpretation of these (and similar data) by homeopaths, we ought to point out that there is no evidence whatsoever that homeopathy cures anything. At the very best it provides marginal symptomatic relief. Yet, the claim of homeopaths that we hear constantly is that homeopathy is a causal and curative therapy.

The first author of the new meta-analysis is an employee of the Homeopathy Research Institute. We might therefore forgive him for repeatedly insisting on dwelling on largely irrelevant (i.e. based on unreliable primary studies) findings. It seems obvious that firm conclusions can only be based on reliable data. I therefore disregard those analyses and conclusions that include such studies.

In the discussion, the authors of the new meta-analysis confirm my interpretation by stating that they “reject the null hypothesis (non-individualised homeopathy is indistinguishable from placebo) on the basis of pooling all studies, but fail to reject the null hypothesis on the basis of the reliable evidence only.” And, in the long version of their conclusions, we find this remarkable statement: “Our meta-analysis of the current reliable evidence base therefore fails to reject the null hypothesis that the outcome of treatment using a non-individualised homeopathic medicine is not distinguishable from that using placebo.” A most tortuous way of stating the obvious: the more reliable data show no difference between homeopathy and placebo.

As many of you know, my own verdict on homeopathy has changed over time. As a young clinician straight out of medical school, I was taken by homeopathy. Years later, as a researcher, I had to realize that the scientific evidence spoke quite clearly against it (those who are interested should read the full account here). Since then, I have expressed this in several ways. Perhaps the most scientific (based on a sound assessment of the totality of the data) way was here: “…the best clinical evidence for homeopathy available to date does not warrant positive recommendations for its use in clinical practice.” This was 15 years ago, and meanwhile the evidence has become – if anything – more definitively negative.

When I tell this to homeopaths and their followers, they often seem to get annoyed with me and claim that I have an axe to grind, am not objective, am paid by ‘BIG PHARMA’ etc. It is hard or even impossible to persuade them that they are mistaken, and I certainly don’t expect anyone to blindly take my word for anything, not even for my verdict on homeopathy. Therefore, I have tried to collect all the ‘official’ verdicts that I could find. By ‘official’ verdict I mean a recent statement from national or international organisations (rather than from single individuals) with research expertise that:

  • are independent,
  • employed a thorough assessment of the evidence,
  • have a reputation of being beyond reproach,
  • and represent scientific consensus.

For obvious reasons, I excluded statements from organisations of (or close to) homeopaths and those with an ideological or commercial interest in homeopathy. It is important to stress that the direction of the verdict (positive or negative) was NOT a selection criterion.


“The principles of homeopathy contradict known chemical, physical and biological laws and persuasive scientific trials proving its effectiveness are not available”

Russian Academy of Sciences, Russia

“Homeopathy should not be used to treat health conditions that are chronic, serious, or could become serious. People who choose homeopathy may put their health at risk if they reject or delay treatments for which there is good evidence for safety and effectiveness.”

National Health and Medical Research Council, Australia

“These products are not supported by scientific evidence.”

Health Canada, Canada

“Homeopathic remedies don’t meet the criteria of evidence based medicine.”

Hungarian Academy of Sciences, Hungary

“The incorporation of anthroposophical and homeopathic products in the Swedish directive on medicinal products would run counter to several of the fundamental principles regarding medicinal products and evidence-based medicine.”

Swedish Academy of Sciences, Sweden

“We recommend parents and caregivers not give homeopathic teething tablets and gels to children and seek advice from their health care professional for safe alternatives.”

Food and Drug Administration, USA

“There is little evidence to support homeopathy as an effective treatment for any specific condition.”

National Centre for Complementary and Integrative Health, USA

“There is no good-quality evidence that homeopathy is effective as a treatment for any health condition.”

National Health Service, UK

Homeopathic remedies “perform no better than placebos”, and the principles on which homeopathy is based are “scientifically implausible”.

House of Commons Science and Technology Committee, UK

I suspect that there are many more statements from similar organisations that I failed to locate. So, if any of my readers know such verdicts, please post them (if possible with a link to the source) in the comments section below. With your help, I might then be able to publish a complete list.
