

This double-blind RCT aimed to test the efficacy of self-administered acupressure for pain and physical function in adults with knee osteoarthritis (KOA).

A total of 150 patients with symptomatic KOA participated and were randomized to one of three groups:

  1. verum acupressure,
  2. sham acupressure,
  3. usual care.

Participants in the verum and sham groups, but not those receiving usual care, were taught to self-apply acupressure once daily, five days per week, for eight weeks. Assessments were collected at baseline and at 4 and 8 weeks. The numeric rating scale (NRS) for pain was administered during weekly phone calls. Outcomes included the WOMAC pain subscale (primary) as well as the NRS and physical function measures (secondary). Linear mixed regression was conducted to test between-group differences in mean changes from baseline at eight weeks.
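For readers who want to see what such an analysis looks like in practice, here is a minimal sketch of a linear mixed model of the kind described, with a random intercept per patient and a group-by-time interaction. This is not the authors' code; the file name and column names (patient_id, group, week, womac_pain) are hypothetical.

```python
# Minimal sketch of a linear mixed model for repeated outcome measures,
# as described in the trial report (not the authors' actual analysis code).
# Assumes long-format data: one row per patient per assessment.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("koa_trial_long.csv")  # columns: patient_id, group, week, womac_pain

# Random intercept per patient; the group-by-week interaction terms estimate
# between-group differences in change from baseline.
model = smf.mixedlm("womac_pain ~ C(group) * week", data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```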

Compared with usual care, both verum and sham participants experienced significant improvements in WOMAC pain, NRS pain and WOMAC function at 8 weeks. There were no significant differences between verum and sham acupressure groups in any of the outcomes.

The authors concluded that self-administered acupressure is superior to usual care in pain and physical function improvement for older people with KOA. The reason for the benefits is unclear and placebo effects may have played a role.

Another very odd conclusion!

The authors’ stated aim was to TEST THE EFFICACY OF ACUPRESSURE. To achieve this aim, they rightly compared it to a placebo (sham) intervention. This comparison did not show any differences between the two. Ergo, the only correct conclusion is that acupressure is a placebo.

I know, the authors (sort of) try to say this in their conclusions: placebo effects may have played a role. But surely, this is more than a little confusing. Placebo effects were quite evidently the sole cause of the observed outcomes. Is it ethical to confuse the public in this way, I wonder.


Shiatsu is one of those alternative therapies where there is almost no research. Therefore, every new study is of interest, and I was delighted to find this new trial.

Italian researchers tested the efficacy and safety of combining shiatsu and amitriptyline to treat refractory primary headaches in a single-blind, randomized pilot study. Subjects with a diagnosis of primary headache who had not responded to ≥2 different prophylactic drugs were randomized in a 1:1:1 ratio to receive one of the following treatments:

  1. shiatsu plus amitriptyline,
  2. shiatsu alone,
  3. amitriptyline alone.

The treatment period lasted 3 months, and the primary endpoint was the proportion of patients experiencing a ≥50% reduction in headache days. Secondary endpoints were days with headache per month, visual analogue scale score, and number of painkillers taken per month.

After randomization, 37 subjects were allocated to shiatsu plus amitriptyline (n = 11), shiatsu alone (n = 13), and amitriptyline alone (n = 13). Randomization ensured well-balanced demographic and clinical characteristics at baseline.

The results show that all three groups improved in terms of headache frequency, visual analogue scale score, and number of painkillers taken, and there was no between-group difference in the primary endpoint. Shiatsu (alone or in combination) was superior to amitriptyline in reducing the number of painkillers taken per month. Seven (19%) subjects reported adverse events, all attributable to amitriptyline, while no side effects were related to the shiatsu treatment.

The authors concluded that shiatsu is a safe and potentially useful alternative approach for refractory headache. However, there is no evidence of an additive or synergistic effect of combining shiatsu and amitriptyline. These findings are only preliminary and should be interpreted cautiously due to the small sample size of the population included in our study.

Yes, I would advocate great caution indeed!

The results could easily be said to demonstrate that shiatsu is NOT effective. There is NO difference between the groups when looking at the primary endpoint. This, plus the lack of a placebo group, renders the findings uninterpretable:

  • If we take the comparison 2 versus 3, this might indicate efficacy of shiatsu.
  • If we take the comparison 1 versus 3, it would indicate the opposite.
  • If we finally take the comparison 1 versus 2, it would suggest that the drug was ineffective.

So, we can take our pick!

Moreover, I do object to the authors’ conclusion that shiatsu is “safe”. For such a statement, we would need sample sizes about two orders of magnitude greater than those of this study.

So, what might be an acceptable conclusion from this trial? I see only one that is in accordance with the design and the results of this study:


POORLY DESIGNED RESEARCH CANNOT LEAD TO ANY CONCLUSIONS ABOUT THERAPEUTIC EFFICACY OR SAFETY. IT IS A WASTE OF RESOURCES AND A VIOLATION OF RESEARCH ETHICS.

On this blog, we have had (mostly unproductive) discussions with homeopaths so often that sometimes they sound like a broken record. I don’t want to add to this kerfuffle; what I hope to do today is to summarise a certain line of argument which, from the homeopaths’ point of view, seems entirely logical. I do this in the form of a fictitious conversation between a scientist (S) and a classical homeopath (H). My aim is to help readers understand homeopaths better so that future debates might be better informed.

HERE WE GO:

S: I have studied the evidence from studies of homeopathy in some detail, and I have to tell you, it fails to show that homeopathy works.

H: This is not true! We have plenty of evidence to prove that patients get better after seeing a homeopath.

S: Yes, but this is not because of the remedy; it is due to non-specific effects like the empathetic consultation with a homeopath. If one controls for these factors in adequately designed trials, the result is usually negative.

I will re-phrase my claim: the evidence fails to show that highly diluted homeopathic remedies are more effective than placebos.

H: I disagree, there are positive studies as well.

S: Let’s not cherry pick. We must always consider the totality of the reliable evidence. We now have a meta-analysis published by homeopaths that demonstrates the ineffectiveness of homeopathy quite clearly.

H: This is because homeopathy was not used correctly in the primary trials. Homeopathy must be individualised for each unique patient; no two cases are alike! Remember: homeopathy is based on the principle that like cures like!!!

S: Are you saying that all other forms of using homeopathy are wrong?

H: They are certainly not adhering to what Hahnemann told us to do; therefore you cannot take their ineffectiveness as proof that homeopathy does not work.

S: This means that much, if not most, of homeopathy as it is used today is to be condemned as fake.

H: I would not go that far, but it is definitely not the real thing; it does not obey the law of similars.

S: Let’s leave this to one side for the moment. If you insist on individualised homeopathy, I must tell you that this approach can also be tested in clinical trials.

H: I know; and there is a meta-analysis which proves that it is effective.

S: Not quite; it concluded that medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

If you call this a proof of efficacy, I would have to disagree with you. The effect was tiny and at least two of the best studies relevant to the subject were left out. If anything, this paper is yet another proof that homeopathy is useless!

H: You simply don’t understand homeopathy enough to say that. I tried to tell you that the remedy must be carefully chosen to fit each unique patient. This is a very difficult task, and sometimes it is not successful – mainly because the homeopaths employed in clinical trials are not skilled enough to find it. This means that, in these studies, we will always have a certain failure rate which, in turn, is responsible for the small average effect size.

S: But these studies are always conducted by experienced homeopaths, and only the very best, most experienced homeopaths were chosen to cooperate in them. Your argument that the trials are negative because of the ineffectiveness of the homeopaths – rather than the ineffectiveness of homeopathy – is therefore nonsense.

H: This is what you say because you don’t understand homeopathy!

S: No, it is what you say because you don’t understand science. How else would you prove that your hypothesis is correct?

H: Simple! Just look at individual cases from the primary studies within this meta-analysis. You will see that there are always patients who did improve. These cases are the proof we need. The method of the RCT is only good for defining average effects; this is not what we should be looking at, and it is certainly not what homeopaths are interested in.

S: Are you saying that the method of the RCT is wrong?

H: It is not always wrong. Some RCTs of homeopathy are positive and do very clearly prove that homeopathy works. These are obviously the studies where homeopathy has been applied correctly. We have to make a meta-analysis of such trials, and you will see that the result turns out to be positive.

S: So, you claim that all the positive studies have used the correct method, while all the negative ones have used homeopathy incorrectly.

H: If you insist on putting it like that, yes.

S: I see, you define a trial to have used homeopathy correctly by its result. Essentially you accept science only if it generates the outcome you like.

H: Yes, that sounds odd to you – because you don’t understand enough of homeopathy.

S: No, what you seem to insist on is nothing short of double standards. Or would you accept a drug company claiming: some patients did feel better after taking our new drug, and this is proof that it works?

H: You see, not understanding homeopathy leads to serious errors.

S: I give up.

The question of whether spinal manipulative therapy (SMT) is effective for acute low back pain is still controversial. Chiropractors (who use SMT more regularly than any other profession) try everything to make us believe it does work, while the evidence is far less certain. Therefore, it is worth considering the best and most up-to-date data.

The aim of this paper was to systematically review studies of the effectiveness and harms of SMT for acute (≤6 weeks) low back pain. The research question was straightforward: Is the use of SMT in the management of acute (≤6 weeks) low back pain associated with improvements in pain or function?

A thorough literature search was conducted to locate all relevant papers. Study quality was assessed using the Cochrane Back and Neck (CBN) Risk of Bias tool. The evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) criteria. The main outcome measures were pain (measured by either the 100-mm visual analog scale, 11-point numeric rating scale, or other numeric pain scale), function (measured by the 24-point Roland Morris Disability Questionnaire or Oswestry Disability Index [range, 0-100]), or any harms measured within 6 weeks.

Of 26 eligible RCTs identified, 15 RCTs (1711 patients) provided moderate-quality evidence that SMT has a statistically significant association with improvements in pain (pooled mean improvement in the 100-mm visual analog pain scale, −9.95 [95% CI, −15.6 to −4.3]). Twelve RCTs (1381 patients) produced moderate-quality evidence that SMT has a statistically significant association with improvements in function (pooled mean effect size, −0.39 [95% CI, −0.71 to −0.07]). Heterogeneity was not explained by type of clinician performing SMT, type of manipulation, study quality, or whether SMT was given alone or as part of a package of therapies. No RCT reported any serious adverse event. Minor transient adverse events such as increased pain, muscle stiffness, and headache were reported 50% to 67% of the time in large case series of patients treated with SMT.
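As an aside for readers unfamiliar with pooled estimates: a “pooled mean improvement” with a confidence interval is essentially a weighted average of the individual trial results. The sketch below shows the simplest (fixed-effect, inverse-variance) version of that calculation; the trial results in it are invented for illustration and are not data from this review, and a review reporting this much heterogeneity would normally use a random-effects model, which the sketch omits for simplicity.

```python
# Minimal fixed-effect, inverse-variance pooling of mean differences.
# The (mean difference, standard error) pairs below are invented for
# illustration only; they are not trial data from the review.
import numpy as np

mean_diffs = np.array([-12.0, -6.0, -11.0])   # hypothetical per-trial mean differences (mm on a 100-mm VAS)
std_errs = np.array([4.0, 3.0, 5.0])          # hypothetical standard errors

weights = 1.0 / std_errs**2                   # inverse-variance weights
pooled = np.sum(weights * mean_diffs) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled mean difference = {pooled:.1f} mm (95% CI {ci_low:.1f} to {ci_high:.1f})")
```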

The authors concluded that among patients with acute low back pain, spinal manipulative therapy was associated with modest improvements in pain and function at up to 6 weeks, with transient minor musculoskeletal harms. However, heterogeneity in study results was large.

This meta-analysis has been celebrated by chiropractors around the world as a triumph for their hallmark therapy, SMT. But there have also been more cautionary voices – not least from the lead author of the paper. Patients undergoing spinal manipulation experienced a decline of 1 point in their pain rating, says Dr. Paul Shekelle, an internist with the West Los Angeles Veterans Affairs Medical Center and the Rand Corporation who headed the study. That’s about the same amount of pain relief as from NSAIDs, over-the-counter nonsteroidal anti-inflammatory medication, such as ibuprofen. The study also found spinal manipulation modestly improved function. On average, patients reported greater ease and comfort engaging in two day-to-day activities — such as finding they could walk more quickly, were having less difficulty turning over in bed or were sleeping more soundly.

It’s not clear exactly how spinal manipulation relieves back pain. But it may reposition the small joints in the spine in a way that causes less pain, according to Dr. Richard Deyo, an internist and professor of evidence-based medicine at the Oregon Health and Science University. Deyo wrote an editorial published along with the study. Another possibility, Deyo says, is that spinal manipulation may restore some material in the disk between the vertebrae, or it may simply relax muscles, which could be important. There may also be mind-body interaction that comes from the “laying of hands” or a trusting relationship between patients and their health care provider, he says.

Deyo notes that there are many possible treatments for lower back pain, including oral medicine, injected medicine, corsets, traction, surgery, acupuncture and massage therapy. But of about 200 treatment options, “no single treatment is clearly superior,” he says.

In another comment by Paul Ingraham, the critical tone was much clearer: “Claiming it as a victory is one of the best examples I’ve ever seen of making lemonade out of science lemons! But I can understand the mistake, because the review itself does seem positive at first glance: the benefits of SMT are disingenuously summarized as “statistically significant” in the abstract, with no mention of clinical significance (effect size; see Statistical Significance Abuse). So the abstract sounds like good news to anyone but the most wary readers, while deep in the main text the same results are eventually conceded to be “clinically modest.” But even that seems excessively generous: personally, I need at least a 2-point improvement in pain on a scale of 10 to consider it a “modest” improvement! This is not a clearly positive review: it shows weak evidence of minor efficacy, based on “significant unexplained heterogeneity” in the results. That is, the results were all over the place — but without any impressive benefits reported by any study — and the mixture can’t be explained by any obvious, measurable factor. This probably means there’s just a lot of noise in the data, too many things that are at least as influential as the treatment itself. Or — more optimistically — it could mean that SMT is “just” disappointingly mediocre on average, but might have more potent benefits in a minority of cases (that no one seems to be able to reliably identify). Far from being good news, this review continues a strong trend (eg Rubinstein 2012) of damning SMT with faint praise, and also adds evidence of backfiring to the mix. Although fortunately “no RCT reported any serious adverse event,” it seems that minor harms were legion: “increased pain, muscle stiffness, and headache were reported 50% to 67% of the time in large case series of patients treated with SMT.” That’s a lot of undesirable outcomes. So the average patient has a roughly fifty-fifty chance of up to roughly maybe a 20% improvement… or feeling worse to some unknown degree! That does not sound like a good deal to me. It certainly doesn’t sound like good medicine.”

END OF QUOTE

As I have made clear in many previous posts, I do fully agree with these latter statements and would add just three points:

  1. We know that many of the SMT studies completely neglect reporting adverse effects. Therefore it is hardly surprising that no serious complications were on record. Yet, we know that they do occur with sad regularity.
  2. None of the studies controlled for placebo effects. It is therefore possible – I would say even likely – that a large chunk of the observed benefit is not due to SMT per se but to a placebo response.
  3. It seems more than questionable whether the benefits of SMT outweigh its risks.

The aim of this pragmatic study was “to investigate the effectiveness of acupuncture in addition to routine care in patients with allergic asthma compared to treatment with routine care alone.”

Patients with allergic asthma were included in a controlled trial and randomized to receive up to 15 acupuncture sessions over 3 months plus routine care, or to a control group receiving routine care alone. Patients who did not consent to randomization received acupuncture treatment for the first 3 months and were followed as a cohort. All trial patients were allowed to receive routine care in addition to study treatment. The primary endpoint was the asthma quality of life questionnaire (AQLQ, range: 1–7) at 3 months. Secondary endpoints included general health related to quality of life (Short-Form-36, SF-36, range 0–100). Outcome parameters were assessed at baseline and at 3 and 6 months.

A total of 1,445 patients were included in the analysis: 357 were randomized (184 to acupuncture plus routine care and 173 to routine care alone) and 1,088 were in the nonrandomized acupuncture plus routine care group. In the randomized part, acupuncture was associated with an improvement in the AQLQ score compared to the control group (difference acupuncture vs. control group 0.7 [95% confidence interval (CI) 0.5–1.0]) as well as in the physical component scale and the mental component scale of the SF-36 (physical: 2.5 [1.0–4.0]; mental 4.0 [2.1–6.0]) after 3 months. Treatment success was maintained throughout 6 months. Patients not consenting to randomization showed similar improvements as the randomized acupuncture group.

The authors concluded that in patients with allergic asthma, additional acupuncture treatment to routine care was associated with increased disease-specific and health-related quality of life compared to treatment with routine care alone.

We have been over this so many times (see for instance here, here and here) that I am almost a little embarrassed to explain it again: it is fairly easy to design an RCT such that it can only produce a positive result. The currently most popular way to achieve this aim in alternative medicine research is to do an ‘A+B versus B’ study, where A = the experimental treatment and B = routine care. As A always amounts to more than nothing – in the above trial acupuncture would have placebo effects, and the extra attention would also amount to something – A+B must always be more than B alone. The easiest way of thinking about this is to imagine that A and B are both finite amounts of money; everyone can understand that A+B must always be more than B!
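The point can also be demonstrated with a toy simulation: give the add-on treatment A a specific effect of exactly zero but a modest non-specific (placebo/attention) effect, and the ‘A+B’ arm will still beat ‘B’ alone. All numbers below are arbitrary assumptions chosen purely to illustrate the logic; this is not a model of the asthma trial itself.

```python
# Toy simulation of an 'A+B versus B' trial in which treatment A has
# zero specific effect but a modest non-specific (placebo/attention) effect.
# All numbers are arbitrary assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 180                                                 # patients per arm

routine_care = rng.normal(loc=1.0, scale=2.0, size=n)   # improvement under B alone
placebo_bonus = 0.8                                     # non-specific effect of the add-on ritual
a_plus_b = rng.normal(loc=1.0 + placebo_bonus, scale=2.0, size=n)  # improvement under A+B

t, p = stats.ttest_ind(a_plus_b, routine_care)
print(f"A+B mean = {a_plus_b.mean():.2f}, B mean = {routine_care.mean():.2f}, p = {p:.4f}")
# With any positive non-specific effect and a reasonable sample size, this
# comparison comes out 'positive' even though A is, by construction, a placebo.
```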

Why then do acupuncture researchers not get the point? Are they that stupid? I happen to know some of the authors of the above paper personally, and I can assure you, they are not stupid!

So, why?

I am afraid there is only one reason I can think of: they know perfectly well that such an RCT can only produce a positive finding, and precisely that is their reason for conducting such a study. In other words, they are not using science to test a hypothesis, they deliberately abuse it to promote their pet therapy or hypothesis.

As I stated above, it is fairly easy to design an RCT such that it can only produce a positive result. Yet, it is arguably also unethical, perhaps even fraudulent, to do this. In my view, such RCTs amount to pseudoscience and scientific misconduct.

The recent meta-analysis by Mathie et al of non-individualised homeopathy (recently discussed here) identified just 3 RCTs that were rated as ‘reliable evidence’. But just how rigorous are these ‘best’ studies? Let’s find out!

THE FIRST STUDY

The objective of the first trial was “to evaluate the efficacy of the non-hormonal treatment BRN-01 in reducing hot flashes in menopausal women.” Its design was that of a multicentre (35 centres in France), randomized, double-blind, placebo-controlled trial. One hundred and eight menopausal women, ≥50 years of age, were enrolled in the study. The eligibility criteria included menopause for <24 months and ≥5 hot flashes per day with a significant negative effect on the women’s professional and/or personal life. Treatment was either BRN-01 tablets, a registered homeopathic medicine [not registered in the UK] containing Actaea racemosa (4 centesimal dilutions [4CH]), Arnica montana (4CH), Glonoinum (4CH), Lachesis mutus (5CH), and Sanguinaria canadensis (4CH), or placebo tablets, prepared by Laboratoires Boiron according to European Pharmacopoeia standards [available OTC in France]. Oral treatment (2 to 4 tablets per day) was started on day 3 after study enrolment and was continued for 12 weeks. The main outcome measure was the hot flash score (HFS) compared before, during, and after treatment. Secondary outcome criteria were the quality of life (QoL) [measured using the Hot Flash Related Daily Interference Scale (HFRDIS)], severity of symptoms (measured using the Menopause Rating Scale), evolution of the mean dosage, and compliance. All adverse events (AEs) were recorded. One hundred and one women were included in the final analysis (intent-to-treat population: BRN-01, n = 50; placebo, n = 51). The global HFS over the 12 weeks, assessed as the area under the curve (AUC) adjusted for baseline values, was significantly lower in the BRN-01 group than in the placebo group (mean ± SD 88.2 ± 6.5 versus 107.2 ± 6.4; p = 0.0411). BRN-01 was well tolerated; the frequency of AEs was similar in the two treatment groups, and no serious AEs were attributable to BRN-01. The authors concluded that BRN-01 seemed to have a significant effect on the HFS, compared with placebo. According to the results of this clinical trial, BRN-01 may be considered a new therapeutic option with a safe profile for hot flashes in menopausal women who do not want or are not able to take hormone replacement therapy or other recognized treatments for this indication.

Laboratoires Boiron provided BRN-01, its matching placebo, and financial support for the study. Randomization and allocation were carried out centrally by Laboratoires Boiron. I would argue that the treatment time in this study was way too short for generating a therapeutic response. The evolution of the HFS in the two groups was assessed by analysis of the area under the curve (AUC) of the mean scores recorded weekly from each patient in each group over the duration of the study, including those at enrollment (before any treatment). I wonder whether this method was chosen only when the researchers noted that the HFS at the pre-defined time points did not yield a significant result or whether it was pre-determined (elsewhere in the methods section we are told that “The primary evaluation criterion was the effect of BRN-01 on the HFS, compared with placebo. The HFS was defined as the product of the daily frequency and intensity of all hot flashes experienced by the patient, graded by the women from 1 to 4 (1 = mild; 2 = moderate; 3 = strong; 4 = very strong). These data were recorded by the women on a self-administered questionnaire, assisted by a telephone call from a clinical research associate. Data were collected (i) during the first 2 days after enrolment and before any medication had been taken; (ii) then every Tuesday and Wednesday of each week until the 11th week of treatment, inclusive; and (iii) finally, every day of the 12th week of treatment.”). Two of the authors of this paper are employees of Boiron.

THE SECOND STUDY

The second trial was aimed at finding out “whether a well-known and frequently prescribed homeopathic preparation could mitigate post-operative pain.” It was a randomized, double-blind, placebo-controlled trial to evaluate the efficacy of the homeopathic preparation Traumeel S® in minimizing post-operative pain and analgesic consumption following surgical correction of hallux valgus. Eighty consecutive patients were randomized to receive either Traumeel tablets or an indistinguishable placebo, and took primary and rescue oral analgesics as needed. Maximum numerical pain scores at rest and consumption of oral analgesics were recorded on day of surgery and for 13 days following surgery. Traumeel was not found superior to placebo in minimizing pain or analgesic consumption over the 14 days of the trial; however, a transient reduction in the daily maximum post-operative pain score favoring the Traumeel arm was observed on the day of surgery, a finding supported by a treatment-time interaction test (p = 0.04). The authors concluded that Traumeel was not superior to placebo in minimizing pain or analgesic consumption over the 14 days of the trial. A transient reduction in the daily maximum post-operative pain score on the day of surgery is of questionable clinical importance.

Traumeel is a mixture of 6 ingredients, 4 of which are in the D2 potency. Thus it neither is administered as a homeopathic remedy (no ‘like cures like’) nor is it highly diluted. In fact, it is not homeopathy at all but belongs to a weird offspring of homeopathy called ‘homotoxicology’ [this is an explanation from my book: Homotoxicology is a method inspired by homeopathy which was developed by Hans Heinrich Reckeweg (1905 – 1985). He believed that all or most illness is caused by an overload of toxins in the body. The toxins originate, according to Reckeweg, both from the environment and from the malfunction of physiological processes within the body. His treatment consists mainly in applying homeopathic remedies which usually consist of combinations of single remedies, because health cannot be achieved without ridding the body of toxins. The largest manufacturer and promoter of remedies used in homotoxicology is the German firm Heel.] The HEEL Company (Baden-Baden, Germany) provided funding for the performance and monitoring of this project, supplied the study medication and placebo, and prepared the randomization list. The positive outcome mentioned in the authors’ conclusion refers to a secondary endpoint. I would argue that the authors should not have noted it there and should have made it clear that the trial generated a negative result.

THE THIRD STUDY

Finally, the third of the 3 ‘rigorous’ studies “evaluated the effectiveness of the homeopathic preparation Plumbum Metallicum (PM) in reducing the blood lead levels of workers exposed to this metal.” The Brazilian researchers recruited 131 workers to this RCT who took PM in the 15CH potency or placebo for 35 days (10 drops twice daily). Thereafter, the percentage of workers whose lead level had fallen by at least 25% did not differ between the groups, both on intention-to-treat and per-protocol analyses. The authors concluded that PM “had no effect in this study in terms of reducing serum lead in workers exposed to lead.”

This study lacks a power calculation, and arguably the treatment period might have been too short to show an effect. The trial was published in the journal HOMEOPATHY which, some might argue, does not have the most rigorous of peer-review procedures.

CONCLUDING REMARKS

The third study seems by far the most rigorous, in my view. The other two trials are seriously underwhelming in several respects, primarily because we cannot be sure how much influence the commercial interests of the sponsor had on their findings. I am sure others will spot weaknesses in all three trials that I failed to see.

Mathie et al partly disagree with my assessment when they write in their paper: “We report separately our model validity assessments of these trials, evaluating consequently their overall quality based on a GRADE-like principle of ‘downgrading’ [14]: two trials [23, 25] rated here as reliable evidence were downgraded to ‘low quality’ overall due to the inadequacy of their model validity; the remaining trial with reliable evidence [24] was judged to have adequate model validity. The latter study [24] thus comprises the sole RCT that can be designated ‘high quality’ overall by our approach, a stark finding that reveals further important aspects of the preponderantly low quality of the current body of evidence in non-individualised homeopathy.”

References 23, 24 and 25 are Padilha (the paper on Plumbum Metallicum), Colau (the RCT on menopausal women) and Singer (the Traumeel trial) respectively. This means that – as per Mathie’s assessment – just the Colau study remains as the sole trial with ‘reliable evidence’ for non-individualised homeopathy.

What Mathie et al seem to forget entirely is that none of the 3 RCTs is a trial of homeopathy as defined by treatment according to the ‘like cures like’ principle. The authors of the second study acknowledge this fact by stating: “Homeopathic purists may find fault in the administration of a standardized combination homeopathic formula to all patients, based upon clinical diagnosis – as opposed to the individualized manner dictated by standard homeopathic practice.”

So, whichever way we look at this evidence, we cannot possibly deny that the evidence for non-individualised homeopathy is rubbish.


This new systematic review by proponents of homeopathy (and supported by a grant from the Manchester Homeopathic Clinic) tested the null hypothesis that “the main outcome of treatment using a non-individualised (standardised) homeopathic medicine is indistinguishable from that of placebo“. An additional aim was to quantify any condition-specific effects of non-individualised homeopathic treatment. In reporting this paper, I will stay very close to the published text hoping that this avoids both misunderstandings and accusations of bias on my side:

Literature search strategy, data extraction and statistical analysis followed the methods described in a pre-published protocol. A trial comprised ‘reliable evidence’ if its risk of bias was low or it was unclear in one specified domain of assessment. ‘Effect size’ was reported as standardised mean difference (SMD), with arithmetic transformation for dichotomous data carried out as required; a negative SMD indicated an effect favouring homeopathy.
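For readers wondering what an ‘arithmetic transformation for dichotomous data’ involves: a common approach is to convert an odds ratio into an SMD via the logistic-distribution approximation, SMD ≈ ln(OR) × √3/π. The sketch below illustrates that conversion with an invented 2×2 table; it is my illustration of the general method, not the review authors’ code, and the sign convention depends on how the outcome is coded (in the review, a negative SMD favoured homeopathy).

```python
# Converting dichotomous trial results (a 2x2 table) into a standardised
# mean difference via the logistic-distribution approximation:
#   SMD ~= ln(OR) * sqrt(3) / pi
# The counts below are invented for illustration only.
import math

# hypothetical counts: improved / not improved in each arm
a, b = 30, 20    # homeopathy arm
c, d = 22, 28    # placebo arm

log_or = math.log((a * d) / (b * c))
var_log_or = 1 / a + 1 / b + 1 / c + 1 / d

smd = log_or * math.sqrt(3) / math.pi
se_smd = math.sqrt(var_log_or * 3 / math.pi**2)

print(f"SMD = {smd:.2f} (SE {se_smd:.2f})")
```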

The authors excluded the following types of trials: studies of crossover design; of radionically prepared homeopathic medicines; of homeopathic prophylaxis; of homeopathy combined with other (complementary or conventional) intervention; for other specified reasons. The final explicit exclusion criterion was that there was obviously no blinding of participants and practitioners to the assigned intervention.

Forty-eight different clinical conditions were represented in 75 eligible RCTs; 49 were classed as ‘high risk of bias’ and 23 as ‘uncertain risk of bias’; the remaining three trials displayed sufficiently low risk of bias to be designated reliable evidence. Fifty-four trials had extractable data: pooled SMD was -0.33 (95% confidence interval (CI) -0.44, -0.21), which was attenuated to -0.16 (95% CI -0.31, -0.02) after adjustment for publication bias. The three trials with reliable evidence yielded a non-significant pooled SMD: -0.18 (95% CI -0.46, 0.09). There was no single clinical condition for which meta-analysis produced reliable evidence.
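Incidentally, it is easy to see why the pooled SMD of the three reliable trials is non-significant: its 95% confidence interval crosses zero. A quick back-calculation from the figures quoted above makes this explicit (approximate, since the published values are rounded):

```python
# Approximate z and p-value recovered from a pooled SMD and its 95% CI,
# using the figures reported for the three 'reliable evidence' trials:
# SMD -0.18 (95% CI -0.46 to 0.09). Approximate, because the published
# values are rounded.
from scipy import stats

smd, ci_low, ci_high = -0.18, -0.46, 0.09
se = (ci_high - ci_low) / (2 * 1.96)      # back-calculated standard error
z = smd / se
p = 2 * stats.norm.sf(abs(z))             # two-sided p-value
print(f"SE = {se:.3f}, z = {z:.2f}, p = {p:.2f}")   # p is well above 0.05
```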

A meta-regression was performed to test specifically for within-group differences for each sub-group. The results showed that there were no significant differences between studies that were and were not:

  • included in previous meta-analyses (p = 0.447);
  • pilot studies (p = 0.316);
  • greater than the median sample (p = 0.298);
  • potency ≥ 12C (p = 0.221);
  • imputed for meta-analysis (p = 0.384);
  • free from vested interest (p = 0.391);
  • acute/chronic (p = 0.796);
  • different types of homeopathy (p = 0.217).

After removal of ‘C’-rated trials, the pooled SMD still favoured homeopathy for all sub-groups, but was statistically non-significant for 10 of the 18 (included in previous meta-analysis; pilot study; sample size > median; potency ≥12C; data imputed; free of vested interest; not free of vested interest; combination medicine; single medicine; chronic condition). There remained no significant differences between sub-groups—with the exception of the analysis for sample size > median (p = 0.028).

Meta-analyses were possible for eight clinical conditions, each analysis comprising two to five trials. A statistically significant pooled SMD, favouring homeopathy, was observed for influenza (N = 2), irritable bowel syndrome (N = 2), and seasonal allergic rhinitis (N = 5). Each of the other five clinical conditions (allergic asthma, arsenic toxicity, infertility due to amenorrhoea, muscle soreness, post-operative pain) showed non-significant findings. Removal of ‘C’-rated trials negated the statistically significant effect for seasonal allergic rhinitis and left the non-significant effect for post-operative pain unchanged; no higher-rated trials were available for additional analysis of arsenic toxicity, infertility due to amenorrhoea or irritable bowel syndrome. There were no ‘C’-rated trials to remove for allergic asthma, influenza, or muscle soreness. Thus, influenza was the only clinical condition for which higher-rated trials indicated a statistically significant effect; neither of its contributing trials, however, comprised reliable evidence.

The authors concluded that the quality of the body of evidence is low. A meta-analysis of all extractable data leads to rejection of our null hypothesis, but analysis of a small sub-group of reliable evidence does not support that rejection. Reliable evidence is lacking in condition-specific meta-analyses, precluding relevant conclusions. Better designed and more rigorous RCTs are needed in order to develop an evidence base that can decisively provide reliable effect estimates of non-individualised homeopathic treatment.

I am sure that this paper will lead to lively discussions in the comments section of this blog. I will therefore restrict my comments to a bare minimum.

In my view, this new meta-analysis essentially yields a negative result and confirms most previous, similar reviews.

  • It confirms Linde’s conclusion that there was “insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition”.
  • It confirms Linde’s conclusion that “there was clear evidence that studies with better methodological quality tended to yield less positive results”.
  • It confirms Kleinjen’s conclusion that “most trials are of low methodological quality”.
  • It also confirms the result of the meta-analysis by Shang et al (much-maligned by homeopaths), whose “finding is compatible with the notion that the clinical effects of homoeopathy are placebo effects.”
  • Finally, it confirms the conclusion of the analysis of the Australian National Health and Medical Research Council: “Homeopathy should not be used to treat health conditions that are chronic, serious, or could become serious. People who choose homeopathy may put their health at risk if they reject or delay treatments for which there is good evidence for safety and effectiveness. People who are considering whether to use homeopathy should first get advice from a registered health practitioner. Those who use homeopathy should tell their health practitioner and should keep taking any prescribed treatments.”

Another not entirely unimportant point that often gets missed in these discussions is this: even if we believe (which I do not) the most optimistic interpretation of these (and similar) data by homeopaths, we ought to point out that there is no evidence whatsoever that homeopathy cures anything. At the very best, it provides marginal symptomatic relief. Yet, the claim of homeopaths that we hear constantly is that homeopathy is a causal and curative therapy.

The first author of the new meta-analysis is an employee of the Homeopathy Research Institute. We might therefore forgive him for repeatedly insisting on dwelling on largely irrelevant (i.e. based on unreliable primary studies) findings. It seems obvious that firm conclusions can only be based on reliable data. I therefore disregard those analyses and conclusions that include such studies.

In the discussion, the authors of the new meta-analysis confirm my interpretation by stating that they “reject the null hypothesis (non-individualised homeopathy is indistinguishable from placebo) on the basis of pooling all studies, but fail to reject the null hypothesis on the basis of the reliable evidence only.” And, in the long version of their conclusions, we find this remarkable statement: “Our meta-analysis of the current reliable evidence base therefore fails to reject the null hypothesis that the outcome of treatment using a non-individualised homeopathic medicine is not distinguishable from that using placebo.” A most tortuous way of stating the obvious: the more reliable data show no difference between homeopathy and placebo.

Homeopathic remedies work for animals and therefore they cannot be placebos!!!

This argument is the standard reply of believers in homeopathy (not least of Prince Charles). It shows, I think, two things:

  1. Believers in homeopathy fail to understand the placebo effect.
  2. They are ill-informed or lying about the evidence regarding homeopathy in animals.

As we have explained on this blog over and over again: the evidence for homeopathy in animals is very much like that in humans: it fails to show that highly diluted homeopathic remedies are more than placebos (see, for instance here, here and here). Now a further study confirms this fact.

The objective of this triple-blind, randomized controlled trial was to assess the efficacy of homeopathic treatment in bovine clinical mastitis. The study was conducted on a conventionally managed dairy farm between June 2013 and May 2014. Dairy cows with acute mastitis were randomly allocated to homeopathy (n = 70) or placebo (n = 92), for a total of 162 animals. The homeopathic treatment was selected based on clinical symptoms but most commonly consisted of a combination of nosodes with Streptococcinum, Staphylococcinum, Pyrogenium, and Escherichia coli at a potency of 200c. Treatment was administered to cows in the homeopathy group at least once per day for an average of 5 d. The cows in the placebo group were treated similarly, using a placebo preparation instead (lactose globules without active ingredients). If necessary, the researchers also used allopathic drugs (e.g., antibiotics, udder creams, and anti-inflammatory drugs) in both groups. They recorded data relating to the clinical signs of mastitis, treatment, time to recovery, milk yield, somatic cell count at first milk recording after mastitis, and culling. Cows were observed for up to 200 d after clinical recovery. Base-level data did not differ between the homeopathy and placebo groups. Mastitis lasted for an average of 6 d in both groups. No significant differences were noted in time to recovery, somatic cell count, risk of clinical cure within 14 d after disease occurrence, mastitis recurrence risk, or culling risk.

The authors concluded that the results indicated no additional effect of homeopathic treatment compared with placebo.

The question is HOW MUCH MORE EVIDENCE IS NEEDED BEFORE HOMEOPATHS ABANDON THEIR BOGUS CLAIM?

To honour Hahnemann’s birthday, a National Convention was held yesterday on ‘World Homeopathy Day’ in New Delhi. The theme of the convention was “Enhancing Quality Research in Homeopathy through scientific evidence and rich clinical experiences”. They could have done with this new study of Influenzinum 9C, it seems to me. Influenzinum 9C, also known as the homeopathic flu nosode, is a remedy made from the current influenza vaccine. It is claimed to:

  • strengthen the body and increase its resistance to the season’s flu viruses,
  • protect against cold & flu symptoms such as body aches, nausea, chills, fever, headaches, sore throat, coughs, and congestion,
  • reinforce the flu vaccine’s action if you have opted for the flu shot,
  • deal with aftereffects of the flu, and
  • alleviate adverse effects of the flu shot.

As these are the claims made by homeopaths (here is but one example of many: “I’ve been using this for over 30 years for my family, and we have never had the flu!”), French researchers have tested whether Influenzinum works. They just published the results of the first study examining the effectiveness of Influenzinum against influenza-like illnesses.

They conducted a retrospective cohort study during the winter of 2014-2015. After the influenza epidemic, a self-assessment questionnaire was offered to patients presenting for a consultation. The primary endpoint was the declaration of an influenza-like illness. The exposed patients (treated with Influenzinum) were matched to two non-exposed (untreated) patients with a propensity score. A conditional logistic model expressed the influenza-like illness risk reduction provided by Influenzinum.

The cohort included 3514 patients recruited from 46 general practitioners. After matching, the treated group (n=2041) and the untreated group (n=482) did not differ on variables collected. Thus Influenzinum preventive therapy did not significantly alter the likelihood of influenza-like illness.

The authors concluded that Influenzinum preventive therapy did not appear effective in preventing influenza-like illness.

This can be no surprise to anyone who knows what ‘9C’ means: it signifies a dilution of 1: 1 000 000 000 000 000 000 (plus 9 times vigorous shaking, of course).
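For anyone who wants to check the arithmetic: each ‘C’ step is a 1:100 dilution, so nine steps give a factor of 100^9 = 10^18. The back-of-the-envelope sketch below makes the point; the mother-tincture concentration and dose volume are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope arithmetic for a 9C homeopathic dilution.
# Each 'C' step is a 1:100 dilution, so 9C means a factor of 100**9 = 1e18.
# The mother-tincture concentration and dose volume are illustrative assumptions.
AVOGADRO = 6.022e23          # molecules per mole

dilution_factor = 100 ** 9   # 1e18
start_molar = 1.0            # assumed mother-tincture concentration, mol/L
dose_litres = 1e-3           # assumed 1 mL dose

molecules_per_dose = AVOGADRO * start_molar * dose_litres / dilution_factor
print(f"dilution factor: 1 in {dilution_factor:.0e}")
print(f"molecules of starting material per dose: ~{molecules_per_dose:.0f}")
# Roughly 600 molecules under these generous assumptions; at 15C (as in the
# Plumbum trial discussed earlier) the factor is 1e30 and essentially nothing remains.
```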

I am sure that some homeopaths will now question whether Influenzinum is truly homeopathic. Is it based on the ‘like cures like’ principle? Before some clever Dick comments ‘THIS SHOWS THAT PROF ERNST HAS NOT GOT A CLUE ABOUT HOMEOPATHY’, please let me point out that it was not I but the homeopaths who insisted on labelling Influenzinum ‘homeopathic’ (see, for instance, here: “Influenzinum Dose is a homoeopathic medicine created by Laboratoire Boiron. Single dose to be consumed in one step. This homoeopathic medicine is generally used as a substitute for the flu vaccine”). AND WHO AM I TO QUESTION THE AUTHORITY OF BOIRON???

Acupuncture is little more than a theatrical placebo! If we confront an acupuncture fan with this statement, he/she is bound to argue that there are some indications for which the evidence is soundly positive. One of these conditions, they would claim, is nausea and vomiting. But how strong are these data? A new study sheds some light on this question.

The objective of this RCT was to evaluate if consumption of antiemetics and eating capacity differed between patients receiving verum acupuncture, sham acupuncture, or standard care only during radiotherapy. Patients were randomized to verum (n = 100) or sham (n = 100) acupuncture (telescopic blunt sham needle) (12 sessions) and registered daily their consumption of antiemetics and eating capacity. A standard care group (n = 62) received standard care only.

The results show that, after a 27 Gray dose of radiotherapy, more patients in the verum and sham acupuncture groups did not need any antiemetic medication, compared to the standard care group. More patients in the verum and sham acupuncture groups were capable of eating as usual, compared to the standard care group.

The authors concluded that patients receiving acupuncture had lower consumption of antiemetics and better eating capacity than patients receiving standard antiemetic care, plausible by nonspecific effects of the extra care during acupuncture.

I find these conclusions odd because they seem to state that acupuncture was more effective than standard care. Subsequently – almost as an afterthought – they mention that its effects are brought about by nonspecific effects. This is grossly misleading, in my view.

The study was designed as a comparison between real and sham acupuncture; the standard care group was not a randomised comparison group. Therefore, the main result and conclusion have to focus on the comparison between verum and sham acupuncture. This comparison shows that the two did not produce different results. In other words, the study shows that acupuncture was not effective.

A much more reasonable conclusion would have been: THIS STUDY FAILED TO FIND SIGNIFICANT EFFECTS OF ACUPUNCTURE BEYOND PLACEBO.
