
An enthusiast of homeopathy recently posted an overview of systematic reviews of homeopathy concluding that the data we do have point towards homeopathy as having an effect greater than that of placebo:

In recent decades, homeopathy has been examined via a number of clinical trials, the number of which now allow meta-analysis. As we can see from the study findings, the type of homeopathy research (ie, individualized vs non-individualized, placebo-controlled vs non-placebo-controlled) can have a strong influence on the results, although trial quality also has a strong effect.

All meta-analyses performed in at least a somewhat open and rigorous manner have found statistically significant effects. This suggests that homeopathy has a greater-than-placebo effect, or at least a strong trend in that direction, when using data from the totality of homeopathy research, or from individualized, placebo-controlled trials. The meta-analyses with questionable methodology, one of which is undergoing government investigation for academic irregularities, have produced negative results, which have been demonstrated to be a direct result of their exclusion of vast swathes of the homeopathic clinical trial literature (based on arbitrary and unexplained criteria), as well as of their failure to differentiate – as Mathie has done – different types of homeopathic research.

The clinical data are flawed. Issues with methodology used in homeopathy RCTs, combined with a lack of research funding, have produced a lack of high-quality trials and data. However, the data we do have point towards homeopathy as having an effect greater than that of placebo.

There can be no argument with this conclusion, aside from possible new data emerging. Anyone who disputes this is going against the existing set of the highest-quality evidence on homeopathy.

His overview is based on the following publications:

Kleijnen, 1991 [1]: All types of homeopathy (eg, single remedy vs combination). Methodological quality assessed; 105 trials. Results: positive trend, regardless of type of homeopathy; 81 trials were positive, 24 showed no effect.
Linde, 1997 [2]: All types of homeopathy. Out of 185 trials, 119 met inclusion criteria; 89 of these had extractable data. Results: OR = 2.45 (95% CI 2.05-2.93).
Ernst, 1998 [3]: Individualized homeopathy; 5 trials determined to be high-quality. Results: OR = 0.
Linde, 1998 [5]: Individualized homeopathy; 32 trials, 19 of which had extractable data. Results: OR = 1.62 for all trials (95% CI 1.17-2.23). The high-quality trials alone showed no significant trend.
Cucherat, 2000 [9]: All types of homeopathy; 118 trials, 16 of which met inclusion criteria. Used an unusual method of combining p-values. Results: all trials, p < 0.000036; less than 10% dropouts, p < 0.084; less than 5% dropouts (a higher standard than most trials considered reliable), p < 0.08 (non-significant).
Shang, 2005 [11]: All types of homeopathy; only 8 of 21 high-quality trials (out of 110) were selected, using unusual criteria. Results: OR = 0.88 (0.65-1.19). Result strongly disputed by statisticians.
Mathie, 2014 [13]: Individualized homeopathy; the analysis pooled data from 22 higher-quality, individualized, double-blind RCTs. Results: OR = 1.53 (1.22-1.91) for all trials pooled; OR = 1.93 (1.16-3.38) for the 3 reliable trials. (A minimal sketch of how such odds ratios are pooled follows this list.)
NHMRC, 2015 [16]: Out of 176 studies, 171 were excluded, leaving only 5 for the analysis. Investigators used unprecedented methods, did not combine data, and are currently under investigation for outcome shopping. Results: negative.
Mathie, 2017 [20]: Non-individualized homeopathy; very few higher-quality trials. Results: for 54 trials with extractable data, SMD = -0.33 (-0.44, -0.21); adjusted for publication bias, SMD = -0.16 (-0.46, -0.09). The 3 high-quality trials had non-significant results: SMD = -0.18 (-0.46, +0.09).
Mathie, 2018 [21]: Individualized, other-than-placebo-controlled trials; 11 trials found, 8 with extractable data. Results: 4 heterogeneous comparative trials showed a non-significant difference; one trial in this group was positive. Three heterogeneous trials of additive homeopathy showed a statistically significant SMD. No definitive conclusion was possible due to trial heterogeneity, poor quality, and the low number of trials.
Mathie, 2019 [22]: Non-individualized, other-than-placebo-controlled trials; 17 RCTs found, 14 with a high risk of bias. Results: significant heterogeneity prevented much comparison; 3 comparable trials showed a non-significant SMD.
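To illustrate what pooled figures such as these represent, here is a minimal sketch of fixed-effect, inverse-variance pooling of odds ratios on the log scale. The trial numbers are hypothetical, and the published meta-analyses typically used more elaborate (e.g. random-effects) models, so this is illustration only:

```python
# Minimal fixed-effect (inverse-variance) pooling of odds ratios on the log scale.
# The trial data below are hypothetical; this only illustrates the arithmetic behind
# a pooled OR with a 95% CI.
import math

# (OR, lower 95% CI limit, upper 95% CI limit) for three made-up trials
trials = [(1.8, 1.1, 2.9), (1.2, 0.8, 1.8), (1.6, 1.0, 2.6)]

weights, weighted_log_ors = [], []
for or_, lo, hi in trials:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE back-calculated from the CI
    w = 1.0 / se ** 2                                  # inverse-variance weight
    weights.append(w)
    weighted_log_ors.append(w * log_or)

pooled_log_or = sum(weighted_log_ors) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
lower = math.exp(pooled_log_or - 1.96 * pooled_se)
upper = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"pooled OR = {math.exp(pooled_log_or):.2f} (95% CI {lower:.2f}-{upper:.2f})")
```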

Apart from getting the wrong end of the stick when interpreting the results of these papers (see for instance here, and here), there are other rather embarrassing flaws in this overview:

  1. Many older systematic reviews were omitted (including about 10 of my own papers). This is relevant because the author of the above review went back as far as 1991 to find the reviews he included.
  2. Several new papers were missing as well. This is relevant because the author evidently included reviews up to 2019. Here are the key passages from the conclusions of some of them:

homoeopathy as a whole may be considered as a placebo treatment.

We tested whether p-curve accurately rejects the evidential value of significant results obtained in placebo-controlled clinical trials of homeopathic ultramolecular dilutions. Our results suggest that p-curve can accurately detect when sets of statistically significant results lack evidential value.

We found no evidence to support the efficacy of homeopathic medicinal products

no firm conclusions regarding the effectiveness and safety of homeopathy for the treatment of IBS can be drawn.

Due to both qualitative and quantitative inadequacies, proofs supporting individualized homeopathy remained inconclusive.

… the use of homeopathy currently cannot claim to have sufficient prognostic validity where efficacy is concerned.
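The second excerpt above refers to p-curve analysis. As a rough illustration of the underlying idea (a simplified sketch, not the method as applied in that paper), statistically significant p-values should cluster near zero when a genuine effect exists, whereas under a true null they are spread roughly uniformly between 0 and 0.05:

```python
# A crude illustration of the p-curve idea (a simplified version of the method of
# Simonsohn et al., not the analysis used in the paper quoted above). If the studied
# effect is real, significant p-values pile up near zero (right skew); if it is
# absent, they are spread roughly uniformly between 0 and 0.05.
import math

significant_p = [0.004, 0.011, 0.028, 0.041, 0.046, 0.049]  # hypothetical p < 0.05 results
n = len(significant_p)
very_low = sum(p < 0.025 for p in significant_p)

# Under a true null, each significant p-value is equally likely to fall below or
# above 0.025, so a one-sided binomial test probes for right skew.
p_right_skew = sum(math.comb(n, k) * 0.5 ** n for k in range(very_low, n + 1))
print(f"{very_low}/{n} p-values below 0.025; right-skew binomial p = {p_right_skew:.2f}")
```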

I am, of course, not saying that this overview amounts to anything like a systematic review. It merely gives you a flavour of how trustworthy proponents of homeopathy are when they pretend to provide us with an objective evaluation of the best available evidence.

I have voiced scepticism about Craniosacral Therapy (CST) several times (see for instance here, here and here). Now, a new paper might change all this:

The systematic review assessed the evidence of Craniosacral Therapy (CST) for the treatment of chronic pain. Randomized clinical trials (RCTs) assessing the effects of CST in chronic pain patients were eligible. Pain intensity and functional disability were the primary outcomes. Risk of bias was assessed using the Cochrane tool.

Ten RCTs with a total of 681 patients suffering from neck and back pain, migraine, headache, fibromyalgia, epicondylitis, and pelvic girdle pain were included.

Compared to treatment as usual, CST showed greater post intervention effects on:

  • pain intensity (SMD=-0.32, 95%CI=[−0.61,-0.02])
  • disability (SMD=-0.58, 95%CI=[−0.92,-0.24]).

Compared to manual/non-manual sham, CST showed greater post intervention effects on:

  • pain intensity (SMD=-0.63, 95%CI=[−0.90,-0.37])
  • disability (SMD=-0.54, 95%CI=[−0.81,-0.28]).

Compared to active manual treatments, CST showed greater post intervention effects on:

  • pain intensity (SMD=-0.53, 95%CI=[−0.89,-0.16])
  • disability (SMD=-0.58, 95%CI=[−0.95,-0.21]).

At six months, CST showed greater effects on pain intensity (SMD=-0.59, 95%CI=[−0.99,-0.19]) and disability (SMD=-0.53, 95%CI=[−0.87,-0.19]) versus sham. Secondary outcomes were all significantly more improved in CST patients than in other groups, except for six-month mental quality of life versus sham. Sensitivity analyses revealed robust effects of CST against most risk of bias domains. Five of the 10 RCTs reported safety data. No serious adverse events occurred. Minor adverse events were equally distributed between the groups.

The authors concluded that, in patients with chronic pain, this meta-analysis suggests significant and robust effects of CST on pain and function lasting up to six months. More RCTs strictly following CONSORT are needed to further corroborate the effects and safety of CST on chronic pain.
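As a quick plausibility check of the effect sizes quoted above, the standard error, z-score and p-value can be back-calculated from an SMD and its 95% confidence interval (assuming a symmetric, normal-approximation interval). A minimal sketch using the first figure quoted above:

```python
# Back-calculating the standard error, z-score and p-value from an SMD and its 95% CI,
# assuming a symmetric normal-approximation interval. The figures are taken from the
# review quoted above (pain intensity vs treatment as usual: SMD = -0.32, CI -0.61 to -0.02).
from statistics import NormalDist

smd, lower, upper = -0.32, -0.61, -0.02
se = (upper - lower) / (2 * 1.96)         # ~0.15
z = smd / se                               # ~-2.1
p = 2 * NormalDist().cdf(-abs(z))          # two-sided p ~0.03
print(f"SE ≈ {se:.3f}, z ≈ {z:.2f}, two-sided p ≈ {p:.3f}")
```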

Robust effects! This looks almost convincing, particularly to an uncritical proponent of so-called alternative medicine (SCAM). However, a bit of critical thinking quickly discloses numerous problems, not with this (technically well-executed) review, but with the interpretation of its results and the conclusions. Let me mention a few that spring to mind:

  1. The literature searches were concluded in August 2018; why publish the paper only in 2020? Meanwhile, there might have been further studies which would render the review outdated even on the day it was published. (I know that there are many reasons for such a delay, but a responsible journal editor must insist on an update of the searches before publication.)
  2. Comparisons to ‘treatment as usual’ do not control for the potentially important placebo effects of CST and thus tell us nothing about the effectiveness of CST per se.
  3. The same applies to comparisons to ‘active’ manual treatments and ‘non-manual’ sham (the purpose of a sham is to blind patients; a non-manual sham defies this purpose).
  4. This leaves us with exactly two trials employing a sham that might have been sufficiently credible to be able to fool patients into believing that they were receiving the verum.
  5. One of these trials (ref 44) is far too flimsy to be taken seriously: it was tiny (n=23), did not adequately blind patients, and failed to mention adverse effects (thus violating research ethics [I cannot take such trials seriously]).
  6. The other trial (ref 41) is by the same research group as the review, and the authors award themselves a higher quality score than any other of the primary studies (perhaps even correctly, because the other trials are even worse). Yet, their study has considerable weaknesses which they fail to discuss: it was small (n=54), there was no check to see whether patient-blinding was successful, and – as with all the CST studies – the therapist was, of course, not blinded. The latter point is crucial, I think, because patients can easily be influenced by the therapists via verbal or non-verbal communication to report the findings favoured by the therapist. This means that the small effects seen in such studies are likely to be due to this residual bias and thus have nothing to do with the intervention per se.
  7. Despite the fact that the review findings depend critically on their own primary study, the authors of the review declared that they have no conflict of interest.

Considering all this plus the rather important fact that CST completely lacks biological plausibility, I do not think that the conclusions of the review are warranted. I much prefer the ones from my own systematic review of 2012. It included 6 RCTs (all of which were burdened with a high risk of bias) and concluded that the notion that CST is associated with more than non‐specific effects is not based on evidence from rigorous RCTs.

So, why do the review authors first go to the trouble of conducting a technically sound systematic review and meta-analysis and then fail utterly to interpret its findings critically? I might have an answer to this question. Back in 2016, I included the head of this research group, Gustav Dobos, in my ‘hall of fame’ because he is one of the many SCAM researchers who never seem to publish a negative result. This is what I then wrote about him:

Dobos seems to be an ‘all-rounder’ whose research tackles a wide range of alternative treatments. That is perhaps unremarkable – but what I do find remarkable is the impression that, whatever he researches, the results turn out to be pretty positive. This might imply one of two things, in my view:

I let my readers choose which possibility they deem to be more likely.

Acupuncture is often recommended for relieving symptoms of fibromyalgia syndrome (FMS). The aim of this systematic review was to ascertain whether verum acupuncture is more effective than sham acupuncture in FMS.

Ten RCTs with a total of 690 participants were eligible, and 8 RCTs were eventually included in the meta-analysis. Its results showed a sizable effect of verum acupuncture compared with sham acupuncture on pain relief, improving sleep quality and reforming general status. Its effect on fatigue was insignificant. When compared with a combination of simulation and improper location of needling, the effect of verum acupuncture for pain relief was the most obvious.

The authors concluded that verum acupuncture is more effective than sham acupuncture for pain relief, improving sleep quality, and reforming general status in FMS posttreatment. However, evidence that it reduces fatigue was not found.

I have a much more plausible conclusion for these findings: in (de-randomised) trials comparing real and sham acupuncture, patients are regularly de-blinded and therapists are invariably not blind. The resulting bias and not the alleged effectiveness of acupuncture explains the outcome.

And why do I think that this conclusion is much more plausible?

Firstly, because of Occam’s Razor.

Secondly, because this is roughly what my own systematic review of the subject found (The notion that acupuncture is an effective symptomatic treatment for fibromyalgia is not supported by the results from rigorous clinical trials. On the basis of this evidence, acupuncture cannot be recommended for fibromyalgia). This view is also shared by other critical reviews of the evidence (Current literature does not support the routine use of acupuncture for improving pain or quality of life in FM). Perhaps more crucially, the current Cochrane review seems to concur: There is low to moderate-level evidence that compared with no treatment and standard therapy, acupuncture improves pain and stiffness in people with fibromyalgia. There is moderate-level evidence that the effect of acupuncture does not differ from sham acupuncture in reducing pain or fatigue, or improving sleep or global well-being. EA is probably better than MA for pain and stiffness reduction and improvement of global well-being, sleep and fatigue. The effect lasts up to one month, but is not maintained at six months follow-up. MA probably does not improve pain or physical functioning. Acupuncture appears safe. People with fibromyalgia may consider using EA alone or with exercise and medication. The small sample size, scarcity of studies for each comparison, lack of an ideal sham acupuncture weaken the level of evidence and its clinical implications. Larger studies are warranted.
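To illustrate the point about de-blinding made above, here is a toy simulation (all numbers hypothetical): even when the true treatment effect is zero, a modest tendency of de-blinded patients in the verum group to over-report improvement produces an apparent between-group effect of the size such trials typically report:

```python
# Toy simulation of residual de-blinding bias: the intervention itself does nothing
# (true effect = 0), but de-blinded patients in the 'verum' arm over-report their
# improvement by a modest margin. All numbers are hypothetical.
import random
from statistics import mean, stdev

random.seed(1)
n_per_arm = 100
true_effect = 0.0        # no specific effect of the intervention
reporting_bias = 0.4     # extra reported improvement (in SD units) caused by de-blinding

verum = [random.gauss(true_effect + reporting_bias, 1.0) for _ in range(n_per_arm)]
sham = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]

pooled_sd = ((stdev(verum) ** 2 + stdev(sham) ** 2) / 2) ** 0.5
apparent_smd = (mean(verum) - mean(sham)) / pooled_sd
print(f"apparent SMD despite a true effect of zero: {apparent_smd:.2f}")
```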

The journal NATURE has just published an excellent article by Andrew D. Oxman and an alliance of 24 leading scientists outlining the importance and key concepts of critical thinking in healthcare and beyond. The authors state that the Key Concepts for Informed Choices is not a checklist. It is a starting point. Although we have organized the ideas into three groups (claims, comparisons and choices), they can be used to develop learning resources that include any combination of these, presented in any order. We hope that the concepts will prove useful to people who help others to think critically about what evidence to trust and what to do, including those who teach critical thinking and those responsible for communicating research findings.

Here I take the liberty of citing a short excerpt from this paper:

CLAIMS:

Claims about effects should be supported by evidence from fair comparisons. Other claims are not necessarily wrong, but there is an insufficient basis for believing them.

Claims should not assume that interventions are safe, effective or certain.

  • Interventions can cause harm as well as benefits.
  • Large, dramatic effects are rare.
  • We can rarely, if ever, be certain about the effects of interventions.

Seemingly logical assumptions are not a sufficient basis for claims.

  • Beliefs alone about how interventions work are not reliable predictors of the presence or size of effects.
  • An outcome may be associated with an intervention but not caused by it.
  • More data are not necessarily better data.
  • The results of one study considered in isolation can be misleading.
  • Widely used interventions or those that have been used for decades are not necessarily beneficial or safe.
  • Interventions that are new or technologically impressive might not be better than available alternatives.
  • Increasing the amount of an intervention does not necessarily increase its benefits and might cause harm.

Trust in a source alone is not a sufficient basis for believing a claim.

  • Competing interests can result in misleading claims.
  • Personal experiences or anecdotes alone are an unreliable basis for most claims.
  • Opinions of experts, authorities, celebrities or other respected individuals are not alone a reliable basis for claims.
  • Peer review and publication by a journal do not guarantee that comparisons have been fair.

COMPARISONS:

Studies should make fair comparisons, designed to minimize the risk of systematic errors (biases) and random errors (the play of chance).

Comparisons of interventions should be fair.

  • Comparison groups and conditions should be as similar as possible.
  • Indirect comparisons of interventions across different studies can be misleading.
  • The people, groups or conditions being compared should be treated similarly, apart from the interventions being studied.
  • Outcomes should be assessed in the same way in the groups or conditions being compared.
  • Outcomes should be assessed using methods that have been shown to be reliable.
  • It is important to assess outcomes in all (or nearly all) the people or subjects in a study.
  • When random allocation is used, people’s or subjects’ outcomes should be counted in the group to which they were allocated.

Syntheses of studies should be reliable.

  • Reviews of studies comparing interventions should use systematic methods.
  • Failure to consider unpublished results of fair comparisons can bias estimates of effects.
  • Comparisons of interventions might be sensitive to underlying assumptions.

Descriptions should reflect the size of effects and the risk of being misled by chance.

  • Verbal descriptions of the size of effects alone can be misleading.
  • Small studies might be misleading.
  • Confidence intervals should be reported for estimates of effects.
  • Deeming results to be ‘statistically significant’ or ‘non-significant’ can be misleading.
  • Lack of evidence for a difference is not the same as evidence of no difference.

CHOICES:

What to do depends on judgements about the problem, the relevance (applicability or transferability) of evidence available and the balance of expected benefits, harm and costs.

Problems, goals and options should be defined.

  • The problem should be diagnosed or described correctly.
  • The goals and options should be acceptable and feasible.

Available evidence should be relevant.

  • Attention should focus on important, not surrogate, outcomes of interventions.
  • There should not be important differences between the people in studies and those to whom the study results will be applied.
  • The interventions compared should be similar to those of interest.
  • The circumstances in which the interventions were compared should be similar to those of interest.

Expected pros should outweigh cons.

  • Weigh the benefits and savings against the harm and costs of acting or not.
  • Consider how these are valued, their certainty and how they are distributed.
  • Important uncertainties about the effects of interventions should be reduced by further fair comparisons.

__________________________________________________________________________

END OF QUOTE

I have nothing to add to this, except perhaps to point out how very relevant all of this, of course, is for SCAM and to warmly recommend you study the full text of this brilliant paper.

George Vithoulkas has been mentioned on this blog repeatedly. He is a lay homeopath – one with no medical background – who has, over the years, become an undisputed hero within the world of homeopathy. Yet Vithoulkas’ contribution to homeopathy research is perilously close to zero. Judging from a recent article in which he outlines the rules of rigorous research, his understanding of research methodology is even closer to zero. Here is a crucial excerpt from this paper, interspersed with a few comments from me in brackets and bold print.

Which are [the] homoeopathic principles to be respected [in clinical trials and meta-analyses]?

1. Homoeopathy does not treat diseases, but only diseased individuals. Therefore, every case may need a different remedy although the individuals may be suffering from the same pathology. This rule was violated by almost all the trials in most meta-analyses. (This statement is demonstrably false; there has even been a meta-analysis of 32 trials that respected this demand)

2. In the homoeopathic treatment of serious chronic pathology, if the remedy is correct usually a strong initial aggravation takes place []. Such an aggravation may last from a few hours to a few weeks and even then we may have a syndrome-shift and not the therapeutic results expected. If the measurements take place in the aggravation period, the outcome will be classified negative. (Homeopathic aggravations exist only in the mind of homeopaths; our systematic review failed to find proof for their existence.)

This factor was also ignored in most trials []. At least sufficient time should be given in the design of the trial, in order to account for the aggravation period. The contrary happened in a recent study [], where the aggravation period was evaluated as a negative sign and the homoeopathic group was pronounced worse than the placebo []. (There are plenty of trials where the follow-up period is long enough to account for this [non-existing] phenomenon.)

3. In severe chronic conditions, the homoeopath may need to correctly prescribe a series of remedies before the improvement is apparent. Such a second or third prescription should take place only after evaluating the effects of the previous remedies []. Again, this rule has also been ignored in most studies. (Again, this is demonstrably wrong; there are many trials where the homeopath was able to adjust his/her prescription according to the clinical response of the patient.)

4. As the prognosis of a chronic condition and the length of time after which any amelioration set in may differ from one to another case [], the treatment and the study-design respectively should take into consideration the length of time the disease was active and also the severity of the case. (This would mean that conditions that have a short history, like post-operative ileus, bruising after injury, common cold, etc. should respond well after merely a short treatment with homeopathics. As this is not so, Vithoulkas’ argument seems to be invalid.)

5. In our experience, Homeopathy has its best results in the beginning stages of chronic diseases, where it might be possible to prevent the further development of the chronic state and this is its most important contribution. Examples of pathologies to be included in such RCTs are ulcerative colitis, sinusitis, asthma, allergic conditions, eczema, gangrene, rheumatoid arthritis as long as they are within the first six months of their appearance. (Why then is there a lack of evidence that any of the named conditions respond to homeopathy?)

In conclusion, three points should be taken into consideration relating to trials that attempt to evaluate the effectiveness of homoeopathy.

First, it is imperative that from the point of view of homoeopathy, the above-mentioned principles should be discussed with expert homoeopaths before researchers undertake the design of any homoeopathic protocol. (I am not aware of any trial where this was NOT done!)

Second, it would be helpful if medical journals invited more knowledgeable peer-reviewers who understand the principles of homoeopathy. (I am not aware of any trial where this was NOT done!)

Third, there is a need for at least one standardized protocol for clinical trials that will respect not only the state-of-the-art parameters from conventional medicine but also the homoeopathic principles []. (Any standardised protocol would be severely criticised; a good study protocol must always take account of the specific research question and therefore cannot be standardised.)

Fourth, experience so far has shown that the therapeutic results in homeopathy vary according to the expertise of the practitioner. Therefore, if the objective is to validate the homeopathic therapeutic modality, the organizers of the trial have to pick the best possible prescribers existing in the field. (I am not aware of any trial where this was NOT done!)

Only when these points are transposed and put into practice, the trials will be respected and accepted by both homoeopathic practitioners and conventional medicine and can be eligible for meta-analysis.

___________________________________________________________________

I suspect what the ‘GREAT VITHOULKAS’ really wanted to express are ‘THE TWO ESSENTIAL PRINCIPLES OF HOMEOPATHY RESEARCH’:

  1. A well-designed study of homeopathy can always be recognised by its positive result.
  2. Any trial that fails to yield a positive finding is, by definition, wrongly designed.

“Eating elderberries can help minimise influenza symptoms.” This statement comes from a press release by the University of Sydney. As it turned out, the announcement was not just erroneous; it also concealed the fact that the in-vitro study on which the press release was based had been part-funded by the very company, Pharmacare, that sells elderberry-based flu remedies.

“This is an appalling misrepresentation of this Pharmacare-funded in-vitro study,” said associate professor Ken Harvey, president of Friends of Science in Medicine. “It was inappropriate and misleading to imply from this study that an extract was ‘proven to fight flu’.” A University of Sydney spokeswoman confirmed Pharmacare was shown a copy of the press release before it was published.

This is an embarrassing turn of events, no doubt. But what about elderberry (Sambucus nigra) and the flu? Is there any evidence?

A systematic review quantified the effects of elderberry supplementation. Supplementation with elderberry was found to substantially reduce upper respiratory symptoms. The quantitative synthesis of the effects yielded a large mean effect size. The authors concluded that these findings present an alternative to antibiotic misuse for upper respiratory symptoms due to viral infections, and a potentially safer alternative to prescription drugs for routine cases of the common cold and influenza.

WHAT?!?!

The alternative to antibiotic misuse can only be the correct use of antibiotics. And, in the case of viral infections such as the flu, this can only be the non-use of antibiotics. My trust in this review, published in a SCAM journal of dubious repute, has instantly dropped to zero.

Perhaps an overview recently published in THE MEDICAL LETTER provides a more trustworthy picture:

No large randomized, controlled trials evaluating the effectiveness of elderberry for prevention or treatment of influenza have been conducted to date. Elderberry appears to have some activity against influenza virus strains in vitro. In two small studies (conducted outside the US), adults with influenza A or B virus infection taking elderberry extract reported a shorter duration of symptoms compared to those taking placebo. Consuming uncooked blue or black elderberries can cause nausea and vomiting. The rest of the plant (bark, stems, leaves, and root) contains sambunigrin, which can release cyanide. No data are available on the safety of elderberry use during pregnancy or while breastfeeding. CONCLUSION — Prompt treatment with an antiviral drug such as oseltamivir (Tamiflu, and generics) has been shown to be effective in large randomized, controlled trials in reducing the duration of influenza symptoms, and it may reduce the risk of influenza-related complications. There is no acceptable evidence to date that elderberry is effective for prevention or treatment of influenza and its safety is unclear.

Any take-home messages?

Yes:

  1. Elderberry supplements are not of proven effectiveness against the flu.
  2. The press officers at universities should be more cautious when writing press-releases.
  3. They should involve the scientists and avoid the sponsors of the research.
  4. In-vitro studies can never tell us anything about clinical effectiveness.
  5. SCAM-journals’ articles must be taken with a pinch of salt.
  6. Consumers are being misled left, right and centre.

Radix Salviae Miltiorrhizae (Danshen) is a herbal remedy that is part of many TCM herbal mixtures. Allegedly, Danshen has been used in clinical practice for over 2000 years.

But is it effective?

The aim of this systematic review was to evaluate the currently available evidence on Danshen for the treatment of cancer. The English and Chinese electronic databases PubMed, the Cochrane Library, EMBASE, the China National Knowledge Infrastructure (CNKI), the VIP database and the Wanfang database were searched up to September 2018. The methodological quality of the included studies was evaluated using the Cochrane methodology.

Thirteen RCTs with 1045 participants were identified. The studies investigated lung cancer (n = 5), leukemia (n = 3), liver cancer (n = 3), breast or colon cancer (n = 1), and gastric cancer (n = 1). A total of 83 traditional Chinese medicines were used in all prescriptions, and there were three different dosage forms. The meta-analysis suggested that Danshen formulae had a significant effect on RR (response rate) (OR 2.38, 95% CI 1.66-3.42), 1-year survival (OR 1.70, 95% CI 1.22-2.36), 3-year survival (OR 2.78, 95% CI 1.62-4.78), and 5-year survival (OR 8.45, 95% CI 2.53-28.27).

The authors concluded that the current research results showed that Danshen formulae combined with chemotherapy for cancer treatment were better than the conventional drug treatment alone.

I am getting a little tired of discussing systematic reviews of so-called alternative medicine (SCAM) that are little more than promotion, free of good science. But, because such articles do seriously endanger the lives of many patients, I do nevertheless succumb occasionally. So here are a few points to explain why the conclusions of the Chinese authors are nonsense:

  • Even though the authors claim the trials included in their review were of high quality, most were, in fact, flimsy.
  • The trials used no less than 83 different herbal mixtures of dubious quality containing Danshen. It is therefore not possible to define which mixture worked and which did not.
  • There is no detailed discussion of the adverse effects and no mention of possible herb-drug interactions.
  • There seemed to be sizable publication bias hidden in the data (a sketch of how such bias is typically probed follows this list).
  • All the eligible studies were conducted in China, and we know that such trials are unreliable to say the least.
  • Only four of the articles were published in English, which means that those of us who cannot read Chinese are unable to check the correctness of the review authors’ data extraction.
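For readers wondering how the publication bias mentioned above is usually probed, a common approach is a funnel-plot asymmetry test such as Egger's regression. The sketch below uses hypothetical data, not figures extracted from the Danshen trials:

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry: regress the
# standardised effect (effect / SE) on precision (1 / SE); an intercept far from zero
# suggests small-study or publication bias. Hypothetical data, NOT the Danshen trials.
import numpy as np
from scipy import stats

log_or = np.array([0.9, 0.7, 1.1, 0.5, 1.4, 0.3])    # study effect estimates (log OR)
se = np.array([0.45, 0.35, 0.50, 0.25, 0.60, 0.20])   # their standard errors

precision = 1.0 / se
standardised = log_or / se
fit = stats.linregress(precision, standardised)

print(f"Egger intercept ≈ {fit.intercept:.2f} "
      f"(a formal test asks whether it differs significantly from zero)")
```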

I know it sounds terribly chauvinistic, but I do truly believe that we should simply ignore Chinese articles if they have defects that set our alarm bells ringing; if we don't, we are likely to do a significant disservice to healthcare and progress.

Did we not have a flurry of systematic reviews of homeopathy in recent months?

And were they not a great disappointment to homeopaths and their patients?

Just as we thought that this is more than enough evidence to show that homeopathy is not effective, here comes another one.

This new review evaluated RCTs of non-individualised homeopathic treatment (NIHT) in which the control group received treatments other than placebo (OTP). Specifically, its aim was to determine the comparative effectiveness of NIHT on health-related outcomes for any given condition.

For each eligible trial, published in the peer-reviewed literature up to the end of 2016, the authors assessed its risk of bias (internal validity) using the seven-domain Cochrane tool, and its relative pragmatic or explanatory attitude (external validity) using the 10-domain PRECIS tool. The researchers grouped RCTs by whether these examined homeopathy as an alternative treatment (study design 1a), adjunctively with another intervention (design 1b), or compared with no intervention (design 2). RCTs were sub-categorised as superiority trials or equivalence/non-inferiority trials. For each RCT, a single ‘main outcome measure’ was selected to use in meta-analysis.

Seventeen RCTs, representing 15 different medical conditions, were eligible for inclusion. Three of the trials were more pragmatic than explanatory, two were more explanatory than pragmatic, and 12 were equally pragmatic and explanatory. Fourteen trials were rated ‘high risk of bias’ overall; the other three trials were rated ‘uncertain risk of bias’ overall. Ten trials had data that were extractable for meta-analysis. Significant heterogeneity undermined the planned meta-analyses or their meaningful interpretation. For the three equivalence or non-inferiority trials with extractable data, the small, non-significant, pooled effect size was consistent with a conclusion that NIHT did not differ from treatment by a comparator (Ginkgo biloba or betahistine) for vertigo or (cromolyn sodium) for seasonal allergic rhinitis.

The authors concluded that the current data preclude a decisive conclusion about the comparative effectiveness of NIHT. Generalisability of findings is restricted by the limited external validity identified overall. The highest intrinsic quality was observed in the equivalence and non-inferiority trials of NIHT.

I do admire the authors’ tenacity in meta-analysing homeopathy trials and empathise with their sadness at the multitude of negative results they thus have to publish. However, I do disagree with their conclusions. In my view, at least two firm conclusions ARE possible:

  1. This dataset confirms yet again that the methodological quality of homeopathy trials is lousy.
  2. The totality of the trial evidence analysed here fails to show that non-individualised homeopathy is effective.

In case you wonder why the authors are not more outspoken about their own findings, perhaps you need to read their statement of conflicts of interest:

Authors RTM, YYYF, PV and AKLT are (or were) associated with a homeopathy organisation whose significant aim is to clarify and extend an evidence base in homeopathy. RTM holds an independent research consultancy contract with the Deutsche Homöopathie-Union, Karlsruhe, Germany. YYYF and AKLT belong to Living Homeopathy Ltd., which has contributed funding to some (but not this current) HRI project work. RTM and PV have no other relationships or activities that could appear to have influenced the submitted work. JRTD had no support from any organisation for the submitted work; in the last 3 years, and for activities outside the submitted study, he received personal fees, royalties or out-of-pocket expenses for advisory work, invitational lectures, use of rating scales, published book chapters or committee membership; he receives royalties from Springer Publishing Company for his book, A Century of Homeopaths: Their Influence on Medicine and Health. JTRD has no other relationships or activities that could appear to have influenced the submitted study.

To add insult to injury, one could point out that, if the overall result of this new review turned out not to be positive despite such conflicts of interest, the evidence must be truly negative.

The Spanish Ministries of Health and Sciences have announced their ‘Health Protection Plan against Pseudotherapies’. Very wisely, they have included chiropractic under this umbrella. To a large degree, this is the result of Spanish sceptics pointing out that alternative therapies are a danger to public health, helped perhaps a tiny bit also by the publication of two of my books (see here and here) in Spanish. Unsurprisingly, such developments alarm Spanish chiropractors who fear for their livelihoods. A quickly written statement by the AEQ (Spanish Chiropractic Association) is aimed at averting the blow. It makes the following 11 points (my comments are below):

1. The World Health Organization (WHO) defines chiropractic as a healthcare profession. It is independent of any other health profession and it is neither a therapy nor a pseudotherapy.

2. Chiropractic is statutorily recognised as a healthcare profession in many European countries including Portugal, France, Italy, Switzerland, Belgium, Denmark, Sweden, Norway and the United Kingdom [10], as well as in the USA, Canada and Australia, to name a few.

3. Chiropractic members of the AEQ undergo university-level training of at least 5 years full-time (300 ECTS points). Chiropractic training is offered within prestigious institutions such as the Medical Colleges of the University of Zurich and the University of Southern Denmark.

4. Chiropractors are spinal health care experts. Chiropractors practice evidence-based, patient-centred conservative interventions, which include spinal manipulation, exercise prescription, patient education and lifestyle advice.

5. The use of these interventions for the treatment of spine-related disorders is consistent with guidelines and is supported by high quality scientific evidence, including multiple systematic reviews undertaken by the prestigious Cochrane collaboration [15, 16, 17].

6. The Global Burden of Disease study shows that spinal disorders are the leading cause of years lived with disability worldwide, exceeding depression, breast cancer and diabetes.

7. Interventions used by chiropractors are recommended in the 2018 Low Back Pain series of articles published in The Lancet and clinical practice guidelines from Denmark, Canada, the European Spine Journal, American College of Physicians and the Global Spine Care Initiative.

8. The AEQ supports and promotes scientific research, providing funding and resources for the development of high quality research in collaboration with institutions of high repute, such as Fundación Jiménez Díaz and the University of Alcalá de Henares.

9. The AEQ strenuously promotes among its members the practice of evidence-based, patient-centred care, consistent with a biopsychosocial model of health.

10. The AEQ demands the highest standards of practice and professional ethics, by implementing among its members the Quality Standard UNE-EN 16224 “Healthcare provision by chiropractors”, issued by the European Committee of Normalisation and ratified by AENOR.

11. The AEQ urges the Spanish Government to regulate chiropractic as a healthcare profession. Without such legislation, citizens of Spain cannot be assured that they are protected from unqualified practitioners and will continue to face legal uncertainties and barriers to access an essential, high-quality, evidence-based healthcare service.

END OF QUOTE

I think that some comments might be in order (they follow the numbering of the AEQ):

  1. The WHO is the last organisation I would consult for information on alternative medicine; during recent years, they have published mainly nonsense on this subject. How about asking the inventor of chiropractic? D.D. Palmer defined it as “a science of healing without drugs.” Chiropractors nowadays prefer to be defined as a profession which has the advantage that one cannot easily pin them down for doing mainly spinal manipulation; if one does, they indignantly respond “but we also use many other interventions, like life-style advice, for instance, and nobody can claim this to be nonsense” (see also point 4 below).
  2. Perfect use of a classical fallacy: appeal to authority.
  3. Appeal to authority, plus ignorance of the fact that teaching nonsense even at the highest level must result in nonsense.
  4. This is an ingenious mix of misleading arguments and lies: most chiros pride themselves on also treating non-spinal conditions. Very few interventions used by chiros are evidence-based. Exercise prescription, patient education and lifestyle advice are hardly typical of chiros and can all be obtained more authoritatively from other healthcare professionals.
  5. Plenty of porkies here too. For instance, the AEQ cite three Cochrane reviews. The first concluded that high-quality evidence suggests that there is no clinically relevant difference between SMT and other interventions for reducing pain and improving function in patients with chronic low-back pain. The second stated that combined chiropractic interventions slightly improved pain and disability in the short term and pain in the medium term for acute/subacute LBP. However, there is currently no evidence that supports or refutes that these interventions provide a clinically meaningful difference for pain or disability in people with LBP when compared to other interventions. And the third concluded that, although support can be found for use of thoracic manipulation versus control for neck pain, function and QoL, results for cervical manipulation and mobilisation versus control are few and diverse. Publication bias cannot be ruled out. Research designed to protect against various biases is needed. Findings suggest that manipulation and mobilisation present similar results for every outcome at immediate/short/intermediate-term follow-up. Multiple cervical manipulation sessions may provide better pain relief and functional improvement than certain medications at immediate/intermediate/long-term follow-up. Since the risk of rare but serious adverse events for manipulation exists, further high-quality research focusing on mobilisation and comparing mobilisation or manipulation versus other treatment options is needed to guide clinicians in their optimal treatment choices. Hardly the positive endorsement implied by the AEQ!
  6. Yes, but that is not an argument for chiropractic; in fact, it’s another fallacy.
  7. Did they forget the many guidelines, institutions and articles that do NOT recommend chiropractic?
  8. I believe the cigarette industry also sponsors research; should we therefore all start smoking?
  9. I truly doubt that the AEQ strenuously promotes among its members the practice of evidence-based healthcare; if they did, they would have to discourage spinal manipulation!
  10. The ‘highest standards of practice and professional ethics’ are clearly not compatible with chiropractors’ use of spinal manipulation. In our recent book, we explained in full detail why this is so.
  11. An essential, high-quality, evidence-based healthcare service? Chiropractic is certainly not essential, rarely high-quality, and clearly not evidence-based.

Nice try AEQ.

But not good enough, I am afraid.

The primary objective of this paper was to assess the efficacy of homeopathy by systematically reviewing existing systematic reviews and meta-analyses and to systematically review trials on open-label placebo (OLP) treatments. A secondary objective was to understand whether homoeopathy as a whole may be considered as a placebo treatment. Electronic databases and previously published papers were systematically searched for systematic reviews and meta-analyses on homoeopathy efficacy. In total, 61 systematic reviews of homeopathy were included.

The same databases plus the Journal of Interdisciplinary Placebo Studies (JIPS) were also systematically searched for randomised controlled trials (RCTs) on OLP treatments, and 10 studies were included.

Qualitative syntheses showed that homoeopathy efficacy can be considered comparable to placebo. Twenty‐five reviews demonstrated that homoeopathy efficacy is comparable to placebo, 20 reviews did not come to a definite conclusion, and 16 reviews concluded that homoeopathy has some effect beyond placebo (in some cases of the latter category, authors drew cautious conclusions, due to low methodological quality of included trials, high risk of bias and sparse data).

Qualitative syntheses also showed that OLP treatments may be effective in some health conditions.

The authors concluded that, if homoeopathy efficacy is comparable to placebo, and if placebo treatments can be effective in some conditions, then homoeopathy as a whole may be considered as a placebo treatment. Reinterpreting homoeopathy as a placebo treatment would define limits and possibilities of this practice. This perspective shift suggests a strategy to manage patients who seek homoeopathic care and to reconcile them with mainstream medicine in a sustainable way.

The authors also mention in their discussion section that one of the most important works which concluded that homoeopathy has some effect beyond placebo is the meta‐analysis performed by Linde et al. (1997), which included 119 trials with 2,588 participants and aimed to assess the efficacy of homoeopathy for many conditions. Among these, there were conditions with various degrees of placebo responsiveness. This work was thoroughly re‐analysed by Linde himself and other authors (Ernst, 1998; Ernst & Pittler, 2000; Linde et al., 1999; Morrison et al., 2000; Sterne et al., 2001), who, selecting high‐quality extractable data and taking into consideration some methodological issues and biases of included trials (like publication bias and biases within studies), underscored that it cannot be demonstrated that homoeopathy has effects beyond placebo.

I agree with much of what the authors state. However, I fail to see why homeopathy should be used as an OLP treatment. I have several reasons for this, for instance:

  1. Placebo effects are unreliable and do occur only in some but not all patients.
  2. Placebo effects are usually of short duration.
  3. Placebo effects are rarely of a clinically relevant magnitude.
  4. The use of placebo, even when given as OLP, usually involves deception which is unethical.
  5. Placebos might replace effective treatments which would amount to neglect.
  6. One does not need a placebo for generating a placebo effect.

The idea that homeopathic remedies could be used in clinical practice as placebos to generate positive health outcomes is by no means new. I know that many doctors have used them that way. The idea that homeopathy could be employed as OLP might be new, but it is neither practical, nor ethical, nor progressive.

Regardless of this particular debate, this new review confirms yet again:

HOMEOPATHY = PLACEBO THERAPY
