Amongst all the implausible treatments to be found under the umbrella of ‘alternative medicine’, Reiki might be one of the worst, i.e. among the least plausible and most outright bizarre (see for instance here, here and here). But this has never stopped enthusiasts from playing scientist and conducting some more pseudo-science.
This new study examined the immediate symptom relief from a single reiki or massage session in a hospitalized population at a rural academic medical centre. It was designed as a retrospective analysis of prospectively collected demographic, clinical, process, and quality-of-life data for hospitalized patients receiving massage therapy or reiki. Hospitalized patients requesting or referred to the healing arts team received either a massage or reiki session and completed pre- and post-therapy symptom questionnaires. Differences between pre- and post-session scores for pain, nausea, fatigue, anxiety, depression, and overall well-being were recorded using an 11-point Likert scale.
Patients reported symptom relief with both reiki and massage therapy. Reiki improved fatigue and anxiety more than massage. Pain, nausea, depression, and well-being changes were not different between reiki and massage encounters. Immediate symptom relief was similar for cancer and non-cancer patients for both reiki and massage therapy and did not vary based on age, gender, length of session, or baseline symptoms.
The authors concluded that reiki and massage clinically provide similar improvements in pain, nausea, fatigue, anxiety, depression, and overall well-being while reiki improved fatigue and anxiety more than massage therapy in a heterogeneous hospitalized patient population. Controlled trials should be considered to validate the data.
Don’t I just adore this little addendum to the conclusions, “controlled trials should be considered to validate the data”?
The thing is, there is nothing to validate here!
The outcomes are not due to the specific effects of Reiki or massage; they are almost certainly caused by:
- the extra attention,
- the expectation of patients,
- the verbal or non-verbal suggestions of the therapists,
- the regression towards the mean,
- the natural history of the condition,
- the concomitant therapies administered in parallel,
- the placebo effect,
- social desirability.
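To illustrate just one of these factors: regression towards the mean alone can produce an apparent improvement in uncontrolled pre/post comparisons like the one above. The little simulation below is purely illustrative (all numbers are invented, not taken from the study): it rates patients on a hypothetical 0–10 symptom scale, enrols only those who happen to score badly, and re-rates them later without any treatment at all.

```python
import random

random.seed(42)

# Each patient has a stable 'true' symptom level, but any single rating
# adds measurement noise (clipped to an 11-point scale, 0-10).
def rate(true_level):
    return min(10.0, max(0.0, true_level + random.gauss(0, 1.5)))

patients = [random.gauss(5, 1.5) for _ in range(10_000)]

# Patients tend to enrol when they feel unusually bad: keep only those
# whose pre-treatment rating happens to be 7 or higher.
enrolled = [(p, rate(p)) for p in patients]
enrolled = [(p, pre) for p, pre in enrolled if pre >= 7]

pre_mean = sum(pre for _, pre in enrolled) / len(enrolled)
# Re-rate the same patients later; note that NO treatment was given.
post_mean = sum(rate(p) for p, _ in enrolled) / len(enrolled)

print(f"pre-'treatment' mean:  {pre_mean:.2f}")
print(f"post-'treatment' mean: {post_mean:.2f}")  # lower, with zero effect
```

The post scores come out lower than the pre scores purely because patients were selected at a moment of unusually high ratings; a pre/post design without a control group cannot distinguish this from a genuine treatment effect.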
Such pseudo-research can only serve one purpose: to mislead (some of) us into thinking that treatments such as Reiki might work.
What journal would be so utterly devoid of critical analysis as to publish such unethical nonsense?
Ahh … it’s our old friend, the Journal of Alternative and Complementary Medicine.
Say no more!
Generally speaking, Cochrane reviews provide the best (most rigorous, transparent and independent) evidence on the effectiveness of medical or surgical interventions. It is therefore important to ask what they tell us about homeopathy. In 2010, I did exactly that and published it as an overview of the current best evidence. At the time, there were 6 relevant Cochrane reviews. They covered the following conditions: cancer, attention-deficit hyperactivity disorder, asthma, dementia, influenza and induction of labour. And their results were clear: they did not show that homeopathic medicines have effects beyond placebo.
Now a further Cochrane review has been published.
Does it change this situation?
This systematic review assessed the effectiveness and safety of oral homeopathic medicinal products compared with placebo or conventional therapy to prevent and treat acute respiratory tract infections (ARTIs) in children. The researchers conducted extensive literature searches, checked references, and contacted study authors to identify additional studies. They included all double-blind, randomised controlled trials (RCTs) or double-blind cluster-RCTs comparing oral homeopathic medicinal products with identical placebo or self-selected conventional treatments to prevent or treat ARTIs in children aged 0 to 16 years.
Eight RCTs involving 1562 children receiving oral homeopathic medicinal products or a control treatment (placebo or conventional treatment) for upper respiratory tract infections (URTIs) were included. Four treatment studies examined the effect on recovery from URTIs, and four studies investigated the effect on preventing URTIs after one to three months of treatment, with follow-up for the remainder of the year. Two treatment and two prevention studies involved homeopaths individualising treatment for children. The other studies used predetermined, non-individualised treatments. All studies involved highly diluted homeopathic medicinal products.
Several key limitations to the included studies were identified, in particular methodological inconsistencies and high attrition rates, failure to conduct intention-to-treat analysis, selective reporting, and apparent protocol deviations. The authors deemed three studies to be at high risk of bias in at least one domain, and many had additional domains with unclear risk of bias. Three studies received funding from homeopathy manufacturers; one reported support from a non-government organisation; two received government support; one was co-sponsored by a university; and one did not report funding support.
Methodological inconsistencies and significant clinical and statistical heterogeneity precluded robust quantitative meta-analysis. Only four outcomes were common to more than one study and could be combined for analysis. Odds ratios (OR) were generally small with wide confidence intervals (CI), and the contributing studies found conflicting effects, so there was little certainty that the efficacy of the intervention could be ascertained.
All studies assessed as at low risk of bias showed no benefit from oral homeopathic medicinal products; trials at uncertain and high risk of bias reported beneficial effects. The authors found low-quality evidence that non-individualised homeopathic medicinal products confer little preventive effect on ARTIs (OR 1.14, 95% CI 0.83 to 1.57). They also found low-quality evidence from two individualised prevention studies that homeopathy has little impact on the need for antibiotic usage (N = 369) (OR 0.79, 95% CI 0.35 to 1.76).
The authors also assessed adverse events, hospitalisation rates and length of stay, days off school (or work for parents), and quality of life, but were not able to pool data from any of these secondary outcomes. There is insufficient evidence from two pooled individualised treatment studies (N = 155) to determine the effect of homeopathy on short-term cure (OR 1.31, 95% CI 0.09 to 19.54; very low-quality evidence) and long-term cure rates (OR 1.01, 95% CI 0.10 to 9.96; very low-quality evidence). Adverse events were reported inconsistently; however, serious events were not reported. One study found an increase in the occurrence of non-severe adverse events in the treatment group.
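A side note on reading such results: an odds ratio whose 95% confidence interval spans 1.0 (like the OR 1.14, 95% CI 0.83 to 1.57 above) is compatible with no effect whatsoever. Below is a minimal sketch of how such a Wald-type interval is computed from a 2x2 table; the counts are invented for illustration and are not taken from the review.

```python
import math

# Hypothetical 2x2 table -- invented counts for illustration only,
# NOT taken from the Cochrane review:
a, b = 30, 150   # homeopathy arm: ARTI recurrence yes / no
c, d = 27, 152   # placebo arm:    ARTI recurrence yes / no

odds_ratio = (a * d) / (b * c)

# Wald 95% CI, computed on the log-odds scale
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# The interval spans 1.0, i.e. these data are compatible with no effect.
```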
The authors concluded that pooling of two prevention and two treatment studies did not show any benefit of homeopathic medicinal products compared to placebo on recurrence of ARTI or cure rates in children. We found no evidence to support the efficacy of homeopathic medicinal products for ARTIs in children. Adverse events were poorly reported, so conclusions about safety could not be drawn.
In their paper, the authors state that “there are no established explanatory models for how highly diluted homeopathic medicinal products might work. For this reason, homeopathy remains highly controversial because the key concepts governing this form of medicine are not consistent with the established laws of conventional therapeutics.” In other words, there is no reason why highly diluted homeopathic remedies should work. Yet, remarkably, when asked which conditions respond best to homeopathy, most homeopaths would probably include ARTIs in children.
The authors also point out that “The results of this review are consistent with all previous systematic reviews on homeopathy. Funders and study investigators contemplating any further research in this area need to consider whether further research will advance our knowledge, given the uncertain mechanism of action and debate about how the lack of a measurable dose can make them effective. The studies we identified did not use a uniform approach to choosing and measuring outcomes or assigning appropriate time points for outcome measurement. The use of validated symptom scales would facilitate future meta-analyses. It is unclear if there is any benefit from individualised (classical) homeopathy over the use of commercially available products.”
Even though I agree with the authors on most of their views and commend their excellent work, I would be more outspoken regarding the need for further research. In my view, it would be a foolish, wasteful and therefore unethical activity to fund, plan or conduct further research in this area.
For many months, I have noticed a proliferation of so-called pilot studies of alternative therapies. A pilot study (also called a feasibility study) is defined as a small-scale preliminary study conducted in order to evaluate feasibility, time, cost and adverse events, and to improve upon the study design prior to performance of a full-scale research project. Here I submit that most of the pilot studies of alternative therapies are, in fact, bogus.
To qualify as a pilot study, an investigation needs to have an aim that is in line with the above-mentioned definition. Another obvious hallmark must be that its conclusions are in line with this aim. We do not need to conduct much research to find that even these two elementary preconditions are not fulfilled by the plethora of pilot studies that are currently being published, and that proper pilot studies of alternative medicine are very rare.
Three recent examples of dodgy pilot studies will have to suffice (but rest assured, there are many, many more).
The aim of this study was to evaluate the effects of foot reflexotherapy on pain and postural balance in elderly individuals with low back pain. And the conclusions drawn by its authors were that this study demonstrated that foot reflexotherapy induced analgesia but did not affect postural balance in elderly individuals with low back pain.
The aim of this study was to investigate the effect of Tai Chi training on dual-tasking performance that involved stepping down and compared it with that of conventional exercise among stroke survivors. And the conclusions read: These results suggest a beneficial effect of Tai Chi training on cognition among stroke survivors without compromising physical task performance in dual-tasking.
The aim of this study was to evaluate the efficacy [of acupuncture] over 12 weeks of treatment and 12 weeks of follow-up. And the conclusion: Acupuncture decreases WC, HC, HbA1c, TG, and TC values and blood pressure in MetS.
It is almost painfully obvious that these studies are not ‘pilot’ studies as defined above.
So, what are they, and why are they so popular in alternative medicine?
The way I see it, they are the result of amateur researchers conducting pseudo-research for publication in lamentable journals in an attempt to promote their pet therapies (I have yet to find such a study that reports a negative finding). The sequence of events that leads to the publication of such pilot studies is usually as follows:
- An enthusiast or a team of enthusiasts of alternative medicine decide that they will do some research.
- They have no or very little know-how in conducting a clinical trial.
- They nevertheless feel that such a study would be nice as it promotes both their careers and their pet therapy.
- They design some sort of a plan and start recruiting patients for their trial.
- At this point they notice that things are not as easy as they had imagined.
- They have too few funds and too little time to do anything properly.
- This does not, however, stop them from continuing.
- The trial progresses slowly, and patient numbers remain low.
- After a while the would-be researchers get fed up and decide that their study has enough patients to stop the trial.
- They improvise some statistical analyses with their results.
- They write up the results the best they can.
- They submit it for publication in a 3rd class journal and, in order to get it accepted, they call it a ‘pilot study’.
- They feel that this title is an excuse for even the most obvious flaws in their work.
- The journal’s reviewers and editors are all proponents of alternative medicine who welcome any study that seems to confirm their belief.
- Thus the study does get published despite the fact that it is worthless.
Some might say ‘so what? no harm done!’
But I beg to differ: these studies pollute the medical literature and misguide people who are unable or unwilling to look behind the smoke-screen. Enthusiasts of alternative medicine popularise these bogus trials, while hiding the fact that their results are unreliable. Journalists report about them, and many consumers assume they are being told the truth – after all it was published in a ‘peer-reviewed’ medical journal!
My conclusions are as simple as they are severe:
- Such pilot studies are the result of gross incompetence on many levels (researchers, funders, ethics committees, reviewers, journal editors).
- They can cause considerable harm, because they mislead many people.
- In more than one way, they represent a violation of medical ethics.
- They could be considered scientific misconduct.
- We should think of stopping this increasingly common form of scientific misconduct.
In recent days, journalists across the world had a field day (mis)reporting that doctors practising integrative medicine were doing something positive after all. I think that the paper shows nothing of the kind – but please judge for yourself.
The authors of this article wanted to determine differences in antibiotic prescription rates between conventional General Practice (GP) surgeries and GP surgeries employing general practitioners (GPs) additionally trained in integrative medicine (IM) or complementary and alternative medicine (CAM) (referred to as IM GPs) working within National Health Service (NHS) England.
They conducted a retrospective study of antibiotic prescription rates per STAR-PU (Specific Therapeutic group Age–sex weighting Related Prescribing Unit) using NHS Digital data for 2016. Publicly available data were used on the prevalence of relevant comorbidities, the demographics of patient populations and deprivation scores. The setting was primary care; participants were 7283 NHS GP surgeries in England. The association between the presence of IM GPs and antibiotic prescribing rates per STAR-PU was examined, with the number of antibiotic prescriptions (total, and for respiratory tract infection (RTI) and urinary tract infection (UTI) separately) as the outcome. IM GP surgeries (n=9) were comparable to conventional GP surgeries in terms of list sizes, demographics, deprivation scores and comorbidity prevalence.
Statistically significantly fewer total antibiotics were prescribed at NHS IM GP surgeries than at conventional NHS GP surgeries. In contrast, the numbers of antibiotics prescribed for UTIs were similar in both types of practice.
The authors concluded that NHS England GP surgeries employing GPs additionally trained in IM/CAM have lower antibiotic prescribing rates. Accessibility of IM/CAM within NHS England primary care is limited. Main study limitation is the lack of consultation data. Future research should include the differences in consultation behaviour of patients self-selecting to consult an IM GP or conventional surgery, and its effect on antibiotic prescription. Additional treatment strategies for common primary care infections used by IM GPs should be explored to see if they could be used to assist in the fight against antimicrobial resistance.
The study was flimsy to say the least:
- It was retrospective and is therefore open to no end of confounders.
- There were only 9 surgeries in the IM group.
Moreover, the results were far from impressive. The differences in antibiotic prescribing between the two groups of GP surgeries were minimal or non-existent. Finally, the study was financed via an unrestricted grant of WALA Heilmittel GmbH, Germany (“approx. 900 different remedies conforming to the anthroposophic understanding of man and nature”) and its senior author has a long track record of publishing papers promotional for anthroposophic medicine.
Such pseudo-research seems to be popular in the realm of CAM, and I have commented before on similarly futile projects. The comparison I sometimes use is that of a Hamburger restaurant:
Employees of a large Hamburger chain set out to study the association between utilization of Hamburger restaurant services and vegetarianism. The authors used a retrospective cohort design. The study population comprised New Hampshire residents aged 18-99 years who had entered the premises of a Hamburger restaurant within 90 days for the primary purpose of eating. The authors excluded subjects with a diagnosis of cancer. They measured the likelihood of vegetarianism among recipients of services delivered by Hamburger restaurants compared with a control group of individuals not using meat-dispensing facilities. They also compared the cohorts with regard to the money spent in Hamburger restaurants. The adjusted likelihood of being a vegetarian was 55% lower in the experimental group compared to controls. The average amount of money spent per person in Hamburger restaurants was also significantly lower among the Hamburger group.
To me, it is obvious that such analyses must produce a seemingly favourable result for CAM. In the present case, there are several reasons for this:
- GPs who volunteer to be trained in CAM tend to be in favour of ‘natural’ treatments and oppose synthetic drugs such as antibiotics.
- Education in CAM would only reinforce this notion.
- Similarly, patients electing to consult IM GPs tend to be in favour of ‘natural’ treatments and oppose synthetic drugs such as antibiotics.
- Such patients might be less severely ill than the rest of the patient population (the data from the present study do, in fact, imply this to be true).
- These phenomena work in concert to generate less antibiotic prescribing in the IM group.
In the final analysis, all this finding amounts to is a self-fulfilling prophecy: grocery shops sell less meat than butchers! You don’t believe me? Perhaps you need to read a previous post then; it concluded that physicians practicing integrative medicine (the 80% who did not respond to the survey were most likely even worse) not only use and promote much quackery, they also tend to endanger public health by their bizarre, irrational and irresponsible attitudes towards vaccination.
What is upsetting with the present paper, in my view, are the facts that:
- a reputable journal published this junk,
- the international press had a field day reporting this study, implying that CAM is a good thing.
The fact is that it shows nothing of the kind. Imagine we send GPs on a course where they are taught to treat all their patients with blood-letting. This too would result in less prescription of antibiotics, wouldn’t it? But would it be a good thing? Of course not!
True, we prescribe too many antibiotics. Nobody doubts that. And nobody doubts that it is a big problem. The solution to this problem is not more CAM but fewer antibiotics. To realise the solution, we do not need to teach GPs CAM; we need to remind them of the principles of evidence-based practice. And the two are clearly not the same; in fact, they are opposites.
An announcement (it’s in German, I’m afraid) proudly declaring that ‘homeopathy fulfils the criteria of evidence-based medicine‘ caught my attention.
Here is the story:
In 2016, Dr. Melanie Wölk did a ‘Master of Science’* at the ‘Donau University’ in Krems, Austria, investigating the question of whether homeopathy follows the rules of evidence-based medicine (EBM). She arrived at the conclusion that YES, IT DOES! This pleased the leading Austrian manufacturer of homeopathics (Dr Peithner) so much and so durably that, on 23 March 2018, he gave her a ‘scientific’ award (the annual Peithner award) for her ‘research’.
So far so good.
Her paper is unpublished, or at least not available on Medline; therefore, I am unable to evaluate it directly. All I know about it from the announcement is that she did her ‘research’ at the ‘Zentrum für Traditionelle Chinesische Medizin und Komplementärmedizin‘ of the said university. A quick Medline search revealed that this unit has never published anything, not a single paper, it seems! Disappointed, I searched for Dr. Christine Schauhuber, the leader of the unit; and again I found no Medline-listed publications in her name. My interim conclusion is thus that this institution might not be at the cutting edge of science.
But what do we know about Dr. Melanie Wölk’s award-winning master thesis *?
The announcement tells us that she investigated all RCTs published between 2010 and 2016. In addition, she evaluated:
- the ‘Swiss report’,
- the NHMRC report,
- Shang 2005,
- Ernst 2002,
- the Frass sepsis trial of 2005,
- Linde 1997 (why not Linde 1999? I ask myself; perhaps because this re-analysis of the same material came to a largely negative conclusion?)
On that basis, she arrived at her positive verdict – not just tentatively, but without doubt (“Das Ergebnis steht fest”).
Dr Peithner, the owner of the company and awarder of the prize, was quoted stating that this is a very important piece of work for homeopathy; it shows yet again what we see in our daily routine, namely that homeopathics are effective. Wölk’s investigation demonstrates furthermore that high-quality trials of homeopathy do exist, and that it is time to end the witch-hunt aimed at discrediting an effective therapy. Conventional medicine and homeopathy ought to finally work hand in hand – for the benefit of our patients. (“Für die Homöopathie ist das eine sehr wichtige Arbeit, die wieder zeigt, was wir in der ärztlichen Praxis täglich erleben, nämlich dass homöopathische Arzneimittel wirken. Wölks Untersuchung zeigt weiters deutlich, dass es sehr wohl hochqualitative Homöopathie-Studien gibt und es an der Zeit ist, die Hexenjagd zu beenden, mit der eine wirksame medizinische Therapie diskreditiert werden soll. Konventionelle Medizin und Homöopathie sollten endlich Hand in Hand arbeiten – zum Wohle der Patientinnen und Patienten.”)
I do hope that Dr Wölk uses the prize money (by no means a fortune; see photo) to buy some time for publishing her work (one of my teachers, all those years ago, used to say ‘unpublished research is no research’) so that we can all benefit from it. Until it becomes available, I should perhaps mention that the description of her methodology (publications between 2010 and 2016 [plus a few other papers that nicely fitted the arguments?]; including one Linde review and not his more recent re-analysis [see above]) does not inspire me to think that Dr Wölk’s research was anywhere near rigorous, systematic or complete. In the same vein, I am tempted to point out that the Swiss report is probably the very last document I would select, if I wanted to generate an objective picture about the value of homeopathy.
Taking all this into account, I conclude that we seem to be dealing here with a
- pseudo-prize (given by a commercial firm to further its business) for a piece of
- pseudo-research (the project seems to have been aimed to white-wash homeopathy) into
- pseudo-medicine (a treatment that has been tested extensively but has not been shown to work beyond placebo).
*Wölk, Melanie: Eminenz oder Evidenz: Die Homöopathie auf dem Prüfstand der Evidence based Medicine. Masterarbeit zur Erlangung des akademischen Abschlusses Master of Science im Universitätslehrgang Natural Medicine. Donau-Universität Krems, Department für Gesundheitswissenschaften und Biomedizin. Krems, Mai 2016.
The media have (rightly) paid much attention to the three Lancet-articles on low back pain (LBP) which were published this week. LBP is such a common condition that its prevalence alone renders it an important subject for us all. One of the three papers covers the treatment and prevention of LBP. Specifically, it lists various therapies according to their effectiveness for both acute and persistent LBP. The authors of the article base their judgements mainly on published guidelines from Denmark, UK and the US; as these guidelines differ, they attempt a synthesis of the three.
Several alternative therapist organisations and individuals have consequently jumped on the LBP bandwagon and seem to feel encouraged by the attention given to the Lancet-papers to promote their treatments. Others have claimed that my often critical verdicts of alternative therapies for LBP are out of line with this evidence and asked ‘who should we believe: the international team of experts writing in one of the best medical journals, or Edzard Ernst writing on his blog?’ They are trying to create a division where none exists.
The thing is that I am broadly in agreement with the evidence presented in the Lancet-paper! But I also know that things are a bit more complex.
Below, I have copied the non-pharmacological, non-operative treatments listed in the Lancet-paper together with the authors’ verdicts regarding their effectiveness for both acute and persistent LBP. I find no glaring contradictions with what I regard as the best current evidence and with my posts on the subject. But I feel compelled to point out that the Lancet-paper merely lists the effectiveness of several therapeutic options, and that the value of a treatment is not only determined by its effectiveness. Crucial further elements are a therapy’s cost and its risks, the latter of which also determines the most important criterion: the risk/benefit balance. In my version of the Lancet table, I have therefore added these three variables for non-pharmacological and non-surgical options:
| Therapy | Effectiveness (acute LBP) | Effectiveness (persistent LBP) | Risks | Costs | Risk/benefit balance |
|---|---|---|---|---|---|
| Advice to stay active | +, routine | +, routine | None | Low | Positive |
| Education | +, routine | +, routine | None | Low | Positive |
| Superficial heat | +/- | Ie | Very minor | Low to medium | Positive (aLBP) |
| Exercise | Limited | +/-, routine | Very minor | Low | Positive (pLBP) |
| CBT | Limited | +/-, routine | None | Low to medium | Positive (pLBP) |
| Rehab | Ie | +/- | Minor | Medium to high | Questionable |
- Routine = consider for routine use
- +/- = second line or adjunctive treatment
- Ie = insufficient evidence
- Limited = limited use in selected patients
- vfbmae = very frequent, minor adverse effects
- sae = serious adverse effects, including deaths, are on record
- aLBP = acute low back pain
- pLBP = persistent low back pain
The reason why my stance, as expressed on this blog and elsewhere, is often critical about certain alternative therapies is thus obvious and transparent. For none of them (except for massage) is the risk/benefit balance positive. And for spinal manipulation, it even turns out to be negative. It goes almost without saying that responsible advice must be to avoid treatments for which the benefits do not demonstrably outweigh the risks.
I imagine that chiropractors, osteopaths and acupuncturists will strongly disagree with my interpretation of the evidence (they might even feel that their cash-flow is endangered) – and I am looking forward to the discussions around their objections.
At least this is what the authors of this new study want us to believe.
But are they right?
This RCT is entitled ‘Efficacy and tolerability of a complex homeopathic drug in children suffering from dry cough - A double-blind, placebo-controlled, clinical trial’. It recruited children suffering from acute dry cough to assess the efficacy and tolerability of a complex homeopathic remedy in liquid form (Drosera, Coccus cacti, Cuprum Sulfuricum, Ipecacuanha = Monapax syrup, short: verum).
The authors stated that “preparations of Drosera, Coccus cacti, Cuprum sulfuricum, and Ipecacuanha are well-known antitussives in homeopathic medicine. Each of them is connected with special subtypes of cough. Drosera is intended for inflammations of the respiratory tract, especially for whooping cough. Coccus cacti is intended for inflammations of the nasopharyngeal space and the respiratory tract. Cuprum sulfuricum is intended for spasmodic coughing at night. Ipecacuanha is intended for bronchitis, bronchial asthma, and whooping cough. The complex homeopathic drug explored in this trial consists of all four of these active substances.”
According to the authors of the paper, “the primary objective of the trial was to demonstrate the superiority of verum compared to the placebo”.
A total of 89 children, enrolled in the Ukraine between 15/04/2008 and 26/05/2008 in 9 trial centres, received verum and 91 received placebo daily for 7 days (age groups 0.5–3, 4–7 and 8–12 years). The trial was conducted using an adaptive 3-stage group sequential design with possible sample size adjustments after the two planned interim analyses. The inverse normal method of combining the p-values from all three stages was used for confirmatory hypothesis testing at the interim analyses as well as at the final analysis. The primary efficacy variable was the improvement of the Cough Assessment Score. Tolerability and compliance were also assessed. A confirmatory statistical analysis was performed for the primary efficacy variable and a descriptive analysis for the secondary parameters.
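For readers unfamiliar with the design mentioned here: the inverse normal method converts each stage's one-sided p-value into a z-score, combines the z-scores with pre-specified weights, and converts the result back into an overall p-value. A minimal sketch follows; the stage p-values and equal weights below are hypothetical, not taken from the trial report.

```python
from math import sqrt
from statistics import NormalDist

norm = NormalDist()  # standard normal distribution

def inverse_normal_combination(p_values, weights=None):
    """Combine one-sided stage p-values via the inverse normal method."""
    if weights is None:
        # equal pre-specified weights; their squares must sum to 1
        weights = [1 / sqrt(len(p_values))] * len(p_values)
    z = sum(w * norm.inv_cdf(1 - p) for w, p in zip(weights, p_values))
    return 1 - norm.cdf(z)  # combined one-sided p-value

# Hypothetical stage p-values, NOT taken from the trial report:
combined = inverse_normal_combination([0.04, 0.01, 0.20])
print(f"combined one-sided p = {combined:.4f}")
```

The weights must be fixed in advance; this is what allows sample-size adjustments at the interim analyses without inflating the type I error.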
A total of 180 patients (89 in the verum and 91 in the placebo group) evaluable according to the intention-to-treat principle were included in the trial. The Cough Assessment Score showed an improvement of 5.2 ± 2.6 points for children treated with verum and 3.2 ± 2.6 points in the placebo group (p < 0.0001). The difference of the least square means of the improvements was 1.9 ± 0.4. The effect size of Cohen's d was d = 0.77. In all secondary parameters the patients in the verum group showed higher rates of improvement and remission than those in the placebo group. In 15 patients (verum: n = 6; placebo: n = 9) 18 adverse drug reactions of mild or moderate intensity were observed.
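Assuming the ±2.6 figures are the standard deviations of the change scores, the reported effect size of d = 0.77 can be reproduced from these summary statistics:

```python
import math

# Reported improvements on the Cough Assessment Score (mean, SD, n),
# assuming the ±2.6 values are standard deviations of the change scores.
mean_verum, sd_verum, n_verum = 5.2, 2.6, 89
mean_placebo, sd_placebo, n_placebo = 3.2, 2.6, 91

# Cohen's d: difference in means divided by the pooled SD
pooled_sd = math.sqrt(((n_verum - 1) * sd_verum ** 2 +
                       (n_placebo - 1) * sd_placebo ** 2) /
                      (n_verum + n_placebo - 2))
d = (mean_verum - mean_placebo) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # matches the reported 0.77
```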
The authors concluded that administering verum resulted in a statistically significantly greater improvement of the Cough Assessment Score than the placebo. The tolerability was good and not inferior to that of the placebo.
This study seems fairly rigorous. What is more, it has been published in a mainstream journal of reasonably high standing. So, how can its results be positive? We all know that homeopathy does not work, don’t we?
Are we perhaps mistaken?
Are highly diluted homeopathic remedies effective after all?
I don’t think so.
Let me explain to you a few points that raise my suspicions about this study:
- It was conducted 10 years ago; why did it take that long to get it published?
- I don’t think highly of a study with “the primary objective … to demonstrate the superiority” of the experimental intervention. Scientists use RCTs for testing efficacy; pseudo-scientists use them for demonstrating it, I think.
- The study was conducted in the Ukraine in 9 centres, yet no Ukrainian is an author of the paper, and there is not even an acknowledgement of these primary investigators.
- The ‘adaptive 3-stage group sequential design with possible sample size adjustments’ sounds very odd to me, but I may be wrong; I am not a statistician.
- We learn that 180 patients were evaluated, but not how many were entered into the trial.
- The Cough Assessment Score is not a validated outcome measure.
- Was the verum distinguishable from the placebo? It would be easy to test whether the patients/parents were truly blinded. Yet no such results were included.
- The trial was funded by the manufacturer of the homeopathic remedy.
- The paper has three authors: 1) Hans W. Voß has no conflict of interest to declare; 2) Rainer Brünjes is employed at Cassella-med, the marketing authorisation holder of the study product; 3) Andreas Michalsen has consulted for Cassella-med and participated in advisory boards.
I know, homeopathy fans will think I am nit-picking; and perhaps they are correct. So, let me tell you why I really do strongly reject the notion that this study shows or even suggests that highly diluted homeopathic remedies are more than placebos.
The remedy used in this study is composed of Drosera 0.02 g, Hedera helix Ø 0.04 g, China D1 0.02 g, Coccus cacti D1 0.04 g, Cuprum sulfuricum D4 2.0 g, Ipecacuanha D4 2.0 g, Hyoscyamus D4 2.0 g.
In case you don’t know what ‘Ø’ stands for (I don’t blame you, hardly anyone outside the world of homeopathy does), it signifies a ‘mother tincture’, i. e. an undiluted herbal extract; and ‘D1’ signifies a 1:10 dilution. This means that the remedy may be homeopathic from a regulatory point of view, but for all intents and purposes it is a herbal medicine. It contains unquantified amounts of active compounds, and it is therefore hardly surprising that it might have pharmacological effects. In turn, this means that this trial by no means overturns the fact that highly diluted homeopathic remedies are pure placebos.
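To put these potencies into perspective, a little arithmetic helps. This is a minimal sketch; the helper function is my own, and the doses are taken from the composition listed above:

```python
# D-potencies are serial 1:10 dilutions: 'Dn' means the mother tincture
# diluted by a factor of 10**n. (Helper name is my own, for illustration.)
def grams_of_original(dose_g, d_potency):
    """Mass of starting material contained in dose_g of a Dn dilution."""
    return dose_g * 10.0 ** -d_potency

# 2.0 g of a D4 ingredient (e.g. Ipecacuanha D4 above) still contains
print(grams_of_original(2.0, 4))   # 0.0002 g, i.e. 0.2 mg of active substance
# and a D1 ingredient retains a full tenth of the mother tincture:
print(grams_of_original(0.04, 1))  # ~0.004 g
```

In other words, these are measurable, pharmacologically relevant amounts of material, nothing like the high dilutions homeopathy is famous for.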
It’s a pity, I find, that the authors of the paper fail to explain this simple fact in full detail – might one think that they intentionally aimed at misleading us?
As I often said, I find it regrettable that sceptics often say THERE IS NOT A SINGLE STUDY THAT SHOWS HOMEOPATHY TO BE EFFECTIVE (or something to that extent). This is quite simply not true, and it gives homeopathy-fans an occasion to claim that sceptics are wrong. The truth is that THE TOTALITY OF THE MOST RELIABLE EVIDENCE FAILS TO SUGGEST THAT HIGHLY DILUTED HOMEOPATHIC REMEDIES ARE EFFECTIVE BEYOND PLACEBO. As a message for consumers, this is a little more complex, but I believe that it’s worth being well-informed and truthful.
And that also means admitting that a few apparently rigorous trials of homeopathy exist and some of them show positive results. Today, I want to focus on this small set of studies.
How can a rigorous trial of a highly diluted homeopathic remedy yield a positive result? As far as I can see, there are several possibilities:
- Homeopathy does work after all, and we have not fully understood the laws of physics, chemistry etc. Homeopaths favour this option, of course, but I find it extremely unlikely, and most rational thinkers would discard this possibility outright. It is not that we don’t quite understand homeopathy’s mechanism; the fact is that we understand that there cannot be a mechanism that is in line with the laws of nature.
- The trial in question is the victim of some undetected error.
- The result has come about by chance. Of 100 trials of an ineffective therapy, about 5 will produce a ‘positive’ result at the 5% significance level purely by chance.
- The researchers have cheated.
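The chance explanation is easy to illustrate with a quick simulation. This is a minimal sketch; the trial parameters (100 patients per arm, 30% cure rate) are illustrative assumptions:

```python
import math
import random

random.seed(1)

def inert_trial(n=100, p_cure=0.3):
    """Simulate one two-arm trial of a completely inert remedy:
    both arms share the same true cure rate, so any 'significant'
    difference is pure chance."""
    a = sum(random.random() < p_cure for _ in range(n))  # 'verum' cures
    b = sum(random.random() < p_cure for _ in range(n))  # placebo cures
    p_pool = (a + b) / (2 * n)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    if se == 0:
        return False
    z = (a / n - b / n) / se  # two-proportion z-test
    return abs(z) > 1.96      # 'positive' at the two-sided 5% level

positives = sum(inert_trial() for _ in range(10_000))
print(positives / 10_000)  # roughly 0.05 - about 5% false positives
```

Run enough trials of a placebo against a placebo, and a steady trickle of ‘positive’ results is guaranteed.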
When we critically assess any given trial, we attempt, in a way, to determine which of these four explanations applies. But unfortunately we have to make do with what the authors of the trial tell us. Publications never provide all the details we need for this purpose, and we are often left speculating which of the explanations might apply. Whichever it is, we must assume the result is false-positive.
Naturally, this assumption is hard to accept for homeopaths; they simply conclude that we are biased against homeopathy and that, however rigorous a study of homeopathy is, sceptics will not accept its result if it turns out to be positive.
But there might be a way to settle the argument and get some more objective verdict, I think. We only need to remind ourselves of a crucially important principle in all science: INDEPENDENT REPLICATION. To be convincing, a scientific paper needs to provide evidence that the results are reproducible. In medicine, it unquestionably is wise to accept a new finding only after it has been confirmed by other, independent researchers. Only if we have at least one (better several) independent replications, can we be reasonably sure that the result in question is true and not false-positive due to bias, chance, error or fraud.
And this is, I believe, the extremely odd phenomenon about the ‘positive’ and apparently rigorous studies of homeopathic remedies. Let’s look at the recent meta-analysis of Mathie et al. The authors found several studies that were both positive and fairly rigorous. These trials differ in many respects (e. g. remedies used, conditions treated) but they have, as far as I can see, one important feature in common: THEY HAVE NOT BEEN INDEPENDENTLY REPLICATED.
If that is not astounding, I don’t know what is!
Think of it: faced with a finding that flies in the face of science and would, if true, revolutionise much of medicine, scientists should jump with excitement. Yet, in reality, nobody seems to take the trouble to check whether it is the truth or an error.
To explain this absurdity more fully, let’s take just one of these trials as an example, one related to a common and serious condition: COPD.
The study is by Prof Frass and was published in 2005 – surely long enough ago for plenty of independent replications to emerge. Its results showed that, with potentized (C30) potassium dichromate, the amount of tracheal secretions was reduced, extubation could be performed significantly earlier, and the length of stay was significantly shorter. This is a scientific as well as clinical sensation, if there ever was one!
The RCT was published in one of the leading journals on this subject (Chest) which is read by most specialists in the field, and it was at the time widely reported. Even today, there is hardly an interview with Prof Frass in which he does not boast about this trial with truly sensational results (only last week, I saw one). If Frass is correct, his findings would revolutionise the lives of thousands of seriously suffering patients at the very brink of death. In other words, it is inconceivable that Frass’ result has not been replicated!
But it hasn’t; at least there is nothing in Medline.
Why not? A risk-free, cheap, universally available and easy to administer treatment for such a severe, life-threatening condition would normally be picked up instantly. There should not be one, but dozens of independent replications by now. There should be several RCTs testing Frass’ therapy and at least one systematic review of these studies telling us clearly what is what.
But instead there is a deafening silence.
For heaven’s sake, why?
The only logical explanation is that many centres around the world did try Frass’ therapy. Most likely they found it does not work and soon dismissed it. Others might even have gone to the trouble of conducting a formal study of Frass’ ‘sensational’ therapy and found it to be ineffective. Subsequently they felt too silly to submit it for publication – who would not laugh at them if they said they trialled a remedy that was diluted 1:1000000000000000000000000000000000000000000000000000000000000 and found it to be worthless? Others might have written up their study and submitted it for publication, but got rejected by all reputable journals in the field because the editors felt that comparing one placebo to another placebo is not real science.
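For readers wondering where that absurd-looking number comes from: a C30 potency is thirty successive 1:100 dilution steps, and simple arithmetic shows why such a remedy cannot contain a single molecule of the starting material. A sketch, assuming (generously) a whole mole of starting substance:

```python
# C30 means thirty successive 1:100 dilutions:
dilution_factor = 100 ** 30          # = 10**60
assert dilution_factor == 10 ** 60

# Avogadro's number: molecules in one mole of the starting substance
AVOGADRO = 6.022e23

# Expected number of original molecules surviving a C30 dilution,
# assuming a whole mole of starting material:
molecules_left = AVOGADRO / dilution_factor
print(f"{molecules_left:.1e}")  # ~6e-37: essentially zero
```

The chance of even one molecule of potassium dichromate remaining in a C30 dose is, for all practical purposes, nil.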
And this is roughly how it went with the other ‘positive’ and seemingly rigorous studies of homeopathy as well, I suspect.
Regardless of whether I am correct or not, the fact is that there are no independent replications (if readers know any, please let me know).
Once a sufficiently long period of time has elapsed and no replications of a ‘sensational’ finding have emerged, the finding becomes unbelievable or bogus – no rational thinker can possibly believe such a result (I for one have not yet met an intensive care specialist who believes Frass’ findings, for instance). Subsequently, it is quietly dropped into the waste-basket of science where it no longer obstructs progress.
The absence of independent replications is therefore a most useful mechanism by which science rids itself of falsehoods.
It seems that homeopathy is such a falsehood.
The plethora of dodgy meta-analyses in alternative medicine has been the subject of a recent post – so this one is a mere update of a regular lament.
This new meta-analysis aimed to evaluate the evidence for the effectiveness of acupuncture in the treatment of lumbar disc herniation (LDH). (Call me pedantic, but I prefer meta-analyses that evaluate the evidence FOR AND AGAINST a therapy.) Electronic databases were searched to identify RCTs of acupuncture for LDH, and 30 RCTs involving 3503 participants were included; 29 were published in Chinese and one in English, and all trialists were Chinese.
The results showed that acupuncture had a higher total effective rate than lumbar traction, ibuprofen, diclofenac sodium and meloxicam. Acupuncture was also superior to lumbar traction and diclofenac sodium in terms of pain measured with visual analogue scales (VAS). The total effective rate in 5 trials was greater for acupuncture than for mannitol plus dexamethasone and mecobalamin, ibuprofen plus fugui gutong capsule, loxoprofen, mannitol plus dexamethasone and huoxue zhitong decoction, respectively. Two trials showed a superior effect of acupuncture in VAS scores compared with ibuprofen or mannitol plus dexamethasone, respectively.
The authors from the College of Traditional Chinese Medicine, Jinan University, Guangzhou, Guangdong, China, concluded that acupuncture showed a more favourable effect in the treatment of LDH than lumbar traction, ibuprofen, diclofenac sodium, meloxicam, mannitol plus dexamethasone and mecobalamin, fugui gutong capsule plus ibuprofen, mannitol plus dexamethasone, loxoprofen and huoxue zhitong decoction. However, further rigorously designed, large-scale RCTs are needed to confirm these findings.
Why do I call this meta-analysis ‘dodgy’? I have several reasons, 10 to be exact:
- There is no plausible mechanism by which acupuncture might cure LDH.
- The types of acupuncture used in these trials were far from uniform and included manual acupuncture (MA) in 13 studies, electro-acupuncture (EA) in 10 studies, and warm needle acupuncture (WNA) in 7 studies. Arguably, these are different interventions that cannot be lumped together.
- The trials were mostly of very poor quality, as depicted in the table above. For instance, 18 studies failed to mention the methods used for randomisation. I have previously shown that some Chinese studies use the terms ‘randomisation’ and ‘RCT’ even in the absence of a control group.
- None of the trials made any attempt to control for placebo effects.
- None of the trials were conducted against sham acupuncture.
- Only 10 trials reported dropouts or withdrawals.
- Only two trials reported adverse reactions.
- None of these shortcomings were critically discussed in the paper.
- Despite their affiliation, the authors state that they have no conflicts of interest.
- All trials were conducted in China, and, on this blog, we have discussed repeatedly that acupuncture trials from China never report negative results.
And why do I find the journal ‘dodgy’?
Because any journal that publishes such a paper is likely to be sub-standard. In the case of ‘Acupuncture in Medicine’, the official journal of the British Medical Acupuncture Society, I see such appalling articles published far too frequently to believe that the present paper is just a regrettable, one-off mistake. What makes this issue particularly embarrassing is, of course, the fact that the journal belongs to the BMJ group.
… but we never really thought that science publishing was about anything other than money, did we?
What an odd title, you might think.
Systematic reviews are the most reliable evidence we presently have!
Yes, this is my often-voiced and honestly-held opinion but, like any other type of research, systematic reviews can be badly abused; and when this happens, they can seriously mislead us.
A new paper by someone who knows more about these issues than most of us, John Ioannidis from Stanford University, should make us think. It aimed at exploring the growth of published systematic reviews and meta-analyses and at estimating how often they are redundant, misleading, or serving conflicted interests. Ioannidis demonstrated that publication of systematic reviews and meta-analyses has increased rapidly. In the period January 1, 1986, to December 4, 2015, PubMed tags 266,782 items as “systematic reviews” and 58,611 as “meta-analyses.” Annual publications between 1991 and 2014 increased 2,728% for systematic reviews and 2,635% for meta-analyses versus only 153% for all PubMed-indexed items. Ioannidis believes that probably more systematic reviews of trials than new randomized trials are published annually. Most topics addressed by meta-analyses of randomized trials have overlapping, redundant meta-analyses; sometimes more than 20 meta-analyses exist on the same topic.
Some fields produce massive numbers of meta-analyses; for example, 185 meta-analyses of antidepressants for depression were published between 2007 and 2014. These meta-analyses are often produced either by industry employees or by authors with industry ties, and their results are aligned with sponsor interests. China has rapidly become the most prolific producer of English-language, PubMed-indexed meta-analyses. The most massive presence of Chinese meta-analyses is on genetic associations (63% of global production in 2014), where almost all results are misleading since they combine fragmented information from the mostly abandoned era of candidate-gene studies. Furthermore, many contracting companies working on evidence synthesis receive industry contracts to produce meta-analyses, many of which probably remain unpublished. Many other meta-analyses have serious flaws. Of the remainder, most have weak or insufficient evidence to inform decision making. Few systematic reviews and meta-analyses are both non-misleading and useful.
The author concluded that the production of systematic reviews and meta‐analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta‐analyses are unnecessary, misleading, and/or conflicted.
Ioannidis makes the following ‘Policy Points’:
- Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta‐analyses. Instead of promoting evidence‐based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools.
- Suboptimal systematic reviews and meta‐analyses can be harmful given the major prestige and influence these types of studies have acquired.
- The publication of systematic reviews and meta‐analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.
Obviously, Ioannidis did not have alternative medicine in mind when he researched and published this article. But he easily could have! Virtually everything he stated in his paper does apply to it. In some areas of alternative medicine, things are even worse than Ioannidis describes.
Take TCM, for instance. I have previously looked at some of the many systematic reviews of TCM that currently flood Medline, based on Chinese studies. This is what I concluded at the time:
Why does that sort of thing frustrate me so much? Because it is utterly meaningless and potentially harmful:
- I don’t know what treatments the authors are talking about.
- Even if I managed to dig deeper, I cannot get the information because practically all the primary studies are published in obscure journals in Chinese language.
- Even if I did read Chinese, I do not feel motivated to assess the primary studies because we know they are all of very poor quality – too flimsy to bother.
- Even if they were formally of good quality, I would have my doubts about their reliability; remember: 100% of these trials report positive findings!
- Most crucially, I am frustrated because conclusions of this nature are deeply misleading and potentially harmful. They give the impression that there might be ‘something in it’, and that it (whatever ‘it’ might be) could be well worth trying. This may give false hope to patients and can send the rest of us on a wild goose chase.
So, to ease the task of future authors of such papers, I decided to give them a text for a proper EVIDENCE-BASED conclusion which they can adapt to fit every review. This will save them time and, more importantly perhaps, it will save everyone who might be tempted to read such futile articles the effort of studying them in detail. Here is my suggestion for a conclusion soundly based on the evidence, no matter what TCM subject the review is about:
OUR SYSTEMATIC REVIEW HAS SHOWN THAT THERAPY ‘X’ AS A TREATMENT OF CONDITION ‘Y’ IS CURRENTLY NOT SUPPORTED BY SOUND EVIDENCE.
On another occasion, I stated that I am getting very tired of conclusions stating ‘…XY MAY BE EFFECTIVE/HELPFUL/USEFUL/WORTH A TRY…’ It is obvious that the therapy in question MAY be effective, otherwise one would surely not conduct a systematic review. If a review fails to produce good evidence, it is the authors’ ethical, moral and scientific obligation to state this clearly. If they don’t, they simply misuse science for promotion and mislead the public. Strictly speaking, this amounts to scientific misconduct.
In yet another post on the subject of systematic reviews, I wrote that if you have rubbish trials, you can produce a rubbish review and publish it in a rubbish journal (perhaps I should have added ‘rubbish researchers’).
And finally this post about a systematic review of acupuncture: it is almost needless to mention that the findings (presented in a host of hardly understandable tables) suggest that acupuncture is of proven or possible effectiveness/efficacy for a very wide array of conditions. It also goes without saying that there is no critical discussion, for instance, of the fact that most of the included evidence originated from China, and that it has been shown over and over again that Chinese acupuncture research never seems to produce negative results.
The main point surely is that the problem of shoddy systematic reviews applies to a depressingly large degree to all areas of alternative medicine, and this is misleading us all.
So, what can be done about it?
My preferred (but sadly unrealistic) solution would be this:
STOP ENTHUSIASTIC AMATEURS FROM PRETENDING TO BE RESEARCHERS!
Research is not fundamentally different from other professional activities; to do it well, one needs adequate training; and doing it badly can cause untold damage.