A new study of homeopathic arnica suggests efficacy. How come?
Subjects scheduled for rhinoplasty surgery with nasal bone osteotomies by a single surgeon were prospectively randomized to receive either oral perioperative arnica or placebo in a double-blinded fashion. A commercially available preparation containing 12 capsules was used: one 500 mg capsule of arnica 1M was given preoperatively on the morning of surgery and two more later that day after surgery. Thereafter, arnica was administered in the 12C potency three times daily for the next 3 days (“C” indicates a 100-fold serial dilution; and “M”, a 1000-fold dilution).
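To put these potencies into perspective, a back-of-the-envelope calculation helps. The sketch below assumes the usual homeopathic convention that one ‘C’ step is a 1:100 dilution and that ‘1M’ denotes 1,000 such centesimal steps; it also shows the alternative literal reading of ‘M’ as a single 1:1000 dilution, under which molecules of the starting substance would indeed remain.

```python
import math

# Expected molecules remaining from one mole of starting substance,
# worked in log10 to avoid overflow at extreme dilutions.
AVOGADRO_LOG10 = math.log10(6.022e23)  # ~23.78

def molecules_left_log10(log10_dilution: float) -> float:
    """log10 of molecules left after a total dilution of 10**log10_dilution."""
    return AVOGADRO_LOG10 - log10_dilution

print(molecules_left_log10(2 * 12))    # 12C: ~ -0.2, i.e. less than one molecule
print(molecules_left_log10(2 * 1000))  # 1M read as 1000C: ~ -1976, nothing left
print(molecules_left_log10(3))         # 'M' read as one 1:1000 step: ~ 20.8
```

On the 1000C reading, neither remedy can contain a single molecule of the original substance; only the literal ‘1000-fold’ reading leaves any active material.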
Ecchymosis was measured in digital “three-quarter”-view photographs at three postoperative time points. Each bruise was outlined with Adobe Photoshop and the extent was scaled to a standardized reference card. Cyan, magenta, yellow, black, and luminosity were analyzed in the bruised and control areas to calculate change in intensity.
Compared with 13 subjects receiving placebo, 9 taking arnica had 16.2%, 32.9%, and 20.4% less extent of ecchymosis on postoperative days 2/3, 7, and 9/10 respectively, with the difference being statistically significant on day 7. Colour change initially showed a 13.1% increase in intensity with arnica, but 10.9% and 36.3% decreases on days 7 and 9/10, with the difference being statistically significant on day 9/10. One subject experienced mild itching and rash with the study drug that resolved during the study period.
The authors concluded that Arnica montana seems to accelerate postoperative healing, with quicker resolution of the extent and the intensity of ecchymosis after osteotomies in rhinoplasty surgery, which may dramatically affect patient satisfaction.
Why are the results positive? Previous systematic reviews confirm that homeopathic arnica is a pure placebo. At first, I thought the answer might lie in the 1M potency: it could well still contain active molecules. But then I realised that the answer is much simpler: if we apply the conventional level of statistical significance, there are no statistically significant differences to placebo at all! I had not noticed the little sentence by the authors: a P value of 0.1 was set as a meaningful difference with statistical significance. In fact, none of the effects called significant by the authors passes the conventionally used probability level of 5%.
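The point about the significance threshold is worth making concrete. In the sketch below the p-values are hypothetical placeholders (the paper’s exact values are not quoted above); what matters is that a result can clear a lenient α = 0.1 while failing the conventional α = 0.05.

```python
# Hypothetical p-values for illustration only - not the study's actual figures.
results = {"extent, day 7": 0.06, "colour intensity, day 9/10": 0.08}

def significant(p: float, alpha: float) -> bool:
    """True if p falls below the chosen significance threshold."""
    return p < alpha

for outcome, p in results.items():
    lenient = significant(p, 0.10)       # the authors' chosen threshold
    conventional = significant(p, 0.05)  # the usual threshold
    print(f"{outcome}: p={p} -> alpha 0.1: {lenient}, alpha 0.05: {conventional}")
```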
So, what do the results of this new study truly mean? In my view, they show what was known all along: HOMEOPATHIC REMEDIES ARE PLACEBOS.
A recent comment to a post of mine (by a well-known and experienced German alt med researcher) made the following bold statement aimed directly at me and at my apparent lack of understanding research methodology:
C´mon , as researcher you should know the difference between efficacy and effectiveness. This is pharmacological basic knowledge. Specific (efficacy) + nonspecific effects = effectiveness. And, in fact, everything can be effective – because of non-specific or placebo-like effects. That does not mean that efficacy is existent.
The point he wanted to make is that outcome studies – studies without a control group in which researchers simply observe the outcomes of a particular treatment in a ‘real life’ situation – suffice to demonstrate the effectiveness of therapeutic interventions. This belief is very widespread in alternative medicine and tends to mislead all concerned. It is therefore worth re-visiting this issue here in an attempt to create some clarity.
When a patient’s condition improves after receiving a therapy, it is very tempting to feel that this improvement reflects the effectiveness of the intervention (as the researcher mentioned above obviously does). Tempting but wrong: there are many other factors involved as well, for instance:
- the placebo effect (mainly based on conditioning and expectation),
- the therapeutic relationship with the clinician (empathy, compassion etc.),
- the regression towards the mean (outliers tend to return to the mean value),
- the natural history of the patient’s condition (most conditions get better even without treatment),
- social desirability (patients tend to say they are better to please their friendly clinician),
- concomitant treatments (patients often use treatments other than the prescribed one without telling their clinician).
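One of these factors, regression towards the mean, is the one people most often struggle with, so here is a minimal simulation (with made-up numbers) of how a group selected for bad scores improves at follow-up without any treatment at all:

```python
# A minimal simulation of regression towards the mean: patients are
# "enrolled" when a noisy symptom score happens to be high; at follow-up,
# with no treatment whatsoever, the group average drifts back down.
import random

random.seed(42)

def symptom_score(true_severity: float) -> float:
    # observed score = stable true severity + day-to-day noise
    return true_severity + random.gauss(0, 2.0)

patients = [5.0] * 10_000  # everyone has the same true severity
baseline = [symptom_score(s) for s in patients]

# enrol only those who happened to score badly (>= 8) at recruitment
enrolled = [i for i, b in enumerate(baseline) if b >= 8.0]
followup = [symptom_score(patients[i]) for i in enrolled]

mean_baseline = sum(baseline[i] for i in enrolled) / len(enrolled)
mean_followup = sum(followup) / len(followup)
print(f"baseline of enrolled:     {mean_baseline:.2f}")  # well above 8
print(f"follow-up (no treatment): {mean_followup:.2f}")  # back near 5
```

Any uncontrolled outcome study that recruits patients when their symptoms are at their worst builds this spurious ‘improvement’ into its results.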
So, how does this fit into the statement above ‘Specific (efficacy) + nonspecific effects = effectiveness’? Even if this formula were correct, it would not mean that outcome studies of the nature described demonstrate the effectiveness of a therapy. It all depends, of course, on what we call ‘non-specific’ effects. We all agree that placebo effects belong to this category. Most experts would probably also include the therapeutic relationship and the regression towards the mean under this umbrella. But the last three points from my list are clearly not non-specific effects of the therapy; they are therapy-independent determinants of the clinical outcome.
The most important factor here is usually the natural history of the disease. Some people find it hard to imagine what this term actually means. Here is a little joke which, I hope, will make its meaning clear and memorable.
CONVERSATION BETWEEN TWO HOSPITAL DOCTORS:
Doc A: The patient from room 12 is much better today.
Doc B: Yes, we started his treatment just in time; a day later and he would have been cured without it!
I am sure that most of my readers now understand (and never forget) that clinical improvement cannot be equated with the effectiveness of the treatment administered (they might thus be immune to the misleading messages they are constantly exposed to). Yet, I am not at all sure that all ‘alternativists’ have got it.
In my last post, I claimed that researchers of alternative medicine tend to be less than rigorous. I did not link this statement to any evidence at all. Perhaps I should have at least provided an example!? As it happens, I just came across a brand new paper which nicely demonstrates what I meant.
According to its authors, this non-interventional study was performed to generate data on safety and treatment effects of a complex homeopathic drug. They treated 1050 outpatients suffering from common cold with a commercially available homeopathic remedy for 8 days. The study was conducted in 64 German outpatient practices of medical doctors trained in CAM. Tolerability, compliance and the treatment effects were assessed by the physicians and by patient diaries. Adverse events were collected and assessed with specific attention to homeopathic aggravation and proving symptoms. Each adverse effect was additionally evaluated by an advisory board of experts.
The physicians detected 60 adverse events from 46 patients (4.4%). Adverse drug reactions occurred in 14 patients (1.3%). Six patients showed proving symptoms (0.57%) and only one homeopathic aggravation (0.1%) appeared. The rate of compliance was 84% for all groups. The global assessment of the treatment effects resulted in the verdict “good” and “very good” in 84.9% of all patients.
The authors concluded that the homeopathic complex drug was shown to be safe and effective for children and adults alike. Adverse reactions specifically related to homeopathic principles are very rare. All observed events recovered quickly and were of mild to moderate intensity.
So why do I think this is ‘positively barmy’?
The study had no control group. This means that there is no way anyone can attribute the observed ‘treatment effects’ to the homeopathic remedy. There are many other phenomena that may have caused or contributed to it, e. g.:
- a placebo effect
- the natural history of the condition
- regression to the mean
- other treatments which the patients took but did not declare
- the empathic encounter with the physician
- social desirability
To plan a study with the aim as stated above and to draw the conclusion as cited above is naïve and unprofessional (to say the least) on the part of the researchers (I often wonder where, in such cases, the boundary between incompetence and research misconduct might lie). To pass such a paper through the peer review process is negligent on the part of the reviewers. To publish the article is irresponsible on the part of the editor.
In a nut-shell: COLLECTIVELY, THIS IS ‘POSITIVELY BARMY’!!!
Distant healing is one of the most bizarre yet popular forms of alternative medicine. Healers claim they can transmit ‘healing energy’ towards patients to enable them to heal themselves. There have been many trials testing the effectiveness of the method, and the general consensus amongst critical thinkers is that all variations of ‘energy healing’ rely entirely on a placebo response. A recent and widely publicised paper seems to challenge this view.
This article has, according to its authors, two aims. Firstly it reviews healing studies that involved biological systems other than ‘whole’ humans (e.g., studies of plants or cell cultures) that were less susceptible to placebo-like effects. Secondly, it presents a systematic review of clinical trials on human patients receiving distant healing.
All the included studies examined the effects upon a biological system of the explicit intention to improve the wellbeing of that target; 49 non-whole human studies and 57 whole human studies were included.
The combined weighted effect size for non-whole human studies yielded a highly significant (r = 0.258) result in favour of distant healing. However, outcomes were heterogeneous and correlated with blind ratings of study quality; 22 studies that met minimum quality thresholds gave a reduced but still significant weighted r of 0.115.
Whole human studies yielded a small but significant effect size of r = 0.203. Outcomes were again heterogeneous, and correlated with methodological quality ratings; 27 studies that met threshold quality levels gave an r = 0.224.
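For readers unfamiliar with how such combined weighted effect sizes are produced, the standard fixed-effect approach transforms each study’s r with Fisher’s z, weights by n − 3, and back-transforms; the sketch below uses invented study values, not the review’s data.

```python
# Fixed-effect pooling of correlation-type effect sizes via Fisher's z.
# The (r, n) pairs below are hypothetical, purely to show the mechanics.
import math

def pool_correlations(studies):
    """studies: list of (r, n) pairs. Returns the pooled r."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)  # weighted z's
    den = sum(n - 3 for _, n in studies)                    # total weight
    return math.tanh(num / den)                             # back-transform

example = [(0.30, 40), (0.10, 100), (0.25, 60)]  # invented studies
print(round(pool_correlations(example), 3))
```

Note that this mechanical pooling says nothing about study quality or heterogeneity, which is precisely where the trouble with such reviews tends to lie.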
From these findings, the authors drew the following conclusions: Results suggest that subjects in the active condition exhibit a significant improvement in wellbeing relative to control subjects under circumstances that do not seem to be susceptible to placebo and expectancy effects. Findings with the whole human database suggests that the effect is not dependent upon the previous inclusion of suspect studies and is robust enough to accommodate some high profile failures to replicate. Both databases show problems with heterogeneity and with study quality and recommendations are made for necessary standards for future replication attempts.
In a press release, the authors warned: the data need to be treated with some caution in view of the poor quality of many studies and the negative publishing bias; however, our results do show a significant effect of healing intention on both human and non-human living systems (where expectation and placebo effects cannot be the cause), indicating that healing intention can be of value.
My thoughts on this article are not very complimentary, I am afraid. The problems are, it seems to me, too numerous to discuss in detail:
- The article is written such that it is exceedingly difficult to make sense of it.
- It was published in a journal which is not exactly known for its cutting edge science; this may seem a petty point but I think it is nevertheless important: if distant healing works, we are confronted with a revolution in the understanding of nature – and surely such a finding should not be buried in a journal that hardly anyone reads.
- The authors seem embarrassingly inexperienced in conducting and publishing systematic reviews.
- There is very little (self-) critical input in the write-up.
- A critical attitude is necessary, as the primary studies tend to be conducted by evangelical believers in, and amateur enthusiasts of, healing.
- The article has no data table where the reader might learn the details about the primary studies included in the review.
- It also has no table to inform us in sufficient detail about the quality assessment of the included trials.
- It seems to me that some published studies of distant healing are missing.
- The authors ignored all studies that were not published in English.
- The method section lacks detail, and it would therefore be impossible to conduct an independent replication.
- Even if one ignored all the above problems, the effect sizes are small and would not be clinically important.
- The research was sponsored by the ‘Confederation of Healing Organisations’ and some of the comments look as though the sponsor had a strong influence on the phraseology of the article.
Given these reservations, my conclusion from an analysis of the primary studies of distant healing would be dramatically different from the one published by the authors: DESPITE A SIZABLE AMOUNT OF PRIMARY STUDIES ON THE SUBJECT, THE EFFECTIVENESS OF DISTANT HEALING REMAINS UNPROVEN. AS THIS THERAPY IS DEVOID OF ANY BIOLOGICAL PLAUSIBILITY, FURTHER RESEARCH IN THIS AREA SEEMS NOT WARRANTED.
Twenty years ago, I published a short article in the British Journal of Rheumatology. Its title was ALTERNATIVE MEDICINE, THE BABY AND THE BATH WATER. Reading it again today – especially in the light of the recent debate (with over 700 comments) on acupuncture – indicates to me that very little has since changed in the discussions about alternative medicine (AM). Does that mean we are going around in circles? Here is the (slightly abbreviated) article from 1995 for you to judge for yourself:
“Proponents of alternative medicine (AM) criticize attempts to conduct RCTs because they view this as analogous to ‘throwing out the baby with the bath water’. The argument usually goes as follows: the growing popularity of AM shows that individuals like it and, in some way, they benefit through using it. Therefore it is best to let them have it regardless of its objective effectiveness. Attempts to prove or disprove effectiveness may even be counterproductive. Should RCTs prove that a given intervention is not superior to a placebo, one might stop using it. This, in turn, would be to the disadvantage of the patient who, previous to rigorous research, has unquestionably been helped by the very remedy. Similar criticism merely states that AM is ‘so different, so subjective, so sensitive that it cannot be investigated in the same way as mainstream medicine’. Others see reasons to change the scientific (‘reductionist’) research paradigm into a broad ‘philosophical’ approach. Yet others reject the RCTs because they think that ‘this method assumes that every person has the same problems and there are similar causative factors’.
The example of acupuncture as a (popular) treatment for osteoarthritis, demonstrates the validity of such arguments and counter-arguments. A search of the world literature identified only two RCTs on the subject. When acupuncture was tested against no treatment, the experimental group of osteoarthritis sufferers reported a 23% decrease of pain, while the controls suffered a 12% increase. On the basis of this result, it might seem highly unethical to withhold acupuncture from pain-stricken patients—’if a patient feels better for whatever reason and there are no toxic side effects, then the patient should have the right to get help’.
But what about the placebo effect? It is notoriously difficult to find a placebo indistinguishable from acupuncture which would allow patient-blinded studies. Needling non-acupuncture points may be as close as one can get to an acceptable placebo. When patients with osteoarthritis were randomized to receive either real acupuncture or this type of sham acupuncture, both sub-groups showed the same pain relief.
These findings (similar results have been published for other AMs) are compatible with only two explanations. Firstly, acupuncture might be a powerful placebo. If this were true, we need to establish how safe acupuncture is (clearly it is not without potential harm); if the risk/benefit ratio is favourable and no specific, effective form of therapy exists one might still consider employing this form as a ‘placebo therapy’ for easing the pain of osteoarthritis sufferers. One would also feel motivated to research this powerful placebo and identify its characteristics or modalities with the aim of using the knowledge thus generated to help future patients.
Secondly, it could be the needling, regardless of acupuncture points and philosophy, that decreases pain. If this were true, we could henceforward use needling for pain relief—no special training in or equipment for acupuncture would be required, and costs would therefore be markedly reduced. In addition, this knowledge would lead us to further our understanding of basic mechanisms of pain reduction which, one day, might evolve into more effective analgesia. In any case the published research data, confusing as they often are, do not call for a change of paradigm; they only require more RCTs to solve the unanswered problems.
Conducting rigorous research is therefore by no means likely to ‘throw out the baby with the bath water’. The concept that such research could harm the patient is wrong and anti-scientific. To follow its implications would mean neglecting the ‘baby in the bath water’ until it suffers serious damage. To conduct proper research means attending the ‘baby’ and making sure that it is safe and well.
Iyengar Yoga, named after and developed by B. K. S. Iyengar, is a form of Hatha Yoga that has an emphasis on detail, precision and alignment in the performance of posture (asana) and breath control (pranayama). The development of strength, mobility and stability is gained through the asanas.
B.K.S. Iyengar has systematised over 200 classical yoga poses and 14 different types of Pranayama (with variations of many of them) ranging from the basic to advanced. This helps ensure that students progress gradually by moving from simple poses to more complex ones and develop their mind, body and spirit step by step.
Iyengar Yoga often makes use of props, such as belts, blocks, and blankets, as aids in performing asanas (postures). The props enable students to perform the asanas correctly, minimising the risk of injury or strain, and making the postures accessible to both young and old.
Sounds interesting? But does it work?
The objective of this recent systematic review was to evaluate the existing research on Iyengar yoga for relieving back and neck pain. The authors conducted extensive literature searches and found 6 RCTs that met the inclusion criteria.
In all 6 studies, the difference between the groups on the post-intervention assessment of pain intensity or functional disability favoured the yoga group, indicating a decrease in back and neck pain.
The authors concluded that Iyengar yoga is an effective means for both back and neck pain in comparison to control groups. This systematic review found strong evidence for short-term effectiveness, but little evidence for long-term effectiveness of yoga for chronic spine pain in the patient-centered outcomes.
So, if we can trust this evidence (I would not call the evidence ‘strong’), we have yet another treatment that might be effective for back and neck pain. The trouble, I fear, is not that we have too few such treatments; the trouble seems to be that we have too many of them. They all seem similarly effective, and I cannot help but wonder whether, in fact, they are all similarly ineffective.
Regardless of the answer to this troubling question, I feel the need to re-state what I have written many times before: FOR A CONDITION WITH A MULTITUDE OF ALLEGEDLY EFFECTIVE THERAPIES, IT MIGHT BE BEST TO CHOOSE THE ONE THAT IS SAFEST AND CHEAPEST.
A recent article in the BMJ about my new book seems to have upset fellow researchers of alternative medicine. I am told that the offending passage is the following:
“Too much research on complementary therapies is done by people who have already made up their minds,” the first UK professor of complementary medicine has said. Edzard Ernst, who left his chair at Exeter University early after clashing with the Prince of Wales, told journalists at the Science Media Centre in London that, although more research into alternative medicines was now taking place, “none of the centres is anywhere near critical enough.”
Following this publication, I received indignant inquiries from colleagues asking whether I meant to say that their work lacks critical thinking. As this is a valid question, I will try to answer it the best I presently can.
Any critical evaluation of alternative medicine has to yield its fair share of negative conclusions about the value of alternative medicine. If it fails to do that, one would need to assume that most or all alternative therapies generate more good than harm – and very few experts (who are not proponents of alternative medicine) would assume that this can possibly be the case.
Put differently, this means that a researcher or a research group that does not generate its fair share of negative conclusions is open to the suspicion of lacking a critical attitude. In a previous post, I have addressed this issue in more detail by creating an ‘index’: THE TRUSTWORTHINESS INDEX. I have also provided a concrete example of a researcher who seems to be associated with a remarkably high index (the higher the index, the stronger the suspicion that a critical attitude is lacking).
Instead of unnecessarily upsetting my fellow researchers of alternative medicine any further, I will just issue this challenge: if any research group can demonstrate to have an index below 0.5 (which would mean the team has published twice as many negative conclusions as positive ones), I will gladly and publicly retract my suspicion that this group is “anywhere near critical enough”.
Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?
Here is a brand new one which might stand for dozens of others.
In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA), and prospectively followed them over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).
The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out; one further patient did not achieve an improvement of 80% and was therefore also counted as a treatment failure. The cost of homeopathic treatment was 41% of the projected equivalent conventional treatment.
Good news then for enthusiasts of homeopathy? 91% improvement!
Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… until I read the authors’ conclusions that is:
Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.
Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:
- How on earth can we take this and so many other articles on homeopathy seriously?
- When does this sort of article cross the line between wishful thinking and scientific misconduct?
I would have never thought that someone would be able to identify the author of the text I quoted in the previous post:
It is known that not just novel therapies but also traditional ones, such as homeopathy, suffer opposition and rejection by some doctors without having ever been subjected to serious tests. The doctor is in charge of medical treatment; he is thus responsible foremost for making sure all knowledge and all methods are employed for the benefit of public health…I ask the medical profession to consider even previously excluded therapies with an open mind. It is necessary that an unbiased evaluation takes place, not just of the theories but also of the clinical effectiveness of alternative medicine.
More often than once has science, when it relied on theory alone, arrived at verdicts which later had to be overturned – frequently this occurred only after long periods of time, after progress had been hindered and most acclaimed pioneers had suffered serious injustice. I do not need to remind you of the doctor who, more than 100 years ago, in fighting puerperal fever, discovered sepsis and asepsis but was laughed at and ousted by his colleagues throughout his lifetime. Yet nobody would today deny that this knowledge is most relevant to medicine and that it belongs to the basis of medicine. Insightful doctors, some of whom famous, have, during the recent years, spoken openly about the crisis in medicine and the dead end that health care has maneuvered itself into. It seems obvious that the solution is going in directions which embrace nature. Hardly any other form of science is so tightly bound to nature as is the science occupied with healing living creatures. The demand for holism is getting stronger and stronger, a general demand which has already been fruitful on the political level. For medicine, the challenge is to treat more than previously by influencing the whole organism when we aim to heal a diseased organ.
It is from the opening speech by Rudolf Hess on the occasion of the WORLD CONFERENCE ON HOMEOPATHY 1937, in Berlin. Hess, at the time Hitler’s deputy, was not the only Nazi leader present. I knew of the opening speech because, a few years ago, DER SPIEGEL published a theme issue on homeopathy, and they printed a photo of the opening ceremony of this meeting. It shows many men in SS uniform and, in the first row of the auditorium, we see Hess (as well as Himmler) ready to spring into action.
Hess in particular was besotted with alternative medicine, which the Nazis elected to call NEUE DEUTSCHE HEILKUNDE. Somewhat to the dismay of today’s alternative medicine enthusiasts, I have repeatedly published on this aspect of alternative medicine’s past, and it also is an important part of my new book A SCIENTIST IN WONDERLAND which the lucky winner (my congratulations!) of my little competition to identify the author has won. The abstract of a 2001 article explains this history succinctly:
The aim of this article is to discuss complementary/alternative medicine (CAM) in the Third Reich. Based on a general movement towards all things natural, a powerful trend towards natural ways of healing had developed in the 19th century. By 1930 this had led to a situation where roughly as many lay practitioners of CAM existed in Germany as doctors. To re-unify German medicine under the banner of ‘Neue Deutsche Heilkunde’, the Nazi officials created the ‘Heilpraktiker’ – a profession which was meant to become extinct within one generation. The ‘flag ship’ of the ‘Neue Deutsche Heilkunde’ was the ‘Rudolf Hess Krankenhaus’ in Dresden. It represented a full integration of CAM and orthodox medicine. An example of systematic research into CAM is the Nazi government’s project to validate homoeopathy. Even though the data are now lost, the results of this research seem to have been negative. Even though there are some striking similarities between today’s CAM and yesterday’s ‘Neue Deutsche Heilkunde’ there are important differences. Most importantly, perhaps, today’s CAM is concerned with the welfare of the individual, whereas the ‘Neue Deutsche Heilkunde’ was aimed at ensuring the dominance of the Aryan race.
One fascinating aspect of this past is the fact that the NEUE DEUTSCHE HEILKUNDE was de facto the invention of what we today call ‘integrated medicine’. Then it was more like a ‘shot-gun marriage’, while today it seems to be driven more by political correctness and sloppy thinking. It did not work 70 years ago for the same reason that it will fail today: the integration of bogus (non-evidence based) treatments into conventional medicine must inevitably render health care not better but worse!
One does not need to be a rocket scientist to understand that, and Hess as well as other proponents of alternative medicine of his time had certainly got the idea. So they initiated the largest ever series of scientific tests of homeopathy. This research programme was not just left to the homeopaths, who never had a reputation of being either rigorous or unbiased; some of the best scientists of the era were recruited for it. The results vanished in the hands of the homeopaths during the turmoil of the war. But one eye-witness report by a homeopath, Fritz Donner, makes it very clear: as it turned out, there was not a jot of evidence in favour of homeopathy.
And this, I think, is the other fascinating aspect of the story: homeopaths did not give up their fight to popularise homeopathy. On the contrary, they re-doubled their efforts to fool us all and to convince us with dodgy results (see recent posts on this blog) that homeopathy somehow does defy the laws of nature and is, in effect, very effective for all sorts of diseases.
My readers suggested all sorts of potential authors for the Hess speech; and they are right! It could have been written by any proponent of alternative medicine. This fact is amusing and depressing at the same time. Amusing because it discloses the lack of new ideas and arguments (even the same fallacies are being used). Depressing because it suggests that progress in alternative medicine is almost totally absent.
As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.
To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):
A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.
The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.
Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
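For orientation, a pooled odds ratio of this kind is conventionally computed by inverse-variance weighting of the log odds ratios. The 2×2 tables in the sketch below are invented for illustration and have nothing to do with the trials in the review.

```python
# Fixed-effect inverse-variance pooling of odds ratios on the log scale -
# the conventional machinery behind figures like "OR = 1.53 (95% CI 1.22
# to 1.91)". All counts below are hypothetical.
import math

def log_or_and_variance(a, b, c, d):
    """a/b: events/non-events on treatment; c/d: events/non-events on placebo."""
    log_or = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d  # Woolf's variance estimate
    return log_or, var

def pool(tables):
    weights, weighted = [], []
    for t in tables:
        log_or, var = log_or_and_variance(*t)
        weights.append(1 / var)
        weighted.append(log_or / var)
    pooled = sum(weighted) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci

trials = [(12, 18, 8, 22), (20, 30, 15, 35), (9, 11, 7, 13)]  # invented
or_, (lo, hi) = pool(trials)
print(f"pooled OR {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

As with any such pooling, the number means little until one asks which trials were allowed in and how their risk of bias was judged, which is exactly the issue discussed next.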
Since my team had published an RCT of individualised homeopathy, it seems only natural that my interest focussed on why our study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.
I was convinced that this trial had been rigorous and was thus puzzled why the reviewers, despite awarding it ‘full marks’, had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.
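The technical point in Mathie's reply is worth unpacking. To enter a continuous outcome into a meta-analysis of this kind, one needs a mean and SD for each group at the end-point; from these, a standardised mean difference can be computed and then converted into an odds ratio (the ‘arithmetic transformation’ the abstract mentions; the usual conversion is log(OR) = d·π/√3, after Chinn 2000). A minimal sketch, with purely hypothetical numbers, not data from our trial:

```python
import math

def smd_to_log_or(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d from end-point means/SDs per group, then the standard
    logistic conversion log(OR) = d * pi / sqrt(3) (Chinn, 2000).

    Note: a change-from-baseline figure alone, without group means and
    SDs, cannot supply these inputs -- which was Mathie's stated reason
    for being unable to extract our trial's data."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    return d * math.pi / math.sqrt(3)

# Hypothetical end-point scores on a 0-7 scale (illustration only)
log_or = smd_to_log_or(mean_t=5.1, sd_t=0.8, n_t=48,
                       mean_c=5.0, sd_c=0.9, n_c=48)
or_ = math.exp(log_or)
```

The particular numbers do not matter; the point is the required inputs. Without an end-point mean and SD per group, or data from which they can be estimated, the formula simply cannot be applied.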
It took a while and several further emails until I understood: our study had reported both the primary outcome measure (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data we provided would not lend themselves to meta-analysis. By selecting not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to reject it from their meta-analysis.
Why did they do that?
The answer is simple: in their methods section, they specify that they selected outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO”. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in this way).
By rigidly following their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?
Well, I think they committed several serious mistakes.
- Firstly, they wrote the protocol that forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, doing so is the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall results would not have been in favour of homeopathy.
- Secondly, they awarded our study a penalty point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgement, our RCT would have been the only one with an ‘A’ rating. This would have highlighted very clearly the nonsense of excluding the best-rated trial from the meta-analysis.
There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.
And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:
I have reason to believe that this review and meta-analysis is biased in favour of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analysis. Jacobs’ results were in favour of homeopathy, Walach’s were not.
For each domain where the rating of Walach’s study was lower than that of the Jacobs study, please find below a citation from the original study, or my short summary of the point in question.
Domain I: Sequence generation:
Walach 1997: “The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (medium risk of bias)
Jacobs 1994: “For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (low risk of bias)
Domain IIIb: Blinding of outcome assessor
Walach 1997: “The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study.”
Rating: UNCLEAR (medium risk of bias)
Jacobs 1994: “All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (low risk of bias)
Domain V: Selective outcome reporting
Walach 1997: The study protocol was published in 1991, prior to the enrolment of participants; all primary outcome variables were reported with respect to all participants and all endpoints.
Rating: NO (high risk of bias)
Jacobs 1994: No prior publication of a protocol, but a pilot study exists; however, this was published only in 1993, after the trial had been performed in 1991. The primary outcome (duration of diarrhea) was defined and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)
Domain VI: Other sources of bias:
Walach 1997: Rating: NO (high risk of bias), no details given
Jacobs 1994: Imbalance of group properties (size, weight and age of the children) that might have had some impact on the course of the disease; high impact of the parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment
Rating: YES (low risk of bias), no details given
In a nutshell: I fail to see, in the studies themselves, any basis for the different ratings. I assume bias on the part of the review’s authors.
So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof of misconduct? I asked Mathie, and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying.
Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.