Today, I had a great day: two wonderful book reviews, one in THE TIMES HIGHER EDUCATION and one in THE SPECTATOR. But then I did something that I shouldn’t have done – I checked whether someone had already written a review on the Amazon site. There were three reviews; the first was nice, the second was very stupid, and the third one almost made me angry. Here it is:
I was at Exeter when Ernst took over what was already a successful Chair in CAM. I am afraid this part of it appears to be fiction. It was embarrassing for those of us CAM scientists trying to work there, but the university nevertheless supported his right to freedom of speech through all the one-sided attacks he made on CAM. Sadly, it became impossible to do genuine CAM research at Exeter, as one had to either agree with him that CAM is rubbish, or go elsewhere. He was eventually asked to leave the university, having spent the £2M charity pot set up by Maurice Laing to help others benefit from osteopathy. CAM research funding is so tiny (in fact it is pretty much non-existent) and the remedies so cheap to make, that there is not the kind of corruption you find in multi-billion dollar drug companies (such as that recently in China) or the intrigue described. Subsequently it is not possible to become a big name in CAM in the UK (which may explain the ‘about face’ from the author when he found that out?). The book bears no resemblance to what I myself know about the field of CAM research, which is clearly considerably more than the author, and I would recommend anyone not to waste time and money on this particular account.
I know, I should just ignore it, but outright lies have always made me cross!
Here are just some of the ‘errors’ in the above text:
- There was no chair when I came.
- All the CAM scientists – not sure what that is supposed to mean.
- I was never asked to leave.
- The endowment was not £2 million.
- It was not set up to help others benefit from osteopathy.
It is a pity that this ‘CAM-expert’ hides behind a pseudonym. Perhaps he/she will tell us on this blog who he/she is. And then we might find out how well-informed he/she truly is and how he/she was able to insert so many lies into such a short text.
Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?
Here is a brand new one which might stand for dozens of others.
In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA) and prospectively followed them over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).
The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out, and one did not achieve an improvement of 80% and was therefore also counted as a treatment failure. The cost of the homeopathic treatment was 41% of the projected equivalent conventional treatment.
Good news then for enthusiasts of homeopathy? 91% improvement!
Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group, or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… that is, until I read the authors’ conclusions:
Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.
Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:
- How on earth can we take this and so many other articles on homeopathy seriously?
- When does this sort of article cross the line between wishful thinking and scientific misconduct?
Guest post by Nick Ross
If you’re a fan of Edzard Ernst – and who with a rational mind would not be – then you will be a fan of HealthWatch.
Edzard is a distinguished supporter. Do join us. I can’t promise much in return except that you will be part of a small and noble organisation that campaigns for treatments that work – in other words for evidence based medicine. Oh, and you get a regular Newsletter, which is actually rather good.
HealthWatch was inspired 25 years ago by Professor Michael Baum, the breast cancer surgeon who was incandescent that so many women presented to his clinic late, doomed and with suppurating sores, because they had been persuaded to try ‘alternative treatment’ rather than the real thing.
But like Edzard (and indeed like Michael Baum), HealthWatch keeps an open mind. If there are reliable data to show that an apparently weirdo treatment works, hallelujah. If there is evidence that an orthodox one doesn’t then it deserves a raspberry. HealthWatch has worked to expose quacks and swindlers and to get the Advertising Standards Authority to do its job regulating against false claims and flimflam. It has fought the NHS to have women given fair and balanced advice about the perils of mass screening. It has campaigned with Sense About Science, English Pen and Index to protect whistleblowing scientists from vexatious libel laws, and it has joined the AllTrials battle for transparency in drug trials. It has an annual competition for medical and nursing students to encourage critical analysis of clinical research protocols, and it stages the annual HealthWatch Award and Lecture which has featured Edzard (in 2005) and a galaxy of other champions of scepticism and good evidence including Sir Iain Chalmers, Richard Smith, David Colquhoun, Tim Harford, John Diamond, Richard Doll, Peter Wilmshurst, Ray Tallis, Ben Goldacre, Fiona Godlee and, last year, Simon Singh. We are shortly to sponsor a national debate on Lord Saatchi’s controversial Medical Innovation Bill.
But we need new blood. Do please check us out. Be careful, because since we first registered our name a host of brazen copycats have emerged, not least Her Majesty’s Government with ‘Healthwatch England’ which is part of the Care Quality Commission. We have had to put ‘uk’ at the end of our web address to retain our identity. So take the link to http://www.healthwatch-uk.org/, or better still take out a (very modestly priced) subscription.
As Edmund Burke might well have said, all it takes for quackery to flourish is that good men and women do nothing.
As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.
To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):
A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.
The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.
Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
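For readers unfamiliar with the mechanics behind figures like OR = 1.53 (95% CI 1.22 to 1.91), pooled odds ratios of this kind are typically produced by inverse-variance weighting of the log odds ratios of the individual trials. Here is a minimal, illustrative sketch in Python; the three trial results are invented for the example and are not the data of the Mathie review:

```python
import math

def pool_odds_ratios(trials):
    """Fixed-effect, inverse-variance pooling of odds ratios.

    `trials` holds (OR, ci_low, ci_high) tuples; the standard error of
    log(OR) is recovered from the 95% CI width:
    SE = (ln(high) - ln(low)) / (2 * 1.96).
    Returns (pooled OR, pooled 95% CI low, pooled 95% CI high).
    """
    weighted_sum = 0.0
    weight_sum = 0.0
    for odds_ratio, lo, hi in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2                     # inverse-variance weight
        weighted_sum += w * math.log(odds_ratio)
        weight_sum += w
    pooled_log = weighted_sum / weight_sum
    pooled_se = math.sqrt(1.0 / weight_sum)
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Hypothetical trial results, purely for illustration:
example = [(1.8, 1.1, 2.9), (1.2, 0.7, 2.1), (0.9, 0.5, 1.6)]
pooled_or, ci_lo, ci_hi = pool_odds_ratios(example)
```

Each trial's weight is the reciprocal of the variance of its log odds ratio, so small, imprecise trials contribute little; this is why the composition of the included set of trials matters so much for the pooled result.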
Since my team had published an RCT of individualised homeopathy, it seemed only natural that my interest focussed on why this study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.
I was convinced that this trial had been rigorous and was thus puzzled why the reviewers, despite giving it ‘full marks’, had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.
It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data provided by us would not lend themselves to meta-analysis. By selecting not our primary but our secondary outcome measure, Mathie et al were able to claim that they could not use our study and to reject it from their meta-analysis.
Why did they do that?
The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO“. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).
By following rigidly their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?
Well, I think they committed several serious mistakes.
- Firstly, they wrote the protocol which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is even the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall results would not have been in favour of homeopathy.
- Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have highlighted very clearly the nonsense of excluding the best-rated trial from the meta-analysis.
There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.
And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:
I have reason to believe that this review and meta-analysis is biased in favour of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhoea in Nicaragua, and (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analysis. Jacobs’ results were in favour of homeopathy, Walach’s were not.
For the domains where the rating of Walach’s study was lower than that of the Jacobs study, here are citations from the original studies (or my short summaries of the point in question).
Domain I: Sequence generation:
Walach 1997: “The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (medium risk of bias)
Jacobs 1994: “For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (low risk of bias)
Domain IIIb: Blinding of outcome assessor
Walach 1997: “The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study.”
Rating: UNCLEAR (medium risk of bias)
Jacobs 1994: “All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (low risk of bias)
Domain V: Selective outcome reporting
Walach 1997: The study protocol was published in 1991, prior to enrolment of participants; all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)
Jacobs 1994: No prior publication of a protocol, but a pilot study exists. However, this was published only in 1993, after the trial was performed in 1991. The primary outcome (duration of diarrhoea) was defined and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)
Domain VI: Other sources of bias:
Walach 1997: Rating: NO (high risk of bias), no details given
Jacobs 1994: Imbalance of group properties (size, weight and age of children) that might have some impact on the course of the disease; high impact of a parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment.
Rating: YES (low risk of bias), no details given
In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.
So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof for misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying.
Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.
One thing that has often irritated me – alright, I admit it: sometimes it even infuriated me – is the pseudoscientific language of authors writing about alternative medicine. Reading publications in this area often seems to me like being in the middle of a game of ‘bullshit bingo’ (I am afraid that some of the commentators on this blog have contributed substantially to this phenomenon). In an article of 2004, I discussed this issue in some detail and concluded that “… pseudo-scientific language … can be seen as an attempt to present nonsense as science… this misleads patients and can thus endanger their health…” For this paper, I had focussed on examples from the ‘bioresonance’ literature – more by coincidence than by design, I should add. I could have selected any other alternative treatment or diagnostic method; the use of pseudoscientific language is truly endemic in alternative medicine.
To give you a little flavour, here is the section of my 2004 paper where I used 5 quotes from recent articles on bioresonance and added a brief comment after each of them.
Quote No. 1
‘The biophysical control processes are superordinate to the biochemical processes. In the same way as the atomic processes result in chemical compounds the ultrafine biocommunication results in the biochemical processes. Control signals have an electromagnetic quality. Disturbing signals or ‘disturbing energies’ also have an electromagnetic quality. This is the reason why they can, for example, be conducted through cables and transformed into therapy signals by means of sophisticated electronic devices. The purpose is to clear the pathological part of the signals.’
Here the author uses highly technical language which, at first, sounds very complicated and scientific. However, after a second read, one is bound to discover that the words hide more than they reveal. In particular, the scientific tone distracts from the lack of logic in the argument. The basic message, once the pseudoscientific veneer is stripped away, seems to be the following. Living systems display electromagnetic phenomena. The electromagnetic energies that they rely upon can make us ill. The energies can also be transferred into an electronic instrument where they can be changed so that they don’t cause any more harm.
Quote No. 2
‘A very important advantage of the BICOM device as compared to the original form of the MORA-therapy in paediatry is the possibility to reduce the oscillation, a fact which meets much better the reaction pattern of the child and gives better results’ .
This paragraph essentially states that the BICOM instrument can change (the frequency or amplitude of) some sort of (electromagnetic) wave. We are told that, for children, this is preferable because of the way children tend to react. This would then be more effective.
Quote No. 3
‘The question how causative the Bioresonanz-Therapy can be must be answered in a differentiated way. The BR is in the first place effective on the informative level, which means on the ultrafine biokybernetical regulation level of the organism. This also includes the time factor and with that the functional aspect, and thus it influences the material-biochemical area of the body. The BRT is in comparison to other therapy procedures very high on the scale of causativeness, but it still remains in the physical level, and does not reach into the spiritual area. The freeing of the patient from his diseases can self evidently also lead to a change and improvement of conduct and attitudes and to a general wellbeing of the patient’ .
This amazing statement is again not easy to understand. If my reading is correct, the author essentially wants to tell us that BR interferes with the flow of information within organisms. The process is time-dependent and therefore affects function, physical and biochemical properties. Compared to other treatments, BR is more causative without affecting our spiritual sphere. As BR cures a disease, it can also change behaviour, attitudes and wellbeing.
Quote No. 4
‘MORA therapy is an auto-iso-therapy using the patient’s own vibrations in a wide range of the electromagnetic spectrum. Strictly speaking, we have hyperwaves in a six-dimensional cosmos with two hidden parameters (as predicted by Albert Einstein and others). Besides the physical plane there are six other planes of existence and the MORA therapy works in the biological plane, a region called the M-field, according to Sheldrake and Burkhard Heim’ .
Here we seem to be told that the MORA therapy is a self-treatment using the body’s own resources, namely a broad range of electromagnetic waves. These waves are hyperwaves in 6 dimensions, and their existence has already been predicted by Einstein. Six (or 7?) planes of existence seem to have been discovered, and the MORA therapy is operative in one of them.
Quote No. 5
‘The author presents an overall medical conception of the world between mass maximum and masslessness and completes it with the pair of concepts of subjectivity/objectivity. Three test procedures of the bioelectronic function diagnostics are presented and incorporated in addition to other procedures in this conception of the world. Therefore, in the sense of a holistic medicine, there is a useful indication for every medical procedure, because there are different objectives associated with each procedure. A one-sided assessment of the procedures does not do justice to the human being as a whole’ .
This author introduces a new concept of the world between maxima and minima of mass or objectivity. He has developed 3 tests of BR diagnosis that fit into the new concept. Therefore, holistically speaking, any therapy is good for something because each may have a different aim. One-sided assessments of such holistic treatments are too narrow bearing in mind the complexity of a human being.
The danger of pseudoscientific language in health care is obvious: it misleads patients, consumers, journalists, politicians, and everyone else (perhaps even some of the original authors?) into believing that nonsense is credible; to express it more bluntly: it is a method of cheating the unsuspecting public. Yes, the way I see it, it is a form of health fraud. Thus it leads to wrong therapeutic decisions and endangers public health.
I could easily get quite cross with the many authors who publish such drivel. But let’s not allow them to spoil our day; let’s take a different approach: let’s try to have some fun.
I herewith invite my readers to post quotes in the comments section of the most extraordinary excesses of pseudoscientific language that they have come across. If the result is sufficiently original, I might try to design a new BULLSHIT BINGO with it.
Rigorous research into the effectiveness of a therapy should tell us the truth about the ability of this therapy to treat patients suffering from a given condition — perhaps not one single study, but the totality of the evidence (as evaluated in systematic reviews) should achieve this aim. Yet, in the realm of alternative medicine (and probably not just in this field), such reviews are often highly contradictory.
A concrete example might explain what I mean.
There are numerous systematic reviews assessing the effectiveness of acupuncture for fibromyalgia syndrome (FMS). It is safe to assume that the authors of these reviews have all conducted comprehensive searches of the literature in order to locate all the published studies on this subject. Subsequently, they have evaluated the scientific rigor of these trials and summarised their findings. Finally they have condensed all of this into an article which arrives at a certain conclusion about the value of the therapy in question. Understanding this process (outlined here only very briefly), one would expect that all the numerous reviews draw conclusions which are, if not identical, at least very similar.
However, the disturbing fact is that they are not remotely similar. Here are two which, in fact, are so different that one could assume they have evaluated a set of totally different primary studies (which, of course, they have not).
One recent (2014) review concluded that acupuncture for FMS has a positive effect, and acupuncture combined with western medicine can strengthen the curative effect.
Another recent review concluded that a small analgesic effect of acupuncture was present, which, however, was not clearly distinguishable from bias. Thus, acupuncture cannot be recommended for the management of FMS.
How can this be?
In contrast to most systematic reviews of conventional medicine, systematic reviews of alternative therapies are almost invariably based on a small number of primary studies (in the above case, the total number was only 7!). The quality of these trials is often low (all reviews therefore end with the somewhat meaningless conclusion that more and better studies are needed).
So, the situation with primary studies of alternative therapies for inclusion into systematic reviews usually is as follows:
- the number of trials is low
- the quality of trials is even lower
- the results are not uniform
- the majority of the poor quality trials show a positive result (bias tends to generate false positive findings)
- the few rigorous trials yield a negative result
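The claim that bias tends to generate false-positive findings can be illustrated with a toy simulation. The model below is purely hypothetical: both arms draw outcomes from identical distributions (the treatment is inert by construction), and a constant ‘bias’ term added to the treatment arm stands in for methodological flaws such as unblinding or selective outcome reporting:

```python
import math
import random

def positive_rate(n_trials=500, n_per_arm=20, bias=0.0, seed=42):
    """Fraction of simulated trials of an INERT treatment that come out
    'positive'.  Outcomes are N(0, 1) in both arms; `bias` is added to the
    treatment arm to mimic methodological flaws.  A trial counts as
    'positive' when the one-sided z statistic exceeds 1.645 (normal
    approximation with known sigma = 1, for simplicity)."""
    rng = random.Random(seed)
    positives = 0
    for _ in range(n_trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        treated = [rng.gauss(bias, 1.0) for _ in range(n_per_arm)]
        diff = sum(treated) / n_per_arm - sum(control) / n_per_arm
        se = math.sqrt(2.0 / n_per_arm)       # SE of the mean difference
        if diff / se > 1.645:
            positives += 1
    return positives / n_trials

unbiased = positive_rate(bias=0.0)   # close to the nominal 5% error rate
biased = positive_rate(bias=0.5)     # far higher, despite no true effect
```

Even a modest bias term turns a large share of these small trials ‘positive’, which is exactly the pattern described above: many flawed positive trials, few rigorous negative ones.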
Unfortunately this means that the authors of systematic reviews summarising such confusing evidence often seem to feel at liberty to project their own pre-conceived ideas into their overall conclusion about the effectiveness of the treatment. Often the researchers are in favour of the therapy in question – in fact, this is usually precisely the attitude that motivated them to conduct a review in the first place. In other words, the frequently murky state of the evidence (as outlined above) can serve as a welcome invitation for personal bias to exert its effect and skew the overall conclusion. The final result is that the readers of such systematic reviews are being misled.
Authors who are biased in favour of the treatment will tend to stress that the majority of the trials are positive. Therefore the overall verdict has to be positive as well, in their view. The fact that most trials are flawed does not usually bother them all that much (I suspect that many fail to comprehend the effects of bias on the study results); they merely add to their conclusions that “more and better trials are needed” and believe that this meek little remark is sufficient evidence for their ability to critically analyse the data.
Authors who are not biased and have the necessary skills for critical assessment, on the other hand, will insist that most trials are flawed and therefore their results must be categorised as unreliable. They will also emphasise the fact that there are a few reliable studies and clearly point out that these are negative. Thus their overall conclusion must be negative as well.
In the end, enthusiasts will conclude that the treatment in question is at least promising, if not recommendable, while real scientists will rightly state that the available data are too flimsy to demonstrate the effectiveness of the therapy; as it is wrong to recommend unproven treatments, they will not recommend the treatment for routine use.
The difference between the two might just seem marginal – but, in fact, it is huge: IT IS THE DIFFERENCE BETWEEN MISLEADING PEOPLE AND GIVING RESPONSIBLE ADVICE; THE DIFFERENCE BETWEEN VIOLATING AND ADHERING TO ETHICAL STANDARDS.
‘Healing, hype or harm? A critical analysis of complementary or alternative medicine’ is the title of a book that I edited and that was published in 2008. Its publication date coincided with that of ‘Trick or Treatment?’ and therefore the former was almost completely over-shadowed by the latter. Consequently few people know about it. This is a shame, I think, and this post is dedicated to encouraging my readers to have a look at ‘Healing, hype or harm?’
One reviewer commented on Amazon about this book as follows: Vital and informative text that should be read by everyone alongside Ben Goldacre’s ‘Bad Science’ and Singh and Ernst’s ‘Trick or Treatment’. Everyone should be able to make informed choices about the treatments that are peddled to the desperate and gullible. As Tim Minchin famously said ‘What do you call Alternative Medicine that has been proved to work? . . . Medicine!’
This is high praise indeed! But I should not omit the fact that others have commented that they were appalled by our book and found it “disappointing and unsettling”. This does not surprise me in the least; after all, alternative medicine has always been a divisive subject.
The book was written by a total of 17 authors and covers many important aspects of alternative medicine. Some of its most famous contributors are Michael Baum, Gustav Born, David Colquhoun, James Randi and Nick Ross.
As already mentioned, our book is already 6 years old; however, this does not mean that it is now out-dated. The subject areas were chosen such that it will be timely for a long time to come. Nor does this book reflect one single point of view; as it was written by over a dozen different experts with vastly different backgrounds, it offers an entire spectrum of views and attitudes. It is, in a word, a book that stimulates critical thinking and thoughtful analysis.
I sincerely think you should have a look at it… and, in case you think I am hoping to maximise my income by telling you all this: all the revenues from this book go to charity.
After the usually challenging acute therapy is behind them, cancer patients are often desperate to find a therapy that might improve their wellbeing. At that stage they may suffer from a wide range of symptoms which can seriously limit their quality of life. Any treatment that can be shown to restore them to their normal mental and physical health would be more than welcome.
Most homeopaths believe that their remedies can do just that, particularly if they are tailored not to the disease but to the individual patient. Sadly, the evidence that this might be so is almost non-existent. Now, a new trial has become available; it was conducted by Jennifer Poole, a chartered psychologist and registered homeopath, and researcher and teacher at Nemeton Research Foundation, Romsey.
The aim of this study was to explore the benefits of a three-month course of individualised homeopathy (IH) for survivors of cancer. Fifteen survivors of any type of cancer were recruited from a walk-in cancer support centre. Conventional treatment had to have taken place within the last three years. Patients saw a homeopath who prescribed IH. After three months of IH, they scored their total, physical and emotional wellbeing using the Functional Assessment of Chronic Illness Therapy for Cancer (FACIT-G). The results show that 11 of the 14 women had statistically positive outcomes for emotional, physical and total wellbeing.
The conclusions of the author are clear: Findings support previous research, suggesting CAM or IH could be beneficial for survivors of cancer.
This article was published in the NURSING TIMES, and the editor added a footnote informing us that “this article has been double-blind peer reviewed”.
I find this surprising. A decent peer-review should have picked up the point that a study of that nature cannot possibly produce results which tell us anything about the benefits of IH. The reasons for this are fairly obvious:
- there was no control group,
- therefore the observed outcomes are most likely due to 1) natural history, 2) placebo, 3) regression towards the mean and 4) social desirability; it seems most unlikely that IH had anything to do with the result
- the sample size was tiny,
- the patients elected to receive IH, which means they had high expectations of a positive outcome,
- only subjective outcome measures were used,
- there is no good previous research suggesting that IH benefits cancer patients.
On the last point, a recent systematic review showed that the studies available on this topic had mixed results, either showing a significantly greater improvement in QOL in the intervention group compared to the control group, or no significant difference between groups. The authors concluded that there were significant gaps in the evidence base for the effectiveness of CAM on QOL in cancer survivors. Further work in this field needs to adopt more rigorous methodology to help support cancer survivors to actively embrace self-management and effective CAMs, without recommending inappropriate interventions which are of no proven benefit.
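The 'no control group' problem deserves a closer look, because regression towards the mean alone can produce exactly the pattern this trial reports. Here is a minimal simulation (purely illustrative, with invented numbers, nothing to do with the trial's actual data): people whose wellbeing scores happen to be lowest when they walk into a support centre tend to score better at follow-up even if nobody treats them at all.

```python
import random

random.seed(42)

# simulate 10,000 people, each measured twice with NO treatment in between;
# every score = a stable 'true' wellbeing plus random day-to-day fluctuation
population = []
for _ in range(10_000):
    trait = random.gauss(50, 10)
    baseline = trait + random.gauss(0, 10)
    followup = trait + random.gauss(0, 10)   # same trait, fresh fluctuation
    population.append((baseline, followup))

# 'recruit' only those who felt worst at baseline, as a walk-in centre would
recruited = [(b, f) for b, f in population if b < 40]
mean_baseline = sum(b for b, _ in recruited) / len(recruited)
mean_followup = sum(f for _, f in recruited) / len(recruited)

print(f"baseline mean:  {mean_baseline:.1f}")
print(f"follow-up mean: {mean_followup:.1f}")  # noticeably higher, untreated
```

The recruited group 'improves' substantially without any intervention, simply because extreme scores drift back towards the average on re-measurement. Without a control group, such an improvement is indistinguishable from a treatment effect.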
All this new study might tell us is that IH did not seem to harm these patients – but even this finding is not certain; to be sure, we would need to include many more patients. Any conclusions about the effectiveness of IH are totally unwarranted. But are there ANY generalizable conclusions that can be drawn from this article? Yes, I can think of a few:
- Some cancer patients can be persuaded to try the most implausible treatments.
- Some journals will publish any rubbish.
- Some peer-reviewers fail to spot the most obvious defects.
- Some ‘researchers’ haven’t got a clue.
- The attempts to mislead us about the value of homeopathy are incessant.
One might argue that this whole story is too trivial for words; who cares what dodgy science is published in the NURSING TIMES? But I think it does matter – not so much because of this one silly article itself, but because similarly poor research with similarly ridiculous conclusions is currently published almost every day. Consequently it is presented to the public as meaningful science heralding important advances in medicine. It matters because this constant drip of bogus research eventually influences public opinion and determines far-reaching health care decisions.
Dodgy science abounds in alternative medicine; this is perhaps particularly true for homeopathy. A brand-new trial seems to confirm this view.
The aim of this study was to test the hypothesis that homeopathy (H) enhances the effects of scaling and root planing (SRP) in patients with chronic periodontitis (CP).
The researchers, dentists from Brazil, randomised 50 patients with CP to one of two treatment groups: SRP (C-G) or SRP + H (H-G). Assessments were made at baseline and after 3 and 12 months of treatment. The local and systemic responses to the treatments were evaluated after one year of follow-up. The results showed that both groups displayed significant improvements, however, the H-G group performed significantly better than C-G group.
The authors concluded that homeopathic medicines, as an adjunctive to SRP, can provide significant local and systemic improvements for CP patients.
Really? I am afraid, I disagree!
Homeopathic medicines might have nothing whatsoever to do with this result. Much more likely is the possibility that the findings are caused by other factors such as:
- patients’ expectations,
- improved compliance with other health-related measures,
- the researchers’ expectations,
- the extra attention given to the patients in the H-G group,
- disappointment of the C-G patients for not receiving the additional care,
- a mixture of all or some of the above.
I should stress that it would not have been difficult to plan the study in such a way that these factors were eliminated as sources of bias or confounding. But this study was conducted according to the A+B versus B design which we have discussed repeatedly on this blog. In such trials, A is the experimental treatment (homeopathy) and B is the standard care (scaling and root planing). Unless A is an overtly harmful therapy, it is simply not conceivable that A+B fails to generate better results than B alone. The simplest way to comprehend this argument is to imagine A and B are two different amounts of money: it is impossible that A+B is not more than B!
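The money analogy can be made concrete with a small simulation (invented numbers, purely a sketch): even when A has zero specific effect, the extra attention and expectation that come with receiving A are enough to make the A+B arm win.

```python
import random

random.seed(1)

def improvement(extra_attention):
    effect_of_b = random.gauss(10, 3)        # standard care (SRP) works
    # the A+B arm gets a non-specific boost from attention and expectation
    nonspecific = random.gauss(2, 1) if extra_attention else 0.0
    specific_effect_of_a = 0.0               # A is assumed completely inert!
    return effect_of_b + nonspecific + specific_effect_of_a

b_arm  = [improvement(False) for _ in range(200)]  # B alone
ab_arm = [improvement(True)  for _ in range(200)]  # A + B

mean = lambda xs: sum(xs) / len(xs)
print(f"B alone: {mean(b_arm):.1f}")
print(f"A + B  : {mean(ab_arm):.1f}")  # larger, although A did nothing
```

The A+B arm comes out ahead every time the simulation is run with a reasonable sample size, despite A being a pure placebo by construction. A design that cannot produce a negative result cannot test whether A works.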
It is unclear to me what relevant research question such a study design actually answers (if anyone knows, please tell me). It seems obvious, however, that it cannot test the hypothesis that homeopathy (H) enhances the effects of scaling and root planing (SRP). This does not mean that the design is necessarily useless. But at the very minimum, one would need an adequate research question (one that matches this design) and adequate conclusions based on the findings.
The fact that the conclusions drawn from a dodgy trial are inadequate and misleading could be seen as merely a mild irritation. The facts that, in homeopathy, such poor science and misleading conclusions emerge all too regularly, and that journals continue to publish such rubbish are not just mildly irritating; they are annoying and worrying – annoying because such pseudo-science constitutes an unethical waste of scarce resources; worrying because it almost inevitably leads to wrong decisions in health care.
There must be well over 10 000 clinical trials of acupuncture; Medline lists ~5 000, and many more are hidden in the non-Medline listed literature. That should be good news! Sadly, it isn’t.
It should mean that we now have a pretty good idea for what conditions acupuncture is effective and for which illnesses it does not work. But we don’t! Sceptics say it works for nothing, while acupuncturists claim it is a panacea. The main reason for this continued controversy is that the quality of the vast majority of these 10 000 studies is not just poor, it is lousy.
“Where is the evidence for this outrageous statement???” – I hear the acupuncture-enthusiasts shout. Well, how about my own experience as editor-in-chief of FACT? No? Far too anecdotal?
How about looking at Cochrane reviews, then? They are considered to be the most independent and reliable evidence in existence. There are many such reviews (most, if not all, [co-]authored by acupuncturists), and they all agree that the scientific rigour of the primary studies is fairly awful. Here are the crucial bits of just the last three; feel free to look for more:
Or how about providing an example? Good idea! Here is a new trial which could stand for numerous others:
This study was performed to compare the efficacy of acupuncture versus corticosteroid injection for the treatment of de Quervain’s tenosynovitis (no, you do not need to look up what condition this is to understand this post). Thirty patients were treated in two groups. The acupuncture group received 5 acupuncture sessions of 30 minutes duration. The injection group received one methylprednisolone acetate injection in the first dorsal compartment of the wrist. The degree of disability and pain was evaluated by using the Quick Disabilities of the Arm, Shoulder, and Hand (Q-DASH) scale and the Visual Analogue Scale (VAS) at baseline and at 2 weeks and 6 weeks after the start of treatment. The baseline means of the Q-DASH and the VAS scores were 62.8 and 6.9, respectively. At the last follow-up, the mean Q-DASH scores were 9.8 versus 6.2 in the acupuncture and injection groups, respectively, and the mean VAS scores were 2 versus 1.2. Thus there were short-term improvements of pain and function in both groups.
The authors drew the following conclusions: Although the success rate was somewhat higher with corticosteroid injection, acupuncture can be considered as an alternative option for treatment of De Quervain’s tenosynovitis.
The flaws of this study are exemplary and numerous:
- This should have been a study that compares two treatments – the technical term is ‘equivalence trial’ – and such studies need to be much larger to produce a meaningful result. Small sample sizes in equivalence trials will always make the two treatments look similarly effective, even if one is a pure placebo.
- There is no gold standard treatment for this condition. This means that a comparative trial makes no sense at all. In such a situation, one ought to conduct a placebo-controlled trial.
- There was no blinding of patients; therefore their expectation might have distorted the results.
- The acupuncture group received more treatments than the injection group; therefore the additional attention might have distorted the findings.
- Even if the results were entirely correct, one cannot conclude from them that acupuncture was effective; the notion that it was similarly ineffective as the injections is just as warranted.
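The first of these flaws, the sample size, can be illustrated with a quick simulation (invented effect sizes, purely a sketch): with only 15 patients per arm, even a genuine difference between a working treatment and a much weaker one is usually missed by a significance test, so the two arms look deceptively 'equivalent'.

```python
import random
import statistics

random.seed(7)

def one_trial(n):
    # assumed (invented) true effects: active works, comparator works less
    active = [random.gauss(2.0, 1.5) for _ in range(n)]
    weaker = [random.gauss(1.2, 1.5) for _ in range(n)]
    # Welch's t statistic; |t| > ~2.05 is roughly significant at the 5% level
    se = (statistics.variance(active) / n +
          statistics.variance(weaker) / n) ** 0.5
    t = (statistics.mean(active) - statistics.mean(weaker)) / se
    return abs(t) > 2.05

def detection_rate(n, trials=2000):
    # fraction of simulated trials in which the real difference is detected
    return sum(one_trial(n) for _ in range(trials)) / trials

print(f"n=15 per arm : difference detected in {detection_rate(15):.0%} of trials")
print(f"n=100 per arm: difference detected in {detection_rate(100):.0%} of trials")
```

At 15 patients per arm the real difference goes undetected most of the time; only much larger samples detect it reliably. A non-significant difference in a tiny trial is therefore evidence of nothing at all, least of all of equivalence.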
These are just some of the most fatal flaws of this study. The sad thing is that similar criticisms apply to most of the 10 000 trials of acupuncture. But the point here is not to nit-pick or to quack-bust. My point is a different and more serious one: fatally flawed research is not just a ‘poor show’; it is unethical, because it wastes scarce resources and, even more importantly, abuses patients for meaningless pseudo-science. All it does is mislead the public into believing that acupuncture might be good for this or that condition, and consequently prompt wrong therapeutic decisions.
In acupuncture (and indeed in most alternative medicine) research, the problem is so widespread that it is high time to do something about it. Journal editors, peer-reviewers, ethics committees, universities, funding agencies and all others concerned with such research have to work together so that such flagrant abuse is stopped once and for all.