Irritable bowel syndrome (IBS) is common and often difficult to treat – unless, of course, you consult a homeopath. Here is just one of the many thousands of quotes from homeopaths available on the Internet: “Homeopathic medicine can reduce Irritable Bowel Syndrome (IBS) symptoms by lowering food sensitivities and allergies. Homeopathy treats the patient as a whole and does not simply focus on the disease. Careful attention is given to the minute details about the presenting complaints, including the severity of diarrhea, constipation, pain, cramps, mucus in the stools, nausea, heartburn, emotional triggers and conventional laboratory findings. In addition, the patient’s eating habits, food preferences, thermal attributes and sleep patterns are noted. The patient’s family history and diseases, along with the patient’s emotions are discussed. Then the homeopathic practitioner will select the remedy that most closely matches the symptoms.”
Such optimism might be refreshing, but is there any reason for it? Is homeopathy really an effective treatment for IBS? To answer this question, we now have a brand-new Cochrane review. The aim of this review was to assess the effectiveness and safety of homeopathic treatment for treating irritable bowel syndrome (IBS). (This type of statement always makes me a little suspicious; how on earth can anyone truly assess the safety of a treatment by looking at a few studies? This is NOT how one evaluates safety!) The authors conducted extensive literature searches to identify all RCTs, cohort and case-control studies that compared homeopathic treatment with placebo, other control treatments, or usual care in adults with IBS. The primary outcome was global improvement in IBS.
Three RCTs with a total of 213 participants were included. No cohort or case-control studies were identified. Two studies compared homeopathic remedies to placebos for constipation-predominant IBS. One study compared individualised homeopathic treatment to usual care defined as high doses of dicyclomine hydrochloride, faecal bulking agents and a high fibre diet. Due to the low quality of reporting, the risk of bias in all three studies was unclear on most criteria and high for some criteria.
A meta-analysis of two studies with a total of 129 participants with constipation-predominant IBS found a statistically significant difference in global improvement between the homeopathic remedy asafoetida and placebo at a short-term follow-up of two weeks: 73% of patients in the homeopathy group improved compared to 45% of placebo patients. There was no statistically significant difference in global improvement between homeopathic asafoetida plus nux vomica and placebo: 68% of patients in the homeopathy group improved compared to 52% of placebo patients.
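As an aside, these percentages can be checked with a rough two-proportion z-test. The review does not report the arm sizes for each comparison, so the group sizes below are assumptions for illustration only:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two proportions (pooled SE)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Asafoetida vs placebo: 73% vs 45%, pooled n = 129 (roughly equal arms assumed)
z1 = two_proportion_z(0.73, 65, 0.45, 64)

# Asafoetida plus nux vomica vs placebo: 68% vs 52% (40 per arm assumed)
z2 = two_proportion_z(0.68, 40, 0.52, 40)

print(round(z1, 2), round(z2, 2))  # |z| > 1.96 corresponds to p < 0.05 (two-sided)
```

With these assumed group sizes, only the first comparison exceeds the conventional 1.96 threshold, which is consistent with the review’s findings; with smaller samples, even a 28-percentage-point difference would fail to reach significance.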
The overall quality of the evidence was very low. There was no statistically significant difference between individualised homeopathic treatment and usual care for the outcome “feeling unwell”. None of the studies reported on adverse events (which, by the way, should be seen as a breach of research ethics on the part of the authors of the three primary studies).
The authors concluded that a pooled analysis of two small studies suggests a possible benefit for clinical homeopathy, using the remedy asafoetida, over placebo for people with constipation-predominant IBS. These results should be interpreted with caution due to the low quality of reporting in these trials, high or unknown risk of bias, short-term follow-up, and sparse data. One small study found no statistically significant difference between individualised homeopathy and usual care (defined as high doses of dicyclomine hydrochloride, faecal bulking agents and diet sheets advising a high fibre diet). No conclusions can be drawn from this study due to the low number of participants and the high risk of bias in this trial. In addition, it is likely that usual care has changed since this trial was conducted. Further high quality, adequately powered RCTs are required to assess the efficacy and safety of clinical and individualised homeopathy compared to placebo or usual care.
THIS REVIEW REQUIRES A FEW FURTHER COMMENTS, I THINK
Asafoetida, the remedy used in two of the studies, is a plant native to Pakistan, Iran and Afghanistan. It is used in Ayurvedic herbal medicine to treat colic, intestinal parasites and irritable bowel syndrome. In the ‘homeopathic’ trials, asafoetida was used in relatively low dilutions, ones that still contain molecules of the starting material. It is therefore debatable whether this was really homeopathy or whether it was more akin to herbal medicine – it was certainly not homeopathy with its typical ultra-high dilutions.
Regardless of this detail, the Cochrane review hardly provides sound evidence for homeopathy’s efficacy. On the contrary, my reading of its findings is that the ‘possible benefit’ is NOT real but a false-positive result caused by the serious limitations of the original studies. The authors stress that the apparently positive result ‘should be interpreted with caution’; that is certainly correct.
So, if you are a proponent of homeopathy, as the authors of the review seem to be, you will claim that homeopathy offers ‘possible benefits’ for IBS-sufferers. But if you are not convinced of the merits of homeopathy, you might suggest that the evidence is insufficient to recommend homeopathy. I imagine that IBS-sufferers might get as frustrated with such confusion as most scientists will be. Yet there is hope; the answer could be imminent: apparently, a new trial is to report its results within this year.
IS THIS NEW TRIAL GOING TO CONTRIBUTE MEANINGFULLY TO OUR KNOWLEDGE?
It is a three-armed study (same 1st author as in the Cochrane review) which, according to its authors, seeks to explore the effectiveness of individualised homeopathic treatment plus usual care compared to both an attention control plus usual care and usual care alone, for patients with IBS. (Why “explore” and not “determine”, I ask myself.) Patients are randomly selected to be offered 5 sessions of homeopathic treatment plus usual care, 5 sessions of supportive listening plus usual care, or usual care alone. (“To be offered” looks odd to me; does that mean patients are not blinded to the interventions? Yes, indeed it does.) The primary clinical outcome is the IBS Symptom Severity at 26 weeks. Analysis will be by intention to treat and will compare homeopathic treatment with usual care at 26 weeks as the primary analysis, and homeopathic treatment with supportive listening as an additional analysis.
Hold on… the primary analysis “will compare homeopathic treatment with usual care”. Are they pulling my leg? They just told me that patients will be “offered 5 sessions of homeopathic treatment plus usual care… or usual care alone”.
Oh, I see! We are again dealing with an A+B versus B design, and, on top of that, one without patient- or therapist-blinding. This type of design cannot ever produce a negative result, even if the experimental treatment is a pure placebo: placebo plus usual care is always more than usual care alone. IBS-patients will certainly experience benefit from having the homeopaths’ time, empathy and compassion – never mind the remedies they get from them. And for the secondary analyses, things do not seem to be much more rigorous either.
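The inevitability of a positive result in an A+B versus B comparison is easy to demonstrate with a toy simulation (all numbers below are invented for illustration; nothing is taken from the actual trial):

```python
import random
from statistics import mean

random.seed(1)

def simulate_trial(n_per_arm=200, usual_care_effect=10.0,
                   placebo_response=5.0, noise_sd=4.0):
    """Toy A+B vs B comparison: the add-on (A) is completely inert,
    but the unblinded extra attention produces a placebo response
    that only the A+B arm receives."""
    a_plus_b = [usual_care_effect + placebo_response + random.gauss(0, noise_sd)
                for _ in range(n_per_arm)]
    b_only = [usual_care_effect + random.gauss(0, noise_sd)
              for _ in range(n_per_arm)]
    return mean(a_plus_b) - mean(b_only)

diff = simulate_trial()
print(round(diff, 1))  # reliably positive, although the add-on itself does nothing
```

However large the trial, the unblinded placebo response is counted as a treatment effect, which is exactly why this design cannot fail.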
Do we really need more trials of this nature? The Cochrane review shows that we currently have three studies which are too flimsy to be interpretable. What difference will a further flimsy trial make in this situation? When will we stop wasting time and money on such useless ‘research’? All it can possibly achieve is that apologists of homeopathy will misinterpret the results and suggest that they demonstrate efficacy.
Obviously, I have not seen the data (they have not yet been published) but I think I can nevertheless predict the conclusions of the primary analysis of this trial; they will read something like this: HOMEOPATHY PROVED TO BE SIGNIFICANTLY MORE EFFECTIVE THAN USUAL CARE. I have asked the question before and I do it again: when does this sort of ‘research’ cross the line into the realm of scientific misconduct?
Alternative medicine thrives in the realm of common chronic conditions which conventional medicine cannot cure and which respond well to treatment with placebos. Irritable bowel syndrome (IBS) is such a condition, and IBS-sufferers who are often frustrated with the symptomatic relief conventional medicine has to offer are only too keen to try any therapy that promises help. There is hardly an alternative therapy which does not claim to be the solution to IBS-symptoms: herbal medicine, mind-body interventions, homeopathy (the subject of my next post), acupuncture, even ‘MOXIBUSTION‘.
Moxibustion is a derivative of acupuncture; instead of needles, this method employs heat to stimulate acupuncture points. Proponents believe that the effects of moxibustion are roughly equivalent to those of acupuncture but many acupuncturists feel that they are less powerful. One website explains: Moxibustion is a traditional Chinese medicine technique that involves the burning of mugwort, a small, spongy herb, to facilitate healing. Moxibustion has been used throughout Asia for thousands of years; in fact, the actual Chinese character for acupuncture, translated literally, means “acupuncture-moxibustion.” The purpose of moxibustion, as with most forms of traditional Chinese medicine, is to strengthen the blood, stimulate the flow of qi, and maintain general health.
Many proponents of moxibustion claim that their treatment works for IBS. The evidence is, however, far less clear. Two recent meta-analyses might tell us more.
The first systematic review and meta-analysis was published by Korean researchers and aimed at critically evaluating the current evidence on moxibustion for improving global symptoms of IBS. The authors conducted extensive searches and found a total of 20 RCTs to be included in their analyses. The risk of bias in these studies was generally high. Compared with pharmacological medications, moxibustion significantly alleviated overall IBS symptoms but there was a moderate inconsistency among the 7 RCTs. Moxibustion combined with acupuncture was more effective than pharmacological therapy but a moderate inconsistency among the 4 studies was found. When moxibustion was added to pharmacological medications or herbal medicine, no additive benefit of moxibustion was shown compared with pharmacological medications or herbal medicine alone. One small sham-controlled trial found no difference between moxibustion and sham control in symptom severity. Moxibustion appeared to be associated with few adverse events but the evidence is limited due to poor reporting.
The authors concluded that moxibustion may provide benefit to IBS patients although the risk of bias in the included studies is relatively high. Future studies are necessary to confirm whether this finding is reproducible in carefully-designed and conducted trials and to firmly establish the place of moxibustion in current practice.
The way I see it, these conclusions are far too optimistic. There was only one RCT that controlled for placebo-effects, and the results of that study were negative. Thus I would conclude that some studies report effectiveness of moxibustion for IBS, yet the effects seem not to be caused by the treatment per se but are most likely due to a placebo-effect.
The second systematic review and meta-analysis was published by Chinese researchers and aimed at evaluating the clinical efficacy and safety of moxibustion and acupuncture in the treatment of IBS. The authors included randomized and quasi-randomized clinical trials in their analyses and were able to include 11 trials. Their meta-analysis suggests that the effectiveness of the combined methods of acupuncture and moxibustion is superior to conventional western medication treatment. The authors concluded that acupuncture-moxibustion for IBS is better than the conventional western medication treatment.
While the first meta-analysis was at least technically sound, the second seems to have too many flaws to mention: the search methodology was flimsy, many available studies were not included, their risk of bias was not assessed critically, the conclusions are based more on wishful thinking than on the available data, etc.
If we consider that moxibustion is a method of stimulating acupoints, we have to assume that it can at best be as effective as acupuncture, quite possibly slightly less. Thus it is relevant to see what the evidence tells us about acupuncture for IBS. The current Cochrane review of acupuncture for IBS shows that sham-controlled RCTs have found no benefits of acupuncture relative to a credible sham acupuncture control for IBS symptom severity or IBS-related quality of life.
I think I rest my case.
Some experts concede that chiropractic spinal manipulation is effective for chronic low back pain (cLBP). But what is the right dose? There have been no full-scale trials of the optimal number of treatments with spinal manipulation. This study was aimed at filling this gap by trying to identify a dose-response relationship between the number of visits to a chiropractor for spinal manipulation and cLBP outcomes. A further aim was to determine the efficacy of manipulation by comparison with a light massage control.
The primary cLBP outcomes were the 100-point pain intensity scale and functional disability scales evaluated at the 12- and 24-week primary end points. Secondary outcomes included days with pain and functional disability, pain unpleasantness, global perceived improvement, medication use, and general health status.
One hundred patients with cLBP were randomized to each of 4 dose levels of care: 0, 6, 12, or 18 sessions of spinal manipulation from a chiropractor. Participants were treated three times per week for 6 weeks. At sessions when manipulation was not assigned, the patients received a focused light massage control. Covariate-adjusted linear dose effects and comparisons with the no-manipulation control group were evaluated at 6, 12, 18, 24, 39, and 52 weeks.
For the primary outcomes, mean pain and disability improvements in the manipulation groups were 20 points by 12 weeks, an effect that was sustained to 52 weeks. Linear dose-response effects were small, reaching about two points per 6 manipulation sessions at 12 and 52 weeks for both variables. At 12 weeks, the greatest differences compared to the no-manipulation controls were found for 12 sessions (8.6 pain and 7.6 disability points); at 24 weeks, differences were negligible; and at 52 weeks, the greatest group differences were seen for 18 visits (5.9 pain and 8.8 disability points).
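To make the size of that linear dose effect concrete, the slope can be recovered with an ordinary least-squares fit; the four data points below are invented to match the reported gradient of roughly two points per 6 sessions and are not taken from the study:

```python
# Illustrative only: invented dose-response data chosen to match the
# reported gradient of about two points of improvement per 6 sessions.
doses = [0, 6, 12, 18]              # manipulation sessions assigned
improvement = [0.0, 2.0, 4.0, 6.0]  # hypothetical pain-score improvements

n = len(doses)
mean_x = sum(doses) / n
mean_y = sum(improvement) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(doses, improvement))
         / sum((x - mean_x) ** 2 for x in doses))

print(round(slope * 6, 2))  # points of improvement per 6 sessions
```

A slope of a third of a point per session underlines the problem: with effects this small, any dose-response signal is easily swamped by noise.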
The authors concluded that the number of spinal manipulation visits had modest effects on cLBP outcomes above those of 18 hands-on visits to a chiropractor. Overall, 12 visits yielded the most favorable results but was not well distinguished from other dose levels.
This study is interesting because it confirms that the effects of chiropractic spinal manipulation as a treatment for cLBP are tiny and probably not clinically relevant. And even these tiny effects might not be due to the treatment per se but could be caused by residual confounding and bias.
As for the optimal dose, the authors suggest that, on average, 18 sessions might be the best. But again, we have to be clear that the dose-response effects were small and of doubtful clinical relevance. Since the therapeutic effects are tiny, it is obviously difficult to establish a dose-response relationship.
In view of the cost of chiropractic spinal manipulation and the uncertainty about its safety, I would probably not rate this approach as the treatment of choice but would instead point to the current Cochrane review, which concludes that “high quality evidence suggests that there is no clinically relevant difference between spinal manipulation and other interventions for reducing pain and improving function in patients with chronic low-back pain”. Personally, I think it is more prudent to recommend exercise, back school, massage or perhaps even yoga to cLBP-sufferers.
Some sceptics are convinced that, in alternative medicine, there is no evidence. This assumption is wrong, I am afraid, and statements of this nature can actually play into the hands of apologists of bogus treatments: they can then easily demonstrate the sceptics to be mistaken or “biased”, as they would probably say. The truth is that there is plenty of evidence – and lots of it is positive, at least at first glance.
Alternative medicine researchers have been very industrious during the last two decades to build up a sizable body of ‘evidence’. Consequently, one often finds data even for the most bizarre and implausible treatments. Take, for instance, the claim that homeopathy is an effective treatment for cancer. Those who promote this assumption have no difficulties in locating some weird in-vitro study that seems to support their opinion. When sceptics subsequently counter that in-vitro experiments tell us nothing about the clinical situation, apologists quickly unearth what they consider to be sound clinical evidence.
An example is this prospective observational 2011 study of cancer patients from two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n = 259), and one cohort with conventionally treated cancer patients (CG; n = 380). Its main outcome measures were the change in quality of life after 3 months and after one year, as well as impairment by fatigue, anxiety or depression. The results of this study show significant improvements in most of these endpoints, and the authors concluded that we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment.
Another, in some ways even better example is this 2005 observational study of 6544 consecutive patients from the Bristol Homeopathic Hospital. Every patient attending the hospital outpatient unit for a follow-up appointment was included, commencing with their first follow-up attendance. Of these patients 70.7% (n = 4627) reported positive health changes, with 50.7% (n = 3318) recording their improvement as better or much better. The authors concluded that homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases.
The principle that is being followed here is simple:
- believers in a bogus therapy conduct a clinical trial which is designed to generate an apparently positive finding;
- the fact that the study cannot tell us anything about cause and effect is cleverly hidden or belittled;
- they publish their findings in one of the many journals that specialise in this sort of nonsense;
- they make sure that advocates across the world learn about their results;
- the community of apologists of this treatment picks up the information without the slightest critical analysis;
- the researchers conduct more and more of such pseudo-research;
- nobody attempts to do some real science: the believers do not truly want to falsify their hypotheses, and the real scientists find it unreasonable to conduct research on utterly implausible interventions;
- thus the body of false or misleading ‘evidence’ grows and grows;
- proponents start publishing systematic reviews and meta-analyses of their studies which are devoid of critical input;
- too few critics point out that these reviews are fatally flawed – ‘rubbish in, rubbish out’!
- eventually politicians, journalists, health care professionals and other people who did not necessarily start out as believers in the bogus therapy are convinced that the body of evidence is impressive and justifies implementation;
- important health care decisions are thus based on data which are false and misleading.
So, what can be done to prevent such pseudo-evidence from being mistaken for solid proof which might eventually mislead many into believing that bogus treatments are based on reasonably sound data? I think the following measures would be helpful:
- authors should abstain from publishing over-enthusiastic conclusions which can all too easily be misinterpreted (given that the authors are believers in the therapy, this is not a realistic option);
- editors might consider rejecting studies which contribute next to nothing to our current knowledge (given that these studies are usually published in journals that are in the business of promoting alternative medicine at any cost, this option is also not realistic);
- if researchers report highly preliminary findings, there should be an obligation to do further studies in order to confirm or refute the initial results (not realistic either, I am afraid);
- in case this does not happen, editors should consider retracting the paper reporting unconfirmed preliminary findings (utterly unrealistic).
What then can REALISTICALLY be done? I wish I knew the answer! All I can think of is that sceptics should educate the rest of the population to think and analyse such ‘evidence’ critically… but how realistic is that?
According to its authors, this RCT was aimed at investigating the 1) specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression. In particular the second research question is intriguing, I think – so let’s have a closer look at this trial.
The study was designed as a randomized, partially double-blind, placebo-controlled, four-armed, 2×2 factorial trial with a 6-week study duration. A total of 44 patients were randomized (2:1:2:1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment and was thus underpowered for the pre-planned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance. The mean difference in the Hamilton-D after 6 weeks was 2.0 (95% CI -1.2 to 5.2) for Q-potencies vs. placebo, and -3.1 (95% CI -5.9 to -0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant differences between homeopathic Q-potencies and placebo or between homeopathic and conventional case taking were observed. The frequency of adverse events was comparable for all groups.
The conclusions were remarkable: although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting.
Alright, the authors encountered problems in recruiting enough patients and they therefore decided to stop the trial early. This sort of thing happens. Most researchers would then not publish any data at all. This team, however, did publish a report, and the decision to do so might be perfectly fine: other investigators might learn from the problems which led to early termination of the study.
But why do they conclude that the results were INCONCLUSIVE? I think the results were not inconclusive but non-existent; there were no results to report other than those related to the recruitment problems. And even if one insists on presenting outcome data as an exploratory analysis, one cannot honestly say they were INCONCLUSIVE; all one might state in this case is that the results failed to show an effect of the remedy or the consultation. This is far less favourable for homeopathy than stating the results were INCONCLUSIVE.
And why on earth do the authors conclude “we cannot recommend undertaking a further trial addressing this question in a similar setting”? This does not make the slightest sense to me. If the trialists encountered recruitment problems, others might find ways of overcoming them. The research question asking whether the effects of an extensive homeopathic case taking differ from those of a shorter conventional one seems important. If answered accurately, it could disentangle much of the confusion that surrounds clinical trials of homeopathy.
I have repeatedly commented on the odd conclusions drawn by proponents of alternative medicine on the basis of data that did not quite fulfil their expectations, and I often ask myself at what point this ‘prettification’ of the results via false positive conclusions crosses the line to scientific misconduct. My theory is that these conclusions appear odd to those capable of critical analysis because the authors bend over backwards in order to conclude more positively than the data would seem to permit. If we see it this way, such conclusions might even prove useful as a fairly sensitive ‘bullshit-detector’.
We have probably all fallen into the trap of thinking that something which has stood the ‘test of time’, i.e. something that has been used for centuries with apparent success, must be ok. In alternative medicine, this belief is extremely widespread, and one could argue that the entire sector is built on it. Influential proponents of ‘traditional’ medicine like Prince Charles do their best to strengthen this assumption. Sadly, however, it is easily disclosed as a classical fallacy: things that have stood the ‘test of time’ might work, of course, but the ‘test of time’ is never a proof of anything.
A recent study brought this message home loud and clear. This trial tested the efficacy of Rhodiola crenulata (R. crenulata), a traditional remedy which has been used widely in the Himalayan areas and in Tibet to prevent acute mountain sickness. As no scientific studies of this traditional treatment existed, the researchers conducted a double-blind, placebo-controlled crossover RCT to test its efficacy in acute mountain sickness prevention.
Healthy adult volunteers were randomized to two treatment sequences, receiving either 800 mg R. crenulata extract or placebo daily for 7 days before ascent and two days during mountaineering. After a three-month wash-out period, they were crossed over to the alternate treatment. On each occasion, the participants ascended rapidly from 250 m to 3421 m. The primary outcome measure was the incidence of acute mountain sickness with headache and at least one of the symptoms of nausea or vomiting, fatigue, dizziness, or difficulty sleeping.
One hundred and two participants completed the trial. No significant differences in the incidence of acute mountain sickness were found between R. crenulata extract and placebo groups. If anything, the incidence of severe acute mountain sickness with Rhodiola extract was slightly higher compared to the one with placebo: 35.3% vs. 29.4%.
R. crenulata extract was not effective in reducing the incidence or severity of acute mountain sickness as compared to placebo.
Similar examples could be found by the dozen. They demonstrate very clearly that the notion of the ‘test of time’ is erroneous: a treatment which has a long history of usage is not necessarily effective (or safe); not only that, it might even be dangerous. The true value of a therapy cannot be judged by experience; to determine it, we need rigorous clinical trials. Acute mountain sickness is a potentially life-threatening condition for which there are reasonably effective treatments. If people relied on the ‘ancient wisdom’ instead of using a therapy that actually works, they might pay for their error with their lives. The sooner alternative medicine proponents realise that, the better.
Acupressure is a treatment-variation of acupuncture; instead of sticking needles into the skin, pressure is applied over ‘acupuncture points’ which is supposed to provide a stimulus similar to needling. Therefore the effects of both treatments should theoretically be similar.
Acupressure could have several advantages over acupuncture:
- it can be used for self-treatment
- it is suitable for people with needle-phobia
- it is painless
- it is not invasive
- it carries fewer risks
- it could be cheaper
But is acupressure really effective? What do the trial data tell us? Our own systematic review concluded that the effectiveness of acupressure is currently not well documented for any condition. But now there is a new study which might change this negative verdict.
The primary objective of this 3-armed RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care alone in the management of chemotherapy-induced nausea. 500 patients from outpatient chemotherapy clinics in three regions in the UK involving 14 different cancer units/centres were randomised to the wristband arm, the sham wristband arm and the standard care only arm. Participants were chemotherapy-naive cancer patients receiving chemotherapy of low, moderate and high emetogenic risk. The experimental group were given acupressure wristbands pressing the P6 point (anterior surface of the forearm). The Rhodes Index for Nausea/Vomiting, the Multinational Association of Supportive Care in Cancer (MASCC) Antiemesis Tool and the Functional Assessment of Cancer Therapy General (FACT-G) served as outcome measures. At baseline, participants completed measures of anxiety/depression, nausea/vomiting expectation and expectations from using the wristbands.
Data were available for 361 participants for the primary outcome. The primary outcome analysis (nausea in cycle 1) revealed no statistically significant differences between the three arms. The median nausea experience in patients using wristbands (both real and sham ones) was somewhat lower than that in the anti-emetics only group (median nausea experience scores for the four cycles: standard care arm 1.43, 1.71, 1.14, 1.14; sham acupressure arm 0.57, 0.71, 0.71, 0.43; acupressure arm 1.00, 0.93, 0.43, 0). Women responded more favourably to the use of sham acupressure wristbands than men (odds ratio 0.35 for men and 2.02 for women in the sham acupressure group; 1.27 for men and 1.17 for women in the acupressure group). No significant differences were detected in relation to vomiting outcomes, anxiety and quality of life. Some transient adverse effects were reported, including tightness in the area of the wristbands, feeling uncomfortable when wearing them and minor swelling in the wristband area (n = 6). There were no statistically significant differences in the costs associated with the use of the real acupressure bands.
26 subjects took part in qualitative interviews. Participants perceived the wristbands (both real and sham) as effective and helpful in managing their nausea during chemotherapy.
The authors concluded that there were no statistically significant differences between the three arms in terms of nausea, vomiting and quality of life, although apparent resource use was less in both the real acupressure arm and the sham acupressure arm compared with standard care only; therefore, no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting. However, the study provided encouraging evidence in relation to an improved nausea experience and some indications of possible cost savings to warrant further consideration of acupressure both in practice and in further clinical trials.
I could argue about several of the methodological details of this study. But I resist the temptation in order to focus on just one single point which I find important and which has implications beyond the realm of acupressure.
Why on earth do the authors conclude that no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting? The stated aim of this RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care. The results failed to show significant differences in the primary outcome measures; consequently, the conclusion cannot be “unclear”, it has to be that ACUPRESSURE WRIST BANDS ARE NOT MORE EFFECTIVE THAN SHAM ACUPRESSURE WRIST BANDS AS AN ADJUNCT TO ANTI-EMETIC DRUG TREATMENT (or something to that effect).
As long as RCTs of alternative therapies are run by evangelical believers in the respective therapy, we are bound to regularly encounter this lamentable phenomenon of white-washing negative findings with an inadequate conclusion. In my view, this is not research or science, it is pseudo-research or pseudo-science. And it is much more than a nuisance or a trivial matter; it is a waste of research funds and a waste of patients’ good will, and it has reached a point where people will lose trust in alternative medicine research. Someone should really do a systematic study to identify those research teams that regularly commit such scientific misconduct and ensure that they are cut off from public funding and support.
This post will probably work best, if you have read the previous one describing how the parallel universe of acupuncture research insists on going in circles in order to avoid admitting that their treatment might not be as effective as they pretend. The way they achieve this is fairly simple: they conduct trials that are designed in such a way that they cannot possibly produce a negative result.
A brand-new investigation which was recently vociferously touted via press releases etc. as a major advance in proving the effectiveness of acupuncture is an excellent case in point. According to its authors, the aim of this study was to evaluate acupuncture versus usual care and counselling versus usual care for patients who continue to experience depression in primary care. This sounds alright, but wait!
755 patients with depression were randomised to one of three arms: 1) acupuncture, 2) counselling, or 3) usual care alone. The primary outcome was the difference in mean Patient Health Questionnaire (PHQ-9) scores at 3 months, with secondary analyses over 12 months of follow-up. Analysis was by intention-to-treat. PHQ-9 data were available for 614 patients at 3 months and 572 patients at 12 months. Patients attended a mean of 10 sessions for acupuncture and 9 sessions for counselling. Compared to usual care, there was a statistically significant reduction in mean PHQ-9 depression scores at 3 and 12 months for both acupuncture and counselling.
From this, the authors conclude that both interventions were associated with significantly reduced depression at 3 months when compared to usual care alone.
Acupuncture for depression? Really? Our own systematic review with co-authors who are the most ardent apologists of acupuncture I have come across showed that the evidence is inconsistent on whether manual acupuncture is superior to sham… Therefore, I thought it might be a good idea to have a closer look at this new study.
One needs to search this article very closely indeed to find out that the authors did not actually evaluate acupuncture versus usual care and counselling versus usual care at all, and that comparisons were not made between acupuncture, counselling, and usual care (hints like the use of the word “alone” are all we get to guess that the authors’ text is outrageously misleading). Not even the methods section informs us what really happened in this trial. You find this hard to believe? Here is the unabbreviated part of the article that describes the interventions applied:
Patients allocated to the acupuncture and counselling groups were offered up to 12 sessions usually on a weekly basis. Participating acupuncturists were registered with the British Acupuncture Council with at least 3 years post-qualification experience. An acupuncture treatment protocol was developed and subsequently refined in consultation with participating acupuncturists. It allowed for customised treatments within a standardised theory-driven framework. Counselling was provided by members of the British Association for Counselling and Psychotherapy who were accredited or were eligible for accreditation having completed 400 supervised hours post-qualification. A manualised protocol, using a humanistic approach, was based on competences independently developed for Skills for Health. Practitioners recorded in logbooks the number and length of sessions, treatment provided, and adverse events. Further details of the two interventions are presented in Tables S2 and S3. Usual care, both NHS and private, was available according to need and monitored for all patients in all three groups for the purposes of comparison.
It is only in the results tables that we can determine what treatments were actually given; and these were:
1) Acupuncture PLUS usual care (i.e. medication)
2) Counselling PLUS usual care
3) Usual care
It’s almost a ‘no-brainer’ that, if you compare A+B to B (or, in this three-armed study, A+B vs C+B vs B), you find that the former is more effective than the latter – unless A is a negative, of course. As acupuncture has significant placebo-effects, it never can be a negative, and thus this trial is an entirely foregone conclusion. As, in alternative medicine, one seems to need experimental proof even for ‘no-brainers’, we demonstrated some time ago that this common sense theory is correct by conducting a systematic review of all acupuncture trials with such a design. We concluded that the ‘A + B versus B’ design is prone to false positive results…What makes this whole thing even worse is the fact that I once presented our review in a lecture where the lead author of the new trial was in the audience; so there can be no excuse of not being aware of the ‘no-brainer’.
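The bias of the ‘A+B versus B’ design can be illustrated with a minimal simulation (a hypothetical sketch in Python; the parameters are illustrative and not taken from any actual trial): even if the add-on treatment A contributes nothing beyond a modest placebo effect, nearly every trial of reasonable size will favour the A+B arm.

```python
import random
import statistics

random.seed(0)

def simulate_trial(n_per_arm=100, placebo_effect=0.3):
    """One 'A+B vs B' trial: both arms get usual care (B); the A+B arm
    additionally receives a purely non-specific (placebo) boost.
    Returns True if the A+B arm shows the larger mean improvement."""
    b_arm = [random.gauss(1.0, 1.0) for _ in range(n_per_arm)]
    ab_arm = [random.gauss(1.0 + placebo_effect, 1.0) for _ in range(n_per_arm)]
    return statistics.mean(ab_arm) > statistics.mean(b_arm)

# With no specific effect at all, the A+B arm still 'wins' almost always.
wins = sum(simulate_trial() for _ in range(1000))
print(f"A+B beat B in {wins} of 1000 simulated trials")
```

The point is not the exact numbers but the structure: because a placebo effect can only add to the comparator arm, the design stacks the deck before a single patient is recruited.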
Some might argue that this is a pragmatic trial, that it would have been unethical to not give anti-depressants to depressed patients and that therefore it was not possible to design this study differently. However, none of these arguments are convincing, if you analyse them closely (I might leave that to the comment section, if there is interest in such aspects). At the very minimum, the authors should have explained in full detail what interventions were given; and that means disclosing these essentials even in the abstract (and press release) - the part of the publication that is most widely read and quoted.
It is arguably unethical to ask patients’ co-operation, use research funds etc. for a study, the results of which were known even before the first patient had been recruited. And it is surely dishonest to hide the true nature of the design so very sneakily in the final report.
In my view, this trial begs at least 5 questions:
1) How on earth did it pass the peer review process of one of the most highly reputed medical journals?
2) How did the protocol get ethics approval?
3) How did it get funding?
4) Does the scientific community really allow itself to be fooled by such pseudo-research?
5) What do I do to not get depressed by studies of acupuncture for depression?
It was 20 years ago today that I started my job as ’Professor of Complementary Medicine’ at the University of Exeter and became a full-time researcher of all matters related to alternative medicine. One issue that was discussed endlessly during these early days was the question whether alternative medicine can be investigated scientifically. There were many vociferous proponents of the view that it was too subtle, too individualised, too special for that and that it defied science in principle. Alternative medicine, they claimed, needed an alternative to science to be validated. I spent my time arguing the opposite, of course, and today there finally seems to be a consensus that alternative medicine can and should be submitted to scientific tests much like any other branch of health care.
Looking back at those debates, I think it is rather obvious why apologists of alternative medicine were so vehement about opposing scientific investigations: they suspected, perhaps even knew, that the results of such research would be mostly negative. Once the anti-scientists saw that they were fighting a lost battle, they changed their tune and adopted science – well sort of: they became pseudo-scientists (‘if you cannot beat them, join them’). Their aim was to prevent disaster, namely the documentation of alternative medicine’s uselessness by scientists. Meanwhile many of these ‘anti-scientists turned pseudo-scientists’ have made rather surprising careers out of their cunning role-change; professorships at respectable universities have mushroomed. Yes, pseudo-scientists have splendid prospects these days in the realm of alternative medicine.
The term ’pseudo-scientist’ as I understand it describes a person who thinks he/she knows the truth about his/her subject well before he/she has done the actual research. A pseudo-scientist is keen to understand the rules of science in order to corrupt science; he/she aims at using the tools of science not to test his/her assumptions and hypotheses, but to prove that his/her preconceived ideas were correct.
So, how does one become a top pseudo-scientist? During the last 20 years, I have observed some of the careers with interest and think I know how it is done. Here are nine lessons which, if followed rigorously, will lead to success (… oh yes, in case I again have someone thick enough to complain about me misleading my readers: THIS POST IS SLIGHTLY TONGUE IN CHEEK).
- Throw yourself into qualitative research. For instance, focus groups are a safe bet. This type of pseudo-research is not really difficult to do: you assemble about 5–10 people, let them express their opinions, record them, extract from the diversity of views what you recognise as your own opinion and call it a ‘common theme’, write the whole thing up, and - BINGO! – you have a publication. The beauty of this approach is manifold: 1) you can repeat this exercise ad nauseam until your publication list is of respectable length; there are plenty of alternative medicine journals that will hurry to publish your pseudo-research; 2) you can manipulate your findings at will, for instance, by selecting your sample (if you recruit people outside a health food shop, for instance, and direct your group wisely, you will find everything alternative medicine journals love to print); 3) you will never produce a paper that displeases the likes of Prince Charles (this is more important than you may think: even pseudo-science needs a sponsor [or would that be a pseudo-sponsor?]).
- Conduct surveys. These are very popular and highly respected/publishable projects in alternative medicine – and they are almost as quick and easy as focus groups. Do not get deterred by the fact that thousands of very similar investigations are already available. If, for instance, there already is one describing the alternative medicine usage by leg-amputated policemen in North Devon, and you nevertheless feel the urge to go into this area, you can safely follow your instinct: do a survey of leg-amputated policemen in North Devon with a medical history of diabetes. There are no limits, and as long as you conclude that your participants used a lot of alternative medicine, were very satisfied with it, did not experience any adverse effects, thought it was value for money, and would recommend it to their neighbour, you have secured another publication in an alternative medicine journal.
- If, for some reason, this should not appeal to you, how about taking a sociological, anthropological or psychological approach? How about studying, for example, the differences in worldviews, the different belief systems, the different ways of knowing, the different concepts about illness, the different expectations, the unique spiritual dimensions, the amazing views on holism – all in different cultures, settings or countries? Invariably, you will, of course, conclude that one truth is at least as good as the next. This will make you popular with all the post-modernists who use alternative medicine as a playground for getting a few publications out. This approach will allow you to travel extensively and generally have a good time. Your papers might not win you a Nobel prize, but one cannot have everything.
- It could well be that, at one stage, your boss has a serious talk with you demanding that you start doing what (in his narrow mind) constitutes ’real science’. He might be keen to get some brownie-points at the next RAE and could thus want you to actually test alternative treatments in terms of their safety and efficacy. Do not despair! Even then, there are plenty of possibilities to remain true to your pseudo-scientific principles. By now you are good at running surveys, and you could, for instance, take up your boss’ suggestion of studying the safety of your favourite alternative medicine with a survey of its users. You simply evaluate their experiences and opinions regarding adverse effects. But be careful, you are on somewhat thinner ice here; you don’t want to upset anyone by generating alarming findings. Make sure your sample is small enough for a false negative result, and that all participants are well-pleased with their alternative medicine. This might be merely a question of selecting your patients cleverly. The main thing is that your conclusion is positive. If you want to go the extra pseudo-scientific mile, mention in the discussion of your paper that your participants all felt that conventional drugs were very harmful.
- If your boss insists you tackle the daunting issue of therapeutic efficacy, there is no reason to give up pseudo-science either. You can always find patients who happened to have recovered spectacularly well from a life-threatening disease after receiving your favourite form of alternative medicine. Once you have identified such a person, you write up her experience in much detail and call it a ‘case report’. It requires a little skill to brush over the fact that the patient also had lots of conventional treatments, or that her diagnosis was assumed but never properly verified. As a pseudo-scientist, you will have to learn how to discreetly make such irritating details vanish so that, in the final paper, they are no longer recognisable. Once you are familiar with this methodology, you can try to find a couple more such cases and publish them as a ‘best case series’ – I can guarantee that you will be all other pseudo-scientists’ hero!
- Your boss might point out, after you have published half a dozen such articles, that single cases are not really very conclusive. The antidote to this argument is simple: you do a large case series along the same lines. Here you can even show off your excellent statistical skills by calculating the statistical significance of the difference between the severity of the condition before the treatment and the one after it. As long as you show marked improvements, ignore all the many other factors involved in the outcome and conclude that these changes are undeniably the result of the treatment, you will be able to publish your paper without problems.
- As your boss seems to be obsessed with the RAE and all that, he might one day insist you conduct what he narrow-mindedly calls a ‘proper’ study; in other words, you might be forced to bite the bullet and learn how to plan and run an RCT. As your particular alternative therapy is not really effective, this could lead to serious embarrassment in the form of a negative result, something that must be avoided at all cost. I therefore recommend you join for a few months a research group that has a proven track record of doing RCTs of utterly useless treatments without ever failing to conclude that they are highly effective. There are several of those units both in the UK and elsewhere, and their expertise is remarkable. They will teach you how to incorporate all the right design features into your study without there being the slightest risk of generating a negative result. A particularly popular solution is to conduct what they call a ‘pragmatic’ trial; I suggest you focus on this splendid innovation that never fails to produce cheerfully positive findings.
- It is hardly possible that this strategy fails – but once every blue moon, all precautions turn out to be in vain, and even the most cunningly designed study of your bogus therapy might deliver a negative result. This is a challenge to any pseudo-scientist, but you can master it, provided you don’t lose your head. In such a rare case, I recommend running as many different statistical tests as you can find; chances are that one of them will nevertheless produce something vaguely positive. If even this method fails (and it hardly ever does), you can always home in on the fact that, in your efficacy study of your bogus treatment, not a single patient died. Who would be able to doubt that this is a positive outcome? Stress it clearly, select it as the main feature of your conclusions, and thus make the more disappointing findings disappear.
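The ‘run many tests’ trick exploits a well-known statistical fact: with 20 independent outcome measures and no real effect anywhere, the chance of at least one spuriously ‘significant’ result is roughly 1 − 0.95^20 ≈ 64%. A hypothetical sketch (all names and parameters are my own illustration):

```python
import random
import statistics

random.seed(1)

def one_null_test(n_per_arm=50):
    """A single comparison on pure noise: both arms are drawn from the
    same distribution, so any 'significant' result is a false positive.
    Uses a simple z-approximation: 'significant' if |z| > 1.96."""
    a = [random.gauss(0, 1) for _ in range(n_per_arm)]
    b = [random.gauss(0, 1) for _ in range(n_per_arm)]
    se = (statistics.variance(a) / n_per_arm +
          statistics.variance(b) / n_per_arm) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96

def trial_with_many_outcomes(n_tests=20):
    """One null trial analysed with 20 independent outcome measures;
    returns True if at least one outcome comes out 'significant'."""
    return any(one_null_test() for _ in range(n_tests))

# Across simulated null trials, roughly two thirds yield at least one
# 'positive' finding purely by chance.
hits = sum(trial_with_many_outcomes() for _ in range(200))
print(f"{hits} of 200 null trials produced at least one 'positive' result")
```

This is exactly why unplanned multiple testing without correction is not a salvage operation but a false-positive generator.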
- Now that you are a fully-fledged pseudo-scientist who has produced one misleading or false positive result after the next, you may want a ‘proper’ confirmatory study of your pet-therapy. For this purpose, run the same RCT over again, and again, and again. Eventually you want a meta-analysis of all RCTs ever published. As you are the only person who ever conducted studies on the bogus treatment in question, this should be quite easy: you pool the data of all your trials and, Bob’s your uncle: a nice little summary of the totality of the data that shows beyond doubt that your therapy works. Now even your narrow-minded boss will be impressed.
These nine lessons can and should be modified to suit your particular situation, of course. Nothing here is written in stone. The one skill any pseudo-scientist must have is flexibility.
Every now and then, some smart arse is bound to attack you and claim that this is not rigorous science, that independent replications are required, that you are biased etc. etc. blah, blah, blah. Do not panic: either you ignore that person completely, or (in case there is a whole gang of nasty sceptics after you) you might just point out that:
- your work follows a new paradigm; the one of your critics is now obsolete,
- your detractors fail to understand the complexity of the subject and their comments merely reveal their ridiculous incompetence,
- your critics are less than impartial, in fact, most are bought by BIG PHARMA,
- you have a paper ‘in press’ that fully deals with all the criticism and explains how inappropriate it really is.
In closing, allow me a final word about publishing. There are hundreds of alternative medicine journals out there to choose from. They will love your papers because they are uncompromisingly promotional. These journals all have one thing in common: they are run by apologists of alternative medicine who abhor reading anything negative about alternative medicine. Consequently, hardly a critical word about alternative medicine will ever appear in these journals. If you want to make double sure that your paper does not get criticised during the peer-review process (this would require a revision, and you don’t need extra work of that nature), you can suggest a friend for peer-reviewing it. In turn, you can offer to do the same for him/her the next time he/she has an article to submit. This is how pseudo-scientists make sure that the body of pseudo-evidence for their pseudo-treatments is growing at a steady pace.
I have said it so often that I hesitate to state it again: an uncritical researcher is a contradiction in terms. This begs the question as to how critical the researchers of alternative medicine truly are. In my experience, most tend to be uncritical in the extreme. But how would one go about providing evidence for this view? In a previous blog-post, I have suggested a fairly simple method: to calculate an index of negative conclusions drawn in the articles published by a specific researcher. This is what I wrote:
If we calculated the percentage of a researcher’s papers arriving at positive conclusions and divided this by the percentage of his papers drawing negative conclusions, we might have a useful measure. A realistic example might be the case of a clinical researcher who has published a total of 100 original articles. If 50% had positive and 50% negative conclusions about the efficacy of the therapy tested, his trustworthiness index (TI) would be 1.
Depending on what area of clinical medicine this person is working in, 1 might be a figure that is just about acceptable in terms of the trustworthiness of the author. If the TI goes beyond 1, we might get concerned; if it reaches 4 or more, we should get worried.
An example would be a researcher who has published 100 papers of which 80 are positive and 20 arrive at negative conclusions. His TI would consequently amount to 4. Most of us equipped with a healthy scepticism would consider this figure highly suspect.
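The index described above is simple enough to express directly. A minimal sketch (my own illustration of the arithmetic, using the figures from the two examples in the text):

```python
def trustworthiness_index(n_positive, n_negative):
    """Ratio of positive to negative conclusions in a researcher's
    published output; higher values suggest a suspicious scarcity of
    negative findings (the post's tongue-in-cheek 'TI')."""
    if n_negative == 0:
        # Only positive conclusions ever: maximally suspect.
        return float("inf")
    return n_positive / n_negative

print(trustworthiness_index(50, 50))  # 1.0 – just about acceptable
print(trustworthiness_index(80, 20))  # 4.0 – highly suspect
```

Note that the edge case of a researcher with no negative conclusions at all, which the following paragraphs suggest is common in alternative medicine, sends the index off the scale.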
So how would alternative medicine researchers do, if we applied this method for assessing their trustworthiness? Very poorly, I fear - but that is speculation! Let’s see some data. Let’s look at one prominent alternative medicine researcher and see. As an example, I have chosen Professor George Lewith (because his name is unique, which avoids confusion with other researchers), did a quick Medline search to identify the abstracts of his articles on alternative medicine, and extracted the crucial sentence from the conclusions of the most recent ones:
- The study design of registered TCM trials has improved in estimating sample size, use of blinding and placebos
- Real treatment was significantly different from sham demonstrating a moderate specific effect of PKP
- These findings highlight the importance of helping patients develop coherent illness representations about their LBP before trying to engage them in treatment-decisions, uptake, or adherence
- Existing theories of how context influences health outcomes could be expanded to better reflect the psychological components identified here, such as hope, desire, optimism and open-mindedness
- …mainstream science has moved on from the intellectual sterility and ad hominem attacks that characterise the sceptics’ movement
- Trustworthy and appropriate information about practitioners (e.g. from professional regulatory bodies) could empower patients to make confident choices when seeking individual complementary practitioners to consult
- Comparative effectiveness research is an emerging field and its development and impact must be reflected in future research strategies within complementary and integrative medicine
- The I-CAM-Q has low face validity and low acceptability, and is likely to produce biased estimates of CAM use if applied in England, Romania, Italy, The Netherlands or Spain
- Our main finding was of beta power decreases in primary somatosensory cortex and SFG, which opens up a line of future investigation regarding whether this contributes toward an underlying mechanism of acupuncture.
- …physiotherapy was appraised more negatively in the National Health Service than the private sector but osteopathy was appraised similarly within both health-care sectors
This is a bit tedious, I agree, so I stop after just 10 articles. But even this short list does clearly indicate the absence of negative conclusions. In fact, I see none at all – arguably a few neutral ones, but nothing negative. All is positive in the realm of alternative medicine research then? In case you don’t agree with that assumption, you might prefer to postulate that this particular alternative medicine researcher somehow avoids negative conclusions. And if you believe that, you are not far from considering that we are being misinformed.
Alternative medicine is not really a field where one might reasonably expect that rigorous research generates nothing but positive results; even to expect 50 or 40% of such findings would be quite optimistic. It follows, I think, that if researchers only find positives, something must be amiss. I have recently demonstrated that the most active research homeopathic group (Professor Witt from the Charite in Berlin) has published nothing but positive findings; even if the results were not quite positive, they managed to formulate a positive conclusion. Does anyone doubt that this amounts to misinformation?
So, I have produced at least some tentative evidence for my suspicion that some alternative medicine researchers misinform us. But how precisely do they do it? I can think of several methods for avoiding publishing a negative result or conclusion, and I fear that all of them are popular with alternative medicine researchers:
- design the study in such a way that it cannot possibly give a negative result
- manipulate the data
- be inventive when it comes to statistics
- home in on the one positive aspect your generally negative data might show
- do not write up your study; that way nobody will ever see your negative results
And why do they do it? My impression is that they use science not for testing their interventions but for proving them. Critical thinking is a skill that alternative medicine researchers do not seem to cultivate. Often they manage to hide this fact quite cleverly and for good reasons: no respectable funding body would give money for such an abuse of science! Nevertheless, the end-result is plain to see: no negative conclusions are being published!
There are at least two further implications of the fact that alternative medicine researchers misinform the public. The first concerns the academic centres in which these researchers are organised. If a prestigious university accommodates a research unit of alternative medicine, it gives considerable credence to alternative medicine itself. If the research that comes out of the unit is promotional pseudo-science, the result, in my view, amounts to misleading the public about the value of alternative medicine.
The second implication relates to the journals in which researchers of alternative medicine prefer to publish their articles. Today, there are several hundred journals specialised in alternative medicine. We have shown over and over again that these journals publish next to nothing in terms of negative results. In my view, this too amounts to systematic misinformation.
My conclusion from all this is depressing: the type of research that currently dominates alternative medicine is, in fact, pseudo-research aimed not at rigorously falsifying hypotheses but at promoting bogus treatments. In other words, alternative medicine researchers crucially contribute to the ‘sea of misinformation’ in this area.