Irritable bowel syndrome (IBS) is common and often difficult to treat – unless, of course, you consult a homeopath. Here is just one of the thousands of quotes from homeopaths available on the Internet: “Homeopathic medicine can reduce Irritable Bowel Syndrome (IBS) symptoms by lowering food sensitivities and allergies. Homeopathy treats the patient as a whole and does not simply focus on the disease. Careful attention is given to the minute details about the presenting complaints, including the severity of diarrhea, constipation, pain, cramps, mucus in the stools, nausea, heartburn, emotional triggers and conventional laboratory findings. In addition, the patient’s eating habits, food preferences, thermal attributes and sleep patterns are noted. The patient’s family history and diseases, along with the patient’s emotions are discussed. Then the homeopathic practitioner will select the remedy that most closely matches the symptoms.”
Such optimism might be refreshing, but is there any reason for it? Is homeopathy really an effective treatment for IBS? To answer this question, we now have a brand-new Cochrane review. The aim of this review was to assess the effectiveness and safety of homeopathic treatment for treating irritable bowel syndrome (IBS). (This type of statement always makes me a little suspicious; how on earth can anyone truly assess the safety of a treatment by looking at a few studies? This is NOT how one evaluates safety!) The authors conducted extensive literature searches to identify all RCTs, cohort and case-control studies that compared homeopathic treatment with placebo, other control treatments, or usual care in adults with IBS. The primary outcome was global improvement in IBS.
Three RCTs with a total of 213 participants were included. No cohort or case-control studies were identified. Two studies compared homeopathic remedies to placebos for constipation-predominant IBS. One study compared individualised homeopathic treatment to usual care defined as high doses of dicyclomine hydrochloride, faecal bulking agents and a high fibre diet. Due to the low quality of reporting, the risk of bias in all three studies was unclear on most criteria and high for some criteria.
A meta-analysis of two studies with a total of 129 participants with constipation-predominant IBS found a statistically significant difference in global improvement between the homeopathic ‘asafoetida’ and placebo at a short-term follow-up of two weeks. Seventy-three per cent of patients in the homeopathy group improved compared to 45% of placebo patients. There was no statistically significant difference in global improvement between the homeopathic asafoetida plus nux vomica compared to placebo. Sixty-eight per cent of patients in the homeopathy group improved compared to 52% of placebo patients.
The overall quality of the evidence was very low. There was no statistically significant difference between individualised homeopathic treatment and usual care for the outcome “feeling unwell”. None of the studies reported on adverse events (which, by the way, should be seen as a breach of research ethics on the part of the authors of the three primary studies).
The authors concluded that a pooled analysis of two small studies suggests a possible benefit for clinical homeopathy, using the remedy asafoetida, over placebo for people with constipation-predominant IBS. These results should be interpreted with caution due to the low quality of reporting in these trials, high or unknown risk of bias, short-term follow-up, and sparse data. One small study found no statistically significant difference between individualised homeopathy and usual care (defined as high doses of dicyclomine hydrochloride, faecal bulking agents and diet sheets advising a high fibre diet). No conclusions can be drawn from this study due to the low number of participants and the high risk of bias in this trial. In addition, it is likely that usual care has changed since this trial was conducted. Further high quality, adequately powered RCTs are required to assess the efficacy and safety of clinical and individualised homeopathy compared to placebo or usual care.
THIS REVIEW REQUIRES A FEW FURTHER COMMENTS, I THINK
Asafoetida, the remedy used in two of the studies, is a plant native to Pakistan, Iran and Afghanistan. It is used in Ayurvedic herbal medicine to treat colic, intestinal parasites and irritable bowel syndrome. In the ‘homeopathic’ trials, asafoetida was used in relatively low dilutions which still contain molecules of the original substance. It is therefore debatable whether this was really homeopathy or whether it is more akin to herbal medicine - it was certainly not homeopathy with its typical ultra-high dilutions.
Regardless of this detail, the Cochrane review hardly provides sound evidence for homeopathy’s efficacy. On the contrary, my reading of its findings is that the ‘possible benefit’ is NOT real but a false positive result caused by the serious limitations of the original studies. The authors stress that the apparently positive result ‘should be interpreted with caution’; that is certainly correct.
So, if you are a proponent of homeopathy, as the authors of the review seem to be, you will claim that homeopathy offers ‘possible benefits’ for IBS-sufferers. But if you are not convinced of the merits of homeopathy, you might suggest that the evidence is insufficient to recommend homeopathy. I imagine that IBS-sufferers might get as frustrated with such confusion as most scientists will be. Yet there is hope; the answer could be imminent: apparently, a new trial is to report its results within this year.
IS THIS NEW TRIAL GOING TO CONTRIBUTE MEANINGFULLY TO OUR KNOWLEDGE?
It is a three-armed study (same 1st author as in the Cochrane review) which, according to its authors, seeks to explore the effectiveness of individualised homeopathic treatment plus usual care compared to both an attention control plus usual care and usual care alone, for patients with IBS. (Why “explore” and not “determine”, I ask myself.) Patients are randomly selected to be offered, 5 sessions of homeopathic treatment plus usual care, 5 sessions of supportive listening plus usual care or usual care alone. (“To be offered” looks odd to me; does that mean patients are not blinded to the interventions? Yes, indeed it does.) The primary clinical outcome is the IBS Symptom Severity at 26 weeks. Analysis will be by intention to treat and will compare homeopathic treatment with usual care at 26 weeks as the primary analysis, and homeopathic treatment with supportive listening as an additional analysis.
Hold on…the primary analysis “will compare homeopathic treatment with usual care“. Are they pulling my leg? They just told me that patients will be “offered, 5 sessions of homeopathic treatment plus usual care… or usual care alone“.
Oh, I see! We are again dealing with an A+B versus B design, and, on top of that, one without patient- or therapist-blinding. This type of analysis cannot ever produce a negative result, even if the experimental treatment is a pure placebo: placebo + usual care is always more than usual care alone. IBS-patients will certainly experience benefit from having the homeopaths’ time, empathy and compassion – never mind the remedies they get from them. And for the secondary analyses, things do not seem to be much more rigorous either.
Do we really need more trials of this nature? The Cochrane review shows that we currently have three studies which are too flimsy to be interpretable. What difference will a further flimsy trial make in this situation? When will we stop wasting time and money on such useless ’research’? All it can possibly achieve is that apologists of homeopathy will misinterpret the results and suggest that they demonstrate efficacy.
Obviously, I have not seen the data (they have not yet been published) but I think I can nevertheless predict the conclusions of the primary analysis of this trial; they will read something like this: HOMEOPATHY PROVED TO BE SIGNIFICANTLY MORE EFFECTIVE THAN USUAL CARE. I have asked the question before and I do it again: when does this sort of ‘research’ cross the line into the realm of scientific misconduct?
According to its authors, this RCT was aimed at investigating the 1) specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression. In particular the second research question is intriguing, I think – so let’s have a closer look at this trial.
The study was designed as a randomized, partially double-blind, placebo-controlled, four-armed, 2:1:2:1 factorial trial with a 6-week study duration. A total of 44 patients were randomized (2:1:2:1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment, and was thus underpowered for the pre-planned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance. The mean difference for the Hamilton-D after 6 weeks was 2.0 (95% CI -1.2 to 5.2) for Q-potencies vs. placebo, and -3.1 (95% CI -5.9 to -0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant results between homeopathic Q-potencies versus placebo and homeopathic versus conventional case taking were observed. The frequency of adverse events was comparable for all groups.
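For readers who want to check what these confidence intervals imply, a two-sided p-value can be recovered approximately from a reported 95% CI under a normal approximation: the standard error is roughly (upper − lower)/(2 × 1.96), and the test statistic is the estimate divided by that standard error. The following is a minimal sketch of this standard back-calculation (the function name is mine, not from the paper):

```python
from math import erf, sqrt

def p_from_ci(estimate, lower, upper):
    """Approximate two-sided p-value from a point estimate and its 95% CI,
    assuming a normal sampling distribution."""
    se = (upper - lower) / (2 * 1.96)          # back-calculate the standard error
    z = abs(estimate / se)                     # standardised test statistic
    phi = 0.5 * (1 + erf(z / sqrt(2)))         # normal CDF at |z|
    return 2 * (1 - phi)                       # two-sided p-value

# Q-potencies vs. placebo: the CI spans zero, so p > 0.05
print(p_from_ci(2.0, -1.2, 5.2))
# case history I vs. II: the CI excludes zero, so p < 0.05 (nominally)
print(p_from_ci(-3.1, -5.9, -0.2))
```

This confirms the point in the text: the remedy-vs-placebo comparison is clearly non-significant, while the case-taking comparison only just reaches nominal significance, which in an underpowered, prematurely terminated trial means very little.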
The conclusions were remarkable: although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting.
Alright, the authors encountered problems in recruiting enough patients and they therefore decided to stop the trial early. This sort of thing happens. Most researchers would then not publish any data at all. This team, however, did publish a report, and the decision to do so might be perfectly fine: other investigators might learn from the problems which led to early termination of the study.
But why do they conclude that the results were INCONCLUSIVE? I think the results were not inconclusive but non-existent; there were no results to report other than those related to the recruitment problems. And even if one insists on presenting outcome data as an exploratory analysis, one cannot honestly say they were INCONCLUSIVE; all one might state in this case is that the results failed to show an effect of the remedy or the consultation. This is far less favourable for homeopathy than stating the results were INCONCLUSIVE.
And why on earth do the authors conclude “we cannot recommend undertaking a further trial addressing this question in a similar setting”? This does not make the slightest sense to me. If the trialists encountered recruitment problems, others might find ways of overcoming them. The research question asking whether the effects of an extensive homeopathic case taking differ from those of a shorter conventional one seems important. If answered accurately, it could disentangle much of the confusion that surrounds clinical trials of homeopathy.
I have repeatedly commented on the odd conclusions drawn by proponents of alternative medicine on the basis of data that did not quite fulfil their expectations, and I often ask myself at what point this ‘prettification’ of the results via false positive conclusions crosses the line to scientific misconduct. My theory is that these conclusions appear odd to those capable of critical analysis because the authors bend over backwards in order to conclude more positively than the data would seem to permit. If we see it this way, such conclusions might even prove useful as a fairly sensitive ‘bullshit-detector’.
Acupressure is a treatment-variation of acupuncture; instead of sticking needles into the skin, pressure is applied over ‘acupuncture points’ which is supposed to provide a stimulus similar to needling. Therefore the effects of both treatments should theoretically be similar.
Acupressure could have several advantages over acupuncture:
- it can be used for self-treatment
- it is suitable for people with needle-phobia
- it is painless
- it is not invasive
- it has fewer risks
- it could be cheaper
But is acupressure really effective? What do the trial data tell us? Our own systematic review concluded that the effectiveness of acupressure is currently not well documented for any condition. But now there is a new study which might change this negative verdict.
The primary objective of this 3-armed RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care alone in the management of chemotherapy-induced nausea. 500 patients from outpatient chemotherapy clinics in three regions in the UK involving 14 different cancer units/centres were randomised to the wristband arm, the sham wristband arm and the standard care only arm. Participants were chemotherapy-naive cancer patients receiving chemotherapy of low, moderate and high emetogenic risk. The experimental group were given acupressure wristbands pressing the P6 point (anterior surface of the forearm). The Rhodes Index for Nausea/Vomiting, the Multinational Association of Supportive Care in Cancer (MASCC) Antiemesis Tool and the Functional Assessment of Cancer Therapy General (FACT-G) served as outcome measures. At baseline, participants completed measures of anxiety/depression, nausea/vomiting expectation and expectations from using the wristbands.
Data were available for 361 participants for the primary outcome. The primary outcome analysis (nausea in cycle 1) revealed no statistically significant differences between the three arms. The median nausea experience in patients using wristbands (both real and sham ones) was somewhat lower than that in the anti-emetics only group (median nausea experience scores for the four cycles: standard care arm 1.43, 1.71, 1.14, 1.14; sham acupressure arm 0.57, 0.71, 0.71, 0.43; acupressure arm 1.00, 0.93, 0.43, 0). Women responded more favourably to the use of sham acupressure wristbands than men (odds ratio 0.35 for men and 2.02 for women in the sham acupressure group; 1.27 for men and 1.17 for women in the acupressure group). No significant differences were detected in relation to vomiting outcomes, anxiety and quality of life. Some transient adverse effects were reported, including tightness in the area of the wristbands, feeling uncomfortable when wearing them and minor swelling in the wristband area (n = 6). There were no statistically significant differences in the costs associated with the use of real acupressure bands.
26 subjects took part in qualitative interviews. Participants perceived the wristbands (both real and sham) as effective and helpful in managing their nausea during chemotherapy.
The authors concluded that there were no statistically significant differences between the three arms in terms of nausea, vomiting and quality of life, although apparent resource use was less in both the real acupressure arm and the sham acupressure arm compared with standard care only; therefore, no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting. However, the study provided encouraging evidence in relation to an improved nausea experience and some indications of possible cost savings to warrant further consideration of acupressure both in practice and in further clinical trials.
I could argue about several of the methodological details of this study. But I resist the temptation in order to focus on just one single point which I find important and which has implications beyond the realm of acupressure.
Why on earth do the authors conclude that no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting? The stated aim of this RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care. The results failed to show significant differences of the primary outcome measures, consequently the conclusion cannot be “unclear”, it has to be that ACUPRESSURE WRIST BANDS ARE NOT MORE EFFECTIVE THAN SHAM ACUPRESSURE WRIST BANDS AS AN ADJUNCT TO ANTI-EMETIC DRUG TREATMENT (or something to that effect).
As long as RCTs of alternative therapies are run by evangelical believers in the respective therapy, we are bound to regularly encounter this lamentable phenomenon of white-washing negative findings with an inadequate conclusion. In my view, this is not research or science, it is pseudo-research or pseudo-science. And it is much more than a nuisance or a trivial matter; it is a waste of research funds and a waste of patients’ good will that has reached a point where people will lose trust in alternative medicine research. Someone should really do a systematic study to identify those research teams that regularly commit such scientific misconduct and ensure that they are cut off from public funding and support.
This post will probably work best, if you have read the previous one describing how the parallel universe of acupuncture research insists on going in circles in order to avoid admitting that their treatment might not be as effective as they pretend. The way they achieve this is fairly simple: they conduct trials that are designed in such a way that they cannot possibly produce a negative result.
A brand-new investigation which was recently vociferously touted via press releases etc. as a major advance in proving the effectiveness of acupuncture is an excellent case in point. According to its authors, the aim of this study was to evaluate acupuncture versus usual care and counselling versus usual care for patients who continue to experience depression in primary care. This sounds alright, but wait!
755 patients with depression were randomised to one of three arms: 1) acupuncture, 2) counselling, and 3) usual care alone. The primary outcome was the difference in mean Patient Health Questionnaire (PHQ-9) scores at 3 months with secondary analyses over 12 months follow-up. Analysis was by intention-to-treat. PHQ-9 data were available for 614 patients at 3 months and 572 patients at 12 months. Patients attended a mean of 10 sessions for acupuncture and 9 sessions for counselling. Compared to usual care, there was a statistically significant reduction in mean PHQ-9 depression scores at 3 and 12 months for both acupuncture and counselling.
From this, the authors conclude that both interventions were associated with significantly reduced depression at 3 months when compared to usual care alone.
Acupuncture for depression? Really? Our own systematic review with co-authors who are the most ardent apologists of acupuncture I have come across showed that the evidence is inconsistent on whether manual acupuncture is superior to sham… Therefore, I thought it might be a good idea to have a closer look at this new study.
One needs to search this article very closely indeed to find out that the authors did not actually evaluate acupuncture versus usual care and counselling versus usual care at all, and that comparisons were not made between acupuncture, counselling, and usual care (hints like the use of the word “alone” are all we get to guess that the authors’ text is outrageously misleading). Not even the methods section informs us what really happened in this trial. You find this hard to believe? Here is the unabbreviated part of the article that describes the interventions applied:
Patients allocated to the acupuncture and counselling groups were offered up to 12 sessions usually on a weekly basis. Participating acupuncturists were registered with the British Acupuncture Council with at least 3 years post-qualification experience. An acupuncture treatment protocol was developed and subsequently refined in consultation with participating acupuncturists. It allowed for customised treatments within a standardised theory-driven framework. Counselling was provided by members of the British Association for Counselling and Psychotherapy who were accredited or were eligible for accreditation having completed 400 supervised hours post-qualification. A manualised protocol, using a humanistic approach, was based on competences independently developed for Skills for Health. Practitioners recorded in logbooks the number and length of sessions, treatment provided, and adverse events. Further details of the two interventions are presented in Tables S2 and S3. Usual care, both NHS and private, was available according to need and monitored for all patients in all three groups for the purposes of comparison.
It is only in the results tables that we can determine what treatments were actually given; and these were:
1) Acupuncture PLUS usual care (i.e. medication)
2) Counselling PLUS usual care
3) Usual care
It’s almost a ‘no-brainer’ that, if you compare A+B to B (or in this three-armed study A+B vs C+B vs B), you find that the former is more than the latter – unless A is a negative, of course. As acupuncture has significant placebo-effects, it can never be a negative, and thus this trial is an entirely foregone conclusion. As, in alternative medicine, one seems to need experimental proof even for ‘no-brainers’, we have some time ago demonstrated that this common sense theory is correct by conducting a systematic review of all acupuncture trials with such a design. We concluded that the ‘A + B versus B’ design is prone to false positive results…What makes this whole thing even worse is the fact that I once presented our review in a lecture where the lead author of the new trial was in the audience; so there can be no excuse of not being aware of the ‘no-brainer’.
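The arithmetic behind this ‘no-brainer’ is easy to demonstrate with a toy simulation: give the add-on treatment a specific effect of exactly zero but a modest nonspecific (placebo/attention) effect, and the A+B arm still comes out ahead of B alone. This is purely illustrative; all the numbers below are invented for demonstration, not taken from any trial:

```python
import random

random.seed(0)

def outcome(usual_care_effect, nonspecific_effect, noise_sd=1.0):
    # Improvement score = usual care + any nonspecific (placebo/attention)
    # effect + random individual variation.
    return usual_care_effect + nonspecific_effect + random.gauss(0, noise_sd)

n = 200  # hypothetical patients per arm

# Arm B: usual care alone (no extra attention, no placebo response)
b = [outcome(1.0, 0.0) for _ in range(n)]

# Arm A+B: an inert add-on treatment; its specific effect is zero,
# but attention and expectation contribute a nonspecific effect of 0.5
ab = [outcome(1.0, 0.5) for _ in range(n)]

mean_b = sum(b) / n
mean_ab = sum(ab) / n
# The difference is positive despite the add-on having no specific effect.
print(round(mean_ab - mean_b, 2))
```

In an unblinded A+B versus B comparison there is simply no way for the nonspecific component to cancel out, which is why such trials cannot produce a negative result for the add-on treatment.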
Some might argue that this is a pragmatic trial, that it would have been unethical to not give anti-depressants to depressed patients and that therefore it was not possible to design this study differently. However, none of these arguments are convincing, if you analyse them closely (I might leave that to the comment section, if there is interest in such aspects). At the very minimum, the authors should have explained in full detail what interventions were given; and that means disclosing these essentials even in the abstract (and press release) - the part of the publication that is most widely read and quoted.
It is arguably unethical to ask patients’ co-operation, use research funds etc. for a study, the results of which were known even before the first patient had been recruited. And it is surely dishonest to hide the true nature of the design so very sneakily in the final report.
In my view, this trial begs at least 5 questions:
1) How on earth did it pass the peer review process of one of the most highly reputed medical journals?
2) How did the protocol get ethics approval?
3) How did it get funding?
4) Does the scientific community really allow itself to be fooled by such pseudo-research?
5) What do I do to not get depressed by studies of acupuncture for depression?
Many readers of this blog will remember the libel case of the British Chiropractic Association (BCA) against Simon Singh. Simon had disclosed in a Guardian comment that the BCA was happily promoting bogus chiropractic treatments for 6 paediatric conditions, including infant colic. The BCA not only lost the case but the affair almost destroyed this strange organisation and resulted in enormous reputational damage to chiropractors worldwide. In an article entitled AFTER THE STORM, the then-president of the BCA later described the defeat in his own words: “in 2009, events in the UK took a turn which was to consume the British Chiropractic Association (BCA) for two years and force the wider profession to confront key issues that for decades had kept it distanced from its medical counterparts and attracting ridicule from its critics…the BCA began one of the darkest periods in its history; one that was ultimately to cost it financially, reputationally and politically…The GCC itself was in an unprecedented situation. Faced with a 1500% rise in complaints, Investigating Committees were assembled to determine whether there was a case to answer…The events of the past two years have exposed a blind adherence to outdated principles amongst a small but significant minority of the profession. Mindful of the adage that it’s the squeaky wheel that gets the grease, the vocalism of this group has ensured that chiropractic is characterised by its critics as unscientific, unsafe and slightly wacky. Claims that the vertebral subluxation complex is the cause of illness and disease have persisted despite the three UK educational establishments advising the GCC that no evidence of acceptable quality exists to support such claims.”
Only a few years AFTER THE STORM, this story seems to have changed beyond recognition. Harald Walach, who is known to readers of this blog because I reported that he was elected ‘pseudo-scientist of the year’ in 2012, recently published a comment on the proceedings of the European Congress of Integrated Medicine where we find the following intriguing version of the libel case:
Mein Freund und Kollege George Lewith aus Southampton hatte einen Hauptvortrag über seine Überblicksarbeit über chiropraktische Interventionen für kleinkindliche Koliken vorgelegt. Sie ist ausgelöst worden durch die Behauptung, die Singh und Ernst vor einigen Jahren erhoben hatten, dass Chiropraktik gefährlich ist, dass es keine Daten dafür gäbe, dass sie wirksam sei und dass sie gefährliche Nebenwirkungen habe, speziell wenn sie bei Kindern angewendet würde. Die Chiropraktiker hatten den Wissenschaftsjournalisten Singh damals wegen Verleumdung verklagt und recht erhalten. George Lewith hatte dem Gericht die Expertise geliefert und nun seine Analyse auf Kinder ausgedehnt.
Kurz gefasst: Die Intervention wirkt sogar ziemlich stark, etwa eine Standardabweichung war der Effekt groß. Die Kinder schreien kürzer und weniger. Und die Durchforstung der Literatur nach gefährlichen Nebenwirkungen hatte keinen, wortwörtlich: nicht einen, Fall zu Tage gefördert, der von Nebenwirkungen, geschweige denn gefährlichen, berichtet hätte. Die Aufregung war seinerzeit dadurch entstanden, dass eine unqualifizierte Person einer zart gebauten Frau über den Rücken gelaufen ist und ihr dabei das Genick gebrochen hat. Die Presse hatte das ganze dann zu „tödlicher Nebenwirkung chiropraktischer Intervention“ aufgebauscht.
Oh, I almost forgot, you don’t read German? Here is my translation of this revealing text:
“My friend and colleague George Lewith from Southampton gave a keynote lecture on his review of chiropractic interventions for infant colic. This was prompted by the claim, made by Singh and Ernst a few years ago, that chiropractic was dangerous, that no data existed showing its effectiveness, and that it had dangerous side-effects, particularly for children. The chiropractors had sued the science journalist Singh for libel and won the case. George Lewith had provided the expert report for the court and has now extended his analysis to children.
To put it briefly: the intervention is even very effective; the effect-size is about one standard deviation. The children cry less long and more rarely. And the search of the literature for dangerous side-effects resulted in no – literally: not one – case of side-effects, not to mention dangerous ones. The fuss had started back then because an unqualified person had walked over the back of a thin woman and had thus broken her neck. The press had subsequently hyped the whole thing to a “deadly side-effect of a chiropractic intervention”. (I am sorry for the clumsy language but the original is even worse.)
Now, isn’t that remarkable? Not only has the truth about the libel case been turned upside down, but also the evidence on chiropractic as a treatment for infant colic seems mysteriously improved; other reviews which might just be a bit more independent and objective come to the following conclusions:
The literature concerning this topic is surprisingly scarce, of poor quality and lack of convincing conclusions. With the present day data on this topic, it is impossible to say whether this kind of treatment has a significant effect.
And what should we make of all this? I don’t know about you, but I conclude that, for some apologists of alternative medicine, the truth is a rather flexible commodity.
The following is a guest post by Preston H. Long. It is an excerpt from his new book entitled ‘Chiropractic Abuse—A Chiropractor’s Lament’. Preston H. Long is a licensed chiropractor from Arizona. His professional career has spanned nearly 30 years. In addition to treating patients, he has testified at about 200 trials, performed more than 10,000 chiropractic case evaluations, and served as a consultant to several law enforcement agencies. He is also an associate professor at Bryan University, where he teaches in the master’s program in applied health informatics. His new book is one of the very few that provides an inside criticism of chiropractic. It is well worth reading, in my view.
Have you ever consulted a chiropractor? Are you thinking about seeing one? Do you care whether your tax and health-care dollars are spent on worthless treatment? If your answer to any of these questions is yes, there are certain things you should know.
1. Chiropractic theory and practice are not based on the body of knowledge related to health, disease, and health care that has been widely accepted by the scientific community.
Most chiropractors believe that spinal problems, which they call “subluxations,” cause ill health and that fixing them by “adjusting” the spine will promote and restore health. The extent of this belief varies from chiropractor to chiropractor. Some believe that subluxations are the primary cause of ill health; others consider them an underlying cause. Only a small percentage (including me) reject these notions and align their beliefs and practices with those of the science-based medical community. The ramifications and consequences of subluxation theory will be discussed in detail throughout this book.
2. Many chiropractors promise too much.
The most common forms of treatment administered by chiropractors are spinal manipulation and passive physiotherapy measures such as heat, ultrasound, massage, and electrical muscle stimulation. These modalities can be useful in managing certain problems of muscles and bones, but they have little, if any, use against the vast majority of diseases. But chiropractors who believe that “subluxations” cause ill health claim that spinal adjustments promote general health and enable patients to recover from a wide range of diseases. The illustrations below reflect these beliefs. The one to the left is part of a poster that promotes the notion that periodic spinal “adjustments” are a cornerstone of good health. The other is a patient handout that improperly relates “subluxations” to a wide range of ailments that spinal adjustments supposedly can help. Some charts of this type have listed more than 100 diseases and conditions, including allergies, appendicitis, anemia, crossed eyes, deafness, gallbladder problems, hernias, and pneumonia.
A 2008 survey found that exaggeration is common among chiropractic Web sites. The researchers looked at the Web sites of 200 chiropractors and 9 chiropractic associations in Australia, Canada, New Zealand, the United Kingdom, and the United States. Each site was examined for claims suggesting that chiropractic treatment was appropriate for asthma, colic, ear infection/earache/otitis media, neck pain, whiplash, headache/migraine, and lower back pain. The study found that 95% of the surveyed sites made unsubstantiated claims for at least one of these conditions and 38% made unsubstantiated claims for all of them.1 False promises can have dire consequences to the unsuspecting.
3. Our education is vastly inferior to that of medical doctors.
I rarely encountered sick patients in my school clinic. Most of my “patients” were friends, students, and an occasional person who presented to the student clinic for inexpensive chiropractic care. Most had nothing really wrong with them. In order to graduate, chiropractic college students are required to treat a minimum number of people. To reach their number, some resort to paying people (including prostitutes) to visit them at the college’s clinic.2
Students also encounter a very narrow range of conditions, most related to aches and pains. Real medical education involves contact with thousands of patients with a wide variety of problems, including many severe enough to require hospitalization. Most chiropractic students see patients during two clinical years in chiropractic college. Medical students also average two clinical years, but they see many more patients and nearly all medical doctors have an additional three to five years of specialty training before they enter practice.
Chiropractic’s minimum educational standards are quite low. In 2007, chiropractic students were required to evaluate and manage only 15 patients in order to graduate. Chiropractic’s accreditation agency ordered this number to increase to 35 by the fall of 2011. However, only 10 of the 35 must be live patients (eight of whom are not students or their family members)! For the remaining cases, students are permitted to “assist, observe, or participate in live, paper-based, computer-based, distance learning, or other reasonable alternative.”3 In contrast, medical students see thousands of patients.
Former National Council Against Health Fraud President William T. Jarvis, Ph.D., has noted that chiropractic school prepares its students to practice “conversational medicine”—where they glibly use medical words but lack the knowledge or experience to deal appropriately with the vast majority of health problems.4 Dr. Stephen Barrett reported a fascinating example of this which occurred when he visited a chiropractor for research purposes. When Barrett mentioned that he was recovering from an attack of vertigo (dizziness), the chiropractor quickly rattled off a textbook-like list of all the possible causes. But instead of obtaining a proper history and conducting tests to pinpoint a diagnosis, he x-rayed Dr. Barrett’s neck and recommended a one-year course of manipulations to make his neck more curved. The medical diagnosis, which had been appropriately made elsewhere, was a viral infection that cleared up spontaneously in about ten days.5
4. Our legitimate scope is actually very narrow.
Appropriate chiropractic treatment is relevant only to a narrow range of ailments, nearly all related to musculoskeletal problems. But some chiropractors assert that they can influence the course of nearly everything. Some even offer adjustments to farm animals and family pets.
5. Very little of what chiropractors do has been studied.
Although chiropractic has been around since 1895, little of what we do meets the scientific standard through solid research. Chiropractic apologists try to sound scientific to counter their detractors, but very little research actually supports what chiropractors do.
6. Unless your diagnosis is obvious, it’s best to get diagnosed elsewhere.
During my work as an independent examiner, I have encountered many patients whose chiropractor missed readily apparent diagnoses and rendered inappropriate treatment for long periods of time. Chiropractors lack the depth of training available to medical doctors. For that reason, except for minor injuries, it is usually better to seek medical diagnosis first.
7. We offer lots of unnecessary services.
Many chiropractors, particularly those who find “subluxations” in everyone, routinely advise patients to come for many months, years, and even for their lifetime. Practice-builders teach how to persuade people they need “maintenance care” long after their original problem has resolved. In line with this, many chiropractors offer “discounts” to patients who pay in advance and sign a contract committing them to 50 to 100 treatments. And “chiropractic pediatric specialists” advise periodic examinations and spinal adjustments from early infancy onward. (This has been aptly described as “womb to tomb” care.) Greed is not the only factor involved in overtreatment. Many who advise periodic adjustments are “true believers.” In chiropractic school, one of my classmates actually adjusted his newborn son while the umbilical cord was still attached. Another student had the school radiology department take seven x-rays of his son’s neck to look for “subluxations” presumably acquired during the birth process. The topic of unnecessary care is discussed further in Chapter 8.
8. “Cracking” of the spine doesn’t mean much.
Spinal manipulation usually produces a “popping” or “cracking” sound similar to what occurs when you crack your knuckles. Both are due to a phenomenon called cavitation, which occurs when there is a sudden decrease in joint pressure brought on by the manipulation. That allows dissolved gases in the joint fluid to be released into the joint itself. Chiropractors sometimes state that the noise means that something therapeutic has taken place. However, the noise has no health-related significance and does not indicate that anything has been realigned. It simply means that gas was allowed to escape under less pressure than normal. Knuckles do not “go back into place” when you crack them, and neither do spinal bones.
9. If the first few visits don’t help you, more treatment probably won’t help.
I used to tell my patients “three and through.” If we did not see significant objective improvement in three visits, it was time to move on.
10. We take too many x-rays.
No test should be done unless it is likely to provide information that will influence clinical management of the patient. X-ray examinations are appropriate when a fracture, tumor, infection, or neurological defect is suspected. But they are not needed for evaluating simple mechanical-type strains, such as back or neck pain that develops after lifting a heavy object.
The average number of x-rays taken during the first visit by chiropractors whose records I have been asked to review has been about eleven. Those records were sent to me because an insurance company had flagged them for investigation into excessive billing, so this number of x-rays is much higher than average. But many chiropractors take at least a few x-rays of everyone who walks through their door.
There are two main reasons why chiropractors take more x-rays than are medically necessary. One is easy money. It costs about 35¢ to buy an 8- x 10-inch film, for which they typically charge $40. In chiropractic, the spine encompasses five areas: the neck, mid-back, low-back, pelvic, and sacral regions. That means five separate regions to bill for—typically three to seven views of the neck, two to six for the low back, and two for each of the rest. So eleven x-ray films would net the chiropractor over $400 for just a few minutes of work. In many accident cases I have reviewed, the fact that patients had adequate x-ray examinations in a hospital emergency department to rule out fractures did not deter the chiropractor from unnecessarily repeating these exams.
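The arithmetic behind the "easy money" claim above can be checked with a quick sketch (the per-film cost and charge are the approximate figures quoted in the text; the variable names are my own):

```python
# Approximate figures from the text: ~$0.35 cost per film, ~$40 billed per film.
FILM_COST = 0.35
CHARGE_PER_FILM = 40.00

films = 11  # average first-visit count in the flagged records described
gross = films * CHARGE_PER_FILM           # what the patient or insurer is billed
net = gross - films * FILM_COST           # minus the chiropractor's film cost
print(f"billed ${gross:.2f}, net ${net:.2f}")  # net comfortably over $400
```

Even ignoring overhead, the margin per visit is striking, which is the point the paragraph is making.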
Chiropractors also use x-ray examinations inappropriately for marketing purposes. Chiropractors who do this point to various things on the films that they interpret as (a) subluxations, (b) not enough spinal curvature, (c) too much spinal curvature, and/or (d) “spinal decay,” all of which supposedly call for long courses of adjustments with periodic x-ray re-checks to assess progress. In addition to wasting money, unnecessary x-rays entail unnecessary exposure to the risks of ionizing radiation.
11. Research on spinal manipulation does not reflect what takes place in most chiropractic offices.
Research studies that look at spinal manipulation are generally done under strict protocols that protect patients from harm. The results reflect what happens when manipulation is done on patients who are appropriately screened—usually by medical teams that exclude people with conditions that would make manipulation dangerous. The results do not reflect what typically happens when patients select chiropractors on their own. The chiropractic marketplace is a mess because most chiropractors ignore research findings and subject their patients to procedures that are unnecessary and/or senseless.
12. Neck manipulation is potentially dangerous.
Certain types of chiropractic neck manipulation can damage neck arteries and cause a stroke. Chiropractors claim that the risk is trivial, but they have made no systematic effort to actually measure it. Chapter 9 covers this topic in detail.
13. Most chiropractors don’t know much about nutrition.
Chiropractors learn little about clinical nutrition during their schooling. Many offer what they describe as “nutrition counseling.” But this typically consists of superficial advice about eating less fat and various schemes to sell you supplements that are high-priced and unnecessary.
14. Chiropractors who sell vitamins charge much more than it costs them.
Chiropractors who sell vitamins typically recommend them unnecessarily and charge two to three times what they pay for them. Some chiropractors center their practice around selling vitamins to patients. Their recommendations are based on hair analysis, live blood analysis, applied kinesiology muscle-testing or other quack tests that will be discussed later in this book. Patients who are victimized this way typically pay several dollars a day and are encouraged to stay on the products indefinitely. In one case I investigated, an Arizona chiropractor advised an 80+-year-old grandma to charge more than $10,000 worth of vitamins to her credit cards to avoid an impending stroke that he had diagnosed by testing a sample of her pubic hair. No hair test can determine that a stroke is imminent or show that dietary supplements are needed. Doctors who evaluated the woman at the Mayo Clinic found no evidence to support the chiropractor’s assessment.
15. Chiropractors have no business treating young children.
The pediatric training chiropractors receive during their schooling is skimpy and based mainly on reading. Students see few children and get little or no experience in diagnosing or following the course of the vast majority of childhood ailments. Moreover, spinal adjustment has no proven effectiveness against childhood diseases. Some adolescents with spinal stiffness might benefit from manipulation, but most will recover without treatment. Chiropractors who claim to practice “chiropractic pediatrics” typically aim to adjust spines from birth onward and are likely to oppose immunization. Some chiropractors claim they can reverse or lessen the spinal curvature of scoliosis, but there is no scientific evidence that spinal manipulation can do this.6
16. The fact that patients swear by us does not mean we are actually helping them.
Satisfaction is not the same thing as effectiveness. Many people who believe they have been helped had conditions that would have resolved without treatment. Some have had treatment for dangers that did not exist but were said by the chiropractor to be imminent. Many chiropractors actually take courses on how to trick patients into believing in them. (See Chapter 8.)
17. Insurance companies don’t want to pay for chiropractic care.
Chiropractors love to brag that their services are covered by Medicare and most insurance companies. However, this coverage has been achieved through political action rather than scientific merit. I have never encountered an insurance company that would reimburse for chiropractic if not forced to do so by state laws. The political pressure to mandate chiropractic coverage comes from chiropractors, of course, but it also comes from the patients whom they have brainwashed.
18. Lots of chiropractors do really strange things.
The chiropractic profession seems to attract people who are prone to believe in strange things. One I know of does “aura adjustments” to treat people’s “bruised karma.” Another rents out a large crystal to other chiropractors so they can “recharge” their own (smaller) crystals. Another claims to get advice by “channeling” a 15th Century Scottish physician. Another claimed to “balance a woman’s harmonics” by inserting his thumb into her vagina and his index finger into her anus. Another treated cancer with an orange light that was mounted in a wooden box. Another did rectal exams on all his female patients. Even though such exams are outside the legitimate scope of chiropractic, he also videotaped them so that if his bills for this service were questioned, he could prove that he had actually performed what he billed for.
19. Don’t expect our licensing boards to protect you.
Many chiropractors who serve on chiropractic licensing boards harbor the same misbeliefs that are rampant among their colleagues. This means, for example, that most boards are unlikely to discipline chiropractors for diagnosing and treating imaginary “subluxations.”
20. The media rarely look at what we do wrong.
The media rarely if ever address chiropractic nonsense. Reporting on chiropractic is complicated because chiropractors vary so much in what they do. (In fact, a very astute observer once wrote that “for every chiropractor, there is an equal and opposite chiropractor.”) Consumer Reports published superb exposés in 1975 and 1994, but no other print outlet has done so in the past 35 years. This lack of information is the main reason I have written this book.
1. Ernst E, Gilbey A. Chiropractic claims in the English-speaking world. New Zealand Medical Journal 123:36–44, 2010.
2. Bernet J. Affidavit, April 12, 1996. Posted to Chirobase Web site.
3. Standards for Doctor of Chiropractic Programs and Requirements for Institutional Status. Council on Chiropractic Education, Scottsdale, Arizona, Jan 2007.
4. Jarvis WT. Why becoming a chiropractor may be risky. Chirobase Web site, October 5, 1999.
5. Barrett S. My visit to a “straight” chiropractor. Quackwatch Web site, Oct 10, 2002.
6. Romano M, Negrini S. Manual therapy as a conservative treatment for idiopathic scoliosis: A review. Scoliosis 3:2, 2008.
It was 20 years ago today that I started my job as ‘Professor of Complementary Medicine’ at the University of Exeter and became a full-time researcher of all matters related to alternative medicine. One issue that was discussed endlessly during these early days was the question whether alternative medicine can be investigated scientifically. There were many vociferous proponents of the view that it was too subtle, too individualised, too special for that and that it defied science in principle. Alternative medicine, they claimed, needed an alternative to science to be validated. I spent my time arguing the opposite, of course, and today there finally seems to be a consensus that alternative medicine can and should be submitted to scientific tests much like any other branch of health care.
Looking back at those debates, I think it is rather obvious why apologists of alternative medicine were so vehement about opposing scientific investigations: they suspected, perhaps even knew, that the results of such research would be mostly negative. Once the anti-scientists saw that they were fighting a lost battle, they changed their tune and adopted science – well sort of: they became pseudo-scientists (‘if you cannot beat them, join them’). Their aim was to prevent disaster, namely the documentation of alternative medicine’s uselessness by scientists. Meanwhile many of these ‘anti-scientists turned pseudo-scientists’ have made rather surprising careers out of their cunning role-change; professorships at respectable universities have mushroomed. Yes, pseudo-scientists have splendid prospects these days in the realm of alternative medicine.
The term ‘pseudo-scientist’ as I understand it describes a person who thinks he/she knows the truth about his/her subject well before he/she has done the actual research. A pseudo-scientist is keen to understand the rules of science in order to corrupt science; he/she aims at using the tools of science not to test his/her assumptions and hypotheses, but to prove that his/her preconceived ideas were correct.
So, how does one become a top pseudo-scientist? During the last 20 years, I have observed some of the careers with interest and think I know how it is done. Here are nine lessons which, if followed rigorously, will lead to success (… oh yes, in case I again have someone thick enough to complain about me misleading my readers: THIS POST IS SLIGHTLY TONGUE IN CHEEK).
- Throw yourself into qualitative research. For instance, focus groups are a safe bet. This type of pseudo-research is not really difficult to do: you assemble about 5–10 people, let them express their opinions, record them, extract from the diversity of views what you recognise as your own opinion and call it a ‘common theme’, write the whole thing up, and – BINGO! – you have a publication. The beauty of this approach is manifold: 1) you can repeat this exercise ad nauseam until your publication list is of respectable length; there are plenty of alternative medicine journals that will hurry to publish your pseudo-research; 2) you can manipulate your findings at will, for instance, by selecting your sample (if you recruit people outside a health food shop, for instance, and direct your group wisely, you will find everything alternative medicine journals love to print); 3) you will never produce a paper that displeases the likes of Prince Charles (this is more important than you may think: even pseudo-science needs a sponsor [or would that be a pseudo-sponsor?]).
- Conduct surveys. These are very popular and highly respected/publishable projects in alternative medicine – and they are almost as quick and easy as focus groups. Do not get deterred by the fact that thousands of very similar investigations are already available. If, for instance, there already is one describing the alternative medicine usage by leg-amputated policemen in North Devon, and you nevertheless feel the urge of going into this area, you can safely follow your instinct: do a survey of leg-amputated policemen in North Devon with a medical history of diabetes. There are no limits, and as long as you conclude that your participants used a lot of alternative medicine, were very satisfied with it, did not experience any adverse effects, thought it was value for money, and would recommend it to their neighbour, you have secured another publication in an alternative medicine journal.
- If, for some reason, this should not appeal to you, how about taking a sociological, anthropological or psychological approach? How about studying, for example, the differences in worldviews, the different belief systems, the different ways of knowing, the different concepts about illness, the different expectations, the unique spiritual dimensions, the amazing views on holism – all in different cultures, settings or countries? Invariably, you will, of course, conclude that one truth is at least as good as the next. This will make you popular with all the post-modernists who use alternative medicine as a playground for getting a few publications out. This approach will allow you to travel extensively and generally have a good time. Your papers might not win you a Nobel prize, but one cannot have everything.
- It could well be that, at one stage, your boss has a serious talk with you demanding that you start doing what (in his narrow mind) constitutes ‘real science’. He might be keen to get some brownie-points at the next RAE and could thus want you to actually test alternative treatments in terms of their safety and efficacy. Do not despair! Even then, there are plenty of possibilities to remain true to your pseudo-scientific principles. By now you are good at running surveys, and you could, for instance, take up your boss’ suggestion of studying the safety of your favourite alternative medicine with a survey of its users. You simply evaluate their experiences and opinions regarding adverse effects. But be careful, you are on somewhat thinner ice here; you don’t want to upset anyone by generating alarming findings. Make sure your sample is small enough for a false negative result, and that all participants are well-pleased with their alternative medicine. This might be merely a question of selecting your patients cleverly. The main thing is that your conclusion is positive. If you want to go the extra pseudo-scientific mile, mention in the discussion of your paper that your participants all felt that conventional drugs were very harmful.
- If your boss insists you tackle the daunting issue of therapeutic efficacy, there is no reason to give up pseudo-science either. You can always find patients who happened to have recovered spectacularly well from a life-threatening disease after receiving your favourite form of alternative medicine. Once you have identified such a person, you write up her experience in much detail and call it a ‘case report’. It requires a little skill to brush over the fact that the patient also had lots of conventional treatments, or that her diagnosis was assumed but never properly verified. As a pseudo-scientist, you will have to learn how to discreetly make such irritating details vanish so that, in the final paper, they are no longer recognisable. Once you are familiar with this methodology, you can try to find a couple more such cases and publish them as a ‘best case series’ – I can guarantee that you will be all other pseudo-scientists’ hero!
- Your boss might point out, after you have published half a dozen such articles, that single cases are not really very conclusive. The antidote to this argument is simple: you do a large case series along the same lines. Here you can even show off your excellent statistical skills by calculating the statistical significance of the difference between the severity of the condition before the treatment and the one after it. As long as you show marked improvements, ignore all the many other factors involved in the outcome and conclude that these changes are undeniably the result of the treatment, you will be able to publish your paper without problems.
- As your boss seems to be obsessed with the RAE and all that, he might one day insist you conduct what he narrow-mindedly calls a ‘proper’ study; in other words, you might be forced to bite the bullet and learn how to plan and run an RCT. As your particular alternative therapy is not really effective, this could lead to serious embarrassment in the form of a negative result, something that must be avoided at all cost. I therefore recommend you join for a few months a research group that has a proven track record in doing RCTs of utterly useless treatments without ever failing to conclude that they are highly effective. There are several of those units both in the UK and elsewhere, and their expertise is remarkable. They will teach you how to incorporate all the right design features into your study without there being the slightest risk of generating a negative result. A particularly popular solution is to conduct what they call a ‘pragmatic’ trial; I suggest you focus on this splendid innovation that never fails to produce cheerfully positive findings.
- It is hardly possible that this strategy fails – but once every blue moon, all precautions turn out to be in vain, and even the most cunningly designed study of your bogus therapy might deliver a negative result. This is a challenge to any pseudo-scientist, but you can master it, provided you don’t lose your head. In such a rare case I recommend running as many different statistical tests as you can find; chances are that one of them will nevertheless produce something vaguely positive. If even this method fails (and it hardly ever does), you can always home in on the fact that, in your efficacy study of your bogus treatment, not a single patient died. Who would be able to doubt that this is a positive outcome? Stress it clearly, select it as the main feature of your conclusions, and thus make the more disappointing findings disappear.
- Now that you are a fully-fledged pseudo-scientist who has produced one misleading or false positive result after the next, you may want a ‘proper’ confirmatory study of your pet-therapy. For this purpose, run the same RCT over again, and again, and again. Eventually you want a meta-analysis of all RCTs ever published. As you are the only person who ever conducted studies on the bogus treatment in question, this should be quite easy: you pool the data of all your trials and, Bob’s your uncle: a nice little summary of the totality of the data that shows beyond doubt that your therapy works. Now even your narrow-minded boss will be impressed.
These nine lessons can and should be modified to suit your particular situation, of course. Nothing here is written in stone. The one skill any pseudo-scientist must have is flexibility.
Every now and then, some smart arse is bound to attack you and claim that this is not rigorous science, that independent replications are required, that you are biased etc. etc. blah, blah, blah. Do not panic: either you ignore that person completely, or (in case there is a whole gang of nasty sceptics after you) you might just point out that:
- your work follows a new paradigm; the one of your critics is now obsolete,
- your detractors fail to understand the complexity of the subject and their comments merely reveal their ridiculous incompetence,
- your critics are less than impartial, in fact, most are bought by BIG PHARMA,
- you have a paper ‘in press’ that fully deals with all the criticism and explains how inappropriate it really is.
In closing, allow me a final word about publishing. There are hundreds of alternative medicine journals out there to choose from. They will love your papers because they are uncompromisingly promotional. These journals all have one thing in common: they are run by apologists of alternative medicine who abhor reading anything negative about alternative medicine. Consequently, hardly a critical word about alternative medicine will ever appear in these journals. If you want to make double sure that your paper does not get criticised during the peer-review process (this would require a revision, and you don’t need extra work of that nature), you can suggest a friend for peer-reviewing it. In turn, you can offer to do the same for him/her the next time he/she has an article to submit. This is how pseudo-scientists make sure that the body of pseudo-evidence for their pseudo-treatments is growing at a steady pace.
I have said it so often that I hesitate to state it again: an uncritical researcher is a contradiction in terms. This raises the question of how critical the researchers of alternative medicine truly are. In my experience, most tend to be uncritical in the extreme. But how would one go about providing evidence for this view? In a previous blog-post, I have suggested a fairly simple method: to calculate an index of negative conclusions drawn in the articles published by a specific researcher. This is what I wrote:
If we calculated the percentage of a researcher’s papers arriving at positive conclusions and divided this by the percentage of his papers drawing negative conclusions, we might have a useful measure. A realistic example might be the case of a clinical researcher who has published a total of 100 original articles. If 50% had positive and 50% negative conclusions about the efficacy of the therapy tested, his trustworthiness index (TI) would be 1.
Depending on what area of clinical medicine this person is working in, 1 might be a figure that is just about acceptable in terms of the trustworthiness of the author. If the TI goes beyond 1, we might get concerned; if it reaches 4 or more, we should get worried.
An example would be a researcher who has published 100 papers of which 80 are positive and 20 arrive at negative conclusions. His TI would consequently amount to 4. Most of us equipped with a healthy scepticism would consider this figure highly suspect.
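The TI described above is just a ratio, but a minimal sketch makes the two worked examples explicit (the function name and the infinity convention for a researcher with no negative conclusions at all are my own assumptions, not part of the original proposal):

```python
def trustworthiness_index(positive: int, negative: int) -> float:
    """TI = (% papers with positive conclusions) / (% with negative conclusions).

    Both percentages share the same denominator (total papers published),
    so the ratio reduces to simply positive / negative.
    """
    if negative == 0:
        # No negative conclusions whatsoever: the index is unbounded,
        # i.e. maximally suspect by this measure.
        return float("inf")
    return positive / negative

# The two examples from the text:
print(trustworthiness_index(50, 50))  # 1.0 -- just about acceptable
print(trustworthiness_index(80, 20))  # 4.0 -- the threshold for worry
```

A researcher whose published conclusions are uniformly positive would score infinitely high on this measure, which is exactly the pattern examined below.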
So how would alternative medicine researchers do, if we applied this method for assessing their trustworthiness? Very poorly, I fear – but that is speculation! Let’s see some data. Let’s look at one prominent alternative medicine researcher and see. As an example, I have chosen Professor George Lewith (because his name is unique, which avoids confusion with other researchers), did a quick Medline search to identify the abstracts of his articles on alternative medicine, and extracted the crucial sentence from the conclusions of the most recent ones:
- The study design of registered TCM trials has improved in estimating sample size, use of blinding and placebos
- Real treatment was significantly different from sham demonstrating a moderate specific effect of PKP
- These findings highlight the importance of helping patients develop coherent illness representations about their LBP before trying to engage them in treatment-decisions, uptake, or adherence
- Existing theories of how context influences health outcomes could be expanded to better reflect the psychological components identified here, such as hope, desire, optimism and open-mindedness
- …mainstream science has moved on from the intellectual sterility and ad hominem attacks that characterise the sceptics’ movement
- Trustworthy and appropriate information about practitioners (e.g. from professional regulatory bodies) could empower patients to make confident choices when seeking individual complementary practitioners to consult
- Comparative effectiveness research is an emerging field and its development and impact must be reflected in future research strategies within complementary and integrative medicine
- The I-CAM-Q has low face validity and low acceptability, and is likely to produce biased estimates of CAM use if applied in England, Romania, Italy, The Netherlands or Spain
- Our main finding was of beta power decreases in primary somatosensory cortex and SFG, which opens up a line of future investigation regarding whether this contributes toward an underlying mechanism of acupuncture.
- …physiotherapy was appraised more negatively in the National Health Service than the private sector but osteopathy was appraised similarly within both health-care sectors
This is a bit tedious, I agree, so I stop after just 10 articles. But even this short list does clearly indicate the absence of negative conclusions. In fact, I see none at all – arguably a few neutral ones, but nothing negative. All is positive in the realm of alternative medicine research then? In case you don’t agree with that assumption, you might prefer to postulate that this particular alternative medicine researcher somehow avoids negative conclusions. And if you believe that, you are not far from considering that we are being misinformed.
Alternative medicine is not really a field where one might reasonably expect that rigorous research generates nothing but positive results; even to expect 50 or 40% of such findings would be quite optimistic. It follows, I think, that if researchers only find positives, something must be amiss. I have recently demonstrated that the most active homeopathic research group (Professor Witt from the Charite in Berlin) has published nothing but positive findings; even if the results were not quite positive, they managed to formulate a positive conclusion. Does anyone doubt that this amounts to misinformation?
So, I have produced at least some tentative evidence for my suspicion that some alternative medicine researchers misinform us. But how precisely do they do it? I can think of several methods for avoiding publishing a negative result or conclusion, and I fear that all of them are popular with alternative medicine researchers:
- design the study in such a way that it cannot possibly give a negative result
- manipulate the data
- be inventive when it comes to statistics
- home in on the one positive aspect your generally negative data might show
- do not write up your study; that way, nobody will ever see your negative results
And why do they do it? My impression is that they use science not for testing their interventions but for proving them. Critical thinking is a skill that alternative medicine researchers do not seem to cultivate. Often they manage to hide this fact quite cleverly, and for good reasons: no respectable funding body would give money for such an abuse of science! Nevertheless, the end result is plain to see: no negative conclusions are being published!
There are at least two further implications of the fact that alternative medicine researchers misinform the public. The first concerns the academic centres in which these researchers are organised. If a prestigious university accommodates a research unit of alternative medicine, it gives considerable credence to alternative medicine itself. If the research that comes out of the unit is promotional pseudo-science, the result, in my view, amounts to misleading the public about the value of alternative medicine.
The second implication relates to the journals in which researchers of alternative medicine prefer to publish their articles. Today, there are several hundred journals specialised in alternative medicine. We have shown over and over again that these journals publish next to nothing in terms of negative results. In my view, this too amounts to systematic misinformation.
My conclusion from all this is depressing: the type of research that currently dominates alternative medicine is, in fact, pseudo-research aimed not at rigorously falsifying hypotheses but at promoting bogus treatments. In other words alternative medicine researchers crucially contribute to the ‘sea of misinformation’ in this area.
In my last post and several others before, I have stated that consumers are incessantly being misled about the value of alternative medicine. This statement requires evidence, and I intend to provide it – not just in one post but in a series of posts following in fast succession.
I start with an investigation we did over a decade ago. Its primary aim was to determine which complementary therapies are believed by their respective representing UK professional organizations to be suited for which medical conditions.
For this purpose, we sent out 223 questionnaires to CAM organizations representing a single CAM therapy (yes, amazingly that many such institutions exist just in the UK!). They were asked to list the 15 conditions which they felt benefited most from their specific CAM therapy, as well as the 15 most important contra-indications, the typical costs of initial and any subsequent treatments and the average length of training required to become a fully qualified practitioner. The conditions and contra-indications quoted by responding CAM organizations were recorded and the top five of each were determined. Treatment costs and hours of training were expressed as ranges.
Only 66 questionnaires were returned. Taking undelivered questionnaires into account, the response rate was 34%. Two or more responses were received from CAM organizations representing twelve therapies: aromatherapy, Bach flower remedies, Bowen technique, chiropractic, homoeopathy, hypnotherapy, magnet therapy, massage, nutrition, reflexology, Reiki and yoga.
The top seven common conditions deemed to benefit from all twelve therapies, in order of frequency, were: stress/anxiety, headaches/migraine, back pain, respiratory problems (including asthma), insomnia, cardiovascular problems and musculoskeletal problems. It is perhaps important at this stage to point out that some of these conditions are serious, even life-threatening. Aromatherapy, Bach flower remedies, hypnotherapy, massage, nutrition, reflexology, Reiki and yoga were all recommended as suitable treatments for stress/anxiety. Aromatherapy, Bowen technique, chiropractic, hypnotherapy, massage, nutrition, reflexology, Reiki and yoga were all recommended for headache/migraine. Bowen technique, chiropractic, magnet therapy, massage, reflexology and yoga were recommended for back pain. None of the therapies cost more than £60 for an initial consultation and treatment. No correlation between length of training and treatment cost was noted.
I think this article provides ample evidence to show that, at least in the UK, professional organisations of alternative medicine readily issue statements about the effectiveness of specific alternative therapies which are not supported by evidence. Several years later, Simon Singh noted that phenomenon in a Guardian comment and wrote about the British Chiropractic Association that "they happily promote bogus claims". He was famously sued for libel but won the case. Simon had picked the BCA merely by chance. The frightening thought is that he could have targeted any other of the 66 organisations from our investigation: they all seem to promote bogus claims quite happily.
Several findings from our study stood out for being particularly worrying: according to the respective professional organisation, Bach Flower Remedies were deemed to be effective for cancer and AIDS, for instance. If their peers put out such irresponsible nonsense, we should not be amazed at the claims made by the practitioners. And if the practitioners tell such ‘tall tales’ to their clients, to journalists and to everyone else, how can we be amazed that we seem to be drowning in a sea of misinformation?
Can one design a clinical study in such a way that it looks highly scientific but, at the same time, has no chance of generating a finding that the investigators do not want? In other words, can one create false positive findings at will and get away with it? I think it is possible; what is more, I believe that, in alternative medicine, this sort of thing happens all the time. Let me show you how it is done; four main points usually suffice:
- The first rule is that it ought to be an RCT, if not, critics will say the result was due to selection bias. Only RCTs have the reputation of being ‘top notch’.
- Once we are clear about this design feature, we need to define the patient population. Here the trick is to select individuals with an illness that cannot be quantified objectively. Depression, stress, fatigue…the choice is vast. The aim must be to employ an outcome measure that is well-accepted, validated etc. but which nevertheless is entirely subjective.
- Now we need to consider the treatment to be "tested" in our study. Obviously we take the one we are fond of and want to "prove". It helps tremendously if this intervention has an exotic name and involves some exotic activity; this raises our patients' expectations, which will affect the result. And it is important that the treatment is a pleasant experience; patients must like it. Finally, it should involve not just one but several sessions in which the patient can be persuaded that our treatment is the best thing since sliced bread, even if, in fact, it is entirely bogus.
- We also need to make sure that, for our particular therapy, no universally accepted placebo exists which would allow patient-blinding. That would be fairly disastrous. And we certainly do not want to be innovative and create such a placebo either; we just pretend that controlling for placebo effects is impossible or undesirable. By far the best solution would be to give the control group no treatment at all. That way, they are bound to be disappointed at missing out on a pleasant experience which, in turn, will contribute to unfavourable outcomes in the control group. This little trick will, of course, make the results in the experimental group look even better.
That’s about it! No matter how ineffective our treatment is, there is no conceivable way our study can generate a negative result; we are in the pink!
Now we only need to run the trial and publish the positive results. It might be advisable to recruit several co-authors for the publication – that looks more serious and is not too difficult: people are only too keen to prolong their publication list. And we might want to publish our study in one of the many CAM journals that are not too critical, as long as the result is positive.
Once our article is in print, we can legitimately claim that our bogus treatment is evidence-based. With a bit of luck, other research groups will proceed in the same way and soon we will have not just one but several positive studies. If not, we need to do two or three more trials along the same lines. The aim is to eventually do a meta-analysis that yields a convincingly positive verdict on our phony intervention.
You might think that I am exaggerating beyond measure. Perhaps a bit, I admit, but I am not all that far from the truth, believe me. You want proof? What about this one?
Researchers from the Charite in Berlin just published an RCT to investigate the effectiveness of a mindful walking program in patients with high levels of perceived psychological distress.
To prevent allegations of exaggeration, selective reporting, spin etc. I take the liberty of reproducing the abstract of this study unaltered:
Participants aged between 18 and 65 years with moderate to high levels of perceived psychological distress were randomized to 8 sessions of mindful walking in 4 weeks (each 40 minutes walking, 10 minutes mindful walking, 10 minutes discussion) or to no study intervention (waiting group). Primary outcome parameter was the difference to baseline on Cohen’s Perceived Stress Scale (CPSS) after 4 weeks between intervention and control.
Seventy-four participants were randomized in the study; 36 (32 female, 52.3 ± 8.6 years) were allocated to the intervention and 38 (35 female, 49.5 ± 8.8 years) to the control group. Adjusted CPSS differences after 4 weeks were -8.8 [95% CI: -10.8; -6.8] (mean 24.2 [22.2; 26.2]) in the intervention group and -1.0 [-2.9; 0.9] (mean 32.0 [30.1; 33.9]) in the control group, resulting in a highly significant group difference (P < 0.001).
Conclusion. Patients participating in a mindful walking program showed reduced psychological stress symptoms and improved quality of life compared to no study intervention. Further studies should include an active treatment group and a long-term follow-up.
This whole thing could just be a bit of innocent fun, but I am afraid it is neither innocent nor fun, it is, in fact, quite serious. If we accept manipulated trials as evidence, we do a disservice to science, medicine and, most importantly, to patients. If the result of a trial is knowable before the study has even started, it is unethical to run the study. If the trial is not a true test but a simple promotional exercise, research degenerates into a farcical pseudo-science. If we abuse our patients’ willingness to participate in research, we jeopardise more serious investigations for the benefit of us all. If we misuse the scarce funds available for research, we will not have the money to conduct much needed investigations. If we tarnish the reputation of clinical research, we hinder progress.