Believe it or not, my decision – all those years ago – to study medicine was influenced, to a significant degree, by a somewhat naive desire to one day be able to save lives. In my experience, most medical students are motivated by this wish – "to save lives" in this context stands not just for the dramatic act of administering a life-saving treatment to a moribund patient but serves as shorthand for helping patients in a much more general sense.
I am not sure whether, as a young clinician, I ever managed to save many lives. Later, I changed career and became a researcher. The general view of researchers seems to be that they are detached from real life, sit in ivory towers and write clever papers which hardly anyone understands and few people will ever read. Researchers therefore cannot save lives, can they?
So, what happened to those laudable ambitions of the young Dr Ernst? Why did I decide to go into research, and why alternative medicine? Why did I not conduct research in the more promotional way of so many of my colleagues (my life would have been so much more hassle-free, and I might even have a knighthood by now)? Why did I feel the need to insist on rigorous assessments and critical thinking, often at high cost? For my many detractors, the answers to these questions seem more than obvious: I was corrupted by BIG PHARMA, I have an axe to grind against all things alternative, I have an insatiable desire to be in the limelight, I defend my profession against the competition from alternative practitioners, etc. However, for me, the issues are a little less obvious (today, I will, for the first time, disclose the bribe I received from BIG PHARMA for criticising alternative medicine: the precise sum was zero £ and the same amount again in $).
As I am retiring from academic life and doing less original research, I do have the time and the inclination to brood over such questions. What precisely motivated my research agenda in alternative medicine, and why did I remain unimpressed by the number of powerful enemies I made pursuing it?
If I am honest – and I know this will sound strange to many, particularly to those who are convinced that I merely rejoice in being alarmist – I am still inspired by the hope of saving lives. Sure, the youthful naivety of the early days has all but disappeared, yet the core motivation has remained unchanged.
But how can research into alternative medicine ever save a single life?
For about 20 years, I have regularly pointed out that the most important research questions in my field relate to the risks of alternative medicine. I have continually published articles about these issues in the medical literature and, more recently, I have also made a conscious effort to step out of the ivory towers of academia and started writing for a much wider lay audience (hence also this blog). Important landmarks on this journey include:
Alternative medicine is cleverly, heavily and incessantly promoted as being natural and hence harmless. Several of my previous posts and the ensuing discussions on this blog strongly suggest that some chiropractors deny that their neck manipulations can cause a stroke. Similarly, some homeopaths are convinced that they can do no harm; some acupuncturists insist that their needles are entirely safe; some herbalists think that their medicines are risk-free, etc. All of them tend to agree that the risks are non-existent or so small that they are dwarfed by those of conventional medicine, thus ignoring that the potential risks of any treatment must be seen in relation to its proven benefit.
For 20 years, I have tried my best to dispel these dangerous myths and fallacies. In doing so, I had to fight many tough battles (sometimes even with the people who should have protected me, e.g. my peers at Exeter University), and I have the scars to prove it. If, however, I did save just one life by conducting my research into the risks of alternative medicine and by writing about it, the effort was well worth it.
The developed world is in the middle of a major obesity epidemic. It is predicted to cause millions of premature deaths and to cost billions of dollars, money that would be badly needed elsewhere. The well-known method of eating less and moving more is most efficacious but sadly not very effective; that is to say, people do not easily adopt and adhere to it. This is why many experts are searching for a treatment that works and is acceptable to all, or at least most, patients.
Entrepreneurs of alternative medicine have long since jumped on this bandwagon. They have learnt that the regulations are lax or non-existent, that consumers are keen to believe anything they tell them and that the opportunities to make a fast buck are thus enormous. Today, they are offering an endless array of treatments which are cleverly marketed, for instance via the Internet.
For many years, my research team has been involved in a programme of assessing alternative slimming aids, mostly through systematic reviews and occasionally also through conducting our own clinical trials. Our published analyses include the following treatments:
Supplements containing conjugated linoleic acid
There are, of course, many more but, for most, no evidence exists at all. The treatments listed above have all been submitted to clinical trials. The results invariably show that the outcomes were not convincingly positive: either there were too few data, or there were too many flaws in the studies, or the weight reduction achieved was too small to be clinically relevant.
Our latest systematic review is a good example; its aim was to evaluate the evidence from randomized controlled trials (RCTs) involving the use of the African Bush Mango, Irvingia gabonensis, for body weight reduction in obese and overweight individuals. Three RCTs were identified, and all had major methodological flaws. All RCTs reported statistically significant reductions in body weight and waist circumference favoring I. gabonensis over placebo. They also suggested positive effects of I. gabonensis on blood lipids. Adverse events included headache and insomnia. Despite these apparently positive findings, our conclusions had to be cautious: “Due to the paucity and poor reporting quality of the RCTs, the effect of I. gabonensis on body weight and related parameters are unproven. Therefore, I. gabonensis cannot be recommended as a weight loss aid. Future research in this area should be more rigorous and better reported.”
People who want to lose weight are often extremely desperate and ready to try anything. They are thus easy victims of the irresponsible promises being made on the Internet and elsewhere. Despite the overwhelming evidence to the contrary, consumers are led to believe that alternative slimming aids are effective. What is more, they are also misled into assuming they are risk-free. This latter assumption is false too: apart from the harm done to the patient's bank account, many alternative slimming aids are associated with side-effects which, in some cases, are serious and can even include death.
The conclusion from all this is short and simple: alternative slimming aids are bogus.
Still in the spirit of ACUPUNCTURE AWARENESS WEEK, I am taking another critical look at a recent paper. If you trust some of the conclusions of this new article, you might think that acupuncture is an evidence-based treatment for coronary heart disease. I think this would be a recipe for disaster.
This condition affects millions and eventually kills a frighteningly large percentage of the population. Essentially, it is caused by the fact that, as we get older, the blood vessels supplying the heart change: they become narrower and get partially or even totally blocked. This causes a lack of oxygen in the heart, which in turn causes the pain known as angina pectoris. Angina is a most important warning sign, indicating that a full-blown heart attack might not be far off.
The treatment of coronary heart disease consists of trying to get more blood to flow through the narrowed coronaries, either with drugs or through surgery. At the same time, one attempts to reduce the oxygen demand of the heart, if possible. Normalising risk factors like hypertension and hypercholesterolaemia is a key preventative strategy. It is not immediately clear to me how acupuncture might help with any of this – but I have been wrong before!
The new meta-analysis included 16 individual randomised clinical trials. All had a high or moderate risk of bias. Acupuncture combined with conventional drugs (AC+CD) turned out to be superior to conventional drugs alone in reducing the incidence of acute myocardial infarction (AMI). AC+CD was superior to conventional drugs in reducing angina symptoms as well as in improving electrocardiography (ECG). Acupuncture by itself was also superior to conventional drugs for angina symptoms and ECG improvement. AC+CD was superior to conventional drugs in shortening the time to onset of angina relief. However, the time to onset was significantly longer for acupuncture treatment than for conventional treatment alone.
From these results, the authors [who are from the Chengdu University of Traditional Chinese Medicine in Sichuan, China] conclude that “AC+CD reduced the occurrence of AMI, and both acupuncture and AC+CD relieved angina symptoms and improved ECG. However, compared with conventional treatment, acupuncture showed a longer delay before its onset of action. This indicates that acupuncture is not suitable for emergency treatment of heart attack. Owing to the poor quality of the current evidence, the findings of this systematic review need to be verified by more RCTs to enhance statistical power.”
As in the meta-analysis discussed in my previous post, the studies are mostly Chinese, flawed, and not obtainable for an independent assessment. As in the previous article, I fail to see a plausible mechanism by which acupuncture might bring about the effects. This is not just a trivial or coincidental observation – I could cite dozens of systematic reviews for which the same criticism applies.
What is different, however, from the last post on gout is simple and important: if you treat gout with a therapy that is ineffective, you have more pain and eventually might opt for an effective one. If you treat coronary heart disease with a therapy that does not work, you might not have time to change, you might be dead.
Therefore I strongly disagree with the authors of this meta-analysis: "the findings of this systematic review need NOT be verified by more RCTs to enhance statistical power" – foremost, I think, the findings need to be interpreted with much more caution and the conclusions rewritten. In fact, the findings show quite clearly that there is no good evidence for using acupuncture for coronary heart disease. To pretend otherwise is, in my view, not responsible.
There might be an important lesson here: A SEEMINGLY SLIGHT CORRECTION OF CONCLUSIONS OF SUCH SYSTEMATIC REVIEWS MIGHT SAVE LIVES.
This week is acupuncture awareness week, and I will use this occasion to continue focusing on this therapy. This first-ever event is supported by the British Acupuncture Council, which states that it aims to "help better inform people about the ancient practice of traditional acupuncture. With 2.3 million acupuncture treatments carried out each year, acupuncture is one of the most popular complementary therapies practised in the UK today."
Right, let’s inform people about acupuncture then! Let’s show them that there is often more to acupuncture research than meets the eye.
My team and I have done lots of research into acupuncture and have probably published more papers on it than on any other subject. We had prominent acupuncturists on board from the UK, Korea, China and Japan; we ran conferences, published books and are proud to have been innovative and productive in our multidisciplinary research. But here I do not intend to dwell on our own achievements; rather, I will highlight several important new papers in this area.
Korean authors just published a meta-analysis to assess the effectiveness of acupuncture as therapy for gouty arthritis. Ten RCTs involving 852 gouty arthritis patients were included. Six studies of 512 patients reported a significant decrease in uric acid in the treatment group compared with a control group, while two studies of 120 patients reported no such effect. The remaining four studies of 380 patients reported a significant decrease in pain in the treatment group.
The authors conclude “that acupuncture is efficacious as complementary therapy for gouty arthritis patients”.
We should be delighted with such a positive and neat result! Why then do I hesitate and have doubts?
I believe that this paper reveals several important issues in relation to systematic reviews of Chinese acupuncture trials and studies of other TCM interventions. In fact, this is my main reason for discussing the new meta-analysis here. The following three points are crucial, in my view:
1) All the primary studies were from China, and 8 of the 10 were only available in Chinese.
2) All of them had major methodological flaws.
3) It has been shown repeatedly that all acupuncture trials from China are positive.
Given this situation, the conclusions of any review for which there are only Chinese acupuncture studies might as well be written before the actual research has started. If the authors are pro-acupuncture, as those of the present article clearly are, they will conclude that "acupuncture is efficacious". If the research team has some critical thinkers on board, the same evidence will lead to an entirely different conclusion, such as "due to the lack of rigorous trials, the evidence is less than compelling."
Systematic reviews are supposed to be the best type of evidence we currently have; they are supposed to guide therapeutic decisions. I find it unacceptable that one and the same set of data could be systematically analysed to generate such dramatically different outcomes. This is confusing and counter-productive!
So what is there to do? How can we prevent being misled by such articles? I think that medical journals should refuse to publish systematic reviews which so clearly lack sufficient critical input. I also believe that reviewers of predominantly Chinese studies should provide English translations of these texts so that they can be independently assessed by those who are not able to read Chinese – and for the sake of transparency, journal editors should insist on this point.
And what about the value of acupuncture for gouty arthritis? I think I will let readers draw their own conclusions.
There are at least two dramatically different kinds of herbal medicine, and the proper distinction of the two is crucially important. The first type is supported by some reasonably sound evidence and essentially uses well-tested herbal remedies against specific conditions; this approach has been called by some experts RATIONAL PHYTOTHERAPY. An example is the use of St John’s Wort for depression.
The second type is traditional herbalism: it entails consulting a herbal practitioner who takes a history, makes a diagnosis (usually according to obsolete concepts) and prescribes a mixture of several herbal remedies tailor-made to the characteristics of the individual patient. Thus 10 patients with the identical diagnosis (say, depression) might receive 10 different mixtures of herbs. This is true for individualized herbalism of all traditions, e.g. Chinese, Indian or European, and virtually every herbalist you might consult will employ this individualized, traditional approach.
Many consumers know that, in principle, there is some reasonably good evidence for herbal medicine. They fail to appreciate, however, that this only applies to (sections of) rational phytotherapy. So, they consult herbal practitioners in the belief that they are about to receive an evidence-based therapy. Nothing could be further from the truth! The individualised approach is not evidence-based: even if the individual extracts employed were all supported by sound data (which they frequently are not), the mixtures applied clearly are not.
And this is where the danger of traditional herbalism lies; over the years, herbalists have fooled us all with this fundamental misunderstanding. In the UK, they might even achieve statutory regulation on the back of this self-serving misconception. When this happens, we will have a situation where practitioners of a completely unproven approach have obtained the same status as doctors, nurses and physiotherapists. If this is not grossly misleading for the consumer, I do not know what is!
Some claim that individualized herbalism cannot be tested in clinical trials. This notion can very easily be shown to be wrong: several such studies testing individualized herbalism have been published. To the dismay of traditional herbalists, their results fail to confirm that such treatments are effective for any condition.
Now a further trial has become available that contributes importantly to this knowledge base. Its authors (all enthusiasts of individualized herbalism) randomized 102 patients suffering from hip or knee osteoarthritis into two groups. The experimental group received tailor-made mixtures of 7 to 10 Chinese herbs traditionally assumed to be helpful. The control group took a mixture of plants known to be ineffective but similar in taste. After 20 weeks of treatment, there were no differences between the groups in any of the outcome measures: pain, stiffness and function. These results thus confirm that this approach is not effective. Not only that, it also carries greater risks.
As individualized herbalism employs a multitude of ingredients, the dangers of adverse effects, herb-drug interactions, contamination, adulteration etc. are bigger than those associated with the use of single herbal extracts. It seems to follow, therefore, that the risks of individualized herbalism are not outweighed by its benefits.
My recommendations are thus fairly straightforward: if we consider herbal medicine, it is vital to differentiate between the two types. Rational phytotherapy might be fine – depending, of course, on the remedy and the condition we are aiming to treat. Individualised or traditional herbalism is not fine; it is not demonstrably effective and carries considerable risks. This means consulting a herbalist is not a reasonable approach to treating any human ailment. It also means that regulating herbalists (as we are about to do in the UK) is a seriously bad idea: the regulation of nonsense will result in nonsense!
If I had a pint of beer for every time I have been accused of bias against chiropractic, I would rarely be sober. The thing is that I do like to report on decent research in this field, and almost every day I look out for new articles which might be worth writing about – but they are like gold dust!
"Huuuuuuuuh, that just shows how very biased he is", I hear the chiro community shout. Well, let's put my hypothesis to the test. Here is a complete list of recent (2013) Medline-listed articles on chiropractic; no omissions, no bias, just facts (for clarity, the Pubmed link is listed first, then the title in bold, followed by a short comment in italics):
Towards establishing an occupational threshold for cumulative shear force in the vertebral joint – An in vitro evaluation of a risk factor for spondylolytic fractures using porcine specimens.
This is an interesting study of the shear forces observed in porcine vertebral specimens during manoeuvres which might resemble spinal manipulation in humans. The authors conclude that "Our investigation suggested that pars interarticularis damage may begin non-linearly accumulating with shear forces between 20% and 40% of failure tolerance (approximately 430 to 860N)".
Development of an equation for calculating vertebral shear failure tolerance without destructive mechanical testing using iterative linear regression.
This is a mathematical modelling of the forces that might act on the spine during manipulation. The authors draw no conclusions.
Collaborative Care for Older Adults with low back pain by family medicine physicians and doctors of chiropractic (COCOA): study protocol for a randomized controlled trial.
This is merely the publication of a trial that is about to commence.
Military Report More Complementary and Alternative Medicine Use than Civilians.
This is a survey which suggests that ~45% of all military personnel use some form of alternative medicine.
Complementary and Alternative Medicine Use by Pediatric Specialty Outpatients
This is another survey; it concludes that "CAM use is high among pediatric specialty clinic outpatients".
Extending ICPC-2 PLUS terminology to develop a classification system specific for the study of chiropractic encounters
This is an article on chiropractic terminology which concludes that “existing ICPC-2 PLUS terminology could not fully represent chiropractic practice, adding terms specific to chiropractic enabled coding of a large number of chiropractic encounters at the desired level. Further, the new system attempted to record the diversity among chiropractic encounters while enabling generalisation for reporting where required. COAST is ongoing, and as such, any further encounters received from chiropractors will enable addition and refinement of ICPC-2 PLUS (Chiro)”.
US Spending On Complementary And Alternative Medicine During 2002-08 Plateaued, Suggesting Role In Reformed Health System
This is a study of the money spent on alternative medicine concluding as follows “Should some forms of complementary and alternative medicine-for example, chiropractic care for back pain-be proven more efficient than allopathic and specialty medicine, the inclusion of complementary and alternative medicine providers in new delivery systems such as accountable care organizations could help slow growth in national health care spending”
A Royal Chartered College joins Chiropractic & Manual Therapies.
This is a short comment on the fact that a chiro institution received a Royal Charter.
Exposure-adjusted incidence rates and severity of competition injuries in Australian amateur taekwondo athletes: a 2-year prospective study.
This is a study by chiros to determine the frequency of injuries in taekwondo athletes.
The first thing that strikes me is the paucity of articles. OK, we are only talking about January 2013, but by comparison most medical fields, like neurology or rheumatology, have produced hundreds of articles during this period, and even the field of acupuncture research has generated about three times more.
The second and much more important point is that I fail to see much chiropractic research that is truly meaningful or tells us anything about what I consider the most urgent questions in this area, e.g. do chiropractic interventions work? Are they safe?
My last point is equally critical: after reading the 9 papers, I honestly have to say that none of them impressed me in terms of scientific rigour.
So, what does this tiny investigation suggest? Not a lot, I have to admit, but I think it supports the hypothesis that research into chiropractic is neither very active nor of high quality, nor does it address the most urgent questions.
In my very first post on this blog, I proudly pronounced that this would not become one of those places where quack-busters have a field day. However, I am aware that, so far, I have not posted many complimentary things about alternative medicine. My 'excuse' might be that there are virtually millions of sites where this area is uncritically promoted and very few where an insider dares to express a critical view. In the interest of balance, I thus focus on critical assessments.
Yet I intend, of course, to report positive news whenever I think it is relevant and sound. So, today I shall discuss a new trial which is impressively rigorous and generates some positive results:
French rheumatologists conducted a prospective, randomised, double-blind, parallel-group, placebo-controlled trial of avocado-soybean unsaponifiables (ASU). This dietary supplement has complex pharmacological activities and has been used for years for osteoarthritis (OA) and other conditions. The clinical evidence has so far been encouraging, albeit not entirely convincing. My own review arrived at the conclusion that "the majority of rigorous trial data available to date suggest that ASU is effective for the symptomatic treatment of OA and more research seems warranted. However, the only real long-term trial yielded a largely negative result".
For the new trial, patients with symptomatic hip OA and a minimum joint space width (JSW) of the target hip between 1 and 4 mm were randomly assigned to three years of 300 mg/day ASU-E or placebo. The primary outcome was JSW change at year 3, measured radiographically at the narrowest point.
A total of 399 patients were randomised. Their mean baseline JSW was 2.8 mm. There was no significant difference in mean JSW loss, but there were 20% fewer progressors in the ASU group than in the placebo group (40% vs 50%, respectively). No difference was observed in terms of clinical outcomes. Safety was excellent.
The authors concluded that 3 year treatment with ASU reduces the speed of JSW narrowing, indicating a potential structure modifying effect in hip OA. They cautioned that their results require independent confirmation and that the clinical relevance of their findings require further assessment.
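As an aside, the "20% fewer progressors" figure is a relative reduction. A quick calculation from the proportions reported above (40% vs 50%) shows how the relative and absolute figures relate; the number-needed-to-treat is my own illustrative addition, not a figure from the trial:

```python
placebo_progression = 0.50  # 50% of placebo patients progressed (reported)
asu_progression = 0.40      # 40% of ASU patients progressed (reported)

# Absolute risk reduction: the difference in proportions
arr = placebo_progression - asu_progression          # 0.10, i.e. 10 percentage points

# Relative risk reduction: the "20% fewer progressors" headline figure
rrr = arr / placebo_progression                      # 0.20

# Number needed to treat: how many patients must take ASU for 3 years
# so that one fewer progresses (illustrative, derived from the above)
nnt = 1.0 / arr                                      # 10 patients
```

The same 10-percentage-point difference can thus be reported as "20% fewer progressors" or "one patient in ten benefits", which sound rather different to a lay reader.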
I like this study, and here are just a few reasons why:
It reports a massive research effort; I think anyone who has ever attempted a 3-year RCT might agree with this view.
It is rigorous; all the major sources of bias are excluded as far as humanly possible.
It is well-reported; all the essential details are there and anyone who has the skills and funds would be able to attempt an independent replication.
The authors are cautious in their interpretation of the results.
The trial tackles an important clinical problem; OA is common and any treatment that helps without causing significant harm would be more than welcome.
It yielded findings which are positive or at least promising; contrary to what some people seem to believe, I do like good news as much as anyone else.
I WISH THERE WERE MORE ALT MED STUDIES/RESEARCHERS OF THIS CALIBER!
Musculoskeletal and rheumatic conditions, often just called “arthritis” by lay people, bring more patients to alternative practitioners than any other type of disease. It is therefore particularly important to know whether alternative medicines (AMs) demonstrably generate more good than harm for such patients. Most alternative practitioners, of course, firmly believe in what they are doing. But what does the reliable evidence show?
To find out, 'Arthritis Research UK' sponsored a massive project, lasting several years, to review the literature and critically evaluate the trial data. They convened a panel of experts (I was one of them) to evaluate all the clinical trials available in 4 specific clinical areas. The results for those forms of AM that are taken by mouth or applied topically were published some time ago; now the report on practitioner-based treatments, written especially for lay people, has also been published. It covers the following 25 modalities:
Chiropractic (spinal manipulation)
Kinesiology (applied kinesiology)
Magnet therapy (static magnets)
Osteopathy (spinal manipulation)
Qigong (internal qigong)
Our findings are somewhat disappointing: only very few treatments were shown to be effective.
In the case of rheumatoid arthritis, 24 trials with a total of 1,500 patients were included. The totality of these data failed to provide convincing evidence that any form of AM is effective for this particular condition.
For osteoarthritis, 53 trials with a total of ~6,000 patients were available. They showed reasonably sound evidence only for two treatments: Tai chi and acupuncture.
Fifty trials were included with a total of ~3,000 patients suffering from fibromyalgia. The results provided weak evidence for Tai chi and relaxation-therapies, as well as more conclusive evidence for acupuncture and massage therapy.
Low back pain had attracted more research than any of the other diseases: 75 trials with ~11,600 patients. The evidence for Alexander Technique, osteopathy and relaxation therapies was promising but not ultimately convincing, and reasonably good evidence in support of yoga and acupuncture was also found.
The majority of the experts felt that the therapies in question did not frequently cause harm, but there were two important exceptions: osteopathy and chiropractic. For both, the report noted the existence of frequent yet mild, as well as serious but rare adverse effects.
As virtually all osteopaths and chiropractors earn their living by treating patients with musculoskeletal problems, the report comes as an embarrassment for these two professions. In particular, our conclusions about chiropractic were quite clear:
There are serious doubts as to whether chiropractic works for the conditions considered here: the trial evidence suggests that it’s not effective in the treatment of fibromyalgia and there’s only little evidence that it’s effective in osteoarthritis or chronic low back pain. There’s currently no evidence for rheumatoid arthritis.
Our point that chiropractic is not demonstrably effective for chronic back pain deserves some further comment, I think. It seems to be in contradiction to the guideline by NICE, as chiropractors will surely be quick to point out. How can this be?
One explanation is that, since the NICE-guidelines were drawn up, new evidence has emerged which was not positive. The recent Cochrane review, for instance, concludes that spinal manipulation “is no more effective for acute low-back pain than inert interventions, sham SMT or as adjunct therapy”
Another explanation could be that the experts on the panel writing the NICE-guideline were less than impartial towards chiropractic and thus arrived at false-positive or over-optimistic conclusions.
Chiropractors might say that my presence on the 'Arthritis Research' panel suggests that we were biased against chiropractic. If anything, the opposite is true: firstly, I am not even aware of having a bias against chiropractic, and no chiropractor has ever demonstrated otherwise; all I ever aim at (in my scientific publications) is to produce fair, unbiased but critical assessments of the existing evidence. Secondly, I was only one of a total of 9 panel members. As the following list shows, the panel included three experts in AM, and most sceptics would probably categorise two of them (Lewith and MacPherson) as clearly pro-AM:
Professor Michael Doherty – professor of rheumatology, University of Nottingham
Professor Edzard Ernst – emeritus professor of complementary medicine, Peninsula Medical School
Margaret Fisken – patient representative, Aberdeenshire
Dr Gareth Jones (project lead) – senior lecturer in epidemiology, University of Aberdeen
Professor George Lewith – professor of health research, University of Southampton
Dr Hugh MacPherson – senior research fellow in health sciences, University of York
Professor Gary Macfarlane (chair of committee) – professor of epidemiology, University of Aberdeen
Professor Julius Sim – professor of health care research, Keele University
Jane Tadman – representative from Arthritis Research UK, Chesterfield
What can we conclude from all that? I think it is safe to say that the evidence for practitioner-based AMs as treatments of the 4 named conditions is disappointing. In particular, chiropractic is not a demonstrably effective therapy for any of them. This, of course, begs the question: for which condition is chiropractic proven to work? I am not aware of any – are you?
The question of whether spinal manipulation is an effective treatment for infant colic has attracted much attention in recent years. The main reason for this is, of course, that a few years ago Simon Singh disclosed in a comment that the British Chiropractic Association (BCA) was promoting chiropractic treatment for this and several other childhood conditions on their website. Simon famously wrote "they (the BCA) happily promote bogus treatments" and was subsequently sued for libel by the BCA. Eventually, the BCA lost the libel action, as well as lots of money, and the entire chiropractic profession ended up with enough egg on their faces to cook omelettes for all their patients.
At the time, the BCA had taken advice from several medical and legal experts; one of their medical advisers, I was told, was Prof George Lewith. Intriguingly, he and several others have just published a Cochrane review of manipulative therapies for infant colic. Here are the unabbreviated conclusions from their article:
“The studies included in this meta-analysis were generally small and methodologically prone to bias, which makes it impossible to arrive at a definitive conclusion about the effectiveness of manipulative therapies for infantile colic. The majority of the included trials appeared to indicate that the parents of infants receiving manipulative therapies reported fewer hours crying per day than parents whose infants did not, based on contemporaneous crying diaries, and this difference was statistically significant. The trials also indicate that a greater proportion of those parents reported improvements that were clinically significant. However, most studies had a high risk of performance bias due to the fact that the assessors (parents) were not blind to who had received the intervention. When combining only those trials with a low risk of such performance bias, the results did not reach statistical significance. Further research is required where those assessing the treatment outcomes do not know whether or not the infant has received a manipulative therapy. There are inadequate data to reach any definitive conclusions about the safety of these interventions”
Cochrane reviews also carry a “plain language” summary which might be easier to understand for lay people. And here are the conclusions from this section of the review:
The studies involved too few participants and were of insufficient quality to draw confident conclusions about the usefulness and safety of manipulative therapies. Although five of the six trials suggested crying is reduced by treatment with manipulative therapies, there was no evidence of manipulative therapies improving infant colic when we only included studies where the parents did not know if their child had received the treatment or not. No adverse effects were found, but they were only evaluated in one of the six studies.
If we read it carefully, this article seems to confirm that there is no reliable evidence to suggest that manipulative therapies are effective for infant colic. In the analyses, the positive effect disappears when the parents are properly blinded; thus it is due to expectation or placebo. The studies that seem to show a positive effect are false positives, and spinal manipulation is, in fact, not effective.
The analyses disclose another intriguing aspect: most trials failed to mention adverse effects. This confirms the findings of our own investigation and amounts to a remarkable breach of publication ethics (nobody seems to be astonished by this fact; is it normal that chiropractic researchers ignore generally accepted rules of ethics?). It also reflects badly on the ability of the investigators of the primary studies to be objective. They seem to aim at demonstrating only the positive effects of their intervention; science is, however, not about confirming the researchers’ prejudices, it is about testing hypotheses.
The most remarkable thing about the new Cochrane review is, I think, the incongruence between the actual results and the authors’ conclusion. To a critical observer, the former are clearly negative, but the latter sound almost positive. I think this raises the question of reviewer bias.
We have recently discussed on this blog whether reviews by one single author are necessarily biased. The new Cochrane review has 6 authors, and it seems to me that its conclusions are considerably more biased than my single-author review of chiropractic spinal manipulation for infant colic; in 2009, I concluded simply that “the claim [of effectiveness] is not based on convincing data from rigorous clinical trials”.
Which of the two conclusions describes the facts more helpfully and more accurately?
I think I rest my case.
In my last post, we discussed the “A+B versus B” trial design as a tool for producing false positive results. This method is currently very popular in alternative medicine, yet it is by no means the only approach that can mislead us. Today, let’s look at other popular options with a view to protecting ourselves against trialists who might naively or wilfully fool us.
The crucial flaw of the “A+B versus B” design is that it fails to account for non-specific effects. If the patients in the experimental group experience better outcomes than those in the control group, this difference could well be due to effects that are unrelated to the experimental treatment. There are, of course, several further ways to ignore non-specific effects in clinical research. The simplest option is to include no control group at all. Homeopaths, for instance, are very proud of studies which show that about 70% of their patients experience benefit after taking their remedies. This type of result tends to impress journalists, politicians and other people who fail to realise that it might be due to a host of factors, e.g. the placebo effect, the natural history of the disease, regression towards the mean, or treatments which patients self-administered while taking the homeopathic remedies. It is therefore misleading to make causal inferences from such data.
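The point about uncontrolled data can be made concrete with a toy simulation (a sketch under invented assumptions: the symptom scores, the enrolment threshold and the patient model are all hypothetical, chosen only to show how regression towards the mean can inflate apparent response rates):

```python
import random

random.seed(1)

def improved_fraction(n_patients=1000, threshold=6.0):
    """Fraction of untreated 'patients' who look improved at follow-up."""
    improved = 0
    for _ in range(n_patients):
        baseline = random.gauss(5, 1)              # patient's usual severity
        entry = baseline + random.gauss(0, 2)      # today's symptom score
        while entry < threshold:                   # patients enrol on a bad day
            entry = baseline + random.gauss(0, 2)
        follow_up = baseline + random.gauss(0, 2)  # an ordinary later day
        improved += follow_up < entry
    return improved / n_patients

# With no treatment at all, well over 70% of these simulated patients
# "improve", purely because they were recruited at a symptom peak.
print(improved_fraction())
```

Without a control group, such a response rate is indistinguishable from a genuine treatment effect.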
Another easy method of generating false positive results is to omit blinding. The purpose of blinding the patient, the therapist and the evaluator of the outcomes in clinical trials is to make sure that expectation does not cause or contribute to the outcome. They say that expectation can move mountains; this might be an exaggeration, but it can certainly influence the result of a clinical trial. Patients who hope for a cure regularly do get better even if the therapy they receive is useless, and therapists as well as evaluators of the outcomes tend to view the results through rose-tinted spectacles if they have preconceived ideas about the experimental treatment. Similarly, the parents of a child or the owners of an animal can transfer their expectations, and this is one of several reasons why it is incorrect to claim that children and animals are immune to placebo effects.
Failure to randomise is another source of bias which can make an ineffective therapy look like an effective one when tested in a clinical trial. If we allow patients or trialists to choose which patients receive the experimental treatment and which get the control treatment, it is likely that the two groups will differ in a number of variables. Some of these variables might, in turn, impact on the outcome. If, for instance, doctors allocate their patients to the experimental and control groups, they might select those who are likely to respond for the former and those who are not for the latter. This may not happen with malicious intent but through intuition or instinct: responsible health care professionals want those patients who, in their experience, have the best chance of benefiting from a given treatment to receive that treatment. Only randomisation can, when done properly, make sure we are comparing comparable groups of patients, and non-randomisation is likely to produce misleading findings.
While these options for producing false positives are all too obvious, the next possibility is slightly more intriguing. It refers to studies which do not test whether an experimental treatment is superior to another one (often called superiority trials) but instead attempt to assess whether it is equivalent to a therapy that is generally accepted to be effective. The idea is that, if both treatments produce the same or similarly positive results, both must be effective. For instance, such a study might compare the effects of acupuncture to those of a common pain-killer. Such trials are aptly called equivalence or non-inferiority trials, and they offer a wide range of possibilities for misleading us. If, for example, such a trial has too few patients, it might show no difference where, in fact, there is one. Let’s consider a deliberately silly example: someone comes up with the idea of comparing antibiotics to acupuncture as treatments of bacterial pneumonia in elderly patients. The researchers recruit 10 patients for each group, and the results reveal that, in one group, 2 patients died, while, in the other, the number was 3. The statistical tests show that the difference of just one patient is not statistically significant, and the authors therefore conclude that acupuncture is just as good for bacterial infections as antibiotics.
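The statistics of this silly example are easy to check (a minimal sketch; the choice of Fisher’s exact test, implemented here from scratch with only the standard library, is my own illustration and not part of the hypothetical trial):

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    r1, k = a + b, a + c                       # row-1 total, column-1 total
    def p_table(x):                            # hypergeometric probability of
        return comb(k, x) * comb(n - k, r1 - x) / comb(n, r1)  # x events in row 1
    p_obs = p_table(a)
    lo, hi = max(0, r1 + k - n), min(r1, k)
    # Sum the probabilities of all tables at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# The hypothetical trial: 2 deaths out of 10 versus 3 deaths out of 10
p = fisher_exact_p(2, 8, 3, 7)
print(round(p, 3))  # 1.0: no evidence of any difference between the groups
```

A p-value of 1.0 means this tiny trial provides no evidence either way; with only 10 patients per group, even a substantial true difference in death rates would usually fail to reach statistical significance.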
Even trickier is the option to under-dose the treatment given to the control group in an equivalence trial. In our hypothetical example, the investigators might subsequently recruit hundreds of patients in an attempt to overcome the criticism of their first study; they then decide to administer a sub-therapeutic dose of the antibiotic in the control group. The results would then apparently confirm the researchers’ initial finding, namely that acupuncture is as good as the antibiotic for pneumonia. Acupuncturists might then claim that their treatment has been proven in a very large randomised clinical trial to be effective for treating this condition, and people who do not happen to know the correct dose of the antibiotic could easily be fooled into believing them.
Obviously, the results would be more impressive, if the control group in an equivalence trial received a therapy which is not just ineffective but actually harmful. In such a scenario, the most useless or even slightly detrimental treatment would appear to be effective simply because it is equivalent to or less harmful than the comparator.
A variation on this theme is the plethora of controlled clinical trials which compare one unproven therapy to another unproven treatment. Predictably, the results indicate that there is no difference in the clinical outcomes experienced by the patients in the two groups. Enthusiastic researchers then tend to conclude that this proves both treatments to be equally effective.
Another option for creating misleadingly positive findings is to cherry-pick the results. Most trials have many outcome measures; for instance, a study of acupuncture for pain control might quantify pain in half a dozen different ways; it might also measure the length of treatment until the pain has subsided, the amount of medication the patients took in addition to receiving acupuncture, the days off work because of pain, the partner’s impression of the patient’s health status, the patient’s quality of life, the frequency of sleep being disrupted by pain, etc. If the researchers then evaluate all the results, they are likely to find that one or two of them have changed in the direction they wanted. This can well be a chance finding: with the typical statistical tests, one in 20 outcome measures would produce a significant result purely by chance. In order to mislead us, the researchers only need to “forget” about all the negative results and focus their publication on the ones which, by chance, have come out as they had hoped.
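The “one in 20” arithmetic is simple to verify (an illustrative calculation; the outcome counts are examples, and the outcomes are assumed to be statistically independent, which is a simplification):

```python
# Chance of at least one "significant" result purely by chance, when m
# independent outcome measures are each tested at alpha = 0.05.
alpha = 0.05
for m in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** m
    print(f"{m:2d} outcomes -> P(at least one false positive) = {p_any:.2f}")
# With 20 outcome measures, the chance of at least one spurious
# "positive" finding is about 64%.
```

This is why rigorous trials pre-specify a single primary outcome rather than hunting through all the endpoints afterwards.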
One foolproof method of misleading the public is to draw conclusions which are not supported by the data. Imagine you have generated squarely negative data with a trial of homeopathy. As an enthusiast of homeopathy, you are far from happy with your own findings; in addition, you might have a sponsor who puts pressure on you. What can you do? The solution is simple: you only need to highlight at least one positive message in the published article. In the case of homeopathy, you could, for instance, make a major issue of the fact that the treatment was remarkably safe and cheap: not a single patient died, and most were very pleased with the treatment, which was not even very expensive.
And finally, there is always the possibility of overt cheating. Researchers are only human and are thus not immune to temptation. They may have conflicts of interest or may know that positive results are much easier to publish than negative ones. Certainly they want to publish their work – “publish or perish”! So, faced with disappointing results of a study, they might decide to prettify them or even invent new ones which are more pleasing to them, their peers, or their sponsors.
Am I claiming that this sort of thing only happens in alternative medicine? No! Obviously, the way to minimise the risk of such misconduct is to train researchers properly and make sure they are able to think critically. Am I suggesting that investigators of alternative medicine are often not well-trained and almost always uncritical? Yes.