Some sceptics are convinced that, in alternative medicine, there is no evidence. This assumption is wrong, I am afraid, and statements of this nature can actually play into the hands of apologists of bogus treatments: they can then easily demonstrate the sceptics to be mistaken or “biased”, as they would probably say. The truth is that there is plenty of evidence – and lots of it is positive, at least at first glance.
Alternative medicine researchers have been very industrious during the last two decades to build up a sizable body of ‘evidence’. Consequently, one often finds data even for the most bizarre and implausible treatments. Take, for instance, the claim that homeopathy is an effective treatment for cancer. Those who promote this assumption have no difficulties in locating some weird in-vitro study that seems to support their opinion. When sceptics subsequently counter that in-vitro experiments tell us nothing about the clinical situation, apologists quickly unearth what they consider to be sound clinical evidence.
An example is this prospective observational 2011 study of cancer patients from two differently treated cohorts: one with patients under complementary homeopathic treatment (HG; n = 259), and one with conventionally treated cancer patients (CG; n = 380). Its main outcome measures were the change in quality of life after 3 months and after one year, as well as impairment by fatigue, anxiety or depression. The results of this study show significant improvements in most of these endpoints, and the authors concluded that they “observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment”.
Another, in some ways even better example is this 2005 observational study of 6544 consecutive patients from the Bristol Homeopathic Hospital. Every patient attending the hospital outpatient unit for a follow-up appointment was included, commencing with their first follow-up attendance. Of these patients 70.7% (n = 4627) reported positive health changes, with 50.7% (n = 3318) recording their improvement as better or much better. The authors concluded that homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases.
The principle that is being followed here is simple:
- believers in a bogus therapy conduct a clinical trial which is designed to generate an apparently positive finding;
- the fact that the study cannot tell us anything about cause and effect is cleverly hidden or belittled;
- they publish their findings in one of the many journals that specialise in this sort of nonsense;
- they make sure that advocates across the world learn about their results;
- the community of apologists of this treatment picks up the information without the slightest critical analysis;
- the researchers conduct more and more of such pseudo-research;
- nobody attempts to do some real science: the believers do not truly want to falsify their hypotheses, and the real scientists find it unreasonable to conduct research on utterly implausible interventions;
- thus the body of false or misleading ‘evidence’ grows and grows;
- proponents start publishing systematic reviews and meta-analyses of their studies which are devoid of critical input;
- too few critics point out that these reviews are fatally flawed – ‘rubbish in, rubbish out’!
- eventually politicians, journalists, health care professionals and other people who did not necessarily start out as believers in the bogus therapy are convinced that the body of evidence is impressive and justifies implementation;
- important health care decisions are thus based on data which are false and misleading.
So, what can be done to prevent such pseudo-evidence from being mistaken for solid proof, which might eventually mislead many into believing that bogus treatments are based on reasonably sound data? I think the following measures would be helpful:
- authors should abstain from publishing over-enthusiastic conclusions which can all too easily be misinterpreted (given that the authors are believers in the therapy, this is not a realistic option);
- editors might consider rejecting studies which contribute next to nothing to our current knowledge (given that these studies are usually published in journals that are in the business of promoting alternative medicine at any cost, this option is also not realistic);
- if researchers report highly preliminary findings, there should be an obligation to do further studies in order to confirm or refute the initial results (not realistic either, I am afraid);
- in case this does not happen, editors should consider retracting the paper reporting unconfirmed preliminary findings (utterly unrealistic).
What then can REALISTICALLY be done? I wish I knew the answer! All I can think of is that sceptics should educate the rest of the population to think and analyse such ‘evidence’ critically…but how realistic is that?
We have probably all fallen into the trap of thinking that something which has stood the ‘test of time’, i.e. something that has been used for centuries with apparent success, must be ok. In alternative medicine, this belief is extremely widespread, and one could argue that the entire sector is built on it. Influential proponents of ‘traditional’ medicine like Prince Charles do their best to strengthen this assumption. Sadly, however, it is easily exposed as a classic fallacy: things that have stood the ‘test of time’ might work, of course, but the ‘test of time’ is never a proof of anything.
A recent study brought this message home loud and clear. This trial tested the efficacy of Rhodiola crenulata (R. crenulata), a traditional remedy which has been used widely in the Himalayan areas and in Tibet to prevent acute mountain sickness. As no scientific studies of this traditional treatment existed, the researchers conducted a double-blind, placebo-controlled crossover RCT to test its efficacy in acute mountain sickness prevention.
Healthy adult volunteers were randomized to two treatment sequences, receiving either 800 mg R. crenulata extract or placebo daily for 7 days before ascent and two days during mountaineering. After a three-month wash-out period, they were crossed over to the alternate treatment. On each occasion, the participants ascended rapidly from 250 m to 3421 m. The primary outcome measure was the incidence of acute mountain sickness with headache and at least one of the symptoms of nausea or vomiting, fatigue, dizziness, or difficulty sleeping.
One hundred and two participants completed the trial. No significant differences in the incidence of acute mountain sickness were found between R. crenulata extract and placebo groups. If anything, the incidence of severe acute mountain sickness with Rhodiola extract was slightly higher compared to the one with placebo: 35.3% vs. 29.4%.
R. crenulata extract was not effective in reducing the incidence or severity of acute mountain sickness as compared to placebo.
Similar examples could be found by the dozen. They demonstrate very clearly that the notion of the ‘test of time’ is erroneous: a treatment which has a long history of usage is not necessarily effective (or safe) – not only that, it might be dangerous. The true value of a therapy cannot be judged by experience; for that, we need rigorous clinical trials. Acute mountain sickness is a potentially life-threatening condition for which there are reasonably effective treatments. If people relied on the ‘ancient wisdom’ instead of using a therapy that actually works, they might pay for their error with their lives. The sooner alternative medicine proponents realise that, the better.
This post will probably work best if you have read the previous one, which describes how the parallel universe of acupuncture research insists on going in circles in order to avoid admitting that their treatment might not be as effective as they pretend. The way they achieve this is fairly simple: they conduct trials that are designed in such a way that they cannot possibly produce a negative result.
A brand-new investigation which was recently vociferously touted via press releases etc. as a major advance in proving the effectiveness of acupuncture is an excellent case in point. According to its authors, the aim of this study was to evaluate acupuncture versus usual care and counselling versus usual care for patients who continue to experience depression in primary care. This sounds alright, but wait!
755 patients with depression were randomised to one of three arms: 1) acupuncture, 2) counselling, or 3) usual care alone. The primary outcome was the difference in mean Patient Health Questionnaire (PHQ-9) scores at 3 months, with secondary analyses over 12 months of follow-up. Analysis was by intention-to-treat. PHQ-9 data were available for 614 patients at 3 months and 572 patients at 12 months. Patients attended a mean of 10 sessions for acupuncture and 9 sessions for counselling. Compared to usual care, there was a statistically significant reduction in mean PHQ-9 depression scores at 3 and 12 months for acupuncture and counselling.
From this, the authors conclude that both interventions were associated with significantly reduced depression at 3 months when compared to usual care alone.
Acupuncture for depression? Really? Our own systematic review with co-authors who are the most ardent apologists of acupuncture I have come across showed that the evidence is inconsistent on whether manual acupuncture is superior to sham… Therefore, I thought it might be a good idea to have a closer look at this new study.
One needs to search this article very closely indeed to find out that the authors did not actually evaluate acupuncture versus usual care and counselling versus usual care at all, and that comparisons were not made between acupuncture, counselling, and usual care (hints like the use of the word “alone” are all we get to guess that the authors’ text is outrageously misleading). Not even the methods section informs us what really happened in this trial. You find this hard to believe? Here is the unabbreviated part of the article that describes the interventions applied:
Patients allocated to the acupuncture and counselling groups were offered up to 12 sessions usually on a weekly basis. Participating acupuncturists were registered with the British Acupuncture Council with at least 3 years post-qualification experience. An acupuncture treatment protocol was developed and subsequently refined in consultation with participating acupuncturists. It allowed for customised treatments within a standardised theory-driven framework. Counselling was provided by members of the British Association for Counselling and Psychotherapy who were accredited or were eligible for accreditation having completed 400 supervised hours post-qualification. A manualised protocol, using a humanistic approach, was based on competences independently developed for Skills for Health. Practitioners recorded in logbooks the number and length of sessions, treatment provided, and adverse events. Further details of the two interventions are presented in Tables S2 and S3. Usual care, both NHS and private, was available according to need and monitored for all patients in all three groups for the purposes of comparison.
It is only in the results tables that we can determine what treatments were actually given; and these were:
1) Acupuncture PLUS usual care (i.e. medication)
2) Counselling PLUS usual care
3) Usual care
It’s almost a ‘no-brainer’ that, if you compare A+B to B (or, in this three-armed study, A+B vs C+B vs B), you find that the former is more effective than the latter – unless A is a negative, of course. As acupuncture has significant placebo effects, it can never be a negative, and thus this trial was an entirely foregone conclusion. As, in alternative medicine, one seems to need experimental proof even for ‘no-brainers’, we demonstrated some time ago that this common-sense theory is correct by conducting a systematic review of all acupuncture trials with such a design. We concluded that the ‘A + B versus B’ design is prone to false positive results… What makes this whole thing even worse is the fact that I once presented our review in a lecture where the lead author of the new trial was in the audience; so there can be no excuse of not being aware of the ‘no-brainer’.
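The arithmetic behind the ‘A + B versus B’ design can be sketched in a few lines of code. This is a toy simulation with invented effect sizes, not a re-analysis of the trial: both arms improve under usual care (B), and the add-on arm (A + B) additionally receives a non-specific placebo boost, even though A itself is assumed to be completely inert.

```python
import random

random.seed(0)

def simulate_trial(n_per_arm=250, placebo_boost=1.5):
    """Toy 'A + B versus B' trial.

    Every patient improves by some amount under usual care (B); the
    add-on arm (A + B) also gets a non-specific placebo boost, even
    though A is assumed to have no specific effect at all.
    """
    usual = [random.gauss(3.0, 2.0) for _ in range(n_per_arm)]
    addon = [random.gauss(3.0 + placebo_boost, 2.0) for _ in range(n_per_arm)]
    return sum(addon) / n_per_arm, sum(usual) / n_per_arm

addon_mean, usual_mean = simulate_trial()
print(f"A+B improvement: {addon_mean:.2f}, B improvement: {usual_mean:.2f}")
# The add-on arm looks 'superior' although A has no specific effect.
```

With any positive non-specific effect, the add-on arm wins essentially every time the simulation is run, which is precisely why this design cannot produce a negative result.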
Some might argue that this is a pragmatic trial, that it would have been unethical to not give anti-depressants to depressed patients and that therefore it was not possible to design this study differently. However, none of these arguments are convincing, if you analyse them closely (I might leave that to the comment section, if there is interest in such aspects). At the very minimum, the authors should have explained in full detail what interventions were given; and that means disclosing these essentials even in the abstract (and press release) – the part of the publication that is most widely read and quoted.
It is arguably unethical to ask for patients’ co-operation, use research funds etc. for a study whose results were known even before the first patient had been recruited. And it is surely dishonest to hide the true nature of the design so very sneakily in the final report.
In my view, this trial begs at least 5 questions:
1) How on earth did it pass the peer review process of one of the most highly reputed medical journals?
2) How did the protocol get ethics approval?
3) How did it get funding?
4) Does the scientific community really allow itself to be fooled by such pseudo-research?
5) What do I do to not get depressed by studies of acupuncture for depression?
As I write these words, I am travelling back from a medical conference. The organisers had invited me to give a lecture which I concluded saying: “anyone in medicine not believing in evidence-based health care is in the wrong business”. This statement was meant to stimulate the discussion and provoke the audience who were perhaps just a little on the side of those who are not all that taken by science.
I may well have been right, because, in the coffee break, several doctors disputed my point; to paraphrase their arguments: “You don’t believe in the value of experience, you think that science is the way to know everything. But you are wrong! Philosophers and other people, who are a lot cleverer than you, tell us that science is not the way to real knowledge; and in some forms of medicine we have a wealth of experience which we cannot ignore. This is at least as important as scientific knowledge. Take TCM, for instance, thousands of years of tradition must mean something; in fact it tells us more than science will ever be able to. Qi-energy, for instance, is a concept based on experience, and science is useless at verifying it.”
I disagreed, of course. But I am afraid that I did not convince my colleagues. The appeal to tradition is amazingly powerful, so much so that even well-seasoned physicians fall for it. Yet it nevertheless is a fallacy, I am sure.
So what does experience tell us, how is it generated and why should it be unreliable?
On the level of the individual, experience emerges when a clinician makes similar observations several times in a row. This is so persuasive that few doctors are immune to the phenomenon. Let’s assume the experience is about acupuncture, more precisely about acupuncture for smoking cessation. The acupuncturist presumably has learnt during his training that his therapy works for that indication via stimulating the flow of Qi, and promptly tries it on several patients. Some of them come back for more and report that they find it easier to give up cigarettes after consulting him. This happens repeatedly, and our clinician forthwith is convinced – in fact, he knows – that acupuncture is effective for smoking cessation.
If we critically analyse this scenario, what does it tell us? It tells us very little of relevance, I am afraid. The scenario is entirely compatible with a whole host of explanations which have nothing to do with the effects of acupuncture per se:
- Those patients who did not manage to stop smoking might not have returned. Only seeing his successes without his failures, the acupuncturist would have got the wrong end of the stick.
- Human memory is selective such that the few patients who did come back and reported failure might easily get forgotten by the clinician. We all remember the good things and forget the disappointments, particularly if we are clinicians.
- The placebo-effect might have played a dirty trick on the experience of our acupuncturist.
- Some patients might have used nicotine patches that helped them to stop smoking without disclosing this fact to the acupuncturist, who then, of course, attributed the benefit to his needling.
- The acupuncturist – being a very kind and empathetic clinician – might have involuntarily induced some of his patients to show kindness in return and thus tell porkies about their smoking habits which would have created a false positive impression about the effectiveness of his treatment.
- Being so empathetic, the acupuncturists would have provided lots of encouragement to stop smoking which, in some patients, might have been sufficient to kick the habit.
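The first two of these distortions – selective return and selective memory – can be illustrated with a toy simulation (all rates below are invented for illustration): if patients who quit are far more likely to come back than those who failed, the quit rate the practitioner observes among returning patients can be much higher than the true rate.

```python
import random

random.seed(1)

# Illustrative, assumed rates - not data from any real practice:
TRUE_QUIT_RATE = 0.20   # smokers who quit regardless of treatment
RETURN_IF_QUIT = 0.80   # quitters happily return to report success
RETURN_IF_NOT = 0.20    # failures quietly stay away

returned_quit = returned_total = 0
for _ in range(10_000):
    quit = random.random() < TRUE_QUIT_RATE
    returns = random.random() < (RETURN_IF_QUIT if quit else RETURN_IF_NOT)
    if returns:
        returned_total += 1
        returned_quit += quit  # True counts as 1

observed = returned_quit / returned_total
print(f"true quit rate: {TRUE_QUIT_RATE:.0%}, "
      f"quit rate among returning patients: {observed:.0%}")
```

With these assumed numbers, the practitioner sees roughly half of his returning patients succeed while the true quit rate is only 20% – and no needle had to do anything for this impression to arise.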
The long and short of all this is that our acupuncturist gradually got convinced by this interplay of factors that Qi exists and that acupuncture is an effective treatment. Henceforth he would bet his last shirt that he is right about this – after all, he has seen it with his own eyes, not just once but many times. And he will doubt anyone who shows him evidence that says otherwise. In fact, he is likely to become very sceptical about scientific evidence in general – just like the doctors who talked to me after my lecture.
On a population level, such experience will be prevalent in not just one but most acupuncturists. Our clinician’s experience is certainly not unique; others will have made it too. In fact, as an acupuncturist, it is hard not to make it. Acupuncturists would have told everyone else about it, perhaps reported it at conferences or published it in articles or books. Experience of this nature is passed on from generation to generation, and soon someone will be able to demonstrate that acupuncture has been used ‘effectively’ for smoking cessation for decades or centuries. The creation of a myth out of unreliable experience is thus complete.
Am I saying that experience of this nature is always and necessarily wrong or useless? No, I am not. It can be and often is correct. But, at the same time, it is frequently incorrect. It can serve as a valuable indicator but not more. Experience is not a tool for reliably informing us about the effectiveness of medical interventions. Experience-based medicine is an obsolete pseudo-medicine burdened with concepts that are counter-productive to optimal health care.
Philosophers and other people who are much cleverer than I am have been trying for some time to separate good from bad science and evidence from experience. Most recently, two philosophers, MASSIMO PIGLIUCCI and MAARTEN BOUDRY, commented specifically on this problem in relation to TCM. I leave you with some extensive quotes from what they wrote.
… pointing out that some traditional Chinese remedies (like drinking fresh turtle blood to alleviate cold symptoms) may in fact work, and therefore should not be dismissed as pseudoscience… risks confusing the possible effectiveness of folk remedies with the arbitrary theoretical-metaphysical baggage attached to it. There is no question that some folk remedies do work. The active ingredient of aspirin, for example, is derived from willow bark…
… claims about the existence of “Qi” energy, channeled through the human body by way of “meridians,” though, is a different matter. This sounds scientific, because it uses arcane jargon that gives the impression of articulating explanatory principles. But there is no way to test the existence of Qi and associated meridians, or to establish a viable research program based on those concepts, for the simple reason that talk of Qi and meridians only looks substantive, but it isn’t even in the ballpark of an empirically verifiable theory.
…the notion of Qi only mimics scientific notions such as enzyme actions on lipid compounds. This is a standard modus operandi of pseudoscience: it adopts the external trappings of science, but without the substance.
…The notion of Qi, again, is not really a theory in any meaningful sense of the word. It is just an evocative word to label a mysterious force of which we do not know and we are not told how to find out anything at all.
Still, one may reasonably object, what’s the harm in believing in Qi and related notions, if in fact the proposed remedies seem to help? Well, setting aside the obvious objections that the slaughtering of turtles might raise on ethical grounds, there are several issues to consider. To begin with, we can incorporate whatever serendipitous discoveries from folk medicine into modern scientific practice, as in the case of the willow bark turned aspirin. In this sense, there is no such thing as “alternative” medicine, there’s only stuff that works and stuff that doesn’t.
Second, if we are positing Qi and similar concepts, we are attempting to provide explanations for why some things work and others don’t. If these explanations are wrong, or unfounded as in the case of vacuous concepts like Qi, then we ought to correct or abandon them. Most importantly, pseudo-medical treatments often do not work, or are even positively harmful. If you take folk herbal “remedies,” for instance, while your body is fighting a serious infection, you may suffer severe, even fatal, consequences.
…Indulging in a bit of pseudoscience in some instances may be relatively innocuous, but the problem is that doing so lowers your defenses against more dangerous delusions that are based on similar confusions and fallacies. For instance, you may expose yourself and your loved ones to harm because your pseudoscientific proclivities lead you to accept notions that have been scientifically disproved, like the increasingly (and worryingly) popular idea that vaccines cause autism.
Philosophers nowadays recognize that there is no sharp line dividing sense from nonsense, and moreover that doctrines starting out in one camp may over time evolve into the other. For example, alchemy was a (somewhat) legitimate science in the times of Newton and Boyle, but it is now firmly pseudoscientific (movements in the opposite direction, from full-blown pseudoscience to genuine science, are notably rare)….
The borderlines between genuine science and pseudoscience may be fuzzy, but this should be even more of a call for careful distinctions, based on systematic facts and sound reasoning. To try a modicum of turtle blood here and a little aspirin there is not the hallmark of wisdom and even-mindedness. It is a dangerous gateway to superstition and irrationality.
“Wer heilt hat recht”. Every German knows this saying and far too many believe it. Literally translated, it means THE ONE WHO HEALS IS RIGHT, and indicates that, in health care, the proof of efficacy of a treatment is self-evident: if a clinician administers a treatment and the patient improves, she was right in prescribing it and the treatment must have been efficacious. The only English saying which is vaguely similar (but rarely used for therapies) is THE PROOF OF THE PUDDING IS IN THE EATING, translated into a medical context: the proof of the treatment is in the clinical outcome.
The saying is German but the sentiment behind it is amazingly widespread across the world of medicine, particularly the alternative one. If I had a fiver for each time a German journalist has asked me to comment on this ‘argument’, I could probably invite all my readers for a beer in the pub. The notion seems to be irresistibly appealing, and journalists, consumers, patients, politicians etc. fall for it like flies. It is popular foremost as a counter-argument against scientists’ objections to homeopathy and similar placebo-treatments. If the homeopath cured her patient, then she and her treatments are evidently fine!
It is time, I think, that I scrutinise the argument and refute it once and for all.
The very first thing to note is that placebos never cure a condition. They might alleviate symptoms, but cure? No!
The next issue relates to causality. The saying assumes that the sole reason for the clinical outcome is the treatment. Yet, if a patient’s symptoms improve, the reason might have been the prescribed treatment, but this is just one of a multitude of different options, e.g.:
- the placebo-effect
- the regression towards the mean
- the natural history of the condition
- the Hawthorne effect
- the compassion of the clinician
- other treatments that might have been administered in parallel
Often it is a complex mixture of these and possibly other phenomena that is responsible and, unless we run a proper clinical trial, we cannot even guess the relative importance of each factor. To claim in such a messy situation that the treatment given by the clinician was the cause of the improvement, is ridiculously simplistic and overtly wrong.
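One of these factors, regression towards the mean, can be demonstrated with a toy simulation (all numbers are invented): patients consult a clinician when a naturally fluctuating symptom happens to be at its worst, and on re-measurement the average score drifts back down, with no treatment effect whatsoever.

```python
import random

random.seed(2)

# Each patient's pain fluctuates around a stable personal baseline.
def pain(baseline):
    return baseline + random.gauss(0, 2)  # day-to-day fluctuation

baselines = [random.gauss(5, 1) for _ in range(10_000)]
first = [pain(b) for b in baselines]

# Only patients who happen to feel bad enough seek help (and 'treatment').
consulting = [(b, f) for b, f in zip(baselines, first) if f > 7]
before = sum(f for _, f in consulting) / len(consulting)
after = sum(pain(b) for b, _ in consulting) / len(consulting)

print(f"pain at consultation: {before:.2f}, at follow-up: {after:.2f}")
# The average 'improves' although nothing was done to anyone.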
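One of these factors, regression towards the mean, can be demonstrated with a toy simulation (all numbers are invented): patients consult a clinician when a naturally fluctuating symptom happens to be at its worst, and on re-measurement the average score drifts back down, with no treatment effect whatsoever.

```python
import random

random.seed(2)

# Each patient's pain fluctuates around a stable personal baseline.
def pain(baseline):
    return baseline + random.gauss(0, 2)  # day-to-day fluctuation

baselines = [random.gauss(5, 1) for _ in range(10_000)]
first = [pain(b) for b in baselines]

# Only patients who happen to feel bad enough seek help (and 'treatment').
consulting = [(b, f) for b, f in zip(baselines, first) if f > 7]
before = sum(f for _, f in consulting) / len(consulting)
after = sum(pain(b) for b, _ in consulting) / len(consulting)

print(f"pain at consultation: {before:.2f}, at follow-up: {after:.2f}")
# The average 'improves' although nothing was done to anyone.
```

Whatever treatment was given between the two measurements would take the credit for this entirely spontaneous ‘improvement’ – which is exactly why uncontrolled observations cannot establish efficacy.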
But that is precisely what the saying WER HEILT HAT RECHT does. It assumes a simple mono-causal relationship that never exists in clinical settings. And, annoyingly, it somewhat arrogantly dismisses any scientific evidence by implying that the anecdotal observation is so much more accurate and relevant.
The true monstrosity of the saying can be easily disclosed with a little thought experiment. Let’s assume the saying is correct and we adopt it as a major axiom in health care. This would have all sorts of terrible consequences. For instance, any pharmaceutical company would be allowed to produce colourful placebos and sell them for a premium; they would only need to show that some patients do experience some relief after taking it. THE ONE WHO HEALS IS RIGHT!
The saying is a dangerously misleading platitude. That it happens to be German and that the Germans remain so frightfully fond of it disturbs me. That the notion, in one way or another, is deeply ingrained in the mind of charlatans across the world is worrying but hardly surprising – after all, it is said to have been coined by Samuel Hahnemann.
If one spends a lot of time, as I presently do, sorting out old files, books, journals etc., one is bound to come across plenty of weird and unusual things. I, for one, am slow at making progress with this task, mainly because I often start reading the material that is in front of me. It was on one of those occasions that I had begun studying a book written by one of the more fanatic proponents of alternative medicine and stumbled over the term THE PROOF OF EXPERIENCE. It made me think, and I began to realise that the notion behind these four words is quite characteristic of the field of alternative health care.
When I studied medicine, in the 1970s, we were told by our seniors what to do, which treatments worked for which conditions and why. They had all the experience and we, by definition, had none. Experience seemed synonymous with proof. Nobody dared to doubt the word of ‘the boss’. We were educated, I now realise, in the age of EMINENCE-BASED MEDICINE.
All of this gradually changed when the concepts of EVIDENCE-BASED MEDICINE became appreciated and generally adopted by responsible health care professionals. If the woman or man at the top of the medical ‘pecking order’ now claims something that is doubtful in view of the published evidence, it is possible (sometimes even desirable) to say so – no matter how junior the doubter happens to be. As a result, medicine has changed forever: progress is no longer made funeral by funeral [of the bosses]; instead, new evidence is much more swiftly translated into clinical practice.
Don’t get me wrong, EVIDENCE-BASED MEDICINE does not imply disrespect for EXPERIENCE; it merely takes it for what it is. And when EVIDENCE and EXPERIENCE fail to agree with each other, we have to take a deep breath, think hard and try to do something about it. Depending on the specific situation, this might involve further study or at least an acknowledgement of a degree of uncertainty. The tension between EXPERIENCE and EVIDENCE often is the impetus for making progress. The winner in this often complex story is the patient: she will receive a therapy which, according to the best available EVIDENCE and careful consideration of the EXPERIENCE, is best for her.
NOT SO IN ALTERNATIVE MEDICINE!!! Here EXPERIENCE still trumps EVIDENCE any time, and there is no need for acknowledging uncertainty: EXPERIENCE = proof!!!
In case you think I am exaggerating, I recommend thumbing through a few books on the subject. As I already stated, I have done this quite a bit in recent months, and I can assure you that there is very little evidence in these volumes to suggest that data, research, science, etc.. matter a hoot. No critical thinking is required, as long as we have EXPERIENCE on our side!
‘THE PROOF OF EXPERIENCE’ is still a motto that seems to be everywhere in alternative medicine. In many ways, it seems to me, this motto symbolises much of what is wrong with alternative medicine and the mind-set of its proponents. Often, the EXPERIENCE is in sharp contrast to the EVIDENCE. But this little detail does not seem to irritate anyone. Apologists of alternative medicine stubbornly ignore such contradictions. In the rare case where they do comment at all, the gist of their response normally is that EXPERIENCE is much more relevant than EVIDENCE. After all, EXPERIENCE is based on hundreds of years and thousands of ‘real-life’ cases, while EVIDENCE is artificial and based on just a few patients.
As far as I can see, nobody in alternative medicine pays more than lip service to the fact that EXPERIENCE can be [and often is] grossly misleading. Little or no acknowledgement exists of the fact that, in clinical routine, there are simply far too many factors that interfere with our memories, impressions, observations and conclusions. If a patient gets better after receiving a therapy, she might have improved for a dozen reasons which are unrelated to the treatment per se. And if a patient does not get better, she might not come back at all, and the practitioner’s memory will therefore fail to register such events as therapeutic failures. Whatever EXPERIENCE is, in health care, it rarely constitutes proof!
The notion of THE PROOF OF EXPERIENCE, it thus turns out, is little more than self-serving, wishful thinking which characterises the backward attitude that seems to be so remarkably prevalent in alternative medicine. No tension between EXPERIENCE and EVIDENCE is noticeable because the EVIDENCE is being ignored; as a result, there is no progress. The loser is, of course, the patient: she will receive a treatment based on criteria which are less than reliable.
Isn’t it time to bury the fallacy of THE PROOF OF EXPERIENCE once and for all?
Swiss chiropractors have just published a clinical trial investigating outcomes of patients with radiculopathy due to cervical disk herniation (CDH). Patients with neck pain and dermatomal arm pain, sensory, motor, or reflex changes corresponding to the involved nerve root, and at least one positive orthopaedic test for cervical radiculopathy were included. CDH was confirmed by magnetic resonance imaging. All patients received regular neck manipulations.
Baseline data included two pain numeric rating scales (NRSs), for neck and arm, and the Neck Disability Index (NDI). At two, four and twelve weeks after the initial consultation, patients were contacted by telephone, and the data for NDI, NRSs, and patient’s global impression of change were collected. High-velocity, low-amplitude thrusts were administered by experienced chiropractors. The proportion of patients reporting to feel “better” or “much better” on the patient’s global impression of change scale was calculated. Pre-treatment and post-treatment NRSs and NDIs were analysed.
Fifty patients were included. At two weeks, 55.3% were “improved”; at four weeks, 68.9%; and at twelve weeks, 85.7%. Statistically significant decreases in neck pain, arm pain, and NDI scores were noted at one and three months compared with baseline scores. Of all sub-acute/chronic patients, 76.2% were improved at three months.
The authors concluded that most patients in this study, including sub-acute/chronic patients, with symptomatic magnetic resonance imaging-confirmed CDH treated with spinal manipulative therapy, reported significant improvement with no adverse events.
In the presence of disc herniation, chiropractic manipulations have been described to cause serious complications. Some experts therefore believe that CDH is a contra-indication for spinal manipulation. The authors of this study imply, however, that it is not – on the contrary, they think it is an effective intervention for CDH.
One does not need to be a sceptic to notice that the basis for this assumption is less than solid. The study had no control group. This means that the observed effect could have been due to:
a placebo response,
the regression towards the mean,
the natural history of the condition,
or other factors which have nothing to do with the chiropractic intervention per se.
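Regression towards the mean, in particular, is easy to underestimate. A minimal simulation (with entirely hypothetical numbers, not data from the trial) illustrates how patients enrolled because their baseline pain score is high will, on average, score lower at follow-up even when nothing whatsoever is done to them:

```python
import random

random.seed(1)

# Each patient has a stable "true" pain level; every measurement adds noise.
def measure(true_pain):
    return true_pain + random.gauss(0, 1.5)

population = [random.gauss(5, 1) for _ in range(10_000)]

# Enrol only patients whose baseline measurement is high (>= 7/10),
# as trials of painful conditions implicitly do.
enrolled = [(p, b) for p in population if (b := measure(p)) >= 7]

baseline_mean = sum(b for _, b in enrolled) / len(enrolled)
followup_mean = sum(measure(p) for p, _ in enrolled) / len(enrolled)

print(f"baseline mean:  {baseline_mean:.2f}")
print(f"follow-up mean: {followup_mean:.2f}")  # lower, despite zero treatment
```

Without a control group, this purely statistical drift is indistinguishable from a treatment effect.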
And what about the interesting finding that no adverse effects were noted? Does that mean that the treatment is safe? Sorry, but it most certainly does not! In order to generate reliable results about possibly rare complications, the study would have needed to include not 50 but well over 50 000 patients.
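The arithmetic behind this point is the so-called “rule of three”: if zero events are observed in n patients, the upper limit of the 95% confidence interval for the true event rate is approximately 3/n. A quick sketch of my own (not taken from the study):

```python
def upper_95_limit(n):
    """Rule of three: upper 95% confidence limit on the true event rate
    when zero events have been observed in n patients."""
    return 3 / n

# With the trial's 50 patients, "no adverse events" is still compatible
# with a complication rate of up to 6% -- hardly reassuring.
print(f"n = 50:     rate could be as high as {upper_95_limit(50):.1%}")

# Only with tens of thousands of patients do zero events begin to
# exclude rare but serious complications.
print(f"n = 50 000: rate could be as high as {upper_95_limit(50_000):.4%}")
```

In other words, a 50-patient study with no observed complications tells us next to nothing about rare harms.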
So what does the study really tell us? I have pondered over this question for some time and arrived at the following answer: NOTHING!
Is that a bit harsh? Well, perhaps yes. And I will revise my verdict slightly: the study does tell us something, after all – chiropractors tend to confuse research with the promotion of very doubtful concepts at the expense of their patients. I think there is a name for this phenomenon: PSEUDO-SCIENCE.
Research is essential for progress, and research in alternative medicine is important for advancing alternative medicine, one would assume. But why then do I often feel that research in this area hinders progress? One of the reasons is, in my view, the continuous drip, drip, drip of misleading conclusions usually drawn from weak studies. I could provide thousands of examples; here is one recently published article chosen at random which seems as good as any other to make the point.
Researchers from the Department of Internal and Integrative Medicine, Faculty of Medicine, University of Duisburg-Essen, Germany set out to investigate associations of regular yoga practice with quality of life and mental health in patients with chronic diseases. Using a case-control study design, 186 patients with chronic diseases who had elected to regularly practice yoga were selected and compared to controls who had chosen to not regularly practice yoga. Patients were matched individually on gender, main diagnosis, education, and age. Patients’ quality of life, mental health, life satisfaction, and health satisfaction were also assessed. The analyses show that patients who regularly practiced yoga had a significantly better general health status, a higher physical functioning, and physical component score on the SF-36 than those who did not.
The authors concluded that practicing yoga under naturalistic conditions seems to be associated with increased physical health but not mental health in chronically diseased patients.
Why do I find these conclusions misleading?
In alternative medicine, we have an irritating abundance of such correlative research. By definition, it does not allow us to make inferences about causation. Most (but by no means all) authors are therefore laudably careful when choosing their terminology. Certainly, the present article does not claim that regular yoga practice has caused increased physical health; it rightly speaks of “associations“. And surely, there is nothing wrong with that – or is there?
Perhaps, I will be accused of nit-picking, but I think the results are presented in a slightly misleading way, and the conclusions are not much better.
Why do the authors claim that patients who regularly practiced yoga had a significantly better general health status, a higher physical functioning, and physical component score on the SF-36 than those who did not? I know that the statement is strictly speaking correct, but why do they not write that “patients who had a significantly better general health status, a higher physical functioning, and physical component score on the SF-36 were more likely to practice yoga regularly”? After all, this too is correct! And why does the conclusion not state that better physical health seems to be associated with a greater likelihood of practicing yoga?
The possibility that the association is the other way round deserves serious consideration, in my view. Is it not logical to assume that, if someone is relatively fit and healthy, he/she is more likely to take up yoga (or table-tennis, sky-diving, pole dancing, etc.)?
It’s perhaps not a hugely important point, so I will not dwell on it – but, as the alternative medicine literature is full of such subtly misleading statements, I don’t find it entirely irrelevant either.
Did I previously imply that osteopaths are not very research-active? Shame on me!
Here are two brand-new studies by osteopaths and they both seem to show that their treatments work.
Well, perhaps we better have a closer look at them before we start praising osteopathic research efforts.
THE FIRST STUDY
Researchers from the ‘European Institute for Evidence Based Osteopathic Medicine’ in Chieti, Italy, investigated the effect of osteopathic manipulative therapy (OMT) on the length of hospital-stay (LOHS) in premature infants. They conducted an RCT on 110 preterm newborns admitted to a single specialised unit. Subjects with a gestational age between 28 and 38 weeks were randomized to receive either just routine care, or routine care with OMT for the period of hospitalization. Endpoints were differences in LOHS and daily weight gain. The results showed a mean difference in LOHS between the OMT and the control group: -5.906 days (95% C.I. -7.944, -3.869; p<0.001). However, OMT was not associated with any change in daily weight gain.
The authors’ conclusion was bold: OMT may have an important role in the management of preterm infants hospitalization.
THE SECOND STUDY
The second investigation suggested similarly positive effects of OMT on LOHS in a different setting. Using a retrospective cohort study, US osteopaths wanted to determine whether there is a relationship between post-operative use of OMT and post-operative outcomes in gastrointestinal surgical patients, including time to flatus, clear liquid diet, and bowel movement [all indicators for the length of the post-operative ileus] as well as LOHS. They thus assessed the records of 55 patients who underwent a major gastrointestinal operation in a hospital that had been routinely offering OMT to its patients. The analyses showed that 17 patients had received post-operative OMT and 38 had not. The two groups were similar in terms of all variables the researchers managed to assess. The time to bowel movement and to clear liquid diet did not differ significantly between the groups. The mean time to flatus was 4.7 days in the non-OMT group and 3.1 days in the OMT group (P=.035). The mean post-operative hospital LOHS was also reduced significantly with OMT, from 11.5 days in the non-OMT group to 6.1 days in the OMT group (P=.006).
The authors concluded that OMT applied after a major gastrointestinal operation is associated with decreased time to flatus and decreased postoperative hospital LOHS.
WHAT SHOULD WE MAKE OF THESE RESULTS?
Some people may have assumed that OMT is for bad backs; these two studies imply, however, that it can do much more. If the findings are correct, they have considerable implications: shortening the time patients have to spend in hospital would not only decrease individual suffering, it would also save us all tons of money! But do these results hold water?
The devil’s advocate in me cannot help being more than a little sceptical. I fail to see how OMT might shorten LOHS; it just does not seem plausible! Moreover, some of the results seem too good to be true. Could there be any alternative explanations for the observed findings?
The first study, I think, might merely demonstrate that more time spent handling premature babies provides a powerful developmental stimulus. These infants are therefore ready to leave hospital sooner than children who did not receive this additional boost. But the effect might not be related to OMT per se at all; if, for instance, the parents had handled their children for the same amount of time, the outcome would probably have been quite similar, possibly even better.
The second study is not an RCT and therefore it tells us little about cause and effect. We might speculate, for instance, that those patients who elected to have OMT were more active, had lived healthier lives, adhered more rigorously to a pre-operative diet, or differed in other variables from those patients who chose not to bother with OMT. Again, the observed difference in the duration of the post-operative ileus and consequently the LOHS would be entirely unrelated to OMT.
I therefore suggest treating these two studies with more than just a pinch of salt. Before hospitals all over the world start employing osteopaths right, left and centre in order to shorten their average LOHS, we might be well advised to plan and conduct a trial that avoids the pitfalls of the research so far. I would bet a fiver that, once we do a proper independent replication, we will find that both investigations did, in fact, generate false positive results.
My conclusion from all this is simple: RESEARCH CAN SOMETIMES BE MISLEADING, AND POOR QUALITY RESEARCH IS ALMOST INVARIABLY MISLEADING.
A recently published study by Danish researchers aimed at comparing the effectiveness of a patient education (PEP) programme with or without the added effect of chiropractic manual therapy (MT) to a minimal control intervention (MCI). Its results seem to indicate that chiropractic MT is effective. Is this the result chiropractors have been waiting for?
To answer this question, we need to look at the trial and its methodology in more detail.
A total of 118 patients with clinical and radiographic unilateral hip osteoarthritis (OA) were randomized into one of three groups: PEP, PEP+ MT or MCI. The PEP was taught by a physiotherapist in 5 sessions. The MT was delivered by a chiropractor in 12 sessions, and the MCI included a home stretching programme. The primary outcome measure was the self-reported pain severity on an 11-box numeric rating scale immediately following the 6-week intervention period. Patients were subsequently followed for one year.
The primary analyses included 111 patients. In the PEP+MT group, a statistically and clinically significant reduction in pain severity of 1.9 points was noted compared to the MCI. The number needed to treat for PEP+MT was 3. No difference was found between the PEP and the MCI groups. At 12 months, the difference favouring PEP+MT was maintained.
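For readers unfamiliar with the figure, a number needed to treat (NNT) is simply the reciprocal of the absolute difference in responder rates between the groups. The responder proportions below are purely hypothetical, chosen only to illustrate what an NNT of about 3 implies:

```python
def nnt(responders_treatment, responders_control):
    """Number needed to treat = 1 / absolute risk reduction (ARR),
    where ARR is the difference in the proportion of responders."""
    arr = responders_treatment - responders_control
    return 1 / arr

# Hypothetical responder rates consistent with an NNT of roughly 3
# (illustrative only; the paper reports the NNT, not these proportions):
# treating ~3 patients yields one additional responder over control.
print(round(nnt(0.63, 0.30)))
```

An NNT of 3 is a strong effect on its face, which makes the question of what actually caused it all the more important.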
The authors conclude that for primary care patients with osteoarthritis of the hip, a combined intervention of manual therapy and patient education was more effective than a minimal control intervention. Patient education alone was not superior to the minimal control intervention.
This is an interesting, pragmatic trial with a result suggesting that chiropractic MT in combination with PEP is effective in reducing the pain of hip OA. One could easily argue about the small sample size, the need for independent replication etc. However, my main concern is the fact that the findings can be interpreted in not just one but in at least two very different ways.
The obvious explanation would be that chiropractic MT is effective. I am sure that chiropractors would be delighted with this conclusion. But how sure can we be that it would reflect the truth?
I think an alternative explanation is just as (possibly more) plausible: the added time, attention and encouragement provided by the chiropractor (who must have been aware what was at stake and hence highly motivated) was the effective element in the MT-intervention, while the MT per se made little or no difference. The PEP+MT group had no less than 12 sessions with the chiropractor. We can assume that this additional care, compassion, empathy, time, encouragement etc. was a crucial factor in making these patients feel better and in convincing them to adhere more closely to the instructions of the PEP. I speculate that these factors were more important than the actual MT itself in determining the outcome.
In my view, such critical considerations regarding the trial methodology are much more than an exercise in splitting hairs. They are important in at least two ways.
Firstly, they remind us that clinical trials, whenever possible, should be designed such that they allow only one interpretation of their results. This can sometimes be a problem with pragmatic trials of this nature. It would be wise, I think, to conduct pragmatic trials only of interventions which have previously been proven to work. To the best of my knowledge, chiropractic MT as a treatment for hip OA does not belong to this category.
Secondly, it seems crucial to be aware of such methodological issues and to consider them carefully before research findings are translated into clinical practice. If not, we might end up with therapeutic decisions (or guidelines) which are quite simply not warranted.
I would not be in the least surprised if chiropractic interest groups were to use the current findings for promoting chiropractic in hip-OA. But what if the MT per se was ineffective, while the additional care, compassion and encouragement was effective? In this case, we would not need to recruit (and pay for) chiropractors and put up with the considerable risks chiropractic treatments can entail; we would merely need to modify the PE programme such that patients are better motivated to adhere to it.
As it stands, the new study does not tell us much that is of any practical use. In my view, it is a pragmatic trial which cannot readily be translated into evidence-based practice. It might get interpreted as good news for chiropractic but, in fact, it is not.