Has it ever occurred to you that much of the discussion about cause and effect in alternative medicine goes in circles without ever making progress? I have come to the conclusion that it does. Here I try to illustrate this point using the example of acupuncture, more precisely the endless discussion about how best to test acupuncture for efficacy. For those readers inclined to misunderstand me, I should explain that the sceptics’ view is in capital letters.
At the beginning there was the experience. Unaware of anatomy, physiology, pathology etc., people started sticking needles in other people’s skin, some 2000 years ago, and observed that they experienced relief of all sorts of symptoms. When an American journalist reported on this phenomenon in the 1970s, acupuncture became all the rage in the West. Acupuncture-fans then claimed that a 2000-year history is ample proof that acupuncture does work.
BUT ANECDOTES ARE NOTORIOUSLY UNRELIABLE!
Even the most enthusiastic advocates conceded that this is probably true. So they documented detailed case-series of lots of patients, calculated the average difference between the pre- and post-treatment severity of symptoms, submitted it to statistical tests, and published the conclusion that the effects of acupuncture are not just anecdotal but statistically significant.
BUT THIS EFFECT COULD BE DUE TO THE NATURAL HISTORY OF THE CONDITION!
“True enough”, grumbled the acupuncture-fans and conducted the very first controlled clinical trials. Essentially they treated one group of patients with acupuncture while another group received conventional treatments as usual. When they analysed the results, they found that the acupuncture group had improved significantly more. “Now do you believe us?”, they asked triumphantly, “acupuncture is clearly effective”.
NO! THIS OUTCOME MIGHT BE DUE TO SELECTION BIAS. SUCH A STUDY-DESIGN CANNOT ESTABLISH CAUSE AND EFFECT.
The acupuncturists felt slightly embarrassed because they had not thought of that. They had allocated their patients to the treatment according to patients’ choice. Thus the expectation of the patients (or the clinician) to get relief from acupuncture might have been the reason for the difference in outcome. So they consulted an expert in trial-design and were advised to allocate not by choice but by chance. In other words, they repeated the previous study but randomised patients to the two groups. Amazingly, their RCT still found a significant difference favouring acupuncture over treatment as usual.
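The methodological point behind the advice can be illustrated with a toy simulation (my own sketch, not part of the original argument; all numbers are assumptions): even a therapy with zero specific effect looks clearly superior when high-expectation patients self-select into it, while random allocation removes the imbalance.

```python
import random
import statistics

random.seed(42)

def simulate(n=2000, randomised=True):
    """Trial of a therapy whose true specific effect is zero.
    Outcomes are driven purely by patient expectation (a placebo
    response) plus noise. Allocation is either by patient choice
    (enthusiasts pick the therapy) or by chance."""
    treated, control = [], []
    for _ in range(n):
        expectation = random.gauss(0, 1)
        outcome = 0.5 * expectation + random.gauss(0, 1)  # no therapy term at all
        if randomised:
            gets_therapy = random.random() < 0.5
        else:
            gets_therapy = expectation > 0  # self-selection by expectation
        (treated if gets_therapy else control).append(outcome)
    return statistics.mean(treated) - statistics.mean(control)

biased = simulate(randomised=False)   # allocation by patients' choice
fair = simulate(randomised=True)      # allocation by chance
print(f"difference, allocation by choice: {biased:+.2f}")
print(f"difference, randomised:           {fair:+.2f}")
```

Under self-selection the group difference is substantial even though the therapy contributes nothing; under randomisation it hovers around zero.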
BUT THIS DIFFERENCE COULD BE CAUSED BY A PLACEBO-EFFECT!
Now the acupuncturists were in a bit of a pickle; as far as they could see, there was no good placebo for acupuncture! Eventually some methodologist-chap came up with the idea that, in order to mimic a placebo, they could simply stick needles into non-acupuncture points. When the acupuncturists tried that method, they found that there were improvements in both groups but the difference between real acupuncture and placebo was tiny and usually neither statistically significant nor clinically relevant.
NOW DO YOU CONCEDE THAT ACUPUNCTURE IS NOT AN EFFECTIVE TREATMENT?
Absolutely not! The results merely show that needling non-acupuncture points is not an adequate placebo. Obviously this intervention also sends a powerful signal to the brain which clearly makes it an effective intervention. What do you expect when you compare two effective treatments?
IF YOU REALLY THINK SO, YOU NEED TO PROVE IT AND DESIGN A PLACEBO THAT IS INERT.
At that stage, the acupuncturists came up with a placebo-needle that did not actually penetrate the skin; it worked like a mini stage-dagger, telescoping into itself while giving the impression of penetrating the skin just like the real thing. Surely this was an adequate placebo! The acupuncturists repeated their studies but, to their utter dismay, they found again that both groups improved and the difference in outcome between their new placebo and true acupuncture was minimal.
WE TOLD YOU THAT ACUPUNCTURE WAS NOT EFFECTIVE! DO YOU FINALLY AGREE?
Certainly not, they replied. We have thought long and hard about these intriguing findings and believe that they can be explained just like the last set of results: the non-penetrating needles touch the skin; this touch provides a stimulus powerful enough to have an effect on the brain; the non-penetrating placebo-needles are not inert and therefore the results merely depict a comparison of two effective treatments.
YOU MUST BE JOKING! HOW ARE YOU GOING TO PROVE THAT BIZARRE HYPOTHESIS?
We had many discussions and consensus meetings amongst the most brilliant brains in acupuncture about this issue and have arrived at the conclusion that your obsession with placebo, cause and effect etc. is ridiculous and entirely misplaced. In real life, we don’t use placebos. So, let’s instead address the ‘real life’ question: is acupuncture better than usual treatment? We have conducted pragmatic studies where one group of patients gets treatment as usual and the other group receives acupuncture in addition. These studies show that acupuncture is effective. This is all the evidence we need. Why can you not believe us?
NOW WE HAVE ARRIVED EXACTLY AT THE POINT WHERE WE WERE A LONG TIME AGO. SUCH A STUDY-DESIGN CANNOT ESTABLISH CAUSE AND EFFECT. YOU OBVIOUSLY CANNOT DEMONSTRATE THAT ACUPUNCTURE CAUSES CLINICAL IMPROVEMENT. THEREFORE YOU OPT TO PRETEND THAT CAUSE AND EFFECT ARE IRRELEVANT. YOU USE SOME IMITATION OF SCIENCE TO ‘PROVE’ THAT YOUR PRECONCEIVED IDEAS ARE CORRECT. YOU DO NOT SEEM TO BE INTERESTED IN THE TRUTH ABOUT ACUPUNCTURE AT ALL.
As I write these words, I am travelling back from a medical conference. The organisers had invited me to give a lecture which I concluded by saying: “anyone in medicine not believing in evidence-based health care is in the wrong business”. This statement was meant to stimulate the discussion and provoke the audience, who were perhaps just a little on the side of those who are not all that taken by science.
I may well have been right, because, in the coffee break, several doctors disputed my point; to paraphrase their arguments: “You don’t believe in the value of experience, you think that science is the way to know everything. But you are wrong! Philosophers and other people, who are a lot cleverer than you, tell us that science is not the way to real knowledge; and in some forms of medicine we have a wealth of experience which we cannot ignore. This is at least as important as scientific knowledge. Take TCM, for instance, thousands of years of tradition must mean something; in fact it tells us more than science will ever be able to. Qi-energy, for instance, is a concept based on experience, and science is useless at verifying it.”
I disagreed, of course. But I am afraid that I did not convince my colleagues. The appeal to tradition is amazingly powerful, so much so that even well-seasoned physicians fall for it. Yet it is nevertheless a fallacy, I am sure.
So what does experience tell us, how is it generated and why should it be unreliable?
On the level of the individual, experience emerges when a clinician makes similar observations several times in a row. This is so persuasive that few doctors are immune to the phenomenon. Let’s assume the experience is about acupuncture, more precisely about acupuncture for smoking cessation. The acupuncturist presumably has learnt during his training that his therapy works for that indication via stimulating the flow of Qi, and promptly tries it on several patients. Some of them come back for more and report that they find it easier to give up cigarettes after consulting him. This happens repeatedly, and our clinician forthwith is convinced – in fact, he knows – that acupuncture is effective for smoking cessation.
If we critically analyse this scenario, what does it tell us? It tells us very little of relevance, I am afraid. The scenario is entirely compatible with a whole host of explanations which have nothing to do with the effects of acupuncture per se:
- Those patients who did not manage to stop smoking might not have returned. Only seeing his successes without his failures, the acupuncturist would have got the wrong end of the stick.
- Human memory is selective such that the few patients who did come back and reported failure might easily get forgotten by the clinician. We all remember the good things and forget the disappointments, particularly if we are clinicians.
- The placebo-effect might have played a dirty trick on the experience of our acupuncturist.
- Some patients might have used nicotine patches that helped them to stop smoking without disclosing this fact to the acupuncturist who then, of course, attributed the benefit to his needling.
- The acupuncturist – being a very kind and empathetic clinician – might have involuntarily induced some of his patients to show kindness in return and thus tell porkies about their smoking habits which would have created a false positive impression about the effectiveness of his treatment.
- Being so empathetic, the acupuncturist would have provided lots of encouragement to stop smoking which, in some patients, might have been sufficient to kick the habit.
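How strongly the first two of these distortions can mislead is easy to demonstrate with a small simulation (my own illustration; the rates are made-up assumptions): here quitting happens at the background rate and is completely independent of the needling, yet because successes return and report more often than failures, the quit rate seen in the clinic is roughly three times the true one.

```python
import random

random.seed(7)

def observed_success_rate(n=10000, true_quit_rate=0.10,
                          return_if_success=0.8, return_if_failure=0.2):
    """Quitting occurs at the background rate regardless of the
    needling; successes are simply more likely to come back and
    report. The practitioner only ever counts the returners."""
    seen_success = seen_total = 0
    for _ in range(n):
        has_quit = random.random() < true_quit_rate
        p_return = return_if_success if has_quit else return_if_failure
        if random.random() < p_return:
            seen_total += 1
            seen_success += has_quit
    return seen_success / seen_total

rate = observed_success_rate()
print(f"true quit rate: 10%; quit rate seen in the clinic: {rate:.0%}")
```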
The long and short of all this is that our acupuncturist gradually got convinced by this interplay of factors that Qi exists and that acupuncture is an effective treatment. Henceforth he would bet his last shirt that he is right about this – after all, he has seen it with his own eyes, not just once but many times. And he will doubt anyone who shows him evidence that says otherwise. In fact, he is likely to become very sceptical about scientific evidence in general – just like the doctors who talked to me after my lecture.
On a population level, such experience will be prevalent in not just one but most acupuncturists. Our clinician’s experience is certainly not unique; others will have had it too. In fact, as an acupuncturist, it is hard not to. Acupuncturists will have told everyone else about it, perhaps reported it at conferences or published it in articles or books. Experience of this nature is passed on from generation to generation, and soon someone will be able to demonstrate that acupuncture has been used ’effectively’ for smoking cessation for decades or centuries. The creation of a myth out of unreliable experience is thus complete.
Am I saying that experience of this nature is always and necessarily wrong or useless? No, I am not. It can be and often is correct. But, at the same time, it is frequently incorrect. It can serve as a valuable indicator but not more. Experience is not a tool for reliably informing us about the effectiveness of medical interventions. Experience-based medicine is an obsolete pseudo-medicine burdened with concepts that are counter-productive to optimal health care.
Philosophers and other people who are much cleverer than I am have been trying for some time to separate good from bad science and evidence from experience. Most recently, two philosophers, MASSIMO PIGLIUCCI and MAARTEN BOUDRY, commented specifically on this problem in relation to TCM. I leave you with some extensive quotes from what they wrote.
… pointing out that some traditional Chinese remedies (like drinking fresh turtle blood to alleviate cold symptoms) may in fact work, and therefore should not be dismissed as pseudoscience… risks confusing the possible effectiveness of folk remedies with the arbitrary theoretical-metaphysical baggage attached to it. There is no question that some folk remedies do work. The active ingredient of aspirin, for example, is derived from willow bark…
… claims about the existence of “Qi” energy, channeled through the human body by way of “meridians,” though, is a different matter. This sounds scientific, because it uses arcane jargon that gives the impression of articulating explanatory principles. But there is no way to test the existence of Qi and associated meridians, or to establish a viable research program based on those concepts, for the simple reason that talk of Qi and meridians only looks substantive, but it isn’t even in the ballpark of an empirically verifiable theory.
…the notion of Qi only mimics scientific notions such as enzyme actions on lipid compounds. This is a standard modus operandi of pseudoscience: it adopts the external trappings of science, but without the substance.
…The notion of Qi, again, is not really a theory in any meaningful sense of the word. It is just an evocative word to label a mysterious force of which we do not know and we are not told how to find out anything at all.
Still, one may reasonably object, what’s the harm in believing in Qi and related notions, if in fact the proposed remedies seem to help? Well, setting aside the obvious objections that the slaughtering of turtles might raise on ethical grounds, there are several issues to consider. To begin with, we can incorporate whatever serendipitous discoveries from folk medicine into modern scientific practice, as in the case of the willow bark turned aspirin. In this sense, there is no such thing as “alternative” medicine, there’s only stuff that works and stuff that doesn’t.
Second, if we are positing Qi and similar concepts, we are attempting to provide explanations for why some things work and others don’t. If these explanations are wrong, or unfounded as in the case of vacuous concepts like Qi, then we ought to correct or abandon them. Most importantly, pseudo-medical treatments often do not work, or are even positively harmful. If you take folk herbal “remedies,” for instance, while your body is fighting a serious infection, you may suffer severe, even fatal, consequences.
…Indulging in a bit of pseudoscience in some instances may be relatively innocuous, but the problem is that doing so lowers your defenses against more dangerous delusions that are based on similar confusions and fallacies. For instance, you may expose yourself and your loved ones to harm because your pseudoscientific proclivities lead you to accept notions that have been scientifically disproved, like the increasingly (and worryingly) popular idea that vaccines cause autism.
Philosophers nowadays recognize that there is no sharp line dividing sense from nonsense, and moreover that doctrines starting out in one camp may over time evolve into the other. For example, alchemy was a (somewhat) legitimate science in the times of Newton and Boyle, but it is now firmly pseudoscientific (movements in the opposite direction, from full-blown pseudoscience to genuine science, are notably rare)….
The borderlines between genuine science and pseudoscience may be fuzzy, but this should be even more of a call for careful distinctions, based on systematic facts and sound reasoning. To try a modicum of turtle blood here and a little aspirin there is not the hallmark of wisdom and even-mindedness. It is a dangerous gateway to superstition and irrationality
Homeopaths can bear criticism only when it is highly diluted. Any critique from the ‘outside’ is therefore dismissed by insisting that the author fails to understand the subtleties of homeopathy. And criticism from the ‘inside’ does not exist: by definition, a homeopath does not criticise his/her own trade. Through these mechanisms, homeopaths have more or less successfully shielded themselves from all arguments against their activities and have, for the last 200 years, managed to survive in a world of make-believe.
As I will show below, I started my professional life on the side of the homeopaths – I am not proud of this fact, but there is no use denying it. When the evidence told me more and more clearly that I had been wrong, about 10 years ago, I began expressing serious doubts about the plausibility, efficacy and safety of using homeopathic remedies to treat patients in need. Homeopaths reacted not just with anger, they were also at a loss.
Their little trick of saying ‘He does not understand homeopathy and therefore his critique is invalid’ could not possibly work in my case – I had been one of them: I had attended their meetings, chaired some of their sessions, edited a book on homeopathy, accepted an invitation to join the editorial board of the journal ‘HOMEOPATHY‘ as well as a EU-panel investigating homeopathy, conducted trials, systematic reviews and meta-analyses, published over 100 articles on the subject, accepted money from Prince Charles as well as from the ‘ueber-homeopath’ George Vithoulkas for my research, and even contributed to THE INTERNATIONAL DICTIONARY OF HOMEOPATHY. It would have not looked reasonable to suddenly deny my previously accepted expertise. Homeopaths thus found themselves in a pickle: critique from the ‘inside’ is not what they were used to or could easily cope with.
The homeopathic idyll was under threat, and a solution to the problem had to be found with some urgency. And soon enough, it was found. Homeopaths from across the world started claiming that I had been telling porkies about my training/qualifications in homeopathy: “Edzard Ernst has admitted that he has over the years lied or supported a lie about having homeopathic training. In reality he has had none at all! The leading so-called ‘expert’ and critic of homeopathy, Professor Edzard Ernst, has admitted that he has no qualifications in homeopathy”, etc. etc. Diluting the truth to the extreme, they almost unanimously insisted that, contrary to my previous assertions, I had no training/qualifications in homeopathy. Thus they began to argue that I was an imposter and had insufficient knowledge, expertise and experience after all: “Professor Edzard Ernst, the leading ‘authority’ on homeopathy, and perhaps its most referenced critic, has no qualifications in homeopathy.” William Alderson of HMC21 also claims that Ernst’s book Trick or Treatment? shows Ernst to be unreliable as a researcher into homeopathy. Opposition to homeopathy, they stated, is based on propaganda. Others wrote that “Edzard Ernst’s failure as a homeopath only proves he lacked some basic qualities essential to become a successful homeopath. He failed as a homeopath, and then turned a skeptic. His failure is only his failure; it does not disprove homeopathy by any way. Once he failed in putting a mark as a successful homeopath or CAM practitioner, he just tried the other way to become famous and respectable; he converted himself into a skeptic, which provided him with ample opportunities to appear on ‘anti-homeopathy’ platforms as an ‘authority’, ‘expert’ and ‘ex-homeopath’!” Some went even further, claiming that I had also lied about my medical qualifications.
These notions have been going around the internet for several years now and have conveniently served as a reason to re-categorise me into the camp of the homeopathically unqualified pseudo-experts: ‘We believe that it is time to recognise that opposition to homeopathy is largely based on the opinions of individuals who are unqualified or unwilling to judge the evidence fairly’. I, by contrast, believe it is time that I disclose the full truth about ‘my double-life as a homeopath’. What exactly is my background in this area? Have I really been found out to be a confidence-trickster?
THE HOMEOPATHIC CLINICIAN
I graduated from medical school in Munich in the late 1970s and, looking for a job, I realised that there weren’t any. At the time, Germany had a surplus of doctors and all the posts I wanted were taken. Eventually, I found one in the only hospital that was run almost entirely homoeopathically, the KRANKENHAUS FUER NATURHEILWEISEN in Munich. Within about half a year, I learned how to think like a homeopath, diagnose like a homeopath and treat patients like a homeopath. I never attended formal courses or aspired to get a certificate; as far as I remember, none of the junior doctors working in the homeopathic hospital did that either. We were expected to learn on the job, and so we did.
Our teachers at medical school had hardly ever mentioned homeopathy, but one thing they had nevertheless made abundantly clear to us: homeopathy cannot possibly work; there is nothing in these pills and potions! To my surprise, however, my patients improved, their symptoms subsided and, in general, they were very happy with the treatment we provided. My professors had told me that homeopathy was rubbish, but they had forgotten to teach me a much more important lesson: critical thinking. Therefore, I might be forgiven for proudly assuming that my patients’ improvement was due to my skilful homeopathic prescriptions.
But then came another surprise: the boss of the homeopathic hospital, Dr Zimmermann, took me under his wings, and we had occasional discussions about this and that and, of course, about homeopathy. When I shyly mentioned what I had been told at medical school (about homeopathy being entirely implausible), he agreed! I was speechless. Crucially, he considered that there were other explanations: “Our patients might improve because we look after them well and we discontinue all the unnecessary medication they come in with; perhaps the homeopathic remedies play only a small part”, he said.
THE INVESTIGATOR OF HOMEOPATHY
This may well have been the first time I started looking critically at homeopathy and my own clinical practice – and this is roughly where I left things as far as homeopathy is concerned until, in 1993, it became my job to research alternative therapies systematically and rigorously. Meanwhile I had done a PhD and tried my best to learn the skills of critical analysis. As I began to investigate homeopathy scientifically, I found that my former boss had been right: patients do indeed improve because of a multitude of factors: placebo-effects, natural history of the disease, regression towards the mean, to mention just three of a multitude of phenomena. At the same time, he had not been entirely correct: homeopathic remedies are pure placebos; they do not play a ‘small part’ in patients’ improvement, they play no part in this process.
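Regression towards the mean, one of the factors just mentioned, can be demonstrated in a few lines (a hypothetical model with made-up numbers, added here for illustration): if patients’ symptom scores are recorded only on days bad enough to prompt a consultation, the follow-up scores fall back towards each patient’s usual level even when no treatment whatsoever is given.

```python
import random
import statistics

random.seed(1)

def untreated_change(n=5000, consult_cutoff=6.5):
    """Each patient has a stable underlying severity; the score
    measured on any given day adds a random day-to-day fluctuation.
    Patients consult only on days bad enough to cross the cutoff,
    so the follow-up score drops back towards the mean although
    nothing at all was done."""
    changes = []
    while len(changes) < n:
        severity = random.gauss(5, 1)                # underlying severity
        at_consult = severity + random.gauss(0, 1)   # bad-day fluctuation
        if at_consult < consult_cutoff:
            continue                                 # no visit on good days
        at_followup = severity + random.gauss(0, 1)  # fresh fluctuation
        changes.append(at_consult - at_followup)
    return statistics.mean(changes)

improvement = untreated_change()
print(f"mean symptom-score 'improvement' with no treatment: {improvement:.2f}")
```

The apparent improvement is sizeable, yet by construction the “therapy” in this model does nothing; only the selective timing of the first measurement is at work.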
As I began to state this more and more clearly, all sorts of ad hominem attacks were hurled in my direction, and recently I was even fired from the editorial board of the journal ‘HOMEOPATHY’ because allegedly I “…smeared homeopathy and other forms of complementary medicine…” I don’t mind any of that – but I do think that the truth about ‘my double-life as a homeopath’ should not be diluted like a homeopathic remedy until it suits those who think they can defame me by claiming I am a liar and do not know what I am talking about.
This rather depressing story shows, I think, that some homeopaths, rather than admitting they are in the wrong, are prepared to dilute the truth until it might be hard for third parties to tell who is right and who is wrong. But however they may deny it, the truth is still the truth: I have been trained as a homeopath.
A lengthy article posted by THE HOMEOPATHIC COLLEGE recently advocated treating cancer with homeopathy. Since I doubt that many readers access this publication, I take the liberty of reproducing here their (also fairly lengthy) CONCLUSIONS in full:
Laboratory studies in vitro and in vivo show that homeopathic drugs, in addition to having the capacity to reduce the size of tumors and to induce apoptosis, can induce protective and restorative effects. Additionally homeopathic treatment has shown effects when used as a complementary therapy for the effects of conventional cancer treatment. This confirms observations from our own clinical experience as well as that of others that when suitable remedies are selected according to individual indications as well as according to pathology and to cell-line indications and administered in the appropriate doses according to the standard principles of homeopathic posology, homeopathic treatment of cancer can be a highly effective therapy for all kinds of cancers and leukemia as well as for the harmful side effects of conventional treatment. More research is needed to corroborate these clinical observations.
Homeopathy over almost two decades of its existence has developed more than four hundred remedies for cancer treatment. Only a small fraction have been subjected to scientific study so far. More homeopathic remedies need to be studied to establish if they have any significant action in cancer. Undoubtedly the next big step in homeopathic cancer research must be multiple comprehensive double-blinded, placebo-controlled, randomized clinical trials. To assess the effect of homeopathic treatment in clinical settings, volunteer adult patients who prefer to try homeopathic treatment instead of conventional therapy could be recruited, especially in cases for which no conventional therapy has been shown to be effective.
Many of the researchers conducting studies — cited here but not discussed — on the growing interest in homeopathic cancer treatment have observed that patients are driving the demand for access to homeopathic and other alternative modes of cancer treatment. So long as existing cancer treatment is fraught with danger and low efficacy, it is urgent that the research on and the provision of quality homeopathic cancer treatment be made available for those who wish to try it.
When I report about nonsense like that, I find it hard not to go into a fuming rage. But doing that would not be very constructive – so let me instead highlight (in random order) eight simple techniques that seem to be so common when unsubstantiated claims are being promoted for alternative treatments:
1) cherry pick the data
2) use all sorts of ‘evidence’ regardless how flimsy or irrelevant it might be
3) give yourself the flair of being highly scientific and totally impartial
4) point out how dangerous and ineffective all the conventional treatments are
5) do not shy away from overt lies
6) do not forget to stress that the science is in full agreement with your exhaustive clinical experience
7) stress that patients want what you are offering
8) ignore the biological plausibility of the underlying concepts
Provided we adhere to these simple rules, we can convince the unsuspecting public of just about anything – even of the notion that homeopathy is a cure for cancer!
If one spends a lot of time, as I presently do, sorting out old files, books, journals etc., one is bound to come across plenty of weird and unusual things. I, for one, am slow at making progress with this task, mainly because I often start reading the material that is in front of me. It was on one of those occasions that I began studying a book written by one of the more fanatic proponents of alternative medicine and stumbled over the term THE PROOF OF EXPERIENCE. It made me think, and I began to realise that the notion behind these four words is quite characteristic of the field of alternative health care.
When I studied medicine, in the 1970s, we were told by our seniors what to do, which treatments worked for which conditions and why. They had all the experience and we, by definition, had none. Experience seemed synonymous with proof. Nobody dared to doubt the word of ‘the boss’. We were educated, I now realise, in the age of EMINENCE-BASED MEDICINE.
All of this gradually changed when the concepts of EVIDENCE-BASED MEDICINE became appreciated and generally adopted by responsible health care professionals. If now the woman or man on top of the medical ‘pecking order’ claims something that is doubtful in view of the published evidence, it is possible (sometimes even desirable) to say so – no matter how junior the doubter happened to be. As a result, medicine has thus changed for ever: progress is no longer made funeral by funeral [of the bosses] but new evidence is much more swiftly translated into clinical practice.
Don’t get me wrong, EVIDENCE-BASED MEDICINE does not imply disrespect for EXPERIENCE; it merely takes it for what it is. And when EVIDENCE and EXPERIENCE fail to agree with each other, we have to take a deep breath, think hard and try to do something about it. Depending on the specific situation, this might involve further study or at least an acknowledgement of a degree of uncertainty. The tension between EXPERIENCE and EVIDENCE often is the impetus for making progress. The winner in this often complex story is the patient: she will receive a therapy which, according to the best available EVIDENCE and careful consideration of the EXPERIENCE, is best for her.
NOT SO IN ALTERNATIVE MEDICINE!!! Here EXPERIENCE still trumps EVIDENCE any time, and there is no need for acknowledging uncertainty: EXPERIENCE = proof!!!
In case you think I am exaggerating, I recommend thumbing through a few books on the subject. As I already stated, I have done this quite a bit in recent months, and I can assure you that there is very little evidence in these volumes to suggest that data, research, science etc. matter a hoot. No critical thinking is required, as long as we have EXPERIENCE on our side!
‘THE PROOF OF EXPERIENCE’ is still a motto that seems to be everywhere in alternative medicine. In many ways, it seems to me, this motto symbolises much of what is wrong with alternative medicine and the mind-set of its proponents. Often, the EXPERIENCE is in sharp contrast to the EVIDENCE. But this little detail does not seem to irritate anyone. Apologists of alternative medicine stubbornly ignore such contradictions. In the rare case where they do comment at all, the gist of their response normally is that EXPERIENCE is much more relevant than EVIDENCE. After all, EXPERIENCE is based on hundreds of years and thousands of ‘real-life’ cases, while EVIDENCE is artificial and based on just a few patients.
As far as I can see, nobody in alternative medicine pays more than lip service to the fact that EXPERIENCE can be [and often is] grossly misleading. Little or no acknowledgement exists of the fact that, in clinical routine, there are simply far too many factors that interfere with our memories, impressions, observations and conclusions. If a patient gets better after receiving a therapy, she might have improved for a dozen reasons which are unrelated to the treatment per se. And if a patient does not get better, she might not come back at all, and the practitioner’s memory will therefore fail to register such events as therapeutic failures. Whatever EXPERIENCE is, in health care, it rarely constitutes proof!
The notion of THE PROOF OF EXPERIENCE, it thus turns out, is little more than self-serving, wishful thinking which characterises the backward attitude that seems to be so remarkably prevalent in alternative medicine. No tension between EXPERIENCE and EVIDENCE is noticeable because the EVIDENCE is being ignored; as a result, there is no progress. The loser is, of course, the patient: she will receive a treatment based on criteria which are less than reliable.
Isn’t it time to bury the fallacy of THE PROOF OF EXPERIENCE once and for all?
Some time ago, we published a systematic review aimed at identifying what patients might hope for when they consult a practitioner of alternative medicine. The most common expectations that emerged from this research are listed here:
- Less side-effects
- Symptom relief
- Cure of their disease
- Cope better with their condition
- Improve quality of life
- Boost immune system
- Prevention of illness
- Good therapeutic relationship with a clinician
- Holistic care
- Emotional support
- Control over their own health
In several ways, I think, these expectations are revealing; here I want to focus on one particular aspect, and ask the following question: To what extent are patients driven to see alternative practitioners simply because conventional medicine is letting them down? It seems to me that several items in the list above are an implicit criticism of mainstream medicine. This might get much clearer, if I re-phrase the points a bit: according to our findings, patients feel:
- that conventional treatments have too many side-effects;
- that they frequently fail to ease their symptoms;
- that they often do not cure the disease;
- that doctors do not enable their patients to cope with their condition;
- that doctors do not care enough about their patients’ quality of life;
- that many conventional treatments neglect the importance of the immune system;
- that prevention is not given the importance it should have;
- that doctors are often no good at establishing good therapeutic relationships with their patients;
- that doctors fail to realise that their patients are not just “cases” but whole human individuals;
- that doctors are not providing enough emotional support;
- that doctors fail to empower their patients to be in control of their health.
Some of these points will probably strike a chord with most of us. I for one know of many instances where conventional physicians have failed their patients most miserably. All too often, the failings of modern medicine are as obvious as they are inexcusable! I can fully understand that disappointed patients look for help and compassion elsewhere, and I am quite sure that the failings of modern medicine are an important motivator for people to try alternative medicine.
But looking elsewhere might not be the best approach for improving health care. Alternative practitioners may well be more compassionate than conventional clinicians but features like empathy, time and attention can never make good medicine, if they are not accompanied by effective therapies.
The conclusion is therefore simple: whenever we encounter one of the many failings of conventional medicine, instead of turning away in disgust, we ought to make sure that mistakes are corrected, lessons are learnt and improvements are found and put into practice. Our aim must be to generate progress, and it cannot be reached by opting for unproven or dis-proven treatments.
What is and what isn’t evidence, and why is the distinction important?
In the area of alternative medicine, we tend to engage in seemingly endless discussions around the subject of evidence; the relatively few comments on this new blog already confirm this impression. Many practitioners claim that their very own clinical experience is at least as important and generalizable as scientific evidence. It is therefore relevant to analyse in a little more detail some of the issues related to evidence as they apply to the efficacy of alternative therapies.
To prevent the debate from instantly deteriorating into a dispute about the value of this or that specific treatment, I will abstain from mentioning any alternative therapy by name and urge all commentators to do the same. The discussion on this post should not be about the value of homeopathy or any other alternative treatment; it is about more fundamental issues which, in my view, often get confused in the usually heated arguments for or against a specific alternative treatment.
My aim here is to outline the issues more fully than would be possible in the comments section of this blog. Readers and commentators can subsequently be referred to this post whenever appropriate. My hope is that, in this way, we might avoid repeating the same arguments ad nauseam.
Clinical experience is notoriously unreliable
Clinicians often feel quite strongly that their daily experience holds important information about the efficacy of their interventions. In this assumption, alternative practitioners are usually entirely united with healthcare professionals working in conventional medicine.
When their patients get better, they assume this to be the result of their treatment, especially if the experience is repeated over and over again. As an ex-clinician, I do sympathise with this notion which might even prevent practitioners from losing faith in their own work. But is the assumption really correct?
The short answer is NO. Two events [the treatment and the improvement] that follow each other in time are not necessarily causally related; we all know that, of course. So, we ought to consider alternative explanations for a patient’s improvement after therapy.
Even the most superficial scan of the possibilities discloses several options: the natural history of the condition, regression towards the mean, the placebo-effect, concomitant treatments, social desirability to name but a few. These and other phenomena can contribute to or determine the clinical outcome such that inefficacious treatments appear to be efficacious.
What follows is simple, undeniable and plausible for scientists, yet intensely counter-intuitive for clinicians: the prescribed treatment is only one of many influences on the clinical outcome. Thus even the most impressive clinical experience of the perceived efficacy of a treatment can be totally misleading. In fact, experience might just reflect the fact that we repeat the same mistake over and over again. Put differently, the plural of anecdote is anecdotes, not evidence!
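One of the phenomena mentioned above, regression towards the mean, lends itself to a simple simulation. The sketch below uses entirely invented numbers and a deliberately inert “treatment”: patients tend to consult when their symptoms are unusually bad, so the next measurement is, on average, closer to their true baseline, and the practitioner sees an “improvement” that the treatment did not cause.

```python
import random

random.seed(1)

# A minimal sketch (all numbers invented): each "patient" has a stable
# true severity, but day-to-day symptoms fluctuate randomly around it.
def symptom(baseline):
    return baseline + random.gauss(0, 2)

patients = [5.0] * 1000  # all share the same true baseline severity

# Patients tend to consult when their symptoms are unusually bad.
consulting = [(p, s) for p in patients if (s := symptom(p)) > 7]

# An inert "treatment" changes nothing; we simply measure again later.
before = sum(s for _, s in consulting) / len(consulting)
after = sum(symptom(p) for p, _ in consulting) / len(consulting)

print(round(before, 1), round(after, 1))
```

The “after” average falls back towards the true baseline of 5 even though nothing was done, which is exactly the pattern a clinician would be tempted to read as therapeutic success.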
Clinicians tend to get quite miffed when anyone tries to explain to them how multifactorial the situation really is and how little their much-treasured experience tells us about therapeutic efficacy. Here are seven of the counter-arguments I hear most frequently:
1) The improvement was so direct and prompt that it was obviously caused by my treatment [this notion is not very convincing; placebo-effects can be just as prompt and direct].
2) I have seen it so many times that it cannot be a coincidence [some clinicians are very caring, charismatic, and empathetic; they will thus regularly generate powerful placebo-responses, even when using placebos].
3) A study with several thousand patients shows that 75% of them improved with my treatment [such response rates are not uncommon, even for ineffective treatments, if patient-expectation was high].
4) Surely chronic conditions don’t suddenly get better; my treatment therefore cannot be a placebo [this is incorrect; many chronic conditions do eventually improve, if only temporarily].
5) I had a patient with a serious condition, e.g. cancer, who received my treatment and was cured [if one investigates such cases, one often finds that the patient also took a conventional treatment; or, in rare instances, even cancer-patients show spontaneous remissions].
6) I have tried the treatment myself and had a positive outcome [clinicians are not immune to the multifactorial nature of the perceived clinical response].
7) Even children and animals respond very well to my treatment, surely they are not prone to placebo-effects [animals can be conditioned to respond; and then there is, of course, the natural history of the disease].
Is all this to say that clinical experience is useless? Clearly not! I am merely pointing out that, when it comes to therapeutic efficacy, clinical experience is no replacement for evidence. It is invaluable for a lot of other things, but it can at best provide a hint and never a proof of efficacy.
What then is reliable evidence?
As the clinical outcomes after treatments always have many determinants, we need a different approach for verifying therapeutic efficacy. Essentially, we need to know what would have happened, if our patients had not received the treatment in question.
The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.
Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.
Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The over-riding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.
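The logic of the controlled trial can be made concrete with a toy simulation (all effect sizes are invented for illustration). Every patient improves somewhat regardless of treatment, through natural history and the placebo-response; only a genuinely efficacious therapy adds a specific effect on top. Because patients are allocated by chance, both groups share the non-specific influences, and the group difference isolates the specific effect:

```python
import random

random.seed(0)

# Invented components of the clinical outcome: natural history and the
# placebo-response affect everyone; only real efficacy adds SPECIFIC.
NATURAL, PLACEBO, SPECIFIC = 2.0, 1.0, 1.5

def improvement(treated):
    effect = NATURAL + PLACEBO + (SPECIFIC if treated else 0.0)
    return effect + random.gauss(0, 1)  # individual variation

patients = list(range(200))
random.shuffle(patients)           # allocation by chance, not by choice
treated, controls = patients[:100], patients[100:]

mean_treated = sum(improvement(True) for _ in treated) / len(treated)
mean_control = sum(improvement(False) for _ in controls) / len(controls)

# Natural history and placebo cancel out between the groups, so the
# difference estimates the specific effect alone.
print(round(mean_treated - mean_control, 2))
```

Note that both groups show substantial improvement; only the between-group comparison reveals whether any of it is attributable to the treatment itself.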
Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.
Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their shortcomings, they are far superior to any other method for determining the efficacy of medical interventions.
There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.
Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.
In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.
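The basic arithmetic behind pooling can be illustrated with a fixed-effect, inverse-variance calculation, one of the standard meta-analytic methods: each trial’s effect estimate is weighted by the inverse of its variance, so larger, more precise trials count for more. The trial numbers below are invented:

```python
# Hypothetical trial results: (effect estimate, standard error).
trials = [
    (0.30, 0.20),
    (0.10, 0.10),
    (0.25, 0.15),
]

# Inverse-variance weights: precise trials (small SE) get large weights.
weights = [1 / se**2 for _, se in trials]

# Weighted average of the effect estimates, and the pooled standard error.
pooled = sum(w * e for (e, _), w in zip(trials, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(round(pooled, 3), round(pooled_se, 3))
```

Here the pooled estimate sits closest to the most precise trial, and its standard error is smaller than that of any single trial, which is why a well-conducted synthesis is more reliable than cherry-picking one study.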
Why is evidence important?
In a way, this question has already been answered: only with reliable evidence can we tell with any degree of certainty that it was the treatment per se – and not any of the other factors mentioned above – that caused the clinical outcome we observe in routine practice. Only if we have such evidence can we be sure about cause and effect. And only then can we make sure that patients receive the best possible treatments currently available.
There are, of course, those who say that causality does not matter all that much. What is important, they claim, is to help the patient, and if it was a placebo-effect that did the trick, who cares? However, I know of many reasons why this attitude is deeply misguided. To mention just one: we can probably all agree that the placebo-effect can benefit many patients, yet it would be a fallacy to assume that we need a placebo treatment to generate a placebo-response.
If a clinician administers an efficacious therapy [one that generates benefit beyond placebo] with compassion, time, empathy and understanding, she will generate a placebo-response PLUS a response to the therapy administered. In this case, the patient benefits twice. It follows that merely administering a placebo is less than optimal; in fact, it usually means cheating the patient of the effect of an efficacious therapy.
The frequently voiced counter-argument is that there are many patients who are ill without an exact diagnosis and who therefore cannot receive a specific treatment. This may be true, but even those patients’ symptoms can usually be alleviated with efficacious symptomatic therapy, and I fail to see how the administration of an ineffective treatment might be preferable to using an effective symptomatic therapy.
We all agree that helping the patient is the most important task of a clinician. This task is best achieved by maximising the non-specific effects [e.g. placebo], while also making sure that the patient benefits from the specific effects of what medicine has to offer. If that is our goal in clinical practice, we need reliable evidence as well as experience; one cannot be a substitute for the other, and scientific evidence is an essential precondition for good medicine.