Homeopaths can bear criticism only when it is highly diluted. Any critique from the ‘outside’ is therefore dismissed by insisting that the author fails to understand the subtleties of homeopathy. And criticism from the ‘inside’ does not exist: by definition, a homeopath does not criticise his/her own trade. Through these mechanisms, homeopaths have more or less successfully shielded themselves from all arguments against their activities and have, for the last 200 years, managed to survive in a world of make-believe.
As I will show below, I started my professional life on the side of the homeopaths – I am not proud of this fact, but there is no use denying it. About 10 years ago, when the evidence told me ever more clearly that I had been wrong, I began expressing serious doubts about the plausibility, efficacy and safety of using homeopathic remedies to treat patients in need. Homeopaths reacted not just with anger; they were also at a loss.
Their little trick of saying ‘He does not understand homeopathy and therefore his critique is invalid’ could not possibly work in my case – I had been one of them: I had attended their meetings, chaired some of their sessions, edited a book on homeopathy, accepted an invitation to join the editorial board of the journal ‘HOMEOPATHY’ as well as an EU panel investigating homeopathy, conducted trials, systematic reviews and meta-analyses, published over 100 articles on the subject, accepted money from Prince Charles as well as from the ‘ueber-homeopath’ George Vithoulkas for my research, and even contributed to THE INTERNATIONAL DICTIONARY OF HOMEOPATHY. It would not have looked reasonable to suddenly deny my previously accepted expertise. Homeopaths thus found themselves in a pickle: critique from the ‘inside’ is not what they were used to or could easily cope with.
The homeopathic idyll was under threat, and a solution to the problem had to be found with some urgency. And soon enough, it was found. Homeopaths from across the world started claiming that I had been telling porkies about my training/qualifications in homeopathy: “Edzard Ernst has admitted that he has over the years lied or supported a lie about having homeopathic training. In reality he has had none at all!”; “The leading so-called ‘expert’ and critic of homeopathy, Professor Edzard Ernst, has admitted that he has no qualifications in homeopathy”; etc. etc. Diluting the truth to the extreme, they almost unanimously insisted that, contrary to my previous assertions, I had no training/qualifications in homeopathy. Thus they began to argue that I was an imposter and had insufficient knowledge, expertise and experience after all: “Professor Edzard Ernst, the leading ‘authority’ on homeopathy, and perhaps its most referenced critic, has no qualifications in homeopathy.” William Alderson of HMC21 also claims that Ernst’s book Trick or Treatment? shows Ernst to be unreliable as a researcher into homeopathy. Opposition to homeopathy, they stated, is based on propaganda. Others wrote that Ernst’s “failure as a homeopath only proves he lacked some basic qualities essential to become a successful homeopath. He failed as a homeopath, and then turned a skeptic. His failure is only his failure – it does not disprove homeopathy by any way. Once he failed in putting a mark as a successful homeopath or CAM practitioner, he just tried the other way to become famous and respectable – he converted himself into a skeptic, which provided him with ample opportunities to appear on ‘anti-homeopathy’ platforms as an ‘authority’, ‘expert’ and ‘ex-homeopath’!” Some went even further, claiming that I had also lied about my medical qualifications.
These notions have been going around the internet for several years now and have conveniently served as a reason to re-categorise me into the camp of the homeopathically unqualified pseudo-experts: ‘We believe that it is time to recognise that opposition to homeopathy is largely based on the opinions of individuals who are unqualified or unwilling to judge the evidence fairly’. I, by contrast, believe it is time that I disclose the full truth about ‘my double-life as a homeopath’. What exactly is my background in this area? Have I really been found out to be a confidence-trickster?
THE HOMEOPATHIC CLINICIAN
I graduated from medical school in Munich in the late 1970s and, looking for a job, I realised that there weren’t any. At the time, Germany had a surplus of doctors and all the posts I wanted were taken. Eventually, I found one in the only hospital that was run almost entirely homeopathically, the KRANKENHAUS FUER NATURHEILWEISEN in Munich. Within about half a year, I learned how to think like a homeopath, diagnose like a homeopath and treat patients like a homeopath. I never attended formal courses or aspired to get a certificate; as far as I remember, none of the junior doctors working in the homeopathic hospital did that either. We were expected to learn on the job, and so we did.
Our teachers at medical school had hardly ever mentioned homeopathy, but one thing they had nevertheless made abundantly clear to us: homeopathy cannot possibly work; there is nothing in these pills and potions! To my surprise, however, my patients improved, their symptoms subsided and, in general, they were very happy with the treatment we provided. My professors had told me that homeopathy was rubbish, but they had forgotten to teach me a much more important lesson: critical thinking. Therefore, I might be forgiven for proudly assuming that my patients’ improvement was due to my skilful homeopathic prescriptions.
But then came another surprise: the boss of the homeopathic hospital, Dr Zimmermann, took me under his wing, and we had occasional discussions about this and that and, of course, about homeopathy. When I shyly mentioned what I had been told at medical school (about homeopathy being entirely implausible), he agreed! I was speechless. Crucially, he considered that there were other explanations: “Our patients might improve because we look after them well and we discontinue all the unnecessary medication they come in with; perhaps the homeopathic remedies play only a small part”, he said.
THE INVESTIGATOR OF HOMEOPATHY
This may well have been the first time I started looking critically at homeopathy and my own clinical practice – and this is roughly where I left things as far as homeopathy is concerned until, in 1993, it became my job to research alternative therapies systematically and rigorously. Meanwhile I had done a PhD and tried my best to learn the skills of critical analysis. As I began to investigate homeopathy scientifically, I found that my former boss had been right: patients do indeed improve because of a multitude of factors: placebo-effects, natural history of the disease, regression towards the mean, to mention just three of a multitude of phenomena. At the same time, he had not been entirely correct: homeopathic remedies are pure placebos; they do not play a ‘small part’ in patients’ improvement, they play no part in this process.
As I began to state this more and more clearly, all sorts of ad hominem attacks were hurled in my direction, and recently I was even fired from the editorial board of the journal ‘HOMEOPATHY’ because allegedly I “…smeared homeopathy and other forms of complementary medicine…” I don’t mind any of that – but I do think that the truth about ‘my double-life as a homeopath’ should not be diluted like a homeopathic remedy until it suits those who think they can defame me by claiming I am a liar and do not know what I am talking about.
This rather depressing story shows, I think, that some homeopaths, rather than admitting they are in the wrong, are prepared to dilute the truth until it might be hard for third parties to tell who is right and who is wrong. But however they may deny it, the truth is still the truth: I have been trained as a homeopath.
A lengthy article posted by THE HOMEOPATHIC COLLEGE recently advocated treating cancer with homeopathy. Since I doubt that many readers access this publication, I take the liberty of reproducing here their (also fairly lengthy) CONCLUSIONS in full:
Laboratory studies in vitro and in vivo show that homeopathic drugs, in addition to having the capacity to reduce the size of tumors and to induce apoptosis, can induce protective and restorative effects. Additionally homeopathic treatment has shown effects when used as a complementary therapy for the effects of conventional cancer treatment. This confirms observations from our own clinical experience as well as that of others that when suitable remedies are selected according to individual indications as well as according to pathology and to cell-line indications and administered in the appropriate doses according to the standard principles of homeopathic posology, homeopathic treatment of cancer can be a highly effective therapy for all kinds of cancers and leukemia as well as for the harmful side effects of conventional treatment. More research is needed to corroborate these clinical observations.
Homeopathy over almost two decades of its existence has developed more than four hundred remedies for cancer treatment. Only a small fraction have been subjected to scientific study so far. More homeopathic remedies need to be studied to establish if they have any significant action in cancer. Undoubtedly the next big step in homeopathic cancer research must be multiple comprehensive double-blinded, placebo-controlled, randomized clinical trials. To assess the effect of homeopathic treatment in clinical settings, volunteer adult patients who prefer to try homeopathic treatment instead of conventional therapy could be recruited, especially in cases for which no conventional therapy has been shown to be effective.
Many of the researchers conducting studies — cited here but not discussed — on the growing interest in homeopathic cancer treatment have observed that patients are driving the demand for access to homeopathic and other alternative modes of cancer treatment. So long as existing cancer treatment is fraught with danger and low efficacy, it is urgent that the research on and the provision of quality homeopathic cancer treatment be made available for those who wish to try it.
When I report about nonsense like that, I find it hard not to go into a fuming rage. But doing that would not be very constructive – so let me instead highlight (in random order) eight simple techniques that seem to be so common when unsubstantiated claims are being promoted for alternative treatments:
1) cherry pick the data
2) use all sorts of ‘evidence’ regardless how flimsy or irrelevant it might be
3) give yourself the flair of being highly scientific and totally impartial
4) point out how dangerous and ineffective all the conventional treatments are
5) do not shy away from overt lies
6) do not forget to stress that the science is in full agreement with your exhaustive clinical experience
7) stress that patients want what you are offering
8) ignore the biological plausibility of the underlying concepts
Provided we adhere to these simple rules, we can convince the unsuspecting public of just about anything – even of the notion that homeopathy is a cure for cancer!
If one spends a lot of time, as I presently do, sorting out old files, books, journals etc., one is bound to come across plenty of weird and unusual things. I, for one, am slow at making progress with this task, mainly because I often start reading the material that is in front of me. It was on one of those occasions that I began studying a book written by one of the more fanatical proponents of alternative medicine and stumbled over the term THE PROOF OF EXPERIENCE. It made me think, and I began to realise that the notion behind these four words is quite characteristic of the field of alternative health care.
When I studied medicine, in the 1970s, we were told by our superiors what to do, which treatments worked for which conditions and why. They had all the experience and we, by definition, had none. Experience seemed synonymous with proof. Nobody dared to doubt the word of ‘the boss’. We were educated, I now realise, in the age of EMINENCE-BASED MEDICINE.
All of this gradually changed when the concepts of EVIDENCE-BASED MEDICINE became appreciated and generally adopted by responsible health care professionals. If now the woman or man at the top of the medical ‘pecking order’ claims something that is doubtful in view of the published evidence, it is possible (sometimes even desirable) to say so – no matter how junior the doubter happens to be. As a result, medicine has changed for ever: progress is no longer made funeral by funeral [of the bosses]; new evidence is much more swiftly translated into clinical practice.
Don’t get me wrong, EVIDENCE-BASED MEDICINE does not imply disrespect for EXPERIENCE; it merely takes it for what it is. And when EVIDENCE and EXPERIENCE fail to agree with each other, we have to take a deep breath, think hard and try to do something about it. Depending on the specific situation, this might involve further study or at least an acknowledgement of a degree of uncertainty. The tension between EXPERIENCE and EVIDENCE often is the impetus for making progress. The winner in this often complex story is the patient: she will receive a therapy which, according to the best available EVIDENCE and careful consideration of the EXPERIENCE, is best for her.
NOT SO IN ALTERNATIVE MEDICINE!!! Here EXPERIENCE still trumps EVIDENCE any time, and there is no need for acknowledging uncertainty: EXPERIENCE = proof!!!
In case you think I am exaggerating, I recommend thumbing through a few books on the subject. As I already stated, I have done this quite a bit in recent months, and I can assure you that there is very little evidence in these volumes to suggest that data, research, science, etc. matter a hoot. No critical thinking is required, as long as we have EXPERIENCE on our side!
‘THE PROOF OF EXPERIENCE’ is still a motto that seems to be everywhere in alternative medicine. In many ways, it seems to me, this motto symbolises much of what is wrong with alternative medicine and the mind-set of its proponents. Often, the EXPERIENCE is in sharp contrast to the EVIDENCE. But this little detail does not seem to irritate anyone. Apologists of alternative medicine stubbornly ignore such contradictions. In the rare case where they do comment at all, the gist of their response normally is that EXPERIENCE is much more relevant than EVIDENCE. After all, EXPERIENCE is based on hundreds of years and thousands of ‘real-life’ cases, while EVIDENCE is artificial and based on just a few patients.
As far as I can see, nobody in alternative medicine pays more than lip service to the fact that EXPERIENCE can be [and often is] grossly misleading. Little or no acknowledgement exists of the fact that, in clinical routine, there are simply far too many factors that interfere with our memories, impressions, observations and conclusions. If a patient gets better after receiving a therapy, she might have improved for a dozen reasons which are unrelated to the treatment per se. And if a patient does not get better, she might not come back at all, and the practitioner’s memory will therefore fail to register such events as therapeutic failures. Whatever EXPERIENCE is, in health care, it rarely constitutes proof!
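The selective-memory mechanism described above can be sketched in a few lines of code. This is a toy simulation; every number in it is invented purely for illustration: an inert treatment after which half the patients improve anyway, and non-improvers who mostly never return to the practice.

```python
import random

random.seed(1)

# Toy model (all numbers invented): an inert treatment where 50% of
# patients improve anyway (natural history, placebo effect, etc.).
# Patients who do NOT improve rarely come back, so the practitioner's
# memory samples mostly successes.
N = 10_000
P_IMPROVE = 0.5          # improvement rate with a pure placebo
P_RETURN_IF_WORSE = 0.2  # non-improvers mostly go elsewhere

remembered_successes = 0
remembered_total = 0
for _ in range(N):
    improved = random.random() < P_IMPROVE
    returns = improved or random.random() < P_RETURN_IF_WORSE
    if returns:
        remembered_total += 1
        remembered_successes += improved

true_rate = P_IMPROVE
perceived_rate = remembered_successes / remembered_total
print(f"true improvement rate:    {true_rate:.0%}")
print(f"perceived 'success' rate: {perceived_rate:.0%}")
```

Even though the treatment does nothing beyond what would have happened anyway, the remembered ‘success rate’ sits well above the true improvement rate, simply because the failures rarely walk back through the door.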
The notion of THE PROOF OF EXPERIENCE, it thus turns out, is little more than self-serving, wishful thinking which characterises the backward attitude that seems to be so remarkably prevalent in alternative medicine. No tension between EXPERIENCE and EVIDENCE is noticeable because the EVIDENCE is being ignored; as a result, there is no progress. The loser is, of course, the patient: she will receive a treatment based on criteria which are less than reliable.
Isn’t it time to bury the fallacy of THE PROOF OF EXPERIENCE once and for all?
Some time ago, we published a systematic review aimed at identifying what patients might hope for when they consult a practitioner of alternative medicine. The most common expectations that emerged from this research are listed here:
- Fewer side-effects
- Symptom relief
- Cure of their disease
- Cope better with their condition
- Improve quality of life
- Boost immune system
- Prevention of illness
- Good therapeutic relationship with a clinician
- Holistic care
- Emotional support
- Control over their own health
In several ways, I think, these expectations are revealing; here I want to focus on one particular aspect and ask the following question: To what extent are patients driven to see alternative practitioners simply because conventional medicine is letting them down? It seems to me that several items in the list above are an implicit criticism of mainstream medicine. This becomes much clearer if I re-phrase the points a bit: according to our findings, patients feel:
- that conventional treatments have too many side-effects;
- that they frequently fail to ease their symptoms;
- that they often do not cure the disease;
- that doctors do not enable their patients to cope with their condition;
- that doctors do not care enough about their patients’ quality of life;
- that many conventional treatments neglect the importance of the immune system;
- that prevention is not given the importance it should have;
- that doctors are often no good at establishing good therapeutic relationships with their patients;
- that doctors fail to realise that their patients are not just “cases” but whole human individuals;
- that doctors are not providing enough emotional support;
- that doctors fail to empower their patients to be in control of their health.
Some of these points will probably strike a chord with most of us. I for one know of many instances where conventional physicians have failed their patients most miserably. All too often, the failings of modern medicine are as obvious as they are inexcusable! I can fully understand that disappointed patients look for help and compassion elsewhere, and I am quite sure that the failings of modern medicine are an important motivator for people to try alternative medicine.
But looking elsewhere might not be the best approach for improving health care. Alternative practitioners may well be more compassionate than conventional clinicians but features like empathy, time and attention can never make good medicine, if they are not accompanied by effective therapies.
The conclusion is therefore simple: whenever we encounter one of the many failings of conventional medicine, instead of turning away in disgust, we ought to make sure that mistakes are corrected, lessons are learnt and improvements are found and put into practice. Our aim must be to generate progress, and it cannot be reached by opting for unproven or dis-proven treatments.
What is and what isn’t evidence, and why is the distinction important?
In the area of alternative medicine, we tend to engage in seemingly endless discussions around the subject of evidence; the relatively few comments on this new blog already confirm this impression. Many practitioners claim that their very own clinical experience is at least as important and generalizable as scientific evidence. It is therefore relevant to analyse in a little more detail some of the issues related to evidence as they apply to the efficacy of alternative therapies.
To prevent the debate from instantly deteriorating into a dispute about the value of this or that specific treatment, I will abstain from mentioning any alternative therapy by name and urge all commentators to do the same. The discussion on this post should not be about the value of homeopathy or any other alternative treatment; it is about more fundamental issues which, in my view, often get confused in the usually heated arguments for or against a specific alternative treatment.
My aim here is to outline the issues more fully than would be possible in the comments section of this blog. Readers and commentators can subsequently be referred to this post whenever appropriate. My hope is that, in this way, we might avoid repeating the same arguments ad nauseam.
Clinical experience is notoriously unreliable
Clinicians often feel quite strongly that their daily experience holds important information about the efficacy of their interventions. In this assumption, alternative practitioners are usually entirely united with healthcare professionals working in conventional medicine.
When their patients get better, they assume this to be the result of their treatment, especially if the experience is repeated over and over again. As an ex-clinician, I do sympathise with this notion which might even prevent practitioners from losing faith in their own work. But is the assumption really correct?
The short answer is NO. Two events [the treatment and the improvement] that follow each other in time are not necessarily causally related; we all know that, of course. So, we ought to consider alternative explanations for a patient’s improvement after therapy.
Even the most superficial scan of the possibilities discloses several options: the natural history of the condition, regression towards the mean, the placebo-effect, concomitant treatments, social desirability to name but a few. These and other phenomena can contribute to or determine the clinical outcome such that inefficacious treatments appear to be efficacious.
What follows is simple, undeniable and plausible for scientists, yet intensely counter-intuitive for clinicians: the prescribed treatment is only one of many influences on the clinical outcome. Thus even the most impressive clinical experience of the perceived efficacy of a treatment can be totally misleading. In fact, experience might just reflect the fact that we repeat the same mistake over and over again. Put differently, the plural of anecdote is anecdotes, not evidence!
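Regression towards the mean, one of the phenomena listed above, can be demonstrated with a toy simulation (all parameters invented for illustration): patients whose symptoms fluctuate around a stable personal baseline tend to consult when they feel unusually bad, so on re-measurement they look ‘improved’ even when nothing was done.

```python
import random

random.seed(0)

# Toy model: symptom severity fluctuates around a stable personal
# baseline. Patients consult only when they currently feel unusually
# bad; measured again later -- with no treatment at all -- they are,
# on average, closer to their baseline again.
def severity(baseline):
    return baseline + random.gauss(0, 2)   # day-to-day fluctuation

baselines = [random.gauss(10, 1) for _ in range(5000)]
before, after = [], []
for b in baselines:
    s = severity(b)
    if s > 12:                    # only the currently-worst consult
        before.append(s)
        after.append(severity(b))  # re-measured later, untreated

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
print(f"mean severity at consultation: {mean_before:.1f}")
print(f"mean severity at follow-up:    {mean_after:.1f}")
```

The apparent improvement between consultation and follow-up is produced entirely by the selection of patients at their worst moment; any treatment given in between would have received the credit.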
Clinicians tend to get quite miffed when anyone tries to explain to them how multifactorial the situation really is and how little their much-treasured experience tells us about therapeutic efficacy. Here are seven of the counter-arguments I hear most frequently:
1) The improvement was so direct and prompt that it was obviously caused by my treatment [this notion is not very convincing; placebo-effects can be just as prompt and direct].
2) I have seen it so many times that it cannot be a coincidence [some clinicians are very caring, charismatic, and empathetic; they will thus regularly generate powerful placebo-responses, even when using placebos].
3) A study with several thousand patients shows that 75% of them improved with my treatment [such response rates are not uncommon, even for ineffective treatments, if patient-expectation was high].
4) Surely chronic conditions don’t suddenly get better; my treatment therefore cannot be a placebo [this is incorrect; many chronic conditions eventually improve, if only temporarily].
5) I had a patient with a serious condition, e.g. cancer, who received my treatment and was cured [if one investigates such cases, one often finds that the patient also took a conventional treatment; or, in rare instances, even cancer-patients show spontaneous remissions].
6) I have tried the treatment myself and had a positive outcome [clinicians are not immune to the multifactorial nature of the perceived clinical response].
7) Even children and animals respond very well to my treatment, surely they are not prone to placebo-effects [animals can be conditioned to respond; and then there is, of course, the natural history of the disease].
Is all this to say that clinical experience is useless? Clearly not! I am merely pointing out that, when it comes to therapeutic efficacy, clinical experience is no replacement for evidence. It is invaluable for a lot of other things, but it can at best provide a hint and never a proof of efficacy.
What then is reliable evidence?
As the clinical outcomes after treatments always have many determinants, we need a different approach for verifying therapeutic efficacy. Essentially, we need to know what would have happened, if our patients had not received the treatment in question.
The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.
Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.
Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The over-riding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.
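The logic of the controlled trial described above can be sketched in code (a toy simulation; every number is invented): both arms experience the natural history of the condition and a placebo response, so each arm improves on its own, and only the difference between the arms isolates the treatment’s specific contribution.

```python
import random

random.seed(42)

# Toy model of a two-arm trial. Both arms share natural recovery and
# non-specific (placebo) responses; only the treatment arm gets any
# specific effect -- here set to zero, i.e. an inert treatment.
def outcome(extra_effect):
    natural = random.gauss(3, 1)   # natural history of the condition
    placebo = random.gauss(2, 1)   # non-specific / placebo response
    return natural + placebo + extra_effect

TRUE_EFFECT = 0.0                  # the treatment is inert
treated = [outcome(TRUE_EFFECT) for _ in range(500)]
control = [outcome(0.0) for _ in range(500)]

mean_t = sum(treated) / len(treated)
mean_c = sum(control) / len(control)
print(f"mean improvement, treated arm: {mean_t:.2f}")
print(f"mean improvement, control arm: {mean_c:.2f}")
print(f"between-group difference:      {mean_t - mean_c:.2f}")
```

Judged by the treated arm alone, the inert treatment looks impressive; judged against the control arm, its specific effect correctly comes out as approximately nil. That between-group comparison is the whole point of the design.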
Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.
Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their shortcomings, they are far superior to any other method for determining the efficacy of medical interventions.
There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.
Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.
In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.
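The arithmetic at the core of many meta-analyses is inverse-variance pooling: each study’s effect estimate is weighted by the inverse of its squared standard error, so precise studies count for more. A minimal sketch, with effect sizes and standard errors invented purely for illustration, looks like this:

```python
import math

# Fixed-effect, inverse-variance pooling of several study results.
# The (effect, standard error) pairs below are invented examples.
studies = [
    (0.10, 0.20),
    (-0.05, 0.15),
    (0.02, 0.10),
    (0.20, 0.30),
]

weights = [1 / se**2 for _, se in studies]               # precise studies weigh more
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled effect: {pooled:.3f}")
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

With these invented inputs, the individually scattered results pool to an estimate close to zero whose confidence interval straddles zero – exactly the kind of overall verdict that no single cherry-picked study could deliver.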
Why is evidence important?
In a way, this question has already been answered: only with reliable evidence can we tell with any degree of certainty that it was the treatment per se – and not any of the other factors mentioned above – that caused the clinical outcome we observe in routine practice. Only if we have such evidence can we be sure about cause and effect. And only then can we make sure that patients receive the best possible treatments currently available.
There are, of course, those who say that causality does not matter all that much. What is important, they claim, is to help the patient, and if it was a placebo-effect that did the trick, who cares? However, I know of many reasons why this attitude is deeply misguided. To mention just one: we probably all might agree that the placebo-effect can benefit many patients, yet it would be a fallacy to assume that we need a placebo treatment to generate a placebo-response.
If a clinician administers an efficacious therapy [one that generates benefit beyond placebo] with compassion, time, empathy and understanding, she will generate a placebo-response PLUS a response to the therapy administered. In this case, the patient benefits twice. It follows that, merely administering a placebo is less than optimal; in fact it usually means cheating the patient of the effect of an efficacious therapy.
The frequently voiced counter-argument is that there are many patients who are ill without an exact diagnosis and who therefore cannot receive a specific treatment. This may be true, but even those patients’ symptoms can usually be alleviated with efficacious symptomatic therapy, and I fail to see how the administration of an ineffective treatment might be preferable to using an effective symptomatic therapy.
We all agree that helping the patient is the most important task of a clinician. This task is best achieved by maximising the non-specific effects [e.g. placebo], while also making sure that the patient benefits from the specific effects of what medicine has to offer. If that is our goal in clinical practice, we need both reliable evidence and experience; one cannot be a substitute for the other, and scientific evidence is an essential precondition for good medicine.