Edzard Ernst

MD, PhD, MAE, FMedSci, FRSB, FRCP, FRCPEd.

Since it was first published, the “Swiss government report” on homeopathy has been celebrated as the most convincing proof so far that homeopathy works. On the back of this news, all sorts of strange stories have emerged. Their aim seems to be to convince consumers that homeopathy is based on compelling evidence.

Readers of this blog might therefore benefit from a brief and critical evaluation of this “evidence” in support of homeopathy. Recently, not one, not two, not three but four independent critiques of this document have become available.

Collectively, these articles [only one of which is mine] suggest that the “Swiss report” is hardly worth the paper it was written on; one of the critiques published in the Swiss Medical Weekly even stated that it amounted to “research misconduct”! Compared to such outspoken language, my own paper concluded much more conservatively: “this report [is] methodologically flawed, inaccurate and biased”.

So what is wrong with it? Why is this document not an accurate summary of the existing evidence? I said this would be a brief post, so I will mention only some of the most striking flaws.

The report is not, as often claimed, a product of the Swiss government; in fact, it was produced by 13 authors who have no connection to any government and who are known proponents of homeopathy. For some unimaginable reason, they decided to invent their very own criteria for what constitutes evidence. For instance, they included case-reports and case-series, re-defined what is meant by effectiveness, were highly selective in choosing the articles they happened to like [presumably because of the direction of the result] while omitting lots of data that did not seem to confirm their prior belief, and assessed only a very narrow range of indications.

The report quotes several of my own reviews of homeopathy but, intriguingly, it omitted others for no conceivable reason. I was baffled to realise that the authors reported my conclusions differently from the original published text in my articles. If this had occurred once or twice, it might have been a forgivable error – but this happened in 10 of 22 instances.

Negative conclusions in my original reviews were thus repeatedly turned into positive verdicts, and evidence against homeopathy suddenly appeared to support it. This is, of course, a serious problem: if someone is too busy to look up my original articles, she is very unlikely to notice this extraordinary attempt to cheat.

To me, this approach seems similar to that of an accountant who produces a balance sheet where debts appear as credits. It is a simple yet dishonest way to generate a positive result where there is none!

The final straw for me came when I realised that the authors of this dubious report had declared that they were free of conflicts of interest. This notion is demonstrably wrong; several of them earn their living through homeopathy!

Knowing all this, sceptics might take any future praise of this “Swiss government report” with more than just a pinch of salt. Once we are aware of the full, embarrassing details, it is not difficult to understand how the final verdict turned out to be in favour of homeopathy: if we convert much of the negative data on any subject into positive evidence, any rubbish will come out smelling of roses – even homeopathy.

 

What is and what isn’t evidence, and why is the distinction important?

In the area of alternative medicine, we tend to engage in seemingly endless discussions around the subject of evidence; the relatively few comments on this new blog already confirm this impression. Many practitioners claim that their very own clinical experience is at least as important and generalizable as scientific evidence. It is therefore relevant to analyse in a little more detail some of the issues related to evidence as they apply to the efficacy of alternative therapies.

To prevent the debate from instantly deteriorating into a dispute about the value of this or that specific treatment, I will abstain from mentioning any alternative therapy by name and urge all commentators to do the same. The discussion on this post should not be about the value of homeopathy or any other alternative treatment; it is about more fundamental issues which, in my view, often get confused in the usually heated arguments for or against a specific alternative treatment.

My aim here is to outline the issues more fully than would be possible in the comments section of this blog. Readers and commentators can subsequently be referred to this post whenever appropriate. My hope is that, in this way, we might avoid repeating the same arguments ad nauseam.

Clinical experience is notoriously unreliable

Clinicians often feel quite strongly that their daily experience holds important information about the efficacy of their interventions. In this assumption, alternative practitioners are usually entirely united with healthcare professionals working in conventional medicine.

When their patients get better, they assume this to be the result of their treatment, especially if the experience is repeated over and over again. As an ex-clinician, I do sympathise with this notion which might even prevent practitioners from losing faith in their own work. But is the assumption really correct?

The short answer is NO. Two events [the treatment and the improvement] that follow each other in time are not necessarily causally related; we all know that, of course. So, we ought to consider alternative explanations for a patient’s improvement after therapy.

Even the most superficial scan of the possibilities discloses several options: the natural history of the condition, regression towards the mean, the placebo-effect, concomitant treatments, social desirability, to name but a few. These and other phenomena can contribute to or determine the clinical outcome such that inefficacious treatments appear to be efficacious.
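
To see how powerful one of these phenomena can be, consider a little simulation – a purely hypothetical sketch in Python with invented numbers, not real data. Patients tend to consult us when they feel at their worst, so even a completely inert treatment will, on average, be followed by a measurable “improvement”:

```python
import random

random.seed(1)

# Each patient's symptom score fluctuates around a personal baseline;
# the "therapy" we administer is completely inert and changes nothing.
def symptom(baseline):
    return baseline + random.gauss(0, 10)  # day-to-day fluctuation

baselines = [random.gauss(50, 5) for _ in range(10_000)]

before, after = [], []
for b in baselines:
    score = symptom(b)
    if score > 60:                 # only the worst-feeling patients consult
        before.append(score)
        after.append(symptom(b))   # re-measured at follow-up; no treatment effect

print(f"mean score at consultation: {sum(before) / len(before):.1f}")
print(f"mean score at follow-up:    {sum(after) / len(after):.1f}")
# The follow-up mean is clearly lower purely through regression towards
# the mean: an apparent "improvement" without any treatment whatsoever.
```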

What follows is simple, undeniable and plausible for scientists, yet intensely counter-intuitive for clinicians: the prescribed treatment is only one of many influences on the clinical outcome. Thus even the most impressive clinical experience of the perceived efficacy of a treatment can be totally misleading. In fact, experience might just reflect the fact that we repeat the same mistake over and over again. Put differently, the plural of anecdote is anecdotes, not evidence!

Clinicians tend to get quite miffed when anyone tries to explain to them how multifactorial the situation really is and how little their much-treasured experience tells us about therapeutic efficacy. Here are seven of the counter-arguments I hear most frequently:

1) The improvement was so direct and prompt that it was obviously caused by my treatment [this notion is not very convincing; placebo-effects can be just as prompt and direct].

2) I have seen it so many times that it cannot be a coincidence [some clinicians are very caring, charismatic, and empathetic; they will thus regularly generate powerful placebo-responses, even when using placebos].

3) A study with several thousand patients shows that 75% of them improved with my treatment [such response rates are not uncommon, even for ineffective treatments, if patient expectation was high].

4) Surely chronic conditions don’t suddenly get better; my treatment therefore cannot be a placebo [this is incorrect; eventually many chronic conditions improve, if only temporarily].

5) I had a patient with a serious condition, e.g. cancer, who received my treatment and was cured [if one investigates such cases, one often finds that the patient also took a conventional treatment; or, in rare instances, even cancer-patients show spontaneous remissions].

6) I have tried the treatment myself and had a positive outcome [clinicians are not immune to the multifactorial nature of the perceived clinical response].

7) Even children and animals respond very well to my treatment; surely they are not prone to placebo-effects [animals can be conditioned to respond; and then there is, of course, the natural history of the disease].

Is all this to say that clinical experience is useless? Clearly not! I am merely pointing out that, when it comes to therapeutic efficacy, clinical experience is no replacement for evidence. It is invaluable for a lot of other things, but it can at best provide a hint and never a proof of efficacy.

What then is reliable evidence?

As the clinical outcomes after treatments always have many determinants, we need a different approach for verifying therapeutic efficacy. Essentially, we need to know what would have happened, if our patients had not received the treatment in question.

The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.

Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.
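
As a toy illustration of this logic, here is another hypothetical sketch in Python with invented effect sizes: every patient improves thanks to natural history and placebo responses, so both arms look impressive on their own, and only the between-group comparison exposes that the treatment itself adds nothing.

```python
import random

random.seed(2)

N = 5_000
NATURAL_RECOVERY = 15   # improvement everyone experiences over time
PLACEBO_RESPONSE = 5    # improvement from expectation, attention, etc.
SPECIFIC_EFFECT = 0     # what the treatment itself contributes (nothing)

def improvement(extra):
    return NATURAL_RECOVERY + PLACEBO_RESPONSE + extra + random.gauss(0, 10)

treated = [improvement(SPECIFIC_EFFECT) for _ in range(N)]
control = [improvement(0) for _ in range(N)]   # placebo control arm

def mean(xs):
    return sum(xs) / len(xs)

print(f"improvement in treated arm: {mean(treated):5.1f}")   # looks impressive
print(f"improvement in control arm: {mean(control):5.1f}")   # ...so does this
print(f"between-group difference:   {mean(treated) - mean(control):5.1f}")
# Only the between-group difference estimates the specific effect (~0 here).
```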

Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The overriding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.

Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.

Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their shortcomings, they are far superior to any other method for determining the efficacy of medical interventions.

There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.

Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.

In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.
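
For readers who wonder what “pooling the data” amounts to technically, the simplest version is a fixed-effect, inverse-variance meta-analysis: each study’s effect estimate is weighted by the inverse of its variance, so that precise studies count for more. A minimal sketch, with figures invented purely for illustration:

```python
# Fixed-effect inverse-variance pooling (invented example figures).
studies = [          # (effect size, standard error) from three trials
    (0.30, 0.15),
    (0.10, 0.10),
    (-0.05, 0.20),
]

weights = [1 / se**2 for _, se in studies]  # precise studies weigh more
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} (standard error {pooled_se:.2f})")
```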

Why is evidence important?

In a way, this question has already been answered: only with reliable evidence can we tell with any degree of certainty that it was the treatment per se – and not any of the other factors mentioned above – that caused the clinical outcome we observe in routine practice. Only if we have such evidence can we be sure about cause and effect. And only then can we make sure that patients receive the best possible treatments currently available.

There are, of course, those who say that causality does not matter all that much. What is important, they claim, is to help the patient, and if it was a placebo-effect that did the trick, who cares? However, I know of many reasons why this attitude is deeply misguided. To mention just one: we can probably all agree that the placebo-effect can benefit many patients, yet it would be a fallacy to assume that we need a placebo treatment to generate a placebo-response.

If a clinician administers an efficacious therapy [one that generates benefit beyond placebo] with compassion, time, empathy and understanding, she will generate a placebo-response PLUS a response to the therapy administered. In this case, the patient benefits twice. It follows that merely administering a placebo is less than optimal; in fact, it usually means cheating the patient out of the effect of an efficacious therapy.

The frequently voiced counter-argument is that there are many patients who are ill without an exact diagnosis and who therefore cannot receive a specific treatment. This may be true, but even those patients’ symptoms can usually be alleviated with efficacious symptomatic therapy, and I fail to see how the administration of an ineffective treatment might be preferable to using an effective symptomatic therapy.

Conclusion

We all agree that helping the patient is the most important task of a clinician. This task is best achieved by maximising the non-specific effects [e.g. placebo], while also making sure that the patient benefits from the specific effects of what medicine has to offer. If that is our goal in clinical practice, we need reliable evidence as well as experience. One cannot be a substitute for the other, and scientific evidence is an essential precondition for good medicine.

Guest Post by Louise Lubetkin

A study published last week in the New England Journal of Medicine (NEJM) has brought to light some stark differences in the way that physicians and their patients see the role of chemotherapy in the management of advanced (i.e., metastatic) cancer.

Physicians who treat patients with advanced cancer know only too well that while chemotherapy can sometimes be helpful in easing symptoms, and may temporarily slow tumor growth, it cannot reverse or permanently cure the disease.  In other words, when chemotherapy is given to patients with advanced cancer it is always given with palliative rather than curative intent.  However, this is a distinction that a sizeable majority of cancer patients apparently do not fully understand.

In the NEJM-study, which involved 1193 patients with advanced lung or colorectal cancer, only 20-30 percent of patients reported understanding that chemotherapy was not at all likely to cure their cancer. The remainder, a full 81 percent of patients with colorectal cancer and 69 percent of patients with lung cancer, continued to believe, even when told otherwise, that chemotherapy did indeed offer them a significant chance of cure.

The study raises important questions concerning possible lack of informed consent: would patients still accept chemotherapy if they knew that it stood no chance of curing them? The authors cite a study which revealed that patients  – especially younger patients – would opt for chemotherapy if it offered even a 1 percent chance of cure, but would be considerably less willing to accept the same treatment if it offered only a significant increase in life expectancy. In the light of this, the authors write, “…an argument can be made that patients without a sustained understanding that chemotherapy cannot cure their cancer have not met the standard for true ongoing informed consent to their treatment.”

Because of the searching nature of the questions raised by the NEJM-study, and its potential ethical ramifications, it seems destined to be picked up by advocates of alternative medicine and used as a cudgel against standard medicine. To promoters of alt med, oncology represents a cynical institutionalized conspiracy to obstruct the use of purported “natural” cures, and chemotherapy is simply a license to poison patients in pursuit of profit. Take, for example, this fevered headline and article from the Natural News website: “Chemo ‘benefits’ wildly over-hyped by oncologists; cancer patients actually believe they will be ‘cured’ by poison.”

“…chemotherapy is nothing but a sham “treatment” that puts cancer patients through needless pain and suffering while making the cancer industry rich,” continues the Natural News article.

“And perhaps the most disturbing part about this now-normalized form of medical quackery is that oncologists typically fail to disclose to their patients the fact that chemotherapy does not even cure cancer, which gives them false hope.”

(Which incidentally is pretty rich, coming from a website which carries, on the same page as this article, an ad which reads “How to CURE almost any cancer at home for $5.15 a day.”)

In fact, as more than one study has previously demonstrated, the majority of oncologists do indeed try their best to convey the incurable nature of metastatic cancer, and do mention the limited aims of chemotherapy in this setting. However, patients themselves are not always psychologically receptive, and are not always immediately able to confront the bleak truth. Neither, understandably, are physicians always eager to dwell on the negative aspects of the situation during “bad news” consultations. While two thirds of doctors tell patients at their initial visit that they have an incurable disease, only about a third explicitly state the prognosis. And even when prognosis is explained, more than one third of patients simply refuse to believe that treatment is unable to cure them (see Smith TJ, Dow LA, Virago EA, et al., here).

Moreover, patients’ initial reaction to the news that their cancer has recurred, or has metastasized, is typically “What can be done?” rather than “When will I die?”  Similarly, physicians – who, contrary to the calumnies of alt med conspiracy-mongers, are just as human as the rest of us, and just as averse to being the bearer of awful news – are apt quickly to follow their patients’ lead away from the hopelessness and finality of the situation and towards a practical discussion of treatment options, a realm in which they feel far more at home.

Significantly, the NEJM-study found that the very physicians who most explicitly drummed home the message that chemotherapy would not cure advanced cancer were consistently given the lowest marks for empathy and communication skills by their patients.  Conversely, those physicians who projected a more optimistic view of chemotherapy were perceived as better communicators.

“In an era of greater measurement and accountability in health care,” the study concludes, “we need to recognize that oncologists who communicate honestly with their patients, a marker of high quality of care, may be at risk for lower patient ratings.”

In an accompanying NEJM editorial titled “Talking with Patients about Dying” (unfortunately it’s behind a paywall but you can read a summary here), Thomas J. Smith, MD, and Dan L. Longo, MD, provide a trenchant commentary on this important subject.

“Chemotherapy near the end of life is still common, does not improve survival, and is one preventable reason why 25 percent of all Medicare funds are spent in the last year of life. Patients need truthful information in order to make good choices. If patients are offered truthful information – repeatedly – on what is going to happen to them, they can choose wisely. Most people want to live as long as they can, with a good quality of life, and then transition to a peaceful death outside the hospital. We have the tools to help patients make these difficult decisions. We just need the gumption and incentives to use them.”

As these uncompromisingly candid editorialists point out, chemotherapy is a crude and ineffective treatment for advanced cancer. But to claim, as do many proponents of alternative approaches to cancer, that palliative chemotherapy represents a highly lucrative business built on the deliberate deception of dying patients, is a clear-cut case of the pot calling the kettle black.

When advocates of alternative cancer therapies have subjected their own highly profitable nostrums to the same kind of scientific scrutiny and honest, unsparing self-criticism as the NEJM researchers and editorialists, and when they produce evidence that their remedies and regimens, their coffee enemas and latter-day reincarnations of laetrile offer greater efficacy, whether palliative or curative, than chemotherapy, then, and only then, will they have earned the right to criticize rational medicine for its shortcomings.

Science has seen a steady stream of scandals which are much more than just regrettable, as they undermine much of what science stands for. In medicine, fraud and other forms of misconduct by scientists can even endanger the health of patients.

Against this background, it would be handy to have a simple measure which would give us some indication of the trustworthiness of scientists, particularly clinical scientists. Might I be so bold as to propose such a method, the TRUSTWORTHINESS INDEX (TI)?

A large part of clinical science is about testing the efficacy of treatments, and it is the scientist who does this type of research who I want to focus on. It goes without saying that, occasionally, such tests will have to generate negative results such as “the experimental treatment was not effective” [actually “negative” is not the right term, as it is clearly positive to know that a given therapy does not work]. If this never happens with the research of a given individual, we could be dealing with false positive results. In such a case, our alarm bells should start ringing, and we might begin to ask ourselves, how trustworthy is this person?

Yet, in real life, the alarm bells rarely do ring. This absence of suspicion might be due to the fact that, at any one point in time, a single person tends to see only one particular paper of the individual in question – and one result tells him next to nothing about whether this scientist produces more than his fair share of positive findings.

What is needed is a measure that captures the totality of a researcher’s output. Such parameters already exist; think of the accumulated “Impact Factor” or the “H-Index”, for instance. But, at best, these citation metrics provide information about the frequency or impact of this person’s published papers and totally ignore his trustworthiness. To get a handle on this particular aspect of a scientist’s work, we might have to consider not the impact but the direction of his published conclusions.

If we calculated the percentage of a researcher’s papers arriving at positive conclusions and divided this by the percentage of his papers drawing negative conclusions, we might have a useful measure. A realistic example might be the case of a clinical researcher who has published a total of 100 original articles. If 50% had positive and 50% negative conclusions about the efficacy of the therapy tested, his TI would be 1.

Depending on what area of clinical medicine this person is working in, 1 might be a figure that is just about acceptable in terms of the trustworthiness of the author. If the TI goes beyond 1, we might get concerned; if it reaches 4 or more, we should get worried.

An example would be a researcher who has published 100 papers of which 80 are positive and 20 arrive at negative conclusions. His TI would consequently amount to 4. Most of us equipped with a healthy scepticism would consider this figure highly suspect.
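
For what it is worth, the index is trivial to compute; here is a toy implementation in Python, as tongue-in-cheek as the index itself:

```python
def trustworthiness_index(positive: int, negative: int) -> float:
    """Percentage of positive conclusions divided by percentage of negative
    ones. Since both percentages share the same denominator (the total
    number of papers), this reduces to a simple ratio of counts."""
    if negative == 0:
        return float("inf")   # never a negative result: maximally suspect
    return positive / negative

print(trustworthiness_index(50, 50))   # 1.0 -- just about acceptable
print(trustworthiness_index(80, 20))   # 4.0 -- highly suspect
```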

Of course, this is all a bit simplistic, and, like all other citation metrics, my TI does not provide us with any level of proof; it merely is a vague indicator that something might be amiss. And, as stressed already, the cut-off point for any scientist’s TI very much depends on the area of clinical research we are dealing with. The lower the plausibility and the higher the uncertainty associated with the efficacy of the experimental treatments, the lower the point at which the TI might suggest that something is fishy.

A good example of an area plagued with implausibility and uncertainty is, of course, alternative medicine. Here one would not expect a high percentage of rigorous tests to come out positive, and a TI of 0.5 might perhaps already be on the limit.

So how does the TI perform when we apply it to my colleagues, the full-time researchers in alternative medicine? I have not actually calculated the exact figures, but as an educated guess, I estimate that it would be very hard, even impossible, to find many with a TI under 4.

But surely this cannot be true! It would be way above the acceptable level which we just estimated to be around 0.5. This must mean that my [admittedly slightly tongue in cheek] idea of calculating the TI was daft. The concept of my TI clearly does not work.

The alternative explanation for the high TIs in alternative medicine might be that most full-time researchers in this field are not trustworthy. But this hypothesis must be rejected offhand – or mustn’t it?

Whenever I lecture on the topic of alternative medicine for cancer, the first comment from the audience usually is “aren’t there any herbal treatments that are effective?” This is of course a most reasonable question; after all, many conventional cancer drugs originate from the plant kingdom – think of Taxol, for instance.

My answer often upsets believers in alternative cancer remedies. I tell them that, no, there is none and, even worse, there never will be one.

Did I just contradict myself? Did I not just state that many cancer drugs come from plants? Yes, but once the pure ingredient is isolated and synthesised, the drug ceases to be a herbal remedy, which is defined as an extract of all the plant’s ingredients, not just one isolated constituent.

And why am I so depressingly pessimistic about there ever being a herbal cancer cure? Because of a simple fact: as soon as a natural substance shows the slightest promise, scientists will analyse and test it. If this process turns out to be successful, we will have a new cancer drug – but not an effective herbal remedy. Again, think of Taxol.

For almost a decade, colleagues and I have been working on a relatively little-known project called CAM cancer. It started as an EU-funded activity and is now coordinated by Norwegian researchers. Our main aim is to provide unbiased and reliable information about all sorts of alternative treatments for cancer.

Our team is large, hard-working, highly motivated and independent – we do not accept sponsorship from anyone who might want to influence the results of what we are publishing. It is probably fair to say that most individuals who give their time working for CAM cancer are more optimistic than I regarding the value of alternative treatments. Therefore, our publications are certainly not biased against them; if anything, they are a bit on the generous side.

Much of our work consists in generating rigorously researched and fully referenced summaries of the evidence. Before these get published, they are thoroughly peer-reviewed and, whenever necessary, they also get updated to include the newest data. A good proportion of the reviews relates to herbal treatments.

Here are the crucial bits from our conclusions about those herbal cancer remedies which we have so far investigated:

Aloe vera: …studies are too preliminary to tell whether it is effective.

Artemisia annua: …there is no evidence from clinical trials…

Black cohosh: …In all but one trial black cohosh extracts were not superior to placebo.

Boswellia: …No certain conclusions can be drawn…

Cannabis: …The use of cannabinoids for anorexia-cachexia-syndrome in advanced cancer is not supported by the evidence…

Carctol: … is not supported by evidence…

Chinese herbal medicine for pancreatic cancer: …the potential benefit… is not strong enough to support their use…

Curcumin: There is currently insufficient documentation to support the effectiveness and efficacy of curcumin for cancer…

Echinacea: …there is currently insufficient evidence to support or refute the claims… in relation to cancer management.

Essiac: There is no evidence from clinical trials to indicate that it is effective…

Garlic: Only a few clinical trials exist and their results are inconclusive.

Green tea: …the findings… are still inconclusive.

Milk vetch: Poor design and low quality… prohibit any definite conclusions.

Mistletoe: …the evidence to support these claims is weak.

Noni: …evidence on the proposed benefits in cancer patients is lacking…

PC-Spes: …the… contamination issues render these results meaningless. An improved PC-Spes2 preparation was evaluated in an uncontrolled study which did not confirm the encouraging results…

St John’s wort: …there are no clinical studies to show that St. John’s wort would change the natural history of any type of cancer…

Ukrain: …several limitations in the studies prevent any conclusion.

As you can see, so far, we have not identified a single herbal cancer treatment that demonstrably alters the natural history of cancer in a positive direction. To me, this suggests that my rather bold statements above might be correct.

Of course, there will be some enthusiasts who point out that the list is not complete; and they, of course, are correct: there are probably hundreds of herbal remedies that we have not yet dealt with. And, of course, for some of those the evidence might be more convincing – but somehow I doubt it; after all, we did try to tackle the most promising herbal remedies first.

My claim therefore stands: there never will be a herbal (or other alternative) cancer cure. But, please, feel free to convince me otherwise.

Guest post by Louise Lubetkin

A few months ago The Economist ran one of its Where Do You Stand? polls asking readers whether alternative medicine should be taught in medical schools:

In Britain and Australia, horrified scientists are fighting hard against the teaching of alternative therapies in publicly funded universities and against their provision in mainstream medical care. They have had most success in Britain. Some universities have been shamed into ending alternative courses. The number of homeopathic hospitals in Britain is dwindling. In 2005 the Lancet, a leading medical journal, declared “the end of homeopathy”. In 2010 a parliamentary science committee advised that “the government should not endorse the use of placebo treatments including homeopathy.” So, should alternative medicine be treated on a par with the traditional sort and taught in medical schools?

It may surprise you to discover that more than two thirds of the almost 43,000 respondents were of the opinion that yes, it should.

Given that the use of alternative therapies is now so widespread, a plausible case can be made for giving medical students a comprehensive overview of the field as part of their training. But that’s not at all what the poll asked. Here again is how it was worded:

So, should alternative medicine be treated on a par with the traditional sort and taught in medical schools? (emphasis added)

That such a hefty majority of those who responded – and Economist readers are generally affluent and well-educated – came out firmly in favour not just of the teaching of alternative medicine but explicitly of parity between it and standard medicine, is both a reflection of the seemingly unstoppable popularity of alternative medicine and also, in a wider sense, of just how respectable it has become to be indifferent to, or even overtly hostile towards science.

It is ironic that since its very first issue in 1843 The Economist has proudly displayed on its contents page a mission statement declaring that the magazine is engaged in “a severe contest between intelligence, which presses forward, and an unworthy, timid ignorance obstructing our progress.”

It would seem that a significant sample of its poll-answering readership has a somewhat distorted vision of the struggle between intelligence and ignorance. In this postmodern worldview truth is relative: science is simply one version of reality; anti-science is another – and the two carry equal weight.

The very term “alternative medicine” – I use that expression with the greatest reluctance – is itself an outgrowth of this phenomenon, implying as it does that there are two valid, indeed interchangeable, choices in the sphere of medicine, a mainstream version and a parallel and equally effective alternative approach. That the term “alternative medicine” has now so seamlessly entered our language is a measure of how pervasive this form of relativism has become.

In fact, alternative medicine and mainstream medicine are absolutely not equivalent, nor are they by any means interchangeable, and to speak about them the way one might when debating whether to take the bus or the subway to work – both will get you there reliably – constitutes an assault on truth.

How did alternative medicine, so very little of which has ever been conclusively shown to be of even marginal benefit, achieve this astounding degree of acceptance?

Certainly the pervasive and deeply unhealthy influence of the pharmaceutical industry over the practice of medicine has done much to erode public confidence in the integrity of the medical profession.  Alternative medicine has nimbly stepped into the breach, successfully casting itself as an Everyman’s egalitarian version of medicine with a gentle-sounding therapeutic philosophy based not on pharmaceuticals with their inevitable side effects, but on helping the body to heal itself with the assistance of “natural” and freely available remedies.

This image of alternative medicine as a humble David bravely facing down the medico-pharmaceutical establishment’s bullying Goliath does not, however, stand up well to scrutiny. Alternative medicine is without question a hugely lucrative enterprise. Moreover, unlike the pharmaceutical industry or mainstream medicine, it is almost entirely unregulated.

According to the US National Institutes of Health, in 2007 Americans spent almost $40 billion out of their own pockets (i.e., not reimbursed by health insurance) on alternative medicine, almost $12 billion of which was spent on an estimated 350 million visits to various practitioners (chiropractors, naturopaths, massage therapists, etc.). The remaining $28 billion was spent on non-vitamin “natural” products for self-care such as fish oils, plant extracts, glucosamine and chondroitin, etc. And that’s not all: on top of this, sales of vitamin and nutritional supplements have been estimated to constitute a further $30 billion annually.

And then, of course, there’s the awkward fact of its almost total lack of effectiveness.

Look at it this way: illness is the loneliest and most isolating of all journeys. In that bleak landscape, scientifically validated medicine is not just the best compass and the most reliable map; it’s also the truest friend any of us can have.

So, should alternative medicine be treated on a par with the traditional sort and taught in medical schools?

Not on your life.

Is acupuncture an effective treatment for pain? This is a question which has attracted decades of debate and controversy. Proponents usually argue that it is supported by good clinical evidence, millennia of tradition and a sound understanding of the mechanisms involved. Sceptics, however, tend to be unimpressed and point out that the clinical evidence of proponents often is cherry-picked, that a long history of usage is fairly meaningless, and that the alleged mechanisms are tentative at best.

This discrepancy of opinions is confusing, particularly for lay people who might be tempted to try acupuncture. But it might vanish in the light of a new, comprehensive and unique evaluation of the clinical evidence.

An international team of acupuncture trialists published a meta-analysis of individual patient data to determine the analgesic effect of acupuncture compared to sham or non-acupuncture control for the following 4 chronic pain conditions: back and neck pain, osteoarthritis, headache, and shoulder pain. Data from 29 RCTs, with an impressive total of 17,922 patients, were included.

The results of this new evaluation suggest that acupuncture is superior to both sham and no-acupuncture controls for each of these conditions. Patients receiving acupuncture had less pain, with scores that were 0.23 (95% CI, 0.13-0.33), 0.16 (95% CI, 0.07-0.25), and 0.15 (95% CI, 0.07-0.24) SDs lower than those of sham controls for back and neck pain, osteoarthritis, and chronic headache, respectively; the effect sizes in comparison to no-acupuncture controls were 0.55 (95% CI, 0.51-0.58), 0.57 (95% CI, 0.50-0.64), and 0.42 (95% CI, 0.37-0.46) SDs.

Based on these findings, the authors reached the conclusion that “acupuncture is effective for the treatment of chronic pain and is therefore a reasonable referral option. Significant differences between true and sham acupuncture indicate that acupuncture is more than a placebo. However, these differences are relatively modest, suggesting that factors in addition to the specific effects of needling are important contributors to the therapeutic effects of acupuncture”.

Only hours after its publication, this new meta-analysis was celebrated by believers in acupuncture as the strongest evidence on the topic currently available. Much of the lay press followed in the same, disappointingly uncritical vein. The authors of the meta-analysis, most of whom are known enthusiasts of acupuncture, seem entirely sure that they have provided the most compelling proof to date for the effectiveness of acupuncture. But are they correct, or are they perhaps the victims of their own devotion to this therapy?

Perhaps, a more sceptical view would be helpful – after all, even the enthusiastic authors of this article admit that, when compared to sham, the effect size of real acupuncture is too small to be clinically relevant. Therefore one might argue that this meta-analysis confirms what critics have suggested all along: acupuncture is not a useful treatment for clinical routine.
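
To put such effect sizes into perspective, remember that they are expressed in units of the outcome’s standard deviation and can therefore be converted back into raw scale points. A back-of-the-envelope sketch in Python – the standard deviation is my assumption for illustration, not a figure taken from the meta-analysis:

```python
# Convert standardised mean differences (SMDs) back into raw scale points,
# assuming (hypothetically) pain scored 0-100 with a standard deviation of 20.
ASSUMED_SD = 20

for condition, smd in [("back and neck pain", 0.23),
                       ("osteoarthritis",     0.16),
                       ("chronic headache",   0.15)]:
    print(f"{condition}: {smd} SD ~ {smd * ASSUMED_SD:.1f} points out of 100")
# Roughly 3-5 points on a 0-100 scale versus sham: a difference few
# patients would even notice.
```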

Unsurprisingly, the authors of the meta-analysis do their very best to play down this aspect. They reason that, for clinical routine, the comparison between acupuncture and non-acupuncture controls is more relevant than the one between acupuncture and sham. But this comparison, of course, includes placebo and other non-specific effects masquerading as effects of acupuncture – and with this little trick (which, by the way, is very popular in alternative medicine), we can, of course, show that even sugar pills are effective.

I do not doubt that context effects are important in patient care; yet I do doubt that we need a placebo treatment for generating such benefit in our patients. If we administer treatments which are effective beyond placebo with kindness, time, compassion and empathy, our patients will benefit from both specific and non-specific effects. In other words, purely generating non-specific effects with acupuncture is far from optimal and certainly not in the interest of our patients. In my view, this cannot be regarded as good medicine, and the authors’ conclusion referring to a “reasonable referral option” is therefore more than a little surprising.

Acupuncture-fans might argue that, at the very minimum, the new meta-analysis does demonstrate acupuncture to be statistically significantly better than a placebo. Yet I am not convinced that this notion holds water: the small residual effect-size in the comparison of acupuncture with sham might not be the result of a specific effect of acupuncture; it could be (and most likely is) due to residual bias in the analysed studies.

The meta-analysis is strongly driven by the large German trials which, for good reasons, were heavily and frequently criticised when first published. One of the most important potential drawbacks was that many participating patients were almost certainly de-blinded through the significant media coverage of the study while it was being conducted. Moreover, in none of these trials was the therapist blinded (the often-voiced notion that therapist-blinding is impossible is demonstrably false). Thus it is likely that patient-unblinding and the absence of therapist-blinding importantly influenced the clinical outcome of these trials thus generating false positive findings. As the German studies constitute by far the largest volume of patients in the meta-analysis, any of their flaws would strongly impact on the overall result of the meta-analysis.

So, has this new meta-analysis finally solved the decades-old question about the effectiveness of acupuncture? It might not have solved it, but we have certainly moved closer to a solution, particularly if we employ our faculties of critical thinking. In my view, this meta-analysis is the most compelling evidence yet to demonstrate the ineffectiveness of acupuncture for chronic pain.

Last Friday, it was announced in Vienna that Prof Harald Walach is the recipient of a prestigious award. The Austrian ‘Society for Critical Thinking’ wanted to officially recognise Walach for his “unique effort to introduce science-free theories into academia”.

Walach is professor at the Europa-Universitaet Viadrina where he investigates alternative medicine as well as much more exotic subjects. During recent months, Walach made headlines because he had published research allegedly showing that, with the use of a “Kozyrev mirror”, one can open channels of time and space and make telepathy a reality.

In the laudatio, it was pointed out that Walach’s claim to fame is his attempt to render bullshit more respectable by pressing it through the channels of his university. The end result, the speaker stressed, is not that bullshit becomes non-bullshit, but that the university stinks.

Most of Walach’s research is at the more implausible end of the alternative medicine spectrum, e.g. homeopathy and spiritual healing. He is also the editor-in-chief of a journal specialising in alternative medicine which virtually never publishes a negative result and in which he frequently promotes his bizarrely irrational concepts.

Crucially, Walach is a member of the scientific advisory board of CAM-media-watch, a blog run by Claus Fritzsche and sponsored by the homeopathic manufacturer Heel, which also happens to be the donor for Walach’s university chair. Fritzsche and Walach have many things in common, not just the sponsor or the obsession with irrationality, but also the fact that they frequently and unfairly attack me and my work.

I would like to take this opportunity to congratulate Walach for this remarkable award — they could not have found a more deserving pseudo-scientist!

We all remember the libel case of the British Chiropractic Association (BCA) against Simon Singh, I’m sure. The BCA lost, and the chiropractic profession was left in disarray.

One would have thought that chiropractors had learnt a lesson from this experience which, after all, resulted in a third of all UK chiropractors facing disciplinary proceedings. One would have thought that chiropractors had had enough of pursuing others when, in fact, they themselves were clearly in the wrong. One would have thought that chiropractors would eventually focus on providing us with some sound evidence about their treatments. One would have thought that chiropractors might now try to get their act together.

Yet it seems that such hopes are being sorely disappointed. In particular, chiropractors continue to attack those who have the courage to publicly criticise them. The proof for this statement is that, during the last few months, chiropractors took direct or indirect actions against me on three different occasions.

The first complaint was made by a chiropractor to the PRESS COMPLAINTS COMMISSION (PCC). The GUARDIAN had commented on a paper that I had just published which demonstrated that many trials of chiropractic fail to mention adverse effects. If nothing else, this omission amounts to a serious breach of publication ethics and is thus not a trivial matter. However, the chiropractor felt that the GUARDIAN and I were essentially waging a war against chiropractors in order to tarnish the reputation and public image of chiropractors. The PCC considered the case and promptly dismissed it.

The second complaint was made by a local chiropractor to my university. He alleged that I had been generally unfair in my publications on the subject and, specifically, he claimed that, in a recent systematic review of deaths after chiropractic treatments, I had committed what he called “research misconduct”. My university considered the case and promptly dismissed it.

The third and probably most significant complaint was also made by a chiropractor directly to my university. This time, the allegation was that I had fabricated data in an article published as long ago as 1996. The chiropractor in question had already tried three times to attack me through complaints and through his publications. Crucially, several years ago he had filed a formal complaint with the General Medical Council (GMC) claiming that, in my published articles, I systematically and wilfully misquoted the chiropractic literature. At the time, the GMC had ruled that his accusation was unfounded.

Presumably to increase his chances of success for his fourth attempt, his new complaint to my university was backed up by a supporting letter from the WORLD FEDERATION OF CHIROPRACTIC. This document stated that my publications relating to the risks of chiropractic had “serious scientific shortcomings” and suggested that Exeter University “publicly distance itself from Prof Ernst’s publications on chiropractic, to enhance the reputation of the university”. My university peers considered the case and promptly dismissed it.

At this point, I should perhaps explain that my university has, in the past, been less than protective towards me. During the last decade or so, complaints against me had become a fairly regular occurrence, and invariably, my peers have taken them very seriously. When the first private secretary of Charles Windsor filed one, they even deemed it appropriate to conduct an official 13-month-long investigation into my alleged wrong-doings. Thus my peers’ dismissal of the two chiropractors’ claims indicates to me that their two recent complaints must have been truly and utterly devoid of substance.

The three deplorable episodes summarised here speak for themselves, I think. I will therefore abstain from further comments and am delighted to leave this task to the readers of this blog.

Cancer patients are understandably desperate to try every treatment that promises a cure. They often turn to the Internet, where they find thousands of “alternative” cancer cures being sold, often at exorbitant cost. One of them is Ukrain.

Ukrain is based on two natural substances: alkaloids from the Greater Celandine and Thiotepa. It was developed by Dr Wassil Nowicky who allegedly cured his brother’s testicular cancer with his invention. Despite its high cost of about £50 per injection, Ukrain has become popular in the UK and elsewhere.

Ukrain takes its name from the fact that the brothers Nowicky originate from the Ukraine, where much of the research on this drug was also conducted. When I say much, I should stress that I use this word in relative terms. In the realm of “alternative” cancer cures, we often find no clinical studies at all. For Ukrain, however, the situation is refreshingly different; there are a number of trials, and the question is, what do they really tell us?

In 2005, we decided to review all the clinical studies which had tested the efficacy of Ukrain. Somewhat to our surprise, we found 7 randomised clinical trials. Even more surprising, we thought, was the fact that all of them reported baffling cure rates. So, were we excited to have identified a cure for even the most incurable cancers? The short answer to this question is NO.

All of the trials were methodologically weak; but, as this is not uncommon in the area of alternative medicine, it did not irritate us all that much. Far more remarkable was the fact that these studies seemed to be odd in several other ways.

Their results seemed too good to be true; all but one trial came from the Ukraine where research governance might have been less than adequate. The authors of the studies seemed to overlap and often included Nowicky himself. They were published in only two different journals of little impact. The only non-Ukrainian trial came from Germany and was not much better: its lead author happened to be the editor of the journal where it was published; more importantly, the paper lacked crucial methodological details, which rendered the findings difficult to interpret, and the trial had a tiny sample size.

Collectively, these circumstances were enough for us to be very cautious. Consequently, we stated that “numerous caveats prevent a positive conclusion”.

Despite our caution, this article became much cited, and cancer centres around the world began to wonder whether they should take Ukrain more seriously; many integrative cancer clinics even started using the drug in their clinical routine. Dr Nowicky, who meanwhile had established his base in Vienna from where he marketed his drug, must have been delighted.

Soon, numerous websites sprang up praising Ukrain: “It is the first medicament in the world that accumulates in the cores of cancer cells very quickly after administration and kills only cancer cells while leaving healthy cells undamaged. Its inventor and patent holder Dr Wassil Nowicky was nominated for the Nobel Prize for this medicament in 2005…”

Somehow, I doubt this thing with the Nobel Prize. What I do not question for a minute, however, is this press release by the Austrian police: since January, the Viennese police have been investigating Dr Nowicky. During a “major raid” on 4 September 2012, he and his accomplices were arrested under the suspicion of commercial fraud. Nowicky was accused of illegally producing and selling the unlicensed drug Ukrain. The financial damage was estimated to be in the region of 5 million Euros.

I fear, however, that the damage done to desperate cancer patients across the world might be much greater. Generally speaking, “alternative” cancer cures are not just a menace, they are a contradiction in terms: there is no such thing and there never will be one. If tomorrow this or that alternative remedy shows some promise as a cancer cure, it will be investigated by mainstream oncology with some urgency; and if the findings turn out to be positive, the eventual result will be a new cancer treatment. To assume that oncologists might ignore a promising treatment simply because it originates from the realm of alternative medicine is idiotic and supposes that oncologists are mean bastards who do not care about their patients – and this, of course, is an accusation which one might rather direct towards the irresponsible purveyors of “alternative” cancer cures.
