MD, PhD, FMedSci, FSB, FRCP, FRCPEd


Yes, it is unlikely but true! I once was the hero of the world of energy healing, albeit for a short period only. An amusing story, I hope you agree.

Back in the late 1990s, we had decided to run two trials in this area. One of them was to test the efficacy of distant healing for the removal of ordinary warts, common viral infections of the skin which are quite harmless and usually disappear spontaneously. We had designed a rigorous study, obtained ethics approval and were in the midst of recruiting patients, when I suggested I could be the trial’s first participant, as I had noticed a tiny wart on my left foot. As patient-recruitment was sluggish at that stage, my co-workers consulted the protocol to check whether it might prevent me from taking part in my own trial. They came back with the good news that, as I was not involved in the running of the study, there was no reason for me to be excluded.

The next day, they ‘processed’ me like all the other wart sufferers of our investigation. My wart was measured, photographed and documented. A sealed envelope with my trial number was opened (in my absence, of course) by one of the trialists to see whether I would be in the experimental or the placebo group. The former were to receive ‘distant healing’ from a group of 10 experienced healers who had volunteered and felt confident that they could cure warts. All they needed was a few details about each patient, they had confirmed. The placebo group received no such intervention. ‘Blinding’ the patients was easy in this trial: since they were not themselves involved in any healing action, they could not know whether they were in the placebo or the verum group.

The treatment period lasted for several weeks, during which time my wart was re-evaluated at regular intervals. When I had completed the study, final measurements were done, and I was told that I had been the recipient of ‘healing energy’ from the 10 healers during the past weeks. Not that I had felt any of it, and not that my wart had noticed it either: it was still there, completely unchanged.

I remember not being all that surprised…until the next morning, when I noticed that my wart had disappeared! Gone without a trace!

Of course, I told my co-workers who were quite excited, re-photographed the spot where the wart had been and consulted the study protocol to determine what had to be done next. It turned out that we had made no provisions for events that might occur after the treatment period.

But somehow, this did not feel right, we all thought. So we decided to make a post-hoc addendum to our protocol which stipulated that all participants of our trial would be asked a few days after the end of the treatment whether any changes to their warts had been noted.

Meanwhile the healers had got wind of the professorial wart’s disappearance. They were delighted and quickly told other colleagues. In no time at all, the world of ‘distant healing’ had agreed that warts often reacted to their intervention with a slight delay – and they were pleased to hear that we had duly amended our protocol to adequately capture this important phenomenon. My ‘honest’ and ‘courageous’ action of acknowledging and documenting the disappearance of my wart was praised, and it was assumed that I was about to prove the efficacy of distant healing.

And that’s how I became their ‘hero’ – the sceptical professor who had now seen the light with his own eyes and experienced on his own body the incredible power of their ‘healing energy’.

Incredible it remained though: I was the only trial participant who lost his wart in this way. When we published this study, we concluded: Distant healing from experienced healers had no effect on the number or size of patients’ warts.

AND THAT’S WHEN I STOPPED BEING THEIR ‘HERO’.

A recent interview on alternative medicine for the German magazine DER SPIEGEL prompted well over 500 comments; even though, in the interview, I covered numerous alternative therapies, the discussion that followed focussed almost entirely on homeopathy. Yet again, many of the comments provided a reminder of the quasi-religious faith many people have in homeopathy.

There can, of course, be dozens of reasons for such strong convictions. Yet, in my experience, some seem to be more prevalent and important than others. During my last two decades of researching homeopathy, I think I have identified several of the most important ones. In this post, I try to outline a typical sequence of events that eventually leads to a faith in homeopathy which is utterly immune to fact and reason.

The epiphany

The starting point of this journey towards homeopathy-worship is usually an impressive personal experience which is often akin to an epiphany (defined as a moment of sudden and great revelation or realization). I have met hundreds of advocates of homeopathy, and those who talk about this sort of thing invariably offer impressive stories about how they metamorphosed from being a ‘sceptic’ (yes, it is truly phenomenal how many believers insist that they started out as sceptics) into someone who was completely bowled over by homeopathy, and how that ‘moment of great revelation’ changed the rest of their lives. Very often, this ‘Saulus-Paulus conversion’ relates to that person’s own (or a close friend’s) illness which allegedly was cured by homeopathy.

Rachel Roberts, chief executive of the Homeopathy Research Institute, provides as good an example of this sort of epiphany as anyone; in an article in THE GUARDIAN, she described her conversion to homeopathy with the following words:

I was a dedicated scientist about to begin a PhD in neuroscience when, out of the blue, homeopathy bit me on the proverbial bottom.

Science had been my passion since I began studying biology with Mr Hopkinson at the age of 11, and by the age of 21, when I attended the dinner party that altered the course of my life, I had still barely heard of it. The idea that I would one day become a homeopath would have seemed ludicrous.

That turning point is etched in my mind. A woman I’d known my entire life told me that a homeopath had successfully treated her when many months of conventional treatment had failed. As a sceptic, I scoffed, but was nonetheless a little intrigued.

She confessed that despite thinking homeopathy was a load of rubbish, she’d finally agreed to an appointment, to stop her daughter nagging. But she was genuinely shocked to find that, after one little pill, within days she felt significantly better. A second tablet, she said, “saw it off completely”.

I admit I ruined that dinner party. I interrogated her about every detail of her diagnosis, previous treatment, time scales, the lot. I thought it through logically – she was intelligent, she wasn’t lying, she had no previous inclination towards alternative medicine, and her reluctance would have diminished any placebo effect.

Scientists are supposed to make unprejudiced observations, then draw conclusions. As I thought about this, I was left with the highly uncomfortable conclusion that homeopathy appeared to have worked. I had to find out more.

So, I started reading about homeopathy, and what I discovered shifted my world for ever. I became convinced enough to hand my coveted PhD studentship over to my best friend and sign on for a three-year, full-time homeopathy training course.

Now, as an experienced homeopath, it is “science” that is biting me on the bottom. I know homeopathy works…

As I said, I have heard many strikingly similar accounts. Some of these tales seem a little too tall to be true and might be a trifle exaggerated, but the consistency of the picture that emerges from all of these stories is nevertheless extraordinary: people get started on a single anecdote which they are prepared to experience as an epiphanic turn-around. Subsequently, they are on a mission of confirming their new-found belief over and over again, until they become undoubting disciples for life.

So what? you might ask. But I do think this epiphany-like event at the outset of a homeopathic career is significant. In no other area of health care does the initial anecdote regularly play such a prominent role. People do not become believers in aspirin, for instance, on the basis of a ‘moment of great revelation’; they take it because of the evidence. And, if there is a discrepancy between the external evidence and their own experience, as with homeopathy, most people would start to reflect: what other explanations exist to rationalise the anecdote? Invariably, there are many (placebo effects, the natural history of the condition, concomitant events etc.).

Confirmation bias

Epiphany-struck believers spend much time and effort actively looking for similar stories that seem to confirm the initial anecdote. They might, for instance, recommend or administer or prescribe homeopathy to others, many of whom will report positive outcomes. At the same time, all anecdotes that do not happen to fit the belief are brushed aside, forgotten, suppressed, belittled, decried etc. This process leads to confirmation after confirmation after confirmation – and gradually builds up to what proponents of homeopathy would call ‘years of experience’. And ‘years of experience’ can, of course, not be wrong!

Again, believers neglect to question, doubt and rationalise their own perceptions. They ignore the fact that years of experience might just be little more than a stubborn insistence on repeating one’s own mistakes. Even the most obvious confounders, such as selective memory or alternative causes for positive clinical outcomes, are quickly dismissed or not even considered at all.

Avoiding cognitive dissonance at all cost

But believers still have to somehow deal with the scientific facts about homeopathy; and these are, of course, grossly out of line with their belief. Thus the external evidence and the internal belief would inevitably clash, creating a shrill cognitive dissonance. This must be avoided at all cost, as it might threaten the believer’s peace of mind. And the solution is amazingly simple: scientific evidence that does not confirm the believer’s conviction is ignored or, when this proves to be impossible, turned upside down.

Rachel Roberts’ account is most enlightening also in this respect:

And yet I keep reading reports in the media saying that homeopathy doesn’t work and that this scientific evidence doesn’t exist.

The facts, it seems, are being ignored. By the end of 2009, 142 randomised control trials (the gold standard in medical research) comparing homeopathy with placebo or conventional treatment had been published in peer-reviewed journals – 74 were able to draw firm conclusions: 63 were positive for homeopathy and 11 were negative. Five major systematic reviews have also been carried out to analyse the balance of evidence from RCTs of homeopathy – four were positive (Kleijnen, J, et al; Linde, K, et al; Linde, K, et al; Cucherat, M, et al) and one was negative (Shang, A et al). It’s usual to get mixed results when you look at a wide range of research results on one subject, and if these results were from trials measuring the efficacy of “normal” conventional drugs, ratios of 63:11 and 4:1 in favour of a treatment working would be considered pretty persuasive.

This statement is, in my view, a classic example of a desperate misinterpretation of the truth as a means of preventing the believer’s house of cards from collapsing. It even makes the hilarious claim that not the believers but the doubters “ignore” the facts.

In order to be able to adhere to her belief, Roberts needs to rely on a woefully biased whitewash from the ‘British Homeopathic Association’. And, in order to be on the safe side, she even quotes it misleadingly. The conclusion of the Cucherat review, for instance, can only be seen as positive by the most blinkered of minds: There is some evidence that homeopathic treatments are more effective than placebo; however, the strength of this evidence is low because of the low methodological quality of the trials. Studies of high methodological quality were more likely to be negative than the lower quality studies. Further high quality studies are needed to confirm these results. Contrary to what Roberts states, there are not just 5 but at least a dozen systematic reviews of homeopathy; my own systematic review of systematic reviews, for example, concluded that the best clinical evidence for homeopathy available to date does not warrant positive recommendations for its use in clinical practice.

It seems that, at this stage of a believer’s development, the truth gets all too happily sacrificed on the altar of faith. All these ‘ex-sceptics’ turned believers are now able to display is a rather comical parody of scepticism.

The delusional end-stage

The last stage in the career of a believer has been reached when hardly anything that he or she is convinced of resembles reality any longer. I don’t know much about Rachel Roberts, and she might not have reached this point yet; but there are many others who clearly have.

My two favourite examples of end-stage homeopathic delusionists are John Benneth and Dana Ullman. The final stage on the journey from ‘sceptic scientist’ to delusional disciple is characterised by an incessant stream of incoherent statements of vile nonsense that beggars belief. It is therefore easy to recognise and, because nobody can possibly take the delusionists seriously, they are best viewed as relatively harmless contributors to medical comedy.

Why does all of this matter?

Many homeopathy-fans are quasi-religious believers who, in my experience, have regressed way beyond reason. It is therefore a complete waste of time trying to reason with them. Initiated by a highly emotional epiphany, their faith cannot be shaken by rational arguments. Similar but usually less pronounced attitudes, I am afraid, can be observed in true believers of other alternative treatments as well (here I have chosen the example of homeopathy mainly because it is the area where things are most explicit).

True believers claim to have started out as sceptics, and they often insist that they are driven by a scientific mind. Yet I have never seen any evidence for these claims. On the contrary, for a relatively trivial episode to become a life-changing epiphany, the believer’s mind needs to be lamentably unscientific, unquestioning and simple.

In my experience, true believers will not change their mind; I have never seen this happening. However, progress might nevertheless be made, if we managed to instil a more (self-) questioning rationality and scientific attitudes into the minds of the next generations. In other words, we need better education in science and more training of critical thinking during their formative years.

Some experts concede that chiropractic spinal manipulation is effective for chronic low back pain (cLBP). But what is the right dose? There have been no full-scale trials of the optimal number of treatments with spinal manipulation. This study was aimed at filling this gap by trying to identify a dose-response relationship between the number of visits to a chiropractor for spinal manipulation and cLBP outcomes. A further aim was to determine the efficacy of manipulation by comparison with a light massage control.

The primary cLBP outcomes were the 100-point pain intensity scale and functional disability scales evaluated at the 12- and 24-week primary end points. Secondary outcomes included days with pain and functional disability, pain unpleasantness, global perceived improvement, medication use, and general health status.

One hundred patients with cLBP were randomized to each of 4 dose levels of care: 0, 6, 12, or 18 sessions of spinal manipulation from a chiropractor. Participants were treated three times per week for 6 weeks. At sessions when manipulation was not assigned, the patients received a focused light massage control. Covariate-adjusted linear dose effects and comparisons with the no-manipulation control group were evaluated at 6, 12, 18, 24, 39, and 52 weeks.

For the primary outcomes, mean pain and disability improvements in the manipulation groups were 20 points by 12 weeks, an effect that was sustained to 52 weeks. Linear dose-response effects were small, reaching about two points per 6 manipulation sessions at 12 and 52 weeks for both variables. At 12 weeks, the greatest differences compared to the no-manipulation controls were found for 12 sessions (8.6 pain and 7.6 disability points); at 24 weeks, differences were negligible; and at 52 weeks, the greatest group differences were seen for 18 visits (5.9 pain and 8.8 disability points).

The authors concluded that the number of spinal manipulation visits had modest effects on cLBP outcomes above those of 18 hands-on visits to a chiropractor. Overall, 12 visits yielded the most favorable results but was not well distinguished from other dose levels.

This study is interesting because it confirms that the effects of chiropractic spinal manipulation as a treatment for cLBP are tiny and probably not clinically relevant. And even these tiny effects might not be due to the treatment per se but could be caused by residual confounding and bias.

As for the optimal dose, the authors suggest that, on average, 18 sessions might be the best. But again, we have to be clear that the dose-response effects were small and of doubtful clinical relevance. Since the therapeutic effects are tiny, it is obviously difficult to establish a dose-response relationship.

In view of the cost of chiropractic spinal manipulation and the uncertainty about its safety, I would probably not rate this approach as the treatment of choice but would point to the current Cochrane review, which concludes that “high quality evidence suggests that there is no clinically relevant difference between spinal manipulation and other interventions for reducing pain and improving function in patients with chronic low-back pain”. Personally, I think it is more prudent to recommend exercise, back school, massage or perhaps even yoga to cLBP-sufferers.

Some sceptics are convinced that, in alternative medicine, there is no evidence. This assumption is wrong, I am afraid, and statements of this nature can actually play into the hands of apologists of bogus treatments: they can then easily demonstrate the sceptics to be mistaken or “biased”, as they would probably say. The truth is that there is plenty of evidence – and lots of it is positive, at least at first glance.

Alternative medicine researchers have been very industrious during the last two decades to build up a sizable body of ‘evidence’. Consequently, one often finds data even for the most bizarre and implausible treatments. Take, for instance, the claim that homeopathy is an effective treatment for cancer. Those who promote this assumption have no difficulties in locating some weird in-vitro study that seems to support their opinion. When sceptics subsequently counter that in-vitro experiments tell us nothing about the clinical situation, apologists quickly unearth what they consider to be sound clinical evidence.

An example is this prospective observational 2011 study of cancer patients from two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n = 259), and one cohort with conventionally treated cancer patients (CG; n = 380). Its main outcome measures were the change in quality of life after 3 months and after one year, as well as impairment by fatigue, anxiety or depression. The results of this study show significant improvements in most of these endpoints, and the authors concluded that we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment.

Another, in some ways even better example is this 2005 observational study of 6544 consecutive patients from the Bristol Homeopathic Hospital. Every patient attending the hospital outpatient unit for a follow-up appointment was included, commencing with their first follow-up attendance. Of these patients 70.7% (n = 4627) reported positive health changes, with 50.7% (n = 3318) recording their improvement as better or much better. The authors concluded that homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases.
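Neither of these observational studies had a no-treatment control group, so the reported improvements are exactly what one would expect from the natural course of fluctuating chronic conditions. A toy simulation (with purely illustrative numbers, not the Bristol data) shows how enrolment during a flare-up plus regression to the mean can generate an impressive ‘response rate’ even when the treatment does nothing at all:

```python
import random

random.seed(0)

def simulate(n=6544):
    """Fraction of untreated patients who 'improve' between enrolment and follow-up."""
    improved = 0
    for _ in range(n):
        # Each patient has a fluctuating symptom score (higher = worse).
        # Patients tend to seek help during a flare-up, so model the baseline
        # as the worse of two random draws from their personal distribution.
        baseline = max(random.gauss(50, 10), random.gauss(50, 10))
        # Follow-up is simply another draw from the same distribution --
        # i.e. the 'treatment' has no effect whatsoever.
        follow_up = random.gauss(50, 10)
        if follow_up < baseline:
            improved += 1
    return improved / n

print(f"{simulate():.0%} report improvement despite a zero treatment effect")
```

Even this crude model yields roughly two thirds of patients ‘improving’ – strikingly close to the 70.7% reported from Bristol, without any effective therapy in sight.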

The principle that is being followed here is simple:

  • believers in a bogus therapy conduct a clinical trial which is designed to generate an apparently positive finding;
  • the fact that the study cannot tell us anything about cause and effect is cleverly hidden or belittled;
  • they publish their findings in one of the many journals that specialise in this sort of nonsense;
  • they make sure that advocates across the world learn about their results;
  • the community of apologists of this treatment picks up the information without the slightest critical analysis;
  • the researchers conduct more and more of such pseudo-research;
  • nobody attempts to do some real science: the believers do not truly want to falsify their hypotheses, and the real scientists find it unreasonable to conduct research on utterly implausible interventions;
  • thus the body of false or misleading ‘evidence’ grows and grows;
  • proponents start publishing systematic reviews and meta-analyses of their studies which are devoid of critical input;
  • too few critics point out that these reviews are fatally flawed – ‘rubbish in, rubbish out’!
  • eventually politicians, journalists, health care professionals and other people who did not necessarily start out as believers in the bogus therapy are convinced that the body of evidence is impressive and justifies implementation;
  • important health care decisions are thus based on data which are false and misleading.

So, what can be done to prevent such pseudo-evidence from being mistaken for solid proof, which might eventually mislead many into believing that bogus treatments are based on reasonably sound data? I think the following measures would be helpful:

  • authors should abstain from publishing over-enthusiastic conclusions which can all too easily be misinterpreted (given that the authors are believers in the therapy, this is not a realistic option);
  • editors might consider rejecting studies which contribute next to nothing to our current knowledge (given that these studies are usually published in journals that are in the business of promoting alternative medicine at any cost, this option is also not realistic);
  • if researchers report highly preliminary findings, there should be an obligation to do further studies in order to confirm or refute the initial results (not realistic either, I am afraid);
  • in case this does not happen, editors should consider retracting the paper reporting unconfirmed preliminary findings (utterly unrealistic).

What then can REALISTICALLY be done? I wish I knew the answer! All I can think of is that sceptics should educate the rest of the population to think and analyse such ‘evidence’ critically…but how realistic is that?

We have probably all fallen into the trap of thinking that something which has stood the ‘test of time’, i.e. something that has been used for centuries with apparent success, must be ok. In alternative medicine, this belief is extremely widespread, and one could argue that the entire sector is built on it. Influential proponents of ‘traditional’ medicine like Prince Charles do their best to strengthen this assumption. Sadly, however, it is easily exposed as a classic fallacy: things that have stood the ‘test of time’ might work, of course, but the ‘test of time’ is never proof of anything.

A recent study brought this message home loud and clear. This trial tested the efficacy of Rhodiola crenulata (R. crenulata), a traditional remedy which has been used widely in the Himalayan areas and in Tibet to prevent acute mountain sickness. As no scientific studies of this traditional treatment existed, the researchers conducted a double-blind, placebo-controlled crossover RCT to test its efficacy in the prevention of acute mountain sickness.

Healthy adult volunteers were randomized to two treatment sequences, receiving either 800 mg R. crenulata extract or placebo daily for 7 days before ascent and two days during mountaineering. After a three-month wash-out period, they were crossed over to the alternate treatment. On each occasion, the participants ascended rapidly from 250 m to 3421 m. The primary outcome measure was the incidence of acute mountain sickness with headache and at least one of the symptoms of nausea or vomiting, fatigue, dizziness, or difficulty sleeping.

One hundred and two participants completed the trial. No significant differences in the incidence of acute mountain sickness were found between R. crenulata extract and placebo groups. If anything, the incidence of severe acute mountain sickness with Rhodiola extract was slightly higher compared to the one with placebo: 35.3% vs. 29.4%.
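A quick back-of-the-envelope check confirms that the reported difference is well within chance. The sketch below uses a naive unpaired two-proportion z-test; a paired (McNemar-type) analysis would suit a crossover better, but the discordant-pair counts are not reported, so this is only a rough approximation:

```python
import math

# Severe acute mountain sickness incidence reported in the trial:
# 35.3% with Rhodiola vs 29.4% with placebo, among the 102 completers.
n = 102
p_rhodiola, p_placebo = 0.353, 0.294

# Pooled two-proportion z-test (unpaired approximation).
pooled = (p_rhodiola + p_placebo) / 2
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = (p_rhodiola - p_placebo) / se
print(f"z = {z:.2f}")  # well below the 1.96 threshold for significance at p < 0.05
```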

R. crenulata extract was not effective in reducing the incidence or severity of acute mountain sickness as compared to placebo.

Similar examples could be found by the dozen. They demonstrate very clearly that the notion of the ‘test of time’ is erroneous: a treatment which has a long history of usage is not necessarily effective (or safe) – not only that, it might even be dangerous. The true value of a therapy cannot be judged by experience; to be sure, we need rigorous clinical trials. Acute mountain sickness is a potentially life-threatening condition for which there are reasonably effective treatments. If people relied on the ‘ancient wisdom’ instead of using a therapy that actually works, they might pay for their error with their lives. The sooner alternative medicine proponents realise that, the better.

This post will probably work best, if you have read the previous one describing how the parallel universe of acupuncture research insists on going in circles in order to avoid admitting that their treatment might not be as effective as they pretend. The way they achieve this is fairly simple: they conduct trials that are designed in such a way that they cannot possibly produce a negative result.

A brand-new investigation which was recently vociferously touted via press releases etc. as a major advance in proving the effectiveness of acupuncture is an excellent case in point. According to its authors, the aim of this study was to evaluate acupuncture versus usual care and counselling versus usual care for patients who continue to experience depression in primary care. This sounds alright, but wait!

755 patients with depression were randomised to one of three arms: 1) acupuncture, 2) counselling, and 3) usual care alone. The primary outcome was the difference in mean Patient Health Questionnaire (PHQ-9) scores at 3 months, with secondary analyses over 12 months of follow-up. Analysis was by intention-to-treat. PHQ-9 data were available for 614 patients at 3 months and 572 patients at 12 months. Patients attended a mean of 10 sessions for acupuncture and 9 sessions for counselling. Compared to usual care, there was a statistically significant reduction in mean PHQ-9 depression scores at 3 and 12 months for both acupuncture and counselling.

From this, the authors conclude that both interventions were associated with significantly reduced depression at 3 months when compared to usual care alone.

Acupuncture for depression? Really? Our own systematic review – conducted with co-authors who are among the most ardent apologists of acupuncture I have come across – showed that the evidence is inconsistent on whether manual acupuncture is superior to sham… Therefore, I thought it might be a good idea to have a closer look at this new study.

One needs to search this article very closely indeed to find out that the authors did not actually evaluate acupuncture versus usual care and counselling versus usual care at all, and that comparisons were not made between acupuncture, counselling, and usual care (hints like the use of the word “alone” are all we get to guess that the authors’ text is outrageously misleading). Not even the methods section informs us what really happened in this trial. You find this hard to believe? Here is the unabbreviated part of the article that describes the interventions applied:

Patients allocated to the acupuncture and counselling groups were offered up to 12 sessions usually on a weekly basis. Participating acupuncturists were registered with the British Acupuncture Council with at least 3 years post-qualification experience. An acupuncture treatment protocol was developed and subsequently refined in consultation with participating acupuncturists. It allowed for customised treatments within a standardised theory-driven framework. Counselling was provided by members of the British Association for Counselling and Psychotherapy who were accredited or were eligible for accreditation having completed 400 supervised hours post-qualification. A manualised protocol, using a humanistic approach, was based on competences independently developed for Skills for Health. Practitioners recorded in logbooks the number and length of sessions, treatment provided, and adverse events. Further details of the two interventions are presented in Tables S2 and S3. Usual care, both NHS and private, was available according to need and monitored for all patients in all three groups for the purposes of comparison.

It is only in the results tables that we can determine what treatments were actually given; and these were:

1) Acupuncture PLUS usual care (i.e. medication)

2) Counselling PLUS usual care

3) Usual care

It is almost a ‘no-brainer’ that, if you compare A+B to B (or, in this three-armed study, A+B vs C+B vs B), you will find that the former does better than the latter – unless A has a negative effect, of course. As acupuncture has significant placebo effects, it can never be a negative, and thus this trial was an entirely foregone conclusion. As, in alternative medicine, one seems to need experimental proof even for ‘no-brainers’, we demonstrated some time ago that this common-sense theory is correct by conducting a systematic review of all acupuncture trials with such a design. We concluded that the ‘A + B versus B’ design is prone to false positive results… What makes this whole thing even worse is the fact that I once presented our review in a lecture where the lead author of the new trial was in the audience; so there can be no excuse of not being aware of the ‘no-brainer’.
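The point can be made with a few lines of simulation. The effect sizes below are invented for illustration (they are not the trial’s numbers); all that matters is that the add-on treatment A carries any non-specific effect at all:

```python
import random
import statistics

random.seed(1)

# Illustrative effect sizes -- assumptions for this sketch, not trial data.
USUAL_CARE = 4.0   # mean PHQ-9 reduction attributable to usual care (B)
CONTEXT = 2.0      # non-specific (placebo/attention) effect of the add-on (A)
NOISE = 5.0        # between-patient variability

def arm(n, extra_effect):
    """Simulate PHQ-9 reductions for n patients in one trial arm."""
    return [random.gauss(USUAL_CARE + extra_effect, NOISE) for _ in range(n)]

a_plus_b = arm(250, CONTEXT)  # 'acupuncture' plus usual care
b_alone = arm(250, 0.0)       # usual care alone

print(f"A+B mean improvement: {statistics.mean(a_plus_b):.1f}")
print(f"B   mean improvement: {statistics.mean(b_alone):.1f}")
# As long as the add-on has any non-specific effect, A+B 'wins' by design.
```

Run this with any seed and any positive value for CONTEXT: the A+B arm comes out ahead essentially every time, which is precisely why the design cannot produce a negative result.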

Some might argue that this is a pragmatic trial, that it would have been unethical to not give anti-depressants to depressed patients and that therefore it was not possible to design this study differently. However, none of these arguments are convincing, if you analyse them closely (I might leave that to the comment section, if there is interest in such aspects). At the very minimum, the authors should have explained in full detail what interventions were given; and that means disclosing these essentials even in the abstract (and press release) – the part of the publication that is most widely read and quoted.

It is arguably unethical to ask for patients’ co-operation, use research funds etc. for a study whose results were known even before the first patient had been recruited. And it is surely dishonest to hide the true nature of the design so very sneakily in the final report.

In my view, this trial begs at least 5 questions:

1) How on earth did it pass the peer review process of one of the most highly reputed medical journals?

2) How did the protocol get ethics approval?

3) How did it get funding?

4) Does the scientific community really allow itself to be fooled by such pseudo-research?

5) What do I do to not get depressed by studies of acupuncture for depression?

As I write these words, I am travelling back from a medical conference. The organisers had invited me to give a lecture, which I concluded by saying: “anyone in medicine not believing in evidence-based health care is in the wrong business”. This statement was meant to stimulate the discussion and provoke the audience, who were perhaps just a little on the side of those who are not all that taken by science.

I may well have been right, because, in the coffee break, several doctors disputed my point; to paraphrase their arguments: “You don’t believe in the value of experience, you think that science is the way to know everything. But you are wrong! Philosophers and other people, who are a lot cleverer than you, tell us that science is not the way to real knowledge; and in some forms of medicine we have a wealth of experience which we cannot ignore. This is at least as important as scientific knowledge. Take TCM, for instance, thousands of years of tradition must mean something; in fact it tells us more than science will ever be able to. Qi-energy, for instance, is a concept based on experience, and science is useless at verifying it.”

I disagreed, of course. But I am afraid that I did not convince my colleagues. The appeal to tradition is amazingly powerful, so much so that even well-seasoned physicians fall for it. Yet it nevertheless is a fallacy, I am sure.

So what does experience tell us, how is it generated and why should it be unreliable?

On the level of the individual, experience emerges when a clinician makes similar observations several times in a row. This is so persuasive that few doctors are immune to the phenomenon. Let’s assume the experience is about acupuncture, more precisely about acupuncture for smoking cessation. The acupuncturist presumably has learnt during his training that his therapy works for that indication via stimulating the flow of Qi, and promptly tries it on several patients. Some of them come back for more and report that they find it easier to give up cigarettes after consulting him. This happens repeatedly, and our clinician forthwith is convinced – in fact, he knows – that acupuncture is effective for smoking cessation.

If we critically analyse this scenario, what does it tell us? It tells us very little of relevance, I am afraid. The scenario is entirely compatible with a whole host of explanations which have nothing to do with the effects of acupuncture per se:

  • Those patients who did not manage to stop smoking might not have returned. Only seeing his successes without his failures, the acupuncturist would have got the wrong end of the stick.
  • Human memory is selective such that the few patients who did come back and reported failure might easily get forgotten by the clinician. We all remember the good things and forget the disappointments, particularly if we are clinicians.
  • The placebo-effect might have played a dirty trick on the experience of our acupuncturist.
  • Some patients might have used nicotine patches that helped them to stop smoking without disclosing this fact to the acupuncturist, who then, of course, attributed the benefit to his needling.
  • The acupuncturist – being a very kind and empathetic clinician – might have involuntarily induced some of his patients to show kindness in return and thus tell porkies about their smoking habits which would have created a false positive impression about the effectiveness of his treatment.
  • Being so empathetic, the acupuncturist would have provided lots of encouragement to stop smoking which, in some patients, might have been sufficient to kick the habit.


The long and short of all this is that our acupuncturist gradually got convinced by this interplay of factors that Qi exists and that acupuncture is an effective treatment. Henceforth he would bet his last shirt that he is right about this – after all, he has seen it with his own eyes, not just once but many times. And he will doubt anyone who shows him evidence that says otherwise. In fact, he is likely to become very sceptical about scientific evidence in general – just like the doctors who talked to me after my lecture.

On a population level, such experience will be prevalent among not just one but most acupuncturists. Our clinician’s experience is certainly not unique; others will have had it too. In fact, as an acupuncturist, it is hard not to. Acupuncturists will have told everyone else about it, perhaps reported it at conferences or published it in articles or books. Experience of this nature is passed on from generation to generation, and soon someone will be able to demonstrate that acupuncture has been used ’effectively’ for smoking cessation for decades or centuries. The creation of a myth out of unreliable experience is thus complete.

Am I saying that experience of this nature is always and necessarily wrong or useless? No, I am not. It can be and often is correct. But, at the same time, it is frequently incorrect. It can serve as a valuable indicator but not more. Experience is not a tool for reliably informing us about the effectiveness of medical interventions. Experience-based medicine is an obsolete pseudo-medicine burdened with concepts that are counter-productive to optimal health care.

Philosophers and other people who are much cleverer than I am have been trying for some time to separate good from bad science and evidence from experience. Most recently, two philosophers, MASSIMO PIGLIUCCI and MAARTEN BOUDRY, commented specifically on this problem in relation to TCM. I leave you with some extensive quotes from what they wrote.

… pointing out that some traditional Chinese remedies (like drinking fresh turtle blood to alleviate cold symptoms) may in fact work, and therefore should not be dismissed as pseudoscience… risks confusing the possible effectiveness of folk remedies with the arbitrary theoretical-metaphysical baggage attached to it. There is no question that some folk remedies do work. The active ingredient of aspirin, for example, is derived from willow bark…

… claims about the existence of “Qi” energy, channeled through the human body by way of “meridians,” though, is a different matter. This sounds scientific, because it uses arcane jargon that gives the impression of articulating explanatory principles. But there is no way to test the existence of Qi and associated meridians, or to establish a viable research program based on those concepts, for the simple reason that talk of Qi and meridians only looks substantive, but it isn’t even in the ballpark of an empirically verifiable theory.

…the notion of Qi only mimics scientific notions such as enzyme actions on lipid compounds. This is a standard modus operandi of pseudoscience: it adopts the external trappings of science, but without the substance.

…The notion of Qi, again, is not really a theory in any meaningful sense of the word. It is just an evocative word to label a mysterious force of which we do not know and we are not told how to find out anything at all.

Still, one may reasonably object, what’s the harm in believing in Qi and related notions, if in fact the proposed remedies seem to help? Well, setting aside the obvious objections that the slaughtering of turtles might raise on ethical grounds, there are several issues to consider. To begin with, we can incorporate whatever serendipitous discoveries from folk medicine into modern scientific practice, as in the case of the willow bark turned aspirin. In this sense, there is no such thing as “alternative” medicine, there’s only stuff that works and stuff that doesn’t.

Second, if we are positing Qi and similar concepts, we are attempting to provide explanations for why some things work and others don’t. If these explanations are wrong, or unfounded as in the case of vacuous concepts like Qi, then we ought to correct or abandon them. Most importantly, pseudo-medical treatments often do not work, or are even positively harmful. If you take folk herbal “remedies,” for instance, while your body is fighting a serious infection, you may suffer severe, even fatal, consequences.

…Indulging in a bit of pseudoscience in some instances may be relatively innocuous, but the problem is that doing so lowers your defenses against more dangerous delusions that are based on similar confusions and fallacies. For instance, you may expose yourself and your loved ones to harm because your pseudoscientific proclivities lead you to accept notions that have been scientifically disproved, like the increasingly (and worryingly) popular idea that vaccines cause autism.

Philosophers nowadays recognize that there is no sharp line dividing sense from nonsense, and moreover that doctrines starting out in one camp may over time evolve into the other. For example, alchemy was a (somewhat) legitimate science in the times of Newton and Boyle, but it is now firmly pseudoscientific (movements in the opposite direction, from full-blown pseudoscience to genuine science, are notably rare)….

The borderlines between genuine science and pseudoscience may be fuzzy, but this should be even more of a call for careful distinctions, based on systematic facts and sound reasoning. To try a modicum of turtle blood here and a little aspirin there is not the hallmark of wisdom and even-mindedness. It is a dangerous gateway to superstition and irrationality

“Wer heilt hat recht”. Every German knows this saying and far too many believe it. Literally translated, it means THE ONE WHO HEALS IS RIGHT, and indicates that, in health care, the proof of efficacy of a treatment is self-evident: if a clinician administers a treatment and the patient improves, she was right in prescribing it and the treatment must have been efficacious. The only English saying which is vaguely similar (but rarely used for therapies) is THE PROOF OF THE PUDDING IS IN THE EATING, translated into a medical context: the proof of the treatment is in the clinical outcome.

The saying is German but the sentiment behind it is amazingly widespread across the world, particularly the alternative one. If I had a fiver for each time a German journalist has asked me to comment on this ‘argument’ I could probably invite all my readers for a beer in the pub. The notion seems to be irresistibly appealing and journalists, consumers, patients, politicians etc. fall for it like flies. It is popular foremost as a counter-argument against scientists’ objections to homeopathy and similar placebo-treatments. If the homeopath cured her patient, then she and her treatments are evidently fine!

It is time, I think, that I scrutinise the argument and refute it once and for all.

The very first thing to note is that placebos never cure a condition. They might alleviate symptoms, but cure? No!

The next issue relates to causality. The saying assumes that the sole reason for the clinical outcome is the treatment. Yet, if a patient’s symptoms improve, the reason might have been the prescribed treatment, but this is just one of a multitude of different options, e.g.:

  • the placebo-effect
  • the regression towards the mean
  • the natural history of the condition
  • the Hawthorne effect
  • the compassion of the clinician
  • other treatments that might have been administered in parallel

Often it is a complex mixture of these and possibly other phenomena that is responsible and, unless we run a proper clinical trial, we cannot even guess the relative importance of each factor. To claim in such a messy situation that the treatment given by the clinician was the cause of the improvement is ridiculously simplistic and plainly wrong.
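One item on the list, regression towards the mean, is easy to demonstrate with a toy simulation (all numbers invented): if patients consult a clinician only when their symptoms are unusually bad, a later re-measurement will, on average, look better even when no treatment whatsoever has been given.

```python
import random

random.seed(1)

# Each patient's symptom score fluctuates randomly around a stable
# personal mean; nobody actually gets better or worse over time.
N = 10_000
personal_mean = 5.0   # arbitrary symptom units
fluctuation = 2.0

first = [random.gauss(personal_mean, fluctuation) for _ in range(N)]
threshold = 7.0       # only patients feeling worse than this seek help
consulters = [x for x in first if x > threshold]

# The second measurement is independent of the first: no treatment,
# the score simply reverts towards the personal mean.
second = [random.gauss(personal_mean, fluctuation) for _ in consulters]

avg_first = sum(consulters) / len(consulters)
avg_second = sum(second) / len(second)
print(f"at consultation: {avg_first:.1f}, at follow-up: {avg_second:.1f}")
```

The simulated patients ‘improve’ substantially between consultation and follow-up, although the treatment effect is, by construction, exactly zero – which is precisely why an uncontrolled before-and-after comparison cannot establish causation.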

But that is precisely what the saying WER HEILT HAT RECHT does. It assumes a simple mono-causal relationship that never exists in clinical settings. And, annoyingly, it somewhat arrogantly dismisses any scientific evidence by implying that the anecdotal observation is so much more accurate and relevant.

The true monstrosity of the saying can be easily disclosed with a little thought experiment. Let’s assume the saying is correct and we adopt it as a major axiom in health care. This would have all sorts of terrible consequences. For instance, any pharmaceutical company would be allowed to produce colourful placebos and sell them for a premium; they would only need to show that some patients do experience some relief after taking it. THE ONE WHO HEALS IS RIGHT!

The saying is a dangerously misleading platitude. That it happens to be German and that the Germans remain so frightfully fond of it disturbs me. That the notion, in one way or another, is deeply ingrained in the mind of charlatans across the world is worrying but hardly surprising – after all, it is said to have been coined by Samuel Hahnemann.

If one spends a lot of time, as I presently do, sorting out old files, books, journals etc., one is bound to come across plenty of weird and unusual things. I, for one, am slow at making progress with this task, mainly because I often start reading the material in front of me. It was on one of those occasions that I began studying a book written by one of the more fanatical proponents of alternative medicine and stumbled over the term THE PROOF OF EXPERIENCE. It made me think, and I began to realise that the notion behind these four words is quite characteristic of the field of alternative health care.

When I studied medicine, in the 1970s, we were told by our peers what to do, which treatments worked for which conditions and why. They had all the experience and we, by definition, had none. Experience seemed synonymous with proof. Nobody dared to doubt the word of ‘the boss’. We were educated, I now realise, in the age of EMINENCE-BASED MEDICINE.

All of this gradually changed when the concepts of EVIDENCE-BASED MEDICINE became appreciated and generally adopted by responsible health care professionals. If the woman or man at the top of the medical ‘pecking order’ now claims something that is doubtful in view of the published evidence, it is possible (sometimes even desirable) to say so – no matter how junior the doubter happens to be. As a result, medicine has changed for ever: progress is no longer made funeral by funeral [of the bosses] but new evidence is much more swiftly translated into clinical practice.

Don’t get me wrong, EVIDENCE-BASED MEDICINE does not imply disrespect for EXPERIENCE; it merely takes it for what it is. And when EVIDENCE and EXPERIENCE fail to agree with each other, we have to take a deep breath, think hard and try to do something about it. Depending on the specific situation, this might involve further study or at least an acknowledgement of a degree of uncertainty. The tension between EXPERIENCE and EVIDENCE often is the impetus for making progress. The winner in this often complex story is the patient: she will receive a therapy which, according to the best available EVIDENCE and careful consideration of the EXPERIENCE, is best for her.

NOT SO IN ALTERNATIVE MEDICINE!!! Here EXPERIENCE still trumps EVIDENCE any time, and there is no need for acknowledging uncertainty: EXPERIENCE = proof!!!

In case you think I am exaggerating, I recommend thumbing through a few books on the subject. As I already stated, I have done this quite a bit in recent months, and I can assure you that there is very little evidence in these volumes to suggest that data, research, science, etc. matter a hoot. No critical thinking is required, as long as we have EXPERIENCE on our side!

‘THE PROOF OF EXPERIENCE’ is still a motto that seems to be everywhere in alternative medicine. In many ways, it seems to me, this motto symbolises much of what is wrong with alternative medicine and the mind-set of its proponents. Often, the EXPERIENCE is in sharp contrast to the EVIDENCE. But this little detail does not seem to irritate anyone. Apologists of alternative medicine stubbornly ignore such contradictions. In the rare case where they do comment at all, the gist of their response normally is that EXPERIENCE is much more relevant than EVIDENCE. After all, EXPERIENCE is based on hundreds of years and thousands of ‘real-life’ cases, while EVIDENCE is artificial and based on just a few patients.

As far as I can see, nobody in alternative medicine pays more than lip service to the fact that EXPERIENCE can be [and often is] grossly misleading. Little or no acknowledgement exists of the fact that, in clinical routine, there are simply far too many factors that interfere with our memories, impressions, observations and conclusions. If a patient gets better after receiving a therapy, she might have improved for a dozen reasons which are unrelated to the treatment per se. And if a patient does not get better, she might not come back at all, and the practitioner’s memory will therefore fail to register such events as therapeutic failures. Whatever EXPERIENCE is, in health care, it rarely constitutes proof!

The notion of THE PROOF OF EXPERIENCE, it thus turns out, is little more than self-serving, wishful thinking which characterises the backward attitude that seems to be so remarkably prevalent in alternative medicine. No tension between EXPERIENCE and EVIDENCE is noticeable because the EVIDENCE is being ignored; as a result, there is no progress. The loser is, of course, the patient: she will receive a treatment based on criteria which are less than reliable.

Isn’t it time to bury the fallacy of THE PROOF OF EXPERIENCE once and for all?

Swiss chiropractors have just published a clinical trial investigating the outcomes of patients with radiculopathy due to cervical disc herniation (CDH). Patients were included if they had neck pain and dermatomal arm pain; sensory, motor, or reflex changes corresponding to the involved nerve root; and at least one positive orthopaedic test for cervical radiculopathy. CDH was confirmed by magnetic resonance imaging. All patients received regular neck manipulations.

Baseline data included two pain numeric rating scales (NRSs), for neck and arm, and the Neck Disability Index (NDI). At two, four and twelve weeks after the initial consultation, patients were contacted by telephone, and the data for NDI, NRSs, and patient’s global impression of change were collected. High-velocity, low-amplitude thrusts were administered by experienced chiropractors. The proportion of patients reporting to feel “better” or “much better” on the patient’s global impression of change scale was calculated. Pre-treatment and post-treatment NRSs and NDIs were analysed.

Fifty patients were included. At two weeks, 55.3% were “improved”; 68.9% at four weeks and 85.7% at twelve weeks. Statistically significant decreases in neck pain, arm pain, and NDI scores were noted at 1 and 3 months compared with baseline scores. Of the sub-acute/chronic patients, 76.2% were improved at 3 months.

The authors concluded that most patients in this study, including sub-acute/chronic patients, with symptomatic magnetic resonance imaging-confirmed CDH treated with spinal manipulative therapy, reported significant improvement with no adverse events.

In the presence of disc herniation, chiropractic manipulations have been described to cause serious complications. Some experts therefore believe that CDH is a contra-indication for spinal manipulation. The authors of this study imply, however, that it is not – on the contrary, they think it is an effective intervention for CDH.

One does not need to be a sceptic to notice that the basis for this assumption is less than solid. The study had no control group. This means that the observed effect could have been due to:

  • a placebo response,
  • the regression towards the mean,
  • the natural history of the condition,
  • concomitant treatments,
  • social desirability,
  • or other factors which have nothing to do with the chiropractic intervention per se.

And what about the interesting finding that no adverse effects were noted? Does that mean that the treatment is safe? Sorry, but it most certainly does not! In order to generate reliable results about possibly rare complications, the study would have needed to include not 50 but well over 50 000 patients.
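The arithmetic behind this point is sometimes called the ‘rule of three’: if zero adverse events are observed among n patients, the approximate 95% upper confidence bound on the true event rate is 3/n. A quick sketch (the exact figures are illustrative, not from the trial):

```python
def rule_of_three_upper(n: int) -> float:
    """Approximate 95% upper confidence bound on the true event rate
    when zero adverse events were observed among n patients.
    (The exact bound is 1 - 0.05**(1/n); 3/n approximates it for large n.)
    """
    return 3 / n

# With only 50 patients, a clean safety record is still compatible with
# a complication rate as high as 6% (about 1 in 17).
print(f"n = 50: upper bound {rule_of_three_upper(50):.1%}")

# To be reasonably confident a complication is rarer than 1 in 10 000,
# one needs roughly 3 * 10 000 = 30 000 event-free patients.
print(f"n = 30000: upper bound 1 in {int(1 / rule_of_three_upper(30_000))}")
```

In other words, ‘no adverse events in 50 patients’ rules out only complication rates above roughly 1 in 17 – far too coarse a sieve for the serious but rare complications at issue here.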

So what does the study really tell us? I have pondered over this question for some time and arrived at the following answer: NOTHING!

Is that a bit harsh? Well, perhaps yes. And I will revise my verdict slightly: the study does tell us something, after all – chiropractors tend to confuse research with the promotion of very doubtful concepts at the expense of their patients. I think, there is a name for this phenomenon: PSEUDO-SCIENCE.
