MD, PhD, FMedSci, FRSB, FRCP, FRCPEd.

One of the perks of researching alternative medicine and writing a blog about it is that one rarely runs out of good laughs. In perfect accordance with ERNST’S LAW, I have recently been entertained, amused, even thrilled by a flurry of ad hominem attacks most of which are true knee-slappers. I would like to take this occasion to thank my assailants for their fantasy and tenacity. Most days, these ad hominem attacks really do make my day.

I can only hope they will continue to make my days a little more joyous. My fear, however, is that they might, one day, run out of material. Even today, their claims are somewhat repetitive:

  • I am not qualified
  • I only speak tosh
  • I do not understand science
  • I never did any ‘real’ research
  • Exeter Uni fired me
  • I have been caught red-handed (not quite sure at what)
  • I am on BIG PHARMA’s payroll
  • I faked my research papers

Come on, you feeble-minded fantasists must be able to do better! Isn’t it time to bring something new?

Yes, I know, innovation is not an easy task. The best ad hominem attacks are, of course, always based on a kernel of truth. In that respect, the ones that have been repeated ad nauseam are sadly wanting. Therefore I have decided to provide all would-be attackers with some true and relevant facts from my life. These should enable them to invent further myths and use them as ammunition against me.

Sounds like fun? Here we go:

Both my grandfather and my father were doctors

This part of my family history could be spun in all sorts of intriguing ways. For instance, one could make up a nice story about how I, even as a child, was brainwashed to defend the medical profession at all costs from the onslaught of non-medical healers.

Our family physician was a prominent homeopath

Ahhhh, did he perhaps mistreat me and start me off on my crusade against homeopathy? Surely, there must be a nice ad hominem attack in here!

I studied psychology at Munich but did not finish it

Did I give up psychology because I discovered a manic obsession or other character flaw deeply hidden in my soul?

I then studied medicine (also in Munich) and wrote my MD thesis in the area of blood clotting

No doubt this is pure invention. Where is the proof of my qualifications? Are the data in my thesis real or invented?

My 1st job as a junior doctor was in a homeopathic hospital in Munich

Yes, but why did I leave? Surely they found out about me and fired me.

I had hands-on training in several forms of alternative medicine, including homeopathy

Easy to say, but where is the proof?

I moved to London where I worked at St George’s Hospital conducting research in blood rheology

Another invention? Where are the published papers to document this?

I went back to Munich University where I continued this line of research and was awarded a PhD

Another thesis? Again with dodgy data? Where can one see this document?

I became Professor of Rehabilitation Medicine, first at Hannover Medical School and later in Vienna

How did that happen? Did I perhaps bribe the appointment panels?

In 1993, I was appointed to the Chair in Complementary Medicine at the University of Exeter

Yes, we all know that; but why did I not direct my efforts towards promoting alternative medicine?

In Exeter, together with a team of ~20 colleagues, I published >1000 papers on alternative medicine, more than anyone else in that field

Impossible! This number clearly shows that many of these articles are fakes or plagiarisms.

My H-Index is currently >80

Same as above.

In 2012, I became Emeritus Professor of the University of Exeter

Isn’t ‘emeritus’ the Latin word for ‘dishonourable discharge’?

I HOPE I CAN RELY ON ALL OF MY AD HOMINEM ATTACKERS TO USE THIS INFORMATION AND RENDER THE ASSAULTS MORE DIVERSE, REAL AND INTERESTING.

This article was posted a few months ago. Then it mysteriously vanished without a trace; nobody knows quite why or how. Today I found an old draft on my computer, so I post the article again. It might not be identical with the original but it is close enough, I think.

Some time ago, Andy Lewis formulated a notion which he called ‘Ernst’s law’. Initially, I felt this was a bit o.t.t., then it made me chuckle, and eventually it got me thinking: could there be some truth in it, and if so, why?

The ‘law’ stipulates that, if a scientist investigating alternative medicine is much liked by the majority of enthusiasts in this field, the scientist is not doing his/her job properly. In any other area of healthcare, such a ‘law’ would be absurd. Why then does it seem to make sense, at least to some degree, in alternative medicine? The differences between any area of conventional and alternative medicine are diverse and profound.

Take neurology, for instance: here we have an organ-system, anatomy, physiology, pathophysiology, etiology and nosology all related more or less specifically to this field and all based on facts, rigorous science and substantial evidence. None of this knowledge, science and evidence is static; each has evolved and can be predicted to do so in future. What we knew about neurology 50 years ago, for example, was dramatically different from what we know today. Scientific discoveries in neurology link up with the knowledge gathered in other areas of medicine to generate a (more or less) complete bigger picture.

In alternative medicine or any single branch thereof, we have no specific organ-system, anatomy, physiology, pathophysiology, etiology or nosology to speak of. We also have few notions that are transferable from one branch of alternative medicine to another – on the contrary, the assumptions of homeopathy, for example, are in overt contradiction to those of acupuncture which, in turn, are out of sync with those of reflexology, aromatherapy and Reiki.

Instead, each branch of alternative medicine has its own axioms that are largely detached from reality or, indeed, from the axioms of other branches of alternative medicine. In acupuncture, for instance, we have concepts such as yin and yang, qi, meridians and acupuncture points, and there is hardly any development of these concepts. This renders them akin to dogmas, and there is no chance in hell that the combination of all the branches of alternative medicine would add up to provide a sensible ‘bigger picture’.

If a scientist were to instil scientific, critical, progressive thought in a field like neurology, thus overthrowing current concepts and assumptions, they would be greeted with open arms by many like-minded researchers who all pursue the aim of advancing their field and contributing to the knowledge base by overturning wrong assumptions and discovering new truths. If researchers were to spend their time critically analysing the concepts or treatments of alternative medicine, thus overthrowing current concepts and assumptions, they would not only fail to be appreciated by the majority of the experts working in this field, they would be castigated for their actions.

If a scientist dedicated decades of hard work to the rigorous assessment of alternative medicine, that person would become a thorn in the flesh of believers. Instead of welcoming them with open arms, some disappointed enthusiasts of alternative treatments might even pay for defaming them.

On the other hand, if a researcher merely misused the tools of science to confirm the implausible assumptions of alternative medicine, he would quickly become a celebrated ‘hero’ of this field.

This is the bizarre phenomenon that ‘Ernst’s law’ seems to capture quite well – and this is why I believe the ‘law’ is worth more than a laugh and a chuckle. In fact, ‘Ernst’s law’ might even describe the depressing reality of retrograde thinking in alternative medicine more accurately than most of us care to admit.

What do my readers feel? Their comments following this blog may well confirm or refute my theory.

Some sceptics are convinced that, in alternative medicine, there is no evidence. This assumption is wrong, I am afraid, and statements of this nature can actually play into the hands of apologists of bogus treatments: they can then easily demonstrate the sceptics to be mistaken or “biased”, as they would probably say. The truth is that there is plenty of evidence – and lots of it is positive, at least at first glance.

Alternative medicine researchers have been very industrious during the last two decades to build up a sizable body of ‘evidence’. Consequently, one often finds data even for the most bizarre and implausible treatments. Take, for instance, the claim that homeopathy is an effective treatment for cancer. Those who promote this assumption have no difficulties in locating some weird in-vitro study that seems to support their opinion. When sceptics subsequently counter that in-vitro experiments tell us nothing about the clinical situation, apologists quickly unearth what they consider to be sound clinical evidence.

An example is this prospective observational 2011 study of cancer patients from two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n = 259), and one cohort with conventionally treated cancer patients (CG; n = 380). Its main outcome measures were the change of quality of life after 3 months and after one year, as well as impairment by fatigue, anxiety or depression. The results of this study show significant improvements in most of these endpoints, and the authors concluded that “we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment”.

Another, in some ways even better, example is this 2005 observational study of 6544 consecutive patients from the Bristol Homeopathic Hospital. Every patient attending the hospital outpatient unit for a follow-up appointment was included, commencing with their first follow-up attendance. Of these patients, 70.7% (n = 4627) reported positive health changes, with 50.7% (n = 3318) recording their improvement as better or much better. The authors concluded that “homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases”.

The principle that is being followed here is simple:

  • believers in a bogus therapy conduct a clinical trial which is designed to generate an apparently positive finding;
  • the fact that the study cannot tell us anything about cause and effect is cleverly hidden or belittled;
  • they publish their findings in one of the many journals that specialise in this sort of nonsense;
  • they make sure that advocates across the world learn about their results;
  • the community of apologists of this treatment picks up the information without the slightest critical analysis;
  • the researchers conduct more and more of such pseudo-research;
  • nobody attempts to do some real science: the believers do not truly want to falsify their hypotheses, and the real scientists find it unreasonable to conduct research on utterly implausible interventions;
  • thus the body of false or misleading ‘evidence’ grows and grows;
  • proponents start publishing systematic reviews and meta-analyses of their studies which are devoid of critical input;
  • too few critics point out that these reviews are fatally flawed – ‘rubbish in, rubbish out’!
  • eventually politicians, journalists, health care professionals and other people who did not necessarily start out as believers in the bogus therapy are convinced that the body of evidence is impressive and justifies implementation;
  • important health care decisions are thus based on data which are false and misleading.
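
The ‘rubbish in, rubbish out’ step of this pipeline can be made concrete with a small simulation. The sketch below is purely hypothetical Python (all numbers invented for illustration, not taken from any study discussed here): if every trial in a body of evidence shares the same design bias, a naive inverse-variance meta-analysis pools the bias along with the data and produces a spuriously precise ‘effect’ for a therapy that does nothing.

```python
import random
import statistics

random.seed(42)

def biased_trial(n=30, true_effect=0.0, design_bias=0.3):
    """Simulate one poorly controlled trial: the design (no blinding, no
    control for placebo effects) adds a constant bias to every outcome."""
    treated = [random.gauss(true_effect + design_bias, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / n + statistics.variance(control) / n) ** 0.5
    return diff, se

# 'Rubbish in': 20 biased trials of a therapy with zero true effect
trials = [biased_trial() for _ in range(20)]

# 'Rubbish out': a naive inverse-variance meta-analysis pools the bias too
weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled effect: {pooled:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```

The pooled 95% confidence interval comfortably excludes zero even though the true effect is exactly zero: no amount of pooling can remove a bias that is built into every primary study.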

So, what can be done to prevent such pseudo-evidence from being mistaken for solid proof, which might eventually mislead many into believing that bogus treatments are based on reasonably sound data? I think the following measures would be helpful:

  • authors should abstain from publishing over-enthusiastic conclusions which can all too easily be misinterpreted (given that the authors are believers in the therapy, this is not a realistic option);
  • editors might consider rejecting studies which contribute next to nothing to our current knowledge (given that these studies are usually published in journals that are in the business of promoting alternative medicine at any cost, this option is also not realistic);
  • if researchers report highly preliminary findings, there should be an obligation to do further studies in order to confirm or refute the initial results (not realistic either, I am afraid);
  • in case this does not happen, editors should consider retracting the paper reporting unconfirmed preliminary findings (utterly unrealistic).

What then can REALISTICALLY be done? I wish I knew the answer! All I can think of is that sceptics should educate the rest of the population to think and analyse such ‘evidence’ critically…but how realistic is that?

According to its authors, this RCT was aimed at investigating the 1) specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression. In particular the second research question is intriguing, I think – so let’s have a closer look at this trial.

The study was designed as a randomized, partially double-blind, placebo-controlled, four-armed, 2×2 factorial trial with a 6-week study duration. A total of 44 patients were randomized (2:1:2:1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment, and was thus underpowered for the pre-planned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance. The mean difference in the Hamilton-D score after 6 weeks was 2.0 (95% CI -1.2; 5.2) for Q-potencies vs. placebo, and -3.1 (95% CI -5.9; -0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant differences between homeopathic Q-potencies and placebo, or between homeopathic and conventional case taking, were observed. The frequency of adverse events was comparable for all groups.

The conclusions were remarkable: “although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting”.

Alright, the authors encountered problems in recruiting enough patients and they therefore decided to stop the trial early. This sort of thing happens. Most researchers would then not publish any data at all. This team, however, did publish a report, and the decision to do so might be perfectly fine: other investigators might learn from the problems which led to early termination of the study.

But why do they conclude that the results were INCONCLUSIVE? I think the results were not inconclusive but non-existent; there were no results to report other than those related to the recruitment problems. And even if one insists on presenting outcome data as an exploratory analysis, one cannot honestly say they were INCONCLUSIVE; all one might state in this case is that the results failed to show an effect of the remedy or the consultation. This is far less favourable for homeopathy than stating the results were INCONCLUSIVE.
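
The distinction is easy to check. A minimal sketch (hypothetical Python, using only the confidence interval quoted above for the remedy comparison) makes the point: a 95% CI that includes zero means the trial failed to show an effect, which is not the same as being ‘inconclusive’ about one.

```python
def excludes_zero(ci_low, ci_high):
    """A 95% CI that includes zero means the comparison failed to show an
    effect (it does not prove there is none, but it shows none either)."""
    return not (ci_low <= 0.0 <= ci_high)

# Hamilton-D mean difference for Q-potencies vs. placebo, as reported: 2.0 (-1.2; 5.2)
print(excludes_zero(-1.2, 5.2))  # False: no effect of the remedy was shown
```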

And why on earth do the authors conclude “we cannot recommend undertaking a further trial addressing this question in a similar setting”? This does not make the slightest sense to me. If the trialists encountered recruitment problems, others might find ways of overcoming them. The research question asking whether the effects of an extensive homeopathic case taking differ from those of a shorter conventional one seems important. If answered accurately, it could disentangle much of the confusion that surrounds clinical trials of homeopathy.

I have repeatedly commented on the odd conclusions drawn by proponents of alternative medicine on the basis of data that did not quite fulfil their expectations, and I often ask myself at what point this ‘prettification’ of the results via false positive conclusions crosses the line to scientific misconduct. My theory is that these conclusions appear odd to those capable of critical analysis because the authors bend over backwards in order to conclude more positively than the data would seem to permit. If we see it this way, such conclusions might even prove useful as a fairly sensitive ‘bullshit-detector’.

We have probably all fallen into the trap of thinking that something which has stood the ‘test of time’, i.e. something that has been used for centuries with apparent success, must be OK. In alternative medicine, this belief is extremely widespread, and one could argue that the entire sector is built on it. Influential proponents of ‘traditional’ medicine like Prince Charles do their best to strengthen this assumption. Sadly, however, it is easily disclosed as a classical fallacy: things that have stood the ‘test of time’ might work, of course, but the ‘test of time’ is never a proof of anything.

A recent study brought this message home loud and clear. This trial tested the efficacy of Rhodiola crenulata (R. crenulata), a traditional remedy which has been used widely in the Himalayan areas and in Tibet to prevent acute mountain sickness. As no scientific studies of this traditional treatment existed, the researchers conducted a double-blind, placebo-controlled crossover RCT to test its efficacy in acute mountain sickness prevention.

Healthy adult volunteers were randomized to two treatment sequences, receiving either 800 mg R. crenulata extract or placebo daily for 7 days before ascent and two days during mountaineering. After a three-month wash-out period, they were crossed over to the alternate treatment. On each occasion, the participants ascended rapidly from 250 m to 3421 m. The primary outcome measure was the incidence of acute mountain sickness with headache and at least one of the symptoms of nausea or vomiting, fatigue, dizziness, or difficulty sleeping.

One hundred and two participants completed the trial. No significant differences in the incidence of acute mountain sickness were found between R. crenulata extract and placebo groups. If anything, the incidence of severe acute mountain sickness with Rhodiola extract was slightly higher compared to the one with placebo: 35.3% vs. 29.4%.

R. crenulata extract was not effective in reducing the incidence or severity of acute mountain sickness as compared to placebo.

Similar examples could be found by the dozen. They demonstrate very clearly that the notion of the ‘test of time’ is erroneous: a treatment which has a long history of usage is not necessarily effective (or safe); it might even be dangerous. The true value of a therapy cannot be judged by experience; to establish it, we need rigorous clinical trials. Acute mountain sickness is a potentially life-threatening condition for which there are reasonably effective treatments. If people relied on the ‘ancient wisdom’ instead of using a therapy that actually works, they might pay for their error with their lives. The sooner alternative medicine proponents realise that, the better.

A most excellent comment by Donald Marcus on what many now call ‘quackademia’ (the disgraceful practice of teaching quackery, or ‘alternology’, such as homoeopathy, acupuncture or chiropractic at universities as if they were legitimate medical professions) has recently been published in the BMJ.

Please allow me to quote extensively from it:

A detailed review of curriculums created by 15 institutions that received educational grants from the National Center for Complementary and Alternative Medicine (NCCAM) showed that they failed to conform to the principles of evidence based medicine. In brief, they cited many poor quality clinical trials that supported the efficacy of alternative therapies and omitted negative clinical trials; they had not been updated for 6-7 years; and they omitted reports of serious adverse events associated with CAM therapies, especially with chiropractic manipulation and with non-vitamin, non-mineral dietary supplements such as herbal remedies. Representation of the curriculums as “evidence based” was inaccurate and unjustified. Similar defects were present in the curriculums of other integrative medicine programs that did not receive educational grants….

A re-examination of the integrative medicine curriculums reviewed previously showed that they were essentially unchanged since their creation in 2002-03…Why do academic centers that are committed to evidence based medicine and to comparative effectiveness analysis of treatments endorse CAM? One factor may be a concern about jeopardizing income from grants from NCCAM, from CAM clinical practice, and from private foundations that donate large amounts of money to integrative medicine centers. Additional factors may be concern about antagonizing faculty colleagues who advocate and practice CAM, and inadequate oversight of curriculums.

By contrast to the inattention of US academics and professional societies to CAM education, biomedical scientists in Great Britain and Australia have taken action. At the beginning of 2007, 16 British universities offered 45 bachelor of science degrees in alternative practices. As the result of a campaign to expose the lack of evidence supporting those practices, most courses in alternative therapies offered by public universities in Britain have been discontinued. Scientists, physicians, and consumer advocates in Australia have formed an organization, Friends of Science in Medicine, to counter the growth of pseudoscience in medicine.

The CAM curriculums violate every tenet of evidence based medicine, and they are a disservice to learners and to the public. It could be argued that, in the name of academic freedom, faculty who believe in the benefits of CAM have a right to present their views. However, as educators and role models they should adhere to the principles of medical professionalism, including “a duty to uphold scientific standards.” Faculty at health profession schools should urge administrators to appoint independent committees to review integrative medicine curriculums, and to consider whether provision of CAM clinical services is consistent with a commitment to scholarship and to evidence based healthcare.

One of the first who openly opposed science degrees without science was David Colquhoun; in an influential article published in Nature, he wrote:

The least that one can expect of a bachelor of science (BSc) honours degree is that the subject of the degree is science. Yet in December 2006 the UK Universities and Colleges Admissions Service advertised 61 courses for complementary medicine, of which 45 are BSc honours degrees. Most complementary and alternative medicine (CAM) is not science because the vast majority of it is not based on empirical evidence. Homeopathy, for example, has barely changed since the beginning of the nineteenth century. It is much more like religion than science. Worse still, many of the doctrines of CAM, and quite a lot of its practitioners, are openly anti-science.

More recently, Louise Lubetkin wrote in her post ‘Quackademia’ that alternative medicine and mainstream medicine are absolutely not equivalent, nor are they by any means interchangeable, and to speak about them the way one might when debating whether to take the bus or the subway to work – both will get you there reliably – constitutes an assault on truth.

I think ‘quackademia’ is most definitely an assault on truth – and I certainly know what I am talking about. When, in 1993, I was appointed as Professor of Complementary Medicine at Exeter, I became the director of a pre-existing team of apologists teaching a BSc course in alternative medicine to evangelical believers. I was horrified and had to use skill, diplomacy and even money to divorce myself from this unit, an experience which I will not forget in a hurry. In fact, I am currently writing it up for a book I hope to publish soon which covers not only this story but many similarly bizarre encounters I had while researching alternative medicine during the last two decades.

According to a recent comment by Dr Larry Dossey, sceptics are afflicted by “randomania,” “statisticalitis,” “coincidentitis,” or “ODD” (Obsessive Debunking Disorder). I thought his opinion was hilariously funny; it shows that this prominent apologist of alternative medicine who claims that he is deeply rooted in the scientific world has, in fact, understood next to nothing about the scientific method. Like all quacks who have run out of rational arguments, he resorts to primitive ad hominem attacks in order to defend his bizarre notions. It also suggests that he could do with a little scepticism himself, perhaps.

In case anyone wonders how the long-obsolete notions of vitalism, which Dossey promotes, not just survive but are again becoming widespread, they need only look into the best-selling books of Dossey and other vitalists. And it is not just lay people, the target audience of such books, who are taken in by such nonsense. Health care professionals are by no means immune to these remnants from the prescientific era.

A recent survey is a good case in point. It was aimed at exploring US student pharmacists’ attitudes toward complementary and alternative medicine (CAM) and examining the factors shaping those attitudes. In total, 887 student pharmacists in 10 U.S. colleges/schools of pharmacy took part. Student pharmacists’ attitudes regarding CAM were quantified using the attitudes toward CAM scale (15 items), attitudes toward specific CAM therapies (13 items), influence of factors (e.g., coursework, personal experience) on attitudes (18 items), and demographic characteristics (15 items).

The results show a mean (±SD) score on the attitudes toward CAM scale of 52.57 ± 7.65 (of a possible 75; higher score indicated more favorable attitudes). There were strong indications that students agreed with the concepts of vitalism. When asked about specific CAMs, many students revealed positive views even on the least plausible and least evidence-based modalities like homeopathy or Reiki.

Unsurprisingly, students agreed that a patient’s health beliefs should be integrated in the patient care process and that knowledge about CAM would be required in future pharmacy practice. Scores on the attitudes toward CAM scale varied by gender, race/ethnicity, type of institution, previous CAM coursework, and previous CAM use. Personal experience, pharmacy education, and family background were important factors shaping students’ attitudes.

The authors concluded: Student pharmacists hold generally favorable views of CAM, and both personal and educational factors shape their views. These results provide insight into factors shaping future pharmacists’ perceptions of CAM. Additional research is needed to examine how attitudes influence future pharmacists’ confidence and willingness to talk to patients about CAM.

I find the overwhelmingly positive views of pharmacists on even overt quackery quite troubling. One of the few critical pharmacists shares my worries and commented that this survey on CAM attitudes paints a concerning portrait of American pharmacy students. However, limitations in the survey process may have created biases that could have exaggerated the overall perspective presented. More concerning than the results themselves is the researchers’ interpretation of the data: critical and negative perspectives on CAM seem to be viewed as problematic, rather than as positive examples of good critical thinking.

One lesson from surveys like these is they illustrate the educational goals of CAM proponents. Just like “integrative” medicine that is making its ways into academic hospital settings, CAM education on campus is another tactic that is being used by proponents to shape health professional attitudes and perspectives early in their careers. The objective is obvious: normalize pseudoscience with students, and watch it become embedded into pharmacy practice.

Is this going to change? Unless there is a deliberate and explicit attempt to call out and push back against the degradation of academic and scientific standards created by existing forms of CAM education and “integrative medicine” programs, we should expect to see a growing normalizing of pseudoscience in health professions like pharmacy.

I have criticised pharmacists’ attitudes and behaviour towards alternative medicine more often than I care to remember. I even contributed an entire series of articles (around 10; I forgot the precise number) to THE PHARMACEUTICAL JOURNAL in an attempt to stimulate their abilities to think critically about alternative medicine. Pharmacists could certainly do with a high dose of “randomania,” “statisticalitis,” “coincidentitis,” or “ODD” (Obsessive Debunking Disorder). In particular, pharmacists who sell bogus remedies, i.e. virtually all retail pharmacists, need to remember that

  • they are breaking their own ethical code
  • they are putting profit before responsible health care
  • by selling bogus products, they give credibility to quackery
  • they are risking their reputation as professionals who provide evidence-based advice to the public
  • they might seriously endanger the health of many of their customers

In discussions about these issues, pharmacists usually defend themselves and argue that

  • those working in retail chains cannot do anything about this situation; head office decides what is and what is not sold on their premises
  • many medicinal products we sell are as bogus as the alternative medicines in question
  • other health care professions are also not perfect, blameless or free of fault and error
  • many pharmacists, particularly those not working in retail, are aware of this lamentable situation but cannot do anything about it
  • retail pharmacists are both shopkeepers and health care professionals and are trying their very best to cope with this difficult dual role
  • we usually appreciate your work and critical comments but, in this case, you are talking nonsense

I do not agree with any of these arguments. Of course, each single individual pharmacist is fairly powerless when it comes to changing the system (but nobody forces anyone to work in a chain that breaks the ethical code of their profession). Yet pharmacists have their professional organisations, and it is up to each individual pharmacist to exert influence, if necessary pressure, via their professional bodies and representatives, such that eventually the system changes. In all this distasteful mess, only one thing seems certain: without a groundswell of opinion from pharmacists, nothing will happen simply because too many pharmacists are doing very nicely with fooling their customers into buying expensive rubbish.

And when eventually something does happen, it will almost certainly be a slow and long process until quackery has been fully expelled from retail pharmacies. My big concern is not so much the slowness of the process but the fact that, currently, I see virtually no groundswell of opinion that might produce anything. For the foreseeable future pharmacists seem to have decided to be content with a role as shopkeepers who do not sufficiently care about healthcare-ethics to change the status quo.

This post will probably work best if you have read the previous one describing how the parallel universe of acupuncture research insists on going in circles in order to avoid admitting that their treatment might not be as effective as they pretend. The way they achieve this is fairly simple: they conduct trials that are designed in such a way that they cannot possibly produce a negative result.

A brand-new investigation which was recently vociferously touted via press releases etc. as a major advance in proving the effectiveness of acupuncture is an excellent case in point. According to its authors, the aim of this study was to evaluate acupuncture versus usual care and counselling versus usual care for patients who continue to experience depression in primary care. This sounds alright, but wait!

755 patients with depression were randomised to one of three arms: 1) acupuncture, 2) counselling, or 3) usual care alone. The primary outcome was the difference in mean Patient Health Questionnaire (PHQ-9) scores at 3 months, with secondary analyses over 12 months of follow-up. Analysis was by intention-to-treat. PHQ-9 data were available for 614 patients at 3 months and 572 patients at 12 months. Patients attended a mean of 10 sessions for acupuncture and 9 sessions for counselling. Compared to usual care, there was a statistically significant reduction in mean PHQ-9 depression scores at 3 and 12 months for both acupuncture and counselling.

From this, the authors conclude that both interventions were associated with significantly reduced depression at 3 months when compared to usual care alone.

Acupuncture for depression? Really? Our own systematic review with co-authors who are the most ardent apologists of acupuncture I have come across showed that the evidence is inconsistent on whether manual acupuncture is superior to sham… Therefore, I thought it might be a good idea to have a closer look at this new study.

One needs to search this article very closely indeed to find out that the authors did not actually evaluate acupuncture versus usual care and counselling versus usual care at all, and that comparisons were not made between acupuncture, counselling, and usual care (hints like the use of the word “alone” are all we get to guess that the authors’ text is outrageously misleading). Not even the methods section informs us what really happened in this trial. You find this hard to believe? Here is the unabbreviated part of the article that describes the interventions applied:

Patients allocated to the acupuncture and counselling groups were offered up to 12 sessions usually on a weekly basis. Participating acupuncturists were registered with the British Acupuncture Council with at least 3 years post-qualification experience. An acupuncture treatment protocol was developed and subsequently refined in consultation with participating acupuncturists. It allowed for customised treatments within a standardised theory-driven framework. Counselling was provided by members of the British Association for Counselling and Psychotherapy who were accredited or were eligible for accreditation having completed 400 supervised hours post-qualification. A manualised protocol, using a humanistic approach, was based on competences independently developed for Skills for Health. Practitioners recorded in logbooks the number and length of sessions, treatment provided, and adverse events. Further details of the two interventions are presented in Tables S2 and S3. Usual care, both NHS and private, was available according to need and monitored for all patients in all three groups for the purposes of comparison.

It is only in the results tables that we can determine what treatments were actually given; and these were:

1) Acupuncture PLUS usual care (i.e. medication)

2) Counselling PLUS usual care

3) Usual care

It’s almost a ‘no-brainer’ that, if you compare A+B to B (or, in this three-armed study, A+B vs C+B vs B), you find that the former is more effective than the latter – unless A is a negative, of course. As acupuncture has significant placebo-effects, it can never be a negative, and thus the outcome of this trial was an entirely foregone conclusion. As, in alternative medicine, one seems to need experimental proof even for ‘no-brainers’, we demonstrated some time ago that this common-sense theory is correct by conducting a systematic review of all acupuncture trials with such a design. We concluded that the ‘A + B versus B’ design is prone to false positive results… What makes this whole thing even worse is the fact that I once presented our review in a lecture where the lead author of the new trial was in the audience; so there can be no excuse of not being aware of the ‘no-brainer’.
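For readers who prefer to see the ‘A + B versus B’ problem in numbers, here is a toy simulation. All the figures are my own hypothetical assumptions, not data from the trial; the point is only that an add-on with nothing but a placebo effect must come out ahead of usual care alone:

```python
import random
import statistics

random.seed(0)

# Hypothetical, illustrative effect sizes (not taken from any real trial):
# usual care (B) reduces a symptom score by ~3 points on average;
# treatment A is assumed to be a pure placebo worth ~1 extra point.
USUAL_CARE_EFFECT = 3.0
PLACEBO_EFFECT_OF_A = 1.0
NOISE_SD = 2.0
N = 300  # patients per arm

def simulate_arm(extra_effect):
    """Simulated symptom-score reductions for one trial arm."""
    return [random.gauss(USUAL_CARE_EFFECT + extra_effect, NOISE_SD)
            for _ in range(N)]

b_alone = simulate_arm(0.0)                  # B: usual care alone
a_plus_b = simulate_arm(PLACEBO_EFFECT_OF_A) # A+B: placebo add-on plus usual care

mean_b = statistics.mean(b_alone)
mean_ab = statistics.mean(a_plus_b)
print(f"mean reduction, usual care alone: {mean_b:.2f}")
print(f"mean reduction, A + usual care:   {mean_ab:.2f}")
# A+B beats B even though A has no specific effect whatsoever.
```

However the noise and sample size are varied, the A+B arm wins as long as A carries any placebo response at all, which is exactly why such a design cannot produce a negative result.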

Some might argue that this is a pragmatic trial, that it would have been unethical to not give anti-depressants to depressed patients and that therefore it was not possible to design this study differently. However, none of these arguments are convincing, if you analyse them closely (I might leave that to the comment section, if there is interest in such aspects). At the very minimum, the authors should have explained in full detail what interventions were given; and that means disclosing these essentials even in the abstract (and press release) – the part of the publication that is most widely read and quoted.

It is arguably unethical to ask for patients’ co-operation, use research funds etc. for a study, the results of which were known even before the first patient had been recruited. And it is surely dishonest to hide the true nature of the design so very sneakily in the final report.

In my view, this trial begs at least 5 questions:

1) How on earth did it pass the peer review process of one of the most highly reputed medical journals?

2) How did the protocol get ethics approval?

3) How did it get funding?

4) Does the scientific community really allow itself to be fooled by such pseudo-research?

5) What do I do to not get depressed by studies of acupuncture for depression?

Has it ever occurred to you that much of the discussion about cause and effect in alternative medicine goes in circles without ever making progress? I have come to the conclusion that it does. Here I try to illustrate this point using the example of acupuncture, more precisely the endless discussion about how to best test acupuncture for efficacy. For those readers who like to misunderstand me I should explain that the sceptics’ view is in capital letters.

At the beginning there was the experience. Unaware of anatomy, physiology, pathology etc., people started sticking needles into other people’s skin some 2000 years ago and observed that they experienced relief of all sorts of symptoms. When an American journalist reported on this phenomenon in the 1970s, acupuncture became all the rage in the West. Acupuncture-fans then claimed that a 2000-year history is ample proof that acupuncture does work.

BUT ANECDOTES ARE NOTORIOUSLY UNRELIABLE!

Even the most enthusiastic advocates conceded that this is probably true. So they documented detailed case-series of lots of patients, calculated the average difference between the pre- and post-treatment severity of symptoms, submitted it to statistical tests, and published the notion that the effects of acupuncture are not just anecdotal; in fact, they are statistically significant, they said.
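To see why such uncontrolled pre/post comparisons mislead, here is a minimal sketch. The numbers are entirely hypothetical: I assume a condition that simply improves with time and a treatment with zero effect of any kind:

```python
import random
import statistics

random.seed(1)

# Hypothetical numbers: a condition whose severity (0-10 scale)
# naturally declines over time, with no treatment effect at all.
N = 100
pre = [random.gauss(7.0, 1.5) for _ in range(N)]
# a month later the condition has naturally improved by ~2 points
post = [p - random.gauss(2.0, 1.5) for p in pre]

diff = [a - b for a, b in zip(pre, post)]
mean_diff = statistics.mean(diff)
# crude one-sample t statistic for the pre/post difference
t = mean_diff / (statistics.stdev(diff) / N ** 0.5)
print(f"mean improvement: {mean_diff:.2f}, t statistic: {t:.1f}")
# The change is hugely 'significant', yet no treatment was given.
```

The pre/post change passes any statistical test with flying colours, while reflecting nothing but the natural course of the condition.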

BUT THIS EFFECT COULD BE DUE TO THE NATURAL HISTORY OF THE CONDITION!

“True enough”, grumbled the acupuncture-fans and conducted the very first controlled clinical trials. Essentially they treated one group of patients with acupuncture while another group received conventional treatments as usual. When they analysed the results, they found that the acupuncture group had improved significantly more. “Now do you believe us?”, they asked triumphantly, “acupuncture is clearly effective”.

NO! THIS OUTCOME MIGHT BE DUE TO SELECTION BIAS. SUCH A STUDY-DESIGN CANNOT ESTABLISH CAUSE AND EFFECT.

The acupuncturists felt slightly embarrassed because they had not thought of that. They had allocated their patients to the treatment according to patients’ choice. Thus the expectation of the patients (or the clinician) to get relief from acupuncture might have been the reason for the difference in outcome. So they consulted an expert in trial-design and were advised to allocate not by choice but by chance. In other words, they repeated the previous study but randomised patients to the two groups. Amazingly, their RCT still found a significant difference favouring acupuncture over treatment as usual.

BUT THIS DIFFERENCE COULD BE CAUSED BY A PLACEBO-EFFECT!

Now the acupuncturists were in a bit of a pickle; as far as they could see, there was no good placebo for acupuncture! Eventually some methodologist-chap came up with the idea that, in order to mimic a placebo, they could simply stick needles into non-acupuncture points. When the acupuncturists tried that method, they found that there were improvements in both groups but the difference between real acupuncture and placebo was tiny and usually neither statistically significant nor clinically relevant.

NOW DO YOU CONCEDE THAT ACUPUNCTURE IS NOT AN EFFECTIVE TREATMENT?

Absolutely not! The results merely show that needling non-acupuncture points is not an adequate placebo. Obviously this intervention also sends a powerful signal to the brain which clearly makes it an effective intervention. What do you expect when you compare two effective treatments?

IF YOU REALLY THINK SO, YOU NEED TO PROVE IT AND DESIGN A PLACEBO THAT IS INERT.

At that stage, the acupuncturists came up with a placebo-needle that did not actually penetrate the skin; it worked like a mini stage dagger that telescopes into itself while giving the impression that it penetrated the skin just like the real thing. Surely this was an adequate placebo! The acupuncturists repeated their studies but, to their utter dismay, they found again that both groups improved and the difference in outcome between their new placebo and true acupuncture was minimal.

WE TOLD YOU THAT ACUPUNCTURE WAS NOT EFFECTIVE! DO YOU FINALLY AGREE?

Certainly not, they replied. We have thought long and hard about these intriguing findings and believe that they can be explained just like the last set of results: the non-penetrating needles touch the skin; this touch provides a stimulus powerful enough to have an effect on the brain; the non-penetrating placebo-needles are not inert and therefore the results merely depict a comparison of two effective treatments.

YOU MUST BE JOKING! HOW ARE YOU GOING TO PROVE THAT BIZARRE HYPOTHESIS?

We had many discussions and consensus meetings amongst the most brilliant brains in acupuncture about this issue and have arrived at the conclusion that your obsession with placebo, cause and effect etc. is ridiculous and entirely misplaced. In real life, we don’t use placebos. So, let’s instead address the ‘real life’ question: is acupuncture better than usual treatment? We have conducted pragmatic studies where one group of patients gets treatment as usual and the other group receives acupuncture in addition. These studies show that acupuncture is effective. This is all the evidence we need. Why can you not believe us?

NOW WE HAVE ARRIVED EXACTLY AT THE POINT WHERE WE WERE A LONG TIME AGO. SUCH A STUDY-DESIGN CANNOT ESTABLISH CAUSE AND EFFECT. YOU OBVIOUSLY CANNOT DEMONSTRATE THAT ACUPUNCTURE CAUSES CLINICAL IMPROVEMENT. THEREFORE YOU OPT TO PRETEND THAT CAUSE AND EFFECT ARE IRRELEVANT. YOU USE SOME IMITATION OF SCIENCE TO ‘PROVE’ THAT YOUR PRECONCEIVED IDEAS ARE CORRECT. YOU DO NOT SEEM TO BE INTERESTED IN THE TRUTH ABOUT ACUPUNCTURE AT ALL.

As I write these words, I am travelling back from a medical conference. The organisers had invited me to give a lecture which I concluded saying: “anyone in medicine not believing in evidence-based health care is in the wrong business”. This statement was meant to stimulate the discussion and provoke the audience who were perhaps just a little on the side of those who are not all that taken by science.

I may well have been right, because, in the coffee break, several doctors disputed my point; to paraphrase their arguments: “You don’t believe in the value of experience, you think that science is the way to know everything. But you are wrong! Philosophers and other people, who are a lot cleverer than you, tell us that science is not the way to real knowledge; and in some forms of medicine we have a wealth of experience which we cannot ignore. This is at least as important as scientific knowledge. Take TCM, for instance, thousands of years of tradition must mean something; in fact it tells us more than science will ever be able to. Qi-energy, for instance, is a concept based on experience, and science is useless at verifying it.”

I disagreed, of course. But I am afraid that I did not convince my colleagues. The appeal to tradition is amazingly powerful, so much so that even well-seasoned physicians fall for it. Yet it nevertheless is a fallacy, I am sure.

So what does experience tell us, how is it generated and why should it be unreliable?

On the level of the individual, experience emerges when a clinician makes similar observations several times in a row. This is so persuasive that few doctors are immune to the phenomenon. Let’s assume the experience is about acupuncture, more precisely about acupuncture for smoking cessation. The acupuncturist presumably has learnt during his training that his therapy works for that indication via stimulating the flow of Qi, and promptly tries it on several patients. Some of them come back for more and report that they find it easier to give up cigarettes after consulting him. This happens repeatedly, and our clinician forthwith is convinced – in fact, he knows – that acupuncture is effective for smoking cessation.

If we critically analyse this scenario, what does it tell us? It tells us very little of relevance, I am afraid. The scenario is entirely compatible with a whole host of explanations which have nothing to do with the effects of acupuncture per se:

  • Those patients who did not manage to stop smoking might not have returned. Only seeing his successes without his failures, the acupuncturist would have got the wrong end of the stick.
  • Human memory is selective such that the few patients who did come back and reported failure might easily get forgotten by the clinician. We all remember the good things and forget the disappointments, particularly if we are clinicians.
  • The placebo-effect might have played a dirty trick on the experience of our acupuncturist.
  • Some patients might have used nicotine patches that helped them to stop smoking without disclosing this fact to the acupuncturist who then, of course, attributed the benefit to his needling.
  • The acupuncturist – being a very kind and empathetic clinician – might have involuntarily induced some of his patients to show kindness in return and thus tell porkies about their smoking habits which would have created a false positive impression about the effectiveness of his treatment.
  • Being so empathetic, the acupuncturist would have provided lots of encouragement to stop smoking which, in some patients, might have been sufficient to kick the habit.


The long and short of all this is that our acupuncturist gradually got convinced by this interplay of factors that Qi exists and that acupuncture is an effective treatment. Henceforth he would bet his last shirt that he is right about this – after all, he has seen it with his own eyes, not just once but many times. And he will doubt anyone who shows him evidence that says otherwise. In fact, he is likely to become very sceptical about scientific evidence in general – just like the doctors who talked to me after my lecture.

On a population level, such experience will be prevalent in not just one but most acupuncturists. Our clinician’s experience is certainly not unique; others will have had it too. In fact, as an acupuncturist, it is hard not to have it. Acupuncturists will have told everyone else about it, perhaps reported it at conferences or published it in articles or books. Experience of this nature is passed on from generation to generation, and soon someone will be able to demonstrate that acupuncture has been used ‘effectively’ for smoking cessation for decades or centuries. The creation of a myth out of unreliable experience is thus complete.

Am I saying that experience of this nature is always and necessarily wrong or useless? No, I am not. It can be and often is correct. But, at the same time, it is frequently incorrect. It can serve as a valuable indicator but not more. Experience is not a tool for reliably informing us about the effectiveness of medical interventions. Experience-based medicine is an obsolete pseudo-medicine burdened with concepts that are counter-productive to optimal health care.

Philosophers and other people who are much cleverer than I am have been trying for some time to separate good from bad science and evidence from experience. Most recently, two philosophers, MASSIMO PIGLIUCCI and MAARTEN BOUDRY, commented specifically on this problem in relation to TCM. I leave you with some extensive quotes from what they wrote.

… pointing out that some traditional Chinese remedies (like drinking fresh turtle blood to alleviate cold symptoms) may in fact work, and therefore should not be dismissed as pseudoscience… risks confusing the possible effectiveness of folk remedies with the arbitrary theoretical-metaphysical baggage attached to it. There is no question that some folk remedies do work. The active ingredient of aspirin, for example, is derived from willow bark…

… claims about the existence of “Qi” energy, channeled through the human body by way of “meridians,” though, is a different matter. This sounds scientific, because it uses arcane jargon that gives the impression of articulating explanatory principles. But there is no way to test the existence of Qi and associated meridians, or to establish a viable research program based on those concepts, for the simple reason that talk of Qi and meridians only looks substantive, but it isn’t even in the ballpark of an empirically verifiable theory.

…the notion of Qi only mimics scientific notions such as enzyme actions on lipid compounds. This is a standard modus operandi of pseudoscience: it adopts the external trappings of science, but without the substance.

…The notion of Qi, again, is not really a theory in any meaningful sense of the word. It is just an evocative word to label a mysterious force of which we do not know and we are not told how to find out anything at all.

Still, one may reasonably object, what’s the harm in believing in Qi and related notions, if in fact the proposed remedies seem to help? Well, setting aside the obvious objections that the slaughtering of turtles might raise on ethical grounds, there are several issues to consider. To begin with, we can incorporate whatever serendipitous discoveries from folk medicine into modern scientific practice, as in the case of the willow bark turned aspirin. In this sense, there is no such thing as “alternative” medicine, there’s only stuff that works and stuff that doesn’t.

Second, if we are positing Qi and similar concepts, we are attempting to provide explanations for why some things work and others don’t. If these explanations are wrong, or unfounded as in the case of vacuous concepts like Qi, then we ought to correct or abandon them. Most importantly, pseudo-medical treatments often do not work, or are even positively harmful. If you take folk herbal “remedies,” for instance, while your body is fighting a serious infection, you may suffer severe, even fatal, consequences.

…Indulging in a bit of pseudoscience in some instances may be relatively innocuous, but the problem is that doing so lowers your defenses against more dangerous delusions that are based on similar confusions and fallacies. For instance, you may expose yourself and your loved ones to harm because your pseudoscientific proclivities lead you to accept notions that have been scientifically disproved, like the increasingly (and worryingly) popular idea that vaccines cause autism.

Philosophers nowadays recognize that there is no sharp line dividing sense from nonsense, and moreover that doctrines starting out in one camp may over time evolve into the other. For example, alchemy was a (somewhat) legitimate science in the times of Newton and Boyle, but it is now firmly pseudoscientific (movements in the opposite direction, from full-blown pseudoscience to genuine science, are notably rare)….

The borderlines between genuine science and pseudoscience may be fuzzy, but this should be even more of a call for careful distinctions, based on systematic facts and sound reasoning. To try a modicum of turtle blood here and a little aspirin there is not the hallmark of wisdom and even-mindedness. It is a dangerous gateway to superstition and irrationality
