
The news that the use of Traditional Chinese Medicine (TCM) positively affects cancer survival might come as a surprise to many readers of this blog; but this is exactly what recent research has suggested. As it was published in one of the leading cancer journals, we should be able to trust the findings – or shouldn’t we?

The authors of this new study used the Taiwan National Health Insurance Research Database to conduct a retrospective population-based cohort study of patients with advanced breast cancer between 2001 and 2010. The patients were separated into TCM users and non-users, and the association between the use of TCM and patient survival was determined.

A total of 729 patients with advanced breast cancer receiving taxanes were included. Their mean age was 52.0 years; 115 patients (15.8%) were TCM users and 614 were TCM non-users. The mean follow-up was 2.8 years, and 277 deaths were recorded during the 10-year period. Multivariate analysis demonstrated that, compared with non-use, the use of TCM was associated with a significantly decreased risk of all-cause mortality (adjusted hazard ratio [HR], 0.55 [95% confidence interval, 0.33-0.90] for TCM use of 30-180 days; adjusted HR, 0.46 [95% confidence interval, 0.27-0.78] for TCM use of > 180 days). Among the frequently used TCMs, those found to be most effective (lowest HRs) in reducing mortality were Bai Hua She She Cao, Ban Zhi Lian, and Huang Qi.

The authors of this paper are initially quite cautious and use adequate terminology when they write that TCM use was associated with increased survival. But then they seem to get carried away by their enthusiasm and even name the TCM drugs which they thought were most effective in prolonging cancer survival. It is obvious that such causal extrapolations are well out of line with the evidence they produced (oh, how I wish that journal editors would finally wake up to such misleading language!).

Of course, it is possible that some TCM drugs are effective cancer cures – but the data presented here certainly do NOT demonstrate anything like such an effect. And before such a far-reaching claim can be made, much more and much better research would be necessary.

The thing is, there are many alternative and plausible explanations for the observed phenomenon. For instance, it is conceivable that users and non-users of TCM in this study differed in many ways other than their medication, e.g. severity of cancer, adherence to conventional therapies, lifestyle, etc. And even if the researchers have used clever statistical methods to control for some of these variables, residual confounding can never be ruled out in observational studies of this kind.
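To make the confounding argument concrete, here is a toy simulation with entirely made-up numbers (nothing to do with the actual study's data): baseline health affects both the decision to use TCM and the risk of dying, while TCM itself does nothing. An apparently 'protective' association emerges anyway:

```python
import random

random.seed(0)

# Hypothetical parameters, chosen only for illustration:
# healthier patients are more likely to use TCM AND less likely to die.
n = 100_000
deaths_tcm, n_tcm = 0, 0
deaths_no, n_no = 0, 0

for _ in range(n):
    healthy = random.random() < 0.5                      # unmeasured confounder
    uses_tcm = random.random() < (0.25 if healthy else 0.08)
    dies = random.random() < (0.2 if healthy else 0.5)   # TCM has ZERO effect here
    if uses_tcm:
        n_tcm += 1
        deaths_tcm += dies
    else:
        n_no += 1
        deaths_no += dies

rate_tcm = deaths_tcm / n_tcm
rate_no = deaths_no / n_no
# TCM users die less often (~27% vs ~37%) although TCM did nothing at all
print(f"mortality, TCM users: {rate_tcm:.2f}; non-users: {rate_no:.2f}")
```

The spurious survival advantage appears purely because healthier patients self-select into the TCM group – exactly the kind of residual confounding that no retrospective database study can fully exclude.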

Correlation is not causation, they say. Neglect of this elementary axiom makes for very poor science – in fact, it produces dangerous pseudoscience which could, as in the present case, lead a cancer patient straight up the garden path towards a premature death.

There is hardly a discussion about homeopathy in which an apologist does not eventually state: HOMEOPATHY CANNOT BE A PLACEBO, BECAUSE IT WORKS IN ANIMALS!!! Those who are not well-versed in this subject tend to be impressed, and the argument has won many consumers over to the dark side, I am sure. But is it really correct?

The short answer to this question is NO.

Pavlov discovered the phenomenon of ‘conditioning’ in animals, and ‘conditioning’ is considered to be a major part of the placebo-response. So, depending on the circumstances, animals do respond to placebo (my dog, for instance, used to go into a distinct depressive mood when he saw me packing a suitcase).

Then there is the fact that the animal’s response might be less important than the owner’s reaction to homeopathic treatment. This is particularly important with pets, of course. Homeopathy-believing pet owners might over-interpret the pet’s response and report that the homeopathic remedy has worked wonders when, in fact, it has made no difference.

Finally, there may be some situations where neither of the above two phenomena can play a decisive role. Homeopaths like to cite studies where entire herds of cows were treated homeopathically to prevent mastitis, a common problem in milk-cows. It is unlikely that conditioning or wishful thinking of the owner are decisive in such a study. Let’s see whether homeopathy-promoters will also be fond of this new study of exactly this subject.

New Zealand vets compared clinical and bacteriological cure rates of clinical mastitis following treatment with either antimicrobials or homeopathic preparations. They used 7 spring-calving herds from the Waikato region of New Zealand to source cases of clinical mastitis (n=263 glands) during the first 90 days following calving. Duplicate milk samples were collected for bacteriology from each clinically infected gland at diagnosis and 25 (SD 5.3) days after the initial treatment. Affected glands were treated with either an antimicrobial formulation or a homeopathic remedy. Generalised linear models with binomial error distribution and logit link were used to analyse the proportion of cows that presented clinical treatment cures and the proportion of glands that were classified as bacteriological cures, based on initial and post-treatment milk samples.

The results show that the mean cumulative incidence of clinical mastitis was 7% (range 2-13% across herds) of cows. Streptococcus uberis was the most common pathogen isolated from culture-positive samples from affected glands (140/209; 67%). Based on the observation of clinical signs following initial treatment, the clinical cure rate was higher for cows treated with antimicrobials (107/113; 95%) than for cows treated with homeopathic remedies (72/114; 63%) (p<0.001). Across all pathogen types, the bacteriological cure rate at gland level was higher for cows treated with antimicrobials (75/102; 74%) than for those treated with a homeopathic preparation (39/107; 36%) (p<0.001).
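For readers who like to check reported figures, the clinical cure-rate comparison can be re-derived from the raw counts with nothing more than a Pearson chi-square test on the 2x2 table. This is my own quick back-of-the-envelope calculation, not the authors' generalised linear model, but it leads to the same verdict:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# counts quoted in the paper: cured / not cured per treatment arm
cured_ab, failed_ab = 107, 113 - 107   # antimicrobials
cured_ho, failed_ho = 72, 114 - 72     # homeopathic remedies

chi2 = chi_square_2x2(cured_ab, failed_ab, cured_ho, failed_ho)
print(f"cure rates: {cured_ab/113:.0%} vs {cured_ho/114:.0%}, chi-square = {chi2:.1f}")
# the statistic is far above 10.83, the 1-df critical value for p = 0.001
```

With a chi-square of roughly 34 on one degree of freedom, the reported p<0.001 is entirely plausible: the difference between the two arms is anything but a statistical fluke.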

The authors conclude that homeopathic remedies had significantly lower clinical and bacteriological cure rates compared with antimicrobials when used to treat post-calving clinical mastitis where S. uberis was the most common pathogen. The proportion of cows that needed retreatment was significantly higher for the homeopathic treated cows. This, combined with lower bacteriological cure rates, has implications for duration of infection, individual cow somatic cell count, costs associated with treatment and animal welfare.

Yes, I know, this is just one single study, and we need to consider the totality of the reliable evidence. Currently, there are 203 clinical trials of homeopathic treatments of animals; and they are being reviewed at this very moment (unfortunately by a team that is not known for its objective stance on homeopathy). So, we will have to wait and see. When, in 1999, A. Vickers reviewed all pre-clinical studies, including those on animals, he concluded that there is a lack of independent replication of any pre-clinical research in homoeopathy. In the few instances where a research team has set out to replicate the work of another, either the results were negative or the methodology was questionable.

All this is to say that, until truly convincing evidence to the contrary is available, the homeopaths’ argument ‘HOMEOPATHY CANNOT BE A PLACEBO, BECAUSE IT WORKS IN ANIMALS!!!’ is, in my view, as weak as the dilution of their remedies.


This statement contradicts all those thousands of messages on the Internet that pretend otherwise. Far too many ‘entrepreneurs’ are trying to exploit desperate cancer patients by making claims about alternative cancer ‘cures’ ranging from shark oil to laetrile and from Essiac to mistletoe. The truth is that none of them are anything other than bogus.

Why? Let me explain.

If ever a curative cancer treatment emerged from the realm of alternative medicine that showed any promise at all, it would be very quickly researched by scientists and, if the results were positive, instantly adopted by mainstream oncology. The notion of an alternative cancer cure is therefore a contradiction in terms. It implies that oncologists are mean bastards who would, in the face of immense suffering, reject a promising cure simply because it did not originate from their own ranks.


So, let’s forget about alternative cancer ‘cures’ and let’s once and for all declare the people who sell or promote them as charlatans of the worst type. But some alternative therapies might nevertheless have a role in oncology – not as curative treatments but as supportive or palliative therapies.

The aim of supportive or palliative cancer care is not to cure the disease but to ease the suffering of cancer patients. According to my own research, promising evidence exists in this context, for instance, for massage, guided imagery, Co-enzyme Q10, acupuncture for nausea, and relaxation therapies. For other alternative therapies, the evidence is not supportive, e.g. reflexology, tai chi, homeopathy, spiritual healing, acupuncture for pain-relief, and aromatherapy.

So, in the realm of supportive and palliative care there is both encouraging and disappointing evidence. But what amazes me over and over again is the fact that the majority of cancer centres employing alternative therapies seem to bother very little about the evidence; they tend to use a weird mix of treatments regardless of whether they are backed by evidence or not. If patients like them, all is fine, they seem to think. I find this argument worrying.

Of course, every measure that increases the well-being of cancer patients must be welcome. But this should not mean that we disregard priorities or adopt any quackery that is on offer. In the interest of patients, we need to spend the available resources in the most effective ways. Those who argue that a bit of Reiki or reflexology, for example, is useful – if only via non-specific (placebo) effects – seem to forget that we do not require quackery for patients to benefit from a placebo-response. An evidence-based treatment that is administered with kindness and compassion also generates non-specific effects – and, on top of that, it generates specific effects. Therefore it would be a disservice to patients to merely rely on the non-specific effects of bogus treatments, even if the patients do experience some benefit from them.


So, why are unproven or disproven treatments like Reiki or reflexology so popular for cancer palliation? This question has puzzled me for years, and I sometimes wonder whether some oncologists’ tolerance of quackery is not an attempt to compensate for inadequacies within the routine service they deliver to their patients. Sub-standard care, unappetising food, insufficient pain-control, lack of time and compassion as well as other problems undoubtedly exist in some cancer units. It might be tempting to assume that such deficiencies can be compensated for by a little pampering from a reflexologist or Reiki master. And it might be easier to hire a few alternative therapists to treat patients with agreeable yet ineffective interventions than to remedy the deficits that may exist in basic conventional care.

But this strategy would be wrong, unethical and counter-productive. Empathy, sympathy and compassion are core features of conventional care and must not be delegated to quacks.

Advocates of alternative medicine are incredibly fond of supporting their claims with anecdotes, or ‘case-reports’ as they are officially called. There is no question, case-reports can be informative and important, but we need to be aware of their limitations.

A recent case-report from the US might illustrate this nicely. It described a 65-year-old male patient who had had MS for 20 years when he decided to get treated with Chinese scalp acupuncture. The motor area, sensory area, foot motor and sensory area, balance area, hearing and dizziness area, and tremor area were stimulated once a week for 10 weeks, then once a month for 6 further sessions.

After the 16 treatments, the patient showed remarkable improvements. He was able to stand and walk without any problems. The numbness and tingling in his limbs did not bother him anymore. He had more energy and had not experienced incontinence of urine or dizziness after the first treatment. He was able to return to work full time. Now the patient has been in remission for 26 months.

The authors of this case-report conclude that Chinese scalp acupuncture can be a very effective treatment for patients with MS. Chinese scalp acupuncture holds the potential to expand treatment options for MS in both conventional and complementary or integrative therapies. It can not only relieve symptoms, increase the patient’s quality of life, and slow and reverse the progression of physical disability but also reduce the number of relapses and help patients.

There is absolutely nothing wrong with case-reports; on the contrary, they can provide extremely valuable pointers for further research. If they relate to adverse effects, they can give us crucial information about the risks associated with treatments. Nobody would ever argue that case-reports are useless, and that is why most medical journals regularly publish such papers. But they are valuable only if one is aware of their limitations. Medicine finally started to make swift progress, ~150 years ago, when we gave up attributing undue importance to anecdotes, began to doubt established wisdom and started testing it scientifically.

Conclusions such as the ones drawn above are not just odd, they are misleading to the point of being dangerous. A reasonable conclusion might have been that this case of an MS patient is interesting and should be followed up through further observations. If these seem to confirm the positive outcome, one might consider conducting a clinical trial. If such a trial yields encouraging findings, one might eventually draw the conclusions which the present authors drew from their single case.

To jump to conclusions in the way the authors did is neither justified nor responsible. It is unjustified because case-reports never lend themselves to such generalisations. And it is irresponsible because desperate patients, who often fail to understand the limitations of case-reports and tend to believe things that have been published in medical journals, might act on these words. This, in turn, would raise false hopes or might even lead to patients forfeiting those treatments that are evidence-based.

It is high time, I think, that proponents of alternative medicine give up their love-affair with anecdotes and join the rest of the health care professions in the 21st century.

Yes, it is unlikely but true! I once was the hero of the world of energy healing, albeit for a short period only. An amusing story, I hope you agree.

Back in the late 1990s, we had decided to run two trials in this area. One of them was to test the efficacy of distant healing for the removal of ordinary warts, common viral infections of the skin which are quite harmless and usually disappear spontaneously. We had designed a rigorous study, obtained ethics approval and were in the midst of recruiting patients, when I suggested I could be the trial’s first participant, as I had noticed a tiny wart on my left foot. As patient-recruitment was sluggish at that stage, my co-workers consulted the protocol to check whether it might prevent me from taking part in my own trial. They came back with the good news that, as I was not involved in the running of the study, there was no reason for me to be excluded.

The next day, they ‘processed’ me like all the other wart sufferers of our investigation. My wart was measured, photographed and documented. A sealed envelope with my trial number was opened (in my absence, of course) by one of the trialists to see whether I would be in the experimental or the placebo group. Patients in the former group were to receive ‘distant healing’ from a group of 10 experienced healers who had volunteered and felt confident of being able to cure warts. All they needed was a few details about each patient, they had confirmed. The placebo group received no such intervention. ‘Blinding’ the patients was easy in this trial; since they were not themselves involved in any healing-action, they could not know whether they were in the placebo or the verum group.

The treatment period lasted for several weeks, during which time my wart was re-evaluated at regular intervals. When I had completed the study, final measurements were done, and I was told that I had been the recipient of ‘healing energy’ from the 10 healers during the past weeks. Not that I had felt any of it, and not that my wart had noticed it either: it was still there, completely unchanged.

I remember not being all that surprised…until, the next morning, when I noticed that my wart had disappeared! Gone without a trace!

Of course, I told my co-workers who were quite excited, re-photographed the spot where the wart had been and consulted the study protocol to determine what had to be done next. It turned out that we had made no provisions for events that might occur after the treatment period.

But somehow, this did not feel right, we all thought. So we decided to make a post-hoc addendum to our protocol which stipulated that all participants of our trial would be asked a few days after the end of the treatment whether any changes to their warts had been noted.

Meanwhile the healers had got wind of the professorial wart’s disappearance. They were delighted and quickly told other colleagues. In no time at all, the world of ‘distant healing’ had agreed that warts often reacted to their intervention with a slight delay – and they were pleased to hear that we had duly amended our protocol to adequately capture this important phenomenon. My ‘honest’ and ‘courageous’ action of acknowledging and documenting the disappearance of my wart was praised, and it was assumed that I was about to prove the efficacy of distant healing.

And that’s how I became their ‘hero’ – the sceptical professor who had now seen the light with his own eyes and experienced on his own body the incredible power of their ‘healing energy’.

Incredible it remained though: I was the only trial participant who lost his wart in this way. When we published this study, we concluded: Distant healing from experienced healers had no effect on the number or size of patients’ warts.


One of the perks of researching alternative medicine and writing a blog about it is that one rarely runs out of good laughs. In perfect accordance with ERNST’S LAW, I have recently been entertained, amused, even thrilled by a flurry of ad hominem attacks most of which are true knee-slappers. I would like to take this occasion to thank my assailants for their fantasy and tenacity. Most days, these ad hominem attacks really do make my day.

I can only hope they will continue to make my days a little more joyous. My fear, however, is that they might, one day, run out of material. Even today, their claims are somewhat repetitive:

  • I am not qualified
  • I only speak tosh
  • I do not understand science
  • I never did any ‘real’ research
  • Exeter Uni fired me
  • I have been caught red-handed (not quite sure at what)
  • I am on BIG PHARMA’s payroll
  • I faked my research papers

Come on, you feeble-minded fantasists must be able to do better! Isn’t it time to bring something new?

Yes, I know, innovation is not an easy task. The best ad hominem attacks are, of course, always based on a kernel of truth. In that respect, the ones that have been repeated ad nauseam are sadly wanting. Therefore I have decided to provide all would-be attackers with some true and relevant facts from my life. These should enable them to invent further myths and use them as ammunition against me.

Sounds like fun? Here we go:

My grandfather and my father were both doctors

This part of my family history could be spun in all sorts of intriguing ways. For instance, one could make up a nice story about how I, even as a child, was brain-washed to defend the medical profession at all cost from the onslaught of non-medical healers.

Our family physician was a prominent homeopath

Ahhhh, did he perhaps mistreat me and start me off on my crusade against homeopathy? Surely, there must be a nice ad hominem attack in here!

I studied psychology at Munich but did not finish it

Did I give up psychology because I discovered a manic obsession or other character flaw deeply hidden in my soul?

I then studied medicine (also in Munich) and wrote an MD thesis in the area of blood clotting

No doubt this is pure invention. Where is the proof of my qualifications? Are the data in my thesis real or invented?

My 1st job as a junior doctor was in a homeopathic hospital in Munich

Yes, but why did I leave? Surely they found out about me and fired me.

I had hands on training in several forms of alternative medicine, including homeopathy

Easy to say, but where is the proof?

I moved to London where I worked in St George’s Hospital conducting research in blood rheology

Another invention? Where are the published papers to document this?

I went back to Munich university where I continued this line of research and was awarded a PhD

Another thesis? Again with dodgy data? Where can one see this document?

I became Professor of Rehabilitation Medicine, first at Hannover Medical School and later in Vienna

How did that happen? Did I perhaps bribe the appointment panels?

In 1993, I was appointed to the Chair in Complementary Medicine at the University of Exeter

Yes, we all know that; but why did I not direct my efforts towards promoting alternative medicine?

In Exeter, together with a team of ~20 colleagues, we published > 1000 papers on alternative medicine, more than anyone else in that field

Impossible! This number clearly shows that many of these articles are fakes or plagiaries.

My H-Index is currently >80

Same as above.

In 2012, I became Emeritus Professor of the University of Exeter

Isn’t ’emeritus’ the Latin word for ‘dishonourable discharge’?


This article was posted a few months ago. Then it mysteriously vanished without a trace; nobody knows quite why or how. Today I found an old draft on my computer, so I post the article again. It might not be identical with the original but it is close enough, I think.

Some time ago, Andy Lewis formulated a notion which he called ‘Ernst’s law’. Initially, I felt this was a bit o.t.t., then it made me chuckle, and eventually it got me thinking: could there be some truth in it, and if so, why?

The ‘law’ stipulates that, if a scientist investigating alternative medicine is much liked by the majority of enthusiasts in this field, the scientist is not doing his/her job properly. In any other area of healthcare, such a ‘law’ would be absurd. Why then does it seem to make sense, at least to some degree, in alternative medicine? The differences between any area of conventional and alternative medicine are diverse and profound.

Take neurology, for instance: here we have an organ-system, anatomy, physiology, pathophysiology, etiology and nosology all related more or less specifically to this field and all based on facts, rigorous science and substantial evidence. None of this knowledge, science and evidence is static; each has evolved and can be predicted to do so in future. What we knew about neurology 50 years ago, for example, was dramatically different from what we know today. Scientific discoveries in neurology link up with the knowledge gathered in other areas of medicine to generate a (more or less) complete bigger picture.

In alternative medicine or any single branch thereof, we have no specific organ-system, anatomy, physiology, pathophysiology, etiology or nosology to speak of. We also have few notions that are transferable from one branch of alternative medicine to another – on the contrary, the assumptions of homeopathy, for example, are in overt contradiction to those of acupuncture which, in turn, are out of sync with those of reflexology, aromatherapy and Reiki.

Instead, each branch of alternative medicine has its own axioms that are largely detached from reality or, indeed, from the axioms of other branches of alternative medicine. In acupuncture, for instance, we have concepts such as yin and yang, qi, meridians and acupuncture points, and there is hardly any development of these concepts. This renders them akin to dogmas, and there is no chance in hell that the combination of all the branches of alternative medicine would add up to provide a sensible ‘bigger picture’.

If a scientist were to instill scientific, critical, progressive thought in a field like neurology, thus overthrowing current concepts and assumptions, they would be greeted with open arms among many like-minded researchers who all pursue the aim of advancing their field and contributing to the knowledge base by overturning wrong assumptions and discovering new truths. If researchers were to spend their time trying to analyse the concepts or treatments of alternative medicine, thus overthrowing current concepts and assumptions, they would not only not be appreciated by the majority of the experts working in this field, they would be castigated for their actions.

If a scientist dedicated decades of hard work to the rigorous assessment of alternative medicine, that person would become a thorn in the flesh of believers. Instead of welcoming him with open arms, some disappointed enthusiasts of alternative treatments might even pay for defaming him.

On the other hand, if a researcher merely misused the tools of science to confirm the implausible assumptions of alternative medicine, he would quickly become a celebrated ‘hero’ of this field.

This is the bizarre phenomenon that ‘Ernst’s law’ seems to capture quite well – and this is why I believe the ‘law’ is worth more than a laugh and a chuckle. In fact, ‘Ernst’s law’ might even describe the depressing reality of retrograde thinking in alternative medicine more accurately than most of us care to admit.

What do my readers feel? Their comments following this blog may well confirm or refute my theory.

Some sceptics are convinced that, in alternative medicine, there is no evidence. This assumption is wrong, I am afraid, and statements of this nature can actually play into the hands of apologists of bogus treatments: they can then easily demonstrate the sceptics to be mistaken or “biased”, as they would probably say. The truth is that there is plenty of evidence – and lots of it is positive, at least at first glance.

Alternative medicine researchers have been very industrious during the last two decades to build up a sizable body of ‘evidence’. Consequently, one often finds data even for the most bizarre and implausible treatments. Take, for instance, the claim that homeopathy is an effective treatment for cancer. Those who promote this assumption have no difficulties in locating some weird in-vitro study that seems to support their opinion. When sceptics subsequently counter that in-vitro experiments tell us nothing about the clinical situation, apologists quickly unearth what they consider to be sound clinical evidence.

An example is this prospective observational 2011 study of cancer patients from two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n = 259), and one cohort with conventionally treated cancer patients (CG; n = 380). Its main outcome measures were the change in quality of life after 3 months and after one year, and impairment by fatigue, anxiety or depression. The results of this study show significant improvements in most of these endpoints, and the authors concluded that we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment.

Another, in some ways even better example is this 2005 observational study of 6544 consecutive patients from the Bristol Homeopathic Hospital. Every patient attending the hospital outpatient unit for a follow-up appointment was included, commencing with their first follow-up attendance. Of these patients 70.7% (n = 4627) reported positive health changes, with 50.7% (n = 3318) recording their improvement as better or much better. The authors concluded that homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases.

The principle that is being followed here is simple:

  • believers in a bogus therapy conduct a clinical trial which is designed to generate an apparently positive finding;
  • the fact that the study cannot tell us anything about cause and effect is cleverly hidden or belittled;
  • they publish their findings in one of the many journals that specialise in this sort of nonsense;
  • they make sure that advocates across the world learn about their results;
  • the community of apologists of this treatment picks up the information without the slightest critical analysis;
  • the researchers conduct more and more of such pseudo-research;
  • nobody attempts to do some real science: the believers do not truly want to falsify their hypotheses, and the real scientists find it unreasonable to conduct research on utterly implausible interventions;
  • thus the body of false or misleading ‘evidence’ grows and grows;
  • proponents start publishing systematic reviews and meta-analyses of their studies which are devoid of critical input;
  • too few critics point out that these reviews are fatally flawed – ‘rubbish in, rubbish out’!
  • eventually politicians, journalists, health care professionals and other people who did not necessarily start out as believers in the bogus therapy are convinced that the body of evidence is impressive and justifies implementation;
  • important health care decisions are thus based on data which are false and misleading.

So, what can be done to prevent such pseudo-evidence from being mistaken for solid proof, which might eventually mislead many into believing that bogus treatments are based on reasonably sound data? I think the following measures would be helpful:

  • authors should abstain from publishing over-enthusiastic conclusions which can all too easily be misinterpreted (given that the authors are believers in the therapy, this is not a realistic option);
  • editors might consider rejecting studies which contribute next to nothing to our current knowledge (given that these studies are usually published in journals that are in the business of promoting alternative medicine at any cost, this option is also not realistic);
  • if researchers report highly preliminary findings, there should be an obligation to do further studies in order to confirm or refute the initial results (not realistic either, I am afraid);
  • in case this does not happen, editors should consider retracting the paper reporting unconfirmed preliminary findings (utterly unrealistic).

What then can REALISTICALLY be done? I wish I knew the answer! All I can think of is that sceptics should educate the rest of the population to think and analyse such ‘evidence’ critically…but how realistic is that?

According to its authors, this RCT was aimed at investigating the 1) specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression. In particular the second research question is intriguing, I think – so let’s have a closer look at this trial.

The study was designed as a randomized, partially double-blind, placebo-controlled, four-armed, 2×2 factorial trial with a 6-week study duration. A total of 44 patients were randomized (2:1:2:1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment and was thus underpowered for the pre-planned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance. The mean difference for the Hamilton-D after 6 weeks was 2.0 (95% CI -1.2 to 5.2) for Q-potencies vs. placebo, and -3.1 (95% CI -5.9 to -0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant differences between homeopathic Q-potencies and placebo, or between homeopathic and conventional case taking, were observed. The frequency of adverse events was comparable for all groups.
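A mean-difference estimate whose 95% confidence interval spans zero has not demonstrated any effect at all; only an interval that lies entirely on one side of zero is nominally 'significant'. A trivial check of the intervals quoted above (the helper function is my own illustration, not anything from the paper) makes this explicit:

```python
def ci_excludes_zero(lo: float, hi: float) -> bool:
    """True if a confidence interval for a difference excludes zero,
    i.e. the comparison is nominally 'statistically significant'."""
    return lo > 0 or hi < 0

# Hamilton-D mean differences after 6 weeks, as quoted above
print(ci_excludes_zero(-1.2, 5.2))   # Q-potencies vs placebo  -> False
print(ci_excludes_zero(-5.9, -0.2))  # case history I vs II    -> True
```

So the remedy-versus-placebo comparison failed to show any effect, and only the exploratory case-taking comparison reached nominal significance – in a trial too small for any confirmatory conclusion.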

The conclusions were remarkable: “although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting.”

Alright, the authors encountered problems in recruiting enough patients and they therefore decided to stop the trial early. This sort of thing happens. Most researchers would then not publish any data at all. This team, however, did publish a report, and the decision to do so might be perfectly fine: other investigators might learn from the problems which led to early termination of the study.

But why do they conclude that the results were INCONCLUSIVE? I think the results were not inconclusive but non-existent; there were no results to report other than those related to the recruitment problems. And even if one insists on presenting outcome data as an exploratory analysis, one cannot honestly call them INCONCLUSIVE; all one might state in this case is that the results failed to show an effect of the remedy or the consultation. This is far less favourable for homeopathy than stating the results were INCONCLUSIVE.

And why on earth do the authors conclude “we cannot recommend undertaking a further trial addressing this question in a similar setting”? This does not make the slightest sense to me. If the trialists encountered recruitment problems, others might find ways of overcoming them. The research question asking whether the effects of an extensive homeopathic case taking differ from those of a shorter conventional one seems important. If answered accurately, it could disentangle much of the confusion that surrounds clinical trials of homeopathy.

I have repeatedly commented on the odd conclusions drawn by proponents of alternative medicine on the basis of data that did not quite fulfil their expectations, and I often ask myself at what point this ‘prettification’ of the results via false positive conclusions crosses the line to scientific misconduct. My theory is that these conclusions appear odd to those capable of critical analysis because the authors bend over backwards in order to conclude more positively than the data would seem to permit. If we see it this way, such conclusions might even prove useful as a fairly sensitive ‘bullshit-detector’.

We have probably all fallen into the trap of thinking that something which has stood the ‘test of time’, i.e. something that has been used for centuries with apparent success, must be ok. In alternative medicine, this belief is extremely widespread, and one could argue that the entire sector is built on it. Influential proponents of ‘traditional’ medicine like Prince Charles do their best to strengthen this assumption. Sadly, however, it is easily exposed as a classical fallacy: things that have stood the ‘test of time’ might work, of course, but the ‘test of time’ is never a proof of anything.

A recent study brought this message home loud and clear. This trial tested the efficacy of Rhodiola crenulata (R. crenulata), a traditional remedy which has been used widely in the Himalayan areas and in Tibet to prevent acute mountain sickness. As no scientific studies of this traditional treatment existed, the researchers conducted a double-blind, placebo-controlled crossover RCT to test its efficacy in acute mountain sickness prevention.

Healthy adult volunteers were randomized to two treatment sequences, receiving either 800 mg R. crenulata extract or placebo daily for 7 days before ascent and two days during mountaineering. After a three-month wash-out period, they were crossed over to the alternate treatment. On each occasion, the participants ascended rapidly from 250 m to 3421 m. The primary outcome measure was the incidence of acute mountain sickness with headache and at least one of the symptoms of nausea or vomiting, fatigue, dizziness, or difficulty sleeping.

One hundred and two participants completed the trial. No significant differences in the incidence of acute mountain sickness were found between the R. crenulata extract and placebo groups. If anything, the incidence of severe acute mountain sickness was slightly higher with Rhodiola extract than with placebo: 35.3% vs. 29.4%.
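For readers who want to check that this small difference is indeed nowhere near statistical significance, a quick sketch follows. Note one simplification on my part: I treat the two crossover periods as independent groups; a proper paired analysis (e.g. McNemar’s test) would need the discordant-pair counts, which are not reported here:

```python
# Naive two-proportion z-test on the reported severe-AMS incidences
# (35.3% vs 29.4% among the 102 completers in each crossover period).
# Ignoring the within-subject pairing of the crossover is a simplification.
from math import sqrt
from statistics import NormalDist

n = 102                          # completers per period
p1, p2 = 0.353, 0.294            # Rhodiola vs placebo incidence
p_pool = (p1 + p2) / 2           # equal n in both periods
se = sqrt(p_pool * (1 - p_pool) * 2 / n)

z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(z))
print(f"z = {z:.2f}, p = {p_value:.2f}")  # well above the 0.05 threshold
```

Even this unpaired approximation, which if anything is anti-conservative for crossover data, confirms the authors’ finding of no significant difference.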

R. crenulata extract was not effective in reducing the incidence or severity of acute mountain sickness as compared to placebo.

Similar examples could be found by the dozen. They demonstrate very clearly that the notion of the ‘test of time’ is erroneous: a treatment which has a long history of usage is not necessarily effective (or safe) – not only that, it might be dangerous. The true value of a therapy cannot be judged by experience; to establish it, we need rigorous clinical trials. Acute mountain sickness is a potentially life-threatening condition for which there are reasonably effective treatments. If people relied on the ‘ancient wisdom’ instead of using a therapy that actually works, they might pay for their error with their lives. The sooner alternative medicine proponents realise that, the better.
