

After a traumatic brain injury (TBI), the risk of stroke is significantly increased. Taiwanese researchers conducted a study to find out whether acupuncture can help to protect TBI patients from stroke. They used Taiwan’s National Health Insurance Research Database to conduct a retrospective cohort study of 7,409 TBI patients receiving acupuncture treatment and 29,636 propensity-score-matched TBI patients without acupuncture treatment as controls. Both TBI cohorts were followed for up to two years, with adjustment for immortal time, to measure the incidence and adjusted hazard ratios (HRs) of new-onset stroke.

TBI patients with acupuncture treatment (4.9 per 1,000 person-years) had a lower incidence of stroke than those without acupuncture treatment (7.5 per 1,000 person-years), with an HR of 0.59 (95% CI = 0.50-0.69) after adjustment for sociodemographics, coexisting medical conditions and medications. The association between acupuncture treatment and stroke risk was investigated by sex and age group (20-44, 45-64, and ≥65 years). The cumulative probability curve with log-rank test showed that TBI patients receiving acupuncture treatment had a lower probability of stroke than those without acupuncture treatment during the follow-up period (p<0.0001).
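As an aside for readers unfamiliar with these measures: the incidence figures are simply events divided by accumulated follow-up time. Here is a minimal sketch with invented counts (the paper reports only the rates; the adjusted HR of 0.59 additionally requires a Cox regression model):

```python
# Minimal sketch: incidence per 1,000 person-years and a crude rate ratio.
# The event counts and person-years below are illustrative assumptions,
# NOT the study's raw data; only the resulting rates (~4.9 vs ~7.5) echo
# the published figures.

def incidence_per_1000(events: int, person_years: float) -> float:
    """Events per 1,000 person-years of follow-up."""
    return 1000 * events / person_years

# Hypothetical totals, roughly 2 years of follow-up per patient:
acu = incidence_per_1000(events=73, person_years=14_818)    # ~4.9
ctrl = incidence_per_1000(events=445, person_years=59_272)  # ~7.5

crude_ratio = acu / ctrl  # ~0.66 before any covariate adjustment
print(f"{acu:.1f} vs {ctrl:.1f} per 1,000 person-years; crude ratio {crude_ratio:.2f}")
```

Note that such a crude ratio ignores confounding; the adjusted HR in the paper comes from a regression model that attempts (imperfectly, as discussed below) to account for it.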

The authors conclude that patients with TBI receiving acupuncture treatment show a decreased risk of stroke compared with those without acupuncture treatment. However, the study was limited by a lack of information regarding lifestyles, biochemical profiles, TBI severity, and the acupuncture points used in the treatments.

I want to congratulate the authors for adding the last sentence to their conclusions. There is no plausible mechanism that I can think of by which acupuncture might bring about the observed effect. This does not mean that an effect does not exist; it means, however, that it is wise to be cautious and not to jump to conclusions that later need to be revised. The simplest interpretation, by far, of the observed phenomenon is that those patients opting to have acupuncture were, on average, less ill and therefore had a lower risk of stroke.

Having said that, the findings are, I think, intriguing enough to conduct further investigations – provided they are rigorous and eliminate the confounders that prevented this study from arriving at more definitive conclusions.

The news that the use of Traditional Chinese Medicine (TCM) positively affects cancer survival might come as a surprise to many readers of this blog; but this is exactly what recent research has suggested. As it was published in one of the leading cancer journals, we should be able to trust the findings – or shouldn’t we?

The authors of this new study used the Taiwan National Health Insurance Research Database to conduct a retrospective population-based cohort study of patients with advanced breast cancer between 2001 and 2010. The patients were separated into TCM users and non-users, and the association between the use of TCM and patient survival was determined.

A total of 729 patients with advanced breast cancer receiving taxanes were included. Their mean age was 52.0 years; 115 patients were TCM users (15.8%) and 614 patients were TCM non-users. The mean follow-up was 2.8 years, with 277 deaths reported during the 10-year period. Multivariate analysis demonstrated that, compared with non-users, the use of TCM was associated with a significantly decreased risk of all-cause mortality (adjusted hazard ratio [HR], 0.55 [95% confidence interval, 0.33-0.90] for TCM use of 30-180 days; adjusted HR, 0.46 [95% confidence interval, 0.27-0.78] for TCM use of > 180 days). Among the frequently used TCMs, those found to be most effective (lowest HRs) in reducing mortality were Bai Hua She She Cao, Ban Zhi Lian, and Huang Qi.

The authors of this paper are initially quite cautious and use adequate terminology when they write that TCM use was associated with increased survival. But then they seem to get carried away by their enthusiasm and even name the TCM drugs which they thought were most effective in prolonging cancer survival. It is obvious that such causal extrapolations are well out of line with the evidence they produced (oh, how I wish that journal editors would finally wake up to such misleading language!).

Of course, it is possible that some TCM drugs are effective cancer cures – but the data presented here certainly do NOT demonstrate anything like such an effect. And before such a far-reaching claim is made, much more and much better research would be necessary.

The thing is, there are many alternative and plausible explanations for the observed phenomenon. For instance, it is conceivable that users and non-users of TCM in this study differed in many ways other than their medication, e.g. severity of cancer, adherence to conventional therapies, lifestyle, etc. And even if the researchers used clever statistical methods to control for some of these variables, residual confounding can never be ruled out in such observational studies.

Correlation is not causation, they say. Neglect of this elementary axiom makes for very poor science – in fact, it produces dangerous pseudoscience which could, as in the present case, lead a cancer patient straight up the garden path towards a premature death.

There are dozens of observational studies of homeopathy which seem to suggest – at least to homeopaths – that homeopathic treatments generate health benefits. As these investigations lack a control group, their results can all too easily be invalidated by pointing out that factors like ‘regression towards the mean’ (RTM, a statistical artefact caused by the phenomenon that a variable that is extreme on its first measurement tends to be closer to the average on its second measurement) might be the cause of the observed change. Thus the debate over whether such observational data are reliable has been raging for decades. Now, German (pro-homeopathy) investigators have published a paper which could potentially resolve this dispute.
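For readers who want to see RTM in action, here is a quick simulation of my own (not from the paper), assuming test-retest scores drawn from a bivariate normal distribution with a correlation of 0.7:

```python
# Regression towards the mean, simulated: select 'patients' with extreme
# baseline scores and watch the follow-up scores drift back towards the
# population mean with no treatment at all. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, rho = 50.0, 10.0, 0.7   # population mean/SD, test-retest correlation

n = 100_000
baseline = rng.normal(mu, sigma, n)
# Follow-up is correlated with baseline but has the same marginal distribution:
followup = mu + rho * (baseline - mu) + rng.normal(0, sigma * np.sqrt(1 - rho**2), n)

# 'Patients' are those who scored badly (here: low) at baseline.
sick = baseline < mu - sigma
print(f"baseline mean of selected group : {baseline[sick].mean():.1f}")   # ~34.8
print(f"follow-up mean of selected group: {followup[sick].mean():.1f}")  # ~39.3
# Expected 'improvement' from RTM alone: (1 - rho) * (mu - baseline mean)
```

Selecting patients because they score badly guarantees an apparent ‘improvement’ at follow-up, treatment or no treatment – which is precisely why uncontrolled before-after comparisons flatter any therapy.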

With this re-analysis of an observational study, the investigators wanted to evaluate whether the changes observed in previous cohort studies are due to RTM and to estimate RTM-adjusted effects. SF-36 quality-of-life (QoL) data from a cohort of 2,827 chronically diseased adults treated with homeopathy were reanalysed using a method described by Mee and Chua in 1991. RTM-adjusted effects, standardized by the respective standard deviation at baseline, were 0.12 (95% CI: 0.06-0.19, P < 0.001) in the mental and 0.25 (0.22-0.28, P < 0.001) in the physical summary score of the SF-36. Small-to-moderate effects were confirmed for most individual diagnoses in physical, but not in mental, component scores. Under the assumption that the true population mean equals the mean of all actually diseased patients, RTM-adjusted effects were confirmed for both scores in most diagnoses.

The authors reached the following conclusion: “In our paper we showed that the effects on quality of life observed in patients receiving homeopathic care in a usual care setting are small or moderate at maximum, but cannot be explained by RTM alone. Due to the uncontrolled study design they may, however, completely be due to nonspecific effects. All our analyses made a restrictive and conservative assumption, so the true treatment effects might be larger than shown.” 

Of course, the analysis relies heavily on the validity of Mee and Chua’s modified t-test. It requires the true mean in the target population to be known, a requirement that can seldom be fulfilled. The authors therefore took the SF-36 mean summary scores from the 1998 German health survey as proxies. I am not a statistician and therefore unable to tell how reliable this method might be (if there is someone out there who can give us some guidance here, please post a comment).
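For what it is worth, the principle behind such adjustments can be sketched simply: under a bivariate normal model, the change expected from RTM alone is (1 − ρ)(μ − x̄₁), and one subtracts this from the observed change. The sketch below illustrates only this principle, with invented numbers; Mee and Chua’s actual procedure is a modified t-test whose details differ:

```python
# Sketch of an RTM-adjusted effect under a bivariate normal assumption.
# This illustrates the principle only, NOT Mee and Chua's exact statistic;
# all numbers are invented.

def rtm_adjusted_effect(baseline_mean, followup_mean, pop_mean, rho):
    """Observed change minus the change expected from RTM alone."""
    observed_change = followup_mean - baseline_mean
    expected_rtm_change = (1 - rho) * (pop_mean - baseline_mean)
    return observed_change - expected_rtm_change

# Invented example: patients start 15 points below the population mean.
effect = rtm_adjusted_effect(baseline_mean=35.0, followup_mean=42.0,
                             pop_mean=50.0, rho=0.7)
print(f"RTM-adjusted effect: {effect:.1f} points")  # 7.0 observed - 4.5 RTM = 2.5
```

One can see immediately why the choice of the population mean matters so much: shift it, and the adjusted effect shifts with it – hence my question above about the reliability of using survey proxies.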

In order to make sense of these data, we need to consider that, during the study period, about half of the patients admitted to having had additional visits to non-homeopathic doctors, and 27% also received conventional drugs. In addition, they would have benefitted from:

  • the benign history of the conditions they were suffering from,
  • a placebo effect,
  • the care and attention they received,
  • and all sorts of other non-specific effects.

So, considering these factors, what does this interesting re-analysis really tell us? My interpretation is as follows: the type of observational study that homeopaths are so fond of yields false-positive results. If we correct them – as the authors have done here for just one single factor, the RTM – the effect size gets significantly smaller. If we were able to correct them for some of the other factors mentioned above, the effect size would shrink more and more. And if we were able to correct them for all confounders, their results would almost certainly concur with those of rigorously controlled trials which demonstrate that homeopathic remedies are pure placebos.

I am quite sure that this interpretation is unpopular with homeopaths, but I am equally certain that it is correct.

Advocates of alternative medicine are incredibly fond of supporting their claims with anecdotes, or ‘case-reports’ as they are officially called. There is no question that case-reports can be informative and important, but we need to be aware of their limitations.

A recent case-report from the US might illustrate this nicely. It described a 65-year-old male patient who had had MS for 20 years when he decided to get treated with Chinese scalp acupuncture. The motor area, sensory area, foot motor and sensory area, balance area, hearing and dizziness area, and tremor area were stimulated once a week for 10 weeks, then once a month for 6 further sessions.

After the 16 treatments, the patient showed remarkable improvements. He was able to stand and walk without any problems. The numbness and tingling in his limbs did not bother him anymore. He had more energy and had not experienced incontinence of urine or dizziness after the first treatment. He was able to return to work full time. Now the patient has been in remission for 26 months.

The authors of this case-report conclude that Chinese scalp acupuncture can be a very effective treatment for patients with MS. Chinese scalp acupuncture holds the potential to expand treatment options for MS in both conventional and complementary or integrative therapies. It can not only relieve symptoms, increase the patient’s quality of life, and slow and reverse the progression of physical disability but also reduce the number of relapses and help patients.

There is absolutely nothing wrong with case-reports; on the contrary, they can provide extremely valuable pointers for further research. If they relate to adverse effects, they can give us crucial information about the risks associated with treatments. Nobody would ever argue that case-reports are useless, and that is why most medical journals regularly publish such papers. But they are valuable only if one is aware of their limitations. Medicine finally started to make swift progress, some 150 years ago, when we gave up attributing undue importance to anecdotes, began to doubt established wisdom and started testing it scientifically.

Conclusions such as the ones drawn above are not just odd, they are misleading to the point of being dangerous. A reasonable conclusion might have been that this case of an MS patient is interesting and should be followed up through further observations. If these then seem to confirm the positive outcome, one might consider conducting a clinical trial. If that study were to yield encouraging findings, one might eventually draw the conclusions which the present authors drew from their single case.

To jump to conclusions in the way the authors did is neither justified nor responsible. It is unjustified because case-reports never lend themselves to such generalisations. And it is irresponsible because desperate patients, who often fail to understand the limitations of case-reports and tend to believe things that have been published in medical journals, might act on these words. This, in turn, would raise false hopes or might even lead to patients forfeiting those treatments that are evidence-based.

It is high time, I think, that proponents of alternative medicine give up their love-affair with anecdotes and join the rest of the health care professions in the 21st century.

Yes, it is unlikely but true! I once was the hero of the world of energy healing, albeit for a short period only. An amusing story, I hope you agree.

Back in the late 1990s, we had decided to run two trials in this area. One of them was to test the efficacy of distant healing for the removal of ordinary warts, common viral infections of the skin which are quite harmless and usually disappear spontaneously. We had designed a rigorous study, obtained ethics approval and were in the midst of recruiting patients, when I suggested I could be the trial’s first participant, as I had noticed a tiny wart on my left foot. As patient-recruitment was sluggish at that stage, my co-workers consulted the protocol to check whether it might prevent me from taking part in my own trial. They came back with the good news that, as I was not involved in the running of the study, there was no reason for me to be excluded.

The next day, they ‘processed’ me like all the other wart sufferers in our investigation. My wart was measured, photographed and documented. A sealed envelope with my trial number was opened (in my absence, of course) by one of the trialists to see whether I would be in the experimental or the placebo group. The former patients were to receive ‘distant healing’ from a group of 10 experienced healers who had volunteered and felt confident of being able to cure warts. All they needed was a few details about each patient, they had confirmed. The placebo group received no such intervention. ‘Blinding’ the patients was easy in this trial; since they were not themselves involved in any healing action, they could not know whether they were in the placebo or the verum group.

The treatment period lasted for several weeks, during which time my wart was re-evaluated at regular intervals. When I had completed the study, final measurements were taken, and I was told that I had been the recipient of ‘healing energy’ from the 10 healers during the past weeks. Not that I had felt any of it, and not that my wart had noticed it either: it was still there, completely unchanged.

I remember not being all that surprised…until, the next morning, when I noticed that my wart had disappeared! Gone without a trace!

Of course, I told my co-workers who were quite excited, re-photographed the spot where the wart had been and consulted the study protocol to determine what had to be done next. It turned out that we had made no provisions for events that might occur after the treatment period.

But somehow, this did not feel right, we all thought. So we decided to make a post-hoc addendum to our protocol which stipulated that all participants of our trial would be asked a few days after the end of the treatment whether any changes to their warts had been noted.

Meanwhile the healers had got wind of the professorial wart’s disappearance. They were delighted and quickly told other colleagues. In no time at all, the world of ‘distant healing’ had agreed that warts often reacted to their intervention with a slight delay – and they were pleased to hear that we had duly amended our protocol to adequately capture this important phenomenon. My ‘honest’ and ‘courageous’ action of acknowledging and documenting the disappearance of my wart was praised, and it was assumed that I was about to prove the efficacy of distant healing.

And that’s how I became their ‘hero’ – the sceptical professor who had now seen the light with his own eyes and experienced on his own body the incredible power of their ‘healing energy’.

Incredible it remained though: I was the only trial participant who lost his wart in this way. When we published this study, we concluded: Distant healing from experienced healers had no effect on the number or size of patients’ warts.

AND THAT’S WHEN I STOPPED BEING THEIR ‘HERO’.

A recent interview on alternative medicine for the German magazine DER SPIEGEL prompted well over 500 comments; even though, in the interview, I covered numerous alternative therapies, the discussion that followed focussed almost entirely on homeopathy. Yet again, many of the comments provided a reminder of the quasi-religious faith many people have in homeopathy.

There can, of course, be dozens of reasons for such strong convictions. Yet, in my experience, some seem to be more prevalent and important than others. During my last two decades of researching homeopathy, I think I have identified several of the most important ones. In this post, I try to outline a typical sequence of events that eventually leads to a faith in homeopathy which is utterly immune to fact and reason.

The epiphany

The starting point of this journey towards homeopathy-worship is usually an impressive personal experience which is often akin to an epiphany (defined as a moment of sudden and great revelation or realization). I have met hundreds of advocates of homeopathy, and those who talk about this sort of thing invariably offer impressive stories about how they metamorphosed from being a ‘sceptic’ (yes, it is truly phenomenal how many believers insist that they started out as sceptics) into someone who was completely bowled over by homeopathy, and how that ‘moment of great revelation’ changed the rest of their lives. Very often, this ‘Saulus-Paulus conversion’ relates to that person’s own (or a close friend’s) illness which allegedly was cured by homeopathy.

Rachel Roberts, chief executive of the Homeopathy Research Institute, provides as good an example of this sort of epiphany as anyone; in an article in THE GUARDIAN, she described her conversion to homeopathy with the following words:

I was a dedicated scientist about to begin a PhD in neuroscience when, out of the blue, homeopathy bit me on the proverbial bottom.

Science had been my passion since I began studying biology with Mr Hopkinson at the age of 11, and by the age of 21, when I attended the dinner party that altered the course of my life, I had still barely heard of it. The idea that I would one day become a homeopath would have seemed ludicrous.

That turning point is etched in my mind. A woman I’d known my entire life told me that a homeopath had successfully treated her when many months of conventional treatment had failed. As a sceptic, I scoffed, but was nonetheless a little intrigued.

She confessed that despite thinking homeopathy was a load of rubbish, she’d finally agreed to an appointment, to stop her daughter nagging. But she was genuinely shocked to find that, after one little pill, within days she felt significantly better. A second tablet, she said, “saw it off completely”.

I admit I ruined that dinner party. I interrogated her about every detail of her diagnosis, previous treatment, time scales, the lot. I thought it through logically – she was intelligent, she wasn’t lying, she had no previous inclination towards alternative medicine, and her reluctance would have diminished any placebo effect.

Scientists are supposed to make unprejudiced observations, then draw conclusions. As I thought about this, I was left with the highly uncomfortable conclusion that homeopathy appeared to have worked. I had to find out more.

So, I started reading about homeopathy, and what I discovered shifted my world for ever. I became convinced enough to hand my coveted PhD studentship over to my best friend and sign on for a three-year, full-time homeopathy training course.

Now, as an experienced homeopath, it is “science” that is biting me on the bottom. I know homeopathy works…

As I said, I have heard many strikingly similar accounts. Some of these tales seem a little too tall to be true and might be a trifle exaggerated, but the consistency of the picture that emerges from all of these stories is nevertheless extraordinary: people get started on a single anecdote which they are prepared to experience as an epiphanic turn-around. Subsequently, they are on a mission of confirming their new-found belief over and over again, until they become undoubting disciples for life.

So what? you might ask. But I do think this epiphany-like event at the outset of a homeopathic career is significant. In no other area of health care does the initial anecdote regularly play such a prominent role. People do not become believers in aspirin, for instance, on the basis of a ‘moment of great revelation’, they may take it because of the evidence. And, if there is a discrepancy between the external evidence and their own experience, as with homeopathy, most people would start to reflect: What other explanations exist to rationalise the anecdote? Invariably, there are many (placebo, natural history of the condition, concomitant events etc.).

Confirmation bias

Epiphany-struck believers spend much time and effort actively looking for similar stories that seem to confirm the initial anecdote. They might, for instance, recommend or administer or prescribe homeopathy to others, many of whom would report positive outcomes. At the same time, all anecdotes that do not happen to fit the belief are brushed aside, forgotten, suppressed, belittled, decried etc. This process leads to confirmation after confirmation after confirmation – and gradually builds up to what proponents of homeopathy would call ‘years of experience’. And ‘years of experience’ can, of course, not be wrong!

Again, believers neglect to question, doubt and rationalise their own perceptions. They ignore the fact that years of experience might be little more than a stubborn insistence on repeating one’s own mistakes. Even the most obvious confounders such as selective memory or alternative causes for positive clinical outcomes are quickly dismissed or not even considered at all.

Avoiding cognitive dissonance at all cost

But believers still have to somehow deal with the scientific facts about homeopathy; and these are, of course, grossly out of line with their belief. Thus the external evidence and the internal belief inevitably clash, creating a shrill cognitive dissonance. This must be avoided at all cost, as it might threaten the believer’s peace of mind. And the solution is amazingly simple: scientific evidence that does not confirm the believer’s conviction is ignored or, when this proves to be impossible, turned upside down.

Rachel Roberts’ account is most enlightening also in this respect:

And yet I keep reading reports in the media saying that homeopathy doesn’t work and that this scientific evidence doesn’t exist.

The facts, it seems, are being ignored. By the end of 2009, 142 randomised control trials (the gold standard in medical research) comparing homeopathy with placebo or conventional treatment had been published in peer-reviewed journals – 74 were able to draw firm conclusions: 63 were positive for homeopathy and 11 were negative. Five major systematic reviews have also been carried out to analyse the balance of evidence from RCTs of homeopathy – four were positive (Kleijnen, J, et al; Linde, K, et al; Linde, K, et al; Cucherat, M, et al) and one was negative (Shang, A et al). It’s usual to get mixed results when you look at a wide range of research results on one subject, and if these results were from trials measuring the efficacy of “normal” conventional drugs, ratios of 63:11 and 4:1 in favour of a treatment working would be considered pretty persuasive.

This statement is, in my view, a classic example of a desperate misinterpretation of the truth as a means of preventing the believer’s house of cards from collapsing. It even makes the hilarious claim that not the believers but the doubters “ignore” the facts.

In order to be able to adhere to her belief, Roberts needs to rely on a woefully biased white-wash from the ‘British Homeopathic Association’. And, in order to be on the safe side, she even quotes it misleadingly. The conclusion of the Cucherat review, for instance, can only be seen as positive by the most blinkered of minds: There is some evidence that homeopathic treatments are more effective than placebo; however, the strength of this evidence is low because of the low methodological quality of the trials. Studies of high methodological quality were more likely to be negative than the lower quality studies. Further high quality studies are needed to confirm these results. Contrary to what Roberts states, there are at least a dozen systematic reviews of homeopathy beyond the 5 she mentions; my own systematic review of systematic reviews, for example, concluded that the best clinical evidence for homeopathy available to date does not warrant positive recommendations for its use in clinical practice.

It seems that, at this stage of a believer’s development, the truth gets all too happily sacrificed on the altar of faith. All these ‘ex-sceptics’ turned believers are now able to display is a rather comical parody of scepticism.

The delusional end-stage

The last stage in the career of a believer has been reached when hardly anything that he or she is convinced of resembles reality any longer. I don’t know much about Rachel Roberts, and she might not have reached this point yet; but there are many others who clearly have.

My two favourite examples of end-stage homeopathic delusionists are John Benneth and Dana Ullman. The final stage on the journey from ‘sceptic scientist’ to delusional disciple is characterised by an incessant stream of incoherent statements of vile nonsense that beggars belief. It is therefore easy to recognise and, because nobody can possibly take the delusionists seriously, they are best viewed as relatively harmless contributors to medical comedy.

Why does all of this matter?

Many homeopathy-fans are quasi-religious believers who, in my experience, have moved way beyond reason. It is therefore a complete waste of time trying to reason with them. Initiated by a highly emotional epiphany, their faith cannot be shaken by rational arguments. Similar but usually less pronounced attitudes, I am afraid, can be observed in true believers of other alternative treatments as well (here I have chosen the example of homeopathy mainly because it is the area where things are most explicit).

True believers claim to have started out as sceptics, and they often insist that they are driven by a scientific mind. Yet I have never seen any evidence for these assumptions. On the contrary, for a relatively trivial episode to become a life-changing epiphany, the believer’s mind needs to be lamentably unscientific, unquestioning and simple.

In my experience, true believers will not change their minds; I have never seen this happen. However, progress might nevertheless be made if we managed to instil more (self-)questioning rationality and scientific attitudes into the minds of the next generations. In other words, we need better education in science and more training in critical thinking during their formative years.

Some experts concede that chiropractic spinal manipulation is effective for chronic low back pain (cLBP). But what is the right dose? There have been no full-scale trials of the optimal number of treatments with spinal manipulation. This study was aimed at filling this gap by trying to identify a dose-response relationship between the number of visits to a chiropractor for spinal manipulation and cLBP outcomes. A further aim was to determine the efficacy of manipulation by comparison with a light massage control.

The primary cLBP outcomes were the 100-point pain intensity scale and functional disability scales evaluated at the 12- and 24-week primary end points. Secondary outcomes included days with pain and functional disability, pain unpleasantness, global perceived improvement, medication use, and general health status.

One hundred patients with cLBP were randomized to each of 4 dose levels of care: 0, 6, 12, or 18 sessions of spinal manipulation from a chiropractor. Participants were treated three times per week for 6 weeks. At sessions when manipulation was not assigned, the patients received a focused light massage control. Covariate-adjusted linear dose effects and comparisons with the no-manipulation control group were evaluated at 6, 12, 18, 24, 39, and 52 weeks.

For the primary outcomes, mean pain and disability improvements in the manipulation groups were about 20 points by 12 weeks, an effect that was sustained to 52 weeks. Linear dose-response effects were small, reaching about two points per 6 manipulation sessions at 12 and 52 weeks for both variables. At 12 weeks, the greatest differences compared with the no-manipulation controls were found for 12 sessions (8.6 pain and 7.6 disability points); at 24 weeks, differences were negligible; and at 52 weeks, the greatest group differences were seen for 18 visits (5.9 pain and 8.8 disability points).
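To illustrate what a ‘linear dose effect’ of this size means in practice, here is a toy calculation with invented data chosen to echo the reported ~2 points per 6 sessions (the trial itself used covariate-adjusted models, which this sketch does not attempt):

```python
# Toy linear dose-response estimate: regress improvement on the number of
# manipulation sessions (0, 6, 12, 18). The data are INVENTED: ~12 points
# of nonspecific improvement plus 1/3 of a point per session, with noise.
import numpy as np

rng = np.random.default_rng(1)
doses = np.repeat([0, 6, 12, 18], 100)            # 100 patients per arm
improvement = 12 + doses / 3 + rng.normal(0, 8, doses.size)

slope, intercept = np.polyfit(doses, improvement, 1)
print(f"estimated dose effect: {6 * slope:.1f} points per 6 sessions")  # ~2
```

Against a 100-point pain scale, a couple of points per 6 extra sessions is exactly the kind of effect that is statistically detectable yet clinically trivial – which is the point I make below.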

The authors concluded that the number of spinal manipulation visits had modest effects on cLBP outcomes above those of 18 hands-on visits to a chiropractor. Overall, 12 visits yielded the most favorable results but was not well distinguished from other dose levels.

This study is interesting because it confirms that the effects of chiropractic spinal manipulation as a treatment for cLBP are tiny and probably not clinically relevant. And even these tiny effects might not be due to the treatment per se but could be caused by residual confounding and bias.

As for the optimal dose, the authors suggest that, on average, 12 sessions might be best. But again, we have to be clear that the dose-response effects were small and of doubtful clinical relevance. Since the therapeutic effects are tiny, it is obviously difficult to establish a dose-response relationship.

In view of the cost of chiropractic spinal manipulation and the uncertainty about its safety, I would probably not rate this approach as the treatment of choice but would point to the current Cochrane review, which concludes that “high quality evidence suggests that there is no clinically relevant difference between spinal manipulation and other interventions for reducing pain and improving function in patients with chronic low-back pain”. Personally, I think it is more prudent to recommend exercise, back school, massage or perhaps even yoga to cLBP-sufferers.

Some sceptics are convinced that, in alternative medicine, there is no evidence. This assumption is wrong, I am afraid, and statements of this nature can actually play into the hands of apologists of bogus treatments: they can then easily demonstrate the sceptics to be mistaken or “biased”, as they would probably say. The truth is that there is plenty of evidence – and lots of it is positive, at least at first glance.

Alternative medicine researchers have been very industrious during the last two decades, building up a sizable body of ‘evidence’. Consequently, one often finds data even for the most bizarre and implausible treatments. Take, for instance, the claim that homeopathy is an effective treatment for cancer. Those who promote this assumption have no difficulties in locating some weird in-vitro study that seems to support their opinion. When sceptics subsequently counter that in-vitro experiments tell us nothing about the clinical situation, apologists quickly unearth what they consider to be sound clinical evidence.

An example is this prospective observational 2011 study of cancer patients from two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n = 259), and one cohort with conventionally treated cancer patients (CG; n = 380). Its main outcome measures were the change in quality of life after 3 months and after one year, as well as impairment by fatigue, anxiety or depression. The results of this study show significant improvements in most of these endpoints, and the authors concluded that we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment.

Another, in some ways even better example is this 2005 observational study of 6544 consecutive patients from the Bristol Homeopathic Hospital. Every patient attending the hospital outpatient unit for a follow-up appointment was included, commencing with their first follow-up attendance. Of these patients 70.7% (n = 4627) reported positive health changes, with 50.7% (n = 3318) recording their improvement as better or much better. The authors concluded that homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases.

The principle that is being followed here is simple:

  • believers in a bogus therapy conduct a clinical trial which is designed to generate an apparently positive finding;
  • the fact that the study cannot tell us anything about cause and effect is cleverly hidden or belittled;
  • they publish their findings in one of the many journals that specialise in this sort of nonsense;
  • they make sure that advocates across the world learn about their results;
  • the community of apologists of this treatment picks up the information without the slightest critical analysis;
  • the researchers conduct more and more of such pseudo-research;
  • nobody attempts to do some real science: the believers do not truly want to falsify their hypotheses, and the real scientists find it unreasonable to conduct research on utterly implausible interventions;
  • thus the body of false or misleading ‘evidence’ grows and grows;
  • proponents start publishing systematic reviews and meta-analyses of their studies which are devoid of critical input;
  • too few critics point out that these reviews are fatally flawed – ‘rubbish in, rubbish out’!
  • eventually politicians, journalists, health care professionals and other people who did not necessarily start out as believers in the bogus therapy are convinced that the body of evidence is impressive and justifies implementation;
  • important health care decisions are thus based on data which are false and misleading.

So, what can be done to prevent such pseudo-evidence from being mistaken for solid proof, which might eventually mislead many into believing that bogus treatments are based on reasonably sound data? I think the following measures would be helpful:

  • authors should abstain from publishing over-enthusiastic conclusions which can all too easily be misinterpreted (given that the authors are believers in the therapy, this is not a realistic option);
  • editors might consider rejecting studies which contribute next to nothing to our current knowledge (given that these studies are usually published in journals that are in the business of promoting alternative medicine at any cost, this option is also not realistic);
  • if researchers report highly preliminary findings, there should be an obligation to do further studies in order to confirm or refute the initial results (not realistic either, I am afraid);
  • in case this does not happen, editors should consider retracting the paper reporting unconfirmed preliminary findings (utterly unrealistic).

What then can REALISTICALLY be done? I wish I knew the answer! All I can think of is that sceptics should educate the rest of the population to think and analyse such ‘evidence’ critically…but how realistic is that?

We have probably all fallen into the trap of thinking that something which has stood the ‘test of time’, i.e. something that has been used for centuries with apparent success, must be OK. In alternative medicine, this belief is extremely widespread, and one could argue that the entire sector is built on it. Influential proponents of ‘traditional’ medicine like Prince Charles do their best to strengthen this assumption. Sadly, however, it is easily exposed as a classic fallacy: things that have stood the ‘test of time’ might work, of course, but the ‘test of time’ is never proof of anything.

A recent study brought this message home loud and clear. This trial tested the efficacy of Rhodiola crenulata (R. crenulata), a traditional remedy which has been used widely in the Himalayan areas and in Tibet to prevent acute mountain sickness. As no scientific studies of this traditional treatment existed, the researchers conducted a double-blind, placebo-controlled crossover RCT to test its efficacy in preventing acute mountain sickness.

Healthy adult volunteers were randomized to two treatment sequences, receiving either 800 mg R. crenulata extract or placebo daily for 7 days before ascent and two days during mountaineering. After a three-month wash-out period, they were crossed over to the alternate treatment. On each occasion, the participants ascended rapidly from 250 m to 3421 m. The primary outcome measure was the incidence of acute mountain sickness with headache and at least one of the symptoms of nausea or vomiting, fatigue, dizziness, or difficulty sleeping.

One hundred and two participants completed the trial. No significant differences in the incidence of acute mountain sickness were found between the R. crenulata extract and placebo groups. If anything, the incidence of severe acute mountain sickness with Rhodiola extract was slightly higher than with placebo: 35.3% vs. 29.4%.

R. crenulata extract was not effective in reducing the incidence or severity of acute mountain sickness compared with placebo.

Similar examples could be found by the dozen. They demonstrate very clearly that the notion of the ‘test of time’ is erroneous: a treatment which has a long history of usage is not necessarily effective (or safe); it might even be dangerous. The true value of a therapy cannot be judged by experience; to be sure, we need rigorous clinical trials. Acute mountain sickness is a potentially life-threatening condition for which there are reasonably effective treatments. If people relied on ‘ancient wisdom’ instead of using a therapy that actually works, they might pay for their error with their lives. The sooner alternative medicine proponents realise that, the better.

This post will probably work best, if you have read the previous one describing how the parallel universe of acupuncture research insists on going in circles in order to avoid admitting that their treatment might not be as effective as they pretend. The way they achieve this is fairly simple: they conduct trials that are designed in such a way that they cannot possibly produce a negative result.

A brand-new investigation which was recently vociferously touted via press releases etc. as a major advance in proving the effectiveness of acupuncture is an excellent case in point. According to its authors, the aim of this study was to evaluate acupuncture versus usual care and counselling versus usual care for patients who continue to experience depression in primary care. This sounds alright, but wait!

755 patients with depression were randomised to one of three arms: 1) acupuncture, 2) counselling, and 3) usual care alone. The primary outcome was the difference in mean Patient Health Questionnaire (PHQ-9) scores at 3 months, with secondary analyses over 12 months of follow-up. Analysis was by intention-to-treat. PHQ-9 data were available for 614 patients at 3 months and 572 patients at 12 months. Patients attended a mean of 10 sessions for acupuncture and 9 sessions for counselling. Compared with usual care, there was a statistically significant reduction in mean PHQ-9 depression scores at 3 and 12 months for both acupuncture and counselling.

From this, the authors conclude that both interventions were associated with significantly reduced depression at 3 months when compared to usual care alone.

Acupuncture for depression? Really? Our own systematic review – with co-authors who are among the most ardent apologists of acupuncture I have come across – showed that the evidence is inconsistent on whether manual acupuncture is superior to sham… Therefore, I thought it might be a good idea to have a closer look at this new study.

One needs to search this article very closely indeed to find out that the authors did not actually evaluate acupuncture versus usual care and counselling versus usual care at all, and that comparisons were not made between acupuncture, counselling, and usual care (hints like the use of the word “alone” are all we get to guess that the authors’ text is outrageously misleading). Not even the methods section informs us what really happened in this trial. You find this hard to believe? Here is the unabbreviated part of the article that describes the interventions applied:

Patients allocated to the acupuncture and counselling groups were offered up to 12 sessions usually on a weekly basis. Participating acupuncturists were registered with the British Acupuncture Council with at least 3 years post-qualification experience. An acupuncture treatment protocol was developed and subsequently refined in consultation with participating acupuncturists. It allowed for customised treatments within a standardised theory-driven framework. Counselling was provided by members of the British Association for Counselling and Psychotherapy who were accredited or were eligible for accreditation having completed 400 supervised hours post-qualification. A manualised protocol, using a humanistic approach, was based on competences independently developed for Skills for Health. Practitioners recorded in logbooks the number and length of sessions, treatment provided, and adverse events. Further details of the two interventions are presented in Tables S2 and S3. Usual care, both NHS and private, was available according to need and monitored for all patients in all three groups for the purposes of comparison.

It is only in the results tables that we can determine what treatments were actually given; and these were:

1) Acupuncture PLUS usual care (i.e. medication)

2) Counselling PLUS usual care

3) Usual care

It’s almost a ‘no-brainer’ that, if you compare A+B to B (or, in this three-armed study, A+B vs C+B vs B), you will find that the former is superior to the latter – unless A is a negative, of course. As acupuncture has significant placebo effects, it can never be a negative, and thus this trial was an entirely foregone conclusion. As, in alternative medicine, one seems to need experimental proof even for ‘no-brainers’, we demonstrated some time ago that this common-sense theory is correct by conducting a systematic review of all acupuncture trials with such a design. We concluded that the ‘A + B versus B’ design is prone to false positive results… What makes this whole thing even worse is the fact that I once presented our review in a lecture where the lead author of the new trial was in the audience; so there can be no excuse of not being aware of the ‘no-brainer’.
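To see why the ‘A + B versus B’ design cannot fail, consider this little simulation (entirely invented effect sizes; the add-on treatment is given a nonspecific placebo effect but a specific effect of exactly zero):

```python
# Why 'A + B versus B' cannot fail: give the add-on treatment A a purely
# nonspecific (placebo/attention/expectation) effect and ZERO specific
# effect, and A+B still 'beats' usual care B alone. All numbers invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 250  # patients per arm

usual_care = rng.normal(loc=-4.0, scale=5.0, size=n)      # change in PHQ-9 score
placebo_effect = rng.normal(loc=-2.5, scale=2.0, size=n)  # ritual, attention, hope
acu_plus_uc = rng.normal(loc=-4.0, scale=5.0, size=n) + placebo_effect

t, p = stats.ttest_ind(acu_plus_uc, usual_care)
print(f"A+B vs B: mean difference {acu_plus_uc.mean() - usual_care.mean():.1f} "
      f"PHQ-9 points, p = {p:.2g}")
# 'Statistically significant' despite the specific effect of A being zero.
```

Even with zero specific effect, the add-on arm ‘wins’ – which is exactly what such trials are bound to show, and why their results were known before the first patient was recruited.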

Some might argue that this was a pragmatic trial, that it would have been unethical not to give anti-depressants to depressed patients, and that it was therefore not possible to design this study differently. However, none of these arguments is convincing if you analyse them closely (I might leave that to the comment section, if there is interest in such aspects). At the very minimum, the authors should have explained in full detail what interventions were given; and that means disclosing these essentials even in the abstract (and press release) – the part of the publication that is most widely read and quoted.

It is arguably unethical to ask for patients’ co-operation, use research funds etc. for a study whose results were known even before the first patient had been recruited. And it is surely dishonest to hide the true nature of the design so very sneakily in the final report.

In my view, this trial begs at least 5 questions:

1) How on earth did it pass the peer review process of one of the most highly reputed medical journals?

2) How did the protocol get ethics approval?

3) How did it get funding?

4) Does the scientific community really allow itself to be fooled by such pseudo-research?

5) What do I do to not get depressed by studies of acupuncture for depression?
