
In a recent comment, US chiropractors stated that there is a growing recognition within the profession that the practicing chiropractor must be able to do the following: formulate a searchable clinical question, rapidly access the best evidence available, assess the quality of that evidence, determine if it is applicable to a particular patient or group of patients, and decide if and how to incorporate the evidence into the care being offered. In a word, they believe that evidence-based chiropractic is possible, perhaps even (almost) a reality. For evidence-based practice to penetrate and transform a profession, the penetration must occur at two levels, they explain. One level is the degree to which individual practitioners possess the willingness and basic skills to search and assess the literature.

The second level, the authors explain, relates to whether the therapeutic interventions commonly employed by a particular health care discipline are supported by clinical research. The authors believe that a growing body of randomized controlled trials provides evidence of the effectiveness and safety of manual therapies. Is this really true, I wonder.

In support of these fairly bold statements, they cite a paper by Bronfort et al which, in their view, is currently the most comprehensive review of the evidence for the efficacy of manual therapies. According to these authors, the ‘Bronfort-report’ stated that evidence is inconclusive for pneumonia, stage 1 hypertension, pre-menstrual syndrome, nocturnal enuresis, and otitis media. The authors also believe that it is unlikely manipulation of the neck is causally related to stroke.

When I read this article, I could not stop myself from giggling. It seems to me that it provides pretty good evidence for the fact that the chiropractic profession is nowhere near reaching the stage where anyone could reasonably claim that chiropractors practice evidence-based medicine – not even the authors themselves seem to abide by the rules of evidence-based medicine! If they had truly been able to access the best evidence available and assess the quality of that evidence, surely they would not have (mis-)quoted the ‘Bronfort-report’.

Bronfort’s overview was commissioned by the General Chiropractic Council, it was hastily compiled by ardent believers in chiropractic, published in a journal that non-chiropractors would not touch with a barge pole, and crucially it lacks some of the most important qualities of an unbiased systematic review. In my view, it is nothing short of a white-wash and not worth the paper it was printed on. Conclusions such as ‘the evidence regarding pneumonia, bed-wetting and otitis is inconclusive’ are just embarrassing; the correct conclusion is that the evidence fails to be positive for these and most other indications.

Similarly, if the authors had really studied and quoted the best evidence, how on earth could they have stated that manipulation of the neck cannot cause a stroke? The evidence for that is fairly overwhelming, and the only open question here is, how often do such complications occur? And even the biased ‘Bronfort-report’ states: “Adverse events associated with manual treatment can be classified into two categories: 1) benign, minor or non-serious and 2) serious. Generally those that are benign are transient, mild to moderate in intensity, have little effect on activities, and are short lasting. Most commonly, these involve pain or discomfort to the musculoskeletal system. Less commonly, nausea, dizziness or tiredness are reported. Serious adverse events are disabling, require hospitalization and may be life-threatening. The most documented and discussed serious adverse event associated with spinal manipulation (specifically to the cervical spine) is vertebrobasilar artery (VBA) stroke. Less commonly reported are serious adverse events associated with lumbar spine manipulation, including lumbar disc herniation and cauda equina syndrome.”

Evidence-based practice? Who are these chiropractors kidding? This article very neatly reflects the exact opposite. It ignores hundreds of peer-reviewed papers which are critical of chiropractic. The best one can do with this paper, I think, is to use it as a hilarious bit of involuntary humour or as a classic example of cherry-picking.

Come to think of it, chiropractic and evidence-based practice are contradictions in terms. Either a therapist claims to adjust mystical subluxations, in which case he/she does not practice evidence-based medicine. Or he/she practices evidence-based medicine, in which case adjusting mystical subluxations cannot be part of their therapeutic repertoire.

Towards the end of the article, we learn further fascinating things: the authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article – oh, really?!?! Furthermore, we are told that this ‘research’ was funded by the ‘National Center for Complementary and Alternative Medicine’ (NCCAM) of the National Institutes of Health.

Can it be true? Does the otherwise most respectable NIH really lend its name to such overt nonsense? Yes, it is true, and it is by no means the first time. In fact, our analysis shows that, when it comes to chiropractic, this organisation has sponsored almost nothing but utter rubbish, and our conclusion was blunt: the criticism repeatedly aimed at NCCAM seems justified, as far as their RCTs of chiropractic are concerned. It seems questionable whether such research is worthwhile.

Research is essential for progress, and research in alternative medicine is important for advancing alternative medicine, one would assume. But why then do I often feel that research in this area hinders progress? One of the reasons is, in my view, the continuous drip, drip, drip of misleading conclusions usually drawn from weak studies. I could provide thousands of examples; here is one recently published article chosen at random which seems as good as any other to make the point.

Researchers from the Department of Internal and Integrative Medicine, Faculty of Medicine, University of Duisburg-Essen, Germany, set out to investigate associations of regular yoga practice with quality of life and mental health in patients with chronic diseases. Using a case-control study design, 186 patients with chronic diseases who had elected to regularly practice yoga were selected and compared to controls who had chosen not to regularly practice yoga. Patients were matched individually on gender, main diagnosis, education, and age. Patients’ quality of life, mental health, life satisfaction, and health satisfaction were also assessed. The analyses show that patients who regularly practiced yoga had a significantly better general health status, higher physical functioning, and a higher physical component score on the SF-36 than those who did not.

The authors concluded that practicing yoga under naturalistic conditions seems to be associated with increased physical health but not mental health in chronically diseased patients.

Why do I find these conclusions misleading?

In alternative medicine, we have an irritating abundance of such correlative research. By definition, it does not allow us to make inferences about causation. Most (but by no means all) authors are therefore laudably careful when choosing their terminology. Certainly, the present article does not claim that regular yoga practice has caused increased physical health; it rightly speaks of “associations“. And surely, there is nothing wrong with that – or is there?

Perhaps, I will be accused of nit-picking, but I think the results are presented in a slightly misleading way, and the conclusions are not much better.

Why do the authors claim that patients who regularly practiced yoga had a significantly better general health status, higher physical functioning, and a higher physical component score on the SF-36 than those who did not? I know that the statement is strictly speaking correct, but why do they not write that “patients who had a significantly better general health status, higher physical functioning, and a higher physical component score on the SF-36 were more likely to practice yoga regularly”? After all, this too is correct! And why does the conclusion not state that better physical health seems to be associated with a greater likelihood of practicing yoga?

The possibility that the association is the other way round deserves serious consideration, in my view. Is it not logical to assume that, if someone is relatively fit and healthy, he/she is more likely to take up yoga (or table-tennis, sky-diving, pole dancing, etc.)?
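To illustrate the point (and only to illustrate it), here is a minimal simulation; all numbers are my own arbitrary assumptions and have nothing to do with the actual study. Health is generated first, yoga has no effect on it whatsoever, yet the yoga group still ends up with better average scores, simply because healthier people are more likely to take it up.

```python
import random

random.seed(1)

n = 10_000
health = [random.gauss(50, 10) for _ in range(n)]  # hypothetical SF-36-like physical scores

def takes_up_yoga(score: float) -> bool:
    # Assumption: the probability of practicing yoga rises with baseline health.
    p = min(max((score - 30) / 40, 0.05), 0.95)
    return random.random() < p

yoga = [takes_up_yoga(score) for score in health]

mean_yoga = sum(s for s, y in zip(health, yoga) if y) / sum(yoga)
mean_no_yoga = sum(s for s, y in zip(health, yoga) if not y) / (n - sum(yoga))

print(f"mean score, yoga group:     {mean_yoga:.1f}")
print(f"mean score, non-yoga group: {mean_no_yoga:.1f}")
# The yoga group scores higher although yoga did nothing at all: an association, not causation.
```

A cross-sectional association of this kind is, in other words, exactly what one would expect even if yoga were completely inert.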

It’s perhaps not a hugely important point, so I will not dwell on it – but, as the alternative medicine literature is full of such subtly misleading statements, I don’t find it entirely irrelevant either.

A recently published study by Danish researchers aimed at comparing the effectiveness of a patient education (PEP) programme with or without the added effect of chiropractic manual therapy (MT) to a minimal control intervention (MCI). Its results seem to indicate that chiropractic MT is effective. Is this the result chiropractors have been waiting for?

To answer this question, we need to look at the trial and its methodology in more detail.

A total of 118 patients with clinical and radiographic unilateral hip osteoarthritis (OA) were randomized into one of three groups: PEP, PEP+MT or MCI. The PEP was taught by a physiotherapist in 5 sessions. The MT was delivered by a chiropractor in 12 sessions, and the MCI included a home stretching programme. The primary outcome measure was the self-reported pain severity on an 11-box numeric rating scale immediately following the 6-week intervention period. Patients were subsequently followed for one year.

The primary analyses included 111 patients. In the PEP+MT group, a statistically and clinically significant reduction in pain severity of 1.9 points was noted compared to the MCI. The number needed to treat for PEP+MT was 3. No difference was found between the PEP and the MCI groups. At 12 months, the difference favouring PEP+MT was maintained.
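For readers unfamiliar with the term: the ‘number needed to treat’ is simply the reciprocal of the absolute difference in responder rates between the groups. The sketch below uses hypothetical responder proportions, not figures from the Danish trial, purely to show the arithmetic behind an NNT of about 3.

```python
def number_needed_to_treat(responder_rate_treatment: float, responder_rate_control: float) -> float:
    """NNT = 1 / absolute risk reduction (difference in responder proportions)."""
    absolute_risk_reduction = responder_rate_treatment - responder_rate_control
    return 1.0 / absolute_risk_reduction

# Hypothetical example: 63% responders with PEP+MT versus 30% with the minimal control
# intervention gives a difference of about 33 percentage points, i.e. an NNT of roughly 3.
print(round(number_needed_to_treat(0.63, 0.30), 1))  # -> 3.0
```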

The authors conclude that for primary care patients with osteoarthritis of the hip, a combined intervention of manual therapy and patient education was more effective than a minimal control intervention. Patient education alone was not superior to the minimal control intervention.

This is an interesting, pragmatic trial with a result suggesting that chiropractic MT in combination with PEP is effective in reducing the pain of hip OA. One could easily argue about the small sample size, the need for independent replication etc. However, my main concern is the fact that the findings can be interpreted in not just one but in at least two very different ways.

The obvious explanation would be that chiropractic MT is effective. I am sure that chiropractors would be delighted with this conclusion. But how sure can we be that it would reflect the truth?

I think an alternative explanation is just as (possibly more) plausible: the added time, attention and encouragement provided by the chiropractor (who must have been aware what was at stake and hence highly motivated) was the effective element in the MT-intervention, while the MT per se made little or no difference. The PEP+MT group had no less than 12 sessions with the chiropractor. We can assume that this additional care, compassion, empathy, time, encouragement etc. was a crucial factor in making these patients feel better and in convincing them to adhere more closely to the instructions of the PEP. I speculate that these factors were more important than the actual MT itself in determining the outcome.

In my view, such critical considerations regarding the trial methodology are much more than an exercise in splitting hairs. They are important in at least two ways.

Firstly, they remind us that clinical trials, whenever possible, should be designed such that they allow only one interpretation of their results. This can sometimes be a problem with pragmatic trials of this nature. It would be wise, I think, to conduct pragmatic trials only of interventions which have previously been proven to work.  To the best of my knowledge, chiropractic MT as a treatment for hip OA does not belong to this category.

Secondly, it seems crucial to be aware of such methodological issues and to consider them carefully before research findings are translated into clinical practice. If not, we might end up with therapeutic decisions (or guidelines) which are quite simply not warranted.

I would not be in the least surprised, if chiropractic interest groups were to use the current findings for promoting chiropractic in hip-OA. But what if the MT per se was ineffective, while the additional care, compassion and encouragement was what made the difference? In this case, we would not need to recruit (and pay for) chiropractors and put up with the considerable risks chiropractic treatments can entail; we would merely need to modify the PE programme such that patients are better motivated to adhere to it.

As it stands, the new study does not tell us much that is of any practical use. In my view, it is a pragmatic trial which cannot readily be translated into evidence-based practice. It might get interpreted as good news for chiropractic but, in fact, it is not.

Sorry, but I am fighting a spell of depression today.

Why? I came across this website which lists the 10 top blogs on alternative medicine. To be precise, here is what they say about their hit-list: this list includes the top 10 alternative medicine bloggers on Twitter, ranked by Klout score. Using Cision’s media database, we compiled the list based on Cision’s proprietary research, with results limited to bloggers who dedicate significant coverage to alternative medicine and therapies…

And here are the glorious top ten:

Andrew Weil – Dr. Andrew Weil’s Daily Health Tips

Joy McCarthy – Joyous Health Blog

Johanna Björk – Goodlifer

Stacey Chillemi – Stay Healthy and Cure Your Conditions Naturally

Eric Grey – Deepest Health

Kristi Shmyr – Prana Holistic Blog

Cathy Wong – Alternative Medicine Blog

Renee Canada – Hartford Healthy Living Examiner

Dee Braun – Natural Holistic Health Blog

Geo Espinosa – Dr. Geo’s Natural Health Blog

All of these sites are promotional and lack even the slightest hint of critical evaluation. All of them sell or advertise products and are thus out to make money. All of them are full of quackery, in my view. Some of the most popular bloggers are world-famous quacks!

What about impartial information for the public? What about critical review of the evidence? What about a degree of balance? What about guiding consumers to make responsible, evidence-based decisions? What about preventing harm? What about using scarce resources wisely?

I don’t see any of this on any of the sites.

You see, now I have depressed you too!

Quick, buy some herbal, natural, holistic and integrative anti-depressant! As it happens, I have some for sale….

Even after all these years of full-time research into alternative medicine and uncounted exchanges with enthusiasts involved in this sector, I find the logic that is often applied in this field bewildering and the unproductiveness of the dialogue disturbing.

To explain what I mean, it might be best to publish a (fictitious, perhaps slightly exaggerated) debate between a critical thinker or scientist (S) and an uncritical proponent (P) of one particular form of alternative medicine.

P: Did you see this interesting study demonstrating that treatment X is now widely accepted, even by highly critical GPs at the cutting edge of health care?

S: This was a survey, not a ‘study’, and I never found the average GP “highly critical”. Surveys of this nature are fairly useless and they “demonstrate” nothing of real value.

P: Whatever, but it showed that GPs accept treatment X. This can only mean that they realise how safe and effective it is.

S: Not necessarily, GPs might just give in to consumer demand, or the sample was cleverly selected, or the question was asked in a leading manner, etc.

P: Hardly, because there is plenty of good evidence for treatment X.

S: Really? Show me.

P: There is this study here which proves that treatment X works and is risk-free.

S: The study was far too small to demonstrate safety, and it is wide open to multiple sources of bias. Therefore it does not conclusively show efficacy either.

P: You just say this because you don’t like its result! You have a closed mind!

In any case, it was merely an example! There are plenty more positive studies; do your research properly before you talk such nonsense.

S: I did do some research and I found a recent, high quality systematic review that arrived at a negative conclusion about the value of treatment X.

P: That review was done by sceptics who clearly have an axe to grind. It is based on studies which do not account for the intrinsic subtleties of treatment X. Therefore they are unfair tests of treatment X. These trials don’t really count at all. Every insider knows that! The fact that you cite it merely confirms that you do not understand what you are talking about.

S: It seems to me that you like scientific evidence only when it confirms your belief. This, I am afraid, is what quacks tend to do!

P: I strongly object to being insulted in this way.

S: I did not insult you, I merely made a statement of fact.

P: If you like facts, you have to see that one needs to have sufficient expertise in treatment X in order to apply it properly and effectively. This important fact is neglected in all of those trials that report negative results; and that’s why they are negative. Simple! I really don’t understand why you are too stupid to understand this. Such studies do not show that treatment X is ineffective, but they demonstrate that the investigators were incompetent or hired with the remit to discredit treatment X.

S: I would have thought they are negative because they minimised bias and the danger of generating a false positive result.

P: No, by minimising bias, as you put it, these trials eliminated the factors that are important elements of treatment X.

S: Such as the placebo-effect?

P: That’s what you call it because you irrationally believe in reductionist science.

S: Science requires no belief, I think you are the believer here.

P: The fact is that scientists of your ilk negate all factors related to human interactions. Patients are not machines, you know, they need compassion; we clinicians know that because we work at the coal face of health care. Scientists in their ivory towers have no idea about patient care and just want science for science’s sake. This is not how you help patients. Show some compassion, man!

S: I do know about the importance of compassion and care, but here we are discussing an entirely different topic, namely testing the efficacy or effectiveness of treatments, not patient care. Let’s focus on one issue at a time.

P: You cannot separate things in this way. We have to take a holistic view. Patients are whole individuals, and you cannot do them justice by running artificial experiments. Every patient is different; clinical trials fail to account for this fact and are therefore fairly irrelevant to us and to our patients. Real life is very different from your imagined little experiments, you know.

S: These are platitudes that are nonsensical in this context and do not contribute anything meaningful to the present discussion. You do not seem to understand the methodology or purpose of a clinical trial.

P: That is typical! Whenever you run out of arguments, you try to change the subject or throw a few insults at me.

S: Not at all, I thought we were talking about clinical trials evaluating the effectiveness of treatment X.

P: That’s right; and they do show that it is effective, provided you consider those which are truly well-done by experts who know about treatment X and believe in it.

S: Not true. Only if you cherry-pick the data will you be able to produce an overall positive result for treatment X.

P: In any case, the real world results of clinical practice show very clearly that it works. It would not have survived for so long, if it didn’t. Nobody can deny that, and nobody should claim that silly little trials done in artificial circumstances are more meaningful than a wealth of experience.

S: Experience has little to do with reliable evidence.

P: To deny the value of experience is just stupid and clearly puts you in the wrong. I have shown you plenty of reliable evidence but you just ignore everything I say that does not go along with your narrow-minded notions about science; science is not the only way of knowing or comprehending things! Stop being obsessed with science.

S: No, you show me rubbish data and have little understanding of science, I am afraid.

P: Here we go again! I have had about enough of that and your blinkered arguments. We are going in circles because you are ignorant and arrogant. I have tried my best to show you the light, but your mind is closed. I offer true insight and you pay me back with insults. You and your cronies are in the pocket of BIG PHARMA. You are cynical, heartless and not interested in the wellbeing of patients. Next you will tell me to vaccinate my kids!

S: I think this is a waste of time.

P: Precisely! Everyone who has followed this debate will see very clearly that you are obsessed with reductionist science and incapable of considering the suffering of whole individuals. You want to deny patients a treatment that  really helps them simply because you do not understand how treatment X works. Shame on you!!!

Neck pain is a common problem which is often far from easy to treat. Numerous therapies are being promoted but few are supported by good evidence. Could yoga be the solution?

The aim of a brand-new RCT was to evaluate the effectiveness of Iyengar yoga for chronic non-specific neck pain. Patients were randomly assigned to either yoga or exercise. The yoga group attended a 9-week yoga course, while the exercise group received a self-care manual on home-based exercises for neck pain. The primary outcome measure was neck pain. Secondary outcome measures included functional disability, pain at motion, health-related quality of life, cervical range of motion, proprioceptive acuity, and pressure pain threshold. Fifty-one patients participated in the study: yoga (n = 25), exercise (n = 26). At the end of the treatment phase, patients in the yoga group reported significantly less neck pain, as well as less disability and better mental quality of life, compared with the exercise group. Range of motion and proprioceptive acuity were improved and the pressure pain threshold was elevated in the yoga group.

The authors draw the following conclusion: “Yoga was more effective in relieving chronic nonspecific neck pain than a home-based exercise program. Yoga reduced neck pain intensity and disability and improved health-related quality of life. Moreover, yoga seems to influence the functional status of neck muscles, as indicated by improvement of physiological measures of neck pain.”

I’d love to agree with the authors and would be more than delighted, if an effective treatment for neck pain had been identified. Yoga is fairly safe and inexpensive; it promotes a generally healthy life-style, and is attractive to many patients; it has thus the potential to help thousands of suffering individuals. However, I fear that things might not be quite as rosy as the authors of this trial seem to believe.

The principle of an RCT essentially is that two groups of patients receive two different therapies and that any difference in outcome after the treatment phase is attributable to the therapy in question. Unfortunately, this is not the case here. One does not need to be an expert in critical thinking to realise that, in the present study, the positive outcome might be unrelated to yoga. For instance, it could be that the unsupervised home exercises were carried out wrongly and thus made the neck pain worse. In this case, the difference between the two treatment groups might not have been caused by yoga at all. A second possibility is that the yoga-group benefited not from the yoga itself but from the attention given to these patients which the exercise-group did not have. A third explanation could be that the yoga teachers were very kind to their patients, and that the patients returned their kindness by pretending to have fewer symptoms or exaggerating their improvements. In my view, the most likely cause of the results seen in this study is a complex mixture of all the options just mentioned.

This study thus teaches us two valuable lessons: 1) whenever possible, RCTs should be designed such that a clear attribution of cause and effect is possible, once the results are on the table; 2) if cause and effect cannot be clearly defined, it is unwise to draw conclusions that are definite and have the potential to mislead the public.

This post has an odd title and addresses an odd subject. I am sure some people reading it will ask themselves “has he finally gone potty; is he a bit xenophobic, chauvinistic, or what?” I can assure you none of the above is the case.

For many years, I have been asked to peer-review Chinese systematic reviews and meta-analyses of TCM-trials submitted to various journals and to the Cochrane Collaboration for publication, and I estimate that around 300 such articles are available today. Initially, I thought they were a valuable contribution to our knowledge, particularly for the many of us who cannot read Chinese. I hoped they might provide reliable information about this huge and potentially important section of the TCM-evidence. After doing this type of work for some time, I became more and more frustrated; now I have decided not to accept this task any longer – not because it is too much trouble, but because I have come to the conclusion that these articles are far less helpful than I had once assumed; in fact, I now fear that they are counter-productive.

In order to better understand what I mean, it might be best to use an example; this recent systematic review seems as good for that purpose as any.

Its Chinese authors “hypothesized that the eligible trials would provide evidence of the effect of Chinese herbs on bone mineral density (BMD) and the therapeutic benefits of Chinese medicine treatment in patients with bone loss.” Randomized controlled trials (RCTs) were thus retrieved for a systematic review from Medline and 8 Chinese databases. The authors identified 12 RCTs involving a total of 1816 patients. The studies compared Chinese herbs with placebo or standard anti-osteoporotic therapy. The pooled data from these RCTs showed that the change of BMD in the spine was more pronounced with Chinese herbs compared to the effects noted with placebo. Also, in the femoral neck, Chinese herbs generated significantly higher increments of BMD compared to placebo. Compared to conventional anti-osteoporotic drugs, Chinese herbs generated greater BMD changes.
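For those unfamiliar with the mechanics, ‘pooling’ in a meta-analysis of this kind usually means weighting each trial’s effect estimate (here a mean difference in BMD change) by the inverse of its variance and taking the weighted average. The toy calculation below uses entirely hypothetical numbers, not the review’s data, and merely shows what such a fixed-effect pooled estimate is.

```python
# Three hypothetical trials: (mean difference in BMD change, standard error)
trials = [(0.04, 0.02), (0.06, 0.03), (0.02, 0.015)]

weights = [1.0 / se ** 2 for _, se in trials]  # inverse-variance weights
pooled_md = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"pooled mean difference: {pooled_md:.3f} (SE {pooled_se:.3f})")
# A fixed-effect model assumes all trials estimate one true effect - an assumption that is
# itself doubtful when the primary studies are heterogeneous and of poor quality.
```

Crucially, pooling cannot repair flaws in the primary studies; it only averages them.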

In their abstract, the part of the paper that most readers access, the authors reached the following conclusions: “Our results demonstrated that Chinese herb significantly increased lumbar spine BMD as compared to the placebo or other standard anti-osteoporotic drugs.” In the article itself, we find this more detailed conclusion: “We conclude that Chinese herbs substantially increased BMD of the lumbar spine compared to placebo or anti-osteoporotic drugs as indicated in the current clinical reports on osteoporosis treatment. Long term of Chinese herbs over 12 months of treatment duration may increase BMD in the hip more effectively. However, further studies are needed to corroborate the positive effect of increasing the duration of Chinese herbs on outcome as the results in this analysis are based on indirect comparisons. To date there are no studies available that compare Chinese herbs, Chinese herbs plus anti-osteoporotic drugs, and anti-osteoporotic drug versus placebo in a factorial design. Consequently, we are unable to draw any conclusions on the possible superiority of Chinese herbs plus anti-osteoporotic drug versus anti-osteoporotic drug or Chinese herb alone in the context of BMD.”

Most readers will feel that this evidence is quite impressive and amazingly solid; they might therefore advocate routinely using Chinese herbs for the common and difficult to treat problem of osteoporosis. The integration of TCM might avoid lots of human suffering, prolong the life of many elderly patients, and save us all a lot of money. Why then am I not at all convinced?

The first thing to notice is the fact that we do not really know which of the ~7000 different Chinese herbs should be used. The article tells us surprisingly little about this crucial point. And even, if we manage to study this question in more depth, we are bound to get thoroughly confused; there are simply too many herbal mixtures and patent medicines to easily identify the most promising candidates.

The second and more important hurdle to making sense of these data is the fact that most of the primary studies originate from inaccessible Chinese journals and were published in Chinese, which, of course, few people in the West can understand. This is entirely our fault, some might argue, but it does mean that we have to believe the authors, take their words at face value, and cannot check the original data. You may think this is fine, after all, the paper has gone through a rigorous peer-review process where it has been thoroughly checked by several top experts in the field. This, however, is a fallacy; like you and me, the peer-reviewers might not read Chinese either! (I don’t, and I reviewed quite a few of these papers; in some instances, I even asked for translations of the originals to do the job properly but this request was understandably turned down.) In all likelihood, the above paper and most similar articles have not been properly peer-reviewed at all.

The third and perhaps most crucial point can only be fully appreciated, if we were able to access and understand the primary studies; it relates to the quality of the original RCTs summarised in such systematic reviews. The abstract of the present paper tells us nothing at all about this issue. In the paper, however, we do find a formal assessment of the studies’ risk of bias which shows that the quality of the included RCTs was poor to very poor. We also find a short but revealing sentence: “The reports of all trials mentioned randomization, but only seven described the method of randomization.” This remark is much more significant than it may seem: we have shown that such studies use such terminology in a rather adventurous way; reviewing about 2000 of these allegedly randomised trials, we found that many Chinese authors call a trial “randomised” even in the absence of a control group (one cannot randomise patients and have no control group)! They seem to like the term because it is fashionable and makes publication of their work easier. We thus have good reason to fear that some/many/most of the studies were not RCTs at all.

The fourth issue that needs mentioning is the fact that very close to 100% of all Chinese TCM-trials report positive findings. This means that either TCM is effective for every indication it is tested for (most unlikely, not least because there are many negative non-Chinese trials of TCM), or there is something very fundamentally wrong with Chinese research into TCM. Over the years, I have had several Chinese co-workers in my team and was invariably impressed by their ability to work hard and efficiently; we often discussed the possible reasons for the extraordinary phenomenon of 0% negative Chinese trials. The most plausible answer they offered was this: it would be most impolite for a Chinese researcher to produce findings which contradict the opinion of his/her peers.
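A back-of-the-envelope calculation shows just how implausible such a uniformly positive record is; the trial count and the assumed proportion of negative results below are my own illustrative assumptions, chosen only to make the order of magnitude clear.

```python
def prob_all_positive(n_trials: int, p_negative: float) -> float:
    """Chance that every one of n independent trials comes out positive,
    if each trial had probability p_negative of yielding a negative result."""
    return (1.0 - p_negative) ** n_trials

# Hypothetical assumptions: 300 published trials, and a 30% chance per trial of a negative
# result if the research were conducted and reported without bias.
print(prob_all_positive(300, 0.30))  # on the order of 1e-47, i.e. essentially impossible by chance
```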

In view of these concerns, can we trust the conclusions of such systematic reviews? I don’t think so – and this is why I have problems with research of this nature. If there are good reasons to doubt their conclusions, these reviews might misinform us systematically, they might not further but hinder progress, and they might send us up the garden path. This could well be in the commercial interest of the Chinese multi-billion dollar TCM-industry, but it would certainly not be in the interest of patients and good health care.

Many people who have arrived at a certain age have knee osteoarthritis and most of them suffer pain, lack of mobility etc. because of it. There are many effective treatments for this condition, of course, but some have serious side-effects, others are tedious to follow and therefore not popular, and none of the existing options totally cure the problem. In many cases, surgery is the best solution; a knee-endoprosthesis can restore everything almost back to normal. But surgery carries risks and will cause considerable pain and rehabilitation-effort. This is perhaps why we are still looking for a treatment that is both effective and risk-free. Personally, I doubt that such a therapy will ever be found, but that does, of course, not stop alternative medicine enthusiasts from claiming that this or that treatment is what the world has been waiting for. The newest kid on this block is leech therapy. Did I just write “newest”? Leeches are not new at all; they are a treatment from the dark ages of medicine – but are they about to experience a come-back?

A recent systematic review and meta-analysis evaluated the effectiveness of medical leech therapy for osteoarthritis of the knee. Five electronic databases were screened to identify randomized controlled trials (RCTs) and non-randomized controlled clinical trials (CCTs) comparing leech therapy to any type of control condition. The main outcome measures were pain, functional impairment, and joint stiffness. Three RCTs and 1 CCT with a total of 237 patients with osteoarthritis were included. Three trials had, according to the review-authors, a low risk of bias. They claimed to have found strong evidence for immediate and short-term pain reduction, immediate improvement in patients’ physical function, and both immediate and long-term improvement in their joint stiffness. Moderate evidence was found for leech therapy’s short-term effects on physical function and long-term effects on pain. Leech therapy was not associated with any serious adverse events. The authors reached the following conclusion: “Given the low number of reported adverse events, leech therapy may be a useful approach in treating this condition. Further high-quality RCTs are required for the conclusive judgment of its effectiveness and safety.”

When, about 35 years ago, I worked as a young doctor in the homeopathic hospital in Munich, I was taught how to apply leeches to my patients. We got the animals from a specialised supplier, put them on the patient’s skin and waited until they had bitten a little hole and started sucking the patient’s blood. Once they were full, they spontaneously fell off and were then disposed of. Many patients were too disgusted with the prospect of leech therapy to agree to this intervention. Those who did were very impressed with the procedure; it occurred to me then that this therapy must be associated with an enormous placebo-effect simply because it is exotic, impressive and a treatment that no patient will ever forget.

The bite of the leech is not normally painful because the leech has a local anaesthetic which it applies in order to suck blood without being noticed. The leech furthermore injects a powerful anticoagulant into its victim’s body which is necessary for preventing the blood from clotting. Through the injection of these pharmacologically active substances, leeches can clearly be therapeutic and they are thus not entirely unknown in conventional medicine; in plastic surgery, for instance, they are sometimes used to generate optimal results for microsurgical wounds. Their anticoagulant has long been identified and is sometimes used therapeutically. The use of leeches for the management of osteoarthritis, however, is not a conventional concept. So, how convincing are the above data? Should we agree with the authors’ conclusion that “leech therapy may be a useful approach in treating this condition”? I think not, and here is why:

1) The collective evidence for efficacy is far from convincing. The few studies which were summarised in this systematic review are mostly those of the research group that also authored the review. Critical thinkers would insist on an independent assessment of those trials. Moreover, none of the trials was patient-blind (which would not be all that difficult to do), and thus the enormous placebo-effect of applying a leech might be the cause of all or most of the observed effect.

2) The authors claim that the treatment is safe. On the basis of just 237 patients treated under highly controlled conditions, this claim has almost no evidential basis.

3) As already mentioned above, there are many treatments which are more effective for improving pain and function than leeches.

4) Leech therapy is time-consuming, relatively expensive and quite impractical as a regular, long-term therapy.

5) In my experience, patients will run a mile to avoid having something as ‘disgusting’ as leeches sucking blood from their body.

6) The animals need to be destroyed after the treatment to avoid infections.

7) As multiple leeches applied regularly will suck a significant volume of blood, the treatment might lead to anaemia and would be contra-indicated in patients with low haemoglobin levels.

8) Like most other treatments for osteoarthritis, leech therapy would not be curative but might just alleviate the symptoms temporarily.

On balance, therefore, I very much doubt that the leech will have a come-back in the realm of osteoarthritis therapy. In fact, I think that, in this particular context, leeches are just a chapter from the dark ages of medicine. Their re-introduction into osteoarthritis care seems like a significant step in the wrong direction.

Reiki is a form of healing which rests on the assumption that some form of “energy” determines our health. In this context, I tend to put energy in inverted commas because it is not the energy a physicist might have in mind. It is a much more mystical entity, a form of vitality that is supposed to be essential for life and keep us going. Nobody has been able to define or quantify this “energy”; it defies scientific measurement and is biologically implausible. These circumstances render Reiki one of the least plausible therapies in the tool kit of alternative medicine.

Reiki-healers (they prefer to be called “masters”) would channel “energy” into their patients, which, in turn, is thought to stimulate the healing process of whatever condition is being treated. In the eyes of those who believe in this sort of thing, Reiki is therefore a true panacea: it can heal everything.

The clinical evidence for or against Reiki is fairly clear – as one would expect after realising how ‘far out’ its underlying concepts are. Numerous studies are available, but most are of very poor quality. Their results tend to suggest that patients experience benefit after having Reiki but they rarely exclude the possibility that this is due to placebo or other non-specific effects. Those that are rigorous show quite clearly that Reiki is a placebo. Our own review therefore concluded that “the evidence is insufficient to suggest that Reiki is an effective treatment for any condition… the value of Reiki remains unproven.”

Since the publication of our article, a number of new investigations have become available. In a brand-new study, for instance, the researchers wanted to explore a Reiki therapy-training program for the care-givers of paediatric patients. A series of Reiki training classes were offered by a Reiki-master. At the completion of the program, interviews were conducted to elicit participants’ feedback regarding its effectiveness.

Seventeen families agreed to participate and 65% of them attended three Reiki training sessions. They reported that Reiki had benefited their child by improving their comfort (76%), providing relaxation (88%) and pain relief (41%). All caregivers thought that becoming an active participant in their child’s care was a major gain. The authors of this investigation conclude that “a hospital-based Reiki training program for caregivers of hospitalized pediatric patients is feasible and can positively impact patients and their families. More rigorous research regarding the benefits of Reiki in the pediatric population is needed.”

Trials like this one abound in the parallel world of “energy” medicine. In my view, such investigations do untold damage: they convince uncritical thinkers that “energy” healing is a rational and effective approach – so much so that even the military is beginning to use it.

The flaws in trials such as the one above are too obvious to mention. Like most studies in this area, this new investigation proves nothing except the fact that poor-quality research will mislead those who believe in its findings.

Some might say, so what? If a patient experiences benefit from a bogus yet harmless therapy, why not? I would strongly disagree with this increasingly popular view. Reiki and similarly bizarre forms of “energy” healing are well capable of causing harm.

Some fanatics might use these placebo-treatments as a true alternative to effective therapies. This would mean that the condition at hand remains untreated which, in a worst case scenario, might even lead to the death of patients. More important, in my view, is an entirely different risk: making people believe in mystic “energies” undermines rationality in a much more general sense. If this happens, the harm to society would be incalculable and extends far beyond health care.

Believe it or not, but my decision – all those years ago – to study medicine was to a significant degree influenced by a somewhat naive desire to, one day, be able to save lives. In my experience, most medical students are motivated by this wish – “to save lives” in this context stands not just for the dramatic act of administering a life-saving treatment to a moribund patient but it is meant as a synonym for helping patients in a much more general sense.

I am not sure whether, as a young clinician, I ever did manage to save many lives. Later, I had a career-change and became a researcher. The general view about researchers seems to be that they are detached from real life, sit in ivory towers and write clever papers which hardly anyone understands and few people will ever read. Researchers therefore cannot save lives, can they?

So, what happened to those laudable ambitions of the young Dr Ernst? Why did I decide to go into research, and why alternative medicine; why did I not conduct research in the more promotional way of so many of my colleagues (my life would have been so much more hassle-free, and I might even have a knighthood by now); why did I feel the need to insist on rigorous assessments and critical thinking, often at high cost? For my many detractors, the answers to these questions seem to be more than obvious: I was corrupted by BIG PHARMA, I have an axe to grind against all things alternative, I have an insatiable desire to be in the lime-light, I defend my profession against the competition from alternative practitioners, etc. However, for me, the issues are a little less obvious (today, I will, for the first time, disclose the bribe I received from BIG PHARMA for criticising alternative medicine: the precise sum was zero £ and the same amount again in $).

As I am retiring from academic life and doing less original research, I do have the time and the inclination to brood over such questions. What precisely motivated my research agenda in alternative medicine, and why did I remain unimpressed by the number of powerful enemies I made pursuing it?

If I am honest – and I know this will sound strange to many, particularly to those who are convinced that I merely rejoice in being alarmist – I am still inspired by this hope to save lives. Sure, the youthful naivety of the early days has all but disappeared, yet the core motivation has remained unchanged.

But how can research into alternative medicine ever save a single life?

For about 20 years, I have regularly been pointing out that the most important research questions in my field relate to the risks of alternative medicine. I have continually published articles about these issues in the medical literature and, more recently, I have also made a conscious effort to step out of the ivory towers of academia and to write for a much wider lay audience (hence also this blog). Important landmarks on this journey include:

– pointing out that some forms of alternative medicine can cause serious complications, including deaths,

– disclosing that alternative diagnostic methods are unreliable and can cause serious problems,

– demonstrating that much of the advice given by alternative practitioners can cause serious harm to the patients who follow it,

– that the advice provided in books or on the Internet can be equally dangerous,

– and that even the most innocent yet ineffective therapy becomes life-threatening, once it is used to replace effective treatments for serious conditions.

Alternative medicine is cleverly, heavily and incessantly promoted as being natural and hence harmless. Several of my previous posts and the ensuing discussions on this blog strongly suggest that some chiropractors deny that their neck manipulations can cause a stroke. Similarly, some homeopaths are convinced that they can do no harm; some acupuncturists insist that their needles are entirely safe; some herbalists think that their medicines are risk-free, etc. All of them tend to agree that the risks are non-existent or so small that they are dwarfed by those of conventional medicine, thus ignoring that the potential risks of any treatment must be seen in relation to its proven benefit.

For 20 years, I have tried my best to dispel these dangerous myths and fallacies. In doing so, I had to fight many tough battles (sometimes even with the people who should have protected me, e.g. my peers at Exeter University), and I have the scars to prove it. If, however, I did save just one life by conducting my research into the risks of alternative medicine and by writing about it, the effort was well worth it.
