Monthly Archives: February 2014

The purpose of this paper by Canadian chiropractors was to expand practitioners’ knowledge on areas of liability when treating low back pain patients. Six cases where chiropractors in Canada were sued for allegedly causing or aggravating lumbar disc herniation after spinal manipulative therapy were retrieved using the CANLII database.

The patients were 4 men and 2 women with an average age of 37 years. Trial courts’ decisions were rendered between 2000 and 2011. The following conclusions from Canadian courts were noted:

  1. informed consent is an on-going process that cannot be entirely delegated to office personnel;
  2. when the patient’s history reveals risk factors for lumbar disc herniation, the chiropractor has a duty to rule out disc pathology as an aetiology for the symptoms presented by the patient before beginning anything but conservative palliative treatment;
  3. lumbar disc herniation may be triggered by spinal manipulative therapy on vertebral segments distant from the involved herniated disc such as the thoracic spine.

The fact that this article was published by chiropractors seems like a step in the right direction. Disc herniations after chiropractic have been reported regularly and for many years. It is not often that I hear chiropractors admit that their spinal manipulations carry serious risks.

And it is not often that chiropractors consider the issue of informed consent. On the one hand, one can hardly blame them for it: if they ever did take informed consent seriously and informed their patients fully about the evidence and risks of their treatments, as well as those of other therapeutic options, they would probably be out of business forever. On the other hand, chiropractors should not be allowed to continue excluding themselves from the generally accepted ethical standards of modern health care.

The news that the use of Traditional Chinese Medicine (TCM) positively affects cancer survival might come as a surprise to many readers of this blog; but this is exactly what recent research has suggested. As it was published in one of the leading cancer journals, we should be able to trust the findings – or shouldn’t we?

The authors of this new study used the Taiwan National Health Insurance Research Database to conduct a retrospective population-based cohort study of patients with advanced breast cancer between 2001 and 2010. The patients were separated into TCM users and non-users, and the association between the use of TCM and patient survival was determined.

A total of 729 patients with advanced breast cancer receiving taxanes were included. Their mean age was 52.0 years; 115 patients were TCM users (15.8%) and 614 patients were TCM non-users. The mean follow-up was 2.8 years, with 277 deaths reported to occur during the 10-year period. Multivariate analysis demonstrated that, compared with non-users, the use of TCM was associated with a significantly decreased risk of all-cause mortality (adjusted hazard ratio [HR], 0.55 [95% confidence interval, 0.33-0.90] for TCM use of 30-180 days; adjusted HR, 0.46 [95% confidence interval, 0.27-0.78] for TCM use of > 180 days). Among the frequently used TCMs, those found to be most effective (lowest HRs) in reducing mortality were Bai Hua She She Cao, Ban Zhi Lian, and Huang Qi.

The authors of this paper are initially quite cautious and use adequate terminology when they write that TCM use was associated with increased survival. But then they seem to get carried away by their enthusiasm and even name the TCM drugs which they thought were most effective in prolonging cancer survival. Such causal extrapolations are well out of line with the evidence they produced (oh, how I wish that journal editors would finally wake up to such misleading language!).

Of course, it is possible that some TCM drugs are effective cancer cures – but the data presented here certainly do NOT demonstrate anything like such an effect. And before such a far-reaching claim is made, much more and much better research would be necessary.

The thing is, there are many alternative and plausible explanations for the observed phenomenon. For instance, it is conceivable that users and non-users of TCM in this study differed in many ways other than their medication, e.g. severity of cancer, adherence to conventional therapies, life-style, etc. And even if the researchers used clever statistical methods to control for some of these variables, residual confounding can never be ruled out in such observational studies.
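To see how confounding can manufacture a survival ‘benefit’, consider a deliberately simple toy calculation (all numbers hypothetical, not taken from the study): a therapy with zero effect within each severity stratum still looks protective in the crude figures if the less severely ill patients are more likely to use it.

```python
# Hypothetical two-stratum population: a therapy with ZERO effect on
# mortality within each stratum can still look 'protective' overall
# if healthier patients are over-represented among its users.

def mortality(deaths, n):
    return deaths / n

# Stratum 1: less severe cancer, 20% mortality regardless of therapy
users1, nonusers1 = 150, 350          # users over-represented here
deaths_users1 = 0.20 * users1         # 30 deaths
deaths_non1 = 0.20 * nonusers1        # 70 deaths

# Stratum 2: more severe cancer, 60% mortality regardless of therapy
users2, nonusers2 = 25, 475           # users under-represented here
deaths_users2 = 0.60 * users2         # 15 deaths
deaths_non2 = 0.60 * nonusers2        # 285 deaths

crude_users = mortality(deaths_users1 + deaths_users2, users1 + users2)
crude_non = mortality(deaths_non1 + deaths_non2, nonusers1 + nonusers2)

print(round(crude_users, 3))  # 0.257
print(round(crude_non, 3))    # 0.43 -> the useless therapy looks protective
```

Within each stratum the mortality of users and non-users is identical; only the crude comparison differs. Adjustment can remove the confounders one has measured, but never those one has not.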

Correlation is not causation, they say. Neglect of this elementary axiom makes for very poor science – in fact, it produces dangerous pseudoscience which could, like in the present case, lead a cancer patient straight up the garden path towards a premature death.

A meta-analysis compared the effectiveness of spinal manipulation therapies (SMT), medical management, physical therapies, and exercise for acute and chronic low back pain. Studies were chosen based on inclusion in prior evidence syntheses. Effect sizes were converted to standardized mean effect sizes and probabilities of recovery. Nested model comparisons isolated non-specific from treatment effects. Aggregate data were tested for evidential support as compared to shams.

The results suggest that, of the 84% of acute pain variance explained, 81% was attributable to non-specific factors and only 3% to treatment. No treatment was better than sham. Most acute results were within the 95% confidence bands of that predicted by natural history alone. For chronic pain, of the 98% of variance explained, 66% was non-specific, while treatments influenced 32% of outcomes. Chronic pain treatments also fitted within the 95% confidence bands predicted by natural history. The evidential support for treating chronic back pain, as compared to sham groups, was weak; chronic pain appeared to respond to SMT, while whole systems of chiropractic management did not.

The authors of this intriguing paper conclude: Meta-analyses can extract comparative effectiveness information from existing literature. The relatively small portion of outcomes attributable to treatment explains why past research results fail to converge on stable estimates. The probability of treatment superiority between treatment arms was equivalent to that expected by random selection. Treatments serve to motivate, reassure, and calibrate patient expectations – features that might reduce medicalization and augment self-care. Exercise with authoritative support is an effective strategy for acute and chronic low back pain.

This essentially indicates that none of these treatments for low back pain are convincingly effective. In turn this means we might as well stop using them. Alternatively, we could opt for the therapy that carries the least risks and cost. As the authors point out, this treatment is exercise.

The aim of this survey was to investigate the use of alternative medicines (AMs) by Scottish healthcare professionals involved in the care of pregnant women, and to identify predictors of usage.

135 professionals (midwives, obstetricians, anaesthetists) involved in the care of pregnant women completed a questionnaire. A response rate of 87% was achieved. A third of respondents (32.5%) had recommended (prescribed, referred, or advised) the use of AMs to pregnant women. The most frequently recommended AM modalities were: vitamins and minerals (excluding folic acid) (55%); massage (53%); homeopathy (50%); acupuncture (32%); yoga (32%); reflexology (26%); aromatherapy (24%); and herbal medicine (21%). Univariate analysis identified that those who recommended AMs were significantly more likely to be midwives who had been in post for more than 5 years, had received training in AMs, were interested in AMs, and were themselves users of AMs. However, the only variable retained in bivariate logistic regression was ‘personal use of AM’ (odds ratio of 8.2).

The authors draw the following conclusion: Despite the lack of safety or efficacy data, a wide variety of AM therapies are recommended to pregnant women by approximately a third of healthcare professionals, with those recommending the use of AMs being eight times more likely to be personal AM users.

There are thousands of websites which recommend unproven treatments to pregnant women. This one may stand for the rest:

Chamomile, lemon balm, peppermint, and raspberry leaf are also effective in treating morning sickness. Other helpful herbs for pregnancy discomforts include:

  • dandelion leaf for water retention
  • lavender, mint, and slippery elm for heartburn
  • butcher’s broom, hawthorn, and yarrow, applied externally to varicose veins
  • garlic for high blood pressure
  • witch hazel, applied externally to haemorrhoids.

Our research has shown that midwives are particularly keen to recommend and often sell AMs to their patients. In fact, it would be difficult to find a midwife in the UK or elsewhere who is not involved in this sort of thing. Similarly, we have demonstrated that the advice given by herbalists is frequently not based on evidence and prone to harm the unborn child, the mother or both. Finally, we have pointed out that many of the AMs in question are by no means free of risks.

The most serious risk, I think, is that advice to use AMs for health problems during pregnancy might delay adequate care for potentially serious conditions. For instance, the site quoted above advocates garlic for a pregnant woman who develops high blood pressure during pregnancy and dandelion for water retention. These two abnormalities happen to be early signs that a pregnant woman might be starting to develop pre-eclampsia. Treating such serious conditions with a few unproven herbal remedies is dangerous, and recommendations to do so are irresponsible.

I think the new survey discussed above suggests a worrying degree of sympathy amongst conventional healthcare professionals for unproven treatments. This is likely to render healthcare less effective and less safe and is not in the interest of patients.

When we talk about conflicts of interest, we usually think of financial concerns. But conflicts of interests also extend to non-financial matters, such as strong beliefs. These are important in alternative medicine – I would even go as far as to claim that they dominate this field.

My detractors have often claimed that this is where my problem lies. They are convinced that, in 1993, I came into the job as PROFESSOR OF COMPLEMENTARY MEDICINE with an axe to grind; I was determined or perhaps even paid to show that all alternative medicine is utter hocus-pocus, they say. The truth is that, if anything, I was on the side of alternative medicine – and I can prove it. Using the example of homeopathy, I have dedicated an entire article to demonstrate that the myth is untrue – I was not closed-minded or out to ditch homeopathy (or any other form of alternative medicine for that matter).

What then could constitute my ‘conflict of interest’? Surely, he was bribed, I hear them say. Just look at the funds he took from industry. Some of those people have even gone to the trouble of running freedom of information requests to obtain the precise figures for my research funding. Subsequently they triumphantly publish them and say: Look, he got £x from this company and £y from that firm. And they are, of course, correct: I did receive support from commercially interested parties on several occasions. But what my detractors forget is that these were all pro-alternative-medicine institutions. More importantly, I always made very sure that no strings were attached to any funds we accepted.

Our core funds came from ‘The Laing Foundation’ which endowed Exeter University with £1.5 million. This was done with the understanding that Exeter would put the same amount again into the kitty (which they never did). Anyone who can do simple arithmetic can tell that, to sustain up to 20 staff for almost 20 years, £1.5 million is not nearly enough. There must have been other sources. Who exactly gave money?

Despite utterly useless fundraising by the University, we did manage to obtain additional funds. I managed to receive support in the form of multiple research fellowships, for instance. It came from various sources; for instance, manufacturers of herbal medicines, Boots, the Pilkington Family Trust (yes, the glass manufacturers).

A hugely helpful contributor to our work was the sizable number (I estimate around 30) of visitors from abroad who came on their own money simply because they wanted to learn from and with us. They stayed between 3 months and 4 years, and importantly contributed to our research, knowledge and fun.

In addition, we soon devised ways to generate our own money. For instance, we started an annual conference for researchers in our field which ran for 14 successful years. As we managed everything on a shoestring and did all the organisation ourselves, we made a tidy profit each year which, of course, went straight back into our research. We also published several books which generated some revenue for the same purpose.

And then we received research funding for specific projects, for instance, from THE PRINCE OF WALES’ FOUNDATION FOR INTEGRATED HEALTH, a Japanese organisation supporting Johrei Healing, THE WELLCOME TRUST, the NHS, and even a homeopathic company.

So, do I have a conflict of interest? Did I take money from anyone who might have wanted to ditch alternative medicine? I don’t think so! And if I tell you that, when I came to Exeter in 1993, I donated ~£120 000 of my own funds towards the research of my unit, even my detractors might, for once, be embarrassed to have thought otherwise.

The most widely used definition of EVIDENCE-BASED MEDICINE (EBM) is probably this one: The judicious use of the best current available scientific research in making decisions about the care of patients. Evidence-based medicine (EBM) is intended to integrate clinical expertise with the research evidence and patient values.

David Sackett’s own definition is a little different: Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.

Even though the principles of EBM are now widely accepted, there are those who point out that EBM has its limitations. The major criticisms of EBM relate to five themes: reliance on empiricism, narrow definition of evidence, lack of evidence of efficacy, limited usefulness for individual patients, and threats to the autonomy of the doctor/patient relationship.

Advocates of alternative medicine have been particularly vocal in pointing out that EBM is not really applicable to their area. However, as their arguments were less than convincing, a new strategy for dealing with EBM seemed necessary. Some proponents of alternative medicine therefore are now trying to hoist EBM-advocates by their own petard.

In doing so they refer directly to the definitions of EBM and argue that EBM has to fulfil at least three criteria: 1) external best evidence, 2) clinical expertise and 3) patient values or preferences.

Using this argument, they strive to demonstrate that almost everything in alternative medicine is evidence-based. Let me explain this with two deliberately extreme examples.


Imagine, first, a crystal therapist who claims that her treatment can cure cancer. There is, of course, not a jot of evidence for this. But there may well be the opinion held by crystal therapists that some cancer patients respond to their treatment. Thus the ‘best’ available evidence is clearly positive, they argue. Certainly the clinical expertise of these crystal therapists is positive. So, if a cancer patient wants crystal therapy, all three preconditions are fulfilled and CRYSTAL THERAPY IS ENTIRELY EVIDENCE-BASED.


Even the most optimistic chiropractor would find it hard to deny that the best evidence does not demonstrate the effectiveness of chiropractic for asthma. But never mind, the clinical expertise of the chiropractor may well be positive. If the patient has a preference for chiropractic, at least two of the three conditions are fulfilled. Therefore – on balance – chiropractic for asthma is [fairly] evidence-based.

The ‘HOISTING ON THE PETARD OF EBM’-method is thus a perfect technique for turning the principles of EBM upside down. Its application leads us straight back into the dark ages of medicine when anything was legitimate as long as some charlatan could convince his patients to endure his quackery and pay for it – if necessary with his life.

Do you think that chiropractic is effective for asthma? I don’t – in fact, I know it isn’t because, in 2009, I published a systematic review of the available RCTs which showed quite clearly that the best evidence suggested chiropractic was ineffective for that condition.

But this is clearly not true, some enthusiasts might reply. What is more, they can even refer to a 2010 systematic review which indicates that chiropractic is effective; its conclusions could hardly be clearer: …the eight retrieved studies indicated that chiropractic care showed improvements in subjective measures and, to a lesser degree objective measures… How on earth can this be?

I would not be surprised if chiropractors claimed the discrepancy is due to the fact that Prof Ernst is biased. Others might point out that the more recent review includes more studies and thus ought to be more reliable. The newer review does, in fact, include about twice the number of studies as mine.

How come? Were plenty of new RCTs published during the 12 months that lay between the two publications? The answer is NO. But why then the discrepant conclusions?

The answer is much less puzzling than you might think. The ‘alchemists of alternative medicine’ regularly succeed in smuggling non-evidence into such reviews in order to beautify the overall picture and confirm their wishful thinking. The case of chiropractic for asthma does by no means stand alone, but it is a classic example of how we are being misled by charlatans.

Anyone who reads the full text of the two reviews mentioned above will find that they do, in fact, include exactly the same number of RCTs. The reason why they arrive at different conclusions is simple: the enthusiasts’ review added NON-EVIDENCE to the existing RCTs. To be precise, the authors included one case series, one case study, one survey, two randomized controlled trials (RCTs), one randomized patient- and observer-blinded cross-over trial, one single-blind cross study design, and one self-reported impairment questionnaire.

Now, there is nothing wrong with case reports, case series, or surveys – except THEY TELL US NOTHING ABOUT EFFECTIVENESS. I would bet my last shirt that the authors know all of that; yet they make fairly firm and positive conclusions about effectiveness. As the RCT-results collectively happen to be negative, they even pretend that case reports etc. outweigh the findings of RCTs.

And why do they do that? Because they are interested in the truth, or because they don’t mind using alchemy in order to mislead us? Your guess is as good as mine.

Systematic reviews are widely considered to be the most reliable type of evidence for judging the effectiveness of therapeutic interventions. Such reviews should be focused on a well-defined research question and identify, critically appraise and synthesize the totality of the high quality research evidence relevant to that question. Often it is possible to pool the data from individual studies and thus create a new numerical result of the existing evidence; in this case, we speak of a meta-analysis, a sub-category of systematic reviews.

One strength of systematic reviews is that they minimise selection and random biases by considering the totality of the evidence of a pre-defined nature and quality. A crucial precondition, however, is that the quality of the primary studies is critically assessed. If this is done well, the researchers will usually be able to determine how robust any given result is, and whether high-quality trials generate similar findings to those of lower quality. If there is a discrepancy between findings from rigorous and flimsy studies, it is obviously advisable to trust the former and discard the latter.

And this is where systematic reviews of alternative treatments can run into difficulties. For any given research question in this area we usually have a paucity of primary studies. Equally important is the fact that many of the available trials tend to be of low quality. Consequently, there often is a lack of high quality studies, and this makes it all the more important to include a robust critical evaluation of the primary data. Not doing so would render the overall result of the review less than reliable – in fact, such a paper would not qualify as a systematic review at all; it would be a pseudo-systematic review, i.e. a review which pretends to be systematic but, in fact, is not. Such papers are a menace in that they can seriously mislead us, particularly if we are not familiar with the essential requirements for a reliable review.

This is precisely where some promoters of bogus treatments seem to see their opportunity of making their unproven therapy look as though it was evidence-based. Pseudo-systematic reviews can be manipulated to yield a desired outcome. In my last post, I have shown that this can be done by including treatments which are effective so that an ineffective therapy appears effective (“chiropractic is so much more than just spinal manipulation”). An even simpler method is to exclude some of the studies that contradict one’s belief from the review. Obviously, the review would then not comprise the totality of the available evidence. But, unless the reader bothers to do a considerable amount of research, he/she would be highly unlikely to notice. All one needs to do is to smuggle the paper past the peer-review process – hardly a difficult task, given the plethora of alternative medicine journals that bend over backwards to publish any rubbish as long as it promotes alternative medicine.

Alternatively (or in addition) one can save oneself a lot of work and omit the process of critically evaluating the primary studies. This method is increasingly popular in alternative medicine. It is a fool-proof method of generating a false-positive overall result. As poor quality trials have a tendency to deliver false-positive results, it is obvious that a predominance of flimsy studies must create a false-positive result.

A particularly notorious example of a pseudo-systematic review that used this as well as most of the other tricks for misleading the reader is the famous ‘systematic’ review by Bronfort et al. It was commissioned by the UK GENERAL CHIROPRACTIC COUNCIL after the chiropractic profession got into trouble and was keen to defend those bogus treatments disclosed by Simon Singh. Bronfort and his colleagues thus swiftly published (of course, in a chiro-journal) an all-encompassing review attempting to show that, at least for some conditions, chiropractic was effective. Its lengthy conclusions seemed encouraging: Spinal manipulation/mobilization is effective in adults for: acute, subacute, and chronic low back pain; migraine and cervicogenic headache; cervicogenic dizziness; manipulation/mobilization is effective for several extremity joint conditions; and thoracic manipulation/mobilization is effective for acute/subacute neck pain. The evidence is inconclusive for cervical manipulation/mobilization alone for neck pain of any duration, and for manipulation/mobilization for mid back pain, sciatica, tension-type headache, coccydynia, temporomandibular joint disorders, fibromyalgia, premenstrual syndrome, and pneumonia in older adults. Spinal manipulation is not effective for asthma and dysmenorrhea when compared to sham manipulation, or for Stage 1 hypertension when added to an antihypertensive diet. In children, the evidence is inconclusive regarding the effectiveness for otitis media and enuresis, and it is not effective for infantile colic and asthma when compared to sham manipulation. Massage is effective in adults for chronic low back pain and chronic neck pain. The evidence is inconclusive for knee osteoarthritis, fibromyalgia, myofascial pain syndrome, migraine headache, and premenstrual syndrome. In children, the evidence is inconclusive for asthma and infantile colic. 

Chiropractors across the world cite this paper as evidence that chiropractic has at least some evidence base. What they omit to tell us (perhaps because they do not appreciate it themselves) is the fact that Bronfort et al

  • failed to formulate a focussed research question,
  • invented their own categories of inconclusive findings,
  • included all sorts of studies which had nothing to do with chiropractic,
  • and did not assess the quality of the primary studies included in their review.

If, for a certain condition, three trials were included, for instance, two of which were positive but of poor quality and one was negative but of good quality, the authors would conclude that, overall, there is sound evidence.
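The point can be made quantitative with a minimal fixed-effect meta-analysis sketch (all effect sizes and standard errors hypothetical, not taken from any real review): two small, poor-quality ‘positive’ trials are easily outweighed by one large, rigorous null trial once studies are weighted by their precision, which is exactly what vote-counting ignores.

```python
import math

# Three hypothetical trials (standardised mean differences): two small,
# imprecise positive trials and one large, rigorous null trial.
# Vote-counting says 2:1 'positive'; inverse-variance pooling disagrees.
effects = [0.6, 0.5, 0.0]
ses     = [0.35, 0.40, 0.10]   # small trials -> large standard errors

weights = [1 / se ** 2 for se in ses]          # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = 1 / math.sqrt(sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(round(pooled, 2))    # 0.07 -> close to no effect at all
print(ci[0] < 0 < ci[1])   # True -> the pooled confidence interval includes zero
```

The large null trial carries roughly ninety per cent of the total weight, so the pooled estimate sits near zero and its confidence interval spans it. Counting heads among the trials, by contrast, would declare the therapy effective.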

Bronfort himself is, of course, more than likely to know all that (he has learnt his trade with an excellent Dutch research team and published several high quality reviews) – but his readers mostly don’t. And for chiropractors, this ‘systematic’ review is now considered to be the most reliable evidence in their field.

Imagine a type of therapeutic intervention that has been shown to be useless. Let’s take surgery, for instance. Imagine that research had established with a high degree of certainty that surgical operations are ineffective. Imagine further that surgeons, once they can no longer hide this evidence, argue that good surgeons do much more than just operate: surgeons wash their hands which effectively reduces the risk of infections, they prescribe medications, they recommend rehabilitative and preventative treatments, etc. All of these measures are demonstrably effective in their own right, never mind the actual surgery. Therefore, surgeons could argue that the things surgeons do are demonstrably effective and helpful, even though surgery itself would be useless in this imagined scenario.

I am, of course, not for a minute claiming that surgery is rubbish, but I have used this rather extreme example to expose the flawed argument that is often used in alternative medicine for white-washing bogus treatments. The notion is that, because a particular alternative health care profession employs not just one but multiple forms of treatments, it should not be judged by the effectiveness of its signature-therapy, particularly if it happens to be ineffective.

This type of logic seems nowhere more prevalent than in the realm of chiropractic. Its founding father, D.D. Palmer, dreamt up the bizarre notion that all human disease is caused by ‘subluxations’ which require spinal manipulation for returning the ill person to good health. Consequently, most chiropractors see spinal manipulation as a panacea and use this type of treatment for almost 100% of their patients. In other words, spinal manipulation is as much the hallmark-therapy for chiropractic as surgery is for surgeons.

When someone points out that, for this or that condition, spinal manipulation is not of proven effectiveness or even of proven ineffectiveness, chiropractors have in recent years taken to answering as outlined above; they might say: WE DO ALL SORTS OF OTHER THINGS TOO, YOU KNOW. FOR INSTANCE, WE EMPLOY OTHER MANUAL TECHNIQUES, GIVE LIFE-STYLE ADVICE AND USE NO END OF PHYSIOTHERAPEUTIC INTERVENTIONS. YOU CANNOT SAY THAT THESE APPROACHES ARE BOGUS. THEREFORE CHIROPRACTIC IS FAR FROM USELESS.

To increase the chances of convincing us with this notion, they have, in recent months, produced dozens of ‘systematic reviews’ which allegedly prove their point. Here are some of the conclusions from these articles which usually get published in chiro-journals:

The use of manual techniques on children with respiratory diseases seems to be beneficial.

The majority of the included trials appeared to indicate that the parents of infants receiving manipulative therapies reported fewer hours crying per day than parents whose infants did not, based on contemporaneous crying diaries, and this difference was statistically significant.

A limited amount of research has been published that supports a role for manual therapy in improving postural stability and balance.

…a trial of chiropractic care for sufferers of autism is prudent and warranted.

This study found a level of B or fair evidence for manual manipulative therapy of the shoulder, shoulder girdle, and/or the FKC combined with multimodal or exercise therapy for rotator cuff injuries/disorders, disease, or dysfunction.

Chiropractic care is an alternative approach to the care of the child with colic.

There is a baseline of evidence that suggests chiropractic care improves cervical range of motion (cROM) and pain in the management of whiplash-associated disorders.

Results of the eight retrieved studies indicated that chiropractic care showed improvements [for asthma].

Personally, I find this kind of ‘logic’ irritatingly illogical. If we accept it as valid, the boundaries between sense and nonsense disappear, and our tools of differentiating between quackery and ethical health care become blunt.

The next step could then even be to claim that a homeopathic hospital must be a good thing because some of its clinicians occasionally also prescribe non-homeopathic treatments.

The efficacy or effectiveness of medical interventions is, of course, best tested in clinical trials. The principle of a clinical trial is fairly simple: typically, a group of patients is divided (preferably at random) into two subgroups, one (the ‘verum’ group) is treated with the experimental treatment and the other (the ‘control’ group) with another option (often a placebo), and the eventual outcomes of the two groups are compared. If done well, such studies are able to exclude biases and confounding factors such that their findings allow causal inference. In other words, they can tell us whether an outcome was caused by the intervention per se or by some other factor such as the natural history of the disease, regression towards the mean etc.

A clinical trial is a research tool for testing hypotheses; strictly speaking, it tests the ‘null-hypothesis’: “the experimental treatment generates the same outcomes as the treatment of the control group”. If the trial shows no difference between the outcomes of the two groups, the null-hypothesis is confirmed. In this case, we commonly speak of a negative result. If the experimental treatment was better than the control treatment, the null-hypothesis is rejected, and we commonly speak of a positive result. In other words, clinical trials can only generate positive or negative results, because the null-hypothesis must either be confirmed or rejected – there are no grey tones between the black of a negative and the white of a positive study.
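The positive/negative dichotomy can be sketched with a two-proportion z-test on hypothetical recovery counts (the normal approximation is crude but adequate for illustration): if the p-value exceeds the conventional 0.05 threshold, the null hypothesis is not rejected and the trial counts as negative.

```python
import math

def two_prop_p(success_a, n_a, success_b, n_b):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-sided tail probability of the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Hypothetical trial: 30/50 patients recover on the experimental
# treatment, 28/50 on placebo.
p = two_prop_p(30, 50, 28, 50)
print(p > 0.05)   # True -> the null hypothesis stands: a negative trial
```

There is no third verdict hiding in this arithmetic: either the difference clears the significance threshold or it does not, which is precisely why a separate ‘non-conclusive’ bin has to be invented rather than derived.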

For enthusiasts of alternative medicine, this can create a dilemma, particularly if there are lots of published studies with negative results. In this case, the totality of the available trial evidence is negative which means the treatment in question cannot be characterised as effective. It goes without saying that such an overall conclusion rubs the proponents of that therapy the wrong way. Consequently, they might look for ways to avoid this scenario.

One fairly obvious way of achieving this aim is to simply re-categorise the results. What, if we invented a new category? What, if we called some of the negative studies by a different name? What about NON-CONCLUSIVE?

That would be brilliant, wouldn’t it? We might end up with a simple statistic where the majority of the evidence is, after all, positive. And this, of course, would give the impression that the ineffective treatment in question is effective!

How exactly do we do this? We continue to call positive studies POSITIVE; we then call studies where the experimental treatment generated worse results than the control treatment (usually a placebo) NEGATIVE; and finally we call those studies where the experimental treatment created outcomes which were no different from placebo NON-CONCLUSIVE.

In the realm of alternative medicine, this ‘non-conclusive result’ method has recently become incredibly popular. Take homeopathy, for instance. The Faculty of Homeopathy proudly claims the following about clinical trials of homeopathy: Up to the end of 2011, there have been 164 peer-reviewed papers reporting randomised controlled trials (RCTs) in homeopathy. This represents research in 89 different medical conditions. Of those 164 RCT papers, 71 (43%) were positive, 9 (6%) negative and 80 (49%) non-conclusive.

This misleading nonsense was, of course, warmly received by homeopaths. The British Homeopathic Association, like many other organisations and individuals with an axe to grind lapped up the message and promptly repeated it: The body of evidence that exists shows that much more investigation is required – 43% of all the randomised controlled trials carried out have been positive, 6% negative and 49% inconclusive.

Let’s be clear what has happened here: the true percentage figures seem to show that 43% of studies (mostly of poor quality) suggest a positive result for homeopathy, while 57% of them (on average the ones of better quality) were negative. In other words, the majority of this evidence is negative. If we conducted a proper systematic review of this body of evidence, we would, of course, have to account for the quality of each study, and in this case we would have to conclude that homeopathy is not supported by sound evidence of effectiveness.

The little trick of applying the ‘NON-CONCLUSIVE’ method has thus turned this overall result upside down: black has become white! No wonder that it is so popular with proponents of all sorts of bogus treatments.
