

It has been reported that Belgium has just officially recognised homeopathy. The government had already given the green light in July last year, but the Royal Decree has only now become official. This means that, from now on, Belgian doctors, dentists and midwives can only call themselves homeopaths if they have attended recognised courses in homeopathy and are officially certified. While much of the new regulation is as yet unclear (at least to me), it seems that, in future, only doctors, dentists and midwives will be allowed to practise homeopathy, according to one source.

However, the new law also seems to provide that those clinicians with a Bachelor's degree in health care who have already been practising as homeopaths can continue their activities under a temporary measure.

Moreover, official recognition as a homeopath does not automatically imply that the services will be reimbursed by health insurance.

It is said that, in general, homeopaths are happy with the new regulation; they are delighted to have been upgraded in this way and argue that the changes will result in higher quality standards: “This is a very important step and it can only be to the benefit of the patients’ safety. Patients will know whether or not they are dealing with someone who correctly applies homeopathic medicine”, Leon Schepers of the Unio Homeopathica Belgica was quoted as saying.

The delight of homeopaths is in sharp contrast to the dismay of rational thinkers. The Australian National Health and Medical Research Council (NHMRC) recently assessed the effectiveness of homeopathy. The evaluation is both comprehensive and independent; it concluded that “the evidence from research in humans does not show that homeopathy is effective for treating the range of health conditions considered.” In other words, homeopathic remedies are implausible, over-priced placebos.

Granting an official status to homeopaths cannot possibly benefit patients. On the contrary, it will only render health care less effective and charlatans more assertive.

It is not often that we see an article by the great George Vithoulkas, the ‘über-guru’ of homeopathy, in a medical journal. In fact, this paper, which he co-authored with several colleagues, seems to be a rare exception: in his entire career, he appears to have published just 15 Medline-listed articles, most of which are letters to the editor.

According to Wikipedia, Vithoulkas has been described as “the maestro of classical homeopathy” by Robin Shohet; Lyle Morgan says he is “widely considered to be the greatest living homeopathic theorist”; and Scott Shannon calls him a “contemporary master of homeopathy.” Paul Ekins credited Vithoulkas with the revival of the credibility of homeopathy.

In his brand-new paper, Vithoulkas claims to provide evidence for the notion that homeopathy can treat infertility. More specifically, the authors present 5 cases of female infertility treated successfully with the use of homeopathic remedies.

Really?

Yes, really! The American Medical College of Homeopathy informs us that homeopathy has an absolute solution that can augment your probability of conception. Homeopathic treatment of Infertility addresses both physical and emotional imbalances in a person. Homeopathy plays a role in treating Infertility by strengthening the reproductive organs in both men and women, by regulating hormonal balance, menstruation and ovulation in women, by escalating blood flow into the pelvic region, by mounting the thickness of the uterine lining and preventing the uterus from contracting hence abating chances of a miscarriage, and by increasing quality and quantity of sperm count in men. It can also be advantageous in reducing anxiety so that the embryo implantation can take place in a favourable environment. Homoeopathy is a system of medicine directed at assisting the body’s own healing process.

Imagine: the 5 women in Vithoulkas’ ‘study’ wanted to have children; they consulted homeopaths because they did not get pregnant in a timely fashion. The homeopaths prescribed individualised homeopathy and treated them for prolonged periods of time. Eventually, BINGO!, all 5 women got pregnant.

What a hoot!

It beggars belief that this result is being credited to the administration of homeopathic remedies. Do the authors not know that, in many cases, it can take many months until a pregnancy occurs? Do they not think that the many women they treated unsuccessfully for the same problem should raise some doubts about homeopathy? Do they really believe that their remedies had any causal relationship to the 5 pregnancies?

Vithoulkas was a recipient of the Right Livelihood Award in 1996. I hope they did not give it to him in recognition of his scientific achievements!

A recent meta-analysis evaluated the efficacy of acupuncture for treatment of irritable bowel syndrome (IBS) and arrived at bizarrely positive conclusions.

The authors state that they searched 4 electronic databases for double-blind, placebo-controlled trials investigating the efficacy of acupuncture in the management of IBS. Studies were screened for inclusion based on randomization, controls, and measurable outcomes reported.

Six RCTs were included in the meta-analysis, and five of the articles were of high quality. The pooled relative risk for clinical improvement with acupuncture was 1.75 (95% CI: 1.24-2.46, P = 0.001). Using two different statistical approaches, the authors confirmed the efficacy of acupuncture for treating IBS and concluded that acupuncture exhibits clinically and statistically significant control of IBS symptoms.
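For readers who like to check such numbers, here is a minimal sketch (my own, not the authors’ code) of how a pooled relative risk, its 95% confidence interval and the quoted p-value hang together on the log scale; the figures used are simply those reported above:

```python
import math

# figures as reported in the meta-analysis
rr, ci_low, ci_high = 1.75, 1.24, 2.46

log_rr = math.log(rr)
# back-calculate the standard error from the 95% CI on the log scale
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
z = log_rr / se                        # Wald z-statistic
p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value from the normal distribution

print(f"z = {z:.2f}, p = {p:.4f}")     # approximately z = 3.2, p = 0.001
```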

As IBS is a common and often difficult to treat condition, this would be great news! But is it true? We do not need to look far to find the embarrassing mistakes and – dare I say it? – lies on which this result was constructed.

The largest RCT included in this meta-analysis was neither placebo-controlled nor double blind; it was a pragmatic trial with the infamous ‘A+B versus B’ design. Here is the key part of its methods section: 116 patients were offered 10 weekly individualised acupuncture sessions plus usual care, 117 patients continued with usual care alone. Intriguingly, this was the ONLY one of the 6 RCTs with a significantly positive result!
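To see why this design practically guarantees a ‘positive’ finding, consider the following hypothetical simulation (my own illustration, not data from the trial): both groups improve with usual care, the add-on contributes nothing but a non-specific placebo/attention effect, and yet ‘A+B’ reliably beats ‘B’ because there is no sham control to subtract that effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 115  # roughly the size of each arm in the trial discussed above

# hypothetical symptom improvement under usual care alone (B)
usual_care = rng.normal(loc=10, scale=8, size=n)
# A+B: identical specific benefit, plus a purely non-specific bonus from added attention
acupuncture_plus = rng.normal(loc=10 + 4, scale=8, size=n)

t, p = stats.ttest_ind(acupuncture_plus, usual_care)
print(f"'A+B' versus 'B': p = {p:.3f}")  # almost always 'significant', although A is inert
```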

The second largest study (as well as all the other trials) showed that acupuncture was no better than sham treatments. Here is the key quote from this trial: there was no statistically significant difference between acupuncture and sham acupuncture.

So, let me re-write the conclusions of this meta-analysis without spin, lies or hype. The results seem to indicate that:

  1. currently there are several RCTs testing whether acupuncture is an effective therapy for IBS,
  2. all the RCTs that adequately control for placebo-effects show no effectiveness of acupuncture,
  3. the only RCT that yields a positive result does not make any attempt to control for placebo-effects,
  4. this suggests that acupuncture is a placebo,
  5. it also demonstrates how misleading studies with the infamous ‘A+B versus B’ design can be,
  6. finally, this meta-analysis seems to be a prime example of scientific misconduct with the aim of creating a positive result out of data which are, in fact, negative.

Homeopathy is a deeply puzzling subject for many observers. Perhaps it gets a little easier to understand if we consider the three main perspectives on homeopathy. For the purpose of this post, I take the liberty of exaggerating, almost caricaturing, these perspectives in order to contrast them as clearly as possible.

THE SCEPTICS’ PERSPECTIVE

Sceptics take a brief look at the two main assumptions which underpin homeopathy (like cures like and potentisation/dilution/water memory) and are henceforth convinced that homeopathic remedies are pure placebos. Homeopathy flies in the face of science; if homeopathy is right, several laws of nature must be wrong, they love to point out. As this is most unlikely, they reject homeopathy outright, usually even without looking in any detail at what homeopaths consider to be evidence in support of their trade. If sceptics are forced to consider a positive study of homeopathy, they know before they have seen it that its results are wrong – due to an error caused by chance, faulty study design or fabrication. The sceptics’ conclusion on homeopathy: it is a placebo-therapy, no doubt about it; and further investment into research is a waste of scarce resources which must be stopped.

THE BELIEVERS’ PERSPECTIVE

The believers in homeopathy know from experience that homeopathy works. They therefore feel that they have no choice but to reject almost every word the sceptics might tell them. They cling on to the gospel of Hahnemann and elaborate on the modern but vague theories that might support the theoretical assumptions of homeopathy. They point to positive clinical trials and outcome studies, to 200 years of experience, and to the endorsement of homeopathy by VIPs. When confronted with the weaknesses of their arguments, they find even weaker ones, such as ‘much of conventional medicine is also not based on good evidence, and the mechanism of action of many mainstream drugs is also not fully understood’. Alternatively, they employ the phoniest argument of them all: ‘even if it works via a placebo effect, it still helps patients and therefore is a useful therapy’. When even this fails, they tend to resort to ad hominem attacks against their opponents. The believers’ conclusion on homeopathy: it is unquestionably a valuable type of therapy regardless of what anyone else might say; research is merely needed to confirm their belief.

THE PERSPECTIVE OF THE ADVOCATES OF EVIDENCE-BASED MEDICINE (EBM)

The perspective of EBM-advocates is pragmatic; they simply say: “show me the evidence!” If the majority of the most reliable clinical trials of homeopathic remedies (or anything else) suggests an effect beyond placebo, they conclude that they are effective. If that is not the case, they doubt the effectiveness. If the evidence is highly contradictory or incomplete, they are likely to advocate more rigorous research. Advocates of EBM are usually not all that concerned by the lack of plausibility of the interventions they evaluate. If it works, it works, they think – and if a plausible mechanism is currently not available, it might be found in due course. The advocates of EBM have no preconceived ideas about homeopathy. Their conclusion on homeopathy goes exactly where the available best evidence leads them.

COMMENT

The arguments and counter-arguments originating from the various perspectives would surely continue for another 200 years – unless, of course, two of the three perspectives merge and arrive at the same or very similar conclusions. And this is precisely what has now happened. As I have pointed out in a recent post, the most thorough and independent evaluation of homeopathy according to rigorous EBM-standards has concluded that “the evidence from research in humans does not show that homeopathy is effective for treating the range of health conditions considered.”

In other words, two of the three principal perspectives have now drawn conclusions which are virtually identical: there is a consensus between the EBM-advocates and the sceptics. This isolates the believers and renders their position untenable. If we furthermore consider that the believers are heavily burdened with obvious conflicts of interest, while the other two groups are by definition much more independent and objective, it appears more and more as though homeopathy is fast degenerating into a cult characterised by the unquestioning commitment and unconditional submission of its members, who are too heavily brain-washed to realise that their fervour has isolated them from the rational sections of society. And a cult is hardly what we need in health care, I should think.

It seems to me, therefore, that these intriguing developments might finally end the error that homeopathy has represented for nearly 200 years.

Progress at last?

The news that the use of Traditional Chinese Medicine (TCM) positively affects cancer survival might come as a surprise to many readers of this blog; but this is exactly what recent research has suggested. As it was published in one of the leading cancer journals, we should be able to trust the findings – or shouldn’t we?

The authors of this new study used the Taiwan National Health Insurance Research Database to conduct a retrospective population-based cohort study of patients with advanced breast cancer between 2001 and 2010. The patients were separated into TCM users and non-users, and the association between the use of TCM and patient survival was determined.

A total of 729 patients with advanced breast cancer receiving taxanes were included. Their mean age was 52.0 years; 115 patients were TCM users (15.8%) and 614 patients were TCM non-users. The mean follow-up was 2.8 years, with 277 deaths reported during the 10-year period. Multivariate analysis demonstrated that, compared with non-users, the use of TCM was associated with a significantly decreased risk of all-cause mortality (adjusted hazard ratio [HR] 0.55 [95% confidence interval 0.33-0.90] for TCM use of 30-180 days; adjusted HR 0.46 [95% confidence interval 0.27-0.78] for TCM use of > 180 days). Among the frequently used TCMs, those found to be most effective (lowest HRs) in reducing mortality were Bai Hua She She Cao, Ban Zhi Lian, and Huang Qi.

The authors of this paper are initially quite cautious and use adequate terminology when they write that TCM-use was associated with increased survival. But then they seem to get carried away by their enthusiasm and even name the TCM drugs which they thought were most effective in prolonging cancer survival. It is obvious that such causal extrapolations are well out of line with the evidence they produced (oh, how I wish that journal editors would finally wake up to such misleading language!).

Of course, it is possible that some TCM drugs are effective cancer cures – but the data presented here certainly do NOT demonstrate anything like such an effect. And before such a far-reaching claim can be made, much more and much better research would be necessary.

The thing is, there are many alternative and plausible explanations for the observed phenomenon. For instance, it is conceivable that users and non-users of TCM in this study differed in many ways other than their medication, e.g. severity of cancer, adherence to conventional therapies, life-style, etc. And even if the researchers used clever statistical methods to control for some of these variables, residual confounding can never be ruled out in observational studies of this kind.
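A toy simulation (entirely hypothetical, not the study’s data) illustrates how such confounding can conjure up a ‘protective’ effect for an inert treatment: if healthier patients are simply more likely to use TCM, the crude comparison flatters the remedy even though mortality is driven by disease severity alone.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

severity = rng.uniform(0, 1, n)                      # unmeasured disease severity
uses_tcm = rng.random(n) < (0.6 - 0.5 * severity)    # fitter patients use TCM more often
dies = rng.random(n) < (0.05 + 0.6 * severity)       # mortality depends on severity only

risk_ratio = dies[uses_tcm].mean() / dies[~uses_tcm].mean()
print(f"crude risk ratio for TCM users: {risk_ratio:.2f}")  # well below 1.0, despite zero effect
```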

Correlation is not causation, they say. Neglect of this elementary axiom makes for very poor science – in fact, it produces dangerous pseudoscience which could, as in the present case, lead a cancer patient straight up the garden path towards a premature death.

The most widely used definition of EVIDENCE-BASED MEDICINE (EBM) is probably this one: The judicious use of the best current available scientific research in making decisions about the care of patients. Evidence-based medicine (EBM) is intended to integrate clinical expertise with the research evidence and patient values.

David Sackett’s own definition is a little different: Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.

Even though the principles of EBM are now widely accepted, there are those who point out that EBM has its limitations. The major criticisms of EBM relate to five themes: reliance on empiricism, narrow definition of evidence, lack of evidence of efficacy, limited usefulness for individual patients, and threats to the autonomy of the doctor/patient relationship.

Advocates of alternative medicine have been particularly vocal in pointing out that EBM is not really applicable to their area. However, as their arguments were less than convincing, a new strategy for dealing with EBM seemed necessary. Some proponents of alternative medicine are therefore now trying to hoist EBM-advocates by their own petard.

In doing so they refer directly to the definitions of EBM and argue that EBM has to fulfil at least three criteria: 1) external best evidence, 2) clinical expertise and 3) patient values or preferences.

Using this argument, they strive to demonstrate that almost everything in alternative medicine is evidence-based. Let me explain this with two deliberately extreme examples.

CRYSTAL THERAPY FOR CURING CANCER

There is, of course, not a jot of evidence for this. But there may well be the opinion, held by crystal therapists, that some cancer patients respond to their treatment. Thus the ‘best’ available evidence is clearly positive, they argue. Certainly the clinical expertise of these crystal therapists is positive. So, if a cancer patient wants crystal therapy, all three preconditions are fulfilled and CRYSTAL THERAPY IS ENTIRELY EVIDENCE-BASED.

CHIROPRACTIC FOR ASTHMA

Even the most optimistic chiropractor would find it hard to deny that the best evidence does not demonstrate the effectiveness of chiropractic for asthma. But never mind, the clinical expertise of the chiropractor may well be positive. If the patient has a preference for chiropractic, at least two of the three conditions are fulfilled. Therefore – on balance – chiropractic for asthma is [fairly] evidence-based.

The ‘HOISTING ON THE PETARD OF EBM’-method is thus a perfect technique for turning the principles of EBM upside down. Its application leads us straight back into the dark ages of medicine when anything was legitimate as long as some charlatan could convince his patients to endure his quackery and pay for it – if necessary with his life.

Do you think that chiropractic is effective for asthma? I don’t – in fact, I know it isn’t because, in 2009, I published a systematic review of the available RCTs which showed quite clearly that the best evidence suggested chiropractic was ineffective for that condition.

But this is clearly not true, might some enthusiasts reply. What is more, they can even refer to a 2010 systematic review which indicates that chiropractic is effective; its conclusions speak a very clear language: …the eight retrieved studies indicated that chiropractic care showed improvements in subjective measures and, to a lesser degree objective measures… How on earth can this be?

I would not be surprised if chiropractors claimed the discrepancy is due to the fact that Prof Ernst is biased. Others might point out that the more recent review includes more studies and thus ought to be more reliable. The newer review does, in fact, include about twice as many studies as mine.

How come? Were plenty of new RCTs published during the 12 months that lay between the two publications? The answer is NO. But why then the discrepant conclusions?

The answer is much less puzzling than you might think. The ‘alchemists of alternative medicine’ regularly succeed in smuggling non-evidence into such reviews in order to beautify the overall picture and confirm their wishful thinking. The case of chiropractic for asthma does by no means stand alone, but it is a classic example of how we are being misled by charlatans.

Anyone who reads the full text of the two reviews mentioned above will find that they do, in fact, include exactly the same number of RCTs. The reason why they arrive at different conclusions is simple: the enthusiasts’ review added NON-EVIDENCE to the existing RCTs. To be precise, the authors included one case series, one case study, one survey, two randomized controlled trials (RCTs), one randomized patient- and observer-blinded cross-over trial, one single-blind cross-over study, and one self-reported impairment questionnaire.

Now, there is nothing wrong with case reports, case series, or surveys – except THEY TELL US NOTHING ABOUT EFFECTIVENESS. I would bet my last shirt that the authors know all of that; yet they make fairly firm and positive conclusions about effectiveness. As the RCT-results collectively happen to be negative, they even pretend that case reports etc. outweigh the findings of RCTs.

And why do they do that? Because they are interested in the truth, or because they don’t mind using alchemy in order to mislead us? Your guess is as good as mine.

Systematic reviews are widely considered to be the most reliable type of evidence for judging the effectiveness of therapeutic interventions. Such reviews should be focused on a well-defined research question and identify, critically appraise and synthesize the totality of the high quality research evidence relevant to that question. Often it is possible to pool the data from individual studies and thus generate a new overall numerical estimate from the existing evidence; in this case, we speak of a meta-analysis, a sub-category of systematic reviews.

One strength of systematic reviews is that they minimise selection and random biases by considering the totality of the evidence of a pre-defined nature and quality. A crucial precondition, however, is that the quality of the primary studies is critically assessed. If this is done well, the researchers will usually be able to determine how robust any given result is, and whether high quality trials generate findings similar to those of lower quality. If there is a discrepancy between findings from rigorous and flimsy studies, it is obviously advisable to trust the former and discard the latter.

And this is where systematic reviews of alternative treatments can run into difficulties. For any given research question in this area we usually have a paucity of primary studies. Equally important is the fact that many of the available trials tend to be of low quality. Consequently, there often is a lack of high quality studies, and this makes it all the more important to include a robust critical evaluation of the primary data. Not doing so would render the overall result of the review less than reliable – in fact, such a paper would not qualify as a systematic review at all; it would be a pseudo-systematic review, i.e. a review which pretends to be systematic but, in fact, is not. Such papers are a menace in that they can seriously mislead us, particularly if we are not familiar with the essential requirements for a reliable review.

This is precisely where some promoters of bogus treatments seem to see their opportunity of making their unproven therapy look as though it was evidence-based. Pseudo-systematic reviews can be manipulated to yield a desired outcome. In my last post, I have shown that this can be done by including treatments which are effective so that an ineffective therapy appears effective (“chiropractic is so much more than just spinal manipulation”). An even simpler method is to exclude some of the studies that contradict one’s belief from the review. Obviously, the review would then not comprise the totality of the available evidence. But, unless the reader bothers to do a considerable amount of research, he/she would be highly unlikely to notice. All one needs to do is to smuggle the paper past the peer-review process – hardly a difficult task, given the plethora of alternative medicine journals that bend over backwards to publish any rubbish as long as it promotes alternative medicine.

Alternatively (or in addition) one can save oneself a lot of work and omit the process of critically evaluating the primary studies. This method is increasingly popular in alternative medicine. It is a fool-proof method of generating a false-positive overall result. As poor quality trials have a tendency to deliver false-positive results, it is obvious that a predominance of flimsy studies must create a false-positive result.
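Here is a small sketch (my own, with entirely hypothetical numbers) of what happens when the appraisal step is skipped: pool a handful of small, biased trials of an inert therapy with one large, rigorous trial, and an uncritical, unweighted synthesis looks ‘positive’ even though the only trustworthy study is null.

```python
import numpy as np

rng = np.random.default_rng(3)

def observed_effect(n, bias):
    """Mean observed effect of one trial of an inert therapy (true effect = 0)."""
    return rng.normal(loc=bias, scale=1.0, size=n).mean()

flimsy = [observed_effect(n=20, bias=0.4) for _ in range(6)]  # small trials, each biased upwards
rigorous = observed_effect(n=400, bias=0.0)                   # one well-conducted, unbiased trial

print("uncritical average of all trials:", round(np.mean(flimsy + [rigorous]), 2))  # clearly > 0
print("rigorous trial alone:", round(rigorous, 2))                                  # close to 0
```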

A particularly notorious example of a pseudo-systematic review that used this as well as most of the other tricks for misleading the reader is the famous ‘systematic’ review by Bronfort et al. It was commissioned by the UK GENERAL CHIROPRACTIC COUNCIL after the chiropractic profession got into trouble and was keen to defend those bogus treatments disclosed by Simon Singh. Bronfort and his colleagues thus swiftly published (of course, in a chiro-journal) an all-encompassing review attempting to show that, at least for some conditions, chiropractic was effective. Its lengthy conclusions seemed encouraging: Spinal manipulation/mobilization is effective in adults for: acute, subacute, and chronic low back pain; migraine and cervicogenic headache; cervicogenic dizziness; manipulation/mobilization is effective for several extremity joint conditions; and thoracic manipulation/mobilization is effective for acute/subacute neck pain. The evidence is inconclusive for cervical manipulation/mobilization alone for neck pain of any duration, and for manipulation/mobilization for mid back pain, sciatica, tension-type headache, coccydynia, temporomandibular joint disorders, fibromyalgia, premenstrual syndrome, and pneumonia in older adults. Spinal manipulation is not effective for asthma and dysmenorrhea when compared to sham manipulation, or for Stage 1 hypertension when added to an antihypertensive diet. In children, the evidence is inconclusive regarding the effectiveness for otitis media and enuresis, and it is not effective for infantile colic and asthma when compared to sham manipulation. Massage is effective in adults for chronic low back pain and chronic neck pain. The evidence is inconclusive for knee osteoarthritis, fibromyalgia, myofascial pain syndrome, migraine headache, and premenstrual syndrome. In children, the evidence is inconclusive for asthma and infantile colic. 

Chiropractors across the world cite this paper as evidence that chiropractic has at least some evidence base. What they omit to tell us (perhaps because they do not appreciate it themselves) is the fact that Bronfort et al

  • failed to formulate a focussed research question,
  • invented their own categories of inconclusive findings,
  • included all sorts of studies which had nothing to do with chiropractic,
  • and did not assess the quality of the primary studies included in their review.

If, for a certain condition, three trials were included, for instance, two of which were positive but of poor quality and one was negative but of good quality, the authors would conclude that, overall, there is sound evidence.

Bronfort himself is, of course, more than likely to know all that (he has learnt his trade with an excellent Dutch research team and published several high quality reviews) – but his readers mostly don’t. And for chiropractors, this ‘systematic’ review is now considered to be the most reliable evidence in their field.

The efficacy or effectiveness of medical interventions is, of course, best tested in clinical trials. The principle of a clinical trial is fairly simple: typically, a group of patients is divided (preferably at random) into two subgroups, one (the ‘verum’ group) is treated with the experimental treatment and the other (the ‘control’ group) with another option (often a placebo), and the eventual outcomes of the two groups are compared. If done well, such studies are able to exclude biases and confounding factors such that their findings allow causal inference. In other words, they can tell us whether an outcome was caused by the intervention per se or by some other factor such as the natural history of the disease, regression towards the mean etc.

A clinical trial is a research tool for testing hypotheses; strictly speaking, it tests the ‘null-hypothesis’: “the experimental treatment generates the same outcomes as the treatment of the control group”. If the trial shows no difference between the outcomes of the two groups, the null-hypothesis is confirmed. In this case, we commonly speak of a negative result. If the experimental treatment was better than the control treatment, the null-hypothesis is rejected, and we commonly speak of a positive result. In other words, clinical trials can only generate positive or negative results, because the null-hypothesis must either be confirmed or rejected – there are no grey tones between the black of a negative and the white of a positive study.

For enthusiasts of alternative medicine, this can create a dilemma, particularly if there are lots of published studies with negative results. In this case, the totality of the available trial evidence is negative which means the treatment in question cannot be characterised as effective. It goes without saying that such an overall conclusion rubs the proponents of that therapy the wrong way. Consequently, they might look for ways to avoid this scenario.

One fairly obvious way of achieving this aim is simply to re-categorise the results. What if we invented a new category? What if we called some of the negative studies by a different name? What about NON-CONCLUSIVE?

That would be brilliant, wouldn’t it? We might end up with a simple statistic where the majority of the evidence is, after all, positive. And this, of course, would give the impression that the ineffective treatment in question is effective!

How exactly do we do this? We continue to call positive studies POSITIVE; we then call studies where the experimental treatment generated worse results than the control treatment (usually a placebo) NEGATIVE; and finally we call those studies where the experimental treatment created outcomes which were not different from placebo NON-CONCLUSIVE.
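To make the relabelling explicit, here is a minimal sketch with entirely hypothetical trial results (effect direction and p-value), tallied once in the conventional way and once with the ‘non-conclusive’ trick:

```python
from collections import Counter

# hypothetical trials: (direction, p_value); direction +1 favours the treatment, -1 the control
trials = [(+1, 0.01), (+1, 0.03), (+1, 0.04), (-1, 0.02), (+1, 0.40),
          (+1, 0.70), (-1, 0.60), (+1, 0.25), (-1, 0.35)]

def conventional(direction, p):
    # the null hypothesis is rejected only by a significant benefit; everything else is negative
    return "positive" if direction > 0 and p < 0.05 else "negative"

def non_conclusive_trick(direction, p):
    # only a significant harm counts as negative; merely failing to beat placebo is renamed
    if direction > 0 and p < 0.05:
        return "positive"
    if direction < 0 and p < 0.05:
        return "negative"
    return "non-conclusive"

print(Counter(conventional(d, p) for d, p in trials))         # positive: 3, negative: 6
print(Counter(non_conclusive_trick(d, p) for d, p in trials))  # positive: 3, negative: 1, non-conclusive: 5
```

The same trials, the same data – but the second tally lets proponents claim that ‘positive’ studies far outnumber ‘negative’ ones.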

In the realm of alternative medicine, this ‘non-conclusive result’ method has recently become incredibly popular. Take homeopathy, for instance. The Faculty of Homeopathy proudly claim the following about clinical trials of homeopathy: Up to the end of 2011, there have been 164 peer-reviewed papers reporting randomised controlled trials (RCTs) in homeopathy. This represents research in 89 different medical conditions. Of those 164 RCT papers, 71 (43%) were positive, 9 (6%) negative and 80 (49%) non-conclusive.

This misleading nonsense was, of course, warmly received by homeopaths. The British Homeopathic Association, like many other organisations and individuals with an axe to grind, lapped up the message and promptly repeated it: The body of evidence that exists shows that much more investigation is required – 43% of all the randomised controlled trials carried out have been positive, 6% negative and 49% inconclusive.

Let’s be clear what has happened here: the true percentage figures seem to show that 43% of studies (mostly of poor quality) suggest a positive result for homeopathy, while 57% of them (on average the ones of better quality) were negative. In other words, the majority of this evidence is negative. If we conducted a proper systematic review of this body of evidence, we would, of course, have to account for the quality of each study, and in this case we would have to conclude that homeopathy is not supported by sound evidence of effectiveness.

The little trick of applying the ‘NON-CONCLUSIVE’ method has thus turned this overall result upside down: black has become white! No wonder that it is so popular with proponents of all sorts of bogus treatments.

Whenever a new trial of an alternative intervention emerges which fails to confirm the wishful thinking of the proponents of that therapy, the world of alternative medicine is in turmoil. What can be done about yet another piece of unfavourable evidence? The easiest solution would be to ignore it, of course – and this is precisely what is often tried. But this tactic usually proves to be unsatisfactory; it does not neutralise the new evidence, and each time someone brings it up, one has to stick one’s head back into the sand. Rather than denying its existence, it would be preferable to have a tool which invalidates the study in question once and for all.

The ‘fatal flaw’ solution is simpler than anticipated! Alternative treatments are ‘very special’, and this notion must be emphasised, blown up beyond all proportions and used cleverly to discredit studies with unfavourable outcomes: the trick is simply to claim that studies with unfavourable results have a ‘fatal flaw’ in the way the alternative treatment was applied. As only the experts in the ‘very special’ treatment in question are able to judge the adequacy of their therapy, nobody is allowed to doubt their verdict.

Take acupuncture, for instance; it is an ancient ‘art’ which only the very best will ever master – at least that is what we are being told. So, all the proponents need to do in order to invalidate a trial, is read the methods section of the paper in full detail and state ‘ex cathedra’ that the way acupuncture was done in this particular study is completely ridiculous. The wrong points were stimulated, or the right points were stimulated but not long enough [or too long], or the needling was too deep [or too shallow], or the type of stimulus employed was not as recommended by TCM experts, or the contra-indications were not observed etc. etc.

As nobody can tell correct acupuncture from incorrect acupuncture, this ‘fatal flaw’ method is fairly fool-proof. It is also ever so simple: acupuncture-fans do not even need to study the paper hard to find the ‘fatal flaw’; they only have to look at the result of a study – if it was favourable, the treatment was obviously done perfectly by highly experienced experts; if it was unfavourable, the therapists clearly must have been morons who picked up their acupuncture skills in a single weekend course. The reasons for this judgement can always be found or, if all else fails, invented.

And the end-result of the ‘fatal flaw’ method is most satisfactory; what is more, it can be applied to all alternative therapies – homeopathy, herbal medicine, reflexology, Reiki healing, colonic irrigation…the method works for all of them! What is even more, the ‘fatal flaw’ method is adaptable to other aspects of scientific investigations such that it fits every conceivable circumstance.

An article documenting the ‘fatal flaw’ has to be published, of course – but this is no problem! There are dozens of dodgy alternative medicine journals which are only too keen to print even the most far-fetched nonsense as long as it promotes alternative medicine in some way. Once this paper is published, the proponents of the therapy in question have a comfortable default position to rely on each time someone cites the unfavourable study: “WHAT NOT THAT STUDY AGAIN! THE TREATMENT HAS BEEN SHOWN TO BE ALL WRONG. NOBODY CAN EXPECT GOOD RESULTS FROM A THERAPY THAT WAS NOT CORRECTLY ADMINISTERED. IF YOU DON’T HAVE BETTER STUDIES TO SUPPORT YOUR ARGUMENTS, YOU BETTER SHUT UP.”

There might, in fact, be better studies – but chances are that the ‘other side’ has already documented a ‘fatal flaw’ in them too.
