

After the usually challenging acute therapy is behind them, cancer patients are often desperate to find a therapy that might improve their wellbeing. At that stage they may suffer from a wide range of symptoms which can seriously limit their quality of life. Any treatment that can be shown to restore them to their normal mental and physical health would be more than welcome.

Most homeopaths believe that their remedies can do just that, particularly if they are tailored not to the disease but to the individual patient. Sadly, the evidence that this might be so is almost non-existent. Now, a new trial has become available; it was conducted by Jennifer Poole, a chartered psychologist and registered homeopath, and researcher and teacher at Nemeton Research Foundation, Romsey.

The aim of this study was to explore the benefits of a three-month course of individualised homeopathy (IH) for survivors of cancer.  Fifteen survivors of any type of cancer were recruited from a walk-in cancer support centre. Conventional treatment had to have taken place within the last three years. Patients saw a homeopath who prescribed IH. After three months of IH, they scored their total, physical and emotional wellbeing using the Functional Assessment of Chronic Illness Therapy for Cancer (FACIT-G). The results show that 11 of the 14 women had statistically positive outcomes for emotional, physical and total wellbeing.
The conclusions of the author are clear: “Findings support previous research, suggesting CAM or IH could be beneficial for survivors of cancer.”

This article was published in the NURSING TIMES, and the editor added a footnote informing us that “This article has been double-blind peer-reviewed”.

I find this surprising. A decent peer-review should have picked up the point that a study of that nature cannot possibly produce results which tell us anything about the benefits of IH. The reasons for this are fairly obvious:

  • there was no control group,
  • therefore the observed outcomes are most likely due to 1) natural history, 2) placebo effects, 3) regression towards the mean and 4) social desirability; it seems most unlikely that IH had anything to do with the result (a quick simulation after this list illustrates point 3),
  • the sample size was tiny,
  • the patients elected to receive IH, which means that they had high expectations of a positive outcome,
  • only subjective outcome measures were used,
  • there is no good previous research suggesting that IH benefits cancer patients.
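
To make the regression-towards-the-mean point concrete, here is a minimal simulation. The numbers are entirely invented (nothing in it comes from the actual trial); it merely shows what happens when patients enrol at a low point and are then simply re-measured:

```python
import random

random.seed(1)

# Entirely invented numbers: each patient has a stable 'true' wellbeing
# score, but any single measurement fluctuates around it.
N = 14
true_scores = [random.gauss(60, 10) for _ in range(N)]

def measure(true_score):
    # A single questionnaire reading = true score + day-to-day noise.
    return true_score + random.gauss(0, 8)

# Patients typically enrol when they feel unusually bad: re-sample each
# baseline until the reading falls below the patient's own true score.
baselines = []
for t in true_scores:
    m = measure(t)
    while m >= t:
        m = measure(t)
    baselines.append(m)

# Three months later, with a treatment that does NOTHING, measure again.
follow_up = [measure(t) for t in true_scores]

improved = sum(f > b for f, b in zip(follow_up, baselines))
print(f"{improved} of {N} patients 'improved' despite a zero-effect treatment")
```

Run it and most patients ‘improve’ without any treatment effect at all – which is exactly what an uncontrolled before-and-after study cannot distinguish from a genuine benefit.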

On the last point, a recent systematic review showed that the studies available on this topic had mixed results: some showed a significantly greater improvement in quality of life (QOL) in the intervention group than in the control group, while others found no significant difference between groups. The authors concluded that there were significant gaps in the evidence base for the effectiveness of CAM on QOL in cancer survivors, and that further work in this field needs to adopt more rigorous methodology to help support cancer survivors to actively embrace self-management and effective CAMs, without recommending inappropriate interventions which are of no proven benefit.

All this new study might tell us is that IH did not seem to harm these patients – but even this finding is not certain; to be sure, we would need to include many more patients. Any conclusions about the effectiveness of IH are totally unwarranted. But are there ANY generalizable conclusions that can be drawn from this article? Yes, I can think of a few:
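
To put a number on that uncertainty: by the statisticians’ ‘rule of three’, if no harm is observed in n patients, the 95% upper confidence bound for the true harm rate is roughly 3/n. With the 14 patients analysed here, that bound is 3/14 ≈ 21% – a trial this small cannot even rule out that the treatment harms one patient in five.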

  • Some cancer patients can be persuaded to try the most implausible treatments.
  • Some journals will publish any rubbish.
  • Some peer-reviewers fail to spot the most obvious defects.
  • Some ‘researchers’ haven’t got a clue.
  • The attempts to mislead us about the value of homeopathy are incessant.

One might argue that this whole story is too trivial for words; who cares what dodgy science is published in the NURSING TIMES? But I think it does matter – not so much because of this one silly article itself, but because similarly poor research with similarly ridiculous conclusions is currently published almost every day. Subsequently it is presented to the public as meaningful science heralding important advances in medicine. It matters because this constant drip of bogus research eventually influences public opinion and determines far-reaching health care decisions.

Many proponents of alternative medicine seem somewhat suspicious of research; they have obviously understood that it might not produce the positive result they had hoped for; after all, good research tests hypotheses and does not necessarily confirm beliefs. At the same time, they are often tempted to conduct research: this is perceived as being good for the image and, provided the findings are positive, also good for business.

Therefore they seem to be tirelessly looking for a study design that cannot ‘fail’, i.e. one that avoids the risk of negative results but looks respectable enough to be accepted by ‘the establishment’. For these enthusiasts, I have good news: here is the study design that cannot fail.

It is perhaps best outlined as a concrete example; for reasons that will become clear very shortly, I have chosen reflexology as a treatment of diabetic neuropathy, but you can, of course, replace both the treatment and the condition as it suits your needs. Here is the outline:

  • recruit a group of patients suffering from diabetic neuropathy – say 58, that will do nicely,
  • randomly allocate them to two groups,
  • the experimental group receives regular treatments by a motivated reflexologist,
  • the controls get no such therapy,
  • both groups also receive conventional treatments for their neuropathy,
  • the follow-up is 6 months,
  • the following outcome measures are used: pain reduction, glycemic control, nerve conductivity, and thermal and vibration sensitivities,
  • the results show that the reflexology group experiences greater improvements in all outcome measures than the control subjects,
  • your conclusion: “This study exhibited the efficient utility of reflexology therapy integrated with conventional medicines in managing diabetic neuropathy.”

Mission accomplished!

This method is fool-proof, trust me: I have seen it tested often enough, and never has it generated disappointment. It cannot fail because it follows the notorious A+B versus B design (I know, I have mentioned this several times before on this blog, but it is really important, I think): both patient groups receive the essential mainstream treatment, and the experimental group receives a useless but pleasant alternative treatment in addition. The alternative treatment involves touch, time, compassion, empathy, expectations, etc. All of these elements will inevitably have positive effects, and they can even increase the patients’ compliance with the conventional treatment that is being applied in parallel. Thus all outcome measures will be better in the experimental group than in the control group.
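
The inevitability of this outcome is easy to demonstrate with a toy simulation. The effect sizes below are entirely invented – an add-on ‘A’ with zero specific effect but a modest non-specific one – yet any similar choice gives the same verdict:

```python
import random

random.seed(42)

# Invented effect sizes, for illustration only.
N = 29                      # patients per group (58 in total, as above)
effect_of_B = 10.0          # mean improvement from conventional care alone
nonspecific_A = 4.0         # touch/time/attention/expectation effect of A
noise = 5.0                 # individual variability

B_alone = [random.gauss(effect_of_B, noise) for _ in range(N)]
A_plus_B = [random.gauss(effect_of_B + nonspecific_A, noise)
            for _ in range(N)]

mean = lambda xs: sum(xs) / len(xs)
print(f"B alone: mean improvement {mean(B_alone):.1f}")
print(f"A + B:   mean improvement {mean(A_plus_B):.1f}")
# A+B 'wins' although A has zero specific effect - by design, not by magic.
```

The A+B arm comes out ahead every time, because the comparison never isolates the specific effect of A from the non-specific extras that come with it.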

The overall effect is pure magic: even an utterly ineffective treatment will appear as being effective – the perfect method for producing false-positive results.

And now we hopefully all understand why this study design is so very popular in alternative medicine. It looks solid – after all, it’s an RCT!!! – and it thus convinces even mildly critical experts that the useless treatment is worthwhile. Consequently, the useless treatment will become accepted as ‘evidence-based’, will be used more widely and perhaps even be reimbursed from the public purse. Business will be thriving!

And why did I employ reflexology for diabetic neuropathy? Is that example not far-fetched? Not a bit! I used it because it describes precisely a study that has just been published. Of course, I could also have taken the chiropractic trial from my last post, or dozens of other studies following the A+B versus B design – it is so brilliantly suited for misleading us all.

On this blog, I have often pointed out how dismally poor most trials of alternative therapies are, particularly those in the realm of chiropractic. A brand-new study seems to prove my point.

The aim of this trial was to determine whether spinal manipulative therapy (SMT) plus home exercise and advice (HEA) compared with HEA alone reduces leg pain in the short and long term in adults with sub-acute and chronic back-related leg-pain (BRLP).

Patients aged 21 years or older with BRLP for at least 4 weeks were randomised to receive 12 weeks of SMT plus HEA or HEA alone. Eleven chiropractors with a minimum of 5 years of practice experience delivered SMT in the SMT plus HEA group. The primary outcome was subjective BRLP at 12 and 52 weeks. Secondary outcomes were self-reported low back pain, disability, global improvement, satisfaction, medication use, and general health status at 12 and 52 weeks.

Of the 192 enrolled patients, 191 (99%) provided follow-up data at 12 weeks and 179 (93%) at 52 weeks. For leg pain, SMT plus HEA had a clinically important advantage over HEA (difference, 10 percentage points [95% CI, 2 to 19]; P = 0.008) at 12 weeks but not at 52 weeks (difference, 7 percentage points [CI, -2 to 15]; P = 0.146). Nearly all secondary outcomes improved more with SMT plus HEA at 12 weeks, but only global improvement, satisfaction, and medication use had sustained improvements at 52 weeks. No serious treatment-related adverse events or deaths occurred.

The authors conclude that, for patients with BRLP, SMT plus HEA was more effective than HEA alone after 12 weeks, but the benefit was sustained only for some secondary outcomes at 52 weeks.

This is yet another pragmatic trial following the notorious and increasingly popular A+B versus B design. As pointed out repeatedly on this blog, this study design can hardly ever generate a negative result (A+B is always more than B, unless A has a negative value [which even placebos don’t have]). Thus it is not a true test of the experimental treatment but merely an exercise in creating a positive finding for a potentially useless treatment. Had the investigators used any mildly pleasant placebo instead of SMT, the result would have been the same. In this way, they could create results showing that getting a £10 cheque or meeting with pleasant company every other day, together with HEA, is more effective than HEA alone. The conclusion that the SMT, the cheque or the company have specific effects is as implicit in this article as it is potentially wrong.

The authors claim that their study was limited because patient-blinding was not possible. This is not entirely true, I think; it was limited mostly because it failed to point out that the observed outcomes could be and most likely are due to a whole range of factors which are not directly related to SMT and, most crucially, because its write-up, particularly the conclusions, wrongly implied cause and effect between SMT and the outcome. A more accurate conclusion could have been as follows: SMT plus HEA was more effective than HEA alone after 12 weeks, but the benefit was sustained only for some secondary outcomes at 52 weeks. Because the trial design did not control for non-specific effects, the observed outcomes are consistent with SMT being an impressive placebo.

No such critical thought can be found in the article; on the contrary, the authors claim in their discussion section that the current trial adds to the much-needed evidence base about SMT for subacute and chronic BRLP. Such phraseology is designed to mislead decision makers and get SMT accepted as a treatment of conditions for which it is not necessarily useful.

Research where the result is known before the study has even started (studies with an A+B versus B design) is not just useless, it is, in my view, unethical: it fails to answer a real question and is merely a waste of resources as well as an abuse of patients’ willingness to participate in clinical trials. But the authors of this new trial are in good and numerous company: in the realm of alternative medicine, such pseudo-research is currently being published almost on a daily basis. What is relatively new, however, is that even some of the top journals are beginning to fall victim to this incessant stream of nonsense.

Many experts are critical of the current craze for dietary supplements. Now a publication suggests that their use could save billions in health care costs.

This article examines evidence suggesting that the use of selected dietary supplements can reduce overall disease treatment-related hospital utilization costs associated with coronary heart disease (CHD) in the United States among those at a high risk of experiencing a costly, disease-related event.

Results show that:

  • the potential avoided hospital utilization costs related to the use of omega-3 supplements at preventive intake levels among the target population could average $2.06 billion per year from 2013 to 2020; after accounting for the cost of the omega-3 supplements, the potential net savings in avoided CHD-related hospital utilization costs would be more than $3.88 billion cumulatively from 2013 to 2020.
  • the use of folic acid, B6, and B12 among the target population at preventive intake levels could yield avoided CHD-related hospital utilization cost savings averaging $1.52 billion per year from 2013 to 2020; after accounting for the cost of these vitamins, the potential net savings in avoided CHD-related health care costs would be more than $5.23 billion cumulatively over the same period.
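
The arithmetic behind such headline figures deserves a closer look. Here is my own back-of-envelope reconstruction of the omega-3 numbers above – an assumption-laden sketch, not the authors’ actual model – which also shows how fragile the ‘net savings’ are:

```python
# Back-of-envelope reconstruction of the omega-3 figures above (my own
# reading of the reported numbers, NOT the authors' actual model).
years = 8                                 # 2013-2020 inclusive
avoided_per_year = 2.06                   # $bn avoided costs, as reported
net_savings = 3.88                        # $bn cumulative net, as reported

gross_avoided = years * avoided_per_year          # ~ $16.5 bn
implied_cost = gross_avoided - net_savings        # ~ $12.6 bn on supplements

print(f"gross avoided costs:     ${gross_avoided:.2f} bn")
print(f"implied supplement cost: ${implied_cost:.2f} bn")

# Sensitivity check: modest over-optimism in the assumed risk reduction
# wipes out the headline savings entirely.
for overestimate in (0.0, 0.25):
    net = gross_avoided * (1 - overestimate) - implied_cost
    print(f"avoided costs overestimated by {overestimate:.0%}: net ${net:+.2f} bn")
```

On this reading, a 25% over-estimate of the avoided costs is enough to turn the multi-billion saving into a loss – which is why the underlying assumptions matter so much.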

The authors conclude that targeted dietary supplement regimens are recommended as a means to help control rising societal health care costs, and as a means for high-risk individuals to minimize the chance of having to deal with potentially costly events and to invest in increased quality of life.

These conclusions read like a ‘carte blanche’ for marketing all sorts of useless supplements to gullible consumers. I think we should take them with more than a pinch of salt.

To generate results of this nature, it is necessary to make a number of assumptions. If the assumptions are wrong, so are the results. Furthermore, we should consider that the choice of supplements included was extremely limited and highly selective. Finally, we need to stress that the analysis related to a very specific patient group and not to the population at large. In view of these facts, caution is advised in taking this analysis to be generalizable.

Because of these caveats, my conclusion would have been quite different: provided that the assumptions underlying these analyses are correct, the use of a small selection of dietary supplements by patients at risk of CHD might reduce health care costs.

Most of the underlying assumptions of alternative medicine (AM) lack plausibility. Whenever this is the case, argues an international team of researchers in a recent paper, it is difficult to obtain valid statistical significance in clinical studies.

Using a mostly statistical approach, they argue that, since the prior probability of a research hypothesis is directly related to its scientific plausibility, the commonly used frequentist statistics, which do not account for this probability, are unsuitable for studies exploring matters disconnected, to varying degrees, from science. Any statistical significance obtained in this field should be considered with great caution and may be better attributed to more plausible hypotheses (such as a placebo effect) than to the specific efficacy of the intervention.

The researchers conclude that, since achieving meaningful statistical significance is an essential step in the validation of medical interventions, AM practices, producing only outcomes inherently resistant to statistical validation, appear not to belong to modern evidence-based medicine.

To emphasize their arguments, the researchers make the following additional points:

  • It is often forgotten that frequentist statistics, commonly used in clinical trials, provides only indirect evidence in support of the hypothesis examined.
  • The p-value inherently tends to exaggerate the support for the hypothesis tested, especially if the scientific plausibility of the hypothesis is low.
  • When the rationale for a clinical intervention is disconnected from the basic principles of science, as in the case of complementary and alternative medicine, any positive result obtained in clinical studies is more reasonably ascribable to hypotheses (generally the placebo effect) other than the hypothesis on trial, which commonly is the specific efficacy of the intervention.
  • Since meaningful statistical significance is, as a rule, an essential step in the validation of a medical intervention, complementary and alternative medicine cannot be considered evidence-based.

Further explanations can be found in the discussion of the article where the authors argue that the quality of the hypothesis tested should be consistent with sound logic and science and therefore have a reasonable prior probability of being correct. As a rule of thumb, assuming a “neutral” attitude towards the null hypothesis (odds = 1:1), a p-value of 0.01 or, better, 0.001 should suffice to give a satisfactory posterior probability of 0.035 and 0.005 respectively.
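
For readers who want to check this rule of thumb: the quoted posteriors can be reproduced, at least approximately, with the ‘minimum Bayes factor’ bound exp(−z²/2) described by Edwards and by Goodman. I am assuming this is the bound behind the authors’ figures; a short sketch:

```python
from math import exp
from statistics import NormalDist

# Minimum Bayes factor bound BF >= exp(-z^2/2): the strongest evidence
# against the null that a given two-sided p-value can possibly represent.
prior_odds = 1.0  # 'neutral' attitude towards the null (odds = 1:1)

for p in (0.01, 0.001):
    z = NormalDist().inv_cdf(1 - p / 2)   # two-sided z-score for this p
    min_bf = exp(-z * z / 2)
    posterior_odds = prior_odds * min_bf
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(f"p = {p}: minimum posterior P(null) = {posterior_prob:.3f}")
# Prints roughly 0.035 and 0.004 - close to the figures quoted above.
```

The key point survives any choice of bound: the lower the prior plausibility of the hypothesis, the less a ‘significant’ p-value actually tells us.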

In the area of AM, hypotheses often are entirely inconsistent with logic and frequently fly in the face of science. Four examples can demonstrate this instantly and sufficiently, I think:

  • Homeopathic remedies which contain not a single ‘active’ molecule are not likely to generate biological effects.
  • The healing ‘energy’ of Reiki masters has no basis in science.
  • The meridians of acupuncture are pure imagination.
  • Chiropractic subluxations have never been shown to exist.

Positive results from clinical trials of implausible forms of AM are thus due either to chance or bias, or must be attributed to more credible causes such as the placebo effect. Since the achievement of meaningful statistical significance is an essential step in the validation of medical interventions, unless some authentic scientific support for AM is provided, one has to conclude that AM cannot be considered evidence-based.

Such arguments are by no means new; they have been voiced over and over again. Essentially, they amount to the old adage: IF YOU CLAIM THAT YOU HAVE A CAT IN YOUR GARDEN, A SIMPLE PICTURE MAY SUFFICE. IF YOU CLAIM THERE IS A UNICORN IN YOUR GARDEN, YOU NEED SOMETHING MORE CONVINCING. An extraordinary claim requires an extraordinary proof! Put into the context of the current discussion about AM, this means that the usual level of clinical evidence is likely to be very misleading as long as it totally neglects the biological plausibility of the prior hypothesis.

Proponents of AM do not like to hear such arguments. They usually insist on what we might call a ‘level playing field’ and fail to see why their assumptions require not only a higher level of evidence but also a reasonable scientific hypothesis. They forget that the playing field is not even to start with; to understand the situation better, they should read this excellent article. Perhaps its elegant statistical approach will convince them – but I would not hold my breath.

Medical treatments with no direct effect, such as homeopathy, are surprisingly popular. But how does a good reputation of such treatments spread and persist? Researchers from the Centre for the Study of Cultural Evolution in Stockholm believe that they have identified the mechanism.

They argue that most medical treatments result in a range of outcomes: some people improve while others deteriorate. If the people who improve are more inclined to tell others about their experiences than the people who deteriorate, ineffective or even harmful treatments would maintain a good reputation.

They conducted a fascinating study to test the hypothesis that positive outcomes are overrepresented in online medical product reviews, examined whether this reputational distortion is large enough to bias people’s decisions, and explored the implications of this bias for the cultural evolution of medical treatments.

The researchers compared outcomes of weight loss treatments and fertility treatments as evidenced in clinical trials to outcomes reported in 1901 reviews on Amazon. Subsequently, in a series of experiments, they evaluated people’s choice of weight loss diet after reading different reviews. Finally, a mathematical model was used to examine if this bias could result in less effective treatments having a better reputation than more effective treatments.

The results of these investigations confirmed the hypothesis that people with better outcomes are more inclined to write reviews. After 6 months on the diet, 93% of online reviewers reported a weight loss of 10 kg or more, while just 27% of clinical trial participants experienced this level of weight change. A similar positive distortion was found in fertility treatment reviews. In a series of experiments, the researchers demonstrated that people are more inclined to begin a diet backed by many positive reviews than one with reviews that are representative of the diet’s true effect. A mathematical model of medical cultural evolution suggested that the size of the positive distortion critically depends on the shape of the outcome distribution.
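
The mechanism is simple enough to sketch in a few lines. The numbers below are invented and not calibrated to the actual study; they merely show how outcome-dependent reviewing inflates a treatment’s apparent success rate:

```python
import random

random.seed(7)

# Invented numbers: a mediocre diet, but reviewers self-select by outcome.
N = 10_000
losses = [random.gauss(4, 6) for _ in range(N)]   # kg lost after 6 months

def posts_review(loss_kg):
    # Made-up link: the better the outcome, the likelier a review.
    prob = min(1.0, max(0.02, 0.05 + 0.06 * loss_kg))
    return random.random() < prob

reviewed = [x for x in losses if posts_review(x)]

share_10kg = lambda xs: sum(x >= 10 for x in xs) / len(xs)
print(f"all participants losing >=10 kg: {share_10kg(losses):.0%}")
print(f"reviewers losing >=10 kg:        {share_10kg(reviewed):.0%}")
```

Even this crude self-selection rule makes the reviewer pool look far more successful than the full cohort – no dishonesty required, only silence from those who did badly.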

The authors concluded that online reviews overestimate the benefits of medical treatments, probably because people with negative outcomes are less inclined to tell others about their experiences. This bias can enable ineffective medical treatments to maintain a good reputation.

To me, this seems eminently plausible; but there are, of course, other reasons why bogus treatments survive or even thrive – and they may vary in their importance to the overall effect from treatment to treatment. As so often in health care, things are complex and there are multiple factors that contribute to a phenomenon.

Recently, I was invited to give a lecture about homeopathy for a large gathering of general practitioners (GPs). In the coffee break after my talk, I found myself chatting to a very friendly GP who explained: “I entirely agree with you that homeopathic remedies are pure placebos, but I nevertheless prescribe them regularly.” “Why would anyone do that?” I asked him. His answer was as frank as it was revealing.

Some of his patients, he explained, have symptoms for which he has tried every treatment possible without success. They typically have already seen every specialist in the book but none could help either. Alternatively they are patients who have nothing wrong with them but regularly consult him for trivial or self-limiting problems.

In either case, the patients come with the expectation of getting a prescription for some sort of medicine. The GP knows that it would be a hassle and most likely a waste of time to try and dissuade them. His waiting room is full, and he is facing the following choice:

  1. to spend a valuable 15 minutes or so explaining why he should not prescribe any medication at all, or
  2. to write a prescription for a homeopathic placebo and get the consultation over with in two minutes.

Option number 1 would render the patient unhappy or even angry, and chances are that she would promptly see some irresponsible charlatan who puts her ‘through the mill’ at great expense and considerable risk. Option number 2 would quickly free the GP to help those patients who can be helped, make the patient happy, preserve a good therapeutic relationship between GP and patient, save the GP’s nerves, let the patient benefit from a potentially powerful placebo effect, and furthermore be safe as well as cheap.

I was not going to be beaten that easily, though. “Basically”, I told him, “you are using homeopathy to quickly get rid of ‘heart sink’ patients!”

He agreed.

“And you find this alright?”

“No, but do you know a better solution?”

I explained that, by behaving in this way, the GP degrades himself to the level of a charlatan. “No”, he said “I am saving my patients from the many really dangerous charlatans that are out there.”

I explained that some of these patients might suffer from a serious condition which he had failed to diagnose. He countered that this had so far never happened because he is a well-trained and thorough physician.

I explained that his actions are ethically questionable. He laughed and said that it was much more ethical to use his time and skills to the best advantage of those who truly need them. In his view, the more important ethical issue overrides the relatively minor one.

I explained that, by implying that homeopathy is an effective treatment, he is perpetuating a myth which stands in the way of progress. He laughed again and answered that his foremost duty as a GP is not to generate progress on a theoretical level but to provide practical help for the maximum number of patients.

I explained that there cannot be many patients for whom no treatment existed that would be more helpful than a placebo, even if it only worked symptomatically. He looked at me with a pitiful smile and said my remark merely showed how long I had been out of clinical medicine.

I explained that doctors as well as patients have to stop that awfully counter-productive culture of relying on prescriptions or ‘magic bullets’ for every ill. We must all learn that, in many cases, it is better to do nothing or to rely on life-style changes; and we must get that message across to the public. He agreed, at least partly, but claimed this would require more than the 10 minutes he is allowed for each patient.

I explained… well, actually, at this point I had run out of arguments and was quite pleased when someone else started talking to me, thus terminating the conversation.

Since that day, I have been wondering what other arguments exist. I would be delighted if my readers could help me out.

General practitioners (GPs) play an important role in advising patients on all sorts of matters related to their health, and this includes, of course, the possible risks of electromagnetic fields (EMF). Their views on EMF are thus relevant and potentially influential.

A team of German and Danish researchers therefore conducted a survey comparing GPs using conventional medicine (COM) with GPs using complementary and alternative medicine (CAM) concerning their perception of EMF risks. A total of 2795 GPs drawn randomly from lists of German GPs were sent either a long or a short self-administered postal questionnaire on EMF-related topics. Adjusted logistic regression models were fitted to assess the association of an education in alternative medicine with various aspects of perceiving EMF risks.

Concern about EMF, misconceptions about EMF, and distrust toward scientific organizations are more prevalent in CAM-GPs. CAM-GPs more often falsely believed that mobile phone use can lead to head warming of more than 1°C, more often distrusted the Federal Office for Radiation Protection, were more often concerned about mobile phone base stations, more often attributed own health complaints to EMF, and more often reported at least 1 EMF consultation. GPs using homeopathy perceived EMF as more risky than GPs using acupuncture or naturopathic treatment.

The authors concluded that concern about common EMF sources is highly prevalent among German GPs. CAM-GPs perceive stronger associations between EMF and health problems than COM-GPs. There is a need for evidence-based information about EMF risks for GPs and particularly for CAM-GPs. This is the precondition that GPs can inform patients about EMF and health in line with current scientific knowledge.

True, the evidence is somewhat contradictory, but the majority of independent reviews suggest that EMF constitutes little or no health risk.

But even if someone wants to err on the safe side, and seriously considers the possibility that EMF sources might have the potential to harm our health, a general distrust in scientific organizations, and wrong ideas about modern technologies such as mobile phones are hardly very helpful – in fact, I find them pretty worrying. To learn that CAM-GPs are more likely than COM-GPs to hold such overtly anti-scientific views does not inspire me with trust; to see that homeopaths are the worst culprits is perhaps not entirely unexpected. Almost by definition, critical evaluation of the existing evidence is not a skill that is prevalent amongst homeopaths – if it were, there would be no homeopaths!

Twenty years ago, when I started my Exeter job as a full-time researcher of complementary/alternative medicine (CAM), I defined the aim of my unit as applying science to CAM. At the time, this intention upset quite a few CAM-enthusiasts. One of the most prevalent arguments of CAM-proponents against my plan was that the study of CAM with rigorous science was quite simply an impossibility. They claimed that CAM included mind and body practices, holistic therapies, and other complex interventions which cannot be put into the ‘straitjacket’ of conventional research, e.g. a controlled clinical trial. I spent the next few years showing that this notion was wrong. Gradually and hesitantly, CAM researchers seemed to agree with my view – not all, of course, but first a few and then, slowly and often reluctantly, the majority of them.

What followed was a period during which several research groups started conducting rigorous tests of the hypotheses underlying CAM. All too often, the results turned out to be disappointing, to say the least: not only did most of the therapies in question fail to show efficacy, they were also by no means free of risks. Worst of all, perhaps, much of CAM was exposed as biologically implausible. The realization that rigorous scientific scrutiny often generated findings which were not what proponents had hoped for led to a sharp decline in the willingness of CAM-proponents to conduct rigorous tests of their hypotheses. Consequently, many asked whether science was such a good idea after all.

But that, in turn, created a new problem: once they had (at least nominally) committed themselves to science, how could they turn against it? The answer to this dilemma was easier than anticipated: the solution was to appear dedicated to science but, at the same time, to argue that, because CAM is subtle, holistic, complex etc., a different scientific approach was required. At this stage, I felt we had gone ‘full circle’ and had essentially arrived back where we were 20 years ago – except that CAM-proponents no longer rejected the scientific method outright but merely demanded different tools.

A recent article may serve as an example of this new and revised stance of CAM-proponents on science. Here proponents of alternative medicine argue that a challenge for research methodology in CAM/IHC* is the growing recognition that CAM/IHC practice often involves complex combinations of novel interventions that include mind and body practices, holistic therapies, and others. Critics argue that the reductionist placebo-controlled randomized controlled trial (RCT) model that works effectively for determining the efficacy of most pharmaceuticals may not be the most appropriate for determining effectiveness in clinical practice, for either CAM/IHC or many of the interventions used in primary care, including health promotion practices. Therefore the reductionist methodology inherent in efficacy studies, and in particular in RCTs, may not be appropriate to study the outcomes of much of CAM/IHC, such as Traditional Korean Medicine (TKM), or of other complex non-CAM/IHC interventions – especially those addressing comorbidities. In fact, it can be argued that reductionist methodology may disrupt the very phenomenon, the whole system, that the research is attempting to capture and evaluate (i.e., the whole system in its naturalistic environment). Key issues that surround the selection of the most appropriate methodology to evaluate complex interventions are well described in the King’s Fund report on IHC and also in the UK Medical Research Council (MRC) guidelines for evaluating complex interventions – guidelines which have largely been applied to the complexity of conventional primary care and care for patients with substantial comorbidity. These reports offer several potential solutions to the challenges inherent in studying CAM/IHC. [* IHC = integrated health care]

Let’s be clear and disclose what all of this actually means. The sequence of events, as I see it, can be summarized as follows:

  • We are foremost ALTERNATIVE! Our treatments are far too unique to be subjected to reductionist research; we therefore reject science and insist on an ALTERNATIVE.
  • We (well, some of us) have reconsidered our opposition and are prepared to test our hypotheses scientifically (NOT LEAST BECAUSE WE NEED THE RECOGNITION THAT THIS MIGHT BRING).
  • We are dismayed to see that the results are mostly negative; science, it turns out, works against our interests.
  • We need to reconsider our position.
  • We find it inconceivable that our treatments do not work; all the negative scientific results must therefore be wrong.
  • We always said that our treatments are unique; now we realize that they are far too holistic and complex to be submitted to reductionist scientific methods.
  • We still believe in science (or at least want people to believe that we do) – but we need a different type of science.
  • We insist that RCTs (and all other scientific methods that fail to demonstrate the value of CAM) are not adequate tools for testing complex interventions such as CAM.
  • We have determined that reductionist research methods disturb our subtle treatments.
  • We need pragmatic trials and similarly ‘soft’ methods that capture ‘real life’ situations, do justice to CAM and rarely produce a negative result.

What all of this really means is that, whenever the findings of research disappoint CAM-proponents, the results are by definition false-negative. The obvious solution to this problem is to employ different (weaker) research methods, preferably those that cannot possibly generate a negative finding. Or, to put it bluntly: in CAM, science is acceptable only as long as it produces the desired results.

Dodgy science abounds in alternative medicine; this is perhaps particularly true for homeopathy. A brand-new trial seems to confirm this view.

The aim of this study was to test the hypothesis that homeopathy (H) enhances the effects of scaling and root planing (SRP) in patients with chronic periodontitis (CP).

The researchers, dentists from Brazil, randomised 50 patients with CP to one of two treatment groups: SRP (C-G) or SRP + H (H-G). Assessments were made at baseline and after 3 and 12 months of treatment. The local and systemic responses to the treatments were evaluated after one year of follow-up. The results showed that both groups displayed significant improvements; however, the H-G group performed significantly better than the C-G group.

The authors concluded that homeopathic medicines, as an adjunctive to SRP, can provide significant local and systemic improvements for CP patients.

Really? I am afraid, I disagree!

Homeopathic medicines might have nothing whatsoever to do with this result. Much more likely is the possibility that the findings are caused by other factors such as:

  • placebo-effects,
  • patients’ expectations,
  • improved compliance with other health-related measures,
  • the researchers’ expectations,
  • the extra attention given to the patients in the H-G group,
  • disappointment of the C-G patients for not receiving the additional care,
  • a mixture of all or some of the above.

I should stress that it would not have been difficult to plan the study in such a way that these factors were eliminated as sources of bias or confounding. But this study was conducted according to the A+B versus B design which we have discussed repeatedly on this blog. In such trials, A is the experimental treatment (homeopathy) and B is the standard care (scaling and root planing). Unless A is an overtly harmful therapy, it is simply not conceivable that A+B does not generate better results than B alone. The simplest way to comprehend this argument is to imagine that A and B are two different amounts of money: it is impossible that A+B is not more than B!

It is unclear to me what relevant research question such a study design actually answers (if anyone knows, please tell me). It seems obvious, however, that it cannot test the hypothesis that homeopathy (H) enhances the effects of scaling and root planing (SRP). This does not mean that the design is necessarily useless. But at the very minimum, one would need an adequate research question (one that matches this design) and adequate conclusions based on the findings.

The fact that the conclusions drawn from a dodgy trial are inadequate and misleading could be seen as merely a mild irritation. The facts that, in homeopathy, such poor science and misleading conclusions emerge all too regularly, and that journals continue to publish such rubbish are not just mildly irritating; they are annoying and worrying – annoying because such pseudo-science constitutes an unethical waste of scarce resources; worrying because it almost inevitably leads to wrong decisions in health care.
