
Fibromyalgia (FM) is a chronic condition that ruins the quality of life of many patients. It is also a domain of alternative medicine: dozens of different treatments are on offer - clearly a paradise for charlatans and bogus claims. So is there a treatment that is demonstrably effective? The purpose of this systematic review was to evaluate the evidence for massage therapy in FM.

Electronic databases were searched to identify relevant studies. The main outcome measures were pain, anxiety, depression, and sleep disturbance. Two reviewers independently abstracted the data and assessed the risk of bias of the eligible studies using Cochrane tools.

Nine randomized controlled trials involving 404 patients met the inclusion criteria. A meta-analysis showed that massage therapy with a duration of at least 5 weeks significantly improved pain, anxiety, and depression. Sleep disturbance was not improved by massage therapy.

The authors conclude that massage therapy with a duration of ≥5 weeks had beneficial immediate effects on pain, anxiety, and depression in patients with FM. Massage therapy should be one of the viable complementary and alternative treatments for FM. However, given the small number of eligible studies in the subgroup meta-analyses and the absence of evidence on follow-up effects, large-scale randomized controlled trials with long follow-up are warranted to confirm the current findings.

To put these results into context, we need to consider the often poor methodological quality of the primary studies. It is, of course, not easy to test massage therapy in rigorous trials. For instance, there is no obvious placebo, and we can therefore not be sure whether the treatment benefits patients through a specific effect or whether non-specific effects are the cause of the improvement.

We should also be aware of the facts that, for most other alternative therapies, the evidence is not encouraging, and that massage therapy is relatively safe. Therefore, the conclusion for those who suffer from FM might well be that massage therapy is worth a try.

The dismal state of chiropractic research is no secret. But is anything being done about it? One important step would be to come up with a research strategy to fill the many embarrassing gaps in our knowledge about the validity of the concepts underlying chiropractic.

A brand-new article might be a step in the right direction. The aim of this survey was to identify chiropractors’ priorities for future research in order to best channel the available resources and facilitate advancement of the profession. The researchers recruited 60 academic and clinician chiropractors who had attended any of the annual European Chiropractors’ Union/European Academy of Chiropractic Researchers’ Day meetings since 2008. A Delphi process was used to identify a list of potential research priorities. Initially, 70 research priorities were identified, and 19 of them reached consensus as priorities for future research. The following three items were thought to be most important:

  1.  cost-effectiveness/economic evaluations,
  2.  identification of subgroups likely to respond to treatment,
  3.  initiation and promotion of collaborative research activities.

The authors state that this is the first formal and systematic attempt to develop a research agenda for the chiropractic profession in Europe. Future discussion and study are necessary to determine whether the themes identified in this survey should be broadly implemented.

Am I the only one who finds these findings extraordinary?

The chiropractic profession only recently lost the libel case against Simon Singh who had disclosed that chiropractors HAPPILY PROMOTE BOGUS TREATMENTS. One would have thought that this debacle might prompt the need for rigorous research testing the many unsubstantiated claims chiropractors still make. Alas, the collective chiropractic wisdom does not consider such research as a priority!

Similarly, I would have hoped that chiropractors perceive an urgency to investigate the safety of their treatments. Serious complications after spinal manipulation are well documented, and I would have thought that any responsible health care profession would consider it essential to generate reliable evidence on the incidence of such events.

The fact that these two areas are not considered to be priorities is revealing. In my view, it suggests that chiropractic is still very far from becoming a mature and responsible profession. It seems that chiropractors have not learned the most important lessons from recent events; on the contrary, they continue to bury their heads in the sand and carry on seeing research as a tool for marketing.

The news that the use of Traditional Chinese Medicine (TCM) positively affects cancer survival might come as a surprise to many readers of this blog; but this is exactly what recent research has suggested. As it was published in one of the leading cancer journals, we should be able to trust the findings – or shouldn’t we?

The authors of this new study used the Taiwan National Health Insurance Research Database to conduct a retrospective population-based cohort study of patients with advanced breast cancer between 2001 and 2010. The patients were separated into TCM users and non-users, and the association between the use of TCM and patient survival was determined.

A total of 729 patients with advanced breast cancer receiving taxanes were included. Their mean age was 52.0 years; 115 patients were TCM users (15.8%) and 614 patients were TCM non-users. The mean follow-up was 2.8 years, with 277 deaths reported during the 10-year period. Multivariate analysis demonstrated that, compared with non-use, the use of TCM was associated with a significantly decreased risk of all-cause mortality (adjusted hazard ratio [HR], 0.55 [95% confidence interval, 0.33-0.90] for TCM use of 30-180 days; adjusted HR, 0.46 [95% confidence interval, 0.27-0.78] for TCM use of >180 days). Among the frequently used TCMs, those found to be most effective (lowest HRs) in reducing mortality were Bai Hua She She Cao, Ban Zhi Lian, and Huang Qi.

The authors of this paper are initially quite cautious and use adequate terminology when they write that TCM use was associated with increased survival. But then they seem to get carried away by their enthusiasm and even name the TCM drugs which they thought were most effective in prolonging cancer survival. It is obvious that such causal extrapolations are well out of line with the evidence they produced (oh, how I wish that journal editors would finally wake up to such misleading language!).

Of course, it is possible that some TCM drugs are effective cancer cures – but the data presented here certainly do NOT demonstrate anything like such an effect. And before such a far-reaching claim is made, much more and much better research would be necessary.

The thing is, there are many alternative and plausible explanations for the observed phenomenon. For instance, it is conceivable that users and non-users of TCM in this study differed in many ways other than their medication, e.g. severity of cancer, adherence to conventional therapies, life-style, etc. And even if the researchers have used clever statistical methods to control for some of these variables, residual confounding can never be ruled out in such observational studies.

Correlation is not causation, they say. Neglect of this elementary axiom makes for very poor science – in fact, it produces dangerous pseudoscience which could, like in the present case, lead a cancer patient straight up the garden path towards a premature death.

Systematic reviews are widely considered to be the most reliable type of evidence for judging the effectiveness of therapeutic interventions. Such reviews should be focused on a well-defined research question and identify, critically appraise and synthesize the totality of the high quality research evidence relevant to that question. Often it is possible to pool the data from individual studies and thus create a new numerical summary of the existing evidence; in this case, we speak of a meta-analysis, a sub-category of systematic reviews.

One strength of systematic reviews is that they minimise selection and random biases by considering the totality of the evidence of a pre-defined nature and quality. A crucial precondition, however, is that the quality of the primary studies is critically assessed. If this is done well, the researchers will usually be able to determine how robust any given result is, and whether high quality trials generate similar findings to those of lower quality. If there is a discrepancy between the findings of rigorous and flimsy studies, it is obviously advisable to trust the former and discard the latter.
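For readers who like to see the arithmetic, here is a minimal sketch (in Python, with invented numbers) of the inverse-variance pooling that underlies a simple fixed-effect meta-analysis:

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect (inverse-variance) pooling of study effect sizes."""
    weights = [1.0 / se**2 for se in std_errors]          # precise studies count more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, se_pooled, ci

# three hypothetical trials: mean differences vs control, and their standard errors
effects = [-0.40, -0.10, -0.25]
ses = [0.20, 0.10, 0.15]
est, se, (lo, hi) = pooled_effect(effects, ses)
print(f"pooled effect = {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Note what the pooling does and does not do: it weights each study by its statistical precision, but it does nothing whatsoever to correct for poor study quality - that step has to be done separately, by critically appraising each trial.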

And this is where systematic reviews of alternative treatments can run into difficulties. For any given research question in this area we usually have a paucity of primary studies. Equally important is the fact that many of the available trials tend to be of low quality. Consequently, there often is a lack of high quality studies, and this makes it all the more important to include a robust critical evaluation of the primary data. Not doing so would render the overall result of the review less than reliable – in fact, such a paper would not qualify as a systematic review at all; it would be a pseudo-systematic review, i.e. a review which pretends to be systematic but, in fact, is not. Such papers are a menace in that they can seriously mislead us, particularly if we are not familiar with the essential requirements for a reliable review.

This is precisely where some promoters of bogus treatments seem to see their opportunity of making their unproven therapy look as though it was evidence-based. Pseudo-systematic reviews can be manipulated to yield a desired outcome. In my last post, I have shown that this can be done by including treatments which are effective so that an ineffective therapy appears effective (“chiropractic is so much more than just spinal manipulation”). An even simpler method is to exclude some of the studies that contradict one’s belief from the review. Obviously, the review would then not comprise the totality of the available evidence. But, unless the reader bothers to do a considerable amount of research, he/she would be highly unlikely to notice. All one needs to do is to smuggle the paper past the peer-review process – hardly a difficult task, given the plethora of alternative medicine journals that bend over backwards to publish any rubbish as long as it promotes alternative medicine.

Alternatively (or in addition), one can save oneself a lot of work and omit the process of critically evaluating the primary studies. This method is increasingly popular in alternative medicine. It is a fool-proof way of generating a false-positive overall result: as poor quality trials have a tendency to deliver false-positive results, a predominance of flimsy studies must create a false-positive overall result.
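To see how this plays out in numbers, consider a toy simulation (Python; all figures invented) of a treatment with zero true effect, tested in a series of small trials each carrying a small systematic bias of the kind poor blinding or selective outcome reporting can produce:

```python
import random, math
random.seed(2)

def biased_trial(n=20, bias=0.3):
    """One flimsy trial: no real treatment effect, but a small systematic
    bias inflates the apparent benefit of the verum group."""
    verum   = [random.gauss(bias, 1) for _ in range(n)]
    control = [random.gauss(0,    1) for _ in range(n)]
    diff = sum(verum) / n - sum(control) / n
    se = math.sqrt(2 / n)          # standard error, unit variance assumed
    return diff, se

# pool 30 such trials with inverse-variance weights, no quality assessment
results = [biased_trial() for _ in range(30)]
weights = [1 / se**2 for _, se in results]
pooled = sum(w * d for (d, _), w in zip(results, weights)) / sum(weights)
z = pooled / math.sqrt(1 / sum(weights))
print(f"pooled effect = {pooled:.2f}, z = {z:.1f}")
```

Pooling thirty such flimsy trials yields a "highly significant" effect for a therapy that does nothing at all - which is exactly why a review that skips the quality assessment is worthless.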

A particularly notorious example of a pseudo-systematic review that used this as well as most of the other tricks for misleading the reader is the famous ‘systematic’ review by Bronfort et al. It was commissioned by the UK GENERAL CHIROPRACTIC COUNCIL after the chiropractic profession got into trouble and was keen to defend those bogus treatments disclosed by Simon Singh. Bronfort and his colleagues thus swiftly published (of course, in a chiro-journal) an all-encompassing review attempting to show that, at least for some conditions, chiropractic was effective. Its lengthy conclusions seemed encouraging: Spinal manipulation/mobilization is effective in adults for: acute, subacute, and chronic low back pain; migraine and cervicogenic headache; cervicogenic dizziness; manipulation/mobilization is effective for several extremity joint conditions; and thoracic manipulation/mobilization is effective for acute/subacute neck pain. The evidence is inconclusive for cervical manipulation/mobilization alone for neck pain of any duration, and for manipulation/mobilization for mid back pain, sciatica, tension-type headache, coccydynia, temporomandibular joint disorders, fibromyalgia, premenstrual syndrome, and pneumonia in older adults. Spinal manipulation is not effective for asthma and dysmenorrhea when compared to sham manipulation, or for Stage 1 hypertension when added to an antihypertensive diet. In children, the evidence is inconclusive regarding the effectiveness for otitis media and enuresis, and it is not effective for infantile colic and asthma when compared to sham manipulation. Massage is effective in adults for chronic low back pain and chronic neck pain. The evidence is inconclusive for knee osteoarthritis, fibromyalgia, myofascial pain syndrome, migraine headache, and premenstrual syndrome. In children, the evidence is inconclusive for asthma and infantile colic. 

Chiropractors across the world cite this paper as evidence that chiropractic has at least some evidence base. What they omit to tell us (perhaps because they do not appreciate it themselves) is the fact that Bronfort et al

  • failed to formulate a focussed research question,
  •  invented their own categories of inconclusive findings,
  • included all sorts of studies which had nothing to do with chiropractic,
  •  and failed to assess the quality of the primary studies included in their review.

If, for instance, three trials were included for a certain condition – two positive but of poor quality, and one negative but of good quality – the authors would conclude that, overall, there is sound evidence.

Bronfort himself is, of course, more than likely to know all that (he has learnt his trade with an excellent Dutch research team and published several high quality reviews) - but his readers mostly don’t. And for chiropractors, this ‘systematic’ review is now considered to be the most reliable evidence in their field.

The efficacy or effectiveness of medical interventions is, of course, best tested in clinical trials. The principle of a clinical trial is fairly simple: typically, a group of patients is divided (preferably at random) into two subgroups; one (the ‘verum’ group) is treated with the experimental treatment and the other (the ‘control’ group) with another option (often a placebo), and the eventual outcomes of the two groups are compared. If done well, such studies are able to exclude biases and confounding factors such that their findings allow causal inference. In other words, they can tell us whether an outcome was caused by the intervention per se or by some other factor such as the natural history of the disease, regression towards the mean, etc.

A clinical trial is a research tool for testing hypotheses; strictly speaking, it tests the ‘null-hypothesis’: “the experimental treatment generates the same outcomes as the treatment of the control group”. If the trial shows no difference between the outcomes of the two groups, the null-hypothesis is confirmed. In this case, we commonly speak of a negative result. If the experimental treatment was better than the control treatment, the null-hypothesis is rejected, and we commonly speak of a positive result. In other words, clinical trials can only generate positive or negative results, because the null-hypothesis must either be confirmed or rejected – there are no grey tones between the black of a negative and the white of a positive study.
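The logic of testing the null-hypothesis can be sketched with a toy permutation test (Python; the outcome scores are invented): under the null-hypothesis, verum and control data come from one and the same distribution, so we can shuffle the pooled data and see how often chance alone produces a difference as large as the one observed:

```python
import random
random.seed(0)

# hypothetical outcome scores (higher = better) for the two trial arms
verum   = [6, 7, 5, 8, 7, 6, 9, 7, 6, 8]
control = [5, 6, 4, 6, 5, 7, 5, 6, 4, 5]

observed = sum(verum) / len(verum) - sum(control) / len(control)

# permutation test: repeatedly reshuffle the pooled data into two fake arms
# and count how often the shuffled difference reaches the observed one
pooled = verum + control
count, n_perm = 0, 10_000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[:10]) / 10 - sum(pooled[10:]) / 10
    if diff >= observed:
        count += 1
p_value = count / n_perm

print(f"difference = {observed:.2f}, p = {p_value:.4f}")
```

A small p-value leads us to reject the null-hypothesis (a "positive" trial); a large one means the null-hypothesis stands (a "negative" trial). There is no third category.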

For enthusiasts of alternative medicine, this can create a dilemma, particularly if there are lots of published studies with negative results. In this case, the totality of the available trial evidence is negative which means the treatment in question cannot be characterised as effective. It goes without saying that such an overall conclusion rubs the proponents of that therapy the wrong way. Consequently, they might look for ways to avoid this scenario.

One fairly obvious way of achieving this aim is to simply re-categorise the results. What, if we invented a new category? What, if we called some of the negative studies by a different name? What about NON-CONCLUSIVE?

That would be brilliant, wouldn’t it? We might end up with a simple statistic where the majority of the evidence is, after all, positive. And this, of course, would give the impression that the ineffective treatment in question is effective!

How exactly do we do this? We continue to call positive studies POSITIVE; we then call studies where the experimental treatment generated worse results than the control treatment (usually a placebo) NEGATIVE; and finally we call those studies where the experimental treatment created outcomes which were no different from placebo NON-CONCLUSIVE.

In the realm of alternative medicine, this ‘non-conclusive result’ method has recently become incredibly popular. Take homeopathy, for instance. The Faculty of Homeopathy proudly claim the following about clinical trials of homeopathy: Up to the end of 2011, there have been 164 peer-reviewed papers reporting randomised controlled trials (RCTs) in homeopathy. This represents research in 89 different medical conditions. Of those 164 RCT papers, 71 (43%) were positive, 9 (6%) negative and 80 (49%) non-conclusive.

This misleading nonsense was, of course, warmly received by homeopaths. The British Homeopathic Association, like many other organisations and individuals with an axe to grind, lapped up the message and promptly repeated it: The body of evidence that exists shows that much more investigation is required – 43% of all the randomised controlled trials carried out have been positive, 6% negative and 49% inconclusive.

Let’s be clear what has happened here: the true percentage figures seem to show that 43% of studies (mostly of poor quality) suggest a positive result for homeopathy, while 57% of them (on average the ones of better quality) were negative. In other words, the majority of this evidence is negative. If we conducted a proper systematic review of this body of evidence, we would, of course, have to account for the quality of each study, and in this case we would have to conclude that homeopathy is not supported by sound evidence of effectiveness.
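The arithmetic of the trick is easily laid bare (a few lines of Python, using the Faculty's own figures):

```python
# trial counts as published by the Faculty of Homeopathy (up to end of 2011)
positive, negative, non_conclusive = 71, 9, 80

# the Faculty's framing: three categories, so positives (43%) dwarf negatives (6%)
faculty_view = {"positive": positive, "negative": negative,
                "non-conclusive": non_conclusive}

# the null-hypothesis framing: a trial that finds no difference from placebo
# has confirmed the null-hypothesis, i.e. it is a negative trial
null_view = {"positive": positive, "negative": negative + non_conclusive}

print(faculty_view)
print(null_view)   # the negatives now outnumber the positives, 89 to 71
```

Same data, one relabelling: a majority-negative evidence base is presented as mostly "positive or inconclusive".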

The little trick of applying the ‘NON-CONCLUSIVE’ method has thus turned this overall result upside down: black has become white! No wonder that it is so popular with proponents of all sorts of bogus treatments.

Whenever a new trial of an alternative intervention emerges which fails to confirm the wishful thinking of the proponents of that therapy, the world of alternative medicine is in turmoil. What can be done about yet another piece of unfavourable evidence? The easiest solution would be to ignore it, of course - and this is precisely what is often tried. But this tactic usually proves to be unsatisfactory; it does not neutralise the new evidence, and each time someone brings it up, one has to stick one’s head back into the sand. Rather than denying its existence, it would be preferable to have a tool which invalidates the study in question once and for all.

The ‘fatal flaw’ solution is simpler than anticipated! Alternative treatments are ‘very special’, and this notion must be emphasised, blown up beyond all proportions and used cleverly to discredit studies with unfavourable outcomes: the trick is simply to claim that studies with unfavourable results have a ‘fatal flaw’ in the way the alternative treatment was applied. As only the experts in the ‘very special’ treatment in question are able to judge the adequacy of their therapy, nobody is allowed to doubt their verdict.

Take acupuncture, for instance; it is an ancient ‘art’ which only the very best will ever master – at least that is what we are being told. So, all the proponents need to do in order to invalidate a trial, is read the methods section of the paper in full detail and state ‘ex cathedra’ that the way acupuncture was done in this particular study is completely ridiculous. The wrong points were stimulated, or the right points were stimulated but not long enough [or too long], or the needling was too deep [or too shallow], or the type of stimulus employed was not as recommended by TCM experts, or the contra-indications were not observed etc. etc.

As nobody can tell a correct acupuncture from an incorrect one, this ‘fatal flaw’ method is fairly fool-proof. It is also ever so simple: acupuncture-fans do not necessarily study hard to find the ‘fatal flaw’, they only have to look at the result of a study – if it was favourable, the treatment was obviously done perfectly by highly experienced experts; if it was unfavourable, the therapists clearly must have been morons who picked up their acupuncture skills in a single weekend course. The reasons for this judgement can always be found or, if all else fails, invented.

And the end-result of the ‘fatal flaw’ method is most satisfactory; what is more, it can be applied to all alternative therapies – homeopathy, herbal medicine, reflexology, Reiki healing, colonic irrigation…the method works for all of them! What is even more, the ‘fatal flaw’ method is adaptable to other aspects of scientific investigations such that it fits every conceivable circumstance.

An article documenting the ‘fatal flaw’ has to be published, of course - but this is no problem! There are dozens of dodgy alternative medicine journals which are only too keen to print even the most far-fetched nonsense as long as it promotes alternative medicine in some way. Once this paper is published, the proponents of the therapy in question have a comfortable default position to rely on each time someone cites the unfavourable study: “WHAT NOT THAT STUDY AGAIN! THE TREATMENT HAS BEEN SHOWN TO BE ALL WRONG. NOBODY CAN EXPECT GOOD RESULTS FROM A THERAPY THAT WAS NOT CORRECTLY ADMINISTERED. IF YOU DON’T HAVE BETTER STUDIES TO SUPPORT YOUR ARGUMENTS, YOU BETTER SHUT UP.”

There might, in fact, be better studies – but chances are that the ‘other side’ has already documented a ‘fatal flaw’ in them too.

It is usually BIG PHARMA who stands accused of being less than honest with the evidence, particularly when it runs against commercial interests; and the allegations prove to be correct with depressing regularity. In alternative medicine, commercial interests exist too, but there is usually much less money at stake. So, a common assumption is that conflicts of interest are less relevant in alternative medicine. Like so many assumptions in this area, this notion is clearly and demonstrably erroneous.

The sums of money are definitely smaller, but non-commercial conflicts of interest are potentially more important than the commercial ones. I am thinking of the quasi-religious beliefs that are so very prevalent in alternative medicine. Belief can move mountains, they say – it can surely delude people and make them do the most extraordinary things. Belief can transform advocates of alternative medicine into ‘ALCHEMISTS OF ALTERNATIVE EVIDENCE’ who turn negative/unfavourable into positive/favourable evidence.

The alchemists’ ‘tricks of the trade’ are often the same as those used by BIG PHARMA; they include:

  • drawing conclusions which are not supported by the data
  • designing studies such that they will inevitably generate a favourable result
  • cherry-picking the evidence
  • hiding unfavourable findings
  • publishing favourable results multiple times
  • submitting data-sets to multiple statistical tests until a positive result emerges
  • defaming scientists who publish unfavourable findings
  • bribing experts
  •  prettifying data
  • falsifying data

As I said, these methods, albeit despicable, are well-known to pseudoscientists in all fields of inquiry. To assume that they are unknown in alternative medicine is naïve and unrealistic, as many of my previous posts confirm.

In addition to these ubiquitous ‘standard’ methods of scientific misconduct and fraud, there are a few techniques which are more or less unique to and typical for the alchemists of alternative medicine. In the following parts of this series of articles, I will try to explain these methods in more detail.

There are dozens of observational studies of homeopathy which seem to suggest – at least to homeopaths – that homeopathic treatments generate health benefits. As these investigations lack a control group, their results can all too easily be invalidated by pointing out that factors like ‘regression towards the mean’ (RTM, a statistical artefact caused by the phenomenon that a variable that is extreme on its first measurement tends to be closer to the average on its second measurement) might be the cause of the observed change. Thus the debate over whether such observational data are reliable has been raging for decades. Now, German (pro-homeopathy) investigators have published a paper which could potentially resolve this dispute.
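RTM is easy to demonstrate with a small simulation (Python; the "quality of life" numbers are, of course, invented): select the worst-off individuals at baseline, remeasure the very same people later without any treatment whatsoever, and watch their average "improve":

```python
import random
random.seed(1)

# a stable underlying trait, measured twice with independent noise
N = 100_000
true_level = [random.gauss(50, 10) for _ in range(N)]
first = [t + random.gauss(0, 10) for t in true_level]

# select the worst-off at baseline -- exactly the people who seek treatment
selected = [i for i in range(N) if first[i] < 30]

# remeasure the same people later; nothing has changed except fresh noise
second = [true_level[i] + random.gauss(0, 10) for i in selected]

mean_first = sum(first[i] for i in selected) / len(selected)
mean_second = sum(second) / len(selected)

print(f"baseline mean of selected group: {mean_first:.1f}")
print(f"follow-up mean, no treatment:    {mean_second:.1f}")
```

The follow-up mean is markedly higher than the baseline mean, even though no one received any intervention: the extreme baseline scores were partly due to noise, and noise does not repeat itself.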

With this re-analysis of an observational study, the investigators wanted to evaluate whether the observed changes in previous cohort studies are due to RTM and to estimate RTM-adjusted effects. SF-36 quality-of-life (QoL) data from a cohort of 2827 chronically diseased adults treated with homeopathy were reanalysed using a method described in 1991 by Mee and Chua. RTM-adjusted effects, standardized by the respective standard deviation at baseline, were 0.12 (95% CI: 0.06-0.19, P < 0.001) in the mental and 0.25 (0.22-0.28, P < 0.001) in the physical summary score of the SF-36. Small-to-moderate effects were confirmed for most individual diagnoses in physical, but not in mental, component scores. Under the assumption that the true population mean equals the mean of all actually diseased patients, RTM-adjusted effects were confirmed for both scores in most diagnoses.

The authors reached the following conclusion: “In our paper we showed that the effects on quality of life observed in patients receiving homeopathic care in a usual care setting are small or moderate at maximum, but cannot be explained by RTM alone. Due to the uncontrolled study design they may, however, completely be due to nonspecific effects. All our analyses made a restrictive and conservative assumption, so the true treatment effects might be larger than shown.” 

Of course, the analysis heavily relies on the validity of Mee and Chua’s modified t-test. It requires the true mean in the target population to be known, a requirement that can seldom be fulfilled. The authors therefore took the SF-36 mean summary scores from the 1998 German health survey as proxies. I am not a statistician and therefore unable to tell how reliable this method might be (if there is someone out there who can give us some guidance here, please post a comment).

In order to make sense of these data, we need to consider that, during the study period, about half of the patients admitted to having had additional visits to non-homeopathic doctors, and 27% also received conventional drugs. In addition, they would have benefitted from:

  • the benign history of the conditions they were suffering from,
  • a placebo-effect,
  •  the care and attention they received,
  • and all sorts of other non-specific effects.

So, considering these factors, what does this interesting re-analysis really tell us? My interpretation is as follows: the type of observational study that homeopaths are so fond of yields false-positive results. If we correct them – as the authors have done here for just one single factor, the RTM – the effect size gets significantly smaller. If we were able to correct them for some of the other factors mentioned above, the effect size would shrink more and more. And if we were able to correct them for all confounders, their results would almost certainly concur with those of rigorously controlled trials which demonstrate that homeopathic remedies are pure placebos.

I am quite sure that this interpretation is unpopular with homeopaths, but I am equally certain that it is correct.

Yes, it is unlikely but true! I once was the hero of the world of energy healing, albeit for a short period only. An amusing story, I hope you agree.

Back in the late 1990s, we had decided to run two trials in this area. One of them was to test the efficacy of distant healing for the removal of ordinary warts, common viral infections of the skin which are quite harmless and usually disappear spontaneously. We had designed a rigorous study, obtained ethics approval and were in the midst of recruiting patients, when I suggested I could be the trial’s first participant, as I had noticed a tiny wart on my left foot. As patient-recruitment was sluggish at that stage, my co-workers consulted the protocol to check whether it might prevent me from taking part in my own trial. They came back with the good news that, as I was not involved in the running of the study, there was no reason for me to be excluded.

The next day, they ‘processed’ me like all the other wart sufferers in our investigation. My wart was measured, photographed and documented. A sealed envelope with my trial number was opened (in my absence, of course) by one of the trialists to see whether I would be in the experimental or the placebo group. Patients in the former group were to receive ‘distant healing’ from a group of 10 experienced healers who had volunteered and felt confident that they could cure warts. All they needed, they had confirmed, was a few details about each patient. The placebo group received no such intervention. ‘Blinding’ the patients was easy in this trial; since they were not themselves involved in any healing action, they could not know whether they were in the placebo or the verum group.

The treatment period lasted for several weeks, during which time my wart was re-evaluated at regular intervals. When I had completed the study, final measurements were taken, and I was told that I had been the recipient of ‘healing energy’ from the 10 healers during the past weeks. Not that I had felt any of it, and not that my wart had noticed it either: it was still there, completely unchanged.

I remember not being all that surprised…until, the next morning, when I noticed that my wart had disappeared! Gone without a trace!

Of course, I told my co-workers who were quite excited, re-photographed the spot where the wart had been and consulted the study protocol to determine what had to be done next. It turned out that we had made no provisions for events that might occur after the treatment period.

But somehow, this did not feel right, we all thought. So we decided to make a post-hoc addendum to our protocol which stipulated that all participants of our trial would be asked a few days after the end of the treatment whether any changes to their warts had been noted.

Meanwhile the healers had got wind of the professorial wart’s disappearance. They were delighted and quickly told other colleagues. In no time at all, the world of ‘distant healing’ had agreed that warts often reacted to their intervention with a slight delay – and they were pleased to hear that we had duly amended our protocol to adequately capture this important phenomenon. My ‘honest’ and ‘courageous’ action of acknowledging and documenting the disappearance of my wart was praised, and it was assumed that I was about to prove the efficacy of distant healing.

And that’s how I became their ‘hero’ – the sceptical professor who had now seen the light with his own eyes and experienced on his own body the incredible power of their ‘healing energy’.

Incredible it remained though: I was the only trial participant who lost his wart in this way. When we published this study, we concluded: Distant healing from experienced healers had no effect on the number or size of patients’ warts.

AND THAT’S WHEN I STOPPED BEING THEIR ‘HERO’.

Irritable bowel syndrome (IBS) is common and often difficult to treat – unless, of course, you consult a homeopath. Here is just one of virtually thousands of quotes from homeopaths available on the Internet: Homeopathic medicine can reduce Irritable Bowel Syndrome (IBS) symptoms by lowering food sensitivities and allergies. Homeopathy treats the patient as a whole and does not simply focus on the disease. Careful attention is given to the minute details about the presenting complaints, including the severity of diarrhea, constipation, pain, cramps, mucus in the stools, nausea, heartburn, emotional triggers and conventional laboratory findings. In addition, the patient’s eating habits, food preferences, thermal attributes and sleep patterns are noted. The patient’s family history and diseases, along with the patient’s emotions are discussed. Then the homeopathic practitioner will select the remedy that most closely matches the symptoms.

Such optimism might be refreshing, but is there any reason for it? Is homeopathy really an effective treatment for IBS? To answer this question, we now have a brand-new Cochrane review. The aim of this review was to assess the effectiveness and safety of homeopathic treatment for treating irritable bowel syndrome (IBS). (This type of statement always makes me a little suspicious; how on earth can anyone truly assess the safety of a treatment by looking at a few studies? This is NOT how one evaluates safety!) The authors conducted extensive literature searches to identify all RCTs, cohort and case-control studies that compared homeopathic treatment with placebo, other control treatments, or usual care in adults with IBS. The primary outcome was global improvement in IBS.

Three RCTs with a total of 213 participants were included. No cohort or case-control studies were identified. Two studies compared homeopathic remedies to placebos for constipation-predominant IBS. One study compared individualised homeopathic treatment to usual care defined as high doses of dicyclomine hydrochloride, faecal bulking agents and a high fibre diet. Due to the low quality of reporting, the risk of bias in all three studies was unclear on most criteria and high for some criteria.

A meta-analysis of two studies with a total of 129 participants with constipation-predominant IBS found a statistically significant difference in global improvement between the homeopathic ‘asafoetida’ and placebo at a short-term follow-up of two weeks. Seventy-three per cent of patients in the homeopathy group improved compared to 45% of placebo patients. There was no statistically significant difference in global improvement between the homeopathic asafoetida plus nux vomica compared to placebo. Sixty-eight per cent of patients in the homeopathy group improved compared to 52% of placebo patients.
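As a rough plausibility check (my own, not part of the review), the reported proportions can be run through a simple two-proportion z-test. The group sizes below are assumed, since only the pooled total of 129 participants is given here:

```python
import math

# Rough check of the pooled asafoetida result: ~73% vs ~45% improvement among
# 129 patients. The exact split is not stated, so group sizes of 64 vs 65
# are ASSUMED purely for illustration.
x1, n1 = 47, 64   # ~73% improved on asafoetida (assumed split)
x2, n2 = 29, 65   # ~45% improved on placebo (assumed split)

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p, normal approximation
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```

With these assumed numbers the difference is indeed statistically significant at conventional thresholds, which matches the review's claim; the real question, of course, is whether the trials were rigorous enough for that number to mean anything.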

The overall quality of the evidence was very low. There was no statistically significant difference between individualised homeopathic treatment and usual care for the outcome “feeling unwell”. None of the studies reported on adverse events (which, by the way, should be seen as a breach of research ethics on the part of the authors of the three primary studies).

The authors concluded that a pooled analysis of two small studies suggests a possible benefit for clinical homeopathy, using the remedy asafoetida, over placebo for people with constipation-predominant IBS. These results should be interpreted with caution due to the low quality of reporting in these trials, high or unknown risk of bias, short-term follow-up, and sparse data. One small study found no statistically significant difference between individualised homeopathy and usual care (defined as high doses of dicyclomine hydrochloride, faecal bulking agents and diet sheets advising a high fibre diet). No conclusions can be drawn from this study due to the low number of participants and the high risk of bias in this trial. In addition, it is likely that usual care has changed since this trial was conducted. Further high quality, adequately powered RCTs are required to assess the efficacy and safety of clinical and individualised homeopathy compared to placebo or usual care.

THIS REVIEW REQUIRES A FEW FURTHER COMMENTS, I THINK

Asafoetida, the remedy used in two of the studies, is a plant native to Pakistan, Iran and Afghanistan. It is used in Ayurvedic herbal medicine to treat colic, intestinal parasites and irritable bowel syndrome. In the ‘homeopathic’ trials, asafoetida was used in relatively low dilutions, ones that still contain molecules. It is therefore debatable whether this was really homeopathy or whether it is more akin to herbal medicine - it was certainly not homeopathy with its typical ultra-high dilutions.
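For readers unfamiliar with homeopathic nomenclature: each ‘C’ step is a 1:100 dilution, so the expected amount of starting substance falls off rapidly. A back-of-envelope calculation (assuming, purely for illustration, one mole of active substance at the start) shows why low ‘potencies’ still contain molecules while the typical ultra-high dilutions do not:

```python
# Expected molecules of starting substance per dose after serial 1:100 ("C")
# dilutions, assuming (hypothetically) one mole of active substance at the start.
AVOGADRO = 6.022e23  # molecules per mole

for c in (3, 6, 12, 30):
    molecules = AVOGADRO / 100 ** c
    print(f"{c}C: about {molecules:.3g} molecules expected per dose")
```

Beyond roughly 12C, the expected number of molecules drops below one, i.e. a dose almost certainly contains none of the original substance at all.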

Regardless of this detail, the Cochrane review hardly provides sound evidence for homeopathy’s efficacy. On the contrary, my reading of its findings is that the ‘possible benefit’ is NOT real but a false positive result caused by the serious limitations of the original studies. The authors stress that the apparently positive result ‘should be interpreted with caution’; that is certainly correct.

So, if you are a proponent of homeopathy, as the authors of the review seem to be, you will claim that homeopathy offers ‘possible benefits’ for IBS-sufferers. But if you are not convinced of the merits of homeopathy, you might suggest that the evidence is insufficient to recommend homeopathy. I imagine that IBS-sufferers might get as frustrated with such confusion as most scientists will be. Yet there is hope; the answer could be imminent: apparently, a new trial is due to report its results later this year.

IS THIS NEW TRIAL GOING TO CONTRIBUTE MEANINGFULLY TO OUR KNOWLEDGE?

It is a three-armed study (same 1st author as in the Cochrane review) which, according to its authors, seeks to explore the effectiveness of individualised homeopathic treatment plus usual care compared to both an attention control plus usual care and usual care alone, for patients with IBS. (Why “explore” and not “determine”, I ask myself.) Patients are randomly selected to be offered, 5 sessions of homeopathic treatment plus usual care, 5 sessions of supportive listening plus usual care or usual care alone. (“To be offered” looks odd to me; does that mean patients are not blinded to the interventions? Yes, indeed it does.) The primary clinical outcome is the IBS Symptom Severity at 26 weeks. Analysis will be by intention to treat and will compare homeopathic treatment with usual care at 26 weeks as the primary analysis, and homeopathic treatment with supportive listening as an additional analysis.

Hold on…the primary analysis “will compare homeopathic treatment with usual care“. Are they pulling my leg? They just told me that patients will be “offered, 5 sessions of homeopathic treatment plus usual care… or usual care alone“.

Oh, I see! We are again dealing with an A+B versus B design, and, on top of that, one without patient- or therapist-blinding. This type of analysis cannot ever produce a negative result, even if the experimental treatment is a pure placebo: placebo + usual care is always more than usual care alone. IBS-patients will certainly experience benefit from having the homeopaths’ time, empathy and compassion – never mind the remedies they get from them. And for the secondary analyses, things do not seem to be much more rigorous either.
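The problem with the A+B versus B design can be illustrated with a toy simulation. Here the remedy’s specific effect is deliberately set to zero, and only a modest non-specific (attention/placebo) effect is added to the add-on arm; all parameters are invented for illustration:

```python
import math
import random
import statistics

# Toy A+B vs B simulation. The remedy's SPECIFIC effect is set to zero;
# only a non-specific (attention/placebo) effect reaches the A+B arm.
# All numbers are made up purely for illustration.
random.seed(42)
N = 200                    # hypothetical patients per arm
SPECIFIC_EFFECT = 0.0      # the remedy itself does nothing
NONSPECIFIC_EFFECT = 3.0   # assumed placebo/attention effect (arbitrary units)

usual_care = [random.gauss(10, 5) for _ in range(N)]   # B: usual care alone
add_on = [random.gauss(10 + SPECIFIC_EFFECT + NONSPECIFIC_EFFECT, 5)
          for _ in range(N)]                            # A+B: remedy plus usual care

diff = statistics.mean(add_on) - statistics.mean(usual_care)
se = math.sqrt(statistics.variance(add_on) / N + statistics.variance(usual_care) / N)
z = diff / se
print(f"mean improvement difference (A+B minus B): {diff:.2f}, z = {z:.2f}")
```

Even with a specific effect of exactly zero, the comparison comes out clearly ‘positive’, which is precisely why such a design cannot generate a negative result for the experimental treatment.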

Do we really need more trials of this nature? The Cochrane review shows that we currently have three studies which are too flimsy to be interpretable. What difference will a further flimsy trial make in this situation? When will we stop wasting time and money on such useless ‘research’? All it can possibly achieve is that apologists of homeopathy will misinterpret the results and suggest that they demonstrate efficacy.

Obviously, I have not seen the data (they have not yet been published) but I think I can nevertheless predict the conclusions of the primary analysis of this trial; they will read something like this: HOMEOPATHY PROVED TO BE SIGNIFICANTLY MORE EFFECTIVE THAN USUAL CARE. I have asked the question before and I do it again: when does this sort of ‘research’ cross the line into the realm of scientific misconduct?
