
Recently, I came across the ‘Clinical Practice Guidelines on the Use of Integrative Therapies as Supportive Care in Patients Treated for Breast Cancer’ published by the ‘Society for Integrative Oncology (SIO) Guidelines Working Group’. The mission of the SIO is to “advance evidence-based, comprehensive, integrative healthcare to improve the lives of people affected by cancer. The SIO has consistently encouraged rigorous scientific evaluation of both pre-clinical and clinical science, while advocating for the transformation of oncology care to integrate evidence-based complementary approaches. The vision of SIO is to have research inform the true integration of complementary modalities into oncology care, so that evidence-based complementary care is accessible and part of standard cancer care for all patients across the cancer continuum. As an interdisciplinary and inter-professional society, SIO is uniquely poised to lead the “bench to bedside” efforts in integrative cancer care.”

The aim of the ‘Clinical Practice Guidelines’ was to “inform clinicians and patients about the evidence supporting or discouraging the use of specific complementary and integrative therapies for defined outcomes during and beyond breast cancer treatment, including symptom management.”

This sounds like a most laudable aim. Therefore I studied the document carefully and was surprised to read their conclusions: “Specific integrative therapies can be recommended as evidence-based supportive care options during breast cancer treatment.”

How can this be? On this blog, we have repeatedly seen evidence to suggest that integrative medicine is little more than the admission of quackery into evidence-based healthcare. This got me wondering how their conclusion had been reached, and I checked the document even closer.

On the surface, it seemed well-made. A team of researchers first defined the treatments they wanted to look at, then they searched for RCTs, evaluated their quality, extracted their results, combined them into an overall verdict and wrote the whole thing up. In a word, they conducted what seems a proper systematic review.

Based on the findings of their review, they then issued recommendations which I thought were baffling in several respects. Let me just focus on three of the SIO’s recommendations dealing with acupuncture:

  1. “Acupuncture can be considered for treating anxiety concurrent with ongoing fatigue…” [only RCT (1) cited in support]
  2. “Acupuncture can be considered for improving depressive symptoms in women suffering from hot flashes…” [RCTs (1 and 2) cited in support] 
  3. “Acupuncture can be considered for treating anxiety concurrent with ongoing fatigue…” [only RCT (1) cited in support]

One or two studies as a basis for far-reaching guidelines? Yes, that would normally be a concern! But, at closer scrutiny, my worries about these recommendations turned out to be much more serious than this.

The actual RCT (1) cited in support of all three recommendations stated that the authors “randomly assigned 75 patients to usual care and 227 patients to acupuncture plus usual care…” As we have discussed often before on this blog and elsewhere, such an ‘A+B versus B’ study design will never generate a negative result, does not control for placebo-effects and is certainly not a valid test of the effectiveness of the treatment in question. Nevertheless, the authors of this study concluded that: “Acupuncture is an effective intervention for managing the symptom of cancer-related fatigue and improving patients’ quality of life.”

RCT (2) cited in support of recommendation number 2 seems to be a citation error; the study in question is not an acupuncture trial and does not back the statement in question. I suspect they meant to cite their reference number 87 (instead of 88). This trial is an equivalence study in which 50 patients were randomly assigned to receive 12 weeks of acupuncture (n = 25) or venlafaxine (n = 25) treatment for cancer-related hot flushes. Its results indicate that the two treatments had similar effects. As the two therapies could also have been equally ineffective, it is impossible, in my view, to conclude that acupuncture is effective.

Finally, RCT (1) in no way supports recommendation number two. Yet RCT (1) and RCT (2) were both cited in support of this recommendation.

I have not systematically checked any other claims made in this document, but I get the impression that many other recommendations made here are based on similarly ‘liberal’ interpretations of the evidence. How can the ‘Society for Integrative Oncology’ use such dodgy pseudo-science for formulating potentially far-reaching guidelines?

I know none of the authors (Heather Greenlee, Lynda G. Balneaves, Linda E. Carlson, Misha Cohen, Gary Deng, Dawn Hershman, Matthew Mumber, Jane Perlmutter, Dugald Seely, Ananda Sen, Suzanna M. Zick, Debu Tripathy) of the document personally. They made the following collective statement about their conflicts of interest: “There are no financial conflicts of interest to disclose. We note that some authors have conducted/authored some of the studies included in the review.” I am a little puzzled to hear that they have no financial conflicts of interest (do not most of them earn their living by practising integrative medicine? Yes they do! The article informs us that: “A multidisciplinary panel of experts in oncology and integrative medicine was assembled to prepare these clinical practice guidelines. Panel members have expertise in medical oncology, radiation oncology, nursing, psychology, naturopathic medicine, traditional Chinese medicine, acupuncture, epidemiology, biostatistics, and patient advocacy.”). I also suspect they have other, potentially much stronger conflicts of interest. They belong to a group of people who seem to religiously believe in the largely nonsensical concept of integrative medicine. Integrating unproven treatments into healthcare must affect its quality in much the same way as the integration of cow pie into apple pie would affect the taste of the latter.

After considering all this carefully, I cannot help wondering whether these ‘Clinical Practice Guidelines’ by the ‘Society for Integrative Oncology’ are just full of honest errors or whether they amount to fraud and scientific misconduct.

WHATEVER THE ANSWER, THE GUIDELINES MUST BE RETRACTED, IF THIS SOCIETY WANTS TO AVOID LOSING ALL CREDIBILITY.

In recent blogs, I have written much about acupuncture and particularly about the unscientific notions of traditional acupuncturists. I was therefore surprised to see that a UK charity is teaming up with traditional acupuncturists in an exercise that looks as though it is designed to mislead the public.

The website of ‘Anxiety UK’ informs us that this charity and the British Acupuncture Council (BAcC) have launched a ‘pilot project’ which will see members of Anxiety UK being able to access traditional acupuncture through this new partnership. Throughout the pilot project, they proudly proclaim, data will be collected to “determine the effectiveness of traditional acupuncture for treating those living with anxiety and anxiety based depression.”

This, they believe, will enable both parties to continue to build a body of evidence to measure the success rate of this type of treatment. Anxiety UK’s Chief Executive Nicky Lidbetter said: “This is an exciting project and will provide us with valuable data and outcomes for those members who take part in the pilot and allow us to assess the benefits of extending the pilot to a regular service for those living with anxiety. “We know anecdotally that many people find complementary therapies used to support conventional care can provide enormous benefit, although it should be remembered they are used in addition to and not instead of seeking medical advice from a doctor or taking prescribed medication. This supports our strategic aim to ensure that we continue to make therapies and services that are of benefit to those with anxiety and anxiety based depression, accessible.”

And what is wrong with that, you might ask.

What is NOT wrong with it, would be my response.

To start with, traditional acupuncture relies on obsolete assumptions such as yin and yang, meridians, energy flow, acupuncture points etc. They have one thing in common: they fly in the face of science and evidence. But this might just be a triviality. More important, I believe, is the fact that a pilot project cannot determine the effectiveness of a therapy. Therefore the whole exercise smells very much like a promotional activity for pure quackery.

And what about the hint at anecdotal evidence in support of this approach? Are they not able to run a simple Medline search? If they had done one, they would have found a plethora of articles on the subject, most of them showing that there are plenty of studies, the majority of which are too flawed to allow firm conclusions.

A review by someone who certainly cannot be accused of being biased against alternative medicine, for instance, informs us that “trials in depression, anxiety disorders and short-term acute anxiety have been conducted but acupuncture interventions employed in trials vary as do the controls against which these are compared. Many trials also suffer from small sample sizes. Consequently, it has not proved possible to accurately assess the effectiveness of acupuncture for these conditions or the relative effectiveness of different treatment regimens. The results of studies showing similar effects of needling at specific and non-specific points have further complicated the interpretation of results. In addition to measuring clinical response, several clinical studies have assessed changes in levels of neurotransmitters and other biological response modifiers in an attempt to elucidate the specific biological actions of acupuncture. The findings offer some preliminary data requiring further investigation.”

Elsewhere, the same author, together with other pro-acupuncture researchers, wrote this: “Positive findings are reported for acupuncture in the treatment of generalised anxiety disorder or anxiety neurosis but there is currently insufficient research evidence for firm conclusions to be drawn. No trials of acupuncture for other anxiety disorders were located. There is some limited evidence in favour of auricular acupuncture in perioperative anxiety. Overall, the promising findings indicate that further research is warranted in the form of well designed, adequately powered studies.”

What does this mean in the context of the charity’s project?

I think it tells us that acupuncture for anxiety is not exactly the most promising approach for further investigation. Even in the realm of alternative medicine, there are several interventions which are supported by more encouraging evidence. And even if one disagrees with this statement, one cannot possibly disagree with the fact that more flimsy research is not required. If we do need more studies, they must be rigorous, not promotion thinly disguised as science.

I guess the ultimate question here is one of ethics. Do charities not have an ethical and moral duty to spend our donations wisely and productively? When does such ill-conceived pseudo-research cross the line to become offensive or even fraudulent?

The randomized, placebo-controlled, double-blind trial is usually the methodology that carries the least risk of bias when testing the efficacy of a therapy. This fact is an obvious annoyance to some alt med enthusiasts, because such trials far too often fail to produce the results they were hoping for.

But there is no need to despair. Here I provide a few simple tips on how to mislead the public with seemingly rigorous trials.

1 FRAUD

The most brutal method for misleading people is simply to cheat. The Germans have a saying, ‘Papier ist geduldig’ (paper is patient), implying that anyone can put anything on paper. Fortunately we currently have plenty of alt med journals which publish any rubbish anyone might dream up. The process of ‘peer-review’ is one of several mechanisms supposed to minimise the risk of scientific fraud. Yet alt med journals are cleverer than that! They tend to have a peer-review process that rarely involves independent and critical scientists; more often than not, you can even ask that your best friend be invited to do the peer-review, and the alt med journal will follow your wish. Consequently, the door is wide open to cheating. Once your fraudulent paper has been published, it is almost impossible to tell that something is fundamentally wrong.

But cheating is not confined to original research. You can also apply the method to other types of research, of course. For instance, the authors of the infamous ‘Swiss report’ on homeopathy generated a false positive picture using published systematic reviews of mine by simply changing their conclusions from negative to positive. Simple!

2 PRETTIFICATION

Obviously, outright cheating is not always as simple as that. Even in alt med, you cannot easily claim to have conducted a clinical trial without a complex infrastructure which invariably involves other people. And they are likely to want to have some control over what is happening. This means that complete fabrication of an entire data set may not always be possible. What might still be feasible, however, is the ‘prettification’ of the results. By just ‘re-adjusting’ a few data points that failed to live up to your expectations, you might be able to turn a negative into a positive trial. Proper governance is aimed at preventing this type of ‘mini-fraud’, but fortunately you work in alt med, where such mechanisms are rarely adequately implemented.
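How little ‘re-adjusting’ this can take is easy to demonstrate. Below is a minimal sketch in Python – all numbers invented, based on no real trial – in which an honest null trial is ‘prettified’ by nudging the worst responders upwards, one by one, until the p-value slips below 0.05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # patients per arm, purely illustrative
treatment = rng.normal(0.0, 1.0, n)  # the therapy has no real effect
control = rng.normal(0.0, 1.0, n)

print(f"honest analysis:      p = {stats.ttest_ind(treatment, control).pvalue:.3f}")

# 'prettification': the worst responders are 're-inspected' and adjusted
# upwards, one at a time, until the trial turns 'positive'
prettified = treatment.copy()
adjusted = 0
while stats.ttest_ind(prettified, control).pvalue >= 0.05:
    prettified[np.argmin(prettified)] += 1.5  # re-adjust one data point
    adjusted += 1

print(f"data points adjusted: {adjusted}")
print(f"prettified analysis:  p = {stats.ttest_ind(prettified, control).pvalue:.3f}")
```

The exact count will vary with the data; the point is how mechanical the exercise is, and that without access to the raw data no reader could tell.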

3 OMISSION

Another very handy method is the omission of aspects of your trial which regrettably turned out to be in disagreement with the desired overall result. In most studies, one has a myriad of endpoints. Once the statistics of your trial have been calculated, it is likely that some of them yield the wanted positive results, while others do not. By simply omitting any mention of the embarrassingly negative results, you can easily turn a largely negative study into a seemingly positive one. Normally, researchers have to rely on a pre-specified protocol which defines a primary outcome measure. Thankfully, in the absence of proper governance, it usually is possible to publish a report which obscures such detail and thus mislead the public (I even think there has been an example of such an omission on this very blog).

4 STATISTICS

Yes – lies, damned lies, and statistics! A gifted statistician can easily find ways to ‘torture the data until they confess’. One only has to run statistical test after statistical test, and BINGO, one will eventually yield something that can be marketed as the longed-for positive result. Normally, researchers must have a protocol that pre-specifies all the methodologies used in a trial, including the statistical analyses. But, in alt med, we certainly do not want things to function normally, do we?
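The arithmetic behind the endpoint game (tricks 3 and 4 combined) is sobering: with 20 independent endpoints and a therapy that does nothing at all, the chance of at least one ‘significant’ result at p < 0.05 is 1 − 0.95^20, roughly 64%. A quick simulation – again a Python sketch with invented numbers – confirms it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_arm, n_endpoints, n_trials = 30, 20, 2000

lucky_trials = 0
for _ in range(n_trials):
    # an utterly ineffective therapy: both arms drawn from the same distribution
    a = rng.normal(size=(n_endpoints, n_per_arm))
    b = rng.normal(size=(n_endpoints, n_per_arm))
    pvals = stats.ttest_ind(a, b, axis=1).pvalue  # one t-test per endpoint
    if (pvals < 0.05).any():
        lucky_trials += 1  # at least one endpoint to brag about

print(f"trials with >=1 'significant' endpoint: {lucky_trials / n_trials:.0%}")
# close to 1 - 0.95**20 = 64%, although the therapy does nothing at all
```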

5 TRIAL DESIGNS THAT CANNOT GENERATE A NEGATIVE RESULT

All the above tricks are a bit fraudulent, of course. Unfortunately, not everyone approves of fraud. Therefore, a more legitimate means of misleading the public would be highly desirable for those aspiring alt med researchers who do not want to tarnish their record. No worries guys, help is on the way!

The fool-proof trial design is obviously the often-mentioned ‘A+B versus B’ design. In such a study, patients are randomized to receive an alt med treatment (A) together with usual care (B), or usual care (B) alone. This looks rigorous, can be sold as a ‘pragmatic’ trial addressing a real-life problem, and has the enormous advantage of never failing to produce a positive result: A+B is always more than B alone, even if A is a pure placebo. Such trials are akin to going into a hamburger joint to measure the calories of a Big Mac without chips and comparing them to the calories of a Big Mac with chips. We know the result before the research has started; in alt med, that’s how it should be!
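For anyone who wants to see the trick in numbers rather than hamburgers, here is a minimal simulation (Python; all effect sizes are invented). Treatment A is modelled as a pure placebo that contributes only non-specific effects – attention, expectation, ritual – and the ‘A+B versus B’ comparison still comes out nicely ‘positive’:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # patients per arm

usual_care = 2.0       # improvement produced by B (usual care) alone
nonspecific = 0.5      # attention, expectation, ritual - not a specific effect of A
specific_effect = 0.0  # A is modelled as a pure placebo

b_alone = usual_care + rng.normal(0, 1, n)
a_plus_b = usual_care + nonspecific + specific_effect + rng.normal(0, 1, n)

result = stats.ttest_ind(a_plus_b, b_alone)
print(f"mean improvement  A+B: {a_plus_b.mean():.2f}   B alone: {b_alone.mean():.2f}")
print(f"p = {result.pvalue:.4f}  ->  'A is effective!' (it is not)")
```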

I have been banging on about the ‘A+B versus B’ design often enough, but recently I came across a new study design used in alt med which is just as elegantly misleading. The trial in question has a promising title: Quality-of-life outcomes in patients with gynecologic cancer referred to integrative oncology treatment during chemotherapy. Here is the unabbreviated abstract:

OBJECTIVE:

Integrative oncology incorporates complementary medicine (CM) therapies in patients with cancer. We explored the impact of an integrative oncology therapeutic regimen on quality-of-life (QOL) outcomes in women with gynecological cancer undergoing chemotherapy.

PATIENTS AND METHODS:

A prospective preference study examined patients referred by oncology health care practitioners (HCPs) to an integrative physician (IP) consultation and CM treatments. QOL and chemotherapy-related toxicities were evaluated using the Edmonton Symptom Assessment Scale (ESAS) and Measure Yourself Concerns and Wellbeing (MYCAW) questionnaire, at baseline and at a 6-12-week follow-up assessment. Adherence to the integrative care (AIC) program was defined as ≥4 CM treatments, with ≤30 days between each session.

RESULTS:

Of 128 patients referred by their HCP, 102 underwent IP consultation and subsequent CM treatments. The main concerns expressed by patients were fatigue (79.8 %), gastrointestinal symptoms (64.6 %), pain and neuropathy (54.5 %), and emotional distress (45.5 %). Patients in both AIC (n = 68) and non-AIC (n = 28) groups shared similar demographic, treatment, and cancer-related characteristics. ESAS fatigue scores improved by a mean of 1.97 points in the AIC group on a scale of 0-10 and worsened by a mean of 0.27 points in the non-AIC group (p = 0.033). In the AIC group, MYCAW scores improved significantly (p < 0.0001) for each of the leading concerns as well as for well-being, a finding which was not apparent in the non-AIC group.

CONCLUSIONS:

An IP-guided CM treatment regimen provided to patients with gynecological cancer during chemotherapy may reduce cancer-related fatigue and improve other QOL outcomes.

A ‘prospective preference study’ – this is the design the world of alt med has been yearning for! Its principle is beautiful in its simplicity. One merely administers a treatment or treatment package to a group of patients; inevitably some patients take it, while others don’t. The reasons for not taking it could range from lack of perceived effectiveness to experience of side-effects. But never mind, the fact that some do not want your treatment provides you with two groups of patients: those who comply and those who do not comply. With a bit of skill, you can now make the non-compliers appear like a proper control group. Now you only need to compare the outcomes and BOB IS YOUR UNCLE!

Brilliant! Absolutely brilliant!

I cannot think of a more deceptive trial-design than this one; it will make any treatment look good, even one that is a mere placebo. Alright, it is not randomized, and it does not even have a proper control group. But it sure looks rigorous and meaningful, this ‘prospective preference study’!
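To put a number on the deception, here is a minimal sketch (Python; every parameter is invented and has nothing to do with the study above). The therapy is modelled as completely inert, and the only assumption is that patients with a better underlying prognosis are more likely to adhere to it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 200

# each patient has a latent prognosis; better-prognosis patients are
# assumed to be more likely to stick with (>=4 sessions of) an inert therapy
prognosis = rng.normal(0, 1, n)
adheres = rng.random(n) < 1 / (1 + np.exp(-prognosis))

# improvement depends only on prognosis plus noise; the therapy does nothing
improvement = prognosis + rng.normal(0, 1, n)

result = stats.ttest_ind(improvement[adheres], improvement[~adheres])
print(f"adherent group:     {improvement[adheres].mean():+.2f}")
print(f"non-adherent group: {improvement[~adheres].mean():+.2f}")
print(f"p = {result.pvalue:.4f}  ->  the inert therapy looks effective")
```

This ‘healthy adherer’ effect is well documented: even in conventional drug trials, patients who faithfully take their placebo fare better than those who do not.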

This study created a media storm when it was first published. Several articles in the lay press seemed to advertise it as though a true breakthrough had been made in the treatment of hypertension. I would not be surprised if many patients consequently threw their anti-hypertensives overboard and queued up at their local acupuncturist.

Good for business, no doubt – but would this be a wise decision?

The aim of this clinical trial was to examine the effectiveness of electroacupuncture (EA) for reducing systolic blood pressure (SBP) and diastolic blood pressure (DBP) in hypertensive patients. Sixty-five hypertensive patients not receiving medication were randomly assigned to one of two acupuncture interventions. Patients were assessed with 24-hour ambulatory blood pressure monitoring. They were treated by 4 acupuncturists with 30 minutes of EA at PC 5-6+ST 36-37 or LI 6-7+GB 37-39 (control group) once weekly for 8 weeks. Primary outcomes measuring effectiveness of EA were peak and average SBP and DBP. Secondary outcomes examined underlying mechanisms of acupuncture with plasma norepinephrine, renin, and aldosterone before and after 8 weeks of treatment. Outcomes were obtained by blinded evaluators.

After 8 weeks, 33 patients treated with EA at PC 5-6+ST 36-37 had decreased peak and average SBP and DBP, compared with 32 patients treated with EA at LI 6-7+GB 37-39 control acupoints. Changes in blood pressures significantly differed between the two patient groups. In 14 patients, a long-lasting blood pressure–lowering acupuncture effect was observed for an additional 4 weeks of EA at PC 5-6+ST 36-37. After treatment, the plasma concentration of norepinephrine, which was initially elevated, was decreased by 41%; likewise, renin was decreased by 67% and aldosterone by 22%.

The authors concluded that EA at select acupoints reduces blood pressure. Sympathetic and renin-aldosterone systems were likely related to the long-lasting EA actions.

These results are baffling, to say the least; and they contradict a recent meta-analysis which did not find that acupuncture, without antihypertensive medication, significantly improves blood pressure in hypertensive patients.

So, who is right and who is wrong here?

Or shall we just look for alternative explanations of the effects observed in the new study?

There could be dozens of reasons for these findings that are unrelated to the alleged effects of acupuncture. For instance, they could be due to life-style changes suggested to the experimental but not the control group, or they might be caused by some other undisclosed bias or confounding. At the very minimum, we should insist on an independent replication of this trial.

It would be silly, I think, to trust these results and now recommend acupuncture to the millions of hypertensive patients worldwide, particularly as dozens of safe, cheap and very effective treatments for hypertension do already exist.

This seems to be the question that occupies the minds of several homeopaths.

Amazed?

So was I!

Let me explain.

In 1997, Linde et al published their now famous meta-analysis of clinical trials of homeopathy which concluded that “The results of our meta-analysis are not compatible with the hypothesis that the clinical effects of homeopathy are completely due to placebo. However, we found insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition. Further research on homeopathy is warranted provided it is rigorous and systematic.”

This paper had several limitations which Linde was only too happy to admit. The authors therefore conducted a re-analysis which, even though published in an excellent journal, is rarely cited by homeopaths. Linde et al stated in their re-analysis of 2000: “there was clear evidence that studies with better methodological quality tended to yield less positive results.” It was this phenomenon that prompted me and my colleague Max Pittler to publish a ‘letter to the editor’ which now – 15 years later – seems to be the stone of homeopathic contention.

A blog-post by a believer in homeopathy even asks the interesting question: Did Professor Ernst Sell His Soul to Big Pharma? It continues as follows:

Edzard Ernst is an anti-homeopath who spent his career attacking traditional medicine. In 1993 he became Professor of Complementary Medicine at the University of Exeter. He is often described as the first professor of complementary medicine, but the title he assumed should have fooled no-one. His aim was to discredit medical therapies, notably homeopathy, and he then published some 700 papers in ‘scientific’ journals to do so.

Now, Professor Robert Hahn, in his blog, has made an assessment of the quality of his work… In the interests of the honesty and integrity in science, it is an important assessment. It shows, in his view, how science has been taken over by ideology (or as I would suggest, more accurately, the financial interests of Big Corporations, in this case, Big Pharma). The blog indicates that in order to demonstrate that homeopathy is ineffective, over 95% of scientific research into homeopathy has to be discarded or removed! 

So for those people who, like myself, cannot read the original German, here is an English translation of the blog…

“I have never seen a science writer so blatantly biased as Edzard Ernst: his work should not be considered of any worth at all, and discarded” finds Sweden’s Professor Robert Hahn, a leading medical scientist, physician, and Professor of Anaesthesia and Intensive Care at the University of Linköping, Sweden.

Hahn determined therefore to analyze for himself the ‘research’ which supposedly demonstrated homeopathy to be ineffective, and reached the shocking conclusion that:

“only by discarding 98% of homeopathy trials and carrying out a statistical meta-analysis on the remaining 2% negative studies, can one ‘prove’ that homeopathy is ineffective”.

In other words, all supposedly negative homeopathic meta-analyses which opponents of homeopathy have relied on, are scientifically bogus…
 
Who can you trust? We can begin by disregarding Edzard Ernst. I have read several other studies that he has published, and they are all untrustworthy. His work should be discarded…

In the case of homeopathy, one should stick with what the evidence reveals. And the evidence is that only by removing 95-98% of all studies is the effectiveness of homeopathy not demonstrable…

So, now you are wondering, I am sure: HOW MUCH DID HE GET FOR SELLING HIS SOUL TO BIG PHARMA?

No? You are wondering 1) who this brilliant Swedish scientist, Prof Hahn, is and 2) what article of mine he is criticising? Alright, I will try to enlighten you.

PROFESSOR HAHN

Here I can rely on a comment posted on my blog some time ago by someone who can read Swedish (thank you Bjorn). He commented about Hahn as follows:

A renowned director of medical research with well over 300 publications on anesthesia and intensive care and 16 graduated PhD students under his mentorship, who has been leading a life on the side, blogging and writing about spiritualism, and alternative medicine and now ventures on a public crusade for resurrecting the failing realm of homeopathy!?! Unbelievable!

I was unaware of this person before, even if I have lived and worked in Sweden for decades.

I have spent the evening looking up his net-track and at his blog at roberthahn.nu (in Swedish).

I will try to summarise some first impressions:

Hahn is evidently deeply religious and there is the usual, unmistakably narcissistic aura over his writings and sayings. He is religiously confident that there is more to this world than what can be measured and sensed. In effect, he seems to believe that homeopathy (as well as alternative medical methods in general) must work because there are people who say they have experienced it and denying the possibility is akin to heresy (not his wording but the essence of his writing).

He has, along with his wife, authored at least three books on spiritual matters with titles such as (my translations) “Clear replies from the spiritual world” and “Connections of souls”.

He has a serious issue with skeptics and goes on at length about how they are dishonest bluffers[sic] who willfully cherry-pick and misinterpret evidence to fit their preconceived beliefs.

He feels that desperate patients should generally be allowed the chance that alternative methods may offer.

He believes firmly in former-life memories, including his own, which he claims he has found verification for in an ancient Italian parchment.

His main arguments for homeopathy are Claus Linde’s meta analyses and the sheer number of homeopathic research that he firmly believes shows it being superior to placebo, a fact that (in his opinion) shows it has a biological effect. Shang’s work from 2005 he dismisses as seriously flawed.

He also points to individual research like this as credible proof of the biologic effect of remedies.

He somewhat surprisingly denies recommending homeopathy despite being convinced of its effect and maintains that he wants better, more problem oriented and disease specific studies to clarify its applicability. (my interpretation)

If it weren’t for his track record of genuine, acknowledged medical research and him being a renowned authority in a genuine, scientific medical field, this man would be an ordinary, religiously devout quack.

What strikes me as perhaps telling of a consequence of his “exoscientific” activity, is that Hahn, who holds the position of research director at a large city trauma and emergency hospital is an “adjungerad professor”, which is (usually) a part time, time limited, externally financed professorial position, while any Swedish medical doctor with his very extensive formal merits would very likely hold a full professorship at an academic institution.

END OF QUOTE

MY 2000 PAPER THAT SEEMS TO IRRITATE HAHN

This was a short ‘letter to the editor’ by Ernst and Pittler published in the J Clin Epidemiol commenting on the above-mentioned re-analysis by Linde et al which was published in the same journal. As its text is not available on-line, I re-type parts of it here:

In an interesting re-analysis of their meta-analysis of clinical trials of homeopathy, Linde et al conclude that there is no linear relationship between quality scores and study outcome. We have simply re-plotted their data and arrive at a different conclusion. There is an almost perfect correlation between the odds ratio and the Jadad score between the range of 1-4… [some technical explanations follow which I omit]…Linde et al can be seen as the ultimate epidemiological proof that homeopathy is, in fact, a placebo.
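For the technically minded, the re-plot is trivial to reproduce in principle. The sketch below (Python) uses purely illustrative numbers – NOT the actual data of Linde et al – merely to show the kind of pattern the letter describes: the higher the Jadad score, the closer the odds ratio creeps towards 1, i.e. towards ‘indistinguishable from placebo’:

```python
import numpy as np
from scipy import stats

# ILLUSTRATIVE numbers only - not the data of Linde et al:
# pooled odds ratios by Jadad quality score, showing the pattern described
# (the better the trial, the smaller the apparent effect of homeopathy)
jadad = np.array([1.0, 2.0, 3.0, 4.0])
odds_ratio = np.array([2.6, 2.2, 1.7, 1.3])

r, p = stats.pearsonr(jadad, np.log(odds_ratio))
print(f"correlation of quality with log(OR): r = {r:.2f} (p = {p:.3f})")
# extrapolating such a trend to a maximal-quality trial pushes the OR
# towards 1, i.e. towards 'no effect beyond placebo'
```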

And that is, as far as I can see, the whole mysterious story. I cannot even draw a conclusion – all I can do is to ask a question:

DOES ANYONE UNDERSTAND WHAT THEY ARE GOING ON ABOUT?

In the realm of alternative medicine, we encounter many therapeutic claims that beggar belief. This is true for most modalities but perhaps for none more than chiropractic. Many chiropractors still adhere to Palmer’s gospel of the ‘innate’, the ‘subluxation’ etc., and thus they believe that their ‘adjustments’ are a cure-all. Readers of this blog will know all that, of course, but even they might be surprised by the notion that a chiropractic adjustment improves the voice of a choir singer.

This, however, is precisely the ‘hypothesis’ that was recently submitted to an RCT. To be precise, the study investigated the effect of spinal manipulative therapy (SMT) on the singing voice of male individuals.

Twenty-nine subjects were selected among male members of a local choir. Participants were randomly assigned to two groups: (A) a single session of chiropractic SMT and (B) a single session of non-therapeutic transcutaneous electrical nerve stimulation (TENS). Recordings of the singing voice of each participant were taken immediately before and after the procedures. After a 14-day wash-out period, procedures were switched between groups: participants who underwent SMT on the first occasion were now subjected to TENS and vice versa. Recordings were assessed via perceptual audio and acoustic evaluations. The same recording segment of each participant was selected. Perceptual audio evaluation was performed by a specialist panel (SP). Recordings of each participant were randomly presented thus making the SP blind to intervention type and recording session (before/after intervention). Recordings compiled in a randomized order were also subjected to acoustic evaluation.

No differences in the quality of the singing on perceptual audio evaluation were observed between TENS and SMT.

The authors concluded that no differences in the quality of the singing voice of asymptomatic male singers were observed on perceptual audio evaluation or acoustic evaluation after a single spinal manipulative intervention of the thoracic and cervical spine.

Laughable? Yes!

There is nevertheless an important point to be made here, I feel: some claims are just too silly to waste resources on. Or, to put it in more scientific terms, hypotheses require much more than a vague notion or hunch.

To set up, conduct and eventually publish an RCT as above requires expertise, commitment, time and money. All of this is entirely wasted, if the prior probability of a relevant result approaches zero. In the realm of alternative medicine, this is depressingly often the case. In the final analysis, this suggests that all too often research in this area achieves nothing other than giving science a bad name.

A paper entitled ‘Real world research: a complementary method to establish the effectiveness of acupuncture’ caught my attention recently. I find it quite remarkable and think it might stimulate some discussion on this blog.  Here is its abstract:

Acupuncture has been widely used in the management of a variety of diseases for thousands of years, and many relevant randomized controlled trials have been published. In recent years, many randomized controlled trials have provided controversial or less-than-convincing evidence that supports the efficacy of acupuncture. The clinical effectiveness of acupuncture in Western countries remains controversial.

Acupuncture is a complex intervention involving needling components, specific non-needling components, and generic components. Common problems that have contributed to the equivocal findings in acupuncture randomized controlled trials were imperfections regarding acupuncture treatment and inappropriate placebo/sham controls. In addition, some inherent limitations were also present in the design and implementation of current acupuncture randomized controlled trials such as weak external validity. The current designs of randomized controlled trials of acupuncture need to be further developed. In contrast to examining efficacy and adverse reaction in a “sterilized” environment in a narrowly defined population, real world research assesses the effectiveness and safety of an intervention in a much wider population in real world practice. For this reason, real world research might be a feasible and meaningful method for acupuncture assessment. Randomized controlled trials are important in verifying the efficacy of acupuncture treatment, but the authors believe that real world research, if designed and conducted appropriately, can complement randomized controlled trials to establish the effectiveness of acupuncture. Furthermore, the integrative model that can incorporate randomized controlled trial and real world research which can complement each other and potentially provide more objective and persuasive evidence.

In the article itself, the authors list seven criteria for what they consider good research into acupuncture:

  1. Acupuncture should be regarded as complex and individualized treatment;
  2. The study aim (whether to assess the efficacy of acupuncture needling or the effectiveness of acupuncture treatment) should be clearly defined and differentiated;
  3. Pattern identification should be clearly specified, and non-needling components should also be considered;
  4. The treatment protocol should have some degree of flexibility to allow for individualization;
  5. The placebo or sham acupuncture should be appropriate: knowing “what to avoid” and “what to mimic” in placebos/shams;
  6. In addition to “hard evidence”, one should consider patient-reported outcomes, economic evaluations, patient preferences and the effect of expectancy;
  7. The use of qualitative research (e.g., interview) to explore some missing areas (e.g., experience of practitioners and patient-practitioner relationship) in acupuncture research.

Furthermore, the authors list the advantages of their RWR-concept:

  1. In RWR, interventions are tailored to the patients’ specific conditions, in contrast to standardized treatment. As a result, conclusions based on RWR consider all aspects of acupuncture that affect the effectiveness.
  2. At an operational level, patients’ choice of the treatment(s) decreases the difficulties in recruiting and retaining patients during the data collection period.
  3. The study sample in RWR is much more representative of the real world situation (similar to the section of the population that receives the treatment). The study, therefore, has higher external validity.
  4. RWR tends to have a larger sample size and longer follow-up period than RCT, and thus is more appropriate for assessing the safety of acupuncture.

The authors make much of their notion that acupuncture is a COMPLEX INTERVENTION; specifically they claim the following: Acupuncture treatment includes three aspects: needling, specific non-needling components drove by acupuncture theory, and generic components not unique to acupuncture treatment. In addition, acupuncture treatment should be performed on the basis of the patient condition and traditional Chinese medicine (TCM) theory.

There is so much BS here that it is hard to decide where to begin refuting. As the assumption of acupuncture or other alternative therapies being COMPLEX INTERVENTIONS (and therefore exempt from rigorous tests) is highly prevalent in this field, let me try to just briefly tackle this one.

The last time I saw a patient and prescribed a drug treatment I did all of the following:

  • I greeted her, asked her to sit down and tried to make her feel relaxed.
  • I first had a quick chat about something trivial.
  • I then asked why she had come to see me.
  • I started to take notes.
  • I inquired about the exact nature and the history of her problem.
  • I then asked her about her general medical history, family history and her life-style.
  • I also asked about any psychological problems that might relate to her symptoms.
  • I then conducted a physical examination.
  • Subsequently we discussed what her diagnosis might be.
  • I told her what my working diagnosis was.
  • I ordered a few tests to either confirm or refute it and explained them to her.
  • We decided that she should come back and see me in a few days when her tests had come back.
  • In order to ease her symptoms in the meanwhile, I gave her a prescription for a drug.
  • We discussed this treatment, how and when she should take it, adverse effects etc.
  • We also discussed other therapeutic options, in case the prescribed treatment was in any way unsatisfactory.
  • I reassured her by telling her that her condition did not seem to be serious and stressed that I was confident to be able to help her.
  • She left my office.

The point I am trying to make is: prescribing an entirely straightforward drug treatment is also a COMPLEX INTERVENTION. In fact, I know of no treatment that is NOT complex.

Does that mean that drugs and all other interventions are exempt from being tested in rigorous RCTs? Should we allow drug companies to adopt the RWR too? Any old placebo would pass that test and could be made to look effective using RWR. In the example above, my compassion, care and reassurance would alleviate my patient’s symptoms, even if the prescription I gave her was complete rubbish.
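And there is a further mechanism that makes uncontrolled ‘real world’ data flatter any intervention: regression to the mean. Patients tend to seek treatment when their symptoms flare; at follow-up they are, on average, back near their personal baseline, treatment or no treatment. A minimal simulation (Python, invented numbers) shows the size of the artefact:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# a chronic condition: each patient's symptom severity fluctuates
# around a stable personal mean
personal_mean = rng.normal(5.0, 1.0, n)
today = personal_mean + rng.normal(0, 1.5, n)

# 'real world': patients enrol (and data collection starts) when symptoms flare
enrolled = today > 6.5
followup = personal_mean[enrolled] + rng.normal(0, 1.5, enrolled.sum())

print(f"severity at enrolment: {today[enrolled].mean():.2f}")
print(f"severity at follow-up: {followup.mean():.2f}")
# a marked 'improvement' with zero treatment effect: regression to the mean
```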

So why should acupuncture (or any other alternative therapy) not be tested in proper RCTs? I fear the reason is that RCTs might show that it is not as effective as its proponents had hoped. The conclusion about the RWR is thus embarrassingly simple: proponents of alternative medicine want double standards, because a single standard would risk disclosing the truth.

As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):

BACKGROUND:

A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.

METHODS:

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

RESULTS:

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

CONCLUSIONS:

Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

Since my team had published an RCT of individualised homeopathy, it seems only natural that my interest focussed on why our study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.

I was convinced that this trial had been rigorous and thus puzzled why, despite receiving ‘full marks’ from the reviewers, they had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data provided by us would not lend themselves to meta-analysis. By electing to use not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to reject it from their meta-analysis.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO“. This, I would argue is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).
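For those unfamiliar with the technicalities: to enter a continuous outcome into such a meta-analysis, reviewers typically compute a standardised mean difference from endpoint means and SDs and then, as the abstract put it, apply an ‘arithmetic transformation’ to express it as an odds ratio. Here is a sketch of one standard route (Python; the figures are hypothetical and not from our trial), including why change-from-baseline data alone can block the calculation:

```python
import numpy as np

def smd_from_endpoints(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d) from endpoint means and SDs."""
    pooled_sd = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                        / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def or_from_smd(d):
    """Chinn (2000): ln(OR) is approximately d * pi / sqrt(3)."""
    return np.exp(d * np.pi / np.sqrt(3))

# hypothetical endpoint data (NOT from White et al. 2003)
d = smd_from_endpoints(mean_t=5.2, sd_t=1.1, n_t=48,
                       mean_c=4.8, sd_c=1.2, n_c=45)
print(f"SMD = {d:.2f}  ->  OR = {or_from_smd(d):.2f}")

# if a paper reports only the mean change from baseline, without the SD of
# that change (or the baseline-endpoint correlation needed to derive it),
# the endpoint SMD - and hence this OR - cannot be computed
```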

By following rigidly their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol, which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is even the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall results would not have been in favour of homeopathy.
  • Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:

I have reason to believe that this review and meta-analysis is biased in favor of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua; (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analyses. Jacobs’ results were in favour of homeopathy, Walach’s were not.

For the domains where the rating of Walach’s study was less than that of the Jacobs study, please find citations from the original studies or my short summaries for the point in question.

Domain I: Sequence generation:
Walach:
“The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (Low risk of bias)

Domain IIIb: Blinding of outcome assessor
Walach:
“The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study. ”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (Low risk of bias)

Domain V: Selective outcome reporting

Walach:
Study protocol was published in 1991 prior to enrollment of participants, all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)

Jacobs:
No prior publication of protocol, but a pilot study exists. However, this was published in 1993, only after the trial was performed in 1991. The primary outcome was defined (duration of diarrhea) and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Walach:
Rating: NO (high risk of bias), no details given

Jacobs:
Imbalance of group properties (size, weight and age of children), that might have some impact on course of disease, high impact of parallel therapy (rehydration) by far exceeding effect size of homeopathic treatment
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.

Conclusion

So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof for misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying. 

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.

On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because

  1. the assumptions which underpin homeopathy fly in the face of science,
  2. the clinical evidence fails to show that it works beyond a placebo effect.

But was I correct?

A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of an individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that with placebos.

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
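For readers wondering how a single figure like ‘OR = 1.53 (95% CI 1.22 to 1.91)’ emerges from 22 disparate trials: each trial’s odds ratio is log-transformed, weighted by the inverse of its variance, averaged, and back-transformed. The sketch below (Python) uses four invented trials and a simple fixed-effect model purely for illustration – Mathie et al’s actual data and model differ:

```python
import numpy as np

# ILLUSTRATIVE trial-level odds ratios with 95% CIs (not the actual 22 trials):
# (OR, lower CI limit, upper CI limit)
trials = [(1.8, 0.9, 3.6), (1.3, 0.7, 2.4), (2.1, 1.0, 4.4), (1.2, 0.8, 1.8)]

log_or = np.log([t[0] for t in trials])
# back out each standard error from the CI width: SE = (ln(U) - ln(L)) / (2 * 1.96)
se = np.array([(np.log(u) - np.log(l)) / (2 * 1.96) for _, l, u in trials])

w = 1 / se**2  # inverse-variance weights (fixed-effect model)
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f} to {np.exp(hi):.2f})")
```

Note how heavily the pooled estimate depends on which trials are allowed in – which is precisely why the exclusion decisions discussed above matter so much.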

One does not need to be a prophet to predict that the world of homeopathy will declare this article as the ultimate proof of homeopathy’s efficacy beyond placebo. Already the ‘British Homeopathic Association’ has issued the following press release:

Clinical evidence for homeopathy published

Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.

The paper, published in the peer-reviewed journal Systematic Reviews, reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.

The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.

The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.

“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”

Surprised? I was stunned and thus studied the article in much detail (luckily the full text version is available online). Then I entered into an email exchange with the first author who I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to better understand the review’s methodology; but it also resulted in me being very much underwhelmed by the reliability of the authors’ conclusion.

Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.

SO PLEASE TAKE SOME TIME TO STUDY THIS PAPER AND TELL US WHAT YOU THINK.

Many proponents of alternative medicine seem somewhat suspicious of research; they have obviously understood that it might not produce the positive result they had hoped for; after all, good research tests hypotheses and does not necessarily confirm beliefs. At the same time, they are often tempted to conduct research: this is perceived as being good for the image and, provided the findings are positive, also good for business.

Therefore they seem to be tirelessly looking for a study design that cannot ‘fail’, i.e. one that avoids the risk of negative results but looks respectable enough to be accepted by ‘the establishment’. For these enthusiasts, I have good news: here is the study design that cannot fail.

It is perhaps best outlined as a concrete example; for reasons that will become clear very shortly, I have chosen reflexology as a treatment of diabetic neuropathy, but you can, of course, replace both the treatment and the condition as it suits your needs. Here is the outline:

  • recruit a group of patients suffering from diabetic neuropathy – say 58, that will do nicely,
  • randomly allocate them to two groups,
  • the experimental group receives regular treatments by a motivated reflexologist,
  • the controls get no such therapy,
  • both groups also receive conventional treatments for their neuropathy,
  • the follow-up is 6 months,
  • the following outcome measures are used: pain reduction, glycemic control, nerve conductivity, and thermal and vibration sensitivities,
  • the results show that the reflexology group experience more improvements in all outcome measures than those of control subjects,
  • your conclusion: This study exhibited the efficient utility of reflexology therapy integrated with conventional medicines in managing diabetic neuropathy.

Mission accomplished!

This method is fool-proof, trust me, I have seen it often enough being tested, and never has it generated disappointment. It cannot fail because it follows the notorious A+B versus B design (I know, I have mentioned this several times before on this blog, but it is really important, I think): both patient groups receive the essential mainstream treatment, and the experimental group receives a useless but pleasant alternative treatment in addition. The alternative treatment involves touch, time, compassion, empathy, expectations, etc. All of these elements will inevitably have positive effects, and they can even be used to increase the patients’ compliance with the conventional treatments that are being applied in parallel. Thus all outcome measures will be better in the experimental group than in the control group.

The overall effect is pure magic: even an utterly ineffective treatment will appear as being effective – the perfect method for producing false-positive results.

And now we hopefully all understand why this study design is so very popular in alternative medicine. It looks solid – after all, it’s an RCT!!! – and it thus convinces even mildly critical experts of the notion that the useless treatment is worthwhile. Consequently, the useless treatment will become accepted as ‘evidence-based’, will be used more widely and perhaps even be reimbursed from the public purse. Business will be thriving!

And why did I employ reflexology for diabetic neuropathy? Is that example not far-fetched? Not a bit! I used it because it describes precisely a study that has just been published. Of course, I could also have taken the chiropractic trial from my last post, or dozens of other studies following the A+B versus B design – it is so brilliantly suited for misleading us all.
