MD, PhD, FMedSci, FSB, FRCP, FRCPEd

research methodology


The discussion about whether acupuncture is more than a placebo is as long-running as it is heated. Crucially, it is also quite tedious, tiresome and unproductive, not least because no resolution seems to be in sight. Whenever researchers develop an apparently credible placebo and the results of clinical trials are not what acupuncturists had hoped for, the therapists claim that the placebo was, after all, not inert and that the negative findings must be due to both the placebo and real acupuncture being effective.

Laser acupuncture (acupoint stimulation not with needle insertion but with laser light) offers a possible way out of this dilemma. It is relatively easy to make a placebo laser that looks convincing to all parties concerned but is a pure and inert placebo. Many trials have been conducted following this concept, and it is therefore highly relevant to ask what the totality of this evidence suggests.

A recent systematic review did just that; specifically, it aimed to evaluate the effects of laser acupuncture on pain and functional outcomes when it is used to treat musculoskeletal disorders.

Extensive literature searches were used to identify all RCTs employing laser acupuncture. A meta-analysis was performed by calculating the standardized mean differences and 95% confidence intervals, to evaluate the effect of laser acupuncture on pain and functional outcomes. Included studies were assessed in terms of their methodological quality and appropriateness of laser parameters.
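As an aside for readers unfamiliar with the statistic, a standardized mean difference and its 95% confidence interval can be computed as sketched below; this uses Cohen's d with the usual large-sample approximation to its standard error, and all numbers are invented for illustration (they are not data from the review):

```python
import math

def smd_with_ci(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) between a treatment
    and a control group, with an approximate 95% confidence interval."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    # Large-sample approximation to the standard error of d
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Invented example: lower pain scores in the treatment group
d, ci = smd_with_ci(mean_t=3.1, sd_t=1.2, n_t=30,
                    mean_c=4.0, sd_c=1.3, n_c=30)
# d is about -0.72, with a 95% CI of roughly (-1.24, -0.20)
```

In a meta-analysis, such per-trial SMDs are then pooled, typically with inverse-variance weights; standardizing is precisely what allows trials using different pain scales to be combined.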

Forty-nine RCTs met the inclusion criteria. Two-thirds (31/49) of these studies reported positive effects. All of the positive studies were rated as being of high methodological quality, and all of them included sufficient details about the lasers used. Negative or inconclusive studies mostly failed to demonstrate these features. For all diagnostic subgroups, positive effects on both pain and functional outcomes were seen more consistently at long-term follow-up than immediately after treatment.

The authors concluded that moderate-quality evidence supports the effectiveness of laser acupuncture in managing musculoskeletal pain when applied in an appropriate treatment dosage; however, the positive effects are seen only at long-term follow-up and not immediately after the cessation of treatment.

Surprised? Well, I am!

This is a meta-analysis I always wanted to conduct and never came round to doing. Using the ‘trick’ of laser acupuncture, it is possible to fully blind patients, clinicians and data evaluators. This eliminates the most obvious sources of bias in such studies. Those who are convinced that acupuncture is a pure placebo would therefore expect a negative overall result.

But the result is quite clearly positive! How can this be? I can see three options:

  • The meta-analysis could be biased and the result might therefore be false-positive. I looked hard but could not find any significant flaws.
  • The primary studies might be wrong, fraudulent etc. I did not see any obvious signs for this to be so.
  • Acupuncture might be more than a placebo after all. This notion might be unacceptable to sceptics.

I invite anyone who sufficiently understands clinical trial methodology to scrutinise the data closely and tell us which of the three possibilities is the correct one.

Even though it was published less than a month ago, my new book ‘A SCIENTIST IN WONDERLAND’ has already received many most flattering reviews. For me, the most impressive one was by the journal ‘Nature’; they called my memoir ‘ferociously frank’ and ‘a clarion call for medical ethics’.

I did promise to provide several little excerpts for the readers of this blog to enable them to make up their own minds as to whether they want to read it or not. Today I offer you the start of chapter 6, entitled ‘WONDERLAND’. I do hope you enjoy it.

It has been claimed by some members of the lunatic fringe of alternative medicine that I took up the Laing Chair at Exeter with the specific agenda of debunking alternative medicine. This is certainly not true; if anything, I was predisposed to look kindly on it. After all, I had grown up and done my medical training in Germany where the use of alternative therapies in a supportive role alongside standard medical care was considered routine and unremarkable. As a clinician, I had seen positive results from alternative therapies. If I came to Exeter with any preconceived ideas at all, they were of a generally favourable kind. I was sure that, if we applied the rules of science to the study of alternative medicine, we would find plenty of encouraging evidence.
As if to prove this point, the managing director of a major UK homeopathic pharmacy wrote a comment on my blog in April 2014: “…I met you once in Exeter in the 90s when exploring a possible clinical study. I found you most encouraging and openly enthusiastic about homeopathy. I would go so far as to say I was inspired to go further in homeopathy thanks to you but now you want to close down something which in my experience does so much good in the world. What went wrong?”
The answer to this question is fairly simple: nothing went wrong, but the evidence demonstrated more and more indisputably that most alternative therapies are not nearly as effective as enthusiasts tried to make us believe…

Here is another short passage from my new book A SCIENTIST IN WONDERLAND. It describes the event where I was first publicly exposed to the weird and wonderful world of alternative medicine in the UK. It is also the scene which, in my original draft, was the very beginning of the book.

I hope that the excerpt inspires some readers to read the entire book – it currently is BOOK OF THE WEEK in the TIMES HIGHER EDUCATION!!!

… [an] aggressive and curious public challenge occurred a few weeks later during a conference hosted by the Research Council for Complementary Medicine in London. This organization had been established a few years earlier with the aim of conducting and facilitating research in all areas of alternative medicine. My impression of this institution, and indeed of the various other groups operating in this area, was that they were far too uncritical, and often proved to be hopelessly biased in favour of alternative medicine. This, I thought, was an extraordinary phenomenon: should research councils and similar bodies not have a duty to be critical and be primarily concerned about the quality of the research rather than the overall tenor of the results? Should research not be critical by nature? In this regard, alternative medicine appeared to be starkly different from any other type of health care I had encountered previously.

On short notice, I had accepted an invitation to address this meeting packed with about 100 proponents of alternative medicine. I felt that their enthusiasm and passion were charming but, no matter whom I talked to, there seemed to be little or no understanding of the role of science in all this. A strange naïvety pervaded this audience: alternative practitioners and their supporters seemed a bit like children playing “doctor and patient”. The language, the rituals and the façade were all more or less in place, but somehow they seemed strangely detached from reality. It felt a bit as though I had landed on a different planet. The delegates passionately wanted to promote alternative medicine, while I, with equal passion and conviction, wanted to conduct good science. The two aims were profoundly different. Nevertheless, I managed to convince myself that they were not irreconcilable, and that we would manage to combine our passions and create something worthwhile, perhaps even groundbreaking.

Everyone was excited about the new chair in Exeter; high hopes and expectations filled the room. The British alternative medicine scene had long felt discriminated against because they had no academic representation to speak of. I certainly did sympathize with this particular aspect and felt assured that, essentially, I was amongst friends who realized that my expertise and their enthusiasm could add up to bring about progress for the benefit of many patients.
During my short speech, I summarized my own history as a physician and a scientist and outlined what I intended to do in my new post—nothing concrete yet, merely the general gist. I stressed that my plan was to apply science to this field in order to find out what works and what doesn’t; what is safe and what isn’t. Science, I pointed out, generates progress through asking critical questions and through testing hypotheses. Alternative medicine would either be shown by good science to be of value, or it would turn out to be little more than a passing fad. The endowment of the Laing chair represented an important milestone on the way towards the impartial evaluation of alternative medicine, and surely this would be in the best interest of all parties concerned.

To me, all this seemed an entirely reasonable approach, particularly as it merely reiterated what I had just published in an editorial for The Lancet entitled “Scrutinizing the Alternatives”.

My audience, however, was not impressed. When I had finished, there was a stunned, embarrassed silence. Finally someone shouted angrily from the back row: “How did they dare to appoint a doctor to this chair?” I was startled by this question and did not quite understand. What had prompted this reaction? What did this audience expect? Did they think my qualifications were not good enough? Why were they upset by the appointment of a doctor? Who else, in their view, might be better equipped to conduct medical research?

It wasn’t until weeks later that it dawned on me: they had been waiting for someone with a strong commitment to the promotion of alternative medicine. Such a commitment could only come from an alternative practitioner. A doctor personified the establishment, and “alternative” foremost symbolized “anti-establishment”. My little speech had upset them because it confirmed their worst fears of being annexed by “the establishment”. These enthusiasts had hoped for a believer from their own ranks and certainly not for a doctor-scientist to be appointed to the world’s first chair of complementary medicine. They had expected that Exeter University would lend its support to their commercial and ideological interests; they had little understanding of the concept that universities should not be in the business of promoting anything other than high standards.

Even today, after having given well over 600 lectures on the topic of alternative medicine, and after being on the receiving end of ever more hostile attacks, aggressive questions and personal insults, this particular episode is still etched deeply into my memory. In a very real way, it set the scene for the two decades to come: the endless conflicts between my agenda of testing alternative medicine scientifically and the fervent aspirations of enthusiasts to promote alternative medicine uncritically. That our positions would prove mutually incompatible had been predictable from the very start. The writing had been on the wall—but it took me a while to be able to fully understand the message.

A recent article in the BMJ about my new book seems to have upset fellow researchers of alternative medicine. I am told that the offending passage is the following:

“Too much research on complementary therapies is done by people who have already made up their minds,” the first UK professor of complementary medicine has said. Edzard Ernst, who left his chair at Exeter University early after clashing with the Prince of Wales, told journalists at the Science Media Centre in London that, although more research into alternative medicines was now taking place, “none of the centres is anywhere near critical enough.”

Following this publication, I received indignant inquiries from colleagues asking whether I meant to say that their work lacks critical thinking. As this is a valid question, I will try to answer it the best I presently can.

Any critical evaluation of alternative medicine has to yield its fair share of negative conclusions about the value of alternative medicine. If it fails to do that, one would need to assume that most or all alternative therapies generate more good than harm – and very few experts (who are not proponents of alternative medicine) would assume that this can possibly be the case.

Put differently, this means that a researcher or research group that does not generate its fair share of negative conclusions is suspect of lacking a critical attitude. In a previous post, I addressed this issue in more detail by creating an ‘index’: THE TRUSTWORTHINESS INDEX. I also provided a concrete example of a researcher who seems to be associated with a remarkably high index (the higher the index, the stronger the suspicion of a lack of critical attitude).

Instead of unnecessarily upsetting my fellow researchers of alternative medicine any further, I will just issue this challenge: if any research group can demonstrate to have an index below 0.5 (which would mean the team has published twice as many negative conclusions as positive ones), I will gladly and publicly retract my suspicion that this group is “anywhere near critical enough”.
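Purely for illustration, the arithmetic of this challenge can be made explicit in a few lines. The sketch below assumes the index is simply the ratio of positive to negative conclusions, which is how the 0.5 threshold above reads; it is not a formally published definition:

```python
def trustworthiness_index(positive, negative):
    """Ratio of positive to negative published conclusions.
    A value below 0.5 means at least twice as many negative
    conclusions as positive ones."""
    if negative == 0:
        # No negative conclusions at all: the index is unbounded
        return float('inf')
    return positive / negative

# A hypothetical group with 10 positive and 25 negative conclusions
# would meet the challenge: 10 / 25 = 0.4, below the 0.5 threshold.
index = trustworthiness_index(10, 25)
```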

Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?

Here is a brand new one which might stand for dozens of others.

In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA), which enables homeopaths to calculate a relative healing probability based on Boenninghausen’s grading of polar symptoms, and prospectively followed them over one year.

The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out, and one did not achieve an improvement of 80% and was therefore also counted as a treatment failure. The cost of the homeopathic treatment was 41% of the projected cost of equivalent conventional treatment.

Good news then for enthusiasts of homeopathy? 91% improvement!

Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… until I read the authors’ conclusions that is:

Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.

Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:

  1. How on earth can we take this and so many other articles on homeopathy seriously?
  2. When does this sort of article cross the line between wishful thinking and scientific misconduct?

Guest post by Louise Lubetkin

(A SCIENTIST IN WONDERLAND: A MEMOIR OF SEARCHING FOR TRUTH AND FINDING TROUBLE has now been published. An apt opportunity, perhaps, to post a letter and comment from the person who helped me GREATLY in finishing it.)

People write memoirs for a variety of reasons but perhaps one of the strongest impelling forces is the need to make sense of one’s own experiences. It is not surprising that you, who spent your entire professional career searching for explanations, identifying associations and parsing correlations, found yourself looking at your own life with the same analytical curiosity. Memoir is in many respects a natural choice in this regard.

That you chose to undertake a profoundly personal inventory at this juncture is also understandable in human terms. Retirement, whether anticipated and planned for, or (as in your case) thrust rudely upon you, reorders one’s sense of identity in ways that cannot fail to prompt reflection. It would have been surprising had you not felt an urge to look back and take stock, to trace the narrative arc of your life from its beginnings in post-war Germany all the way to the quiet house in rural Suffolk where you now sit, surrounded by the comfort of books and the accumulated paraphernalia of a life spent digging and delving in search of the building blocks of truth.

Given the contentious circumstances surrounding your departure from academic life, it is quite likely that you will be asked whether your decision to write a memoir was driven, at least in part, by a desire to settle scores. I think you can dismiss such a question unhesitatingly. You have no scores to settle: you came to England after a steady and unbroken ascent to the apex of your professional career, voluntarily leaving behind a position that most people would regard with envy and deference. You were never a supplicant at Exeter’s door; far from it. The fact that things went inexorably downhill over the course of your 20 years’ tenure there, and ended so deplorably, is not a reflection on you, your department, or the quality or quantity of work you turned out. Rather, it is a reflection on the very nature of the work you went there to do – and if there is any message in your memoir, it is this:

Alternative medicine is not, at its heart, a logical enterprise, and its adherents are not committed to – nor even interested in – a rational evaluation of their methods. Rather, alternative medicine is primarily an ideological position, a political credo, a reaction against mainstream medicine. To many of its adherents and impassioned advocates, its appeal lies not in any demonstrable therapeutic efficacy but in its perceived outsider status as the countercultural medicine, the medicine for Everyman, the David to the bullying medical-pharmaceutical Goliath. That your research work would elicit howls of protest was perhaps inevitable, given the threat it posed to the profitable and powerful alternative medicine industry. But it didn’t stop there: astonishingly, your work drew the ire of none other than the meddlesome heir apparent to the British throne. Prince Charles’ attempts to stymie your work call to mind the twelfth-century martyr Thomas à Becket, of whom Henry II reputedly cried: “Oh, who will rid me of this turbulent priest?” (Henry’s sycophantic henchmen were quick to oblige, dispatching the hapless cleric on the steps of Canterbury cathedral.)

It’s clear that you were acutely aware, as a young man growing up in Germany, that science was not immune to the corrupting influence of political ideology, and that the German medical profession had entered – enthusiastically – into a Faustian compact with the Nazi regime. You have exhibited a courageous insistence on confronting and examining a national past that has at times felt like an intensely personal burden to you. It is ironic that in going to sleepy Exeter in an earnest, conscious attempt to shake off the constricting, intrigue-ridden atmosphere of academic Vienna, you ultimately found yourself once again mired in a struggle against the influence of ideology and the manipulation of science for political ends.

You went to Exeter strictly as a scientist, a skilled inquirer, a methodical investigator, expecting to be able to bring the rigors of logic and the scientific method to bear on an area of medical practice that had until then not been subjected to any kind of systematic evaluation. Instead, you were caught in a maelstrom of intrigue far worse than that which you had gratefully left behind in Vienna, buffeted and bruised by forces against which a lesser man would surely not have had the fortitude to push back so long and so hard.

On 1/12/2014 I published a post in which I offered to give lectures to students of alternative medicine:

Getting good and experienced lecturers for courses is not easy. Having someone who has done more research than most working in the field and who is internationally known might therefore be a thrill for students and an image-boosting experience for colleges. In the true Christmas spirit, I am today making the offer of being of assistance to the many struggling educational institutions of alternative medicine.

A few days ago, I tweeted about my willingness to give free lectures to homeopathic colleges (so far without response). Having thought about it a bit, I would now like to extend this offer. I would be happy to give a free lecture to the students of any educational institution of alternative medicine.

I did not think that this would create much interest – and I was right: only the ANGLO-EUROPEAN COLLEGE OF CHIROPRACTIC has so far hoisted me by my own petard and, after some discussion (see the comment section of the original post), hosted me for a lecture. Several people seem keen to know how this went; so here is a brief report.

I was received, on 14/1/2015, with the utmost kindness by my host David Newell. We had a coffee and a chat, and then it was time to start the lecture. The hall was packed with ~150 students, and the same number was listening in a second lecture hall to which my talk was being transmitted.

We had agreed on the title CHIROPRACTIC: FALLACIES AND FACTS. So, after telling the audience about my professional background, I elaborated on 7 fallacies:

  1. Appeal to tradition
  2. Appeal to authority
  3. Appeal to popularity
  4. Subluxation exists
  5. Spinal manipulation is effective
  6. Spinal manipulation is safe
  7. Ad hominem attack

Numbers 3, 5 and 6 were dealt with in more detail than the rest. The organisers had asked me to finish by elaborating on what I perceive as the future challenges of chiropractic; so I did:

  1. Stop happily promoting bogus treatments
  2. Denounce obsolete concepts like ‘subluxation’
  3. Clarify differences between chiros, osteos and physios
  4. Start a culture of critical thinking
  5. Take action against charlatans in your ranks
  6. Stop attacking everyone who voices criticism

I ended by pointing out that the biggest challenge, in my view, was to “demonstrate with rigorous science which chiropractic treatments demonstrably generate more good than harm for which condition”.

We had agreed that my lecture would be followed by half an hour of discussion; this period turned out to be lively and had to be extended to a full hour. Most questions initially came from the tutors rather than the students, and most were polite – I had expected much more aggression.

In his email thanking me for coming to Bournemouth, David Newell wrote about the event: The general feedback from staff and students was one of relief that you possessed only one head, :-). I hope you may have felt the same about us. You came over as someone who had strong views, a fair amount of which we disagreed with, but that presented them in a calm, informative and courteous manner as we did in listening and discussing issues after your talk. I think everyone enjoyed the questions and debate and felt that some of the points you made were indeed fair critique of what the profession may need to do, to secure a more inclusive role in the health care arena.

As you may have garnered from your visit here, the AECC is committed to this task as we continue to provide the highest quality of education for the 21st C representatives of such a profession. We believe centrally that it is to our society at large and our communities within which we live and work that we are accountable. It is them that we serve, not ourselves, and we need to do that as best we can, with the best tools we have or can develop and that have as much evidence as we can find or generate. In this aim, your talk was important in shining a more ‘up close and personal’ torchlight on our profession and the tasks ahead whilst also providing us with a chance to debate the veracity or otherwise of yours and ours differing positions on interpretation of the evidence.

My own impression of the day is that some of my messages were not really understood, that some of the questions, including some from the tutors, seemed to come from a different planet, and that people were more out to teach me than to learn from my talk. One overall impression I took home from that day is that, even in this college, which prides itself on being open to scientific evidence and unimpressed by chiropractic fundamentalism, students are strangely different from other health care professionals. The most tangible aspect of this was the openly hostile attitude towards drug therapies voiced during the discussion by some students.

The question I always ask myself after having invested a lot of time in preparing and delivering a lecture is: WAS IT WORTH IT? In the case of this lecture, I think the answer is YES. With 300 students present, I am fairly confident that I did manage to stimulate a tiny bit of critical thinking in a tiny percentage of them. The chiropractic profession needs this badly!


Hard to believe but, in the last 35 years, I have written or edited a total of 49 books; about half of them on alternative medicine and the rest on various subjects related to clinical medicine and research. Each time a new one comes out, I am excited, of course, but this one is special:

  • I have not written a book for several years.
  • I have worked on it much longer than on any book before.
  • Never before have I written a book that is so much about myself.
  • None of my previous books covered material that is as ‘sensitive’ as this one.

I started on this book shortly after TRICK OR TREATMENT had been published. Its initial working title was ALTERNATIVE MEDICINE: THE INSIDE STORY. My aim was to focus on the extraordinary things which had happened during my time in Exeter, to shed some light on the often not so quaint life in academia, and to show how bizarre the world of alternative medicine truly is. But several people who know about these things and who had glanced at the first draft chapters strongly advised me to radically change this concept. They told me that such a book could only work as a personal memoir.

Yet I was most reluctant to write about myself; I wanted to write about science and research, as well as the obstacles which some people manage to put in their way. So, after much discussion and contemplation, I compromised and added the initial chapters which told the reader about my background and my work prior to the Exeter appointment. This brought in subjects like my research on ‘Nazi-medicine’ (which, I believe, is more important than that on alternative medicine) that seemed almost entirely unrelated to alternative medicine, and the whole thing began to look a bit disjointed, in my view. However, my advisers felt this was a step in the right direction and argued that my compromise was not enough; they wanted more about me as a person, my motivations, my background etc. Eventually I (partly) gave in and provided a bit more of what they seemed to want.

But I am clearly not a novelist, most of what I have ever written is medical stuff; my style is too much that of a scientist – dry and boring. In other words, my book seemed to be going nowhere. Just when, after years of hard work, I was about to throw it all in the bin, help came from a totally unexpected corner.

Louise Lubetkin (even today, I have never met her in person) had contributed several posts as ‘guest editor’ to this blog, and I very much liked her way with words. When she offered to have a look at my book, I was thrilled. It is largely thanks to her that my ‘memoir’ ever saw the light of day. She helped enormously with making it readable and with joining up the seemingly separate episodes described in my book.

Finding a fitting title was far from easy. Nothing seemed to encapsulate its contents, and ‘A SCIENTIST IN WONDERLAND’, the title I eventually chose, is a bit of a compromise; the subtitle does describe it much better, I think: A MEMOIR OF SEARCHING FOR TRUTH AND FINDING TROUBLE.

Now that the book is about to be published, I am anxious as never before on similar occasions. I do not, of course, think for a minute that it will be anywhere near a best-seller, but I want people with an interest in alternative medicine, academia or science to read it (get it from a library to save money) and, foremost, I want them to understand why I wrote it. For me, this is neither about settling scores nor about self-promotion; it is about telling a story which is important in more than one way.

As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):

BACKGROUND:

A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.

METHODS:

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

RESULTS:

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

CONCLUSIONS:

Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
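For orientation, odds ratios from individual trials are conventionally pooled on the log scale with inverse-variance weights, which is what produces confidence intervals like those quoted above. The sketch below is a minimal fixed-effect version with invented trial values; the actual review's model and weighting may well differ:

```python
import math

def pooled_or(trials):
    """Fixed-effect inverse-variance pooling of odds ratios.
    Each trial is given as (odds ratio, standard error of log-OR).
    Returns the pooled OR and its 95% confidence interval."""
    num = den = 0.0
    for odds_ratio, se_log in trials:
        w = 1.0 / se_log**2          # inverse-variance weight
        num += w * math.log(odds_ratio)
        den += w
    log_or = num / den
    se = math.sqrt(1.0 / den)        # SE of the pooled log-OR
    return (math.exp(log_or),
            math.exp(log_or - 1.96 * se),
            math.exp(log_or + 1.96 * se))

# Invented trials: (OR, SE of log-OR)
est, lo, hi = pooled_or([(1.4, 0.3), (1.8, 0.4), (1.2, 0.5)])
# est is about 1.46, with a 95% CI of roughly (0.96, 2.24)
```

Pooling on the log scale and exponentiating back is what makes the confidence interval asymmetric around the pooled OR, as in the intervals reported in the abstract.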

Since my team had published an RCT of individualised homeopathy, it seems only natural that my interest focussed on why this study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.

I was convinced that this trial had been rigorous and thus puzzled why, despite receiving ‘full marks’ from the reviewers, they had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data we provided would not lend themselves to meta-analysis. By electing to use not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to exclude it from their meta-analysis.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO“. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).

By rigidly following their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is even the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall results would not have been in favour of homeopathy.
  • Secondly, they awarded our study a penalty point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials, and I take the liberty of quoting here the comments he posted previously:

I have reason to believe that this review and meta-analysis is biased in favour of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analysis. Jacobs’ results were in favour of homeopathy, Walach’s were not.

For the domains where the rating of Walach’s study was less than that of the Jacobs study, please find citations from the original studies or my short summaries for the point in question.

Domain I: Sequence generation:
Walach:
“The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (Low risk of bias)

Domain IIIb: Blinding of outcome assessor
Walach:
“The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study. ”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (Low risk of bias)

Domain V: Selective outcome reporting

Walach:
Study protocol was published in 1991 prior to enrollment of participants, all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)

Jacobs:
No prior publication of a protocol, but a pilot study exists; however, this was published in 1993, only after the trial was performed in 1991. The primary outcome was defined (duration of diarrhea) and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Walach:
Rating: NO (high risk of bias), no details given

Jacobs:
Imbalance of group properties (size, weight and age of children) that might have some impact on the course of the disease; high impact of a parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.

Conclusion

So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof for misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying. 

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.

On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because

  1. the assumptions which underpin homeopathy fly in the face of science,
  2. the clinical evidence fails to show that it works beyond a placebo effect.

But was I correct?

A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of an individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that with placebos.

The review’s methods, including the literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low, or was unclear in one specified domain only. ‘Effect size’ was reported as an odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.
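The review does not spell out which “arithmetic transformation” was used to put continuous data on the odds-ratio scale. A common choice in meta-analysis is the Hasselblad–Hedges logistic approximation, log OR = SMD · π/√3, with the variance scaled by π²/3; the sketch below illustrates that conversion, purely as an assumption about what such a transformation might look like:

```python
import math

def smd_to_log_or(d, var_d):
    """Convert a standardized mean difference to the log-odds scale.

    Uses the Hasselblad-Hedges logistic approximation
    (log OR = d * pi / sqrt(3)); the review does not state which
    transformation was used, so this is only an illustration.
    """
    factor = math.pi / math.sqrt(3)
    return d * factor, var_d * factor ** 2

# e.g. a small-to-moderate SMD of 0.3 with variance 0.02
log_or, var_log_or = smd_to_log_or(0.3, 0.02)
print(f"OR ~ {math.exp(log_or):.2f}")
```

Such transformations let trials reporting continuous outcomes be pooled alongside binary-outcome trials on a single OR scale, at the cost of an approximation that assumes an underlying logistic distribution.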

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

One does not need to be a prophet to predict that the world of homeopathy will declare this article as the ultimate proof of homeopathy’s efficacy beyond placebo. Already the ‘British Homeopathic Association’ has issued the following press release:

Clinical evidence for homeopathy published

Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.

The paper, published in the peer-reviewed journal Systematic Reviews,1 reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.

The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.

The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.

“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”

Surprised? I was stunned, and thus studied the article in much detail (luckily the full-text version is available online). Then I entered into an email exchange with the first author, whom I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to better understand the review’s methodology; but it also left me very much underwhelmed by the reliability of the authors’ conclusion.

Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.

SO PLEASE TAKE SOME TIME TO STUDY THIS PAPER AND TELL US WHAT YOU THINK.
