MD, PhD, FMedSci, FSB, FRCP, FRCPEd



Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?

Here is a brand new one which might stand for dozens of others.

In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA) and prospectively followed them over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).

The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out, and one further patient did not achieve an improvement of 80% and was therefore also counted as a treatment failure. The cost of homeopathic treatment was 41% of that projected for equivalent conventional treatment.

Good news then for enthusiasts of homeopathy? 91% improvement!

Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… until I read the authors’ conclusions that is:

Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.

Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:

  1. How on earth can we take this and so many other articles on homeopathy seriously?
  2. When does this sort of article cross the line between wishful thinking and scientific misconduct?

Guest post by Louise Lubetkin

(A SCIENTIST IN WONDERLAND: A MEMOIR OF SEARCHING FOR TRUTH AND FINDING TROUBLE has now been published. An apt opportunity, perhaps, to post a letter and comment from the person who helped me GREATLY in finishing it.)

People write memoirs for a variety of reasons but perhaps one of the strongest impelling forces is the need to make sense of one’s own experiences. It is not surprising that you, who spent your entire professional career searching for explanations, identifying associations and parsing correlations, found yourself looking at your own life with the same analytical curiosity. Memoir is in many respects a natural choice in this regard.

That you chose to undertake a profoundly personal inventory at this juncture is also understandable in human terms. Retirement, whether anticipated and planned for, or (as in your case) thrust rudely upon you, reorders one’s sense of identity in ways that cannot fail to prompt reflection. It would have been surprising had you not felt an urge to look back and take stock, to trace the narrative arc of your life from its beginnings in post-war Germany all the way to the quiet house in rural Suffolk where you now sit, surrounded by the comfort of books and the accumulated paraphernalia of a life spent digging and delving in search of the building blocks of truth.

Given the contentious circumstances surrounding your departure from academic life, it is quite likely that you will be asked whether your decision to write a memoir was driven, at least in part, by a desire to settle scores. I think you can dismiss such a question unhesitatingly. You have no scores to settle: you came to England after a steady and unbroken ascent to the apex of your professional career, voluntarily leaving behind a position that most people would regard with envy and deference. You were never a supplicant at Exeter’s door; far from it. The fact that things went inexorably downhill over the course of your 20 years’ tenure there, and ended so deplorably, is not a reflection on you, your department, or the quality or quantity of work you turned out. Rather, it is a reflection on the very nature of the work you went there to do – and if there is any message in your memoir, it is this:

Alternative medicine is not, at its heart, a logical enterprise, and its adherents are not committed to – nor even interested in – a rational evaluation of their methods. Rather, alternative medicine is primarily an ideological position, a political credo, a reaction against mainstream medicine. To many of its adherents and impassioned advocates, its appeal lies not in any demonstrable therapeutic efficacy but in its perceived outsider status as the countercultural medicine, the medicine for Everyman, the David to the bullying medical-pharmaceutical Goliath. That your research work would elicit howls of protest was perhaps inevitable, given the threat it posed to the profitable and powerful alternative medicine industry. But it didn’t stop there: astonishingly, your work drew the ire of none other than the meddlesome heir apparent to the British throne. Prince Charles’ attempts to stymie your work call to mind the twelfth-century martyr Thomas à Becket, of whom Henry II reputedly cried: “Oh, who will rid me of this turbulent priest?” (Henry’s sycophantic henchmen were quick to oblige, dispatching the hapless cleric on the steps of Canterbury cathedral.)

It’s clear that you were acutely aware, as a young man growing up in Germany, that science was not immune to the corrupting influence of political ideology, and that the German medical profession had entered – enthusiastically – into a Faustian compact with the Nazi regime. You have exhibited a courageous insistence on confronting and examining a national past that has at times felt like an intensely personal burden to you. It is ironic that in going to sleepy Exeter in an earnest, conscious attempt to shake off the constricting, intrigue-ridden atmosphere of academic Vienna, you ultimately found yourself once again mired in a struggle against the influence of ideology and the manipulation of science for political ends.

You went to Exeter strictly as a scientist, a skilled inquirer, a methodical investigator, expecting to be able to bring the rigors of logic and the scientific method to bear on an area of medical practice that had until then not been subjected to any kind of systematic evaluation. Instead, you were caught in a maelstrom of intrigue far worse than that which you had gratefully left behind in Vienna, buffeted and bruised by forces against which a lesser man would surely not have had the fortitude to push back so long and so hard.

On 1/12/2014 I published a post in which I offered to give lectures to students of alternative medicine:

Getting good and experienced lecturers for courses is not easy. Having someone who has done more research than most working in the field and who is internationally known might therefore be a thrill for students and an image-boosting experience for colleges. In the true Christmas spirit, I am today making the offer of being of assistance to the many struggling educational institutions of alternative medicine.

A few days ago, I tweeted about my willingness to give free lectures to homeopathic colleges (so far without response). Having thought about it a bit, I would now like to extend this offer. I would be happy to give a free lecture to the students of any educational institution of alternative medicine.

I did not think that this would create much interest – and I was right: only the ANGLO-EUROPEAN COLLEGE OF CHIROPRACTIC has so far hoisted me on my own petard and, after some discussion (see the comment section of the original post), hosted me for a lecture. Several people seem keen on knowing how this went; so here is a brief report.

I was received, on 14/1/2015, with the utmost kindness by my host David Newell. We had a coffee and a chat and then it was time to start the lecture. The hall was packed with ~150 students, and the same number was listening in a second lecture hall to which my talk was being transmitted.

We had agreed on the title CHIROPRACTIC: FALLACIES AND FACTS. So, after telling the audience about my professional background, I elaborated on 7 fallacies:

  1. Appeal to tradition
  2. Appeal to authority
  3. Appeal to popularity
  4. Subluxation exists
  5. Spinal manipulation is effective
  6. Spinal manipulation is safe
  7. Ad hominem attack

Numbers 3, 5 and 6 were dealt with in more detail than the rest. The organisers had asked me to finish by elaborating on what I perceive as the future challenges of chiropractic; so I did:

  1. Stop happily promoting bogus treatments
  2. Denounce obsolete concepts like ‘subluxation’
  3. Clarify differences between chiros, osteos and physios
  4. Start a culture of critical thinking
  5. Take action against charlatans in your ranks
  6. Stop attacking everyone who voices criticism

I ended by pointing out that the biggest challenge, in my view, was to “demonstrate with rigorous science which chiropractic treatments demonstrably generate more good than harm for which condition”.

We had agreed that my lecture would be followed by half an hour of discussion; this period turned out to be lively and had to be extended to a full hour. Most questions initially came from the tutors rather than the students, and most were polite – I had expected much more aggression.

In his email thanking me for coming to Bournemouth, David Newell wrote about the event: The general feedback from staff and students was one of relief that you possessed only one head, :-). I hope you may have felt the same about us. You came over as someone who had strong views, a fair amount of which we disagreed with, but that presented them in a calm, informative and courteous manner as we did in listening and discussing issues after your talk. I think everyone enjoyed the questions and debate and felt that some of the points you made were indeed fair critique of what the profession may need to do, to secure a more inclusive role in the health care arena.

 
As you may have garnered from your visit here, the AECC is committed to this task as we continue to provide the highest quality of education for the 21st C representatives of such a profession. We believe centrally that it is to our society at large and our communities within which we live and work that we are accountable. It is them that we serve, not ourselves, and we need to do that as best we can, with the best tools we have or can develop and that have as much evidence as we can find or generate. In this aim, your talk was important in shining a more ‘up close and personal’ torchlight on our profession and the tasks ahead whilst also providing us with a chance to debate the veracity or otherwise of yours and ours differing positions on interpretation of the evidence.

My own impression of the day is that some of my messages were not really understood, that some of the questions, including some from the tutors, seemed to come from a different planet, and that people were more out to teach me than to learn from my talk. One overall impression that I took home from that day is that, even in this college, which prides itself on being open to scientific evidence and unimpressed by chiropractic fundamentalism, students are strangely different from other health care professionals. The most tangible aspect of this is the openly hostile attitude toward drug therapies voiced during the discussion by some students.

The question I always ask myself after having invested a lot of time in preparing and delivering a lecture is: WAS IT WORTH IT? In the case of this lecture, I think the answer is YES. With 300 students present, I am fairly confident that I did manage to stimulate a tiny bit of critical thinking in a tiny percentage of them. The chiropractic profession needs this badly!

 

Hard to believe but, in the last 35 years, I have written or edited a total of 49 books; about half of them on alternative medicine and the rest on various subjects related to clinical medicine and research. Each time a new one comes out, I am excited, of course, but this one is special:

  • I have not written a book for several years.
  • I have worked on it much longer than on any book before.
  • Never before have I written a book which is so much about myself.
  • None of my previous books covered material as ‘sensitive’ as this one.

I started on this book shortly after TRICK OR TREATMENT had been published. Its initial working title was ALTERNATIVE MEDICINE: THE INSIDE STORY. My aim was to focus on the extraordinary things which had happened during my time in Exeter, to shed some light on the often not so quaint life in academia, and to show how bizarre the world of alternative medicine truly is. But several people who know about these things and who had glanced at the first draft chapters strongly advised me to radically change this concept. They told me that such a book could only work as a personal memoir.

Yet I was most reluctant to write about myself; I wanted to write about science, research as well as the obstacles which some people manage to put in their way. So, after much discussion and contemplation, I compromised and added the initial chapters which told the reader about my background and my work prior to the Exeter appointment. This brought in subjects like my research on ‘Nazi-medicine’ (which, I believe, is more important than that on alternative medicine) that seemed almost entirely unrelated to alternative medicine, and the whole thing began to look a bit disjointed, in my view. However, my advisers felt this was a step in the right direction and argued that my compromise was not enough; they wanted more about me as a person, my motivations, my background etc. Eventually I (partly) gave in and provided a bit more of what they seemed to want.

But I am clearly not a novelist, most of what I have ever written is medical stuff; my style is too much that of a scientist – dry and boring. In other words, my book seemed to be going nowhere. Just when, after years of hard work, I was about to throw it all in the bin, help came from a totally unexpected corner.

Louise Lubetkin (even today, I have never met her in person) had contributed several posts as ‘guest editor’ to this blog, and I very much liked her way with words. When she offered to have a look at my book, I was thrilled. It is largely thanks to her that my ‘memoir’ ever saw the light of day. She helped enormously with making it readable and with joining up the seemingly separate episodes described in my book.

Finding a fitting title was far from easy. Nothing seemed to encapsulate its contents, and ‘A SCIENTIST IN WONDERLAND’, the title I eventually chose, is a bit of a compromise; the subtitle does describe it much better, I think: A MEMOIR OF SEARCHING FOR TRUTH AND FINDING TROUBLE.

Now that the book is about to be published, I am anxious as never before on similar occasions. I do not, of course, think for a minute that it will be anywhere near a best-seller, but I want people with an interest in alternative medicine, academia or science to read it (get it from a library to save money) and, foremost, I want them to understand why I wrote it. For me, this is neither about settling scores nor about self-promotion; it is about telling a story which is important in more than one way.

As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):

BACKGROUND:

A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.

METHODS:

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

RESULTS:

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

CONCLUSIONS:

Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

Since my team had published an RCT of individualised homeopathy, it seems only natural that my interest focussed on why this study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.

I was convinced that this trial had been rigorous and was therefore puzzled why, despite receiving ‘full marks’ from the reviewers, it had not been included in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data we provided would not lend themselves to meta-analysis. By selecting not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to reject it from their meta-analysis.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO”. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).

By following rigidly their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is even the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall results would not have been in favour of homeopathy.
  • Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials, and I take the liberty of quoting again here the comments he posted previously:

I have reason to believe that this review and meta-analysis is biased in favor of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua; (2) Walach 1997, about homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analyses. Jacobs’ results were in favour of homeopathy, Walach’s not.

For the domains where the rating of Walach’s study was less than that of the Jacobs study, please find citations from the original studies or my short summaries for the point in question.

Domain I: Sequence generation:
Walach:
“The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (Low risk of bias)

Domain IIIb: Blinding of outcome assessor
Walach:
“The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study. ”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (Low risk of bias)

Domain V: Selective outcome reporting

Walach:
Study protocol was published in 1991 prior to enrollment of participants, all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)

Jacobs:
No prior publication of protocol, but a pilot study exists. However this was published in 1993 only after the trial was performed in 1991. Primary outcome defined (duration of diarrhea), reported but table and graph do not match, secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only one point in time, this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Walach:
Rating: NO (high risk of bias), no details given

Jacobs:
Imbalance of group properties (size, weight and age of children), that might have some impact on course of disease, high impact of parallel therapy (rehydration) by far exceeding effect size of homeopathic treatment
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.

Conclusion

So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof of misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying.

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.

On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because

  1. the assumptions which underpin homeopathy fly in the face of science,
  2. the clinical evidence fails to show that it works beyond a placebo effect.

But was I correct?

A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of an individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that of placebos.

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.
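The “arithmetic transformation for continuous data” mentioned here presumably means converting continuous outcomes onto the odds-ratio scale. One standard conversion (Chinn 2000) rescales a standardized mean difference (SMD) by π/√3; whether Mathie et al used exactly this conversion is an assumption on my part, and the numbers below are purely illustrative, not taken from any trial:

```python
import math

def smd_to_log_or(mean_t, mean_c, sd_pooled):
    """Convert a standardized mean difference into a log odds ratio
    via the logistic-distribution rescaling ln(OR) = SMD * pi/sqrt(3)."""
    smd = (mean_t - mean_c) / sd_pooled
    return smd * math.pi / math.sqrt(3)

# Hypothetical endpoint scores (invented for illustration only):
log_or = smd_to_log_or(mean_t=3.2, mean_c=2.8, sd_pooled=1.0)
print(round(math.exp(log_or), 2))  # an SMD of 0.4 maps to an OR of ~2.07
```

Note that the conversion needs endpoint means and a pooled SD, which is exactly the kind of data the reviewers said they could not extract from a change-from-baseline report.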

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
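For readers curious how a pooled OR with a 95% CI such as 1.53 (1.22 to 1.91) is arrived at, here is a minimal fixed-effect, inverse-variance sketch on the log-odds scale. The authors’ actual model may well differ (a random-effects model, for instance), and the per-trial numbers below are invented for illustration, not taken from the paper:

```python
import math

def pool_odds_ratios(trials):
    """Fixed-effect inverse-variance pooling on the log-OR scale.

    trials: list of (odds_ratio, ci_low, ci_high) tuples with 95% CIs.
    Returns (pooled_OR, pooled_ci_low, pooled_ci_high).
    """
    num = den = 0.0
    for or_, lo, hi in trials:
        y = math.log(or_)                              # per-trial log OR
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        w = 1.0 / se ** 2                              # inverse-variance weight
        num += w * y
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return tuple(math.exp(pooled + z * se_pooled) for z in (0.0, -1.96, 1.96))

# Invented per-trial ORs with 95% CIs, NOT the actual trial data:
example = [(1.3, 0.8, 2.1), (2.0, 1.0, 4.0), (1.4, 0.9, 2.2)]
or_, lo, hi = pool_odds_ratios(example)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.45 1.08 1.95
```

The point of the sketch is simply that each trial contributes in proportion to its precision, which is why the choice of which outcome (and hence which mean/SD) to extract from each trial can move the pooled result.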

The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

One does not need to be a prophet to predict that the world of homeopathy will declare this article as the ultimate proof of homeopathy’s efficacy beyond placebo. Already the ‘British Homeopathic Association’ has issued the following press release:

Clinical evidence for homeopathy published

Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.

The paper, published in the peer-reviewed journal Systematic Reviews,1 reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.

The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.

The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.

“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”

Surprised? I was stunned and thus studied the article in much detail (luckily the full-text version is available online). Then I entered into an email exchange with the first author, whom I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to better understand the review’s methodology; but it also left me very much underwhelmed by the reliability of the authors’ conclusion.

Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.

SO PLEASE TAKE SOME TIME TO STUDY THIS PAPER AND TELL US WHAT YOU THINK.

Guest post by Pete Attkins

Commentator “jm” asked a profound and pertinent question: “What DOES it take for people to get real in this world, practice some common sense, and pay attention to what’s going on with themselves?” This question was asked in the context of asserting that personal experience always trumps the results of large-scale scientific experiments; and asserting that alt-med experts are better able to provide individualized healthcare than 21st Century orthodox medicine.

What does common sense and paying attention lead us to conclude about the following? We test a six-sided die for bias by rolling it 100 times. The number 1 occurs only once and the number 6 occurs many times, never on its own, but in several groups of consecutive sixes.

I think it is reasonable to say that common sense would, and should, lead everyone to conclude that the die is biased and not fit for its purpose as a source of random numbers.

In other words, we have a gut feeling that the die is untrustworthy. Gut instincts and common sense are geared towards maximizing our chances of survival in our complex and unpredictable world — these are innate and learnt behaviours that have enabled humans to survive despite the harshness of our ever changing habitat.

Only very recently in the long history of our species have we developed specialized tools that enable us to better understand our harsh and complex world: science and critical thinking. These tools are difficult to master because they still haven’t been incorporated into our primary and secondary formal education systems.

The vast majority of people do not have these skills; therefore, when a scientific finding flies in the face of our gut instincts and/or common sense, it creates an overwhelming desire to reject the finding and classify the scientist(s) as being irrational and lacking basic common sense. It does not create an intense desire to accept the finding then painstakingly learn all of the science that went into producing the finding.

With that in mind, let’s rethink our common sense conclusion that the six-sided die is biased and untrustworthy. What we really mean is that the results have given all of us good reason to be highly suspicious of this die. We aren’t 100% certain that this die is biased, but our gut feeling and common sense are more than adequate to form a reasonable mistrust of it and to avoid using it for anything important to us. Reasons to keep this die rather than discard it might be to provide a source of mild entertainment or to use its bias for the purposes of cheating.

Some readers might be surprised to discover at this point that the results I presented from this apparently heavily-biased die are not only perfectly valid results obtained from a truly random, unbiased die, but are to be fully expected. Even if the die had produced 100 sixes in that test, it would not confirm that the die is biased in any way whatsoever. Rolling a truly unbiased die once will produce one of six possible outcomes. Rolling the same die 100 times will produce one unique sequence out of the 6^100 (6.5 x 10^77) possible sequences: all of which are equally valid!

Gut feeling plus common sense rightfully informs us that the probability of a random die producing one hundred consecutive sixes is so incredibly remote that nobody will ever see it occur in reality. This conclusion is also mathematically sound: if there were 6.5 x 10^77 people on Earth, each performing the same test on truly random dice, there is no guarantee that anyone would observe a sequence of one hundred consecutive sixes.

When we observe a sequence such as 2 5 1 4 6 3 1 4 3 6 5 2… common sense informs us that the die is very likely random. If we calculate the arithmetic mean to be very close to 3.5 then common sense will lead us to conclude that the die is both random and unbiased enough to use it as a reliable source of random numbers.

Unfortunately, this is a perfect example of our gut feelings and common sense failing us abysmally. They totally failed to warn us that the 2 5 1 4 6 3 1 4 3 6 5 2… sequence we observed had exactly the same (im)probability of occurring as a sequence of one hundred 6s or any other sequence that one can think of that doesn’t look random to a human observer.
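The arithmetic behind this point is easy to check for oneself. Here is a minimal Python sketch (the short sequences are the illustrative ones from above):

```python
from fractions import Fraction

# The probability of any ONE specific sequence of n fair-die rolls is
# (1/6)^n, regardless of whether the sequence "looks random" to a human.
def sequence_probability(n_rolls):
    return Fraction(1, 6) ** n_rolls

looks_random = [2, 5, 1, 4, 6, 3, 1, 4, 3, 6, 5, 2]  # "random-looking"
all_sixes = [6] * 12                                  # "suspicious-looking"

# Both 12-roll sequences are exactly equally (im)probable.
assert sequence_probability(len(looks_random)) == sequence_probability(len(all_sixes))

# Number of distinct 100-roll sequences: 6^100, roughly 6.5 x 10^77.
n_sequences = 6 ** 100
print(f"{n_sequences:.2e}")
```

The `Fraction` type keeps the probabilities exact, so the equality check is not subject to floating-point rounding.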

The 100-roll die test is nowhere near powerful enough to properly test a six-sided die, but this test is more than adequately powered to reveal some of our cognitive biases and some of the deficits in our personal mastery of science and critical thinking.

To properly test the die we need to provide solid evidence that it is both truly random and that its measured bias tends towards zero as the number of rolls tends towards infinity. We could use the services of one testing lab to conduct billions of test rolls, but this would not exclude errors caused by such things as miscalibrated equipment and experimenter bias. It is better to subdivide the testing across multiple labs then carefully analyse and appropriately aggregate the results: this dramatically reduces errors caused by equipment and humans.
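The claim that measured bias shrinks as the number of rolls grows is just the law of large numbers, and can be illustrated with a short simulation (a sketch, not a real test protocol; the "bias" measure chosen here is an illustrative one):

```python
import random
from collections import Counter

# Illustrative sketch: the measured bias of a FAIR simulated die shrinks as
# the number of rolls grows. "Bias" here is taken to be the largest
# deviation of any face's observed frequency from the ideal 1/6.
def measured_bias(n_rolls, rng):
    counts = Counter(rng.randint(1, 6) for _ in range(n_rolls))
    return max(abs(counts[face] / n_rolls - 1 / 6) for face in range(1, 7))

rng = random.Random(42)  # fixed seed so the sketch is reproducible
small_sample = measured_bias(1_000, rng)
large_sample = measured_bias(1_000_000, rng)
print(small_sample, large_sample)  # the million-roll estimate deviates far less
```

The deviation typically shrinks roughly in proportion to the square root of the number of rolls, which is why a 100-roll test is far too weak to certify a die.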

In medicine, this testing process is performed via systematic reviews of multiple, independent, double-blind, placebo-controlled trials — every trial that is insufficiently powered to add meaningfully to the result is rightfully excluded from the aggregation.
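The aggregation step can also be sketched. Below is a hedged illustration of fixed-effect inverse-variance pooling, the textbook way a meta-analysis combines independent trial results; the effect sizes and standard errors are invented for illustration only:

```python
# Hedged sketch of fixed-effect inverse-variance pooling: each trial is
# weighted by the inverse of its variance, so precise (well-powered)
# trials count for more than imprecise ones.
def pool_fixed_effect(effects, std_errors):
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

effects = [0.10, 0.05, -0.02]    # hypothetical trial effect sizes
std_errors = [0.08, 0.05, 0.04]  # hypothetical standard errors
pooled, pooled_se = pool_fixed_effect(effects, std_errors)
print(pooled, pooled_se)
```

Note that the pooled standard error comes out smaller than that of any single trial: aggregation buys precision that no individual underpowered study can offer.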

Alt-med relies on a diametrically opposed testing process. It performs a plethora of only underpowered tests; presents those that just happen to show a positive result (just as a random die could’ve produced); and sweeps under the carpet the overwhelming number of tests that produced a negative result. It publishes only the ‘successes’, not its failures. By sweeping its failures under the carpet it feels justified in making the very bold claim: Our plethora of collected evidence shows clearly that it mostly ‘works’ and, when it doesn’t, it causes no harm.

One of the most acidic tests for a hypothesis and its supporting data (which is a mandatory test in a few branches of critical engineering) is to substitute the collected data for random data that has been carefully crafted to emulate the probability mass functions of the collected datasets. This test has to be run multiple times for reasons that I’ve attempted to explain in my random die example. If the proposer of the hypothesis is unable to explain the multiple failures resulting from this acid test then it is highly likely that the proposer either does not fully understand their hypothesis or that their hypothesis is indistinguishable from the null hypothesis.
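For readers curious about the shape of such an acid test, here is a hedged Python sketch; every name and the toy "effect statistic" are illustrative assumptions, not taken from any real study or engineering standard:

```python
import random
from collections import Counter

# Sketch of the "acid test" described above: replace the collected data
# with surrogate data drawn from the same probability mass function,
# re-run the analysis many times, and see how often the claimed "effect"
# still shows up.
def empirical_pmf(data):
    n = len(data)
    return {value: count / n for value, count in Counter(data).items()}

def surrogate_sample(pmf, size, rng):
    values = list(pmf)
    weights = [pmf[v] for v in values]
    return rng.choices(values, weights=weights, k=size)

def acid_test(data, effect_statistic, runs, rng):
    """Fraction of surrogate datasets whose statistic matches or beats the
    one computed from the collected data (a crude p-value)."""
    pmf = empirical_pmf(data)
    observed = effect_statistic(data)
    hits = sum(
        effect_statistic(surrogate_sample(pmf, len(data), rng)) >= observed
        for _ in range(runs)
    )
    return hits / runs

# Toy usage: the "effect" is simply the sample mean of some ratings.
rng = random.Random(0)
ratings = [3, 4, 4, 5, 3, 4, 5, 4, 3, 4]
p = acid_test(ratings, lambda d: sum(d) / len(d), runs=1000, rng=rng)
```

If the "effect" survives just as often on surrogate data as on the collected data, the hypothesis is doing no better than the null, which is exactly the failure mode the author describes.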

Getting good and experienced lecturers for courses is not easy. Having someone who has done more research than most working in the field and who is internationally known might therefore be a thrill for students and an image-boosting opportunity for colleges. In the true Christmas spirit, I am today making an offer of assistance to the many struggling educational institutions of alternative medicine.

A few days ago, I tweeted about my willingness to give free lectures to homeopathic colleges (so far without response). Having thought about it a bit, I would now like to extend this offer. I would be happy to give a free lecture to the students of any educational institution of alternative medicine. Specifically, I offer to either

  • do a general lecture on the clinical evidence of the 4 major types of alternative medicine (acupuncture, chiropractic, herbal medicine, homeopathy) or
  • give a more specific lecture with in-depth analyses of any given alternative therapy.

I imagine that most of the institutions in question might be a bit anxious about such an idea, but there is no need to worry: I guarantee that everything I say will be strictly and transparently evidence-based. I will disclose my sources and am willing to make my presentation available to students so that they can read up the finer details about the evidence later at home. In other words, I will do my very best to only transmit the truth about the subject at hand.

Nobody wants to hire a lecturer without having at least a rough outline of what he will be talking about – fair enough! Here I present a short summary of the lecture as I envisage it:

  • I will start by providing a background about myself, my qualifications and my experience in researching and lecturing on the matter at hand.
  • This will be followed by a background on the therapies in question, their history, current use etc.
  • Next I would elaborate on the main assumptions of the therapies in question and on their biological plausibility.
  • This will be followed by a review of the claims made for the therapies in question.
  • The main section of my lecture would be to review the clinical evidence regarding the efficacy of therapies in question. In doing this, I will not cherry-pick my evidence but rely, whenever possible, on authoritative systematic reviews, preferably those from the Cochrane Collaboration.
  • This, of course, needs to be supplemented by a review of safety issues.
  • If wanted, I could also say a few words about the importance of the placebo effect.
  • I also suggest discussing some of the most pertinent ethical issues.
  • Finally, I would hope to arrive at a few clear conclusions.

You see, all is entirely up to scratch!

Perhaps you have some doubts about my abilities to lecture? I can assure you, I have done this sort of thing all my life, I have been a professor at three different universities, and I will probably manage a lecture to your students.

A final issue might be the costs involved. As I said, I would charge neither for the preparation (which can take several days, depending on the exact topic) nor for the lecture itself. All I would hope for is that you refund my travel (and, if necessary, overnight) expenses. And please note: this offer is time-limited: approaches will be accepted until 1 January 2015 for lectures any time during 2015.

I can assure you, this is a generous offer that you ought to consider seriously – unless, of course, you do not want your students to learn the truth!

(In which case, one would need to wonder why not)

Guest post by Jan Oude-Aost

ADHD is a common disorder among children. There are evidence-based pharmacological treatments, the best known being methylphenidate (MPH). MPH has kind of a bad reputation, but is effective and reasonably safe. The market is also full of alternative treatments, pharmacological and others, some of them under investigation, some unproven and many disproven. So I was not surprised to find a study about Ginkgo biloba as a treatment for ADHD. I was surprised, however, to find this study in the German Journal of Child and Adolescent Psychiatry and Psychotherapy, officially published by the “German Society of Child and Adolescent Psychiatry and Psychotherapy“ (Deutsche Gesellschaft für Kinder- und Jugendpsychiatrie und Psychotherapie). The journal’s guidelines state that studies should provide new scientific results.

The study is called “Ginkgo biloba Extract EGb 761® in Children with ADHD“. EGb 761® is the key ingredient in “Tebonin®“, a herbal drug made by “Dr. Wilma Schwabe GmbH“. The abstract states:

One possible treatment, at least for cognitive problems, might be the administration of Ginkgo biloba, though evidence is rare. This study tests the clinical efficacy of a Ginkgo biloba special extract (EGb 761®) (…) in children with ADHD (…).

“Eine erfolgversprechende, bislang kaum untersuchte Möglichkeit zur Behandlung kognitiver Aspekte ist die Gabe von Ginkgo biloba. Ziel der vorliegenden Studie war die Prüfung klinischer Wirksamkeit (…) von Ginkgo biloba-Extrakt Egb 761® bei Kindern mit ADHS.“ (Taken from the English and German abstracts.)

The study sample (20!) was recruited among children who “did not tolerate or were unwilling“ to take MPH. The unwilling part struck me as problematic. There is likely a strong selection bias towards parents who are unwilling to give their children MPH. I guess it is not the children who are unwilling to take MPH, but the parents who are unwilling to administer it. At least some of these parents might be biased against MPH and might already favor CAM modalities.

The authors state three main problems with “herbal therapy“ that require more empirical evidence: First of all the question of adverse reactions, which they claim occur in about 1% of cases with “some CAMs“ (mind you, not “herbal therapy“). Secondly, the question of drug interactions and thirdly, the lack of information physicians have about the CAMs their patients use.

A large part of the study is based on results of an EEG-protocol, which I choose to ignore, because the clinical results are too weak to give the EEG findings any clinical relevance.

Before looking at the study itself, let’s look at what is known about Ginkgo biloba as a drug. Ginkgo is best known for its use in patients with dementia, cognitive impairment and tinnitus. A Cochrane review from 2009 concluded:

“There is no convincing evidence that Ginkgo biloba is efficacious for dementia and cognitive impairment“ [1].

The authors of the current study cite Sarris et al. (2011), a systematic review of complementary treatment of ADHD. Sarris et al. mention Salehi et al. (2010), who tested Ginkgo against MPH. MPH turned out to be much more effective than Ginkgo, but Sarris et al. argue that the duration of treatment (6 weeks) might have been too short to see the full effects of Ginkgo.

Given the above information it is unclear why Ginkgo is judged a “possible“ treatment, properly translated from German even “promising”, and why the authors state that Ginkgo has been “barely studied“.

In an unblinded, uncontrolled study with a sample likely to be biased toward the tested intervention, anything other than a positive result would be odd. In the treatment of autism there are several examples of implausible treatments that worked as long as parents knew that their children were getting the treatment, but didn’t after proper blinding (e.g. secretin).

This study’s aim was to test clinical efficacy, but the conclusion begins with how well tolerated Ginkgo was. The efficacy is mentioned subsequently: “Following administration, interrelated improvements on behavioral ratings of ADHD symptoms (…) were detected (…).“ But the way they were “detected“ is interesting. The authors used an established questionnaire (FBB-HKS) to let parents rate their children. Only the parents. The children and their teachers were not given the FBB-HKS questionnaires, in spite of this being standard clinical practice (and in spite of giving children questionnaires to determine changes in quality of life, which were not found).

None of the three problems that the authors describe as important (adverse reactions, drug interactions, lack of information) can be answered by this study. I am no expert in statistics, but it seems impossible to meaningfully determine adverse effects in just 20 patients, especially when adverse effects occur at a rate of about 1%. The authors claim they found an incidence rate of 0.004% in “700 observation days“. Well, if they say so.
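Back-of-envelope arithmetic supports this scepticism. A quick sketch using the statistical “rule of three“ (the event rate is the 1% figure mentioned above; everything else is illustrative):

```python
# Rule of three: if NO adverse event is seen in n patients, the approximate
# 95% upper bound on the event rate is 3/n; conversely, to have ~95% chance
# of observing at least one event of rate p, you need about 3/p patients.
p = 0.01                  # assumed adverse-event rate of 1%
n_needed = 3 / p          # patients needed to reliably see even one event
n_study = 20              # the actual sample size of the Ginkgo study

# Chance that a 20-patient study observes at least one such event at all:
prob_at_least_one = 1 - (1 - p) ** n_study
print(n_needed, prob_at_least_one)
```

With a 1% event rate, one would need on the order of 300 patients, and a 20-patient study has less than a one-in-five chance of seeing even a single adverse event; so its safety claims carry almost no information.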

The authors conclude:

Taken together, the current study provides some preliminary evidence that Ginkgo biloba Egb 761® seems to be well tolerated in the short term and may be a clinically useful treatment for children with ADHD. Double-blind randomized trials are required to clarify the value of the presented data.

Given the available information mentioned earlier, one could have started with that conclusion and conducted a double blind RCT in the first place!

Clinical Significance

“The trends of this preliminary open study may suggest that Ginkgo biloba Egb 761® might be considered as a complementary or alternative medicine for treating children with ADHD.“

So, why do I care, if preliminary evidence “may suggest“ that something “might be considered“ as a treatment? Because I think that this study does not answer any important questions or give us any new or useful knowledge. Following the journal’s guidelines, it should therefore not have been published. I also think it is an example of bad science: bad not just because of the lack of critical thinking, but also because it adds to the misinformation about possible ADHD treatments spreading through the internet. The study was published in September. In November I found a website citing the study and calling it “clinical proof“, which it is not. But child psychiatrists will have to explain that to many parents, instead of talking about their children’s health.

I somehow got the impression that this study was more about marketing than about science. I wonder if Schwabe will help finance the necessary double-blind randomized trial…

[1] See more at: http://summaries.cochrane.org/CD003120/DEMENTIA_there-is-no-convincing-evidence-that-ginkgo-biloba-is-efficacious-for-dementia-and-cognitive-impairment#sthash.oqKFrSCC.dpuf

Reiki is a form of energy healing that evidently has been getting so popular that, according to the ‘Shropshire Star’, even stressed hedgehogs are now being treated with this therapy. In case you argue that this publication is not cutting edge when it comes to reporting of scientific advances, you may have a point. So, let us see what evidence we find on this amazing intervention.

A recent systematic review of the therapeutic effects of Reiki concludes that the serious methodological and reporting limitations of limited existing Reiki studies preclude a definitive conclusion on its effectiveness. High-quality randomized controlled trials are needed to address the effectiveness of Reiki over placebo. Considering that this article was published in the JOURNAL OF ALTERNATIVE AND COMPLEMENTARY MEDICINE, this is a fairly damning verdict. The notion that Reiki is but a theatrical placebo recently received more support from a new clinical trial.

This pilot study examined the effects of Reiki therapy and companionship on improvements in quality of life, mood, and symptom distress during chemotherapy. Thirty-six breast cancer patients received usual care, Reiki, or a companion during chemotherapy. Data were collected from patients while they were receiving usual care. Subsequently, patients were randomized to either receive Reiki or a companion during chemotherapy. Questionnaires assessing quality of life, mood, symptom distress, and Reiki acceptability were completed at baseline and chemotherapy sessions 1, 2, and 4. Reiki was rated relaxing and caused no side effects. Both Reiki and companion groups reported improvements in quality of life and mood that were greater than those seen in the usual care group.

The authors of this study conclude that interventions during chemotherapy, such as Reiki or companionship, are feasible, acceptable, and may reduce side effects.

This is an odd conclusion, if there ever was one. Clearly the ‘companionship’ group was included to see whether Reiki has effects beyond simply providing sympathetic attention. The results show that this is not the case. It follows, I think, that Reiki is a placebo; its perceived relaxing effects are the result of non-specific phenomena which have nothing to do with Reiki per se. The fact that the authors fail to spell this out more clearly makes me wonder whether they are researchers or promoters of Reiki.

Some people will feel that it does not matter how Reiki works, the main thing is that it does work. I beg to differ!

If its effects are due to nothing else than attention and companionship, we do not need ‘trained’ Reiki masters to do the treatment; anyone who has time, compassion and sympathy can do it. More importantly, if Reiki is a placebo, we should not mislead people that some supernatural energy is at work. This only promotes irrationality – and, as Voltaire once said: those who make you believe in absurdities can make you commit atrocities.
