
Iyengar Yoga, named after and developed by B. K. S. Iyengar, is a form of Hatha Yoga that has an emphasis on detail, precision and alignment in the performance of posture (asana) and breath control (pranayama). The development of strength, mobility and stability is gained through the asanas.

B.K.S. Iyengar has systematised over 200 classical yoga poses and 14 different types of Pranayama (with variations of many of them) ranging from the basic to advanced. This helps ensure that students progress gradually by moving from simple poses to more complex ones and develop their mind, body and spirit step by step.

Iyengar Yoga often makes use of props, such as belts, blocks, and blankets, as aids in performing asanas (postures). The props enable students to perform the asanas correctly, minimising the risk of injury or strain, and making the postures accessible to both young and old.

Sounds interesting? But does it work?

The objective of this recent systematic review was to evaluate the existing research on Iyengar yoga for relieving back and neck pain. The authors conducted extensive literature searches and found six RCTs that met the inclusion criteria.

In all six studies, the between-group difference on the post-intervention pain or functional disability assessment favoured the yoga group, indicating a decrease in back and neck pain.

The authors concluded that Iyengar yoga is an effective means of treating both back and neck pain in comparison to control groups. This systematic review found strong evidence for short-term effectiveness, but little evidence for long-term effectiveness, of yoga for chronic spine pain in patient-centred outcomes.

So, if we can trust this evidence (I would not call the evidence ‘strong’), we have yet another treatment that might be effective for back and neck pain. The trouble, I fear, is not that we have too few such treatments; the trouble seems to be that we have too many of them. They all seem similarly effective, and I cannot help but wonder whether, in fact, they are all similarly ineffective.

Regardless of the answer to this troubling question, I feel the need to re-state what I have written many times before: FOR A CONDITION WITH A MULTITUDE OF ALLEGEDLY EFFECTIVE THERAPIES, IT MIGHT BE BEST TO CHOOSE THE ONE THAT IS SAFEST AND CHEAPEST.

A recent article in the BMJ about my new book seems to have upset fellow researchers of alternative medicine. I am told that the offending passage is the following:

“Too much research on complementary therapies is done by people who have already made up their minds,” the first UK professor of complementary medicine has said. Edzard Ernst, who left his chair at Exeter University early after clashing with the Prince of Wales, told journalists at the Science Media Centre in London that, although more research into alternative medicines was now taking place, “none of the centres is anywhere near critical enough.”

Following this publication, I received indignant inquiries from colleagues asking whether I meant to say that their work lacks critical thinking. As this is a valid question, I will try to answer it as best I presently can.

Any critical evaluation of alternative medicine has to yield its fair share of negative conclusions about the value of alternative medicine. If it fails to do that, one would need to assume that most or all alternative therapies generate more good than harm – and very few experts (who are not proponents of alternative medicine) would assume that this can possibly be the case.

Put differently, this means that a researcher or research group that does not generate its fair share of negative conclusions is suspect of lacking a critical attitude. In a previous post, I have addressed this issue in more detail by creating an ‘index’: THE TRUSTWORTHINESS INDEX. I have also provided a concrete example of a researcher who seems to be associated with a remarkably high index (the higher the index, the stronger the suspicion of a lack of critical attitude).

Instead of unnecessarily upsetting my fellow researchers of alternative medicine any further, I will just issue this challenge: if any research group can demonstrate to have an index below 0.5 (which would mean the team has published twice as many negative conclusions as positive ones), I will gladly and publicly retract my suspicion that this group is “anywhere near critical enough”.
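For clarity, here is a minimal sketch of how such an index could be computed. The precise definition is given in the earlier post; the version below simply assumes, as the challenge implies, that the index is the ratio of positive to negative published conclusions.

```python
def trustworthiness_index(positive_conclusions: int, negative_conclusions: int) -> float:
    """Assumed definition, reconstructed from the text: the ratio of
    positive to negative conclusions a group has published. An index
    of 0.5 then corresponds to twice as many negative conclusions
    as positive ones, which is the threshold of the challenge."""
    return positive_conclusions / negative_conclusions

# A group with 5 positive and 10 negative conclusions would meet the challenge:
index = trustworthiness_index(5, 10)  # 0.5
```

Under this reading, most research groups in alternative medicine would score well above 1.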

Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?

Here is a brand new one which might stand for dozens of others.

In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA) and prospectively followed them over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).

The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out, and one did not achieve an improvement of 80%, and was therefore also counted as a treatment failure. The cost of homeopathic treatment was 41% of projected equivalent conventional treatment.

Good news then for enthusiasts of homeopathy? 91% improvement!

Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group, or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… until I read the authors’ conclusions, that is:

Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.

Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:

  1. How on earth can we take this and so many other articles on homeopathy seriously?
  2. When does this sort of article cross the line between wishful thinking and scientific misconduct?

I would have never thought that someone would be able to identify the author of the text I quoted in the previous post:

It is known that not just novel therapies but also traditional ones, such as homeopathy, suffer opposition and rejection by some doctors without having ever been subjected to serious tests. The doctor is in charge of medical treatment; he is thus responsible foremost for making sure all knowledge and all methods are employed for the benefit of public health…I ask the medical profession to consider even previously excluded therapies with an open mind. It is necessary that an unbiased evaluation takes place, not just of the theories but also of the clinical effectiveness of alternative medicine.

More often than once has science, when it relied on theory alone, arrived at verdicts which later had to be overturned – frequently this occurred only after long periods of time, after progress had been hindered and most acclaimed pioneers had suffered serious injustice. I do not need to remind you of the doctor who, more than 100 years ago, in fighting puerperal fever, discovered sepsis and asepsis but was laughed at and ousted by his colleagues throughout his lifetime. Yet nobody would today deny that this knowledge is most relevant to medicine and that it belongs to the basis of medicine. Insightful doctors, some of whom famous, have, during the recent years, spoken openly about the crisis in medicine and the dead end that health care has maneuvered itself into. It seems obvious that the solution is going in directions which embrace nature. Hardly any other form of science is so tightly bound to nature as is the science occupied with healing living creatures. The demand for holism is getting stronger and stronger, a general demand which has already been fruitful on the political level. For medicine, the challenge is to treat more than previously by influencing the whole organism when we aim to heal a diseased organ.

It is from the opening speech by Rudolf Hess on the occasion of the WORLD CONFERENCE ON HOMEOPATHY 1937, in Berlin. Hess, at the time Hitler’s deputy, was not the only Nazi leader present. I knew of the opening speech because, a few years ago, DER SPIEGEL published a theme issue on homeopathy, including a photo of the opening ceremony of this meeting. It shows many men in SS uniform and, in the first row of the auditorium, Hess (as well as Himmler) ready to spring into action.

Hess in particular was besotted with alternative medicine, which the Nazis elected to call NEUE DEUTSCHE HEILKUNDE. Somewhat to the dismay of today’s alternative medicine enthusiasts, I have repeatedly published on this aspect of alternative medicine’s past, and it also is an important part of my new book A SCIENTIST IN WONDERLAND, which the lucky winner of my little competition to identify the author has won (my congratulations!). The abstract of a 2001 article explains this history succinctly:

The aim of this article is to discuss complementary/alternative medicine (CAM) in the Third Reich. Based on a general movement towards all things natural, a powerful trend towards natural ways of healing had developed in the 19th century. By 1930 this had led to a situation where roughly as many lay practitioners of CAM existed in Germany as doctors. To re-unify German medicine under the banner of ‘Neue Deutsche Heilkunde’, the Nazi officials created the ‘Heilpraktiker’ – a profession which was meant to become extinct within one generation. The ‘flagship’ of the ‘Neue Deutsche Heilkunde’ was the ‘Rudolf Hess Krankenhaus’ in Dresden. It represented a full integration of CAM and orthodox medicine. An example of systematic research into CAM is the Nazi government’s project to validate homoeopathy. Even though the data are now lost, the results of this research seem to have been negative. Even though there are some striking similarities between today’s CAM and yesterday’s ‘Neue Deutsche Heilkunde’ there are important differences. Most importantly, perhaps, today’s CAM is concerned with the welfare of the individual, whereas the ‘Neue Deutsche Heilkunde’ was aimed at ensuring the dominance of the Aryan race.

One fascinating aspect of this past is the fact that the NEUE DEUTSCHE HEILKUNDE was de facto the invention of what we today call ‘integrated medicine’. Then it was more like a ‘shot-gun marriage’, while today it seems to be driven more by political correctness and sloppy thinking. It did not work 70 years ago for the same reason that it will fail today: the integration of bogus (non-evidence based) treatments into conventional medicine must inevitably render health care not better but worse!

One does not need to be a rocket scientist to understand that, and Hess as well as other proponents of alternative medicine of his time had certainly got the idea. So they initiated the largest ever series of scientific tests of homeopathy. This research programme was not just left to the homeopaths, who never had a reputation of being either rigorous or unbiased; some of the best scientists of the era were recruited for it. The results vanished in the hands of the homeopaths during the turmoil of the war. But one eye-witness report by a homeopath, Fritz Donner, makes it very clear: as it turned out, there was not a jot of evidence in favour of homeopathy.

And this, I think, is the other fascinating aspect of the story: homeopaths did not give up their quest to popularise homeopathy. On the contrary, they re-doubled their efforts to fool us all and to convince us with dodgy results (see recent posts on this blog) that homeopathy somehow defies the laws of nature and is effective for all sorts of diseases.

My readers suggested all sorts of potential authors for the Hess speech; and they are right! It could have been written by any proponent of alternative medicine. This fact is amusing and depressing at the same time. Amusing because it discloses the lack of new ideas and arguments (even the same fallacies are being used). Depressing because it suggests that progress in alternative medicine is almost totally absent.

As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):

BACKGROUND:

A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.

METHODS:

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

RESULTS:

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

CONCLUSIONS:

Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

Since my team had published an RCT of individualised homeopathy, it seems only natural that my interest focussed on why this study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.

I was convinced that this trial had been rigorous and thus puzzled why, despite receiving ‘full marks’ from the reviewers, they had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data provided by us would not lend themselves to meta-analysis. By electing not our primary but our secondary outcome measure, Mathie et al were able to claim that they could not use our study and to reject it from their meta-analysis.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO“. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in this way).

By following rigidly their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol, which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is even the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall results would not have been in favour of homeopathy.
  • Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:

I have reason to believe that this review and meta-analysis is biased in favor of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua; (2) Walach 1997, about homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analysis. Jacobs’ results were in favour of homeopathy, Walach’s were not.

For the domains where the rating of Walach’s study was less than that of the Jacobs study, please find citations from the original studies or my short summaries for the point in question.

Domain I: Sequence generation:
Walach:
“The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (Low risk of bias)

Domain IIIb: Blinding of outcome assessor
Walach:
“The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study. ”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (Low risk of bias)

Domain V: Selective outcome reporting

Walach:
Study protocol was published in 1991 prior to enrollment of participants, all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)

Jacobs:
No prior publication of protocol, but a pilot study exists. However, this was published in 1993, only after the trial was performed in 1991. Primary outcome defined (duration of diarrhea) and reported, but table and graph do not match; secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Walach:
Rating: NO (high risk of bias), no details given

Jacobs:
Imbalance of group properties (size, weight and age of children) that might have some impact on the course of the disease; high impact of parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.

Conclusion

So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof of misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying.

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.

On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because

  1. the assumptions which underpin homeopathy fly in the face of science,
  2. the clinical evidence fails to show that it works beyond a placebo effect.

But was I correct?

A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of an individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that with placebos.

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
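For readers who want to probe such figures, a reported odds ratio and its 95% confidence interval can be unpacked on the log scale to recover the standard error and an approximate p-value. This is generic meta-analysis arithmetic, not the authors’ own code, so treat it as an illustrative sketch:

```python
import math

def or_summary(or_point, ci_low, ci_high, z_crit=1.96):
    """Recover the log-scale standard error, z-statistic and an
    approximate two-sided p-value from an odds ratio and 95% CI,
    assuming the CI was computed as exp(log-OR +/- 1.96 * SE)."""
    log_or = math.log(or_point)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z_crit)
    z = log_or / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return se, z, p

# Mathie et al.'s pooled result: OR = 1.53 (95% CI 1.22 to 1.91)
se, z, p = or_summary(1.53, 1.22, 1.91)
```

The pooled estimate is statistically significant; whether it is clinically meaningful, given that 20 of 32 trials were at high risk of bias, is exactly the question at issue.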

The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

One does not need to be a prophet to predict that the world of homeopathy will declare this article as the ultimate proof of homeopathy’s efficacy beyond placebo. Already the ‘British Homeopathic Association’ has issued the following press release:

Clinical evidence for homeopathy published

Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.

The paper, published in the peer-reviewed journal Systematic Reviews, reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.

The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.

The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.

“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”

Surprised? I was stunned and thus studied the article in much detail (luckily the full text version is available online). Then I entered into an email exchange with the first author who I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to better understand the review’s methodology; but it also resulted in me being very much underwhelmed by the reliability of the authors’ conclusion.

Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.

SO PLEASE TAKE SOME TIME TO STUDY THIS PAPER AND TELL US WHAT YOU THINK.

Guest post by Jan Oude-Aost

ADHD is a common disorder among children. There are evidence based pharmacological treatments, the best known being methylphenidate (MPH). MPH has kind of a bad reputation, but is effective and reasonably safe. The market is also full of alternative treatments, pharmacological and others, some of them under investigation, some unproven and many disproven. So I was not surprised to find a study about Ginkgo biloba as a treatment for ADHD. I was surprised, however, to find this study in the German Journal of Child and Adolescent Psychiatry and Psychotherapy, officially published by the “German Society of Child and Adolescent Psychiatry and Psychotherapy“ (Deutsche Gesellschaft für Kinder- und Jugendpsychiatrie und Psychotherapie). The journal’s guidelines state that studies should provide new scientific results.

The study is called “Ginkgo biloba Extract EGb 761® in Children with ADHD“. EGb 761® is the key ingredient in “Tebonin®“, a herbal drug made by “Dr. Wilma Schwabe GmbH“. The abstract states:

One possible treatment, at least for cognitive problems, might be the administration of Ginkgo biloba, though evidence is rare.This study tests the clinical efficacy of a Ginkgo biloba special extract (EGb 761®) (…) in children with ADHD (…).

Eine erfolgversprechende, bislang kaum untersuchte Möglichkeit zur Behandlung kognitiver Aspekte ist die Gabe von Ginkgo biloba. Ziel der vorliegenden Studie war die Prüfung klinischer Wirksamkeit (…) von Ginkgo biloba-Extrakt Egb 761® bei Kindern mit ADHS.“ (Taken from the English and German abstracts.)

The study sample (20!) was recruited among children who “did not tolerate or were unwilling“ to take MPH. The unwilling part struck me as problematic. There is likely a strong selection bias towards parents who are unwilling to give their children MPH. I guess it is not the children who are unwilling to take MPH, but the parents who are unwilling to administer it. At least some of these parents might be biased against MPH and might already favor CAM modalities.

The authors state three main problems with “herbal therapy“ that require more empirical evidence: first, the question of adverse reactions, which they claim occur in about 1% of cases with “some CAMs“ (mind you, not “herbal therapy“); second, the question of drug interactions; and third, the lack of information physicians have about the CAMs their patients use.

A large part of the study is based on results of an EEG-protocol, which I choose to ignore, because the clinical results are too weak to give the EEG findings any clinical relevance.

Before looking at the study itself, let’s look at what is known about Ginkgo biloba as a drug. Ginkgo is best known for its use in patients with dementia, cognitive impairment and tinnitus. A Cochrane review from 2009 concluded:

“There is no convincing evidence that Ginkgo biloba is efficacious for dementia and cognitive impairment” [1].

The authors of the current study cite Sarris et al. (2011), a systematic review of complementary treatments of ADHD. Sarris et al. mention Salehi et al. (2010), who tested Ginkgo against MPH. MPH turned out to be much more effective than Ginkgo, but Sarris et al. argue that the duration of treatment (6 weeks) might have been too short to see the full effects of Ginkgo.

Given the above information it is unclear why Ginkgo is judged a “possible“ treatment, properly translated from German even “promising”, and why the authors state that Ginkgo has been “barely studied“.

In an unblinded, uncontrolled study with a sample likely to be biased toward the tested intervention, anything other than a positive result would be odd. In the treatment of autism there are several examples of implausible treatments that worked as long as parents knew that their children were getting the treatment, but didn’t after proper blinding (e.g. secretin).

This study’s aim was to test clinical efficacy, but the conclusion begins with how well tolerated Ginkgo was. The efficacy is mentioned subsequently: “Following administration, interrelated improvements on behavioral ratings of ADHD symptoms (…) were detected (…).“ But the way they were “detected“ is interesting. The authors used an established questionnaire (FBB-HKS) to let parents rate their children. Only the parents. The children and their teachers were not given the FBB-HKS questionnaires, in spite of this being standard clinical practice (and in spite of the children being given questionnaires to determine changes in quality of life, where none were found).

None of the three problems that the authors describe as important (adverse reactions, drug interactions, lack of information) can be answered by this study. I am no expert in statistics, but it seems unlikely that adverse effects can be meaningfully determined in just 20 patients, especially when those adverse effects occur at a rate of around 1%. The authors claim they found an incidence rate of 0.004% in “700 observation days”. Well, if they say so.
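A back-of-envelope calculation illustrates why 20 patients cannot tell us much about a 1% adverse-effect rate. The figures below are my own illustration (assuming a simple binomial model and the standard “rule of three”), not numbers taken from the study:

```python
# Illustration: detecting a 1% adverse-effect rate with only 20 patients.
# Assumptions (mine, not the study's): independent patients, binomial model.

n_patients = 20
true_rate = 0.01  # assumed 1% adverse-effect rate

# Probability that such a small sample shows even ONE affected patient:
p_at_least_one = 1 - (1 - true_rate) ** n_patients
print(f"P(at least one event in {n_patients} patients) = {p_at_least_one:.1%}")
# roughly 18% - i.e. in most such trials the effect would go unseen

# "Rule of three": if zero events are observed in n patients, the 95%
# upper confidence bound on the true rate is approximately 3/n.
upper_bound = 3 / n_patients
print(f"95% upper bound on the rate after 0/{n_patients} events = {upper_bound:.0%}")
# i.e. a clean safety record in 20 patients is still consistent with a
# true adverse-effect rate as high as ~15%
```

In other words, even a spotless result in this sample would leave the true rate of adverse effects almost entirely unconstrained.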

The authors conclude:

Taken together, the current study provides some preliminary evidence that Ginkgo biloba Egb 761® seems to be well tolerated in the short term and may be a clinically useful treatment for children with ADHD. Double-blind randomized trials are required to clarify the value of the presented data.

Given the available information mentioned earlier, one could have started with that conclusion and conducted a double-blind RCT in the first place!

Clinical Significance

“The trends of this preliminary open study may suggest that Ginkgo biloba Egb 761® might be considered as a complementary or alternative medicine for treating children with ADHD.”

So, why do I care? If preliminary evidence “may suggest“ that something “might be considered“ as a treatment? Because I think that this study does not answer any important questions or give us any new or useful knowledge. Following the journal’s guidelines, it should therefore not have been published. I also think it is an example of bad science. Bad not just because of the lack of critical thinking. It also adds to the misinformation about possible ADHD treatments spreading through the internet. The study was published in September. In November I found a website citing the study and calling it “clinical proof“ when it is not. But child psychiatrists will have to explain that to many parents, instead of talking about their children’s health.

I somehow got the impression that this study was more about marketing than about science. I wonder if Schwabe will help finance the necessary double-blind randomized trial…

[1] http://summaries.cochrane.org/CD003120/DEMENTIA_there-is-no-convincing-evidence-that-ginkgo-biloba-is-efficacious-for-dementia-and-cognitive-impairment

Acupuncture seems to be more popular than ever before – many conventional pain clinics now employ acupuncturists, for instance. It is probably true to say that acupuncture is one of the best-known of all alternative therapies. Yet experts are still divided in their views about this treatment – some proclaim that acupuncture is the best thing since sliced bread, while others insist that it is no more than a theatrical placebo. Consumers, I imagine, are often left helpless in the middle of these debates. Here are 7 important bits of factual information that might help you make up your mind, in case you are tempted to try acupuncture.

  1. Acupuncture is ancient; some enthusiasts thus claim that it has ‘stood the test of time’, i.e. that its long history proves its efficacy and safety beyond reasonable doubt and certainly more conclusively than any scientific test. Whenever you hear such arguments, remind yourself that the ‘argumentum ad traditionem’ is nothing but a classic fallacy. A long history of usage proves very little – think of how long bloodletting was used, even though it killed millions.
  2. We often think of acupuncture as being one single treatment, but there are many different forms of this therapy. According to believers in acupuncture, acupuncture points can be stimulated not just by inserting needles (the most common way) but also with heat, electrical currents, ultrasound, pressure, etc. Then there is body acupuncture, ear acupuncture and even tongue acupuncture. Finally, some clinicians employ the traditional Chinese approach based on the assumption that two life forces are out of balance and need to be re-balanced, while so-called ‘Western’ acupuncturists adhere to the concepts of conventional medicine and claim that acupuncture works via scientifically explainable mechanisms that are unrelated to ancient Chinese philosophies.
  3. Traditional Chinese acupuncturists have not normally studied medicine and base their practice on the Taoist philosophy of the balance between yin and yang, which has no basis in science. This explains why acupuncture is seen by traditional acupuncturists as a ‘cure-all’. In contrast, medical acupuncturists tend to cite neurophysiological explanations as to how acupuncture might work. However, it is important to note that, even though they may appear plausible, these explanations are currently just theories and constitute no proof for the validity of acupuncture as a medical intervention.
  4. The therapeutic claims made for acupuncture are legion. According to the traditional view, acupuncture is useful for virtually every condition affecting mankind; according to the more modern view, it is effective for a relatively small range of conditions only. On closer examination, the vast majority of these claims can be shown to rest on either no evidence or very flimsy evidence. Once we examine the data from reliable clinical trials (today several thousand studies of acupuncture are available – see below), we realise that acupuncture is associated with a powerful placebo effect, and that it works better than a placebo only for very few (some say for no) conditions.
  5. The interpretation of the trial evidence is far from straightforward: most of the clinical trials of acupuncture originate from China, and several investigations have shown that very close to 100% of them are positive. This means that the results of these studies have to be taken with more than a small pinch of salt. In order to control for patient expectations, clinical trials can be done with sham needles which do not penetrate the skin but collapse like miniature stage daggers. This method does not, however, control for acupuncturists’ expectations; blinding of the therapists is difficult, and therefore truly double-blind (patient and therapist) trials of acupuncture hardly exist. This means that even the most rigorous studies of acupuncture are usually burdened with residual bias.
  6. Few acupuncturists warn their patients of possible adverse effects; this may be because the side-effects of acupuncture (they occur in about 10% of all patients) are mostly mild. However, it is important to know that very serious complications of acupuncture are on record as well: acupuncture needles can injure vital organs like the lungs or the heart, and they can introduce infections into the body, e.g. hepatitis. About 100 fatalities after acupuncture have been reported in the medical literature – a figure which, due to the lack of a monitoring system, may represent just the tip of the iceberg.
  7. Given that, for the vast majority of conditions, there is no good evidence that acupuncture works beyond a placebo response, and that acupuncture is associated with finite risks, it seems to follow that, in most situations, the risk/benefit balance for acupuncture fails to be convincingly positive.

Reiki is a form of energy healing that evidently has been getting so popular that, according to the ‘Shropshire Star’, even stressed hedgehogs are now being treated with this therapy. In case you argue that this publication is not cutting edge when it comes to reporting of scientific advances, you may have a point. So, let us see what evidence we find on this amazing intervention.

A recent systematic review of the therapeutic effects of Reiki concludes that the serious methodological and reporting limitations of the few existing Reiki studies preclude a definitive conclusion on its effectiveness, and that high-quality randomized controlled trials are needed to address the effectiveness of Reiki over placebo. Considering that this article was published in the JOURNAL OF ALTERNATIVE AND COMPLEMENTARY MEDICINE, this is a fairly damning verdict. The notion that Reiki is but a theatrical placebo recently received more support from a new clinical trial.

This pilot study examined the effects of Reiki therapy and companionship on improvements in quality of life, mood, and symptom distress during chemotherapy. Thirty-six breast cancer patients received usual care, Reiki, or a companion during chemotherapy. Data were collected from patients while they were receiving usual care. Subsequently, patients were randomized to either receive Reiki or a companion during chemotherapy. Questionnaires assessing quality of life, mood, symptom distress, and Reiki acceptability were completed at baseline and chemotherapy sessions 1, 2, and 4. Reiki was rated relaxing and caused no side effects. Both Reiki and companion groups reported improvements in quality of life and mood that were greater than those seen in the usual care group.

The authors of this study conclude that interventions during chemotherapy, such as Reiki or companionship, are feasible, acceptable, and may reduce side effects.

This is an odd conclusion, if there ever was one. Clearly the ‘companionship’ group was included to see whether Reiki has effects beyond simply providing sympathetic attention. The results show that this is not the case. It follows, I think, that Reiki is a placebo; its perceived relaxing effects are the result of non-specific phenomena which have nothing to do with Reiki per se. The fact that the authors fail to spell this out more clearly makes me wonder whether they are researchers or promoters of Reiki.

Some people will feel that it does not matter how Reiki works, the main thing is that it does work. I beg to differ!

If its effects are due to nothing other than attention and companionship, we do not need ‘trained’ Reiki masters to do the treatment; anyone who has time, compassion and sympathy can do it. More importantly, if Reiki is a placebo, we should not mislead people that some supernatural energy is at work. This only promotes irrationality – and, as Voltaire once said: those who can make you believe absurdities can make you commit atrocities.

Acute tonsillitis (AT) is an upper respiratory tract infection which is prevalent, particularly in children. The cause is usually a viral or, less commonly, a bacterial infection. Treatment is symptomatic and usually consists of ample fluid intake and pain-killers; antibiotics are rarely indicated, even if the infection is bacterial by nature. The condition is self-limiting and symptoms subside normally after one week.

Homeopaths believe that their remedies are effective for AT – but is there any evidence? A recent trial seems to suggest there is.

It aimed, according to its authors, to determine the efficacy of a homeopathic complex on the symptoms of acute viral tonsillitis in African children in South Africa.

The double-blind, placebo-controlled RCT was a 6-day “pilot study” and included 30 children aged 6 to 12 years, with acute viral tonsillitis. Participants took two tablets 4 times per day. The treatment group received lactose tablets medicated with the homeopathic complex (Atropa belladonna D4, Calcarea phosphoricum D4, Hepar sulphuris D4, Kalium bichromat D4, Kalium muriaticum D4, Mercurius protoiodid D10, and Mercurius biniodid D10). The placebo consisted of the unmedicated vehicle only. The Wong-Baker FACES Pain Rating Scale was used for measuring pain intensity, and a Symptom Grading Scale assessed changes in tonsillitis signs and symptoms.

The results showed that the treatment group had a statistically significant improvement in the following symptoms compared with the placebo group: pain associated with tonsillitis, pain on swallowing, erythema and inflammation of the pharynx, and tonsil size.

The authors drew the following conclusions: the homeopathic complex used in this study exhibited significant anti-inflammatory and pain-relieving qualities in children with acute viral tonsillitis. No patients reported any adverse effects. These preliminary findings are promising; however, the sample size was small and therefore a definitive conclusion cannot be reached. A larger, more inclusive research study should be undertaken to verify the findings of this study.

Personally, I agree only with the latter part of the conclusion and very much doubt that this study was able to “determine the efficacy” of the homeopathic product used. The authors themselves call their trial a “pilot study”. Such projects are not meant to determine efficacy but are usually designed to determine the feasibility of a trial design in order to subsequently mount a definitive efficacy study.

Moreover, I have considerable doubts about the impartiality of the authors. Their affiliation is “Department of Homoeopathy, University of Johannesburg, Johannesburg, South Africa”, and their article was published in a journal known to be biased in favour of homeopathy. These circumstances in themselves might not be all that important, but what makes me more than a little suspicious is this sentence from the introduction of their abstract:

“Homeopathic remedies are a useful alternative to conventional medications in acute uncomplicated upper respiratory tract infections in children, offering earlier symptom resolution, cost-effectiveness, and fewer adverse effects.”

A useful alternative to conventional medications (there are no conventional drugs for this self-limiting viral condition) for earlier symptom resolution?

If it is true that the usefulness of homeopathic remedies has been established, why conduct the study?

If the authors were so convinced of this notion (for which there is, of course, no good evidence) how can we assume they were not biased in conducting this study?

I think that, in order to agree that a homeopathic remedy generates effects that differ from those of placebo, we need a proper (not a pilot) study, published in a journal of high standing by unbiased scientists.
