

As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):

BACKGROUND:

A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.

METHODS:

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

RESULTS:

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

CONCLUSIONS:

Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
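To make the abstract's statistics concrete: an odds ratio and its confidence interval are conventionally derived from a 2×2 table of responders and non-responders. The Python sketch below uses invented counts (the review's trial-level data are not reproduced here) together with the standard Woolf formula for the standard error of log(OR):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = responders on homeopathy,  b = non-responders on homeopathy,
    c = responders on placebo,     d = non-responders on placebo.
    """
    or_ = (a * d) / (b * c)
    # Woolf method: standard error of log(OR)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented example counts -- NOT data from any trial in the review
or_, lo, hi = odds_ratio_ci(30, 20, 22, 28)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # → OR = 1.91 (95% CI 0.86 to 4.23)
```

As in the abstract, OR > 1 favours homeopathy; a confidence interval that excludes 1 (unlike this invented example) is what makes a pooled result 'significant'.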

Since my team had published an RCT of individualised homeopathy, my interest naturally focussed on why this study (even though it had been identified by Mathie et al) was not included in the meta-analysis. Our trial had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma who were also receiving conventional treatment in primary care.

I was convinced that this trial had been rigorous and was thus puzzled why, despite receiving ‘full marks’ from the reviewers, it had not been included in their meta-analysis. I therefore wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and those data did not lend themselves to meta-analysis. By electing to use not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to exclude it from their meta-analysis.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO”. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in this way).

By rigidly following their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol that forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, adhering to it is the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, their overall results would most likely not have been in favour of homeopathy.
  • Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told that it was because we had accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and, of course, positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this relates just to our own study. Norbert Aust has uncovered similar irregularities with other trials, and I take the liberty of quoting again here the comments he posted previously:

I have reason to believe that this review and meta-analysis is biased in favor of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analysis. Jacobs’ results were in favour of homeopathy, Walach’s were not.

For the domains where the rating of Walach’s study was lower than that of the Jacobs study, please find below citations from the original studies, or my short summaries, of the point in question.

Domain I: Sequence generation:
Walach:
“The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (Low risk of bias)

Domain IIIb: Blinding of outcome assessor
Walach:
“The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study. ”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (Low risk of bias)

Domain V: Selective outcome reporting

Walach:
The study protocol was published in 1991, prior to enrolment of participants; all primary outcome variables were reported with respect to all participants and endpoints.
Rating: NO (high risk of bias)

Jacobs:
No prior publication of a protocol, but a pilot study exists. However, this was published in 1993, only after the trial had been performed in 1991. A primary outcome was defined (duration of diarrhea) and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems to have been defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Walach:
Rating: NO (high risk of bias), no details given

Jacobs:
Imbalance of group properties (size, weight and age of the children) that might have some impact on the course of the disease; high impact of parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment.
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.

Conclusion

So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof of misconduct? I asked Mathie, and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying.

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.

On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because

  1. the assumptions which underpin homeopathy fly in the face of science,
  2. the clinical evidence fails to show that it works beyond a placebo effect.

But was I correct?

A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of an individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that of placebos.

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

One does not need to be a prophet to predict that the world of homeopathy will declare this article to be the ultimate proof of homeopathy’s efficacy beyond placebo. Already, the ‘British Homeopathic Association’ has issued the following press release:

Clinical evidence for homeopathy published

Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.

The paper, published in the peer-reviewed journal Systematic Reviews, reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.

The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.

The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.

“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”

Surprised? I was stunned, and thus studied the article in much detail (luckily, the full-text version is available online). Then I entered into an email exchange with the first author, whom I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to better understand the review’s methodology; but it also left me very much underwhelmed by the reliability of the authors’ conclusion.

Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.

SO PLEASE TAKE SOME TIME TO STUDY THIS PAPER AND TELL US WHAT YOU THINK.

Each year, during the Christmas period, we are bombarded with religious ideology, soapy sentimentality and delusive festive cheer. In case you are beginning to feel slightly nauseous about all this, it might be time to counter-balance this abundance with my (not entirely serious) version of the ’10 commandments of quackery’:

  1. You must not use therapies other than those recommended by your healer – certainly nothing that is evidence-based!
  2. You must never doubt what your healer tells you; (s)he embraces the wisdom of millennia combined with the deep insights of post-modernism – and is therefore beyond doubt.
  3. You must happily purchase all the books, gadgets, supplements etc. your healer offers for sale. For more merchandise, you must frequent your local health food shops. Money is no object!
  4. You must never read scientific literature; it is the writing of evil. The truth can only be found by studying the texts recommended by your healer.
  5. You must never enter into discussions with sceptics or other critical thinkers; they are wicked and want to destroy your well-being.
  6. You must do everything in your power to fight the establishment, Big Pharma, their dangerous drugs and vicious vaccines.
  7. You must support Steiner Schools, Prince Charles and other enlightened visionaries so that the next generation is guided towards the eternal light.
  8. You must detox regularly to eliminate the ubiquitous, malignant poisons of Satan.
  9. You must blindly, unreservedly and religiously believe in vitalism, quantum medicine, vibrational energy and all other concepts your healer relies upon.
  10. You must denounce, vilify, aggress and attack anyone who disagrees with the gospel of your healer.

The regular consumption of fish oil may play a favourable role in inflammation, the inhibition of carcinogenesis, and cancer outcomes. An analysis of the literature aimed to review the evidence for the roles of dietary fish and fish-oil intake in prostate cancer (PC) risk, aggressiveness and mortality.

A systematic review following PRISMA guidelines was conducted. PubMed, MEDLINE and Embase were searched to explore the PC risk, aggressiveness and mortality associated with dietary fish and fish-oil intake. Thirty-seven studies were selected.

A total of 37 studies with 495,321 participants were analysed. They revealed various relationships regarding PC risk (n = 31), aggressiveness (n = 8) and mortality (n = 3). Overall, 10 studies considering PC risk found significant inverse trends with fish and fish-oil intake. One found a dose–response relationship, whereas greater intake of long-chain polyunsaturated fatty acids increased the risk of PC when considering crude odds ratios [OR: 1.36 (95% CI: 0.99–1.86); p = 0.014]. Three studies addressing aggressiveness identified significant associations with a reduced risk of aggressive cancer at the greatest intake of total fish [OR 0.56 (95% CI 0.37–0.86)], dark fish and shellfish meat (p < 0.0001), EPA (p = 0.03) and DHA (p = 0.04). Three studies investigating fish consumption and PC mortality identified a significantly reduced risk. Multivariate ORs (95% CI) were 0.9 (0.6–1.7), 0.12 (0.05–0.32) and 0.52 (0.30–0.91) at the highest fish intakes.

The authors concluded that fish and fish-oil do not show consistent roles in reducing PC incidence, aggressiveness and mortality. Results suggest that the specific fish type and the fish-oil ratio must be considered. Findings suggest the need for large intervention randomised placebo-controlled trials.

Several other recent reviews have also generated encouraging evidence, e.g.:

Available evidence is suggestive, but currently inadequate, to support the hypothesis that n-3 PUFAs protect against skin malignancy.

…omega-3 fatty acids may exert their anticancer actions by influencing multiple targets implicated in various stages of cancer development, including cell proliferation, cell survival, angiogenesis, inflammation, metastasis and epigenetic abnormalities that are crucial to the onset and progression of cancer.

If I were aiming for a career as a cancer quack, I would now use this evidence to promote my very own cancer prevention and treatment diet. As I have no such ambitions, I should tell you that regular fish-oil consumption is no way to treat cancer. It is also no way to prevent cancer. If anything, it might turn out to be a way of slightly reducing the risk of certain cancers. To be sure, we need a lot more research; and once we have it, fish oil will be entirely mainstream. Raising false hopes regarding ‘alternative cancer cures’ on the basis of fairly preliminary evidence is counter-productive, unethical and irresponsible.

Guest post by Pete Attkins

Commentator “jm” asked a profound and pertinent question: “What DOES it take for people to get real in this world, practice some common sense, and pay attention to what’s going on with themselves?” This question was asked in the context of asserting that personal experience always trumps the results of large-scale scientific experiments, and that alt-med experts are better able to provide individualized healthcare than 21st-century orthodox medicine.

What does common sense and paying attention lead us to conclude about the following? We test a six-sided die for bias by rolling it 100 times. The number 1 occurs only once and the number 6 occurs many times, never on its own, but in several groups of consecutive sixes.

I think it is reasonable to say that common sense would, and should, lead everyone to conclude that the die is biased and not fit for its purpose as a source of random numbers.

In other words, we have a gut feeling that the die is untrustworthy. Gut instincts and common sense are geared towards maximizing our chances of survival in our complex and unpredictable world — these are innate and learnt behaviours that have enabled humans to survive despite the harshness of our ever changing habitat.

Only very recently in the long history of our species have we developed specialized tools that enable us to better understand our harsh and complex world: science and critical thinking. These tools are difficult to master because they still haven’t been incorporated into our primary and secondary formal education systems.

The vast majority of people do not have these skills; therefore, when a scientific finding flies in the face of our gut instincts and/or common sense, it creates an overwhelming desire to reject the finding and to classify the scientist(s) as irrational and lacking in basic common sense. It does not create an intense desire to accept the finding and then painstakingly learn all of the science that went into producing it.

With that in mind, let’s rethink our common sense conclusion that the six-sided die is biased and untrustworthy. What we really mean is that the results have given all of us good reason to be highly suspicious of this die. We aren’t 100% certain that this die is biased, but our gut feeling and common sense are more than adequate to form a reasonable mistrust of it and to avoid using it for anything important to us. Reasons to keep this die rather than discard it might be to provide a source of mild entertainment or to use its bias for the purposes of cheating.

Some readers might be surprised to discover at this point that the results I presented from this apparently heavily biased die are not only perfectly valid results obtained from a truly random, unbiased die – they are to be fully expected. Even if the die had produced 100 sixes in that test, this would not confirm that the die is biased in any way whatsoever. Rolling a truly unbiased die once will produce one of six possible outcomes. Rolling the same die 100 times will produce one unique sequence out of the 6^100 (6.5 × 10^77) possible sequences – all of which are equally probable!

Gut feeling plus common sense rightfully informs us that the probability of a random die producing one hundred consecutive sixes is so incredibly remote that nobody will ever see it occur in reality. This conclusion is also mathematically sound: if there were 6.5 x 10^77 people on Earth, each performing the same test on truly random dice, there is no guarantee that anyone would observe a sequence of one hundred consecutive sixes.
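The arithmetic above is easy to verify; here is a short Python sketch using only the standard library:

```python
from fractions import Fraction

# Number of distinct 100-roll sequences from a fair six-sided die
n_sequences = 6 ** 100
print(f"6^100 ~ {float(n_sequences):.1e}")  # about 6.5e77

# Every specific sequence -- including one hundred consecutive sixes --
# has exactly the same probability of occurring:
p_any_fixed_sequence = Fraction(1, 6) ** 100
print(p_any_fixed_sequence == Fraction(1, n_sequences))  # True

# Expected value of a single fair roll: (1+2+3+4+5+6)/6 = 3.5
mean_roll = Fraction(sum(range(1, 7)), 6)
print(mean_roll)  # 7/2
```

Exact rational arithmetic (`Fraction`) avoids any rounding quibbles: a sequence of one hundred sixes and any 'random-looking' sequence share the identical probability 1/6^100.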

When we observe a sequence such as 2 5 1 4 6 3 1 4 3 6 5 2… common sense informs us that the die is very likely random. If we calculate the arithmetic mean to be very close to 3.5 then common sense will lead us to conclude that the die is both random and unbiased enough to use it as a reliable source of random numbers.

Unfortunately, this is a perfect example of our gut feelings and common sense failing us abysmally. They totally failed to warn us that the 2 5 1 4 6 3 1 4 3 6 5 2… sequence we observed had exactly the same (im)probability of occurring as a sequence of one hundred 6s or any other sequence that one can think of that doesn’t look random to a human observer.

The 100-roll die test is nowhere near powerful enough to properly test a six-sided die, but this test is more than adequately powered to reveal some of our cognitive biases and some of the deficits in our personal mastery of science and critical thinking.

To properly test the die we need to provide solid evidence that it is both truly random and that its measured bias tends towards zero as the number of rolls tends towards infinity. We could use the services of one testing lab to conduct billions of test rolls, but this would not exclude errors caused by such things as miscalibrated equipment and experimenter bias. It is better to subdivide the testing across multiple labs then carefully analyse and appropriately aggregate the results: this dramatically reduces errors caused by equipment and humans.

In medicine, this testing process is performed via systematic reviews of multiple, independent, double-blind, placebo-controlled trials — every trial that is insufficiently powered to add meaningfully to the result is rightfully excluded from the aggregation.

Alt-med relies on a diametrically opposed testing process. It performs a plethora of only underpowered tests; presents those that just happen to show a positive result (just as a random die could’ve produced); and sweeps under the carpet the overwhelming number of tests that produced a negative result. It publishes only the ‘successes’, not its failures. By sweeping its failures under the carpet it feels justified in making the very bold claim: Our plethora of collected evidence shows clearly that it mostly ‘works’ and, when it doesn’t, it causes no harm.

One of the most acidic tests for a hypothesis and its supporting data (which is a mandatory test in a few branches of critical engineering) is to substitute the collected data for random data that has been carefully crafted to emulate the probability mass functions of the collected datasets. This test has to be run multiple times for reasons that I’ve attempted to explain in my random die example. If the proposer of the hypothesis is unable to explain the multiple failures resulting from this acid test then it is highly likely that the proposer either does not fully understand their hypothesis or that their hypothesis is indistinguishable from the null hypothesis.
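As a loose illustration of that substitution test (a permutation-style sketch with invented numbers, not the exact procedure of any particular engineering standard): scrambling the group labels produces random data with exactly the pooled probability mass function of the collected dataset, and the analysis is re-run on each scramble.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

# Invented outcome scores -- NOT data from any real trial
treatment = [5, 4, 6, 5, 4, 6, 5, 4]
control   = [4, 3, 5, 4, 5, 3, 4, 4]
observed_diff = mean(treatment) - mean(control)

pooled = treatment + control
rng = random.Random(42)            # fixed seed so the sketch is reproducible
runs, at_least_as_big = 10_000, 0
for _ in range(runs):
    rng.shuffle(pooled)            # random relabelling preserves the pooled PMF
    fake_treatment = pooled[:len(treatment)]
    fake_control   = pooled[len(treatment):]
    if mean(fake_treatment) - mean(fake_control) >= observed_diff:
        at_least_as_big += 1

p_like = at_least_as_big / runs
print(f"observed difference: {observed_diff:.2f}")
print(f"random relabellings doing at least as well: {p_like:.1%}")
```

A small percentage suggests the observed difference is unlikely to arise from random relabelling alone; a large one means the 'effect' is indistinguishable from the null hypothesis – which is precisely the failure mode the acid test is designed to expose.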

Naturopathy can be defined as ‘an eclectic system of health care that uses elements of complementary and conventional medicine to support and enhance self-healing processes’. This basically means that naturopaths employ treatments based on therapeutic options seen as natural, e.g. herbs, water, exercise, diet, fresh air, heat and cold – but occasionally also acupuncture, homeopathy and manual therapies. If you are tempted to see a naturopath, you might want to consider the following seven points:

  1. In many countries, naturopathy is not a protected title; this means your naturopath may have had some training, but this is not obligatory. Some medical doctors also practice naturopathy, and in some countries there are ‘doctors of naturopathy’ (these practitioners tend to see themselves as primary care physicians, but they have not been to medical school).
  2. Naturopathy is steeped in the obsolete concept of vitalism which has been described as the belief that “living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things.”
  3. While there is some evidence to suggest that some of the treatments used by naturopaths are effective for treating some conditions, this is by no means the case for all of the treatments in question.
  4. Naturopathy is implicitly based on the assumption that natural means safe. This notion is clearly wrong and misleading: not all the treatments used by naturopaths are strictly speaking natural, and very few are totally free of risks.
  5. Many naturopaths advise their patients against conventional treatments such as vaccines or antibiotics.
  6. Naturopaths tend to believe they can cure all or most diseases. Consequently many of the therapeutic claims for naturopathy found on the Internet and elsewhere are dangerously over-stated.
  7. The direct risks of naturopathy depend, of course, on the modality used; some of them can be considerable. The indirect risks of naturopathy can be even more serious and are mostly due to naturopathic treatments replacing more effective conventional therapies in cases of severe illness.

Complementary treatments have become a popular (and ‘politically correct’) option for keeping desperate cancer patients happy. But how widely accepted is their use in oncology units? A brand-new article tried to find the answer to this question.

The principal aim of this survey was to map centres practising integrative oncology (IO) across Europe, prioritizing those that provide public health services and operate within the national health system. A cross-sectional descriptive survey design was used to collect the data. A questionnaire concerning integrative oncology therapies was developed and administered to all national health system oncology centres or hospitals in each European country. These institutes were identified by convenience sampling, searching oncology websites and forums. The official websites of these institutions were analysed to obtain further information about their activities and contacts.

Information was received from 123 (52.1%) of the 236 centres contacted by 31 December 2013. Forty-seven of the 99 responding centres meeting the inclusion criteria (47.5%) provided integrative oncology treatments, 24 from Italy and 23 from other European countries. The number of patients seen per year was on average 301.2 ± 337. Among the centres providing these kinds of therapies, 33 (70.2%) used fixed protocols and 35 (74.5%) used systems for the evaluation of results. Thirty-two centres (68.1%) had research in progress or completed by the deadline of the survey. The complementary and alternative medicines (CAMs) most frequently provided to cancer patients were acupuncture, 26 (55.3%); homeopathy, 19 (40.4%); herbal medicine, 18 (38.3%); traditional Chinese medicine, 17 (36.2%); anthroposophic medicine, 10 (21.3%); homotoxicology, 6 (12.8%); and other therapies, 30 (63.8%). Treatments were mainly directed at reducing adverse reactions to chemo-radiotherapy (23.9%), in particular nausea and vomiting (13.4%) and leucopenia (5%). The CAMs were also used to reduce pain and fatigue (10.9%), to reduce the side effects of iatrogenic menopause (8.8%), and to improve anxiety and depression (5.9%), gastrointestinal disorders (5%), and sleep disturbances and neuropathy (3.8%).

As so often with surveys of this nature, the high non-response rate creates a problem: it is not unreasonable to assume that the centres that responded had an interest in IO, while those that failed to respond tended to have none. Thus the figures reported here might considerably overstate the true usage of alternative therapies. One can only hope that this is the case. The idea that 40% of all cancer patients receive homeopathy, for instance, is hardly in accordance with the principles of evidence-based practice.

The list of medical reasons for using largely unproven treatments is interesting, I think. I am not aware of lots of strong evidence to show that any of the treatments in question would generate more good than harm for any of the conditions in question.

What follows from all of this is worrying, in my view: thousands of desperate cancer patients are being duped into having bogus treatments paid for by their national health systems. This, I think, raises the question of whether these most vulnerable patients do not deserve better.

Adults using unproven treatments is one thing; if kids do it because they are told to, that is quite another thing. Children are in many ways more vulnerable than grown-ups and they usually cannot give fully informed consent. It follows that the use of such treatments for kids can be a delicate and complex matter.

A recent systematic review aimed to summarize the international findings on the prevalence and predictors of complementary and alternative medicine (CAM) use among children and adolescents. The authors systematically searched 4 electronic databases (PubMed, Embase, PsycINFO, AMED; last update in 07/2013) and the reference lists of existing reviews and of all included studies. Publications without language restriction reporting patterns of CAM utilization among children/adolescents without chronic conditions were selected for inclusion. The prevalence rates for overall CAM use, homeopathy, and herbal drug use were extracted with a focus on country and recall period (lifetime, 1 year, current use). As predictors, the authors extracted socioeconomic factors, the child's age, and gender.

Fifty-eight studies from 19 countries could be included in the review. Study quality varied considerably. Prevalence rates for overall CAM use ranged from 10.9 – 87.6 % for lifetime use and from 8 – 48.5 % for current use. The respective percentages for homeopathy (highest in Germany, the United Kingdom and Canada) ranged from 0.8 – 39 % (lifetime) and from 1 – 14.3 % (current). Herbal drug use (highest in Germany, Turkey and Brazil) was reported for 0.8 – 85.5 % (lifetime) and 2.2 – 8.9 % (current) of the children/adolescents. The studies provided a relatively uniform picture of the predictors of overall CAM use: higher parental income and education, and older age of the child. Only a few studies analyzed predictors for single CAM modalities.

The authors drew the following conclusion: CAM use is widespread among children/adolescents. Prevalence rates vary widely regarding CAM modality, country, and reported recall period.

In 1999, I published a very similar review; at the time, I found just 10 studies. Their results suggested that the prevalence of CAM use by kids was variable but generally high. CAM was often perceived as helpful. Insufficient data existed about safety and cost. Today, the body of surveys monitoring CAM use by children seems to have grown almost six-fold, and the conclusions are still more or less the same – but have we made progress in answering the most pressing questions? Do we know whether all these CAM treatments generate more good than harm for children?

Swiss authors recently published a review of Cochrane reviews which might help answer these important questions. They synthesized all Cochrane reviews in paediatrics published between 1995 and 2012 that assessed the efficacy, clinical implications and limitations of CAM use in children. The main outcome variables were: the percentage of reviews concluding that a certain intervention provides a benefit, the percentage concluding that a certain intervention should not be performed, and the percentage concluding that the current level of evidence is inconclusive.

A total of 135 reviews were included – most from the United Kingdom (29/135), Australia (24/135) and China (24/135). Only 5/135 (3.7%) reviews gave a recommendation in favour of a certain intervention; 26/135 (19.4%) issued a conditional positive recommendation, and 9/135 (6.6%) reviews concluded that certain interventions should not be performed. Ninety-five reviews (70.3%) were inconclusive. The proportion of inconclusive reviews increased during three, a priori-defined, time intervals (1995-2000: 15/27 [55.6%]; 2001-2006: 33/44 [75%]; and 2007-2012: 47/64 [73.4%]). The three most common criticisms of the quality of the studies included were: more research needed (82/135), low methodological quality (57/135) and small number of study participants (48/135).

The Swiss authors concluded that given the disproportionate number of inconclusive reviews, there is an ongoing need for high quality research to assess the potential role of CAM in children. Unless the study of CAM is performed to the same science-based standards as conventional therapies, CAM therapies risk being perpetually marginalised by mainstream medicine.

And what about the risks?

To determine the types of adverse events associated with CAM use that come to the attention of Australian paediatricians, Australian researchers conducted a monthly active surveillance study of CAM-associated adverse events reported to the Australian Paediatric Surveillance Unit between January 2001 and December 2003. They found 39 reports of adverse events associated with CAM use, including four reported deaths. The reports highlighted several areas of concern, including the risks associated with failure to use conventional medicine, the risks related to medication changes made by CAM practitioners, and the significant dangers of dietary restriction. The reported deaths were associated with a failure to use conventional medicine in favour of a CAM therapy.

These authors concluded that CAM use has the potential to cause significant morbidity and fatal adverse outcomes. The diversity of CAM therapies and their associated adverse events demonstrates the difficulty of addressing this area and the importance of establishing mechanisms by which adverse effects may be reported or monitored.

So, we know that lots of children are using CAMs because their parents want them to. We also know that most of the CAMs used for childhood conditions are not based on sound evidence. The crucial question is: can we be sure that CAM for kids generates more good than harm? I fear the answer is a clear and worrying NO.

Guest post by Jan Oude-Aost

ADHD is a common disorder among children. There are evidence-based pharmacological treatments, the best known being methylphenidate (MPH). MPH has something of a bad reputation, but it is effective and reasonably safe. The market is also full of alternative treatments, pharmacological and otherwise: some of them under investigation, some unproven and many disproven. So I was not surprised to find a study about Ginkgo biloba as a treatment for ADHD. I was surprised, however, to find this study in the German Journal of Child and Adolescent Psychiatry and Psychotherapy, officially published by the “German Society of Child and Adolescent Psychiatry and Psychotherapy“ (Deutsche Gesellschaft für Kinder- und Jugendpsychiatrie und Psychotherapie). The journal’s guidelines state that studies should provide new scientific results.

The study is called “Ginkgo biloba Extract EGb 761® in Children with ADHD“. EGb 761® is the key ingredient in “Tebonin®“, a herbal drug made by “Dr. Wilma Schwabe GmbH“. The abstract states:

One possible treatment, at least for cognitive problems, might be the administration of Ginkgo biloba, though evidence is rare. This study tests the clinical efficacy of a Ginkgo biloba special extract (EGb 761®) (…) in children with ADHD (…).

“Eine erfolgversprechende, bislang kaum untersuchte Möglichkeit zur Behandlung kognitiver Aspekte ist die Gabe von Ginkgo biloba. Ziel der vorliegenden Studie war die Prüfung klinischer Wirksamkeit (…) von Ginkgo biloba-Extrakt Egb 761® bei Kindern mit ADHS.“ [In translation: “A promising option for the treatment of cognitive aspects, so far barely investigated, is the administration of Ginkgo biloba. The aim of the present study was to test the clinical efficacy (…) of Ginkgo biloba extract Egb 761® in children with ADHD.”] (Taken from the English and German abstracts.)

The study sample (n = 20!) was recruited among children who “did not tolerate or were unwilling“ to take MPH. The “unwilling“ part struck me as problematic: there is likely a strong selection bias towards parents who are unwilling to give their children MPH. I suspect it is not the children who are unwilling to take MPH, but the parents who are unwilling to administer it. At least some of these parents might be biased against MPH and might already favor CAM modalities.

The authors state three main problems with “herbal therapy“ that require more empirical evidence: first, the question of adverse reactions, which they claim occur in about 1% of cases with “some CAMs“ (mind you, not “herbal therapy“); second, the question of drug interactions; and third, the lack of information physicians have about the CAMs their patients use.

A large part of the study is based on results of an EEG-protocol, which I choose to ignore, because the clinical results are too weak to give the EEG findings any clinical relevance.

Before looking at the study itself, let’s look at what is known about Ginkgo biloba as a drug. Ginkgo is best known for its use in patients with dementia, cognitive impairment and tinnitus. A Cochrane review from 2009 concluded:

“There is no convincing evidence that Ginkgo biloba is efficacious for dementia and cognitive impairment“ [1].

The authors of the current study cite Sarris et al. (2011), a systematic review of complementary treatments of ADHD. Sarris et al. mention Salehi et al. (2010), who tested Ginkgo against MPH. MPH turned out to be much more effective than Ginkgo, but Sarris et al. argue that the duration of treatment (6 weeks) might have been too short to see the full effects of Ginkgo.

Given the above information, it is unclear why Ginkgo is judged a “possible“ treatment (properly translated from the German, even a “promising“ one), and why the authors state that Ginkgo has been “barely studied“.

In an unblinded, uncontrolled study with a sample likely to be biased toward the tested intervention, anything other than a positive result would be odd. In the treatment of autism there are several examples of implausible treatments that worked as long as parents knew that their children were getting the treatment, but didn’t after proper blinding (e.g. secretin).

This study’s aim was to test clinical efficacy, but the conclusion begins with how well tolerated Ginkgo was. The efficacy is mentioned subsequently: “Following administration, interrelated improvements on behavioral ratings of ADHD symptoms (…) were detected (…).“ But the way they were “detected“ is interesting. The authors used an established questionnaire (FBB-HKS) to let parents rate their children. Only the parents: the children and their teachers were not given the FBB-HKS questionnaires, in spite of this being standard clinical practice (and in spite of the fact that the children were given questionnaires to assess changes in quality of life, where no changes were found).

None of the three problems that the authors describe as important (adverse reactions, drug interactions, lack of information) can be answered by this study. I am no expert in statistics, but it seems unlikely to me that adverse effects can be meaningfully determined in just 20 patients, especially when adverse effects occur at a rate of about 1%. The authors claim they found an incidence rate of 0.004% in “700 observation days“. Well, if they say so.
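The point about sample size can be made concrete with a simple binomial sketch. This is illustrative only, assuming independent patients and the roughly 1% per-patient adverse-event rate that the authors themselves cite:

```python
# Probability of observing at least one adverse event in a small trial,
# assuming independent patients and a 1% per-patient event rate.
p_event = 0.01   # assumed rate, taken from the ~1% figure cited by the authors
n = 20           # the study's sample size

p_at_least_one = 1 - (1 - p_event) ** n
print(f"{p_at_least_one:.1%}")  # about 18%
```

In other words, a 20-patient study would most likely observe no such event at all, so it can say next to nothing about safety at that rate.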

The authors conclude:

Taken together, the current study provides some preliminary evidence that Ginkgo biloba Egb 761® seems to be well tolerated in the short term and may be a clinically useful treatment for children with ADHD. Double-blind randomized trials are required to clarify the value of the presented data.

Given the available information mentioned earlier, one could have started with that conclusion and conducted a double blind RCT in the first place!

Clinical Significance

“The trends of this preliminary open study may suggest that Ginkgo biloba Egb 761® might be considered as a complementary or alternative medicine for treating children with ADHD.“

So, why do I care, if preliminary evidence “may suggest“ that something “might be considered“ as a treatment? Because I think that this study does not answer any important questions or give us any new or useful knowledge. Following the journal’s guidelines, it should therefore not have been published. I also think it is an example of bad science: bad not just because of the lack of critical thinking, but also because it adds to the misinformation about possible ADHD treatments spreading through the internet. The study was published in September. In November I found a website citing the study and calling it “clinical proof“, which it is not. Child psychiatrists will now have to explain that to many parents, instead of talking about their children’s health.

I somehow got the impression that this study was more about marketing than about science. I wonder if Schwabe will help finance the necessary double-blind randomized trial…

[1] See more at: http://summaries.cochrane.org/CD003120/DEMENTIA_there-is-no-convincing-evidence-that-ginkgo-biloba-is-efficacious-for-dementia-and-cognitive-impairment#sthash.oqKFrSCC.dpuf

A German homeopathic journal, Zeitschrift Homoeopathie, has just published the following interesting article entitled HOMEOPATHIC DOCTORS HELP IN LIBERIA. It provides details about the international team of homeopaths that travelled to Liberia to cure Ebola. Here I take the liberty of translating it from German into English. As most of it is fairly self-explanatory, I abstain from any comments of my own – however, I am sure that my readers will want to add their views.

In mid-October, an international team of 4 doctors travelled to the West African country for three weeks. The mission in a hospital in Ganta, a town with about 40 000 inhabitants on the border with Guinea, ended as planned on 7 November. The exercise was organised by the World Association of Homeopathic Doctors, the Liga Medicorum Homoeopathica Internationalis (LMHI), with the support of the German Central Association of Homeopathic Doctors. The aim was to support the local doctors in the care of the population and, if possible, also to help in the fight against the Ebola epidemic. The costs of the three weeks’ stay were financed mostly through donations from homeopathic doctors.

“We know that we were invited mainly as well-trained doctors to Liberia, and that our experience in homeopathy was asked for only as a secondary issue”, stresses Cornelia Bajic, first chairperson of the DZVhA (German Central Association of Homeopathic Doctors). The doctors from India, the USA, Switzerland and Germany were able to employ their expertise in several wards of the hospital, to help patients, and to support their Liberian colleagues. It was planned to use and document the homeopathic treatment of Ebola patients as an adjunct to the WHO-prescribed standard treatment. “Our experience from the treatment of other epidemics in the history of medicine allows the conclusion that a homeopathic treatment might significantly reduce the mortality of Ebola patients”, judges Bajic. The successful use of homeopathic remedies has been documented, for example, for cholera, diphtheria or yellow fever.

Overview of the studies related to the homeopathic treatment of epidemics

In Ganta, the doctors of the LMHI team treated patients with “at times most serious diseases, particularly inflammatory conditions, children with typhus, meningitis, pneumonias, and unclear fevers – each time only under the supervision of the local doctor in charge”, reports Dr Ortrud Lindemann, who also worked obstetrically in Ganta. The medical specialist reports after her return: “When we had been in the hospital for 10 days, our successes had become known, and the patients stood in queues to get treated by us.” The homeopathic doctors received thanks from the Ganta hospital for their work; it was said that it had been helpful for the patients and a blessing for the employees of the hospital.

POLITICAL CONSIDERATIONS MORE IMPORTANT THAN MEDICAL TREATMENT? 

This first LMHI team of doctors was forbidden to care for patients from the “Ebola Treatment Unit”. The decision was based on an order of the WHO. A team of Cuban doctors was also waiting in vain to be allowed to work. “We are dealing here with a dangerous epidemic and a large number of seriously ill patients. And despite a striking lack of doctors in West Africa, political considerations are more important than the treatment of these patients”, criticises the DZVhA chairperson Bajic. Now a second team is to travel to Ganta to support the local doctors.
