Chinese proprietary herbal medicines (CPHMs) are a well-established and hugely profitable part of Traditional Chinese Medicine (TCM) with a long history in China and elsewhere; they are used for all sorts of conditions, not least the treatment of the common cold. Many CPHMs have been listed in the ‘China national essential drug list’ (CNEDL), the official reference published by the Chinese Ministry of Health. One would hope that such a document would be based on reliable evidence – but is it?

The aim of a recent review was to assess the potential benefits and harms of the CPHMs for common cold listed in the CNEDL.

The authors of this assessment were experts from the Chinese ‘Centre for Evidence-Based Medicine’ and one well-known researcher of alternative medicine from the UK. They searched CENTRAL, MEDLINE, EMBASE, SinoMed, CNKI, VIP, China Important Conference Papers Database, China Dissertation Database, and online clinical trial registry websites from their inception to 31 March 2013 for clinical studies of CPHMs listed in the CNEDL for common cold.

Of the 33 CPHMs listed in the 2012 CNEDL for the treatment of common cold, only 7 had any type of clinical trial evidence at all. A total of 6 randomised controlled trials (RCTs) and 7 case series (CSs) could be included in the assessments.

All these studies had been conducted in China and published in Chinese. All of them were burdened with poor study design and low methodological quality, and all had to be graded as being associated with a very high risk of bias.

The authors concluded that the use of CPHMs for common cold is not supported by robust evidence. Further rigorous, well-designed, placebo-controlled, randomised trials are needed to substantiate the clinical claims made for CPHMs.

I should state that it is, in my view, most laudable that the authors draw such a relatively clear, negative conclusion. This certainly does not happen often with papers originating from China, and George Lewith, the UK collaborator on this article, is not known for his critical attitude towards alternative medicine either. But there are other, less encouraging issues to mention here.

In the discussion section of their paper, the authors mention that the CNEDL has been approved by the Chinese Ministry of Public Health and is currently regarded as the accepted reference point for the medicines used in China. They also explain that the CNEDL was officially launched and implemented in August 2009. The CNEDL is now up-dated every 3 years, and its 2012 edition contains 520 medicines, including 203 CPHMs. The CPHMs listed in CNEDL cover 137 herbal remedies for internal medicine, 11 for surgery, 20 for gynaecology, 7 for ophthalmology, 13 for otorhinolaryngology and 15 for orthopaedics and traumatology.

Moreover, the authors inform us that about 3,100 medical and clinical experts had been recruited to evaluate the safety, effectiveness and costs of CPHMs. The selection process of medicines into CNEDL was strictly in accordance with the principle that they ‘must be preventive and curative, safe and effective, affordable, easy to use, think highly of both Chinese and Western medicine’. A detailed procedure for evaluation is, however, not available because the files are confidential.

The authors finally state that their paper demonstrates that the selection of CPHMs into the CNEDL is unlikely to be ‘evidence-based’ and reveals a sharp contrast with the policy and priority given by the Chinese government to Traditional Chinese Medicine (TCM).

This surely must be a benign judgement, if there ever was one! I would say that the facts disclosed in this review show that TCM seems to exist in a strange universe where commercial interests are officially allowed to reign supreme over patients’ interests and public health.

How often have we heard it on this blog and elsewhere?

  • chiropractic is progressing,
  • chiropractors are no longer adhering to their obsolete concepts and bizarre beliefs,
  • chiropractic is fast becoming evidence-based,
  • subluxation is a thing of the past.

American chiropractors wanted to find out to what extent these assumptions are true and collected data from chiropractic students enrolled in colleges throughout North America. The stated purpose of their study was to investigate North American chiropractic students’ opinions concerning professional identity, role and future.

A 23-item cross-sectional electronic questionnaire was developed. A total of 7,455 chiropractic students from 12 North American English-speaking chiropractic colleges were invited to complete the survey. Survey items encompassed demographics, evidence-based practice, chiropractic identity and setting, and scope of practice. Data were collected and descriptive statistical analyses were performed.

A total of 1,243 questionnaires were electronically submitted. This means the response rate was 16.7%. Most respondents agreed (34.8%) or strongly agreed (52.2%) that it is important for chiropractors to be educated in evidence-based practice. A majority agreed (35.6%) or strongly agreed (25.8%) the emphasis of chiropractic intervention is to eliminate vertebral subluxations/vertebral subluxation complexes. A large number of respondents (55.2%) were not in favor of expanding the scope of the chiropractic profession to include prescribing medications with appropriate advanced training. Most respondents estimated that chiropractors should be considered mainstream health care practitioners (69.1%). About half of all respondents (46.8%) felt that chiropractic research should focus on the physiological mechanisms of chiropractic adjustments.

The authors of this paper concluded that the chiropractic students in this study showed a preference for participating in mainstream health care, report an exposure to evidence-based practice, and desire to hold to traditional chiropractic theories and practices. The majority of students would like to see an emphasis on correction of vertebral subluxation, while a larger percent found it is important to learn about evidence-based practice. These two key points may seem contradictory, suggesting cognitive dissonance. Or perhaps some students want to hold on to traditional theory (e.g., subluxation-centered practice) while recognizing the need for further research to fully explore these theories. Further research on this topic is needed.

What should we make of these findings? The answer clearly must be NOT A LOT.

  • the response rate was dismal,
  • the questionnaire was not validated,
  • there seems to be little critical evaluation or discussion of the findings.

If anything, these findings seem to suggest that chiropractors want to join evidence-based medicine, but on their own terms and without giving up their bogus beliefs, concepts and practices. They want to have their cake and eat it, in other words. The almost inevitable result of such a development would be that real medicine becomes diluted with quackery.

Here is another short passage from my new book A SCIENTIST IN WONDERLAND. It describes the event where I was first publicly exposed to the weird and wonderful world of alternative medicine in the UK. It is also the scene which, in my original draft, was the very beginning of the book.

I hope that the excerpt inspires some readers to read the entire book – it currently is BOOK OF THE WEEK in the TIMES HIGHER EDUCATION!!!

… [an] aggressive and curious public challenge occurred a few weeks later during a conference hosted by the Research Council for Complementary Medicine in London. This organization had been established a few years earlier with the aim of conducting and facilitating research in all areas of alternative medicine. My impression of this institution, and indeed of the various other groups operating in this area, was that they were far too uncritical, and often proved to be hopelessly biased in favour of alternative medicine. This, I thought, was an extraordinary phenomenon: should research councils and similar bodies not have a duty to be critical and be primarily concerned about the quality of the research rather than the overall tenor of the results? Should research not be critical by nature? In this regard, alternative medicine appeared to be starkly different from any other type of health care I had encountered previously.

On short notice, I had accepted an invitation to address this meeting packed with about 100 proponents of alternative medicine. I felt that their enthusiasm and passion were charming but, no matter whom I talked to, there seemed to be little or no understanding of the role of science in all this. A strange naïvety pervaded this audience: alternative practitioners and their supporters seemed a bit like children playing “doctor and patient”. The language, the rituals and the façade were all more or less in place, but somehow they seemed strangely detached from reality. It felt a bit as though I had landed on a different planet. The delegates passionately wanted to promote alternative medicine, while I, with equal passion and conviction, wanted to conduct good science. The two aims were profoundly different. Nevertheless, I managed to convince myself that they were not irreconcilable, and that we would manage to combine our passions and create something worthwhile, perhaps even groundbreaking.

Everyone was excited about the new chair in Exeter; high hopes and expectations filled the room. The British alternative medicine scene had long felt discriminated against because they had no academic representation to speak of. I certainly did sympathize with this particular aspect and felt assured that, essentially, I was amongst friends who realized that my expertise and their enthusiasm could add up to bring about progress for the benefit of many patients.
During my short speech, I summarized my own history as a physician and a scientist and outlined what I intended to do in my new post—nothing concrete yet, merely the general gist. I stressed that my plan was to apply science to this field in order to find out what works and what doesn’t; what is safe and what isn’t. Science, I pointed out, generates progress through asking critical questions and through testing hypotheses. Alternative medicine would either be shown by good science to be of value, or it would turn out to be little more than a passing fad. The endowment of the Laing chair represented an important milestone on the way towards the impartial evaluation of alternative medicine, and surely this would be in the best interest of all parties concerned.

To me, all this seemed an entirely reasonable approach, particularly as it merely reiterated what I had just published in an editorial for The Lancet entitled “Scrutinizing the Alternatives”.

My audience, however, was not impressed. When I had finished, there was a stunned, embarrassed silence. Finally someone shouted angrily from the back row: “How did they dare to appoint a doctor to this chair?” I was startled by this question and did not quite understand. What had prompted this reaction? What did this audience expect? Did they think my qualifications were not good enough? Why were they upset by the appointment of a doctor? Who else, in their view, might be better equipped to conduct medical research?

It wasn’t until weeks later that it dawned on me: they had been waiting for someone with a strong commitment to the promotion of alternative medicine. Such a commitment could only come from an alternative practitioner. A doctor personified the establishment, and “alternative” foremost symbolized “anti-establishment”. My little speech had upset them because it confirmed their worst fears of being annexed by “the establishment”. These enthusiasts had hoped for a believer from their own ranks and certainly not for a doctor-scientist to be appointed to the world’s first chair of complementary medicine. They had expected that Exeter University would lend its support to their commercial and ideological interests; they had little understanding of the concept that universities should not be in the business of promoting anything other than high standards.

Even today, after having given well over 600 lectures on the topic of alternative medicine, and after being on the receiving end of ever more hostile attacks, aggressive questions and personal insults, this particular episode is still etched deeply into my memory. In a very real way, it set the scene for the two decades to come: the endless conflicts between my agenda of testing alternative medicine scientifically and the fervent aspirations of enthusiasts to promote alternative medicine uncritically. That our positions would prove mutually incompatible had been predictable from the very start. The writing had been on the wall—but it took me a while to be able to fully understand the message.

A recent article in the BMJ about my new book seems to have upset fellow researchers of alternative medicine. I am told that the offending passage is the following:

“Too much research on complementary therapies is done by people who have already made up their minds,” the first UK professor of complementary medicine has said. Edzard Ernst, who left his chair at Exeter University early after clashing with the Prince of Wales, told journalists at the Science Media Centre in London that, although more research into alternative medicines was now taking place, “none of the centres is anywhere near critical enough.”

Following this publication, I received indignant inquiries from colleagues asking whether I meant to say that their work lacks critical thinking. As this is a valid question, I will try to answer it the best I presently can.

Any critical evaluation of alternative medicine has to yield its fair share of negative conclusions about the value of alternative medicine. If it fails to do that, one would need to assume that most or all alternative therapies generate more good than harm – and very few experts (who are not proponents of alternative medicine) would assume that this can possibly be the case.

Put differently, this means that a researcher or research group that does not generate its fair share of negative conclusions is suspect of lacking a critical attitude. In a previous post, I have addressed this issue in more detail by creating an ‘index’: THE TRUSTWORTHINESS INDEX. I have also provided a concrete example of a researcher who seems to be associated with a remarkably high index (the higher the index, the stronger the suspicion of a lack of critical attitude).

Instead of unnecessarily upsetting my fellow researchers of alternative medicine any further, I will just issue this challenge: if any research group can demonstrate to have an index below 0.5 (which would mean the team has published twice as many negative conclusions as positive ones), I will gladly and publicly retract my suspicion that this group is “anywhere near critical enough”.
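For anyone who wants to check a publication record against this challenge, the arithmetic is trivial to script. A minimal sketch, assuming the index is simply the ratio of positive to negative conclusions (consistent with the 0.5 example above); the publication counts are invented purely for illustration:

```python
def trustworthiness_index(positive: int, negative: int) -> float:
    """Ratio of positive to negative conclusions.

    Assumes index = positives / negatives, consistent with the example in
    the text: an index of 0.5 means twice as many negative conclusions as
    positive ones. Returns infinity if a group has published no negative
    conclusions at all.
    """
    if negative == 0:
        return float("inf")
    return positive / negative

# Hypothetical publication records, for illustration only
critical_group = trustworthiness_index(positive=10, negative=25)   # 0.4
uncritical_group = trustworthiness_index(positive=40, negative=5)  # 8.0

print(critical_group < 0.5)    # this group would meet the challenge
print(uncritical_group < 0.5)  # this one would not
```

The exact counting rules (what counts as a "positive" conclusion, which publications are included) are of course the substantive part; the formula itself is the easy bit.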

Few subjects lead to such heated debate as the risk of stroke after chiropractic manipulations (if you think this is an exaggeration, look at the comment sections of previous posts on this subject). Almost invariably, one comes to the conclusion that more evidence would be helpful for arriving at firmer conclusions. Against this background, this new publication by researchers (mostly chiropractors) from the US ‘Dartmouth Institute for Health Policy & Clinical Practice’ is noteworthy.

The purpose of this study was to quantify the risk of stroke after chiropractic spinal manipulation, as compared to evaluation by a primary care physician, for Medicare beneficiaries aged 66 to 99 years with neck pain.

The researchers conducted a retrospective cohort analysis of a 100% sample of annualized Medicare claims data on 1,157,475 beneficiaries aged 66 to 99 years with an office visit to either a chiropractor or a primary care physician for neck pain. They compared the hazard of vertebrobasilar stroke and of any stroke at 7 and 30 days after the office visit using a Cox proportional hazards model, and used direct adjusted survival curves to estimate the cumulative probability of stroke up to 30 days for the 2 cohorts.

The findings indicate that the proportion of subjects with a stroke of any type in the chiropractic cohort was 1.2 per 1000 at 7 days and 5.1 per 1000 at 30 days. In the primary care cohort, the proportion of subjects with a stroke of any type was 1.4 per 1000 at 7 days and 2.8 per 1000 at 30 days. In the chiropractic cohort, the adjusted risk of stroke was significantly lower at 7 days as compared to the primary care cohort (hazard ratio, 0.39; 95% confidence interval, 0.33-0.45), but at 30 days, a slight elevation in risk was observed for the chiropractic cohort (hazard ratio, 1.10; 95% confidence interval, 1.01-1.19).

The authors conclude that, among Medicare B beneficiaries aged 66 to 99 years with neck pain, incidence of vertebrobasilar stroke was extremely low. Small differences in risk between patients who saw a chiropractor and those who saw a primary care physician are probably not clinically significant.

I do, of course, applaud any new evidence on this rather ‘hot’ topic – but is it just me, or are the above conclusions a bit odd? Five strokes per 1000 patients is definitely not “extremely low” in my book; and furthermore I do wonder whether all experts would agree that a doubling of risk at 30 days in the chiropractic cohort is “probably not clinically significant” – particularly, if we consider that chiropractic spinal manipulation has so very little proven benefit.
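The "doubling" I refer to can be checked with back-of-the-envelope arithmetic on the crude 30-day proportions reported above (note this is only the unadjusted ratio; the study's Cox model adjusted for covariates and arrived at a hazard ratio of 1.10):

```python
# Crude 30-day stroke proportions reported in the study (per 1000 patients)
chiro_30d = 5.1 / 1000   # chiropractic cohort
pcp_30d = 2.8 / 1000     # primary care cohort

# Unadjusted ratio of the two proportions
crude_risk_ratio = chiro_30d / pcp_30d
print(f"Crude 30-day risk ratio: {crude_risk_ratio:.2f}")  # 1.82, close to a doubling
```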


On 1/12/2014 I published a post in which I offered to give lectures to students of alternative medicine:

Getting good and experienced lecturers for courses is not easy. Having someone who has done more research than most people working in the field and who is internationally known might therefore be a thrill for students and an image-boosting experience for colleges. In the true Christmas spirit, I am today making the offer of being of assistance to the many struggling educational institutions of alternative medicine.

A few days ago, I tweeted about my willingness to give free lectures to homeopathic colleges (so far without response). Having thought about it a bit, I would now like to extend this offer. I would be happy to give a free lecture to the students of any educational institution of alternative medicine.

I did not think that this would create much interest – and I was right: only the ANGLO-EUROPEAN COLLEGE OF CHIROPRACTIC has so far hoisted me by my own petard and, after some discussion (see the comment section of the original post), hosted me for a lecture. Several people seem keen to know how this went; so here is a brief report.

I was received, on 14/1/2015, with the utmost kindness by my host David Newell. We had a coffee and a chat, and then it was time to start the lecture. The hall was packed with ~150 students, and the same number was listening in a second lecture hall to which my talk was being transmitted.

We had agreed on the title CHIROPRACTIC: FALLACIES AND FACTS. So, after telling the audience about my professional background, I elaborated on 7 fallacies:

  1. Appeal to tradition
  2. Appeal to authority
  3. Appeal to popularity
  4. Subluxation exists
  5. Spinal manipulation is effective
  6. Spinal manipulation is safe
  7. Ad hominem attack

Numbers 3, 5 and 6 were dealt with in more detail than the rest. The organisers had asked me to finish by elaborating on what I perceive as the future challenges of chiropractic; so I did:

  1. Stop happily promoting bogus treatments
  2. Denounce obsolete concepts like ‘subluxation’
  3. Clarify differences between chiros, osteos and physios
  4. Start a culture of critical thinking
  5. Take action against charlatans in your ranks
  6. Stop attacking everyone who voices criticism

I ended by pointing out that the biggest challenge, in my view, was to “demonstrate with rigorous science which chiropractic treatments demonstrably generate more good than harm for which condition”.

We had agreed that my lecture would be followed by half an hour of discussion; this period turned out to be lively and had to be extended to a full hour. Most questions initially came from the tutors rather than the students, and most were polite – I had expected much more aggression.

In his email thanking me for coming to Bournemouth, David Newell wrote about the event: The general feedback from staff and students was one of relief that you possessed only one head, :-). I hope you may have felt the same about us. You came over as someone who had strong views, a fair amount of which we disagreed with, but that presented them in a calm, informative and courteous manner as we did in listening and discussing issues after your talk. I think everyone enjoyed the questions and debate and felt that some of the points you made were indeed fair critique of what the profession may need to do, to secure a more inclusive role in the health care arena.

As you may have garnered from your visit here, the AECC is committed to this task as we continue to provide the highest quality of education for the 21st C representatives of such a profession. We believe centrally that it is to our society at large and our communities within which we live and work that we are accountable. It is them that we serve, not ourselves, and we need to do that as best we can, with the best tools we have or can develop and that have as much evidence as we can find or generate. In this aim, your talk was important in shining a more ‘up close and personal’ torchlight on our profession and the tasks ahead whilst also providing us with a chance to debate the veracity or otherwise of yours and ours differing positions on interpretation of the evidence.

My own impression of the day is that some of my messages were not really understood, that some of the questions, including some from the tutors, seemed to come from a different planet, and that people were more out to teach me than to learn from my talk. One overall impression that I took home from that day is that, even in this college, which prides itself on being open to scientific evidence and unimpressed by chiropractic fundamentalism, students are strangely different from other health care professionals. The most tangible aspect of this was the openly hostile attitude towards drug therapies voiced during the discussion by some students.

The question I always ask myself after having invested a lot of time in preparing and delivering a lecture is: WAS IT WORTH IT? In the case of this lecture, I think the answer is YES. With 300 students present, I am fairly confident that I did manage to stimulate a tiny bit of critical thinking in a tiny percentage of them. The chiropractic profession needs this badly!


The very first article on a subject related to alternative medicine with a 2015 date that I came across is a case-report. I am afraid it will not delight our chiropractic friends who tend to deny that their main therapy can cause serious problems.

In this paper, US doctors tell the story of a young woman who developed headache, vomiting, diplopia, dizziness, and ataxia following a neck manipulation by her chiropractor. A computed tomography scan of the head was ordered and it revealed an infarct in the inferior half of the left cerebellar hemisphere and compression of the fourth ventricle causing moderately severe, acute obstructive hydrocephalus. Magnetic resonance angiography showed severe narrowing and low flow in the intracranial segment of the left distal vertebral artery. The patient was treated with mannitol and a ventriculostomy. Following these interventions, she made an excellent functional recovery.

The authors of the case-report draw the following conclusions: This report illustrates the potential hazards associated with neck trauma, including chiropractic manipulation. The vertebral arteries are at risk for aneurysm formation and/or dissection, which can cause acute stroke.

I can already hear the counter-arguments: this is not evidence, it’s an anecdote; the evidence from the Cassidy study shows there is no such risk!

Indeed the Cassidy study concluded that vertebral artery accident (VBA) stroke is a very rare event in the population. The increased risks of VBA stroke associated with chiropractic and primary care physician visits is likely due to patients with headache and neck pain from VBA dissection seeking care before their stroke. We found no evidence of excess risk of VBA stroke associated with chiropractic care compared to primary care. That, of course, was what chiropractors longed to hear (and it is the main basis for their denial of risk) – so much so that Cassidy et al published the same results a second time (most experts feel that this is a violation of publication ethics).

But repeating arguments does not make them more true. What we should not forget is that the Cassidy study was but one of several case-control studies investigating this subject. And the totality of all such studies does not deny an association between neck manipulation and stroke.

Much more important is the fact that a re-analysis of the Cassidy data found that prior studies grossly misclassified cases of cervical dissection and mistakenly dismissed a causal association with manipulation. The authors of this new paper found a classification error of cases by Cassidy et al and re-analysed the Cassidy data, which had reported no association between spinal manipulation and cervical artery dissection (odds ratio [OR] = 1.12, 95% CI 0.77-1.63). The re-calculated results reveal an odds ratio of 2.15 (95% CI 0.98-4.69). For patients less than 45 years of age, the OR was 6.91 (95% CI 2.59-13.74). The authors of the re-analysis conclude as follows: If our estimates of case misclassification are applicable outside the VA population, ORs for the association between SMT exposure and CAD are likely to be higher than those reported using the Rothwell/Cassidy strategy, particularly among younger populations. Future epidemiologic studies of this association should prioritize the accurate classification of cases and SMT exposure.
I think they are correct; but my conclusion of all this would be more pragmatic and much simpler: UNTIL WE HAVE CONVINCING EVIDENCE TO THE CONTRARY, WE HAVE TO ASSUME THAT CHIROPRACTIC NECK MANIPULATION CAN CAUSE A STROKE.
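The odds ratios at issue come from ordinary 2×2 table arithmetic, and a toy example shows how reclassifying even a handful of misclassified cases can shift the OR substantially. All counts below are invented purely for illustration; they are not the actual Cassidy or VA data:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """OR for a 2x2 table: (a * d) / (b * c)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Invented counts: cases = cervical artery dissections, exposure = a recent SMT visit
before = odds_ratio(exposed_cases=20, unexposed_cases=180,
                    exposed_controls=100, unexposed_controls=900)
print(f"OR before reclassification: {before:.2f}")  # 1.00 - no apparent association

# Suppose 10 true cases among the exposed had been misclassified as controls
after = odds_ratio(exposed_cases=30, unexposed_cases=180,
                   exposed_controls=90, unexposed_controls=900)
print(f"OR after reclassification: {after:.2f}")    # 1.67 - an association appears
```

Moving cases between cells changes both the numerator and the denominator of the OR at once, which is why modest misclassification can hide (or exaggerate) an association.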

As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):


Background: A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.

Methods: The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

Results: Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

Conclusions: Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

Since my team had published an RCT of individualised homeopathy, it seemed only natural that my interest focussed on why our study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.

I was convinced that this trial had been rigorous and thus puzzled why, despite receiving ‘full marks’ from the reviewers, they had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data we provided would not lend themselves to meta-analysis. By selecting not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to exclude it from their meta-analysis.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO”. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in this way).

By following rigidly their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol that forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is quite the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall results would not have been in favour of homeopathy.
  • Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:

I have reason to believe that this review and meta-analysis is biased in favor of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analyses. Jacobs’ results were in favour of homeopathy, Walach’s were not.

For the domains where the rating of Walach’s study was less than that of the Jacobs study, I quote the relevant passages from the original studies (or give my short summary of the point in question).

Domain I: Sequence generation:
Walach 1997: “The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (medium risk of bias)

Jacobs 1994: “For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (low risk of bias)

Domain IIIb: Blinding of outcome assessor
Walach 1997: “The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study.”
Rating: UNCLEAR (medium risk of bias)

Jacobs 1994: “All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (low risk of bias)

Domain V: Selective outcome reporting

Walach 1997: The study protocol was published in 1991, prior to enrolment of participants; all primary outcome variables were reported for all participants and endpoints.
Rating: NO (high risk of bias)

Jacobs 1994: No prior publication of a protocol, but a pilot study exists; however, this was published in 1993, only after the trial was performed in 1991. The primary outcome (duration of diarrhea) was defined and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Walach 1997:
Rating: NO (high risk of bias), no details given

Jacobs 1994: Imbalance of group properties (size, weight and age of the children) that might have had some impact on the course of the disease; high impact of a parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.


So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof of misconduct? I asked Mathie and he answered as follows: “No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying.”

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.

On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because

  1. the assumptions which underpin homeopathy fly in the face of science,
  2. the clinical evidence fails to show that it works beyond a placebo effect.

But was I correct?

A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of an individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that with placebos.

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
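Pooled odds ratios of this kind are, in essence, inverse-variance averages of per-trial log-odds ratios. As a rough illustration of the arithmetic only, here is a minimal fixed-effect pooling sketch; the trial numbers are made up, and the review's actual data and exact statistical model are not reproduced here.

```python
import math

def pooled_odds_ratio(trials):
    """Fixed-effect inverse-variance pooling on the log-odds scale.

    Each trial is (odds_ratio, ci_low, ci_high) with a 95% CI;
    the standard error is recovered from the CI width.
    """
    num = den = 0.0
    for or_, lo, hi in trials:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # 95% CI -> SE of log-OR
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * log_or
        den += w
    pooled_log = num / den
    pooled_se = math.sqrt(1.0 / den)
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Hypothetical per-trial results, for illustration only
example = [(1.8, 0.9, 3.6), (1.2, 0.7, 2.1), (2.5, 1.1, 5.7)]
or_, lo, hi = pooled_odds_ratio(example)
```

The pooled estimate always falls between the smallest and largest per-trial OR, and its confidence interval is narrower than any single trial's, which is why adding a few small positive trials can tip a pooled OR above 1.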

The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

One does not need to be a prophet to predict that the world of homeopathy will declare this article as the ultimate proof of homeopathy’s efficacy beyond placebo. Already the ‘British Homeopathic Association’ has issued the following press release:

Clinical evidence for homeopathy published

Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.

The paper, published in the peer-reviewed journal Systematic Reviews, reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.

The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.

The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.

“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”

Surprised? I was stunned and thus studied the article in much detail (luckily, the full-text version is available online). Then I entered into an email exchange with the first author, whom I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to better understand the review’s methodology; but it also left me very much underwhelmed by the reliability of the authors’ conclusion.

Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.


Adverse events following chiropractic treatment have been reported extensively: about 50% of patients suffer side-effects after seeing a chiropractor. The majority of these events are mild, transitory and self-limiting. However, chiropractic spinal manipulations, particularly those of the upper spine, have also been associated with very serious complications; several hundred such cases have been reported in the medical literature and, as there is no monitoring system to record these instances, this figure is almost certainly just the tip of a much larger iceberg.

Despite these facts, little is known about patient-filed compensation claims related to the chiropractic consultation process. The aim of a new study was to describe claims reported to the Danish Patient Compensation Association and the Norwegian System of Compensation to Patients related to chiropractic from 2004 to 2012.

All finalized compensation claims involving chiropractors reported to one of the two associations between 2004 and 2012 were assessed for age, gender, type of complaint, decisions and appeals. Descriptive statistics were used to describe the study population.

338 claims were registered in Denmark and Norway between 2004 and 2012, of which 300 were included in the analysis; 41 (13.7%) were approved for financial compensation. The most frequent complaints were worsening of symptoms following treatment (n = 91, 30.3%), alleged disk herniations (n = 57, 19%) and cases of delayed referral (n = 46, 15.3%). A total financial payment of €2,305,757 (median payment €7,730) was distributed among the forty-one approved cases, with the few claims relating to cervical artery dissection (n = 11, 5.7%) accounting for 88.7% of the total amount.

The authors concluded that chiropractors in Denmark and Norway received approximately one compensation claim per 100.000 consultations. The approval rate was low across the majority of complaint categories and lower than the approval rates for general practitioners and physiotherapists. Many claims can probably be prevented if chiropractors would prioritize informing patients about the normal course of their complaint and normal benign reactions to treatment.

Despite its somewhat odd conclusion (it is not truly based on the data), this is a unique article; I am not aware that other studies of chiropractic compensation claims exist in a European context. The authors should be applauded for their work. Clearly we need more of the same from other countries and from all professions practising manipulative therapies.

In the discussion section of their article, the authors point out that Norwegian and Danish chiropractors each deliver approximately two million consultations annually. Combined, they receive on average 42 claims per year, suggesting roughly one claim per 100.000 consultations. By comparison, Danish statistics show that in the period 2007–2012 chiropractors, GPs and physiotherapists (plus occupational therapists) received 1.76, 1.32 and 0.52 claims per 100.000 consultations, respectively, with approval rates of 13%, 25% and 21%, respectively. During this period, these three groups were reimbursed on average €58,000, €29,000 and €18,000 per approved claim, respectively.
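The claims-rate arithmetic quoted above is easy to check directly; the consultation and claim figures below are the approximate ones given in the article.

```python
# Approximate figures from the article
consultations_per_year = 2_000_000 + 2_000_000  # Danish + Norwegian chiropractors combined
claims_per_year = 42                             # average combined claims per year

rate_per_100k = claims_per_year / consultations_per_year * 100_000
# rate_per_100k works out to 1.05, i.e. roughly one claim per 100.000 consultations
```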

These data are preliminary and their interpretation might be a matter of debate. However, one thing seems clear enough: contrary to what we frequently hear from apologists, chiropractors do receive a considerable amount of compensation claims which means many patients do get harmed.
