
A recent article in the BMJ about my new book seems to have upset fellow researchers of alternative medicine. I am told that the offending passage is the following:

“Too much research on complementary therapies is done by people who have already made up their minds,” the first UK professor of complementary medicine has said. Edzard Ernst, who left his chair at Exeter University early after clashing with the Prince of Wales, told journalists at the Science Media Centre in London that, although more research into alternative medicines was now taking place, “none of the centres is anywhere near critical enough.”

Following this publication, I received indignant inquiries from colleagues asking whether I meant to say that their work lacks critical thinking. As this is a valid question, I will try to answer it as best I presently can.

Any critical evaluation of alternative medicine has to yield its fair share of negative conclusions about the value of alternative medicine. If it fails to do that, one would need to assume that most or all alternative therapies generate more good than harm – and very few experts (who are not proponents of alternative medicine) would assume that this can possibly be the case.

Put differently, this means that a researcher or a research group that does not generate its fair share of negative conclusions is suspect of lacking a critical attitude. In a previous post, I have addressed this issue in more detail by creating an ‘index’: THE TRUSTWORTHINESS INDEX. I have also provided a concrete example of a researcher who seems to be associated with a remarkably high index (the higher the index, the stronger the suspicion of an uncritical attitude).

Instead of unnecessarily upsetting my fellow researchers of alternative medicine any further, I will just issue this challenge: if any research group can demonstrate to have an index below 0.5 (which would mean the team has published twice as many negative conclusions as positive ones), I will gladly and publicly retract my suspicion that this group is “anywhere near critical enough”.
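For concreteness, the arithmetic behind this challenge can be sketched in a few lines of code. This is my own illustration, not part of the original index post; the function name and the idea of simply counting published conclusions are assumptions:

```python
def trustworthiness_index(positive_conclusions, negative_conclusions):
    """Ratio of positive to negative published conclusions.

    As implied by the challenge above, an index below 0.5 means a team
    has published at least twice as many negative conclusions as positive
    ones; the higher the index, the stronger the suspicion of an
    uncritical attitude.
    """
    if negative_conclusions == 0:
        # a team with no negative conclusions at all attracts maximal suspicion
        return float("inf")
    return positive_conclusions / negative_conclusions

# hypothetical publication records
print(trustworthiness_index(10, 20))  # 0.5 -> meets the challenge threshold
print(trustworthiness_index(30, 5))   # 6.0 -> suspiciously uncritical
```

The threshold of 0.5 is, of course, a deliberately blunt instrument; the point is only that a ratio far above it is hard to reconcile with a critical research programme.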

Few subjects lead to such heated debate as the risk of stroke after chiropractic manipulations (if you think this is an exaggeration, look at the comment sections of previous posts on this subject). Almost invariably, one comes to the conclusion that more evidence would be helpful for arriving at firmer conclusions. Against this background, this new publication by researchers (mostly chiropractors) from the US ‘Dartmouth Institute for Health Policy & Clinical Practice’ is noteworthy.

The purpose of this study was to quantify the risk of stroke after chiropractic spinal manipulation, as compared to evaluation by a primary care physician, for Medicare beneficiaries aged 66 to 99 years with neck pain.

The researchers conducted a retrospective cohort analysis of a 100% sample of annualized Medicare claims data on 1 157 475 beneficiaries aged 66 to 99 years with an office visit to either a chiropractor or to a primary care physician for neck pain. They compared hazard of vertebrobasilar stroke and any stroke at 7 and 30 days after office visit using a Cox proportional hazards model. They used direct adjusted survival curves to estimate cumulative probability of stroke up to 30 days for the 2 cohorts.

The findings indicate that the proportion of subjects with a stroke of any type in the chiropractic cohort was 1.2 per 1000 at 7 days and 5.1 per 1000 at 30 days. In the primary care cohort, the proportion of subjects with a stroke of any type was 1.4 per 1000 at 7 days and 2.8 per 1000 at 30 days. In the chiropractic cohort, the adjusted risk of stroke was significantly lower at 7 days as compared to the primary care cohort (hazard ratio, 0.39; 95% confidence interval, 0.33-0.45), but at 30 days, a slight elevation in risk was observed for the chiropractic cohort (hazard ratio, 1.10; 95% confidence interval, 1.01-1.19).

The authors conclude that, among Medicare B beneficiaries aged 66 to 99 years with neck pain, incidence of vertebrobasilar stroke was extremely low. Small differences in risk between patients who saw a chiropractor and those who saw a primary care physician are probably not clinically significant.

I do, of course, applaud any new evidence on this rather ‘hot’ topic – but is it just me, or are the above conclusions a bit odd? Five strokes per 1000 patients is definitely not “extremely low” in my book; and I do wonder whether all experts would agree that a doubling of risk at 30 days in the chiropractic cohort is “probably not clinically significant” – particularly if we consider that chiropractic spinal manipulation has so very little proven benefit.
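The crude figures quoted above can be checked directly. This little sketch is mine, not the study authors'; it merely contrasts the crude 30-day risk ratio with the adjusted hazard ratios reported in the paper:

```python
# Proportions of subjects with any stroke, per 1000, as reported in the paper
chiro = {"7d": 1.2, "30d": 5.1}
primary_care = {"7d": 1.4, "30d": 2.8}

# crude (unadjusted) risk ratio at 30 days
crude_ratio_30d = chiro["30d"] / primary_care["30d"]
print(round(crude_ratio_30d, 2))  # ~1.82: close to a doubling of crude risk

# The adjusted hazard ratios reported were 0.39 at 7 days and 1.10 at
# 30 days; the gap between the crude ratio and the adjusted 30-day HR
# is precisely what the paper's conclusions gloss over.
```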

My message to (chiropractic) researchers is simple: PLEASE REMEMBER THAT SCIENCE IS NOT A TOOL FOR CONFIRMING BUT FOR TESTING HYPOTHESES.

On 1/12/2014 I published a post in which I offered to give lectures to students of alternative medicine:

Getting good and experienced lecturers for courses is not easy. Having someone who has done more research than most working in the field and who is internationally known might therefore be a thrill for students and an image-boosting experience for colleges. In the true Christmas spirit, I am today making the offer of being of assistance to the many struggling educational institutions of alternative medicine.

A few days ago, I tweeted about my willingness to give free lectures to homeopathic colleges (so far without response). Having thought about it a bit, I would now like to extend this offer. I would be happy to give a free lecture to the students of any educational institution of alternative medicine.

I did not think that this would create much interest – and I was right: only the ANGLO-EUROPEAN COLLEGE OF CHIROPRACTIC has so far hoisted me on my own petard and, after some discussion (see the comment section of the original post), hosted me for a lecture. Several people seem keen on knowing how this went; so here is a brief report.

I was received, on 14/1/2015, with the utmost kindness by my host David Newell. We had a coffee and a chat and then it was time to start the lecture. The hall was packed with ~150 students, and the same number was listening in a second lecture hall to which my talk was being transmitted.

We had agreed on the title CHIROPRACTIC: FALLACIES AND FACTS. So, after telling the audience about my professional background, I elaborated on 7 fallacies:

  1. Appeal to tradition
  2. Appeal to authority
  3. Appeal to popularity
  4. Subluxation exists
  5. Spinal manipulation is effective
  6. Spinal manipulation is safe
  7. Ad hominem attack

Numbers 3, 5 and 6 were dealt with in more detail than the rest. The organisers had asked me to finish by elaborating on what I perceive as the future challenges of chiropractic; so I did:

  1. Stop happily promoting bogus treatments
  2. Denounce obsolete concepts like ‘subluxation’
  3. Clarify differences between chiros, osteos and physios
  4. Start a culture of critical thinking
  5. Take action against charlatans in your ranks
  6. Stop attacking everyone who voices criticism

I ended by pointing out that the biggest challenge, in my view, was to “demonstrate with rigorous science which chiropractic treatments demonstrably generate more good than harm for which condition”.

We had agreed that my lecture would be followed by half an hour of discussion; this period turned out to be lively and had to be extended to a full hour. Most questions initially came from the tutors rather than the students, and most were polite – I had expected much more aggression.

In his email thanking me for coming to Bournemouth, David Newell wrote about the event: The general feedback from staff and students was one of relief that you possessed only one head, :-). I hope you may have felt the same about us. You came over as someone who had strong views, a fair amount of which we disagreed with, but that presented them in a calm, informative and courteous manner as we did in listening and discussing issues after your talk. I think everyone enjoyed the questions and debate and felt that some of the points you made were indeed fair critique of what the profession may need to do, to secure a more inclusive role in the health care arena.

 
As you may have garnered from your visit here, the AECC is committed to this task as we continue to provide the highest quality of education for the 21st C representatives of such a profession. We believe centrally that it is to our society at large and our communities within which we live and work that we are accountable. It is them that we serve, not ourselves, and we need to do that as best we can, with the best tools we have or can develop and that have as much evidence as we can find or generate. In this aim, your talk was important in shining a more ‘up close and personal’ torchlight on our profession and the tasks ahead whilst also providing us with a chance to debate the veracity or otherwise of yours and ours differing positions on interpretation of the evidence.

My own impression of the day is that some of my messages were not really understood, that some of the questions, including some from the tutors, seemed to come from a different planet, and that people were more out to teach me than to learn from my talk. One overall impression that I took home from that day is that, even in this college which prides itself on being open to scientific evidence and unimpressed by chiropractic fundamentalism, students are strangely different from other health care professionals. The most tangible aspect of this is the openly hostile attitude towards drug therapies voiced during the discussion by some students.

The question I always ask myself after having invested a lot of time in preparing and delivering a lecture is: WAS IT WORTH IT? In the case of this lecture, I think the answer is YES. With 300 students present, I am fairly confident that I did manage to stimulate a tiny bit of critical thinking in a tiny percentage of them. The chiropractic profession needs this badly!

 

The very first article on a subject related to alternative medicine with a 2015 date that I came across is a case-report. I am afraid it will not delight our chiropractic friends who tend to deny that their main therapy can cause serious problems.

In this paper, US doctors tell the story of a young woman who developed headache, vomiting, diplopia, dizziness, and ataxia following a neck manipulation by her chiropractor. A computed tomography scan of the head was ordered and it revealed an infarct in the inferior half of the left cerebellar hemisphere and compression of the fourth ventricle causing moderately severe, acute obstructive hydrocephalus. Magnetic resonance angiography showed severe narrowing and low flow in the intracranial segment of the left distal vertebral artery. The patient was treated with mannitol and a ventriculostomy. Following these interventions, she made an excellent functional recovery.

The authors of the case-report draw the following conclusions: This report illustrates the potential hazards associated with neck trauma, including chiropractic manipulation. The vertebral arteries are at risk for aneurysm formation and/or dissection, which can cause acute stroke.

I can already hear the counter-arguments: this is not evidence, it’s an anecdote; the evidence from the Cassidy study shows there is no such risk!

Indeed the Cassidy study concluded that vertebral artery accident (VBA) stroke is a very rare event in the population. The increased risks of VBA stroke associated with chiropractic and primary care physician visits is likely due to patients with headache and neck pain from VBA dissection seeking care before their stroke. We found no evidence of excess risk of VBA stroke associated with chiropractic care compared to primary care. That, of course, was what chiropractors longed to hear (and it is the main basis for their denial of risk) – so much so that Cassidy et al published the same results a second time (most experts feel that this is a violation of publication ethics).

But repeating arguments does not make them more true. What we should not forget is that the Cassidy study was but one of several case-control studies investigating this subject. And the totality of all such studies does not deny an association between neck manipulation and stroke.

Much more important is the fact that a re-analysis of the Cassidy data found that prior studies grossly misclassified cases of cervical dissection and mistakenly dismissed a causal association with manipulation. The authors of this new paper found a classification error of cases by Cassidy et al and re-analysed the Cassidy data, which had reported no association between spinal manipulation and cervical artery dissection (odds ratio [OR] = 1.12, 95% CI 0.77-1.63). The re-calculated results reveal an odds ratio of 2.15 (95% CI 0.98-4.69). For patients less than 45 years of age, the OR was 6.91 (95% CI 2.59-13.74). The authors of the re-analysis conclude as follows: If our estimates of case misclassification are applicable outside the VA population, ORs for the association between SMT exposure and CAD are likely to be higher than those reported using the Rothwell/Cassidy strategy, particularly among younger populations. Future epidemiologic studies of this association should prioritize the accurate classification of cases and SMT exposure.
I think they are correct; but my conclusion of all this would be more pragmatic and much simpler: UNTIL WE HAVE CONVINCING EVIDENCE TO THE CONTRARY, WE HAVE TO ASSUME THAT CHIROPRACTIC NECK MANIPULATION CAN CAUSE A STROKE.
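To see how case misclassification can move an odds ratio, here is a minimal sketch. It is mine, not the re-analysis authors'; the 2x2 counts are hypothetical, chosen only to reproduce the originally reported OR of 1.12:

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a * d) / (b * c)

# hypothetical counts reproducing the originally reported OR of 1.12
print(odds_ratio(20, 100, 50, 280))  # 1.12

# if, say, 10 exposed 'non-cases' were in fact misclassified cases,
# reclassifying them pushes the OR upward
print(round(odds_ratio(30, 90, 50, 280), 2))  # 1.87
```

The general point stands regardless of the exact numbers: shifting even a handful of misclassified cases from the "non-case" to the "case" column of the exposed group inflates the OR, which is why accurate case classification matters so much in these studies.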

As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):

BACKGROUND:

A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.

METHODS:

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

RESULTS:

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

CONCLUSIONS:

Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

Since my team had published an RCT of individualised homeopathy, it seems only natural that my interest focussed on why this study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.

I was convinced that this trial had been rigorous and thus puzzled why, despite receiving ‘full marks’ from the reviewers, they had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data provided by us would not lend themselves to meta-analysis. By selecting not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to exclude it from their meta-analysis.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO“. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).

By rigidly following their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is even the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall results would not have been in favour of homeopathy.
  • Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:

I have reason to believe that this review and meta-analysis is biased in favor of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analyses. Jacobs’ results were in favour of homeopathy, Walach’s were not.

For the domains where the rating of Walach’s study was less than that of the Jacobs study, please find citations from the original studies or my short summaries for the point in question.

Domain I: Sequence generation:
Walach:
“The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (Low risk of bias)

Domain IIIb: Blinding of outcome assessor
Walach:
“The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study. ”
Rating: UNCLEAR (Medium risk of bias)

Jacobs:
“All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (Low risk of bias)

Domain V: Selective outcome reporting

Walach:
Study protocol was published in 1991 prior to enrollment of participants, all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)

Jacobs:
No prior publication of protocol, but a pilot study exists. However, this was published in 1993, only after the trial was performed in 1991. Primary outcome defined (duration of diarrhea) and reported, but table and graph do not match; secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Walach:
Rating: NO (high risk of bias), no details given

Jacobs:
Imbalance of group properties (size, weight and age of children) that might have some impact on the course of the disease; high impact of parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.

Conclusion

So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof for misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying. 

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.

On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because

  1. the assumptions which underpin homeopathy fly in the face of science,
  2. the clinical evidence fails to show that it works beyond a placebo effect.

But was I correct?

A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of an individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that with placebos.

The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.

Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).

The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

One does not need to be a prophet to predict that the world of homeopathy will declare this article as the ultimate proof of homeopathy’s efficacy beyond placebo. Already the ‘British Homeopathic Association’ has issued the following press release:

Clinical evidence for homeopathy published

Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.

The paper, published in the peer-reviewed journal Systematic Reviews,1 reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.

The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.

The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.

“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”

Surprised? I was stunned and thus studied the article in much detail (luckily the full text version is available online). Then I entered into an email exchange with the first author who I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to better understand the review’s methodology; but it also resulted in me being very much underwhelmed by the reliability of the authors’ conclusion.

Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.

SO PLEASE TAKE SOME TIME TO STUDY THIS PAPER AND TELL US WHAT YOU THINK.

Adverse events have been reported extensively following chiropractic.  About 50% of patients suffer side-effects after seeing a chiropractor. The majority of these events are mild, transitory and self-limiting. However, chiropractic spinal manipulations, particularly those of the upper spine, have also been associated with very serious complications; several hundred such cases have been reported in the medical literature and, as there is no monitoring system to record these instances, this figure is almost certainly just the tip of a much larger iceberg.

Despite these facts, little is known about patient-filed compensation claims related to the chiropractic consultation process. The aim of a new study was to describe claims reported to the Danish Patient Compensation Association and the Norwegian System of Compensation to Patients related to chiropractic from 2004 to 2012.

All finalized compensation claims involving chiropractors reported to one of the two associations between 2004 and 2012 were assessed for age, gender, type of complaint, decisions and appeals. Descriptive statistics were used to describe the study population.

338 claims were registered in Denmark and Norway between 2004 and 2012, of which 300 were included in the analysis. 41 (13.7%) were approved for financial compensation. The most frequent complaints were worsening of symptoms following treatment (n = 91, 30.3%), alleged disk herniations (n = 57, 19%) and cases with delayed referral (n = 46, 15.3%). A total financial payment of €2,305,757 (median payment €7,730) was distributed among the forty-one approved cases, with complaints relating to a few cases of cervical artery dissection (n = 11, 5.7%) accounting for 88.7% of the total amount.

The authors concluded that chiropractors in Denmark and Norway received approximately one compensation claim per 100.000 consultations. The approval rate was low across the majority of complaint categories and lower than the approval rates for general practitioners and physiotherapists. Many claims can probably be prevented if chiropractors would prioritize informing patients about the normal course of their complaint and normal benign reactions to treatment.

Despite its somewhat odd conclusion (it is not truly based on the data), this is a unique article; I am not aware that other studies of chiropractic compensation claims exist in a European context. The authors should be applauded for their work. Clearly we need more of the same from other countries and from all professions doing manipulative therapies.

In the discussion section of their article, the authors point out that Norwegian and Danish chiropractors both deliver approximately two million consultations annually. They receive on average 42 claims combined, suggesting roughly one claim per 100,000 consultations. By comparison, Danish statistics show that in the period 2007–2012 chiropractors, GPs and physiotherapists (plus occupational therapists) received 1.76, 1.32 and 0.52 claims per 100,000 consultations, respectively, with approval rates of 13%, 25% and 21%, respectively. During this period these three groups were reimbursed on average €58,000, €29,000 and €18,000 per approved claim, respectively.
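The rough arithmetic behind the "one claim per 100,000 consultations" figure is easy to reproduce. This sketch is mine; I am reading "both deliver approximately two million consultations" as roughly two million per country, i.e. about four million combined, since that is the assumption under which the authors' figure comes out:

```python
# assumption: ~2 million consultations per year in each country
consultations_per_year = 2_000_000 * 2   # Denmark + Norway combined
claims_per_year = 42                     # average claims per year, combined

rate_per_100k = claims_per_year / consultations_per_year * 100_000
print(round(rate_per_100k, 2))  # 1.05 -> roughly one claim per 100,000
```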

These data are preliminary, and their interpretation might be a matter of debate. However, one thing seems clear enough: contrary to what we frequently hear from apologists, chiropractors do receive a considerable number of compensation claims, which means that many patients do get harmed.

This investigation was aimed at examining the messages utilised by the chiropractic profession around issues of scope and efficacy through website communication with the public. For this purpose, the authors submitted the website content of 11 major Canadian chiropractic associations and colleges, and of 80 commercial clinics to a mixed-methods analysis. Content was reviewed to quantify specific health conditions described as treatable by chiropractic care. A qualitative textual analysis identified the primary messages related to evidence and efficacy utilised by the websites.

The results show that chiropractic was claimed to be capable of addressing a wide range of health issues. Quantitative analysis revealed that association and college websites identified a total of 41 unique conditions treatable by chiropractic, while private clinic websites named 159 distinct conditions. The most commonly cited conditions included back pain, headaches/migraines and neck pain. Qualitative analysis revealed three prominent themes drawn upon in discussions of efficacy and evidence: being grounded in science, the conflation of safety and efficacy, and “natural” healing.

The authors concluded that the chiropractic profession claims the capacity to treat health conditions that exceed those more traditionally associated with chiropractic. Website content persistently declared that such claims are supported by research and scientific evidence, and at times blurred the lines between safety and efficacy. The chiropractic profession may be struggling to define itself both within the paradigm of conventional science and within an alternative paradigm that embraces natural approaches.

These findings strike me as similar to the ones we published 4 years ago. At that time, we had conducted a review of 200 chiropractor websites and 9 chiropractic associations’ World Wide Web claims in Australia, Canada, New Zealand, the United Kingdom, and the United States. The outcome measures were claims (either direct or indirect), made in the context of chiropractic treatment, regarding eight reviewed conditions: asthma, headache/migraine, infant colic, colic, ear infection/earache/otitis media, neck pain, whiplash (not supported by sound evidence), and lower back pain (supported by some evidence).

We found evidence that 190 (95%) chiropractor websites made unsubstantiated claims regarding at least one of the conditions. When colic and infant colic data were collapsed into one heading, there was evidence that 76 (38%) chiropractor websites made unsubstantiated claims about all the conditions not supported by sound evidence. Fifty-six (28%) websites and 4 of the 9 (44%) associations made claims about lower back pain, whereas 179 (90%) websites and all 9 associations made unsubstantiated claims about headache/migraine. Unsubstantiated claims were also made about asthma, ear infection/earache/otitis media, and neck pain.

At the time, we concluded that the majority of chiropractors and their associations in the English-speaking world seem to make therapeutic claims that are not supported by sound evidence, whilst only 28% of chiropractor websites promoted chiropractic for lower back pain, the one condition for which there is some supporting evidence. We suggested that the ubiquity of such unsubstantiated claims constitutes an ethical and public health issue.

Comparing the two studies, what should we conclude? Of course, the new investigation was confined to Canada; we therefore cannot generalise its results to other countries. Nevertheless it provides a fascinating insight into the (lack of) development of chiropractic in this part of the world. My conclusion is that, at least in Canada, there is very little evidence that chiropractic is about to become an ethical and evidence-based profession.

Some of the recent comments on this blog have been rather emotional, a few even irrational, and several were, I am afraid, outright insulting (I usually omit to post the worst excesses). Moreover, I could not avoid the impression that some commentators have little understanding of what the aim of this blog really is. I tried to point this out in the very first paragraph of my very first post:

Why another blog offering critical analyses of the weird and wonderful stuff that is going on in the world of alternative medicine? The answer is simple: compared to the plethora of uncritical misinformation on this topic, the few blogs that do try to convey more reflected, sceptical views are much needed; and the more we have of them, the better.

My foremost aim with this blog is to inform consumers through critical analysis and, in this way, to prevent harm to patients in the realm of alternative medicine. What follows are a few simple yet important points about this blog, which I will try to spell out as clearly as I can:

  • I do not normally comment on issues related to conventional medicine – not because I feel there is nothing to criticise in mainstream medicine, but because my expertise has long been in alternative medicine. So commentators might as well forget about arguments like “more people die because of drugs than because of alternative treatments”; they are firstly fallacious and secondly not relevant to this blog.
  • I have researched alternative medicine for many years (~ 40 clinical studies, > 300 systematic reviews etc.) and my readers can be confident that I know what I am talking about. Thus comments like ‘he does not know anything about the subject’ are usually not well placed and just show the ignorance of those who post them.
  • I am not in the pocket of anyone. I do not receive payments for writing this blog, nor did I, as an academic, receive any financial or other inducements for researching alternative medicine (on the contrary, I was often given to understand that my life could be made much easier if I adopted a more promotional stance towards alternative medicine). I also do not belong to any organisation financed by BIG PHARMA or similar power houses. So my critics might as well abandon their conspiracy theories and focus on a more promising avenue of criticism.
  • My allegiance is not with any interest group in (or outside) the field of alternative medicine. For instance, I do not see it as my job to help chiropractors, homeopaths etc. to get their act together. My task here is to point out the deficits in chiropractic (or any other area of alternative medicine) so that consumers are better protected. (I should think, however, that this also creates pressure on these professions to become more evidence-based – but I see this as a welcome side-effect.)
  • If some commentators find my arguments alarmist or see them as venomous scare-mongering, I suggest they re-examine their own position and learn to think a little more (self-)critically. I furthermore suggest that, instead of making such claims, they point out where they think I have gone wrong and provide evidence for their views.
  • Some people seem convinced that I have an axe to grind, that I have been personally injured by some alternative practitioner, or had some other unpleasant or traumatic experience. To those who think so, I have to say very clearly that none of this has ever happened. I recommend they inform themselves of the nature of critical analysis and its benefits.
  • This is a blog, not a scientific journal. I try to reach as many lay people as I can and therefore I tend to use simple language and sometimes aim to be entertaining. Those who feel that this renders my blog more journalistic than scientific are probably correct. If they want science, I recommend they look for my scientific articles in the medical literature; I can assure them that they will find plenty.
  • I very much invite an open and outspoken debate. But ad hominem attacks are usually highly counterproductive – they only demonstrate that the author has no rational arguments left, or had none in the first place. Authors of insults also risk being banned from this blog.
  • Finally, I fear that some readers of my blog might sometimes get confused in the arguments and counter-arguments, and end up uncertain which side is right and which is wrong. To those who have this problem, I recommend a simple method for deciding where the truth is usually more likely to be found: ask yourself who might be merely defending his/her self-interest and who might be free of such conflicts of interest and thus more objective. For example, in my endless disputes with chiropractors, one could well ask: do the chiropractors have an interest in defending their livelihood, and what interest do I have in questioning whether chiropractors do generate more good than harm?

Acute tonsillitis (AT) is a prevalent upper respiratory tract infection, particularly in children. The cause is usually a viral or, less commonly, a bacterial infection. Treatment is symptomatic and usually consists of ample fluid intake and pain-killers; antibiotics are rarely indicated, even if the infection is bacterial in nature. The condition is self-limiting, and symptoms normally subside after one week.

Homeopaths believe that their remedies are effective for AT – but is there any evidence? A recent trial seems to suggest there is.

It aimed, according to its authors, to determine the efficacy of a homeopathic complex on the symptoms of acute viral tonsillitis in African children in South Africa.

The double-blind, placebo-controlled RCT was a 6-day “pilot study” and included 30 children aged 6 to 12 years, with acute viral tonsillitis. Participants took two tablets 4 times per day. The treatment group received lactose tablets medicated with the homeopathic complex (Atropa belladonna D4, Calcarea phosphoricum D4, Hepar sulphuris D4, Kalium bichromat D4, Kalium muriaticum D4, Mercurius protoiodid D10, and Mercurius biniodid D10). The placebo consisted of the unmedicated vehicle only. The Wong-Baker FACES Pain Rating Scale was used for measuring pain intensity, and a Symptom Grading Scale assessed changes in tonsillitis signs and symptoms.

The results showed that the treatment group had a statistically significant improvement in the following symptoms compared with the placebo group: pain associated with tonsillitis, pain on swallowing, erythema and inflammation of the pharynx, and tonsil size.

The authors drew the following conclusions: the homeopathic complex used in this study exhibited significant anti-inflammatory and pain-relieving qualities in children with acute viral tonsillitis. No patients reported any adverse effects. These preliminary findings are promising; however, the sample size was small and therefore a definitive conclusion cannot be reached. A larger, more inclusive research study should be undertaken to verify the findings of this study.

Personally, I agree only with the latter part of the conclusion and very much doubt that this study was able to “determine the efficacy” of the homeopathic product used. The authors themselves call their trial a “pilot study”. Such projects are not meant to determine efficacy but are usually designed to determine the feasibility of a trial design in order to subsequently mount a definitive efficacy study.

Moreover, I have considerable doubts about the impartiality of the authors. Their affiliation is “Department of Homoeopathy, University of Johannesburg, Johannesburg, South Africa”, and their article was published in a journal known to be biased in favour of homeopathy. These circumstances in themselves might not be all that important, but what makes me more than a little suspicious is this sentence from the introduction of their abstract:

“Homeopathic remedies are a useful alternative to conventional medications in acute uncomplicated upper respiratory tract infections in children, offering earlier symptom resolution, cost-effectiveness, and fewer adverse effects.”

A useful alternative to conventional medications (there are no conventional drugs for this condition) offering earlier symptom resolution?

If it is true that the usefulness of homeopathic remedies has been established, why conduct the study?

If the authors were so convinced of this notion (for which there is, of course, no good evidence) how can we assume they were not biased in conducting this study?

I think that, in order to agree that a homeopathic remedy generates effects that differ from those of placebo, we need a proper (not a pilot) study, published in a journal of high standing by unbiased scientists.
