
One could define alternative medicine by the fact that it is used almost exclusively for conditions for which conventional medicine does not have an effective and reasonably safe cure. Once such a treatment has been found, few patients would look for an alternative.

Alzheimer’s disease (AD) is certainly one such condition. Despite intensive research, we are still far from being able to cure it. It is thus not really surprising that AD patients and their carers are bombarded with the promotion of all sorts of alternative treatments. They must feel bewildered by the choice and all too often they fall victim to irresponsible quacks.

Acupuncture is certainly an alternative therapy that is frequently claimed to help AD patients. One of the first websites that I came across, for instance, stated boldly that acupuncture “improves memory and prevents degradation of brain tissue”.

But is there good evidence to support such claims? To answer this question, we need a systematic review of the trial data. Fortunately, such a paper has just been published.

The objective of this review was to assess the effectiveness and safety of acupuncture for treating AD. Eight electronic databases were searched from their inception to June 2014. Randomized clinical trials (RCTs) with AD treated by acupuncture or by acupuncture combined with drugs were included. Two authors extracted data independently.

Ten RCTs with a total of 585 participants were included in a meta-analysis. The combined results of 6 trials showed that acupuncture was better than drugs at improving scores on the Mini Mental State Examination (MMSE) scale. Evidence from the pooled results of 3 trials showed that acupuncture plus donepezil was more effective than donepezil alone at improving the MMSE scale score. Only 2 trials reported the incidence of adverse reactions related to acupuncture. Seven patients had adverse reactions related to acupuncture during or after treatment; the reactions were described as tolerable and not severe.

The Chinese authors of this review concluded that acupuncture may be more effective than drugs and may enhance the effect of drugs for treating AD in terms of improving cognitive function. Acupuncture may also be more effective than drugs at improving AD patients’ ability to carry out their daily lives. Moreover, acupuncture is safe for treating people with AD.

Anyone reading this who has a friend or family member affected by AD will think that acupuncture is the solution and warmly recommend trying this highly promising option. I would, however, caution them to remain realistic. Like so many systematic reviews of acupuncture or other forms of TCM that are currently flooding the medical literature, this assessment of the evidence has to be taken with more than just a pinch of salt:

  • As far as I can see, there is no biological plausibility or mechanism for the assumption that acupuncture can do anything for AD patients.
  • The abstract fails to mention that the trials were of poor methodological quality and that such studies tend to generate false-positive findings.
  • The trials had small sample sizes.
  • They were mostly not blinded.
  • They were mostly conducted in China, and we know that almost 100% of all acupuncture studies from that country draw positive conclusions.
  • Only two trials reported adverse effects, which is, in my view, a sign of a violation of research ethics.

As I already mentioned, we are currently being flooded with such dangerously misleading reviews of Chinese primary studies of such dubious quality that one can probably do nothing better than ignore them completely.

Isn’t that a bit harsh? Perhaps, but I am seriously worried that such papers cause real harm:

  • They might motivate some to try acupuncture and give up conventional treatments which can be helpful symptomatically.
  • They might prompt some families to spend sizable amounts of money for no real benefit.
  • They might initiate further research into this area, thus drawing money away from research into much more promising avenues.


You may feel that homeopaths are bizarre, irrational, perhaps even stupid – but you cannot deny their tenacity. For 200 years, they have been trying to convince us that their treatments are effective beyond placebo. And they seem to become bolder and bolder with their claims: while they used to suggest that homeopathy was effective for trivial conditions like the common cold, they now have their eyes on much more ambitious targets. Two recent studies, for instance, claim that homeopathic remedies can help cancer patients.

The aim of the first study was to evaluate whether homeopathy influenced global health status and subjective wellbeing when used as an adjunct to conventional cancer therapy.

In this pragmatic randomized controlled trial, 410 patients, who were treated by standard anti-neoplastic therapy, were randomized to receive or not receive classical homeopathic adjunctive therapy in addition to standard therapy. The main outcome measures were global health status and subjective wellbeing as assessed by the patients. At each of three visits (one baseline, two follow-up visits), patients filled in two questionnaires for quantification of these endpoints.

The results show that 373 patients yielded at least one of three measurements. The improvement of global health status between visits 1 and 3 was significantly stronger in the homeopathy group by 7.7 (95% CI 2.3-13.0, p=0.005) when compared with the control group. A significant group difference was also observed with respect to subjective wellbeing by 14.7 (95% CI 8.5-21.0, p<0.001) in favor of the homeopathic as compared with the control group. Control patients showed a significant improvement only in subjective wellbeing between their first and third visits.

Our homeopaths concluded that the results suggest that the global health status and subjective wellbeing of cancer patients improve significantly when adjunct classical homeopathic treatment is administered in addition to conventional therapy.

The second study is a little more modest; its aim was to explore the benefits of a three-month course of individualised homeopathy (IH) for survivors of cancer.

Fifteen survivors of any type of cancer were recruited by a walk-in cancer support centre. Conventional treatment had to have taken place within the last three years. Patients scored their total, physical and emotional wellbeing using the Functional Assessment of Chronic Illness Therapy for Cancer (FACIT-G) before and after receiving four IH sessions.

The results showed that 11 women had statistically positive results for emotional, physical and total wellbeing based on FACIT-G scores.

And the conclusion: Findings support previous research, suggesting CAM or individualised homeopathy could be beneficial for survivors of cancer.

As I said: one has to admire their tenacity, perhaps also their chutzpah – but not their understanding of science or their intelligence. If they were able to think critically, they could arrive at only one conclusion: STUDY DESIGNS THAT ARE WIDE OPEN TO BIAS ARE LIKELY TO DELIVER BIASED RESULTS.

The second study is a mere observation without a control group. The reported outcomes could be due to placebo, expectation, extra attention or social desirability. We obviously need an RCT! But the first study was an RCT!!! Its results are therefore more convincing, aren’t they?

No, not at all. I can repeat my sentence from above: The reported outcomes could be due to placebo, expectation, extra attention or social desirability. And if you don’t believe it, please read what I have posted about the infamous ‘A+B versus B’ trial design (here and here and here and here and here for instance).

My point is that such a study, while looking rigorous to the naïve reader (after all, it’s an RCT!!!), is just as inconclusive when it comes to establishing cause and effect as a simple case series which (almost) everyone knows by now to be utterly useless for that purpose. The fact that the A+B versus B design is nevertheless being used over and over again in alternative medicine for drawing causal conclusions amounts to deceit – and deceit is unethical, as we all know.
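The flaw in the ‘A+B versus B’ design can be made concrete with a short simulation (entirely hypothetical numbers, purely to illustrate the logic): even when treatment A is completely inert, the A+B arm receives extra attention and placebo response, so the comparison can hardly ever come out against A.

```python
import random
import statistics

random.seed(0)

def simulate_a_plus_b_trial(n=2000, nonspecific_boost=3.0):
    """Simulate an 'A+B versus B' trial in which treatment A is inert.

    Both arms improve through natural history and treatment B; the A+B
    arm additionally gets a non-specific boost (placebo response, extra
    attention, expectation) simply from receiving something extra.
    """
    b_only = [random.gauss(10.0, 5.0) for _ in range(n)]
    a_plus_b = [random.gauss(10.0 + nonspecific_boost, 5.0) for _ in range(n)]
    return statistics.mean(a_plus_b) - statistics.mean(b_only)

diff = simulate_a_plus_b_trial()
print(f"Extra improvement in the A+B arm: {diff:.1f} points")
# The design cannot attribute this difference to a specific effect of A:
# every non-specific effect lands on A's side, so A essentially never 'loses'.
```

No matter how many patients are randomized, the design has no arm in which the non-specific effects of ‘something extra’ are controlled for – which is precisely why it cannot establish cause and effect.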

My overall conclusion about all this:


On 26/5/2015, I received the email reproduced below. I thought it was interesting, looked up its author (“Shawn is a philosopher and writer educated at York University in Toronto, and the author of two books. He’s also worked with Aboriginal youth in the Northwest Territories of Canada”) and decided to respond by writing a blog-post rather than by answering Alli directly.

Hello Dr. Ernst, this is Shawn Alli from Canada, a blogger and philosopher. I recently finished a critical article on James Randi’s legacy. It gets into everything from ideological science, manipulation, ESP, faith healing, acupuncture and homeopathy.

Let me know what you think about it:

It’s quite long so save it for a rainy day.

So far, the reply from skeptical organizations range from: “I couldn’t read further than the first few paragraphs because I disagree with the claims…” to one word replies: “Petty.”

It’s always nice to know how open-minded skeptical organizations are.

Hopefully you can add a bit more.



Yes, indeed, I can but try to add a bit more!

However, Alli’s actual article is far too long to analyse here in full. I therefore selected just the section that I feel most competent to comment on and which is closest to my heart. Below, I reproduce this section of Alli’s article in full. I add my comments at the end (in bold) by inserting numbered responses which refer to the numbers (in round brackets [the square ones refer to Alli’s references]) inserted throughout Alli’s text. Here we go:

Homeopathy & Acupuncture:

A significant part of Randi’s legacy is his war against homeopathy. This is where Randi shines even above mainstream scientists such as Dawkins or Tyson.

Most of his talks ridicule homeopathy as nonsense that doesn’t deserve the distinction of being called a treatment. This is due to the fact that the current scientific method is unable to account for the results of homeopathy (1). In reality, the current scientific method can’t account for the placebo effect as well (2).

But then again, that presents an internal problem as well. The homeopathic community is divided by those who believe it’s a placebo effect and those that believe it’s more than that, advocating the theory of water memory, which mainstream scientists ridicule and vilify (3).

I don’t know what camp is correct (4), but I do know that the homeopathic community shouldn’t follow the lead of mainstream scientists and downplay the placebo effect as, it’s just a placebo (5).

Remember, the placebo effect is downplayed because the current scientific method is unable to account for the phenomenon (3, 5). It’s a wondrous and real effect, regardless of the ridicule and vilification (6) that’s attached to it.

While homeopathy isn’t suitable as a treatment for severe or acute medical conditions, it’s an acceptable treatment for minor, moderate or chronic ones (7). Personally, I’ve never tried homeopathic treatments. But I would never tell individuals not to consider it. To each their own, as long as it’s within universal ethics (8).

A homeopathic community in Greece attempts to conduct an experiment demonstrating a biological effect using homeopathic medicine and win Randi’s million dollar challenge. George Vithoulkas and his team spend years creating the protocol of the study, only to be told by Randi to redo it from scratch. [29] (9) I recommend readers take a look at:

The facts about an ingenious homeopathic experiment that was not completed due to the “tricks” of Mr. James Randi.

Randi’s war against homeopathy is an ideological one (10). He’ll never change his mind despite positive results in and out of the lab (11). This is the epitome of dogmatic ideological thinking (12).

The same is true for acupuncture (13). In his NECSS 2012 talk Randi says:

Harvard Medical School is now offering an advanced course for physicians in acupuncture, which has been tested endlessly for centuries and it does not work in any way. And believe me, I know what I’m talking about. [30]

Acupuncture is somewhat of a grey area for mainstream scientists and the current scientific method. One ideological theory states that acupuncture operates on principles of non-physical energy in the human body and relieving pressure on specific meridians. The current scientific method is unable to account for non-physical human energy and meridians.

A mainstream scientific theory of acupuncture is one of neurophysiology, whereby acupuncture works by affecting the release of neurotransmitters. I don’t know which theory is correct; but I do know that those who do try acupuncture usually feel better (14).

In regards to the peer-reviewed literature, I believe (15) that there’s a publication bias against acupuncture being seen as a viable treatment for minor, moderate or chronic conditions. A few peer-reviewed articles support the use of acupuncture for various conditions:

Eight sessions of weekly group acupuncture compared with group oral care education provide significantly better relief of symptoms in patients suffering from chronic radiation-induced xerostomia. [31]

It is concluded that this study showed highly positive effects on pain and function through the collaborative treatment of acupuncture and motion style in aLBP [acute lower back pain] patients. [32]

Given the limited efficacy of antidepressant treatment…the present study provides evidence in supporting the viewpoint that acupuncture is an effective and safe alternative treatment for depressive disorders, and could be considered an alternative option especially for patients with MDD [major depressive disorder] and PSD [post-stroke depression], although evidence for its effects in augmenting antidepressant agents remains controversial. [33]

In conclusion: We find that acupuncture significantly relieves hot flashes and sleep disturbances in women treated for breast cancer. The effect was seen in the therapy period and at least 12 weeks after acupuncture treatment ceased. The effect was not correlated with increased levels of plasma estradiol. The current study showed no side effects of acupuncture. These results indicate that acupuncture can be used as an effective treatment of menopausal discomfort. [34]

In conclusion, the present study demonstrates, in rats, that EA [electroacupuncture] significantly attenuates bone cancer induced hyperalgesia, which, at least in part, is mediated by EA suppression of IL-1…expression. [35]

In animal model of focal cerebral ischemia, BBA [Baihui (GV20)-based Scalp acupuncture] could improve IV [infarct volume] and NFS [neurological function score]. Although some factors such as study quality and possible publication bias may undermine the validity of positive findings, BBA may have potential neuroprotective role in experimental stroke. [36]

In conclusion, this randomized sham-controlled study suggests that electroacupuncture at acupoints including Zusanli, Sanyinjiao, Hegu, and Zhigou is more effective than no acupuncture and sham acupuncture in stimulating early return of bowel function and reducing postoperative analgesic requirements after laparoscopic colorectal surgery. Electroacupuncture is also more effective than no acupuncture in reducing the duration of hospital stay. [37]

In conclusion, we found acupuncture to be superior to both no acupuncture control and sham acupuncture for the treatment of chronic pain…Our results from individual patient data meta-analyses of nearly 18000 randomized patients in high-quality RCTs [randomized controlled trials] provide the most robust evidence to date that acupuncture is a reasonable referral option for patients with chronic pain. [38]

While Randi and many other mainstream scientists will argue (16) that the above claims are the result of ideological science and cherry picking, in reality, they’re the result of good science going up against dogmatic (17) and profit-driven (17) ideological (17) science.

Yes, the alternative medicine industry is now a billion dollar industry. But the global pharmaceutical medical industry is worth hundreds of trillions of dollars. And without its patients (who need to be in a constant state of ill health), it can’t survive (18).

Individuals who have minor, moderate, or chronic medical conditions don’t want to be part of the hostile debate between alternative medicine vs. pharmaceutical medical science (19). They just want to get better and move on with their life. The constant war that mainstream scientists wage against alternative medicine is only hurting the people they’re supposed to be helping (20).

Yes, the ideologies (21) are incompatible. Yes, there are no accepted scientific theories for such treatments. Yes, it defies what mainstream scientists currently “know” about the human body (22).

It would be impressive if a peace treaty can exist between both sides, where both don’t agree, but respect each other enough to put aside their pride and help patients to regain their health (23).


And here are my numbered comments:

(1) This is not how I understand Randi’s position. Randi makes a powerful point about the fact that the assumptions of homeopathy are not plausible, which is entirely correct – so much so that even some leading homeopaths admit that this is true.

(2) This is definitely not correct; the placebo effect has been studied in much detail, and we can certainly ‘account’ for it.

(3) In my 40 years of researching homeopathy and talking to homeopaths, I have not met any homeopaths who “believe it’s a placebo effect”.

(4) There is no ‘placebo camp’ amongst homeopaths; so this is not a basis for an argument; it’s a fallacy.

(5) There very definitely are mainstream scientists, like F Benedetti, who research the placebo effect, and they certainly do not ‘downplay’ it. (What many people fail to understand is that, in placebo-controlled trials, one aims at controlling for the placebo effect; to a research-naïve person, this may indeed LOOK LIKE downplaying it. But this impression is wrong and merely reflects a lack of understanding.)

(6) No serious scientist attaches ‘ridicule and vilification’ to it.

(7) Who says so? I know only homeopaths who hold this opinion; and it is not evidence-based.

(8) Ethics demand that patients receive the best available treatment; homeopathy does not fall into this category.

(9) At one stage (more than 10 years ago), I was involved in the design of this test. My recollection of it is not in line with the report that is linked here.

(10) So far, we have seen no evidence for this statement.

(11) Which ones? No examples are provided.

(12) Yet another statement without evidence – potentially libellous.

(13) Conclusion before any evidence; a sign of a closed mind?

(14) This outcome could be entirely unrelated to acupuncture, as anyone who has a minimum of health care knowledge should know.

(15) We are not concerned with beliefs, we are concerned with facts here, aren’t we?

(16) But did they argue this? Where is the evidence to support this statement?

(17) Non-evidence-based accusations.

(18) Classic fallacy.

(19) The debate is not between alt med and ‘pharmaceutical science’, it is between those who insist on treatments which demonstrably generate more good than harm, and those who want alt med regardless of any such considerations.

(20) Warning consumers of treatments which fail to fulfil the above criterion is, in my view, an ethical duty which can save much money and many lives.

(21) Yes, alt med is clearly ideology-driven; by contrast, conventional medicine is not (if it were, Alli would have explained precisely what ideology it is). Conventional medicine changes all the time, sometimes even faster than we can cope with, and is mainly oriented towards evidence, which is not an ideology. Alt med hardly changes or progresses at all; for the most part, its ideology is that of a cult celebrating anti-science and obsolete traditions.

(22) Overt contradiction to what Alli just stated about acupuncture.

(23) To me, this seems rather nonsensical and a hindrance to progress.

In summary, I feel that Alli argues his corner very poorly. He makes statements without supporting evidence, issues lots of opinion without providing the facts (occasionally even hiding them), falls victim to logical fallacies, and demonstrates an embarrassing lack of knowledge and common sense. Most crucially, the text seems devoid of any critical analysis; to me, it seems like a bonanza of unreason.

To save Alli the embarrassment of arguing that I am biased or don’t know what I am talking about, I’d like to declare the following: I am not paid by ‘Big Pharma’ or anyone else, I am not aware of having any other conflicts of interest, I have probably published more research on alt med (some of it with positive conclusions !!!) than anyone else on the planet, my research was funded mostly by organisations/donors who were in favour of alt med, and I have no reason whatsoever to defend Randi (I only met him personally once). My main motivation for responding to Alli’s invitation to comment on his bizarre article is that I have fun exposing ‘alt med nonsense’ and believe it is a task worth doing.

A recent comment to a post of mine (by a well-known and experienced German alt med researcher) made the following bold statement aimed directly at me and at my apparent lack of understanding of research methodology:

C´mon , as researcher you should know the difference between efficacy and effectiveness. This is pharmacological basic knowledge. Specific (efficacy) + nonspecific effects = effectiveness. And, in fact, everything can be effective – because of non-specific or placebo-like effects. That does not mean that efficacy is existent.

The point he wanted to make is that outcome studies – studies without a control group in which researchers simply observe the outcome of a particular treatment in a ‘real life’ situation – suffice to demonstrate the effectiveness of therapeutic interventions. This belief is very widespread in alternative medicine and tends to mislead all concerned. It is therefore worth revisiting this issue here in an attempt to create some clarity.

When a patient’s condition improves after receiving a therapy, it is very tempting to feel that this improvement reflects the effectiveness of the intervention (as the researcher mentioned above obviously does). Tempting but wrong: there are many other factors involved as well, for instance:

  • the placebo effect (mainly based on conditioning and expectation),
  • the therapeutic relationship with the clinician (empathy, compassion etc.),
  • the regression towards the mean (outliers tend to return to the mean value),
  • the natural history of the patient’s condition (most conditions get better even without treatment),
  • social desirability (patients tend to say they are better to please their friendly clinician),
  • concomitant treatments (patients often use treatments other than the prescribed one without telling their clinician).

So, how does this fit into the statement above ‘Specific (efficacy) + nonspecific effects = effectiveness’? Even if this formula were correct, it would not mean that outcome studies of the nature described demonstrate the effectiveness of a therapy. It all depends, of course, on what we call ‘non-specific’ effects. We all agree that placebo-effects belong to this category. Probably, most experts also would include the therapeutic relationship and the regression towards the mean under this umbrella. But the last three points from my list are clearly not non-specific effects of the therapy; they are therapy-independent determinants of the clinical outcome.
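Regression towards the mean, in particular, is easy to demonstrate. Here is a minimal simulation (with made-up numbers): if we enrol only patients who score badly at baseline, their scores improve at the next visit even though nobody is treated at all.

```python
import random
import statistics

random.seed(1)

# Each 'patient' has a stable true symptom level, but every measurement
# carries random day-to-day noise.
N = 10_000
true_level = [random.gauss(50.0, 10.0) for _ in range(N)]
visit1 = [t + random.gauss(0.0, 10.0) for t in true_level]
visit2 = [t + random.gauss(0.0, 10.0) for t in true_level]

# Enrol only patients who scored badly (high symptoms) at visit 1 --
# which is exactly when people tend to seek treatment.
enrolled = [i for i in range(N) if visit1[i] > 65.0]

mean_v1 = statistics.mean(visit1[i] for i in enrolled)
mean_v2 = statistics.mean(visit2[i] for i in enrolled)
print(f"Visit 1 mean: {mean_v1:.1f}, visit 2 mean: {mean_v2:.1f}")
# Visit 2 scores are markedly lower although no treatment was given:
# extreme measurements drift back towards each patient's true mean.
```

An uncontrolled outcome study cannot distinguish this purely statistical ‘improvement’ from a genuine treatment effect – which is one more reason why such studies mislead.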

The most important factor here is usually the natural history of the disease. Some people find it hard to imagine what this term actually means. Here is a little joke which, I hope, will make its meaning clear and memorable.


Doc A: The patient from room 12 is much better today.

Doc B: Yes, we started his treatment just in time; a day later and he would have been cured without it!

I am sure that most of my readers now understand (and never forget) that clinical improvement cannot be equated with the effectiveness of the treatment administered (they might thus be immune to the misleading messages they are constantly exposed to). Yet, I am not at all sure that all ‘alternativists’ have got it.

Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?

Here is a brand new one which might stand for dozens of others.

In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA) and prospectively followed them over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).

The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out; one did not achieve an improvement of 80% and was therefore also counted as a treatment failure. The cost of homeopathic treatment was 41% of the projected equivalent conventional treatment.

Good news then for enthusiasts of homeopathy? 91% improvement!

Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… until I read the authors’ conclusions that is:

Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.

Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:

  1. How on earth can we take this and so many other articles on homeopathy seriously?
  2. When does this sort of article cross the line between wishful thinking and scientific misconduct?

Few subjects lead to such heated debate as the risk of stroke after chiropractic manipulations (if you think this is an exaggeration, look at the comment sections of previous posts on this subject). Almost invariably, one comes to the conclusion that more evidence would be helpful for arriving at firmer conclusions. Against this background, this new publication by researchers (mostly chiropractors) from the US ‘Dartmouth Institute for Health Policy & Clinical Practice’ is noteworthy.

The purpose of this study was to quantify the risk of stroke after chiropractic spinal manipulation, as compared to evaluation by a primary care physician, for Medicare beneficiaries aged 66 to 99 years with neck pain.

The researchers conducted a retrospective cohort analysis of a 100% sample of annualized Medicare claims data on 1 157 475 beneficiaries aged 66 to 99 years with an office visit to either a chiropractor or a primary care physician for neck pain. They compared the hazard of vertebrobasilar stroke and of any stroke at 7 and 30 days after the office visit using a Cox proportional hazards model, and they used direct adjusted survival curves to estimate the cumulative probability of stroke up to 30 days for the 2 cohorts.

The findings indicate that the proportion of subjects with a stroke of any type in the chiropractic cohort was 1.2 per 1000 at 7 days and 5.1 per 1000 at 30 days. In the primary care cohort, the proportion of subjects with a stroke of any type was 1.4 per 1000 at 7 days and 2.8 per 1000 at 30 days. In the chiropractic cohort, the adjusted risk of stroke was significantly lower at 7 days as compared to the primary care cohort (hazard ratio, 0.39; 95% confidence interval, 0.33-0.45), but at 30 days, a slight elevation in risk was observed for the chiropractic cohort (hazard ratio, 1.10; 95% confidence interval, 1.01-1.19).

The authors conclude that, among Medicare B beneficiaries aged 66 to 99 years with neck pain, incidence of vertebrobasilar stroke was extremely low. Small differences in risk between patients who saw a chiropractor and those who saw a primary care physician are probably not clinically significant.

I do, of course, applaud any new evidence on this rather ‘hot’ topic – but is it just me, or are the above conclusions a bit odd? Five strokes per 1000 patients is definitely not “extremely low” in my book; and furthermore I do wonder whether all experts would agree that a doubling of risk at 30 days in the chiropractic cohort is “probably not clinically significant” – particularly, if we consider that chiropractic spinal manipulation has so very little proven benefit.
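The arithmetic behind that remark is simple enough to spell out (using the crude 30-day proportions reported above, not the adjusted model):

```python
# Crude 30-day stroke proportions reported in the paper, per 1000 patients
chiro_30d = 5.1 / 1000
pcp_30d = 2.8 / 1000

crude_ratio = chiro_30d / pcp_30d
print(f"Crude 30-day risk ratio: {crude_ratio:.2f}")
# ~1.82, i.e. close to a doubling of the unadjusted risk, whereas the
# paper's covariate-adjusted hazard ratio at 30 days was only 1.10
# (95% CI 1.01-1.19) -- hence the room for disagreement over what counts
# as 'clinically significant'.
```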


On 1/12/2014 I published a post in which I offered to give lectures to students of alternative medicine:

Getting good and experienced lecturers for courses is not easy. Having someone who has done more research than most working in the field and who is internationally known might therefore be a thrill for students and an image-boosting experience for colleges. In the true Christmas spirit, I am today making the offer of being of assistance to the many struggling educational institutions of alternative medicine.

A few days ago, I tweeted about my willingness to give free lectures to homeopathic colleges (so far without response). Having thought about it a bit, I would now like to extend this offer. I would be happy to give a free lecture to the students of any educational institution of alternative medicine.

I did not think that this would create much interest – and I was right: only the ANGLO-EUROPEAN COLLEGE OF CHIROPRACTIC has so far hoist me by my own petard and, after some discussion (see the comment section of the original post), hosted me for a lecture. Several people seem keen on knowing how this went; so here is a brief report.

I was received, on 14/1/2015, with the utmost kindness by my host David Newell. We had a coffee and a chat, and then it was time to start the lecture. The hall was packed with ~150 students, and the same number was listening in a second lecture hall to which my talk was being transmitted.

We had agreed on the title CHIROPRACTIC: FALLACIES AND FACTS. So, after telling the audience about my professional background, I elaborated on 7 fallacies:

  1. Appeal to tradition
  2. Appeal to authority
  3. Appeal to popularity
  4. Subluxation exists
  5. Spinal manipulation is effective
  6. Spinal manipulation is safe
  7. Ad hominem attack

Numbers 3, 5 and 6 were dealt with in more detail than the rest. The organisers had asked me to finish by elaborating on what I perceive as the future challenges of chiropractic; so I did:

  1. Stop happily promoting bogus treatments
  2. Denounce obsolete concepts like ‘subluxation’
  3. Clarify differences between chiros, osteos and physios
  4. Start a culture of critical thinking
  5. Take action against charlatans in your ranks
  6. Stop attacking everyone who voices criticism

I ended by pointing out that the biggest challenge, in my view, was to “demonstrate with rigorous science which chiropractic treatments demonstrably generate more good than harm for which condition”.

We had agreed that my lecture would be followed by half an hour of discussion; this period turned out to be lively and had to be extended to a full hour. Most questions initially came from the tutors rather than the students, and most were polite – I had expected much more aggression.

In his email thanking me for coming to Bournemouth, David Newell wrote about the event: The general feedback from staff and students was one of relief that you possessed only one head, :-). I hope you may have felt the same about us. You came over as someone who had strong views, a fair amount of which we disagreed with, but that presented them in a calm, informative and courteous manner as we did in listening and discussing issues after your talk. I think everyone enjoyed the questions and debate and felt that some of the points you made were indeed fair critique of what the profession may need to do, to secure a more inclusive role in the health care arena.

As you may have garnered from your visit here, the AECC is committed to this task as we continue to provide the highest quality of education for the 21st C representatives of such a profession. We believe centrally that it is to our society at large and our communities within which we live and work that we are accountable. It is them that we serve, not ourselves, and we need to do that as best we can, with the best tools we have or can develop and that have as much evidence as we can find or generate. In this aim, your talk was important in shining a more ‘up close and personal’ torchlight on our profession and the tasks ahead whilst also providing us with a chance to debate the veracity or otherwise of yours and ours differing positions on interpretation of the evidence.

My own impression of the day is that some of my messages were not really understood, that some of the questions, including some from the tutors, seemed to come from a different planet, and that people were more intent on teaching me than on learning from my talk. One overall impression that I took home from that day is that, even in this college, which prides itself on being open to scientific evidence and unimpressed by chiropractic fundamentalism, students are strangely different from other health care professionals. The most tangible aspect of this was the openly hostile attitude toward drug therapies voiced during the discussion by some students.

The question I always ask myself after having invested a lot of time in preparing and delivering a lecture is: WAS IT WORTH IT? In the case of this lecture, I think the answer is YES. With 300 students present, I am fairly confident that I did manage to stimulate a tiny bit of critical thinking in a tiny percentage of them. The chiropractic profession needs this badly!


According to the ‘General Osteopathic Council’ (GOC), osteopathy is a primary care profession, focusing on the diagnosis, treatment, prevention and rehabilitation of musculoskeletal disorders, and the effects of these conditions on patients’ general health.

Using many of the diagnostic procedures applied in conventional medical assessment, osteopaths seek to restore the optimal functioning of the body, where possible without the use of drugs or surgery. Osteopathy is based on the principle that the body has the ability to heal, and osteopathic care focuses on strengthening the musculoskeletal systems to treat existing conditions and to prevent illness. 

Osteopaths’ patient-centred approach to health and well-being means they consider symptoms in the context of the patient’s full medical history, as well as their lifestyle and personal circumstances. This holistic approach ensures that all treatment is tailored to the individual patient.

On a good day, such definitions make me smile; on a bad day, they make me angry. I can think of quite a few professions which would fit this definition just as well or better than osteopathy. What are we supposed to think about a profession that is not even able to provide an adequate definition of itself?

Perhaps I should try a different angle: what conditions do osteopaths treat? The GOC informs us that commonly treated conditions include back and neck pain, postural problems, sporting injuries, muscle and joint deterioration, restricted mobility and occupational ill-health.

This statement seems not much better than the previous one. What on earth is ‘muscle and joint deterioration’? It is not a condition that I find in any medical dictionary or textbook. Can anyone think of a broader term than ‘occupational ill health’? This could be anything from tennis elbow to allergies or depression. Do osteopaths treat all of those?

One gets the impression that osteopaths and their GOC are deliberately vague – perhaps because this would diminish the risk of being held to account on any specific issue?

The more one looks into the subject of osteopathy, the more confused one gets. The profession goes back to Andrew Still (August 6, 1828 – December 12, 1917; Palmer, the founder of chiropractic, is said to have been one of Still’s pupils and seems to have ‘borrowed’ most of his concepts from him – even though he always denied this), who defined osteopathy as a science which consists of such exact exhaustive and verifiable knowledge of the structure and functions of the human mechanism, anatomy and physiology & psychology including the chemistry and physics of its known elements as is made discernable certain organic laws and resources within the body itself by which nature under scientific treatment peculiar to osteopathic practice apart from all ordinary methods of extraneous, artificial & medicinal stimulation and in harmonious accord with its own mechanical principles, molecular activities and metabolic processes may recover from displacements, derangements, disorganizations and consequent diseases and regain its normal equilibrium of form and function in health and strength.

This and many other of his statements seem to indicate that the art of using language for obfuscation has a long tradition in osteopathy and goes back directly to its founding father.

What makes the subject of osteopathy particularly confusing is not just the oddity that, in conventional medicine, the term means ‘disease of the bone’ (which renders any literature searches in this area a nightmare) but also the fact that, in different countries, osteopaths are entirely different professionals. In the US, osteopathy has long been fully absorbed by mainstream medicine and there is hardly any difference between MDs and DOs. In the UK, osteopaths are alternative practitioners regulated by statute but are, compared to chiropractors, of minor importance. In Germany, osteopaths are not regulated and fairly ‘low key’, while in France, they are numerous and like to see themselves as primary care physicians.

And what about the evidence base of osteopathy? Well, that’s even more confusing, in my view. Evidence for which treatment? As US osteopaths might use any therapy from drugs to surgery, it could get rather complicated. So let’s just focus on the manual treatment as used by osteopaths outside the US.

Anyone who attempts to critically evaluate the published trial evidence in this area will be struck by at least two phenomena:

  1. the wide range of conditions treated with osteopathic manual therapy (OMT)
  2. the fact that there are several groups of researchers who produce one positive result after another.

The best example is probably the exceedingly productive research team of J. C. Licciardone from the Osteopathic Research Center, University of North Texas. Here are a few conclusions from their clinical studies:

  1. The large effect size for OMT in providing substantial pain reduction in patients with chronic LBP of high severity was associated with clinically important improvement in back-specific functioning. Thus, OMT may be an attractive option in such patients before proceeding to more invasive and costly treatments.
  2. The large effect size for short-term efficacy of OMT was driven by stable responders who did not relapse.
  3. Osteopathic manual treatment has medium to large treatment effects in preventing progressive back-specific dysfunction during the third trimester of pregnancy. The findings are potentially important with respect to direct health care expenditures and indirect costs of work disability during pregnancy.
  4. Severe somatic dysfunction was present significantly more often in patients with diabetes mellitus than in patients without diabetes mellitus. Patients with diabetes mellitus who received OMT had significant reductions in LBP severity during the 12-week period. Decreased circulating levels of TNF-α may represent a possible mechanism for OMT effects in patients with diabetes mellitus. A larger clinical trial of patients with diabetes mellitus and comorbid chronic LBP is warranted to more definitively assess the efficacy and mechanisms of action of OMT in this population.
  5. The OMT regimen met or exceeded the Cochrane Back Review Group criterion for a medium effect size in relieving chronic low back pain. It was safe, parsimonious, and well accepted by patients.
  6. Osteopathic manipulative treatment slows or halts the deterioration of back-specific functioning during the third trimester of pregnancy.
  7. The only consistent finding in this study was an association between type 2 diabetes mellitus and tissue changes at T11-L2 on the right side. Potential explanations for this finding include reflex viscerosomatic changes directly related to the progression of type 2 diabetes mellitus, a spurious association attributable to confounding visceral diseases, or a chance observation unrelated to type 2 diabetes mellitus. Larger prospective studies are needed to better study osteopathic palpatory findings in type 2 diabetes mellitus.
  8. OMT significantly reduces low back pain. The level of pain reduction is greater than expected from placebo effects alone and persists for at least three months. Additional research is warranted to elucidate mechanistically how OMT exerts its effects, to determine if OMT benefits are long lasting, and to assess the cost-effectiveness of OMT as a complementary treatment for low back pain.

Based on this brief review of the evidence originating from one of the most active research teams, one could be forgiven for thinking that osteopathy is a panacea. But such an assumption is, of course, nonsensical; a more reasonable conclusion might be the following: osteopathy is one of the most confusing and confused subjects under the already confused umbrella of alternative medicine.

I know, it’s not really original to come up with the 10000th article on “10 things…” – but you will have to forgive me, I read so many of these articles over the holiday period that I can’t help but jump on the already over-crowded bandwagon and compose yet another one.

So, here are 10 things which could, if implemented, bring considerable improvement in 2015 to my field of inquiry, alternative medicine.

  1. Consumers need to get better at acting as bullshit (BS) detectors. Let’s face it, much of what we read or hear about this subject is utter BS. Yet consumers frequently lap up even the worst drivel as if it were some source of deep wisdom. They could save themselves so much money if they learnt to be just a little bit more critical.
  2. Dr Oz should focus on being a heart surgeon. His TV show has been demonstrated far too often to be promoting dangerous quackery. Yet as a heart surgeon, he actually might do some good.
  3. Journalists ought to remember that they have a job that extends well beyond their ambition to sell copy. They have a responsibility to inform the public truthfully and responsibly.
  4. Book publishers should abstain from churning out book after book that does little else but mislead the public about alternative medicine in a way that all too often is dangerous to the readers’ health. The world does not need the 1000th book repeating nonsense on detox, wellness etc.!
  5. Alternative practitioners must realise that claiming that therapy x cures condition y is not just slightly over-optimistic (or based on ‘years of experience’); if the claim is not based on sound evidence, it is what most people would call an outright lie.
  6. Proponents of alternative medicine should learn that it is neither fair nor productive to fiercely attack everyone personally who disagrees with their enthusiasm for this or that form of alternative medicine. In fact, it merely highlights the acute lack of rational arguments.
  7. Researchers of alternative medicine have to remember how important it is to think critically – an uncritical scientist is at best a contradiction in terms and at worst a pseudo-scientist who is likely to cause harm.
  8. Authorities should amass the courage, the political power and the financial means of going after those charlatans who ruthlessly exploit the public by making a fast and easy buck on the gullibility of consumers. Only if there is the likelihood of hefty fines will we see a meaningful decrease in the current epidemic of alternative health fraud.
  9. Politicians should realise that alternative medicine is not just a trivial subject with which one might win votes, if one issues platitudes to please the majority; alternative medicine is used by so many people that it has become an important public health issue.
  10. Prince Charles needs to learn how to control himself and abstain from meddling in health politics by using every conceivable occasion to promote what he thinks is ‘integrated medicine’ but which, in fact, can easily be exposed as quackery.

As you see, my list almost instantly turned into a wish-list, and the big questions that follow from it are:

  1. How could we increase the likelihood of these wishes coming true?
  2. And would there be anything left of alternative medicine, if all of these wishes miraculously became true in 2015?

I do not pretend to have the answers, but I do feel strongly that a healthy dose of critical thinking at all levels of education – from kindergartens to schools, from colleges to universities etc. – would be a good and necessary starting point.

I know, my list is not just a wish list, it also is a wishful thinking list. It would be hopelessly naïve to assume that major advances will be made in 2015. I am realistic, sometimes even quite pessimistic, about progress in alternative medicine. But this does not mean that I or anyone else should just give up. 2015 will be a year where at least one thing is certain: you will see me continuing my fight for reason, critical analysis, rational debate and good evidence – and that’s a promise!

As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):


A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.


The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.


Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).


Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
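For readers unfamiliar with how such pooled odds ratios and confidence intervals are arrived at, here is a minimal sketch of the standard fixed-effect, inverse-variance method. The per-trial values below are entirely hypothetical illustrations – the review’s actual trial-level data are not reproduced here:

```python
import math

# Hypothetical per-trial odds ratios with 95% CIs (illustrative only;
# NOT the trials from the Mathie et al review).
trials = [
    (1.30, 0.90, 1.88),  # (OR, CI lower, CI upper)
    (1.75, 1.10, 2.78),
    (1.50, 1.05, 2.14),
]

log_ors, weights = [], []
for or_, lo, hi in trials:
    log_or = math.log(or_)
    # Standard error recovered from the 95% CI on the log scale:
    # (ln(hi) - ln(lo)) / (2 * 1.96)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    log_ors.append(log_or)
    weights.append(1 / se**2)  # inverse-variance weight

pooled_log_or = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_or = math.exp(pooled_log_or)
ci_lo = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_hi = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"pooled OR = {pooled_or:.2f} (95% CI {ci_lo:.2f} to {ci_hi:.2f})")
```

An OR > 1 after pooling would, as in the abstract above, signify an effect favouring homeopathy – which is exactly why the choice of which trials (and which outcome measures) enter the pooling matters so much.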

Since my team had published an RCT of individualised homeopathy, it seems only natural that my interest focussed on why the study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.

I was convinced that this trial had been rigorous and was thus puzzled why, despite the trial receiving ‘full marks’ from the reviewers, they had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data provided by us would not lend themselves to meta-analyses. By selecting our secondary rather than our primary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to reject it from their meta-analysis.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO“. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).

By following rigidly their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol, which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical it even is the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analyses, it is most likely that their overall results would not have been in favour of homeopathy.
  • Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity-outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:

I have reason to believe that this review and meta-analysis is biased in favor of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analyses. Jacobs’ results were in favour of homeopathy, Walach’s were not.

For the domains where the rating of Walach’s study was less than that of the Jacobs study, please find citations from the original studies or my short summaries for the point in question.

Domain I: Sequence generation:

Walach (1997): “The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (Medium risk of bias)

Jacobs (1994): “For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (Low risk of bias)

Domain IIIb: Blinding of outcome assessor

Walach (1997): “The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study.”
Rating: UNCLEAR (Medium risk of bias)

Jacobs (1994): “All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (Low risk of bias)

Domain V: Selective outcome reporting

Walach (1997): The study protocol was published in 1991, prior to enrollment of participants; all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)

Jacobs (1994): No prior publication of the protocol, but a pilot study exists. However, this was published in 1993, only after the trial was performed in 1991. The primary outcome was defined (duration of diarrhea) and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Walach (1997):
Rating: NO (high risk of bias), no details given

Jacobs (1994): Imbalance of group properties (size, weight and age of children) that might have some impact on the course of disease; high impact of parallel therapy (rehydration), by far exceeding the effect size of homeopathic treatment.
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.


So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof for misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying. 

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.
