
The plethora of dodgy meta-analyses in alternative medicine has been the subject of a recent post – so this one is a mere update of a regular lament.

This new meta-analysis set out to evaluate the evidence for the effectiveness of acupuncture in the treatment of lumbar disc herniation (LDH). (Call me pedantic, but I prefer meta-analyses that evaluate the evidence FOR AND AGAINST a therapy.) Electronic databases were searched to identify RCTs of acupuncture for LDH, and 30 RCTs involving 3503 participants were included; 29 were published in Chinese and one in English, and all trialists were Chinese.

The results showed that acupuncture had a higher total effective rate than lumbar traction, ibuprofen, diclofenac sodium and meloxicam. Acupuncture was also superior to lumbar traction and diclofenac sodium in terms of pain measured with visual analogue scales (VAS). The total effective rate in 5 trials was greater for acupuncture than for mannitol plus dexamethasone and mecobalamin, ibuprofen plus fugui gutong capsule, loxoprofen, mannitol plus dexamethasone and huoxue zhitong decoction, respectively. Two trials showed a superior effect of acupuncture in VAS scores compared with ibuprofen or mannitol plus dexamethasone, respectively.

The authors from the College of Traditional Chinese Medicine, Jinan University, Guangzhou, Guangdong, China, concluded that acupuncture showed a more favourable effect in the treatment of LDH than lumbar traction, ibuprofen, diclofenac sodium, meloxicam, mannitol plus dexamethasone and mecobalamin, fugui gutong capsule plus ibuprofen, mannitol plus dexamethasone, loxoprofen and huoxue zhitong decoction. However, further rigorously designed, large-scale RCTs are needed to confirm these findings.

Why do I call this meta-analysis ‘dodgy’? I have several reasons, 10 to be exact:

  1. There is no plausible mechanism by which acupuncture might cure LDH.
  2. The types of acupuncture used in these trials were far from uniform and included manual acupuncture (MA) in 13 studies, electro-acupuncture (EA) in 10 studies, and warm needle acupuncture (WNA) in 7 studies. Arguably, these are different interventions that cannot be lumped together.
  3. The trials were mostly of very poor quality, as depicted in the table above. For instance, 18 studies failed to mention the methods used for randomisation. I have previously shown that some Chinese studies use the terms ‘randomisation’ and ‘RCT’ even in the absence of a control group.
  4. None of the trials made any attempt to control for placebo effects.
  5. None of the trials were conducted against sham acupuncture.
  6. Only 10 trials reported dropouts or withdrawals.
  7. Only two trials reported adverse reactions.
  8. None of these shortcomings were critically discussed in the paper.
  9. Despite their affiliation, the authors state that they have no conflicts of interest.
  10. All trials were conducted in China, and, on this blog, we have discussed repeatedly that acupuncture trials from China never report negative results.

And why do I find the journal ‘dodgy’?

Because any journal that publishes such a paper is likely to be sub-standard. In the case of ‘Acupuncture in Medicine’, the official journal of the British Medical Acupuncture Society, I see such appalling articles published far too frequently to believe that the present paper is just a regrettable, one-off mistake. What makes this issue particularly embarrassing is, of course, the fact that the journal belongs to the BMJ group.

… but we never really thought that science publishing was about anything other than money, did we?

What an odd title, you might think.

Systematic reviews are the most reliable evidence we presently have!

Yes, this is my often-voiced and honestly-held opinion but, like any other type of research, systematic reviews can be badly abused; and when this happens, they can seriously mislead us.

A new paper by someone who knows more about these issues than most of us, John Ioannidis from Stanford University, should make us think. It aimed to explore the growth of published systematic reviews and meta‐analyses and to estimate how often they are redundant, misleading, or serving conflicted interests. Ioannidis demonstrated that the publication of systematic reviews and meta‐analyses has increased rapidly. Between January 1, 1986, and December 4, 2015, PubMed tagged 266,782 items as “systematic reviews” and 58,611 as “meta‐analyses.” Annual publications between 1991 and 2014 increased by 2,728% for systematic reviews and 2,635% for meta‐analyses, versus only 153% for all PubMed‐indexed items. Ioannidis believes that probably more systematic reviews of trials than new randomized trials are now published annually. Most topics addressed by meta‐analyses of randomized trials have overlapping, redundant meta‐analyses; the number of same‐topic meta‐analyses sometimes exceeds 20.

Some fields produce massive numbers of meta‐analyses; for example, 185 meta‐analyses of antidepressants for depression were published between 2007 and 2014. These meta‐analyses are often produced either by industry employees or by authors with industry ties, and their results are aligned with sponsor interests. China has rapidly become the most prolific producer of English‐language, PubMed‐indexed meta‐analyses. The most massive presence of Chinese meta‐analyses is on genetic associations (63% of global production in 2014), where almost all results are misleading since they combine fragmented information from the largely abandoned era of candidate-gene studies. Furthermore, many contracting companies working on evidence synthesis receive industry contracts to produce meta‐analyses, many of which probably remain unpublished. Many other meta‐analyses have serious flaws. Of the remainder, most have weak or insufficient evidence to inform decision-making. Few systematic reviews and meta‐analyses are both non‐misleading and useful.

The author concluded that the production of systematic reviews and meta‐analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta‐analyses are unnecessary, misleading, and/or conflicted.

Ioannidis makes the following ‘Policy Points’:

  • Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta‐analyses. Instead of promoting evidence‐based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools.
  • Suboptimal systematic reviews and meta‐analyses can be harmful given the major prestige and influence these types of studies have acquired.
  • The publication of systematic reviews and meta‐analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.

Obviously, Ioannidis did not have alternative medicine in mind when he researched and published this article. But he easily could have! Virtually everything he stated in his paper does apply to it. In some areas of alternative medicine, things are even worse than Ioannidis describes.

Take TCM, for instance. I have previously looked at some of the many systematic reviews of TCM that currently flood Medline, based on Chinese studies. This is what I concluded at the time:

Why does that sort of thing frustrate me so much? Because it is utterly meaningless and potentially harmful:

  • I don’t know what treatments the authors are talking about.
  • Even if I managed to dig deeper, I cannot get the information because practically all the primary studies are published in obscure journals in Chinese language.
  • Even if I  did read Chinese, I do not feel motivated to assess the primary studies because we know they are all of very poor quality – too flimsy to bother.
  • Even if they were formally of good quality, I would have my doubts about their reliability; remember: 100% of these trials report positive findings!
  • Most crucially, I am frustrated because conclusions of this nature are deeply misleading and potentially harmful. They give the impression that there might be ‘something in it’, and that it (whatever ‘it’ might be) could be well worth trying. This may give false hope to patients and can send the rest of us on a wild goose chase.

So, to ease the task of future authors of such papers, I decided to give them a text for a proper EVIDENCE-BASED conclusion which they can adapt to fit every review. This will save them time and, more importantly perhaps, it will save everyone who might be tempted to read such futile articles the effort of studying them in detail. Here is my suggestion for a conclusion soundly based on the evidence, no matter what TCM subject the review is about:


On another occasion, I stated that I am getting very tired of conclusions stating ‘…XY MAY BE EFFECTIVE/HELPFUL/USEFUL/WORTH A TRY…’ It is obvious that the therapy in question MAY be effective, otherwise one would surely not conduct a systematic review. If a review fails to produce good evidence, it is the authors’ ethical, moral and scientific obligation to state this clearly. If they don’t, they simply misuse science for promotion and mislead the public. Strictly speaking, this amounts to scientific misconduct.

In yet another post on the subject of systematic reviews, I wrote that if you have rubbish trials, you can produce a rubbish review and publish it in a rubbish journal (perhaps I should have added ‘rubbish researchers’).

And finally this post about a systematic review of acupuncture: it is almost needless to mention that the findings (presented in a host of hardly understandable tables) suggest that acupuncture is of proven or possible effectiveness/efficacy for a very wide array of conditions. It also goes without saying that there is no critical discussion, for instance, of the fact that most of the included evidence originated from China, and that it has been shown over and over again that Chinese acupuncture research never seems to produce negative results.

The main point surely is that the problem of shoddy systematic reviews applies to a depressingly large degree to all areas of alternative medicine, and this is misleading us all.

So, what can be done about it?

My preferred (but sadly unrealistic) solution would be this:


Research is not fundamentally different from other professional activities; to do it well, one needs adequate training; and doing it badly can cause untold damage.

A few days ago, the German TV programme ‘FACT’ broadcast a film (it is in German; the bit on homeopathy starts at ~min 20) about a young woman who first had her breast cancer operated on but then decided to forgo subsequent conventional treatments. Instead, she chose homeopathy, which she received from Dr Jens Wurster at the ‘Clinica Sta Croce’ in Locarno, Switzerland.

Elsewhere Dr Wurster stated this: Contrary to chemotherapy and radiation, we offer a therapy with homeopathy that supports the patient’s immune system. The basic approach of orthodox medicine is to consider the tumor as a local disease and to treat it aggressively, what leads to a weakening of the immune system. However, when analyzing all studies on cured cancer cases it becomes evident that the immune system is always the decisive factor. When the immune system is enabled to recognize tumor cells, it will also be able to combat them… When homeopathic treatment is successful in rebuilding the immune system and reestablishing the basic regulation of the organism then tumors can disappear again. I’ve treated more than 1000 cancer patients homeopathically and we could even cure or considerably ameliorate the quality of life for several years in some, advanced and metastasizing cases.

The recent TV programme showed a doctor at this establishment confirming that homeopathy alone can cure cancer. Dr Wurster (who currently seems to be a star amongst European homeopaths) is seen lecturing at the 2017 World Congress of Homeopathic Physicians in Leipzig and stating that a ‘particularly rigorous study’ conducted by conventional scientists (the senior author is Harald Walach, hardly a conventional scientist in my book!) proved homeopathy to be effective for cancer. Specifically, he stated that this study showed that ‘homeopathy offers a great advantage in terms of quality of life even for patients suffering from advanced cancers’.

This study did, of course, interest me. So, I located it and had a look. Here is the abstract:


Many cancer patients seek homeopathy as a complementary therapy. It has rarely been studied systematically, whether homeopathic care is of benefit for cancer patients.


We conducted a prospective observational study with cancer patients in two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n = 259), and one cohort with conventionally treated cancer patients (CG; n = 380). For a direct comparison, matched pairs with patients of the same tumour entity and comparable prognosis were to be formed. Main outcome parameter: change of quality of life (FACT-G, FACIT-Sp) after 3 months. Secondary outcome parameters: change of quality of life (FACT-G, FACIT-Sp) after a year, as well as impairment by fatigue (MFI) and by anxiety and depression (HADS).


HG: FACT-G, or FACIT-Sp, respectively improved statistically significantly in the first three months, from 75.6 (SD 14.6) to 81.1 (SD 16.9), or from 32.1 (SD 8.2) to 34.9 (SD 8.32), respectively. After 12 months, a further increase to 84.1 (SD 15.5) or 35.2 (SD 8.6) was found. Fatigue (MFI) decreased; anxiety and depression (HADS) did not change. CG: FACT-G remained constant in the first three months: 75.3 (SD 17.3) at t0, and 76.6 (SD 16.6) at t1. After 12 months, there was a slight increase to 78.9 (SD 18.1). FACIT-Sp scores improved significantly from t0 (31.0 – SD 8.9) to t1 (32.1 – SD 8.9) and declined again after a year (31.6 – SD 9.4). For fatigue, anxiety, and depression, no relevant changes were found. 120 patients of HG and 206 patients of CG met our criteria for matched-pairs selection. Due to large differences between the two patient populations, however, only 11 matched pairs could be formed. This is not sufficient for a comparative study.


In our prospective study, we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment. It would take considerably larger samples to find matched pairs suitable for comparison in order to establish a definite causal relation between these effects and homeopathic treatment.


Even the abstract makes several points very clear, and the full text confirms further embarrassing details:

  • The patients in this study received homeopathy in addition to standard care (the patient shown in the film only had homeopathy until it was too late, and she subsequently died, aged 33).
  • The study compared A+B with B alone (A = homeopathy, B = standard care). It is hardly surprising that the additional attention of A leads to an improvement in quality of life; the little simulation after this list illustrates why such a design practically guarantees a positive result. It is arguably even unethical to conduct a clinical trial to demonstrate such an obvious outcome.
  • The authors of this paper caution that it is not possible to conclude that a causal relationship between homeopathy and the outcome exists.
  • This is true not just because of the small sample size, but also because the two groups had not been allocated randomly and are therefore bound to differ in a whole host of variables that have not been or cannot be measured.
  • Harald Walach, the senior author of this paper, held a position which was funded by Heel, Baden-Baden, one of Germany’s largest manufacturers of homeopathic remedies.
  • The H. W. & J. Hector Foundation, Germany, and the Samueli Institute provided the funding for this study.
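
To illustrate the ‘A+B versus B’ problem, here is a small simulation in Python. All numbers are invented purely for illustration and have nothing to do with the actual trial data; the only assumption is that the add-on treatment provides some non-specific benefit (extra attention, expectation, ritual). Under that assumption, the ‘A+B’ group must come out ahead, regardless of whether A has any specific effect:

import numpy as np

rng = np.random.default_rng(0)
n = 200  # patients per group; purely illustrative, not the study's sample size

# Hypothetical 3-month changes in quality of life (FACT-G-like scale).
# Both groups receive standard care (B) and improve a little on average...
change_b_only = rng.normal(1.0, 5.0, n)

# ...but the 'A+B' group additionally gets a non-specific boost from the extra
# attention and ritual of the add-on; no specific effect of A is assumed at all.
attention_boost = rng.normal(4.0, 5.0, n)
change_a_plus_b = rng.normal(1.0, 5.0, n) + attention_boost

print(f"mean QoL change, B alone: {change_b_only.mean():+.1f}")
print(f"mean QoL change, A + B:   {change_a_plus_b.mean():+.1f}")
# A+B 'wins' by construction, which is precisely why this design can tell us
# nothing about the specific effects of A.

In other words, the design is incapable of producing a negative result for the add-on therapy, no matter how useless the add-on might be.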

In the film, one of the co-authors of this paper, the oncologist HH Bartsch from Freiburg, states that Dr Wurster’s interpretation of this study is ‘dishonest’.

I am inclined to agree.

On this blog, we had many chiropractors commenting that their profession is changing fast and the old ‘philosophy’ is a thing of the past. But are these assertions really true? This survey might provide an answer. A questionnaire was sent to chiropractic students in all chiropractic programs in Australia and New Zealand. It explored student viewpoints about the identity, role/scope, setting, and future of chiropractic practice as it relates to chiropractic education and health promotion. Associations between the number of years in the program, highest degree preceding chiropractic education, institution, and opinion summary scores were evaluated by multivariate analysis of variance tests.

A total of 347 chiropractic students participated. For identity, most students (51.3%) hold strongly to the traditional chiropractic theory but also agree (94.5%) it is important that chiropractors are educated in evidence-based practice. The main predictor of student viewpoints was a student’s chiropractic institution. Chiropractic institution explained over 50% of the variance around student opinions about role/scope of practice and approximately 25% for identity and future practice.

The authors concluded that chiropractic students in Australia and New Zealand seem to hold both traditional and mainstream viewpoints toward chiropractic practice. However, students from different chiropractic institutions have divergent opinions about the identity, role, setting, and future of chiropractic practice, which is most strongly predicted by the institution. Chiropractic education may be a potential determinant of chiropractic professional identity, raising concerns about heterogeneity between chiropractic schools.

Traditional chiropractic theory is, of course, all the Palmeresque nonsense about ‘95% of all diseases are caused by subluxations of the spine’, etc. And evidence-based practice means knowing that subluxations are a figment of the chiropractic imagination.

Imagine a physician who believes in evidence and, at the same time, in the theory of the 4 humours determining our health.

Imagine a geologist thinking that the earth is flat and also spherical.

Imagine a biologist subscribing to both creationism and evolution.

Imagine a surgeon earning his livelihood with blood-letting and key-hole surgery.

Imagine a doctor believing in vital energy after having been taught physiology.

Imagine an airline pilot considering the use of flying carpets.

Imagine a chemist engaging in alchemy.

Imagine a Brexiteer who is convinced of doing the best for the UK.

Imagine a homeopath who thinks he practices evidence-based medicine.

Imagine a plumber with a divining rod.

Imagine an expert in infectious diseases believing in the miasma theory.

Imagine a psychic hoping to use her skills for winning a fortune on the stock market.


Once you have imagined all of these situations, I fear, you might know (almost) all worth knowing about chiropractic.

Clinical trials are a most useful tool, but they can easily be abused. It is not difficult to misuse them in such a way that even the most useless treatment appears to be effective. Sadly, this sort of thing happens all too often in the realm of alternative medicine. Take for instance this recently published trial of homeopathy.

The objective of this study was to investigate the usefulness of classical homeopathy for the prevention of recurrent urinary tract infections (UTI) in patients with spinal cord injury (SCI). Patients were admitted to this trial if they had chronic SCI and had previously suffered from at least three UTIs per year. They were treated either with a standardized prophylaxis alone, or with a standardized prophylaxis in combination with homeopathy. The number of UTIs, general and specific quality of life (QoL), and satisfaction with homeopathic treatment were assessed prospectively over the period of one year. Ten patients were in the control group and 25 patients received adjunctive homeopathic treatment. The median number of self-reported UTIs in the homeopathy group decreased significantly, whereas it remained unchanged in the control group. The ‘incontinence impact’ domain of the KHQ improved significantly, whereas general QoL did not change. Satisfaction with homeopathic care was high.

The authors concluded that adjunctive homeopathic treatment led to a significant decrease of UTIs in SCI patients. Therefore, classical homeopathy could be considered in SCI patients with recurrent UTI.

Where to begin?

Here are just some of the most obvious flaws of and concerns with this study:

  1. There is no plausible rationale to even plan such a study.
  2. The sample size was far too small to allow generalizable conclusions (see the rough power calculation after this list).
  3. There was no adequate randomisation, and patients were able to choose the homeopathy option.
  4. The study seems to lack objective outcome measures.
  5. The study design did not allow the researchers to control for non-specific effects; therefore, it seems likely that the observed outcomes are unrelated to the homeopathic treatments and are instead caused by placebo and other non-specific effects.
  6. Even if the study had been rigorous, we would need independent replications before we draw such definitive conclusions.
  7. Two of the authors are homeopaths, and it is in their clinics that the study took place.
  8. Some of the authors have previously published a very similar paper – except that this ‘case series’ included no control group at all.
  9. The latter paper seems to have been published more than once.
  10. Of this paper, one of the authors claimed that “the usefulness of classical homeopathy as an adjunctive measure for UTI prophylaxis in patients with NLUTD due to SCI has been demonstrated in a case series”. He seems to be unaware of the fact that a case series cannot possibly demonstrate any such thing.
  11. I do wonder: did they just add a control group to their case series thus pretending it became a controlled clinical trial?
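
For readers who wonder just how small this study is, here is a rough, purely illustrative power calculation using statsmodels. The assumed recurrence proportions are hypothetical, not taken from the trial; even under these rather optimistic assumptions, a trial of this kind would need on the order of a hundred patients per arm, not 10 and 25:

# Rough, purely illustrative power calculation for a two-arm trial comparing
# recurrence rates; the assumed proportions below are hypothetical.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control = 0.60  # assumed proportion with recurrent UTI under prophylaxis alone
p_treat = 0.40    # assumed (optimistic) proportion with the add-on treatment

effect = abs(proportion_effectsize(p_treat, p_control))  # Cohen's h
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                         power=0.80, alternative='two-sided')
print(f"roughly {n_per_arm:.0f} patients per arm needed")  # prints ~97
# Compare that with the 10 controls and 25 homeopathy patients actually studied.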

What strikes me most with such pseudo-research is its abundance and the naivety – or should I call it ignorance? – of the enthusiasts who conduct it. Most of them, I am fairly sure, do not mean to do harm; but by Jove they do!


Do chiropractors even know the difference between promotion and research?

Probably a rhetorical question.

Personally, I have seen them doing so much pseudo-research that I doubt they would recognise the real thing, even if they fell over it.

Here is a recent example that stands for many, many more such ‘research’ projects (some of which have been discussed on this blog).

But first a few sentences on the background of this new ‘study’.

The US chiropractic profession is currently on the ‘opioid over-use bandwagon’, hoping that this move might promote their trade. Most chiropractors have always been against using (any type of) pharmaceutical treatment and advise their patients accordingly. D D Palmer, the founder of chiropractic, was adamant that drugs are to be avoided; he stated, for instance, that “Drugs are delusive; they do not adjust anything.” And “as the Founder intended, chiropractic has existed as a drug-free healthcare profession for better than 120 years.” To this day, chiropractors are educated and trained to argue against drug treatments and regularly claim that chiropractic is a drug-free alternative to traditional medicine.

Considering this background, this new piece of (pseudo) research is baffling, in my view.

The objective of this investigation was to evaluate the association between utilization of chiropractic services and the use of prescription opioid medications. The authors used a retrospective cohort design to analyse health insurance claims data. The data source was the all payer claims database administered by the State of New Hampshire. The authors chose New Hampshire because health claims data were readily available for research, and in 2015, New Hampshire had the second-highest age-adjusted rate of drug overdose deaths in the United States.

The study population comprised New Hampshire residents aged 18-99 years, enrolled in a health plan, and with at least two clinical office visits within 90 days for a primary diagnosis of low-back pain. The authors excluded subjects with a diagnosis of cancer. They measured likelihood of opioid prescription fill among recipients of services delivered by chiropractors compared with a control group of patients not consulting a chiropractor. They also compared the cohorts with regard to rates of prescription fills for opioids and associated charges.

The adjusted likelihood of filling a prescription for an opioid analgesic was 55% lower among chiropractic compared to non-chiropractic patients. Average charges per person for opioid prescriptions were also significantly lower among the former group.

The authors concluded that among New Hampshire adults with office visits for noncancer low-back pain, the likelihood of filling a prescription for an opioid analgesic was significantly lower for recipients of services delivered by doctors of chiropractic compared with nonrecipients. The underlying cause of this correlation remains unknown, indicating the need for further investigation.

The underlying cause remains unknown???


Let me speculate, or even better, let me extrapolate by drawing an analogy:

Employees of a large hamburger chain set out to study the association between utilization of hamburger restaurant services and vegetarianism. The authors used a retrospective cohort design. The study population comprised New Hampshire residents aged 18-99 years who had entered the premises of a hamburger restaurant within 90 days for the primary purpose of eating. The authors excluded subjects with a diagnosis of cancer. They measured the likelihood of vegetarianism among recipients of services delivered by hamburger restaurants compared with a control group of individuals not using meat-dispensing facilities. They also compared the cohorts with regard to the money spent in hamburger restaurants.

The adjusted likelihood of being a vegetarian was 55% lower among the experimental group compared to controls. The average money spent per person in hamburger restaurants was also significantly lower among the hamburger group.

The authors concluded that among New Hampshire adults visiting Hamburger restaurants, the likelihood of vegetarianism was significantly lower for consumers frequenting Hamburger restaurants compared with those who failed to frequent such places. The underlying cause of this correlation remains unknown, indicating the need for further investigation.
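
In case the analogy is not persuasive enough, here is a toy simulation (every number is invented) of how self-selection alone can generate a large apparent ‘reduction’ in opioid use among chiropractic patients, even though, by construction, consulting a chiropractor has no causal effect on opioid use whatsoever:

import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # simulated back-pain patients; every number here is made up

# Latent attitude: aversion to drugs. It drives BOTH the decision to consult a
# chiropractor AND the refusal of an opioid prescription.
drug_averse = rng.random(n) < 0.4

# Choice of provider depends on attitude, not the other way round.
sees_chiro = np.where(drug_averse, rng.random(n) < 0.8, rng.random(n) < 0.15)

# Opioid fill depends ONLY on attitude; chiropractic has zero causal effect here.
fills_opioid = np.where(drug_averse, rng.random(n) < 0.10, rng.random(n) < 0.50)

p_chiro = fills_opioid[sees_chiro].mean()
p_other = fills_opioid[~sees_chiro].mean()
print(f"opioid fill rate, chiropractic patients: {p_chiro:.1%}")
print(f"opioid fill rate, other patients:        {p_other:.1%}")
print(f"apparent relative reduction:             {1 - p_chiro / p_other:.0%}")
# Prints a reduction in the region of 50-60%, although seeing a chiropractor
# changes nothing at all about opioid use in this model.

No adjustment for the usual variables available in a claims database can be expected to remove this kind of confounding by patient preference.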



The question whether spinal manipulative therapy (SMT) has any specific therapeutic effects is still open. This fact must irritate ardent chiropractors, and they therefore try everything to dispel our doubts. One way would be to demonstrate a dose-effect relationship between SMT and the clinical outcome. But, for several reasons, this is not an easy task.

This RCT aimed to identify the dose-response relationship between visits for SMT and chronic cervicogenic headache (CGH) outcomes, and to evaluate the efficacy of SMT by comparison with a light-massage control.

The study included 256 adults with chronic CGH. The primary outcome was days with CGH in the prior 4 weeks evaluated at the 12- and 24-week primary endpoints. Secondary outcomes included CGH days at remaining endpoints, pain intensity, disability, perceived improvement, medication use, and patient satisfaction. Participants were randomized to 4 different dose levels of chiropractic SMT: 0, 6, 12, or 18 sessions. They were treated 3 times per week for 6 weeks and received a focused light-massage control at sessions when SMT was not assigned. Linear dose effects and comparisons to the no-manipulation control group were evaluated at 6, 12, 24, 39, and 52 weeks.

A linear dose-response was observed for all follow-ups, a reduction of approximately 1 CGH day/4 weeks per additional 6 SMT visits (p<.05); a maximal effective dose could not be determined. CGH days/4 weeks were reduced from about 16 to 8 for the highest and most effective dose of 18 SMT visits. Mean differences in CGH days/4 weeks between 18 SMT visits and control were -3.3 (p=.004) and -2.9 (p=.017) at the primary endpoints, and similar in magnitude at the remaining endpoints (p<.05). Differences between other SMT doses and control were smaller in magnitude (p > .05). CGH intensity showed no important improvement nor differed by dose. Other secondary outcomes were generally supportive of the primary.

The authors concluded that there was a linear dose-response relationship between SMT visits and days with CGH. For the highest and most effective dose of 18 SMT visits, CGH days were reduced by half, and about 3 more days per month than for the light-massage control.

This trial would make sense, if the effectiveness of SMT for CGH had been a well-documented fact, and if the study had rigorously controlled for placebo-effects.

But guess what?

Neither of these conditions was met.

A recent review concluded that there are few published randomized controlled trials analyzing the effectiveness of spinal manipulation and/or mobilization for TTH, CeH, and M in the last decade. In addition, the methodological quality of these papers is typically low. Clearly, there is a need for high-quality randomized controlled trials assessing the effectiveness of these interventions in these headache disorders. And this is by no means the only article making such statements; similar reviews arrive at similar conclusions. In turn, this means that the effects observed after SMT are not necessarily specific effects due to SMT but could easily be due to placebo or other non-specific effects. In order to avoid confusion, one would need a credible placebo – one that closely mimics SMT – and make sure that patients were ‘blinded’. But ‘light massage’ clearly does not mimic SMT, and patients obviously were aware of which interventions they received.

So, an alternative – and I think at least as plausible – conclusion of the data provided by this new RCT is this:

Chiropractic SMT is associated with a powerful placebo response which, of course, obeys a dose-effect relationship. Thus these findings are in keeping with the notion that SMT is a placebo.
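
If you doubt that a pure placebo response can produce a neat dose-response relationship, consider this little toy model (all values are invented): every session produces only a non-specific benefit, but the more elaborate SMT ritual is assumed to produce a somewhat larger one than the light-massage control. The result is a linear ‘dose-response’ of roughly the reported magnitude, without any specific effect of SMT at all:

import numpy as np

rng = np.random.default_rng(2)
doses = np.array([0, 6, 12, 18])  # SMT sessions; the remainder of the 18 visits
                                  # are light-massage control sessions
n_per_arm = 60                    # illustrative group size, not the trial's

# Toy model with NO specific effect of SMT: every session yields only a
# non-specific (expectation/attention) benefit, but the SMT ritual yields a
# somewhat larger one than light massage (both values are made up).
benefit_smt = 0.25      # headache days saved per SMT session
benefit_massage = 0.10  # headache days saved per light-massage session
baseline_days = 16      # headache days per 4 weeks at entry

means = []
for d in doses:
    nonspecific = benefit_smt * d + benefit_massage * (18 - d)
    outcome = baseline_days - nonspecific + rng.normal(0, 3, n_per_arm)
    means.append(outcome.mean())
    print(f"{d:2d} SMT sessions: {outcome.mean():.1f} headache days / 4 weeks")

slope_per_6_visits = np.polyfit(doses, means, 1)[0] * 6
print(f"apparent dose-response: {slope_per_6_visits:.1f} days per additional 6 SMT visits")
# A clean linear 'dose-response' emerges although the model contains no specific
# SMT effect, only unequal placebo responses to unequally impressive rituals.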

And why would the researchers – who stress that they have no conflicts of interest – mislead us by failing to make this alternative interpretation of their findings abundantly clear?

I fear, the reason might be simple: they also seem to mislead us about their conflicts of interest: they are mostly chiropractors with a long track record of publishing promotional papers masquerading as research. What, I ask myself, could be a stronger conflict of interest?

(Pity that a high-impact journal like SPINE did not spot these [not so little] flaws)

Yesterday, a press release about our new book was distributed by our publisher. As I hope that many regular readers of my blog might want to read this book – if you don’t want to buy it, please get it via your library – I decided to re-publish the press release here:

Governments must legislate to regulate and restrict the sale of complementary and alternative therapies, conclude the authors of the new book More Harm Than Good?

Heidelberg, 20 February 2018

Commercial organisations selling lethal weapons or addictive substances clearly exploit customers, damage third parties and undermine genuine autonomy. Purveyors of complementary and alternative medicine (CAM) do too, argue authors Edzard Ernst and Kevin Smith.

The only downside to regulating such a controversial industry is that regulation could confer upon it an undeserved stamp of respectability and approval. At best, it can ensure the competent delivery of therapies that are inherently incompetent.

This is just one of the ethical dilemmas at the heart of the book. In all areas of healthcare, consumers are entitled to expect essential elements of medical ethics to be upheld. These include access to competent, appropriately-trained practitioners who base treatment decisions on evidence from robust scientific research. Such requirements are frequently neglected, ignored or wilfully violated in CAM.

“We would argue that a competent healthcare professional should be defined as one who practices or recommends plausible therapies that are supported by robust evidence,” says bioethicist Kevin Smith.

“Regrettably, the reality is that many CAM proponents allow themselves to be deluded as to the efficacy or safety of their chosen therapy, thus putting at risk the health of those who heed their advice or receive their treatment,” he says.

Therapies covered include homeopathy, acupuncture, chiropractic, iridology, Reiki, crystal healing, naturopathy, intercessory prayer, wet cupping, Bach flower therapy, Ukrain and craniosacral therapy. Their inappropriate use not only raises false hope and inflicts financial hardship on consumers, but can also be dangerous, either through direct harm or because patients fail to receive more effective treatment. For example, advice given by homeopaths to diabetic patients has the potential to kill them; and when anthroposophic doctors advise against vaccination, they can be held responsible for measles outbreaks.

There are even ethical concerns to subjecting such therapies to clinical research. In mainstream medical research, a convincing database from pre-clinical research is accumulated before patients are experimented upon. However, this is mostly not possible with CAM. Pre-scientific forms of medicine have been used since time immemorial, but their persistence alone does not make them credible or effective. Some are based on notions so deeply implausible that accepting them is tantamount to believing in magic.

“Dogma and ideology, not rationality and evidence, are the drivers of CAM practice,” says Professor Edzard Ernst.

Edzard Ernst, Kevin Smith
More Harm than Good?
1st ed. 2018, XXV, 223 p.
Softcover $22.99, €19,99, £15.99 ISBN 978-3-319-69940-0
Also available as an eBook ISBN 978-3-319-69941-7


As I already stated above, I hope you will read our new book. It offers something that has, I think, not been attempted before: it critically evaluates many aspects of alternative medicine by holding them to the ethical standards of medicine. Previously, we have often been asking WHERE IS THE EVIDENCE FOR THIS OR THAT CLAIM? In our book, we ask different questions: IS THIS OR THAT ASPECT OF ALTERNATIVE MEDICINE ETHICAL? Of course, the evidence question does come into this too, but our approach in this book is much broader.

The conclusions we draw are often surprising, sometimes even provocative.

Well, you will see for yourself (I hope).

Cranio-sacral therapy is, firstly, implausible and, secondly, lacking in evidence of effectiveness (see for instance here, here, here and here). Yet some researchers are not deterred from testing it in clinical trials. While this fact alone might be seen as embarrassing, the study below is a particular and personal embarrassment to me; in fact, I am shocked by it and write these lines with considerable regret.

Why? Bear with me, I will explain later.

The purpose of this trial was to evaluate the effectiveness of osteopathic manipulative treatment and osteopathy in the cranial field in temporomandibular disorders. Forty female subjects with temporomandibular disorders lasting at least three months were included. At enrollment, subjects were randomly assigned into two groups: (1) osteopathic manipulative treatment group (n=20) and (2) osteopathy in the cranial field [craniosacral therapy for you and me] group (n=20). Examinations were performed at baseline (E0) and at the end of the last treatment (E1), and consisted of subjective pain intensity with the Visual Analog Scale, Helkimo Index and SF-36 Health Survey. Subjects had five treatments, once a week. 36 subjects completed the study.

Patients in both groups showed significant reduction in Visual Analog Scale score (osteopathic manipulative treatment group: p = 0.001; osteopathy in the cranial field group: p< 0.001), Helkimo Index (osteopathic manipulative treatment group: p = 0.02; osteopathy in the cranial field group: p = 0.003) and a significant improvement in the SF-36 Health Survey – subscale “Bodily Pain” (osteopathic manipulative treatment group: p = 0.04; osteopathy in the cranial field group: p = 0.007) after five treatments (E1). All subjects (n = 36) also showed significant improvements in the above named parameters after five treatments (E1): Visual Analog Scale score (p< 0.001), Helkimo Index (p< 0.001), SF-36 Health Survey – subscale “Bodily Pain” (p = 0.001). The differences between the two groups were not statistically significant for any of the three endpoints.

The authors concluded that both therapeutic modalities had similar clinical results. The findings of this pilot trial support the use of osteopathic manipulative treatment and osteopathy in the cranial field as an effective treatment modality in patients with temporomandibular disorders. The positive results in both treatment groups should encourage further research on osteopathic manipulative treatment and osteopathy in the cranial field and support the importance of an interdisciplinary collaboration in patients with temporomandibular disorders. Implications for rehabilitation: Temporomandibular disorders are the second most prevalent musculoskeletal condition with a negative impact on physical and psychological factors. There are a variety of options to treat temporomandibular disorders. This pilot study demonstrates the reduction of pain, the improvement of temporomandibular joint dysfunction and the positive impact on quality of life after osteopathic manipulative treatment and osteopathy in the cranial field. Our findings support the use of osteopathic manipulative treatment and osteopathy in the cranial field and should encourage further research on osteopathic manipulative treatment and osteopathy in the cranial field in patients with temporomandibular disorders. Rehabilitation experts should consider osteopathic manipulative treatment and osteopathy in the cranial field as a beneficial treatment option for temporomandibular disorders.

This study has so many flaws that I don’t know where to begin. Here are some of the more obvious ones:

  • There is, as already mentioned, no rationale for this study. I can see no reason why craniosacral therapy should work for the condition. Without such a rationale, the study should never even have been conceived.
  • Technically, this RCT is an equivalence study comparing one therapy against another. As such, it would need to be much larger to generate a meaningful result, and it would also require a different statistical approach.
  • The authors mislabelled their trial a ‘pilot study’. However, a pilot study “is a preliminary small-scale study that researchers conduct in order to help them decide how best to conduct a large-scale research project. Using a pilot study, a researcher can identify or refine a research question, figure out what methods are best for pursuing it, and estimate how much time and resources will be necessary to complete the larger version, among other things.” It is not normally a study suited for evaluating the effectiveness of a therapy.
  • Any trial that compares one therapy of unknown effectiveness to another of unknown effectiveness is complete and utter nonsense. Equivalence studies can only ever make sense if one of the two treatments is of proven effectiveness – think of it as a mathematical equation: one equation with two unknowns is unsolvable.
  • Controlled studies such as RCTs are for comparing the outcomes of two or more groups, and only between-group differences are meaningful results of such trials.
  • The ‘positive results’ which the authors mention in their conclusions are meaningless because they are based on such within-group changes, and nobody can know what caused them: the natural history of the condition, regression towards the mean, placebo effects, or other non-specific effects – take your pick. (The little simulation after this list shows how easily such ‘improvements’ arise without any treatment effect at all.)
  • The conclusions are a bonanza of nonsensical platitudes and misleading claims which do not follow from the data.
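
To see how easily such ‘positive results’ arise, here is a small simulation in which the pain scores are entirely invented and neither ‘treatment’ does anything at all. Because patients enrol when their symptoms flare up, regression to the mean plus a little natural improvement produce ‘significant’ within-group changes in both arms, while the between-group comparison, the only one that could speak to efficacy, shows nothing:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 20  # patients per group, as in the trial; all pain scores are invented

def simulate_arm():
    # Stable 'true' pain plus day-to-day fluctuation; patients enrol during a
    # flare (baseline noise shifted upwards) and improve slightly over time
    # regardless of treatment -> regression to the mean plus natural history.
    true_pain = rng.normal(5.0, 1.0, n)                  # VAS 0-10
    baseline = true_pain + abs(rng.normal(0, 1.5, n))    # enrolled during a flare
    follow_up = true_pain - 0.5 + rng.normal(0, 1.5, n)  # mild natural improvement
    return baseline, follow_up

b1, f1 = simulate_arm()  # 'osteopathic manipulative treatment' (does nothing here)
b2, f2 = simulate_arm()  # 'osteopathy in the cranial field' (does nothing here)

print("within-group p-values:", stats.ttest_rel(b1, f1).pvalue, stats.ttest_rel(b2, f2).pvalue)
print("between-group p-value:", stats.ttest_ind(b1 - f1, b2 - f2).pvalue)
# Typically both sham arms 'improve significantly', yet the between-group
# comparison shows no difference, which is exactly the pattern reported above.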

As regular readers of this blog will doubtlessly have noticed, I have seen plenty of similarly flawed pseudo-research before – so, why does this paper upset me so much? The reason is personal, I am afraid: even though I do not know any of the authors in person, I know their institution more than well. The study comes from the Department of Physical Medicine and Rehabilitation, Medical University of Vienna, Austria. I was head of this department before I left in 1993 to take up the Exeter post. And I had hoped that, even after 25 years, a bit of the spirit, attitude, knowhow, critical thinking and scientific rigor – all of which I tried so hard to implant in my Viennese department at the time – would have survived.

Perhaps I was wrong.

Difficulties breastfeeding?

Some say that Chinese herbal medicine offers a solution.

This Chinese multi-centre RCT included 588 mothers considering breastfeeding. The intervention group received the Chinese herbal mixture Zengru Gao, while the control group received no therapy. The primary outcomes were the percentages of fully and partially breastfeeding mothers, and a secondary outcome was baby’s daily formula intake.

At days 3 and 7 after delivery, significant differences were found in favour of the Zengru Gao group in the percentage of full/partial breastfeeding. At day 7, the percentage of full/partial breastfeeding in the active group had increased to 71.48%/20.70%, versus 58.67%/30.26% in the control group; the differences remained significant. No statistically significant differences were detected on primary measures at day. Formula intake differed between the groups at days 1 and 3 without reaching statistical significance, but the difference was apparent by day 7.

The authors concluded that the Chinese Herbal medicine Zengru Gao enhanced breastfeeding success during one week postpartum. The approach is acceptable to participants and merits further evaluation.

To the naïve observer, this study might look rigorous, but it is a seriously flawed RCT. Here are just some of its most obvious limitations:

  • All we get in the methods section is this explanation: Participants were randomly allocated to the blank control group or the intervention group: Zengru Gao, orally, 30 g a time and 3 times a day. This seems to indicate that the control group got no treatment at all, which means there was neither blinding nor a placebo control. The authors even comment on this point in the discussion section of their paper, stating that because we included new mothers who received no treatment as a control group, we were able to prove that the improvement in breastfeeding was not due to the placebo effect. However, this is a totally nonsensical argument.
  • The experimental treatment is not reproducible. The authors state: Zengru Gao, a Chinese herbal formula, which is composed of 8 herbs: Semen Vaccariae, Medulla Tetrapanacis, Radix Rehmanniae Praeparata, Radix Angelicae Sinensis, Radix Paeoniae Alba,Rhizoma Chuanxiong, Herba Leonuri, Radix Trichosanthis. This is not enough information to replicate the study outside China where the mixture is not commercially available.
  • The primary outcome was the percentage of fully and partially breastfeeding mothers. Breastfeeding was defined as mother’s milk given by direct breast feeding. Full breastfeeding meant that no other types of milk or solids were given. Partial breastfeeding meant a sustained latch with deep rhythmic sucking through the length of the feed, with some pause, on either or both breasts. We are not told how this endpoint was quantified. Presumably the women kept diaries; we can only guess how accurate this process was.
  • As far as I can see, there was no correction for multiple testing of statistical significance (see the little illustration after this list). This means that some or all of the ‘significant’ results might be false positives.
  • There is insufficient data to show that the herbal mixture is safe for the mothers and the babies. At the very minimum, the researchers should have measured essential safety parameters. This omission is a gross violation of research ethics.
  • Towards the end of the paper, we find the following statement: The authors would like to thank the Research and Development Department of Zhangzhou Pien Tze Huang Pharmaceutical co., Ltd. … The authors declare that they have no competing interests. And the 1st and 3rd authors are “affiliated with” Guangzhou Hipower Pharmaceutical Technology Co., Ltd, Guangzhou, China, i.e. they work for the manufacturer of the mixture. This clearly does not make any sense whatsoever.
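
To illustrate the multiple-testing point: the p-values below are invented, but they show how quickly ‘significant’ findings evaporate once a standard correction (here the Holm procedure, via statsmodels) is applied, and why a handful of uncorrected tests at p < 0.05 is almost guaranteed to throw up a few false positives:

# Illustration only: these p-values are invented, not taken from the paper.
from statsmodels.stats.multitest import multipletests

# Suppose the trialists ran 10 significance tests (several endpoints x time points).
p_values = [0.004, 0.012, 0.030, 0.041, 0.049, 0.11, 0.23, 0.35, 0.52, 0.74]

uncorrected_hits = sum(p < 0.05 for p in p_values)
reject_holm, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method='holm')

print("'significant' results without correction:", uncorrected_hits)        # 5
print("significant after Holm correction:       ", int(reject_holm.sum()))  # 1
# With 10 tests of true null hypotheses, the chance of at least one p < 0.05
# is 1 - 0.95**10, i.e. about 40%, so uncorrected 'significance' means little.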

I have seen too many flawed studies of alternative medicine to be shocked or even surprised by this level of incompetence and nonsense. Yet, I still find it lamentable. But, in my view, the worst is that supposedly peer-reviewed journals such as ‘BMC Complement Altern Med’ publish such overt rubbish.

It would be easy to shrug one’s shoulder and bin the paper. But the effect of such fatally flawed research is too serious for that. In our recent book MORE HARM THAN GOOD? THE MORAL MAZE OF COMPLEMENTARY AND ALTERNATIVE MEDICINE, we discuss that such flawed science amounts to a violation of medical ethics:  CAM journals allocate peer review tasks to a narrow range of CAM enthusiasts who often have been chosen by the authors of the article in question. The raison d’être of CAM journals and CAM researchers is inextricably tied to a belief in CAM, resulting in a self-referential situation which is permissive to the acceptance of weak or flawed reports of clinical effectiveness… Defective research—whether at the design, execution, analysis, or reporting stage—corrupts the repository of reliable medical knowledge. Ultimately, this leads to suboptimal and erroneous treatment decisions…
