
Osteopathic manipulative treatment (OMT) is frequently recommended by osteopaths for improving breastfeeding. But does it work?

This double-blind randomised clinical trial tested whether OMT was effective for facilitating breastfeeding. Breastfed term infants were eligible if one of the following criteria was met:

  • suboptimal breastfeeding behaviour,
  • maternal cracked nipples,
  • maternal pain.

The infants were randomly assigned to the intervention or the control group. The intervention consisted of two sessions of early OMT, while in the control group, the manipulations were performed on a doll behind a screen. The primary outcome was the exclusive breastfeeding rate at 1 month, which was assessed in an intention-to-treat analysis. Randomisation was computer generated and only accessible to the osteopath practitioner. The parents, research assistants and paediatricians were masked to group assignment.

One hundred twenty-eight mother-infant dyads were randomised, with 64 assigned to each group. In each group, five infants were lost to follow-up. In the intervention group, 31 of 59 infants (53%) were still exclusively breastfed at 1 month vs 39 of 59 (66%) in the control group. After adjustment for suboptimal breastfeeding behaviour, caesarean section, use of supplements and breast shields, the adjusted OR was 0.44. No adverse effects were reported in either group.
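For readers who want to check the arithmetic, the crude odds ratio can be computed directly from the published counts; note that the 0.44 reported in the paper is the covariate-adjusted value, so the crude figure below differs. A minimal sketch in Python:

```python
# Exclusive breastfeeding at 1 month, from the published counts.
omt_yes, omt_no = 31, 59 - 31    # intervention group: 31/59 still exclusive
ctrl_yes, ctrl_no = 39, 59 - 39  # control group: 39/59 still exclusive

# Crude (unadjusted) odds ratio; the paper's 0.44 is adjusted for covariates.
unadjusted_or = (omt_yes / omt_no) / (ctrl_yes / ctrl_no)
print(f"unadjusted OR = {unadjusted_or:.2f}")  # ≈ 0.57
```

An OR below 1 means the odds of exclusive breastfeeding were lower with OMT; adjustment for the listed covariates moved the estimate further from 1.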

The authors concluded dryly that OMT did not improve exclusive breastfeeding at 1 month.

This is a rigorous trial with clear and expected results. It was conducted in cooperation with a group of 7 French osteopaths, and the study was sponsored by the ‘Société Européenne de Recherche en Osthéopathie Périnatale et Pédiatrique’, the ‘Fonds pour la Recherche en Ostéopathie’ and ‘Formation et Recherche Ostéopathie et Prévention’. The researchers need to be congratulated on publishing this trial and expressing the results so clearly despite the fact that the findings were not what the osteopaths had hoped for.

Three questions come to my mind:

  1. Are any of the many therapeutic recommendations made by osteopaths valid?
  2. Why was it ever assumed that OMT would be effective?
  3. Do we really have to test every weird assumption before we can dismiss it?

The authors of this study claim that, in the aging brain, reduction in the pulsation of cerebral vasculature and fluid circulation causes impairment in the fluid exchange between different compartments and lays a foundation for the neuroinflammation that results in Alzheimer disease (AD). The knowledge that lymphatic vessels in the central nervous system play a role in the clearance of brain-derived metabolic waste products opens an unprecedented capability to increase the clearance of macromolecules such as amyloid β proteins. However, currently, there is no pharmacologic mechanism available to increase fluid circulation in the aging brain.

Based on these considerations, the authors conducted a study to demonstrate the influence of an osteopathic cranial manipulative medicine (OCMM) technique, specifically compression of the fourth ventricle (CV4), on spatial memory and on changes in substrates associated with mechanisms of metabolic waste clearance in the central nervous system, using the naturally aged rat model of AD.

The rats in the OCMM group received the CV4 technique every day for 7 days, for 4 to 7 minutes at each session. Rats were anesthetized with 1.5% to 3% isoflurane throughout the procedure. Rats in the untreated (UT) group were also anesthetized, to nullify any influence of isoflurane on spatial learning. During the CV4 procedure, the operator applied mechanical pressure over the rat’s occiput, medial to the junction of the occiput and temporal bone and inferior to the lambdoid suture, to place tension on the dural membrane around the fourth ventricle. This gentle pressure was applied to resist cranial flexion with the aim of improving symmetry in the cranial rhythmic impulse (CRI), initiating a rhythmic fluctuation of the CSF, and improving mobility of the cranial bones and dural membranes. This rhythmic fluctuation is thought to be primarily due to flexion and extension that takes place at the synchondrosis between the sphenoid and basiocciput. The treatment end point was achieved when the operator identified that the tissues relaxed, a still point was reached, and improved symmetry or fullness of the CRI was felt. Currently, there is no quantitative measure for the pressure used in this treatment.

The results showed a significant improvement in spatial memory in 6 rats after 7 days of OCMM sessions. Live animal positron emission tomographic imaging and immunoassays revealed that OCMM reduced amyloid β levels, activated astrocytes, and improved neurotransmission in the aged rat brains.

The authors concluded that these findings demonstrate the molecular mechanism of OCMM in aged rats. This study and further investigations will help physicians promote OCMM as an evidence-based adjunctive treatment for patients with AD.

If there ever was an adventurous, over-optimistic extrapolation, this must be it!

Even assuming that all of the findings can be confirmed and replicated, they would still fall far short of rendering OCMM an evidence-based treatment for AD:

  • Rats are not humans.
  • Aged rats do not have AD.
  • OCMM is not a plausible treatment.
  • An animal study is not a clinical trial.

I am at a complete loss to see how the findings of this bizarre animal experiment might help physicians promote OCMM as an evidence-based adjunctive treatment for patients with AD.

The aim of this paper was to synthesize the most recent evidence investigating the effectiveness and safety of therapeutic touch (TT) as a complementary therapy in clinical health applications.
A rapid evidence assessment (REA) approach was used to review recent TT research adopting PRISMA 2009 guidelines. CINAHL, PubMed, MEDLINE, Cochrane databases, Web of Science, PsycINFO and Google Scholar were screened for studies published between January 2009 and March 2020 exploring TT therapies as an intervention. The main outcome measures were pain, anxiety, sleep, nausea and functional improvement.
Twenty-one studies covering a range of clinical issues were identified, including 15 randomized trials, four quasi-experimental studies, one chart review, and one mixed-methods study, together including 1,302 patients. Eighteen of the studies reported positive outcomes. Only four exhibited a low risk of bias; the others had serious methodological flaws and bias issues, were statistically underpowered, and were rated as low-quality studies. No high-quality evidence was found for any of the benefits claimed.

The authors offer the following conclusions:

After 45 years of study, scientific evidence of the value of TT as a complementary intervention in the management of any condition still remains immature and inconclusive:

  • Given the mixed result, lack of replication, overall research quality, and significant issues of bias identified, there currently exists no good-quality evidence that supports the implementation of TT as an evidence‐based clinical intervention in any context.
  • Research over the past decade exhibits the same issues as earlier work, with highly diverse poor quality unreplicated studies mainly published in alternative health media.
  • As the nature of human biofield energy remains undemonstrated, and that no quality scientific work has established any clinically significant effect, more plausible explanations of the reported benefits are from wishful thinking and use of an elaborate theatrical placebo.

These are clear and much-needed words addressed to nurses (the paper was published in a nursing journal). Nurses have been oddly fond of TT. Therefore, it seems important to send evidence-based information in their direction. In my recent book, I arrived at similar conclusions about TT:

  1. The assumptions that form the basis for TT are not biologically plausible.
  2. Several trials and reviews of TT have emerged. However, many of them are by ardent proponents of TT, seriously flawed, and thus less than reliable (e.g. [1], [2]).
  3. One rigorous pre-clinical study, co-designed by a 9-year-old girl, found that experienced TT practitioners were unable to detect the investigator’s “energy field.” Their failure to substantiate TT’s most fundamental claim is unrefuted evidence that the claims of TT are groundless and that further professional use is unjustified. [3]
  4. There are no reasons to assume that TT causes direct harm. One could, however, argue that, like all forms of paranormal healing, it undermines rational thinking.




In the world of homeopathy, Prof Michael Frass is a famous man. He is the First Chairman of the Scientific Society for Homeopathy (WissHom), the president of the Umbrella organization of Austrian Doctors for Holistic Medicine, and the Vice-President of the Doctors Association for Classical Homeopathy. Frass has featured on this blog before, not least because he has published numerous studies of homeopathy, none of which has ever failed to produce a positive result.

This is not just remarkable, in my view, it defies logic and the laws of nature. Even if homeopathy were a supremely effective therapy – a very broad consensus holds that it is not! – one would occasionally expect some negative results. No treatment works under all circumstances

… that is no treatment except homeopathy, according to Frass.

Recently Frass amazed even the world of oncology by publishing a study suggesting that homeopathy can prolong the survival of lung cancer patients. Every oncologist I know was flabbergasted.

Can this be true? This is the question many people have been asking for some time in relation to Frass’s research.

In my quest to shine more light on it, I was recently alerted to an article by the formidable Austrian investigative journalist, Alwin Schönberger. In 2015, he came across a press release announcing that “HOMEOPATHY HAD BEEN PROVEN TO WORK AFTER ALL” (strikingly similar to one issued in 2018). It came from Austria’s leading manufacturer of homeopathic products, which was giving an award to an apparently outstanding thesis supervised by Frass. Even today, this piece of research has not been published in the peer-reviewed literature.

Yet, after some difficulties, Schönberger managed to obtain a copy. What he found was surprising, and he thus published his findings in the respected Austrian journal ‘Profil’ (2. Mai 2015 • profil 22).

Frass’s student had been given the task to systematically review all the homeopathy trials published between 2008 and 2012. Contrary to the hype of the press release, the meta-analysis merely suggested a very small effect. When digging deeper, Schönberger found several inconsistencies and mistakes in the analysis. They all were such that they produced a false-positive picture for homeopathy. Upon their correction, homeopathy turned out to be no longer significantly superior to placebo. Frass was then interviewed about it and claimed that the inconsistencies were only ‘errors’ but insisted that homeopathy is not a placebo therapy.

Yes, of course, errors happen in research. But if they all go in one direction and if that direction coincides with the interests of the researchers, we have the right, perhaps even the duty, to be suspicious. The questions that arise from this story are, I think, as follows:

  • Have the errors been corrected?
  • Are there perhaps other errors in Frass’s research?
  • Can we trust anything that Frass says?
  • Is it time to consider an official investigation into Frass’s studies of homeopathy?



There are plenty of people who find it hard to accept that highly diluted homeopathic remedies are placebos. They religiously believe in the notion that homeopathy works and studiously ignore the overwhelming evidence (plus a few laws of nature). Yet, they pretend to staunchly believe in science and keep on conducting (pseudo?) scientific studies of homeopathy. To me, this seems oddly schizophrenic because, on the one hand, they seem to accept science by conducting trials, while, on the other hand, they reject science by negating the scientific consensus.

The objective of this recent study was to evaluate the quality of life (QoL) of women treated with homeopathy within the Public Health System of Belo Horizonte, Brazil.

The study was designed as a prospective randomized controlled pragmatic trial. The patients were divided into two independent groups, one group underwent homeopathic treatment during a 6-month period, while the other did not receive any homeopathic treatment. In both randomized groups, patients maintained their conventional medical treatment as necessary. The World Health Organization Quality of Life abbreviated questionnaire (WHOQOL-BREF) was used for QoL analysis prior to treatment and 6 months later.

Randomization was successful in that it resulted in similar baseline results in three domains of QoL analysis for both groups. After 6 months’ treatment, the investigators noted a statistically significant difference between groups in the physical domain of WHOQOL-BREF: the average score improved to 63.6 ± (SD) 15.8 in the homeopathy group, compared with 53.1 ± (SD) 16.7 in the control group.

The authors concluded that homeopathic treatment showed a positive impact at 6 months on the QoL of women with chronic diseases. Further studies should be performed to determine the long-term effects of homeopathic treatment on QoL and its determinant factors.

I would not be surprised if the world of homeopathy were to celebrate this trial as yet another proof that homeopathy is effective. I am afraid, however, that I might have to put a damper on their excitement.


Why not?

Regular readers of this blog will have already guessed it: the trial follows the infamous ‘A+B versus B’ design. Some people will think that I am obsessed with this theme – but I am not; it’s just that, in SCAM, it comes up with such depressing regularity. And as this blog is mainly about commenting on newly published research, I am unable to avoid the subject.

So, let me explain it again.

Think of it in monetary terms: you have an amount X, your friend has the same amount X plus an extra sum Y. Who do you think has more money? You don’t need to be a genius to guess, do you?

The same happens in the above ‘A+B versus B’ trial:

  • the patients in group 1 received homeopathy (A) plus usual care (B);
  • the patients in group 2 received usual care (B) and nothing else.

You don’t need to be a genius to guess who might have the better outcomes.

Because of homeopathy?

No! Because of the patients’ expectation, the placebo effect, and the extra attention of the homeopaths. They call this trial design ‘pragmatic’. I feel it is an attempt to mislead the public.

So, allow me to re-write the authors’ conclusion as follows:

The effect of a homeopathic consultation and the administration of a placebo generated a positive impact at 6 months on the QoL of women with chronic diseases. This was entirely predictable and totally unrelated to homeopathy. Further studies to determine the long-term effects of homeopathic treatment on QoL and its determinant factors are not needed.
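The arithmetic of the ‘A+B versus B’ design can be made quantitative with a toy simulation. All numbers below are invented for illustration: the treatment is given, by construction, zero specific effect, only a hypothetical placebo/attention effect, and yet the ‘A+B’ group wins almost every time:

```python
import random

random.seed(0)

def simulate_trial(n_per_group=100, placebo_effect=5.0, specific_effect=0.0):
    """One simulated 'A+B versus B' trial with invented numbers.

    Outcomes are improvements on some subjective scale. Both groups share
    the same natural course (mean 10, SD 15); the A+B group additionally
    receives a placebo/attention effect but NO specific treatment effect.
    Returns the between-group difference in mean improvement.
    """
    b_only = [random.gauss(10, 15) for _ in range(n_per_group)]
    a_plus_b = [random.gauss(10 + placebo_effect + specific_effect, 15)
                for _ in range(n_per_group)]
    return sum(a_plus_b) / n_per_group - sum(b_only) / n_per_group

# Even with a zero specific effect, 'A+B' beats 'B' in nearly every trial.
wins = sum(simulate_trial() > 0 for _ in range(1000)) / 1000
print(f"'A+B' beat 'B' alone in {wins:.0%} of simulated trials")
```

The design guarantees a positive-looking result as long as the added ritual produces any non-specific effect at all, which is precisely the objection raised above.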


This study was aimed at determining the effectiveness of electroacupuncture or auricular acupuncture for chronic musculoskeletal pain in cancer survivors.

The Personalized Electroacupuncture vs Auricular Acupuncture Comparative Effectiveness (PEACE) trial is a randomized clinical trial that was conducted from March 2017 to October 2019 (follow-up completed April 2020) at an urban academic cancer center and 5 suburban sites in New York and New Jersey. Study statisticians were blinded to treatment assignments. The 360 adults included in the study had a prior cancer diagnosis but no current evidence of disease and reported musculoskeletal pain for at least 3 months; pain intensity was self-reported on the Brief Pain Inventory (BPI), ranging from 0 (no pain) to 10 (worst pain imaginable).

Patients were randomized 2:2:1 to:

  1. electroacupuncture (n = 145),
  2. auricular acupuncture (n = 143),
  3. or usual care (n = 72).

Intervention groups received 10 weekly sessions of electroacupuncture or auricular acupuncture. Ten acupuncture sessions were offered to the usual care group from weeks 12 through 24.

The primary outcome was a change in the average pain severity score on the BPI from baseline to week 12. Using a gatekeeping multiple-comparison procedure, electroacupuncture and auricular acupuncture were compared with usual care using a linear mixed model. Noninferiority of auricular acupuncture to electroacupuncture was tested if both interventions were superior to usual care.

Among 360 cancer survivors (mean [SD] age, 62.1 [12.7] years; mean [SD] baseline BPI score, 5.2 [1.7] points; 251 [69.7%] women; and 88 [24.4%] non-White), 340 (94.4%) completed the primary end point. Compared with usual care, electroacupuncture reduced pain severity by 1.9 points (97.5% CI, 1.4-2.4 points; P < .001) and auricular acupuncture reduced pain severity by 1.6 points (97.5% CI, 1.0-2.1 points; P < .001) from baseline to week 12. Noninferiority of auricular acupuncture to electroacupuncture was not demonstrated. Adverse events were mild; 15 of 143 (10.5%) patients receiving auricular acupuncture and 1 of 145 (0.7%) patients receiving electroacupuncture discontinued treatment due to adverse events (P < .001).

The authors of this study concluded that, in this randomized clinical trial among cancer survivors with chronic musculoskeletal pain, electroacupuncture and auricular acupuncture produced greater pain reduction than usual care. However, auricular acupuncture did not demonstrate noninferiority to electroacupuncture, and patients receiving it had more adverse events.

I think the authors made a mistake in formulating their conclusions. Perhaps they allow me to correct it:

In this randomized clinical trial among cancer survivors with chronic musculoskeletal pain, electroacupuncture plus usual care and auricular acupuncture plus usual care produced greater pain reduction than usual care alone.

I know, I must sound like a broken record, but – because it followed the often-discussed ‘A+B versus B’ design – this study simply does not show what the authors conclude. In fact, it tells us very little about any effects caused by the two acupuncture versions per se. The study does not control for placebo effects, and its results are therefore consistent with acupuncture itself having no effect at all.

Here is an attempt at explaining the ‘A+B versus B’ study design I posted previously:

As regularly mentioned on this blog, there are several ways to design a study such that the risk of producing a negative result is minimal. The most popular one in SCAM research is the ‘A+B versus B’ design…

Imagine you have an amount of money A and your friend owns the same sum plus another amount B. Who has more money? Simple, it is, of course your friend: A+B will always be more than A [unless B is a negative amount]. For the same reason, such “pragmatic” trials will always generate positive results [unless the treatment in question does actual harm]. Treatment as usual plus acupuncture is more than treatment as usual alone, and the former is therefore more than likely to produce a better result. This will be true, even if acupuncture is a pure placebo – after all, a placebo is more than nothing, and the placebo effect will impact on the outcome, particularly if we are dealing with a highly subjective symptom such as fatigue.

Imagine the two interventions had been a verbal encouragement or a pat on the shoulder – say, a pat on the right shoulder for group 1 and one on the left for group 2. The findings could well have been very similar. To provide evidence that acupuncture PRODUCES PAIN REDUCTION, we need proper tests of the hypothesis. And to ‘determine the effectiveness of electroacupuncture or auricular acupuncture for chronic musculoskeletal pain in cancer survivors’, we need a different methodology.

This is, of course, all very elementary. Nothing elaborate or complicated! Scientists know it; editors know it; reviewers know it. Or at least they should know it. Therefore, I am at a loss trying to understand why even journals of high standing publish IMPROPER tests, better known as pseudo-science.

It is hard not to conclude that they deliberately try to mislead us.

Osteopathic manipulative treatment (OMT) is popular, but does it work? On this blog, we have often discussed that there are good reasons to doubt it.

This study compared the efficacy of standard OMT vs sham OMT for reducing low back pain (LBP)-specific activity limitations at 3 months in persons with nonspecific subacute or chronic LBP. It was designed as a prospective, parallel-group, single-blind, single-center, sham-controlled randomized clinical trial. 400 patients with nonspecific subacute or chronic LBP were recruited from a tertiary care center in France and randomly allocated to interventions in a 1:1 ratio.

Six sessions (1 every 2 weeks) of standard OMT or sham OMT were delivered by osteopathic practitioners. For both experimental and control groups, each session lasted 45 minutes and consisted of 3 periods: (1) interview focusing on pain location, (2) full osteopathic examination, and (3) intervention consisting of standard or sham OMT. In both groups, practitioners assessed 7 anatomical regions for dysfunction (lumbar spine, root of mesentery, diaphragm, and atlantooccipital, sacroiliac, temporomandibular, and talocrural joints) and applied sham OMT to all areas or standard OMT to those that were considered dysfunctional.

The primary endpoint was the mean reduction in LBP-specific activity limitations at 3 months as measured by the self-administered Quebec Back Pain Disability Index. Secondary outcomes were the mean reduction in LBP-specific activity limitations; mean changes in pain and health-related quality of life; number and duration of sick leave, as well as the number of LBP episodes at 12 months, and the consumption of analgesics and nonsteroidal anti-inflammatory drugs at 3 and 12 months. Adverse events were self-reported at 3, 6, and 12 months.

A total of 200 participants were randomly allocated to standard OMT and 200 to sham OMT, with 197 analyzed in each group; the median (range) age at inclusion was 49.8 (40.7-55.8) years, 235 of 394 (59.6%) participants were women, and 359 of 393 (91.3%) were currently working. The mean (SD) duration of the current LBP episode had been 7.5 (14.2) months. Overall, 164 (83.2%) patients in the standard OMT group and 159 (80.7%) patients in the sham OMT group had the primary outcome data available at 3 months.

The mean (SD) Quebec Back Pain Disability Index scores were:

  • 31.5 (14.1) at baseline and 25.3 (15.3) at 3 months in the OMT-group,
  • 27.2 (14.8) at baseline and 26.1 (15.1) at 3 months in the sham group.

The mean reduction in LBP-specific activity limitations at 3 months was -4.7 (95% CI, -6.6 to -2.8) and -1.3 (95% CI, -3.3 to 0.6) for the standard OMT and sham OMT groups, respectively (mean difference, -3.4; 95% CI, -6.0 to -0.7; P = .01). At 12 months, the mean difference in mean reduction in LBP-specific activity limitations was -4.3 (95% CI, -7.6 to -1.0; P = .01), and at 3 and 12 months, the mean difference in mean reduction in pain was -1.0 (95% CI, -5.5 to 3.5; P = .66) and -2.0 (95% CI, -7.2 to 3.3; P = .47), respectively. There were no statistically significant differences in other secondary outcomes. Four and 8 serious adverse events were self-reported in the standard OMT and sham OMT groups, respectively, though none was considered related to OMT.

The authors concluded that standard OMT had a small effect on LBP-specific activity limitations vs sham OMT. However, the clinical relevance of this effect is questionable.

This study was funded by the French Ministry of Health and sponsored by the Département de la Recherche Clinique et du Développement de l’Assistance Publique-Hôpitaux de Paris. It is of exceptionally good quality. Its findings are important, particularly in France, where osteopaths have become as numerous as their therapeutic claims irresponsible.

In view of what we have been repeatedly discussing on this blog, the findings of the new trial are unsurprising. Osteopathy is far less well supported by sound evidence than osteopaths want us to believe. This is true, of course, for the plethora of non-spinal claims, but also for LBP. The French authors cite previously published evidence that is in line with their findings: In a systematic review, Rubinstein and colleagues compared the efficacy of manipulative treatment to sham manipulative treatment on LBP-specific activity limitations and did not find evidence of differences at 3 and 12 months (3 RCTs with 573 total participants and 1 RCT with 63 total participants). Evidence was considered low to very low quality. When merging the present results with these findings, we found similar standardized mean difference values at 3 months (−0.11 [95% CI, −0.24 to 0.02]) and 12 months (−0.11 [95% CI, −0.33 to 0.11]) (4 RCTs with 896 total participants and 2 RCTs with 320 total participants).
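The merging referred to in the quoted passage is standard inverse-variance (fixed-effect) pooling. A sketch with made-up inputs (not the review's actual data) shows the mechanics:

```python
import math

def pool_fixed_effect(estimates):
    """Inverse-variance (fixed-effect) pooling of effect estimates.

    estimates: list of (effect, ci_low, ci_high) tuples with 95% CIs.
    The standard error is recovered from the CI width, each study is
    weighted by 1/SE^2, and a pooled estimate with its own 95% CI is
    returned.
    """
    total_w, weighted_sum = 0.0, 0.0
    for effect, lo, hi in estimates:
        se = (hi - lo) / (2 * 1.96)   # 95% CI spans ±1.96 SE
        w = 1.0 / se ** 2
        total_w += w
        weighted_sum += w * effect

    pooled = weighted_sum / total_w
    se_pooled = math.sqrt(1.0 / total_w)
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Two hypothetical SMDs of similar size and precision:
pooled, lo, hi = pool_fixed_effect([(-0.10, -0.30, 0.10),
                                    (-0.12, -0.32, 0.08)])
print(f"pooled SMD = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With equal precision the pooled value is simply the average (-0.11 here); with unequal precision the tighter study dominates, which is why small noisy trials contribute little to such an estimate.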

So, what should LBP patients do?

The answer is, as I have often mentioned, simple: exercise!

And what will the osteopaths do?

The answer to this question is even simpler: they will find/invent reasons why the evidence is not valid, ignore the science, and carry on making unsupported therapeutic claims about OMT.

Guest post by Alan Henness

When I discovered a homeopath admitting on camera that she believed she and her fellow homeopaths had managed to unblind a triple-blinded homeopathy trial they were taking part in, I submitted a complaint to the journal that published the paper on the trial, the university of the researcher who had conducted the trial and the current university of the homeopath who had subsequently moved into research.

The paper concerned is the 2004 paper by Weatherley-Jones et al. A randomised, controlled, triple-blind trial of the efficacy of homeopathic treatment for chronic fatigue syndrome. This was published in the Journal of Psychosomatic Research.

The homeopath was Clare Relton, currently Senior Lecturer in Clinical Trials at the Centre for Primary Care and Public Health at Queen Mary University of London and Honorary Senior Research Fellow, School of Health and Related Research at the University of Sheffield.

She gave a presentation at the 2019 conference of the Homeopathy Research Institute. Billed as an International Homeopathy Research Conference, it was subtitled, Cutting edge research in homeopathy. The videos of the conference were sponsored by homeopathy manufacturing giant, Boiron.

My complaint email (see below) explains what I discovered and sets the context. As a result of the investigation by the journal, the current editor along with two former editors have just published a peer-reviewed paper on my complaint and their investigation:

When is lack of scientific integrity a reason for retracting a paper? A case study

Misconduct and unethical behaviour

It’s worth noting how serious the Journal of Psychosomatic Research considered the misconduct they identified by Relton and others. From the Results section of the paper:

We found the presentation by Dr. Relton disturbing on multiple grounds. This admission of unethical behavior calls her scientific integrity into question. The premise for her actions rests on an errant assumption widespread among clinicians, based on anecdotal experience, that one possesses an ultimate knowledge of what works and doesn’t work without the need for rigorous study. The history of medicine, unfortunately, has been littered by countless treatments that practitioners believed in and dispensed, only to be later found not beneficial or even harmful [4]. This underscores the importance of rigorous study for treatments where equipoise exists in the scientific community, as it arguably did for the use of homeopathy for chronic fatigue syndrome. Dr. Relton likely did not hold that equipoise herself, but if she had ethical concerns about the study, the appropriate action would have been to not participate in it. Instead, she purports to have enlisted colleagues to deliberately and systematically undermine the study.

In watching the presentation, the purpose of this admission seemed to be to discount the results of a rigorous but essentially negative study in the context of promoting her own ideas related to trial design. While we cannot know for certain that her motivation was to discount the results of this study, what she said clearly seeks to undermine the credibility of a trial whose results challenged her firmly held but untested beliefs about the benefit of a treatment that she had high allegiance to. Regardless of her intent or what actually happened during the trial, Dr. Relton’s presentation is ipso facto evidence of either an admitted prior ethical breach or is itself an ethical breach for the following reasons. Either she indeed undermined an ambitious effort to study the efficacy of homeopathy for chronic fatigue syndrome, negating the work of all other investigators, study staff, and participants involved in the study as well as the investment of the public, or she is conducting a late and inappropriate attack on the study’s credibility. Her presentation certainly warrants formal censure from the scientific community, and this paper may contribute to that. Despite this clear indictment, after discussing and considering the complaint of Mr. Henness for several months, we ultimately decided not to retract the paper.

They decided not to retract the paper but instead use it for ethical reflection. However, they concluded I had highlighted “undisputable evidence of scientific misconduct” by the homeopaths concerned:

When is lack of scientific integrity a reason for retracting a paper? A case study

Objective: The journal received a request to retract a paper reporting the results of a triple-blind randomized placebo-controlled trial. The present and immediate past editors expand on the journal’s decision not to retract this paper in spite of undisputable evidence of scientific misconduct on behalf of one of the investigators.

Methods: The editors present an ethical reflection on the request to retract this randomized clinical trial with consideration of relevant guidelines from the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE) applied to the unique contextual issues of this case.

Results: In this case, scientific misconduct by a blinded provider of a homeopathy intervention attempted to undermine the study blind. As part of the study, the integrity of the study blind was assessed. Neither participants nor homeopaths were able to identify whether the participant was assigned to homeopathic medicine or placebo. Central to the decision not to retract the paper was the fact that the rigorous scientific design provided evidence that the outcome of the study was not affected by the misconduct. The misconduct itself was thought to be insufficient reason to retract the paper.

Conclusion: Retracting a paper of which the outcome is still valid was in itself considered unethical, as it takes away the opportunity to benefit from its results, rendering the whole study useless. In such cases, scientific misconduct is better handled through other professional channels.

Ethical misconduct

The authors had additional ethical concerns:

Apart from the intention of ‘circumventing the blind’, there is another unethical aspect to the behavior of Dr. Relton, namely the fact that patients were systematically subject to an intervention (carcinosin administration) that was not part of the original research protocol and to which they did not consent as part of the study. Although the systematic administration of carcinosin was not part of the study protocol, it was administered only to patients taking part in the study, and because they took part in the study. Presumably, these patients were not properly informed, or maybe even misinformed, about the rationale of a double-blind trial design and/or the true reason for administrating carcinosin. Apparently, ‘deep listening and deep understanding’ does not necessarily need to be accompanied by an honest and open attitude towards patients that participate in research. Dr. Relton stated in her lecture ‘I’m not trained to be deceiving people’, but that is exactly what she did. Not only did she deceive patients, but also the researchers and study leaders that she is supposed to collaborate with as a colleague. [emphasis in original]


The authors said:

The authors are of the opinion that in case the misconduct was not conducted by or on behalf of the principal investigator – as is the case here -, the initiative for further action should lie with them. Not only is the principal investigator the one that was deceived, but they are in a better position to report the misconduct to the institution and funding body. If the principal investigator is responsible for the misconduct, the editor is probably the only one that can initiate further action, in which case the researcher’s institution should be informed and requested to take appropriate action.

It will be interesting to see what further action, if any, is taken by Weatherley-Jones, as suggested.

I had already brought my concerns to the attention of both the University of Sheffield and Queen Mary University of London. The former concluded:

This is to confirm that the University of Sheffield has now completed its assessment of this matter, and it has been agreed that it would not be appropriate for the University of Sheffield to undertake a research misconduct investigation of the allegation against Clare Relton, since she is not a current member of University staff, nor was she a member of staff at the time of the clinical trial in question.

In relation to the potential concerns about the reliability of the published research findings, the University is satisfied that the Journal of Psychosomatic Research is consulting with the authors and taking steps to address the concerns as appropriate. The University will therefore be taking no further action.

I received no response from Queen Mary University of London, despite their Principal being copied in on all the relevant correspondence.

I will be writing again to both universities and to Weatherley-Jones now that the paper has been published.


My thanks to Jess G. Fiedorowicz, Editor of the Journal of Psychosomatic Research, for his thorough investigation of my complaint.

My complaint


The results of a trial were published in the Journal of Psychosomatic Research in 2004 (see attached copy):

A randomised, controlled, triple-blind trial of the efficacy of homeopathic treatment for chronic fatigue syndrome


Elaine Weatherley-Jones a,*, Jon P Nicholl a, Kate J Thomas a, Gareth J Parry a, Michael W McKendrickb, Stephen T Green b, Philip J Stanley c, Sean PJ Lynch d

a Medical Care Research Unit, School of Health and Related Research, University of Sheffield, Regent Court, 30 Regent Street, Sheffield, S1 4DA, UK
b Communicable Diseases Directorate, Royal Hallamshire Hospital, Sheffield, UK
c Seacroft Hospital, Leeds Teaching Hospitals NHS Trust, Leeds, UK
d St. James’s University Hospital, University of Leeds, Beckett Street, Leeds, UK

* Corresponding author. Tel.: +44-114-222-0744; fax: +44-114-222-0749.
E-mail address: [email protected] (E. Weatherley-Jones)

The paper is indexed in PubMed here.

Elaine Weatherley-Jones is listed as the corresponding author at the Medical Care Research Unit, School of Health and Related Research, University of Sheffield; several of her co-authors are listed there as well.

One of the homeopaths involved in providing treatment was Clare Relton, currently Senior Lecturer in Clinical Trials at the Centre for Primary Care and Public Health at Queen Mary University of London.

The full list of those involved in providing treatment during the trial is given as:

The Homeopathic Trials Group: Homeopaths— Gill de Boer, MBChB, MFHom, Maryjoan Foster, RSHom, Susanne Hartley, RSHom, Jane Howarth, BRCPHom, Pat Mayborne RSHom, Georgina Ramsayer RSHom, Clare Relton, RSHom, Pat Strong, MBBS, MFHom, Angela Zajac, BSc, RSHom, BRCPHom.

Dr Relton gave a talk at the conference of the Homeopathy Research Institute, held in London from 14 to 16 June 2019. The video of her talk has recently been published; I have a copy of it.

I invite you to watch all 30 minutes of it.

At about five minutes in, she begins to discuss the above trial, having just said she was a non-medical homeopath at the Wellforce Clinic in Sheffield. She is currently listed as Chair of Directors.

She then goes on to describe how she took part as one of the homeopaths in the trial and relates how she came up with “a cunning way of circumventing the blinding”.

I offer the following transcript of the segment of her talk where she discusses this (all transcription errors are mine):

Timestamp 05:12

So while I was still a homeopath in the Wellforce clinic, a researcher from the University of Sheffield which was actually only five minutes away from my clinic which was really handy came along and said, “I’ve got some money from Lord Sainsbury to do a trial of chronic fatigue syndrome of homoeopathy” and she described the design and I remember thinking, “not sure what that’s going to show”.

But anyway there were ten homeopaths recruited in Sheffield and Leeds and we saw patients with chronic fatigue syndrome.

A lot of us were getting patients with chronic fatigue syndrome anyway and particularly if they were never been well since glandular fever couple of doses of carcinosin 30 or 200 and they seem to make a really good recovery.

So we’re pretty confident about taking part in this trial.

So there were 130 or 140 patients recruited to the trial and then allocated to the homeopaths: there were five at our clinic and I was one of them.

Patients would arrive; you would do the normal thing, have the consultation with them. They seemed a bit standoffish, they were quite distant – I couldn’t work out why.

And then at the end of the consultation I had to say to them “well there’s a 50% chance that whatever I prescribe you is going to be a placebo”, which sort of sort of lowered the temperature in the in the in the Consulting room because you know they came because they have chronic fatigue; they came… didn’t come because they wanted to take part in an experimental game.

So we would ring the pharmacy up and tell them our prescription. Helios Pharmacy would then send out either placebo or the real remedy according to the allocation of the patient.

The patient would come back four weeks later and if they were better, great and if they weren’t it was really, really difficult. So, had I got the wrong prescription or were they on placebo?

So after about six months of this we started working out there was a cunning way of circumventing the blinding and we worked out, well if we give them all a dose of carcinosin they’re going to have some reaction: there’s going to be a dream there’s going to be some change and if when they come back at the second appointment they haven’t changed then we know they’re on placebo. So don’t bother doing all that trying to find the right remedy; just use all your other amazing skills you have as a homeopath: the deep listening we have the deep understanding of what we know about what’s toxic in our systems, about diet and counselling.

So that’s what we did. Because we’re homeopaths. We’re trained to treat people I’m not trained to be deceiving people. That’s what I do, that’s what I did then; that’s what all my colleagues did.

So ok, so the trial ended and at the end the results came out I’m sure quite a few of us are familiar with it.

There were two groups, so there was a group… everybody in the patient… everybody in the trial received treatment… a course of treatment by a homeopath and 50% of them received a placebo remedy 50% the real remedy, the verum.

And the results… both groups got better and the group that received the real remedy improved better than the group that received the placebo but was the difference clinically significant? Not quite. How many trials do we have that? So this trial was so much realisation, so many questions came out of my experience being inside, inside a double-blind placebo randomised controlled trial. What is seen as the… you know the… summit of evidence-based medicine in terms of rigorousness, I  just thought “what is this doing?” I don’t know what… I don’t know what this has shown.

This is what’s called an explanatory trial and I thought well it’s explaining nothing to me, apart from the fact that the system for designing and conducting randomised controlled trials at the moment isn’t working.

So lots of questions.

Timestamp 09:02

The paper states:

Patients were successfully blinded to their group allocation, and therefore we have assumed that whatever the reasons for nonresponse, they are the same for the treatment arm and the placebo arm and that the data are comparable. Therefore, intention to treat analyses was done on actual data plus imputed missing item data, but all unit missing data were excluded from analyses.


Checking of double blinding showed that prediction of treatment group was made by neither homeopaths (κ = 0.07, P = .60) nor patients (κ = 0.11, P = .48).

The trial was of a triple-blind design, but there is no mention in the paper of the deliberate attempts to circumvent the blinding. The effects on participants of the actions – inadvertent or otherwise – of Relton and her colleagues were not considered and are not known.

I believe the actions of Relton, the other four homeopaths at her clinic whom she clearly implicates in this circumvention of blinding, and possibly the remaining four homeopaths if they were all known to each other and in contact with each other since they were all in the same area of Leeds/Sheffield, compromised the trial design, rendered the results unreliable and seriously undermined the integrity of the paper and its conclusions. I do not believe it matters whether or not they were in fact able to circumvent the blinding, but it does matter that Relton and others believed they had because she admits it led to different behaviour on their part resulting in contamination of the results.

I believe the actions amount to misconduct.

I note additional criticism of this paper by Prof Edzard Ernst (see attached).

I ask that the University of Sheffield investigate this matter and, along with Queen Mary University of London and the Editor-in-chief of the Journal of Psychosomatic Research, Jess Fiedorowicz, MD, PhD, decide what actions to take. I ask that consideration be given to retracting this unsound paper.

Please consider this email as a formal complaint against Dr Clare Relton and others.

Please acknowledge receipt by return and keep me informed of your progress in investigating this matter and of your conclusions and outcome.

If you require any further information, please do not hesitate to contact me.

Best regards.
Alan Henness

The use of homeopathy in oncological supportive care seems to be progressing. The first French prevalence study, performed in 2005 in Strasbourg, showed that only 17% of the subjects were using it. This descriptive study, using a questionnaire identical to that used in 2005, investigated whether the situation has changed since then.

A total of 633 patients undergoing treatment in three anti-cancer centers in Strasbourg were included. The results of the “homeopathy” sub-group were extracted and studied.

Of the 535 patients who completed the questionnaire, 164 (30.7%) used homeopathy. The main purpose of its use was to reduce the side effects of cancer treatments (75%). Among the users,

  • 82.6% were “somewhat” or “very” satisfied,
  • 15.5% were “quite” satisfied,
  • 1.9% were “not at all” satisfied.

The homeopathic treatment was prescribed by a doctor in 75.6% of the cases; the general practitioner was kept informed in 87% of the cases and the oncologist in 82%. Fatigue, pain, nausea, anxiety, sadness, and diarrhea were improved in 80% of the cases. Hair-loss, weight disorders, and loss of libido were the least improved symptoms. The use of homeopathy was significantly associated with the female sex.

The authors concluded that with a prevalence of 30.7%, homeopathy is the most used complementary medicine in integrative oncology in Strasbourg. Over 12 years, we have witnessed an increase of 83% in its use in the same city. Almost all respondents declare themselves satisfied and tell their doctors more readily than in 2005.

There is one (possibly only one) absolutely brilliant statement in this abstract:

The use of homeopathy was significantly associated with the female sex.

Why do I find this adorable?

Because to claim that any of the observed outcomes of this study are causally related to homeopathy seems like claiming that homeopathy turns male patients into women.


In case you do not understand my clumsy attempt at humor and satire, rest assured: I do not truly believe that homeopathy turns men into women, and neither do I believe that it improves fatigue, pain, nausea, anxiety, sadness, and diarrhea. Remember: correlation is not causation.

The author of this study introduces the subject by stating that Reiki is a biofield energy therapy that focuses on optimizing the body’s natural healing abilities by balancing the life force energy or qi/chi. Reiki has been shown, she claims, to reduce stress and pain levels, help with depression/anxiety, increase relaxation, improve fatigue, and enhance quality of life.

Despite the fact that the author seems to have no doubt about the effectiveness of Reiki, she decided single-handedly to conduct a study of it – well, not a real study but a ‘pilot study’:

In this pilot randomized, double-blinded, and placebo-controlled study, the effects of Reiki on heart rate, diastolic and systolic blood pressure, body temperature, and stress levels were explored in an effort to gain objective outcome measures and to understand the underlying physiological mechanisms of how Reiki may be having these therapeutic effects on subjective measures of stress, pain, relaxation, and depression/anxiety.

Forty-eight subjects were block-randomized into three groups (Reiki treatment, sham treatment, and no treatment). The changes in pre-and post-treatment measurements for each outcome measure were analyzed through analysis of variance (ANOVA) post hoc multiple comparison test, which found no statistically significant difference between any of the groups. The p-value for the comparison of Reiki and sham groups for heart rate was 0.053, which is very close to being significant and so, a definitive conclusion can not be made based on this pilot study alone.

The author concluded that a second study with a larger sample size is warranted to investigate this finding further and perhaps with additional outcome measures to look at other possible physiological mechanisms that may underlie the therapeutic effects of Reiki.

I have a few questions about this paper:

  • If a researcher already knows that a treatment works, why do a study?
  • If she nevertheless does a study, why a pilot study, which is meant not for evaluating effects but for testing feasibility?
  • Why does the author calculate effects instead of evaluating the feasibility of her project?
  • Why does the author try to interpret a negative outcome as though it signifies an almost positive effect?
  • Why did someone who knows how to do research at the Ohio Wesleyan University (the author’s affiliation) not give her some guidance?
  • Why did the reviewers of this paper let it pass?
  • Why does any journal publish such rubbish?

Oh, the embarrassment!

It’s a journal for which I once (a long time ago) served on the editorial board.
