scientific misconduct


This is a question which I have asked myself more often than I care to remember. The reason is probably that, in alternative medicine, I feel surrounded by so much dodgy research that I simply cannot avoid asking it.

In particular, the so-called ‘pragmatic’ trials which are so much ‘en vogue’ at present are, in my view, a reason for concern. Take a study of cancer patients, for instance, where one group is randomized to receive the usual treatments and care, while the experimental group receives the same plus several alternative treatments in addition. These treatments are carefully selected to be agreeable and pleasant; each patient can choose the ones he/she likes best, has always wanted to try, or has heard many good things about. The outcome measure of our fictitious study would, of course, be some subjective parameter such as quality of life.

In this set-up, the patients in our experimental group thus have high expectations, are delighted to get something extra, even happier to get it for free, and receive plenty of attention, empathy, care and time. By contrast, our poor patients in the control group would be a bit miffed to have drawn the ‘short straw’ and receive none of this.

What result do we expect?

Will the quality of life after all this be equal in both groups?

Will it be better in the miffed controls?

Or will it be higher in those lucky ones who got all this extra pampering?

I don’t think I need to answer these questions; the answers are too obvious and too trivial.

But the real and relevant question is the following, I think: IS SUCH A TRIAL JUST SILLY AND MEANINGLESS OR IS IT UNETHICAL?

I would argue the latter!


Because the results of the study are clearly known before the first patient has even been recruited. This means that the trial is not necessary; the money, time and effort have been wasted. Crucially, patients have been misled into thinking that they give their time, co-operation, patience etc. because there is a question of sufficient importance to be answered.

But, in truth, there is no question at all!

Perhaps you believe that nobody in their right mind would design, fund and conduct such a daft trial. If so, you are mistaken. Such studies are currently being published by the dozen. Here is the abstract of the most recent one I could find:

The aim of this study was to evaluate the effectiveness of an additional, individualized, multi-component complementary medicine treatment offered to breast cancer patients at the Merano Hospital (South Tyrol) on health-related quality of life compared to patients receiving usual care only. A randomized pragmatic trial with two parallel arms was performed. Women with confirmed diagnoses of breast cancer were randomized (stratified by usual care treatment) to receive individualized complementary medicine (CM group) or usual care alone (usual care group). Both groups were allowed to use conventional treatment for breast cancer. Primary endpoint was the breast cancer-related quality of life FACT-B score at 6 months. For statistical analysis, we used analysis of covariance (with factors treatment, stratum, and baseline FACT-B score) and imputed missing FACT-B scores at 6 months with regression-based multiple imputation. A total of 275 patients were randomized between April 2011 and March 2012 to the CM group (n = 136, 56.3 ± 10.9 years of age) or the usual care group (n = 139, 56.0 ± 11.0). After 6 months from randomization, adjusted means for health-related quality of life were higher in the CM group (FACT-B score 107.9; 95 % CI 104.1-111.7) compared to the usual care group (102.2; 98.5-105.9) with an adjusted FACT-B score difference between groups of 5.7 (2.6-8.7, p < 0.001). Thus, an additional individualized and complex complementary medicine intervention improved quality of life of breast cancer patients compared to usual care alone. Further studies evaluating specific effects of treatment components should follow to optimize the treatment of breast cancer patients. 
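
The analysis the abstract describes, an ANCOVA with the baseline FACT-B score as covariate, can be sketched in a few lines of ordinary least squares. All numbers below are simulated for illustration only; none are from the trial, and the sketch omits the stratification factor the authors also included:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data (illustrative only; not the trial's actual numbers)
n = 500
baseline = rng.normal(100, 15, n)      # baseline FACT-B score
group = rng.integers(0, 2, n)          # 0 = usual care, 1 = CM group
true_effect = 5.0                      # assumed group difference
followup = 40 + 0.6 * baseline + true_effect * group + rng.normal(0, 8, n)

# ANCOVA as ordinary least squares: regress the follow-up score on
# treatment group, adjusting for the baseline score.
X = np.column_stack([np.ones(n), group, baseline])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(f"adjusted group difference: {coef[1]:.1f}")
```

With real data one would, as the abstract does, also report a confidence interval for the group coefficient; the point here is simply that an adjusted between-group difference of this kind is what the trial's headline figure of 5.7 represents.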

The key sentence in this abstract is, of course: complementary medicine intervention improved quality of life of breast cancer patients… It provides the explanation as to why these trials are so popular with alternative medicine researchers: they are not real research, they are quite simply promotion! The next step would be to put a few of those pseudo-scientific trials together and claim that there is solid proof that integrating alternative treatments into conventional health care produces better results. At that stage, few people will bother asking whether this is really due to the treatments in question or to the additional attention, pampering etc.


I would very much appreciate your opinion.

A new study of homeopathic arnica suggests efficacy. How come?

Subjects scheduled for rhinoplasty surgery with nasal bone osteotomies by a single surgeon were prospectively randomized to receive either oral perioperative arnica or placebo in a double-blinded fashion. A commercially available preparation was used which contained 12 capsules: one 500 mg capsule of arnica 1M was given preoperatively on the morning of surgery and two more later that day after surgery. Thereafter, arnica was administered in the 12C potency three times daily for the next 3 days (“C” indicates a 100-fold serial dilution; and M, a 1000-fold dilution).
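
The arithmetic behind these potencies is worth spelling out. Each ‘C’ step is a 1:100 dilution, so a 12C potency dilutes by a factor of 100¹² = 10²⁴, which already exceeds Avogadro’s number. A quick back-of-the-envelope check, generously assuming a full mole of active substance at the start:

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_left(moles_of_active, c_potency):
    """Expected molecules remaining after `c_potency` serial 1:100 dilutions."""
    return moles_of_active * AVOGADRO / 100 ** c_potency

# A 12C potency dilutes by 100**12 = 1e24: even a whole mole of starting
# material leaves, on average, less than one molecule.
print(molecules_left(1.0, 12))
```

In other words, a 12C remedy is, in all likelihood, chemically indistinguishable from its diluent.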

Ecchymosis was measured in digital “three-quarter”-view photographs at three postoperative time points. Each bruise was outlined with Adobe Photoshop and the extent was scaled to a standardized reference card. Cyan, magenta, yellow, black, and luminosity were analyzed in the bruised and control areas to calculate change in intensity.

Compared with 13 subjects receiving placebo, 9 taking arnica had 16.2%, 32.9%, and 20.4% less extent on postoperative days 2/3, 7, and 9/10, a statistically significant difference on day 7. Color change initially showed 13.1% increase in intensity with arnica, but 10.9% and 36.3% decreases on days 7 and 9/10, a statistically significant difference on day 9/10. One subject experienced mild itching and rash with the study drug that resolved during the study period.

The authors concluded that Arnica montana seems to accelerate postoperative healing, with quicker resolution of the extent and the intensity of ecchymosis after osteotomies in rhinoplasty surgery, which may dramatically affect patient satisfaction.

Why are the results positive? Previous systematic reviews confirm that homeopathic arnica is a pure placebo. At first, I thought the answer lay in the 1M potency, which could well still contain active molecules. But then I realised that the answer is much simpler: if we apply the conventional level of statistical significance, there are no statistically significant differences to placebo at all! I had not noticed the little sentence by the authors: a P value of 0.1 was set as a meaningful difference with statistical significance. In fact, none of the effects called significant by the authors passes the conventionally used probability level of 5%.
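
To make the point concrete, here is a minimal sketch of how a lenient alpha of 0.1 manufactures ‘significance’ that vanishes at the conventional 0.05 level. The p-values below are hypothetical, chosen only to fall between the two thresholds, since the paper’s exact figures are not quoted above:

```python
def significant(p_value, alpha):
    """True if the p-value falls below the chosen significance threshold."""
    return p_value < alpha

# Hypothetical p-values between 0.05 and 0.1, for illustration only
for label, p in [("extent, day 7", 0.08), ("intensity, day 9/10", 0.07)]:
    print(f"{label}: p={p}  alpha=0.10 -> {significant(p, 0.10)}  "
          f"alpha=0.05 -> {significant(p, 0.05)}")
```

Any result in that band is ‘significant’ under the authors’ threshold and non-significant under the conventional one.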

So, what do the results of this new study truly mean? In my view, they show what was known all along: HOMEOPATHIC REMEDIES ARE PLACEBOS.

In my last post, I claimed that researchers of alternative medicine tend to be less than rigorous. I did not link this statement to any evidence at all. Perhaps I should have at least provided an example!? As it happens, I just came across a brand new paper which nicely demonstrates what I meant.

According to its authors, this non-interventional study was performed to generate data on safety and treatment effects of a complex homeopathic drug. They treated 1050 outpatients suffering from common cold with a commercially available homeopathic remedy for 8 days. The study was conducted in 64 German outpatient practices of medical doctors trained in CAM. Tolerability, compliance and the treatment effects were assessed by the physicians and by patient diaries. Adverse events were collected and assessed with specific attention to homeopathic aggravation and proving symptoms. Each adverse effect was additionally evaluated by an advisory board of experts.

The physicians detected 60 adverse events from 46 patients (4.4%). Adverse drug reactions occurred in 14 patients (1.3%). Six patients showed proving symptoms (0.57%) and only one homeopathic aggravation (0.1%) appeared. The rate of compliance was 84% for all groups. The global assessment of the treatment effects resulted in the verdict “good” and “very good” in 84.9% of all patients.

The authors concluded that the homeopathic complex drug was shown to be safe and effective for children and adults likewise. Adverse reactions specifically related to homeopathic principles are very rare. All observed events recovered quickly and were of mild to moderate intensity.

So why do I think this is ‘positively barmy’?

The study had no control group. This means that there is no way anyone can attribute the observed ‘treatment effects’ to the homeopathic remedy. There are many other phenomena that may have caused or contributed to them, e.g.:

  • a placebo effect
  • the natural history of the condition
  • regression to the mean
  • other treatments which the patients took but did not declare
  • the empathic encounter with the physician
  • social desirability
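
One of these confounders, regression to the mean, is easy to demonstrate with a small simulation: if patients enrol when a noisy symptom score happens to be high, their second measurement will tend to be lower even with no treatment at all. All numbers below are made up purely for illustration:

```python
import random

random.seed(1)

# Two symptom measurements per person: correlated, identical true mean (50),
# and no treatment in between.
n = 10_000
first = [random.gauss(50, 10) for _ in range(n)]
second = [0.5 * f + random.gauss(25, 10) for f in first]

# Patients 'enrol' only when the first score is high (they feel ill).
enrolled = [(f, s) for f, s in zip(first, second) if f > 60]
mean_first = sum(f for f, _ in enrolled) / len(enrolled)
mean_second = sum(s for _, s in enrolled) / len(enrolled)
print(f"at enrolment: {mean_first:.1f}, at follow-up: {mean_second:.1f}")
```

The follow-up mean comes out several points lower although nothing was done, which is exactly the kind of ‘treatment effect’ an uncontrolled study cannot distinguish from a real one.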

To plan a study with the aim as stated above and to draw the conclusion as cited above is naïve and unprofessional (to say the least) on the part of the researchers (I often wonder where, in such cases, the boundary between incompetence and research misconduct might lie). To pass such a paper through the peer review process is negligent on the part of the reviewers. To publish the article is irresponsible on the part of the editor.


In the realm of homeopathy there is no shortage of irresponsible claims. I am therefore used to a lot – but this new proclamation takes the biscuit, particularly as it currently is being disseminated in various forms worldwide. It is so outrageously unethical that I decided to reproduce it here [in a slightly shortened version]:

“Homeopathy has given rise to a new hope to patients suffering from dreaded HIV, tuberculosis and the deadly blood disease Hemophilia. In a pioneering two-year long study, city-based homeopath Dr Rajesh Shah has developed a new medicine for AIDS patients, sourced from human immunodeficiency virus (HIV) itself.

The drug has been tested on humans for safety and efficacy and the results are encouraging, said Dr Shah. Larger studies with and without concomitant conventional ART (Antiretroviral therapy) can throw more light in future on the scope of this new medicine, he said. Dr Shah’s scientific paper for debate has just been published in Indian Journal of Research in Homeopathy…

The drug resulted in improvement of blood count (CD4 cells) of HIV patients, which is a very positive and hopeful sign, he said and expressed the hope that this will encourage an advanced research into the subject. Sourcing of medicines from various virus and bacteria has been a practise in the homeopathy stream long before the prevailing vaccines came into existence, said Dr Shah, who is also organising secretary of Global Homeopathy Foundation (GHF)…

Dr Shah, who has been campaigning for the integration of homeopathy and allopathic treatments, said this combination has proven to be useful for several challenging diseases. He teamed up with noted virologist Dr Abhay Chowdhury and his team at the premier Haffkine Institute and developed a drug sourced from TB germs of MDR-TB patients.”

So, where is the study? It is not on Medline, but I found it on the journal’s website. This is what the abstract tells us:

“Thirty-seven HIV-infected persons were registered for the trial, and ten participants were dropped out from the study, so the effect of HIV nosode 30C and 50C, was concluded on 27 participants under the trial.

Results: Out of 27 participants, 7 (25.93%) showed a sustained reduction in the viral load from 12 to 24 weeks. Similarly 9 participants (33.33%) showed an increase in the CD4+ count by 20% altogether in the 12th and 24th week. Significant weight gain was observed at week 12 (P = 0.0206). 63% and 55% showed an overall increase in either appetite or weight. The viral load increased from baseline to 24 week through 12 week in which the increase was not statistically significant (P > 0.05). 52% (14 of 27) participants have shown either stability or improvement in CD4% at the end of 24 weeks, of which 37% participants have shown improvement (1.54-48.35%) in CD4+ count and 15% had stable CD4+ percentage count until week 24. 16 out of 27 participants had a decrease (1.8-46.43%) in CD8 count. None of the adverse events led to discontinuation of study.

Conclusion: The study results revealed improvement in immunological parameters, treatment satisfaction, reported by an increase in weight, relief in symptoms, and an improvement in health status, which opens up possibilities for future studies.”

In other words, the study did not even have a control group. This means that the observed ‘effects’ are most likely just the normal fluctuations one would expect, without any clinical significance whatsoever.

The homeopathic Ebola cure was bad enough, I thought, but, considering the global importance of AIDS, the homeopathic HIV treatment is clearly worse.

Today, I had a great day: two wonderful book reviews, one in THE TIMES HIGHER EDUCATION and one in THE SPECTATOR. But then I did something that I shouldn’t have done – I looked whether someone had already written a review on the Amazon site. There were three reviews; the first was nice, the second was very stupid, and the third one almost made me angry. Here it is:

I was at Exeter when Ernst took over what was already a successful Chair in CAM. I am afraid this part of it appears to be fiction. It was embarrassing for those of us CAM scientists trying to work there, but the university nevertheless supported his right to freedom of speech through all the one-sided attacks he made on CAM. Sadly, it became impossible to do genuine CAM research at Exeter, as one had to either agree with him that CAM is rubbish, or go elsewhere. He was eventually asked to leave the university, having spent the £2.M charity pot set up by Maurice Laing to help others benefit from osteopathy. CAM research funding is so tiny (in fact it is pretty much non-existent) and the remedies so cheap to make, that there is not the kind of corruption you find in multi-billion dollar drug companies (such as that recently in China) or the intrigue described. Subsequently it is not possible to become a big name in CAM in the UK (which may explain the ‘about face’ from the author when he found that out?). The book bears no resemblance to what I myself know about the field of CAM research, which is clearly considerably more than the author, and I would recommend anyone not to waste time and money on this particular account.

I know, I should just ignore it, but outright lies have always made me cross!

Here are just some of the ‘errors’ in the above text:

  • There was no chair when I came.
  • All the CAM scientists – not sure what that is supposed to mean.
  • I was never asked to leave.
  • The endowment was not £ 2 million.
  • It was not set up to help others benefit from osteopathy.

It is a pity that this ‘CAM-expert’ hides behind a pseudonym. Perhaps he/she will tell us on this blog who he/she is. And then we might find out how well-informed he/she truly is and how he/she was able to insert so many lies into such a short text.

Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?

Here is a brand new one which might stand for dozens of others.

In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA) and prospectively followed them over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).

The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out, and one did not achieve an improvement of 80%, and was therefore also counted as a treatment failure. The cost of homeopathic treatment was 41% of projected equivalent conventional treatment.

Good news then for enthusiasts of homeopathy? 91% improvement!

Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… until I read the authors’ conclusions that is:

Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.

Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:

  1. How on earth can we take this and so many other articles on homeopathy seriously?
  2. When does this sort of article cross the line between wishful thinking and scientific misconduct?

Guest post by Nick Ross

If you’re a fan of Edzard Ernst – and who with a rational mind would not be – then you will be a fan of HealthWatch.

Edzard is a distinguished supporter. Do join us. I can’t promise much in return except that you will be part of a small and noble organisation that campaigns for treatments that work – in other words for evidence based medicine. Oh, and you get a regular Newsletter, which is actually rather good.

HealthWatch was inspired 25 years ago by Professor Michael Baum, the breast cancer surgeon who was incandescent that so many women presented to his clinic late, doomed and with suppurating sores, because they had been persuaded to try ‘alternative treatment’ rather than the real thing.

But like Edzard (and indeed like Michael Baum), HealthWatch keeps an open mind. If there are reliable data to show that an apparently weirdo treatment works, hallelujah. If there is evidence that an orthodox one doesn’t then it deserves a raspberry. HealthWatch has worked to expose quacks and swindlers and to get the Advertising Standards Authority to do its job regulating against false claims and flimflam. It has fought the NHS to have women given fair and balanced advice about the perils of mass screening. It has campaigned with Sense About Science, English Pen and Index to protect whistleblowing scientists from vexatious libel laws, and it has joined the AllTrials battle for transparency in drug trials. It has an annual competition for medical and nursing students to encourage critical analysis of clinical research protocols, and it stages the annual HealthWatch Award and Lecture which has featured Edzard (in 2005) and a galaxy of other champions of scepticism and good evidence including Sir Iain Chalmers, Richard Smith, David Colquhoun, Tim Harford, John Diamond, Richard Doll, Peter Wilmshurst, Ray Tallis, Ben Goldacre, Fiona Godlee and, last year, Simon Singh. We are shortly to sponsor a national debate on Lord Saatchi’s controversial Medical Innovation Bill.

But we need new blood. Do please check us out. Be careful, because since we first registered our name a host of brazen copycats have emerged, not least Her Majesty’s Government with ‘Healthwatch England’ which is part of the Care Quality Commission. We have had to put ‘uk’ at the end of our web address to retain our identity. So take the link to, or better still take out a (very modestly priced) subscription.

As Edmund Burke might well have said, all it takes for quackery to flourish is that good men and women do nothing.

As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.

To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):


A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.


The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.


Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).


Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
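
Reported odds ratios with confidence intervals can be unpacked a little further. Assuming, as is standard, that the interval is symmetric on the log-odds scale, the standard error and z-statistic can be recovered from the published numbers alone:

```python
import math

def z_from_or_ci(odds_ratio, ci_low, ci_high, crit=1.96):
    """Back out the z-statistic from an OR and its 95% CI, assuming the
    interval is symmetric on the log-odds scale (the usual convention)."""
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * crit)
    return math.log(odds_ratio) / se

# Pooled estimate over 22 trials: OR = 1.53 (95% CI 1.22 to 1.91)
print(round(z_from_or_ci(1.53, 1.22, 1.91), 2))
# Three 'reliable evidence' trials: OR = 1.98 (95% CI 1.16 to 3.38)
print(round(z_from_or_ci(1.98, 1.16, 3.38), 2))
```

The three-trial subset has the larger point estimate but the much wider interval, and therefore the weaker z-statistic, which is worth keeping in mind whenever that subset is quoted as the stronger finding.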

Since my team had published an RCT of individualised homeopathy, it seemed only natural that my interest focussed on why this study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.

I was convinced that this trial had been rigorous and thus puzzled why, despite receiving ‘full marks’ from the reviewers, they had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.

It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail, such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data provided by us would not lend themselves to meta-analysis. By selecting not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they could not use our study and to reject it.

Why did they do that?

The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO“. This, I would argue is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).

By rigidly following their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?

Well, I think they committed several serious mistakes.

  • Firstly, they wrote the protocol which forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is even the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, it is most likely that their overall results would not have been in favour of homeopathy.
  • Secondly, they awarded our study a malus point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgment, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.

There are several other oddities as well. For instance, Mathie et al judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told it is because we accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and of course positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.

And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:

I have reason to believe that this review and meta-analysis is biased in favor of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analyses. Jacobs’ results were in favour of homeopathy, Walach’s were not.

For the domains where the rating of Walach’s study was less than that of the Jacobs study, please find citations from the original studies or my short summaries for the point in question.

Domain I: Sequence generation:
“The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (Medium risk of bias)

“For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (Low risk of bias)

Domain IIIb: Blinding of outcome assessor
“The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study. ”
Rating: UNCLEAR (Medium risk of bias)

“All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (Low risk of bias)

Domain V: Selective outcome reporting

Study protocol was published in 1991 prior to enrollment of participants, all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)

No prior publication of protocol, but a pilot study exists. However, this was published in 1993, only after the trial was performed in 1991. Primary outcome defined (duration of diarrhea) and reported, but table and graph do not match; secondary outcome (number of unformed stools on day 3) seems defined post hoc, for this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)

Domain VI: Other sources of bias:

Rating: NO (high risk of bias), no details given

Imbalance of group properties (size, weight and age of children), that might have some impact on course of disease, high impact of parallel therapy (rehydration) by far exceeding effect size of homeopathic treatment
Rating: YES (low risk of bias), no details given

In a nutshell: I fail to see the basis for the different ratings in the studies themselves. I assume bias of the authors of the review.


So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof for misconduct? I asked Mathie and he answered as follows: No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying. 

Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.

One thing that has often irritated me – alright, I admit it: sometimes it even infuriated me – is the pseudoscientific language of authors writing about alternative medicine. Reading publications in this area often seems to me like being in the middle of a game of ‘bullshit bingo’ (I am afraid that some of the commentators on this blog have importantly contributed to this phenomenon). In an article of 2004, I once discussed this issue in some detail and concluded that “… pseudo-scientific language … can be seen as an attempt to present nonsense as science…this misleads patients and can thus endanger their health…” For this paper, I had focussed on examples from the ‘bioresonance’- literature – more by coincidence than by design, I should add. I could have selected any other alternative treatment or diagnostic method; the use of pseudoscientific language is truly endemic in alternative medicine.

To give you a little flavour, here is the section of my 2004 paper where I used 5 quotes from recent articles on bioresonance and added a brief comment after each of them.

Quote No. 1

The biophysical control processes are superordinate to the biochemical processes. In the same way as the atomic processes result in chemical compounds the ultrafine biocommunication results in the biochemical processes. Control signals have an electromagnetic quality. Disturbing signals or ‘disturbing energies’ also have an electromagnetic quality. This is the reason why they can, for example, be conducted through cables and transformed into therapy signals by means of sophisticated electronic devices. The purpose is to clear the pathological part of the signals.’

Here the author uses highly technical language which, at first, sounds very complicated and scientific. However, after a second read, one is bound to discover that the words hide more than they reveal. In particular, the scientific tone distracts from the lack of logic in the argument. The basic message, once the pseudoscientific veneer is stripped away, seems to be the following. Living systems display electromagnetic phenomena. The electromagnetic energies that they rely upon can make us ill. The energies can also be transferred into an electronic instrument where they can be changed so that they don’t cause any more harm.

Quote No. 2

‘A very important advantage of the BICOM device as compared to the original form of the MORA-therapy in paediatry is the possibility to reduce the oscillation, a fact which meets much better the reaction pattern of the child and gives better results’ [3].

This paragraph essentially states that the BICOM instrument can change (the frequency or amplitude of) some sort of (electromagnetic) wave. We are told that, for children, this is preferable because of the way children tend to react. This would then be more effective.

Quote No. 3

‘The question how causative the Bioresonanz-Therapy can be must be answered in a differentiated way. The BR is in the first place effective on the informative level, which means on the ultrafine biokybernetical regulation level of the organism. This also includes the time factor and with that the functional aspect, and thus it influences the material-biochemical area of the body. The BRT is in comparison to other therapy procedures very high on the scale of causativeness, but it still remains in the physical level, and does not reach into the spiritual area. The freeing of the patient from his diseases can self evidently also lead to a change and improvement of conduct and attitudes and to a general wellbeing of the patient’ [4].

This amazing statement is again not easy to understand. If my reading is correct, the author essentially wants to tell us that BR interferes with the flow of information within organisms. The process is time-dependent and therefore affects function, physical and biochemical properties. Compared to other treatments, BR is more causative without affecting our spiritual sphere. As BR cures a disease, it can also change behaviour, attitudes and wellbeing.

Quote No. 4

‘MORA therapy is an auto-iso-therapy using the patient’s own vibrations in a wide range of the electromagnetic spectrum. Strictly speaking, we have hyperwaves in a six-dimensional cosmos with two hidden parameters (as predicted by Albert Einstein and others). Besides the physical plane there are six other planes of existence and the MORA therapy works in the biological plane, a region called the M-field, according to Sheldrake and Burkhard Heim’ [5].

Here we seem to be told that the MORA therapy is a self-treatment using the body’s own resources, namely a broad range of electromagnetic waves. These waves are hyperwaves in six dimensions, and their existence has already been predicted by Einstein. Six (or 7?) planes of existence seem to have been discovered, and the MORA therapy is operative in one of them.

Quote No. 5

‘The author presents an overall medical conception of the world between mass maximum and masslessness and completes it with the pair of concepts of subjectivity/objectivity. Three test procedures of the bioelectronic function diagnostics are presented and incorporated in addition to other procedures in this conception of the world. Therefore, in the sense of a holistic medicine, there is a useful indication for every medical procedure, because there are different objectives associated with each procedure. A one-sided assessment of the procedures does not do justice to the human being as a whole’ [6].

This author introduces a new concept of the world between maxima and minima of mass or objectivity. He has developed 3 tests of BR diagnosis that fit into the new concept. Therefore, holistically speaking, any therapy is good for something because each may have a different aim. One-sided assessments of such holistic treatments are too narrow bearing in mind the complexity of a human being.

The danger of pseudoscientific language in health care is obvious: it misleads patients, consumers, journalists, politicians, and everyone else (perhaps even some of the original authors?) into believing that nonsense is credible; to express it more bluntly: it is a method of cheating the unsuspecting public. Yes, the way I see it, it is a form of health fraud. Thus it leads to wrong therapeutic decisions and endangers public health.

I could easily get quite cross with the many authors who publish such drivel. But let’s not allow them to spoil our day; let’s take a different approach: let’s try to have some fun.

I herewith invite my readers to post quotes in the comments section of the most extraordinary excesses of pseudoscientific language that they have come across. If the result is sufficiently original, I might try to design a new BULLSHIT BINGO with it.

Rigorous research into the effectiveness of a therapy should tell us the truth about the ability of this therapy to treat patients suffering from a given condition — perhaps not one single study, but the totality of the evidence (as evaluated in systematic reviews) should achieve this aim. Yet, in the realm of alternative medicine (and probably not just in this field), such reviews are often highly contradictory.

A concrete example might explain what I mean.

There are numerous systematic reviews assessing the effectiveness of acupuncture for fibromyalgia syndrome (FMS). It is safe to assume that the authors of these reviews have all conducted comprehensive searches of the literature in order to locate all the published studies on this subject. Subsequently, they have evaluated the scientific rigour of these trials and summarised their findings. Finally, they have condensed all of this into an article which arrives at a certain conclusion about the value of the therapy in question. Understanding this process (outlined here only very briefly), one would expect that all the numerous reviews draw conclusions which are, if not identical, at least very similar.

However, the disturbing fact is that they are not remotely similar. Here are two which, in fact, are so different that one could assume they have evaluated a set of totally different primary studies (which, of course, they have not).

One recent (2014) review concluded that acupuncture for FMS has a positive effect, and acupuncture combined with western medicine can strengthen the curative effect.

Another recent review concluded that a small analgesic effect of acupuncture was present, which, however, was not clearly distinguishable from bias. Thus, acupuncture cannot be recommended for the management of FMS.

How can this be?

In contrast to most systematic reviews in conventional medicine, systematic reviews of alternative therapies are almost invariably based on a small number of primary studies (in the above case, the total number was only 7!). The quality of these trials is often low (all such reviews therefore end with the somewhat meaningless conclusion that more and better studies are needed).

So, the situation with primary studies of alternative therapies for inclusion into systematic reviews usually is as follows:

  • the number of trials is low
  • the quality of trials is even lower
  • the results are not uniform
  • the majority of the poor quality trials show a positive result (bias tends to generate false positive findings)
  • the few rigorous trials yield a negative result
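The fourth point – that bias tends to generate false-positive findings – can be made vivid with a toy simulation. This is a sketch of my own, not taken from any of the reviews discussed here; it simulates many small two-arm trials of a therapy with zero true effect, once without bias and once with a modest bias favouring the treatment arm (mimicking, say, unblinded assessment of a subjective outcome), and counts how often each version comes out ‘positive’.

```python
import random
import statistics

def simulate_trial(n_per_arm, true_effect, bias, rng):
    """One fictitious two-arm trial. `bias` shifts the treatment arm's
    outcomes upward, mimicking e.g. unblinded, subjective outcome measures."""
    control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    treated = [rng.gauss(true_effect + bias, 1.0) for _ in range(n_per_arm)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / n_per_arm
          + statistics.variance(treated) / n_per_arm) ** 0.5
    return diff / se > 1.96  # counted as a 'positive' trial

def positive_rate(n_trials, n_per_arm, bias, seed=1):
    """Fraction of simulated trials of a USELESS therapy (true effect = 0)
    that nevertheless come out positive."""
    rng = random.Random(seed)
    hits = sum(simulate_trial(n_per_arm, 0.0, bias, rng) for _ in range(n_trials))
    return hits / n_trials

print(positive_rate(2000, 30, bias=0.0))  # near the nominal false-positive rate
print(positive_rate(2000, 30, bias=0.6))  # far higher: flawed trials of a
                                          # useless therapy often look positive
```

The trial size (30 per arm) and bias (0.6 standard deviations) are illustrative assumptions, but the qualitative point does not depend on them: even a modest systematic shift makes a worthless treatment appear effective in a majority of small trials.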

Unfortunately, this means that the authors of systematic reviews summarising such confusing evidence often seem to feel at liberty to project their own pre-conceived ideas into their overall conclusion about the effectiveness of the treatment. Often the researchers are in favour of the therapy in question – in fact, this usually is precisely the attitude that motivated them to conduct a review in the first place. In other words, the frequently murky state of the evidence (as outlined above) can serve as a welcome invitation for personal bias to exert its influence and skew the overall conclusion. The final result is that the readers of such systematic reviews are being misled.

Authors who are biased in favour of the treatment will tend to stress that the majority of the trials are positive. Therefore the overall verdict has to be positive as well, in their view. The fact that most trials are flawed does not usually bother them all that much (I suspect that many fail to comprehend the effects of bias on the study results); they merely add to their conclusions that “more and better trials are needed” and believe that this meek little remark is sufficient evidence for their ability to critically analyse the data.

Authors who are not biased and have the necessary skills for critical assessment, on the other hand, will insist that most trials are flawed and therefore their results must be categorised as unreliable. They will also emphasise the fact that there are a few reliable studies and clearly point out that these are negative. Thus their overall conclusion must be negative as well.

In the end, enthusiasts will conclude that the treatment in question is at least promising, if not recommendable, while real scientists will rightly state that the available data are too flimsy to demonstrate the effectiveness of the therapy; as it is wrong to recommend unproven treatments, they will not recommend the treatment for routine use.
