Guest post by Norbert Aust, Udo Endruscheit, and Edzard Ernst
How do we know whether a treatment is reasonable or just some so-called alternative medicine (SCAM) that is at best useless? A simple answer is that the former is evidence-based, while the latter is not. But how can we tell the difference? High-quality studies, with independent replications or even a systematic review, are the sort of things we are usually looking for. But there is an underlying assumption, namely that, in science, bogus studies are prevented from polluting the scientific database or, if such trials have emerged, there are ways to identify and eliminate them.
And what if this assumption is wrong?
What if respectable universities and research organizations venture into the realm of pseudoscience, either knowingly or because it has slipped their attention?
What if the editorial board of a top journal passes bogus studies to peer review?
What if such a paper is eventually reviewed by a proponent of the implausible therapy?
What if the readers of the article, once it is published, are too lethargic to object and do not write letters to the editor in protest?
And what if skeptics do formulate a protest but the journal editor refuses to publish it?
Well, if all the checks that should prevent faulty results from entering the body of scientific knowledge fail, we have fake evidence: a study that looks like sound science but that, in fact, is invalid. It is not hard to imagine what would happen if SCAM therapies were supported by seemingly respectable studies published in top journals. The fake evidence would accumulate as part of the body of evidence and eventually enter mainstream clinical practice, education, politics, etc., etc. Thus the reputation of bogus therapies would grow unjustifiably.
If you think this cannot happen, you are wrong. After the infamous study by Frass et al. about homeopathy as an add-on treatment for lung cancer, another homeopathy paper was published in 2022 by Gaertner et al. in Pediatric Research (PR), a Medline-indexed journal with a two-year impact factor of 3.95 belonging to the Nature group of journals. According to this meta-analysis, ‘individualized homeopathy showed a clinically relevant and statistically robust effect in the treatment of ADHD’. Shortly after the publication of this paper, we sent a letter to the editor to point out the shortcomings of this study. Here it is:
With this letter, we would like to comment on the systematic review and meta-analysis on childhood ADHD by Gaertner et al. recently published in your journal.
First of all, we are surprised that your journal, which is affiliated with Nature, would publish a paper on a treatment that has no a priori plausibility at all; any positive findings on such a treatment can only be false positives. This review is no exception, as will be seen presently.
Our concerns are:
Of the six studies included, three were mere pilot trials (Fibert_2019, Jacobs_2005, Oberai_2013), which cannot provide reliable evidence because of the shortcomings inherent in pilot studies. Three of the six trials show severe problems with blinding (Fibert_2016, Fibert_2019, Oberai_2013), two of them affecting both the participants and the test personnel. This usually leads to massive bias in favour of the treatment [citation: Cochrane Handbook].
Then we compared data from two trials with the data reported in the review and found some major misrepresentations:
(1) Jacobs et al. report an improvement in the T-score of their main outcome (CGI-P) of 4.1 for homeopathy and 9.1 for controls; that is, the placebo outperformed the homeopathic intervention. Yet the authors give an effect size of 0.272 in favour of homeopathy, which is the opposite of what the trial found.
(2) Oberai et al. report effect sizes for their three main outcomes of 0.22, 0.59 and 0.54 (CPRS-R, CGISS, CGIIS respectively). There is no way that this yields a pooled effect size of 1.436 as given in the review.
We conclude that the positive result obtained by the authors is due to a combination of the inclusion of biased trials unsuitable for building evidence and some major misreporting of study outcomes.
Our recommendation would be that the authors reconsider their review and correct their report. Perhaps the editors would like to add a cautionary notice to the paper, if not withdraw it completely.
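The arithmetic behind point (2) is worth spelling out. A pooled effect size is a weighted average of the individual effect sizes, so whatever weighting scheme is used, it must lie between the smallest and the largest individual effect. A minimal sketch (the weights below are hypothetical, since the review does not report its weighting; the three effect sizes are those reported by Oberai et al.):

```python
def pooled_effect(effects, weights):
    """Fixed-effect pooling: a weighted mean of study effect sizes."""
    total = sum(weights)
    return sum(e * w for e, w in zip(effects, weights)) / total

effects = [0.22, 0.59, 0.54]  # CPRS-R, CGISS, CGIIS (Oberai et al.)

# Whatever non-negative weights one picks (these are made-up examples),
# the pooled value stays between min(effects) and max(effects):
for weights in ([1, 1, 1], [0.2, 0.5, 0.3], [10, 1, 1]):
    pooled = pooled_effect(effects, weights)
    assert min(effects) <= pooled <= max(effects)

# Hence a pooled effect of 1.436 cannot arise from these three outcomes.
```

With equal weights, for instance, the pooled value is simply the mean, 0.45, far below the 1.436 reported in the review.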
In June 2023, a full year after our submission, we were informed that Pediatric Research would not publish our criticism because the priority given to it was not sufficient to justify publication. But we were assured that the journal would take the matter seriously, investigate it, and take appropriate editorial action. As of today (end of June 2023), however, no expression of concern has been published.
Did the journal receive other comments or criticisms related to the paper in question? Apparently not; at least, none was published, and the paper remains unchallenged to this day. This means that it might be taken for reliable evidence on the effectiveness of homeopathy and mislead patients, carers, practitioners, decision-makers, etc.
We feel this is unacceptable and therefore again wrote to the editors asking to reconsider their decision. Here is our letter:
Together with my co-authors, I would like to comment on your decision regarding our letter to the editor about an extremely faulty and misleading paper that may well cause harm to patients. In fact, we find it very hard to accept your decision not to publish our comment.
We understand that Pediatric Research is a high-impact journal with a two-year IF of nearly 4. Your journal is a member of COPE and is indexed by quite a few first-rank institutions. By all standards, any reader will be convinced that a paper published in Pediatric Research is based on solid research and that its results were derived by rigorous methodology and are as reliable as can be. Especially if the paper remains unchallenged by any reader’s comments for a full year after publication. This is your responsibility to the scientific community, and to the children who might receive treatment based on knowledge spread through your journal.
How, then, can it be that an article about homeopathy, a thoroughly implausible doctrine, in the treatment of ADHD is published in Pediatric Research, in which the authors come to the conclusion “that individualized homeopathy showed a clinically relevant and statistically robust effect in the treatment of ADHD”?
In our comment, we point out that the authors made a lot of errors, to put it mildly: they ignore the doubtful quality of the studies they included in their meta-analysis, they did not adhere to their own exclusion criteria, the data they report do not match the findings of the studies from which they were allegedly taken, and the one study driving the results is a mere pilot study.
The reason you give for not publishing our letter is that it was not given enough priority to justify publication. We would like to know: which issues could conceivably receive higher priority than the fact that a paper in your journal is downright wrong and misleading?
What does it take for you to deem a comment important? To date, the paper remains unchallenged by any reader’s comments, so apparently there was no other letter to the editor that might have been given higher priority than ours.
We ask you to review your decision or, better still, consider a retraction of the paper altogether. If so, an expression of concern should be issued at once. After all, the COPE guidelines for retraction state “clear evidence that the findings are unreliable, either as a result of major error (…), or as a result of fabrication (…) or falsification (…)” as a reason to consider retraction.
Otherwise, the malpractice of homeopathy will have first-class ‘evidence’ that will help promote homeopathy to parents and their children.
Watch this space!