Before a scientific paper is published in a journal, it is subjected to the process of peer review. Essentially, this means that the editor sends it to two or three experts in the field, asking them to review the submission. Reviewers usually receive no reward for this, yet the task they are asked to do can be tedious, difficult and time-consuming. Therefore, most reviewers think carefully before accepting such an invitation.
My friend Timothy Caulfield was recently invited by a medical journal to review a study of homeopathy. Here is his response to the editor as posted on Twitter:
I find myself regularly in similar situations. Yet, I have never responded in this way. Here is what I normally do:
- I have a look at the journal itself. If it is one of those SCAM publications, I tend to politely decline the invitation because, in my experience, their review process is farcical and not worth the effort. All too often, I have reviewed a paper that was of very poor quality and thus recommended rejecting it, yet the editor ignored my expert opinion and published the article nevertheless. This is why, several years ago, I decided enough is enough and no longer consider investing my time in such frustrating work.
- If the journal is of decent standing, I have a look at the submission the editor sent me. If it makes any sense at all, I consider reviewing it (obviously depending on whether I have the time and the expertise).
- If a decent journal invites me to review a nonsensical paper (I assume that was the case Timothy referred to), I find myself in the same position as my friend. But, contrary to Timothy, I normally take the trouble to write a critical review of the nonsensical submission. Why? The reason is simple: if I don’t do it, the editor will simply send it to another reviewer. Many journals allow authors to suggest reviewers of their choice. Thus, the editor might next send the submission to the person suggested by the author, who will most likely write a favourable review, thereby hugely increasing the chances that the paper gets published in a decent journal.
On this blog, we have seen repeatedly that even top journals occasionally publish rubbish papers. Perhaps they do so because well-intentioned experts react in the way my friend Timothy did above (as he failed to tell us which journal invited him, I might be wrong).
If we expect pseudoscience to disappear altogether, we are fighting a lost battle. It will always rear its ugly head in third-class journals. This is lamentable, but perhaps not so disastrous: by publishing little other than rubbish, these SCAM journals discredit themselves and will eventually be read only by pseudoscientists.
But we can do our bit to get rid of pseudoscience in decent journals. For this to happen, I think, rational thinkers need to accept invitations from such journals and do a proper review. And, of course, they can add to it a sentence or two about the futility of reviewing nonsense.
I am sure Timothy and I both want to eliminate pseudoscience as much as possible. In other words, we are in agreement about the aim, yet we differ in our approach. The question is: which is more effective?
I remember reading this paper entitled ‘Comparison of acupuncture and other drugs for chronic constipation: A network meta-analysis’ when it first came out. I considered discussing it on my blog, but then decided against it for a range of reasons which I shall explain below. The abstract of the original meta-analysis is copied below:
The objective of this study was to compare the efficacy and side effects of acupuncture, sham acupuncture and drugs in the treatment of chronic constipation. Randomized controlled trials (RCTs) assessing the effects of acupuncture and drugs for chronic constipation were comprehensively retrieved from electronic databases (such as PubMed, Cochrane Library, Embase, CNKI, Wanfang Database, VIP Database and CBM) up to December 2017. Additional references were obtained from review articles. With quality evaluations and data extraction, a network meta-analysis (NMA) was performed using a random-effects model under a frequentist framework. A total of 40 studies (n = 11032) were included: 39 were high-quality studies and 1 was a low-quality study. NMA showed that (1) acupuncture improved the symptoms of chronic constipation more effectively than drugs; (2) the ranking of treatments in terms of efficacy in diarrhoea-predominant irritable bowel syndrome was acupuncture, polyethylene glycol, lactulose, linaclotide, lubiprostone, bisacodyl, prucalopride, sham acupuncture, tegaserod, and placebo; (3) the ranking of side effects were as follows: lactulose, lubiprostone, bisacodyl, polyethylene glycol, prucalopride, linaclotide, placebo and tegaserod; and (4) the most commonly used acupuncture point for chronic constipation was ST25. Acupuncture is more effective than drugs in improving chronic constipation and has the least side effects. In the future, large-scale randomized controlled trials are needed to prove this. Sham acupuncture may have curative effects that are greater than the placebo effect. In the future, it is necessary to perform high-quality studies to support this finding. Polyethylene glycol also has acceptable curative effects with fewer side effects than other drugs.
END OF 1st QUOTE
This meta-analysis has now been retracted. Here is what the journal editors have to say about the retraction:
After publication of this article, concerns were raised about the scientific validity of the meta-analysis and whether it provided a rigorous and accurate assessment of published clinical studies on the efficacy of acupuncture or drug-based interventions for improving chronic constipation. The PLOS ONE Editors re-assessed the article in collaboration with a member of our Editorial Board and noted several concerns including the following:
- Acupuncture and related terms are not mentioned in the literature search terms, there are no listed inclusion or exclusion criteria related to acupuncture, and the outcome measures were not clearly defined in terms of reproducible clinical measures.
- The study included acupuncture and electroacupuncture studies, though this was not clearly discussed or reported in the Title, Methods, or Results.
- In the “Routine paired meta-analysis” section, both acupuncture and sham acupuncture groups were reported as showing improvement in symptoms compared with placebo. This finding and its implications for the conclusions of the article were not discussed clearly.
- Several included studies did not meet the reported inclusion criteria requiring that studies use adult participants and assess treatments of >2 weeks in duration.
- Data extraction errors were identified by comparing the dataset used in the meta-analysis (S1 Table) with details reported in the original research articles. Errors included aspects of the study design such as the experimental groups included in the study, the number of study arms in the trial, number of participants, and treatment duration. There are also several errors in the Reference list.
- With regard to side effects, 22 out of 40 studies were noted as having reported side effects. It was not made clear whether side effects were assessed as outcome measures for the other 18 studies, i.e. did the authors collect data clarifying that there were no side effects or was this outcome measure not assessed or reported in the original article. Without this clarification the conclusion comparing side effect frequencies is not well supported.
- The network geometry presented in Fig 5 is not correct and misrepresents some of the study designs, for example showing two-arm studies as three-arm studies.
- The overall results of the meta-analysis are strongly reliant on the evidence comparing acupuncture versus lactulose treatment. Several of the trials that assessed this comparison were poorly reported, and the meta-analysis dataset pertaining to these trials contained data extraction errors. Furthermore, potential bias in studies assessing lactulose efficacy in acupuncture trials versus lactulose efficacy in other trials was not sufficiently addressed.
While some of the above issues could be addressed with additional clarifications and corrections to the text, the concerns about study inclusion, the accuracy with which the primary studies’ research designs and data were represented in the meta-analysis, and the reporting quality of included studies directly impact the validity and accuracy of the dataset underlying the meta-analysis. As a consequence, we consider that the overall conclusions of the study are not reliable. In light of these issues, the PLOS ONE Editors retract the article. We apologize that these issues were not adequately addressed during pre-publication peer review.
LZ disagreed with the retraction. YM and XD did not respond.
END OF 2nd QUOTE
Let me start by explaining why I initially decided not to discuss this paper on my blog. Already the first sentence of the abstract put me off, and an entire chorus of alarm-bells started ringing once I read further.
- A meta-analysis is not a ‘study’ in my book, and I am somewhat wary of researchers who employ odd or imprecise language.
- We all know (and I have discussed it repeatedly) that studies of acupuncture frequently fail to report adverse effects (in doing this, their authors violate research ethics!). So, how can it be a credible aim of a meta-analysis to compare side-effects in the absence of adequate reporting?
- The methodology of a network meta-analysis is complex, and I do not know a great deal about it.
- Several things seemed ‘too good to be true’, for instance, the funnel-plot and the overall finding that acupuncture is the best of all therapeutic options.
- Looking at the references, I quickly confirmed my suspicion that most of the primary studies were in Chinese.
In retrospect, I am glad I did not tackle the task of criticising this paper; I would probably not have made nearly as good a job of it as PLOS ONE eventually did. But it was only after someone raised concerns that the paper was re-reviewed and all the defects outlined above came to light.
While some of my concerns listed above may have been trivial, my last point is the one that troubles me a lot. As it also relates to the dozens of Cochrane reviews that currently come out of China, it is worth our attention, I think. The problem, as I see it, is as follows:
- Chinese (acupuncture, TCM and perhaps also other) trials almost invariably report positive findings, as we have discussed ad nauseam on this blog.
- Data fabrication seems to be rife in China.
- This means that there is good reason to be suspicious of such trials.
- Many of the reviews that currently flood the literature are based predominantly on primary studies published in Chinese.
- Unless one is able to read Chinese, there is no way of evaluating these papers.
- Therefore reviewers of journal submissions tend to rely on what the Chinese review authors write about the primary studies.
- As data fabrication seems to be rife in China, this trust might often not be justified.
- At the same time, Chinese researchers are VERY keen to publish in top Western journals (this is considered a great boost to their career).
- The consequence of all this is that reviews of this nature might be misleading, even if they are published in top journals.
I have been struggling with this problem for many years and have tried my best to alert people to it. However, my efforts do not seem to have had even the slightest success. The stream of such reviews has only increased and is now a true worry (at least for me). My suspicion – and I stress that it is merely that – is that, if one were to rigorously re-evaluate these reviews, the majority of them would need to be retracted, just like the paper above. That would mean hundreds of papers disappearing because they are misleading, a thought that should give everyone interested in reliable evidence sleepless nights!
So, what can be done?
Personally, I now distrust all of these papers but, I admit, that is not a good, constructive solution. It would be better if journal editors (including, of course, those at the Cochrane Collaboration) allocated such submissions to reviewers who:
- are demonstrably able to conduct a CRITICAL analysis of the paper in question,
- can read Chinese,
- have no conflicts of interest.
In the case of an acupuncture review, this would narrow it down to perhaps just a handful of experts worldwide. This probably means that my suggestion is simply not feasible.
But what other choice do we have?
One could oblige the authors of all submissions to include full, authorised English translations of non-English articles. I think this might work, but it is, of course, tedious and expensive. In view of the size of the problem (I estimate that there must be around 1,000 reviews out there to which it applies), I do not see a better solution.
(I would be truly thankful if someone had a better one and told us.)
Did you know that I falsified my qualifications?
Neither did I!
But this is exactly what has been posted on Amazon as a review of my book HOMEOPATHY, THE UNDILUTED FACTS. The Amazon review in question is dated 7 August 2018 and authored by ‘Paul’. As it might not be there for long (because it is clearly abusive) I copied it for you:
Edzard Ernst falsified his qualifications to get a job as a professor. When the university found out they fired him. This book is as false as the Mr Ernst
Over the years, I have received so many insults that I started to collect them and began to quite like them. I have even posted selections on this blog (see for instance here and here). Some are really funny, and others are enlightening because they reflect the mind-set of their authors. All of them show that the author has run out of arguments; thus they really are tiny little victories over unreason, I think.
But, somehow, this new one is different. It is actionable, no doubt, and packs an unusual number of untruths into so few words.
- I never falsified anything and certainly not my qualification (which is that of a doctor of medicine). If I had, I would be writing these lines from behind bars.
- And if I had done such a thing, I would not have done it ‘to get a job as a professor’ – I had twice been appointed to professorships before I came to the UK (Hannover and Vienna).
- My university did not find out, mainly because there was nothing to find out.
- They did not fire me, but I went into early retirement. Subsequently, they even re-appointed me for several months.
- My book is not false; I don’t even know what a ‘false book’ is (is it a book that is not really a book but something else?).
- And finally, for Paul, I am not Mr Ernst, but Prof Ernst.
I don’t know who Paul is. And I don’t know whether he has even read the book he pretends to be commenting on (from what I see, I think this is very unlikely). I am sure, however, that he did not read my memoir where all these things are explained in full detail. And I certainly do not hope he ever reads it – if he did, he might claim:
This book is as false as the Mr Ernst
Distant healing is one of the most bizarre yet popular forms of alternative medicine. Healers claim they can transmit ‘healing energy’ towards patients to enable them to heal themselves. There have been many trials testing the effectiveness of the method, and the general consensus amongst critical thinkers is that all variations of ‘energy healing’ rely entirely on a placebo response. A recent and widely publicised paper seems to challenge this view.
This article has, according to its authors, two aims. Firstly it reviews healing studies that involved biological systems other than ‘whole’ humans (e.g., studies of plants or cell cultures) that were less susceptible to placebo-like effects. Secondly, it presents a systematic review of clinical trials on human patients receiving distant healing.
All the included studies examined the effects upon a biological system of the explicit intention to improve the wellbeing of that target; 49 non-whole human studies and 57 whole human studies were included.
The combined weighted effect size for non-whole human studies yielded a highly significant result (r = 0.258) in favour of distant healing. However, outcomes were heterogeneous and correlated with blind ratings of study quality; 22 studies that met minimum quality thresholds gave a reduced but still significant weighted r of 0.115.
Whole human studies yielded a small but significant effect size of r = 0.203. Outcomes were again heterogeneous and correlated with methodological quality ratings; 27 studies that met threshold quality levels gave r = 0.224.
From these findings, the authors drew the following conclusions: Results suggest that subjects in the active condition exhibit a significant improvement in wellbeing relative to control subjects under circumstances that do not seem to be susceptible to placebo and expectancy effects. Findings with the whole human database suggests that the effect is not dependent upon the previous inclusion of suspect studies and is robust enough to accommodate some high profile failures to replicate. Both databases show problems with heterogeneity and with study quality and recommendations are made for necessary standards for future replication attempts.
In a press release, the authors warned: the data need to be treated with some caution in view of the poor quality of many studies and the negative publishing bias; however, our results do show a significant effect of healing intention on both human and non-human living systems (where expectation and placebo effects cannot be the cause), indicating that healing intention can be of value.
My thoughts on this article are not very complimentary, I am afraid. The problems are, it seems to me, too numerous to discuss in detail:
- The article is written such that it is exceedingly difficult to make sense of it.
- It was published in a journal which is not exactly known for its cutting edge science; this may seem a petty point but I think it is nevertheless important: if distant healing works, we are confronted with a revolution in the understanding of nature – and surely such a finding should not be buried in a journal that hardly anyone reads.
- The authors seem embarrassingly inexperienced in conducting and publishing systematic reviews.
- There is very little (self-) critical input in the write-up.
- A critical attitude is necessary, as the primary studies tend to be by evangelic believers in and amateur enthusiasts of healing.
- The article has no data table where the reader might learn the details about the primary studies included in the review.
- It also has no table to inform us in sufficient detail about the quality assessment of the included trials.
- It seems to me that some published studies of distant healing are missing.
- The authors ignored all studies that were not published in English.
- The method section lacks detail, and it would therefore be impossible to conduct an independent replication.
- Even if one ignored all the above problems, the effect sizes are small and would not be clinically important.
- The research was sponsored by the ‘Confederation of Healing Organisations’ and some of the comments look as though the sponsor had a strong influence on the phraseology of the article.
Given these reservations, my conclusion from an analysis of the primary studies of distant healing would be dramatically different from the one published by the authors: DESPITE A SIZABLE AMOUNT OF PRIMARY STUDIES ON THE SUBJECT, THE EFFECTIVENESS OF DISTANT HEALING REMAINS UNPROVEN. AS THIS THERAPY IS DEVOID OF ANY BIOLOGICAL PLAUSIBILITY, FURTHER RESEARCH IN THIS AREA SEEMS NOT WARRANTED.
When someone has completed a scientific project, it is customary to publish it [‘unpublished science is no science’, someone once told me many years ago]. To do so, he needs to write it up and submit it to a scientific journal. The editor of this journal will then submit it to a process called ‘peer review’.
What does ‘peer review’ entail? Well, it means that 2-3 experts are asked to critically assess the paper in question, make suggestions as to how it can be improved and submit a recommendation as to whether or not the article deserves to be published.
Peer review has many pitfalls but, so far, nobody has come up with a solution that is convincingly better. Many scientists are under pressure to publish [‘publish or perish’], and therefore some people resort to cheating. A most spectacular case of fraudulent peer review has been reported recently in this press release:
London, UK (08 July 2014) – SAGE announces the retraction of 60 articles implicated in a peer review and citation ring at the Journal of Vibration and Control (JVC). The full extent of the peer review ring has been uncovered following a 14 month SAGE-led investigation, and centres on the strongly suspected misconduct of Peter Chen, formerly of National Pingtung University of Education, Taiwan (NPUE) and possibly other authors at this institution.
In 2013 the then Editor-in-Chief of JVC, Professor Ali H. Nayfeh, and SAGE became aware of a potential peer review ring involving assumed and fabricated identities used to manipulate the online submission system SAGE Track powered by ScholarOne Manuscripts™. Immediate action was taken to prevent JVC from being exploited further, and a complex investigation throughout 2013 and 2014 was undertaken with the full cooperation of Professor Nayfeh and subsequently NPUE.
In total 60 articles have been retracted from JVC after evidence led to at least one author or reviewer being implicated in the peer review ring. Now that the investigation is complete, and the authors have been notified of the findings, we are in a position to make this statement.
While investigating the JVC papers submitted and reviewed by Peter Chen, it was discovered that the author had created various aliases on SAGE Track, providing different email addresses to set up more than one account. Consequently, SAGE scrutinised further the co-authors of and reviewers selected for Peter Chen’s papers; these names appeared to form part of a peer review ring. The investigation also revealed that on at least one occasion, the author Peter Chen reviewed his own paper under one of the aliases he had created.
Unbelievable? Perhaps, but sadly it is true; some scientists seem to be criminally ingenious when it comes to getting their dodgy articles into peer-reviewed journals.
And what does this have to do with ALTERNATIVE MEDICINE, you may well ask. The Journal of Vibration and Control is not even medical and certainly would never consider publishing articles on alternative medicine. Such papers go to one of the many [I estimate more than 1,000] journals that cover either alternative medicine in general or any of the modalities that fall under this wide umbrella. Most of these journals, of course, pride themselves on being peer-reviewed – and, at least nominally, that is correct.
I have been on the editorial board of most of the more important journals in alternative medicine, and I cannot help thinking that their peer review process is not all that dissimilar from the fraudulent scheme set up by Peter Chen and disclosed above. What happens in alternative medicine is roughly as follows:
- a researcher submits a paper for publication,
- the editor sends it out for peer review,
- the peer reviewers are either those suggested by the original author or members of the editorial board of the journal,
- in either case, the reviewers are more than likely to be uncritical and recommend publication,
- in the end, peer review turns out to be a farcical window dressing exercise with no consequence,
- thus even very poor research and pseudo-research are being published abundantly.
The editorial boards of journals of alternative medicine tend to be devoid of experts who are critical about the subject at hand. If you think that I am exaggerating, have a look at the editorial board members of ‘HOMEOPATHY’ (or any other journal of alternative medicine) and tell me who might qualify as a critic of homeopathy. When the editor, Peter Fisher, recently fired me from his board because he felt I had tarnished the image of homeopathy, this panel lost the only person who understood the subject matter and, at the same time, was critical about it (the fact that the website still lists me as an editorial board member is merely a reflection of how slow things are in the world of homeopathy: Fisher fired me more than a year ago).
The point I am trying to make is simple: peer review is never a perfect method but when it is set up to be deliberately uncritical, it cannot possibly fulfil its function to prevent the publication of dodgy research. In this case, the quality of the science will be inadequate and generate false-positive messages that mislead the public.
If we search on ‘Medline’ for ‘complementary alternative medicine’ (CAM), we currently get about 13,000 hits. A little graph on the side of the page demonstrates that, during the last 4 years, the number of articles on this subject has grown exponentially.
Surely, this must be very good news: such intense research activity will soon tell us exactly which alternative treatments work for which conditions and which don’t.
I beg to differ. Let me explain why.
The same ‘Medline’ search informs us that the majority of the recent articles were published in an open access journal called ‘Evidence-Based Complementary and Alternative Medicine’ (eCAM). For example, of the 80 most recent articles listed in Medline (on 26/5/2014), 53 came from that journal. The publication frequency of eCAM and its increase in recent years beggars belief: in 2011, they published just over 500 articles, which is already a high number; but, in 2012, the figure had risen to >800, and in 2013 it was >1,300 (the equivalent 2013 figure for the BMJ/BMJ Open, by comparison, is 4, and that for another alt med journal, e.g. Forsch Komplement, is 10).
How do they do it? How can eCAM be so dominant in publishing alt med research? The trick seems to be fairly simple.
Let’s assume you are an alt med researcher and you have an article that you would like to see published. Once you submit it to eCAM, your paper is sent to one of the ~150 members of the editorial board. These people are almost all strong proponents of alternative medicine; critics are a true rarity in this group. At this stage, you are able to suggest the peer reviewers for your submission (all who ever accepted this task are listed on the website; they amount to several thousand!), and it seems that, with the vast majority of submissions, the authors’ suggestions are being followed.
It goes without saying that most researchers suggest colleagues for peer reviewing who are not going to reject their work (the motto seems to be “if you pass my paper, I will pass yours”). Therefore, even fairly flimsy bits of research pass this peer review process and get quickly published online in eCAM.
This process explains a lot, I think: 1) the extraordinarily high number of articles published; 2) why currently more than 50% of all alt med research originates from eCAM; 3) why so much of it is utter rubbish.
Even the mere titles of some of the articles might demonstrate my point. A few examples have to suffice:
- Color distribution differences in the tongue in sleep disorder
- Wen-dan decoction improves negative emotions in sleep-deprived rats by regulating orexin-a and leptin expression.
- Yiqi Huoxue Recipe Improves Heart Function through Inhibiting Apoptosis Related to Endoplasmic Reticulum Stress in Myocardial Infarction Model of Rats.
- Protective Effects of Bu-Shen-Huo-Xue Formula against 5/6 Nephrectomy-Induced Chronic Renal Failure in Rats
- Effects and Mechanisms of Complementary and Alternative Medicine during the Reproductive Process
- Evidence-based medicinal plants for modern chronic diseases
- Transforming Pain into Beauty: On Art, Healing, and Care for the Spirit
This system of uncritical peer review and fast online publication seems to suit many of the people involved in this process: the journal’s owners are laughing all the way to the bank; there is a publication charge of US$2,000 per article and, in 2013, the income of eCAM must therefore have been well over US$2,000,000. The researchers are equally delighted; they get even their flimsiest papers published (remember: ‘publish or perish’!). And the evangelic believers in alternative medicine are pleased because they can now claim that their field is highly research-active and that there is plenty of evidence to support the use of this or that therapy.
But there are others who are not served well by eCAM’s habit of publishing irrelevant, low-quality articles:
- professionals who would like to advance health care and want to see reliable evidence as to which treatments work and which don’t,
- the public who, in one way or another, pay for all this and might assume that published research tends to be relevant and reliable,
- the patients who have given their time to researchers in the hope that their gift will improve health care,
- ill individuals who hope that alternative treatments might relieve their suffering,
- politicians who rely on research to be reliable in order to arrive at the right decisions.
Come to think of it, the vast majority of people should be less than enchanted with eCAM and similar journals.
Musculoskeletal and rheumatic conditions, often just called “arthritis” by lay people, bring more patients to alternative practitioners than any other type of disease. It is therefore particularly important to know whether alternative medicines (AMs) demonstrably generate more good than harm for such patients. Most alternative practitioners, of course, firmly believe in what they are doing. But what does the reliable evidence show?
To find out, ‘Arthritis Research UK’ sponsored a massive project lasting several years to review the literature and critically evaluate the trial data. They convened a panel of experts (I was one of them) to evaluate all the clinical trials that are available in 4 specific clinical areas. The results for those forms of AM that are taken by mouth or applied topically were published some time ago; now the report on practitioner-based treatments, especially written for lay people, has been published. It covers the following 25 modalities:
Chiropractic (spinal manipulation)
Kinesiology (applied kinesiology)
Magnet therapy (static magnets)
Osteopathy (spinal manipulation)
Qigong (internal qigong)
Our findings are somewhat disappointing: only very few treatments were shown to be effective.
In the case of rheumatoid arthritis, 24 trials were included with a total of 1,500 patients. The totality of this evidence failed to provide convincing evidence that any form of AM is effective for this particular condition.
For osteoarthritis, 53 trials with a total of ~6,000 patients were available. They showed reasonably sound evidence only for two treatments: Tai chi and acupuncture.
Fifty trials were included with a total of ~3,000 patients suffering from fibromyalgia. The results provided weak evidence for Tai chi and relaxation-therapies, as well as more conclusive evidence for acupuncture and massage therapy.
Low back pain had attracted more research than any of the other diseases: 75 trials with ~11,600 patients. The evidence for Alexander Technique, osteopathy and relaxation therapies was promising but not ultimately convincing, and reasonably good evidence in support of yoga and acupuncture was also found.
The majority of the experts felt that the therapies in question did not frequently cause harm, but there were two important exceptions: osteopathy and chiropractic. For both, the report noted the existence of frequent yet mild, as well as serious but rare adverse effects.
As virtually all osteopaths and chiropractors earn their living by treating patients with musculoskeletal problems, the report comes as an embarrassment for these two professions. In particular, our conclusions about chiropractic were quite clear:
There are serious doubts as to whether chiropractic works for the conditions considered here: the trial evidence suggests that it’s not effective in the treatment of fibromyalgia and there’s only little evidence that it’s effective in osteoarthritis or chronic low back pain. There’s currently no evidence for rheumatoid arthritis.
Our point that chiropractic is not demonstrably effective for chronic back pain deserves some further comment, I think. It seems to contradict the NICE guideline, as chiropractors will surely be quick to point out. How can this be?
One explanation is that, since the NICE guidelines were drawn up, new evidence has emerged which was not positive. The recent Cochrane review, for instance, concludes that spinal manipulation “is no more effective for acute low-back pain than inert interventions, sham SMT or as adjunct therapy”.
Another explanation could be that the experts on the panel writing the NICE-guideline were less than impartial towards chiropractic and thus arrived at false-positive or over-optimistic conclusions.
Chiropractors might say that my presence on the ‘Arthritis Research’ panel suggests that we were biased against chiropractic. If anything, the opposite is true: firstly, I am not even aware of having a bias against chiropractic, and no chiropractor has ever demonstrated otherwise; all I ever aim at (in my scientific publications) is to produce fair, unbiased but critical assessments of the existing evidence. Secondly, I was only one of a total of 9 panel members. As the following list shows, the panel included three experts in AM, and most sceptics would probably categorise two of them (Lewith and MacPherson) as being clearly pro-AM:
Professor Michael Doherty – professor of rheumatology, University of Nottingham
Professor Edzard Ernst – emeritus professor of complementary medicine, Peninsula Medical School
Margaret Fisken – patient representative, Aberdeenshire
Dr Gareth Jones (project lead) – senior lecturer in epidemiology, University of Aberdeen
Professor George Lewith – professor of health research, University of Southampton
Dr Hugh MacPherson – senior research fellow in health sciences, University of York
Professor Gary Macfarlane (chair of committee) – professor of epidemiology, University of Aberdeen
Professor Julius Sim – professor of health care research, Keele University
Jane Tadman – representative from Arthritis Research UK, Chesterfield
What can we conclude from all that? I think it is safe to say that the evidence for practitioner-based AMs as a treatment of the 4 named conditions is disappointing. In particular, chiropractic is not a demonstrably effective therapy for any of them. This, of course, begs the question: for what condition is chiropractic proven to work? I am not aware of any; are you?
The question whether spinal manipulation is an effective treatment for infant colic has attracted much attention in recent years. The main reason for this is, of course, that a few years ago Simon Singh disclosed in a comment that the British Chiropractic Association (BCA) was promoting chiropractic treatment for this and several other childhood conditions on their website. Simon famously wrote “they (the BCA) happily promote bogus treatments” and was subsequently sued for libel by the BCA. Eventually, the BCA lost the libel action as well as lots of money, and the entire chiropractic profession ended up with enough egg on their faces to cook omelets for all their patients.
At the time, the BCA had taken advice from several medical and legal experts; one of their medical advisers, I was told, was Prof George Lewith. Intriguingly, he and several others have just published a Cochrane review of manipulative therapies for infant colic. Here are the unabbreviated conclusions from their article:
“The studies included in this meta-analysis were generally small and methodologically prone to bias, which makes it impossible to arrive at a definitive conclusion about the effectiveness of manipulative therapies for infantile colic. The majority of the included trials appeared to indicate that the parents of infants receiving manipulative therapies reported fewer hours crying per day than parents whose infants did not, based on contemporaneous crying diaries, and this difference was statistically significant. The trials also indicate that a greater proportion of those parents reported improvements that were clinically significant. However, most studies had a high risk of performance bias due to the fact that the assessors (parents) were not blind to who had received the intervention. When combining only those trials with a low risk of such performance bias, the results did not reach statistical significance. Further research is required where those assessing the treatment outcomes do not know whether or not the infant has received a manipulative therapy. There are inadequate data to reach any definitive conclusions about the safety of these interventions.”
Cochrane reviews also carry a “plain language” summary which might be easier to understand for lay people. And here are the conclusions from this section of the review:
The studies involved too few participants and were of insufficient quality to draw confident conclusions about the usefulness and safety of manipulative therapies. Although five of the six trials suggested crying is reduced by treatment with manipulative therapies, there was no evidence of manipulative therapies improving infant colic when we only included studies where the parents did not know if their child had received the treatment or not. No adverse effects were found, but they were only evaluated in one of the six studies.
If we read it carefully, this article seems to confirm that there is no reliable evidence to suggest that manipulative therapies are effective for infant colic. In the analyses, the positive effect disappears if the parents are properly blinded; thus it is due to expectation or placebo. The studies that seem to show a positive effect are false positives, and spinal manipulation is, in fact, not effective.
The analyses disclose another intriguing aspect: most trials failed to mention adverse effects. This confirms the findings of our own investigation and amounts to a remarkable breach of publication ethics (nobody seems to be astonished by this fact; is it normal that chiropractic researchers ignore generally accepted rules of ethics?). It also reflects badly on the ability of the investigators of the primary studies to be objective. They seem to aim at demonstrating only the positive effects of their intervention; science is, however, not about confirming the researchers’ prejudices, it is about testing hypotheses.
The most remarkable thing about the new Cochrane review is, I think, the incongruence between the actual results and the authors’ conclusion. To a critical observer, the former are clearly negative but the latter sound almost positive. I think this begs the question about the possibility of reviewer bias.
We have recently discussed on this blog whether reviews by one single author are necessarily biased. The new Cochrane review has 6 authors, and it seems to me that its conclusions are considerably more biased than my single-author review of chiropractic spinal manipulation for infant colic; in 2009, I concluded simply that “the claim [of effectiveness] is not based on convincing data from rigorous clinical trials”.
Which of the two conclusions describes the facts more helpfully and more accurately?
I think I rest my case.