
When someone has completed a scientific project, it is customary to publish it [‘unpublished science is no science’, someone once told me many years ago]. To do so, the researcher needs to write it up and submit it to a scientific journal. The editor of that journal will then put it through a process called ‘peer review’.

What does ‘peer review’ entail? Well, it means that 2-3 experts are asked to critically assess the paper in question, make suggestions as to how it can be improved and submit a recommendation as to whether or not the article deserves to be published.

Peer review has many pitfalls but, so far, nobody has come up with a solution that is convincingly better. Many scientists are under pressure to publish [‘publish or perish’], and therefore some people resort to cheating. A most spectacular case of fraudulent peer review has been reported recently in this press release:

SAGE statement on Journal of Vibration and Control

London, UK (08 July 2014) – SAGE announces the retraction of 60 articles implicated in a peer review and citation ring at the Journal of Vibration and Control (JVC). The full extent of the peer review ring has been uncovered following a 14 month SAGE-led investigation, and centres on the strongly suspected misconduct of Peter Chen, formerly of National Pingtung University of Education, Taiwan (NPUE) and possibly other authors at this institution.

In 2013 the then Editor-in-Chief of JVC, Professor Ali H. Nayfeh, and SAGE became aware of a potential peer review ring involving assumed and fabricated identities used to manipulate the online submission system SAGE Track powered by ScholarOne Manuscripts™. Immediate action was taken to prevent JVC from being exploited further, and a complex investigation throughout 2013 and 2014 was undertaken with the full cooperation of Professor Nayfeh and subsequently NPUE.

In total 60 articles have been retracted from JVC after evidence led to at least one author or reviewer being implicated in the peer review ring. Now that the investigation is complete, and the authors have been notified of the findings, we are in a position to make this statement.

While investigating the JVC papers submitted and reviewed by Peter Chen, it was discovered that the author had created various aliases on SAGE Track, providing different email addresses to set up more than one account. Consequently, SAGE scrutinised further the co-authors of and reviewers selected for Peter Chen’s papers; these names appeared to form part of a peer review ring. The investigation also revealed that on at least one occasion, the author Peter Chen reviewed his own paper under one of the aliases he had created.

Unbelievable? Perhaps, but sadly it is true; some scientists seem to be criminally ingenious when it comes to getting their dodgy articles into peer-reviewed journals.

And what does this have to do with ALTERNATIVE MEDICINE, you may well ask. The Journal of Vibration and Control is not even medical and certainly would never consider publishing articles on alternative medicine. Such papers go to one of the many [I estimate more than 1,000] journals that cover either alternative medicine in general or any of the modalities that fall under this wide umbrella. Most of these journals, of course, pride themselves on being peer-reviewed – and, at least nominally, that is correct.

I have been on the editorial board of most of the more important journals in alternative medicine, and I cannot help thinking that their peer review process is not all that dissimilar from the fraudulent scheme set up by Peter Chen and disclosed above. What happens in alternative medicine is roughly as follows:

  • a researcher submits a paper for publication,
  • the editor sends it out for peer review,
  • the peer reviewers are either those suggested by the original author or members of the editorial board of the journal,
  • in either case, the reviewers are more than likely to be uncritical and recommend publication,
  • in the end, peer review turns out to be a farcical window dressing exercise with no consequence,
  • thus even very poor research and pseudo-research are being published abundantly.

The editorial boards of journals of alternative medicine tend to be devoid of experts who are critical about the subject at hand. If you think that I am exaggerating, have a look at the editorial board members of ‘HOMEOPATHY’ (or any other journal of alternative medicine) and tell me who might qualify as a critic of homeopathy. When the editor, Peter Fisher, recently fired me from his board because he felt I had tarnished the image of homeopathy, this panel lost the only person who understood the subject matter and, at the same time, was critical about it (the fact that the website still lists me as an editorial board member is merely a reflection of how slow things are in the world of homeopathy: Fisher fired me more than a year ago).

The point I am trying to make is simple: peer review is never a perfect method, but when it is set up to be deliberately uncritical, it cannot possibly fulfil its function of preventing the publication of dodgy research. In that case, the published science will be of inadequate quality and will generate false-positive messages that mislead the public.

17 Responses to Peer review in alternative medicine is farcically inadequate

  • I’ve just clicked on the link to http://www.journals.elsevier.com/homeopathy/editorial-board/, and you’re still listed …

  • Edzard, I have to take you to task a little here. I am an editor, associate editor or editorial board member of 5 CAM journals and can state unequivocally that we do try to get independent reviewers. Whilst reviewers suggested by the author are considered, they are only chosen if we are satisfied there is no conflict, and only one of at least 2 or 3 will be from this pool. This is standard practice across all journals in all fields.

    With most articles, I have to request reviews from at least 10 people (and these are for the non-predatory publisher journals – not OMICS etc.) before finding one who agrees. My colleagues do the same.

    You also neglect to mention that even when authors do want to choose other titles, the nature of publishing (big publishers with multiple titles) will mean that CAM is automatically shunted to CAM journals (e.g. anything even remotely CAM at BMC will not be considered by any other journal than BMC CAM – even if the author has asked for it to be published elsewhere).

    Professional, independent and critical peer-review would be great. But let’s not pretend the whole CAM research field is trying to avoid it, or that the problems are somehow limited or endemic to this area. Ditto for some of the blogosphere quoting dubious articles from dodgy journals: anyone trained in any decent capacity would know well enough to ignore anything published in an OMICS (or other predatory) journal, given its poor (if any) peer review – but such publishers do not publish only CAM journals.

    Similarly, if you have a beef with Homeopathy, then you have a beef with Homeopathy. Don’t use that experience to cast aspersions on the entirety of CAM journals.

    If people who are more ‘critical’ of CAM want reviewers to be more independent and critical, then I suggest they start accepting review requests when they are asked. Because we do ask them. Or is it also our fault they say no?

    • I am on the ed board of ~30 journals, both alternative and mainstream, and I can assure you that the 2 worlds do behave quite differently in terms of peer review. the result of this discrepancy is even tangible: we have shown in 3 independent studies that the top alt med journals publish almost no negative results [the 1st time here: http://www.ncbi.nlm.nih.gov/pubmed/?term=ernst+pittler+nature].
      suggesting that I “have a beef with Homeopathy” sounds like a cheap ad hominem and proves absolutely nothing; I could have taken any other journal in the field.
      if you think that the journals you are personally involved in are so much better, please give us a list of ed board members who are known critics of alt med; in case you can show that these are more than 30% of the total, you might convince me.
      if you insist, I can take a few other examples:
      COMPLEMENTARY THERAPIES IN MEDICINE: the editor-in-chief is a chap who is employed by one of the largest homeopathic manufacturers [usually without disclosing the fact]. I once wrote to him alerting him to the fact that a whole bunch of articles on healing published in his journal are highly suspect of being fraudulent and therefore ought to be withdrawn. were they? no!
      FORSCHENDE KOMPLEMENTAERMEDIZIN: the editor-in-chief is someone who has been awarded the pseudoscientist of the year award [http://edzardernst.com/2012/10/professor-harald-walach-pseudo-scientist-of-the-year/].
      would you not agree that, under such circumstances, critical input is unlikely and false-positive results might often mislead the public?
      I am not so much questioning independent review; I tried to point out that peer review must be CRITICAL or else it is farcical.

      • Hi Edzard, I would say that saying you have a beef with Homeopathy is not ad hominem; it is what you wrote – and you extended that experience to all journals. You have then found two more, which you base not on journal content but on the status of their editors.

        When I – and most of my colleagues – get an article we do get a CAM ‘proponent’ (or CAM practitioner) who is expert in the practice of that modality to review. That is appropriate – to determine whether the therapy is accurately reflected. But we also seek content experts in the field (statistics, specialty of medicine [gynaecology etc], health policy) for review, usually those without any CAM practice background. All journals I am on have rejection rates of over 70% – hardly open and uncritical acceptance of anything.

        The fact that you ‘suspected’ something was fraudulent doesn’t mean it *was* – are you implying a cover-up? Perhaps you should write why they are fraudulent in a letter to the editor – that would seem to be the traditional approach, and, along with giving readers access to these concerns, it would give the authors a chance to reply. If he rejected this letter, then you would have a case for a cover-up, but until then it is just a [perhaps valid, though also possibly not valid on the scant information you’ve given] disagreement with some articles.

        I agree that there are some terrible CAM journals. I’m also invited to submit articles to some very terrible non-CAM journals. Editors of all journals often have strong opinions on controversial topics (I have issues with editors of gynaecology journals promoting preventive hysterectomy, for example – but several do, and that is their opinion, not the journal’s) – play the ball [journal], not the man [editor]. You could say the same of a journal in any field – editors have their ideologies. Don’t dress this up as somehow specifically a ‘CAM thing’.

        You also know as well as I do that the content submitted to a journal dictates the content published, and publication bias is hardly a CAM phenomenon. I too would like to see more negative trials published, but what are we to do if it is mostly positive ones that are submitted? Are we supposed to reject methodologically sound papers simply because of ‘balance’?

        I am also, like you, on the editorial board of non-CAM journals and active in several high-level non-CAM research organisations and government committees, largely in my field of public health and health policy. I can say that, whilst the CAM research field does have problems, they are certainly not unique, and they are generally not that different from other fields. As a ‘new’ research field CAM has some research capacity issues – but no more than other ‘new’ fields. CAM is just the canary in the coal mine. It attracts enough scrutiny that problems that exist everywhere are often found here first – but they are indeed present in all fields.

        You were trying to point out that peer review needs to be critical, and I wholeheartedly agree. But then you go on to pretty much say that no one in CAM research can be critical, simply by virtue of being a CAM-focused researcher. Not content with that, you imply that journals are deliberately set up to be non-critical and as such deceptive. And you base this on several unsubstantiated claims and gut feelings. If you are going to hold others’ claims to these standards, at the very least apply them to your own.

        • Jon Wardle said:

          When I – and most of my colleagues – get an article we do get a CAM ‘proponent’ (or CAM practitioner) who is expert in the practice of that modality to review.

          How do you determine if this practitioner or proponent is an expert?

        • what about the fact that we showed on 3 different occasions that the TOP journals of alt med publish no negative results?
          what about the list of critical board members I asked for?

          • Edzard – the last reference I can see on this from you – which doesn’t actually provide data but raises the point – is from 2007. That’s 7 years ago now. They do publish negative material, when it is submitted, and they do reject positive material, if it is not sound. Just like any other journal. If you want a list, look at EUJIM or BMC CAM, though my guess is that you will put forward that these people are uncritical, probably for no other reason than that they are editors on CAM journals. We could go on back and forth forever on this.

            If there is more recent research, then I think you should cite it and name and shame in detail. Having the links in your article would be a start. The reason I’ve raised it is that you’ve made this very bold claim and, as it stands, it looks more like opinion than fact. I’ve found one of your 3 articles (the one you linked to, but it is nearly 20 years old), but not the others. In the article I see reference to a problem in the engineering sector (surely you could have at least pointed to the resveratrol scandal!) and a couple of personal anecdotes and feelpinions. No real link to any research on CAM backing your statements. I’m not disputing that there are problems, but if you’re going to make such a claim *against a whole field* at least provide some data, particularly if your argument is based on the claim that the field won’t properly supply its own.

            Alan – the rules are the same as they are for any other field. Usually it is done by viewing who has previously published in this area in Medline journals. It is balanced by getting reviewers with expertise in all areas relevant to the publication. It would be entirely inappropriate to not have someone with knowledge of the therapy being tested review, just as it would be ridiculous to have a surgical intervention reviewed without surgeon input. Not a perfect system but the way it is done for all fields.

          • of course what I wrote is an opinion [this is a blog, not a scientific journal!!!] – just as your claims are opinion. the difference is, however, that I have researched this area in depth and have some evidence to back up my opinion while you seem to have none.
            the other articles that you were unable to find are here:
            http://www.ncbi.nlm.nih.gov/pubmed/11775494
            http://www.ncbi.nlm.nih.gov/pubmed/12184356
            http://www.ncbi.nlm.nih.gov/pubmed/17658121

          • Jon Wardle said:

            Alan – the rules are the same as they are for any other field. Usually it is done by viewing who has previously published in this area in Medline journals. It is balanced by getting reviewers with expertise in all areas relevant to the publication. It would be entirely inappropriate to not have someone with knowledge of the therapy being tested review, just as it would be ridiculous to have a surgical intervention reviewed without surgeon input. Not a perfect system but the way it is done for all fields.

            That’s not what I asked. I asked you about how you selected someone who was ‘expert in the practice of that modality’, not about someone who has published other papers on it. How do you determine if someone is such an expert?

  • Jon Wardle, just suppose that there is a Journal of Alternative Bridge Design and that I’m an expert in building bridges made of cheese. Now, ought the journal editor to enlist me to peer-review the papers submitted on cheese bridges, or to enlist peer reviews from experts in bridge building who know full well that cheese bridges are useless because they don’t actually work in practice?

    • Do you have any data to back up your claim that cheese bridges don’t work?

      • quite right!
        absence of evidence is not evidence of absence – give them the benefit of the doubt, be open-minded….and all that jazz.

        • Maybe you haven’t seen it work. But cheese bridges worked for me!

          People should have the choice whether to use cheese bridges or not, and I should have the right to sell them as safe and effective. No refunds.

  • At the root of ‘problems’ in peer review is a lack of scientific understanding of peer review… We are so steeped in it as academics that we fail to construct it as a proper object of study – stripping away assumptions and examining it as a common object of study – and so we ‘muddle’ the research. Pre-publication peer review with anonymous referees and secretive judgements and decisions is the form least likely to contribute to rational decision-making, yet it is the paradigmatic form of peer review used at most journals… why? Well, there are lots of dynamics that make it extremely attractive for maintaining power relations. I propose models and socio-historical research at .
