In the wake of both the NEJM and the LANCET withdrawing two potentially influential papers due to unanswered questions about the source and reliability of their data, one has to ask how good or bad the process of peer review really is.

Peer review is the evaluation of work by one or more people with competences similar to those of the work's producers (peers). It functions as a form of self-regulation by qualified members of a profession within the relevant field. It normally involves multiple steps:

  1. Authors send their manuscript to a journal of their choice for publication.
  2. The journal editor has a look at it and decides whether to reject it straight away (for instance, because the subject area is not of interest) or whether to send it out to referees for examination (often to experts suggested by the authors of the submission).
  3. The referees (usually 2 or 3) have the opportunity to reject or accept the invitation to review the submission.
  4. If they accept, they review the paper and send their report to the editor (usually following a deadline).
  5. The editor tries to come to a decision about publication; often the referees are not in agreement, and a further referee has to be recruited.
  6. Even if the submission is potentially publishable, the referees will have raised several points that need addressing. In such cases, the editor sends the submission back to the original authors asking them to revise the article.
  7. The authors do their revision (often following a deadline) and re-submit their paper.
  8. Now the editor can decide to either publish it or send it back to the referees asking them whether they feel their criticisms have been adequately addressed.
  9. Depending on the referees’ verdicts, the editor makes the final decision and informs all the involved parties accordingly.
  10. If the paper was accepted, it then goes into production.
  11. When this process is finished, the authors receive the proofs for a final check.
  12. Eventually, the paper is published and the readers of the journal may scrutinise it.
  13. Often this prompts comment which may get published.
  14. In this case, the authors of the original paper may get invited to write a reply.
  15. Finally the comments and the reply are published in the journal side by side.

The whole process takes time, sometimes lots of time. I have had papers that took almost two years from submission to publication. This delay seems tedious and, if the paper is important, unacceptable (if it is not important, it should arguably not be published at all). Equally unacceptable is the fact that referees are expected to do their reviewing for free. The consequence is that many referees do their reviewing less than well.

When I was still at Exeter, I had plenty of opportunity to see the problems of peer review from the reviewer's perspective. At one point, I accepted about 5 reviews per week, and in total I have surely reviewed over 1,000 papers. I often recommended inviting a statistician to do a specialist review of the statistics. Only rarely were such suggestions accepted by the journal editors. Very often I recommended rejecting a submission because it was rubbish, and occasionally I told the editor that there was a strong suspicion of the paper being fraudulent. The editors very often (I estimate in about 50% of cases) ignored my suggestions and comments and published the papers nonetheless. If an editor did follow my advice to reject a paper, I regularly saw it published elsewhere later (usually in a less well-respected journal). Several times, an author of a submission contacted me directly after seeing my criticism of his paper. Occasionally this resulted in unpleasantness, once or twice even in threats. Eventually I realised that improving the publications in the realm of SCAM was a Sisyphean task, became quite disenchanted with it all and accepted fewer and fewer reviews. Today, I do only very few.

I had even more opportunity to see the peer review process from the author's perspective. All authors must have suffered from unfair or incompetent reviews, and most will have experienced the frustrations of the endless delays. Once (before my time in alternative medicine) a reviewer rejected my paper and soon after published results that were uncannily similar to mine. In alternative medicine, researchers tend to be rather emotional about their subject. Imagine, for instance, the review Dana Ullmann might give a trial of homeopathy that fails to show what he believes in.

Finally, for 40 years I have also had the displeasure of experiencing peer review as an editor. This often seemed like trying to sail between the devil and the deep blue sea. Editors want to fill their journals with the best science they can find. But all too often, they receive the worst science they can imagine. They are constantly torn by tensions pulling them in opposite directions. And they have to cope not just with poor-quality submissions but also with reviewers who miss deadlines and do their work badly.

So, peer review is fraught with problems! The trouble is that there are few solutions that would keep a better check on the reliability of science. Peer review, it often seemed to me, is the worst idea, except for all others. If peer review is to survive (and I think it probably will), there are a few things that could, from my point of view, be done to improve it:

  1. Make it much more attractive for the referees. Payment would be the obvious thing – and by Jove, the big journals like the LANCET and NEJM could afford it. But recognising refereeing academically would be even more important. At present, academic careers depend largely on publications; if they also depended on reviewing, experts would queue up to do it.
  2. The reports of the referees should be independently evaluated according to sensible criteria. These data could be collated and published as a criterion of academic standing. Referees who fail to do a good job would spoil their chances of being re-invited for the task.
  3. Speed up the entire process. Waiting month after month is hugely counter-productive for all concerned.
  4. Today many journals ask authors for the details of experts who are potential reviewers of their submission and then send the paper in question to them for review. I find this ridiculous! No author I know of has ever resisted the temptation to name people who are friends or who owe them a favour. Journals should invest the extra work needed to find out who the best independent experts on any particular subject are.

None of this is simple or fool-proof or even sure to work well, of course. But surely it is worth trying to get peer-review right. The quality of future science depends on it.

17 Responses to Peer Review: the worst system except for all others

  • So true.
    It is difficult to see how the classic process could be successfully improved. I like your ideas of recognising refereeing academically and independent evaluation of reviews, but they might prove impossible to promote?
    Another even more important task would be to discourage or hinder predatory publishing. How that can be accomplished is also difficult to imagine. The big databases, in particular NCBI-PubMed might be key players in such an endeavour?

  • The peer reviewers of a quack paper in a quack journal are usually other quacks.

  • Very good points. Of your improvement suggestions, I think the first one is particularly important. Currently, reviewers are neither incentivised nor rewarded for carrying out peer review.

  • I agree with everything you say. You mention only publications as the focus of the process, but there are also grant applications to consider: peer review of a grant application is a lot more work than peer review of work submitted for publication in a journal. I served for three years as a panel member on a committee of the Wellcome Trust, then became its Chair for a further three years. The Trust pays fair remuneration to its committee members: remuneration definitely works to enhance the quality of peer reviews.

    But we are faced today with an explosion of new journals entering the highly lucrative science publishing market. These commonly seem to have no standards whatever for authors and peer reviewers to live up to. As Les Rose has already pointed out, the peer reviewers of a quack paper in a quack journal are usually other quacks.

    The only antidote to low standards I can suggest on the basis of my own experiences with the system is for organizations at a higher level than the journals to routinely delist those publications that support bad science. I’m thinking of PubMed, Medline, Google Scholar and suchlike. But they’d claim they’re fulfilling a special need by recording for posterity the details of the peer-reviewed publications they list in their databases.

    Which takes us back to where we started. Peer review can claim to be the least worst system for adjudicating papers prior to their publication. So maybe we should be content with the vicissitudes of post hoc peer review. Comments made on a paper after it has appeared in print, like “letters to the editor”, require the authors of the comments to put their heads above the parapets and identify themselves. This can be a psychologically traumatic experience, but it’s worth it if one feels strongly enough about a paper that should never have passed the present, broken, peer review system.

    • I reviewed plenty of grant applications too but left this topic for another day, as the NEJM and LANCET retractions made the headlines recently.
      I also reviewed for the Wellcome Trust but did not find the remuneration ‘fair’. As you say, a grant application review takes even more time; most took me a day or two [typically a weekend]. How much money per day do you consider a fair payment for world-leading experts?

      • Around £100 per review would seem fair to me.

        • really?
          if one works 8 hours on a grant review, £100 seems demeaning to me.
          in this case, I’d rather do it for free.

          • @Edzard

            I guess we’ll have to agree to differ. Your OP makes some excellent points, as do the comments from Jashak and Les Rose. The balance that has to be struck is between feelings of scientific duty and incentives for peer review. Each individual will have a different viewpoint, based on their personal and professional experience.

  • Most of the papers about the current pandemic are placed on a preprint server. I see estimates of some 5,000 per week.
    How will this affect peer review in the long run?

  • Once again, Prof. Ernst, you touch a sore point, and rightfully so.

    The current peer-reviewing process is ridiculous and broken, imo.
    Journals expect us to review papers for free, and the publishers of these journals then sell them for exorbitant fees to our Universities and institutes.
    After having done pretty much every review that I was asked for (by journals above a certain threshold) for several years, I became frustrated with this process and decided to limit my reviewing activities greatly: I now only do reviews for high-impact journals, when asked by an editor whom I personally know, or when I am very interested in the topic.

    You have probably heard about the debate that has been going on over the last weeks regarding a pre-print by Prof. Drosten of the Charité, which was heavily criticized by the BILD tabloid.
    Much of the talk was about the fact that this pre-print was not yet peer-reviewed, and that a paper is only valid once it has been peer-reviewed, etc. Knowing about the flaws of the peer-reviewing process, I found this part of the debate somewhat funny.
    Yes, if the peer review is done as it should be (i.e. by a diligent reviewer who takes time to do a proper review), then it can be a quite good quality check. But in reality, the peer reviewing process is so flawed that it is often not worth much (with the exception for the very best journals, where reviewers feel obliged to do a proper job… for free).

  • 10% of reviewers are responsible for 50% of peer reviews

    75% of journal editors say the hardest part of their job is finding willing reviewers

  • Journals should have open peer review which is available to qualified reviewers.

    The reviewers pick which papers they want to review.

    The journal editor (or qualified staff) picks the best reviews (say 5) and passes them to the authors for revision.

    Authors pay directly for the peer review process…payment goes to the 5 reviewers that were selected.

  • Dear Prof Edzard

    I think you intended ‘Sisyphean’. Just doing my refereeing/editing.;-)

    Best wishes

  • Covid is relevant here. We can see political and public mistrust of unaccountable science ‘experts’ increasing as the modelling and advice given over the lockdown are now being questioned. This will hopefully lead to more scientists being held accountable for papers and advice that turn out to be false. This is needed with regard to clinical trials and evidence-based medicine. If confidence in EBM is improved, many of the public may not be so keen to go elsewhere.
