What an odd title, you might think.

Systematic reviews are the most reliable evidence we presently have!

Yes, this is my often-voiced and honestly-held opinion but, like any other type of research, systematic reviews can be badly abused; and when this happens, they can seriously mislead us.

A new paper by someone who knows more about these issues than most of us, John Ioannidis from Stanford University, should make us think. It aimed to explore the growth of published systematic reviews and meta-analyses and to estimate how often they are redundant, misleading, or serving conflicted interests. Ioannidis demonstrated that the publication of systematic reviews and meta-analyses has increased rapidly. For the period January 1, 1986, to December 4, 2015, PubMed tags 266,782 items as “systematic reviews” and 58,611 as “meta-analyses.” Annual publications between 1991 and 2014 increased by 2,728% for systematic reviews and 2,635% for meta-analyses, versus only 153% for all PubMed-indexed items. Ioannidis believes that more systematic reviews of trials than new randomized trials are now probably published annually. Most topics addressed by meta-analyses of randomized trials have overlapping, redundant meta-analyses; the number of same-topic meta-analyses sometimes exceeds 20.
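
Out of interest: counts of this kind can be approximated with straightforward PubMed queries. Below is a minimal sketch using Biopython’s Entrez module; the query strings and date filter are my own illustrative assumptions, not the exact tags and filters Ioannidis used.

```python
# Minimal sketch: counting PubMed records by publication type.
# The query strings and date range are illustrative assumptions only.
from Bio import Entrez

Entrez.email = "you@example.com"  # NCBI asks for a contact address

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a query."""
    handle = Entrez.esearch(db="pubmed", term=term, rettype="count")
    count = int(Entrez.read(handle)["Count"])
    handle.close()
    return count

date_filter = " AND 1986/01/01:2015/12/04[pdat]"
print(pubmed_count('"systematic review"[Publication Type]' + date_filter))
print(pubmed_count('"meta-analysis"[Publication Type]' + date_filter))
```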

Some fields produce massive numbers of meta-analyses; for example, 185 meta-analyses of antidepressants for depression were published between 2007 and 2014. These meta-analyses are often produced either by industry employees or by authors with industry ties, and their results are aligned with sponsor interests. China has rapidly become the most prolific producer of English-language, PubMed-indexed meta-analyses. The most massive presence of Chinese meta-analyses is on genetic associations (63% of global production in 2014), where almost all results are misleading, since they combine fragmented information from the largely abandoned era of candidate-gene studies. Furthermore, many contracting companies working on evidence synthesis receive industry contracts to produce meta-analyses, many of which probably remain unpublished. Many other meta-analyses have serious flaws. Of the remainder, most present weak or insufficient evidence to inform decision-making. Few systematic reviews and meta-analyses are both non-misleading and useful.

The author concluded that the production of systematic reviews and meta‐analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta‐analyses are unnecessary, misleading, and/or conflicted.

Ioannidis makes the following ‘Policy Points’:

  • Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta‐analyses. Instead of promoting evidence‐based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools.
  • Suboptimal systematic reviews and meta‐analyses can be harmful given the major prestige and influence these types of studies have acquired.
  • The publication of systematic reviews and meta‐analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.

Obviously, Ioannidis did not have alternative medicine in mind when he researched and published this article. But he easily could have! Virtually everything he stated in his paper does apply to it. In some areas of alternative medicine, things are even worse than Ioannidis describes.

Take TCM, for instance. I have previously looked at some of the many systematic reviews of TCM, based on Chinese studies, that currently flood Medline. This is what I concluded at the time:

Why does that sort of thing frustrate me so much? Because it is utterly meaningless and potentially harmful:

  • I don’t know what treatments the authors are talking about.
  • Even if I managed to dig deeper, I could not get the information because practically all the primary studies are published in obscure Chinese-language journals.
  • Even if I did read Chinese, I would not feel motivated to assess the primary studies because we know they are all of very poor quality – too flimsy to bother with.
  • Even if they were formally of good quality, I would have my doubts about their reliability; remember: 100% of these trials report positive findings! (A sketch after this list shows just how implausible that is.)
  • Most crucially, I am frustrated because conclusions of this nature are deeply misleading and potentially harmful. They give the impression that there might be ‘something in it’, and that it (whatever ‘it’ might be) could be well worth trying. This may give false hope to patients and can send the rest of us on a wild goose chase.
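
The point about 100% positive findings can be made quantitative. Even if a therapy genuinely worked, some trials should fail by chance alone, so an entirely positive literature is itself statistically implausible. Here is a minimal sketch with purely hypothetical numbers:

```python
# A minimal sketch of why an all-positive trial record is implausible.
# Both numbers below are hypothetical assumptions, not data from any review.
n_trials = 100     # hypothetical number of published trials of the therapy
p_positive = 0.8   # generously assumed chance that any single trial comes
                   # out positive, even if the therapy genuinely works

# Probability that ALL n independent trials turn out positive:
p_all_positive = p_positive ** n_trials
print(f"P(all {n_trials} trials positive) = {p_all_positive:.1e}")
# -> about 2.0e-10; a literature in which 100% of trials are positive
#    points to publication/reporting bias, not to a uniformly effective therapy.
```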

So, to ease the task of future authors of such papers, I decided to give them a text for a proper EVIDENCE-BASED conclusion which they can adapt to fit every review. This will save them time and, more importantly perhaps, it will save everyone who might be tempted to read such futile articles the effort of studying them in detail. Here is my suggestion for a conclusion soundly based on the evidence, no matter what TCM subject the review is about:

OUR SYSTEMATIC REVIEW HAS SHOWN THAT THERAPY ‘X’ AS A TREATMENT OF CONDITION ‘Y’ IS CURRENTLY NOT SUPPORTED BY SOUND EVIDENCE.

On another occasion, I stated that I am getting very tired of conclusions stating ‘…XY MAY BE EFFECTIVE/HELPFUL/USEFUL/WORTH A TRY…’ It is obvious that the therapy in question MAY be effective, otherwise one would surely not conduct a systematic review. If a review fails to produce good evidence, it is the authors’ ethical, moral and scientific obligation to state this clearly. If they don’t, they simply misuse science for promotion and mislead the public. Strictly speaking, this amounts to scientific misconduct.

In yet another post on the subject of systematic reviews, I wrote that if you have rubbish trials, you can produce a rubbish review and publish it in a rubbish journal (perhaps I should have added ‘rubbish researchers’).

And finally this post about a systematic review of acupuncture: it is almost needless to mention that the findings (presented in a host of hardly understandable tables) suggest that acupuncture is of proven or possible effectiveness/efficacy for a very wide array of conditions. It also goes without saying that there is no critical discussion, for instance, of the fact that most of the included evidence originated from China, and that it has been shown over and over again that Chinese acupuncture research never seems to produce negative results.

The main point surely is that the problem of shoddy systematic reviews applies to a depressingly large degree to all areas of alternative medicine, and this is misleading us all.

So, what can be done about it?

My preferred (but sadly unrealistic) solution would be this:

STOP ENTHUSIASTIC AMATEURS FROM PRETENDING TO BE RESEARCHERS!

Research is not fundamentally different from other professional activities; to do it well, one needs adequate training; and doing it badly can cause untold damage.

11 Responses to Beware of (poor-quality, redundant, nonsensical, biased) systematic reviews

  • “STOP ENTHUSIASTIC AMATEURS FROM PRETENDING TO BE RESEARCHERS!”

    Fine suggestion, but HOW?

    Every systematic review begins with a database search. One might perhaps imagine a world in which responsible databases (Web of Science, Medline, PubMed et al.) erect some sort of barrier to avoid inclusion of demonstrably dodgy journals. But (a) people would howl that these databases should be as comprehensive as possible and (b) supporters of pseudo-science will still include dodgy journals in their reviews anyway.

    • As I said, it’s not realistic, and I do not know the solution other than:
      1) assessing every piece of research on its merits – but most people lack the skills to do that;
      2) not funding researchers without adequate skills (or researchers with a suspicious track record [edzardernst.com/2012/11/the-trustworthiness-index/]) – but there will be funders who specifically look for researchers who do promotional work.
      I do think that questioning the trustworthiness of researchers is the only feasible way [edzardernst.com/2012/11/the-trustworthiness-index/].

  • When Ioannidis speaks, it pays to listen. Let’s wait for Dana et al. to firmly grasp the wrong end of the stick here and try to use this paper to dismiss all meta-analyses and systematic reviews that condemn homeopathy as nonsense.

  • This is very like the sub-prime mortgage scandal: wrapping up rubbish so as to present it as more valuable than it is. How many readers of systematic reviews and meta-analyses are going to read all the papers that were included? Yet again it comes back to peer review, which should never have allowed the source papers to be published, let alone the meta-papers.

  • “… it could erode the trust we and others have in science.”

    Could it be that is the very motive of the editors who fail to organise proper peer review before publishing, thus paving the way for the outcome they so earnestly desire – the advent and approbation of pseudo-science?

    Why else would these editors bother to publish such poor and wrong-minded articles in journals they hold out to be ‘scientific’?

    Do these editors intend to mislead? Are they frauds?

    • they certainly earn very well doing it.

    • Financial interest and true belief are powerful motivators. The true belief component can mask the crime from the perpetrator’s consciousness. So unconscious frauds, perhaps – at least some of them? Both publishers and researchers. There is a whole industry of the stuff.

      The trouble is, as Ioannidis has shown, that the mainstream research world is severely compromised as well.

  • In my opinion, one fundamental problem of meta-analyses is that they do not take into account the prior probability of the hypothesis “this treatment works”. This is especially evident in alternative medicine. A clinical trial has at least three alternative hypotheses, namely: (1) “the treatment works”, (2) “false positive”, and (3) “some unknown factor”. These hypotheses all have prior probabilities, based on scientific soundness, study design etc. Meta-analytic techniques assume that the prior probability of (1) is much greater than that of (2) and/or (3). The question arises: what if the prior probability of (1) is much less than that of (2) and/or (3)? To address this problem would be the true transition from evidence-based medicine to science-based medicine.
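
    This point can be made concrete with the textbook Bayesian calculation of the chance that hypothesis (1) is true after a positive trial. A minimal sketch, with all figures as illustrative assumptions rather than values from the paper:

```python
# A minimal sketch of the commenter's point: the posterior probability that
# "this treatment works" after a positive trial depends heavily on the prior.
# All numbers are illustrative assumptions, not taken from Ioannidis's paper.

def posterior_works(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """P(treatment works | positive trial), via Bayes' theorem.

    prior -- prior probability that the treatment works (hypothesis 1)
    power -- P(positive trial | treatment works)
    alpha -- P(positive trial | treatment does not work), the false-positive rate
    """
    true_pos = power * prior
    false_pos = alpha * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

# A plausible mainstream treatment vs. a highly implausible one:
print(posterior_works(prior=0.50))  # ~0.94: a positive trial is strong evidence
print(posterior_works(prior=0.01))  # ~0.14: a positive trial proves very little
```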
