One of the questions I hear frequently is: ‘HOW CAN I BE SURE THIS STUDY IS SOUND?’ Even though I have spent much of my professional life on this issue, I invariably struggle to provide an answer. Firstly, because a comprehensive reply would inevitably be the size of a book, perhaps even several books. And secondly, because to most lay people the reply would, I am afraid, be intensely boring.
Yet many readers of this blog evidently search for some guidance – so, let me try to provide a few indicators – indicators, not more!!! – as to what might signify a good and a poor clinical trial (other types of research would need different criteria).
INDICATORS SUGGESTIVE OF A GOOD CLINICAL TRIAL
- Author from a respected institution.
- Article published in a respected journal.
- A clear research question.
- Full description of the methods used such that an independent researcher could repeat the study.
- Randomisation of study participants into experimental and control groups (see the first sketch after this list).
- Use of a placebo in the control group where possible.
- Blinding of patients.
- Blinding of investigators, including clinicians administering the treatments.
- Clear definition of a primary outcome measure.
- Sufficiently large sample size, justified by a power calculation (see the second sketch after this list).
- Adequate statistical analyses.
- Clear presentation of the data such that an independent assessor can check them.
- Understandable write-up of the entire study.
- A discussion that puts the study into the context of all the important previous work in this area.
- Self-critical analysis of the study design, conduct and interpretation of the results.
- Cautious conclusions that are strictly based on the data presented.
- Full disclosure of ethics approval and informed consent.
- Full disclosure of funding sources.
- Full disclosure of conflicts of interest.
- An up-to-date list of references that also includes studies contradicting the authors’ findings.
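Two of these points lend themselves to small illustrations. First, randomisation: the sketch below shows permuted-block randomisation for a two-arm trial in Python; the block size of 4 and the two arm labels are my assumptions for illustration, not a prescription.

```python
import random

def block_randomise(n_participants, block_size=4, seed=42):
    """Permuted-block randomisation for a hypothetical two-arm trial.

    Each block contains equal numbers of 'treatment' and 'control'
    allocations in random order, which keeps the two groups balanced
    throughout recruitment.
    """
    rng = random.Random(seed)
    half = block_size // 2
    allocations = []
    while len(allocations) < n_participants:
        block = ["treatment"] * half + ["control"] * half
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_participants]

print(block_randomise(10))
# e.g. ['control', 'treatment', 'treatment', 'control', ...]
```

In a real trial, of course, the allocation sequence would have to be generated and concealed by someone independent of recruitment.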
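Second, the power calculation: this sketch uses the standard normal-approximation formula for comparing two group means; the effect size of 0.5, alpha of 0.05 and power of 0.8 are illustrative assumptions, not recommendations.

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_group(effect_size=0.5, alpha=0.05, power=0.8):
    """Approximate n per group for a two-sided, two-sample comparison,
    using the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is the standardised effect size (Cohen's d).
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for power = 0.8
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(sample_size_per_group())  # ~63 participants per group
```

A paper reporting a trial of, say, 20 patients per group for a moderate effect size should therefore raise an eyebrow.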
I told you this would be boring! Not only that, but each bullet point is far too short to make real sense, and any full explanation would be even more boring to a lay person, I am sure.
What might be a little more fun is to list features of a clinical trial that might signify a poor study. So, let’s try that.
WARNING SIGNALS INDICATING A POOR CLINICAL TRIAL
- published in one of the many dodgy CAM journals (or in a book, blog or similar),
- single author,
- authors are known to be proponents of the treatment tested,
- author has previously published only positive studies of the therapy in question (or member of my ‘ALT MED HALL OF FAME’),
- lack of plausible rationale for the study,
- lack of plausible rationale for the therapy that is being tested,
- stated aim of the study is ‘to demonstrate the effectiveness of…’ (clinical trials are for testing, not demonstrating effectiveness or efficacy),
- stated aim ‘to establish the effectiveness AND SAFETY of…’ (even large trials are usually far too small for establishing the safety of an intervention),
- text full of mistakes (spelling, grammar, etc.),
- sample size is tiny,
- pilot study reporting anything other than the feasibility of a definitive trial,
- methods not described in sufficient detail,
- mismatch between aim, method, and conclusions of the study,
- results presented only as a graph (rather than as numerical data that others can check and re-calculate),
- statistical approach inadequate or not sufficiently detailed (a small simulation after this list illustrates one common pitfall),
- discussion without critical input,
- lack of disclosures of ethics, funding or conflicts of interest,
- conclusions which are not based on the results.
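To make at least one of these warning signals concrete: the simulation below (a hypothetical two-arm trial with 20 outcome measures and no real treatment effect; all numbers are my assumptions) shows how an inadequate statistical approach, here testing many outcomes with no correction for multiple comparisons, will reliably produce ‘significant’ findings out of pure noise.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical trial: 30 patients per arm, 20 outcome measures,
# and NO real treatment effect (both arms are drawn from the
# same distribution).
n_per_arm, n_outcomes = 30, 20
treatment = rng.normal(size=(n_per_arm, n_outcomes))
control = rng.normal(size=(n_per_arm, n_outcomes))

p_values = ttest_ind(treatment, control).pvalue
print(f"'significant' outcomes at p < 0.05: {(p_values < 0.05).sum()}")
print(f"chance of at least one false positive: {1 - 0.95**n_outcomes:.0%}")
```

This is precisely why the pre-specified primary outcome measure in the first list matters so much: it removes the temptation to go fishing among dozens of secondary outcomes.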
The problem here (as above) is that one would need to write at least an entire chapter on each point to render it comprehensible. Without further detailed explanations, the issues raised remain rather abstract or nebulous. Another problem is that both of the above lists are, of course, far from complete; they are merely an expression of my own experience in assessing clinical trials.
Despite these caveats, I hope that those readers who are not complete novices to the critical evaluation of clinical trials might be able to use my ‘warning signals’ as a form of checklist that helps them to tell the wheat from the chaff.
I think this is a start, like a “table of contents” for a book on “How to Understand Scientific Articles/Claims”. Along more general lines, it could also explain, for example, the logical fallacies in all of us that let scientists botch a study, or that make us struggle to accept certain facts/opinions.
I think it would be great to have a book about this, but it would need to be written in an interesting way, for instance by explaining medical or dietary myths in this context.
I am not convinced that a publisher would feel that such a book is worth producing; they would predict very low sales.
Maybe we could make it an open-source PDF/e-book. I would love to have (and help make) such a thing, because it would also be nice to have for future university students; I often see that they don’t learn such things. And for interested lay people it would also be useful. That way, one could argue for higher sales?!
It might be a possibility; but I prefer to do books in the conventional way.
Some of these items are explained by Dr Ben Goldacre in his book “Bad Science” (ISBN-13: 978-0007283194).
Just under £6 for the Kindle version, and it’s a fun read.
Trisha Greenhalgh’s book “How to Read a Paper” is a winner! Highly recommended!
I agree that the above list provides a good guideline for seeking out quality research. Nevertheless, I would always encourage all clinicians to put pen to paper, share their clinical experiences, and get published with peer review. The paper will stand on its topic, method, design and references, not only the journal. I’m endeavouring to do that right now. I look forward to the critical analysis of my manuscript.
Reports of clinical experience are not clinical trials. They can be important, but they must be clearly differentiated from clinical trials.
Excellent summary; thank you.
Regarding statistical analysis, a reading of DC’s excellent http://rsos.royalsocietypublishing.org/content/1/3/140216 (including the comments) would make a nice “companion exercise” to this.
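For readers who want the core point of that paper in a few lines, here is a back-of-the-envelope sketch of the false discovery rate among ‘significant’ results; the prior probability of 0.1 is an illustrative assumption.

```python
def false_discovery_rate(prior=0.1, power=0.8, alpha=0.05):
    """Fraction of p < alpha results that are false positives,
    given a prior probability that the tested effect is real.
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return false_positives / (false_positives + true_positives)

print(f"{false_discovery_rate():.0%}")  # ~36% of 'significant' findings are false
```

Even with a well-powered trial and p < 0.05, a large share of ‘positive’ findings can be false when the prior plausibility of the treatment is low, which ties in neatly with the ‘lack of plausible rationale’ warning signal above.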
According to your standard definition, any CAM therapy would fail on the following criterion, which makes a joke of this whole discussion!
* lack of plausible rationale for the therapy that is being tested,
There are rationales for mind-body interventions, massage therapy, herbal medicine and possibly others.
But you regularly state that Homeopathy is irrational!
Colin, homeopathy really is a no-brainer, even if one bends over backwards to be generous. Both of its central tenets, similia similibus curantur and potentization by dilution, would be laughed out of court by halfway intelligent primary school children!
Frank, you have proved my point perfectly: the discussion on this subject is a joke. However, the time is fast approaching when you may well be left without your feathers, as depicted in the following link!
This is the thread to which I refer!
I wholeheartedly agree with the indicators of a good clinical trial, but I think that neither the institution nor the authors’ reputation should be considered a good signal. Including those would put the evaluation at strong risk of several biases against newcomers and new (but well-validated) ideas.
Not as a single indicator, of course; but as part of a score it might work.
Thanks for laying out this list.
I would add (1) reproducibility and (2) consensus to your list. Too many individual studies make it into the press and are instantly taken as gospel before the scientific establishment has had a chance to confirm and agree.
For what it is worth, I have always thought the following article a brilliant primer that both journalists and politicians should read (and which, frankly, should be taught in school):
Sutherland, Spiegelhalter and Burgman, “Policy: Twenty tips for interpreting scientific claims”, Nature, 2013.