The website of BMJ Clinical Evidence seems to be popular with fans of alternative medicine (FAMs). That sounds like good news: it’s an excellent source, and one can learn a lot about EBM by studying it. But there is a problem: FAMs don’t seem to study it properly (or perhaps they lack the comprehension to understand the data); they merely pounce on this figure and cite it endlessly:
They interpret it to mean that only 11% of what conventional clinicians do is based on sound evidence. This is grist to their mill, because now they feel able to claim:
THE MAJORITY OF WHAT CONVENTIONAL CLINICIANS DO IS NOT EVIDENCE-BASED. SO, WHY DO SO-CALLED RATIONAL THINKERS EXPECT ALTERNATIVE THERAPIES TO BE EVIDENCE-BASED? IF WE NEEDED PROOF THAT THEY ARE HYPOCRITES, HERE IT IS!!!
The question is: are these FAMs correct?
The answer is: no!
They are merely committing a logical fallacy (tu quoque); worse still, they base it on a misunderstanding of the actual data summarised in the figure above.
Let’s look at this in a little more detail.
The first thing we need to understand is the methodology used by ‘Clinical Evidence’ and what the different categories in the graph mean. Here is the explanation:
So, arguably the top three categories, amounting to 42%, signify some evidential support (if we decided to be more rigorous and included only the top two categories, we would still arrive at 35%). This is not great, but we must remember two things here:
- EBM is fairly new;
- lots of people are working hard to improve the evidence base of medicine so that, in future, these figures will be better (by contrast, in alternative medicine, no similar progress is noticeable).
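The arithmetic behind those percentages can be checked directly. The category breakdown below is the commonly cited one from the Clinical Evidence chart; the exact labels are my paraphrase, so treat them as an assumption:

```python
# Commonly cited BMJ Clinical Evidence category breakdown (percent of the
# ~3,000 assessed treatments). Labels and figures are assumed from the
# usual chart, not quoted from the source.
categories = {
    "beneficial": 11,
    "likely to be beneficial": 24,
    "trade-off between benefits and harms": 7,
    "unlikely to be beneficial": 5,
    "likely to be ineffective or harmful": 3,
    "unknown effectiveness": 50,
}

# Sanity check: the categories cover all assessed treatments.
assert sum(categories.values()) == 100

# Top two categories: treatments with solid evidential support.
top_two = categories["beneficial"] + categories["likely to be beneficial"]
print(top_two)  # 35

# Top three: additionally count those with a documented trade-off.
top_three = top_two + categories["trade-off between benefits and harms"]
print(top_three)  # 42
```

Note that the much-quoted 11% is only the strictest category; the 35% and 42% figures in the text fall out of the same chart.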
The second thing that strikes me is that, in alternative medicine, these figures would surely be much, much worse. I am not aware of reliable estimates, but I guess that the percentages might be an order of magnitude smaller.
The third thing to mention is that the figures do not cover the entire spectrum of treatments available today but are based on ~3,000 selected therapies. It is unclear how they were chosen; presumably the choice was pragmatic and based on the information available: if an up-to-date systematic review had been published and provided the necessary information, the therapy was included. This means that the figures include not just mainstream but also plenty of alternative treatments (to the best of my knowledge, ‘Clinical Evidence’ makes no distinction between the two). It is thus nonsensical to claim that the data highlight the weakness of the evidence in conventional medicine. It is even possible that the figures would be better if alternative treatments had been excluded (I estimate that around 2,000 systematic reviews of alternative therapies have been published [I am the author of ~400 of them!]).
The fourth and possibly the most important thing to mention is that the percentage figures in the graph are certainly NOT a reflection of what percentage of treatments used in routine care are based on good evidence. In conventional practice, clinicians will, of course, select where possible those treatments with the best evidence base, while leaving the less well-documented ones aside. In other words, they will use the ones in the two top categories much more frequently than those from the other categories.
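The distinction between the share of *catalogued* treatments with good evidence and the share of *actual prescriptions* with good evidence can be made concrete with a toy calculation. The usage weights below are invented purely for illustration, not real data:

```python
# Toy illustration (invented numbers): even if only 35% of catalogued
# treatments have good evidence, clinicians preferentially prescribe
# those, so the evidence-based share of prescriptions is much higher.

# Share of the treatment catalogue in each evidence tier (from the graph).
catalogue_share = {"good evidence": 0.35, "weaker or unknown": 0.65}

# Hypothetical relative prescription rate per catalogue entry: here we
# assume well-evidenced treatments are prescribed ten times as often.
relative_use = {"good evidence": 10.0, "weaker or unknown": 1.0}

# Weight each tier by how often its treatments are actually used.
weighted = {k: catalogue_share[k] * relative_use[k] for k in catalogue_share}
evidence_based_share = weighted["good evidence"] / sum(weighted.values())

print(f"{evidence_based_share:.0%}")  # ~84% of prescriptions
```

With these (made-up) weights, roughly 84% of prescriptions would be evidence-based even though only 35% of catalogued treatments are: the catalogue-level percentage says nothing about everyday practice.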
At this stage, I hear some FAMs say: how does he know that?
Because several studies have been published that investigated this issue in some detail. They monitored what percentage of the interventions used by conventional clinicians in daily practice are based on good evidence. In 2004, I reviewed these studies; here is the crucial passage from my paper:
“The most conclusive answer comes from a UK survey by Gill et al who retrospectively reviewed 122 consecutive general practice consultations. They found that 81% of the prescribed treatments were based on evidence and 30% were based on randomised controlled trials (RCTs). A similar study conducted in a UK university hospital outpatient department of general medicine arrived at comparable figures; 82% of the interventions were based on evidence, 53% on RCTs. Other relevant data originate from abroad. In Sweden, 84% of internal medicine interventions were based on evidence and 50% on RCTs. In Spain these percentages were 55 and 38%, respectively. Imrie and Ramey pooled a total of 15 studies across all medical disciplines, and found that, on average, 76% of medical treatments are supported by some form of compelling evidence — the lowest was that mentioned above (55%), and the highest (97%) was achieved in anaesthesia in Britain. Collectively these data suggest that, in terms of evidence-base, general practice is much better than its reputation.”
My conclusions from all this:
FAMs should study BMJ Clinical Evidence more thoroughly. If they did, they might comprehend that the claims they tend to make about the data shown there are, in fact, bogus. In addition, they might even learn a thing or two about EBM, which might eventually improve the quality of the debate.