
What is and what isn’t evidence, and why is the distinction important?

In the area of alternative medicine, we tend to engage in seemingly endless discussions around the subject of evidence; the relatively few comments on this new blog already confirm this impression. Many practitioners claim that their very own clinical experience is at least as important and generalisable as scientific evidence. It is therefore relevant to analyse in a little more detail some of the issues related to evidence as they apply to the efficacy of alternative therapies.

To prevent the debate from instantly deteriorating into a dispute about the value of this or that specific treatment, I will abstain from mentioning any alternative therapy by name and urge all commentators to do the same. The discussion on this post should not be about the value of homeopathy or any other alternative treatment; it is about more fundamental issues which, in my view, often get confused in the usually heated arguments for or against a specific alternative treatment.

My aim here is to outline the issues more fully than would be possible in the comments section of this blog. Readers and commentators can subsequently be referred to this post whenever appropriate. My hope is that, in this way, we might avoid repeating the same arguments ad nauseam.

Clinical experience is notoriously unreliable

Clinicians often feel quite strongly that their daily experience holds important information about the efficacy of their interventions. In this assumption, alternative practitioners are usually entirely united with healthcare professionals working in conventional medicine.

When their patients get better, they assume this to be the result of their treatment, especially if the experience is repeated over and over again. As an ex-clinician, I do sympathise with this notion, which might even prevent practitioners from losing faith in their own work. But is the assumption really correct?

The short answer is NO. Two events [the treatment and the improvement] that follow each other in time are not necessarily causally related; we all know that, of course. So, we ought to consider alternative explanations for a patient’s improvement after therapy.

Even the most superficial scan of the possibilities discloses several options: the natural history of the condition, regression towards the mean, the placebo-effect, concomitant treatments and social desirability, to name but a few. These and other phenomena can contribute to or determine the clinical outcome such that inefficacious treatments appear to be efficacious.
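To see how easily these factors can mislead, consider a small toy simulation [written in Python; the model and all the numbers are invented purely for illustration, not taken from any real data]. Patients with a fluctuating condition consult a practitioner on a bad day and receive a treatment that does nothing whatsoever; at follow-up, most of them nevertheless appear improved, simply through regression towards the mean:

```python
import random

random.seed(1)

# Each patient has a stable baseline symptom score; day-to-day scores
# fluctuate randomly around it (higher = worse).
def todays_score(baseline):
    return baseline + random.gauss(0, 15)

baselines = [random.uniform(30, 70) for _ in range(10_000)]

# Patients tend to consult when they feel unusually bad: keep only those
# whose score on the day of consultation is well above their baseline.
consulters = [(b, todays_score(b)) for b in baselines]
consulters = [(b, s) for b, s in consulters if s > b + 10]

# The "treatment" has no effect at all; at follow-up the score simply
# fluctuates around the baseline again.
improved = sum(1 for b, s in consulters if todays_score(b) < s)

print(f"{len(consulters)} patients treated; "
      f"{improved / len(consulters):.0%} look better at follow-up")
```

The treatment is inert by construction, yet the great majority of treated patients "improve"; the practitioner's day-to-day experience would look exactly like evidence of efficacy.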

What follows is simple, undeniable and plausible for scientists, yet intensely counter-intuitive for clinicians: the prescribed treatment is only one of many influences on the clinical outcome. Thus even the most impressive clinical experience of the perceived efficacy of a treatment can be totally misleading. In fact, experience might just reflect the fact that we repeat the same mistake over and over again. Put differently, the plural of anecdote is anecdotes, not evidence!

Clinicians tend to get quite miffed when anyone tries to explain to them how multifactorial the situation really is and how little their much-treasured experience tells us about therapeutic efficacy. Here are seven of the counter-arguments I hear most frequently:

1) The improvement was so direct and prompt that it was obviously caused by my treatment [this notion is not very convincing; placebo-effects can be just as prompt and direct].

2) I have seen it so many times that it cannot be a coincidence [some clinicians are very caring, charismatic and empathetic; they will thus regularly generate powerful placebo-responses, even when using placebos].

3) A study with several thousand patients shows that 75% of them improved with my treatment [such response rates are not uncommon, even for ineffective treatments, if patient expectation was high].

4) Surely chronic conditions don’t suddenly get better; my treatment therefore cannot be a placebo [this is incorrect; many chronic conditions eventually improve, if only temporarily].

5) I had a patient with a serious condition, e.g. cancer, who received my treatment and was cured [if one investigates such cases, one often finds that the patient also took a conventional treatment; or, in rare instances, even cancer patients show spontaneous remissions].

6) I have tried the treatment myself and had a positive outcome [clinicians are not immune to the multifactorial nature of the perceived clinical response].

7) Even children and animals respond very well to my treatment; surely they are not prone to placebo-effects [animals can be conditioned to respond; and then there is, of course, the natural history of the disease].

Is all this to say that clinical experience is useless? Clearly not! I am merely pointing out that, when it comes to therapeutic efficacy, clinical experience is no replacement for evidence. It is invaluable for a lot of other things, but it can at best provide a hint and never a proof of efficacy.

What then is reliable evidence?

As the clinical outcomes after treatments always have many determinants, we need a different approach for verifying therapeutic efficacy. Essentially, we need to know what would have happened if our patients had not received the treatment in question.

The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.

Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.

Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The overriding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.
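As a sketch of the underlying logic [again in Python, with entirely made-up effect sizes], we can model every patient's outcome as the sum of the background influences listed above plus, in one group only, a specific treatment effect. The between-group difference then isolates the specific effect that an uncontrolled observation would hopelessly confound:

```python
import random
import statistics

random.seed(42)

def improvement(specific_effect):
    # Influences acting on BOTH groups alike:
    natural_history = random.gauss(5, 3)   # the condition improves anyway
    placebo_response = random.gauss(4, 2)  # expectation, attention, ritual
    noise = random.gauss(0, 2)             # everything else
    return natural_history + placebo_response + noise + specific_effect

# Random allocation: one group receives the treatment (specific effect
# assumed here to be 3 points), the other an indistinguishable placebo (0).
treated = [improvement(3.0) for _ in range(200)]
controls = [improvement(0.0) for _ in range(200)]

print(f"mean improvement with treatment: {statistics.mean(treated):5.1f}")
print(f"mean improvement with placebo:   {statistics.mean(controls):5.1f}")
# A simple before/after comparison would credit the full ~9-12 points to
# the treatment; the group difference recovers the specific ~3 points.
```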

Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.

Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their shortcomings, they are far superior to any other method for determining the efficacy of medical interventions.

There are many reasons why a trial can generate an incorrect result, i.e. a false positive or a false negative. We should therefore avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.

Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.

In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.
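To make the pooling step concrete, here is a minimal sketch of the simplest meta-analytic method, fixed-effect inverse-variance weighting [the four hypothetical trial results are invented for illustration]:

```python
import math

# Hypothetical replications: (effect estimate, standard error) per trial.
trials = [(0.30, 0.20), (0.10, 0.15), (-0.05, 0.25), (0.20, 0.10)]

# Inverse-variance weights: more precise trials count for more.
weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect: {pooled:.2f}  [95% CI {low:.2f} to {high:.2f}]")
```

The individual trials are imprecise and partly conflicting, yet the pooled estimate is considerably more precise than any single study; this is why the totality of the evidence is more informative than any one trial viewed in isolation.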

Why is evidence important?

In a way, this question has already been answered: only with reliable evidence can we tell with any degree of certainty that it was the treatment per se – and not any of the other factors mentioned above – that caused the clinical outcome we observe in routine practice. Only if we have such evidence can we be sure about cause and effect. And only then can we make sure that patients receive the best possible treatments currently available.

There are, of course, those who say that causality does not matter all that much. What is important, they claim, is to help the patient, and if it was a placebo-effect that did the trick, who cares? However, I know of many reasons why this attitude is deeply misguided. To mention just one: we can probably all agree that the placebo-effect can benefit many patients, yet it would be a fallacy to assume that we need a placebo treatment to generate a placebo-response.

If a clinician administers an efficacious therapy [one that generates benefit beyond placebo] with compassion, time, empathy and understanding, she will generate a placebo-response PLUS a response to the therapy administered. In this case, the patient benefits twice. It follows that merely administering a placebo is less than optimal; in fact, it usually means cheating the patient out of the effect of an efficacious therapy.

The frequently voiced counter-argument is that there are many patients who are ill without an exact diagnosis and who therefore cannot receive a specific treatment. This may be true, but even those patients’ symptoms can usually be alleviated with efficacious symptomatic therapy, and I fail to see how the administration of an ineffective treatment might be preferable to using an effective symptomatic therapy.

Conclusion

We all agree that helping the patient is the most important task of a clinician. This task is best achieved by maximising the non-specific effects [e.g. placebo], while also making sure that the patient benefits from the specific effects of what medicine has to offer. If that is our goal in clinical practice, we need both reliable evidence and experience. One cannot be a substitute for the other, and scientific evidence is an essential precondition for good medicine.
