In 2010, NICE recommended acupuncture for chronic low back pain (cLBP). Acupuncturists were, of course, delighted; the British Acupuncture Council, for instance, stated that it fully supported the decision of NICE (the National Institute for Health and Clinical Excellence) that acupuncture be made available on the NHS for chronic lower back pain. Traditional acupuncture, the Council pointed out, has been used for over 2,000 years to alleviate back pain, and its members have for many years been successfully treating patients for this condition, either in private practice or within the NHS. In effect, therefore, the Council argued, these new guidelines were a rubber stamp of the positive work already being undertaken, as well as an endorsement of the wealth of research evidence now available in this area.

More critical experts, however, were surprised by this move and doubted that the evidence was strong enough for a positive recommendation. Now a brand-new meta-analysis sheds more light on this important issue.

Its aim was to determine the effectiveness of acupuncture as a therapy for cLBP. The authors found 13 RCTs which matched their inclusion criteria. Their results show that, compared with no treatment, acupuncture achieved better outcomes in terms of pain relief, disability recovery and better quality of life. These effects were, however, not observed when real acupuncture was compared to sham acupuncture. Acupuncture achieved better outcomes when compared with other treatments. No publication bias was detected.

The authors conclude that acupuncture is an effective treatment for chronic low back pain, but this effect is likely to be produced by the nonspecific effects of manipulation.

In plain English, this means that the effects of acupuncture on cLBP are most likely due to placebo. Should NICE recommend placebo treatments and have the taxpayer foot the bill? I think I can leave it to my readers to answer this question.

Hot flushes are a big problem; they are not life-threatening, of course, but they do make life a misery for countless menopausal women. Hormone therapy is effective, but many women have gone off the idea since we know that hormone therapy might increase their risk of getting cancer and cardiovascular disease. So, what does work and is also risk-free? Acupuncture?

Together with researchers from Quebec, we wanted to determine whether acupuncture is effective for reducing hot flushes and for improving the quality of life of menopausal women. We decided to do this in the form of a Cochrane review, which has just been published.

We searched 16 electronic databases in order to identify all relevant studies and included all RCTs comparing any type of acupuncture to no treatment/control or other treatments. Sixteen studies, with a total of 1155 women, were eligible for inclusion. Three review authors independently assessed trial eligibility and quality, and extracted data. We pooled data where appropriate.

Eight studies compared acupuncture versus sham acupuncture. No significant difference was found between the groups for hot flush frequency, but flushes were significantly less severe in the acupuncture group, with a small effect size. There was substantial heterogeneity for both these outcomes. In a post hoc sensitivity analysis excluding studies of women with breast cancer, heterogeneity was reduced to 0% for hot flush frequency and 34% for hot flush severity, and there was no significant difference between the groups for either outcome. Three studies compared acupuncture with hormone therapy (HT), and acupuncture turned out to be associated with significantly more frequent hot flushes. There was no significant difference between the groups for hot flush severity. One study compared electro-acupuncture with relaxation, and there was no significant difference between the groups for either hot flush frequency or hot flush severity. Four studies compared acupuncture with waiting list or no intervention. Traditional acupuncture was significantly more effective in reducing hot flush frequency, and was also significantly more effective in reducing hot flush severity. The effect size was moderate in both cases.

For quality of life measures, acupuncture was significantly less effective than HT, but traditional acupuncture was significantly more effective than no intervention. There was no significant difference between acupuncture and other comparators for quality of life. Data on adverse effects were lacking.

Our conclusion: We found insufficient evidence to determine whether acupuncture is effective for controlling menopausal vasomotor symptoms. When we compared acupuncture with sham acupuncture, there was no evidence of a significant difference in their effect on menopausal vasomotor symptoms. When we compared acupuncture with no treatment there appeared to be a benefit from acupuncture, but acupuncture appeared to be less effective than HT. These findings should be treated with great caution as the evidence was low or very low quality and the studies comparing acupuncture versus no treatment or HT were not controlled with sham acupuncture or placebo HT. Data on adverse effects were lacking.

I have yet to meet an acupuncturist who is not convinced that acupuncture is an effective treatment for hot flushes. You only need to go on the Internet to see the claims that are being made along those lines. Yet this review shows quite clearly that it is not better than placebo. It also demonstrates that studies which do suggest an effect do so because they fail to adequately control for a placebo response. This means that the benefit patients and therapists observe in routine clinical practice is not due to the acupuncture per se, but to the placebo-effect.

And what could be wrong with that? Quite a bit, is my answer; here are just four things that immediately spring to mind:

1) Arguably, it is dishonest and unethical to use a placebo on ill patients in routine clinical practice and charge for it while pretending that it is a specific and effective treatment.

2) Placebo-effects are unreliable, small and usually of short duration.

3) In order to generate a placebo-effect, I don’t need a placebo-therapy; an effective one administered with compassion does that too (and generates specific effects on top of that).

4) Not all placebos are risk-free. Acupuncture, for instance, has been associated with serious complications.

The last point is also interesting in the context of our finding that the RCTs analysed failed to mention adverse effects. This is a phenomenon we observe regularly in studies of alternative medicine: trialists tend to violate the most fundamental rules of research ethics by simply ignoring the need to report adverse effects. In plain English, this is called ‘scientific misconduct’. Consequently, we find very little published evidence on this issue, and enthusiasts claim their treatment is risk-free simply because no risks are being reported. Yet one wonders to what extent systematic under-reporting is the cause of that impression!

So, what about the legion of acupuncturists who earn a good part of their living by recommending to their patients acupuncture for hot flushes?

They may, of course, not know about the evidence which shows that it is not more than a placebo. Would this be ok then? No, emphatically no! All clinicians have a duty to be up to date regarding the scientific evidence in relation to the treatments they use. A therapist who does not abide by this fundamental rule of medical ethics is, in my view, a fraud. On the other hand, some acupuncturists might be well aware of the evidence and employ acupuncture nevertheless; after all, it brings good money! Well, I would say that such a therapist is a fraud too.

Acupuncture remains a highly controversial treatment: its mechanism of action is less than clear and the clinical results are equally unconvincing. Of course, one ought to differentiate between different conditions; the notion that acupuncture is a panacea is most certainly nonsense.

In many countries, acupuncture is being employed mostly in the management of pain, and it is in this area where the evidence is perhaps most encouraging. Yet, even here the evidence from the most rigorous clinical trials seems to suggest that much, if not all of the effects of acupuncture might be due to placebo.

Moreover, we ought to be careful with generalisations and ask: what type of pain? One very specific pain is that caused by aromatase inhibitors (AI), a medication frequently prescribed to women suffering from breast cancer. Around 50% of these patients complain of AI-associated musculoskeletal symptoms (AIMSS), and 15% discontinue treatment because of these complaints. So, can acupuncture help these women?

A recent randomised, sham-controlled trial tested whether acupuncture improves AIMSS. Postmenopausal women with early-stage breast cancer who were experiencing AIMSS were randomised to eight weekly real or sham acupuncture sessions. The investigators evaluated changes in the Health Assessment Questionnaire Disability Index (HAQ-DI) and pain visual analogue scale (VAS). Serum estradiol, β-endorphin and proinflammatory cytokine concentrations were also measured pre- and post-intervention. In total, 51 women were enrolled, of whom 47 were evaluable (23 randomised to real and 24 to sham acupuncture).

Baseline characteristics turned out to be balanced between groups with the exception of a higher HAQ-DI score in the real acupuncture group. The results failed to show a statistically significant difference in reduction of HAQ-DI or VAS between the two groups. Following eight weekly treatments, a significant reduction of IL-17 was noted in both groups. No significant modulation was seen in estradiol, β-endorphin, or other proinflammatory cytokine concentrations in either group. No difference in AIMSS changes between real and sham acupuncture was seen.

Even though this study was not large, it was rigorously executed and well reported. As many acupuncturists claim that their treatment alleviates pain, and as many women suffering from AI-induced pain experience benefit, acupuncture advocates will nevertheless claim that the findings of this study are wrong, misleading or irrelevant. The often remarkable discrepancy between experience and evidence will again be the subject of intense discussions. How can a tiny trial overturn the experience of so many?

The answer is: VERY EASILY! In fact, the simplest explanation is that both are correct. The trial was well-done and its findings are thus likely to be true. The experience of patients is equally true – yet it relies not on the effects of acupuncture per se, but on the context in which it is given. In simple language, the effects patients experience after acupuncture are due to a placebo-response.

This is the only simple explanation which tallies with both the evidence and the experience. Once we think about it carefully, we realise that acupuncture is highly placebo-genic:

It is exotic.

It is invasive.

It is slightly painful.

It involves time with a therapist.

It involves touch.

If anyone were given the task of developing a treatment that maximises placebo-effects, they could not come up with a better intervention!

Ahhh, acupuncture-fans will say, this means that acupuncture is a helpful therapy: I don’t care how it works, as long as it does help. Did we not just cover these issues in some detail? Indeed we did – and I do not feel like re-visiting the three fallacies which underpin this sentence again.

In my last post, I strongly criticised Prince Charles for his recently published vision of “integrated health and post-modern medicine”. In fact, I wrote that it would lead us back to the dark ages. “That is all very well”, I hear my critics mutter, “but can Ernst offer anything better?” After all, as Prof Michael Baum once remarked, Charles has his authority merely through an accident of birth, whereas I have been to medical school, served as a professor in three different countries and pride myself on being an outspoken proponent of evidence-based medicine. I should thus know better and have something to put against Charles’ odd love affair with the ‘endarkenment’.

I have to admit that I am not exactly what one might call a visionary; all my life, I have been slightly wary of people who wear a ‘vision’ on their sleeve for everyone to see. But I can produce some concepts about what might constitute good medicine (apart from the obvious statement that I think EBM is the correct approach). To be truthful, these are not really my concepts either – as far as I can see, they are simply ideas held by most responsible health care professionals across the world. So, for what it’s worth, here it is:

Two elements

In a nut-shell, good medicine consists of two main elements: the science and the ‘art’ of medicine. This division is, of course, somewhat artificial; for instance, the art of medicine does not defy science, and compassion is an empty word, if it is not combined with effective therapy. Yet for clarity it can be helpful to separate the two elements.

Science

Medicine started to make progress about 150 years ago, when we managed to free ourselves from the dogmas and beliefs that had previously dominated health care. The first major randomised trial was published only in 1948. Since then, progress in both basic and clinical research has advanced at a breath-taking speed. Consequently, enormous improvements in health care have occurred, and both the life expectancy and the quality of life of millions have improved to a remarkable degree.

These developments are fairly recent and tend to be frustratingly slow; it is therefore clear that there is still much room for improvement. But improvement is surely being generated every day: the outlook of patients who suffer from MS, AIDS, cancer and many other conditions will be better tomorrow than it is today. Similar advances are being made in the areas of disease prevention, rehabilitation, palliative care etc. All of these improvements are almost exclusively the result of the hard work by thousands of brilliant scientists who tirelessly struggle to improve the status quo.

But the task is, of course, huge and virtually endless. We therefore need to be patient and remind ourselves how very young medicine’s marriage with science still is. To change direction at this stage would be wrong and lead to disastrous consequences. To doubt the power of science in generating progress displays ignorance. To call on “ancient wisdom” for help is ridiculous.

Art

The ‘art of medicine’ seems a somewhat old-fashioned term to use. My reason for employing it anyway is that I do not know any other word that captures all of the following characteristics and attributes:

Compassion

Empathy

Sympathy

Time to listen

Good therapeutic relationships

Provision of choice, information, guidance

Holism

Professionalism

They are all important features of good medicine – they always have been and always will be. To deny this would be to destroy the basis on which health care stands. To neglect them is to risk the deterioration of good medicine. To call this “ancient wisdom” is grossly misleading.

Sadly, the system doctors have to work in often makes it difficult to respect all the features listed above. And sadly, not everyone working in health care is naturally gifted in showing compassion, empathy etc. to patients. This is why medical schools do their very best to teach these qualities to students. I do not deny that this endeavour is not always fully successful, and one can only hope that young doctors make career choices according to their natural abilities. If you cannot produce a placebo-response in your patient, I was taught at medical school, go and train as a pathologist!

Science and art

Let me stress this again: the science and the art of medicine are essential elements of good medicine. In other words, if one is missing, medicine is by definition not optimal. In vast areas of alternative medicine, the science-element is woefully neglected or even totally absent. It follows that these areas cannot be good medicine. In some areas of conventional medicine, the art-element is weak or neglected. It follows that, in these areas, medicine is not good either.

My rough outline of a ‘vision’ is, of course, rather vague and schematic; it cannot serve as a recipe for creating good medicine, nor as a road map towards improving today’s health care. It is also somewhat naive and simplistic: it generalises across the entire, diverse field of medicine, which is problematic, to say the least.

One challenge for health care practitioners is to find the optimal balance between the two elements for the situation at hand. A surgeon removing an in-grown toenail will need a different mix of science and art than a GP treating a patient suffering from chronic depression, for instance.

The essential nature of both the science and the art of medicine also means that a deficit in one element cannot normally be compensated by a surplus of the other. In the absence of an effective treatment, even an over-dose of compassion will not suffice (and it is for this reason that the integration of alternative medicine needs to be viewed with great scepticism). Conversely, science alone will do a poor job in many other circumstances (and it is for that reason that we need to remind the medical profession of the importance of the ‘art’).

We cannot expect that the introduction of compassionate quacks will improve health care; it might make it appear more human, while, in fact, it would only become less effective. And is it truly compassionate to pretend that homeopathic placebos, administered by a kind and empathetic homeopath, generate more good than harm? I do not think so. The integration of alternative medicine makes sense only for those modalities which have been scientifically tested and demonstrated to be effective. True compassion must always include the desire to administer those treatments which demonstrably generate more good than harm.

Conclusion

I must admit, I do feel slightly embarrassed to pompously entitle this post “a vision of good medicine”. It really amounts to little more than common sense and is merely a reflection of what many health care professionals believe. Yet it does differ significantly from the ‘integrated health and post-modern medicine’ as proposed by Charles – and perhaps this is one reason why it might not be totally irrelevant.

The question whether spinal manipulation is an effective treatment for infant colic has attracted much attention in recent years. The main reason for this is, of course, that a few years ago Simon Singh disclosed in a comment piece that the British Chiropractic Association (BCA) was promoting chiropractic treatment for this and several other childhood conditions on its website. Simon famously wrote “they (the BCA) happily promote bogus treatments” and was subsequently sued for libel by the BCA. Eventually, the BCA lost the libel action as well as lots of money, and the entire chiropractic profession ended up with enough egg on their faces to cook omelettes for all their patients.

At the time, the BCA had taken advice from several medical and legal experts; one of their medical advisers, I was told, was Prof George Lewith. Intriguingly, he and several others have just published a Cochrane review of manipulative therapies for infant colic. Here are the unabbreviated conclusions from their article:

“The studies included in this meta-analysis were generally small and methodologically prone to bias, which makes it impossible to arrive at a definitive conclusion about the effectiveness of manipulative therapies for infantile colic. The majority of the included trials appeared to indicate that the parents of infants receiving manipulative therapies reported fewer hours crying per day than parents whose infants did not, based on contemporaneous crying diaries, and this difference was statistically significant. The trials also indicate that a greater proportion of those parents reported improvements that were clinically significant. However, most studies had a high risk of performance bias due to the fact that the assessors (parents) were not blind to who had received the intervention. When combining only those trials with a low risk of such performance bias, the results did not reach statistical significance. Further research is required where those assessing the treatment outcomes do not know whether or not the infant has received a manipulative therapy. There are inadequate data to reach any definitive conclusions about the safety of these interventions.”

Cochrane reviews also carry a “plain language” summary which might be easier to understand for lay people. And here are the conclusions from this section of the review:

“The studies involved too few participants and were of insufficient quality to draw confident conclusions about the usefulness and safety of manipulative therapies. Although five of the six trials suggested crying is reduced by treatment with manipulative therapies, there was no evidence of manipulative therapies improving infant colic when we only included studies where the parents did not know if their child had received the treatment or not. No adverse effects were found, but they were only evaluated in one of the six studies.”

If we read it carefully, this article seems to confirm that there is no reliable evidence to suggest that manipulative therapies are effective for infant colic. In the analyses, the positive effect disappears if the parents are properly blinded; thus it is due to expectation or placebo. The studies that seem to show a positive effect are false positives, and spinal manipulation is, in fact, not effective.

The analyses disclose another intriguing aspect: most trials failed to mention adverse effects. This confirms the findings of our own investigation and amounts to a remarkable breach of publication ethics (nobody seems to be astonished by this fact; is it normal that chiropractic researchers ignore generally accepted rules of ethics?). It also reflects badly on the ability of the investigators of the primary studies to be objective. They seem to aim at demonstrating only the positive effects of their intervention; science is, however, not about confirming the researchers’ prejudices, it is about testing hypotheses.

The most remarkable thing about the new Cochrane review is, I think, the incongruence between the actual results and the authors’ conclusions. To a critical observer, the former are clearly negative, but the latter sound almost positive. This raises the question of reviewer bias.

We have recently discussed on this blog whether reviews by one single author are necessarily biased. The new Cochrane review has 6 authors, and it seems to me that its conclusions are considerably more biased than my single-author review of chiropractic spinal manipulation for infant colic; in 2009, I concluded simply that “the claim [of effectiveness] is not based on convincing data from rigorous clinical trials”.

Which of the two conclusions describes the facts more helpfully and more accurately?

I think I rest my case.

In my last post, we discussed the “A+B versus B” trial design as a tool to produce false positive results. This method is currently very popular in alternative medicine, yet it is by no means the only approach that can mislead us. Today, let’s look at other popular options with a view to protecting ourselves against trialists who might naively or wilfully fool us.

The crucial flaw of the “A+B versus B” design is that it fails to account for non-specific effects. If the patients in the experimental group experience better outcomes than those in the control group, this difference could well be due to effects that are unrelated to the experimental treatment. There are, of course, several further ways to ignore non-specific effects in clinical research. The simplest option is to include no control group at all. Homeopaths, for instance, are very proud of studies which show that ~70% of their patients experience benefit after taking their remedies. This type of result tends to impress journalists, politicians and other people who fail to realise that such a result might be due to a host of factors, e.g. the placebo-effect, the natural history of the disease, regression towards the mean or treatments which patients self-administered while taking the homeopathic remedies. It is therefore misleading to make causal inferences from such data.
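To illustrate just how misleading such uncontrolled ‘response rates’ are, here is a minimal toy simulation in Python (all numbers are invented for illustration): no treatment effect whatsoever is built into the model, yet the majority of ‘patients’ improve, simply because they consult when their fluctuating symptoms are at their worst and are re-assessed after the condition has run part of its natural course.

```python
# Toy simulation (hypothetical numbers): a high 'response rate' emerges
# although the remedy contributes exactly nothing.
import random

random.seed(1)
n_patients = 10_000
improved = 0
for _ in range(n_patients):
    # Symptom score 0-10; patients tend to consult during a flare-up,
    # so baselines cluster at the bad end of their usual range.
    baseline = random.uniform(6, 10)
    # Follow-up reflects natural history and regression towards the
    # mean only; no treatment effect is simulated at all.
    follow_up = random.uniform(2, 9)
    if follow_up < baseline:
        improved += 1

print(f"'response rate' with a totally inert remedy: {improved / n_patients:.0%}")
# prints roughly 80% in this toy model
```

In other words, a high ‘response rate’ in a study without a control group is exactly what we would expect even from an inert remedy.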

Another easy method to generate false positive results is to omit blinding. The purpose of blinding the patient, the therapist and the evaluator of the outcomes in clinical trials is to make sure that expectation is not the cause of, or a contributor to, the outcome. They say that expectation can move mountains; this might be an exaggeration, but it can certainly influence the result of a clinical trial. Patients who hope for a cure regularly do get better even if the therapy they receive is useless, and therapists as well as evaluators of the outcomes tend to view the results through rose-tinted spectacles, if they have preconceived ideas about the experimental treatment. Similarly, the parents of a child or the owners of an animal can transfer their expectations, and this is one of several reasons why it is incorrect to claim that children and animals are immune to placebo-effects.

Failure to randomise is another source of bias which can make an ineffective therapy look effective when tested in a clinical trial. If we allow patients or trialists to choose which patients receive the experimental treatment and which get the control treatment, it is likely that the two groups will differ in a number of variables. Some of these variables might, in turn, impact on the outcome. If, for instance, doctors allocate their patients to the experimental and control groups, they might select those who will respond to the former and those who won’t to the latter. This may happen not with malicious intent but through intuition or instinct: responsible health care professionals want those patients who, in their experience, have the best chances to benefit from a given treatment to receive that treatment. Only randomisation, done properly, can make sure that we are comparing comparable groups of patients; non-randomisation is likely to produce misleading findings.

While these options for producing false positives are all too obvious, the next possibility is slightly more intriguing. It refers to studies which do not test whether an experimental treatment is superior to another one (so-called superiority trials), but attempt to assess whether it is equivalent to a therapy that is generally accepted to be effective. The idea is that, if both treatments produce the same or similarly positive results, both must be effective. For instance, such a study might compare the effects of acupuncture to a common pain-killer. Such trials are called non-inferiority or equivalence trials, and they offer a wide range of possibilities for misleading us. If, for example, such a trial has too few patients, it might show no difference where, in fact, there is one. Let’s consider a deliberately silly example: someone comes up with the idea to compare antibiotics to acupuncture as treatments of bacterial pneumonia in elderly patients. The researchers recruit 10 patients for each group, and the results reveal that, in one group, 2 patients died, while, in the other, the number was 3. The statistical tests show that the difference of just one patient is not statistically significant, and the authors therefore conclude that acupuncture is just as good for bacterial infections as antibiotics.
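For anyone who wants to check the arithmetic of this deliberately silly example, here is a quick sketch using Fisher’s exact test; the figures are, of course, the invented ones from above, and the use of scipy is simply one convenient option.

```python
# Hypothetical data from the silly pneumonia example above:
# each row is [deaths, survivors] in a group of 10 patients.
from scipy.stats import fisher_exact

antibiotics = [2, 8]
acupuncture = [3, 7]

odds_ratio, p_value = fisher_exact([antibiotics, acupuncture])
print(f"two-sided p-value: {p_value:.2f}")  # 1.00 -- 'no significant difference'

# The absence of a significant difference in a 20-patient trial is not
# evidence of equivalence; the study simply lacks the power to detect
# even a dramatic difference in mortality.
```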

Even trickier is the option to under-dose the treatment given to the control group in an equivalence trial. In our hypothetical example, the investigators might subsequently recruit hundreds of patients in an attempt to overcome the criticism of their first study; they then decide to administer a sub-therapeutic dose of the antibiotic in the control group. The results would then apparently confirm the researchers’ initial finding, namely that acupuncture is as good as the antibiotic for pneumonia. Acupuncturists might then claim that their treatment has been proven in a very large randomised clinical trial to be effective for treating this condition, and people who do not happen to know the correct dose of the antibiotic could easily be fooled into believing them.

Obviously, the results would be more impressive, if the control group in an equivalence trial received a therapy which is not just ineffective but actually harmful. In such a scenario, the most useless or even slightly detrimental treatment would appear to be effective simply because it is equivalent to or less harmful than the comparator.

A variation of this theme is the plethora of controlled clinical trials which compare one unproven therapy to another unproven treatment. Predictably, the results indicate that there is no difference in the clinical outcome experienced by the patients in the two groups. Enthusiastic researchers then tend to conclude that this proves both treatments to be equally effective.

Another option for creating misleadingly positive findings is to cherry-pick the results. Most trials have many outcome measures; for instance, a study of acupuncture for pain-control might quantify pain in half a dozen different ways; it might also measure the length of treatment until the pain has subsided, the amount of medication the patients took in addition to receiving acupuncture, the days off work because of pain, the partner’s impression of the patient’s health status, the quality of life of the patient, the frequency of sleep being disrupted by pain, etc. If the researchers then evaluate all the results, they are likely to find that one or two of them have changed in the direction they wanted. This can well be a chance finding: with the typical statistical tests, one in 20 outcome measures would produce a significant result purely by chance. In order to mislead us, the researchers only need to “forget” about all the negative results and focus their publication on the ones which, by chance, have come out as they had hoped.
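The ‘one in 20’ figure is simple arithmetic: if an ineffective treatment is evaluated on k independent outcome measures, each tested at the conventional significance level of 0.05, the probability of at least one spuriously ‘significant’ result grows rapidly with k. This little calculation (purely illustrative) shows how quickly:

```python
# Probability of at least one false positive among k independent
# outcome measures of an ineffective treatment, tested at alpha = 0.05.
alpha = 0.05
for k in (1, 6, 12, 20):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:2d} outcome measures -> P(>=1 'significant' result) = {p_at_least_one:.0%}")
# 1 -> 5%, 6 -> 26%, 12 -> 46%, 20 -> 64%
```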

One foolproof method for misleading the public is to draw conclusions which are not supported by the data. Imagine you have generated squarely negative data with a trial of homeopathy. As an enthusiast of homeopathy, you are far from happy with your own findings; in addition, you might have a sponsor who puts pressure on you. What can you do? The solution is simple: you only need to highlight at least one positive message in the published article. In the case of homeopathy, you could, for instance, make a major issue of the fact that the treatment was remarkably safe and cheap: not a single patient died, and most were very pleased with the treatment, which was not even very expensive.

And finally, there is always the possibility of overt cheating. Researchers are only human and are thus not immune to temptation. They may have conflicts of interest or may know that positive results are much easier to publish than negative ones. Certainly they want to publish their work – “publish or perish”! So, faced with disappointing results of a study, they might decide to prettify them or even invent new ones which are more pleasing to them, their peers, or their sponsors.

Am I claiming that this sort of thing only happens in alternative medicine? No! Obviously, the way to minimise the risk of such misconduct is to train researchers properly and make sure they are able to think critically. Am I suggesting that investigators of alternative medicine are often not well-trained and almost always uncritical? Yes.

What is and what isn’t evidence, and why is the distinction important?

In the area of alternative medicine, we tend to engage in seemingly endless discussions around the subject of evidence; the relatively few comments on this new blog already confirm this impression. Many practitioners claim that their very own clinical experience is at least as important and generalisable as scientific evidence. It is therefore relevant to analyse in a little more detail some of the issues related to evidence as they apply to the efficacy of alternative therapies.

To prevent the debate from instantly deteriorating into a dispute about the value of this or that specific treatment, I will abstain from mentioning any alternative therapy by name and urge all commentators to do the same. The discussion on this post should not be about the value of homeopathy or any other alternative treatment; it is about more fundamental issues which, in my view, often get confused in the usually heated arguments for or against a specific alternative treatment.

My aim here is to outline the issues more fully than would be possible in the comments section of this blog. Readers and commentators can subsequently be referred to this post whenever appropriate. My hope is that, in this way, we might avoid repeating the same arguments ad nauseam.

Clinical experience is notoriously unreliable

Clinicians often feel quite strongly that their daily experience holds important information about the efficacy of their interventions. In this assumption, alternative practitioners are usually entirely united with healthcare professionals working in conventional medicine.

When their patients get better, they assume this to be the result of their treatment, especially if the experience is repeated over and over again. As an ex-clinician, I do sympathise with this notion which might even prevent practitioners from losing faith in their own work. But is the assumption really correct?

The short answer is NO. Two events [the treatment and the improvement] that follow each other in time are not necessarily causally related; we all know that, of course. So, we ought to consider alternative explanations for a patient’s improvement after therapy.

Even the most superficial scan of the possibilities discloses several options: the natural history of the condition, regression towards the mean, the placebo-effect, concomitant treatments and social desirability, to name but a few. These and other phenomena can contribute to or determine the clinical outcome such that inefficacious treatments appear to be efficacious.

What follows is simple, undeniable and plausible for scientists, yet intensely counter-intuitive for clinicians: the prescribed treatment is only one of many influences on the clinical outcome. Thus even the most impressive clinical experience of the perceived efficacy of a treatment can be totally misleading. In fact, experience might just reflect the fact that we repeat the same mistake over and over again. Put differently, the plural of anecdote is anecdotes, not evidence!

Clinicians tend to get quite miffed when anyone tries to explain to them how multifactorial the situation really is and how little their much-treasured experience tells us about therapeutic efficacy. Here are seven of the counter-arguments I hear most frequently:

1) The improvement was so direct and prompt that it was obviously caused by my treatment [this notion is not very convincing; placebo-effects can be just as prompt and direct].

2) I have seen it so many times that it cannot be a coincidence [some clinicians are very caring, charismatic and empathetic; they will thus regularly generate powerful placebo-responses, even when using placebos].

3) A study with several thousand patients shows that 75% of them improved with my treatment [such response rates are not uncommon, even for ineffective treatments, if patient expectation was high].

4) Surely chronic conditions don’t suddenly get better; my treatment therefore cannot be a placebo [this is incorrect; eventually many chronic conditions improve, if only temporarily].

5) I had a patient with a serious condition, e.g. cancer, who received my treatment and was cured [if one investigates such cases, one often finds that the patient also took a conventional treatment; or, in rare instances, even cancer patients show spontaneous remissions].

6) I have tried the treatment myself and had a positive outcome [clinicians are not immune to the multifactorial nature of the perceived clinical response].

7) Even children and animals respond very well to my treatment; surely they are not prone to placebo-effects [animals can be conditioned to respond; and then there is, of course, the natural history of the disease].

Is all this to say that clinical experience is useless? Clearly not! I am merely pointing out that, when it comes to therapeutic efficacy, clinical experience is no replacement for evidence. It is invaluable for a lot of other things, but it can at best provide a hint and never a proof of efficacy.

What then is reliable evidence?

As the clinical outcomes after treatments always have many determinants, we need a different approach for verifying therapeutic efficacy. Essentially, we need to know what would have happened, if our patients had not received the treatment in question.

The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.
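A minimal sketch, with made-up numbers, may help to illustrate this logic: both simulated groups experience exactly the same non-specific influences, so whatever difference remains between them can only be due to the treatment itself.

```python
# Toy model of a controlled trial: the non-specific effects (placebo
# response, natural history, regression towards the mean) are shared by
# both groups; only the treated group receives the specific effect.
import random
from statistics import mean

random.seed(42)

def improvement(specific_effect: float) -> float:
    non_specific = random.gauss(2.0, 1.0)  # identical in BOTH groups
    return non_specific + specific_effect

treated = [improvement(1.0) for _ in range(500)]
controls = [improvement(0.0) for _ in range(500)]

print(f"mean improvement, treated : {mean(treated):.2f}")   # ~3.0
print(f"mean improvement, controls: {mean(controls):.2f}")  # ~2.0
print(f"between-group difference  : {mean(treated) - mean(controls):.2f}")
# The difference (~1.0) isolates the specific effect of the treatment.
```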

Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.

Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The over-riding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.

Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.

Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their shortcomings, they are far superior to any other method for determining the efficacy of medical interventions.

There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.

Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.

In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.
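For readers who wonder what ‘pooling the data’ actually involves, here is a bare-bones sketch of the most common approach, inverse-variance weighting under a fixed-effect model; the effect sizes and standard errors below are invented for illustration, and a real meta-analysis would, of course, also have to assess heterogeneity and study quality.

```python
# Fixed-effect, inverse-variance pooling: each study is weighted by
# 1/SE^2, so large, precise trials influence the pooled estimate more
# than small, noisy ones. All numbers below are hypothetical.
import math

studies = [          # (effect size, standard error)
    (0.40, 0.30),    # small trial, imprecise
    (0.10, 0.10),    # large trial, precise
    (0.25, 0.20),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")  # ~0.15 (SE ~0.09)
```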

Why is evidence important?

In a way, this question has already been answered: only with reliable evidence can we tell with any degree of certainty that it was the treatment per se – and not any of the other factors mentioned above – that caused the clinical outcome we observe in routine practice. Only if we have such evidence can we be sure about cause and effect. And only then can we make sure that patients receive the best possible treatments currently available.

There are, of course, those who say that causality does not matter all that much. What is important, they claim, is to help the patient, and if it was a placebo-effect that did the trick, who cares? However, I know of many reasons why this attitude is deeply misguided. To mention just one: we probably all might agree that the placebo-effect can benefit many patients, yet it would be a fallacy to assume that we need a placebo treatment to generate a placebo-response.

If a clinician administers an efficacious therapy [one that generates benefit beyond placebo] with compassion, time, empathy and understanding, she will generate a placebo-response PLUS a response to the therapy administered. In this case, the patient benefits twice. It follows that merely administering a placebo is less than optimal; in fact, it usually means cheating the patient of the effect of an efficacious therapy.

The frequently voiced counter-argument is that there are many patients who are ill without an exact diagnosis and who therefore cannot receive a specific treatment. This may be true, but even those patients’ symptoms can usually be alleviated with efficacious symptomatic therapy, and I fail to see how the administration of an ineffective treatment might be preferable to using an effective symptomatic therapy.

Conclusion

We all agree that helping the patient is the most important task of a clinician. This task is best achieved by maximising the non-specific effects [e.g. placebo], while also making sure that the patient benefits from the specific effects of what medicine has to offer. If that is our goal in clinical practice, we need both reliable evidence and experience; one cannot be a substitute for the other, and scientific evidence is an essential precondition for good medicine.
