The subject of placebo is a complex but fascinating one, particularly for those interested in alternative medicine. Most sceptics believe that alternative therapies rely heavily, if not entirely, on the placebo effect. Some alternative practitioners, when unable to produce convincing evidence that their treatment is effective, now seem to have settled for admitting that their therapy works (mostly or entirely) via a placebo effect. They then hasten to add that this is perfectly fine, because it is just an explanation of how it works – a mechanism of action, in other words. Causing benefit via a placebo effect still means, they insist, that their therapy is effective.
In a previous post, I tried to demonstrate that this belief is erroneous and to explain where the notion comes from. It originates, I believe, from a mistaken definition of ‘effectiveness’: for many alternative practitioners, ‘effectiveness’ encompasses the specific plus the non-specific (e.g. placebo) effects of their therapy. In real medicine, ‘effectiveness’ is the degree to which a treatment works under real-life conditions.
The ‘alternative’ definition is, of course, incorrect but alternative practitioners stubbornly refuse to acknowledge this fact. Here are just two reasons why it cannot be right:
- If it were correct, it would be hard to conceive of a treatment that is NOT effective. Applied with empathy and compassion, virtually all treatments – however devoid of specific effects – will produce a placebo effect. Thus they would all be effective, and the term would be superfluous because ‘treatment’ would automatically mean ‘effective’. An ineffective treatment would, in other words, be a contradiction in terms.
- If it were correct, any pharmaceutical or device company could legally market ineffective drugs or gadgets and rightly claim (or even prove) that they are effective. Any such therapy could very easily be shown to generate a placebo effect under the right circumstances; and as long as this is the case, it would be certifiably effective.
I do sympathise with alt med enthusiasts who find this hard or even impossible to accept. They see almost every day how their placebo-therapy benefits their patients. (It seems worth remembering that not just the placebo phenomenon but several other factors are involved in such outcomes – take, for instance, the natural history of the disease and the regression towards the mean.) And they might think that my arguments are nothing but a devious attempt to do away with the beneficial power of the placebo.
The truth, however, is that nobody wants to do anything of the sort; we all want to help patients as much as possible, and that does, of course, include the use of the placebo effect. In clinical practice, we usually want to maximise the placebo effect where possible. But for this goal, we do not require placebo therapies. If we administer a specifically effective therapy with compassion, we undoubtedly also generate a placebo response. In addition, our patients would benefit from the specific effects of the prescribed therapy. Both elements are essential for an optimal therapeutic response, and I don’t know any conventional healthcare professionals who do not aim at this optimal outcome.
Giving just placebos will not normally generate an optimal outcome, and therefore it cannot truly be in the interest of the patient. It is also ethically problematic because it usually entails a degree of deception of the patient. Moreover, placebo effects are unreliable and usually of short duration. Foremost, they do not normally cure a disease; they may alleviate symptoms but they almost never tackle their causes. These characteristics hardly make placebos an acceptable choice for routine clinical practice.
The bottom line is clear and simple: a drug that is not better than placebo can only be classified as being ineffective. The same applies to all non-drug therapies. Double standards are not acceptable in healthcare. And the demonstration of a placebo effect does not turn an ineffective therapy into an effective one.
I know that many alternative practitioners do not agree with this line of thought – so, let’s hear their counter-arguments.
Alternative medicine has no shortage of research that suggests it to be effective. Almost invariably, however, one finds – when looking a bit more carefully at such investigations – that the positive conclusions are not warranted by the data. Here is an excellent, recent example:
This new study, authored by two Turkish nurses, was an RCT where the patients were randomly assigned to either an aromatherapy massage (n = 17), reflexology (n = 17) or the control group (n = 17). Aromatherapy massage was applied to both knees of subjects in group 1 for 30 minutes. Reflexology was administered to both feet of subjects in group 2 for 40 minutes during weekly home visits. The subjects of group 3, the control group, received no intervention.
Fifty-one subjects with rheumatoid arthritis were recruited from a university hospital rheumatology clinic in Turkey between July 2014 and January 2015 for this trial. Data were collected by personal information form, DAS28 index, Visual Analog Scale and Fatigue Severity Scale. Pain and fatigue scores were measured at baseline and within an hour after each intervention for 6 weeks.
Pain and fatigue scores significantly decreased in the aromatherapy massage and reflexology groups compared with the control group (p < .05). The reflexology intervention started to decrease pain and fatigue scores earlier than aromatherapy massage (week 1 vs week 2 for pain, week 1 vs week 4 for fatigue) (p < .05).
The authors concluded that aromatherapy massage and reflexology are simple and effective non-pharmacologic nursing interventions that can be used to help manage pain and fatigue in patients with rheumatoid arthritis.
I am sure that most readers have spotted the snag: the two interventions generated better outcomes than no therapy. It is quite simply wrong to assume that this outcome is specifically related to the two treatments. Both treatments are fairly agreeable; they generate expectations and involve touch, attention and care. In my view, it is these latter factors which together have caused the better outcomes. And this is, of course, entirely unrelated to any specific effects of the two therapies.
This might well seem trivial, but if such sloppy conclusions pollute the literature to the extent that they currently do in the realm of alternative medicine, it becomes important.
Yesterday, I wrote about a new acupuncture trial. Amongst other things, I wanted to find out whether the author who had previously insisted I answer his questions about my view on the new NICE guideline would himself answer a few questions when asked politely. To remind you, this is what I wrote:
This new study was designed as a randomized, sham-controlled trial of acupuncture for persistent allergic rhinitis in adults and investigated possible modulation of mucosal immune responses. A total of 151 individuals were randomized into real and sham acupuncture groups (who received twice-weekly treatments for 8 weeks) and a no acupuncture group. Various cytokines, neurotrophins, proinflammatory neuropeptides, and immunoglobulins were measured in saliva or plasma from baseline to 4-week follow-up.
Statistically significant reduction in allergen specific IgE for house dust mite was seen only in the real acupuncture group. A mean (SE) statistically significant down-regulation was also seen in pro-inflammatory neuropeptide substance P (SP) 18 to 24 hours after the first treatment. No significant changes were seen in the other neuropeptides, neurotrophins, or cytokines tested. Nasal obstruction, nasal itch, sneezing, runny nose, eye itch, and unrefreshed sleep improved significantly in the real acupuncture group (post-nasal drip and sinus pain did not) and continued to improve up to 4-week follow-up.
The authors concluded that acupuncture modulated mucosal immune response in the upper airway in adults with persistent allergic rhinitis. This modulation appears to be associated with down-regulation of allergen specific IgE for house dust mite, which this study is the first to report. Improvements in nasal itch, eye itch, and sneezing after acupuncture are suggestive of down-regulation of transient receptor potential vanilloid 1.
…Anyway, the trial itself raises a number of questions – unfortunately I have no access to the full paper – which I will post here in the hope that my acupuncture friend, who is clearly impressed by this paper, might provide the answers in the comments section below:
- Which was the primary outcome measure of this trial?
- What was the power of the study, and how was it calculated?
- For which outcome measures was the power calculated?
- How were the subjective endpoints quantified?
- Were validated instruments used for the subjective endpoints?
- What type of sham was used?
- Are the reported results the findings of comparisons between verum and sham, or verum and no acupuncture, or intra-group changes in the verum group?
- What other treatments did each group of patients receive?
- Does anyone really think that this trial shows that “acupuncture is a safe, effective and cost-effective treatment for allergic rhinitis”?
In the comments section, the author wrote: “after you have read the full text and answered most of your questions for yourself, it might then be a more appropriate time to engage in any meaningful discussion, if that is in fact your intent”, and I asked him to send me his paper. As he does not seem to have the intention to do so, I will answer the questions myself and encourage everyone to have a close look at the full paper [which I can supply on request].
- The myriad of lab tests were defined as primary outcome measures.
- Two sentences are offered, but they do not allow me to reconstruct how this was done.
- No details are provided.
- Most were quantified with a 3 point scale.
- Mostly not.
- Needle insertion at non-acupoints.
- The results are a mixture of inter- and intra-group differences.
- Patients were allowed to use conventional treatments and the frequency of this use was reported in patient diaries.
- I don’t think so.
So, here is my interpretation of this study:
- It lacked power for many outcome measures, certainly the clinical ones.
- There were hardly any differences between the real and the sham acupuncture group.
- Most of the relevant results were based on intra-group changes, rather than comparing sham with real acupuncture, a fact, which is obfuscated in the abstract.
- In a controlled trial fluctuations within one group must never be interpreted as caused by the treatment.
- There were dozens of tests for statistical significance, and there seems to be no correction for multiple testing.
- Thus the few significant results that emerged when comparing sham with real acupuncture might easily be false positives.
- Patient-blinding seems questionable.
- McDonald, the study's only therapist, might be suspected of having influenced his patients through verbal and non-verbal communication.
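The point about multiple testing deserves a quick quantification. The sketch below (purely illustrative numbers, not taken from the paper) shows how fast the chance of at least one spurious "significant" result grows when dozens of uncorrected tests are run, and what a simple Bonferroni correction would demand instead:

```python
# Why dozens of uncorrected significance tests almost guarantee false positives:
# each test on data with NO real effect still has a 5% chance of p < .05.

def familywise_error_rate(k, alpha=0.05):
    """Probability of at least one false positive among k independent null tests."""
    return 1 - (1 - alpha) ** k

def bonferroni_threshold(k, alpha=0.05):
    """Per-test p-value threshold keeping the familywise error rate at alpha."""
    return alpha / k

for k in (1, 10, 20, 50):
    print(f"{k:3d} tests: P(>=1 false positive) = {familywise_error_rate(k):.2f}, "
          f"Bonferroni threshold = {bonferroni_threshold(k):.4f}")
```

With 20 tests, the chance of at least one false positive is already about 64%, which is why isolated significant results in a sea of comparisons mean very little.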
I am sure there are many more flaws, particularly in the stats, and I leave it to others to identify them. The ones I found are, however, already serious enough, in my view, to call for a withdrawal of this paper. Essentially, the authors seem to have presented a study with largely negative findings as a trial with positive results showing that acupuncture is an effective therapy for allergic rhinitis. Subsequently, McDonald went on social media to inflate his findings even more. One might easily ask: is this scientific misconduct or just poor science?
I would be most interested to hear what you think about it [if you want to see the full article, please send me an email].
Reiki is one of the most popular types of ‘energy healing’. Reiki healers believe they are able to channel ‘healing energy’ into patients’ bodies, thus enabling them to get healthy. If Reiki were not such a popular treatment, one could brush such claims aside and think “let the lunatic fringe believe what they want”. But as Reiki so effectively undermines consumers’ sense of reality and rationality, I feel I should continue informing the public about this subject – despite the fact that I have already reported about it several times before, for instance here, here, here, here, here and here.
A new RCT, published in a respected journal, looks interesting enough for a further blog-post on the subject. The main aim of the study was to investigate the effectiveness of two psychotherapeutic approaches, cognitive behavioural therapy (CBT) and a complementary medicine method, Reiki, in reducing depression scores in adolescents. The researchers from Canada, Malaysia and Australia recruited 188 depressed adolescents. They were randomly assigned to CBT, Reiki or wait-list. Depression scores were assessed before and after 12 weeks of treatments/wait list. CBT showed a significantly greater decrease in Child Depression Inventory (CDI) scores across treatment than both Reiki (p<.001) and the wait-list control (p<.001). Reiki also showed greater decreases in CDI scores across treatment relative to the wait-list control condition (p=.031). Male participants showed a smaller treatment effect for Reiki than did female participants. The authors concluded that both CBT and Reiki were effective in reducing the symptoms of depression over the treatment period, with the effect of CBT being greater than that of Reiki.
I find it most disappointing that these days even respected journals publish such RCTs without the necessary critical input. This study may appear to be rigorous but, in fact, it is hardly worth the paper it was printed on.
The results show that Reiki produced worse results than CBT. That I can well believe!
However, the findings also suggest that Reiki was nevertheless “effective in reducing the symptoms of depression”, as the authors put it in their conclusions. This statement is misleading!
It is based on the comparison of Reiki with doing nothing. As Reiki involves lots of attention, it can be assumed to generate a sizable placebo effect. As a proportion of the patients in the wait list group are probably disappointed for not getting such attention, they can be assumed to experience the adverse effects of their disappointment. The two phenomena combined can easily explain the result without any “effectiveness” of Reiki per se.
If such considerations are not fully discussed and made amply clear even in the conclusions of the abstract, it seems reasonable to accuse the journal of being less than responsible and the authors of being outright misleading.
As with so many papers in this area, one has to ask: WHERE DOES SLOPPY RESEARCH END AND WHERE DOES SCIENTIFIC MISCONDUCT BEGIN?
A recent comment to a blog-post about alternative treatments for cancer inspired me to ponder a bit. I think it is noteworthy because it exemplifies so many of the comments I hear in the realm of alternative medicine on an almost daily basis. Here is the comment in question:
“Yes…it appears that the medical establishment have known for years that chemotherapy a lot of the time kills patients faster than if they were untreated…what’s more, it worsens a person’s quality of life in which many die directly of the severe effects on the endocrine, immune system and more…cancers often return in more aggressive forms metastasising with an increased risk of apoptosis. In other words it makes things worse whereas there are many natural remedies which not only do no harm but accumulating evidence points to their capacity to fight cancer…some of it is bullshit whilst some holds some truth!! So turning away from toxic treatments that kill towards natural approaches that are showing more hope with the backing of trials kinda reverses the whole argument of this article.”
The comment first annoyed me a bit, of course, but later it made me think and consider the differences between conspiracy theories, assumptions, opinions, evidence and scientific facts. Let’s tackle each of these in turn.
A conspiracy theory is an explanatory or speculative theory suggesting that two or more persons, or an organization, have conspired to cause or cover up, through secret planning and deliberate action, an event or situation typically regarded as illegal or harmful.
Part of the above comment bears some of the hallmarks of a conspiracy theory: “…the medical establishment have known for years that chemotherapy a lot of the time kills patients faster than if they were untreated…” The assumption here is that conventional healthcare practitioners are evil enough to knowingly do harm to their patients. Such conspiracy theories abound in the realm of alternative medicine; they include the notions that
- BIG PHARMA is out to kill us all in order to maximize their profits,
- the ‘establishment’ is suppressing any information about the benefits of alternative treatments,
- vaccinations are known to be harmful but are nevertheless forced on our children,
- drug regulators are in the pocket of the pharmaceutical industry,
- doctors accept bribes for prescribing dangerous drugs
- etc. etc.
In a previous blog-post, I have discussed the fact that the current popularity of alternative medicine is at least partly driven by the conviction that there is a sinister plot by ‘the establishment’ that prevents people from benefitting from the wonders of alternative treatments. It is therefore hardly surprising that conspiracy theories like the above are voiced regularly on this blog and elsewhere.
An assumption is something taken for granted or accepted as true without proof.
The above comment continues by stating that “…[chemotherapy] makes things worse whereas there are many natural remedies which not only do no harm but accumulating evidence points to their capacity to fight cancer…” There is no proof for these assertions, yet the author takes them for granted. If one were to look for the known facts, one would find these assumptions to be erroneous: chemotherapy has saved countless lives and there simply are no natural remedies that will cure any form of cancer. In the realm of alternative medicine, this seems to worry few, and assumptions of this or similar nature are made every day. Sadly, this plethora of assumptions and bogus claims eventually endangers public health.
The above comment continues with the opinion that “…turning away from toxic treatments that kill towards natural approaches that are showing more hope with the backing of trials kinda reverses the whole argument of this article.” In general, alternative medicine is based on opinions of this sort. On this blog, we have plenty of examples of that in the comments section. This is perhaps understandable: evidence is usually in short supply, and therefore it is often swiftly replaced with emotionally loaded opinions. It is even fair to say that much of alternative medicine is, in truth, opinion-based healthcare.
One remarkable feature of the above comment is that it is bare of any evidence. In a previous post, I have tried to explain the nature of evidence regarding the efficacy of medical interventions:
The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors (e.g. placebo effects, natural history of the condition, regression towards the mean), and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.
Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.
Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The over-riding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.
Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.
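This logic can be made concrete with a toy simulation (all numbers invented). Both groups experience the same non-specific factors; only the between-group difference recovers the specific treatment effect:

```python
import random

random.seed(1)

def simulate_patient(receives_treatment, specific_effect=2.0):
    """Symptom improvement = shared non-specific factors (+ specific effect) + noise."""
    natural_history = random.gauss(5.0, 1.0)   # condition improves on its own
    placebo_response = random.gauss(3.0, 1.0)  # expectation, attention, care
    noise = random.gauss(0.0, 1.0)
    improvement = natural_history + placebo_response + noise
    if receives_treatment:
        improvement += specific_effect
    return improvement

n = 5000
treated = [simulate_patient(True) for _ in range(n)]
control = [simulate_patient(False) for _ in range(n)]

mean = lambda xs: sum(xs) / len(xs)
print(f"treated group improves by ~{mean(treated):.1f}")   # everything combined (~10)
print(f"control group improves by ~{mean(control):.1f}")   # non-specific factors only (~8)
print(f"between-group difference  ~{mean(treated) - mean(control):.1f}")  # the specific effect (~2)
```

Both groups improve substantially, so an uncontrolled "before/after" observation would credit the treatment with the full improvement; only the comparison isolates what the treatment itself contributed.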
Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their short-comings, they are far superior to any other method for determining the efficacy of medical interventions.
There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.
Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.
In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.
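For readers curious about the mechanics of pooling, here is a minimal sketch of inverse-variance (fixed-effect) weighting, the simplest form of meta-analysis; the trial results are entirely made up:

```python
import math

def fixed_effect_pool(estimates):
    """Inverse-variance weighted pooled estimate from (effect, standard_error) pairs."""
    weights = [1 / se ** 2 for _, se in estimates]
    pooled = sum(w * eff for (eff, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# hypothetical trials: mean difference vs placebo and its standard error
trials = [(0.30, 0.20), (0.10, 0.15), (-0.05, 0.25)]
effect, se = fixed_effect_pool(trials)
print(f"pooled effect = {effect:.2f} "
      f"(95% CI {effect - 1.96 * se:.2f} to {effect + 1.96 * se:.2f})")
```

The precise trials count for more, and the pooled confidence interval is narrower than that of any single trial, which is exactly why a systematic review of all reliable studies beats cherry-picking one of them.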
Some facts related to the subject of alternative medicine have already been mentioned:
- chemotherapy prolongs survival of many cancer patients;
- no alternative therapy has achieved anything remotely similar.
The comment above that motivated me to write this somewhat long-winded post is devoid of facts. This is just one more feature that makes it so typical of the comments by proponents of alternative medicine we see with such embarrassing regularity.
Mindfulness-based stress reduction (MBSR) has not been rigorously evaluated as a treatment of chronic low back pain. According to its authors, this RCT was aimed at evaluating “the effectiveness for chronic low back pain of MBSR vs cognitive behavioural therapy (CBT) or usual care.”
The investigators randomly assigned patients to receive MBSR (n = 116), CBT (n = 113), or usual care (n = 113). CBT meant training to change pain-related thoughts and behaviours and MBSR meant training in mindfulness meditation and yoga. Both were delivered in 8 weekly 2-hour groups. Usual care included whatever care participants received.
Coprimary outcomes were the percentages of participants with clinically meaningful (≥30%) improvement from baseline in functional limitations (modified Roland Disability Questionnaire [RDQ]; range, 0-23) and in self-reported back pain bothersomeness (scale, 0-10) at 26 weeks. Outcomes were also assessed at 4, 8, and 52 weeks.
There were 342 randomized participants with a mean duration of back pain of 7.3 years. They attended 6 or more of the 8 sessions, 294 patients completed the study at 26 weeks, and 290 completed it at 52 weeks. In intent-to-treat analyses at 26 weeks, the percentage of participants with clinically meaningful improvement on the RDQ was higher for those who received MBSR (60.5%) and CBT (57.7%) than for usual care (44.1%) (RR for CBT vs usual care, 1.31 [95% CI, 1.01-1.69]). The percentage of participants with clinically meaningful improvement in pain bothersomeness at 26 weeks was 43.6% in the MBSR group and 44.9% in the CBT group, vs 26.6% in the usual care group (RR for CBT vs usual care, 1.69 [95% CI, 1.18-2.41]). Findings for MBSR persisted with little change at 52 weeks for both primary outcomes.
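The relative risks reported here are simple ratios of the response percentages, which anyone can check from the figures above:

```python
def relative_risk(p_treatment, p_control):
    """Relative risk of a 'clinically meaningful improvement' between two arms."""
    return p_treatment / p_control

# response proportions reported in the trial at 26 weeks
rdq = {"MBSR": 0.605, "CBT": 0.577, "usual care": 0.441}
pain = {"MBSR": 0.436, "CBT": 0.449, "usual care": 0.266}

for label, arms in (("RDQ improvement", rdq), ("pain bothersomeness", pain)):
    for arm in ("MBSR", "CBT"):
        rr = relative_risk(arms[arm], arms["usual care"])
        print(f"{label}: RR {arm} vs usual care = {rr:.2f}")
```

This reproduces the reported RRs of 1.31 and 1.69 for CBT, and gives roughly 1.37 and 1.64 for MBSR; the confidence intervals, of course, require the underlying group sizes.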
The authors concluded that among adults with chronic low back pain, treatment with MBSR or CBT, compared with usual care, resulted in greater improvement in back pain and functional limitations at 26 weeks, with no significant differences in outcomes between MBSR and CBT. These findings suggest that MBSR may be an effective treatment option for patients with chronic low back pain.
At first glance, this seems like a well-conducted study. It was conducted by one of the leading back pain research teams and was published in a top journal. It will therefore have considerable impact. However, on closer examination, I have serious doubts about certain aspects of this trial. In my view, both the aims and the conclusions of this RCT are quite simply wrong.
The authors state that they aimed at evaluating “the effectiveness for chronic low back pain of MBSR vs cognitive behavioural therapy (CBT) or usual care.” This is not just misleading, it is wrong! The correct aim should have been to evaluate “the effectiveness for chronic low back pain of MBSR plus usual care vs cognitive behavioural therapy plus usual care or usual care alone.” One has to go into the method section to find the crucial statement: “All participants received any medical care they would normally receive.”
Consequently, the conclusions are equally wrong. They should have read as follows: Among adults with chronic low back pain, treatment with MBSR plus usual care or CBT plus usual care, compared with usual care alone, resulted in greater improvement in back pain and functional limitations at 26 weeks, with no significant differences in outcomes between MBSR and CBT.
In other words, this is yet another trial with the dreaded ‘A+B vs B’ design. Because A+B is always more than B (even if A is just a placebo), such a study can never generate a negative result. The results are therefore entirely compatible with the notion that the two tested treatments are pure placebos. Add to this the disappointment many patients in the ‘usual care group’ might have felt for not receiving an additional therapy for their pain, and you have a most plausible explanation for the observed outcomes.
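The ‘A+B vs B’ problem is easy to demonstrate with a toy simulation in which the add-on therapy has, by construction, no specific effect whatsoever (all numbers are invented):

```python
import random

random.seed(2)

def improvement(gets_addon):
    """Pain improvement under usual care; the add-on has NO specific effect."""
    usual_care = random.gauss(4.0, 1.5)
    if gets_addon:
        # attention, expectation and ritual of the extra therapy (placebo response)
        usual_care += random.gauss(1.5, 1.0)
    return usual_care

n = 5000
addon = [improvement(True) for _ in range(n)]
usual = [improvement(False) for _ in range(n)]
mean = lambda xs: sum(xs) / len(xs)
print(f"usual care + add-on: ~{mean(addon):.1f}")
print(f"usual care alone:    ~{mean(usual):.1f}")
# The add-on arm 'wins' even though the add-on is a pure placebo.
```

Because the add-on arm reliably comes out ahead, a positive result from this design tells us nothing about whether the add-on has any specific effect; only a sham-controlled comparison could do that.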
I am totally puzzled why the authors failed to discuss these possibilities and limitations in full, and I am equally bewildered that JAMA published such questionable research.
Lots of people are puzzled how healthcare professionals – some with sound medical training – can become convinced homeopaths. Having done part of this journey myself, I think I know one possible answer to this question. So, let me try to explain it to you in the form of a ‘story’ of a young doctor who goes through this development. As you may have guessed, some elements of this story are autobiographical but others are entirely fictional.
Here is the story:
After he had finished medical school, our young and enthusiastic doctor wanted nothing more than to help and assist needy patients. A chain of coincidences made him take a post in a homeopathic hospital where he worked as a junior clinician alongside 10 experienced homeopaths. What he saw impressed him: despite what he had learnt at med school, homeopathy seemed to work quite well: patients with all sorts of symptoms improved. This was not his or anybody else’s imagination, it was an undeniable fact.
As his confidence and his ability to think clearly grew, the young physician began to wonder nevertheless: were his patients’ improvements really due to the homeopathic remedies, or were these outcomes caused by the kind and compassionate care he and the other staff provided?
To cut a long story short, when he left the hospital to establish his own practice, he certainly knew how to prescribe homeopathics but he was not what one might call a convinced homeopath. He decided to employ homeopathy in parallel with conventional medicine and it turned out that he made less and less use of homeopathy as the months went by.
One day, a young woman consulted him; she had been unsuccessfully trying to have a baby for two years and was now getting very frustrated, even depressed, with her childlessness. All tests on her and her husband had not revealed any abnormalities. A friend had told her that homeopathy might help, and she had therefore made this appointment to consult a doctor who had trained as a homeopath.
Our young physician was not convinced that he could help his patient but, in the end, he was persuaded to give it a try. As he had been taught by his fellow homeopaths, he conducted a full homeopathic history to find the optimal remedy for his patient, gave her an individualised prescription and explained that any effect might take a while. The patient was delighted that someone had given her so much time, felt well cared for by her homeopath, and seemed full of optimism.
Months passed and she returned for several further consultations. But sadly she failed to become pregnant. About a year later, when everyone involved had all but given up hope, her periods stopped and the test confirmed: she was expecting!
Everyone was surprised, not least our doctor. This outcome, he reasoned, could not possibly be due to placebo, or the good therapeutic relationship he had been able to establish with his patient. Perhaps it was just a coincidence?
In the small town where they lived, news spread quickly that he was able to treat infertility with homeopathy. Several other women with the same problem liked the idea of having an effective yet risk-free therapy for their infertility problem. The doctor thus treated several infertile women, about 10, during the next months. Amazingly most of them got pregnant within a year or so. The doctor was baffled, such a series of pregnancies could not be a coincidence, he reasoned.
Naturally, the cases that were talked about were the women who had become pregnant. And naturally, these were the patients our doctor liked to remember. Slowly he became convinced that he was indeed able to treat infertility homeopathically – so much so that he published a case series in a homeopathic journal about his successes.
In a way, he had hoped that, perhaps, someone would challenge him and explain where he had gone wrong. But the article was greeted nationally with much applause by his fellow homeopaths, and he was even invited to speak at several conferences. In short, within a few years, he made himself a name for his ability to help infertile women.
Patients now travelled from across the country to see him, and some even came from abroad. Our physician had become a minor celebrity in the realm of homeopathy. He also, one has to admit, had started to make very good money; most of his patients were private patients. Life was good. It almost goes without saying that all his former doubts about the effectiveness of homeopathic remedies gradually vanished into thin air.
Whenever someone now challenged his findings with arguments like ‘homeopathics are just placebos’, he surprised himself by getting quite angry. How dare they doubt my data, he thought. The babies are there; to deny their existence means calling me a liar!
OUR DOCTOR HAD BECOME AN EVANGELICALLY CONVINCED HOMEOPATH, AND NO RATIONAL ARGUMENT COULD DISSUADE HIM.
And what arguments might that be? Isn’t he entirely correct? Can dozens of pregnancies be the result of a placebo effect, the therapeutic relationship or coincidence?
The answer is NO! The babies are real, very real.
But there are other, even simpler and much more plausible explanations for our doctor’s apparent success rate: otherwise healthy women who don’t get pregnant within months of trying do very often succeed eventually, even without any treatment whatsoever. Our doctor struck lucky when this happened a few times after the first patient had consulted him. Had he prescribed non-homeopathic placebos, his success rate would have been exactly the same.
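A back-of-the-envelope calculation shows just how unremarkable such a ‘success series’ is. The figures below are purely illustrative assumptions (a per-cycle conception chance of roughly 20% is a commonly cited ballpark for otherwise healthy couples; the true figure varies with age and other factors):

```python
from math import comb

# Hypothetical per-cycle (monthly) conception probability -- an
# illustrative assumption, not a figure from any study cited here.
p_monthly = 0.2

def chance_within(months, p=p_monthly):
    """Probability of at least one conception in `months` independent cycles."""
    return 1 - (1 - p) ** months

print(f"within 6 months:  {chance_within(6):.0%}")
print(f"within 12 months: {chance_within(12):.0%}")

# Chance that at least 6 of 10 such untreated patients conceive within a year,
# modelled as a simple binomial:
p12 = chance_within(12)
p_most = sum(comb(10, k) * p12**k * (1 - p12)**(10 - k) for k in range(6, 11))
print(f"P(at least 6 of 10 conceive within a year, no treatment): {p_most:.0%}")
```

Under these assumptions, ‘most of about ten patients pregnant within a year’ is close to a statistical certainty even without any treatment at all, which is exactly the point made above.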
As a clinician, it is all too easy and extremely tempting to fail to rationalise such ‘success’ adequately. If the ‘success’ then happens repeatedly, one is in danger of becoming deluded, and one almost automatically ‘forgets’ one’s failures. Over time, this confirmation bias creates an entirely false impression and often even a deeply felt conviction.
I am sure that this sort of thing happens often, very often. And it happens not just to homeopaths. It happens to all types of quacks. And, I am afraid, it also happens to many conventional doctors.
This is how ineffective treatments often survive for very long periods. This is how blood-letting survived for centuries. And this is how millions of patients come to harm by following the advice of their trusted physicians to employ a useless or even dangerous therapy.
HOW CAN THIS SORT OF THING BE STOPPED?
The answer to this most important question is very simple: health care professionals need to systematically learn critical thinking early on in their education. The answer may be simple but its realisation is unfortunately not.
Even today, courses in critical thinking are rarely part of the medical curriculum. In my view, they would be as important as anatomy, physiology or any of the other core subjects in medicine.
We all hope that serious complications after chiropractic care are rare. However, this does not mean they are unimportant. Multi-vessel cervical dissection with cortical sparing is an exceptional event in clinical practice. Such a case has just been described as a result of chiropractic upper spinal manipulation.
Neurologists from Qatar published a case report of a 55-year-old man who presented with acute-onset neck pain associated with sudden onset right-sided hemiparesis and dysphasia after chiropractic manipulation for chronic neck pain.
Magnetic resonance imaging revealed bilateral internal carotid artery dissection and left extracranial vertebral artery dissection with bilateral anterior cerebral artery territory infarctions and large cortical-sparing left middle cerebral artery infarction. This suggests the presence of functionally patent and interconnecting leptomeningeal anastomoses between cerebral arteries, which may provide sufficient blood flow to salvage penumbral regions when a supplying artery is occluded.
The authors concluded that chiropractic cervical manipulation can result in catastrophic vascular lesions, preventable if these practices are limited to highly specialized personnel under very specific situations.
Chiropractors will claim that they are highly specialised and that such events must be true rarities. Others might even deny a causal relationship altogether. Others again would claim that, relative to conventional treatments, chiropractic manipulations are extremely safe. You only need to search my blog using the search-term ‘chiropractic’ to find that there are considerable doubts about these assumptions:
- Many chiropractors are not well trained and seem mostly in the business of making a tidy profit.
- Some seem to have forgotten most of the factual knowledge they may have learnt at chiro-college.
- There is no effective monitoring scheme to adequately record serious side-effects of chiropractic care.
- Therefore the incidence figures of such catastrophic events are currently still anyone’s guess.
- Publications by chiropractic interest groups seemingly denying this point are all fatally flawed.
- It is not far-fetched to fear that under-reporting of serious complications is huge.
- The reliable evidence fails to demonstrate that neck manipulations generate more good than harm.
- Until sound evidence is available, the precautionary principle leads most critical thinkers to conclude that neck manipulations have no place in routine health care.
The randomized, placebo-controlled, double-blind trial is usually the methodology that carries the least risk of bias when testing the efficacy of a therapy. This fact is an obvious annoyance to some alt med enthusiasts, because such trials far too often fail to produce the results they were hoping for.
But there is no need to despair. Here I provide a few simple tips on how to mislead the public with seemingly rigorous trials.
The most brutal method for misleading people is simply to cheat. The Germans have a saying, ‘Papier ist geduldig’ (paper is patient), implying that anyone can put anything on paper. Fortunately, we currently have plenty of alt med journals which publish any rubbish anyone might dream up. The process of ‘peer-review’ is one of several mechanisms supposed to minimise the risk of scientific fraud. Yet alt med journals are cleverer than that! Their peer-review rarely involves independent and critical scientists; more often than not, you can even ask that your best friend be invited to do the peer-review, and the alt med journal will follow your wish. Consequently, the door is wide open to cheating. Once your fraudulent paper has been published, it is almost impossible to tell that something is fundamentally wrong.
But cheating is not confined to original research. You can also apply the method to other types of research, of course. For instance, the authors of the infamous ‘Swiss report’ on homeopathy generated a false positive picture using published systematic reviews of mine by simply changing their conclusions from negative to positive. Simple!
Obviously, outright cheating is not always as simple as that. Even in alt med, you cannot easily claim to have conducted a clinical trial without a complex infrastructure which invariably involves other people. And they are likely to want some control over what is happening. This means that complete fabrication of an entire data set may not always be possible. What might still be feasible, however, is the ‘prettification’ of the results. By just ‘re-adjusting’ a few data points that failed to live up to your expectations, you might be able to turn a negative into a positive trial. Proper governance is aimed at preventing this type of ‘mini-fraud’, but fortunately you work in alt med, where such mechanisms are rarely adequately implemented.
Another very handy method is the omission of aspects of your trial which regrettably turned out to be in disagreement with the desired overall result. In most studies, one has a myriad of endpoints. Once the statistics of your trial have been calculated, it is likely that some of them yield the wanted positive results, while others do not. By simply omitting any mention of the embarrassingly negative results, you can easily turn a largely negative study into a seemingly positive one. Normally, researchers have to rely on a pre-specified protocol which defines a primary outcome measure. Thankfully, in the absence of proper governance, it usually is possible to publish a report which obscures such detail and thus misleads the public (I even think there has been an example of such an omission on this very blog).
Yes – lies, damned lies, and statistics! A gifted statistician can easily find ways to ‘torture the data until they confess’. One only has to run statistical test after statistical test, and BINGO, one will eventually yield something that can be marketed as the longed-for positive result. Normally, researchers must have a protocol that pre-specifies all the methodologies used in a trial, including the statistical analyses. But, in alt med, we certainly do not want things to function normally, do we?
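How well this trick works is easy to demonstrate with a quick simulation (a sketch with invented numbers, not data from any real trial): give a pure placebo twenty independent endpoints and, with no true effect anywhere, the chance of at least one ‘significant’ result at the 0.05 level is roughly 64%.

```python
import random
from statistics import mean, stdev

random.seed(1)

def t_stat(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a)**2 + (nb - 1) * stdev(b)**2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def placebo_trial(n_endpoints=20, n=30):
    """One trial of a treatment with NO effect: both arms are drawn from the
    same distribution, so every 'significant' endpoint is a false positive."""
    return sum(
        abs(t_stat([random.gauss(0, 1) for _ in range(n)],
                   [random.gauss(0, 1) for _ in range(n)])) > 2.0  # ~ p < 0.05
        for _ in range(n_endpoints)
    )

trials = 1000
at_least_one = sum(placebo_trial() > 0 for _ in range(trials)) / trials
print(f"Trials with >= 1 'significant' endpoint: {at_least_one:.0%}")
# Theory: 1 - 0.95**20 is about 64% -- and every single 'hit' is spurious.
```

Report only the endpoint that ‘confessed’, stay silent about the other nineteen, and the trial looks like a success.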
5. TRIAL DESIGNS THAT CANNOT GENERATE A NEGATIVE RESULT
All the above tricks are, of course, a bit fraudulent. Unfortunately, fraud is frowned upon by many. Therefore, a more legitimate means of misleading the public would be highly desirable for those aspiring alt med researchers who do not want to tarnish their record. No worries, guys: help is on the way!
The fool-proof trial design is obviously the often-mentioned ‘A+B versus B’ design. In such a study, patients are randomized to receive an alt med treatment (A) together with usual care (B), or usual care (B) alone. This looks rigorous, can be sold as a ‘pragmatic’ trial addressing a real-life problem, and has the enormous advantage of never failing to produce a positive result: A+B is always more than B alone, even if A is a pure placebo. Such trials are akin to going into a hamburger joint to compare the calories of a Big Mac without chips with those of a Big Mac with chips. We know the result before the research has started; in alt med, that’s how it should be!
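The guaranteed ‘win’ can be sketched in a few lines. In this toy simulation (all effect sizes are invented assumptions), treatment A is completely inert; the only thing the A+B arm gets on top of usual care is a non-specific placebo/attention response:

```python
import random
from statistics import mean

random.seed(42)

def simulate_trial(n=100, placebo_effect=0.5):
    """One 'A+B versus B' trial where A is a pure placebo.

    Both arms get the same usual-care improvement; the A+B arm gets an
    extra non-specific (placebo/attention) response on top. All numbers
    are illustrative assumptions, not data from any real trial."""
    b_arm = [random.gauss(1.0, 1.0) for _ in range(n)]                  # B alone
    ab_arm = [random.gauss(1.0, 1.0) + abs(random.gauss(placebo_effect, 0.3))
              for _ in range(n)]                                         # A + B
    return mean(ab_arm) - mean(b_arm)

diffs = [simulate_trial() for _ in range(200)]
wins = sum(d > 0 for d in diffs) / len(diffs)
print(f"A+B beats B in {wins:.0%} of simulated trials")
```

Because the extra non-specific response is always non-negative, the A+B arm comes out ahead essentially every time, regardless of whether A has any specific effect at all.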
I have been banging on about the ‘A+B versus B’ design often enough, but recently I came across a new study design used in alt med which is just as elegantly misleading. The trial in question has a promising title: Quality-of-life outcomes in patients with gynecologic cancer referred to integrative oncology treatment during chemotherapy. Here is the unabbreviated abstract:
Integrative oncology incorporates complementary medicine (CM) therapies in patients with cancer. We explored the impact of an integrative oncology therapeutic regimen on quality-of-life (QOL) outcomes in women with gynecological cancer undergoing chemotherapy.
PATIENTS AND METHODS:
A prospective preference study examined patients referred by oncology health care practitioners (HCPs) to an integrative physician (IP) consultation and CM treatments. QOL and chemotherapy-related toxicities were evaluated using the Edmonton Symptom Assessment Scale (ESAS) and Measure Yourself Concerns and Wellbeing (MYCAW) questionnaire, at baseline and at a 6-12-week follow-up assessment. Adherence to the integrative care (AIC) program was defined as ≥4 CM treatments, with ≤30 days between each session.
Of 128 patients referred by their HCP, 102 underwent IP consultation and subsequent CM treatments. The main concerns expressed by patients were fatigue (79.8 %), gastrointestinal symptoms (64.6 %), pain and neuropathy (54.5 %), and emotional distress (45.5 %). Patients in both AIC (n = 68) and non-AIC (n = 28) groups shared similar demographic, treatment, and cancer-related characteristics. ESAS fatigue scores improved by a mean of 1.97 points in the AIC group on a scale of 0-10 and worsened by a mean of 0.27 points in the non-AIC group (p = 0.033). In the AIC group, MYCAW scores improved significantly (p < 0.0001) for each of the leading concerns as well as for well-being, a finding which was not apparent in the non-AIC group.
An IP-guided CM treatment regimen provided to patients with gynecological cancer during chemotherapy may reduce cancer-related fatigue and improve other QOL outcomes.
A ‘prospective preference study’ – this is the design the world of alt med has been yearning for! Its principle is beautiful in its simplicity. One merely administers a treatment or treatment package to a group of patients; inevitably some patients take it, while others don’t. The reasons for not taking it could range from lack of perceived effectiveness to experience of side-effects. But never mind, the fact that some do not want your treatment provides you with two groups of patients: those who comply and those who do not comply. With a bit of skill, you can now make the non-compliers appear like a proper control group. Now you only need to compare the outcomes and BOB IS YOUR UNCLE!
Brilliant! Absolutely brilliant!
I cannot think of a more deceptive trial-design than this one; it will make any treatment look good, even one that is a mere placebo. Alright, it is not randomized, and it does not even have a proper control group. But it sure looks rigorous and meaningful, this ‘prospective preference study’!
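The self-selection at the heart of a ‘prospective preference study’ can also be sketched in code. In this toy simulation (every number is an invented assumption), nobody receives an effective treatment; patients who happen to improve spontaneously are simply assumed to be more likely to keep attending, and the ‘adherent’ group duly looks better:

```python
import random
from statistics import mean

random.seed(7)

def preference_study(n=10_000):
    """Sketch of self-selection bias: NOBODY gets an effective treatment.

    Each patient's improvement follows the natural course of the condition.
    Patients who improve are assumed (hypothetically) to be more likely to
    keep attending, so 'adherent' and 'non-adherent' groups differ from the
    outset -- not because of anything the treatment did."""
    adherent, non_adherent = [], []
    for _ in range(n):
        improvement = random.gauss(0, 1)        # natural course, zero treatment effect
        p_stay = 0.5 + 0.3 * (improvement > 0)  # improvers more likely to comply
        (adherent if random.random() < p_stay else non_adherent).append(improvement)
    return mean(adherent), mean(non_adherent)

a, na = preference_study()
print(f"adherent group:     mean improvement {a:+.2f}")
print(f"non-adherent group: mean improvement {na:+.2f}")
```

The apparent ‘treatment effect’ is entirely an artefact of who chooses to stay in treatment, which is exactly why compliers versus non-compliers is not a fair comparison.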
While some chiropractors now do admit that upper neck manipulations can cause severe problems, many of them simply continue to ignore this fact. It is therefore important, I think, to keep alerting both consumers and chiropractors to the risks of spinal manipulations. In this context, a new article seems relevant.
Danish doctors reported a critical case of bilateral vertebral artery dissection (VAD) causing embolic occlusion of the basilar artery (BA) in a patient whose symptoms started after chiropractic spinal manipulative therapy (cSMT). The patient, a 37-year-old woman, presented with acute onset of neurological symptoms immediately following cSMT in a chiropractic facility. Acute magnetic resonance imaging (MRI) showed ischemic lesions in the right cerebellar hemisphere and occlusion of the cranial part of the BA. Angiography demonstrated bilateral VADs. Symptoms remitted after endovascular therapy, which included dilatation of the left vertebral artery (VA) and extraction of thrombus from the BA. After 6 months, the patient still had minor sensory and cognitive deficits.
The authors concluded that, in severe cases, VAD may be complicated by BA thrombosis, and this case highlights the importance of a fast diagnostic approach and advanced intravascular procedure to obtain good long-term neurological outcome. Furthermore, this case underlines the need to suspect VAD in patients presenting with neurological symptoms following cSMT.
I can already hear the excuses of the chiropractic fraternity:
- this is just a case report,
- the risk is very rare,
- some investigations even deny any risk at all,
- the risk of many conventional treatments is far greater.
To these excuses, I would respond:
- as there are no functioning monitoring systems, nobody can tell with certainty how big the risk truly is,
- the precautionary principle in health care compels us to take even the slightest of suspicions of harm seriously,
- the risk/benefit principle compels us to ask whether the demonstrable benefits of neck manipulations outweigh its suspected risks.
The last point is perhaps the most important: AS FAR AS I CAN SEE, THERE IS NO INDICATION FOR NECK MANIPULATIONS FOR WHICH THE BENEFIT IS SUFFICIENTLY CERTAIN TO JUSTIFY ANY SUCH RISKS.