One could define alternative medicine by the fact that it is used almost exclusively for conditions for which conventional medicine does not have an effective and reasonably safe cure. Once such a treatment has been found, few patients would look for an alternative.
Alzheimer’s disease (AD) is certainly one such condition. Despite intensive research, we are still far from being able to cure it. It is thus not really surprising that AD patients and their carers are bombarded with the promotion of all sorts of alternative treatments. They must feel bewildered by the choice and all too often they fall victim to irresponsible quacks.
Acupuncture is certainly an alternative therapy that is frequently claimed to help AD patients. One of the first websites that I came across, for instance, stated boldly: ‘acupuncture improves memory and prevents degradation of brain tissue’.
But is there good evidence to support such claims? To answer this question, we need a systematic review of the trial data. Fortunately, such a paper has just been published.
The objective of this review was to assess the effectiveness and safety of acupuncture for treating AD. Eight electronic databases were searched from their inception to June 2014. Randomized clinical trials (RCTs) with AD treated by acupuncture or by acupuncture combined with drugs were included. Two authors extracted data independently.
Ten RCTs with a total of 585 participants were included in a meta-analysis. The combined results of 6 trials showed that acupuncture was better than drugs at improving scores on the Mini Mental State Examination (MMSE) scale. Evidence from the pooled results of 3 trials showed that acupuncture plus donepezil was more effective than donepezil alone at improving the MMSE scale score. Only 2 trials reported the incidence of adverse reactions related to acupuncture. Seven patients had adverse reactions related to acupuncture during or after treatment; the reactions were described as tolerable and not severe.
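Pooling of this kind is usually done by inverse-variance weighting of the individual trial results. As a rough sketch of the standard fixed-effect method, here is a minimal example; the mean differences and standard errors below are invented for illustration, since the abstract does not report the trial-level statistics:

```python
# Fixed-effect inverse-variance meta-analysis of mean differences.
# The (mean difference, standard error) pairs are INVENTED for
# illustration; they are not the trials from the review.
import math

trials = [(2.0, 1.1), (1.5, 0.9), (3.1, 1.4)]  # (MD on MMSE, SE)

# each trial is weighted by the inverse of its variance
weights = [1 / se ** 2 for _, se in trials]
pooled_md = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled mean difference
ci = (pooled_md - 1.96 * pooled_se, pooled_md + 1.96 * pooled_se)
print(round(pooled_md, 2), [round(x, 2) for x in ci])
```

Note that this machinery faithfully pools whatever it is given; if the input trials are biased, the pooled estimate is biased too, only with a narrower confidence interval.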
The Chinese authors of this review concluded that acupuncture may be more effective than drugs and may enhance the effect of drugs for treating AD in terms of improving cognitive function. Acupuncture may also be more effective than drugs at improving AD patients’ ability to carry out their daily lives. Moreover, acupuncture is safe for treating people with AD.
Anyone reading this who has a friend or family member affected by AD will think that acupuncture is the solution and warmly recommend trying this highly promising option. I would, however, caution them to remain realistic. Like so many systematic reviews of acupuncture and other forms of TCM that are currently flooding the medical literature, this assessment of the evidence has to be taken with more than just a pinch of salt:
- As far as I can see, there is no biological plausibility or mechanism for the assumption that acupuncture can do anything for AD patients.
- The abstract fails to mention that the trials were of poor methodological quality and that such studies tend to generate false-positive findings.
- The trials had small sample sizes.
- They were mostly not blinded.
- They were mostly conducted in China, and we know that almost 100% of all acupuncture studies from that country draw positive conclusions.
- Only two trials reported adverse effects, which is, in my view, a sign of a violation of research ethics.
As I already mentioned, we are currently being flooded with such dangerously misleading reviews of Chinese primary studies of such dubious quality that one could probably do nothing better than ignore them completely.
Isn’t that a bit harsh? Perhaps, but I am seriously worried that such papers cause real harm:
- They might motivate some to try acupuncture and give up conventional treatments which can be helpful symptomatically.
- They might prompt some families to spend sizable amounts of money for no real benefit.
- They might initiate further research into this area, thus drawing money away from research into much more promising avenues.
IT IS HIGH TIME THAT RESEARCHERS START THINKING CRITICALLY, PEER-REVIEWERS DO THEIR JOB PROPERLY, AND JOURNAL EDITORS STOP PUBLISHING SUCH MISLEADING ARTICLES.
Regular readers of this blog will have noticed: I recently published a ‘memoir’.
Of all the books I have written, this one was by far the hardest. It covers ground that I felt quite uncomfortable with. At the same time, I felt compelled to write it. For over 5 years I kept at it, revised it, re-revised it, re-conceived the outline, abandoned the project altogether only to pick it up again.
When it eventually was finished, we had to find a suitable title. This was far from easy; my book is not a book about alternative medicine, it is a book about all sorts of things that have happened to me, including alternative medicine. Eventually we settled for A SCIENTIST IN WONDERLAND. A MEMOIR OF SEARCHING FOR TRUTH AND FINDING TROUBLE. This seemed to describe its contents quite well, I thought (the German edition is entitled NAZIS, NADELN UND INTRIGEN. ERINNERUNGEN EINES SKEPTIKERS which indicates why it was so difficult to put the diverse contents into a short title).
Then a further complication presented itself: at the very last minute, my publisher insisted that the text had to be checked by libel lawyers. This was not only painful and expensive, following their advice and thus changing or omitting passages also took some of the ‘edge’ off it.
Earlier this year, my ‘memoir’ was finally published; to say that I was nervous about how it might be received must be the understatement of the year. As it turned out, it received so many reviews that today I feel deeply humbled (and very proud), particularly as they were all full of praise and appreciation. In case you are interested, I provide some quotes and the links to the full text reviews below. [Ah, yes! Some people will surely claim that I did all this for the money. To those of my critics, I respond by saying that, had I done paper rounds or worked as a gardener or a window-cleaner during all the time I spent on this book, I would today be considerably better off. As it stands, the costs for the libel read are not yet covered by the income generated through the sales of this book.]
AND HERE ARE THE PROMISED QUOTES
Times Higher Education Book of the Week
Times Higher Education – Helen Bynum, Jan 29, 2015
“[F]or all its trenchant arguments about evidence-based science, the second half of A Scientist in Wonderland remains a very human memoir, and Ernst’s account of the increasingly personal nature of the attacks he faced when speaking to CAM practitioners and advocacy groups is disturbing… Ben Goldacre’s 2012 book Bad Pharma created a storm via its exposure of the pharmaceutical industry’s unhealthy links with mainstream medicine. Ernst’s book deserves to do the same for the quackery trading under the name of complementary and alternative medicine.”
The Spectator – Nick Cohen, Jan 31, 2015
“If you want a true measure of the man, buy Edzard Ernst’s memoir A Scientist in Wonderland, which the Imprint Academic press have just released. It would be worth reading [even] if the professor had never been the victim of a royal vendetta.”
The Bookbag review
The Bookbag – Sue Magee, Jan 28, 2015
“Ernst isn’t just an academic – he’s also an accomplished writer and skilled communicator. He puts over some quite complex ideas without resorting to jargon and I felt informed without ever struggling to understand, despite being a non-scientist. I was pulled into the story of his life and read most of the book in one sitting… I was impressed by what Ernst had to say and the way in which he said it.”
Science-Based Medicine review
Science-Based Medicine – Harriet Hall, Feb 3, 2015
“Edzard Ernst is one of those rare people who dare to question their own beliefs, look at the evidence without bias, and change their minds… In addition to being a memoir, Dr. Ernst’s book is a paean to science… He shows how misguided ideas, poor reasoning, and inaccurate publicity have contributed to the spread of alternative medicine… This is a well-written, entertaining book that anyone would enjoy reading and that advocates of alternative medicine should read: they might learn a thing or two about science, critical thinking, honesty, and the importance of truth.”
Nature – Barbara Kiser, Feb 5, 2015
“[T]his ferociously frank autobiography… [is] a clarion call for medical ethics.”
The Times – Robbie Millen, Feb 9, 2015
“A Scientist in Wonderland is a rather droll, quick read… [and] it’s an effective antidote to New Age nonsense, pseudo-science and old-fashioned quackery.”
AntiCancer.org.uk – Pan Pantziarka, Feb 19, 2015
“It should be required reading for everyone interested in medicine – without exception.”
Mail Online review
Mail Online – Katherine Keogh, Feb 28, 2015
“In his new book, A Scientist In Wonderland: A Memoir Of Searching For Truth And Finding Trouble, no one from the world of alternative medicine is safe from Professor Edzard Ernst’s firing line.”
James Randi Educational Foundation review
James Randi Educational Foundation – William M. London, Mar 9, 2015
“The writing in A Scientist in Wonderland is clear and engaging. It combines good storytelling with important insights about medicine, science, and analytic thinking. Despite all the troubles Ernst encountered, I found his story to be inspirational. I enthusiastically recommend the book to scientists, health professionals, and laypersons who like to see nonsense and mendacity exposed to the light of reason.”
The Pharmaceutical Journal review
The Pharmaceutical Journal – Andrew Haynes, Mar 26, 2015
“This engaging book is a memoir by a medical researcher whose passion for discovering the truth about untested therapies eventually forced him out of his job… [This] highly readable book concentrates on fact rather than emotion. It should be required reading for anyone interested in medical research.”
Skepticat – Maria MacLachlan, Apr 18, 2015
“A Scientist in Wonderland is more than an autobiography and I’m not sure I can do justice to the riches to be found in its pages. Sometimes it’s reminiscent of a black comedy, other times it’s almost too painful to read.”
Spiked! – Robin Walsh, May 15, 2015
“Ernst’s book is a reminder of the need to have the courage to tell the truth as you understand it, and fight your corner against those in authority, while never losing a compassion for patients and a commitment to winning the debate.”
Australasian Science review
Australasian Science – Loretta Marron, Jun 10, 2015
“Edzard Ernst is a living legend… The book is easy to read and hard to put down. I would particularly recommend it to anyone, with an open mind, who is interested in the truth or otherwise of CAM.”
Journal of the Royal Society of Medicine Review
JRSM – Michael Baum, June 2015
“This is a deeply moving and deeply disturbing book yet written with a light touch, humour and self-deprecation.”
THE BUFFALO NEWS
These enlightening books await summer readers. 21 June 2015
“Medical researcher Edzard Ernst spent most of his career stepping on toes. He first exposed the complicity of the German medical profession in the Nazi genocide. Then he accepted appointment as the world’s first chairman of alternative medicine at England’s University of Exeter. There he studied systematically the claims of the proponents of complementary medicine, a field dominated by evangelic and enthusiastic promoters, including Prince Charles. Needless to say, they did not take kindly to his exposures of many of their widely accepted therapies. His book, “A Scientist in Wonderland: A Memoir of Searching for Truth and Finding Trouble,” is a charming account of a committed life.”
You may feel that homeopaths are bizarre, irrational, perhaps even stupid – but you cannot deny their tenacity. For 200 years they have been trying to convince us that their treatments are effective beyond placebo. And they seem to be getting bolder and bolder with their claims: while they used to suggest that homeopathy was effective for trivial conditions like the common cold, they now have their eyes on much more ambitious things. Two recent studies, for instance, claim that homeopathic remedies can help cancer patients.
The aim of the first study was to evaluate whether homeopathy influenced global health status and subjective wellbeing when used as an adjunct to conventional cancer therapy.
In this pragmatic randomized controlled trial, 410 patients, who were treated by standard anti-neoplastic therapy, were randomized to receive or not receive classical homeopathic adjunctive therapy in addition to standard therapy. The main outcome measures were global health status and subjective wellbeing as assessed by the patients. At each of three visits (one baseline, two follow-up visits), patients filled in two questionnaires for quantification of these endpoints.
The results show that 373 patients yielded at least one of three measurements. The improvement of global health status between visits 1 and 3 was significantly stronger in the homeopathy group by 7.7 (95% CI 2.3-13.0, p=0.005) when compared with the control group. A significant group difference was also observed with respect to subjective wellbeing by 14.7 (95% CI 8.5-21.0, p<0.001) in favor of the homeopathic as compared with the control group. Control patients showed a significant improvement only in subjective wellbeing between their first and third visits.
Our homeopaths concluded that the results suggest that the global health status and subjective wellbeing of cancer patients improve significantly when adjunct classical homeopathic treatment is administered in addition to conventional therapy.
The second study is a little more modest; it had the aim to explore the benefits of a three-month course of individualised homeopathy (IH) for survivors of cancer.
Fifteen survivors of any type of cancer were recruited by a walk-in cancer support centre. Conventional treatment had to have taken place within the last three years. Patients scored their total, physical and emotional wellbeing using the Functional Assessment of Chronic Illness Therapy for Cancer (FACIT-G) before and after receiving four IH sessions.
The results showed that 11 women had statistically positive results for emotional, physical and total wellbeing based on FACIT-G scores.
And the conclusion: Findings support previous research, suggesting CAM or individualised homeopathy could be beneficial for survivors of cancer.
As I said: one has to admire their tenacity, perhaps also their chutzpah – but not their understanding of science or their intelligence. If they were able to think critically, they could only arrive at one conclusion: STUDY DESIGNS THAT ARE WIDE OPEN TO BIAS ARE LIKELY TO DELIVER BIASED RESULTS.
The second study is a mere observation without a control group. The reported outcomes could be due to placebo, expectation, extra attention or social desirability. We obviously need an RCT! But the first study was an RCT!!! Its results are therefore more convincing, aren’t they?
No, not at all. I can repeat my sentence from above: The reported outcomes could be due to placebo, expectation, extra attention or social desirability. And if you don’t believe it, please read what I have posted about the infamous ‘A+B versus B’ trial design (here and here and here and here and here for instance).
My point is that such a study, while looking rigorous to the naïve reader (after all, it’s an RCT!!!), is just as inconclusive when it comes to establishing cause and effect as a simple case series which (almost) everyone knows by now to be utterly useless for that purpose. The fact that the A+B versus B design is nevertheless being used over and over again in alternative medicine for drawing causal conclusions amounts to deceit – and deceit is unethical, as we all know.
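The problem with the A+B versus B design can be made concrete with a toy simulation (all numbers invented): give every virtual patient the same improvement from B plus noise, add a purely non-specific bonus (expectation, extra attention) to the A+B arm, and the ‘RCT’ comes out positive although A has zero specific effect:

```python
# Simulate an 'A+B versus B' trial in which treatment A has NO specific
# effect, only a non-specific bonus (expectation, extra attention).
import random
import statistics

random.seed(1)

def improvement(nonspecific_bonus):
    # improvement = natural history + random noise + non-specific effects
    return 10 + random.gauss(0, 5) + nonspecific_bonus

b_only   = [improvement(0) for _ in range(1000)]   # control arm: B alone
a_plus_b = [improvement(3) for _ in range(1000)]   # A+B arm: 3-point bonus

diff = statistics.mean(a_plus_b) - statistics.mean(b_only)
print(round(diff, 1))   # close to 3: a 'positive trial', although A is inert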
My overall conclusion about all this:
QUACKS LOVE THIS STUDY DESIGN BECAUSE IT NEVER FAILS TO PRODUCE FALSE POSITIVE RESULTS.
The purpose of this study was to evaluate the impact of early and guideline adherent physical therapy for low back pain on utilization and costs within the Military Health System (MHS).
Patients presenting to a primary care setting with a new complaint of LBP from January 1, 2007 to December 31, 2009 were identified from the MHS Management Analysis and Reporting Tool. Descriptive statistics, utilization, and costs were examined on the basis of timing of referral to physical therapy and adherence to practice guidelines over a 2-year period. Utilization outcomes (advanced imaging, lumbar injections or surgery, and opioid use) were compared using adjusted odds ratios with 99% confidence intervals. Total LBP-related health care costs over the 2-year follow-up were compared using linear regression models.
A total of 753,450 eligible patients aged 18 to 60 years with a primary care visit for LBP were considered. Physical therapy was utilized by 16.3% (n = 122,723) of patients, with 24.0% (n = 17,175) of those receiving early physical therapy that was adherent to recommendations for active treatment. Early referral to guideline-adherent physical therapy was associated with significantly lower utilization for all outcomes and 60% lower total LBP-related costs.
The authors concluded that the potential for cost savings in the MHS from early guideline adherent physical therapy may be substantial. These results also extend the findings from similar studies in civilian settings by demonstrating an association between early guideline adherent care and utilization and costs in a single payer health system. Future research is necessary to examine which patients with LBP benefit early physical therapy and determine strategies for providing early guideline adherent care.
These are certainly interesting data. Because LBP is such a common condition, it costs us all dearly. Measures to reduce this burden in suffering and expense are urgently needed. The question is whether early referral to a physiotherapist is such a measure. The present data show that this is possible but they do not prove it.
I applaud the authors for realising this point and discussing it at length: The results of this study should be examined in light of the following limitations. Given the favorable natural history of LBP, many patients improve regardless of treatment. Those referred to physical therapy early are also more likely to have a shorter duration of pain, thus the potential for selection bias to have influenced these results. We accounted for a number of co-morbidities available in the data set and excluded patients with prior visits for LBP to mitigate against this possibility. However, the retrospective observational design of this study imposes limitations on extending the associations we observed to causation. Although we attempted to exclude patients with a specific spinal pathology, it is possible that a few patients may have been inadvertently included in the data set, in which case advanced imaging may be indicated. Additionally, although our results support that early physical therapy which adheres to practice guidelines may be less resource intense, we cannot conclude without patient-centered clinical outcomes (i.e., pain, function, disability, satisfaction, etc.) that the care was more cost effective. Further, it may be that the standard we used to judge adherence to practice guidelines (CPT codes) was not sufficiently sensitive to determine whether care is consistent with clinical practice guidelines. We also did not account for indirect or out-of-pocket costs for treatments such as complementary care, which is common for LBP. However, it is likely that the observed effects on total costs would have been even larger had these costs been considered.
I was originally alerted to this paper through a tweet claiming that these results demonstrate that chiropractic has an important role in LBP. However, the study does not even imply such a conclusion. It is, of course, true that many chiropractors use physical therapies. But they do not have the same training as physiotherapists and they tend to use spinal manipulations far more frequently. Virtually every LBP-patient consulting a chiropractor would be treated with spinal manipulations. As this approach is neither based on sound evidence nor free of risks, the conclusion, in my view, cannot be to see chiropractors for LBP; it must be to consult a physiotherapist.
Time for some fun!
In alternative medicine, there often seems to be an uneasy uncertainty about research methodology. This is, of course, regrettable, as it can (and often does) lead to misunderstandings. I feel that I have some responsibility to educate research-naïve practitioners. I hope this little dictionary of research terminology turns out to be a valuable contribution in this respect.
Abstract: a concise summary of what you wanted to do skilfully hiding what you managed to do.
Acute: an exceptionally good-looking nurse.
Adverse reaction: a side effect of a therapy that I do not practise.
Anecdotal evidence: the type of evidence that charlatans prefer.
Audit: misspelled name of German car manufacturer.
Avogadro’s number: telephone number of an Italian friend.
Basic research: investigations which are too simplistic to bother with.
Best evidence synthesis: a review of those cases where my therapy worked extraordinarily well.
Bias: prejudice against my therapy held by opponents.
Bioavailability: number of health food shops in the region.
Bogus: a term Simon Singh tried to hijack, but chiropractors sued and thus got the right to use it for characterising their trade.
Chiropractic manipulation: a method of discreetly adjusting data so that they yield positive results.
Confidence interval: the time between reading a paper and realising that it is rubbish.
Confounder: founder of a firm selling bogus treatments.
Conflict of interest: bribery by ‘Big Pharma’.
Data manipulation: main aim of chiropractic.
Declaration of Helsinki: a statement by the Finnish Society for Homeopathy in favour of treating Ebola with homeopathy.
Dose response: weird concept of pharmacologists which has been disproven by homeopathy.
Controlled clinical trial: a study where I am in control of the data and can prettify them, if necessary.
Critical appraisal: an assessment of my work by fellow charlatans.
Doctor: title mostly used by chiropractors and naturopaths.
EBM: eminence-based medicine.
Error: a thing done by my opponents.
Ethics: misspelled name of an English county north of London.
Evidence: the stuff one can select from Medline when one needs a positive result in a hurry.
Evidence-based medicine: the health care based on the above.
Exclusion criteria: term used to characterise material that is not to my liking and must therefore be omitted.
Exploratory analysis: valuable approach of re-analysing negative results until a positive finding pops up.
Focus group: useful method for obtaining any desired outcome.
Forest plot: a piece of land with lots of trees.
Funnel plot: an intrigue initiated by Prof Funnel to discredit homeopathy.
Good clinical practice: the stuff I do in my clinical routine.
Grey literature: print-outs of articles from a faulty printer.
Hawthorne effect: the effects of Crataegus on cardiovascular function.
Hierarchy of evidence: a pyramid with my opinion on top.
Homeopathic delusion: method of manufacturing a homeopathic remedy.
Informed consent: agreement of patients to pay my fee.
Intention to treat analysis: a method of calculating data in such a way that they demonstrate what I intended to show.
Logic: my way of thinking.
Mean: attitude of chiropractors to anyone suggesting their manipulations are not a panacea.
Metastasis: lack of progress with a meta-analysis.
Numbers needed to treat: the number of patients I require to make a good living.
Odds ratio: number of lunatics in my professional organisation divided by the number of people who seem normal.
Observational study: results from a few patients who did exceptionally well on my therapy.
Pathogenesis: a rock group who have fallen ill.
Peer review: assessment of my work by several very close friends of mine.
Pharmacodynamics: the way ‘Big Pharma’ is trying to suppress my findings.
Pilot study: a trial that went so terribly wrong that it became unpublishable – but, in the end, we still got it in an alt med journal.
Placebo-effect: a most useful phenomenon that makes patients who receive my therapy feel better.
Pragmatic trial: a study that is designed to generate the result I want.
Silicon Valley: region in US where most stupid fraudsters are said to come from.
Standard deviation: a term describing the fact that deviation from the study protocol is normal.
Statistics: a range of methods which are applied to the data until they eventually yield a significant finding.
Survey: popular method of interviewing a few happy customers in order to promote my practice.
Systematic review: a review of all the positive results I could find.
Like it? If so, why don’t you suggest a few more entries into my dictionary via the comment section below?
This is a question which I have asked myself more often than I care to remember. The reason is probably that, in alternative medicine, I feel surrounded by so much dodgy research that I simply cannot avoid asking it.
In particular, the so-called ‘pragmatic’ trials which are so much ‘en vogue’ at present are, in my view, a reason for concern. Take a study of cancer patients, for instance, where one group is randomized to get the usual treatments and care, while the experimental group receives the same plus several alternative treatments in addition. These treatments are carefully selected to be agreeable and pleasant; each patient can choose the ones he/she likes best, always had wanted to try, or has heard many good things about. The outcome measure of our fictitious study would, of course, be some subjective parameter such as quality of life.
In this set-up, the patients in our experimental group thus have high expectations, are delighted to get something extra, even more happy to get it for free, receive plenty of attention and lots of empathy, care, time, attention etc. By contrast, our poor patients in the control group would be a bit miffed to have drawn the ‘short straw’ and receive none of this.
What result do we expect?
Will the quality of life after all this be equal in both groups?
Will it be better in the miffed controls?
Or will it be higher in those lucky ones who got all this extra pampering?
I don’t think I need to answer these questions; the answers are too obvious and too trivial.
But the real and relevant question is the following, I think: IS SUCH A TRIAL JUST SILLY AND MEANINGLESS OR IS IT UNETHICAL?
I would argue the latter!
Because the results of the study are clearly known before the first patient has even been recruited. This means that the trial was not necessary; the money, time and effort have been wasted. Crucially, patients have been misled into thinking that they are giving their time, co-operation, patience etc. because there is a question of sufficient importance to be answered.
But, in truth, there is no question at all!
Perhaps you believe that nobody in their right mind would design, fund and conduct such a daft trial. If so, you assumed wrongly. Such studies are currently being published by the dozen. Here is the abstract of the most recent one I could find:
The aim of this study was to evaluate the effectiveness of an additional, individualized, multi-component complementary medicine treatment offered to breast cancer patients at the Merano Hospital (South Tyrol) on health-related quality of life compared to patients receiving usual care only. A randomized pragmatic trial with two parallel arms was performed. Women with confirmed diagnoses of breast cancer were randomized (stratified by usual care treatment) to receive individualized complementary medicine (CM group) or usual care alone (usual care group). Both groups were allowed to use conventional treatment for breast cancer. Primary endpoint was the breast cancer-related quality of life FACT-B score at 6 months. For statistical analysis, we used analysis of covariance (with factors treatment, stratum, and baseline FACT-B score) and imputed missing FACT-B scores at 6 months with regression-based multiple imputation. A total of 275 patients were randomized between April 2011 and March 2012 to the CM group (n = 136, 56.3 ± 10.9 years of age) or the usual care group (n = 139, 56.0 ± 11.0). After 6 months from randomization, adjusted means for health-related quality of life were higher in the CM group (FACT-B score 107.9; 95 % CI 104.1-111.7) compared to the usual care group (102.2; 98.5-105.9) with an adjusted FACT-B score difference between groups of 5.7 (2.6-8.7, p < 0.001). Thus, an additional individualized and complex complementary medicine intervention improved quality of life of breast cancer patients compared to usual care alone. Further studies evaluating specific effects of treatment components should follow to optimize the treatment of breast cancer patients.
The key sentence in this abstract is, of course: complementary medicine intervention improved quality of life of breast cancer patients… It provides the explanation as to why these trials are so popular with alternative medicine researchers: they are not real research but they are quite simply promotion! The next step would be to put a few of those pseudo-scientific trials together and claim that there is solid proof that integrating alternative treatments into conventional health care produces better results. At that stage, few people will bother asking whether this is really due to the treatments in question or to the additional attention, pampering etc.
My question is ARE SUCH TRIALS ETHICAL?
I would very much appreciate your opinion.
A new study of homeopathic arnica suggests efficacy. How come?
Subjects scheduled for rhinoplasty surgery with nasal bone osteotomies by a single surgeon were prospectively randomized to receive either oral perioperative arnica or placebo in a double-blinded fashion. A commercially available preparation was used which contained 12 capsules: one 500 mg capsule of arnica 1M given preoperatively on the morning of surgery and two more later that day after surgery. Thereafter, arnica was administered in the 12C potency three times daily for the next 3 days (“C” indicates a 100-fold serial dilution; and M, a 1000-fold dilution).
Ecchymosis was measured in digital “three-quarter”-view photographs at three postoperative time points. Each bruise was outlined with Adobe Photoshop and the extent was scaled to a standardized reference card. Cyan, magenta, yellow, black, and luminosity were analyzed in the bruised and control areas to calculate change in intensity.
Compared with 13 subjects receiving placebo, 9 taking arnica had 16.2%, 32.9%, and 20.4% less extent of ecchymosis on postoperative days 2/3, 7, and 9/10, respectively, a statistically significant difference on day 7. Color change initially showed a 13.1% increase in intensity with arnica, but 10.9% and 36.3% decreases on days 7 and 9/10, a statistically significant difference on day 9/10. One subject experienced mild itching and rash with the study drug that resolved during the study period.
The authors concluded that Arnica montana seems to accelerate postoperative healing, with quicker resolution of the extent and the intensity of ecchymosis after osteotomies in rhinoplasty surgery, which may dramatically affect patient satisfaction.
Why are the results positive? Previous systematic reviews confirm that homeopathic arnica is a pure placebo. At first, I thought the answer lay in the 1M potency: it could well still contain active molecules. But then I realised that the answer is much simpler: if we apply the conventional level of statistical significance, there are no statistically significant differences from placebo at all! I had not noticed the little sentence by the authors: a P value of 0.1 was set as a meaningful difference with statistical significance. In fact, none of the effects called significant by the authors pass the conventionally used probability level of 5%.
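How much does relaxing the threshold from 0.05 to 0.1 matter? A quick simulation (hypothetical trials, not the arnica data) makes the point: under the null hypothesis, i.e. when the remedy is a pure placebo, a p < 0.1 cut-off declares a ‘significant’ effect roughly twice as often as the conventional p < 0.05:

```python
# Under the null hypothesis (the remedy is a pure placebo), how often
# does each significance threshold declare a trial 'positive'?
import math
import random

random.seed(0)

def null_trial_p(n=30):
    # two groups drawn from the SAME distribution: any difference is noise
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    z = diff / math.sqrt(2 / n)          # z-test with known unit variance
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

pvals = [null_trial_p() for _ in range(10000)]
rates = {alpha: sum(p < alpha for p in pvals) / len(pvals)
         for alpha in (0.05, 0.10)}
print(rates)   # roughly 5% and 10% false-positive rates respectively
```

In other words, setting alpha at 0.1 simply doubles the rate at which pure noise gets labelled ‘statistically significant’.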
So, what do the results of this new study truly mean? In my view, they show what was known all along: HOMEOPATHIC REMEDIES ARE PLACEBOS.
A recent comment to a post of mine (by a well-known and experienced German alt med researcher) made the following bold statement aimed directly at me and at my apparent failure to understand research methodology:
C´mon , as researcher you should know the difference between efficacy and effectiveness. This is pharmacological basic knowledge. Specific (efficacy) + nonspecific effects = effectiveness. And, in fact, everything can be effective – because of non-specific or placebo-like effects. That does not mean that efficacy is existent.
The point he wanted to make is that outcome studies – studies without a control group in which the researchers simply observe the outcome of a particular treatment in a ‘real life’ situation – suffice to demonstrate the effectiveness of therapeutic interventions. This belief is very widespread in alternative medicine and tends to mislead all concerned. It is therefore worth re-visiting this issue here in an attempt to create some clarity.
When a patient’s condition improves after receiving a therapy, it is very tempting to feel that this improvement reflects the effectiveness of the intervention (as the researcher mentioned above obviously does). Tempting but wrong: there are many other factors involved as well, for instance:
- the placebo effect (mainly based on conditioning and expectation),
- the therapeutic relationship with the clinician (empathy, compassion etc.),
- the regression towards the mean (outliers tend to return to the mean value),
- the natural history of the patient’s condition (most conditions get better even without treatment),
- social desirability (patients tend to say they are better to please their friendly clinician),
- concomitant treatments (patients often use treatments other than the prescribed one without telling their clinician).
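One of these factors, regression towards the mean, is easy to demonstrate with a small simulation (all numbers here are invented for illustration): if patients are enrolled when a noisy symptom score happens to be unusually bad, a second measurement will, on average, drift back towards the true mean – with no treatment whatsoever.

```python
import random

random.seed(0)

TRUE_MEAN = 50.0  # the patients' true, stable severity score

def measure():
    # true severity plus day-to-day and measurement noise
    return TRUE_MEAN + random.gauss(0, 10)

# "enrol" only patients whose first measurement exceeds 60 (a bad day)
first = [measure() for _ in range(10_000)]
enrolled = [x for x in first if x > 60]

# re-measure the enrolled group later, entirely untreated
second = [measure() for _ in enrolled]

print(sum(enrolled) / len(enrolled))  # well above 60 by construction
print(sum(second) / len(second))      # drifts back towards 50
```

The untreated group appears to “improve” substantially; any therapy given in the interval would have been credited with this entirely statistical effect.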
So, how does this fit into the statement above ‘Specific (efficacy) + nonspecific effects = effectiveness’? Even if this formula were correct, it would not mean that outcome studies of the nature described demonstrate the effectiveness of a therapy. It all depends, of course, on what we call ‘non-specific’ effects. We all agree that placebo-effects belong to this category. Probably, most experts also would include the therapeutic relationship and the regression towards the mean under this umbrella. But the last three points from my list are clearly not non-specific effects of the therapy; they are therapy-independent determinants of the clinical outcome.
The most important factor here is usually the natural history of the disease. Some people find it hard to imagine what this term actually means. Here is a little joke which, I hope, will make its meaning clear and memorable.
CONVERSATION BETWEEN TWO HOSPITAL DOCTORS:
Doc A: The patient from room 12 is much better today.
Doc B: Yes, we started his treatment just in time; a day later and he would have been cured without it!
I am sure that most of my readers now understand (and never forget) that clinical improvement cannot be equated with the effectiveness of the treatment administered (they might thus be immune to the misleading messages they are constantly exposed to). Yet, I am not at all sure that all ‘alternativists’ have got it.
In my last post, I claimed that researchers of alternative medicine tend to be less than rigorous. I did not link this statement to any evidence at all. Perhaps I should have at least provided an example!? As it happens, I just came across a brand new paper which nicely demonstrates what I meant.
According to its authors, this non-interventional study was performed to generate data on safety and treatment effects of a complex homeopathic drug. They treated 1050 outpatients suffering from common cold with a commercially available homeopathic remedy for 8 days. The study was conducted in 64 German outpatient practices of medical doctors trained in CAM. Tolerability, compliance and the treatment effects were assessed by the physicians and by patient diaries. Adverse events were collected and assessed with specific attention to homeopathic aggravation and proving symptoms. Each adverse effect was additionally evaluated by an advisory board of experts.
The physicians detected 60 adverse events from 46 patients (4.4%). Adverse drug reactions occurred in 14 patients (1.3%). Six patients showed proving symptoms (0.57%) and only one homeopathic aggravation (0.1%) appeared. The rate of compliance was 84% for all groups. The global assessment of the treatment effects resulted in the verdict “good” and “very good” in 84.9% of all patients.
The authors concluded that the homeopathic complex drug was shown to be safe and effective for children and adults likewise. Adverse reactions specifically related to homeopathic principles are very rare. All observed events recovered quickly and were of mild to moderate intensity.
So why do I think this is ‘positively barmy’?
The study had no control group. This means that there is no way anyone can attribute the observed ‘treatment effects’ to the homeopathic remedy. There are many other phenomena that may have caused or contributed to it, e. g.:
- a placebo effect
- the natural history of the condition
- regression to the mean
- other treatments which the patients took but did not declare
- the empathic encounter with the physician
- social desirability
To plan a study with the aim as stated above and to draw the conclusion as cited above is naïve and unprofessional (to say the least) on the part of the researchers (I often wonder where, in such cases, the boundary between incompetence and research misconduct might lie). To pass such a paper through the peer review process is negligent on the part of the reviewers. To publish the article is irresponsible on the part of the editor.
In a nut-shell: COLLECTIVELY, THIS IS ‘POSITIVELY BARMY’!!!
Distant healing is one of the most bizarre yet popular forms of alternative medicine. Healers claim they can transmit ‘healing energy’ towards patients to enable them to heal themselves. There have been many trials testing the effectiveness of the method, and the general consensus amongst critical thinkers is that all variations of ‘energy healing’ rely entirely on a placebo response. A recent and widely publicised paper seems to challenge this view.
This article has, according to its authors, two aims. Firstly, it reviews healing studies that involved biological systems other than ‘whole’ humans (e.g., studies of plants or cell cultures), which are less susceptible to placebo-like effects. Secondly, it presents a systematic review of clinical trials on human patients receiving distant healing.
All the included studies examined the effects upon a biological system of the explicit intention to improve the wellbeing of that target; 49 non-whole human studies and 57 whole human studies were included.
The combined weighted effect size for non-whole human studies yielded a highly significant result in favour of distant healing (r = 0.258). However, outcomes were heterogeneous and correlated with blind ratings of study quality; 22 studies that met minimum quality thresholds gave a reduced but still significant weighted r of 0.115.
Whole human studies yielded a small but significant effect size of r = 0.203. Outcomes were again heterogeneous, and correlated with methodological quality ratings; 27 studies that met threshold quality levels gave an r = 0.224.
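For readers unfamiliar with how such pooled figures arise, here is a hedged sketch (with invented study data, not the review’s) of one standard way to combine per-study correlation effect sizes: each r is Fisher z-transformed, weighted by n − 3, averaged, and back-transformed.

```python
import math

def combined_effect(studies):
    """Pool (r, n) pairs via the Fisher z-transform, weighting by n - 3.

    This is the textbook fixed-effect approach for correlations; the
    numbers passed in below are purely hypothetical.
    """
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)

# hypothetical studies: (effect size r, sample size n)
print(combined_effect([(0.30, 40), (0.10, 120), (0.25, 60)]))
```

The point to note is that a pooled r says nothing about the quality or heterogeneity of the inputs, which is exactly where this review runs into trouble.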
From these findings, the authors drew the following conclusions: Results suggest that subjects in the active condition exhibit a significant improvement in wellbeing relative to control subjects under circumstances that do not seem to be susceptible to placebo and expectancy effects. Findings with the whole human database suggests that the effect is not dependent upon the previous inclusion of suspect studies and is robust enough to accommodate some high profile failures to replicate. Both databases show problems with heterogeneity and with study quality and recommendations are made for necessary standards for future replication attempts.
In a press release, the authors warned: the data need to be treated with some caution in view of the poor quality of many studies and the negative publishing bias; however, our results do show a significant effect of healing intention on both human and non-human living systems (where expectation and placebo effects cannot be the cause), indicating that healing intention can be of value.
My thoughts on this article are not very complimentary, I am afraid. The problems are, it seems to me, too numerous to discuss in detail:
- The article is written such that it is exceedingly difficult to make sense of it.
- It was published in a journal which is not exactly known for its cutting edge science; this may seem a petty point but I think it is nevertheless important: if distant healing works, we are confronted with a revolution in the understanding of nature – and surely such a finding should not be buried in a journal that hardly anyone reads.
- The authors seem embarrassingly inexperienced in conducting and publishing systematic reviews.
- There is very little (self-) critical input in the write-up.
- A critical attitude is necessary, as the primary studies tend to be conducted by evangelical believers in, and amateur enthusiasts of, healing.
- The article has no data table where the reader might learn the details about the primary studies included in the review.
- It also has no table to inform us in sufficient detail about the quality assessment of the included trials.
- It seems to me that some published studies of distant healing are missing.
- The authors ignored all studies that were not published in English.
- The method section lacks detail, and it would therefore be impossible to conduct an independent replication.
- Even if one ignored all the above problems, the effect sizes are small and would not be clinically important.
- The research was sponsored by the ‘Confederation of Healing Organisations’ and some of the comments look as though the sponsor had a strong influence on the phraseology of the article.
Given these reservations, my conclusion from an analysis of the primary studies of distant healing would be dramatically different from the one published by the authors: DESPITE A SIZABLE AMOUNT OF PRIMARY STUDIES ON THE SUBJECT, THE EFFECTIVENESS OF DISTANT HEALING REMAINS UNPROVEN. AS THIS THERAPY IS DEVOID OF ANY BIOLOGICAL PLAUSIBILITY, FURTHER RESEARCH IN THIS AREA SEEMS NOT WARRANTED.