
Chiropractors are back pain specialists, they say. They do not pretend to treat non-spinal conditions, they claim.

If such notions were true, why are so many of them still misleading the public? Why do many chiropractors pretend to be primary care physicians who can take care of most illnesses regardless of any connection with the spine? Why do they continue to happily promote bogus treatments? Why do chiropractors, for instance, claim they can treat gastrointestinal diseases?

This recent narrative review of the literature, for example, was aimed at summarising studies describing the management of disorders of the gastrointestinal (GI) tract using ‘chiropractic therapy’, broadly defined here as spinal manipulation therapy, mobilizations, soft tissue therapy, modalities and stretches.

Twenty-one articles identified through a search of the published literature met the authors’ inclusion criteria. The retrieved articles ranged from case reports to clinical trials to review articles. The majority of articles chronicling patient experiences under chiropractic care reported mild to moderate improvements in GI symptoms. No adverse effects were reported.

From this, the authors concluded that chiropractic care can be considered as an adjunctive therapy for patients with various GI conditions, provided there are no co-morbidities.

I think we would need to look for a long time to find an article with conclusions more ridiculous, false and unethical than these.

The old adage applies: rubbish in, rubbish out. If we include unreliable reports such as anecdotes, our findings will be unreliable as well. If we do not make this mistake and conduct a proper systematic review, we arrive at very different conclusions. My own systematic review of controlled clinical trials, for instance, drew the following conclusion: There is no supportive evidence that chiropractic is an effective treatment for gastrointestinal disorders.

That probably says it all. I only want to add a short question: SHOULD THIS LATEST CHIROPRACTIC ATTEMPT TO MISLEAD THE PUBLIC BE CONSIDERED ‘SCIENTIFIC MISCONDUCT’ OR ‘FRAUD’?

The last time I had contact with Dr Fisher was when he fired me from the editorial board of his journal ‘Homeopathy’. He did that by sending me the following letter:

Dear Professor Ernst,

This is to inform you that you have been removed from the Editorial Board of Homeopathy.  The reason for this is the statement you published on your blog on Holocaust Memorial Day 2013 in which you smeared homeopathy and other forms of complementary medicine with a ‘guilt by association’ argument, associating them with the Nazis.

I should declare a personal interest….[Fisher goes on to tell a story which is personal and which I therefore omit]…  I mention this only because it highlights the absurdity of guilt by association arguments.

Sincerely

Peter Fisher, Editor-in-Chief, Homeopathy

I did not expect to have any more dealings with him after this rather unpleasant encounter. But, as it turns out, I recently did have a further encounter.

When the BMJ invited me to write a debate article about the question whether homeopathy should continue to be available on the NHS, I accepted (with some reservations, I hasten to add). At the time, I did not know who would do the ‘other side’ of this debate. It turned out to be Peter Fisher, and our two articles have just been published.

As one would expect from a good journal, both articles were peer reviewed. One of the peer-reviewers of my piece was most scathing of it, essentially claiming that it was entirely worthless. Feeling that this was a bit harsh and very impolite, I was keen to see who this reviewer had been; it was none other than Andrew Vickers. This is remarkable because Vickers had not only published several homeopathic papers with Fisher but had also been employed at the ‘Royal London Homeopathic Hospital’ under Fisher. To the best of my knowledge, his conflicts of interest had not been disclosed. I did point that out to the BMJ, but they seemed to think nothing of it.

Anyway, I was pleased to eventually (the whole procedure took many months) see the articles published, but at the same time somewhat irritated by Fisher’s piece. It contained plenty of misleading information that the peer-reviewers obviously had failed to correct. Here is a small sample from Fisher’s piece:

… recent overviews have had more favourable conclusions, including a health technology assessment commissioned by the Swiss federal government that concluded that homeopathy is “probably” effective for upper respiratory tract infections and allergies.

Readers interested in the clinical evidence can access the CORE-HOM database of clinical research in homeopathy free of charge (www.carstens-stiftung.de/core-hom). It includes 1117 clinical trials of homeopathy, of which about 300 are randomised controlled trials.

In the podcast that accompanies the articles, Fisher insists that there are well over 300 RCTs on this database, and I had to admit that this was new to me. Keen to learn more, I registered with the database and had a look. What I found startled me. True, the database does claim that almost 500 RCTs are available, but even a very superficial scrutiny of these studies reveals that

  • some are not truly randomised,
  • some are not even clinical trials,
  • the list includes dual publications, re-analyses of already published studies as well as aborted trials,
  • many have never been peer-reviewed,
  • many are not double-blind,
  • many are not placebo controlled,
  • the majority are of poor methodological quality.

As to the other thing mentioned in the above excerpt from Fisher’s article, the famous ‘health technology assessment commissioned by the Swiss federal government’, I can refer my readers to a blog post by J W Nienhuys which probably says it all; if not, there is plenty more criticism of this report available on the Internet.

My conclusion from all this?

THE QUEEN’S HOMEOPATH USES ARGUMENTS THAT SEEM JUST AS BOGUS AS HOMEOPATHY ITSELF.

Osteopathy is a difficult subject. In the US, osteopaths are (almost) identical to doctors who have studied conventional medicine and hardly practice any manipulative techniques at all. Elsewhere, osteopaths are alternative healthcare providers specialising in what they like to call ‘osteopathic manipulative therapy’ (OMT). As though this were not confusing enough, osteopaths do much the same things as chiropractors but are adamant that they are a distinct profession. Despite these assertions, I have seen little to clearly differentiate the two – with one exception perhaps: osteopaths tend to use techniques that are less frequently associated with severe harm.

Despite this confusion, or maybe because of it, we need to ask: DOES OMT WORK?

A recent study aimed to assess the effectiveness of OMT in chronic migraineurs using the HIT-6 questionnaire, drug consumption, days of migraine, pain intensity and functional disability. All patients admitted to the Department of Neurology of Ancona’s United Hospitals, Italy, with a diagnosis of migraine and without chronic illness were considered eligible for this 3-armed RCT.

Patients were randomly allocated to three groups: (1) OMT plus medication, (2) sham plus medication, and (3) medication only; they received 8 treatments over 6 months. Changes in the HIT-6 score were the main outcome measure.

A total of 105 subjects were included. At the end of the study, OMT significantly reduced HIT-6 score, drug consumption, days of migraine, pain intensity and functional disability.

The investigators concluded that these findings suggest that OMT may be considered a valid procedure for the management of migraineurs.

Similar results have been reported elsewhere:

One trial, for instance, concluded: “This study affirms the effects of OMT on migraine headache in regard to decreased pain intensity and the reduction of number of days with migraine as well as working disability, and partly on improvement of HRQoL. Future studies with a larger sample size should reproduce the results with a control group receiving placebo treatment in a long-term follow-up.”

Convinced? No, I am not.

Why? Because the studies that do exist seem a little too good to be true; because they are few and far between; because the few studies tend to be flimsy and have been published in dodgy journals; because they lack independent replications; and because critical reviews seem to conclude that OMT is nowhere near as promising as some osteopaths would like us to believe: “Further studies of improved quality are necessary to more firmly establish the place of physical modalities in the treatment of primary headache disorders. With the exception of high velocity chiropractic manipulation of the neck, the treatments are unlikely to be physically dangerous, although the financial costs and lost treatment opportunity by prescribing potentially ineffective treatment may not be insignificant. In the absence of clear evidence regarding their role in treatment, physicians and patients are advised to make cautious and individualized judgments about the utility of physical treatments for headache management; in most cases, the use of these modalities should complement rather than supplant better-validated forms of therapy.”

A paper entitled ‘Real world research: a complementary method to establish the effectiveness of acupuncture’ caught my attention recently. I find it quite remarkable and think it might stimulate some discussion on this blog.  Here is its abstract:

Acupuncture has been widely used in the management of a variety of diseases for thousands of years, and many relevant randomized controlled trials have been published. In recent years, many randomized controlled trials have provided controversial or less-than-convincing evidence that supports the efficacy of acupuncture. The clinical effectiveness of acupuncture in Western countries remains controversial.

Acupuncture is a complex intervention involving needling components, specific non-needling components, and generic components. Common problems that have contributed to the equivocal findings in acupuncture randomized controlled trials were imperfections regarding acupuncture treatment and inappropriate placebo/sham controls. In addition, some inherent limitations were also present in the design and implementation of current acupuncture randomized controlled trials such as weak external validity. The current designs of randomized controlled trials of acupuncture need to be further developed. In contrast to examining efficacy and adverse reaction in a “sterilized” environment in a narrowly defined population, real world research assesses the effectiveness and safety of an intervention in a much wider population in real world practice. For this reason, real world research might be a feasible and meaningful method for acupuncture assessment. Randomized controlled trials are important in verifying the efficacy of acupuncture treatment, but the authors believe that real world research, if designed and conducted appropriately, can complement randomized controlled trials to establish the effectiveness of acupuncture. Furthermore, the integrative model that can incorporate randomized controlled trial and real world research which can complement each other and potentially provide more objective and persuasive evidence.

In the article itself, the authors list seven criteria for what they consider good research into acupuncture:

  1. Acupuncture should be regarded as complex and individualized treatment;
  2. The study aim (whether to assess the efficacy of acupuncture needling or the effectiveness of acupuncture treatment) should be clearly defined and differentiated;
  3. Pattern identification should be clearly specified, and non-needling components should also be considered;
  4. The treatment protocol should have some degree of flexibility to allow for individualization;
  5. The placebo or sham acupuncture should be appropriate: knowing “what to avoid” and “what to mimic” in placebos/shams;
  6. In addition to “hard evidence”, one should consider patient-reported outcomes, economic evaluations, patient preferences and the effect of expectancy;
  7. The use of qualitative research (e.g., interview) to explore some missing areas (e.g., experience of practitioners and patient-practitioner relationship) in acupuncture research.

Furthermore, the authors list the advantages of their RWR-concept:

  1. In RWR, interventions are tailored to the patients’ specific conditions, in contrast to standardized treatment. As a result, conclusions based on RWR consider all aspects of acupuncture that affect the effectiveness.
  2. At an operational level, patients’ choice of the treatment(s) decreases the difficulties in recruiting and retaining patients during the data collection period.
  3. The study sample in RWR is much more representative of the real world situation (similar to the section of the population that receives the treatment). The study, therefore, has higher external validity.
  4. RWR tends to have a larger sample size and longer follow-up period than RCT, and thus is more appropriate for assessing the safety of acupuncture.

The authors make much of their notion that acupuncture is a COMPLEX INTERVENTION; specifically, they claim the following: Acupuncture treatment includes three aspects: needling, specific non-needling components driven by acupuncture theory, and generic components not unique to acupuncture treatment. In addition, acupuncture treatment should be performed on the basis of the patient’s condition and traditional Chinese medicine (TCM) theory.

There is so much BS here that it is hard to decide where to begin refuting. As the assumption of acupuncture or other alternative therapies being COMPLEX INTERVENTIONS (and therefore exempt from rigorous tests) is highly prevalent in this field, let me try to just briefly tackle this one.

The last time I saw a patient and prescribed a drug treatment I did all of the following:

  • I greeted her, asked her to sit down and tried to make her feel relaxed.
  • I first had a quick chat about something trivial.
  • I then asked why she had come to see me.
  • I started to take notes.
  • I inquired about the exact nature and the history of her problem.
  • I then asked her about her general medical history, family history and her life-style.
  • I also asked about any psychological problems that might relate to her symptoms.
  • I then conducted a physical examination.
  • Subsequently we discussed what her diagnosis might be.
  • I told her what my working diagnosis was.
  • I ordered a few tests to either confirm or refute it and explained them to her.
  • We decided that she should come back and see me in a few days when her tests had come back.
  • In order to ease her symptoms in the meanwhile, I gave her a prescription for a drug.
  • We discussed this treatment, how and when she should take it, adverse effects etc.
  • We also discussed other therapeutic options, in case the prescribed treatment was in any way unsatisfactory.
  • I reassured her by telling her that her condition did not seem to be serious and stressed that I was confident to be able to help her.
  • She left my office.

The point I am trying to make is this: prescribing an entirely straightforward drug treatment is also a COMPLEX INTERVENTION. In fact, I know of no treatment that is NOT complex.

Does that mean that drugs and all other interventions are exempt from being tested in rigorous RCTs? Should we allow drug companies to adopt the RWR too? Any old placebo would pass that test and could be made to look effective using RWR. In the example above, my compassion, care and reassurance would alleviate my patient’s symptoms, even if the prescription I gave her was complete rubbish.
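
To illustrate the point, here is a little simulation sketch (my own toy example; every number in it is invented): an uncontrolled ‘real world’ evaluation of a completely inert pill still shows an apparently impressive improvement, simply through natural history, regression to the mean and non-specific effects.

    import random
    random.seed(1)

    # Hypothetical illustration: 200 patients rate their symptoms (0-10) before
    # and after a course of a completely inert pill in an uncontrolled
    # 'real world' evaluation. All effect sizes below are assumptions.
    n = 200
    before = [random.gauss(6.0, 1.5) for _ in range(n)]

    natural_history = -1.0   # assumed average spontaneous improvement
    non_specific = -0.8      # assumed effect of attention, expectation, care
    after = [b + natural_history + non_specific + random.gauss(0, 1.5) for b in before]

    mean_change = sum(a - b for a, b in zip(after, before)) / n
    print(f"Mean symptom change on the inert pill: {mean_change:.2f}")
    # The output shows a sizable 'improvement' although the pill's specific
    # effect is exactly zero - which is why uncontrolled 'real world' data
    # cannot establish that a treatment works.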

So why should acupuncture (or any other alternative therapy) not be tested in proper RCTs? I fear the reason is that RCTs might show that it is not as effective as its proponents had hoped. The conclusion about the RWR is thus embarrassingly simple: proponents of alternative medicine want double standards because single standards would risk disclosing the truth.

One could define alternative medicine by the fact that it is used almost exclusively for conditions for which conventional medicine does not have an effective and reasonably safe cure. Once such a treatment has been found, few patients would look for an alternative.

Alzheimer’s disease (AD) is certainly one such condition. Despite intensive research, we are still far from being able to cure it. It is thus not really surprising that AD patients and their carers are bombarded with the promotion of all sorts of alternative treatments. They must feel bewildered by the choice and all too often they fall victim to irresponsible quacks.

Acupuncture is certainly an alternative therapy that is frequently claimed to help AD patients. One of the first websites that I came across, for instance, stated boldly: acupuncture improves memory and prevents degradation of brain tissue.

But is there good evidence to support such claims? To answer this question, we need a systematic review of the trial data. Fortunately, such a paper has just been published.

The objective of this review was to assess the effectiveness and safety of acupuncture for treating AD. Eight electronic databases were searched from their inception to June 2014. Randomized clinical trials (RCTs) with AD treated by acupuncture or by acupuncture combined with drugs were included. Two authors extracted data independently.

Ten RCTs with a total of 585 participants were included in a meta-analysis. The combined results of 6 trials showed that acupuncture was better than drugs at improving scores on the Mini Mental State Examination (MMSE) scale. Evidence from the pooled results of 3 trials showed that acupuncture plus donepezil was more effective than donepezil alone at improving the MMSE scale score. Only 2 trials reported the incidence of adverse reactions related to acupuncture. Seven patients had adverse reactions related to acupuncture during or after treatment; the reactions were described as tolerable and not severe.
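
(As an aside, for readers who wonder how such ‘combined results’ are produced: meta-analyses typically pool the individual trial estimates by inverse-variance weighting. The sketch below is purely illustrative – the mean differences and standard errors are invented, not the review’s data – but it shows the arithmetic and, more importantly, why pooling cannot rescue poor trials.)

    import math

    # Invented mean differences in MMSE points (acupuncture vs. drugs) and their
    # standard errors from six hypothetical small trials - for illustration only.
    trials = [(1.2, 0.9), (2.0, 1.1), (0.8, 1.0), (1.5, 0.8), (2.4, 1.3), (1.1, 0.9)]

    # Fixed-effect (inverse-variance) pooling.
    weights = [1 / se ** 2 for _, se in trials]
    pooled = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

    print(f"Pooled mean difference: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
    # Note: the pooled estimate is only as good as the trials that go into it;
    # averaging small, unblinded, biased studies merely averages their bias.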

The Chinese authors of this review concluded that acupuncture may be more effective than drugs and may enhance the effect of drugs for treating AD in terms of improving cognitive function. Acupuncture may also be more effective than drugs at improving AD patients’ ability to carry out their daily lives. Moreover, acupuncture is safe for treating people with AD.

Anyone reading this who has a friend or family member affected by AD will think that acupuncture is the solution and warmly recommend trying this highly promising option. I would, however, caution them to remain realistic. Like so very many systematic reviews of acupuncture or other forms of TCM that are currently flooding the medical literature, this assessment of the evidence has to be taken with more than just a pinch of salt:

  • As far as I can see, there is no biological plausibility or mechanism for the assumption that acupuncture can do anything for AD patients.
  • The abstract fails to mention that the trials were of poor methodological quality and that such studies tend to generate false-positive findings.
  • The trials had small sample sizes.
  • They were mostly not blinded.
  • They were mostly conducted in China, and we know that almost 100% of all acupuncture studies from that country draw positive conclusions.
  • Only two trials reported on adverse effects, which is, in my view, a violation of research ethics.

As I already mentioned, we are currently being flooded with such dangerously misleading reviews of Chinese primary studies, which are of such dubious quality that one could probably do nothing better than ignore them completely.

Isn’t that a bit harsh? Perhaps, but I am seriously worried that such papers cause real harm:

  • They might motivate some to try acupuncture and give up conventional treatments which can be helpful symptomatically.
  • They might prompt some families to spend sizable amounts of money for no real benefit.
  • They might initiate further research into this area, thus drawing money away from research into much more promising avenues.

IT IS HIGH TIME THAT RESEARCHERS START THINKING CRITICALLY, PEER-REVIEWERS DO THEIR JOB PROPERLY, AND JOURNAL EDITORS STOP PUBLISHING SUCH MISLEADING ARTICLES.

Regular readers of this blog will have noticed: I recently published a ‘memoir’.

Of all the books I have written, this one was by far the hardest. It covers ground that I felt quite uncomfortable with. At the same time, I felt compelled to write it. For over 5 years I kept at it, revised it, re-revised it, re-conceived the outline, abandoned the project altogether only to pick it up again.

When it eventually was finished, we had to find a suitable title. This was far from easy; my book is not a book about alternative medicine, it is a book about all sorts of things that have happened to me, including alternative medicine. Eventually we settled for A SCIENTIST IN WONDERLAND. A MEMOIR OF SEARCHING FOR TRUTH AND FINDING TROUBLE. This seemed to describe its contents quite well, I thought (the German edition is entitled NAZIS, NADELN UND INTRIGEN. ERINNERUNGEN EINES SKEPTIKERS which indicates why it was so difficult to put the diverse contents into a short title).

Then a further complication presented itself: at the very last minute, my publisher insisted that the text had to be checked by libel lawyers. This was not only painful and expensive; following their advice and thus changing or omitting passages also took some of the ‘edge’ off it.

Earlier this year, my ‘memoir’ was finally published; to say that I was nervous about how it might be received must be the understatement of the year. As it turned out, it received so many reviews that today I feel deeply humbled (and very proud), particularly as they were all full of praise and appreciation. In case you are interested, I provide some quotes and the links to the full text reviews below.

[Ah, yes! Some people will surely claim that I did all this for the money. To those of my critics, I respond by saying that, had I done paper rounds or worked as a gardener or a window-cleaner during all the time I spent on this book, I would today be considerably better off. As it stands, the costs for the libel read are not yet covered by the income generated through the sales of this book.]

AND HERE ARE THE PROMISED QUOTES

Times Higher Education Book of the Week

Times Higher Education – Helen Bynum, Jan 29, 2015

“[F]or all its trenchant arguments about evidence-based science, the second half of A Scientist in Wonderland remains a very human memoir, and Ernst’s account of the increasingly personal nature of the attacks he faced when speaking to CAM practitioners and advocacy groups is disturbing… Ben Goldacre’s 2012 book Bad Pharma created a storm via its exposure of the pharmaceutical industry’s unhealthy links with mainstream medicine. Ernst’s book deserves to do the same for the quackery trading under the name of complementary and alternative medicine.”

Spectator article

The Spectator – Nick Cohen, Jan 31, 2015

“If you want a true measure of the man, buy Edzard Ernst’s memoir A Scientist in Wonderland, which the Imprint Academic press have just released. It would be worth reading [even] if the professor had never been the victim of a royal vendetta.”

The Bookbag review

The Bookbag – Sue Magee, Jan 28, 2015

“Ernst isn’t just an academic – he’s also an accomplished writer and skilled communicator. He puts over some quite complex ideas without resorting to jargon and I felt informed without ever struggling to understand, despite being a non-scientist. I was pulled into the story of his life and read most of the book in one sitting… I was impressed by what Ernst had to say and the way in which he said it.”

Science-Based Medicine review

Science-Based Medicine – Harriet Hall, Feb 3, 2015

“Edzard Ernst is one of those rare people who dare to question their own beliefs, look at the evidence without bias, and change their minds… In addition to being a memoir, Dr. Ernst’s book is a paean to science… He shows how misguided ideas, poor reasoning, and inaccurate publicity have contributed to the spread of alternative medicine… This is a well-written, entertaining book that anyone would enjoy reading and that advocates of alternative medicine should read: they might learn a thing or two about science, critical thinking, honesty, and the importance of truth.”

Nature review

Nature – Barbara Kiser, Feb 5, 2015

“[T]his ferociously frank autobiography… [is] a clarion call for medical ethics.”

Times review

The Times – Robbie Millen, Feb 9, 2015

“A Scientist in Wonderland is a rather droll, quick read… [and] it’s an effective antidote to New Age nonsense, pseudo-science and old-fashioned quackery.”

AntiCancer review

AntiCancer.org.uk – Pan Pantziarka, Feb 19, 2015

“It should be required reading for everyone interested in medicine – without exception.”

Mail Online review

Mail Online – Katherine Keogh, Feb 28, 2015

“In his new book, A Scientist In Wonderland: A Memoir Of Searching For Truth And Finding Trouble, no one from the world of alternative medicine is safe from Professor Edzard Ernst’s firing line.”

James Randi Educational Foundation review

James Randi Educational Foundation – William M. London, Mar 9, 2015

“The writing in A Scientist in Wonderland is clear and engaging. It combines good storytelling with important insights about medicine, science, and analytic thinking. Despite all the troubles Ernst encountered, I found his story to be inspirational. I enthusiastically recommend the book to scientists, health professionals, and laypersons who like to see nonsense and mendacity exposed to the light of reason.”

The Pharmaceutical Journal review

The Pharmaceutical Journal – Andrews Haynes, Mar 26, 2015

“This engaging book is a memoir by a medical researcher whose passion for discovering the truth about untested therapies eventually forced him out of his job… [This] highly readable book concentrates on fact rather than emotion. It should be required reading for anyone interested in medical research.”

Skepticat review

Skepticat – Maria MacLachlan, Apr 18, 2015

“A Scientist in Wonderland is more than an autobiography and I’m not sure I can do justice to the riches to be found in its pages. Sometimes it’s reminiscent of a black comedy, other times it’s almost too painful to read.”

Spiked! review

Spiked! – Robin Walsh, May 15, 2015

“Ernst’s book is a reminder of the need to have the courage to tell the truth as you understand it, and fight your corner against those in authority, while never losing a compassion for patients and a commitment to winning the debate.”

Australasian Science review

Australasian Science – Loretta Marron, Jun 10, 2015

“Edzard Ernst is a living legend… The book is easy to read and hard to put down. I would particularly recommend it to anyone, with an open mind, who is interested in the truth or otherwise of CAM.”

Journal of the Royal Society of Medicine Review

JRSM – Michael Baum, June 2015

“This is a deeply moving and deeply disturbing book yet written with a light touch, humour and self-deprecation.”

The Buffalo News review

The Buffalo News – ‘These enlightening books await summer readers’, Jun 21, 2015

“Medical researcher Edzard Ernst spent most of his career stepping on toes. He first exposed the complicity of the German medical profession in the Nazi genocide. Then he accepted appointment as the world’s first chairman of alternative medicine at England’s University of Exeter. There he studied systematically the claims of the proponents of complementary medicine, a field dominated by evangelic and enthusiastic promoters, including Prince Charles. Needless to say, they did not take kindly to his exposures of many of their widely accepted therapies. His book, “A Scientist in Wonderland: A Memoir of Searching for Truth and Finding Trouble,” is a charming account of a committed life.”

Skeptical Inquirer review

Skeptical Inquirer – Chris French
http://www.csicop.org/si/show/truth_trouble_and_research_exposing_alt_med?utm_source=twitterfeed&utm_medium=twitter

“The book is first and foremost a memoir. Even those of us who have long followed Ernst’s research may well find a few surprises here…[Ernst] did not quite behave as many people expected him to after taking up his post in Exeter. He was committed to carrying out high-quality research into the efficacy and safety of CAM, not simply promoting its use in an uncritical manner. This admirable attitude won him much respect in the eyes of fellow scientists and the wider skeptical community.”

You may feel that homeopaths are bizarre, irrational, perhaps even stupid – but you cannot deny their tenacity. For 200 years, they have been trying to convince us that their treatments are effective beyond placebo. And they seem to get bolder and bolder with their claims: while they used to suggest that homeopathy was effective for trivial conditions like a common cold, they now have their eyes on much more ambitious things. Two recent studies, for instance, claim that homeopathic remedies can help cancer patients.

The aim of the first study was to evaluate whether homeopathy influenced global health status and subjective wellbeing when used as an adjunct to conventional cancer therapy.

In this pragmatic randomized controlled trial, 410 patients, who were treated by standard anti-neoplastic therapy, were randomized to receive or not receive classical homeopathic adjunctive therapy in addition to standard therapy. The main outcome measures were global health status and subjective wellbeing as assessed by the patients. At each of three visits (one baseline, two follow-up visits), patients filled in two questionnaires for quantification of these endpoints.

The results show that 373 patients provided at least one of the three measurements. The improvement of global health status between visits 1 and 3 was significantly stronger in the homeopathy group by 7.7 (95% CI 2.3-13.0, p=0.005) when compared with the control group. A significant group difference was also observed with respect to subjective wellbeing by 14.7 (95% CI 8.5-21.0, p<0.001) in favor of the homeopathic as compared with the control group. Control patients showed a significant improvement only in subjective wellbeing between their first and third visits.

Our homeopaths concluded that the results suggest that the global health status and subjective wellbeing of cancer patients improve significantly when adjunct classical homeopathic treatment is administered in addition to conventional therapy.

The second study is a little more modest; its aim was to explore the benefits of a three-month course of individualised homeopathy (IH) for survivors of cancer.

Fifteen survivors of any type of cancer were recruited by a walk-in cancer support centre. Conventional treatment had to have taken place within the last three years. Patients scored their total, physical and emotional wellbeing using the Functional Assessment of Chronic Illness Therapy for Cancer (FACIT-G) before and after receiving four IH sessions.

The results showed that 11 women had statistically positive results for emotional, physical and total wellbeing based on FACIT-G scores.

And the conclusion: Findings support previous research, suggesting CAM or individualised homeopathy could be beneficial for survivors of cancer.

As I said: one has to admire their tenacity, perhaps also their chutzpah – but not their understanding of science or their intelligence. If they were able to think critically, they could only arrive at one conclusion: STUDY DESIGNS THAT ARE WIDE OPEN TO BIAS ARE LIKELY TO DELIVER BIASED RESULTS.

The second study is a mere observation without a control group. The reported outcomes could be due to placebo, expectation, extra attention or social desirability. We obviously need an RCT! But the first study was an RCT!!! Its results are therefore more convincing, aren’t they?

No, not at all. I can repeat my sentence from above: The reported outcomes could be due to placebo, expectation, extra attention or social desirability. And if you don’t believe it, please read what I have posted about the infamous ‘A+B versus B’ trial design (here and here and here and here and here for instance).

My point is that such a study, while looking rigorous to the naïve reader (after all, it’s an RCT!!!), is just as inconclusive when it comes to establishing cause and effect as a simple case series which (almost) everyone knows by now to be utterly useless for that purpose. The fact that the A+B versus B design is nevertheless being used over and over again in alternative medicine for drawing causal conclusions amounts to deceit – and deceit is unethical, as we all know.
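
For those who prefer numbers to words, here is a minimal simulation sketch of an ‘A+B versus B’ trial (my own toy example; the effect sizes are invented) in which the add-on treatment A has no specific effect whatsoever – and yet the A+B group comes out on top:

    import random
    import statistics
    random.seed(42)

    n = 200  # patients per arm

    def qol_change(gets_add_on):
        # Improvement under standard care (natural history + usual care), plus a
        # non-specific boost from the extra attention and expectation that come
        # with ANY add-on treatment - even a completely inert one (assumed values).
        non_specific = 5.0 if gets_add_on else 0.0
        return 10.0 + non_specific + random.gauss(0, 15.0)

    b_only = [qol_change(False) for _ in range(n)]    # standard care alone (B)
    a_plus_b = [qol_change(True) for _ in range(n)]   # inert add-on + standard care (A+B)

    diff = statistics.mean(a_plus_b) - statistics.mean(b_only)
    print(f"Extra QoL improvement in the A+B arm: {diff:.1f} points")
    # The A+B arm 'wins' although A has zero specific effect: the design cannot
    # separate a specific treatment effect from placebo, attention and expectation.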

My overall conclusion about all this:

QUACKS LOVE THIS STUDY DESIGN BECAUSE IT NEVER FAILS TO PRODUCE FALSE POSITIVE RESULTS.

The purpose of this study was to evaluate the impact of early and guideline adherent physical therapy for low back pain on utilization and costs within the Military Health System (MHS).

Patients presenting to a primary care setting with a new complaint of LBP from January 1, 2007 to December 31, 2009 were identified from the MHS Management Analysis and Reporting Tool. Descriptive statistics, utilization, and costs were examined on the basis of timing of referral to physical therapy and adherence to practice guidelines over a 2-year period. Utilization outcomes (advanced imaging, lumbar injections or surgery, and opioid use) were compared using adjusted odds ratios with 99% confidence intervals. Total LBP-related health care costs over the 2-year follow-up were compared using linear regression models.
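
(For readers unfamiliar with the statistics: an odds ratio and its 99% confidence interval can be computed from a simple 2×2 table, as in the illustrative sketch below; the counts are invented and are not the study’s data. The ‘adjusted’ odds ratios reported in the study come from regression models that additionally control for covariates.)

    import math

    # Hypothetical 2x2 table: opioid use (yes/no) by early physical therapy (yes/no).
    # Invented counts, for illustration only.
    a, b = 300, 1700   # early PT:      used opioids / did not
    c, d = 600, 1400   # late or no PT: used opioids / did not

    odds_ratio = (a / b) / (c / d)

    # Wald 99% confidence interval on the log scale (z = 2.576 for 99%).
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = 2.576
    ci_low = math.exp(math.log(odds_ratio) - z * se_log_or)
    ci_high = math.exp(math.log(odds_ratio) + z * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 99% CI {ci_low:.2f} to {ci_high:.2f}")
    # An OR below 1 with a 99% CI excluding 1 indicates lower odds of opioid use
    # in the early-physical-therapy group (in this invented example).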

A total of 753,450 eligible patients aged 18-60 years with a primary care visit for LBP were considered. Physical therapy was utilized by 16.3% (n = 122,723) of patients, with 24.0% (n = 17,175) of those receiving early physical therapy that was adherent to recommendations for active treatment. Early referral to guideline-adherent physical therapy was associated with significantly lower utilization for all outcomes and 60% lower total LBP-related costs.

The authors concluded that the potential for cost savings in the MHS from early guideline adherent physical therapy may be substantial. These results also extend the findings from similar studies in civilian settings by demonstrating an association between early guideline adherent care and utilization and costs in a single payer health system. Future research is necessary to examine which patients with LBP benefit from early physical therapy and to determine strategies for providing early guideline adherent care.

These are certainly interesting data. Because LBP is such a common condition, it costs us all dearly. Measures to reduce this burden in suffering and expense are urgently needed. The question is whether early referral to a physiotherapist is such a measure. The present data show that this is possible but they do not prove it.

I applaud the authors for realising this point and discussing it at length: The results of this study should be examined in light of the following limitations. Given the favorable natural history of LBP, many patients improve regardless of treatment. Those referred to physical therapy early are also more likely to have a shorter duration of pain, thus the potential for selection bias to have influenced these results. We accounted for a number of co-morbidities available in the data set and excluded patients with prior visits for LBP to mitigate against this possibility. However, the retrospective observational design of this study imposes limitations on extending the associations we observed to causation. Although we attempted to exclude patients with a specific spinal pathology, it is possible that a few patients may have been inadvertently included in the data set, in which case advanced imaging may be indicated. Additionally, although our results support that early physical therapy which adheres to practice guidelines may be less resource intense, we cannot conclude without patient-centered clinical outcomes (i.e., pain, function, disability, satisfaction, etc.) that the care was more cost effective. Further, it may be that the standard we used to judge adherence to practice guidelines (CPT codes) was not sufficiently sensitive to determine whether care is consistent with clinical practice guidelines. We also did not account for indirect or out-of-pocket costs for treatments such as complementary care, which is common for LBP. However, it is likely that the observed effects on total costs would have been even larger had these costs been considered.

I was originally alerted to this paper through a tweet claiming that these results demonstrate that chiropractic has an important role in LBP. However, the study does not even imply such a conclusion. It is, of course, true that many chiropractors use physical therapies. But they do not have the same training as physiotherapists and they tend to use spinal manipulations far more frequently. Virtually every LBP-patient consulting a chiropractor would be treated with spinal manipulations. As this approach is neither based on sound evidence nor free of risks, the conclusion, in my view, cannot be to see chiropractors for LBP; it must be to consult a physiotherapist.

Time for some fun!

In alternative medicine, there often seems to be an uneasy uncertainty about research methodology. This is, of course, regrettable, as it can (and often does) lead to misunderstandings. I feel that I have some responsibility to educate research-naïve practitioners. I hope this little dictionary of research terminology turns out to be a valuable contribution in this respect.

Abstract: a concise summary of what you wanted to do skilfully hiding what you managed to do.

Acute: an exceptionally good-looking nurse.

Adverse reaction: a side effect of a therapy that I do not practise.

Anecdotal evidence: the type of evidence that charlatans prefer.

Audit: misspelled name of German car manufacturer.

Avogadro’s number: telephone number of an Italian friend.

Basic research: investigations which are too simplistic to bother with.

Best evidence synthesis: a review of those cases where my therapy worked extraordinarily well.

Bias: prejudice against my therapy held by opponents.

Bioavailability: number of health food shops in the region.

Bogus: a term Simon Singh tried to hijack, but chiropractors sued and thus got the right to use it for characterising their trade.

Chiropractic manipulation: a method of discreetly adjusting data so that they yield positive results.

Confidence interval: the time between reading a paper and realising that it is rubbish.

Conflict of interest: bribery by ‘Big Pharma’.

Confounder: founder of a firm selling bogus treatments.

Controlled clinical trial: a study where I am in control of the data and can prettify them, if necessary.

Critical appraisal: an assessment of my work by fellow charlatans.

Data manipulation: main aim of chiropractic.

Declaration of Helsinki: a statement by the Finnish Society for Homeopathy in favour of treating Ebola with homeopathy.

Doctor: title mostly used by chiropractors and naturopaths.

Dose response: weird concept of pharmacologists which has been disproven by homeopathy.

EBM: eminence-based medicine.

Error: a thing done by my opponents.

Ethics: misspelled name of an English county North of London.

Evidence: the stuff one can select from Medline when one needs a positive result in a hurry.

Evidence-based medicine: the health care based on the above.

Exclusion criteria: term used to characterise material that is not to my liking and must therefore be omitted.

Exploratory analysis: valuable approach of re-analysing negative results until a positive finding pops up.

Focus group: useful method for obtaining any desired outcome.

Forest plot: a piece of land with lots of trees.

Funnel plot: an intrigue initiated by Prof Funnel to discredit homeopathy.

Good clinical practice: the stuff I do in my clinical routine.

Grey literature: print-outs of articles from a faulty printer.

Hawthorne effect: the effects of Crataegus on cardiovascular function.

Hierarchy of evidence: a pyramid with my opinion on top.

Homeopathic delusion: method of manufacturing a homeopathic remedy.

Informed consent: agreement of patients to pay my fee.

Intention to treat analysis: a method of calculating data in such a way that they demonstrate what I intended to show.

Logic: my way of thinking.

Mean: attitude of chiropractors to anyone suggesting their manipulations are not a panacea.

Metastasis: lack of progress with a meta-analysis.

Numbers needed to treat: amount of patients I require to make a good living.

Observational study: results from a few patients who did exceptionally well on my therapy.

Odds ratio: number of lunatics in my professional organisation divided by the number of people who seem normal.

Pathogenesis: a rock group who have fallen ill.

Peer review: assessment of my work by several very close friends of mine.

Pharmacodynamics: the way ‘Big Pharma’ is trying to suppress my findings.

Pilot study: a trial that went so terribly wrong that it became unpublishable – but, in the end, we still got it in an alt med journal.

Placebo-effect: a most useful phenomenon that makes patients who receive my therapy feel better.

Pragmatic trial: a study that is designed to generate the result I want.

Silicon Valley: region in the US where most stupid fraudsters are said to come from.

Standard deviation: a term describing the fact that deviation from the study protocol is normal.

Statistics: a range of methods which are applied to the data until they eventually yield a significant finding.

Survey: popular method of interviewing a few happy customers in order to promote my practice.

Systematic review: a review of all the positive results I could find.

Like it? If so, why don’t you suggest a few more entries for my dictionary via the comment section below?

This is a question which I have asked myself more often than I care to remember. The reason is probably that, in alternative medicine, I feel surrounded by so much dodgy research that I simply cannot avoid asking it.

In particular, the so-called ‘pragmatic’ trials which are so much ‘en vogue’ at present are, in my view, a reason for concern. Take a study of cancer patients, for instance, where one group is randomized to get the usual treatments and care, while the experimental group receives the same plus several alternative treatments. These treatments are carefully selected to be agreeable and pleasant; each patient can choose the ones he/she likes best, had always wanted to try, or has heard many good things about. The outcome measure of our fictitious study would, of course, be some subjective parameter such as quality of life.

In this set-up, the patients in our experimental group thus have high expectations, are delighted to get something extra, even happier to get it for free, and receive plenty of attention, empathy, care and time. By contrast, our poor patients in the control group would be a bit miffed to have drawn the ‘short straw’ and receive none of this.

What result do we expect?

Will the quality of life after all this be equal in both groups?

Will it be better in the miffed controls?

Or will it be higher in those lucky ones who got all this extra pampering?

I don’t think I need to answer these questions; the answers are too obvious and too trivial.

But the real and relevant question is the following, I think: IS SUCH A TRIAL JUST SILLY AND MEANINGLESS OR IS IT UNETHICAL?

I would argue the latter!

Why?

Because the results of the study are clearly known before the first patient has even been recruited. This means that the trial was not necessary; the money, time and effort have been wasted. Crucially, patients have been misled into thinking that they are giving their time, co-operation, patience etc. because there is a question of sufficient importance to be answered.

But, in truth, there is no question at all!

Perhaps you believe that nobody in their right mind would design, fund and conduct such a daft trial. If so, you assumed wrongly. Such studies are currently being published by the dozen. Here is the abstract of the most recent one I could find:

The aim of this study was to evaluate the effectiveness of an additional, individualized, multi-component complementary medicine treatment offered to breast cancer patients at the Merano Hospital (South Tyrol) on health-related quality of life compared to patients receiving usual care only. A randomized pragmatic trial with two parallel arms was performed. Women with confirmed diagnoses of breast cancer were randomized (stratified by usual care treatment) to receive individualized complementary medicine (CM group) or usual care alone (usual care group). Both groups were allowed to use conventional treatment for breast cancer. Primary endpoint was the breast cancer-related quality of life FACT-B score at 6 months. For statistical analysis, we used analysis of covariance (with factors treatment, stratum, and baseline FACT-B score) and imputed missing FACT-B scores at 6 months with regression-based multiple imputation. A total of 275 patients were randomized between April 2011 and March 2012 to the CM group (n = 136, 56.3 ± 10.9 years of age) or the usual care group (n = 139, 56.0 ± 11.0). After 6 months from randomization, adjusted means for health-related quality of life were higher in the CM group (FACT-B score 107.9; 95 % CI 104.1-111.7) compared to the usual care group (102.2; 98.5-105.9) with an adjusted FACT-B score difference between groups of 5.7 (2.6-8.7, p < 0.001). Thus, an additional individualized and complex complementary medicine intervention improved quality of life of breast cancer patients compared to usual care alone. Further studies evaluating specific effects of treatment components should follow to optimize the treatment of breast cancer patients. 
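
(For the statistically curious, the analysis described in this abstract – an analysis of covariance with treatment, stratum and baseline FACT-B score – looks roughly like the sketch below. The data, column names and effect sizes are all invented, and the multiple-imputation step is omitted; this shows the type of model, not the trial’s actual analysis.)

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 275

    # Synthetic data for illustration only.
    df = pd.DataFrame({
        "treatment": rng.integers(0, 2, n),          # 1 = CM group, 0 = usual care
        "stratum": rng.integers(0, 3, n),            # usual-care stratum
        "factb_baseline": rng.normal(100, 15, n),
    })
    # Outcome at 6 months: depends on baseline plus an assumed 5-point bump in
    # the CM arm (which could just as well be attention/expectation as treatment).
    df["factb_6m"] = (30 + 0.7 * df["factb_baseline"]
                      + 5.0 * df["treatment"]
                      + rng.normal(0, 10, n))

    # ANCOVA: outcome ~ treatment + stratum + baseline score.
    model = smf.ols("factb_6m ~ treatment + C(stratum) + factb_baseline", data=df).fit()
    print(model.params["treatment"])                 # adjusted group difference
    print(model.conf_int().loc["treatment"])         # its 95% confidence interval
    # The model estimates a between-group difference, but it cannot tell whether
    # that difference is due to the treatments themselves or to the extra
    # attention in the CM arm - the 'A+B versus B' problem again.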

The key sentence in this abstract is, of course: complementary medicine intervention improved quality of life of breast cancer patients… It provides the explanation as to why these trials are so popular with alternative medicine researchers: they are not real research; they are quite simply promotion! The next step would be to put a few of those pseudo-scientific trials together and claim that there is solid proof that integrating alternative treatments into conventional health care produces better results. At that stage, few people will bother asking whether this is really due to the treatments in question or to the additional attention, pampering etc.

My question is ARE SUCH TRIALS ETHICAL?

I would very much appreciate your opinion.
