conflict of interest
In the current issue of the Faculty of Homeopathy's Simile publication, Dr Peter Fisher, the Queen's homeopath, revisits the old story of the 'Smallwood Report'. To my big surprise, I found the following two paragraphs in his editorial:
A prepublication draft [of the Smallwood report] was circulated for comment with prominent warnings that it was confidential and not to be shared more widely (I can personally vouch for this, since I was one of those asked to comment). Regrettably, Prof Ernst did precisely this, leaking it to The Times who used it as the basis of their lead story. The editor of The Lancet, Richard Horton, certainly no friend of homeopathy, promptly denounced Ernst for having “broken every professional code of scientific behaviour”.
Sir Michael Peat, the Prince of Wales’ Principal Private Secretary, wrote to the vice chancellor of Exeter University protesting at the leak, and the university conducted an investigation. Ernst’s position became untenable, funding for his department dried up and he took early retirement. Thirteen years later he remains sore; in his latest book More Harm than Good? he attacks the Prince of Wales as “foolish and immoral”.
END OF QUOTE
Sadly, it is true that Horton wrote these defamatory words. Subsequently, I asked him to justify them, explaining that they were being used by my university against me. He ignored several of my emails, but eventually he sent a reply. In it, he said that, since the university was investigating the issue, the truth would doubtless be disclosed. I remember that I was livid at the arrogance and ignorance of this reply. However, being in the middle of my university's investigation against me, I never did anything about it. Looking back at this part of the episode, I feel that Horton behaved abominably.
But back to Dr Fisher.
Why did his defamatory and false accusation in his new editorial come as a ‘big surprise’ to me?
Should I not have gotten used to the often odd way in which some homeopaths handle the truth?
Yes, I did get used to this phenomenon; but I am nevertheless surprised because I have tried to correct Fisher’s ‘error’ before.
This is from a post about Fisher which I published in 2015:
In this article [available here in archive.org – Admin] which he published as Dr. Peter Fisher, Homeopath to Her Majesty, the Queen, he wrote: There is a serious threat to the future of the Royal London Homoeopathic Hospital (RLHH), and we need your help…Lurking behind all this is an orchestrated campaign, including the ’13 doctors letter’, the front page lead in The Times of 23 May 2006, Ernst’s leak of the Smallwood report (also front page lead in The Times, August 2005), and the deeply flawed, but much publicised Lancet meta-analysis of Shang et al…
If you have read my memoir, you will know that even the hostile 13-month investigation by my own university did not find me guilty of the ‘leak’. The Times journalist who interviewed me about the Smallwood report already had the document on his desk when we spoke, and I did not disclose any contents of the report to him…
END OF QUOTE
So, assuming that Dr Peter Fisher has seen my 2015 post, he is knowingly perpetuating a libellous untruth. However, giving him the benefit of the doubt, he might have read neither the post nor my memoir and could be unaware of the truth. Error or lie? I am determined to find out and will send him today’s post with an offer to clarify the situation.
I will keep you posted.
In recent days, journalists across the world had a field day (mis)reporting that doctors practising integrative medicine were doing something positive after all. I think that the paper shows nothing of the kind – but please judge for yourself.
The authors of this article wanted to determine differences in antibiotic prescription rates between conventional General Practice (GP) surgeries and GP surgeries employing general practitioners (GPs) additionally trained in integrative medicine (IM) or complementary and alternative medicine (CAM) (referred to as IM GPs) working within National Health Service (NHS) England.
They conducted a retrospective study on antibiotic prescription rates per STAR-PU (Specific Therapeutic group Age–sex weighting Related Prescribing Unit) using NHS Digital data for 2016. Publicly available data were used on the prevalence of relevant comorbidities, demographics of patient populations, and deprivation scores. The setting was primary care; the participants were 7283 NHS GP surgeries in England. The outcome was the association between IM GPs and antibiotic prescribing rates per STAR-PU, with the number of antibiotic prescriptions (total, and for respiratory tract infection (RTI) and urinary tract infection (UTI) separately) as the measure. IM GP surgeries (n=9) were comparable to conventional GP surgeries in terms of list sizes, demographics, deprivation scores and comorbidity prevalence.
Statistically significantly fewer total antibiotics were prescribed at NHS IM GP surgeries compared with conventional NHS GP surgeries. In contrast, the number of antibiotics prescribed for UTI was similar between both types of practice.
The authors concluded that NHS England GP surgeries employing GPs additionally trained in IM/CAM have lower antibiotic prescribing rates. Accessibility of IM/CAM within NHS England primary care is limited. Main study limitation is the lack of consultation data. Future research should include the differences in consultation behaviour of patients self-selecting to consult an IM GP or conventional surgery, and its effect on antibiotic prescription. Additional treatment strategies for common primary care infections used by IM GPs should be explored to see if they could be used to assist in the fight against antimicrobial resistance.
The study was flimsy to say the least:
- It was retrospective and is therefore open to no end of confounders.
- There were only 9 surgeries in the IM group.
Moreover, the results were far from impressive. The differences in antibiotic prescribing between the two groups of GP surgeries were minimal or non-existent. Finally, the study was financed via an unrestricted grant from WALA Heilmittel GmbH, Germany (“approx. 900 different remedies conforming to the anthroposophic understanding of man and nature”), and its senior author has a long track record of publishing papers promoting anthroposophic medicine.
Such pseudo-research seems to be popular in the realm of CAM, and I have commented before on similarly futile projects. The comparison I sometimes use is that of a hamburger restaurant:
Employees of a large hamburger chain set out to study the association between utilisation of hamburger restaurant services and vegetarianism. The authors used a retrospective cohort design. The study population comprised New Hampshire residents aged 18-99 years who had entered the premises of a hamburger restaurant within 90 days for the primary purpose of eating. The authors excluded subjects with a diagnosis of cancer. They measured the likelihood of vegetarianism among recipients of services delivered by hamburger restaurants compared with a control group of individuals not using meat-dispensing facilities. They also compared the cohorts with regard to the money spent in hamburger restaurants. The adjusted likelihood of being a vegetarian was 55% lower in the experimental group compared to controls. The average money spent per person in hamburger restaurants was also significantly lower among the hamburger group.
To me, it is obvious that such analyses must produce a seemingly favourable result for CAM. In the present case, there are several reasons for this:
- GPs who volunteer to be trained in CAM tend to be in favour of ‘natural’ treatments and oppose synthetic drugs such as antibiotics.
- Education in CAM would only reinforce this notion.
- Similarly, patients electing to consult IM GPs tend to be in favour of ‘natural’ treatments and oppose synthetic drugs such as antibiotics.
- Such patients might be less severely ill than the rest of the patient population (the data from the present study do in fact imply this to be true).
- These phenomena work in concert to generate less antibiotic prescribing in the IM group.
In the final analysis, all this finding amounts to is a self-fulfilling prophecy: grocery shops sell less meat than butchers! You don’t believe me? Perhaps you need to read a previous post then; it concluded that physicians practicing integrative medicine (the 80% who did not respond to the survey were most likely even worse) not only use and promote much quackery, they also tend to endanger public health by their bizarre, irrational and irresponsible attitudes towards vaccination.
What is upsetting with the present paper, in my view, are the facts that:
- a reputable journal published this junk,
- the international press had a field day reporting this study, implying that CAM is a good thing.
The fact is that it shows nothing of the kind. Imagine we send GPs on a course where they are taught to treat all their patients with blood-letting. This too would result in fewer antibiotic prescriptions, wouldn’t it? But would it be a good thing? Of course not!
True, we prescribe too many antibiotics. Nobody doubts that. And nobody doubts that it is a big problem. The solution to this problem is not more CAM, but fewer antibiotics. To realise the solution, we do not need to teach GPs CAM; we need to remind them of the principles of evidence-based practice. And the two are clearly not the same; in fact, they are opposites.
Chiropractors are fast giving up the vitalistic and obsolete concepts of their founding fathers, we are told over and over again. But are these affirmations true? There are good reasons to be sceptical. Take this recent paper, for instance.
The objective of this survey was to investigate the proportion of Australian chiropractic students who hold non-evidence-based beliefs in the first year of study and to determine the extent to which they may be involved in non-musculoskeletal health conditions.
Students from two Australian chiropractic programs were invited to answer a questionnaire on how often they would give advice on 5 common health conditions in their future practices, as well as to provide their opinion on whether chiropractic spinal adjustments could prevent or help seven health-related conditions.
The response rate of this survey was 53%. Students were highly likely to offer advice on a range of non-musculoskeletal conditions. The proportions were lowest in the first year and highest in the final year. For instance, 64% of students in year 4/5 believed that spinal adjustments improve the health of infants. Also, high numbers of students held non-evidence-based beliefs about ‘chiropractic spinal adjustments’; these numbers tended to decrease gradually in sequential years, except in the 5th and final year, when the pattern reversed.
The authors concluded that new strategies are required for chiropractic educators if they are to produce graduates who understand and deliver evidence-based health care and are able to be part of the mainstream health care system.
This is an interesting survey, but I think its conclusion is wrong!
- Educators do not require ‘new strategies’, I would argue; they simply need to take their duty of educating students seriously – educating in this context does not mean brain-washing, it means teaching facts and evidence-based practice. And this is where any concept of true education would run into problems: it would teach students that chiropractic is built on sand.
- Conclusions need to be based on the data presented. Therefore, the most fitting conclusion, in my view, is that chiropractic students are currently being educated such that, once let loose on the unsuspecting and often all too gullible public, they will be a menace and a serious danger to public health.
You might say that this survey is from Australia and that the findings therefore do not necessarily apply to other countries. Correct! However, I very much fear that elsewhere the situation is similar or perhaps even worse. And my fear does not come out of thin air, it is based on things we have discussed before; see for instance these three posts:
But I would be more than willing to change my mind – provided someone can show me good evidence to the contrary.
An article in yesterday’s Times makes the surprising claim that ‘doctors turn to herbal cures when the drugs don’t work’. As the subject is undoubtedly relevant to this blog and as the Times is a highly respected newspaper, I think this might be important and will therefore comment (in normal print) on the full text of the article (in bold print):
GPs are increasingly dissatisfied with doling out pills that do not work for illnesses with social and emotional roots, and a surprising number of them end up turning to alternative medicine.
What a sentence! I would have thought that GPs have always been ‘dissatisfied’ with treatments that are ineffective. But who says they turn to alternative medicine in ‘surprising numbers’ (our own survey does not confirm the notion)? And what is a ‘surprising number’ anyway (zero would be surprising, in my view)?
Charlotte Mendes da Costa is unusual in being both an NHS GP and a registered homeopath. Her frustration with the conventional approach of matching a medicine to a symptom is growing as doctors increasingly see the limits, and the risks, of such a tactic.
Do we get the impression that THE TIMES does not know that homeopathy is not herbal medicine? Do they know that ‘matching a medicine to a symptom’ is what homeopaths believe they are doing? Real doctors try to find the cause of a symptom and, whenever possible, treat it.
She asks patients with sore throats questions that few other GPs pose: “What side is it? Is it easier to swallow solids or liquids? What time of day is it worst?” Dr Mendes da Costa is trying to find out which homeopathic remedy to prescribe. But when NHS guidance for sore throats aims mainly to convince patients that they will get better on their own, her questions are just as important as her prescription.
This section makes no sense. Sore throats do get better on their own, that’s a fact. And empathy is not a monopoly of homeopaths. But Dr Mendes da Costa might be somewhat detached from reality; she once promoted the nonsensical notion that “up to the end of 2010, 156 randomised controlled trials (RCTs) in homeopathy had been carried out with 41% reporting positive effects, whereas only 7% have been negative. The remainder were non-conclusive.” (see more on this particular issue here)
“It’s very difficult to disentangle the effect of listening to someone properly, in a non-judgmental way, and taking a real rather than a superficial interest,” she says. “With a sore throat [I was trained] really only to be interested in, ‘Do they need antibiotics or not?’ ”
In this case, she should ask for her money back; her medical school seems to have been rubbish at training her adequately.
This week a Lancet series on back pain said that millions of patients were getting treatments that did them no good. A government review is looking into how one in 11 people has come to be on potentially addictive drugs such as tranquillisers, opioid painkillers and antidepressants.
Yes, and how is that an argument for homeopathy? It isn’t! It seems to come from the textbook of fallacies.
And this week a BMJ Open study found that GPs with alternative training prescribed a fifth fewer antibiotics.
That study was akin to showing that butchers sell fewer vegetables than greengrocers. It provided no argument at all for implying that homeopathy is a valuable therapy.
Doctors seem receptive to alternative approaches: in a poll on its website 70 per cent agreed that doctors should recommend acupuncture to patients in pain. The Faculty of Homeopathy now counts 400 doctors among its 700 healthcare professional members.
Wow! Does the Times journalist know that the ‘Faculty of Homeopathy’ is primarily an organisation for doctor homeopaths? If so, why are these figures anything to write home about? And does the author appreciate that the poll was open not just to doctors but to anyone (particularly those who were motivated, like acupuncturists)?
This horrifies many academics, who say that there is almost no evidence that complementary therapies work.
It horrifies nobody, I’d say. It puzzles some people, and not just academics. And their claim of a lack of sound evidence is evidence-based.
“It’s a false battle”, says Michael Dixon, a GP who chairs the College of Medicine, which is trying to broaden the focus on treatment to patients’ whole lives. “GPs are practical. If a patient gets better that’s all that matters.”
Dr Dixon says there are enormous areas of illness ranging from chronic pain to irritable bowels where few conventional treatments have been shown to be particularly effective, so why not try alternatives with fewer side effects?
Unable to diagnose and treat adequately, let’s all do the next worst thing and apply some outright quackery?!? Logic does not seem to be Dixon’s strong point, does it?
He recommends herbal remedies such as pelargonium — “like a geranium, quite a pretty little flower” — acupressure, and techniques such as self-hypnosis. To those who say these are placebos he replies: so what?
So what indeed! There are over 200 species of pelargonium; only 2 or 3 of them are used in herbal medicine. I don’t suppose Dr Dixon wants to poison us?
“Aromatherapy does work, but only if you believe in it, that’s the way you have to look at it, like a mother kissing knees better.” He continues: “We are healers. That’s what we do as doctors. You can call it theatrical or you can call it a relationship. A lot of patients come in with a metaphor — a headache is actually unhappiness — and the treatment is symbolic.”
It frightens me to know that there are doctors out there who think like this!
What if a patient is seriously ill?
A cancer is a metaphor for what exactly?
As doctors, we have the ethical duty to apply BOTH the science and the art of medicine, BOTH efficacious, evidence-based therapies AND compassion. Can I be so bold as to recommend our book about the ethics of alternative medicine to Dixon?
Such talk makes conventional doctors very nervous. Yet acupuncture illustrates their dilemma. It used to be recommended by the NHS for back pain because patients did improve. Now it is not, after further evidence suggested that patients given placebo “sham acupuncture” did just as well.
No, acupuncture used to be recommended by NICE because there was some evidence; when subsequently more rigorous trials emerged showing that it does NOT work, NICE stopped recommending it. Real medicine develops – it’s only alternative medicine and its proponents that seem to be stuck in the past and resist progress.
Martin Underwood, of the University of Warwick, asks: “So are you going to say, ‘Well, patients get better than they would do otherwise’? Or say it’s all theatrical placebo because it shows no benefit over sham treatment? That’s the question for society.”
Society has long answered it! The answer is called evidence-based medicine. We are not content using quackery for its placebo response; we know that effective treatments do that too, and we want to make progress and improve healthcare of tomorrow.
Although many doctors agree that they need to look at patients more broadly, they insist they do not need to turn to unproven treatments. The magic ingredient, they say, is not an alternative remedy, but time. Helen Stokes-Lampard, chairwoman of the Royal College of GPs, said: “Practices which offer alternative therapies tend to spend longer with patients . . . allowing for more in-depth conversations.”
I am sorry if this post has turned into a bit of a lengthy rant. But it was needed, I think: if there ever was a poorly written, ill focussed, badly researched and badly argued article on alternative medicine, it must be this one.
Did I call the Times a highly respected paper?
I take it back.
The media have (rightly) paid much attention to the three Lancet-articles on low back pain (LBP) which were published this week. LBP is such a common condition that its prevalence alone renders it an important subject for us all. One of the three papers covers the treatment and prevention of LBP. Specifically, it lists various therapies according to their effectiveness for both acute and persistent LBP. The authors of the article base their judgements mainly on published guidelines from Denmark, UK and the US; as these guidelines differ, they attempt a synthesis of the three.
Several alternative therapist organisations and individuals have consequently jumped on the LBP bandwagon and seem to feel encouraged by the attention given to the Lancet papers to promote their treatments. Others have claimed that my often critical verdicts on alternative therapies for LBP are out of line with this evidence and asked: ‘who should we believe, the international team of experts writing in one of the best medical journals, or Edzard Ernst writing on his blog?’ They are trying to create a division where none exists.
The thing is that I am broadly in agreement with the evidence presented in the Lancet paper! But I also know that things are a bit more complex.
Below, I have copied the non-pharmacological, non-operative treatments listed in the Lancet-paper together with the authors’ verdicts regarding their effectiveness for both acute and persistent LBP. I find no glaring contradictions with what I regard as the best current evidence and with my posts on the subject. But I feel compelled to point out that the Lancet-paper merely lists the effectiveness of several therapeutic options, and that the value of a treatment is not only determined by its effectiveness. Crucial further elements are a therapy’s cost and its risks, the latter of which also determines the most important criterion: the risk/benefit balance. In my version of the Lancet table, I have therefore added these three variables for non-pharmacological and non-surgical options:
| Therapy | Effectiveness acute LBP | Effectiveness persistent LBP | Risks | Costs | Risk/benefit balance |
|---|---|---|---|---|---|
| Advice to stay active | +, routine | +, routine | None | Low | Positive |
| Education | +, routine | +, routine | None | Low | Positive |
| Superficial heat | +/- | Ie | Very minor | Low to medium | Positive (aLBP) |
| Exercise | Limited | +/-, routine | Very minor | Low | Positive (pLBP) |
| CBT | Limited | +/-, routine | None | Low to medium | Positive (pLBP) |
| Rehab | Ie | +/- | Minor | Medium to high | Questionable |
Routine = consider for routine use
+/- = second line or adjunctive treatment
Ie = insufficient evidence
Limited = limited use in selected patients
vfbmae = very frequent, minor adverse effects
sae = serious adverse effects, including deaths, are on record
aLBP = acute low back pain
pLBP = persistent low back pain
The reason why my stance, as expressed on this blog and elsewhere, is often critical about certain alternative therapies is thus obvious and transparent. For none of them (except for massage) is the risk/benefit balance positive. And for spinal manipulation, it even turns out to be negative. It goes almost without saying that responsible advice must be to avoid treatments for which the benefits do not demonstrably outweigh the risks.
I imagine that chiropractors, osteopaths and acupuncturists will strongly disagree with my interpretation of the evidence (they might even feel that their cash-flow is endangered) – and I am looking forward to the discussions around their objections.
As I have often said, I find it regrettable that sceptics often say THERE IS NOT A SINGLE STUDY THAT SHOWS HOMEOPATHY TO BE EFFECTIVE (or something to that effect). This is quite simply not true, and it gives homeopathy fans an occasion to suggest that sceptics are wrong. The truth is that THE TOTALITY OF THE MOST RELIABLE EVIDENCE FAILS TO SUGGEST THAT HIGHLY DILUTED HOMEOPATHIC REMEDIES ARE EFFECTIVE BEYOND PLACEBO. As a message for consumers, this is a little more complex, but I believe it’s worth being well-informed and truthful.
And that also means admitting that a few apparently rigorous trials of homeopathy exist and some of them show positive results. Today, I want to focus on this small set of studies.
How can a rigorous trial of a highly diluted homeopathic remedy yield a positive result? As far as I can see, there are several possibilities:
- Homeopathy does work after all, and we have not fully understood the laws of physics, chemistry etc. Homeopaths favour this option, of course, but I find it extremely unlikely, and most rational thinkers would discard this possibility outright. It is not that we don’t quite understand homeopathy’s mechanism; the fact is that we understand that there cannot be a mechanism that is in line with the laws of nature.
- The trial in question is the victim of some undetected error.
- The result has come about by chance. Of 100 trials, 5 would produce a positive result at the 5% probability level purely by chance.
- The researchers have cheated.
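The role of chance deserves a quantitative illustration. As a quick sketch of my own (not part of the original post), we can simulate many trials of an inert 'remedy' against placebo, where both groups have the identical true response rate, and count how often a conventional significance test comes out 'positive' at the 5% level:

```python
import random
import math

random.seed(42)

def two_sample_z_test(successes_a, successes_b, n):
    """Two-sided normal-approximation test for a difference in proportions."""
    p_a, p_b = successes_a / n, successes_b / n
    p_pool = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    # p-value via the standard normal CDF, built from math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_trials, n_patients, alpha = 1000, 100, 0.05
false_positives = 0
for _ in range(n_trials):
    # both 'remedy' and placebo groups share the same true 50% response rate,
    # so any 'significant' difference is a false positive by construction
    verum = sum(random.random() < 0.5 for _ in range(n_patients))
    placebo = sum(random.random() < 0.5 for _ in range(n_patients))
    if two_sample_z_test(verum, placebo, n_patients) < alpha:
        false_positives += 1

print(f"'positive' trials out of {n_trials}: {false_positives}")
```

Roughly 5% of the simulated trials come out 'significant' even though the remedy is, by construction, inert; with thousands of homeopathy trials published, a handful of apparently rigorous positive results is exactly what chance alone would deliver.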
When we critically assess any given trial, we attempt, in a way, to determine which of the four explanations applies. But unfortunately we have to rely on what the authors of the trial tell us. Publications never provide all the details we need for this purpose, and we are often left speculating about which of the explanations might apply. Whatever it is, we assume the result is false-positive.
Naturally, this assumption is hard for homeopaths to accept; they merely conclude that we are biased against homeopathy and that, however rigorous a study of homeopathy may be, sceptics will not accept its result if it turns out to be positive.
But there might be a way to settle the argument and get some more objective verdict, I think. We only need to remind ourselves of a crucially important principle in all science: INDEPENDENT REPLICATION. To be convincing, a scientific paper needs to provide evidence that the results are reproducible. In medicine, it unquestionably is wise to accept a new finding only after it has been confirmed by other, independent researchers. Only if we have at least one (better several) independent replications, can we be reasonably sure that the result in question is true and not false-positive due to bias, chance, error or fraud.
And this is, I believe, the extremely odd phenomenon about the ‘positive’ and apparently rigorous studies of homeopathic remedies. Let’s look at the recent meta-analysis of Mathie et al. The authors found several studies that were both positive and fairly rigorous. These trials differ in many respects (e. g. remedies used, conditions treated) but they have, as far as I can see, one important feature in common: THEY HAVE NOT BEEN INDEPENDENTLY REPLICATED.
If that is not astounding, I don’t know what is!
Think of it: faced with a finding that flies in the face of science and would, if true, revolutionise much of medicine, scientists should jump with excitement. Yet, in reality, nobody seems to take the trouble to check whether it is the truth or an error.
To explain this absurdity more fully, let’s take just one of these trials as an example, one related to a common and serious condition: COPD.
The study is by Prof Frass and was published in 2005 – surely long enough ago for plenty of independent replications to have emerged. Its results showed that, with potentised (C30) potassium dichromate, the amount of tracheal secretions was reduced, extubation could be performed significantly earlier, and the length of stay was significantly shorter. This is a scientific as well as a clinical sensation, if there ever was one!
The RCT was published in one of the leading journals on this subject (Chest) which is read by most specialists in the field, and it was at the time widely reported. Even today, there is hardly an interview with Prof Frass in which he does not boast about this trial with truly sensational results (only last week, I saw one). If Frass is correct, his findings would revolutionise the lives of thousands of seriously suffering patients at the very brink of death. In other words, it is inconceivable that Frass’ result has not been replicated!
But it hasn’t; at least there is nothing in Medline.
Why not? A risk-free, cheap, universally available and easy to administer treatment for such a severe, life-threatening condition would normally be picked up instantly. There should not be one, but dozens of independent replications by now. There should be several RCTs testing Frass’ therapy and at least one systematic review of these studies telling us clearly what is what.
But instead there is a deafening silence.
For heaven’s sake, why?
The only logical explanation is that many centres around the world did try Frass’ therapy. Most likely they found it does not work and soon dismissed it. Others might even have gone to the trouble of conducting a formal study of Frass’ ‘sensational’ therapy and found it to be ineffective. Subsequently they felt too silly to submit it for publication – who would not laugh at them if they said they had trialled a remedy that was diluted 1: 1000000000000000000000000000000000000000000000000000000000000 and found it to be worthless? Others might have written up their study and submitted it for publication, but got rejected by all reputable journals in the field because the editors felt that comparing one placebo to another placebo is not real science.
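To put that dilution into perspective, here is a back-of-the-envelope calculation of my own (the generous one-mole starting quantity is my assumption, not a figure from the trial): a C30 potency means 30 serial 1:100 dilutions, a factor of 10^60, which is vastly larger than the number of molecules in any starting sample.

```python
# A C30 'potency' means 30 serial 1:100 dilutions, i.e. a factor of 100**30 = 1e60.
dilution_factor = 100 ** 30

avogadro = 6.022e23  # molecules per mole

# Assumption for illustration: we generously start with a full mole
# of potassium dichromate (~294 g) before diluting.
starting_molecules = avogadro

expected_molecules = starting_molecules / dilution_factor
print(f"dilution factor: 1e{len(str(dilution_factor)) - 1}")
print(f"expected molecules left per dose: {expected_molecules:.1e}")
# ~6e-37: the probability that even one molecule of the original
# substance remains in the remedy is essentially zero.
```

In other words, a C30 remedy is, for all practical purposes, pure diluent; that is why a positive clinical result with it would indeed overturn basic chemistry and physics.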
And this is roughly how it went with the other ‘positive’ and seemingly rigorous studies of homeopathy as well, I suspect.
Regardless of whether I am correct or not, the fact is that there are no independent replications (if readers know any, please let me know).
Once a sufficiently long period of time has elapsed and no replications of a ‘sensational’ finding have emerged, the finding becomes unbelievable or bogus – no rational thinker can possibly believe such a result (I, for one, have not yet met an intensive care specialist who believes Frass’ findings, for instance). Subsequently, it is quietly dropped into the waste-basket of science where it no longer obstructs progress.
The absence of independent replications is therefore a most useful mechanism by which science rids itself of falsehoods.
It seems that homeopathy is such a falsehood.
The plethora of dodgy meta-analyses in alternative medicine has been the subject of a recent post – so this one is a mere update of a regular lament.
This new meta-analysis set out to evaluate the evidence for the effectiveness of acupuncture in the treatment of lumbar disc herniation (LDH). (Call me pedantic, but I prefer meta-analyses that evaluate the evidence FOR AND AGAINST a therapy.) Electronic databases were searched to identify RCTs of acupuncture for LDH, and 30 RCTs involving 3503 participants were included; 29 were published in Chinese and one in English, and all trialists were Chinese.
The results showed that acupuncture had a higher total effective rate than lumbar traction, ibuprofen, diclofenac sodium and meloxicam. Acupuncture was also superior to lumbar traction and diclofenac sodium in terms of pain measured with visual analogue scales (VAS). The total effective rate in 5 trials was greater for acupuncture than for mannitol plus dexamethasone and mecobalamin, ibuprofen plus fugui gutong capsule, loxoprofen, mannitol plus dexamethasone and huoxue zhitong decoction, respectively. Two trials showed a superior effect of acupuncture in VAS scores compared with ibuprofen or mannitol plus dexamethasone, respectively.
The authors from the College of Traditional Chinese Medicine, Jinan University, Guangzhou, Guangdong, China, concluded that acupuncture showed a more favourable effect in the treatment of LDH than lumbar traction, ibuprofen, diclofenac sodium, meloxicam, mannitol plus dexamethasone and mecobalamin, fugui gutong capsule plus ibuprofen, mannitol plus dexamethasone, loxoprofen and huoxue zhitong decoction. However, further rigorously designed, large-scale RCTs are needed to confirm these findings.
Why do I call this meta-analysis ‘dodgy’? I have several reasons, 10 to be exact:
- There is no plausible mechanism by which acupuncture might cure LDH.
- The types of acupuncture used in these trials were far from uniform and included manual acupuncture (MA) in 13 studies, electro-acupuncture (EA) in 10 studies, and warm needle acupuncture (WNA) in 7 studies. Arguably, these are different interventions that cannot be lumped together.
- The trials were mostly of very poor quality, as depicted in the table above. For instance, 18 studies failed to mention the methods used for randomisation. I have previously shown that some Chinese studies use the terms ‘randomisation’ and ‘RCT’ even in the absence of a control group.
- None of the trials made any attempt to control for placebo effects.
- None of the trials were conducted against sham acupuncture.
- Only 10 trials reported dropouts or withdrawals.
- Only two trials reported adverse reactions.
- None of these shortcomings were critically discussed in the paper.
- Despite their affiliation, the authors state that they have no conflicts of interest.
- All trials were conducted in China, and, on this blog, we have discussed repeatedly that acupuncture trials from China never report negative results.
And why do I find the journal ‘dodgy’?
Because any journal that publishes such a paper is likely to be sub-standard. In the case of ‘Acupuncture in Medicine’, the official journal of the British Medical Acupuncture Society, I see such appalling articles published far too frequently to believe that the present paper is just a regrettable, one-off mistake. What makes this issue particularly embarrassing is, of course, the fact that the journal belongs to the BMJ group.
… but we never really thought that science publishing was about anything other than money, did we?
What an odd title, you might think.
Systematic reviews are the most reliable evidence we presently have!
Yes, this is my often-voiced and honestly held opinion; but, like any other type of research, systematic reviews can be badly abused, and, when this happens, they can seriously mislead us.
A new paper by someone who knows more about these issues than most of us, John Ioannidis from Stanford University, should make us think. It aimed to explore the growth of published systematic reviews and meta-analyses and to estimate how often they are redundant, misleading, or serving conflicted interests. Ioannidis demonstrated that the publication of systematic reviews and meta-analyses has increased rapidly. Between January 1, 1986, and December 4, 2015, PubMed tagged 266,782 items as “systematic reviews” and 58,611 as “meta-analyses”. Annual publications between 1991 and 2014 increased by 2,728% for systematic reviews and 2,635% for meta-analyses, versus only 153% for all PubMed-indexed items. Ioannidis believes that probably more systematic reviews of trials than new randomized trials are published annually. Most topics addressed by meta-analyses of randomized trials have overlapping, redundant meta-analyses; the number of same-topic meta-analyses sometimes exceeds 20.
Some fields produce massive numbers of meta-analyses; for example, 185 meta-analyses of antidepressants for depression were published between 2007 and 2014. These meta-analyses are often produced either by industry employees or by authors with industry ties, and their results are aligned with sponsor interests. China has rapidly become the most prolific producer of English-language, PubMed-indexed meta-analyses. The most massive presence of Chinese meta-analyses is in genetic associations (63% of global production in 2014), where almost all results are misleading, since they combine fragmented information from the mostly abandoned era of candidate-gene studies. Furthermore, many contracting companies working on evidence synthesis receive industry contracts to produce meta-analyses, many of which probably remain unpublished. Many other meta-analyses have serious flaws. Of the rest, most have weak or insufficient evidence to inform decision-making. Few systematic reviews and meta-analyses are both non-misleading and useful.
The author concluded that the production of systematic reviews and meta‐analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta‐analyses are unnecessary, misleading, and/or conflicted.
Ioannidis makes the following ‘Policy Points’:
- Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta‐analyses. Instead of promoting evidence‐based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools.
- Suboptimal systematic reviews and meta‐analyses can be harmful given the major prestige and influence these types of studies have acquired.
- The publication of systematic reviews and meta‐analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.
Obviously, Ioannidis did not have alternative medicine in mind when he researched and published this article. But he easily could have! Virtually everything he stated in his paper does apply to it. In some areas of alternative medicine, things are even worse than Ioannidis describes.
Take TCM, for instance. I have previously looked at some of the many systematic reviews of TCM that currently flood Medline, based on Chinese studies. This is what I concluded at the time:
Why does that sort of thing frustrate me so much? Because it is utterly meaningless and potentially harmful:
- I don’t know what treatments the authors are talking about.
- Even if I managed to dig deeper, I cannot get the information because practically all the primary studies are published in obscure journals in Chinese language.
- Even if I did read Chinese, I do not feel motivated to assess the primary studies because we know they are all of very poor quality – too flimsy to bother.
- Even if they were formally of good quality, I would have my doubts about their reliability; remember: 100% of these trials report positive findings!
- Most crucially, I am frustrated because conclusions of this nature are deeply misleading and potentially harmful. They give the impression that there might be ‘something in it’, and that it (whatever ‘it’ might be) could be well worth trying. This may give false hope to patients and can send the rest of us on a wild goose chase.
So, to ease the task of future authors of such papers, I decided to give them a text for a proper EVIDENCE-BASED conclusion which they can adapt to fit every review. This will save them time and, more importantly perhaps, it will save everyone who might be tempted to read such futile articles the effort of studying them in detail. Here is my suggestion for a conclusion soundly based on the evidence, no matter what TCM subject the review is about:
OUR SYSTEMATIC REVIEW HAS SHOWN THAT THERAPY ‘X’ AS A TREATMENT OF CONDITION ‘Y’ IS CURRENTLY NOT SUPPORTED BY SOUND EVIDENCE.
On another occasion, I stated that I am getting very tired of conclusions stating ‘…XY MAY BE EFFECTIVE/HELPFUL/USEFUL/WORTH A TRY…’ It is obvious that the therapy in question MAY be effective, otherwise one would surely not conduct a systematic review. If a review fails to produce good evidence, it is the authors’ ethical, moral and scientific obligation to state this clearly. If they don’t, they simply misuse science for promotion and mislead the public. Strictly speaking, this amounts to scientific misconduct.
In yet another post on the subject of systematic reviews, I wrote that, if you have rubbish trials, you can produce a rubbish review and publish it in a rubbish journal (perhaps I should have added ‘rubbish researchers’).
And finally this post about a systematic review of acupuncture: it is almost needless to mention that the findings (presented in a host of hardly understandable tables) suggest that acupuncture is of proven or possible effectiveness/efficacy for a very wide array of conditions. It also goes without saying that there is no critical discussion, for instance, of the fact that most of the included evidence originated from China, and that it has been shown over and over again that Chinese acupuncture research never seems to produce negative results.
The main point surely is that the problem of shoddy systematic reviews applies to a depressingly large degree to all areas of alternative medicine, and this is misleading us all.
So, what can be done about it?
My preferred (but sadly unrealistic) solution would be this:
STOP ENTHUSIASTIC AMATEURS FROM PRETENDING TO BE RESEARCHERS!
Research is not fundamentally different from other professional activities; to do it well, one needs adequate training; and doing it badly can cause untold damage.
The pro arguments are essentially the well-rehearsed points acupuncture fans like to advance:
- Some guidelines do recommend acupuncture.
- Sham acupuncture is not a valid comparator.
- The largest meta-analysis shows a small effect.
- Acupuncture is not implausible.
- It improves quality of life.
Cummings concludes as follows: In summary, the pragmatic view sees acupuncture as a relatively safe and moderately effective intervention for a wide range of common chronic pain conditions. It has a plausible set of neurophysiological mechanisms supported by basic science. For those patients who choose it and who respond well, it considerably improves health related quality of life, and it has much lower long term risk for them than non-steroidal anti-inflammatory drugs. It may be especially useful for chronic musculoskeletal pain and osteoarthritis in elderly patients, who are at particularly high risk from adverse drug reactions.
Our arguments are also not new; essentially, we stress that:
- The effects of acupuncture are too small to be clinically relevant.
- They are probably not even caused by acupuncture, but the result of residual bias.
- Pragmatic trials are of little value in defining efficacy.
- Acupuncture is not free of risks.
- Regular acupuncture treatments are expensive.
- There is no generally accepted, plausible mechanism.
We concluded that after decades of research and hundreds of acupuncture pain trials, including thousands of patients, we still have no clear mechanism of action, insufficient evidence for clinically worthwhile benefit, and possible harms. Therefore, doctors should not recommend acupuncture for pain.
Neither Asbjorn nor I have any conflicts of interests to declare.
Dr Cummings, by contrast, states that he is the salaried medical director of the British Medical Acupuncture Society, which is a membership organisation and charity established to stimulate and promote the use and scientific understanding of acupuncture as part of the practice of medicine for the public benefit. He is an associate editor for Acupuncture in Medicine, published by BMJ. He has a modest private income from lecturing outside the UK, royalties from textbooks, and a partnership teaching veterinary surgeons in Western veterinary acupuncture. He has participated in a NICE guideline development group as an expert adviser discussing acupuncture. He has used Western medical acupuncture in clinical practice following a chance observation as a medical officer in the Royal Air Force in 1989.
My question to you is this: WHICH OF THE TWO POSITIONS IS THE MORE REASONABLE ONE?
Please, do let us know by posting a comment here, or directly at the BMJ article (better), or both (best).
The question whether spinal manipulative therapy (SMT) has any specific therapeutic effects is still open. This fact must irritate ardent chiropractors, and they therefore try everything to dispel our doubts. One way would be to demonstrate a dose-effect relationship between SMT and the clinical outcome. But, for several reasons, this is not an easy task.
This RCT aimed to identify the dose-response relationship between the number of visits for SMT and chronic cervicogenic headache (CGH) outcomes, and to evaluate the efficacy of SMT by comparison with a light-massage control.
The study included 256 adults with chronic CGH. The primary outcome was days with CGH in the prior 4 weeks evaluated at the 12- and 24-week primary endpoints. Secondary outcomes included CGH days at remaining endpoints, pain intensity, disability, perceived improvement, medication use, and patient satisfaction. Participants were randomized to 4 different dose levels of chiropractic SMT: 0, 6, 12, or 18 sessions. They were treated 3 times per week for 6 weeks and received a focused light-massage control at sessions when SMT was not assigned. Linear dose effects and comparisons to the no-manipulation control group were evaluated at 6, 12, 24, 39, and 52 weeks.
A linear dose-response was observed at all follow-ups: a reduction of approximately 1 CGH day/4 weeks per additional 6 SMT visits (p<.05); a maximal effective dose could not be determined. CGH days/4 weeks were reduced from about 16 to 8 for the highest and most effective dose of 18 SMT visits. Mean differences in CGH days/4 weeks between 18 SMT visits and control were -3.3 (p=.004) and -2.9 (p=.017) at the primary endpoints, and similar in magnitude at the remaining endpoints (p<.05). Differences between other SMT doses and control were smaller in magnitude (p > .05). CGH intensity showed no important improvement, nor did it differ by dose. The other secondary outcomes were generally supportive of the primary outcome.
The authors concluded that there was a linear dose-response relationship between SMT visits and days with CGH. For the highest and most effective dose of 18 SMT visits, CGH days were reduced by half, and about 3 more days per month than for the light-massage control.
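The reported dose effect is easy to sanity-check with a little arithmetic. The sketch below is purely illustrative: the slope of roughly one fewer CGH day per 4 weeks for each additional 6 SMT visits is taken from the abstract, and the function name is my own invention.

```python
# Illustrative check of the linear dose-response reported in the trial.
# Assumption: ~1 fewer CGH day/4 weeks for each additional 6 SMT visits,
# as stated in the abstract; all names here are hypothetical.

SLOPE_PER_6_VISITS = 1.0  # CGH days/4 weeks gained per 6 extra SMT visits

def expected_dose_effect(visits: int) -> float:
    """Extra reduction in CGH days/4 weeks attributable to the SMT dose,
    relative to the 0-visit (light-massage only) group."""
    return (visits / 6) * SLOPE_PER_6_VISITS

# 18 visits -> ~3 days, in line with the reported differences of
# -3.3 and -2.9 days versus control at the primary endpoints.
print(expected_dose_effect(18))  # → 3.0
```

Note what this little calculation highlights: the dose-dependent component is only about 3 of the roughly 8 days of improvement seen with 18 visits; the remainder was shared with the control group and is therefore non-specific.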
This trial would make sense, if the effectiveness of SMT for CGH had been a well-documented fact, and if the study had rigorously controlled for placebo-effects.
But guess what?
Neither of these conditions were met.
A recent review concluded that there are few published randomized controlled trials analyzing the effectiveness of spinal manipulation and/or mobilization for TTH, CeH, and M [tension-type headache, cervicogenic headache, and migraine] in the last decade. In addition, the methodological quality of these papers is typically low. Clearly, there is a need for high-quality randomized controlled trials assessing the effectiveness of these interventions in these headache disorders. And this is by no means the only article making such statements; similar reviews arrive at similar conclusions. In turn, this means that the effects observed after SMT are not necessarily specific effects due to SMT but could easily be due to placebo or other non-specific effects. In order to avoid confusion, one would need a credible placebo – one that closely mimics SMT – and make sure that patients were ‘blinded’. But ‘light massage’ clearly does not mimic SMT, and patients obviously were aware of which interventions they received.
So, an alternative – and I think at least as plausible – conclusion of the data provided by this new RCT is this:
Chiropractic SMT is associated with a powerful placebo response which, of course, obeys a dose-effect relationship. Thus these findings are in keeping with the notion that SMT is a placebo.
And why would the researchers – who stress that they have no conflicts of interest – mislead us by not making this alternative interpretation of their findings abundantly clear?
I fear, the reason might be simple: they also seem to mislead us about their conflicts of interest: they are mostly chiropractors with a long track record of publishing promotional papers masquerading as research. What, I ask myself, could be a stronger conflict of interest?
(Pity that a high-impact journal like SPINE did not spot these [not so little] flaws)