Guest post by Pete Attkins
Commentator “jm” asked a profound and pertinent question: “What DOES it take for people to get real in this world, practice some common sense, and pay attention to what’s going on with themselves?” This question was asked in the context of asserting that personal experience always trumps the results of large-scale scientific experiments, and that alt-med experts are better able to provide individualized healthcare than 21st-century orthodox medicine.
What does common sense and paying attention lead us to conclude about the following? We test a six-sided die for bias by rolling it 100 times. The number 1 occurs only once and the number 6 occurs many times, never on its own, but in several groups of consecutive sixes.
I think it is reasonable to say that common sense would, and should, lead everyone to conclude that the die is biased and not fit for its purpose as a source of random numbers.
In other words, we have a gut feeling that the die is untrustworthy. Gut instincts and common sense are geared towards maximizing our chances of survival in our complex and unpredictable world: these are innate and learnt behaviours that have enabled humans to survive despite the harshness of our ever-changing habitat.
Only very recently in the long history of our species have we developed specialized tools that enable us to better understand our harsh and complex world: science and critical thinking. These tools are difficult to master because they still haven’t been incorporated into our primary and secondary formal education systems.
The vast majority of people do not have these skills; therefore, when a scientific finding flies in the face of our gut instincts and/or common sense, it creates an overwhelming desire to reject the finding and classify the scientist(s) as being irrational and lacking basic common sense. It does not create an intense desire to accept the finding, then painstakingly learn all of the science that went into producing it.
With that in mind, let’s rethink our common sense conclusion that the six-sided die is biased and untrustworthy. What we really mean is that the results have given all of us good reason to be highly suspicious of this die. We aren’t 100% certain that this die is biased, but our gut feeling and common sense are more than adequate to form a reasonable mistrust of it and to avoid using it for anything important to us. Reasons to keep this die rather than discard it might be to provide a source of mild entertainment or to use its bias for the purposes of cheating.
Some readers might be surprised to discover at this point that the results I presented from this apparently heavily-biased die are not only perfectly valid results obtainable from a truly random, unbiased die, they are fully to be expected. Even if the die had produced 100 sixes in that test, it would not confirm that the die is biased in any way whatsoever. Rolling a truly unbiased die once will produce one of six possible outcomes. Rolling the same die 100 times will produce one unique sequence out of the 6^100 (about 6.5 x 10^77) possible sequences, all of which are equally probable!
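The arithmetic here is easy to verify; a quick sketch in Python (not part of the original argument, just a sanity check of the numbers):

```python
import math

# Any *specific* sequence of 100 rolls of a fair six-sided die is
# exactly as (im)probable as any other -- including 100 straight sixes.
sequences = 6 ** 100                  # number of distinct 100-roll sequences
print(math.log10(sequences))          # ~77.8, i.e. about 6.5 x 10^77
p_one_sequence = 6.0 ** -100          # probability of any one particular sequence
```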
Gut feeling plus common sense rightfully informs us that the probability of a random die producing one hundred consecutive sixes is so incredibly remote that nobody will ever see it occur in reality. This conclusion is also mathematically sound: if there were 6.5 x 10^77 people on Earth, each performing the same test on truly random dice, there is no guarantee that anyone would observe a sequence of one hundred consecutive sixes.
When we observe a sequence such as 2 5 1 4 6 3 1 4 3 6 5 2… common sense informs us that the die is very likely random. If we calculate the arithmetic mean to be very close to 3.5, then common sense will lead us to conclude that the die is random and unbiased enough to use as a reliable source of random numbers.
Unfortunately, this is a perfect example of our gut feelings and common sense failing us abysmally. They totally failed to warn us that the 2 5 1 4 6 3 1 4 3 6 5 2… sequence we observed had exactly the same (im)probability of occurring as a sequence of one hundred 6s or any other sequence that one can think of that doesn’t look random to a human observer.
The 100-roll die test is nowhere near powerful enough to properly test a six-sided die, but this test is more than adequately powered to reveal some of our cognitive biases and some of the deficits in our personal mastery of science and critical thinking.
To properly test the die we need to provide solid evidence that it is both truly random and that its measured bias tends towards zero as the number of rolls tends towards infinity. We could use the services of one testing lab to conduct billions of test rolls, but this would not exclude errors caused by such things as miscalibrated equipment and experimenter bias. It is better to subdivide the testing across multiple labs then carefully analyse and appropriately aggregate the results: this dramatically reduces errors caused by equipment and humans.
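As an illustration of the aggregation idea, here is a minimal sketch (the lab sizes, seeds and the chi-squared comparison are my own illustrative choices, not a real testing protocol):

```python
import random
from collections import Counter

def lab_counts(n_rolls, seed):
    """Simulate one lab's test of a fair die."""
    rng = random.Random(seed)
    return Counter(rng.randint(1, 6) for _ in range(n_rolls))

# Aggregate three independent 'labs' and compare the face counts
# with the expected uniform distribution (chi-squared statistic).
total = sum((lab_counts(100_000, seed) for seed in (1, 2, 3)), Counter())
n = sum(total.values())
expected = n / 6
chi2 = sum((total[face] - expected) ** 2 / expected for face in range(1, 7))
# For a fair die, chi2 stays near its 5 degrees of freedom;
# a heavily biased die would drive it far higher.
print(round(chi2, 1))
```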
In medicine, this testing process is performed via systematic reviews of multiple, independent, double-blind, placebo-controlled trials — every trial that is insufficiently powered to add meaningfully to the result is rightfully excluded from the aggregation.
Alt-med relies on a diametrically opposed testing process. It performs a plethora of underpowered tests; presents those that just happen to show a positive result (just as a random die could have produced); and sweeps the overwhelming number of negative tests under the carpet. It publishes only its ‘successes’, never its failures, and thereby feels justified in making the very bold claim: our plethora of collected evidence shows clearly that it mostly ‘works’ and, when it doesn’t, it causes no harm.
One of the most acidic tests for a hypothesis and its supporting data (which is a mandatory test in a few branches of critical engineering) is to replace the collected data with random data that has been carefully crafted to emulate the probability mass functions of the collected datasets. This test has to be run multiple times, for reasons that I’ve attempted to explain in my random die example. If the proposer of the hypothesis is unable to explain the multiple failures resulting from this acid test, then it is highly likely that the proposer either does not fully understand their hypothesis or that their hypothesis is indistinguishable from the null hypothesis.
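A minimal sketch of such an acid test in Python (the resampling scheme and the mean-difference statistic are my own illustrative choices; a real test would emulate the measured probability mass functions more carefully):

```python
import random
import statistics

def acid_test(treatment, control, trials=1000, seed=0):
    """Re-run the analysis after replacing the collected data with
    random draws from the pooled empirical distribution. If random
    data matches or beats the observed difference in most runs, the
    'effect' is indistinguishable from noise."""
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = treatment + control
    hits = 0
    for _ in range(trials):
        fake = [rng.choice(pooled) for _ in pooled]   # emulate the empirical PMF
        fake_t, fake_c = fake[:len(treatment)], fake[len(treatment):]
        if abs(statistics.mean(fake_t) - statistics.mean(fake_c)) >= abs(observed):
            hits += 1
    return hits / trials   # fraction of random runs that 'succeed'
```

A genuine effect survives this test (random data rarely matches it); a null effect is matched by random data almost every time.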
Getting good and experienced lecturers for courses is not easy. Having someone who has done more research than most working in the field, and who is internationally known, might therefore be a thrill for students and an image-boosting experience for colleges. In the true Christmas spirit, I am today offering my assistance to the many struggling educational institutions of alternative medicine.
A few days ago, I tweeted about my willingness to give free lectures to homeopathic colleges (so far without response). Having thought about it a bit, I would now like to extend this offer. I would be happy to give a free lecture to the students of any educational institution of alternative medicine. I suggest to
- do a general lecture on the clinical evidence of the 4 major types of alternative medicine (acupuncture, chiropractic, herbal medicine, homeopathy) or
- give a more specific lecture with in-depth analyses of any given alternative therapy.
I imagine that most of the institutions in question might be a bit anxious about such an idea, but there is no need to worry: I guarantee that everything I say will be strictly and transparently evidence-based. I will disclose my sources and am willing to make my presentation available to students so that they can read up the finer details about the evidence later at home. In other words, I will do my very best to only transmit the truth about the subject at hand.
Nobody wants to hire a lecturer without having at least a rough outline of what he will be talking about – fair enough! Here I present a short summary of the lecture as I envisage it:
- I will start by providing a background about myself, my qualifications and my experience in researching and lecturing on the matter at hand.
- This will be followed by a background on the therapies in question, their history, current use etc.
- Next I would elaborate on the main assumptions of the therapies in question and on their biological plausibility.
- This will be followed by a review of the claims made for the therapies in question.
- The main section of my lecture would be to review the clinical evidence regarding the efficacy of the therapies in question. In doing this, I will not cherry-pick my evidence but rely, whenever possible, on authoritative systematic reviews, preferably those from the Cochrane Collaboration.
- This, of course, needs to be supplemented by a review of safety issues.
- If wanted, I could also say a few words about the importance of the placebo effect.
- I would also suggest discussing some of the most pertinent ethical issues.
- Finally, I would hope to arrive at a few clear conclusions.
You see, all is entirely up to scratch!
Perhaps you have some doubts about my abilities to lecture? I can assure you, I have done this sort of thing all my life, I have been a professor at three different universities, and I will probably manage a lecture to your students.
A final issue might be the costs involved. As I said, I would charge neither for the preparation (this can take several days depending on the exact topic), nor for the lecture itself. All I would hope for is that you refund my travel (and, if necessary, overnight) expenses. And please note: this offer is time-limited: approaches will be accepted until 1 January 2015 for lectures any time during 2015.
I can assure you, this is a generous offer that you ought to consider seriously – unless, of course, you do not want your students to learn the truth!
(In which case, one would need to wonder why not)
Guest post by Jan Oude-Aost
ADHD is a common disorder among children. There are evidence-based pharmacological treatments, the best known being methylphenidate (MPH). MPH has kind of a bad reputation, but is effective and reasonably safe. The market is also full of alternative treatments, pharmacological and otherwise, some of them under investigation, some unproven and many disproven. So I was not surprised to find a study about Ginkgo biloba as a treatment for ADHD. I was surprised, however, to find this study in the German Journal of Child and Adolescent Psychiatry and Psychotherapy, officially published by the “German Society of Child and Adolescent Psychiatry and Psychotherapy“ (Deutsche Gesellschaft für Kinder- und Jugendpsychiatrie und Psychotherapie). The journal’s guidelines state that studies should provide new scientific results.
The study is called “Ginkgo biloba Extract EGb 761® in Children with ADHD“. EGb 761® is the key ingredient in “Tebonin®“, a herbal drug made by “Dr. Wilma Schwabe GmbH“. The abstract states:
“One possible treatment, at least for cognitive problems, might be the administration of Ginkgo biloba, though evidence is rare. This study tests the clinical efficacy of a Ginkgo biloba special extract (EGb 761®) (…) in children with ADHD (…).“
“Eine erfolgversprechende, bislang kaum untersuchte Möglichkeit zur Behandlung kognitiver Aspekte ist die Gabe von Ginkgo biloba. Ziel der vorliegenden Studie war die Prüfung klinischer Wirksamkeit (…) von Ginkgo biloba-Extrakt Egb 761® bei Kindern mit ADHS.“ [“A promising, so far barely studied option for treating cognitive aspects is the administration of Ginkgo biloba. The aim of the present study was to test the clinical efficacy (…) of the Ginkgo biloba extract EGb 761® in children with ADHD.“] (Taken from the English and German abstracts.)
The study sample (20!) was recruited among children who “did not tolerate or were unwilling“ to take MPH. The unwilling part struck me as problematic. There is likely a strong selection bias towards parents who are unwilling to give their children MPH. I guess it is not the children who are unwilling to take MPH, but the parents who are unwilling to administer it. At least some of these parents might be biased against MPH and might already favor CAM modalities.
The authors state three main problems with “herbal therapy“ that require more empirical evidence: First of all the question of adverse reactions, which they claim occur in about 1% of cases with “some CAMs“ (mind you, not “herbal therapy“). Secondly, the question of drug interactions and thirdly, the lack of information physicians have about the CAMs their patients use.
A large part of the study is based on results of an EEG-protocol, which I choose to ignore, because the clinical results are too weak to give the EEG findings any clinical relevance.
Before looking at the study itself, let’s look at what is known about Ginkgo biloba as a drug. Ginkgo is best known for its use in patients with dementia, cognitive impairment and tinnitus. A Cochrane review from 2009 concluded:
“There is no convincing evidence that Ginkgo biloba is efficacious for dementia and cognitive impairment“ .
The authors of the current study cite Sarris et al. (2011), a systematic review of complementary treatments of ADHD. Sarris et al. mention Salehi et al. (2010), who tested Ginkgo against MPH. MPH turned out to be much more effective than Ginkgo, but Sarris et al. argue that the duration of treatment (6 weeks) might have been too short to see the full effects of Ginkgo.
Given the above information, it is unclear why Ginkgo is judged a “possible“ treatment (properly translated from the German, even a “promising“ one), and why the authors state that Ginkgo has been “barely studied“.
In an unblinded, uncontrolled study with a sample likely to be biased toward the tested intervention, anything other than a positive result would be odd. In the treatment of autism there are several examples of implausible treatments that worked as long as parents knew that their children were getting the treatment, but didn’t after proper blinding (e.g. secretin).
This study’s aim was to test clinical efficacy, but the conclusion begins with how well tolerated Ginkgo was. The efficacy is mentioned subsequently: “Following administration, interrelated improvements on behavioral ratings of ADHD symptoms (…) were detected (…).“ But the way they were “detected“ is interesting. The authors used an established questionnaire (FBB-HKS) to let parents rate their children. Only the parents. The children and their teachers were not given the FBB-HKS questionnaires, in spite of this being standard clinical practice (and in spite of the fact that the children were given questionnaires to determine changes in quality of life, which found none).
None of the three problems that the authors describe as important (adverse reactions, drug interactions, lack of information) can be answered by this study. I am no expert in statistics, but it seems unlikely to me that adverse effects can be meaningfully determined in just 20 patients, especially when they occur at a rate of about 1%. The authors claim they found an incidence rate of 0.004% in “700 observation days“. Well, if they say so.
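For what it’s worth, the sample-size arithmetic is easy to check: with a 1% adverse-reaction rate, a 20-patient study will most likely observe no event at all.

```python
# Probability that a 20-patient study observes at least one adverse
# event when the true per-patient rate is 1%.
p_event = 0.01
n_patients = 20
p_at_least_one = 1 - (1 - p_event) ** n_patients
print(round(p_at_least_one, 3))   # ~0.182, i.e. an ~82% chance of seeing nothing
```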
The authors conclude:
“Taken together, the current study provides some preliminary evidence that Ginkgo biloba Egb 761® seems to be well tolerated in the short term and may be a clinically useful treatment for children with ADHD. Double-blind randomized trials are required to clarify the value of the presented data.“
Given the available information mentioned earlier, one could have started with that conclusion and conducted a double-blind RCT in the first place!
“The trends of this preliminary open study may suggest that Ginkgo biloba Egb 761® might be considered as a complementary or alternative medicine for treating children with ADHD.“
So, why do I care, if preliminary evidence “may suggest“ that something “might be considered“ as a treatment? Because I think that this study does not answer any important questions or give us any new or useful knowledge. Following the journal’s guidelines, it should therefore not have been published. I also think it is an example of bad science: bad not just because of the lack of critical thinking, but also because it adds to the misinformation about possible ADHD treatments spreading through the internet. The study was published in September. In November, I found a website citing the study and calling it “clinical proof“, which it is not. Child psychiatrists will now have to explain this to many parents, instead of talking about their children’s health.
I somehow got the impression that this study was more about marketing than about science. I wonder if Schwabe will help finance the necessary double-blind randomized trial…
The volume of medical research, as listed on Medline, is huge and increases steadily each year. This phenomenon can easily be observed with simple Medline searches. If we use search terms related to conventional medicine, we find near linear increases in the number of articles (here I do not make a distinction between types of articles) published in each area over time, invariably with a peak in 2013, the last year for which Medline listing is currently complete. Three examples will suffice:
PHARMACOTHERAPY 117 414 articles in 2013
PHARMACOLOGY 210 228 articles in 2013
ADVERSE EFFECTS 86 067 articles in 2013
Some of the above subjects are obviously heavily industry-dependent and thus perhaps not typical of the volume of research in health care generally. Let’s therefore look up three fields where there is no such powerful industry to support research:
PSYCHOTHERAPY 7 208 articles in 2013
PHYSIOTHERAPY 7 713 articles in 2013
SURGERY 154 417 articles in 2013
Now, if we conduct similar searches for topics related to alternative medicine, the picture changes in at least three remarkable ways: 1) there is no linear increase of the volume per year; instead the curves look flat and shapeless (the only exception is ‘herbal medicine’ where the increase even looks exponential). 2) The absolute volume does not necessarily peak in 2013 (exceptions are ‘acupuncture’ and ‘herbal medicine’). 3) The number of articles in the year with the most articles (as listed below) is small or even tiny:
ACUPUNCTURE 1 491 articles in 2013
CHIROPRACTIC 283 articles in 2011
HERBAL MEDICINE 2 503 articles in 2013
HOMEOPATHY 233 articles in 2005
NATUROPATHY 69 articles in 2010
You may think: so what? But I find these figures intriguing. They demonstrate that the research output in alternative medicine is minimal compared to that in conventional medicine. Moreover, they imply that this output is not only not increasing steadily, as it is in conventional medicine, but in the case of chiropractic, homeopathy and naturopathy, it has recently been decreasing.
To put this into context, we need to know that:
- there is a plethora of journals dedicated to alternative medicine which are keen to publish all sorts of articles,
- the peer-review process of most of these journals seems farcically poor,
- as a result, the quality of the research into alternative medicine is often dismal, as regularly disclosed on this blog,
- enthusiasts of alternative medicine often see rigorous research into their subject as a dangerous threat: it might disprove their prior beliefs.
In their defence, proponents of alternative medicine would probably claim that the low volume of research is due to a severe and unfair lack of funding. However, I fail to see how this can be the sole or even the main explanation: areas of conventional medicine that do not have industry support seem to manage a much higher output than alternative medicine (and I should stress that I have chosen 5 sections within alternative medicine that are associated with the highest number of articles per year). Research in these areas is usually sponsored by charitable and government sources, and it needs to be stressed that these are open to any researcher who submits good science.
What follows, I think, is simple: in general, alternative medicine advocates have little interest in research and even less expertise to conduct it.
Twenty years ago, when I started my Exeter job as a full-time researcher of complementary/alternative medicine (CAM), I defined the aim of my unit as applying science to CAM. At the time, this intention upset quite a few CAM-enthusiasts. One of the most prevalent arguments of CAM-proponents against my plan was that the study of CAM with rigorous science was quite simply an impossibility. They claimed that CAM included mind and body practices, holistic therapies, and other complex interventions which cannot be put into the ‘straitjacket’ of conventional research, e.g. a controlled clinical trial. I spent the next few years showing that this notion was wrong. Gradually and hesitantly, CAM researchers seemed to agree with my view – not all, of course, but first a few and then slowly, often reluctantly, the majority of them.
What followed was a period during which several research groups started conducting rigorous tests of the hypotheses underlying CAM. All too often, the results turned out to be disappointing, to say the least: not only did most of the therapies in question fail to show efficacy, they were also by no means free of risks. Worst of all, perhaps, much of CAM was disclosed as being biologically implausible. The realization that rigorous scientific scrutiny often generated findings which were not what proponents had hoped for led to a sharp decline in the willingness of CAM-proponents to conduct rigorous tests of their hypotheses. Consequently, many asked whether science was such a good idea after all.
But that, in turn, created a new problem: once they had (at least nominally) committed themselves to science, how could they turn against it? The answer to this dilemma was easier than anticipated: the solution was to appear dedicated to science but, at the same time, to argue that, because CAM is subtle, holistic, complex etc., a different scientific approach was required. At this stage, I felt we had gone ‘full circle’ and had essentially arrived back where we were 20 years ago – except that CAM-proponents no longer rejected the scientific method outright but merely demanded different tools.
A recent article may serve as an example of this new and revised stance of CAM-proponents on science. Here proponents of alternative medicine argue that a challenge for research methodology in CAM/IHC* is the growing recognition that CAM/IHC practice often involves complex combinations of novel interventions that include mind and body practices, holistic therapies, and others. Critics argue that the reductionist placebo controlled randomized control trial (RCT) model that works effectively for determining efficacy for most pharmaceutical or placebo trial RCTs may not be the most appropriate for determining effectiveness in clinical practice for either CAM/IHC or many of the interventions used in primary care, including health promotion practices. Therefore the reductionist methodology inherent in efficacy studies, and in particular in RCTs, may not be appropriate to study the outcomes for much of CAM/IHC, such as Traditional Korean Medicine (TKM) or other complex non-CAM/IHC interventions—especially those addressing comorbidities. In fact it can be argued that reductionist methodology may disrupt the very phenomenon, the whole system, that the research is attempting to capture and evaluate (i.e., the whole system in its naturalistic environment). Key issues that surround selection of the most appropriate methodology to evaluate complex interventions are well described in the King’s Fund report on IHC and also in the UK Medical Research Council (MRC) guidelines for evaluating complex interventions—guidelines which have been largely applied to the complexity of conventional primary care and care for patients with substantial comorbidity. These reports offer several potential solutions to the challenges inherent in studying CAM/IHC. [* IHC = integrated health care]
Let’s be clear and disclose what all of this actually means. The sequence of events, as I see it, can be summarized as follows:
- We are foremost ALTERNATIVE! Our treatments are far too unique to be subjected to reductionist research; we therefore reject science and insist on an ALTERNATIVE.
- We (well, some of us) have reconsidered our opposition and are prepared to test our hypotheses scientifically (NOT LEAST BECAUSE WE NEED THE RECOGNITION THAT THIS MIGHT BRING).
- We are dismayed to see that the results are mostly negative; science, it turns out, works against our interests.
- We need to reconsider our position.
- We find it inconceivable that our treatments do not work; all the negative scientific results must therefore be wrong.
- We always said that our treatments are unique; now we realize that they are far too holistic and complex to be submitted to reductionist scientific methods.
- We still believe in science (or at least want people to believe that we do) – but we need a different type of science.
- We insist that RCTs (and all other scientific methods that fail to demonstrate the value of CAM) are not adequate tools for testing complex interventions such as CAM.
- We have determined that reductionist research methods disturb our subtle treatments.
- We need pragmatic trials and similarly ‘soft’ methods that capture ‘real life’ situations, do justice to CAM and rarely produce a negative result.
What all of this really means is that, whenever the findings of research disappoint CAM-proponents, the results are by definition false-negative. The obvious solution to this problem is to employ different (weaker) research methods, preferably those that cannot possibly generate a negative finding. Or, to put it bluntly: in CAM, science is acceptable only as long as it produces the desired results.
Blinding patients in clinical trials is a key methodological procedure for minimizing bias and thus making sure that the results are reliable. In alternative medicine, blinding is not always straightforward, and many studies are therefore not patient-blinded. We all know that this can introduce bias into a trial, but how large is its effect on study outcomes?
This was the research question addressed by a recent systematic review of randomized clinical trials with one sub-study (i.e. experimental vs control) involving blinded patients and another, otherwise identical, sub-study involving non-blinded patients. Within each trial, the researchers compared the difference in effect sizes (i.e. standardized mean differences) between the two sub-studies. A difference <0 indicates that non-blinded patients generated a more optimistic effect estimate. The researchers then pooled the differences with random-effects inverse variance meta-analysis, and explored reasons for heterogeneity.
The main analysis included 12 trials with a total of 3869 patients. Ten of these RCTs were studies of acupuncture. The average difference in effect size for patient-reported outcomes was -0.56 (95% confidence interval -0.71 to -0.41; I² = 60%, P = 0.004), indicating that non-blinded patients exaggerated the effect size by an average of 0.56 standard deviations, but with considerable variation. Two of the 12 trials also used observer-reported outcomes, showing no indication of exaggerated effects due to lack of patient blinding.
There was an even larger effect size difference in the 10 acupuncture trials [-0.63 (-0.77 to -0.49)], than in the two non-acupuncture trials [-0.17 (-0.41 to 0.07)]. Lack of patient blinding was also associated with increased attrition rates and the use of co-interventions: ratio of control group attrition risk 1.79 (1.18 to 2.70), and ratio of control group co-intervention risk 1.55 (0.99 to 2.43).
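For readers curious about the pooling step, random-effects inverse-variance meta-analysis (DerSimonian-Laird) can be sketched in a few lines; this is illustrative code only, and the test values below are made up, not the trial data from the review:

```python
import math

def dersimonian_laird(effects, ses):
    """Pool per-trial effect-size differences with random-effects
    inverse-variance weighting (DerSimonian-Laird estimate of the
    between-trial variance tau^2)."""
    w = [1 / se ** 2 for se in ses]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-trial variance
    w_re = [1 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # heterogeneity (%)
    return pooled, ci, i2
```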
The authors conclude that this study provides empirical evidence of pronounced bias due to lack of patient blinding in complementary/alternative randomized clinical trials with patient-reported outcomes.
This is a timely, rigorous and important analysis. In alternative medicine, we currently see a proliferation of trials that are not patient-blinded. We always suspected that they are at a high risk of generating false-positive results – now we know that this is, in fact, the case.
What should we do with this insight? In my view, the following steps would be wise:
- Take the findings from the existing trials that are devoid of patient-blinding with more than just a pinch of salt.
- Discourage the funding of future studies that fail to include patient-blinding.
- If patient-blinding is truly and demonstrably impossible – which is not often the case – make sure that the trialists at least include blinding of the assessors of the primary outcome measures.
There must be well over 10 000 clinical trials of acupuncture; Medline lists ~5 000, and many more are hidden in the non-Medline listed literature. That should be good news! Sadly, it isn’t.
It should mean that we now have a pretty good idea for what conditions acupuncture is effective and for which illnesses it does not work. But we don’t! Sceptics say it works for nothing, while acupuncturists claim it is a panacea. The main reason for this continued controversy is that the quality of the vast majority of these 10 000 studies is not just poor, it is lousy.
“Where is the evidence for this outrageous statement???” – I hear the acupuncture-enthusiasts shout. Well, how about my own experience as editor-in-chief of FACT (Focus on Alternative and Complementary Therapies)? No? Far too anecdotal?
How about looking at Cochrane reviews, then? They are considered to be the most independent and reliable evidence in existence. There are many such reviews (most, if not all, [co-]authored by acupuncturists), and they all agree that the scientific rigor of the primary studies is fairly awful. Here are the crucial bits of just the last three; feel free to look for more:
Or how about providing an example? Good idea! Here is a new trial which could stand for numerous others:
This study was performed to compare the efficacy of acupuncture versus corticosteroid injection for the treatment of de Quervain’s tenosynovitis (no, you do not need to look up what this condition is to understand this post). Thirty patients were treated in two groups. The acupuncture group received 5 acupuncture sessions of 30 minutes duration. The injection group received one methylprednisolone acetate injection in the first dorsal compartment of the wrist. The degree of disability and pain was evaluated by using the Quick Disabilities of the Arm, Shoulder, and Hand (Q-DASH) scale and the Visual Analogue Scale (VAS) at baseline and at 2 weeks and 6 weeks after the start of treatment. The baseline means of the Q-DASH and the VAS scores were 62.8 and 6.9, respectively. At the last follow-up, the mean Q-DASH scores were 9.8 versus 6.2 in the acupuncture and injection groups, respectively, and the mean VAS scores were 2 versus 1.2. Thus there were short-term improvements of pain and function in both groups.
The authors drew the following conclusions: Although the success rate was somewhat higher with corticosteroid injection, acupuncture can be considered as an alternative option for treatment of De Quervain’s tenosynovitis.
The flaws of this study are exemplary and numerous:
- This should have been a study that compares two treatments – the technical term is ‘equivalence trial’ – and such studies need to be much larger to produce a meaningful result. Small sample sizes in equivalence trials will always make the two treatments look similarly effective, even if one is a pure placebo.
- There is no gold standard treatment for this condition. This means that a comparative trial makes no sense at all. In such a situation, one ought to conduct a placebo-controlled trial.
- There was no blinding of patients; therefore their expectation might have distorted the results.
- The acupuncture group received more treatments than the injection group; therefore the additional attention might have distorted the findings.
- Even if the results were entirely correct, one cannot conclude from them that acupuncture was effective; the notion that it was similarly ineffective as the injections is just as warranted.
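The first of these flaws is easy to demonstrate by simulation; a sketch with entirely hypothetical numbers (15 patients per arm, a genuine one-point benefit, and a crude z-test):

```python
import math
import random
import statistics

def shows_no_difference(n_per_arm=15, true_diff=1.0, sd=2.5,
                        sims=2000, seed=42):
    """Fraction of simulated small two-arm trials in which a genuinely
    effective treatment cannot be distinguished from a placebo.
    All numbers are hypothetical, chosen only for illustration."""
    rng = random.Random(seed)
    indistinguishable = 0
    for _ in range(sims):
        active = [rng.gauss(true_diff, sd) for _ in range(n_per_arm)]
        placebo = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        diff = statistics.mean(active) - statistics.mean(placebo)
        se = math.sqrt(statistics.variance(active) / n_per_arm
                       + statistics.variance(placebo) / n_per_arm)
        if abs(diff) < 1.96 * se:        # 'no significant difference'
            indistinguishable += 1
    return indistinguishable / sims
```

Under these assumptions, most such trials end in “both treatments worked similarly” even though one arm is markedly better, which is exactly why small comparative trials of this kind are uninformative.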
These are just some of the most fatal flaws of this study. The sad thing is that similar criticisms can be made for most of the 10 000 trials of acupuncture. But the point here is not to nit-pick nor to quack-bust. My point is a different and more serious one: fatally flawed research is not just a ‘poor show’, it is unethical because it is a waste of scarce resources and, even more importantly, an abuse of patients for meaningless pseudo-science. All it does is mislead the public into believing that acupuncture might be good for this or that condition, and consequently prompt wrong therapeutic decisions.
In acupuncture (and indeed in most alternative medicine) research, the problem is so extremely widespread that it is high time to do something about it. Journal editors, peer-reviewers, ethics committees, universities, funding agencies and all others concerned with such research have to work together so that such flagrant abuse is stopped once and for all.
Yesterday, BBC NEWS published the following interesting text about a BBC4 broadcast entitled ‘THE ROYAL ACTIVIST’ aired on the same day:
Prince Charles has been a well-known supporter of complementary medicine. According to a… former Labour cabinet minister, Peter Hain, it was a topic they shared an interest in.
“He had been constantly frustrated at his inability to persuade any health ministers anywhere that that was a good idea, and so he, as he once described it to me, found me unique from this point of view, in being somebody that actually agreed with him on this, and might want to deliver it.”
Mr Hain added: “When I was Secretary of State for Northern Ireland in 2005-7, he was delighted when I told him that since I was running the place I could more or less do what I wanted to do.***
“I was able to introduce a trial for complementary medicine on the NHS, and it had spectacularly good results, that people’s well-being and health was vastly improved.
“And when he learnt about this he was really enthusiastic and tried to persuade the Welsh government to do the same thing and the government in Whitehall to do the same thing for England, but not successfully,” added Mr Hain.
*** obviously there is no homeopathic remedy for megalomania (but that’s a different story)
SPECTACULARLY GOOD RESULTS?
Let’s have a look at the ‘trial’ and its results. An easily accessible report provides the following details about it:
From February 2007 to February 2008, Get Well UK ran the UK’s first government-backed complementary therapy pilot. Sixteen practitioners provided treatments including acupuncture, osteopathy and aromatherapy, to more than 700 patients at two GP practices in Belfast and Derry.
The BBC made an hour long documentary following our trials and tribulations, which was broadcast on BBC1 NI on 5 May 2008.
Aims and Objectives
The aim of the project was to pilot services integrating complementary medicine into existing primary care services in Northern Ireland. Get Well UK provided this pilot project for the Department for Health, Social Services and Public Safety (DHSSPS) during 2007.
The objectives were:
- To measure the health outcomes of the service and monitor health improvements.
- To redress inequalities in access to complementary medicine by providing therapies through the NHS, allowing access regardless of income.
- To contribute to best practice in the field of delivering complementary therapies through primary care.
- To provide work for suitably skilled and qualified practitioners.
- To increase patient satisfaction with quick access to expert care.
- To help patients learn skills to improve and retain their health.
- To free up GP time to work with other patients.
- To deliver the programme for 700 patients.
The results of the pilot were analysed by Social and Market Research, who produced this report.
The findings can be summarised as follows:
Following the pilot, 80% of patients reported an improvement in their symptoms, 64% took less time off work and 55% reduced their use of painkillers.
In the pilot, 713 patients with a range of ages and demographic backgrounds and either physical or mental health conditions were referred to various complementary and alternative medicine (CAM) therapies via nine GP practices in Belfast and Londonderry. Patients assessed their own health and wellbeing pre and post therapy and GPs and CAM practitioners also rated patients’ responses to treatment and the overall effectiveness of the scheme.
• 81% of patients reported an improvement in their physical health
• 79% reported an improvement in their mental health
• 84% of patients linked an improvement in their health and wellbeing directly to their CAM treatment
• In 65% of patient cases, GPs documented a health improvement, correlating closely to patient-reported improvements
• 94% of patients said they would recommend CAM to another patient with their condition
• 87% of patients indicated a desire to continue with their CAM treatment
Painkillers and medication
• Half of GPs reported prescribing less medication and all reported that patients had indicated to them that they needed less
• 62% of patients reported suffering from less pain
• 55% reported using less painkillers following treatment
• Patients using medication reduced from 75% before treatment to 61% after treatment
• 44% of those taking medication before treatment had reduced their use afterwards
Health service and social benefits
• 24% of patients who used health services prior to treatment (i.e. primary and secondary care, accident and emergency) reported using the services less after treatment
• 65% of GPs reported seeing the patient less following the CAM referral
• Half of GPs said the scheme had reduced their workload and 17% reported a financial saving for their practice
• Half of GPs said their patients were using secondary care services less.
Impressed? Well, in case you are, please consider this:
- there was no control group
- therefore it is not possible to attribute any of the outcomes to the alternative therapies offered
- they could have been due to placebo-effects
- or to the natural history of the disease
- or to regression towards the mean
- or to social desirability
- or to many other factors which are unrelated to the alternative treatments provided
- most outcome measures were not objectively verified
- the patients were self-selected
- they would all have had conventional treatments in parallel
- this ‘trial’ was of such poor quality that its findings were never published in a peer-reviewed journal
- this was not a ‘trial’ but a ‘pilot study’
- pilot studies are not normally for measuring outcomes but for testing the feasibility of a proper trial
- the research expertise of the investigators was close to zero
- the scientific community merely had pitiful smiles for this ‘trial’ when its report appeared
- neither Northern Ireland nor any other region implemented the programme despite its “spectacularly good results”.
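To see how just one of the factors listed above – regression towards the mean – can manufacture ‘spectacularly good results’ on its own, consider a toy simulation (all numbers are hypothetical, chosen only for illustration): patients enrol when their symptoms happen to be at a peak, and their follow-up score simply drifts back towards their usual level, with no treatment at all.

```python
import random
import statistics

def regression_to_the_mean(n_patients=10000, threshold=7.0, seed=1):
    """Each patient has a stable long-run symptom level plus day-to-day
    fluctuation. Patients enrol only on a bad day (score >= threshold);
    at follow-up the score is just another random draw around the same
    long-run level -- no treatment is involved at any point."""
    rng = random.Random(seed)
    entry_scores, follow_up_scores = [], []
    for _ in range(n_patients):
        chronic = rng.gauss(5.0, 1.0)           # long-run symptom level
        today = chronic + rng.gauss(0.0, 2.0)   # day-to-day fluctuation
        if today >= threshold:                  # self-selection on a bad day
            entry_scores.append(today)
            follow_up_scores.append(chronic + rng.gauss(0.0, 2.0))
    return statistics.mean(entry_scores), statistics.mean(follow_up_scores)
```

On a typical run, the mean score at entry is roughly 8 and at follow-up roughly 5.6: an apparent improvement of more than 2 points, produced by nothing but self-selection and random fluctuation.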
So, is the whole ‘trial’ story an utterly irrelevant old hat?
Certainly not! Its true significance does not lie in the fact that a few amateurs are trying to push bogus treatments into the NHS via the flimsiest pseudo-research of the century. The true significance, I think, is that it shows how Prince Charles, once again, oversteps the boundaries of his constitutional role.
Arnold Relman has died aged 91. He was a great personality, served for many years as editor-in-chief of ‘The New England Journal of Medicine’ and was professor of medicine and social medicine at Harvard Medical School. He also was a brilliantly outspoken critic of alternative medicine, and I therefore believe that he deserves to be remembered here. The following excerpts are from an article he wrote in 1998 about Andrew Weil, America’s foremost guru of alternative medicine; I have taken the liberty of extracting a few paragraphs which deal with alternative medicine in general terms.
Until now, alternative medicine has generally been rejected by medical scientists and educators, and by most practicing physicians. The reasons are many, but the most important reason is the difference in mentality between the alternative practitioners and the medical establishment. The leaders of the establishment believe in the scientific method, and in the rule of evidence, and in the laws of physics, chemistry, and biology upon which the modern view of nature is based. Alternative practitioners either do not seem to care about science or explicitly reject its premises. Their methods are often based on notions totally at odds with science, common sense, and modern conceptions of the structure and the function of the human body. In advancing their claims, they do not appear to recognize the need for objective evidence, asserting that the intuitions and the personal beliefs of patients and healers are all that is needed to validate their methods. One might have expected such thinking to alienate most people in a technologically advanced society such as ours; but the alternative medicine movement, and the popularity of gurus such as Weil, are growing rapidly…
That people usually “get better,” that most relatively minor diseases heal spontaneously or seem to improve with simple common remedies, is hardly news. Every physician, indeed every grandmother, knows that. Yet before we accept Weil’s contention that serious illnesses such as “bone cancer,” “Parkinson’s disease,” or “scleroderma” are similarly curable, or respond to alternative healing methods, we need at least to have some convincing medical evidence that the patients whom he reports in these testimonials did indeed suffer from these diseases, and that they were really improved or healed. The perplexity is not that Weil is using “anecdotes” as proof, but that we don’t know whether the anecdotes are true.
Anecdotal evidence is often used in the conventional medical literature to suggest the effectiveness of treatment that has not yet been tested by formal clinical trials. In fact, much of the mainstream professional literature in medicine consists of case reports — “anecdotes,” of a kind. The crucial difference between those case reports and the testimonials that abound in Weil’s books (and throughout the literature of alternative medicine) is that the case reports in the mainstream literature are almost always meticulously documented with objective data to establish the diagnosis and to verify what happened, whereas the testimonials cited by alternative medicine practitioners usually are not. Weil almost never gives any objective data to support his claims. Almost everything is simply hearsay and personal opinion.
To the best of my knowledge, Weil himself has published nothing in the peer-reviewed medical literature to document objectively his personal experiences with allegedly cured patients or to verify his claims for the effectiveness of any of the unorthodox remedies he uses. He is not alone in this respect. Few proponents of alternative medicine have so far published clinical reports that would stand the rigorous scientific scrutiny given to studies of traditional medical treatments published in the serious medical journals. Alternative medicine is still a field rich in undocumented claims and anecdotes and relatively lacking in credible scientific reports…
… Thus Weil can believe in miraculous cures even while claiming to be rational and scientific, because he thinks that quantum theory supports his views.
Yet the leading physicists of our time do not accept such an interpretation of quantum theory. They do not believe quantum theory says anything about the role of human consciousness in the physical world. They see quantum laws as simply a useful mathematical formulation for describing subatomic phenomena that are not adequately handled by classical physical theory, although the latter remains quite satisfactory for the analysis of physical events at the macro-level. Steven Weinberg has observed that “quantum mechanics has been overwhelmingly important to physics, but I cannot find any messages for human life in quantum mechanics that are different in any important way from those of Newtonian physics.” And overriding all discussions of the meaning of quantum physics is the fundamental fact that quantum theory, like all other scientific law, is only valid to the extent that it predicts and accords with the evidence provided by observation and objective measurement. Richard Feynman said it quite simply: “Observation is the ultimate and final judge of the truth of an idea.” Feynman also pointed out that scientific observations need to be objective, reproducible, and, in a sense, public — that is, available to all interested scientists who wish to check the observations for themselves.
Surely almost all scientists would agree with Feynman that, regardless of what theory of nature we wish to espouse, we cannot escape the obligation to support our claims with objective evidence. All theories must conform to the facts or be discarded. So, if Weil cannot produce credible evidence to validate the miraculous cures that he claims for the healing powers of the mind, and if he does not support with objective data the claims he and others make for the effectiveness of alternative healing methods, he cannot presume to wear the mantle of science, and his appeal to quantum theory cannot help him.
Some apologists for alternative medicine have argued that since their healing methods are based on a “paradigm” different from that of traditional medicine, traditional standards of evidence do not apply. Weil sometimes seems to agree with that view, as when he talks about “stoned thinking” and the “ambivalent” nature of reality, but more recently — as he seeks to integrate alternative with allopathic medicine — he seems to acknowledge the need for objective evidence. This, at least, is how I would interpret one of his most recent and ambitious publishing ventures, the editorship of the new quarterly journal Integrative Medicine***.
Integrative Medicine describes itself as a “peer-reviewed journal … committed to gathering evidence for the safety and efficacy of all approaches to health according to the highest standards of scientific research, while remaining open to new paradigms and honoring the healing power of nature.” The Associate Editors and Editorial Board include prominent names in both alternative medicine and allopathic medicine, who presumably support that mission. Yet the first two issues will disappoint those who were looking for original clinical research based on new, objective data. Perhaps subsequent issues will be different, but in any case it is hard to understand the need for Weil’s new journal if he truly intends to hold manuscripts to accepted scientific standards: there already exist many leading peer-reviewed medical journals that will review research studies of alternative healing methods on their merits. During the past decade or so, only a few such studies have passed rigorous review and have been published in first-rate journals. Recently, more studies have been published, but very few of them report significant clinical effects. And that is pretty much where matters now stand. Despite much avowed interest in research on alternative medicine and increased investment in support of such research, the evidentiary underpinnings of unconventional healing methods are still largely lacking…
The alternative medicine movement has been around for a long time, but it was eclipsed during most of this century by the success of medical science. Now there is growing public disenchantment with the cost and the impersonality of modern medical care, as well as concern about medical mistakes and the complications and side-effects of pharmaceuticals and other forms of medical treatment. For their part, physicians have allowed the public to perceive them as uninterested in personal problems, as inaccessible to their patients except when carrying out technical procedures and surgical operations. The “doctor knows best” attitude, which dominated patient-doctor relations during most of the century, has in recent decades given way to a more activist, consumer-oriented view of the patient’s role. Moreover, many other licensed health-care professionals, such as nurse-practitioners, psychotherapists, pharmacists, and chiropractors, are providing services once exclusively reserved to allopathic physicians.
The net result of all these developments has been a weakening of the hegemony that allopathic medicine once exercised over the health care system, and a growing interest by the public in exploring other healing approaches. The authority of allopathic medicine is also being challenged by a swelling current of mysticism and anti-scientism that runs deep through our culture. Even as the number and the complexity of urgent technological and scientific issues facing contemporary society increase, there seems to be a growing public distrust of the scientific outlook and a reawakening of interest in mysticism and spiritualism.
All this obscurantism has given powerful impetus to the alternative medicine movement, with its emphasis on the power of mind over matter. And so consumer demand for alternative remedies is rising, as is public and private financial support for their study and clinical use. It is no wonder that practicing physicians, the academic medical establishment, and the National Institutes of Health are all finding reasons to pay more attention to the alternative medicine movement. Indeed, it is becoming politically incorrect for the movement’s critics to express their skepticism too strongly in public…
There is no doubt that modern medicine as it is now practiced needs to improve its relations with patients, and that some of the criticisms leveled against it by people such as Weil — and by many more within the medical establishment itself — are valid. There also can be no doubt that a few of the “natural” medicines and healing methods now being used by practitioners of alternative medicine will prove, after testing, to be safe and effective. This, after all, has been the way in which many important therapeutic agents and treatments have found their way into standard medical practice in the past. Mainstream medicine should continue to be open to the testing of selected unconventional treatments. In keeping an open mind, however, the medical establishment in this country must not lose its scientific compass or weaken its commitment to rational thought and the rule of evidence.
There are not two kinds of medicine, one conventional and the other unconventional, that can be practiced jointly in a new kind of “integrative medicine.” Nor, as Andrew Weil and his friends also would have us believe, are there two kinds of thinking, or two ways to find out which treatments work and which do not. In the best kind of medical practice, all proposed treatments must be tested objectively. In the end, there will only be treatments that pass that test and those that do not, those that are proven worthwhile and those that are not. Can there be any reasonable “alternative”?
*** the journal only existed for a short period of time
If we search on ‘Medline’ for ‘complementary alternative medicine’ (CAM), we currently get about 13000 hits. A little graph on the side of the page demonstrates that, during the last 4 years, the number of articles on this subject has grown exponentially.
Surely, this must be very good news: such intense research activity will soon tell us exactly which alternative treatments work for which conditions and which don’t.
I beg to differ. Let me explain why.
The same ‘Medline’ search informs us that the majority of the recent articles were published in an open access journal called ‘Evidence-Based Complementary and Alternative Medicine’ (eCAM). For example, of the 80 most recent articles listed in Medline (on 26/5/2014), 53 came from that journal. The publication frequency of eCAM, and its increase in recent years, beggar belief: in 2011, the journal published just over 500 articles, which is already a high number; in 2012, the figure had risen to more than 800; and in 2013 it was more than 1300 (by comparison, the equivalent 2013 figure for the BMJ/BMJ Open is 4, and that for another alt med journal, Forsch Komplement, is 10).
How do they do it? How can eCAM be so dominant in publishing alt med research? The trick seems to be fairly simple.
Let’s assume you are an alt med researcher and you have an article that you would like to see published. Once you submit it to eCAM, your paper is sent to one of the ~150 members of the editorial board. These people are almost all strong proponents of alternative medicine; critics are a true rarity in this group. At this stage, you are able to suggest the peer reviewers for your submission (all who ever accepted this task are listed on the website; they amount to several thousand!), and it seems that, with the vast majority of submissions, the authors’ suggestions are being followed.
It goes without saying that most researchers suggest colleagues for peer reviewing who are not going to reject their work (the motto seems to be “if you pass my paper, I will pass yours”). Therefore even fairly flimsy bits of research pass this peer-review process and get quickly published online in eCAM.
This process explains a lot, I think: 1) the extraordinarily high number of articles published; 2) why currently more than 50% of all alt med research originates from eCAM; 3) why so much of it is utter rubbish.
Even the mere titles of some of the articles might demonstrate my point. A few examples have to suffice:
- Color distribution differences in the tongue in sleep disorder
- Wen-dan decoction improves negative emotions in sleep-deprived rats by regulating orexin-a and leptin expression.
- Yiqi Huoxue Recipe Improves Heart Function through Inhibiting Apoptosis Related to Endoplasmic Reticulum Stress in Myocardial Infarction Model of Rats.
- Protective Effects of Bu-Shen-Huo-Xue Formula against 5/6 Nephrectomy-Induced Chronic Renal Failure in Rats
- Effects and Mechanisms of Complementary and Alternative Medicine during the Reproductive Process
- Evidence-based medicinal plants for modern chronic diseases
- Transforming Pain into Beauty: On Art, Healing, and Care for the Spirit
This system of uncritical peer review and fast online publication seems to suit many of the people involved in this process: the journal’s owners are laughing all the way to the bank; there is a publication charge of US$2,000 per article, and, in 2013, the income of eCAM must therefore have been well over US$2,000,000. The researchers are equally delighted; they get even their flimsiest papers published (remember: ‘publish or perish’!). And the evangelical believers in alternative medicine are pleased because they can now claim that their field is highly research-active and that there is plenty of evidence to support the use of this or that therapy.
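The income figure is simple arithmetic on the numbers already quoted in this post; a quick sanity check:

```python
# Figures quoted above: more than 1300 articles in 2013,
# at a publication charge of US$2,000 per article.
articles_2013 = 1300            # lower bound quoted in the post
charge_per_article_usd = 2000
revenue_usd = articles_2013 * charge_per_article_usd
print(revenue_usd)              # 2600000 -- well over US$2,000,000
```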
But there are others who are not served well by eCAM’s habit of publishing irrelevant, low-quality articles:
- professionals who would like to advance health care and want to see reliable evidence as to which treatments work and which don’t,
- the public who, in one way or another, pay for all this and might assume that published research tends to be relevant and reliable,
- the patients who have given their time to researchers in the hope that their gift will improve health care,
- ill individuals who hope that alternative treatments might relieve their suffering,
- politicians who rely on research to be reliable in order to arrive at the right decisions.
Come to think of it, the vast majority of people should be less than enchanted with eCAM and similar journals.