This post is dedicated to Mel Koppelman.
Those who followed the recent discussions about acupuncture on this blog will probably know her; she is an acupuncturist who (thinks she) knows a lot about research because she has several higher qualifications (but was unable to show us any research published by herself). Mel seems very quick to lecture others about research methodology. Yesterday, she posted this comment in relation to my previous post on a study of aromatherapy and reflexology:
Professor Ernst, This post affirms yet again a rather poor understanding of clinical trial methodology. A pragmatic trial such as this one with a wait-list control makes no attempt to look for specific effects. You say “it is quite simply wrong to assume that this outcome is specifically related to the two treatments.” Where have specific effects been tested or assumed in this study? Your statement in no way, shape or form negates the author’s conclusions that “aromatherapy massage and reflexology are simple and effective non-pharmacologic nursing interventions.” Effectiveness is not a measure of specific effects.
I am most grateful for this comment because it highlights an issue that I had wanted to address for some time: The meanings of the two terms ‘efficacy and effectiveness’ and their differences as seen by scientists and by alternative practitioners/researchers.
Let’s start with the definitions.
I often use Alan Earl-Slater’s excellent book entitled THE HANDBOOK OF CLINICAL TRIALS AND OTHER RESEARCH. In it, EFFICACY is defined as ‘the degree to which an intervention does what it is intended to do under ideal conditions’. EFFECTIVENESS is the degree to which a treatment works under real-life conditions. An EFFECTIVENESS TRIAL is a trial that ‘is said to approximate reality (i.e. clinical practice). It is sometimes called a pragmatic trial’. An EFFICACY TRIAL ‘is a clinical trial that is said to take place under ideal conditions.’
In other words, an efficacy trial investigates the question, ‘can the therapy work?’, and an effectiveness trial asks, ‘does this therapy work?’ In both cases, the question relates to the therapy per se and not to the plethora of phenomena which are not directly related to it. It seems logical that, where possible, the first question should be addressed before the second – it makes little sense to test for effectiveness if efficacy has not been ascertained, and effectiveness without efficacy does not seem possible.
In my 2007 book entitled UNDERSTANDING RESEARCH IN COMPLEMENTARY AND ALTERNATIVE MEDICINE (written especially for alternative therapists like Mel), I adopted these definitions and added: “It is conceivable that a given therapy works only under optimal conditions but not in everyday practice. For instance, in clinical practice patients may not comply with a therapy because it causes adverse effects.” I should have added perhaps that adverse effects are by no means the only reason for non-compliance, and that non-compliance is not the only reason why an efficacious treatment might not be effective.
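The compliance point can be put into numbers. Below is a toy calculation (all figures invented for illustration): a therapy that lowers a symptom score by 2 points in every patient who actually takes it, i.e. one that is clearly efficacious under ideal conditions, delivers much less once real-world non-compliance enters the picture.

```python
# Toy illustration (all numbers invented): a therapy that lowers a
# symptom score by 2 points in every patient who actually takes it.
TRUE_EFFECT = 2.0  # specific benefit per fully compliant patient

def average_benefit(compliance_rate: float) -> float:
    """Mean benefit in a population where only a fraction of patients
    comply; non-compliers receive no specific benefit at all."""
    return compliance_rate * TRUE_EFFECT

efficacy_estimate = average_benefit(1.0)       # ideal conditions: full compliance
effectiveness_estimate = average_benefit(0.6)  # routine care: 60% compliance

print(efficacy_estimate)       # 2.0 - the therapy CAN work
print(effectiveness_estimate)  # 1.2 - what it delivers in everyday practice
```

The sketch also shows why efficacy is the natural first question: the real-world figure is the ideal-conditions figure discounted by everything that goes wrong in practice.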
Most scientists would agree with the above definitions. In fact, I am not aware of a debate about them in scientific circles. But they are not something alternative practitioners tend to like. Why? Because, using these strict definitions, many alternative therapies are neither of proven efficacy nor effectiveness.
What can be done about this unfortunate situation?
Simple! Let’s re-formulate the definitions of efficacy and effectiveness!
Efficacy, according to some alternative medicine proponents, refers to the therapeutic effects of the therapy per se, in other words, its specific effects. (That almost coincides with the scientific definition of this term – except, of course, it fails to tell us anything about the boundary conditions [optimal or real-life conditions].)
Effectiveness, according to the advocates of alternative therapies, refers to a therapy’s specific effects plus its non-specific effects. Some researchers have even introduced the term ‘real-life effectiveness’ for this.
This is why the authors of the study discussed in my previous post could conclude that “aromatherapy massage and reflexology are simple… effective… interventions… to help manage pain and fatigue in patients with rheumatoid arthritis.” Based on their data, neither aromatherapy nor reflexology has been shown to be effective. They might appear to be effective because patients expected to get better, or because patients in the no-treatment control group felt worse for not getting the extra care. Based on studies of this nature, giving patients £10 or a box of chocolates might also turn out to be “simple… effective… interventions… to help manage pain and fatigue in patients with rheumatoid arthritis.” Based on these definitions of efficacy and effectiveness, there are hardly any limits to bogus claims for any old quackery.
Such obfuscation suits proponents of alternative therapies fine because, using such definitions, virtually every treatment anyone might ever think of can be shown to be effective! Wishful thinking, it seems, can fulfil almost any dream, it can even turn the truth upside down.
Or can anyone name an alternative treatment that cannot even generate a placebo response when administered with empathy, sympathy and care? Compared to doing nothing, virtually every ineffective therapy might generate outcomes that make the treatment look effective. Even the anticipation of an effect alone might do the trick. How often have you had a toothache, gone to the dentist, and discovered while sitting in the waiting room that the pain had gone? Does that mean that sitting in a waiting room is an effective treatment for dental pain?
In fact, some enthusiasts of alternative medicine could soon begin to argue that, with their new definition of ‘effectiveness’, we no longer need controlled clinical trials at all, if we want to demonstrate how effective alternative therapies truly are. We can just do observational studies without a control group, note that lots of patients get better, and ‘Bob is your uncle’!!! This is much faster, saves money, time and effort, and has the undeniable advantage of never generating a negative result.
To most outsiders, all this might seem a bit like splitting hairs. However, I fear that it is far from that. In fact, it turns out to be a fairly fundamental issue in almost any discussion about the value or otherwise of alternative medicine. And, I think, it is also a matter of principle that reaches far beyond alternative medicine: if we allow various interest groups, lobbyists, sects, cults etc. to use their own definitions of fundamentally important terms, any dialogue, understanding or progress becomes almost impossible.
While over on my post about the new NICE GUIDELINES on acupuncture for back pain, the acupuncturists’ assassination attempts on my character, competence, integrity and personality are in full swing, I have decided to employ my time more fruitfully and briefly comment on a new piece of acupuncture research.
This new Italian study aimed to determine the effectiveness of acupuncture for the management of hot flashes in women with breast cancer.
A total of 190 women with breast cancer were randomly assigned to two groups. Random assignment was performed with stratification for hormonal therapy; the allocation ratio was 1:1. Both groups received a booklet with information about climacteric syndrome and its management to be followed for at least 12 weeks. In addition, the acupuncture group received 10 traditional acupuncture treatment sessions involving needling of predefined acupoints.
The primary outcome was the hot flash score at the end of treatment (week 12), calculated as the frequency multiplied by the average severity of hot flashes. The secondary outcomes were climacteric symptoms and quality of life, measured by the Greene Climacteric and Menopause Quality of Life scales. Health outcomes were measured for up to 6 months after treatment. Expectation and satisfaction of treatment effect and safety were also evaluated. Intention-to-treat analyses were used.
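For readers unfamiliar with composite outcomes, the primary endpoint as described is simple arithmetic. A minimal sketch (the 1–4 severity ratings below are invented for illustration; the paper’s exact rating scale may differ):

```python
# Sketch of the trial's stated primary outcome: hot flash score
# = frequency multiplied by the average severity of the flashes.
def hot_flash_score(severities: list[float]) -> float:
    """severities: one severity rating per hot flash over the period."""
    frequency = len(severities)
    if frequency == 0:
        return 0.0
    mean_severity = sum(severities) / frequency
    return frequency * mean_severity  # algebraically equals sum(severities)

print(hot_flash_score([2, 3, 3, 4]))  # 4 flashes, mean severity 3.0 -> 12.0
```

Note that frequency times mean severity simply collapses to the sum of the severity ratings, so the score rewards reductions in either the number or the intensity of flashes.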
Of the participants, 105 were randomly assigned to enhanced self-care and 85 to acupuncture plus enhanced self-care. Acupuncture plus enhanced self-care was associated with a significantly lower hot flash score than enhanced self-care at the end of treatment (P < .001) and at 3- and 6-month post-treatment follow-up visits (P = .0028 and .001, respectively). Acupuncture was also associated with fewer climacteric symptoms and higher quality of life in the vasomotor, physical, and psychosocial dimensions (P < .05).
The authors concluded that acupuncture in association with enhanced self-care is an effective integrative intervention for managing hot flashes and improving quality of life in women with breast cancer.
This hardly needs a comment, as I have commented on this study design many times before: the ‘A+B versus B’ design can only produce positive findings. Any such study concluding that ‘acupuncture (or whatever other intervention) is effective’ can therefore not be a legitimate test of a hypothesis and ought to be categorised as pseudo-science. Sadly, this problem seems more the rule than the exception in the realm of acupuncture research. That’s a pity really… because, if there is potential in acupuncture at all, this sort of thing can only distract from it.
I think the JOURNAL OF CLINICAL ONCOLOGY, its editors and reviewers, should be ashamed of having published such misleading rubbish.
Reiki is one of the most popular types of ‘energy healing’. Reiki healers believe they are able to channel ‘healing energy’ into patients’ bodies, thus enabling them to get healthy. If Reiki were not such a popular treatment, one could brush such claims aside and think “let the lunatic fringe believe what they want”. But as Reiki so effectively undermines consumers’ sense of reality and rationality, I feel I should continue informing the public about this subject – despite the fact that I have already reported about it several times before, for instance here, here, here, here, here and here.
A new RCT, published in a respected journal, looks interesting enough for a further blog-post on the subject. The main aim of the study was to investigate the effectiveness of two psychotherapeutic approaches, cognitive behavioural therapy (CBT) and a complementary medicine method, Reiki, in reducing depression scores in adolescents. The researchers from Canada, Malaysia and Australia recruited 188 depressed adolescents. They were randomly assigned to CBT, Reiki or wait-list. Depression scores were assessed before and after 12 weeks of treatments/wait list. CBT showed a significantly greater decrease in Child Depression Inventory (CDI) scores across treatment than both Reiki (p<.001) and the wait-list control (p<.001). Reiki also showed greater decreases in CDI scores across treatment relative to the wait-list control condition (p=.031). Male participants showed a smaller treatment effect for Reiki than did female participants. The authors concluded that both CBT and Reiki were effective in reducing the symptoms of depression over the treatment period, with the effect for CBT being greater than that for Reiki.
I find it most disappointing that these days even respected journals publish such RCTs without the necessary critical input. This study may appear to be rigorous but, in fact, it is hardly worth the paper it was printed on.
The results show that Reiki produced worse results than CBT. That I can well believe!
However, the findings also suggest that Reiki was nevertheless “effective in reducing the symptoms of depression”, as the authors put it in their conclusions. This statement is misleading!
It is based on the comparison of Reiki with doing nothing. As Reiki involves lots of attention, it can be assumed to generate a sizable placebo effect. As a proportion of the patients in the wait list group are probably disappointed for not getting such attention, they can be assumed to experience the adverse effects of their disappointment. The two phenomena combined can easily explain the result without any “effectiveness” of Reiki per se.
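The point is easiest to see as back-of-the-envelope arithmetic. In the sketch below (all numbers invented), the between-group difference arises entirely from a placebo response in the Reiki arm plus a nocebo response in the wait-list arm, with a specific effect of exactly zero:

```python
# Arithmetic sketch (all numbers invented) of how a Reiki-vs-wait-list
# difference can arise even if Reiki itself does nothing at all.
specific_effect  = 0.0   # assume Reiki per se has no effect
placebo_response = 1.5   # attention, ritual and expectation in the Reiki arm
nocebo_waitlist  = -0.8  # disappointment at receiving no extra care

reiki_change     = specific_effect + placebo_response
wait_list_change = nocebo_waitlist

# The between-group difference looks like 'effectiveness'...
print(reiki_change - wait_list_change)  # 2.3
```

A wait-list comparison cannot separate these three components; only a credible sham control could.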
If such considerations are not fully discussed and made amply clear even in the conclusions of the abstract, it seems reasonable to accuse the journal of being less than responsible and the authors of being outright misleading.
As with so many papers in this area, one has to ask: WHERE DOES SLOPPY RESEARCH END AND WHERE DOES SCIENTIFIC MISCONDUCT BEGIN?
My last post was about a researcher who manages to produce nothing but positive findings with the least promising alternative therapy, homeopathy. Some might think that this is an isolated case or an anomaly – but they would be wrong. I have previously written about researchers who have done very similar things with homeopathy or other unlikely therapies. Examples include:
But there are many more, and I will carry on highlighting their remarkable work. For example, the research of a German group headed by Prof Gustav Dobos, one of the most prolific investigators in alternative medicine at present.
For my evaluation, I conducted a Medline search of the last 10 of Dobos’ published articles and excluded those not assessing the effectiveness of alternative therapies (surveys, comments, etc.). Here they are with their respective conclusions and publication dates:
RCTs with different yoga styles do not differ in their odds of reaching positive conclusions. Given that most RCTs were positive, the choice of an individual yoga style can be based on personal preferences and availability.
Despite methodological drawbacks, yoga can be preliminarily considered a safe and effective intervention to reduce body mass index in overweight or obese individuals.
REVIEW OF INTEGRATIVE MEDICINE IN GYNAECOLOGICAL ONCOLOGY (2016)
…there is published, positive level I evidence for a number of CAM treatment forms.
Mindfulness- and acceptance-based interventions can be recommended as an additional treatment for patients with psychosis.
Cabbage leaf wraps are more effective for knee osteoarthritis than usual care, but not compared with diclofenac gel. Therefore, they might be recommended for patients with osteoarthritis of the knee.
This review found strong evidence for A. paniculata and ivy/primrose/thyme-based preparations and moderate evidence for P. sidoides being significantly superior to placebo in alleviating the frequency and severity of patients’ cough symptoms. Additional research, including other herbal treatments, is needed in this area.
Dietary approaches should mainly be tried to reduce macronutrients and enrich functional food components such as vitamins, flavonoids, and unsaturated fatty acids. People with Metabolic Syndrome will benefit most by combining weight loss and anti-inflammatory nutrients.
In patients with CHD, MBM programs can lessen the occurrence of cardiac events, reduce atherosclerosis, and lower systolic blood pressure, but they do not reduce mortality. They can be used as a complement to conventional rehabilitation programs.
CST was both specifically effective and safe in reducing neck pain intensity and may improve functional disability and the quality of life up to 3 months after intervention.
Study data have shown that therapy- and disease-related side effects can be reduced using the methods of integrative medicine. Reported benefits include improving patients’ wellbeing and quality of life, reducing stress, and improving patients’ mood, sleeping patterns and capacity to cope with disease.
Dobos seems to be an ‘all-rounder’ whose research tackles a wide range of alternative treatments. That is perhaps unremarkable – but what I do find remarkable is the impression that, whatever he researches, the results turn out to be pretty positive. This might imply one of two things, in my view:
- all alternative therapies are effective,
- the ‘Trustworthiness Index’ of Prof Dobos is unusual.
I let my readers choose which possibility they deem to be more likely.
Recently, I came across the ‘Clinical Practice Guidelines on the Use of Integrative Therapies as Supportive Care in Patients Treated for Breast Cancer’ published by the ‘Society for Integrative Oncology (SIO) Guidelines Working Group’. The mission of the SIO is to “advance evidence-based, comprehensive, integrative healthcare to improve the lives of people affected by cancer. The SIO has consistently encouraged rigorous scientific evaluation of both pre-clinical and clinical science, while advocating for the transformation of oncology care to integrate evidence-based complementary approaches. The vision of SIO is to have research inform the true integration of complementary modalities into oncology care, so that evidence-based complementary care is accessible and part of standard cancer care for all patients across the cancer continuum. As an interdisciplinary and inter-professional society, SIO is uniquely poised to lead the “bench to bedside” efforts in integrative cancer care.”
The aim of the ‘Clinical Practice Guidelines’ was to “inform clinicians and patients about the evidence supporting or discouraging the use of specific complementary and integrative therapies for defined outcomes during and beyond breast cancer treatment, including symptom management.”
This sounds like a most laudable aim. Therefore I studied the document carefully and was surprised to read their conclusions: “Specific integrative therapies can be recommended as evidence-based supportive care options during breast cancer treatment.”
How can this be? On this blog, we have repeatedly seen evidence to suggest that integrative medicine is little more than the admission of quackery into evidence-based healthcare. This got me wondering how their conclusion had been reached, and I checked the document even closer.
On the surface, it seemed well-made. A team of researchers first defined the treatments they wanted to look at, then they searched for RCTs, evaluated their quality, extracted their results, combined them into an overall verdict and wrote the whole thing up. In a word, they conducted what seems to be a proper systematic review.
Based on the findings of their review, they then issued recommendations which I thought were baffling in several respects. Let me just focus on three of the SIO’s recommendations dealing with acupuncture:
- “Acupuncture can be considered for treating anxiety concurrent with ongoing fatigue…” [only RCT (1) cited in support]
- “Acupuncture can be considered for improving depressive symptoms in women suffering from hot flashes…” [RCTs (1 and 2) cited in support]
- “Acupuncture can be considered for treating anxiety concurrent with ongoing fatigue…” [only RCT (1) cited in support]
The actual RCT (1) cited in support of all three recommendations stated that the authors “randomly assigned 75 patients to usual care and 227 patients to acupuncture plus usual care…” As we have discussed often before on this blog and elsewhere, such an ‘A+B versus B’ study design will never generate a negative result, does not control for placebo effects and is certainly not a valid test for the effectiveness of the treatment in question. Nevertheless, the authors of this study concluded that: “Acupuncture is an effective intervention for managing the symptom of cancer-related fatigue and improving patients’ quality of life.”
RCT (2) cited in support of recommendation number 2 seems to be a citation error; the study in question is not an acupuncture trial and does not back the statement in question. I suspect they meant to cite their reference number 87 (instead of 88). This trial is an equivalence study in which 50 patients were randomly assigned to receive 12 weeks of acupuncture (n = 25) or venlafaxine (n = 25) treatment for cancer-related hot flushes. Its results indicate that the two treatments generated similar results. As the two therapies could also have been equally ineffective, it is impossible, in my view, to conclude that acupuncture is effective.
Finally, RCT (1) in no way supports recommendation number two. Yet RCT (1) and RCT (2) were both cited in support of this recommendation.
I have not systematically checked any other claims made in this document, but I get the impression that many other recommendations made here are based on similarly ‘liberal’ interpretations of the evidence. How can the ‘Society for Integrative Oncology’ use such dodgy pseudo-science for formulating potentially far-reaching guidelines?
I know none of the authors (Heather Greenlee, Lynda G. Balneaves, Linda E. Carlson, Misha Cohen, Gary Deng, Dawn Hershman, Matthew Mumber, Jane Perlmutter, Dugald Seely, Ananda Sen, Suzanna M. Zick, Debu Tripathy) of the document personally. They made the following collective statement about their conflicts of interest: “There are no financial conflicts of interest to disclose. We note that some authors have conducted/authored some of the studies included in the review.” I am a little puzzled to hear that they have no financial conflicts of interest (do not most of them earn their living by practising integrative medicine? Yes they do! The article informs us that: “A multidisciplinary panel of experts in oncology and integrative medicine was assembled to prepare these clinical practice guidelines. Panel members have expertise in medical oncology, radiation oncology, nursing, psychology, naturopathic medicine, traditional Chinese medicine, acupuncture, epidemiology, biostatistics, and patient advocacy.”). I also suspect they have other, potentially much stronger conflicts of interest. They belong to a group of people who seem to religiously believe in the largely nonsensical concept of integrative medicine. Integrating unproven treatments into healthcare must affect its quality in much the same way as the integration of cow pie into apple pie would affect the taste of the latter.
After considering all this carefully, I cannot help wondering whether these ‘Clinical Practice Guidelines’ by the ‘Society for Integrative Oncology’ are just full of honest errors or whether they amount to fraud and scientific misconduct.
WHATEVER THE ANSWER, THE GUIDELINES MUST BE RETRACTED, IF THIS SOCIETY WANTS TO AVOID LOSING ALL CREDIBILITY.
In recent blogs, I have written much about acupuncture and particularly about the unscientific notions of traditional acupuncturists. I was therefore surprised to see that a UK charity is teaming up with traditional acupuncturists in an exercise that looks as though it is designed to mislead the public.
The website of ‘Anxiety UK’ informs us that this charity and the British Acupuncture Council (BAcC) have launched a ‘pilot project’ which will see members of Anxiety UK being able to access traditional acupuncture through this new partnership. Throughout the pilot project, they proudly proclaim, data will be collected to “determine the effectiveness of traditional acupuncture for treating those living with anxiety and anxiety based depression.”
This, they believe, will enable both parties to continue to build a body of evidence to measure the success rate of this type of treatment. Anxiety UK’s Chief Executive Nicky Lidbetter said: “This is an exciting project and will provide us with valuable data and outcomes for those members who take part in the pilot and allow us to assess the benefits of extending the pilot to a regular service for those living with anxiety. “We know anecdotally that many people find complementary therapies used to support conventional care can provide enormous benefit, although it should be remembered they are used in addition to and not instead of seeking medical advice from a doctor or taking prescribed medication. This supports our strategic aim to ensure that we continue to make therapies and services that are of benefit to those with anxiety and anxiety based depression, accessible.”
And what is wrong with that, you might ask.
What is NOT wrong with it, would be my response.
To start with, traditional acupuncture relies on obsolete assumptions such as yin and yang, meridians, energy flow, acupuncture points etc. They have one thing in common: they fly in the face of science and evidence. But this might just be a triviality. More important, I believe, is the fact that a pilot project cannot determine the effectiveness of a therapy. Therefore the whole exercise smells very much like a promotional activity for pure quackery.
And what about the hint in the direction of anecdotal evidence in support of the study? Are they not able to do a simple Medline search? Because, if they had done one, they would have found a plethora of articles on the subject. Most of them show that there are plenty of studies, but that the majority are too flawed to allow firm conclusions.
A review by someone who certainly cannot be accused of being biased against alternative medicine, for instance, informs us that “trials in depression, anxiety disorders and short-term acute anxiety have been conducted but acupuncture interventions employed in trials vary as do the controls against which these are compared. Many trials also suffer from small sample sizes. Consequently, it has not proved possible to accurately assess the effectiveness of acupuncture for these conditions or the relative effectiveness of different treatment regimens. The results of studies showing similar effects of needling at specific and non-specific points have further complicated the interpretation of results. In addition to measuring clinical response, several clinical studies have assessed changes in levels of neurotransmitters and other biological response modifiers in an attempt to elucidate the specific biological actions of acupuncture. The findings offer some preliminary data requiring further investigation.”
Elsewhere, the same author, together with other pro-acupuncture researchers, wrote this: “Positive findings are reported for acupuncture in the treatment of generalised anxiety disorder or anxiety neurosis but there is currently insufficient research evidence for firm conclusions to be drawn. No trials of acupuncture for other anxiety disorders were located. There is some limited evidence in favour of auricular acupuncture in perioperative anxiety. Overall, the promising findings indicate that further research is warranted in the form of well designed, adequately powered studies.”
What does this mean in the context of the charity’s project?
I think it tells us that acupuncture for anxiety is not exactly the most promising approach for further investigation. Even in the realm of alternative medicine, there are several interventions which are supported by more encouraging evidence. And even if one disagrees with this statement, one cannot possibly disagree with the fact that more flimsy research is not what is needed. If we do need more studies, they must be rigorous and not promotion thinly disguised as science.
I guess the ultimate question here is one of ethics. Do charities not have an ethical and moral duty to spend our donations wisely and productively? When does such ill-conceived pseudo-research cross the line to become offensive or even fraudulent?
The randomized, placebo-controlled, double-blind trial is usually the methodology that carries the least risk of bias when testing the efficacy of a therapy. This fact is an obvious annoyance to some alt med enthusiasts, because such trials far too often fail to produce the results they were hoping for.
But there is no need to despair. Here I provide a few simple tips on how to mislead the public with seemingly rigorous trials.
The most brutal method for misleading people is simply to cheat. The Germans have a saying, ‘Papier ist geduldig’ (paper is patient), implying that anyone can put anything on paper. Fortunately we currently have plenty of alt med journals which publish any rubbish anyone might dream up. The process of ‘peer-review’ is one of several mechanisms supposed to minimise the risk of scientific fraud. Yet alt med journals are cleverer than that! They tend to have a peer-review process that rarely involves independent and critical scientists; more often than not, you can even ask that your best friend be invited to do the peer-review, and the alt med journal will follow your wish. Consequently, the door is wide open to cheating. Once your fraudulent paper has been published, it is almost impossible to tell that something is fundamentally wrong.
But cheating is not confined to original research. You can also apply the method to other types of research, of course. For instance, the authors of the infamous ‘Swiss report’ on homeopathy generated a false positive picture using published systematic reviews of mine by simply changing their conclusions from negative to positive. Simple!
Obviously, outright cheating is not always as simple as that. Even in alt med, you cannot easily claim to have conducted a clinical trial without a complex infrastructure which invariably involves other people. And they are likely to want to have some control over what is happening. This means that complete fabrication of an entire data set may not always be possible. What might still be feasible, however, is the ‘prettification’ of the results. By just ‘re-adjusting’ a few data points that failed to live up to your expectations, you might be able to turn a negative into a positive trial. Proper governance is aimed at preventing this type of ‘mini-fraud’, but fortunately you work in alt med where such mechanisms are rarely adequately implemented.
Another very handy method is the omission of aspects of your trial which regrettably turned out to be in disagreement with the desired overall result. In most studies, one has a myriad of endpoints. Once the statistics of your trial have been calculated, it is likely that some of them yield the wanted positive results, while others do not. By simply omitting any mention of the embarrassingly negative results, you can easily turn a largely negative study into a seemingly positive one. Normally, researchers have to rely on a pre-specified protocol which defines a primary outcome measure. Thankfully, in the absence of proper governance, it usually is possible to publish a report which obscures such detail and thus misleads the public (I even think there has been an example of such an omission on this very blog).
Yes – lies, damned lies, and statistics! A gifted statistician can easily find ways to ‘torture the data until they confess’. One only has to run statistical test after statistical test, and BINGO, one will eventually yield something that can be marketed as the longed-for positive result. Normally, researchers must have a protocol that pre-specifies all the methodologies used in a trial, including the statistical analyses. But, in alt med, we certainly do not want things to function normally, do we?
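The arithmetic behind running test after test is worth spelling out: if each of k independent endpoints has a 5% false-positive rate, the chance of at least one spuriously ‘significant’ result is 1 − (1 − α)^k, and it grows alarmingly fast.

```python
# Multiple-testing arithmetic: probability of at least one false-positive
# 'significant' result among k independent tests at significance level alpha.
def chance_of_false_positive(k: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(k, round(chance_of_false_positive(k), 2))
# With 20 endpoints, the odds of at least one 'positive' finding are
# roughly 64% - even if the therapy does nothing whatsoever.
```

This is precisely why a pre-specified primary outcome and analysis plan matter: they deny the torturer the freedom to pick the one test that happened to ‘confess’.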
5 TRIAL DESIGNS THAT CANNOT GENERATE A NEGATIVE RESULT
All the above tricks are a bit fraudulent, of course. Unfortunately, fraud is not well regarded by everyone. Therefore, a more legitimate means of misleading the public would be highly desirable for those aspiring alt med researchers who do not want to tarnish their record. No worries guys, help is on the way!
The fool-proof trial design is obviously the often-mentioned ‘A+B versus B’ design. In such a study, patients are randomized to receive an alt med treatment (A) together with usual care (B) or usual care (B) alone. This looks rigorous, can be sold as a ‘pragmatic’ trial addressing a real-life problem, and has the enormous advantage of never failing to produce a positive result: A+B is always more than B alone, even if A is a pure placebo. Such trials are akin to going into a hamburger joint for measuring the calories of a Big Mac without chips and comparing them to the calories of a Big Mac with chips. We know the result before the research has started; in alt med, that’s how it should be!
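The inevitability of a ‘positive’ result can be demonstrated with a toy simulation (all numbers invented): even when the add-on therapy A has a specific effect of exactly zero, the non-specific benefit of extra attention and expectation puts the A+B group ahead.

```python
import random
import statistics

# Toy simulation (all numbers invented) of the 'A+B versus B' design:
# the add-on therapy A is a pure placebo with ZERO specific effect, yet
# the A+B group still ends up ahead, because extra attention and
# expectation add a non-specific benefit on top of usual care.
random.seed(1)  # fixed seed for reproducibility

def improvement(non_specific_benefit: float) -> float:
    usual_care_effect = random.gauss(5.0, 1.0)  # natural course + usual care B
    return usual_care_effect + non_specific_benefit

b_alone = [improvement(0.0) for _ in range(100)]   # usual care only
a_plus_b = [improvement(1.0) for _ in range(100)]  # + placebo effects of A

# A+B 'wins' even though A itself does nothing:
print(statistics.mean(a_plus_b) > statistics.mean(b_alone))  # True
```

Because nothing shields the comparison from non-specific effects, the design answers a question nobody asked: whether doing something extra beats doing nothing extra.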
I have been banging on about the ‘A+B versus B’ design often enough, but recently I came across a new study design used in alt med which is just as elegantly misleading. The trial in question has a promising title: Quality-of-life outcomes in patients with gynecologic cancer referred to integrative oncology treatment during chemotherapy. Here is the unabbreviated abstract:
Integrative oncology incorporates complementary medicine (CM) therapies in patients with cancer. We explored the impact of an integrative oncology therapeutic regimen on quality-of-life (QOL) outcomes in women with gynecological cancer undergoing chemotherapy.
PATIENTS AND METHODS:
A prospective preference study examined patients referred by oncology health care practitioners (HCPs) to an integrative physician (IP) consultation and CM treatments. QOL and chemotherapy-related toxicities were evaluated using the Edmonton Symptom Assessment Scale (ESAS) and Measure Yourself Concerns and Wellbeing (MYCAW) questionnaire, at baseline and at a 6-12-week follow-up assessment. Adherence to the integrative care (AIC) program was defined as ≥4 CM treatments, with ≤30 days between each session.
Of 128 patients referred by their HCP, 102 underwent IP consultation and subsequent CM treatments. The main concerns expressed by patients were fatigue (79.8 %), gastrointestinal symptoms (64.6 %), pain and neuropathy (54.5 %), and emotional distress (45.5 %). Patients in both AIC (n = 68) and non-AIC (n = 28) groups shared similar demographic, treatment, and cancer-related characteristics. ESAS fatigue scores improved by a mean of 1.97 points in the AIC group on a scale of 0-10 and worsened by a mean of 0.27 points in the non-AIC group (p = 0.033). In the AIC group, MYCAW scores improved significantly (p < 0.0001) for each of the leading concerns as well as for well-being, a finding which was not apparent in the non-AIC group.
An IP-guided CM treatment regimen provided to patients with gynecological cancer during chemotherapy may reduce cancer-related fatigue and improve other QOL outcomes.
A ‘prospective preference study’ – this is the design the world of alt med has been yearning for! Its principle is beautiful in its simplicity. One merely administers a treatment or treatment package to a group of patients; inevitably, some patients take it, while others don’t. The reasons for not taking it could range from lack of perceived effectiveness to the experience of side-effects. But never mind: the fact that some do not want your treatment provides you with two groups of patients, those who comply and those who do not. With a bit of skill, you can now make the non-compliers look like a proper control group. Now you only need to compare the outcomes and BOB’S YOUR UNCLE!
Brilliant! Absolutely brilliant!
I cannot think of a more deceptive trial-design than this one; it will make any treatment look good, even one that is a mere placebo. Alright, it is not randomized, and it does not even have a proper control group. But it sure looks rigorous and meaningful, this ‘prospective preference study’!
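For anyone who doubts how far self-selection alone can carry such a design, here is a small simulation sketch (every number is invented for illustration): the ‘treatment’ does nothing at all, but patients who happen to improve early are more likely to keep attending, so the adherent group looks better at follow-up.

```python
import random
import statistics

random.seed(2)
records = []
for _ in range(500):
    # spontaneous symptom fluctuation; the treatment itself is inert
    early_change = random.gauss(0, 1)
    # patients who already feel better are more likely to keep attending
    adheres = random.random() < (0.8 if early_change > 0 else 0.5)
    late_change = early_change + random.gauss(0, 0.5)
    records.append((adheres, late_change))

adherent = [c for a, c in records if a]
non_adherent = [c for a, c in records if not a]
print(f"adherent group mean change:     {statistics.mean(adherent):+.2f}")
print(f"non-adherent group mean change: {statistics.mean(non_adherent):+.2f}")
# the gap between the groups arises without any treatment effect at all
```

The ‘compliers’ come out ahead purely because adherence and improvement share a common cause, which is precisely why a non-randomized comparison of adherers and non-adherers cannot tell us anything about the treatment.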
This study created a media storm when it was first published. Several articles in the lay press seemed to advertise it as though a true breakthrough had been made in the treatment of hypertension. I would not be surprised if many patients consequently threw their anti-hypertensives overboard and queued up at their local acupuncturist.
Good for business, no doubt – but would this be a wise decision?
The aim of this clinical trial was to examine the effectiveness of electroacupuncture (EA) for reducing systolic blood pressure (SBP) and diastolic blood pressure (DBP) in hypertensive patients. Sixty-five hypertensive patients not receiving medication were assigned randomly to one of two acupuncture interventions. Patients were assessed with 24-hour ambulatory blood pressure monitoring. They were treated by 4 acupuncturists with 30 minutes of EA at PC 5-6+ST 36-37 or LI 6-7+GB 37-39 (control group) once weekly for 8 weeks. Primary outcomes measuring effectiveness of EA were peak and average SBP and DBP. Secondary outcomes examined underlying mechanisms of acupuncture with plasma norepinephrine, renin, and aldosterone before and after 8 weeks of treatment. Outcomes were obtained by blinded evaluators.
After 8 weeks, 33 patients treated with EA at PC 5-6+ST 36-37 had decreased peak and average SBP and DBP, compared with 32 patients treated with EA at LI 6-7+GB 37-39 control acupoints. Changes in blood pressures significantly differed between the two patient groups. In 14 patients, a long-lasting blood pressure–lowering acupuncture effect was observed for an additional 4 weeks of EA at PC 5-6+ST 36-37. After treatment, the plasma concentration of norepinephrine, which was initially elevated, was decreased by 41%; likewise, renin was decreased by 67% and aldosterone by 22%.
The authors concluded that EA at select acupoints reduces blood pressure. Sympathetic and renin-aldosterone systems were likely related to the long-lasting EA actions.
These results are baffling, to say the least; and they contradict a recent meta-analysis which did not find that acupuncture without antihypertensive medication significantly improves blood pressure in hypertensive patients.
So, who is right and who is wrong here?
Or shall we just look for alternative explanations of the effects observed in the new study?
There could be dozens of reasons for these findings that are unrelated to the alleged effects of acupuncture. For instance, they could be due to life-style changes suggested to the experimental but not the control group, or they might be caused by some other undisclosed bias or confounding. At the very minimum, we should insist on an independent replication of this trial.
It would be silly, I think, to trust these results and now recommend acupuncture to the millions of hypertensive patients worldwide, particularly as dozens of safe, cheap and very effective treatments for hypertension already exist.
This seems to be the question that occupies the minds of several homeopaths.
So was I!
Let me explain.
In 1997, Linde et al published their now famous meta-analysis of clinical trials of homeopathy which concluded that “The results of our meta-analysis are not compatible with the hypothesis that the clinical effects of homeopathy are completely due to placebo. However, we found insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition. Further research on homeopathy is warranted provided it is rigorous and systematic.”
This paper had several limitations which Linde was only too happy to admit. The authors therefore conducted a re-analysis which, even though published in an excellent journal, is rarely cited by homeopaths. Linde et al stated in their re-analysis of 2000: “there was clear evidence that studies with better methodological quality tended to yield less positive results.” It was this phenomenon that prompted me and my colleague Max Pittler to publish a ‘letter to the editor’ which now – 15 years later – seems to be a bone of contention among homeopaths.
A blog-post by a believer in homeopathy even asks the interesting question: Did Professor Ernst Sell His Soul to Big Pharma? It continues as follows:
Edzard Ernst is an anti-homeopath who spent his career attacking traditional medicine. In 1993 he became Professor of Complementary Medicine at the University of Exeter. He is often described as the first professor of complementary medicine, but the title he assumed should have fooled no-one. His aim was to discredit medical therapies, notably homeopathy, and he then published some 700 papers in ‘scientific’ journals to do so.
Now, Professor Robert Hahn, in his blog, has made an assessment of the quality of his work… In the interests of the honesty and integrity in science, it is an important assessment. It shows, in his view, how science has been taken over by ideology (or as I would suggest, more accurately, the financial interests of Big Corporations, in this case, Big Pharma). The blog indicates that in order to demonstrate that homeopathy is ineffective, over 95% of scientific research into homeopathy has to be discarded or removed!
So for those people who, like myself, cannot read the original German, here is an English translation of the blog…
“I have never seen a science writer so blatantly biased as Edzard Ernst: his work should not be considered of any worth at all, and discarded” finds Sweden’s Professor Robert Hahn, a leading medical scientist, physician, and Professor of Anaesthesia and Intensive Care at the University of Linköping, Sweden.
Hahn determined therefore to analyze for himself the ‘research’ which supposedly demonstrated homeopathy to be ineffective, and reached the shocking conclusion that:
“only by discarding 98% of homeopathy trials and carrying out a statistical meta-analysis on the remaining 2% negative studies, can one ‘prove’ that homeopathy is ineffective”.
In other words, all supposedly negative homeopathic meta-analyses which opponents of homeopathy have relied on, are scientifically bogus…
Who can you trust? We can begin by disregarding Edzard Ernst. I have read several other studies that he has published, and they are all untrustworthy. His work should be discarded…
In the case of homeopathy, one should stick with what the evidence reveals. And the evidence is that only by removing 95-98% of all studies is the effectiveness of homeopathy not demonstrable…
So, now you are wondering, I am sure: HOW MUCH DID HE GET FOR SELLING HIS SOUL TO BIG PHARMA?
No? You are wondering 1) who this brilliant Swedish scientist, Prof Hahn, is and 2) what article of mine he is criticising? Alright, I will try to enlighten you.
Here I can rely on a comment posted on my blog some time ago by someone who can read Swedish (thank you Bjorn). He commented about Hahn as follows:
A renowned director of medical research with well over 300 publications on anesthesia and intensive care and 16 graduated PhD students under his mentorship, who has been leading a life on the side, blogging and writing about spiritualism, and alternative medicine and now ventures on a public crusade for resurrecting the failing realm of homeopathy!?! Unbelievable!
I was unaware of this person before, even if I have lived and worked in Sweden for decades.
I have spent the evening looking up his net-track and at his blog at roberthahn.nu (in Swedish).
I will try to summarise some first impressions:
Hahn is evidently deeply religious and there is the usual, unmistakably narcissistic aura over his writings and sayings. He is religiously confident that there is more to this world than what can be measured and sensed. In effect, he seems to believe that homeopathy (as well as alternative medical methods in general) must work because there are people who say they have experienced it and denying the possibility is akin to heresy (not his wording but the essence of his writing).
He has, along with his wife, authored at least three books on spiritual matters with titles such as (my translations) “Clear replies from the spiritual world” and “Connections of souls”.
He has a serious issue with skeptics and goes on at length about how they are dishonest bluffers[sic] who willfully cherry-pick and misinterpret evidence to fit their preconceived beliefs.
He feels that desperate patients should generally be allowed the chance that alternative methods may offer.
He believes firmly in former-life memories, including his own, which he claims he has found verification for in an ancient Italian parchment.
His main arguments for homeopathy are Claus Linde’s meta analyses and the sheer number of homeopathic research that he firmly believes shows it being superior to placebo, a fact that (in his opinion) shows it has a biological effect. Shang’s work from 2005 he dismisses as seriously flawed.
He also points to individual research like this as credible proof of the biologic effect of remedies.
He somewhat surprisingly denies recommending homeopathy despite being convinced of its effect and maintains that he wants better, more problem oriented and disease specific studies to clarify its applicability. (my interpretation)
If it weren’t for his track record of genuine, acknowledged medical research and him being a renowned authority in a genuine, scientific medical field, this man would be an ordinary, religiously devout quack.
What strikes me as perhaps telling of a consequence of his “exoscientific” activity, is that Hahn, who holds the position of research director at a large city trauma and emergency hospital is an “adjungerad professor”, which is (usually) a part time, time limited, externally financed professorial position, while any Swedish medical doctor with his very extensive formal merits would very likely hold a full professorship at an academic institution.
END OF QUOTE
MY 2000 PAPER THAT SEEMS TO IRRITATE HAHN
This was a short ‘letter to the editor’ by Ernst and Pittler published in the J Clin Epidemiol commenting on the above-mentioned re-analysis by Linde et al which was published in the same journal. As its text is not available on-line, I re-type parts of it here:
In an interesting re-analysis of their meta-analysis of clinical trials of homeopathy, Linde et al conclude that there is no linear relationship between quality scores and study outcome. We have simply re-plotted their data and arrive at a different conclusion. There is an almost perfect correlation between the odds ratio and the Jadad score between the range of 1-4… [some technical explanations follow which I omit]…Linde et al can be seen as the ultimate epidemiological proof that homeopathy is, in fact, a placebo.
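For illustration, here is what such a re-plot boils down to. The numbers below are entirely made up for demonstration purposes and are NOT Linde et al's actual data; they merely show how one would quantify the relationship between trial quality and apparent effect size.

```python
import math

# hypothetical (Jadad score, pooled odds ratio) pairs, invented purely
# for illustration; these are NOT Linde et al's actual data.
# An odds ratio of 1.0 would mean 'no better than placebo'.
jadad = [1, 2, 3, 4]
odds_ratio = [2.0, 1.6, 1.3, 1.05]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(jadad, odds_ratio)
print(f"correlation between quality and apparent effect: r = {r:.2f}")
# a near-perfect negative correlation: the higher the trial quality,
# the closer the result comes to placebo
```

If the apparent effect shrinks steadily as methodological rigour rises, the natural reading is that the ‘effect’ is an artefact of the weakest studies.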
And that is, as far as I can see, the whole mysterious story. I cannot even draw a conclusion – all I can do is to ask a question:
DOES ANYONE UNDERSTAND WHAT THEY ARE GOING ON ABOUT?
In the realm of alternative medicine, we encounter many therapeutic claims that beggar belief. This is true for most modalities but perhaps for none more than chiropractic. Many chiropractors still adhere to Palmer’s gospel of the ‘innate’, ‘subluxation’ etc., and thus they believe that their ‘adjustments’ are a cure-all. Readers of this blog will know all that, of course, but even they might be surprised by the notion that a chiropractic adjustment improves the voice of a choir singer.
This, however, is precisely the ‘hypothesis’ that was recently submitted to an RCT. To be precise, the study investigated the effect of spinal manipulative therapy (SMT) on the singing voice of male individuals.
Twenty-nine subjects were selected among male members of a local choir. Participants were randomly assigned to two groups: (A) a single session of chiropractic SMT and (B) a single session of non-therapeutic transcutaneous electrical nerve stimulation (TENS). Recordings of the singing voice of each participant were taken immediately before and after the procedures. After a 14-day wash-out period, procedures were switched between groups: participants who underwent SMT on the first occasion were now subjected to TENS and vice versa. Recordings were assessed via perceptual audio and acoustic evaluations. The same recording segment of each participant was selected. Perceptual audio evaluation was performed by a specialist panel (SP). Recordings of each participant were randomly presented thus making the SP blind to intervention type and recording session (before/after intervention). Recordings compiled in a randomized order were also subjected to acoustic evaluation.
No differences in the quality of the singing on perceptual audio evaluation were observed between TENS and SMT.
The authors concluded that no differences in the quality of the singing voice of asymptomatic male singers were observed on perceptual audio evaluation or acoustic evaluation after a single spinal manipulative intervention of the thoracic and cervical spine.
There is nevertheless an important point to be made here, I feel: some claims are just too silly to waste resources on. Or, to put it in more scientific terms, hypotheses require much more than a vague notion or hunch.
To set up, conduct and eventually publish an RCT as above requires expertise, commitment, time and money. All of this is entirely wasted if the prior probability of a relevant result approaches zero. In the realm of alternative medicine, this is depressingly often the case. In the final analysis, this suggests that all too often research in this area achieves nothing other than giving science a bad name.
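The role of prior probability can be made concrete with a back-of-the-envelope calculation (the power, significance level and priors below are illustrative assumptions, not measured values): even a well-run trial that comes out ‘positive’ is almost certainly reporting a false positive when the hypothesis was wildly implausible to begin with.

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a statistically significant result reflects a true effect."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# assumed 80% power and a 0.05 significance threshold; priors are illustrative
for prior in (0.5, 0.1, 0.01, 0.001):
    print(f"prior plausibility {prior:>5}: P(real effect | p < 0.05) = {ppv(prior):.1%}")
```

At a prior of one in a thousand, a ‘significant’ result is almost guaranteed to be noise; this is why testing whether an adjustment improves singing is not merely eccentric but wasteful.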