There are numerous types and styles of acupuncture, and the discussion about whether one is better than another has been long, tedious and frustrating. Traditional acupuncturists, for instance, individualise their approach according to their findings from pulse and tongue diagnoses as well as other non-validated diagnostic criteria. Western acupuncturists, by contrast, tend to use formulaic, standardised treatments based on conventional diagnoses.
This study aimed to compare the effectiveness of standardized and individualized acupuncture treatment in patients with chronic low back pain. A single-center randomized controlled single-blind trial was performed in a general medical practice of a Chinese-born medical doctor trained in both western and Chinese medicine. One hundred and fifty outpatients with chronic low back pain were randomly allocated to two groups who received either standardized acupuncture or individualized acupuncture. Ten to fifteen treatments, based on individual symptoms, were given at a rate of two treatments per week.
The main outcome measure was the area under the curve (AUC) summarizing eight weeks of daily rated pain severity measured with a visual analogue scale. No significant differences between groups were observed for the AUC (individualized acupuncture mean: 1768.7; standardized acupuncture 1482.9; group difference, 285.8).
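For readers unfamiliar with this summary measure: the AUC collapses weeks of daily ratings into a single number per patient, so that a patient whose pain stays high accumulates a larger "pain-area" than one whose pain declines. A rough sketch with invented data (none of it from the trial):

```python
import numpy as np

# Sketch of the AUC summary measure, using INVENTED daily pain ratings
# (not the trial's data): 8 weeks of daily VAS scores on a 0-100 scale.
days = 56

# One hypothetical patient whose pain declines from 60 to 20,
# and one whose pain stays flat at 50 throughout.
declining = np.linspace(60.0, 20.0, days)
flat = np.full(days, 50.0)

def auc(y):
    """Area under the curve via the trapezoidal rule, one-day spacing."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0))

auc_declining = auc(declining)  # smaller accumulated 'pain-area'
auc_flat = auc(flat)
print(auc_declining, auc_flat)
```

Comparing the group means of such per-patient AUCs is essentially what the trial's primary analysis did.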
The authors concluded that individualized acupuncture was not superior to standardized acupuncture for patients suffering from chronic pain.
But perhaps it matters whether the acupuncturist is thoroughly trained or has just picked up his/her skills during a weekend course? I am afraid not: this analysis of a total of 4,084 patients with chronic headache, lower back pain or arthritic pain, treated by 1,838 acupuncturists, suggests that it does not. There were no differences in success rates between patients treated by physicians who had passed through shorter (A diploma) or longer (B diploma) training courses in acupuncture.
But these were just a single trial and a post-hoc analysis of another study; by definition, neither can be fully definitive. Fortunately, we have more evidence based on much larger numbers. This brand-new meta-analysis aimed to evaluate whether there are characteristics of acupuncture or acupuncturists that are associated with better or worse outcomes.
An existing dataset, developed by the Acupuncture Trialists’ Collaboration, included 29 trials of acupuncture for chronic pain with individual data involving 17,922 patients. The available data on characteristics of acupuncture included style of acupuncture, point prescription, location of needles, use of electrical stimulation and moxibustion, number, frequency and duration of sessions, number of needles used and acupuncturist experience. Random-effects meta-regression was used to test the effect of each characteristic on the main effect estimate of pain. Where sufficient patient-level data were available, patient-level analyses were conducted.
When comparing acupuncture to sham controls, there was little evidence that the effects of acupuncture on pain were modified by any of the acupuncture characteristics evaluated, including style of acupuncture, the number or placement of needles, the number, frequency or duration of sessions, patient-practitioner interactions and the experience of the acupuncturist. When comparing acupuncture to non-acupuncture controls, there was little evidence that these characteristics modified the effect of acupuncture, except that better pain outcomes were observed when more needles were used and, in a patient-level analysis involving a subset of 5 trials, when a higher number of acupuncture treatment sessions was provided.
The authors of this meta-analysis concluded that there was little evidence that different characteristics of acupuncture or acupuncturists modified the effect of treatment on pain outcomes. Increased number of needles and more sessions appear to be associated with better outcomes when comparing acupuncture to non-acupuncture controls, suggesting that dose is important. Potential confounders include differences in control group and sample size between trials. Trials to evaluate potentially small differences in outcome associated with different acupuncture characteristics are likely to require large sample sizes.
My reading of these collective findings is that it does not matter which type of acupuncture you use nor who uses it; the clinical effects are similar regardless of the most obvious potential determinants. Hardly surprising! In fact, one would expect such results, if one considered that acupuncture is a placebo-treatment.
What is ear acupressure?
Proponents claim that ear-acupressure is commonly used by Chinese medicine practitioners… It is like acupuncture but does not use needles. Instead, small round pellets are taped to points on one ear. Ear-acupressure is a non-invasive, painless, low cost therapy and no significant side effects have been reported.
Ok, but does it work?
There is a lot of money being made with the claim that ear acupressure (EAP) is effective, especially for smoking cessation; entrepreneurs sell gadgets for applying the pressure on the ear, and practitioners earn their living through telling their patients that this therapy is helpful. There are hundreds of websites with claims like this one: Auricular therapy (Acupressure therapy of the ear region) has been used successfully for Smoking cessation. Auriculotherapy is thought to be 7 times more powerful than other methods used for smoking cessation; a single auriculotherapy treatment has been shown to reduce smoking from 20 or more cigarettes a day down to 3 to 5 a day.
But what does the evidence show?
This new study investigated the efficacy of EAP as a stand-alone intervention for smoking cessation. Adult smokers were randomised to receive EAP specific for smoking cessation (SSEAP) or a non-specific EAP (NSEAP) intervention, EAP at points not typically used for smoking cessation. Participants received 8 weekly treatments and were requested to press the five pellets taped to one ear at least three times per day. Participants were followed up for three months. The primary outcome measures were a 7-day point-prevalence cessation rate confirmed by exhaled carbon monoxide and relief of nicotine withdrawal symptoms (NWS).
Forty-three adult smokers were randomly assigned to SSEAP (n = 20) or NSEAP (n = 23) groups. The dropout rate was high, with 19 participants completing the treatments and 12 remaining at follow-up. One participant from the SSEAP group had confirmed cessation at week 8 and at the end of follow-up (5%), but there was no difference between groups for confirmed cessation or NWS. Adverse events were few and minor.
And is there a systematic review of the totality of the evidence?
Sure, the current Cochrane review arrives at the following conclusion: There is no consistent, bias-free evidence that acupuncture, acupressure, laser therapy or electrostimulation are effective for smoking cessation…
Yes, we may well ask! If most TCM practitioners use EAP or acupuncture for smoking cessation telling their customers that it works (and earning good money when doing so), while the evidence fails to show that this is true, what should we say about such behaviour? I don’t know about you, but I find it thoroughly dishonest.
Irritable bowel syndrome (IBS) is common and often difficult to treat – unless, of course, you consult a homeopath. Here is just one of virtually thousands of quotes from homeopaths available on the Internet: Homeopathic medicine can reduce Irritable Bowel Syndrome (IBS) symptoms by lowering food sensitivities and allergies. Homeopathy treats the patient as a whole and does not simply focus on the disease. Careful attention is given to the minute details about the presenting complaints, including the severity of diarrhea, constipation, pain, cramps, mucus in the stools, nausea, heartburn, emotional triggers and conventional laboratory findings. In addition, the patient’s eating habits, food preferences, thermal attributes and sleep patterns are noted. The patient’s family history and diseases, along with the patient’s emotions are discussed. Then the homeopathic practitioner will select the remedy that most closely matches the symptoms.
Such optimism might be refreshing, but is there any reason for it? Is homeopathy really an effective treatment for IBS? To answer this question, we now have a brand-new Cochrane review. The aim of this review was to assess the effectiveness and safety of homeopathic treatment for treating irritable bowel syndrome (IBS). (This type of statement always makes me a little suspicious; how on earth can anyone truly assess the safety of a treatment by looking at a few studies? This is NOT how one evaluates safety!) The authors conducted extensive literature searches to identify all RCTs, cohort and case-control studies that compared homeopathic treatment with placebo, other control treatments, or usual care in adults with IBS. The primary outcome was global improvement in IBS.
Three RCTs with a total of 213 participants were included. No cohort or case-control studies were identified. Two studies compared homeopathic remedies to placebos for constipation-predominant IBS. One study compared individualised homeopathic treatment to usual care defined as high doses of dicyclomine hydrochloride, faecal bulking agents and a high fibre diet. Due to the low quality of reporting, the risk of bias in all three studies was unclear on most criteria and high for some criteria.
A meta-analysis of two studies with a total of 129 participants with constipation-predominant IBS found a statistically significant difference in global improvement between the homeopathic ‘asafoetida’ and placebo at a short-term follow-up of two weeks. Seventy-three per cent of patients in the homeopathy group improved compared to 45% of placebo patients. There was no statistically significant difference in global improvement between the homeopathic asafoetida plus nux vomica compared to placebo. Sixty-eight per cent of patients in the homeopathy group improved compared to 52% of placebo patients.
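To get a rough sense of why 73% versus 45% comes out statistically significant in a pooled sample of 129, one can run a simple two-by-two chi-square test. Note that the group sizes are not reported above, so a roughly even split is ASSUMED here purely for illustration:

```python
# Two-by-two chi-square test (with Yates' continuity correction) for the
# pooled result of 73% vs 45% improvement. Group sizes are NOT reported
# in the review summary above, so an even split of 129 is ASSUMED.
n_homeo, n_placebo = 65, 64
improved = [round(0.73 * n_homeo), round(0.45 * n_placebo)]   # 47 and 29
table = [
    [improved[0], n_homeo - improved[0]],    # asafoetida: improved / not
    [improved[1], n_placebo - improved[1]],  # placebo:    improved / not
]

n = n_homeo + n_placebo
row = [sum(r) for r in table]
col = [table[0][0] + table[1][0], table[0][1] + table[1][1]]

chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row[i] * col[j] / n
        chi2 += (abs(table[i][j] - expected) - 0.5) ** 2 / expected  # Yates

print(round(chi2, 2), chi2 > 3.84)  # 3.84 = critical value for p < 0.05, df = 1
```

A significant difference in such a small, poorly reported pooled sample is, of course, exactly the kind of result the review authors warn should be interpreted with caution.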
The overall quality of the evidence was very low. There was no statistically significant difference between individualised homeopathic treatment and usual care for the outcome “feeling unwell”. None of the studies reported on adverse events (which, by the way, should be seen as a breach of research ethics on the part of the authors of the three primary studies).
The authors concluded that a pooled analysis of two small studies suggests a possible benefit for clinical homeopathy, using the remedy asafoetida, over placebo for people with constipation-predominant IBS. These results should be interpreted with caution due to the low quality of reporting in these trials, high or unknown risk of bias, short-term follow-up, and sparse data. One small study found no statistically significant difference between individualised homeopathy and usual care (defined as high doses of dicyclomine hydrochloride, faecal bulking agents and diet sheets advising a high fibre diet). No conclusions can be drawn from this study due to the low number of participants and the high risk of bias in this trial. In addition, it is likely that usual care has changed since this trial was conducted. Further high quality, adequately powered RCTs are required to assess the efficacy and safety of clinical and individualised homeopathy compared to placebo or usual care.
THIS REVIEW REQUIRES A FEW FURTHER COMMENTS, I THINK
Asafoetida, the remedy used in two of the studies, is a plant native to Pakistan, Iran and Afghanistan. It is used in Ayurvedic herbal medicine to treat colic, intestinal parasites and irritable bowel syndrome. In the ‘homeopathic’ trials, asafoetida was used in relatively low dilutions, i.e. dilutions that still contain molecules of the starting material. It is therefore debatable whether this was really homeopathy or whether it was more akin to herbal medicine – it was certainly not homeopathy with its typical ultra-high dilutions.
Regardless of this detail, the Cochrane review hardly provides sound evidence for homeopathy’s efficacy. On the contrary, my reading of its findings is that the ‘possible benefit’ is NOT real but a false-positive result caused by the serious limitations of the original studies. The authors stress that the apparently positive result ‘should be interpreted with caution’; that is certainly correct.
So, if you are a proponent of homeopathy, as the authors of the review seem to be, you will claim that homeopathy offers ‘possible benefits’ for IBS-sufferers. But if you are not convinced of the merits of homeopathy, you might suggest that the evidence is insufficient to recommend homeopathy. I imagine that IBS-sufferers might get as frustrated with such confusion as most scientists will be. Yet there is hope; the answer could be imminent: apparently, a new trial is to report its results within this year.
IS THIS NEW TRIAL GOING TO CONTRIBUTE MEANINGFULLY TO OUR KNOWLEDGE?
It is a three-armed study (same 1st author as in the Cochrane review) which, according to its authors, seeks to explore the effectiveness of individualised homeopathic treatment plus usual care compared to both an attention control plus usual care and usual care alone, for patients with IBS. (Why “explore” and not “determine”, I ask myself.) Patients are randomly selected to be offered 5 sessions of homeopathic treatment plus usual care, 5 sessions of supportive listening plus usual care, or usual care alone. (“To be offered” looks odd to me; does that mean patients are not blinded to the interventions? Yes, indeed it does.) The primary clinical outcome is the IBS Symptom Severity at 26 weeks. Analysis will be by intention to treat and will compare homeopathic treatment with usual care at 26 weeks as the primary analysis, and homeopathic treatment with supportive listening as an additional analysis.
Hold on… the primary analysis “will compare homeopathic treatment with usual care”. Are they pulling my leg? They just told me that patients will be “offered 5 sessions of homeopathic treatment plus usual care… or usual care alone”.
Oh, I see! We are again dealing with an A+B versus B design, and, on top of that, one without patient- or therapist-blinding. This type of analysis cannot ever produce a negative result, even if the experimental treatment is a pure placebo: placebo + usual care is always more than usual care alone. IBS-patients will certainly experience benefit from having the homeopaths’ time, empathy and compassion – never mind the remedies they get from them. And for the secondary analyses, things do not seem to be much more rigorous either.
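The arithmetic of the A+B versus B design can be made explicit in a toy simulation. Here the add-on ‘remedy’ is given a specific effect of exactly zero, while the extra attention carries a nonspecific (placebo) benefit; all numbers are invented purely for illustration:

```python
import numpy as np

# Toy simulation of an 'A+B versus B' trial, assuming (as argued above)
# that the add-on remedy has a specific effect of exactly ZERO, while
# the extra attention carries a nonspecific (placebo) benefit.
# All numbers are invented purely for illustration.
rng = np.random.default_rng(0)

n = 200                      # patients per arm
specific_effect = 0.0        # the remedy itself does nothing
nonspecific_effect = 3.0     # time, empathy, expectation

# Improvement scores: B arm (usual care alone) vs A+B arm (add-on + usual care)
usual_care = rng.normal(loc=10.0, scale=5.0, size=n)
add_on = rng.normal(loc=10.0 + specific_effect + nonspecific_effect,
                    scale=5.0, size=n)

diff = add_on.mean() - usual_care.mean()
print(round(diff, 2))  # positive, despite the zero specific effect
```

The group difference comes out positive every time the nonspecific benefit is non-zero, which is why such a design can only ever ‘confirm’ the add-on treatment.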
Do we really need more trials of this nature? The Cochrane review shows that we currently have three studies which are too flimsy to be interpretable. What difference will a further flimsy trial make in this situation? When will we stop wasting time and money on such useless ‘research’? All it can possibly achieve is that apologists of homeopathy will misinterpret the results and suggest that they demonstrate efficacy.
Obviously, I have not seen the data (they have not yet been published) but I think I can nevertheless predict the conclusions of the primary analysis of this trial; they will read something like this: HOMEOPATHY PROVED TO BE SIGNIFICANTLY MORE EFFECTIVE THAN USUAL CARE. I have asked the question before and I do it again: when does this sort of ‘research’ cross the line into the realm of scientific misconduct?
Some experts concede that chiropractic spinal manipulation is effective for chronic low back pain (cLBP). But what is the right dose? There have been no full-scale trials of the optimal number of treatments with spinal manipulation. This study was aimed at filling this gap by trying to identify a dose-response relationship between the number of visits to a chiropractor for spinal manipulation and cLBP outcomes. A further aim was to determine the efficacy of manipulation by comparison with a light massage control.
The primary cLBP outcomes were the 100-point pain intensity scale and functional disability scales evaluated at the 12- and 24-week primary end points. Secondary outcomes included days with pain and functional disability, pain unpleasantness, global perceived improvement, medication use, and general health status.
One hundred patients with cLBP were randomized to each of 4 dose levels of care: 0, 6, 12, or 18 sessions of spinal manipulation from a chiropractor. Participants were treated three times per week for 6 weeks. At sessions when manipulation was not assigned, the patients received a focused light massage control. Covariate-adjusted linear dose effects and comparisons with the no-manipulation control group were evaluated at 6, 12, 18, 24, 39, and 52 weeks.
For the primary outcomes, mean pain and disability improvement in the manipulation groups were 20 points by 12 weeks, an effect that was sustained through 52 weeks. Linear dose-response effects were small, reaching about two points per 6 manipulation sessions at 12 and 52 weeks for both variables. At 12 weeks, the greatest differences compared to the no-manipulation controls were found for 12 sessions (8.6 pain and 7.6 disability points); at 24 weeks, differences were negligible; and at 52 weeks, the greatest group differences were seen for 18 visits (5.9 pain and 8.8 disability points).
The authors concluded that the number of spinal manipulation visits had modest effects on cLBP outcomes above those of 18 hands-on visits to a chiropractor. Overall, 12 visits yielded the most favorable results but was not well distinguished from other dose levels.
This study is interesting because it confirms that the effects of chiropractic spinal manipulation as a treatment for cLBP are tiny and probably not clinically relevant. And even these tiny effects might not be due to the treatment per se but could be caused by residual confounding and bias.
As for the optimal dose, the authors suggest that, on average, 18 sessions might be the best. But again, we have to be clear that the dose-response effects were small and of doubtful clinical relevance. Since the therapeutic effects are tiny, it is obviously difficult to establish a dose-response relationship.
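To make the size of such a ‘small linear dose effect’ concrete, here is a sketch with invented group means chosen to reproduce roughly two points of extra improvement per six additional sessions, on the trial's 100-point scale:

```python
import numpy as np

# Hypothetical group means chosen to illustrate a linear dose effect of
# about two points per six sessions; these are NOT the trial's data.
doses = np.array([0, 6, 12, 18])                  # manipulation sessions
improvement = np.array([14.0, 16.0, 18.0, 20.0])  # pain improvement, 100-pt scale

# Least-squares straight line through the four group means.
slope, intercept = np.polyfit(doses, improvement, 1)
gain_per_6_sessions = slope * 6
print(round(gain_per_6_sessions, 2))  # small relative to the 100-point scale
```

Against a 100-point pain scale and a roughly 20-point overall improvement, a two-point dose effect is indeed of doubtful clinical relevance.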
In view of the cost of chiropractic spinal manipulation and the uncertainty about its safety, I would probably not rate this approach as the treatment of choice but would consider the current Cochrane review, which concludes that “high quality evidence suggests that there is no clinically relevant difference between spinal manipulation and other interventions for reducing pain and improving function in patients with chronic low-back pain”. Personally, I think it is more prudent to recommend exercise, back school, massage or perhaps even yoga to cLBP-sufferers.
Some sceptics are convinced that, in alternative medicine, there is no evidence. This assumption is wrong, I am afraid, and statements of this nature can actually play into the hands of apologists of bogus treatments: they can then easily demonstrate the sceptics to be mistaken or “biased”, as they would probably say. The truth is that there is plenty of evidence – and lots of it is positive, at least at first glance.
Alternative medicine researchers have been very industrious during the last two decades to build up a sizable body of ‘evidence’. Consequently, one often finds data even for the most bizarre and implausible treatments. Take, for instance, the claim that homeopathy is an effective treatment for cancer. Those who promote this assumption have no difficulties in locating some weird in-vitro study that seems to support their opinion. When sceptics subsequently counter that in-vitro experiments tell us nothing about the clinical situation, apologists quickly unearth what they consider to be sound clinical evidence.
An example is this prospective observational 2011 study of cancer patients from two differently treated cohorts: one with patients under complementary homeopathic treatment (HG; n = 259), and one with conventionally treated cancer patients (CG; n = 380). Its main outcome measures were the change in quality of life after 3 months and after one year, as well as impairment by fatigue, anxiety or depression. The results of this study show significant improvements in most of these endpoints, and the authors concluded that we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment.
Another, in some ways even better example is this 2005 observational study of 6544 consecutive patients from the Bristol Homeopathic Hospital. Every patient attending the hospital outpatient unit for a follow-up appointment was included, commencing with their first follow-up attendance. Of these patients 70.7% (n = 4627) reported positive health changes, with 50.7% (n = 3318) recording their improvement as better or much better. The authors concluded that homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases.
The principle that is being followed here is simple:
- believers in a bogus therapy conduct a clinical trial which is designed to generate an apparently positive finding;
- the fact that the study cannot tell us anything about cause and effect is cleverly hidden or belittled;
- they publish their findings in one of the many journals that specialise in this sort of nonsense;
- they make sure that advocates across the world learn about their results;
- the community of apologists of this treatment picks up the information without the slightest critical analysis;
- the researchers conduct more and more of such pseudo-research;
- nobody attempts to do some real science: the believers do not truly want to falsify their hypotheses, and the real scientists find it unreasonable to conduct research on utterly implausible interventions;
- thus the body of false or misleading ‘evidence’ grows and grows;
- proponents start publishing systematic reviews and meta-analyses of their studies which are devoid of critical input;
- too few critics point out that these reviews are fatally flawed – ‘rubbish in, rubbish out’!
- eventually politicians, journalists, health care professionals and other people who did not necessarily start out as believers in the bogus therapy are convinced that the body of evidence is impressive and justifies implementation;
- important health care decisions are thus based on data which are false and misleading.
So, what can be done to prevent such pseudo-evidence from being mistaken for solid proof, which might eventually mislead many into believing that bogus treatments are based on reasonably sound data? I think the following measures would be helpful:
- authors should abstain from publishing over-enthusiastic conclusions which can all too easily be misinterpreted (given that the authors are believers in the therapy, this is not a realistic option);
- editors might consider rejecting studies which contribute next to nothing to our current knowledge (given that these studies are usually published in journals that are in the business of promoting alternative medicine at any cost, this option is also not realistic);
- if researchers report highly preliminary findings, there should be an obligation to do further studies in order to confirm or refute the initial results (not realistic either, I am afraid);
- in case this does not happen, editors should consider retracting the paper reporting unconfirmed preliminary findings (utterly unrealistic).
What then can REALISTICALLY be done? I wish I knew the answer! All I can think of is that sceptics should educate the rest of the population to think and analyse such ‘evidence’ critically…but how realistic is that?
According to its authors, this RCT was aimed at investigating the 1) specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression. In particular the second research question is intriguing, I think – so let’s have a closer look at this trial.
The study was designed as a randomized, partially double-blind, placebo-controlled, four-armed, 2×2 factorial trial with a 6-week study duration. A total of 44 patients were randomized (2:1:2:1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment, and was thus underpowered for the pre-planned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance. The mean difference in the Hamilton-D score after 6 weeks was 2.0 (95% CI -1.2 to 5.2) for Q-potencies vs. placebo, and -3.1 (95% CI -5.9 to -0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant differences between homeopathic Q-potencies and placebo, or between homeopathic and conventional case taking, were observed. The frequency of adverse events was comparable for all groups.
The conclusions were remarkable: although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting.
Alright, the authors encountered problems in recruiting enough patients and they therefore decided to stop the trial early. This sort of thing happens. Most researchers would then not publish any data at all. This team, however, did publish a report, and the decision to do so might be perfectly fine: other investigators might learn from the problems which led to early termination of the study.
But why do they conclude that the results were INCONCLUSIVE? I think the results were not inconclusive but non-existent; there were no results to report other than those related to the recruitment problems. And even if one insists on presenting outcome data as an exploratory analysis, one cannot honestly say they were INCONCLUSIVE; all one might state in this case is that the results failed to show an effect of the remedy or of the consultation. This is far less favourable for homeopathy than stating the results were INCONCLUSIVE.
And why on earth do the authors conclude “we cannot recommend undertaking a further trial addressing this question in a similar setting”? This does not make the slightest sense to me. If the trialists encountered recruitment problems, others might find ways of overcoming them. The research question asking whether the effects of an extensive homeopathic case taking differ from those of a shorter conventional one seems important. If answered accurately, it could disentangle much of the confusion that surrounds clinical trials of homeopathy.
I have repeatedly commented on the odd conclusions drawn by proponents of alternative medicine on the basis of data that did not quite fulfil their expectations, and I often ask myself at what point this ‘prettification’ of the results via false positive conclusions crosses the line to scientific misconduct. My theory is that these conclusions appear odd to those capable of critical analysis because the authors bend over backwards in order to conclude more positively than the data would seem to permit. If we see it this way, such conclusions might even prove useful as a fairly sensitive ‘bullshit-detector’.
We have probably all fallen into the trap of thinking that something which has stood the ‘test of time’, i.e. something that has been used for centuries with apparent success, must be ok. In alternative medicine, this belief is extremely widespread, and one could argue that the entire sector is built on it. Influential proponents of ‘traditional’ medicine like Prince Charles do their best to strengthen this assumption. Sadly, however, it is easily disclosed as a classical fallacy: things that have stood the ‘test of time’ might work, of course, but the ‘test of time’ itself is never proof of anything.
A recent study brought this message home loud and clear. This trial tested the efficacy of Rhodiola crenulata (R. crenulata), a traditional remedy which has been used widely in the Himalayan areas and in Tibet to prevent acute mountain sickness. As no scientific studies of this traditional treatment existed, the researchers conducted a double-blind, placebo-controlled crossover RCT to test its efficacy in acute mountain sickness prevention.
Healthy adult volunteers were randomized to two treatment sequences, receiving either 800 mg R. crenulata extract or placebo daily for 7 days before ascent and two days during mountaineering. After a three-month wash-out period, they were crossed over to the alternate treatment. On each occasion, the participants ascended rapidly from 250 m to 3421 m. The primary outcome measure was the incidence of acute mountain sickness with headache and at least one of the symptoms of nausea or vomiting, fatigue, dizziness, or difficulty sleeping.
One hundred and two participants completed the trial. No significant differences in the incidence of acute mountain sickness were found between R. crenulata extract and placebo groups. If anything, the incidence of severe acute mountain sickness with Rhodiola extract was slightly higher compared to the one with placebo: 35.3% vs. 29.4%.
In other words, R. crenulata extract was not effective in reducing the incidence or severity of acute mountain sickness as compared to placebo.
Similar examples could be found by the dozen. They demonstrate very clearly that the notion of the ‘test of time’ is erroneous: a treatment which has a long history of usage is not necessarily effective (or safe) – not only that, it might even be dangerous. The true value of a therapy cannot be judged by experience; to be sure of it, we need rigorous clinical trials. Acute mountain sickness is a potentially life-threatening condition for which there are reasonably effective treatments. If people relied on the ‘ancient wisdom’ instead of using a therapy that actually works, they might pay for their error with their lives. The sooner alternative medicine proponents realise that, the better.
Acupressure is a treatment-variation of acupuncture; instead of sticking needles into the skin, pressure is applied over ‘acupuncture points’ which is supposed to provide a stimulus similar to needling. Therefore the effects of both treatments should theoretically be similar.
Acupressure could have several advantages over acupuncture:
- it can be used for self-treatment
- it is suitable for people with needle-phobia
- it is painless
- it is not invasive
- it carries fewer risks
- it could be cheaper
But is acupressure really effective? What do the trial data tell us? Our own systematic review concluded that the effectiveness of acupressure is currently not well documented for any condition. But now there is a new study which might change this negative verdict.
The primary objective of this 3-armed RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care alone in the management of chemotherapy-induced nausea. 500 patients from outpatient chemotherapy clinics in three regions in the UK involving 14 different cancer units/centres were randomised to the wristband arm, the sham wristband arm and the standard care only arm. Participants were chemotherapy-naive cancer patients receiving chemotherapy of low, moderate and high emetogenic risk. The experimental group were given acupressure wristbands pressing the P6 point (anterior surface of the forearm). The Rhodes Index for Nausea/Vomiting, the Multinational Association of Supportive Care in Cancer (MASCC) Antiemesis Tool and the Functional Assessment of Cancer Therapy General (FACT-G) served as outcome measures. At baseline, participants completed measures of anxiety/depression, nausea/vomiting expectation and expectations from using the wristbands.
Data were available for 361 participants for the primary outcome. The primary outcome analysis (nausea in cycle 1) revealed no statistically significant differences between the three arms. The median nausea experience in patients using wristbands (both real and sham ones) was somewhat lower than that in the anti-emetics only group (median nausea experience scores for the four cycles: standard care arm 1.43, 1.71, 1.14, 1.14; sham acupressure arm 0.57, 0.71, 0.71, 0.43; acupressure arm 1.00, 0.93, 0.43, 0). Women responded more favourably to the use of sham acupressure wristbands than men (odds ratio 0.35 for men and 2.02 for women in the sham acupressure group; 1.27 for men and 1.17 for women in the acupressure group). No significant differences were detected in relation to vomiting outcomes, anxiety and quality of life. Some transient adverse effects were reported, including tightness in the area of the wristbands, feeling uncomfortable when wearing them and minor swelling in the wristband area (n = 6). There were no statistically significant differences in the costs associated with the use of real acupressure bands.
Twenty-six participants took part in qualitative interviews. They perceived the wristbands (both real and sham) as effective and helpful in managing their nausea during chemotherapy.
The authors concluded that there were no statistically significant differences between the three arms in terms of nausea, vomiting and quality of life, although apparent resource use was less in both the real acupressure arm and the sham acupressure arm compared with standard care only; therefore, no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting. However, the study provided encouraging evidence in relation to an improved nausea experience and some indications of possible cost savings to warrant further consideration of acupressure both in practice and in further clinical trials.
I could argue about several of the methodological details of this study. But I resist the temptation in order to focus on just one single point which I find important and which has implications beyond the realm of acupressure.
Why on earth do the authors conclude that no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting? The stated aim of this RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care. The results failed to show significant differences in the primary outcome measure; consequently the conclusion cannot be “unclear”, it has to be that ACUPRESSURE WRIST BANDS ARE NOT MORE EFFECTIVE THAN SHAM ACUPRESSURE WRIST BANDS AS AN ADJUNCT TO ANTI-EMETIC DRUG TREATMENT (or words to that effect).
As long as RCTs of alternative therapies are run by evangelical believers in the respective therapy, we are bound to regularly encounter this lamentable phenomenon of white-washing negative findings with an inadequate conclusion. In my view, this is not research or science; it is pseudo-research or pseudo-science. And it is much more than a nuisance or a trivial matter: it is a waste of research funds and an abuse of patients’ good will, and it has reached a point where people will lose trust in alternative medicine research. Someone should really do a systematic study to identify those research teams that regularly commit such scientific misconduct and ensure that they are cut off from public funding and support.
This post will probably work best, if you have read the previous one describing how the parallel universe of acupuncture research insists on going in circles in order to avoid admitting that their treatment might not be as effective as they pretend. The way they achieve this is fairly simple: they conduct trials that are designed in such a way that they cannot possibly produce a negative result.
A brand-new investigation which was recently vociferously touted via press releases etc. as a major advance in proving the effectiveness of acupuncture is an excellent case in point. According to its authors, the aim of this study was to evaluate acupuncture versus usual care and counselling versus usual care for patients who continue to experience depression in primary care. This sounds alright, but wait!
755 patients with depression were randomised to one of three arms: 1) acupuncture, 2) counselling, or 3) usual care alone. The primary outcome was the difference in mean Patient Health Questionnaire (PHQ-9) scores at 3 months, with secondary analyses over 12 months of follow-up. Analysis was by intention-to-treat. PHQ-9 data were available for 614 patients at 3 months and 572 patients at 12 months. Patients attended a mean of 10 sessions for acupuncture and 9 sessions for counselling. Compared to usual care, there was a statistically significant reduction in mean PHQ-9 depression scores at 3 and 12 months for both acupuncture and counselling.
From this, the authors conclude that both interventions were associated with significantly reduced depression at 3 months when compared to usual care alone.
Acupuncture for depression? Really? Our own systematic review with co-authors who are the most ardent apologists of acupuncture I have come across showed that the evidence is inconsistent on whether manual acupuncture is superior to sham… Therefore, I thought it might be a good idea to have a closer look at this new study.
One needs to search this article very closely indeed to find out that the authors did not actually evaluate acupuncture versus usual care and counselling versus usual care at all, and that comparisons were not made between acupuncture, counselling, and usual care (hints like the use of the word “alone” are all we get to guess that the authors’ text is outrageously misleading). Not even the methods section informs us what really happened in this trial. You find this hard to believe? Here is the unabbreviated part of the article that describes the interventions applied:
Patients allocated to the acupuncture and counselling groups were offered up to 12 sessions usually on a weekly basis. Participating acupuncturists were registered with the British Acupuncture Council with at least 3 years post-qualification experience. An acupuncture treatment protocol was developed and subsequently refined in consultation with participating acupuncturists. It allowed for customised treatments within a standardised theory-driven framework. Counselling was provided by members of the British Association for Counselling and Psychotherapy who were accredited or were eligible for accreditation having completed 400 supervised hours post-qualification. A manualised protocol, using a humanistic approach, was based on competences independently developed for Skills for Health. Practitioners recorded in logbooks the number and length of sessions, treatment provided, and adverse events. Further details of the two interventions are presented in Tables S2 and S3. Usual care, both NHS and private, was available according to need and monitored for all patients in all three groups for the purposes of comparison.
It is only in the results tables that we can determine what treatments were actually given; and these were:
1) Acupuncture PLUS usual care (i.e. medication)
2) Counselling PLUS usual care
3) Usual care
It’s almost a ‘no-brainer’ that, if you compare A+B with B (or, in this three-armed study, A+B vs C+B vs B), you will find that the former is more effective than the latter – unless A has a negative effect, of course. As acupuncture has significant placebo effects, it can never be negative, and thus the result of this trial was a foregone conclusion. As, in alternative medicine, one seems to need experimental proof even for ‘no-brainers’, we demonstrated some time ago that this common-sense theory is correct by conducting a systematic review of all acupuncture trials with such a design. We concluded that the ‘A + B versus B’ design is prone to false-positive results… What makes this whole thing even worse is the fact that I once presented our review in a lecture at which the lead author of the new trial was in the audience; so there can be no excuse of being unaware of the ‘no-brainer’.
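The pitfall of the ‘A + B versus B’ design can be illustrated with a toy simulation (the numbers below are entirely hypothetical and not taken from any trial): if treatment A adds nothing but a modest non-specific (placebo) effect on top of usual care B, the comparison still comes out ‘statistically significant’ in the vast majority of simulated trials.

```python
import random
import statistics

def t_stat(x, y):
    """Two-sample t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / (pooled * (1 / nx + 1 / ny)) ** 0.5

def positive_rate(placebo_effect, n_per_arm=100, n_trials=500, seed=1):
    """Fraction of simulated 'A+B vs B' trials declared significant (|t| > 1.96).

    Both arms share the same usual-care improvement (mean 1.0, SD 1.0, in
    arbitrary symptom-score units); the A+B arm additionally gets a purely
    non-specific effect of 'placebo_effect' standard deviations.
    """
    rng = random.Random(seed)
    positives = 0
    for _ in range(n_trials):
        b_arm = [rng.gauss(1.0, 1.0) for _ in range(n_per_arm)]
        ab_arm = [rng.gauss(1.0 + placebo_effect, 1.0) for _ in range(n_per_arm)]
        if abs(t_stat(ab_arm, b_arm)) > 1.96:
            positives += 1
    return positives / n_trials

print(positive_rate(0.5))  # placebo-only A: the trial is 'positive' in most runs
print(positive_rate(0.0))  # no effect at all: roughly the nominal 5% false-positive rate
```

In other words, with a half-standard-deviation placebo effect and 100 patients per arm, the design delivers a ‘significant’ result almost every time, even though A contributes no specific effect whatsoever – which is precisely why such trials cannot establish efficacy.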
Some might argue that this is a pragmatic trial, that it would have been unethical to not give anti-depressants to depressed patients and that therefore it was not possible to design this study differently. However, none of these arguments are convincing, if you analyse them closely (I might leave that to the comment section, if there is interest in such aspects). At the very minimum, the authors should have explained in full detail what interventions were given; and that means disclosing these essentials even in the abstract (and press release) – the part of the publication that is most widely read and quoted.
It is arguably unethical to ask patients’ co-operation, use research funds etc. for a study, the results of which were known even before the first patient had been recruited. And it is surely dishonest to hide the true nature of the design so very sneakily in the final report.
In my view, this trial begs at least 5 questions:
1) How on earth did it pass the peer review process of one of the most highly reputed medical journals?
2) How did the protocol get ethics approval?
3) How did it get funding?
4) Does the scientific community really allow itself to be fooled by such pseudo-research?
5) What do I do to not get depressed by studies of acupuncture for depression?
Has it ever occurred to you that much of the discussion about cause and effect in alternative medicine goes in circles without ever making progress? I have come to the conclusion that it does. Here I try to illustrate this point using the example of acupuncture, more precisely the endless discussion about how best to test acupuncture for efficacy. For those readers who like to misunderstand me, I should explain that the sceptics’ view is in capital letters.
At the beginning there was the experience. Unaware of anatomy, physiology, pathology etc., people started sticking needles into other people’s skin some 2000 years ago and observed relief of all sorts of symptoms. When an American journalist reported on this phenomenon in the 1970s, acupuncture became all the rage in the West. Acupuncture-fans then claimed that a 2000-year history is ample proof that acupuncture does work.
BUT ANECDOTES ARE NOTORIOUSLY UNRELIABLE!
Even the most enthusiastic advocates conceded that this is probably true. So they documented detailed case-series of lots of patients, calculated the average difference between the pre- and post-treatment severity of symptoms, submitted it to statistical tests, and published the notion that the effects of acupuncture are not just anecdotal; in fact, they are statistically significant, they said.
BUT THIS EFFECT COULD BE DUE TO THE NATURAL HISTORY OF THE CONDITION!
“True enough”, grumbled the acupuncture-fans and conducted the very first controlled clinical trials. Essentially they treated one group of patients with acupuncture while another group received conventional treatments as usual. When they analysed the results, they found that the acupuncture group had improved significantly more. “Now do you believe us?”, they asked triumphantly, “acupuncture is clearly effective”.
NO! THIS OUTCOME MIGHT BE DUE TO SELECTION BIAS. SUCH A STUDY-DESIGN CANNOT ESTABLISH CAUSE AND EFFECT.
The acupuncturists felt slightly embarrassed because they had not thought of that. They had allocated their patients to the treatment according to patients’ choice. Thus the expectation of the patients (or the clinician) to get relief from acupuncture might have been the reason for the difference in outcome. So they consulted an expert in trial-design and were advised to allocate not by choice but by chance. In other words, they repeated the previous study but randomised patients to the two groups. Amazingly, their RCT still found a significant difference favouring acupuncture over treatment as usual.
BUT THIS DIFFERENCE COULD BE CAUSED BY A PLACEBO-EFFECT!
Now the acupuncturists were in a bit of a pickle; as far as they could see, there was no good placebo for acupuncture! Eventually some methodologist-chap came up with the idea that, in order to mimic a placebo, they could simply stick needles into non-acupuncture points. When the acupuncturists tried that method, they found that there were improvements in both groups but the difference between real acupuncture and placebo was tiny and usually neither statistically significant nor clinically relevant.
NOW DO YOU CONCEDE THAT ACUPUNCTURE IS NOT AN EFFECTIVE TREATMENT?
Absolutely not! The results merely show that needling non-acupuncture points is not an adequate placebo. Obviously this intervention also sends a powerful signal to the brain which clearly makes it an effective intervention. What do you expect when you compare two effective treatments?
IF YOU REALLY THINK SO, YOU NEED TO PROVE IT AND DESIGN A PLACEBO THAT IS INERT.
At that stage, the acupuncturists came up with a placebo-needle that did not actually penetrate the skin; it worked like a miniature stage dagger, telescoping into itself while giving the impression of penetrating the skin just like the real thing. Surely this was an adequate placebo! The acupuncturists repeated their studies but, to their utter dismay, they found again that both groups improved and the difference in outcome between their new placebo and true acupuncture was minimal.
WE TOLD YOU THAT ACUPUNCTURE WAS NOT EFFECTIVE! DO YOU FINALLY AGREE?
Certainly not, they replied. We have thought long and hard about these intriguing findings and believe that they can be explained just like the last set of results: the non-penetrating needles touch the skin; this touch provides a stimulus powerful enough to have an effect on the brain; the non-penetrating placebo-needles are not inert and therefore the results merely depict a comparison of two effective treatments.
YOU MUST BE JOKING! HOW ARE YOU GOING TO PROVE THAT BIZARRE HYPOTHESIS?
We had many discussions and consensus meetings amongst the most brilliant brains in acupuncture about this issue and have arrived at the conclusion that your obsession with placebo, cause and effect etc. is ridiculous and entirely misplaced. In real life, we don’t use placebos. So, let’s instead address the ‘real life’ question: is acupuncture better than usual treatment? We have conducted pragmatic studies where one group of patients gets treatment as usual and the other group receives acupuncture in addition. These studies show that acupuncture is effective. This is all the evidence we need. Why can you not believe us?
NOW WE HAVE ARRIVED EXACTLY AT THE POINT WHERE WE HAVE BEEN A LONG TIME AGO. SUCH A STUDY-DESIGN CANNOT ESTABLISH CAUSE AND EFFECT. YOU OBVIOUSLY CANNOT DEMONSTRATE THAT ACUPUNCTURE CAUSES CLINICAL IMPROVEMENT. THEREFORE YOU OPT TO PRETEND THAT CAUSE AND EFFECT ARE IRRELEVANT. YOU USE SOME IMITATION OF SCIENCE TO ‘PROVE’ THAT YOUR PRECONCEIVED IDEAS ARE CORRECT. YOU DO NOT SEEM TO BE INTERESTED IN THE TRUTH ABOUT ACUPUNCTURE AT ALL.