
Methodology

This meta-analysis was performed “to ascertain the effectiveness of oral aloe vera consumption on the reduction of fasting blood glucose (FBG) and hemoglobin A1c (HbA1c).”

PubMed, CINAHL, Natural Medicines Comprehensive Database, and Natural Standard databases were searched. The searches were limited to clinical trials or observational studies conducted in humans and published in English. Studies of aloe vera’s effect on FBG, HbA1c, homeostasis model assessment-estimated insulin resistance (HOMA-IR), fasting serum insulin, fructosamine, and oral glucose tolerance test (OGTT) in prediabetic and diabetic populations were examined.

Nine studies were included in the FBG parameter (n = 283); 5 of these studies included HbA1c data (n = 89). Aloe vera decreased FBG by 46.6 mg/dL (p < 0.0001) and HbA1c by 1.05% (p = 0.004). Significant reductions of both endpoints were maintained in all subgroup analyses. Additionally, the data suggested that patients with an FBG ≥200 mg/dL may see a greater benefit. A mean FBG reduction of 109.9 mg/dL was observed in this population (p ≤ 0.0001). There was evidence of publication bias with FBG but not with HbA1c.

The authors concluded that the results of this meta-analysis support the use of oral aloe vera for significantly reducing both FBG (46.6 mg/dL) and HbA1c (1.05%) in prediabetic and diabetic patients. However, given the current overall quality and relative scarcity of data, further clinical studies that are more robust and better controlled are warranted to confirm and further explore these findings.

Oh no, the results do not support the use of aloe vera at all!!

Why?

Because this ‘meta-analysis’ is of unacceptably poor quality. Here are just some of the flaws that render it totally useless, particularly for issuing advice such as above:

  • The authors included uncontrolled observational studies which make no attempt to control for non-specific effects.
  • In several studies, the use of concomitant anti-diabetic medications was allowed; it is therefore not possible to attribute cause and effect to aloe vera.
  • The search strategy was woefully inadequate; for instance, non-English publications were not considered.
  • There was no assessment of the scientific rigor of the included studies; this totally invalidates the reliability of the conclusions.
  • The included studies used widely different aloe vera preparations, and there is no way of knowing the dose of the active ingredients.

Diabetes is a serious condition that affects millions worldwide. If some of these patients are sufficiently gullible to follow the conclusions of this paper, they might be dead within a matter of days. This makes this article one of the most dangerous papers that I have seen in the ‘peer-reviewed’ literature of alternative medicine.

Who publishes such utter and irresponsible rubbish?

You may well ask.

The journal has been discussed on this blog before for the junk that regularly appears in its pages, and so has its editor-in-chief. The authors (and the reviewers) are not known to me, but one thing is for sure: they don’t know the first thing about conducting a decent systematic review/meta-analysis.

Acupuncture for hot flushes?

What next?

I know, to rational thinkers this sounds bizarre – but, actually, there are quite a few studies on the subject. Enough evidence for me to have published not one but four different systematic reviews on the subject.

The first (2009) concluded that “the evidence is not convincing to suggest acupuncture is an effective treatment of hot flash in patients with breast cancer. Further research is required to investigate whether there are specific effects of acupuncture for treating hot flash in patients with breast cancer.”

The second (also 2009) concluded that “sham-controlled RCTs fail to show specific effects of acupuncture for control of menopausal hot flushes. More rigorous research seems warranted.”

The third (again 2009) concluded that “the evidence is not convincing to suggest acupuncture is an effective treatment for hot flush in patients with prostate cancer. Further research is required to investigate whether acupuncture has hot-flush-specific effects.”

The fourth (2013), a Cochrane review, “found insufficient evidence to determine whether acupuncture is effective for controlling menopausal vasomotor symptoms. When we compared acupuncture with sham acupuncture, there was no evidence of a significant difference in their effect on menopausal vasomotor symptoms. When we compared acupuncture with no treatment there appeared to be a benefit from acupuncture, but acupuncture appeared to be less effective than HT. These findings should be treated with great caution as the evidence was low or very low quality and the studies comparing acupuncture versus no treatment or HT were not controlled with sham acupuncture or placebo HT. Data on adverse effects were lacking.”

And now, there is a new systematic review; its aim was to evaluate the effectiveness of acupuncture for treatment of hot flash in women with breast cancer. The searches identified 12 relevant articles for inclusion. The meta-analysis without any subgroup or moderator failed to show favorable effects of acupuncture on reducing the frequency of hot flashes after intervention (n = 680, SMD = −0.478, 95% CI −0.397 to 0.241, P = 0.632) but exhibited marked heterogeneity of the results (Q value = 83.200, P = 0.000, I² = 83.17, τ² = 0.310). The authors concluded that “the meta-analysis used had contradictory results and yielded no convincing evidence to suggest that acupuncture was an effective treatment of hot flash in patients with breast cancer. Multi-central studies including large sample size are required to investigate the efficiency of acupuncture for treating hot flash in patients with breast cancer.”

What follows from all this?

  • The collective evidence does NOT seem to suggest that acupuncture is a promising treatment for hot flushes of any aetiology.
  • The new paper is unimpressive, in my view. I don’t see the necessity for it, particularly as it fails to include a formal assessment of the methodological quality of the primary studies (contrary to what the authors state in the abstract) and because it merely includes articles published in English (with a therapy like acupuncture, such a strategy seems ridiculous, in my view).
  • I predict that future studies will suggest an effect – as long as they are designed such that they are open to bias.
  • Rigorous trials are unlikely to show an effect beyond placebo.
  • My own reviews typically state that MORE RESEARCH IS NEEDED. I regret such statements and would today no longer issue them.

The aim of a new meta-analysis was to estimate the clinical effectiveness and safety of acupuncture for amnestic mild cognitive impairment (AMCI), the transitional stage between the normal memory loss of aging and dementia. Randomised controlled trials (RCTs) of acupuncture versus medical treatment for AMCI were identified using six electronic databases.

Five RCTs involving a total of 568 subjects were included. The methodological quality of the RCTs was generally poor. Participants receiving acupuncture had better outcomes than those receiving nimodipine with greater clinical efficacy rates (odds ratio (OR) 1.78, 95% CI 1.19 to 2.65; p<0.01), mini-mental state examination (MMSE) scores (mean difference (MD) 0.99, 95% CI 0.71 to 1.28; p<0.01), and picture recognition score (MD 2.12, 95% CI 1.48 to 2.75; p<0.01). Acupuncture used in conjunction with nimodipine significantly improved MMSE scores (MD 1.09, 95% CI 0.29 to 1.89; p<0.01) compared to nimodipine alone. Three trials reported adverse events.

The authors concluded that acupuncture appears effective for AMCI when used as an alternative or adjunctive treatment; however, caution must be exercised given the low methodological quality of included trials. Further, more rigorously designed studies are needed.

Meta-analyses like this one are, in my view, perfect examples of the ‘rubbish in, rubbish out’ principle of systematic reviews. This may seem an unfair statement, so let me justify it by explaining the shortfalls of this specific paper.

The authors try to tell us that their aim was “to estimate the clinical effectiveness and safety of acupuncture…” While it might be possible to estimate the effectiveness of a therapy by pooling the data of a few RCTs, it is never possible to estimate its safety on such a basis. To conduct an assessment of therapeutic safety, one would need sample sizes that go two or three orders of magnitude beyond those of RCTs. Thus safety assessments are best done by evaluating all the available evidence, including case reports, epidemiological investigations and observational studies.

The authors tell us that “two studies did not report whether any adverse events or side effects had occurred in the experimental or control groups.” This is a common and serious flaw of many acupuncture trials, and another important reason why RCTs cannot be used for evaluating the risks of acupuncture. Too many such studies simply don’t mention adverse effects at all. If they are then submitted to systematic reviews, they are bound to generate a false positive picture of the safety of acupuncture. The absence of adverse effects reporting is a serious breach of research ethics. In the realm of acupuncture, it is so common that many reviewers do not even bother to discuss this violation of medical ethics as a major issue.

The authors conclude that acupuncture is more effective than nimodipine. This sounds impressive – unless you happen to know that nimodipine is not supported by good evidence either. A Cochrane review provided no convincing evidence that nimodipine is a useful treatment for the symptoms of dementia, either unclassified or according to the major subtypes – Alzheimer’s disease, vascular, or mixed Alzheimer’s and vascular dementia.

The authors also conclude that acupuncture used in conjunction with nimodipine is better than nimodipine alone. This too might sound impressive – unless you realise that all the RCTs in question failed to control for the effects of placebo and the added attention given to the patients. This means that the findings reported here are consistent with acupuncture itself being totally devoid of therapeutic effects.

The authors are quite open about the paucity of RCTs and their mostly dismal methodological quality. Yet they arrive at fairly definitive conclusions regarding the therapeutic value of acupuncture. This is, in my view, a serious mistake: on the basis of a few poorly designed and poorly reported RCTs, one should never arrive at even a tentatively positive conclusion. No decent journal would have published such misleading phraseology, and it is noteworthy that the paper in question appeared in a journal that has a long history of being hopelessly biased in favour of acupuncture.

Any of the above-mentioned flaws could already be fatal, but I have kept the most serious one for last. All 5 RCTs that were included in the analyses were conducted in China by Chinese researchers and published in Chinese journals. It has been shown repeatedly that such studies hardly ever report anything other than positive results; no matter what condition is being investigated, acupuncture turns out to be effective in the hands of Chinese trialists. This means that the result of such a study is clear even before the first patient has been recruited. Little wonder then that virtually all reviews of such trials – and there are dozens of them – arrive at conclusions similar to those formulated in the paper before us.

As I already said: rubbish in, rubbish out!

This post is dedicated to Mel Koppelman.

Those who followed the recent discussions about acupuncture on this blog will probably know her; she is an acupuncturist who (thinks she) knows a lot about research because she has several higher qualifications (but was unable to show us any research published by herself). Mel seems very quick to lecture others about research methodology. Yesterday, she posted this comment in relation to my previous post on a study of aromatherapy and reflexology:

Professor Ernst, This post affirms yet again a rather poor understanding of clinical trial methodology. A pragmatic trial such as this one with a wait-list control makes no attempt to look for specific effects. You say “it is quite simply wrong to assume that this outcome is specifically related to the two treatments.” Where have specific effects been tested or assumed in this study? Your statement in no way, shape or form negates the author’s conclusions that “aromatherapy massage and reflexology are simple and effective non-pharmacologic nursing interventions.” Effectiveness is not a measure of specific effects.

I am most grateful for this comment because it highlights an issue that I had wanted to address for some time: the meanings of the two terms ‘efficacy’ and ‘effectiveness’ and their differences as seen by scientists and by alternative practitioners/researchers.

Let’s start with the definitions.

I often use the excellent book by Alan Earl-Slater entitled THE HANDBOOK OF CLINICAL TRIALS AND OTHER RESEARCH. In it, EFFICACY is defined as ‘the degree to which an intervention does what it is intended to do under ideal conditions’. EFFECTIVENESS is the degree to which a treatment works under real life conditions. An EFFECTIVENESS TRIAL is a trial that ‘is said to approximate reality (i.e. clinical practice). It is sometimes called a pragmatic trial’. An EFFICACY TRIAL ‘is a clinical trial that is said to take place under ideal conditions.’

In other words, an efficacy trial investigates the question, ‘can the therapy work?’, and an effectiveness trial asks, ‘does this therapy work?’ In both cases, the question relates to the therapy per se and not to the plethora of phenomena which are not directly related to it. It seems logical that, where possible, the first question would need to be addressed before the second – it makes little sense to test for effectiveness if efficacy has not been ascertained, and effectiveness without efficacy does not seem to be possible.

In my 2007 book entitled UNDERSTANDING RESEARCH IN COMPLEMENTARY AND ALTERNATIVE MEDICINE (written especially for alternative therapists like Mel), I adopted these definitions and added: “It is conceivable that a given therapy works only under optimal conditions but not in everyday practice. For instance, in clinical practice patients may not comply with a therapy because it causes adverse effects.” I should have added perhaps that adverse effects are by no means the only reason for non-compliance, and that non-compliance is not the only reason why an efficacious treatment might not be effective.

Most scientists would agree with the above definitions. In fact, I am not aware of a debate about them in scientific circles. But they are not something alternative practitioners tend to like. Why? Because, using these strict definitions, many alternative therapies are neither of proven efficacy nor effectiveness.

What can be done about this unfortunate situation?

Simple! Let’s re-formulate the definitions of efficacy and effectiveness!

Efficacy, according to some alternative medicine proponents, refers to the therapeutic effects of the therapy per se, in other words, its specific effects. (That almost coincides with the scientific definition of this term – except, of course, it fails to tell us anything about the boundary conditions [optimal or real-life conditions].)

Effectiveness, according to the advocates of alternative therapies, refers to a therapy’s specific effects plus its non-specific effects. Some researchers have even introduced the term ‘real-life effectiveness’ for this.

This is why the authors of the study discussed in my previous post could conclude that “aromatherapy massage and reflexology are simple… effective… interventions… to help manage pain and fatigue in patients with rheumatoid arthritis.” Based on their data, neither aromatherapy nor reflexology has been shown to be effective. They might appear to be effective because patients expected to get better, or because patients in the no-treatment control group felt worse for not getting the extra care. Based on studies of this nature, giving patients £10 or a box of chocolates might also turn out to be “simple… effective… interventions… to help manage pain and fatigue in patients with rheumatoid arthritis.” Based on these definitions of efficacy and effectiveness, there are hardly any limits to bogus claims for any old quackery.

Such obfuscation suits proponents of alternative therapies fine because, using such definitions, virtually every treatment anyone might ever think of can be shown to be effective! Wishful thinking, it seems, can fulfil almost any dream, it can even turn the truth upside down.

Or can anyone name an alternative treatment that cannot even generate a placebo response when administered with empathy, sympathy and care? Compared to doing nothing, virtually every ineffective therapy might generate outcomes that make the treatment look effective. Even the anticipation of an effect alone might do the trick. How often have you had a toothache, gone to the dentist, and discovered while sitting in the waiting room that the pain had gone? Does that mean that sitting in a waiting room is an effective treatment for dental pain?

In fact, some enthusiasts of alternative medicine could soon begin to argue that, with their new definition of ‘effectiveness’, we no longer need controlled clinical trials at all, if we want to demonstrate how effective alternative therapies truly are. We can just do observational studies without a control group, note that lots of patients get better, and ‘Bob is your uncle’!!! This is much faster, saves money, time and effort, and has the undeniable advantage of never generating a negative result.

To most outsiders, all this might seem a bit like splitting hairs. However, I fear that it is far from that. In fact, it turns out to be a fairly fundamental issue in almost any discussion about the value or otherwise of alternative medicine. And, I think, it is also a matter of principle that reaches far beyond alternative medicine: if we allow various interest groups, lobbyists, sects, cults etc. to use their own definitions of fundamentally important terms, any dialogue, understanding or progress becomes almost impossible.

While over on my post about the new NICE GUIDELINES on acupuncture for back pain, the acupuncturists’ attempts to assassinate my character, competence, integrity and personality are in full swing, I have decided to employ my time more fruitfully and briefly comment on a new piece of acupuncture research.

This new Italian study was designed to determine the effectiveness of acupuncture for the management of hot flashes in women with breast cancer.

A total of 190 women with breast cancer were randomly assigned to two groups. Random assignment was performed with stratification for hormonal therapy; the allocation ratio was 1:1. Both groups received a booklet with information about climacteric syndrome and its management to be followed for at least 12 weeks. In addition, the acupuncture group received 10 traditional acupuncture treatment sessions involving needling of predefined acupoints.

The primary outcome was the hot flash score at the end of treatment (week 12), calculated as the frequency multiplied by the average severity of hot flashes. The secondary outcomes were climacteric symptoms and quality of life, measured by the Greene Climacteric and Menopause Quality of Life scales. Health outcomes were measured for up to 6 months after treatment. Expectation and satisfaction of treatment effect and safety were also evaluated. Intention-to-treat analyses were used.

Of the participants, 105 were randomly assigned to enhanced self-care and 85 to acupuncture plus enhanced self-care. Acupuncture plus enhanced self-care was associated with a significantly lower hot flash score than enhanced self-care at the end of treatment (P < .001) and at 3- and 6-month post-treatment follow-up visits (P = .0028 and .001, respectively). Acupuncture was also associated with fewer climacteric symptoms and higher quality of life in the vasomotor, physical, and psychosocial dimensions (P < .05).

The authors concluded that acupuncture in association with enhanced self-care is an effective integrative intervention for managing hot flashes and improving quality of life in women with breast cancer.

This hardly needs a comment, as I have been going on about this study design many times before: the ‘A+B versus B’ design can only produce positive findings. Any such study concluding that ‘acupuncture (or whatever other intervention) is effective’ can therefore not be a legitimate test of a hypothesis and ought to be categorised as pseudo-science. Sadly, this problem seems more the rule than the exception in the realm of acupuncture research. That’s a pity really… because, if there is potential in acupuncture at all, this sort of thing can only distract from it.

I think the JOURNAL OF CLINICAL ONCOLOGY, its editors and reviewers, should be ashamed of having published such misleading rubbish.

Reiki is one of the most popular types of ‘energy healing’. Reiki healers believe they are able to channel ‘healing energy’ into patients’ bodies, thus enabling them to get healthy. If Reiki were not such a popular treatment, one could brush such claims aside and think “let the lunatic fringe believe what they want”. But as Reiki so effectively undermines consumers’ sense of reality and rationality, I feel I should continue informing the public about this subject – despite the fact that I have already reported about it several times before, for instance here, here, here, here, here and here.

A new RCT, published in a respected journal, looks interesting enough for a further blog-post on the subject. The main aim of the study was to investigate the effectiveness of two psychotherapeutic approaches, cognitive behavioural therapy (CBT) and the complementary medicine method Reiki, in reducing depression scores in adolescents. The researchers from Canada, Malaysia and Australia recruited 188 depressed adolescents. They were randomly assigned to CBT, Reiki or a wait-list. Depression scores were assessed before and after 12 weeks of treatment/wait list. CBT showed a significantly greater decrease in Child Depression Inventory (CDI) scores across treatment than both Reiki (p<.001) and the wait-list control (p<.001). Reiki also showed greater decreases in CDI scores across treatment relative to the wait-list control condition (p=.031). Male participants showed a smaller treatment effect for Reiki than did female participants. The authors concluded that both CBT and Reiki were effective in reducing the symptoms of depression over the treatment period, with the effect of CBT being greater than that of Reiki.

I find it most disappointing that these days even respected journals publish such RCTs without the necessary critical input. This study may appear to be rigorous but, in fact, it is hardly worth the paper it was printed on.

The results show that Reiki produced worse results than CBT. That I can well believe!

However, the findings also suggest that Reiki was nevertheless “effective in reducing the symptoms of depression”, as the authors put it in their conclusions. This statement is misleading!

It is based on the comparison of Reiki with doing nothing. As Reiki involves lots of attention, it can be assumed to generate a sizable placebo effect. As a proportion of the patients in the wait-list group are probably disappointed at not getting such attention, they can be assumed to experience the adverse effects of their disappointment. The two phenomena combined can easily explain the result without any “effectiveness” of Reiki per se.
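To make this concrete, here is a minimal simulation sketch in Python. It is not based on the trial’s data; it simply assumes a treatment with zero specific effect, a modest placebo response from the extra attention, and a small ‘disappointment’ worsening in the wait-list group, all with made-up effect sizes:

```python
# Sketch only: illustrative assumptions, not data from the Reiki trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 60                                    # hypothetical patients per group

# Change in depression score (negative = improvement); natural course ~ 0
natural_course = rng.normal(0.0, 5.0, size=(2, n))

placebo_boost = -2.0     # assumed improvement from attention/expectation
disappointment = +1.0    # assumed worsening from being wait-listed

treated = natural_course[0] + placebo_boost   # specific effect of therapy = 0
wait_list = natural_course[1] + disappointment

t, p = stats.ttest_ind(treated, wait_list)
print(f"mean change, treated:   {treated.mean():+.2f}")
print(f"mean change, wait-list: {wait_list.mean():+.2f}")
print(f"p-value: {p:.4f}  # often 'significant' despite zero specific effect")
```

Under these assumptions, the ‘treatment’ comes out statistically superior to the wait-list even though its specific effect is exactly zero.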

If such considerations are not fully discussed and made amply clear even in the conclusions of the abstract, it seems reasonable to accuse the journal of being less than responsible and the authors of being outright misleading.

As with so many papers in this area, one has to ask: WHERE DOES SLOPPY RESEARCH END AND WHERE DOES SCIENTIFIC MISCONDUCT BEGIN?

My last post was about a researcher who manages to produce nothing but positive findings with the least promising alternative therapy, homeopathy. Some might think that this is an isolated case or an anomaly – but they would be wrong. I have previously published about researchers who have done very similar things with homeopathy or other unlikely therapies. Examples include:

Claudia Witt

George Lewith

John Licciardone

But there are many more, and I will carry on highlighting their remarkable work. For example, the research of a German group headed by Prof Gustav Dobos, one of the most prolific investigators in alternative medicine at present.

For my evaluation, I conducted a Medline search of the last 10 of Dobos’ published articles and excluded those that did not assess the effectiveness of alternative therapies (surveys, comments, etc.). Here they are with their respective conclusions and publication dates:

SYSTEMATIC REVIEW COMPARING DIFFERENT YOGA STYLES (2016)

RCTs with different yoga styles do not differ in their odds of reaching positive conclusions. Given that most RCTs were positive, the choice of an individual yoga style can be based on personal preferences and availability.

SYSTEMATIC REVIEW OF YOGA FOR WEIGHT LOSS (2016)

Despite methodological drawbacks, yoga can be preliminarily considered a safe and effective intervention to reduce body mass index in overweight or obese individuals.

REVIEW OF INTEGRATIVE MEDICINE IN GYNAECOLOGICAL ONCOLOGY (2016)

…there is published, positive level I evidence for a number of CAM treatment forms.

SYSTEMATIC REVIEW OF MINDFULNESS FOR PSYCHOSES (2016)

Mindfulness- and acceptance-based interventions can be recommended as an additional treatment for patients with psychosis.

RCT OF CABBAGE LEAF WRAPS FOR OSTEOARTHRITIS (2016)

Cabbage leaf wraps are more effective for knee osteoarthritis than usual care, but not compared with diclofenac gel. Therefore, they might be recommended for patients with osteoarthritis of the knee.

SYSTEMATIC REVIEW OF HERBAL MEDICINES FOR COUGH (2015)

This review found strong evidence for A. paniculata and ivy/primrose/thyme-based preparations and moderate evidence for P. sidoides being significantly superior to placebo in alleviating the frequency and severity of patients’ cough symptoms. Additional research, including other herbal treatments, is needed in this area.

SYSTEMATIC REVIEW OF DIETARY APPROACHES FOR METABOLIC SYNDROME (2016)

Dietary approaches should mainly be tried to reduce macronutrients and enrich functional food components such as vitamins, flavonoids, and unsaturated fatty acids. People with Metabolic Syndrome will benefit most by combining weight loss and anti-inflammatory nutrients.

SYSTEMATIC REVIEW OF MIND BODY MEDICINE FOR CORONARY HEART DISEASE (2015)

In patients with CHD, MBM programs can lessen the occurrence of cardiac events, reduce atherosclerosis, and lower systolic blood pressure, but they do not reduce mortality. They can be used as a complement to conventional rehabilitation programs.

CRANIOSACRAL THERAPY (CST) FOR NECK PAIN (2016)

CST was both specifically effective and safe in reducing neck pain intensity and may improve functional disability and the quality of life up to 3 months after intervention.

REVIEW OF INTEGRATED MEDICINE FOR BREAST CANCER (2015)

Study data have shown that therapy- and disease-related side effects can be reduced using the methods of integrative medicine. Reported benefits include improving patients’ wellbeing and quality of life, reducing stress, and improving patients’ mood, sleeping patterns and capacity to cope with disease.

Amazed?

Dobos seems to be an ‘all-rounder’ whose research tackles a wide range of alternative treatments. That is perhaps unremarkable – but what I do find remarkable is the impression that, whatever he researches, the results turn out to be pretty positive. This might imply one of two things, in my view:

  • all of these very different alternative therapies are truly effective, or
  • something is seriously amiss with the research that generates such uniformly positive findings.

I let my readers choose which possibility they deem to be more likely.

Recently, I came across the ‘Clinical Practice Guidelines on the Use of Integrative Therapies as Supportive Care in Patients Treated for Breast Cancer’ published by the ‘Society for Integrative Oncology (SIO) Guidelines Working Group’. The mission of the SIO is to “advance evidence-based, comprehensive, integrative healthcare to improve the lives of people affected by cancer. The SIO has consistently encouraged rigorous scientific evaluation of both pre-clinical and clinical science, while advocating for the transformation of oncology care to integrate evidence-based complementary approaches. The vision of SIO is to have research inform the true integration of complementary modalities into oncology care, so that evidence-based complementary care is accessible and part of standard cancer care for all patients across the cancer continuum. As an interdisciplinary and inter-professional society, SIO is uniquely poised to lead the “bench to bedside” efforts in integrative cancer care.”

The aim of the ‘Clinical Practice Guidelines’ was to “inform clinicians and patients about the evidence supporting or discouraging the use of specific complementary and integrative therapies for defined outcomes during and beyond breast cancer treatment, including symptom management.”

This sounds like a most laudable aim. Therefore I studied the document carefully and was surprised to read their conclusions: “Specific integrative therapies can be recommended as evidence-based supportive care options during breast cancer treatment.”

How can this be? On this blog, we have repeatedly seen evidence to suggest that integrative medicine is little more than the admission of quackery into evidence-based healthcare. This got me wondering how their conclusion had been reached, and I checked the document even closer.

On the surface, it seemed well-made. A team of researchers first defined the treatments they wanted to look at, then they searched for RCTs, evaluated their quality, extracted their results, combined them into an overall verdict and wrote the whole thing up. In a word, they conducted what appears to be a proper systematic review.

Based on the findings of their review, they then issued recommendations which I thought were baffling in several respects. Let me just focus on three of the SIO’s recommendations dealing with acupuncture:

  1. “Acupuncture can be considered for treating anxiety concurrent with ongoing fatigue…” [only RCT (1) cited in support]
  2. “Acupuncture can be considered for improving depressive symptoms in women suffering from hot flashes…” [RCTs (1 and 2) cited in support] 
  3. “Acupuncture can be considered for treating anxiety concurrent with ongoing fatigue…” [only RCT (1) cited in support]

One or two studies as a basis for far-reaching guidelines? Yes, that would normally be a concern! But, at closer scrutiny, my worries about these recommendations turn out to be much more serious than this.

The actual RCT (1) cited in support of all three recommendations stated that the authors “randomly assigned 75 patients to usual care and 227 patients to acupuncture plus usual care…” As we have discussed often before on this blog and elsewhere, such an ‘A+B versus B’ study design will never generate a negative result, does not control for placebo effects and is certainly not a valid test for the effectiveness of the treatment in question. Nevertheless, the authors of this study concluded that: “Acupuncture is an effective intervention for managing the symptom of cancer-related fatigue and improving patients’ quality of life.”

RCT (2) cited in support of recommendation number 2 seems to be a citation error; the study in question is not an acupuncture trial and does not back the statement in question. I suspect they meant to cite their reference number 87 (instead of 88). This trial is an equivalence study where 50 patients were randomly assigned to receive 12 weeks of acupuncture (n = 25) or venlafaxine (n = 25) treatment for cancer-related hot flushes. Its results indicate that the two treatments generated similar results. As the two therapies could also have been equally ineffective, it is impossible, in my view, to conclude that acupuncture is effective.

Finally, RCT (1) in no way supports recommendation number two. Yet RCT (1) and RCT (2) were both cited in support of this recommendation.

I have not systematically checked any other claims made in this document, but I get the impression that many other recommendations made here are based on similarly ‘liberal’ interpretations of the evidence. How can the ‘Society for Integrative Oncology’ use such dodgy pseudo-science for formulating potentially far-reaching guidelines?

I know none of the authors (Heather Greenlee, Lynda G. Balneaves, Linda E. Carlson, Misha Cohen, Gary Deng, Dawn Hershman, Matthew Mumber, Jane Perlmutter, Dugald Seely, Ananda Sen, Suzanna M. Zick, Debu Tripathy) of the document personally. They made the following collective statement about their conflicts of interest: “There are no financial conflicts of interest to disclose. We note that some authors have conducted/authored some of the studies included in the review.” I am a little puzzled to hear that they have no financial conflicts of interest (do not most of them earn their living by practising integrative medicine? Yes they do! The article informs us that: “A multidisciplinary panel of experts in oncology and integrative medicine was assembled to prepare these clinical practice guidelines. Panel members have expertise in medical oncology, radiation oncology, nursing, psychology, naturopathic medicine, traditional Chinese medicine, acupuncture, epidemiology, biostatistics, and patient advocacy.”). I also suspect they have other, potentially much stronger conflicts of interest. They belong to a group of people who seem to religiously believe in the largely nonsensical concept of integrative medicine. Integrating unproven treatments into healthcare must affect its quality in much the same way as the integration of cow pie into apple pie would affect the taste of the latter.

After considering all this carefully, I cannot help wondering whether these ‘Clinical Practice Guidelines’ by the ‘Society for Integrative Oncology’ are just full of honest errors or whether they amount to fraud and scientific misconduct.

WHATEVER THE ANSWER, THE GUIDELINES MUST BE RETRACTED, IF THIS SOCIETY WANTS TO AVOID LOSING ALL CREDIBILITY.

In recent blogs, I have written much about acupuncture and particularly about the unscientific notions of traditional acupuncturists. I was therefore surprised to see that a UK charity is teaming up with traditional acupuncturists in an exercise that looks as though it is designed to mislead the public.

The website of ‘Anxiety UK’ informs us that this charity and the British Acupuncture Council (BAcC) have launched a ‘pilot project’ which will see members of Anxiety UK being able to access traditional acupuncture through this new partnership. Throughout the pilot project, they proudly proclaim, data will be collected to “determine the effectiveness of traditional acupuncture for treating those living with anxiety and anxiety based depression.”

This, they believe, will enable both parties to continue to build a body of evidence to measure the success rate of this type of treatment. Anxiety UK’s Chief Executive Nicky Lidbetter said: “This is an exciting project and will provide us with valuable data and outcomes for those members who take part in the pilot and allow us to assess the benefits of extending the pilot to a regular service for those living with anxiety.” “We know anecdotally that many people find complementary therapies used to support conventional care can provide enormous benefit, although it should be remembered they are used in addition to and not instead of seeking medical advice from a doctor or taking prescribed medication. This supports our strategic aim to ensure that we continue to make therapies and services that are of benefit to those with anxiety and anxiety based depression, accessible.”

And what is wrong with that, you might ask.

What is NOT wrong with it, would be my response.

To start with, traditional acupuncture relies on obsolete assumptions like yin and yang, meridians, energy flow, acupuncture points etc. They have one thing in common: they fly in the face of science and evidence. But this might just be a triviality. More important is, I believe, the fact that a pilot project cannot determine the effectiveness of a therapy. Therefore the whole exercise smells very much like a promotional activity for pure quackery.

And what about the hint in the direction of anecdotal evidence in support of the study? Are they not able to do a simple Medline search? Because, if they had done one, they would have found a plethora of articles on the subject. Most of them show that there are plenty of studies but that the majority are too flawed to allow firm conclusions.

A review by someone who certainly cannot be accused of being biased against alternative medicine, for instance, informs us that “trials in depression, anxiety disorders and short-term acute anxiety have been conducted but acupuncture interventions employed in trials vary as do the controls against which these are compared. Many trials also suffer from small sample sizes. Consequently, it has not proved possible to accurately assess the effectiveness of acupuncture for these conditions or the relative effectiveness of different treatment regimens. The results of studies showing similar effects of needling at specific and non-specific points have further complicated the interpretation of results. In addition to measuring clinical response, several clinical studies have assessed changes in levels of neurotransmitters and other biological response modifiers in an attempt to elucidate the specific biological actions of acupuncture. The findings offer some preliminary data requiring further investigation.”

Elsewhere, the same author, together with other pro-acupuncture researchers, wrote this: “Positive findings are reported for acupuncture in the treatment of generalised anxiety disorder or anxiety neurosis but there is currently insufficient research evidence for firm conclusions to be drawn. No trials of acupuncture for other anxiety disorders were located. There is some limited evidence in favour of auricular acupuncture in perioperative anxiety. Overall, the promising findings indicate that further research is warranted in the form of well designed, adequately powered studies.”

What does this mean in the context of the charity’s project?

I think it tells us that acupuncture for anxiety is not exactly the most promising approach to investigate further. Even in the realm of alternative medicine, there are several interventions which are supported by more encouraging evidence. And even if one disagrees with this statement, one cannot possibly disagree with the fact that more flimsy research is not required. If we do need more studies, they must be rigorous and not promotion thinly disguised as science.

I guess the ultimate question here is one of ethics. Do charities not have an ethical and moral duty to spend our donations wisely and productively? When does such ill-conceived pseudo-research cross the line to become offensive or even fraudulent?

The randomized, placebo-controlled, double-blind trial is usually the methodology that carries the least risk of bias when testing the efficacy of a therapy. This fact is an obvious annoyance to some alt med enthusiasts, because such trials far too often fail to produce the results they were hoping for.

But there is no need to despair. Here I provide a few simple tips on how to mislead the public with seemingly rigorous trials.

1 FRAUD

The most brutal method for misleading people is simply to cheat. The Germans have a saying, ‘Papier ist geduldig’ (paper is patient), implying that anyone can put anything on paper. Fortunately we currently have plenty of alt med journals which publish any rubbish anyone might dream up. The process of ‘peer-review’ is one of several mechanisms supposed to minimise the risk of scientific fraud. Yet alt med journals are cleverer than that! They tend to have a peer-review process that rarely involves independent and critical scientists; more often than not you can even ask that your best friend is invited to do the peer-review, and the alt med journal will follow your wish. Consequently the door is wide open to cheating. Once your fraudulent paper has been published, it is almost impossible to tell that something is fundamentally wrong.

But cheating is not confined to original research. You can also apply the method to other types of research, of course. For instance, the authors of the infamous ‘Swiss report’ on homeopathy generated a false positive picture using published systematic reviews of mine by simply changing their conclusions from negative to positive. Simple!

2 PRETTIFICATION

Obviously, outright cheating is not always as simple as that. Even in alt med, you cannot easily claim to have conducted a clinical trial without a complex infrastructure which invariably involves other people. And they are likely to want to have some control over what is happening. This means that complete fabrication of an entire data set may not always be possible. What might still be feasible, however, is the ‘prettification’ of the results. By just ‘re-adjusting’ a few data points that failed to live up to your expectations, you might be able to turn a negative into a positive trial. Proper governance is aimed at preventing this type of ‘mini-fraud’, but fortunately you work in alt med where such mechanisms are rarely adequately implemented.

3 OMISSION

Another very handy method is the omission of aspects of your trial which regrettably turned out to be in disagreement with the desired overall result. In most studies, one has a myriad of endpoints. Once the statistics of your trial have been calculated, it is likely that some of them yield the wanted positive results, while others do not. By simply omitting any mention of the embarrassingly negative results, you can easily turn a largely negative study into a seemingly positive one. Normally, researchers have to rely on a pre-specified protocol which defines a primary outcome measure. Thankfully, in the absence of proper governance, it usually is possible to publish a report which obscures such detail and thus mislead the public (I even think there has been an example of such an omission on this very blog).

4 STATISTICS

Yes – lies, damned lies, and statistics! A gifted statistician can easily find ways to ‘torture the data until they confess’. One only has to run statistical test after statistical test, and BINGO, one of them will eventually yield something that can be marketed as the longed-for positive result. Normally, researchers must have a protocol that pre-specifies all the methodologies used in a trial, including the statistical analyses. But, in alt med, we certainly do not want things to function normally, do we?
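A minimal sketch in Python shows how well this works; the number of patients and endpoints below are arbitrary assumptions, and both ‘arms’ are pure noise:

```python
# Sketch of 'torturing the data': run many tests on noise, report the best one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients, n_endpoints = 40, 20          # hypothetical trial with 20 endpoints

treatment = rng.normal(0, 1, size=(n_patients, n_endpoints))  # placebo arm...
control = rng.normal(0, 1, size=(n_patients, n_endpoints))    # ...versus placebo arm

p_values = [stats.ttest_ind(treatment[:, i], control[:, i]).pvalue
            for i in range(n_endpoints)]

print(f"smallest p-value: {min(p_values):.3f}")
print(f"'significant' endpoints at p < 0.05: {sum(p < 0.05 for p in p_values)}")
# With 20 endpoints and no true effect at all, the chance of at least one
# p < 0.05 is about 1 - 0.95**20 ≈ 0.64 – better than a coin toss.
```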

5 TRIAL DESIGNS THAT CANNOT GENERATE A NEGATIVE RESULT

All the above tricks are a bit fraudulent, of course. Unfortunately, fraud is not well-seen by everyone. Therefore, a more legitimate means of misleading the public would be highly desirable for those aspiring alt med researchers who do not want to tarnish their record to their disadvantage. No worries guys, help is on the way!

The fool-proof trial design is obviously the often-mentioned ‘A+B versus B’ design. In such a study, patients are randomized to receive an alt med treatment (A) together with usual care (B) or usual care (B) alone. This looks rigorous, can be sold as a ‘pragmatic’ trial addressing a real-life problem, and has the enormous advantage of never failing to produce a positive result: A+B is always more than B alone, even if A is a pure placebo. Such trials are akin to going into a hamburger joint to measure the calories of a Big Mac without chips and comparing them to the calories of a Big Mac with chips. We know the result before the research has started; in alt med, that’s how it should be!
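Again, a small simulation sketch makes the point; all the effect sizes are invented, and the add-on treatment A has no specific effect whatsoever:

```python
# Sketch of the 'A+B versus B' problem: A is a pure placebo, yet A+B 'wins'.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 100                                  # hypothetical patients per arm

usual_care_effect = 3.0                  # assumed improvement from B alone
placebo_effect_of_a = 1.5                # assumed non-specific boost from the add-on

b_alone = rng.normal(usual_care_effect, 4.0, n)
a_plus_b = rng.normal(usual_care_effect + placebo_effect_of_a, 4.0, n)

t, p = stats.ttest_ind(a_plus_b, b_alone)
print(f"A+B mean improvement:    {a_plus_b.mean():.2f}")
print(f"B-only mean improvement: {b_alone.mean():.2f}")
print(f"p-value: {p:.4f}  # 'positive' even though A has zero specific effect")
```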

I have been banging on about the ‘A+B versus B’ design often enough, but recently I came across a new study design used in alt med which is just as elegantly misleading. The trial in question has a promising title: Quality-of-life outcomes in patients with gynecologic cancer referred to integrative oncology treatment during chemotherapy. Here is the unabbreviated abstract:

OBJECTIVE:

Integrative oncology incorporates complementary medicine (CM) therapies in patients with cancer. We explored the impact of an integrative oncology therapeutic regimen on quality-of-life (QOL) outcomes in women with gynecological cancer undergoing chemotherapy.

PATIENTS AND METHODS:

A prospective preference study examined patients referred by oncology health care practitioners (HCPs) to an integrative physician (IP) consultation and CM treatments. QOL and chemotherapy-related toxicities were evaluated using the Edmonton Symptom Assessment Scale (ESAS) and Measure Yourself Concerns and Wellbeing (MYCAW) questionnaire, at baseline and at a 6-12-week follow-up assessment. Adherence to the integrative care (AIC) program was defined as ≥4 CM treatments, with ≤30 days between each session.

RESULTS:

Of 128 patients referred by their HCP, 102 underwent IP consultation and subsequent CM treatments. The main concerns expressed by patients were fatigue (79.8 %), gastrointestinal symptoms (64.6 %), pain and neuropathy (54.5 %), and emotional distress (45.5 %). Patients in both AIC (n = 68) and non-AIC (n = 28) groups shared similar demographic, treatment, and cancer-related characteristics. ESAS fatigue scores improved by a mean of 1.97 points in the AIC group on a scale of 0-10 and worsened by a mean of 0.27 points in the non-AIC group (p = 0.033). In the AIC group, MYCAW scores improved significantly (p < 0.0001) for each of the leading concerns as well as for well-being, a finding which was not apparent in the non-AIC group.

CONCLUSIONS:

An IP-guided CM treatment regimen provided to patients with gynecological cancer during chemotherapy may reduce cancer-related fatigue and improve other QOL outcomes.

A ‘prospective preference study’ – this is the design the world of alt med has been yearning for! Its principle is beautiful in its simplicity. One merely administers a treatment or treatment package to a group of patients; inevitably some patients take it, while others don’t. The reasons for not taking it could range from lack of perceived effectiveness to experience of side-effects. But never mind, the fact that some do not want your treatment provides you with two groups of patients: those who comply and those who do not comply. With a bit of skill, you can now make the non-compliers appear like a proper control group. Now you only need to compare the outcomes and BOB IS YOUR UNCLE!

Brilliant! Absolutely brilliant!

I cannot think of a more deceptive trial-design than this one; it will make any treatment look good, even one that is a mere placebo. Alright, it is not randomized, and it does not even have a proper control group. But it sure looks rigorous and meaningful, this ‘prospective preference study’!
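For the sceptically minded, here is a minimal sketch of why the comparison of compliers with non-compliers is so deceptive. It assumes an entirely inert treatment and a single latent ‘prognosis’ factor that drives both adherence and outcome; every parameter is an illustrative assumption:

```python
# Sketch of the 'prospective preference' problem: an inert treatment looks
# effective because patients with better prognosis are the ones who adhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100                                    # hypothetical referred patients

prognosis = rng.normal(0, 1, n)            # latent factor: general health/outlook
# Better prognosis -> higher probability of completing >= 4 sessions
adheres = rng.random(n) < 1 / (1 + np.exp(-2 * prognosis))

# Outcome (symptom improvement) driven by prognosis only; treatment adds nothing
improvement = 2.0 * prognosis + rng.normal(0, 1, n)

t, p = stats.ttest_ind(improvement[adheres], improvement[~adheres])
print(f"adherent group mean improvement:     {improvement[adheres].mean():+.2f}")
print(f"non-adherent group mean improvement: {improvement[~adheres].mean():+.2f}")
print(f"p-value: {p:.4f}  # the inert treatment 'works' in the compliers")
```

The ‘adherent’ group duly outperforms the ‘non-adherent’ group, even though the treatment itself does precisely nothing.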
