
I remember reading this paper entitled ‘Comparison of acupuncture and other drugs for chronic constipation: A network meta-analysis’ when it first came out. I considered discussing it on my blog, but then decided against it for a range of reasons which I shall explain below. The abstract of the original meta-analysis is copied below:

The objective of this study was to compare the efficacy and side effects of acupuncture, sham acupuncture and drugs in the treatment of chronic constipation. Randomized controlled trials (RCTs) assessing the effects of acupuncture and drugs for chronic constipation were comprehensively retrieved from electronic databases (such as PubMed, Cochrane Library, Embase, CNKI, Wanfang Database, VIP Database and CBM) up to December 2017. Additional references were obtained from review articles. With quality evaluations and data extraction, a network meta-analysis (NMA) was performed using a random-effects model under a frequentist framework. A total of 40 studies (n = 11032) were included: 39 were high-quality studies and 1 was a low-quality study. NMA showed that (1) acupuncture improved the symptoms of chronic constipation more effectively than drugs; (2) the ranking of treatments in terms of efficacy in diarrhoea-predominant irritable bowel syndrome was acupuncture, polyethylene glycol, lactulose, linaclotide, lubiprostone, bisacodyl, prucalopride, sham acupuncture, tegaserod, and placebo; (3) the ranking of side effects were as follows: lactulose, lubiprostone, bisacodyl, polyethylene glycol, prucalopride, linaclotide, placebo and tegaserod; and (4) the most commonly used acupuncture point for chronic constipation was ST25. Acupuncture is more effective than drugs in improving chronic constipation and has the least side effects. In the future, large-scale randomized controlled trials are needed to prove this. Sham acupuncture may have curative effects that are greater than the placebo effect. In the future, it is necessary to perform high-quality studies to support this finding. Polyethylene glycol also has acceptable curative effects with fewer side effects than other drugs.

END OF 1st QUOTE

This meta-analysis has now been retracted. Here is what the journal editors have to say about the retraction:

After publication of this article [1], concerns were raised about the scientific validity of the meta-analysis and whether it provided a rigorous and accurate assessment of published clinical studies on the efficacy of acupuncture or drug-based interventions for improving chronic constipation. The PLOS ONE Editors re-assessed the article in collaboration with a member of our Editorial Board and noted several concerns including the following:

  • Acupuncture and related terms are not mentioned in the literature search terms, there are no listed inclusion or exclusion criteria related to acupuncture, and the outcome measures were not clearly defined in terms of reproducible clinical measures.
  • The study included acupuncture and electroacupuncture studies, though this was not clearly discussed or reported in the Title, Methods, or Results.
  • In the “Routine paired meta-analysis” section, both acupuncture and sham acupuncture groups were reported as showing improvement in symptoms compared with placebo. This finding and its implications for the conclusions of the article were not discussed clearly.
  • Several included studies did not meet the reported inclusion criteria requiring that studies use adult participants and assess treatments of >2 weeks in duration.
  • Data extraction errors were identified by comparing the dataset used in the meta-analysis (S1 Table) with details reported in the original research articles. Errors included aspects of the study design such as the experimental groups included in the study, the number of study arms in the trial, number of participants, and treatment duration. There are also several errors in the Reference list.
  • With regard to side effects, 22 out of 40 studies were noted as having reported side effects. It was not made clear whether side effects were assessed as outcome measures for the other 18 studies, i.e. did the authors collect data clarifying that there were no side effects or was this outcome measure not assessed or reported in the original article. Without this clarification the conclusion comparing side effect frequencies is not well supported.
  • The network geometry presented in Fig 5 is not correct and misrepresents some of the study designs, for example showing two-arm studies as three-arm studies.
  • The overall results of the meta-analysis are strongly reliant on the evidence comparing acupuncture versus lactulose treatment. Several of the trials that assessed this comparison were poorly reported, and the meta-analysis dataset pertaining to these trials contained data extraction errors. Furthermore, potential bias in studies assessing lactulose efficacy in acupuncture trials versus lactulose efficacy in other trials was not sufficiently addressed.

While some of the above issues could be addressed with additional clarifications and corrections to the text, the concerns about study inclusion, the accuracy with which the primary studies’ research designs and data were represented in the meta-analysis, and the reporting quality of included studies directly impact the validity and accuracy of the dataset underlying the meta-analysis. As a consequence, we consider that the overall conclusions of the study are not reliable. In light of these issues, the PLOS ONE Editors retract the article. We apologize that these issues were not adequately addressed during pre-publication peer review.

LZ disagreed with the retraction. YM and XD did not respond.

END OF 2nd QUOTE

Let me start by explaining why I initially decided not to discuss this paper on my blog. Already the first sentence of the abstract put me off, and an entire chorus of alarm-bells started ringing once I read further.

  • A meta-analysis is not a ‘study’ in my book, and I am somewhat wary of researchers who employ odd or imprecise language.
  • We all know (and I have discussed it repeatedly) that studies of acupuncture frequently fail to report adverse effects (in doing this, their authors violate research ethics!). So, how can it be a credible aim of a meta-analysis to compare side-effects in the absence of adequate reporting?
  • The methodology of a network meta-analysis is complex, and I admittedly do not know a great deal about it.
  • Several things seemed ‘too good to be true’, for instance, the funnel-plot and the overall finding that acupuncture is the best of all therapeutic options.
  • Looking at the references, I quickly confirmed my suspicion that most of the primary studies were in Chinese.

In retrospect, I am glad I did not tackle the task of criticising this paper; I would probably not have made nearly as good a job of it as PLOS ONE eventually did. But it was only after someone raised concerns that the paper was re-reviewed and all the defects outlined above came to light.

While some of my concerns listed above may have been trivial, my last point is the one that troubles me most. As it also relates to the dozens of Cochrane reviews which currently come out of China, it is worth our attention, I think. The problem, as I see it, is as follows:

  • Chinese (acupuncture, TCM and perhaps also other) trials almost invariably report positive findings, as we have discussed ad nauseam on this blog.
  • Data fabrication seems to be rife in China.
  • This means that there is good reason to be suspicious of such trials.
  • Many of the reviews that currently flood the literature are based predominantly on primary studies published in Chinese.
  • Unless one is able to read Chinese, there is no way of evaluating these papers.
  • Therefore reviewers of journal submissions tend to rely on what the Chinese review authors write about the primary studies.
  • As data fabrication seems to be rife in China, this trust might often not be justified.
  • At the same time, Chinese researchers are VERY keen to publish in top Western journals (this is considered a great boost to their career).
  • The consequence of all this is that reviews of this nature might be misleading, even if they are published in top journals.

I have been struggling with this problem for many years and have tried my best to alert people to it. However, it does not seem that my efforts have had even the slightest success. The stream of such reviews has only increased and is now a true worry (at least for me). My suspicion – and I stress that it is merely that – is that, if one were to rigorously re-evaluate these reviews, the majority would need to be retracted, just like the paper above. That would mean that hundreds of papers would disappear because they are misleading, a thought that should give everyone interested in reliable evidence sleepless nights!

So, what can be done?

Personally, I now distrust all of these papers but, I admit, that is not a good, constructive solution. It would be better if journal editors (including, of course, those at the Cochrane Collaboration) allocated such submissions to reviewers who:

  • are demonstrably able to conduct a CRITICAL analysis of the paper in question,
  • can read Chinese,
  • have no conflicts of interest.

In the case of an acupuncture review, this would narrow it down to perhaps just a handful of experts worldwide. This probably means that my suggestion is simply not feasible.

But what other choice do we have?

One could oblige the authors of all submissions to include full, authorised English translations of any non-English articles. I think this might work, but it is, of course, tedious and expensive. In view of the size of the problem (I estimate that there must be around 1,000 reviews out there to which the problem applies), I do not see a better solution.

(I would be truly thankful if someone had a better one and would tell us.)

Psoriasis is one of those conditions that is

  • chronic,
  • not curable,
  • irritating to the point where it reduces quality of life.

In other words, it is a disease for which virtually all alternative treatments on the planet are claimed to be effective. But which therapies do demonstrably alleviate the symptoms?

This review (published in JAMA Dermatology) compiled the evidence on the efficacy of the most studied complementary and alternative medicine (CAM) modalities for the treatment of patients with plaque psoriasis and discussed those therapies with the most robust available evidence.

PubMed, Embase, and ClinicalTrials.gov searches (1950-2017) were used to identify all documented CAM psoriasis interventions in the literature. The criteria were further refined to focus on those treatments identified in the first step that had the highest level of evidence for plaque psoriasis with more than one randomized clinical trial (RCT) supporting their use. This excluded therapies lacking RCT data or showing consistent inefficacy.

A total of 457 articles were found, of which 107 articles were retrieved for closer examination. Of those articles, 54 were excluded because the CAM therapy did not have more than 1 RCT on the subject or showed consistent lack of efficacy. An additional 7 articles were found using references of the included studies, resulting in a total of 44 RCTs (17 double-blind, 13 single-blind, and 14 nonblind), 10 uncontrolled trials, 2 open-label nonrandomized controlled trials, 1 prospective controlled trial, and 3 meta-analyses.
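Incidentally, the study flow quoted above can be checked for internal consistency with a few lines of arithmetic (a trivial sketch, using only the numbers reported in the review itself): 107 retrieved minus 54 excluded plus 7 found via references gives 60 included articles, which matches the sum of the listed study types.

```python
# Internal consistency check of the review's reported study flow
# (all numbers taken from the text above).
retrieved = 107          # articles retrieved for closer examination
excluded = 54            # excluded: <2 RCTs or consistent lack of efficacy
from_references = 7      # additional articles found via reference lists

included = retrieved - excluded + from_references
print(included)  # 60

# Breakdown by study type, as listed in the review:
breakdown = {
    "RCTs": 44,  # 17 double-blind + 13 single-blind + 14 nonblind
    "uncontrolled trials": 10,
    "open-label nonrandomized controlled trials": 2,
    "prospective controlled trials": 1,
    "meta-analyses": 3,
}
print(sum(breakdown.values()))  # 60 -- the flow adds up
```

So the arithmetic of the study flow, at least, is coherent; the problems lie in what was done with those studies, not in how they were counted.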

Compared with placebo, application of topical indigo naturalis, studied in 5 RCTs with 215 participants, showed significant improvements in the treatment of psoriasis. Treatment with curcumin, examined in 3 RCTs (with a total of 118 participants), 1 nonrandomized controlled study, and 1 uncontrolled study, conferred statistically and clinically significant improvements in psoriasis plaques. Fish oil treatment was evaluated in 20 studies (12 RCTs, 1 open-label nonrandomized controlled trial, and 7 uncontrolled studies); most of the RCTs showed no significant improvement in psoriasis, whereas most of the uncontrolled studies showed benefit when fish oil was used daily. Meditation and guided imagery therapies were studied in 3 single-blind RCTs (with a total of 112 patients) and showed modest efficacy in treatment of psoriasis. One meta-analysis of 13 RCTs examined the association of acupuncture with improvement in psoriasis and showed significant improvement with acupuncture compared with placebo.

The authors concluded that CAM therapies with the most robust evidence of efficacy for treatment of psoriasis are indigo naturalis, curcumin, dietary modification, fish oil, meditation, and acupuncture. This review will aid practitioners in advising patients seeking unconventional approaches for treatment of psoriasis.

I am sorry to say so, but this review smells fishy! And not just because of the fish oil. But the fish oil data are a good case in point: the authors found 12 RCTs of fish oil. These details are provided by the review authors in relation to oral fish oil trials: two double-blind RCTs (one of which evaluated EPA, 1.8 g, and DHA, 1.2 g, consumed daily for 12 weeks, and the other EPA, 3.6 g, and DHA, 2.4 g, consumed daily for 15 weeks) found evidence supporting the use of oral fish oil. One open-label RCT and 1 open-label non-randomized controlled trial also showed statistically significant benefit. Seven other RCTs found a lack of efficacy for daily EPA (216 mg to 5.4 g) or DHA (132 mg to 3.6 g) treatment. The remainder of the data supporting the efficacy of oral fish oil were based on uncontrolled trials, of which 6 of the 7 studies found significant benefit of oral fish oil. This seems to support their conclusion. However, the authors also state that fish oil was not shown to be effective at several of the examined doses and durations. Confused? Yes, me too!

Even more confusing is their failure to mention a single trial of Mahonia aquifolium. A 2013 meta-analysis published in the British Journal of Dermatology included 5 RCTs of Mahonia aquifolium which, according to its authors, provided ‘limited support’ for its effectiveness. How could they miss that?

More importantly, how could the reviewers fail to conduct a proper evaluation of the quality of the studies they included in their review? (Even in their abstract, they twice speak of ‘robust evidence’ – but how can they, without assessing its robustness? Quantity is not remotely the same as quality!) Without a transparent evaluation of the rigour of the primary studies, any review is nearly worthless.

Take the acupuncture trials, for instance, which the review authors included based not on an assessment of the primary studies themselves but on a dodgy review published in a dodgy journal. Had they critically assessed the quality of the primary studies, they could not have stated that CAM therapies with the most robust evidence of efficacy for treatment of psoriasis …[include]… acupuncture. Instead, they would have had to admit that these studies are too dubious for any firm conclusion. Had they even bothered to read them, they would have found that many are in Chinese (which would have meant excluding them from the review [like many pseudo-systematic reviewers, the authors considered only English-language papers]).

There might be a lesson in all this – well, actually I can think of at least two:

  1. Systematic reviews might well be the ‘Rolls Royce’ of clinical evidence. But even a Rolls Royce needs to be assembled correctly, otherwise it is just a heap of useless material.
  2. Even top journals do occasionally publish poor-quality and thus misleading reviews.

If you thought that Chinese herbal medicine is just for oral use, you were wrong. This article explains it all in some detail: Injections of traditional Chinese herbal medicines are also referred to as TCM injections. This approach has evolved during the last 70 years as a treatment modality that, according to the authors, parallels injections of pharmaceutical products.

The researchers from China attempt to provide a descriptive analysis of various aspects of TCM injections. They used the following data sources: (1) information retrieved from the website of China’s drug registration system, and (2) regulatory documents, annual reports and ADR Information Bulletins issued by the drug regulatory authority.

As of December 31, 2017, 134 generic names for TCM injections from 224 manufacturers were approved for sale. Only 5 of the 134 TCM injections are documented in the current version of the Chinese Pharmacopoeia (Ch.P 2015); most TCM injections are documented in drug standards other than the Ch.P. The formulation, ingredients and routes of administration of TCM injections are more complex than those of conventional chemical injections. Ten TCM injections are covered by national lists of essential medicines and 58 are covered by China’s basic insurance programme of 2017. Adverse drug reaction (ADR) reports related to TCM injections account for over 50% of all ADR reports related to TCMs, and the percentage has been rising annually.

The authors concluded that making traditional medicine injectable might be a promising way to develop traditional medicines. However, many practical challenges need to be overcome by further development before a brighter future for injectable traditional medicines can reasonably be expected.

I have to admit that TCM injections frighten the hell out of me. I feel that before we inject any type of substance into patients, we ought to know as a bare minimum:

  • for what conditions, if any, they have been proven to be efficacious,
  • what adverse effects each active ingredient can cause,
  • with what other drugs they might interact,
  • how reliable the quality control for these injections is.

I somehow doubt that these issues have been fully addressed in China. Therefore, I can only hope the Chinese manufacturers are not planning to export their dubious TCM injections.

This could (and perhaps should) be a very short post:

I HAVE NO QUALIFICATIONS IN HOMEOPATHY!

NONE!!!

[the end]

The reason why it is not quite as short as that lies in the fact that homeopathy fans regularly start foaming at the mouth when they state, and re-state, and re-state, and re-state this simple, undeniable fact.

The latest example is by our friend Barry Trestain who recently commented on this blog no less than three times about the issue:

  1. Falsified? You didn’t have any qualifications falsified or otherwise according to this. In quotes as well lol. Perhaps you could enlighten us all on this. Edzard Ernst, Professor of Complementary and Alternative Medicine (CAM) at Exeter University, is the most frequently cited „expert‟ by critics of homeopathy, but a recent interview has revealed the astounding fact that he “never completed any courses” and has no qualifications in homeopathy. What is more his principal experience in the field was when “After my state exam I worked under Dr Zimmermann at the Münchner Krankenhaus für Naturheilweisen” (Munich Hospital for Natural Healing Methods). Asked if it is true that he only worked there “for half a year”, he responded that “I am not sure … it is some time ago”!
  2. I don’t know what you got. I’m only going by your quotes above. You didn’t pass ANY exams. “Never completed any courses and has no qualifications in Homeopathy.” Those aren’t my words.
  3. LOL qualification for their cat? You didn’t even get a psuedo qualification and on top of that you practiced Homeopathy for 20 years eremember. With no qualifications. You are a fumbling and bumbling Proffessor of Cam? LOL. In fact I think I’ll make my cat a proffessor of Cam. Why not? He’ll be as qualified as you.

Often, these foaming (and, in their apoplectic fury, badly-spelling) defenders of homeopathy state or imply that I lied about all this. Yet it is they who are lying, if they say so. I never claimed to have obtained any qualifications in homeopathy; I was trained in homeopathy by doctors of considerable standing in their field, just as I was trained in many other clinical skills (what is more, I published a memoir in which all this is explained in full detail).

In my bewilderment, I sometimes ask my accusers why they think I should have obtained a qualification in homeopathy. Sadly, so far, I have not received a logical answer (most of the time not even an illogical one).

So, today I ask the question again: WHY SHOULD I HAVE NEEDED ANY QUALIFICATION IN HOMEOPATHY?

My answers are here:

  1. I consider such qualifications laughable. A proper qualification in nonsense is just nonsense!
  2. For practising homeopathy (which I did for a while), I did not need such qualifications; as a licensed physician, I was at liberty to use the treatments I felt to be adequate.
  3. For researching homeopathy (which I also did, publishing ~120 Medline-listed papers as a result), I do not need them either. Anyone can research homeopathy, and some of the most celebrated heroes of homeopathy research (e.g. Klaus Linde and Robert Mathie) also have no such qualifications.

I am therefore truly puzzled and write this post to give everyone the chance to name the reasons why they feel I needed qualifications in homeopathy.

Please do tell me!

In one of his many comments, our friend Iqbal just linked to an article that unquestionably is interesting. Here is its abstract (the link also provides the full paper):

Objective: The objective was to assess the usefulness of homoeopathic genus epidemicus (Bryonia alba 30C) for the prevention of chikungunya during its epidemic outbreak in the state of Kerala, India.

Materials and Methods: A cluster-randomised, double-blind, placebo-controlled trial was conducted in Kerala for prevention of chikungunya during the epidemic outbreak in August-September 2007 in three panchayats of two districts. Bryonia alba 30C/placebo was randomly administered to 167 clusters (Bryonia alba 30C = 84 clusters; placebo = 83 clusters), out of which data of 158 clusters was analyzed (Bryonia alba 30C = 82 clusters; placebo = 76 clusters). Healthy participants (absence of fever and arthralgia) were eligible for the study (Bryonia alba 30C n = 19750; placebo n = 18479). Weekly follow-up was done for 35 days. Infection rate in the study groups was analysed and compared by use of cluster analysis.

Results: The findings showed that 2525 out of 19750 persons in the Bryonia alba 30C group suffered from chikungunya, compared to 2919 out of 18479 in the placebo group. Cluster analysis showed a significant difference between the two groups [rate ratio = 0.76 (95% CI 0.14–5.57), P value = 0.03]. The result reflects a 19.76% relative risk reduction by Bryonia alba 30C as compared to placebo.

Conclusion: Bryonia alba 30C as genus epidemicus was better than placebo in decreasing the incidence of chikungunya in Kerala. The efficacy of genus epidemicus needs to be replicated in different epidemic settings.

________________________________________________________________________________

I have often said that the notion that homeopathy might prevent epidemics is based purely on observational data. Here I stand corrected: this is an RCT! What is more, it suggests that homeopathy might be effective. As this is an important claim, let me quickly post just 10 comments on this study. I will try to keep them short (I have only looked at the paper briefly), hoping that others will complete my criticism where I have missed important issues:

  1. The paper was published in THE INDIAN JOURNAL OF RESEARCH IN HOMEOPATHY. This is not a publication that could be called a top journal. If this study really showed something as revolutionary as its conclusions imply, one must wonder why it was published in such an obscure and barely accessible journal.
  2. Several of its authors are homeopaths who unquestionably have an axe to grind, yet they do not declare any conflicts of interest.
  3. The abstract states that the trial was aimed at assessing the usefulness of Bryonia C30, while the paper itself states that it assessed its efficacy. The two are not the same, I think.
  4. The trial was conducted in 2007 and published only 7 years later; why the delay?
  5. The criteria for the main outcome measure were less than clear and had plenty of room for interpretation (“Any participant who suffered from fever and arthralgia (characteristic symptoms of chikungunya) during the follow-up period was considered as a case of chikungunya”).
  6. I fail to follow the logic of the sample size calculation provided by the authors and therefore believe that the trial was woefully underpowered.
  7. As a cluster RCT, its unit of assessment is the cluster. Yet the significant results seem to have been obtained by using single patients as the unit of assessment (“At the end of follow-ups it was observed that 12.78% (2525 out of 19750) healthy individuals, administered with Bryonia alba 30 C, were presented diagnosed as probable case of chikungunya, whereas it was 15.79% (2919 out of 18749) in the placebo group”).
  8. The significance level was set at p = 0.05. As we have often explained, this is far too lenient considering that the verum was a C30 dilution with a prior probability of virtually zero.
  9. Nine clusters were not included in the analysis because of ‘non-compliance’. I doubt that this was the correct way of dealing with the issue and think that an intention-to-treat analysis would have been better.
  10. This RCT was published 4 years ago. If true, its findings are nothing short of a sensation. Therefore, one would have expected that, by now, we would see several independent replications. The fact that this is not the case might mean that such RCTs were done but failed to confirm the findings above.
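Two of these points are easy to illustrate numerically. Using the raw counts quoted in the abstract, a naive per-person calculation reproduces a relative risk reduction of roughly 19% but, as point 7 notes, it ignores the clustering entirely. A back-of-the-envelope Bayesian calculation then illustrates point 8; the prior probability of 0.001 and the power of 0.8 below are my illustrative assumptions, not figures from the trial.

```python
# Naive per-person check of the trial's headline effect,
# using the raw counts quoted in the abstract.
cases_verum, n_verum = 2525, 19750       # Bryonia alba 30C group
cases_placebo, n_placebo = 2919, 18479   # placebo group

risk_verum = cases_verum / n_verum        # ~12.8%
risk_placebo = cases_placebo / n_placebo  # ~15.8%
rrr = 1 - risk_verum / risk_placebo       # relative risk reduction
print(f"naive RRR = {rrr:.1%}")           # ~19.1%

# Point 8: posterior probability of a true effect after a 'significant'
# result, given a very low prior. Prior and power are ASSUMED for
# illustration; alpha = 0.05 is the trial's own threshold.
prior, power, alpha = 0.001, 0.8, 0.05
posterior = (prior * power) / (prior * power + (1 - prior) * alpha)
print(f"posterior probability of a true effect = {posterior:.1%}")  # ~1.6%
```

Even granting the (implausibly generous) assumption that a C30 dilution has a 1-in-1000 chance of working, a result at p < 0.05 would leave the probability of a true effect below 2% – which is why point 8 matters.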

As I said, I would welcome others to have a look and tell us what they think about this potentially important study.

Kinesiology tape (KT) is fashionable, it seems. Gullible consumers proudly wear it as a decorative ornament to attract attention and show how very cool they are.

Am I too cynical?

Perhaps.

But does KT really do anything more?

A new trial might tell us.

The aim of this study was to investigate whether adding kinesiology tape (KT) to spinal manipulation (SM) can provide any extra effect in athletes with chronic non-specific low back pain (CNLBP).

Forty-two athletes (21males, 21females) with CNLBP were randomized into two groups of SM (n = 21) and SM plus KT (n = 21). Pain intensity, functional disability level and trunk flexor-extensor muscles endurance were assessed by Numerical Rating Scale (NRS), Oswestry pain and disability index (ODI), McQuade test, and unsupported trunk holding test, respectively. The tests were done before and immediately, one day, one week, and one month after the interventions and compared between the two groups.

After treatments, pain intensity and disability level decreased and endurance of trunk flexor-extensor muscles increased significantly in both groups. Repeated measures analysis, however, showed that there was no significant difference between the groups in any of the evaluations.

The authors, physiotherapists from Iran, concluded that the findings of the present study showed that adding KT to SM does not appear to have a significant extra effect on pain, disability and muscle endurance in athletes with CNLBP. However, more studies are needed to examine the therapeutic effects of KT in treating these patients.

Regular readers of my blog will be able to predict what I have to say about this study design: A+B versus B is not a meaningful test of anything. I used to claim that it cannot possibly produce a negative result – and yet, here it seems to have done exactly that!

How come?

The way I see it, there are two possibilities to explain this:

  • the KT has a mildly negative effect on CNLBP; thus the expected positive placebo-effect was neutralised, resulting in an overall null-effect;
  • the study was under-powered such that the true inter-group difference could not manifest itself.
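The second possibility can be quantified with a standard normal-approximation power calculation (a rough sketch; the standardized effect size of d = 0.3, a plausible 'small' between-group difference, is my assumption, not a figure from the trial):

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(d: float, n_per_group: int) -> float:
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05
    for a standardized mean difference d (normal approximation;
    the negligible opposite tail is ignored)."""
    z_crit = 1.96  # two-sided 5% critical value
    return normal_cdf(d * sqrt(n_per_group / 2) - z_crit)

# With 21 athletes per arm, a small extra effect of KT (d = 0.3, assumed)
# would be detected only about one time in six:
print(f"power with n = 21 per group: {power_two_sample(0.3, 21):.0%}")

# Sample size needed per group for the conventional 80% power at d = 0.3
# (0.8416 is the z-value corresponding to 80% power):
n_needed = 2 * ((1.96 + 0.8416) / 0.3) ** 2
print(f"n per group for 80% power: about {n_needed:.0f}")
```

In other words, detecting a small incremental effect of KT on top of SM would have required something like 174 patients per group, not 21 – and that is before considering that the A+B versus B design cannot control for placebo effects at all.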

I think the second possibility is more likely, but it really does not matter at all. Because the only lesson we can learn from this trial is this: inadequate study designs will hardly ever generate anything worthwhile.

And this is, I think, a lesson that would be valuable for many researchers.

_______________________________________________________________________

Reference

J Bodyw Mov Ther. 2018 Apr;22(2):540-545. doi: 10.1016/j.jbmt.2017.07.008. Epub 2017 Jul 26.

Comparing spinal manipulation with and without Kinesio Taping® in the treatment of chronic low back pain.

 

I have often cautioned my readers about the ‘evidence’ supporting acupuncture (and other alternative therapies). Rightly so, I think. Here is yet another warning.

This systematic review assessed the clinical effectiveness of acupuncture in the treatment of postpartum depression (PPD). Nine trials involving 653 women were selected. A meta-analysis demonstrated that the acupuncture group had a significantly greater overall effective rate compared with the control group. Moreover, acupuncture significantly increased oestradiol levels compared with the control group. Regarding the HAMD (Hamilton Depression Rating Scale) and EPDS (Edinburgh Postnatal Depression Scale) scores, no difference was found between the two groups. The Chinese authors concluded that acupuncture appears to be effective for postpartum depression with respect to certain outcomes. However, the evidence thus far is inconclusive. Further high-quality RCTs following standardised guidelines with a low risk of bias are needed to confirm the effectiveness of acupuncture for postpartum depression.

What a conclusion!

What a review!

What a journal!

What evidence!

Let’s start with the conclusion: if the authors feel that the evidence is ‘inconclusive’, why do they state that ‘acupuncture appears to be effective for postpartum depression’? To me, this simply does not make sense!

Such oddities abound in the review. The abstract does not mention the fact that all trials were from China (published in Chinese, which means that people who cannot read Chinese are unable to check any of the reported findings) and that most of them were of very poor quality – two good reasons to discard the lot without further ado and to conclude that there is no reliable evidence at all.

The authors also tell us very little about the treatments used in the control groups. In the paper, they state that “the control group needed to have received a placebo or any type of herb, drug and psychological intervention”. But was acupuncture better than all or any of these treatments? I could not find sufficient data in the paper to answer this question.

Moreover, only three trials seem to have bothered to mention adverse effects. Thus the majority of the studies were in breach of research ethics. No mention is made of this in the discussion.

In the paper, the authors re-state that “this meta-analysis showed that the acupuncture group had a significantly greater overall effective rate compared with the control group. Moreover, acupuncture significantly increased oestradiol levels compared with the control group.” This is, I think, highly misleading (see above).

Finally, let’s have a quick look at the journal ‘Acupuncture in Medicine’ (AiM). Even though it is published by the BMJ group (the reason for this phenomenon can be found here: “AiM is owned by the British Medical Acupuncture Society and published by BMJ”; this means that all BMAS members automatically receive the journal, which thus is a resounding commercial success), it is little more than a cult newsletter. The editorial board is full of acupuncture enthusiasts, and the journal hardly ever publishes anything that is remotely critical of the wondrous myths of acupuncture.

My conclusion considering all this is as follows: we ought to be very careful before accepting any ‘evidence’ that is currently being published about the benefits of acupuncture, even if it superficially looks ok. More often than not, it turns out to be profoundly misleading, utterly useless and potentially harmful pseudo-evidence.


Reference

Li S, Zhong W, Peng W, Jiang G. Effectiveness of acupuncture in postpartum depression: a systematic review and meta-analysis. Acupunct Med. 2018 Jun 15. doi: 10.1136/acupmed-2017-011530. [Epub ahead of print]

How often do we hear this sentence: “I know, because I have done my research!” I don’t doubt that most people who make this claim believe it to be true.

But is it?

What many mean by saying, “I know, because I have done my research”, is that they went on the internet and looked at a few websites. Others might have been more thorough and read books and perhaps even some original papers. But does that justify their claim, “I know, because I have done my research”?

The thing is, there is research and there is research.

The dictionary defines research as “The systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.” This definition is helpful because it mentions several issues which, I believe, are important.

Research should be:

  • systematic,
  • an investigation,
  • establish facts,
  • reach new conclusions.

To me, this indicates that none of the following can be truly called research:

  • looking at a few randomly chosen papers,
  • merely reading material published by others,
  • uncritically adopting the views of others,
  • repeating the conclusions of others.

Obviously, I am being very harsh and uncompromising here. Not many people could, according to these principles, truthfully claim to have done research in alternative medicine. Most people in this realm do not fulfil any of those criteria.

As I said, there is research and research – research that meets the above criteria, and the type of research most people mean when they claim: “I know, because I have done my research.”

Personally, I don’t mind that the term ‘research’ is used in more than one way:

  • there is research meeting the criteria of the strict definition
  • and there is a common usage of the word.

What I do mind, however, is when research in the loose, everyday sense is claimed to be as relevant and reliable as research in the strict sense. This would be a classic false equivalence, akin to putting pseudo-experts on a par with experts, to believing that facts are no different from fantasy, or to assuming that truth is akin to post-truth.

Sadly, in the realm of alternative medicine (and, alarmingly, in other areas as well), this is exactly what has been happening for quite some time. No doubt this is one reason why many consumers are so confused and often make wrong, sometimes dangerous therapeutic decisions. And this is why I think it is important to point out the difference between research and research.

We probably have all heard of predatory journals. The phenomenon of ‘predatory conferences’ seems to be less well appreciated. Hardly a day goes by that I do not receive emails like the one below:

________________________________________________________

Dear Dr. E Ernst ,

Good day!

After the success of Traditional Medicine-2018 in Rome, Italy, on behalf of the Organizing committee, we are delighted to invite you to be a speaker at our upcoming “3rd World Congress and Expo on Traditional and Alternative Medicine” (Traditional Medicine-2019) which will be held during June 06-08, 2019 in Berlin, Germany.

Traditional Medicine-2019 will focus on the theme “Natural and Scientific Approach for Treatment and Rehabilitation”…

_________________________________________________________

I have chosen this particular one because it refers to the success of a recent conference in Rome. This is a conference where I was a member of the organising committee and was listed as a keynote speaker. Here is the original entry from the programme:

Keynote Forum 09:15-09:55

Title: Integrative Medicine: Hype or Hope? Ernst Edzard, University of Exeter, United Kingdom

And here is the strange tale of how it all came about:

After receiving a barrage of similar invitations and having ignored them for months, I thought that maybe I was being unnecessarily suspicious – perhaps these conferences are not as dodgy as they appear to be. So, I responded to one email and stated the usual things:

  • I do not insist on a fee,
  • I want my expenses paid,
  • I need a topic that I feel comfortable with,
  • I need to know who else is speaking,
  • I must know who is sponsoring the event,
  • the whole thing must fit into my time-table.

I got an enthusiastic response and, even though not all my questions were answered, they agreed to fund my travel and hotel costs with a lump sum of 300 Euros. They asked me to act as chair of the entire meeting and as ‘signing authority for the conference’ (I don’t know what this means), but I declined. Yet I wanted to see how the whole thing would play out. So, I accepted a keynote lecture, agreed to be a member of the organising/scientific committee, and sent them my abstract.

Then I did not hear anything for a long time (normally, I would, as a member of the organising/scientific committee, have expected to receive abstract submissions for review and other material). When someone sent me an email about it, I noted that the organisers were advertising the conference with my name and photo. I was irritated by that, but decided to play along so that I could get to the bottom of all this. Then, about 6 weeks before the event came this email from the organisers:

Dear Dr. Ernst ,

Greetings of the day!!

We are glad to have your presence at Traditional Medicine 2018.

Hope this mail finds you in good spirits.

Kindly find the attached final program for the Conference.

Could you please confirm us your check in & check out dates.

Revert back to me for further queries…

I replied as follows:

I will look at the possibilities of trains, flights etc., once you send me the promised funds for buying my tickets.

e ernst

________________________________________________________

And the rest was silence!

I did not hear a word from them after telling them that they needed to send me the money before I committed myself to buying flight tickets etc. Nor did I expect to hear from them after that.

The run-up to the conference was too bizarre, in my view, for a credible conference:

  • The organisers seemed to know next to nothing about the topic of the conference.
  • They signed with English names and gave a London address, but their language skills seemed limited.
  • They had few of the features that are typical for a serious conference.
  • Almost all of their emails seemed strangely vague.
  • I got the impression that the entire organisation was run not by a thinking person but by a computer.
  • They seemed to organise dozens of conferences at any one time.
  • All their conferences were in towns that might seem attractive to visit.
  • None were associated with a leading scientist’s place of work.
  • They wanted my commitments but never committed themselves to anything tangible.

In a word, they seemed phony!

Of course, in the end, I did not fly to Rome and did not deliver my keynote lecture. Evidently, this did not stop them from emailing me soon after, stating “After the success of Traditional Medicine-2018 in Rome, Italy, on behalf of the Organizing committee…”

The reason for writing this is to warn you: there are obviously quite a few (not so) clever people out there who want to get hold of your cash by tempting you to attend an apparently interesting conference in an attractive town which, once you participate, turns out to be a waste of time, money and effort.

May I tempt you into running a little (hopefully instructive) thought experiment with me? It is quite simple: I will tell you about the design of a clinical trial, and you will tell me what the likely outcome of this study would be.

Are you game?

Here we go:

_____________________________________________________________________________

Imagine we conduct a trial of acupuncture for persistent pain (any type of pain really). We want to find out whether acupuncture is more than a placebo when it comes to pain-control. Of course, we want our trial to look as rigorous as possible. So, we design it as a randomised, sham-controlled, partially-blinded study. To be really ‘cutting edge’, our study will not have two but three parallel groups:

1. Standard needle acupuncture administered according to a protocol recommended by a team of expert acupuncturists.

2. Minimally invasive sham acupuncture employing shallow insertion of short needles at non-acupuncture points. Patients in groups 1 and 2 are blinded, i.e. they are not supposed to know whether they receive the sham or the real acupuncture.

3. No treatment at all.

We apply the treatments for a sufficiently long time, say 12 weeks. Before we start, after 6 and 12 weeks, we measure our patients’ pain with a validated method. We use sound statistical methods to compare the outcomes between the three groups.

WHAT DO YOU THINK THE RESULT WOULD BE?

You are not sure?

Well, let me give you some hints:

Group 3 is not going to do very well; not only do they receive no therapy at all, but they are also disappointed to have ended up in this group, as they joined the study in the hope of getting acupuncture. Therefore, they will (claim to) feel a lot of pain.

Group 2 will be pleased to receive some treatment. Over the course of the trial, however, they will get more and more suspicious. As they were told, during the process of obtaining informed consent, that the trial entails treating some patients with a sham/placebo, they are bound to ask themselves whether they ended up in this group. They will see the short needles and the shallow needling, and a percentage of patients in this group will doubtless suspect that they are getting the sham treatment. The doubters will not show a powerful placebo response. Therefore, the average pain scores in this group will decrease – but only a little.

Group 1 will also be pleased to receive some treatment. As the therapists cannot be blinded, they will do their best to meet the high expectations of their patients. Consequently, they will benefit fully from the placebo effect of the intervention and the pain score of this group will decrease significantly.

So, now we can predict the most likely result of this trial without even conducting it. Assuming, as many people do, that acupuncture is a placebo therapy, we see that group 3 will suffer the most pain. In comparison, groups 1 and 2 will show better outcomes.

Of course, the main question is: how do groups 1 and 2 compare to each other? After all, we designed our sham-controlled trial to answer exactly this question: is acupuncture more than a placebo? As pointed out above, some patients in group 2 will have become suspicious and will therefore not have experienced the full placebo response. This means that, provided the sample sizes are sufficiently large, there should be a significant difference between these two groups favouring real acupuncture over sham. In other words, our trial will conclude that acupuncture is better than placebo, even if acupuncture is a placebo.
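The argument above can be sketched as a quick simulation. All the numbers in it – the size of the placebo response, the share of ‘doubters’ in the sham group, and how much of the response a doubter retains – are illustrative assumptions of mine, not data from any real trial; the point is only that, with these assumptions built in, the ‘real’ arm beats the ‘sham’ arm even though the simulation contains no specific acupuncture effect at all:

```python
import random
import statistics

random.seed(42)

# Illustrative assumptions (not data from any real trial):
PLACEBO_EFFECT = 2.0    # mean pain reduction (0-10 scale) from a full placebo response
NOISE_SD = 1.5          # between-patient variability in pain reduction
DOUBTER_FRACTION = 0.4  # share of sham patients who suspect they got the sham
DOUBTER_RESPONSE = 0.3  # fraction of the placebo response a doubter retains

def simulate_trial(n_per_group=200):
    """Simulate pain reduction in the three groups, assuming real
    acupuncture is nothing more than a placebo."""
    # Group 1: real acupuncture -- everyone shows the full placebo response
    real = [random.gauss(PLACEBO_EFFECT, NOISE_SD) for _ in range(n_per_group)]
    # Group 2: sham acupuncture -- doubters lose most of the placebo response
    sham = [
        random.gauss(
            PLACEBO_EFFECT
            * (DOUBTER_RESPONSE if random.random() < DOUBTER_FRACTION else 1.0),
            NOISE_SD,
        )
        for _ in range(n_per_group)
    ]
    # Group 3: no treatment -- no placebo response at all
    none = [random.gauss(0.0, NOISE_SD) for _ in range(n_per_group)]
    return real, sham, none

real, sham, none = simulate_trial()
print(f"mean pain reduction  real: {statistics.mean(real):.2f}  "
      f"sham: {statistics.mean(sham):.2f}  none: {statistics.mean(none):.2f}")
```

With a few hundred patients per group, the average reduction in the ‘real’ arm reliably exceeds that in the ‘sham’ arm, which in turn exceeds the untreated arm – exactly the pattern a naive reader would interpret as acupuncture outperforming placebo.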

THANK YOU FOR DOING THIS THOUGHT EXPERIMENT WITH ME.

Now I can tell you that it has a very real basis. The leading medical journal, JAMA, just published such a study and, to make matters worse, the trial was even sponsored by one of the most prestigious funding agencies: the NIH.

Here is the abstract:

___________________________________________________________________________

Musculoskeletal symptoms are the most common adverse effects of aromatase inhibitors and often result in therapy discontinuation. Small studies suggest that acupuncture may decrease aromatase inhibitor-related joint symptoms.

Objective:

To determine the effect of acupuncture in reducing aromatase inhibitor-related joint pain.

Design, Setting, and Patients:

Randomized clinical trial conducted at 11 academic centers and clinical sites in the United States from March 2012 to February 2017 (final date of follow-up, September 5, 2017). Eligible patients were postmenopausal women with early-stage breast cancer who were taking an aromatase inhibitor and scored at least 3 on the Brief Pain Inventory Worst Pain (BPI-WP) item (score range, 0-10; higher scores indicate greater pain).

Interventions:

Patients were randomized 2:1:1 to the true acupuncture (n = 110), sham acupuncture (n = 59), or waitlist control (n = 57) group. True acupuncture and sham acupuncture protocols consisted of 12 acupuncture sessions over 6 weeks (2 sessions per week), followed by 1 session per week for 6 weeks. The waitlist control group did not receive any intervention. All participants were offered 10 acupuncture sessions to be used between weeks 24 and 52.

Main Outcomes and Measures:

The primary end point was the 6-week BPI-WP score. Mean 6-week BPI-WP scores were compared by study group using linear regression, adjusted for baseline pain and stratification factors (clinically meaningful difference specified as 2 points).

Results:

Among 226 randomized patients (mean [SD] age, 60.7 [8.6] years; 88% white; mean [SD] baseline BPI-WP score, 6.6 [1.5]), 206 (91.1%) completed the trial. From baseline to 6 weeks, the mean observed BPI-WP score decreased by 2.05 points (reduced pain) in the true acupuncture group, by 1.07 points in the sham acupuncture group, and by 0.99 points in the waitlist control group. The adjusted difference for true acupuncture vs sham acupuncture was 0.92 points (95% CI, 0.20-1.65; P = .01) and for true acupuncture vs waitlist control was 0.96 points (95% CI, 0.24-1.67; P = .01). Patients in the true acupuncture group experienced more grade 1 bruising compared with patients in the sham acupuncture group (47% vs 25%; P = .01).

Conclusions and Relevance:

Among postmenopausal women with early-stage breast cancer and aromatase inhibitor-related arthralgias, true acupuncture compared with sham acupuncture or with waitlist control resulted in a statistically significant reduction in joint pain at 6 weeks, although the observed improvement was of uncertain clinical importance.

__________________________________________________________________________

Do you see how easy it is to deceive (almost) everyone with a trial that looks rigorous to (almost) everyone?

My lesson from all this is as follows: whether consciously or unconsciously, SCAM researchers often build into their trials more or less well-hidden little loopholes that ensure they generate a positive outcome. Thus even a placebo can appear to be effective. They are true masters of producing false-positive findings, which later become part of a meta-analysis that is, of course, equally false-positive. It is a great shame, in my view, that even top journals (in the above case, JAMA) and prestigious funders (in the above case, the NIH) cannot (or will not?) see through this type of trickery.
