
Therapeutic Touch is a therapy mostly popular with nurses. We have discussed it before, for instance here, here, here and here. To call it implausible would be an understatement. But what does the clinical evidence tell us? Does it work?

This literature review by Iranian authors was aimed at critically evaluating the data from clinical trials examining the clinical efficacy of therapeutic touch as a supportive care modality in adult patients with cancer.

Four electronic databases were searched from the year 1990 to 2015 to locate potentially relevant peer-reviewed articles using the key words therapeutic touch, touch therapy, neoplasm, cancer, and CAM. Additionally, relevant journals and references of all the located articles were manually searched for other potentially relevant studies.

A total of 334 articles were found on the basis of the key words, of which 17 clinical trial reports were examined in accordance with the objectives of the study. Six articles made it into the final dataset; in these, several examples of positive effects of healing touch on pain, nausea, anxiety, fatigue, quality of life and biochemical parameters were observed.

The authors concluded that, based on the results of this study, an affirmation can be made regarding the use of TT, as a non-invasive intervention for improving the health status in patients with cancer. Moreover, therapeutic touch was proved to be a useful strategy for adult patients with cancer.

This review is badly designed and poorly reported. Crucially, its conclusions are not credible. Contrary to what the authors stated when formulating their aims, the methods lack any attempt of critically evaluating the primary data.

A systematic review is more than a process of ‘pea counting’. It requires a rigorous assessment of the risk of bias of the included studies. If that crucial step is absent, the article is next to worthless and the review degenerates into a promotional exercise. Sadly, this is the case with the present review.
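
What the missing step would look like in practice can be sketched quite simply. Below is a toy tabulation of Cochrane-style risk-of-bias judgements across included trials; the domain names follow the Cochrane tool, but the trials and judgements are entirely invented for illustration.

```python
# Toy risk-of-bias table for a set of included trials.
# Domains follow the Cochrane risk-of-bias tool; the judgements
# ("low"/"high"/"unclear") and the trials themselves are invented.
DOMAINS = ["random sequence generation", "allocation concealment",
           "blinding", "incomplete outcome data", "selective reporting"]

studies = {
    "Trial A": ["low", "unclear", "high", "low", "unclear"],
    "Trial B": ["unclear", "unclear", "high", "high", "unclear"],
    "Trial C": ["low", "low", "high", "low", "low"],
}

for name, judgements in studies.items():
    # One high-risk domain is generally enough to downgrade a trial.
    overall = ("high" if "high" in judgements
               else "unclear" if "unclear" in judgements
               else "low")
    print(f"{name}: overall risk of bias = {overall}")
    for domain, judgement in zip(DOMAINS, judgements):
        print(f"  {domain:28s} {judgement}")
```

Only with such a table in hand can a review weight or exclude trials appropriately; tallying positive results without it is precisely the ‘pea counting’ criticised above.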

You may think that this is relatively trivial (“Who cares what a few feeble-minded nurses do?”), but I would disagree: if the medical literature continues to be polluted by such irresponsible trash, many people (nurses, journalists, healthcare decision makers, researchers) who may not be in a position to see the fatal flaws of such pseudo-reviews will arrive at the wrong conclusions and make wrong decisions. This will inevitably contribute to a hindrance of progress and, in certain circumstances, must endanger the well-being or even the life of vulnerable patients.

A recently published study was aimed at evaluating the efficacy and safety of potentized estrogen compared to placebo in the homeopathic treatment of endometriosis-associated pelvic pain (EAPP). This 24-week, randomized, double-blind, placebo-controlled trial included 50 women aged 18-45 years with a diagnosis of deeply infiltrating endometriosis based on magnetic resonance imaging or transvaginal ultrasound after bowel preparation, and a score ≥ 5 on a visual analogue scale (VAS: range 0 to 10) for endometriosis-associated pelvic pain. Potentized estrogen (12cH, 18cH and 24cH) or placebo was administered twice daily. The primary outcome measure was change in the severity of EAPP global and partial scores (VAS) from baseline to week 24, determined as the difference in the mean score of five modalities of chronic pelvic pain (dysmenorrhea, deep dyspareunia, non-cyclic pelvic pain, cyclic bowel pain and/or cyclic urinary pain). The secondary outcome measures were mean score differences for quality of life assessed with the SF-36 Health Survey Questionnaire, depression symptoms on the Beck Depression Inventory (BDI), and anxiety symptoms on the Beck Anxiety Inventory (BAI).

The EAPP global score (VAS: range 0 to 50) decreased by 12.82 in the group treated with potentized estrogen from baseline to week 24. The potentized estrogen group also exhibited partial score (VAS: range 0 to 10) reductions in three EAPP modalities: dysmenorrhea (3.28), non-cyclic pelvic pain (2.71), and cyclic bowel pain (3.40). The placebo group did not show any significant changes in EAPP global or partial scores. In addition, the potentized estrogen group showed significant improvement in three of eight SF-36 domains (bodily pain, vitality and mental health) and in depression symptoms (BDI). The placebo group showed no significant improvement in this regard. These results demonstrate superiority of potentized estrogen over placebo. Few adverse events were associated with potentized estrogen.

The authors concluded that potentized estrogen (12cH, 18cH and 24cH) at a dose of 3 drops twice daily for 24 weeks was significantly more effective than placebo for reducing endometriosis-associated pelvic pain.

The study is unusual in several ways. For instance, contrary to most trials of homeopathy, its protocol had been published in ‘Homeopathy’ in August 2016. Here is the abstract:

BACKGROUND:

Endometriosis is a chronic inflammatory disease that causes difficult-to-treat pelvic pain. Thus being, many patients seek help in complementary and alternative medicine, including homeopathy. The effectiveness of homeopathic treatment for endometriosis is controversial due to the lack of evidences in the literature. The aim of the present randomized controlled trial is to assess the efficacy of potentized estrogen compared to placebo in the treatment of chronic pelvic pain associated with endometriosis.

METHODS/DESIGN:

The present is a randomized, double-blind, placebo-controlled trial of a homeopathic medicine individualized according to program ‘New Homeopathic Medicines: use of modern drugs according to the principle of similitude’ (http://newhomeopathicmedicines.com). Women with endometriosis, chronic pelvic pain and a set of signs and symptoms similar to the adverse events caused by estrogen were recruited at the Endometriosis Unit of Division of Clinical Gynecology, Clinical Hospital, School of Medicine, University of São Paulo (Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo – HCFMUSP). The participants were selected based on the analysis of their medical records and the application of self-report structured questionnaires. A total of 50 women meeting the eligibility criteria will be randomly allocated to receive potentized estrogen or placebo. The primary clinical outcome measure will be severity of chronic pelvic pain. Statistical analysis will be performed on the intention-to-treat and per-protocol approaches comparing the effect of the homeopathic medicine versus placebo after 24 weeks of intervention.

DISCUSSION:

The present study was approved by the research ethics committee of HCFMUSP and the results are expected in 2016.

END OF QUOTE

As far as I can see, this study has no major flaws (I do not have access, however, to the full article). It seems to suggest that highly diluted homeopathic remedies are efficacious. I am aware of the fact that this will be difficult to accept for many readers of this blog.

So, what should we make of this new trial?

Should we recommend homeopathic estrogen to women suffering from endometriosis? I don’t think so. On the contrary, I recommend a healthy dose of scepticism. Clinical trials sometimes produce false results, whether by chance, through bias, or through fraud. In any case, we hardly ever rely on the findings of a single study. The sensible course of action is always to wait for an independent replication (and, of course, to study the full text of the paper).
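
The ‘by chance’ part is easy to quantify. Here is a small simulation — purely illustrative, with made-up parameters — showing how often a naive significance test declares an utterly inert treatment ‘effective’, and how rarely such a false positive survives an independent replication.

```python
import random
import statistics
from math import sqrt

random.seed(1)

def fake_trial(n=25):
    """Simulate one two-arm trial of an inert treatment.

    Both arms are drawn from the same distribution, so any
    'significant' difference is a false positive. Returns True
    if a naive two-sample z-test gives p < 0.05 (two-sided).
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96

trials = 10_000
positives = sum(fake_trial() for _ in range(trials))
replicated = sum(fake_trial() and fake_trial() for _ in range(trials))

print(f"single 'positive' trials: {positives / trials:.1%}")   # ~5%
print(f"positive AND replicated:  {replicated / trials:.2%}")  # ~0.25%
```

Roughly one in twenty trials of a useless remedy will look positive; demanding an independent replication squeezes that down to about one in four hundred — which is precisely why a single trial, however well designed, settles nothing.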


One phenomenon that can be noted more frequently than any other in alternative medicine research is that studies arrive at wrong or misleading conclusions. This is more than a little disappointing, not least because it is the conclusion of a trial that is often picked up by health writers and others who in turn mislead the public. On this blog, we must have seen hundreds of examples of this irritating phenomenon. Here is yet another one. This study, a randomized, parallel, open-label exploratory trial, evaluated and compared the effects of systemic manual acupuncture, periauricular electroacupuncture and distal electroacupuncture for treating patients with tinnitus. Patients who had suffered from idiopathic tinnitus for more than two weeks were recruited. They were divided into three groups:

  1. systemic manual acupuncture group (MA),
  2. periauricular electroacupuncture group (PE),
  3. distal electroacupuncture group (DE).

Nine acupoints (TE17, TE21, SI19, GB2, GB8, ST36, ST37, TE3 and TE9), two periauricular acupoints (TE17 and TE21), and four distal acupoints (TE3, TE9, ST36, and ST37) were selected. The treatment sessions were performed twice weekly for a total of 8 sessions over 4 weeks. Outcome measures were the tinnitus handicap inventory (THI) score and visual analogue scales (VAS) for loudness and discomfort. Demographic and clinical characteristics of all participants were compared between the groups upon admission using one-way analysis of variance (ANOVA). One-way ANOVA was also used to evaluate the THI, VAS loudness, and VAS discomfort scores, with the least significant difference test as a post-hoc test. In total, 39 subjects were eligible for analysis. No differences in THI and VAS loudness scores were observed between groups. The VAS discomfort scores decreased significantly in MA and DE compared with those in PE. Within each group, all three treatments showed some effect on THI, VAS loudness and VAS discomfort scores after treatment, except DE on THI. The authors concluded that there was no statistically significant difference between systemic manual acupuncture, periauricular electroacupuncture and distal electroacupuncture in tinnitus. However, all three treatments had some effect on tinnitus within the group before and after treatment. Systemic manual acupuncture and distal electroacupuncture have some effect on VAS.

None of the three treatments tested in this study had previously been proven to work. Therefore, it is quite simply nonsensical to compare them. Comparative studies are indicated only for therapies that have a solid evidence base. They are called ‘superiority trials’ and require a different statistical approach as well as much larger sample sizes (see the sketch below). In other words, this study was an unethical waste of resources from the outset. With this in mind, there is only one conclusion that fits the data: there was no statistically significant difference between the three types of acupuncture. The data are therefore in keeping with the notion that all three are placebos. Alternatively, one might conclude more clearly for those who are otherwise resistant to learning a lesson: POORLY DESIGNED CLINICAL TRIALS ARE UNETHICAL AND NEVER LEND THEMSELVES TO MEANINGFUL CONCLUSIONS.
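
To see why superiority trials need larger samples, here is a minimal sample-size sketch using the standard normal-approximation formula; the effect sizes and error rates are illustrative assumptions, not figures taken from the tinnitus trial.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided two-sample
    comparison of means, via the usual normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)**2,
    where d is the standardised mean difference (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# Placebo-controlled trials often target a 'medium' effect (d ~ 0.5);
# a superiority trial comparing two active treatments must detect the
# typically much smaller difference BETWEEN them (say d ~ 0.2).
print(f"vs placebo (d=0.5):  {n_per_group(0.5):.0f} per group")   # ~63
print(f"superiority (d=0.2): {n_per_group(0.2):.0f} per group")   # ~392
```

With only 39 analysable patients spread over three arms, this study had no realistic chance of detecting a difference between active treatments, even if one existed.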

One of the questions I hear frequently is ‘HOW CAN I BE SURE THIS STUDY IS SOUND?’ Even though I have spent much of my professional life on this issue, I invariably struggle to provide an answer. Firstly, because a comprehensive reply would inevitably be the size of a book, perhaps even several books. And secondly, to most lay people, the reply would be intensely boring, I am afraid.

Yet many readers of this blog evidently search for some guidance – so, let me try to provide a few indicators – indicators, not more!!! – as to what might signify a good and a poor clinical trial (other types of research would need different criteria).

INDICATORS SUGGESTIVE OF A GOOD CLINICAL TRIAL

  • Author from a respected institution.
  • Article published in a respected journal.
  • A clear research question.
  • Full description of the methods used such that an independent researcher could repeat the study.
  • Randomisation of study participants into experimental and control groups.
  • Use of a placebo in the control group where possible.
  • Blinding of patients.
  • Blinding of investigators, including clinicians administering the treatments.
  • Clear definition of a primary outcome measure.
  • Sufficiently large sample size demonstrated with a power calculation.
  • Adequate statistical analyses.
  • Clear presentation of the data such that an independent assessor can check them.
  • Understandable write-up of the entire study.
  • A discussion that puts the study into the context of all the important previous work in this area.
  • Self-critical analysis of the study design, conduct and interpretation of the results.
  • Cautious conclusions which are strictly based on the data presented.
  • Full disclosure of ethics approval and informed consent.
  • Full disclosure of funding sources.
  • Full disclosure of conflicts of interest.
  • List of references is up-to-date and includes also studies that contradict the authors’ findings.

I told you this would be boring! Not only that, but each bullet point is far too short to make real sense, and any full explanation would be even more boring to a lay person, I am sure.

What might be a little more fun is to list features of a clinical trial that might signify a poor study. So, let’s try that.

WARNING SIGNALS INDICATING A POOR CLINICAL TRIAL

  • published in one of the many dodgy CAM journals (or in a book, blog or similar),
  • single author,
  • authors are known to be proponents of the treatment tested,
  • author has previously published only positive studies of the therapy in question (or member of my ‘ALT MED HALL OF FAME’),
  • lack of plausible rationale for the study,
  • lack of plausible rationale for the therapy that is being tested,
  • stated aim of the study is ‘to demonstrate the effectiveness of…’ (clinical trials are for testing, not demonstrating effectiveness or efficacy),
  • stated aim ‘to establish the effectiveness AND SAFETY of…’ (even large trials are usually far too small for establishing the safety of an intervention),
  • text full of mistakes, e.g. spelling, grammar, etc.,
  • sample size is tiny,
  • pilot study reporting anything other than the feasibility of a definitive trial,
  • methods not described in sufficient detail,
  • mismatch between aim, method, and conclusions of the study,
  • results presented only as a graph (rather than figures which others can re-calculate),
  • statistical approach inadequate or not sufficiently detailed,
  • discussion without critical input,
  • lack of disclosures of ethics, funding or conflicts of interest,
  • conclusions which are not based on the results.

The problem here (as above) is that one would need to write at least an entire chapter on each point to render it comprehensible. Without further detailed explanations, the issues raised remain rather abstract or nebulous. Another problem is that both of the above lists are, of course, far from complete; they are merely an expression of my own experience in assessing clinical trials.

Despite these caveats, I hope that those readers who are not complete novices to the critical evaluation of clinical trials might be able to use my ‘warning signals’ as a form of checklist that helps them separate the wheat from the chaff.
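
For readers who like their checklists explicit, here is a toy version; the signals are condensed from the list above, and the flags attributed to the example paper are invented.

```python
# Toy red-flag counter based on the warning signals listed above.
# Which flags apply to a given paper is a judgement call; the
# example below is invented for illustration.
WARNING_SIGNALS = [
    "dodgy CAM journal",
    "single author",
    "authors are proponents of the therapy",
    "no plausible rationale",
    "aim is to 'demonstrate effectiveness'",
    "tiny sample size",
    "methods not described in detail",
    "statistical approach inadequate",
    "missing disclosures (ethics/funding/conflicts)",
    "conclusions not based on the results",
]

def red_flag_count(flags_present):
    return sum(1 for flag in WARNING_SIGNALS if flag in flags_present)

example_paper = {"tiny sample size",
                 "aim is to 'demonstrate effectiveness'",
                 "conclusions not based on the results"}

count = red_flag_count(example_paper)
print(f"{count} of {len(WARNING_SIGNALS)} warning signals present")
# There is no magic threshold: even a single serious flag (e.g.
# conclusions not based on the results) can be disqualifying on its own.
```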

The common cold is one of the indications for which homeopathy is deemed to be effective… by homeopaths that is! Non-homeopaths are understandably critical about this claim, not least because there is no good evidence for it. But, hold on, there is a new study which might change all this.

This study was recently published in COMPLEMENTARY THERAPIES IN MEDICINE, which is supposed to be one of the better journals in this area. According to its authors, it was conducted “to determine if a homeopathic syrup was effective in treating cold symptoms in preschool children.” Children diagnosed with an upper respiratory tract infection were randomized to receive a commercial homeopathic cold syrup containing allium cepa 6X, hepar sulf calc 12X, natrum muriaticum 6X, phosphorus 12X, pulsatilla 6X, sulphur 12X, and hydrastis 6X, or placebo. Parents administered the study medication as needed for 3 days. The primary outcome was change in symptoms one hour after each dose. Parents also assessed the severity of each of the symptoms of runny nose, cough, congestion and sneezing at baseline and twice daily for 3 days, using a 4-point rating scale. A composite cold score was calculated by combining the values for each of the four symptoms. Among 261 eligible participants, data on 957 doses of study medication in 154 children were analyzed. There was no significant difference in improvement one hour after the dose for any symptom between the two groups. Analysis of the twice-daily data on the severity of cold symptoms compared to baseline values found that improvements in sneezing, cough and the composite cold score were significantly greater at both the first and second assessments among those receiving the cold syrup compared to placebo recipients.

The authors concluded that the homeopathic syrup appeared to be effective in reducing the severity of cold symptoms in the first day after beginning treatment.

Where to start? There are so many problems with this study that I find it difficult to choose the most crucial ones:

  • The study had a clearly defined primary endpoint; it was not affected by the homeopathic treatment, which doubtlessly makes the study a negative trial. The only correct conclusion therefore is that THE HOMEOPATHIC SYRUP FAILED TO AFFECT THE PRIMARY OUTCOME MEASURE OF THIS STUDY. THEREFORE THE TRIAL DID NOT PRODUCE ANY EVIDENCE TO ASSUME THAT THE EXPERIMENTAL TREATMENT WAS EFFICACIOUS.
  • I don’t think that many of the primary or secondary outcome measures are validated or reliable.
  • All the positive results reported in the abstract and the article relate to secondary endpoints which are purely explanatory by nature. They should, in my view, not be mentioned in the conclusions at all.
  • The fact that some results turned out to be positive can be explained by the fact that the investigators ran dozens of tests for statistical significance, which means that, by simple chance, some will produce a positive result (see the sketch after this list).
  • A further explanation for the seemingly positive results might be the fact, disclosed in the text of the article, that the children in the homeopathy group received more conventional drugs than those in the placebo group.
  • Whatever the reason for these positive results, they certainly had nothing to do with the homeopathic syrup.
  • The study was funded by the company producing the syrup, for which one of the authors worked as a consultant. This might be an explanation for the abominably poor science. In other words, this paper is not an exercise in testing a hypothesis but one in marketing.
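
The multiple-testing point made above deserves a number. With the conventional 5% significance level, the chance of at least one spurious ‘significant’ finding grows rapidly with the number of tests; the test counts below are arbitrary.

```python
# Family-wise false-positive risk: probability of at least one
# 'significant' result among k independent tests of true nulls.
alpha = 0.05
for k in (1, 5, 10, 20, 40):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:3d} tests -> P(>=1 false positive) = {p_at_least_one:.0%}")
# 1 test -> 5%, 10 tests -> 40%, 40 tests -> 87%.
# A Bonferroni-style correction would test each outcome at alpha / k.
```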

While I might forgive the company for trying to maximise their sales figures, I do find it harder to forgive the authors, reviewers and editors for publishing such overtly false conclusions. In my view, they are all guilty of scientific misconduct.

Meniscus injuries are common, and there is no consensus as to how best to treat them. Physiotherapists tend to advocate exercise, while surgeons tend to advise surgery.

Of course, exercise is not a typical alternative therapy; but, since many alternative practitioners would disagree with this statement because they regularly recommend it to their patients, it makes sense to cover it on this blog. So, is exercise better than surgery for meniscus problems?

This recent Norwegian study aimed to shed some light on this question. Specifically, it wanted to determine whether exercise therapy is superior to arthroscopic partial meniscectomy for knee function in patients with degenerative meniscal tears.

A total of 140 adults with a degenerative medial meniscal tear verified by magnetic resonance imaging were randomised to receive either 12 weeks of supervised exercise therapy alone or arthroscopic partial meniscectomy alone. The primary outcome, analysed by intention to treat, was the between-group difference in change in the knee injury and osteoarthritis outcome score (KOOS4), defined a priori as the mean score of four of five KOOS subscale scores (pain, other symptoms, function in sport and recreation, and knee-related quality of life), from baseline to two-year follow-up; change in thigh muscle strength from baseline to three months was also assessed.

The results showed no clinically relevant difference between the two groups in change in KOOS4 at two years (0.9 points, 95% confidence interval −4.3 to 6.1; P=0.72). At three months, muscle strength had improved in the exercise group (P≤0.004). No serious adverse events occurred in either group during the two-year follow-up. 19% of the participants allocated to exercise therapy crossed over to surgery during the two-year follow-up, with no additional benefit.

The authors concluded that the observed difference in treatment effect was minute after two years of follow-up, and the trial’s inferential uncertainty was sufficiently small to exclude clinically relevant differences. Exercise therapy showed positive effects over surgery in improving thigh muscle strength, at least in the short-term. Our results should encourage clinicians and middle-aged patients with degenerative meniscal tear and no definitive radiographic evidence of osteoarthritis to consider supervised exercise therapy as a treatment option.
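The inferential move in this conclusion — ruling out a clinically relevant difference because the confidence interval is narrow — can be checked in a few lines. The estimate and CI are taken from the results above; the minimal clinically important difference (MCID) of 8 KOOS points is my illustrative assumption, not a figure from the paper.

```python
# Check whether the reported 95% CI excludes a clinically relevant
# difference. Estimate/CI are from the trial; the MCID is assumed.
estimate, ci_low, ci_high = 0.9, -4.3, 6.1   # KOOS4 points
mcid = 8.0                                    # illustrative threshold

# If the whole interval sits inside (-MCID, +MCID), the trial has
# effectively excluded a clinically relevant difference either way.
excludes_relevant = (-mcid < ci_low) and (ci_high < mcid)
print(f"95% CI ({ci_low}, {ci_high}) within ±{mcid}: {excludes_relevant}")
```

This is what distinguishes an informative ‘negative’ trial from a merely underpowered one: the interval is narrow enough to rule out any difference patients would notice.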

As I stated above, I mention this trial because exercise might be considered by some as an alternative therapy. The main reason for including it is, however, that it is in many ways an exemplary good study from which researchers in alternative medicine could learn.

Like so many alternative therapies, exercise is a treatment for which placebo-controlled studies are difficult, if not impossible. But that does not mean that rigorous tests of its value are impossible. The present study shows how it can be done.

Meaningful clinical research is not rocket science; it merely needs well-trained scientists who are willing to test (rather than promote) their hypotheses. Sadly, such individuals are as rare as gold dust in the realm of alternative medicine.

Dietary and herbal supplements (DHS) are currently popular. They are promoted as natural and therefore safe – an assumption that is clearly wrong: some DHS contain toxic substances, and others can interact with drugs or with other DHS.

This study explored whether adverse events were actually associated with such interactions and examined specific characteristics among inpatient DHS users prone to such adverse events. It was designed as a cross-sectional survey of 947 patients hospitalized in 12 departments of a tertiary academic medical centre in Haifa, Israel. It evaluated the rate of DHS use among inpatients, the potential for interactions, and actual adverse events during hospitalization associated with DHS use. It also assessed whether DHS consumption was documented in patients’ medical files. Statistical analysis was used to delineate DHS users at risk for adverse events associated with interactions with conventional drugs or other DHS.

The results show that about half of all patients took DHS. In 17 (3.7%) of the 458 DHS users, an adverse event may have been caused by DHS-drug-DHS interactions. According to the Drug Interaction Probability Scale, 14 interactions “probably” caused the adverse events, and 11 “possibly” caused them. Interactions occurred more frequently in older patients (p = 0.025, 95% CI: 2.26-19.7), patients born outside Israel (p = 0.025, 95% CI: 0.03-0.42), those with ophthalmologic (p = 0.032, 95% CI: 0.02-0.37) or gastrointestinal (p = 0.008, 95% CI: 0.05-0.46) comorbidities, and those using higher numbers of DHS (p < 0.0001, 95% CI: 0.52-2.48) or drugs (p = 0.027, 95% CI: 0.23-3.77).

The authors concluded that approximately one in 55 hospitalizations in this study may have been caused by adverse events associated with DHS-drug-DHS interactions. To minimize the actual occurrence of adverse events, medical staff education regarding DHS should be improved.

This seems to be a good study and it generated interesting findings on an important topic.

Why do I nevertheless have a problem with it?

The answer is simple but not pleasant: very similar results have been published almost simultaneously in more than one journal. The link above is to an article in the BR J CLIN PHARMACOL of October this year. The following text is from the abstract of an article in INTERN EMERG MED also of October this year:

Of 927 patients who agreed to answer the questionnaire, 458 (49.4 %) reported the use of 89 different DHS. Potential DHS-DHS interactions were identified in 12.9 % of DHS users. Three interactions were associated with the actual occurrence of adverse events. Patients at risk of DHS-DHS interactions included females (p = 0.026) and patients with greater numbers of concomitant medications (p < 0.0001) and of consumed DHS (p < 0.0001). In 88.9 % of DHS users, DHS use was not reported in medical files and only 18 % of the DHS involved in interactions were documented. Potential DHS-DHS interactions are common in inpatients, and may lead to hospitalization or worsen existing medical conditions. The causal relationship between potential interactions and actual adverse events requires further study.

END OF QUOTE

And to my surprise, I also found a third article, likewise from the October issue of INTERN EMERG MED, reporting on this survey. Here is part of its abstract:

DHS users were determined via a questionnaire. The Natural Medicine database was used to search for potential DHS-drug interactions for identified DHS, and the clinical significance was evaluated using Lexi-interact online interaction analysis. Medical files were assessed for documentation of DHS use. Univariate and multivariate logistic regression analyses were used to characterize potential risk factors for DHS-drug interactions. Of 927 patients consenting to answer the questionnaire, 458 (49 %) reported DHS use. Of these, 215 (47 %) had at least one potential interaction during hospitalization (759 interactions). Of these interactions, 116 (15 %) were potentially clinically significant. Older age [OR = 1.02 (1.01-1.04), p = 0.002], males [OR = 2.11 (1.35-3.29), p = 0.001] and increased number of used DHS [OR = 4.28 (2.28-8.03), p < 0.001] or drugs [OR = 1.95 (1.17-3.26), p = 0.011] were associated with potential interactions in DHS users. Physicians documented only 16.5 % of DHS involved in these interactions in patients’ medical files. In conclusion, a substantial number of inpatients use DHS with potential interactions with concomitant medications. Medical staff should be aware of this, question patients on DHS usage and check for such interactions.

END OF QUOTE
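
The odds ratios quoted in this third abstract come from univariate and multivariate logistic regression. For readers unfamiliar with how such figures arise, here is a minimal sketch using statsmodels; the data are simulated stand-ins with invented coefficients, not the survey data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 458  # same order of magnitude as the DHS users in the survey

# Simulated predictors: age (years), sex (1 = male), number of DHS used.
age = rng.normal(65, 12, n)
male = rng.integers(0, 2, n)
n_dhs = rng.poisson(2, n) + 1

# Simulated outcome: potential interaction, made more likely with age,
# male sex and DHS count (coefficients are invented).
logit = -6 + 0.02 * age + 0.7 * male + 0.9 * n_dhs
p = 1 / (1 + np.exp(-logit))
interaction = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([age, male, n_dhs]))
fit = sm.Logit(interaction, X).fit(disp=False)

# Odds ratios are the exponentiated coefficients, which is how figures
# such as 'OR = 1.02 per year of age' in the abstract are obtained.
for name, coef in zip(["const", "age", "male", "n_dhs"], fit.params):
    print(f"{name:6s} OR = {np.exp(coef):.2f}")
```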

What is the difference between the three articles? The first one in INTERN EMERG MED, authored by Levy I, Attias S, Ben Arye E, Goldstein L and Schiff E, evaluated “potential DHS-DHS interactions among inpatients”. The second one in INTERN EMERG MED, by the same authors, evaluated “potentially dangerous interactions of DHS with prescribed medications among inpatients”. Finally, the one in BR J CLIN PHARMACOL, also by Levy I, Attias S, Ben-Arye E, Goldstein L and Schiff E, additionally assessed the interactions between DHS and prescription drugs.

Dual publications are usually considered to be a violation of research ethics. Publication of different aspects of one single data-set in multiple articles is called ‘salami-slicing’ and is often considered to be poor form.

My question to you, the reader of this post, is: What type of scientific misconduct do we have here?

I have warned you before to be sceptical about Chinese studies. This is what I posted on this blog more than 2 years ago, for instance:

Imagine an area of therapeutics where 100% of all findings of hypothesis-testing research are positive, i.e. come to the conclusion that the treatment in question is effective. Theoretically, this could mean that the therapy is a miracle cure which is useful for every single condition in every single setting. But sadly, there are no miracle cures. Therefore something must be badly and worryingly amiss with the research in an area that generates 100% positive results.

Acupuncture is such an area; we and others have shown that Chinese trials of acupuncture hardly ever produce a negative finding. In other words, one does not need to read the paper, one already knows that it is positive – even more extreme: one does not need to conduct the study, one already knows the result before the research has started. But you might not believe my research nor that of others. We might be chauvinist bastards who want to discredit Chinese science. In this case, you might perhaps believe Chinese researchers.

In this systematic review, all randomized controlled trials (RCTs) of acupuncture published in Chinese journals were identified by a team of Chinese scientists. A total of 840 RCTs were found, including 727 RCTs comparing acupuncture with conventional treatment, 51 RCTs with no-treatment controls, and 62 RCTs with sham-acupuncture controls. Among these 840 RCTs, 838 studies (99.8%) reported positive results for their primary outcomes and two trials (0.2%) reported negative results. The percentages of RCTs reporting methodological details such as allocation concealment, information on withdrawals, and sample size calculations were 43.7%, 5.9%, 4.9%, 9.9%, and 1.7%, respectively.

The authors concluded that publication bias might be a major issue in RCTs on acupuncture published in Chinese journals, which is related to a high risk of bias. We suggest that all trials should be prospectively registered in an international trial registry in future.

END OF QUOTE
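
How implausible is a 99.8% positive rate? A back-of-the-envelope calculation makes it concrete; the assumed 80% statistical power is a conventional figure, and the scenario (every tested treatment genuinely works) is deliberately over-generous.

```python
# Even under the absurdly generous assumption that every one of the
# 840 acupuncture treatments tested genuinely works, trials run at
# the conventional 80% power should miss the effect 20% of the time.
n_trials = 840
power = 0.80          # conventional assumption, deliberately generous

expected_negative = n_trials * (1 - power)
observed_negative = 2

print(f"expected negative trials: {expected_negative:.0f}")  # 168
print(f"observed negative trials: {observed_negative}")
# 168 expected vs 2 observed: a gap that only publication bias,
# selective reporting or worse can explain.
```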

Now an even more compelling reason emerged for taking evidence from China with a pinch of salt:

A recent survey of clinical trials in China has revealed fraudulent practice on a massive scale. China’s food and drug regulator carried out a one-year review of clinical trials and concluded that more than 80 percent of clinical data are “fabricated”. The review evaluated data from 1,622 clinical trial programs of new pharmaceutical drugs awaiting regulatory approval for mass production. Officials are now warning that further evidence of malpractice could still emerge in the scandal.

According to the report, much of the data gathered in clinical trials were incomplete, failed to meet analysis requirements or were untraceable. Some companies were suspected of deliberately hiding or deleting records of adverse effects, and of tampering with data that did not meet expectations.

“Clinical data fabrication was an open secret even before the inspection,” the paper quoted an unnamed hospital chief as saying. Contract research organizations seem to have become “accomplices in data fabrication due to cutthroat competition and economic motivation.”

A doctor at a top hospital in the northern city of Xian said the problem doesn’t lie with insufficient regulations governing clinical trials data, but with the failure to implement them. “There are national standards for clinical trials in the development of Western pharmaceuticals,” he said. “Clinical trials must be carried out in three phases, and they must be assessed at the very least for safety,” he said. “But I don’t know what happened here.”

Public safety problems in China aren’t limited to the pharmaceutical industry, and the figure of 80 percent is unlikely to surprise many in a country where citizens routinely engage in the bulk-buying of overseas-made goods like infant formula powder. Guangdong-based rights activist Mai Ke said there is an all-pervasive culture of fakery across all products made in the country. “It’s not just the medicines,” Mai said. “In China, everything is fake, and if there’s a profit in pharmaceuticals, then someone’s going to fake them too.” He said the problem also extends to traditional Chinese medicines, which are widely used in conjunction with Western pharmaceuticals across the healthcare system. “It’s just harder to regulate the fakes with traditional medicines than it is with Western pharmaceuticals, which have strict manufacturing guidelines,” he said.

According to Luo, academic ethics is an underdeveloped field in China, leading to an academic culture that is accepting of manipulation of data. “I don’t think that the 80 percent figure is overstated,” Luo said.

And what should we conclude from all this?

I find it very difficult to reach a verdict that does not sound hopelessly chauvinistic but feel that we have little choice but to distrust the evidence that originates from China. At the very minimum, I think, we must scrutinise it thoroughly; whenever it looks too good to be true, we ought to discard it as unreliable and await independent replications.

For some time now, the research activity in and around alternative medicine has been seemingly buoyant. In each of the last 4 years, Medline listed around 2 000 articles in the category of ‘complementary alternative medicine’. This will surely look impressive to many!

Why then did I write ‘seemingly’? To comprehend this a little better, we should have some comparisons. Here are numbers of Medline-listed articles published in 2015 for a few other areas:

  • Surgery: 176 277
  • Psychology: 65 679
  • Internal medicine: 36 998
  • Obstetrics/gynaecology: 13 818
  • Pharmacology: 194 322
  • Paediatrics: 30 646

Now you see, I hope, why the 2 049 Medline-listed articles in the category of ‘complementary alternative medicine’ are only seemingly impressive. But what about specific alternative therapies? Here are numbers of Medline-listed articles published in 2015 for some major alternative treatments:

  • Homeopathy: 181
  • Herbal medicine: 1 572
  • Chiropractic: 314
  • Acupuncture: 1 784
  • Naturopathy: 45
  • Dietary supplements: 5 199

These figures are perhaps interesting but not easy to interpret. They might indicate that certain sections of alternative medicine are more open to scientific scrutiny than others. Or do they show that for some areas there are more research funds and expertise than others? I am not sure I know the answer.

If we look a little closer at the research activity in defined alternative therapies, we are bound to get disappointed. I have recently done this for homeopathy and for acupuncture and reached rather gloomy conclusions.

In the case of homeopathy, my conclusions were:

  1. The research activity into homeopathy is currently very subdued.
  2. Arguably the main research question of efficacy does not seem to concern researchers of homeopathy all that much.
  3. There is an almost irritating abundance of papers that are data-free and thrive on opinion (my category of ‘other papers’).
  4. Given all this, I find it hard to imagine that this area of investigation is going to generate much relevant new knowledge or clinical progress.

And in the case of acupuncture, I stated:

  • Too little research is focussed on the two big questions: efficacy and safety.
  • In relation to the meagre output in RCTs, there are too many systematic reviews.
  • As long as we cannot be sure that acupuncture is more than a placebo, all these pre-clinical studies seem a bit out of place.
  • The vast majority of the articles were in low or very low impact journals.
  • There was only one paper that I would consider outstanding.

And what about the quality of the research into alternative medicine?

Well, this is a sad and depressing tale! If you doubt it, read my previous post or indeed any of the other ~500 which I have written on this particular subject in the past.

This is a post that I wanted to write for a while (I had done something similar on acupuncture months ago); but I had to wait, and wait, and wait… until finally there were the awaited 100 Medline-listed articles on homeopathy with a publication date of 2016. It took until the beginning of August to reach the 100 mark. To put this into perspective with other areas of alternative medicine, let me give you the figures for 3 other therapies:

  • there are currently 1 413 articles from 2016 on herbal medicine;
  • 875 on acupuncture;
  • and 256 on chiropractic.

And to give you a flavour of the research activity in some areas of conventional medicine:

  • there are currently almost 100 000 articles from 2016 on surgery;
  • 1 410 on statins;
  • and 33 033 on psychotherapy.

This suggests quite strongly, I think, that the research activity in homeopathy is relatively low (to put it mildly).
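
Counts like these can be reproduced (approximately) with NCBI’s E-utilities. A minimal sketch follows; the query strings are simplified illustrations, not the exact search strategies behind the figures above, so the returned counts will differ somewhat.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term):
    """Return the number of PubMed records matching a search term."""
    url = ESEARCH + "?" + urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "rettype": "count"})
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    return int(tree.findtext("Count"))

# Illustrative queries, restricted to a 2016 publication date ([dp]).
for term in ("homeopathy AND 2016[dp]",
             "acupuncture AND 2016[dp]",
             "chiropractic AND 2016[dp]"):
    print(f"{term}: {pubmed_count(term)} records")
```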

So, what do the first 100 Medline articles on homeopathy cover? Here are some of the findings of my mini-survey:

  • there were 4 RCTs;
  • 3 systematic reviews;
  • 8 papers on observational-type data (case series, observational studies etc.);
  • 9 animal studies;
  • 14 other pre-clinical or basic research studies;
  • 1 pilot study;
  • 14 investigations of the quality of homeopathic preparations;
  • 15 surveys;
  • 2 investigations into the adverse effects of homeopathic treatments;
  • 49 other papers (e. g. comments, opinion pieces, letters, perspective articles, editorials).

I should mention that, because I assessed 100 papers, the above numbers can be read both as absolute counts and as percentages.

How should we interpret my findings?

As with my previous evaluation, I must caution against drawing generalizable conclusions from them. What follows should therefore be taken with a pinch of salt (or two):

  1. The research activity into homeopathy is currently very subdued.
  2. Arguably the main research question of efficacy does not seem to concern researchers of homeopathy all that much.
  3. There is an almost irritating abundance of papers that are data-free and thrive on opinion (my category of ‘other papers’).
  4. Given all this, I find it hard to imagine that this area of investigation is going to generate much relevant new knowledge or clinical progress.