Daniel P Wirth used to be THE star amongst researchers and proponents of paranormal healing. About 15 years ago, nobody had published more studies of it than Wirth. What was extraordinary was not just the number of these studies, but also the fact that the trials all reported positive findings.

At the time, this puzzled me a lot. I had conducted two trials of paranormal healing myself and, in both cases, the results had turned out to be negative (see here and here). Thus I made several attempts to contact Wirth or his co-authors, hoping to better understand the phenomenon. Yet I never received a reply and became increasingly suspicious of their research.

In 2004, it was announced that Wirth, together with one of his co-workers, had been arrested and later imprisoned for fraud. Several of his 20 papers published in various journals were subsequently withdrawn. I remember writing to several journal editors myself, urging them to follow suit so that, in future, the literature would not be polluted with dubious studies. Eventually, we all forgot about the whole story.

Recently, I took a renewed interest in paranormal healing. To my surprise, I found that several of Wirth’s papers are still listed on Medline:

1 Does prayer influence the success of in vitro fertilization-embryo transfer? Report of a masked, randomized trial.

Cha KY, Wirth DP.

J Reprod Med. 2001 Sep;46(9):781-7. Erratum in: J Reprod Med. 2004 Oct;49(10):100A. Lobo, RA [removed].

PMID: 11584476

2 Multisite electromyographic analysis of therapeutic touch and qigong therapy.

Wirth DP, Cram JR, Chang RJ.

J Altern Complement Med. 1997 Summer;3(2):109-18.

PMID: 9395700

3 Multisite surface electromyography and complementary healing intervention: a comparative analysis.

Wirth DP, Cram JR.

J Altern Complement Med. 1997 Winter;3(4):355-64.

PMID: 9449057

4 Wound healing and complementary therapies: a review.

Wirth DP, Richardson JT, Eidelman WS.

J Altern Complement Med. 1996 Winter;2(4):493-502. Review.

PMID: 9395679

5 The significance of belief and expectancy within the spiritual healing encounter.

Wirth DP.

Soc Sci Med. 1995 Jul;41(2):249-60.

PMID: 7667686

6 Complementary healing intervention and dermal wound reepithelialization: an overview.

Wirth DP.

Int J Psychosom. 1995;42(1-4):48-53.

PMID: 8582812

7 The psychophysiology of nontraditional prayer.

Wirth DP, Cram JR.

Int J Psychosom. 1994;41(1-4):68-75.

PMID: 7843870

8 Complementary healing therapies.

Wirth DP, Barrett MJ.

Int J Psychosom. 1994;41(1-4):61-7.

PMID: 7843869

9 Multi-site electromyographic analysis of non-contact therapeutic touch.

Wirth DP, Cram JR.

Int J Psychosom. 1993;40(1-4):47-55.

PMID: 8070986

____________________________________________________________________________

Of these 9 papers, only the first one in the list carries a note indicating that the paper has been removed. In other words, 8 of Wirth’s articles are still available as though they are fine and proper.

The situation is even worse on ‘ResearchGate’. Here we find all of the following articles with no indication of any suspicion of fraud:

———-

Article: Does Prayer Influence the Success of in Vitro Fertilization-Embryo Transfer? Report of a Masked, Randomized Trial

KY Cha · Daniel P. Wirth · RA Lobo

Abstract: To assess the potential effect of intercessory prayer (IP) on pregnancy rates in women being treated with in vitro fertilization-embryo transfer (IVF-ET). Prospective, double-blind, randomized clinical trial in which patients and providers were not informed about the intervention. Statisticians and investigators were masked until all the data had been collected and clinical outcomes were known. The setting was an IVF-ET program at Cha Hospital, Seoul, Korea. IP was carried out by prayer…

Article · Oct 2001 · The Journal of reproductive medicine

———-

Article: Exploring Further Menstruation and Spiritual Healing

Daniel P. Wirth

Article · Apr 1997 · Alternative and Complementary Therapies

———-

Article: Multisite Surface Electromyography and Complementary Healing Intervention: A Comparative Analysis

Daniel P. Wirth · Jeffrey R. Cram

Abstract: A comparative analysis was conducted on a series of three experimental studies that examined the effect of various local and nonlocal (distant) complementary healing methods on multisite surface electromyographic (sEMG) and autonomic measures. The series concentrated sEMG electrode placement on specific neuromuscular paraspinal centers (cervical [C4], thoracic [T6], and lumbar [L3]), along with the frontalis region, due to the fact that these sites corresponded to the location of individual…

Article · Feb 1997 · The Journal of Alternative and Complementary Medicine

———-

Article: Multisite Electromyographic Analysis of Therapeutic Touch and Qigong Therapy

Daniel P. Wirth · Jeffrey R. Cram · Richard J. Chang

Abstract: The influence of complementary healing treatment on paraspinal electromagnetic activity at specific neuromuscular sites was examined in an exploratory pilot study that used a multisite surface electromyographic (sEMG) assessment procedure. The study was a replication and extension of previous research that indicated that complementary healing had a significant effect in normalizing the activity of the “end organ” for the central nervous system (CNS). Multisite sEMG electrodes were placed on…

Article · Feb 1997 · The Journal of Alternative and Complementary Medicine

———-

Article: Non-contact Therapeutic Touch intervention and full thickness cutaneous wounds: A replication

Daniel P Wirth · Joseph T. Richardson · Robert D. Martinez · William S. Eidelman · Maria E.L. Lopez

Abstract: The study described here utilized a randomized double-blind methodological protocol in order to examine the effect of non-contact therapeutic touch (NCTT) on the healing rate of full-thickness human dermal wounds. This study is the fifth experiment in a series of extensions based on the original research design, and is an exact methodological replication of the second study in the series. Thirty-two healthy subjects were randomly divided into treatment and control groups and biopsies were…

Article · Oct 1996 · Complementary Therapies in Medicine

———-

Article: Wound Healing and Complementary Therapies: A Review

Daniel P. Wirth · Joseph T. Richardson · William S. Eidelman

Abstract: A series of five innovative experiments conducted by Wirth et al. which examined the effect of various complementary healing interventions on the reepithelialization rate of full thickness human dermal wounds was assessed as to specific methodological and related factors. The treatment interventions utilized in the series included experimental derivatives of the Therapeutic Touch (TT), Reiki, LeShan, and Intercessory Prayer techniques. The results of the series indicated statistical…

Article · Feb 1996 · The Journal of Alternative and Complementary Medicine

———-

Article: Haematological indicators of complementary healing intervention

Daniel P. Wirth · Richard J. Chang · William S. Eidelman · Joanne B. Paxton

Abstract: The effect of Therapeutic Touch, Reiki, LeShan, and Qigong Therapy in combination on haematological measures was examined in an exploratory pilot study utilizing a randomized, double-blind, within-subject, crossover design. Fourteen subjects were randomly assigned to treatment and control conditions for two one-hour evaluation sessions separated by a 24-hour period. Six blood samples were taken from each subject — three during the treatment condition and three during the control condition —…

Article · Jan 1996 · Complementary Therapies in Medicine

———-

Article: The significance of belief and expectancy within the spiritual healing encounter

Daniel P. Wirth

Abstract: Historically, traditional cultures recognized the importance of belief and expectancy within the healing encounter and created complex rituals and ceremonies designed to elicit or foster the expectancy and participation of both the healer and patient, as well as the community as a whole. This holistic approach to health care was a fundamental component in the spiritual healing rituals of virtually all traditional native cultures. The focus of the current study was to assess the impact of…

Article · Aug 1995 · Social Science & Medicine

———-

Article: Non-contact Therapeutic Touch and wound re-epithelialization: An extension of previous research

Daniel P. Wirth · Margaret J Barrett · William S. Eidelman

Abstract: The results demonstrated a non-significant effect for the treatment versus control groups. Several factors may have contributed to the non-significance, including: the ineffectiveness of the healers, the inhibitive or dampening effect of plastic, the use of self-regulatory techniques, the dependent variable examined, the type of dressing utilized, the influence of distance, and the healers’ belief as to the effect of distance. Future studies would benefit by examining the methodological…

Article · Oct 1994 · Complementary Therapies in Medicine

———-

Article: The effect of complementary healing therapy on postoperative pain after surgical removal of impacted third molar teeth

Daniel P. Wirth · David R. Brenlan · Richard J. Levine · Christine M. Rodriguez

Abstract: This study utilized a randomized, double-blind, within subject, crossover design to examine the effect of Reiki and LeShan healing in combination on iatrogenic pain experienced after unilateral operative extraction of the lower third molar. Two separate operations were performed on 21 patients with bilateral, asymptomatic, impacted lower third molar teeth. The patients were randomly assigned to the treatment or control condition prior to the first operation. For the second operation,…

Article · Jul 1993 · Complementary Therapies in Medicine

———-

Article: Full thickness dermal wounds treated with non-contact Therapeutic Touch: a replication and extension

Daniel P. Wirth · Joseph T. Richardson · William S. Eidelman · Alice C. O’Malley

Abstract: The effect of non-contact Therapeutic Touch (NCTT) therapy on the healing rate of full thickness human dermal wounds was examined in a double-blind, placebo controlled study. Punch biopsies were performed on the lateral deltoid in 24 healthy subjects who were randomly assigned to treatment and control groups. Active and control treatments were comprised of daily sessions of 5 min of exposure to a hidden NCTT practitioner or control exposure. Placebo effects and the possible influences of…

Article · Jul 1993 · Complementary Therapies in Medicine

———-

Article: The Effect of Alternative Healing Therapy on the Regeneration Rate of Salamander Forelimbs

DANIEL P. WIRTH · CATHY A. JOHNSON · JOSEPH S. HORVATH

Article · Jan 1992

———-

Article: Complementary Healing Therapy For Patients With Type I Diabetes Mellitus

DANIEL P. WIRTH · BARBARA J. MITCHELL

Abstract: The effect of Noncontact Therapeutic Touch (NCTT) therapy and Intercessory Prayer (IP) on patient determined insulin dosage was examined in an exploratory pilot study which utilized a randomized, double-blind, within subject, crossover design. Sixteen type I diabetes mellitus patients were examined and treated daily by NCTT and IP healers for a duration of two weeks. Each patient underwent two separate sessions – one in the treatment condition and one in the control condition – with the…

____________________________________________________________________

Even worse, Wirth’s papers continue to be cited. In other words, Wirth’s research lives on, regardless of the fact that it is highly dubious.

In my view, it is long overdue for all journal editors to fully and completely delete Wirth’s dubious papers. This is particularly true since several experts have alerted them to the problem. Furthermore, I submit that failing to take action amounts to unethical behaviour which is quite simply unacceptable.

For many months now, I have noticed a proliferation of so-called pilot studies of alternative therapies. A pilot study (also called a feasibility study) is defined as a small-scale preliminary study conducted in order to evaluate feasibility, time, cost and adverse events, and to improve upon the study design prior to performance of a full-scale research project. Here I submit that most of the pilot studies of alternative therapies are, in fact, bogus.

To qualify as a pilot study, an investigation needs to have an aim that is in line with the above-mentioned definition. Another obvious hallmark must be that its conclusions are in line with this aim. We do not need to conduct much research to find that even these two elementary preconditions are not fulfilled by the plethora of pilot studies that are currently being published, and that proper pilot studies of alternative medicine are very rare.

Three recent examples of dodgy pilot studies will have to suffice (but rest assured, there are many, many more).

Foot Reflexotherapy Induces Analgesia in Elderly Individuals with Low Back Pain: A Randomized, Double-Blind, Controlled Pilot Study

The aim of this study was to evaluate the effects of foot reflexotherapy on pain and postural balance in elderly individuals with low back pain. And the conclusions drawn by its authors were that this study demonstrated that foot reflexotherapy induced analgesia but did not affect postural balance in elderly individuals with low back pain.

Effect of Tai Chi Training on Dual-Tasking Performance That Involves Stepping Down among Stroke Survivors: A Pilot Study.

The aim of this study was to investigate the effect of Tai Chi training on dual-tasking performance that involved stepping down and compared it with that of conventional exercise among stroke survivors. And the conclusions read: These results suggest a beneficial effect of Tai Chi training on cognition among stroke survivors without compromising physical task performance in dual-tasking.

The Efficacy of Acupuncture on Anthropometric Measures and the Biochemical Markers for Metabolic Syndrome: A Randomized Controlled Pilot Study.

The aim of this study was to evaluate the efficacy [of acupuncture] over 12 weeks of treatment and 12 weeks of follow-up. And the conclusion: Acupuncture decreases WC, HC, HbA1c, TG, and TC values and blood pressure in MetS.

It is almost painfully obvious that these studies are not ‘pilot’ studies as defined above.

So, what are they, and why are they so popular in alternative medicine?

The way I see it, they are the result of amateur researchers conducting pseudo-research for publication in lamentable journals in an attempt to promote their pet therapies (I have yet to find such a study that reports a negative finding). The sequence of events that leads to the publication of such pilot studies is usually as follows:

  • An enthusiast or a team of enthusiasts of alternative medicine decide that they will do some research.
  • They have no or very little know-how in conducting a clinical trial.
  • They nevertheless feel that such a study would be nice as it promotes both their careers and their pet therapy.
  • They design some sort of a plan and start recruiting patients for their trial.
  • At this point they notice that things are not as easy as they had imagined.
  • They have too few funds and too little time to do anything properly.
  • This does not, however, stop them from continuing.
  • The trial progresses slowly, and patient numbers remain low.
  • After a while the would-be researchers get fed up and decide that their study has enough patients to stop the trial.
  • They improvise some statistical analyses with their results.
  • They write up the results the best they can.
  • They submit it for publication in a 3rd class journal and, in order to get it accepted, they call it a ‘pilot study’.
  • They feel that this title is an excuse for even the most obvious flaws in their work.
  • The journal’s reviewers and editors are all proponents of alternative medicine who welcome any study that seems to confirm their belief.
  • Thus the study does get published despite the fact that it is worthless.

Some might say ‘so what? no harm done!’

But I beg to differ: these studies pollute the medical literature and misguide people who are unable or unwilling to look behind the smokescreen. Enthusiasts of alternative medicine popularise these bogus trials, while hiding the fact that their results are unreliable. Journalists report on them, and many consumers assume they are being told the truth – after all, it was published in a ‘peer-reviewed’ medical journal!

My conclusions are as simple as they are severe:

  • Such pilot studies are the result of gross incompetence on many levels (researchers, funders, ethics committees, reviewers, journal editors).
  • They can cause considerable harm, because they mislead many people.
  • In more than one way, they represent a violation of medical ethics.
  • They could be considered scientific misconduct.
  • We should be thinking about how to stop this increasingly common form of scientific misconduct.

As I have often said, I find it regrettable that sceptics often claim THERE IS NOT A SINGLE STUDY THAT SHOWS HOMEOPATHY TO BE EFFECTIVE (or something to that effect). This is quite simply not true, and it gives homeopathy fans the opportunity to suggest that sceptics are wrong. The truth is that THE TOTALITY OF THE MOST RELIABLE EVIDENCE FAILS TO SUGGEST THAT HIGHLY DILUTED HOMEOPATHIC REMEDIES ARE EFFECTIVE BEYOND PLACEBO. As a message for consumers, this is a little more complex, but I believe it is worth being well-informed and truthful.

And that also means admitting that a few apparently rigorous trials of homeopathy exist and some of them show positive results. Today, I want to focus on this small set of studies.

How can a rigorous trial of a highly diluted homeopathic remedy yield a positive result? As far as I can see, there are several possibilities:

  1. Homeopathy does work after all, and we have not fully understood the laws of physics, chemistry etc. Homeopaths favour this option, of course, but I find it extremely unlikely, and most rational thinkers would discard this possibility outright. It is not that we don’t quite understand homeopathy’s mechanism; the fact is that we understand that there cannot be a mechanism that is in line with the laws of nature.
  2. The trial in question is the victim of some undetected error.
  3. The result has come about by chance. Of 100 trials of an ineffective treatment, about 5 would be expected to produce a positive result at the 5% significance level purely by chance (see the little simulation after this list).
  4. The researchers have cheated.
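
To illustrate the third point, here is a little simulation with entirely invented numbers; it says nothing about any specific trial, only about the statistics of false-positive findings. We generate trials of a ‘remedy’ that has zero effect and count how many of them come out ‘positive’ at the 5% level.

```python
# A sketch with invented numbers: how often does a trial of a completely
# ineffective 'remedy' reach p < 0.05 purely by chance?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials = 1000        # number of simulated trials
n_per_arm = 50         # patients per arm in each trial
false_positives = 0

for _ in range(n_trials):
    # both arms are drawn from the SAME distribution: the remedy has zero effect
    verum = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    placebo = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    _, p_value = stats.ttest_ind(verum, placebo)
    if p_value < 0.05:
        false_positives += 1

print(f"'Positive' trials: {false_positives}/{n_trials} "
      f"({100 * false_positives / n_trials:.1f}%)")   # expect roughly 5%
```

Roughly one in twenty such trials will appear ‘positive’, which is why a handful of positive homeopathy trials among the hundreds that have been conducted is exactly what chance alone would predict.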

When we critically assess any given trial, we attempt, in a way, to determine which of these 4 explanations applies. But unfortunately we have to make do with what the authors of the trial tell us. Publications never provide all the details we need for this purpose, and we are often left speculating which of the explanations might apply. Whichever it is, we usually assume that the result is false-positive.

Naturally, this assumption is hard for homeopaths to accept; they merely conclude that we are biased against homeopathy and that, however rigorous a study of homeopathy may be, sceptics will not accept its result if it turns out to be positive.

But there might be a way to settle the argument and arrive at a more objective verdict, I think. We only need to remind ourselves of a crucially important principle in all science: INDEPENDENT REPLICATION. To be convincing, a scientific paper needs to provide evidence that the results are reproducible. In medicine, it is unquestionably wise to accept a new finding only after it has been confirmed by other, independent researchers. Only if we have at least one (better several) independent replications can we be reasonably sure that the result in question is true and not false-positive due to bias, chance, error or fraud.

And this is, I believe, the extremely odd phenomenon about the ‘positive’ and apparently rigorous studies of homeopathic remedies. Let’s look at the recent meta-analysis of Mathie et al. The authors found several studies that were both positive and fairly rigorous. These trials differ in many respects (e.g. remedies used, conditions treated) but they have, as far as I can see, one important feature in common: THEY HAVE NOT BEEN INDEPENDENTLY REPLICATED.

If that is not astounding, I don’t know what is!

Think of it: faced with a finding that flies in the face of science and would, if true, revolutionise much of medicine, scientists should jump with excitement. Yet, in reality, nobody seems to take the trouble to check whether it is the truth or an error.

To explain this absurdity more fully, let’s take just one of these trials as an example, one related to a common and serious condition: COPD

The study is by Prof Frass and was published in 2005 – surely long enough ago for plenty of independent replications to emerge. Its results showed that, with potentized (C30) potassium dichromate, the amount of tracheal secretions was reduced, extubation could be performed significantly earlier, and the length of stay was significantly shorter. This is a scientific as well as clinical sensation, if there ever was one!

The RCT was published in one of the leading journals on this subject (Chest) which is read by most specialists in the field, and it was at the time widely reported. Even today, there is hardly an interview with Prof Frass in which he does not boast about this trial with truly sensational results (only last week, I saw one). If Frass is correct, his findings would revolutionise the lives of thousands of seriously suffering patients at the very brink of death. In other words, it is inconceivable that Frass’ result has not been replicated!

But it hasn’t; at least there is nothing in Medline.

Why not? A risk-free, cheap, universally available and easy to administer treatment for such a severe, life-threatening condition would normally be picked up instantly. There should not be one, but dozens of independent replications by now. There should be several RCTs testing Frass’ therapy and at least one systematic review of these studies telling us clearly what is what.

But instead there is a deafening silence.

Why?

For heaven’s sake, why?

The only logical explanation is that many centres around the world did try Frass’ therapy. Most likely they found it does not work and soon dismissed it. Others might even have gone to the trouble of conducting a formal study of Frass’ ‘sensational’ therapy and found it to be ineffective. Subsequently they felt too silly to submit it for publication – who would not laugh at them, if they said they had trialled a remedy that was diluted 1:1000000000000000000000000000000000000000000000000000000000000 and found it to be worthless? Others might have written up their study and submitted it for publication, but got rejected by all reputable journals in the field because the editors felt that comparing one placebo to another placebo is not real science.
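
In case anyone wants to check that string of zeros: a C30 potency, as used in Frass’ trial, denotes thirty successive 1:100 dilution steps, so the overall dilution factor works out as

```latex
\[
\left(\tfrac{1}{100}\right)^{30} \;=\; \left(10^{-2}\right)^{30} \;=\; 10^{-60}
\]
```

i.e. one part in 10^60 – many orders of magnitude beyond the point (roughly 10^24, the order of Avogadro’s number) at which not even a single molecule of the original potassium dichromate can be expected to remain.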

And this is roughly how it went with the other ‘positive’ and seemingly rigorous studies of homeopathy as well, I suspect.

Regardless of whether I am correct or not, the fact is that there are no independent replications (if readers know any, please let me know).

Once a sufficiently long period of time has elapsed and no replications of a ‘sensational’ finding have emerged, the finding becomes unbelievable or bogus – no rational thinker can possibly believe such a result (I for one have not yet met an intensive care specialist who believes Frass’ findings, for instance). Subsequently, it is quietly dropped into the waste-basket of science where it no longer obstructs progress.

The absence of independent replications is therefore a most useful mechanism by which science rids itself of falsehoods.

It seems that homeopathy is such a falsehood.

 

 

The plethora of dodgy meta-analyses in alternative medicine has been the subject of a recent post – so this one is a mere update of a regular lament.

This new meta-analysis aimed to evaluate the evidence for the effectiveness of acupuncture in the treatment of lumbar disc herniation (LDH). (Call me pedantic, but I prefer meta-analyses that evaluate the evidence FOR AND AGAINST a therapy.) Electronic databases were searched to identify RCTs of acupuncture for LDH, and 30 RCTs involving 3503 participants were included; 29 were published in Chinese and one in English, and all trialists were Chinese.

The results showed that acupuncture had a higher total effective rate than lumbar traction, ibuprofen, diclofenac sodium and meloxicam. Acupuncture was also superior to lumbar traction and diclofenac sodium in terms of pain measured with visual analogue scales (VAS). The total effective rate in 5 trials was greater for acupuncture than for mannitol plus dexamethasone and mecobalamin, ibuprofen plus fugui gutong capsule, loxoprofen, mannitol plus dexamethasone and huoxue zhitong decoction, respectively. Two trials showed a superior effect of acupuncture in VAS scores compared with ibuprofen or mannitol plus dexamethasone, respectively.

The authors from the College of Traditional Chinese Medicine, Jinan University, Guangzhou, Guangdong, China, concluded that acupuncture showed a more favourable effect in the treatment of LDH than lumbar traction, ibuprofen, diclofenac sodium, meloxicam, mannitol plus dexamethasone and mecobalamin, fugui gutong capsule plus ibuprofen, mannitol plus dexamethasone, loxoprofen and huoxue zhitong decoction. However, further rigorously designed, large-scale RCTs are needed to confirm these findings.

Why do I call this meta-analysis ‘dodgy’? I have several reasons, 10 to be exact:

  1. There is no plausible mechanism by which acupuncture might cure LDH.
  2. The types of acupuncture used in these trials were far from uniform and included manual acupuncture (MA) in 13 studies, electro-acupuncture (EA) in 10 studies, and warm needle acupuncture (WNA) in 7 studies. Arguably, these are different interventions that cannot be lumped together.
  3. The trials were mostly of very poor quality. For instance, 18 studies failed to mention the methods used for randomisation. I have previously shown that some Chinese studies use the terms ‘randomisation’ and ‘RCT’ even in the absence of a control group.
  4. None of the trials made any attempt to control for placebo effects.
  5. None of the trials were conducted against sham acupuncture.
  6. Only 10 trials reported dropouts or withdrawals.
  7. Only two trials reported adverse reactions.
  8. None of these shortcomings were critically discussed in the paper.
  9. Despite their affiliation, the authors state that they have no conflicts of interest.
  10. All trials were conducted in China, and, on this blog, we have discussed repeatedly that acupuncture trials from China never report negative results.

And why do I find the journal ‘dodgy’?

Because any journal that publishes such a paper is likely to be sub-standard. In the case of ‘Acupuncture in Medicine’, the official journal of the British Medical Acupuncture Society, I see such appalling articles published far too frequently to believe that the present paper is just a regrettable, one-off mistake. What makes this issue particularly embarrassing is, of course, the fact that the journal belongs to the BMJ group.

… but we never really thought that science publishing was about anything other than money, did we?

What an odd title, you might think.

Systematic reviews are the most reliable evidence we presently have!

Yes, this is my often-voiced and honestly-held opinion but, like any other type of research, systematic reviews can be badly abused; and when this happens, they can seriously mislead us.

A new paper by someone who knows more about these issues than most of us, John Ioannidis from Stanford University, should make us think. It aimed at exploring the growth of published systematic reviews and meta‐analyses and at estimating how often they are redundant, misleading, or serving conflicted interests. Ioannidis demonstrated that publication of systematic reviews and meta‐analyses has increased rapidly. In the period January 1, 1986, to December 4, 2015, PubMed tags 266,782 items as “systematic reviews” and 58,611 as “meta‐analyses.” Annual publications between 1991 and 2014 increased 2,728% for systematic reviews and 2,635% for meta‐analyses, versus only 153% for all PubMed‐indexed items. Ioannidis believes that probably more systematic reviews of trials than new randomized trials are published annually. Most topics addressed by meta‐analyses of randomized trials have overlapping, redundant meta‐analyses; the number of same‐topic meta‐analyses sometimes exceeds 20.

Some fields produce massive numbers of meta‐analyses; for example, 185 meta‐analyses of antidepressants for depression were published between 2007 and 2014. These meta‐analyses are often produced either by industry employees or by authors with industry ties, and their results are aligned with sponsor interests. China has rapidly become the most prolific producer of English‐language, PubMed‐indexed meta‐analyses. The most massive presence of Chinese meta‐analyses is on genetic associations (63% of global production in 2014), where almost all results are misleading since they combine fragmented information from the mostly abandoned era of candidate genes. Furthermore, many contracting companies working on evidence synthesis receive industry contracts to produce meta‐analyses, many of which probably remain unpublished. Many other meta‐analyses have serious flaws. Of the remainder, most have weak or insufficient evidence to inform decision making. Few systematic reviews and meta‐analyses are both non‐misleading and useful.

The author concluded that the production of systematic reviews and meta‐analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta‐analyses are unnecessary, misleading, and/or conflicted.

Ioannidis makes the following ‘Policy Points’:

  • Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta‐analyses. Instead of promoting evidence‐based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools.
  • Suboptimal systematic reviews and meta‐analyses can be harmful given the major prestige and influence these types of studies have acquired.
  • The publication of systematic reviews and meta‐analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.

Obviously, Ioannidis did not have alternative medicine in mind when he researched and published this article. But he easily could have! Virtually everything he stated in his paper does apply to it. In some areas of alternative medicine, things are even worse than Ioannidis describes.

Take TCM, for instance. I have previously looked at some of the many systematic reviews of TCM that currently flood Medline, based on Chinese studies. This is what I concluded at the time:

Why does that sort of thing frustrate me so much? Because it is utterly meaningless and potentially harmful:

  • I don’t know what treatments the authors are talking about.
  • Even if I managed to dig deeper, I cannot get the information because practically all the primary studies are published in obscure journals in Chinese language.
  • Even if I did read Chinese, I do not feel motivated to assess the primary studies because we know they are all of very poor quality – too flimsy to bother.
  • Even if they were formally of good quality, I would have my doubts about their reliability; remember: 100% of these trials report positive findings!
  • Most crucially, I am frustrated because conclusions of this nature are deeply misleading and potentially harmful. They give the impression that there might be ‘something in it’, and that it (whatever ‘it’ might be) could be well worth trying. This may give false hope to patients and can send the rest of us on a wild goose chase.

So, to ease the task of future authors of such papers, I decided to give them a text for a proper EVIDENCE-BASED conclusion which they can adapt to fit every review. This will save them time and, more importantly perhaps, it will save everyone who might be tempted to read such futile articles the effort of studying them in detail. Here is my suggestion for a conclusion soundly based on the evidence, no matter what TCM subject the review is about:

OUR SYSTEMATIC REVIEW HAS SHOWN THAT THERAPY ‘X’ AS A TREATMENT OF CONDITION ‘Y’ IS CURRENTLY NOT SUPPORTED BY SOUND EVIDENCE.

On another occasion, I stated that I am getting very tired of conclusions stating ‘…XY MAY BE EFFECTIVE/HELPFUL/USEFUL/WORTH A TRY…’. It is obvious that the therapy in question MAY be effective; otherwise one would surely not conduct a systematic review. If a review fails to produce good evidence, it is the authors’ ethical, moral and scientific obligation to state this clearly. If they don’t, they simply misuse science for promotion and mislead the public. Strictly speaking, this amounts to scientific misconduct.

In yet another post on the subject of systematic reviews, I wrote that, if you have rubbish trials, you can produce a rubbish review and publish it in a rubbish journal (perhaps I should have added ‘rubbish researchers’).

And finally this post about a systematic review of acupuncture: it is almost needless to mention that the findings (presented in a host of hardly understandable tables) suggest that acupuncture is of proven or possible effectiveness/efficacy for a very wide array of conditions. It also goes without saying that there is no critical discussion, for instance, of the fact that most of the included evidence originated from China, and that it has been shown over and over again that Chinese acupuncture research never seems to produce negative results.

The main point surely is that the problem of shoddy systematic reviews applies to a depressingly large degree to all areas of alternative medicine, and this is misleading us all.

So, what can be done about it?

My preferred (but sadly unrealistic) solution would be this:

STOP ENTHUSIASTIC AMATEURS FROM PRETENDING TO BE RESEARCHERS!

Research is not fundamentally different from other professional activities; to do it well, one needs adequate training; and doing it badly can cause untold damage.

A few days ago, the German TV programme ‘FACT’ broadcast a film (it is in German; the bit on homeopathy starts at ~min 20) about a young woman who had her breast cancer operated on but then decided to forgo subsequent conventional treatments. Instead she chose homeopathy, which she received from Dr Jens Wurster at the ‘Clinica Sta Croce’ in Locarno, Switzerland.

Elsewhere Dr Wurster stated this: Contrary to chemotherapy and radiation, we offer a therapy with homeopathy that supports the patient’s immune system. The basic approach of orthodox medicine is to consider the tumor as a local disease and to treat it aggressively, what leads to a weakening of the immune system. However, when analyzing all studies on cured cancer cases it becomes evident that the immune system is always the decisive factor. When the immune system is enabled to recognize tumor cells, it will also be able to combat them… When homeopathic treatment is successful in rebuilding the immune system and reestablishing the basic regulation of the organism then tumors can disappear again. I’ve treated more than 1000 cancer patients homeopathically and we could even cure or considerably ameliorate the quality of life for several years in some, advanced and metastasizing cases.

The recent TV programme showed a doctor at this establishment confirming that homeopathy alone can cure cancer. Dr Wurster (who currently seems to be a star amongst European homeopaths) is seen lecturing at the 2017 World Congress of Homeopathic Physicians in Leipzig and stating that a ‘particularly rigorous study’ conducted by conventional scientists (the senior author is Harald Walach – hardly a conventional scientist in my book!) proved homeopathy to be effective for cancer. Specifically, he stated that this study showed that ‘homeopathy offers a great advantage in terms of quality of life even for patients suffering from advanced cancers’.

This study did, of course, interest me. So, I located it and had a look. Here is the abstract:

BACKGROUND:

Many cancer patients seek homeopathy as a complementary therapy. It has rarely been studied systematically, whether homeopathic care is of benefit for cancer patients.

METHODS:

We conducted a prospective observational study with cancer patients in two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n = 259), and one cohort with conventionally treated cancer patients (CG; n = 380). For a direct comparison, matched pairs with patients of the same tumour entity and comparable prognosis were to be formed. Main outcome parameter: change of quality of life (FACT-G, FACIT-Sp) after 3 months. Secondary outcome parameters: change of quality of life (FACT-G, FACIT-Sp) after a year, as well as impairment by fatigue (MFI) and by anxiety and depression (HADS).

RESULTS:

HG: FACT-G, or FACIT-Sp, respectively improved statistically significantly in the first three months, from 75.6 (SD 14.6) to 81.1 (SD 16.9), or from 32.1 (SD 8.2) to 34.9 (SD 8.32), respectively. After 12 months, a further increase to 84.1 (SD 15.5) or 35.2 (SD 8.6) was found. Fatigue (MFI) decreased; anxiety and depression (HADS) did not change. CG: FACT-G remained constant in the first three months: 75.3 (SD 17.3) at t0, and 76.6 (SD 16.6) at t1. After 12 months, there was a slight increase to 78.9 (SD 18.1). FACIT-Sp scores improved significantly from t0 (31.0 – SD 8.9) to t1 (32.1 – SD 8.9) and declined again after a year (31.6 – SD 9.4). For fatigue, anxiety, and depression, no relevant changes were found. 120 patients of HG and 206 patients of CG met our criteria for matched-pairs selection. Due to large differences between the two patient populations, however, only 11 matched pairs could be formed. This is not sufficient for a comparative study.

CONCLUSION:

In our prospective study, we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment. It would take considerably larger samples to find matched pairs suitable for comparison in order to establish a definite causal relation between these effects and homeopathic treatment.

_________________________________________________________________

Even the abstract makes several points very clear, and the full text confirms further embarrassing details:

  • The patients in this study received homeopathy in addition to standard care (the patient shown in the film only had homeopathy until it was too late, and she subsequently died, aged 33).
  • The study compared A+B with B alone (A = homeopathy, B = standard care). It is hardly surprising that the additional attention associated with A leads to an improvement in quality of life. It is arguably even unethical to conduct a clinical trial to demonstrate such an obvious outcome.
  • The authors of this paper caution that it is not possible to conclude that a causal relationship between homeopathy and the outcome exists.
  • This is true not just because of the small sample size, but also because of the fact that the two groups had not been allocated randomly and therefore are bound to differ in a whole host of variables that have not or cannot be measured.
  • Harald Walach, the senior author of this paper, held a position which was funded by Heel, Baden-Baden, one of Germany’s largest manufacturers of homeopathics.
  • The H.W.& J.Hector Foundation, Germany, and the Samueli Institute, provided the funding for this study.

In the film, one of the co-authors of this paper, the oncologist HH Bartsch from Freiburg, states that Dr Wurster’s interpretation of this study is ‘dishonest’.

I am inclined to agree.

The authors of this systematic review aimed to summarize the evidence from clinical trials of cupping for athletes. Randomized controlled trials of cupping therapy, with no restriction regarding technique or co-interventions, were included if they measured the effects of cupping compared with any other intervention on health and performance outcomes in professional, semi-professional and leisure athletes. Data extraction and risk of bias assessment using the Cochrane Risk of Bias Tool were conducted independently by two pairs of reviewers.

Eleven trials with n = 498 participants from China, the United States, Greece, Iran, and the United Arab Emirates were included, reporting effects on different populations, including soccer, football, and handball players, swimmers, gymnasts, and track and field athletes of both amateur and professional nature. Cupping was applied between 1 and 20 times, in daily or weekly intervals, alone or in combination with, for example, acupuncture. Outcomes varied greatly from symptom intensity, recovery measures, functional measures, serum markers, and experimental outcomes. Cupping was reported as beneficial for perceptions of pain and disability, increased range of motion, and reductions in creatine kinase when compared to mostly untreated control groups. The majority of trials had an unclear or high risk of bias. None of the studies reported safety.

[Figure from the review: risk of bias of included trials – “+” indicates low risk of bias, “−” high risk, and “?” unclear risk.]

The authors concluded that no explicit recommendation for or against the use of cupping for athletes can be made. More studies are necessary for conclusive judgment on the efficacy and safety of cupping in athletes.

Considering the authors’ stated aim, this conclusion seems odd. Surely, they should have concluded that THERE IS NO CONVINCING EVIDENCE FOR THE USE OF CUPPING IN ATHLETES. But this sounds rather negative, and the JCAM does not seem to tolerate negative conclusions, as discussed repeatedly on this blog.

The discussion section of this paper is bare of any noticeable critical input (for those who don’t know: the aim of any systematic review must be to CRITICALLY EVALUATE THE PRIMARY DATA). The authors even go as far as stating that the trials reported in this systematic review found beneficial effects of cupping in athletes when compared to no intervention. I find this surprising and bordering on scientific misconduct. The RCTs were mostly not on cupping alone but on cupping in combination with some other treatments. More importantly, they were of such deplorable quality that they allow no conclusions about effectiveness. Lastly, they mostly failed to report on adverse effects which, as I have often stated, is a violation of research ethics.

In essence, all this paper proves is that, if you have rubbish trials, you can produce a rubbish review and publish it in a rubbish journal.

Some of you will remember the saga of the British Chiropractic Association suing my friend and co-author Simon Singh (eventually losing the case, lots of money and all respect). One of the ‘hot potatoes’ in this case was the question whether chiropractic is effective for infant colic. This question is settled, I thought: IT HAS NOT BEEN SHOWN TO WORK BETTER THAN A PLACEBO.

Yet manipulators have not forgotten the defeat and are still plotting, it seems, to overturn it. Hence a new systematic review assessed the effect of manual therapy interventions for healthy but unsettled, distressed and excessively crying infants.

The authors reviewed peer-reviewed primary research articles published in the last 26 years, identified from nine databases (Medline Ovid, Embase, Web of Science, Physiotherapy Evidence Database, Osteopathic Medicine Digital Repository, Cochrane (all databases), Index of Chiropractic Literature, Open Access Theses and Dissertations, and Cumulative Index to Nursing and Allied Health Literature). The inclusion criteria were: manual therapy (by regulated or registered professionals) of unsettled, distressed and excessively crying infants who were otherwise healthy and treated in a primary care setting. Outcomes of interest were: crying, feeding, sleep, parent-child relations, parent experience/satisfaction and parent-reported global change. The authors included the following types of peer-reviewed studies in their search: RCTs, prospective cohort studies, observational studies, case–control studies, case series, questionnaire surveys and qualitative studies.

Nineteen studies were selected for full review: seven randomised controlled trials, seven case series, three cohort studies, one service evaluation study and one qualitative study. Only 5 studies were rated as high quality: four RCTs (low risk of bias) and a qualitative study.

The authors found moderate strength evidence for the effectiveness of manual therapy on: reduction in crying time (favourable: -1.27 hours per day (95% CI -2.19 to -0.36)), sleep (inconclusive), parent-child relations (inconclusive) and global improvement (no effect).

[Figure from the review: reduction in crying – mean differences of the included RCTs.]

The risk of reported adverse events was low (only 8 studies mentioned adverse effects at all, meaning that the rest were in breach of research and publication ethics): seven non-serious events per 1000 infants exposed to manual therapy (n=1308) and 110 per 1000 in those not exposed.

The authors concluded that some small benefits were found, but whether these are meaningful to parents remains unclear as does the mechanisms of action. Manual therapy appears relatively safe.

For several reasons, I find this review, although technically sound, quite odd.

Why review uncontrolled data when RCTs are available?

How can a qualitative study be rated as high quality for assessing the effectiveness of a therapy?

How can the authors categorically conclude that there were benefits when there were only 4 RCTs of high quality?

Why do they not explain the implications of none of the RCTs being placebo-controlled?

How can anyone pool the results of all types of manual therapies which, as most of us know, are highly diverse?

How can the authors conclude about the safety of manual therapies when most trials failed to report on this issue?

Why do they not point out that this is unethical?

My greatest general concern about this review is the overt lack of critical input. A systematic review is not a means of promoting an intervention but of critically assessing its value. This void of critical thinking is palpable throughout the paper. In the discussion section, for instance, the authors state that “previous systematic reviews from 2012 and 2014 concluded there was favourable but inconclusive and weak evidence for manual therapy for infantile colic”. They mention two reviews to back up this claim. They conveniently forget my own review of 2009 (the first on this subject). Why? Perhaps because it did not fit their preconceived ideas? Here is my abstract:

Some chiropractors claim that spinal manipulation is an effective treatment for infant colic. This systematic review was aimed at evaluating the evidence for this claim. Four databases were searched and three randomised clinical trials met all the inclusion criteria. The totality of this evidence fails to demonstrate the effectiveness of this treatment. It is concluded that the above claim is not based on convincing data from rigorous clinical trials.

Towards the end of their paper, the authors state that “this was a comprehensive and rigorously conducted review…” I beg to differ; it turned out to be uncritical and biased, in my view. And at the very end of the article, we learn a possible reason for this phenomenon: “CM had financial support from the National Council for Osteopathic Research from crowd-funded donations.”

The aim of this three-armed, parallel, randomized exploratory study was to determine whether two types of acupuncture (auricular acupuncture [AA] and traditional Chinese acupuncture [TCA]) were feasible and more effective than usual care (UC) alone for headache related to traumatic brain injury (TBI). The subjects were previously deployed Service members (18–69 years old) with mild-to-moderate TBI and headaches. The interventions explored were UC alone or with the addition of AA or TCA. The primary outcome was the Headache Impact Test (HIT). Secondary outcomes were the Numerical Rating Scale (NRS), Pittsburgh Sleep Quality Index, Post-Traumatic Stress Checklist, Symptom Checklist-90-R, Medical Outcome Study Quality of Life (QoL), Beck Depression Inventory, State-Trait Anxiety Inventory, the Automated Neuropsychological Assessment Metrics, and expectancy of outcome and acupuncture efficacy.

Mean HIT scores decreased in the AA and TCA groups but increased slightly in the UC-only group from baseline to week 6 [AA, −10.2% (−6.4 points); TCA, −4.6% (−2.9 points); UC, +0.8% (+0.6 points)]. Both acupuncture groups had sizable decreases in NRS (Pain Best), compared to UC (TCA versus UC: P = 0.0008, d = 1.70; AA versus UC: P = 0.0127, d = 1.6). No statistically significant results were found for any other secondary outcome measures.

The authors concluded that both AA and TCA improved headache-related QoL more than UC did in Service members with TBI.

The stated aim of this study (to determine whether AA or TCA, each added to UC, is more effective than UC alone) does not make sense and should therefore never have passed ethics review, in my view. The RCT followed a design which essentially is the much-lamented ‘A+B versus B’ protocol (except that a further group, ‘C+B’, was added). The nature of such designs is that there is no control for placebo effects, the extra time and attention, etc. Therefore, such studies cannot fail to generate positive results, even if the tested intervention is a placebo. In such trials, it is impossible to attribute any outcome to the experimental treatment. This means that the positive results are known before the first patient has been enrolled; hence they are an unethical waste of resources which can only serve one purpose: to mislead us. It also means that the conclusions drawn above are not correct.
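
To see why such a design is bound to ‘succeed’, here is a small simulation with entirely invented numbers (an illustration of the general problem, not a re-analysis of this trial): treatment A is given zero specific effect, yet the non-specific effects of the extra attention and expectation in the A+B arm are enough to generate a ‘significant’ difference.

```python
# A sketch with invented numbers of the 'A+B versus B' problem:
# treatment A has ZERO specific effect, yet the non-specific effects
# (extra attention, expectation) in the A+B arm yield a 'significant' result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60                          # patients per arm (hypothetical)
specific_effect_of_A = 0.0      # A is assumed to be inert
nonspecific_effect = 0.5        # assumed effect of extra attention/expectation

usual_care_only = rng.normal(0.0, 1.0, n)                                          # B alone
a_plus_usual_care = rng.normal(specific_effect_of_A + nonspecific_effect, 1.0, n)  # A + B

t_stat, p_value = stats.ttest_ind(a_plus_usual_care, usual_care_only)
print(f"p = {p_value:.4f}")     # typically < 0.05, although A itself does nothing
```

Only a design in which the control arm receives the same amount of attention (e.g. a sham-acupuncture control) could separate the specific effect of the needling from these non-specific effects.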

An alternative and, in my view, more accurate conclusion would be this one: both AA and TCA probably had no effect; the improved headache-related QoL was due to the additional attention and expectation in the two experimental groups and is unrelated to the interventions tested in this study.

In our new book, MORE HARM THAN GOOD, we discuss that such trials are deceptive to the point of being unethical. Considering the prominence and experience of Wayne Jonas, the first author of this paper, such an obvious transgression is more than a little disappointing – I would argue that it amounts to overt scientific misconduct.

This announcement caught my eye:

START OF 1st QUOTE

Dr Patrick Vickers of the Northern Baja Gerson Centre, Mexico will deliver a two hour riveting lecture of ‘The American Experience of Dr Max Gerson, M.D.’

The lecture will present the indisputable science supporting the Gerson Therapy and its ability to reverse advanced disease.

Dr Vickers will explain the history and the politics of both medical and governmental authorities and their relentless attempts to surpress this information, keeping it from the world.

‘Dr Max Gerson, Censored for Curing Cancer’

“I see in Dr Max Gerson, one of the most eminent geniuses in medical history” Nobel Prize Laureate, Dr Albert Schweitzer.

END OF 1st QUOTE

Who is this man, Dr Patrik Vickers, I asked myself. And soon I found a CV in his own words:

START OF 2nd QUOTE

Dr. Patrick Vickers is the Director and Founder of the Northern Baja Gerson Clinic. His mission is to provide patients with the highest quality and standard of care available in the world today for the treatment of advanced (and non-advanced) degenerative disease. His dedication and commitment to the development of advanced protocols has led to the realization of exponentially greater results in healing disease. Dr. Vickers, along with his highly trained staff, provides patients with the education, support, and resources to achieve optimal health.

Dr. Patrick was born and raised outside of Milwaukee, Wisconsin. At the age of 11 years old, after witnessing a miraculous recovery from a chiropractic adjustment, Dr. Patrick’s passion for natural medicine was born.

Giving up careers in professional golf and entertainment, Dr. Patrick obtained his undergraduate degrees from the University of Wisconsin-Madison and Life University before going on to receive his doctorate in Chiropractic from New York Chiropractic College in 1997.

While a student at New York Chiropractic College(NYCC), Dr. Patrick befriended Charlotte Gerson, the last living daughter of Dr. Max Gerson, M.D. who Nobel Peace Prize Winner, Dr. Albert Schweitzer called, ” One of the most eminent geniuses in medical history. “

Dr. Gerson, murdered in 1959, remains the most censured doctor in the history of medicine as he was reversing virtually every degenerative disease known to man, including TERMINAL cancer…

END OF 2nd QUOTE

I have to admit, I find all this quite upsetting!

Not because the ticket for the lecture costs just over £27.

Not because exploitation of vulnerable patients by quacks always annoys me.

Not even because the announcement is probably unlawful, according to the UK ‘cancer act’.

I find it upsetting because there is simply no good evidence that the Gerson therapy does anything to cancer patients other than making them die earlier, poorer and more miserable (the fact that Prince Charles is a fan only makes it worse). And I do not believe that the lecture will present indisputable evidence to the contrary – lectures almost never do. Evidence has to be presented in peer-reviewed publications, independently confirmed and scrutinised. And, as far as I can see, Vickers has not authored a single peer-reviewed article [however, he thrives on anecdotal stories via YouTube (worth watching, if you want to hear pure BS)].

But mostly I find it upsetting because it is almost inevitable that some desperate cancer patients will believe ‘Dr’ Vickers. And if they do, they will have to pay a very high price.
