

As I have often said, I find it regrettable that sceptics frequently claim THERE IS NOT A SINGLE STUDY THAT SHOWS HOMEOPATHY TO BE EFFECTIVE (or something to that effect). This is quite simply not true, and it gives homeopathy fans an opportunity to claim that sceptics are wrong. The truth is that THE TOTALITY OF THE MOST RELIABLE EVIDENCE FAILS TO SUGGEST THAT HIGHLY DILUTED HOMEOPATHIC REMEDIES ARE EFFECTIVE BEYOND PLACEBO. As a message for consumers, this is a little more complex, but I believe it is worth being well-informed and truthful.

And that also means admitting that a few apparently rigorous trials of homeopathy exist and some of them show positive results. Today, I want to focus on this small set of studies.

How can a rigorous trial of a highly diluted homeopathic remedy yield a positive result? As far as I can see, there are several possibilities:

  1. Homeopathy does work after all, and we have not fully understood the laws of physics, chemistry etc. Homeopaths favour this option, of course, but I find it extremely unlikely, and most rational thinkers would discard this possibility outright. It is not that we don’t quite understand homeopathy’s mechanism; the fact is that we understand that there cannot be a mechanism that is in line with the laws of nature.
  2. The trial in question is the victim of some undetected error.
  3. The result has come about by chance. Of 100 trials, about 5 would produce a positive result at the 5% significance level purely by chance (see the sketch after this list).
  4. The researchers have cheated.
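As a rough illustration of point 3, here is a minimal simulation in Python (purely hypothetical numbers – the sample size and the crude z-test are my own assumptions, and the code has nothing to do with any real homeopathy trial). It shows how often trials of a completely inert remedy come out 'positive' at the 5% significance level by chance alone.

```python
# Illustrative sketch: how many 'positive' trials appear by chance alone
# when the tested remedy is a pure placebo (no real effect whatsoever).

import random

random.seed(42)

def fake_trial(n_per_arm=50):
    """Simulate one placebo-vs-placebo trial; return True if it looks
    'significant' at the two-sided 5% level (crude z-test, SD assumed = 1)."""
    a = [random.gauss(0, 1) for _ in range(n_per_arm)]
    b = [random.gauss(0, 1) for _ in range(n_per_arm)]
    diff = sum(a) / n_per_arm - sum(b) / n_per_arm
    se = (2 / n_per_arm) ** 0.5
    return abs(diff / se) > 1.96

positives = sum(fake_trial() for _ in range(1000))
print(f"'Positive' trials out of 1000 null trials: {positives}")  # roughly 50
```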

When we critically assess any given trial, we attempt, in a way, to determine which of these four explanations applies. Unfortunately, we have to rely on what the authors of the trial tell us. Publications never provide all the details we need for this purpose, and we are often left speculating which of the explanations might apply. Whichever it is, we assume the result is false-positive.

Naturally, this assumption is hard to accept for homeopaths; they simply conclude that we are biased against homeopathy and that, however rigorous a study of homeopathy may be, sceptics will not accept its result if it turns out to be positive.

But there might be a way to settle the argument and arrive at a more objective verdict, I think. We only need to remind ourselves of a crucially important principle in all of science: INDEPENDENT REPLICATION. To be convincing, a scientific paper needs to provide evidence that its results are reproducible. In medicine, it is unquestionably wise to accept a new finding only after it has been confirmed by other, independent researchers. Only if we have at least one (better: several) independent replications can we be reasonably sure that the result in question is true and not false-positive due to bias, chance, error or fraud.

And this is, I believe, the extremely odd phenomenon about the ‘positive’ and apparently rigorous studies of homeopathic remedies. Let’s look at the recent meta-analysis by Mathie et al. The authors found several studies that were both positive and fairly rigorous. These trials differ in many respects (e.g. remedies used, conditions treated), but they have, as far as I can see, one important feature in common: THEY HAVE NOT BEEN INDEPENDENTLY REPLICATED.

If that is not astounding, I don’t know what is!

Think of it: faced with a finding that flies in the face of science and would, if true, revolutionise much of medicine, scientists should jump with excitement. Yet, in reality, nobody seems to take the trouble to check whether it is the truth or an error.

To explain this absurdity more fully, let’s take just one of these trials as an example, one related to a common and serious condition: COPD

The study is by Prof Frass and was published in 2005 – surely long enough ago for plenty of independent replications to have emerged. Its results showed that, with potentized (C30) potassium dichromate, the amount of tracheal secretions was reduced, extubation could be performed significantly earlier, and the length of stay was significantly shorter. This is a scientific as well as clinical sensation, if ever there was one!

The RCT was published in one of the leading journals in this field (Chest), which is read by most specialists in the area, and it was widely reported at the time. Even today, there is hardly an interview with Prof Frass in which he does not boast about this trial with its truly sensational results (only last week, I saw one). If Frass were correct, his findings would revolutionise the lives of thousands of severely suffering patients at the very brink of death. In other words, it is inconceivable that Frass’ result has not been replicated!

But it hasn’t; at least there is nothing in Medline.

Why not? A risk-free, cheap, universally available and easy to administer treatment for such a severe, life-threatening condition would normally be picked up instantly. There should not be one, but dozens of independent replications by now. There should be several RCTs testing Frass’ therapy and at least one systematic review of these studies telling us clearly what is what.

But instead there is a deafening silence.

Why?

For heaven’s sake, why?

The only logical explanation is that many centres around the world did try Frass’ therapy. Most likely, they found that it does not work and soon dismissed it. Others might even have gone to the trouble of conducting a formal study of Frass’ ‘sensational’ therapy and found it to be ineffective. Subsequently, they felt too silly to submit it for publication – who would not laugh at them if they admitted to having trialled a remedy diluted at a ratio of 1 : 10^60 and found it to be worthless? Others might have written up their study and submitted it for publication, but were rejected by all reputable journals in the field because the editors felt that comparing one placebo to another placebo is not real science.
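For readers who wonder what a 1 : 10^60 (C30) dilution actually means, here is a back-of-the-envelope sketch (the starting quantity of one mole is an arbitrary assumption for illustration; Avogadro’s number is about 6.022 × 10^23 molecules per mole):

```python
# Back-of-the-envelope estimate: expected number of original molecules
# remaining after a C30 dilution (illustrative, assumed quantities only).

AVOGADRO = 6.022e23          # molecules per mole
DILUTION_FACTOR = 10 ** 60   # C30 = a 1:100 dilution repeated 30 times

# Assume (hypothetically) we start with 1 mole of the active substance
# and draw a dose of comparable size from the final dilution.
molecules_at_start = 1 * AVOGADRO
expected_molecules_per_dose = molecules_at_start / DILUTION_FACTOR

print(f"Expected molecules of the original substance: {expected_molecules_per_dose:.1e}")
# -> about 6e-37, i.e. effectively zero: the chance that a dose contains
#    even a single molecule of the starting material is about 1 in 10^36.
```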

And this is roughly how it went with the other ‘positive’ and seemingly rigorous studies of homeopathy as well, I suspect.

Regardless of whether I am correct or not, the fact is that there are no independent replications (if readers know any, please let me know).

Once a sufficiently long period of time has elapsed and no replications of a ‘sensational’ finding have emerged, the finding becomes unbelievable, even bogus – no rational thinker can possibly believe such a result (I, for one, have not yet met an intensive care specialist who believes Frass’ findings, for instance). Subsequently, it is quietly dropped into the waste-basket of science, where it no longer obstructs progress.

The absence of independent replications is therefore a most useful mechanism by which science rids itself of falsehoods.

It seems that homeopathy is such a falsehood.

 

 

The plethora of dodgy meta-analyses in alternative medicine has been the subject of a recent post – so this one is a mere update of a regular lament.

This new meta-analysis aimed to evaluate the evidence for the effectiveness of acupuncture in the treatment of lumbar disc herniation (LDH). (Call me pedantic, but I prefer meta-analyses that evaluate the evidence FOR AND AGAINST a therapy.) Electronic databases were searched to identify RCTs of acupuncture for LDH, and 30 RCTs involving 3503 participants were included; 29 were published in Chinese and one in English, and all trialists were Chinese.

The results showed that acupuncture had a higher total effective rate than lumbar traction, ibuprofen, diclofenac sodium and meloxicam. Acupuncture was also superior to lumbar traction and diclofenac sodium in terms of pain measured with visual analogue scales (VAS). The total effective rate in 5 trials was greater for acupuncture than for mannitol plus dexamethasone and mecobalamin, ibuprofen plus fugui gutong capsule, loxoprofen, mannitol plus dexamethasone and huoxue zhitong decoction, respectively. Two trials showed a superior effect of acupuncture in VAS scores compared with ibuprofen or mannitol plus dexamethasone, respectively.

The authors from the College of Traditional Chinese Medicine, Jinan University, Guangzhou, Guangdong, China, concluded that acupuncture showed a more favourable effect in the treatment of LDH than lumbar traction, ibuprofen, diclofenac sodium, meloxicam, mannitol plus dexamethasone and mecobalamin, fugui gutong capsule plus ibuprofen, mannitol plus dexamethasone, loxoprofen and huoxue zhitong decoction. However, further rigorously designed, large-scale RCTs are needed to confirm these findings.

Why do I call this meta-analysis ‘dodgy’? I have several reasons, 10 to be exact:

  1. There is no plausible mechanism by which acupuncture might cure LDH.
  2. The types of acupuncture used in these trials were far from uniform and included manual acupuncture (MA) in 13 studies, electro-acupuncture (EA) in 10 studies, and warm needle acupuncture (WNA) in 7 studies. Arguably, these are different interventions that cannot be lumped together.
  3. The trials were mostly of very poor quality, as depicted in the table above. For instance, 18 studies failed to mention the methods used for randomisation. I have previously shown that some Chinese studies use the terms ‘randomisation’ and ‘RCT’ even in the absence of a control group.
  4. None of the trials made any attempt to control for placebo effects.
  5. None of the trials were conducted against sham acupuncture.
  6. Only 10 trials reported dropouts or withdrawals.
  7. Only two trials reported adverse reactions.
  8. None of these shortcomings were critically discussed in the paper.
  9. Despite their affiliation, the authors state that they have no conflicts of interest.
  10. All trials were conducted in China, and, on this blog, we have discussed repeatedly that acupuncture trials from China never report negative results.

And why do I find the journal ‘dodgy’?

Because any journal that publishes such a paper is likely to be sub-standard. In the case of ‘Acupuncture in Medicine’, the official journal of the British Medical Acupuncture Society, I see such appalling articles published far too frequently to believe that the present paper is just a regrettable, one-off mistake. What makes this issue particularly embarrassing is, of course, the fact that the journal belongs to the BMJ group.

… but we never really thought that science publishing was about anything other than money, did we?

What an odd title, you might think.

Systematic reviews are the most reliable evidence we presently have!

Yes, this is my often-voiced and honestly-held opinion but, like any other type of research, systematic reviews can be badly abused; and when this happens, they can seriously mislead us.

A new paper by someone who knows more about these issues than most of us, John Ioannidis of Stanford University, should make us think. It aimed at exploring the growth of published systematic reviews and meta-analyses and at estimating how often they are redundant, misleading, or serve conflicted interests. Ioannidis demonstrated that the publication of systematic reviews and meta-analyses has increased rapidly. In the period from January 1, 1986, to December 4, 2015, PubMed tagged 266,782 items as “systematic reviews” and 58,611 as “meta-analyses”. Annual publications between 1991 and 2014 increased by 2,728% for systematic reviews and 2,635% for meta-analyses, versus only 153% for all PubMed-indexed items. Ioannidis believes that probably more systematic reviews of trials than new randomized trials are published annually. Most topics addressed by meta-analyses of randomized trials have overlapping, redundant meta-analyses; sometimes more than 20 meta-analyses exist on the same topic.

Some fields produce massive numbers of meta-analyses; for example, 185 meta-analyses of antidepressants for depression were published between 2007 and 2014. These meta-analyses are often produced either by industry employees or by authors with industry ties, and their results are aligned with sponsor interests. China has rapidly become the most prolific producer of English-language, PubMed-indexed meta-analyses. The most massive presence of Chinese meta-analyses is on genetic associations (63% of global production in 2014), where almost all results are misleading since they combine fragmented information from the mostly abandoned era of candidate-gene studies. Furthermore, many contracting companies working on evidence synthesis receive industry contracts to produce meta-analyses, many of which probably remain unpublished. Many other meta-analyses have serious flaws. Of the remaining ones, most have weak or insufficient evidence to inform decision making. Few systematic reviews and meta-analyses are both non-misleading and useful.

The author concluded that the production of systematic reviews and meta‐analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta‐analyses are unnecessary, misleading, and/or conflicted.

Ioannidis makes the following ‘Policy Points’:

  • Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta‐analyses. Instead of promoting evidence‐based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools.
  • Suboptimal systematic reviews and meta‐analyses can be harmful given the major prestige and influence these types of studies have acquired.
  • The publication of systematic reviews and meta‐analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.

Obviously, Ioannidis did not have alternative medicine in mind when he researched and published this article. But he easily could have! Virtually everything he stated in his paper does apply to it. In some areas of alternative medicine, things are even worse than Ioannidis describes.

Take TCM, for instance. I have previously looked at some of the many systematic reviews of TCM that currently flood Medline, based on Chinese studies. This is what I concluded at the time:

Why does that sort of thing frustrate me so much? Because it is utterly meaningless and potentially harmful:

  • I don’t know what treatments the authors are talking about.
  • Even if I managed to dig deeper, I could not get the information because practically all the primary studies are published in obscure Chinese-language journals.
  • Even if I did read Chinese, I would not feel motivated to assess the primary studies because we know they are all of very poor quality – too flimsy to bother with.
  • Even if they were formally of good quality, I would have my doubts about their reliability; remember: 100% of these trials report positive findings!
  • Most crucially, I am frustrated because conclusions of this nature are deeply misleading and potentially harmful. They give the impression that there might be ‘something in it’, and that it (whatever ‘it’ might be) could be well worth trying. This may give false hope to patients and can send the rest of us on a wild goose chase.

So, to ease the task of future authors of such papers, I decided to give them a text for a proper EVIDENCE-BASED conclusion which they can adapt to fit every review. This will save them time and, perhaps more importantly, it will save everyone who might be tempted to read such futile articles the effort of studying them in detail. Here is my suggestion for a conclusion soundly based on the evidence, no matter what TCM subject the review is about:

OUR SYSTEMATIC REVIEW HAS SHOWN THAT THERAPY ‘X’ AS A TREATMENT OF CONDITION ‘Y’ IS CURRENTLY NOT SUPPORTED BY SOUND EVIDENCE.

On another occasion, I stated that I am getting very tired of conclusions stating ‘…XY MAY BE EFFECTIVE/HELPFUL/USEFUL/WORTH A TRY…’ It is obvious that the therapy in question MAY be effective, otherwise one would surely not conduct a systematic review. If a review fails to produce good evidence, it is the authors’ ethical, moral and scientific obligation to state this clearly. If they don’t, they simply misuse science for promotion and mislead the public. Strictly speaking, this amounts to scientific misconduct.

In yet another post on the subject of systematic reviews, I wrote that, if you have rubbish trials, you can produce a rubbish review and publish it in a rubbish journal (perhaps I should have added ‘rubbish researchers’).

And finally this post about a systematic review of acupuncture: it is almost needless to mention that the findings (presented in a host of hardly understandable tables) suggest that acupuncture is of proven or possible effectiveness/efficacy for a very wide array of conditions. It also goes without saying that there is no critical discussion, for instance, of the fact that most of the included evidence originated from China, and that it has been shown over and over again that Chinese acupuncture research never seems to produce negative results.

The main point surely is that the problem of shoddy systematic reviews applies to a depressingly large degree to all areas of alternative medicine, and this is misleading us all.

So, what can be done about it?

My preferred (but sadly unrealistic) solution would be this:

STOP ENTHUSIASTIC AMATEURS FROM PRETENDING TO BE RESEARCHERS!

Research is not fundamentally different from other professional activities; to do it well, one needs adequate training; and doing it badly can cause untold damage.

A few days ago, the German TV programme ‘FACT’ broadcast a film (it is in German; the bit on homeopathy starts at ~min 20) about a young woman who first had her breast cancer operated on but then decided to forgo subsequent conventional treatments. Instead, she chose homeopathy, which she received from Dr Jens Wurster at the ‘Clinica Sta Croce‘ in Locarno, Switzerland.

Elsewhere Dr Wurster stated this: Contrary to chemotherapy and radiation, we offer a therapy with homeopathy that supports the patient’s immune system. The basic approach of orthodox medicine is to consider the tumor as a local disease and to treat it aggressively, what leads to a weakening of the immune system. However, when analyzing all studies on cured cancer cases it becomes evident that the immune system is always the decisive factor. When the immune system is enabled to recognize tumor cells, it will also be able to combat them… When homeopathic treatment is successful in rebuilding the immune system and reestablishing the basic regulation of the organism then tumors can disappear again. I’ve treated more than 1000 cancer patients homeopathically and we could even cure or considerably ameliorate the quality of life for several years in some, advanced and metastasizing cases.

The recent TV programme showed a doctor at this establishment confirming that homeopathy alone can cure cancer. Dr Wurster (who currently seems to be a star amongst European homeopaths) is seen lecturing at the 2017 World Congress of Homeopathic Physicians in Leipzig and stating that a ‘particularly rigorous study’ conducted by conventional scientists (the senior author is Harald Walach – hardly a conventional scientist in my book!) proved homeopathy to be effective for cancer. Specifically, he stated that this study showed that ‘homeopathy offers a great advantage in terms of quality of life even for patients suffering from advanced cancers’.

This study did, of course, interest me. So, I located it and had a look. Here is the abstract:

BACKGROUND:

Many cancer patients seek homeopathy as a complementary therapy. It has rarely been studied systematically whether homeopathic care is of benefit for cancer patients.

METHODS:

We conducted a prospective observational study with cancer patients in two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n = 259), and one cohort with conventionally treated cancer patients (CG; n = 380). For a direct comparison, matched pairs with patients of the same tumour entity and comparable prognosis were to be formed. Main outcome parameter: change of quality of life (FACT-G, FACIT-Sp) after 3 months. Secondary outcome parameters: change of quality of life (FACT-G, FACIT-Sp) after a year, as well as impairment by fatigue (MFI) and by anxiety and depression (HADS).

RESULTS:

HG: FACT-G, or FACIT-Sp, respectively improved statistically significantly in the first three months, from 75.6 (SD 14.6) to 81.1 (SD 16.9), or from 32.1 (SD 8.2) to 34.9 (SD 8.32), respectively. After 12 months, a further increase to 84.1 (SD 15.5) or 35.2 (SD 8.6) was found. Fatigue (MFI) decreased; anxiety and depression (HADS) did not change. CG: FACT-G remained constant in the first three months: 75.3 (SD 17.3) at t0, and 76.6 (SD 16.6) at t1. After 12 months, there was a slight increase to 78.9 (SD 18.1). FACIT-Sp scores improved significantly from t0 (31.0 – SD 8.9) to t1 (32.1 – SD 8.9) and declined again after a year (31.6 – SD 9.4). For fatigue, anxiety, and depression, no relevant changes were found. 120 patients of HG and 206 patients of CG met our criteria for matched-pairs selection. Due to large differences between the two patient populations, however, only 11 matched pairs could be formed. This is not sufficient for a comparative study.

CONCLUSION:

In our prospective study, we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment. It would take considerably larger samples to find matched pairs suitable for comparison in order to establish a definite causal relation between these effects and homeopathic treatment.

_________________________________________________________________

Even the abstract makes several points very clear, and the full text confirms further embarrassing details:

  • The patients in this study received homeopathy in addition to standard care (the patient shown in the film only had homeopathy until it was too late, and she subsequently died, aged 33).
  • The study compared A+B with B alone (A=homeopathy, B= standard care). It is hardly surprising that the additional attention of A leads to an improvement in quality of life. It is arguably even unethical to conduct a clinical trial to demonstrate such an obvious outcome.
  • The authors of this paper caution that it is not possible to conclude that a causal relationship between homeopathy and the outcome exists.
  • This is true not just because of the small sample size, but also because of the fact that the two groups had not been allocated randomly and therefore are bound to differ in a whole host of variables that have not or cannot be measured.
  • Harald Walach, the senior author of this paper, held a position which was funded by Heel, Baden-Baden, one of Germany’s largest manufacturers of homeopathics.
  • The H.W.& J.Hector Foundation, Germany, and the Samueli Institute, provided the funding for this study.

In the film, one of the co-authors of this paper, the oncologist HH Bartsch from Freiburg, states that Dr Wurster’s interpretation of this study is ‘dishonest’.

I am inclined to agree.

The authors of this systematic review aimed to summarize the evidence from clinical trials of cupping for athletes. Randomized controlled trials on cupping therapy, with no restriction regarding technique or co-interventions, were included if they measured the effects of cupping compared with any other intervention on health and performance outcomes in professional, semi-professional, and leisure athletes. Data extraction and risk-of-bias assessment using the Cochrane Risk of Bias Tool were conducted independently by two pairs of reviewers.

Eleven trials with n = 498 participants from China, the United States, Greece, Iran, and the United Arab Emirates were included, reporting effects on different populations, including soccer, football, and handball players, swimmers, gymnasts, and track and field athletes of both amateur and professional nature. Cupping was applied between 1 and 20 times, in daily or weekly intervals, alone or in combination with, for example, acupuncture. Outcomes varied greatly from symptom intensity, recovery measures, functional measures, serum markers, and experimental outcomes. Cupping was reported as beneficial for perceptions of pain and disability, increased range of motion, and reductions in creatine kinase when compared to mostly untreated control groups. The majority of trials had an unclear or high risk of bias. None of the studies reported safety.

[Figure: risk of bias of included trials – “+” indicates low risk of bias, “−” high risk, and “?” unclear risk of bias.]

The authors concluded that no explicit recommendation for or against the use of cupping for athletes can be made. More studies are necessary for conclusive judgment on the efficacy and safety of cupping in athletes.

Considering the authors’ stated aim, this conclusion seems odd. Surely, they should have concluded that THERE IS NO CONVINCING EVIDENCE FOR THE USE OF CUPPING IN ATHLETES. But this sounds rather negative, and the JCAM does not seem to tolerate negative conclusions, as discussed repeatedly on this blog.

The discussion section of this paper is devoid of any noticeable critical input (for those who don’t know: the aim of any systematic review must be to CRITICALLY EVALUATE THE PRIMARY DATA). The authors even go as far as stating that the trials reported in this systematic review found beneficial effects of cupping in athletes when compared to no intervention. I find this surprising and bordering on scientific misconduct. The RCTs were mostly not of cupping alone but of cupping in combination with other treatments. More importantly, they were of such deplorable quality that they allow no conclusions about effectiveness. Lastly, they mostly failed to report on adverse effects which, as I have often stated, is a violation of research ethics.

In essence, all this paper proves is that, if you have rubbish trials, you can produce a rubbish review and publish it in a rubbish journal.

Some of you will remember the saga of the British Chiropractic Association suing my friend and co-author Simon Singh (eventually losing the case, lots of money and all respect). One of the ‘hot potatoes’ in this case was the question whether chiropractic is effective for infant colic. This question is settled, I thought: IT HAS NOT BEEN SHOWN TO WORK BETTER THAN A PLACEBO.

Yet manipulators have not forgotten the defeat and are still plotting, it seems, to overturn it. Hence a new systematic review assessed the effect of manual therapy interventions for healthy but unsettled, distressed and excessively crying infants.

The authors reviewed published peer-reviewed primary research articles from the last 26 years, identified in nine databases (Medline Ovid, Embase, Web of Science, Physiotherapy Evidence Database, Osteopathic Medicine Digital Repository, Cochrane (all databases), Index of Chiropractic Literature, Open Access Theses and Dissertations, and Cumulative Index to Nursing and Allied Health Literature). The inclusion criteria were: manual therapy (by regulated or registered professionals) of unsettled, distressed and excessively crying infants who were otherwise healthy and treated in a primary care setting. Outcomes of interest were: crying, feeding, sleep, parent-child relations, parent experience/satisfaction and parent-reported global change. The authors included the following types of peer-reviewed studies in their search: RCTs, prospective cohort studies, observational studies, case–control studies, case series, questionnaire surveys and qualitative studies.

Nineteen studies were selected for full review: seven randomised controlled trials, seven case series, three cohort studies, one service evaluation study and one qualitative study. Only 5 studies were rated as high quality: four RCTs (low risk of bias) and a qualitative study.

The authors found moderate strength evidence for the effectiveness of manual therapy on: reduction in crying time (favourable: -1.27 hours per day (95% CI -2.19 to -0.36)), sleep (inconclusive), parent-child relations (inconclusive) and global improvement (no effect).

[Figure: reduction in crying time – mean differences from the RCTs.]

The risk of reported adverse events was low (only 8 studies mentioned adverse effects at all, meaning that the rest were in breach of research and publication ethics): seven non-serious events per 1000 infants exposed to manual therapy (n=1308) and 110 per 1000 in those not exposed.

The authors concluded that some small benefits were found, but whether these are meaningful to parents remains unclear as does the mechanisms of action. Manual therapy appears relatively safe.

For several reasons, I find this review, although technically sound, quite odd.

Why review uncontrolled data when RCTs are available?

How can a qualitative study be rated as high quality for assessing the effectiveness of a therapy?

How can the authors categorically conclude that there were benefits when there were only 4 RCTs of high quality?

Why do they not explain the implications of none of the RCTs being placebo-controlled?

How can anyone pool the results of all types of manual therapies which, as most of us know, are highly diverse?

How can the authors conclude about the safety of manual therapies when most trials failed to report on this issue?

Why do they not point out that this is unethical?

My greatest general concern about this review is the overt lack of critical input. A systematic review is not a means of promoting an intervention but of critically assessing its value. This void of critical thinking is palpable throughout the paper. In the discussion section, for instance, the authors state that “previous systematic reviews from 2012 and 2014 concluded there was favourable but inconclusive and weak evidence for manual therapy for infantile colic”. They mention two reviews to back up this claim. They conveniently forget my own review of 2009 (the first on this subject). Why? Perhaps because it did not fit their preconceived ideas? Here is my abstract:

Some chiropractors claim that spinal manipulation is an effective treatment for infant colic. This systematic review was aimed at evaluating the evidence for this claim. Four databases were searched and three randomised clinical trials met all the inclusion criteria. The totality of this evidence fails to demonstrate the effectiveness of this treatment. It is concluded that the above claim is not based on convincing data from rigorous clinical trials.

Towards the end of their paper, the authors state that “this was a comprehensive and rigorously conducted review…” I beg to differ; it turned out to be uncritical and biased, in my view. And at the very end of the article, we learn a possible reason for this phenomenon: “CM had financial support from the National Council for Osteopathic Research from crowd-funded donations.”

The aim of this three-armed, parallel, randomized exploratory study was to determine whether two types of acupuncture (auricular acupuncture [AA] and traditional Chinese acupuncture [TCA]) were feasible and more effective than usual care (UC) alone for TBI-related headache. The subjects were previously deployed Service members (18–69 years old) with mild-to-moderate TBI and headaches. The interventions explored were UC alone or with the addition of AA or TCA. The primary outcome was the Headache Impact Test (HIT). Secondary outcomes were the Numerical Rating Scale (NRS), Pittsburgh Sleep Quality Index, Post-Traumatic Stress Checklist, Symptom Checklist-90-R, Medical Outcome Study Quality of Life (QoL), Beck Depression Inventory, State-Trait Anxiety Inventory, the Automated Neuropsychological Assessment Metrics, and expectancy of outcome and acupuncture efficacy.

Mean HIT scores decreased in the AA and TCA groups but increased slightly in the UC-only group from baseline to week 6 [AA, −10.2% (−6.4 points); TCA, −4.6% (−2.9 points); UC, +0.8% (+0.6 points)]. Both acupuncture groups had sizable decreases in NRS (Pain Best), compared to UC (TCA versus UC: P = 0.0008, d = 1.70; AA versus UC: P = 0.0127, d = 1.6). No statistically significant results were found for any other secondary outcome measures.

The authors concluded that both AA and TCA improved headache-related QoL more than UC did in Service members with TBI.

The stated aim of this study (to determine whether AA or TCA, each added to UC, is more effective than UC alone) does not make sense, and the trial should therefore never have passed ethics review, in my view. The RCT followed a design which essentially is the much-lamented ‘A+B versus B’ protocol (except that a further group, ‘C+B’, was added). The nature of such designs is that there is no control for placebo effects, the extra time and attention, etc. Therefore, such studies cannot fail to generate positive results, even if the tested intervention is a placebo. In such trials, it is impossible to attribute any outcome to the experimental treatment. This means that the positive results are known before the first patient has been enrolled; hence such trials are an unethical waste of resources which can only serve one purpose: to mislead us. It also means that the conclusions drawn above are not correct.
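To see why an ‘A+B versus B’ trial can hardly fail, here is a small simulation (all numbers are invented; the size of the non-specific attention effect, the sample size and the crude z-test are my own assumptions, and the sketch is not a re-analysis of the Jonas trial). The experimental treatment contributes nothing, yet the great majority of such trials come out ‘significant’ in favour of the add-on.

```python
# Minimal sketch of the 'A+B versus B' design: treatment A is assumed to
# be completely inert, but the add-on arm gets extra attention, which
# produces a small non-specific (placebo/attention) effect. Nothing in
# the design can separate A from that effect, so the trial looks positive.

import random
import statistics

random.seed(1)

def simulate_trial(n_per_arm=100, attention_effect=0.5):
    usual_care = [random.gauss(1.0, 1.0) for _ in range(n_per_arm)]                     # B alone
    a_plus_b = [random.gauss(1.0 + attention_effect, 1.0) for _ in range(n_per_arm)]    # A + B
    diff = statistics.mean(a_plus_b) - statistics.mean(usual_care)
    se = (2 / n_per_arm) ** 0.5          # crude z-test, SD assumed = 1
    return diff / se

positives = sum(1 for _ in range(1000) if simulate_trial() > 1.96)
print(f"'Significant' trials out of 1000, although A is inert: {positives}")
```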

An alternative and in my view more accurate conclusion would be this one: both AA and TCA had probably no effect; the improved headache-related QoL was due to the additional attention and expectation in the two experimental groups and is unrelated to the interventions tested in this study.

In our new book, MORE HARM THAN GOOD, we discuss how such trials are deceptive to the point of being unethical. Considering the prominence and experience of Wayne Jonas, the first author of this paper, such an obvious transgression is more than a little disappointing – I would argue that it amounts to overt scientific misconduct.

This announcement caught my eye:

START OF 1st QUOTE

Dr Patrick Vickers of the Northern Baja Gerson Centre, Mexico will deliver a two hour riveting lecture of ‘The American Experience of Dr Max Gerson, M.D.’

The lecture will present the indisputable science supporting the Gerson Therapy and its ability to reverse advanced disease.

Dr Vickers will explain the history and the politics of both medical and governmental authorities and their relentless attempts to surpress this information, keeping it from the world.

‘Dr Max Gerson, Censored for Curing Cancer’

“I see in Dr Max Gerson, one of the most eminent geniuses in medical history” Nobel Prize Laureate, Dr Albert Schweitzer.

END OF 1st QUOTE

Who is this man, Dr Patrick Vickers, I asked myself. And soon I found a CV in his own words:

START OF 2nd QUOTE

Dr. Patrick Vickers is the Director and Founder of the Northern Baja Gerson Clinic. His mission is to provide patients with the highest quality and standard of care available in the world today for the treatment of advanced (and non-advanced) degenerative disease. His dedication and commitment to the development of advanced protocols has led to the realization of exponentially greater results in healing disease. Dr. Vickers, along with his highly trained staff, provides patients with the education, support, and resources to achieve optimal health.

Dr. Patrick was born and raised outside of Milwaukee, Wisconsin. At the age of 11 years old, after witnessing a miraculous recovery from a chiropractic adjustment, Dr. Patrick’s passion for natural medicine was born.

Giving up careers in professional golf and entertainment, Dr. Patrick obtained his undergraduate degrees from the University of Wisconsin-Madison and Life University before going on to receive his doctorate in Chiropractic from New York Chiropractic College in 1997.

While a student at New York Chiropractic College (NYCC), Dr. Patrick befriended Charlotte Gerson, the last living daughter of Dr. Max Gerson, M.D., whom Nobel Peace Prize Winner Dr. Albert Schweitzer called “one of the most eminent geniuses in medical history.”

Dr. Gerson, murdered in 1959, remains the most censured doctor in the history of medicine as he was reversing virtually every degenerative disease known to man, including TERMINAL cancer…

END OF 2nd QUOTE

I have to admit, I find all this quite upsetting!

Not because the ticket for the lecture costs just over £27.

Not because exploitation of vulnerable patients by quacks always annoys me.

Not even because the announcement is probably unlawful, according to the UK ‘cancer act’.

I find it upsetting because there is simply no good evidence that the Gerson therapy does anything to cancer patients other than making them die earlier, poorer and more miserable (the fact that Prince Charles is a fan only makes it worse). And I do not believe that the lecture will present indisputable evidence to the contrary – lectures almost never do. Evidence has to be presented in peer-reviewed publications, independently confirmed and scrutinised. And, as far as I can see, Vickers has not authored a single peer-reviewed article [however, he thrives on anecdotal stories via YouTube (worth watching, if you want to hear pure BS)].

But mostly I find it upsetting because it is almost inevitable that some desperate cancer patients will believe ‘Dr’ Vickers. And if they do, they will have to pay a very high price.

Can conventional therapy (CT) be combined with herbal therapy (CT + H) in the management of Alzheimer’s disease (AD) to the benefit of patients? This was the question investigated by Chinese researchers in a recent retrospective cohort study funded by grants from China Ministry of Education, National Natural Science Foundation of China, Beijing Municipal Science and Technology Commission, and Beijing Municipal Commission of Health and Family Planning.

In total, 344 outpatients diagnosed with probable dementia due to AD, who had received either CT + H or CT alone, were included. The GRAPE formula was prescribed for AD patients after every visit according to TCM theory. It consisted mainly (what does ‘mainly’ mean as a description of a trial intervention?) of Ren shen (Panax ginseng, 10 g/d), Di huang (Rehmannia glutinosa, 30 g/d), Cang pu (Acorus tatarinowii, 10 g/d), Yuan zhi (Polygala tenuifolia, 10 g/d), Yin yanghuo (Epimedium brevicornu, 10 g/d), Shan zhuyu (Cornus officinalis, 10 g/d), Rou congrong (Cistanche deserticola, 10 g/d), Yu jin (Curcuma aromatica, 10 g/d), Dan shen (Salvia miltiorrhiza, 10 g/d), Dang gui (Angelica sinensis, 10 g/d), Tian ma (Gastrodia elata, 10 g/d), and Huang lian (Coptis chinensis, 10 g/d), supplied by Beijing Tcmages Pharmaceutical Co., LTD. The daily dose was taken in two portions, each dissolved in 150 ml of hot water. Cognitive function was quantified with the mini-mental state examination (MMSE) every 3 months for 24 months.

The results show that most of the patients were initially diagnosed with mild (MMSE = 21-26, n = 177) or moderate (MMSE = 10-20, n = 137) dementia. At 18 months, CT + H patients scored on average 1.76 points better than CT patients (P = 0.002), and at 24 months, they scored on average 2.52 points better (P < 0.001). At 24 months, the proportion of patients with improved cognitive function (ΔMMSE ≥ 0) was higher with CT + H than with CT alone (33.33% vs 7.69%, P = 0.020). Interestingly, patients with mild AD received the most robust benefit from CT + H therapy. The deterioration of cognitive function was largely prevented at 24 months (ΔMMSE = -0.06), a significant difference compared with CT alone (ΔMMSE = -2.66, P = 0.005).

 

The authors concluded that, compared to CT alone, CT + H significantly benefited AD patients. A symptomatic effect of CT + H was more pronounced with time. Cognitive decline was substantially decelerated in patients with moderate severity, while the cognitive function was largely stabilized in patients with mild severity over two years. These results imply that Chinese herbal medicines may provide an alternative and additive treatment for AD.

Conclusions like these render me speechless – well, almost speechless. This was nothing more than a retrospective chart analysis. It is not possible to draw causal conclusions from such data.

Why?

Because of a whole host of reasons. Most crucially, the CT + H patients were almost certainly a different, and therefore non-comparable, population to the CT patients. This flaw is so elementary that I have to ask: who are the reviewers who let such utter nonsense pass, and which journal would publish such rubbish? In fact, this paper can be used for teaching students why randomisation is essential if we aim to find out about cause and effect.
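To illustrate what self-selection can do in such a comparison, here is a deliberately simplistic simulation (all numbers are invented; it is not a re-analysis of the actual GRAPE data). The herbal add-on is assumed to do nothing at all, but the patients who opt for it happen to be milder cases who decline more slowly – and the add-on therefore ‘looks’ beneficial.

```python
# Hypothetical illustration of confounding by indication in a
# non-randomised cohort: the herbal add-on has, by construction, no
# effect, but patients choosing it are milder cases who decline slowly.

import random
import statistics

random.seed(7)

def mmse_change(annual_decline):
    # 24-month change in MMSE score with some individual variation
    return -annual_decline * 2 + random.gauss(0, 2)

# Self-selection: the add-on group happens to consist of milder cases.
ct_plus_h = [mmse_change(annual_decline=0.5) for _ in range(120)]  # milder patients
ct_alone = [mmse_change(annual_decline=1.5) for _ in range(200)]   # more severe patients

print("Mean MMSE change, CT + H:", round(statistics.mean(ct_plus_h), 2))
print("Mean MMSE change, CT:    ", round(statistics.mean(ct_alone), 2))
# The add-on 'looks' protective although it does nothing; randomisation
# would have balanced the decline rates between the two groups.
```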

Ahhh, it’s the ! I think the funders, editors, reviewers, and authors of this paper should all go and hide in shame.

A comprehensive review of the evidence relating to acupuncture entitled “The Acupuncture Evidence Project: A Comparative Literature Review” has just been published. The document aims to provide “an updated review of the literature with greater rigour than was possible in the past.” That sounds great! Let’s see just how rigorous the assessment is.

The review was conducted by John McDonald, who is no stranger to this blog; we have mentioned him here, for instance. To call him an unbiased, experienced, or expert researcher would, in my view, be more than a little optimistic.

The review was financed by the ‘Australian Acupuncture and Chinese Medicine Association Ltd.’ – call me a pessimist, but I do wonder whether this bodes well for the objectivity of the findings.

The research seems to have been assisted by a range of experts: Professor Caroline Smith, National Institute of Complementary Medicine, Western Sydney University, provided advice regarding evidence levels for assisted reproduction trials; Associate Professor Zhen Zheng, RMIT University identified the evidence levels for postoperative nausea and vomiting and post-operative pain; Dr Suzanne Cochrane, Western Sydney University; Associate Professor Chris Zaslawski, University of Technology Sydney; and Associate Professor Zhen Zheng, RMIT University provided prepublication commentary and advice. I fail to see anyone in this list who is an expert in EBM or who is even mildly critical of acupuncture and the many claims that are being made for it.

The review has not been published in a journal. This means, it has not been peer-reviewed. As we will see shortly, there is reason to doubt that it could pass the peer-review process of any serious journal.

There is an intriguing declaration of conflicts of interest: “Dr John McDonald was a co-author of three of the research papers referenced in this review. Professor Caroline Smith was a co-author of six of the research papers referenced in this review, and Associate Professor Zhen Zheng was co-author of one of the research papers in this review. There were no other conflicts of interest.” Did they all forget to mention that they earn their livelihoods through acupuncture? Or is that not a conflict?

I do love the disclaimer: “The authors and the Australian Acupuncture and Chinese Medicine Association Ltd (AACMA) give no warranty that the information contained in this publication and within any online updates available on the AACMA website are correct or complete.” I think they have a point here.

But let’s not be petty, let’s look at the actual review and how well it was done!

Systematic reviews must first formulate a precise research question, then disclose the exact methodology, reveal the results and finally discuss them critically. I am afraid, I miss almost all of these essential elements in the document in question.

The methods section includes statements which puzzle me (my comments are in bold):

  • A total of 136 systematic reviews, including 27 Cochrane systematic reviews, were included in this review, along with three network meta-analyses, nine reviews of reviews and 20 other reviews. Does that indicate that non-systematic reviews were included too? Yes, it does – but only if they reported a positive result, I presume.
  • Some of the included systematic reviews included studies which were not randomised controlled trials. In that case, they should not have been included at all, in my view.
  • … evidence from individual randomised controlled trials has been included occasionally where new high quality randomised trials may have changed the conclusions from the most recent systematic review. ‘Occasionally’ is the antithesis of systematic. This discloses the present review as being non-systematic and therefore worthless.
  • Some systematic reviews have not reported an assessment of quality of evidence of included trials, and due to time constraints, this review has not attempted to make such an assessment. Say no more!

It is almost needless to mention that the findings (presented in a host of hardly understandable tables) suggest that acupuncture is of proven or possible effectiveness/efficacy for a very wide array of conditions. It also goes without saying that there is no critical discussion, for instance, of the fact that most of the included evidence originated from China, and that it has been shown over and over again that Chinese acupuncture research never seems to produce negative results.

So, what might we conclude from all this?

I don’t know about you, but for me this new review is nothing but an orgy in deceit and wishful thinking!
