
scientific misconduct

Systematic reviews are widely considered to be the most reliable type of evidence for judging the effectiveness of therapeutic interventions. Such reviews should be focused on a well-defined research question and should identify, critically appraise and synthesise the totality of the high quality research evidence relevant to that question. Often it is possible to pool the data from the individual studies and thus generate a new overall numerical estimate of the existing evidence; in this case, we speak of a meta-analysis, a sub-category of the systematic review.
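To make the pooling step concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling, the simplest meta-analytic calculation, in Python; the trial effect sizes and standard errors below are invented for illustration and do not come from any real review:

```python
import math

# Hypothetical per-trial results as (effect size, standard error);
# these numbers are invented purely for illustration.
trials = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.10)]

# Fixed-effect (inverse-variance) pooling: each trial is weighted by
# the inverse of its variance, so more precise trials count for more.
weights = [1.0 / se ** 2 for _, se in trials]
pooled = sum(w * effect for (effect, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval under a normal approximation.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```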

One strength of systematic reviews is that they minimise selection and random biases by considering the totality of the evidence of a pre-defined nature and quality. A crucial precondition, however, is that the quality of the primary studies is critically assessed. If this is done well, the researchers will usually be able to determine how robust any given result is, and whether high quality trials generate findings similar to those of lower quality. If there is a discrepancy between the findings of rigorous and flimsy studies, it is obviously advisable to trust the former and discard the latter.

And this is where systematic reviews of alternative treatments can run into difficulties. For any given research question in this area, we usually have a paucity of primary studies, and many of the trials that do exist tend to be of low quality. This lack of high quality studies makes it all the more important to include a robust critical evaluation of the primary data. Not doing so renders the overall result of the review less than reliable; in fact, such a paper would not qualify as a systematic review at all. It would be a pseudo-systematic review, i.e. a review which pretends to be systematic but, in fact, is not. Such papers are a menace in that they can seriously mislead us, particularly if we are not familiar with the essential requirements of a reliable review.

This is precisely where some promoters of bogus treatments seem to see their opportunity to make their unproven therapy look as though it were evidence-based. Pseudo-systematic reviews can be manipulated to yield a desired outcome. In my last post, I showed that this can be done by including treatments which are effective, so that an ineffective therapy appears effective (“chiropractic is so much more than just spinal manipulation”). An even simpler method is to exclude from the review some of the studies that contradict one’s belief. Obviously, the review would then not comprise the totality of the available evidence. But, unless the reader bothers to do a considerable amount of research, he/she would be highly unlikely to notice. All one needs to do is to smuggle the paper past the peer-review process – hardly a difficult task, given the plethora of alternative medicine journals that bend over backwards to publish any rubbish, as long as it promotes alternative medicine.

Alternatively (or in addition), one can save oneself a lot of work by omitting the critical evaluation of the primary studies altogether. This method is increasingly popular in alternative medicine, and it is a fool-proof way of generating a false-positive overall result: as poor quality trials have a tendency to deliver false-positive results, a predominance of flimsy studies will almost inevitably produce a false-positive summary.
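The point is easy to demonstrate with a toy simulation; the bias size (0.5 standard deviations, standing in for the systematic error of an unblinded or otherwise flawed trial) and all other numbers are arbitrary assumptions, chosen only to show the mechanism:

```python
import random
import statistics

random.seed(1)

def looks_positive(n=30, bias=0.0):
    """Simulate one two-arm trial of a treatment with zero true effect.
    `bias` shifts the verum group's outcomes upwards, mimicking the
    systematic error of a poorly designed study. Returns True if the
    trial appears 'positive' on a crude two-sided z-test (p < 0.05)."""
    verum = [random.gauss(bias, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(verum) - statistics.mean(control)
    se = (statistics.variance(verum) / n + statistics.variance(control) / n) ** 0.5
    return abs(diff / se) > 1.96

# Rigorous trials (no bias) vs flimsy trials (bias of 0.5 SD):
for label, bias in [("rigorous", 0.0), ("flimsy", 0.5)]:
    positives = sum(looks_positive(bias=bias) for _ in range(1000))
    print(f"{label}: {positives / 10:.1f}% of trials look 'positive'")
```

The rigorous trials come out 'positive' at roughly the 5% rate expected by chance alone, while the biased ones do so far more often; pool enough of the latter without appraising their quality, and a false-positive overall result is guaranteed.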

A particularly notorious example of a pseudo-systematic review that used this as well as most of the other tricks for misleading the reader is the famous ‘systematic’ review by Bronfort et al. It was commissioned by the UK GENERAL CHIROPRACTIC COUNCIL after the chiropractic profession had got into trouble and was keen to defend the bogus treatments exposed by Simon Singh. Bronfort and his colleagues thus swiftly published (of course, in a chiro-journal) an all-encompassing review attempting to show that, at least for some conditions, chiropractic was effective. Its lengthy conclusions seemed encouraging: “Spinal manipulation/mobilization is effective in adults for: acute, subacute, and chronic low back pain; migraine and cervicogenic headache; cervicogenic dizziness; manipulation/mobilization is effective for several extremity joint conditions; and thoracic manipulation/mobilization is effective for acute/subacute neck pain. The evidence is inconclusive for cervical manipulation/mobilization alone for neck pain of any duration, and for manipulation/mobilization for mid back pain, sciatica, tension-type headache, coccydynia, temporomandibular joint disorders, fibromyalgia, premenstrual syndrome, and pneumonia in older adults. Spinal manipulation is not effective for asthma and dysmenorrhea when compared to sham manipulation, or for Stage 1 hypertension when added to an antihypertensive diet. In children, the evidence is inconclusive regarding the effectiveness for otitis media and enuresis, and it is not effective for infantile colic and asthma when compared to sham manipulation. Massage is effective in adults for chronic low back pain and chronic neck pain. The evidence is inconclusive for knee osteoarthritis, fibromyalgia, myofascial pain syndrome, migraine headache, and premenstrual syndrome. In children, the evidence is inconclusive for asthma and infantile colic.”

Chiropractors across the world cite this paper as evidence that chiropractic has at least some evidence base. What they omit to tell us (perhaps because they do not appreciate it themselves) is the fact that Bronfort et al:

  • failed to formulate a focussed research question,
  • invented their own categories of inconclusive findings,
  • included all sorts of studies which had nothing to do with chiropractic,
  • and did not assess the quality of the primary studies included in their review.

If, for a certain condition, three trials were included, for instance, two of which were positive but of poor quality and one of which was negative but of good quality, the authors would conclude that, overall, there is sound evidence.
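In code, the contrast between naive vote counting and a quality-aware reading of such a trio of trials is stark; the three hypothetical entries below simply mirror the example just given:

```python
# Hypothetical trials mirroring the example above: (result, quality).
trials = [("positive", "poor"), ("positive", "poor"), ("negative", "good")]

# Naive vote counting ignores study quality entirely:
votes = sum(1 if result == "positive" else -1 for result, _ in trials)
print("vote count says:", "effective" if votes > 0 else "not effective")

# A critical appraisal discards the unreliable trials first:
reliable = [result for result, quality in trials if quality == "good"]
print("quality-aware reading:", reliable)  # -> ['negative']
```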

Bronfort himself is, of course, more than likely to know all this (he learnt his trade with an excellent Dutch research team and has published several high quality reviews) – but his readers mostly don’t. And for chiropractors, this ‘systematic’ review is now considered to be the most reliable evidence in their field.

The efficacy or effectiveness of medical interventions is, of course, best tested in clinical trials. The principle of a clinical trial is fairly simple: typically, a group of patients is divided (preferably at random) into two subgroups; one (the ‘verum’ group) is treated with the experimental treatment and the other (the ‘control’ group) with another option (often a placebo), and the eventual outcomes of the two groups are compared. If done well, such studies are able to exclude biases and confounding factors such that their findings allow causal inference. In other words, they can tell us whether an outcome was caused by the intervention per se or by some other factor such as the natural history of the disease, regression towards the mean etc.

A clinical trial is a research tool for testing hypotheses; strictly speaking, it tests the ‘null-hypothesis’: “the experimental treatment generates the same outcomes as the treatment of the control group”. If the trial shows no difference between the outcomes of the two groups, the null-hypothesis stands (strictly speaking, it is not confirmed but merely fails to be rejected). In this case, we commonly speak of a negative result. If the experimental treatment was better than the control treatment, the null-hypothesis is rejected, and we commonly speak of a positive result. In other words, clinical trials can only generate positive or negative results: the null-hypothesis is either rejected or it is not – there are no grey tones between the black of a negative and the white of a positive study.
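For readers who like to see the mechanics, here is a minimal sketch of such a null-hypothesis test on simulated data; the group sizes, distributions and the use of a plain t-test are my assumptions, not features of any particular trial:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated outcomes for a hypothetical trial of a treatment with
# no true effect: both groups are drawn from the same distribution,
# so the null-hypothesis is actually true here.
verum = rng.normal(loc=0.0, scale=1.0, size=50)
control = rng.normal(loc=0.0, scale=1.0, size=50)

# Two-sample t-test of the null-hypothesis that both groups have
# the same mean outcome. A small p-value leads us to reject the
# null (a 'positive' trial); otherwise we fail to reject it.
t_stat, p_value = stats.ttest_ind(verum, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("positive" if p_value < 0.05 else "negative (null not rejected)")
```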

For enthusiasts of alternative medicine, this can create a dilemma, particularly if there are lots of published studies with negative results. In this case, the totality of the available trial evidence is negative which means the treatment in question cannot be characterised as effective. It goes without saying that such an overall conclusion rubs the proponents of that therapy the wrong way. Consequently, they might look for ways to avoid this scenario.

One fairly obvious way of achieving this aim is simply to re-categorise the results. What if we invented a new category? What if we called some of the negative studies by a different name? What about NON-CONCLUSIVE?

That would be brilliant, wouldn’t it? We might end up with a simple statistic in which the majority of the evidence is, after all, positive. And this, of course, would give the impression that the ineffective treatment in question is effective!

How exactly do we do this? We continue to call positive studies POSITIVE; we then call studies where the experimental treatment generated worse results than the control treatment (usually a placebo) NEGATIVE; and finally we call those studies where the experimental treatment created outcomes which were not different from placebo NON-CONCLUSIVE.
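Spelled out in code, the whole manoeuvre is a single relabelling step; the ten hypothetical trial results below are invented to show how the headline tally flips:

```python
def honest_label(outcome):
    """Binary reading: the null-hypothesis is either rejected or not."""
    return "positive" if outcome == "better" else "negative"

def trick_label(outcome):
    """The 'non-conclusive' trick: only trials in which the treatment
    did worse than the control are still called negative."""
    return {"better": "positive",
            "worse": "negative",
            "no difference": "non-conclusive"}[outcome]

# A hypothetical evidence base dominated by 'no difference' results:
results = ["better"] * 4 + ["worse"] * 1 + ["no difference"] * 5

for label in (honest_label, trick_label):
    tally = {}
    for outcome in results:
        tally[label(outcome)] = tally.get(label(outcome), 0) + 1
    print(label.__name__, tally)
# honest_label -> 4 positive, 6 negative: a mostly negative picture
# trick_label  -> 4 positive, 1 negative, 5 non-conclusive:
#                 suddenly 'positive' is the largest single category
```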

In the realm of alternative medicine, this ‘non-conclusive result’ method has recently become incredibly popular. Take homeopathy, for instance. The Faculty of Homeopathy proudly claim the following about clinical trials of homeopathy: “Up to the end of 2011, there have been 164 peer-reviewed papers reporting randomised controlled trials (RCTs) in homeopathy. This represents research in 89 different medical conditions. Of those 164 RCT papers, 71 (43%) were positive, 9 (6%) negative and 80 (49%) non-conclusive.”

This misleading nonsense was, of course, warmly received by homeopaths. The British Homeopathic Association, like many other organisations and individuals with an axe to grind, lapped up the message and promptly repeated it: “The body of evidence that exists shows that much more investigation is required – 43% of all the randomised controlled trials carried out have been positive, 6% negative and 49% inconclusive.”

Let’s be clear about what has happened here: the true figures show that 43% of these studies (mostly of poor quality) suggested a positive result for homeopathy, while 57% (on average those of better quality) failed to do so. In other words, the majority of this evidence is negative. If we conducted a proper systematic review of this body of evidence, we would, of course, have to account for the quality of each study, and in this case we would have to conclude that homeopathy is not supported by sound evidence of effectiveness.
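The re-tally is worth spelling out; note in passing that the Faculty’s three counts (71 + 9 + 80 = 160) do not even add up to the quoted total of 164 papers:

```python
positive, negative, non_conclusive = 71, 9, 80
classified = positive + negative + non_conclusive  # 160 of the quoted 164

# 'No different from placebo' means the null-hypothesis was not
# rejected, so these trials belong in the negative column:
truly_negative = negative + non_conclusive
print(f"positive: {positive}/{classified} = {positive / classified:.0%}")
print(f"negative: {truly_negative}/{classified} = {truly_negative / classified:.0%}")
# On the quoted base of 164 papers, 71 positives round to 43%; the
# 57% above is simply everything that is not positive (100% - 43%).
```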

The little trick of applying the ‘NON-CONCLUSIVE’ method has thus turned this overall result upside down: black has become white! No wonder that it is so popular with proponents of all sorts of bogus treatments.

Whenever a new trial of an alternative intervention emerges which fails to confirm the wishful thinking of the proponents of that therapy, the world of alternative medicine is in turmoil. What can be done about yet another piece of unfavourable evidence? The easiest solution would be to ignore it, of course – and this is precisely what is often tried. But this tactic usually proves to be unsatisfactory; it does not neutralise the new evidence, and each time someone brings it up, one has to stick one’s head back into the sand. Rather than denying its existence, it would be preferable to have a tool which invalidates the study in question once and for all.

The ‘fatal flaw’ solution is simpler than anticipated! Alternative treatments are ‘very special’, and this notion must be emphasised, blown up beyond all proportion and used cleverly to discredit studies with unfavourable outcomes: the trick is simply to claim that studies with unfavourable results have a ‘fatal flaw’ in the way the alternative treatment was applied. As only the experts in the ‘very special’ treatment in question are able to judge the adequacy of their therapy, nobody is allowed to doubt their verdict.

Take acupuncture, for instance; it is an ancient ‘art’ which only the very best will ever master – at least that is what we are being told. So, all the proponents need to do in order to invalidate a trial is to read the methods section of the paper in full detail and state ‘ex cathedra’ that the way acupuncture was done in this particular study was completely ridiculous. The wrong points were stimulated, or the right points were stimulated but not long enough [or too long], or the needling was too deep [or too shallow], or the type of stimulus employed was not as recommended by TCM experts, or the contra-indications were not observed etc. etc.

As nobody can tell correct acupuncture from incorrect acupuncture, this ‘fatal flaw’ method is fairly fool-proof. It is also ever so simple: acupuncture-fans do not even need to study the paper hard to find the ‘fatal flaw’; they only have to look at the result of the study. If it was favourable, the treatment was obviously done perfectly by highly experienced experts; if it was unfavourable, the therapists clearly must have been morons who picked up their acupuncture skills in a single weekend course. The reasons for this judgement can always be found or, if all else fails, invented.

And the end-result of the ‘fatal flaw’ method is most satisfactory; what is more, it can be applied to all alternative therapies – homeopathy, herbal medicine, reflexology, Reiki healing, colonic irrigation… the method works for all of them! Better still, the ‘fatal flaw’ method is adaptable to other aspects of scientific investigation, such that it fits every conceivable circumstance.

An article documenting the ‘fatal flaw’ has to be published, of course – but this is no problem! There are dozens of dodgy alternative medicine journals which are only too keen to print even the most far-fetched nonsense, as long as it promotes alternative medicine in some way. Once this paper is published, the proponents of the therapy in question have a comfortable default position to rely on each time someone cites the unfavourable study: “WHAT, NOT THAT STUDY AGAIN! THE TREATMENT HAS BEEN SHOWN TO BE ALL WRONG. NOBODY CAN EXPECT GOOD RESULTS FROM A THERAPY THAT WAS NOT CORRECTLY ADMINISTERED. IF YOU DON’T HAVE BETTER STUDIES TO SUPPORT YOUR ARGUMENTS, YOU BETTER SHUT UP.”

There might, in fact, be better studies – but chances are that the ‘other side’ has already documented a ‘fatal flaw’ in them too.

Cancer patients are bombarded with information about supplements which allegedly are effective for their condition. I estimate that 99.99% of this information is unreliable and much of it is outright dangerous. So, there is an urgent need for trustworthy, objective information. But which source can we trust?

The authors of a recent article in ‘INTEGRATIVE CANCER THERAPIES’ (a journal which describes itself as “the first journal to spearhead and focus on a new and growing movement in cancer treatment. The journal emphasizes scientific understanding of alternative medicine and traditional medicine therapies, and their responsible integration with conventional health care. Integrative care includes therapeutic interventions in diet, lifestyle, exercise, stress care, and nutritional supplements, as well as experimental vaccines, chrono-chemotherapy, and other advanced treatments”) review the issue of dietary supplements in the treatment of cancer patients. They claim that the optimal approach is to discuss both the facts and the uncertainty with the patient, in order to reach a mutually informed decision. This sounds promising, and we might thus trust them to deliver something reliable.

In order to enable doctors and other health care professionals to have such discussion, the authors then report on the work of the ‘Clinical Practice Committee’ of ‘The Society of Integrative Oncology’. This panel undertook the challenge of providing basic information to physicians who wish to discuss these issues with their patients. A list of supplements that have the best suggestions of benefit was constructed by leading researchers and clinicians who have experience in using these supplements:

  1. curcumin,
  2. glutamine,
  3. vitamin D,
  4. maitake mushrooms,
  5. fish oil,
  6. green tea,
  7. milk thistle,
  8. astragalus,
  9. melatonin,
  10. probiotics.

The authors claim that their review includes basic information on each supplement, such as evidence on effectiveness and clinical trials, adverse effects, and interactions with medications. The information was constructed to provide an up-to-date base of knowledge, so that physicians and other health care providers would be aware of the supplements and be able to discuss realistic expectations and potential benefits and risks (my emphasis).

At first glance, this task looks ambitious but laudable; however, after studying the paper in some detail, I must admit that I have considerable problems taking it seriously – and here is why.

The first question I ask myself when reading the abstract is: who are these “leading researchers and clinicians”? Surely such a consensus exercise crucially depends on who is being consulted. The article itself does not reveal who these experts are, merely that they are all members of the ‘Society of Integrative Oncology’. A little research reveals this organisation to be devoted to integrating all sorts of alternative therapies into cancer care. If we assume that the experts are identical with the authors of the review, it should be pointed out that most of them are proponents of alternative medicine. This lack of critical input seems more than a little disconcerting.

My next questions are: How did they identify the 10 supplements and how did they evaluate the evidence for or against them? The article informs us that a 5-step procedure was employed:

1. Each clinician in this project was requested to construct a list of supplements that they tend to use frequently in their practice.

2. An initial list of close to 25 supplements was constructed. This list included supplements that have suggestions of some possible benefit and are likely to carry minimal risk in cancer care.

3. From that long list, the group agreed on the 10 leading supplements that have the best suggestions of benefit.

4. Each participant selected 1 to 2 supplements in whose use they have interest and experience, and wrote a manuscript on the selected supplement in a uniform and agreed format. The agreed format was constructed to provide a base of knowledge, so that physicians and other health care providers would be able to discuss realistic expectations and potential benefits and risks with patients and families who seek that kind of information.

5. The revised document was circulated among participants for revisions and comments.

This method might look fine to proponents of alternative medicine, but, from a scientific point of view, it is seriously wanting. Essentially, the experts in favour of a given supplement were each asked to write a report justifying their own preference. This method is not just open to bias; it formally invites bias.

Predictably, then, the reviews of the 10 chosen supplements are woefully inadequate:

  • there is no evidence of a systematic approach, and the cited evidence is demonstrably cherry-picked;
  • there is a complete lack of critical analysis;
  • for several supplements, clinical data are virtually absent, without the authors finding this embarrassing void a reason for concern;
  • dosage recommendations are often vague and naïve, to say the least (for instance, for milk thistle: 200 to 400 mg per day, without any indication of what this weight range refers to – the fresh plant, dried powder, extract…?);
  • safety data are incomplete, and nobody seems to mind that supplements are not subject to systematic post-marketing surveillance;
  • the text is full of naïve thinking and contradictions (e.g. “There are no reported side effects of the mushroom extracts or the Maitake D-fraction. As Maitake may lower blood sugar, it should be used with caution in patients with diabetes”);
  • evidence suggesting that a given supplement might reduce the risk of cancer is presented as though it meant the supplement is an effective treatment for an existing cancer;
  • cancer is usually treated as though it were one disease entity, without any differentiation between cancer types.

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. But I do wonder: isn’t being in favour of integrating half-baked nonsense into cancer care, and being selected for one’s favourable attitude towards certain supplements, already a conflict of interest?

In any case, the review is, in my view, not of sufficient rigour to form the basis for well-informed discussions with patients. The authors of the review cite a guideline by the ‘Society of Integrative Oncology’ for the use of supplements in cancer care which states: “For cancer patients who wish to use nutritional supplements, including botanicals for purported antitumor effects, it is recommended that they consult a trained professional. During the consultation, the professional should provide support, discuss realistic expectations, and explore potential benefits and risks. It is recommended that use of those agents occur only in the context of clinical trials, recognized nutritional guidelines, clinical evaluation of the risk/benefit ratio based on available evidence, and close monitoring of adverse effects.” It seems to me that, with this review, the authors have failed to adhere to their own guideline.

Criticising the work of others is perhaps not very difficult; doing a better job usually is. So, can I offer anything better than the review criticised above? The answer is YES. Our initiative ‘CAM cancer’ provides up-to-date, concise and evidence-based systematic reviews of many supplements and other alternative treatments that cancer patients are likely to hear about. Its conclusions are not nearly as uncritically positive as those of the article in ‘INTEGRATIVE CANCER THERAPIES’.

I happen to believe that it is important for cancer patients to have access to reliable information and that it is unethical to mislead them with biased accounts about the value of any treatment.

One of the perks of researching alternative medicine and writing a blog about it is that one rarely runs out of good laughs. In perfect accordance with ERNST’S LAW, I have recently been entertained, amused, even thrilled by a flurry of ad hominem attacks, most of which are true knee-slappers. I would like to take this occasion to thank my assailants for their imagination and tenacity. Most days, these ad hominem attacks really do make my day.

I can only hope they will continue to make my days a little more joyous. My fear, however, is that they might, one day, run out of material. Even today, their claims are somewhat repetitive:

  • I am not qualified
  • I only speak tosh
  • I do not understand science
  • I never did any ‘real’ research
  • Exeter Uni fired me
  • I have been caught red-handed (not quite sure at what)
  • I am on BIG PHARMA’s payroll
  • I faked my research papers

Come on, you feeble-minded fantasists, surely you can do better! Isn’t it time to come up with something new?

Yes, I know, innovation is not an easy task. The best ad hominem attacks are, of course, always based on a kernel of truth. In that respect, the ones that have been repeated ad nauseam are sadly wanting. Therefore I have decided to provide all would-be attackers with some true and relevant facts from my life. These should enable them to invent further myths and use them as ammunition against me.

Sounds like fun? Here we go:

My grandfather and my father were both doctors

This part of my family history could be spun in all sorts of intriguing ways. For instance, one could make up a nice story about how I, even as a child, was brain-washed to defend the medical profession at all cost from the onslaught of non-medical healers.

Our family physician was a prominent homeopath

Ahhhh, did he perhaps mistreat me and start me off on my crusade against homeopathy? Surely, there must be a nice ad hominem attack in here!

I studied psychology at Munich but did not finish it

Did I give up psychology because I discovered a manic obsession or other character flaw deeply hidden in my soul?

I then studied medicine (also in Munich) and wrote an MD thesis in the area of blood clotting

No doubt this is pure invention. Where is the proof of my qualifications? Are the data in my thesis real or invented?

My 1st job as a junior doctor was in a homeopathic hospital in Munich

Yes, but why did I leave? Surely they found out about me and fired me.

I had hands-on training in several forms of alternative medicine, including homeopathy

Easy to say, but where is the proof?

I moved to London, where I worked at St George’s Hospital conducting research in blood rheology

Another invention? Where are the published papers to document this?

I went back to Munich University, where I continued this line of research and was awarded a PhD

Another thesis? Again with dodgy data? Where can one see this document?

I became Professor of Rehabilitation Medicine, first at Hannover Medical School and later in Vienna

How did that happen? Did I perhaps bribe the appointment panels?

In 1993, I was appointed to the Chair in Complementary Medicine at Exeter university

Yes, we all know that; but why did I not direct my efforts towards promoting alternative medicine?

In Exeter, together with a team of ~20 colleagues, I published >1000 papers on alternative medicine, more than anyone else in that field

Impossible! This number clearly shows that many of these articles are fakes or plagiaries.

My H-Index is currently >80

Same as above.

In 2012, I became Emeritus Professor of the University of Exeter

Isn’t ‘emeritus’ the Latin word for ‘dishonourable discharge’?

I HOPE I CAN RELY ON ALL OF MY AD HOMINEM ATTACKERS TO USE THIS INFORMATION AND RENDER THE ASSAULTS MORE DIVERSE, REAL AND INTERESTING.

According to its authors, this RCT was aimed at investigating 1) the specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression. The second research question, in particular, is intriguing, I think – so let’s have a closer look at this trial.

The study was designed as a randomized, partially double-blind, placebo-controlled, four-armed, 2×2 factorial trial with a 6-week study duration. A total of 44 patients were randomized (2:1:2:1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment and was thus underpowered for the pre-planned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance. The mean difference in the Hamilton-D score after 6 weeks was 2.0 (95% CI -1.2 to 5.2) for Q-potencies vs. placebo, and -3.1 (95% CI -5.9 to -0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant differences between homeopathic Q-potencies and placebo, or between homeopathic and conventional case taking, were observed. The frequency of adverse events was comparable across all groups.
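To see what the reported confidence intervals do and do not allow us to say, a tiny helper for reading them may be useful; the intervals are copied from the abstract above, and it should be remembered that both comparisons stem from an underpowered, exploratory analysis:

```python
def ci_verdict(label, low, high):
    """A 95% CI for a mean difference that includes zero is compatible
    with 'no effect'; one that excludes zero shows a nominal difference
    (which, in an underpowered exploratory analysis, proves little)."""
    if low < 0.0 < high:
        print(f"{label}: [{low}, {high}] -> no demonstrable difference")
    else:
        print(f"{label}: [{low}, {high}] -> nominal difference (exploratory only)")

# Hamilton-D mean differences after 6 weeks, as reported:
ci_verdict("Q-potencies vs placebo", -1.2, 5.2)
ci_verdict("case history I vs case history II", -5.9, -0.2)
```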

The conclusions were remarkable: “although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting”.

Alright, the authors encountered problems in recruiting enough patients and they therefore decided to stop the trial early. This sort of thing happens. Most researchers would then not publish any data at all. This team, however, did publish a report, and the decision to do so might be perfectly fine: other investigators might learn from the problems which led to early termination of the study.

But why do they conclude that the results were INCONCLUSIVE? I think the results were not inconclusive but non-existent; there were no results to report other than those related to the recruitment problems. And even if one insists on presenting outcome data as an exploratory analysis, one cannot honestly call them INCONCLUSIVE; all one might state in this case is that the results failed to show an effect of the remedy or the consultation. This is far less favourable for homeopathy than stating that the results were INCONCLUSIVE.

And why on earth do the authors conclude “we cannot recommend undertaking a further trial addressing this question in a similar setting”? This does not make the slightest sense to me. If the trialists encountered recruitment problems, others might find ways of overcoming them. The research question asking whether the effects of an extensive homeopathic case taking differ from those of a shorter conventional one seems important. If answered accurately, it could disentangle much of the confusion that surrounds clinical trials of homeopathy.

I have repeatedly commented on the odd conclusions drawn by proponents of alternative medicine on the basis of data that did not quite fulfil their expectations, and I often ask myself at what point this ‘prettification’ of the results via false positive conclusions crosses the line to scientific misconduct. My theory is that these conclusions appear odd to those capable of critical analysis because the authors bend over backwards in order to conclude more positively than the data would seem to permit. If we see it this way, such conclusions might even prove useful as a fairly sensitive ‘bullshit-detector’.

Acupressure is a variation of acupuncture: instead of sticking needles into the skin, pressure is applied over ‘acupuncture points’, which is supposed to provide a stimulus similar to needling. The effects of the two treatments should therefore, in theory, be similar.

Acupressure could have several advantages over acupuncture:

  • it can be used for self-treatment
  • it is suitable for people with needle-phobia
  • it is painless
  • it is not invasive
  • it carries fewer risks
  • it could be cheaper

But is acupressure really effective? What do the trial data tell us? Our own systematic review concluded that the effectiveness of acupressure is currently not well documented for any condition. But now there is a new study which might change this negative verdict.

The primary objective of this 3-armed RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care alone in the management of chemotherapy-induced nausea. 500 patients from outpatient chemotherapy clinics in three regions in the UK involving 14 different cancer units/centres were randomised to the wristband arm, the sham wristband arm and the standard care only arm. Participants were chemotherapy-naive cancer patients receiving chemotherapy of low, moderate and high emetogenic risk. The experimental group were given acupressure wristbands pressing the P6 point (anterior surface of the forearm). The Rhodes Index for Nausea/Vomiting, the Multinational Association of Supportive Care in Cancer (MASCC) Antiemesis Tool and the Functional Assessment of Cancer Therapy General (FACT-G) served as outcome measures. At baseline, participants completed measures of anxiety/depression, nausea/vomiting expectation and expectations from using the wristbands.

Data were available for 361 participants for the primary outcome. The primary outcome analysis (nausea in cycle 1) revealed no statistically significant differences between the three arms. The median nausea experience in patients using wristbands (both real and sham ones) was somewhat lower than that in the anti-emetics only group (median nausea experience scores for the four cycles: standard care arm 1.43, 1.71, 1.14, 1.14; sham acupressure arm 0.57, 0.71, 0.71, 0.43; acupressure arm 1.00, 0.93, 0.43, 0). Women responded more favourably to the use of sham acupressure wristbands than men (odds ratio 0.35 for men and 2.02 for women in the sham acupressure group; 1.27 for men and 1.17 for women in the acupressure group). No significant differences were detected in relation to vomiting outcomes, anxiety and quality of life. Some transient adverse effects were reported, including tightness in the area of the wristbands, feeling uncomfortable when wearing them and minor swelling in the wristband area (n = 6). There were no statistically significant differences in the costs associated with the use of real acupressure bands.

Twenty-six subjects took part in qualitative interviews. Participants perceived the wristbands (both real and sham) as effective and helpful in managing their nausea during chemotherapy.

The authors concluded that there were no statistically significant differences between the three arms in terms of nausea, vomiting and quality of life, although apparent resource use was less in both the real acupressure arm and the sham acupressure arm compared with standard care only; therefore, no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting. However, the study provided encouraging evidence in relation to an improved nausea experience and some indications of possible cost savings to warrant further consideration of acupressure both in practice and in further clinical trials.

I could argue about several of the methodological details of this study. But I resist the temptation in order to focus on just one single point which I find important and which has implications beyond the realm of acupressure.

Why on earth do the authors conclude that no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting? The stated aim of this RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care. The results failed to show significant differences in the primary outcome measure; consequently, the conclusion cannot be “unclear”, it has to be that ACUPRESSURE WRIST BANDS ARE NOT MORE EFFECTIVE THAN SHAM ACUPRESSURE WRIST BANDS AS AN ADJUNCT TO ANTI-EMETIC DRUG TREATMENT (or something to that effect).

As long as RCTs of alternative therapies are run by evangelical believers in the respective therapy, we are bound to encounter this lamentable phenomenon of white-washing negative findings with an inadequate conclusion. In my view, this is not research or science; it is pseudo-research or pseudo-science. And it is much more than a nuisance or a trivial matter: it is a waste of research funds and of patients’ good will, and it has reached a point where people will lose trust in alternative medicine research. Someone should really do a systematic study to identify the research teams that regularly commit such scientific misconduct and ensure that they are cut off from public funding and support.
