Cancer patients are bombarded with information about supplements that are allegedly effective for their condition. I estimate that 99.99% of this information is unreliable, and much of it is outright dangerous. So there is an urgent need for trustworthy, objective information. But which source can we trust?
The authors of a recent article in ‘INTEGRATIVE CANCER THERAPIES’ (the first journal to spearhead and focus on a new and growing movement in cancer treatment. The journal emphasizes scientific understanding of alternative medicine and traditional medicine therapies, and their responsible integration with conventional health care. Integrative care includes therapeutic interventions in diet, lifestyle, exercise, stress care, and nutritional supplements, as well as experimental vaccines, chrono-chemotherapy, and other advanced treatments) review the issue of dietary supplements in the treatment of cancer patients. They claim that the optimal approach is to discuss both the facts and the uncertainty with the patient, in order to reach a mutually informed decision. This sounds promising, and we might thus trust them to deliver something reliable.
In order to enable doctors and other health care professionals to have such discussions, the authors then report on the work of the ‘Clinical Practice Committee’ of ‘The Society of Integrative Oncology’. This panel undertook the challenge of providing basic information to physicians who wish to discuss these issues with their patients. A list of supplements that have the best suggestions of benefit was constructed by “leading researchers and clinicians” who have experience in using these supplements:
- vitamin D,
- maitake mushrooms,
- fish oil,
- green tea,
- milk thistle,
The authors claim that their review includes basic information on each supplement, such as evidence on effectiveness and clinical trials, adverse effects, and interactions with medications. The information was constructed to provide an up-to-date base of knowledge, so that physicians and other health care providers would be aware of the supplements and be able to discuss realistic expectations and potential benefits and risks (my emphasis).
At first glance, this task looks ambitious but laudable; however, after studying the paper in some detail, I must admit that I have considerable problems taking it seriously – and here is why.
The first question I ask myself when reading the abstract is: Who are these “leading researchers and clinicians”? Surely such a consensus exercise crucially depends on who is being consulted. The article itself does not reveal who these experts are, merely that they are all members of the ‘Society of Integrative Oncology’. A little research reveals this organisation to be devoted to integrating all sorts of alternative therapies into cancer care. If we assume that the experts are identical with the authors of the review, one should point out that most of them are proponents of alternative medicine. This lack of critical input seems more than a little disconcerting.
My next questions are: How did they identify the 10 supplements and how did they evaluate the evidence for or against them? The article informs us that a 5-step procedure was employed:
1. Each clinician in this project was requested to construct a list of supplements that they tend to use frequently in their practice.
2. An initial list of close to 25 supplements was constructed. This list included supplements that have suggestions of some possible benefit and are likely to carry minimal risk in cancer care.
3. From that long list, the group agreed on the 10 leading supplements that have the best suggestions of benefit.
4. Each participant selected 1 to 2 supplements in whose use they have an interest and experience, and wrote a manuscript related to the selected supplement in a uniform and agreed format. The agreed format was constructed to provide a base of knowledge, so physicians and other health care providers would be able to discuss realistic expectations and potential benefits and risks with patients and families that seek that kind of information.
5. The revised document was circulated among participants for revisions and comments.
This method might look fine to proponents of alternative medicine, but from a scientific point of view, it is seriously wanting. Essentially, they asked those experts who are in favour of a given supplement to write a report justifying their preference. This method is not just open to bias; it formally invites bias.
Predictably then, the reviews of the 10 chosen supplements are woefully inadequate:
- there is no evidence of a systematic approach;
- the cited evidence is demonstrably cherry-picked;
- there is a complete lack of critical analysis;
- for several supplements, clinical data are virtually absent, yet the authors do not find this embarrassing void a reason for concern;
- dosage recommendations are often vague and naïve, to say the least (for instance, for milk thistle: 200 to 400 mg per day – without any indication of what this weight range refers to: the fresh plant, dried powder, an extract…?);
- safety data are incomplete, and nobody seems to mind that supplements are not subject to systematic post-marketing surveillance;
- the text is full of naïve thinking and contradictions (e.g. “There are no reported side effects of the mushroom extracts or the Maitake D-fraction. As Maitake may lower blood sugar, it should be used with caution in patients with diabetes“);
- evidence suggesting that a given supplement might reduce the risk of cancer is presented as though this means it is an effective treatment for an existing cancer;
- cancer is usually treated as though it were one disease entity, without any differentiation between cancer types.
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. But I do wonder, isn’t being in favour of integrating half-baked nonsense into cancer care and being selected for one’s favourable attitude towards certain supplements already a conflict of interest?
In any case, the review is in my view not of sufficient rigor to form the basis for well-informed discussions with patients. The authors of the review cite a guideline by the ‘Society of Integrative Oncology’ for the use of supplements in cancer care which states: “For cancer patients who wish to use nutritional supplements, including botanicals for purported antitumor effects, it is recommended that they consult a trained professional. During the consultation, the professional should provide support, discuss realistic expectations, and explore potential benefits and risks. It is recommended that use of those agents occur only in the context of clinical trials, recognized nutritional guidelines, clinical evaluation of the risk/benefit ratio based on available evidence, and close monitoring of adverse effects.” It seems to me that, with this review, the authors have not adhered to their own guideline.
Criticising the work of others is perhaps not very difficult; doing a better job usually is. So, can I offer anything that is better than the review criticised above? The answer is YES. Our initiative ‘CAM cancer’ provides up-to-date, concise and evidence-based systematic reviews of many supplements and other alternative treatments that cancer patients are likely to hear about. Their conclusions are not nearly as uncritically positive as those of the article in ‘INTEGRATIVE CANCER THERAPIES’.
I happen to believe that it is important for cancer patients to have access to reliable information and that it is unethical to mislead them with biased accounts about the value of any treatment.
One of the perks of researching alternative medicine and writing a blog about it is that one rarely runs out of good laughs. In perfect accordance with ERNST’S LAW, I have recently been entertained, amused, even thrilled by a flurry of ad hominem attacks most of which are true knee-slappers. I would like to take this occasion to thank my assailants for their fantasy and tenacity. Most days, these ad hominem attacks really do make my day.
I can only hope they will continue to make my days a little more joyous. My fear, however, is that they might, one day, run out of material. Even today, their claims are somewhat repetitive:
- I am not qualified
- I only speak tosh
- I do not understand science
- I never did any ‘real’ research
- Exeter Uni fired me
- I have been caught red-handed (not quite sure at what)
- I am on BIG PHARMA’s payroll
- I faked my research papers
Come on, you feeble-minded fantasists must be able to do better! Isn’t it time to bring something new?
Yes, I know, innovation is not an easy task. The best ad hominem attacks are, of course, always based on a kernel of truth. In that respect, the ones that have been repeated ad nauseam are sadly wanting. Therefore I have decided to provide all would-be attackers with some true and relevant facts from my life. These should enable them to invent further myths and use them as ammunition against me.
Sounds like fun? Here we go:
Both my grandfather and my father were doctors
This part of my family history could be spun in all sorts of intriguing ways. For instance, one could make up a nice story about how I, even as a child, was brain-washed to defend the medical profession at all cost from the onslaught of non-medical healers.
Our family physician was a prominent homeopath
Ahhhh, did he perhaps mistreat me and start me off on my crusade against homeopathy? Surely, there must be a nice ad hominem attack in here!
I studied psychology at Munich but did not finish it
Did I give up psychology because I discovered a manic obsession or other character flaw deeply hidden in my soul?
I then studied medicine (also in Munich) and wrote an MD thesis in the area of blood clotting
No doubt this is pure invention. Where are the proofs of my qualifications? Are the data in my thesis real or invented?
My 1st job as a junior doctor was in a homeopathic hospital in Munich
Yes, but why did I leave? Surely they found out about me and fired me.
I had hands-on training in several forms of alternative medicine, including homeopathy
Easy to say, but where are the proofs?
I moved to London where I worked in St George’s Hospital conducting research in blood rheology
Another invention? Where are the published papers to document this?
I went back to Munich university where I continued this line of research and was awarded a PhD
Another thesis? Again with dodgy data? Where can one see this document?
I became Professor of Rehabilitation Medicine, first at Hannover Medical School and later in Vienna
How did that happen? Did I perhaps bribe the appointment panels?
In 1993, I was appointed to the Chair in Complementary Medicine at Exeter university
Yes, we all know that; but why did I not direct my efforts towards promoting alternative medicine?
In Exeter, together with a team of ~20 colleagues, we published > 1000 papers on alternative medicine, more than anyone else in that field
Impossible! This number clearly shows that many of these articles are fakes or plagiaries.
My H-Index is currently >80
Same as above.
In 2012, I became Emeritus Professor of the University of Exeter
Isn’t ’emeritus’ the Latin word for ‘dishonourable discharge’?
I HOPE I CAN RELY ON ALL OF MY AD HOMINEM ATTACKERS TO USE THIS INFORMATION AND RENDER THE ASSAULTS MORE DIVERSE, REAL AND INTERESTING.
According to its authors, this RCT was aimed at investigating the 1) specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression. In particular the second research question is intriguing, I think – so let’s have a closer look at this trial.
The study was designed as a randomized, partially double-blind, placebo-controlled, four-armed, 2×2 factorial trial with a 6-week study duration. A total of 44 patients were randomized (2∶1∶2∶1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment, and was thus underpowered for the pre-planned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance. The mean difference for the Hamilton-D after 6 weeks was 2.0 (95%CI -1.2;5.2) for Q-potencies vs. placebo, and -3.1 (-5.9;-0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant results between homeopathic Q-potencies versus placebo and homeopathic versus conventional case taking were observed. The frequency of adverse events was comparable for all groups.
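As a side note on interpretation: a 95% confidence interval that spans zero is exactly what a non-significant difference looks like. A minimal sketch (not the trial’s actual analysis code) applying this rule to the figures reported above:

```python
# Illustrative only: a 95% CI that includes zero means the difference
# is not statistically significant at the conventional p < 0.05 level.
def ci_excludes_zero(lower, upper):
    """Return True if the confidence interval excludes zero."""
    return lower > 0 or upper < 0

# Q-potencies vs. placebo: mean difference 2.0, 95% CI (-1.2, 5.2)
print(ci_excludes_zero(-1.2, 5.2))   # CI spans zero -> prints False

# Case history I vs. II: mean difference -3.1, 95% CI (-5.9, -0.2)
print(ci_excludes_zero(-5.9, -0.2))  # CI excludes zero -> prints True
```

Note that, taken at face value, only the case-taking comparison has an interval excluding zero; but in an underpowered, prematurely terminated trial with exploratory analyses, such a nominal finding carries little weight.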
The conclusions were remarkable: “although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting.”
Alright, the authors encountered problems in recruiting enough patients and they therefore decided to stop the trial early. This sort of thing happens. Most researchers would then not publish any data at all. This team, however, did publish a report, and the decision to do so might be perfectly fine: other investigators might learn from the problems which led to early termination of the study.
But why do they conclude that the results were INCONCLUSIVE? I think the results were not inconclusive but non-existent; there were no results to report other than those related to the recruitment problems. And even if one insists on presenting outcome data as an exploratory analysis, one cannot honestly say they were INCONCLUSIVE; all one might state in this case is that the results failed to show an effect of the remedy or the consultation. This is far less favourable for homeopathy than stating the results were INCONCLUSIVE.
And why on earth do the authors conclude “we cannot recommend undertaking a further trial addressing this question in a similar setting”? This does not make the slightest sense to me. If the trialists encountered recruitment problems, others might find ways of overcoming them. The research question asking whether the effects of an extensive homeopathic case taking differ from those of a shorter conventional one seems important. If answered accurately, it could disentangle much of the confusion that surrounds clinical trials of homeopathy.
I have repeatedly commented on the odd conclusions drawn by proponents of alternative medicine on the basis of data that did not quite fulfil their expectations, and I often ask myself at what point this ‘prettification’ of the results via false positive conclusions crosses the line to scientific misconduct. My theory is that these conclusions appear odd to those capable of critical analysis because the authors bend over backwards in order to conclude more positively than the data would seem to permit. If we see it this way, such conclusions might even prove useful as a fairly sensitive ‘bullshit-detector’.
Acupressure is a treatment-variation of acupuncture; instead of sticking needles into the skin, pressure is applied over ‘acupuncture points’ which is supposed to provide a stimulus similar to needling. Therefore the effects of both treatments should theoretically be similar.
Acupressure could have several advantages over acupuncture:
- it can be used for self-treatment
- it is suitable for people with needle-phobia
- it is painless
- it is not invasive
- it carries fewer risks
- it could be cheaper
But is acupressure really effective? What do the trial data tell us? Our own systematic review concluded that the effectiveness of acupressure is currently not well documented for any condition. But now there is a new study which might change this negative verdict.
The primary objective of this 3-armed RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care alone in the management of chemotherapy-induced nausea. 500 patients from outpatient chemotherapy clinics in three regions in the UK involving 14 different cancer units/centres were randomised to the wristband arm, the sham wristband arm and the standard care only arm. Participants were chemotherapy-naive cancer patients receiving chemotherapy of low, moderate and high emetogenic risk. The experimental group were given acupressure wristbands pressing the P6 point (anterior surface of the forearm). The Rhodes Index for Nausea/Vomiting, the Multinational Association of Supportive Care in Cancer (MASCC) Antiemesis Tool and the Functional Assessment of Cancer Therapy General (FACT-G) served as outcome measures. At baseline, participants completed measures of anxiety/depression, nausea/vomiting expectation and expectations from using the wristbands.
Data were available for 361 participants for the primary outcome. The primary outcome analysis (nausea in cycle 1) revealed no statistically significant differences between the three arms. The median nausea experience in patients using wristbands (both real and sham ones) was somewhat lower than that in the anti-emetics only group (median nausea experience scores for the four cycles: standard care arm 1.43, 1.71, 1.14, 1.14; sham acupressure arm 0.57, 0.71, 0.71, 0.43; acupressure arm 1.00, 0.93, 0.43, 0). Women responded more favourably to the use of sham acupressure wristbands than men (odds ratio 0.35 for men and 2.02 for women in the sham acupressure group; 1.27 for men and 1.17 for women in the acupressure group). No significant differences were detected in relation to vomiting outcomes, anxiety and quality of life. Some transient adverse effects were reported, including tightness in the area of the wristbands, feeling uncomfortable when wearing them and minor swelling in the wristband area (n = 6). There were no statistically significant differences in the costs associated with the use of real acupressure bands.
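For readers unfamiliar with odds ratios: they compare the odds of an outcome between two groups, computed from a 2×2 table of responders and non-responders. A minimal sketch with made-up counts (the paper’s raw cell counts are not reproduced here):

```python
# Hypothetical illustration of an odds ratio -- the counts below are
# invented for the example and are NOT from the acupressure trial.
def odds_ratio(events_a, no_events_a, events_b, no_events_b):
    """Odds ratio of group A relative to group B.

    OR > 1: outcome more likely in group A; OR < 1: less likely.
    """
    return (events_a / no_events_a) / (events_b / no_events_b)

# E.g. 20 responders / 10 non-responders in group A
# vs.  15 responders / 15 non-responders in group B:
print(round(odds_ratio(20, 10, 15, 15), 2))  # prints 2.0
```

This is why an odds ratio of 0.35 (men) versus 2.02 (women) in the sham arm suggests opposite directions of response by sex, though such subgroup findings are notoriously fragile.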
26 subjects took part in qualitative interviews. Participants perceived the wristbands (both real and sham) as effective and helpful in managing their nausea during chemotherapy.
The authors concluded that there were no statistically significant differences between the three arms in terms of nausea, vomiting and quality of life, although apparent resource use was less in both the real acupressure arm and the sham acupressure arm compared with standard care only; therefore, no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting. However, the study provided encouraging evidence in relation to an improved nausea experience and some indications of possible cost savings to warrant further consideration of acupressure both in practice and in further clinical trials.
I could argue about several of the methodological details of this study. But I resist the temptation in order to focus on just one single point which I find important and which has implications beyond the realm of acupressure.
Why on earth do the authors conclude that no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting? The stated aim of this RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care. The results failed to show significant differences of the primary outcome measures, consequently the conclusion cannot be “unclear”, it has to be that ACUPRESSURE WRIST BANDS ARE NOT MORE EFFECTIVE THAN SHAM ACUPRESSURE WRIST BANDS AS AN ADJUNCT TO ANTI-EMETIC DRUG TREATMENT (or something to that extent).
As long as RCTs of alternative therapies are run by evangelical believers in the respective therapy, we are bound to regularly encounter this lamentable phenomenon of white-washing negative findings with an inadequate conclusion. In my view, this is not research or science; it is pseudo-research or pseudo-science. And it is much more than a nuisance or a trivial matter: it is a waste of research funds and a waste of patients’ good will, and it has reached a point where people will lose trust in alternative medicine research. Someone should really do a systematic study to identify those research teams that regularly commit such scientific misconduct and ensure that they are cut off from public funding and support.