The most widely used definition of EVIDENCE-BASED MEDICINE (EBM) is probably this one: The judicious use of the best currently available scientific research in making decisions about the care of patients. Evidence-based medicine (EBM) is intended to integrate clinical expertise with the research evidence and patient values.
David Sackett’s own definition is a little different: Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.
Even though the principles of EBM are now widely accepted, there are those who point out that EBM has its limitations. The major criticisms of EBM relate to five themes: reliance on empiricism, narrow definition of evidence, lack of evidence of efficacy, limited usefulness for individual patients, and threats to the autonomy of the doctor/patient relationship.
Advocates of alternative medicine have been particularly vocal in pointing out that EBM is not really applicable to their area. However, as their arguments were less than convincing, a new strategy for dealing with EBM seemed necessary. Some proponents of alternative medicine therefore are now trying to hoist EBM-advocates by their own petard.
In doing so they refer directly to the definitions of EBM and argue that EBM has to fulfil at least three criteria: 1) external best evidence, 2) clinical expertise and 3) patient values or preferences.
Using this argument, they strive to demonstrate that almost everything in alternative medicine is evidence-based. Let me explain this with two deliberately extreme examples.
CRYSTAL THERAPY FOR CURING CANCER
There is, of course, not a jot of evidence for this. But there may well be the opinion held by crystal therapists that some cancer patients respond to their treatment. Thus the ‘best’ available evidence is clearly positive, they argue. Certainly the clinical expertise of these crystal therapists is positive. So, if a cancer patient wants crystal therapy, all three preconditions are fulfilled and CRYSTAL THERAPY IS ENTIRELY EVIDENCE-BASED.
CHIROPRACTIC FOR ASTHMA
Even the most optimistic chiropractor would find it hard to deny that the best evidence does not demonstrate the effectiveness of chiropractic for asthma. But never mind, the clinical expertise of the chiropractor may well be positive. If the patient has a preference for chiropractic, at least two of the three conditions are fulfilled. Therefore – on balance – chiropractic for asthma is [fairly] evidence-based.
The ‘HOISTING ON THE PETARD OF EBM’-method is thus a perfect technique for turning the principles of EBM upside down. Its application leads us straight back into the dark ages of medicine when anything was legitimate as long as some charlatan could convince his patients to endure his quackery and pay for it – if necessary with his life.
Do you think that chiropractic is effective for asthma? I don’t – in fact, I know it isn’t because, in 2009, I published a systematic review of the available RCTs which showed quite clearly that the best evidence suggested chiropractic was ineffective for that condition.
‘But this is clearly not true’, some enthusiasts might reply. What is more, they can even refer to a 2010 systematic review which indicates that chiropractic is effective; its conclusions speak a very clear language: …the eight retrieved studies indicated that chiropractic care showed improvements in subjective measures and, to a lesser degree objective measures… How on earth can this be?
I would not be surprised if chiropractors claimed the discrepancy is due to the fact that Prof Ernst is biased. Others might point out that the more recent review includes more studies and thus ought to be more reliable. The newer review does, in fact, have about twice as many studies as mine.
How come? Were plenty of new RCTs published during the 12 months that lay between the two publications? The answer is NO. But why then the discrepant conclusions?
The answer is much less puzzling than you might think. The ‘alchemists of alternative medicine’ regularly succeed in smuggling non-evidence into such reviews in order to beautify the overall picture and confirm their wishful thinking. The case of chiropractic for asthma by no means stands alone, but it is a classic example of how we are being misled by charlatans.
Anyone who reads the full text of the two reviews mentioned above will find that they do, in fact, include exactly the same number of RCTs. The reason why they arrive at different conclusions is simple: the enthusiasts’ review added NON-EVIDENCE to the existing RCTs. To be precise, the authors included one case series, one case study, one survey, two randomized controlled trials (RCTs), one randomized patient and observer blinded cross-over trial, one single blind cross study design, and one self-reported impairment questionnaire.
Now, there is nothing wrong with case reports, case series, or surveys – except THEY TELL US NOTHING ABOUT EFFECTIVENESS. I would bet my last shirt that the authors know all of that; yet they make fairly firm and positive conclusions about effectiveness. As the RCT-results collectively happen to be negative, they even pretend that case reports etc. outweigh the findings of RCTs.
And why do they do that? Because they are interested in the truth, or because they don’t mind using alchemy in order to mislead us? Your guess is as good as mine.
Systematic reviews are widely considered to be the most reliable type of evidence for judging the effectiveness of therapeutic interventions. Such reviews should be focused on a well-defined research question and identify, critically appraise and synthesize the totality of the high quality research evidence relevant to that question. Often it is possible to pool the data from individual studies and thus produce a single numerical estimate from the existing evidence; in this case, we speak of a meta-analysis, a sub-category of systematic reviews.
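The pooling step mentioned above can be sketched numerically. One common approach (not specific to any review discussed in this post) is fixed-effect, inverse-variance weighting; the effect sizes and standard errors below are made-up numbers for illustration only:

```python
# Illustrative fixed-effect meta-analysis by inverse-variance weighting.
# The (effect, standard_error) pairs are invented, not data from any
# review discussed in the text.

def fixed_effect_meta(studies):
    """Pool (effect, standard_error) pairs into one weighted estimate."""
    weights = [1.0 / se ** 2 for _, se in studies]      # precise studies weigh more
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5             # SE of the pooled estimate
    return pooled, pooled_se

studies = [(0.2, 0.1), (0.4, 0.2), (0.1, 0.1)]          # three hypothetical trials
effect, se = fixed_effect_meta(studies)
print(round(effect, 3), round(se, 3))
```

Note that this pooling machinery is only as good as what goes into it – which is precisely why the quality assessment discussed next matters so much.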
One strength of systematic reviews is that they minimise selection bias and random error by considering the totality of the evidence of a pre-defined nature and quality. A crucial precondition, however, is that the quality of the primary studies is critically assessed. If this is done well, the researchers will usually be able to determine how robust any given result is, and whether high quality trials generate similar findings to those of lower quality. If there is a discrepancy between findings from rigorous and flimsy studies, it is obviously advisable to trust the former and discard the latter.
And this is where systematic reviews of alternative treatments can run into difficulties. For any given research question in this area we usually have a paucity of primary studies. Equally important is the fact that many of the available trials tend to be of low quality. Consequently, there often is a lack of high quality studies, and this makes it all the more important to include a robust critical evaluation of the primary data. Not doing so would render the overall result of the review less than reliable – in fact, such a paper would not qualify as a systematic review at all; it would be a pseudo-systematic review, i.e. a review which pretends to be systematic but, in fact, is not. Such papers are a menace in that they can seriously mislead us, particularly if we are not familiar with the essential requirements for a reliable review.
This is precisely where some promoters of bogus treatments seem to see their opportunity of making their unproven therapy look as though it were evidence-based. Pseudo-systematic reviews can be manipulated to yield a desired outcome. In my last post, I have shown that this can be done by including treatments which are effective so that an ineffective therapy appears effective (“chiropractic is so much more than just spinal manipulation”). An even simpler method is to exclude some of the studies that contradict one’s belief from the review. Obviously, the review would then not comprise the totality of the available evidence. But, unless the reader bothers to do a considerable amount of research, he/she would be highly unlikely to notice. All one needs to do is to smuggle the paper past the peer-review process – hardly a difficult task, given the plethora of alternative medicine journals that bend over backwards to publish any rubbish as long as it promotes alternative medicine.
Alternatively (or in addition) one can save oneself a lot of work and omit the process of critically evaluating the primary studies. This method is increasingly popular in alternative medicine. It is a fool-proof method of generating a false-positive overall result. As poor quality trials have a tendency to deliver false-positive results, it is obvious that a predominance of flimsy studies must create a false-positive result.
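The point that a predominance of flimsy studies yields a false-positive overall picture can be illustrated with a toy simulation. In the sketch below, the treatment has NO true effect, but the weak trials carry a systematic bias (standing in for lack of blinding, selective reporting and the like); all numbers are invented for illustration:

```python
import random

# Toy simulation: true treatment effect is zero, but methodologically
# weak trials are modelled with a built-in bias. All parameters here
# are invented purely for illustration.
random.seed(42)

def trial_is_positive(n, bias):
    treated = [random.gauss(bias, 1.0) for _ in range(n)]  # bias inflates scores
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean_diff = sum(treated) / n - sum(control) / n
    return mean_diff > 0.3                                 # crude 'positive' cut-off

flimsy = [trial_is_positive(n=20, bias=0.5) for _ in range(8)]     # small, biased
rigorous = [trial_is_positive(n=200, bias=0.0) for _ in range(2)]  # large, unbiased

print(sum(flimsy), "of 8 flimsy trials positive;",
      sum(rigorous), "of 2 rigorous trials positive")
```

A review that simply tallies all ten trials without appraising their quality would conclude the useless treatment works; a review that weighs the rigorous trials appropriately would not.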
A particularly notorious example of a pseudo-systematic review that used this as well as most of the other tricks for misleading the reader is the famous ‘systematic’ review by Bronfort et al. It was commissioned by the UK GENERAL CHIROPRACTIC COUNCIL after the chiropractic profession got into trouble and was keen to defend those bogus treatments disclosed by Simon Singh. Bronfort and his colleagues thus swiftly published (of course, in a chiro-journal) an all-encompassing review attempting to show that, at least for some conditions, chiropractic was effective. Its lengthy conclusions seemed encouraging: Spinal manipulation/mobilization is effective in adults for: acute, subacute, and chronic low back pain; migraine and cervicogenic headache; cervicogenic dizziness; manipulation/mobilization is effective for several extremity joint conditions; and thoracic manipulation/mobilization is effective for acute/subacute neck pain. The evidence is inconclusive for cervical manipulation/mobilization alone for neck pain of any duration, and for manipulation/mobilization for mid back pain, sciatica, tension-type headache, coccydynia, temporomandibular joint disorders, fibromyalgia, premenstrual syndrome, and pneumonia in older adults. Spinal manipulation is not effective for asthma and dysmenorrhea when compared to sham manipulation, or for Stage 1 hypertension when added to an antihypertensive diet. In children, the evidence is inconclusive regarding the effectiveness for otitis media and enuresis, and it is not effective for infantile colic and asthma when compared to sham manipulation. Massage is effective in adults for chronic low back pain and chronic neck pain. The evidence is inconclusive for knee osteoarthritis, fibromyalgia, myofascial pain syndrome, migraine headache, and premenstrual syndrome. In children, the evidence is inconclusive for asthma and infantile colic.
Chiropractors across the world cite this paper as evidence that chiropractic has at least some evidence base. What they omit to tell us (perhaps because they do not appreciate it themselves) is the fact that Bronfort et al
- failed to formulate a focussed research question,
- invented their own categories of inconclusive findings,
- included all sorts of studies which had nothing to do with chiropractic,
- and did not assess the quality of the primary studies included in their review.
If, for a certain condition, three trials were included, for instance, two of which were positive but of poor quality and one was negative but of good quality, the authors would conclude that, overall, there is sound evidence.
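This kind of distortion can be made concrete with a small sketch. The trials and quality labels below are hypothetical, invented only to show how vote-counting without quality assessment flips a verdict:

```python
# Hypothetical example of why vote-counting without quality assessment
# misleads. All trial labels below are invented for illustration.
trials = [
    {"result": "positive", "quality": "low"},
    {"result": "positive", "quality": "low"},
    {"result": "negative", "quality": "high"},
]

# Naive vote count, ignoring quality (the pseudo-systematic approach):
naive_verdict = max(("positive", "negative"),
                    key=lambda r: sum(t["result"] == r for t in trials))

# Restricting to rigorous trials (what a proper review would weigh most):
high_quality = [t for t in trials if t["quality"] == "high"]
rigorous_verdict = max(("positive", "negative"),
                       key=lambda r: sum(t["result"] == r for t in high_quality))

print(naive_verdict, rigorous_verdict)  # the two approaches disagree
```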
Bronfort himself is, of course, more than likely to know all that (he has learnt his trade with an excellent Dutch research team and published several high quality reviews) – but his readers mostly don’t. And for chiropractors, this ‘systematic’ review is now considered to be the most reliable evidence in their field.
Imagine a type of therapeutic intervention that has been shown to be useless. Let’s take surgery, for instance. Imagine that research had established with a high degree of certainty that surgical operations are ineffective. Imagine further that surgeons, once they can no longer hide this evidence, argue that good surgeons do much more than just operate: surgeons wash their hands which effectively reduces the risk of infections, they prescribe medications, they recommend rehabilitative and preventative treatments, etc. All of these measures are demonstrably effective in their own right, never mind the actual surgery. Therefore, surgeons could argue that the things surgeons do are demonstrably effective and helpful, even though surgery itself would be useless in this imagined scenario.
I am, of course, not for a minute claiming that surgery is rubbish, but I have used this rather extreme example to expose the flawed argument that is often used in alternative medicine for white-washing bogus treatments. The notion is that, because a particular alternative health care profession employs not just one but multiple forms of treatments, it should not be judged by the effectiveness of its signature-therapy, particularly if it happens to be ineffective.
This type of logic seems nowhere more prevalent than in the realm of chiropractic. Its founding father, D.D. Palmer, dreamt up the bizarre notion that all human disease is caused by ‘subluxations’ which require spinal manipulation for returning the ill person to good health. Consequently, most chiropractors see spinal manipulation as a panacea and use this type of treatment for almost 100% of their patients. In other words, spinal manipulation is as much the hallmark-therapy for chiropractic as surgery is for surgeons.
When someone points out that, for this or that condition, spinal manipulation is not of proven effectiveness or even of proven ineffectiveness, chiropractors have in recent years taken to answering as outlined above; they might say: WE DO ALL SORTS OF OTHER THINGS TOO, YOU KNOW. FOR INSTANCE, WE EMPLOY OTHER MANUAL TECHNIQUES, GIVE LIFE-STYLE ADVICE AND USE NO END OF PHYSIOTHERAPEUTIC INTERVENTIONS. YOU CANNOT SAY THAT THESE APPROACHES ARE BOGUS. THEREFORE CHIROPRACTIC IS FAR FROM USELESS.
To increase the chances of convincing us with this notion, they have, in recent months, produced dozens of ‘systematic reviews’ which allegedly prove their point. Here are some of the conclusions from these articles which usually get published in chiro-journals:
The majority of the included trials appeared to indicate that the parents of infants receiving manipulative therapies reported fewer hours crying per day than parents whose infants did not, based on contemporaneous crying diaries, and this difference was statistically significant.
This study found a level of B or fair evidence for manual manipulative therapy of the shoulder, shoulder girdle, and/or the FKC combined with multimodal or exercise therapy for rotator cuff injuries/disorders, disease, or dysfunction.
Personally, I find this kind of ‘logic’ irritatingly illogical. If we accept it as valid, the boundaries between sense and nonsense disappear, and our tools of differentiating between quackery and ethical health care become blunt.
The next step could then even be to claim that a homeopathic hospital must be a good thing because some of its clinicians occasionally also prescribe non-homeopathic treatments.
The efficacy or effectiveness of medical interventions is, of course, best tested in clinical trials. The principle of a clinical trial is fairly simple: typically, a group of patients is divided (preferably at random) into two subgroups, one (the ‘verum’ group) is treated with the experimental treatment and the other (the ‘control’ group) with another option (often a placebo), and the eventual outcomes of the two groups are compared. If done well, such studies are able to exclude biases and confounding factors such that their findings allow causal inference. In other words, they can tell us whether an outcome was caused by the intervention per se or by some other factor such as the natural history of the disease, regression towards the mean etc.
A clinical trial is a research tool for testing hypotheses; strictly speaking, it tests the ‘null-hypothesis’: “the experimental treatment generates the same outcomes as the treatment of the control group”. If the trial shows no difference between the outcomes of the two groups, the null-hypothesis stands (strictly speaking, it cannot be rejected). In this case, we commonly speak of a negative result. If the experimental treatment was better than the control treatment, the null-hypothesis is rejected, and we commonly speak of a positive result. In other words, clinical trials can only generate positive or negative results, because the null-hypothesis must either stand or be rejected – there are no grey tones between the black of a negative and the white of a positive study.
For enthusiasts of alternative medicine, this can create a dilemma, particularly if there are lots of published studies with negative results. In this case, the totality of the available trial evidence is negative which means the treatment in question cannot be characterised as effective. It goes without saying that such an overall conclusion rubs the proponents of that therapy the wrong way. Consequently, they might look for ways to avoid this scenario.
One fairly obvious way of achieving this aim is to simply re-categorise the results. What if we invented a new category? What if we called some of the negative studies by a different name? What about NON-CONCLUSIVE?
That would be brilliant, wouldn’t it? We might end up with a simple statistic where the majority of the evidence is, after all, positive. And this, of course, would give the impression that the ineffective treatment in question is effective!
How exactly do we do this? We continue to call positive studies POSITIVE; we then call studies where the experimental treatment generated worse results than the control treatment (usually a placebo) NEGATIVE; and finally we call those studies where the experimental treatment created outcomes which were not different from placebo NON-CONCLUSIVE.
In the realm of alternative medicine, this ‘non-conclusive result’ method has recently become incredibly popular. Take homeopathy, for instance. The Faculty of Homeopathy proudly claim the following about clinical trials of homeopathy: Up to the end of 2011, there have been 164 peer-reviewed papers reporting randomised controlled trials (RCTs) in homeopathy. This represents research in 89 different medical conditions. Of those 164 RCT papers, 71 (43%) were positive, 9 (6%) negative and 80 (49%) non-conclusive.
This misleading nonsense was, of course, warmly received by homeopaths. The British Homeopathic Association, like many other organisations and individuals with an axe to grind, lapped up the message and promptly repeated it: The body of evidence that exists shows that much more investigation is required – 43% of all the randomised controlled trials carried out have been positive, 6% negative and 49% inconclusive.
Let’s be clear what has happened here: the true percentage figures seem to show that 43% of studies (mostly of poor quality) suggest a positive result for homeopathy, while 57% of them (on average the ones of better quality) were negative. In other words, the majority of this evidence is negative. If we conducted a proper systematic review of this body of evidence, we would, of course, have to account for the quality of each study, and in this case we would have to conclude that homeopathy is not supported by sound evidence of effectiveness.
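The arithmetic of the relabelling is simple enough to restate directly. This sketch merely reuses the Faculty's own published counts quoted above (it adds no new data) and applies the trial logic that a result no different from placebo is a negative result:

```python
# The Faculty of Homeopathy's published counts, as quoted in the text:
positive, negative, non_conclusive = 71, 9, 80

# Relabelled presentation: 'non-conclusive' is carved out as its own
# category, so 'positive' becomes the largest remaining group.
assert positive > negative  # 71 vs 9 -- looks like a clear win

# Trial logic: an outcome no different from placebo fails to reject the
# null-hypothesis, i.e. it is a negative result.
negative_honest = negative + non_conclusive  # 9 + 80 = 89
print(positive, "positive vs", negative_honest, "negative")
```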
The little trick of applying the ‘NON-CONCLUSIVE’ method has thus turned this overall result upside down: black has become white! No wonder that it is so popular with proponents of all sorts of bogus treatments.
Whenever a new trial of an alternative intervention emerges which fails to confirm the wishful thinking of the proponents of that therapy, the world of alternative medicine is in turmoil. What can be done about yet another piece of unfavourable evidence? The easiest solution would be to ignore it, of course – and this is precisely what is often tried. But this tactic usually proves to be unsatisfactory; it does not neutralise the new evidence, and each time someone brings it up, one has to stick one’s head back into the sand. Rather than denying its existence, it would be preferable to have a tool which invalidates the study in question once and for all.
The ‘fatal flaw’ solution is simpler than anticipated! Alternative treatments are ‘very special’, and this notion must be emphasised, blown up beyond all proportions and used cleverly to discredit studies with unfavourable outcomes: the trick is simply to claim that studies with unfavourable results have a ‘fatal flaw’ in the way the alternative treatment was applied. As only the experts in the ‘very special’ treatment in question are able to judge the adequacy of their therapy, nobody is allowed to doubt their verdict.
Take acupuncture, for instance; it is an ancient ‘art’ which only the very best will ever master – at least that is what we are being told. So, all the proponents need to do in order to invalidate a trial, is read the methods section of the paper in full detail and state ‘ex cathedra’ that the way acupuncture was done in this particular study is completely ridiculous. The wrong points were stimulated, or the right points were stimulated but not long enough [or too long], or the needling was too deep [or too shallow], or the type of stimulus employed was not as recommended by TCM experts, or the contra-indications were not observed etc. etc.
As nobody can tell correct acupuncture from incorrect acupuncture, this ‘fatal flaw’ method is fairly fool-proof. It is also ever so simple: acupuncture-fans do not even need to search hard for the ‘fatal flaw’; they only have to look at the result of a study. If it was favourable, the treatment was obviously done perfectly by highly experienced experts; if it was unfavourable, the therapists clearly must have been morons who picked up their acupuncture skills in a single weekend course. The reasons for this judgement can always be found or, if all else fails, invented.
And the end-result of the ‘fatal flaw’ method is most satisfactory; what is more, it can be applied to all alternative therapies – homeopathy, herbal medicine, reflexology, Reiki healing, colonic irrigation…the method works for all of them! What is even more, the ‘fatal flaw’ method is adaptable to other aspects of scientific investigations such that it fits every conceivable circumstance.
An article documenting the ‘fatal flaw’ has to be published, of course – but this is no problem! There are dozens of dodgy alternative medicine journals which are only too keen to print even the most far-fetched nonsense as long as it promotes alternative medicine in some way. Once this paper is published, the proponents of the therapy in question have a comfortable default position to rely on each time someone cites the unfavourable study: “WHAT, NOT THAT STUDY AGAIN! THE TREATMENT HAS BEEN SHOWN TO BE ALL WRONG. NOBODY CAN EXPECT GOOD RESULTS FROM A THERAPY THAT WAS NOT CORRECTLY ADMINISTERED. IF YOU DON’T HAVE BETTER STUDIES TO SUPPORT YOUR ARGUMENTS, YOU BETTER SHUT UP.”
There might, in fact, be better studies – but chances are that the ‘other side’ has already documented a ‘fatal flaw’ in them too.
It is usually BIG PHARMA who stands accused of being less than honest with the evidence, particularly when it runs against commercial interests; and the allegations prove to be correct with depressing regularity. In alternative medicine, commercial interests exist too, but there is usually much less money at stake. So, a common assumption is that conflicts of interest are less relevant in alternative medicine. Like so many assumptions in this area, this notion is clearly and demonstrably erroneous.
The sums of money are definitely smaller, but non-commercial conflicts of interest are potentially more important than the commercial ones. I am thinking of the quasi-religious beliefs that are so very prevalent in alternative medicine. Belief can move mountains, they say – it can surely delude people and make them do the most extraordinary things. Belief can transform advocates of alternative medicine into ‘ALCHEMISTS OF ALTERNATIVE EVIDENCE’ who turn negative/unfavourable into positive/favourable evidence.
The alchemists’ ‘tricks of the trade’ are often the same as used by BIG PHARMA; they include:
- drawing conclusions which are not supported by the data
- designing studies such that they will inevitably generate a favourable result
- cherry-picking the evidence
- hiding unfavourable findings
- publishing favourable results multiple times
- submitting data-sets to multiple statistical tests until a positive result emerges
- defaming scientists who publish unfavourable findings
- bribing experts
- prettifying data
- falsifying data
As I said, these methods, albeit despicable, are well-known to pseudoscientists in all fields of inquiry. To assume that they are unknown in alternative medicine is naïve and unrealistic, as many of my previous posts confirm.
In addition to these ubiquitous ‘standard’ methods of scientific misconduct and fraud, there are a few techniques which are more or less unique to and typical for the alchemists of alternative medicine. In the following parts of this series of articles, I will try to explain these methods in more detail.
Cancer patients are bombarded with information about supplements which allegedly are effective for their condition. I estimate that 99.99% of this information is unreliable and much of it is outright dangerous. So, there is an urgent need for trustworthy, objective information. But which source can we trust?
The authors of a recent article in ‘INTEGRATIVE CANCER THERAPIES’ (the first journal to spearhead and focus on a new and growing movement in cancer treatment. The journal emphasizes scientific understanding of alternative medicine and traditional medicine therapies, and their responsible integration with conventional health care. Integrative care includes therapeutic interventions in diet, lifestyle, exercise, stress care, and nutritional supplements, as well as experimental vaccines, chrono-chemotherapy, and other advanced treatments) review the issue of dietary supplements in the treatment of cancer patients. They claim that the optimal approach is to discuss both the facts and the uncertainty with the patient, in order to reach a mutually informed decision. This sounds promising, and we might thus trust them to deliver something reliable.
In order to enable doctors and other health care professionals to have such discussion, the authors then report on the work of the ‘Clinical Practice Committee’ of ‘The Society of Integrative Oncology’. This panel undertook the challenge of providing basic information to physicians who wish to discuss these issues with their patients. A list of supplements that have the best suggestions of benefit was constructed by “leading researchers and clinicians” who have experience in using these supplements:
- vitamin D,
- maitake mushrooms,
- fish oil,
- green tea,
- milk thistle,
The authors claim that their review includes basic information on each supplement, such as evidence on effectiveness and clinical trials, adverse effects, and interactions with medications. The information was constructed to provide an up-to-date base of knowledge, so that physicians and other health care providers would be aware of the supplements and be able to discuss realistic expectations and potential benefits and risks (my emphasis).
At first glance, this task looks ambitious but laudable; however, after studying the paper in some detail, I must admit that I have considerable problems taking it seriously – and here is why.
The first question I ask myself when reading the abstract is: Who are these “leading researchers and clinicians”? Surely such a consensus exercise crucially depends on who is being consulted. The article itself does not reveal who these experts are, merely that they are all members of the ‘Society of Integrative Oncology’. A little research reveals this organisation to be devoted to integrating all sorts of alternative therapies into cancer care. If we assume that the experts are identical with the authors of the review, one should point out that most of them are proponents of alternative medicine. This lack of critical input seems more than a little disconcerting.
My next questions are: How did they identify the 10 supplements and how did they evaluate the evidence for or against them? The article informs us that a 5-step procedure was employed:
1. Each clinician in this project was requested to construct a list of supplements that they tend to use frequently in their practice.
2. An initial list of close to 25 supplements was constructed. This list included supplements that have suggestions of some possible benefit and likely to carry minimal risk in cancer care.
3. From that long list, the group agreed on the 10 leading supplements that have the best suggestions of benefit.
4. Each participant selected 1 to 2 supplements in whose use they have interest and experience, and wrote a manuscript related to the selected supplement in a uniform and agreed format. The agreed format was constructed to provide a base of knowledge, so physicians and other health care providers would be able to discuss realistic expectations and potential benefits and risks with patients and families that seek that kind of information.
5. The revised document was circulated among participants for revisions and comments.
This method might look fine to proponents of alternative medicine, but from a scientific point of view, it is seriously wanting. Essentially, they asked those experts who are in favour of a given supplement to write a report to justify his/her preference. This method is not just open to bias; it formally invites bias.
Predictably then, the reviews of the 10 chosen supplements are woefully inadequate. There is no evidence of a systematic approach; the cited evidence is demonstrably cherry-picked; there is a complete lack of critical analysis; for several supplements, clinical data are virtually absent without the authors finding this embarrassing void a reason for concern; dosage recommendations are often vague and naïve, to say the least (for instance, for milk thistle: 200 to 400 mg per day – without indication of what the named weight range refers to, the fresh plant, dried powder, extract…?); safety data are incomplete and nobody seems to mind that supplements are not subject to systematic post-marketing surveillance; the text is full of naïve thinking and contradictions (e.g. ”There are no reported side effects of the mushroom extracts or the Maitake D-fraction. As Maitake may lower blood sugar, it should be used with caution in patients with diabetes“); evidence suggesting that a given supplement might reduce the risk of cancer is presented as though this means it is an effective treatment for an existing cancer; cancer is usually treated as though it is one disease entity without any differentiation of different cancer types.
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. But I do wonder, isn’t being in favour of integrating half-baked nonsense into cancer care and being selected for one’s favourable attitude towards certain supplements already a conflict of interest?
In any case, the review is in my view not of sufficient rigor to form the basis for well-informed discussions with patients. The authors of the review cite a guideline by the ‘Society of Integrative Oncology’ for the use of supplements in cancer care which states: For cancer patients who wish to use nutritional supplements, including botanicals for purported antitumor effects, it is recommended that they consult a trained professional. During the consultation, the professional should provide support, discuss realistic expectations, and explore potential benefits and risks. It is recommended that use of those agents occur only in the context of clinical trials, recognized nutritional guidelines, clinical evaluation of the risk/benefit ratio based on available evidence, and close monitoring of adverse effects. It seems to me that, with this review, the authors have not adhered to their own guideline.
Criticising the work of others is perhaps not very difficult; doing a better job usually is. So, can I offer anything better than the review criticised above? The answer is YES. Our initiative ‘CAM cancer’ provides up-to-date, concise and evidence-based systematic reviews of many supplements and other alternative treatments that cancer patients are likely to hear about. Their conclusions are not nearly as uncritically positive as those of the article in ‘INTEGRATIVE CANCER THERAPIES’.
I happen to believe that it is important for cancer patients to have access to reliable information and that it is unethical to mislead them with biased accounts about the value of any treatment.
There is hardly a discussion about homeopathy in which an apologist does not eventually state: HOMEOPATHY CANNOT BE A PLACEBO, BECAUSE IT WORKS IN ANIMALS!!! Those who are not well-versed in this subject tend to be impressed, and the argument has won many consumers over to the dark side, I am sure. But is it really correct?
The short answer to this question is NO.
Pavlov discovered the phenomenon of ‘conditioning’ in animals, and ‘conditioning’ is considered to be a major part of the placebo-response. So, depending on the circumstances, animals do respond to placebo (my dog, for instance, used to go into a distinct depressive mood when he saw me packing a suitcase).
Then there is the fact that the animal’s response might be less important than the owner’s reaction to homeopathic treatment. This is particularly important with pets, of course. Homeopathy-believing pet owners might over-interpret the pet’s response and report that the homeopathic remedy has worked wonders when, in fact, it has made no difference.
Finally, there may be some situations where neither of the above two phenomena can play a decisive role. Homeopaths like to cite studies where entire herds of cows were treated homeopathically to prevent mastitis, a common problem in milk-cows. It is unlikely that conditioning or wishful thinking of the owner are decisive in such a study. Let’s see whether homeopathy-promoters will also be fond of this new study of exactly this subject.
New Zealand vets compared clinical and bacteriological cure rates of clinical mastitis following treatment with either antimicrobials or homeopathic preparations. They used 7 spring-calving herds from the Waikato region of New Zealand to source cases of clinical mastitis (n=263 glands) during the first 90 days following calving. Duplicate milk samples were collected for bacteriology from each clinically infected gland at diagnosis and 25 (SD 5.3) days after the initial treatment. Affected glands were treated with either an antimicrobial formulation or a homeopathic remedy. Generalised linear models with binomial error distribution and logit link were used to analyse the proportion of cows that presented clinical treatment cures and the proportion of glands that were classified as bacteriological cures, based on initial and post-treatment milk samples.
The results show that the mean cumulative incidence of clinical mastitis was 7% (range 2-13% across herds) of cows. Streptococcus uberis was the most common pathogen isolated from culture-positive samples from affected glands (140/209; 67%). The clinical cure rate, based on the observation of clinical signs following initial treatment, was higher for cows treated with antimicrobials (107/113; 95%) than for cows treated with homeopathic remedies (72/114; 63%) (p<0.001). Across all pathogen types, the bacteriological cure rate at gland level was higher for cows treated with antimicrobials (75/102; 74%) than for those treated with a homeopathic preparation (39/107; 36%) (p<0.001).
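Out of curiosity, these cure rates can be checked with nothing more sophisticated than a pooled two-proportion z-test. The sketch below (plain Python, my own re-calculation rather than the generalised linear model the authors actually used) reproduces the direction and significance of the reported differences from the published counts:

```python
# Sanity check of the reported cure rates using a pooled two-proportion
# z-test with a normal approximation. The counts come from the study;
# the test itself is my own re-calculation, not the authors' GLM.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approx.
    return z, p_value

# Clinical cure: antimicrobials 107/113 vs homeopathy 72/114
z, p = two_proportion_z(107, 113, 72, 114)
print(f"clinical cure:        z = {z:.2f}, p = {p:.2g}")

# Bacteriological cure: antimicrobials 75/102 vs homeopathy 39/107
z, p = two_proportion_z(75, 102, 39, 107)
print(f"bacteriological cure: z = {z:.2f}, p = {p:.2g}")
```

Both comparisons come out far beyond conventional significance thresholds, consistent with the p<0.001 reported by the authors.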
The authors conclude that homeopathic remedies had significantly lower clinical and bacteriological cure rates compared with antimicrobials when used to treat post-calving clinical mastitis where S. uberis was the most common pathogen. The proportion of cows that needed retreatment was significantly higher for the homeopathically treated cows. This, combined with lower bacteriological cure rates, has implications for duration of infection, individual cow somatic cell count, costs associated with treatment and animal welfare.
Yes, I know, this is just one single study, and we need to consider the totality of the reliable evidence. Currently, there are 203 clinical trials of homeopathic treatments of animals; and they are being reviewed at this very moment (unfortunately by a team that is not known for its objective stance on homeopathy). So, we will have to wait and see. When, in 1999, A. Vickers reviewed all pre-clinical studies, including those on animals, he concluded that there is a lack of independent replication of any pre-clinical research in homoeopathy. In the few instances where a research team has set out to replicate the work of another, either the results were negative or the methodology was questionable.
All this is to say that, until truly convincing evidence to the contrary is available, the homeopaths’ argument ‘HOMEOPATHY CANNOT BE A PLACEBO, BECAUSE IT WORKS IN ANIMALS!!!’ is, in my view, as weak as the dilution of their remedies.
There are dozens of observational studies of homeopathy which seem to suggest – at least to homeopaths – that homeopathic treatments generate health benefits. As these investigations lack a control group, their results can all too easily be invalidated by pointing out that factors like ‘regression towards the mean‘ (RTM, a statistical artefact arising from the phenomenon that a variable which is extreme on its first measurement tends to be closer to the average on its second measurement) might be the cause of the observed change. Thus, the debate over whether such observational data are reliable has been raging for decades. Now, German (pro-homeopathy) investigators have published a paper which could potentially resolve this dispute.
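For readers unfamiliar with RTM, a minimal simulation makes the point: select patients because they score badly, remeasure them, and the group ‘improves’ without receiving any treatment at all. All numbers in the sketch below are invented purely for illustration:

```python
# Minimal simulation of regression towards the mean (RTM): patients are
# "enrolled" because they look worst at baseline, then simply remeasured.
# No treatment is applied anywhere, yet the group average improves.
import random

random.seed(1)

MU, SIGMA_TRUE, SIGMA_NOISE = 50.0, 10.0, 8.0  # invented population values

# Each person has a stable true score plus independent measurement noise.
people = [random.gauss(MU, SIGMA_TRUE) for _ in range(100_000)]
first = [t + random.gauss(0, SIGMA_NOISE) for t in people]
second = [t + random.gauss(0, SIGMA_NOISE) for t in people]

# Enrol only those who looked worst at baseline (first score < 35),
# much as an observational study of symptomatic patients effectively does.
enrolled = [(f, s) for f, s in zip(first, second) if f < 35]
mean_first = sum(f for f, _ in enrolled) / len(enrolled)
mean_second = sum(s for _, s in enrolled) / len(enrolled)

print(f"baseline mean of enrolled group:     {mean_first:.1f}")
print(f"follow-up mean (with no treatment):  {mean_second:.1f}")  # closer to 50
```

The follow-up mean drifts back towards the population average simply because the extreme baseline scores were partly chance, which is exactly the artefact an uncontrolled before/after study cannot distinguish from a treatment effect.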
With this re-analysis of an observational study, the investigators wanted to evaluate whether the changes observed in previous cohort studies are due to RTM, and to estimate RTM-adjusted effects. SF-36 quality-of-life (QoL) data from a cohort of 2827 chronically diseased adults treated with homeopathy were reanalysed using a method described by Mee and Chua in 1991. RTM-adjusted effects, standardized by the respective standard deviation at baseline, were 0.12 (95% CI: 0.06-0.19, P < 0.001) for the mental and 0.25 (0.22-0.28, P < 0.001) for the physical summary score of the SF-36. Small-to-moderate effects were confirmed for most individual diagnoses in physical, but not in mental, component scores. Under the assumption that the true population mean equals the mean of all actually diseased patients, RTM-adjusted effects were confirmed for both scores in most diagnoses.
The authors reached the following conclusion: “In our paper we showed that the effects on quality of life observed in patients receiving homeopathic care in a usual care setting are small or moderate at maximum, but cannot be explained by RTM alone. Due to the uncontrolled study design they may, however, completely be due to nonspecific effects. All our analyses made a restrictive and conservative assumption, so the true treatment effects might be larger than shown.”
Of course, the analysis relies heavily on the validity of Mee and Chua’s modified t-test. It requires the true mean of the target population to be known, a requirement that can seldom be fulfilled. The authors therefore took the SF-36 mean summary scores from the 1998 German health survey as proxies. I am not a statistician and therefore unable to tell how reliable this method might be (if there is someone out there who can give us some guidance here, please post a comment).
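For readers who want to see the arithmetic, the general idea behind such an adjustment can be sketched as follows. To be clear: this is only the textbook RTM correction under the known-population-mean assumption, not necessarily Mee and Chua’s exact test statistic, and all numbers are invented for illustration:

```python
# Textbook RTM correction when the population mean is known: subtract the
# change that RTM alone would predict from the observed change. This is
# the general logic only, not necessarily Mee and Chua's exact statistic.

def rtm_adjusted_change(mean_baseline, mean_followup, mu_population, reliability):
    """Observed change minus the change RTM alone would predict.

    reliability: test-retest correlation of the measure (0..1).
    RTM predicts the selected group drifts towards mu_population by
    (1 - reliability) * (mu_population - mean_baseline).
    """
    observed = mean_followup - mean_baseline
    predicted_rtm = (1 - reliability) * (mu_population - mean_baseline)
    return observed - predicted_rtm

# Hypothetical SF-36-style scores: patients enrolled at a mean of 35,
# population norm 50, follow-up mean 42, test-retest reliability 0.7.
effect = rtm_adjusted_change(35.0, 42.0, 50.0, 0.7)
print(f"RTM-adjusted change: {effect:.1f}")  # 7 observed - 4.5 predicted = 2.5
```

In words: of the 7-point ‘improvement’ in this invented example, 4.5 points are expected from RTM alone, leaving a much smaller residual that might reflect treatment or any of the other non-specific factors discussed below.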
In order to make sense of these data, we need to consider that, during the study period, about half of the patients admitted to having had additional visits to non-homeopathic doctors, and 27% also received conventional drugs. In addition, they would have benefitted from:
- the benign history of the conditions they were suffering from,
- a placebo-effect,
- the care and attention they received,
- and all sorts of other non-specific effects.
So, considering these factors, what does this interesting re-analysis really tell us? My interpretation is as follows: the type of observational study that homeopaths are so fond of yields false-positive results. If we correct them – as the authors have done here for just one single factor, the RTM – the effect size gets significantly smaller. If we were able to correct them for some of the other factors mentioned above, the effect size would shrink more and more. And if we were able to correct them for all confounders, their results would almost certainly concur with those of rigorously controlled trials which demonstrate that homeopathic remedies are pure placebos.
I am quite sure that this interpretation is unpopular with homeopaths, but I am equally certain that it is correct.