I have often asked myself whether it is right or necessary to scientifically test things which are entirely implausible. Should we, for instance, test the effectiveness of treatments which have a very low prior probability of generating a positive effect, such as paranormal healing, homeopathy or Bach flower remedies? If you believe in the principles of evidence-based medicine, you might focus on the clinical evidence and see biological plausibility as secondary. If you are a basic scientist, you are likely to do the reverse.
A recent article addressed this issue. The author points out that evaluating the absurd is absurd. Specifically, he noted that the empirical evaluation of a therapy would normally assume a plausible rationale regarding the mechanism of action. However, examination of the historical background and underlying principles for reflexology, iridology, acupuncture, auricular acupuncture, and some herbal medicines, reveals a rationale founded on the principle of analogical correspondences, which is a common basis for magical thinking and pseudoscientific beliefs such as astrology and chiromancy. Where this is the case, it is suggested that subjecting these therapies to empirical evaluation may be tantamount to evaluating the absurd.
This makes a lot of sense – but is it really entirely true? Are there no legitimate reasons at all for testing alternative treatments that lack biological plausibility? Ten or twenty years ago, I would have disagreed with the notion that plausibility is an essential prerequisite for scientific testing; today, I have changed my mind a little, but not as much as to agree completely with the assumption. In other words, I still see more than one good reason why evaluating the absurd might be reasonable or even advisable.
- Using plausibility as the only arbiter of scientific ‘evaluability’ assumes that we understand everything about plausibility there is to know. Yet it might just be possible that we mis-categorise something as implausible simply because we are not yet fully aware of all the facts.
- Declaring one thing plausible and another implausible are not hard-and-fast verdicts but judgements which, at least to some degree, are subjective. Sceptics find the axioms of homeopathy utterly implausible, for instance – but ask a homeopath, and you will hear all sorts of explanations which, at least to them, sound plausible.
- If an implausible alternative treatment is in widespread use, we arguably have a responsibility to test it scientifically in order to demonstrate the truth about it (to those proponents of that therapy who are willing to accept that rigorous science can find the truth). If we fail to do this, it will be the enthusiasts of that therapy who conduct less than rigorous science and produce false positive results. In turn, this will give the impression that the treatment is effective and mislead consumers, politicians, journalists etc. Seen from this perspective, it might even be unethical to not do the science.
So, I am in two minds about this (which might be a reflection of the fact that, during different periods of my life, I have been a clinician, a basic scientist and a clinical researcher). I realise that plausibility and prior probability are important – much more so than I appreciated years ago. But I think they should not be the only criteria. The clinical evidence should not be pushed aside completely.
I’d be interested to learn your views on this tricky issue.
The mechanisms through which spinal manipulative therapy (SMT) exerts its alleged clinical effects are not well established. A new study investigated the effects of subject expectation on clinical outcomes.
Sixty healthy subjects underwent quantitative sensory testing to their legs and low backs. They were randomly assigned to receive a positive, negative, or neutral expectation instructional set regarding the effects of a specific SMT technique on pain perception. Following the instructional set, all subjects received SMT and underwent repeat sensory tests.
No inter-group differences in pain response were present in the lower extremity following SMT. However, a main effect for hypoalgesia was present. A significant interaction was present between change in pain perception and group assignment in the low back with participants receiving a negative expectation instructional set demonstrating significant hyperalgesia.
The authors concluded that this study provides preliminary evidence for the influence of a non-specific effect (expectation) on the hypoalgesia associated with a single session of SMT in normal subjects. We replicated our previous findings of hypoalgesia in the lower extremity associated with SMT to the low back. Additionally, the resultant hypoalgesia in the lower extremity was independent of an expectation instructional set directed at the low back. Conversely, participants receiving a negative expectation instructional set demonstrated hyperalgesia in the low back following SMT which was not observed in those receiving a positive or neutral instructional set.
More than 10 years ago, we addressed a similar issue by conducting a systematic review of all sham-controlled trials of SMT. Specifically, we wanted to summarize the evidence from sham-controlled clinical trials of SMT. Eight studies fulfilled our inclusion/exclusion criteria. Three trials (two on back pain and one on enuresis) were judged to be burdened with serious methodological flaws. The results of the three most rigorous studies (two on asthma and one on primary dysmenorrhea) did not suggest that SMT leads to therapeutic responses which differ from an inactive sham-treatment. We concluded that sham-controlled trials of SMT are sparse but feasible. The most rigorous of these studies suggest that SMT is not associated with clinically relevant specific therapeutic effects.
Taken together, these two articles provide intriguing evidence to suggest that SMT is little more than a theatrical placebo. Given that SMT is neither cheap nor devoid of risks, the onus is now on those who promote SMT, e.g. chiropractors, osteopaths and physiotherapists, to show that this is not true.
A recent meta-analysis evaluated the efficacy of acupuncture for treatment of irritable bowel syndrome (IBS) and arrived at bizarrely positive conclusions.
The authors state that they searched 4 electronic databases for double-blind, placebo-controlled trials investigating the efficacy of acupuncture in the management of IBS. Studies were screened for inclusion based on randomization, controls, and measurable outcomes reported.
Six RCTs were included in the meta-analysis, and 5 articles were of high quality. The pooled relative risk for clinical improvement with acupuncture was 1.75 (95%CI: 1.24-2.46, P = 0.001). Using two different statistical approaches, the authors confirmed the efficacy of acupuncture for treating IBS and concluded that acupuncture exhibits clinically and statistically significant control of IBS symptoms.
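As a quick plausibility check of the reported statistics (not of the underlying trials), the pooled estimate can be reverse-engineered from its confidence interval: relative risks are pooled on the log scale, where a 95% CI is symmetric around the point estimate. A minimal sketch using only the figures quoted above:

```python
import math

# Figures as reported in the meta-analysis
ci_low, ci_high = 1.24, 2.46

# On the log scale the 95% CI is symmetric around the pooled estimate
log_rr = (math.log(ci_low) + math.log(ci_high)) / 2
rr = math.exp(log_rr)                     # ~1.75, matching the reported RR

# Standard error and z-score implied by the interval (95% CI = +/- 1.96 SE)
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
z = log_rr / se                           # ~3.19
p = math.erfc(abs(z) / math.sqrt(2))      # two-sided p, ~0.001 as reported
```

The numbers are internally consistent; the problem, as we shall see, lies not in the arithmetic but in what was pooled.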
As IBS is a common and often difficult to treat condition, this would be great news! But is it true? We do not need to look far to find the embarrassing mistakes and – dare I say it? – lies on which this result was constructed.
The largest RCT included in this meta-analysis was neither placebo-controlled nor double blind; it was a pragmatic trial with the infamous ‘A+B versus B’ design. Here is the key part of its methods section: 116 patients were offered 10 weekly individualised acupuncture sessions plus usual care, 117 patients continued with usual care alone. Intriguingly, this was the ONLY one of the 6 RCTs with a significantly positive result!
The second largest study (as well as all the other trials) showed that acupuncture was no better than sham treatments. Here is the key quote from this trial: there was no statistically significant difference between acupuncture and sham acupuncture.
So, let me re-write the conclusions of this meta-analysis without spin, lies or hype. The results of this meta-analysis seem to indicate that:
- currently there are several RCTs testing whether acupuncture is an effective therapy for IBS,
- all the RCTs that adequately control for placebo-effects show no effectiveness of acupuncture,
- the only RCT that yields a positive result does not make any attempt to control for placebo-effects,
- this suggests that acupuncture is a placebo,
- it also demonstrates how misleading studies with the infamous ‘A+B versus B’ design can be,
- finally, this meta-analysis seems to be a prime example of scientific misconduct with the aim of creating a positive result out of data which are, in fact, negative.
Mohamed Khalifa is a therapist who works in Austria and has been practising manual therapy for more than 30 years. His treatment, the so-called “Khalifa therapy”, is based on rhythmically applying manual pressure to parts of the body. Khalifa claims to be able to speed up the self-healing processes of the human body. He has treated many top athletes from all over the world; however, his method has never been investigated in detail in interdisciplinary scientific studies.
Now the first RCT of Khalifa therapy has become available.
Rupture of the anterior cruciate ligament (ACL) is an injury which usually needs to be treated surgically. It does not heal spontaneously, although some claim this commonly accepted knowledge to be untrue. This randomized, controlled, observer-blinded, multicentre study was performed to test the effectiveness of Khalifa therapy for ACL rupture. Thirty patients with complete ACL rupture, verified by magnetic resonance imaging (MRI), were included. Study examinations (e.g. the International Knee Documentation Committee (IKDC) score) were performed at inclusion (t0). Patients were randomized to receive either standardised physiotherapy (ST) or, additionally, 1 hour of Khalifa therapy at the first session (STK). Twenty-four hours later, the study examinations were repeated (t1). Three months later, control MRI and follow-up examinations were performed (t2).
Initial status was comparable between the two groups. There was a highly significant difference in mean IKDC scores at t1 and t2. After 3 months, 47% of the STK patients, but no ST patient, demonstrated an end-to-end homogeneous ACL on MRI. Clinical and physical examination results differed significantly at t1 and t2. The study claims that ACL healing can be improved with manual therapy and that, after one treatment of specific pressure, physical activity could be performed without pain and with a nearly normal range of motion.
The authors of this study concluded that spontaneous healing of ACL rupture is possible within 3 months after lesion, enhanced by Khalifa therapy. The effect sizes of 1.6 and 2.0 standard deviations after treatment and after 3 months are considerable and prompt further work. Further progress in understanding the underlying mechanisms including placebo will be possible when more experience with the manual pressure therapy has been gathered by other therapists.
The authors of this RCT state that, according to common knowledge, it (the ACL) does not heal spontaneously. Other authors disagree with this notion:
Observations on 14 patients with ACL rupture, for instance, indicated that an acutely injured ACL may eventually heal spontaneously without the use of an extension brace, allowing a return to athletic activity. Another study suggested that an acutely injured ACL has healing capability. It also suggested that conservative management of the acute ACL injury can yield satisfactory results in a group of individuals who have low athletic demands and a continuous ACL on MRI, provided the patients are willing to accept the slight risk of late ACL reconstruction and meniscal injury.
So yes, the authors of the new RCT are correct in stating: spontaneous healing of ACL rupture is possible within 3 months … but the healing might indeed be SPONTANEOUS, i.e. unrelated to the Khalifa therapy. Before we can accept that Khalifa therapy is anything but a theatrical placebo, this RCT needs independent replication. Generally speaking, it seems a bad idea to make exaggerated claims on the basis of one single trial, particularly for treatments that are as implausible as this one.
Chronic neck pain is common and makes the life of many sufferers a misery. Pain-killers are helpful, of course, but who wants to take such medications long-term? Is there anything else these patients can do?
Massage therapy has been shown to work – but how often and for how long? This trial was designed to evaluate the optimal dose of massage for individuals with chronic neck pain. 228 individuals with chronic non-specific neck pain were recruited and randomized to 5 groups receiving various doses of massage:
- 30-minute treatments 2 or 3 times weekly
- 60-minute treatments once weekly
- 60-minute treatments twice weekly
- 60-minute treatments thrice weekly
- a 4-week period on a wait list
Neck-related dysfunction was assessed with the Neck Disability Index (range, 0-50 points) and pain intensity with a numerical rating scale (range, 0-10 points) at baseline and at 5 weeks.
The results suggested that 30-minute treatments were not significantly better than the waiting list control condition in terms of achieving a clinically meaningful improvement in neck dysfunction or pain, regardless of the frequency of treatments. In contrast, 60-minute treatments 2 and 3 times weekly significantly increased the likelihood of such improvement compared with the control condition in terms of both neck dysfunction and pain intensity.
The authors conclude that after 4 weeks of treatment, we found multiple 60-minute massages per week more effective than fewer or shorter sessions for individuals with chronic neck pain. Clinicians recommending massage and researchers studying this therapy should ensure that patients receive a likely effective dose of treatment.
So two or three hours of massage therapy per week seem to be optimal as a treatment for chronic neck pain. This would cost ~£200-300 per week! Who can or wants to afford this? And are there other options that might be less expensive and equally or more effective? For instance, is physiotherapeutic exercise an option?
I am not sure I know the answers to these questions but, before we recommend massage therapy to the many who chronically suffer from neck pain, we should find out.
The safety of the manual treatments such as spinal manipulation is a frequent subject on this blog. Few experts would disagree with the argument that more good data are needed – and what could be better data than that coming from a randomised clinical trial (RCT)?
The aim of this RCT was to investigate differences in occurrence of adverse events between three different combinations of manual treatment techniques used by manual therapists (i.e. chiropractors, naprapaths, osteopaths, physicians and physiotherapists) for patients seeking care for back and/or neck pain.
Participants were recruited among patients seeking care at the educational clinic of the Scandinavian College of Naprapathic Manual Medicine in Stockholm. 767 patients were randomized to one of three treatment arms:
- manual therapy (i.e. spinal manipulation, spinal mobilization, stretching and massage) (n = 249),
- manual therapy excluding spinal manipulation (n = 258)
- manual therapy excluding stretching (n = 260).
Treatments were provided by students in the seventh semester (of a total of 8). Adverse events were monitored via a questionnaire after each return visit and categorized into five levels:
- short minor,
- long minor,
- short moderate,
- long moderate,
This was based on the duration and/or severity of the event.
The most common adverse events were soreness in muscles, increased pain and stiffness. No differences were found between the treatment arms concerning the occurrence of these adverse events. Fifty-one percent of patients who received at least three treatments experienced at least one adverse event after one or more visits. Women more often had short moderate adverse events, and long moderate adverse events, than men.
The authors conclude that adverse events after manual therapy are common and transient. Excluding spinal manipulation or stretching does not affect the occurrence of adverse events. The most common adverse event is soreness in the muscles. Women report more adverse events than men.
What on earth is naprapathy? I hear you ask. Here is a full explanation from a naprapathy website:
Naprapathy is a form of bodywork that is focused on the manual manipulation of the spine and connective tissue. Based on the fundamental principles of osteopathy and chiropractic techniques, naprapathy is a holistic and integrative approach to restoring whole health. In fact, naprapathy often incorporates multiple, complimentary therapies, such as massage, nutritional counseling, electrical muscle stimulation and low-level laser therapy.
Naprapathy also targets vertebral subluxations, or physical abnormalities present that suggest a misalignment or injury of the spinal vertebrae. This analysis is made by a physical inspection of the musculoskeletal system, as well as visual observation. The practitioner will also conduct a lengthy interview with the client to help determine stress level and nutritional status as well. An imbalance along one or more of these lines may signal trouble within the musculoskeletal structure.
The naprapathy practitioner is particularly skilled in identifying restricted or stressed components of the fascial system, or connective tissue. It is believed that where constriction of muscles, ligaments, and tendons exists, there is impaired blood flow and nerve functioning. Naprapathy attempts to correct these blockages through hands-on manipulation and stretching of connective tissue. However, since this discipline embodies a holistic approach, the naprapathy practitioner is also concerned with their client’s emotional health. To that end, many practitioners are also trained in psychotherapy and even hypnotherapy.
So, now we know!
We also know that the manual therapies tested here cause adverse effects in about half of all patients. This figure ties in nicely with the ones we had regarding chiropractic: ~50% of all patients suffer mild to moderate adverse effects after chiropractic spinal manipulation, which usually last 2-3 days and can be strong enough to affect their quality of life. In addition, very serious complications have been noted, which luckily seem to be much rarer events.
In my view, this raises the question: DO THESE TREATMENTS GENERATE MORE GOOD THAN HARM? I fail to see any good evidence to suggest that they do – but, of course, I would be more than happy to revise this verdict, provided someone shows me the evidence.
Do you think that chiropractic is effective for asthma? I don’t – in fact, I know it isn’t because, in 2009, I published a systematic review of the available RCTs which showed quite clearly that the best evidence suggested chiropractic was ineffective for that condition.
But this is clearly not true, might some enthusiasts reply. What is more, they can even refer to a 2010 systematic review which indicates that chiropractic is effective; its conclusions speak a very clear language: …the eight retrieved studies indicated that chiropractic care showed improvements in subjective measures and, to a lesser degree objective measures… How on earth can this be?
I would not be surprised, if chiropractors claimed the discrepancy is due to the fact that Prof Ernst is biased. Others might point out that the more recent review includes more studies and thus ought to be more reliable. The newer review does, in fact, include about twice the number of studies as mine.
How come? Were plenty of new RCTs published during the 12 months that lay between the two publications? The answer is NO. But why then the discrepant conclusions?
The answer is much less puzzling than you might think. The ‘alchemists of alternative medicine’ regularly succeed in smuggling non-evidence into such reviews in order to beautify the overall picture and confirm their wishful thinking. The case of chiropractic for asthma does by no means stand alone, but it is a classic example of how we are being misled by charlatans.
Anyone who reads the full text of the two reviews mentioned above will find that they do, in fact, include exactly the same number of RCTs. The reason why they arrive at different conclusions is simple: the enthusiasts’ review added NON-EVIDENCE to the existing RCTs. To be precise, the authors included one case series, one case study, one survey, two randomized controlled trials (RCTs), one randomized patient and observer blinded cross-over trial, one single blind cross study design, and one self-reported impairment questionnaire.
Now, there is nothing wrong with case reports, case series, or surveys – except THEY TELL US NOTHING ABOUT EFFECTIVENESS. I would bet my last shirt that the authors know all of that; yet they make fairly firm and positive conclusions about effectiveness. As the RCT-results collectively happen to be negative, they even pretend that case reports etc. outweigh the findings of RCTs.
And why do they do that? Because they are interested in the truth, or because they don’t mind using alchemy in order to mislead us? Your guess is as good as mine.
The efficacy or effectiveness of medical interventions is, of course, best tested in clinical trials. The principle of a clinical trial is fairly simple: typically, a group of patients is divided (preferably at random) into two subgroups, one (the ‘verum’ group) is treated with the experimental treatment and the other (the ‘control’ group) with another option (often a placebo), and the eventual outcomes of the two groups are compared. If done well, such studies are able to exclude biases and confounding factors such that their findings allow causal inference. In other words, they can tell us whether an outcome was caused by the intervention per se or by some other factor such as the natural history of the disease, regression towards the mean etc.
A clinical trial is a research tool for testing hypotheses; strictly speaking, it tests the ‘null-hypothesis’: “the experimental treatment generates the same outcomes as the treatment of the control group”. If the trial shows no difference between the outcomes of the two groups, the null-hypothesis is confirmed. In this case, we commonly speak of a negative result. If the experimental treatment was better than the control treatment, the null-hypothesis is rejected, and we commonly speak of a positive result. In other words, clinical trials can only generate positive or negative results, because the null-hypothesis must either be confirmed or rejected – there are no grey tones between the black of a negative and the white of a positive study.
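The decision logic described above can be made concrete with a toy example. The sketch below (pure standard library; the trial numbers are invented purely for illustration) compares responder rates in a verum and a control group with a simple two-proportion z-test and labels the outcome positive or negative accordingly:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return z, p

# Hypothetical trial: 40/60 responders on verum vs 30/60 on placebo
z, p = two_proportion_z(40, 60, 30, 60)

# Null hypothesis: both groups respond equally often
if p < 0.05 and z > 0:
    verdict = "positive"   # null rejected, verum beat control
else:
    verdict = "negative"   # null confirmed (or verum was worse)
```

With these made-up numbers, p comes out just above 0.05, so the trial counts as negative: even a 17-percentage-point difference in response rates is not enough evidence in a trial of only 120 patients.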
For enthusiasts of alternative medicine, this can create a dilemma, particularly if there are lots of published studies with negative results. In this case, the totality of the available trial evidence is negative which means the treatment in question cannot be characterised as effective. It goes without saying that such an overall conclusion rubs the proponents of that therapy the wrong way. Consequently, they might look for ways to avoid this scenario.
One fairly obvious way of achieving this aim is to simply re-categorise the results. What, if we invented a new category? What, if we called some of the negative studies by a different name? What about NON-CONCLUSIVE?
That would be brilliant, wouldn’t it? We might end up with a simple statistic where the majority of the evidence is, after all, positive. And this, of course, would give the impression that the ineffective treatment in question is effective!
How exactly do we do this? We continue to call positive studies POSITIVE; we then call studies where the experimental treatment generated worse results than the control treatment (usually a placebo) NEGATIVE; and finally we call those studies where the experimental treatment created outcomes which were no different from placebo NON-CONCLUSIVE.
In the realm of alternative medicine, this ‘non-conclusive result’ method has recently become incredibly popular. Take homeopathy, for instance. The Faculty of Homeopathy proudly claim the following about clinical trials of homeopathy: Up to the end of 2011, there have been 164 peer-reviewed papers reporting randomised controlled trials (RCTs) in homeopathy. This represents research in 89 different medical conditions. Of those 164 RCT papers, 71 (43%) were positive, 9 (6%) negative and 80 (49%) non-conclusive.
This misleading nonsense was, of course, warmly received by homeopaths. The British Homeopathic Association, like many other organisations and individuals with an axe to grind lapped up the message and promptly repeated it: The body of evidence that exists shows that much more investigation is required – 43% of all the randomised controlled trials carried out have been positive, 6% negative and 49% inconclusive.
Let’s be clear what has happened here: the true percentage figures seem to show that 43% of studies (mostly of poor quality) suggest a positive result for homeopathy, while 57% of them (on average the ones of better quality) were negative. In other words, the majority of this evidence is negative. If we conducted a proper systematic review of this body of evidence, we would, of course, have to account for the quality of each study, and in this case we would have to conclude that homeopathy is not supported by sound evidence of effectiveness.
The little trick of applying the ‘NON-CONCLUSIVE’ method has thus turned this overall result upside down: black has become white! No wonder that it is so popular with proponents of all sorts of bogus treatments.
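The arithmetic of the trick is worth laying out explicitly. Using the Faculty of Homeopathy’s own figures quoted above, a few lines of code show how relabelling null-confirming trials flips the headline:

```python
# Figures quoted by the Faculty of Homeopathy (RCT papers to end of 2011)
total, positive = 164, 71

# Headline statistic produced by the 'non-conclusive' trick:
headline_positive_share = positive / total    # ~43%: sounds like a win

# But a trial whose verum arm failed to beat the control has CONFIRMED
# the null hypothesis -- that is a negative result, not a 'non-conclusive'
# one. Every paper that is not positive is therefore negative:
negative_share = (total - positive) / total   # ~57%: the majority is negative
```

Same data, opposite headline: counting null-confirming trials as what they are turns a 43%-positive story into a 57%-negative one.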
Whenever a new trial of an alternative intervention emerges which fails to confirm the wishful thinking of the proponents of that therapy, the world of alternative medicine is in turmoil. What can be done about yet another piece of unfavourable evidence? The easiest solution would be to ignore it, of course – and this is precisely what is often tried. But this tactic usually proves to be unsatisfactory; it does not neutralise the new evidence, and each time someone brings it up, one has to stick one’s head back into the sand. Rather than denying its existence, it would be preferable to have a tool which invalidates the study in question once and for all.
The ‘fatal flaw’ solution is simpler than anticipated! Alternative treatments are ‘very special’, and this notion must be emphasised, blown up beyond all proportions and used cleverly to discredit studies with unfavourable outcomes: the trick is simply to claim that studies with unfavourable results have a ‘fatal flaw’ in the way the alternative treatment was applied. As only the experts in the ‘very special’ treatment in question are able to judge the adequacy of their therapy, nobody is allowed to doubt their verdict.
Take acupuncture, for instance; it is an ancient ‘art’ which only the very best will ever master – at least that is what we are being told. So, all the proponents need to do in order to invalidate a trial, is read the methods section of the paper in full detail and state ‘ex cathedra’ that the way acupuncture was done in this particular study is completely ridiculous. The wrong points were stimulated, or the right points were stimulated but not long enough [or too long], or the needling was too deep [or too shallow], or the type of stimulus employed was not as recommended by TCM experts, or the contra-indications were not observed etc. etc.
As nobody can tell correct acupuncture from incorrect acupuncture, this ‘fatal flaw’ method is fairly fool-proof. It is also ever so simple: acupuncture-fans do not need to study the methods hard to find the ‘fatal flaw’; they only have to look at the result of a study – if it was favourable, the treatment was obviously done perfectly by highly experienced experts; if it was unfavourable, the therapists clearly must have been morons who picked up their acupuncture skills in a single weekend course. The reasons for this judgement can always be found or, if all else fails, invented.
And the end-result of the ‘fatal flaw’ method is most satisfactory; what is more, it can be applied to all alternative therapies – homeopathy, herbal medicine, reflexology, Reiki healing, colonic irrigation…the method works for all of them! What is even more, the ‘fatal flaw’ method is adaptable to other aspects of scientific investigations such that it fits every conceivable circumstance.
An article documenting the ‘fatal flaw’ has to be published, of course – but this is no problem! There are dozens of dodgy alternative medicine journals which are only too keen to print even the most far-fetched nonsense as long as it promotes alternative medicine in some way. Once this paper is published, the proponents of the therapy in question have a comfortable default position to rely on each time someone cites the unfavourable study: “WHAT NOT THAT STUDY AGAIN! THE TREATMENT HAS BEEN SHOWN TO BE ALL WRONG. NOBODY CAN EXPECT GOOD RESULTS FROM A THERAPY THAT WAS NOT CORRECTLY ADMINISTERED. IF YOU DON’T HAVE BETTER STUDIES TO SUPPORT YOUR ARGUMENTS, YOU BETTER SHUT UP.”
There might, in fact, be better studies – but chances are that the ‘other side’ has already documented a ‘fatal flaw’ in them too.
There is hardly a discussion about homeopathy in which an apologist does not eventually state: HOMEOPATHY CANNOT BE A PLACEBO, BECAUSE IT WORKS IN ANIMALS!!! Those who are not well-versed in this subject tend to be impressed, and the argument has won many consumers over to the dark side, I am sure. But is it really correct?
The short answer to this question is NO.
Pavlov discovered the phenomenon of ‘conditioning’ in animals, and ‘conditioning’ is considered to be a major part of the placebo-response. So, depending on the circumstances, animals do respond to placebo (my dog, for instance, used to go into a distinct depressive mood when he saw me packing a suitcase).
Then there is the fact that the animal’s response might be less important than the owner’s reaction to homeopathic treatment. This is particularly important with pets, of course. Homeopathy-believing pet owners might over-interpret the pet’s response and report that the homeopathic remedy has worked wonders when, in fact, it has made no difference.
Finally, there may be some situations where neither of the above two phenomena can play a decisive role. Homeopaths like to cite studies where entire herds of cows were treated homeopathically to prevent mastitis, a common problem in dairy cows. It is unlikely that conditioning or the wishful thinking of the owner are decisive in such a study. Let’s see whether homeopathy-promoters will also be fond of this new study on exactly this subject.
New Zealand vets compared clinical and bacteriological cure rates of clinical mastitis following treatment with either antimicrobials or homeopathic preparations. They used 7 spring-calving herds from the Waikato region of New Zealand to source cases of clinical mastitis (n=263 glands) during the first 90 days following calving. Duplicate milk samples were collected for bacteriology from each clinically infected gland at diagnosis and 25 (SD 5.3) days after the initial treatment. Affected glands were treated with either an antimicrobial formulation or a homeopathic remedy. Generalised linear models with binomial error distribution and logit link were used to analyse the proportion of cows that presented clinical treatment cures and the proportion of glands that were classified as bacteriological cures, based on initial and post-treatment milk samples.
The results show that the mean cumulative incidence of clinical mastitis was 7% (range 2-13% across herds) of cows. Streptococcus uberis was the most common pathogen isolated from culture-positive samples from affected glands (140/209; 67%). The clinical cure rate was higher for cows treated with antimicrobials (107/113; 95%) than for cows treated with homeopathic remedies (72/114; 63%) (p<0.001) based on the observance of clinical signs following initial treatment. Across all pathogen types bacteriological cure rate at gland level was higher for those cows treated with antimicrobials (75/102; 74%) than for those treated with a homeopathic preparation (39/107; 36%) (p<0.001).
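The reported difference in clinical cure rates is not subtle. A quick two-proportion z-test on the quoted figures (a back-of-the-envelope check, not a re-analysis of the authors’ generalised linear models) confirms how decisive it is:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, math.erfc(abs(z) / math.sqrt(2))

# Clinical cure rates as reported: antimicrobials 107/113 vs homeopathy 72/114
z, p = two_proportion_z(107, 113, 72, 114)   # z well above 5, p far below 0.001
```

The same test on the bacteriological cure rates (75/102 vs 39/107) is similarly decisive. Whichever way you slice these data, the homeopathic remedies lost.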
The authors conclude that homeopathic remedies had significantly lower clinical and bacteriological cure rates compared with antimicrobials when used to treat post-calving clinical mastitis where S. uberis was the most common pathogen. The proportion of cows that needed retreatment was significantly higher for the homeopathic treated cows. This, combined with lower bacteriological cure rates, has implications for duration of infection, individual cow somatic cell count, costs associated with treatment and animal welfare.
Yes, I know, this is just one single study, and we need to consider the totality of the reliable evidence. Currently, there are 203 clinical trials of homeopathic treatments of animals; and they are being reviewed at this very moment (unfortunately by a team that is not known for its objective stance on homeopathy). So, we will have to wait and see. When, in 1999, A. Vickers reviewed all pre-clinical studies, including those on animals, he concluded that there is a lack of independent replication of any pre-clinical research in homoeopathy. In the few instances where a research team has set out to replicate the work of another, either the results were negative or the methodology was questionable.
All this is to say that, until truly convincing evidence to the contrary is available, the homeopaths’ argument ‘HOMEOPATHY CANNOT BE A PLACEBO, BECAUSE IT WORKS IN ANIMALS!!!’ is, in my view, as weak as the dilution of their remedies.