Reiki is a form of energy healing that has evidently become so popular that, according to the ‘Shropshire Star’, even stressed hedgehogs are now being treated with this therapy. In case you argue that this publication is not at the cutting edge of reporting scientific advances, you may have a point. So, let us see what evidence we can find on this amazing intervention.
A recent systematic review of the therapeutic effects of Reiki concludes that the serious methodological and reporting limitations of the limited existing Reiki studies preclude a definitive conclusion on its effectiveness. High-quality randomized controlled trials are needed to address the effectiveness of Reiki over placebo. Considering that this article was published in the JOURNAL OF ALTERNATIVE AND COMPLEMENTARY MEDICINE, this is a fairly damning verdict. The notion that Reiki is but a theatrical placebo recently received more support from a new clinical trial.
This pilot study examined the effects of Reiki therapy and companionship on improvements in quality of life, mood, and symptom distress during chemotherapy. Thirty-six breast cancer patients received usual care, Reiki, or a companion during chemotherapy. Data were collected from patients while they were receiving usual care. Subsequently, patients were randomized to either receive Reiki or a companion during chemotherapy. Questionnaires assessing quality of life, mood, symptom distress, and Reiki acceptability were completed at baseline and chemotherapy sessions 1, 2, and 4. Reiki was rated relaxing and caused no side effects. Both Reiki and companion groups reported improvements in quality of life and mood that were greater than those seen in the usual care group.
The authors of this study conclude that interventions during chemotherapy, such as Reiki or companionship, are feasible, acceptable, and may reduce side effects.
This is an odd conclusion, if there ever was one. Clearly the ‘companionship’ group was included to see whether Reiki has effects beyond simply providing sympathetic attention. The results show that this is not the case. It follows, I think, that Reiki is a placebo; its perceived relaxing effects are the result of non-specific phenomena which have nothing to do with Reiki per se. The fact that the authors fail to spell this out more clearly makes me wonder whether they are researchers or promoters of Reiki.
Some people will feel that it does not matter how Reiki works, the main thing is that it does work. I beg to differ!
If its effects are due to nothing else than attention and companionship, we do not need ‘trained’ Reiki masters to do the treatment; anyone who has time, compassion and sympathy can do it. More importantly, if Reiki is a placebo, we should not mislead people that some supernatural energy is at work. This only promotes irrationality – and, as Voltaire once said: those who make you believe in absurdities can make you commit atrocities.
Acute tonsillitis (AT) is a prevalent upper respiratory tract infection, particularly in children. The cause is usually a viral or, less commonly, a bacterial infection. Treatment is symptomatic and usually consists of ample fluid intake and painkillers; antibiotics are rarely indicated, even if the infection is bacterial in nature. The condition is self-limiting, and symptoms normally subside after about one week.
Homeopaths believe that their remedies are effective for AT – but is there any evidence? A recent trial seems to suggest there is.
It aimed, according to its authors, to determine the efficacy of a homeopathic complex on the symptoms of acute viral tonsillitis in African children in South Africa.
The double-blind, placebo-controlled RCT was a 6-day “pilot study” and included 30 children aged 6 to 12 years, with acute viral tonsillitis. Participants took two tablets 4 times per day. The treatment group received lactose tablets medicated with the homeopathic complex (Atropa belladonna D4, Calcarea phosphoricum D4, Hepar sulphuris D4, Kalium bichromat D4, Kalium muriaticum D4, Mercurius protoiodid D10, and Mercurius biniodid D10). The placebo consisted of the unmedicated vehicle only. The Wong-Baker FACES Pain Rating Scale was used for measuring pain intensity, and a Symptom Grading Scale assessed changes in tonsillitis signs and symptoms.
The results showed that the treatment group had a statistically significant improvement in the following symptoms compared with the placebo group: pain associated with tonsillitis, pain on swallowing, erythema and inflammation of the pharynx, and tonsil size.
The authors drew the following conclusions: the homeopathic complex used in this study exhibited significant anti-inflammatory and pain-relieving qualities in children with acute viral tonsillitis. No patients reported any adverse effects. These preliminary findings are promising; however, the sample size was small and therefore a definitive conclusion cannot be reached. A larger, more inclusive research study should be undertaken to verify the findings of this study.
Personally, I agree only with the latter part of the conclusion and very much doubt that this study was able to “determine the efficacy” of the homeopathic product used. The authors themselves call their trial a “pilot study”. Such projects are not meant to determine efficacy but are usually designed to determine the feasibility of a trial design in order to subsequently mount a definitive efficacy study.
Moreover, I have considerable doubts about the impartiality of the authors. Their affiliation is “Department of Homoeopathy, University of Johannesburg, Johannesburg, South Africa”, and their article was published in a journal known to be biased in favour of homeopathy. These circumstances in themselves might not be all that important, but what makes me more than a little suspicious is this sentence from the introduction of their abstract:
“Homeopathic remedies are a useful alternative to conventional medications in acute uncomplicated upper respiratory tract infections in children, offering earlier symptom resolution, cost-effectiveness, and fewer adverse effects.”
A useful alternative to conventional medications (there are no conventional drugs for this condition) for earlier symptom resolution?
If it is true that the usefulness of homeopathic remedies has been established, why conduct the study?
If the authors were so convinced of this notion (for which there is, of course, no good evidence) how can we assume they were not biased in conducting this study?
I think that, in order to agree that a homeopathic remedy generates effects that differ from those of placebo, we need a proper (not a pilot) study, published in a journal of high standing by unbiased scientists.
Rigorous research into the effectiveness of a therapy should tell us the truth about the ability of this therapy to treat patients suffering from a given condition — perhaps not one single study, but the totality of the evidence (as evaluated in systematic reviews) should achieve this aim. Yet, in the realm of alternative medicine (and probably not just in this field), such reviews are often highly contradictory.
A concrete example might explain what I mean.
There are numerous systematic reviews assessing the effectiveness of acupuncture for fibromyalgia syndrome (FMS). It is safe to assume that the authors of these reviews have all conducted comprehensive searches of the literature in order to locate all the published studies on this subject. Subsequently, they have evaluated the scientific rigor of these trials and summarised their findings. Finally they have condensed all of this into an article which arrives at a certain conclusion about the value of the therapy in question. Understanding this process (outlined here only very briefly), one would expect that all the numerous reviews draw conclusions which are, if not identical, at least very similar.
However, the disturbing fact is that they are not remotely similar. Here are two which, in fact, are so different that one could assume they have evaluated a set of totally different primary studies (which, of course, they have not).
One recent (2014) review concluded that acupuncture for FMS has a positive effect, and acupuncture combined with western medicine can strengthen the curative effect.
Another recent review concluded that a small analgesic effect of acupuncture was present, which, however, was not clearly distinguishable from bias. Thus, acupuncture cannot be recommended for the management of FMS.
How can this be?
In contrast to most systematic reviews in conventional medicine, systematic reviews of alternative therapies are almost invariably based on a small number of primary studies (in the above case, the total number was only 7!). The quality of these trials is often low (all reviews therefore end with the somewhat meaningless conclusion that more and better studies are needed).
So, the situation with primary studies of alternative therapies for inclusion into systematic reviews usually is as follows:
- the number of trials is low
- the quality of trials is even lower
- the results are not uniform
- the majority of the poor quality trials show a positive result (bias tends to generate false positive findings)
- the few rigorous trials yield a negative result
Unfortunately this means that the authors of systematic reviews summarising such confusing evidence often seem to feel at liberty to project their own preconceived ideas into their overall conclusion about the effectiveness of the treatment. Often the researchers are in favour of the therapy in question – in fact, this usually is precisely the attitude that motivated them to conduct a review in the first place. In other words, the frequently murky state of the evidence (as outlined above) can serve as a welcome invitation for personal bias to exert its effect in skewing the overall conclusion. The final result is that the readers of such systematic reviews are being misled.
Authors who are biased in favour of the treatment will tend to stress that the majority of the trials are positive. Therefore the overall verdict has to be positive as well, in their view. The fact that most trials are flawed does not usually bother them all that much (I suspect that many fail to comprehend the effects of bias on the study results); they merely add to their conclusions that “more and better trials are needed” and believe that this meek little remark is sufficient evidence for their ability to critically analyse the data.
Authors who are not biased and have the necessary skills for critical assessment, on the other hand, will insist that most trials are flawed and therefore their results must be categorised as unreliable. They will also emphasise the fact that there are a few reliable studies and clearly point out that these are negative. Thus their overall conclusion must be negative as well.
In the end, enthusiasts will conclude that the treatment in question is at least promising, if not recommendable, while real scientists will rightly state that the available data are too flimsy to demonstrate the effectiveness of the therapy; as it is wrong to recommend unproven treatments, they will not recommend the treatment for routine use.
The difference between the two might just seem marginal – but, in fact, it is huge: IT IS THE DIFFERENCE BETWEEN MISLEADING PEOPLE AND GIVING RESPONSIBLE ADVICE; THE DIFFERENCE BETWEEN VIOLATING AND ADHERING TO ETHICAL STANDARDS.
A reader of this blog recently sent me the following message: “Looks like this group followed your recent post about how to perform a CAM RCT!” A link directed me to a new trial of ear-acupressure. Today is ‘national acupuncture and oriental medicine day’ in the US, a good occasion perhaps to have a critical look at it.
The aim of this study was to assess the effectiveness of ear acupressure and massage vs. control in the improvement of pain, anxiety and depression in persons diagnosed with dementia.
For this purpose, the researchers recruited a total of 120 elderly dementia patients institutionalized in residential homes. The participants were randomly allocated to three groups:
- Control group – they continued with their routine activities;
- Ear acupressure intervention group – they received ear acupressure treatment (pressure was applied to acupressure points on the ear);
- Massage therapy intervention group – they received relaxing massage therapy.
Pain, anxiety and depression were assessed with the Doloplus2, Cornell and Campbell scales. The study was carried out over five months: three months of experimental treatment and two months with no treatment. The assessments were done at baseline, each month during the treatment, and at one and two months of follow-up.
A total of 111 participants completed the study. The ear acupressure intervention group showed better improvements than the two other groups in relation to pain and depression during the treatment period and at one month of follow-up. The best improvement in pain was achieved in the last (3rd) month of ear acupressure treatment. The best results regarding anxiety were also observed in the last month of treatment.
The authors concluded that ear acupressure and massage therapy showed better results than the control group in relation to pain, anxiety and depression. However, ear acupressure achieved more improvements.
The question is: IS THIS A RIGOROUS TRIAL?
My answer would be NO.
Now I had better explain why, hadn’t I?
If we look at them critically, the results of this trial might merely prove that spending some time with a patient, being nice to her, administering a treatment that involves time and touch, etc. yields positive changes in subjective experiences of pain, anxiety and depression. Thus the results of this study might have nothing to do with the therapies per se.
And why would acupressure be more successful than massage therapy? Massage therapy is an ‘old hat’ for many patients; by contrast, acupressure is exotic and relates to mystical life forces etc. Features like that have the potential to maximise the placebo-response. Therefore it is conceivable that they have contributed to the superiority of acupressure over massage.
What I am saying is that the results of this trial can be interpreted in not just one but several ways. The main reason for that is the fact that the control group were not given an acceptable placebo, one that was indistinguishable from the real treatment. Patients were fully aware of what type of intervention they were getting. Therefore their expectations, possibly heightened by the therapists, determined the outcomes. Consequently there were factors at work which were totally beyond the control of the researchers and a clear causal link between the therapy and the outcome cannot be established.
An RCT that is aimed to test the effectiveness of a therapy but fails to establish such a causal link beyond reasonable doubt cannot be characterised as a rigorous study, I am afraid.
Sorry! Did I spoil your ‘national acupuncture and oriental medicine day’?
One of the most commonly ‘accepted’ indications for acupuncture is anxiety. Many trials have suggested that it is effective for that condition. But is this really true? To find out, we need someone to conduct a systematic review or meta-analysis.
Korean researchers have just published such a paper; they wanted to assess the preoperative anxiolytic efficacy of acupuncture therapy and therefore conducted a meta-analysis of all RCTs on the subject. Four electronic databases were searched up to February 2014. Data were included in the meta-analysis from RCTs in which groups receiving preoperative acupuncture treatment were compared with control groups receiving a placebo for anxiety.
Fourteen publications with a total of 1,034 patients were included. Six RCTs, using the State-Trait Anxiety Inventory-State (STAI-S), reported that acupuncture interventions led to greater reductions in preoperative anxiety relative to sham acupuncture. A further eight publications, employing visual analogue scales, also indicated significant differences in preoperative anxiety amelioration between acupuncture and sham acupuncture.
The authors concluded that acupuncture therapy aiming at reducing preoperative anxiety has a statistically significant effect relative to placebo or nontreatment conditions. Well-designed and rigorous studies that employ large sample sizes are necessary to corroborate this finding.
From these conclusions most casual readers might get the impression that acupuncture is indeed effective. One has to dig a bit deeper to realise that this is perhaps not so.
Why? Because the quality of the primary studies was often dismally poor. Most did not even mention adverse effects which, in my view, is a clear breach of publication ethics. What is more, all the studies were wide open to bias. The authors of the meta-analysis include in their results section the following short paragraph:
The 14 included studies exhibited various degrees of bias susceptibility (Figure 2 and Figure 3). The agreement rate, as measured using Cohen’s kappa, was 0.8 . Only six studies reported concealed allocation; the other six described a method of adequate randomization, although the word “randomization” appeared in all of the articles. Thirteen studies prevented blinding of the participants. Participants in these studies had no previous experience of acupuncture. According to STRICTA, two studies enquired after patients’ beliefs as a group: there were no significant differences [20, 24].
There is a saying amongst experts about such meta-analyses: RUBBISH IN, RUBBISH OUT. It describes the fact that several poor studies, pooled meta-analytically, can never give a reliable result.
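The point is easy to demonstrate with a toy simulation (an illustrative sketch only; the numbers are invented and have nothing to do with any particular review): if every small primary study carries the same systematic bias, inverse-variance pooling produces a precise-looking but entirely spurious pooled effect.

```python
import random, math

random.seed(1)

def biased_trial(n=50, bias=0.4):
    """Simulate one small trial of a treatment with NO true effect,
    but with a systematic bias (e.g. inadequate blinding) of 0.4 SD."""
    treat = [random.gauss(bias, 1) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1) for _ in range(n)]
    diff = sum(treat) / n - sum(ctrl) / n
    se = math.sqrt(2 / n)  # SE of a mean difference for unit-variance data
    return diff, se

# Fixed-effect (inverse-variance) pooling of 10 such biased trials
trials = [biased_trial() for _ in range(10)]
w = [1 / se**2 for _, se in trials]
pooled = sum(wi * d for (d, _), wi in zip(trials, w)) / sum(w)
pooled_se = math.sqrt(1 / sum(w))
z = pooled / pooled_se
print(f"pooled effect = {pooled:.2f} (true effect = 0), z = {z:.1f}")
```

The pooled estimate converges on the bias, not on the true effect of zero; adding more equally flawed studies only makes the wrong answer look more certain.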
This does, however, not mean that such meta-analyses are necessarily useless. If the authors prominently (in the abstract) stress that the quality of the primary studies was wanting and that therefore the overall result is unreliable, they might inspire future researchers to conduct more rigorous trials and thus generate progress. Most importantly, by insisting on pointing out these limitations and by not drawing positive conclusions from flawed data, they would avoid misleading those health care professionals – and let’s face it, they are the majority – who merely read the abstract or even just the conclusions of such articles.
The authors of this review have failed to do any of this; they and the journal EBCAM have thus done a disservice to us all by contributing to the constant drip of misleading and false-positive information about the value of acupuncture.
After the usually challenging acute therapy is behind them, cancer patients are often desperate to find a therapy that might improve their wellbeing. At that stage they may suffer from a wide range of symptoms which can seriously limit their quality of life. Any treatment that can be shown to restore them to their normal mental and physical health would be more than welcome.
Most homeopaths believe that their remedies can do just that, particularly if they are tailored not to the disease but to the individual patient. Sadly, the evidence that this might be so is almost non-existent. Now, a new trial has become available; it was conducted by Jennifer Poole, a chartered psychologist and registered homeopath, and researcher and teacher at Nemeton Research Foundation, Romsey.
The aim of this study was to explore the benefits of a three-month course of individualised homeopathy (IH) for survivors of cancer. Fifteen survivors of any type of cancer were recruited from a walk-in cancer support centre. Conventional treatment had to have taken place within the last three years. Patients saw a homeopath who prescribed IH. After three months of IH, they scored their total, physical and emotional wellbeing using the Functional Assessment of Chronic Illness Therapy for Cancer (FACIT-G). The results show that 11 of the 14 women had statistically positive outcomes for emotional, physical and total wellbeing.
The conclusions of the author are clear: Findings support previous research, suggesting CAM or IH could be beneficial for survivors of cancer.
This article was published in the NURSING TIMES, and the editor added a footnote informing us that “This article has been double-blind peer-reviewed”.
I find this surprising. A decent peer-review should have picked up the point that a study of that nature cannot possibly produce results which tell us anything about the benefits of IH. The reasons for this are fairly obvious:
- there was no control group,
- therefore the observed outcomes are most likely due to 1) natural history, 2) placebo, 3) regression towards the mean and 4) social desirability; it seems most unlikely that IH had anything to do with the result
- the sample size was tiny,
- the patients elected to receive IH, which means that they had high expectations of a positive outcome,
- only subjective outcome measures were used,
- there is no good previous research suggesting that IH benefits cancer patients.
On the last point, a recent systematic review showed that the studies available on this topic had mixed results either showing a significantly greater improvement in QOL in the intervention group compared to the control group, or no significant difference between groups. The authors concluded that there existed significant gaps in the evidence base for the effectiveness of CAM on QOL in cancer survivors. Further work in this field needs to adopt more rigorous methodology to help support cancer survivors to actively embrace self-management and effective CAMs, without recommending inappropriate interventions which are of no proven benefit.
All this new study might tell us is that IH did not seem to harm these patients – but even this finding is not certain; to be sure, we would need to include many more patients. Any conclusions about the effectiveness of IH are totally unwarranted. But are there ANY generalizable conclusions that can be drawn from this article? Yes, I can think of a few:
- Some cancer patients can be persuaded to try the most implausible treatments.
- Some journals will publish any rubbish.
- Some peer-reviewers fail to spot the most obvious defects.
- Some ‘researchers’ haven’t got a clue.
- Attempts to mislead us about the value of homeopathy are incessant.
One might argue that this whole story is too trivial for words; who cares what dodgy science is published in the NURSING TIMES? But I think it does matter – not so much because of this one silly article itself, but because similarly poor research with similarly ridiculous conclusions is currently published almost every day. Subsequently it is presented to the public as meaningful science heralding important advances in medicine. It matters because this constant drip of bogus research eventually influences public opinion and determines far-reaching health care decisions.
Many proponents of alternative medicine seem somewhat suspicious of research; they have obviously understood that it might not produce the positive result they had hoped for; after all, good research tests hypotheses and does not necessarily confirm beliefs. At the same time, they are often tempted to conduct research: this is perceived as being good for the image and, provided the findings are positive, also good for business.
Therefore they seem to be tirelessly looking for a study design that cannot ‘fail’, i.e. one that avoids the risk of negative results but looks respectable enough to be accepted by ‘the establishment’. For these enthusiasts, I have good news: here is the study design that cannot fail.
It is perhaps best outlined as a concrete example; for reasons that will become clear very shortly, I have chosen reflexology as a treatment of diabetic neuropathy, but you can, of course, replace both the treatment and the condition as it suits your needs. Here is the outline:
- recruit a group of patients suffering from diabetic neuropathy – say 58, that will do nicely,
- randomly allocate them to two groups,
- the experimental group receives regular treatments by a motivated reflexologist,
- the controls get no such therapy,
- both groups also receive conventional treatments for their neuropathy,
- the follow-up is 6 months,
- the following outcome measures are used: pain reduction, glycemic control, nerve conductivity, and thermal and vibration sensitivities,
- the results show that the reflexology group experience more improvements in all outcome measures than those of control subjects,
- your conclusion: This study exhibited the efficient utility of reflexology therapy integrated with conventional medicines in managing diabetic neuropathy.
This method is fool-proof, trust me, I have seen it often enough being tested, and never has it generated disappointment. It cannot fail because it follows the notorious A+B versus B design (I know, I have mentioned this several times before on this blog, but it is really important, I think): both patient groups receive the essential mainstream treatment, and the experimental group receives a useless but pleasant alternative treatment in addition. The alternative treatment involves touch, time, compassion, empathy, expectations, etc. All of these elements will inevitably have positive effects, and they can even be used to increase the patients’ compliance with the conventional treatments that are being applied in parallel. Thus all outcome measures will be better in the experimental group compared to the control group.
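The logic of the A+B versus B design can be sketched in a few lines of code (a hypothetical simulation; the numbers are invented and do not come from any real trial): even when the add-on treatment A has exactly zero specific effect, the nonspecific boost from touch, time and attention guarantees that the A+B group comes out ahead.

```python
import random

random.seed(0)

def improvement(nonspecific_boost):
    """Symptom improvement under usual care (B); any add-on ritual
    contributes a nonspecific boost via attention, touch and expectation,
    even if its specific effect is exactly zero."""
    usual_care = random.gauss(5.0, 2.0)
    return usual_care + nonspecific_boost

b_only   = [improvement(0.0) for _ in range(29)]  # B: usual care alone
a_plus_b = [improvement(1.5) for _ in range(29)]  # A+B: usual care + placebo ritual

mean = lambda xs: sum(xs) / len(xs)
print(f"A+B improves by {mean(a_plus_b):.1f}, B alone by {mean(b_only):.1f}")
# A+B 'wins' although treatment A has zero specific effect
```

Because nothing in this design subtracts the placebo response, the comparison can only favour A+B; a negative result is close to impossible from the outset.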
The overall effect is pure magic: even an utterly ineffective treatment will appear as being effective – the perfect method for producing false-positive results.
And now we hopefully all understand why this study design is so very popular in alternative medicine. It looks solid – after all, it’s an RCT!!! – and it thus convinces even mildly critical experts of the notion that the useless treatment is something worthwhile. Consequently the useless treatment will become accepted as ‘evidence-based’, will be used more widely and perhaps even reimbursed from the public purse. Business will be thriving!
And why did I employ reflexology for diabetic neuropathy? Is that example not far-fetched? Not a bit! I used it because it describes precisely a study that has just been published. Of course, I could also have taken the chiropractic trial from my last post, or dozens of other studies following the A+B versus B design – it is so brilliantly suited for misleading us all.
Chiropractors, like other alternative practitioners, use their own unique diagnostic tools for identifying the health problems of their patients. One such test is the Kemp’s test, a manual test used by most chiropractors to diagnose problems with lumbar facet joints. The chiropractor rotates the torso of the patient, while her pelvis is fixed; if manual counter-rotative resistance on one side of the pelvis by the chiropractor causes lumbar pain for the patient, it is interpreted as a sign of lumbar facet joint dysfunction which, in turn would be treated with spinal manipulation.
All diagnostic tests have to fulfil certain criteria in order to be useful. It is therefore interesting to ask whether the Kemp’s test meets these criteria. This is precisely the question addressed in a recent paper. Its objective was to evaluate the existing literature regarding the accuracy of the Kemp’s test in the diagnosis of facet joint pain compared to a reference standard.
All diagnostic accuracy studies comparing the Kemp’s test with an acceptable reference standard were located and included in the review. Subsequently, all studies were scored for quality and internal validity.
Five articles met the inclusion criteria. Only two studies had a low risk of bias, and three had a low concern regarding applicability. Pooling of data from studies using similar methods revealed that the test’s negative predictive value was the only diagnostic accuracy measure above 50% (56.8%, 59.9%).
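For readers unfamiliar with these measures, here is how predictive values are derived from a two-by-two table of test results against a reference standard (the counts below are hypothetical and chosen purely for illustration; they are not the pooled data of the review):

```python
# Hypothetical 2x2 table: index test vs reference standard
tp, fp, fn, tn = 30, 25, 20, 35  # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)  # proportion of true cases the test detects
specificity = tn / (tn + fp)  # proportion of non-cases the test clears
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```

A negative predictive value barely above 50% means that a negative Kemp’s test is scarcely more informative than a coin toss; note also that predictive values shift with the prevalence of the condition in the population tested.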
The authors concluded that currently, the literature supporting the use of the Kemp’s test is limited and indicates that it has poor diagnostic accuracy. It is debatable whether clinicians should continue to use this test to diagnose facet joint pain.
The problem with chiropractic diagnostic methods is not confined to the Kemp’s test, but extends to most tests employed by chiropractors. Why should this matter?
If diagnostic methods are not reliable, they produce either false-positive or false-negative findings. When a false-negative diagnosis is made, the chiropractor might not treat a condition that needs attention. Much more common in chiropractic routine, I guess, are false-positive diagnoses. This means chiropractors frequently treat conditions which the patient does not have. This, in turn, is not just a waste of money and time but also, if the ensuing treatment is associated with risks, an unnecessary exposure of patients to getting harmed.
The authors of this review, chiropractors from Canada, should be praised for tackling this subject. However, their conclusion that “it is debatable whether clinicians should continue to use this test to diagnose facet joint pain” is in itself highly debatable: the use of nonsensical diagnostic tools can only result in nonsense and should therefore be disallowed.
Most of the underlying assumptions of alternative medicine (AM) lack plausibility. Whenever this is the case, so goes the argument put forward by an international team of researchers in a recent paper, there are difficulties involved in obtaining a valid statistical significance in clinical studies.
Using a mostly statistical approach, they argue that, since the prior probability of a research hypothesis is directly related to its scientific plausibility, the commonly used frequentist statistics, which do not account for this probability, are unsuitable for studies exploring matters disconnected to various degrees from science. Any statistical significance obtained in this field should be considered with great caution and may be better applied to more plausible hypotheses (like the placebo effect) than to the specific efficacy of the intervention.
The researchers conclude that, since achieving meaningful statistical significance is an essential step in the validation of medical interventions, AM practices, producing only outcomes inherently resistant to statistical validation, appear not to belong to modern evidence-based medicine.
To emphasize their arguments, the researchers make the following additional points:
- It is often forgotten that frequentist statistics, commonly used in clinical trials, provides only indirect evidence in support of the hypothesis examined.
- The p-value inherently tends to exaggerate the support for the hypothesis tested, especially if the scientific plausibility of the hypothesis is low.
- When the rationale for a clinical intervention is disconnected from the basic principles of science, as in case of complementary alternative medicines, any positive result obtained in clinical studies is more reasonably ascribable to hypotheses (generally to placebo effect) other than the hypothesis on trial, which commonly is the specific efficacy of the intervention.
- Since meaningful statistical significance as a rule is an essential step to validation of a medical intervention, complementary alternative medicine cannot be considered evidence-based.
Further explanations can be found in the discussion of the article where the authors argue that the quality of the hypothesis tested should be consistent with sound logic and science and therefore have a reasonable prior probability of being correct. As a rule of thumb, assuming a “neutral” attitude towards the null hypothesis (odds = 1:1), a p-value of 0.01 or, better, 0.001 should suffice to give a satisfactory posterior probability of 0.035 and 0.005 respectively.
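The relationship between a p-value and the posterior probability of the null hypothesis can be sketched with the well-known Sellke-Berger bound on the Bayes factor (this is my own illustration, not the calculation used in the paper, whose exact figures depend on its specific assumptions):

```python
import math

def posterior_prob_null(p, prior_odds_null=1.0):
    """Lower bound on the posterior probability of the null hypothesis,
    using the Sellke-Berger minimum Bayes factor -e*p*ln(p), valid for
    p < 1/e. This is the case most favourable to the alternative; real
    posterior probabilities are at least this large."""
    bf = -math.e * p * math.log(p)      # minimum Bayes factor, H0 vs H1
    post_odds = prior_odds_null * bf
    return post_odds / (1 + post_odds)

print(round(posterior_prob_null(0.05), 3))   # → 0.289
print(round(posterior_prob_null(0.01), 3))   # → 0.111
print(round(posterior_prob_null(0.001), 3))  # → 0.018
```

Even under a neutral 1:1 prior, a conventionally ‘significant’ p of 0.05 leaves the null hypothesis with a posterior probability of well over 25%; and when the prior odds favour the null, as they surely do for implausible treatments, the situation becomes far worse still.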
In the area of AM, hypotheses often are entirely inconsistent with logic and frequently fly in the face of science. Four examples can demonstrate this instantly and sufficiently, I think:
- Homeopathic remedies which contain not a single ‘active’ molecule are not likely to generate biological effects.
- The healing ‘energy’ of Reiki masters has no basis in science.
- The meridians of acupuncture are pure imagination.
- Chiropractic subluxations have never been shown to exist.
Positive results from clinical trials of implausible forms of AM are thus due either to chance or bias, or must be attributed to more credible causes such as the placebo effect. Since the achievement of meaningful statistical significance is an essential step in the validation of medical interventions, one has to conclude that, unless some authentic scientific support for AM is provided, AM cannot be considered evidence-based.
Such arguments are by no means new; they have been voiced over and over again. Essentially, they amount to the old adage: IF YOU CLAIM THAT YOU HAVE A CAT IN YOUR GARDEN, A SIMPLE PICTURE MAY SUFFICE. IF YOU CLAIM THERE IS A UNICORN IN YOUR GARDEN, YOU NEED SOMETHING MORE CONVINCING. An extraordinary claim requires an extraordinary proof! Put into the context of the current discussion about AM, this means that the usual level of clinical evidence is likely to be very misleading as long as it totally neglects the biological plausibility of the prior hypothesis.
Proponents of AM do not like to hear such arguments. They usually insist on what we might call a ‘level playing field’ and fail to see why their assumptions require not only a higher level of evidence but also a reasonable scientific hypothesis. They forget that the playing field is not even to start with; to understand the situation better, they should read this excellent article. Perhaps its elegant statistical approach will convince them – but I would not hold my breath.
Bach Flower Remedies are the brainchild of Dr Edward Bach who, as an ex-homeopath, invented his very own highly diluted remedies. Like homeopathic medicines, they are devoid of active molecules and are claimed to work via some non-defined ‘energy’. Consequently, the evidence for these treatments is squarely negative: my systematic review analysed the data of all 7 RCTs on human patients or volunteers that were available in 2010. All but one were placebo-controlled. All placebo-controlled trials failed to demonstrate efficacy. I concluded that the most reliable clinical trials do not show any differences between flower remedies and placebos.
But now, a new investigation has become available. The aim of this study was to evaluate the effect of Bach flower Rescue Remedy on the control of risk factors for cardiovascular disease in rats.
A randomized longitudinal experimental study was conducted on 18 Wistar rats, which were randomly divided into three groups of six animals each and orogastrically dosed with either 200μl of water (group A, control), 100μl of water and 100μl of Bach flower remedy (group B), or 200μl of Bach flower remedy (group C) every 2 days, for 20 days. All animals were fed standard rat chow and water ad libitum.
Urine volume, body weight, feces weight, and food intake were measured every 2 days. On day 20, tests of glycemia, hyperuricemia, triglycerides, high-density lipoprotein (HDL) cholesterol, and total cholesterol were performed, and the anatomy and histopathology of the heart, liver and kidneys were evaluated. Data were analyzed using Tukey’s test at a significance level of 5%.
No significant differences were found in food intake, feces weight, urine volume and uric acid levels between groups. Group C had a significantly lower body weight gain than group A and lower glycemia compared with groups A and B. Groups B and C had significantly higher HDL-cholesterol and lower triglycerides than controls. Animals had mild hepatic steatosis, but no cardiac or renal damage was observed in the three groups.
From these results, the authors conclude that Bach flower Rescue Remedy was effective in controlling glycemia, triglycerides, and HDL-cholesterol and may serve as a strategy for reducing risk factors for cardiovascular disease in rats. This study provides some preliminary “proof of concept” data that Bach Rescue Remedy may exert some biological effects.
If ever there was a bizarre study, it must be this one:
- As far as I know, nobody has ever claimed that Rescue Remedy modified cardiovascular risk factors.
- It seems debatable whether the observed changes are all positive as far as the cardiovascular risk is concerned.
- It seems odd that a remedy that does not contain active molecules is associated with some sort of dose-response effect.
- The modification of cardiovascular risk factors in rats might be of little relevance for humans.
- A strategy for reducing cardiovascular risk factors in rats seems a strange idea.
- Even the authors cannot offer a mechanism of action [other than pure magic].
Does this study tell us anything of value? The authors are keen to point out that it provides a preliminary proof of concept for Rescue Remedy having biological effects. Somehow, I doubt that this conclusion will convince many of my readers.