MD, PhD, FMedSci, FSB, FRCP, FRCPEd



This is a question which I have asked myself more often than I care to remember. The reason is probably that, in alternative medicine, I feel surrounded by so much dodgy research that I simply cannot avoid asking it.

In particular, the so-called ‘pragmatic’ trials which are so much ‘en vogue’ at present are, in my view, a cause for concern. Take a study of cancer patients, for instance, where one group is randomized to receive the usual treatments and care, while the experimental group receives the same plus several alternative treatments in addition. These treatments are carefully selected to be agreeable and pleasant; each patient can choose the ones he/she likes best, has always wanted to try, or has heard many good things about. The outcome measure of our fictitious study would, of course, be some subjective parameter such as quality of life.

In this set-up, the patients in our experimental group thus have high expectations, are delighted to get something extra, are even happier to get it for free, and receive plenty of empathy, care, time and attention. By contrast, our poor patients in the control group would be a bit miffed to have drawn the ‘short straw’ and receive none of this.

What result do we expect?

Will the quality of life after all this be equal in both groups?

Will it be better in the miffed controls?

Or will it be higher in those lucky ones who got all this extra pampering?

I don’t think I need to answer these questions; the answers are too obvious and too trivial.

But the real and relevant question is the following, I think: IS SUCH A TRIAL JUST SILLY AND MEANINGLESS OR IS IT UNETHICAL?

I would argue the latter!

Why?

Because the results of the study are clearly known before the first patient has even been recruited. This means that the trial is not necessary; the money, time and effort have been wasted. Crucially, patients have been misled into thinking that they are giving their time, co-operation, patience etc. because there is a question of sufficient importance to be answered.

But, in truth, there is no question at all!

Perhaps you believe that nobody in their right mind would design, fund and conduct such a daft trial. If so, you are mistaken. Such studies are currently being published by the dozen. Here is the abstract of the most recent one I could find:

The aim of this study was to evaluate the effectiveness of an additional, individualized, multi-component complementary medicine treatment offered to breast cancer patients at the Merano Hospital (South Tyrol) on health-related quality of life compared to patients receiving usual care only. A randomized pragmatic trial with two parallel arms was performed. Women with confirmed diagnoses of breast cancer were randomized (stratified by usual care treatment) to receive individualized complementary medicine (CM group) or usual care alone (usual care group). Both groups were allowed to use conventional treatment for breast cancer. Primary endpoint was the breast cancer-related quality of life FACT-B score at 6 months. For statistical analysis, we used analysis of covariance (with factors treatment, stratum, and baseline FACT-B score) and imputed missing FACT-B scores at 6 months with regression-based multiple imputation. A total of 275 patients were randomized between April 2011 and March 2012 to the CM group (n = 136, 56.3 ± 10.9 years of age) or the usual care group (n = 139, 56.0 ± 11.0). After 6 months from randomization, adjusted means for health-related quality of life were higher in the CM group (FACT-B score 107.9; 95 % CI 104.1-111.7) compared to the usual care group (102.2; 98.5-105.9) with an adjusted FACT-B score difference between groups of 5.7 (2.6-8.7, p < 0.001). Thus, an additional individualized and complex complementary medicine intervention improved quality of life of breast cancer patients compared to usual care alone. Further studies evaluating specific effects of treatment components should follow to optimize the treatment of breast cancer patients. 

The key sentence in this abstract is, of course: complementary medicine intervention improved quality of life of breast cancer patients… It provides the explanation as to why these trials are so popular with alternative medicine researchers: they are not real research, they are quite simply promotion! The next step would be to put a few of those pseudo-scientific trials together and claim that there is solid proof that integrating alternative treatments into conventional health care produces better results. At that stage, few people will bother asking whether this is really due to the treatments in question or to the additional attention, pampering etc.
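My argument above is easy to put into numbers. The following little simulation (all figures are invented; the six-point nonspecific ‘boost’ is my assumption, not the trial’s data) shows that extra attention and expectation alone, with the treatments themselves set to zero specific effect, produce exactly the sort of between-group difference such trials then celebrate:

```python
import random
import statistics

def simulate_pragmatic_trial(n_per_arm=137, baseline_mean=100, sd=15,
                             nonspecific_boost=6, specific_effect=0,
                             seed=42):
    """Simulate final quality-of-life scores in a two-arm pragmatic trial.

    The experimental arm gets extra attention, pampering etc. (a purely
    nonspecific boost); the alternative treatments themselves are assumed
    to have ZERO specific effect. All parameters are invented.
    """
    rng = random.Random(seed)
    usual_care = [rng.gauss(baseline_mean, sd) for _ in range(n_per_arm)]
    extra_care = [rng.gauss(baseline_mean + nonspecific_boost + specific_effect, sd)
                  for _ in range(n_per_arm)]
    return statistics.mean(extra_care) - statistics.mean(usual_care)

print(f"mean group difference: {simulate_pragmatic_trial():.1f} points")
```

Because the specific effect is zero by construction, whatever difference emerges is attention and expectation alone; this is precisely why the outcome of such a trial is known before the first patient is recruited.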

My question is ARE SUCH TRIALS ETHICAL?

I would very much appreciate your opinion.

A new study of homeopathic arnica suggests efficacy. How come?

Subjects scheduled for rhinoplasty surgery with nasal bone osteotomies by a single surgeon were prospectively randomized to receive either oral perioperative arnica or placebo in a double-blinded fashion. A commercially available preparation was used which contained 12 capsules: one 500 mg capsule with arnica 1M was given preoperatively on the morning of surgery and two more later that day after surgery. Thereafter, arnica was administered in the 12C potency three times daily for the next 3 days (“C” indicates a 100-fold serial dilution; and M, a 1000-fold dilution).

Ecchymosis was measured in digital “three-quarter”-view photographs at three postoperative time points. Each bruise was outlined with Adobe Photoshop and the extent was scaled to a standardized reference card. Cyan, magenta, yellow, black, and luminosity were analyzed in the bruised and control areas to calculate change in intensity.

Compared with 13 subjects receiving placebo, 9 taking arnica had 16.2%, 32.9%, and 20.4% less extent on postoperative days 2/3, 7, and 9/10, a statistically significant difference on day 7. Color change initially showed 13.1% increase in intensity with arnica, but 10.9% and 36.3% decreases on days 7 and 9/10, a statistically significant difference on day 9/10. One subject experienced mild itching and rash with the study drug that resolved during the study period.

The authors concluded that Arnica montana seems to accelerate postoperative healing, with quicker resolution of the extent and the intensity of ecchymosis after osteotomies in rhinoplasty surgery, which may dramatically affect patient satisfaction.

Why are the results positive? Previous systematic reviews confirm that homeopathic arnica is a pure placebo. At first, I thought the answer lay in the 1M potency; it could well still contain active molecules. But then I realised that the answer is much simpler: if we apply the conventional level of statistical significance, there are no statistically significant differences to placebo at all! I had not noticed the little sentence by the authors: a P value of 0.1 was set as a meaningful difference with statistical significance. In fact, none of the effects called significant by the authors passes the conventionally used probability level of 5%.
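For those who like to check such things themselves, here is a minimal sketch of a permutation test in Python; the bruise-extent figures are entirely invented and are not the trial’s data:

```python
import random

def permutation_p_value(a, b, n_resamples=20000, seed=0):
    """Two-sided permutation test for a difference in group means.

    The p-value is the fraction of random label shuffles that produce a
    mean difference at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:len(a)]) / len(a) - sum(pooled[len(a):]) / len(b))
        if diff >= observed:
            hits += 1
    return hits / n_resamples

# Entirely hypothetical bruise-extent scores (arbitrary units)
arnica = [14, 11, 9, 13, 10, 12, 8, 11, 10]
placebo = [15, 13, 12, 16, 11, 14, 13, 12, 15, 14, 13, 16, 12]

p = permutation_p_value(arnica, placebo)
print(f"p = {p:.3f}; 'significant' at 0.1: {p < 0.1}; at 0.05: {p < 0.05}")
```

The last line makes the point: whether a result gets called ‘significant’ depends entirely on the threshold chosen, and moving it from the conventional 0.05 to 0.1 doubles the rate at which a pure placebo will be declared effective.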

So, what do the results of this new study truly mean? In my view, they show what was known all along: HOMEOPATHIC REMEDIES ARE PLACEBOS.

A recent comment on a post of mine (by a well-known and experienced German alt med researcher) made the following bold statement aimed directly at me and at my apparent lack of understanding of research methodology:

C´mon , as researcher you should know the difference between efficacy and effectiveness. This is pharmacological basic knowledge. Specific (efficacy) + nonspecific effects = effectiveness. And, in fact, everything can be effective – because of non-specific or placebo-like effects. That does not mean that efficacy is existent.

The point he wanted to make is that outcome studies – studies without a control group, where researchers simply observe the outcome of a particular treatment in a ‘real life’ situation – suffice to demonstrate the effectiveness of therapeutic interventions. This belief is very widespread in alternative medicine and tends to mislead all concerned. It is therefore worth re-visiting this issue here in an attempt to create some clarity.

When a patient’s condition improves after receiving a therapy, it is very tempting to feel that this improvement reflects the effectiveness of the intervention (as the researcher mentioned above obviously does). Tempting but wrong: there are many other factors involved as well, for instance:

  • the placebo effect (mainly based on conditioning and expectation),
  • the therapeutic relationship with the clinician (empathy, compassion etc.),
  • the regression towards the mean (outliers tend to return to the mean value),
  • the natural history of the patient’s condition (most conditions get better even without treatment),
  • social desirability (patients tend to say they are better to please their friendly clinician),
  • concomitant treatments (patients often use treatments other than the prescribed one without telling their clinician).
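
The third point, regression towards the mean, deserves a short illustration. In this little simulation (all numbers invented), no patient receives any treatment at all; we merely recruit those who score worst on their first visit, just as trials typically do:

```python
import random
import statistics

rng = random.Random(1)

# Each "patient" has a stable true severity; any single measurement adds noise.
n = 10_000
true_severity = [rng.gauss(50, 5) for _ in range(n)]
visit_1 = [t + rng.gauss(0, 10) for t in true_severity]
visit_2 = [t + rng.gauss(0, 10) for t in true_severity]  # still no treatment

# Recruit only those who looked worst at visit 1
recruited = [i for i, score in enumerate(visit_1) if score > 65]

before = statistics.mean(visit_1[i] for i in recruited)
after = statistics.mean(visit_2[i] for i in recruited)
print(f"recruited patients: {before:.1f} -> {after:.1f} (no therapy given)")
```

Because the recruits were selected partly for having unusually bad measurement noise, their second reading drifts back towards the population mean: an apparent ‘improvement’ with no therapy whatsoever.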

So, how does this fit with the statement above, ‘Specific (efficacy) + nonspecific effects = effectiveness’? Even if this formula were correct, it would not mean that outcome studies of the nature described demonstrate the effectiveness of a therapy. It all depends, of course, on what we call ‘non-specific’ effects. We all agree that placebo effects belong to this category. Most experts would probably also include the therapeutic relationship and the regression towards the mean under this umbrella. But the last three points from my list are clearly not non-specific effects of the therapy; they are therapy-independent determinants of the clinical outcome.

The most important factor here is usually the natural history of the disease. Some people find it hard to imagine what this term actually means. Here is a little joke which, I hope, will make its meaning clear and memorable.

CONVERSATION BETWEEN TWO HOSPITAL DOCTORS:

Doc A: The patient from room 12 is much better today.

Doc B: Yes, we started his treatment just in time; a day later and he would have been cured without it!

I am sure that most of my readers now understand (and never forget) that clinical improvement cannot be equated with the effectiveness of the treatment administered (they might thus be immune to the misleading messages they are constantly exposed to). Yet, I am not at all sure that all ‘alternativists’ have got it.

The founder of Johrei Healing (JH), Mokichi Okada, believed that “all human beings have toxins in their physical bodies. Some are inherited, others are acquired by ingesting medicines, food additives, unnatural food, unclean air, most drugs, etc. all of these contain chemicals which cannot be used by the body and are treated as poisons…….. Illness is no more than the body’s way of purifying itself to regain health…… The more we resist illness by taking suppressive medications, the harder and more built up the toxins become…… If we do not allow the toxins to be eliminated from the body, we will suffer more, and have more difficult purification…..on the other hand, if we allow illness to take its course by letting the toxins become naturally eliminated from our bodies, we will be healthier.”

Johrei healers channel light or energy or warmth etc. into the patient’s or recipient’s body in order to stimulate well-being and healing. Sounds wacky? Yes!

Still, at one stage my team conducted research into all sorts of wacky healing practices (detailed reasons and study designs can be found in my recent book ‘A SCIENTIST IN WONDERLAND‘). Despite the wackiness, we even conducted a study of JH. Dr Michael Dixon, who was closely collaborating with us at the time, had persuaded me that it would be reasonable to do such a study. He brought some Japanese JH-gurus to my department to discuss the possibility, and (to my utter amazement) they were happy to pay £70,000 into the university’s research accounts for a small pilot study. I made sure that all the necessary ethical safeguards were in place, and eventually we all agreed to design and conduct a study. Here is the abstract of the paper published once the results were available and written up.

Johrei is a form of spiritual healing comprising “energy channelling” and light massage given either by a trained healer or, after some basic training, by anyone. This pilot trial aimed to identify any potential benefits of family-based Johrei practice in childhood eczema and for general health and to establish the feasibility of a subsequent randomised controlled trial. Volunteer families of 3-5 individuals, including at least one child with eczema, were recruited to an uncontrolled pilot trial lasting 12 months. Parents were trained in Johrei healing and then practised at home with their family. Participants kept diaries and provided questionnaire data at baseline, 3, 6 and 12 months. Eczema symptoms were scored at the same intervals. Scepticism about Johrei is presently an obstacle to recruitment and retention of a representative sample in a clinical trial, and to its potential use in general practice. The frequency and quality of practise at home by families may be insufficient to bring about the putative health benefits. Initial improvements in eczema symptoms and diary-recorded illness could not be separated from seasonal factors and other potential confounders. There were no improvements on other outcomes measuring general health and psychological wellbeing of family members.

Our findings were hugely disappointing for the JH-gurus, of course, but we did insist on our right to publish them. Dr Dixon was not involved in the day-to-day running of our trial, nor in evaluating its results, nor in writing up the paper. He nevertheless showed a keen interest in the matter, kept in contact with the Japanese sponsors, and arranged regular meetings to discuss our progress. It was at one of those gatherings that he mentioned he was about to fly to Japan to give a progress report to the JH organisation that had financed the study. My team felt this was odd (not least because, at this point, the study was far from finished) and we were slightly irritated by this interference.

When Dixon had returned from Japan, we asked him how the meeting had been. He said the JH sponsors had received him extremely well and had appreciated his presentation of our preliminary findings. As an ‘aside’, he mentioned something quite extraordinary: he, his wife and his three kids had all flown business class, paid for by the sponsors of our trial. This, we all felt, was an overt abuse of potential research funds, unethical and totally out of line with academic behaviour. Recently, I found this fascinating clip on YouTube, and I wonder whether it was filmed when Dr Dixon visited Japan on that occasion. One does get the impression that the Johrei organisation is not short of money.

A few months later, I duly reported this story to my dean, Prof Tooke, who was about to get involved with Dr Dixon in connection with a postgraduate course on integrated medicine for our medical school (more about this episode here or in my book). He agreed with me that such a thing was a most regrettable violation of academic and ethical standards. To my great surprise, he then asked me not to tell anybody about it. Today I feel very little loyalty to either of these two people and have therefore decided to publish my account – which, by the way, is fully documented as I have kept all relevant records and a detailed diary (in case anyone should feel like speaking to libel lawyers).

In my last post, I claimed that researchers of alternative medicine tend to be less than rigorous. I did not link this statement to any evidence at all. Perhaps I should have at least provided an example!? As it happens, I just came across a brand new paper which nicely demonstrates what I meant.

According to its authors, this non-interventional study was performed to generate data on safety and treatment effects of a complex homeopathic drug. They treated 1050 outpatients suffering from common cold with a commercially available homeopathic remedy for 8 days. The study was conducted in 64 German outpatient practices of medical doctors trained in CAM. Tolerability, compliance and the treatment effects were assessed by the physicians and by patient diaries. Adverse events were collected and assessed with specific attention to homeopathic aggravation and proving symptoms. Each adverse effect was additionally evaluated by an advisory board of experts.

The physicians detected 60 adverse events from 46 patients (4.4%). Adverse drug reactions occurred in 14 patients (1.3%). Six patients showed proving symptoms (0.57%) and only one homeopathic aggravation (0.1%) appeared. The rate of compliance was 84% for all groups. The global assessment of the treatment effects resulted in the verdict “good” and “very good” in 84.9% of all patients.

The authors concluded that the homeopathic complex drug was shown to be safe and effective for children and adults likewise. Adverse reactions specifically related to homeopathic principles are very rare. All observed events recovered quickly and were of mild to moderate intensity.

So why do I think this is ‘positively barmy’?

The study had no control group. This means that there is no way anyone can attribute the observed ‘treatment effects’ to the homeopathic remedy. There are many other phenomena that may have caused or contributed to them, e.g.:

  • a placebo effect
  • the natural history of the condition
  • regression to the mean
  • other treatments which the patients took but did not declare
  • the empathic encounter with the physician
  • social desirability
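
How misleading this is becomes obvious with a back-of-an-envelope simulation. In the sketch below, the recovery probability is my own assumption for a self-limiting condition like the common cold (it is not a figure from the study), and the ‘remedy’ does nothing whatsoever:

```python
import random

def single_arm_success_rate(n_patients=1050, p_recover=0.85, seed=7):
    """Sketch of an uncontrolled study of a self-limiting condition.

    The 'remedy' has zero specific effect; patients simply recover on
    their own with probability p_recover (an invented figure) within
    the observation period.
    """
    rng = random.Random(seed)
    recovered = sum(rng.random() < p_recover for _ in range(n_patients))
    return recovered / n_patients

rate = single_arm_success_rate()
print(f"{rate:.0%} of patients 'improved' on a remedy that does nothing")
```

A single-arm study of any inert treatment for the common cold would thus report a glowing ‘success’ rate; without a control group, such a figure is uninterpretable.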

To plan a study with the aim as stated above and to draw the conclusion as cited above is naïve and unprofessional (to say the least) on the part of the researchers (I often wonder where, in such cases, the boundary between incompetence and research misconduct might lie). To pass such a paper through the peer review process is negligent on the part of the reviewers. To publish the article is irresponsible on the part of the editor.

In a nutshell: COLLECTIVELY, THIS IS ‘POSITIVELY BARMY’!!!

On this blog, we have discussed the Alexander Technique before; it is an educational method promoted for all sorts of conditions, including neck pain. The very first website I found when googling it stated the following: “Back and neck pain can be caused by poor posture. Alexander Technique lessons help you to understand how to improve your posture throughout your daily activities. Many people, even those with herniated disc or pinched nerve, experience relief after one lesson, often permanent relief after five or ten lessons.”

Sounds too good to be true? Is there any good evidence?

The aim of this study, a randomized controlled trial with 3 parallel groups, was to test the efficacy of the Alexander Technique, local heat and guided imagery on pain and quality of life in patients with chronic non-specific neck pain. A total of 72 patients (65 females, 40.7±7.9 years) with chronic, non-specific neck pain were recruited. They received 5 sessions of the Alexander Technique, while the control groups were treated with local heat application or guided imagery. All interventions were conducted once a week for 45 minutes each.

The primary outcome measure at week 5 was neck pain intensity quantified on a 100-mm visual analogue scale; secondary outcomes included neck disability, quality of life, satisfaction and safety. The results show no group differences for pain intensity for the Alexander Technique compared to local heat. An exploratory analysis revealed the superiority of the Alexander Technique over guided imagery. Significant group differences in favor of the Alexander Technique were also found for physical quality of life. Adverse events were mild and mainly included slightly increased pain and muscle soreness.

The authors concluded that Alexander Technique was not superior to local heat application in treating chronic non-specific neck pain. It cannot be recommended as routine intervention at this time. Further trials are warranted for conclusive judgment.

I am impressed with these conclusions: this is how results should be interpreted. The primary outcome measure failed to yield a significant effect, and therefore such a negative conclusion is the only one that can be justified. Yet such clear words are an extreme rarity in the realm of alternative medicine. Most researchers in this area would, in my experience, have highlighted the little glimpses of the possibility of a positive effect and concluded that this therapeutic approach may be well worth a try.

In my view, this article is a fine example of the difference between true scientists (who aim at testing the effectiveness of interventions) and pseudo-scientists (who aim at promoting their pet therapy). I applaud the authors of this paper!

The use of homeopathy to treat depression in peri- and postmenopausal women seems widespread, but there is a lack of clinical trials testing its efficacy. The aim of this new study was therefore to assess efficacy and safety of individualized homeopathic treatment versus placebo and fluoxetine versus placebo in peri- and postmenopausal women with moderate to severe depression.

A randomized, placebo-controlled, double-blind, double-dummy, superiority, three-arm trial with a 6 week follow-up study was conducted. The study was performed in a Mexican outpatient service of homeopathy. One hundred thirty-three peri- and postmenopausal women diagnosed with major depression according to DSM-IV (moderate to severe intensity) were included. The outcomes were:

  1. the change in the mean total score among groups on the 17-item Hamilton Rating Scale for Depression;
  2. the Beck Depression Inventory;
  3. the Greene Scale, after 6 weeks of treatment;
  4. response rates;
  5. remission rates;
  6. safety.

Efficacy data were analyzed in the intention-to-treat population (ANOVA with Bonferroni post-hoc test).

After the 6-week treatment, the results of the homeopathic group showed more effectiveness than placebo on the Hamilton Scale. The response rate was 54.5% and the remission rate 15.9%. There was a significant difference between groups in response rate, but not in remission rate. The fluoxetine-placebo difference was 3.2 points. No differences were observed between groups in the Beck Depression Inventory. The results of the homeopathic group were superior to placebo on the Greene Climacteric Scale (8.6 points). Fluoxetine was not different from placebo on the Greene Climacteric Scale.

The authors concluded that homeopathy and fluoxetine are effective and safe antidepressants for climacteric women. Homeopathy and fluoxetine were significantly different from placebo in response definition only. Homeopathy, but not fluoxetine, improves menopausal symptoms scored by Greene Climacteric Scale.

The article is interesting but highly confusing and poorly reported. The trial is small and short-term only. The way I see it, the finding that individualised homeopathy is better than a standard anti-depressant might be due to a range of phenomena:

  • residual bias; (for instance, it is conceivable that some patients were ‘de-blinded’ due to the well-known side-effects of the conventional anti-depressant);
  • inappropriate statistical analysis of the data;
  • chance;
  • fraud;
  • or the effectiveness of individualised homeopathy.

Even if the findings of this study turned out to be real, it would most certainly be premature to advise patients to opt for homeopathy. At the very minimum, we would need an independent replication of this study – and somehow I doubt that it would confirm the results of this Mexican trial.

Twenty years ago, I published a short article in the British Journal of Rheumatology. Its title was ALTERNATIVE MEDICINE, THE BABY AND THE BATH WATER. Reading it again today – especially in the light of the recent debate (with over 700 comments) on acupuncture – indicates to me that very little has since changed in the discussions about alternative medicine (AM). Does that mean we are going around in circles? Here is the (slightly abbreviated) article from 1995 for you to judge for yourself:

“Proponents of alternative medicine (AM) criticize the attempt of conducting RCTs because they view this as analogous to ‘throwing out the baby with the bath water’. The argument usually goes as follows: the growing popularity of AM shows that individuals like it and, in some way, they benefit through using it. Therefore it is best to let them have it regardless of its objective effectiveness. Attempts to prove or disprove effectiveness may even be counterproductive. Should RCTs prove that a given intervention is not superior to a placebo, one might stop using it. This, in turn, would be to the disadvantage of the patient who, previous to rigorous research, has unquestionably been helped by the very remedy. Similar criticism merely states that AM is ‘so different, so subjective, so sensitive that it cannot be investigated in the same way as mainstream medicine’. Others see reasons to change the scientific (‘reductionist’) research paradigm into a broad ‘philosophical’ approach. Yet others reject the RCTs because they think that ‘this method assumes that every person has the same problems and there are similar causative factors’.

The example of acupuncture as a (popular) treatment for osteoarthritis, demonstrates the validity of such arguments and counter-arguments. A search of the world literature identified only two RCTs on the subject. When acupuncture was tested against no treatment, the experimental group of osteoarthritis sufferers reported a 23% decrease of pain, while the controls suffered a 12% increase. On the basis of this result, it might seem highly unethical to withhold acupuncture from pain-stricken patients—’if a patient feels better for whatever reason and there are no toxic side effects, then the patient should have the right to get help’.

But what about the placebo effect? It is notoriously difficult to find a placebo indistinguishable from acupuncture which would allow patient-blinded studies. Needling non-acupuncture points may be as close as one can get to an acceptable placebo. When patients with osteoarthritis were randomized to receive either ‘real’ acupuncture or this type of sham acupuncture, both sub-groups showed the same pain relief.

These findings (similar results have been published for other AMs) are compatible only with two explanations. Firstly acupuncture might be a powerful placebo. If this were true, we need to establish how safe acupuncture is (clearly it is not without potential harm); if the risk/benefit ratio is favourable and no specific, effective form of therapy exists one might still consider employing this form as a ‘placebo therapy’ for easing the pain of osteoarthritis sufferers. One would also feel motivated to research this powerful placebo and identify its characteristics or modalities with the aim of using the knowledge thus generated to help future patients.

Secondly, it could be the needling, regardless of acupuncture points and philosophy, that decreases pain. If this were true, we could henceforward use needling for pain relief—no special training in or equipment for acupuncture would be required, and costs would therefore be markedly reduced. In addition, this knowledge would lead us to further our understanding of basic mechanisms of pain reduction which, one day, might evolve into more effective analgesia. In any case the published research data, confusing as they often are, do not call for a change of paradigm; they only require more RCTs to solve the unanswered problems.

Conducting rigorous research is therefore by no means likely to ‘throw out the baby with the bath water’. The concept that such research could harm the patient is wrong and anti-scientific. To follow its implications would mean neglecting the ‘baby in the bath water’ until it suffers serious damage. To conduct proper research means attending the ‘baby’ and making sure that it is safe and well.

In the realm of homeopathy there is no shortage of irresponsible claims. I am therefore used to a lot – but this new proclamation takes the biscuit, particularly as it currently is being disseminated in various forms worldwide. It is so outrageously unethical that I decided to reproduce it here [in a slightly shortened version]:

“Homeopathy has given rise to a new hope to patients suffering from dreaded HIV, tuberculosis and the deadly blood disease Hemophilia. In a pioneering two-year long study, city-based homeopath Dr Rajesh Shah has developed a new medicine for AIDS patients, sourced from human immunodeficiency virus (HIV) itself.

The drug has been tested on humans for safety and efficacy and the results are encouraging, said Dr Shah. Larger studies with and without concomitant conventional ART (Antiretroviral therapy) can throw more light in future on the scope of this new medicine, he said. Dr Shah’s scientific paper for debate has just been published in Indian Journal of Research in Homeopathy…

The drug resulted in improvement of blood count (CD4 cells) of HIV patients, which is a very positive and hopeful sign, he said and expressed the hope that this will encourage an advanced research into the subject. Sourcing of medicines from various virus and bacteria has been a practise in the homeopathy stream long before the prevailing vaccines came into existence, said Dr Shah, who is also organising secretary of Global Homeopathy Foundation (GHF)…

Dr Shah, who has been campaigning for the integration of homeopathy and allopathic treatments, said this combination has proven to be useful for several challenging diseases. He teamed up with noted virologist Dr Abhay Chowdhury and his team at the premier Haffkine Institute and developed a drug sourced from TB germs of MDR-TB patients.”

So, where is the study? It is not on Medline, but I found it on the journal’s website. This is what the abstract tells us:

“Thirty-seven HIV-infected persons were registered for the trial, and ten participants were dropped out from the study, so the effect of HIV nosode 30C and 50C, was concluded on 27 participants under the trial.

Results: Out of 27 participants, 7 (25.93%) showed a sustained reduction in the viral load from 12 to 24 weeks. Similarly 9 participants (33.33%) showed an increase in the CD4+ count by 20% altogether in 12 th and 24 th week. Significant weight gain was observed at week 12 (P = 0.0206). 63% and 55% showed an overall increase in either appetite or weight. The viral load increased from baseline to 24 week through 12 week in which the increase was not statistically significant (P > 0.05). 52% (14 of 27) participants have shown either stability or improvement in CD4% at the end of 24 weeks, of which 37% participants have shown improvement (1.54-48.35%) in CD4+ count and 15% had stable CD4+ percentage count until week 24 week. 16 out of 27 participants had a decrease (1.8-46.43%) in CD8 count. None of the adverse events led to discontinuation of study.

Conclusion: The study results revealed improvement in immunological parameters, treatment satisfaction, reported by an increase in weight, relief in symptoms, and an improvement in health status, which opens up possibilities for future studies.”

In other words, the study did not even have a control group. This means that the observed ‘effects’ are most likely just the normal fluctuations one would expect, without any clinical significance whatsoever.

The homeopathic Ebola cure was bad enough, I thought, but, considering the global importance of AIDS, the homeopathic HIV treatment is clearly worse.

Reflexology is the treatment of reflex zones, usually on the sole of the feet, with manual massage and pressure. Reflexologists assume that certain zones correspond to certain organs, and that their treatment can influence the function of these organs. Thus reflexology is advocated for all sorts of conditions. Proponents are keen to point out that their approach has many advantages: it is pleasant (the patient feels well with the treatment and the therapist feels even better with the money), safe and cheap, particularly if the patient does the treatment herself.

Self-administered foot reflexology could be practical because it is easy to learn and not difficult to apply. But is it also effective? A recent systematic review evaluated the effectiveness of self-foot reflexology for symptom management.

Participants were healthy persons not diagnosed with a specific disease. The intervention was foot reflexology administered by participants, not by practitioners or healthcare providers. Studies with either between-group or within-group comparisons were included. The electronic literature searches utilized core databases (MEDLINE, EMBASE, Cochrane, and CINAHL), a Chinese database (CNKI), a Japanese database (J-STAGE), and Korean databases (KoreaMed, KMbase, KISS, NDSL, KISTI, and OASIS).

Three non-randomized trials and three before-and-after studies met the inclusion criteria; no RCTs were located. The results of these studies showed that self-administered foot reflexology resulted in significant improvement in subjective outcomes such as perceived stress, fatigue, and depression. However, there was no significant improvement in objective outcomes such as cortisol levels, blood pressure, and pulse rate.

The authors concluded that this study presents the effectiveness of self-administered foot reflexology for healthy persons’ psychological and physiological symptoms. While objective outcomes showed limited results, significant improvements were found in subjective outcomes. However, owing to the small number of studies and methodological flaws, there was insufficient evidence supporting the use of self-performed foot reflexology. Well-designed randomized controlled trials are needed to assess the effect of self-administered foot reflexology in healthy people.

I find this review quite interesting, but I would draw very different conclusions from its findings.

The studies that are available turned out to be of very poor methodological quality: they lack randomisation or rely on before/after comparisons. This means they are wide open to bias and false-positive results, particularly with regard to subjective outcome measures. Predictably, the findings of this review confirm that no effects are seen on objective endpoints. This is in perfect agreement with the hypothesis that reflexology is a pure placebo. Considering the biological implausibility of the underlying assumptions of reflexology, this makes sense.

My conclusion from this review would therefore be as follows: THE RESULTS ARE IN KEEPING WITH REFLEXOLOGY BEING A PURE PLACEBO.
