MD, PhD, FMedSci, FSB, FRCP, FRCPEd


This is a question which I have asked myself more often than I care to remember. The reason is probably that, in alternative medicine, I feel surrounded by so much dodgy research that I simply cannot avoid asking it.

In particular, the so-called ‘pragmatic’ trials which are so much ‘en vogue’ at present are, in my view, a cause for concern. Take a study of cancer patients, for instance, where one group is randomized to receive the usual treatments and care, while the experimental group receives the same plus several alternative treatments in addition. These treatments are carefully selected to be agreeable and pleasant; each patient can choose the ones he/she likes best, has always wanted to try, or has heard many good things about. The outcome measure of our fictitious study would, of course, be some subjective parameter such as quality of life.

In this set-up, the patients in our experimental group thus have high expectations, are delighted to get something extra, even happier to get it for free, and receive plenty of attention, empathy, care and time. By contrast, our poor patients in the control group would be a bit miffed to have drawn the ‘short straw’ and receive none of this.

What result do we expect?

Will the quality of life after all this be equal in both groups?

Will it be better in the miffed controls?

Or will it be higher in those lucky ones who got all this extra pampering?

I don’t think I need to answer these questions; the answers are too obvious and too trivial.

But the real and relevant question is the following, I think: IS SUCH A TRIAL JUST SILLY AND MEANINGLESS OR IS IT UNETHICAL?

I would argue the latter!

Why?

Because the results of the study are clearly known before the first patient has even been recruited. This means that the trial was not necessary; the money, time and effort have been wasted. Crucially, patients have been misled into thinking that they are giving their time, co-operation, patience etc. because there is a question of sufficient importance to be answered.

But, in truth, there is no question at all!

Perhaps you believe that nobody in their right mind would design, fund and conduct such a daft trial. If so, you are mistaken. Such studies are currently being published by the dozen. Here is the abstract of the most recent one I could find:

The aim of this study was to evaluate the effectiveness of an additional, individualized, multi-component complementary medicine treatment offered to breast cancer patients at the Merano Hospital (South Tyrol) on health-related quality of life compared to patients receiving usual care only. A randomized pragmatic trial with two parallel arms was performed. Women with confirmed diagnoses of breast cancer were randomized (stratified by usual care treatment) to receive individualized complementary medicine (CM group) or usual care alone (usual care group). Both groups were allowed to use conventional treatment for breast cancer. Primary endpoint was the breast cancer-related quality of life FACT-B score at 6 months. For statistical analysis, we used analysis of covariance (with factors treatment, stratum, and baseline FACT-B score) and imputed missing FACT-B scores at 6 months with regression-based multiple imputation. A total of 275 patients were randomized between April 2011 and March 2012 to the CM group (n = 136, 56.3 ± 10.9 years of age) or the usual care group (n = 139, 56.0 ± 11.0). After 6 months from randomization, adjusted means for health-related quality of life were higher in the CM group (FACT-B score 107.9; 95 % CI 104.1-111.7) compared to the usual care group (102.2; 98.5-105.9) with an adjusted FACT-B score difference between groups of 5.7 (2.6-8.7, p < 0.001). Thus, an additional individualized and complex complementary medicine intervention improved quality of life of breast cancer patients compared to usual care alone. Further studies evaluating specific effects of treatment components should follow to optimize the treatment of breast cancer patients. 

The key sentence in this abstract is, of course: complementary medicine intervention improved quality of life of breast cancer patients… It provides the explanation as to why these trials are so popular with alternative medicine researchers: they are not real research but quite simply promotion! The next step would be to put a few of those pseudo-scientific trials together and claim that there is solid proof that integrating alternative treatments into conventional health care produces better results. At that stage, few people will bother asking whether this is really due to the treatments in question or to the additional attention, pampering etc.
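As an aside for statistically minded readers: the ‘analysis of covariance (with factors treatment, stratum, and baseline FACT-B score)’ mentioned in the abstract is easy to sketch. The following toy example uses entirely invented scores (not the trial’s data), omits the stratification factor for simplicity, and shows how a baseline-adjusted group difference is obtained:

```python
# Minimal ANCOVA sketch: compare follow-up scores between two groups
# while adjusting for the baseline score. All data are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100
group = np.repeat([0, 1], n)            # 0 = usual care, 1 = extra-treatment group
baseline = rng.normal(100, 10, 2 * n)   # invented baseline quality-of-life scores
# Simulated follow-up: depends on baseline plus a fixed group effect of 5 points
followup = 20 + 0.8 * baseline + 5.0 * group + rng.normal(0, 8, 2 * n)

# ANCOVA via ordinary least squares: followup ~ intercept + group + baseline
X = np.column_stack([np.ones(2 * n), group, baseline])
beta, *_ = np.linalg.lstsq(X, followup, rcond=None)
intercept, group_effect, baseline_slope = beta
print(f"baseline-adjusted group difference: {group_effect:.2f}")
```

Note that such an adjustment corrects for baseline imbalance between the groups; it does nothing whatsoever to control for the extra attention, expectation and pampering discussed above.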

My question is ARE SUCH TRIALS ETHICAL?

I would very much appreciate your opinion.

In the world of homeopathy, the truth is often much weirder than fiction. Take this recent article, for instance; it was published by the famous lay homeopath Alan Schmukler in the current issue of ‘HOMEOPATHY 4 EVERYONE’.

Before you read the text in question, it might be relevant to explain who Schmukler is: he attended Temple University, where he added humanistic psychology to his passions. After graduating Summa Cum Laude, Phi Beta Kappa and President’s Scholar, he spent several years doing workshops in human relations. Alan also studied respiratory therapy and worked for three years at Einstein Hospital in Philadelphia. Those thousands of hours in the intensive care and emergency rooms taught him both the strengths and limitations of conventional medicine. Schmukler learned about homeopathy in 1991 when he felt he had been cured of an infection with Hepar sulph. He later founded the Homeopathic Study Group of Metropolitan Philadelphia, giving free lectures and hosting the area’s best homeopaths to teach. He also helped found and edit Homeopathy News and Views, a popular culture newsletter on homeopathy. He taught homeopathy for Temple University’s Adult Programs, and has been either studying, writing, lecturing or consulting on homeopathy since 1991. He wrote Homeopathy: An A to Z Home Handbook, which is now available in five languages. Alan Schmukler has been practicing homeopathy for more than two decades and is Chief Editor of Hpathy.com and of Homeopathy4Everyone. He says that his work as Editor is one of his most rewarding experiences.

Now, brace yourself, here is the promised text/satire (in bold); I promise, I did not change a single word:

EIGHT REASONS TO VACCINATE YOUR CHILD

  1. Your child is deficient in Mercury, Aluminum, Formaldehyde, viruses, foreign DNA or other ingredients proven to cause neurological damage.
  2. Your child has an excess of healthy, functioning brain cells.
  3. You need more cash. The National Vaccine Injury Compensation program has paid out 2.8 billion dollars to parents of children injured or killed by vaccines.
  4. You and your husband are feeling alienated and you need a crisis to bring you together.
  5. You believe that pharmaceutical conglomerates which earn billions from vaccines are more credible than consumer groups.
  6. You think thousands of parents who report that their children became autistic two weeks after vaccination are lying.
  7. You don’t see a problem in logic when the government tells you that vaccines work, but that vaccinated children can catch diseases from unvaccinated children.
  8. You think the government should dictate which healing methods you and your children are allowed to use.

Funny? No!

Bad taste? Very much so!

Barmy? I think so!

Dangerous? Yes!

Irresponsible? Most certainly!

Unethical? Yes!

Characteristic of lay homeopathy? Possibly!

A few years ago, I fell ill with shingles. When patients had consulted me for this condition, back in the days when I was still a clinician, I always had to stop myself from smiling; they complained bitterly but, really, it was far from serious. Now, affected myself, I did not smile a bit: this was incredibly painful!

I promptly saw my GP in Exeter who, to my utter amazement, prescribed paracetamol. She too seemed to think that this was really nothing to bother her with. As I had feared, the paracetamol did absolutely nothing to my pain. After a few sleepless nights, I went back and asked for something a little more effective. She refused, and I decided to change GP.

Meanwhile, we went on a scheduled holiday to France. I had hoped my shingles would come to a natural end, but my pain continued unabated. People could see it on my face; so our kind neighbour asked whether she could help. I explained the situation, and she instantly claimed to have just the right treatment for me: she knew a healer who lived just round the corner and had helped many of her friends when they had suffered from pain.

“A healer?” I asked, “you cannot be serious.” I explained that I had conducted studies and done other research into this particular subject. Without exception, the results had shown that healing is a pure placebo. “I prefer to carry on taking even something as useless as paracetamol!” I insisted.

But she would have none of it. The next time I saw her, she declared triumphantly that she had made an appointment for me, and there was no question: I had to go.

As it happened, the day before she announced this, I had met up with a doctor friend of mine who, seeing I was in agony, gave me a prescription for gabapentin. In fact, I was just on the way to the pharmacist to pick it up. Thus I was hopeful that my ordeal was coming to an end. In this optimistic mood, I thanked my neighbour for her effort and concern and said something non-committal like “we shall see”.

A few days later, we met again. By this time, the gabapentin had done its trick: I was more or less pain-free, albeit a little dazed from the powerful medication. When my neighbour saw me, she exclaimed: “I see that you are much improved. Wonderful! Yesterday’s healing session has worked!!!”

In my daze, I had forgotten all about the healing, and I had, of course, not been to see the healer. She was so delighted with her coup that I did not have the heart to tell her the truth. I only said “yes, much better, merci”.

These events happened a few years ago, but even today, my kind and slightly alternative neighbour believes that, despite having been highly sceptical, healing has cured me of my shingles. To my embarrassment, she occasionally mentions my ‘miraculous cure’.

One day, I must tell her the truth… on second thoughts, perhaps not, she might claim it was distant healing!

A recent comment on a post of mine (by a well-known and experienced German alt med researcher) made the following bold statement, aimed directly at me and at my apparent lack of understanding of research methodology:

C´mon , as researcher you should know the difference between efficacy and effectiveness. This is pharmacological basic knowledge. Specific (efficacy) + nonspecific effects = effectiveness. And, in fact, everything can be effective – because of non-specific or placebo-like effects. That does not mean that efficacy is existent.

The point he wanted to make is that outcome studies – studies without a control group, in which researchers simply observe the outcome of a particular treatment in a ‘real life’ situation – suffice to demonstrate the effectiveness of therapeutic interventions. This belief is very widespread in alternative medicine and tends to mislead all concerned. It is therefore worth re-visiting the issue here in an attempt to create some clarity.

When a patient’s condition improves after receiving a therapy, it is very tempting to feel that this improvement reflects the effectiveness of the intervention (as the researcher mentioned above obviously does). Tempting but wrong: there are many other factors involved as well, for instance:

  • the placebo effect (mainly based on conditioning and expectation),
  • the therapeutic relationship with the clinician (empathy, compassion etc.),
  • the regression towards the mean (outliers tend to return to the mean value),
  • the natural history of the patient’s condition (most conditions get better even without treatment),
  • social desirability (patients tend to say they are better to please their friendly clinician),
  • concomitant treatments (patients often use treatments other than the prescribed one without telling their clinician).

So, how does this fit with the statement above, ‘Specific (efficacy) + nonspecific effects = effectiveness’? Even if this formula were correct, it would not mean that outcome studies of the nature described demonstrate the effectiveness of a therapy. It all depends, of course, on what we call ‘non-specific’ effects. We all agree that placebo effects belong to this category. Probably most experts would also include the therapeutic relationship and the regression towards the mean under this umbrella. But the last three points from my list are clearly not non-specific effects of the therapy; they are therapy-independent determinants of the clinical outcome.

The most important factor here is usually the natural history of the disease. Some people find it hard to imagine what this term actually means. Here is a little joke which, I hope, will make its meaning clear and memorable.

CONVERSATION BETWEEN TWO HOSPITAL DOCTORS:

Doc A: The patient from room 12 is much better today.

Doc B: Yes, we started his treatment just in time; a day later and he would have been cured without it!

I am sure that most of my readers now understand (and never forget) that clinical improvement cannot be equated with the effectiveness of the treatment administered (they might thus be immune to the misleading messages they are constantly exposed to). Yet, I am not at all sure that all ‘alternativists’ have got it.
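For readers who like to see this in numbers, here is a toy simulation (all figures invented) of an uncontrolled outcome study: patients enrol when their symptoms happen to be unusually bad, receive a completely inert ‘treatment’, and are measured again. Regression towards the mean plus a modest natural history suffice to produce an impressive average ‘improvement’:

```python
# Toy illustration of regression to the mean and natural history in an
# uncontrolled outcome study. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n_population = 10_000
true_severity = rng.normal(50, 10, n_population)   # stable underlying severity

# Day-to-day fluctuation: the measured score = true severity + noise
score_at_entry = true_severity + rng.normal(0, 10, n_population)

# Patients enrol in the study when they feel particularly bad (score > 70)
enrolled = score_at_entry > 70

# Follow-up measurement after an utterly inert "treatment":
# natural history (average drift of -2 points) plus a fresh fluctuation
score_at_followup = true_severity[enrolled] - 2 + rng.normal(0, 10, enrolled.sum())

improvement = score_at_entry[enrolled].mean() - score_at_followup.mean()
print(f"mean 'improvement' under a treatment that does nothing: {improvement:.1f} points")
```

The apparent benefit is large even though the ‘treatment’ has, by construction, zero effect – which is precisely why a control group is indispensable.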

In my last post, I claimed that researchers of alternative medicine tend to be less than rigorous. I did not link this statement to any evidence at all. Perhaps I should have at least provided an example!? As it happens, I just came across a brand new paper which nicely demonstrates what I meant.

According to its authors, this non-interventional study was performed to generate data on safety and treatment effects of a complex homeopathic drug. They treated 1050 outpatients suffering from the common cold with a commercially available homeopathic remedy for 8 days. The study was conducted in 64 German outpatient practices of medical doctors trained in CAM. Tolerability, compliance and the treatment effects were assessed by the physicians and by patient diaries. Adverse events were collected and assessed with specific attention to homeopathic aggravation and proving symptoms. Each adverse event was additionally evaluated by an advisory board of experts.

The physicians detected 60 adverse events from 46 patients (4.4%). Adverse drug reactions occurred in 14 patients (1.3%). Six patients showed proving symptoms (0.57%) and only one homeopathic aggravation (0.1%) appeared. The rate of compliance was 84% for all groups. The global assessment of the treatment effects resulted in the verdict “good” and “very good” in 84.9% of all patients.

The authors concluded that the homeopathic complex drug was shown to be safe and effective for children and adults likewise. Adverse reactions specifically related to homeopathic principles are very rare. All observed events recovered quickly and were of mild to moderate intensity.

So why do I think this is ‘positively barmy’?

The study had no control group. This means that there is no way anyone can attribute the observed ‘treatment effects’ to the homeopathic remedy. There are many other phenomena that may have caused or contributed to them, e.g.:

  • a placebo effect
  • the natural history of the condition
  • regression to the mean
  • other treatments which the patients took but did not declare
  • the empathic encounter with the physician
  • social desirability

To plan a study with the aim as stated above and to draw the conclusion as cited above is naïve and unprofessional (to say the least) on the part of the researchers (I often wonder where, in such cases, the boundary between incompetence and research misconduct might lie). To pass such a paper through the peer review process is negligent on the part of the reviewers. To publish the article is irresponsible on the part of the editor.

In a nut-shell: COLLECTIVELY, THIS IS ‘POSITIVELY BARMY’!!!

On this blog, we have discussed the Alexander Technique before; it is an educational method promoted for all sorts of conditions, including neck pain. The very first website I found when googling it stated the following: “Back and neck pain can be caused by poor posture. Alexander Technique lessons help you to understand how to improve your posture throughout your daily activities. Many people, even those with herniated disc or pinched nerve, experience relief after one lesson, often permanent relief after five or ten lessons.”

Sounds too good to be true? Is there any good evidence?

The aim of this study, a randomized controlled trial with 3 parallel groups, was to test the efficacy of the Alexander Technique, local heat and guided imagery on pain and quality of life in patients with chronic non-specific neck pain. A total of 72 patients (65 females, 40.7±7.9 years) with chronic, non-specific neck pain were recruited. One group received 5 sessions of the Alexander Technique, while the control groups were treated with local heat application or guided imagery. All interventions were conducted once a week for 45 minutes each.

The primary outcome measure at week 5 was neck pain intensity quantified on a 100-mm visual analogue scale; secondary outcomes included neck disability, quality of life, satisfaction and safety. The results show no group differences for pain intensity for the Alexander Technique compared to local heat. An exploratory analysis revealed the superiority of the Alexander Technique over guided imagery. Significant group differences in favor of the Alexander Technique were also found for physical quality of life. Adverse events were mild and mainly included slightly increased pain and muscle soreness.

The authors concluded that Alexander Technique was not superior to local heat application in treating chronic non-specific neck pain. It cannot be recommended as routine intervention at this time. Further trials are warranted for conclusive judgment.

I am impressed with these conclusions: this is how results should be interpreted. The primary outcome measure failed to yield a significant effect, and therefore such a negative conclusion is the only one that can be justified. Yet such clear words are an extreme rarity in the realm of alternative medicine. Most researchers in this area would, in my experience, have highlighted the little glimpses of the possibility of a positive effect and concluded that this therapeutic approach may be well worth a try.

In my view, this article is a fine example of the difference between true scientists (who aim at testing the effectiveness of interventions) and pseudo-scientists (who aim at promoting their pet therapy). I applaud the authors of this paper!

Many experts have argued that the growing popularity of alternative medicine (AM) mandates its implementation into formal undergraduate medical education. Most medical students seem to feel a need to learn about AM. Yet little is known about the student-specific need for AM education. The objective of this paper was to address this issue; specifically, the authors wanted to assess the self-reported need for AM education among Australian medical students.

Thirty second-year to final-year medical students participated in semi-structured interviews. A constructivist grounded theory methodological approach was used to generate, construct and analyse the data.

The results show that these medical students generally held favourable attitudes toward AM but had knowledge deficits and did not feel adept at counselling patients about AMs. All students were supportive of integrating AM into education, noting its importance in relation to the doctor-patient encounter, specifically with regard to interactions with medical management. Students recognised the need to be able to effectively communicate about AMs and advise patients regarding safe and effective AM use.

The authors of this survey concluded that Australian medical students expressed interest in, and the need for, AM education in medical education regardless of their opinion of it, and were supportive of evidence-based AMs being part of their armamentarium. However, current levels of AM education in medical schools do not adequately enable this. This level of receptivity suggests the need for AM education with firm recommendations and competencies to assist AM education development required. Identifying this need may help medical educators to respond more effectively.

One might object to such wide-reaching conclusions based on a sample size of just 30. However, there are several similar surveys from other parts of the world which seem to paint a similar picture: most medical students clearly do want to learn about AM. But this issue raises several important questions:

  • How can this be squeezed into the already over-full curriculum?
  • Should students learn about AM or should they learn how to practice AM?
  • Who should teach this subject?

In my view, students should learn the essentials about AM but not how to do this or that therapy. Most deans of medical schools seem to agree with me on that particular point.

The question as to who should teach students about AM is, however, much more contentious. Most conventional medical instructors have no interest in and/or no knowledge of the subject. Consequently, there is a tendency for medical schools to delegate the task by hiring a few alternative practitioners to cover AM. Thus we see homeopaths teaching medical students all (well, almost all) about homeopathy, acupuncturists teaching acupuncture, herbalists teaching herbal medicine etc. To many observers, this might sound right and reasonable – but I beg to differ resolutely.

Most alternative practitioners whom I have met (and these were many over the last 20 years) are clearly not capable of teaching their own subject in a way that befits a medical school. They have little or no idea about the nature of scientific evidence and usually lack the slightest hint of critical analysis. Thus a homeopath might teach homeopathy in such a way that students get the impression that it is well grounded in evidence, for instance. Students who have been taught in this fashion are not likely to advise their future patients responsibly on the subject in question: THE TEACHING OF NONSENSE IS BOUND TO RESULT IN NONSENSICAL PRACTICE!

In my view, AM is an ideal subject for acquainting medical students with the concepts of critical thinking. In this respect, it offers an almost ideal opportunity for medical schools to develop much-needed skills in their students. Sadly, however, this is not what is currently happening. All too often, medical school deans find themselves caught between the devil and the deep blue sea. In the end, they tend to delegate the subject of AM to people who are not competent and should not be let loose on impressionable students.

I fear that progress and care of future patients are bound to suffer.

 

The use of homeopathy to treat depression in peri- and postmenopausal women seems widespread, but there is a lack of clinical trials testing its efficacy. The aim of this new study was therefore to assess efficacy and safety of individualized homeopathic treatment versus placebo and fluoxetine versus placebo in peri- and postmenopausal women with moderate to severe depression.

A randomized, placebo-controlled, double-blind, double-dummy, superiority, three-arm trial with a 6 week follow-up study was conducted. The study was performed in a Mexican outpatient service of homeopathy. One hundred thirty-three peri- and postmenopausal women diagnosed with major depression according to DSM-IV (moderate to severe intensity) were included. The outcomes were:

  1. the change in the mean total score among groups on the 17-item Hamilton Rating Scale for Depression;
  2. the Beck Depression Inventory;
  3. the Greene Scale, after 6 weeks of treatment;
  4. response rates;
  5. remission rates;
  6. safety.

Efficacy data were analyzed in the intention-to-treat population (ANOVA with Bonferroni post-hoc test).

After the 6-week treatment, the results of the homeopathic group showed more effectiveness than placebo on the Hamilton Scale. Response rate was 54.5% and remission rate 15.9%. There was a significant difference between groups in response rate, but not in remission rate. The fluoxetine-placebo difference was 3.2 points. No differences were observed between groups in the Beck Depression Inventory. The results of the homeopathic group were superior to placebo on the Greene Climacteric Scale (8.6 points). Fluoxetine was not different from placebo on the Greene Climacteric Scale.

The authors concluded that homeopathy and fluoxetine are effective and safe antidepressants for climacteric women. Homeopathy and fluoxetine were significantly different from placebo in response definition only. Homeopathy, but not fluoxetine, improves menopausal symptoms scored by Greene Climacteric Scale.

The article is interesting but highly confusing and poorly reported. The trial is small and short-term only. The way I see it, the finding that individualised homeopathy is better than a standard anti-depressant might be due to a range of phenomena:

  • residual bias; (for instance, it is conceivable that some patients were ‘de-blinded’ due to the well-known side-effects of the conventional anti-depressant);
  • inappropriate statistical analysis of the data;
  • chance;
  • fraud;
  • or the effectiveness of individualised homeopathy.

Even if the findings of this study turned out to be real, it would most certainly be premature to advise patients to opt for homeopathy. At the very minimum, we would need an independent replication of this study – and somehow I doubt that it would confirm the results of this Mexican trial.

Distant healing is one of the most bizarre yet popular forms of alternative medicine. Healers claim they can transmit ‘healing energy’ towards patients to enable them to heal themselves. There have been many trials testing the effectiveness of the method, and the general consensus amongst critical thinkers is that all variations of ‘energy healing’ rely entirely on a placebo response. A recent and widely publicised paper seems to challenge this view.

This article has, according to its authors, two aims. Firstly it reviews healing studies that involved biological systems other than ‘whole’ humans (e.g., studies of plants or cell cultures) that were less susceptible to placebo-like effects. Secondly, it presents a systematic review of clinical trials on human patients receiving distant healing.

All the included studies examined the effects upon a biological system of the explicit intention to improve the wellbeing of that target; 49 non-whole human studies and 57 whole human studies were included.

The combined weighted effect size for non-whole human studies yielded a highly significant (r = 0.258) result in favour of distant healing. However, outcomes were heterogeneous and correlated with blind ratings of study quality; 22 studies that met minimum quality thresholds gave a reduced but still significant weighted r of 0.115.

Whole human studies yielded a small but significant effect size of r = .203. Outcomes were again heterogeneous, and correlated with methodological quality ratings; 27 studies that met threshold quality levels gave an r = .224.
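For readers unfamiliar with the methodology: one standard way of pooling correlation-type effect sizes (I am assuming a conventional approach here, not necessarily the one these authors used) is to Fisher-z-transform each r, take an inverse-variance-weighted average, and transform back. A sketch with invented numbers:

```python
# Sketch of a standard way to pool correlation-type effect sizes:
# Fisher z-transform each r, weight by inverse variance (n_i - 3),
# average, and back-transform. The r and n values below are invented.
import math

studies = [(0.30, 40), (0.10, 120), (0.25, 60), (-0.05, 80)]  # (r, sample size)

num = sum((n - 3) * math.atanh(r) for r, n in studies)   # weighted Fisher z sum
den = sum(n - 3 for r, n in studies)                     # sum of weights
z_bar = num / den
r_bar = math.tanh(z_bar)                                 # back to the r scale
print(f"combined weighted r = {r_bar:.3f}")
```

Whatever the pooling method, the result can only be as trustworthy as the primary studies that go into it.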

From these findings, the authors drew the following conclusions: Results suggest that subjects in the active condition exhibit a significant improvement in wellbeing relative to control subjects under circumstances that do not seem to be susceptible to placebo and expectancy effects. Findings with the whole human database suggests that the effect is not dependent upon the previous inclusion of suspect studies and is robust enough to accommodate some high profile failures to replicate. Both databases show problems with heterogeneity and with study quality and recommendations are made for necessary standards for future replication attempts.

In a press release, the authors warned: the data need to be treated with some caution in view of the poor quality of many studies and the negative publishing bias; however, our results do show a significant effect of healing intention on both human and non-human living systems (where expectation and placebo effects cannot be the cause), indicating that healing intention can be of value.

My thoughts on this article are not very complimentary, I am afraid. The problems are, it seems to me, too numerous to discuss in detail:

  • The article is written such that it is exceedingly difficult to make sense of it.
  • It was published in a journal which is not exactly known for its cutting edge science; this may seem a petty point but I think it is nevertheless important: if distant healing works, we are confronted with a revolution in the understanding of nature – and surely such a finding should not be buried in a journal that hardly anyone reads.
  • The authors seem embarrassingly inexperienced in conducting and publishing systematic reviews.
  • There is very little (self-) critical input in the write-up.
  • A critical attitude is necessary, as the primary studies tend to be by evangelical believers in and amateur enthusiasts of healing.
  • The article has no data table where the reader might learn the details about the primary studies included in the review.
  • It also has no table to inform us in sufficient detail about the quality assessment of the included trials.
  • It seems to me that some published studies of distant healing are missing.
  • The authors ignored all studies that were not published in English.
  • The method section lacks detail, and it would therefore be impossible to conduct an independent replication.
  • Even if one ignored all the above problems, the effect sizes are small and would not be clinically important.
  • The research was sponsored by the ‘Confederation of Healing Organisations’ and some of the comments look as though the sponsor had a strong influence on the phraseology of the article.

Given these reservations, my conclusion from an analysis of the primary studies of distant healing would be dramatically different from the one published by the authors: DESPITE A SIZABLE AMOUNT OF PRIMARY STUDIES ON THE SUBJECT, THE EFFECTIVENESS OF DISTANT HEALING REMAINS UNPROVEN. AS THIS THERAPY IS DEVOID OF ANY BIOLOGICAL PLAUSIBILITY, FURTHER RESEARCH IN THIS AREA SEEMS NOT WARRANTED.

Twenty years ago, I published a short article in the British Journal of Rheumatology. Its title was ALTERNATIVE MEDICINE, THE BABY AND THE BATH WATER. Reading it again today – especially in the light of the recent debate (with over 700 comments) on acupuncture – indicates to me that very little has since changed in the discussions about alternative medicine (AM). Does that mean we are going around in circles? Here is the (slightly abbreviated) article from 1995 for you to judge for yourself:

“Proponents of alternative medicine (AM) criticize attempts at conducting RCTs because they view them as analogous to ‘throwing out the baby with the bath water’. The argument usually goes as follows: the growing popularity of AM shows that individuals like it and, in some way, they benefit through using it. Therefore it is best to let them have it regardless of its objective effectiveness. Attempts to prove or disprove effectiveness may even be counterproductive. Should RCTs prove that a given intervention is not superior to a placebo, one might stop using it. This, in turn, would be to the disadvantage of the patient who, previous to rigorous research, has unquestionably been helped by the very remedy. Similar criticism merely states that AM is ‘so different, so subjective, so sensitive that it cannot be investigated in the same way as mainstream medicine’. Others see reasons to change the scientific (‘reductionist’) research paradigm into a broad ‘philosophical’ approach. Yet others reject RCTs because they think that ‘this method assumes that every person has the same problems and there are similar causative factors’.

The example of acupuncture as a (popular) treatment for osteoarthritis demonstrates the validity of such arguments and counter-arguments. A search of the world literature identified only two RCTs on the subject. When acupuncture was tested against no treatment, the experimental group of osteoarthritis sufferers reported a 23% decrease of pain, while the controls suffered a 12% increase. On the basis of this result, it might seem highly unethical to withhold acupuncture from pain-stricken patients—’if a patient feels better for whatever reason and there are no toxic side effects, then the patient should have the right to get help’.

But what about the placebo effect? It is notoriously difficult to find a placebo indistinguishable from acupuncture which would allow patient-blinded studies. Needling non-acupuncture points may be as close as one can get to an acceptable placebo. When patients with osteoarthritis were randomized to receive either ‘real’ acupuncture or this type of sham acupuncture, both sub-groups showed the same pain relief.

These findings (similar results have been published for other AMs) are compatible with only two explanations. Firstly, acupuncture might be a powerful placebo. If this were true, we need to establish how safe acupuncture is (clearly it is not without potential harm); if the risk/benefit ratio is favourable and no specific, effective form of therapy exists, one might still consider employing it as a ‘placebo therapy’ for easing the pain of osteoarthritis sufferers. One would also feel motivated to research this powerful placebo and identify its characteristics or modalities, with the aim of using the knowledge thus generated to help future patients.

Secondly, it could be the needling, regardless of acupuncture points and philosophy, that decreases pain. If this were true, we could henceforward use needling for pain relief—no special training in or equipment for acupuncture would be required, and costs would therefore be markedly reduced. In addition, this knowledge would lead us to further our understanding of basic mechanisms of pain reduction which, one day, might evolve into more effective analgesia. In any case the published research data, confusing as they often are, do not call for a change of paradigm; they only require more RCTs to solve the unanswered problems.

Conducting rigorous research is therefore by no means likely to ‘throw out the baby with the bath water’. The concept that such research could harm the patient is wrong and anti-scientific. To follow its implications would mean neglecting the ‘baby in the bath water’ until it suffers serious damage. To conduct proper research means attending the ‘baby’ and making sure that it is safe and well.
