
As I have said on several occasions before: I am constantly on the lookout for new rigorous science that supports the claims of alternative medicine. Thus I was delighted to find a recent and potentially important article with some positive evidence.

Fish oil (FO) has been studied extensively in terms of its effects on health. We know that it has powerful anti-inflammatory properties and might thus benefit a wide range of conditions. However, the effects of FO in rheumatoid arthritis (RA) have not been examined in the context of contemporary treatment of early RA.

A new study has tried to fill this gap by examining the effects of high versus low dose FO in early RA employing a ‘treat-to-target’ protocol of combination disease-modifying anti-rheumatic drugs (DMARDs).

Patients with RA of less than 12 months’ duration who were DMARD-naïve were enrolled and randomised 2:1 to FO at a high dose or low dose (for masking). These groups, designated FO and control, were given 5.5 or 0.4 g/day, respectively, of the omega-3 fats eicosapentaenoic acid + docosahexaenoic acid. All patients received methotrexate (MTX), sulphasalazine and hydroxychloroquine, and DMARD doses were adjusted according to an algorithm taking disease activity and toxicity into account. DAS28-erythrocyte sedimentation rate, modified Health Assessment Questionnaire (mHAQ) and remission were assessed every three months. The primary outcome measure was failure of triple DMARD therapy.

In the FO group, failure of triple DMARD therapy was lower (HR=0.28 (95% CI 0.12 to 0.63; p=0.002) unadjusted and 0.24 (95% CI 0.10 to 0.54; p=0.0006) following adjustment for smoking history, shared epitope and baseline anti-cyclic citrullinated peptide). The rate of first American College of Rheumatology (ACR) remission was significantly greater in the FO compared with the control group (HRs=2.17 (95% CI 1.07 to 4.42; p=0.03) unadjusted and 2.09 (95% CI 1.02 to 4.30; p=0.04) adjusted). There were no differences between groups in MTX dose, DAS28 or mHAQ scores, or adverse events.

The authors concluded that FO was associated with benefits additional to those achieved by combination ‘treat-to-target’ DMARDs with similar MTX use. These included reduced triple DMARD failure and a higher rate of ACR remission.

So here we have a dietary supplement that actually might generate more good than harm! There is a mountain of good research data on the subject. We understand the mechanism of action, and we have encouraging clinical evidence. Some people might still say that we do not need to take supplements in order to benefit from the health effects of FO; consuming fatty fish regularly might have the same effects. This is true, of course, but the amount of fish that one would need to eat every day would probably be too large for most people’s taste.

The drawback (from the perspective of alternative medicine) in all this is, of course, that some experts might deny that FO has much to do with alternative medicine. Again: what do we call alternative medicine that works? We call it MEDICINE! And perhaps FO is an excellent example of exactly that.

Guest post by Pete Attkins

Commentator “jm” asked a profound and pertinent question: “What DOES it take for people to get real in this world, practice some common sense, and pay attention to what’s going on with themselves?” This question was asked in the context of asserting that personal experience always trumps the results of large-scale scientific experiments, and that alt-med experts are better able to provide individualized healthcare than 21st-century orthodox medicine.

What does common sense and paying attention lead us to conclude about the following? We test a six-sided die for bias by rolling it 100 times. The number 1 occurs only once and the number 6 occurs many times, never on its own, but in several groups of consecutive sixes.

I think it is reasonable to say that common sense would, and should, lead everyone to conclude that the die is biased and not fit for its purpose as a source of random numbers.

In other words, we have a gut feeling that the die is untrustworthy. Gut instincts and common sense are geared towards maximizing our chances of survival in our complex and unpredictable world — these are innate and learnt behaviours that have enabled humans to survive despite the harshness of our ever changing habitat.

Only very recently in the long history of our species have we developed specialized tools that enable us to better understand our harsh and complex world: science and critical thinking. These tools are difficult to master because they still haven’t been incorporated into our primary and secondary formal education systems.

The vast majority of people do not have these skills; therefore, when a scientific finding flies in the face of our gut instincts and/or common sense, it creates an overwhelming desire to reject the finding and classify the scientist(s) as being irrational and lacking basic common sense. It does not create an intense desire to accept the finding and then painstakingly learn all of the science that went into producing the finding.

With that in mind, let’s rethink our common sense conclusion that the six-sided die is biased and untrustworthy. What we really mean is that the results have given all of us good reason to be highly suspicious of this die. We aren’t 100% certain that this die is biased, but our gut feeling and common sense are more than adequate to form a reasonable mistrust of it and to avoid using it for anything important to us. Reasons to keep this die rather than discard it might be to provide a source of mild entertainment or to use its bias for the purposes of cheating.

Some readers might be surprised to discover at this point that the results I presented from this apparently heavily-biased die are not only perfectly valid results obtained from a truly random, unbiased die, but are also to be fully expected. Even if the die had produced 100 sixes in that test, it would not confirm that the die is biased in any way whatsoever. Rolling a truly unbiased die once will produce one of six possible outcomes. Rolling the same die 100 times will produce one unique sequence out of the 6^100 (6.5 x 10^77) possible sequences: all of which are equally valid!

Gut feeling plus common sense rightfully informs us that the probability of a random die producing one hundred consecutive sixes is so incredibly remote that nobody will ever see it occur in reality. This conclusion is also mathematically sound: if there were 6.5 x 10^77 people on Earth, each performing the same test on truly random dice, there is no guarantee that anyone would observe a sequence of one hundred consecutive sixes.

When we observe a sequence such as 2 5 1 4 6 3 1 4 3 6 5 2… common sense informs us that the die is very likely random. If we calculate the arithmetic mean to be very close to 3.5 then common sense will lead us to conclude that the die is both random and unbiased enough to use it as a reliable source of random numbers.

Unfortunately, this is a perfect example of our gut feelings and common sense failing us abysmally. They totally failed to warn us that the 2 5 1 4 6 3 1 4 3 6 5 2… sequence we observed had exactly the same (im)probability of occurring as a sequence of one hundred 6s or any other sequence that one can think of that doesn’t look random to a human observer.
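For readers who would like to check these numbers, here is a minimal Python sketch (standard library only) of the arithmetic behind the argument; the 100-roll scenario and the ‘6.5 x 10^77 people’ thought experiment are the ones described above, everything else is plain arithmetic.

```python
# Minimal sketch of the probabilities discussed above (fair six-sided die,
# 100 independent rolls); standard library only.
from math import exp

n_rolls = 100
n_sequences = 6 ** n_rolls        # number of distinct 100-roll sequences
p_specific = 1 / n_sequences      # probability of ANY one particular sequence,
                                  # whether it 'looks random' or is all sixes

print(f"possible sequences:         {n_sequences:.3e}")   # ~6.5e+77
print(f"P(one specific sequence):   {p_specific:.3e}")    # ~1.5e-78

# If ~6.5e77 people each rolled 100 times, the chance that at least one of
# them sees 100 consecutive sixes is 1 - (1 - p)^N, which with N*p = 1 is
# approximately 1 - exp(-1): likely, but by no means guaranteed.
print(f"P(someone rolls 100 sixes): {1 - exp(-1):.2f}")   # ~0.63
```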

The 100-roll die test is nowhere near powerful enough to properly test a six-sided die, but this test is more than adequately powered to reveal some of our cognitive biases and some of the deficits in our personal mastery of science and critical thinking.

To properly test the die we need to provide solid evidence that it is both truly random and that its measured bias tends towards zero as the number of rolls tends towards infinity. We could use the services of one testing lab to conduct billions of test rolls, but this would not exclude errors caused by such things as miscalibrated equipment and experimenter bias. It is better to subdivide the testing across multiple labs then carefully analyse and appropriately aggregate the results: this dramatically reduces errors caused by equipment and humans.
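As a concrete illustration of what ‘testing a die for bias’ can look like in practice, here is a minimal sketch of a chi-squared goodness-of-fit test on one batch of roll counts. The counts are invented and scipy is assumed to be available; a single batch like this is only one contribution to the kind of multi-lab aggregation described above.

```python
# Hypothetical goodness-of-fit check for one batch of die rolls (invented counts).
from scipy.stats import chisquare

observed = [14, 17, 15, 21, 13, 20]          # counts of faces 1..6 in 100 rolls
expected = [sum(observed) / 6] * 6           # a fair die predicts equal counts

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-squared = {stat:.2f}, p = {p_value:.3f}")

# A non-small p-value here only means these 100 rolls provide no evidence of
# bias; it does not prove fairness, which is why vastly more rolls (and
# independent labs) are needed before trusting the die.
```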

In medicine, this testing process is performed via systematic reviews of multiple, independent, double-blind, placebo-controlled trials — every trial that is insufficiently powered to add meaningfully to the result is rightfully excluded from the aggregation.

Alt-med relies on a diametrically opposed testing process. It performs a plethora of underpowered tests; presents those that just happen to show a positive result (just as a random die could’ve produced); and sweeps under the carpet the overwhelming number of tests that produced a negative result. It publishes only its ‘successes’, not its failures. By sweeping its failures under the carpet it feels justified in making the very bold claim: our plethora of collected evidence shows clearly that it mostly ‘works’ and, when it doesn’t, it causes no harm.

One of the most stringent acid tests for a hypothesis and its supporting data (a mandatory test in a few branches of critical engineering) is to replace the collected data with random data that has been carefully crafted to emulate the probability mass functions of the collected datasets. This test has to be run multiple times, for reasons that I’ve attempted to explain in my random die example. If the proposer of the hypothesis is unable to explain the multiple failures resulting from this acid test then it is highly likely that the proposer either does not fully understand their hypothesis or that their hypothesis is indistinguishable from the null hypothesis.
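The description above leaves the details of the acid test open, so the following is only a sketch of one way it could be implemented (Python/NumPy, invented data): keep the marginal distribution of the collected measurements, destroy any real relationship by random reshuffling, re-run the analysis many times, and see how often a result as strong as the original appears.

```python
# Surrogate-data 'acid test' sketch: invented (x, y) data stand in for a
# collected dataset; shuffled y preserves its probability mass function but
# removes any genuine x-y relationship.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = 0.3 * x + rng.normal(size=50)            # pretend these were 'collected'

observed_r = np.corrcoef(x, y)[0, 1]         # the 'finding' under test

n_runs, exceed = 10_000, 0
for _ in range(n_runs):
    y_surrogate = rng.permutation(y)         # same values, random pairing
    if abs(np.corrcoef(x, y_surrogate)[0, 1]) >= abs(observed_r):
        exceed += 1

print(f"observed correlation: {observed_r:.2f}")
print(f"surrogate runs matching or beating it: {exceed}/{n_runs}")
# If surrogates routinely match the original result, the 'finding' is
# indistinguishable from the null hypothesis.
```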

Guest post by Jan Oude-Aost

ADHD is a common disorder among children. There are evidence-based pharmacological treatments, the best known being methylphenidate (MPH). MPH has something of a bad reputation, but it is effective and reasonably safe. The market is also full of alternative treatments, pharmacological and others, some of them under investigation, some unproven and many disproven. So I was not surprised to find a study about Ginkgo biloba as a treatment for ADHD. I was surprised, however, to find this study in the German Journal of Child and Adolescent Psychiatry and Psychotherapy, officially published by the “German Society of Child and Adolescent Psychiatry and Psychotherapy“ (Deutsche Gesellschaft für Kinder- und Jugendpsychiatrie und Psychotherapie). The journal’s guidelines state that studies should provide new scientific results.

The study is called “Ginkgo biloba Extract EGb 761® in Children with ADHD“. EGb 761® is the key ingredient in “Tebonin®“, a herbal drug made by “Dr. Wilma Schwabe GmbH“. The abstract states:

One possible treatment, at least for cognitive problems, might be the administration of Ginkgo biloba, though evidence is rare. This study tests the clinical efficacy of a Ginkgo biloba special extract (EGb 761®) (…) in children with ADHD (…).

„Eine erfolgversprechende, bislang kaum untersuchte Möglichkeit zur Behandlung kognitiver Aspekte ist die Gabe von Ginkgo biloba. Ziel der vorliegenden Studie war die Prüfung klinischer Wirksamkeit (…) von Ginkgo biloba-Extrakt Egb 761® bei Kindern mit ADHS.“ [A promising, so far barely investigated option for treating cognitive aspects is the administration of Ginkgo biloba. The aim of the present study was to test the clinical efficacy (…) of the Ginkgo biloba extract EGb 761® in children with ADHD.] (Taken from the English and German abstracts.)

The study sample (20!) was recruited among children who “did not tolerate or were unwilling“ to take MPH. The unwilling part struck me as problematic. There is likely a strong selection bias towards parents who are unwilling to give their children MPH. I guess it is not the children who are unwilling to take MPH, but the parents who are unwilling to administer it. At least some of these parents might be biased against MPH and might already favor CAM modalities.

The authors state three main problems with “herbal therapy“ that require more empirical evidence: First of all the question of adverse reactions, which they claim occur in about 1% of cases with “some CAMs“ (mind you, not “herbal therapy“). Secondly, the question of drug interactions and thirdly, the lack of information physicians have about the CAMs their patients use.

A large part of the study is based on results of an EEG-protocol, which I choose to ignore, because the clinical results are too weak to give the EEG findings any clinical relevance.

Before looking at the study itself, let’s look at what is known about Ginkgo biloba as a drug. Ginkgo is best known for its use in patients with dementia, cognitive impairment and tinnitus. A Cochrane review from 2009 concluded:

“There is no convincing evidence that Ginkgo biloba is efficacious for dementia and cognitive impairment“ [1].

The authors of the current study cite Sarris et al. (2011), a systematic review of complementary treatments of ADHD. Sarris et al. mention Salehi et al. (2010), who tested Ginkgo against MPH. MPH turned out to be much more effective than Ginkgo, but Sarris et al. argue that the duration of treatment (6 weeks) might have been too short to see the full effects of Ginkgo.

Given the above information it is unclear why Ginkgo is judged a “possible“ treatment, properly translated from German even “promising”, and why the authors state that Ginkgo has been “barely studied“.

In an unblinded, uncontrolled study with a sample likely to be biased toward the tested intervention, anything other than a positive result would be odd. In the treatment of autism there are several examples of implausible treatments that worked as long as parents knew that their children were getting the treatment, but didn’t after proper blinding (e.g. secretin).

This study’s aim was to test clinical efficacy, but the conclusion begins with how well tolerated Ginkgo was. The efficacy is mentioned subsequently: “Following administration, interrelated improvements on behavioral ratings of ADHD symptoms (…) were detected (…).“ But the way they were “detected“ is interesting. The authors used an established questionnaire (FBB-HKS) to let parents rate their children. Only the parents. The children and their teachers were not given the FBB-HKS questionnaires, in spite of this being standard clinical practice (and in spite of the children being given questionnaires to determine changes in quality of life, where no changes were found).

None of the three problems that the authors describe as important (adverse reactions, drug interactions, lack of information) can be answered by this study. I am no expert in statistics, but it seems unlikely to me that adverse effects can be meaningfully determined in just 20 patients, especially when adverse effects occur at a rate of about 1%. The authors claim they found an incidence rate of 0.004% in “700 observation days“. Well, if they say so.
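A quick back-of-the-envelope calculation (my own, not the authors’) shows why 20 patients cannot say anything useful about a 1% adverse-event rate:

```python
# Probability of seeing no adverse events at all in 20 patients,
# assuming the true per-patient event rate really is 1%.
p_event = 0.01
n_patients = 20
p_zero_events = (1 - p_event) ** n_patients
print(f"P(no adverse events in {n_patients} patients) = {p_zero_events:.2f}")  # ~0.82
# Roughly four out of five such studies would observe nothing, even if the
# adverse effect were real.
```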

The authors conclude:

Taken together, the current study provides some preliminary evidence that Ginkgo biloba Egb 761® seems to be well tolerated in the short term and may be a clinically useful treatment for children with ADHD. Double-blind randomized trials are required to clarify the value of the presented data.

Given the available information mentioned earlier, one could have started with that conclusion and conducted a double blind RCT in the first place!

Clinical Significance

“The trends of this preliminary open study may suggest that Ginkgo biloba Egb 761® might be considered as a complementary or alternative medicine for treating children with ADHD.“

So, why do I care, if preliminary evidence “may suggest“ that something “might be considered“ as a treatment? Because I think that this study does not answer any important questions or give us any new or useful knowledge. Following the journal’s guidelines, it should therefore not have been published. I also think it is an example of bad science. Bad not just because of the lack of critical thinking. It also adds to the misinformation about possible ADHD treatments spreading through the internet. The study was published in September. In November I found a website citing the study and calling it “clinical proof“ when it is not. But child psychiatrists will have to explain that to many parents, instead of talking about their children’s health.

I somehow got the impression that this study was more about marketing than about science. I wonder if Schwabe will help finance the necessary double-blind randomized trial…

[1] http://summaries.cochrane.org/CD003120/DEMENTIA_there-is-no-convincing-evidence-that-ginkgo-biloba-is-efficacious-for-dementia-and-cognitive-impairment

Acupuncture seems to be more popular than ever before – many conventional pain clinics now employ acupuncturists, for instance. It is probably fair to say that acupuncture is one of the best-known of all alternative therapies. Yet experts are still divided in their views about this treatment – some proclaim that acupuncture is the best thing since sliced bread, while others insist that it is no more than a theatrical placebo. Consumers, I imagine, are often left helpless in the middle of these debates. Here are 7 important bits of factual information that might help you make up your mind, in case you are tempted to try acupuncture.

  1. Acupuncture is ancient; some enthusiasts thus claim that it has ‘stood the test of time’, i.e. that its long history proves its efficacy and safety beyond reasonable doubt and certainly more conclusively than any scientific test. Whenever you hear such arguments, remind yourself that the ‘argumentum ad traditionem’ is nothing but a classic fallacy. A long history of usage proves very little – think of how long bloodletting was used, even though it killed millions.
  2. We often think of acupuncture as being one single treatment, but there are many different forms of this therapy. According to believers in acupuncture, acupuncture points can be stimulated not just by inserting needles (the most common way) but also with heat, electrical currents, ultrasound, pressure, etc. Then there is body acupuncture, ear acupuncture and even tongue acupuncture. Finally, some clinicians employ the traditional Chinese approach based on the assumption that two life forces are out of balance and need to be re-balanced, while so-called ‘Western’ acupuncturists adhere to the concepts of conventional medicine and claim that acupuncture works via scientifically explainable mechanisms that are unrelated to ancient Chinese philosophies.
  3. Traditional Chinese acupuncturists have not normally studied medicine and base their practice on the Taoist philosophy of the balance between yin and yang, which has no basis in science. This explains why acupuncture is seen by traditional acupuncturists as a ‘cure-all’. In contrast, medical acupuncturists tend to cite neurophysiological explanations as to how acupuncture might work. However, it is important to note that, even though they may appear plausible, these explanations are currently just theories and constitute no proof for the validity of acupuncture as a medical intervention.
  4. The therapeutic claims made for acupuncture are legion. According to the traditional view, acupuncture is useful for virtually every condition affecting mankind; according to the more modern view, it is effective for a relatively small range of conditions only. On closer examination, the vast majority of these claims turn out to be based on either no or very flimsy evidence. Once we examine the data from reliable clinical trials (today several thousand studies of acupuncture are available – see below), we realise that acupuncture is associated with a powerful placebo effect, and that it works better than a placebo only for very few (some say for no) conditions.
  5. The interpretation of the trial evidence is far from straightforward: most of the clinical trials of acupuncture originate from China, and several investigations have shown that very close to 100% of them are positive. This means that the results of these studies have to be taken with more than a small pinch of salt. In order to control for patient expectations, clinical trials can be done with sham needles which do not penetrate the skin but collapse like miniature stage daggers. This method does not, however, control for acupuncturists’ expectations; blinding of the therapists is difficult, and truly double-blind (patient and therapist) trials of acupuncture therefore hardly exist. This means that even the most rigorous studies of acupuncture are usually burdened with residual bias.
  6. Few acupuncturists warn their patients of possible adverse effects; this may be because the side-effects of acupuncture (they occur in about 10% of all patients) are mostly mild. However, it is important to know that very serious complications of acupuncture are on record as well: acupuncture needles can injure vital organs like the lungs or the heart, and they can introduce infections into the body, e.g. hepatitis. About 100 fatalities after acupuncture have been reported in the medical literature – a figure which, due to the lack of a monitoring system, may represent just the tip of the iceberg.
  7. Given that, for the vast majority of conditions, there is no good evidence that acupuncture works beyond a placebo response, and that acupuncture is associated with finite risks, it seems to follow that, in most situations, the risk/benefit balance for acupuncture fails to be convincingly positive.

Reiki is a form of energy healing that evidently has been getting so popular that, according to the ‘Shropshire Star’, even stressed hedgehogs are now being treated with this therapy. In case you argue that this publication is not cutting edge when it comes to reporting of scientific advances, you may have a point. So, let us see what evidence we find on this amazing intervention.

A recent systematic review of the therapeutic effects of Reiki concludes that the serious methodological and reporting limitations of the limited existing Reiki studies preclude a definitive conclusion on its effectiveness. High-quality randomized controlled trials are needed to address the effectiveness of Reiki over placebo. Considering that this article was published in the JOURNAL OF ALTERNATIVE AND COMPLEMENTARY MEDICINE, this is a fairly damning verdict. The notion that Reiki is but a theatrical placebo recently received more support from a new clinical trial.

This pilot study examined the effects of Reiki therapy and companionship on improvements in quality of life, mood, and symptom distress during chemotherapy. Thirty-six breast cancer patients received usual care, Reiki, or a companion during chemotherapy. Data were collected from patients while they were receiving usual care. Subsequently, patients were randomized to either receive Reiki or a companion during chemotherapy. Questionnaires assessing quality of life, mood, symptom distress, and Reiki acceptability were completed at baseline and chemotherapy sessions 1, 2, and 4. Reiki was rated relaxing and caused no side effects. Both Reiki and companion groups reported improvements in quality of life and mood that were greater than those seen in the usual care group.

The authors of this study conclude that interventions during chemotherapy, such as Reiki or companionship, are feasible, acceptable, and may reduce side effects.

This is an odd conclusion, if there ever was one. Clearly the ‘companionship’ group was included to see whether Reiki has effects beyond simply providing sympathetic attention. The results show that this is not the case. It follows, I think, that Reiki is a placebo; its perceived relaxing effects are the result of non-specific phenomena which have nothing to do with Reiki per se. The fact that the authors fail to spell this out more clearly makes me wonder whether they are researchers or promoters of Reiki.

Some people will feel that it does not matter how Reiki works, the main thing is that it does work. I beg to differ!

If its effects are due to nothing more than attention and companionship, we do not need ‘trained’ Reiki masters to do the treatment; anyone who has time, compassion and sympathy can do it. More importantly, if Reiki is a placebo, we should not mislead people into believing that some supernatural energy is at work. This only promotes irrationality – and, as Voltaire once said: those who make you believe in absurdities can make you commit atrocities.

A special issue of Medical Care has just been published; it was sponsored by the Veterans Health Administration’s Office of Patient Centered Care and Cultural Transformation. A press release made the following statement about it:

Complementary and alternative medicine therapies are increasingly available, used, and appreciated by military patients, according to Drs Taylor and Elwy. They cite statistics showing that CAM programs are now offered at nearly 90 percent of VA medical facilities. Use of CAM modalities by veterans and active military personnel is at least as high as in the general population.

If you smell a bit of the old ad populum fallacy here, you may be right. But let’s look at the actual contents of the special issue. The most interesting article is about a study testing acupuncture for posttraumatic stress disorder (PTSD).

Fifty-five service members meeting research diagnostic criteria for PTSD were randomized to usual PTSD care (UPC) plus eight 60-minute sessions of acupuncture conducted twice weekly or to UPC alone. Outcomes were assessed at baseline and 4, 8, and 12 weeks postrandomization. The primary study outcomes were difference in PTSD symptom improvement on the PTSD Checklist (PCL) and the Clinician-administered PTSD Scale (CAPS) from baseline to 12-week follow-up between the two treatment groups. Secondary outcomes were depression, pain severity, and mental and physical health functioning. Mixed model regression and t test analyses were applied to the data.

The results show that the mean improvement in PTSD severity was significantly greater among those receiving acupuncture than in those receiving UPC. Acupuncture was also associated with significantly greater improvements in depression, pain, and physical and mental health functioning. Pre-post effect-sizes for these outcomes were large and robust.

The authors conclude from these data that acupuncture was effective for reducing PTSD symptoms. Limitations included small sample size and inability to parse specific treatment mechanisms. Larger multisite trials with longer follow-up, comparisons to standard PTSD treatments, and assessments of treatment acceptability are needed. Acupuncture is a novel therapeutic option that may help to improve population reach of PTSD treatment.

What shall we make of this?

I know I must sound like a broken record to some, but I have strong reservations about the interpretation provided here. One does not even need to be a ‘devil’s advocate’ to point out that the observed outcomes may have nothing at all to do with acupuncture per se. A much more rational interpretation of the findings would be that the eight 60-minute sessions of TLC and attention have positive effects on the subjective symptoms of soldiers suffering from PTSD. No needles are required for this to happen; and no mystical chi, meridians, life forces etc.

It would, of course, have been quite easy to design the study such that the extra attention is controlled for. But the investigators evidently did not want to do that. They seemed to have the desire to conduct a study where the outcome was clear even before the first patient had been recruited. That some if not most experts would call this poor science or even unethical may not have been their primary concern.

The question I ask myself is: why did the authors of this study fail to state the painfully obvious fact that the results are most likely unrelated to acupuncture? Is it because, in military circles, Occam’s razor is not on the curriculum? Is it because critical thinking has gone out of fashion (no, it does not even take critical thinking to point out something that is more than obvious)? Or is it because, in the present climate, it is ‘politically’ correct to introduce a bit of ‘holistic touchy-feely’ stuff into military medicine?

I would love to hear what my readers think.

Acute tonsillitis (AT) is an upper respiratory tract infection which is prevalent, particularly in children. The cause is usually a viral or, less commonly, a bacterial infection. Treatment is symptomatic and usually consists of ample fluid intake and painkillers; antibiotics are rarely indicated, even if the infection is bacterial in nature. The condition is self-limiting, and symptoms normally subside after about one week.

Homeopaths believe that their remedies are effective for AT – but is there any evidence? A recent trial seems to suggest there is.

It aimed, according to its authors, to determine the efficacy of a homeopathic complex on the symptoms of acute viral tonsillitis in African children in South Africa.

The double-blind, placebo-controlled RCT was a 6-day “pilot study” and included 30 children aged 6 to 12 years with acute viral tonsillitis. Participants took two tablets 4 times per day. The treatment group received lactose tablets medicated with the homeopathic complex (Atropa belladonna D4, Calcarea phosphoricum D4, Hepar sulphuris D4, Kalium bichromat D4, Kalium muriaticum D4, Mercurius protoiodid D10, and Mercurius biniodid D10). The placebo consisted of the unmedicated vehicle only. The Wong-Baker FACES Pain Rating Scale was used for measuring pain intensity, and a Symptom Grading Scale assessed changes in tonsillitis signs and symptoms.

The results showed that the treatment group had a statistically significant improvement in the following symptoms compared with the placebo group: pain associated with tonsillitis, pain on swallowing, erythema and inflammation of the pharynx, and tonsil size.

The authors drew the following conclusions: the homeopathic complex used in this study exhibited significant anti-inflammatory and pain-relieving qualities in children with acute viral tonsillitis. No patients reported any adverse effects. These preliminary findings are promising; however, the sample size was small and therefore a definitive conclusion cannot be reached. A larger, more inclusive research study should be undertaken to verify the findings of this study.

Personally, I agree only with the latter part of the conclusion and very much doubt that this study was able to “determine the efficacy” of the homeopathic product used. The authors themselves call their trial a “pilot study”. Such projects are not meant to determine efficacy but are usually designed to determine the feasibility of a trial design in order to subsequently mount a definitive efficacy study.

Moreover, I have considerable doubts about the impartiality of the authors. Their affiliation is “Department of Homoeopathy, University of Johannesburg, Johannesburg, South Africa”, and their article was published in a journal known to be biased in favour of homeopathy. These circumstances in themselves might not be all that important, but what makes me more than a little suspicious is this sentence from the introduction of their abstract:

“Homeopathic remedies are a useful alternative to conventional medications in acute uncomplicated upper respiratory tract infections in children, offering earlier symptom resolution, cost-effectiveness, and fewer adverse effects.”

A useful alternative to conventional medications (there are no conventional drugs for this condition) for earlier symptom resolution?

If it is true that the usefulness of homeopathic remedies has been established, why conduct the study?

If the authors were so convinced of this notion (for which there is, of course, no good evidence) how can we assume they were not biased in conducting this study?

I think that, in order to agree that a homeopathic remedy generates effects that differ from those of placebo, we need a proper (not a pilot) study, published in a journal of high standing by unbiased scientists.

Rigorous research into the effectiveness of a therapy should tell us the truth about the ability of this therapy to treat patients suffering from a given condition — perhaps not one single study, but the totality of the evidence (as evaluated in systematic reviews) should achieve this aim. Yet, in the realm of alternative medicine (and probably not just in this field), such reviews are often highly contradictory.

A concrete example might explain what I mean.

There are numerous systematic reviews assessing the effectiveness of acupuncture for fibromyalgia syndrome (FMS). It is safe to assume that the authors of these reviews have all conducted comprehensive searches of the literature in order to locate all the published studies on this subject. Subsequently, they have evaluated the scientific rigor of these trials and summarised their findings. Finally they have condensed all of this into an article which arrives at a certain conclusion about the value of the therapy in question. Understanding this process (outlined here only very briefly), one would expect that all the numerous reviews draw conclusions which are, if not identical, at least very similar.

However, the disturbing fact is that they are not remotely similar. Here are two which, in fact, are so different that one could assume they have evaluated a set of totally different primary studies (which, of course, they have not).

One recent (2014) review concluded that acupuncture for FMS has a positive effect, and acupuncture combined with western medicine can strengthen the curative effect.

Another recent review concluded that a small analgesic effect of acupuncture was present, which, however, was not clearly distinguishable from bias. Thus, acupuncture cannot be recommended for the management of FMS.

How can this be?

In contrast to most systematic reviews of conventional medicine, systematic reviews of alternative therapies are almost invariably based on a small number of primary studies (in the above case, the total number was only 7!). The quality of these trials is often low (all reviews therefore end with the somewhat meaningless conclusion that more and better studies are needed).

So, the situation with primary studies of alternative therapies for inclusion into systematic reviews usually is as follows:

  • the number of trials is low
  • the quality of trials is even lower
  • the results are not uniform
  • the majority of the poor quality trials show a positive result (bias tends to generate false positive findings; the sketch after this list illustrates the point)
  • the few rigorous trials yield a negative result
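To make the fourth point above concrete, here is a small simulation sketch (Python/NumPy, all numbers invented): the true treatment effect is set to zero, yet once a modest systematic bias is added to the treatment arm (standing in for, say, unblinded outcome assessment), most of the small trials come out ‘positive’.

```python
# Simulated small trials of a treatment with NO true effect; 'bias' shifts the
# treatment-arm measurements to mimic systematic flaws such as lack of blinding.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_per_arm = 40, 15

def count_positive(bias):
    positive = 0
    for _ in range(n_trials):
        treatment = rng.normal(bias, 1, n_per_arm)   # true effect is zero
        control = rng.normal(0.0, 1, n_per_arm)
        diff = treatment.mean() - control.mean()
        se = np.sqrt(treatment.var(ddof=1) / n_per_arm +
                     control.var(ddof=1) / n_per_arm)
        positive += diff / se > 1.64                 # lenient one-sided test
    return positive

print("positive trials without bias:", count_positive(0.0))  # ~5% by chance
print("positive trials with bias:   ", count_positive(0.8))  # typically a clear majority
```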

Unfortunately this means that the authors of systematic reviews summarising such confusing evidence often seem to feel at liberty to project their own pre-conceived ideas into their overall conclusion about the effectiveness of the treatment. Often the researchers are in favour of the therapy in question – in fact, this usually is precisely the attitude that motivated them to conduct a review in the first place. In other words, the frequently murky state of the evidence (as outlined above) can serve as a welcome invitation for personal bias to exert its effect and skew the overall conclusion. The final result is that the readers of such systematic reviews are being misled.

Authors who are biased in favour of the treatment will tend to stress that the majority of the trials are positive. Therefore the overall verdict has to be positive as well, in their view. The fact that most trials are flawed does not usually bother them all that much (I suspect that many fail to comprehend the effects of bias on the study results); they merely add to their conclusions that “more and better trials are needed” and believe that this meek little remark is sufficient evidence for their ability to critically analyse the data.

Authors who are not biased and have the necessary skills for critical assessment, on the other hand, will insist that most trials are flawed and therefore their results must be categorised as unreliable. They will also emphasise the fact that there are a few reliable studies and clearly point out that these are negative. Thus their overall conclusion must be negative as well.

In the end, enthusiasts will conclude that the treatment in question is at least promising, if not recommendable, while real scientists will rightly state that the available data are too flimsy to demonstrate the effectiveness of the therapy; as it is wrong to recommend unproven treatments, they will not recommend the treatment for routine use.

The difference between the two might just seem marginal – but, in fact, it is huge: IT IS THE DIFFERENCE BETWEEN MISLEADING PEOPLE AND GIVING RESPONSIBLE ADVICE; THE DIFFERENCE BETWEEN VIOLATING AND ADHERING TO ETHICAL STANDARDS.

One of the problems regularly encountered when evaluating the effectiveness of chiropractic spinal manipulation is that there are numerous chiropractic spinal manipulative techniques, and clinical trials rarely provide an exact means of differentiating between them. Faced with a negative study, chiropractors might therefore argue that the result was negative because the wrong techniques were used; therefore they might insist that it does not reflect chiropractic in a wider sense. Others claim that even a substantial body of negative evidence does not apply to chiropractic as a whole because there is a multitude of techniques that have not yet been properly tested. It seems as though the chiropractic profession wants to have its cake and eat it.

Amongst the most commonly used is the ‘DIVERSIFIED TECHNIQUE’ (DT) which has been described as follows: Like many chiropractic and osteopathic manipulative techniques, Diversified is characterized by a high velocity low amplitude thrust. Diversified is considered the most generic chiropractic manipulative technique and is differentiated from other techniques in that its objective is to restore proper movement and alignment of spine and joint dysfunction.

Also widely used is a technique called ‘FLEXION DISTRACTION’ (FD) which involves the use of a specialized table that gently distracts or stretches the spine and which allows the chiropractor to isolate the area of disc involvement while slightly flexing the spine in a pumping rhythm.

The ‘ACTIVATOR TECHNIQUE’ (AT) seems a little less popular; it involves having the patient lie in a prone position and comparing the functional leg lengths. Often one leg will seem to be shorter than the other. The chiropractor then carries out a series of muscle tests such as having the patient move their arms in a certain position in order to activate the muscles attached to specific vertebrae. If the leg lengths are not the same, that is taken as a sign that the problem is located at that vertebra. The chiropractor treats problems found in this way moving progressively along the spine in the direction from the feet towards the head. The activator is a small handheld spring-loaded instrument which delivers a small impulse to the spine. It was found to give off no more than 0.3 J of kinetic energy in a 3-millisecond pulse. The aim is to produce enough force to move the vertebrae but not enough to cause injury.

There is limited research comparing the effectiveness of these and the many other techniques used by chiropractors, and the few studies that are available are usually less than rigorous and their findings are thus unreliable. A first step in researching this rather messy area would be to determine which techniques are most frequently employed.

The aim of this new investigation was to do just that, namely to provide insight into which treatment approaches are used most frequently by Australian chiropractors to treat spinal musculoskeletal conditions.

A questionnaire was sent online to the members of the two main Australian chiropractic associations in 2013. The participants were asked to provide information on treatment choices for specific spinal musculoskeletal conditions.

A total of 280 responses were received. DT was the first choice of treatment for most of the included conditions. DT was used significantly less in 4 conditions: cervical disc syndrome with radiculopathy and cervical central stenosis were more likely to be treated with AT. FD was used almost as much as DT in the treatment of lumbar disc syndrome with radiculopathy and lumbar central stenosis. More experienced Australian chiropractors used more AT and soft tissue therapy and less DT than their less experienced colleagues. The majority of the responding chiropractors also used ancillary procedures such as soft tissue techniques and exercise prescription in the treatment of spinal musculoskeletal conditions.

The authors concluded that this survey provides information on commonly used treatment choices to the chiropractic profession. Treatment choices changed based on the region of disorder and whether neurological symptoms were present rather than with specific diagnoses. Diversified technique was the most commonly used spinal manipulative therapy, however, ancillary procedures such as soft tissue techniques and exercise prescription were also commonly utilised. This information may help direct future studies into the efficacy of chiropractic treatment for spinal musculoskeletal disorders.

I am a little less optimistic that this information will help to direct future research. Critical readers might have noticed that the above definitions of two commonly used techniques are rather vague, particularly that of DT.

Why is that so? The answer seems to be that even chiropractors are at a loss coming up with a good definition of their most-used therapeutic techniques. I looked hard for a more precise definition but the best I could find was this: Diversified is characterized by the manual delivery of a high velocity low amplitude thrust to restricted joints of the spine and the extremities. This is known as an adjustment and is performed by hand. Virtually all joints of the body can be adjusted to help restore proper range of motion and function. Initially a functional and manual assessment of each joint’s range and quality of motion will establish the location and degree of joint dysfunction. The patient will then be positioned depending on the region being adjusted when a specific, quick impulse will be delivered through the line of the joint in question. The direction, speed, depth and angles that are used are the product of years of experience, practice and a thorough understanding of spinal mechanics. Often a characteristic ‘crack’ or ‘pop’ may be heard during the process. This is perfectly normal and is nothing to worry about. It is also not a guide as to the value or effectiveness of the adjustment.

This means that the DT is not a single method but a hotchpotch of techniques; this assumption is also confirmed by the following quote: The diversified technique is a technique used by chiropractors that is composed of all other techniques. It is the most commonly used technique and primarily focuses on spinal adjustments to restore function to vertebral and spinal problems.

What does that mean for research into chiropractic spinal manipulation? It means, I think, that even if we specified that a study was to test the effectiveness of one named chiropractic technique, such as DT, the chiropractors doing the treatments would most likely do what they believe is required for each individual patient.

There is, of course, nothing wrong with that approach; it is used in many other areas of health care as well. In such cases, we need to view the treatment as something like a ‘black box’; we test the effectiveness of the black box without attempting to define its exact contents, and we trust that the clinicians in the trial are well-trained to use the optimal mix of techniques as needed for each individual patient.

I would assume that, in most studies available to date, this is precisely what has already been implemented. It is simply not reasonable to assume that, in any trial, the trialists instructed the chiropractors not to use the optimal treatments.

What does that mean for the interpretation of the existing trial evidence? It means, I think, that we should interpret it at face value. The clinical evidence for chiropractic treatment of most conditions fails to be convincingly positive. Chiropractors often counter that such negative findings fail to take into account that chiropractors use numerous different techniques. This argument is not valid because we must assume that in each trial the optimal techniques were administered.

In other words, the chiropractic attempt to have the cake and eat it has failed.

A reader of this blog recently sent me the following message: “Looks like this group followed your recent post about how to perform a CAM RCT!” A link directed me to a new trial of ear acupressure. Today is ‘national acupuncture and oriental medicine day’ in the US, a good occasion perhaps to have a critical look at it.

The aim of this study was to assess the effectiveness of ear acupressure and massage vs. control in the improvement of pain, anxiety and depression in persons diagnosed with dementia.

For this purpose, the researchers recruited a total of 120 elderly dementia patients institutionalized in residential homes. The participants were randomly allocated to three groups:

  • Control group – they continued with their routine activities;
  • Ear acupressure intervention group – they received ear acupressure treatment (pressure was applied to acupressure points on the ear);
  • Massage therapy intervention group – they received relaxing massage therapy.

Pain, anxiety and depression were assessed with the Doloplus2, Cornell and Campbell scales. The study was carried out over 5 months: three months of experimental treatment and two months with no treatment. The assessments were done at baseline, each month during the treatment, and at one and two months of follow-up.

A total of 111 participants completed the study. The ear acupressure intervention group showed better improvements than the two other groups in relation to pain and depression during the treatment period and at one month of follow-up. The best improvement in pain was achieved in the last (3rd) month of ear acupressure treatment. The best results regarding anxiety were also observed in the last month of treatment.

The authors concluded that ear acupressure and massage therapy showed better results than the control group in relation to pain, anxiety and depression. However, ear acupressure achieved more improvements.

The question is: IS THIS A RIGOROUS TRIAL?

My answer would be NO.

Now I had better explain why, hadn’t I?

If we look at them critically, the results of this trial might merely prove that spending some time with a patient, being nice to her, administering a treatment that involves time and touch, etc. yields positive changes in subjective experiences of pain, anxiety and depression. Thus the results of this study might have nothing to do with the therapies per se.

And why would acupressure be more successful than massage therapy? Massage therapy is an ‘old hat’ for many patients; by contrast, acupressure is exotic and relates to mystical life forces etc. Features like that have the potential to maximise the placebo-response. Therefore it is conceivable that they have contributed to the superiority of acupressure over massage.

What I am saying is that the results of this trial can be interpreted in not just one but several ways. The main reason for that is the fact that the control group were not given an acceptable placebo, one that was indistinguishable from the real treatment. Patients were fully aware of what type of intervention they were getting. Therefore their expectations, possibly heightened by the therapists, determined the outcomes. Consequently there were factors at work which were totally beyond the control of the researchers and a clear causal link between the therapy and the outcome cannot be established.

An RCT that is aimed to test the effectiveness of a therapy but fails to establish such a causal link beyond reasonable doubt cannot be characterised as a rigorous study, I am afraid.

Sorry! Did I spoil your ‘national acupuncture and oriental medicine day’?
