Can I tempt you into running a little (hopefully instructive) thought experiment? It is quite simple: I will tell you about the design of a clinical trial, and you will tell me what the likely outcome of this study would be.
Are you game?
Here we go:
Imagine we conduct a trial of acupuncture for persistent pain (any type of pain really). We want to find out whether acupuncture is more than a placebo when it comes to pain control. Of course, we want our trial to look as rigorous as possible. So, we design it as a randomised, sham-controlled, partially-blinded study. To be really ‘cutting edge’, our study will not have two but three parallel groups:
1. Standard needle acupuncture administered according to a protocol recommended by a team of expert acupuncturists.
2. Minimally invasive sham-acupuncture employing shallow needle insertion using short needles at non-acupuncture points. Patients in groups 1 and 2 are blinded, i.e. they are not supposed to know whether they receive the sham or real acupuncture.
3. No treatment at all.
We apply the treatments for a sufficiently long time, say 12 weeks. Before we start, after 6 and 12 weeks, we measure our patients’ pain with a validated method. We use sound statistical methods to compare the outcomes between the three groups.
WHAT DO YOU THINK THE RESULT WOULD BE?
You are not sure?
Well, let me give you some hints:
Group 3 is not going to do very well; not only do they receive no therapy at all, but they are also disappointed to have ended up in this group, as they joined the study in the hope of getting acupuncture. Therefore, they will (claim to) feel a lot of pain.
Group 2 will be pleased to receive some treatment. However, during the course of the trial, they will get more and more suspicious. As they were told during the process of obtaining informed consent that the trial entails treating some patients with a sham/placebo, they are bound to ask themselves whether they ended up in this group. They will see the short needles and the shallow needling, and a percentage of patients from this group will doubtless suspect that they are getting the sham treatment. The doubters will not show a powerful placebo response. Therefore, the average pain scores in this group will decrease – but only a little.
Group 1 will also be pleased to receive some treatment. As the therapists cannot be blinded, they will do their best to meet the high expectations of their patients. Consequently, they will benefit fully from the placebo effect of the intervention and the pain score of this group will decrease significantly.
So, now we can surely predict the most likely result of this trial without even conducting it. Assuming that acupuncture is a placebo-therapy, as many people do, we now see that group 3 will suffer the most pain. In comparison, groups 1 and 2 will show better outcomes.
Of course, the main question is: how do groups 1 and 2 compare to each other? After all, we designed our sham-controlled trial to answer exactly this question: is acupuncture more than a placebo? As pointed out above, some patients in group 2 would have become suspicious and therefore would not have experienced the full placebo response. This means that, provided the sample sizes are sufficiently large, there should be a significant difference between these two groups favouring real acupuncture over sham. In other words, our trial will conclude that acupuncture is better than placebo, even if acupuncture is a placebo.
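If we assume that acupuncture is a pure placebo, this prediction can be sketched as a small simulation. All numbers below (placebo effect size, share of doubters, noise level, group size) are invented assumptions for illustration, not data from any real trial:

```python
import random
import statistics

random.seed(42)

N = 200                # patients per group (hypothetical)
PLACEBO_EFFECT = 2.0   # assumed mean pain reduction from a convincing treatment ritual
DOUBTER_SHARE = 0.4    # assumed fraction of sham patients who suspect the sham
NOISE_SD = 1.5         # individual variability in pain response

# Group 1: real acupuncture gets the full placebo response (no specific effect assumed)
real = [random.gauss(PLACEBO_EFFECT, NOISE_SD) for _ in range(N)]

# Group 2: sham acupuncture; doubters show a diminished placebo response
sham = []
for _ in range(N):
    doubter = random.random() < DOUBTER_SHARE
    strength = PLACEBO_EFFECT * (0.3 if doubter else 1.0)  # doubters respond much less
    sham.append(random.gauss(strength, NOISE_SD))

# Group 3: no treatment, hence no placebo response
none = [random.gauss(0.0, NOISE_SD) for _ in range(N)]

for label, group in [("real", real), ("sham", sham), ("none", none)]:
    print(label, round(statistics.mean(group), 2))
```

Even though the ‘specific’ effect of the needles is set to zero here, the simulated trial shows real > sham > no treatment: exactly the pattern predicted above.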
THANK YOU FOR DOING THIS THOUGHT EXPERIMENT WITH ME.
Now I can tell you that it has a very real basis. A leading medical journal, JAMA, has just published such a study and, to make matters worse, the trial was even sponsored by one of the most prestigious funding agencies: the NIH.
Here is the abstract:
Musculoskeletal symptoms are the most common adverse effects of aromatase inhibitors and often result in therapy discontinuation. Small studies suggest that acupuncture may decrease aromatase inhibitor-related joint symptoms.
To determine the effect of acupuncture in reducing aromatase inhibitor-related joint pain.
Design, Setting, and Patients:
Randomized clinical trial conducted at 11 academic centers and clinical sites in the United States from March 2012 to February 2017 (final date of follow-up, September 5, 2017). Eligible patients were postmenopausal women with early-stage breast cancer who were taking an aromatase inhibitor and scored at least 3 on the Brief Pain Inventory Worst Pain (BPI-WP) item (score range, 0-10; higher scores indicate greater pain).
Patients were randomized 2:1:1 to the true acupuncture (n = 110), sham acupuncture (n = 59), or waitlist control (n = 57) group. True acupuncture and sham acupuncture protocols consisted of 12 acupuncture sessions over 6 weeks (2 sessions per week), followed by 1 session per week for 6 weeks. The waitlist control group did not receive any intervention. All participants were offered 10 acupuncture sessions to be used between weeks 24 and 52.
Main Outcomes and Measures:
The primary end point was the 6-week BPI-WP score. Mean 6-week BPI-WP scores were compared by study group using linear regression, adjusted for baseline pain and stratification factors (clinically meaningful difference specified as 2 points).
Among 226 randomized patients (mean [SD] age, 60.7 [8.6] years; 88% white; mean [SD] baseline BPI-WP score, 6.6 [1.5]), 206 (91.1%) completed the trial. From baseline to 6 weeks, the mean observed BPI-WP score decreased by 2.05 points (reduced pain) in the true acupuncture group, by 1.07 points in the sham acupuncture group, and by 0.99 points in the waitlist control group. The adjusted difference for true acupuncture vs sham acupuncture was 0.92 points (95% CI, 0.20-1.65; P = .01) and for true acupuncture vs waitlist control was 0.96 points (95% CI, 0.24-1.67; P = .01). Patients in the true acupuncture group experienced more grade 1 bruising compared with patients in the sham acupuncture group (47% vs 25%; P = .01).
Conclusions and Relevance:
Among postmenopausal women with early-stage breast cancer and aromatase inhibitor-related arthralgias, true acupuncture compared with sham acupuncture or with waitlist control resulted in a statistically significant reduction in joint pain at 6 weeks, although the observed improvement was of uncertain clinical importance.
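The primary analysis described in the abstract (the 6-week pain score compared between groups by linear regression, adjusted for baseline pain) can be sketched with simulated data. The sample size, effect size and noise level below are assumptions for illustration, not the trial's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # hypothetical patients per arm

# Simulated baseline pain scores, roughly matching the reported mean of 6.6
baseline = rng.normal(6.6, 1.5, 2 * n)
group = np.repeat([1.0, 0.0], n)  # 1 = true acupuncture, 0 = sham
ASSUMED_EFFECT = -0.9             # assumed extra pain reduction in the true-acupuncture arm

# 6-week score depends on baseline pain, group assignment, and noise
week6 = baseline - 1.0 + ASSUMED_EFFECT * group + rng.normal(0, 1.0, 2 * n)

# ANCOVA-style model: week-6 score ~ intercept + group + baseline
X = np.column_stack([np.ones(2 * n), group, baseline])
coef, *_ = np.linalg.lstsq(X, week6, rcond=None)
adjusted_difference = coef[1]  # coefficient on the group indicator
print(round(adjusted_difference, 2))
```

The coefficient on the group indicator is the ‘adjusted difference’ of the kind reported in the abstract; adjusting for baseline simply removes the part of the 6-week score that is predictable from each patient's starting pain level.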
Do you see how easy it is to deceive (almost) everyone with a trial that looks rigorous to (almost) everyone?
My lesson from all this is as follows: whether consciously or unconsciously, SCAM-researchers often build into their trials more or less well-hidden little loopholes that ensure they generate a positive outcome. Thus even a placebo can appear to be effective. They are true masters of producing false-positive findings which later become part of a meta-analysis which is, of course, equally false-positive. It is a great shame, in my view, that even top journals (in the above case JAMA) and prestigious funders (in the above case the NIH) cannot (or will not?) see behind this type of trickery.
“Non-reproducible single occurrences are of no significance to science”, this quote by Karl Popper often seems to get forgotten in medicine, particularly in alternative medicine. It indicates that findings have to be reproducible to be meaningful – if not, we cannot be sure that the outcome in question was caused by the treatment we applied.
This is thus a question of cause and effect.
The statistician Sir Austin Bradford Hill proposed in 1965 a set of 9 criteria to provide evidence of a relationship between a presumed cause and an observed effect, developed while demonstrating the connection between cigarette smoking and lung cancer. One of his criteria is consistency or reproducibility: consistent findings observed by different persons in different places with different samples strengthen the likelihood of an effect.
By mentioning ‘different persons’, Hill seems to also establish the concept of INDEPENDENT replication.
Let me try to explain this with an example from the world of SCAM.
- A homeopath feels that childhood diarrhoea could perhaps be treated with individualised homeopathic remedies. She conducts a trial, finds a positive result and concludes that the statistically significant decrease in the duration of diarrhea in the treatment group suggests that homeopathic treatment might be useful in acute childhood diarrhea. Further study of this treatment deserves consideration.
- Unsurprisingly, this study is met with disbelief by many experts. Some go as far as doubting its validity, and several letters to the editor appear expressing criticism. The homeopath is thus motivated to run another trial to prove her point. Its results are consistent with the finding from the previous study that individualized homeopathic treatment decreases the duration of diarrhea and number of stools in children with acute childhood diarrhea.
- We now have a replication of the original finding. Yet, for a range of reasons, sceptics are far from satisfied. The homeopath thus runs a further trial and publishes a meta-analysis of all three studies. The combined analysis shows a duration of diarrhoea of 3.3 days in the homeopathy group compared with 4.1 in the placebo group (P = 0.008). She thus concludes that the results from these studies confirm that individualized homeopathic treatment decreases the duration of acute childhood diarrhea and suggest that larger sample sizes be used in future homeopathic research to ensure adequate statistical power. Homeopathy should be considered for use as an adjunct to oral rehydration for this illness.
To most homeopaths, it seems that this body of evidence from three replications is sound and solid. Consequently, they frequently cite these publications as cast-iron proof of their assumption that individualised homeopathy is effective. Sceptics, however, are still not convinced.
The studies have been replicated alright, but what is missing is an INDEPENDENT replication.
To me this word implies two things:
- The results have to be reproduced by another research group that is unconnected to the one that conducted the three previous studies.
- That group needs to be free from any bias that might get in the way of conducting a rigorous trial.
And why do I think this latter point is important?
Simply because I know from many years of experience that a researcher who strongly believes in homeopathy, or any other subject in question, will inadvertently introduce all sorts of biases into a study, even if its design is seemingly rigorous. In the end, these flaws will not necessarily show in the published article, which means that the public will be misled. In other words, the paper will report a false-positive finding.
It is possible, even likely, that this has happened with the three trials mentioned above. The fact is that, as far as I know, there is no independent replication of these studies.
In the light of all this, Popper’s axiom as applied to medicine should perhaps be modified: findings without independent replication are of no significance. Or, to put it even more bluntly: independent replication is an essential self-cleansing process of science by which it rids itself of errors, fraud and misunderstandings.
On this blog, we constantly discuss the shortcomings of clinical trials of (and other research into) alternative medicine. Yet, there can be no question that research into conventional medicine is often unreliable as well.
What might be the main reasons for this lamentable fact?
A recent BMJ article discussed 5 prominent reasons:
Firstly, much research fails to address questions that matter. For example, new drugs are tested against placebo rather than against usual treatments. Or the question may already have been answered, but the researchers haven’t undertaken a systematic review that would have told them the research was not needed. Or the research may use outcomes, perhaps surrogate measures, that are not useful.
Secondly, the methods of the studies may be inadequate. Many studies are too small, and more than half fail to deal adequately with bias. Studies are not replicated, and when people have tried to replicate studies they find that most do not have reproducible results.
Thirdly, research is not efficiently regulated and managed. Quality assurance systems fail to pick up the flaws in the research proposals. Or the bureaucracy involved in having research funded and approved may encourage researchers to conduct studies that are too small or too short term.
Fourthly, the research that is completed is not made fully accessible. Half of studies are never published at all, and there is a bias in what is published, meaning that treatments may seem to be more effective and safer than they actually are. Then not all outcome measures are reported, again with a bias towards those that are positive.
Fifthly, published reports of research are often biased and unusable. In trials, about a third of interventions are inadequately described, meaning they cannot be implemented. Half of study outcomes are not reported.
END OF QUOTE
Apparently, these 5 issues are the reason why 85% of biomedical research is being wasted.
That is in CONVENTIONAL medicine, of course.
What about alternative medicine?
There is no question in my mind that the percentage figure must be even higher here. But do the same reasons apply? Let’s go through them again:
- Much research fails to address questions that matter. That is certainly true for alternative medicine – just think of the plethora of utterly useless surveys that are being published.
- The methods of the studies may be inadequate. Also true, as we have seen hundreds of times on this blog. Some of the most prevalent flaws include, in my experience, small sample sizes, lack of adequate controls (e.g. the A+B vs B design) and misleading conclusions.
- Research is not efficiently regulated and managed. True, but probably not a specific feature of alternative medicine research.
- Research that is completed is not made fully accessible. Most likely true but, due to lack of information and transparency, impossible to judge.
- Published reports of research are often biased and unusable. This is unquestionably a prominent feature of alternative medicine research.
All of this seems to indicate that the problems are very similar – similar but much more profound in the realm of alternative medicine, I’d say based on many years of experience (yes, what follows is opinion and not evidence because the latter is hardly available).
The thing is that, like almost any other job, research needs knowledge, skills, training, experience, integrity and impartiality to be done properly. It simply cannot be done well without such qualities. In alternative medicine, we do not have many individuals who have all or even most of these qualities. Instead, we have people who often are evangelical believers in alternative medicine, want to further their field by doing some research and therefore acquire a thin veneer of scientific expertise.
In my 25 years of experience in this area, I have not often seen researchers who knew that research is for testing hypotheses and not for trying to prove one’s hunches to be correct. In my own team, those who were the most enthusiastic about a particular therapy (and were thus seen as experts in its clinical application), were often the lousiest researchers who had the most difficulties coping with the scientific approach.
For me, this continues to be THE problem in alternative medicine research. The investigators – and some of them are now sufficiently skilled to bluff us into believing they are serious scientists – essentially start on the wrong foot. Because they never were properly trained and educated, they fail to appreciate how research proceeds. They hardly know how to properly establish a hypothesis, and – most crucially – they don’t know that, once that is done, you ought to conduct investigation after investigation to try to show that your hypothesis is incorrect. Only once all reasonable attempts to disprove it have failed can your hypothesis be considered correct. These multiple attempts at disproving go entirely against the grain of an enthusiast who has plenty of emotional baggage and therefore cannot bring him/herself to honestly attempt to disprove his/her beloved hypothesis.
The plainly visible result of this situation is the fact that we have dozens of alternative medicine researchers who never publish a negative finding related to their pet therapy (some of them were admitted to what I call my HALL OF FAME on this blog, in case you want to verify this statement). And the lamentable consequence of all this is the fast-growing mountain of dangerously misleading (but often seemingly robust) articles about alternative treatments polluting Medline and other databases.
Is homeopathy effective for specific conditions? The FACULTY OF HOMEOPATHY (FoH, the professional organisation of UK doctor homeopaths) say YES. In support of this bold statement, they cite a total of 35 systematic reviews of homeopathy with a focus on specific clinical areas. “Nine of these 35 reviews presented conclusions that were positive for homeopathy”, they claim. Here they are:
Allergies and upper respiratory tract infections 8,9
Childhood diarrhoea 10
Post-operative ileus 11
Rheumatic diseases 12
Seasonal allergic rhinitis (hay fever) 13–15
Vertigo 16
And here are the references (I took the liberty of adding my comments in bold):
8. Bornhöft G, Wolf U, Ammon K, et al. Effectiveness, safety and cost-effectiveness of homeopathy in general practice – summarized health technology assessment. Forschende Komplementärmedizin, 2006; 13 Suppl 2: 19–29.
This is the infamous ‘Swiss report‘ which, nowadays, only homeopaths take seriously.
9. Bellavite P, Ortolani R, Pontarollo F, et al. Immunology and homeopathy. 4. Clinical studies – Part 1. Evidence-based Complementary and Alternative Medicine: eCAM, 2006; 3: 293–301.
This is not a systematic review as it lacks any critical assessment of the primary data and includes observational studies and even case series.
10. Jacobs J, Jonas WB, Jimenez-Perez M, Crothers D. Homeopathy for childhood diarrhea: combined results and metaanalysis from three randomized, controlled clinical trials. Pediatric Infectious Disease Journal, 2003; 22: 229–234.
This is a meta-analysis by Jennifer Jacobs (who recently featured on this blog) of 3 studies by Jennifer Jacobs; hardly convincing I’d say.
11. Barnes J, Resch K-L, Ernst E. Homeopathy for postoperative ileus? A meta-analysis. Journal of Clinical Gastroenterology, 1997; 25: 628–633.
This is my own paper! It concluded that “several caveats preclude a definitive judgment.”
12. Jonas WB, Linde K, Ramirez G. Homeopathy and rheumatic disease. Rheumatic Disease Clinics of North America, 2000; 26: 117–123.
This is not a systematic review; here is the (unabridged) abstract:
Despite a growing interest in uncovering the basic mechanisms of arthritis, medical treatment remains symptomatic. Current medical treatments do not consistently halt the long-term progression of these diseases, and surgery may still be needed to restore mechanical function in large joints. Patients with rheumatic syndromes often seek alternative therapies, with homeopathy being one of the most frequent. Homeopathy is one of the most frequently used complementary therapies worldwide.
13. Wiesenauer M, Lüdtke R. A meta-analysis of the homeopathic treatment of pollinosis with Galphimia glauca. Forschende Komplementärmedizin und Klassische Naturheilkunde, 1996; 3: 230–236.
This is a meta-analysis by Wiesenauer of trials conducted by Wiesenauer.
My own, more recent analysis of these data arrived at a considerably less favourable conclusion: “… three of the four currently available placebo-controlled RCTs of homeopathic Galphimia glauca (GG) suggest this therapy is an effective symptomatic treatment for hay fever. There are, however, important caveats. Most essentially, independent replication would be required before GG can be considered for the routine treatment of hay fever.” (Focus on Alternative and Complementary Therapies September 2011 16(3))
14. Taylor MA, Reilly D, Llewellyn-Jones RH, et al. Randomised controlled trials of homoeopathy versus placebo in perennial allergic rhinitis with overview of four trial series. British Medical Journal, 2000; 321: 471–476.
15. Bellavite P, Ortolani R, Pontarollo F, et al. Immunology and homeopathy. 4. Clinical studies – Part 2. Evidence-based Complementary and Alternative Medicine: eCAM, 2006; 3: 397–409.
This is not a systematic review as it lacks any critical assessment of the primary data and includes observational studies and even case series.
16. Schneider B, Klein P, Weiser M. Treatment of vertigo with a homeopathic complex remedy compared with usual treatments: a meta-analysis of clinical trials. Arzneimittelforschung, 2005; 55: 23–29.
This is a meta-analysis of 2 (!) RCTs and 2 observational studies of ‘Vertigoheel’, a preparation which is not a homeopathic but a homotoxicologic remedy (it does not follow the ‘like cures like’ assumption of homeopathy). Moreover, this product contains pharmacologically active substances (and nobody doubts that active substances can have effects).
So, positive evidence from 9 systematic reviews in 6 specific clinical areas?
I let you answer this question.
Shiatsu is an alternative therapy that is popular but has so far attracted almost no research. Therefore, I was excited when I saw a new paper on the subject. Sadly, my excitement waned quickly when I started reading the abstract.
This single-blind randomized controlled study aimed to evaluate the effects of Shiatsu on mood, cognition, and functional independence in patients undergoing physical activity. Alzheimer disease (AD) patients with depression were randomly assigned to the “active group” (Shiatsu + physical activity) or the “control group” (physical activity alone).
Shiatsu was performed by the same therapist once a week for ten months. Global cognitive functioning (Mini Mental State Examination – MMSE), depressive symptoms (Geriatric Depression Scale – GDS), and functional status (Activity of Daily Living – ADL, Instrumental ADL – IADL) were assessed before and after the intervention.
The researchers found a within-group improvement of MMSE, ADL, and GDS in the Shiatsu group. However, the analysis of differences before and after the interventions showed a statistically significant decrease of GDS score only in the Shiatsu group.
The authors concluded that the combination of Shiatsu and physical activity improved depression in AD patients compared to physical activity alone. The pathomechanism might involve neuroendocrine-mediated effects of Shiatsu on neural circuits implicated in mood and affect regulation.
- We first evaluated the effect of Shiatsu in depressed patients with Alzheimer’s disease (AD).
- Shiatsu significantly reduced depression in a sample of mild-to-moderate AD patients.
- Neuroendocrine-mediated effect of Shiatsu may modulate mood and affect neural circuits.
Where to begin?
1 The study is called a ‘pilot’. As such it should not draw conclusions about the effectiveness of Shiatsu.
2 The design of the study was such that there was no accounting for the placebo effect (the often-discussed ‘A+B vs B’ design); therefore, it is impossible to attribute the observed outcome to Shiatsu. The ‘highlight’ – Shiatsu significantly reduced depression in a sample of mild-to-moderate AD patients – therefore turns out to be a low-light.
3 As this was a study with a control group, within-group changes are irrelevant and do not even deserve a mention.
4 The last point about the mode of action is pure speculation, and not borne out by the data presented.
5 Accumulating so much nonsense in one research paper is, in my view, unethical.
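The ‘A+B vs B’ problem from point 2 can be made concrete with a toy simulation: even if Shiatsu contributes nothing specific, adding any attention-rich ritual (A) to physical activity (B) will tend to beat physical activity alone. All parameter values below are assumptions for illustration:

```python
import random
import statistics

random.seed(1)

N = 30                  # participants per arm (hypothetical)
ACTIVITY_EFFECT = 1.5   # assumed improvement from physical activity (treatment B)
PLACEBO_RESPONSE = 1.0  # assumed nonspecific effect of the added ritual (treatment A)
NOISE_SD = 1.2          # individual variability

# 'A+B' arm: the inert add-on still carries attention and expectation effects
active = [random.gauss(ACTIVITY_EFFECT + PLACEBO_RESPONSE, NOISE_SD) for _ in range(N)]
# 'B' arm: physical activity alone
control = [random.gauss(ACTIVITY_EFFECT, NOISE_SD) for _ in range(N)]

print(round(statistics.mean(active), 2), round(statistics.mean(control), 2))
```

Both arms improve, and the A+B arm improves more, yet nothing in this design can tell us whether A has any effect beyond placebo.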
Research into alternative medicine does not have a good reputation – studies like this one are not inclined to improve it.
Personally, I find our good friend Dana Ullman truly priceless. There are several reasons for that; one is that he is often so exemplarily wrong that it helps me to explain fundamental things more clearly. With a bit of luck, this might enable me to better inform people who might be thinking a bit like Dana. In this sense, our good friend Dana has significant educational value.
According to present and former editors of THE LANCET and the NEW ENGLAND JOURNAL OF MEDICINE, “evidence based medicine” can no longer be trusted. There is obviously no irony in Ernst and his ilk “banking” on “evidence” that has no firm footing except their personal belief systems: https://medium.com/@drjasonfung/the-corruption-of-evidence-based-medicine-killing-for-profit-41f2812b8704
Ernst is a fundamentalist whose God is reductionistic science, a 20th century model that has little real meaning today…but this won’t stop the new attacks on me personally…
END OF COMMENT
Where to begin?
Let’s start with some definitions.
- Evidence is the body of facts that leads to a given conclusion. Because the outcomes of treatments such as homeopathy depend on a multitude of factors, the evidence for or against their effectiveness is best based not on experience but on clinical trials and systematic reviews of clinical trials (this is copied from my book).
- EBM is the integration of best research evidence with clinical expertise and patient values. It thus rests on three pillars: external evidence, ideally from systematic reviews, the clinician’s experience, and the patient’s preferences (and this is from another book).
Few people would argue that EBM, as it is applied currently, is without fault. Certainly I would not suggest that; I even used to give lectures about the limitations of EBM, and many experts (who are much wiser than I) have written about the many problems with EBM. It is important to note that such criticism demonstrates the strength of modern medicine and not its weakness, as Dana seems to think: it is a sign of a healthy debate aimed at generating progress. And it is noteworthy that internal criticism of this nature is largely absent in alternative medicine.
The criticism of EBM is often focussed on the unreliability of what I called above the ‘best research evidence’. Let me therefore repeat what I wrote about it on this blog in 2012:
… The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.
Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.
Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The over-riding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.
Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.
Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their shortcomings, they are far superior to any other method for determining the efficacy of medical interventions.
There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.
Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.
In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.
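The pooling step of a meta-analysis can be sketched as fixed-effect inverse-variance weighting. The three study results below are invented purely to show the arithmetic:

```python
import math

# Hypothetical studies: (mean difference vs placebo, standard error)
studies = [(-0.8, 0.40), (-1.1, 0.55), (-0.3, 0.35)]

# Each study is weighted by the inverse of its variance (1 / SE^2),
# so larger, more precise studies count for more
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval around the pooled estimate
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(round(pooled, 2), round(ci_low, 2), round(ci_high, 2))
```

This fixed-effect sketch ignores between-study heterogeneity; real meta-analyses usually also fit a random-effects model, but the weighting principle is the same. And, of course, pooling false-positive studies yields a false-positive pooled result, which is exactly the problem described above.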
END OF QUOTE
Other criticism is aimed at the way EBM is currently used (and abused). This criticism is often justified and necessary, and it is again the expression of our efforts to generate progress. EBM is practised by humans; and humans are far from perfect. They can be corrupt, misguided, dishonest, sloppy, negligent, stupid, etc., etc. Sadly, that means that the practice of EBM can have all of these qualities as well. All we can do is to keep on criticising malpractice, educate people, and hope that this might prevent the worst abuses in future.
Dana and many of his fellow SCAMers have a different strategy; they claim that EBM “can no longer be trusted” (interestingly they never tell us what system might be better; eminence-based medicine? experience-based medicine? random-based medicine? Dana-based medicine?).
The claim that EBM can no longer be trusted is clearly not true, counter-productive and unethical; and I suspect they know it.
Why then do they make it?
Because they feel that it entitles them to argue that homeopathy (or any other form of SCAM) cannot be held to EBM-standards. If EBM is unreliable, surely, nobody can ask the ‘Danas of this world’ to provide anything like sound data!!! And that, of course, would be just dandy for business, wouldn’t it?
So, let’s not be deterred or misled by these deliberately destructive people. Their motives are transparent and their arguments are nonsensical. EBM is not flawless, but with our continued efforts it will improve. Or, to repeat something that I have said many times before: EBM is the worst form of healthcare, except for all other known options.
THE CONVERSATION recently carried an article shamelessly promoting osteopathy. It seems to originate from the University of Swansea, UK, and is full of bizarre notions. Here is an excerpt:
To find out more about how osteopathy could potentially affect mental health, at our university health and well-being academy, we have recently conducted one of the first studies on the psychological impact of OMT – with positive results.
For the last five years, therapists at the academy have been using OMT to treat members of the public who suffer from a variety of musculoskeletal disorders which have led to chronic pain. To find out more about the mental health impacts of the treatment, we looked at three points in time – before OMT treatment, after the first week of treatment, and after the second week of treatment – and asked patients how they felt using mental health questionnaires.
This data has shown that OMT is effective for reducing anxiety and psychological distress, as well as improving patient self-care. But it may not be suitable for all mental illnesses associated with chronic pain. For instance, we found that OMT was less effective for depression and fear avoidance.
All is not lost, though. Our results also suggested that the positive psychological effects of OMT could be further optimised by combining it with therapy approaches like acceptance and commitment therapy (ACT). Some research indicates that psychological problems such as anxiety and depression are associated with inflexibility, and lead to experiential avoidance. ACT has a positive effect at reducing experiential avoidance, so may be useful with reducing the fear avoidance and depression (which OMT did not significantly reduce).
Other researchers have also suggested that this combined approach may be useful for some subgroups receiving OMT where they may accept this treatment. And, further backing this idea up, there has already been at least one pilot clinical trial and a feasibility study which have used ACT and OMT with some success.
Looking to build on our positive results, we have now begun to develop our ACT treatment in the academy, to be combined with the osteopathic therapy already on offer. Though there will be a different range of options, one of these ACT therapies is psychoeducational in nature. It does not require an active therapist to work with the patient, and can be delivered through internet instruction videos and homework exercises, for example.
Looking to the future, this kind of low cost, broad healthcare could not only save the health service money if rolled out nationwide but would also mean that patients only have to undergo one treatment.
END OF QUOTE
So, they recruited a few patients who had come to receive osteopathic treatments (a self-selected population full of expectation and favourably disposed towards osteopathy), had them fill in a few questionnaires, and found some positive changes. From that, they conclude that OMT (osteopathic manipulative therapy) is effective. Not only that, they advocate rolling OMT out nationwide to save NHS funds.
Vis-à-vis so much nonsense, I am (almost) speechless!
As this comes not from some commercial enterprise but from a UK university, the nonsense is intolerable, I find.
Do I even need to point out what is wrong with it?
Not really, it’s too obvious.
But, just in case some readers struggle to find the fatal flaws of this ‘study’, let me mention just the most obvious one. There was no control group! That means the observed outcome could be due to many factors that are totally unrelated to OMT – such as placebo-effect, regression towards the mean, natural history of the condition, concomitant treatments, etc. In turn, this also means that the nationwide rolling out of their approach would most likely be a costly mistake.
The general adoption of OMT would of course please osteopaths a lot; it could even reduce anxiety – but only that of the osteopaths and their bank-managers, I am afraid.
One thing one cannot say about George Vithoulkas, the ueber-guru of homeopathy, is that he is not as good as his word. Last year, he announced that he would focus on publishing case reports that would convince us all that homeopathy is effective:
…the only evidence that homeopathy can present to the scientific world at this moment are these thousands of cured cases. It is a waste of time, money, and energy to attempt to demonstrate the effectiveness of homeopathy through double blind trials.
… the international “scientific” community, which has neither direct perception nor personal experience of the beneficial effects of homeopathy, is forced to repeat the same old mantra: “Where is the evidence? Show us the evidence!” … the successes of homeopathy have remained hidden in the offices of hardworking homeopaths – and thus go largely ignored by the world’s medical authorities, governments, and the whole international scientific community…
… simple questions that are usually asked by the “ignorant”, for example, “Can homeopathy cure cancer, multiple sclerosis, ulcerative colitis, etc.?” are invalid and cannot elicit a direct answer because the reality is that many such cases can be ameliorated significantly, and a number can be cured…
And focussing on successful cases is just what the great Vithoulkas now does.
Together with homeopaths from the Centre for Classical Homeopathy, Vijayanagar, Bangalore, India, Vithoulkas has recently published a retrospective case series of 10 Indian patients who were diagnosed with dengue fever and treated exclusively with homeopathic remedies in Bangalore. According to its authors, the case series demonstrates, with the evidence of laboratory reports, that there was a good result even when the platelet counts dropped considerably, without resorting to any other means.
The homeopaths concluded that a need for further, larger studies is indicated by this evidence, to precisely define the role of homeopathy in treating dengue fever. This study also emphasises the importance of individualised treatment during an epidemic for favourable results with homeopathy.
Keeping one’s promise must be a good thing.
But how meaningful are these 10 cases?
Dengue is a viral infection which, in the vast majority of cases, takes a benign course. After about two weeks, patients tend to be back to normal, even if they receive no treatment at all. In other words, the above-quoted case series is an exact description of the natural history of the condition. To put it even more bluntly: if these patients had been treated with nothing but kind attention and good general care, the outcome would not have been one iota different.
To me, this means that “to precisely define the role of homeopathy in treating dengue fever” would be a waste of resources. Its role is already clear: homeopathy has no role in the treatment of this (or any other) condition.
The announcement was made during the German sceptics’ conference ‘Skepkon’ in Cologne. As I could not be present, I obtained the photo via Twitter.
So, if you are a homeopath or a fan of homeopathy, all you have to do – as the above slide says – is to reproducibly identify homeopathic remedies in high potency. The procedure for obtaining the money has to follow three pre-defined steps:
- Identification of three homeopathic preparations in high potency according to a prescribed protocol.
- Documentation of a method enabling a third party to identify the remedies.
- Verification of the experiment by repeating it.
Anyone interested must adhere to the full instructions published by the German sceptics GWUP:
1. Review of test protocol
Together with a representative of GWUP, the applicants review and agree on this protocol prior to the start of the test. Minor changes may be made if justified, provided they are mutually agreed in advance and do not affect the validity of the test, especially the blinding and randomization of the samples. In any case, we want to avoid the results being compromised, or their credibility impugned, by modifications of the protocol while the test is already under way. After mutual confirmation, the test protocol is binding for the whole duration of the test and its evaluation.
2. Selection of drugs
The applicant proposes which three drugs should be used in the trial. This gives them the opportunity to select substances that they think they could distinguish particularly well as homeopathic remedies. The potency may be selected freely as well, whereby the following conditions must be observed:
– all drugs must be available as sugar globules of the same grade (“Globuli” in German);
– the same potency, namely D- or C-potency above D24 / C12, is used for all three drugs;
– all drugs can be procured from the same producer.
3. Procurement of samples
The samples will be purchased by GWUP and shipped from the vendor directly to the notary who will perform the randomization. GWUP will purchase sufficient numbers of packages to complete the series of 12 samples according to the randomization list. The procurement will ensure that the samples derive from different batches of production as follows.
3.1. Common remedies
Common remedies, i.e. remedies sold in high numbers, will be procured from randomly selected pharmacies from the biggest cities in Germany (Berlin, Hamburg, Munich, Cologne, Frankfurt, Stuttgart…). Each pharmacy supplies a bottle for each of the three selected remedies and ships it directly to the notary in charge of randomization. If the applicants need a sample of known content for calibration, then this will be procured from yet another pharmacy in another German city.
3.2. Special remedies
If, due to low sales, the above procedure may not suffice to obtain all samples from different batches, a randomly selected pharmacy will be appointed to produce all the samples from raw materials purchased from the producer. GWUP will procure the mother tinctures, the raw sugar pills, and the bottles and packages, to be shipped directly to the respective pharmacy, which will then do the potentization, label the bottles, and send them to the notary. If extra samples of known content are required for calibration, an extra set of samples will be produced. One set of samples will be kept in a sealed package for future reference.
The applicant and GWUP mutually agree on which procedure is used before the start of procurement. If more than 10 grams of globules per sample are required for the identification procedure, the applicant has to indicate this in advance, and GWUP will take this into account.
4. Randomization / blinding
Randomization and blinding are performed by a sworn-in public notary in Würzburg, Germany, who is selected by a random procedure. Würzburg is chosen because, for all participants based in Europe, the first part of the task is to be evaluated at the University of Würzburg. For overseas applicants, the location will be mutually agreed on.
The notary receives a coding list showing how the three drugs A, B and C are to be distributed among the twelve samples. This list is compiled by the GWUP representative by throwing dice. The notary also determines which drug is assigned to which letter by throwing dice. Note that the drugs may not be present in the set in equal numbers.
The notary completely removes the original label from the bottle and replaces it with the number without opening the bottle. The randomization protocol is deposited in a sealed envelope with the notary public without a copy being made beforehand. The notary disposes of surplus packs. If special remedies are processed, one set of marked samples is sealed and forwarded to GWUP for later reference in a sealed package.
The coded bottles are sent from the notary to the applicant without individual packaging and documentation. The applicant confirms receipt of the samples.
5. Identification of samples
The applicant identifies which of the 12 bottles contains which drug, using any method and procedure of their choice. There is no limit as to the method used for identification, and it may well be a procedure not currently recognized by modern science. However, GWUP requires at the start a short, rough outline of how the applicant intends to proceed, and GWUP reserves the right to reject applications whose commitment to serious scientific work seems questionable.
The applicant is also required to specify a period of time within which they will be able to produce their results. This period may not exceed six months. If it expires without the applicant presenting results, the outcome will be considered negative. However, the applicant may apply for an extension in good time before the deadline, provided they can give a reasonable explanation for the delay that is not inherent to the identification procedure itself.
The applicant is explicitly advised to observe ethics standards, and to procure the consent of an appropriate ethics committee if their method involves testing on humans or animals.
6. Result Pt. 1
If reasonable, the applicant may present their findings as part of the PSI-Tests held annually by GWUP at the University of Würzburg. The applicant’s result will be compared to the coding protocol from the notary. The number of bottles in which the notary’s record corresponds to the applicant’s details is determined. The result includes a description of the method used, if possible with meaningful intermediate data such as measurement protocols or symptom lists of drug provings.
The first part of the test is considered a success if the content of no more than one bottle is identified incorrectly and a description of the procedure is produced.
7. Result Pt. 2 and 3: Replication and Verification
Replication of the test is to ensure that a successful first result was not caused by chance alone. In addition, the procedure explained by the applicant is to be verified in a way depending on its nature. The objective is to verify that the identification was indeed performed by using this very method, and that the description is complete and suitable for a third party to achieve the same outcome.
For replication, steps 2 to 5 will be repeated. Applicants may choose to use the same drugs as before; in this case, they will then be procured from another manufacturer or prepared by a different pharmacy with raw material from a different supplier. Alternatively, the applicant may nominate three new drugs, which can then be obtained from the original vendor.
For a successful replication the same precision as before is required, that is, that only one out of 12 bottles may be identified incorrectly.
The evaluation and presentation of these results may take place at any location; press or other media may be invited to the event, as agreed between the applicant and GWUP.
Is anyone going to take up this challenge?
Personally, I don’t hold my breath.
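My pessimism is easy to quantify. As a rough benchmark (my own simplification, not part of the GWUP protocol: I assume each bottle is guessed independently and uniformly among the three drugs, whereas the dice-based drug counts would alter this slightly), here is what pure guessing would achieve against the pass criterion of at most one wrong bottle out of twelve, followed by a replication:

```python
from math import comb

def p_at_least_correct(n, k_min, p):
    """Probability of at least k_min successes in n independent trials
    that each succeed with probability p (binomial tail)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# "Success" = at most one of the 12 bottles identified incorrectly,
# i.e. at least 11 correct, with a 1-in-3 chance per bottle when guessing.
p_pass_once = p_at_least_correct(12, 11, 1 / 3)
print(f"one round by chance:  {p_pass_once:.2e}")   # about 5 in 100,000

# The protocol also demands a successful replication (steps 2 to 5 repeated):
print(f"two rounds by chance: {p_pass_once**2:.2e}")
```

In other words, a guesser passes the first round roughly five times in 100,000 attempts, and both rounds about twice in a billion; a genuine ability to identify high potencies would show up very clearly indeed.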
Many years ago (at a time when homeopaths still saw me as one of their own), I had plans to do a similar but slightly less rigorous test as part of a doctoral thesis for one of my students.
Our investigation was straightforward: we approached several of the world’s leading/most famous homeopaths and asked them to participate. Their task was to tell us which homeopathic remedy they thought was easiest to differentiate from a placebo. Subsequently, we would post them several vials – I think the number was 10 – and ask them to tell us which contained the remedy of their choice (in a C30 potency) and which the placebo (the distribution was 50:50, and the authenticity of each vial was to be confirmed by a notary). The experimental method for identifying which was which was left entirely to each participating homeopath; they were even allowed to use multiple, different tests. Based on the results, we would then calculate whether their identification skills were better than pure chance.
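The chance-level benchmark for that design is simple to work out. A minimal sketch, assuming (as described above) 10 vials of which exactly 5 contained the C30 remedy:

```python
from math import comb

# With 5 verum and 5 placebo vials, a pure guesser who knows the 50:50
# split picks one of the C(10, 5) possible ways of labelling 5 vials
# as 'verum'; only one labelling is entirely correct.
n_vials, n_verum = 10, 5
n_labellings = comb(n_vials, n_verum)
p_perfect_by_chance = 1 / n_labellings
print(f"possible labellings: {n_labellings}")              # 252
print(f"perfect score by chance: {p_perfect_by_chance:.4f}")  # ~0.004
```

So a homeopath who sorted all 10 vials correctly would have beaten odds of about 1 in 252 – hardly an unreasonable hurdle for anyone who genuinely believes the remedy is distinguishable from placebo.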
Sadly, the trial never happened. Initially, we had a positive response from some homeopaths who were interested in participating. However, when they then saw the exact protocol, they all pulled out.
But times may have changed; perhaps today there are some homeopaths out there who actually believe in homeopathy?
Perhaps our strategy to work only with ‘the best’ homeopaths was wrong?
Perhaps there are some homeopaths who are less risk-averse?
I sure hope that lots of enthusiastic homeopaths will take up this challenge.
GOOD LUCK! And watch this space.
We recently discussed the deplorable case of Larry Nassar and the fact that the ‘American Osteopathic Association’ stated that intravaginal manipulations are indeed an approved osteopathic treatment. At the time, I thought this was a shocking claim. So, imagine my surprise when I was alerted to a German trial of osteopathic intravaginal manipulations.
Here is the full and unaltered abstract of the study:
Introduction: 50 to 80% of pregnant women suffer from low back pain (LBP) or pelvic pain (Sabino und Grauer, 2008). There is evidence for the effectiveness of manual therapy like osteopathy, chiropractic and physiotherapy in pregnant women with LBP or pelvic pain (Liccardione et al., 2010). Anatomical, functional and neural connections support the relationship between intrapelvic dysfunctions and lumbar and pelvic pain (Kanakaris et al., 2011). Strain, pressure and stretch of visceral and parietal peritoneum, bladder, urethra, rectum and fascial tissue can result in pain and secondary in muscle spasm. Visceral mobility, especially of the uterus and rectum, can induce tension on the inferior hypogastric plexus, which may influence its function. Thus, stretching the broad ligament of the uterus and the intrapelvic fascia tissue during pregnancy can reinforce the influence of the inferior hypogastric plexus. Based on above facts an additional intravaginal treatment seems to be a considerable approach in the treatment of low back pain in pregnant women.
Objective: The purpose of this study was to compare the effect of osteopathic treatment including intravaginal techniques versus osteopathic treatment only in females with pregnancy-related low back pain.
Methods: Design: The study was performed as a randomized controlled trial. The participants were randomized by drawing lots, either into the intervention group including osteopathic and additional intravaginal treatment (IV) or a control group with osteopathic treatment only (OI). Setting: Medical practice in south of Germany.
Participants 46 patients were recruited between the 30th and 36th week of pregnancy suffering from low back pain.
Intervention Both groups received three treatments within a period of three weeks. Both groups were treated with visceral, mobilization, and myofascial techniques in the cervical, thoracic and lumbar spine, the pelvic and the abdominal region (American Osteopathic Association Guidelines, 2010). The IV group received an additional treatment with intravaginal techniques in supine position. This included myofascial techniques of the M. levator ani and the internal obturator muscles, the vaginal tissue, the pubovesical and uterosacral ligaments as well as the inferior hypogastric plexus.
Main outcome measures As primary outcome the back pain intensity was measured by Visual Analogue Scale (VAS). Secondary outcome was the disability index assessed by Oswestry-Low-Back-Pain-Disability-Index (ODI), and Pregnancy-Mobility-Index (PMI).
Results: 46 participants were randomly assigned into the intervention group (IV; n = 23; age: 29.0 ±4.8 years; height: 170.1 ±5.8 cm; weight: 64.2 ±10.3 kg; BMI: 21.9 ±2.6 kg/m2) and the control group (OI; n = 23; age: 32.0 ±3.9 years; height: 168.1 ±3.5 cm; weight: 62.3 ±7.9 kg; BMI: 22.1 ±3.2 kg/m2). Data from 42 patients were included in the final analyses (IV: n=20; OI: n=22), whereas four patients dropped out due to general pregnancy complications. Back pain intensity (VAS) changed significantly in both groups: in the intervention group (IV) from 59.8 ±14.8 to 19.6 ±8.4 (p<0.05) and in the control group (OI) from 57.4 ±11.3 to 24.7 ±12.8. The difference between groups of 7.5 (95%CI: -16.3 to 1.3) failed to demonstrate statistical significance (p=0.93). Pregnancy-Mobility-Index (PMI) changed significantly in both groups, too. IV group: from 33.4 ±8.9 to 29.6 ±6.6 (p<0.05), control group (OI): from 36.3 ±5.2 to 29.7 ±6.8. The difference between groups of 2.6 (95%CI: -5.9 to 0.6) was not statistically significant (p=0.109). Oswestry-Low-Back-Pain-Disability-Index (ODI) changed significantly in the intervention group (IV) from 15.1 ±7.8 to 9.2 ±3.6 (p<0.05) and also significantly in the control group (OI) from 13.8 ±4.9 to 9.2 ±3.0. Between-groups difference of 1.3 (95%CI: -1.5 to 4.1) was not statistically significant (p=0.357).
Conclusions: In this sample a series of osteopathic treatments showed significant effects in reducing pain and increasing the lumbar range of motion in pregnant women with low back pain. Both groups attained clinically significant improvement in functional disability, activity and quality of life. Furthermore, no benefit of additional intravaginal treatment was observed.
END OF QUOTE
My first thoughts after reading this were: how on earth did the investigators get this past an ethics committee? It cannot be ethical, in my view, to allow osteopaths (in Germany, they have no relevant training to speak of) to manipulate women intravaginally. How deluded must an osteopath be to plan and conduct such a trial? What were the patients told before giving informed consent? Surely not the truth!
My second thoughts were about the scientific validity of this study: the hypothesis which this trial claims to be testing is a far-fetched extrapolation, to put it mildly; in fact, it is not a hypothesis, it’s a very daft idea. The control-intervention is inadequate in that it cannot control for the (probably large) placebo effects of intravaginal manipulations. The observed outcomes are based on within-group comparisons and are therefore most likely unrelated to the treatments applied. The conclusion is as barmy as it gets; a proper conclusion should clearly and openly state that the results did not show any effects of the intravaginal manipulations.
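The point about within-group comparisons deserves a brief illustration. In the following sketch, all numbers are invented (loosely modelled on the VAS scores quoted above; this is not a re-analysis of the trial): both groups improve by the same amount, as they would through placebo effects, natural history and regression to the mean, yet the before/after tests within each group come out hugely ‘significant’.

```python
import math
import random
import statistics

random.seed(1)
n = 22
# Two groups with the same baseline pain and the same average improvement,
# i.e. no specific treatment effect whatsoever:
before_a = [random.gauss(58, 13) for _ in range(n)]   # 'intervention'
before_b = [random.gauss(58, 13) for _ in range(n)]   # 'control'
after_a = [b - random.gauss(35, 12) for b in before_a]
after_b = [b - random.gauss(35, 12) for b in before_b]

def paired_t(before, after):
    """Paired t statistic on the before/after differences."""
    d = [b - a for b, a in zip(before, after)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

def independent_t(x, y):
    """Two-sample t statistic (pooled variance, equal group sizes)."""
    sp = math.sqrt((statistics.variance(x) + statistics.variance(y)) / 2)
    return (statistics.mean(x) - statistics.mean(y)) / (sp * math.sqrt(2 / len(x)))

change_a = [b - a for b, a in zip(before_a, after_a)]
change_b = [b - a for b, a in zip(before_b, after_b)]
# The within-group (paired) t values are enormous in BOTH groups, even
# though no treatment effect was simulated; only the between-group test
# on the change scores asks the question the trial claims to answer.
print(f"paired t, 'intervention': {paired_t(before_a, after_a):.1f}")
print(f"paired t, 'control':      {paired_t(before_b, after_b):.1f}")
print(f"between-group t on change: {independent_t(change_a, change_b):.2f}")
```

This is precisely why ‘significant’ within-group changes, of the kind reported in the abstract above, tell us nothing about the treatments applied.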
In summary, this is a breathtakingly idiotic trial, and everyone involved in it (ethics committee, funding body, investigators, statistician, reviewers, journal editor) should be deeply ashamed and apologise to the poor women who were abused in a most deplorable fashion.