Can I tempt you to run a little (hopefully instructive) thought experiment with me? It is quite simple: I will describe the design of a clinical trial, and you will tell me the likely outcome of this study.
Are you game?
Here we go:
Imagine we conduct a trial of acupuncture for persistent pain (any type of pain really). We want to find out whether acupuncture is more than a placebo when it comes to pain-control. Of course, we want our trial to look as rigorous as possible. So, we design it as a randomised, sham-controlled, partially-blinded study. To be really ‘cutting edge’, our study will not have two but three parallel groups:
1. Standard needle acupuncture administered according to a protocol recommended by a team of expert acupuncturists.
2. Minimally invasive sham acupuncture employing shallow insertion of short needles at non-acupuncture points. Patients in groups 1 and 2 are blinded, i.e. they are not supposed to know whether they receive the sham or the real acupuncture.
3. No treatment at all.
We apply the treatments for a sufficiently long time, say 12 weeks. Before we start, after 6 and 12 weeks, we measure our patients’ pain with a validated method. We use sound statistical methods to compare the outcomes between the three groups.
WHAT DO YOU THINK THE RESULT WOULD BE?
You are not sure?
Well, let me give you some hints:
Group 3 is not going to do very well: not only do they receive no therapy at all, but they are also disappointed to have ended up in this group, as they joined the study in the hope of getting acupuncture. Therefore, they will (claim to) feel a lot of pain.
Group 2 will be pleased to receive some treatment. However, during the course of the 12 weeks, they will grow more and more suspicious. As they were told, during the process of obtaining informed consent, that the trial entails treating some patients with a sham/placebo, they are bound to ask themselves whether they have ended up in this group. They will see the short needles and the shallow needling, and a percentage of patients in this group will doubtlessly suspect that they are getting the sham treatment. The doubters will not show a powerful placebo response. Therefore, the average pain scores in this group will decrease – but only a little.
Group 1 will also be pleased to receive some treatment. As the therapists cannot be blinded, they will do their best to meet the high expectations of their patients. Consequently, this group will benefit fully from the placebo effect of the intervention, and its pain scores will decrease significantly.
So, now we can surely predict the most likely result of this trial without even conducting it. Assuming that acupuncture is a placebo-therapy, as many people do, we now see that group 3 will suffer the most pain. In comparison, groups 1 and 2 will show better outcomes.
Of course, the main question is: how do groups 1 and 2 compare to each other? After all, we designed our sham-controlled trial to answer exactly this question: is acupuncture more than a placebo? As pointed out above, some patients in group 2 will have become suspicious and therefore will not have experienced the full placebo response. This means that, provided the sample sizes are sufficiently large, there should be a significant difference between these two groups favouring real acupuncture over sham. In other words, our trial will conclude that acupuncture is better than placebo, even if acupuncture is a placebo.
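The logic above can even be put into numbers. Here is a minimal simulation sketch; the effect sizes are purely hypothetical assumptions chosen to mirror the scenario, i.e. acupuncture acts as a pure placebo whose response is merely diluted in the suspicious sham group:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

def simulate_arm(n, mean_drop, sd=1.5):
    """Simulated reductions in pain score (0-10 scale) for one trial arm."""
    return [random.gauss(mean_drop, sd) for _ in range(n)]

# Hypothetical mean pain reductions, assuming acupuncture IS a placebo:
# group 1 (real) enjoys the full placebo response, group 2 (sham) a diluted
# one because some patients guess their allocation, group 3 (no treatment)
# hardly any.
real = simulate_arm(110, mean_drop=2.0)
sham = simulate_arm(59, mean_drop=1.0)
wait = simulate_arm(57, mean_drop=0.3)

def t_statistic(a, b):
    """Welch's t statistic for two independent samples."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return diff / se

# Anything much above ~2 would typically be reported as 'statistically significant'.
print(f"real vs sham:     t = {t_statistic(real, sham):.1f}")
print(f"real vs waitlist: t = {t_statistic(real, wait):.1f}")
```

Even though, under these assumptions, no arm received anything but placebo (or nothing), the diluted response in the sham group alone is enough to make 'real' acupuncture look superior.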
THANK YOU FOR DOING THIS THOUGHT EXPERIMENT WITH ME.
Now I can tell you that it has a very real basis. The leading medical journal JAMA has just published such a study and, to make matters worse, the trial was even sponsored by one of the most prestigious funding agencies: the NIH.
Here is the abstract:
Importance:
Musculoskeletal symptoms are the most common adverse effects of aromatase inhibitors and often result in therapy discontinuation. Small studies suggest that acupuncture may decrease aromatase inhibitor-related joint symptoms.
Objective:
To determine the effect of acupuncture in reducing aromatase inhibitor-related joint pain.
Design, Setting, and Patients:
Randomized clinical trial conducted at 11 academic centers and clinical sites in the United States from March 2012 to February 2017 (final date of follow-up, September 5, 2017). Eligible patients were postmenopausal women with early-stage breast cancer who were taking an aromatase inhibitor and scored at least 3 on the Brief Pain Inventory Worst Pain (BPI-WP) item (score range, 0-10; higher scores indicate greater pain).
Interventions:
Patients were randomized 2:1:1 to the true acupuncture (n = 110), sham acupuncture (n = 59), or waitlist control (n = 57) group. True acupuncture and sham acupuncture protocols consisted of 12 acupuncture sessions over 6 weeks (2 sessions per week), followed by 1 session per week for 6 weeks. The waitlist control group did not receive any intervention. All participants were offered 10 acupuncture sessions to be used between weeks 24 and 52.
Main Outcomes and Measures:
The primary end point was the 6-week BPI-WP score. Mean 6-week BPI-WP scores were compared by study group using linear regression, adjusted for baseline pain and stratification factors (clinically meaningful difference specified as 2 points).
Results:
Among 226 randomized patients (mean [SD] age, 60.7 [8.6] years; 88% white; mean [SD] baseline BPI-WP score, 6.6 [1.5]), 206 (91.1%) completed the trial. From baseline to 6 weeks, the mean observed BPI-WP score decreased by 2.05 points (reduced pain) in the true acupuncture group, by 1.07 points in the sham acupuncture group, and by 0.99 points in the waitlist control group. The adjusted difference for true acupuncture vs sham acupuncture was 0.92 points (95% CI, 0.20-1.65; P = .01) and for true acupuncture vs waitlist control was 0.96 points (95% CI, 0.24-1.67; P = .01). Patients in the true acupuncture group experienced more grade 1 bruising compared with patients in the sham acupuncture group (47% vs 25%; P = .01).
Conclusions and Relevance:
Among postmenopausal women with early-stage breast cancer and aromatase inhibitor-related arthralgias, true acupuncture compared with sham acupuncture or with waitlist control resulted in a statistically significant reduction in joint pain at 6 weeks, although the observed improvement was of uncertain clinical importance.
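For perspective, the adjusted differences reported in the abstract can be set against the trial's own pre-specified threshold for a clinically meaningful difference (2 points). A quick back-of-the-envelope check:

```python
# Figures quoted from the JAMA abstract above (BPI-WP points).
clinically_meaningful = 2.0   # the trial's pre-specified threshold
adjusted_differences = {
    "true vs sham acupuncture": 0.92,
    "true vs waitlist control": 0.96,
}

for comparison, diff in adjusted_differences.items():
    share = diff / clinically_meaningful
    print(f"{comparison}: {diff} points ({share:.0%} of the threshold)")
```

Both statistically significant differences amount to less than half of what the investigators themselves defined as clinically meaningful.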
Do you see how easy it is to deceive (almost) everyone with a trial that looks rigorous to (almost) everyone?
My lesson from all this is as follows: whether consciously or unconsciously, SCAM-researchers often build more or less well-hidden little loopholes into their trials that ensure they generate a positive outcome. Thus even a placebo can appear to be effective. They are true masters of producing false-positive findings which later become part of a meta-analysis that is, of course, equally false-positive. It is a great shame, in my view, that even top journals (in the above case JAMA) and prestigious funders (in the above case the NIH) cannot (or will not?) see through this type of trickery.
“Non-reproducible single occurrences are of no significance to science”, this quote by Karl Popper often seems to get forgotten in medicine, particularly in alternative medicine. It indicates that findings have to be reproducible to be meaningful – if not, we cannot be sure that the outcome in question was caused by the treatment we applied.
This is thus a question of cause and effect.
The statistician Sir Austin Bradford Hill proposed in 1965 a set of 9 criteria for assessing evidence of a relationship between a presumed cause and an observed effect, while demonstrating the connection between cigarette smoking and lung cancer. One of his criteria is consistency, or reproducibility: consistent findings observed by different persons in different places with different samples strengthen the likelihood of an effect.
By mentioning ‘different persons’, Hill seems to also establish the concept of INDEPENDENT replication.
Let me try to explain this with an example from the world of SCAM.
- A homeopath feels that childhood diarrhoea could perhaps be treated with individualised homeopathic remedies. She conducts a trial, finds a positive result and concludes that the statistically significant decrease in the duration of diarrhea in the treatment group suggests that homeopathic treatment might be useful in acute childhood diarrhea. Further study of this treatment deserves consideration.
- Unsurprisingly, this study is met with disbelief by many experts. Some go as far as doubting its validity, and several letters to the editor appear expressing criticism. The homeopath is thus motivated to run another trial to prove her point. Its results are consistent with the finding from the previous study that individualized homeopathic treatment decreases the duration of diarrhea and number of stools in children with acute childhood diarrhea.
- We now have a replication of the original finding. Yet, for a range of reasons, sceptics are far from satisfied. The homeopath thus runs a further trial and publishes a meta-analysis of all three studies. The combined analysis shows a duration of diarrhoea of 3.3 days in the homeopathy group compared with 4.1 days in the placebo group (P = 0.008). She thus concludes that the results from these studies confirm that individualized homeopathic treatment decreases the duration of acute childhood diarrhea and suggest that larger sample sizes be used in future homeopathic research to ensure adequate statistical power. Homeopathy should be considered for use as an adjunct to oral rehydration for this illness.
To most homeopaths it seems that this body of evidence from three replications is sound and solid. Consequently, they frequently cite these publications as cast-iron proof of their assumption that individualised homeopathy is effective. Sceptics, however, are still not convinced.
The studies have been replicated alright, but what is missing is an INDEPENDENT replication.
To me this word implies two things:
- The results have to be reproduced by another research group that is unconnected to the one that conducted the three previous studies.
- That group needs to be independent from any bias that might get in the way of conducting a rigorous trial.
And why do I think this latter point is important?
Simply because I know from many years of experience that a researcher who strongly believes in homeopathy, or any other subject in question, will inadvertently introduce all sorts of biases into a study, even if its design is seemingly rigorous. In the end, these flaws will not necessarily show in the published article, which means that the public will be misled. In other words, the paper will report a false-positive finding.
It is possible, even likely, that this has happened with the three trials mentioned above. The fact is that, as far as I know, there is no independent replication of these studies.
In the light of all this, Popper’s axiom as applied to medicine should perhaps be modified: findings without independent replication are of no significance. Or, to put it even more bluntly: independent replication is an essential self-cleansing process of science by which it rids itself of errors, fraud and misunderstandings.
Several previous studies have suggested improvements in sperm quality after vitamin supplementation, and several reviews have drawn tentatively positive conclusions:
- The current literature seems to suggest a beneficial effect of antioxidants on male infertility.
- Several studies have reported a significant increase in sperm quality and pregnancy rates when the men were supplemented by specific vitamins and micronutrients
- For those undergoing assisted reproduction, the odds ratio that antioxidant use would improve pregnancy rates was 4.18, with a 4.85-fold improvement in live birth rate also noted.
Most of the primary trials lacked scientific rigour, however. Now a new study has emerged that overcomes many of the flaws of the previous research.
Professor Anne Steiner from the University of North Carolina at Chapel Hill, USA, presented her study yesterday at the 34th Annual Meeting of ESHRE in Barcelona. This clinical trial of 174 couples has found that an antioxidant formulation taken daily by the male partner for a minimum of three months made no difference to sperm concentration, motility or morphology, nor to the rate of DNA fragmentation. The study was performed in eight American fertility centres.
All men in the study had been diagnosed with male factor infertility, reflected in subnormal levels of sperm concentration, motility, or morphology, or higher than normal rates of DNA fragmentation. These parameters were measured at the start of the trial and at three months. In between, the men allocated to the antioxidant intervention were given a daily supplement containing vitamins C, D3 and E, folic acid, zinc, selenium and L-carnitine; the control group received a placebo.
At three months, results showed only a “slight” overall difference in sperm concentration between the two groups, and no significant differences in morphology, motility, or DNA fragmentation measurements. Sub-group analysis (according to different types of sperm abnormality) found no significant differences in sperm concentration (in oligospermic men), motility (in asthenospermic men), or morphology (in teratospermic men). There was also no change seen after three months in men with high rates of DNA fragmentation (28.9% in the antioxidant group and 28.8% in the placebo group).
Natural conception during the initial three-month study period also did not differ between the two groups of the entire cohort – a pregnancy rate of 10.5% in the antioxidant group and 9.1% in the placebo group. These rates were also comparable at six months (after continued antioxidant or placebo for the male partner and three cycles of clomiphene and intrauterine insemination for the female partner).
The authors concluded that “the results do not support the empiric use of antioxidant therapy for male factor infertility in couples trying to conceive naturally”.
The story about supplements and health claims seems to be strangely repetitive:
- the claim that supplements help for condition xy is heavily promoted, e.g. via the Internet;
- a few flimsy trials seem to support the claim;
- these results are relentlessly hyped;
- the profit of the manufacturers grows;
- eventually a rigorous, independently-funded trial emerges with a negative finding;
- the house of cards seems to collapse;
- the manufacturers claim that the trial’s methodology was faulty (e.g. wrong dose, wrong mixture of ingredients);
- thus another profitable house of cards is built elsewhere.
In the end, the only supplement-related effects are that 1) the consumers produce expensive urine and 2) the manufacturers have plenty of funds to start a new campaign based on yet another bogus health claim.
The only time we discussed gua sha, it led to one of the most prolonged discussions we ever had on this blog (536 comments so far). It seems to be a topic that excites many. But what precisely is it?
Gua sha, sometimes referred to as “scraping”, “spooning” or “coining”, is a traditional Chinese treatment that has spread to several other Asian countries. It has long been popular in Vietnam and is now also becoming well-known in the West. The treatment consists of scraping the skin with a smooth edge placed against the pre-oiled skin surface, pressed down firmly, and then moved downwards along muscles or meridians. According to its proponents, gua sha stimulates the flow of the vital energy ‘chi’ and releases unhealthy bodily matter from blood stasis within sore, tired, stiff or injured muscle areas.
The technique is practised by TCM practitioners, acupuncturists, massage therapists, physical therapists, physicians and nurses. Practitioners claim that it stimulates blood flow to the treated areas, thus promoting cell metabolism, regeneration and healing. They also assume that it has anti-inflammatory effects and stimulates the immune system.
These effects are said to last for days or weeks after a single treatment. The treatment causes microvascular injuries which are visible as subcutaneous bleeding and redness. Gua sha practitioners make far-reaching therapeutic claims, including that the therapy alleviates pain, prevents infections, treats asthma, detoxifies the body, cures liver problems, reduces stress, and contributes to overall health.
Gua sha is mildly painful and almost invariably leaves unsightly blemishes on the skin, which can occasionally become infected and might even be mistaken for signs of physical abuse.
There is little research on gua sha, and the few trials that exist tend to be published in Chinese. But recently, a new paper has emerged that is written in English. The goal of this systematic review was to evaluate the available evidence from randomized controlled trials (RCTs) of gua sha for the treatment of patients with perimenopausal syndrome.
A total of 6 RCTs met the inclusion criteria. Most were of low methodological quality. When compared with Western medicine therapy alone, meta-analysis of 5 RCTs indicated favorable statistically significant effects of gua sha plus Western medicine. Moreover, study participants who received Gua Sha therapy plus Western medicine therapy showed significantly greater improvements in serum levels of follicle-stimulating hormone (FSH), luteinizing hormone (LH) compared to participants in the Western medicine therapy group.
The authors concluded that preliminary evidence supported the hypothesis that Gua Sha therapy effectively improved the treatment efficacy in patients with perimenopausal syndrome. Additional studies will be required to elucidate optimal frequency and dosage of Gua Sha.
This sounds as though gua sha is a reasonable therapy.
Yet, I think this notion is worth critical analysis. Here are some caveats that spring to mind:
- Gua sha lacks biological plausibility.
- The reviewed trials are too flawed to allow any firm conclusions.
- As most are published in Chinese, non-Chinese speakers have no way of evaluating them.
- The studies originate from China where close to 100% of TCM trials report positive results.
- In my view, this means they are less than trustworthy.
- The authors of the above-cited review are all from China and might not be willing, able or allowed to publish a critical paper on this subject.
- The review was published in Complement Ther Clin Pract., a journal not known for its high scientific standards or critical stance towards TCM.
So, is gua sha a reasonable therapy?
I let you make this judgement.
Is homeopathy effective for specific conditions? The FACULTY OF HOMEOPATHY (FoH, the professional organisation of UK doctor homeopaths) says YES. In support of this bold statement, they cite a total of 35 systematic reviews of homeopathy with a focus on specific clinical areas. “Nine of these 35 reviews presented conclusions that were positive for homeopathy”, they claim. Here they are:
Allergies and upper respiratory tract infections 8,9
Childhood diarrhoea 10
Post-operative ileus 11
Rheumatic diseases 12
Seasonal allergic rhinitis (hay fever) 13–15
Vertigo 16
And here are the references (I took the liberty of adding my comments in bold):
8. Bornhöft G, Wolf U, Ammon K, et al. Effectiveness, safety and cost-effectiveness of homeopathy in general practice – summarized health technology assessment. Forschende Komplementärmedizin, 2006; 13 Suppl 2: 19–29.
This is the infamous ‘Swiss report’ which, nowadays, only homeopaths take seriously.
9. Bellavite P, Ortolani R, Pontarollo F, et al. Immunology and homeopathy. 4. Clinical studies – Part 1. Evidence-based Complementary and Alternative Medicine: eCAM, 2006; 3: 293–301.
This is not a systematic review as it lacks any critical assessment of the primary data and includes observational studies and even case series.
10. Jacobs J, Jonas WB, Jimenez-Perez M, Crothers D. Homeopathy for childhood diarrhea: combined results and metaanalysis from three randomized, controlled clinical trials. Pediatric Infectious Disease Journal, 2003; 22: 229–234.
This is a meta-analysis by Jennifer Jacobs (who recently featured on this blog) of 3 studies by Jennifer Jacobs; hardly convincing I’d say.
11. Barnes J, Resch K-L, Ernst E. Homeopathy for postoperative ileus? A meta-analysis. Journal of Clinical Gastroenterology, 1997; 25: 628–633.
This is my own paper! It concluded that “several caveats preclude a definitive judgment.”
12. Jonas WB, Linde K, Ramirez G. Homeopathy and rheumatic disease. Rheumatic Disease Clinics of North America, 2000; 26: 117–123.
This is not a systematic review; here is the (unabridged) abstract:
Despite a growing interest in uncovering the basic mechanisms of arthritis, medical treatment remains symptomatic. Current medical treatments do not consistently halt the long-term progression of these diseases, and surgery may still be needed to restore mechanical function in large joints. Patients with rheumatic syndromes often seek alternative therapies, with homeopathy being one of the most frequent. Homeopathy is one of the most frequently used complementary therapies worldwide.
13. Wiesenauer M, Lüdtke R. A meta-analysis of the homeopathic treatment of pollinosis with Galphimia glauca. Forschende Komplementärmedizin und Klassische Naturheilkunde, 1996; 3: 230–236.
This is a meta-analysis by Wiesenauer of trials conducted by Wiesenauer.
My own, more recent analysis of these data arrived at a considerably less favourable conclusion: “… three of the four currently available placebo-controlled RCTs of homeopathic Galphimia glauca (GG) suggest this therapy is an effective symptomatic treatment for hay fever. There are, however, important caveats. Most essentially, independent replication would be required before GG can be considered for the routine treatment of hay fever.” (Focus on Alternative and Complementary Therapies, September 2011, 16(3))
14. Taylor MA, Reilly D, Llewellyn-Jones RH, et al. Randomised controlled trials of homoeopathy versus placebo in perennial allergic rhinitis with overview of four trial series. British Medical Journal, 2000; 321: 471–476.
15. Bellavite P, Ortolani R, Pontarollo F, et al. Immunology and homeopathy. 4. Clinical studies – Part 2. Evidence-based Complementary and Alternative Medicine: eCAM, 2006; 3: 397–409.
This is not a systematic review as it lacks any critical assessment of the primary data and includes observational studies and even case series.
16. Schneider B, Klein P, Weiser M. Treatment of vertigo with a homeopathic complex remedy compared with usual treatments: a meta-analysis of clinical trials. Arzneimittelforschung, 2005; 55: 23–29.
This is a meta-analysis of 2 (!) RCTs and 2 observational studies of ‘Vertigoheel’, a preparation which is not homeopathic but homotoxicologic (it does not follow the ‘like cures like’ assumption of homeopathy). Moreover, this product contains pharmacologically active substances (and nobody doubts that active substances can have effects).
So, positive evidence from 9 systematic reviews in 6 specific clinical areas?
I let you answer this question.
The HRI is an innovative international charity created to address the need for high quality scientific research in homeopathy… HRI is dedicated to promoting cutting-edge research in homeopathy, using the most rigorous methods available, and communicating the results of such work beyond the usual academic circles… HRI aims to bring academically reliable information to a wide international audience, in an easy to understand form. This audience includes the general public, scientists, healthcare providers, healthcare policy makers, government and the media.
This sounds absolutely brilliant!
I should be a member of the HRI!
For years, I have pursued similar aims!
Hold on, perhaps not?
This article makes me wonder:
START OF QUOTE
… By the end of 2014, 189 randomised controlled trials of homeopathy on 100 different medical conditions had been published in peer-reviewed journals. Of these, 104 papers were placebo-controlled and were eligible for detailed review:
41% were positive (43 trials) – finding that homeopathy was effective
5% were negative (5 trials) – finding that homeopathy was ineffective
54% were inconclusive (56 trials)
How does this compare with evidence for conventional medicine?
An analysis of 1016 systematic reviews of RCTs of conventional medicine had strikingly similar findings2:
44% were positive – the treatments were likely to be beneficial
7% were negative – the treatments were likely to be harmful
49% were inconclusive – the evidence did not support either benefit or harm.
END OF QUOTE
The implication here is that the evidence base for homeopathy is strikingly similar to that of real medicine.
Nice try! But sadly it has nothing to do with ‘reliable information’!!!
In fact, it is grossly (and I suspect deliberately) misleading.
Regular readers of this blog will have spotted the reason, because we discussed (part of) it before. Let me remind you:
A clinical trial is a research tool for testing hypotheses; strictly speaking, it tests the ‘null-hypothesis’: “the experimental treatment generates the same outcomes as the treatment of the control group”. If the trial shows no difference between the outcomes of the two groups, the null-hypothesis is confirmed. In this case, we commonly speak of a negative result. If the experimental treatment was better than the control treatment, the null-hypothesis is rejected, and we commonly speak of a positive result. In other words, clinical trials can only generate positive or negative results, because the null-hypothesis must either be confirmed or rejected – there are no grey tones between the black of a negative and the white of a positive study.
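This binary logic can be sketched in a few lines; the function name and the conventional 0.05 significance threshold are illustrative assumptions of mine, not taken from any particular trial:

```python
def classify_trial(mean_difference, p_value, alpha=0.05):
    """Binary verdict of a superiority trial against the null hypothesis.

    The null hypothesis is either rejected (the experimental treatment
    outperformed the control) or it is not -- there is no third option.
    """
    if p_value < alpha and mean_difference > 0:
        return "positive"   # null rejected in favour of the treatment
    return "negative"       # null not rejected: no evidence of an effect

print(classify_trial(mean_difference=1.2, p_value=0.01))  # prints "positive"
print(classify_trial(mean_difference=0.1, p_value=0.40))  # prints "negative"
```

Note that there is no branch returning "inconclusive": a trial that fails to reject the null hypothesis is, by this logic, simply negative.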
For enthusiasts of alternative medicine, this can create a dilemma, particularly if there are lots of published studies with negative results. In this case, the totality of the available trial evidence is negative which means the treatment in question cannot be characterised as effective. It goes without saying that such an overall conclusion rubs the proponents of that therapy the wrong way. Consequently, they might look for ways to avoid this scenario.
One fairly obvious way of achieving this aim is to simply re-categorise the results. What, if we invented a new category? What, if we called some of the negative studies by a different name? What about INCONCLUSIVE?
That would be brilliant, wouldn’t it? We might end up with a simple statistic where the majority of the evidence is, after all, positive. And this, of course, would give the impression that the ineffective treatment in question is effective!
How exactly do we do this? We continue to call positive studies POSITIVE; we then call studies where the experimental treatment generated worse results than the control treatment (usually a placebo) NEGATIVE; and finally we call those studies where the experimental treatment created outcomes which were not different from placebo INCONCLUSIVE.
In the realm of alternative medicine, this ‘non-conclusive result’ method has recently become incredibly popular. Take homeopathy, for instance. The Faculty of Homeopathy proudly claims the following about clinical trials of homeopathy: Up to the end of 2011, there have been 164 peer-reviewed papers reporting randomised controlled trials (RCTs) in homeopathy. This represents research in 89 different medical conditions. Of those 164 RCT papers, 71 (43%) were positive, 9 (6%) negative and 80 (49%) non-conclusive.
This misleading nonsense was, of course, warmly received by homeopaths. The British Homeopathic Association, like many other organisations and individuals with an axe to grind lapped up the message and promptly repeated it: The body of evidence that exists shows that much more investigation is required – 43% of all the randomised controlled trials carried out have been positive, 6% negative and 49% inconclusive.
Let’s be clear what has happened here: the true percentage figures seem to show that 43% of studies (mostly of poor quality) suggest a positive result for homeopathy, while 57% of them (on average the ones of better quality) were negative. In other words, the majority of this evidence is negative. If we conducted a proper systematic review of this body of evidence, we would, of course, have to account for the quality of each study, and in this case we would have to conclude that homeopathy is not supported by sound evidence of effectiveness.
The little trick of applying the ‘INCONCLUSIVE’ method has thus turned this overall result upside down: black has become white! No wonder that it is so popular with proponents of all sorts of bogus treatments.
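Using the Faculty of Homeopathy's own counts quoted above (71 positive, 9 negative, 80 'non-conclusive'), a few lines make the relabelling explicit; note that those counts sum to 160 rather than the 164 papers mentioned, so the computed percentages come out marginally different from the quoted ones:

```python
positive, negative, inconclusive = 71, 9, 80
total = positive + negative + inconclusive  # 160 classified trial papers

# The three-way split as presented by the Faculty of Homeopathy:
print(f"positive {positive / total:.0%}, negative {negative / total:.0%}, "
      f"inconclusive {inconclusive / total:.0%}")

# A trial either rejects the null hypothesis or it does not; trials that
# found no difference from placebo are, strictly speaking, negative.
truly_negative = negative + inconclusive
print(f"two-way split: positive {positive / total:.0%} "
      f"vs negative {truly_negative / total:.0%}")
```

Merging the two categories that both failed to reject the null hypothesis turns an apparently 'mostly positive or open' evidence base into a clearly negative one.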
But one trick is not enough for the HRI! For thoroughly misinforming the public they have a second one up their sleeve.
And that is ‘comparing apples with pears’ – RCTs with systematic reviews, in their case.
In contrast to RCTs, systematic reviews can be (and often are) INCONCLUSIVE. As they evaluate the totality of all RCTs on a given subject, it is possible that some RCTs are positive, while others are negative. When, for example, the number of high-quality, positive studies included in a systematic review is similar to the number of high-quality, negative trials, the overall result of that review would be INCONCLUSIVE. And this is one of the reasons why the findings of systematic reviews cannot be compared in this way to those of RCTs.
I suspect that the people at the HRI know all this. They are not daft! In fact, they are quite clever. But unfortunately, they seem to employ their cleverness not for informing but for misleading their ‘wide international audience’.
Personally, I find our good friend Dana Ullman truly priceless. There are several reasons for that; one is that he is often so exemplarily wrong that it helps me to explain fundamental things more clearly. With a bit of luck, this might enable me to better inform people who might be thinking a bit like Dana. In this sense, our good friend Dana has significant educational value.
START OF COMMENT
According to present and former editors of THE LANCET and the NEW ENGLAND JOURNAL OF MEDICINE, “evidence based medicine” can no longer be trusted. There is obviously no irony in Ernst and his ilk “banking” on “evidence” that has no firm footing except their personal belief systems: https://medium.com/@drjasonfung/the-corruption-of-evidence-based-medicine-killing-for-profit-41f2812b8704
Ernst is a fundamentalist whose God is reductionistic science, a 20th century model that has little real meaning today…but this won’t stop the new attacks on me personally…
END OF COMMENT
Where to begin?
Let’s start with some definitions.
- Evidence is the body of facts that leads to a given conclusion. Because the outcomes of treatments such as homeopathy depend on a multitude of factors, the evidence for or against their effectiveness is best based not on experience but on clinical trials and systematic reviews of clinical trials (this is copied from my book).
- EBM is the integration of best research evidence with clinical expertise and patient values. It thus rests on three pillars: external evidence, ideally from systematic reviews, the clinician’s experience, and the patient’s preferences (and this is from another book).
Few people would argue that EBM, as it is applied currently, is without fault. Certainly I would not suggest that; I even used to give lectures about the limitations of EBM, and many experts (who are much wiser than I) have written about the many problems with EBM. It is important to note that such criticism demonstrates the strength of modern medicine and not its weakness, as Dana seems to think: it is a sign of a healthy debate aimed at generating progress. And it is noteworthy that internal criticism of this nature is largely absent in alternative medicine.
The criticism of EBM is often focussed on the unreliability of what I called above the ‘best research evidence’. Let me therefore repeat what I wrote about it on this blog in 2012:
… The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.
Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.
Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The over-riding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.
Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.
Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their shortcomings, they are far superior to any other method for determining the efficacy of medical interventions.
There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.
Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.
In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.
END OF QUOTE
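The pooling that a meta-analysis performs, as described in the quote above, can be sketched in a few lines. This is a minimal illustration of fixed-effect (inverse-variance) pooling; the function name and the numbers are my own invented example, not data from any of the reviews discussed here.

```python
import math

def pool_fixed_effect(log_ors, std_errors):
    """Fixed-effect (inverse-variance) pooling of per-study effect
    estimates, e.g. log odds ratios from individual trials."""
    weights = [1.0 / se ** 2 for se in std_errors]  # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))       # never larger than the smallest input SE
    return pooled, pooled_se

# Illustrative (made-up) data: three small trials with similar effects
pooled, se = pool_fixed_effect([0.20, 0.10, 0.15], [0.30, 0.25, 0.35])
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se    # 95% CI on the log scale
```

The point is simply that pooling shrinks the uncertainty: the combined standard error is smaller than that of any single trial, which is why systematic reviews can settle questions that individual studies cannot.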
Other criticism is aimed at the way EBM is currently used (and abused). This criticism is often justified and necessary, and it is again the expression of our efforts to generate progress. EBM is practised by humans; and humans are far from perfect. They can be corrupt, misguided, dishonest, sloppy, negligent, stupid, etc., etc. Sadly, that means that the practice of EBM can have all of these qualities as well. All we can do is to keep on criticising malpractice, educate people, and hope that this might prevent the worst abuses in future.
Dana and many of his fellow SCAMers have a different strategy; they claim that EBM “can no longer be trusted” (interestingly they never tell us what system might be better; eminence-based medicine? experience-based medicine? random-based medicine? Dana-based medicine?).
The claim that EBM can no longer be trusted is clearly not true, counter-productive and unethical; and I suspect they know it.
Why then do they make it?
Because they feel that it entitles them to argue that homeopathy (or any other form of SCAM) cannot be held to EBM-standards. If EBM is unreliable, surely, nobody can ask the ‘Danas of this world’ to provide anything like sound data!!! And that, of course, would be just dandy for business, wouldn’t it?
So, let’s not be deterred or misled by these deliberately destructive people. Their motives are transparent and their arguments are nonsensical. EBM is not flawless, but with our continued efforts it will improve. Or, to repeat something that I have said many times before: EBM is the worst form of healthcare, except for all other known options.
My previous post praised the validity and trustworthiness of Cochrane reviews. This post continues in the same line.
Like osteoarthritis, acute stroke has been a condition for which acupuncture-fans prided themselves on being able to produce fairly good evidence. A Cochrane review of 2005, however, was inconclusive and concluded that the number of patients is too small to be certain whether acupuncture is effective for treatment of acute ischaemic or haemorrhagic stroke. Larger, methodologically-sound trials are required.
So, 13 years later, we do have more evidence, and it would be interesting to know what the best evidence tells us today. This new review will tell us because it is the update of the previous Cochrane Review originally published in 2005.
The authors sought randomized clinical trials (RCTs) of acupuncture started within 30 days from stroke onset compared with placebo or sham acupuncture or open control (no placebo) in people with acute ischaemic or haemorrhagic stroke, or both. Needling into the skin was required for acupuncture. Comparisons were made versus (1) all controls (open control or sham acupuncture), and (2) sham acupuncture controls.
Two review authors applied the inclusion criteria, assessed trial quality and risk of bias, and extracted data independently. They contacted study authors to ask for missing data and assessed the quality of the evidence by using the GRADE approach. The primary outcome was defined as death or dependency at the end of follow-up.
In total, 33 RCTs with 3946 participants were included. Twenty new trials with 2780 participants had been completed since the previous review. Outcome data were available for up to 22 trials (2865 participants) that compared acupuncture with any control (open control or sham acupuncture) but for only six trials (668 participants) that compared acupuncture with sham acupuncture control. The authors downgraded the evidence to low or very low quality because of risk of bias in included studies, inconsistency in the acupuncture intervention and outcome measures, and imprecision in effect estimates.
When compared with any control (11 trials with 1582 participants), findings of lower odds of death or dependency at the end of follow-up and over the long term (≥ three months) in the acupuncture group were uncertain (odds ratio [OR] 0.61, 95% confidence interval [CI] 0.46 to 0.79; very low-quality evidence; and OR 0.67, 95% CI 0.53 to 0.85; eight trials with 1436 participants; very low-quality evidence, respectively) and were not confirmed by trials comparing acupuncture with sham acupuncture (OR 0.71, 95% CI 0.43 to 1.18; low-quality evidence; and OR 0.67, 95% CI 0.40 to 1.12; low-quality evidence, respectively).

In trials comparing acupuncture with any control, findings that acupuncture was associated with increases in the global neurological deficit score and in the motor function score were uncertain (standardized mean difference [SMD] 0.84, 95% CI 0.36 to 1.32; 12 trials with 1086 participants; very low-quality evidence; and SMD 1.08, 95% CI 0.45 to 1.71; 11 trials with 895 participants; very low-quality evidence).
These findings were not confirmed in trials comparing acupuncture with sham acupuncture (SMD 0.01, 95% CI -0.55 to 0.57; low-quality evidence; and SMD 0.10, 95% CI -0.38 to 0.17; low-quality evidence, respectively).

Trials comparing acupuncture with any control reported little or no difference in death or institutional care at the end of follow-up (OR 0.78, 95% CI 0.54 to 1.12; five trials with 1120 participants; low-quality evidence), death within the first two weeks (OR 0.91, 95% CI 0.33 to 2.55; 18 trials with 1612 participants; low-quality evidence), or death at the end of follow-up (OR 1.08, 95% CI 0.74 to 1.58; 22 trials with 2865 participants; low-quality evidence).
The incidence of adverse events (eg, pain, dizziness, fainting) in the acupuncture arms of open and sham control trials was 6.2% (64/1037 participants), and 1.4% of these (14/1037 participants) discontinued acupuncture. When acupuncture was compared with sham acupuncture, findings for adverse events were uncertain (OR 0.58, 95% CI 0.29 to 1.16; five trials with 576 participants; low-quality evidence).
The authors concluded that this updated review indicates that apparently improved outcomes with acupuncture in acute stroke are confounded by the risk of bias related to use of open controls. Adverse events related to acupuncture were reported to be minor and usually did not result in stopping treatment. Future studies are needed to confirm or refute any effects of acupuncture in acute stroke. Trials should clearly report the method of randomization, concealment of allocation, and whether blinding of participants, personnel, and outcome assessors was achieved, while paying close attention to the effects of acupuncture on long-term functional outcomes.
These cautious conclusions might be explained by the fact that Chinese researchers are reluctant to state anything overtly negative about any TCM therapy. Recently, one expert who spoke out was even imprisoned for criticising a TCM product! But in truth, this review really shows that acupuncture has no convincing effect in acute stroke.
And for me, this conclusion is fascinating. I have been involved in acupuncture/stroke research since the early 1990s.
Our RCT produced a resoundingly negative result, concluding that acupuncture is not superior to sham treatment for recovery in activities of daily living and health-related quality of life after stroke, although there may be a limited effect on leg function in more severely affected patients.
Our 1996 systematic review concluded that the evidence that acupuncture is a useful adjunct for stroke rehabilitation is encouraging but not compelling.
By 2001, more data had become available but the conclusion became even less encouraging: there is no compelling evidence to show that acupuncture is effective in stroke rehabilitation.
Finally, by 2010, there were 10 RCTs and we were able to do a meta-analysis of the data. We concluded that our meta-analyses of data from rigorous randomized sham-controlled trials did not show a positive effect of acupuncture as a treatment for functional recovery after stroke.
Yes, my reviews were on slightly different research questions. Yet, they do reveal how a critical assessment of the slowly emerging evidence had to arrive at more and more negative conclusions about the role of acupuncture in the management of stroke patients. For a long time, this message was in stark contrast to what acupuncture-fans were claiming. I wonder whether they will now finally change their mind.
Does acupuncture increase birth rates after IVF?
You might rightly point out that this is a rhetorical question.
Why should acupuncture increase the live birth rates after in vitro fertilization (IVF)?
Because it re-balances yin and yang?
Give me a break!!!
Yet acupuncture is widely used by women undergoing IVF, and therefore, we perhaps ought to know whether it works.
Laudably, someone has conducted a trial so that we know the answer.
This study determined the efficacy of acupuncture compared with a sham acupuncture control performed during IVF on live births. It was designed as a single-blind, parallel-group RCT, including 848 women undergoing a fresh IVF cycle, and conducted at 16 IVF centres in Australia and New Zealand between June 29, 2011, and October 23, 2015, with 10 months of pregnancy follow-up until August 2016.
The women received either acupuncture (n = 424) or a sham acupuncture control (n = 424). The first treatment was administered between days 6 to 8 of follicle stimulation, and two treatments were administered prior to and following embryo transfer. The sham control used a non-invasive needle placed away from acupuncture points. The primary outcome was live birth, defined as the delivery of one or more living infants at greater than 20 weeks’ gestation or birth weight of at least 400 g.
Among the 848 women, 24 withdrew consent, and 824 were included in the study, 607 proceeded to an embryo transfer, and 809 (98.2%) had data available on live birth outcomes. Live births occurred among 74 of 405 women (18.3%) receiving acupuncture compared with 72 of 404 women (17.8%) receiving sham control.
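For readers who like to check such numbers, the published live-birth counts translate into an odds ratio very close to 1. Here is a minimal sketch; the function and the simple Wald-interval method are my own choices for illustration, not taken from the paper:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = events, b = non-events in group 1; c, d likewise in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)

# Live births: 74/405 with acupuncture vs 72/404 with sham
or_, lo, hi = odds_ratio_ci(74, 405 - 74, 72, 404 - 72)
# The confidence interval comfortably straddles 1,
# i.e. no significant difference between the groups
```

The interval spans values both above and below 1, which is exactly why the authors could not claim any benefit.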
The authors concluded that among women undergoing IVF, administration of acupuncture vs sham acupuncture at the time of ovarian stimulation and embryo transfer resulted in no significant difference in live birth rates. These findings do not support the use of acupuncture to improve the rate of live births among women undergoing IVF.
This is a clear result and technically a fairly decent study. I say ‘fairly decent’ because, had the result been positive, one would have had to question the adequacy of the blinding, as well as the possibility that the acupuncturists (inadvertently?) influenced their verum-patients such that they were less anxious and thus produced better outcomes. Moreover, the trial was under-powered, and its publication so long after the end of the study is odd, in my view.
There have, of course, been plenty of trials and even systematic reviews of this topic. Here are the conclusions of the three most recent reviews:
- No significant benefits of acupuncture are found to improve the outcomes of IVF…
- No adjuvant therapy has been shown to be definitively advantageous.
- Currently available literature does not provide sufficient evidence that adjuvant acupuncture improves IVF clinical pregnancy rate.
Yet the authors state that “the evidence for efficacy is conflicting”.
The above conclusions seem crystal clear and not at all conflicting!
Is it because the authors needed to justify the no doubt huge costs for their study?
Is it because conducting such a trial while the evidence is already clear (and negative) is arguably not ethical?
Is it because the authors needed this alleged ‘uncertainty’ for getting their trial in a major journal?
I am, of course, not sure – but I am quite sure of one thing: the evidence that acupuncture is useless for IVF was already pretty clear when they started their study.
And pretending otherwise amounts to telling porkies, doesn’t it?
And telling porkies is unethical, isn’t it?
‘HELLO’ is, of course, a most reliable source of information when it comes to healthcare (and other subjects as well, I am sure). Therefore, I was thrilled to read their report on Meghan Markle’s list of supplements which, ‘HELLO’ claim, she takes for “calming any stress or nerves ahead of the royal wedding on 19 May.” The list includes the following:
- Magnesium,
- Vitamin B-12,
- a multivitamin,
- ‘Cortisol Manager’ (30 tablets cost US$ 65)
Not only does ‘HELLO’ provide us with this most fascinating list, it tells us also what exactly these supplements are best used for:
Magnesium helps to keep blood pressure normal, increase energy, relieves muscle aches and spasms, and calms nerves, all of which will be beneficial to Meghan. Meanwhile, B12 drops will ensure Meghan doesn’t become deficient in the vitamin due to her diet, which is largely plant-based and contains very little animal products, which are one of the main sources of B12.
A multivitamin will provide Meghan with her recommended daily intake of various vitamins and minerals, while Cortisol Manager is a “stress hormone stabiliser”, which is designed to support the body’s natural rise and fall of cortisol, helping promote feelings of relaxation and aid better sleep. The supplement contains L-Theanine, Magnolia, Epimedium and Ashwagandha – although Meghan said she sometimes takes additional doses of the herb, likely at periods of high stress.
Ashwagandha is a herb that helps to moderate the body’s response to stress, bringing inner calm and also boosting energy. The supplement comes from the root of the ashwagandha plant and can be taken in tablet form…
I hope I don’t spoil the Royal wedding if I run a quick reality check on these supplements. Assuming she is generally healthy (she certainly looks it), and now being aware that Meghan eats a mostly plant-based diet, here are the most likely benefits of the above-listed supplements/ingredients:
- Magnesium: NONE
- Vitamin B-12: DEBATABLE
- Multivitamins: NONE
- L-Theanine: NONE
- Magnolia: NONE
- Epimedium: NONE
- Ashwagandha: NONE
Personally, I find Ashwagandha the most intriguing of all the listed ingredients, not least because Meghan said she sometimes takes additional doses of the herb. Why might that be? There is very little reliable research on this (or any of the other above-listed) remedy; but I found one placebo-controlled study which concluded that Ashwagandha “may improve sexual function in healthy women”.
Before my readers now rush out in droves to the next health food shop, I should issue a stern warning: the trial was flimsy and the results lack independent confirmation.
She also seems to have a weakness for homeopathy