If we listen to acupuncturists and their supporters, we might get the impression that acupuncture is totally devoid of risk. Readers of this blog will know that this is not quite true. A recent case report is a further reminder that acupuncture can cause serious complications; in extreme cases it can even kill.
A male patient in his late forties died right after an acupuncture treatment. A medico-legal autopsy disclosed severe haemorrhaging around the right vagus nerve in the neck. All other organs were normal, and laboratory findings revealed nothing significant. Thus, the authors of this case-report concluded that the man most probably died from severe vagal bradycardia and/or arrhythmia resulting from vagus nerve stimulation following acupuncture: To the best of our knowledge, this is the first report of a death due to vagus nerve injury after acupuncture.
In total, around 100 deaths have been reported after acupuncture in the medical literature. ‘This is a negligibly small figure’, claim acupuncture fans. True, it is a small number, but it could be just the tip of a much larger iceberg: there is no reporting system that could reliably pick up severe complications, and in the absence of such a scheme, nobody can cite reliable incidence rates. And even if the numbers of severe complications and deaths are small, even a single fatality would seem one too many.
The deaths that are currently on record are mostly due to bilateral pneumothorax or cardiac tamponade. The present case of vagus nerve injury seems to be ‘a first’. Perhaps we should watch out for similar events?
IF WE DON’T LOOK, WE DON’T SEE.
Reflexology is the treatment of reflex zones, usually on the soles of the feet, with manual massage and pressure. Reflexologists assume that certain zones correspond to certain organs, and that treating them can influence the function of these organs. Thus reflexology is advocated for all sorts of conditions. Proponents are keen to point out that their approach has many advantages: it is pleasant (the patient feels well with the treatment and the therapist feels even better with the money), safe and cheap, particularly if the patient does the treatment herself.
Self-administered foot reflexology could be practical because it is easy to learn and to apply. But is it also effective? A recent systematic review evaluated the effectiveness of self-administered foot reflexology for symptom management.
Participants were healthy persons not diagnosed with a specific disease. The intervention was foot reflexology administered by participants, not by practitioners or healthcare providers. Studies with either between-groups or within-group comparisons were included. The electronic literature searches utilized core databases (MEDLINE, EMBASE, Cochrane, and CINAHL) as well as Chinese (CNKI), Japanese (J-STAGE), and Korean (KoreaMed, KMbase, KISS, NDSL, KISTI, and OASIS) databases.
Three non-randomized trials and three before-and-after studies met the inclusion criteria; no RCTs were located. The results of these studies showed that self-administered foot reflexology resulted in significant improvement in subjective outcomes such as perceived stress, fatigue, and depression. However, there was no significant improvement in objective outcomes such as cortisol levels, blood pressure, and pulse rate.
The authors concluded that this study presents the effectiveness of self-administered foot reflexology for healthy persons’ psychological and physiological symptoms. While objective outcomes showed limited results, significant improvements were found in subjective outcomes. However, owing to the small number of studies and methodological flaws, there was insufficient evidence supporting the use of self-performed foot reflexology. Well-designed randomized controlled trials are needed to assess the effect of self-administered foot reflexology in healthy people.
I find this review quite interesting, but I would draw very different conclusions from its findings.
The studies that are available turned out to be of very poor methodological quality: they lack randomisation or rely on before/after comparisons. This means they are wide open to bias and false-positive results, particularly with regard to subjective outcome measures. Predictably, the findings of this review confirm that no effects are seen on objective endpoints. This is in perfect agreement with the hypothesis that reflexology is a pure placebo. Considering the biological implausibility of the assumptions underlying reflexology, this makes sense.
My conclusions of this review would therefore be as follows: THE RESULTS ARE IN KEEPING WITH REFLEXOLOGY BEING A PURE PLACEBO.
Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?
Here is a brand new one which might stand for dozens of others.
In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA) and prospectively followed them over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).
The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out, and one did not achieve an improvement of 80%, and was therefore also counted as a treatment failure. The cost of homeopathic treatment was 41% of projected equivalent conventional treatment.
Good news then for enthusiasts of homeopathy? 91% improvement!
Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… until I read the authors’ conclusions that is:
Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.
Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:
- How on earth can we take this and so many other articles on homeopathy seriously?
- When does this sort of article cross the line between wishful thinking and scientific misconduct?
On 1/12/2014 I published a post in which I offered to give lectures to students of alternative medicine:
Getting good and experienced lecturers for courses is not easy. Having someone who has done more research than most working in the field and who is internationally known might therefore be a thrill for students and an image-boosting experience for colleges. In the true Christmas spirit, I am today making the offer of being of assistance to the many struggling educational institutions of alternative medicine.
A few days ago, I tweeted about my willingness to give free lectures to homeopathic colleges (so far without response). Having thought about it a bit, I would now like to extend this offer. I would be happy to give a free lecture to the students of any educational institution of alternative medicine.
I did not think that this would create much interest – and I was right: only the ANGLO-EUROPEAN COLLEGE OF CHIROPRACTIC has so far called my bluff and, after some discussion (see the comment section of the original post), hosted me for a lecture. Several people seem keen on knowing how this went; so here is a brief report.
I was received, on 14/1/2015, with the utmost kindness by my host David Newell. We had a coffee and a chat and then it was time to start the lecture. The hall was packed with ~150 students, and the same number was listening in a second lecture hall to which my talk was being transmitted.
We had agreed on the title CHIROPRACTIC: FALLACIES AND FACTS. So, after telling the audience about my professional background, I elaborated on 7 fallacies:
- Appeal to tradition
- Appeal to authority
- Appeal to popularity
- Subluxation exists
- Spinal manipulation is effective
- Spinal manipulation is safe
- Ad hominem attack
Numbers 3, 5 and 6 were dealt with in more detail than the rest. The organisers had asked me to finish by elaborating on what I perceive as the future challenges of chiropractic; so I did:
- Stop happily promoting bogus treatments
- Denounce obsolete concepts like ‘subluxation’
- Clarify differences between chiros, osteos and physios
- Start a culture of critical thinking
- Take action against charlatans in your ranks
- Stop attacking everyone who voices criticism
I ended by pointing out that the biggest challenge, in my view, was to “demonstrate with rigorous science which chiropractic treatments demonstrably generate more good than harm for which condition”.
We had agreed that my lecture would be followed by half an hour of discussion; this period turned out to be lively and had to be extended to a full hour. Most questions initially came from the tutors rather than the students, and most were polite – I had expected much more aggression.
In his email thanking me for coming to Bournemouth, David Newell wrote about the event: The general feedback from staff and students was one of relief that you possessed only one head, :-). I hope you may have felt the same about us. You came over as someone who had strong views, a fair amount of which we disagreed with, but that presented them in a calm, informative and courteous manner as we did in listening and discussing issues after your talk. I think everyone enjoyed the questions and debate and felt that some of the points you made were indeed fair critique of what the profession may need to do, to secure a more inclusive role in the health care arena.
My own impression of the day is that some of my messages were not really understood, that some of the questions, including some from the tutors, seemed to come from a different planet, and that people were more out to teach me than to learn from my talk. One overall impression that I took home from that day is that, even in this college, which prides itself on being open to scientific evidence and unimpressed by chiropractic fundamentalism, students are strangely different from other health care professionals. The most tangible aspect of this is the openly hostile attitude towards drug therapies voiced during the discussion by some students.
The question I always ask myself after having invested a lot of time in preparing and delivering a lecture is: WAS IT WORTH IT? In the case of this lecture, I think the answer is YES. With 300 students present, I am fairly confident that I did manage to stimulate a tiny bit of critical thinking in a tiny percentage of them. The chiropractic profession needs this badly!
The very first article on a subject related to alternative medicine with a 2015 date that I came across is a case-report. I am afraid it will not delight our chiropractic friends who tend to deny that their main therapy can cause serious problems.
In this paper, US doctors tell the story of a young woman who developed headache, vomiting, diplopia, dizziness, and ataxia following a neck manipulation by her chiropractor. A computed tomography scan of the head was ordered and it revealed an infarct in the inferior half of the left cerebellar hemisphere and compression of the fourth ventricle causing moderately severe, acute obstructive hydrocephalus. Magnetic resonance angiography showed severe narrowing and low flow in the intracranial segment of the left distal vertebral artery. The patient was treated with mannitol and a ventriculostomy. Following these interventions, she made an excellent functional recovery.
The authors of the case-report draw the following conclusions: This report illustrates the potential hazards associated with neck trauma, including chiropractic manipulation. The vertebral arteries are at risk for aneurysm formation and/or dissection, which can cause acute stroke.
I can already hear the counter-arguments: this is not evidence, it’s an anecdote; the evidence from the Cassidy study shows there is no such risk!
Indeed the Cassidy study concluded that vertebral artery accident (VBA) stroke is a very rare event in the population. The increased risks of VBA stroke associated with chiropractic and primary care physician visits is likely due to patients with headache and neck pain from VBA dissection seeking care before their stroke. We found no evidence of excess risk of VBA stroke associated with chiropractic care compared to primary care. That, of course, was what chiropractors longed to hear (and it is the main basis for their denial of risk) – so much so that Cassidy et al published the same results a second time (most experts feel that this is a violation of publication ethics).
But repeating arguments does not make them more true. What we should not forget is that the Cassidy study was but one of several case-control studies investigating this subject. And the totality of all such studies does not deny an association between neck manipulation and stroke.
Much more important is the fact that a re-analysis of the Cassidy data found that prior studies grossly misclassified cases of cervical dissection and mistakenly dismissed a causal association with manipulation. The authors of this new paper identified a classification error of cases by Cassidy et al and re-analysed the Cassidy data, which had originally shown no association between spinal manipulation and cervical artery dissection (odds ratio [OR] = 1.12, 95% CI 0.77–1.63). The re-calculated results reveal an OR of 2.15 (95% CI 0.98–4.69). For patients less than 45 years of age, the OR was 6.91 (95% CI 2.59–13.74). The authors of the re-analysis conclude as follows: If our estimates of case misclassification are applicable outside the VA population, ORs for the association between SMT exposure and CAD are likely to be higher than those reported using the Rothwell/Cassidy strategy, particularly among younger populations. Future epidemiologic studies of this association should prioritize the accurate classification of cases and SMT exposure.
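For readers who want to check such figures, the arithmetic behind an odds ratio and its confidence interval is straightforward. The sketch below uses the standard Woolf (logit) method; the cell counts are hypothetical, purely for illustration, and are not the counts from the re-analysis.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/logit method) for a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, NOT the actual study data:
or_, lo, hi = odds_ratio_ci(20, 10, 100, 110)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With a study's actual 2×2 counts, the same two lines reproduce its reported OR and CI, give or take rounding.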
I think they are correct; but my conclusion of all this would be more pragmatic and much simpler: UNTIL WE HAVE CONVINCING EVIDENCE TO THE CONTRARY, WE HAVE TO ASSUME THAT CHIROPRACTIC NECK MANIPULATION CAN CAUSE A STROKE.
On this blog and elsewhere, I have repeatedly cast doubt on the efficacy of homeopathy – not because I have ‘an axe to grind’, as some seem to believe, but because
- the assumptions which underpin homeopathy fly in the face of science,
- the clinical evidence fails to show that it works beyond a placebo effect.
But was I correct?
A new systematic review and meta-analysis seems to indicate that I was mistaken. It tested the hypothesis that the outcome of an individualised homeopathic treatment (homeopaths would argue that this is the only true approach to homeopathy) is distinguishable from that with placebos.
The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.
Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
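To illustrate how a pooled OR such as the 1.53 above is obtained, here is a minimal sketch of fixed-effect inverse-variance pooling on the log-odds scale. The per-trial values below are invented for illustration only; the review itself followed its own protocol-based statistical methods on real trial data.

```python
import math

def pool_fixed_effect(log_ors, ses):
    """Fixed-effect inverse-variance pooling on the log(OR) scale.
    Each trial is weighted by the inverse of its variance."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * lor for w, lor in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci

# Hypothetical trial-level results (log-odds ratios and their SEs):
log_ors = [math.log(1.4), math.log(1.8), math.log(1.2)]
ses = [0.25, 0.30, 0.20]
pooled_or, ci = pool_fixed_effect(log_ors, ses)
print(f"pooled OR = {pooled_or:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```

Note that precise trials (small SE) dominate the pooled estimate, which is exactly why reviews distinguish reliable from unreliable evidence.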
The authors arrived at the following conclusion: medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
One does not need to be a prophet to predict that the world of homeopathy will declare this article as the ultimate proof of homeopathy’s efficacy beyond placebo. Already the ‘British Homeopathic Association’ has issued the following press release:
Clinical evidence for homeopathy published
Research into the effectiveness of homeopathy as an individualised treatment has produced results that may surprise many from the worlds of science and medicine. The conclusions are reported cautiously, but the new publication is the first of its type to present evidence that medicines prescribed in individualised homeopathy may have specific effects.
The paper, published in the peer-reviewed journal Systematic Reviews,1 reports a rigorous systematic review and meta-analysis of 32 randomised controlled trials (RCTs) in which homeopathic medicines were prescribed on an individual basis to each participant, depending on their particular symptoms.
The overall quality of the RCT evidence was found to be low or unclear, preventing the researchers from reaching decisive conclusions. Three RCTs were identified as “reliable evidence”.
The study was led by Dr Robert Mathie, research development adviser for the British Homeopathic Association, in partnership with a number of collaborators, including colleagues at the Robertson Centre for Biostatistics, University of Glasgow, who independently verified the statistical methods and findings.
“What we found from the statistics,” says Dr Mathie, “is that the effect of individualised treatment using homeopathic medicines was significantly greater than placebos, and that this effect was retained when we included only the three trials with reliable evidence. This tentatively provides proof of concept that homeopathic medicines have clinical treatment effects.”
Surprised? I was stunned and thus studied the article in much detail (luckily the full text version is available online). Then I entered into an email exchange with the first author, whom I happen to know personally (to his credit, he responded regularly). In the end, this conversation helped me to better understand the review’s methodology; but it also left me very much underwhelmed by the reliability of the authors’ conclusion.
Normally I would now explain why. But, in this particular case, I thought it would be interesting and helpful to give others the opportunity to examine the article and come up with their own comments. Subsequently I will add my criticisms.
SO PLEASE TAKE SOME TIME TO STUDY THIS PAPER AND TELL US WHAT YOU THINK.
Few subjects make chiropractors more uneasy than a discussion of the safety of their spinal manipulations. Many chiropractors flatly deny that there are any risks at all. However, the evidence seems to tell a different story.
The purpose of a new review was to summarise the literature for cases of adverse events in infants and children treated by chiropractors or other manual therapists, identifying treatment type and if a preexisting pathology was present. English language, peer-reviewed journals and non-peer-reviewed case reports discussing adverse events (ranging from minor to serious) were systematically searched from inception of the relevant searchable bibliographic databases through March 2014. Articles not referring to infants or children were excluded.
Thirty-one articles met the selection criteria. A total of 12 articles reporting 15 serious adverse events were found. Three deaths occurred under the care of various providers (1 physical therapist, 1 unknown practitioner, and 1 craniosacral therapist) and 12 serious injuries were reported (7 chiropractors/doctors of chiropractic, 1 medical practitioner, 1 osteopath, 2 physical therapists, and 1 unknown practitioner). High-velocity, extension, and rotational spinal manipulation was reported in most cases, with 1 case involving forcibly applied craniosacral dural tension and another involving use of an adjusting instrument. Underlying preexisting pathology was identified in a majority of the cases.
The authors concluded that published cases of serious adverse events in infants and children receiving chiropractic, osteopathic, physiotherapy, or manual medical therapy are rare. The 3 deaths that have been reported were associated with various manual therapists; however, no deaths associated with chiropractic care were found in the literature to date. Because underlying preexisting pathology was associated in a majority of reported cases, performing a thorough history and examination to exclude anatomical or neurologic anomalies before applying any manual therapy may further reduce adverse events across all manual therapy professions.
This review is a valuable addition to our knowledge about the risks of spinal manipulations. My own review summarised 26 deaths after chiropractic manipulations. In several of these instances, the age of the patient had not been reported. Therefore the above conclusion (no deaths associated with chiropractic) seems a little odd.
The following text is a shortened version of the discussion of my review which, I think, addresses most of the pertinent issues.
… numerous deaths have been associated with chiropractic. Usually high-velocity, short-lever thrusts of the upper spine with rotation are implicated. They are believed to cause vertebral arterial dissection in predisposed individuals which, in turn, can lead to a chain of events including stroke and death. Many chiropractors claim that, because arterial dissection can also occur spontaneously, causality between the chiropractic intervention and arterial dissection is not proven. However, when carefully evaluating the known facts, one does arrive at the conclusion that causality is at least likely. Even if it were merely a remote possibility, the precautionary principle in healthcare would mean that neck manipulations should be considered unsafe until proven otherwise. Moreover, there is no good evidence for assuming that neck manipulation is an effective therapy for any medical condition. Thus, the risk-benefit balance for chiropractic neck manipulation fails to be positive.
Reliable estimates of the frequency of vascular accidents are prevented by the fact that underreporting is known to be substantial. In a survey of UK neurologists, for instance, under-reporting of serious complications was 100%. Those cases which are published often turn out to be incomplete. Of 40 case reports of serious adverse effects associated with spinal manipulation, nine failed to provide any information about the clinical outcome. Incomplete reporting of outcomes might therefore further increase the true number of fatalities.
This review is focussed on deaths after chiropractic, yet neck manipulations are, of course, used by other healthcare professionals as well. The reason for this focus is simple: chiropractors are more frequently associated with serious manipulation-related adverse effects than osteopaths, physiotherapists, doctors or other professionals. Of the 40 cases of serious adverse effects mentioned above, 28 can be traced back to a chiropractor and none to an osteopath. A review of complications after spinal manipulations by any type of healthcare professional included three deaths related to osteopaths, nine to medical practitioners, none to a physiotherapist, one to a naturopath and 17 to chiropractors. This article also summarised a total of 265 vascular accidents, of which 142 were linked to chiropractors. Another review, covering complications after neck manipulations published up to 1997, included 177 vascular accidents, 32 of which were fatal. The vast majority of these cases were associated with chiropractic and none with physiotherapy. The most obvious explanation for the dominance of chiropractic is that chiropractors routinely employ high-velocity, short-lever thrusts on the upper spine with a rotational element, while the other healthcare professionals use them much more sparingly.
[REFERENCES FOR THE ABOVE STATEMENTS CAN BE FOUND IN MY REVIEW]
Adverse events have been reported extensively following chiropractic. About 50% of patients suffer side-effects after seeing a chiropractor. The majority of these events are mild, transitory and self-limiting. However, chiropractic spinal manipulations, particularly those of the upper spine, have also been associated with very serious complications; several hundred such cases have been reported in the medical literature and, as there is no monitoring system to record these instances, this figure is almost certainly just the tip of a much larger iceberg.
Despite these facts, little is known about patient-filed compensation claims related to the chiropractic consultation process. The aim of a new study was to describe claims reported to the Danish Patient Compensation Association and the Norwegian System of Compensation to Patients related to chiropractic from 2004 to 2012.
All finalized compensation claims involving chiropractors reported to one of the two associations between 2004 and 2012 were assessed for age, gender, type of complaint, decisions and appeals. Descriptive statistics were used to describe the study population.
338 claims were registered in Denmark and Norway between 2004 and 2012, of which 300 were included in the analysis; 41 (13.7%) were approved for financial compensation. The most frequent complaints were worsening of symptoms following treatment (n = 91, 30.3%), alleged disk herniations (n = 57, 19%) and cases with delayed referral (n = 46, 15.3%). A total financial payment of €2,305,757 (median payment €7,730) was distributed among the forty-one approved cases, with the few complaints relating to cervical artery dissection (n = 11, 5.7%) accounting for 88.7% of the total amount.
The authors concluded that chiropractors in Denmark and Norway received approximately one compensation claim per 100,000 consultations. The approval rate was low across the majority of complaint categories and lower than the approval rates for general practitioners and physiotherapists. Many claims can probably be prevented if chiropractors would prioritize informing patients about the normal course of their complaint and normal benign reactions to treatment.
Despite its somewhat odd conclusion (it is not truly based on the data), this is a unique article; I am not aware that other studies of chiropractic compensation claims exist in a European context. The authors should be applauded for their work. Clearly we need more of the same from other countries and from all professions doing manipulative therapies.
In the discussion section of their article, the authors point out that Norwegian and Danish chiropractors each deliver approximately two million consultations annually. They receive on average 42 claims per year combined, suggesting roughly one claim per 100,000 consultations. By comparison, Danish statistics show that in the period 2007–2012 chiropractors, GPs and physiotherapists (+ occupational therapists) received 1.76, 1.32 and 0.52 claims per 100,000 consultations, respectively, with approval rates of 13%, 25% and 21%, respectively. During this period these three groups were reimbursed on average €58,000, €29,000 and €18,000 per approved claim, respectively.
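The claims-rate arithmetic is easy to verify. A quick back-of-the-envelope check, assuming roughly two million consultations per year in each of the two countries, as the figures quoted above suggest:

```python
# ~2 million consultations per year in each country,
# 42 claims per year combined across Denmark and Norway.
consultations_per_year = 2_000_000 * 2
claims_per_year = 42

rate_per_100k = claims_per_year / consultations_per_year * 100_000
print(round(rate_per_100k, 2))  # ≈ 1 claim per 100,000 consultations
```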
These data are preliminary and their interpretation might be a matter of debate. However, one thing seems clear enough: contrary to what we frequently hear from apologists, chiropractors do receive a considerable amount of compensation claims which means many patients do get harmed.
Guest post by Pete Attkins
Commentator “jm” asked a profound and pertinent question: “What DOES it take for people to get real in this world, practice some common sense, and pay attention to what’s going on with themselves?” This question was asked in the context of asserting that personal experience always trumps the results of large-scale scientific experiments, and that alt-med experts are better able to provide individualized healthcare than 21st-century orthodox medicine.
What does common sense and paying attention lead us to conclude about the following? We test a six-sided die for bias by rolling it 100 times. The number 1 occurs only once and the number 6 occurs many times, never on its own, but in several groups of consecutive sixes.
I think it is reasonable to say that common sense would, and should, lead everyone to conclude that the die is biased and not fit for its purpose as a source of random numbers.
In other words, we have a gut feeling that the die is untrustworthy. Gut instincts and common sense are geared towards maximizing our chances of survival in our complex and unpredictable world — these are innate and learnt behaviours that have enabled humans to survive despite the harshness of our ever changing habitat.
Only very recently in the long history of our species have we developed specialized tools that enable us to better understand our harsh and complex world: science and critical thinking. These tools are difficult to master because they still haven’t been incorporated into our primary and secondary formal education systems.
The vast majority of people do not have these skills; therefore, when a scientific finding flies in the face of our gut instincts and/or common sense, it creates an overwhelming desire to reject the finding and to classify the scientist(s) as irrational and lacking basic common sense. It does not create an intense desire to accept the finding and then painstakingly learn all of the science that went into producing it.
With that in mind, let’s rethink our common sense conclusion that the six-sided die is biased and untrustworthy. What we really mean is that the results have given all of us good reason to be highly suspicious of this die. We aren’t 100% certain that this die is biased, but our gut feeling and common sense are more than adequate to form a reasonable mistrust of it and to avoid using it for anything important to us. Reasons to keep this die rather than discard it might be to provide a source of mild entertainment or to use its bias for the purposes of cheating.
Some readers might be surprised to discover at this point that the results I presented from this apparently heavily-biased die are not only perfectly valid results obtained from a truly random unbiased die, they are to be fully expected. Even if the die had produced 100 sixes in that test, it would not confirm that the die is biased in any way whatsoever. Rolling a truly unbiased die once will produce one of six possible outcomes. Rolling the same die 100 times will produce one unique sequence out of the 6^100 (6.5 x 10^77) possible sequences: all of which are equally valid!
Gut feeling plus common sense rightfully informs us that the probability of a random die producing one hundred consecutive sixes is so incredibly remote that nobody will ever see it occur in reality. This conclusion is also mathematically sound: if there were 6.5 x 10^77 people on Earth, each performing the same test on truly random dice, there is no guarantee that anyone would observe a sequence of one hundred consecutive sixes.
When we observe a sequence such as 2 5 1 4 6 3 1 4 3 6 5 2… common sense informs us that the die is very likely random. If we calculate the arithmetic mean to be very close to 3.5 then common sense will lead us to conclude that the die is both random and unbiased enough to use it as a reliable source of random numbers.
Unfortunately, this is a perfect example of our gut feelings and common sense failing us abysmally. They totally failed to warn us that the 2 5 1 4 6 3 1 4 3 6 5 2… sequence we observed had exactly the same (im)probability of occurring as a sequence of one hundred 6s or any other sequence that one can think of that doesn’t look random to a human observer.
The 100-roll die test is nowhere near powerful enough to properly test a six-sided die, but this test is more than adequately powered to reveal some of our cognitive biases and some of the deficits in our personal mastery of science and critical thinking.
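To see just how weak the 100-roll test is, here is a small simulation (a sketch; the 50% bias figure is my own assumption for illustration): a die whose six is 50% more likely than any other face still produces 100-roll means that overlap heavily with those of a fair die.

```python
import random
random.seed(0)

def mean_of_rolls(weights, n=100):
    """Mean of n rolls from a die with the given face weights."""
    faces = [1, 2, 3, 4, 5, 6]
    return sum(random.choices(faces, weights=weights, k=n)) / n

fair   = [1, 1, 1, 1, 1, 1]      # fair die: expected mean 3.50
biased = [1, 1, 1, 1, 1, 1.5]    # six is 50% more likely: expected mean ~3.69

fair_means   = [mean_of_rolls(fair)   for _ in range(1000)]
biased_means = [mean_of_rolls(biased) for _ in range(1000)]

# The two ranges overlap heavily: many biased runs look perfectly "fair".
print(min(fair_means), max(fair_means))
print(min(biased_means), max(biased_means))
```

A single 100-roll mean near 3.5 is therefore entirely compatible with a substantially biased die.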
To properly test the die we need solid evidence that it is truly random and that its measured bias tends towards zero as the number of rolls tends towards infinity. We could use the services of one testing lab to conduct billions of test rolls, but this would not exclude errors caused by such things as miscalibrated equipment and experimenter bias. It is better to divide the testing among multiple labs and then carefully analyse and appropriately aggregate the results: this dramatically reduces errors caused by equipment and humans.
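A minimal sketch of the aggregation idea, with entirely hypothetical per-lab reports of (rolls, sixes):

```python
# Hypothetical per-lab reports: (number of rolls, number of sixes).
labs = [(10_000, 1690), (10_000, 1655), (10_000, 1702)]

# Pool the raw counts before estimating: a single miscalibrated lab
# then shifts the pooled estimate far less than it shifts its own.
total_rolls = sum(rolls for rolls, _ in labs)
total_sixes = sum(sixes for _, sixes in labs)
pooled_freq = total_sixes / total_rolls
print(round(pooled_freq, 4))   # close to 1/6 ~ 0.1667 for a fair die
```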
In medicine, this testing process is performed via systematic reviews of multiple, independent, double-blind, placebo-controlled trials — every trial that is insufficiently powered to add meaningfully to the result is rightfully excluded from the aggregation.
Alt-med relies on a diametrically opposed testing process. It performs a plethora of only underpowered tests; presents those that just happen to show a positive result (just as a random die could’ve produced); and sweeps under the carpet the overwhelming number of tests that produced a negative result. It publishes only the ‘successes’, not its failures. By sweeping its failures under the carpet it feels justified in making the very bold claim: Our plethora of collected evidence shows clearly that it mostly ‘works’ and, when it doesn’t, it causes no harm.
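The effect of publishing only the 'successes' is easy to simulate (a sketch with made-up numbers): run many underpowered trials of a treatment that does nothing at all, and keep only those that look positive by chance.

```python
import random
random.seed(1)

# 200 underpowered trials of a treatment with NO real effect:
# each trial has 10 patients, each of whom "improves" with
# probability 0.5 regardless of the treatment.
def improved_count():
    return sum(random.random() < 0.5 for _ in range(10))

trials = [improved_count() for _ in range(200)]

# Call a trial "positive" if 8+ of 10 improved -- and publish only those.
published = [t for t in trials if t >= 8]
print(f"{len(published)} of {len(trials)} null trials published as 'successes'")
```

Roughly 5% of these null trials come out "positive" by chance alone; publish only those, and a useless treatment acquires an impressive-looking evidence base.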
One of the harshest tests for a hypothesis and its supporting data (a mandatory “acid test“ in a few branches of critical engineering) is to substitute, for the collected data, random data that has been carefully crafted to emulate the probability mass functions of the collected datasets. This test has to be run multiple times, for reasons that I have attempted to explain in my random die example. If the proposer of the hypothesis is unable to explain the multiple failures resulting from this acid test, then it is highly likely that the proposer either does not fully understand their hypothesis, or that their hypothesis is indistinguishable from the null hypothesis.
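Here is a toy version of that acid test, with hypothetical data (the correlation analysis and the datasets are illustrative, not from any real study): replace the measured outcome with surrogate values drawn from its own empirical distribution, re-run the analysis many times, and count how often pure chance reproduces the "effect".

```python
import random
random.seed(2)

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]      # hypothetical predictor
y = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]      # hypothetical outcome, tracks x

def corr(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    sa = sum((p - ma) ** 2 for p in a) ** 0.5
    sb = sum((q - mb) ** 2 for q in b) ** 0.5
    return cov / (sa * sb)

observed = corr(x, y)   # a strong apparent "effect"

# Surrogates: y replaced by random draws from y's own distribution.
# A genuine effect should beat its surrogates almost every time.
beats = sum(corr(x, random.choices(y, k=len(y))) >= observed
            for _ in range(1000))
print(f"surrogates matching the observed effect: {beats}/1000")
```

If the surrogate runs match the claimed effect anywhere near as often as the real data do, the "effect" is indistinguishable from noise.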
Guest post by Jan Oude-Aost
ADHD is a common disorder among children. There are evidence-based pharmacological treatments, the best known being methylphenidate (MPH). MPH has something of a bad reputation, but it is effective and reasonably safe. The market is also full of alternative treatments, pharmacological and otherwise, some of them under investigation, some unproven and many disproven. So I was not surprised to find a study about Ginkgo biloba as a treatment for ADHD. I was surprised, however, to find this study in the German Journal of Child and Adolescent Psychiatry and Psychotherapy, officially published by the “German Society of Child and Adolescent Psychiatry and Psychotherapy“ (Deutsche Gesellschaft für Kinder- und Jugendpsychiatrie und Psychotherapie). The journal’s guidelines state that studies should provide new scientific results.
The study is called “Ginkgo biloba Extract EGb 761® in Children with ADHD“. EGb 761® is the key ingredient in “Tebonin®“, a herbal drug made by “Dr. Wilma Schwabe GmbH“. The abstract states:
“One possible treatment, at least for cognitive problems, might be the administration of Ginkgo biloba, though evidence is rare. This study tests the clinical efficacy of a Ginkgo biloba special extract (EGb 761®) (…) in children with ADHD (…).“
“Eine erfolgversprechende, bislang kaum untersuchte Möglichkeit zur Behandlung kognitiver Aspekte ist die Gabe von Ginkgo biloba. Ziel der vorliegenden Studie war die Prüfung klinischer Wirksamkeit (…) von Ginkgo biloba-Extrakt Egb 761® bei Kindern mit ADHS.“ (Taken from the English and German abstracts; the German literally translates as: “A promising possibility for treating cognitive aspects, so far barely investigated, is the administration of Ginkgo biloba. The aim of the present study was to test the clinical efficacy (…) of Ginkgo biloba extract Egb 761® in children with ADHD.“)
The study sample (20!) was recruited among children who “did not tolerate or were unwilling“ to take MPH. The unwilling part struck me as problematic: there is likely a strong selection bias towards parents who are unwilling to give their children MPH. I guess it is not the children who are unwilling to take MPH, but the parents who are unwilling to administer it. At least some of these parents might be biased against MPH and might already favor CAM modalities.
The authors state three main problems with “herbal therapy“ that require more empirical evidence: first, the question of adverse reactions, which they claim occur in about 1% of cases with “some CAMs“ (mind you, not “herbal therapy“); second, the question of drug interactions; and third, the lack of information physicians have about the CAMs their patients use.
A large part of the study is based on the results of an EEG protocol, which I choose to ignore, because the clinical results are too weak to give the EEG findings any clinical relevance.
Before looking at the study itself, let’s look at what is known about Ginkgo biloba as a drug. Ginkgo is best known for its use in patients with dementia, cognitive impairment and tinnitus. A Cochrane review from 2009 concluded:
“There is no convincing evidence that Ginkgo biloba is efficacious for dementia and cognitive impairment“.
The authors of the current study cite Sarris et al. (2011), a systematic review of complementary treatments of ADHD. Sarris et al. mention Salehi et al. (2010), who tested Ginkgo against MPH. MPH turned out to be much more effective than Ginkgo, but Sarris et al. argue that the duration of treatment (6 weeks) might have been too short to see the full effects of Ginkgo.
Given the above information, it is unclear why Ginkgo is judged a “possible“ treatment (the German abstract even calls it “promising“), and why the authors state that Ginkgo has been “barely studied“.
In an unblinded, uncontrolled study with a sample likely to be biased toward the tested intervention, anything other than a positive result would be odd. In the treatment of autism there are several examples of implausible treatments that worked as long as parents knew that their children were getting the treatment, but didn’t after proper blinding (e.g. secretin).
This study’s aim was to test clinical efficacy, yet the conclusion begins with how well tolerated Ginkgo was. The efficacy is mentioned only subsequently: “Following administration, interrelated improvements on behavioral ratings of ADHD symptoms (…) were detected (…).“ But the way they were “detected“ is interesting. The authors used an established questionnaire (FBB-HKS) to let parents rate their children. Only the parents: the children and their teachers were not given the FBB-HKS questionnaires, in spite of this being standard clinical practice (and in spite of the children being given questionnaires to determine changes in quality of life, which were not found).
None of the three problems that the authors describe as important (adverse reactions, drug interactions, lack of information) can be answered by this study. I am no expert in statistics, but it seems unlikely that one could meaningfully determine adverse effects in just 20 patients, especially when such effects occur at a rate of about 1%. The authors claim they found an incidence rate of 0.004% in “700 observation days“. Well, if they say so.
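A quick back-of-the-envelope calculation shows why 20 patients cannot settle the 1% question (the 1% figure is the authors' own):

```python
# If the true adverse-effect rate is 1%, the chance of seeing even ONE
# such event among 20 patients is 1 - 0.99**20 -- about 18%.  So a
# clean run in 20 patients is entirely compatible with a 1% rate.
p_event = 0.01
n_patients = 20
p_at_least_one = 1 - (1 - p_event) ** n_patients
print(round(p_at_least_one, 3))   # 0.182
```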
The authors conclude:
“Taken together, the current study provides some preliminary evidence that Ginkgo biloba Egb 761® seems to be well tolerated in the short term and may be a clinically useful treatment for children with ADHD. Double-blind randomized trials are required to clarify the value of the presented data.“
Given the available information mentioned earlier, one could have started with that conclusion and conducted a double-blind RCT in the first place!
They continue: “The trends of this preliminary open study may suggest that Ginkgo biloba Egb 761® might be considered as a complementary or alternative medicine for treating children with ADHD.“
So why do I care, if preliminary evidence merely “may suggest“ that something “might be considered“ as a treatment? Because this study does not answer any important questions and does not give us any new or useful knowledge. Following the journal’s guidelines, it should therefore not have been published. I also think it is an example of bad science: bad not just because of the lack of critical thinking, but also because it adds to the misinformation about possible ADHD treatments spreading through the internet. The study was published in September; in November I found a website citing the study and calling it “clinical proof“, which it is not. Child psychiatrists will now have to explain this to many parents, instead of talking about their children’s health.
I somehow got the impression that this study was more about marketing than about science. I wonder if Schwabe will help finance the necessary double-blind randomized trial…