“There is a ton of chiropractor journals. If you want evidence then read some.”
This comment was left by a defender of chiropractic under a recent post of mine. And it’s true, of course: there are quite a few chiro journals. But are they a reliable source of information?
One way of quantifying the reliability of medical journals is to calculate the percentage of their published articles that arrive at negative conclusions. In the extreme case of a journal publishing nothing but positive results, we cannot assume that it is a credible publication. Such a journal would not be a scientific publication at all; it would be akin to a promotional rag.
Back in 1997, we published our first analysis of journals of so-called alternative medicine (SCAM). It showed that just 1% of the papers published in SCAM journals reported findings that were not positive. In the years that followed, we confirmed this deplorable state of affairs repeatedly, and on this blog I have shown that the relatively new EBCAM journal is similarly dubious.
But these were not journals focussing specifically on chiropractic. The question of whether chiro journals are any different from the rest of SCAM therefore remained unanswered. Enough reason for me to bite the bullet and test this hypothesis. I thus went on Medline and assessed all the articles published in 2018 in two of the leading chiro journals:
- JOURNAL OF CHIROPRACTIC MEDICINE (JCM)
- CHIROPRACTIC AND MANUAL THERAPY (CMT)
I evaluated them according to
- TYPE OF ARTICLE
- DIRECTION OF CONCLUSION
The results of my analysis are as follows:
- The JCM published 39 Medline-listed papers in 2018.
- The CMT published 50 such papers in 2018.
- Together, the 2 journals published:
- 18 surveys,
- 17 case reports,
- 10 reviews,
- 8 diagnostic papers,
- 7 pilot studies,
- 4 protocols,
- 2 RCTs,
- 2 non-randomised trials,
- 2 case-series,
- the rest were miscellaneous types of articles.
None of these papers arrived at a conclusion that was negative or contrary to chiropractors’ current belief in chiropractic care. The percentage of papers reporting negative findings is thus exactly 0%, a figure that is almost identical to the 1% we had found for SCAM journals in 1997.
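The arithmetic behind that 0% figure can be sketched in a few lines (a hedged illustration; the counts are those listed above):

```python
# Counts from the analysis above: 39 JCM + 50 CMT Medline-listed
# papers from 2018, none of which reported a negative finding.
papers = {"JCM": 39, "CMT": 50}
negative = 0  # papers arriving at a negative conclusion

total = sum(papers.values())            # 89 papers in total
pct_negative = 100 * negative / total   # share of negative findings

print(f"{negative}/{total} negative = {pct_negative:.0f}%")
```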
I conclude: these results suggest that the notion of chiro journals publishing reliable information is not supported by sound evidence.
On this blog, we have often noted that (almost) all TCM trials from China report positive results. Essentially, this means we might as well discard them, because we simply cannot trust their findings. When I was asked to comment on a related issue, it occurred to me that this might not be so different with Korean acupuncture studies. So, I tried to test the hypothesis by running a quick Medline search for Korean acupuncture RCTs. What I found surprised me and eventually turned into a reminder of the importance of critical thinking.
Even though I found plenty of articles on acupuncture coming out of Korea, my search generated merely 3 RCTs. Here are their conclusions:
The results of this study show that moxibustion (3 sessions/week for 4 weeks) might lower blood pressure in patients with prehypertension or stage I hypertension and treatment frequency might affect effectiveness of moxibustion in BP regulation. Further randomized controlled trials with a large sample size on prehypertension and hypertension should be conducted.
The results of this study show that acupuncture might lower blood pressure in prehypertension and stage I hypertension, and further RCT need 97 participants in each group. The effect of acupuncture on prehypertension and mild hypertension should be confirmed in larger studies.
Bee venom acupuncture combined with physiotherapy remains clinically effective 1 year after treatment and may help improve long-term quality of life in patients with AC of the shoulder.
So yes, according to this mini-analysis, 100% of the acupuncture RCTs from Korea are positive. But the sample size is tiny, and I may not have located all RCTs with my ‘rough and ready’ search.
But what are all the other Korean acupuncture articles about?
Many are protocols for RCTs, which is puzzling because some of them are now so old that the RCT itself should long since have emerged. Could it be that some Korean researchers publish protocols without ever publishing the trial? If so, why? But most are systematic reviews of RCTs of acupuncture. There must be about one order of magnitude more systematic reviews than RCTs!
Why so many?
Perhaps I can contribute to the answer to this question; perhaps I am even partly guilty of this bonanza.
In the period between 2008 and 2010, I had several Korean co-workers on my team at Exeter, and we regularly conducted systematic reviews of acupuncture for various indications. In fact, the first 6 such systematic reviews carry my name. This research seems to have started a trend among Korean acupuncture researchers, because ever since they seem unable to stop themselves from publishing such articles.
So far so good, a plethora of systematic reviews is not necessarily a bad thing. But looking at the conclusions of these systematic reviews, I seem to notice a worrying trend: while our reviews from the 2008-2010 period arrived at adequately cautious conclusions, the new reviews are distinctly more positive in their conclusions and uncritical in their tone.
Let me explain this by citing the conclusions of the very first (includes me as senior author) and the very last review (does not include me) currently listed in Medline:
penetrating or non-penetrating sham-controlled RCTs failed to show specific effects of acupuncture for pain control in patients with rheumatoid arthritis. More rigorous research seems to be warranted.
Electroacupuncture was an effective treatment for MCI [mild cognitive impairment] patients by improving cognitive function. However, the included studies presented a low methodological quality and no adverse effects were reported. Thus, further comprehensive studies with a design in depth are needed to derive significant results.
Now, you might claim that the evidence for acupuncture has overall become more positive over time, and that this explains the observed shift. Yet, I don’t see that at all. I very much fear that something else is going on, something that could be called the suspension of critical thinking.
Whenever I have asked a Chinese researcher why they only publish positive conclusions, the answer was that, in China, it would be most impolite to publish anything that contradicts the views of the researchers’ peers. Therefore, no Chinese researcher would dream of doing it, and consequently, critical thinking is dangerously thin on the ground.
I think that a similar phenomenon might be at the heart of what I observe in the Korean acupuncture literature: as long as I was there to make sure that the conclusions were adequately based on the data, the systematic reviews were OK. Once my influence had disappeared and the reviews were done exclusively by Korean researchers, the pressure to please Korean peers (and funders) became dominant. I suggest that this is why conclusions now tend to state first that the evidence is positive and only subsequently (almost as an afterthought) add that the primary trials were flimsy. The results of this phenomenon could be serious:
- progress is being stifled,
- the public is being misled,
- funds are being wasted,
- the reputation of science is being tarnished.
Of course, the only right way to express this situation goes something like this:
BECAUSE THE QUALITY OF THE PRIMARY TRIALS IS INADEQUATE, THE EFFECTIVENESS OF ACUPUNCTURE REMAINS UNPROVEN.
The journal NATURE has just published an excellent article by Andrew D. Oxman and an alliance of 24 leading scientists outlining the importance and key concepts of critical thinking in healthcare and beyond. The authors state: “the Key Concepts for Informed Choices is not a checklist. It is a starting point. Although we have organized the ideas into three groups (claims, comparisons and choices), they can be used to develop learning resources that include any combination of these, presented in any order. We hope that the concepts will prove useful to people who help others to think critically about what evidence to trust and what to do, including those who teach critical thinking and those responsible for communicating research findings.”
Here I take the liberty of citing a short excerpt from this paper:
Claims about effects should be supported by evidence from fair comparisons. Other claims are not necessarily wrong, but there is an insufficient basis for believing them.
Claims should not assume that interventions are safe, effective or certain.
- Interventions can cause harm as well as benefits.
- Large, dramatic effects are rare.
- We can rarely, if ever, be certain about the effects of interventions.
Seemingly logical assumptions are not a sufficient basis for claims.
- Beliefs alone about how interventions work are not reliable predictors of the presence or size of effects.
- An outcome may be associated with an intervention but not caused by it.
- More data are not necessarily better data.
- The results of one study considered in isolation can be misleading.
- Widely used interventions or those that have been used for decades are not necessarily beneficial or safe.
- Interventions that are new or technologically impressive might not be better than available alternatives.
- Increasing the amount of an intervention does not necessarily increase its benefits and might cause harm.
Trust in a source alone is not a sufficient basis for believing a claim.
- Competing interests can result in misleading claims.
- Personal experiences or anecdotes alone are an unreliable basis for most claims.
- Opinions of experts, authorities, celebrities or other respected individuals are not solely a reliable basis for claims.
- Peer review and publication by a journal do not guarantee that comparisons have been fair.
Studies should make fair comparisons, designed to minimize the risk of systematic errors (biases) and random errors (the play of chance).
Comparisons of interventions should be fair.
- Comparison groups and conditions should be as similar as possible.
- Indirect comparisons of interventions across different studies can be misleading.
- The people, groups or conditions being compared should be treated similarly, apart from the interventions being studied.
- Outcomes should be assessed in the same way in the groups or conditions being compared.
- Outcomes should be assessed using methods that have been shown to be reliable.
- It is important to assess outcomes in all (or nearly all) the people or subjects in a study.
- When random allocation is used, people’s or subjects’ outcomes should be counted in the group to which they were allocated.
Syntheses of studies should be reliable.
- Reviews of studies comparing interventions should use systematic methods.
- Failure to consider unpublished results of fair comparisons can bias estimates of effects.
- Comparisons of interventions might be sensitive to underlying assumptions.
Descriptions should reflect the size of effects and the risk of being misled by chance.
- Verbal descriptions of the size of effects alone can be misleading.
- Small studies might be misleading.
- Confidence intervals should be reported for estimates of effects.
- Deeming results to be ‘statistically significant’ or ‘non-significant’ can be misleading.
- Lack of evidence for a difference is not the same as evidence of no difference.
What to do depends on judgements about the problem, the relevance (applicability or transferability) of evidence available and the balance of expected benefits, harm and costs.
Problems, goals and options should be defined.
- The problem should be diagnosed or described correctly.
- The goals and options should be acceptable and feasible.
Available evidence should be relevant.
- Attention should focus on important, not surrogate, outcomes of interventions.
- There should not be important differences between the people in studies and those to whom the study results will be applied.
- The interventions compared should be similar to those of interest.
- The circumstances in which the interventions were compared should be similar to those of interest.
Expected pros should outweigh cons.
- Weigh the benefits and savings against the harm and costs of acting or not.
- Consider how these are valued, their certainty and how they are distributed.
- Important uncertainties about the effects of interventions should be reduced by further fair comparisons.
END OF QUOTE
I have nothing to add to this, except perhaps to point out how very relevant all of this, of course, is for SCAM, and to warmly recommend that you study the full text of this brilliant paper.
One of the favourite arguments of proponents of so-called alternative medicine (SCAM) is that conventional medicine is amongst the world’s biggest killers. The argument is used cleverly to discredit conventional medicine and promote SCAM. It has been shown to be wrong many times, but it nevertheless is much-loved by SCAM enthusiasts and thus refuses to disappear. Perhaps this new and important review might help instil some realism into this endless discussion? Here is its abstract:
Objective To systematically quantify the prevalence, severity, and nature of preventable patient harm across a range of medical settings globally.
Design Systematic review and meta-analysis.
Data sources Medline, PubMed, PsycINFO, Cinahl and Embase, WHOLIS, Google Scholar, and SIGLE from January 2000 to January 2019. The reference lists of eligible studies and other relevant systematic reviews were also searched.
Review methods Observational studies reporting preventable patient harm in medical care. The core outcomes were the prevalence, severity, and types of preventable patient harm reported as percentages and their 95% confidence intervals. Data extraction and critical appraisal were undertaken by two reviewers working independently. Random effects meta-analysis was employed followed by univariable and multivariable meta regression. Heterogeneity was quantified by using the I2 statistic, and publication bias was evaluated.
Results Of the 7313 records identified, 70 studies involving 337 025 patients were included in the meta-analysis. The pooled prevalence for preventable patient harm was 6% (95% confidence interval 5% to 7%). A pooled proportion of 12% (9% to 15%) of preventable patient harm was severe or led to death. Incidents related to drugs (25%, 95% confidence interval 16% to 34%) and other treatments (24%, 21% to 30%) accounted for the largest proportion of preventable patient harm. Compared with general hospitals (where most evidence originated), preventable patient harm was more prevalent in advanced specialties (intensive care or surgery; regression coefficient b=0.07, 95% confidence interval 0.04 to 0.10).
Conclusions Around one in 20 patients are exposed to preventable harm in medical care. Although a focus on preventable patient harm has been encouraged by the international patient safety policy agenda, there are limited quality improvement practices specifically targeting incidents of preventable patient harm rather than overall patient harm (preventable and non-preventable). Developing and implementing evidence-based mitigation strategies specifically targeting preventable patient harm could lead to major service quality improvements in medical care which could also be more cost effective.
One in 20 patients is undoubtedly an unacceptably high proportion, but it is nowhere close to some of the extraordinarily alarming claims by SCAM enthusiasts. And, as I try regularly to remind people, the harm must be viewed in relation to the benefit. For the vast majority of conventional treatments, the benefits outweigh the risks. But, if there is no benefit at all – as with some forms of SCAM – a risk/benefit balance can never be positive. Moreover, many experts work hard and do their very best to improve the risk/benefit balance of conventional healthcare by educating clinicians, maximising the benefits, minimising the risks, and filling the gaps in our current knowledge. Do equivalent activities exist in SCAM? The answer is: VERY FEW.
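The review’s headline numbers can be re-derived with simple arithmetic (a sketch; the input figures are the review’s, the derivations are mine): 6% is roughly 1 in 17, which the authors round to ‘around one in 20’ – consistent with their 5% to 7% confidence interval – and combining the two pooled proportions gives the share of all patients who suffer severe or fatal preventable harm.

```python
# Pooled figures from the review's abstract.
prevalence = 0.06    # pooled prevalence of preventable patient harm (6%)
severe_share = 0.12  # share of that harm that was severe or led to death (12%)

one_in_n = 1 / prevalence                   # about 17, rounded to "one in 20"
severe_overall = prevalence * severe_share  # severe/fatal harm among all patients

print(f"about 1 in {one_in_n:.0f} patients exposed to preventable harm")
print(f"severe or fatal preventable harm: {severe_overall:.1%} of all patients")
```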
Treating children is an important income stream for chiropractors and osteopaths. There is plenty of evidence to suspect that their spinal manipulations generate more harm than good; on this blog, we have discussed this problem more often than I care to remember (see for instance here, here, here, here and here). Yet, osteopaths and chiropractors carry on misleading parents to abuse their children with ineffective and dangerous spinal manipulations. A new and thorough assessment of the evidence seems to confirm this suspicion.
This systematic review evaluated the evidence for effectiveness and harms of specific SMT techniques for infants, children and adolescents. Controlled studies, describing primary SMT treatment in infants (<1 year) and children/adolescents (1-18 years), were included to determine effectiveness.
Of the 1,236 identified studies, 26 studies were eligible. Infants and children/adolescents were treated for various (non-)musculoskeletal indications, hypothesized to be related to spinal joint dysfunction. Studies examining the same population, indication and treatment comparison were scarce. The results showed that:
- Due to very low quality evidence, it is uncertain whether gentle, low-velocity mobilizations reduce complaints in infants with colic or torticollis, and whether high-velocity, low-amplitude manipulations reduce complaints in children/adolescents with autism, asthma, nocturnal enuresis, headache or idiopathic scoliosis.
- Five case reports described severe harms after HVLA manipulations in 4 infants and one child. Mild, transient harms were reported after gentle spinal mobilizations in infants and children, and could be interpreted as side effects of treatment.
The authors concluded that due to very low quality of the evidence, the effectiveness of gentle, low-velocity mobilizations in infants and HVLA manipulations in children and/or adolescents is uncertain. Assessments of intermediate outcomes are lacking in current pediatric SMT research. Therefore, the relationship between specific treatment and its effect on the hypothesized spinal dysfunction remains unclear. Gentle, low-velocity spinal mobilizations seem to be a safe treatment technique. Although scarcely reported, HVLA manipulations in infants and young children could lead to severe harms. Severe harms were likely to be associated with unexamined or missed underlying medical pathology. Nevertheless, there is a need for high quality research to increase certainty about effectiveness and safety of specific SMT techniques in infants, children and adolescents. We encourage conduction of controlled studies that focus on the effectiveness of specific SMT techniques on spinal dysfunction, instead of concluding about SMT as a general treatment approach. Large observational studies could be conducted to monitor the course of complaints/symptoms in children and to gain a greater understanding of potential harms.
The situation regarding spinal manipulation for children might be summarised as follows:
- Spinal manipulations are not demonstrably effective for paediatric conditions.
- They can cause serious direct and indirect harm.
- Chiropractors and osteopaths are not usually competent to treat children.
- They nevertheless treat children regularly.
In my view, this is unethical and can amount to child abuse.
George Vithoulkas has been mentioned on this blog repeatedly. He is a lay homeopath – one without any medical background – who has, over the years, become an undisputed hero within the world of homeopathy. Yet, Vithoulkas’ contribution to homeopathy research is perilously close to zero. Judging from a recent article in which he outlines the rules of rigorous research, his understanding of research methodology is even closer to zero. Here is a crucial excerpt from this paper, interspersed with a few comments from me in brackets and bold print.
Which are [the] homoeopathic principles to be respected [in clinical trials and meta-analyses]?
1. Homoeopathy does not treat diseases, but only diseased individuals. Therefore, every case may need a different remedy although the individuals may be suffering from the same pathology. This rule was violated by almost all the trials in most meta-analyses. (This statement is demonstrably false; there even has been a meta-analysis of 32 trials that respect this demand)
2. In the homoeopathic treatment of serious chronic pathology, if the remedy is correct usually a strong initial aggravation takes place [14–16]. Such an aggravation may last from a few hours to a few weeks and even then we may have a syndrome-shift and not the therapeutic results expected. If the measurements take place in the aggravation period, the outcome will be classified negative. (Homeopathic aggravations exist only in the mind of homeopaths; our systematic review failed to find proof for their existence.)
This factor was also ignored in most trials. At least sufficient time should be given in the design of the trial, in order to account for the aggravation period. The contrary happened in a recent study, where the aggravation period was evaluated as a negative sign and the homoeopathic group was pronounced worse than the placebo. (There are plenty of trials where the follow-up period is long enough to account for this [non-existing] phenomenon.)
3. In severe chronic conditions, the homoeopath may need to correctly prescribe a series of remedies before the improvement is apparent. Such a second or third prescription should take place only after evaluating the effects of the previous remedies. Again, this rule has also been ignored in most studies. (Again, this is demonstrably wrong; there are many trials where the homeopath was able to adjust his/her prescription according to the clinical response of the patient.)
4. As the prognosis of a chronic condition and the length of time after which any amelioration sets in may differ from one case to another, the treatment and the study-design respectively should take into consideration the length of time the disease was active and also the severity of the case. (This would mean that conditions that have a short history, like post-operative ileus, bruising after injury, common cold, etc. should respond well after merely a short treatment with homeopathics. As this is not so, Vithoulkas’ argument seems to be invalid.)
5. In our experience, Homeopathy has its best results in the beginning stages of chronic diseases, where it might be possible to prevent the further development of the chronic state and this is its most important contribution. Examples of pathologies to be included in such RCTs are ulcerative colitis, sinusitis, asthma, allergic conditions, eczema, gangrene, rheumatoid arthritis, as long as they are within the first six months of their appearance. (Why then is there a lack of evidence that any of the named conditions respond to homeopathy?)
In conclusion, three points should be taken into consideration relating to trials that attempt to evaluate the effectiveness of homoeopathy.
First, it is imperative that from the point of view of homoeopathy, the above-mentioned principles should be discussed with expert homoeopaths before researchers undertake the design of any homoeopathic protocol. (I am not aware of any trial where this was NOT done!)
Second, it would be helpful if medical journals invited more knowledgeable peer-reviewers who understand the principles of homoeopathy. (I am not aware of any trial where this was NOT done!)
Third, there is a need for at least one standardized protocol for clinical trials that will respect not only the state-of-the-art parameters from conventional medicine but also the homoeopathic principles . (Any standardised protocol would be severely criticised; a good study protocol must always take account of the specific research question and therefore cannot be standardised.)
Fourth, experience so far has shown that the therapeutic results in homeopathy vary according to the expertise of the practitioner. Therefore, if the objective is to validate the homeopathic therapeutic modality, the organizers of the trial have to pick the best possible prescribers existing in the field. (I am not aware of any trial where this was NOT done!)
Only when these points are transposed and put into practice, the trials will be respected and accepted by both homoeopathic practitioners and conventional medicine and can be eligible for meta-analysis.
I suspect what the ‘GREAT VITHOULKAS’ really wanted to express are ‘THE TWO ESSENTIAL PRINCIPLES OF HOMEOPATHY RESEARCH’:
- A well-designed study of homeopathy can always be recognised by its positive result.
- Any trial that fails to yield a positive finding is, by definition, wrongly designed.
This press-release caught my attention:
Following the publication in Australia earlier this year of a video showing a chiropractor treating a baby, the Health Minister for the state of Victoria called for the prohibition of chiropractic spinal manipulation for children under the age of 12 years. As a result, an independent panel has been appointed by Safer Care Victoria to examine the evidence and provide recommendations for the chiropractic care of children.
The role of the panel is to (a) examine and assess the available evidence, including information from consumers, providers, and other stakeholders, for the use of spinal manipulation by chiropractors on children less than 12 years of age and (b) provide recommendations regarding this practice to the Victorian Minister for Health.
Members of the public and key stakeholders, including the WFC’s member for Australia, the Australia Chiropractors Association (AusCA), were invited to submit observations. The AusCA’s submission can be read here…
This submission turns out to be lengthy and full of irrelevant platitudes, repetitions and nonsense. In fact, it is hard to find in it any definitive statements at all. Here are two sections (both in bold print) which I found noteworthy:
1. There is no need to restrict parental or patient choice for chiropractic care for children under 12 years of age as there is no evidence of harm. There is however, expressed outcome of benefit by parents [70] who actively choose chiropractic care for their children …
No evidence of harm? Really! This is an outright lie. Firstly, one has to stress that there is no monitoring system and that therefore we simply do not learn about adverse effects. Secondly, there is no reason to assume that the adverse effects that have been reported in adults are not also relevant for children. Thirdly, adverse effects in children have been reported; see for instance here. Fourthly, we need to be aware of the fact that any ineffective therapy causes harm by preventing effective therapies from being applied. And fifthly, we need to remember that some chiropractors harm children by advising their parents against vaccination.
2. Three recent systematic reviews have focused on the effectiveness of manual therapy for paediatric conditions. For example, Lanaro et al. assessed osteopathic manipulative treatment for use on preterm infants. This systematic review looked at five clinical trials and found a reduction of length of stay and costs in a large population of preterm infants with no adverse events (96).
Carnes et al.’s 2018 systematic review focused on unsettled, distressed and excessively crying infants following any type of manual therapy. Of the seven clinical trials included, five involved chiropractic manipulative therapy; however, meta-analyses of outcomes were not possible due to the heterogeneity of the clinical trials. The review also analysed an additional 12 observational studies: seven case series, three cohort studies, one service evaluation survey, and one qualitative study. Overall, the systematic review concluded that small benefits were found. Additionally, the reporting of adverse events was low. Interestingly, when a relative risk analysis was done, those who had manual therapy were found to have an 88% reduced risk of having an adverse event compared to those who did not have manual therapy (97).
A third systematic review, by Parnell Prevost et al. in 2019, evaluated the effectiveness of manual therapy of any type for any paediatric condition and summarizes the findings of studies of children 18 years of age or younger, as well as all adverse event information. While mostly inconclusive data were found due to a lack of high-quality studies, of the 32 clinical trials and 18 observational studies included, favourable outcomes were found for all age groups, including improvements in suboptimal breastfeeding and musculoskeletal conditions. Adverse events were mentioned in only 24 of the included studies, with no serious adverse events reported in them (98).
(96) Lanaro D, Ruffini N, Manzotti A, Lista G. Osteopathic manipulative treatment showed reduction of length of stay and costs in preterm infants: A systematic review and meta-analysis. Medicine (Baltimore). 2017; 96(12):e6408 10.1097/MD.0000000000006408.
(97) Carnes D, Plunkett A, Ellwood J, Miles C. Manual therapy for unsettled, distressed and excessively crying infants: a systematic review and meta-analyses. BMJ Open 2018;8:e019040. doi:10.1136/bmjopen-2017-019040.
(98) Parnell Prevost et al. 2019.
And here are my comments:
(96) Lanaro et al. is about osteopathy, not chiropractic (4 of the 5 primary trials were by the same research group).
(97) The review by Carnes et al has been discussed previously on this blog. This is what I wrote about it at the time:
The authors concluded that some small benefits were found, but whether these are meaningful to parents remains unclear as does the mechanisms of action. Manual therapy appears relatively safe.
For several reasons, I find this review, although technically sound, quite odd.
- Why review uncontrolled data when RCTs are available?
- How can a qualitative study be rated as high quality for assessing the effectiveness of a therapy?
- How can the authors categorically conclude that there were benefits when there were only 4 RCTs of high quality?
- Why do they not explain the implications of none of the RCTs being placebo-controlled?
- How can anyone pool the results of all types of manual therapies which, as most of us know, are highly diverse?
- How can the authors conclude about the safety of manual therapies when most trials failed to report on this issue?
- Why do they not point out that this is unethical?
My greatest general concern about this review is the overt lack of critical input. A systematic review is not a means of promoting an intervention but of critically assessing its value. This void of critical thinking is palpable throughout the paper. In the discussion section, for instance, the authors state that “previous systematic reviews from 2012 and 2014 concluded there was favourable but inconclusive and weak evidence for manual therapy for infantile colic”. They mention two reviews to back up this claim. They conveniently forget my own review of 2009 (the first on this subject). Why? Perhaps because it did not fit their preconceived ideas? Here is my abstract:
Some chiropractors claim that spinal manipulation is an effective treatment for infant colic. This systematic review was aimed at evaluating the evidence for this claim. Four databases were searched and three randomised clinical trials met all the inclusion criteria. The totality of this evidence fails to demonstrate the effectiveness of this treatment. It is concluded that the above claim is not based on convincing data from rigorous clinical trials.
Towards the end of their paper, the authors state that “this was a comprehensive and rigorously conducted review…” I beg to differ; it turned out to be uncritical and biased, in my view. And at the very end of the article, we learn a possible reason for this phenomenon: “CM had financial support from the National Council for Osteopathic Research from crowd-funded donations.”
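Incidentally, the 88% figure quoted from the Carnes review is simply a relative risk (RR) re-expressed as a relative risk reduction (1 − RR). A minimal sketch with made-up counts (the numbers below are hypothetical, not the review’s data):

```python
def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of group A versus group B."""
    return (events_a / n_a) / (events_b / n_b)

# Hypothetical counts: 3/250 adverse events with manual therapy
# versus 25/250 without, i.e. 1.2% versus 10% risk.
rr = relative_risk(3, 250, 25, 250)
reduction = 1 - rr  # relative risk reduction

print(f"RR = {rr:.2f}, i.e. an {reduction:.0%} relative risk reduction")
```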
(98) Parnell et al. was easy to find despite the incomplete reference in the submission. This paper, too, has been discussed previously. Here is my post on it:
This systematic review is an attempt [at] … evaluating the use of manual therapy for clinical conditions in the paediatric population, assessing the methodological quality of the studies found, and synthesizing findings based on health condition.
Of the 3563 articles identified through various literature searches, 165 full articles were screened, and 50 studies (32 RCTs and 18 observational studies) met the inclusion criteria. Only 18 studies were judged to be of high quality. Conditions evaluated were:
- attention deficit hyperactivity disorder (ADHD),
- cerebral palsy,
- cranial asymmetry,
- cuboid syndrome,
- infantile colic,
- low back pain,
- obstructive apnoea,
- otitis media,
- paediatric dysfunctional voiding,
- paediatric nocturnal enuresis,
- postural asymmetry,
- preterm infants,
- pulled elbow,
- suboptimal infant breastfeeding,
- temporomandibular dysfunction,
- upper cervical dysfunction.
Musculoskeletal conditions, including low back pain and headache, were evaluated in seven studies. Only 20 studies reported adverse events.
The authors concluded that fifty studies investigated the clinical effects of manual therapies for a wide variety of pediatric conditions. Moderate-positive overall assessment was found for 3 conditions: low back pain, pulled elbow, and premature infants. Inconclusive unfavorable outcomes were found for 2 conditions: scoliosis (OMT) and torticollis (MT). All other condition’s overall assessments were either inconclusive favorable or unclear. Adverse events were uncommonly reported. More robust clinical trials in this area of healthcare are needed.
There are many things that I find remarkable about this review:
- The list of indications for which studies have been published confirms the notion that manual therapists – especially chiropractors – regard their approach as a panacea.
- A systematic review evaluating the effectiveness of a therapy that includes observational studies without a control group is, in my view, highly suspect.
- Many of the RCTs included in the review are meaningless; for instance, if a trial compares the effectiveness of two different manual therapies none of which has been shown to work, it cannot generate a meaningful result.
- Again, we find that the majority of trialists fail to report adverse effects. This is unethical to a degree that I lose faith in such studies altogether.
- Only three conditions are, according to the authors, based on evidence. This is hardly enough to sustain an entire speciality of paediatric chiropractors.
Allow me to have a closer look at these three conditions.
- Low back pain: the verdict ‘moderate positive’ is based on two RCTs and two observational studies. The latter are irrelevant for evaluating the effectiveness of a therapy. One of the two RCTs should have been excluded because the age of the patients exceeded the age range named by the authors as an inclusion criterion. This leaves us with one single ‘medium quality’ RCT that included a mere 35 patients. In my view, it would be foolish to base a positive verdict on such evidence.
- Pulled elbow: here the verdict is based on one RCT that compared two different approaches of unknown value. In my view, it would be foolish to base a positive verdict on such evidence.
- Preterm: Here we have 4 RCTs; one was a mere pilot study of craniosacral therapy following the infamous A+B vs B design. The other three RCTs were all from the same Italian research group; their findings have never been independently replicated. In my view, it would be foolish to base a positive verdict on such evidence.
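To see why the A+B vs B design mentioned above is so problematic, here is a minimal simulation (all numbers are hypothetical and for illustration only, not data from any of the trials): even when treatment A has zero specific effect, the nonspecific effects of receiving it (extra attention, touch, expectation) ensure that the A+B group comes out ahead of B alone.

```python
import random
import statistics

random.seed(42)

N = 200  # hypothetical patients per arm

# Arm receiving B alone (usual care): mean improvement of 5 points
# on some symptom scale, SD 2
b_arm = [random.gauss(5.0, 2.0) for _ in range(N)]

# Arm receiving A+B: treatment A has ZERO specific effect, but the
# extra attention/expectation it brings adds ~1.5 points of
# nonspecific improvement on top of B
ab_arm = [random.gauss(5.0 + 1.5, 2.0) for _ in range(N)]

diff = statistics.mean(ab_arm) - statistics.mean(b_arm)
print(f"mean difference (A+B minus B): {diff:.2f}")
```

Because any nonspecific effect of A is added to one arm only, a "positive" result is virtually guaranteed; such a trial therefore cannot tell us anything about the specific effectiveness of A.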
So, what can be concluded from this?
I would say that there is no good evidence for chiropractic, osteopathic or other manual treatments for children suffering from any condition.
The ACA’s submission ends with the following conclusion:
The Australian Chiropractors Association (ACA) intent is to improve the general health of all Australians and the ACA supports the following attributes to achieve this:
- The highest standards of ethics and conduct in all areas of research, education and practise
- Chiropractors as the leaders in high quality spinal health and wellbeing
- A commitment to evidence-based practice – the integration of best available research evidence, clinical expertise and patient values
- The profound significance and value of patient-centred chiropractic care in healthcare in Australia.
- Inclusiveness and collaborative relationships within and outside the chiropractic profession…
After reading through the entire, tedious document, I arrived at the conclusion that
THIS SUBMISSION CAN ONLY BE A CALL FOR THE PROHIBITION OF CHIROPRACTIC SPINAL MANIPULATION FOR CHILDREN.
‘Acute-on-chronic liver failure’ (ACLF) is an acute deterioration of liver function in patients with pre-existing liver disease. It is usually associated with a precipitating event and results in the failure of one or more organs and high short-term mortality.
An international team of researchers published an analysis examining data on drugs that cause ACLF. They evaluated clinical features, laboratory characteristics, outcomes, and predictors of mortality in patients with drug-induced ACLF. They identified drugs as precipitants of ACLF in a prospective cohort of patients with ACLF from the Asian Pacific Association of Study of Liver (APASL) ACLF Research Consortium (AARC) database. Drugs were considered precipitants after exclusion of known causes together with a temporal association between exposure and decompensation. Outcome was defined as death from decompensation.
Of the 3,132 patients with ACLF, drugs were implicated as a cause in 10.5% of all cases and other non-drug causes in 89.5%. Within the first group, so-called alternative medications (SCAMs) were the commonest cause (71.7%), followed by combination anti-tuberculosis therapy drugs (27.3%). Alcoholic liver disease (28.6%), cryptogenic liver disease (25.5%), and non-alcoholic steatohepatitis (NASH) (16.7%) were common causes of underlying liver diseases. Patients with drug-induced ACLF had jaundice (100%), ascites (88%), encephalopathy (46.5%), high Model for End-Stage Liver Disease (MELD) (30.2), and Child-Turcotte-Pugh score (12.1). The overall 90-day mortality was higher in drug-induced (46.5%) than in non-drug-induced ACLF (38.8%).
The authors concluded that drugs are important identifiable causes of ACLF in Asia-Pacific countries, predominantly from complementary and alternative medications, followed by anti-tuberculosis drugs. Encephalopathy, bilirubin, blood urea, lactate, and international normalized ratio (INR) predict mortality in drug-induced ACLF.
Systematic literature searches were performed on Medline, Embase, The Cochrane Library, Amed and Ciscom. To identify additional data, searches were conducted by hand in relevant medical journals and in our own files. The screening and selection of articles and the extraction of data were performed independently by the two authors. There were no restrictions regarding the language of publication. In order to be included, articles were required to report data on hepatotoxic events associated with the therapeutic use of herbal medicinal products.
Single medicinal herbs and combination preparations are associated with hepatotoxic events. Clinically, the spectrum ranges from transient elevations of liver enzyme levels to fulminant liver failure and death. In most instances hepatotoxic herbal constituents are believed to be the cause, while others may be due to herb-drug interactions, contamination and/or adulteration.
A number of herbal medicinal products are associated with serious hepatotoxic events. Incidence figures are largely unknown, and in most cases a causal attribution is not established. The challenge for the future is to systematically research this area, educate all parties involved, and minimize patient risk.
Despite these warnings, progress is almost non-existent. If anything, the problem seems to grow in proportion with the rise in the use of SCAM. Hence, one cannot but agree with the conclusion of a more recent overview: The actual incidence and prevalence of herb-induced liver injury in developing nations remain largely unknown due to both poor pharmacovigilance programs and non-application of emerging technologies. Improving education and public awareness of the potential risks of herbals and herbal products is desirable to ensure that suspected adverse effects are formally reported. There is need for stricter regulations and pre-clinical studies necessary for efficacy and safety.
“Eating elderberries can help minimise influenza symptoms.” This statement comes from a press release by the University of Sydney. As it turned out, the announcement was not just erroneous; it also concealed that the in-vitro study on which the press release was based had been part-funded by the very company, Pharmacare, which sells elderberry-based flu remedies.
“This is an appalling misrepresentation of this Pharmacare-funded in-vitro study,” said associate professor Ken Harvey, president of Friends of Science in Medicine. “It was inappropriate and misleading to imply from this study that an extract was ‘proven to fight flu’.” A University of Sydney spokeswoman confirmed Pharmacare was shown a copy of the press release before it was published.
This is an embarrassing turn of events, no doubt. But what about elderberry (Sambucus nigra) and the flu? Is there any evidence?
A systematic review quantified the effects of elderberry supplementation. Supplementation with elderberry was found to substantially reduce upper respiratory symptoms. The quantitative synthesis of the effects yielded a large mean effect size. The authors concluded that these findings present an alternative to antibiotic misuse for upper respiratory symptoms due to viral infections, and a potentially safer alternative to prescription drugs for routine cases of the common cold and influenza.
The alternative to antibiotic misuse can only be the correct use of antibiotics. And, in the case of viral infections such as the flu, this can only be the non-use of antibiotics. My trust in this review, published in a SCAM journal of dubious repute, has instantly dropped to zero.
Perhaps an overview recently published in THE MEDICAL LETTER provides a more trustworthy picture:
No large randomized, controlled trials evaluating the effectiveness of elderberry for prevention or treatment of influenza have been conducted to date. Elderberry appears to have some activity against influenza virus strains in vitro. In two small studies (conducted outside the US), adults with influenza A or B virus infection taking elderberry extract reported a shorter duration of symptoms compared to those taking placebo. Consuming uncooked blue or black elderberries can cause nausea and vomiting. The rest of the plant (bark, stems, leaves, and root) contains sambunigrin, which can release cyanide. No data are available on the safety of elderberry use during pregnancy or while breastfeeding. CONCLUSION — Prompt treatment with an antiviral drug such as oseltamivir (Tamiflu, and generics) has been shown to be effective in large randomized, controlled trials in reducing the duration of influenza symptoms, and it may reduce the risk of influenza-related complications. There is no acceptable evidence to date that elderberry is effective for prevention or treatment of influenza and its safety is unclear.
Any take-home messages?
- Elderberry supplements are not of proven effectiveness against the flu.
- The press officers at universities should be more cautious when writing press-releases.
- They should involve the scientists and avoid the sponsors of the research.
- In-vitro studies can never tell us anything about clinical effectiveness.
- SCAM-journals’ articles must be taken with a pinch of salt.
- Consumers are being misled left, right and centre.
Radix Salviae Miltiorrhizae (Danshen) is a herbal remedy that is part of many TCM herbal mixtures. Allegedly, Danshen has been used in clinical practice for over 2000 years.
But is it effective?
The aim of this systematic review was to evaluate the currently available evidence on Danshen for the treatment of cancer. English and Chinese electronic databases (PubMed, the Cochrane Library, EMBASE, the China National Knowledge Infrastructure (CNKI), the VIP database and the Wanfang database) were searched up to September 2018. The methodological quality of the included studies was evaluated using Cochrane methodology.
Thirteen RCTs with 1045 participants were identified. The studies investigated lung cancer (n = 5), leukemia (n = 3), liver cancer (n = 3), breast or colon cancer (n = 1), and gastric cancer (n = 1). A total of 83 traditional Chinese medicines were used across all prescriptions, in three different dosage forms. The meta-analysis suggested that Danshen formulae had a significant effect on the response rate (RR) (OR 2.38, 95% CI 1.66-3.42), 1-year survival (OR 1.70, 95% CI 1.22-2.36), 3-year survival (OR 2.78, 95% CI 1.62-4.78), and 5-year survival (OR 8.45, 95% CI 2.53-28.27).
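For readers unfamiliar with the statistics quoted above, here is a brief sketch of how an odds ratio (OR) and its 95% confidence interval are derived from a single trial's 2×2 table. The counts below are purely illustrative and are not taken from the review:

```python
import math

# Hypothetical 2x2 table for one trial (illustrative numbers only):
#                  responders   non-responders
a, b = 60, 40   # treatment arm
c, d = 40, 60   # control arm

or_ = (a * d) / (b * c)                   # odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se)  # 95% CI, lower bound
hi = math.exp(math.log(or_) + 1.96 * se)  # 95% CI, upper bound
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

An OR above 1 with a confidence interval excluding 1 counts as "statistically significant"; but, as the points below show, significance alone says nothing about the quality of the underlying trials.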
The authors concluded that the current research results showed that Danshen formulae combined with chemotherapy for cancer treatment was better than conventional drug treatment plan alone.
I am getting a little tired of discussing systematic reviews of so-called alternative medicine (SCAM) that are little more than promotion, free of good science. But, because such articles seriously endanger the lives of many patients, I do nevertheless succumb occasionally. So here are a few points to explain why the conclusions of the Chinese authors are nonsense:
- Even though the authors claim the trials included in their review were of high quality, most were, in fact, flimsy.
- The trials used no less than 83 different herbal mixtures of dubious quality containing Danshen. It is therefore not possible to define which mixture worked and which did not.
- There is no detailed discussion of the adverse effects and no mention of possible herb-drug interactions.
- There seemed to be a sizable publication bias hidden in the data.
- All the eligible studies were conducted in China, and we know that such trials are unreliable to say the least.
- Only four articles were published in English which means those of us who cannot read Chinese are unable to check the correctness of the data extraction of the review authors.
I know it sounds terribly chauvinistic, but I truly believe that we should simply ignore Chinese articles whenever they have defects that set our alarm bells ringing; if we don't, we are likely to do a significant disservice to healthcare and progress.