On this blog, I have often pointed out how dismally poor the trials of alternative therapies frequently are, particularly those in the realm of chiropractic. A brand-new study seems to prove my point.
The aim of this trial was to determine whether spinal manipulative therapy (SMT) plus home exercise and advice (HEA), compared with HEA alone, reduces leg pain in the short and long term in adults with sub-acute and chronic back-related leg pain (BRLP).
Patients aged 21 years or older with BRLP for at least 4 weeks were randomised to receive 12 weeks of SMT plus HEA or HEA alone. Eleven chiropractors with a minimum of 5 years of practice experience delivered SMT in the SMT plus HEA group. The primary outcome was subjective BRLP at 12 and 52 weeks. Secondary outcomes were self-reported low back pain, disability, global improvement, satisfaction, medication use, and general health status at 12 and 52 weeks.
Of the 192 enrolled patients, 191 (99%) provided follow-up data at 12 weeks and 179 (93%) at 52 weeks. For leg pain, SMT plus HEA had a clinically important advantage over HEA (difference, 10 percentage points [95% CI, 2 to 19]; P = 0.008) at 12 weeks but not at 52 weeks (difference, 7 percentage points [CI, -2 to 15]; P = 0.146). Nearly all secondary outcomes improved more with SMT plus HEA at 12 weeks, but only global improvement, satisfaction, and medication use had sustained improvements at 52 weeks. No serious treatment-related adverse events or deaths occurred.
The authors conclude that, for patients with BRLP, SMT plus HEA was more effective than HEA alone after 12 weeks, but the benefit was sustained only for some secondary outcomes at 52 weeks.
This is yet another pragmatic trial following the notorious and increasingly popular A+B versus B design. As pointed out repeatedly on this blog, this study design can hardly ever generate a negative result (A+B is always more than B, unless A has a negative value, which even placebos do not have). Thus it is not a true test of the experimental treatment but merely an exercise to create a positive finding for a potentially useless treatment. Had the investigators used any mildly pleasant placebo instead of SMT, the result would have been much the same. In this way, they could create results showing that receiving a £10 cheque, or meeting with pleasant company every other day, together with HEA, is more effective than HEA alone. The conclusion that SMT, the cheque or the company has specific effects is as implicit in this article as it is potentially wrong.
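The arithmetic behind this objection can be illustrated with a small simulation (all numbers are hypothetical, purely for illustration): even if treatment A has zero specific efficacy and contributes only a nonspecific boost (attention, expectation, placebo), the A+B group will reliably outperform B alone.

```python
import random

random.seed(1)

def pain_reduction(nonspecific_boost):
    # Baseline improvement from HEA alone (hypothetical: mean 20 points, SD 10),
    # plus whatever nonspecific boost the added treatment brings.
    return random.gauss(20 + nonspecific_boost, 10)

n = 100
hea_alone = [pain_reduction(0) for _ in range(n)]
# "A" adds only a nonspecific effect of 8 points -- no specific efficacy at all.
a_plus_hea = [pain_reduction(8) for _ in range(n)]

mean_diff = sum(a_plus_hea) / n - sum(hea_alone) / n
print(f"A+B minus B: {mean_diff:.1f} points in favour of A+B")
```

With any positive nonspecific effect, the comparison comes out in favour of A+B; the design cannot distinguish this from a genuine specific effect of A.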
The authors claim that their study was limited because patient-blinding was not possible. This is not entirely true, I think; it was limited mostly because it failed to point out that the observed outcomes could be and most likely are due to a whole range of factors which are not directly related to SMT and, most crucially, because its write-up, particularly the conclusions, wrongly implied cause and effect between SMT and the outcome. A more accurate conclusion could have been as follows: SMT plus HEA was more effective than HEA alone after 12 weeks, but the benefit was sustained only for some secondary outcomes at 52 weeks. Because the trial design did not control for non-specific effects, the observed outcomes are consistent with SMT being an impressive placebo.
No such critical thought can be found in the article; on the contrary, the authors claim in their discussion section that the current trial adds to the much-needed evidence base about SMT for subacute and chronic BRLP. Such phraseology is designed to mislead decision makers and get SMT accepted as a treatment of conditions for which it is not necessarily useful.
Research where the result is known before the study has even started (studies with an A+B versus B design) is not just useless; it is, in my view, unethical: it fails to answer a real question and is merely a waste of resources as well as an abuse of patients' willingness to participate in clinical trials. But the authors of this new trial are in good and numerous company: in the realm of alternative medicine, such pseudo-research is currently being published almost on a daily basis. What is relatively new, however, is that even some of the top journals are beginning to fall victim to this incessant stream of nonsense.
EE stated: "I think; it was limited mostly because it failed to point out that the observed outcomes could be and most likely are due to a whole range of factors which are not directly related to SMT".
Would those factors not be equally applicable to both groups and covered in the baseline characteristics (Table 1)?
There are also the pilot studies that address some of the issues you pointed out:
Nonoperative treatments for sciatica: a pilot study for a randomized clinical trial.
Spinal manipulation, epidural injections, and self-care for sciatica: a pilot study for a randomized clinical trial.
They also state here:
“The magnitude of 10 percentage points for the group differences of the primary outcome, leg pain, translates to a medium effect size of 0.6 (60) in favor of SMT plus HEA, which is considered clinically important. Further, we saw consistent statistically significant and clinically important group differences for nearly all other outcomes in the short term and for some secondary outcomes in the long term in favor of SMT plus HEA, including global improvement, an important and recommended patient-centered outcome (45, 61). Group differences in the responder analyses for patient-rated leg pain consistently favored SMT plus HEA. The SMT plus HEA group had less aggravation of leg pain. Of importance, patients receiving SMT plus HEA used less medication during the treatment phase and at the 52-week follow-up. On the basis of these factors, we consider the group differences in aggregate in this trial to be clinically important, consistently favoring SMT plus HEA over HEA alone, especially in the short term.”
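As an aside, the quoted conversion of a 10-percentage-point group difference into an effect size of 0.6 is simply Cohen's d (group difference divided by pooled standard deviation). Working backwards, the two figures imply a pooled SD of roughly 17 points on the 0–100 leg-pain scale; the SD itself is not quoted here, so this is an inference, not a reported value:

```python
# Cohen's d: standardized mean difference = group difference / pooled SD.
difference = 10.0   # reported group difference, percentage points
effect_size = 0.6   # reported "medium" effect size (Cohen's d)

# Rearranging d = difference / sd gives the pooled SD these two figures imply.
implied_sd = difference / effect_size
print(f"implied pooled SD: {implied_sd:.1f} points")  # ~16.7
```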
An interesting recent paper discussing research into chronic pain:
Research designs for proof-of-concept chronic pain clinical trials: IMMPACT recommendations.
The Bronfort study ticks most of the boxes they discuss in designing a chronic pain trial.
I would have thought that publishing in highly respected, peer-reviewed medical journals would be applauded by you? Are you saying that they have lowered their standards in accepting this paper, and if so, have you written a letter to the editors, as you have done previously for one of Bronfort's papers in the same journal?
Spinal manipulation, medication, or home exercise with advice for acute and subacute neck pain: a randomized trial.
Acute and subacute neck pain – E. Ernst
The experimental group would benefit from nonspecific factors that the controls were not exposed to, including:
extra TLC from the chiros [who must have had an interest in a positive outcome]
the placebo effect of SMT
the patients' own expectations of SMT
I applaud good research, but this study is lousy.
The chiropractors who provided SMT could not be blinded, nor could the patients, who were all given the exercises (impossible to blind, as they are actively involved in the exercises), but the researchers who gathered all the data were blinded. The HEA group each had four hour-long sessions that individualized their exercises, with follow-up encouragement to promote adherence. So there was a large amount of patient interaction in both groups, and a placebo effect in both! Additionally, the functional measures used on both groups would be useful, as placebo would have less of an impact on them than on the pain rating scale. The long-term follow-up is discussed here:
"The primary outcome variable, patient-rated leg pain, was modeled with mixed-effects regression over baseline (the average value obtained at the 2 baseline visits) and 3, 12, 26, and 52 weeks. After assuming that group means were the same at baseline, the additional terms in the model were time (as a categorical variable) and site-by-group and time-by-group interactions. The site-by-group interaction was removed if it was not significant at the 0.05 level. Because we tested between-group differences at 2 primary end points, we used the Bonferroni method to control for 2 tests. Responder analyses were done for pain reduction of 50%, 75%, and 100% at the end of treatment at 12 weeks and at the 52-week follow-up (55–57). The differences in proportions between groups were calculated for patients with data at each end point based on each criterion, and 95% CIs were based on the Wilson score method (58). The secondary outcome variables, patient-rated LBP scores, disability scores, SF-36 physical and mental health component scores, global improvement scores, and satisfaction scores, were analyzed with the same methods as patient-rated leg pain but without controlling for multiple testing. Two approaches were used for sensitivity analyses to examine the possible effects of missing data on the results (Appendix and Appendix Tables 1 and 2, available at http://www.annals.org)."
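For readers unfamiliar with the Wilson score method mentioned above: it is a standard way of putting a confidence interval around a proportion (such as the share of responders achieving 50% pain reduction) that behaves better than the naive normal approximation for small samples. A minimal sketch, with made-up responder counts rather than the trial's actual data:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical example: 40 of 95 patients achieving >=50% pain reduction.
lo, hi = wilson_ci(40, 95)
print(f"responder proportion: {40/95:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Unlike the normal approximation, the Wilson interval always stays within [0, 1] and remains sensible even when the observed proportion is near 0 or 1.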
They also discuss the study's limits here:
"The study is limited by the inability to blind patients and providers to the nature of the treatments and differentiate between the specific treatment effects and contextual (nonspecific) effects (such as patient–provider interactions). Qualitative data collection examining patients' perspectives will shed more light on these issues and are planned for future publications. This study was not designed to assess the effectiveness of SMT alone. Although that is a worthwhile question, this trial was intentionally pragmatic in nature, comparing the relative clinical effectiveness of commonly used treatment approaches by approximating how they are delivered in practice".
I know all this, but the facts remain:
1) there is no way this study could have generated a negative result
2) the conclusions are highly misleading
3) it is possible to provide sham SMT such that patients are blinded
Why are top journals falling for these poorly done studies? Is it a lack of education on the part of the editors? Are they so eager to appear fair-minded towards CAM that they are willing to overlook these seemingly glaring flaws? Are they so in need of studies to publish that they are, perhaps without knowing it, letting their guard down? Whatever the cause, what can be done to show these journal editors that they are allowing flimsy studies to be treated as rigorously done research?
My subsequent post is an attempt to explain the methodological problems of this study design in more detail.