Mindfulness-based stress reduction (MBSR) has not been rigorously evaluated as a treatment of chronic low back pain. According to its authors, this RCT was aimed at evaluating “the effectiveness for chronic low back pain of MBSR vs cognitive behavioural therapy (CBT) or usual care.”
The investigators randomly assigned patients to receive MBSR (n = 116), CBT (n = 113), or usual care (n = 113). CBT meant training to change pain-related thoughts and behaviours and MBSR meant training in mindfulness meditation and yoga. Both were delivered in 8 weekly 2-hour groups. Usual care included whatever care participants received.
Coprimary outcomes were the percentages of participants with clinically meaningful (≥30%) improvement from baseline in functional limitations (modified Roland Disability Questionnaire [RDQ]; range, 0-23) and in self-reported back pain bothersomeness (scale, 0-10) at 26 weeks. Outcomes were also assessed at 4, 8, and 52 weeks.
There were 342 randomized participants with a mean duration of back pain of 7.3 years. They attended 6 or more of the 8 sessions; 294 patients completed the study at 26 weeks, and 290 completed it at 52 weeks. In intent-to-treat analyses at 26 weeks, the percentage of participants with clinically meaningful improvement on the RDQ was higher for those who received MBSR (60.5%) and CBT (57.7%) than for usual care (44.1%) (RR for CBT vs usual care, 1.31 [95% CI, 1.01-1.69]). The percentage of participants with clinically meaningful improvement in pain bothersomeness at 26 weeks was 43.6% in the MBSR group and 44.9% in the CBT group, vs 26.6% in the usual care group (RR for CBT vs usual care, 1.69 [95% CI, 1.18-2.41]). Findings for MBSR persisted with little change at 52 weeks for both primary outcomes.
The authors concluded that among adults with chronic low back pain, treatment with MBSR or CBT, compared with usual care, resulted in greater improvement in back pain and functional limitations at 26 weeks, with no significant differences in outcomes between MBSR and CBT. These findings suggest that MBSR may be an effective treatment option for patients with chronic low back pain.
At first glance, this seems like a well-conducted study. It was conducted by one of the leading back pain research teams and was published in a top journal. It will therefore have considerable impact. However, on closer examination, I have serious doubts about certain aspects of this trial. In my view, both the aims and the conclusions of this RCT are quite simply wrong.
The authors state that they aimed at evaluating “the effectiveness for chronic low back pain of MBSR vs cognitive behavioural therapy (CBT) or usual care.” This is not just misleading, it is wrong! The correct aim should have been to evaluate “the effectiveness for chronic low back pain of MBSR plus usual care vs cognitive behavioural therapy plus usual care or usual care alone.” One has to go into the method section to find the crucial statement: “All participants received any medical care they would normally receive.”
Consequently, the conclusions are equally wrong. They should have read as follows: Among adults with chronic low back pain, treatment with MBSR plus usual care or CBT plus usual care, compared with usual care alone, resulted in greater improvement in back pain and functional limitations at 26 weeks, with no significant differences in outcomes between MBSR and CBT.
In other words, this is yet another trial with the dreaded ‘A+B vs B’ design. Because A+B is always more than B alone, such a study can never generate a negative result, even if A is just a placebo. The results are therefore entirely compatible with the notion that the two tested treatments are pure placebos. Add to this the disappointment many patients in the ‘usual care group’ might have felt for not receiving an additional therapy for their pain, and you have a most plausible explanation for the observed outcomes.
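The point is easy to verify numerically. Below is a toy Monte Carlo sketch, not a model of the actual trial: the group size loosely matches the trial's roughly 113 patients per arm, but the effect sizes, outcome scale, and standard deviation are invented for illustration. A is given zero specific effect and only a nonspecific (placebo/attention) effect, yet the A+B arm beats the B arm in virtually every simulated trial.

```python
import random

random.seed(0)

def simulate_trial(n_per_arm=113, specific_effect=0.0, nonspecific_effect=1.0):
    """Simulate mean improvement in an 'A + usual care' arm vs 'usual care
    alone', where A has no specific effect but carries a nonspecific
    (placebo/attention) effect. All numbers are hypothetical."""
    usual = [random.gauss(1.0, 2.0) for _ in range(n_per_arm)]
    add_on = [random.gauss(1.0 + specific_effect + nonspecific_effect, 2.0)
              for _ in range(n_per_arm)]
    return sum(add_on) / n_per_arm - sum(usual) / n_per_arm

# Even with zero specific effect, the add-on arm shows a positive
# mean difference in nearly every simulated trial.
diffs = [simulate_trial() for _ in range(1000)]
positive = sum(d > 0 for d in diffs)
print(f"{positive}/1000 simulated trials favour A+B over B")
```

Under these assumptions the design produces a "positive" result almost without fail, which is exactly why it cannot distinguish a specific treatment effect from a placebo effect.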
I am totally puzzled why the authors failed to discuss these possibilities and limitations in full, and I am equally bewildered that JAMA published such questionable research.
Has JAMA responded to this criticism?
I wonder why.
They’ve been pretty lax about corrections lately. That makes them a liability to public health.
I agree with, and understand, the point that “A+B is always more than B (even if A is just a placebo), such a study will never generate a negative result (even if A is just a placebo)” in the context of trials/studies, because Prof. Ernst has previously explained it to his readers.
I’ve just read a report on a seemingly unrelated research study, which has caused me to wonder if the popular A+B versus B study design might qualify as an instance [a special case] of the commonly-committed conjunction fallacy. But, I can’t figure out how to relate them succinctly.
“The conjunction fallacy is a formal fallacy that occurs when it is assumed that specific conditions are more probable than a single general one.”
Logically, the condition/specification C+D is always *less* probable than the more general C because the addition of D to the specification is more restrictive. Why? Because C+D is a subset of the wider group C therefore it is mathematically less probable. Thinking that C+D is more probable than C would be committing the conjunction fallacy. On the surface, this seems to support the notion that A+B versus B studies are valid. However, it is trivial to demonstrate the inadvertent or deliberate deployment of the conjunction fallacy in the design of A+B versus B trials…
Let C = A+B, which is the whole group.
Let D = -A.
Therefore, C+D = A+B-A = B, which is a restricted subgroup of C.
Substituting, we get
A = -D
B = C+D
Therefore, A+B versus B = C versus C+D.
As shown above, C is always more probable than C+D.
Therefore, A+B is always more probable than B.
More explicitly, and much more importantly, the probability of A+B cannot be logically or mathematically less than the probability of B.
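The probabilistic inequality this argument rests on — that a conjunction can never be more probable than its general condition, P(C and D) ≤ P(C) — can be checked with a tiny enumeration. The coin-flip sample space below is a hypothetical illustration, not anything from the trial:

```python
import itertools

# Enumerate all outcomes of three coin flips as a toy sample space.
outcomes = list(itertools.product('HT', repeat=3))

C = [o for o in outcomes if o[0] == 'H']        # general condition: "first flip heads"
C_and_D = [o for o in C if o.count('H') == 3]   # the conjunction: more specific

p_C = len(C) / len(outcomes)
p_C_and_D = len(C_and_D) / len(outcomes)

# The conjunction can never be more probable than its general condition.
assert p_C_and_D <= p_C
print(p_C, p_C_and_D)  # 0.5 0.125
```

The inequality holds for any events, which is what makes the conjunction fallacy a formal fallacy rather than a mere tendency.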
The details of the methods used in all A+B versus B studies are totally and utterly irrelevant because these study designs cannot logically [epistemically] produce a result that is worse than A+B = B. Ontologically, it is feasible that this study design will sometimes produce negative results, but I doubt the results would be published because negative values of probability don’t exist in reality: the domain of real, epistemic probability resides on a scale from zero to unity (0 to 1).
A study design that attempts to compare A+B versus B to X+B versus B is, I think, a worthy exemplar of abject bullshit being masqueraded as science.
To my simple mind, if you are testing A vs A+B, algebraically the difference is B, so you are just testing B alone, but against nothing.
I agree. I’m just trying to figure out why these studies get published and why so many people don’t realise that the always-positive results they produce are meaningless. My guess was the conjunction fallacy, because so many people fall foul of it. What puzzles me the most is why the A+B versus B study design never seems to be a career-limiting or career-terminating step for its authors.