
Osteopathy is a confusing subject about which I have reported regularly on this blog (for instance here and here).

Recently, I came across a good article in which someone had assessed 100 websites of UK osteopaths. The findings are impressive:

57% of websites in the survey published the ‘self-healing’ claim;

70% publicised the fact that they offered cranial therapy;

61% claimed to treat one or more specific ailments not related to the musculoskeletal system;

48% of practitioners also personally offered another CAM therapy;

71% of all sites surveyed were located in a setting where other CAM was immediately available.

In total, 93% of the randomly selected websites met at least one, and often more, of the criteria for pseudoscientific claims. The author concluded that quackery is far from existing only on the fringe of osteopathic practice.

In a previous article, the author had stated that “there’s some (not strong) evidence that manual therapy may have some benefit in the case of lower back pain.” This evidence for the assumption that osteopathy works for back pain seems to rely heavily on one researcher: Licciardone JC. He comes from ‘The Osteopathic Research Center, University of North Texas Health Science Center, Fort Worth’, which is also the flagship of research into osteopathy, with plenty of funds and a worldwide reputation.

In 2005, he and his team published a systematic review/meta-analysis of RCTs which concluded that “osteopathic manipulative therapy (OMT) significantly reduces low back pain. The level of pain reduction is greater than expected from placebo effects alone and persists for at least three months. Additional research is warranted to elucidate mechanistically how OMT exerts its effects, to determine if OMT benefits are long lasting, and to assess the cost-effectiveness of OMT as a complementary treatment for low back pain.”

This is the article regularly cited in support of the claim that osteopathy is an effective therapy for back pain. As the paper is now over 10 years old, we clearly need a more up-to-date systematic review. Such an assessment of clinical research into osteopathic intervention for chronic non-specific low back pain (CNSLBP) was recently published by an Australian team. A thorough search of the literature in multiple electronic databases was undertaken, and all articles were included that reported clinical trials; had adult participants; tested the effectiveness and/or efficacy of osteopathic manual therapies applied by osteopaths; and had a study condition of CNSLBP. The quality of the trials was assessed using the Cochrane criteria.

Initial searches located 809 papers, 772 of which were excluded on the basis of the abstract alone. The remaining 37 papers were subjected to a detailed analysis of the full text, which resulted in 35 further articles being excluded. There were thus only two studies assessing the effectiveness of manual therapies applied by osteopaths in adult patients with CNSLBP. The results of one trial suggested that the osteopathic intervention was similar in effect to a sham intervention, and the other implied equivalence of effect between osteopathic intervention, exercise and physiotherapy.

In other words, there seems to be an overt contradiction between the conclusions of Licciardone JC and those of the Australian team. Why, we may well ask? Perhaps the Osteopathic Research Center is not in the best position to be impartial? In order to check them out, I decided to have a closer look at their publications.

This team has published around 80 articles, mostly in very low-impact osteopathic journals. They include several RCTs, and I decided to extract the conclusions of the 10 most recent papers reporting RCTs. Here they are:

RCT No 1 (2016)

Subgrouping according to baseline levels of chronic LBP intensity and back-specific functioning appears to be a simple strategy for identifying sizeable numbers of patients who achieve substantial improvement with OMT and may thereby be less likely to use more costly and invasive interventions.

RCT No 2 (2016)

The OMT regimen was associated with significant and clinically relevant measures for recovery from chronic LBP. A trial of OMT may be useful before progressing to other more costly or invasive interventions in the medical management of patients with chronic LBP.

RCT No 3 (2014)

Overall, 49 (52%) patients in the OMT group attained or maintained a clinical response at week 12 vs. 23 (25%) patients in the sham OMT group (RR, 2.04; 95% CI, 1.36-3.05). The large effect size for short-term efficacy of OMT was driven by stable responders who did not relapse.

RCT No 4 (2014)

These findings suggest that remission of psoas syndrome may be an important and previously unrecognized mechanism explaining clinical improvement in patients with chronic LBP following OMT.

RCT No 5 (2013)

The large effect size for OMT in providing substantial pain reduction in patients with chronic LBP of high severity was associated with clinically important improvement in back-specific functioning. Thus, OMT may be an attractive option in such patients before proceeding to more invasive and costly treatments.

RCT No 6 (2013)

The OMT regimen met or exceeded the Cochrane Back Review Group criterion for a medium effect size in relieving chronic low back pain. It was safe, parsimonious, and well accepted by patients.

RCT No 7 (2012)

This study found associations between IL-1β and IL-6 concentrations and the number of key osteopathic lesions and between IL-6 and LBP severity at baseline. However, only TNF-α concentration changed significantly after 12 weeks in response to OMT. These discordant findings indicate that additional research is needed to elucidate the underlying mechanisms of action of OMT in patients with nonspecific chronic LBP.

RCT No 8 (2010)

Osteopathic manipulative treatment slows or halts the deterioration of back-specific functioning during the third trimester of pregnancy.

RCT No 9 (2004)

The OMT protocol used does not appear to be efficacious in this hospital rehabilitation population.

RCT No 10 (2003)

Osteopathic manipulative treatment and sham manipulation both appear to provide some benefits when used in addition to usual care for the treatment of chronic nonspecific low back pain. It remains unclear whether the benefits of osteopathic manipulative treatment can be attributed to the manipulative techniques themselves or whether they are related to other aspects of osteopathic manipulative treatment, such as range of motion activities or time spent interacting with patients, which may represent placebo effects.

Most of the remaining articles listed on Medline are comments and opinion papers. Crucially, it would be erroneous to assume that the team conducted a total of 10 RCTs: several of the above-cited articles refer to the same RCT.

However, the most remarkable feature, in my view, is that the conclusions are almost invariably positive. Whenever I find a research team that manages to publish almost nothing but positive findings on a subject about which most other experts are sceptical, my alarm bells start ringing.

In a previous blog post, I have explained this in greater detail. Suffice it to say that, according to my theory, the trustworthiness of the ‘Osteopathic Research Center’ is nothing to write home about.

What, I wonder, does that tell us about the reliability of the claim that osteopathy is effective for back pain?

17 Responses to Osteopathy revisited

  • DO’s are widely accepted as equal to MD’s in the US, but I avoid them. I don’t want to spend a first visit asking a lot of questions, though lately I find I need to do that with an MD, since many are dabbling in woo. I get my care at a large regional academic medical center. At my branch location, there is a DO, and in his office I found literature provided by the Medical Center about the “benefits” of osteopathy–it treats “the whole patient” and such. I saw him once when my doctor was on maternity leave and there was no obvious woo. He phoned me later to see how I was doing, which was very nice.

    The question I always have is: Did you go to DO school because you couldn’t get into MD school or do you think DO is better somehow–and why?

    I am sorry for DO’s who do not include woo, because they bear the stigma of those who do, but as long as an MD is available, I’ll stay with them.

  • I guess we could give a confidence level of ~70% +/- 5% for a sample size of ~100 in a population of approx 5,500 osteopaths, for the result in the original article.

    If someone wants to repeat this with a sample size of, say, 360 then that would rise to ~95% +/- 5%.

    • Osteofiles said:

      I guess we could give a confidence level of ~70% +/- 5% for a sample size of ~100 in a population of approx 5,500 osteopaths, for the result in the original article.

      If someone wants to repeat this with a sample size of, say, 360 then that would rise to ~95% +/- 5%.

      There are currently 4,623 registrants on the GOsC’s public database (world wide). A sample size of 100 (assuming a response distribution of the worst case 50%) gives an error margin of ±9.69% with a confidence level of 95%. Taking the 93% identified as breaching at least one of the criteria as the response distribution, a sample size of 100 gives an error margin of ±4.95% at 95% confidence level. Perfectly adequate for most purposes.
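Those margins follow from the usual formula for a sample proportion with a finite population correction; a minimal sketch (population size and percentages taken from the figures above, z ≈ 1.96 for 95% confidence):

```python
import math

def margin_of_error(n, N, p, z=1.96):
    """Margin of error for a proportion p estimated from a simple random
    sample of size n drawn from a finite population of size N, at ~95%
    confidence (z = 1.96), with the finite population correction applied."""
    se = math.sqrt(p * (1 - p) / n)      # standard error, infinite-population case
    fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
    return z * se * fpc

# Worst-case response distribution (p = 0.5) vs the observed 93%
print(round(100 * margin_of_error(100, 4623, 0.50), 2))  # → 9.69
print(round(100 * margin_of_error(100, 4623, 0.93), 2))  # → 4.95
```

As the later comment notes, the correction term barely moves for populations in the thousands, so the exact registrant count makes little difference to these margins.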

      However, now that we have an idea of just how bad the situation is, who do you think should be investigating this fully and properly and actually doing something about it?

      • Well, I was suggesting that more data would be useful to strengthen the assertions made. You say that the data is already adequate. Fair enough, I don’t wish to split hairs.

        I thought the number of registered osteopaths was “over 5,000” as per http://www.osteopathy.org.uk/training-and-registration/ I don’t doubt your figure, just curious where it came from. I don’t see it in the GOsC’s public database at http://www.osteopathy.org.uk/register-search/ I’m sure the accurate total is publicly available somewhere.

        The question you ask seems predicated on the initial review being worthy of investigation ( https://appletzara.wordpress.com/2016/04/24/osteopathy-part-2-a-review-of-100-osteopathy-websites/ ). Again, at risk of splitting hairs, I’d suggest the criteria are a little loose. Why those particular 4 criteria? Add in a few more, it would bump up the numbers. What about mentions of “holistic” or “toxins” or forms of “ancient wisdom”? That would probably bring the total nearer 100%.

        I do like the initial review, though.

        As for “investigating”, what should be the next steps? The reviewer has selected some arbitrary criteria about healthcare beliefs, conducted a manual search, and identified a percentage of the population. Beyond that, an investigation might look into why the identified members of that population express those beliefs…. perhaps looking at their beliefs, education and social environment? I’m not really sure what this achieves.

        Perhaps a more robust version of this review could be compared against surveys of the profession and those who use osteopaths to see if anything interesting emerges.

        As for “doing something about it”, what would you suggest?

        Many thanks for your comments 🙂

          • Hi Alan, thanks for the comment. That number is inaccurate, unless you want to include the handful who are currently suspended. Also it probably doesn’t include those who have very recently left or joined the register. And today it says 4,615 so it clearly fluctuates. So… maybe settling on an approximation is the best we can hope for.

          • That the number changes frequently could well be an indication that it is being kept up to date as registrants join and leave the register. I would hope that the GOsC would maintain the register so that it was no more than, say, a day out of date so the public can check the current status of anyone claiming to be an osteo. The GOsC also have a legal duty to maintain the register. It would therefore seem to be safe to assume it is at least fairly close to the actual number of registrants (although there could be legitimate reasons for a few not being included). However, the actual population size will make little difference to the random sample stats.

      • Hi again.

        Out of curiosity, why elect to believe the 93% response distribution? Since we don’t know the true population distribution, we can’t determine if that 93% figure is reliable or not. We can’t even determine if it’s close, or wildly inaccurate.

        Is it typical practice to accept such a high response distribution from such a small sample?

        Perhaps a mean of all responses might be more appropriate? That works out as 47.2% (or 61.4% if we consider the 71% figure that’s somewhat arbitrarily included toward the end). Those aren’t very far from the 50% “worst case” scenario.

        The sample size is also probably smaller than 100, since the author states “If a random number selected a GOC register entry with no website (or a dead link), I went down the list until I hit the next entry with a working website.”

        Just curious. Thanks again 🙂

        • Thanks for your thoughts on the survey – I’m finding your more statistical look at it very interesting 🙂 I’m curious – why would the sample size be <100 if I hopped to the next website in the list due to a dead link/no website?

          The way I did it was to enter the county in the search box and hit return – that gave me all the GoC entries for that county. I then used the random number generator to generate 10 random numbers. So if the next number to look at was 15, I'd go to no. 15 in the register search results; if no.15 had no working website, then I'd look at no. 16, 17, 18, etc. until I got to one with a working link.
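For what it’s worth, that selection rule can be sketched in a few lines of Python. This is only an illustration: the register here is a hypothetical list of (name, website) tuples with None standing in for a dead link or missing website, and I have added wrap-around at the end of the list so the sketch always terminates.

```python
import random

def sample_with_fallback(register, k, seed=None):
    """Draw k entries at random; if a drawn entry has no working website,
    walk down the list until the next entry with one is found."""
    rng = random.Random(seed)
    picked = []
    picked_names = set()
    while len(picked) < k:
        i = rng.randrange(len(register))
        # walk forward (wrapping at the end) to the next usable, unseen entry
        for offset in range(len(register)):
            name, site = register[(i + offset) % len(register)]
            if site is not None and name not in picked_names:
                picked.append((name, site))
                picked_names.add(name)
                break
    return picked
```

Run per county with k = 10, this always yields a “topped-up” sample of working websites, which matches the procedure described.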

          Many thanks!

        • Sorry, I think I’ve just realised what you meant – you’re interpreting ‘the list’ as the 100 selected websites, where what I meant is the GoC register ‘list’. That’s due to me being unclear in my report. I’ve made a slight edit to my text.

          • Hi there. Yep, I had interpreted your original text to mean you selected 100 IDs and skipped those without websites etc, leaving a list less than 100. I now understand that you kept the list “topped up” 🙂

            Many thanks for the clarification.

      • Would we be discussing the intricacies of error margins and confidence levels if Felix Uncia had found that, say, only 7% of osteo websites were pushing quackery? I somehow doubt it.

        This simply avoids the elephant in the room: that – to some degree and whatever confidence level – there is something there that should be thoroughly investigated if the public are to be protected. Now, who do you think might be well-placed to do this and protect the public from the quackery of their members…

        • Well, I only asked out of curiosity, so I might still be asking procedural questions regardless of the numbers.

          It seems that the regulator is tasked with protecting the public. I wonder how a regulator should operate where there is public demand for a service? Regulators typically respond to complaints from the public, or changes in any Act of Parliament that governs their work. Consulting this Osteopaths Act 1993, perhaps your concerns are addressed by the General Osteopathic Council’s Code of Practice, since this seems to be a “professional conduct” issue?

          • Osteofiles said:

            It seems that the regulator is tasked with protecting the public.

            They are indeed, but some might conclude they have failed in that simple task.

            I wonder how a regulator should operate where there is public demand for a service?

            In what way? Surely the demand for their registrants is no business of the regulator?

            Regulators typically respond to complaints from the public, or changes in any Act of Parliament that governs their work. Consulting this Osteopaths Act 1993, perhaps your concerns are addressed by the General Osteopathic Council’s Code of Practice, since this seems to be a “professional conduct” issue?

            And the OPS includes:

            D14 Act with integrity in your professional practice.

            You should make sure that:
            2.1. Your advertising is legal, decent, honest and truthful as defined by the Advertising Standards Authority (ASA) and conforms to the current guidance, such as the UK Code of Non-broadcast Advertising, Sales Promotion and Direct Marketing (CAP Code).

          • Thanks for the comments Alan. I can’t reply to your message for some reason, there’s no reply button underneath it.

            “the regulator is tasked with protecting the public.”
            “some might conclude they have failed in that simple task”

            Sure, and consequently “some” doubtless conclude differently. I guess it depends what you feel the public need protecting from. Adverse events seem within the profession’s remit, but beyond that… what else? Needless expense perhaps, i.e. charging for ineffective treatment? Or CAM as a predictor of hazardous behaviours such as avoidance of “conventional” medicine…? I’m not sure where that all sits. An interesting question. The public choice of CAM is well-known to be a complex area, e.g. http://www.ncbi.nlm.nih.gov/pubmed/23239765

            “I wonder how a regulator should operate where there is public demand for a service?”
            “Surely the demand for their registrants is no business of the regulator?”

            So a regulator has no role in understanding why there is high public demand for a service…?

            Regarding the Osteopathic Practice Standards and advertising being “legal, decent, honest and truthful” this seems like it requires a case-by-case decision by the Advertising Standards Authority, which sounds like a lot of work. I wonder if there’s a better process?

            Apologies for the questions, I’m just trying to understand your position. Thanks! 🙂
