MD, PhD, FMedSci, FSB, FRCP, FRCPEd


I just came across a new article entitled “Vaccinated children four times more likely to suffer from ADHD, autism”. It was published in WDDTY, my favourite source of misleading information. Here it is:

Vaccinated children are nearly four times more likely to suffer from learning disabilities, ADHD and autism, a major new study has discovered—and they are six times more likely to suffer from one of these neuro-developmental problems if they were also born prematurely.

The vaccinated child is also more likely to suffer from otitis media, the ear infection, and nearly six times more likely to contract pneumonia.

But the standard childhood vaccines do at least do their job: the vaccinated child is nearly eight times less likely than the unvaccinated to develop chicken pox, and also less likely to suffer from whooping cough (pertussis).

Researchers from Jackson State University are some of the first to look at the long-term effects of vaccination. They monitored the health of 666 children for six years from the time they were six—when the full vaccination programme had been completed—until they were 12. All the children were being home-schooled because it was one of the few communities where researchers could find enough unvaccinated children for comparison; 261 of the children hadn’t been vaccinated and 208 hadn’t had all their vaccinations, while 197 had received the full 48-dose course.

The vaccinated were more likely to suffer from allergic rhinitis, such as hay fever, eczema and atopic dermatitis, learning disability, ADHD (attention-deficit, hyperactive disorder), and autism. The risk was lower among the children who had been partially vaccinated.

Vaccinated children were also more likely to have taken medication, such as an antibiotic, or treatment for allergies or for a fever, than the unvaccinated.

END OF QUOTE

I looked up the original study to check and found several surprises.

The first surprise was that the study was called a ‘pilot’ by its authors, even in the title of the paper: “Pilot comparative study on the health of vaccinated and unvaccinated 6- to 12-year-old U.S. children.”

The second surprise was that even the authors admit to important limitations of their research:

We did not set out to test a specific hypothesis about the association between vaccination and health. The aim of the study was to determine whether the health outcomes of vaccinated children differed from those of unvaccinated homeschool children, given that vaccines have nonspecific effects on morbidity and mortality in addition to protecting against targeted pathogens [11]. Comparisons were based on mothers’ reports of pregnancy-related factors, birth histories, vaccinations, physician-diagnosed illnesses, medications, and the use of health services. We tested the null hypothesis of no difference in outcomes using chi-square tests, and then used Odds Ratios and 95% Confidence Intervals to determine the strength and significance of the association…

What credence can be given to the findings? This study was not intended to be based on a representative sample of homeschool children but on a convenience sample of sufficient size to test for significant differences in outcomes. Homeschoolers were targeted for the study because their vaccination completion rates are lower than those of children in the general population. In this respect our pilot survey was successful, since data were available on 261 unvaccinated children…

Mothers’ reports could not be validated by clinical records because the survey was designed to be anonymous. However, self-reports about significant events provide a valid proxy for official records when medical records and administrative data are unavailable [70]. Had mothers been asked to provide copies of their children’s medical records it would no longer have been an anonymous study and would have resulted in few completed questionnaires. We were advised by homeschool leaders that recruitment efforts would have been unsuccessful had we insisted on obtaining the children’s medical records as a requirement for participating in the study.

A further potential limitation is under-ascertainment of disease in unvaccinated children. Could the unvaccinated have artificially reduced rates of illness because they are seen less often by physicians and would therefore have been less likely to be diagnosed with a disease? The vaccinated were indeed more likely to have seen a doctor for a routine checkup in the past 12 months (57.5% vs. 37.1%, p < 0.001; OR 2.3, 95% CI: 1.7, 3.1). Such visits usually involve vaccinations, which nonvaccinating families would be expected to refuse. However, fewer visits to physicians would not necessarily mean that unvaccinated children are less likely to be seen by a physician if their condition warranted it. In fact, since unvaccinated children were more likely to be diagnosed with chickenpox and whooping cough, which would have involved a visit to the pediatrician, differences in health outcomes are unlikely to be due to under-ascertainment.
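As a sanity check on the figures quoted above, the odds ratio and its confidence interval can be reproduced from the reported percentages and group sizes. The counts below are my own reconstruction from those percentages (not taken from the paper), so the last decimal place is approximate:

```python
import math

# Group sizes reported in the study: 261 unvaccinated; 208 partially
# plus 197 fully vaccinated. Counts of checkup visits are reconstructed
# from the quoted percentages (57.5% vs 37.1%), so they are approximate.
vacc_total = 208 + 197            # 405 vaccinated (partial + full)
unvacc_total = 261
vacc_checkup = round(0.575 * vacc_total)      # ~233 had a routine checkup
unvacc_checkup = round(0.371 * unvacc_total)  # ~97

a, b = vacc_checkup, vacc_total - vacc_checkup
c, d = unvacc_checkup, unvacc_total - unvacc_checkup

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's SE of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

This yields an odds ratio of about 2.3 with a confidence interval close to the reported (1.7, 3.1), which at least confirms that the quoted numbers are internally consistent.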

The third surprise was that the authors were not at all as certain as WDDTY in their conclusions: “the study findings should be interpreted with caution. First, additional research is needed to replicate the findings in studies with larger samples and stronger research designs. Second, subject to replication, potentially detrimental factors associated with the vaccination schedule should be identified and addressed and underlying mechanisms better understood. Such studies are essential in order to optimize the impact of vaccination of children’s health.”

The fourth surprise was to find the sponsors of this research:

Generation Rescue is, according to Wikipedia, a nonprofit organization that advocates the incorrect view that autism and related disorders are primarily caused by environmental factors, particularly vaccines. These claims are biologically implausible and are disproven by scientific evidence. The organization was established in 2005 by Lisa and J.B. Handley. They have gained attention through use of a media campaign, including full page ads in the New York Times and USA Today. Today, Generation Rescue is known as a platform for Jenny McCarthy’s autism and anti-vaccine advocacy.

The Children’s Medical Safety Research Institute (CMSRI) was, according to Vaxopedia, created by and is funded by the Dwoskin Family Foundation. It provides grants to folks who will do research on “vaccine induced brain and immune dysfunction” and on what they believe are other “gaps in our knowledge about vaccines and vaccine safety”.

While they claim that they are not an anti-vaccine organization, it should be noted that Claire Dwoskin once said that “Vaccines are a holocaust of poison on our children’s brains and immune systems.”

Did I say SURPRISE?

I take it back!

When it comes to WDDTY, nothing surprises me.

On this blog, we have often discussed the risks of spinal manipulation. As I see it, the information we have at present suggests that

  • mild to moderate adverse effects are extremely frequent and occur in about half of all patients;
  • serious adverse effects are being reported regularly;
  • they usually occur with chiropractic manipulations of the neck (which are not of proven efficacy for any condition) and often relate to vascular accidents;
  • the consequences can be permanent neurological deficits and even deaths;
  • under-reporting of such cases might be considerable and therefore precise incidence figures are not available;
  • there is no system to accurately monitor the risks;
  • chiropractors are in denial of these problems.

Considering the seriousness of these issues, it is important to do more rigorous research. Therefore, any new paper published on this subject is welcome. A recent article might shed new light on the topic.

The objective of this systematic review was to identify characteristics of 1) patients, 2) practitioners, 3) the treatment process and 4) adverse events (AE) occurring after cervical spinal manipulation (CSM) or cervical mobilization. Systematic searches were performed in 6 electronic databases up to December 2014. Of the initial 1043 articles thus located, 144 were included, containing 227 cases. 117 cases described male patients, with a mean age of 45; the mean age of the female patients was 39. Most patients were treated by chiropractors (66%). Manipulation was reported in 95% of the cases, and neck pain was the most frequent indication for the treatment. Cervical arterial dissection (CAD) was reported in 57% of the cases, and 45.8% had immediate onset symptoms. The overall distribution of gender for CAD was 55% female. Patient characteristics were described poorly. No clear patient profile related to the risk of AE after CSM could be extracted, except that women seemed more at risk for CAD. The authors of this review concluded that there seems to be under-reporting of cases. Further research should focus on a more uniform and complete registration of AE using standardized terminology.

This article provides little new information; but it does confirm what I have been saying for many years: NECK MANIPULATIONS ARE ASSOCIATED WITH SERIOUS RISKS AND SHOULD THEREFORE BE AVOIDED.

This new RCT by researchers from the National Institute of Complementary Medicine in Sydney, Australia was aimed at ‘examining the effect of changing treatment timing and the use of manual, electro acupuncture on the symptoms of primary dysmenorrhea’. It had four arms:

  1. low frequency manual acupuncture (LF-MA),
  2. high frequency manual acupuncture (HF-MA),
  3. low frequency electro acupuncture (LF-EA)
  4. and high frequency electro acupuncture (HF-EA).

A total of 74 women were given 12 treatments over three menstrual cycles, either once per week (LF groups) or three times in the week prior to menses (HF groups). All groups received a treatment in the first 48 hours of menses. The primary outcome was the reduction in peak menstrual pain at 12 months from trial entry.

During the treatment period and 9 month follow-up all groups showed statistically significant reductions in peak and average menstrual pain compared to baseline. However, there were no differences between groups. Health related quality of life increased significantly in 6 domains in groups having high frequency of treatment compared to two domains in low frequency groups. Manual acupuncture groups required less analgesic medication than electro-acupuncture groups. HF-MA was most effective in reducing secondary menstrual symptoms compared to both EA groups.

The authors concluded that acupuncture treatment reduced menstrual pain intensity and duration after three months of treatment and this was sustained for up to one year after trial entry. The effect of changing mode of stimulation or frequency of treatment on menstrual pain was not significant. This may be due to a lack of power. The role of acupuncture stimulation on menstrual pain needs to be investigated in appropriately powered randomised controlled trials.

If I were not used to reading rubbish research of alternative medicine in general and acupuncture in particular, this RCT would amaze me – not so much because of its design, execution, or write-up, but primarily because of its conclusion (why, oh why, I ask myself, did PLOS ONE publish this paper?). They are, I think, utterly barmy.

Let me explain:

  • “acupuncture treatment reduced menstrual pain intensity” – oh no, it didn’t; at least this is not what the study proves; the fact that pain was perceived as less could be due to a host of factors, for instance regression towards the mean, or social desirability; as there was no proper control group, nobody can tell;
  • the lack of difference between treatments “may be due to a lack of power”. Yes, but more likely it is due to the fact that all versions of a placebo therapy generate similar outcomes.
  • “acupuncture stimulation on menstrual pain needs to be investigated in appropriately powered randomised controlled trials”. Why? Because the authors have a quasi-religious belief in acupuncture? And if they have, why did they not design their study ‘appropriately’?

The best conclusion I can suggest for this daft trial is this: IN THIS STUDY, THE PRIMARY ENDPOINT SHOWED NO DIFFERENCE BETWEEN THE 4 TREATMENT GROUPS. THE RESULTS ARE THEREFORE FULLY COMPATIBLE WITH THE NOTION THAT ACUPUNCTURE IS A PLACEBO THERAPY.

Something along these lines would, in my view, have been honest and scientific. Sadly, in acupuncture research, we very rarely get such honest science and the ‘National Institute of Complementary Medicine in Sydney, Australia’ has no track record of being the laudable exception to this rule.

It used to be called ‘good bedside manners’. The term is an umbrella for a range of attitudes and behaviours including compassion, empathy and conveying positive messages. What could be more obvious than the assumption that good bedside manners are better than bad ones?

But as sceptics, we need to doubt obvious assumptions! Where is the evidence? we need to ask. So, where is the evidence that positive messages have any clinical effects? A meta-analysis has tackled the issue, and the results are noteworthy.

The researchers aimed to estimate the efficacy of positive messages for pain reduction. They included RCTs of the effects of positive messages. Their primary outcome measures were differences in patient- or observer reported pain between groups who were given positive messages and those who were not. Of the 16 RCTs (1703 patients) that met the inclusion criteria, 12 trials had sufficient data for meta-analysis. The pooled standardized effect size was −0.31 (95% CI −0.61 to −0.01, P = 0.04, I² = 82%). The effect size remained positive but not statistically significant after we excluded studies considered to have a high risk of bias (standard effect size −0.17, 95% CI −0.54 to 0.19, P = 0.36, I² = 84%). The authors concluded that care of patients with chronic or acute pain may be enhanced when clinicians deliver positive messages about possible clinical outcomes. However, we have identified several limitations of the present study that suggest caution when interpreting the results. We recommend further high quality studies to confirm (or falsify) our result.
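To see why a pooled effect of −0.31 with I² = 82% deserves caution, it helps to look at how such numbers are produced. The sketch below runs a DerSimonian-Laird random-effects pooling on entirely made-up study data (NOT the trials from this meta-analysis), chosen to give a similarly marginal result with high heterogeneity:

```python
import math

# Hypothetical per-study standardized mean differences and standard errors,
# for illustration only -- not the data from the meta-analysis discussed above.
effects = [-0.6, -0.1, -0.9, 0.1, -0.4]
ses     = [0.20, 0.15, 0.25, 0.18, 0.22]

# Fixed-effect (inverse-variance) weights and Cochran's Q
w = [1 / se**2 for se in ses]
fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
Q = sum(wi * (ei - fixed)**2 for wi, ei in zip(w, effects))
df = len(effects) - 1

# I^2: proportion of variability due to between-study heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird between-study variance, then random-effects pooling
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)
w_re = [1 / (se**2 + tau2) for se in ses]
pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

print(f"pooled SMD = {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I2 = {I2:.0f}%")
```

The point of the exercise: when heterogeneity is this high, the pooled confidence interval can graze zero, and dropping one or two weak studies is enough to tip the result into non-significance, exactly as happened in the meta-analysis above.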

The 1st author of this paper published a comment in which he stated that our recent mega-study with 12 randomized trials confirmed that doctors who use positive language reduce patient pain by a similar amount to drugs. Other trials show that positive messages can:

• help Parkinson’s patients move their hands faster,
• increase ‘peak flow’ (a measure of how much air is breathed) in asthma patients,
• improve the diameter of arteries in heart surgery patients, and
• reduce the amount of pain medication patients use.

The way a positive message seems to help is biological. When a patient anticipates a good thing happening (for example that their pain will go away), this activates parts of the brain that help the body make its own drugs like endorphins. A positive doctor may also help a patient relax which can also improve health.

I am not sure that this is entirely correct. When the authors excluded the methodologically weak and therefore unreliable studies, the effect was no longer significant. That is to say, it was likely due to chance.

And what about the other papers cited above? I am not sure about them either. Firstly, they do not necessarily show that positive messages are effective. Secondly, there is just one study for each claim, and one swallow does not make a summer; we would need independent replications.

So, am I saying that being positive as a clinician is ineffective? No! I am saying that the evidence is too flimsy to be sure. And possibly, this means that the effect of positive messages is smaller than we all thought.

In the US, some right-wing politicians might answer this question in the affirmative, having suggested that American citizens don’t really need healthcare, if only they believed stronger in God. Here in the UK, some right-wing MPs are not that far from such an attitude, it seems.

A 2012 article in the ‘Plymouth Herald’ revealed that the Tory MP for South West Devon, Gary Streeter, has challenged the UK Advertising Standards Authority (ASA) for banning claims that ‘God can heal’. Mr Streeter was reported to have written to the ASA demanding it produce “indisputable scientific evidence” to prove that prayer does not work – otherwise they would raise the issue in Parliament, he threatened. Mr Streeter also accused the ASA of “poor judgement” after it banned a Christian group from using leaflets stating: “Need healing? God can heal today!… We believe that God loves you and can heal you from any sickness.”

The ASA said such claims were misleading and could discourage people from seeking essential medical treatment.

The letter to ASA was written on behalf of the all-party Christians in Parliament group, which Mr Streeter chairs. Here are a few quotes from this bizarre document:

“We write to express our concern at this decision and to enquire about the basis on which it has been made… It appears to cut across two thousand years of Christian tradition and the very clear teaching in the Bible. Many of us have seen and experienced physical healing ourselves in our own families and churches and wonder why you have decided that this is not possible. On what scientific research or empirical evidence have you based this decision?… You might be interested to know that I (Gary Streeter) received divine healing myself at a church meeting in 1983 on my right hand, which was in pain for many years. After prayer at that meeting, my hand was immediately free from pain and has been ever since. What does the ASA say about that? I would be the first to accept that prayed for people do not always get healed, but sometimes they do… It is interesting to note that since the traumatic collapse of the footballer Fabrice Muamba the whole nation appears to be praying for a physical healing for him. I enclose some media extracts. Are they wrong also and will you seek to intervene? … We invite your detailed response to this letter and unless you can persuade us that you have reached your ruling on the basis of indisputable scientific evidence, we intend to raise this matter in Parliament.”

Mr Streeter displays, of course, a profound and embarrassing ignorance of science, healthcare and common sense:

  • ‘Indisputable’ evidence that something is ineffective is usually not obtainable in science.
  • In healthcare it is also not relevant, because we try to employ treatments that are proven to work and avoid those for which this is not the case.
  • It is common sense that those who make a claim must also prove it to be true; those who doubt it need not prove that it is untrue.
  • Chronic pain disappearing spontaneously is not uncommon.
  • The plural of anecdote is anecdotes, not evidence!

Personally, I find it worrying that a man with such views sits in parliament and exerts influence over me and our country.

“Highly diluted homeopathic remedies cannot possibly work beyond a placebo effect because there is nothing in them”. This is the argument we often hear. It is, I think, correct. But homeopaths have always disagreed. Hahnemann claimed that the healing power of his remedies was due to a ‘vital force’, and for a long time his followers repeated this mantra. Nowadays, it sounds too obsolete to be taken seriously, and homeopaths came up with new theories as to how their remedies work. The current favourite is the ‘nano-theory’.

This article explains it quite well: “… some of the most exciting findings have been in the world of tiny nano-particles.   Nano-particles are described as particles between 1 and 100 nanometers in size.  For an idea of scale, a nanometer is 1 billionth of a meter.  A single atom is one-tenth of a nanometer, and subatomic particles are still smaller than that.  Quantum mechanics (the study of these very small particles) has shown that these tiny particles can and do have impact our macro world, and can be useful in everything from medical PET scans to quantum computing. But the breakthrough that I’m most excited about is the latest study around nano-particles which has shown that at the very highest prescription strength dilutions of a homeopathic substance (50M) there are still nano-particles of the original substance that exist.  Further, not only did researchers discover that these particles exist, but they showed that they had demonstrable effects when tests were run on homeopathic dilutions versus a control substance…”

Right!

So, the claim is that, during the process of potentisation of a homeopathic remedy, nano-particles of the original stock are formed. Therefore, even ultra-molecular dilutions are not devoid of material but do contain tiny bits of what it says on the bottle. This is the reason why homeopaths now claim WE WERE RIGHT ALL ALONG; HOMEOPATHY WORKS!!!

I have several problems with this assumption:

  • The nano-particles have been shown by just one or two research groups. I would like to see independent confirmations of their findings because I am not convinced that this is not simply an artefact without real meaning.
  • Even if we accept the ‘nano-theory’ for a moment, there are numerous other issues.
  • What about the many homeopathic remedies that use stock which is not material by nature, for instance, X-ray, luna, etc.? Do we need to assume that there are also nano-particles of non-materials?
  • And for remedies that are based on a material stock (like arnica or nux vomica, or Berlin Wall, for instance), how do the nano-particles generate health effects? How do a few nano-particles of arnica make cuts and bruises heal faster? How do nano-particles of nux vomica stop a patient from vomiting? How do nano-particles of the Berlin Wall do anything at all?

If the ‘nano-theory’ were true (which I doubt very much), it would still fail to provide an explanation as to how homeopathy works. Such an explanation would need to be identified for each of the thousands of different remedies in separate investigations.

If nano-particles are truly generated during the potentisation process, it proves almost nothing. All it would show is that shaken water differs from unshaken water. The water in my kitchen sink also differs from pure water; this, however, does not mean that it has healing properties.

My conclusion: there is no plausible mode of action of highly diluted homeopathic remedies.

This double-blind RCT aimed to test the efficacy of self-administered acupressure for pain and physical function in adults with knee osteoarthritis (KOA).

150 patients with symptomatic KOA participated and were randomized to

  1. verum acupressure,
  2. sham acupressure,
  3. or usual care.

Verum and sham, but not usual care, participants were taught to self-apply acupressure once daily, five days/week for eight weeks. Assessments were collected at baseline, 4 and 8 weeks. The numeric rating scale (NRS) for pain was administered during weekly phone calls. Outcomes included the WOMAC pain subscale (primary), the NRS and physical function measures (secondary). Linear mixed regression was conducted to test between group differences in mean changes from baseline for the outcomes at eight weeks.

Compared with usual care, both verum and sham participants experienced significant improvements in WOMAC pain, NRS pain and WOMAC function at 8 weeks. There were no significant differences between verum and sham acupressure groups in any of the outcomes.

The authors concluded that self-administered acupressure is superior to usual care in pain and physical function improvement for older people with KOA. The reason for the benefits is unclear and placebo effects may have played a role.

Another very odd conclusion!

The authors’ stated aim was to TEST THE EFFICACY OF ACUPRESSURE. To achieve this aim, they rightly compared it to a placebo (sham) intervention. This comparison did not show any differences between the two. Ergo, the only correct conclusion is that acupressure is a placebo.

I know, the authors (sort of) try to say this in their conclusions: placebo effects may have played a role. But surely, this is more than a little confusing. Placebo effects were quite evidently the sole cause of the observed outcomes. Is it ethical to confuse the public in this way, I wonder.

On this blog, we have had (mostly unproductive) discussions with homeopaths so often that sometimes they sound like a broken record. I don’t want to add to this kerfuffle; what I hope to do today is to summarise a certain line of argument which, from the homeopaths’ point of view, seems entirely logical. I do this in the form of a fictitious conversation between a scientist (S) and a classical homeopath (H). My aim is to make the reader understand homeopaths better so that future debates might be better informed.

HERE WE GO:

S: I have studied the evidence from studies of homeopathy in some detail, and I have to tell you, it fails to show that homeopathy works.

H: This is not true! We have plenty of evidence to prove that patients get better after seeing a homeopath.

S: Yes, but this is not because of the remedy; it is due to non-specific effects such as the empathetic consultation with a homeopath. If one controls for these factors in adequately designed trials, the result usually is negative.

I will re-phrase my claim: the evidence fails to show that highly diluted homeopathic remedies are more effective than placebos.

H: I disagree, there are positive studies as well.

S: Let’s not cherry pick. We must always consider the totality of the reliable evidence. We now have a meta-analysis published by homeopaths that demonstrates the ineffectiveness of homeopathy quite clearly.

H: This is because homeopathy was not used correctly in the primary trials. Homeopathy must be individualised for each unique patient; no two cases are alike! Remember: homeopathy is based on the principle that like cures like!!!

S: Are you saying that all other forms of using homeopathy are wrong?

H: They are certainly not adhering to what Hahnemann told us to do; therefore you cannot take their ineffectiveness as proof that homeopathy does not work.

S: This means that much, if not most of homeopathy as it is used today is to be condemned as fake.

H: I would not go that far, but it is definitely not the real thing; it does not obey the law of similars.

S: Let’s leave this to one side for the moment. If you insist on individualised homeopathy, I must tell you that this approach can also be tested in clinical trials.

H: I know; and there is a meta-analysis which proves that it is effective.

S: Not quite; it concluded that medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.

If you call this a proof of efficacy, I would have to disagree with you. The effect was tiny and at least two of the best studies relevant to the subject were left out. If anything, this paper is yet another proof that homeopathy is useless!

H: You simply don’t understand homeopathy enough to say that. I tried to tell you that the remedy must be carefully chosen to fit each unique patient. This is a very difficult task, and sometimes it is not successful – mainly because the homeopaths employed in clinical trials are not skilled enough to find it. This means that, in these studies, we will always have a certain failure rate which, in turn, is responsible for the small average effect size.

S: But these studies are always conducted by experienced homeopaths, and only the very best, most experienced homeopaths were chosen to cooperate in them. Your argument that the trials are negative because of the ineffectiveness of the homeopaths – rather than the ineffectiveness of homeopathy – is therefore nonsense.

H: This is what you say because you don’t understand homeopathy!

S: No, it is what you say because you don’t understand science. How else would you prove that your hypothesis is correct?

H: Simple! Just look at individual cases from the primary studies within this meta-analysis. You will see that there are always patients who did improve. These cases are the proof we need. The method of the RCT is only good for defining average effects; this is not what we should be looking at, and it is certainly not what homeopaths are interested in.

S: Are you saying that the method of the RCT is wrong?

H: It is not always wrong. Some RCTs of homeopathy are positive and do very clearly prove that homeopathy works. These are obviously the studies where homeopathy has been applied correctly. We have to make a meta-analysis of such trials, and you will see that the result turns out to be positive.

S: So, you claim that all the positive studies have used the correct method, while all the negative ones have used homeopathy incorrectly.

H: If you insist on putting it like that, yes.

S: I see, you define a trial to have used homeopathy correctly by its result. Essentially you accept science only if it generates the outcome you like.

H: Yes, that sounds odd to you – because you don’t understand enough of homeopathy.

S: No, what you seem to insist on is nothing short of double standards. Or would you accept a drug company claiming: some patients did feel better after taking our new drug, and this is proof that it works?

H: You see, not understanding homeopathy leads to serious errors.

S: I give up.

The question whether spinal manipulative therapy (SMT) is effective for acute low back pain is still discussed controversially. Chiropractors (they use SMT more regularly than other professionals) try everything to make us believe it does work, while the evidence is far less certain. Therefore, it is worth considering the best and most up-to-date data.

The aim of this paper was to systematically review studies of the effectiveness and harms of SMT for acute (≤6 weeks) low back pain. The research question was straightforward: is the use of SMT in the management of acute (≤6 weeks) low back pain associated with improvements in pain or function?

A thorough literature search was conducted to locate all relevant papers. Study quality was assessed using the Cochrane Back and Neck (CBN) Risk of Bias tool. The evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) criteria. The main outcome measures were pain (measured by either the 100-mm visual analog scale, 11-point numeric rating scale, or other numeric pain scale), function (measured by the 24-point Roland Morris Disability Questionnaire or Oswestry Disability Index [range, 0-100]), or any harms measured within 6 weeks.

Of 26 eligible RCTs identified, 15 RCTs (1711 patients) provided moderate-quality evidence that SMT has a statistically significant association with improvements in pain (pooled mean improvement in the 100-mm visual analog pain scale, −9.95 [95% CI, −15.6 to −4.3]). Twelve RCTs (1381 patients) produced moderate-quality evidence that SMT has a statistically significant association with improvements in function (pooled mean effect size, −0.39 [95% CI, −0.71 to −0.07]). Heterogeneity was not explained by type of clinician performing SMT, type of manipulation, study quality, or whether SMT was given alone or as part of a package of therapies. No RCT reported any serious adverse event. Minor transient adverse events such as increased pain, muscle stiffness, and headache were reported 50% to 67% of the time in large case series of patients treated with SMT.

The authors concluded that among patients with acute low back pain, spinal manipulative therapy was associated with modest improvements in pain and function at up to 6 weeks, with transient minor musculoskeletal harms. However, heterogeneity in study results was large.

This meta-analysis has been celebrated by chiropractors around the world as a triumph for their hallmark therapy, SMT. But there have also been more cautionary voices, not least from the lead author of the paper. Patients undergoing spinal manipulation experienced a decline of 1 point in their pain rating, says Dr. Paul Shekelle, an internist with the West Los Angeles Veterans Affairs Medical Center and the Rand Corporation who headed the study. That is about the same amount of pain relief as from NSAIDs, over-the-counter nonsteroidal anti-inflammatory medication such as ibuprofen. The study also found that spinal manipulation modestly improved function. On average, patients reported greater ease and comfort in day-to-day activities, such as finding they could walk more quickly, had less difficulty turning over in bed, or slept more soundly.

It’s not clear exactly how spinal manipulation relieves back pain. But it may reposition the small joints in the spine in a way that causes less pain, according to Dr. Richard Deyo, an internist and professor of evidence-based medicine at the Oregon Health and Science University. Deyo wrote an editorial published along with the study. Another possibility, Deyo says, is that spinal manipulation may restore some material in the disk between the vertebrae, or it may simply relax muscles, which could be important. There may also be mind-body interaction that comes from the “laying of hands” or a trusting relationship between patients and their health care provider, he says.

Deyo notes that there are many possible treatments for lower back pain, including oral medicine, injected medicine, corsets, traction, surgery, acupuncture and massage therapy. But of about 200 treatment options, “no single treatment is clearly superior,” he says.

In another comment, by Paul Ingraham, the critical tone was much clearer: “Claiming it as a victory is one of the best examples I’ve ever seen of making lemonade out of science lemons! But I can understand the mistake, because the review itself does seem positive at first glance: the benefits of SMT are disingenuously summarized as “statistically significant” in the abstract, with no mention of clinical significance (effect size; see Statistical Significance Abuse). So the abstract sounds like good news to anyone but the most wary readers, while deep in the main text the same results are eventually conceded to be “clinically modest.” But even that seems excessively generous: personally, I need at least a 2-point improvement in pain on a scale of 10 to consider it a “modest” improvement! This is not a clearly positive review: it shows weak evidence of minor efficacy, based on “significant unexplained heterogeneity” in the results. That is, the results were all over the place — but without any impressive benefits reported by any study — and the mixture can’t be explained by any obvious, measurable factor. This probably means there’s just a lot of noise in the data, too many things that are at least as influential as the treatment itself. Or — more optimistically — it could mean that SMT is “just” disappointingly mediocre on average, but might have more potent benefits in a minority of cases (that no one seems to be able to reliably identify). Far from being good news, this review continues a strong trend (eg Rubinstein 2012) of damning SMT with faint praise, and also adds evidence of backfiring to the mix. Although fortunately “no RCT reported any serious adverse event,” it seems that minor harms were legion: “increased pain, muscle stiffness, and headache were reported 50% to 67% of the time in large case series of patients treated with SMT.” That’s a lot of undesirable outcomes.
So the average patient has a roughly fifty-fifty chance of up to roughly maybe a 20% improvement… or feeling worse to some unknown degree! That does not sound like a good deal to me. It certainly doesn’t sound like good medicine.”

END OF QUOTE

As I have made clear in many previous posts, I fully agree with these latter statements and would add just three points:

  1. We know that many SMT studies fail to report adverse effects at all. It is therefore hardly surprising that no serious complications were on record. Yet we know that they do occur with sad regularity.
  2. None of the studies controlled for placebo effects. It is therefore possible – I would say even likely – that a large chunk of the observed benefit is not due to SMT per se but to a placebo response.
  3. It seems more than questionable whether the benefits of SMT outweigh its risks.

The aim of this pragmatic study was “to investigate the effectiveness of acupuncture in addition to routine care in patients with allergic asthma compared to treatment with routine care alone.”

Patients with allergic asthma were included in a controlled trial and randomized to receive up to 15 acupuncture sessions over 3 months plus routine care, or to a control group receiving routine care alone. Patients who did not consent to randomization received acupuncture treatment for the first 3 months and were followed as a cohort. All trial patients were allowed to receive routine care in addition to study treatment. The primary endpoint was the asthma quality of life questionnaire (AQLQ, range: 1–7) at 3 months. Secondary endpoints included general health related to quality of life (Short-Form-36, SF-36, range 0–100). Outcome parameters were assessed at baseline and at 3 and 6 months.

A total of 1,445 patients were included in the analysis (184 randomized to acupuncture plus routine care, 173 randomized to routine care alone, and 1,088 in the non-randomized acupuncture plus routine care group). In the randomized part, acupuncture was associated with an improvement in the AQLQ score compared to the control group (difference acupuncture vs. control group 0.7 [95% confidence interval (CI) 0.5–1.0]) as well as in the physical component scale and the mental component scale of the SF-36 (physical: 2.5 [1.0–4.0]; mental 4.0 [2.1–6.0]) after 3 months. Treatment success was maintained throughout 6 months. Patients not consenting to randomization showed similar improvements to the randomized acupuncture group.

The authors concluded that in patients with allergic asthma, additional acupuncture treatment to routine care was associated with increased disease-specific and health-related quality of life compared to treatment with routine care alone.

We have been over this so many times (see for instance here, here and here) that I am almost a little embarrassed to explain it again: it is fairly easy to design an RCT such that it can only produce a positive result. The currently most popular way to achieve this aim in alternative medicine research is to do an ‘A+B versus B’ study, where A = the experimental treatment, and B = routine care. As A always amounts to more than nothing – in the above trial acupuncture would have placebo effects and the extra attention would also amount to something – A+B must always be more than B alone. The easiest way of thinking about this is to imagine that A and B are both finite amounts of money; everyone can understand that A+B must always be more than B!

Why then do acupuncture researchers not get the point? Are they that stupid? I happen to know some of the authors of the above paper personally, and I can assure you, they are not stupid!

So, why?

I am afraid there is only one reason I can think of: they know perfectly well that such an RCT can only produce a positive finding, and precisely that is their reason for conducting such a study. In other words, they are not using science to test a hypothesis, they deliberately abuse it to promote their pet therapy or hypothesis.

As I stated above, it is fairly easy to design an RCT such that it can only produce a positive result. Yet, it is arguably also unethical, perhaps even fraudulent, to do this. In my view, such RCTs amount to pseudoscience and scientific misconduct.
