
The UK General Chiropractic Council (GCC) has commissioned a survey of chiropractic patients’ views of their care. Initially, 600 chiropractors were approached to recruit patients, but only 47 volunteered to participate. Eventually, 70 chiropractors consented and recruited a total of 544 patients, who completed the questionnaire in 2012. The final report of this exercise has just become available.

I have to admit, I found it intensely boring. This is mainly because the questions asked avoided contentious issues. One has to dig deep to find nuggets of interest. Here are some of the findings that I thought were perhaps mildly intriguing:

15% of all patients did not receive information about possible adverse effects (AEs) of their treatment.

20% received no explanations why investigations such as X-rays were necessary and what risks they carried.

17% were not told how much their treatment would cost during the initial consultation.

38% were not informed about complaint procedures.

9% were not told about further treatment options for their condition.

18% said they were not referred to another health care professional when the condition failed to improve.

20% noted that the chiropractor did not liaise with the patient’s GP.

I think one has to take such surveys with more than just a pinch of salt. At best, they give a vague impression of what patients believe. At worst, they are not worth the paper they are printed on.

Perhaps the most remarkable finding from the report is the unwillingness of chiropractors to co-operate with the GCC which, after all, is their regulating body. To recruit only ~10% of all UK chiropractors is more than disappointing. This low response rate will inevitably impact on the validity of the results and the conclusions.

It can be assumed that those practitioners who did volunteer are a self-selected sample and thus not representative of the UK chiropractic profession; they might be especially good, correct or obedient. This, in turn, also applies to the sample of patients recruited for this research. If that is so, the picture that emerged from the survey is likely to be far too positive.

In any case, with a response rate of only ~10%, any survey is next to useless. I would therefore put it in the category of ‘not worth the paper it is printed on’.


If I had a pint of beer for every time I have been accused of bias against chiropractic, I would rarely be sober. The thing is that I do like to report about decent research in this field, and almost every day I look out for new articles which might be worth writing about – but they are like gold dust!

“Huuuuuuuuh, that just shows how very biased he is” I hear the chiro community shout. Well, let’s put my hypothesis to the test. Here is a complete list of recent (2013) Medline-listed articles on chiropractic; no omissions, no bias, just facts (for clarity, the PubMed link is listed first, then the title in bold, followed by a short comment in italics):

http://www.ncbi.nlm.nih.gov/pubmed/23360894

Towards establishing an occupational threshold for cumulative shear force in the vertebral joint – An in vitro evaluation of a risk factor for spondylolytic fractures using porcine specimens.

This is an interesting study of the shear forces observed in porcine vertebral specimens during maneuvers which might resemble spinal manipulation in humans. The authors conclude that “Our investigation suggested that pars interarticularis damage may begin non-linearly accumulating with shear forces between 20% and 40% of failure tolerance (approximately 430 to 860N)”.

http://www.ncbi.nlm.nih.gov/pubmed/23337706

Development of an equation for calculating vertebral shear failure tolerance without destructive mechanical testing using iterative linear regression.

This is a mathematical modelling of the forces that might act on the spine during manipulation. The authors draw no conclusions.

http://www.ncbi.nlm.nih.gov/pubmed/23324133

Collaborative Care for Older Adults with low back pain by family medicine physicians and doctors of chiropractic (COCOA): study protocol for a randomized controlled trial.

This is merely the published protocol of a trial that is about to commence.

http://www.ncbi.nlm.nih.gov/pubmed/23323682

Military Report More Complementary and Alternative Medicine Use than Civilians.

This is a survey which suggests that ~45% of all military personnel use some form of alternative medicine.

http://www.ncbi.nlm.nih.gov/pubmed/23319526

Complementary and Alternative Medicine Use by Pediatric Specialty Outpatients

This is another survey; it concludes that “CAM use is high among pediatric specialty clinic outpatients”.

http://www.ncbi.nlm.nih.gov/pubmed/23311664

Extending ICPC-2 PLUS terminology to develop a classification system specific for the study of chiropractic encounters

This is an article on chiropractic terminology which concludes that “existing ICPC-2 PLUS terminology could not fully represent chiropractic practice, adding terms specific to chiropractic enabled coding of a large number of chiropractic encounters at the desired level. Further, the new system attempted to record the diversity among chiropractic encounters while enabling generalisation for reporting where required. COAST is ongoing, and as such, any further encounters received from chiropractors will enable addition and refinement of ICPC-2 PLUS (Chiro)”.

http://www.ncbi.nlm.nih.gov/pubmed/23297270

US Spending On Complementary And Alternative Medicine During 2002-08 Plateaued, Suggesting Role In Reformed Health System

This is a study of the money spent on alternative medicine concluding as follows “Should some forms of complementary and alternative medicine-for example, chiropractic care for back pain-be proven more efficient than allopathic and specialty medicine, the inclusion of complementary and alternative medicine providers in new delivery systems such as accountable care organizations could help slow growth in national health care spending”

http://www.ncbi.nlm.nih.gov/pubmed/23289610

A Royal Chartered College joins Chiropractic & Manual Therapies.

This is a short comment on the fact that a chiro institution received a Royal Charter.

http://www.ncbi.nlm.nih.gov/pubmed/23242960

Exposure-adjusted incidence rates and severity of competition injuries in Australian amateur taekwondo athletes: a 2-year prospective study.

This is a study by chiros to determine the frequency of injuries in taekwondo athletes.

The first thing that strikes me is the paucity of articles; OK, we are talking about just January 2013, but, by comparison, most medical fields, such as neurology or rheumatology, have produced hundreds of articles during this period, and even the field of acupuncture research has generated about three times as many.

The second and much more important point is that I fail to see much chiropractic research that is truly meaningful or tells us anything about what I consider the most urgent questions in this area: Do chiropractic interventions work? Are they safe?

My last point is equally critical. After reading the 9 papers, I have to say honestly that none of them impressed me in terms of scientific rigor.

So, what does this tiny investigation suggest? Not a lot, I have to admit, but I think it supports the hypothesis that research into chiropractic is neither very active nor of high quality, and that it fails to address the most urgent questions.

On January 27, 1945, the concentration camp in Auschwitz was liberated. By May of the same year, around 20 similar camps had been discovered. What they revealed is so shocking that it is difficult to put into words.

Today, on ‘HOLOCAUST MEMORIAL DAY’, I quote (shortened and slightly modified) from articles I published many years ago (references can be found in the originals) to remind us of the unspeakable atrocities that occurred during the Nazi period and of the crucial role the German medical profession played in them.

The Nazis’ euthanasia programme, also known as “Action T4”, started in specialized medical departments in 1939. Initially, it was aimed at children suffering from “idiocy, Down’s syndrome, hydrocephalus and other abnormalities”. By the end of 1939, the programme was extended to adults “unworthy of living”. We estimate that, by the time it was stopped, more than 70,000 patients had been killed.

Action T4 (named after its address: Tiergarten Strasse 4) was the Berlin headquarters of the euthanasia programme. It was run by approximately 50 physicians who, amongst other activities, sent questionnaires to (mostly psychiatric) hospitals urging them to return lists of patients for euthanasia. The victims were transported to specialized centers where they were gassed or poisoned. Action T4 was thus responsible for medically supervised, large-scale murder. Its true significance, however, lies elsewhere. Action T4 turned out to be nothing less than a “pilot project” for the extermination of millions of prisoners of the concentration camps.

The T4 units had developed the technology for killing on an industrial scale. It was only with this know-how that the total extermination of all Jews of the Reich could be planned. This truly monstrous task required medical expertise.

Almost without exception, those physicians who had worked for T4 went on to take charge of what the Nazis called the ‘Final Solution’. While Action T4 had killed thousands, its offspring would murder millions under the expert direction of Nazi doctors.

The medical profession’s role in these crimes was critical and essential. German physicians had been involved at all levels and stages. They had created and embraced the pseudo-science of race hygiene. They were instrumental in developing it further into applied racism. They had generated the know-how of mass extermination. Finally, they also performed outrageously cruel and criminal experiments under the guise of scientific inquiry [see below]. German doctors had thus betrayed all the ideals medicine had previously stood for, and had become involved in criminal activities unprecedented in the history of medicine (full details and references on all of this are provided in my article, see link above).

Alternative medicine

It is well-documented that alternative medicine was strongly supported by the Nazis. The general belief is that this had nothing to do with the sickening atrocities of this period. I believe that this assumption is not entirely correct. In 2001, I published an article which reviewed this subject; I take the liberty of borrowing from it here.

Based on a general movement in favour of all things natural, a powerful trend towards natural ways of healing had developed in the 19th century. By 1930, this had led to a situation in Germany where roughly as many lay practitioners of alternative medicine as conventional doctors were in practice, which created considerable tensions between the two camps. To re-unify German medicine under the banner of the ‘Neue Deutsche Heilkunde’ (New German Medicine), Nazi officials eventually decided to create the profession of the ‘Heilpraktiker’ (healing practitioner). Heilpraktiker were not allowed to train students, and their profession was thus meant to become extinct within one generation; Goebbels spoke of having created the cradle and the grave of the Heilpraktiker. However, after 1945, this decision was challenged in the courts and eventually over-turned – and this is why Heilpraktiker are still thriving today.

The ‘flagship’ of the ‘Neue Deutsche Heilkunde’ was the ‘Rudolf Hess Krankenhaus’ in Dresden (which was renamed the ‘Gerhard Wagner Krankenhaus’ after Hess’ flight to the UK). It represented a full integration of alternative and orthodox medicine.

‘Research’

An example of systematic research into alternative medicine is the Nazi government’s project to validate homoeopathy. The data of this massive research programme are now lost (some speculate that homeopaths made them disappear) but, according to an eye-witness report, its results were entirely negative (full details and references on alternative medicine in the Third Reich are in the article cited above).

There is, of course, plenty of literature on the subject of Nazi ‘research’ (actually, it was pseudo-research) and the unspeakable crimes it entailed. By contrast, there is almost no published evidence that these activities in any way included alternative medicine, and the general opinion seems to be that there are no connections whatsoever. I fear that this notion might be erroneous.

As far as I can make out, no systematic study of the subject has so far been published, but I found several hints and indications that the criminal experiments of Nazi doctors also involved alternative medicine (the sources are provided in my articles cited above or in the links provided below). Here are but a few leads:

Dr Wagner, the chief medical officer of the Nazis, was a dedicated and most active proponent of alternative medicine.

Doctors in the alternative “Rudolf Hess Krankenhaus” [see above] experimented on speeding up the recovery of wounded soldiers, on curing syphilis with fasting, and on various other projects to help the war effort.

The Dachau concentration camp housed the largest plantation of medicinal herbs in Germany.

Dr Madaus (founder of the still existing company for natural medicines by the same name) experimented on the sterilisation of humans with herbal and homeopathic remedies, a project that was deemed of great importance for controlling the predicted population growth in the East of the expanding Reich.

Dr Grawitz infected Dachau prisoners with various pathogens to test the effectiveness of homeopathic remedies.

Schuessler salts were also tested on concentration camp inmates.

So, why bring all of this up today? Is it not time that we let grass grow over these most disturbing events? I think not! For many years, I actively researched this area (you can find many of my articles on Medline) because I am convinced that the unprecedented horrors of Nazi medicine need to be told and re-told – not just on HOLOCAUST MEMORIAL DAY, but continually. This, I hope, will minimize the risk of such incredible abuses ever happening again.

As I am drafting this post, I am in a plane flying back from Finland. The in-flight meal reminded me of the fact that no food is so delicious that it cannot be spoilt by the addition of too many capers. In turn, this made me think about the paper I happened to be reading at the time, and I arrived at the following theory: no trial design is so rigorous that it cannot be turned into something utterly nonsensical by the addition of a few amateur researchers.

The paper I was reading when this idea occurred to me was a randomised, triple-blind, placebo-controlled cross-over trial of homeopathy. Sounds rigorous and top quality? Yes, but wait!

Essentially, the authors recruited 86 volunteers who all claimed to be suffering from “mental fatigue” and treated them with Kali-Phos 6X or placebo for one week (X-potencies signify dilution steps of 1:10, and 6X therefore means that the salt had been diluted 1:1,000,000). Subsequently, the volunteers were crossed over to receive the other treatment for one week.
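
For readers who like to check such numbers, here is a trivial sketch of the potency arithmetic (Python, purely illustrative):

```python
# Illustrative arithmetic for homeopathic X-potencies (decimal dilutions).
# Each X step dilutes the preparation 1:10, so an nX potency corresponds
# to a total dilution factor of 10**n.

def x_potency_dilution(n: int) -> int:
    """Return the total dilution factor of an nX potency."""
    return 10 ** n

for n in (1, 6, 12):
    print(f"{n}X -> diluted 1:{x_potency_dilution(n):,}")

# 6X -> diluted 1:1,000,000, i.e. one part salt per million parts solvent.
```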

The results failed to show that the homeopathic medication had any effect (not even homeopaths can be surprised about this!). The authors concluded that Kali-Phos was not effective but cautioned that, because of the possibility of a type II error, they might have missed an effect which, in truth, does exist.
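
Whether that caveat carries much weight is a question of statistical power, which is easy to estimate by simulation. The sketch below assumes a paired analysis of the cross-over data and a small standardised effect of 0.2 – both are my assumptions, not figures from the trial:

```python
import numpy as np
from scipy import stats

# Rough power check for a paired (cross-over) comparison with n = 86 volunteers.
# The standardised effect size of 0.2 is an assumption for illustration,
# not a figure taken from the trial itself.
rng = np.random.default_rng(42)
n, effect, sims, alpha = 86, 0.2, 10_000, 0.05

hits = 0
for _ in range(sims):
    diffs = rng.normal(loc=effect, scale=1.0, size=n)  # within-subject differences
    if stats.ttest_1samp(diffs, 0.0).pvalue < alpha:
        hits += 1

print(f"Estimated power: {hits / sims:.0%}")
# Roughly 45% under these assumptions: such a trial would miss a small true
# effect more often than not, so a "negative" result was likely either way.
```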

In my view, this article provides an almost classic example of how time, money and other resources can be wasted in a pretence of conducting reasonable research. As we all know, clinical trials are usually tools for testing hypotheses. But what is the hypothesis tested here?

According to the authors, the aim was to “assess the effectiveness of Kali-Phos 6X for attention problems associated with mental fatigue”. In other words, their hypothesis was that this remedy is effective for treating the symptom of mental fatigue. This notion, I would claim, is not a scientific hypothesis; it is a foolish conjecture!

Arguably, any hypothesis about the effectiveness of a highly diluted homeopathic remedy is mere wishful thinking. But, if there were at least some promising data, some might conclude that a trial was justified. By way of justification for the RCT in question, the authors inform us that one previous trial had suggested an effect; however, that study did not employ just Kali-Phos but a combined homeopathic preparation which contained Kalium-Phos as one of several components. Thus the authors’ “hypothesis” does not even amount to a hunch, not even to a slight inkling! To me, it is less than a shot in the dark fired by blind optimists – nobody should be surprised that the bullet failed to hit anything.

It could even be that the investigators themselves dimly realised that something was amiss with the basis of their study; this might be the reason why they called it an “exploratory trial”. But an exploratory study is one without a hypothesis, and the trial in question does have a hypothesis of sorts – only that it is rubbish. And what exactly did the authors mean to explore anyway?

That self-reported mental fatigue in healthy volunteers is a condition that can be medicalised such that it merits treatment?

That the test they used for quantifying its severity is adequate?

That a homeopathic remedy with virtually no active ingredient generates outcomes which are different from placebo?

That Hahnemann’s teaching of homeopathy was nonsense and can thus be discarded (he would have sharply condemned the approach of treating all volunteers with the same remedy, as it contradicts many of his concepts)?

That funding bodies can be fooled into paying for even the most ridiculous trial?

That ethics-committees might pass applications which are pure nonsense and which are thus unethical?

A scientific hypothesis should be more than a vague hunch; at its simplest, it aims to explain an observation or phenomenon, and it ought to have certain features which many alt med researchers seem to have never heard of. If they test nonsense, the result can only be nonsense.

The issue of conducting research that does not make much sense is far from trivial, particularly as so much (I would say most) alt med research is of this or even worse calibre (if you do not believe me, please go on Medline and see for yourself how many of the recent articles in the category “complementary alternative medicine” truly contribute to knowledge worth knowing). It would therefore be easy to cite more hypothesis-free trials of homeopathy.

One recent example from Germany will have to suffice: in this trial, the only justification for conducting a full-blown RCT was that the manufacturer of the remedy allegedly knew of a few unpublished case reports which suggested that the treatment worked – and, of course, the results of the RCT eventually showed that it didn’t. Anyone with a background in science might have predicted that outcome – which is why such trials are so deplorably wasteful.

Research funds are increasingly scarce, and they must not be spent on nonsensical projects! The money and time should be invested more fruitfully elsewhere. Participants of clinical trials give their cooperation willingly; but if they learn that their efforts have been wasted unnecessarily, they might think twice next time they are asked. Thus nonsensical research may have knock-on effects with far-reaching consequences.

Being a researcher is at least as serious a profession as most other occupations; perhaps we should stop allowing total amateurs to waste money while playing at being professionals. If someone driving a car does something seriously wrong, we take away his licence; why is there no similar mechanism for inadequate researchers, funders and ethics committees which prevents them from doing further damage?

At the very minimum, we should critically evaluate the hypotheses that applicants for research funds propose to test. Had someone done this properly in relation to the two above-named studies, we would have saved about £150,000 per trial (my estimate). But as it stands, the authors will probably claim that they have produced fascinating findings which urgently need further investigation – and we (normally you and I) will have to spend three times the above-named amount (again, my estimate) to finance a “definitive” trial. Nonsense, I am afraid, tends to beget more nonsense.


In my last post, we discussed the “A+B versus B” trial design as a tool to produce false positive results. This method is currently very popular in alternative medicine, yet it is by no means the only approach that can mislead us. Today, let’s look at other popular options with a view to protecting ourselves against trialists who naively or wilfully might fool us.

The crucial flaw of the “A+B versus B” design is that it fails to account for non-specific effects. If the patients in the experimental group experience better outcomes than the control group, this difference could well be due to effects that are unrelated to the experimental treatment. There are, of course, several further ways to ignore non-specific effects in clinical research. The simplest option is to include no control group at all. Homeopaths, for instance, are very proud of studies which show that ~70% of their patients experience benefit after taking their remedies. This type of result tends to impress journalists, politicians and other people who fail to realise that such a result might be due to a host of factors, e.g. the placebo-effect, the natural history of the disease, regression towards the mean or treatments which patients self-administered while taking the homeopathic remedies. It is therefore misleading to make causal inferences from such data.
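
How easily such figures arise without any true treatment effect can be shown with a few lines of simulation. In the sketch below (all numbers invented for illustration), regression towards the mean does all the work: patients consult during a flare-up, and at follow-up most have “improved” although they received nothing at all:

```python
import numpy as np

# Sketch: why "70% of our patients improved" proves nothing by itself.
# Patients consult when a fluctuating symptom is unusually bad; at follow-up
# the symptom has regressed towards its usual level -- no treatment required.
rng = np.random.default_rng(0)
n = 1_000

usual_severity = rng.normal(50, 10, n)                    # each patient's typical score
at_consultation = usual_severity + rng.normal(15, 10, n)  # they consult during a flare
at_follow_up = usual_severity + rng.normal(0, 10, n)      # an ordinary day later on

improved = (at_follow_up < at_consultation).mean()
print(f"'Responders' despite zero treatment effect: {improved:.0%}")  # well above 70%
```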

Another easy method to generate false positive results is to omit blinding. The purpose of blinding the patient, the therapist and the evaluator of the outcomes in clinical trials is to make sure that expectation is not the cause of, or a contributor to, the outcome. They say that expectation can move mountains; this might be an exaggeration, but it can certainly influence the result of a clinical trial. Patients who hope for a cure regularly do get better even if the therapy they receive is useless, and therapists as well as evaluators of the outcomes tend to view the results through rose-tinted spectacles if they have preconceived ideas about the experimental treatment. Similarly, the parents of a child or the owners of an animal can transfer their expectations, and this is one of several reasons why it is incorrect to claim that children and animals are immune to placebo-effects.

Failure to randomise is another source of bias which can make an ineffective therapy look like an effective one when tested in a clinical trial. If we allow patients or trialists to select or choose which patients receive the experimental and which get the control-treatment, it is likely that the two groups differ in a number of variables. Some of these variables might, in turn, impact on the outcome. If, for instance, doctors allocate their patients to the experimental and control groups, they might select those who will respond to the former and those who don’t to the latter. This may not happen with malicious intent but through intuition or instinct: responsible health care professionals want those patients who, in their experience, have the best chances to benefit from a given treatment to receive that treatment. Only randomisation can, when done properly, make sure we are comparing comparable groups of patients, and non-randomisation is likely to produce misleading findings.
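
Again, a short simulation (with invented numbers) makes the point: if clinicians steer good-prognosis patients towards the experimental arm, an inert therapy ends up looking clearly better:

```python
import numpy as np

# Sketch: without randomisation, an inert therapy can look effective.
# Here clinicians steer good-prognosis patients towards the experimental arm.
rng = np.random.default_rng(1)
n = 200

prognosis = rng.normal(0.0, 1.0, n)            # latent chance of a good outcome
# Biased allocation: the better the prognosis, the likelier the new therapy.
p_experimental = 1.0 / (1.0 + np.exp(-2.0 * prognosis))
experimental = rng.random(n) < p_experimental

outcome = prognosis + rng.normal(0.0, 1.0, n)  # the therapy itself adds nothing

print(f"mean outcome, experimental arm: {outcome[experimental].mean():+.2f}")
print(f"mean outcome, control arm:      {outcome[~experimental].mean():+.2f}")
# The experimental arm scores visibly higher although the treatment is inert.
```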

While these options for producing false positives are all too obvious, the next possibility is slightly more intriguing. It refers to studies which do not test whether an experimental treatment is superior to another one (often called superiority trials), but to investigations attempting to assess whether it is equivalent to a therapy that is generally accepted to be effective. The idea is that, if both treatments produce the same or similarly positive results, both must be effective. For instance, such a study might compare the effects of acupuncture to a common pain-killer. Such trials are called equivalence or non-inferiority trials, and they offer a wide range of possibilities for misleading us. If, for example, such a trial does not have enough patients, it might show no difference where, in fact, there is one. Let’s consider a deliberately silly example: someone comes up with the idea to compare antibiotics to acupuncture as treatments of bacterial pneumonia in elderly patients. The researchers recruit 10 patients for each group, and the results reveal that, in one group, 2 patients died, while, in the other, the number was 3. The statistical tests show that the difference of just one patient is not statistically significant, and the authors therefore conclude that acupuncture is just as good for bacterial infections as antibiotics.
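
The arithmetic of this deliberately silly example is easy to verify; a Fisher’s exact test on the invented 2-versus-3 deaths shows how little such small numbers can tell us:

```python
from scipy.stats import fisher_exact

# The deliberately silly pneumonia example: 10 patients per arm,
# 2 deaths under antibiotics versus 3 under acupuncture.
table = [[2, 8],   # antibiotics: died, survived
         [3, 7]]   # acupuncture: died, survived

_, p_value = fisher_exact(table)
print(f"p = {p_value:.2f}")  # p = 1.00 -- "no significant difference"
# Absence of evidence here is merely absence of statistical power,
# not evidence that the two treatments are equivalent.
```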

Even trickier is the option to under-dose the treatment given to the control group in an equivalence trial. In our hypothetical example, the investigators might subsequently recruit hundreds of patients in an attempt to overcome the criticism of their first study; they then decide to administer a sub-therapeutic dose of the antibiotic in the control group. The results would then apparently confirm the researchers’ initial finding, namely that acupuncture is as good as the antibiotic for pneumonia. Acupuncturists might then claim that their treatment has been proven in a very large randomised clinical trial to be effective for treating this condition, and people who do not happen to know the correct dose of the antibiotic could easily be fooled into believing them.

Obviously, the results would be more impressive, if the control group in an equivalence trial received a therapy which is not just ineffective but actually harmful. In such a scenario, the most useless or even slightly detrimental treatment would appear to be effective simply because it is equivalent to or less harmful than the comparator.

A variation on this theme is the plethora of controlled clinical trials which compare one unproven therapy to another unproven treatment. Predictably, the results indicate that there is no difference in the clinical outcome experienced by the patients in the two groups. Enthusiastic researchers then tend to conclude that this proves both treatments to be equally effective.

Another option for creating misleadingly positive findings is to cherry-pick the results. Most trials have many outcome measures; for instance, a study of acupuncture for pain-control might quantify pain in half a dozen different ways; it might also measure the length of the treatment until pain has subsided, the amount of medication the patients took in addition to receiving acupuncture, the days off work because of pain, the partner’s impression of the patient’s health status, the quality of life of the patient, the frequency of sleep being disrupted by pain etc. If the researchers then evaluate all the results, they are likely to find that one or two of them have changed in the direction they wanted. This can well be a chance finding: with the typical statistical tests, one in 20 outcome measures would produce a significant result purely by chance. In order to mislead us, the researchers only need to “forget” about all the negative results and focus their publication on the ones which by chance have come out as they had hoped.
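
The one-in-twenty figure compounds rapidly across multiple outcomes; assuming (for simplicity) independent outcome measures, the chance of at least one spurious “significant” result grows as follows:

```python
# The multiplicity problem in three lines: the chance of at least one
# "significant" result among m independent null outcomes tested at alpha = 0.05.
alpha = 0.05
for m in (1, 6, 20):
    print(f"{m:2d} outcomes -> P(>=1 false positive) = {1 - (1 - alpha) ** m:.0%}")
# 1 -> 5%, 6 -> 26%, 20 -> 64%
```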

One foolproof method for misleading the public is to draw conclusions which are not supported by the data. Imagine you have generated squarely negative data with a trial of homeopathy. As an enthusiast of homeopathy, you are far from happy with your own findings; in addition you might have a sponsor who puts pressure on you. What can you do? The solution is simple: you only need to highlight at least one positive message in the published article. In the case of homeopathy, you could, for instance, make a major issue about the fact that the treatment was remarkably safe and cheap: not a single patient died, most were very pleased with the treatment which was not even very expensive.

And finally, there is always the possibility of overt cheating. Researchers are only human and are thus not immune to temptation. They may have conflicts of interest or may know that positive results are much easier to publish than negative ones. Certainly they want to publish their work – “publish or perish”! So, faced with disappointing results of a study, they might decide to prettify them or even invent new ones which are more pleasing to them, their peers, or their sponsors.

Am I claiming that this sort of thing only happens in alternative medicine? No! Obviously, the way to minimise the risk of such misconduct is to train researchers properly and make sure they are able to think critically. Am I suggesting that investigators of alternative medicine are often not well-trained and almost always uncritical? Yes.

Would it not be nice to have a world where everything is positive? No negative findings ever! A dream! No, it’s not a dream; it is reality, albeit a reality that exists mostly in the narrow realm of alternative medicine research. Quite a while ago, we demonstrated that journals of alternative medicine never publish negative results. Meanwhile, my colleagues investigating acupuncture, homeopathy, chiropractic etc. seem to have perfected their strategy of avoiding the embarrassment of a negative finding.

For several years now, researchers in this field have adopted a study design which is virtually certain to generate nothing but positive results. It is being employed widely by enthusiasts of placebo-therapies, and it is easy to understand why: it allows them to conduct seemingly rigorous trials which can impress decision-makers and invariably suggest that even the most useless treatment works wonders.

One of the latest examples of this type of approach is a trial where acupuncture was tested as a treatment of cancer-related fatigue. Most cancer patients suffer from this symptom, which can seriously reduce their quality of life. Unfortunately there is little conventional oncologists can do about it, and therefore alternative practitioners have a field day claiming that their interventions are effective. It goes without saying that desperate cancer victims fall for this.

In this new study, cancer patients who were suffering from fatigue were randomised to receive usual care or usual care plus regular acupuncture. The researchers then monitored the patients’ experience of fatigue and found that the acupuncture group did better than the control group. The effect was statistically significant, and an editorial in the journal where it was published called this evidence “compelling”.

Thanks to a cleverly overstated press release, news spread fast, and the study was celebrated worldwide as a major breakthrough in cancer care. Finally, most commentators felt, research had identified an effective therapy for this debilitating symptom which affects so many of the most desperate patients. Few people seemed to realise that this trial tells us next to nothing about what effects acupuncture really has on cancer-related fatigue.

In order to understand my concern, we need to look at the trial design a little closer. Imagine you have an amount of money A and your friend owns the same sum plus another amount B. Who has more money? Simple, it is, of course, your friend: A+B will always be more than A [unless B is a negative amount]. For the same reason, such “pragmatic” trials will always generate positive results [unless the treatment in question does actual harm]. Treatment as usual plus acupuncture is more than treatment as usual, and the former is therefore more than likely to produce a better result. This will be true even if acupuncture is no more than a placebo – after all, a placebo is more than nothing, and the placebo effect will impact on the outcome, particularly if we are dealing with a highly subjective symptom such as fatigue.
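
To see how reliably this design flatters a placebo, here is a minimal simulation; acupuncture is modelled as having no specific effect at all, and every number is invented for illustration:

```python
import numpy as np
from scipy import stats

# Sketch of the "A+B versus B" trap: acupuncture modelled as a pure placebo
# still produces a "significant" benefit over usual care alone.
rng = np.random.default_rng(7)
n = 150                                   # patients per arm

usual_care = rng.normal(5.0, 2.0, n)      # fatigue score, usual care only
placebo_shift = 0.8                       # non-specific effect of extra attention
usual_plus_acupuncture = rng.normal(5.0 - placebo_shift, 2.0, n)

_, p = stats.ttest_ind(usual_plus_acupuncture, usual_care)
print(f"p = {p:.4f}")
# Under these assumptions the p-value is usually well below 0.05,
# although the needles themselves contributed nothing specific.
```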

I can be fairly confident that this is more than a theory because, some time ago, we analysed all acupuncture studies with such an “A+B versus B” design. Our hypothesis was that none of these trials would generate a negative result. I probably do not need to tell you that our hypothesis was confirmed by the findings of our analysis. Theory and fact are in perfect harmony.

You might say that the above-mentioned acupuncture trial does still provide important information. Its authors certainly think so and firmly conclude that “acupuncture is an effective intervention for managing the symptom of cancer-related fatigue and improving patients’ quality of life”. Authors of similarly designed trials will most likely arrive at similar conclusions. But, if they are true, they must be important!

Are they true? Such studies appear to be rigorous – e.g. they are randomised – and thus can fool a lot of people, but they do not allow conclusions about cause and effect; in other words, they fail to show that the therapy in question has led to the observed result.

Acupuncture might be utterly ineffective as a treatment of cancer-related fatigue, and the observed outcome might be due to the extra care, to a placebo-response or to other non-specific effects. And this is much more than a theoretical concern: rolling out acupuncture across all oncology centres at high cost to us all might be entirely the wrong solution. Providing good care and warm sympathy could be much more effective as well as less expensive. Adopting acupuncture on a grand scale would also stop us looking for a treatment that is truly effective beyond a placebo – and that surely would not be in the best interest of the patient.

I have seen far too many of those bogus studies to have much patience left. They do not represent an honest test of anything, simply because we know their result even before the trial has started. They are not science but thinly disguised promotion. They are not just a waste of money, they are dangerous – because they produce misleading results – and they are thus also unethical.
