
Whenever we consider alternative medicine, we think of therapeutic interventions and tend to forget that alternative practitioners frequently employ diagnostic methods which are alien to mainstream health care. Acupuncturists, iridologists, spiritual healers, massage therapists, reflexologists, applied kinesiologists, homeopaths, chiropractors, osteopaths and many other types of alternative practitioners all have their very own ways of diagnosing what might be wrong with their patients.

The purpose of a diagnostic test or technique is, of course, to establish the presence or absence of an abnormality, condition or disease. Conventional doctors use all sorts of validated diagnostic methods, from physical examination to laboratory tests, from blood pressure measurements to X-rays. Alternative practitioners use mostly alternative methods for arriving at a diagnosis, and we should ask: how reliable are these techniques?

Anyone trying to answer this question will be surprised to find how very little reliable information on this topic exists. Scientific tests of the validity of alternative diagnostic methods are as rare as gold dust. And this is why a recently published article is, in my view, of particular importance and value.

The aim of this study was to evaluate the inter-rater reliability of pulse diagnosis as performed by Traditional Korean Medicine (TKM) clinicians. A total of 658 patients with stroke who were admitted to Korean oriental medical university hospitals were included. Each patient was seen by two TKM experts for an examination of the pulse signs – pulse diagnosis is regularly used by practitioners of TKM and Traditional Chinese Medicine (TCM), and it is entirely different from what conventional doctors do when they feel the pulse of a patient. Inter-observer reliability was assessed using three methods: simple percentage agreement, the kappa value, and the AC(1) statistic. The kappa values indicated that the inter-observer reliability in evaluating the pulse signs ranged from poor to moderate for some signs and from moderate to good for others (with the exceptions of the ‘rough’ and ‘sunken’ pulses), whereas the AC(1) analysis suggested that agreement between the two experts was generally high (with the exception of the ‘slippery’ pulse).
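For readers unfamiliar with these statistics, a brief explanation may help: simple percentage agreement merely counts how often the two experts give the same verdict, Cohen’s kappa corrects that figure for the agreement one would expect by chance alone, and the AC(1) statistic (Gwet’s agreement coefficient) is an alternative chance-corrected index designed to be more robust when one verdict dominates. The following little sketch (in Python, using invented ratings purely for illustration; these are not data from the Korean study) shows how raw agreement and kappa can diverge:

```python
# Illustration only: two hypothetical raters judging the same 10 patients
# for the presence (1) or absence (0) of a 'slippery pulse'.
rater_a = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
rater_b = [1, 1, 1, 1, 1, 1, 1, 0, 1, 1]
n = len(rater_a)

# Simple percentage agreement: how often the two raters concur.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement for Cohen's kappa: probability that both say 'yes'
# or both say 'no' if each rater judged independently of the other.
p_yes_a, p_yes_b = sum(rater_a) / n, sum(rater_b) / n
p_chance = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)

# Cohen's kappa: agreement beyond chance, rescaled.
kappa = (agreement - p_chance) / (1 - p_chance)

print(f"percentage agreement: {agreement:.2f}")  # 0.80 -- looks high
print(f"Cohen's kappa:        {kappa:.2f}")      # about -0.11 -- poor
```

With verdicts as lopsided as in this toy example, raw agreement looks impressive while kappa collapses, which is one reason why the kappa and AC(1) figures in the Korean study can paint such different pictures.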

Based on these findings, the authors drew the following conclusion: “Pulse diagnosis is regarded as one of the most important procedures in TKM… This study reveals that the inter-observer reliability in making a pulse diagnosis in stroke patients is not particularly high when objectively quantified. Additional research is needed to help reduce this lack of reliability for various portions of the pulse diagnosis.”

This indicates, I think, that the researchers (who are themselves practitioners of TCM!) are not impressed with the inter-rater reliability of the most commonly used diagnostic tool in TCM/TKM. Imagine this to be true for a commonly used test in conventional medicine; imagine, for instance, that one doctor measuring your blood pressure produces entirely different readings than the next one. Hardly acceptable, don’t you think?

And, of course, inter-rater reliability would be only one of several preconditions for their diagnostic methods to be valid. Other essential preconditions for diagnostic tests to be of value are their specificity and their sensitivity; do they discriminate between healthy and unhealthy, and are they capable of differentiating between severely abnormal findings and those that are just a little out of the normal range?
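To make these two properties concrete, here is how they are calculated; the numbers below are entirely hypothetical and serve only to illustrate the arithmetic (a minimal sketch in Python):

```python
# Hypothetical results of a diagnostic test applied to 200 patients,
# 50 of whom truly have the condition (as judged by a reference standard).
true_positives  = 40   # diseased patients correctly flagged by the test
false_negatives = 10   # diseased patients the test missed
true_negatives  = 120  # healthy people correctly given the all-clear
false_positives = 30   # healthy people wrongly flagged as diseased

# Sensitivity: of those who HAVE the condition, how many does the test find?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of those who do NOT have it, how many does the test clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity: {sensitivity:.0%}")  # 80%
print(f"specificity: {specificity:.0%}")  # 80%
```

For virtually none of the alternative diagnostic techniques mentioned above do we have such figures, which means we simply do not know how often they label the healthy as ill or the ill as healthy.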

Until we have answers to all the open questions about each specific alternative diagnostic method, it would be unwise to pretend these tests are valid. Imagine a doctor prescribing a life-long anti-hypertensive therapy on the basis of a blood pressure reading that is little more than guess-work!

Since non-validated diagnostic tests can generate both false positive and false negative results, the danger of using them should not be underestimated. In a way, invalid diagnostic tests are akin to bogus bomb-detectors (which made headlines recently): both are techniques to identify a problem. If the method generates a false positive result, an alert will be issued in vain, people will get anxious for nothing, time and money will be lost, etc. If the method generates a false negative result, we will assume that we are safe while, in fact, we are not. In extreme cases, such an error will cost lives.

It is difficult, I’d say, to call those ‘experts’ who advocate using such tests anything other than irresponsible. And it is even more difficult to have any confidence in the treatments that might be administered on the basis of such diagnostic methods, wouldn’t you agree?

This post has an odd title and addresses an odd subject. I am sure some people reading it will ask themselves “has he finally gone potty; is he a bit xenophobic, chauvinistic, or what?” I can assure you none of the above is the case.

For many years, I have been asked to peer-review Chinese systematic reviews and meta-analyses of TCM-trials submitted to various journals and to the Cochrane Collaboration for publication, and I estimate that around 300 such articles are available today. Initially, I thought they were a valuable contribution to our knowledge, particularly for the many of us who cannot read Chinese. I hoped they might provide reliable information about this huge and potentially important section of the TCM-evidence. After doing this type of work for some time, I became more and more frustrated; now I have decided not to accept this task any longer – not because it is too much trouble, but because I have come to the conclusion that these articles are far less helpful than I had once assumed; in fact, I now fear that they are counter-productive.

In order to better understand what I mean, it might be best to use an example; this recent systematic review seems as good for that purpose as any.

Its Chinese authors “hypothesized that the eligible trials would provide evidence of the effect of Chinese herbs on bone mineral density (BMD) and the therapeutic benefits of Chinese medicine treatment in patients with bone loss”. Randomized controlled trials (RCTs) were thus retrieved for a systematic review from Medline and 8 Chinese databases. The authors identified 12 RCTs involving a total of 1816 patients. The studies compared Chinese herbs with placebo or standard anti-osteoporotic therapy. The pooled data from these RCTs showed that the change in BMD in the spine was more pronounced with Chinese herbs compared to the effects noted with placebo. Also, in the femoral neck, Chinese herbs generated significantly higher increments of BMD compared to placebo. Compared to conventional anti-osteoporotic drugs, Chinese herbs generated greater BMD changes.

In their abstract, the part of the paper that most readers access, the authors reached the following conclusions: “Our results demonstrated that Chinese herb significantly increased lumbar spine BMD as compared to the placebo or other standard anti-osteoporotic drugs.” In the article itself, we find this more detailed conclusion: “We conclude that Chinese herbs substantially increased BMD of the lumbar spine compared to placebo or anti-osteoporotic drugs as indicated in the current clinical reports on osteoporosis treatment. Long term of Chinese herbs over 12 months of treatment duration may increase BMD in the hip more effectively. However, further studies are needed to corroborate the positive effect of increasing the duration of Chinese herbs on outcome as the results in this analysis are based on indirect comparisons. To date there are no studies available that compare Chinese herbs, Chinese herbs plus anti-osteoporotic drugs, and anti-osteoporotic drug versus placebo in a factorial design. Consequently, we are unable to draw any conclusions on the possible superiority of Chinese herbs plus anti-osteoporotic drug versus anti-osteoporotic drug or Chinese herb alone in the context of BMD.”

Most readers will feel that this evidence is quite impressive and amazingly solid; they might therefore advocate routinely using Chinese herbs for the common and difficult-to-treat problem of osteoporosis. The integration of TCM might avoid lots of human suffering, prolong the life of many elderly patients, and save us all a lot of money. Why then am I not at all convinced?

The first thing to notice is the fact that we do not really know which of the ~7000 different Chinese herbs should be used. The article tells us surprisingly little about this crucial point. And even if we manage to study this question in more depth, we are bound to get thoroughly confused; there are simply too many herbal mixtures and patent medicines to easily identify the most promising candidates.

The second and more important hurdle to making sense of these data is the fact that most of the primary studies originate from inaccessible Chinese journals and were published in Chinese, which, of course, few people in the West can understand. This is entirely our fault, some might argue, but it does mean that we have to believe the authors, take their words at face value, and cannot check the original data. You may think this is fine; after all, the paper has gone through a rigorous peer-review process where it has been thoroughly checked by several top experts in the field. This, however, is a fallacy; like you and me, the peer-reviewers might not read Chinese either! (I don’t, and I reviewed quite a few of these papers; in some instances, I even asked for translations of the originals to do the job properly, but this request was understandably turned down.) In all likelihood, the above paper and most similar articles have not been properly peer-reviewed at all.

The third and perhaps most crucial point could only be fully appreciated if we were able to access and understand the primary studies; it relates to the quality of the original RCTs summarised in such systematic reviews. The abstract of the present paper tells us nothing at all about this issue. In the paper, however, we do find a formal assessment of the studies’ risk of bias which shows that the quality of the included RCTs was poor to very poor. We also find a short but revealing sentence: “The reports of all trials mentioned randomization, but only seven described the method of randomization.” This remark is much more significant than it may seem: we have shown that Chinese studies use this terminology in a rather adventurous way; reviewing about 2000 of these allegedly randomised trials, we found that many Chinese authors call a trial “randomised” even in the absence of a control group (one cannot randomise patients and have no control group)! They seem to like the term because it is fashionable and makes publication of their work easier. We thus have good reason to fear that some/many/most of the studies were not RCTs at all.

The fourth issue that needs mentioning is the fact that very close to 100% of all Chinese TCM-trials report positive findings. This means that either TCM is effective for every indication it is tested for (most unlikely, not least because there are many negative non-Chinese trials of TCM), or there is something very fundamentally wrong with Chinese research into TCM. Over the years, I have had several Chinese co-workers in my team and was invariably impressed by their ability to work hard and efficiently; we often discussed the possible reasons for the extraordinary phenomenon of 0% negative Chinese trials. The most plausible answer they offered was this: it would be most impolite for a Chinese researcher to produce findings which contradict the opinion of his/her peers.
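A rough back-of-envelope calculation illustrates just how implausible a 0% rate of negative trials is. Even if every treatment tested truly worked, individual trials would still occasionally miss the effect simply because of limited statistical power. The sketch below (Python, with assumed figures chosen purely for illustration) makes the point:

```python
# Assumptions for illustration only: every treatment tested truly works,
# and each trial has 80% power (i.e. a 20% chance of a false-negative result).
power = 0.80
n_trials = 100  # a hypothetical number of independent trials

# Probability that every single trial comes out positive:
p_all_positive = power ** n_trials
print(f"P(all {n_trials} trials positive) = {p_all_positive:.1e}")
# roughly 2e-10 -- vanishingly small, even under these generous assumptions
```

In reality, the number of published Chinese TCM-trials runs into the thousands, and not every tested treatment can plausibly be effective; an essentially unbroken run of positive findings therefore points to bias rather than to efficacy.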

In view of these concerns, can we trust the conclusions of such systematic reviews? I don’t think so – and this is why I have problems with research of this nature. If there are good reasons to doubt their conclusions, these reviews might misinform us systematically, they might hinder rather than further progress, and they might send us up the garden path. This could well be in the commercial interest of the Chinese multi-billion dollar TCM-industry, but it would certainly not be in the interest of patients and good health care.

Clinical trials of acupuncture can be quite challenging. In particular, it is often difficult to make sure that any observed outcome is truly due to the treatment and not caused by some other factor(s). A recently published study shows just how tricky this can be.

A new RCT has all (well, almost all) the features of a rigorous study. It tested the effects of acupuncture in patients suffering from hay fever. The German investigators recruited 46 specialized physicians in 6 hospital clinics and 32 private outpatient clinics. In total, 422 patients with IgE sensitization to birch and grass pollen were randomized into three groups: 1) acupuncture plus rescue medication (RM) (n= 212), 2) sham acupuncture plus RM (n= 102), or 3) RM alone (n= 108). Twelve acupuncture sessions were provided in groups 1 and 2 over 8 weeks. The outcome measures included changes in the Rhinitis Quality of Life Questionnaire (RQLQ) overall score and the RM score (RMS) from baseline to weeks 7, 8 and 16 in the first year as well as week 8 in the second year after randomization.

Compared with sham acupuncture and with RM, acupuncture was associated with improvement in RQLQ score and RMS. There were no differences after 16 weeks in the first year. After the 8-week follow-up phase in the second year, small improvements favoring real acupuncture over sham were noted.

Based on these results, the authors concluded that “acupuncture led to statistically significant improvements in disease-specific quality of life and antihistamine use measures after 8 weeks of treatment compared with sham acupuncture and with RM alone, but the improvements may not be clinically significant.”

The popular media were full of claims that this study proves the efficacy of acupuncture. However, I am not at all convinced; in my view, this conclusion is hopelessly over-optimistic.

It might not have been the acupuncture itself that led to the observed improvements; they could well have been caused by several factors unrelated to the treatment itself. To understand my concern, we need to look closer at the actual interventions employed by the investigators.

The real acupuncture was done on acupuncture points thought to be indicated for hay fever. The needling was performed as one would normally do it, and the acupuncturists were asked to treat the patients in group 1 in such a way that they were likely to experience the famous ‘de-qi’ feeling.

The sham acupuncture, by contrast, was performed on non-acupuncture points; acupuncturists were asked to use shallow needling only and they were instructed to try not to produce ‘de-qi’.

This means that the following factors in combination or alone could have caused [and in my view probably did cause] the observed differences in outcomes between the acupuncture and the sham group:

1) verbal or non-verbal communication between the acupuncturists and the patient [previous trials have shown this factor to be of crucial importance]

2) the visibly less deep needling in the sham-group

3) the lack of ‘de-qi’ experience in the sham-group.

Sham-treatments in clinical trials serve the purpose of a placebo. They are thus meant to be indistinguishable from the verum. If that is not the case [as in the present study], the trial cannot be accepted as being patient-blind. If a trial is not patient-blind, the expectations of patients will most certainly influence the results.

Therefore I believe that the marginal differences noted in this study were not due to the effects of acupuncture per se, but were an artifact caused by the de-blinding of the patients. De facto, neither the patients nor the acupuncturists were blinded in this study.

If that is true, the effects were not just not clinically relevant, as noted by the authors, they also had nothing to do with acupuncture. In other words, acupuncture is not of proven efficacy for this condition – a verdict which is also supported by our systematic review of the subject which concluded that “the evidence for the effectiveness of acupuncture for the symptomatic treatment or prevention of allergic rhinitis is mixed. The results for seasonal allergic rhinitis failed to show specific effects of acupuncture…”

Once again, we have before us a study which looks impressive at first glance. At closer scrutiny, we find, however, that it had important design flaws which led to false positive results and conclusions. In my view, it would have been the responsibility of the authors to discuss these limitations in full detail and to draw conclusions that take them into account. Moreover, it would have been the duty of the peer-reviewers and journal editors to pick up on these points. Instead the editors even commissioned an accompanying editorial which displays an exemplary lack of critical thinking.

Having failed to do any of this, they are in my opinion all guilty of misleading the world media, who reported extensively and often uncritically on this new study, and thus of misleading us all. Sadly, the losers in this bonanza of incompetence are the many hay fever sufferers who will now be trying (and paying for) useless treatments.

Still in the spirit of ACUPUNCTURE AWARENESS WEEK, I have another critical look at a recent paper. If you trust some of the conclusions of this new article, you might think that acupuncture is an evidence-based treatment for coronary heart disease. I think this would be a recipe for disaster.

This condition affects millions and eventually kills a frighteningly large percentage of the population. Essentially, it is caused by the fact that, as we get older, the blood vessels supplying the heart also change, become narrower and get partially or even totally blocked. This causes lack of oxygen in the heart, which causes pain known as angina pectoris. Angina is a most important warning sign indicating that a full-blown heart attack might not be far off.

The treatment of coronary heart disease consists of trying to let more blood flow through the narrowed coronaries, either by drugs or by surgery. At the same time, one attempts to reduce the oxygen demand of the heart, if possible. Normalisation of risk factors like hypertension and hypercholesterolaemia is a key preventative strategy. It is not immediately clear to me how acupuncture might help in all this – but I have been wrong before!

The new meta-analysis included 16 individual randomised clinical trials. All had a high or moderate risk of bias. Acupuncture combined with conventional drugs (AC+CD) turned out to be superior to conventional drugs alone in reducing the incidence of acute myocardial infarction (AMI). AC+CD was superior to conventional drugs in reducing angina symptoms as well as in improving electrocardiography (ECG). Acupuncture by itself was also superior to conventional drugs for angina symptoms and ECG improvement. AC+CD was superior to conventional drugs in shortening the time to onset of angina relief. However, the time to onset was significantly longer for acupuncture treatment than for conventional treatment alone.

From these results, the authors [who are from the Chengdu University of Traditional Chinese Medicine in Sichuan, China] conclude that “AC+CD reduced the occurrence of AMI, and both acupuncture and AC+CD relieved angina symptoms and improved ECG. However, compared with conventional treatment, acupuncture showed a longer delay before its onset of action. This indicates that acupuncture is not suitable for emergency treatment of heart attack. Owing to the poor quality of the current evidence, the findings of this systematic review need to be verified by more RCTs to enhance statistical power.”

As in the meta-analysis discussed in my previous post, the studies are mostly Chinese, flawed, and not obtainable for an independent assessment. As in the previous article, I fail to see a plausible mechanism by which acupuncture might bring about the effects. This is not just a trivial or coincidental observation – I could cite dozens of systematic reviews for which the same criticism applies.

What is different, however, from the last post on gout is simple and important: if you treat gout with a therapy that is ineffective, you have more pain and eventually might opt for an effective one. If you treat coronary heart disease with a therapy that does not work, you might not have time to change, you might be dead.

Therefore I strongly disagree with the authors of this meta-analysis; the findings of this systematic review do NOT “need to be verified by more RCTs to enhance statistical power” — foremost, I think, the findings need to be interpreted with much more caution and re-written. In fact, the findings show quite clearly that there is no good evidence to use acupuncture for coronary heart disease. To pretend otherwise is, in my view, not responsible.

There might be an important lesson here: A SEEMINGLY SLIGHT CORRECTION OF CONCLUSIONS OF SUCH SYSTEMATIC REVIEWS MIGHT SAVE LIVES.

This week is acupuncture awareness week, and I will use this occasion to continue focusing on this therapy. This first-ever event is supported by the British Acupuncture Council, who state that it aims to “help better inform people about the ancient practice of traditional acupuncture. With 2.3 million acupuncture treatments carried out each year, acupuncture is one of the most popular complementary therapies practised in the UK today.”

Right, let’s inform people about acupuncture then! Let’s show them that there is often more to acupuncture research than meets the eye.

My team and I have done lots of research into acupuncture and have probably published more papers on this subject than on any other. We had prominent acupuncturists on board from the UK, Korea, China and Japan; we ran conferences, published books and are proud to have been innovative and productive in our multidisciplinary research. But here I do not intend to dwell on our own achievements; rather, I will highlight several important new papers in this area.

Korean authors just published a meta-analysis to assess the effectiveness of acupuncture as a therapy for gouty arthritis. Ten RCTs involving 852 gouty arthritis patients were included. Six studies of 512 patients reported a significant decrease in uric acid in the treatment group compared with a control group, while two studies of 120 patients reported no such effect. The remaining four studies of 380 patients reported a significant decrease in pain in the treatment group.

The authors conclude “that acupuncture is efficacious as complementary therapy for gouty arthritis patients”.

We should be delighted with such a positive and neat result! Why then do I hesitate and have doubts?

I believe that this paper reveals several important issues in relation to systematic reviews of Chinese acupuncture trials and studies of other TCM interventions. In fact, this is my main reason for discussing the new meta-analysis here. The following three points are crucial, in my view:

1) All the primary studies were from China, and 8 of the 10 were only available in Chinese.

2) All of them had major methodological flaws.

3) It has been shown repeatedly that all acupuncture-trials from China are positive.

Given this situation, the conclusions of any review for which there are only Chinese acupuncture studies might as well be written before the actual research has started. If the authors are pro-acupuncture, as those of the present article clearly are, they will conclude that “acupuncture is efficacious”. If the research team has some critical thinkers on board, the same evidence will lead to an entirely different conclusion, such as “due to the lack of rigorous trials, the evidence is less than compelling.”

Systematic reviews are supposed to be the best type of evidence we currently have; they are supposed to guide therapeutic decisions. I find it unacceptable that one and the same set of data could be systematically analysed to generate such dramatically different outcomes. This is confusing and counter-productive!

So what is there to do? How can we prevent being misled by such articles? I think that medical journals should refuse to publish systematic reviews which so clearly lack sufficient critical input. I also believe that authors of reviews of predominantly Chinese studies should provide English translations of these texts so that they can be independently assessed by those who are not able to read Chinese – and, for the sake of transparency, journal editors should insist on this point.

And what about the value of acupuncture for gouty arthritis? I think I will let the readers draw their own conclusions.

The UK General Chiropractic Council has commissioned a survey of chiropractic patients’ views of chiropractic. Initially, 600 chiropractors were approached to recruit patients, but only 47 volunteered to participate. Eventually, 70 chiropractors consented and recruited a total of 544 patients who completed the questionnaire in 2012. The final report of this exercise has just become available.

I have to admit, I found it intensely boring. This is mainly because the questions asked avoided contentious issues. One has to dig deep to find nuggets of interest. Here are some of the findings that I thought were perhaps mildly intriguing:

15% of all patients did not receive information about possible adverse effects (AEs) of their treatment.

20% received no explanations why investigations such as X-rays were necessary and what risks they carried.

17% were not told how much their treatment would cost during the initial consultation.

38% were not informed about complaint procedures.

9% were not told about further treatment options for their condition.

18% said they were not referred to another health care professional when the condition failed to improve.

20% noted that the chiropractor did not liaise with the patient’s GP.

I think, one has to take such surveys with more than just a pinch of salt. At best, they give a vague impression of what patients believe. At worst, they are not worth the paper they are printed on.

Perhaps the most remarkable finding from the report is the unwillingness of chiropractors to co-operate with the GCC which, after all, is their regulating body. To recruit only ~10% of the chiropractors approached is more than disappointing. This low response rate will inevitably impact on the validity of the results and the conclusions.

It can be assumed that those practitioners who did volunteer are a self-selected sample and thus not representative of the UK chiropractic profession; they might be especially good, correct or obedient. This, in turn, also applies to the sample of patients recruited for this research. If that is so, the picture that emerged from the survey is likely to be far too positive.

In any case, with a response rate of only ~10%, any survey is next to useless. I would therefore put it in the category of ‘not worth the paper it is printed on’.

 

The question whether spinal manipulation is an effective treatment for infant colic has attracted much attention in recent years. The main reason for this is, of course, that a few years ago Simon Singh had disclosed in a comment that the British Chiropractic Association (BCA) was promoting chiropractic treatment for this and several other childhood conditions on their website. Simon famously wrote “they (the BCA) happily promote bogus treatments” and was subsequently sued for libel by the BCA. Eventually, the BCA lost the libel action as well as lots of money, and the entire chiropractic profession ended up with enough egg on their faces to cook omelettes for all their patients.

At the time, the BCA had taken advice from several medical and legal experts; one of their medical advisers, I was told, was Prof George Lewith. Intriguingly, he and several others have just published a Cochrane review of manipulative therapies for infant colic. Here are the unabbreviated conclusions from their article:

“The studies included in this meta-analysis were generally small and methodologically prone to bias, which makes it impossible to arrive at a definitive conclusion about the effectiveness of manipulative therapies for infantile colic. The majority of the included trials appeared to indicate that the parents of infants receiving manipulative therapies reported fewer hours crying per day than parents whose infants did not, based on contemporaneous crying diaries, and this difference was statistically significant. The trials also indicate that a greater proportion of those parents reported improvements that were clinically significant. However, most studies had a high risk of performance bias due to the fact that the assessors (parents) were not blind to who had received the intervention. When combining only those trials with a low risk of such performance bias, the results did not reach statistical significance. Further research is required where those assessing the treatment outcomes do not know whether or not the infant has received a manipulative therapy. There are inadequate data to reach any definitive conclusions about the safety of these interventions.”

Cochrane reviews also carry a “plain language” summary which might be easier to understand for lay people. And here are the conclusions from this section of the review:

“The studies involved too few participants and were of insufficient quality to draw confident conclusions about the usefulness and safety of manipulative therapies. Although five of the six trials suggested crying is reduced by treatment with manipulative therapies, there was no evidence of manipulative therapies improving infant colic when we only included studies where the parents did not know if their child had received the treatment or not. No adverse effects were found, but they were only evaluated in one of the six studies.”

If we read it carefully, this article seems to confirm that there is no reliable evidence to suggest that manipulative therapies are effective for infant colic. In the analyses, the positive effect disappears if the parents are properly blinded; thus it is due to expectation or placebo. The studies that seem to show a positive effect are false positives, and spinal manipulation is, in fact, not effective.

The analyses disclose another intriguing aspect: most trials failed to mention adverse effects. This confirms the findings of our own investigation and amounts to a remarkable breach of publication ethics (nobody seems to be astonished by this fact; is it normal that chiropractic researchers ignore generally accepted rules of ethics?). It also reflects badly on the ability of the investigators of the primary studies to be objective. They seem to aim at demonstrating only the positive effects of their intervention; science is, however, not about confirming the researchers’ prejudices, it is about testing hypotheses.

The most remarkable thing about the new Cochrane review is, I think, the incongruence between the actual results and the authors’ conclusion. To a critical observer, the former are clearly negative, but the latter sound almost positive. This raises the question of reviewer bias.

We have recently discussed on this blog whether reviews by one single author are necessarily biased. The new Cochrane review has 6 authors, and it seems to me that its conclusions are considerably more biased than my single-author review of chiropractic spinal manipulation for infant colic; in 2009, I concluded simply that “the claim [of effectiveness] is not based on convincing data from rigorous clinical trials”.

Which of the two conclusions describes the facts more helpfully and more accurately?

I think I rest my case.

Would it not be nice to have a world where everything is positive? No negative findings ever! A dream! No, it’s not a dream; it is reality, albeit a reality that exists mostly in the narrow realm of alternative medicine research. Quite a while ago, we demonstrated that journals of alternative medicine never publish negative results. Meanwhile, my colleagues investigating acupuncture, homeopathy, chiropractic etc. seem to have perfected their strategy of avoiding the embarrassment of a negative finding.

For several years, researchers in this field have adopted a study-design which is virtually sure to generate nothing but positive results. It is being employed widely by enthusiasts of placebo-therapies, and it is easy to understand why: it allows them to conduct seemingly rigorous trials which can impress decision-makers and which invariably suggest that even the most useless treatment works wonders.

One of the latest examples of this type of approach is a trial where acupuncture was tested as a treatment of cancer-related fatigue. Most cancer patients suffer from this symptom which can seriously reduce their quality of life. Unfortunately there is little conventional oncologists can do about it, and therefore alternative practitioners have a field-day claiming that their interventions are effective. It goes without saying that desperate cancer victims fall for this.

In this new study, cancer patients who were suffering from fatigue were randomised to receive usual care or usual care plus regular acupuncture. The researchers then monitored the patients’ experience of fatigue and found that the acupuncture group did better than the control group. The effect was statistically significant, and an editorial in the journal where it was published called this evidence “compelling”.

Due to a cleverly over-stated press-release, news spread fast, and the study was celebrated worldwide as a major breakthrough in cancer-care. Finally, most commentators felt, research has identified an effective therapy for this debilitating symptom which affects so many of the most desperate patients. Few people seemed to realise that this trial tells us next to nothing about what effects acupuncture really has on cancer-related fatigue.

In order to understand my concern, we need to look at the trial-design a little closer. Imagine you have an amount of money A and your friend owns the same sum plus another amount B. Who has more money? Simple, it is, of course, your friend: A+B will always be more than A [unless B is a negative amount]. For the same reason, such “pragmatic” trials will always generate positive results [unless the treatment in question does actual harm]. Treatment as usual plus acupuncture is more than treatment as usual, and the former is therefore more than likely to produce a better result. This will be true even if acupuncture is no more than a placebo – after all, a placebo is more than nothing, and the placebo effect will impact on the outcome, particularly if we are dealing with a highly subjective symptom such as fatigue.
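For those who prefer numbers to analogies, the inevitability of this result is easy to demonstrate with a small simulation. The sketch below (Python; all figures are invented and have nothing to do with the actual trial) models an add-on that is a pure placebo, contributing only a modest non-specific benefit, and the ‘A+B’ group still comes out ahead:

```python
import random
random.seed(1)

# Hypothetical fatigue scores (0-10, lower is better) after treatment.
# Both groups receive usual care; the 'add-on' is a pure placebo that
# contributes only a small non-specific (expectation/attention) benefit.
def usual_care():
    return random.gauss(6.0, 1.5)        # usual care alone (group B)

def usual_care_plus_addon():
    return random.gauss(6.0 - 0.8, 1.5)  # same care + placebo response (group A+B)

n = 100  # patients per group
group_b = [usual_care() for _ in range(n)]
group_ab = [usual_care_plus_addon() for _ in range(n)]

mean_b = sum(group_b) / n
mean_ab = sum(group_ab) / n

print(f"usual care alone:     mean fatigue {mean_b:.2f}")
print(f"usual care + add-on:  mean fatigue {mean_ab:.2f}")
# With any realistic sample size the A+B group comes out ahead, although
# the add-on has no specific effect at all -- the 'A+B versus B' problem.
```

However often one re-runs such a simulation, the add-on group wins on average, which is precisely why this design cannot produce a negative result for any treatment that does no outright harm.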

I can be fairly confident that this is more than a theory because, some time ago, we analysed all acupuncture studies with such an “A+B versus B” design. Our hypothesis was that none of these trials would generate a negative result. I probably do not need to tell you that our hypothesis was confirmed by the findings of our analysis. Theory and fact are in perfect harmony.

You might say that the above-mentioned acupuncture trial does still provide important information. Its authors certainly think so and firmly conclude that “acupuncture is an effective intervention for managing the symptom of cancer-related fatigue and improving patients’ quality of life”. Authors of similarly designed trials will most likely arrive at similar conclusions. But, if they are true, they must be important!

Are they true? Such studies appear to be rigorous – e.g. they are randomised – and thus can fool a lot of people, but they do not allow conclusions about cause and effect; in other words, they fail to show that the therapy in question has led to the observed result.

Acupuncture might be utterly ineffective as a treatment of cancer-related fatigue, and the observed outcome might be due to the extra care, to a placebo-response or to other non-specific effects. And this is much more than a theoretical concern: rolling out acupuncture across all oncology centres at high cost to us all might be entirely the wrong solution. Providing good care and warm sympathy could be much more effective as well as less expensive. Adopting acupuncture on a grand scale would also stop us looking for a treatment that is truly effective beyond a placebo – and that surely would not be in the best interest of the patient.

I have seen far too many of those bogus studies to have much patience left. They do not represent an honest test of anything, simply because we know their result even before the trial has started. They are not science but thinly disguised promotion. They are not just a waste of money, they are dangerous – because they produce misleading results – and they are thus also unethical.
