Some experts concede that chiropractic spinal manipulation is effective for chronic low back pain (cLBP). But what is the right dose? There have been no full-scale trials of the optimal number of treatments with spinal manipulation. This study was aimed at filling this gap by trying to identify a dose-response relationship between the number of visits to a chiropractor for spinal manipulation and cLBP outcomes. A further aim was to determine the efficacy of manipulation by comparison with a light massage control.

The primary cLBP outcomes were the 100-point pain intensity scale and functional disability scales evaluated at the 12- and 24-week primary end points. Secondary outcomes included days with pain and functional disability, pain unpleasantness, global perceived improvement, medication use, and general health status.

One hundred patients with cLBP were randomized to each of 4 dose levels of care: 0, 6, 12, or 18 sessions of spinal manipulation from a chiropractor. Participants were treated three times per week for 6 weeks. At sessions when manipulation was not assigned, the patients received a focused light massage control. Covariate-adjusted linear dose effects and comparisons with the no-manipulation control group were evaluated at 6, 12, 18, 24, 39, and 52 weeks.

For the primary outcomes, mean pain and disability improvements in the manipulation groups were 20 points by 12 weeks, an effect that was sustained through to 52 weeks. Linear dose-response effects were small, reaching about two points per 6 manipulation sessions at 12 and 52 weeks for both variables. At 12 weeks, the greatest differences compared to the no-manipulation controls were found for 12 sessions (8.6 pain and 7.6 disability points); at 24 weeks, differences were negligible; and at 52 weeks, the greatest group differences were seen for 18 visits (5.9 pain and 8.8 disability points).

The authors concluded that the number of spinal manipulation visits had modest effects on cLBP outcomes above those of 18 hands-on visits to a chiropractor. Overall, 12 visits yielded the most favorable results but was not well distinguished from other dose levels.

This study is interesting because it confirms that the effects of chiropractic spinal manipulation as a treatment for cLBP are tiny and probably not clinically relevant. And even these tiny effects might not be due to the treatment per se but could be caused by residual confounding and bias.

As for the optimal dose, the authors suggest that, on average, 18 sessions might be the best. But again, we have to be clear that the dose-response effects were small and of doubtful clinical relevance. Since the therapeutic effects are tiny, it is obviously difficult to establish a dose-response relationship.

In view of the cost of chiropractic spinal manipulation and the uncertainty about its safety, I would probably not rate this approach as the treatment of choice but would consider the current Cochrane review which concludes that “high quality evidence suggests that there is no clinically relevant difference between spinal manipulation and other interventions for reducing pain and improving function in patients with chronic low-back pain”. Personally, I think it is more prudent to recommend exercise, back school, massage or perhaps even yoga to cLBP-sufferers.

Some sceptics are convinced that, in alternative medicine, there is no evidence. This assumption is wrong, I am afraid, and statements of this nature can actually play into the hands of apologists of bogus treatments: they can then easily demonstrate the sceptics to be mistaken or “biased”, as they would probably say. The truth is that there is plenty of evidence – and lots of it is positive, at least at first glance.

Alternative medicine researchers have been very industrious during the last two decades to build up a sizable body of ‘evidence’. Consequently, one often finds data even for the most bizarre and implausible treatments. Take, for instance, the claim that homeopathy is an effective treatment for cancer. Those who promote this assumption have no difficulties in locating some weird in-vitro study that seems to support their opinion. When sceptics subsequently counter that in-vitro experiments tell us nothing about the clinical situation, apologists quickly unearth what they consider to be sound clinical evidence.

An example is this prospective observational 2011 study of cancer patients from two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n = 259), and one cohort with conventionally treated cancer patients (CG; n = 380). Its main outcome measures were the change in quality of life after 3 months and after one year, as well as impairment by fatigue, anxiety or depression. The results of this study show significant improvements in most of these endpoints, and the authors concluded that we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment.

Another, in some ways even better example is this 2005 observational study of 6544 consecutive patients from the Bristol Homeopathic Hospital. Every patient attending the hospital outpatient unit for a follow-up appointment was included, commencing with their first follow-up attendance. Of these patients 70.7% (n = 4627) reported positive health changes, with 50.7% (n = 3318) recording their improvement as better or much better. The authors concluded that homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases.

The principle that is being followed here is simple:

  • believers in a bogus therapy conduct a clinical trial which is designed to generate an apparently positive finding;
  • the fact that the study cannot tell us anything about cause and effect is cleverly hidden or belittled;
  • they publish their findings in one of the many journals that specialise in this sort of nonsense;
  • they make sure that advocates across the world learn about their results;
  • the community of apologists of this treatment picks up the information without the slightest critical analysis;
  • the researchers conduct more and more of such pseudo-research;
  • nobody attempts to do some real science: the believers do not truly want to falsify their hypotheses, and the real scientists find it unreasonable to conduct research on utterly implausible interventions;
  • thus the body of false or misleading ‘evidence’ grows and grows;
  • proponents start publishing systematic reviews and meta-analyses of their studies which are devoid of critical input;
  • too few critics point out that these reviews are fatally flawed – ‘rubbish in, rubbish out’!
  • eventually politicians, journalists, health care professionals and other people who did not necessarily start out as believers in the bogus therapy are convinced that the body of evidence is impressive and justifies implementation;
  • important health care decisions are thus based on data which are false and misleading.

So, what can be done to prevent such pseudo-evidence from being mistaken for solid proof and eventually misleading many into believing that bogus treatments are based on reasonably sound data? I think the following measures would be helpful:

  • authors should abstain from publishing over-enthusiastic conclusions which can all too easily be misinterpreted (given that the authors are believers in the therapy, this is not a realistic option);
  • editors might consider rejecting studies which contribute next to nothing to our current knowledge (given that these studies are usually published in journals that are in the business of promoting alternative medicine at any cost, this option is also not realistic);
  • if researchers report highly preliminary findings, there should be an obligation to do further studies in order to confirm or refute the initial results (not realistic either, I am afraid);
  • in case this does not happen, editors should consider retracting the paper reporting unconfirmed preliminary findings (utterly unrealistic).

What then can REALISTICALLY be done? I wish I knew the answer! All I can think of is that sceptics should educate the rest of the population to think and analyse such ‘evidence’ critically…but how realistic is that?

According to its authors, this RCT was aimed at investigating the 1) specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression. In particular the second research question is intriguing, I think – so let’s have a closer look at this trial.

The study was designed as a randomized, partially double-blind, placebo-controlled, four-armed, 2×2 factorial trial with a 6-week study duration. A total of 44 patients were randomized (2∶1∶2∶1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment, and was thus underpowered for the pre-planned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance. The mean difference for the Hamilton-D after 6 weeks was 2.0 (95%CI -1.2;5.2) for Q-potencies vs. placebo, and -3.1 (-5.9;-0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant results between homeopathic Q-potencies versus placebo and homeopathic versus conventional case taking were observed. The frequency of adverse events was comparable for all groups.

The conclusions were remarkable: although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting.

Alright, the authors encountered problems in recruiting enough patients and they therefore decided to stop the trial early. This sort of thing happens. Most researchers would then not publish any data at all. This team, however, did publish a report, and the decision to do so might be perfectly fine: other investigators might learn from the problems which led to early termination of the study.

But why do they conclude that the results were INCONCLUSIVE? I think the results were not inconclusive but non-existent; there were no results to report other than those related to the recruitment problems. And even if one insists on presenting outcome data as an exploratory analysis, one cannot honestly say they were INCONCLUSIVE; all one might state in this case is that the results failed to show an effect of the remedy or the consultation. This is far less favourable for homeopathy than stating the results were INCONCLUSIVE.

And why on earth do the authors conclude “we cannot recommend undertaking a further trial addressing this question in a similar setting”? This does not make the slightest sense to me. If the trialists encountered recruitment problems, others might find ways of overcoming them. The research question asking whether the effects of an extensive homeopathic case taking differ from those of a shorter conventional one seems important. If answered accurately, it could disentangle much of the confusion that surrounds clinical trials of homeopathy.

I have repeatedly commented on the odd conclusions drawn by proponents of alternative medicine on the basis of data that did not quite fulfil their expectations, and I often ask myself at what point this ‘prettification’ of the results via false positive conclusions crosses the line to scientific misconduct. My theory is that these conclusions appear odd to those capable of critical analysis because the authors bend over backwards in order to conclude more positively than the data would seem to permit. If we see it this way, such conclusions might even prove useful as a fairly sensitive ‘bullshit-detector’.

We have probably all fallen into the trap of thinking that something which has stood the ‘test of time’, i.e. something that has been used for centuries with apparent success, must be ok. In alternative medicine, this belief is extremely widespread, and one could argue that the entire sector is built on it. Influential proponents of ‘traditional’ medicine like Prince Charles do their best to strengthen this assumption. Sadly, however, it is easily exposed as a classic fallacy: things that have stood the ‘test of time’ might work, of course, but the ‘test of time’ is never a proof of anything.

A recent study brought this message home loud and clear. This trial tested the efficacy of Rhodiola crenulata (R. crenulata), a traditional remedy which has been used widely in the Himalayan areas and in Tibet to prevent acute mountain sickness. As no scientific studies of this traditional treatment existed, the researchers conducted a double-blind, placebo-controlled crossover RCT to test its efficacy in acute mountain sickness prevention.

Healthy adult volunteers were randomized to two treatment sequences, receiving either 800 mg R. crenulata extract or placebo daily for 7 days before ascent and two days during mountaineering. After a three-month wash-out period, they were crossed over to the alternate treatment. On each occasion, the participants ascended rapidly from 250 m to 3421 m. The primary outcome measure was the incidence of acute mountain sickness with headache and at least one of the symptoms of nausea or vomiting, fatigue, dizziness, or difficulty sleeping.

One hundred and two participants completed the trial. No significant differences in the incidence of acute mountain sickness were found between R. crenulata extract and placebo groups. If anything, the incidence of severe acute mountain sickness with Rhodiola extract was slightly higher compared to the one with placebo: 35.3% vs. 29.4%.

R. crenulata extract was not effective in reducing the incidence or severity of acute mountain sickness as compared to placebo.

Similar examples could be found by the dozen. They demonstrate very clearly that the notion of the ‘test of time’ is erroneous: a treatment which has a long history of usage is not necessarily effective (or safe) – not only that, it might even be dangerous. The true value of a therapy cannot be judged by experience; to be sure, we need rigorous clinical trials. Acute mountain sickness is a potentially life-threatening condition for which there are reasonably effective treatments. If people relied on the ‘ancient wisdom’ instead of using a therapy that actually works, they might pay for their error with their lives. The sooner alternative medicine proponents realise that, the better.

Acupressure is a treatment-variation of acupuncture; instead of sticking needles into the skin, pressure is applied over ‘acupuncture points’ which is supposed to provide a stimulus similar to needling. Therefore the effects of both treatments should theoretically be similar.

Acupressure could have several advantages over acupuncture:

  • it can be used for self-treatment
  • it is suitable for people with needle-phobia
  • it is painless
  • it is not invasive
  • it carries fewer risks
  • it could be cheaper

But is acupressure really effective? What do the trial data tell us? Our own systematic review concluded that the effectiveness of acupressure is currently not well documented for any condition. But now there is a new study which might change this negative verdict.

The primary objective of this 3-armed RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care alone in the management of chemotherapy-induced nausea. 500 patients from outpatient chemotherapy clinics in three regions in the UK involving 14 different cancer units/centres were randomised to the wristband arm, the sham wristband arm and the standard care only arm. Participants were chemotherapy-naive cancer patients receiving chemotherapy of low, moderate and high emetogenic risk. The experimental group were given acupressure wristbands pressing the P6 point (anterior surface of the forearm). The Rhodes Index for Nausea/Vomiting, the Multinational Association of Supportive Care in Cancer (MASCC) Antiemesis Tool and the Functional Assessment of Cancer Therapy General (FACT-G) served as outcome measures. At baseline, participants completed measures of anxiety/depression, nausea/vomiting expectation and expectations from using the wristbands.

Data were available for 361 participants for the primary outcome. The primary outcome analysis (nausea in cycle 1) revealed no statistically significant differences between the three arms. The median nausea experience in patients using wristbands (both real and sham ones) was somewhat lower than that in the anti-emetics only group (median nausea experience scores for the four cycles: standard care arm 1.43, 1.71, 1.14, 1.14; sham acupressure arm 0.57, 0.71, 0.71, 0.43; acupressure arm 1.00, 0.93, 0.43, 0). Women responded more favourably to the use of sham acupressure wristbands than men (odds ratio 0.35 for men and 2.02 for women in the sham acupressure group; 1.27 for men and 1.17 for women in the acupressure group). No significant differences were detected in relation to vomiting outcomes, anxiety and quality of life. Some transient adverse effects were reported, including tightness in the area of the wristbands, feeling uncomfortable when wearing them and minor swelling in the wristband area (n = 6). There were no statistically significant differences in the costs associated with the use of the real acupressure bands.

Twenty-six subjects took part in qualitative interviews. Participants perceived the wristbands (both real and sham) as effective and helpful in managing their nausea during chemotherapy.

The authors concluded that there were no statistically significant differences between the three arms in terms of nausea, vomiting and quality of life, although apparent resource use was less in both the real acupressure arm and the sham acupressure arm compared with standard care only; therefore, no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting. However, the study provided encouraging evidence in relation to an improved nausea experience and some indications of possible cost savings to warrant further consideration of acupressure both in practice and in further clinical trials.

I could argue about several of the methodological details of this study. But I resist the temptation in order to focus on just one single point which I find important and which has implications beyond the realm of acupressure.

Why on earth do the authors conclude that no clear conclusions can be drawn about the use of acupressure wristbands in the management of chemotherapy-related nausea and vomiting? The stated aim of this RCT was to assess the effectiveness and cost-effectiveness of self-acupressure using wristbands compared with sham acupressure wristbands and standard care. The results failed to show significant differences of the primary outcome measures, consequently the conclusion cannot be “unclear”, it has to be that ACUPRESSURE WRIST BANDS ARE NOT MORE EFFECTIVE THAN SHAM ACUPRESSURE WRIST BANDS AS AN ADJUNCT TO ANTI-EMETIC DRUG TREATMENT (or something to that extent).

As long as RCTs of alternative therapies are run by evangelical believers in the respective therapy, we are bound to regularly encounter this lamentable phenomenon of white-washing negative findings with an inadequate conclusion. In my view, this is not research or science, it is pseudo-research or pseudo-science. And it is much more than a nuisance or a trivial matter; it is a waste of research funds and of patients’ good will, and it has reached a point where people will lose trust in alternative medicine research. Someone should really do a systematic study to identify those research teams that regularly commit such scientific misconduct and ensure that they are cut off from public funding and support.

This post will probably work best, if you have read the previous one describing how the parallel universe of acupuncture research insists on going in circles in order to avoid admitting that their treatment might not be as effective as they pretend. The way they achieve this is fairly simple: they conduct trials that are designed in such a way that they cannot possibly produce a negative result.

A brand-new investigation which was recently vociferously touted via press releases etc. as a major advance in proving the effectiveness of acupuncture is an excellent case in point. According to its authors, the aim of this study was to evaluate acupuncture versus usual care and counselling versus usual care for patients who continue to experience depression in primary care. This sounds alright, but wait!

755 patients with depression were randomised to one of three arms: 1) acupuncture, 2) counselling, and 3) usual care alone. The primary outcome was the difference in mean Patient Health Questionnaire (PHQ-9) scores at 3 months with secondary analyses over 12 months follow-up. Analysis was by intention-to-treat. PHQ-9 data were available for 614 patients at 3 months and 572 patients at 12 months. Patients attended a mean of 10 sessions for acupuncture and 9 sessions for counselling. Compared to usual care, there was a statistically significant reduction in mean PHQ-9 depression scores at 3 and 12 months for both acupuncture and counselling.

From this, the authors conclude that both interventions were associated with significantly reduced depression at 3 months when compared to usual care alone.

Acupuncture for depression? Really? Our own systematic review with co-authors who are the most ardent apologists of acupuncture I have come across showed that the evidence is inconsistent on whether manual acupuncture is superior to sham… Therefore, I thought it might be a good idea to have a closer look at this new study.

One needs to search this article very closely indeed to find out that the authors did not actually evaluate acupuncture versus usual care and counselling versus usual care at all, and that comparisons were not made between acupuncture, counselling, and usual care (hints like the use of the word “alone” are all we get to guess that the authors’ text is outrageously misleading). Not even the methods section informs us what really happened in this trial. You find this hard to believe? Here is the unabbreviated part of the article that describes the interventions applied:

Patients allocated to the acupuncture and counselling groups were offered up to 12 sessions usually on a weekly basis. Participating acupuncturists were registered with the British Acupuncture Council with at least 3 years post-qualification experience. An acupuncture treatment protocol was developed and subsequently refined in consultation with participating acupuncturists. It allowed for customised treatments within a standardised theory-driven framework. Counselling was provided by members of the British Association for Counselling and Psychotherapy who were accredited or were eligible for accreditation having completed 400 supervised hours post-qualification. A manualised protocol, using a humanistic approach, was based on competences independently developed for Skills for Health. Practitioners recorded in logbooks the number and length of sessions, treatment provided, and adverse events. Further details of the two interventions are presented in Tables S2 and S3. Usual care, both NHS and private, was available according to need and monitored for all patients in all three groups for the purposes of comparison.

It is only in the results tables that we can determine what treatments were actually given; and these were:

1) Acupuncture PLUS usual care (i.e. medication)

2) Counselling PLUS usual care

3) Usual care

It’s almost a ‘no-brainer’ that, if you compare A+B to B (or in this three-armed study A+B vs C+B vs B), you find that the former is more effective than the latter – unless A is a negative, of course. As acupuncture has significant placebo-effects, it can never be a negative, and thus this trial is an entirely foregone conclusion. As, in alternative medicine, one seems to need experimental proof even for ‘no-brainers’, we demonstrated some time ago that this common sense theory is correct by conducting a systematic review of all acupuncture trials with such a design. We concluded that the ‘A + B versus B’ design is prone to false positive results… What makes this whole thing even worse is the fact that I once presented our review in a lecture where the lead author of the new trial was in the audience; so there can be no excuse of not being aware of the ‘no-brainer’.
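To make the ‘no-brainer’ explicit, here is a minimal, purely illustrative simulation of my own (all numbers invented, not taken from the trial or our review): even if the add-on therapy A has a specific effect of exactly zero, its nonspecific effects alone guarantee that the A + B arm looks better than B alone.

```python
import random

random.seed(0)

def simulate_a_plus_b_trial(n_per_arm=100, usual_care_gain=10.0,
                            nonspecific_gain=5.0, specific_gain=0.0, sd=8.0):
    """Simulate mean symptom improvement in an 'A + B vs B' trial.

    B (usual care) alone improves scores by `usual_care_gain`; adding A
    contributes nonspecific effects (attention, expectation) plus its
    specific effect, which is set to zero here, i.e. a therapy that does
    not work beyond placebo. All figures are hypothetical.
    """
    b_arm = [random.gauss(usual_care_gain, sd) for _ in range(n_per_arm)]
    ab_arm = [random.gauss(usual_care_gain + nonspecific_gain + specific_gain, sd)
              for _ in range(n_per_arm)]
    return sum(ab_arm) / n_per_arm - sum(b_arm) / n_per_arm

# Even with a specific effect of exactly zero, 'A + B' beats 'B' on average:
print(f"mean advantage of A + B over B: {simulate_a_plus_b_trial():.1f} points")
```

With these invented parameters the A + B arm comes out roughly 5 points ahead, although the therapy itself does nothing specific – which is precisely why such a design cannot produce a negative result.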

Some might argue that this is a pragmatic trial, that it would have been unethical to not give anti-depressants to depressed patients and that therefore it was not possible to design this study differently. However, none of these arguments are convincing, if you analyse them closely (I might leave that to the comment section, if there is interest in such aspects). At the very minimum, the authors should have explained in full detail what interventions were given; and that means disclosing these essentials even in the abstract (and press release) – the part of the publication that is most widely read and quoted.

It is arguably unethical to ask patients’ co-operation, use research funds etc. for a study, the results of which were known even before the first patient had been recruited. And it is surely dishonest to hide the true nature of the design so very sneakily in the final report.

In my view, this trial begs at least 5 questions:

1) How on earth did it pass the peer review process of one of the most highly reputed medical journals?

2) How did the protocol get ethics approval?

3) How did it get funding?

4) Does the scientific community really allow itself to be fooled by such pseudo-research?

5) What do I do to not get depressed by studies of acupuncture for depression?

It was 20 years ago today that I started my job as ‘Professor of Complementary Medicine’ at the University of Exeter and became a full-time researcher of all matters related to alternative medicine. One issue that was discussed endlessly during these early days was the question whether alternative medicine can be investigated scientifically. There were many vociferous proponents of the view that it was too subtle, too individualised, too special for that and that it defied science in principle. Alternative medicine, they claimed, needed an alternative to science to be validated. I spent my time arguing the opposite, of course, and today there finally seems to be a consensus that alternative medicine can and should be submitted to scientific tests much like any other branch of health care.

Looking back at those debates, I think it is rather obvious why apologists of alternative medicine were so vehement about opposing scientific investigations: they suspected, perhaps even knew, that the results of such research would be mostly negative. Once the anti-scientists saw that they were fighting a lost battle, they changed their tune and adopted science – well sort of: they became pseudo-scientists (‘if you cannot beat them, join them’). Their aim was to prevent disaster, namely the documentation of alternative medicine’s uselessness by scientists. Meanwhile many of these ‘anti-scientists turned pseudo-scientists’ have made rather surprising careers out of their cunning role-change; professorships at respectable universities have mushroomed. Yes, pseudo-scientists have splendid prospects these days in the realm of alternative medicine.

The term ‘pseudo-scientist’ as I understand it describes a person who thinks he/she knows the truth about his/her subject well before he/she has done the actual research. A pseudo-scientist is keen to understand the rules of science in order to corrupt science; he/she aims at using the tools of science not to test his/her assumptions and hypotheses, but to prove that his/her preconceived ideas were correct.

So, how does one become a top pseudo-scientist? During the last 20 years, I have observed some of the careers with interest and think I know how it is done. Here are nine lessons which, if followed rigorously, will lead to success (… oh yes, in case I again have someone thick enough to complain about me misleading my readers: THIS POST IS SLIGHTLY TONGUE IN CHEEK).

  1. Throw yourself into qualitative research. For instance, focus groups are a safe bet. This type of pseudo-research is not really difficult to do: you assemble about 5-10 people, let them express their opinions, record them, extract from the diversity of views what you recognise as your own opinion and call it a ‘common theme’, write the whole thing up, and – BINGO! – you have a publication. The beauty of this approach is manifold: 1) you can repeat this exercise ad nauseam until your publication list is of respectable length; there are plenty of alternative medicine journals that will hurry to publish your pseudo-research; 2) you can manipulate your findings at will, for instance, by selecting your sample (if you recruit people outside a health food shop, for instance, and direct your group wisely, you will find everything alternative medicine journals love to print); 3) you will never produce a paper that displeases the likes of Prince Charles (this is more important than you may think: even pseudo-science needs a sponsor [or would that be a pseudo-sponsor?]).
  2. Conduct surveys. These are very popular and highly respected/publishable projects in alternative medicine – and they are almost as quick and easy as focus groups. Do not get deterred by the fact that thousands of very similar investigations are already available. If, for instance, there already is one describing the alternative medicine usage by leg-amputated police-men in North Devon, and you nevertheless feel the urge of going into this area, you can safely follow your instinct: do a survey of leg-amputated police men in North Devon with a medical history of diabetes. There are no limits, and as long as you conclude that your participants used a lot of alternative medicine, were very satisfied with it, did not experience any adverse effects, thought it was value for money, and would recommend it to their neighbour, you have secured another publication in an alternative medicine journal.
  3. If, for some reason, this should not appeal to you, how about taking a sociological, anthropological or psychological approach? How about studying, for example, the differences in worldviews, the different belief systems, the different ways of knowing, the different concepts about illness, the different expectations, the unique spiritual dimensions, the amazing views on holism – all in different cultures, settings or countries? Invariably, you will, of course, conclude that one truth is at least as good as the next. This will make you popular with all the post-modernists who use alternative medicine as a playground for getting a few publications out. This approach will allow you to travel extensively and generally have a good time. Your papers might not win you a Nobel prize, but one cannot have everything.
  4. It could well be that, at one stage, your boss has a serious talk with you demanding that you start doing what (in his narrow mind) constitutes ‘real science’. He might be keen to get some brownie-points at the next RAE and could thus want you to actually test alternative treatments in terms of their safety and efficacy. Do not despair! Even then, there are plenty of possibilities to remain true to your pseudo-scientific principles. By now you are good at running surveys, and you could, for instance, take up your boss’ suggestion of studying the safety of your favourite alternative medicine with a survey of its users. You simply evaluate their experiences and opinions regarding adverse effects. But be careful, you are on somewhat thinner ice here; you don’t want to upset anyone by generating alarming findings. Make sure your sample is small enough for a false negative result, and that all participants are well-pleased with their alternative medicine. This might be merely a question of selecting your patients cleverly. The main thing is that your conclusion is positive. If you want to go the extra pseudo-scientific mile, mention in the discussion of your paper that your participants all felt that conventional drugs were very harmful.
  5. If your boss insists you tackle the daunting issue of therapeutic efficacy, there is no reason to give up pseudo-science either. You can always find patients who happened to have recovered spectacularly well from a life-threatening disease after receiving your favourite form of alternative medicine. Once you have identified such a person, you write up her experience in much detail and call it a ‘case report’. It requires a little skill to brush over the fact that the patient also had lots of conventional treatments, or that her diagnosis was assumed but never properly verified. As a pseudo-scientist, you will have to learn how to discreetly make such irritating details vanish so that, in the final paper, they are no longer recognisable. Once you are familiar with this methodology, you can try to find a couple more such cases and publish them as a ‘best case series’ – I can guarantee that you will be all other pseudo-scientists’ hero!
  6. Your boss might point out, after you have published half a dozen such articles, that single cases are not really very conclusive. The antidote to this argument is simple: you do a large case series along the same lines. Here you can even show off your excellent statistical skills by calculating the statistical significance of the difference between the severity of the condition before the treatment and the one after it. As long as you show marked improvements, ignore all the many other factors involved in the outcome and conclude that these changes are undeniably the result of the treatment, you will be able to publish your paper without problems.
  7. As your boss seems to be obsessed with the RAE and all that, he might one day insist you conduct what he narrow-mindedly calls a ‘proper’ study; in other words, you might be forced to bite the bullet and learn how to plan and run an RCT. As your particular alternative therapy is not really effective, this could lead to serious embarrassment in the form of a negative result, something that must be avoided at all cost. I therefore recommend you join for a few months a research group that has a proven track record in doing RCTs of utterly useless treatments without ever failing to conclude that they are highly effective. There are several of those units both in the UK and elsewhere, and their expertise is remarkable. They will teach you how to incorporate all the right design features into your study without there being the slightest risk of generating a negative result. A particularly popular solution is to conduct what they call a ‘pragmatic’ trial; I suggest you focus on this splendid innovation that never fails to produce cheerfully positive findings.
  8. It is hardly possible that this strategy fails – but once every blue moon, all precautions turn out to be in vain, and even the most cunningly designed study of your bogus therapy might deliver a negative result. This is a challenge to any pseudo-scientist, but you can master it, provided you don’t lose your head. In such a rare case I recommend running as many different statistical tests as you can find; chances are that one of them will nevertheless produce something vaguely positive. If even this method fails (and it hardly ever does), you can always home in on the fact that, in your efficacy study of your bogus treatment, not a single patient died. Who would be able to doubt that this is a positive outcome? Stress it clearly, select it as the main feature of your conclusions, and thus make the more disappointing findings disappear.
  9. Now that you are a fully-fledged pseudo-scientist who has produced one misleading or false positive result after the next, you may want a ‘proper’ confirmatory study of your pet-therapy. For this purpose, run the same RCT over again, and again, and again. Eventually you want a meta-analysis of all RCTs ever published. As you are the only person who ever conducted studies on the bogus treatment in question, this should be quite easy: you pool the data of all your trials and, Bob’s your uncle: a nice little summary of the totality of the data that shows beyond doubt that your therapy works. Now even your narrow-minded boss will be impressed.

These nine lessons can and should be modified to suit your particular situation, of course. Nothing here is written in stone. The one skill any pseudo-scientist must have is flexibility.

Every now and then, some smart arse is bound to attack you and claim that this is not rigorous science, that independent replications are required, that you are biased etc. etc. blah, blah, blah. Do not panic: either you ignore that person completely, or (in case there is a whole gang of nasty sceptics after you) you might just point out that:

  • your work follows a new paradigm; the one of your critics is now obsolete,
  • your detractors fail to understand the complexity of the subject and their comments merely reveal their ridiculous incompetence,
  • your critics are less than impartial, in fact, most are bought by BIG PHARMA,
  • you have a paper ‘in press’ that fully deals with all the criticism and explains how inappropriate it really is.

In closing, allow me a final word about publishing. There are hundreds of alternative medicine journals out there to choose from. They will love your papers because they are uncompromisingly promotional. These journals all have one thing in common: they are run by apologists of alternative medicine who abhor reading anything negative about alternative medicine. Consequently, hardly a critical word about alternative medicine will ever appear in these journals. If you want to make double sure that your paper does not get criticised during the peer-review process (this would require a revision, and you don’t need extra work of that nature), you can suggest a friend for peer-reviewing it. In turn, you can offer to do the same for him/her the next time he/she has an article to submit. This is how pseudo-scientists make sure that the body of pseudo-evidence for their pseudo-treatments is growing at a steady pace.

I have said it so often that I hesitate to state it again: an uncritical researcher is a contradiction in terms. This begs the question as to how critical the researchers of alternative medicine truly are. In my experience, most tend to be uncritical in the extreme. But how would one go about providing evidence for this view? In a previous blog-post, I have suggested a fairly simple method: to calculate an index of negative conclusions drawn in the articles published by a specific researcher. This is what I wrote:

If we calculated the percentage of a researcher’s papers arriving at positive conclusions and divided this by the percentage of his papers drawing negative conclusions, we might have a useful measure. A realistic example might be the case of a clinical researcher who has published a total of 100 original articles. If 50% had positive and 50% negative conclusions about the efficacy of the therapy tested, his trustworthiness index (TI) would be 1.

Depending on what area of clinical medicine this person is working in, 1 might be a figure that is just about acceptable in terms of the trustworthiness of the author. If the TI goes beyond 1, we might get concerned; if it reaches 4 or more, we should get worried.

An example would be a researcher who has published 100 papers of which 80 are positive and 20 arrive at negative conclusions. His TI would consequently amount to 4. Most of us equipped with a healthy scepticism would consider this figure highly suspect.
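For readers who like the arithmetic spelled out, here is a minimal sketch of that calculation (my own illustration, not part of the original post; note that the two percentages cancel, so the index reduces to the simple ratio of positive to negative conclusions):

```python
def trustworthiness_index(n_positive, n_negative):
    """Trustworthiness index (TI) as described above: the percentage of a
    researcher's papers with positive conclusions divided by the percentage
    with negative conclusions. Because both percentages share the same
    denominator, the TI is simply n_positive / n_negative."""
    total = n_positive + n_negative
    pct_positive = n_positive / total * 100
    pct_negative = n_negative / total * 100
    return pct_positive / pct_negative

print(trustworthiness_index(50, 50))  # 1.0 - just about acceptable
print(trustworthiness_index(80, 20))  # 4.0 - highly suspect
```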

So how would alternative medicine researchers do, if we applied this method for assessing their trustworthiness? Very poorly, I fear – but that is speculation! Let’s see some data. Let’s look at one prominent alternative medicine researcher and see. As an example, I have chosen Professor George Lewith (because his name is unique, which avoids confusion with other researchers), did a quick Medline search to identify the abstracts of his articles on alternative medicine, and extracted the crucial sentence from the conclusions of the most recent ones:

  1.  The study design of registered TCM trials has improved in estimating sample size, use of blinding and placebos
  2.  Real treatment was significantly different from sham demonstrating a moderate specific effect of PKP
  3. These findings highlight the importance of helping patients develop coherent illness representations about their LBP before trying to engage them in treatment-decisions, uptake, or adherence
  4. Existing theories of how context influences health outcomes could be expanded to better reflect the psychological components identified here, such as hope, desire, optimism and open-mindedness
  5. …mainstream science has moved on from the intellectual sterility and ad hominem attacks that characterise the sceptics’ movement
  6. Trustworthy and appropriate information about practitioners (e.g. from professional regulatory bodies) could empower patients to make confident choices when seeking individual complementary practitioners to consult
  7. Comparative effectiveness research is an emerging field and its development and impact must be reflected in future research strategies within complementary and integrative medicine
  8. The I-CAM-Q has low face validity and low acceptability, and is likely to produce biased estimates of CAM use if applied in England, Romania, Italy, The Netherlands or Spain
  9.  Our main finding was of beta power decreases in primary somatosensory cortex and SFG, which opens up a line of future investigation regarding whether this contributes toward an underlying mechanism of acupuncture.
  10. …physiotherapy was appraised more negatively in the National Health Service than the private sector but osteopathy was appraised similarly within both health-care sectors

This is a bit tedious, I agree, so I stop after just 10 articles. But even this short list does clearly indicate the absence of negative conclusions. In fact, I see none at all – arguably a few neutral ones, but nothing negative. All is positive in the realm of alternative medicine research then? In case you don’t agree with that assumption, you might prefer to postulate that this particular alternative medicine researcher somehow avoids negative conclusions. And if you believe that, you are not far from considering that we are being misinformed.

Alternative medicine is not really a field where one might reasonably expect that rigorous research generates nothing but positive results; even to expect 50 or 40% of such findings would be quite optimistic. It follows, I think, that if researchers only find positives, something must be amiss. I have recently demonstrated that the most active research homeopathic group (Professor Witt from the Charite in Berlin) has published nothing but positive findings; even if the results were not quite positive, they managed to formulate a positive conclusion. Does anyone doubt that this amounts to misinformation?

So, I have produced at least some tentative evidence for my suspicion that some alternative medicine researchers misinform us. But how precisely do they do it? I can think of several methods for avoiding publishing a negative result or conclusion, and I fear that all of them are popular with alternative medicine researchers:

  • design the study in such a way that it cannot possibly give a negative result
  • manipulate the data
  • be inventive when it comes to statistics
  • home in on the one positive aspect your generally negative data might show
  • do not write up your study; this way, nobody will ever see your negative results

And why do they do it? My impression is that they use science not for testing their interventions but for proving them. Critical thinking is a skill that alternative medicine researchers do not seem to cultivate. Often they manage to hide this fact quite cleverly and for good reasons: no respectable funding body would give money for such an abuse of science! Nevertheless, the end-result is plain to see: no negative conclusions are being published!

There are at least two further implications of the fact that alternative medicine researchers misinform the public. The first concerns the academic centres in which these researchers are organised. If a prestigious university accommodates a research unit of alternative medicine, it gives considerable credence to alternative medicine itself. If the research that comes out of the unit is promotional pseudo-science, the result, in my view, amounts to misleading the public about the value of alternative medicine.

The second implication relates to the journals in which researchers of alternative medicine prefer to publish their articles. Today, there are several hundred journals specialised in alternative medicine. We have shown over and over again that these journals publish next to nothing in terms of negative results. In my view, this too amounts to systematic misinformation.

My conclusion from all this is depressing: the type of research that currently dominates alternative medicine is, in fact, pseudo-research aimed not at rigorously falsifying hypotheses but at promoting bogus treatments. In other words alternative medicine researchers crucially contribute to the ‘sea of misinformation’ in this area.

Can one design a clinical study in such a way that it looks highly scientific but, at the same time, has zero chances of generating a finding that the investigators do not want? In other words, can one create false positive findings at will and get away with it? I think it is possible; what is more, I believe that, in alternative medicine, this sort of thing happens all the time. Let me show you how it is done; four main points usually suffice:

  1.  The first rule is that it ought to be an RCT, if not, critics will say the result was due to selection bias. Only RCTs have the reputation of being ‘top notch’.
  2.  Once we are clear about this design feature, we need to define the patient population. Here the trick is to select individuals with an illness that cannot be quantified objectively. Depression, stress, fatigue…the choice is vast. The aim must be to employ an outcome measure that is well-accepted, validated etc. but which nevertheless is entirely subjective.
  3.  Now we need to consider the treatment to be “tested” in our study. Obviously we take the one we are fond of and want to “prove”. It helps tremendously, if this intervention has an exotic name and involves some exotic activity; this raises our patients’ expectations which will affect the result. And it is important that the treatment is a pleasant experience; patients must like it. Finally it should involve not just one but several sessions in which the patient can be persuaded that our treatment is the best thing since sliced bread – even if, in fact, it is entirely bogus.
  4.  We also need to make sure that, for our particular therapy, no universally accepted placebo exists which would allow patient-blinding. That would be fairly disastrous. And we certainly do not want to be innovative and create such a placebo either; we just pretend that controlling for placebo-effects is impossible or undesirable. By far the best solution would be to give the control group no treatment at all. Like this, they are bound to be disappointed for missing out a pleasant experience which, in turn, will contribute to unfavourable outcomes in the control group. This little trick will, of course, make the results in the experimental group look even better.

That’s about it! No matter how ineffective our treatment is, there is no conceivable way our study can generate a negative result; we are in the pink!

Now we only need to run the trial and publish the positive results. It might be advisable to recruit several co-authors for the publication – that looks more serious and is not too difficult: people are only too keen to prolong their publication-list. And we might want to publish our study in one of the many CAM-journals that are not too critical, as long as the result is positive.

Once our article is in print, we can legitimately claim that our bogus treatment is evidence-based. With a bit of luck, other research groups will proceed in the same way and soon we will have not just one but several positive studies. If not, we need to do two or three more trials along the same lines. The aim is to eventually do a meta-analysis that yields a convincingly positive verdict on our phony intervention.

You might think that I am exaggerating beyond measure. Perhaps a bit, I admit, but I am not all that far from the truth, believe me. You want proof? What about this one?

Researchers from the Charite in Berlin just published an RCT to investigate the effectiveness of a mindful walking program in patients with high levels of perceived psychological distress.

To prevent allegations of exaggeration, selective reporting, spin etc. I take the liberty of reproducing the abstract of this study unaltered:

Participants aged between 18 and 65 years with moderate to high levels of perceived psychological distress were randomized to 8 sessions of mindful walking in 4 weeks (each 40 minutes walking, 10 minutes mindful walking, 10 minutes discussion) or to no study intervention (waiting group). Primary outcome parameter was the difference to baseline on Cohen’s Perceived Stress Scale (CPSS) after 4 weeks between intervention and control.

Seventy-four participants were randomized in the study; 36 (32 female, 52.3 ± 8.6 years) were allocated to the intervention and 38 (35 female, 49.5 ± 8.8 years) to the control group. Adjusted CPSS differences after 4 weeks were -8.8 [95% CI: -10.8; -6.8] (mean 24.2 [22.2; 26.2]) in the intervention group and -1.0 [-2.9; 0.9] (mean 32.0 [30.1; 33.9]) in the control group, resulting in a highly significant group difference (P < 0.001).

Conclusion. Patients participating in a mindful walking program showed reduced psychological stress symptoms and improved quality of life compared to no study intervention. Further studies should include an active treatment group and a long-term follow-up.

This whole thing could just be a bit of innocent fun, but I am afraid it is neither innocent nor fun, it is, in fact, quite serious. If we accept manipulated trials as evidence, we do a disservice to science, medicine and, most importantly, to patients. If the result of a trial is knowable before the study has even started, it is unethical to run the study. If the trial is not a true test but a simple promotional exercise, research degenerates into a farcical pseudo-science. If we abuse our patients’ willingness to participate in research, we jeopardise more serious investigations for the benefit of us all. If we misuse the scarce funds available for research, we will not have the money to conduct much needed investigations. If we tarnish the reputation of clinical research, we hinder progress.

If one spends a lot of time, as I presently do, sorting out old files, books, journals etc., one is bound to come across plenty of weird and unusual things. I, for one, am slow at making progress with this task, mainly because I often start reading the material that is in front of me. It was on one of those occasions that I began studying a book written by one of the more fanatical proponents of alternative medicine and stumbled over the term THE PROOF OF EXPERIENCE. It made me think, and I began to realise that the notion behind these four words is quite characteristic of the field of alternative health care.

When I studied medicine, in the 1970s, we were told by our seniors what to do, which treatments worked for which conditions and why. They had all the experience and we, by definition, had none. Experience seemed synonymous with proof. Nobody dared to doubt the word of ‘the boss’. We were educated, I now realise, in the age of EMINENCE-BASED MEDICINE.

All of this gradually changed when the concepts of EVIDENCE-BASED MEDICINE became appreciated and generally adopted by responsible health care professionals. If now the woman or man on top of the medical ‘pecking order’ claims something that is doubtful in view of the published evidence, it is possible (sometimes even desirable) to say so – no matter how junior the doubter happens to be. As a result, medicine has changed for ever: progress is no longer made funeral by funeral [of the bosses] but new evidence is much more swiftly translated into clinical practice.

Don’t get me wrong, EVIDENCE-BASED MEDICINE does not imply disrespect for EXPERIENCE; it merely takes it for what it is. And when EVIDENCE and EXPERIENCE fail to agree with each other, we have to take a deep breath, think hard and try to do something about it. Depending on the specific situation, this might involve further study or at least an acknowledgement of a degree of uncertainty. The tension between EXPERIENCE and EVIDENCE often is the impetus for making progress. The winner in this often complex story is the patient: she will receive a therapy which, according to the best available EVIDENCE and careful consideration of the EXPERIENCE, is best for her.

NOT SO IN ALTERNATIVE MEDICINE!!! Here EXPERIENCE still trumps EVIDENCE any time, and there is no need for acknowledging uncertainty: EXPERIENCE = proof!!!

In case you think I am exaggerating, I recommend thumbing through a few books on the subject. As I already stated, I have done this quite a bit in recent months, and I can assure you that there is very little evidence in these volumes to suggest that data, research, science, etc. matter a hoot. No critical thinking is required, as long as we have EXPERIENCE on our side!

‘THE PROOF OF EXPERIENCE’ is still a motto that seems to be everywhere in alternative medicine. In many ways, it seems to me, this motto symbolises much of what is wrong with alternative medicine and the mind-set of its proponents. Often, the EXPERIENCE is in sharp contrast to the EVIDENCE. But this little detail does not seem to irritate anyone. Apologists of alternative medicine stubbornly ignore such contradictions. In the rare case where they do comment at all, the gist of their response normally is that EXPERIENCE is much more relevant than EVIDENCE. After all, EXPERIENCE is based on hundreds of years and thousands of ‘real-life’ cases, while EVIDENCE is artificial and based on just a few patients.

As far as I can see, nobody in alternative medicine pays more than lip service to the fact that EXPERIENCE can be [and often is] grossly misleading. Little or no acknowledgement exists of the fact that, in clinical routine, there are simply far too many factors that interfere with our memories, impressions, observations and conclusions. If a patient gets better after receiving a therapy, she might have improved for a dozen reasons which are unrelated to the treatment per se. And if a patient does not get better, she might not come back at all, and the practitioner’s memory will therefore fail to register such events as therapeutic failures. Whatever EXPERIENCE is, in health care, it rarely constitutes proof!

The notion of THE PROOF OF EXPERIENCE, it thus turns out, is little more than self-serving, wishful thinking which characterises the backward attitude that seems to be so remarkably prevalent in alternative medicine. No tension between EXPERIENCE and EVIDENCE is noticeable because the EVIDENCE is being ignored; as a result, there is no progress. The loser is, of course, the patient: she will receive a treatment based on criteria which are less than reliable.

Isn’t it time to bury the fallacy of THE PROOF OF EXPERIENCE once and for all?
