
Can one design a clinical study in such a way that it looks highly scientific but, at the same time, has zero chance of generating a finding that the investigators do not want? In other words, can one create false-positive findings at will and get away with it? I think it is possible; what is more, I believe that, in alternative medicine, this sort of thing happens all the time. Let me show you how it is done; four main points usually suffice:

  1.  The first rule is that it ought to be an RCT; if not, critics will say the result was due to selection bias. Only RCTs have the reputation of being ‘top notch’.
  2.  Once we are clear about this design feature, we need to define the patient population. Here the trick is to select individuals with an illness that cannot be quantified objectively. Depression, stress, fatigue…the choice is vast. The aim must be to employ an outcome measure that is well accepted and validated, but which is nevertheless entirely subjective.
  3.  Now we need to consider the treatment to be “tested” in our study. Obviously we take the one we are fond of and want to “prove”. It helps tremendously if this intervention has an exotic name and involves some exotic activity; this raises our patients’ expectations, which will affect the result. And it is important that the treatment is a pleasant experience; patients must like it. Finally, it should involve not just one but several sessions in which the patient can be persuaded that our treatment is the best thing since sliced bread – even if, in fact, it is entirely bogus.
  4.  We also need to make sure that, for our particular therapy, no universally accepted placebo exists which would allow patient-blinding. That would be fairly disastrous. And we certainly do not want to be innovative and create such a placebo either; we just pretend that controlling for placebo effects is impossible or undesirable. By far the best solution would be to give the control group no treatment at all. That way, they are bound to be disappointed at missing out on a pleasant experience which, in turn, will contribute to unfavourable outcomes in the control group. This little trick will, of course, make the results in the experimental group look even better. (A small simulation sketch after this list shows how far these ingredients alone can move the result.)
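
For illustration only, here is a minimal simulation sketch in Python with entirely invented numbers: the “treatment” has no true effect whatsoever, but an unblinded, subjective outcome picks up expectation in the treated arm and disappointment in the no-treatment arm, and a conventional t-test duly declares the difference “significant”.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 37                                  # patients per arm (invented)

baseline_t = rng.normal(32, 6, size=n)  # hypothetical baseline stress scores, "treated" arm
baseline_c = rng.normal(32, 6, size=n)  # hypothetical baseline stress scores, control arm

true_effect = 0.0                       # the treatment itself does nothing
expectation_bias = -6.0                 # pampered, hopeful patients rate themselves better
disappointment_bias = +1.0              # the no-treatment group, feeling left out, rates itself slightly worse

follow_up_t = baseline_t + true_effect + expectation_bias + rng.normal(0, 5, size=n)
follow_up_c = baseline_c + disappointment_bias + rng.normal(0, 5, size=n)

change_t = follow_up_t - baseline_t
change_c = follow_up_c - baseline_c

t_stat, p_value = stats.ttest_ind(change_t, change_c)
print(f"mean change, 'treated': {change_t.mean():+.1f}")
print(f"mean change, control:   {change_c.mean():+.1f}")
print(f"p-value: {p_value:.4f}")        # typically far below 0.05, despite a true effect of zero
```

The exact numbers do not matter; the point is that expectation and disappointment alone, acting on a subjective score, are enough to manufacture a respectable-looking result.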

That’s about it! No matter how ineffective our treatment is, there is no conceivable way our study can generate a negative result; we are in the pink!

Now we only need to run the trial and publish the positive results. It might be advisable to recruit several co-authors for the publication – that looks more serious and is not too difficult: people are only too keen to prolong their publication-list. And we might want to publish our study in one of the many CAM-journals that are not too critical, as long as the result is positive.

Once our article is in print, we can legitimately claim that our bogus treatment is evidence-based. With a bit of luck, other research groups will proceed in the same way and soon we will have not just one but several positive studies. If not, we need to do two or three more trials along the same lines. The aim is to eventually do a meta-analysis that yields a convincingly positive verdict on our phony intervention.
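
To make that last step concrete: a standard fixed-effect (inverse-variance) meta-analysis will happily pool a handful of small, similarly biased trials into one precise-looking estimate. A minimal sketch, with entirely invented effect sizes and standard errors:

```python
import math

# Hypothetical effect estimates (standardised mean differences) and standard errors
# from a few small, similarly biased trials - all numbers invented for illustration.
trials = [(-0.55, 0.25), (-0.40, 0.30), (-0.62, 0.28)]

# Standard fixed-effect (inverse-variance) pooling.
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
# Three modest, biased trials pool into a seemingly precise, 'convincing' estimate.
```

Pooling cannot remove a bias that all the trials share; it only makes the biased answer look more certain.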

You might think that I am exaggerating beyond measure. Perhaps a bit, I admit, but I am not all that far from the truth, believe me. You want proof? What about this one?

Researchers from the Charite in Berlin just published an RCT to investigate the effectiveness of a mindful walking program in patients with high levels of perceived psychological distress.

To prevent allegations of exaggeration, selective reporting, spin etc. I take the liberty of reproducing the abstract of this study unaltered:

Participants aged between 18 and 65 years with moderate to high levels of perceived psychological distress were randomized to 8 sessions of mindful walking in 4 weeks (each 40 minutes walking, 10 minutes mindful walking, 10 minutes discussion) or to no study intervention (waiting group). Primary outcome parameter was the difference to baseline on Cohen’s Perceived Stress Scale (CPSS) after 4 weeks between intervention and control.

Seventy-four participants were randomized in the study; 36 (32 female, 52.3 ± 8.6 years) were allocated to the intervention and 38 (35 female, 49.5 ± 8.8 years) to the control group. Adjusted CPSS differences after 4 weeks were -8.8 [95% CI: -10.8; -6.8] (mean 24.2 [22.2; 26.2]) in the intervention group and -1.0 [-2.9; 0.9] (mean 32.0 [30.1; 33.9]) in the control group, resulting in a highly significant group difference (P < 0.001).

Conclusion. Patients participating in a mindful walking program showed reduced psychological stress symptoms and improved quality of life compared to no study intervention. Further studies should include an active treatment group and a long-term follow-up

This whole thing could just be a bit of innocent fun, but I am afraid it is neither innocent nor fun, it is, in fact, quite serious. If we accept manipulated trials as evidence, we do a disservice to science, medicine and, most importantly, to patients. If the result of a trial is knowable before the study has even started, it is unethical to run the study. If the trial is not a true test but a simple promotional exercise, research degenerates into a farcical pseudo-science. If we abuse our patients’ willingness to participate in research, we jeopardise more serious investigations for the benefit of us all. If we misuse the scarce funds available for research, we will not have the money to conduct much needed investigations. If we tarnish the reputation of clinical research, we hinder progress.

11 Responses to Can one design a trial such that it inevitably produces a positive result?

  • Without doubt your scenario happens and has happened many times. But then there are even more dishonest trials. Trials where the conclusions don’t match the data. Trials where significance is found by running many different statistical tests until one shows the magic p < 0.05. The latter also includes trials where many different outcomes are measured, so that at least one is likely to come out significant. The examples I give are almost invariably published in altmed magazines pretending to be journals. It is a large and fraudulent operation.

    • @Acleron – I completely agree. The methods in this article are simply examples of poor methodology – and therefore are easy for peer review to catch and dismiss. In my experience the most pernicious statistical trick (which, I’m sad to say, pervades a lot of non-altmed research – especially meta-analyses) is revising the null hypothesis AFTER the data have been gathered and analyzed. For instance, let’s say that a well-designed, double-blind RCT methodology is applied to a study looking at the effects of a given drug… If we state at the outset that we’re looking to see if the drug helps with headaches, then p < .05 gives us reasonable grounds to say that the drug helps with headaches. However, what some "researchers" do is look at whether the drug helps with headaches, backaches, migraines, fatigue, asthma, cramps, etc etc… The more effects we look for, the greater the chance that one or two of them will show significance at p < .05 purely by chance (after all, p = 0.05 means roughly a one-in-twenty chance of seeing such a result by luck alone when there is no real effect). Because of this, if you’re looking at multiple effects, the per-comparison threshold for significance has to be made much stricter. However, some unethical "researchers" will instead simply re-evaluate the data as if they had only ever been looking at the effects they already know to have come out “significant”. Thus the drug will be found to be helpful (or harmful, depending on what the researchers are trying to prove) in treating ____, even though in reality the finding is simply the product of chance. (A small simulation of this effect is sketched after this comment.)

      This is why, for all research, there should be a greater push towards repeatability. Can the results be replicated? Sadly, there’s little money or publication space for studies which simply validate the findings of others.
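
To illustrate the multiple-comparisons point made in the comment above, here is a minimal simulation sketch (Python, with invented trial sizes): two arms of pure noise are compared on ten independent outcomes, and the trial is declared “positive” if any single outcome reaches p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_arm, n_outcomes, n_sims, alpha = 30, 10, 2000, 0.05

positive_trials = 0
for _ in range(n_sims):
    # Two arms of pure noise: the drug does nothing for any of the ten outcomes.
    a = rng.normal(size=(n_outcomes, n_per_arm))
    b = rng.normal(size=(n_outcomes, n_per_arm))
    pvals = stats.ttest_ind(a, b, axis=1).pvalue
    # Declare the whole trial "positive" if ANY outcome happens to cross p < 0.05.
    if (pvals < alpha).any():
        positive_trials += 1

print(f"simulated chance of at least one 'significant' outcome: {positive_trials / n_sims:.0%}")
print(f"theoretical value for 10 independent outcomes: {1 - (1 - alpha) ** n_outcomes:.0%}")  # ~40%
```

A Bonferroni-type correction (testing each outcome at 0.05/10) would bring the chance of a spurious “positive” trial back to roughly 5%.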

  • How is this study one of “mindful walking” rather than just “walking”?
    I think they are trying to sneak the “mindful” part in.

    This study is essentially uncontrolled.

    It also has multiple comparisons, with no apparent attempt to control for this. (Their tables actually break out 24 different comparisons.)

    It also essentially excludes all of my patients, since the exclusion criteria included “acute diseases or chronic disease at baseline”. It also excluded anyone on “psychopharmacological drugs” – which is exactly the group I’d want to use this therapy as an adjunct for (if it worked).

    Overall, a “junk” study.
    It’s clear why it was published in the journal it was.
    Although more mainstream journals let this sort of thing through as well (for pharmacological interventions, too). I think that the alt-medders learned some of their techniques from pharma studies.

  • I thought this was the norm for cognitive-behavioural interventions?

    Show that telling people positive things leads to them filling in questionnaires more positively, and then wait for NICE to endorse your cheap ‘treatment’.

    Also, deviations from a trial’s protocol are helpful for ensuring the right result is reached. If therapists notice that the benefits of treatment aren’t quite what was expected, the criteria for classing patients as improved or recovered can be dramatically watered down mid-trial, but before the data have been collected and analysed.

  • Mindful walking? Is this defined anywhere (if so, I missed it)? Personally, I prefer mindLESS walking–much less stressful!

  • Would have been interesting to compare mindful walking with tour guide walking.

  • Thank you for this article. I found it very illuminating. I wonder whether it is possible at all to do a clinical trial in mental health for treatments that do not involve drugs or operations. I thought therapies such as cognitive-behavioral therapy or meditation-based stress reduction were accepted treatments for not-too-severe depression/anxiety, for instance. Would a no-placebo test be one way to find out whether such a treatment is effective?

    • one would need an ‘attention control’, i.e. a control group in which patients are given the same amount of attention but without the experimental treatment.

      • An ‘attention control’ is insufficient for therapies which are delivered in such a way that patients’ perceptions are deliberately altered, such as cognitive-behavioral therapies, where patients may well change their questionnaire-answering behaviour, but not their behaviour in general.

        In terms of RCTs to treat fatigue, objective measures are needed, neuropsychological testing and actigraphy at a minimum.

        http://onlinelibrary.wiley.com/doi/10.1111/cpsp.12042/abstract
        http://www.rehab.research.va.gov/jour/2013/506/ickmans506.html

        • An attention control alone does seem unlikely to be able to fully account for bias. Blinded assessment is another possible tool.

          Therapies where patients are encouraged to think positively or believe that they have greater control over symptoms if they carry out particular tasks may be more likely to distort results that rely upon subjective self-report measures than more passive cognitive and behavioral interventions.
