For many months now, I have noticed a proliferation of so-called pilot studies of alternative therapies. A pilot study (also called a feasibility study) is defined as a small-scale preliminary study conducted in order to evaluate feasibility, time, cost, and adverse events, and to improve upon the study design prior to the performance of a full-scale research project. Here I submit that most of the pilot studies of alternative therapies are, in fact, bogus.

To qualify as a pilot study, an investigation needs to have an aim that is in line with the above-mentioned definition. Another obvious hallmark must be that its conclusions are in line with this aim. We do not need to conduct much research to find that even these two elementary preconditions are not fulfilled by the plethora of pilot studies that are currently being published, and that proper pilot studies of alternative medicine are very rare.

Three recent examples of dodgy pilot studies will have to suffice (but rest assured, there are many, many more).

Foot Reflexotherapy Induces Analgesia in Elderly Individuals with Low Back Pain: A Randomized, Double-Blind, Controlled Pilot Study

The aim of this study was to evaluate the effects of foot reflexotherapy on pain and postural balance in elderly individuals with low back pain. And the conclusions drawn by its authors were that this study demonstrated that foot reflexotherapy induced analgesia but did not affect postural balance in elderly individuals with low back pain.

Effect of Tai Chi Training on Dual-Tasking Performance That Involves Stepping Down among Stroke Survivors: A Pilot Study.

The aim of this study was to investigate the effect of Tai Chi training on dual-tasking performance that involved stepping down and compared it with that of conventional exercise among stroke survivors. And the conclusions read: These results suggest a beneficial effect of Tai Chi training on cognition among stroke survivors without compromising physical task performance in dual-tasking.

The Efficacy of Acupuncture on Anthropometric Measures and the Biochemical Markers for Metabolic Syndrome: A Randomized Controlled Pilot Study.

The aim of this study was to evaluate the efficacy [of acupuncture] over 12 weeks of treatment and 12 weeks of follow-up. And the conclusion: Acupuncture decreases WC, HC, HbA1c, TG, and TC values and blood pressure in MetS.

It is almost painfully obvious that these studies are not ‘pilot’ studies as defined above.

So, what are they, and why are they so popular in alternative medicine?

The way I see it, they are the result of amateur researchers conducting pseudo-research for publication in lamentable journals in an attempt to promote their pet therapies (I have yet to find such a study that reports a negative finding). The sequence of events that leads to the publication of such pilot studies is usually as follows:

  • An enthusiast or a team of enthusiasts of alternative medicine decide that they will do some research.
  • They have little or no know-how in conducting a clinical trial.
  • They nevertheless feel that such a study would be nice as it promotes both their careers and their pet therapy.
  • They design some sort of a plan and start recruiting patients for their trial.
  • At this point they notice that things are not as easy as they had imagined.
  • They have too few funds and too little time to do anything properly.
  • This does not, however, stop them from continuing.
  • The trial progresses slowly, and patient numbers remain low.
  • After a while the would-be researchers get fed up and decide that their study has enough patients to stop the trial.
  • They improvise some statistical analyses with their results.
  • They write up the results the best they can.
  • They submit it for publication in a 3rd class journal and, in order to get it accepted, they call it a ‘pilot study’.
  • They feel that this title is an excuse for even the most obvious flaws in their work.
  • The journal’s reviewers and editors are all proponents of alternative medicine who welcome any study that seems to confirm their belief.
  • Thus the study does get published despite the fact that it is worthless.
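To see why "too few patients" combined with improvised statistics is fatal, it helps to look at the arithmetic of sample size. The sketch below is my own illustration, not from the post: it uses the standard normal approximation for a two-arm comparison of means, and the function name `n_per_group` is purely illustrative.

```python
# Illustrative sketch: approximate patients needed PER GROUP in a two-arm
# trial comparing means, via the normal approximation
#   n ≈ 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2
# where d is the standardized effect size (Cohen's d).
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_b = norm.ppf(power)           # desired statistical power
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

for d in (0.2, 0.5, 0.8):
    print(f"effect size d={d}: about {n_per_group(d)} patients per group")
```

Even for a fairly large effect (d = 0.8) one needs roughly 25 patients per group; for the small effects plausible in alternative medicine (d = 0.2), nearly 400 per group. Trials abandoned at a few dozen patients simply cannot answer an efficacy question.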

Some might say ‘so what? no harm done!’

But I beg to differ: these studies pollute the medical literature and misguide people who are unable or unwilling to look behind the smoke-screen. Enthusiasts of alternative medicine popularise these bogus trials, while hiding the fact that their results are unreliable. Journalists report about them, and many consumers assume they are being told the truth – after all it was published in a ‘peer-reviewed’ medical journal!

My conclusions are as simple as they are severe:

  • Such pilot studies are the result of gross incompetence on many levels (researchers, funders, ethics committees, reviewers, journal editors).
  • They can cause considerable harm, because they mislead many people.
  • In more than one way, they represent a violation of medical ethics.
  • They could be considered scientific misconduct.
  • We should find ways of stopping this increasingly common form of scientific misconduct.

12 Responses to ‘Pilot studies’ of alternative medicine: incompetent, unethical, misleading and harmful

  • Not so sure that all these publications come from amateur researchers only. I know a couple of ‘researchers’ who know everything there is to know about clinical trials and yet almost exclusively publish this type of nonsense.

    The reason, for me at least, is that they need to get a ‘positive’ result out there in whatever shape or form possible, which they then promote in the press via their Uni’s press office. And that is all they need to do in order to convince the public that their alternative treatment works and that it has been scientifically validated. For some reason, the large, well-designed trials never really happen.

    One example that comes to mind is how the title of such a trial evolved from the published title to the completely different title on the acupuncture clinic’s website.

    Below is an email that I’ve written to PLoS One regarding this publication:

    “I am contacting you in regard to a recent publication in PLOS ONE entitled “The role of treatment timing and mode of stimulation in the treatment of primary dysmenorrhea with acupuncture: An exploratory randomised controlled trial”

    This study lacks a control group (this is science 101) and is under-powered and, as such, no firm conclusions can or should be drawn. There are a number of other issues with this manuscript as well, which you can find here

    What I fear most, however, is that the authors, who have links with commercial acupuncture clinics, use these seriously flawed results and turn them into a very clear positive result in order to mislead the public.

    Here is the title of this study on the website of Western Sydney University: “Study points to acupuncture to reduce period pain”

    Here it is on the website of the National Institute of Complementary Medicine: “Period pain reduced by Acupuncture treatment”

    So it goes from a very cautious “exploratory” study to a factual statement, and this is exactly what they want. And because the leading author, Mike Armour, happens to be a director of an acupuncture clinic, they will post this title on their FB site: “Period pain: Acupuncture is a effective alternative to

    I do not believe that this manuscript should have been accepted for publication because of the above mentioned reasons, but also because the authors have only one purpose, and that is to mislead the public regarding the effectiveness of acupuncture.”

  • I recently came across this publication (it might have been discussed on your blog) that investigated where all these misleading health claims in newspapers come from. I used to think that it was simply bad journalism, but apparently a big part comes straight from over-inflated and exaggerated Uni press releases (approved by the researchers). I can personally vouch for this, as per my comment above.

    Problem is that the expert quacks (or as I like to call them, Quacks 2.0) are making use of this issue to further whatever they like to further without any penalty whatsoever (it is of course a more widespread issue than only CAM).

  • “Pilot study” can also informally mean “a small study but it is the biggest we can afford” – which, until research funding is differently distributed, is the best we are going to get for many things.

    • This is not a pilot study but an underpowered study.

    • @jane

      What’s the problem with research funding?

    • Hi jane. Do you understand how research funding is distributed? For biomedical research, a person usually applies to an appropriate funding body for a grant. The largest two UK bodies supporting biomedical research are the Wellcome Trust and the Medical Research Council but there are many other, smaller sources of funding. The applicant has to spell out in detail the research they want to undertake, why they want to undertake it, what hypothesis or hypotheses underpin the research idea, exactly how they plan to go about doing the research, what resources will be required, how long the research project will take, and so on. (You can visit the websites of grant-funding agencies to get the details for applicants.)

      The application will normally be sent out to referees — experts in the field of the applicant’s research — to obtain opinions on the potential value of the proposed project, the scientific standing and background of the applicant(s), the feasibility of the research plan, whether or not the amount of money applied for is reasonable or otherwise, and so on. Finally, the application will be read by members of a committee (normally comprising the top echelons of biomedical researchers), who meet to discuss a round of applications and to recommend which ones should be funded and which should be rejected.

      So, in principle, if a chiropractor (say) wants to get funding for a clinical trial of spinal manipulation therapy for chronic lower back pain, all they have to do is take the trouble to submit an appropriate grant application to a funding body.

      The problems arise when (for example) the referees and committee see that the chiropractor has never previously done any research, she/he expects to recruit an unrealistic number of patients, the clinical trial is badly designed, the control (placebo) arm of the trial makes poor sense and the outcome measures are only vaguely designed. From my personal perspective, the research competence of most people involved in CAM is so deficient, they’ll never succeed in attracting funding. (Edzard Ernst happens to be an exception to this comment, because he — from the start — set out to establish the means by which high quality research could be done in the field.)

      Do you see the problem with your “until research funding is differently distributed”? Suppose the UK government decides to pour millions of pounds into funding CAM/’holistic medicine’ research: how do they go about that? Do they give the money to the loudest voices who happen to have access to ministers? Do they give it to friends and family? No, they come up with a system indistinguishable from the one I just described, because when you ‘throw money at something’ you simply have to have something in place to ensure that public money isn’t just chucked on to a rubbish heap.

      Please advise how you think the sources of research funding should go about differently distributing their resources.

    • PS. There’s an exception to everything I said in my previous comment. When a private company wants to fund research, they can short-circuit the process I described and go directly to a researcher or researchers and offer to fund a project that’s more than partly their own initiative. This happens all the time with Big Pharma projects (though many/most of the doctors/scientists they approach are jealous of their reputations and do their best not to include the most self-serving things Big Pharma asks them to do).

      So why aren’t Big Snakeoil companies, which support an industry approaching $200 billion, doing (apparently) a darn thing to use their profits to ‘differently distribute’ research funding in camistry??

  • It’ll come as no surprise to anyone here that the systematic review by Mathie et al. of individualised homeopathy trials found just three trials it deemed ‘reliable evidence’ and that one of these self-described as ‘preliminary’ (Jacobs 2001) with just 75 participants, another (Bell 2004) as a ‘pilot study’ with just 62 participants and the last just had 81 participants (Jacobs 1994). Given that these were published 14 to 24 years ago, it doesn’t look like homeopaths had much of an appetite to conduct larger trials.

  • Once in a while you do get large trials such as the $600k acupuncture trial in Aus. At the time the “Acupuncture compared to sham acupuncture and standard care to improve live birth rates for women undergoing IVF: a randomised controlled trial” was the biggest ever CAM trial in Aus history but it was also called ‘Unis in a whacky waste of cash’.

    The reason is probably that it is a typical A+B vs B trial design, which is bound to give a positive result. What is odd, though, is that this trial was completed a long time ago, but as far as I know, nothing has been published yet. If anyone out there has more info on this trial, please let me know. You can find the trial details here (there are also some other interesting acupuncture pilot trials on this registry): Trial ID ACTRN12611000226909
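Why an A+B vs B design is "bound to give a positive result" can be shown with a small simulation (my own sketch, not from the comment above). We assume the add-on therapy A has zero specific effect but carries a modest nonspecific (placebo/attention) benefit that this design cannot control for; the function name and effect sizes are purely illustrative.

```python
# Illustrative sketch: A+B vs B trials reward nonspecific effects.
# The add-on therapy A has ZERO specific effect here; its entire benefit
# (0.4 SD) is nonspecific, yet most simulated trials come out 'positive'.
import random
import statistics

random.seed(1)

def simulate_trial(n=100, nonspecific=0.4):
    b_only = [random.gauss(0.0, 1.0) for _ in range(n)]          # standard care (B)
    a_plus_b = [random.gauss(nonspecific, 1.0) for _ in range(n)] # B + add-on A
    # Welch-style t statistic for the between-group difference
    m1, m2 = statistics.mean(a_plus_b), statistics.mean(b_only)
    v1, v2 = statistics.variance(a_plus_b), statistics.variance(b_only)
    t = (m1 - m2) / ((v1 / n + v2 / n) ** 0.5)
    return t > 1.66  # roughly the one-sided 5% threshold at this sample size

positives = sum(simulate_trial() for _ in range(1000))
print(f"'positive' trials: {positives}/1000")
```

Because the comparison confounds A's specific effect with everything nonspecific that comes with it (extra attention, expectation, ritual), the design cannot fail to favour the add-on, which is exactly why it is so popular.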

  • Panos Barlas sent me this via Twitter regarding my criticism of pilot studies:
    Hmmm…is that when others are doing them? Because there’s a handful coauthored by Ernst…
    True, I have published several pilot studies of alt med. To be precise, Medline lists 4 abstracts of my pilot studies:
    One had several paragraphs under the heading “Feasibility of a Future Randomized Clinical Trial”
    one concluded: “A further randomized controlled study focusing on this group would appear justified and is being planned.”
    one concluded: “results of this pilot study suggest that there is scope for conducting a randomised, placebo-controlled, double-blind trial to investigate the value of hypericum as a treatment for premenstrual syndrome.”
    The last concluded: “These results suggest there is scope for conducting a randomized placebo-controlled trial to investigate the specific effect of Hypericum on fatigue and that the study design must take account of the role of depression in fatigue.”
    I think this shows in a near-exemplary fashion what pilot studies have to be used for:
    for evaluating the possibility of a proper trial AND NOT FOR TESTING EFFICACY/EFFECTIVENESS
