This was essentially the question raised in a correspondence with a sceptic friend. His suspicion was that statistical methods might produce false-positive overall findings if the research is done by enthusiasts of the so-called alternative medicine (SCAM) in question (or of other areas of inquiry, which I will omit because they are outside my area of expertise). Consciously or inadvertently, such researchers might introduce a pro-SCAM bias into their work. As the research is done mostly by such enthusiasts, the totality of the evidence would turn out to be heavily skewed in favour of the SCAM under investigation. The end result would then be a false-positive overall impression about the SCAM which is based less on reality than on the wishful thinking of the investigators.

How can one deal with this problem?

How to minimise the risk of being overwhelmed by false-positive research?

Today, we have several mechanisms and initiatives that are at least partly aimed at achieving just this. For instance, there are guidelines on how to conduct primary research so that bias is minimised; the CONSORT statements are an example. As many studies pre-date CONSORT, we need a different approach for reviews of clinical trials. The PRISMA guideline and the Cochrane Handbook are attempts to make sure systematic reviews are transparent and rigorous. These methods can work quite well in finding the truth, but one needs to be aware, of course, that some researchers do their very best to obscure it. I have also tried to go one step further and shown that the direction of a study’s conclusion correlates with its rigour (btw: this was the paper that prompted Prof Hahn’s criticism and slander of my work and person).

So, problem sorted?

Not quite!

The trouble is that over-enthusiastic researchers may not always adhere to these guidelines; they may pretend to adhere but cut corners, or they may be dishonest and cheat. And what makes this even trickier is the possibility that they do all this inadvertently: their enthusiasm could get the better of them, so that they are doing research not to TEST WHETHER a treatment works but to PROVE THAT it works.

In the realm of SCAM, we have a lot of this – trust me, I have seen it often with my own eyes, regrettably sometimes even within my own team of co-workers. The reason is that SCAM is loaded with emotion and quasi-religious beliefs; and these provide a much stronger conflict of interest than money ever could, in my experience.

And how might we tackle this thorny issue?

After thinking long and hard about it, I came up in 2012 with my TRUSTWORTHINESS INDEX (TI):

If we calculated the percentage of a researcher’s papers arriving at positive conclusions and divided it by the percentage of his papers drawing negative conclusions, we might have a useful measure. A realistic example might be the case of a clinical researcher who has published a total of 100 original articles. If 50% had positive and 50% negative conclusions about the efficacy of the therapy tested, his TI would be 1.

Depending on what area of clinical medicine this person is working in, 1 might be a figure that is just about acceptable in terms of the trustworthiness of the author. If the TI goes beyond 1, we might get concerned; if it reaches 4 or more, we should get worried.

An example would be a researcher who has published 100 papers of which 80 are positive and 20 arrive at negative conclusions. His TI would consequently amount to 4. Most of us equipped with a healthy scepticism would consider this figure highly suspect.

Of course, this is all a bit simplistic, and, like all other citation metrics, my TI does not provide any level of proof; it is merely a vague indicator that something might be amiss. And, as stressed already, the cut-off point for any scientist’s TI very much depends on the area of clinical research we are dealing with. The lower the plausibility and the higher the uncertainty associated with the efficacy of the experimental treatments, the lower the point at which the TI might suggest something to be fishy.
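For readers who like to see the arithmetic spelt out, the TI can be computed in a few lines. This is a minimal sketch; the function name and the convention for authors with zero negative papers are my own illustrative choices, not part of the original proposal:

```python
def trustworthiness_index(n_positive: int, n_negative: int) -> float:
    """Percentage of positive conclusions divided by percentage of negative ones.

    Since both percentages share the same denominator (total papers), this
    reduces to the simple ratio n_positive / n_negative."""
    if n_negative == 0:
        # An author with no negative conclusions at all: the TI is unbounded.
        return float("inf")
    return n_positive / n_negative

# The two worked examples from the text:
print(trustworthiness_index(50, 50))  # 1.0 -> just about acceptable
print(trustworthiness_index(80, 20))  # 4.0 -> we should get worried
```

Note that the researchers in the “hall of fame” below, with no negative conclusions whatsoever, fall into the unbounded case.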

Based on this concept, I later created the ALTERNATIVE MEDICINE HALL OF FAME. This is a list of researchers who manage to go through life researching their particular SCAM without ever publishing a negative conclusion about it. In terms of TI, these people have astronomically high values. The current list is not yet long, but it is growing:

John Weeks (editor of JACM)

Deepak Chopra (US entrepreneur)

Cheryl Hawk (US chiropractor)

David Peters (osteopathy, homeopathy, UK)

Nicola Robinson (TCM, UK)

Peter Fisher (homeopathy, UK)

Simon Mills (herbal medicine, UK)

Gustav Dobos (various, Germany)

Claudia Witt (homeopathy, Germany and Switzerland)

George Lewith (acupuncture, UK)

John Licciardone (osteopathy, US)

The logical consequence of a high TI would be to ban such researchers from obtaining research funds and publishing papers, because their contribution merely confuses us and makes science less reliable.

I am sure there are other ways of addressing the problem of being misled by false-positive research. If you can think of one, I’d be pleased to hear about it.


25 Responses to Does clinical research into so-called alternative medicine (SCAM) send us up the garden path?

  • Ernst, Dear fellow,

    Yes, this is of great concern.

    Much more likely that standard trial design & analysis UNDERESTIMATES effect size of ‘holistically’ based therapies.

    Pharma trials, remedy versus diagnosis, generally try to eliminate the individual (so to speak), whereas holistic CAM is very often individual based – both in the view of diagnosis, with concomitants, and remediation used. One ends up with a collection of interventions with N=1, not susceptible to standard analysis.

    As someone who has worked so long in the field, no doubt you know this.

    Wrongly designed and interpreted trials completely miss this point when they eliminate the individualisation effect. This is a generalised methodological bias in the “science”, much favoured by proponents of pharma simply because it does have intuitive (but wrong) appeal, and does detract from non-pharma methods. The result of challenges to this status quo is often pseudo-skeptic bluster.

    The evidence-base for homeopathy trials in particular is polluted with many many trials of “remedy x versus diagnosis y”, often with a resultant false interpretation that homeopathy “does not work”.

    Anyone with an inkling should know that homeopathy works differently from pharma. Such a trial would be better interpreted as reaching for an approximation of ~how many~ in a general population with diagnosis y are likely to be cured by that particular remedy x. According to homeopathic method, the others should be cured by various different person-based remedies, based on the totality of symptoms at the time of treatment.
    Further complications of homeopathy, addressing constitution and obstruction to cure, make the analysis even more fraught with difficulty.

    In mitigation, all research is interesting if correctly interpreted, and indeed such trials can show what happens if an uneducated patient or doctor uses a therapy in the wrong way.

    Nevertheless, it is almost surprising that homeopathy makes any showing at all, given the number of allopathically based trials, where it ‘almost’ works, let alone the approximate parity with pharma trials (if such best-guess comparisons are valid).
    [Milgrom 2013]
    ( a rough comparison of pie charts, after “the pieman” Dr Kaplan in 2009, can be seen at )

    Some time back, I did an analysis of what happens when one treats a holistic armoury as if it is an allopathic, pharma one, without recognition of individualisation.
    The essence is to set up a theoretical population where results are definite and known in advance. I used a simple model of padlocks with keys – individualised remedies for being locked up. Treated allopathically, it is very obvious that “key” DOES NOT WORK for curing “lockedupness” in such a population (whereas, interpreted correctly, of course we know it is 100% effective). Further analysis exhibits an interesting analogue of regression to mean.
    It’s very easy to do, and anyone genuinely interested in the discrepancy between trials and clinical reports, could have looked at this years ago.
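    The padlock-and-key model described above is indeed easy to reproduce. The following is my own illustrative reconstruction of the commenter’s description (the function and population model are assumptions, not code from any published analysis):

```python
def trial_cure_rate(population_size: int, individualised: bool) -> float:
    """Each 'patient' i is a padlock that only key i opens.

    An allopathic-style trial gives every patient the same remedy (key 0);
    individualised treatment matches each patient with their own key.
    Returns the observed cure rate."""
    cures = 0
    for lock in range(population_size):
        key = lock if individualised else 0
        if key == lock:
            cures += 1
    return cures / population_size

print(trial_cure_rate(100, individualised=False))  # 0.01 - "key" appears not to work
print(trial_cure_rate(100, individualised=True))   # 1.0  - every lock is opened
```

    Whether real patients behave like matched locks – i.e. whether a uniquely effective individualised remedy actually exists for each patient – is, of course, exactly the premise under dispute in this thread.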

    In short, and very obviously, using inappropriate methodology in trial design, analysis, and interpretation produces wrong results, wrong thinking (and wrong blogs). However intuitively appealing the wrong methods are. Surprise, surprise.

    Now I’ll just wait for responses that say I “don’t understand scientific method”.

    • “The evidence-base for homeopathy trials in particular is polluted with many many trials of “remedy x versus diagnosis y”, often with a resultant false interpretation that homeopathy “does not work”.”

      • You refer to your write-up of Mathie’s Aug 2018 paper
        “Systematic Review and Meta-Analysis of Randomised, Other-than-Placebo Controlled, Trials of Individualised Homeopathic Treatment”

        which concluded (my capitals):
        ” the current data PRECLUDE A DECISIVE CONCLUSION about the comparative effectiveness of IHT”


        Well, yes, that’s what it says, which I think supports my point that there are insufficient well-designed trials. Which is to be lamented.
        And shows that some statisticians in the field demonstrate a sanguine unbiased attitude, in contrast to many of the pseudo-skeptic homeopathy denialists who cannot resist propaganda spin.

        Your own reading of this also seems to invite the reader to indulge in the Prime Pseudo-skeptic Fallacy, a.k.a. argumentum ad ignorantiam, that lack of evidence of effect somehow implies lack of effect. Disingenuous at the least. Am I being unfair? I think not. I wonder who failed here.

        I should not be having to teach people here how to read a paper.

        To paraphrase, what Mathie actually reported was:
        Only eight trials – eight – had data that were extractable for analysis:
        for four heterogeneous trials with design 1a (Individualised Homeopathy as alternative for particular conditions), the pooled Odds Ratio was statistically non-significant;
        [for avoidance of doubt, that means, no firm conclusion can be drawn]
        and in the remaining trial of design 1a (Individualised Homeopathic intervention, as alternative to one popular conventional intervention – not, it should be pointed out, as alternative to ALL of psycho-pharmacology),
        IHT was NON-INFERIOR to fluoxetine in the treatment of depression.

        Collectively, for three clinically heterogeneous trials with design 1b (adjunct IHT), there was a statistically significant Standardized Mean Difference FAVOURING adjunctive IHT
        – and the most pragmatic study attitude was associated with these.

        I should add that I do not know the quality of individualised homeopathy used. Some homeopaths are better than others, and I have also seen studies claiming to use “individualised homeopathy” which rely on automated prescription, thus are testing the automaton rather than the therapy.

        Moving on, Mathie’s Jan 2019 paper
        “Systematic Review and Meta-Analysis of Randomised, Other-than-Placebo Controlled, Trials of Non-Individualised Homeopathic Treatment”
        also says, inter alia
        “Significant heterogeneity undermined the planned meta-analyses or their meaningful interpretation.”
        “The current data preclude a decisive conclusion about the comparative effectiveness of NIHT.”

        Are you going to appeal to ignorance again?

        • you are in desperate need of a critical thinking course, Will

          • Hardly.

            If you refer to pseudo-skeptical “follow-my-leader thinking”, no, I feel no need for a propaganda re-education, thanks.

            Might I suggest people revisit Carl Sagan’s “baloney detection kit”?
            Especially the bits about over-reliance on convention. Oh, and the rest.

            Hint: those topics were meant to forewarn, Edzard, rather than as guidelines.

            As a game, readers might like to see which points you can tick off in this blog. And the regular comments.
            (Ideally there should be a prize . .)

            And with regard to the execrable pseudo-skeptics, remember Sagan’s admonition:
            “Like all tools, the baloney detection kit can be misused, applied out of context, or even employed as a rote alternative to thinking. But applied judiciously, it can make all the difference in the world — not least in evaluating our own arguments before we present them to others.”

          • I am so glad you mention Carl Sagan; I studied him more than you, I think.
            In any case, you do need that course in critical thinking in order to come off Mount Stupid!

          • @EE

            Your answer to Will is your typical BS response when you don’t have a good answer.

            Randomized double-blind placebo-controlled studies are considered to be the gold standard for validating therapies. Science-based medicine wants to give 100% weighting to a therapy with an approval letter, yet the evidence from any study is never close to 100%. You here (Edzard) want to argue that it must be…. since SBM arrived at an outcome. So, just because the authorities approve a therapy does not mean that the therapy will not fail the patient. The process that arrives at a verdict is usually more focused on safety rather than efficacy. And yet neither safety nor efficacy is guaranteed, and both often fail the patient.

            Why is this system of SBM approvals flawed? Because the verdict is derived from only a preponderance of the evidence and not an overwhelming majority of the evidence.

            Any given study that ends in approval might have outcomes that achieve a preponderance of evidence in favor of efficacy that is not effective for 45% of those subjected to the trial.
            While another study that does NOT end in approval might have outcomes that 45% of those subjected to the study did experience a significant benefit.

            Why does SBM attempt to give 100% weighting to studies that do not have 100% evidence? I hear over and over here at this website that it must be so, because science either proved or did not prove that a therapy is beneficial….. hogwash!

            Therefore, there is room for variation within all medicine that allows for individual outcomes, and this from most any therapies that are applied.
            The bottom line is that CAM therapies do work for some patients, while not working for all…. and this is acceptable for many people that choose to use CAM and avoid SBM.

            Please allow patients to choose for themselves; we don’t need a government to be our daddy…. buyer beware.

          • “CAM therapies do work for some patients, while not working for all”
            in this case, I declare cigarette smoking an effective preventive measure against lung cancer, because my gran smoked all her life and did not die of lung cancer.
            “Please allow patients to choose for themselves”

    • If each unique patient receives a unique treatment, such that no comparisons can be made with any other patient or their treatment, how does one generate data and information? How would one falsify the claim that homeopathic treatments work? How would one demonstrate to a skeptic that it works?

      You say the methods employed by science are wrong.
      You do not say what the right method is.

      Without methods which can generate measurable data, homeopathy can only be said to work in the mind of a person who believes it works – unless you are able to explain a non-scientific method which can demonstrate objectively whether it works or not.

      • @Leigh Jackson

        I stay away from what has proved not to work for me, I adhere to what has proven to work for me…. it’s that simple. Most CAM therapies have low penalty side effects. Over promise and under deliver medicine has failed me and mine too many times. However, if I have a good experience with SBM, I am more likely to return for that specific indication again.

        I subscribe to science for acute problems, blood studies, infections, diagnostic imaging (kept to a minimum), surgery as a last resort….. some types of cancer therapies as a last resort.

        • You are one of those for whom science is optional. Whether or not science is able to produce solutions for human problems, science provides our best understanding of nature. For “science” read “best understanding”.

          Scientists don’t claim perfect knowledge – they know better than anyone the limitations of human knowledge. That is the arena they work in.

          Anyone who believes to be true what science knows is not true is unfortunate.

  • @EE

    The evidence is that SBM is not effective for all ….while at the same time detrimental to some. End of discussion.

    • End of discussion for whom?

      The evidence is that life is an incredibly difficult phenomenon to understand. Far more complex than physics, for example. There are no equations of life. Science is the best understanding we have. Medicines which would falsify science if they worked should be viewed with skepticism. More likely the medicine is wrong than science – though that is not an absolute rule. If science can show a medicine works science can seek to understand how it works. If science understands the causes of illnesses, science can try to find fixes.

      Where science fails to find solutions an easy quick fix found with a snap of the fingers should be viewed with skepticism.

      • @ Leigh Jackson

        Leigh, how can you even claim it’s science, when many approved medications only yield 55% efficacy…. some less than that. If we’re going to call it science, are not scientific facts supposed to be evident 100% of the time?
        For most drug trials, they can choose the participants they want, slant the test criteria, configure the endpoints they think they can hit, and interpret the findings enough to tip the scales to achieve 55% efficacy…. and they do!

        Stop calling it Science !

        • you can stop trying to convince us that you do not understand even the basics of healthcare – YOU HAVE LONG SUCCEEDED!

          • @EE

            Ahhh yes, when the time comes that you have no explanation to support your narrative about “science based medicine”…. you change the subject from science to….”healthcare”

            You have no answers…. only BS

          • you can stop trying to convince us that you do not understand even the basics of science, medicine, healthcare etc. – YOU HAVE LONG SUCCEEDED!

        • Regulators and doctors? Are they in the racket too? I call that Conspiracy Theory.

          I don’t accuse homeopathists, acupuncturists etc of deliberate fraud. Some perhaps but not all. They have fooled themselves and consequently fool others. Richard Feynman said that the easiest person to fool is yourself. See also Old Bob, below, on Francis Bacon. The latter was aware of how some people are willing fools.

  • I can see the rationale for the index, but does this not suffer from the “file drawer problem”?

  • Thanks all for pointing out the Carl Sagan thing; e.g. Sagan quotes this intro from Bacon:

    “The human understanding is no dry light, but receives an infusion from the will and affections;
    whence proceed sciences which may be called “sciences as one would.” For what a man had rather
    were true he more readily believes. Therefore he rejects difficult things from impatience of research;
    sober things, because they narrow hope; the deeper things of nature, from superstition; the light
    of experience, from arrogance and pride, lest his mind should seem to be occupied with things
    mean and transitory; things not commonly believed, out of deference to the opinion of the vulgar.
    Numberless in short are the ways, and sometimes imperceptible, in which the affections colour
    and infect the understanding.”

    Francis Bacon, Novum Organum (1620)

    Sic transit gloria mundi – at least language-wise.

    • Brilliant phrase that “Sciences as one would.” I must make a note of it.

      In similar vein from Shakespeare’s Henry IV part 2, “Thy wish is father to that thought.”

      Bias is the enemy of science. My less eloquent elocution.

      • The art reflects its age so all a stranger needs to do is go view the art to find out everything about us today, from the art galleries, the modern buildings, the music and finally, the language. Now what was it that the prime minister said today? No, not “A pyramid of piffle.” not anymore, today he said “A tiger in your tank.” – the art of his Yesterday.

  • Some people acquire homeopathic products at the pharmacy and are not in touch with a homeopath. I wonder how effective the products are for these people. I don’t know how you could design an RCT for these people and products. Most of the comments I have seen on UK blogs involve a prescribing homeopath, but this is not the sole distribution channel.

  • The reason homeopathy harms nobody (directly) is because drinking/ ingesting a small amount of sugar/water is nearly always a non event. However, the indirect harm of people avoiding useful therapies by sticking with homeopathy is real.
