MD, PhD, FMedSci, FRSB, FRCP, FRCPEd

Some sceptics are convinced that, in alternative medicine, there is no evidence. This assumption is wrong, I am afraid, and statements of this nature can actually play into the hands of apologists of bogus treatments: they can then easily demonstrate the sceptics to be mistaken or “biased”, as they would probably say. The truth is that there is plenty of evidence – and lots of it is positive, at least at first glance.

Alternative medicine researchers have been very industrious during the last two decades to build up a sizable body of ‘evidence’. Consequently, one often finds data even for the most bizarre and implausible treatments. Take, for instance, the claim that homeopathy is an effective treatment for cancer. Those who promote this assumption have no difficulties in locating some weird in-vitro study that seems to support their opinion. When sceptics subsequently counter that in-vitro experiments tell us nothing about the clinical situation, apologists quickly unearth what they consider to be sound clinical evidence.

An example is this prospective observational 2011 study of cancer patients from two differently treated cohorts: one cohort of patients under complementary homeopathic treatment (HG; n = 259), and one cohort of conventionally treated cancer patients (CG; n = 380). Its main outcome measures were the change in quality of life after 3 months and after one year, and impairment by fatigue, anxiety or depression. The results show significant improvements in most of these endpoints, and the authors concluded that they “observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment”.

Another, in some ways even better example is this 2005 observational study of 6544 consecutive patients from the Bristol Homeopathic Hospital. Every patient attending the hospital outpatient unit for a follow-up appointment was included, commencing with their first follow-up attendance. Of these patients, 70.7% (n = 4627) reported positive health changes, with 50.7% (n = 3318) recording their improvement as better or much better. The authors concluded that “homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases”.

The principle that is being followed here is simple:

  • believers in a bogus therapy conduct a clinical trial which is designed to generate an apparently positive finding;
  • the fact that the study cannot tell us anything about cause and effect is cleverly hidden or belittled;
  • they publish their findings in one of the many journals that specialise in this sort of nonsense;
  • they make sure that advocates across the world learn about their results;
  • the community of apologists of this treatment picks up the information without the slightest critical analysis;
  • the researchers conduct more and more of such pseudo-research;
  • nobody attempts to do some real science: the believers do not truly want to falsify their hypotheses, and the real scientists find it unreasonable to conduct research on utterly implausible interventions;
  • thus the body of false or misleading ‘evidence’ grows and grows;
  • proponents start publishing systematic reviews and meta-analyses of their studies which are devoid of critical input;
  • too few critics point out that these reviews are fatally flawed – ‘rubbish in, rubbish out’!
  • eventually politicians, journalists, health care professionals and other people who did not necessarily start out as believers in the bogus therapy are convinced that the body of evidence is impressive and justifies implementation;
  • important health care decisions are thus based on data which are false and misleading.

So, what can be done to prevent such pseudo-evidence from being mistaken for solid proof, which might eventually mislead many into believing that bogus treatments are based on reasonably sound data? I think the following measures would be helpful:

  • authors should abstain from publishing over-enthusiastic conclusions which can all too easily be misinterpreted (given that the authors are believers in the therapy, this is not a realistic option);
  • editors might consider rejecting studies which contribute next to nothing to our current knowledge (given that these studies are usually published in journals that are in the business of promoting alternative medicine at any cost, this option is also not realistic);
  • if researchers report highly preliminary findings, there should be an obligation to do further studies in order to confirm or refute the initial results (not realistic either, I am afraid);
  • in case this does not happen, editors should consider retracting the paper reporting unconfirmed preliminary findings (utterly unrealistic).

What then can REALISTICALLY be done? I wish I knew the answer! All I can think of is that sceptics should educate the rest of the population to think and analyse such ‘evidence’ critically…but how realistic is that?

53 Responses to How to build a body of misleading pseudo-evidence for bogus treatments and mislead us all

  • One way is to keep chipping away at the publicity cores of these scams. Point out when the media have given credence to the incredible. These modalities rely on false advertising.

    Your point about evidence existing for these modalities depends on the definition of evidence. In its broadest possible meaning, any result could be called evidence. When the techniques used are fatally flawed, the conclusions bear little relationship to the data, and a study is published in what is really a fan magazine, then it is only evidence of the lengths to which alt-med supporters will go to sell their products.

    • good point; the ‘evidence’ is really pseudo-evidence.

      • Can someone explain why the evidence from the above studies on homeopathy are pseudo evidence ?

        ( Besides your belief or “theory” that homeopathy cannot have an effect – therefore it does not.)

        Surveys are tools used in conventional medicine and no one objects. Is the evidence of this survey, for instance, pseudo-evidence according to your criteria?

        http://www.hss.edu/newsroom_social-networks-hip-replacement-outcomes.asp

        “Study: People Who Are Socially Isolated Experience More Pain After Hip Replacement”?

        • firstly, they are not called ‘surveys’ by their authors but ‘observational studies’;
          secondly, the conclusions as cited above strongly imply cause and effect;
          in my view, it is foremost this incongruence of aim/method/result/conclusion that makes this pseudo-science.

        • @George

          Besides your belief or “theory” that homeopathy cannot have an effect

          That homeopathic remedies cannot have an effect is neither a belief nor a theory. Until proven otherwise, it is a fact. Nothing but wishful thinking and greed says they can possibly have any effect.
          If by the term “homeopathy” you instead mean the seductive and deceitful ramblings of its practitioners, who convince their clients that they are doing them good, that is another matter – one which does not justify them selling magically shaken water. It is an unnecessary, expensive and potentially harmful deception.

          • Well. Authors who publish on homeopathy, even in mainstream medical journals, hold a different view on its effectiveness. For instance:

            “supposed implausibility of homeopathy, which is based on the argument that very dilute substances (diluted beyond Avogadro’s number) cannot have biological activity, has been investigated by a number of scientists. Basic science research appears to suggest that the use of extremely dilute solutions may not be as implausible as has been claimed.”
            http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1847554/#!po=26.9231

            To the point:
            Both the papers on homeopathy and on pain from hip replacement are observational studies.

            IF you accept the validity of this statistical tool to establish cause and effect in the hip replacement study, then you have to accept the results of the homeopathy study as well – to be consistent – unless your conclusions have an emotional or metaphysical basis. (Nothing wrong with that; it is just not so scientific.)

          • As I have said before, George, at least our perseverance is admirable. There is an endless sea of nonsense out there from which you can pick at will. It does not change the facts.

          • Correction: “our” should read “your”

        • The cancer study:
          One cohort was mainly treated in an expensive private clinic, specialised in giving after-care to cancer patients following chemotherapy, radiation or surgery. They entered the study on average 10 months after their first diagnosis of cancer.

          The second cohort mostly consisted of patients in a general primary-care hospital offering the complete set of cancer treatments, including chemotherapy, radiation and surgery. These patients joined the study on average only three months after their first diagnosis of cancer. On average, the patients in this group were six years older.

          Comparison of the two cohorts is based on the date the patients joined the study, not on the date of their first diagnosis. By proper calculation – not following the authors’ strange arithmetic, which favours their desired result – the difference in score after three months into the study was a mere 2.2 points on the FACT-G questionnaire; adjusted for the time of first diagnosis, this melts down to a mere 0.2 points. The authors of the questionnaire, however, state that the minimum difference indicating clinical relevance is 3 to 7 points. There was no beneficial effect of homeopathy; the authors’ conclusions are based on nothing.

          Details of this analysis can be found here (in German): http://www.beweisaufnahme-homoeopathie.de/?p=461

          The Bristol observational study:
          Just consider this one sentence how the study was designed:

          ‘The outcome score was assessed during the consultation, with patients being asked [by the treating physician, my addition] to rate their overall improvement or deterioration compared to their status at first visit’.

          Now, the physician doing the assessment certainly knows that he is in a study to investigate the success of the treatments that he himself applied and is paid for by the homeopathic hospital he works for. Would you really expect that the outcome could go against homeopathy and the physician would lose his job in the end? Do you really believe that the physician would not have done his best to talk a patient out of a not-so-favourable rating, if the patient ever dared to state something like this in a face-to-face interview?

          I would say the chances of a negative outcome were about the same as in a trial by Philip Morris on smoking. If this setting does not lead to biased results, I do not know what would.

          And, by the way, of course there was no control group, so you do not know how patients would have fared without the homeopathic treatment, or with any other.

          Details on this study can be found in Aust N: ‘In Sachen Homeopathie – eine Beweisaufnahme’, published March 2013, ISBN 978-3-942594-47-9, p 216-218 (in German)

          That is why these studies are bogus.

          • Björn Geir, this kind of answer you gave is emotional. But this is OK.

            Norbert, you like conspiracy theories very much.

            Starting from the end, regarding the anxious homeopath who subconsciously alters the answers of the patients in the survey: by the same mode of thinking, you cannot trust any observational or other study or conclusion which has been supported or sponsored by some pharmaceutical company. And all research done by big pharma is useless, according to what you say. Unless you believe they are the good guys.

            The absence of a control group applies to both studies: the hip replacement study and the homeopathy studies. Why don’t you object to the findings of the first?

            I told you before that all the statistical tools used in published studies are TYPICAL and are applied the same way in conventional and homeopathic research, as long as the work is published in a good journal. This is not a homeopathic fantasy: even researchers who believe that homeopathy = placebo accept this – Shang, for instance. That is why he was able to identify good studies in homeopathy and compare them with conventional studies.

            Whatever you say about “strange” calculations – this is the way it is done in ALL research.

            Unless you conduct a comparative study showing that the use of statistical tools in conventional vs homeopathic research differs significantly, the value of your observations is very limited – very close to what you call bogus.

          • >> you cannot trust any observational or other study or conclusion which has been supported or sponsored

            You missed the point. It makes a big difference whether a study is merely funded by somebody, or whether the results are obtained by somebody who would be out of a job (and out of a living, that is) if the study reported anything other than a positive outcome. I believe nobody to be a good guy, if it comes to that.

            >> The absence of the control group applies to both studies…
            This is the question that you asked:
            >> Can someone explain why the evidence from the above studies on homeopathy are pseudo evidence ?
            You got the answer now.

            >> Whatever you say about “strange” calculations – this is the way it is done in ALL research.
            Hopefully not.
            The authors add up the score in the homeopathy group like this:
            22.1 + 21.8 + 16.6 + 18.6 = 81.1 instead of 79.1 as on my pocket calculator
            The score of the control group adds up to
            20.1 + 21.9 + 17.8 + 17.1 = 76.6 instead of 76.9 as on my pocket calculator.
            The difference in score was not 81.1 – 76.6 = 4.5 as given in the paper
            but a mere 79.1 – 76.9 = 2.2

            This is what I meant by strange arithmetic. If I had meant statistics, I would have said statistics. It just happens to occur in the figures of the main outcome measure (the scores after three months into the study) and just happens to modify the result in a favourable way, more than doubling it. Note: this is 13 months after first diagnosis for the homeopathic group and 6 months for the control group. Of course, the mistake may have happened by chance – but the researchers would be more trustworthy if it had not.
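            For anyone who wants to check, the sums take seconds to recompute; here is a quick sketch in Python, using only the figures quoted above from the paper:

```python
# Recompute the sub-score sums quoted above and compare with the paper's figures.
hg_subscores = [22.1, 21.8, 16.6, 18.6]   # homeopathy group, as printed in the paper
cg_subscores = [20.1, 21.9, 17.8, 17.1]   # control group, as printed in the paper

hg_total = round(sum(hg_subscores), 1)    # 79.1, not the 81.1 the authors report
cg_total = round(sum(cg_subscores), 1)    # 76.9, not the 76.6 the authors report

reported_difference = round(81.1 - 76.6, 1)            # 4.5, as given in the paper
recomputed_difference = round(hg_total - cg_total, 1)  # 2.2
print(hg_total, cg_total, reported_difference, recomputed_difference)
```

            The recomputed difference of 2.2 points falls well below the 3-to-7-point threshold for clinical relevance stated by the questionnaire’s authors.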

            This is not a conspiracy theory of mine, these are facts you can read in black and white in the paper. See the link in the professor’s article.

  • Thanks for this terrific article, Edzard. I always like to post at least a Thank You when I read an excellent blog, as I see so many out there going without any comments at all.

    I posted a link to this blog at the forum at Skeptic.com, in the Healthcare category. It is a very well-done forum, and one of my favorites on the web.

  • You ask “what can realistically be done…” by sceptics to “educate the rest of the population”. Therein lies a problem: the consideration of “them and us”. People react poorly when they receive a lecture, no matter how well-meaning. The psychological barriers go up – who is this guy, what does he know?

    Since the behaviour and temperament of others is outside our personal control, we might be better served by turning our attentions to ourselves. Understanding why people reject “scientism”; a thorough grasp of conflict resolution; an inclusive and collaborative approach rather than exclusive and confrontational… these may lead to more constructive progress.

    I don’t deny the problem. I do wonder about some of the underlying assumptions.

    • “I do wonder about some of the underlying assumptions.”

      Such as…? Are you referring to EE’s “assumptions” or those of the alties?

  • This technique is very effective. I don’t know about the training in the UK but few undergraduate dentists (my area) in the US have any training in reading the literature. If a person cannot read beyond the abstract and conclusions, it is hard to distinguish evidence from bovine scat. When many of my colleagues cannot tell the difference, it is hard to expect more from the general public (actually in the case of alt-med, the well heeled public) or the media.

    We live in a time of declining reimbursements for primary care physicians. For some the appeal of new profit centers makes any “evidence” appealing. The same holds for larger organizations like a certain woo friendly cancer group. It is hard for some to be that critical when their “rice bowl” is on the line. Best to not get sick.

  • This raises the question: what is the purpose of scientific literature? It seems increasingly unlikely that it’s useful as a raw tool for public education. As the scientific method “improves”, it requires an increasingly onerous grasp of disciplines such as cognitive psychology, statistics, and so on. Interpretation of these are all prone to serious error by even the most highly-trained experts.

    It is probable that the lay-public will never grasp this, and it is regularly demonstrated that even experts grasp these disciplines poorly. (This is why documented and checklisted procedures, and multi-disciplinary clinical and research teams, are all currently vital.)

    I suspect that the amassing of pseudo-evidence represents a “cargo cult” approach. If we build the air-strips, the planes will come. If we construct the evidence, our approach will be justified.

    So there is “good” and “bad” evidence. But as a patient, why should I trust someone I don’t know to tell me which is which?

    It seems possible that scientists have gone down an impasse. Maybe esoteric research papers were useful in a more paternalistic age, when the family doctor was trusted to understand them and apply their lessons to the passive, unquestioning patient who had no other option anyway.

    With even senior researchers and policy makers recognising the limits of the upper echelons of the “hierarchy of evidence”, the time is ripe for a ground-up, democratised re-modelling of how science is taught, understood, and done.

    • There are so many misunderstandings in your post that I hardly know where to begin, but here goes:

      The “purpose of scientific literature” is to publish the results of research and then wait for informed criticism and possible replication. This process works well and eventually weeds out poorly designed or biased reports.

      It is not necessary for the public to grasp all of science in order to judge the value of a study. Basic training in the history of science (which most anyone can grasp) will go a long way in forming a generally skeptical outlook. Decent books such as “Bad Science” by Ben Goldacre present basic information on how to read and judge studies that many, if not most, people can at least get the basics of logical thinking from.

      As to “experts”, their ranks will be whittled by the process. It is not a perfect system, but it is self-correcting and the bad apples and ideas are eventually eliminated.

      “…unquestioning patient who had no other option anyway.”

      Patients have always had the option of finding a different doctor–even more so before the era of HMO’s and PPO’s. The options remain the same. The option is to find a doctor you feel comfortable with, NOT to turn to unproven magical thinking.

      I really have no idea what your final paragraph really means, but the way science is “taught, understood, and done” is just fine–it’s the numbers of people who get adequately exposed to it that need to increase. Perhaps that is what you meant?

      • Hi Irene. Thanks for the comments, and for your efforts to disabuse me of any misunderstandings.

        I fully agree with your description of the purpose of scientific literature, and I generally think science is a good process: the best we have in fact.

        My question about the “purpose of scientific literature” could be clarified. It was in response to EE’s final question about what to do.

        EE’s thought-provoking post starts with research, and ends with a practical question: what can we actually *do*? Specifically, EE is concerned with the education of the patient. This highlights the disjunct between researchers (in the abstract, not EE specifically) and patients.

        I’m not criticising the scientific process, nor the production and continuous review of the literature – far from it. I’m simply questioning how we make nimble and effective use of such a huge corpus, which is produced and maintained by increasingly cumbersome processes, to educate patients.

        My suggestion is that perhaps the wrong question is being asked, and that scientific literature itself is of limited direct use in educating patients. Of course, it remains of signal importance in educating those who are trained to produce it, review it, and use it clinically. Even so, that takes a huge amount of training, and remains prone to significant error. (Again, I agree that this imperfect system is the best we have.)

        My comment about “underlying assumptions” was about whether it’s patronising to talk of “educating the public”; whether such a goal is achievable; and so on.

        Perhaps a useful way forward is to educate ourselves first, specifically around why patients might reject sound evidence-based advice, or why they may turn elsewhere. I can see “sceptics” making far greater progress in that regard than in training all patients in the basics of research – although Ben Goldacre’s book is indeed an excellent effort. I wonder what percentage of the population have read it?

        “The option is to find a doctor you feel comfortable with, NOT to turn to unproven magical thinking.”

        I would whole-heartedly agree, and yet that’s not what appears to happen.

        I feel I’m probably over-stating the obvious. But there it is.

    • “…the time is ripe for a ground-up, democratised re-modelling of how science is taught, understood, and done.”

      I don’t think so. Science isn’t democratic.

      • From the perspective of public health educators and policy makers, that might rankle. The processes of research, publication, review etc may not currently be democratic, but that’s changing*. And the processes of information dissemination and changing public health behaviours probably should be largely democratic. To say that those aspects aren’t part of science is to effectively place the public outside science. It relegates the public to being passive recipients of science, which is a shame in my opinion.

        * Examples of what might be seen as the increasing democratisation of science within healthcare include:

        Increasingly sophisticated wearable monitors that provide patients with very detailed personal data, allowing them to take personal and evidence-based responsibility for their health, and to share that data.

        Large UK charities increasingly interested in the provenance of donations, so that members of the public can see exactly what their donations have purchased, down to the test-tube, rather than the “pay and forget” model of donations with which we’re all familiar.

        Patient groups meeting their GPs.

        Patient groups raising the concerns about how data from past research can be shared and re-used.

        The increasing shift away from pay-walls and toward open publishing of research.

        I’m sure there are plenty more, that’s all off the top of my head. To me, that seems like a democratisation of science. I suppose others might disagree. Maybe the “doing” of science necessarily remains in the hands of the specifically trained scientist.

  • Careful Ernst, you are on the verge of destroying the entire field of psychology.

  • @Norbert

    1. Why don’t you object to the findings of research funded by big pharma (vaccines etc.) on the same grounds on which you object to the findings of a given homeopathic study?

    2. Again, I am asking you: if you accept the validity of an observational study without a control group to establish cause and effect, such as the hip replacement study, why don’t you accept the application of the same tool in homeopathy (besides the fact that it is sponsored by the homeopathic hospital)?

    3. Does the correction of the mistake you refer to render the result so insignificant that the conclusion would have to be substantially altered?

  • (1) I only judge studies that I have read and analysed myself, and I am focusing on homeopathy. The claims raised there seem highly improbable, so it is worthwhile to see whether the studies said to prove its efficacy hold up. And the conclusion whether a study is valid or not does not in the least depend on whether there are other studies out there that suffer from the same problem.

    (2) By what logic do you think I accept the hip-replacement study? I did not say anything about it, for I do not know whether I do or not – I did not even read it. Criticising one study does not mean that all the others are accepted as valid.

    (3) The authors of the test applied in the study themselves indicate that the minimum difference in score to indicate clinical relevance is 3 to 7 points (see here: http://www.hqlo.com/content/1/1/79). 4.5 points would be on the verge of indicating some real clinical improvement; 2.2 points would not. The conclusion given by the study’s authors that their result showed clinical relevance would be utterly false with the real data.

    • So according to you, all research sponsored or supported by big pharma is biased – this is interesting – I partially agree.
      This is what we currently have, though – unfortunately, there is no independent research for large trials or reviews like that.

      Regarding the hip replacement study, I said that this type of study is used and accepted in medicine in general to establish cause and effect. Dr Ernst said the same thing about this study: “secondly, the conclusions as cited above strongly imply cause and effect”. Therefore the homeopathy study you refer to uses the same tool as conventional research, but for some reason you don’t accept it.

      As I said before, several studies in homeopathy (showing some efficacy) have been examined for their compliance with typical statistical methods by prominent authors who don’t believe in homeopathy – and they were found to be correct in applying the proper techniques.

      I don’t really know about the calculations in the specific study – it seems improbable that such a mistake could alter the final conclusion so much after the whole peer-review process. Aren’t you curious enough to email the author? Maybe he has an opinion about that – it seems an interesting question.

  • George, there must be something terribly wrong with my English, I fear. I simply fail to see where I did express the ideas you attribute to me.

    >> So according to you ALL research sponsored or supported by big pharma is biased
    Where did I express such an opinion?
    No, I do not subscribe to this statement. Everybody who spends money on a study is interested in the outcome; otherwise he would not spend his money on it. The only problem I see is that an organisation would not publish results against its own interest. If the design of the study does not exclude biased results, somebody reviewing the article should be able to point out where the bias or error is – just as I do for homeopathy.

    >> Regarding the hip replacement study I said that this type of study is used and accepted
    Remember, we are not talking about this study, because I did not review it.

    >> Therefore the homeopathy study you refer to uses the same tool
    Maybe, maybe not.

    >> as the conventional research but for some reason you dont accept it.
    George, in earnest, did you read my posts in this thread? Did you understand them? If yes, then please indicate where you got the impression that my judgement on the studies here is based only on the tool that is used.

    >> have been examined for their compliance to typical statistics methods by prominent authors who don’t believe in Homeopathy – and they are found to be correct in applying the proper techniques.
    >> it seems improbable such a mistake to be able to alter the final conclusion so much after all the peer review process

    Good point! Let us see a very easy to understand example:
    Take the study by Schmidt on weight reduction with homeopathy as an example (http://www.homeopathyjournal.net/article/S1475-4916(02)90049-4/abstract). The authors assessed the weight of the participants on a scale with a dial whose smallest digit was 0.1 kg. With this device they compared the weight loss of both groups and found the mean difference to be 0.097 kg.
    This precision cannot be achieved with the device applied. There is an effect called error propagation, which the authors preferred to ignore. By proper calculation, the result should read 0.097 kg +/- 0.2 kg. This error margin accounts for the lacking precision of the scales only; there are other factors affecting the test-retest reliability. But still, it is much bigger than the result itself. Mind you, this is not the confidence interval yet; this is just the region of the errors in assessing the measure.
    In engineering, no undergraduate seminar paper would be accepted that showed such utter ignorance of the basics of measurement.
    But here in medical science, nobody seems to understand a basic thing about measurements:
    – the study was performed and written by educated authors
    – it was reviewed by peers prior to publishing
    – it was reviewed by Shang and rated high quality in favour of homeopathy
    – the results of Shang’s review were heavily discussed
    – the study is cited 21 times, as indicated by Google Scholar
    … and nobody realised that the results are just as valid as if they had been derived by rolling dice.

    In fact, there are questions to be asked:
    What is wrong with the education of researchers in the medical field?
    What qualification is needed to perform proper measurements and analyses?
    How can this qualification be brought to researchers actually working in the field?

    I have started to write some papers dealing with this problem. The first is currently in print.

    >> Aren you you curious to email the author
    Not for this one. For the first analyses on my blog, I informed the authors of my findings and asked for their comments. I never received any response. Maybe this will change once my papers are published.

    • If you believe that “any organisation would not publish results against their interest”, then why do you believe the results big pharma publishes regarding efficacy and/or safety? That was my question. What is not clear?

      We are talking about this study; that was my initial question. Dr Ernst, answering me, said that “the conclusions as cited above strongly imply cause and effect”.
      And I asked you (and Dr Ernst): the homeopathy study you refer to uses the same tool as conventional research, but for some reason you don’t accept it. Can you give a valid reason?

      In general, I hope I do understand what you say – but I think that you have double standards: even if a positive result is demonstrated in homeopathic research using a statistical tool also applied in conventional research, you don’t want to accept that the results might be correct.

      Let me ask you – we know that in order to determine the tolerance interval of a measurement, we add and subtract 1/2 of the precision of the measuring instrument to and from the measurement. In this case you refer to, what is half of 0.1 kg? 0.05 kg. Therefore the result is 0.097 kg +/- 0.05 kg. Is this correct?

  • >> If you believe that “any organisation would not publish results against their interest. ” then why you believe the results big pharma publishes regarding efficacy and/or safety?

    (1) My statement means that I believe publication bias to be very probable in sponsored research. I hope you can understand that this means something other than invalid results of a published study.

    (2) Why do you believe that I ‘believe results of big pharma’ anyway?

    >> that was my initial question:
    Well, the very first sentence of your very first comment in this thread was:
    >> Can someone explain why the evidence from the above studies on homeopathy are pseudo evidence ?
    Remember?
    This is the one I answered.

    >> Can you give a valid reason?
    I pointed out in a long and elaborate answer why the two studies on homeopathy are bogus. And, for the last time, please: I do not know anything about this hip study, for I never read it. And I never will, because it is outside my scope.

    >> you dont want to accept that the results might be correct.
    Wrong. I readily accept that the results MIGHT be correct. Then I check them and, well, you know what came out of it. George, please apply some basic logic: just because a tool, in statistics or anywhere else, CAN produce correct results when handled properly does not mean it does so ALWAYS.

    >> we know that in order to determine the tolerance interval in a measurement, we add and subtract 1/2 of the precision of the measuring instrument

    Well, George, that is just the lack of understanding of the basic requirements for valid measurements that I feared prevails in medical science. Just to illustrate that there is more to performing valid measurements than buying some device and copying the figures from the display into some forms, let me point out some things about error propagation, using this very easy example.

    Let us assume that the scale does proper rounding, just as you said, and does not simply truncate the trailing digits.

    Weight loss or gain is the difference of two readings from the scale. Say the first is 85.0 kg and the second is 84.0 kg. The first reading could be anything between 84.95 and 85.05, and the second could be anything between 83.95 and 84.05. The interval of possible results is defined by (maximum of first) – (minimum of second) on one hand and (minimum of first) – (maximum of second) on the other. The first is (85.05 – 83.95) = 1.1, the second is (84.95 – 84.05) = 0.9. So the weight loss in this example is 1.0 kg +/- 0.1 kg. Note that the error interval totals twice the resolution of the device.

    Comparing the weight loss of two groups, say the first being 1.0 +/- 0.1 kg and the second 0.9 +/- 0.1 kg, by the same line of reasoning amounts to 0.1 +/- 0.2 kg. The error interval now is four times the resolution.

    This effect is referred to as error propagation. In engineering it is taught in undergraduate maths courses and in any experimental course in physics, mechanics and so on.
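The interval arithmetic above can be sketched in a few lines. This is a hypothetical illustration using the readings from the example; each displayed reading r stands for any true value in [r - 0.05, r + 0.05] because the scale rounds to 0.1 kg.

```python
RESOLUTION = 0.1        # kg, display resolution of the scale
HALF = RESOLUTION / 2   # kg, half the resolution

def reading_interval(r):
    """Interval of true weights compatible with a displayed reading r."""
    return (r - HALF, r + HALF)

def diff_interval(a, b):
    """Interval of possible values of a - b, for intervals a and b."""
    return (a[0] - b[1], a[1] - b[0])

# Weight loss = first reading minus second reading
loss = diff_interval(reading_interval(85.0), reading_interval(84.0))
print(round(loss[0], 2), round(loss[1], 2))  # 0.9 1.1, i.e. 1.0 +/- 0.1 kg

# Comparing the weight losses of two groups widens the interval again
between = diff_interval((0.9, 1.1), (0.8, 1.0))
print(round(between[0], 2), round(between[1], 2))  # -0.1 0.3, i.e. 0.1 +/- 0.2 kg
```

The widths come out at twice and four times the scale's resolution, exactly as the comment states.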

    Let’s continue a little further:
    We have just considered the resolution of the display alone. This usually makes only a minor contribution to the test/retest reliability of a piece of measuring equipment. Other confounders usually affect the result much more strongly. Before this scale was employed in the research, they should have checked this by having, say, ten people measure their weight ten times each, in random sequence, within a short period of time. Then you get an idea of how sensitive the scale is to changes of position and of weight distribution across its sensors. Then just continue with how sensitively the scale reacts to fluctuations in ambient temperature, humidity, the discharge status of the batteries …

    Where do you think this would end up?
    In any industry with safety issues or high quality standards (automotive, aeronautics …), you do not use any gage that has not been subjected to such intense scrutiny. In all others, a rule of thumb applies: the resolution of your gage should be ten times better than the precision you require in your result.

    As I said, undergraduate level knowledge.
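The test/retest check described in the comment above could be simulated roughly like this. All numbers are made up for illustration, including the assumed 0.15 kg sensor noise:

```python
import random
import statistics

random.seed(1)

RESOLUTION = 0.1   # kg, display resolution of the scale
NOISE_SD = 0.15    # kg, hypothetical repeatability noise of the sensors

def read(true_weight):
    """One reading: true weight plus assumed sensor noise, rounded to 0.1 kg."""
    noisy = true_weight + random.gauss(0, NOISE_SD)
    return round(noisy / RESOLUTION) * RESOLUTION

# Ten people, each weighed ten times; the average within-person spread
# of repeated readings estimates the scale's test/retest repeatability.
people = [60.0 + 5.0 * i for i in range(10)]
spreads = [statistics.stdev([read(w) for _ in range(10)]) for w in people]
repeatability = statistics.mean(spreads)
print(round(repeatability, 2))
```

With these assumptions the estimated repeatability comes out near the injected 0.15 kg, i.e. worse than the 0.1 kg display resolution; by the 10:1 rule of thumb, such a scale would be unsuitable for resolving differences of around 1 kg or less.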

    • This is not the standard way measurements are done in medical research – you cannot really measure small differences with this approach of doubling the error tolerance twice.

      It is reasonable to do so in engineering, to make sure that a design is structurally “correct” and will not collapse.

      But in medical research conventional AND alternative, researchers look for small differences indicating an effect. It does not mean that the results are not real. I don’t think you have a case.

      Once again, unless you are able to provide a comparable study showing that studies on homeopathy depart significantly in their use of the standard statistical tools (used in conventional medicine), your observations do not disprove anything.

      • No, it really doesn’t work that way.
        Not only is it untrue that medical research looks for small differences (you don’t want treatments that are only marginally better than placebo), it’s also wrong to claim that they can ignore the accuracy of their measurements because of that. If the differences are small, you need a better instrument to measure them; you cannot just ignore the statistics.

        • You don’t seem familiar with the subject. No one “ignored” …statistics (!).

          The study was found to be methodologically correct by Shang, who included it in his meta-analysis. There is no one “correct” way to use statistical tools – and in conventional and homeopathic research these tools are used in the same way.

          Regarding : you don’t want treatments that are marginally better than placebo.

          You know of course that medication works only for about 45 percent of the patients. This is typical in conventional research.

          But this is my favorite example: would you advocate the prescription of antidepressants in the light of this?

          These findings suggest that, compared with placebo, the new-generation antidepressants do not produce clinically significant improvements in depression in patients who initially have moderate or even very severe depression, but show significant effects only in the most severely depressed patients.

          http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0050045

          Why don’t you suggest that it is unethical for MDs to prescribe antidepressants since they are “proven” to be ineffective?

          • if there ever was an intelligent statement, it must be this: “…medication works only for about 45 percent of the patients”. are you for real?
            runner up: “There is no one “correct” way to use statistical tools”
            you do know your stuff, don’t you?

          • You don’t seem familiar with the subject. No one “ignored” …statistics (!).

            I’m not sure you’re familiar with the subject. Uncertainty analysis is part of statistics. Norbert Aust handed you the explanation on a silver platter, yet you refuse to accept that the statistical analysis in the weight reduction study was flawed.

            The study was found to be methodologically correct by Shang who included it in his meta analyses.

            I take it that you don’t agree with Shang et al. when they conclude that the clinical effects of homeopathy are placebo effects? Why then do you insist that the study is methodologically correct just because Shang et al. said it was “of higher quality”?

            You know of course that medication works only for about 45 percent of the patients. This is typical in conventional research.

            What medication and what condition are you talking about?

            Why don’t you suggest that it is unethical for MDs to prescribe antidepressants since they are “proven” to be ineffective?

            What makes you think I don’t find it unethical? It has nothing to do with homeopathy however.

      • Sorry for being late to reply.

        George, you do not understand a single word of this, do you?
        What I indicated is how the measurement in this case was done. Period. And error propagation is not something you can decide to have or not to have – you have got it. Period. And this study is useless. Period.

        Get this paper and read it.
        http://onlinelibrary.wiley.com/doi/10.1111/fct.12062/abstract;jsessionid=075DF3A52A86A5BC80D10F1D6938E4F3.f01t01?deniedAccessCustomisedMessage=&userIsAuthenticated=false

  • Well – you sound quite sure, Dr. Ernst. This is not my statement:

    “A senior executive with Britain’s biggest drugs company has admitted that most prescription medicines do not work on most people who take them.” Maybe you should ask him – not me – if he is for real…..

    http://www.independent.co.uk/news/science/glaxo-chief-our-drugs-do-not-work-on-most-patients-575942.html

    Regarding statistics, I’m sorry: do you know mathematics and/or statistics? This is common knowledge. There are different ways to apply statistical tools in different fields, depending on what one wants to measure and why.

    The study was evaluated by Shang and found to be “high quality” – are you arguing against that?

  • Specifically —-“The vast majority of drugs – more than 90 per cent – only work in 30 or 50 per cent of the people,” Dr Roses said. “I wouldn’t say that most drugs don’t work. I would say that most drugs work in 30 to 50 per cent of people. Drugs out there on the market work, but they don’t work in everybody.”

    Do you want more ?

  • Sorry – I thought Dr Roses was a credible source – http://medicine.duke.edu/faculty/details/0098535 – maybe he is not? Or maybe the Independent is a really obscure source? Maybe one of the biggest pharma companies is lying – I don’t know why – when saying that “The vast majority of drugs – more than 90 per cent – only work in 30 or 50 per cent of the people”.

    It is not the average that is revealing but the real numbers; besides that, if he is not qualified to make such a statement – based on the experience and research of one of the biggest companies in the world – I don’t know who is.

    Regarding Norbert’s calculations: there is NOT one correct way to use statistical tools – the choice depends on the purpose and the nature of the measurement, whether you want to measure a small effect or something else. This is common knowledge in mathematics, but if your exposure to math and statistics is limited to undergraduate statistics in engineering, it is natural to say things like that. The funniest thing he has talked about is the group independence in the Jacobs review.

    According to his criteria all conventional research is wrong, but he will never examine the conventional trials – he is an expert in … explaining that the same statistical tools and principles used typically in conventional research are inappropriate for use in homeopathy.

    • come on! you really believe that a lone voice can be representative and be reliable with such a sweeping statement?

    • I give up, George.
      I do not want to waste my time any more in a hopeless case.

      You can write more bovine excrement in one sentence than I can answer in three chapters.
      Enjoy your ignorance.

      • This is not the correct criterion, Dr. Ernst. What do you mean, lone voice? He was the worldwide vice-president of genetics at GlaxoSmithKline – do you think he was a quack or something?

        What he said is not so … strange.

        “I wouldn’t say that most drugs don’t work. I would say that most drugs work in 30 to 50 per cent of people. Drugs out there on the market work, but they don’t work in everybody”

        Every MD knows that switching medication is a common practice in this field for this reason.

        • if you say so

        • “I wouldn’t say that most drugs don’t work. I would say that most drugs work in 30 to 50 per cent of people. Drugs out there on the market work, but they don’t work in everybody”

          Every MD knows that switching medication is a common practice in this field for this reason.

          What are you trying to say with this, George? If drugs only work in 50%, or even 30%, of patients they will produce demonstrable effects in RCTs. And for the 50%, or 30%, of patients in whom they work, they work. If doctors have to switch to different drugs to find ones that work in particular patients, they are still finding treatments that work.

          On the other hand, therapies that fail to produce a difference between the treatment group and a placebo group in RCTs work in 0% of patients.

          A treatment that works in 30% of patients is better than a treatment that works in 0% of patients.

          • The demonstrable effects in RCTs will often be marginal – read above.

          • The effects will still be detectable in RCTs (although larger sample sizes will be needed to demonstrate significance). A treatment that doesn’t work in everybody is not a treatment that doesn’t work.

            What relevance do you think your quotation has to bogus treatments?
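The detectability point can be illustrated with a quick simulation. The 30% and 10% responder rates, the trial size and the number of runs are made-up numbers chosen only for the sketch:

```python
import random

random.seed(0)

def responders(n, p):
    """Number of responders among n patients, each responding with probability p."""
    return sum(random.random() < p for _ in range(n))

def trial(n_per_arm, p_drug=0.30, p_placebo=0.10):
    """One simulated two-arm trial; returns the difference in responder rates."""
    drug = responders(n_per_arm, p_drug)
    placebo = responders(n_per_arm, p_placebo)
    return (drug - placebo) / n_per_arm

# In 1000 simulated trials with 100 patients per arm, count how often the
# drug arm beats placebo by at least 10 percentage points.
wins = sum(trial(100) >= 0.10 for _ in range(1000))
print(wins / 1000)
```

Under these assumptions the drug arm clearly outperforms placebo in the large majority of simulated trials: a treatment that helps only a minority of patients still leaves an easily measurable signal in an RCT of modest size.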

          • What a load of nonsense! 30–50 % is by no means marginal; it’s easily measurable, with no need for “creative” use of statistical tools. Do I need to remind you that you’re trying to defend homeopathy, where no high-quality study that I know of has ever come close to helping 30 % of patients?

  • … Well – if a drug works in only 30% of the patients, probably the demonstrated effect in a clinical trial would be marginal – you can ask your math-teacher friend or, if you are in academia, your statistics professor … or look at the studies yourself.

    Of course there are high-quality studies showing an effect in homeopathy – Shang found some of them – his conclusions are false for other reasons – but he did find them.

    Speaking of creative statistics, anybody can use creative statistics to prove or disprove something – if you knew statistics you would know that there is NOT one “correct” way to evaluate data – ask your math professor or teacher friend for more.

    • this is getting quite funny!
      it contributes to my daily delights to see someone so persistent and so persistently wrong as you.
      please carry on.

      • Thanks! I do intend to entertain – I see now – you did your best to answer my questions and provide your best argument.

        (I also learned that the vast majority of conventional clinical trials are useless since the groups are not independent.)

        Again – thanks for everything!
