MD, PhD, FMedSci, FRSB, FRCP, FRCPEd.

Forgive me if this post is long and a bit tedious, but I think it is important.

The claims continue that I am a dishonest falsifier of scientific data, because the renowned Prof R Hahn said so; this, for instance, is from a Tweet that appeared a few days ago:

False claims, Edzard Ernst is the worst. Says independent researcher prof Hahn in his blog. His study: https://www.ncbi.nlm.nih.gov/pubmed/24200828 
His blog (German translation) http://www.homeopathy.at/betruegerische-studien-um-homoeopathie-als-wirkungslos-darzustellen…

The source of this permanent flow of defamations is Hahn’s strange article which I have tried to explain several times before. As the matter continues to excite homeopaths around the world, I have decided to give it another go. The following section (in bold) is directly copied from Hahn’s infamous paper where he evaluated several systematic reviews of homeopathy.

_________________________________________________________________________

In 1998, he [Ernst] selected 5 studies using highly diluted remedies from the original 89 and concluded that homeopathy has no effect [5].

In 2000, Ernst and Pittler [6] sought to invalidate the statistically significant superiority of homeopathy over placebo in the 10 studies with the highest Jadad score. The odds ratio, as presented by Linde et al. in 1999 [3], was 2.00 (1.37–2.91). The new argument was that the Jadad score and odds ratio in favor of homeopathy seemed to follow a straight line (in fact, it is asymptotic at both ends). Hence, Ernst and Pittler [6] claimed that the highest Jadad scores should theoretically show zero effect. This reasoning argued that the assumed data are more correct than the real data.

Two years later, Ernst [7] summarized the systematic reviews of homeopathy published in the wake of Linde’s first metaanalysis [2]. To support the view that homeopathy lacks effect, Ernst cited his own publications from 1998 and 2000 [5, 6]. He also presented Linde’s 2 follow-up reports [3, 4] as being further evidence that homeopathy equals placebo. 

_________________________________________________________________________

And that’s it! Except for some snide remarks (copied below) in the discussion section of the article, this is all Hahn has to say about my publications on homeopathy; in other words, he selects 3 of my papers (references are copied below) and (without understanding them, as we will see) vaguely discusses them. In my view, that is remarkable in 3 ways:

  • firstly, I have published about 100 more papers on homeopathy which Hahn ignores (even though he knows about them, as we shall see below);
  • secondly, he does not explain why he selected those 3 and not any others;
  • thirdly, he totally misrepresents all 3 articles that he has selected.

In the following, I will elaborate on the last point in more detail (anyone capable of running a Medline search and reading Hahn’s article can verify the other points). I will do this by repeating what Hahn states about each of the 3 papers (in bold print), and then explain what each article truly was about.

HERE WE GO

_________________________________________________________________________

FIRST ARTICLE

In 1998, he [Ernst] selected 5 studies using highly diluted remedies from the original 89 and concluded that homeopathy has no effect [5].

This paper [ref 5] was a re-analysis of the Linde Lancet meta-analysis (unfortunately, this paper is not available electronically, but I can send copies to interested parties). For this purpose, I excluded all the studies that

  • did not use homeopathy according to the ‘like cures like’ assumption (arguably, such studies are not trials of homeopathy at all),
  • used remedies which were not highly diluted and thus contained active molecules (nobody doubts that remedies with pharmacologically active substances can have effects),
  • did not receive the highest rating for methodological quality from Linde et al (flawed trials are known to produce false-positive results).

My methodology was (I think) reasonable, pre-determined and explained in full detail in the article. It left me with 5 placebo-controlled RCTs. A meta-analysis across these 5 trials showed no difference to placebo.

Hahn misrepresents this paper, firstly, by not explaining what methodology I applied and, secondly, by stating that I ‘selected’ the 5 studies from a pool of 89 trials. In fact, I had defined inclusion criteria, which were met by just 5 studies.

___________________________________________________________________________

SECOND ARTICLE

In 2000, Ernst and Pittler [6] sought to invalidate the statistically significant superiority of homeopathy over placebo in the 10 studies with the highest Jadad score. The odds ratio, as presented by Linde et al. in 1999 [3], was 2.00 (1.37–2.91). The new argument was that the Jadad score and odds ratio in favor of homeopathy seemed to follow a straight line (in fact, it is asymptotic at both ends). Hence, Ernst and Pittler [6] claimed that the highest Jadad scores should theoretically show zero effect. This reasoning argued that the assumed data are more correct than the real data.

The first thing to notice here is that Hahn alleges we had ‘sought to invalidate’. How can he know that? The fact is that we were simply trying to discover something new in the pool of data. The paper he refers to here has been discussed before on this blog. Here is what I stated:

This was a short ‘letter to the editor’ by Ernst and Pittler published in the J Clin Epidemiol commenting on the above-mentioned re-analysis by Linde et al which was published in the same journal. As its text is not available on-line, I re-type parts of it here:

In an interesting re-analysis of their meta-analysis of clinical trials of homeopathy, Linde et al conclude that there is no linear relationship between quality scores and study outcome. We have simply re-plotted their data and arrive at a different conclusion. There is an almost perfect correlation between the odds ratio and the Jadad score between the range of 1-4… [some technical explanations follow which I omit]…Linde et al can be seen as the ultimate epidemiological proof that homeopathy is, in fact, a placebo.

Again Hahn’s interpretation of our paper is incorrect and implies that he has not understood what we actually intended to do here.

_____________________________________________________________________________

THIRD ARTICLE

Two years later, Ernst [7] summarized the systematic reviews of homeopathy published in the wake of Linde’s first metaanalysis [2]. To support the view that homeopathy lacks effect, Ernst cited his own publications from 1998 and 2000 [5, 6]. He also presented Linde’s 2 follow-up reports [3, 4] as being further evidence that homeopathy equals placebo. 

Again, Hahn assumes my aim in publishing this paper (the only one of the 3 papers that is available as full text on-line): ‘to support the view that homeopathy lacks effect’. He does so despite the fact that the paper very clearly states my aim: ‘This article is an attempt to critically evaluate all such papers published since 1997 with a view to defining the clinical effectiveness of homeopathic medicines.‘ This discloses, perhaps better than anything else, that Hahn’s article is not evidence-based but opinion-based, not objective but polemic.

Hahn then seems to resent that I included my own articles. Does he not know that, in a systematic review, one has to include ALL relevant papers? Hahn also seems to imply that I merely included a few papers in my systematic review. In fact, I included all 17 that were available at the time. It might also be worth mentioning that numerous subsequent, independent analyses employing methodologies similar to mine arrived at the same conclusions as my review.

_____________________________________________________________________________

Despite his overtly misleading statements, Hahn offers little real critique of my work. Certainly, he does not state that I made any major mistakes in the 3 papers he cites. For his more vitriolic comments, we need to look at the discussion section of his article, where he states:

Ideology Plays a Part

Ernst [7] makes conclusions based on assumed data [6] when the true data are at hand [3]. Ernst [7] invalidates a study by Jonas et al. [18] that shows an odds ratio of 2.19 (1.55–3.11) in favor of homeopathy for rheumatic conditions, using the notion that there are not sufficient data for the treatment of any specific condition [6]. However, his review deals with the overall efficacy of homeopathy and not with specific conditions. Ernst [7] still adds this statistically significant result in favor of homeopathy over placebo to his list of arguments of why homeopathy does not work. Such argumentation must be reviewed carefully before being accepted by the reader.

After re-studying all this in detail, I get the impression that Hahn does not understand (or does not want to understand?) the research questions posed, nor the methodologies employed, in my 3 articles. He is remarkably selective in choosing just 3 of my papers (his reference No 7 cites many more of my systematic reviews of homeopathy), and he seems determined to get the wrong end of the stick in order to defame me. How he can, based on his ‘analysis’, arrive at the conclusion that “I have never encountered any scientific writer who is so clearly biased as this Edzard Ernst” is totally beyond reason.

In one point, however, Hahn seems to be correct: IDEOLOGY PLAYS A PART (NOT IN MY BUT IN HIS EVALUATION).

_____________________________________________________________________________

REFERENCES AS CITED IN HAHN’S ARTICLE

5 Ernst E: Are highly dilute homeopathic remedies placebos? Perfusion 1998;11:291.

6 Ernst E, Pittler MH: Re-analysis of previous metaanalysis of clinical trials of homeopathy. J Clin Epidemiol 2000;53:1188.

7 Ernst E: A systematic review of systematic reviews of homeopathy. Br J Clin Pharmacol 2002;54:577–582.

______________________________________________________________________________

For more information about Hahn, please see two comments on my previous post (by Björn Geir, who understands Hahn’s native language).

This is also where you can find the only comment by Hahn that I am aware of:
Robert Hahn on Saturday 17 September 2016 at 09:50

Somebody alerted me to this website. Dr. Ernst spends most of his effort in replying to my article in Forsch Komplementmed 2013; 20: 376-381 on discussing who I might be as a person. I had hoped to see more effort put into scientific reasoning.

1. For the scientific part: my experience in scientific reasoning is quite long and extensive. I am the most widely published Swede in the area of anesthesia and intensive care ever. Those who doubt this can look up “Hahn RG” on PubMed.

2. For the religious part, which in my mind has nothing to do with this topic: my wife developed a spiritualistic ability in the mid-1990s, which I have explored in four books published in Swedish between 1997 and 2007. I became convinced that much of this is true, but not all. The books reflect interviews with my wife and what happened in our family during that time. Almost half of all Swedes believe in an afterlife and in the existence of a spiritual world. Dr. Ernst’s reasoning is typical of skeptics, namely that a person with a known religious belief is not to be trusted – i.e. a person cannot have two sides, a religious and a scientific. I do not agree with that, but the view has meant that almost no scientist dares to tell his religious beliefs to anyone (which Ernst reinforces by his reasoning). Besides, I am not a very religious person at all, although the years spent writing these books were quite an interesting period of my life – in particular the last book, which involved past-life memories that had been revived during self-hypnotism. I am interested in exploring many sorts of secrets, not only scientific ones. But all types of evidence must be judged according to their own rules and laws.

3. Why did I write about homeopathy? The reason is a campaign led by skeptics some summers ago. Teenagers sat in Swedish television and stated firmly that “there is not a single publication showing that homeopathy works – nothing!”. I wondered how these young boys could know that, and suspected that they had simply been instructed to say so by older skeptics. I looked up the topic on PubMed and soon found some positive papers. Not difficult to find. Had they looked? Surely not. I was a frequent blogger at the time, and wrote three blog posts summarizing meta-analyses asking the question whether homeopathy was superior to placebo (disregarding the underlying disease). The response from my readers was impressive and I was eventually urged to write it up in English, which I did. That is the background to my article. I have no other involvement in homeopathy.

4. Me and Dr Ernst. I came across his name when scanning articles about homeopathy, and decided to look a bit deeper into what he had written. The typical scenario was to publish meta-analyses but exclude almost all material, leaving very little (just a scant part of the literature) to summarize. No wonder there were no significant differences. If there were still significant differences, the material was typically considered by him to be still too small or too imprecise or whatever to allow any conclusion. This was quite systematic, and I lost trust in Ernst’s writings. This was pure scientific reasoning and has nothing to do with religion or anything else.

// Robert Hahn

_________________________________________________________________________

Lastly, if you need more info about Hahn, you might also want to read this.

17 Responses to An analysis of Hahn’s critique of my homeopathy papers: YES, IDEOLOGY DOES SEEM TO PLAY A PART

  • PubMed lists 3 SRs for Hahn, one of which is for homeopathy.

    PubMed lists 455 SRs for Ernst of which 36 mention “homeopathy” in the title or abstract.

    Hahn discusses the rationale behind EBM on his blog and claims that statistical tests alone can sufficiently “prove” the efficacy of treatments in the absence of any known mechanism by which the treatment might work – the risk of the result being due to chance being less than 5%. That may be good enough for medical treatments in general but may not always be so for particular cases. In the absence of any plausible mechanism of action, for example, that 5% chance might look like a very good bet.

    For homeopathy we should be demanding a higher statistical bar, accepting say less than 0.5% risk of a chance result.

    Alternatively we could reject all but the most superlative quality RCTs and SRs from consideration – but Hahn doesn’t agree with doing that for homeopathy.

    Hahn is wrong to believe that conventional statistical measures can provide reasonable assurance of treatment benefit in the absence of any known potential mechanism of action. When the treatment consists of a drop of purified water he is very wrong indeed.

    • “For homeopathy we should be demanding a higher statistical bar, accepting say less than 0.5% risk of a chance result.”

      Even that’s too high, given the extremely low prior probability for homeopathy, as you say, “in the absence of any known potential mechanism of action”. For a detailed discussion of p-values see David Colquhoun’s blog: http://www.dcscience.net/2014/03/24/on-the-hazards-of-significance-testing-part-2-the-false-discovery-rate-or-how-not-to-make-a-fool-of-yourself-with-p-values/.

      • I refrained from suggesting a 5 standard deviations test, plumping for three instead of the conventional two standard deviations (p<0.05). I was tempted though! (I was talking in the same terms as Hahn by the way.)

      • On further reflection, after checking Wikipedia and considering my comment below to Edzard, I will now opt for the full Monty for homeopathy. Show me your 5-sigma please!

        As Wikipedia says: “The usefulness of this heuristic depends significantly on the question under consideration. In the social sciences, a result may be considered “significant” if its confidence level is of the order of a two-sigma effect (95%), while in particle physics, there is a convention of a five-sigma effect (99.99994% confidence) being required to qualify as a discovery.”

        The heuristic in question being the 3-sigma rule of thumb. So where does homeopathy fit in? It’s social-ish in as much as it’s medicine. On the other hand if it really does work it’s a challenge to everything we know in modern science. That places it in the fundamental physics league. Ergo 5-sigma.

        https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule
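        The sigma thresholds traded back and forth above translate into p-values in a couple of lines. A minimal sketch (the function name is mine; the figures are the standard two-sided tail probabilities of a normal distribution):

```python
import math

def sigma_to_p(n_sigma: float) -> float:
    """Two-sided p-value for a result n_sigma standard deviations from the null mean."""
    return math.erfc(n_sigma / math.sqrt(2))

for n in (2, 3, 5):
    # 2-sigma -> ~4.6e-02, 3-sigma -> ~2.7e-03, 5-sigma -> ~5.7e-07
    print(f"{n}-sigma -> p = {sigma_to_p(n):.2e}")
```

        This makes the commenter's point concrete: moving from the conventional 2-sigma to a particle-physics 5-sigma bar tightens the threshold by roughly five orders of magnitude.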

    • With homeopathy it is not just the absence of any reasonable model of how the treatment might work. It is not just unexplained, which would imply there is a chance somebody might come up with something sometime in the future. It is downright in contrast with science, technology and everyday experience. For instance, if shaking increases the effectiveness of a solution, why does this not happen when you put your coffee down on your table a little roughly? Why is it that sterile and inert water remains inert when you shake it? Why should “information”, or whatever you may call it, from abundant stuff like table salt have an effect on the individual while daily consumption of the stuff itself has none?

      Any model of action should not only include how the effects that make homeopathy work come about. It also has to provide some explanation why these effects do not happen in everyday life under similar conditions.

      In a nutshell: Homeopathy is not just a possible new feature of science undiscovered as yet: either homeopathy is right and science is wrong or science is right and the evidence is flawed in some way – regardless of the p-value? Which is more probable?

      • CORRECT!
        it is not that we cannot understand how it works, but we understand that it cannot work.

        • As far as modern science goes we understand that it cannot work. However, a hyperbolical doubt can never be entirely removed that the whole of modern science may be flawed.

          That’s what it would take. That’s where the weird and wonderful science boundary “theories” start popping up. And where statistical significance pops up when a load of crappy studies are all mulched together with one or two good quality studies.

    • A concept that Hahn (and many other scientists) does not understand is the concept of prior probability. A statistical test tests the null hypothesis, i.e. that there is no difference. However, any given study has literally millions of alternative hypotheses. “Pink dinosaurs cured my patients” is – for statistical purposes – a valid alternative hypothesis, albeit with a prior probability of zero. In order to decide which alternative hypothesis to choose, one has to consider the probability of at least the following hypotheses: (a) the drug worked, (b) a confounding factor and (c) coincidence. Since homeopathy violates not one but several theories, its prior probability is zero. These probabilities are decisively influenced by the samples and the design. Since the design of homeopathic studies is mostly weak, they are prone to (b) or (c).

      Personally, I think that meta-analysis methods are not really suited to testing hypotheses with low prior probability, since most meta-analyses assume that the prior probability of (a) is larger than that of (b) or (c). A concept that many scientists also underestimate is Littlewood’s law. Given one event per second during one’s waking hours, and defining a miracle as an event with a chance of 1 in a million, we encounter a miracle every 35 days, on average. That means that even very rare events are more common than thought.
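      Littlewood's back-of-envelope arithmetic is easy to check. A sketch, assuming (as Littlewood did, though the comment above does not spell it out) roughly eight alert hours per day:

```python
# Littlewood's law: one noticeable event per second, eight alert hours a day.
events_per_day = 8 * 60 * 60      # 28,800 events per alert day
miracle_odds = 1_000_000          # a "miracle": a 1-in-a-million event

days_per_miracle = miracle_odds / events_per_day
print(round(days_per_miracle, 1))  # ~34.7 days, i.e. roughly one miracle a month
```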

      • While the concept of prior probability is fundamental to calculating probabilities using Bayes’s theorem, I don’t see how it can be applied to clinical trials. I can’t think of any trial where the prior probability could be quantified, and indeed, if a result runs counter to accepted medical understanding and practice (such as Helicobacter pylori as the cause of peptic ulceration), analysing the results on the basis of an assumed low prior probability will discount and probably hide a real effect. At least the null hypothesis gives us a measure of the influence of chance in obtaining the observed results, even if it doesn’t shed any light on whether a “significant” result is due to pink dinosaurs or something else.

        The way to differentiate between (a) a real effect, (b) a confounding factor and (c) coincidence is in the design of the trial in the first place. Retrospective studies are at great risk from confounding factors, whereas properly blinded, randomised controlled trials are much less so. Clearly stating the trial design, how the results will be obtained, what the end points are and how the statistical analysis will be carried out, IN ADVANCE, BEFORE A SINGLE PATIENT HAS BEEN RECRUITED, is also an important way of improving the robustness of a trial, avoiding the spurious statistical significance that comes with data dredging, and preventing negative trials from disappearing.

        My understanding of how meta-analysis works is that essentially the results of many trials are pooled, and analysed as though they comprise one large trial with more subjects; the null hypothesis is still the basis of the statistical analysis.

        I encountered my own version of Littlewood’s law when I was a junior doctor working in acute medicine. I was struck by how often I encountered patients with very rare conditions, until I realised that there were a great many rare conditions out there (and of course the commoner ones are less likely to make it as far as a specialist department).

        What you are really saying when you talk of a prior probability of zero, is that you believe homeopathic remedies are so unlikely to be effective, on the basis of their stated mechanism of action, that there is no reason to test them in clinical trials in the first place. This may well be true, but it is in contrast to the way that clinical trials are carried out, where it is the effectiveness of the treatment which is being tested, not how it works.

        I would like to take issue with you assigning a prior probability of zero in these trials, because by so doing, you are saying that current theories of physics are correct, and by implication that further research in physics is unnecessary. I think most physicists would take the view that current theories are the best we have so far to explain observed phenomena, incomplete and even contradictory as they are. They are continuously searching for evidence to disprove what is currently accepted, thereby moving the field forward. Though they do generally require something a bit stronger than a p-value of 0.05; for instance a “7 sigma” result means that the measurements are seven standard deviations away from what would be predicted by chance, equivalent to a p-value of 0.0000000000026.

        Indeed, with their strange angles and distribution of charge, the ability of water molecules to form various microstructures in the liquid phase is something else that is still very poorly understood, though I doubt whether we will ever find that the person preparing a 30C dilution can mould the structure of the water in any meaningful way, however strong their intention may be.

        • First, I agree that prior probability is difficult to calculate or even estimate for a clinical trial (as in any experimental setting, btw). Indeed, good trial – and experimental – design aims at minimizing the prior probability of unknown and/or unwanted confounding factors. You are correct that a false estimate of the prior probability might lead to overlooking some factors. However, not taking prior probability into account leads to situations very well described in Ioannidis’ paper available here: http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124.

          With regard to a prior probability of zero, a scientific theory is a model that is so well supported by data that it is very unlikely to be overthrown one day. The 7 sigma p-value – and the theories still holding – reflects that. That means that any hypothesis that requires a fundamental change in a theory has a prior probability of very close to zero, in any case much lower than that of the existence of an unwanted confounding factor or false positive. Homeopathy does not only violate one theory, but several. Your water example is a very good example for that. Even if we find that succussion molds water in a certain way, one still has to demonstrate that this water molding has an effect on cellular level. If that succeeds, one has to demonstrate that this effect is therapeutic on cellular level. If that succeeds, one has to demonstrate that a few small pills have a strong enough effect to act on the organism as a whole. Every single one of these demonstrations would not only extend an existing theory, but completely negate it.

          • Thank you for directing me to Ioannidis’s paper. I have to say, I don’t think he has explained the section on prior probabilities very well. The method he describes is very much applicable, for instance, when interpreting the result of a test: a screening test with 95% accuracy can only be interpreted with regard to the prior probability of it being positive, in other words the prevalence of the screened condition within the population. However, I don’t see how this can in any way be applied to clinical trials, where the prior probability is unknown (we can estimate that it is high or low, depending on what we think the trial is going to show, but that doesn’t give us a number to plug into an equation).
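            The screening-test point can be made concrete with Bayes' theorem. A small sketch with illustrative numbers (the 1% prevalence and the 95% sensitivity/specificity are assumptions for the example, not figures from the comment):

```python
def ppv(prior: float, sensitivity: float, specificity: float) -> float:
    """Probability the condition is present given a positive test (Bayes' theorem)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A "95% accurate" test (sensitivity = specificity = 0.95) for a condition
# with 1% prevalence: most positives are false positives.
print(round(ppv(0.01, 0.95, 0.95), 3))  # ~0.161, i.e. only ~16% of positives are real
```

            The same arithmetic is what drives Ioannidis's argument: when the prior is tiny, even an accurate-looking "positive" result is probably false.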

            Also, close to zero is not the same as zero.

            I’m not really disagreeing with what you are saying, I’m just trying to be rigorous.

          • Well, any hypothesis in any experiment has a prior probability. Therefore the hypothesis “my drug works” (sloppily expressed) in a clinical trial also has one. In this case it is difficult to determine, I admit that. The prior probability is high if there are prior supporting data, like a mechanism, etc., and it is low if even the mechanism itself seems unlikely. I will give you an example:

            Imagine a trial on the anti-inflammatory effects of dexamethasone. You administer it to patients and observe a significant reduction of inflammation. From the myriad of possible hypotheses we take the following:

            (a) false positive
            (b) design error
            (c) the pink dinosaur did it
            (d) dexamethasone inhibited the inflammation.

            (b) should be taken care of by trial design, (a) by reproducibility, and (c) has a prior probability of very close to zero because nobody has ever seen a pink dinosaur.

            Dexamethasone is a known NF-κB pathway inhibitor, which in turn is known to abrogate inflammation; therefore we accept (d) as the explanation for our observations.

            Please don’t be fooled by the pink dinosaur hypothesis. From a statistical point of view it is just as *valid* a hypothesis as (a), (b), and (d). However, it is so bizarre that we don’t even think of taking it into account. Why? Because in this case we are aware that the prior probability is almost zero. With homeopathy the bizarreness of the hypothesis is not as obvious, but it is nevertheless present.

            The task is to transit from evidence to science based medicine, which homeopathy will not survive.

          • It is not so difficult to estimate a prior probability of homeopathy being effective – not precisely, but not invalidly either. Indeed, homeopaths are largely ignorant of the importance of considering prior probabilities. The trouble lies in what “homeopathy has an effect” means. This formulation needs to be broken down. It would have to be a combined positive answer to the following:

            a) The condition is objectively pathological (i.e. we have a strongly reliable diagnosis).
            b) The solution contains an active ingredient.
            c) The active ingredient has pharmacological effects.
            d) The effects of the active ingredient are related to the disease or pathology at hand.

            Now, one by one, we can have fairly reasonable estimates of prior probabilities for these events:

            a) We can be quite confident that we can work this out well, so we can approach a prior probability of 1; let’s set Pa = 1.

            b) For lots of substances, we have certified, tested and reliable titration methods. We can construct a prior probability for this by giving one or more laboratories some dozens of 30C diluted solutions of whatever we desire to test and let them attempt an arbitrary number of titrations. The number of successful titrations detecting any amount of the substance at hand, divided by the total number of titration attempts provides us with a preliminary prior probability Pb, which is the probability of the solution containing active ingredient.

            c) We can search for pharmacokinetic data here and try to establish a probability of the substance at hand having exhibited observed effects in a prior study, not in vitro but in vivo. IF the Pc probability we construct here is based on in-vitro studies, we have to scale it by a coefficient that relates the chances of an in-vivo effect given an in-vitro effect in overall substance trials.

            d) This can be based on pharmacology and is a simple correlation analysis. It is evident that we cannot expect salt to have a BP-lowering effect, for example. Let’s call this Pd.

            Our prior probability is somewhat along the lines of:
            P = Pa*Pb*Pc*Pd.

            It is a lot easier to argue from here on about how Pb will inevitably approach zero for ultramolecular dilutions, and Pd will also approach zero for various choices of the active ingredient. Also, we have lots of data at hand which show that numerous substances have evident in-vitro effects but no effect in vivo. Therefore Pc is also often really small when the argumentation is based on in-vitro observations.
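            The commenter's product of factors is easy to play with numerically. A sketch with purely illustrative magnitudes (none of these numbers come from the comment; Pb in particular is just a stand-in for "vanishingly small"):

```python
# Illustrative (assumed) orders of magnitude for the four factors:
Pa = 1.0      # reliable diagnosis
Pb = 1e-10    # chance a 30C dilution still contains any active ingredient
Pc = 0.1      # chance an in-vitro effect carries over in vivo
Pd = 0.5      # chance the ingredient's effects relate to the condition

P = Pa * Pb * Pc * Pd
print(f"{P:.1e}")  # 5.0e-12: one near-zero factor drags the whole prior to ~0
```

            Because the factors multiply, a single near-zero term (here Pb) dominates the result no matter how favourable the others are.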

  • Actually significance at the 5% level does not mean that the risk of the obtained results being due to chance is 5%. It means that if the results ARE due to chance (i.e. if the null hypothesis is true), then we should expect to see the obtained results 5% of the time. This is a very important difference. For instance, if I toss a coin five times in a row, the chance of it coming up heads each time is 1/32 (or about 3%). In other words if we assume the null hypothesis (a fair coin) then five consecutive heads is statistically significant at the 3% level. This does not mean that there is a 3% chance that the coin is, in fact, double-headed.
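    The coin example also shows numerically why P(data | null) and P(null | data) differ. A sketch, with a 1-in-10,000 prevalence of double-headed coins as an invented prior for illustration:

```python
# Probability of five heads in a row from a fair coin: P(data | fair coin)
p_data_given_fair = 0.5 ** 5
print(p_data_given_fair)  # 0.03125, i.e. about 3%

# This is NOT P(fair coin | data). That needs a prior. Assume (purely for
# illustration) that 1 coin in 10,000 is double-headed:
prior_biased = 1 / 10_000
p_heads_given_biased = 1.0  # a double-headed coin always shows heads

posterior_biased = (prior_biased * p_heads_given_biased) / (
    prior_biased * p_heads_given_biased
    + (1 - prior_biased) * p_data_given_fair
)
print(round(posterior_biased, 4))  # ~0.0032: almost certainly still a fair coin
```

    So a result "significant at the 3% level" coexists happily with a >99% probability that the null hypothesis is true, once the prior is taken into account.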

    As a physician, for many reasons I would not base my practice on the outcome of a single trial where the result is positive at the 5% level.

    It is important to be statistically aware when reading the report of a clinical trial (and not just to look at the abstract) and to reject studies where the methodology, results and statistical analysis are not sufficiently robust (or where insufficient details are given to assess these). It is also important to be sufficiently familiar with what is being investigated to be able to judge whether the questions addressed in a study are even meaningful.

    I certainly agree with Leigh Jackson that if the results do not fit with established science, or the underlying mechanism is not biologically plausible, then we must demand a higher bar. Extraordinary claims require extraordinary evidence.

    • Thanks. I followed Hahn’s own wording – albeit as translated by Google. From what he writes on his blog he appears to believe that not having any scientifically plausible (and most importantly testable) mechanism of action to explain how homeopathy might work, is of no consequence in deciding whether or not it does work. Statistics alone are all that are required – or more specifically the conventional 5% benchmark is all that is required. This puts too many watery eggs in one very flimsy basket.

  • Trying to understand an enigmatic personality such as Dr. Hahn requires understanding that he is deeply affected by religion, and religion does strange things to people’s perception of reality.

    Dr. Hahn and his wife are spiritualists. His wife, Marie-Louise, thinks she is a medium and they have been actively exploring these ideas. They have co-authored several books on the subject of spiritism.
    Dr. Hahn is involved in anthroposophy, a religious cult that runs a famously controversial “clinic” in his home country Sweden. Homeopathy is extensively used by this cult.
    If I recall correctly Dr. Hahn thinks he has found his former life in a person from the 13th century.

    His web at http://www.roberthahn.se/ is in Swedish but you can translate the text easily and quite satisfactorily.
    If you use the Google Chrome browser it may offer to translate the pages automagically.
    Or you can copy paste the link (the URL) of a page into the left field in “translate.google.com” and choose Swedish as the original language and in the right field choose your preferred language and click the link that appears.

    • He is certainly an idiosyncratic thinker. His passions were raised by a TV programme showing young people being highly sceptical of homeopathy. He was spurred into action. Off he goes and discovers that homeopathy does in fact work! He discovers some analysis of a batch of RCTs with a positive statistical result. He discovers contradictory analysis resulting in negative results. The contradictory analysis is fraudulent, he says.

      Hume said that reason is slave to the passions. I would say that all humanity’s frail faculties can be brought into the service of the passions. Our capacity for statistics for example. We should all examine our consciences, be we sceptics or idiosyncratics.

      Science is our understanding of the nature of things, after as much as is humanly possible of human egotism has been stripped away.
