MD, PhD, FMedSci, FRSB, FRCP, FRCPEd.


“Non-reproducible single occurrences are of no significance to science.” This quote from Karl Popper often seems to be forgotten in medicine, particularly in alternative medicine. It means that findings have to be reproducible to be meaningful; if they are not, we cannot be sure that the outcome in question was caused by the treatment we applied.

This is thus a question of cause and effect.

The statistician Sir Austin Bradford Hill proposed in 1965 a set of 9 criteria for providing evidence of a relationship between a presumed cause and an observed effect, while demonstrating the connection between cigarette smoking and lung cancer. One of his criteria is consistency, or reproducibility: consistent findings observed by different persons in different places with different samples strengthen the likelihood of an effect.

By mentioning ‘different persons’, Hill seems to also establish the concept of INDEPENDENT replication.

Let me try to explain this with an example from the world of SCAM.

  1. A homeopath feels that childhood diarrhoea could perhaps be treated with individualised homeopathic remedies.  She conducts a trial, finds a positive result and concludes that the statistically significant decrease in the duration of diarrhea in the treatment group suggests that homeopathic treatment might be useful in acute childhood diarrhea. Further study of this treatment deserves consideration.
  2. Unsurprisingly, this study is met with disbelief by many experts. Some go as far as doubting its validity, and several letters to the editor appear expressing criticism. The homeopath is thus motivated to run another trial to prove her point. Its results are consistent with the finding from the previous study that individualized homeopathic treatment decreases the duration of diarrhea and number of stools in children with acute childhood diarrhea.
  3. We now have a replication of the original finding. Yet, for a range of reasons, sceptics are far from satisfied. The homeopath thus runs a further trial and publishes a meta-analysis of all three studies. The combined analysis shows a duration of diarrhoea of 3.3 days in the homeopathy group compared with 4.1 in the placebo group (P = 0.008). She thus concludes that the results from these studies confirm that individualized homeopathic treatment decreases the duration of acute childhood diarrhea and suggest that larger sample sizes be used in future homeopathic research to ensure adequate statistical power. Homeopathy should be considered for use as an adjunct to oral rehydration for this illness.
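For readers who wonder what such a ‘combined analysis’ involves technically, here is a minimal sketch of fixed-effect, inverse-variance pooling of mean differences. The per-trial means, SDs and sample sizes below are entirely invented for illustration; only the rough order of magnitude mirrors the pooled result quoted above.

```python
import math

# Hypothetical per-trial data (NOT the real trial numbers): mean duration of
# diarrhoea in days, SD, and sample size for homeopathy (h) and placebo (p) arms.
trials = [
    {"h_mean": 3.2, "h_sd": 1.8, "h_n": 40,  "p_mean": 4.0, "p_sd": 2.0, "p_n": 41},
    {"h_mean": 3.4, "h_sd": 2.1, "h_n": 43,  "p_mean": 4.2, "p_sd": 2.2, "p_n": 44},
    {"h_mean": 3.3, "h_sd": 1.9, "h_n": 120, "p_mean": 4.1, "p_sd": 2.1, "p_n": 122},
]

num = den = 0.0
for t in trials:
    md = t["h_mean"] - t["p_mean"]                           # mean difference
    var = t["h_sd"]**2 / t["h_n"] + t["p_sd"]**2 / t["p_n"]  # variance of the MD
    w = 1.0 / var                                            # inverse-variance weight
    num += w * md
    den += w

pooled_md = num / den                     # weighted average of mean differences
se = math.sqrt(1.0 / den)                 # standard error of the pooled estimate
z = pooled_md / se
p = math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value from the normal distribution

print(f"pooled difference: {pooled_md:.2f} days (z = {z:.2f}, p = {p:.4f})")
```

Note that pooling three trials from the same group in this way says nothing about independent replication; it merely aggregates whatever biases the trials share.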

To most homeopaths, this body of evidence from three replications seems sound and solid. Consequently, they frequently cite these publications as cast-iron proof of their assumption that individualised homeopathy is effective. Sceptics, however, are still not convinced.

Why?

The studies have been replicated alright, but what is missing is an INDEPENDENT replication.

To me this word implies two things:

  1. The results have to be reproduced by another research group that is unconnected to the one that conducted the three previous studies.
  2. That group needs to be independent from any bias that might get in the way of conducting a rigorous trial.

And why do I think this latter point is important?

Simply because I know from many years of experience that a researcher who strongly believes in homeopathy (or any other subject in question) will inadvertently introduce all sorts of biases into a study, even if its design is seemingly rigorous. In the end, these flaws will not necessarily show in the published article, which means that the public will be misled. In other words, the paper will report a false-positive finding.

It is possible, even likely, that this has happened with the three trials mentioned above. The fact is that, as far as I know, there is no independent replication of these studies.

In the light of all this, Popper’s axiom as applied to medicine should perhaps be modified: findings without independent replication are of no significance. Or, to put it even more bluntly: independent replication is an essential self-cleansing process of science by which it rids itself from errors, fraud and misunderstandings.

On this blog, we constantly discuss the shortcomings of clinical trials of (and other research into) alternative medicine. Yet, there can be no question that research into conventional medicine is often unreliable as well.

What might be the main reasons for this lamentable fact?

A recent BMJ article discussed 5 prominent reasons:

Firstly, much research fails to address questions that matter. For example, new drugs are tested against placebo rather than against usual treatments. Or the question may already have been answered, but the researchers haven’t undertaken a systematic review that would have told them the research was not needed. Or the research may use outcomes, perhaps surrogate measures, that are not useful.

Secondly, the methods of the studies may be inadequate. Many studies are too small, and more than half fail to deal adequately with bias. Studies are not replicated, and when people have tried to replicate studies they find that most do not have reproducible results.

Thirdly, research is not efficiently regulated and managed. Quality assurance systems fail to pick up the flaws in the research proposals. Or the bureaucracy involved in having research funded and approved may encourage researchers to conduct studies that are too small or too short term.

Fourthly, the research that is completed is not made fully accessible. Half of studies are never published at all, and there is a bias in what is published, meaning that treatments may seem to be more effective and safer than they actually are. Then not all outcome measures are reported, again with a bias towards those that are positive.

Fifthly, published reports of research are often biased and unusable. In trials about a third of interventions are inadequately described meaning they cannot be implemented. Half of study outcomes are not reported.

END OF QUOTE
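The fourth point in the quote above, publication bias, is easy to demonstrate numerically. The toy simulation below (all numbers invented) assumes a treatment whose true effect is exactly zero and ‘publishes’ only those trials that happen to cross a significance threshold; the published literature then shows a convincingly positive average effect anyway.

```python
import random
import statistics

random.seed(1)

# True effect of the treatment is exactly zero; both arms draw from the
# same distribution. Only trials whose observed difference looks
# "significant" in the positive direction get published.
true_effect, sd, n_per_arm = 0.0, 1.0, 30
published = []

for _ in range(1000):
    treat = [random.gauss(true_effect, sd) for _ in range(n_per_arm)]
    ctrl = [random.gauss(0.0, sd) for _ in range(n_per_arm)]
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    se = sd * (2 / n_per_arm) ** 0.5
    if diff / se > 1.96:            # crude one-sided "significance" filter
        published.append(diff)

# The published record suggests a clearly positive effect,
# even though the true effect is zero by construction.
print(len(published), "trials published;",
      f"mean published effect: {statistics.mean(published):.2f}")
```

Roughly 2–3% of the null trials clear the filter, and every one of them reports a positive effect; a reader of the journals alone would conclude the treatment works.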

Apparently, these 5 issues are the reason why 85% of biomedical research is being wasted.

That is in CONVENTIONAL medicine, of course.

What about alternative medicine?

There is no question in my mind that the percentage figure must be even higher here. But do the same reasons apply? Let’s go through them again:

  1. Much research fails to address questions that matter. That is certainly true for alternative medicine – just think of the plethora of utterly useless surveys that are being published.
  2. The methods of the studies may be inadequate. Also true, as we have seen hundreds of times on this blog. In my experience, some of the most prevalent flaws include small sample sizes, lack of adequate controls (e.g. the A+B vs B design) and misleading conclusions.
  3. Research is not efficiently regulated and managed. True, but probably not a specific feature of alternative medicine research.
  4. Research that is completed is not made fully accessible. Most likely true but, due to a lack of information and transparency, impossible to judge.
  5. Published reports of research are often biased and unusable. This is unquestionably a prominent feature of alternative medicine research.

All of this seems to indicate that the problems are very similar – similar but much more profound in the realm of alternative medicine, I’d say based on many years of experience (yes, what follows is opinion and not evidence because the latter is hardly available).

The thing is that, like almost any other job, research needs knowledge, skills, training, experience, integrity and impartiality to do it properly. It simply cannot be done well without such qualities. In alternative medicine, we do not have many individuals who have all or even most of these qualities. Instead, we have people who often are evangelical believers in alternative medicine, who want to further their field by doing some research, and who therefore acquire only a thin veneer of scientific expertise.

In my 25 years of experience in this area, I have not often seen researchers who knew that research is for testing hypotheses and not for trying to prove one’s hunches to be correct. In my own team, those who were the most enthusiastic about a particular therapy (and were thus seen as experts in its clinical application), were often the lousiest researchers who had the most difficulties coping with the scientific approach.

For me, this continues to be THE problem in alternative medicine research. The investigators – and some of them are now sufficiently skilled to bluff us into believing they are serious scientists – essentially start on the wrong foot. Because they never were properly trained and educated, they fail to appreciate how research proceeds. They hardly know how to properly establish a hypothesis, and – most crucially – they don’t know that, once that is done, you ought to conduct investigation after investigation to show that your hypothesis is incorrect. Only once all reasonable attempts to disprove it have failed can your hypothesis be considered correct. These multiple attempts at disproof go entirely against the grain of an enthusiast who carries plenty of emotional baggage and therefore cannot bring him/herself honestly to attempt to disprove his/her beloved hypothesis.

The plainly visible result of this situation is the fact that we have dozens of alternative medicine researchers who never publish a negative finding related to their pet therapy (some of them were admitted to what I call my HALL OF FAME on this blog, in case you want to verify this statement). And the lamentable consequence of all this is the fast-growing mountain of dangerously misleading (but often seemingly robust) articles about alternative treatments polluting Medline and other databases.

An article entitled “Homeopathy in the Age of Antimicrobial Resistance: Is It a Viable Treatment for Upper Respiratory Tract Infections?” cannot possibly be ignored on this blog, particularly if published in the amazing journal ‘Homeopathy‘. The title does not bode well, in my view – but let’s see. Below, I copy the abstract of the paper without any changes; all I have done is include a few numbers in brackets; they refer to my comments that follow.

START OF ABSTRACT

Acute upper respiratory tract infections (URTIs) and their complications are the most frequent cause of antibiotic prescribing in primary care. With multi-resistant organisms proliferating, appropriate alternative treatments to these conditions are urgently required. Homeopathy presents one solution (1); however, there are many methods of homeopathic prescribing. This review of the literature considers firstly whether homeopathy offers a viable alternative therapeutic solution for acute URTIs (2) and their complications, and secondly how such homeopathic intervention might take place.

METHOD:

Critical review of post 1994 (3) clinical studies featuring homeopathic treatment of acute URTIs and their complications. Study design, treatment intervention, cohort group, measurement and outcome were considered. Discussion focused on the extent to which homeopathy is used to treat URTIs, rate of improvement and tolerability of the treatment, complications of URTIs, prophylactic and long-term effects, and the use of combination versus single homeopathic remedies.

RESULTS:

Multiple peer-reviewed (4) studies were found in which homeopathy had been used to treat URTIs and associated symptoms (cough, pharyngitis, tonsillitis, otitis media, acute sinusitis, etc.). Nine randomised controlled trials (RCTs) and 8 observational/cohort studies were analysed, 7 of which were paediatric studies. Seven RCTs used combination remedies with multiple constituents. Results for homeopathy treatment were positive overall (5), with faster resolution, reduced use of antibiotics and possible prophylactic and longer-term benefits.

CONCLUSIONS:

Variations in size, location, cohort and outcome measures make comparisons and generalisations concerning homeopathic clinical trials for URTIs problematic (6). Nevertheless, study findings suggest at least equivalence between homeopathy and conventional treatment for uncomplicated URTI cases (7), with fewer adverse events and potentially broader therapeutic outcomes. The use of non-individualised homeopathic compounds tailored for the paediatric population merits further investigation, including through cohort studies (8). In the light of antimicrobial resistance, homeopathy offers alternative strategies for minor infections and possible prevention of recurring URTIs (9).

END OF ABSTRACT

And here are my comments:

  1. This sounds as though the author already knew the conclusion of her review before she even started.
  2. Did she not just state that homeopathy is a solution?
  3. This is most unusual; why were pre-1994 articles not considered?
  4. This too is unusual; whether a study was properly peer-reviewed is not really something a reviewer can affirm: one must take the journal’s word for it. Yet we know that peer-review is farcical in the realm of alternative medicine (see also below). Therefore, this is an odd inclusion criterion to mention in an abstract.
  5. An overall positive result obtained by including observational and cohort studies that lack a control group is meaningless. There is also no assessment of the quality of the RCTs; after a quick glance, I get the impression that the methodologically sound studies do not show homeopathy to be superior to placebo.
  6. Reviewers need to disentangle these complicating factors and arrive at a conclusion. This is almost invariably problematic, but it is the reviewer’s job.
  7. What might be the conventional treatment of uncomplicated URTI?
  8. Why on earth cohort studies? They tell us nothing about equivalence, efficacy etc.
  9. To reach that conclusion seems to have been the aim of this review (see my point number 1). If I am not mistaken, antibiotics are not indicated in the vast majority of cases of uncomplicated URTI. This means that the true alternative in the light of antimicrobial resistance is to not prescribe antibiotics and treat the patient symptomatically. No need at all for homeopathic placebos, and no need for wishful thinking reviews!

In the paper, the author explains her liking of uncontrolled studies: Non-RCTs and patient reported surveys are considered by some to be inferior forms of research evidence, but are important adjuncts to RCTs that can measure key markers such as patient satisfaction, quality of life and functional health. Observational studies such as clinical outcome studies and case reports, monitoring the effects of homeopathy in real-life clinical settings, are a helpful adjunct to RCTs and more closely reflect real-life experiences of patients and physicians than RCTs, and are therefore considered in this study. I would counter that this is not an issue of inferiority but one that depends on the research question; if the research question relates to efficacy/effectiveness, uncontrolled data are next to useless.

She also makes fairly categorical statements in the conclusion section of the paper about the effectiveness of homeopathy: [the] combined evidence from these and other studies suggests that homeopathic treatment can exert biological effects with fewer adverse events and broader therapeutic opportunities than conventional medicine in the treatment of URTIs. Given the cost implications of treating URTIs and their complications in children, and the relative absence of effective alternatives without potential side effects, the use of non-individualised homeopathic compounds tailored for the paediatric population merits further investigation, including through large-scale cohort studies…  the most important evidence still arises from practical clinical experience and from the successful treatment of millions of patients. I would counter that none of these conclusions are warranted by the data presented.

From reading the paper, I get the impression that the author (the paper provides no information about her conflicts of interest, nor funding source) is a novice to conducting reviews (even though the author is a senior lecturer, the paper reads more like a poorly organised essay than a scientific review). I am therefore hesitant to criticise her – but I do nevertheless find it deplorable that her article passed the peer-review process of ‘Homeopathy‘ and was published in a seemingly respectable journal. If anything, articles of this nature are counter-productive for everyone concerned; they certainly do not further effective patient care, and they give homeopathy-research a worse name than it already has.

The Impact Factor (IF) of a journal is a measure reflecting the yearly average number of citations to recent articles published in that journal. It is frequently used as a measure of the importance of a journal within its field; journals with higher impact factors are often deemed to be more important than those with lower ones. The IF for any given year can be calculated as the number of citations, received in that year, of articles published in that journal during the two preceding years, divided by the total number of articles published in that journal during the two preceding years.
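In code, that definition is a one-line ratio. The citation and article counts below are invented for illustration; they are not the journal’s actual figures.

```python
# Sketch of the Impact Factor formula described above, using made-up numbers.
def impact_factor(citations_in_year: int, articles_prev_two_years: int) -> float:
    """Citations received this year to articles from the two preceding years,
    divided by the number of articles published in those two years."""
    return citations_in_year / articles_prev_two_years

# e.g. hypothetical 2017 citations to articles published in 2015-2016:
print(impact_factor(120, 80))  # -> 1.5
```

Note that both the numerator (which citations count) and the denominator (which items count as ‘citable articles’) can be negotiated with the index provider, which is one of the reasons the metric is so easily gamed.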

A press-release celebrated the new IF of the journal ‘HOMEOPATHY’, which has featured on this blog before. I am sure that you all want to share in this joy:

START OF QUOTE

For the second year running there has been an increase in the number of times articles published in the Faculty of Homeopathy’s journal Homeopathy have been cited in articles in other peer-reviewed publications. The figure known as the Impact Factor (IF) has risen from 1.16 to 1.524, which represents a 52% increase in the number of citations.

An IF is used to determine the impact a particular journal has in a given field of research and is therefore widely used as a measure of quality. The latest IF assessment for Homeopathy covers citations during 2017 for articles published in the previous two years (2015 and 2016).

Dr Peter Fisher, Homeopathy’s editor-in-chief, said: “Naturally the editorial team is delighted by this news. This success is due to the quality and international nature of research and other content we publish. So I thank all those who have contributed such high quality papers, maintaining the journal’s position as the world’s foremost publication in the scholarly study of homeopathy. I would particularly like to thank our senior deputy editor, Dr Robert Mathie for all his hard work.”

First published in 1911 as the British Homoeopathic Journal, Homeopathy is the only homeopathic journal indexed by Medline, with over 100,000 full-text downloads per year. In January 2018, publishing responsibilities for the quarterly journal moved to Thieme, an award-winning medical and science publisher.

Greg White, Faculty chief executive, said: “Moving to a new publisher can be difficult, but the decision we took last year is certainly paying dividends. I would therefore like to thank everyone at Thieme for the part they are playing in the journal’s continued success.”

END OF QUOTE

While the champagne corks might be popping in homeopathic circles, I want to try and give some perspective to this celebration.

The IF has rightly been criticised so many times, and for so many reasons, that it is no longer generally considered a valuable measure of anything. The main reason is that it can be (and is being) manipulated in numerous ways. But even if we accept the IF as a meaningful parameter, we must ask what an IF of 1.5 means and how it compares to the IFs of other medical journals.

Here are the 2016/2017 IFs of some general and specialised medical journals readers of this blog might know:

Annals Int Med: 17.135
BMJ: 20.785
Circulation: 19.309
Diabetes Care: 11.857
Gastroenterology: 18.392
Gut: 16.658
J Clin Oncol: 24.008
Lancet: 47.831
Nature Medicine: 29.886
PLoS Medicine: 11.862
Trends Pharm Sci: 12.797

This selection seems to indicate that an IF of 1.5 is modest, to say the least. In turn, this means that the above press-release is perhaps just a little bit on the hypertrophic side.

But, of course, it’s all about homeopathy where, as we all know, LESS IS MORE!

Is homeopathy effective for specific conditions? The FACULTY OF HOMEOPATHY (FoH, the professional organisation of UK doctor homeopaths) say YES. In support of this bold statement, they cite a total of 35 systematic reviews of homeopathy with a focus on specific clinical areas. “Nine of these 35 reviews presented conclusions that were positive for homeopathy”, they claim. Here they are:

Allergies and upper respiratory tract infections 8,9
Childhood diarrhoea 10
Post-operative ileus 11
Rheumatic diseases 12
Seasonal allergic rhinitis (hay fever) 13–15
Vertigo 16

And here are the references (I took the liberty of adding my comments in bold):

8. Bornhöft G, Wolf U, Ammon K, et al. Effectiveness, safety and cost-effectiveness of homeopathy in general practice – summarized health technology assessment. Forschende Komplementärmedizin, 2006; 13 Suppl 2: 19–29.

This is the infamous ‘Swiss report‘ which, nowadays, only homeopaths take seriously.

9. Bellavite P, Ortolani R, Pontarollo F, et al. Immunology and homeopathy. 4. Clinical studies – Part 1. Evidence-based Complementary and Alternative Medicine: eCAM, 2006; 3: 293–301.

This is not a systematic review as it lacks any critical assessment of the primary data and includes observational studies and even case series.

10. Jacobs J, Jonas WB, Jimenez-Perez M, Crothers D. Homeopathy for childhood diarrhea: combined results and metaanalysis from three randomized, controlled clinical trials. Pediatric Infectious Disease Journal, 2003; 22: 229–234.

This is a meta-analysis by Jennifer Jacobs (who recently featured on this blog) of 3 studies by Jennifer Jacobs; hardly convincing, I’d say.

11. Barnes J, Resch K-L, Ernst E. Homeopathy for postoperative ileus? A meta-analysis. Journal of Clinical Gastroenterology, 1997; 25: 628–633.

This is my own paper! It concluded that “several caveats preclude a definitive judgment.”

12. Jonas WB, Linde K, Ramirez G. Homeopathy and rheumatic disease. Rheumatic Disease Clinics of North America, 2000; 26: 117–123.

This is not a systematic review; here is the (unabridged) abstract:

Despite a growing interest in uncovering the basic mechanisms of arthritis, medical treatment remains symptomatic. Current medical treatments do not consistently halt the long-term progression of these diseases, and surgery may still be needed to restore mechanical function in large joints. Patients with rheumatic syndromes often seek alternative therapies, with homeopathy being one of the most frequent. Homeopathy is one of the most frequently used complementary therapies worldwide.

Proper systematic reviews fail to show that homeopathy is an effective treatment for rheumatic conditions (see for instance here and here).

13. Wiesenauer M, Lüdtke R. A meta-analysis of the homeopathic treatment of pollinosis with Galphimia glauca. Forschende Komplementärmedizin und Klassische Naturheilkunde, 1996; 3: 230–236.

This is a meta-analysis by Wiesenauer of trials conducted by Wiesenauer.

My own, more recent analysis of these data arrived at a considerably less favourable conclusion: “… three of the four currently available placebo-controlled RCTs of homeopathic Galphimia glauca (GG) suggest this therapy is an effective symptomatic treatment for hay fever. There are, however, important caveats. Most essentially, independent replication would be required before GG can be considered for the routine treatment of hay fever.” (Focus on Alternative and Complementary Therapies September 2011 16(3))

14. Taylor MA, Reilly D, Llewellyn-Jones RH, et al. Randomised controlled trials of homoeopathy versus placebo in perennial allergic rhinitis with overview of four trial series. British Medical Journal, 2000; 321: 471–476.

This is a meta-analysis by David Reilly of 4 RCTs which were all conducted by David Reilly. This attracted heavy criticism; see here and here, for instance.

15. Bellavite P, Ortolani R, Pontarollo F, et al. Immunology and homeopathy. 4. Clinical studies – Part 2. Evidence-based Complementary and Alternative Medicine: eCAM, 2006; 3: 397–409.

This is not a systematic review as it lacks any critical assessment of the primary data and includes observational studies and even case series.

16. Schneider B, Klein P, Weiser M. Treatment of vertigo with a homeopathic complex remedy compared with usual treatments: a meta-analysis of clinical trials. Arzneimittelforschung, 2005; 55: 23–29.

This is a meta-analysis of 2 (!) RCTs and 2 observational studies of ‘Vertigoheel’, a preparation which is not a homeopathic but a homotoxicologic remedy (it does not follow the ‘like cures like’ assumption of homeopathy). Moreover, this product contains pharmacologically active substances (and nobody doubts that active substances can have effects).

________________________________________________________________________________

So, positive evidence from 9 systematic reviews in 6 specific clinical areas?

I let you answer this question.

Shiatsu is an alternative therapy that is popular but has so far attracted almost no research. Therefore, I was excited when I saw a new paper on the subject. Sadly, my excitement waned quickly when I started reading the abstract.

This single-blind randomized controlled study aimed to evaluate the effect of Shiatsu on mood, cognition, and functional independence in patients undergoing physical activity. Alzheimer disease (AD) patients with depression were randomly assigned to the “active group” (Shiatsu + physical activity) or the “control group” (physical activity alone).

Shiatsu was performed by the same therapist once a week for ten months. Global cognitive functioning (Mini Mental State Examination – MMSE), depressive symptoms (Geriatric Depression Scale – GDS), and functional status (Activity of Daily Living – ADL, Instrumental ADL – IADL) were assessed before and after the intervention.

The researchers found a within-group improvement of MMSE, ADL, and GDS in the Shiatsu group. However, the analysis of differences before and after the interventions showed a statistically significant decrease of GDS score only in the Shiatsu group.

The authors concluded that the combination of Shiatsu and physical activity improved depression in AD patients compared to physical activity alone. The pathomechanism might involve neuroendocrine-mediated effects of Shiatsu on neural circuits implicated in mood and affect regulation.

The Journal Complementary Therapies in Medicine also published three ‘Highlights’ of this study:

  • We first evaluated the effect of Shiatsu in depressed patients with Alzheimer’s disease (AD).
  • Shiatsu significantly reduced depression in a sample of mild-to-moderate AD patients.
  • Neuroendocrine-mediated effect of Shiatsu may modulate mood and affect neural circuits.

Where to begin?

1 The study is called a ‘pilot’. As such it should not draw conclusions about the effectiveness of Shiatsu.

2 The design of the study was such that there was no accounting for the placebo effect (the often-discussed ‘A+B vs B’ design); therefore, it is impossible to attribute the observed outcome to Shiatsu. The ‘highlight’ – Shiatsu significantly reduced depression in a sample of mild-to-moderate AD patients – therefore turns out to be a low-light.
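The ‘A+B vs B’ problem can be made concrete with a toy simulation: even when the add-on therapy (A) has zero specific effect, any non-specific response to the extra attention shows up as an apparent treatment effect. All numbers below are invented for illustration.

```python
import random
import statistics

random.seed(7)

# 'B' = physical activity alone; 'A+B' = physical activity plus an add-on
# therapy whose SPECIFIC effect is zero, but which triggers a non-specific
# (placebo/attention) response of 2 points on a hypothetical depression scale.
nonspecific_response = 2.0
n = 200

b_only = [random.gauss(0.0, 3.0) for _ in range(n)]
a_plus_b = [random.gauss(nonspecific_response, 3.0) for _ in range(n)]

diff = statistics.mean(a_plus_b) - statistics.mean(b_only)
print(f"apparent 'effect' of the add-on therapy: {diff:.2f} points")
```

The design cannot distinguish this non-specific response from a genuine specific effect; only a sham-controlled comparison could do that.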

3 As this was a study with a control group, within-group changes are irrelevant and do not even deserve a mention.

4 The last point, about the mode of action, is pure speculation and not borne out by the data presented.

5 Accumulating so much nonsense in one research paper is, in my view, unethical.

Research into alternative medicine does not have a good reputation – studies like this one are not inclined to improve it.

Forgive me, if this post is long and a bit tedious, but I think it is important.

The claims continue that I am a dishonest falsifier of scientific data, because the renowned Prof R Hahn said so; this, for instance, is from a Tweet that appeared a few days ago:

False claims, Edzard Ernst is the worst. Says independent researcher prof Hahn in his blog. His study: https://www.ncbi.nlm.nih.gov/pubmed/24200828 
His blog (German translation) http://www.homeopathy.at/betruegerische-studien-um-homoeopathie-als-wirkungslos-darzustellen…

The source of this permanent flow of defamations is Hahn’s strange article which I have tried to explain several times before. As the matter continues to excite homeopaths around the world, I have decided to give it another go. The following section (in bold) is directly copied from Hahn’s infamous paper where he evaluated several systematic reviews of homeopathy.

_________________________________________________________________________

In 1998, he [Ernst] selected 5 studies using highly diluted remedies from the original 89 and concluded that homeopathy has no effect [5].

In 2000, Ernst and Pittler [6] sought to invalidate the statistically significant superiority of homeopathy over placebo in the 10 studies with the highest Jadad score. The odds ratio, as presented by Linde et al. in 1999 [3], was 2.00 (1.37–2.91). The new argument was that the Jadad score and odds ratio in favor of homeopathy seemed to follow a straight line (in fact, it is asymptotic at both ends). Hence, Ernst and Pittler [6] claimed that the highest Jadad scores should theoretically show zero effect. This reasoning argued that the assumed data are more correct than the real data.

Two years later, Ernst [7] summarized the systematic reviews of homeopathy published in the wake of Linde’s first metaanalysis [2]. To support the view that homeopathy lacks effect, Ernst cited his own publications from 1998 and 2000 [5, 6]. He also presented Linde’s 2 follow-up reports [3, 4] as being further evidence that homeopathy equals placebo. 

_________________________________________________________________________

And that’s it! Except for some snide remarks (copied below) in the discussion section of the article, this is all Hahn has to say about my publications on homeopathy; in other words, he selects 3 of my papers (references are copied below) and (without understanding them, as we will see) vaguely discusses them. In my view, that is remarkable in 3 ways:

  • firstly, I have published about 100 more papers on homeopathy which Hahn ignores (even though he knows about them, as we shall see below);
  • secondly, he does not explain why he selected those 3 and not any others;
  • thirdly, he totally misrepresents all 3 articles that he has selected.

In the following, I will elaborate on the last point in more detail (anyone capable of running a Medline search and reading Hahn’s article can verify the other points). I will do this by repeating what Hahn states about each of the 3 papers (in bold print), and then explain what each article truly was about.

HERE WE GO

_________________________________________________________________________

FIRST ARTICLE

In 1998, he [Ernst] selected 5 studies using highly diluted remedies from the original 89 and concluded that homeopathy has no effect [5].

This paper [ref 5] was a re-analysis of the Linde Lancet meta-analysis (unfortunately, it is not available electronically, but I can send copies to interested parties). For this purpose, I excluded all studies that

  • did not use homeopathy following the ‘like cures like’ assumption (arguably such studies are not trials of homeopathy at all),
  • used remedies that were not highly diluted and thus contained active molecules (nobody doubts that remedies with pharmacologically active substances can have effects),
  • did not receive the highest rating for methodological quality from Linde et al (flawed trials are known to produce false-positive results).

My methodology was (I think) reasonable, pre-determined and explained in full detail in the article. It left me with 5 placebo-controlled RCTs. A meta-analysis across these 5 trials showed no difference to placebo.

Hahn misrepresents this paper by firstly not explaining what methodology I applied, and secondly by stating that I ‘selected’ the 5 studies from a pool of 89 trials. Yet, I defined my inclusion criteria which were met by just 5 studies.

___________________________________________________________________________

SECOND ARTICLE

In 2000, Ernst and Pittler [6] sought to invalidate the statistically significant superiority of homeopathy over placebo in the 10 studies with the highest Jadad score. The odds ratio, as presented by Linde et al. in 1999 [3], was 2.00 (1.37–2.91). The new argument was that the Jadad score and odds ratio in favor of homeopathy seemed to follow a straight line (in fact, it is asymptotic at both ends). Hence, Ernst and Pittler [6] claimed that the highest Jadad scores should theoretically show zero effect. This reasoning argued that the assumed data are more correct than the real data.

The first thing to notice here is that Hahn alleges we had ‘sought to invalidate’. How can he know that? The fact is that we were simply trying to discover something new in the pool of data. The paper he refers to here has been discussed before on this blog. Here is what I stated:

This was a short ‘letter to the editor’ by Ernst and Pittler published in the J Clin Epidemiol commenting on the above-mentioned re-analysis by Linde et al which was published in the same journal. As its text is not available on-line, I re-type parts of it here:

In an interesting re-analysis of their meta-analysis of clinical trials of homeopathy, Linde et al conclude that there is no linear relationship between quality scores and study outcome. We have simply re-plotted their data and arrive at a different conclusion. There is an almost perfect correlation between the odds ratio and the Jadad score between the range of 1-4… [some technical explanations follow which I omit]…Linde et al can be seen as the ultimate epidemiological proof that homeopathy is, in fact, a placebo.

Again Hahn’s interpretation of our paper is incorrect and implies that he has not understood what we actually intended to do here.

_____________________________________________________________________________

THIRD ARTICLE

Two years later, Ernst [7] summarized the systematic reviews of homeopathy published in the wake of Linde’s first metaanalysis [2]. To support the view that homeopathy lacks effect, Ernst cited his own publications from 1998 and 2000 [5, 6]. He also presented Linde’s 2 follow-up reports [3, 4] as being further evidence that homeopathy equals placebo. 

Again, Hahn assumes my aim in publishing this paper (the only one of the 3 papers that is available as full text on-line): ‘to support the view that homeopathy lacks effect’. He does so despite the fact that the paper states my aim very clearly: ‘This article is an attempt to critically evaluate all such papers published since 1997 with a view to defining the clinical effectiveness of homeopathic medicines.‘ This discloses perhaps better than anything else that Hahn’s article is not evidence-based but opinion-based, and not objective but polemical.

Hahn then seems to resent that I included my own articles. Does he not know that, in a systematic review, one has to include ALL relevant papers? Hahn also seems to imply that I merely included a few papers in my systematic review. In fact, I included all 17 reviews that were available at the time. It might also be worth mentioning that numerous subsequent and independent analyses employing methodologies similar to mine arrived at the same conclusions as my review.

_____________________________________________________________________________

Despite Hahn’s overtly misleading statements, he offers little real critique of my work. Certainly Hahn does not state that I made any major mistakes in the 3 papers he cites. For his more vitriolic comments, we need to look at the discussion section of his article where he states:

Ideology Plays a Part

Ernst [7] makes conclusions based on assumed data [6] when the true data are at hand [3]. Ernst [7] invalidates a study by Jonas et al. [18] that shows an odds ratio of 2.19 (1.55–3.11) in favor of homeopathy for rheumatic conditions, using the notion that there are not sufficient data for the treatment of any specific condition [6]. However, his review deals with the overall efficacy of homeopathy and not with specific conditions. Ernst [7] still adds this statistically significant result in favor of homeopathy over placebo to his list of arguments of why homeopathy does not work. Such argumentation must be reviewed carefully before being accepted by the reader.

After re-studying all this in detail, I get the impression that Hahn does not understand (or does not want to understand?) the research questions posed, nor the methodologies employed in my 3 articles. He is remarkably selective in choosing just 3 of my papers (his reference No 7 cites many more of my systematic reviews of homeopathy), and he seems determined to get the wrong end of the stick in order to defame me. How he can, based on his ‘analysis’, arrive at the conclusion that “I have never encountered any scientific writer who is so clearly biased (biased) as this Edzard Ernst” is totally beyond reason.

In one point, however, Hahn seems to be correct: IDEOLOGY PLAYS A PART (NOT IN MY BUT IN HIS EVALUATION).

_____________________________________________________________________________

REFERENCES AS CITED IN HAHN’S ARTICLE

5 Ernst E: Are highly dilute homeopathic remedies placebos? Perfusion 1998;11:291.

6 Ernst E, Pittler MH: Re-analysis of previous metaanalysis of clinical trials of homeopathy. J Clin Epidemiol 2000;53:1188.

7 Ernst E: A systematic review of systematic reviews of homeopathy. Br J Clin Pharmacol 2002;54:577–582.

______________________________________________________________________________

For more information about Hahn, please see two comments on my previous post (by Björn Geir who understands Hahn’s native language).

This is also where you can find the only comment by Hahn that I am aware of:
Robert Hahn on Saturday 17 September 2016 at 09:50

Somebody alerted me on this website. Dr. Ernst spends most of his effort to reply to my article in Forsch Komplemetmed 2013; 20: 376-381 by discussing who I might be as a person. I hoped to see more effort being put on scientific reasoning.

1. For the scientific part: my experience in scientific reasoning of quite long and extensive. I am the most widely published Swede in the area of anesthesia and intensive care ever. Those who doubt this can look “Hahn RG” on PubMed.

2. For the religious part that, in my mind, has nothing to do with this topic, is that my wife developed a spiritualistic ability in the mid 1990:s which I have explored in four books published in Swedish between 1997 and 2007. I became convinced that much of this is true, but not all. The books reflect interviews with my wife and what happened in our family during that time. Almost half of all Swedes believe in afterlife and in the existence of a spiritual world. Dr. Ernsts reasoning is typical of skeptics, namely that a person with a known religious belief in not to trust – i.e. a person cannot have two sides, a religious and a scientific. I do not agree with that, but the view has led to that almost no scientist dares to tell his religious beliefs to anyone (which Ernst enforces by his reasoning). Besides, I am not very religious person at all, although the years spent writing these books was quite an interesting period of my life. In particular the last book which involved past-life memories that I had been revived during self-hypnotims. I am interested in exploring many sorts of secrets, not only scientific. But all types of evidence must be judged according to its own rules and laws.

3. Why did I write about homeopathy? The reason is a campaign led by skeptics in some summers ago. Teenagers sat in Swedish television and expressed firmly that “there is not a single publication showing that homeopathy works – nothing!”. I wonder how these young boys could know that, and suspected that had simply been instructed to say so by older skeptics . I looked up the topic on PubMed and soon found some positive papers. Not difficult to find. Had they looked? Surely not. I was a frequent blogger at the time, and wrote three blogs summarizing meta-analyses asking the question whether homeopathy was superior to placebo (disregarding the underlying disease). The response for my readers was impressive and I was eventually urged to write it up in English, which I did. That is the background to my article. I have no other involvement in homeopathy.

4. Me and Dr Ernst. I came across his name when scanning articles about homeopathy, and decided to look a bit deeper into what he had written. The typical scenario was to publish meta-analyses but excluding almost all material, leaving very little (of just a scant part of the literature) to summarize. No wonder there were no significant differences. If there were still significant differences the material was typically considered by him to be still too small or too imprecise or whatever to make any conclusion. This was quite systematic, and I lost trust in Ernst´s writings. This was pure scientific reasoning and has nothing to do with religion or anything else.

// Robert Hahn

_________________________________________________________________________

Lastly, if you need more info about Hahn, you might also want to read this.

The HRI is an innovative international charity created to address the need for high quality scientific research in homeopathy… HRI is dedicated to promoting cutting edge research in homeopathy, using the most rigorous methods available, and communicating the results of such work beyond the usual academic circles… HRI aims to bring academically reliable information to a wide international audience, in an easy to understand form. This audience includes the general public, scientists, healthcare providers, healthcare policy makers, government and the media.

This sounds absolutely brilliant!

I should be a member of the HRI!

For years, I have pursued similar aims!

Hold on, perhaps not?

This article makes me wonder:

START OF QUOTE

… By the end of 2014, 189 randomised controlled trials of homeopathy on 100 different medical conditions had been published in peer-reviewed journals. Of these, 104 papers were placebo-controlled and were eligible for detailed review:
41% were positive (43 trials) – finding that homeopathy was effective
5% were negative (5 trials) – finding that homeopathy was ineffective
54% were inconclusive (56 trials)

How does this compare with evidence for conventional medicine?

An analysis of 1016 systematic reviews of RCTs of conventional medicine had strikingly similar findings:
44% were positive – the treatments were likely to be beneficial
7% were negative – the treatments were likely to be harmful
49% were inconclusive – the evidence did not support either benefit or harm.

END OF QUOTE

The implication here is that the evidence base for homeopathy is strikingly similar to that of real medicine.

Nice try! But sadly it has nothing to do with ‘reliable information’!!!

In fact, it is grossly (and I suspect deliberately) misleading.

Regular readers of this blog will have spotted the reason, because we discussed (part of) it before. Let me remind you:

_______________________________________________________________________________

A clinical trial is a research tool for testing hypotheses; strictly speaking, it tests the ‘null-hypothesis’: “the experimental treatment generates the same outcomes as the treatment of the control group”. If the trial shows no difference between the outcomes of the two groups, the null-hypothesis is confirmed. In this case, we commonly speak of a negative result. If the experimental treatment was better than the control treatment, the null-hypothesis is rejected, and we commonly speak of a positive result. In other words, clinical trials can only generate positive or negative results, because the null-hypothesis must either be confirmed or rejected – there are no grey tones between the black of a negative and the white of a positive study.
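The binary logic of this paragraph can be illustrated with a small, self-contained sketch: a permutation test that either rejects or retains the null-hypothesis. The outcome values below are invented for illustration only.

```python
# Minimal sketch of the null-hypothesis logic: a one-sided permutation test
# comparing two groups. All outcome values are invented for illustration.
import random
from statistics import mean

random.seed(42)
treatment = [3.1, 2.8, 3.5, 2.9, 3.2, 3.0]  # e.g. days of symptoms (hypothetical)
control   = [4.0, 4.2, 3.9, 4.1, 3.8, 4.3]

observed = mean(control) - mean(treatment)
pooled = treatment + control
n = len(treatment)

# How often does random relabelling of patients produce a difference at
# least as large as the one observed?
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[n:]) - mean(pooled[:n]) >= observed:
        hits += 1
p_value = hits / trials

# The trial is either 'positive' (null rejected) or 'negative' (null
# retained); there is no third category.
verdict = "positive" if p_value < 0.05 else "negative"
print(verdict, p_value)
```

Whatever test statistic is used, the decision at the end is dichotomous: the null-hypothesis is either rejected or it is not.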

For enthusiasts of alternative medicine, this can create a dilemma, particularly if there are lots of published studies with negative results. In this case, the totality of the available trial evidence is negative which means the treatment in question cannot be characterised as effective. It goes without saying that such an overall conclusion rubs the proponents of that therapy the wrong way. Consequently, they might look for ways to avoid this scenario.

One fairly obvious way of achieving this aim is to simply re-categorise the results. What, if we invented a new category? What, if we called some of the negative studies by a different name? What about INCONCLUSIVE?

That would be brilliant, wouldn’t it. We might end up with a simple statistic where the majority of the evidence is, after all, positive. And this, of course, would give the impression that the ineffective treatment in question is effective!

How exactly do we do this? We continue to call positive studies POSITIVE; we then call studies where the experimental treatment generated worse results than the control treatment (usually a placebo) NEGATIVE; and finally we call those studies where the experimental treatment created outcomes which were not different from placebo INCONCLUSIVE.

In the realm of alternative medicine, this ‘non-conclusive result’ method has recently become incredibly popular. Take homeopathy, for instance. The Faculty of Homeopathy proudly claim the following about clinical trials of homeopathy: Up to the end of 2011, there have been 164 peer-reviewed papers reporting randomised controlled trials (RCTs) in homeopathy. This represents research in 89 different medical conditions. Of those 164 RCT papers, 71 (43%) were positive, 9 (6%) negative and 80 (49%) non-conclusive.

This misleading nonsense was, of course, warmly received by homeopaths. The British Homeopathic Association, like many other organisations and individuals with an axe to grind lapped up the message and promptly repeated it: The body of evidence that exists shows that much more investigation is required – 43% of all the randomised controlled trials carried out have been positive, 6% negative and 49% inconclusive.

Let’s be clear what has happened here: the true percentage figures seem to show that 43% of studies (mostly of poor quality) suggest a positive result for homeopathy, while 57% of them (on average the ones of better quality) were negative. In other words, the majority of this evidence is negative. If we conducted a proper systematic review of this body of evidence, we would, of course, have to account for the quality of each study, and in this case we would have to conclude that homeopathy is not supported by sound evidence of effectiveness.

The little trick of applying the ‘INCONCLUSIVE’ method has thus turned this overall result upside down: black has become white! No wonder that it is so popular with proponents of all sorts of bogus treatments.
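The effect of the relabelling is easy to demonstrate with the Faculty of Homeopathy's own three counts quoted above (71 positive, 9 negative, 80 'non-conclusive' trials):

```python
# The 'inconclusive' trick in numbers, using the Faculty of Homeopathy
# counts quoted above: 71 positive trials, 9 worse than placebo, 80 no
# different from placebo.
positive, worse_than_placebo, same_as_placebo = 71, 9, 80
total = positive + worse_than_placebo + same_as_placebo

# Spin version: trials where homeopathy merely equalled placebo are parked
# in a separate 'inconclusive' bin, so 'positive' looks like the plurality.
spin = {
    "positive": positive / total,
    "negative": worse_than_placebo / total,
    "inconclusive": same_as_placebo / total,
}

# Null-hypothesis version: a trial that fails to beat placebo is a negative
# trial, so the negatives form the clear majority.
honest = {
    "positive": positive / total,
    "negative": (worse_than_placebo + same_as_placebo) / total,
}

print({k: round(v, 2) for k, v in spin.items()})    # positive looks biggest
print({k: round(v, 2) for k, v in honest.items()})  # negative is the majority
```

The same counts thus yield either "43% positive, only 6% negative" or "56% negative", depending entirely on where the placebo-equivalent trials are filed.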

__________________________________________________________________________________

But one trick is not enough for the HRI! For thoroughly misinforming the public they have a second one up their sleeve.

And that is ‘comparing apples with pears’ – RCTs with systematic reviews, in their case.

In contrast to RCTs, systematic reviews can be (and often are) INCONCLUSIVE. As they evaluate the totality of all RCTs on a given subject, it is possible that some RCTs are positive, while others are negative. When, for example, the number of high-quality, positive studies included in a systematic review is similar to the number of high-quality, negative trials, the overall result of that review would be INCONCLUSIVE. And this is one of the reasons why the findings of systematic reviews cannot be compared in this way to those of RCTs.

I suspect that the people at the HRI know all this. They are not daft! In fact, they are quite clever. But unfortunately, they seem to employ their cleverness not for informing but for misleading their ‘wide international audience’.

Personally, I find our good friend Dana Ullman truly priceless. There are several reasons for that; one is that he is often so exemplarily wrong that it helps me to explain fundamental things more clearly. With a bit of luck, this might enable me to better inform people who might be thinking a bit like Dana. In this sense, our good friend Dana has significant educational value.

Recently, he made this comment:

According to present and former editors of THE LANCET and the NEW ENGLAND JOURNAL OF MEDICINE, “evidence based medicine” can no longer be trusted. There is obviously no irony in Ernst and his ilk “banking” on “evidence” that has no firm footing except their personal belief systems: https://medium.com/@drjasonfung/the-corruption-of-evidence-based-medicine-killing-for-profit-41f2812b8704

Ernst is a fundamentalist whose God is reductionistic science, a 20th century model that has little real meaning today…but this won’t stop the new attacks on me personally…

END OF COMMENT

Where to begin?

Let’s start with some definitions.

  • Evidence is the body of facts that leads to a given conclusion. Because the outcomes of treatments such as homeopathy depend on a multitude of factors, the evidence for or against their effectiveness is best based not on experience but on clinical trials and systematic reviews of clinical trials (this is copied from my book).
  • EBM is the integration of best research evidence with clinical expertise and patient values. It thus rests on three pillars: external evidence, ideally from systematic reviews, the clinician’s experience, and the patient’s preferences (and this is from another book).

Few people would argue that EBM, as it is applied currently, is without fault. Certainly I would not suggest that; I even used to give lectures about the limitations of EBM, and many experts (who are much wiser than I) have written about the many problems with EBM. It is important to note that such criticism demonstrates the strength of modern medicine and not its weakness, as Dana seems to think: it is a sign of a healthy debate aimed at generating progress. And it is noteworthy that internal criticism of this nature is largely absent in alternative medicine.

The criticism of EBM is often focussed on the unreliability of what I called above the ‘best research evidence’. Let me therefore repeat what I wrote about it on this blog in 2012:

… The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.

Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.

Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The over-riding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.

Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.

Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their short-comings, they are far superior to any other method for determining the efficacy of medical interventions.

There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.

Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.

In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.

END OF QUOTE
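The pooling step described in the quote can be sketched as an inverse-variance weighted average of study effect sizes, as used in fixed-effect meta-analysis; the three studies below are hypothetical.

```python
# Hedged sketch of fixed-effect meta-analytic pooling: each study's log odds
# ratio is weighted by the inverse of its variance, so larger, more precise
# studies count for more. The study data are hypothetical.
import math

# (odds ratio, standard error of the log OR) for three hypothetical studies
studies = [(1.8, 0.40), (1.2, 0.25), (0.95, 0.15)]

weights = [1 / se**2 for _, se in studies]
pooled_log_or = sum(w * math.log(or_) for (or_, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_or = math.exp(pooled_log_or)
ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)

print(f"pooled OR = {pooled_or:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# If the confidence interval includes 1.0, the pooled evidence does not
# show superiority over placebo.
```

Note how the small positive study carries far less weight than the large null study, which is exactly why cherry-picking individual trials can mislead while pooling the totality of the evidence does not.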

Other criticism is aimed at the way EBM is currently used (and abused). This criticism is often justified and necessary, and it is again the expression of our efforts to generate progress. EBM is practised by humans; and humans are far from perfect. They can be corrupt, misguided, dishonest, sloppy, negligent, stupid, etc., etc. Sadly, that means that the practice of EBM can have all of these qualities as well. All we can do is to keep on criticising malpractice, educate people, and hope that this might prevent the worst abuses in future.

Dana and many of his fellow SCAMers have a different strategy; they claim that EBM “can no longer be trusted” (interestingly they never tell us what system might be better; eminence-based medicine? experience-based medicine? random-based medicine? Dana-based medicine?).

The claim that EBM can no longer be trusted is clearly not true, counter-productive and unethical; and I suspect they know it.

Why then do they make it?

Because they feel that it entitles them to argue that homeopathy (or any other form of SCAM) cannot be held to EBM-standards. If EBM is unreliable, surely, nobody can ask the ‘Danas of this world’ to provide anything like sound data!!! And that, of course, would be just dandy for business, wouldn’t it?

So, let’s not be deterred or misled by these deliberately destructive people. Their motives are transparent and their arguments are nonsensical. EBM is not flawless, but with our continued efforts it will improve. Or, to repeat something that I have said many times before: EBM is the worst form of healthcare, except for all other known options.

I have said it often, and I say it again: I do like well-conducted systematic reviews; and Cochrane reviews are usually the best, i.e. most transparent, most thorough and least biased. Thus, I was pleased to see a new Cochrane review of acupuncture aimed at assessing the benefits and harms of acupuncture in patients with hip OA.

The authors included randomized controlled trials (RCTs) that compared acupuncture with sham acupuncture, another active treatment, or no specific treatment; and RCTs that evaluated acupuncture as an addition to another treatment. Major outcomes were pain and function at the short term (i.e. < 3 months after randomization) and adverse events.

Six RCTs with 413 participants were included. Four RCTs included only people with OA of the hip, and two included a mix of people with OA of the hip and knee. All RCTs included primarily older participants, with a mean age range from 61 to 67 years, and a mean duration of hip OA pain from two to eight years. Approximately two-thirds of participants were women. Two RCTs compared acupuncture versus sham acupuncture; the other four RCTs were not blinded. All results were evaluated at short term (i.e. four to nine weeks after randomization).

In the two RCTs that compared acupuncture to sham acupuncture, the sham acupuncture control interventions were judged believable, but each sham acupuncture intervention was also judged to have a risk of weak acupuncture-specific effects, due to placement of non-penetrating needles at the correct acupuncture points in one RCT, and the use of penetrating needles not inserted at the correct points in the other RCT. For these two sham-controlled RCTs, the risk of bias was low for all outcomes.

The combined analysis of two sham-controlled RCTs gave moderate quality evidence of little or no effect in reduction in pain for acupuncture relative to sham acupuncture. Due to the small sample sizes in the studies, the confidence interval includes both the possibility of moderate benefit and the possibility of no effect of acupuncture (120 participants; Standardized Mean Difference (SMD) -0.13, (95% Confidence Interval (CI) -0.49 to 0.22); 2.1 points greater improvement with acupuncture compared to sham acupuncture on 100 point scale (i.e., absolute percent change -2.1% (95% CI -7.9% to 3.6%)); relative percent change -4.1% (95% CI -15.6% to 7.0%)). Estimates of effect were similar for function (120 participants; SMD -0.15, (95% CI -0.51 to 0.21)). No pooled estimate, representative of the two sham-controlled RCTs, could be calculated or reported for the quality of life outcome.

The four other RCTs were unblinded comparative effectiveness RCTs, which compared (additional) acupuncture to four different active control treatments. There was low quality evidence that addition of acupuncture to the routine primary care that RCT participants were receiving from their physicians was associated with statistically significant and clinically relevant benefits, compared to the routine primary physician care alone, in pain (1 RCT; 137 participants; mean percent difference -22.9% (95% CI -29.2% to -16.6%); relative percent difference -46.5% (95% CI -59.3% to -33.7%)) and function (mean percent difference -19.0% (95% CI -24.41 to -13.59); relative percent difference -38.6% (95% CI -49.6% to -27.6%)). There was no statistically significant difference for mental quality of life and acupuncture showed a small, significant benefit for physical quality of life.

The effects of acupuncture compared with either advice plus exercise or NSAIDs are uncertain. The authors are also uncertain whether acupuncture plus patient education improves pain, function, and quality of life, when compared to patient education alone.

In general, the overall quality of the evidence for the four comparative effectiveness RCTs was low to very low, mainly due to the potential for biased reporting of patient-assessed outcomes due to lack of blinding and sparse data.

Information on safety was reported in 4 RCTs. Two RCTs reported minor side effects of acupuncture, which were primarily minor bruising, bleeding, or pain at needle insertion sites.

The authors concluded that acupuncture probably has little or no effect in reducing pain or improving function relative to sham acupuncture in people with hip osteoarthritis. Due to the small sample size in the studies, the confidence intervals include both the possibility of moderate benefits and the possibility of no effect of acupuncture. One unblinded trial found that acupuncture as an addition to routine primary physician care was associated with benefits on pain and function. However, these reported benefits are likely due at least partially to RCT participants’ greater expectations of benefit from acupuncture. Possible side effects associated with acupuncture treatment were minor.

This is an excellent review of data that (because of contradictions, methodological limitations, heterogeneity etc.) are not easy to evaluate fairly. The review shows that previous verdicts about acupuncture for osteoarthritis might have been too optimistic. Acupuncture has no or only very small specific therapeutic effects. As we have much better therapeutic options for this condition, it means that acupuncture can no longer be recommended as an effective therapy.

That surely must be big news in the little world of acupuncture!

I have been personally involved in several similar reviews:

In 1997, I concluded that the most rigorous studies suggest that acupuncture is not superior to sham-needling in reducing pain of osteoarthritis: both alleviate symptoms to roughly the same degree.

In 2006, the balance of evidence seemed to have shifted and more positive data had emerged; consequently our review concluded that sham-controlled RCTs suggest specific effects of acupuncture for pain control in patients with peripheral joint OA. Considering its favourable safety profile acupuncture seems an option worthy of consideration particularly for knee OA. Further studies are required particularly for manual or electro-acupuncture in hip OA.

Now, it seems that my initial conclusion of 1997 was more realistic. To me, this is a fascinating highlight of the fact that in EBM we change our minds based on the current best evidence. By contrast, in alternative medicine, as we have often lamented on this blog, minds do not easily change and all too often dogma seems to reign.

The new Cochrane review is important in several ways. Firstly, it affirms an appropriately high standard for such reviews. Secondly, it originates from a research team that has, in the past, been outspokenly pro-acupuncture; it is therefore unlikely that the largely negative findings were due to an anti-acupuncture bias. Thirdly – and most importantly – osteoarthritis has been THE condition for which even critical reviewers had to admit that there was at least some good, positive evidence.

It seems therefore, that yet again a beautiful theory has been slain by an ugly fact.
