
A new study of homeopathic arnica suggests efficacy. How come?

Subjects scheduled for rhinoplasty surgery with nasal bone osteotomies by a single surgeon were prospectively randomized to receive either oral perioperative arnica or placebo in a double-blinded fashion. A commercially available preparation was used which contained 12 capsules: one 500 mg capsule of arnica 1M was given preoperatively on the morning of surgery and two more later that day after surgery. Thereafter, arnica was administered in the 12C potency three times daily for the next 3 days (“C” indicates a 100-fold serial dilution; and “M”, a 1000-fold dilution).
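As an aside, the arithmetic behind such potencies is worth spelling out. The following is a minimal, purely illustrative sketch (the starting mass and molar mass are my assumptions, not figures from the study), contrasting a single 1000-fold dilution with a 12C potency:

```python
# Back-of-the-envelope arithmetic for homeopathic potencies (illustrative only).
# The starting mass and molar mass are assumptions, not data from the study.

AVOGADRO = 6.022e23

def molecules_remaining(start_moles, dilution_factor_per_step, steps):
    """Expected number of molecules of the original substance after serial dilution."""
    return start_moles * AVOGADRO / (dilution_factor_per_step ** steps)

start_moles = 0.5 / 275.0  # assume ~0.5 g of starting material with a molar mass of ~275 g/mol

# "1M" read as a single 1000-fold dilution (the study's own wording):
print(f"{molecules_remaining(start_moles, 1000, 1):.1e}")   # ~1e18 molecules left

# 12C: twelve serial 100-fold dilutions, i.e. a 10^24-fold total dilution:
print(f"{molecules_remaining(start_moles, 100, 12):.1e}")   # ~1e-3, i.e. effectively none
```

On the study’s own definition of “M”, the 1M dose would still contain material amounts of arnica, whereas the 12C doses would not.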

Ecchymosis was measured in digital “three-quarter”-view photographs at three postoperative time points. Each bruise was outlined with Adobe Photoshop and the extent was scaled to a standardized reference card. Cyan, magenta, yellow, black, and luminosity were analyzed in the bruised and control areas to calculate change in intensity.
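For readers wondering what such a colour analysis amounts to in practice, here is a rough sketch of the general idea; the file name and pixel regions are placeholders, and the study itself used Adobe Photoshop rather than a script:

```python
# Rough sketch of the colour-intensity comparison described above (illustrative only).
# The file name and pixel coordinates are placeholders, not details from the study.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("postop_day7.jpg").convert("CMYK"), dtype=float)

bruise  = img[200:400, 300:500]   # placeholder region outlining the bruise
control = img[600:800, 300:500]   # placeholder region of unbruised skin

for i, channel in enumerate(["cyan", "magenta", "yellow", "black"]):
    delta = bruise[..., i].mean() - control[..., i].mean()
    print(f"{channel}: mean intensity difference = {delta:.1f}")

# Luminosity can be approximated analogously from a greyscale conversion of the same regions.
```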

Compared with the 13 subjects receiving placebo, the 9 taking arnica had 16.2%, 32.9%, and 20.4% less ecchymosis extent on postoperative days 2/3, 7, and 9/10 respectively, a difference that was statistically significant on day 7. Color change initially showed a 13.1% increase in intensity with arnica, but 10.9% and 36.3% decreases on days 7 and 9/10, a difference that was statistically significant on day 9/10. One subject experienced mild itching and a rash with the study drug, which resolved during the study period.

The authors concluded that Arnica montana seems to accelerate postoperative healing, with quicker resolution of the extent and the intensity of ecchymosis after osteotomies in rhinoplasty surgery, which may dramatically affect patient satisfaction.

Why are the results positive, when previous systematic reviews confirm that homeopathic arnica is a pure placebo? At first, I thought the answer might lie in the 1M potency, which could well still contain active molecules. But then I realised that the answer is much simpler: if we apply the conventional level of statistical significance, there are no statistically significant differences from placebo at all! I had not noticed the authors’ little sentence: “a P value of 0.1 was set as a meaningful difference with statistical significance”. In fact, none of the effects called significant by the authors pass the conventionally used probability level of 5%.
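To make the point about thresholds concrete, here is a trivial sketch; the p-value in it is hypothetical, not one of the paper’s actual values:

```python
# Trivial illustration of how the chosen alpha changes the verdict.
# The p-value below is hypothetical, not taken from the arnica paper.
def verdict(p_value, alpha):
    return "statistically significant" if p_value < alpha else "not significant"

p = 0.08  # a hypothetical between-group comparison
print(verdict(p, alpha=0.10))  # "statistically significant" - the authors' threshold
print(verdict(p, alpha=0.05))  # "not significant"           - the conventional threshold
```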

So, what do the results of this new study truly mean? In my view, they show what was known all along: HOMEOPATHIC REMEDIES ARE PLACEBOS.

A recent comment on a post of mine (by a well-known and experienced German alt med researcher) made the following bold statement aimed directly at me and at my apparent lack of understanding of research methodology:

C´mon , as researcher you should know the difference between efficacy and effectiveness. This is pharmacological basic knowledge. Specific (efficacy) + nonspecific effects = effectiveness. And, in fact, everything can be effective – because of non-specific or placebo-like effects. That does not mean that efficacy is existent.

The point he wanted to make is that outcome studies – studies without a control group in which researchers simply observe the outcomes of a particular treatment in a ‘real life’ situation – suffice to demonstrate the effectiveness of therapeutic interventions. This belief is very widespread in alternative medicine and tends to mislead all concerned. It is therefore worth re-visiting this issue here in an attempt to create some clarity.

When a patient’s condition improves after receiving a therapy, it is very tempting to feel that this improvement reflects the effectiveness of the intervention (as the researcher mentioned above obviously does). Tempting but wrong: there are many other factors involved as well, for instance:

  • the placebo effect (mainly based on conditioning and expectation),
  • the therapeutic relationship with the clinician (empathy, compassion etc.),
  • the regression towards the mean (outliers tend to return to the mean value),
  • the natural history of the patient’s condition (most conditions get better even without treatment),
  • social desirability (patients tend to say they are better to please their friendly clinician),
  • concomitant treatments (patients often use treatments other than the prescribed one without telling their clinician).

So, how does this fit into the statement above ‘Specific (efficacy) + nonspecific effects = effectiveness’? Even if this formula were correct, it would not mean that outcome studies of the nature described demonstrate the effectiveness of a therapy. It all depends, of course, on what we call ‘non-specific’ effects. We all agree that placebo-effects belong to this category. Probably, most experts also would include the therapeutic relationship and the regression towards the mean under this umbrella. But the last three points from my list are clearly not non-specific effects of the therapy; they are therapy-independent determinants of the clinical outcome.
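A simple simulation illustrates how these therapy-independent factors conjure up apparent ‘effectiveness’ out of nothing. All the numbers below are invented; the point is purely the combination of a selection effect with natural improvement:

```python
# Invented-numbers simulation: patients who feel unusually bad at baseline "improve"
# at follow-up even though the treatment has zero specific effect. The improvement
# is driven entirely by regression towards the mean plus natural history.
import random

random.seed(1)
improvements = []

for _ in range(10_000):
    true_severity = random.gauss(50, 10)                     # underlying condition
    baseline      = true_severity + random.gauss(0, 10)      # noisy symptom score on day 0
    follow_up     = true_severity + random.gauss(0, 10) - 5  # natural history: mild improvement for all
    if baseline > 60:                                        # only the "worst" patients seek treatment
        improvements.append(baseline - follow_up)

print(f"Average 'improvement' without any effective treatment: "
      f"{sum(improvements) / len(improvements):.1f} points")
```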

The most important factor here is usually the natural history of the disease. Some people find it hard to imagine what this term actually means. Here is a little joke which, I hope, will make its meaning clear and memorable.

CONVERSATION BETWEEN TWO HOSPITAL DOCTORS:

Doc A: The patient from room 12 is much better today.

Doc B: Yes, we started his treatment just in time; a day later and he would have been cured without it!

I am sure that most of my readers now understand (and never forget) that clinical improvement cannot be equated with the effectiveness of the treatment administered (they might thus be immune to the misleading messages they are constantly exposed to). Yet, I am not at all sure that all ‘alternativists’ have got it.

In my last post, I claimed that researchers of alternative medicine tend to be less than rigorous. I did not link this statement to any evidence at all. Perhaps I should have at least provided an example!? As it happens, I just came across a brand new paper which nicely demonstrates what I meant.

According to its authors, this non-interventional study was performed to generate data on the safety and treatment effects of a complex homeopathic drug. They treated 1050 outpatients suffering from the common cold with a commercially available homeopathic remedy for 8 days. The study was conducted in 64 German outpatient practices of medical doctors trained in CAM. Tolerability, compliance and the treatment effects were assessed by the physicians and by patient diaries. Adverse events were collected and assessed with specific attention to homeopathic aggravation and proving symptoms. Each adverse event was additionally evaluated by an advisory board of experts.

The physicians detected 60 adverse events from 46 patients (4.4%). Adverse drug reactions occurred in 14 patients (1.3%). Six patients showed proving symptoms (0.57%) and only one homeopathic aggravation (0.1%) appeared. The rate of compliance was 84% for all groups. The global assessment of the treatment effects resulted in the verdict “good” and “very good” in 84.9% of all patients.

The authors concluded that the homeopathic complex drug was shown to be safe and effective for children and adults likewise. Adverse reactions specifically related to homeopathic principles are very rare. All observed events recovered quickly and were of mild to moderate intensity.

So why do I think this is ‘positively barmy’?

The study had no control group. This means that there is no way anyone can attribute the observed ‘treatment effects’ to the homeopathic remedy. There are many other phenomena that may have caused or contributed to them, e.g.:

  • a placebo effect
  • the natural history of the condition
  • regression to the mean
  • other treatments which the patients took but did not declare
  • the empathic encounter with the physician
  • social desirability

To plan a study with the aim as stated above and to draw the conclusion as cited above is naïve and unprofessional (to say the least) on the part of the researchers (I often wonder where, in such cases, the boundary between incompetence and research misconduct might lie). To pass such a paper through the peer review process is negligent on the part of the reviewers. To publish the article is irresponsible on the part of the editor.

In a nutshell: COLLECTIVELY, THIS IS ‘POSITIVELY BARMY’!!!

Distant healing is one of the most bizarre yet popular forms of alternative medicine. Healers claim they can transmit ‘healing energy’ towards patients to enable them to heal themselves. There have been many trials testing the effectiveness of the method, and the general consensus amongst critical thinkers is that all variations of ‘energy healing’ rely entirely on a placebo response. A recent and widely publicised paper seems to challenge this view.

This article has, according to its authors, two aims. Firstly, it reviews healing studies that involved biological systems other than ‘whole’ humans (e.g., studies of plants or cell cultures), which are less susceptible to placebo-like effects. Secondly, it presents a systematic review of clinical trials on human patients receiving distant healing.

All the included studies examined the effects upon a biological system of the explicit intention to improve the wellbeing of that target; 49 non-whole human studies and 57 whole human studies were included.

The combined weighted effect size for non-whole human studies yielded a highly significant (r = 0.258) result in favour of distant healing. However, outcomes were heterogeneous and correlated with blind ratings of study quality; 22 studies that met minimum quality thresholds gave a reduced but still significant weighted r of 0.115.

Whole human studies yielded a small but significant effect size of r = 0.203. Outcomes were again heterogeneous and correlated with methodological quality ratings; the 27 studies that met threshold quality levels gave an r of 0.224.
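For readers unfamiliar with how such ‘combined weighted effect sizes’ are typically calculated, here is a generic fixed-effect sketch using Fisher’s z-transformation. The correlations and sample sizes are invented, not the review’s data, and the authors’ exact weighting scheme may well differ:

```python
# Generic fixed-effect pooling of correlation coefficients via Fisher's z-transformation.
# The (r, n) pairs are invented for illustration and are not the data from the review.
import math

studies = [(0.30, 40), (0.10, 120), (0.25, 60)]  # (correlation, sample size) - hypothetical

z_vals  = [0.5 * math.log((1 + r) / (1 - r)) for r, _ in studies]  # Fisher z for each study
weights = [n - 3 for _, n in studies]                              # inverse-variance weights

z_bar = sum(w * z for w, z in zip(weights, z_vals)) / sum(weights)
r_bar = math.tanh(z_bar)                                           # back-transform to r
se    = 1 / math.sqrt(sum(weights))

print(f"combined r = {r_bar:.3f}")
print(f"95% CI (on the z scale): {z_bar - 1.96 * se:.3f} to {z_bar + 1.96 * se:.3f}")
```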

From these findings, the authors drew the following conclusions: Results suggest that subjects in the active condition exhibit a significant improvement in wellbeing relative to control subjects under circumstances that do not seem to be susceptible to placebo and expectancy effects. Findings with the whole human database suggests that the effect is not dependent upon the previous inclusion of suspect studies and is robust enough to accommodate some high profile failures to replicate. Both databases show problems with heterogeneity and with study quality and recommendations are made for necessary standards for future replication attempts.

In a press release, the authors warned: the data need to be treated with some caution in view of the poor quality of many studies and the negative publishing bias; however, our results do show a significant effect of healing intention on both human and non-human living systems (where expectation and placebo effects cannot be the cause), indicating that healing intention can be of value.

My thoughts on this article are not very complimentary, I am afraid. The problems are, it seems to me, too numerous to discuss in detail:

  • The article is written such that it is exceedingly difficult to make sense of it.
  • It was published in a journal which is not exactly known for its cutting edge science; this may seem a petty point but I think it is nevertheless important: if distant healing works, we are confronted with a revolution in the understanding of nature – and surely such a finding should not be buried in a journal that hardly anyone reads.
  • The authors seem embarrassingly inexperienced in conducting and publishing systematic reviews.
  • There is very little (self-) critical input in the write-up.
  • A critical attitude is necessary, as the primary studies tend to be conducted by evangelical believers in, and amateur enthusiasts of, healing.
  • The article has no data table where the reader might learn the details about the primary studies included in the review.
  • It also has no table to inform us in sufficient detail about the quality assessment of the included trials.
  • It seems to me that some published studies of distant healing are missing.
  • The authors ignored all studies that were not published in English.
  • The method section lacks detail, and it would therefore be impossible to conduct an independent replication.
  • Even if one ignored all the above problems, the effect sizes are small and would not be clinically important.
  • The research was sponsored by the ‘Confederation of Healing Organisations’ and some of the comments look as though the sponsor had a strong influence on the phraseology of the article.

Given these reservations, my conclusion from an analysis of the primary studies of distant healing would be dramatically different from the one published by the authors: DESPITE A SIZABLE NUMBER OF PRIMARY STUDIES ON THE SUBJECT, THE EFFECTIVENESS OF DISTANT HEALING REMAINS UNPROVEN. AS THIS THERAPY IS DEVOID OF ANY BIOLOGICAL PLAUSIBILITY, FURTHER RESEARCH IN THIS AREA SEEMS NOT WARRANTED.

Twenty years ago, I published a short article in the British Journal of Rheumatology. Its title was ALTERNATIVE MEDICINE, THE BABY AND THE BATH WATER. Reading it again today – especially in the light of the recent debate (with over 700 comments) on acupuncture – indicates to me that very little has since changed in the discussions about alternative medicine (AM). Does that mean we are going around in circles? Here is the (slightly abbreviated) article from 1995 for you to judge for yourself:

“Proponents of alternative medicine (AM) criticize the attempt to conduct RCTs because they view it as analogous to ‘throwing out the baby with the bath water’. The argument usually goes as follows: the growing popularity of AM shows that individuals like it and, in some way, they benefit through using it. Therefore it is best to let them have it regardless of its objective effectiveness. Attempts to prove or disprove effectiveness may even be counterproductive. Should RCTs prove that a given intervention is not superior to a placebo, one might stop using it. This, in turn, would be to the disadvantage of the patient who, previous to rigorous research, has unquestionably been helped by the very remedy. Similar criticism merely states that AM is ‘so different, so subjective, so sensitive that it cannot be investigated in the same way as mainstream medicine’. Others see reasons to change the scientific (‘reductionist’) research paradigm into a broad ‘philosophical’ approach. Yet others reject RCTs because they think that ‘this method assumes that every person has the same problems and there are similar causative factors’.

The example of acupuncture as a (popular) treatment for osteoarthritis demonstrates the validity of such arguments and counter-arguments. A search of the world literature identified only two RCTs on the subject. When acupuncture was tested against no treatment, the experimental group of osteoarthritis sufferers reported a 23% decrease of pain, while the controls suffered a 12% increase. On the basis of this result, it might seem highly unethical to withhold acupuncture from pain-stricken patients—’if a patient feels better for whatever reason and there are no toxic side effects, then the patient should have the right to get help’.

But what about the placebo effect? It is notoriously difficult to find a placebo indistinguishable from acupuncture which would allow patient-blinded studies. Needling non-acupuncture points may be as close as one can get to an acceptable placebo. When patients with osteoarthritis were randomized to receive either ‘real’ acupuncture or this type of sham acupuncture, both sub-groups showed the same pain relief.

These findings (similar results have been published for other AMs) are compatible with only two explanations. Firstly, acupuncture might be a powerful placebo. If this were true, we need to establish how safe acupuncture is (clearly it is not without potential harm); if the risk/benefit ratio is favourable and no specific, effective form of therapy exists, one might still consider employing this form as a ‘placebo therapy’ for easing the pain of osteoarthritis sufferers. One would also feel motivated to research this powerful placebo and identify its characteristics or modalities with the aim of using the knowledge thus generated to help future patients.

Secondly, it could be the needling, regardless of acupuncture points and philosophy, that decreases pain. If this were true, we could henceforward use needling for pain relief—no special training in or equipment for acupuncture would be required, and costs would therefore be markedly reduced. In addition, this knowledge would lead us to further our understanding of basic mechanisms of pain reduction which, one day, might evolve into more effective analgesia. In any case the published research data, confusing as they often are, do not call for a change of paradigm; they only require more RCTs to solve the unanswered problems.

Conducting rigorous research is therefore by no means likely to ‘throw out the baby with the bath water’. The concept that such research could harm the patient is wrong and anti-scientific. To follow its implications would mean neglecting the ‘baby in the bath water’ until it suffers serious damage. To conduct proper research means attending the ‘baby’ and making sure that it is safe and well.

Reflexology is the treatment of reflex zones, usually on the soles of the feet, with manual massage and pressure. Reflexologists assume that certain zones correspond to certain organs and that treating them can influence the function of these organs. Thus reflexology is advocated for all sorts of conditions. Proponents are keen to point out that their approach has many advantages: it is pleasant (the patient feels well with the treatment, and the therapist feels even better with the money), safe and cheap, particularly if the patient does the treatment herself.

Self-administered foot reflexology could be practical because it is easy to learn and not difficult to apply. But is it also effective? A recent systematic review evaluated the effectiveness of self-foot reflexology for symptom management.

Participants were healthy persons not diagnosed with a specific disease. The intervention was foot reflexology administered by the participants themselves, not by practitioners or healthcare providers. Studies with either between-group or within-group comparisons were included. The electronic literature searches covered core databases (MEDLINE, EMBASE, Cochrane, and CINAHL), Chinese (CNKI), Japanese (J-STAGE), and Korean databases (KoreaMed, KMbase, KISS, NDSL, KISTI, and OASIS).

Three non-randomized trials and three before-and-after studies met the inclusion criteria; no RCT was located. The results of these studies showed that self-administered foot reflexology resulted in significant improvements in subjective outcomes such as perceived stress, fatigue, and depression. However, there was no significant improvement in objective outcomes such as cortisol levels, blood pressure, and pulse rate.

The authors concluded that this study presents the effectiveness of self-administered foot reflexology for healthy persons’ psychological and physiological symptoms. While objective outcomes showed limited results, significant improvements were found in subjective outcomes. However, owing to the small number of studies and methodological flaws, there was insufficient evidence supporting the use of self-performed foot reflexology. Well-designed randomized controlled trials are needed to assess the effect of self-administered foot reflexology in healthy people.

I find this review quite interesting, but I would draw very different conclusions from its findings.

The studies that are available turned out to be of very poor methodological quality: they lack randomisation or rely on before/after comparisons. This means they are wide open to bias and false-positive results, particularly with regard to subjective outcome measures. Predictably, the findings of this review confirm that no effects are seen on objective endpoints. This is in perfect agreement with the hypothesis that reflexology is a pure placebo. Considering the biological implausibility of reflexology’s underlying assumptions, this makes sense.

My conclusion from this review would therefore be as follows: THE RESULTS ARE IN KEEPING WITH REFLEXOLOGY BEING A PURE PLACEBO.

The discussion of whether acupuncture is more than a placebo is as long-running as it is heated. Crucially, it is also quite tedious, tiresome and unproductive, not least because no resolution seems to be in sight. Whenever researchers develop an apparently credible placebo and the results of clinical trials are not what acupuncturists had hoped for, the therapists claim that the placebo is, after all, not inert and that the negative findings must be due to the fact that both placebo and real acupuncture are effective.

Laser acupuncture (acupoint stimulation not with needle-insertion but with laser light) offers a possible way out of this dilemma. It is relatively easy to make a placebo laser that looks convincing to all parties concerned but is a pure and inert placebo. Many trials have been conducted following this concept, and it is therefore highly relevant to ask what the totality of this evidence suggests.

A recent systematic review did just that; specifically, it aimed to evaluate the effects of laser acupuncture on pain and functional outcomes when it is used to treat musculoskeletal disorders.

Extensive literature searches were used to identify all RCTs employing laser acupuncture. A meta-analysis was performed by calculating the standardized mean differences and 95% confidence intervals, to evaluate the effect of laser acupuncture on pain and functional outcomes. Included studies were assessed in terms of their methodological quality and appropriateness of laser parameters.
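As a reminder of what such a ‘standardized mean difference’ is, here is a minimal sketch of Cohen’s d with an approximate 95% confidence interval; all the summary statistics in it are hypothetical:

```python
# Minimal sketch of a standardized mean difference (Cohen's d) with an approximate
# 95% confidence interval - the kind of per-study statistic pooled in such a
# meta-analysis. All summary statistics below are hypothetical.
import math

def cohens_d_with_ci(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))  # common large-sample approximation
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical pain scores: laser acupuncture group vs placebo-laser group
d, (lo, hi) = cohens_d_with_ci(mean1=3.1, sd1=2.0, n1=30, mean2=4.2, sd2=2.1, n2=30)
print(f"SMD = {d:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```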

Forty-nine RCTs met the inclusion criteria. Two-thirds (31/49) of these studies reported positive effects. All of the positive studies were rated as being of high methodological quality and included sufficient detail about the lasers used; negative or inconclusive studies mostly failed to demonstrate these features. For all diagnostic subgroups, positive effects for both pain and functional outcomes were seen more consistently at long-term follow-up than immediately after treatment.

The authors concluded that moderate-quality evidence supports the effectiveness of laser acupuncture in managing musculoskeletal pain when applied in an appropriate treatment dosage; however, the positive effects are seen only at long-term follow-up and not immediately after the cessation of treatment.

Surprised? Well, I am!

This is a meta-analysis I had always wanted to conduct but never got round to doing. Using the ‘trick’ of laser acupuncture, it is possible to fully blind patients, clinicians and data evaluators. This eliminates the most obvious sources of bias in such studies. Those who are convinced that acupuncture is a pure placebo would therefore expect a negative overall result.

But the result is quite clearly positive! How can this be? I can see three options:

  • The meta-analysis could be biased and the result might therefore be false-positive. I looked hard but could not find any significant flaws.
  • The primary studies might be wrong, fraudulent etc. I did not see any obvious signs for this to be so.
  • Acupuncture might be more than a placebo after all. This notion might be unacceptable to sceptics.

I invite anyone who sufficiently understands clinical trial methodology to scrutinise the data closely and tell us which of the three possibilities is the correct one.

Even though it was published less than a month ago, my new book ‘A SCIENTIST IN WONDERLAND…’ has already received many highly flattering reviews. For me, the most impressive one was by the journal ‘Nature’; they called my memoir ‘ferociously frank’ and ‘a clarion call for medical ethics’.

I did promise to provide several little excerpts for the readers of this blog to enable them to make up their own minds as to whether they want to read it or not. Today I offer you the start of chapter 6, entitled ‘WONDERLAND’. I do hope you enjoy it.

It has been claimed by some members of the lunatic fringe of alternative medicine that I took up the Laing Chair at Exeter with the specific agenda of debunking alternative medicine. This is certainly not true; if anything, I was predisposed to look kindly on it. After all, I had grown up and done my medical training in Germany where the use of alternative therapies in a supportive role alongside standard medical care was considered routine and unremarkable. As a clinician, I had seen positive results from alternative therapies. If I came to Exeter with any preconceived ideas at all, they were of a generally favourable kind. I was sure that, if we applied the rules of science to the study of alternative medicine, we would find plenty of encouraging evidence.
As if to prove this point, the managing director of a major UK homeopathic pharmacy wrote a comment on my blog in April 2014: “…I met you once in Exeter in the 90s when exploring a possible clinical study. I found you most encouraging and openly enthusiastic about homeopathy. I would go so far as to say I was inspired to go further in homeopathy thanks to you but now you want to close down something which in my experience does so much good in the world. What went wrong?”
The answer to this question is fairly simple: nothing went wrong, but the evidence demonstrated more and more indisputably that most alternative therapies are not nearly as effective as enthusiasts tried to make us believe…

Here is another short passage from my new book A SCIENTIST IN WONDERLAND. It describes the event where I was first publicly exposed to the weird and wonderful world of alternative medicine in the UK. It is also the scene which, in my original draft, was the very beginning of the book.

I hope that the excerpt inspires some readers to read the entire book – it currently is BOOK OF THE WEEK in the TIMES HIGHER EDUCATION!!!

… [an] aggressive and curious public challenge occurred a few weeks later during a conference hosted by the Research Council for Complementary Medicine in London. This organization had been established a few years earlier with the aim of conducting and facilitating research in all areas of alternative medicine. My impression of this institution, and indeed of the various other groups operating in this area, was that they were far too uncritical, and often proved to be hopelessly biased in favour of alternative medicine. This, I thought, was an extraordinary phenomenon: should research councils and similar bodies not have a duty to be critical and be primarily concerned about the quality of the research rather than the overall tenor of the results? Should research not be critical by nature? In this regard, alternative medicine appeared to be starkly different from any other type of health care I had encountered previously.

On short notice, I had accepted an invitation to address this meeting packed with about 100 proponents of alternative medicine. I felt that their enthusiasm and passion were charming but, no matter whom I talked to, there seemed to be little or no understanding of the role of science in all this. A strange naïvety pervaded this audience: alternative practitioners and their supporters seemed a bit like children playing “doctor and patient”. The language, the rituals and the façade were all more or less in place, but somehow they seemed strangely detached from reality. It felt a bit as though I had landed on a different planet. The delegates passionately wanted to promote alternative medicine, while I, with equal passion and conviction, wanted to conduct good science. The two aims were profoundly different. Nevertheless, I managed to convince myself that they were not irreconcilable, and that we would manage to combine our passions and create something worthwhile, perhaps even groundbreaking.

Everyone was excited about the new chair in Exeter; high hopes and expectations filled the room. The British alternative medicine scene had long felt discriminated against because they had no academic representation to speak of. I certainly did sympathize with this particular aspect and felt assured that, essentially, I was amongst friends who realized that my expertise and their enthusiasm could add up to bring about progress for the benefit of many patients.
During my short speech, I summarized my own history as a physician and a scientist and outlined what I intended to do in my new post—nothing concrete yet, merely the general gist. I stressed that my plan was to apply science to this field in order to find out what works and what doesn’t; what is safe and what isn’t. Science, I pointed out, generates progress through asking critical questions and through testing hypotheses. Alternative medicine would either be shown by good science to be of value, or it would turn out to be little more than a passing fad. The endowment of the Laing chair represented an important milestone on the way towards the impartial evaluation of alternative medicine, and surely this would be in the best interest of all parties concerned.

To me, all this seemed an entirely reasonable approach, particularly as it merely reiterated what I had just published in an editorial for The Lancet entitled “Scrutinizing the Alternatives”.

My audience, however, was not impressed. When I had finished, there was a stunned, embarrassed silence. Finally someone shouted angrily from the back row: “How did they dare to appoint a doctor to this chair?” I was startled by this question and did not quite understand. What had prompted this reaction? What did this audience expect? Did they think my qualifications were not good enough? Why were they upset by the appointment of a doctor? Who else, in their view, might be better equipped to conduct medical research?

It wasn’t until weeks later that it dawned on me: they had been waiting for someone with a strong commitment to the promotion of alternative medicine. Such a commitment could only come from an alternative practitioner. A doctor personified the establishment, and “alternative” foremost symbolized “anti-establishment”. My little speech had upset them because it confirmed their worst fears of being annexed by “the establishment”. These enthusiasts had hoped for a believer from their own ranks and certainly not for a doctor-scientist to be appointed to the world’s first chair of complementary medicine. They had expected that Exeter University would lend its support to their commercial and ideological interests; they had little understanding of the concept that universities should not be in the business of promoting anything other than high standards.

Even today, after having given well over 600 lectures on the topic of alternative medicine, and after finding myself on the receiving end of ever more hostile attacks, aggressive questions and personal insults, this particular episode is still etched deeply into my memory. In a very real way, it set the scene for the two decades to come: the endless conflicts between my agenda of testing alternative medicine scientifically and the fervent aspirations of enthusiasts to promote alternative medicine uncritically. That our positions would prove mutually incompatible had been predictable from the very start. The writing had been on the wall—but it took me a while to be able to fully understand the message.

A recent article in the BMJ about my new book seems to have upset fellow researchers of alternative medicine. I am told that the offending passage is the following:

“Too much research on complementary therapies is done by people who have already made up their minds,” the first UK professor of complementary medicine has said. Edzard Ernst, who left his chair at Exeter University early after clashing with the Prince of Wales, told journalists at the Science Media Centre in London that, although more research into alternative medicines was now taking place, “none of the centres is anywhere near critical enough.”

Following this publication, I received indignant inquiries from colleagues asking whether I meant to say that their work lacks critical thinking. As this is a valid question, I will try to answer it as best I presently can.

Any critical evaluation of alternative medicine has to yield its fair share of negative conclusions about the value of alternative medicine. If it fails to do that, one would need to assume that most or all alternative therapies generate more good than harm – and very few experts (who are not proponents of alternative medicine) would assume that this can possibly be the case.

Put differently, this means that a researcher or a research group that does not generate its fair share of negative conclusions is suspected of lacking a critical attitude. In a previous post, I have addressed this issue in more detail by creating an ‘index’: THE TRUSTWORTHINESS INDEX. I have also provided a concrete example of a researcher who seems to be associated with a remarkably high index (the higher the index, the greater the suspicion of an uncritical attitude).
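On one plausible reading of that index (the precise definition is in the earlier post), it is simply the ratio of positive to negative conclusions a team has published:

```python
# One plausible reading of the 'trustworthiness index' mentioned above: the ratio of
# papers with positive conclusions to papers with negative conclusions. The exact
# definition is given in the earlier post; the counts here are hypothetical.
def trustworthiness_index(n_positive, n_negative):
    return n_positive / n_negative

print(trustworthiness_index(10, 20))  # 0.5  - twice as many negative as positive conclusions
print(trustworthiness_index(30, 2))   # 15.0 - the kind of figure that invites suspicion
```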

Instead of unnecessarily upsetting my fellow researchers of alternative medicine any further, I will just issue this challenge: if any research group can demonstrate an index below 0.5 (which would mean the team has published twice as many negative conclusions as positive ones), I will gladly and publicly retract my suspicion that this group is not “anywhere near critical enough”.
