Chiropractors are back pain specialists, they say. They do not pretend to treat non-spinal conditions, they claim.

If such notions were true, why are so many of them still misleading the public? Why do many chiropractors pretend to be primary care physicians who can take care of most illnesses regardless of any connection with the spine? Why do they continue to happily promote bogus treatments? Why do chiropractors, for instance, claim they can treat gastrointestinal diseases?

This recent narrative review of the literature, for example, aimed to summarise studies describing the management of disorders of the gastrointestinal (GI) tract using ‘chiropractic therapy’, broadly defined here as spinal manipulation therapy, mobilizations, soft tissue therapy, modalities and stretches.

Twenty-one articles identified through a search of the published literature met the authors’ inclusion criteria. They ranged from case reports to clinical trials to review articles. The majority of articles chronicling patient experiences under chiropractic care reported mild to moderate improvements in GI symptoms. No adverse effects were reported.

From this, the authors concluded that chiropractic care can be considered as an adjunctive therapy for patients with various GI conditions, provided there are no co-morbidities.

I think we would need to look for a long time to find an article with conclusions more ridiculous, false and unethical than these.

The old adage applies: rubbish in, rubbish out. If we include unreliable reports such as anecdotes, our findings will be unreliable as well. If we avoid this mistake and conduct a proper systematic review, we arrive at very different conclusions. My own systematic review of controlled clinical trials, for instance, drew the following conclusion: There is no supportive evidence that chiropractic is an effective treatment for gastrointestinal disorders.

That probably says it all. I only want to add a short question: SHOULD THIS LATEST CHIROPRACTIC ATTEMPT TO MISLEAD THE PUBLIC BE CONSIDERED ‘SCIENTIFIC MISCONDUCT’ OR ‘FRAUD’?

A paper entitled ‘Real world research: a complementary method to establish the effectiveness of acupuncture’ caught my attention recently. I find it quite remarkable and think it might stimulate some discussion on this blog.  Here is its abstract:

Acupuncture has been widely used in the management of a variety of diseases for thousands of years, and many relevant randomized controlled trials have been published. In recent years, many randomized controlled trials have provided controversial or less-than-convincing evidence that supports the efficacy of acupuncture. The clinical effectiveness of acupuncture in Western countries remains controversial.

Acupuncture is a complex intervention involving needling components, specific non-needling components, and generic components. Common problems that have contributed to the equivocal findings in acupuncture randomized controlled trials were imperfections regarding acupuncture treatment and inappropriate placebo/sham controls. In addition, some inherent limitations were also present in the design and implementation of current acupuncture randomized controlled trials, such as weak external validity. The current designs of randomized controlled trials of acupuncture need to be further developed. In contrast to examining efficacy and adverse reactions in a “sterilized” environment in a narrowly defined population, real world research assesses the effectiveness and safety of an intervention in a much wider population in real world practice. For this reason, real world research might be a feasible and meaningful method for acupuncture assessment. Randomized controlled trials are important in verifying the efficacy of acupuncture treatment, but the authors believe that real world research, if designed and conducted appropriately, can complement randomized controlled trials to establish the effectiveness of acupuncture. Furthermore, an integrative model incorporating randomized controlled trials and real world research could combine the strengths of both and potentially provide more objective and persuasive evidence.

In the article itself, the authors list seven criteria for what they consider good research into acupuncture:

  1. Acupuncture should be regarded as complex and individualized treatment;
  2. The study aim (whether to assess the efficacy of acupuncture needling or the effectiveness of acupuncture treatment) should be clearly defined and differentiated;
  3. Pattern identification should be clearly specified, and non-needling components should also be considered;
  4. The treatment protocol should have some degree of flexibility to allow for individualization;
  5. The placebo or sham acupuncture should be appropriate: knowing “what to avoid” and “what to mimic” in placebos/shams;
  6. In addition to “hard evidence”, one should consider patient-reported outcomes, economic evaluations, patient preferences and the effect of expectancy;
  7. The use of qualitative research (e.g., interview) to explore some missing areas (e.g., experience of practitioners and patient-practitioner relationship) in acupuncture research.

Furthermore, the authors list the advantages of their concept of ‘real world research’ (RWR):

  1. In RWR, interventions are tailored to the patients’ specific conditions, in contrast to standardized treatment. As a result, conclusions based on RWR consider all aspects of acupuncture that affect the effectiveness.
  2. At an operational level, patients’ choice of the treatment(s) decreases the difficulties in recruiting and retaining patients during the data collection period.
  3. The study sample in RWR is much more representative of the real world situation (similar to the section of the population that receives the treatment). The study, therefore, has higher external validity.
  4. RWR tends to have a larger sample size and longer follow-up period than RCTs, and is thus more appropriate for assessing the safety of acupuncture.

The authors make much of their notion that acupuncture is a COMPLEX INTERVENTION; specifically, they claim the following: Acupuncture treatment includes three aspects: needling, specific non-needling components driven by acupuncture theory, and generic components not unique to acupuncture treatment. In addition, acupuncture treatment should be performed on the basis of the patient’s condition and traditional Chinese medicine (TCM) theory.

There is so much BS here that it is hard to decide where to begin refuting it. As the assumption that acupuncture and other alternative therapies are COMPLEX INTERVENTIONS (and therefore exempt from rigorous tests) is highly prevalent in this field, let me briefly tackle just this one point.

The last time I saw a patient and prescribed a drug treatment I did all of the following:

  • I greeted her, asked her to sit down and tried to make her feel relaxed.
  • I first had a quick chat about something trivial.
  • I then asked why she had come to see me.
  • I started to take notes.
  • I inquired about the exact nature and the history of her problem.
  • I then asked her about her general medical history, family history and her life-style.
  • I also asked about any psychological problems that might relate to her symptoms.
  • I then conducted a physical examination.
  • Subsequently we discussed what her diagnosis might be.
  • I told her what my working diagnosis was.
  • I ordered a few tests to either confirm or refute it and explained them to her.
  • We decided that she should come back and see me in a few days when her tests had come back.
  • In order to ease her symptoms in the meanwhile, I gave her a prescription for a drug.
  • We discussed this treatment, how and when she should take it, adverse effects etc.
  • We also discussed other therapeutic options, in case the prescribed treatment was in any way unsatisfactory.
  • I reassured her by telling her that her condition did not seem to be serious and stressed that I was confident I would be able to help her.
  • She left my office.

The point I am trying to make is: prescribing an entirely straightforward drug treatment is also a COMPLEX INTERVENTION. In fact, I know of no treatment that is NOT complex.

Does that mean that drugs and all other interventions are exempt from being tested in rigorous RCTs? Should we allow drug companies to adopt RWR too? Any old placebo would pass such a test and could be made to look effective using RWR. In the example above, my compassion, care and reassurance would alleviate my patient’s symptoms, even if the prescription I gave her was complete rubbish.

So why should acupuncture (or any other alternative therapy) not be tested in proper RCTs? I fear the reason is that RCTs might show that it is not as effective as its proponents had hoped. The conclusion about RWR is thus embarrassingly simple: proponents of alternative medicine want double standards, because single standards would risk disclosing the truth.

You may feel that homeopaths are bizarre, irrational, perhaps even stupid – but you cannot deny their tenacity. For 200 years, they have been trying to convince us that their treatments are effective beyond placebo. And they seem to grow bolder and bolder with their claims: while they used to suggest that homeopathy was effective for trivial conditions like the common cold, they now have their eyes on much more ambitious things. Two recent studies, for instance, claim that homeopathic remedies can help cancer patients.

The aim of the first study was to evaluate whether homeopathy influenced global health status and subjective wellbeing when used as an adjunct to conventional cancer therapy.

In this pragmatic randomized controlled trial, 410 patients, who were treated by standard anti-neoplastic therapy, were randomized to receive or not receive classical homeopathic adjunctive therapy in addition to standard therapy. The main outcome measures were global health status and subjective wellbeing as assessed by the patients. At each of three visits (one baseline, two follow-up visits), patients filled in two questionnaires for quantification of these endpoints.

The results show that 373 patients provided at least one of the three measurements. The improvement in global health status between visits 1 and 3 was significantly stronger in the homeopathy group, by 7.7 (95% CI 2.3-13.0, p=0.005), compared with the control group. A significant group difference was also observed for subjective wellbeing, by 14.7 (95% CI 8.5-21.0, p<0.001), in favour of the homeopathic group. Control patients showed a significant improvement only in subjective wellbeing between their first and third visits.

Our homeopaths concluded that the results suggest that the global health status and subjective wellbeing of cancer patients improve significantly when adjunct classical homeopathic treatment is administered in addition to conventional therapy.

The second study is a little more modest; it had the aim to explore the benefits of a three-month course of individualised homeopathy (IH) for survivors of cancer.

Fifteen survivors of any type of cancer were recruited by a walk-in cancer support centre. Conventional treatment had to have taken place within the last three years. Patients scored their total, physical and emotional wellbeing using the Functional Assessment of Chronic Illness Therapy for Cancer (FACIT-G) before and after receiving four IH sessions.

The results showed that 11 women had statistically positive results for emotional, physical and total wellbeing based on FACIT-G scores.

And the conclusion: Findings support previous research, suggesting CAM or individualised homeopathy could be beneficial for survivors of cancer.

As I said: one has to admire their tenacity, perhaps also their chutzpah – but not their understanding of science or their intelligence. If they were able to think critically, they could only arrive at one conclusion: STUDY DESIGNS THAT ARE WIDE OPEN TO BIAS ARE LIKELY TO DELIVER BIASED RESULTS.

The second study is a mere observation without a control group. The reported outcomes could be due to placebo, expectation, extra attention or social desirability. We obviously need an RCT! But the first study was an RCT!!! Its results are therefore more convincing, aren’t they?

No, not at all. I can repeat my sentence from above: The reported outcomes could be due to placebo, expectation, extra attention or social desirability. And if you don’t believe it, please read what I have posted about the infamous ‘A+B versus B’ trial design (here and here and here and here and here for instance).

My point is that such a study, while looking rigorous to the naïve reader (after all, it’s an RCT!!!), is just as inconclusive when it comes to establishing cause and effect as a simple case series which (almost) everyone knows by now to be utterly useless for that purpose. The fact that the A+B versus B design is nevertheless being used over and over again in alternative medicine for drawing causal conclusions amounts to deceit – and deceit is unethical, as we all know.
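To see why the design is so forgiving, here is a minimal simulation sketch in Python (all effect sizes are invented for illustration): even when the add-on treatment has no specific effect whatsoever, the non-specific effects of extra attention and raised expectations guarantee that the ‘A+B’ arm beats the ‘B’ arm.

```python
import numpy as np
from scipy import stats

# Sketch of an 'A+B versus B' trial in which the add-on (A) is completely inert.
# All numbers are invented for illustration only.
rng = np.random.default_rng(42)

n_per_arm = 200           # patients per arm
specific_effect = 0.0     # the add-on has NO specific effect
nonspecific_effect = 3.0  # placebo response, extra attention, expectation
sd = 10.0                 # spread of the quality-of-life change scores

def run_trial():
    usual_care = rng.normal(0.0, sd, n_per_arm)                              # B
    addon = rng.normal(specific_effect + nonspecific_effect, sd, n_per_arm)  # A + B
    return stats.ttest_ind(addon, usual_care).pvalue

pvalues = np.array([run_trial() for _ in range(1000)])
print(f"'Significant' benefit of the inert add-on in {np.mean(pvalues < 0.05):.0%} of simulated trials")
```

With these made-up numbers, the inert add-on comes out ‘significantly’ better in most simulated trials; the design simply cannot separate a specific treatment effect from context effects.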

My overall conclusion about all this:

QUACKS LOVE THIS STUDY DESIGN BECAUSE IT NEVER FAILS TO PRODUCE FALSE POSITIVE RESULTS.

This is a question which I have asked myself more often than I care to remember. The reason is probably that, in alternative medicine, I feel surrounded by so much dodgy research that I simply cannot avoid asking it.

In particular, the so-called ‘pragmatic’ trials which are so much ‘en vogue’ at present are, in my view, a reason for concern. Take a study of cancer patients, for instance, where one group is randomized to get the usual treatments and care, while the experimental group receives the same and several alternative treatments in addition. These treatments are carefully selected to be agreeable and pleasant; each patient can choose the ones he/she likes best, has always wanted to try, or has heard many good things about. The outcome measure of our fictitious study would, of course, be some subjective parameter such as quality of life.

In this set-up, the patients in our experimental group thus have high expectations, are delighted to get something extra, even happier to get it for free, and receive plenty of attention, empathy, care and time. By contrast, our poor patients in the control group would be a bit miffed to have drawn the ‘short straw’ and to receive none of this.

What result do we expect?

Will the quality of life after all this be equal in both groups?

Will it be better in the miffed controls?

Or will it be higher in those lucky ones who got all this extra pampering?

I don’t think I need to answer these questions; the answers are too obvious and too trivial.

But the real and relevant question is the following, I think: IS SUCH A TRIAL JUST SILLY AND MEANINGLESS OR IS IT UNETHICAL?

I would argue the latter!

Why?

Because the results of the study are clearly known before the first patient has even been recruited. This means that the trial is not necessary; the money, time and effort have been wasted. Crucially, patients have been misled into thinking that they are giving their time, co-operation, patience etc. because there is a question of sufficient importance to be answered.

But, in truth, there is no question at all!

Perhaps you believe that nobody in their right mind would design, fund and conduct such a daft trial. If so, you would be mistaken. Such studies are currently being published by the dozen. Here is the abstract of the most recent one I could find:

The aim of this study was to evaluate the effectiveness of an additional, individualized, multi-component complementary medicine treatment offered to breast cancer patients at the Merano Hospital (South Tyrol) on health-related quality of life compared to patients receiving usual care only. A randomized pragmatic trial with two parallel arms was performed. Women with confirmed diagnoses of breast cancer were randomized (stratified by usual care treatment) to receive individualized complementary medicine (CM group) or usual care alone (usual care group). Both groups were allowed to use conventional treatment for breast cancer. Primary endpoint was the breast cancer-related quality of life FACT-B score at 6 months. For statistical analysis, we used analysis of covariance (with factors treatment, stratum, and baseline FACT-B score) and imputed missing FACT-B scores at 6 months with regression-based multiple imputation. A total of 275 patients were randomized between April 2011 and March 2012 to the CM group (n = 136, 56.3 ± 10.9 years of age) or the usual care group (n = 139, 56.0 ± 11.0). After 6 months from randomization, adjusted means for health-related quality of life were higher in the CM group (FACT-B score 107.9; 95 % CI 104.1-111.7) compared to the usual care group (102.2; 98.5-105.9) with an adjusted FACT-B score difference between groups of 5.7 (2.6-8.7, p < 0.001). Thus, an additional individualized and complex complementary medicine intervention improved quality of life of breast cancer patients compared to usual care alone. Further studies evaluating specific effects of treatment components should follow to optimize the treatment of breast cancer patients. 
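For readers who wonder what the analysis described in this abstract actually looks like, here is a rough sketch of an analysis of covariance with treatment, stratum and baseline score as covariates. The data and column names are invented and the multiple imputation step is omitted, so this is only an illustration of the general method, not the authors’ analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented data standing in for a two-arm pragmatic trial (hypothetical column names)
rng = np.random.default_rng(1)
n = 275
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),        # 1 = add-on complementary medicine, 0 = usual care
    "stratum": rng.integers(1, 4, n),          # randomization stratum
    "factb_baseline": rng.normal(100, 15, n),  # baseline FACT-B score
})
# Outcome at 6 months: baseline plus an arbitrary between-group difference plus noise
df["factb_6m"] = 30 + 0.7 * df["factb_baseline"] + 5.7 * df["treatment"] + rng.normal(0, 10, n)

# ANCOVA: 6-month score adjusted for treatment, stratum and baseline score
model = smf.ols("factb_6m ~ C(treatment) + C(stratum) + factb_baseline", data=df).fit()
print(model.params["C(treatment)[T.1]"])          # adjusted between-group difference
print(model.conf_int().loc["C(treatment)[T.1]"])  # its 95% confidence interval
```

The sketch merely reproduces the arithmetic of an adjusted group difference; it says nothing about what caused that difference, which is exactly the problem.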

The key sentence in this abstract is, of course: ‘complementary medicine intervention improved quality of life of breast cancer patients’. It provides the explanation as to why these trials are so popular with alternative medicine researchers: they are not real research, they are quite simply promotion! The next step will be to put a few of these pseudo-scientific trials together and claim that there is solid proof that integrating alternative treatments into conventional health care produces better results. At that stage, few people will bother to ask whether this is really due to the treatments in question or to the additional attention, pampering etc.

My question is ARE SUCH TRIALS ETHICAL?

I would very much appreciate your opinion.

A new study of homeopathic arnica suggests efficacy. How come?

Subjects scheduled for rhinoplasty surgery with nasal bone osteotomies by a single surgeon were prospectively randomized to receive either oral perioperative arnica or placebo in a double-blinded fashion. A commercially available preparation containing 12 capsules was used: one 500 mg capsule of arnica 1M was given preoperatively on the morning of surgery and two more later that day after surgery. Thereafter, arnica was administered in the 12C potency three times daily for the next 3 days (‘C’ indicates a 100-fold serial dilution; ‘M’, a 1000-fold dilution).

Ecchymosis was measured in digital “three-quarter”-view photographs at three postoperative time points. Each bruise was outlined with Adobe Photoshop and the extent was scaled to a standardized reference card. Cyan, magenta, yellow, black, and luminosity were analyzed in the bruised and control areas to calculate change in intensity.
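The authors did this measurement by hand in Adobe Photoshop. Purely as an illustration of the kind of colour comparison described, a crude version in code might look like the sketch below; the file name and pixel coordinates are hypothetical, and this is not the authors’ actual procedure.

```python
import numpy as np
from PIL import Image

def region_intensity(image, box):
    """Mean CMYK channel values and mean luminosity inside a rectangle (left, top, right, bottom)."""
    region = image.crop(box)
    cmyk = np.asarray(region.convert("CMYK"), dtype=float)  # shape (height, width, 4)
    lum = np.asarray(region.convert("L"), dtype=float)      # luminosity channel
    return cmyk.reshape(-1, 4).mean(axis=0), lum.mean()

# Hypothetical photograph and regions of interest
photo = Image.open("postop_day7.jpg")
bruise_cmyk, bruise_lum = region_intensity(photo, (120, 200, 260, 320))    # bruised area
control_cmyk, control_lum = region_intensity(photo, (400, 200, 540, 320))  # unbruised control area

# Change in intensity of the bruise relative to unbruised skin
print(f"Relative luminosity change: {(bruise_lum - control_lum) / control_lum:.1%}")
print("CMYK differences:", bruise_cmyk - control_cmyk)
```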

Compared with the 13 subjects receiving placebo, the 9 taking arnica had 16.2%, 32.9%, and 20.4% less extent of ecchymosis on postoperative days 2/3, 7, and 9/10 respectively, a difference that was statistically significant on day 7. Color change initially showed a 13.1% increase in intensity with arnica, but 10.9% and 36.3% decreases on days 7 and 9/10, a statistically significant difference on day 9/10. One subject experienced mild itching and rash with the study drug that resolved during the study period.

The authors concluded that Arnica montana seems to accelerate postoperative healing, with quicker resolution of the extent and the intensity of ecchymosis after osteotomies in rhinoplasty surgery, which may dramatically affect patient satisfaction.

Why are the results positive? Previous systematic reviews confirm that homeopathic arnica is a pure placebo. At first, I thought the answer might lie in the 1M potency, which could well still contain active molecules. But then I realised that the answer is much simpler: if we apply the conventional level of statistical significance, there are no statistically significant differences from placebo at all! I had not noticed the little sentence by the authors: ‘a P value of 0.1 was set as a meaningful difference with statistical significance’. In fact, none of the effects called significant by the authors pass the conventionally used probability level of 5%.
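What difference does the relaxed threshold make? A minimal simulation sketch (invented data, group sizes as in the study) shows how often two groups drawn from the same distribution – i.e. a pure placebo comparison – would be declared ‘significant’ at each threshold.

```python
import numpy as np
from scipy import stats

# Both arms come from the SAME distribution, i.e. there is no true difference at all.
rng = np.random.default_rng(7)
n_arnica, n_placebo = 9, 13   # group sizes as reported in the study
n_trials = 20000

pvalues = np.array([
    stats.ttest_ind(rng.normal(0, 1, n_arnica), rng.normal(0, 1, n_placebo)).pvalue
    for _ in range(n_trials)
])

print(f"False positives at p < 0.05: {np.mean(pvalues < 0.05):.1%}")   # around 5%
print(f"False positives at p < 0.10: {np.mean(pvalues < 0.10):.1%}")   # around 10%
```

Setting the bar at p < 0.1 thus roughly doubles the rate at which pure chance findings are reported as real effects – and that is before considering the multiple outcomes and time points analysed in this trial.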

So, what do the results of this new study truly mean? In my view, they show what was known all along: HOMEOPATHIC REMEDIES ARE PLACEBOS.

In my last post, I claimed that researchers of alternative medicine tend to be less than rigorous. I did not link this statement to any evidence at all. Perhaps I should have at least provided an example!? As it happens, I just came across a brand new paper which nicely demonstrates what I meant.

According to its authors, this non-interventional study was performed to generate data on the safety and treatment effects of a complex homeopathic drug. They treated 1050 outpatients suffering from the common cold with a commercially available homeopathic remedy for 8 days. The study was conducted in 64 German outpatient practices of medical doctors trained in CAM. Tolerability, compliance and treatment effects were assessed by the physicians and by patient diaries. Adverse events were collected and assessed with specific attention to homeopathic aggravations and proving symptoms. Each adverse effect was additionally evaluated by an advisory board of experts.

The physicians detected 60 adverse events from 46 patients (4.4%). Adverse drug reactions occurred in 14 patients (1.3%). Six patients showed proving symptoms (0.57%) and only one homeopathic aggravation (0.1%) appeared. The rate of compliance was 84% for all groups. The global assessment of the treatment effects resulted in the verdict “good” and “very good” in 84.9% of all patients.

The authors concluded that the homeopathic complex drug was shown to be safe and effective for children and adults alike. Adverse reactions specifically related to homeopathic principles are very rare. All observed events recovered quickly and were of mild to moderate intensity.

So why do I think this is ‘positively barmy’?

The study had no control group. This means that there is no way anyone can attribute the observed ‘treatment effects’ to the homeopathic remedy. There are many other phenomena that may have caused or contributed to them, e.g.:

  • a placebo effect
  • the natural history of the condition
  • regression to the mean
  • other treatments which the patients took but did not declare
  • the empathic encounter with the physician
  • social desirability

To plan a study with the aim as stated above and to draw the conclusion as cited above is naïve and unprofessional (to say the least) on the part of the researchers (I often wonder where, in such cases, the boundary between incompetence and research misconduct might lie). To pass such a paper through the peer review process is negligent on the part of the reviewers. To publish the article is irresponsible on the part of the editor.

In a nut-shell: COLLECTIVELY, THIS IS ‘POSITIVELY BARMY’!!!

In the realm of homeopathy there is no shortage of irresponsible claims. I am therefore used to a lot – but this new proclamation takes the biscuit, particularly as it is currently being disseminated in various forms worldwide. It is so outrageously unethical that I decided to reproduce it here [in a slightly shortened version]:

“Homeopathy has given rise to a new hope to patients suffering from dreaded HIV, tuberculosis and the deadly blood disease Hemophilia. In a pioneering two-year long study, city-based homeopath Dr Rajesh Shah has developed a new medicine for AIDS patients, sourced from human immunodeficiency virus (HIV) itself.

The drug has been tested on humans for safety and efficacy and the results are encouraging, said Dr Shah. Larger studies with and without concomitant conventional ART (Antiretroviral therapy) can throw more light in future on the scope of this new medicine, he said. Dr Shah’s scientific paper for debate has just been published in Indian Journal of Research in Homeopathy…

The drug resulted in improvement of blood count (CD4 cells) of HIV patients, which is a very positive and hopeful sign, he said and expressed the hope that this will encourage an advanced research into the subject. Sourcing of medicines from various virus and bacteria has been a practise in the homeopathy stream long before the prevailing vaccines came into existence, said Dr Shah, who is also organising secretary of Global Homeopathy Foundation (GHF)…

Dr Shah, who has been campaigning for the integration of homeopathy and allopathic treatments, said this combination has proven to be useful for several challenging diseases. He teamed up with noted virologist Dr Abhay Chowdhury and his team at the premier Haffkine Institute and developed a drug sourced from TB germs of MDR-TB patients.”

So, where is the study? It is not on Medline, but I found it on the journal’s website. This is what the abstract tells us:

“Thirty-seven HIV-infected persons were registered for the trial; ten participants dropped out, so the effect of the HIV nosode 30C and 50C was evaluated in the remaining 27 participants.

Results: Out of 27 participants, 7 (25.93%) showed a sustained reduction in the viral load from 12 to 24 weeks. Similarly, 9 participants (33.33%) showed an increase in the CD4+ count by 20% altogether in the 12th and 24th weeks. Significant weight gain was observed at week 12 (P = 0.0206). 63% and 55% showed an overall increase in either appetite or weight. The viral load increased from baseline through week 12 to week 24, but the increase was not statistically significant (P > 0.05). 52% (14 of 27) of participants showed either stability or improvement in CD4% at the end of 24 weeks, of which 37% showed improvement (1.54-48.35%) in CD4+ count and 15% had a stable CD4+ percentage count until week 24. 16 out of 27 participants had a decrease (1.8-46.43%) in CD8 count. None of the adverse events led to discontinuation of the study.

Conclusion: The study results revealed improvement in immunological parameters, treatment satisfaction, reported by an increase in weight, relief in symptoms, and an improvement in health status, which opens up possibilities for future studies.”

In other words, the study did not even have a control group. This means that the observed ‘effects’ are most likely just the normal fluctuations one would expect, without any clinical significance whatsoever.
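How large can such fluctuations be? As a rough sketch (assuming, purely for illustration, a within-person variability of about 20% in CD4 counts between two measurements; the true figure varies), here is how many of 27 participants we would expect to show a 20% ‘increase’ by chance alone, with no intervention at all.

```python
import numpy as np

# 27 participants measured twice with NO true change between measurements.
# The 20% coefficient of variation is an assumption made for illustration only.
rng = np.random.default_rng(3)
n_participants, n_simulations, cv = 27, 10_000, 0.20
true_cd4 = 500.0  # arbitrary stable CD4 count

improved = []
for _ in range(n_simulations):
    baseline = rng.normal(true_cd4, cv * true_cd4, n_participants)
    followup = rng.normal(true_cd4, cv * true_cd4, n_participants)
    improved.append(np.sum(followup >= 1.2 * baseline))  # 'improved' by at least 20%

print(f"Expected number showing a >=20% 'increase' by chance: {np.mean(improved):.1f} of {n_participants}")
```

Under these assumed numbers, roughly a quarter of participants ‘improve’ by 20% without any treatment – in the same ballpark as the figures reported above.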

The homeopathic Ebola cure was bad enough, I thought, but, considering the global importance of AIDS, the homeopathic HIV treatment is clearly worse.

Today, I had a great day: two wonderful book reviews, one in THE TIMES HIGHER EDUCATION and one in THE SPECTATOR. But then I did something that I shouldn’t have done – I looked to see whether someone had already written a review on the Amazon site. There were three reviews; the first was nice, the second was very stupid and the third one almost made me angry. Here it is:

I was at Exeter when Ernst took over what was already a successful Chair in CAM. I am afraid this part of it appears to be fiction. It was embarrassing for those of us CAM scientists trying to work there, but the university nevertheless supported his right to freedom of speech through all the one-sided attacks he made on CAM. Sadly, it became impossible to do genuine CAM research at Exeter, as one had to either agree with him that CAM is rubbish, or go elsewhere. He was eventually asked to leave the university, having spent the £2.M charity pot set up by Maurice Laing to help others benefit from osteopathy. CAM research funding is so tiny (in fact it is pretty much non-existent) and the remedies so cheap to make, that there is not the kind of corruption you find in multi-billion dollar drug companies (such as that recently in China) or the intrigue described. Subsequently it is not possible to become a big name in CAM in the UK (which may explain the ‘about face’ from the author when he found that out?). The book bears no resemblance to what I myself know about the field of CAM research, which is clearly considerably more than the author, and I would recommend anyone not to waste time and money on this particular account.

I know, I should just ignore it, but outright lies have always made me cross!

Here are just some of the ‘errors’ in the above text:

  • There was no chair when I came.
  • All the CAM scientists – not sure what that is supposed to mean.
  • I was never asked to leave.
  • The endowment was not £2 million.
  • It was not set up to help others benefit from osteopathy.

It is a pity that this ‘CAM-expert’ hides behind a pseudonym. Perhaps he/she will tell us on this blog who he/she is. And then we might find out how well-informed he/she truly is and how he/she was able to insert so many lies into such a short text.

Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?

Here is a brand new one which might stand for dozens of others.

In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA) and prospectively followed them over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).

The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out; one did not achieve an improvement of 80% and was therefore also counted as a treatment failure. The cost of homeopathic treatment was 41% of the projected cost of equivalent conventional treatment.

Good news then for enthusiasts of homeopathy? 91% improvement!

Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group, or regret the absence of objective outcome measures. Even so, I was prepared to go as far as stating that such results might be quite interesting – that is, until I read the authors’ conclusions:

Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.

Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:

  1. How on earth can we take this and so many other articles on homeopathy seriously?
  2. When does this sort of article cross the line between wishful thinking and scientific misconduct?

Guest post by Nick Ross

If you’re a fan of Edzard Ernst – and who with a rational mind would not be – then you will be a fan of HealthWatch.

Edzard is a distinguished supporter. Do join us. I can’t promise much in return except that you will be part of a small and noble organisation that campaigns for treatments that work – in other words for evidence based medicine. Oh, and you get a regular Newsletter, which is actually rather good.

HealthWatch was inspired 25 years ago by Professor Michael Baum, the breast cancer surgeon who was incandescent that so many women presented to his clinic late, doomed and with suppurating sores, because they had been persuaded to try ‘alternative treatment’ rather than the real thing.

But like Edzard (and indeed like Michael Baum), HealthWatch keeps an open mind. If there are reliable data to show that an apparently weirdo treatment works, hallelujah. If there is evidence that an orthodox one doesn’t, then it deserves a raspberry. HealthWatch has worked to expose quacks and swindlers and to get the Advertising Standards Authority to do its job regulating against false claims and flimflam. It has fought the NHS to have women given fair and balanced advice about the perils of mass screening. It has campaigned with Sense About Science, English PEN and Index to protect whistleblowing scientists from vexatious libel laws, and it has joined the AllTrials battle for transparency in drug trials. It has an annual competition for medical and nursing students to encourage critical analysis of clinical research protocols, and it stages the annual HealthWatch Award and Lecture, which has featured Edzard (in 2005) and a galaxy of other champions of scepticism and good evidence, including Sir Iain Chalmers, Richard Smith, David Colquhoun, Tim Harford, John Diamond, Richard Doll, Peter Wilmshurst, Ray Tallis, Ben Goldacre, Fiona Godlee and, last year, Simon Singh. We are shortly to sponsor a national debate on Lord Saatchi’s controversial Medical Innovation Bill.

But we need new blood. Do please check us out. Be careful, because since we first registered our name a host of brazen copycats have emerged, not least Her Majesty’s Government with ‘Healthwatch England’ which is part of the Care Quality Commission. We have had to put ‘uk’ at the end of our web address to retain our identity. So take the link to http://www.healthwatch-uk.org/, or better still take out a (very modestly priced) subscription.

As Edmund Burke might well have said, all it takes for quackery to flourish is that good men and women do nothing.
