As I am drafting this post, I am in a plane flying back from Finland. The in-flight meal reminded me of the fact that no food is so delicious that it cannot be spoilt by the addition of too many capers. In turn, this made me think about the paper I happened to be reading at the time, and I arrived at the following theory: no trial design is so rigorous that it cannot be turned into something utterly nonsensical by the addition of a few amateur researchers.
The paper I was reading when this idea occurred to me was a randomised, triple-blind, placebo-controlled cross-over trial of homeopathy. Sounds rigorous and top quality? Yes, but wait!
Essentially, the authors recruited 86 volunteers who all claimed to be suffering from “mental fatigue” and treated them with Kali-Phos 6X or placebo for one week (X-potencies signify dilution steps of 1:10, and 6X therefore means that the salt had been diluted 1:1,000,000). Subsequently, the volunteers were crossed over to receive the other treatment for one week.
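As an aside, the potency arithmetic mentioned in the parenthesis is simple enough to sketch in a couple of lines of Python; the function name below is my own illustration, not any standard nomenclature:

```python
# An "X" potency denotes serial 1:10 dilution steps, so an nX remedy
# is diluted by a factor of 10**n overall. Illustrative sketch only.
def x_potency_dilution(n: int) -> int:
    """Total dilution factor of an nX potency (one 1:10 step per X)."""
    return 10 ** n

print(x_potency_dilution(6))  # 1000000, i.e. the 1:1,000,000 dilution of 6X
```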
The results failed to show that the homeopathic medication had any effect (not even homeopaths can be surprised about this!). The authors concluded that Kali-Phos was not effective but cautioned that, because of the possibility of a type-2-error, they might have missed an effect which, in truth, does exist.
In my view, this article provides an almost classic example of how time, money and other resources can be wasted in a pretence of conducting reasonable research. As we all know, clinical trials are usually for testing hypotheses. But what is the hypothesis tested here?
According to the authors, the aim was to “assess the effectiveness of Kali-Phos 6X for attention problems associated with mental fatigue”. In other words, their hypothesis was that this remedy is effective for treating the symptom of mental fatigue. This notion, I would claim, is not a scientific hypothesis but a foolish conjecture!
Arguably any hypothesis about the effectiveness of a highly diluted homeopathic remedy is mere wishful thinking. But, if there were at least some promising data, some might conclude that a trial was justified. By way of justification for the RCT in question, the authors inform us that one previous trial had suggested an effect; however, this study did not employ just Kali-Phos but a combined homeopathic preparation which contained Kalium-Phos as one of several components. Thus the authors’ “hypothesis” does not even amount to a hunch, not even to a slight inkling! To me, it is less than a shot in the dark fired by blind optimists – nobody should be surprised that the bullet failed to hit anything.
It could even be that the investigators themselves dimly realised that something was amiss with the basis of their study; this might be the reason why they called it an “exploratory trial”. But an exploratory study is one without a hypothesis, and the trial in question does have a hypothesis of sorts – only that it is rubbish. And what exactly did the authors mean to explore anyway?
That self-reported mental fatigue in healthy volunteers is a condition that can be medicalised such that it merits treatment?
That the test they used for quantifying its severity is adequate?
That a homeopathic remedy with virtually no active ingredient generates outcomes which are different from placebo?
That Hahnemann’s teaching of homeopathy was nonsense and can thus be discarded (he would have sharply condemned the approach of treating all volunteers with the same remedy, as it contradicts many of his concepts)?
That funding bodies can be fooled to pay for even the most ridiculous trial?
That ethics-committees might pass applications which are pure nonsense and which are thus unethical?
A scientific hypothesis should be more than a vague hunch; at its simplest, it aims to explain an observation or phenomenon, and it ought to have certain features which many alt med researchers seem to have never heard of. If they test nonsense, the result can only be nonsense.
The issue of conducting research that does not make much sense is far from trivial, particularly as so much (I would say most) of alt med research is of such or even worse calibre (if you do not believe me, please go on Medline and see for yourself how many of the recent articles in the category “complementary alternative medicine” truly contribute to knowledge worth knowing). It would be easy therefore to cite more hypothesis-free trials of homeopathy.
One recent example from Germany will have to suffice: in this trial, the only justification for conducting a full-blown RCT was that the manufacturer of the remedy allegedly knew of a few unpublished case-reports which suggested the treatment to work – and, of course, the results of the RCT eventually showed that it didn’t. Anyone with a background in science might have predicted that outcome – which is why such trials are so deplorably wasteful.
Research-funds are increasingly scarce, and they must not be spent on nonsensical projects! The money and time should be invested more fruitfully elsewhere. Participants of clinical trials give their cooperation willingly; but if they learn that their efforts have been wasted unnecessarily, they might think twice next time they are asked. Thus nonsensical research may have knock-on effects with far-reaching consequences.
Being a researcher is at least as serious a profession as most other occupations; perhaps we should stop allowing total amateurs to waste money while playing at being professionals. If someone driving a car does something seriously wrong, we take away his licence; why is there no similar mechanism for inadequate researchers, funders and ethics committees which prevents them from doing further damage?
At the very minimum, we should critically evaluate the hypothesis that the applicants for research-funds propose to test. Had someone done this properly in relation to the two above-named studies, we would have saved about £150,000 per trial (my estimate). But as it stands, the authors will probably claim that they have produced fascinating findings which urgently need further investigation – and we (normally you and I) will have to spend three times the above-named amount (again, my estimate) to finance a “definitive” trial. Nonsense, I am afraid, tends to beget more nonsense.
In these austere and difficult times, it must be my duty, I think, to alert my fellow citizens to a possible source of additional income which almost anyone can plug into: become a charlatan, and chances are that your economic hardship is a memory from the past. To achieve this aim, I [with my tongue firmly lodged in my cheek] suggest a fairly straightforward step-by-step approach.
1. Find an attractive therapy and give it a fantastic name
Did I just say “straightforward”? Well, the first step isn’t that easy, after all. Most of the really loony ideas turn out to be taken: ear candles, homeopathy, aura massage, energy healing, urine-therapy, chiropractic etc. As a true charlatan, you want your very own quackery. So you will have to think of a new concept.
Something truly ‘far out’ would be ideal, like claiming the ear is a map of the human body which allows you to treat all diseases by doing something odd on specific areas of the ear – oops, this territory is already occupied by the ear acupuncture brigade. How about postulating that you have super-natural powers which enable you to send ‘healing energy’ into patients’ bodies so that they can repair themselves? No good either: Reiki-healers might accuse you of plagiarism.
But you get the gist, I am sure, and will be able to invent something. When you do, give it a memorable name; the name can make or break your new venture.
2. Invent a fascinating history
Having identified your treatment and a fantastic name for it, you now need a good story to explain how it all came about. This task is not all that tough and might even turn out to be fun; you could think of something touching like you cured your moribund little sister at the age of 6 with your intervention, or you received the inspiration in your dreams from an old aunt who had just died, or perhaps you want to create some religious connection [have you ever visited Lourdes?]. There are no limits to your imagination; just make sure the story is gripping – one day, they might make a movie of it.
3. Add a dash of pseudo-science
Like it or not, we live in an age where we cannot entirely exclude science from our considerations. At the very minimum, I recommend a little smattering of sciency terminology. As you don’t want to be found out, select something that only a few experts understand; quantum physics, entanglement, chaos theory and nanotechnology are all excellent options.
It might also look more convincing to hint at the notion that top scientists adore your concepts, or that whole teams from universities in distant places are working on the underlying mechanisms, or that the Nobel committee has recently been alerted etc. If at all possible, add a bit of high tech to your new invention; some shiny new apparatus with flashing lights and digital displays might be just the ticket. The apparatus can be otherwise empty – as long as it looks impressive, all is fine.
4. Do not forget a dose of ancient wisdom
With all this science – sorry, pseudo-science – you must not forget to remain firmly grounded in tradition. Your treatment ought to be based on ancient wisdom which you have rediscovered, modified and perfected. I recommend mentioning that some of the oldest cultures of the planet have already been aware of the main pillars on which your invention today proudly stands. Anything that is that old has stood the test of time which is to say, your treatment is both effective and safe.
5. Claim to have a panacea
To maximise your income, you want to have as many customers as possible. It would therefore be unwise to focus your endeavours on just one or two conditions. Commercially, it is much better to affirm in no uncertain terms that your treatment is a cure for everything, a panacea. Do not worry about the implausibility of such a claim. In the realm of quackery, it is perfectly acceptable, even common behaviour to be outlandish.
6. Deal with the ‘evidence-problem’ and the nasty sceptics
It is depressing, I know, but even the most exceptionally gifted charlatan is bound to attract doubters. Sceptics will sooner or later ask you for evidence; in fact, they are obsessed by it. But do not panic – this is by no means as threatening as it appears. The obvious solution is to provide testimonial after testimonial.
You need a website where satisfied customers report impressive stories about how your treatment saved their lives. In case you do not know such customers, invent them; in the realm of quackery, there is a time-honoured tradition of writing your own testimonials. Nobody will be able to tell!
7. Demonstrate that you master the fine art of cheating with statistics
Some of the sceptics might not be impressed, and when they start criticising your ‘evidence’, you might need to go the extra mile. Providing statistics is a very good way of keeping them at bay, at least for a while. The general consensus amongst charlatans is that about 70% of their patients experience remarkable benefit from whatever placebo they throw at them. So, my advice is to do a little better and cite a case series of at least 5000 patients of whom 76.5% showed significant improvements.
What? You don’t have such case series? Don’t be daft, be inventive!
8. Score points with Big Pharma
You must be aware of who your future customers will be: they are affluent, had a decent education (evidently without much success), and are middle-aged, gullible and deeply alternative. Think of Prince Charles! Once you have empathised with this mind-set, it is obvious that you can profitably plug into the persecution complex which haunts these people.
An easy way of achieving this is to claim that Big Pharma has got wind of your innovation, is positively frightened of losing millions, and is thus doing all they can to suppress it. Not only will this give you street cred with the lunatic fringe of society, it also provides a perfect explanation why your ground-breaking discovery has not been published in the top journals of medicine: the editors are all in the pocket of Big Pharma, of course.
9. Ask for money, much money
I have left the most important bit for the end; remember: your aim is to get rich! So, charge high fees, even extravagantly high ones. If your treatment is a product that you can sell (e.g. via the internet, to escape the regulators), sell it dearly; if it is a hands-on therapy, charge heavy consultation fees and claim exclusivity; if it is a teachable technique, start training other therapists at high fees and ask for a franchise cut of their future earnings.
Over-charging is your best chance of getting famous – or have you ever heard of a charlatan famous for being reasonably priced? It will also get rid of the riff-raff you don’t want to see in your surgery. Poor people might even be ill! No, you don’t want them; you want the ‘worried rich and well’ who can afford to see a real doctor when things should go wrong. But most importantly, high fees will do a lot of good to your bank account.
Now you are all set. However, to prevent you from stumbling at the first hurdle, here are some handy answers to the questions you inevitably will receive from sceptics, this nasty breed that is never happy. The answers are not designed to convince them but, if voiced in public, they will ensure that the general opinion is on your side – and that’s what is paramount in the realm of quackery.
Q: Your treatment can cause considerable harm; do you find that responsible?
A: Harm? Do you know what you are talking about? Obviously not! Every year, hundreds of thousands die because of the medicine they received from mainstream doctors. This is what I call harm!
Q: Experts say that your treatment is not biologically plausible, what is your response?
A: There are many things science does not yet understand and many things that it will never understand. In any case, there are other ways of knowing, and science is but one of them.
Q: Where are the controlled trials to back up your claim?
A: Clinical trials are of very limited value; they are far too small, frequently biased and never depict the real life situation. This is why many experts now argue for better ways of showing the value of medical interventions.
Q: Professor Ernst recently said that your therapy is unproven, is that true?
A: This man cannot be trusted; he is in the pocket of the pharmaceutical industry! He would say that, wouldn’t he?
Anyway, did you know that only 15% of conventional therapies actually are evidence-based?
Q: Why is your treatment so expensive?
A: Years of training, a full research programme, constant audits, compliance with regulations, and a large team of co-workers – do you think that all of this comes free? Personally, I would treat all my patients for free (and often do so) but I have responsibilities to others, you know.
Surely, homeopathy must be free of adverse effects! The typically highly diluted remedies contain no active molecules and therefore cannot possibly cause any harm whatsoever. One could even go one step further and argue that the generally acknowledged absence of side-effects made homeopathy as popular as it is today. Why then did we just publish a systematic review of adverse effects of homeopathy?
We conducted searches in 5 electronic databases and also looked through our own, extensive files on homeopathy. This resulted in 38 primary reports of adverse effects associated with the use of homeopathy. The total number of patients thus affected was not small: 1159; 4 fatalities were also reported. Our conclusion was that “homeopathy has the potential to harm patients in both direct and indirect ways”.
I already hear homeopaths shouting at me: “but this is nothing compared to the millions of patients who suffer side-effects of conventional drugs!” I don’t doubt this for a second; our aim was not to show that homeopathy is less safe than mainstream medicine, we merely wanted to test the hypothesis that numerous adverse effects are on record.
So, how can a therapy that usually relies on nothing more than placebos cause harm? Many of the patients who experienced harm did so because the use of homeopathy meant that effective treatments were given too late or not at all. In our book TRICK OR TREATMENT, we describe the case of a homeopath who collaborated with my research team while conducting a clinical trial of homeopathy; before the trial had been completed, she died of cancer simply because she self-treated it with homeopathy and thus lost valuable time for proper therapy which might have saved her life. I have said it often and I say it again: if used as an alternative to an effective cure, even the most “harmless” treatment can become life-threatening.
In other cases, adverse effects can occur when remedies are not highly diluted. Most but not all homeopathic remedies are devoid of active molecules. Homeopaths prescribe remedies made from arsenic and other highly poisonous substances; if such a remedy is not administered in highly diluted form, it can easily kill whoever is unfortunate enough to take it.
The reactions (letters yet to be published in the journal) by proponents of homeopathy to our article were predictable: they claimed we contradicted our published statements that homeopathy was without any effect at all, they tried to find mistakes in our analysis, they claimed that we had a track record of publishing sloppy research, and they even asked the editor to withdraw our paper [I am pleased to report that he resisted this invitation]. Our response to these comments and allegations pointed out that ad hominem attacks are transparent attempts to get rid of unwanted truths – in fact, they are not just transparent but also never successful in suppressing the evidence and, in the final analysis, they merely disclose the fallacies of the opponent.
Science has seen its steady stream of scandals which are much more than just regrettable, as they undermine much of what science stands for. In medicine, fraud and other forms of misconduct of scientists can even endanger the health of patients.
Against this background, it would be handy to have a simple measure which would give us some indication of the trustworthiness of scientists, particularly clinical scientists. May I be so bold as to propose such a method, the TRUSTWORTHINESS INDEX (TI)?
A large part of clinical science is about testing the efficacy of treatments, and it is the scientist who does this type of research on whom I want to focus. It goes without saying that, occasionally, such tests will have to generate negative results such as “the experimental treatment was not effective” [actually “negative” is not the right term, as it is clearly positive to know that a given therapy does not work]. If this never happens with the research of a given individual, we could be dealing with false positive results. In such a case, our alarm bells should start ringing, and we might begin to ask ourselves, how trustworthy is this person?
Yet, in real life, the alarm bells rarely do ring. This absence of suspicion might be due to the fact that, at one point in time, one single person tends to see only one particular paper of the individual in question – and one result tells him next to nothing about the question whether this scientist produces more than his fair share of positive findings.
What is needed is a measure that captures the totality of a researcher’s output. Such parameters already exist; think of the accumulated “Impact Factor” or the “H-Index”, for instance. But, at best, these citation metrics provide information about the frequency or impact of this person’s published papers and totally ignore his trustworthiness. To get a handle on this particular aspect of a scientist’s work, we might have to consider not the impact but the direction of his published conclusions.
If we calculated the percentage of a researcher’s papers arriving at positive conclusions and divided this by the percentage of his papers drawing negative conclusions, we might have a useful measure. A realistic example might be the case of a clinical researcher who has published a total of 100 original articles. If 50% had positive and 50% negative conclusions about the efficacy of the therapy tested, his TI would be 1.
Depending on what area of clinical medicine this person is working in, 1 might be a figure that is just about acceptable in terms of the trustworthiness of the author. If the TI goes beyond 1, we might get concerned; if it reaches 4 or more, we should get worried.
An example would be a researcher who has published 100 papers of which 80 are positive and 20 arrive at negative conclusions. His TI would consequently amount to 4. Most of us equipped with a healthy scepticism would consider this figure highly suspect.
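The arithmetic behind the TI is easily captured in a short, purely illustrative Python sketch; the function name and the error handling are my own assumptions, not part of any established metric:

```python
# Sketch of the Trustworthiness Index (TI) described above:
# TI = (% of papers with positive conclusions) / (% with negative conclusions).
def trustworthiness_index(positive: int, negative: int) -> float:
    """Ratio of positive to negative conclusions in a researcher's output."""
    if positive < 0 or negative <= 0:
        raise ValueError("need non-negative counts and at least one negative conclusion")
    total = positive + negative
    pct_positive = 100 * positive / total
    pct_negative = 100 * negative / total
    return pct_positive / pct_negative  # algebraically, just positive / negative

# Worked examples from the text:
print(trustworthiness_index(50, 50))  # 1.0 - balanced record
print(trustworthiness_index(80, 20))  # 4.0 - suspiciously positive
```

Note that the percentages cancel, so the TI reduces to the simple ratio of positive to negative papers; I keep the percentage form only because that is how the index is defined above.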
Of course, this is all a bit simplistic, and, like all other citation metrics, my TI does not provide any level of proof; it is merely a vague indicator that something might be amiss. And, as stressed already, the cut-off point for any scientist’s TI very much depends on the area of clinical research we are dealing with. The lower the plausibility and the higher the uncertainty associated with the efficacy of the experimental treatments, the lower the point where the TI might suggest something to be fishy.
A good example of an area plagued with implausibility and uncertainty is, of course, alternative medicine. Here one would not expect a high percentage of rigorous tests to come out positive, and a TI of 0.5 might perhaps already be on the limit.
So how does the TI perform when we apply it to my colleagues, the full-time researchers in alternative medicine? I have not actually calculated the exact figures, but as an educated guess, I estimate that it would be very hard, even impossible, to find many with a TI under 4.
But surely this cannot be true! It would be way above the acceptable level which we just estimated to be around 0.5. This must mean that my [admittedly slightly tongue in cheek] idea of calculating the TI was daft. The concept of my TI clearly does not work.
The alternative explanation for the high TIs in alternative medicine might be that most full-time researchers in this field are not trustworthy. But this hypothesis must be rejected out of hand – or mustn’t it?
We all remember the libel case of the British Chiropractic Association (BCA) against Simon Singh, I’m sure. The BCA lost, and the chiropractic profession was left in disarray.
One would have thought that chiropractors have learnt a lesson from this experience which, after all, resulted in a third of all UK chiropractors facing disciplinary proceedings. One would have thought that chiropractors had enough of their attempts to pursue others when, in fact, they themselves were clearly in the wrong. One would have thought that chiropractors would eventually focus on providing us with some sound evidence about their treatments. One would have thought that chiropractors might now try to get their act together.
Yet it seems that such hopes are being sorely disappointed. In particular, chiropractors continue to attack those who have the courage to publicly criticise them. The proof for this statement is that, during the last few months, chiropractors took direct or indirect actions against me on three different occasions.
The first complaint was made by a chiropractor to the PRESS COMPLAINTS COMMISSION (PCC). The GUARDIAN had commented on a paper that I had just published which demonstrated that many trials of chiropractic fail to mention adverse effects. If nothing else, this omission amounts to a serious breach of publication ethics and is thus not a trivial matter. However, the chiropractor felt that the GUARDIAN and I were essentially waging a war against chiropractors in order to tarnish the reputation and public image of chiropractors. The PCC considered the case and promptly dismissed it.
The second complaint was made by a local chiropractor to my university. He alleged that I had been generally unfair in my publications on the subject and, specifically, he claimed that, in a recent systematic review of deaths after chiropractic treatments, I had committed what he called “research misconduct”. My university considered the case and promptly dismissed it.
The third and probably most significant complaint was also made by a chiropractor directly to my university. This time, the allegation was that I had fabricated data in an article published as long ago as 1996. The chiropractor in question had previously already tried three times to attack me through complaints and through his publications. Crucially, several years ago he had filed a formal complaint with the General Medical Council (GMC) claiming that, in my published articles, I systematically and wilfully misquoted the chiropractic literature. At the time, the GMC had ruled that his accusation had been unfounded.
Presumably to increase his chances of success for his fourth attempt, his new complaint to my university was backed up by a supporting letter from the WORLD FEDERATION OF CHIROPRACTIC. This document stated that my publications relating to the risks of chiropractic had “serious scientific shortcomings” and suggested that Exeter University “publicly distance itself from Prof Ernst’s publications on chiropractic, to enhance the reputation of the university”. My university peers considered the case and promptly dismissed it.
At this point, I should perhaps explain that my university has, in the past, been less than protective towards me. During the last decade or so, complaints against me had become a fairly regular occurrence, and invariably, my peers have taken them very seriously. When the first private secretary of Charles Windsor filed one, they even deemed it appropriate to conduct an official 13-month-long investigation into my alleged wrong-doings. Thus my peers’ dismissal of the two chiropractors’ claims indicates to me that their two recent complaints must have been truly and utterly devoid of substance.
The three deplorable episodes summarised here speak for themselves, I think. I will therefore abstain from further comments and am delighted to leave this task to the readers of this blog.