Has it ever occurred to you that much of the discussion about cause and effect in alternative medicine goes in circles without ever making progress? I have come to the conclusion that it does. Here I try to illustrate this point using the example of acupuncture, more precisely the endless discussion about how best to test acupuncture for efficacy. For those readers who like to misunderstand me, I should explain that the sceptics’ view is in capital letters.
At the beginning there was the experience. Unaware of anatomy, physiology, pathology etc., people started sticking needles into other people’s skin, some 2000 years ago, and observed that they experienced relief of all sorts of symptoms. When an American journalist reported on this phenomenon in the 1970s, acupuncture became all the rage in the West. Acupuncture-fans then claimed that a 2000-year history is ample proof that acupuncture does work.
BUT ANECDOTES ARE NOTORIOUSLY UNRELIABLE!
Even the most enthusiastic advocates conceded that this is probably true. So they documented detailed case-series of lots of patients, calculated the average difference between the pre- and post-treatment severity of symptoms, submitted the data to statistical tests, and published the conclusion that the effects of acupuncture are not just anecdotal; in fact, they said, they are statistically significant.
BUT THIS EFFECT COULD BE DUE TO THE NATURAL HISTORY OF THE CONDITION!
“True enough”, grumbled the acupuncture-fans and conducted the very first controlled clinical trials. Essentially they treated one group of patients with acupuncture while another group received conventional treatments as usual. When they analysed the results, they found that the acupuncture group had improved significantly more. “Now do you believe us?”, they asked triumphantly, “acupuncture is clearly effective”.
NO! THIS OUTCOME MIGHT BE DUE TO SELECTION BIAS. SUCH A STUDY-DESIGN CANNOT ESTABLISH CAUSE AND EFFECT.
The acupuncturists felt slightly embarrassed because they had not thought of that. They had allocated their patients to the treatment according to patients’ choice. Thus the expectation of the patients (or the clinician) to get relief from acupuncture might have been the reason for the difference in outcome. So they consulted an expert in trial-design and were advised to allocate not by choice but by chance. In other words, they repeated the previous study but randomised patients to the two groups. Amazingly, their RCT still found a significant difference favouring acupuncture over treatment as usual.
BUT THIS DIFFERENCE COULD BE CAUSED BY A PLACEBO-EFFECT!
Now the acupuncturists were in a bit of a pickle; as far as they could see, there was no good placebo for acupuncture! Eventually some methodologist-chap came up with the idea that, in order to mimic a placebo, they could simply stick needles into non-acupuncture points. When the acupuncturists tried that method, they found that there were improvements in both groups but the difference between real acupuncture and placebo was tiny and usually neither statistically significant nor clinically relevant.
NOW DO YOU CONCEDE THAT ACUPUNCTURE IS NOT AN EFFECTIVE TREATMENT?
Absolutely not! The results merely show that needling non-acupuncture points is not an adequate placebo. Obviously this intervention also sends a powerful signal to the brain which clearly makes it an effective intervention. What do you expect when you compare two effective treatments?
IF YOU REALLY THINK SO, YOU NEED TO PROVE IT AND DESIGN A PLACEBO THAT IS INERT.
At that stage, the acupuncturists came up with a placebo-needle that did not actually penetrate the skin; it worked like a miniature stage dagger, telescoping into itself while giving the impression of penetrating the skin just like the real thing. Surely this was an adequate placebo! The acupuncturists repeated their studies but, to their utter dismay, they found again that both groups improved and the difference in outcome between their new placebo and true acupuncture was minimal.
WE TOLD YOU THAT ACUPUNCTURE WAS NOT EFFECTIVE! DO YOU FINALLY AGREE?
Certainly not, they replied. We have thought long and hard about these intriguing findings and believe that they can be explained just like the last set of results: the non-penetrating needles touch the skin; this touch provides a stimulus powerful enough to have an effect on the brain; the non-penetrating placebo-needles are not inert and therefore the results merely depict a comparison of two effective treatments.
YOU MUST BE JOKING! HOW ARE YOU GOING TO PROVE THAT BIZARRE HYPOTHESIS?
We had many discussions and consensus meetings amongst the most brilliant brains in acupuncture about this issue and have arrived at the conclusion that your obsession with placebo, cause and effect etc. is ridiculous and entirely misplaced. In real life, we don’t use placebos. So, let’s instead address the ‘real life’ question: is acupuncture better than usual treatment? We have conducted pragmatic studies where one group of patients gets treatment as usual and the other group receives acupuncture in addition. These studies show that acupuncture is effective. This is all the evidence we need. Why can you not believe us?
NOW WE HAVE ARRIVED EXACTLY AT THE POINT WHERE WE WERE A LONG TIME AGO. SUCH A STUDY-DESIGN CANNOT ESTABLISH CAUSE AND EFFECT. YOU OBVIOUSLY CANNOT DEMONSTRATE THAT ACUPUNCTURE CAUSES CLINICAL IMPROVEMENT. THEREFORE YOU OPT TO PRETEND THAT CAUSE AND EFFECT ARE IRRELEVANT. YOU USE SOME IMITATION OF SCIENCE TO ‘PROVE’ THAT YOUR PRECONCEIVED IDEAS ARE CORRECT. YOU DO NOT SEEM TO BE INTERESTED IN THE TRUTH ABOUT ACUPUNCTURE AT ALL.
One cannot very well write a blog about alternative medicine without giving full credit to the biggest and probably most determined champion of quackery who ever hugged a tree. Prince Charles certainly has done more than anyone else I know to let unproven treatments infiltrate real medicine. To honour his unique achievements, I am here presenting a fictitious interview with him. It never did take place, of course, and the questions I put to him are pure imagination. However, the ‘answers’ are in a way quite real: they have been taken unaltered from various speeches he made and articles he wrote. To avoid being accused of using dodgy sources which might have quoted him inaccurately or unsympathetically, I have exclusively used HRH’s very own official website as a source for his comments. It seems safe to assume that HRH identifies with them more fully than with the many other statements he made on this subject.
I have not changed a single word in his statements and I have tried to avoid quoting him out of context; I did, however, take the liberty of putting sentences side by side which do not always originate from the same speech or article, i.e. I have combined quotes from different communications so that they appear as though they had originally been in sequence. It will be clear from the text that the fictitious interview is dated before Charles’ Foundation folded because of money laundering and fraud.
It is, of course, hugely tempting to comment on the various statements by Charles. However, I have resisted this temptation; I wanted the reader to enjoy his wisdom in its pure and unadulterated beauty. Anyone who feels like it will have plenty of opportunity to post comments, if they so wish.
To make clear what is what, my questions appear in italics, while his ‘answers’ are in Roman typeface.
Q I believe you have no training in science or medicine; yet you have long felt yourself expert enough to champion bizarre forms of therapies which many of our readers might call quackery.
As you know by now, this is an area to which I attach the greatest importance and where I have tried to make a particular contribution. For many years, the NHS has found complementary medicine an uncomfortable bedfellow – at best regarded as ‘fringe’ and in some quarters as ‘quack’; never viewed as a substitute for conventional medicine and rarely as a genuine partner in providing therapy.
I look back to the rather “lukewarm” response I received in 1983 as President of the British Medical Association when I first spoke about integration and complementary and alternative medicine. We have clearly travelled a very long way since that time.
Q Alternative medicine is mainly used by those who can afford it; at present, little of it is available on the NHS. Why do you want to change this situation?
The very popularity of non-conventional approaches suggests that people are either dissatisfied with the kind of orthodox treatment they are receiving, or find genuine relief in such therapies. Whatever the case, it is only reasonable to try to identify the factors that are contributing to their increased use. And if advantages are found, clearly they should not be limited only to those people who can pay, but should be made more widely available on the NHS.
Q If with a capital “I”?
I believe it is because complementary and alternative approaches to healthcare bring a different emphasis to bear which often unlocks an individual’s inner resources to aid recovery or help to manage living with a serious chronic illness. It is also because complementary and alternative therapies often offer more effective and less intrusive ways of treating illness.
Q Really? Are you sure that they are more effective than conventional treatments? What is your evidence for that?
In 1997 the Foundation for Integrated Medicine, of which I am the president and founder, identified research and development based on rigorous scientific evidence as one of the keys to the medical establishment’s acceptance of non-conventional approaches. I believed then, as I do now, that the move to a more integrated provision of healthcare would ultimately benefit patients and their families.
Q But belief is hardly a good substitute for evidence. In this context, it is interesting to note that chiropractors and osteopaths received the same status as doctors and nurses in the UK. Is this another of your achievements? Was it based on belief or on evidence?
True healing is a synergy that comes not by courtesy of a medical diploma.
Q What do you mean?
As we know, the professions of Osteopathy and Chiropractice are now regulated in the same way as doctors and dentists, with their own Acts of Parliament. I’m very proud to have played a tiny role in trying to push for that Act of Parliament over the years. It has also been reassuring to see the progress being made by the other main complementary professions and I look forward to the further development of regulatory frameworks enabling high standards of training, clinical practice and professional behaviour.
Q Some might argue that statutory regulation did not make them more professional but merely improved their status and thus forestalled questions about evidence. Why did they need to be regulated in that way?
The House of Lord’s Select Committee Report on Complementary and Alternative Medicine in 2000, quite sensibly recommended that only complementary professions which were statutorily regulated, or which had well-established arrangements for voluntary self-regulation, should be made available through the NHS.
Q Integrated healthcare seems to be your new buzz-word, what does it mean? Is it more than a passing fad?
Integrated Healthcare is, I believe, here to stay. The public want it and need it. It is not a takeover of the orthodox by CAM or the other way around, but is rather the bringing together of the best from both for the ultimate benefit of the patient.
Q Your lobby-group, Foundation for Integrated Medicine, what has it ever done to justify its existence?
In 1997 the steering group of The Foundation for Integrated Medicine (FIM), of which I am proud to be president, published a discussion document ‘Integrated Healthcare – A Way Forward for the Next Five Years?’
Q Sorry to interrupt, but if so many people are already using them, why do you feel compelled to promote unproven treatments even more? Why is ‘a way forward’ in promotion actually needed? Why did we need a lobby group like FIM?
Homoeopaths, osteopaths, reflexologists, acupuncturists, T’ai chi instructors, art therapists, chiropractors, herbalists and aromatherapists: these practitioners were working alongside NHS colleagues in acute hospitals, on children’s wards, in nursing homes and in particular in primary healthcare, in GP practices and health clinics up and down the country.
Q Exactly! Why then even more promotion of unproven treatments?
All well and good, perhaps, but if there are advantages in this approach, clearly they should not be limited only to those who can pay.
Q Yes, if again with a capital “I”, presumably. Anyway, do you believe these therapies should be tested like other treatments?
One of the obstacles always raised is that it is very difficult to trial complementary therapies in the rigorous randomised way that mainstream medicine deems to be the gold standard. This is ironic as there are, of course, un-evaluated orthodox practices which continue to be funded by the NHS.
Q Are you an expert on research methodology as well?
At the same time, we should be mindful that clinically controlled trials alone are not the only pre-requisites to apply a healthcare intervention. Consumer-based surveys can explore WHY people choose complementary and alternative medicine and tease out the therapeutic powers of belief and trust
These “rationalist selves” would be enormously relieved to see the effectiveness of these treatments proven through the “double-blind randomized controlled trial” – the gold-standard of medical research. However, we know that some complementary and alternative medicine disciplines (and indeed other forms of medical or surgical intervention) do not lend themselves to this research method.
Q Are you sure? This sounds like something someone who is ignorant of research methodology has told you.
… it has been suggested that we need a research method for complementary treatment that is, to use that awful expression, “fit for purpose”. Something that is entirely practical – what has been called “applied” research – which takes into account the whole person and the whole treatment as it is actually given in the surgery or the hospital. Something that might offer us a better idea of the cost-effectiveness of any given approach. It would also help to provide the right sort of evidence that health service commissioners require when they decide which services they wish to commission for their patients.
Q Hmm – anyway, would you promote unproven treatments even for serious conditions like cancer?
Two surveys have indicated that up to eighty per cent of cancer patients try alternative or complementary treatments at some stage following diagnosis and seventy-five per cent of patients would like to see complementary medicine available on the N.H.S.
Q Yes, but why the promotion?
There is a major role for complementary medicine in bowel cancer – as a support to more conventional approaches – in helping to prevent it through lifestyle changes, helping to boost our immune systems and in helping sufferers to come to terms with, and maintain, a sense of control over their own lives and wellbeing. My own Foundation For Integrated Medicine is, for example, involved in finding ways to integrate the best of complementary and alternative medicine.
Q And what do you understand by “the best”? In medicine, this term should mean “the most effective”, shouldn’t it?
Many cancer patients have turned to an integrated approach to managing their health, finding complementary therapies such as acupuncture, aromatherapy, reflexology and massage therapy extremely therapeutic. I know of one patient who turned to Gerson Therapy having been told that she was suffering from terminal cancer, and would not survive another course of chemotherapy. Happily, seven years later she is alive and well. So it is therefore vital that, rather than dismissing such experiences, we should further investigate the beneficial nature of these treatments.
Q Gerson? Is it ethical to promote an unproven starvation diet for cancer?
…many patients use and believe in Gerson Therapy, yet more evidence needs to be available as to who might benefit or what adverse effects there might be. But, surely, we need to take a wider view of the most appropriate types of research methodology – a wider view of what research will help patients.
Q You are a very wealthy man; will you put your own money into the research that you regularly demand?
Complementary medicine is gaining a toehold on the rockface of medical science.
Q I beg your pardon?
Complementary medicine’s toehold is literally that, and it’s an inescapable fact that clinical trials, of the calibre that medical science demands, cost money. Figures from the Department of Complementary Medicine at the University of Exeter show that less than 8p out of every £100 of NHS funds for medical research was spent on complementary medicine. In 1998-99 the Medical Research Council spent no money on it at all, and in 1999 only 0.05% of the total research budget of UK medical charities went to this area.
Q Hmm – Nature; you are very fond of all things natural, aren’t you?
The garden is designed to remind people of our interconnectedness with Nature and of the beneficial medicinal properties She provides through countless plants, flowers and trees. Throughout the 20th century so much ancient, accumulated, traditional wisdom has been thrown away – whether in the fields of medicine, architecture, agriculture or education. The baby was thrown out with the bathwater, so this garden is designed to bring the baby back again and to remind us of that priceless, traditional knowledge before we lose that rich store of Nature’s healing gifts for the benefit of our descendants.
When you think about it, what on earth is the point of throwing away our lifeline; of abandoning the priceless knowledge and wisdom accumulated over 1,000’s of years relating to the treatment of the human condition by natural means? It is sheer folly it seems to me to forget that we are a part of Nature and to imagine we can survive on this Earth as if we were merely a mechanical process divorced from, and in opposition to, the unity of the world around us.
Q …and herbalism?
Medical herbalists talk about ‘synergy’, the result of a complex mix of active ingredients in a plant that create a more powerful therapeutic effect together than if isolated. It’s a concept that has a wider application. As the 17th century poet John Donne famously wrote, “No man is an Island, entire of itself; every man is a piece of the Continent, a part of the main.”
Q I am not sure I understand; what does that mean?
Medical herbalists, who make up their own preparations from combinations of fresh or dried plants, believe that this mix within individual herbs as well as in traditional mixtures of plant medicines creates what is called synergy, in which all the chemical components contribute to the remedy’s specific therapeutic effects.
At a time when farmers everywhere are struggling to make ends meet, the development of a natural pharmacy of organically grown herbs offers an alternative means of earning a living. Yet without protective measures, herbs are easily adulterated or their quality compromised.
Q …and homeopathy?
I went to open the new Glasgow Homeopathic Hospital for instance a couple of years ago, I met a whole lot of students who were studying homeopathy, I think, and I’ve never forgotten when they said to me ‘Are you interested in homeopathy’ and I thought – I don’t know, why do I bother?
Q And why exactly do you bother, if I may ask?
By allowing patients treatment choice, negative emotions can, in part, be alleviated. Many complementary practitioners provide time, empathy, hope and reassurance – skills that are referred to as the “human effect” – which can improve the confidence of cancer patients, alter mindsets and produce major positive changes in the immune system. As a result the “human effect” can greatly prolong life: it has been demonstrated that in a variety of cancers, such as breast cancer, that attitude of mind can not only raise the quality of life but in some cases can even prolong life. At the same time, we need specific treatments that are designed to improve the quality of patients’ lives, and to provide relief from the unpleasant symptoms of cancer – anxiety; pain; sleeplessness; skin irritation; poor appetite; nausea and depression, to name but a few.
Q At heart you seem to be a vitalist who believes in a vital force or energy that interconnects anything with everything and determines our health.
Research in the new field of psychoneuroimmunology – or mind-body medicine as it is sometimes called – is discovering that there is a constant interplay between our emotions, thoughts and actions and our body systems. It seems that the food we eat, the air we breathe, the exercise we take, our relationships with other people, all have a direct bearing on our health and natural healing processes. Complementary medicine has always known this and I believe it is one of the reasons for its enormous popularity.
Q Clarence House made several statements assuring the British public that you never overstep your constitutional role by trying to influence health politics; they were having us on, weren’t they?
A few days ago I launched an initiative to promote the provision of more complementary medicine in the NHS. For many years I have been working towards this goal.
Q Does that mean these statements were wrong?
I am convinced there is no better moment than now to create a real integration of our healthcare, particularly when there is talk of a Patient-Centred NHS. So much ill-health and disease is due to the misery, stress and alienation we see in our community.
It was 20 years ago today that I started my job as ‘Professor of Complementary Medicine’ at the University of Exeter and became a full-time researcher of all matters related to alternative medicine. One issue that was discussed endlessly during these early days was the question whether alternative medicine can be investigated scientifically. There were many vociferous proponents of the view that it was too subtle, too individualised, too special for that and that it defied science in principle. Alternative medicine, they claimed, needed an alternative to science to be validated. I spent my time arguing the opposite, of course, and today there finally seems to be a consensus that alternative medicine can and should be submitted to scientific tests much like any other branch of health care.
Looking back at those debates, I think it is rather obvious why apologists of alternative medicine were so vehement about opposing scientific investigations: they suspected, perhaps even knew, that the results of such research would be mostly negative. Once the anti-scientists saw that they were fighting a lost battle, they changed their tune and adopted science – well sort of: they became pseudo-scientists (‘if you cannot beat them, join them’). Their aim was to prevent disaster, namely the documentation of alternative medicine’s uselessness by scientists. Meanwhile many of these ‘anti-scientists turned pseudo-scientists’ have made rather surprising careers out of their cunning role-change; professorships at respectable universities have mushroomed. Yes, pseudo-scientists have splendid prospects these days in the realm of alternative medicine.
The term ‘pseudo-scientist’ as I understand it describes a person who thinks he/she knows the truth about his/her subject well before he/she has done the actual research. A pseudo-scientist is keen to understand the rules of science in order to corrupt science; he/she aims at using the tools of science not to test his/her assumptions and hypotheses, but to prove that his/her preconceived ideas were correct.
So, how does one become a top pseudo-scientist? During the last 20 years, I have observed some of the careers with interest and think I know how it is done. Here are nine lessons which, if followed rigorously, will lead to success (… oh yes, in case I again have someone thick enough to complain about me misleading my readers: THIS POST IS SLIGHTLY TONGUE IN CHEEK).
- Throw yourself into qualitative research. For instance, focus groups are a safe bet. This type of pseudo-research is not really difficult to do: you assemble about 5-10 people, let them express their opinions, record them, extract from the diversity of views what you recognise as your own opinion and call it a ‘common theme’, write the whole thing up, and – BINGO! – you have a publication. The beauty of this approach is manifold: 1) you can repeat this exercise ad nauseam until your publication list is of respectable length; there are plenty of alternative medicine journals which will hurry to publish your pseudo-research; 2) you can manipulate your findings at will, for instance, by selecting your sample (if you recruit people outside a health food shop, for instance, and direct your group wisely, you will find everything alternative medicine journals love to print); 3) you will never produce a paper that displeases the likes of Prince Charles (this is more important than you may think: even pseudo-science needs a sponsor [or would that be a pseudo-sponsor?]).
- Conduct surveys. These are very popular and highly respected/publishable projects in alternative medicine – and they are almost as quick and easy as focus groups. Do not get deterred by the fact that thousands of very similar investigations are already available. If, for instance, there already is one describing alternative medicine usage by leg-amputated policemen in North Devon, and you nevertheless feel the urge of going into this area, you can safely follow your instinct: do a survey of leg-amputated policemen in North Devon with a medical history of diabetes. There are no limits, and as long as you conclude that your participants used a lot of alternative medicine, were very satisfied with it, did not experience any adverse effects, thought it was value for money, and would recommend it to their neighbour, you have secured another publication in an alternative medicine journal.
- If, for some reason, this should not appeal to you, how about taking a sociological, anthropological or psychological approach? How about studying, for example, the differences in worldviews, the different belief systems, the different ways of knowing, the different concepts about illness, the different expectations, the unique spiritual dimensions, the amazing views on holism – all in different cultures, settings or countries? Invariably, you will, of course, conclude that one truth is at least as good as the next. This will make you popular with all the post-modernists who use alternative medicine as a playground for getting a few publications out. This approach will allow you to travel extensively and generally have a good time. Your papers might not win you a Nobel prize, but one cannot have everything.
- It could well be that, at one stage, your boss has a serious talk with you demanding that you start doing what (in his narrow mind) constitutes ‘real science’. He might be keen to get some brownie-points at the next RAE and could thus want you to actually test alternative treatments in terms of their safety and efficacy. Do not despair! Even then, there are plenty of possibilities to remain true to your pseudo-scientific principles. By now you are good at running surveys, and you could, for instance, take up your boss’ suggestion of studying the safety of your favourite alternative medicine with a survey of its users. You simply evaluate their experiences and opinions regarding adverse effects. But be careful, you are on somewhat thinner ice here; you don’t want to upset anyone by generating alarming findings. Make sure your sample is small enough for a false negative result, and that all participants are well-pleased with their alternative medicine. This might be merely a question of selecting your patients cleverly. The main thing is that your conclusion is positive. If you want to go the extra pseudo-scientific mile, mention in the discussion of your paper that your participants all felt that conventional drugs were very harmful.
- If your boss insists you tackle the daunting issue of therapeutic efficacy, there is no reason to give up pseudo-science either. You can always find patients who happened to have recovered spectacularly well from a life-threatening disease after receiving your favourite form of alternative medicine. Once you have identified such a person, you write up her experience in much detail and call it a ‘case report’. It requires a little skill to brush over the fact that the patient also had lots of conventional treatments, or that her diagnosis was assumed but never properly verified. As a pseudo-scientist, you will have to learn how to discreetly make such irritating details vanish so that, in the final paper, they are no longer recognisable. Once you are familiar with this methodology, you can try to find a couple more such cases and publish them as a ‘best case series’ – I can guarantee that you will be all other pseudo-scientists’ hero!
- Your boss might point out, after you have published half a dozen such articles, that single cases are not really very conclusive. The antidote to this argument is simple: you do a large case series along the same lines. Here you can even show off your excellent statistical skills by calculating the statistical significance of the difference between the severity of the condition before and after the treatment. As long as you show marked improvements, ignore all the many other factors involved in the outcome and conclude that these changes are undeniably the result of the treatment, you will be able to publish your paper without problems.
- As your boss seems to be obsessed with the RAE and all that, he might one day insist you conduct what he narrow-mindedly calls a ‘proper’ study; in other words, you might be forced to bite the bullet and learn how to plan and run an RCT. As your particular alternative therapy is not really effective, this could lead to serious embarrassment in the form of a negative result, something that must be avoided at all cost. I therefore recommend you join for a few months a research group that has a proven track record of doing RCTs of utterly useless treatments without ever failing to conclude that they are highly effective. There are several such units both in the UK and elsewhere, and their expertise is remarkable. They will teach you how to incorporate all the right design features into your study without the slightest risk of generating a negative result. A particularly popular solution is to conduct what they call a ‘pragmatic’ trial; I suggest you focus on this splendid innovation that never fails to produce cheerfully positive findings.
- It is hardly possible that this strategy fails – but once every blue moon, all precautions turn out to be in vain, and even the most cunningly designed study of your bogus therapy might deliver a negative result. This is a challenge to any pseudo-scientist, but you can master it, provided you don’t lose your head. In such a rare case I recommend running as many different statistical tests as you can find; chances are that one of them will nevertheless produce something vaguely positive. If even this method fails (and it hardly ever does), you can always home in on the fact that, in your efficacy study of your bogus treatment, not a single patient died. Who would be able to doubt that this is a positive outcome? Stress it clearly, select it as the main feature of your conclusions, and thus make the more disappointing findings disappear.
- Now that you are a fully-fledged pseudo-scientist who has produced one misleading or false positive result after the next, you may want a ‘proper’ confirmatory study of your pet-therapy. For this purpose run the same RCT over again, and again, and again. Eventually you want a meta-analysis of all RCTs ever published. As you are the only person who ever conducted studies on the bogus treatment in question, this should be quite easy: you pool the data of all your trials and, Bob’s your uncle: a nice little summary of the totality of the data that shows beyond doubt that your therapy works. Now even your narrow-minded boss will be impressed.
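The ‘large case series’ lesson above trades on a real statistical artefact: patients tend to enrol when their symptoms are at their worst, so scores drift back towards their usual level afterwards regardless of what is done to them (regression to the mean). A minimal sketch of the effect, with purely illustrative numbers – the severity scale, enrolment threshold and sample size are my assumptions, not taken from any real study:

```python
import random
import statistics

random.seed(42)

def simulate_case_series(n_patients=200, enrol_threshold=7.0):
    """Simulate a pre/post case series of a completely inert treatment.

    Each patient's symptom severity fluctuates randomly around a
    personal mean; patients enrol only when severity is above a
    threshold (they seek help when they feel worst). The 'post'
    score is just another random draw -- no treatment effect at all.
    """
    pre_scores, post_scores = [], []
    while len(pre_scores) < n_patients:
        personal_mean = random.gauss(5.0, 1.0)   # long-run baseline severity
        pre = random.gauss(personal_mean, 2.0)   # severity on enrolment day
        if pre < enrol_threshold:
            continue                             # only the worst-off enrol
        post = random.gauss(personal_mean, 2.0)  # severity after 'treatment'
        pre_scores.append(pre)
        post_scores.append(post)
    return statistics.mean(pre_scores), statistics.mean(post_scores)

pre_mean, post_mean = simulate_case_series()
print(f"mean severity before: {pre_mean:.1f}, after: {post_mean:.1f}")
```

The ‘after’ mean comes out markedly lower than the ‘before’ mean even though the treatment did literally nothing – which is exactly why a pre/post comparison without a control group can always be made to look impressive.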
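The ‘run as many statistical tests as you can find’ rescue also rests on simple arithmetic: with k independent tests of an inert treatment at the conventional 5% significance level, the chance of at least one spuriously ‘significant’ result is 1 − 0.95^k. A quick sketch (the test counts are illustrative):

```python
# Probability that at least one of k independent tests on an inert
# treatment comes out 'significant' at the 5% level purely by chance.
def chance_of_false_positive(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:2d} tests -> {chance_of_false_positive(k):.0%} chance of a 'positive' finding")
    # 1 test -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%
```

At twenty tests the odds of fishing up at least one ‘positive’ result are roughly two in three – which is why pre-specifying a single primary outcome matters.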
These nine lessons can and should be modified to suit your particular situation, of course. Nothing here is written in stone. The one skill any pseudo-scientist must have is flexibility.
Every now and then, some smart arse is bound to attack you and claim that this is not rigorous science, that independent replications are required, that you are biased etc. etc. blah, blah, blah. Do not panic: either you ignore that person completely, or (in case there is a whole gang of nasty sceptics after you) you might just point out that:
- your work follows a new paradigm; the one of your critics is now obsolete,
- your detractors fail to understand the complexity of the subject and their comments merely reveal their ridiculous incompetence,
- your critics are less than impartial, in fact, most are bought by BIG PHARMA,
- you have a paper ‘in press’ that fully deals with all the criticism and explains how inappropriate it really is.
In closing, allow me a final word about publishing. There are hundreds of alternative medicine journals out there to choose from. They will love your papers because they are uncompromisingly promotional. These journals all have one thing in common: they are run by apologists of alternative medicine who abhor reading anything negative about alternative medicine. Consequently hardly a critical word about alternative medicine will ever appear in these journals. If you want to make doubly sure that your paper does not get criticised during the peer-review process (this would require a revision, and you don’t need extra work of that nature), you can suggest a friend for peer-reviewing it. In turn, you can offer to do the same for them the next time they have an article to submit. This is how pseudo-scientists make sure that the body of pseudo-evidence for their pseudo-treatments is growing at a steady pace.
Swiss chiropractors have just published a clinical trial to investigate outcomes of patients with radiculopathy due to cervical disk herniation (CDH). Patients were included if they had neck pain and dermatomal arm pain, sensory, motor, or reflex changes corresponding to the involved nerve root, and at least one positive orthopaedic test for cervical radiculopathy. CDH was confirmed by magnetic resonance imaging. All patients received regular neck manipulations.
Baseline data included two pain numeric rating scales (NRSs), for neck and arm, and the Neck Disability Index (NDI). At two, four and twelve weeks after the initial consultation, patients were contacted by telephone, and the data for NDI, NRSs, and patient’s global impression of change were collected. High-velocity, low-amplitude thrusts were administered by experienced chiropractors. The proportion of patients reporting to feel “better” or “much better” on the patient’s global impression of change scale was calculated. Pre-treatment and post-treatment NRSs and NDIs were analysed.
Fifty patients were included. At two weeks, 55.3% were “improved”; at four weeks, 68.9%; and at twelve weeks, 85.7%. Statistically significant decreases in neck pain, arm pain, and NDI scores were noted at one and three months compared with baseline scores. 76.2% of all sub-acute/chronic patients were improved at three months.
The authors concluded that most patients in this study, including sub-acute/chronic patients, with symptomatic magnetic resonance imaging-confirmed CDH treated with spinal manipulative therapy, reported significant improvement with no adverse events.
In the presence of disc herniation, chiropractic manipulations have been described to cause serious complications. Some experts therefore believe that CDH is a contra-indication for spinal manipulation. The authors of this study imply, however, that it is not – on the contrary, they think it is an effective intervention for CDH.
One does not need to be a sceptic to notice that the basis for this assumption is less than solid. The study had no control group. This means that the observed effect could have been due to:
- a placebo response,
- regression towards the mean,
- the natural history of the condition,
- or other factors which have nothing to do with the chiropractic intervention per se.
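Regression towards the mean deserves special emphasis, because it alone can manufacture an impressive ‘improvement’ in any uncontrolled study that enrols patients when their symptoms are at their worst. A minimal simulation (all figures invented purely for illustration):

```python
import random

random.seed(1)

# Each patient has a stable 'true' pain level plus day-to-day noise.
# Enrolling only those who score high at baseline guarantees that the
# average follow-up score drops -- with no treatment at all.
true_pain = [random.gauss(5, 1) for _ in range(100_000)]
baseline = [t + random.gauss(0, 2) for t in true_pain]
followup = [t + random.gauss(0, 2) for t in true_pain]

enrolled = [i for i, b in enumerate(baseline) if b > 7]  # 'severe' cases only

mean_before = sum(baseline[i] for i in enrolled) / len(enrolled)
mean_after = sum(followup[i] for i in enrolled) / len(enrolled)
print(f"baseline {mean_before:.2f} -> follow-up {mean_after:.2f}")
```

The ‘severe’ patients improve substantially at follow-up although nothing whatsoever was done to them – exactly the pattern an uncontrolled trial would triumphantly report.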
And what about the interesting finding that no adverse effects were noted? Does that mean that the treatment is safe? Sorry, but it most certainly does not! In order to generate reliable results about possibly rare complications, the study would have needed to include not 50 but well over 50 000 patients.
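The arithmetic behind this point is straightforward: if a complication occurs in, say, 1 in 10 000 patients (a rate chosen purely for illustration), the probability that a study of n patients sees none of them is (1 − p)^n:

```python
# Probability that a study of n patients observes ZERO adverse events,
# given a true complication rate p (the rate here is purely illustrative).
def prob_zero_events(n: int, p: float) -> float:
    return (1 - p) ** n

rate = 1 / 10_000
print(f"n = 50:     {prob_zero_events(50, rate):.1%} chance of seeing no event")
print(f"n = 50,000: {prob_zero_events(50_000, rate):.1%} chance of seeing no event")
```

A 50-patient trial will report ‘no adverse events’ more than 99% of the time even if the complication is real; only with tens of thousands of patients does the absence of events become informative.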
So what does the study really tell us? I have pondered over this question for some time and arrived at the following answer: NOTHING!
Is that a bit harsh? Well, perhaps yes. And I will revise my verdict slightly: the study does tell us something, after all – chiropractors tend to confuse research with the promotion of very doubtful concepts at the expense of their patients. I think there is a name for this phenomenon: PSEUDO-SCIENCE.
Indian homeopaths recently published a clinical trial aimed at evaluating homeopathic treatment in the management of diabetic polyneuropathy. The condition affects many diabetic patients; its symptoms include tingling, numbness, burning sensation in the feet and pain, particularly at night. The best treatment consists of adequate metabolic control of the underlying diabetes. The pain can be severe and often does not respond adequately to conventional pain-killers. It is therefore obvious that any new, effective treatment would be more than welcome.
The new trial is a prospective observational study which was carried out from October 2005 to September 2009 by the Indian Central Council for Research in Homeopathy at its five Institutes. Patients suffering from diabetic polyneuropathy (DPN) were screened and enrolled in the study if they fulfilled the inclusion and exclusion criteria. The Diabetic Distal Symmetric Polyneuropathy Symptom Score (DDSPSS), developed by the Council, served as the primary outcome measure.
A total of 15 homeopathic medicines were identified after repertorizing the nosological symptoms and signs of the disease. The appropriate constitutional medicine was selected and prescribed in the 30, 200 and 1 M potencies on an individualized basis. Patients were followed up for 12 months.
Of 336 diabetics enrolled in the study, 247 patients who attended at least three follow-up appointments and baseline nerve conduction studies were included in the analysis. A statistically significant improvement in DDSPSS total score was found at 12 months. Most objective measures did not show significant improvements. Lycopodium clavatum (n = 132), Phosphorus (n = 27) and Sulphur (n = 26) were the most frequently prescribed homeopathic remedies.
From these results, the authors concluded that: “homeopathic medicines may be effective in managing the symptoms of DPN patients.”
Does this study tell us anything worth knowing? The short answer to this question, I am afraid, is NO.
Its weaknesses are all too obvious:
1) There was no control group.
2) Patients who did not come back to the follow-up appointments – presumably because they were not satisfied – were excluded from the analyses. The average benefit reported is thus likely to be a cherry-picked false positive result.
3) The primary outcome measure was not validated.
4) The observed positive effect on subjective symptoms could be due to several factors which are entirely unrelated to the homeopathic treatment, e.g. better metabolic control, regression towards the mean, or social desirability.
Anyone who had seen the protocol of this study would have predicted the result; I see no way that such a study does not generate an apparently positive outcome. In other words, conducting the investigation was superfluous, which means that the patients’ participation was in vain; and this, in turn, means that the trial was arguably unethical.
This might sound a bit harsh, but I am entirely serious: deeply flawed research should not happen. It is a waste of scarce resources and patients’ tolerance; crucially, it has a powerful potential to mislead us and to set back our efforts to improve health care. All of this is unethical.
The problem of research which is so poor that it crosses the line into being unethical is, of course, not confined to homeopathy. In my view, it is an important issue in much of alternative medicine and quite possibly in conventional medicine as well. Over the years, several mechanisms have been put in place to prevent or at least minimize the problem, for instance, ethic committees and peer-review. The present study shows, I think, that these mechanisms are fragile and that, sometimes, they fail altogether.
In their article, the authors of the new homeopathic study suggest that more investigations of homeopathy for diabetic polyneuropathy should be done. However, I suggest almost precisely the opposite: unethical research of this nature should be prevented, and the existing mechanisms to achieve this aim must be strengthened.
Neck pain is a common problem which is often far from easy to treat. Numerous therapies are being promoted but few are supported by good evidence. Could yoga be the solution?
The aim of a brand-new RCT was to evaluate the effectiveness of Iyengar yoga for chronic non-specific neck pain. Patients were randomly assigned to either yoga or exercise. The yoga group attended a 9-week yoga course, while the exercise group received a self-care manual on home-based exercises for neck pain. The primary outcome measure was neck pain. Secondary outcome measures included functional disability, pain at motion, health-related quality of life, cervical range of motion, proprioceptive acuity, and pressure pain threshold. Fifty-one patients participated in the study: yoga (n = 25), exercise (n = 26). At the end of the treatment phase, patients in the yoga group reported significantly less neck pain, less disability, and better mental quality of life compared with the exercise group. Range of motion and proprioceptive acuity were improved and the pressure pain threshold was elevated in the yoga group.
The authors draw the following conclusion: “Yoga was more effective in relieving chronic nonspecific neck pain than a home-based exercise program. Yoga reduced neck pain intensity and disability and improved health-related quality of life. Moreover, yoga seems to influence the functional status of neck muscles, as indicated by improvement of physiological measures of neck pain.”
I’d love to agree with the authors and would be more than delighted, if an effective treatment for neck pain had been identified. Yoga is fairly safe and inexpensive; it promotes a generally healthy life-style, and is attractive to many patients; it has thus the potential to help thousands of suffering individuals. However, I fear that things might not be quite as rosy as the authors of this trial seem to believe.
The principle of an RCT essentially is that two groups of patients receive two different therapies and that any difference in outcome after the treatment phase is attributable to the therapy in question. Unfortunately, this is not the case here. One does not need to be an expert in critical thinking to realise that, in the present study, the positive outcome might be unrelated to yoga. For instance, it could be that the unsupervised home exercises were carried out wrongly and thus made the neck pain worse. In this case, the difference between the two treatment groups might not have been caused by yoga at all. A second possibility is that the yoga-group benefited not from the yoga itself but from the attention given to these patients which the exercise-group did not have. A third explanation could be that the yoga teachers were very kind to their patients, and that the patients returned their kindness by pretending to have fewer symptoms or exaggerating their improvements. In my view the most likely cause of the results seen in this study is a complex mixture of all the options just mentioned.
This study thus teaches us two valuable lessons: 1) whenever possible, RCTs should be designed such that a clear attribution of cause and effect is possible, once the results are on the table; 2) if cause and effect cannot be clearly defined, it is unwise to draw conclusions that are definite and have the potential to mislead the public.
This post has an odd title and addresses an odd subject. I am sure some people reading it will ask themselves “has he finally gone potty; is he a bit xenophobic, chauvinistic, or what?” I can assure you none of the above is the case.
For many years, I have been asked to peer-review Chinese systematic reviews and meta-analyses of TCM-trials submitted to various journals and to the Cochrane Collaboration for publication, and I estimate that around 300 such articles are available today. Initially, I thought they were a valuable contribution to our knowledge, particularly for the many of us who cannot read Chinese. I hoped they might provide reliable information about this huge and potentially important section of the TCM-evidence. After doing this type of work for some time, I became more and more frustrated; now I have decided not to accept this task any longer – not because it is too much trouble, but because I have come to the conclusion that these articles are far less helpful than I had once assumed; in fact, I now fear that they are counter-productive.
In order to better understand what I mean, it might be best to use an example; this recent systematic review seems as good for that purpose as any.
Its Chinese authors “hypothesized that the eligible trials would provide evidence of the effect of Chinese herbs on bone mineral density (BMD) and the therapeutic benefits of Chinese medicine treatment in patients with bone loss“. Randomized controlled trials (RCTs) were thus retrieved for a systematic review from Medline and 8 Chinese databases. The authors identified 12 RCTs involving a total of 1816 patients. The studies compared Chinese herbs with placebo or standard anti-osteoporotic therapy. The pooled data from these RCTs showed that the change of BMD in the spine was more pronounced with Chinese herbs compared to the effects noted with placebo. Also, in the femoral neck, Chinese herbs generated significantly higher increments of BMD compared to placebo. Compared to conventional anti-osteoporotic drugs, Chinese herbs generated greater BMD changes.
In their abstract, the part of the paper that most readers access, the authors reached the following conclusions: “Our results demonstrated that Chinese herb significantly increased lumbar spine BMD as compared to the placebo or other standard anti-osteoporotic drugs.” In the article itself, we find this more detailed conclusion: “We conclude that Chinese herbs substantially increased BMD of the lumbar spine compared to placebo or anti-osteoporotic drugs as indicated in the current clinical reports on osteoporosis treatment. Long term of Chinese herbs over 12 months of treatment duration may increase BMD in the hip more effectively. However, further studies are needed to corroborate the positive effect of increasing the duration of Chinese herbs on outcome as the results in this analysis are based on indirect comparisons. To date there are no studies available that compare Chinese herbs, Chinese herbs plus anti-osteoporotic drugs, and anti-osteoporotic drug versus placebo in a factorial design. Consequently, we are unable to draw any conclusions on the possible superiority of Chinese herbs plus anti-osteoporotic drug versus anti-osteoporotic drug or Chinese herb alone in the context of BMD.“
Most readers will feel that this evidence is quite impressive and amazingly solid; they might therefore advocate routinely using Chinese herbs for the common and difficult to treat problem of osteoporosis. The integration of TCM might avoid lots of human suffering, prolong the life of many elderly patients, and save us all a lot of money. Why then am I not at all convinced?
The first thing to notice is the fact that we do not really know which of the ~7000 different Chinese herbs should be used. The article tells us surprisingly little about this crucial point. And even, if we manage to study this question in more depth, we are bound to get thoroughly confused; there are simply too many herbal mixtures and patent medicines to easily identify the most promising candidates.
The second and more important hurdle to making sense of these data is the fact that most of the primary studies originate from inaccessible Chinese journals and were published in Chinese which, of course, few people in the West can understand. This is entirely our fault, some might argue, but it does mean that we have to believe the authors, take their words at face value, and cannot check the original data. You may think this is fine; after all, the paper has gone through a rigorous peer-review process where it has been thoroughly checked by several top experts in the field. This, however, is a fallacy; like you and me, the peer-reviewers might not read Chinese either! (I don’t, and I reviewed quite a few of these papers; in some instances, I even asked for translations of the originals to do the job properly, but this request was understandably turned down.) In all likelihood, the above paper and most similar articles have not been properly peer-reviewed at all.
The third and perhaps most crucial point can only be fully appreciated, if we were able to access and understand the primary studies; it relates to the quality of the original RCTs summarised in such systematic reviews. The abstract of the present paper tells us nothing at all about this issue. In the paper, however, we do find a formal assessment of the studies’ risk of bias which shows that the quality of the included RCTs was poor to very poor. We also find a short but revealing sentence: “The reports of all trials mentioned randomization, but only seven described the method of randomization.” This remark is much more significant than it may seem: we have shown that such studies use such terminology in a rather adventurous way; reviewing about 2000 of these allegedly randomised trials, we found that many Chinese authors call a trial “randomised” even in the absence of a control group (one cannot randomise patients and have no control group)! They seem to like the term because it is fashionable and makes publication of their work easier. We thus have good reason to fear that some/many/most of the studies were not RCTs at all.
The fourth issue that needs mentioning is the fact that very close to 100% of all Chinese TCM-trials report positive findings. This means that either TCM is effective for every indication it is tested for (most unlikely, not least because there are many negative non-Chinese trials of TCM), or there is something very fundamentally wrong with Chinese research into TCM. Over the years, I have had several Chinese co-workers in my team and was invariably impressed by their ability to work hard and efficiently; we often discussed the possible reasons for the extraordinary phenomenon of 0% negative Chinese trials. The most plausible answer they offered was this: it would be most impolite for a Chinese researcher to produce findings which contradict the opinion of his/her peers.
In view of these concerns, can we trust the conclusions of such systematic reviews? I don’t think so – and this is why I have problems with research of this nature. If there are good reasons to doubt their conclusions, these reviews might misinform us systematically, they might not further but hinder progress, and they might send us up the garden path. This could well be in the commercial interest of the Chinese multi-billion dollar TCM-industry, but it would certainly not be in the interest of patients and good health care.
During the last decade, Professor Claudia Witt and co-workers from the Charite in Berlin have published more studies of homeopathy than any other research group. Many of their conclusions are over-optimistic and worryingly uncritical, in my view. Their latest article is on homeopathy as a treatment of eczema. As it happens, I have recently published a systematic review of this subject; it concluded that “the evidence from controlled clinical trials… fails to show that homeopathy is an efficacious treatment for eczema“. The question therefore arises whether the latest publication of the Berlin team changes my conclusion in any way.
Their new article describes a prospective multi-centre study which included 135 children with mild to moderate atopic eczema. The parents of the kids enrolled in this trial were able to choose either homeopathic or conventional doctors for their children who treated them as they saw fit. The article gives only scant details about the actual treatments administered. The main outcome of the study was a validated symptom score at 36 months. Further endpoints included quality of life, conventional medicine consumption, safety and disease related costs at six, 12 and 36 months.
The results showed no significant differences between the groups at 36 months. However, the children treated conventionally seemed to improve quicker than those in the homeopathy group. The total costs were about twice as high in the homoeopathic compared to the conventional group. The authors conclude as follows: “Taking patient preferences into account, while being unable to rule out residual confounding, in this long-term observational study, the effects of homoeopathic treatment were not superior to conventional treatment for children with mild to moderate atopic eczema, but involved higher costs“.
At least one previous report of this study has been available for some time and had thus been included in my systematic review. It is therefore unlikely that this new analysis might change my conclusion, particularly as the trial by Witt et al has many flaws. Here are just some of the most obvious ones:
- Patients were selected according to parents’ preferences. This means expectations could have played an important role. It also means that the groups were not comparable in various, potentially important prognostic variables.
- Even though much of the article reads as though the homeopaths exclusively employed homeopathic remedies, the truth is that both groups received similar amounts of conventional care and treatments. In other words, the study followed an ‘A+B versus B’ design (here is the sentence that best gives the game away: “At 36 months the frequency of daily basic skin care was… comparable in both groups, as was the number of different medications (including corticosteroids and antihistamines)…”). I have previously stated that this type of study-design can never produce a negative result because A+B is always more than B.
Yet, at first glance, this new study seems to prove my thesis wrong: even though the parents chose their preferred options, and even though all patients were treated conventionally, the addition of homeopathy to conventional care failed to produce a better clinical outcome. On the contrary, the homeopathically treated kids had to wait longer for their symptoms to ease. The only significant difference was that the addition of homeopathy to conventional eczema treatments was much more expensive than conventional therapy alone (this finding is less than remarkable: even the most useless additional intervention costs money).
So, is my theory about ‘A+B versus B’ study-designs wrong? I don’t think so. If B equals zero, one would expect exactly the finding Witt et al produced: A+0=A. In turn, this is no compliment for the homeopaths of this study: they seem to have been incapable of even generating a placebo-response. And this might indicate that homeopathy was not even useful as a means to generate a placebo-response. Whatever interpretation one adopts, this study tells us very little of value (as children often grow out of eczema, we cannot even be sure whether the results are not simply a reflection of the natural history of the disease); in my view, it merely demonstrates that weak study designs can only create weak findings which, in this particular case, are next to useless.
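The point that A+0=A can be made concrete with a toy simulation: give both groups the same conventional-care effect, let homeopathy add exactly nothing, and the two arms become indistinguishable (all effect sizes are invented for illustration):

```python
import random

random.seed(42)

CONVENTIONAL_EFFECT = 2.0  # mean symptom reduction from conventional care (A)
HOMEOPATHY_EFFECT = 0.0    # what homeopathy (B) adds on top: nothing
NOISE = 1.0                # individual patient variability

def outcome(extra_effect: float) -> float:
    return CONVENTIONAL_EFFECT + extra_effect + random.gauss(0, NOISE)

a_alone = [outcome(0.0) for _ in range(10_000)]
a_plus_b = [outcome(HOMEOPATHY_EFFECT) for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"A alone: {mean(a_alone):.2f}   A+B: {mean(a_plus_b):.2f}")
```

Both arms improve by the same amount; the only difference such a trial can reliably detect is the extra cost of B.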
The study was sponsored by the Robert Bosch Stiftung, an organisation which claims to be dedicated to excellence in research and which has, in the past, spent millions on researching homeopathy. It seems doubtful that trials of this calibre can live up to any claim of excellence. In any case, the new analysis is certainly no reason to change the conclusion of my systematic review.
To their credit, Witt et al are well aware of the many weaknesses of their study. Perhaps in an attempt to make them appear less glaring, they stress that “the aim of this study was to reflect the real world situation“. Usually I do not accept the argument that pragmatic trials cannot be rigorous – but I think Witt et al do have a point here: the real world tells us that homeopathic remedies are pure placebos!
The ‘Samueli Institute’ might be known to many readers of this blog; it is a wealthy institution that is almost entirely dedicated to promoting the more implausible fringe of alternative medicine. The official aim is “to create a flourishing society through the scientific exploration of wellness and whole-person healing“. Much of its activity seems to be focused on military medical research. Its co-workers include Harald Walach who recently was awarded a rare distinction for his relentless efforts in introducing esoteric pseudo-science into academia.
Now researchers from the Californian branch of the Samueli Institute have published an article which, in my view, is another landmark in nonsense.
Jain and colleagues conducted a randomized controlled trial to determine whether Healing Touch with Guided Imagery [HT+GI] reduced post-traumatic stress disorder (PTSD) compared to treatment as usual (TAU) in “returning combat-exposed active duty military with significant PTSD symptoms“. HT is a popular form of para-normal healing where the therapist channels “energy” into the patient’s body; GI is a self-hypnotic form of relaxation therapy. While the latter approach might be seen as plausible and, at least to some degree, evidence-based, the former cannot be.
123 soldiers were randomized to 6 sessions of HT+GI, while the control group had no such therapies. All patients also received standard conventional therapies, and the treatment period was three weeks. The results showed significant reductions in PTSD symptoms as well as depression for HT+GI compared to controls. HT+GI also showed significant improvements in mental quality of life and cynicism.
The authors concluded that HT+GI resulted in a clinically significant reduction in PTSD and related symptoms, and that further investigations of biofield therapies for mitigating PTSD in military populations are warranted.
The Samueli Institute claims to “support science grounded in observation, investigation, and analysis, and [to have] the courage to ask challenging questions within a framework of systematic, high-quality, research methods and the peer-review process“. I do not think that the above-named paper lives up to these standards.
As discussed in some detail in a previous post, this type of study-design is next to useless for determining whether any intervention does any good at all: A+B is always more than B alone! Moreover, if we test HT+GI as a package, how can we conclude about the effectiveness of one of the two interventions? Thus this trial tells us next to nothing about the effectiveness of HT, nor about the effectiveness of HT+GI.
Previously, I have argued that conducting a trial for which the result is already clear before the first patient has been recruited, is not ethical. Samueli Institute, however, claims that it “acts with the highest respect for the public it serves by ensuring transparency, responsible management and ethical practices from discovery to policy and application“. Am I the only one who senses a contradiction here?
Perhaps other research in this area might be more informative? Even the most superficial Medline-search brings to light a flurry of articles on HT and other biofield therapies that are relevant.
Several trials have indeed produced promising evidence suggesting positive effects of such treatments on anxiety and other symptoms. But the data are far from uniform, and most investigations are wide open to bias. The more rigorous studies seem to suggest that these interventions are not effective beyond placebo. Our review demonstrated that “the evidence is insufficient” to suggest that reiki, another biofield therapy, is an effective treatment for any condition.
Another study showed that tactile touch led to significantly lower levels of anxiety. Conventional massage may even be better than HT, according to some trials. The conclusion from this body of evidence is, I think, fairly obvious: touch can be helpful (most clinicians knew that anyway) but this has nothing to do with energy, biofields, healing energy or any of the other implausible assumptions these treatments are based on.
I therefore disagree with the authors’ conclusion that “further investigation into biofield therapies… is warranted“. If we really want to help patients, let’s find out more about the benefits of touch and let’s not mislead the public about some mystical energies and implausible quackery. And if we truly want to improve health care, as the Samueli Institute claims, let’s use our limited resources for research which meaningfully contributes to our knowledge.
As I am drafting this post, I am in a plane flying back from Finland. The in-flight meal reminded me of the fact that no food is so delicious that it cannot be spoilt by the addition of too many capers. In turn, this made me think about the paper I happened to be reading at the time, and I arrived at the following theory: no trial design is so rigorous that it cannot be turned into something utterly nonsensical by the addition of a few amateur researchers.
The paper I was reading when this idea occurred to me was a randomised, triple-blind, placebo-controlled cross-over trial of homeopathy. Sounds rigorous and top quality? Yes, but wait!
Essentially, the authors recruited 86 volunteers who all claimed to be suffering from “mental fatigue” and treated them with Kali-Phos 6X or placebo for one week (X-potencies signify dilution steps of 1:10, and 6X therefore means that the salt had been diluted 1:1,000,000). Subsequently, the volunteers were crossed over to receive the other treatment for one week.
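The potency arithmetic is easy to check: each X step is a 1:10 dilution, so an nX potency leaves a fraction of 10**-n of the original substance. A trivial sketch:

```python
# Each 'X' step in homeopathic notation is a 1:10 dilution,
# so an nX potency corresponds to a fraction of 10**-n of the original salt.
def dilution_factor(x_potency: int) -> float:
    return 10.0 ** -x_potency

print(f"6X leaves one part in {round(1 / dilution_factor(6)):,}")
```

At 6X a little of the salt is thus still present; the remedy’s implausibility only grows with higher potencies such as the 200 and 1 M preparations mentioned elsewhere on this blog.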
The results failed to show that the homeopathic medication had any effect (not even homeopaths can be surprised about this!). The authors concluded that Kali-Phos was not effective but cautioned that, because of the possibility of a type-2-error, they might have missed an effect which, in truth, does exist.
In my view, this article provides an almost classic example of how time, money and other resources can be wasted in a pretence of conducting reasonable research. As we all know, clinical trials usually are for testing hypotheses. But what is the hypothesis tested here?
According to the authors, the aim was to “assess the effectiveness of Kali-Phos 6X for attention problems associated with mental fatigue”. In other words, their hypothesis was that this remedy is effective for treating the symptom of mental fatigue. This notion, I would claim, is not a scientific hypothesis, it is a foolish conjecture!
Arguably any hypothesis about the effectiveness of a highly diluted homeopathic remedy is mere wishful thinking. But, if there were at least some promising data, some might conclude that a trial was justified. By way of justification for the RCT in question, the authors inform us that one previous trial had suggested an effect; however, this study did not employ just Kali-Phos but a combined homeopathic preparation which contained Kalium-Phos as one of several components. Thus the authors’ “hypothesis” does not even amount to a hunch, not even to a slight inkling! To me, it is less than a shot in the dark fired by blind optimists – nobody should be surprised that the bullet failed to hit anything.
It could even be that the investigators themselves dimly realised that something is amiss with the basis of their study; this might be the reason why they called it an “exploratory trial”. But an exploratory study is one without a hypothesis, and the trial in question does have a hypothesis of sorts – only that it is rubbish. And what exactly did the authors mean to explore anyway?
That self-reported mental fatigue in healthy volunteers is a condition that can be medicalised such that it merits treatment?
That the test they used for quantifying its severity is adequate?
That a homeopathic remedy with virtually no active ingredient generates outcomes which are different from placebo?
That Hahnemann’s teaching of homeopathy was nonsense and can thus be discarded (he would have sharply condemned the approach of treating all volunteers with the same remedy, as it contradicts many of his concepts)?
That funding bodies can be fooled into paying for even the most ridiculous trial?
That ethics-committees might pass applications which are pure nonsense and which are thus unethical?
A scientific hypothesis should be more than a vague hunch; at its simplest, it aims to explain an observation or phenomenon, and it ought to have certain features which many alt med researchers seem to have never heard of. If they test nonsense, the result can only be nonsense.
The issue of conducting research that does not make much sense is far from trivial, particularly as so much (I would say most) of alt med research is of this or even worse calibre (if you do not believe me, please go on Medline and see for yourself how many of the recent articles in the category “complementary alternative medicine” truly contribute to knowledge worth knowing). It would therefore be easy to cite more hypothesis-free trials of homeopathy.
One recent example from Germany will have to suffice: in this trial, the only justification for conducting a full-blown RCT was that the manufacturer of the remedy allegedly knew of a few unpublished case-reports which suggested the treatment to work – and, of course, the results of the RCT eventually showed that it didn’t. Anyone with a background in science might have predicted that outcome – which is why such trials are so deplorably wasteful.
Research-funds are increasingly scarce, and they must not be spent on nonsensical projects! The money and time should be invested more fruitfully elsewhere. Participants in clinical trials give their cooperation willingly; but if they learn that their efforts have been wasted, they might think twice next time they are asked. Thus nonsensical research may have knock-on effects with far-reaching consequences.
Being a researcher is at least as serious a profession as most other occupations; perhaps we should stop allowing total amateurs to waste money while playing at being professional. If someone driving a car does something seriously wrong, we take away his licence; why is there not a similar mechanism for inadequate researchers, funders and ethics-committees which prevents them from doing further damage?
At the very minimum, we should critically evaluate the hypothesis that the applicants for research-funds propose to test. Had someone done this properly in relation to the two above-named studies, we would have saved about £150,000 per trial (my estimate). But as it stands, the authors will probably claim that they have produced fascinating findings which urgently need further investigation – and we (normally you and I) will have to spend three times the above-named amount (again, my estimate) to finance a “definitive” trial. Nonsense, I am afraid, tends to beget more nonsense.