The ‘Samueli Institute’ might be known to many readers of this blog; it is a wealthy institution that is almost entirely dedicated to promoting the more implausible fringe of alternative medicine. The official aim is “to create a flourishing society through the scientific exploration of wellness and whole-person healing”. Much of its activity seems to be focused on military medical research. Its co-workers include Harald Walach, who was recently awarded a rare distinction for his relentless efforts in introducing esoteric pseudo-science into academia.
Now researchers from the Californian branch of the Samueli Institute have published an article which, in my view, is another landmark in nonsense.
Jain and colleagues conducted a randomized controlled trial to determine whether Healing Touch with Guided Imagery [HT+GI] reduced post-traumatic stress disorder (PTSD) compared to treatment as usual (TAU) in “returning combat-exposed active duty military with significant PTSD symptoms”. HT is a popular form of paranormal healing where the therapist channels “energy” into the patient’s body; GI is a self-hypnotic form of relaxation-therapy. While the latter approach might be seen as plausible and, at least to some degree, evidence-based, the former cannot.
123 soldiers were randomized to receive either 6 sessions of HT+GI or no such therapies. All patients also received standard conventional therapies, and the treatment period was three weeks. The results showed significant reductions in PTSD symptoms as well as depression for HT+GI compared to controls. HT+GI also showed significant improvements in mental quality of life and cynicism.
The authors concluded that HT+GI resulted in a clinically significant reduction in PTSD and related symptoms, and that further investigations of biofield therapies for mitigating PTSD in military populations are warranted.
The Samueli Institute claims to “support science grounded in observation, investigation, and analysis, and [to have] the courage to ask challenging questions within a framework of systematic, high-quality, research methods and the peer-review process”. I do not think that the above-named paper lives up to these standards.
As discussed in some detail in a previous post, this type of study-design is next to useless for determining whether any intervention does any good at all: A+B is always more than B alone! Moreover, if we test HT+GI as a package, how can we draw conclusions about the effectiveness of either of the two interventions? Thus this trial tells us next to nothing about the effectiveness of HT, nor about the effectiveness of HT+GI.
Previously, I have argued that conducting a trial for which the result is already clear before the first patient has been recruited, is not ethical. Samueli Institute, however, claims that it “acts with the highest respect for the public it serves by ensuring transparency, responsible management and ethical practices from discovery to policy and application”. Am I the only one who senses a contradiction here?
Perhaps other research in this area might be more informative? Even the most superficial Medline-search brings to light a flurry of articles on HT and other biofield therapies that are relevant.
Several trials have indeed produced promising evidence suggesting positive effects of such treatments on anxiety and other symptoms. But the data are far from uniform, and most investigations are wide open to bias. The more rigorous studies seem to suggest that these interventions are not effective beyond placebo. Our review demonstrated that “the evidence is insufficient” to suggest that reiki, another biofield therapy, is an effective treatment for any condition.
Another study showed that tactile touch led to significantly lower levels of anxiety. Conventional massage may even be better than HT, according to some trials. The conclusion from this body of evidence is, I think, fairly obvious: touch can be helpful (most clinicians knew that anyway) but this has nothing to do with energy, biofields, healing energy or any of the other implausible assumptions these treatments are based on.
I therefore disagree with the authors’ conclusion that “further investigation into biofield therapies… is warranted”. If we really want to help patients, let’s find out more about the benefits of touch and let’s not mislead the public about some mystical energies and implausible quackery. And if we truly want to improve health care, as the Samueli Institute claims, let’s use our limited resources for research which meaningfully contributes to our knowledge.
As I am drafting this post, I am in a plane flying back from Finland. The in-flight meal reminded me of the fact that no food is so delicious that it cannot be spoilt by the addition of too many capers. In turn, this made me think about the paper I happened to be reading at the time, and I arrived at the following theory: no trial design is so rigorous that it cannot be turned into something utterly nonsensical by the addition of a few amateur researchers.
The paper I was reading when this idea occurred to me was a randomised, triple-blind, placebo-controlled cross-over trial of homeopathy. Sounds rigorous and top quality? Yes, but wait!
Essentially, the authors recruited 86 volunteers who all claimed to be suffering from “mental fatigue” and treated them with Kali-Phos 6X or placebo for one week (X-potencies signify dilution steps of 1:10, and 6X therefore means that the salt had been diluted 1:1,000,000). Subsequently, the volunteers were crossed over to receive the other treatment for one week.
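For readers unfamiliar with X-potencies, the arithmetic can be sketched in a few lines. The 100 mg tablet mass below is my own assumption, purely for illustration; only the 1:10-per-step dilution rule comes from the text:

```python
# Illustrative arithmetic for homeopathic X-potencies.
# Each X step dilutes the preparation 1:10, so an nX potency
# contains the mother substance at a fraction of 10**-n.

def dilution_fraction(n_x: int) -> float:
    """Fraction of the original substance remaining after n 1:10 steps."""
    return 10.0 ** -n_x

# Kali-Phos 6X: six 1:10 steps = one part in a million
fraction = dilution_fraction(6)
print(fraction)

# Assuming (hypothetically) a 100 mg tablet medicated at 6X,
# the mass of the original salt per tablet would be:
tablet_mass_mg = 100.0  # assumed tablet mass, for illustration only
salt_mass_mg = tablet_mass_mg * fraction
print(salt_mass_mg)  # about 0.0001 mg, i.e. roughly 0.1 micrograms of salt
```

Unlike the high C-potencies, a 6X remedy does still contain some material substance, just vanishingly little of it.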
The results failed to show that the homeopathic medication had any effect (not even homeopaths can be surprised about this!). The authors concluded that Kali-Phos was not effective but cautioned that, because of the possibility of a type II error, they might have missed an effect which, in truth, does exist.
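The authors’ caution about a type II error can be put into rough perspective with a back-of-the-envelope power calculation. The normal-approximation formula and the candidate effect sizes below are my own assumptions, not figures taken from the paper:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def approx_power(effect_size_d: float, n_pairs: int) -> float:
    """Rough power of a paired test (normal approximation, two-sided 5% alpha)."""
    z_crit = 1.96  # two-sided 5% critical value
    return normal_cdf(effect_size_d * math.sqrt(n_pairs) - z_crit)

# With 86 cross-over participants, a small effect (d = 0.2)
# would be detected less than half the time...
print(round(approx_power(0.2, 86), 2))
# ...while a modest effect (d = 0.3) is detected roughly 80% of the time.
print(round(approx_power(0.3, 86), 2))
```

In other words, the trial was reasonably powered for anything but the tiniest effects, which makes the type-II-error caveat look like special pleading.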
In my view, this article provides an almost classic example of how time, money and other resources can be wasted in a pretence of conducting reasonable research. As we all know, clinical trials usually are for testing hypotheses. But what is the hypothesis tested here?
According to the authors, the aim was to “assess the effectiveness of Kali-Phos 6X for attention problems associated with mental fatigue”. In other words, their hypothesis was that this remedy is effective for treating the symptom of mental fatigue. This notion, I would claim, is not a scientific hypothesis, it is a foolish conjecture!
Arguably any hypothesis about the effectiveness of a highly diluted homeopathic remedy is mere wishful thinking. But, if there were at least some promising data, some might conclude that a trial was justified. By way of justification for the RCT in question, the authors inform us that one previous trial had suggested an effect; however, this study did not employ just Kali-Phos but a combined homeopathic preparation which contained Kalium-Phos as one of several components. Thus the authors’ “hypothesis” does not even amount to a hunch, not even to a slight inkling! To me, it is less than a shot in the dark fired by blind optimists – nobody should be surprised that the bullet failed to hit anything.
It could even be that the investigators themselves dimly realised that something was amiss with the basis of their study; this might be the reason why they called it an “exploratory trial”. But an exploratory study is one without a hypothesis, and the trial in question does have a hypothesis of sorts – only that it is rubbish. And what exactly did the authors mean to explore anyway?
That self-reported mental fatigue in healthy volunteers is a condition that can be medicalised such that it merits treatment?
That the test they used for quantifying its severity is adequate?
That a homeopathic remedy with virtually no active ingredient generates outcomes which are different from placebo?
That Hahnemann’s teaching of homeopathy was nonsense and can thus be discarded (he would have sharply condemned the approach of treating all volunteers with the same remedy, as it contradicts many of his concepts)?
That funding bodies can be fooled to pay for even the most ridiculous trial?
That ethics-committees might pass applications which are pure nonsense and which are thus unethical?
A scientific hypothesis should be more than a vague hunch; at its simplest, it aims to explain an observation or phenomenon, and it ought to have certain features which many alt med researchers seem to have never heard of. If they test nonsense, the result can only be nonsense.
The issue of conducting research that does not make much sense is far from trivial, particularly as so much (I would say most) of alt med research is of such or even worse calibre (if you do not believe me, please go on Medline and see for yourself how many of the recent articles in the category “complementary alternative medicine” truly contribute to knowledge worth knowing). It would be easy therefore to cite more hypothesis-free trials of homeopathy.
One recent example from Germany will have to suffice: in this trial, the only justification for conducting a full-blown RCT was that the manufacturer of the remedy allegedly knew of a few unpublished case-reports which suggested the treatment to work – and, of course, the results of the RCT eventually showed that it didn’t. Anyone with a background in science might have predicted that outcome – which is why such trials are so deplorably wasteful.
Research-funds are increasingly scarce, and they must not be spent on nonsensical projects! The money and time should be invested more fruitfully elsewhere. Participants of clinical trials give their cooperation willingly; but if they learn that their efforts have been wasted unnecessarily, they might think twice next time they are asked. Thus nonsensical research may have knock-on effects with far-reaching consequences.
Being a researcher is at least as serious a profession as most other occupations; perhaps we should stop allowing total amateurs to waste money while playing at being professional. If someone driving a car does something seriously wrong, we take away his licence; why is there no similar mechanism for inadequate researchers, funders and ethics-committees which prevents them from doing further damage?
At the very minimum, we should critically evaluate the hypothesis that the applicants for research-funds propose to test. Had someone done this properly in relation to the two above-named studies, we would have saved about £150,000 per trial (my estimate). But as it stands, the authors will probably claim that they have produced fascinating findings which urgently need further investigation – and we (normally you and I) will have to spend three times the above-named amount (again, my estimate) to finance a “definitive” trial. Nonsense, I am afraid, tends to beget more nonsense.
In my last post, we discussed the “A+B versus B” trial design as a tool to produce false positive results. This method is currently very popular in alternative medicine, yet it is by no means the only approach that can mislead us. Today, let’s look at other popular options with a view to protecting ourselves against trialists who naively or wilfully might fool us.
The crucial flaw of the “A+B versus B” design is that it fails to account for non-specific effects. If the patients in the experimental group experience better outcomes than the control group, this difference could well be due to effects that are unrelated to the experimental treatment. There are, of course, several further ways to ignore non-specific effects in clinical research. The simplest option is to include no control group at all. Homeopaths, for instance, are very proud of studies which show that ~70% of their patients experience benefit after taking their remedies. This type of result tends to impress journalists, politicians and other people who fail to realise that such a result might be due to a host of factors, e.g. the placebo-effect, the natural history of the disease, regression towards the mean or treatments which patients self-administered while taking the homeopathic remedies. It is therefore misleading to make causal inferences from such data.
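One of these non-specific factors, regression towards the mean, is easy to demonstrate with a toy simulation (all numbers here are invented for illustration): patients selected precisely because their symptom score was unusually high at baseline will, on average, score lower at follow-up even with no treatment at all.

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is reproducible

# Each "patient" has a stable true symptom level plus measurement noise.
def measure(true_level: float) -> float:
    return true_level + random.gauss(0.0, 10.0)

patients = [random.gauss(50.0, 10.0) for _ in range(10_000)]

# Enrol only those who look bad at baseline (score > 65) -- a typical
# entry criterion in trials of symptomatic treatments.
enrolled = [(t, b) for t in patients if (b := measure(t)) > 65.0]

baseline = statistics.mean(b for _, b in enrolled)
follow_up = statistics.mean(measure(t) for t, _ in enrolled)

# Follow-up scores are markedly lower than baseline despite zero treatment:
print(round(baseline, 1), round(follow_up, 1))
print(round(baseline - follow_up, 1))
```

Any therapy, effective or not, administered to such a selected group will appear to “work” for this reason alone, which is exactly why uncontrolled before-after comparisons prove nothing.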
Another easy method to generate false positive results is to omit blinding. The purpose of blinding the patient, the therapist and the evaluator of the outcomes in clinical trials is to make sure that expectation is not the cause of or contributor to the outcome. They say that expectation can move mountains; this might be an exaggeration, but it can certainly influence the result of a clinical trial. Patients who hope for a cure regularly do get better even if the therapy they receive is useless, and therapists as well as evaluators of the outcomes tend to view the results through rose-tinted spectacles, if they have preconceived ideas about the experimental treatment. Similarly, the parents of a child or the owners of an animal can transfer their expectations, and this is one of several reasons why it is incorrect to claim that children and animals are immune to placebo-effects.
Failure to randomise is another source of bias which can make an ineffective therapy look like an effective one when tested in a clinical trial. If we allow patients or trialists to select or choose which patients receive the experimental and which get the control-treatment, it is likely that the two groups differ in a number of variables. Some of these variables might, in turn, impact on the outcome. If, for instance, doctors allocate their patients to the experimental and control groups, they might select those who will respond to the former and those who don’t to the latter. This may not happen with malicious intent but through intuition or instinct: responsible health care professionals want those patients who, in their experience, have the best chances to benefit from a given treatment to receive that treatment. Only randomisation can, when done properly, make sure we are comparing comparable groups of patients, and non-randomisation is likely to produce misleading findings.
While these options for producing false positives are all too obvious, the next possibility is slightly more intriguing. It refers to studies which do not test whether an experimental treatment is superior to another one (often called superiority trials), but to investigations attempting to assess whether it is equivalent to a therapy that is generally accepted to be effective. The idea is that, if both treatments produce the same or similarly positive results, both must be effective. For instance, such a study might compare the effects of acupuncture to a common pain-killer. Such trials are aptly called non-inferiority or equivalence trials, and they offer a wide range of possibilities for misleading us. If, for example, such a trial recruits too few patients, it might show no difference where, in fact, there is one. Let’s consider a deliberately silly example: someone comes up with the idea to compare antibiotics to acupuncture as treatments of bacterial pneumonia in elderly patients. The researchers recruit 10 patients for each group, and the results reveal that, in one group, 2 patients died, while, in the other, the number was 3. The statistical tests show that the difference of just one patient is not statistically significant, and the authors therefore conclude that acupuncture is just as good for bacterial infections as antibiotics.
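The silly pneumonia example can be made concrete with the numbers given above (10 patients per group, 2 versus 3 deaths). A Fisher exact test, computed here from first principles via the hypergeometric distribution, shows just how uninformative such a tiny trial is:

```python
from math import comb

def fisher_exact_two_sided(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]],
    summing all tables with the same margins that are no more likely
    than the observed one."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    total = comb(n, row1)

    def prob(k: int) -> float:
        # P(k events in row 1) under fixed margins (hypergeometric)
        return comb(col1, k) * comb(n - col1, row1 - k) / total

    p_obs = prob(a)
    return sum(prob(k)
               for k in range(max(0, col1 - row2), min(col1, row1) + 1)
               if prob(k) <= p_obs + 1e-12)

# Deaths / survivors: antibiotics 2/8, acupuncture 3/7
p = fisher_exact_two_sided(2, 8, 3, 7)
print(round(p, 3))  # 1.0 -- no evidence of a difference, and no power to find one
```

With groups this small, even a large true difference in death rates would usually fail to reach significance, so “no significant difference” here means almost nothing.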
Even trickier is the option to under-dose the treatment given to the control group in an equivalence trial. In our hypothetical example, the investigators might subsequently recruit hundreds of patients in an attempt to overcome the criticism of their first study; they then decide to administer a sub-therapeutic dose of the antibiotic in the control group. The results would then apparently confirm the researchers’ initial finding, namely that acupuncture is as good as the antibiotic for pneumonia. Acupuncturists might then claim that their treatment has been proven in a very large randomised clinical trial to be effective for treating this condition, and people who do not happen to know the correct dose of the antibiotic could easily be fooled into believing them.
Obviously, the results would be more impressive, if the control group in an equivalence trial received a therapy which is not just ineffective but actually harmful. In such a scenario, the most useless or even slightly detrimental treatment would appear to be effective simply because it is equivalent to or less harmful than the comparator.
A variation of this theme is the plethora of controlled clinical trials which compare one unproven therapy to another unproven treatment. Predictably, the results indicate that there is no difference in the clinical outcome experienced by the patients in the two groups. Enthusiastic researchers then tend to conclude that this proves both treatments to be equally effective.
Another option for creating misleadingly positive findings is to cherry-pick the results. Most trials have many outcome measures; for instance, a study of acupuncture for pain-control might quantify pain in half a dozen different ways; it might also measure the length of the treatment until pain has subsided, the amount of medication the patients took in addition to receiving acupuncture, the days off work because of pain, the partner’s impression of the patient’s health status, the quality of life of the patient, the frequency of sleep being disrupted by pain etc. If the researchers then evaluate all the results, they are likely to find that one or two of them have changed in the direction they wanted. This can well be a chance finding: with the typical statistical tests, one in 20 outcome measures would produce a significant result purely by chance. In order to mislead us, the researchers only need to “forget” about all the negative results and focus their publication on the ones which by chance have come out as they had hoped.
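The arithmetic behind this cherry-picking problem is straightforward. Assuming, for illustration, that the outcome measures are independent and each is tested at the conventional 5% significance level:

```python
# Probability of at least one "significant" result arising by pure chance
# when k independent outcomes are each tested at alpha = 0.05.

def p_at_least_one_false_positive(k: int, alpha: float = 0.05) -> float:
    return 1.0 - (1.0 - alpha) ** k

for k in (1, 6, 20):
    print(k, round(p_at_least_one_false_positive(k), 2))
# With 20 outcome measures, the chance of at least one spurious
# "significant" finding is about 64% -- better than a coin toss.
```

So a trial with 20 endpoints is more likely than not to hand its authors something “positive” to headline, even if the treatment does nothing whatsoever.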
One foolproof method for misleading the public is to draw conclusions which are not supported by the data. Imagine you have generated squarely negative data with a trial of homeopathy. As an enthusiast of homeopathy, you are far from happy with your own findings; in addition you might have a sponsor who puts pressure on you. What can you do? The solution is simple: you only need to highlight at least one positive message in the published article. In the case of homeopathy, you could, for instance, make a major issue about the fact that the treatment was remarkably safe and cheap: not a single patient died, most were very pleased with the treatment which was not even very expensive.
And finally, there is always the possibility of overt cheating. Researchers are only human and are thus not immune to temptation. They may have conflicts of interest or may know that positive results are much easier to publish than negative ones. Certainly they want to publish their work – “publish or perish”! So, faced with disappointing results of a study, they might decide to prettify them or even invent new ones which are more pleasing to them, their peers, or their sponsors.
Am I claiming that this sort of thing only happens in alternative medicine? No! Obviously, the way to minimise the risk of such misconduct is to train researchers properly and make sure they are able to think critically. Am I suggesting that investigators of alternative medicine are often not well-trained and almost always uncritical? Yes.
Would it not be nice to have a world where everything is positive? No negative findings ever! A dream! No, it’s not a dream; it is reality, albeit a reality that exists mostly in the narrow realm of alternative medicine research. Quite a while ago, we demonstrated that journals of alternative medicine never publish negative results. Meanwhile, my colleagues investigating acupuncture, homeopathy, chiropractic etc. seem to have perfected their strategy of avoiding the embarrassment of a negative finding.
For several years, researchers in this field have been using a study-design which is virtually certain to generate nothing but positive results. It is being employed widely by enthusiasts of placebo-therapies, and it is easy to understand why: it allows them to conduct seemingly rigorous trials which can impress decision-makers and invariably suggest that even the most useless treatment works wonders.
One of the latest examples of this type of approach is a trial where acupuncture was tested as a treatment of cancer-related fatigue. Most cancer patients suffer from this symptom which can seriously reduce their quality of life. Unfortunately there is little conventional oncologists can do about it, and therefore alternative practitioners have a field-day claiming that their interventions are effective. It goes without saying that desperate cancer victims fall for this.
In this new study, cancer patients who were suffering from fatigue were randomised to receive usual care or usual care plus regular acupuncture. The researchers then monitored the patients’ experience of fatigue and found that the acupuncture group did better than the control group. The effect was statistically significant, and an editorial in the journal where it was published called this evidence “compelling”.
Due to a cleverly over-stated press-release, news spread fast, and the study was celebrated worldwide as a major breakthrough in cancer-care. Finally, most commentators felt, research has identified an effective therapy for this debilitating symptom which affects so many of the most desperate patients. Few people seemed to realise that this trial tells us next to nothing about what effects acupuncture really has on cancer-related fatigue.
In order to understand my concern, we need to look at the trial-design a little closer. Imagine you have an amount of money A and your friend owns the same sum plus another amount B. Who has more money? Simple, it is, of course your friend: A+B will always be more than A [unless B is a negative amount]. For the same reason, such “pragmatic” trials will always generate positive results [unless the treatment in question does actual harm]. Treatment as usual plus acupuncture is more than treatment as usual, and the former is therefore more than likely to produce a better result. This will be true, even if acupuncture is no more than a placebo – after all, a placebo is more than nothing, and the placebo effect will impact on the outcome, particularly if we are dealing with a highly subjective symptom such as fatigue.
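A toy simulation (all numbers invented for illustration) makes the “A+B versus B” point concrete: even when the added treatment is a pure placebo, the add-on arm comes out ahead.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

def outcome(placebo_boost: float) -> float:
    """Symptom improvement: usual-care effect + noise + any placebo response."""
    usual_care = 10.0  # assumed average improvement from usual care alone
    return usual_care + placebo_boost + random.gauss(0.0, 5.0)

n = 500
control = [outcome(0.0) for _ in range(n)]  # B: usual care alone
add_on = [outcome(3.0) for _ in range(n)]   # A+B: usual care + inert add-on
                                            # (3.0 = assumed placebo response)

diff = statistics.mean(add_on) - statistics.mean(control)
print(round(diff, 1))  # the A+B arm "wins", although A itself is inert
```

The design cannot distinguish this scenario from a genuinely effective add-on treatment, which is precisely why it always delivers a positive headline.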
I can be fairly confident that this is more than a theory because, some time ago, we analysed all acupuncture studies with such an “A+B versus B” design. Our hypothesis was that none of these trials would generate a negative result. I probably do not need to tell you that our hypothesis was confirmed by the findings of our analysis. Theory and fact are in perfect harmony.
You might say that the above-mentioned acupuncture trial does still provide important information. Its authors certainly think so and firmly conclude that “acupuncture is an effective intervention for managing the symptom of cancer-related fatigue and improving patients’ quality of life”. Authors of similarly designed trials will most likely arrive at similar conclusions. But, if they are true, they must be important!
Are they true? Such studies appear to be rigorous – e.g. they are randomised – and thus can fool a lot of people, but they do not allow conclusions about cause and effect; in other words, they fail to show that the therapy in question has led to the observed result.
Acupuncture might be utterly ineffective as a treatment of cancer-related fatigue, and the observed outcome might be due to the extra care, to a placebo-response or to other non-specific effects. And this is much more than a theoretical concern: rolling out acupuncture across all oncology centres at high cost to us all might be entirely the wrong solution. Providing good care and warm sympathy could be much more effective as well as less expensive. Adopting acupuncture on a grand scale would also stop us looking for a treatment that is truly effective beyond a placebo – and that surely would not be in the best interest of the patient.
I have seen far too many of those bogus studies to have much patience left. They do not represent an honest test of anything, simply because we know their result even before the trial has started. They are not science but thinly disguised promotion. They are not just a waste of money, they are dangerous – because they produce misleading results – and they are thus also unethical.