Some consumers believe that research is by definition reliable, and patients are even more prone to this error. When they read or hear that ‘RESEARCH HAS SHOWN…’ or that ‘A RECENT STUDY HAS DEMONSTRATED…’, they usually trust the statements that follow. But is this trust in research and researchers justified? During the 25 years that I have been involved in so-called alternative medicine (SCAM), I have encountered numerous instances that make me doubt it. In this post, I will briefly discuss some of the many ways in which consumers can be misled by apparently sound evidence (for an explanation as to what is and what isn’t evidence, see here).
ABSENCE OF EVIDENCE
I have just finished reading a book by a German Heilpraktiker that is entirely dedicated to SCAM. In it, the author makes hundreds of statements and presents them as evidence-based facts. To many lay people or consumers, this will look convincing, I am sure. Yet it has one fatal defect: the author fails to offer any real evidence to back up his statements. The only references provided are those of other books which are equally evidence-free. This popular technique of making unsupported claims allows the author to make assertions without any checks and balances. A lay person is usually unable or unwilling to differentiate such fabrications from evidence, and this technique is thus an easy and popular way of misleading us about SCAM.
On this blog, we have encountered this phenomenon ad nauseam: a commentator makes a claim and supports it with some seemingly sound evidence, often from well-respected sources. The few of us who bother to read the referenced articles quickly discover that they do not say what the commentator claimed. This method relies on the reader being easily bowled over by some pretend-evidence. As many consumers cannot be bothered to look beyond the smokescreen supplied by such pretenders, the method usually works surprisingly well.
An example: Vidatox is a homeopathic cancer ‘cure’ from Cuba. The Vidatox website claims that it is effective for many cancers. Considering how sensational this claim is, one would expect to find plenty of published articles on Vidatox. However, a Medline search resulted in one single paper on the subject. Its authors drew the following conclusion: “Our results suggest that the concentration of Vidatox used in the present study has not anti-neoplastic effects and care must be taken in hiring Vidatox in patients with HCC.”
The question one often has to ask is this: where is the line between misleading research and fraud?
There is no area in healthcare that produces more surveys than SCAM. About 500 surveys are published every year! This ‘survey-mania’ has a purpose: it promotes a positive message about SCAM, which hypothesis-testing research rarely does.
For a typical SCAM survey, a team of enthusiastic researchers might put together a few questions and design a questionnaire to find out what percentage of a group of individuals have tried SCAM in the past. Subsequently, the investigators might get one or two hundred responses. They then calculate simple descriptive statistics and demonstrate that XY% use SCAM. This finding eventually gets published in one of the many third-rate SCAM journals. The implication then is that, if SCAM is so popular, it must be good, and if it’s good, the public purse should pay for it. Few consumers would realise that this conclusion is little more than a fallacious appeal to popularity.
AVOIDING THE QUESTION
Another popular way for SCAM researchers to mislead the public is to avoid the research questions that matter. For instance, few experts would deny that one of the most urgent issues in chiropractic relates to the risk of spinal manipulations. One would therefore expect that a sizable proportion of the currently published chiropractic research is dedicated to it. Yet the opposite is the case. Medline currently lists more than 3,000 papers on ‘chiropractic’, but only 17 on ‘chiropractic, harm’.
A pilot study is a small-scale preliminary study conducted in order to evaluate feasibility, time, cost and adverse events, and to improve upon the study design prior to a full-scale research project. Yet these elementary preconditions are rarely fulfilled by the plethora of SCAM pilot studies currently being published. True pilot studies of SCAM are, in fact, very rare. The reason for the abundance of pseudo-pilots is obvious: they can easily be interpreted as showing encouragingly positive results for whatever SCAM is being tested. Subsequently, SCAM proponents can mislead the public by claiming that there are plenty of positive studies and that their SCAM is therefore supported by sound evidence.
As regularly mentioned on this blog, there are several ways to design a study such that the risk of producing a negative result is minimal. The most popular one in SCAM research is the ‘A+B versus B’ design. In this study, for instance, cancer patients who were suffering from fatigue were randomised to receive usual care or usual care plus regular acupuncture. The researchers then monitored the patients’ experience of fatigue and found that the acupuncture group did better than the control group. The effect was statistically significant, and an editorial in the journal where it was published called this evidence “compelling”. Thanks to a cleverly overstated press release, news spread fast, and the study was celebrated worldwide as a major breakthrough in cancer care.
Imagine you have an amount of money A and your friend owns the same sum plus another amount B. Who has more money? Simple: it is, of course, your friend: A+B will always be more than A [unless B is a negative amount]. For the same reason, such “pragmatic” trials will always generate positive results [unless the treatment in question does actual harm]. Treatment as usual plus acupuncture is more than treatment as usual alone, and the former is therefore more than likely to produce a better result. This will be true even if acupuncture is a pure placebo – after all, a placebo is more than nothing, and the placebo effect will impact on the outcome, particularly if we are dealing with a highly subjective symptom such as fatigue.
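The arithmetic can be made concrete with a minimal simulation. All numbers below are made up for illustration: we assume the add-on treatment is completely inert apart from a small placebo effect on a subjective fatigue score, and the “A+B” arm still comes out ahead.

```python
import random

random.seed(1)

def fatigue_score(placebo_effect=0.0):
    # Subjective fatigue on a 0-10 scale: baseline of 6, random noise,
    # minus any placebo effect (hypothetical numbers).
    return max(0.0, min(10.0, 6.0 + random.gauss(0, 1) - placebo_effect))

# Arm B: usual care alone. Arm A+B: usual care plus an inert add-on
# whose only contribution is an assumed placebo effect of 0.5 points.
usual_care = [fatigue_score() for _ in range(200)]
usual_plus_addon = [fatigue_score(placebo_effect=0.5) for _ in range(200)]

mean_b = sum(usual_care) / len(usual_care)
mean_ab = sum(usual_plus_addon) / len(usual_plus_addon)

print(f"usual care alone:    mean fatigue {mean_b:.2f}")
print(f"usual care + add-on: mean fatigue {mean_ab:.2f}")
# The add-on arm scores better even though the treatment itself is inert.
```

Because a placebo effect is never negative, the comparison is loaded before a single patient is recruited.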
A more obvious method for generating false positive results is to omit blinding. The purpose of blinding the patient, the therapist and the evaluator of the group allocation in clinical trials is to make sure that expectation is not a contributor to the result. Expectation might not move mountains, but it can certainly influence the result of a clinical trial. Patients who hope for a cure regularly do get better, even if the therapy they receive is useless, and therapists as well as evaluators of the outcomes tend to view the results through rose-tinted spectacles, if they have preconceived ideas about the experimental treatment.
Failure to randomise is another source of bias which can mislead us. If we allow patients or trialists to choose which patients receive the experimental treatment and which get the control treatment, it is likely that the two groups differ in a number of variables. Some of these variables might, in turn, impact on the outcome. If, for instance, doctors allocate their patients to the experimental and control groups, they might select those who will respond to the former and those who won’t to the latter. This may not happen with intent but through intuition or instinct: responsible healthcare professionals want those patients who, in their experience, have the best chances to benefit from a given treatment to receive that treatment. Only randomisation can, when done properly, make sure we are comparing comparable groups of patients. Non-randomisation can easily generate false-positive findings.
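A toy simulation illustrates the point. The numbers are invented: each patient has a latent ‘prognosis’ that drives the outcome, the treatment itself does nothing, and the only difference between the two analyses is whether allocation was steered by prognosis or decided at random.

```python
import random

random.seed(7)

def mean(xs):
    return sum(xs) / len(xs)

def outcome(prognosis):
    # Outcome = prognosis plus measurement noise; NO treatment effect at all.
    return prognosis + random.gauss(0, 0.5)

# Each patient has a latent prognosis score (hypothetical data).
patients = [random.gauss(0, 1) for _ in range(400)]

# Non-random allocation: the clinician intuitively steers better-prognosis
# patients towards the experimental arm.
experimental = [outcome(p) for p in patients if p > 0]
control = [outcome(p) for p in patients if p <= 0]
biased_diff = mean(experimental) - mean(control)

# Proper randomisation: arm assignment ignores prognosis entirely.
random.shuffle(patients)
half = len(patients) // 2
rand_diff = (mean([outcome(p) for p in patients[:half]])
             - mean([outcome(p) for p in patients[half:]]))

print(f"non-randomised difference: {biased_diff:+.2f}")  # large and spurious
print(f"randomised difference:     {rand_diff:+.2f}")    # close to zero
```

The non-randomised comparison manufactures a sizeable ‘treatment effect’ out of nothing; randomisation makes it vanish.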
It is also possible to mislead people with studies which do not test whether an experimental treatment is superior to another one (often called superiority trials), but which assess whether it is equivalent to a therapy that is generally accepted to be effective. The idea is that, if both treatments produce similarly positive results, both must be effective. Such trials are called non-inferiority or equivalence trials, and they offer a wide range of possibilities for misleading us. If, for example, such a trial includes too few patients, it might show no difference where, in fact, there is one. Let’s consider a simple, hypothetical example: someone comes up with the idea to compare antibiotics to acupuncture as treatments of bacterial pneumonia in elderly patients. The researchers recruit 10 patients for each group, and the results reveal that, in one group, 2 patients died, while, in the other, the number was 3. The statistical tests show that the difference of just one patient is not statistically significant, and the authors therefore conclude that acupuncture is just as good for bacterial infections as antibiotics.
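It is easy to check what a standard significance test makes of this hypothetical 2-versus-3-deaths table. Here is a small pure-Python sketch of a two-sided Fisher exact test (the usual choice for such tiny 2x2 tables), applied to the invented numbers above:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1 = a + b, c + d, a + c
    denom = comb(row1 + row2, col1)

    def prob(k):
        # Hypergeometric probability of a table with k events in row 1,
        # given the fixed row and column totals.
        return comb(row1, k) * comb(row2, col1 - k) / denom

    p_obs = prob(a)
    ks = range(max(0, col1 - row2), min(col1, row1) + 1)
    # Sum the probabilities of all tables at least as extreme as the observed one.
    return sum(prob(k) for k in ks if prob(k) <= p_obs + 1e-12)

# Hypothetical trial: 10 patients per arm, 2 deaths on antibiotics,
# 3 deaths on acupuncture.
p_value = fisher_exact_two_sided(2, 8, 3, 7)
print(f"p = {p_value:.2f}")  # far above 0.05: "no significant difference"
```

With 10 patients per arm the test has essentially no power: ‘no significant difference’ here says nothing about equivalence, only about the tiny sample.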
Even trickier is the option to under-dose the treatment given to the control group in an equivalence trial. In the above example, the investigators might subsequently recruit hundreds of patients in an attempt to overcome the criticism of their first study; they then decide to administer a sub-therapeutic dose of the antibiotic in the control group. The results would then seemingly confirm the researchers’ initial finding, namely that acupuncture is as good as the antibiotic for pneumonia. Acupuncturists might then claim that their treatment has been proven in a very large randomised clinical trial to be effective for treating this condition. People who do not happen to know the correct dose of the antibiotic could easily be fooled into believing them.
Obviously, the results would be more impressive, if the control group in an equivalence trial received a therapy which is not just ineffective but actually harmful. In such a scenario, even the most useless SCAM would appear to be effective simply because it is less harmful than the comparator.
A variation of this theme is the plethora of controlled clinical trials in SCAM which compare one unproven therapy to another unproven treatment. Predictably, the results often indicate that there is no difference in the clinical outcome experienced by the patients in the two groups. Enthusiastic SCAM researchers then tend to conclude that this proves both treatments to be equally effective. The more likely conclusion, however, is that both are equally useless.
Another technique for misleading the public is to draw conclusions which are not supported by the data. Imagine you have generated squarely negative data with a trial of homeopathy. As an enthusiast of homeopathy, you are far from happy with your own findings; in addition, you might have a sponsor who puts pressure on you. What can you do? The solution is simple: you only need to highlight at least one positive message in the published article. In the case of homeopathy, you could, for instance, make a major issue of the fact that the treatment was remarkably safe and cheap: not a single patient died, and most were very pleased with the treatment, which was not even very expensive.
A further popular method for misleading the public is the outright omission of findings that SCAM researchers do not like. If the aim is to make the public believe the myth that all SCAM is free of side-effects, SCAM researchers only need to omit reporting them in clinical trials. On this blog, I have alerted my readers time and time again to this common phenomenon. We even assessed it in a systematic review. Sixty RCTs of chiropractic were included. Twenty-nine RCTs did not mention adverse effects at all. Sixteen RCTs reported that no adverse effects had occurred. Complete information on incidence, severity, duration, frequency and method of reporting of adverse effects was included in only one RCT.
Most trials have many outcome measures. For instance, a study of acupuncture for pain control might quantify pain in half a dozen different ways; it might also measure the length of the treatment until the pain has subsided, the amount of medication the patients took in addition to receiving acupuncture, the days off work because of pain, the partner’s impression of the patient’s health status, the quality of life of the patient, the frequency of sleep being disrupted by pain, etc. If the researchers then evaluate all the results, they are likely to find that one or two of them have changed in the direction they wanted (especially if they also include half a dozen different time points at which these variables are quantified). This can well be a chance finding: with the typical statistical tests, one in 20 outcome measures would produce a significant result purely by chance. In order to mislead us, the researchers only need to “forget” about all the negative results and focus their publication on the ones which by chance have come out as they had hoped.
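The one-in-20 arithmetic compounds quickly when many outcomes are measured. A short simulation, assuming 20 independent outcome measures and a treatment with no effect whatsoever, shows how often at least one ‘significant’ result appears:

```python
import random

random.seed(0)

N_TRIALS = 10_000   # simulated trials of a completely inert treatment
N_OUTCOMES = 20     # outcome measures recorded per trial
ALPHA = 0.05        # conventional significance threshold

# Under the null hypothesis a correctly computed p-value is uniformly
# distributed on [0, 1], so each outcome is "significant" with
# probability ALPHA purely by chance.
hits = sum(
    any(random.random() < ALPHA for _ in range(N_OUTCOMES))
    for _ in range(N_TRIALS)
)
rate = hits / N_TRIALS

print(f"trials with at least one 'significant' outcome: {rate:.0%}")
# Roughly 64% (that is, 1 - 0.95**20), although the treatment does nothing.
```

In other words, a useless therapy tested against 20 outcomes has a better-than-even chance of delivering something to put in the abstract.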
When it comes to fraud, there is more to choose from than one would ever have wished for. We and others have, for example, shown that Chinese trials of acupuncture hardly ever produce a negative finding. In other words, one does not need to read the paper, one already knows that it is positive – even more extreme: one does not need to conduct the study, one already knows the result before the research has started. This strange phenomenon indicates that something is amiss with Chinese acupuncture research. This suspicion was even confirmed by a team of Chinese scientists. In this systematic review, all randomized controlled trials (RCTs) of acupuncture published in Chinese journals were identified by a team of Chinese scientists. A total of 840 RCTs were found, including 727 RCTs comparing acupuncture with conventional treatment, 51 RCTs with no-treatment controls, and 62 RCTs with sham-acupuncture controls. Among these 840 RCTs, 838 studies (99.8%) reported positive results from primary outcomes and two trials (0.2%) reported negative results. The percentages of RCTs adequately reporting key methodological details, such as allocation concealment, information on withdrawals, and sample size calculations, ranged from 1.7% to 43.7%. The authors concluded that publication bias might be a major issue in RCTs on acupuncture published in Chinese journals, which is related to high risk of bias. They suggested that all trials should in future be prospectively registered in an international trial registry.
A survey of clinical trials in China has revealed fraudulent practice on a massive scale. China’s food and drug regulator carried out a one-year review of clinical trials. They concluded that more than 80 percent of clinical data are “fabricated”. The review evaluated data from 1,622 clinical trial programmes of new pharmaceutical drugs awaiting regulator approval for mass production. Officials are now warning that further evidence of malpractice could still emerge in the scandal.
I hasten to add that fraud in SCAM research is certainly not confined to China. On this blog, you will find plenty of evidence for this statement, I am sure.
Research is obviously necessary, if we want to answer the many open questions in SCAM. But sadly, not all research is reliable and much of SCAM research is misleading. Therefore, it is always necessary to be on the alert and apply all the skills of critical evaluation we can muster.