Rigorous studies of homeopathy are a bit like gold dust; they are so rare that we see perhaps only one or two per year. It is therefore good news that very recently one such trial has been published.

This randomized, placebo-controlled study tested the efficacy of a complex homeopathic medicine, Cocculine, for chemotherapy-induced nausea and vomiting (CINV) in non-metastatic breast cancer patients treated by standard chemotherapy regimens.

Chemotherapy-naive patients with non-metastatic breast cancer scheduled to receive 6 cycles of chemotherapy were randomized to receive standard anti-emetic treatment plus either the complex homeopathic remedy or the matching placebo. The primary endpoint was nausea score measured after the 1st chemotherapy course.

In total, 431 patients were randomized: 214 to Cocculine (C) and 217 to placebo (P). Patient characteristics were well-balanced between the 2 arms. Overall, compliance with the study treatments was excellent and similar between the 2 arms. A total of 205 patients (50.9%; 103 patients in the placebo and 102 in the homeopathy arms) had nausea scores > 6, indicative of no impact of nausea on quality of life during the 1st chemotherapy course. There was no difference between the 2 arms when the primary endpoint analysis was performed by chemotherapy stratum, nor in the subgroup of patients with susceptibility to nausea and vomiting before inclusion. In addition, nausea, vomiting and global emesis scores were not statistically different at any time between the two study arms.

The authors’ conclusions could not be clearer: “This double-blinded, placebo-controlled, randomised Phase III study showed that adding a complex homeopathic medicine (Cocculine) to standard anti-emetic prophylaxis does not improve the control of CINV in early breast cancer patients.”

COCCULINE is manufactured by Boiron and contains Cocculus indicus 4CH, Strychnos nux vomica 4CH, Nicotiana tabacum 4CH and Petroleum rectificatum 4CH, aa 0.375 mg. Boiron informs us that “this homeopathic preparation is indicated in sickness during travelling (kinetosis). Preventive dosage is 2 tablets 3 times a day one day before departure and on the day of journey. Treatment dosage is 2 tablets every hour. The interval is prolonged in dependence on improvement. Dosage in children is the same as in adults. The tablets are left to dissolve in mouth or in a small amount of water.”

Homeopaths might argue that this trial did not follow the rules of classical homeopathy, where treatments need to be individualised. This may be true but, in that case, they should campaign for all OTC homeopathy to be banned, since over-the-counter remedies are, by definition, not individualised. As they do not do that, I suggest they live with yet another rigorous clinical trial demonstrating that homeopathic remedies are pure placebos.

If I had a pint of beer for every time I have been accused of bias against chiropractic, I would rarely be sober. The thing is that I do like to report on decent research in this field, and almost every day I look out for new articles which might be worth writing about – but they are like gold dust!

“Huuuuuuuuh, that just shows how very biased he is,” I hear the chiro community shout. Well, let’s put my hypothesis to the test. Here is a complete list of recent (2013) Medline-listed articles on chiropractic; no omission, no bias, just facts (for clarity, the Pubmed link is listed first, then the title in bold, followed by a short comment in italics):

http://www.ncbi.nlm.nih.gov/pubmed/23360894

Towards establishing an occupational threshold for cumulative shear force in the vertebral joint – An in vitro evaluation of a risk factor for spondylolytic fractures using porcine specimens.

This is an interesting study of the shear forces observed in porcine vertebral specimens during manoeuvres which might resemble spinal manipulation in humans. The authors conclude that “our investigation suggested that pars interarticularis damage may begin non-linearly accumulating with shear forces between 20% and 40% of failure tolerance (approximately 430 to 860 N)”.

http://www.ncbi.nlm.nih.gov/pubmed/23337706

Development of an equation for calculating vertebral shear failure tolerance without destructive mechanical testing using iterative linear regression.

This is a mathematical modelling of the forces that might act on the spine during manipulation. The authors draw no conclusions.

http://www.ncbi.nlm.nih.gov/pubmed/23324133

Collaborative Care for Older Adults with low back pain by family medicine physicians and doctors of chiropractic (COCOA): study protocol for a randomized controlled trial.

This is merely the publication of a trial that is about to commence.

http://www.ncbi.nlm.nih.gov/pubmed/23323682

Military Report More Complementary and Alternative Medicine Use than Civilians.

This is a survey which suggests that ~45% of all military personnel use some form of alternative medicine.

http://www.ncbi.nlm.nih.gov/pubmed/23319526

Complementary and Alternative Medicine Use by Pediatric Specialty Outpatients

This is another survey; it concludes that “CAM use is high among pediatric specialty clinic outpatients”.

http://www.ncbi.nlm.nih.gov/pubmed/23311664

Extending ICPC-2 PLUS terminology to develop a classification system specific for the study of chiropractic encounters

This is an article on chiropractic terminology which concludes that “existing ICPC-2 PLUS terminology could not fully represent chiropractic practice, adding terms specific to chiropractic enabled coding of a large number of chiropractic encounters at the desired level. Further, the new system attempted to record the diversity among chiropractic encounters while enabling generalisation for reporting where required. COAST is ongoing, and as such, any further encounters received from chiropractors will enable addition and refinement of ICPC-2 PLUS (Chiro)”.

http://www.ncbi.nlm.nih.gov/pubmed/23297270

US Spending On Complementary And Alternative Medicine During 2002-08 Plateaued, Suggesting Role In Reformed Health System

This is a study of the money spent on alternative medicine, concluding as follows: “Should some forms of complementary and alternative medicine-for example, chiropractic care for back pain-be proven more efficient than allopathic and specialty medicine, the inclusion of complementary and alternative medicine providers in new delivery systems such as accountable care organizations could help slow growth in national health care spending”.

http://www.ncbi.nlm.nih.gov/pubmed/23289610

A Royal Chartered College joins Chiropractic & Manual Therapies.

This is a short comment on the fact that a chiro institution received a Royal Charter.

http://www.ncbi.nlm.nih.gov/pubmed/23242960

Exposure-adjusted incidence rates and severity of competition injuries in Australian amateur taekwondo athletes: a 2-year prospective study.

This is a study by chiros to determine the frequency of injuries in taekwondo athletes.

The first thing that strikes me is the paucity of articles. OK, we are talking of just January 2013, but, by comparison, most medical fields, such as neurology or rheumatology, have produced hundreds of articles during this period, and even the field of acupuncture research has generated about three times more.

The second and much more important point is that I fail to see much chiropractic research that is truly meaningful or tells us anything about what I consider the most urgent questions in this area, e.g. do chiropractic interventions work? Are they safe?

My last point is equally critical. After reading the 9 papers, I have to honestly say that none of them impressed me in terms of its scientific rigor.

So, what does this tiny investigation suggest? Not a lot, I have to admit, but I think it supports the hypothesis that research into chiropractic is not very active, is not of high quality, and does not address the most urgent questions.

In my very first post on this blog, I proudly pronounced that this would not become one of those places where quack-busters have a field day. However, I am aware that, so far, I have not posted many complimentary things about alternative medicine. My ‘excuse’ might be that there are virtually millions of sites where this area is uncritically promoted and very few where an insider dares to express a critical view. In the interest of balance, I thus focus on critical assessments.

Yet I intend, of course, to report positive news when I think it is relevant and sound. So, today I shall discuss a new trial which is impressively sound and generates some positive results:

French rheumatologists conducted a prospective, randomised, double blind, parallel group, placebo controlled trial of avocado-soybean-unsaponifiables (ASU). This dietary supplement has complex pharmacological activities and has been used for years for osteoarthritis (OA) and other conditions. The clinical evidence has, so far, been encouraging, albeit not entirely convincing. My own review arrived at the conclusion that “the majority of rigorous trial data available to date suggest that ASU is effective for the symptomatic treatment of OA and more research seems warranted. However, the only real long-term trial yielded a largely negative result”.

For the new trial, patients with symptomatic hip OA and a minimum joint space width (JSW) of the target hip between 1 and 4 mm were randomly assigned to three years of 300 mg/day ASU-E or placebo. The primary outcome was JSW change at year 3, measured radiographically at the narrowest point.

A total of 399 patients were randomised. Their mean baseline JSW was 2.8 mm. There was no significant difference in mean JSW loss, but there were 20% fewer progressors in the ASU than in the placebo group (40% vs 50%, respectively, i.e. a 10 percentage-point absolute and 20% relative difference). No difference was observed in terms of clinical outcomes. Safety was excellent.

The authors concluded that 3 year treatment with ASU reduces the speed of JSW narrowing, indicating a potential structure modifying effect in hip OA. They cautioned that their results require independent confirmation and that the clinical relevance of their findings require further assessment.

I like this study, and here are just a few reasons why:

It reports a massive research effort; I think anyone who has ever attempted a 3-year RCT might agree with this view.

It is rigorous; all the major sources of bias are excluded as far as humanly possible.

It is well-reported; all the essential details are there and anyone who has the skills and funds would be able to attempt an independent replication.

The authors are cautious in their interpretation of the results.

The trial tackles an important clinical problem; OA is common and any treatment that helps without causing significant harm would be more than welcome.

It yielded findings which are positive or at least promising; contrary to what some people seem to believe, I do like good news as much as anyone else.

I WISH THERE WERE MORE ALT MED STUDIES/RESEARCHERS OF THIS CALIBER!

The ‘Samueli Institute’ might be known to many readers of this blog; it is a wealthy institution that is almost entirely dedicated to promoting the more implausible fringe of alternative medicine. The official aim is “to create a flourishing society through the scientific exploration of wellness and whole-person healing“. Much of its activity seems to be focused on military medical research. Its co-workers include Harald Walach, who was recently awarded a rare distinction for his relentless efforts in introducing esoteric pseudo-science into academia.

Now researchers from the Californian branch of the Samueli Institute have published an article which, in my view, is another landmark in nonsense.

Jain and colleagues conducted a randomized controlled trial to determine whether Healing Touch with Guided Imagery [HT+GI] reduced post-traumatic stress disorder (PTSD) compared to treatment as usual (TAU) in “returning combat-exposed active duty military with significant PTSD symptoms“. HT is a popular form of para-normal healing where the therapist channels “energy” into the patient’s body; GI is a self-hypnotic form of relaxation therapy. While the latter approach might be seen as plausible and, at least to some degree, evidence-based, the former cannot.

123 soldiers were randomized to 6 sessions of HT+GI, while the control group had no such therapies. All patients also received standard conventional therapies, and the treatment period was three weeks. The results showed significant reductions in PTSD symptoms as well as depression for HT+GI compared to controls. HT+GI also showed significant improvements in mental quality of life and cynicism.

The authors concluded that HT+GI resulted in a clinically significant reduction in PTSD and related symptoms, and that further investigations of biofield therapies for mitigating PTSD in military populations are warranted.

The Samueli Institute claims to “support science grounded in observation, investigation, and analysis, and [to have] the courage to ask challenging questions within a framework of systematic, high-quality, research methods and the peer-review process“. I do not think that the above-named paper lives up to these standards.

As discussed in some detail in a previous post, this type of study design is next to useless for determining whether any intervention does any good at all: A+B is always more than B alone! Moreover, if we test HT+GI as a package, how can we draw conclusions about the effectiveness of either of the two interventions? Thus this trial tells us next to nothing about the effectiveness of HT, nor about the effectiveness of HT+GI.
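To see why A+B is always more than B alone, here is a minimal numerical sketch; all effect sizes are invented for illustration (the sizes of the natural-history and non-specific components are my assumptions, not data from the trial):

```python
# Illustrative sketch of the "A+B versus B" flaw; all numbers are invented.
import random

random.seed(1)

def outcome(extra_attention: bool) -> float:
    """Symptom improvement = natural history + non-specific effects + noise."""
    natural_history = 10.0                         # everyone improves somewhat
    nonspecific = 5.0 if extra_attention else 0.0  # placebo response, attention, ritual
    specific_effect_of_add_on = 0.0                # the add-on (A) is assumed inert
    return (natural_history + nonspecific
            + specific_effect_of_add_on + random.gauss(0, 2))

a_plus_b = [outcome(extra_attention=True) for _ in range(100)]   # "A+B" arm
b_alone = [outcome(extra_attention=False) for _ in range(100)]   # "B alone" arm

print(sum(a_plus_b) / len(a_plus_b))  # ~15: the add-on arm "wins"...
print(sum(b_alone) / len(b_alone))    # ~10: ...although A's specific effect is zero
```

The add-on arm comes out ahead even though its specific effect is set to exactly zero; no statistical wizardry can rescue a design that builds in this asymmetry.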

Previously, I have argued that conducting a trial whose result is already clear before the first patient has been recruited is not ethical. Samueli Institute, however, claims that it “acts with the highest respect for the public it serves by ensuring transparency, responsible management and ethical practices from discovery to policy and application“. Am I the only one who senses a contradiction here?

Perhaps other research in this area might be more informative? Even the most superficial Medline-search brings to light a flurry of articles on HT and other biofield therapies that are relevant.

Several trials have indeed produced promising evidence suggesting positive effects of such treatments on anxiety and other symptoms. But the data are far from uniform, and most investigations are wide open to bias. The more rigorous studies seem to suggest that these interventions are not effective beyond placebo. Our review demonstrated that “the evidence is insufficient” to suggest that reiki, another biofield therapy, is an effective treatment for any condition.

Another study showed that tactile touch led to significantly lower levels of anxiety. Conventional massage may even be better than HT, according to some trials. The conclusion from this body of evidence is, I think, fairly obvious: touch can be helpful (most clinicians knew that anyway) but this has nothing to do with energy, biofields, healing energy or any of the other implausible assumptions these treatments are based on.

I therefore disagree with the authors’ conclusion that “further investigation into biofield therapies… is warranted“. If we really want to help patients, let’s find out more about the benefits of touch and let’s not mislead the public about some mystical energies and implausible quackery. And if we truly want to improve health care, as the Samueli Institute claims, let’s use our limited resources for research which meaningfully contributes to our knowledge.

On January 27, 1945, the concentration camp in Auschwitz was liberated. By May of the same year, around 20 similar camps had been discovered. What they revealed is so shocking that it is difficult to put into words.

Today, on ‘HOLOCAUST MEMORIAL DAY’, I quote (shortened and slightly modified) from articles I published many years ago (references can be found in the originals) to remind us of the unspeakable atrocities that occurred during the Nazi period and of the crucial role the German medical profession played in them.

The Nazis’ euthanasia programme, also known as ‘Action T4’, started in specialized medical departments in 1939. Initially, it was aimed at children suffering from “idiocy, Down’s syndrome, hydrocephalus and other abnormalities”. By the end of 1939, the programme was extended to adults “unworthy of living”. We estimate that, by the time it was stopped, more than 70,000 patients had been killed.

Action T4 (named after its address: Tiergartenstrasse 4) was the Berlin headquarters of the euthanasia programme. It was run by approximately 50 physicians who, amongst other activities, sent questionnaires to (mostly psychiatric) hospitals urging them to return lists of patients for euthanasia. The victims were transported to specialized centres where they were gassed or poisoned. Action T4 was thus responsible for medically supervised, large-scale murder. Its true significance, however, lies elsewhere: Action T4 turned out to be nothing less than a “pilot project” for the extermination of millions of prisoners of the concentration camps.

The T4 units had developed the technology for killing on an industrial scale. It was only with this know-how that the total extermination of all Jews of the Reich could be planned. This truly monstrous task required medical expertise.

Almost without exception, those physicians who had worked for T4 went on to take charge of what the Nazis called the ‘Final Solution’. While Action T4 had killed thousands, its offspring would murder millions under the expert instruction of Nazi doctors.

The medical profession’s role in these crimes was critical and essential. German physicians had been involved at all levels and stages. They had created and embraced the pseudo-science of race hygiene. They were instrumental in developing it further into applied racism. They had generated the know-how of mass extinction. Finally, they also performed outrageously cruel and criminal experiments under the guise of scientific inquiry [see below]. German doctors had thus betrayed all the ideals medicine had previously stood for, and had become involved in criminal activities unprecedented in the history of medicine (full details and references on all of this are provided in my article, see link above).

Alternative medicine

It is well-documented that alternative medicine was strongly supported by the Nazis. The general belief is that this had nothing to do with the sickening atrocities of this period. I believe that this assumption is not entirely correct. In 2001, I published an article which reviews this subject; I take the liberty of borrowing from it here.

Based on a general movement in favour of all things natural, a powerful trend towards natural ways of healing had developed in the 19th century. By 1930, this had led to a situation in Germany where roughly as many lay practitioners of alternative medicine as conventional doctors were in practice. This had led to considerable tensions between the two camps. To re-unify German medicine under the banner of the ‘Neue Deutsche Heilkunde’ (New German Medicine), Nazi officials eventually decided to create the profession of the ‘Heilpraktiker‘ (healing practitioner). Heilpraktiker were not allowed to train students, and their profession was thus meant to become extinct within one generation; Goebbels spoke of having created the cradle and the grave of the Heilpraktiker. However, after 1945, this decision was challenged in the courts and eventually over-turned – and this is why Heilpraktiker are still thriving today.

The ‘flag ship’ of the ‘Neue Deutsche Heilkunde’ was the ‘Rudolf Hess Krankenhaus‘ in Dresden (which was renamed the ‘Gerhard Wagner Krankenhaus’ after Hess’ flight to the UK). It represented a full integration of alternative and orthodox medicine.

‘Research’

An example of systematic research into alternative medicine is the Nazi government’s project to validate homoeopathy. The data of this massive research programme are now lost (some speculate that homeopaths made them disappear) but, according to an eye-witness report, its results were entirely negative (full details and references on alt med in 3rd Reich are in the article cited above).

There is, of course, plenty of literature on the subject of Nazi ‘research’ (actually, it was pseudo-research) and the unspeakable crimes it entailed. By contrast, there is almost no published evidence that these activities included in any way alternative medicine, and the general opinion seems to be that there are no connections whatsoever. I fear that this notion might be erroneous.

As far as I can make out, no systematic study of the subject has so far been published, but I found several hints and indications that the criminal experiments of Nazi doctors also involved alternative medicine (the sources are provided in my articles cited above or in the links provided below). Here are but a few leads:

Dr Wagner, the chief medical officer of the Nazis was a dedicated and most active proponent of alternative medicine.

Doctors in the alternative “Rudolf Hess Krankenhaus” [see above] experimented on speeding up the recovery of wounded soldiers, on curing syphilis with fasting, and on various other projects to help the war effort.

The Dachau concentration camp housed the largest plantation of medicinal herbs in Germany.

Dr Madaus (founder of the still existing company for natural medicines by the same name) experimented on the sterilisation of humans with herbal and homeopathic remedies, a project that was deemed of great importance for controlling the predicted population growth in the East of the expanding Reich.

Dr Grawitz infected Dachau prisoners with various pathogens to test the effectiveness of homeopathic remedies.

Schuessler salts were also tested on concentration camp inmates.

So, why bring all of this up today? Is it not time that we let grass grow over these most disturbing events? I think not! For many years, I actively researched this area (you can find many of my articles on Medline) because I am convinced that the unprecedented horrors of Nazi medicine need to be told and re-told – not just on HOLOCAUST MEMORIAL DAY, but continually. This, I hope, will minimize the risk of such incredible abuses ever happening again.

As I am drafting this post, I am in a plane flying back from Finland. The in-flight meal reminded me of the fact that no food is so delicious that it cannot be spoilt by the addition of too many capers. In turn, this made me think about the paper I happened to be reading at the time, and I arrived at the following theory: no trial design is so rigorous that it cannot be turned into something utterly nonsensical by the addition of a few amateur researchers.

The paper I was reading when this idea occurred to me was a randomised, triple-blind, placebo-controlled cross-over trial of homeopathy. Sounds rigorous and top quality? Yes, but wait!

Essentially, the authors recruited 86 volunteers who all claimed to be suffering from “mental fatigue” and treated them with Kali-Phos 6X or placebo for one week (X-potencies signify dilution steps of 1:10, and 6X therefore means that the salt had been diluted 1:1,000,000). Subsequently, the volunteers were crossed over to receive the other treatment for one week.
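For readers unfamiliar with potency notation, the arithmetic is simple enough to check in a few lines (a trivial sketch; the figure follows directly from the 1:10-per-step definition):

```python
# Each X-potency step dilutes the preparation 1:10, so a 6X remedy
# contains one part of the original salt in 10**6 parts of diluent.
dilution_per_step = 10
steps = 6
total_dilution = dilution_per_step ** steps
print(f"6X = 1:{total_dilution:,}")  # 6X = 1:1,000,000
```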

The results failed to show that the homeopathic medication had any effect (not even homeopaths can be surprised about this!). The authors concluded that Kali-Phos was not effective but cautioned that, because of the possibility of a type II error, they might have missed an effect which, in truth, does exist.

In my view, this article provides an almost classic example of how time, money and other resources can be wasted in a pretence of conducting reasonable research. As we all know, clinical trials usually are for testing hypotheses. But what is the hypothesis tested here?

According to the authors, the aim was to “assess the effectiveness of Kali-Phos 6X for attention problems associated with mental fatigue”. In other words, their hypothesis was that this remedy is effective for treating the symptom of mental fatigue. This notion, I would claim, is not a scientific hypothesis; it is a foolish conjecture!

Arguably, any hypothesis about the effectiveness of a highly diluted homeopathic remedy is mere wishful thinking. But, if there were at least some promising data, some might conclude that a trial was justified. By way of justification for the RCT in question, the authors inform us that one previous trial had suggested an effect; however, that study did not employ just Kali-Phos but a combined homeopathic preparation which contained Kali-Phos as one of several components. Thus the authors’ “hypothesis” does not even amount to a hunch, not even to a slight inkling! To me, it is less than a shot in the dark fired by blind optimists – nobody should be surprised that the bullet failed to hit anything.

It could even be that the investigators themselves dimly realised that something was amiss with the basis of their study; this might be the reason why they called it an “exploratory trial”. But an exploratory study is one without a hypothesis, and the trial in question does have a hypothesis of sorts – only that it is rubbish. And what exactly did the authors mean to explore anyway?

That self-reported mental fatigue in healthy volunteers is a condition that can be medicalised such that it merits treatment?

That the test they used for quantifying its severity is adequate?

That a homeopathic remedy with virtually no active ingredient generates outcomes which are different from placebo?

That Hahnemann’s teaching of homeopathy was nonsense and can thus be discarded (he would have sharply condemned the approach of treating all volunteers with the same remedy, as it contradicts many of his concepts)?

That funding bodies can be fooled to pay for even the most ridiculous trial?

That ethics-committees might pass applications which are pure nonsense and which are thus unethical?

A scientific hypothesis should be more than a vague hunch; at its simplest, it aims to explain an observation or phenomenon, and it ought to have certain features which many alt med researchers seem to have never heard of. If they test nonsense, the result can only be nonsense.

The issue of conducting research that does not make much sense is far from trivial, particularly as so much (I would say most) of alt med research is of such or even worse calibre (if you do not believe me, please go on Medline and see for yourself how many of the recent articles in the category “complementary alternative medicine” truly contribute to knowledge worth knowing). It would therefore be easy to cite more hypothesis-free trials of homeopathy.

One recent example from Germany will have to suffice: in this trial, the only justification for conducting a full-blown RCT was that the manufacturer of the remedy allegedly knew of a few unpublished case-reports which suggested the treatment worked – and, of course, the results of the RCT eventually showed that it didn’t. Anyone with a background in science might have predicted that outcome – which is why such trials are so deplorably wasteful.

Research-funds are increasingly scarce, and they must not be spent on nonsensical projects! The money and time should be invested more fruitfully elsewhere. Participants of clinical trials give their cooperation willingly; but if they learn that their efforts have been wasted unnecessarily, they might think twice next time they are asked. Thus nonsensical research may have knock-on effects with far-reaching consequences.

Being a researcher is at least as serious a profession as most other occupations; perhaps we should stop allowing total amateurs to waste money while playing at being professionals. If someone driving a car does something seriously wrong, we take away his licence; why is there no similar mechanism for inadequate researchers, funders and ethics-committees which prevents them from doing further damage?

At the very minimum, we should critically evaluate the hypotheses that applicants for research-funds propose to test. Had someone done this properly in relation to the two above-named studies, we would have saved about £150,000 per trial (my estimate). But as it stands, the authors will probably claim that they have produced fascinating findings which urgently need further investigation – and we (normally you and I) will have to spend three times the above-named amount (again, my estimate) to finance a “definitive” trial. Nonsense, I am afraid, tends to beget more nonsense.


In my last post, we discussed the “A+B versus B” trial design as a tool for producing false positive results. This method is currently very popular in alternative medicine, yet it is by no means the only approach that can mislead us. Today, let’s look at other popular options with a view to protecting ourselves against trialists who naively or wilfully might fool us.

The crucial flaw of the “A+B versus B” design is that it fails to account for non-specific effects. If the patients in the experimental group experience better outcomes than the control group, this difference could well be due to effects that are unrelated to the experimental treatment. There are, of course, several further ways to ignore non-specific effects in clinical research. The simplest option is to include no control group at all. Homeopaths, for instance, are very proud of studies which show that ~70% of their patients experience benefit after taking their remedies. This type of result tends to impress journalists, politicians and other people who fail to realise that such a result might be due to a host of factors, e.g. the placebo-effect, the natural history of the disease, regression towards the mean or treatments which patients self-administered while taking the homeopathic remedies. It is therefore misleading to make causal inferences from such data.
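Of these factors, regression towards the mean is perhaps the least intuitive. Here is a minimal simulation, with invented numbers, of patients who enrol in an uncontrolled study on a ‘bad day’; their scores drop back towards their personal average at re-measurement, without any treatment at all:

```python
# Regression towards the mean, simulated with invented numbers: a stable
# condition whose symptom score merely fluctuates from day to day.
import random

random.seed(42)

def symptom_score() -> float:
    return random.gauss(50, 10)  # true mean 50, day-to-day fluctuation

enrolled_baseline = []
follow_up = []
for _ in range(10_000):
    baseline = symptom_score()
    if baseline > 60:                      # only "bad days" prompt enrolment
        enrolled_baseline.append(baseline)
        follow_up.append(symptom_score())  # untreated re-measurement

print(sum(enrolled_baseline) / len(enrolled_baseline))  # ~65 at entry
print(sum(follow_up) / len(follow_up))                  # ~50: apparent "benefit"
```

Selected for high scores at entry, the group drifts back towards its true average, and an uncontrolled study would happily book this drift as a treatment effect.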

Another easy method to generate false positive results is to omit blinding. The purpose of blinding the patient, the therapist and the evaluator of the outcomes in clinical trials is to make sure that expectation is not the cause of or contributor to the outcome. They say that expectation can move mountains; this might be an exaggeration, but it can certainly influence the result of a clinical trial. Patients who hope for a cure regularly do get better even if the therapy they receive is useless, and therapists as well as evaluators of the outcomes tend to view the results through rose-tinted spectacles, if they have preconceived ideas about the experimental treatment. Similarly, the parents of a child or the owners of an animal can transfer their expectations, and this is one of several reasons why it is incorrect to claim that children and animals are immune to placebo-effects.

Failure to randomise is another source of bias which can make an ineffective therapy look like an effective one when tested in a clinical trial. If we allow patients or trialists to select or choose which patients receive the experimental and which get the control-treatment, it is likely that the two groups differ in a number of variables. Some of these variables might, in turn, impact on the outcome. If, for instance, doctors allocate their patients to the experimental and control groups, they might select those who will respond to the former and those who don’t to the latter. This may not happen with malicious intent but through intuition or instinct: responsible health care professionals want those patients who, in their experience, have the best chances to benefit from a given treatment to receive that treatment. Only randomisation can, when done properly, make sure we are comparing comparable groups of patients, and non-randomisation is likely to produce misleading findings.

While these options for producing false positives are all too obvious, the next possibility is slightly more intriguing. It refers to studies which do not test whether an experimental treatment is superior to another one (so-called superiority trials), but which attempt to assess whether it is equivalent to a therapy that is generally accepted to be effective. The idea is that, if both treatments produce the same or similarly positive results, both must be effective. For instance, such a study might compare the effects of acupuncture to a common pain-killer. Such trials are aptly called equivalence or non-inferiority trials, and they offer a wide range of possibilities for misleading us. If, for example, such a trial does not have enough patients, it might show no difference where, in fact, there is one. Let’s consider a deliberately silly example: someone comes up with the idea to compare antibiotics to acupuncture as treatments of bacterial pneumonia in elderly patients. The researchers recruit 10 patients for each group, and the results reveal that, in one group, 2 patients died, while, in the other, the number was 3. The statistical tests show that the difference of just one patient is not statistically significant, and the authors therefore conclude that acupuncture is just as good for bacterial infections as antibiotics.
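The numbers in this deliberately silly example are easy to check; here is a quick sketch using Fisher’s exact test (my choice of test – the example specifies none):

```python
# 2 deaths out of 10 versus 3 out of 10: is the difference "significant"?
from scipy.stats import fisher_exact

# rows: died / survived; columns: antibiotic arm / acupuncture arm
table = [[2, 3],
         [8, 7]]
odds_ratio, p_value = fisher_exact(table)
print(p_value)  # ~1.0: nowhere near statistical significance
```

The p-value is nowhere near significance, yet this proves nothing: with 10 patients per arm, the trial never had the power to detect even a large difference, so “no significant difference” is exactly what we would expect whether or not acupuncture works.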

Even trickier is the option to under-dose the treatment given to the control group in an equivalence trial. In our hypothetical example, the investigators might subsequently recruit hundreds of patients in an attempt to overcome the criticism of their first study; they then decide to administer a sub-therapeutic dose of the antibiotic in the control group. The results would then apparently confirm the researchers’ initial finding, namely that acupuncture is as good as the antibiotic for pneumonia. Acupuncturists might then claim that their treatment has been proven in a very large randomised clinical trial to be effective for treating this condition, and people who do not happen to know the correct dose of the antibiotic could easily be fooled into believing them.

Obviously, the results would be more impressive, if the control group in an equivalence trial received a therapy which is not just ineffective but actually harmful. In such a scenario, the most useless or even slightly detrimental treatment would appear to be effective simply because it is equivalent to or less harmful than the comparator.

A variation of this theme is the plethora of controlled clinical trials which compare one unproven therapy to another unproven treatment. Predictably, the results indicate that there is no difference in the clinical outcome experienced by the patients in the two groups. Enthusiastic researchers then tend to conclude that this proves both treatments to be equally effective.

Another option for creating misleadingly positive findings is to cherry-pick the results. Most trials have many outcome measures; for instance, a study of acupuncture for pain-control might quantify pain in half a dozen different ways; it might also measure the length of the treatment until the pain has subsided, the amount of medication the patients took in addition to receiving acupuncture, the days off work because of pain, the partner’s impression of the patient’s health status, the quality of life of the patient, the frequency of sleep being disrupted by pain, etc. If the researchers then evaluate all the results, they are likely to find that one or two of them have changed in the direction they wanted. This can well be a chance finding: with the typical statistical tests, one in 20 outcome measures would produce a significant result purely by chance. In order to mislead us, the researchers only need to “forget” about all the negative results and focus their publication on the ones which, by chance, have come out as they had hoped.
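The ‘one in 20’ figure compounds quickly across outcomes; the arithmetic, assuming for simplicity that the outcome measures are independent, takes two lines:

```python
# With 20 independent outcome measures and no true effect anywhere, the
# chance that at least one reaches p < 0.05 by luck alone is 1 - 0.95**20.
alpha = 0.05
n_outcomes = 20
p_at_least_one_false_positive = 1 - (1 - alpha) ** n_outcomes
print(round(p_at_least_one_false_positive, 2))  # 0.64, i.e. roughly a 2-in-3 chance
```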

One fail-proof method for misleading the public is to draw conclusions which are not supported by the data. Imagine you have generated squarely negative data with a trial of homeopathy. As an enthusiast of homeopathy, you are far from happy with your own findings; in addition you might have a sponsor who puts pressure on you. What can you do? The solution is simple: you only need to highlight at least one positive message in the published article. In the case of homeopathy, you could, for instance, make a major issue about the fact that the treatment was remarkably safe and cheap: not a single patient died, most were very pleased with the treatment which was not even very expensive.

And finally, there is always the possibility of overt cheating. Researchers are only human and are thus not immune to temptation. They may have conflicts of interest or may know that positive results are much easier to publish than negative ones. Certainly they want to publish their work – “publish or perish”! So, faced with disappointing results of a study, they might decide to prettify them or even invent new ones which are more pleasing to them, their peers, or their sponsors.

Am I claiming that this sort of thing only happens in alternative medicine? No! Obviously, the way to minimise the risk of such misconduct is to train researchers properly and make sure they are able to think critically. Am I suggesting that investigators of alternative medicine are often not well-trained and almost always uncritical? Yes.


How do you fancy playing a little game? Close your eyes, relax, take a minute or two and imagine the newspaper headlines which new medical discoveries might make within the next 100 years or so. I know, this is a slightly silly and far from serious game but, I promise, it’s quite good fun.

Personally, I see the following headlines emerging in front of my eyes:

MEASLES ERADICATED

VACCINATION AGAINST AIDS READY FOR ROUTINE USE

IDENTIFICATION OF THE CAUSE OF DEMENTIA LEADS TO FIRST EFFECTIVE CURE

GENE-THERAPY BEGINS TO SAVE LIVES IN EVERY DAY PRACTICE

CANCER, A NON-FATAL DISEASE

HEALTHY AGEING BECOMES REALITY

Yes, I know this is nothing but naïve conjecture mixed with wishful thinking, and there is hardly anything truly surprising in my list.

But, hold on, is it not remarkable that I visualise considerable advances in conventional healthcare but no similarly spectacular headlines relating to alternative medicine? After all, alternative medicine is my area of expertise.  Why do I not see the following announcements?

YET ANOTHER HOMEOPATH WINS THE NOBEL PRIZE

CHIROPRACTIC SUBLUXATION CONFIRMED AS THE SOLE CAUSE OF MANY DISEASES

CHRONICALLY ILL PATIENTS CAN RELY ON BACH FLOWER REMEDIES

CHINESE HERBS CURE PROSTATE CANCER

ACUPUNCTURE MAKES PAIN-KILLERS OBSOLETE

ROYAL DETOX-TINCTURE PROLONGS LIFE

CRANIOSACRAL THERAPY PROVEN EFFECTIVE FOR CEREBRAL PALSY

IRIDOLOGY, A VALID DIAGNOSTIC TEST

How can I be so confident that such headlines about alternative medicine will not, one day, become reality?

Simple: because I only need to study the past and realise which breakthroughs have occurred within the previous 100 years. Mainstream scientists and doctors have discovered insulin therapy, which turned diabetes from a death sentence into a chronic disease; they have developed antibiotics which saved millions of lives; they have manufactured vaccinations against deadly infections; they have invented diagnostic techniques that made the early treatment of many life-threatening conditions possible, etc., etc., etc.

None of the many landmarks in the history of medicine has ever been in the realm of alternative medicine.

What about herbal medicine, some might ask? Aspirin, vincristine, taxol and other drugs originated from the plant kingdom, and I am sure there will be similar success-stories in the future.

But were these truly developments driven by traditional herbalists? No! They were discoveries entirely based on systematic research and rigorous science.

Progress in healthcare will not come from clinging to a dogma, nor from adhering to yesterday’s implausibilities, nor from claiming that clinical experience is more important than scientific research.

I am not saying, of course, that all of alternative medicine is useless. I am saying, however, that it is time to get realistic about what alternative treatments can do and what they cannot achieve. They will not save many lives, for instance; an alternative cure for anything is a contradiction in terms. The strength of some alternative therapies lies in palliative and supportive care, not in changing the natural history of diseases.

Yet proponents of alternative medicine tend to ignore this all too obvious fact and go way beyond the line that divides responsible from irresponsible behaviour. The result is a plethora of bogus claims – and this is clearly not right. It raises false hopes which, in a nutshell, are always unethical and often cruel.


Science has seen a steady stream of scandals which are much more than just regrettable, as they undermine much of what science stands for. In medicine, fraud and other forms of misconduct by scientists can even endanger the health of patients.

Against this background, it would be handy to have a simple measure which gives us some indication of the trustworthiness of scientists, particularly clinical scientists. Might I be so bold as to propose such a method, the TRUSTWORTHINESS INDEX (TI)?

A large part of clinical science is about testing the efficacy of treatments, and it is the scientist who does this type of research who I want to focus on. It goes without saying that, occasionally, such tests will have to generate negative results such as “the experimental treatment was not effective” [actually “negative” is not the right term, as it is clearly positive to know that a given therapy does not work]. If this never happens with the research of a given individual, we could be dealing with false positive results. In such a case, our alarm bells should start ringing, and we might begin to ask ourselves, how trustworthy is this person?

Yet, in real life, the alarm bells rarely do ring. This absence of suspicion might be due to the fact that, at any one time, a single person tends to see only one particular paper of the individual in question – and one result tells him next to nothing about whether this scientist produces more than his fair share of positive findings.

What is needed is a measure that captures the totality of a researcher’s output. Such parameters already exist; think of the accumulated “Impact Factor” or the “H-Index”, for instance. But, at best, these citation metrics provide information about the frequency or impact of this person’s published papers and totally ignore his trustworthiness. To get a handle on this particular aspect of a scientist’s work, we might have to consider not the impact but the direction of his published conclusions.

If we calculated the percentage of a researcher’s papers arriving at positive conclusions and divided this by the percentage of his papers drawing negative conclusions, we might have a useful measure. A realistic example might be the case of a clinical researcher who has published a total of 100 original articles. If 50% had positive and 50% negative conclusions about the efficacy of the therapy tested, his TI would be 1.

Depending on what area of clinical medicine this person is working in, 1 might be a figure that is just about acceptable in terms of the trustworthiness of the author. If the TI goes beyond 1, we might get concerned; if it reaches 4 or more, we should get worried.

An example would be a researcher who has published 100 papers of which 80 are positive and 20 arrive at negative conclusions. His TI would consequently amount to 4. Most of us equipped with a healthy scepticism would consider this figure highly suspect.
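For what it is worth, the whole calculation fits into a few lines of code; this is merely a sketch of my tongue-in-cheek proposal:

```python
# Trustworthiness Index: percentage of papers with positive conclusions
# divided by the percentage with negative conclusions. Both percentages
# share the same denominator, so the TI reduces to the positive/negative ratio.
def trustworthiness_index(positive_papers: int, negative_papers: int) -> float:
    if negative_papers == 0:
        raise ValueError("no negative conclusions at all - maximally suspect")
    return positive_papers / negative_papers

print(trustworthiness_index(50, 50))  # 1.0 - just about acceptable
print(trustworthiness_index(80, 20))  # 4.0 - time to get worried
```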

Of course, this is all a bit simplistic, and, like all other citation metrics, my TI does not provide us with any level of proof; it merely is a vague indicator that something might be amiss. And, as stressed already, the cut-off point for any scientist’s TI very much depends on the area of clinical research we are dealing with. The lower the plausibility and the higher the uncertainty associated with the efficacy of the experimental treatments, the lower the point at which the TI might suggest something to be fishy.

A good example of an area plagued with implausibility and uncertainty is, of course, alternative medicine. Here one would not expect a high percentage of rigorous tests to come out positive, and a TI of 0.5 might perhaps already be on the limit.

So how does the TI perform when we apply it to my colleagues, the full-time researchers in alternative medicine? I have not actually calculated the exact figures, but as an educated guess, I estimate that it would be very hard, even impossible, to find many with a TI under 4.

But surely this cannot be true! It would be way above the acceptable level which we just estimated to be around 0.5. This must mean that my [admittedly slightly tongue in cheek] idea of calculating the TI was daft. The concept of my TI clearly does not work.

The alternative explanation for the high TIs in alternative medicine might be that most full-time researchers in this field are not trustworthy. But this hypothesis must be rejected offhand – or mustn’t it?
