medical ethics

As I am drafting this post, I am in a plane flying back from Finland. The in-flight meal reminded me that no food is so delicious that it cannot be spoilt by the addition of too many capers. In turn, this made me think about the paper I happened to be reading at the time, and I arrived at the following theory: no trial design is so rigorous that it cannot be turned into something utterly nonsensical by the addition of a few amateur researchers.

The paper I was reading when this idea occurred to me was a randomised, triple-blind, placebo-controlled cross-over trial of homeopathy. Sounds rigorous and top quality? Yes, but wait!

Essentially, the authors recruited 86 volunteers who all claimed to be suffering from “mental fatigue” and treated them with Kali-Phos 6X or placebo for one week (X-potencies signify dilution steps of 1:10; 6X therefore means that the salt had been diluted 1:1,000,000). Subsequently, the volunteers were crossed over to receive the other treatment for one week.
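For readers unfamiliar with homeopathic potency notation, the arithmetic can be sketched in a few lines (a minimal illustration of the dilution convention described above, not code from the paper itself):

```python
# Each X (decimal) potency step is one 1:10 dilution of the previous solution.
# A 6X potency therefore corresponds to six successive 1:10 dilutions.
def x_potency_dilution(steps: int) -> int:
    """Return the dilution denominator after `steps` successive 1:10 steps."""
    return 10 ** steps

print(f"6X corresponds to a 1:{x_potency_dilution(6):,} dilution")
# 6X corresponds to a 1:1,000,000 dilution
```

At higher potencies (beyond roughly 24X), the dilution exceeds Avogadro's number, so the remedy is statistically unlikely to contain a single molecule of the original salt.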

The results failed to show that the homeopathic medication had any effect (not even homeopaths can be surprised about this!). The authors concluded that Kali-Phos was not effective but cautioned that, because of the possibility of a type-2 error, they might have missed an effect which, in truth, does exist.
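The type-2-error caveat is, at bottom, a question of statistical power. A rough sketch of the standard normal-approximation power calculation for a paired (cross-over) comparison, using purely illustrative numbers rather than the trial's actual parameters, shows why the caveat rings hollow with 86 participants:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_paired(effect_size: float, n: int, z_alpha: float = 1.96) -> float:
    """Approximate power of a two-sided paired test at alpha = 0.05.
    effect_size is the standardised mean of the within-subject differences;
    this is the usual normal approximation, not the paper's own analysis."""
    return normal_cdf(effect_size * sqrt(n) - z_alpha)

# With 86 cross-over participants, even a small within-subject effect
# (d = 0.3) would be detected roughly 80% of the time, and a moderate
# one (d = 0.5) almost certainly.
print(round(power_paired(0.3, 86), 2))  # ~0.79
print(round(power_paired(0.5, 86), 2))  # >0.99
```

In other words, a sample of this size was quite capable of detecting any effect of practical relevance; invoking a possible type-2 error to keep the hypothesis alive is special pleading.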

In my view, this article provides an almost classic example of how time, money and other resources can be wasted in a pretence of conducting reasonable research. As we all know, clinical trials usually are for testing hypotheses. But what is the hypothesis tested here?

According to the authors, the aim was to “assess the effectiveness of Kali-Phos 6X for attention problems associated with mental fatigue”. In other words, their hypothesis was that this remedy is effective for treating the symptom of mental fatigue. This notion, I would claim, is not a scientific hypothesis; it is a foolish conjecture!

Arguably any hypothesis about the effectiveness of a highly diluted homeopathic remedy is mere wishful thinking. But, if there were at least some promising data, some might conclude that a trial was justified. By way of justification for the RCT in question, the authors inform us that one previous trial had suggested an effect; however, this study did not employ just Kali-Phos but a combined homeopathic preparation which contained Kalium-Phos as one of several components. Thus the authors’ “hypothesis” does not even amount to a hunch, not even to a slight inkling! To me, it is less than a shot in the dark fired by blind optimists – nobody should be surprised that the bullet failed to hit anything.

It could even be that the investigators themselves dimly realised that something was amiss with the basis of their study; this might be the reason why they called it an “exploratory trial”. But an exploratory study is one without a hypothesis, and the trial in question does have a hypothesis of sorts – only that it is rubbish. And what exactly did the authors mean to explore anyway?

That self-reported mental fatigue in healthy volunteers is a condition that can be medicalised such that it merits treatment?

That the test they used for quantifying its severity is adequate?

That a homeopathic remedy with virtually no active ingredient generates outcomes which are different from placebo?

That Hahnemann’s teaching of homeopathy was nonsense and can thus be discarded (he would have sharply condemned the approach of treating all volunteers with the same remedy, as it contradicts many of his concepts)?

That funding bodies can be fooled to pay for even the most ridiculous trial?

That ethics-committees might pass applications which are pure nonsense and which are thus unethical?

A scientific hypothesis should be more than a vague hunch; at its simplest, it aims to explain an observation or phenomenon, and it ought to have certain features which many alt med researchers seem to have never heard of. If they test nonsense, the result can only be nonsense.

The issue of conducting research that does not make much sense is far from trivial, particularly as so much (I would say most) of alt med research is of such or even worse calibre (if you do not believe me, please go on Medline and see for yourself how many of the recent articles in the category “complementary alternative medicine” truly contribute to knowledge worth knowing). It would therefore be easy to cite more hypothesis-free trials of homeopathy.

One recent example from Germany will have to suffice: in this trial, the only justification for conducting a full-blown RCT was that the manufacturer of the remedy allegedly knew of a few unpublished case-reports which suggested that the treatment worked – and, of course, the results of the RCT eventually showed that it didn’t. Anyone with a background in science might have predicted that outcome – which is why such trials are so deplorably wasteful.

Research-funds are increasingly scarce, and they must not be spent on nonsensical projects! The money and time should be invested more fruitfully elsewhere. Participants of clinical trials give their cooperation willingly; but if they learn that their efforts have been wasted unnecessarily, they might think twice next time they are asked. Thus nonsensical research may have knock-on effects with far-reaching consequences.

Being a researcher is at least as serious a profession as most other occupations; perhaps we should stop allowing total amateurs to waste money while playing at being professional. If someone driving a car does something seriously wrong, we take away his licence; why is there no similar mechanism for inadequate researchers, funders and ethics-committees which prevents them doing further damage?

At the very minimum, we should critically evaluate the hypotheses that applicants for research-funds propose to test. Had someone done this properly in relation to the two above-named studies, we would have saved about £150,000 per trial (my estimate). But as it stands, the authors will probably claim that they have produced fascinating findings which urgently need further investigation – and we (normally you and I) will have to spend three times the above-named amount (again, my estimate) to finance a “definitive” trial. Nonsense, I am afraid, tends to beget more nonsense.


Daniels and Vogel recently published an article entitled “Consent in osteopathy: A cross sectional survey of patients’ information and process preferences” (INTERNATIONAL JOURNAL OF OSTEOPATHIC MEDICINE 2012, 15:3, p.92-102). It addresses an important yet woefully under-researched area.

I find it most laudable that two osteopaths are conducting research into medical ethics; but the questions remain: does the article tell us anything worth knowing, and is it sufficiently rigorous and critical? As the journal does not seem to be available on Medline, I cannot provide a link. I therefore take the liberty of quoting the most important bits directly from the abstract here.

Objective: To explore and describe patients’ preferences of consent procedures in a sample of UK osteopathic patients.

Methods: A cross sectional survey using a new questionnaire was performed incorporating paper and web-based versions of the instruments. 500 copies were made available, (n = 200) to patients attending the British School of Osteopathy (BSO) clinic, and (n = 300) for patients attending 30 randomly sampled osteopaths in practice. Quantitative data were analysed descriptively to assess patient preferences; non-parametric analyses were performed to test for preference difference between patients using demographic characteristics.

Results: 124 completed questionnaires were returned from the BSO sample representing a 41% response rate. None were received from patients attending practices outside of the BSO clinic. The majority (98%) of patient respondents thought that having information about rare yet potentially severe risks of treatment was important. Patients’ preferred to have this information presented during the initial consultation (72%); communication method favoured was verbal (90%). 99% would like the opportunity to ask questions about risks, and all respondents (100%) consider being informed about their current diagnosis as important.

Conclusion: Patients endorse the importance of information exchange as part of the consent process. Verbal communication is very important and is the favoured method for both receiving information and giving consent. Further research is required to test the validity of these results in practice samples

The 0% response-rate in patients from non-BSO practices is, of course, remarkable and not without irony. In my view, it highlights better than anything else the fact that informed consent rarely appears on the osteopathic radar screen. In a way, this increases the praise we should give the two authors for tackling the issue.

The central question of the survey is whether patients want to know about the risks of osteopathy. This is more than a little bizarre: informed consent is not an option, it is a legal, moral and ethical obligation. It seems therefore odd to ask the question “do you want to learn about the risks which you are about to be exposed to?”

Even odder is, I think, the second question: “when do you want to receive this information?” It goes without saying that informed consent has to happen before the intervention! Common sense tells us so, the law dictates it, and ethical codes prescribe it.

There is general agreement amongst health care professionals and ethicists that verbal consent suffices in most therapeutic situations, that patients must have the opportunity to ask questions, and that informed consent also extends to diagnostic issues. So the questions referring to these issues are also a bit strange or naive, in my view.

The article might be revealing mostly by what it does not address rather than by what it tells us. It would be really valuable to know the percentage of osteopaths who abide by the legal, moral and ethical imperative of informed consent in their daily practice. To the best of my knowledge, this information is not available [if anyone has such information, please let me know and provide the reference]. Assuming that it is similar to the percentage of UK chiropractors who obtain informed consent, it might be seriously wanting: only 45% of them routinely obtain informed consent from their patients.

Another issue that, in my view, would be relevant to clarify is the nature of the information provided by osteopaths to patients, other than that of serious risks associated with spinal manipulation/mobilisation. Do they tell their patients about the evidence suggesting that osteopathy does (not) work for the condition at hand? Do they elaborate on non-osteopathic treatments for that disease? I fear that the answers to these questions might well be negative.

Imagine a patient being told that there is no good evidence for effectiveness of osteopathy, that the possibility of some harm exists, and that other interventions might actually do more good than harm than what the osteopath has to offer. How likely is it that this patient would agree to receiving osteopathic treatment?

For most alternative practitioners, including osteopaths, informed consent and most other important ethical issues have so far remained highly uncomfortable areas. This may have a good and simple reason: they have the potential to become real and serious threats to their current practice and business. I suspect this is why there is so very little awareness of and research into the ethics of alternative medicine: “best not to wake sleeping lions”, seems to be the general attitude.

The survey by Daniels and Vogel, even though it touches upon an important topic, avoids the truly pertinent questions. It therefore looks to me rather like a fig leaf, shamefully hiding an area of potential embarrassment.

And where do we go from here? I predict that the current strategy of alternative practitioners to ignore and violate medical ethics as much as possible will not be tolerated for much longer. Double standards in health care cannot and should not survive. The sooner we begin addressing some of these uncomfortable questions with rigorous research, the better – perhaps not for the practitioner but certainly for the patient.
