I just came across a website that promised to “cover 5 common misconceptions about alternative medicine that many people have”. As much of this blog is about this very issue, I was fascinated. Here are Dr Cohen’s 5 points in full:
5 Misconceptions about Alternative Medicine Today
1. Alternative Medicine Is Only an Alternative
In fact, many alternative practitioners are also medical doctors, chiropractors, or other trained medical professionals. Others work closely with MDs to coordinate care. Patients should always let all of their health care providers know about treatments that they receive from all the others.
2. Holistic Medicine Isn’t Mainstream
In fact, scientists and doctors do perform studies on all sorts of alternative therapies to determine their effectiveness. These therapies, like acupuncture and an improved diet, pass the test of science and then get integrated into standard medical practices.
3. Natural Doctors Don’t Use Conventional Medicine
No credible natural doctor will ever tell a patient to replace prescribed medication without consulting with his or her original doctor. In many cases, the MD and natural practitioner are the same person. If not, they will coordinate treatment to benefit the health of the patient.
4. Alternative Medicine Doesn’t Work
Actual licensed health providers won’t just suggest natural therapies on a whim. They will consider scientific studies and their own experience to suggest therapies that do work. Countless studies have, for example, confirmed that acupuncture is an effective treatment for many medical conditions. Also, the right dietary changes are known to help improve health and even minimize or cure some diseases. Numerous other alternative therapies have been proven effective using scientific studies.
5. Big Medical Institutions are Against Alternative Medicine
According to a recent survey, about half of big insurers pay for tested alternative therapies like acupuncture. Also, hospitals and doctors do recognize that lifestyle changes, some herbal remedies, and other kinds of alternative medicine may reduce side effects, allow patients to reduce prescription medicine, and even lower medical bills.
This is not to say that every insurer, doctor, or hospital will support a particular treatment. However, patients are beginning to take more control of their health care. If their own providers won’t suggest natural remedies, it might be a good idea to find one who does.
The Best Medicine Combines Conventional and Alternative Medicine
Everyone needs to find the right health care providers to enjoy the safest and most natural care possible. Good natural health providers will have a solid education in their field. Nobody should just abandon their medical treatment to pursue alternative cures. However, seeking alternative therapies may help many people reduce their reliance on harsh medications by following the advice of alternative providers and coordinating their care with all of their health care providers.
END OF COHEN’S TEXT
COMMENT BY MYSELF
Who the Dickens is Dr Cohen and what is his background? I asked myself after reading this. From his website, it seems that he is a chiropractor from North Carolina – not just any old chiro, but one of the best!!! – who also uses several other dubious therapies. He sums up his ‘philosophy’ as follows:
There is an energy or life force that created us (all 70 trillion cells that we are made of) from two cells (sperm and egg cells). This energy or innate intelligence continues to support you throughout life and allows you to grow, develop, heal, and express your every potential. This life force coordinates all cells, tissues, muscles and organs by sending specific, moment by moment communication via the nervous system. If the nervous system is over-stressed or interfered with in any way, then your life force messages will not be properly expressed.
Here he is on the cover of some magazine and here is also his ‘PAIN CLINIC’
Fascinating stuff, I am sure you agree.
As I do not want to risk a libel case, I will abstain from commenting on Dr Cohen and his methods or beliefs. Instead I will try to clear up a few misconceptions that are pertinent to him and the many other practitioners who are promoting pure BS via the Internet.
- Not everyone who uses the title ‘Dr’ is a doctor in the sense of having studied medicine.
- Chiropractors are not ‘trained medical professionals’.
- The concepts of ‘vitalism’, ‘life force’ etc. were abandoned in real health care a long time ago, and medicine has improved hugely because of this.
- Hardly any alternative therapy has ‘passed the test of science’.
- Therefore, it is very doubtful whether alternative practitioners actually will ‘consider scientific studies’.
- True, some trials did suggest that acupuncture is an effective treatment for many medical conditions; but their methodological quality is often far too low to draw firm conclusions and many other, often better studies have shown the contrary.
- Numerous other alternative therapies have been proven ineffective using scientific studies.
- Therefore it might be a good idea to find a health care provider who does not offer unproven treatments simply to make a fast buck.
- Seeking alternative therapies may harm many people.
Dear Professor Robinson,
please forgive me for writing to you in a matter that, you might think, is really none of my business. I have been following the news and discussions about the BLACKMORE CHAIR at your university. Having been a professor of complementary medicine at Exeter for ~20 years and having published more papers on this subject than anyone else on the planet, I am naturally interested and would like to express some concerns, if you allow me to.
With my background, I would probably be the last person to argue that a research chair in alternative medicine is not a good and much-needed thing. However, accepting an endowment from a commercially interested source is, as you are well aware, a highly problematic matter.
I am confident that you intend to keep the sponsor at arm’s length and plan to appoint a true scientist to this post who will not engage in the promotional activities which the alternative medicine scene might be expecting. And I am equally sure that the money will be put to good use resulting in good and fully independent science.
But, even if all of this is the case, there are important problems to consider. By accepting Blackmore’s money, you have, perhaps inadvertently, lent credibility to a commercially driven business empire. As you probably know, Blackmores have a reputation for being ‘a bit on the cavalier side’ when it comes to rules and regulations. This is evidenced, for instance, by the number of complaints that have been upheld against them by the Australian authorities.
For these reasons, the creation of the new chair is not just a step towards generating research, it could (and almost inevitably will) be seen as a boost for quackery. It is foremost this aspect which might endanger the reputation of your university, I am afraid.
My own experience over the last two decades has taught me to be cautious and sceptical regarding the motives of many involved in the multi-billion alternative medicine business. I have recently published my memoir entitled ‘A SCIENTIST IN WONDERLAND. SEARCHING FOR TRUTH AND FINDING TROUBLE’; it might be a helpful read for you and the new professor.
I hope you take my remarks as they were meant: constructive advice from someone who had to learn it all the hard way. If I can be of further assistance, please do not hesitate to ask me.
A recent comment to a post of mine (by a well-known and experienced German alt med researcher) made the following bold statement aimed directly at me and at my apparent lack of understanding of research methodology:
C´mon , as researcher you should know the difference between efficacy and effectiveness. This is pharmacological basic knowledge. Specific (efficacy) + nonspecific effects = effectiveness. And, in fact, everything can be effective – because of non-specific or placebo-like effects. That does not mean that efficacy is existent.
The point he wanted to make is that outcome studies – studies without a control group, where researchers simply observe the outcome of a particular treatment in a ‘real life’ situation – suffice to demonstrate the effectiveness of therapeutic interventions. This belief is very widespread in alternative medicine and tends to mislead all concerned. It is therefore worth re-visiting this issue here in an attempt to create some clarity.
When a patient’s condition improves after receiving a therapy, it is very tempting to feel that this improvement reflects the effectiveness of the intervention (as the researcher mentioned above obviously does). Tempting but wrong: there are many other factors involved as well, for instance:
- the placebo effect (mainly based on conditioning and expectation),
- the therapeutic relationship with the clinician (empathy, compassion etc.),
- the regression towards the mean (outliers tend to return to the mean value),
- the natural history of the patient’s condition (most conditions get better even without treatment),
- social desirability (patients tend to say they are better to please their friendly clinician),
- concomitant treatments (patients often use treatments other than the prescribed one without telling their clinician).
So, how does this fit into the statement above, ‘Specific (efficacy) + nonspecific effects = effectiveness’? Even if this formula were correct, it would not mean that outcome studies of the nature described demonstrate the effectiveness of a therapy. It all depends, of course, on what we call ‘non-specific’ effects. We all agree that placebo effects belong to this category. Most experts would probably also include the therapeutic relationship and the regression towards the mean under this umbrella. But the last three points from my list are clearly not non-specific effects of the therapy; they are therapy-independent determinants of the clinical outcome.
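To see just how large these therapy-independent factors can be, here is a toy simulation (all numbers are hypothetical and chosen purely for illustration): patients are enrolled in an uncontrolled outcome study on a day when their symptoms are unusually bad, receive a ‘treatment’ with zero specific effect, and are re-measured later. Regression to the mean plus natural history alone produce an impressive apparent improvement.

```python
import random

random.seed(0)

def symptom_score(true_severity):
    # Observed score = stable underlying severity + random day-to-day fluctuation
    return true_severity + random.gauss(0, 10)

NATURAL_RECOVERY = 5  # average spontaneous improvement (natural history)

# A population of patients with stable underlying severity between 40 and 60
patients = [random.uniform(40, 60) for _ in range(10_000)]

# Enrol only those who score above 60 at baseline (i.e. on a bad day) --
# exactly how 'real life' outcome studies often recruit.
enrolled = [(p, s) for p in patients if (s := symptom_score(p)) > 60]

before = sum(s for _, s in enrolled) / len(enrolled)
# Re-measure after a 'treatment' with ZERO specific effect; only natural
# history (the small spontaneous recovery) and fresh random noise apply.
after = sum(symptom_score(p - NATURAL_RECOVERY) for p, _ in enrolled) / len(enrolled)

print(f"mean score before: {before:.1f}, mean score after: {after:.1f}")
# Scores drop markedly even though the treatment did nothing: enrolment on
# a bad day (regression to the mean) plus natural history explain it all.
```

Run as written, the ‘after’ mean is substantially lower than the ‘before’ mean, mimicking the kind of improvement that an uncontrolled study would credit to the therapy.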
The most important factor here is usually the natural history of the disease. Some people find it hard to imagine what this term actually means. Here is a little joke which, I hope, will make its meaning clear and memorable.
CONVERSATION BETWEEN TWO HOSPITAL DOCTORS:
Doc A: The patient from room 12 is much better today.
Doc B: Yes, we started his treatment just in time; a day later and he would have been cured without it!
I am sure that most of my readers now understand (and will never forget) that clinical improvement cannot be equated with the effectiveness of the treatment administered (they might thus be immune to the misleading messages they are constantly exposed to). Yet, I am not at all sure that all ‘alternativists’ have got it.
A recent article in the BMJ about my new book seems to have upset fellow researchers of alternative medicine. I am told that the offending passage is the following:
“Too much research on complementary therapies is done by people who have already made up their minds,” the first UK professor of complementary medicine has said. Edzard Ernst, who left his chair at Exeter University early after clashing with the Prince of Wales, told journalists at the Science Media Centre in London that, although more research into alternative medicines was now taking place, “none of the centres is anywhere near critical enough.”
Following this publication, I received indignant inquiries from colleagues asking whether I meant to say that their work lacks critical thinking. As this is a valid question, I will try to answer it the best I presently can.
Any critical evaluation of alternative medicine has to yield its fair share of negative conclusions about the value of alternative medicine. If it fails to do that, one would need to assume that most or all alternative therapies generate more good than harm – and very few experts (who are not proponents of alternative medicine) would assume that this can possibly be the case.
Put differently, this means that a researcher or a research group that does not generate its fair share of negative conclusions is suspect of lacking a critical attitude. In a previous post, I have addressed this issue in more detail by creating an ‘index’: THE TRUSTWORTHINESS INDEX. I have also provided a concrete example of a researcher who seems to be associated with a remarkably high index (the higher the index, the stronger the suspicion of a lack of a critical attitude).
Instead of unnecessarily upsetting my fellow researchers of alternative medicine any further, I will just issue this challenge: if any research group can demonstrate an index below 0.5 (which would mean the team has published twice as many negative conclusions as positive ones), I will gladly and publicly retract my suspicion that this group is not “anywhere near critical enough”.
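For readers who want to apply the challenge to a publication list of their own choosing, here is a minimal sketch of the index as I read it from the wording above: the ratio of positive to negative published conclusions (the exact definition is in the earlier post; the 0.5 threshold below simply matches the challenge as stated).

```python
def trustworthiness_index(positive_conclusions: int, negative_conclusions: int) -> float:
    """Ratio of positive to negative conclusions published by a research group.

    An index below 0.5 means at least twice as many negative as positive
    conclusions; exclusively positive output yields infinity (maximal suspicion).
    """
    if negative_conclusions == 0:
        return float("inf")
    return positive_conclusions / negative_conclusions

# A group with 4 positive and 10 negative conclusions would meet the challenge:
print(trustworthiness_index(4, 10))   # 0.4 -- below the 0.5 threshold
# A group with 12 positive and 3 negative conclusions would not:
print(trustworthiness_index(12, 3))   # 4.0 -- well above it
```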
Homeopathy has many critics who claim that there is no good evidence for this type of therapy. Homeopaths invariably find this most unfair and point to a plethora of studies that show an effect. They are, of course, correct! There are plenty of trials that suggest that homeopathic remedies do work. The question, however, is HOW RELIABLE ARE THESE STUDIES?
Here is a brand new one which might stand for dozens of others.
In this study, homeopaths treated 50 multimorbid patients with homeopathic remedies identified by a method called ‘polarity analysis’ (PA) and prospectively followed them over one year (PA enables homeopaths to calculate a relative healing probability, based on Boenninghausen’s grading of polar symptoms).
The 43 patients (86%) who completed the observation period experienced an average improvement of 91% in their initial symptoms. Six patients dropped out; one further patient did not achieve an improvement of 80% and was therefore also counted as a treatment failure. The cost of homeopathic treatment was 41% of the projected cost of equivalent conventional treatment.
Good news then for enthusiasts of homeopathy? 91% improvement!
Yet, I am afraid that critics might not be bowled over. They might smell a whiff of selection bias, lament the lack of a control group or regret the absence of objective outcome measures. But I was prepared to go as far as stating that such results might be quite interesting… until I read the authors’ conclusions that is:
Polarity Analysis is an effective method for treating multimorbidity. The multitude of symptoms does not prevent the method from achieving good results. Homeopathy may be capable of taking over a considerable proportion of the treatment of multimorbid patients, at lower costs than conventional medicine.
Virtually nothing in these conclusions is based on the data provided. They are pure extrapolation and wild assumptions. Two questions seem to emerge from this:
- How on earth can we take this and so many other articles on homeopathy seriously?
- When does this sort of article cross the line between wishful thinking and scientific misconduct?
During the next few weeks, I will post several short excerpts from my new book ‘A SCIENTIST IN WONDERLAND‘. Its subtitle already discloses much of what it is all about: ‘A MEMOIR OF SEARCHING FOR TRUTH AND FINDING TROUBLE’.
Some of my critics are likely to claim that I engage in this form of ‘promotion’ because I want to maximise my income by enticing my readers to buy the book. This is partly true, of course: after having worked very hard on this book for about 5 years, I want it to be read (but, at the same time, my critics would be mistaken: I do not expect to get rich on my new book – I am not that naïve; this ‘memoire’ will never be found on any best-seller list, I am sure). So, I suggest (if you do not want me to profit in any way) that you read my memoire after getting it from your library (which obviously would not affect my cash-flow all that much).
So here it is: with much trepidation and even more excitement I present to you the very first, short excerpt (as I said, there will be more).
There are some people, a fortunate few, who seem to know from an early age where they want to go in life, and have no trouble getting there.
I was not one of them. I was born in Germany in the years immediately following the end of World War II and, like many German children of that era, I was acutely aware of the awkwardness and unease that my elders displayed when it came to discussions that touched on the country’s recent history. Even as a young boy, I was conscious that there was a large and restive skeleton in the nation’s closet, and that it belonged to all of us – even those of us who had not been alive during the Nazi era were somehow nevertheless its legatees, inextricably bound to it simply by the awareness of its existence.
With time, the growing realization that so many of our peers – teachers, uncles, aunts; perhaps even our own parents – had lent their assent, or worse, their enthusiastic assistance to the Nazi regime robbed their generation of its moral authority and left us, their children, unmoored and adrift.
In a profound sense I felt homeless. An accident of fate had landed me on the planet with a German passport, and with German as my mother tongue, but where did I really belong? Where would I go? What would I do with my life?
There had been physicians in my family for generations and there was always an expectation that I, too, would enter that profession. Yet I felt no strong pull towards medicine. As a young man my only real passion was music, particularly jazz, with its anarchic improvisations and disobedient rhythms; and the fact that it had been banned by the Nazis only made it all the more appealing to me. I would have been perfectly happy to linger indefinitely in the world of music, but eventually, like a debt come due, medicine summoned me, and I surrendered myself to the profession of my forebears.
In hindsight I am glad that my mother nudged me gently yet insistently in the direction of medical school. While music has delighted and comforted me throughout my life, it has been medicine that has truly defined me, stretching, challenging and nourishing me intellectually, even as it tested me on a personal level almost to the limits of my endurance.
Certainly, I had never anticipated that asking basic and necessary questions as a scientist might prove so fiercely controversial, and that as a result of my research I might become involved in ideological wrangling and political intrigue emanating from the highest level.
If I had known the difficulties I would face, the stark choices, the conflicts and machinations that awaited me, would I have chosen to spend my life in medicine? Yes, I would. Becoming a physician and pursuing the career of a scientist has afforded me not only the opportunity to speak out against the dangerous and growing influence of pseudoscience in medicine, but also, paradoxically, has given me both the reason and the courage to look back steadily at the unbearable past.
This is the story of how I finally found where I belong.
Guest post by Louise Lubetkin
(A SCIENTIST IN WONDERLAND: A MEMOIR OF SEARCHING FOR TRUTH AND FINDING TROUBLE has now been published. An apt opportunity perhaps to post a letter and comment from the person who helped me GREATLY in finishing it.)
People write memoirs for a variety of reasons but perhaps one of the strongest impelling forces is the need to make sense of one’s own experiences. It is not surprising that you, who spent your entire professional career searching for explanations, identifying associations and parsing correlations, found yourself looking at your own life with the same analytical curiosity. Memoir is in many respects a natural choice in this regard.
That you chose to undertake a profoundly personal inventory at this juncture is also understandable in human terms. Retirement, whether anticipated and planned for, or (as in your case) thrust rudely upon you, reorders one’s sense of identity in ways that cannot fail to prompt reflection. It would have been surprising had you not felt an urge to look back and take stock, to trace the narrative arc of your life from its beginnings in post-war Germany all the way to the quiet house in rural Suffolk where you now sit, surrounded by the comfort of books and the accumulated paraphernalia of a life spent digging and delving in search of the building blocks of truth.
Given the contentious circumstances surrounding your departure from academic life, it is quite likely that you will be asked whether your decision to write a memoir was driven, at least in part, by a desire to settle scores. I think you can dismiss such a question unhesitatingly. You have no scores to settle: you came to England after a steady and unbroken ascent to the apex of your professional career, voluntarily leaving behind a position that most people would regard with envy and deference. You were never a supplicant at Exeter’s door; far from it. The fact that things went inexorably downhill over the course of your 20 years’ tenure there, and ended so deplorably, is not a reflection on you, your department, or the quality or quantity of work you turned out. Rather, it is a reflection on the very nature of the work you went there to do – and if there is any message in your memoir, it is this:
Alternative medicine is not, at its heart, a logical enterprise, and its adherents are not committed to – nor even interested in – a rational evaluation of their methods. Rather, alternative medicine is primarily an ideological position, a political credo, a reaction against mainstream medicine. To many of its adherents and impassioned advocates, its appeal lies not in any demonstrable therapeutic efficacy but in its perceived outsider status as the countercultural medicine, the medicine for Everyman, the David to the bullying medical-pharmaceutical Goliath. That your research work would elicit howls of protest was perhaps inevitable, given the threat it posed to the profitable and powerful alternative medicine industry. But it didn’t stop there: astonishingly, your work drew the ire of none other than the meddlesome heir apparent to the British throne. Prince Charles’ attempts to stymie your work call to mind the twelfth-century martyr Thomas à Becket, of whom Henry II reputedly cried: “Oh, who will rid me of this turbulent priest?” (Henry’s sycophantic henchmen were quick to oblige, dispatching the hapless cleric on the steps of Canterbury cathedral.)
It’s clear that you were acutely aware, as a young man growing up in Germany, that science was not immune to the corrupting influence of political ideology, and that the German medical profession had entered – enthusiastically – into a Faustian compact with the Nazi regime. You have exhibited a courageous insistence on confronting and examining a national past that has at times felt like an intensely personal burden to you. It is ironic that in going to sleepy Exeter in an earnest, conscious attempt to shake off the constricting, intrigue-ridden atmosphere of academic Vienna, you ultimately found yourself once again mired in a struggle against the influence of ideology and the manipulation of science for political ends.
You went to Exeter strictly as a scientist, a skilled inquirer, a methodical investigator, expecting to be able to bring the rigors of logic and the scientific method to bear on an area of medical practice that had until then not been subjected to any kind of systematic evaluation. Instead, you were caught in a maelstrom of intrigue far worse than that which you had gratefully left behind in Vienna, buffeted and bruised by forces against which a lesser man would surely not have had the fortitude to push back so long and so hard.
Hard to believe but, in the last 35 years, I have written or edited a total of 49 books; about half of them on alternative medicine and the rest on various subjects related to clinical medicine and research. Each time a new one comes out, I am excited, of course, but this one is special:
- I have not written a book for several years.
- I have worked on it much longer than on any book before.
- Never before have I written a book which is so much about myself.
- None of my previous books covered material that is as ‘sensitive’ as this one.
I started on this book shortly after TRICK OR TREATMENT had been published. Its initial working title was ALTERNATIVE MEDICINE: THE INSIDE STORY. My aim was to focus on the extraordinary things which had happened during my time in Exeter, to shed some light on the often not so quaint life in academia, and to show how bizarre the world of alternative medicine truly is. But several people who know about these things and who had glanced at the first draft chapters strongly advised me to radically change this concept. They told me that such a book could only work as a personal memoire.
Yet I was most reluctant to write about myself; I wanted to write about science and research, as well as the obstacles which some people manage to put in their way. So, after much discussion and contemplation, I compromised and added the initial chapters which told the reader about my background and my work prior to the Exeter appointment. This brought in subjects such as my research on ‘Nazi-medicine’ (which, I believe, is more important than my research on alternative medicine) that seemed almost entirely unrelated to alternative medicine, and the whole thing began to look a bit disjointed, in my view. However, my advisers felt this was a step in the right direction and argued that my compromise was not enough; they wanted more about me as a person, my motivations, my background etc. Eventually I (partly) gave in and provided a bit more of what they seemed to want.
But I am clearly not a novelist, most of what I have ever written is medical stuff; my style is too much that of a scientist – dry and boring. In other words, my book seemed to be going nowhere. Just when, after years of hard work, I was about to throw it all in the bin, help came from a totally unexpected corner.
Louise Lubetkin (even today, I have never met her in person) had contributed several posts as ‘guest editor’ to this blog, and I very much liked her way with words. When she offered to have a look at my book, I was thrilled. It is largely thanks to her that my ‘memoire’ ever saw the light of day. She helped enormously with making it readable and with joining up the seemingly separate episodes described in my book.
Finding a fitting title was far from easy. Nothing seemed to encapsulate its contents, and ‘A SCIENTIST IN WONDERLAND’, the title I eventually chose, is a bit of a compromise; the subtitle does describe it much better, I think: A MEMOIR OF SEARCHING FOR TRUTH AND FINDING TROUBLE.
Now that the book is about to be published, I am anxious as never before on similar occasions. I do, of course, not think for a minute that it will be anywhere near a best-seller, but I want people with an interest in alternative medicine, academia or science to read it (get it from a library to save money) and, foremost, I want them to understand why I wrote it. For me, this is neither about settling scores nor about self-promotion; it is about telling a story which is important in more than one way.
As promised, I will try with this post to explain my reservations regarding the new meta-analysis suggesting that individualised homeopathic remedies are superior to placebos. Before I start, however, I want to thank all those who have commented on various issues; it is well worth reading the numerous and diverse comments.
To remind us of the actual meta-analysis, it might be useful to re-publish its abstract (the full article is also available online):
A rigorous and focused systematic review and meta-analysis of randomised controlled trials (RCTs) of individualised homeopathic treatment has not previously been undertaken. We tested the hypothesis that the outcome of an individualised homeopathic treatment approach using homeopathic medicines is distinguishable from that of placebos.
The review’s methods, including literature search strategy, data extraction, assessment of risk of bias and statistical analysis, were strictly protocol-based. Judgment in seven assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain. ‘Effect size’ was reported as odds ratio (OR), with arithmetic transformation for continuous data carried out as required; OR > 1 signified an effect favouring homeopathy.
Thirty-two eligible RCTs studied 24 different medical conditions in total. Twelve trials were classed ‘uncertain risk of bias’, three of which displayed relatively minor uncertainty and were designated reliable evidence; 20 trials were classed ‘high risk of bias’. Twenty-two trials had extractable data and were subjected to meta-analysis; OR = 1.53 (95% confidence interval (CI) 1.22 to 1.91). For the three trials with reliable evidence, sensitivity analysis revealed OR = 1.98 (95% CI 1.16 to 3.38).
Medicines prescribed in individualised homeopathy may have small, specific treatment effects. Findings are consistent with sub-group data available in a previous ‘global’ systematic review. The low or unclear overall quality of the evidence prompts caution in interpreting the findings. New high-quality RCT research is necessary to enable more decisive interpretation.
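For readers unfamiliar with how pooled odds ratios such as the OR = 1.53 (95% CI 1.22 to 1.91) above are arrived at, here is a sketch of the standard fixed-effect, inverse-variance method on the log-odds scale. The per-trial numbers are made up purely for illustration; they are not the data from the trials in the review.

```python
import math

# Hypothetical per-trial results (illustrative only -- NOT the review's data):
# each tuple is (odds ratio, standard error of ln(OR)).
trials = [(1.8, 0.40), (1.3, 0.35), (2.1, 0.55)]

# Fixed-effect inverse-variance pooling: weight each trial by 1/SE^2
# and average the log odds ratios.
weights = [1 / se**2 for _, se in trials]
pooled_log_or = sum(w * math.log(or_) for (or_, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Back-transform to the OR scale; 1.96 gives a 95% confidence interval.
pooled_or = math.exp(pooled_log_or)
ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)

print(f"Pooled OR = {pooled_or:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# The pooled OR lies between the individual trial ORs, pulled towards
# the most precise (smallest-SE) trials.
```

An OR > 1 favours homeopathy in the review’s convention; note that the pooling machinery says nothing about the risk of bias of the trials fed into it, which is exactly why the quality judgements discussed below matter so much.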
Since my team had published an RCT of individualised homeopathy, it seems only natural that my interest focussed on why the study (even though identified by Mathie et al) had not been included in the meta-analysis. Our study had provided no evidence that adjunctive homeopathic remedies, as prescribed by experienced homeopathic practitioners, are superior to placebo in improving the quality of life of children with mild to moderate asthma in addition to conventional treatment in primary care.
I was convinced that this trial had been rigorous and thus puzzled why, despite receiving ‘full marks’ from the reviewers, they had not included it in their meta-analysis. I thus wrote to Mathie, the lead author of the meta-analysis, and he explained: For your trial (White et al. 2003), under domain V of assessment, we were unable to extract data for meta-analysis, and so it was attributed high risk of bias, as specified by the Cochrane judgmental criteria. Our designated main outcome was the CAQ, for which we needed to know (or could at least estimate) a mean and SD for both the baseline and the end-point of the study. Since your paper reported only the change from baseline in Table 3 or in the main text, it is not possible to derive the necessary end-point for analysis.
It took a while and several further emails until I understood: our study did report both the primary (Table 2, quality of life) and the secondary outcome measure (Table 3, severity of symptoms). The primary outcome measure was reported in full detail such that a meta-analysis would have been possible. The secondary outcome measure was also reported, but not in full detail, and the data provided by us would not lend themselves to meta-analysis. By selecting not our primary but our secondary outcome measure for their meta-analysis, Mathie et al were able to claim that they were unable to use our study and reject it from their meta-analysis.
Why did they do that?
The answer is simple: in their methods section, they specify that they used outcome measures “based on a pre-specified hierarchical list in order of greatest to least importance, recommended by the WHO”. This, I would argue, is deeply flawed: the most important outcome measure of a study is usually the one for which the study was designed, not the one that some guys at the WHO feel might be important (incidentally, the WHO list was never meant to be applied to meta-analyses in that way).
By following rigidly their published protocol, the authors of the meta-analysis managed to exclude our negative trial. Thus they did everything right – or did they?
Well, I think they committed several serious mistakes.
- Firstly, they wrote the protocol that forced them to exclude our study. Following a protocol is not a virtue in itself; if the protocol is nonsensical, it is even the opposite. Had they proceeded as is normal in such cases and used our primary outcome measure in their meta-analysis, their overall results would most likely not have been in favour of homeopathy.
- Secondly, they awarded our study a penalty point for the criterion ‘selective outcome reporting’. This is clearly a wrong decision: we did report the severity outcome, albeit not in sufficient detail for their meta-analysis. Had they not committed this misjudgement, our RCT would have been the only one with an ‘A’ rating. This would have very clearly highlighted the nonsense of excluding the best-rated trial from the meta-analysis.
There are several other oddities as well. For instance, Mathie et al. judge our study to be NOT free of vested interest. I asked Mathie why they had done this and was told that it was because we had accepted free trial medication from a homeopathic pharmacy. I would argue that my team was far less plagued by vested interest than the authors of their three best (and, of course, positive) trials who, as I happen to know, are consultants for homeopathic manufacturers.
And all of this is just in relation to our own study. Norbert Aust has uncovered similar irregularities with other trials and I take the liberty of quoting his comments posted previously again here:
I have reason to believe that this review and meta-analysis is biased in favour of homeopathy. To check this, I compared two studies: (1) Jacobs 1994, about the treatment of childhood diarrhea in Nicaragua, and (2) Walach 1997, about the homeopathic treatment of headaches. The Jacobs study is one of the three that provided ‘reliable evidence’; Walach’s study earned a poor C2.2 rating and was not included in the meta-analyses. Jacobs’ results were in favour of homeopathy, Walach’s were not.
For the domains where the rating of Walach’s study was less than that of the Jacobs study, please find citations from the original studies or my short summaries for the point in question.
Domain I: Sequence generation
Walach 1997: “The remedy selected was then mailed to a notary public who held a stock of placebos. The notary threw a dice and mailed either the homeopathic remedy or an appropriate placebo. The notary was provided with a blank randomisation list.”
Rating: UNCLEAR (medium risk of bias)
Jacobs 1994: “For each of these medications, there was a box of tubes in sequentially numbered order which had been previously randomized into treatment or control medication using a random numbers table in blocks of four”
Rating: YES (low risk of bias)
Domain IIIb: Blinding of outcome assessor
Walach 1997: “The notary was provided with a blank randomization list which was an absolutely unique document. It was only handed out after the biometrician (WG) had deposited all coded original data as a printout at the notary’s office. (…) Data entry was performed blindly by personnel not involved in the study.”
Rating: UNCLEAR (medium risk of bias)
Jacobs 1994: “All statistical analyses were done before breaking the randomisation code, using the program …”
Rating: YES (low risk of bias)
Domain V: Selective outcome reporting
Walach 1997: the study protocol was published in 1991, prior to enrolment of participants; all primary outcome variables were reported with respect to all participants and the endpoints.
Rating: NO (high risk of bias)
Jacobs 1994: no prior publication of a protocol, but a pilot study exists. However, this was published in 1993, only after the trial was performed in 1991. The primary outcome (duration of diarrhea) was defined and reported, but table and graph do not match; the secondary outcome (number of unformed stools on day 3) seems defined post hoc, since this is the only point in time at which this outcome yielded a significant result.
Rating: YES (low risk of bias)
Domain VI: Other sources of bias
Walach 1997:
Rating: NO (high risk of bias), no details given
Jacobs 1994: imbalance of group properties (size, weight and age of children) that might have some impact on the course of the disease; high impact of parallel therapy (rehydration), by far exceeding the effect size of the homeopathic treatment.
Rating: YES (low risk of bias), no details given
In a nutshell: I fail to see, in the studies themselves, any basis for the different ratings. I assume bias on the part of the authors of the review.
So, what about the question posed in the title of this article? The meta-analysis is clearly not a ‘proof of concept’. But is it proof of misconduct? I asked Mathie, and he answered as follows: “No, your statement does not reflect the situation at all. As for each and every paper, we selected the main outcome measure for your trial using the objective WHO classification approach (in which quality of life is clearly of lower rank than severity). This is all clearly described in our prospective protocol. Under no circumstances did we approach this matter retrospectively, in the way you are implying.”
Some nasty sceptics might have assumed that the handful of rigorous studies with negative results were well-known to most researchers of homeopathy. In this situation, it would have been hugely tempting to write the protocol such that these studies must be excluded. I am thrilled to be told that the authors of the current new meta-analysis (who declared all sorts of vested interests at the end of the article) resisted this temptation.
Guest post by Pete Attkins
Commentator “jm” asked a profound and pertinent question: “What DOES it take for people to get real in this world, practice some common sense, and pay attention to what’s going on with themselves?” This question was asked in the context of asserting that personal experience always trumps the results of large-scale scientific experiments, and that alt-med experts are better able to provide individualized healthcare than 21st-century orthodox medicine.
What does common sense and paying attention lead us to conclude about the following? We test a six-sided die for bias by rolling it 100 times. The number 1 occurs only once and the number 6 occurs many times, never on its own, but in several groups of consecutive sixes.
I think it is reasonable to say that common sense would, and should, lead everyone to conclude that the die is biased and not fit for its purpose as a source of random numbers.
In other words, we have a gut feeling that the die is untrustworthy. Gut instincts and common sense are geared towards maximizing our chances of survival in our complex and unpredictable world — these are innate and learnt behaviours that have enabled humans to survive despite the harshness of our ever changing habitat.
Only very recently in the long history of our species have we developed specialized tools that enable us to better understand our harsh and complex world: science and critical thinking. These tools are difficult to master because they still haven’t been incorporated into our primary and secondary formal education systems.
The vast majority of people do not have these skills; therefore, when a scientific finding flies in the face of our gut instincts and/or common sense, it creates an overwhelming desire to reject the finding and classify the scientist(s) as irrational and lacking basic common sense. It does not create an intense desire to accept the finding and then painstakingly learn all of the science that went into producing it.
With that in mind, let’s rethink our common sense conclusion that the six-sided die is biased and untrustworthy. What we really mean is that the results have given all of us good reason to be highly suspicious of this die. We aren’t 100% certain that this die is biased, but our gut feeling and common sense are more than adequate to form a reasonable mistrust of it and to avoid using it for anything important to us. Reasons to keep this die rather than discard it might be to provide a source of mild entertainment or to use its bias for the purposes of cheating.
Some readers might be surprised to discover at this point that the results I presented from this apparently heavily-biased die are not only perfectly valid results obtained from a truly random unbiased die, they are to be fully expected. Even if the die had produced 100 sixes in that test, it would not confirm that the die is biased in any way whatsoever. Rolling a truly unbiased die once will produce one of six possible outcomes. Rolling the same die 100 times will produce one unique sequence out of the 6^100 (6.5 x 10^77) possible sequences: all of which are equally valid!
Gut feeling plus common sense rightfully informs us that the probability of a random die producing one hundred consecutive sixes is so incredibly remote that nobody will ever see it occur in reality. This conclusion is also mathematically sound: if there were 6.5 x 10^77 people on Earth, each performing the same test on truly random dice, there is no guarantee that anyone would observe a sequence of one hundred consecutive sixes.
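These magnitudes are easy to check directly. Here is a minimal Python sketch of the arithmetic (just the numbers discussed above, not data from any real experiment):

```python
import math

# Number of equally likely sequences produced by 100 rolls of a fair die.
sequences = 6 ** 100
print(f"6^100 has {len(str(sequences))} digits")  # 78 digits, i.e. ~6.5 x 10^77

# Any *specific* sequence -- one hundred sixes included -- has probability
# exactly 1 / 6^100 of occurring in a single 100-roll test.
p_specific = 1.0 / float(sequences)

# If ~6.5 x 10^77 people each ran the test once, the expected number of
# all-sixes observations is n * p = 1, so the chance that at least one
# person sees it is 1 - (1 - p)^n, which is approximately 1 - 1/e.
p_at_least_one = 1 - math.exp(-1)
print(f"P(at least one all-sixes run) = {p_at_least_one:.2f}")  # about 0.63
```

So even with an absurdly large population of testers, the all-sixes sequence would turn up, at best, roughly two times in three: "no guarantee" indeed.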
When we observe a sequence such as 2 5 1 4 6 3 1 4 3 6 5 2… common sense informs us that the die is very likely random. If we calculate the arithmetic mean to be very close to 3.5 then common sense will lead us to conclude that the die is both random and unbiased enough to use it as a reliable source of random numbers.
Unfortunately, this is a perfect example of our gut feelings and common sense failing us abysmally. They totally failed to warn us that the 2 5 1 4 6 3 1 4 3 6 5 2… sequence we observed had exactly the same (im)probability of occurring as a sequence of one hundred 6s or any other sequence that one can think of that doesn’t look random to a human observer.
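The equiprobability of “random-looking” and “biased-looking” sequences can be seen empirically with shorter runs. Here is a Python sketch using four rolls instead of a hundred, so that both target sequences actually turn up; the specific sequences and trial count are chosen purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# With 4 rolls there are 6^4 = 1296 equally likely sequences.
looks_random = (2, 5, 1, 4)  # "obviously random" to a human
looks_biased = (6, 6, 6, 6)  # "obviously biased" to a human

counts = {looks_random: 0, looks_biased: 0}
trials = 500_000
for _ in range(trials):
    seq = tuple(random.randint(1, 6) for _ in range(4))
    if seq in counts:
        counts[seq] += 1

# Both specific sequences occur about trials / 1296 = 386 times:
# the die treats them identically; only our intuition does not.
print(counts)
```

Both counters end up in the same statistical neighbourhood, which is exactly what the mathematics predicts and exactly what our gut feeling denies.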
The 100-roll die test is nowhere near powerful enough to properly test a six-sided die, but this test is more than adequately powered to reveal some of our cognitive biases and some of the deficits in our personal mastery of science and critical thinking.
To properly test the die we need to provide solid evidence that it is both truly random and that its measured bias tends towards zero as the number of rolls tends towards infinity. We could use the services of one testing lab to conduct billions of test rolls, but this would not exclude errors caused by such things as miscalibrated equipment and experimenter bias. It is better to subdivide the testing across multiple labs then carefully analyse and appropriately aggregate the results: this dramatically reduces errors caused by equipment and humans.
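That multi-lab aggregation step can be sketched in miniature. The following Python toy (the number of labs, the roll counts, and the fairness check are all invented for illustration) pools counts from several simulated labs and applies a Pearson chi-squared test against the fair-die hypothesis:

```python
import random
from collections import Counter

random.seed(1)  # fixed seed for reproducibility

def lab_rolls(n):
    """One lab's independent test: n rolls of a simulated fair die."""
    return Counter(random.randint(1, 6) for _ in range(n))

# Pool the counts from five independent labs.
total = Counter()
for _ in range(5):
    total.update(lab_rolls(10_000))

n = sum(total.values())
expected = n / 6  # fair-die expectation for each face

# Pearson chi-squared statistic against the fair-die hypothesis.
chi2 = sum((total[face] - expected) ** 2 / expected for face in range(1, 7))

# For 5 degrees of freedom, the 5% critical value is about 11.07;
# a genuinely fair die should usually stay well below it.
print(f"n = {n}, chi2 = {chi2:.2f}, suspicious = {chi2 > 11.07}")
```

A real test battery would also check for serial correlation between rolls (randomness), not just the long-run face frequencies (bias), but the aggregation principle is the same.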
In medicine, this testing process is performed via systematic reviews of multiple, independent, double-blind, placebo-controlled trials — every trial that is insufficiently powered to add meaningfully to the result is rightfully excluded from the aggregation.
Alt-med relies on a diametrically opposed testing process. It performs a plethora of only underpowered tests; presents those that just happen to show a positive result (just as a random die could’ve produced); and sweeps under the carpet the overwhelming number of tests that produced a negative result. It publishes only the ‘successes’, not its failures. By sweeping its failures under the carpet it feels justified in making the very bold claim: Our plethora of collected evidence shows clearly that it mostly ‘works’ and, when it doesn’t, it causes no harm.
One of the most acidic tests for a hypothesis and its supporting data (which is a mandatory test in a few branches of critical engineering) is to substitute the collected data for random data that has been carefully crafted to emulate the probability mass functions of the collected datasets. This test has to be run multiple times for reasons that I’ve attempted to explain in my random die example. If the proposer of the hypothesis is unable to explain the multiple failures resulting from this acid test then it is highly likely that the proposer either does not fully understand their hypothesis or that their hypothesis is indistinguishable from the null hypothesis.
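One common way to implement such an acid test is surrogate-data resampling: redo the analysis on random data drawn from the empirical distribution of the collected data, so that the probability mass function is preserved while any real group difference is destroyed. Below is a minimal Python sketch with invented numbers; the data and the stand-in “analysis” are placeholders, not taken from any real trial:

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Hypothetical outcome scores (invented for illustration only).
treatment = [5, 4, 5, 3, 5, 4, 5, 4]
control   = [3, 4, 2, 4, 3, 3, 4, 2]

def effect(a, b):
    """Stand-in analysis: difference of group means."""
    return sum(a) / len(a) - sum(b) / len(b)

observed = effect(treatment, control)  # 1.25 with the numbers above

# Acid test: draw surrogate groups from the pooled empirical PMF, so the
# surrogates mimic the collected data but contain no real group effect.
pooled = treatment + control
runs = 10_000
hits = 0
for _ in range(runs):
    a = random.choices(pooled, k=len(treatment))
    b = random.choices(pooled, k=len(control))
    if effect(a, b) >= observed:
        hits += 1

# If random data reproduces the observed effect often, the hypothesis is
# indistinguishable from the null hypothesis.
print(f"fraction of surrogate runs >= observed effect: {hits / runs:.3f}")
```

If that printed fraction were large, the proposer of the hypothesis would have to explain why purely random data performs just as well as the collected data; being unable to do so is the failure mode described above.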