Sorry, but I am fighting a spell of depression today.
Why? I came across this website which lists the 10 top blogs on alternative medicine. To be precise, here is what they say about their hit-list: this list includes the top 10 alternative medicine bloggers on Twitter, ranked by Klout score. Using Cision’s media database, we compiled the list based on Cision’s proprietary research, with results limited to bloggers who dedicate significant coverage to alternative medicine and therapies…
And here are the glorious top ten:
All of these sites are promotional and lack even the slightest hint of critical evaluation. All of them sell or advertise products and are thus out to make money. All of them are full of quackery, in my view. Some of the most popular bloggers are world-famous quacks!
What about impartial information for the public? What about critical review of the evidence? What about a degree of balance? What about guiding consumers to make responsible, evidence-based decisions? What about preventing harm? What about using scarce resources wisely?
I don’t see any of this on any of the sites.
You see, now I have depressed you too!
Quick, buy some herbal, natural, holistic and integrative anti-depressant! As it happens, I have some for sale….
Antioxidant vitamins include vitamin E, beta-carotene, and vitamin C. They are often recommended and widely used for preventing major cardiovascular outcomes. However, the effect of antioxidant vitamins on cardiovascular events remains unclear. There is plenty of evidence, but the trouble is that it is not always of high quality and is often contradictory. Consequently, it is possible to cherry-pick the studies you prefer in order to come up with the answer you like. That this approach is counter-productive should be obvious to every reader of this blog. Only a rigorous systematic review can provide an answer that is as reliable as possible with the data available to date. Chinese researchers have just published such an assessment.
They searched PubMed, EmBase, the Cochrane Central Register of Controlled Trials, and the proceedings of major conferences for relevant investigations. To be eligible, studies had to be randomized, placebo-controlled trials reporting on the effects of antioxidant vitamins on cardiovascular outcomes. The primary outcome measures were major cardiovascular events, myocardial infarction, stroke, cardiac death, total death, and any adverse events.
The searches identified 293 articles, of which 15 RCTs reporting data on 188,209 participants met the inclusion criteria. In total, these studies reported 12,749 major cardiovascular events, 6,699 myocardial infarctions, 3,749 strokes, 14,122 total deaths, and 5,980 cardiac deaths. Overall, antioxidant vitamin supplementation, as compared to placebo, had no effect on major cardiovascular events (RR, 1.00; 95% CI, 0.96-1.03), myocardial infarction (RR, 0.98; 95% CI, 0.92-1.04), stroke (RR, 0.99; 95% CI, 0.93-1.05), total death (RR, 1.03; 95% CI, 0.98-1.07), cardiac death (RR, 1.02; 95% CI, 0.97-1.07), revascularization (RR, 1.00; 95% CI, 0.95-1.05), total CHD (RR, 0.96; 95% CI, 0.87-1.05), angina (RR, 0.98; 95% CI, 0.90-1.07), and congestive heart failure (RR, 1.07; 95% CI, 0.96-1.19).
The authors’ conclusion from these data could not be clearer: Antioxidant vitamin supplementation has no effect on the incidence of major cardiovascular events, myocardial infarction, stroke, total death, and cardiac death.
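For readers less familiar with these statistics: a risk ratio (RR) of 1.00 means the event rate with supplements equals that with placebo, and a 95% confidence interval that straddles 1 means no effect is detectable. A minimal sketch of the underlying arithmetic, using made-up counts (not data from the review), might look like this:

```python
import math

def risk_ratio(events_treat, n_treat, events_ctrl, n_ctrl):
    """Risk ratio with a 95% CI computed via the log-RR standard error."""
    rr = (events_treat / n_treat) / (events_ctrl / n_ctrl)
    se = math.sqrt(1/events_treat - 1/n_treat + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts for illustration only (not taken from the review):
# 6400 events among 94000 supplemented vs 6349 among 94209 on placebo.
rr, lo, hi = risk_ratio(6400, 94000, 6349, 94209)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR 1.01 (95% CI 0.98-1.04)
```

Because the interval contains 1, such a result would be read exactly like the figures above: no detectable benefit or harm.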
Few subjects in the realm of nutrition have attracted as much research during recent years as antioxidants, and it is hard to think of a disease for which they are not recommended by one expert or another. Cardiovascular disease used to be the flagship in this fleet of conditions; not so long ago, even the conventional medical wisdom sympathized with the notion that the regular supplementation of our diet with antioxidant vitamins might reduce the risk of cardiovascular disease and mortality.
Today, the pendulum has swung back, and it now seems to be mostly the alternative scene that still swears by antioxidants for that purpose. Nobody doubts that antioxidants have important biological functions, but this excellent meta-analysis quite clearly and fairly convincingly shows that buying antioxidant supplements is a waste of money. It does not promote cardiovascular health; it merely generates very expensive urine.
Even after all these years of full-time research into alternative medicine and uncounted exchanges with enthusiasts involved in this sector, I find the logic that is often applied in this field bewildering and the unproductiveness of the dialogue disturbing.
To explain what I mean, it might be best to publish a (fictitious, perhaps slightly exaggerated) debate between a critical thinker or scientist (S) and an uncritical proponent (P) of one particular form of alternative medicine.
P: Did you see this interesting study demonstrating that treatment X is now widely accepted, even by highly critical GPs at the cutting edge of health care?
S: This was a survey, not a ‘study’, and I never found the average GP “highly critical”. Surveys of this nature are fairly useless and they “demonstrate” nothing of real value.
P: Whatever, but it showed that GPs accept treatment X. This can only mean that they realise how safe and effective it is.
S: Not necessarily, GPs might just give in to consumer demand, or the sample was cleverly selected, or the question was asked in a leading manner, etc.
P: Hardly, because there is plenty of good evidence for treatment X.
S: Really? Show me.
P: There is this study here which proves that treatment X works and is risk-free.
S: The study was far too small to demonstrate safety, and it is wide open to multiple sources of bias. Therefore it does not conclusively show efficacy either.
P: You just say this because you don’t like its result! You have a closed mind! In any case, it was merely an example! There are plenty more positive studies; do your research properly before you talk such nonsense.
S: I did do some research and I found a recent, high quality systematic review that arrived at a negative conclusion about the value of treatment X.
P: That review was done by sceptics who clearly have an axe to grind. It is based on studies which do not account for the intrinsic subtleties of treatment X. Therefore they are unfair tests of treatment X. These trials don’t really count at all. Every insider knows that! The fact that you cite it merely confirms that you do not understand what you are talking about.
S: It seems to me that you like scientific evidence only when it confirms your belief. This, I am afraid, is what quacks tend to do!
P: I strongly object to being insulted in this way.
S: I did not insult you, I merely made a statement of fact.
P: If you like facts, you have to see that one needs to have sufficient expertise in treatment X in order to apply it properly and effectively. This important fact is neglected in all of those trials that report negative results; and that’s why they are negative. Simple! I really don’t understand why you are too stupid to understand this. Such studies do not show that treatment X is ineffective, but they demonstrate that the investigators were incompetent or hired with the remit to discredit treatment X.
S: I would have thought they are negative because they minimised bias and the danger of generating a false positive result.
P: No, by minimising bias, as you put it, these trials eliminated the factors that are important elements of treatment X.
S: Such as the placebo-effect?
P: That’s what you call it because you irrationally believe in reductionist science.
S: Science requires no belief, I think you are the believer here.
P: The fact is that scientists of your ilk negate all factors related to human interactions. Patients are not machines, you know; they need compassion. We clinicians know that because we work at the coal face of health care. Scientists in their ivory towers have no idea about patient care and just want science for science’s sake. This is not how you help patients. Show some compassion, man!
S: I do know about the importance of compassion and care, but here we are discussing an entirely different topic, namely testing the efficacy or effectiveness of treatments, not patient care. Let’s focus on one issue at a time.
P: You cannot separate things in this way. We have to take a holistic view. Patients are whole individuals, and you cannot do them justice by running artificial experiments. Every patient is different; clinical trials fail to account for this fact and are therefore fairly irrelevant to us and to our patients. Real life is very different from your imagined little experiments, you know.
S: These are platitudes that are nonsensical in this context and do not contribute anything meaningful to the present discussion. You do not seem to understand the methodology or purpose of a clinical trial.
P: That is typical! Whenever you run out of arguments, you try to change the subject or throw a few insults at me.
S: Not at all, I thought we were talking about clinical trials evaluating the effectiveness of treatment X.
P: That’s right; and they do show that it is effective, provided you consider those which are truly well-done by experts who know about treatment X and believe in it.
S: Not true. Only if you cherry-pick the data will you be able to produce an overall positive result for treatment X.
P: In any case, the real world results of clinical practice show very clearly that it works. It would not have survived for so long, if it didn’t. Nobody can deny that, and nobody should claim that silly little trials done in artificial circumstances are more meaningful than a wealth of experience.
S: Experience has little to do with reliable evidence.
P: To deny the value of experience is just stupid and clearly puts you in the wrong. I have shown you plenty of reliable evidence but you just ignore everything I say that does not go along with your narrow-minded notions about science; science is not the only way of knowing or comprehending things! Stop being obsessed with science.
S: No, you show me rubbish data and have little understanding of science, I am afraid.
P: Here we go again! I have had about enough of that and your blinkered arguments. We are going in circles because you are ignorant and arrogant. I have tried my best to show you the light, but your mind is closed. I offer true insight and you pay me back with insults. You and your cronies are in the pocket of BIG PHARMA. You are cynical, heartless and not interested in the wellbeing of patients. Next you will tell me to vaccinate my kids!
S: I think this is a waste of time.
P: Precisely! Everyone who has followed this debate will see very clearly that you are obsessed with reductionist science and incapable of considering the suffering of whole individuals. You want to deny patients a treatment that really helps them simply because you do not understand how treatment X works. Shame on you!!!
According to the UK General Osteopathic Council, osteopathy is a system of diagnosis and treatment for a wide range of medical conditions. It works with the structure and function of the body, and is based on the principle that the well-being of an individual depends on the skeleton, muscles, ligaments and connective tissues functioning smoothly together.
To an osteopath, for your body to work well, its structure must also work well. So osteopaths work to restore your body to a state of balance, where possible without the use of drugs or surgery. Osteopaths use touch, physical manipulation, stretching and massage to increase the mobility of joints, to relieve muscle tension, to enhance the blood and nerve supply to tissues, and to help your body’s own healing mechanisms. They may also provide advice on posture and exercise to aid recovery, promote health and prevent symptoms recurring.
In case this sounds a bit vague to you, and in case you wonder what this “wide range of conditions” might be, rest assured, you are not alone. So let’s try to be a little more concrete and clear up some of the confusion around this profession. There are two very different types of osteopaths: US osteopaths are virtually identical with conventionally trained physicians; their qualification is equivalent to that of medical practitioners and they can, for instance, specialise to become GPs or neurologists or surgeons etc. Elsewhere, osteopaths are non-medically qualified alternative practitioners. In the UK, they are regulated by statute; in other countries, they are not. And as to the “wide range of conditions”, I am not aware of any disease or symptom for which the evidence is convincing.
Osteopaths most commonly treat patients suffering from Chronic Non-Specific Low Back Pain (CNSLBP) using a set of non-drug interventions, particularly manual therapies such as spinal mobilisation and manipulation. The question is how well these techniques are supported by reliable evidence. To answer it, we must not cherry-pick our evidence but need to consider the totality of the reliable studies; in other words, we need an up-to-date systematic review. Such an assessment of clinical research into osteopathic intervention for CNSLBP was recently published by Australian experts.
A thorough search of the literature in multiple electronic databases was undertaken, and all articles were included that reported clinical trials; had adult participants; tested the effectiveness and/or efficacy of osteopathic manual therapies applied by osteopaths, and had a study condition of CNSLBP. The quality of the trials was assessed using the Cochrane criteria. Initial searches located 809 papers, 772 of which were excluded on the basis of abstract alone. The remaining 37 papers were subjected to a detailed analysis of the full text, which resulted in 35 further articles being excluded. There were thus only two studies assessing the effectiveness of manual therapies applied by osteopaths in adult patients with CNSLBP. The results of one trial suggested that the osteopathic intervention was similar in effect to a sham intervention, and the other implies equivalence of effect between osteopathic intervention, exercise and physiotherapy.
I guess this comes as a bit of a surprise to many consumers who have been told over and over again by osteopaths and their supporters that the evidence is sound. Personally, I am not at all surprised because, two years ago, we published a similar review, albeit with a wider spectrum of conditions, namely any type of musculoskeletal pain. We managed to include a total of 16 RCTs. Five of them suggested that osteopathy leads to a significantly stronger reduction of musculoskeletal pain than a range of control interventions. However, 11 RCTs indicated that osteopathy, compared to controls, generates no change in musculoskeletal pain. At the time, we felt that these data failed to produce compelling evidence for the effectiveness of osteopathy as a treatment of musculoskeletal pain.
This lack of convincing evidence is in sharp contrast to the image of osteopaths as back pain specialists. The UK General Osteopathic Council, for instance, states that Osteopaths’ patients include the young, older people, manual workers, office professionals, pregnant women, children and sports people. Patients seek treatment for a wide variety of conditions, including back pain…In addition, thousands of websites try to convince the consumer that osteopathy is a well-proven therapy for chronic low back pain – not to mention the many other conditions for which the evidence is even less sound.
As so often in alternative medicine, these claims seem to be based more on wishful thinking than on reliable evidence. And as so often, the victims of bogus claims are the consumers who are being misled into making wrong therapeutic decisions, wasting money, and delaying recovery from illness.
Ignaz von Peczely (1826-1911), a Hungarian physician, got the idea for iridology (or iris-diagnosis) more than a century ago, after seeing streaks in the iris of a man he was treating for a broken leg, and similar phenomena in the iris of an owl whose leg von Peczely had broken many years before. He subsequently became convinced that his method was able to distinguish between healthy organs and those that are overactive, inflamed, or distressed. Iridology became internationally known when US chiropractors began adopting this method in their clinical practice. In the United States, most insurance programs do not cover iridology but, in some European countries, they often do. In Germany, for instance, 80% of the Heilpraktiker (non-medically qualified health practitioners) practice iridology.
Iridologists claim to be able to diagnose the health status of an individual, medical conditions or predispositions to disease through abnormalities of pigmentation in the iris. The popularity of iridology renders it necessary to ask whether this method is valid.
The aim of my systematic review from 1999 was to critically evaluate all available, reliable tests of iridology as a diagnostic tool. Four case control studies were included; these are investigations where iridologists are asked to tell by looking at the iris of individuals whether that person does or does not have a certain condition. The majority of these studies suggested that iridology is not a valid diagnostic method. Back then, I concluded that “the validity of iridology as a diagnostic tool is not supported by scientific evaluations. Patients and therapists should be discouraged from using this method.”
Since the publication of my article, several further studies have emerged:
One German team conducted a study investigating the applicability of iridology as a screening method for colorectal cancer. Digital color slides were obtained from both eyes of 29 patients with histologically diagnosed colorectal cancer and from 29 age- and gender-matched healthy control subjects. The slides were presented in random order to acknowledged iridologists without knowledge of the number of patients in the two categories. The iridologists correctly detected 51.7% and 53.4%, respectively, of the patients’ slides; these rates are statistically no better than chance. Sensitivity was, respectively, 58.6% and 55.2%, and specificity was 44.8% and 51.7%. The authors’ conclusion was blunt: “Iridology had no validity as a diagnostic tool for detecting colorectal cancer in this study.”
A study from South Africa aimed to determine the efficacy of iridology in the identification of moderate to profound sensorineural hearing loss in adolescents. A controlled trial was conducted with an iridologist, blind to the actual hearing status of participants, analysing the irises of participants with and without hearing loss. Fifty hearing impaired and fifty normal hearing subjects, between the ages of 15 and 19 years, controlled for gender, participated in the study. An experienced iridologist analysed the randomised set of participants’ irises. A 70% correct identification of hearing status was obtained with a false negative rate of 41% compared to a 19% false positive rate. The respective sensitivity and specificity rates therefore were 59% and 81%. The authors of this investigation concluded that “iridological analysis of hearing status indicated a statistically significant relationship to actual hearing status (P < 0.05). Although statistically significant sensitivity and specificity rates for identifying hearing loss by iridology were not comparable to those of traditional audiological screening procedures.”
A further German study investigated the value of iridology as a diagnostic tool in detecting some common cancers. One hundred ten subjects were enrolled; 68 subjects had histologically proven cancers of the breast, ovary, uterus, prostate, or colorectum, and 42 were cancer-free controls. All subjects were examined by an experienced practitioner of iridology, who was unaware of their medical details. He was allowed to suggest up to five diagnoses for each subject and his results were then compared with each subject’s medical diagnosis to determine the accuracy of iridology in detecting malignancy. Iridology identified the correct diagnosis in only 3 cases (sensitivity, 0.04). The authors concluded that “iridology was of no value in diagnosing the cancers investigated in this study.”
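Sensitivity and specificity, the figures quoted in these studies, are simple proportions: sensitivity is the fraction of diseased subjects correctly identified, specificity the fraction of healthy subjects correctly cleared. A short sketch, using counts I back-calculated from the first iridologist’s reported rates in the colorectal-cancer study (an assumption on my part, since only percentages are given above):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts implied by the reported 58.6% sensitivity and 44.8% specificity
# with 29 cancer patients and 29 controls (my back-calculation):
sens, spec = sens_spec(tp=17, fn=12, tn=13, fp=16)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
# -> sensitivity 58.6%, specificity 44.8%
# A coin toss would score roughly 50% on both, which is why such
# performance is described as no better than chance.
```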
Based on these results it is impossible, I think, to claim that iridology is a valid or useful diagnostic tool. As there is no anatomical or physiological basis for its assumptions, iridology is not biologically plausible. Furthermore, the available clinical evidence does not support its validity as a diagnostic tool. In other words, iridology is bogus. This statement is in sharp contrast to the information consumers receive about the method on uncounted websites, books, articles, etc. One website picked at random provides the following information:
The iris reveals changing conditions of every part and organ of the body. Every organ and part of the body is represented in the iris in a well defined area. In addition, through various marks, signs, and discoloration in the iris, nature reveals inherited weaknesses and strengths.
By means of this art / science, an iridologist (one who studies the coloration and fiber structure of the eye) can tell an individual his/her inherited and acquired tendencies towards health and disease, his current condition in general, and the state of every organ in particular.
Iridology cannot detect a specific disease, but, can tell an individual if they have over or under activity in specific areas of the body. For example, an under-active pancreas might indicate a diabetic condition.
Another source claims:
The underlying platform of iridology is that the eyes act as a ‘window’ to a person’s health & well being. This ‘window’ enables the practitioner to see whether areas or organs within the body are healthy, inflamed or ‘over active’. It also enables them to assess a person’s past/ possible future health problems & consider if the patient has a susceptibility to certain diseases. It is important to understand that iridology is simply a method of diagnosis & analysis.
You may well think that none of this really matters. Who cares whether iridology is bogus or not! I would argue that it does matter. Bogus methods cost money that could be better spent elsewhere. More importantly, false positive and false negative diagnoses generated by bogus diagnostic methods can put lives at risk.
But there is a more general and perhaps more crucial point here: alternative medicine is an area where people far too easily get away with ignoring the published evidence and scientific consensus. In the last two decades, I have seen many alternative modalities getting scientifically disproven; not in a single such instance can I remember that the corresponding alternative practitioners and their professional organisations took any notice of this fact, and not once did I notice that their practice had changed.
If research is systematically ignored, it becomes a useless appendix. More importantly, progress is then stifled to the detriment of all our best interests.
A stroke is a condition where brain cells get irreversibly damaged either by a haemorrhage in the brain or by a blood clot cutting off oxygen supply. This process leaves most patients with neurological deficits such as difficulties in moving, speaking, concentrating etc. As other parts of the brain learn to take over, these problems can partly or completely resolve themselves over time, but many patients are left with permanent handicaps. Stroke-rehabilitation can minimise these problems, and there is a long-standing debate as to which measures are most effective. Acupuncture has been discussed as a method to improve the results of stroke-rehabilitation, but the evidence is hotly disputed. This is why a new study in this area is an important contribution to our existing knowledge.
The aim of this randomised trial was to test the effectiveness of acupuncture in promoting the recovery of patients with ischaemic stroke and to determine whether the outcomes of combined physiotherapy and acupuncture are superior to those with physiotherapy alone. The Chinese investigators recruited 120 patients who received one of three daily treatments: 1) acupuncture, 2) physiotherapy, 3) physiotherapy combined with acupuncture. Motor function in the limbs was measured with the Fugl-Meyer assessment (FMA); the modified Barthel index (MBI) was used to rate activities of daily living; both of these measures are validated and well-established. All evaluations were performed by assessors blinded to treatment allocation.
At baseline, FMA and MBI scores did not significantly differ among the treatment groups. Compared with baseline, on day 28 of therapy, the mean FMA scores of the physiotherapy, acupuncture, and combined treatment groups had increased by 65.6%, 57.7%, and 67.2%, respectively; on day 56, FMA scores had increased by 88.1%, 64.5%, and 88.6%, respectively. The respective MBI scores in the three groups had increased by 85.2%, 60.4%, and 63.4% at day 28 and by 108.0%, 71.2%, and 86.2% at day 56. However, FMA scores did not significantly differ between the three treatment groups on day 28. By day 56, the FMA and MBI scores of the physiotherapy group were 46.1% and 33.2% greater, respectively, than those in the acupuncture group. No significant differences were seen between the combined treatment group and the other groups. The FMA subscores for the upper extremities did not show significant improvements in any group on day 56.
The authors draw the following conclusion: “Acupuncture is less effective for the outcome measures studied than is physiotherapy. Moreover, the therapeutic effect of combining acupuncture with physiotherapy was not superior to that of physiotherapy alone. A larger-scale clinical trial is necessary to confirm these findings.”
Our own study arrived at similarly disappointing conclusions: “Acupuncture is not superior to sham treatment for recovery in activities of daily living and health-related quality of life after stroke, although there may be a limited effect on leg function in more severely affected patients“. Our review of all 10 sham-controlled RCTs in this area is also in line with the results of this new study: “Our meta-analyses of data from rigorous randomized sham-controlled trials did not show a positive effect of acupuncture as a treatment for functional recovery after stroke”
I am quite sure that some acupuncture-enthusiasts will dispute this evidence. They might argue that I am too critical, that the trials were not done optimally, that acupuncturists have seen plenty of good results in their clinical practice, that acupuncture is a complex intervention that does not fit into the straitjacket of an RCT, that this or that “prestigious” organisation recommends acupuncture for stroke patients, that it would be wrong not to give acupuncture a try etc. etc. I would counter that the reliable evidence available to date is sufficiently conclusive to stop claiming that acupuncture is effective and thus giving false hope to severely suffering, vulnerable patients. Moreover, I would advocate using the sparse available resources to help stroke victims with treatments that demonstrably work.
One of the best-selling supplements in the UK as well as several other countries is evening primrose oil (EPO). It is available via all sorts of outlets (even respectable pharmacies – or is that supposedly respectable?), and is being promoted for a wide range of conditions, including eczema. The NIH website is optimistic about its efficacy: “Evening primrose oil may have modest benefits for eczema.” Our brand-new Cochrane review was aimed at critically assessing the effects of oral EPO or borage oil (BO) on the symptoms of atopic eczema, and it casts considerable doubt on this somewhat uncritical view.
Here is what we did: We searched six databases as well as online trials registers and checked the bibliographies of included studies for further references to relevant trials. We corresponded with trial investigators and pharmaceutical companies to identify unpublished and ongoing trials. We also performed a separate search for adverse effects. All RCTs investigating oral intake of EPO or BO for eczema were included.
Two experts independently applied eligibility criteria, assessed risk of bias, and extracted data. We pooled dichotomous outcomes using risk ratios (RR), and continuous outcomes using the mean difference (MD). Where possible, we pooled study results using random-effects meta-analysis and tested statistical heterogeneity.
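For readers curious what “random-effects meta-analysis” involves: each trial’s log risk ratio is weighted by the inverse of its variance, inflated by an estimate of the between-study variance (tau²). A minimal sketch of the standard DerSimonian-Laird procedure, run here on invented trial results rather than the actual review data:

```python
import math

def dersimonian_laird(log_rrs, ses):
    """Random-effects (DerSimonian-Laird) pooling of log risk ratios."""
    w = [1 / se**2 for se in ses]                   # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]       # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return math.exp(pooled), se_pooled

# Three hypothetical trials (log RRs and their standard errors):
rr, se = dersimonian_laird([-0.05, 0.10, 0.02], [0.08, 0.12, 0.10])
print(f"pooled RR {rr:.2f}")  # pooled RR 1.00
```

With results scattered tightly around a null effect, as here, the pooled RR lands on 1.00 – the same kind of verdict the review reached for EPO and BO.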
And here is what we found: 27 studies with a total of 1596 participants met our inclusion criteria: 19 studies tested EPO, and 8 studies assessed BO. A meta-analysis of results from 7 studies showed that EPO failed to improve global eczema symptoms as reported by participants and doctors. Treatment with BO also failed to improve global eczema symptoms. 67% of the studies had a low risk of bias for random sequence generation; 44%, for allocation concealment; 59%, for blinding; and 37%, for other biases.
Our conclusions were clear: Oral borage oil and evening primrose oil lack effect on eczema; improvement was similar to respective placebos used in trials. Oral BO and EPO are not effective treatments for eczema.
The very widespread notion that EPO is effective for eczema and a range of other conditions was originally promoted by the researcher turned entrepreneur, D F Horrobin, who claimed that several human diseases, including eczema, were due to a lack of fatty acid precursors and could thus be effectively treated with EPO. In the 1980s, Horrobin began to sell EPO supplements without having conclusively demonstrated their safety and efficacy; this led to confiscations and felony indictments in the US. As chief executive of Scotia Pharmaceuticals, Horrobin obtained licences for several EPO-preparations which later were withdrawn for lack of efficacy. Charges of mismanagement and fraud led to Horrobin being ousted as CEO by the board of the company. Later, Horrobin published a positive meta-analysis of EPO for eczema where he excluded the negative results of the largest published trial, but included results of 7 of his own unpublished studies. When scientists asked to examine the data, Horrobin’s legal team convinced the journal to refuse the request.
The evidence for EPO is negative not just for eczema. To the best of my knowledge, there is not a single disease or symptom for which it demonstrably works. Our own review of the data concluded that “EPO has not been established as an effective treatment for any condition”.
Our new Cochrane review might help to put this long saga to rest. In my view, it is a fascinating tale of a scientist being blinded by creed and ambition. The results of such errors can be dramatic. Horrobin misled all of us: patients, health care professionals, scientists, regulators, decision makers, businessmen. This caused unnecessary expense and set back research efforts in a multitude of areas. I find the tale also fascinating from other perspectives; for instance, it raises the question of why so many ‘respectable’ manufacturers and retailers are still allowed to make money on EPO. Is it not time to debunk the EPO-myth and say it as clearly as possible: EPO helps only those who financially profit from misleading the public?
Whenever we consider alternative medicine, we think of therapeutic interventions and tend to forget that alternative practitioners frequently employ diagnostic methods which are alien to mainstream health care. Acupuncturists, iridologists, spiritual healers, massage therapists, reflexologists, applied kinesiologists, homeopaths, chiropractors, osteopaths and many other types of alternative practitioners all have their very own ways of diagnosing what might be wrong with their patients.
The purpose of a diagnostic test or technique is, of course, to establish the presence or absence of an abnormality, condition or disease. Conventional doctors use all sorts of validated diagnostic methods, from physical examination to laboratory tests, from blood pressure measurements to X-rays. Alternative practitioners use mostly alternative methods for arriving at a diagnosis, and we should ask: how reliable are these techniques?
Anyone trying to answer this question will be surprised to find how little reliable information on the topic exists. Scientific tests of the validity of alternative diagnostic techniques are as rare as gold dust. This is why a recently published article is, in my view, of particular importance and value.
The aim of this study was to evaluate the inter-rater reliability of pulse diagnosis as performed by Traditional Korean Medicine (TKM) clinicians. A total of 658 patients with stroke who had been admitted to Korean oriental medical university hospitals were included. Each patient was examined by two TKM experts for pulse signs – pulse diagnosis is regularly used by practitioners of TKM and Traditional Chinese Medicine (TCM), and is entirely different from what conventional doctors do when they feel a patient’s pulse. Inter-observer reliability was assessed using three methods: simple percentage agreement, the kappa value, and the AC(1) statistic. For one set of pulse signs, the kappa value indicated that inter-observer reliability ranged from poor to moderate, whereas the AC(1) analysis suggested that agreement between the two experts was generally high (with the exception of the ‘slippery pulse’). For the remaining pulse signs, the kappa value indicated that inter-observer reliability was generally moderate to good (with the exceptions of the ‘rough pulse’ and ‘sunken pulse’), and the AC(1) measure of agreement between the two experts was again generally high.
Based on these findings, the authors drew the following conclusion: “Pulse diagnosis is regarded as one of the most important procedures in TKM… This study reveals that the inter-observer reliability in making a pulse diagnosis in stroke patients is not particularly high when objectively quantified. Additional research is needed to help reduce this lack of reliability for various portions of the pulse diagnosis.”
This indicates, I think, that the researchers (who are themselves practitioners of TKM!) are not impressed with the inter-rater reliability of the most commonly used diagnostic tool in TCM/TKM. Imagine this were true for a commonly used test in conventional medicine; imagine, for instance, that one doctor measuring your blood pressure produced entirely different readings from the next. Hardly acceptable, don’t you think?
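For readers unfamiliar with the kappa statistic mentioned above, it is easy to illustrate. The sketch below uses entirely hypothetical pulse ratings (the data, and the choice of categories, are my invention for illustration, not taken from the study):

```python
# Illustrative sketch with hypothetical data: Cohen's kappa measures how much
# two raters agree beyond what chance alone would produce.
# 1.0 = perfect agreement; values near 0 = no better than chance.

def cohens_kappa(ratings_a, ratings_b):
    """Compute Cohen's kappa for two raters' categorical judgements."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)

    # Observed agreement: proportion of cases where the raters agree.
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Expected chance agreement, from each rater's marginal frequencies.
    p_expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Two hypothetical practitioners classifying the same 10 patients' pulses:
rater1 = ["slippery", "rough", "sunken", "slippery", "rough",
          "sunken", "slippery", "rough", "slippery", "sunken"]
rater2 = ["slippery", "rough", "slippery", "slippery", "sunken",
          "sunken", "rough", "rough", "slippery", "sunken"]

print(round(cohens_kappa(rater1, rater2), 2))  # 0.55 – "moderate" agreement
```

The point of kappa, as opposed to raw percentage agreement, is that two raters who simply guess will still agree some of the time; kappa discounts that chance agreement, which is why it often paints a less flattering picture than the AC(1) statistic.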
And, of course, inter-rater reliability would be only one of several preconditions for their diagnostic methods to be valid. Other essential preconditions for diagnostic tests to be of value are their specificity and their sensitivity; do they discriminate between healthy and unhealthy, and are they capable of differentiating between severely abnormal findings and those that are just a little out of the normal range?
Until we have answers to all the open questions about each specific alternative diagnostic method, it would be unwise to pretend these tests are valid. Imagine a doctor prescribing a life-long anti-hypertensive therapy on the basis of a blood pressure reading that is little more than guess-work!
Since non-validated diagnostic tests can generate both false positive and false negative results, the danger of using them should not be underestimated. In a way, invalid diagnostic tests are akin to the bogus bomb-detectors which made headlines recently: both are techniques for identifying a problem. If the method generates a false positive result, an alert is issued in vain, people become anxious for nothing, time and money are lost, etc. If the method generates a false negative result, we assume we are safe while, in fact, we are not. In extreme cases, such an error will cost lives.
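The arithmetic behind false positives is worth spelling out. The sketch below (the numbers are purely illustrative, not taken from any study) shows why even a test with seemingly decent sensitivity and specificity mostly raises false alarms when the condition it claims to detect is rare:

```python
# Illustrative sketch with hypothetical numbers: the positive predictive
# value (PPV) is the probability that a positive test result reflects a
# true case, computed via Bayes' rule.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(condition present | test positive)."""
    true_pos = sensitivity * prevalence            # genuine detections
    false_pos = (1 - specificity) * (1 - prevalence)  # false alarms
    return true_pos / (true_pos + false_pos)

# A hypothetical test with 90% sensitivity and 90% specificity, applied
# to a condition affecting 1% of patients:
ppv = positive_predictive_value(0.90, 0.90, 0.01)
print(f"{ppv:.0%}")  # 8% – most positive results are false alarms
```

And this is for a test with *known*, fairly good sensitivity and specificity; for alternative diagnostic methods, these figures have typically never been established at all, so the proportion of false alarms (and missed cases) is anyone’s guess.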
It is difficult to call those ‘experts’ who advocate such tests anything other than irresponsible, I’d say. And it is even more difficult to have any confidence in the treatments administered on the basis of such diagnostic methods, wouldn’t you agree?
Some national and international guidelines advise physicians to use spinal manipulation for patients suffering from acute (and chronic) low back pain. Many experts have been concerned about the validity of this advice. Now an up-date of the Cochrane review on this subject seems to provide clarity on this rather important matter.
Its aim was to assess the effectiveness of spinal manipulative therapy (SMT) as a treatment of acute low back pain. Randomized controlled trials (RCTs) testing manipulation/mobilization in adults with low back pain of less than 6-weeks duration were included. The primary outcome measures were pain, functional status and perceived recovery. Secondary endpoints were return-to-work and quality of life. Two authors independently conducted the study selection, risk of bias assessment and data extraction. The effects were examined for SMT versus inert interventions, sham SMT, other interventions, and for SMT as an adjunct to other forms of treatment.
The researchers identified 20 RCTs with a total of 2674 participants; 12 (60%) of these had not been included in the previous version of the review. Only 6 of the 20 studies had a low risk of bias. For pain and functional status, there was low- to very low-quality evidence suggesting no difference in effectiveness between SMT and inert interventions, sham SMT, or SMT as adjunct therapy. There was evidence of varying quality suggesting no difference in effectiveness between SMT and other interventions. Data were sparse for recovery, return to work, quality of life, and costs of care.
The authors draw the following conclusion: “SMT is no more effective for acute low back pain than inert interventions, sham SMT or as adjunct therapy. SMT also seems to be no better than other recommended therapies. Our evaluation is limited by the few numbers of studies; therefore, future research is likely to have an important impact on these estimates. Future RCTs should examine specific subgroups and include an economic evaluation.”
In other words, guidelines that recommend SMT for acute low back pain are not based on the current best evidence. But perhaps the situation is different for chronic low back pain? The current Cochrane review of 26 RCTs is equally negative: “High quality evidence suggests that there is no clinically relevant difference between SMT and other interventions for reducing pain and improving function in patients with chronic low-back pain. Determining cost-effectiveness of care has high priority. Further research is likely to have an important impact on our confidence in the estimate of effect in relation to inert interventions and sham SMT, and data related to recovery.”
This clearly raises the question of why so many current guidelines seem to mislead us. I am not sure I know the answer; however, I suspect that the panels writing the guidelines were dominated by chiropractors and osteopaths or their supporters, who have not exactly made a name for themselves for impartiality. Whatever the reason, I think it is time for a re-think and for updating guidelines which are out of date and misleading.
Similarly, it might be time to ask for which conditions chiropractors and osteopaths, the two professions that use spinal manipulation/mobilisation most, actually offer anything of real value. Back pain and SMT are clearly their domain; if it turns out that SMT is not evidence-based for back pain, what is left? There is no good evidence for anything else, as far as I can see. To make matters worse, there are quite undeniable risks associated with SMT. The conclusion of such considerations is, I fear, obvious: the value of and need for these two professions should be re-assessed.
Evidence-based medicine (EBM) is a tool which enables health care professionals to optimize the chances for patients to be treated according to ethically, legally and medically accepted standards. Many proponents of alternative medicine used to reject the principles of EBM, not least because there is precious little good evidence from reliable clinical trials to support their treatments. In recent years, however, some alternative practitioners have stopped trying to swim against the tide.
They have discreetly changed their tune claiming that they do, in fact, practice EBM. Their argument usually holds that EBM represents much more than just data from clinical trials and that they actually do abide by the rules of EBM when treating their patients. The former claim is correct but the latter is not.
In order to explain why, we ought to first define our terminology. During recent years, several descriptions of EBM have become available. According to David Sackett, who was part of the McMaster group that coined the term, EBM is “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical experience with the best available external clinical evidence from systematic research”. As proposed by Sackett, the practice of EBM rests on the following three pillars:
- External evidence – clinically relevant and reliable research, mostly from clinical investigations into the efficacy and safety of therapeutic interventions – in other words, clinical trials and systematic reviews. In a previous blog post, I have elaborated on the question of what evidence means.
- Clinical expertise – the ability to use clinical skills to identify each patient’s unique health state, diagnosis and risks, as well as his/her chances of benefiting from the available therapeutic options.
- Patient values – the individual preferences, concerns and expectations of the patient, which are important in order to meet the patient’s needs.
So, how can a homeopath treating a patient with migraine, a chiropractor manipulating a child with asthma, or an acupuncturist needling a consumer for smoking cessation claim to practice EBM? The best available external evidence fails to show that any of these therapies is effective; in fact, it even suggests that they are ineffective for the above-named indications.
Using the first example, that of the homeopath, the scenario goes something like this: the homeopath believes in the power of homeopathy and has clinical expertise in it (he probably has clinical expertise in nothing but homeopathy). His patient’s preference is very clearly for homeopathy (otherwise, she would not have consulted him). It follows that the homeopath does embrace two pillars of EBM. As to the third pillar – external evidence – he is adamant that clinical trials cannot do justice to something as holistic, subtle and individualized as homeopathy. He therefore refuses to accept the trial data as conclusive and instead trusts his experience, which might be substantial.
I am sure that this line of arguing can convince some people; it certainly seems compelling to those alternative practitioners who claim to practice EBM. However, I cannot agree with them.
The reason is simple: the practice of EBM must rest on all three pillars, and each of them is essential; we cannot simply pick the ones we happen to like and drop the ones we find awkward – we need them all.
We might be generous and grant that the homeopath’s pseudo-EBM argument outlined above suggests that his practice rests on two of the three pillars. However, the third one is absent and has been replaced by a bizarre imitation. To pretend that external evidence can be substituted by something else is erroneous and introduces double standards which are not acceptable – not because this would be against some bloodless principles of nit-picking academics, but because it would not be in the best interest of the patient. And, after all, the primary concern of EBM has to be the patient.