MD, PhD, FMedSci, FSB, FRCP, FRCPEd


Kinesiology tape is all the rage. Its proponents claim that it increases cutaneous stimulation, which facilitates motor unit firing, and consequently improves functional performance. But is this just clever marketing, wishful thinking or is it true? To find out, we need reliable data.

The current trial results are sparse, confusing and contradictory. A recent systematic review indicated that kinesiology tape may have limited potential to reduce pain in individuals with musculoskeletal injury; however, depending on the condition, the reduction in pain may not be clinically meaningful. Kinesiology tape application did not reduce pain measures related to musculoskeletal injury above and beyond the other modalities examined in the included articles.

The authors concluded that kinesiology tape may be used in conjunction with or in place of more traditional therapies, and further research that employs controlled measures compared with kinesiology tape is needed to evaluate efficacy.

This need for further research has just been met by Korean investigators who tested the true effects of KinTape in a deceptive, randomized clinical trial.

Thirty healthy participants performed isokinetic testing under three taping conditions: true facilitative KinTape, sham KinTape, and no KinTape. The participants were blindfolded during the evaluation. Under the pretense of applying adhesive muscle sensors, KinTape was applied to their quadriceps in the first two conditions. Normalized peak torque, normalized total work, and time to peak torque were measured at two angular speeds (60°/s and 180°/s) and analyzed with one-way repeated measures ANOVA.

Participants were successfully deceived and remained unaware of the KinTape. No significant differences in normalized peak torque, normalized total work, or time to peak torque were found at 60°/s or 180°/s (p = 0.31-0.99) between the three taping conditions. The results showed that KinTape did not facilitate muscle performance: it did not generate higher peak torque, yield greater total work, or induce an earlier onset of peak torque.

The authors concluded that previously reported muscle facilitatory effects using KinTape may be attributed to placebo effects.

The claims that are being made for kinesiology taping are truly extraordinary; just consider what this website is trying to tell us:

Kinesiology tape is a breakthrough new method for treating athletic sprains, strains and sports injuries. You may have seen Olympic and celebrity athletes wearing multicolored tape on their arms, legs, shoulders and back. This type of athletic tape is a revolutionary therapeutic elastic style of support that works in multiple ways to improve health and circulation in ways that traditional athletic tapes can’t compare. Not only does this new type of athletic tape help support and heal muscles, but it also provides faster, more thorough healing by aiding with blood circulation throughout the body.

Many athletes who have switched to using this new type of athletic tape report a wide variety of benefits including improved neuromuscular movement and circulation, pain relief and more. In addition to its many medical uses, Kinesiology tape is also used to help prevent injuries and manage pain and swelling, such as from edema. Unlike regular athletic taping, using elastic tape allows you the freedom of motion without restricting muscles or blood flow. By allowing the muscles a larger degree of movement, the body is able to heal itself more quickly and fully than before.

Whenever I read such over-enthusiastic promotion that is not based on evidence but on keen salesmanship, my alarm-bells start ringing and I see parallels to the worst type of alternative medicine hype. In fact, kinesiology tapes have all the hallmarks of alternative medicine and its promoters have, as far as I can see, all the characteristics of quacks. The motto seems to be: LET’S EARN SOME MONEY FAST AND IGNORE THE SCIENCE WHILE WE CAN.

Chiropractors, like other alternative practitioners, use their own unique diagnostic tools for identifying the health problems of their patients. One such test is the Kemp’s test, a manual test used by most chiropractors to diagnose problems with lumbar facet joints. The chiropractor rotates the torso of the patient, while her pelvis is fixed; if manual counter-rotative resistance on one side of the pelvis by the chiropractor causes lumbar pain for the patient, it is interpreted as a sign of lumbar facet joint dysfunction which, in turn would be treated with spinal manipulation.

All diagnostic tests have to fulfil certain criteria in order to be useful. It is therefore interesting to ask whether the Kemp’s test meets these criteria. This is precisely the question addressed in a recent paper. Its objective was to evaluate the existing literature regarding the accuracy of the Kemp’s test in the diagnosis of facet joint pain compared to a reference standard.

All diagnostic accuracy studies comparing the Kemp’s test with an acceptable reference standard were located and included in the review. Subsequently, all studies were scored for quality and internal validity.

Five articles met the inclusion criteria. Only two studies had a low risk of bias, and three had a low concern regarding applicability. Pooling of data from studies using similar methods revealed that the test’s negative predictive value was the only diagnostic accuracy measure above 50% (56.8%, 59.9%).

The authors concluded that currently, the literature supporting the use of the Kemp’s test is limited and indicates that it has poor diagnostic accuracy. It is debatable whether clinicians should continue to use this test to diagnose facet joint pain.

The problem with chiropractic diagnostic methods is not confined to the Kemp’s test, but extends to most tests employed by chiropractors. Why should this matter?

If diagnostic methods are not reliable, they produce either false-positive or false-negative findings. With a false-negative diagnosis, the chiropractor might fail to treat a condition that needs attention. Much more common in chiropractic routine, I guess, are false-positive diagnoses. These mean that chiropractors frequently treat conditions which the patient does not have. This, in turn, is not just a waste of money and time but also, if the ensuing treatment carries risks, an unnecessary exposure of patients to harm.
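To make the false-positive problem concrete, here is a minimal sketch of how the standard diagnostic accuracy measures, including the negative predictive value reported in the review, are computed from a 2x2 table. The counts are invented for illustration and do not come from the Kemp's test studies:

```python
# Illustrative diagnostic-accuracy arithmetic. The counts below are invented
# for demonstration and do NOT come from the Kemp's test review.
def accuracy_measures(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV) from a 2x2 table."""
    sensitivity = tp / (tp + fn)  # proportion of true cases the test detects
    specificity = tn / (tn + fp)  # proportion of non-cases it correctly clears
    ppv = tp / (tp + fp)          # chance a positive finding is a true case
    npv = tn / (tn + fn)          # chance a negative finding is a true non-case
    return sensitivity, specificity, ppv, npv

# A test prone to false positives: most positive findings are wrong, so acting
# on every positive means treating conditions the patient does not have.
sens, spec, ppv, npv = accuracy_measures(tp=30, fp=50, fn=20, tn=40)
# ppv = 30/80 = 0.375 -> well over half of all positive findings are false alarms
```

With numbers like these, a clinician who treats every positive test result spends most of that effort on patients who do not have the condition, which is precisely the waste and risk described above.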

The authors of this review, chiropractors from Canada, should be praised for tackling this subject. However, their conclusion that “it is debatable whether clinicians should continue to use this test to diagnose facet joint pain” is in itself highly debatable: the use of nonsensical diagnostic tools can only result in nonsense and should therefore be disallowed.

Most of the underlying assumptions of alternative medicine (AM) lack plausibility. Whenever this is the case, an international team of researchers argues in a recent paper, there are difficulties involved in obtaining valid statistical significance in clinical studies.

Using a mostly statistical approach, they argue that, since the prior probability of a research hypothesis is directly related to its scientific plausibility, the commonly used frequentist statistics, which do not account for this probability, are unsuitable for studies exploring matters that are to varying degrees disconnected from science. Any statistical significance obtained in this field should be considered with great caution and may be better ascribed to more plausible hypotheses (such as a placebo effect) than to the specific efficacy of the intervention.

The researchers conclude that, since achieving meaningful statistical significance is an essential step in the validation of medical interventions, AM practices, which produce only outcomes inherently resistant to statistical validation, do not appear to belong to modern evidence-based medicine.

To emphasize their arguments, the researchers make the following additional points:

  • It is often forgotten that frequentist statistics, commonly used in clinical trials, provides only indirect evidence in support of the hypothesis examined.
  • The p-value inherently tends to exaggerate the support for the hypothesis tested, especially if the scientific plausibility of the hypothesis is low.
  • When the rationale for a clinical intervention is disconnected from the basic principles of science, as in the case of complementary/alternative medicine, any positive result obtained in clinical studies is more reasonably ascribable to hypotheses (generally a placebo effect) other than the hypothesis on trial, which commonly is the specific efficacy of the intervention.
  • Since meaningful statistical significance as a rule is an essential step to validation of a medical intervention, complementary alternative medicine cannot be considered evidence-based.

Further explanations can be found in the discussion of the article where the authors argue that the quality of the hypothesis tested should be consistent with sound logic and science and therefore have a reasonable prior probability of being correct. As a rule of thumb, assuming a “neutral” attitude towards the null hypothesis (odds = 1:1), a p-value of 0.01 or, better, 0.001 should suffice to give a satisfactory posterior probability of 0.035 and 0.005 respectively.
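The effect of prior plausibility on the conclusion can be sketched with the odds form of Bayes' theorem, the machinery underlying the authors' argument. The Bayes factor of 20 used below is an arbitrary illustrative value standing in for a "convincingly positive" trial result; it is not a figure from the paper:

```python
# Odds form of Bayes' theorem: posterior odds = prior odds x Bayes factor.
# The Bayes factor of 20 below is an arbitrary illustrative value, not a
# figure taken from the paper being discussed.
def posterior_prob(prior_odds, bayes_factor):
    """Posterior probability that the hypothesis is true."""
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# The same 'positive' trial result supports very different conclusions
# depending on the prior plausibility of the hypothesis under test:
neutral = posterior_prob(1.0, 20)        # neutral prior (odds 1:1)  -> ~0.95
implausible = posterior_prob(0.001, 20)  # implausible (odds 1:1000) -> ~0.02
```

In other words, a result that would be near-conclusive for a plausible hypothesis barely moves the needle for an implausible one, which is the core of the authors' case against frequentist significance testing in AM.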

In the area of AM, hypotheses often are entirely inconsistent with logic and frequently fly in the face of science. Four examples can demonstrate this instantly and sufficiently, I think:

  • Homeopathic remedies that contain not a single ‘active’ molecule are unlikely to generate biological effects.
  • The healing ‘energy’ of Reiki masters has no basis in science.
  • The meridians of acupuncture are pure imagination.
  • Chiropractic subluxations have never been shown to exist.

Positive results from clinical trials of implausible forms of AM are thus due to chance or bias, or must be attributed to more credible causes such as the placebo effect. Since achieving meaningful statistical significance is an essential step in the validation of medical interventions, one has to conclude that, unless some authentic scientific support for AM is provided, AM cannot be considered evidence-based.

Such arguments are by no means new; they have been voiced over and over again. Essentially, they amount to the old adage: IF YOU CLAIM THAT YOU HAVE A CAT IN YOUR GARDEN, A SIMPLE PICTURE MAY SUFFICE. IF YOU CLAIM THERE IS A UNICORN IN YOUR GARDEN, YOU NEED SOMETHING MORE CONVINCING. An extraordinary claim requires an extraordinary proof! Put into the context of the current discussion about AM, this means that the usual level of clinical evidence is likely to be very misleading as long as it totally neglects the biological plausibility of the prior hypothesis.

Proponents of AM do not like to hear such arguments. They usually insist on what we might call a ‘level playing field’ and fail to see why their assumptions require not only a higher level of evidence but also a reasonable scientific hypothesis. They forget that the playing field is not even to start with; to understand the situation better, they should read this excellent article. Perhaps its elegant statistical approach will convince them – but I would not hold my breath.

Bach Flower Remedies are the brain child of Dr Edward Bach who, as an ex-homeopath, invented his very own highly diluted remedies. Like homeopathic medicines, they are devoid of active molecules and are claimed to work via some non-defined ‘energy’. Consequently, the evidence for these treatments is squarely negative: my systematic review analysed the data of all 7 RCTs of human patients or volunteers that were available in 2010. All but one were placebo-controlled. All placebo-controlled trials failed to demonstrate efficacy. I concluded that the most reliable clinical trials do not show any differences between flower remedies and placebos.

But now, a new investigation has become available. The aim of this study was to evaluate the effect of Bach flower Rescue Remedy on the control of risk factors for cardiovascular disease in rats.

A randomized longitudinal experimental study was conducted on 18 Wistar rats which were randomly divided into three groups of six animals each and orogastrically dosed with either 200μl of water (group A, control), or 100μl of water and 100μl of Bach flower remedy (group B), or 200μl of Bach flower remedy (group C) every 2 days, for 20 days. All animals were fed standard rat chow and water ad libitum.

Urine volume, body weight, feces weight, and food intake were measured every 2 days. On day 20, tests of glycemia, hyperuricemia, triglycerides, high-density lipoprotein (HDL) cholesterol, and total cholesterol were performed, and the anatomy and histopathology of the heart, liver and kidneys were evaluated. Data were analyzed using Tukey’s test at a significance level of 5%.

No significant differences were found in food intake, feces weight, urine volume and uric acid levels between groups. Group C had a significantly lower body weight gain than group A and lower glycemia compared with groups A and B. Groups B and C had significantly higher HDL-cholesterol and lower triglycerides than controls. Animals had mild hepatic steatosis, but no cardiac or renal damage was observed in the three groups.

From these results, the authors conclude that Bach flower Rescue Remedy was effective in controlling glycemia, triglycerides, and HDL-cholesterol and may serve as a strategy for reducing risk factors for cardiovascular disease in rats. This study provides some preliminary “proof of concept” data that Bach Rescue Remedy may exert some biological effects.

If ever there was a bizarre study, it must be this one:

  • As far as I know, nobody has ever claimed that Rescue Remedy modified cardiovascular risk factors.
  • It seems debatable whether the observed changes are all positive as far as the cardiovascular risk is concerned.
  • It seems odd that a remedy that does not contain active molecules is associated with some sort of dose-response effect.
  • The modification of cardiovascular risk factors in rats might be of little relevance for humans.
  • A strategy for reducing cardiovascular risk factors in rats seems a strange idea.
  • Even the authors cannot offer a mechanism of action [other than pure magic].

Does this study tell us anything of value? The authors are keen to point out that it provides a preliminary proof of concept for Rescue Remedy having biological effects. Somehow, I doubt that this conclusion will convince many of my readers.

Medical treatments with no direct effect, such as homeopathy, are surprisingly popular. But how does a good reputation of such treatments spread and persist? Researchers from the Centre for the Study of Cultural Evolution in Stockholm believe that they have identified the mechanism.

They argue that most medical treatments result in a range of outcomes: some people improve while others deteriorate. If the people who improve are more inclined to tell others about their experiences than the people who deteriorate, ineffective or even harmful treatments would maintain a good reputation.

They conducted a fascinating study to test the hypothesis that positive outcomes are overrepresented in online medical product reviews, examined if this reputational distortion is large enough to bias people’s decisions, and explored the implications of this bias for the cultural evolution of medical treatments.

The researchers compared outcomes of weight loss treatments and fertility treatments as evidenced in clinical trials to outcomes reported in 1901 reviews on Amazon. Subsequently, in a series of experiments, they evaluated people’s choice of weight loss diet after reading different reviews. Finally, a mathematical model was used to examine if this bias could result in less effective treatments having a better reputation than more effective treatments.

The results of these investigations confirmed the hypothesis that people with better outcomes are more inclined to write reviews. After 6 months on the diet, 93% of online reviewers reported a weight loss of 10 kg or more, while just 27% of clinical trial participants experienced this level of weight change. A similar positive distortion was found in fertility treatment reviews. In a series of experiments, the researchers demonstrated that people are more inclined to begin a diet backed by many positive reviews than a diet with reviews that are representative of the diet’s true effect. A mathematical model of medical cultural evolution suggested that the size of the positive distortion critically depends on the shape of the outcome distribution.
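The selection mechanism the study describes can be sketched as a toy simulation. This is my own illustration, not the authors' model; in particular, the logistic review propensity and the normal outcome distribution are arbitrary assumptions:

```python
import math
import random

# Toy simulation of review selection bias (my own sketch, not the authors'
# model): everyone experiences an outcome, but the better the outcome, the
# more likely the person is to post a review.
random.seed(42)
outcomes = [random.gauss(0, 1) for _ in range(100_000)]  # true mean effect: 0

def writes_review(outcome):
    # Logistic review propensity: an assumed shape, chosen for illustration.
    return random.random() < 1 / (1 + math.exp(-2 * outcome))

reviewed = [x for x in outcomes if writes_review(x)]

true_mean = sum(outcomes) / len(outcomes)    # close to 0: treatment does nothing
review_mean = sum(reviewed) / len(reviewed)  # clearly positive: distorted picture
```

Even though the simulated treatment does nothing on average, the mean outcome among reviewers comes out clearly positive, which is exactly the reputational distortion the paper documents.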

The authors concluded that online reviews overestimate the benefits of medical treatments, probably because people with negative outcomes are less inclined to tell others about their experiences. This bias can enable ineffective medical treatments to maintain a good reputation.

To me, this seems eminently plausible; but there are, of course, other reasons why bogus treatments survive or even thrive – and they may vary in their importance to the overall effect from treatment to treatment. As so often in health care, things are complex and there are multiple factors that contribute to a phenomenon.

It has been estimated that 40-70% of all cancer patients use some form of alternative medicine; many do so in the hope that this might cure their condition. A recent article by Turkish researchers - yet again - highlights how dangerous such behaviour can turn out to be.

The authors report the cases of two middle-aged women suffering from malignant breast masses. The patients experienced serious complications after self-prescribing alternative medicine practices to treat their condition in lieu of evidence-based medical treatments. In both cases, the use and/or inappropriate application of alternative medical approaches promoted the progression of malignant fungating lesions in the breast. The first patient sought medical assistance upon development of a fungating lesion, 7-8 cm in diameter and involving 1/3 of the breast, with a palpable mass of 5×6 cm immediately beneath the wound. The second patient sought medical assistance after developing a wide, bleeding, ulcerous area with patchy necrotic tissue that comprised 2/3 of the breast and had a 10×6 cm palpable mass under the affected area.

The authors argue that the use of some non-evidence-based medical treatments as complementary to evidence-based medical treatments may benefit the patient on an emotional level; however, this strategy should be used with caution, as the non-evidence-based therapies may cause physical harm or even counteract the evidence-based treatment.

Their conclusions: a malignant, fungating wound is a serious complication of advanced breast cancer. It is critical that the public is informed about the potential problems of self-treating wounds such as breast ulcers and masses. Additionally, campaigns are needed to increase awareness of the risks and life-threatening potential of using non-evidence-based medical therapies exclusively.

I have little to add to this; perhaps just a further reminder that the risk extends, of course, to all serious conditions: even a seemingly harmless but ineffective therapy can become positively life-threatening, if it is used as an alternative to an effective treatment. I am sure that some ‘alternativists’ will claim that I am alarmist; but I am also convinced that they are wrong.

In 2004, I published an article rather boldly entitled ‘Ear candles: a triumph of ignorance over science’. Here is its summary:

Ear candles are hollow tubes coated in wax which are inserted into patients’ ears and then lit at the far end. The procedure is used as a complementary therapy for a wide range of conditions. A critical assessment of the evidence shows that its mode of action is implausible and demonstrably wrong. There are no data to suggest that it is effective for any condition. Furthermore, ear candles have been associated with ear injuries. The inescapable conclusion is that ear candles do more harm than good. Their use should be discouraged.

Sadly, since the publication of this paper, ear candles have not become less but more popular. There are about 3 000 000 websites on the subject; most are trying to sell products and make claims which are almost comically misguided; three examples have to suffice:

I said ALMOST comical because such nonsense has, of course, a downside. Not only are consumers separated from their cash for no benefit whatsoever, but they are also exposed to danger; again, three examples from the medical literature may illustrate this:

  • Otolaryngologists from London described a case of ear candling presenting as hearing loss, and they concluded that this useless therapy can actually cause damage to the ears.
  • A 50-year-old woman presented to her GP following an episode of ear candling. After 15 minutes, the person performing the candling burned herself while attempting to remove the candle and spilled candle wax into the patient’s right ear canal. On examination, a piece of candle wax was found in the patient’s ear, and she was referred to the local ear, nose, and throat department. Under general anaesthetic, a large mass of solidified yellow candle wax was removed from the deep meatus of the ear. The patient had a small perforation in her right tympanic membrane. Results of a pure tone audiogram showed a mild conductive hearing loss on the right side. At a follow-up appointment 1 month later, the perforation was still there, and the patient’s hearing had not improved.
  • A case report of a 4-year-old girl from New Zealand was published. The patient was diagnosed with otitis media. During the ear examination, white deposits were noticed on her eardrum; these were confirmed as being caused by ear candling.

I should stress that we do not know how often such events happen; there is no monitoring system, and one might expect that the vast majority of cases do not get published. Most consumers who experience such problems, I would guess, are far too embarrassed to admit that they have been taken in by this sort of quackery.

It was true 10 years ago and it is true today: ear candles are a triumph of ignorance over science. But they are also a victory of gullibility over common sense and an unethical exploitation of naive hope by greedy frauds.

DOCTOR Jeffrey Collins, a chiropractor from the Chicago area, just sent me an email which, I think, is remarkable and hilarious - so much so that I want to share it with my readers. Here it is in its full length and beauty:

If you really think you can resolve all back pain syndromes with a pill then you are dumber than you look. I’ve been a chiropractor for 37 years and the primary difference between seeing me vs. an orthopedic surgeon for back pain is simple. When you have ANY fixation in the facet joint, the motor untitled is compromised. These are the load bearing joints in the spine and only an idiot would not realize they are the primary source of pain. The idea of giving facet blocks under fluoroscopy is so dark ages. Maybe you could return to blood letting. The fact that you attack chiropractors as being dangerous when EVERY DAY medical doctors kill people but that’s OK in the name of science. Remember Vioxx? Oh yeah that drug killed over 80,000 patients that they could find. It was likely double that. Oddly I have treated over 10,000 in my career and nobody died. Not one. I guess I was just lucky. I went to Palmer in Iowa. The best chiropractors come out of there. I should qualify that. The ones that have a skill adjusting the spine. 

I will leave you with this as a simple analogy most patients get. Anyone who has ever “cracked their knuckles” will tell you that they got immediate relief and joint function was restored instanter. That’s chiropractic in a nutshell. Not complicated and any chiropractor worth his salt can do that for 37 years without one adverse incident. A monkey could hand out pain pills and you know it. Only in America do you have to get a script to get to a drugstore so everybody gets a cut. What a joke. Somehow mitigating pain makes you feel better about yourselves when you are the real sham. Funny how chiropractors pay the LOWEST malpractice rates in the country. That must be luck as well. Where’s your science now? I would love to debate a guy like you face to face. If you ever come to Chicago email me and let’s meet. Then again guys like you never seem to like confrontation. 

I’ve enjoyed this and glad I found your site. Nobody reads the crap that you write and I found this by mistake. Keep the public in the dark as long as you can. It’s only a matter of time before it’s proven DRUGS ARE WORTHLESS.

I am pleased that DOCTOR Collins had fun. Now let me try to have some merriment as well.

This comment is a classic in several ways, for instance, it

  • starts with a frightfully primitive insult,
  • boasts of the author’s authority (37 years of experience) without mentioning anything that remotely resembles real evidence,
  • provides pseudoscientific explanations for quackery,
  • returns to insults (only an idiot… maybe you could return to blood letting),
  • uses classical fallacies (…medical doctors kill people),
  • returns to more boasting about authority (I went to Palmer in Iowa. The best chiropractors come out of there…),
  • injects a little conspiracy theory (…everybody gets a cut),
  • returns to insults (…you are the real sham… guys like you never seem to like confrontation.) 
  • and ends with an apocalyptic finish: It’s only a matter of time before it’s proven DRUGS ARE WORTHLESS.

I should not mock DOCTOR Collins, though; I should be thankful to him for at least two reasons. Firstly, he confirmed my theory that “ad hominem attacks are signs of victories of reason over unreason”. Secondly, he made a major contribution to my enjoyment of this otherwise somewhat dreary bank holiday, and I hope the same goes for my readers.

Twenty years ago, when I started my Exeter job as a full-time researcher of complementary/alternative medicine (CAM), I defined the aim of my unit as applying science to CAM. At the time, this intention upset quite a few CAM-enthusiasts. One of the most prevalent arguments of CAM-proponents against my plan was that the study of CAM with rigorous science was quite simply an impossibility. They claimed that CAM included mind and body practices, holistic therapies, and other complex interventions which cannot be put into the ‘straitjacket’ of conventional research, e.g. a controlled clinical trial. I spent the next few years showing that this notion was wrong. Gradually and hesitantly, CAM researchers seemed to agree with my view – not all, of course, but first a few and then, slowly and often reluctantly, the majority of them.

What followed was a period during which several research groups started conducting rigorous tests of the hypotheses underlying CAM. All too often, the results turned out to be disappointing, to say the least: not only did most of the therapies in question fail to show efficacy, they were also by no means free of risks. Worst of all, perhaps, much of CAM was disclosed as being biologically implausible. The realization that rigorous scientific scrutiny often generated findings which were not what proponents had hoped for led to a sharp decline in the willingness of CAM-proponents to conduct rigorous tests of their hypotheses. Consequently, many asked whether science was such a good idea after all.

But that, in turn, created a new problem: once they had (at least nominally) committed themselves to science, how could they turn against it? The answer to this dilemma was easier than anticipated: the solution was to appear dedicated to science but, at the same time, to argue that, because CAM is subtle, holistic, complex etc., a different scientific approach was required. At this stage, I felt we had gone ‘full circle’ and had essentially arrived back where we were 20 years ago - except that CAM-proponents no longer rejected the scientific method outright but merely demanded different tools.

A recent article may serve as an example of this new and revised stance of CAM-proponents on science. Here proponents of alternative medicine argue that a challenge for research methodology in CAM/IHC* is the growing recognition that CAM/IHC practice often involves complex combinations of novel interventions that include mind and body practices, holistic therapies, and others. Critics argue that the reductionist placebo controlled randomized controlled trial (RCT) model that works effectively for determining efficacy for most pharmaceutical or placebo trial RCTs may not be the most appropriate for determining effectiveness in clinical practice for either CAM/IHC or many of the interventions used in primary care, including health promotion practices. Therefore the reductionist methodology inherent in efficacy studies, and in particular in RCTs, may not be appropriate to study the outcomes for much of CAM/IHC, such as Traditional Korean Medicine (TKM) or other complex non-CAM/IHC interventions—especially those addressing comorbidities. In fact it can be argued that reductionist methodology may disrupt the very phenomenon, the whole system, that the research is attempting to capture and evaluate (i.e., the whole system in its naturalistic environment). Key issues that surround selection of the most appropriate methodology to evaluate complex interventions are well described in the Kings Fund report on IHC and also in the UK Medical Research Council (MRC) guidelines for evaluating complex interventions—guidelines which have been largely applied to the complexity of conventional primary care and care for patients with substantial comorbidity. These reports offer several potential solutions to the challenges inherent in studying CAM/IHC. [* IHC = integrated health care]

Let’s be clear and disclose what all of this actually means. The sequence of events, as I see it, can be summarized as follows:

  • We are foremost ALTERNATIVE! Our treatments are far too unique to be subjected to reductionist research; we therefore reject science and insist on an ALTERNATIVE.
  • We (well, some of us) have reconsidered our opposition and are prepared to test our hypotheses scientifically (NOT LEAST BECAUSE WE NEED THE RECOGNITION THAT THIS MIGHT BRING).
  • We are dismayed to see that the results are mostly negative; science, it turns out, works against our interests.
  • We need to reconsider our position.
  • We find it inconceivable that our treatments do not work; all the negative scientific results must therefore be wrong.
  • We always said that our treatments are unique; now we realize that they are far too holistic and complex to be submitted to reductionist scientific methods.
  • We still believe in science (or at least want people to believe that we do) - but we need a different type of science.
  • We insist that RCTs (and all other scientific methods that fail to demonstrate the value of CAM) are not adequate tools for testing complex interventions such as CAM.
  • We have determined that reductionist research methods disturb our subtle treatments.
  • We need pragmatic trials and similarly ‘soft’ methods that capture ‘real life’ situations, do justice to CAM and rarely produce a negative result.

What all of this really means is that, whenever the findings of research disappoint CAM-proponents, the results are by definition false-negative. The obvious solution to this problem is to employ different (weaker) research methods, preferably ones that cannot possibly generate a negative finding. Or, to put it bluntly: in CAM, science is acceptable only as long as it produces the desired results.

Linus Carl Pauling (1901 – 1994), the American scientist, peace activist, author, and educator who won two Nobel prizes, was one of the most influential chemists in history and ranks among the most important scientists of the 20th century. Linus Pauling’s work on vitamin C, however, generated considerable controversy. Pauling wrote many papers and a popular book, Cancer and Vitamin C. Vitamin C, we know today, protects cells from oxidative DNA damage and might thereby block carcinogenesis. Pauling popularised the regular intake of vitamin C; eventually he published two studies of end-stage cancer patients; their results apparently showed that vitamin C quadrupled survival times. A re-evaluation, however, found that the vitamin C groups were less sick on entry to the study. Later clinical trials concluded that there was no benefit to high-dose vitamin C. Since then, the established opinion is that the best evidence does not support a role for high dose vitamin C in the treatment of cancer. Despite all this, high dose IV vitamin C is in unexpectedly wide use by CAM practitioners.

Yesterday, new evidence has been published in the highly respected journal ‘Nature’; does it vindicate Pauling and his followers?

Chinese oncologists conducted a meta-analysis to assess the association between vitamin C intake and the risk of lung cancer. Pertinent studies were identified by searches of several electronic databases through December 2013. A random-effects model was used to combine the data for analysis. Publication bias was estimated using Begg’s funnel plot and Egger’s regression asymmetry test.

Eighteen articles reporting 21 studies involving 8938 lung cancer cases were included in this meta-analysis. Pooled results suggested that the highest level of vitamin C intake, compared with the lowest, was significantly associated with a reduced risk of lung cancer. The effect was largest in investigations from the United States and in prospective studies. A linear dose-response relationship was found, with the risk of lung cancer decreasing by 7% for every 100 mg/day increase in vitamin C intake. No publication bias was found.
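One common way to read such a dose-response figure is as a log-linear trend, in which the 7%-per-100 mg reduction compounds multiplicatively across intake levels. This compounding is my own interpretive assumption for illustration; the meta-analysis may have modelled the trend differently:

```python
# Reading the trend as log-linear: a 7% risk reduction per 100 mg/day implies
# multiplicative scaling across intake levels. The compounding here is an
# interpretive assumption, not necessarily the model used in the meta-analysis.
def relative_risk(extra_intake_mg, rr_per_100mg=0.93):
    """Relative risk of lung cancer for a given extra daily vitamin C intake."""
    return rr_per_100mg ** (extra_intake_mg / 100)

rr_200 = relative_risk(200)  # 0.93**2 = 0.8649, i.e. roughly 13.5% lower risk
```

Under this reading, even large increases in intake shift the relative risk only modestly, which is worth keeping in mind when interpreting the headline figure.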

The authors conclude that their analysis suggested that the higher intake of vitamin C might have a protective effect against lung cancer, especially in the United States, although this conclusion needs to be confirmed.

Does this finding vindicate Pauling’s theory? Not really.

Even though the above-quoted conclusions seem to suggest a causal link, we are, in fact, far from having established one. The meta-analysis pooled mainly epidemiological data from various studies. Such investigations are doubtlessly valuable but they are fraught with uncertainties and cannot prove causality. For instance, there could be dozens of factors that have confounded these data in such a way that they produce a misleading result. The simplest explanation of the meta-analytic results might be that people who have a very high vitamin C intake tend to have generally healthier life-styles than those who take less vitamin C. When conducting a meta-analysis, one does, of course, try to account for such factors; but in many cases the necessary information to do that is not available, and therefore uncertainty persists.

In other words, the authors were certainly correct when stating that their findings needed to be confirmed. Pauling’s theory cannot be vindicated by such reports – in fact, the authors do not even mention Pauling.
