Guest post by Pete Attkins
Commentator “jm” asked a profound and pertinent question: “What DOES it take for people to get real in this world, practice some common sense, and pay attention to what’s going on with themselves?” This question was asked in the context of asserting that personal experience always trumps the results of large-scale scientific experiments, and that alt-med experts are better able to provide individualized healthcare than 21st-century orthodox medicine.
What does common sense and paying attention lead us to conclude about the following? We test a six-sided die for bias by rolling it 100 times. The number 1 occurs only once and the number 6 occurs many times, never on its own, but in several groups of consecutive sixes.
I think it is reasonable to say that common sense would, and should, lead everyone to conclude that the die is biased and not fit for its purpose as a source of random numbers.
In other words, we have a gut feeling that the die is untrustworthy. Gut instincts and common sense are geared towards maximizing our chances of survival in our complex and unpredictable world — these are innate and learnt behaviours that have enabled humans to survive despite the harshness of our ever-changing habitat.
Only very recently in the long history of our species have we developed specialized tools that enable us to better understand our harsh and complex world: science and critical thinking. These tools are difficult to master, not least because they still haven’t been properly incorporated into our primary and secondary education systems.
The vast majority of people do not have these skills; therefore, when a scientific finding flies in the face of our gut instincts and/or common sense, it creates an overwhelming desire to reject the finding and classify the scientist(s) as being irrational and lacking basic common sense. It does not create an intense desire to accept the finding then painstakingly learn all of the science that went into producing the finding.
With that in mind, let’s rethink our common sense conclusion that the six-sided die is biased and untrustworthy. What we really mean is that the results have given all of us good reason to be highly suspicious of this die. We aren’t 100% certain that this die is biased, but our gut feeling and common sense are more than adequate to form a reasonable mistrust of it and to avoid using it for anything important to us. Reasons to keep this die rather than discard it might be to provide a source of mild entertainment or to use its bias for the purposes of cheating.
Some readers might be surprised to discover at this point that the results I presented from this apparently heavily-biased die are not only perfectly valid results obtained from a truly random, unbiased die, but are entirely to be expected. Even if the die had produced 100 sixes in that test, it would not confirm that the die is biased in any way whatsoever. Rolling a truly unbiased die once will produce one of six possible outcomes. Rolling the same die 100 times will produce one unique sequence out of the 6^100 (circa 6.5 x 10^77) possible sequences: all of which are equally probable!
Gut feeling plus common sense rightfully informs us that the probability of a random die producing one hundred consecutive sixes is so incredibly remote that nobody will ever see it occur in reality. This conclusion is also mathematically sound: if there were 6.5 x 10^77 people on Earth, each performing the same test on truly random dice, there is no guarantee that anyone would observe a sequence of one hundred consecutive sixes.
When we observe a sequence such as 2 5 1 4 6 3 1 4 3 6 5 2… common sense informs us that the die is very likely random. If we calculate the arithmetic mean to be very close to 3.5 then common sense will lead us to conclude that the die is both random and unbiased enough to use as a reliable source of random numbers.
Unfortunately, this is a perfect example of our gut feelings and common sense failing us abysmally. They totally failed to warn us that the 2 5 1 4 6 3 1 4 3 6 5 2… sequence we observed had exactly the same (im)probability of occurring as a sequence of one hundred 6s or any other sequence that one can think of that doesn’t look random to a human observer.
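The arithmetic of the dice example can be sketched in a few lines of Python. This is purely illustrative: it computes the number of possible 100-roll sequences, notes that every pre-specified sequence shares the same probability, and simulates one fair run to show that clusters of repeated faces arise naturally.

```python
# Illustrative sketch: every specific 100-roll sequence is equally improbable,
# and a genuinely fair die still produces runs of identical faces.
import random

ROLLS = 100
total_sequences = 6 ** ROLLS          # exact integer: 6^100
print(f"{total_sequences:.2e}")       # ≈ 6.53e+77 possible sequences

# The probability of any ONE pre-specified sequence is 1/6^100 --
# the same for one hundred consecutive sixes as for a "random-looking" one.

seq = [random.randint(1, 6) for _ in range(ROLLS)]  # simulate a fair die

longest_run = current = 1
for prev, cur in zip(seq, seq[1:]):
    current = current + 1 if cur == prev else 1
    longest_run = max(longest_run, current)
print(longest_run)  # runs of repeats are normal, not proof of bias
```

Running this a few times shows that runs of three or more identical faces are commonplace in 100 fair rolls, which is exactly why "several groups of consecutive sixes" proves nothing by itself.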
The 100-roll die test is nowhere near powerful enough to properly test a six-sided die, but this test is more than adequately powered to reveal some of our cognitive biases and some of the deficits in our personal mastery of science and critical thinking.
To properly test the die we need to provide solid evidence that it is both truly random and that its measured bias tends towards zero as the number of rolls tends towards infinity. We could use the services of one testing lab to conduct billions of test rolls, but this would not exclude errors caused by such things as miscalibrated equipment and experimenter bias. It is better to subdivide the testing across multiple labs then carefully analyse and appropriately aggregate the results: this dramatically reduces errors caused by equipment and humans.
In medicine, this testing process is performed via systematic reviews of multiple, independent, double-blind, placebo-controlled trials — every trial that is insufficiently powered to add meaningfully to the result is rightfully excluded from the aggregation.
Alt-med relies on a diametrically opposed testing process. It conducts a plethora of underpowered tests; presents those that just happen to show a positive result (just as a random die could have produced); and sweeps under the carpet the overwhelming number of tests that produced a negative result. It publishes only its ‘successes’, not its failures. By sweeping its failures under the carpet it feels justified in making the very bold claim: Our plethora of collected evidence shows clearly that it mostly ‘works’ and, when it doesn’t, it causes no harm.
One of the harshest acid tests for a hypothesis and its supporting data (a mandatory test in a few branches of critical engineering) is to replace the collected data with random data that has been carefully crafted to emulate the probability mass functions of the collected datasets. This test has to be run multiple times, for reasons that I’ve attempted to explain in my random die example. If the proposer of the hypothesis is unable to explain the multiple failures resulting from this acid test then it is highly likely that the proposer either does not fully understand their hypothesis or that their hypothesis is indistinguishable from the null hypothesis.
Aromatherapy is one of the most popular of all alternative therapies. It is most certainly a very agreeable experience. But is it more than a bit of pampering? Does it cure any diseases?
If you believe aromatherapists, their treatment is effective for almost everything. And, of course, there are studies to suggest that, indeed, it works for several conditions. But regular readers of this blog will by now know that it is a bad idea to go by just one single trial; we always must rely on the totality of the most reliable evidence. In other words, we must look for systematic reviews. Recently, such an article has been published.
The aim of this review was to systematically assess the effectiveness of aromatherapy for stress management. Seven databases were searched from their inception through April 2014. RCTs testing aromatherapy against any type of control intervention in healthy but stressed persons assessing stress and cortisol levels were considered. Two reviewers independently performed the selection of the studies, data abstraction and validations. The risk of bias was assessed using Cochrane criteria.
Five RCTs met the authors’ inclusion criteria. Most of the RCTs had a high risk of bias. Four RCTs tested the effects of aroma inhalation compared with no treatment, no aroma, and no odour oil. The meta-analysis of these data suggested that aroma inhalation had favourable effects on stress management. Three RCTs tested aroma inhalation on saliva or serum cortisol level compared to controls, and the meta-analysis of these data failed to show a significant difference between the two groups.
The authors concluded that there is limited evidence suggesting that aroma inhalation may be effective in controlling stress. However, the number, size and quality of the RCTs are too low to draw firm conclusions.
This is by no means the only systematic review of this area. In fact, there are so many that, in 2012, we decided to do an overview of systematic reviews evaluating the effectiveness of aromatherapy. We searched 12 electronic databases and our departmental files without restrictions of time or language. The methodological quality of all systematic reviews was evaluated independently by two authors. Of 201 potentially relevant publications, 10 met our inclusion criteria. Most of the systematic reviews were of poor methodological quality. The clinical subject areas were hypertension, depression, anxiety, pain relief, and dementia. For none of the conditions was the evidence convincing. Our conclusion: due to a number of caveats, the evidence is not sufficiently convincing that aromatherapy is an effective therapy for any condition.
So, what does all of this mean? I think it indicates that most of the claims made by aromatherapists are not evidence-based. Or, to express it differently: aromatherapy is hardly more than a bit of old-fashioned pampering – nothing wrong with that, of course, as long as you don’t fall for the hype of those who promote it.
Getting good and experienced lecturers for courses is not easy. Having someone who has done more research than most working in the field and who is internationally known might therefore be a thrill for students and an image-boosting experience for colleges. In the true Christmas spirit, I am today making the offer of being of assistance to the many struggling educational institutions of alternative medicine.
A few days ago, I tweeted about my willingness to give free lectures to homeopathic colleges (so far without response). Having thought about it a bit, I would now like to extend this offer. I would be happy to give a free lecture to the students of any educational institution of alternative medicine. Specifically, I offer to either
- do a general lecture on the clinical evidence of the 4 major types of alternative medicine (acupuncture, chiropractic, herbal medicine, homeopathy) or
- give a more specific lecture with in-depth analyses of any given alternative therapy.
I imagine that most of the institutions in question might be a bit anxious about such an idea, but there is no need to worry: I guarantee that everything I say will be strictly and transparently evidence-based. I will disclose my sources and am willing to make my presentation available to students so that they can read up on the finer details about the evidence later at home. In other words, I will do my very best to only transmit the truth about the subject at hand.
Nobody wants to hire a lecturer without having at least a rough outline of what he will be talking about – fair enough! Here I present a short summary of the lecture as I envisage it:
- I will start by providing a background about myself, my qualifications and my experience in researching and lecturing on the matter at hand.
- This will be followed by a background on the therapies in question, their history, current use etc.
- Next I would elaborate on the main assumptions of the therapies in question and on their biological plausibility.
- This will be followed by a review of the claims made for the therapies in question.
- The main section of my lecture would review the clinical evidence regarding the efficacy of the therapies in question. In doing so, I will not cherry-pick my evidence but rely, whenever possible, on authoritative systematic reviews, preferably those from the Cochrane Collaboration.
- This, of course, needs to be supplemented by a review of safety issues.
- If wanted, I could also say a few words about the importance of the placebo effect.
- I also suggest discussing some of the most pertinent ethical issues.
- Finally, I would hope to arrive at a few clear conclusions.
You see, all is entirely up to scratch!
Perhaps you have some doubts about my abilities to lecture? I can assure you, I have done this sort of thing all my life, I have been a professor at three different universities, and I will probably manage a lecture to your students.
A final issue might be the costs involved. As I said, I would charge neither for the preparation (this can take several days depending on the exact topic), nor for the lecture itself. All I would hope for is that you refund my travel (and, if necessary, overnight) expenses. And please note: this offer is time-limited: approaches will be accepted until 1 January 2015 for lectures any time during 2015.
I can assure you, this is a generous offer that you ought to consider seriously – unless, of course, you do not want your students to learn the truth!
(In which case, one would need to wonder why not)
The Alexander Technique is a method aimed at re-educating people to do everyday tasks with less muscular and mental tension. According to the ‘Complete Guide to the Alexander Technique’, this method can help you if:
- You suffer from repetitive strain injury or carpal tunnel syndrome.
- You have a backache or stiff neck and shoulders.
- You become uncomfortable when sitting at your computer for long periods of time.
- You are a singer, musician, actor, dancer or athlete and feel you are not performing at your full potential.
Sounds good!? But which of these claims are actually supported by sound evidence?
Our own systematic review from 2003 of the Alexander Technique (AT) found just 4 clinical studies. Only two of these trials were methodologically sound and clinically relevant. Their results were promising and implied that AT is effective in reducing the disability of patients suffering from Parkinson’s disease and in improving pain behaviour and disability in patients with back pain. A more recent review concluded as follows: Strong evidence exists for the effectiveness of Alexander Technique lessons for chronic back pain and moderate evidence in Parkinson’s-associated disability. Preliminary evidence suggests that Alexander Technique lessons may lead to improvements in balance skills in the elderly, in general chronic pain, posture, respiratory function and stuttering, but there is insufficient evidence to support recommendations in these areas.
This suggests that the ‘Complete Guide’ is based more on wishful thinking than on evidence. But what about the value of AT for performers – after all, it is for this purpose that Alexander developed his method?
A recent systematic review aimed to evaluate the evidence for the effectiveness of AT sessions on musicians’ performance, anxiety, respiratory function and posture. The following electronic databases were searched up to February 2014 for relevant publications: PUBMED, Google Scholar, CINAHL, EMBASE, AMED, PsycINFO and RILM. The search criteria were “Alexander Technique” AND “music*”. References were searched, and experts and societies of AT or musicians’ medicine contacted for further publications.
In total, 237 citations were assessed. 12 studies were included for further analysis, 5 of which were randomised controlled trials (RCTs), 5 controlled but not randomised (CTs), and 2 mixed methods studies. Main outcome measures in RCTs and CTs were music performance, respiratory function, performance anxiety, body use and posture. Music performance was judged by external experts and found to be improved by AT in 1 of 3 RCTs; in 1 RCT comparing neurofeedback (NF) to AT, only NF caused improvements. Respiratory function was investigated in 2 RCTs, but not improved by AT training. Performance anxiety was mostly assessed by questionnaires and decreased by AT in 2 of 2 RCTs and in 2 of 2 CTs.
From this evidence, the authors drew the following conclusion: A variety of outcome measures have been used to investigate the effectiveness of AT sessions in musicians. Evidence from RCTs and CTs suggests that AT sessions may improve performance anxiety in musicians. Effects on music performance, respiratory function and posture yet remain inconclusive. Future trials with well-established study designs are warranted to further and more reliably explore the potential of AT in the interest of musicians.
So, there you are: if you are a performing artist, AT seems to be useful for you. If you have health problems (other than perhaps back pain), I would look elsewhere for help.
Rigorous research into the effectiveness of a therapy should tell us the truth about the ability of this therapy to treat patients suffering from a given condition — perhaps not one single study, but the totality of the evidence (as evaluated in systematic reviews) should achieve this aim. Yet, in the realm of alternative medicine (and probably not just in this field), such reviews are often highly contradictory.
A concrete example might explain what I mean.
There are numerous systematic reviews assessing the effectiveness of acupuncture for fibromyalgia syndrome (FMS). It is safe to assume that the authors of these reviews have all conducted comprehensive searches of the literature in order to locate all the published studies on this subject. Subsequently, they have evaluated the scientific rigor of these trials and summarised their findings. Finally, they have condensed all of this into an article which arrives at a certain conclusion about the value of the therapy in question. Understanding this process (outlined here only very briefly), one would expect that all the numerous reviews draw conclusions which are, if not identical, at least very similar.
However, the disturbing fact is that they are not remotely similar. Here are two which, in fact, are so different that one could assume they have evaluated a set of totally different primary studies (which, of course, they have not).
One recent (2014) review concluded that acupuncture for FMS has a positive effect, and acupuncture combined with western medicine can strengthen the curative effect.
Another recent review concluded that a small analgesic effect of acupuncture was present, which, however, was not clearly distinguishable from bias. Thus, acupuncture cannot be recommended for the management of FMS.
How can this be?
In contrast to most systematic reviews of conventional medicine, systematic reviews of alternative therapies are almost invariably based on a small number of primary studies (in the above case, the total number was only 7!). The quality of these trials is often low (all reviews therefore end with the somewhat meaningless conclusion that more and better studies are needed).
So, the situation with primary studies of alternative therapies for inclusion into systematic reviews usually is as follows:
- the number of trials is low
- the quality of trials is even lower
- the results are not uniform
- the majority of the poor quality trials show a positive result (bias tends to generate false positive findings)
- the few rigorous trials yield a negative result
Unfortunately, this means that the authors of systematic reviews summarising such confusing evidence often seem to feel at liberty to project their own preconceived ideas into their overall conclusion about the effectiveness of the treatment. Often the researchers are in favour of the therapy in question – in fact, this usually is precisely the attitude that motivated them to conduct a review in the first place. In other words, the frequently murky state of the evidence (as outlined above) can serve as a welcome invitation for personal bias to exert its effect and skew the overall conclusion. The final result is that the readers of such systematic reviews are being misled.
Authors who are biased in favour of the treatment will tend to stress that the majority of the trials are positive. Therefore the overall verdict has to be positive as well, in their view. The fact that most trials are flawed does not usually bother them all that much (I suspect that many fail to comprehend the effects of bias on the study results); they merely add to their conclusions that “more and better trials are needed” and believe that this meek little remark is sufficient evidence for their ability to critically analyse the data.
Authors who are not biased and have the necessary skills for critical assessment, on the other hand, will insist that most trials are flawed and therefore their results must be categorised as unreliable. They will also emphasise the fact that there are a few reliable studies and clearly point out that these are negative. Thus their overall conclusion must be negative as well.
In the end, enthusiasts will conclude that the treatment in question is at least promising, if not recommendable, while real scientists will rightly state that the available data are too flimsy to demonstrate the effectiveness of the therapy; as it is wrong to recommend unproven treatments, they will not recommend the treatment for routine use.
The difference between the two might just seem marginal – but, in fact, it is huge: IT IS THE DIFFERENCE BETWEEN MISLEADING PEOPLE AND GIVING RESPONSIBLE ADVICE; THE DIFFERENCE BETWEEN VIOLATING AND ADHERING TO ETHICAL STANDARDS.
Whenever I give a public lecture about homeopathy, I explain what it is, briefly go into its history, explain what its assumptions are, and what the evidence tells us about its efficacy and safety. When I am finished, there usually is a discussion with the audience. This is the part I like best; in fact, it is the main reason why I made the effort to do the lecture in the first place.
The questions vary, of course, but you can bet your last shirt that someone asks: “We know it works for animals; animals cannot experience a placebo-response, and therefore your claim that homeopathy relies on nothing but the placebo-effect must be wrong!” At this stage I often despair a little, I must admit. Not because the question is too daft, but because I did address it during my lecture. Thus I feel that I have failed to get the right message across – I despair with my obviously poor skills of giving an informative lecture!
Yet I need to answer the above question, of course. So I reiterate that the perceived effectiveness of homeopathy relies not just on the placebo-effect but also on phenomena such as regression towards the mean, natural history of the condition etc. I also usually mention that it is erroneous to assume that animals cannot benefit from placebo-effects; they can be conditioned, and pets can react to the expectations of their owners.
Finally, I need to mention the veterinary clinical evidence which – just like in the case of human patients – fails to show that homeopathic remedies are better than placebos for treating animals. Until recently, this was not an easy task because no systematic review of randomised placebo-controlled trials (RCTs) of veterinary homeopathy was available. Now, I am happy to announce, this situation has changed.
Using Cochrane methods, a brand-new review aimed to assess risk of bias and to quantify the effect size of homeopathic interventions compared with placebo for each eligible peer-reviewed trial. Judgement in 7 assessment domains enabled a trial’s risk of bias to be designated as low, unclear or high. A trial was judged to comprise reliable evidence, if its risk of bias was low or was unclear in specified domains. A trial was considered to be free of vested interest, if it was not funded by a homeopathic pharmacy.
The 18 RCTs found by the researchers were disparate in nature, representing 4 species and 11 different medical conditions. Reliable evidence, free from vested interest, was identified in only two trials:
- homeopathic Coli had a prophylactic effect on porcine diarrhoea (odds ratio 3.89, 95 per cent confidence interval [CI], 1.19 to 12.68, P=0.02);
- individualised homeopathic treatment did not have a more beneficial effect on bovine mastitis than placebo intervention (standardised mean difference -0.31, 95 per cent CI, -0.97 to 0.34, P=0.35).
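For readers less used to these statistics, a minimal sketch (using only the interval endpoints quoted above) shows how the two results are read: a confidence interval for an odds ratio that excludes 1, or for a mean difference that excludes 0, indicates a statistically significant result at the 5% level.

```python
# How the two confidence intervals above are read: significance means the
# CI excludes the "no effect" value (1 for an odds ratio, 0 for an SMD).
def ci_excludes(ci, null_value):
    lo, hi = ci
    return not (lo <= null_value <= hi)

or_ci = (1.19, 12.68)    # odds-ratio CI from the porcine diarrhoea trial
smd_ci = (-0.97, 0.34)   # standardised-mean-difference CI, bovine mastitis

print(ci_excludes(or_ci, 1.0))    # True  -> significant (P=0.02)
print(ci_excludes(smd_ci, 0.0))   # False -> not significant (P=0.35)
```

This is why the review describes the findings as "mixed": one reliable trial was positive, the other was clearly compatible with no effect at all.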
The authors’ conclusions are clear: Mixed findings from the only two placebo-controlled RCTs that had suitably reliable evidence precluded generalisable conclusions about the efficacy of any particular homeopathic medicine or the impact of individualised homeopathic intervention on any given medical condition in animals.
My task when lecturing about homeopathy has thus become a great deal easier. But homeopathy-fans are not best pleased with this new article, I guess. They will try to claim that it was a biased piece of research conducted, most likely, by notorious anti-homeopaths who cannot be trusted. So who are the authors of this new publication?
They are RT Mathie from the British Homeopathic Association and J Clausen from one of Germany’s most pro-homeopathic institutions, the ‘Karl und Veronica Carstens-Stiftung’.
DOES ANYONE BELIEVE THAT THIS ARTICLE IS BIASED AGAINST HOMEOPATHY?
One of the most commonly ‘accepted’ indications for acupuncture is anxiety. Many trials have suggested that it is effective for that condition. But is this really true? To find out, we need someone to conduct a systematic review or meta-analysis.
Korean researchers have just published such a paper; they wanted to assess the preoperative anxiolytic efficacy of acupuncture therapy and therefore conducted a meta-analysis of all RCTs on the subject. Four electronic databases were searched up to February 2014. Data were included in the meta-analysis from RCTs in which groups receiving preoperative acupuncture treatment were compared with control groups receiving a placebo for anxiety.
Fourteen publications with a total of 1,034 patients were included. Six RCTs, using the State-Trait Anxiety Inventory-State (STAI-S), reported that acupuncture interventions led to greater reductions in preoperative anxiety relative to sham acupuncture. A further eight publications, employing visual analogue scales, also indicated significant differences in preoperative anxiety amelioration between acupuncture and sham acupuncture.
The authors concluded that acupuncture therapy aiming at reducing preoperative anxiety has a statistically significant effect relative to placebo or nontreatment conditions. Well-designed and rigorous studies that employ large sample sizes are necessary to corroborate this finding.
From these conclusions most casual readers might get the impression that acupuncture is indeed effective. One has to dig a bit deeper to realise that this is perhaps not so.
Why? Because the quality of the primary studies was often dismally poor. Most did not even mention adverse effects which, in my view, is a clear breach of publication ethics. What is more, all the studies were wide open to bias. The authors of the meta-analysis include in their results section the following short paragraph:
The 14 included studies exhibited various degrees of bias susceptibility (Figure 2 and Figure 3). The agreement rate, as measured using Cohen’s kappa, was 0.8. Only six studies reported concealed allocation; the other six described a method of adequate randomization, although the word “randomization” appeared in all of the articles. Thirteen studies prevented blinding of the participants. Participants in these studies had no previous experience of acupuncture. According to STRICTA, two studies enquired after patients’ beliefs as a group: there were no significant differences [20, 24].
There is a saying amongst experts about such meta-analyses: RUBBISH IN, RUBBISH OUT. It describes the fact that several poor studies, pooled meta-analytically, can never give a reliable result.
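The rubbish-in-rubbish-out problem can be made concrete with a small simulation (all numbers are assumed, purely for illustration): fourteen small trials that each carry the same systematic bias are pooled with standard fixed-effect (inverse-variance) weighting, and the pooled estimate comes out looking precise and "significant" even though the true effect is zero.

```python
# Illustrative simulation (assumed numbers): pooling many small trials that
# share a systematic bias amplifies the bias, not the truth.
import math
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.0   # the therapy does nothing
BIAS = 0.3          # systematic bias shared by every poor-quality trial
N = 20              # patients per arm

estimates, variances = [], []
for _ in range(14):                       # fourteen small trials
    treated = [random.gauss(TRUE_EFFECT + BIAS, 1.0) for _ in range(N)]
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    estimates.append(statistics.mean(treated) - statistics.mean(control))
    variances.append(2.0 / N)             # variance of a mean difference, sd=1

# Fixed-effect (inverse-variance) pooling:
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
z = pooled / se
# z is typically well above 1.96: the pooled result is "significant",
# but what has been measured precisely is the shared bias, not an effect.
```

The pooled standard error shrinks as trials are added, so a shared bias that would be unremarkable in any single small trial becomes a confidently wrong pooled conclusion.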
This does, however, not mean that such meta-analyses are necessarily useless. If the authors prominently (in the abstract) stress that the quality of the primary studies was wanting and that therefore the overall result is unreliable, they might inspire future researchers to conduct more rigorous trials and thus generate progress. Most importantly, by insisting on pointing out these limitations and by not drawing positive conclusions from flawed data, they would avoid misleading those health care professionals – and let’s face it, they are the majority – who merely read the abstract or even just the conclusions of such articles.
The authors of this review have failed to do any of this; they and the journal EBCAM have thus done a disservice to us all by contributing to the constant drip of misleading and false-positive information about the value of acupuncture.
Reflexology? Isn’t that an alternative therapy? And as such, a physiotherapist would not normally use it, most of us might think.
Well, think again! Here is what the UK Chartered Society of Physiotherapists writes about reflexology:
Developed centuries ago in countries such as China, Egypt and India, reflexology is often referred to as a ‘gentle’ and ‘holistic’ therapy that benefits both mind and body. It centres on the feet because these are said by practitioners to be a mirror, or topographical map, for the rest of the body. Manipulation of certain pressure, or reflex, points is claimed to have an effect on corresponding zones in the body. The impact, say reflexologists, extends throughout – to bones, muscles, organs, glands, circulatory and neural pathways. The head and hands can also be massaged in some cases. The treatment is perhaps best known for use in connection with relaxation and relief from stress, anxiety, pain, sleep disorders, headaches, migraine, menstrual and digestive problems. But advocates say it can be used to great effect far more widely, often in conjunction with other treatments.
Reflexology, or Reflex Therapy (RT) as some physiotherapists prefer to call it, clearly is approved by the UK Chartered Society of Physiotherapists. And what evidence do they have for it?
One hundred members of the Association of Chartered Physiotherapists in Reflex Therapy (ACPIRT) participated in an audit to establish a baseline of practice. Findings indicate that experienced therapists use RT in conjunction with their professional skills to induce relaxation (95%) and reduce pain (86%) for patients with conditions including whiplash injury and chronic pain. According to 68% of respondents, RT is “very good,” “good” or “as good as” orthodox physiotherapy practices. Requiring minimal equipment, RT may be as cost effective as orthodox physiotherapy with regards to duration and frequency of treatment.
But that’s not evidence!!! I hear you grumble. No, it isn’t, I agree.
Is there good evidence to show that RT is effective?
I am afraid not!
My own systematic review concluded that the best evidence available to date does not demonstrate convincingly that reflexology is an effective treatment for any medical condition.
Does that mean that the Chartered Society of Physiotherapists promotes quackery?
I let my readers answer that question.
Kinesiology tape is all the rage. Its proponents claim that it increases cutaneous stimulation, which facilitates motor unit firing, and consequently improves functional performance. But is this just clever marketing, wishful thinking or is it true? To find out, we need reliable data.
The current trial results are sparse, confusing and contradictory. A recent systematic review indicated that kinesiology tape may have limited potential to reduce pain in individuals with musculoskeletal injury; however, depending on the conditions, the reduction in pain may not be clinically meaningful. Kinesiology tape application did not reduce specific pain measures related to musculoskeletal injury above and beyond other modalities compared in the context of included articles.
The authors concluded that kinesiology tape may be used in conjunction with or in place of more traditional therapies, and further research that employs controlled measures compared with kinesiology tape is needed to evaluate efficacy.
This need for further research has just been met by Korean investigators who conducted a study testing the true effects of KinTape by means of a deceptive, randomized clinical trial.
Thirty healthy participants performed isokinetic testing under three taping conditions: true facilitative KinTape, sham KinTape, and no KinTape. The participants were blindfolded during the evaluation. Under the pretense of applying adhesive muscle sensors, the investigators applied KinTape to the participants’ quadriceps in the first two conditions. Normalized peak torque, normalized total work, and time to peak torque were measured at two angular speeds (60°/s and 180°/s) and analyzed with one-way repeated measures ANOVA.
Participants were successfully deceived and remained unaware of the KinTape application. No significant differences in normalized peak torque, normalized total work, or time to peak torque were found at 60°/s or 180°/s (p = 0.31–0.99) between the three taping conditions. The results showed that KinTape did not facilitate muscle performance in generating higher peak torque, yielding a greater total work, or inducing an earlier onset of peak torque.
The authors concluded that previously reported muscle facilitatory effects using KinTape may be attributed to placebo effects.
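For readers curious about the statistics, the analysis the investigators used is simple enough to sketch. Below is a minimal pure-Python version of a one-way repeated-measures ANOVA; the torque values are entirely hypothetical and invented for illustration, not data from the trial:

```python
# One-way repeated-measures ANOVA, the analysis named in the KinTape trial.
# The peak-torque values below are hypothetical, for illustration only.

def repeated_measures_anova(data):
    """data: one list per subject, each containing one score per condition."""
    n = len(data)           # number of subjects
    k = len(data[0])        # number of conditions
    scores = [x for row in data for x in row]
    grand = sum(scores) / (n * k)

    cond_means = [sum(row[c] for row in data) / n for c in range(k)]
    subj_means = [sum(row) / k for row in data]

    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for x in scores)
    ss_error = ss_total - ss_cond - ss_subj  # residual after removing subject effects

    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_error / df_error)
    return f, df_cond, df_error

# Four hypothetical subjects x three conditions (true tape, sham tape, no tape):
peak_torque = [
    [10, 11,  9],
    [12, 12, 12],
    [ 9, 10, 11],
    [11, 11, 10],
]
f, df1, df2 = repeated_measures_anova(peak_torque)
print(f, df1, df2)  # F = 0.5 with (2, 6) degrees of freedom
```

With F(2, 6) = 0.5, far below the conventional 5% critical value (roughly 5.14), these made-up data would, like the real trial, give no reason to reject the null hypothesis of equal muscle performance across taping conditions.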
The claims that are being made for kinesiology taping are truly extraordinary; just consider what this website is trying to tell us:
Kinesiology tape is a breakthrough new method for treating athletic sprains, strains and sports injuries. You may have seen Olympic and celebrity athletes wearing multicolored tape on their arms, legs, shoulders and back. This type of athletic tape is a revolutionary therapeutic elastic style of support that works in multiple ways to improve health and circulation in ways that traditional athletic tapes can’t compare. Not only does this new type of athletic tape help support and heal muscles, but it also provides faster, more thorough healing by aiding with blood circulation throughout the body.
Many athletes who have switched to using this new type of athletic tape report a wide variety of benefits including improved neuromuscular movement and circulation, pain relief and more. In addition to its many medical uses, Kinesiology tape is also used to help prevent injuries and manage pain and swelling, such as from edema. Unlike regular athletic taping, using elastic tape allows you the freedom of motion without restricting muscles or blood flow. By allowing the muscles a larger degree of movement, the body is able to heal itself more quickly and fully than before.
Whenever I read such over-enthusiastic promotion that is based not on evidence but on keen salesmanship, my alarm bells start ringing and I see parallels to the worst type of alternative medicine hype. In fact, kinesiology tapes have all the hallmarks of alternative medicine and their promoters have, as far as I can see, all the characteristics of quacks. The motto seems to be: LET’S EARN SOME MONEY FAST AND IGNORE THE SCIENCE WHILE WE CAN.
Chiropractors, like other alternative practitioners, use their own unique diagnostic tools for identifying the health problems of their patients. One such tool is the Kemp’s test, a manual test used by most chiropractors to diagnose problems with lumbar facet joints. The chiropractor rotates the torso of the patient while the patient’s pelvis is fixed; if manual counter-rotative resistance on one side of the pelvis by the chiropractor causes lumbar pain for the patient, it is interpreted as a sign of lumbar facet joint dysfunction which, in turn, would be treated with spinal manipulation.
All diagnostic tests have to fulfil certain criteria in order to be useful. It is therefore interesting to ask whether the Kemp’s test meets these criteria. This is precisely the question addressed in a recent paper. Its objective was to evaluate the existing literature regarding the accuracy of the Kemp’s test in the diagnosis of facet joint pain compared to a reference standard.
All diagnostic accuracy studies comparing the Kemp’s test with an acceptable reference standard were located and included in the review. Subsequently, all studies were scored for quality and internal validity.
Five articles met the inclusion criteria. Only two studies had a low risk of bias, and three had a low concern regarding applicability. Pooling of data from studies using similar methods revealed that the test’s negative predictive value was the only diagnostic accuracy measure above 50% (56.8%, 59.9%).
The authors concluded that currently, the literature supporting the use of the Kemp’s test is limited and indicates that it has poor diagnostic accuracy. It is debatable whether clinicians should continue to use this test to diagnose facet joint pain.
The problem with chiropractic diagnostic methods is not confined to the Kemp’s test, but extends to most tests employed by chiropractors. Why should this matter?
If diagnostic methods are not reliable, they produce either false-positive or false-negative findings. When a false-negative diagnosis is made, the chiropractor might fail to treat a condition that needs attention. Much more common in chiropractic routine, I guess, are false-positive diagnoses: chiropractors frequently treating conditions which the patient does not have. This, in turn, is not just a waste of money and time but also, if the ensuing treatment carries risks, an unnecessary exposure of patients to harm.
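The arithmetic behind predictive values, the measure the review reports, is worth spelling out. Here is a short sketch with invented counts (not figures from the review) showing how positive and negative predictive value are computed from a standard 2×2 table of test outcomes:

```python
# Predictive values of a diagnostic test from a 2x2 table of outcomes.
# The patient counts below are invented, purely to illustrate the arithmetic.

def predictive_values(tp, fp, fn, tn):
    """tp/fp/fn/tn: true positives, false positives, false negatives, true negatives."""
    ppv = tp / (tp + fp)  # chance a positive test result is actually correct
    npv = tn / (tn + fn)  # chance a negative test result is actually correct
    return ppv, npv

# Hypothetical sample: 100 patients, 40 of whom truly have facet joint pain.
ppv, npv = predictive_values(tp=30, fp=20, fn=10, tn=40)
print(ppv, npv)  # 0.6 0.8
```

A test whose predictive values hover near 50%, as the review reports for the Kemp’s test, sorts patients little better than a coin toss, and every false positive translates directly into spinal manipulation the patient did not need.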
The authors of this review, chiropractors from Canada, should be praised for tackling this subject. However, their conclusion that “it is debatable whether clinicians should continue to use this test to diagnose facet joint pain” is in itself highly debatable: the use of nonsensical diagnostic tools can only result in nonsense and should therefore be disallowed.