MD, PhD, FMedSci, FSB, FRCP, FRCPEd


This post is dedicated to Mel Koppelman.

Those who have followed the recent discussions about acupuncture on this blog will probably know her; she is an acupuncturist who (thinks she) knows a lot about research because she has several higher qualifications (but was unable to show us any research published by herself). Mel seems very quick to lecture others about research methodology. Yesterday, she posted this comment in relation to my previous post on a study of aromatherapy and reflexology:

Professor Ernst, This post affirms yet again a rather poor understanding of clinical trial methodology. A pragmatic trial such as this one with a wait-list control makes no attempt to look for specific effects. You say “it is quite simply wrong to assume that this outcome is specifically related to the two treatments.” Where have specific effects been tested or assumed in this study? Your statement in no way, shape or form negates the author’s conclusions that “aromatherapy massage and reflexology are simple and effective non-pharmacologic nursing interventions.” Effectiveness is not a measure of specific effects.

I am most grateful for this comment because it highlights an issue that I had wanted to address for some time: the meanings of the two terms ‘efficacy’ and ‘effectiveness’, and their differences as seen by scientists and by alternative practitioners/researchers.

Let’s start with the definitions.

I often use Alan Earl-Slater’s excellent book entitled THE HANDBOOK OF CLINICAL TRIALS AND OTHER RESEARCH. In it, EFFICACY is defined as ‘the degree to which an intervention does what it is intended to do under ideal conditions’. EFFECTIVENESS is the degree to which a treatment works under real-life conditions. An EFFECTIVENESS TRIAL is a trial that ‘is said to approximate reality (i.e. clinical practice). It is sometimes called a pragmatic trial’. An EFFICACY TRIAL ‘is a clinical trial that is said to take place under ideal conditions’.

In other words, an efficacy trial investigates the question ‘can the therapy work?’, and an effectiveness trial asks ‘does this therapy work?’ In both cases, the questions relate to the therapy per se and not to the plethora of phenomena which are not directly related to it. It seems logical that, where possible, the first question should be addressed before the second – it makes little sense to test for effectiveness if efficacy has not been ascertained, and effectiveness without efficacy does not seem to be possible.

In my 2007 book entitled UNDERSTANDING RESEARCH IN COMPLEMENTARY AND ALTERNATIVE MEDICINE (written especially for alternative therapists like Mel), I adopted these definitions and added: “It is conceivable that a given therapy works only under optimal conditions but not in everyday practice. For instance, in clinical practice patients may not comply with a therapy because it causes adverse effects.” I should have added perhaps that adverse effects are by no means the only reason for non-compliance, and that non-compliance is not the only reason why an efficacious treatment might not be effective.

Most scientists would agree with the above definitions. In fact, I am not aware of a debate about them in scientific circles. But they are not something alternative practitioners tend to like. Why? Because, using these strict definitions, many alternative therapies are neither of proven efficacy nor effectiveness.

What can be done about this unfortunate situation?

Simple! Let’s re-formulate the definitions of efficacy and effectiveness!

Efficacy, according to some alternative medicine proponents, refers to the therapeutic effects of the therapy per se, in other words, its specific effects. (That almost coincides with the scientific definition of this term – except, of course, it fails to tell us anything about the boundary conditions [optimal or real-life conditions].)

Effectiveness, according to the advocates of alternative therapies, refers to a therapy’s specific effects plus its non-specific effects. Some researchers have even introduced the term ‘real-life effectiveness’ for this.

This is why the authors of the study discussed in my previous post could conclude that “aromatherapy massage and reflexology are simple… effective… interventions… to help manage pain and fatigue in patients with rheumatoid arthritis.” Based on their data, neither aromatherapy nor reflexology has been shown to be effective. They might appear to be effective because patients expected to get better, or because patients in the no-treatment control group felt worse for not getting the extra care. Based on studies of this nature, giving patients £10 or a box of chocolates might also turn out to be “simple… effective… interventions… to help manage pain and fatigue in patients with rheumatoid arthritis.” With these definitions of efficacy and effectiveness, there are hardly any limits to bogus claims for any old quackery.

Such obfuscation suits proponents of alternative therapies fine because, using such definitions, virtually every treatment anyone might ever think of can be shown to be effective! Wishful thinking, it seems, can fulfil almost any dream; it can even turn the truth upside down.

Or can anyone name an alternative treatment that cannot even generate a placebo response when administered with empathy, sympathy and care? Compared to doing nothing, virtually every ineffective therapy might generate outcomes that make the treatment look effective. Even the anticipation of an effect alone might do the trick. How often have you had a toothache, gone to the dentist, and discovered, while sitting in the waiting room, that the pain had gone? Does that mean that sitting in a waiting room is an effective treatment for dental pain?

In fact, some enthusiasts of alternative medicine could soon begin to argue that, with their new definition of ‘effectiveness’, we no longer need controlled clinical trials at all, if we want to demonstrate how effective alternative therapies truly are. We can just do observational studies without a control group, note that lots of patients get better, and ‘Bob is your uncle’!!! This is much faster, saves money, time and effort, and has the undeniable advantage of never generating a negative result.

To most outsiders, all this might seem a bit like splitting hairs. However, I fear that it is far from that. In fact, it turns out to be a fairly fundamental issue in almost any discussion about the value or otherwise of alternative medicine. And, I think, it is also a matter of principle that reaches far beyond alternative medicine: if we allow various interest groups, lobbyists, sects, cults etc. to use their own definitions of fundamentally important terms, any dialogue, understanding or progress becomes almost impossible.

Yesterday, I wrote about a new acupuncture trial. Amongst other things, I wanted to find out whether the author who had previously insisted I answer his questions about my view on the new NICE guideline would himself answer a few questions when asked politely. To remind you, this is what I wrote:

This new study was designed as a randomized, sham-controlled trial of acupuncture for persistent allergic rhinitis in adults that investigated possible modulation of mucosal immune responses. A total of 151 individuals were randomized into real and sham acupuncture groups (who received twice-weekly treatments for 8 weeks) and a no acupuncture group. Various cytokines, neurotrophins, proinflammatory neuropeptides, and immunoglobulins were measured in saliva or plasma from baseline to 4-week follow-up.

A statistically significant reduction in allergen specific IgE for house dust mite was seen only in the real acupuncture group. A statistically significant down-regulation was also seen in the pro-inflammatory neuropeptide substance P (SP) 18 to 24 hours after the first treatment. No significant changes were seen in the other neuropeptides, neurotrophins, or cytokines tested. Nasal obstruction, nasal itch, sneezing, runny nose, eye itch, and unrefreshed sleep improved significantly in the real acupuncture group (post-nasal drip and sinus pain did not) and continued to improve up to 4-week follow-up.

The authors concluded that acupuncture modulated mucosal immune response in the upper airway in adults with persistent allergic rhinitis. This modulation appears to be associated with down-regulation of allergen specific IgE for house dust mite, which this study is the first to report. Improvements in nasal itch, eye itch, and sneezing after acupuncture are suggestive of down-regulation of transient receptor potential vanilloid 1.

…Anyway, the trial itself raises a number of questions – unfortunately I have no access to the full paper – which I will post here in the hope that my acupuncture friend, who is clearly impressed by this paper, might provide the answers in the comments section below:

  1. Which was the primary outcome measure of this trial?
  2. What was the power of the study, and how was it calculated?
  3. For which outcome measures was the power calculated?
  4. How were the subjective endpoints quantified?
  5. Were validated instruments used for the subjective endpoints?
  6. What type of sham was used?
  7. Are the reported results the findings of comparisons between verum and sham, or verum and no acupuncture, or intra-group changes in the verum group?
  8. What other treatments did each group of patients receive?
  9. Does anyone really think that this trial shows that “acupuncture is a safe, effective and cost-effective treatment for allergic rhinitis”?

In the comments section, the author wrote: “after you have read the full text and answered most of your questions for yourself, it might then be a more appropriate time to engage in any meaningful discussion, if that is in fact your intent”, and I asked him to send me his paper. As he does not seem to have the intention to do so, I will answer the questions myself and encourage everyone to have a close look at the full paper [which I can supply on request].

  1. The myriad of lab tests were defined as primary outcome measures.
  2. Two sentences are offered, but they do not allow me to reconstruct how this was done.
  3. No details are provided.
  4. Most were quantified with a 3 point scale.
  5. Mostly not.
  6. Needle insertion at non-acupoints.
  7. The results are a mixture of inter- and intra-group differences.
  8. Patients were allowed to use conventional treatments and the frequency of this use was reported in patient diaries.
  9. I don’t think so.
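For readers unfamiliar with power calculations (questions 2 and 3 above), here is a minimal sketch of the standard textbook sample-size formula for comparing two group means. This is a generic illustration of the concept, not the method used in the paper, and all inputs (effect size, alpha, power) are assumptions chosen for the example:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate n per group for a two-sided, two-sample comparison of means.

    effect_size is Cohen's d (mean difference divided by the common SD);
    uses the normal approximation to the t-test.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)

# To detect a medium effect (d = 0.5) with 80% power at alpha = 0.05:
print(sample_size_per_group(0.5))  # 63 per group (normal approximation; an exact t-test gives slightly more)
```

The point of asking questions 2 and 3 is that such a calculation only protects one outcome measure; a trial powered for a lab marker can easily be underpowered for its clinical endpoints.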

So, here is my interpretation of this study:

  • It lacked power for many outcome measures, certainly the clinical ones.
  • There were hardly any differences between the real and the sham acupuncture group.
  • Most of the relevant results were based on intra-group changes rather than comparisons of sham with real acupuncture, a fact which is obfuscated in the abstract.
  • In a controlled trial fluctuations within one group must never be interpreted as caused by the treatment.
  • There were dozens of tests for statistical significance, and there seems to be no correction for multiple testing.
  • Thus the few significant results that emerged when comparing sham with real acupuncture might easily be false positives.
  • Patient-blinding seems questionable.
  • As the only therapist in the study, McDonald might be suspected of having influenced his patients through verbal and non-verbal communication.
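The multiple-testing point deserves a number. A short calculation shows how quickly uncorrected significance tests generate false positives; the figure of 30 tests below is an illustrative assumption, not the paper's exact count:

```python
def prob_at_least_one_false_positive(m: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive among m independent tests
    when every null hypothesis is true (i.e. no real effects at all)."""
    return 1 - (1 - alpha) ** m

# With, say, 30 uncorrected tests at the conventional 5% level:
print(round(prob_at_least_one_false_positive(30), 2))  # 0.79

# A Bonferroni correction would instead require p < alpha/m for each test:
print(round(0.05 / 30, 4))  # 0.0017
```

In other words, with dozens of uncorrected tests, a handful of 'significant' results is close to guaranteed even if acupuncture does nothing at all.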

I am sure there are many more flaws, particularly in the stats, and I leave it to others to identify them. The ones I found are, however, already serious enough, in my view, to call for a withdrawal of this paper. Essentially, the authors seem to have presented a study with largely negative findings as a trial with positive results showing that acupuncture is an effective therapy for allergic rhinitis. Subsequently, McDonald went on social media to inflate his findings even more. One might easily ask: is this scientific misconduct or just poor science?

I would be most interested to hear what you think about it [if you want to see the full article, please send me an email].

While looking up an acupuncturist who has recently commented on this blog trying to teach me how to do science and understand research methodology, I was impressed that he, Dr John McDonald, PhD, has just published a clinical trial. Not many acupuncturists do that, you know, and I very much applaud this action, which even seems to have earned him his PhD! McDonald is understandably proud of his achievement – all the more because the study arrived at positive conclusions. This is what he wrote about it:

…So, in a nutshell, acupuncture is a safe, effective and cost-effective treatment for allergic rhinitis which produces lasting changes in the immune system and hence improvements in symptoms and quality of life. (Dr John McDonald)

Fascinating! I quickly looked up the paper. Here it is:

This new study was designed as a randomized, sham-controlled trial of acupuncture for persistent allergic rhinitis in adults that investigated possible modulation of mucosal immune responses. A total of 151 individuals were randomized into real and sham acupuncture groups (who received twice-weekly treatments for 8 weeks) and a no acupuncture group. Various cytokines, neurotrophins, proinflammatory neuropeptides, and immunoglobulins were measured in saliva or plasma from baseline to 4-week follow-up.

A statistically significant reduction in allergen specific IgE for house dust mite was seen only in the real acupuncture group. A statistically significant down-regulation was also seen in the pro-inflammatory neuropeptide substance P (SP) 18 to 24 hours after the first treatment. No significant changes were seen in the other neuropeptides, neurotrophins, or cytokines tested. Nasal obstruction, nasal itch, sneezing, runny nose, eye itch, and unrefreshed sleep improved significantly in the real acupuncture group (post-nasal drip and sinus pain did not) and continued to improve up to 4-week follow-up.

The authors concluded that acupuncture modulated mucosal immune response in the upper airway in adults with persistent allergic rhinitis. This modulation appears to be associated with down-regulation of allergen specific IgE for house dust mite, which this study is the first to report. Improvements in nasal itch, eye itch, and sneezing after acupuncture are suggestive of down-regulation of transient receptor potential vanilloid 1.

These conclusions seem to be based on the data of the study. But they are oddly out of line with the above statement made by McDonald about his trial. What could be the reason for this discrepancy? Could it be that he behaves in a ‘scientifically’ correct manner when under the watchful eye of numerous co-authors from the School of Medicine, Menzies Health Institute, Griffith University, Queensland, Australia, the National Institute of Complementary Medicine, Western Sydney University, Sydney, Australia, the Health Innovations Research Institute and School of Health Sciences, RMIT University, Melbourne, Victoria, Australia, and Stanford University, Palo Alto, California? And could it be that he is a little more ‘liberal’ when on his own? A mere speculation, of course, but it would be nice to know.

Anyway, the trial itself raises a number of questions – unfortunately I have no access to the full paper – which I will post here in the hope that my acupuncture friend, who is clearly impressed by this paper, might provide the answers in the comments section below:

  1. Which was the primary outcome measure of this trial?
  2. What was the power of the study, and how was it calculated?
  3. For which outcome measures was the power calculated?
  4. How were the subjective endpoints quantified?
  5. Were validated instruments used for the subjective endpoints?
  6. What type of sham was used?
  7. Are the reported results the findings of comparisons between verum and sham, or verum and no acupuncture, or intra-group changes in the verum group?
  8. Was the success of patient-blinding checked, quantified and successful?
  9. What other treatments did each group of patients receive?
  10. Does anyone really think that this trial shows that “acupuncture is a safe, effective and cost-effective treatment for allergic rhinitis”?

Homeopathy is not blessed with many geniuses, it seems. Therefore, it is all the more noteworthy that there is one who seems to be so extraordinarily gifted that everything she touches turns to gold.

Her new and remarkable study set out to measure the efficacy of individualized homeopathic treatment for binge eating in adult males.

This case study was a 9-week pilot using an embedded, mixed-methods design. A 3-week baseline period was followed by a 6-week treatment period. The setting was the Homeopathic Health Clinic at the University of Johannesburg in Johannesburg, South Africa. Through purposive sampling, the research team recruited 15 Caucasian, male participants, aged 18-45 y, who were exhibiting binge eating. Individualized homeopathic remedies were prescribed to each participant. Participants were assessed by means of (1) a self-assessment calendar (SAC), recording the frequency and intensity of binging; (2) the Binge Eating Scale (BES), a psychometric evaluation of severity; and (3) case analysis evaluating changes with time.

Ten participants completed the study. The study found a statistically significant improvement with regard to the BES (P = .003) and the SAC (P = .006), with a large effect size, indicating that a decrease occurred in the severity and frequency of binging behaviour during the study period.
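A result like this is exactly what one would expect from an uncontrolled before-and-after study, because patients are recruited when their symptoms are bad and tend to drift back towards their average regardless of treatment (regression to the mean). A toy simulation makes the point; all parameters (n = 15 participants, the inclusion cutoff, the noise levels) are illustrative assumptions, not values from the study:

```python
import random
import statistics

random.seed(1)

def simulate_uncontrolled_pilot(n: int = 15, cutoff: float = 1.0) -> list[float]:
    """Enrol subjects only if their noisy baseline score is high, then
    re-measure later. There is NO treatment effect of any kind."""
    improvements = []
    while len(improvements) < n:
        trait = random.gauss(0, 1)            # each subject's stable severity
        baseline = trait + random.gauss(0, 1) # measurement at a symptomatic peak
        if baseline > cutoff:                 # inclusion criterion selects high scorers
            followup = trait + random.gauss(0, 1)  # later measurement, no intervention
            improvements.append(baseline - followup)
    return improvements

improvements = simulate_uncontrolled_pilot()
# The mean 'improvement' is typically well above zero even though nothing happened:
print(round(statistics.mean(improvements), 2))
```

With enough such 'pilots', some will even reach conventional statistical significance, which is why a no-treatment or placebo control group is indispensable.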

The authors concluded that this small study showed the potential benefits of individualized homeopathic treatment of binge eating in males, decreasing both the frequency and severity of binging episodes. Follow-up studies are recommended to explore this treatment modality as a complementary therapeutic option in eating disorders characterized by binge eating.

While two of the three authors have not ventured into trials of homeopathy before, the third and senior author (Janice Pellow from the Department of Homoeopathy, University of Johannesburg, South Africa) already has several homeopathic studies to her name. They all seem quite similar:

Number 1 was a clinical trial that concluded:

The study was too small to be conclusive, but results suggest the homeopathic complex, together with physiotherapy, can significantly improve symptoms associated with chronic low back pain due to osteoarthritis.

Number 2 was an RCT which concluded:

The homeopathic complex used in this study exhibited significant anti-inflammatory and pain-relieving qualities in children with acute viral tonsillitis.

Number 3 was a pilot study concluding:

Findings suggest that daily use of the homeopathic complex does have an effect over a 4-week period on physiological and cognitive arousal at bedtime as well as on sleep onset latency in psychophysiological onset insomnia sufferers.

Number 4 was an RCT that concluded:

The homeopathic medicine reduced the sensitivity reaction of cat allergic adults to cat allergen, according to the skin prick test.

See what I mean? Five studies and five positive results!

Considering that they were obtained with different types of homeopathy, with different patients suffering from different conditions, with different trial designs and with different sets of co-workers, this is an even more remarkable achievement. In the hands of Janice Pellow, homeopathy seems to work under all circumstances and for all conditions.

I feel a Nobel Prize might be in the air.

Pity that she would not score all that highly on my (self-invented) TI.


Mindfulness-based stress reduction (MBSR) has not been rigorously evaluated as a treatment of chronic low back pain. According to its authors, this RCT was aimed at evaluating “the effectiveness for chronic low back pain of MBSR vs cognitive behavioural therapy (CBT) or usual care.”

The investigators randomly assigned patients to receive MBSR (n = 116), CBT (n = 113), or usual care (n = 113). CBT meant training to change pain-related thoughts and behaviours and MBSR meant training in mindfulness meditation and yoga. Both were delivered in 8 weekly 2-hour groups. Usual care included whatever care participants received.

Coprimary outcomes were the percentages of participants with clinically meaningful (≥30%) improvement from baseline in functional limitations (modified Roland Disability Questionnaire [RDQ]; range, 0-23) and in self-reported back pain bothersomeness (scale, 0-10) at 26 weeks. Outcomes were also assessed at 4, 8, and 52 weeks.

There were 342 randomized participants with a mean duration of back pain of 7.3 years; 294 patients completed the study at 26 weeks, and 290 completed it at 52 weeks. In intent-to-treat analyses at 26 weeks, the percentage of participants with clinically meaningful improvement on the RDQ was higher for those who received MBSR (60.5%) and CBT (57.7%) than for usual care (44.1%) (RR for CBT vs usual care, 1.31 [95% CI, 1.01-1.69]). The percentage of participants with clinically meaningful improvement in pain bothersomeness at 26 weeks was 43.6% in the MBSR group and 44.9% in the CBT group, vs 26.6% in the usual care group (RR for CBT vs usual care, 1.69 [95% CI, 1.18-2.41]). Findings for MBSR persisted with little change at 52 weeks for both primary outcomes.
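For readers who want to check figures like these, the risk ratio and its confidence interval can be reproduced from the reported proportions with the standard log-normal approximation. This is a generic sketch; I am assuming the randomized group sizes (113 per arm) as denominators, which is only an approximation to the authors' actual analysis:

```python
from math import exp, log, sqrt

def risk_ratio_ci(p1: float, n1: int, p2: float, n2: int, z: float = 1.96):
    """Risk ratio of group 1 vs group 2 with an approximate 95% CI
    (normal approximation on the log scale)."""
    rr = p1 / p2
    se_log_rr = sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
    lo = exp(log(rr) - z * se_log_rr)
    hi = exp(log(rr) + z * se_log_rr)
    return round(rr, 2), round(lo, 2), round(hi, 2)

# CBT (57.7% improved, assumed n=113) vs usual care (44.1% improved, assumed n=113):
print(risk_ratio_ci(0.577, 113, 0.441, 113))  # close to the reported 1.31 [1.01-1.69]
```

Small discrepancies from the published interval are expected because the inputs here are the rounded percentages from the abstract.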

The authors concluded that among adults with chronic low back pain, treatment with MBSR or CBT, compared with usual care, resulted in greater improvement in back pain and functional limitations at 26 weeks, with no significant differences in outcomes between MBSR and CBT. These findings suggest that MBSR may be an effective treatment option for patients with chronic low back pain.

At first glance, this seems like a well-conducted study. It was conducted by one of the leading back pain research teams and was published in a top journal. It will therefore have considerable impact. However, on closer examination, I have serious doubts about certain aspects of this trial. In my view, both the aims and the conclusions of this RCT are quite simply wrong.

The authors state that they aimed at evaluating “the effectiveness for chronic low back pain of MBSR vs cognitive behavioural therapy (CBT) or usual care.” This is not just misleading, it is wrong! The correct aim should have been to evaluate “the effectiveness for chronic low back pain of MBSR plus usual care vs cognitive behavioural therapy plus usual care or usual care alone.” One has to go into the method section to find the crucial statement: “All participants received any medical care they would normally receive.”

Consequently, the conclusions are equally wrong. They should have read as follows: Among adults with chronic low back pain, treatment with MBSR plus usual care or CBT plus usual care, compared with usual care alone, resulted in greater improvement in back pain and functional limitations at 26 weeks, with no significant differences in outcomes between MBSR and CBT.

In other words, this is yet another trial with the dreaded ‘A+B vs B’ design. Because A+B is always more than B, such a study will never generate a negative result (even if A is just a placebo). The results are therefore entirely compatible with the notion that the two tested treatments are pure placebos. Add to this the disappointment many patients in the ‘usual care group’ might have felt at not receiving an additional therapy for their pain, and you have a most plausible explanation for the observed outcomes.
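The weakness of the 'A+B vs B' design can be demonstrated with a toy simulation: even when treatment A contributes nothing but a non-specific boost (expectation, attention, extra contact time), the A+B arm beats B alone in virtually every trial. All the parameters below (group size, size of the non-specific boost, outcome noise) are illustrative assumptions:

```python
import random
import statistics

random.seed(0)

def a_plus_b_trial(n: int = 100, natural_course: float = 1.0, placebo_boost: float = 0.5) -> float:
    """One simulated 'A+B vs B' trial. Both arms improve by the natural course
    of the condition; A adds ONLY a non-specific (placebo/attention) boost."""
    b_only = [random.gauss(natural_course, 1) for _ in range(n)]
    a_plus_b = [random.gauss(natural_course + placebo_boost, 1) for _ in range(n)]
    return statistics.mean(a_plus_b) - statistics.mean(b_only)

diffs = [a_plus_b_trial() for _ in range(1000)]
share_positive = sum(d > 0 for d in diffs) / len(diffs)
print(share_positive)  # close to 1.0: A+B 'wins' almost every time, although A has no specific effect
```

This is why a positive 'A+B vs B' result tells us nothing about whether A works beyond placebo; only a comparison against a credible sham or placebo control can do that.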

I am totally puzzled why the authors failed to discuss these possibilities and limitations in full, and I am equally bewildered that JAMA published such questionable research.


In recent blogs, I have written much about acupuncture and particularly about the unscientific notions of traditional acupuncturists. I was therefore surprised to see that a UK charity is teaming up with traditional acupuncturists in an exercise that looks as though it is designed to mislead the public.

The website of ‘Anxiety UK’ informs us that this charity and the British Acupuncture Council (BAcC) have launched a ‘pilot project’ which will see members of Anxiety UK being able to access traditional acupuncture through this new partnership. Throughout the pilot project, they proudly proclaim, data will be collected to “determine the effectiveness of traditional acupuncture for treating those living with anxiety and anxiety based depression.”

This, they believe, will enable both parties to continue to build a body of evidence to measure the success rate of this type of treatment. Anxiety UK’s Chief Executive Nicky Lidbetter said: “This is an exciting project and will provide us with valuable data and outcomes for those members who take part in the pilot and allow us to assess the benefits of extending the pilot to a regular service for those living with anxiety. “We know anecdotally that many people find complementary therapies used to support conventional care can provide enormous benefit, although it should be remembered they are used in addition to and not instead of seeking medical advice from a doctor or taking prescribed medication. This supports our strategic aim to ensure that we continue to make therapies and services that are of benefit to those with anxiety and anxiety based depression, accessible.”

And what is wrong with that, you might ask.

What is NOT wrong with it, would be my response.

To start with, traditional acupuncture relies on obsolete assumptions like yin and yang, meridians, energy flow, acupuncture points etc. They have one thing in common: they fly in the face of science and evidence. But this might just be a triviality. More important is, I believe, the fact that a pilot project cannot determine the effectiveness of a therapy. Therefore the whole exercise smells very much like a promotional activity for pure quackery.

And what about the hint in the direction of anecdotal evidence in support of the study? Are they not able to do a simple Medline search? Because, if they had done one, they would have found a plethora of articles on the subject. Most of these show that, while there are plenty of studies, the majority are too flawed to allow firm conclusions.

A review by someone who certainly cannot be accused of being biased against alternative medicine, for instance, informs us that “trials in depression, anxiety disorders and short-term acute anxiety have been conducted but acupuncture interventions employed in trials vary as do the controls against which these are compared. Many trials also suffer from small sample sizes. Consequently, it has not proved possible to accurately assess the effectiveness of acupuncture for these conditions or the relative effectiveness of different treatment regimens. The results of studies showing similar effects of needling at specific and non-specific points have further complicated the interpretation of results. In addition to measuring clinical response, several clinical studies have assessed changes in levels of neurotransmitters and other biological response modifiers in an attempt to elucidate the specific biological actions of acupuncture. The findings offer some preliminary data requiring further investigation.”

Elsewhere, the same author, together with other pro-acupuncture researchers, wrote this: “Positive findings are reported for acupuncture in the treatment of generalised anxiety disorder or anxiety neurosis but there is currently insufficient research evidence for firm conclusions to be drawn. No trials of acupuncture for other anxiety disorders were located. There is some limited evidence in favour of auricular acupuncture in perioperative anxiety. Overall, the promising findings indicate that further research is warranted in the form of well designed, adequately powered studies.”

What does this mean in the context of the charity’s project?

I think it tells us that acupuncture for anxiety is not exactly the most promising approach for further investigation. Even in the realm of alternative medicine, there are several interventions which are supported by more encouraging evidence. And even if one disagrees with this statement, one cannot possibly disagree with the fact that more flimsy research is not required. If we do need more studies, they must be rigorous and not promotion thinly disguised as science.

I guess the ultimate question here is one of ethics. Do charities not have an ethical and moral duty to spend our donations wisely and productively? When does such ill-conceived pseudo-research cross the line to become offensive or even fraudulent?

As it is ‘ACUPUNCTURE AWARENESS WEEK’, I thought I would make a constructive contribution to this field by assessing what is currently being published on the subject. For this purpose, I looked at the first 100 Medline-listed articles of 2016. This has the advantage, of course, that all the numbers thus generated can be seen as absolute and as percentage figures at the same time. I categorised the articles according to where they were published and what their subject was.

My results show that, of the first 100 articles,

  • 33 were published in CAM journals,
  • 67 were published in mainstream medical journals,
  • 6 were RCTs,
  • 6 were other clinical studies,
  • 30 were pre-clinical investigations,
  • 27 were systematic reviews,
  • 8 were surveys,
  • 23 were other types of papers.

I have to admit, these results are not as bad as I had feared. What seems impressive is foremost the notion that acupuncture research has entered the mainstream journals. But there are issues that might be of concern; in my view these results suggest that:

  • Too little research is focussed on the two big questions: efficacy and safety.
  • In relation to the meagre output in RCTs, there are too many systematic reviews.
  • As long as we cannot be sure that acupuncture is more than a placebo, all these pre-clinical studies seem a bit out of place.
  • The vast majority of the articles were in low or very low impact journals.
  • There was only one paper that I would consider outstanding (my next post will discuss it).

So, what conclusions can one draw from these data?

Not many, I fear.

My little exploration does not lend itself to grand, generalizable or far-reaching conclusions. Acupuncture fans might proudly say: LOOK HOW FAR WE HAVE COME! Less enthusiastic experts, however, might think: LOOK HOW FAR YOU HAVE TO GO!

Cancer-related fatigue (CRF) is one of the most common symptoms reported by cancer patients, and it is a symptom that is often difficult to treat. As always in such a situation, there are lots of alternative therapies on offer. Yet the evidence for most is flimsy, to put it mildly.

But perhaps there is hope? The very first RCT with a 2016 date to be reviewed on this blog investigated the efficacy of the amino acid jelly Inner Power® (IP), a semi-solid, orally administrable dietary supplement containing coenzyme Q10 and L-carnitine, in controlling CRF in breast cancer patients in Japan.

Breast cancer patients with CRF undergoing chemotherapy were randomly assigned to receive IP once daily or regular care for 21 days. The primary endpoint was the change in the worst level of fatigue during the past 24 h (Brief Fatigue Inventory [BFI] item 3 score) from day 1 (baseline) to day 22. Secondary endpoints were change in global fatigue score (GFS; the average of all BFI items), anxiety and depression assessed by the Hospital Anxiety and Depression Scale (HADS), quality of life assessed by the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30) and EORTC Breast Cancer-Specific QLQ (EORTC QLQ-BR23), and adverse events.

Fifty-nine patients were enrolled in the study, of whom 57 were included in the efficacy analysis. Changes in the worst level of fatigue, GFS, and current feeling of fatigue were significantly different between the intervention and control groups, whereas the change in the average feeling of fatigue was not significantly different between groups. HADS, EORTC QLQ-C30, and EORTC QLQ-BR23 scores were not significantly different between the two groups. No severe adverse events were observed.

The authors concluded that ‘IP may control moderate-severe CRF in breast cancer patients.’

The website of the manufacturer provides the following information on IP:

Inner Power is a functional food that provides various nutrients, such as zinc and copper. Zinc is a nutrient that your body needs to maintain your sense of taste. Zinc is also vital in keeping the skin and mucous membranes healthy and in regulating metabolism of proteins and nucleic acids. Copper helps the body form red blood cells and bones and regulates many enzymes that are found in the body. One pouch of Inner Power each day is the recommended daily serving.

  • Consuming a large amount of the product will not cure any underlying disease or improve your health condition.
  • Do not consume too much of the product because excessive zinc intake may inhibit the absorption of copper.
  • Observe the recommended daily serving of the product. This product should not be given to infants or children.

The recommended daily serving of the product (1 pouch/day) contains 43% of the reference daily intake of zinc and 50% of the reference daily intake of copper. Inner Power is neither categorized as a food for special dietary use nor approved individually by the Ministry of Health, Labour, and Welfare. You should eat well-balanced meals consisting of staple foods, including a main dish and side dishes.

I cannot say that this inspires me with confidence.

What about the trial itself?

To be honest, I am not impressed. The most obvious flaw is, I think, that there was not the slightest attempt to control for placebo effects. As I have pointed out many times before: with the 'A+B versus B' design, one can make any old placebo appear effective.

The randomized, placebo-controlled, double-blind trial is usually the methodology that carries the least risk of bias when testing the efficacy of a therapy. This fact is an obvious annoyance to some alt med enthusiasts, because such trials far too often fail to produce the results they were hoping for.

But there is no need to despair. Here I provide a few simple tips on how to mislead the public with seemingly rigorous trials.

1 FRAUD

The most brutal method for misleading people is simply to cheat. The Germans have a saying, 'Papier ist geduldig' (paper is patient), implying that anyone can put anything on paper. Fortunately, we currently have plenty of alt med journals which publish any rubbish anyone might dream up. The process of 'peer-review' is one of several mechanisms supposed to minimise the risk of scientific fraud. Yet alt med journals are cleverer than that! Their peer-review rarely involves independent and critical scientists; more often than not, you can even ask that your best friend be invited to do the peer-review, and the alt med journal will follow your wish. Consequently, the door is wide open to cheating. Once your fraudulent paper has been published, it is almost impossible to tell that something is fundamentally wrong.

But cheating is not confined to original research. You can also apply the method to other types of research, of course. For instance, the authors of the infamous ‘Swiss report’ on homeopathy generated a false positive picture using published systematic reviews of mine by simply changing their conclusions from negative to positive. Simple!

2 PRETTIFICATION

Obviously, outright cheating is not always as simple as that. Even in alt med, you cannot easily claim to have conducted a clinical trial without a complex infrastructure which invariably involves other people. And they are likely to want some control over what is happening. This means that complete fabrication of an entire data set may not always be possible. What might still be feasible, however, is the 'prettification' of the results. By just 're-adjusting' a few data points that failed to live up to your expectations, you might be able to turn a negative into a positive trial. Proper governance is aimed at preventing this type of 'mini-fraud', but fortunately you work in alt med, where such mechanisms are rarely adequately implemented.

3 OMISSION

Another very handy method is the omission of aspects of your trial which regrettably turned out to be in disagreement with the desired overall result. In most studies, one has a myriad of endpoints. Once the statistics of your trial have been calculated, it is likely that some of them yield the wanted positive results, while others do not. By simply omitting any mention of the embarrassingly negative results, you can easily turn a largely negative study into a seemingly positive one. Normally, researchers have to rely on a pre-specified protocol which defines a primary outcome measure. Thankfully, in the absence of proper governance, it usually is possible to publish a report which obscures such detail and thus misleads the public (I even think there has been an example of such an omission on this very blog).

4 STATISTICS

Yes – lies, damned lies, and statistics! A gifted statistician can easily find ways to 'torture the data until they confess'. One only has to run statistical test after statistical test, and BINGO, one will eventually yield something that can be marketed as the longed-for positive result. Normally, researchers must have a protocol that pre-specifies all the methodologies used in a trial, including the statistical analyses. But, in alt med, we certainly do not want things to function normally, do we?
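How quickly multiple testing 'confesses' is easy to demonstrate. Here is a minimal simulation sketch; the endpoint count, group sizes, and the normal-approximation z-test are my own illustrative assumptions, not taken from any trial discussed here. Both groups are drawn from the same distribution, so any 'significant' result is pure noise:

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value from a two-sample z-test (normal approximation,
    adequate for the group sizes used below)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
ENDPOINTS = 20   # a trial measuring 20 outcomes
TRIALS = 1000    # simulated null trials (the treatment does nothing)

false_positive_trials = 0
for _ in range(TRIALS):
    # both groups sampled from the same distribution: any 'effect' is noise
    significant = any(
        z_test_p([random.gauss(0, 1) for _ in range(50)],
                 [random.gauss(0, 1) for _ in range(50)]) < 0.05
        for _ in range(ENDPOINTS)
    )
    if significant:
        false_positive_trials += 1

rate = false_positive_trials / TRIALS
print(f"Trials with at least one 'significant' endpoint: {rate:.0%}")
```

With 20 independent endpoints, the chance of at least one false positive at p < 0.05 is roughly 1 − 0.95²⁰ ≈ 64% – which is why a pre-specified primary outcome matters.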

5 TRIAL DESIGNS THAT CANNOT GENERATE A NEGATIVE RESULT

All the above tricks are a bit fraudulent, of course. Unfortunately, fraud is frowned upon in most quarters. Therefore, a more legitimate means of misleading the public would be highly desirable for those aspiring alt med researchers who do not want to tarnish their records. No worries, guys, help is on the way!

The fool-proof trial design is obviously the often-mentioned 'A+B versus B' design. In such a study, patients are randomized to receive an alt med treatment (A) together with usual care (B), or usual care (B) alone. This looks rigorous, can be sold as a 'pragmatic' trial addressing a real-life problem, and has the enormous advantage of never failing to produce a positive result: A+B is always more than B alone, even if A is a pure placebo. Such trials are akin to going into a hamburger joint to compare the calories of a Big Mac alone with those of a Big Mac plus chips. We know the result before the research has started; in alt med, that's how it should be!
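Why this design cannot fail can be shown with a toy simulation. This is only a sketch under invented numbers: 'A' is assumed to be a pure placebo that contributes nothing but a small non-specific response, and every effect size is made up for illustration:

```python
import random
from statistics import mean

random.seed(42)

def improvement(receives_a):
    """Symptom change over the trial: natural course plus noise, plus a small
    non-specific (placebo) response for those who also receive 'A'."""
    natural_course = random.gauss(1.0, 1.0)  # most patients improve anyway
    placebo_response = 0.5 if receives_a else 0.0
    return natural_course + placebo_response

N = 1000
a_plus_b = [improvement(True) for _ in range(N)]   # inert 'A' plus usual care 'B'
b_only = [improvement(False) for _ in range(N)]    # usual care 'B' alone

print(f"A+B mean improvement: {mean(a_plus_b):.2f}")
print(f"B   mean improvement: {mean(b_only):.2f}")
```

The A+B group reliably comes out ahead even though 'A' has no specific effect at all; only a placebo (or sham) control arm could reveal that.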

I have been banging on about the ‘A+B versus B’ design often enough, but recently I came across a new study design used in alt med which is just as elegantly misleading. The trial in question has a promising title: Quality-of-life outcomes in patients with gynecologic cancer referred to integrative oncology treatment during chemotherapy. Here is the unabbreviated abstract:

OBJECTIVE:

Integrative oncology incorporates complementary medicine (CM) therapies in patients with cancer. We explored the impact of an integrative oncology therapeutic regimen on quality-of-life (QOL) outcomes in women with gynecological cancer undergoing chemotherapy.

PATIENTS AND METHODS:

A prospective preference study examined patients referred by oncology health care practitioners (HCPs) to an integrative physician (IP) consultation and CM treatments. QOL and chemotherapy-related toxicities were evaluated using the Edmonton Symptom Assessment Scale (ESAS) and Measure Yourself Concerns and Wellbeing (MYCAW) questionnaire, at baseline and at a 6-12-week follow-up assessment. Adherence to the integrative care (AIC) program was defined as ≥4 CM treatments, with ≤30 days between each session.

RESULTS:

Of 128 patients referred by their HCP, 102 underwent IP consultation and subsequent CM treatments. The main concerns expressed by patients were fatigue (79.8 %), gastrointestinal symptoms (64.6 %), pain and neuropathy (54.5 %), and emotional distress (45.5 %). Patients in both AIC (n = 68) and non-AIC (n = 28) groups shared similar demographic, treatment, and cancer-related characteristics. ESAS fatigue scores improved by a mean of 1.97 points in the AIC group on a scale of 0-10 and worsened by a mean of 0.27 points in the non-AIC group (p = 0.033). In the AIC group, MYCAW scores improved significantly (p < 0.0001) for each of the leading concerns as well as for well-being, a finding which was not apparent in the non-AIC group.

CONCLUSIONS:

An IP-guided CM treatment regimen provided to patients with gynecological cancer during chemotherapy may reduce cancer-related fatigue and improve other QOL outcomes.

A ‘prospective preference study’ – this is the design the world of alt med has been yearning for! Its principle is beautiful in its simplicity. One merely administers a treatment or treatment package to a group of patients; inevitably some patients take it, while others don’t. The reasons for not taking it could range from lack of perceived effectiveness to experience of side-effects. But never mind, the fact that some do not want your treatment provides you with two groups of patients: those who comply and those who do not comply. With a bit of skill, you can now make the non-compliers appear like a proper control group. Now you only need to compare the outcomes and BOB IS YOUR UNCLE!

Brilliant! Absolutely brilliant!

I cannot think of a more deceptive trial design than this one; it will make any treatment look good, even one that is a mere placebo. Alright, it is not randomized, and it does not even have a proper control group. But it sure looks rigorous and meaningful, this 'prospective preference study'!
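The selection bias at work here can be reproduced in a few lines. In this sketch (all numbers invented for illustration), the treatment does nothing whatsoever; adherence is merely correlated with how well a patient is doing anyway:

```python
import random
from statistics import mean

random.seed(0)

# Every patient receives the same inert treatment; only adherence differs.
adherent, non_adherent = [], []
for _ in range(1000):
    prognosis = random.gauss(0, 1)                 # latent health and motivation
    adheres = prognosis + random.gauss(0, 1) > 0   # fitter patients keep attending
    outcome = prognosis + random.gauss(0, 1)       # outcome depends on prognosis only
    (adherent if adheres else non_adherent).append(outcome)

print(f"adherers improved by:     {mean(adherent):.2f}")
print(f"non-adherers improved by: {mean(non_adherent):.2f}")
```

The 'adherers' show a markedly better outcome, not because the treatment works, but because the groups were never comparable in the first place.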

The authors of a recent paper inform us that Reiki is a Japanese system of energy healing that has been used for over 2,500 years. It involves the transfer of energy from the practitioner to the receiver, which promotes healing, and can be done by either contact or non-contact methods. Both the receiver and the practitioner may feel the energy in various forms (warmth, cold, tingling, vibration, pulsations and/or floating sensations). Reiki can also be self-administered if one is a Reiki practitioner. Reiki is mainly used to address stress, anxiety, and pain reduction while also promoting a sense of well-being and improving quality of life.

Such statements should make us wary: what is presented here as fact is nothing more than conjecture – and very, very implausible conjecture too. Anyone who writes stuff like this in the introduction of a scientific paper is, in my view, unlikely to be objective and could be well on the way to presenting some nasty piece of pseudo-science.

But I am, of course, pre-judging the issue; let’s have a quick look at the article itself.

The purpose of this study was to determine the effects of a 20-week structured self-Reiki program on stress reduction and relaxation in college students. Students were recruited from Stockton University and sessions were conducted in the privacy of their residence. Twenty students completed the entire study consisting of 20 weeks of self-Reiki done twice weekly. Each participant completed a Reiki Baseline Credibility Scale, a Reiki Expectancy Scale, and a Perceived Stress Scale (PSS) after acceptance into the study. The PSS was completed every four weeks once the interventions were initiated. A global assessment questionnaire was completed at the end of the study. Logs summarizing the outcome of each session were submitted at the end of the study.

All but three participants believed that Reiki is a credible technique for reducing stress levels, and all but two agreed that it would be effective in doing so. All participants had experienced stress within the month prior to completing the initial PSS. There was a significant reduction in stress levels from pre-study to post-study, and a correlation between self-rating of improvement and final PSS scores. With one exception, stress levels at 20 weeks did not return to pre-study levels.

The authors concluded that this study supports the hypothesis that the calming effect of Reiki may be achieved through the use of self-Reiki.

QED – my suspicions were fully confirmed. This study shows precisely nothing, and it certainly does not support any hypothesis regarding Reiki.

If we recruited 20 volunteers who were sufficiently gullible to believe that watching an ice-cube slowly melting in the kitchen sink, or anything else we can think of, has profound effects on their vital energy, or chi, or karma, or anything else, we would almost certainly generate similar results.

My conclusion is therefore very different from those of the original authors: THIS STUDY SUPPORTS THE HYPOTHESIS THAT GULLIBLE PEOPLE CAN BE EASILY MISLED ABOUT BOGUS THERAPIES WITH PSEUDO-SCIENTIFIC STUDIES BY IRRESPONSIBLE WOULD-BE SCIENTISTS.
