
I have often cautioned my readers about the ‘evidence’ supporting acupuncture (and other alternative therapies). Rightly so, I think. Here is yet another warning.

This systematic review assessed the clinical effectiveness of acupuncture in the treatment of postpartum depression (PPD). Nine trials involving 653 women were selected. A meta-analysis demonstrated that the acupuncture group had a significantly greater overall effective rate compared with the control group. Moreover, acupuncture significantly increased oestradiol levels compared with the control group. Regarding the HAMD and EPDS scores, no difference was found between the two groups. The Chinese authors concluded that acupuncture appears to be effective for postpartum depression with respect to certain outcomes. However, the evidence thus far is inconclusive. Further high-quality RCTs following standardised guidelines with a low risk of bias are needed to confirm the effectiveness of acupuncture for postpartum depression.

What a conclusion!

What a review!

What a journal!

What evidence!

Let’s start with the conclusion: if the authors feel that the evidence is ‘inconclusive’, why do they state that ‘acupuncture appears to be effective for postpartum depression’? To me, this simply does not make sense!

Such oddities are abundant in the review. The abstract fails to mention that all trials were from China (and published in Chinese, which means that anyone who cannot read Chinese is unable to check any of the reported findings) and that the majority were of very poor quality – two good reasons to discard the lot without further ado and to conclude that there is no reliable evidence at all.

The authors also tell us very little about the treatments used in the control groups. In the paper, they state that “the control group needed to have received a placebo or any type of herb, drug and psychological intervention”. But was acupuncture better than all or any of these treatments? I could not find sufficient data in the paper to answer this question.

Moreover, only three trials seem to have bothered to mention adverse effects. Thus the majority of the studies were in breach of research ethics. No mention is made of this in the discussion.

In the paper, the authors re-state that “this meta-analysis showed that the acupuncture group had a significantly greater overall effective rate compared with the control group. Moreover, acupuncture significantly increased oestradiol levels compared with the control group.” This is, I think, highly misleading (see above).

Finally, let’s have a quick look at the journal ‘Acupuncture in Medicine’ (AiM). Even though it is published by the BMJ group (the reason for this phenomenon can be found here: “AiM is owned by the British Medical Acupuncture Society and published by BMJ“; this means that all BMAS members automatically receive the journal, which is thus a resounding commercial success), it is little more than a cult newsletter. The editorial board is full of acupuncture enthusiasts, and the journal hardly ever publishes anything that is even remotely critical of the wondrous myths of acupuncture.

My conclusion considering all this is as follows: we ought to be very careful before accepting any ‘evidence’ that is currently being published about the benefits of acupuncture, even if it superficially looks ok. More often than not, it turns out to be profoundly misleading, utterly useless and potentially harmful pseudo-evidence.


Reference

Acupunct Med. 2018 Jun 15. pii: acupmed-2017-011530. doi: 10.1136/acupmed-2017-011530. [Epub ahead of print]

Effectiveness of acupuncture in postpartum depression: a systematic review and meta-analysis.

Li S, Zhong W, Peng W, Jiang G.

How often do we hear this sentence: “I know, because I have done my research!” I don’t doubt that most people who make this claim believe it to be true.

But is it?

What many mean by saying, “I know, because I have done my research”, is that they went on the internet and looked at a few websites. Others might have been more thorough and read books and perhaps even some original papers. But does that justify their claim, “I know, because I have done my research”?

The thing is, there is research and there is research.

The dictionary defines research as “The systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.” This definition is helpful because it mentions several issues which, I believe, are important.

Research should be:

  • systematic,
  • an investigation,
  • aimed at establishing facts,
  • aimed at reaching new conclusions.

To me, this indicates that none of the following can be truly called research:

  • looking at a few randomly chosen papers,
  • merely reading material published by others,
  • uncritically adopting the views of others,
  • repeating the conclusions of others.

Obviously, I am being very harsh and uncompromising here. Not many people could, according to these principles, truthfully claim to have done research in alternative medicine. Most people in this realm do not fulfil any of those criteria.

As I said, there is research and research – research that meets the above criteria, and the type of research most people mean when they claim: “I know, because I have done my research.”

Personally, I don’t mind that the term ‘research’ is used in more than one way:

  • there is research meeting the criteria of the strict definition
  • and there is a common usage of the word.

What I do mind, however, is when research in the loose, everyday sense is claimed to be as relevant and reliable as research meeting the strict definition. This would be a classic false equivalence, akin to putting experts on a par with pseudo-experts, to believing that facts are no different from fantasy, or to assuming that truth is akin to post-truth.

Sadly, in the realm of alternative medicine (and, alarmingly, in other areas as well), this is exactly what has been happening for quite some time. This might well be one reason why many consumers are so confused and often make wrong, sometimes dangerous therapeutic decisions. And this is why I think it is important to point out the difference between research and research.

On this blog, we constantly discuss the shortcomings of clinical trials of (and other research into) alternative medicine. Yet, there can be no question that research into conventional medicine is often unreliable as well.

What might be the main reasons for this lamentable fact?

A recent BMJ article discussed 5 prominent reasons:

START OF QUOTE

Firstly, much research fails to address questions that matter. For example, new drugs are tested against placebo rather than against usual treatments. Or the question may already have been answered, but the researchers haven’t undertaken a systematic review that would have told them the research was not needed. Or the research may use outcomes, perhaps surrogate measures, that are not useful.

Secondly, the methods of the studies may be inadequate. Many studies are too small, and more than half fail to deal adequately with bias. Studies are not replicated, and when people have tried to replicate studies they find that most do not have reproducible results.

Thirdly, research is not efficiently regulated and managed. Quality assurance systems fail to pick up the flaws in the research proposals. Or the bureaucracy involved in having research funded and approved may encourage researchers to conduct studies that are too small or too short term.

Fourthly, the research that is completed is not made fully accessible. Half of studies are never published at all, and there is a bias in what is published, meaning that treatments may seem to be more effective and safer than they actually are. Then not all outcome measures are reported, again with a bias towards those that are positive.

Fifthly, published reports of research are often biased and unusable. In trials, about a third of interventions are inadequately described, meaning they cannot be implemented. Half of study outcomes are not reported.

END OF QUOTE

Apparently, these 5 issues are the reason why 85% of biomedical research is being wasted.

That is in CONVENTIONAL medicine, of course.

What about alternative medicine?

There is no question in my mind that the percentage figure must be even higher here. But do the same reasons apply? Let’s go through them again:

  1. Much research fails to address questions that matter. That is certainly true for alternative medicine – just think of the plethora of utterly useless surveys that are being published.
  2. The methods of the studies may be inadequate. Also true, as we have seen hundreds of times on this blog. The most prevalent flaws include, in my experience, small sample sizes, lack of adequate controls (e.g. the A+B vs B design) and misleading conclusions.
  3. Research is not efficiently regulated and managed. True, but probably not a specific feature of alternative medicine research.
  4. Research that is completed is not made fully accessible. Most likely true but, due to lack of information and transparency, impossible to judge.
  5. Published reports of research are often biased and unusable. This is unquestionably a prominent feature of alternative medicine research.

All of this seems to indicate that the problems are very similar – similar but much more profound in the realm of alternative medicine, I’d say based on many years of experience (yes, what follows is opinion and not evidence because the latter is hardly available).

The thing is that, like almost any other job, research requires knowledge, skills, training, experience, integrity and impartiality to be done properly. It simply cannot be done well without these qualities. In alternative medicine, we do not have many individuals who possess all or even most of them. Instead, we have people who are often evangelical believers in alternative medicine, want to further their field by doing some research, and therefore acquire a thin veneer of scientific expertise.

In my 25 years of experience in this area, I have not often seen researchers who knew that research is for testing hypotheses and not for trying to prove one’s hunches to be correct. In my own team, those who were the most enthusiastic about a particular therapy (and were thus seen as experts in its clinical application), were often the lousiest researchers who had the most difficulties coping with the scientific approach.

For me, this continues to be THE problem in alternative medicine research. The investigators – and some of them are now sufficiently skilled to bluff us into believing they are serious scientists – essentially start on the wrong foot. Because they were never properly trained and educated, they fail to appreciate how research proceeds. They hardly know how to properly establish a hypothesis, and – most crucially – they don’t know that, once that is done, you ought to conduct investigation after investigation trying to show that your hypothesis is incorrect. Only once all reasonable attempts to disprove it have failed can the hypothesis be considered correct. These multiple attempts at disproof go entirely against the grain of an enthusiast who has plenty of emotional baggage and therefore cannot bring him/herself to honestly attempt to disprove his/her beloved hypothesis.

The plainly visible result of this situation is the fact that we have dozens of alternative medicine researchers who never publish a negative finding related to their pet therapy (some of them were admitted to what I call my HALL OF FAME on this blog, in case you want to verify this statement). And the lamentable consequence of all this is the fast-growing mountain of dangerously misleading (but often seemingly robust) articles about alternative treatments polluting Medline and other databases.

The Impact Factor (IF) of a journal is a measure reflecting the yearly average number of citations to recent articles published in that journal. It is frequently used as a measure of the importance of a journal within its field; journals with higher impact factors are often deemed to be more important than those with lower ones. The IF for any given year can be calculated as the number of citations, received in that year, of articles published in that journal during the two preceding years, divided by the total number of articles published in that journal during the two preceding years.
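The calculation just described is a simple ratio; here is a minimal sketch with hypothetical numbers (the figures are invented purely for illustration):

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """IF for year Y = citations received in Y to articles published in
    Y-1 and Y-2, divided by the number of articles published in Y-1 and Y-2."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 120 citations in 2017 to the 80 articles
# it published in 2015 and 2016 gives an IF of 1.5
if_2017 = impact_factor(120, 80)
print(if_2017)  # 1.5
```

Note that both the numerator and the denominator offer room for manipulation, which is one reason the measure is so widely criticised.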

A recent press-release celebrated the new IF of the journal ‘HOMEOPATHY’, which has featured on this blog before. I am sure that you all want to share in this joy:

START OF QUOTE

For the second year running there has been an increase in the number of times articles published in the Faculty of Homeopathy’s journal Homeopathy have been cited in articles in other peer-reviewed publications. The figure known as the Impact Factor (IF) has risen from 1.16 to 1.524, which represents a 52% increase in the number of citations.

An IF is used to determine the impact a particular journal has in a given field of research and is therefore widely used as a measure of quality. The latest IF assessment for Homeopathy covers citations during 2017 for articles published in the previous two years (2015 and 2016).

Dr Peter Fisher, Homeopathy’s editor-in-chief, said: “Naturally the editorial team is delighted by this news. This success is due to the quality and international nature of research and other content we publish. So I thank all those who have contributed such high quality papers, maintaining the journal’s position as the world’s foremost publication in the scholarly study of homeopathy. I would particularly like to thank our senior deputy editor, Dr Robert Mathie for all his hard work.”

First published in 1911 as the British Homoeopathic Journal, Homeopathy is the only homeopathic journal indexed by Medline, with over 100,000 full-text downloads per year. In January 2018, publishing responsibilities for the quarterly journal moved to Thieme, an award-winning medical and science publisher.

Greg White, Faculty chief executive, said: “Moving to a new publisher can be difficult, but the decision we took last year is certainly paying dividends. I would therefore like to thank everyone at Thieme for the part they are playing in the journal’s continued success.”

END OF QUOTE

While the champagne corks might be popping in homeopathic circles, I want to try and give some perspective to this celebration.

The IF has rightly been criticised so many times and for so many reasons that it is now generally not considered to be a valuable measure of anything. The main reason for this is that it can be (and is being) manipulated in numerous ways. But even if we accept the IF as a meaningful parameter, we must ask what an IF of 1.5 means and how it compares with the IFs of other medical journals.

Here are the 2016/2017 IFs of some general and specialised medical journals readers of this blog might know:

  • Annals Int Med: 17.135
  • BMJ: 20.785
  • Circulation: 19.309
  • Diabetes Care: 11.857
  • Gastroenterology: 18.392
  • Gut: 16.658
  • J Clin Oncol: 24.008
  • Lancet: 47.831
  • Nature Medicine: 29.886
  • Plos Medicine: 11.862
  • Trends Pharm Sci: 12.797

This selection seems to indicate that an IF of 1.5 is modest, to say the least. In turn, this means that the above press-release is perhaps just a little bit on the hypertrophic side.

But, of course, it’s all about homeopathy where, as we all know, LESS IS MORE!

One of the biggest dangers of SCAM, in my view, is the fact that SCAM practitioners all too often advise their patients to forego effective conventional medicine. This probably applies to most medicines, but it is best researched for immunisations. A recent article puts it clearly:

… negative attitudes towards vaccines reflect a broader and deeper set of beliefs about health and wellbeing… this alternative worldview is influenced by ontological confusions (e.g. regarding purity, natural energy), and knowledge based on personal lived experience and trusted peers, rather than the positivist epistemological framework. [This] view is supported by recent social-psychological research, including strong correlations of vaccine scepticism with adherence to complementary and alternative medicine, magical health beliefs, and conspiracy ideation. For certain well-educated and well-resourced individuals, opposition to vaccines represents an expression of personal intuition and agency, in achieving a positive and life-affirming approach to health and wellbeing. These core beliefs are not amenable to change – and especially resistant to communications from orthodox, authoritative sources.

The authors concluded by suggesting that a better long-term strategy is to join forces with other disciplines in order to address the root causes of vaccine scepticism. Vaccine scepticism is unlikely to thrive in a cultural context that trusts and values the scientific consensus.

If I understand them correctly, the authors believe it is necessary to change the societal attitude to science.

I am sure they are correct.

We live in a time when anyone’s opinion is deemed as valuable as the next person’s. Pseudo-experts who have their knowledge from a couple of google searches are being considered as trustworthy as the true experts who have the background, knowledge and experience to issue responsible advice. Science is viewed by many as just another way of knowing, or even as the new cult or religion that must be viewed with suspicion.

It is clear that these are deplorable developments. But how to stop them?

This is where it gets complex.

One is tempted to lay the blame at the door of our politicians. Why do we tolerate the fact that so many of them have not the slightest inkling about science?

But hold on, WE elected them!

Why?

Because large sections of the public are ignorant too.

So, one must start much earlier. We need better science education, and that has to begin in the first year of schooling! We need evening classes in critical thinking. We need adult science courses for politicians.

But this is not going to happen, because our politicians fail to see the importance of such measures (and, of course, they might feel that an uneducated public is easier to govern than an educated one).

How to break this vicious circle?

It is clear from these simple (and simplistic) reflections that a multifactorial approach is required. And it is clear that it ought to be a strategy that prevents standards, in the most general sense, from slipping ever lower. But how?

I wish I knew!!!

I have already posted challenges to homeopaths. For instance, in a previous post, I asked the ‘homeopaths of the world’ to answer a few questions satisfactorily. In return, I promised to no longer doubt their memory-of-water theory. If they cannot do this, I contended, they should admit that all their ‘sciency’ theories about the mode of action of highly diluted homeopathic remedies are really quite silly – more silly even than Hahnemann’s idea of a ‘spirit-like’ effect.

And then there is the challenge to correctly identify their own remedies. In return, they would even earn the neat sum of Euro 50 000.

So far, none of these challenges have been met. But one must not give up hope!!!

Meanwhile, I have decided to issue another one. Let me explain:

One argument that the ‘defenders of the homeopathic realm’ love and almost invariably use, when someone states that it is time to move on and banish homeopathy to the history books, is this one:

IF WE BANNED HOMEOPATHY FROM OUR CLINICAL ROUTINE, WE WOULD ALSO HAVE TO BAN MANY OF THE TREATMENTS USED IN CONVENTIONAL MEDICINE WHICH ARE EQUALLY POORLY SUPPORTED BY SOUND EVIDENCE FOR EFFICACY.

This looks like a good argument!

I am sure that politicians, journalists, consumers and even many healthcare professionals find it convincing.

We know that lots of conventional treatments are less well supported than many of us would hope or think.

But less well-supported than homeopathy?

Let’s see: Homeopathy has been around for ~200 years. Controlled clinical trials of homeopathy have been conducted since 1835. Today, we have about 500 controlled clinical trials of homeopathy. The totality of these data fails to convincingly demonstrate that homeopathy is more than a placebo.

Are there many other therapies that fulfil these criteria? Personally, I am not aware of any such therapy, and if I did know of one, I am fairly certain that I would advocate its elimination from our clinical routine.

But I am, of course, not an expert in all fields of healthcare.

Perhaps such treatments do exist!

I want to find out, and – as always – the burden of proof is with those who use this argument.

Which brings me to my challenge.

I HEREWITH CHALLENGE HOMEOPATHS AND THEIR FOLLOWERS TO NAME THERAPIES THAT ARE AS USELESS AS HOMEOPATHY!

To be clear, they ought to fulfil the following criteria:

  1. The treatment must be about 200 years old (plenty of time for a thorough evaluation).
  2. It should have been extensively tested in about 500 controlled clinical trials.
  3. The totality of this evidence should be negative.
  4. The treatment should be part of the clinical routine and have ardent proponents who insist it should be paid for by public funds.

I hope lots of homeopaths can name lots of such therapies.

Failing this, they should think twice before they use the above argument again.

 

Shiatsu is a popular alternative therapy that has so far attracted almost no research. Therefore, I was excited when I saw a new paper on the subject. Sadly, my excitement waned quickly when I started reading the abstract.

This single-blind, randomized controlled study aimed to evaluate the effects of Shiatsu on mood, cognition, and functional independence in patients undergoing physical activity. Alzheimer disease (AD) patients with depression were randomly assigned to the “active group” (Shiatsu + physical activity) or the “control group” (physical activity alone).

Shiatsu was performed by the same therapist once a week for ten months. Global cognitive functioning (Mini Mental State Examination – MMSE), depressive symptoms (Geriatric Depression Scale – GDS), and functional status (Activity of Daily Living – ADL, Instrumental ADL – IADL) were assessed before and after the intervention.

The researchers found a within-group improvement of MMSE, ADL, and GDS in the Shiatsu group. However, the analysis of differences before and after the interventions showed a statistically significant decrease of GDS score only in the Shiatsu group.

The authors concluded that the combination of Shiatsu and physical activity improved depression in AD patients compared to physical activity alone. The pathomechanism might involve neuroendocrine-mediated effects of Shiatsu on neural circuits implicated in mood and affect regulation.

The Journal Complementary Therapies in Medicine also published three ‘Highlights’ of this study:

  • We first evaluated the effect of Shiatsu in depressed patients with Alzheimer’s disease (AD).
  • Shiatsu significantly reduced depression in a sample of mild-to-moderate AD patients.
  • Neuroendocrine-mediated effect of Shiatsu may modulate mood and affect neural circuits.

Where to begin?

1. The study is called a ‘pilot’. As such, it should not draw conclusions about the effectiveness of Shiatsu.

2. The design of the study did not account for the placebo effect (the often-discussed ‘A+B vs B’ design); therefore, it is impossible to attribute the observed outcome to Shiatsu. The ‘highlight’ – Shiatsu significantly reduced depression in a sample of mild-to-moderate AD patients – therefore turns out to be a low-light.

3. As this was a study with a control group, within-group changes are irrelevant and do not even deserve a mention.

4. The last point about the mode of action is pure speculation, and not borne out by the data presented.

5. Accumulating so much nonsense in one research paper is, in my view, unethical.

Research into alternative medicine does not have a good reputation – studies like this one are not inclined to improve it.

Yesterday, it was announced that homeopaths can easily and quickly earn a sizable amount of money.

The announcement was made during the German sceptics conference ‘Skepkon‘ in Koeln. As I could not be present, I obtained the photo via Twitter.

So, if you are a homeopath or a fan of homeopathy, all you have to do – as the above slide says – is to reproducibly identify homeopathic remedies in high potency. The procedure for obtaining the money has to follow three pre-defined steps:

  1. Identification of three homeopathic preparations in high potency according to a prescribed protocol.
  2. Documentation of a method enabling a third party to identify the remedies.
  3. Verification of the experiment by repeating it.

Anyone interested must adhere to the full instructions published by the German sceptics GWUP:

1. Review of test protocol

Together with a representative of GWUP, the applicants review and agree on this protocol prior to the start of the test. Minor changes may be applied if justified, provided they are mutually agreed in advance and do not affect the validity of the test, especially the blinding and randomization of the samples. In any case, we want to avoid the results being compromised, or their credibility impeached, by modifications of the protocol while the test is already under way. After mutual confirmation, the test protocol is binding for the whole duration of the test and its evaluation.

2. Selection of drugs

The applicant proposes which three drugs should be used in the trial. This gives them the opportunity to select substances that they think they could distinguish particularly well as homeopathic remedies. The potency may be selected freely as well, whereby the following conditions must be observed:

– all drugs must be available as sugar globules of the same grade (“Globuli” in German);
– the same potency, namely D- or C-potency above D24 / C12, is used for all three drugs;
– all drugs can be procured from the same producer.

3. Procurement of samples

The samples will be purchased by GWUP and shipped from the vendor directly to the notary who will perform the randomization. GWUP will purchase sufficient numbers of packages to complete the series of 12 samples according to the randomization list. The procurement will ensure that the samples derive from different batches of production as follows.

3.1. Common remedies

Common remedies, i.e. remedies sold in high numbers, will be procured from randomly selected pharmacies from the biggest cities in Germany (Berlin, Hamburg, Munich, Cologne, Frankfurt, Stuttgart…). Each pharmacy supplies a bottle for each of the three selected remedies and ships it directly to the notary in charge of randomization. If the applicants need a sample of known content for calibration, then this will be procured from yet another pharmacy in another German city.

3.2. Special remedies

If due to low sales it is possible that the above procedure is not sufficient to have all samples from different batches, a randomly selected pharmacy will be appointed to produce all the samples from raw materials purchased from the producer. GWUP will procure the mother tinctures, the raw sugar pills, and bottles and packages, to be shipped directly to the respective pharmacy who then will do the potentization, label the bottles and send them to the notary. If there are extra samples of known content required for calibration, then an extra set of samples will be produced. One set of samples will be kept in a sealed package for future reference.

The applicant and GWUP mutually agree on which procedure is used before the start of procurement. If more than 10 grams of globules per sample are required for the procedure used for identification, the applicant has to indicate this in advance, and GWUP will take this into account.

4. Randomization / blinding

The randomization and blinding is done by a sworn-in public notary in Würzburg, Germany, who is selected by a random procedure. Würzburg is chosen because the first part of the task is to be evaluated at the University of Würzburg, for all participants based in Europe. For overseas applicants, the location will be mutually agreed on.

The notary receives a coding list showing how the three drugs A, B and C are to be distributed among the twelve samples. This list is compiled by the GWUP representative by throwing dice. The notary also determines which drug is assigned to which letter by throwing dice. Note that the drugs may not be present in the set in equal numbers.

The notary completely removes the original label from the bottle and replaces it with the number without opening the bottle. The randomization protocol is deposited in a sealed envelope with the notary public without a copy being made beforehand. The notary disposes of surplus packs. If special remedies are processed, one set of marked samples is sealed and forwarded to GWUP for later reference in a sealed package.

The coded bottles are sent from the notary to the applicant without individual packaging and documentation. The applicant confirms receipt of the samples.

5. Identification

The applicant identifies which of the 12 bottles contains which drug, using any method and procedure of his choice. There is no limit as to the method used for identification, and this well may be a procedure not currently recognized by modern science. However, GWUP at the start requires a short and rough outline of how the applicant wants to proceed, and GWUP reserves the right to reject applications whose sincerity for scientific work seems questionable.

The applicant is also required to specify a period of time within which they will be able to produce their results. This period may not exceed six months. If it expires without the applicant being able to show their results, the outcome will be considered negative. However, the candidate may apply for an extension in good time before the deadline, provided they can give a reasonable explanation and the delay is not caused by the respective identification process as such.

The applicant is explicitly advised to observe ethics standards, and to procure the consent of an appropriate ethics committee if their method involves testing on humans or animals.

6. Result Pt. 1

If reasonable, the applicant may present their findings as part of the PSI-Tests held annually by GWUP at the University of Würzburg. The applicant’s result will be compared to the coding protocol from the notary. The number of bottles in which the notary’s record corresponds to the applicant’s details is determined. The result includes a description of the method used, if possible with meaningful intermediate data such as measurement protocols or symptom lists of drug provings.

The first part of the test is considered a success if the content of no more than one bottle is identified incorrectly and a description of the procedure is produced.

7. Result Pt. 2 and 3: Replication and Verification

Replication of the test is to ensure that a successful first result was not caused by chance alone. In addition, the procedure explained by the applicant is to be verified in a way depending on its nature. The objective is to verify that the identification was indeed performed by using this very method, and that the description is complete and suitable for a third party to achieve the same outcome.

For replication, steps 2 to 5 will be repeated. Applicants may choose to use the same drugs as before; in this case, they will be procured from another manufacturer or prepared by a different pharmacy with raw material from a different supplier. Alternatively, the candidate may indicate three new drugs, which can then be obtained from the original vendor.

For a successful replication, the same precision as before is required, that is, no more than one of the 12 bottles may be identified incorrectly.
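To put this pass criterion into perspective, a quick calculation shows how unlikely it would be to meet it by guesswork alone. This is only a sketch: the protocol does not spell out the guessing model, so I assume each of the 12 bottles is judged independently, with a 1-in-3 chance of naming the right drug when three drugs are in play.

```python
from math import comb

def chance_of_passing(n: int = 12, max_wrong: int = 1, p: float = 1 / 3) -> float:
    """Probability of meeting the pass criterion (at most `max_wrong`
    of `n` bottles misidentified) by independent guessing, where `p`
    is the chance of guessing a single bottle correctly (assumed 1/3
    for a choice among three drugs)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n - max_wrong, n + 1))

print(f"{chance_of_passing():.2e}")  # 4.70e-05
```

In other words, under these assumptions a pure guesser would pass roughly once in 21,000 attempts, so a genuine pass would be hard to dismiss as luck.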

The evaluation and presentation of these results may take place at any location; press or other media may be invited to the event, as agreed between the applicant and GWUP.


Is anyone going to take up this challenge?

Personally, I am not holding my breath.

Many years ago (at a time when homeopaths still saw me as one of their own), I had plans to do a similar but slightly less rigorous test as part of a doctoral thesis for one of my students.

Our investigation was straightforward: we approached several of the world’s leading/most famous homeopaths and asked them to participate. Their task was to tell us which homeopathic remedy they thought was easiest to differentiate from a placebo. Subsequently, we would post them several vials – I think the number was 10 – and ask them to tell us which contained the remedy of their choice (in a C30 potency) and which the placebo (the distribution was 50:50, and the authenticity of each vial was to be confirmed by a notary). The experimental method for identifying which was which was left entirely to each participating homeopath; they were even allowed to use multiple, different tests. Based on the results, we would then calculate whether their identification skills were better than pure chance.
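For illustration, here is how such a ‘better than pure chance’ calculation might have looked. This is a minimal sketch under the assumption that each of the 10 vials is judged independently, with a 50:50 chance of a correct guess; the exact analysis planned for the thesis may well have differed.

```python
from math import comb

def p_at_least(correct: int, n: int = 10, p: float = 0.5) -> float:
    """Probability of identifying at least `correct` of `n` vials by
    pure guessing, each vial judged independently with success
    probability `p` (here 0.5: remedy vs placebo)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(correct, n + 1))

# Chance of a perfect 10/10 score by guessing alone:
print(round(p_at_least(10), 5))  # 0.00098
# Chance of getting 9 or more correct:
print(round(p_at_least(9), 5))   # 0.01074
```

So even a single homeopath scoring 9 or 10 out of 10 would have been a result far beyond what guessing could plausibly explain, which is precisely why the protocol was so unattractive to the invited participants.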

Sadly, the trial never happened. Initially, we had a positive response from some homeopaths who were interested in participating. However, when they then saw the exact protocol, they all pulled out.

But times may have changed; perhaps today there are some homeopaths out there who actually believe in homeopathy?

Perhaps our strategy to work only with ‘the best’ homeopaths was wrong?

Perhaps there are some homeopaths who are less risk-averse?

I sure hope that lots of enthusiastic homeopaths will take up this challenge.

GOOD LUCK! And watch this space.

I have often criticised papers published by chiropractors.

Not today!

This article is excellent and I therefore quote extensively from it.

The objective of this systematic review was to investigate whether there is any evidence that spinal manipulation/chiropractic care can be used in primary prevention (PP) and/or early secondary prevention of diseases other than musculoskeletal conditions. The authors conducted extensive literature searches to locate all studies in this area. Of the 13,099 titles scrutinized, 13 articles were included (8 clinical studies and 5 population studies). They dealt with various disorders of public health importance such as diastolic blood pressure, blood test immunological markers, and mortality. Only two clinical studies could be used for data synthesis. None showed any effect of spinal manipulation/chiropractic treatment.

The authors concluded that they found no evidence in the literature of an effect of chiropractic treatment in the scope of PP or early secondary prevention for disease in general. Chiropractors have to assume their role as evidence-based clinicians and the leaders of the profession must accept that it is harmful to the profession to imply a public health importance in relation to the prevention of such diseases through manipulative therapy/chiropractic treatment.

In addition to this courageous conclusion (the paper is authored by a chiropractor and published in a chiro journal), the authors make the following comments:

Beliefs that a spinal subluxation can cause a multitude of diseases and that its removal can prevent them is clearly at odds with present-day concepts, as the aetiology of most diseases today is considered to be multi-causal, rarely mono-causal. It therefore seems naïve when chiropractors attempt to control the combined effects of environmental, social, biological including genetic as well as noxious lifestyle factors through the simple treatment of the spine. In addition, there is presently no obvious emphasis on the spine and the peripheral nervous system as the governing organ in relation to most pathologies of the human body.

The ‘subluxation model’ can be summarized through several concepts, each with its obvious weakness. According to the first three, (i) disturbances in the spine (frequently called ‘subluxations’) exist and (ii) these can cause a multitude of diseases. (iii) These subluxations can be detected in a chiropractic examination, even before symptoms arise. However, to date, the subluxation has been elusive, as there is no proof for its existence. Statements that there is a causal link between subluxations and various diseases should therefore not be made. The fourth and fifth concepts deal with the treatment, namely (iv) that chiropractic adjustments can remove subluxations, (v) resulting in improved health status. However, even if there were an improvement of a condition following treatment, this does not mean that the underlying theory is correct. In other words, any improvement may or may not be caused by the treatment, and even if so, it does not automatically validate the underlying theory that subluxations cause disease…

Although at first look there appears to be a literature on this subject, it is apparent that most authors lack knowledge in research methodology. The two methodologically acceptable studies in our review were found in PubMed, whereas most of the others were identified in the non-indexed literature. We therefore conclude that it may not be worthwhile in the future to search extensively the non-indexed chiropractic literature for high quality research articles.

One misunderstanding requires some explanations; case reports are usually not considered suitable evidence for effect of treatment, even if the cases relate to patients who ‘recovered’ with treatment. The reasons for this are multiple, such as:

  • Individual cases, usually picked out on the basis of their uniqueness, do not reflect general patterns.
  • Individual successful cases, even if correctly interpreted must be validated in a ‘proper’ research design, which usually means that presumed effect must be tested in a properly powered and designed randomized controlled trial.
  • One or two successful cases may reflect a true but very unusual recovery, and such cases are more likely to be written up and published as clinicians do not take the time to marvel over and spend time on writing and publishing all the other unsuccessful treatment attempts.
  • Recovery may be co-incidental, caused by some other aspect in the patient’s life or it may simply reflect the natural course of the disease, such as natural remission or the regression towards the mean, which in human physiology means that low values tend to increase and high values decrease over time.
  • Cases are usually captured at the end because the results indicate success, meaning that the clinical file has to be reconstructed, because tests were used for clinical reasons and not for research reasons (i.e. recorded by the treating clinician during an ordinary clinical session) and therefore usually not objective and reproducible.
  • The presumed results of the treatment of the disease is communicated from the patient to the treating clinician and not to a third, neutral person and obviously this link is not blinded, so the clinician is both biased in favour of his own treatment and aware of which treatment was given, and so is the patient, which may result in overly positive reporting. The patient wants to please the sympathetic clinician and the clinician is proud of his own work and overestimates the results.
  • The long-term effects are usually not known.
  • Further, and most importantly, there is no control group, so it is impossible to compare the results to an untreated or otherwise treated person or group of persons.

Nevertheless, it is common to see case reports in some research journals and in communities with readers/practitioners without a firmly established research culture it is often considered a good thing to ‘start’ by publishing case reports.

Case reports are useful for other reasons, such as indicating the need for further clinical studies in a specific patient population, describing a clinical presentation or treatment approach, explaining particular procedures, discussing cases, and referring to the evidence behind a clinical process, but they should not be used to make people believe that there is an effect of treatment…

For groups of chiropractors, prevention of disease through chiropractic treatment makes perfect sense, yet the credible literature is void of evidence thereof. Still, the majority of chiropractors practising this way probably believe that there is plenty of evidence in the literature. Clearly, if the chiropractic profession wishes to maintain credibility, it is time seriously to face this issue. Presently, there seems to be no reason why political associations and educational institutions should recommend spinal care to prevent disease in general, unless relevant and acceptable research evidence can be produced to support such activities. In order to be allowed to continue this practice, proper and relevant research is therefore needed…

All chiropractors who want to update their knowledge or to have an evidence-based practice will search new information on the internet. If they are not trained to read the scientific literature, they might trust any article. In this situation, it is logical that the ‘believers’ will choose ‘attractive’ articles and trust the results, without checking the quality of the studies. It is therefore important to educate chiropractors to become relatively competent consumers of research, so they will not assume that every published article is a verity in itself…

END OF QUOTES

YES, YES YES!!!

I am so glad that some experts within the chiropractic community are now publishing statements like these.

This was long overdue.

How was it possible that, for so long, so many chiropractors failed to become competent consumers of research?

Do they and their professional organisations not know that this is deeply unethical?

Actually, I fear they do and did so for a long time.

Why then did they not do anything about it ages ago?

I fear, the answer is as easy as it is disappointing:

If chiropractors were systematically trained to become research-competent, the chiropractic profession would cease to exist; they would become a limited version of physiotherapists. There is simply not enough positive evidence to justify chiropractic. In other words, if chiropractic wants to survive, it has little choice other than to remain ignorant of the current best evidence.

For many months now, I have noticed a proliferation of so-called pilot studies of alternative therapies. A pilot study (also called a feasibility study) is defined as a small-scale preliminary study conducted in order to evaluate feasibility, time, cost, and adverse events, and to improve upon the study design prior to the performance of a full-scale research project. Here I submit that most pilot studies of alternative therapies are, in fact, bogus.

To qualify as a pilot study, an investigation needs to have an aim that is in line with the above-mentioned definition. Another obvious hallmark must be that its conclusions are in line with this aim. We do not need to conduct much research to find that even these two elementary preconditions are not fulfilled by the plethora of pilot studies that are currently being published, and that proper pilot studies of alternative medicine are very rare.

Three recent examples of dodgy pilot studies will have to suffice (but rest assured, there are many, many more).

Foot Reflexotherapy Induces Analgesia in Elderly Individuals with Low Back Pain: A Randomized, Double-Blind, Controlled Pilot Study

The aim of this study was to evaluate the effects of foot reflexotherapy on pain and postural balance in elderly individuals with low back pain. And the conclusions drawn by its authors were that this study demonstrated that foot reflexotherapy induced analgesia but did not affect postural balance in elderly individuals with low back pain.

Effect of Tai Chi Training on Dual-Tasking Performance That Involves Stepping Down among Stroke Survivors: A Pilot Study.

The aim of this study was to investigate the effect of Tai Chi training on dual-tasking performance that involved stepping down and compared it with that of conventional exercise among stroke survivors. And the conclusions read: These results suggest a beneficial effect of Tai Chi training on cognition among stroke survivors without compromising physical task performance in dual-tasking.

The Efficacy of Acupuncture on Anthropometric Measures and the Biochemical Markers for Metabolic Syndrome: A Randomized Controlled Pilot Study.

The aim of this study was to evaluate the efficacy [of acupuncture] over 12 weeks of treatment and 12 weeks of follow-up. And the conclusion: Acupuncture decreases WC, HC, HbA1c, TG, and TC values and blood pressure in MetS.

It is almost painfully obvious that these studies are not ‘pilot’ studies as defined above.

So, what are they, and why are they so popular in alternative medicine?

The way I see it, they are the result of amateur researchers conducting pseudo-research for publication in lamentable journals in an attempt to promote their pet therapies (I have yet to find such a study that reports a negative finding). The sequence of events that leads to the publication of such pilot studies is usually as follows:

  • An enthusiast or a team of enthusiasts of alternative medicine decide that they will do some research.
  • They have no or very little know-how in conducting a clinical trial.
  • They nevertheless feel that such a study would be nice as it promotes both their careers and their pet therapy.
  • They design some sort of a plan and start recruiting patients for their trial.
  • At this point they notice that things are not as easy as they had imagined.
  • They have too few funds and too little time to do anything properly.
  • This does not, however, stop them from continuing.
  • The trial progresses slowly, and patient numbers remain low.
  • After a while the would-be researchers get fed up and decide that their study has enough patients to stop the trial.
  • They improvise some statistical analyses with their results.
  • They write up the results the best they can.
  • They submit it for publication in a 3rd class journal and, in order to get it accepted, they call it a ‘pilot study’.
  • They feel that this title is an excuse for even the most obvious flaws in their work.
  • The journal’s reviewers and editors are all proponents of alternative medicine who welcome any study that seems to confirm their belief.
  • Thus the study does get published despite the fact that it is worthless.

Some might say ‘so what? no harm done!’

But I beg to differ: these studies pollute the medical literature and misguide people who are unable or unwilling to look behind the smoke-screen. Enthusiasts of alternative medicine popularise these bogus trials, while hiding the fact that their results are unreliable. Journalists report about them, and many consumers assume they are being told the truth – after all it was published in a ‘peer-reviewed’ medical journal!

My conclusions are as simple as they are severe:

  • Such pilot studies are the result of gross incompetence on many levels (researchers, funders, ethics committees, reviewers, journal editors).
  • They can cause considerable harm, because they mislead many people.
  • In more than one way, they represent a violation of medical ethics.
  • They could be considered scientific misconduct.
  • We should think of stopping this increasingly common form of scientific misconduct.