
On this blog, we constantly discuss the shortcomings of clinical trials of (and other research into) alternative medicine. Yet, there can be no question that research into conventional medicine is often unreliable as well.

What might be the main reasons for this lamentable fact?

A recent BMJ article discussed 5 prominent reasons:

Firstly, much research fails to address questions that matter. For example, new drugs are tested against placebo rather than against usual treatments. Or the question may already have been answered, but the researchers haven’t undertaken a systematic review that would have told them the research was not needed. Or the research may use outcomes, perhaps surrogate measures, that are not useful.

Secondly, the methods of the studies may be inadequate. Many studies are too small, and more than half fail to deal adequately with bias. Studies are not replicated, and when people have tried to replicate studies they find that most do not have reproducible results.

Thirdly, research is not efficiently regulated and managed. Quality assurance systems fail to pick up the flaws in the research proposals. Or the bureaucracy involved in having research funded and approved may encourage researchers to conduct studies that are too small or too short term.

Fourthly, the research that is completed is not made fully accessible. Half of studies are never published at all, and there is a bias in what is published, meaning that treatments may seem to be more effective and safer than they actually are. Then not all outcome measures are reported, again with a bias towards those that are positive.

Fifthly, published reports of research are often biased and unusable. In trials, about a third of interventions are inadequately described, meaning they cannot be implemented. Half of study outcomes are not reported.

END OF QUOTE

Apparently, these 5 issues are the reason why 85% of biomedical research is being wasted.

That is in CONVENTIONAL medicine, of course.

What about alternative medicine?

There is no question in my mind that the percentage figure must be even higher here. But do the same reasons apply? Let’s go through them again:

  1. Much research fails to address questions that matter. That is certainly true for alternative medicine – just think of the plethora of utterly useless surveys that are being published.
  2. The methods of the studies may be inadequate. Also true, as we have seen hundreds of times on this blog. Some of the most prevalent flaws include, in my experience, small sample sizes, lack of adequate controls (e.g. A+B vs B design) and misleading conclusions.
  3. Research is not efficiently regulated and managed. True, but probably not a specific feature of alternative medicine research.
  4. Research that is completed is not made fully accessible. Most likely true but, due to lack of information and transparency, impossible to judge.
  5. Published reports of research are often biased and unusable. This is unquestionably a prominent feature of alternative medicine research.

All of this seems to indicate that the problems are very similar – similar but much more profound in the realm of alternative medicine, I’d say based on many years of experience (yes, what follows is opinion and not evidence because the latter is hardly available).

The thing is that, like almost any other job, research needs knowledge, skills, training, experience, integrity and impartiality to do it properly. It simply cannot be done well without such qualities. In alternative medicine, we do not have many individuals who have all or even most of these qualities. Instead, we have people who often are evangelical believers in alternative medicine, want to further their field by doing some research and therefore acquire a thin veneer of scientific expertise.

In my 25 years of experience in this area, I have not often seen researchers who knew that research is for testing hypotheses and not for trying to prove one’s hunches to be correct. In my own team, those who were the most enthusiastic about a particular therapy (and were thus seen as experts in its clinical application) were often the lousiest researchers who had the most difficulties coping with the scientific approach.

For me, this continues to be THE problem in alternative medicine research. The investigators – and some of them are now sufficiently skilled to bluff us into believing they are serious scientists – essentially start on the wrong foot. Because they never were properly trained and educated, they fail to appreciate how research proceeds. They hardly know how to properly establish a hypothesis, and – most crucially – they don’t know that, once that is done, you ought to conduct investigation after investigation to show that your hypothesis is incorrect. Only once all reasonable attempts to disprove it have failed can your hypothesis be considered correct. These multiple attempts at disproving go entirely against the grain of an enthusiast who has plenty of emotional baggage and therefore cannot bring him/herself to honestly attempt to disprove his/her beloved hypothesis.

The plainly visible result of this situation is the fact that we have dozens of alternative medicine researchers who never publish a negative finding related to their pet therapy (some of them were admitted to what I call my HALL OF FAME on this blog, in case you want to verify this statement). And the lamentable consequence of all this is the fast-growing mountain of dangerously misleading (but often seemingly robust) articles about alternative treatments polluting Medline and other databases.

Is homeopathy effective for specific conditions? The FACULTY OF HOMEOPATHY (FoH, the professional organisation of UK doctor homeopaths) say YES. In support of this bold statement, they cite a total of 35 systematic reviews of homeopathy with a focus on specific clinical areas. “Nine of these 35 reviews presented conclusions that were positive for homeopathy”, they claim. Here they are:

Allergies and upper respiratory tract infections 8,9
Childhood diarrhoea 10
Post-operative ileus 11
Rheumatic diseases 12
Seasonal allergic rhinitis (hay fever) 13–15
Vertigo 16

And here are the references (I took the liberty of adding my comments in bold):

8. Bornhöft G, Wolf U, Ammon K, et al. Effectiveness, safety and cost-effectiveness of homeopathy in general practice – summarized health technology assessment. Forschende Komplementärmedizin, 2006; 13 Suppl 2: 19–29.

This is the infamous ‘Swiss report‘ which, nowadays, only homeopaths take seriously.

9. Bellavite P, Ortolani R, Pontarollo F, et al. Immunology and homeopathy. 4. Clinical studies – Part 1. Evidence-based Complementary and Alternative Medicine: eCAM, 2006; 3: 293–301.

This is not a systematic review as it lacks any critical assessment of the primary data and includes observational studies and even case series.

10. Jacobs J, Jonas WB, Jimenez-Perez M, Crothers D. Homeopathy for childhood diarrhea: combined results and metaanalysis from three randomized, controlled clinical trials. Pediatric Infectious Disease Journal, 2003; 22: 229–234.

This is a meta-analysis by Jennifer Jacobs (who recently featured on this blog) of 3 studies by Jennifer Jacobs; hardly convincing I’d say.

11. Barnes J, Resch K-L, Ernst E. Homeopathy for postoperative ileus? A meta-analysis. Journal of Clinical Gastroenterology, 1997; 25: 628–633.

This is my own paper! It concluded that “several caveats preclude a definitive judgment.”

12. Jonas WB, Linde K, Ramirez G. Homeopathy and rheumatic disease. Rheumatic Disease Clinics of North America, 2000; 26: 117–123.

This is not a systematic review; here is the (unabridged) abstract:

Despite a growing interest in uncovering the basic mechanisms of arthritis, medical treatment remains symptomatic. Current medical treatments do not consistently halt the long-term progression of these diseases, and surgery may still be needed to restore mechanical function in large joints. Patients with rheumatic syndromes often seek alternative therapies, with homeopathy being one of the most frequent. Homeopathy is one of the most frequently used complementary therapies worldwide.

Proper systematic reviews fail to show that homeopathy is an effective treatment for rheumatic conditions (see for instance here and here).

13. Wiesenauer M, Lüdtke R. A meta-analysis of the homeopathic treatment of pollinosis with Galphimia glauca. Forschende Komplementärmedizin und Klassische Naturheilkunde, 1996; 3: 230–236.

This is a meta-analysis by Wiesenauer of trials conducted by Wiesenauer.

My own, more recent analysis of these data arrived at a considerably less favourable conclusion: “… three of the four currently available placebo-controlled RCTs of homeopathic Galphimia glauca (GG) suggest this therapy is an effective symptomatic treatment for hay fever. There are, however, important caveats. Most essentially, independent replication would be required before GG can be considered for the routine treatment of hay fever.” (Focus on Alternative and Complementary Therapies September 2011 16(3))

14. Taylor MA, Reilly D, Llewellyn-Jones RH, et al. Randomised controlled trials of homoeopathy versus placebo in perennial allergic rhinitis with overview of four trial series. British Medical Journal, 2000; 321: 471–476.

This is a meta-analysis by David Reilly of 4 RCTs which were all conducted by David Reilly. This attracted heavy criticism; see here and here, for instance.

15. Bellavite P, Ortolani R, Pontarollo F, et al. Immunology and homeopathy. 4. Clinical studies – Part 2. Evidence-based Complementary and Alternative Medicine: eCAM, 2006; 3: 397–409.

This is not a systematic review as it lacks any critical assessment of the primary data and includes observational studies and even case series.

16. Schneider B, Klein P, Weiser M. Treatment of vertigo with a homeopathic complex remedy compared with usual treatments: a meta-analysis of clinical trials. Arzneimittelforschung, 2005; 55: 23–29.

This is a meta-analysis of 2 (!) RCTs and 2 observational studies of ‘Vertigoheel’, a preparation which is not a homeopathic but a homotoxicologic remedy (it does not follow the ‘like cures like’ assumption of homeopathy). Moreover, this product contains pharmacologically active substances (and nobody doubts that active substances can have effects).

________________________________________________________________________________

So, positive evidence from 9 systematic reviews in 6 specific clinical areas?

I’ll let you answer this question.

Shiatsu is an alternative therapy that is popular, but has so far attracted almost no research. Therefore, I was excited when I saw a new paper on the subject. Sadly, my excitement waned quickly when I started reading the abstract.

This single-blind randomized controlled study aimed to evaluate the effects of shiatsu on mood, cognition, and functional independence in patients undergoing physical activity. Alzheimer disease (AD) patients with depression were randomly assigned to the “active group” (Shiatsu + physical activity) or the “control group” (physical activity alone).

Shiatsu was performed by the same therapist once a week for ten months. Global cognitive functioning (Mini Mental State Examination – MMSE), depressive symptoms (Geriatric Depression Scale – GDS), and functional status (Activity of Daily Living – ADL, Instrumental ADL – IADL) were assessed before and after the intervention.

The researchers found a within-group improvement of MMSE, ADL, and GDS in the Shiatsu group. However, the analysis of differences before and after the interventions showed a statistically significant decrease of GDS score only in the Shiatsu group.

The authors concluded that the combination of Shiatsu and physical activity improved depression in AD patients compared to physical activity alone. The pathomechanism might involve neuroendocrine-mediated effects of Shiatsu on neural circuits implicated in mood and affect regulation.

The journal Complementary Therapies in Medicine also published three ‘Highlights’ of this study:

  • We first evaluated the effect of Shiatsu in depressed patients with Alzheimer’s disease (AD).
  • Shiatsu significantly reduced depression in a sample of mild-to-moderate AD patients.
  • Neuroendocrine-mediated effect of Shiatsu may modulate mood and affect neural circuits.

Where to begin?

1. The study is called a ‘pilot’. As such it should not draw conclusions about the effectiveness of Shiatsu.

2. The design of the study was such that there was no accounting for the placebo effect (the often-discussed ‘A+B vs B’ design); therefore, it is impossible to attribute the observed outcome to Shiatsu – see the simulation sketched after this list. The ‘highlight’ – Shiatsu significantly reduced depression in a sample of mild-to-moderate AD patients – therefore turns out to be a low-light.

3. As this was a study with a control group, within-group changes are irrelevant and do not even deserve a mention.

4. The last point about the mode of action is pure speculation, and not borne out by the data presented.

5. Accumulating so much nonsense in one research paper is, in my view, unethical.
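Since the ‘A+B vs B’ problem comes up so often on this blog, here is a minimal simulation sketch of it (the numbers are invented and have nothing to do with the Shiatsu study itself); it shows that an entirely inert add-on treatment will nevertheless look helpful in such a design:

```python
# Minimal sketch (invented numbers, not the study's data): why an 'A+B vs B'
# design cannot produce a negative result for an inert add-on treatment A.
# Assumption: both groups improve for nonspecific reasons (natural course,
# regression to the mean), and the add-on arm gets a small extra nonspecific
# boost from attention and expectation - but zero specific effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30                                   # hypothetical patients per group
pre_B = rng.normal(60, 10, n)            # symptom scores at baseline
pre_AB = rng.normal(60, 10, n)
nonspecific_change = 15                  # improvement everyone gets anyway
extra_attention = 5                      # nonspecific bonus in the A+B arm
specific_effect_of_A = 0                 # the add-on itself does nothing

post_B = pre_B - nonspecific_change + rng.normal(0, 10, n)
post_AB = (pre_AB - nonspecific_change - extra_attention
           - specific_effect_of_A + rng.normal(0, 10, n))

# Within-group pre/post comparisons look 'significant' in BOTH groups ...
print("B alone, pre vs post p =", stats.ttest_rel(pre_B, post_B).pvalue)
print("A+B,     pre vs post p =", stats.ttest_rel(pre_AB, post_AB).pvalue)
# ... and the A+B arm tends to show more improvement, although A is inert.
print("extra improvement in A+B arm:",
      round((pre_AB - post_AB).mean() - (pre_B - post_B).mean(), 1))
```

In other words, a trial of this type cannot really fail; at worst it generates a neutral result for the add-on, which is precisely why such designs are so popular with enthusiasts.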

Research into alternative medicine does not have a good reputation – studies like this one are not inclined to improve it.

Personally, I find our good friend Dana Ullman truly priceless. There are several reasons for that; one is that he is often so exemplarily wrong that it helps me to explain fundamental things more clearly. With a bit of luck, this might enable me to better inform people who might be thinking a bit like Dana. In this sense, our good friend Dana has significant educational value.

Recently, he made this comment:

According to present and former editors of THE LANCET and the NEW ENGLAND JOURNAL OF MEDICINE, “evidence based medicine” can no longer be trusted. There is obviously no irony in Ernst and his ilk “banking” on “evidence” that has no firm footing except their personal belief systems: https://medium.com/@drjasonfung/the-corruption-of-evidence-based-medicine-killing-for-profit-41f2812b8704

Ernst is a fundamentalist whose God is reductionistic science, a 20th century model that has little real meaning today…but this won’t stop the new attacks on me personally…

END OF COMMENT

Where to begin?

Let’s start with some definitions.

  • Evidence is the body of facts that leads to a given conclusion. Because the outcomes of treatments such as homeopathy depend on a multitude of factors, the evidence for or against their effectiveness is best based not on experience but on clinical trials and systematic reviews of clinical trials (this is copied from my book).
  • EBM is the integration of best research evidence with clinical expertise and patient values. It thus rests on three pillars: external evidence, ideally from systematic reviews, the clinician’s experience, and the patient’s preferences (and this is from another book).

Few people would argue that EBM, as it is applied currently, is without fault. Certainly I would not suggest that; I even used to give lectures about the limitations of EBM, and many experts (who are much wiser than I) have written about the many problems with EBM. It is important to note that such criticism demonstrates the strength of modern medicine and not its weakness, as Dana seems to think: it is a sign of a healthy debate aimed at generating progress. And it is noteworthy that internal criticism of this nature is largely absent in alternative medicine.

The criticism of EBM is often focussed on the unreliability of what I called above the ‘best research evidence’. Let me therefore repeat what I wrote about it on this blog in 2012:

… The multifactorial nature of any clinical response requires controlling for all the factors that might determine the outcome other than the treatment per se. Ideally, we would need to create a situation or an experiment where two groups of patients are exposed to the full range of factors, and the only difference is that one group does receive the treatment, while the other one does not. And this is precisely the model of a controlled clinical trial.

Such studies are designed to minimise all possible sources of bias and confounding. By definition, they have a control group which means that we can, at the end of the treatment period, compare the effects of the treatment in question with those of another intervention, a placebo or no treatment at all.

Many different variations of the controlled trial exist so that the exact design can be adapted to the requirements of the particular treatment and the specific research question at hand. The over-riding principle is, however, always the same: we want to make sure that we can reliably determine whether or not the treatment was the cause of the clinical outcome.

Causality is the key in all of this; and here lies the crucial difference between clinical experience and scientific evidence. What clinicians witness in their routine practice can have a myriad of causes; what scientists observe in a well-designed efficacy trial is, in all likelihood, caused by the treatment. The latter is evidence, while the former is not.

Don’t get me wrong; clinical trials are not perfect. They can have many flaws and have rightly been criticised for a myriad of inherent limitations. But it is important to realise that, despite all their shortcomings, they are far superior to any other method for determining the efficacy of medical interventions.

There are lots of reasons why a trial can generate an incorrect, i.e. a false positive or a false negative result. We therefore should avoid relying on the findings of a single study. Independent replications are usually required before we can be reasonably sure.

Unfortunately, the findings of these replications do not always confirm the results of the previous study. Whenever we are faced with conflicting results, it is tempting to cherry-pick those studies which seem to confirm our prior belief – tempting but very wrong. In order to arrive at the most reliable conclusion about the efficacy of any treatment, we need to consider the totality of the reliable evidence. This goal is best achieved by conducting a systematic review.

In a systematic review, we assess the quality and quantity of the available evidence, try to synthesise the findings and arrive at an overall verdict about the efficacy of the treatment in question. Technically speaking, this process minimises selection and random biases. Systematic reviews and meta-analyses [these are systematic reviews that pool the data of individual studies] therefore constitute, according to a consensus of most experts, the best available evidence for or against the efficacy of any treatment.

END OF QUOTE
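As the quoted text mentions that meta-analyses pool the data of individual studies, here is a minimal sketch of that pooling step (fixed-effect, inverse-variance weighting, with invented numbers; a real systematic review obviously also involves a comprehensive search, quality assessment, heterogeneity checks, and much more):

```python
# Minimal sketch of the pooling step of a meta-analysis
# (fixed-effect, inverse-variance weighting; the trial results are invented).
import numpy as np

effects = np.array([-0.40, -0.10, -0.25])   # mean differences from three trials
se = np.array([0.20, 0.15, 0.30])           # their standard errors

weights = 1 / se**2                         # more precise trials count for more
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"pooled effect: {pooled:.2f}")
print(f"95% CI: {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f}")
```

Real reviews weight and combine studies in essentially this way, which is why a single cherry-picked trial carries far less weight than the totality of the reliable evidence.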

Other criticism is aimed at the way EBM is currently used (and abused). This criticism is often justified and necessary, and it is again the expression of our efforts to generate progress. EBM is practised by humans; and humans are far from perfect. They can be corrupt, misguided, dishonest, sloppy, negligent, stupid, etc., etc. Sadly, that means that the practice of EBM can have all of these qualities as well. All we can do is to keep on criticising malpractice, educate people, and hope that this might prevent the worst abuses in future.

Dana and many of his fellow SCAMers have a different strategy; they claim that EBM “can no longer be trusted” (interestingly they never tell us what system might be better; eminence-based medicine? experience-based medicine? random-based medicine? Dana-based medicine?).

The claim that EBM can no longer be trusted is clearly not true, counter-productive and unethical; and I suspect they know it.

Why then do they make it?

Because they feel that it entitles them to argue that homeopathy (or any other form of SCAM) cannot be held to EBM-standards. If EBM is unreliable, surely, nobody can ask the ‘Danas of this world’ to provide anything like sound data!!! And that, of course, would be just dandy for business, wouldn’t it?

So, let’s not be deterred or misled by these deliberately destructive people. Their motives are transparent and their arguments are nonsensical. EBM is not flawless, but with our continued efforts it will improve. Or, to repeat something that I have said many times before: EBM is the worst form of healthcare, except for all other known options.

THE CONVERSATION recently carried an article shamelessly promoting osteopathy. It seems to originate from the University of Swansea, UK, and is full of bizarre notions. Here is an excerpt:

To find out more about how osteopathy could potentially affect mental health, at our university health and well-being academy, we have recently conducted one of the first studies on the psychological impact of OMT – with positive results.

For the last five years, therapists at the academy have been using OMT to treat members of the public who suffer from a variety of musculoskeletal disorders which have led to chronic pain. To find out more about the mental health impacts of the treatment, we looked at three points in time – before OMT treatment, after the first week of treatment, and after the second week of treatment – and asked patients how they felt using mental health questionnaires.

This data has shown that OMT is effective for reducing anxiety and psychological distress, as well as improving patient self-care. But it may not be suitable for all mental illnesses associated with chronic pain. For instance, we found that OMT was less effective for depression and fear avoidance.

All is not lost, though. Our results also suggested that the positive psychological effects of OMT could be further optimised by combining it with therapy approaches like acceptance and commitment therapy (ACT). Some research indicates that psychological problems such as anxiety and depression are associated with inflexibility, and lead to experiential avoidance. ACT has a positive effect at reducing experiential avoidance, so may be useful with reducing the fear avoidance and depression (which OMT did not significantly reduce).

Other researchers have also suggested that this combined approach may be useful for some subgroups receiving OMT where they may accept this treatment. And, further backing this idea up, there has already been at least one pilot clinical trial and a feasibility study which have used ACT and OMT with some success.

Looking to build on our positive results, we have now begun to develop our ACT treatment in the academy, to be combined with the osteopathic therapy already on offer. Though there will be a different range of options, one of these ACT therapies is psychoeducational in nature. It does not require an active therapist to work with the patient, and can be delivered through internet instruction videos and homework exercises, for example.

Looking to the future, this kind of low cost, broad healthcare could not only save the health service money if rolled out nationwide but would also mean that patients only have to undergo one treatment.

END OF QUOTE

So, they recruited a few patients who had come to receive osteopathic treatments (a self-selected population full of expectation and in favour of osteopathy), let them fill in a few questionnaires and found some positive changes. From that, they conclude that OMT (osteopathic manipulative therapy) is effective. Not only that, they advocate that OMT be rolled out nationwide to save NHS funds.

Vis a vis so much nonsense, I am (almost) speechless!

As this comes not from some commercial enterprise but from a UK university, the nonsense is intolerable, I find.

Do I even need to point out what is wrong with it?

Not really, it’s too obvious.

But, just in case some readers struggle to find the fatal flaws of this ‘study’, let me mention just the most obvious one. There was no control group! That means the observed outcome could be due to many factors that are totally unrelated to OMT – such as placebo-effect, regression towards the mean, natural history of the condition, concomitant treatments, etc. In turn, this also means that the nationwide rolling out of their approach would most likely be a costly mistake.
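For readers who want to see how large such non-specific ‘improvements’ can be, here is a minimal simulation sketch (invented numbers, unrelated to the Swansea data) of regression towards the mean alone:

```python
# Minimal sketch (invented numbers): 'improvement' without any treatment.
# People seek care when a fluctuating symptom happens to be bad; measured
# again later, their scores drift back towards their usual level
# (regression towards the mean), before any natural recovery is even added.
import numpy as np

rng = np.random.default_rng(1)
usual_level = rng.normal(50, 10, 5000)                 # each person's usual score
at_enrolment = usual_level + rng.normal(0, 15, 5000)   # a bad-day fluctuation
enrolled = at_enrolment > 70                           # only those feeling bad enrol

weeks_later = usual_level[enrolled] + rng.normal(0, 15, enrolled.sum())
print("mean score at enrolment:", round(at_enrolment[enrolled].mean(), 1))
print("mean score weeks later: ", round(weeks_later.mean(), 1))
# The second number is reliably lower, although nothing was done at all.
```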

The general adoption of OMT would of course please osteopaths a lot; it could even reduce anxiety – but only that of the osteopaths and their bank-managers, I am afraid.

One thing one cannot say about George Vithoulkas, the ueber-guru of homeopathy, is that he is not as good as his word. Last year, he announced that he would focus on publishing case reports that would convince us all that homeopathy is effective:

…the only evidence that homeopathy can present to the scientific world at this moment are these thousands of cured cases. It is a waste of time, money, and energy to attempt to demonstrate the effectiveness of homeopathy through double blind trials.

… the international “scientific” community, which has neither direct perception nor personal experience of the beneficial effects of homeopathy, is forced to repeat the same old mantra: “Where is the evidence? Show us the evidence!” … the successes of homeopathy have remained hidden in the offices of hardworking homeopaths – and thus go largely ignored by the world’s medical authorities, governments, and the whole international scientific community…

… simple questions that are usually asked by the “ignorant”, for example, “Can homeopathy cure cancer, multiple sclerosis, ulcerative colitis, etc.?” are invalid and cannot elicit a direct answer because the reality is that many such cases can be ameliorated significantly, and a number can be cured…

And focussing on successful cases is just what the great Vithoulkas now does.

Together with homeopaths from the Centre for Classical Homeopathy, Vijayanagar, Bangalore, India, Vithoulkas has recently published a retrospective case series of 10 Indian patients who were diagnosed with dengue fever and treated exclusively with homeopathic remedies at Bangalore, India. This case series demonstrates with evidence of laboratory reports that even when the platelets dropped considerably there was good result without resorting to any other means.

The homeopaths concluded that a need for further, larger studies is indicated by this evidence, to precisely define the role of homeopathy in treating dengue fever. This study also emphasises the importance of individualised treatment during an epidemic for favourable results with homeopathy.

Bravo!

Keeping one’s promise must be a good thing.

But how meaningful are these 10 cases?

Dengue is a viral infection which, in the vast majority of cases, takes a benign course. After about two weeks, patients tend to be back to normal, even if they receive no treatment at all. In other words, the above-quoted case series is an exact description of the natural history of the condition. To put it even more bluntly: if these patients had been treated with kind attention and good general care, the outcome would not have been one iota different.

To me, this means that “to precisely define the role of homeopathy in treating dengue fever” would be a waste of resources. Its role is already clear: there is no role for homeopathy in the treatment of this (or any other) condition.

Sorry George.

Yesterday, it was announced that homeopaths can easily and quickly earn a sizable amount of money.

The announcement was made during the German sceptics conference ‘Skepkon‘ in Koeln. As I could not be present, I obtained the photo via Twitter.

So, if you are a homeopath or a fan of homeopathy, all you have to do – as the above slide says – is to reproducibly identify homeopathic remedies in high potency. The procedure for obtaining the money has to follow three pre-defined steps:

  1. Identification of three homeopathic preparations in high potency according to a prescribed protocol.
  2. Documentation of a method enabling a third party to identify the remedies.
  3. Verification of the experiment by repeating it.

Anyone interested must adhere to the full instructions published by the German sceptics GWUP:

1. Review of test protocol

Together with a representative of GWUP, the applicants review and agree on this protocol prior to the start of the test. Minor changes may be applied if justified, provided they are mutually agreed to in advance and do not affect the validity of the test, especially the blinding and randomization of the samples. In any case we want to avoid the results being compromised, or their credibility being impeached, by modifications of the protocol while the test is already under way. After mutual confirmation, the test protocol is binding for the whole duration of the test and its evaluation.

2. Selection of drugs

The applicant proposes which three drugs should be used in the trial. This gives them the opportunity to select substances that they think they could distinguish particularly well as homeopathic remedies. The potency may be selected freely as well, whereby the following conditions must be observed:

– all drugs must be available as sugar globules of the same grade (“Globuli” in German);
– the same potency, namely D- or C-potency above D24 / C12, is used for all three drugs;
– all drugs can be procured from the same producer.

3. Procurement of samples

The samples will be purchased by GWUP and shipped from the vendor directly to the notary who will perform the randomization. GWUP will purchase sufficient numbers of packages to complete the series of 12 samples according to the randomization list. The procurement will ensure that the samples derive from different batches of production as follows.

3.1. Common remedies

Common remedies, i.e. remedies sold in high numbers, will be procured from randomly selected pharmacies from the biggest cities in Germany (Berlin, Hamburg, Munich, Cologne, Frankfurt, Stuttgart…). Each pharmacy supplies a bottle for each of the three selected remedies and ships it directly to the notary in charge of randomization. If the applicants need a sample of known content for calibration, then this will be procured from yet another pharmacy in another German city.

3.2. Special remedies

If due to low sales it is possible that the above procedure is not sufficient to have all samples from different batches, a randomly selected pharmacy will be appointed to produce all the samples from raw materials purchased from the producer. GWUP will procure the mother tinctures, the raw sugar pills, and bottles and packages, to be shipped directly to the respective pharmacy who then will do the potentization, label the bottles and send them to the notary. If there are extra samples of known content required for calibration, then an extra set of samples will be produced. One set of samples will be kept in a sealed package for future reference.

The applicant and GWUP mutually agree on which procedure is used before the start of procurement. If more than 10 grams of globules per sample are required for the procedure used for identification, the applicant has to indicate this in advance, and GWUP will take this into account.

4. Randomization / blinding

The randomization and blinding is done by a sworn-in public notary in Würzburg, Germany, who is selected by a random procedure. Würzburg is chosen because the first part of the task is to be evaluated at the University of Würzburg, for all participants based in Europe. For overseas applicants, the location will be mutually agreed on.

The notary receives a coding list showing how the three drugs A, B and C are to be distributed among the twelve samples. This list is compiled by the GWUP representative by throwing dice. The notary also determines which drug is assigned to which letter by throwing dice. Note that the drugs may not be present in the set in equal numbers.

The notary completely removes the original label from the bottle and replaces it with the number without opening the bottle. The randomization protocol is deposited in a sealed envelope with the notary public without a copy being made beforehand. The notary disposes of surplus packs. If special remedies are processed, one set of marked samples is sealed and forwarded to GWUP for later reference in a sealed package.

The coded bottles are sent from the notary to the applicant without individual packaging and documentation. The applicant confirms receipt of the samples.

5. Identification

The applicant identifies which of the 12 bottles contains which drug, using any method and procedure of his choice. There is no limit as to the method used for identification, and this well may be a procedure not currently recognized by modern science. However, GWUP at the start requires a short and rough outline of how the applicant wants to proceed, and GWUP reserves the right to reject applications whose sincerity for scientific work seems questionable.

The applicant is also required to specify a period of time within which they will be able to produce their results. This period may not exceed six months. If it expires without the applicant being able to show their results, the outcome will be considered negative. However, the candidate may apply for an extension in good time before the deadline, if they can provide a reasonable explanation and the delay is not caused by the identification process as such.

The applicant is explicitly advised to observe ethics standards, and to procure the consent of an appropriate ethics committee if their method involves testing on humans or animals.

6. Result Pt. 1

If reasonable, the applicant may present their findings as part of the PSI-Tests held annually by GWUP at the University of Würzburg. The applicant’s result will be compared to the coding protocol from the notary. The number of bottles in which the notary’s record corresponds to the applicant’s details is determined. The result includes a description of the method used, if possible with meaningful intermediate data such as measurement protocols or symptom lists of drug provings.

The first part of the test is considered a success if the content of no more than one bottle is identified incorrectly and a description of the procedure is produced.

7. Result Pt. 2 and 3: Replication and Verification

Replication of the test is to ensure that a successful first result was not caused by chance alone. In addition, the procedure explained by the applicant is to be verified in a way depending on its nature. The objective is to verify that the identification was indeed performed by using this very method, and that the description is complete and suitable for a third party to achieve the same outcome.

For replication, steps 2 to 5 will be repeated. Applicants may select to use the same drugs as before. In this case they will then be procured from another manufacturer or prepared by a different pharmacy with raw material from a different supplier. Alternatively, the candidate may indicate three new drugs which then can be obtained from the original vendor.

For a successful replication the same precision as before is required, that is, that only one out of 12 bottles may be identified incorrectly.

The evaluation and presentation of these results may take place at any location, press or other media may be invited to the event as agreed to by the applicant and GWUP.
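To get a feel for how demanding the pass criterion is, here is a rough back-of-the-envelope calculation (not part of the GWUP protocol; it simplifies by treating each of the 12 bottles as an independent one-in-three guess, whereas the real set-up with dice-determined, possibly unequal counts differs slightly):

```python
# Rough sketch: probability of passing the first round by pure guessing,
# assuming each of the 12 bottles is an independent 1-in-3 guess.
from math import comb

p = 1 / 3                                      # chance of guessing one bottle right
p_all_correct = p**12
p_exactly_one_wrong = comb(12, 11) * p**11 * (1 - p)
p_pass = p_all_correct + p_exactly_one_wrong   # 'no more than one bottle wrong'

print(f"probability of passing round 1 by chance: {p_pass:.6f}")
print(f"roughly 1 in {1 / p_pass:,.0f}")
```

Passing the replication as well by luck alone would be roughly the square of that already tiny figure – which is presumably the point of demanding it.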


Is anyone going to take up this challenge?

Personally, I don’t hold my breath.

Many years ago (at a time when homeopaths still saw me as one of their own), I had plans to do a similar but slightly less rigorous test as part of a doctoral thesis for one of my students.

Our investigation was straightforward: we approached several of the world’s leading/most famous homeopaths and asked them to participate. Their task was to tell us which homeopathic remedy they thought was easiest to differentiate from a placebo. Subsequently we would post them several vials – I think the number was 10 – and ask them to tell us which contained the remedy of their choice (in a C30 potency), and which the placebo (the distribution was 50:50, and the authenticity of each vial was to be confirmed by a notary). The experimental method for identifying which was which was entirely left to each participating homeopath; they were even allowed to use multiple, different tests. Based on the results, we would then calculate whether their identification skills were better than pure chance.
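I do not recall the exact statistical plan, but assuming the analysis would have been a simple one-sided binomial test (my assumption, treating each of the 10 vials as an independent 50:50 call), this sketch shows what ‘better than pure chance’ would have required:

```python
# Minimal sketch (my assumption of the analysis): a one-sided binomial test
# for the 10-vial experiment, treating each vial as an independent 50:50 call.
from scipy.stats import binomtest

for correct in range(5, 11):
    p = binomtest(correct, n=10, p=0.5, alternative='greater').pvalue
    print(f"{correct}/10 correct: one-sided p = {p:.3f}")
# Even 8/10 correct only gives p of about 0.055; a convincing result would
# have needed 9 or 10 correct calls - a reminder of how little power 10 vials carry.
```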

Sadly, the trial never happened. Initially, we had a positive response from some homeopaths who were interested in participating. However, when they then saw the exact protocol, they all pulled out.

But times may have changed; perhaps today there are some homeopaths out there who actually believe in homeopathy?

Perhaps our strategy to work only with ‘the best’ homeopaths was wrong?

Perhaps there are some homeopaths who are less risk-averse?

I sure hope that lots of enthusiastic homeopaths will take up this challenge.

GOOD LUCK! And watch this space.

We recently discussed the deplorable case of Larry Nassar and the fact that the ‘American Osteopathic Association’ stated that intravaginal manipulations are indeed an approved osteopathic treatment. At the time, I thought this was a shocking claim. So, imagine my surprise when I was alerted to a German trial of osteopathic intravaginal manipulations.

Here is the full and unaltered abstract of the study:

Introduction: 50 to 80% of pregnant women suffer from low back pain (LBP) or pelvic pain (Sabino und Grauer, 2008). There is evidence for the effectiveness of manual therapy like osteopathy, chiropractic and physiotherapy in pregnant women with LBP or pelvic pain (Liccardione et al., 2010). Anatomical, functional and neural connections support the relationship between intrapelvic dysfunctions and lumbar and pelvic pain (Kanakaris et al., 2011). Strain, pressure and stretch of visceral and parietal peritoneum, bladder, urethra, rectum and fascial tissue can result in pain and secondary in muscle spasm. Visceral mobility, especially of the uterus and rectum, can induce tension on the inferior hypogastric plexus, which may influence its function. Thus, stretching the broad ligament of the uterus and the intrapelvic fascia tissue during pregnancy can reinforce the influence of the inferior hypogastric plexus. Based on above facts an additional intravaginal treatment seems to be a considerable approach in the treatment of low back pain in pregnant women.
Objective: The purpose of this study was to compare the effect of osteopathic treatment including intravaginal techniques versus osteopathic treatment only in females with pregnancy-related low back pain.
Methods: Design: The study was performed as a randomized controlled trial. The participants were randomized by drawing lots, either into the intervention group including osteopathic and additional intravaginal treatment (IV) or a control group with osteopathic treatment only (OI). Setting: Medical practice in south of Germany.
Participants 46 patients were recruited between the 30th and 36th week of pregnancy suffering from low back pain.
Intervention Both groups received three treatments within a period of three weeks. Both groups were treated with visceral, mobilization, and myofascial techniques in the cervical, thoracic and lumbar spine, the pelvic and the abdominal region (American Osteopathic Association Guidelines, 2010). The IV group received an additional treatment with intravaginal techniques in supine position. This included myofascial techniques of the M. levator ani and the internal obturator muscles, the vaginal tissue, the pubovesical and uterosacral ligaments as well as the inferior hypogastric plexus.
Main outcome measures As primary outcome the back pain intensity was measured by Visual Analogue Scale (VAS). Secondary outcome was the disability index assessed by Oswestry-Low-Back-Pain-Disability-Index (ODI), and Pregnancy-Mobility-Index (PMI).
Results: 46 participants were randomly assigned into the intervention group (IV; n = 23; age: 29.0 ±4.8 years; height: 170.1 ±5.8 cm; weight: 64.2 ±10.3 kg; BMI: 21.9 ±2.6 kg/m2) and the control group (OI; n = 23; age: 32.0 ±3.9 years; height: 168.1 ±3.5 cm; weight: 62.3 ±7.9 kg; BMI: 22.1 ±3.2 kg/m2). Data from 42 patients were included in the final analyses (IV: n=20; OI: n=22), whereas four patients dropped out due to general pregnancy complications. Back pain intensity (VAS) changed significantly in both groups: in the intervention group (IV) from 59.8 ±14.8 to 19.6 ±8.4 (p<0.05) and in the control group (OI) from 57.4 ±11.3 to 24.7 ±12.8. The difference between groups of 7.5 (95%CI: -16.3 to 1.3) failed to demonstrate statistical significance (p=0.93). Pregnancy-Mobility-Index (PMI) changed significantly in both groups, too. IV group: from 33.4 ±8.9 to 29.6 ±6.6 (p<0.05), control group (OI): from 36.3 ±5.2 to 29.7 ±6.8. The difference between groups of 2.6 (95%CI: -5.9 to 0.6) was not statistically significant (p=0.109). Oswestry-Low-Back-Pain-Disability-Index (ODI) changed significantly in the intervention group (IV) from 15.1 ±7.8 to 9.2 ±3.6 (p<0.05) and also significantly in the control group (OI) from 13.8 ±4.9 to 9.2 ±3.0. Between-groups difference of 1.3 (95%CI: -1.5 to 4.1) was not statistically significant (p=0.357).
Conclusions: In this sample a series of osteopathic treatments showed significant effects in reducing pain and increasing the lumbar range of motion in pregnant women with low back pain. Both groups attained clinically significant improvement in functional disability, activity and quality of life. Furthermore, no benefit of additional intravaginal treatment was observed.

END OF QUOTE

My first thoughts after reading this were: how on earth did the investigators get this past an ethics committee? It cannot be ethical, in my view, to allow osteopaths (in Germany, they have no relevant training to speak of) to manipulate women intravaginally. How deluded must an osteopath be to plan and conduct such a trial? What were the patients told before giving informed consent? Surely not the truth!

My second thoughts were about the scientific validity of this study: the hypothesis which this trial claims to be testing is a far-fetched extrapolation, to put it mildly; in fact, it is not a hypothesis, it’s a very daft idea. The control-intervention is inadequate in that it cannot control for the (probably large) placebo effects of intravaginal manipulations. The observed outcomes are based on within-group comparisons and are therefore most likely unrelated to the treatments applied. The conclusion is as barmy as it gets; a proper conclusion should clearly and openly state that the results did not show any effects of the intravaginal manipulations.

In summary, this is a breathtakingly idiotic trial, and everyone involved in it (ethics committee, funding body, investigators, statistician, reviewers, journal editor) should be deeply ashamed and apologise to the poor women who were abused in a most deplorable fashion.

Amongst all the implausible treatments to be found under the umbrella of ‘alternative medicine’, Reiki might be one of the worst, i.e. least plausible and outright bizarre (see for instance here, here and here). But this has never stopped enthusiasts from playing scientists and conducting some more pseudo-science.

This new study examined the immediate symptom relief from a single reiki or massage session in a hospitalized population at a rural academic medical centre. It was designed as a retrospective analysis of prospectively collected data on demographic, clinical, process, and quality of life for hospitalized patients receiving massage therapy or reiki. Hospitalized patients requesting or referred to the healing arts team received either a massage or reiki session and completed pre- and post-therapy symptom questionnaires. Differences between pre- and post-sessions in pain, nausea, fatigue, anxiety, depression, and overall well-being were recorded using an 11-point Likert scale.

Patients reported symptom relief with both reiki and massage therapy. Reiki improved fatigue and anxiety more than massage. Pain, nausea, depression, and well-being changes were not different between reiki and massage encounters. Immediate symptom relief was similar for cancer and non-cancer patients for both reiki and massage therapy and did not vary based on age, gender, length of session, and baseline symptoms.

The authors concluded that reiki and massage clinically provide similar improvements in pain, nausea, fatigue, anxiety, depression, and overall well-being while reiki improved fatigue and anxiety more than massage therapy in a heterogeneous hospitalized patient population. Controlled trials should be considered to validate the data.

Don’t I just adore this little addendum to the conclusions, “controlled trials should be considered to validate the data”?

The thing is, there is nothing to validate here!

The outcomes are not due to the specific effects of Reiki or massage; they are almost certainly caused by:

  • the extra attention,
  • the expectation of patients,
  • the verbal or non-verbal suggestions of the therapists,
  • the regression towards the mean,
  • the natural history of the condition,
  • the concomitant therapies administered in parallel,
  • the placebo effect,
  • social desirability.

Such pseudo-research can only serve one purpose: to mislead (some of) us into thinking that treatments such as Reiki might work.

What journal would be so utterly devoid of critical analysis to publish such unethical nonsense?

Ahh … it’s our old friend the Journal of Alternative and Complementary Medicine

Say no more!

For months now, I have noticed a proliferation of so-called pilot studies of alternative therapies. A pilot study (also called a feasibility study) is defined as a small-scale preliminary study conducted in order to evaluate feasibility, time, cost and adverse events, and to improve upon the study design prior to performance of a full-scale research project. Here I submit that most of the pilot studies of alternative therapies are, in fact, bogus.

To qualify as a pilot study, an investigation needs to have an aim that is in line with the above-mentioned definition. Another obvious hallmark must be that its conclusions are in line with this aim. We do not need to conduct much research to find that even these two elementary preconditions are not fulfilled by the plethora of pilot studies that are currently being published, and that proper pilot studies of alternative medicine are very rare.

Three recent examples of dodgy pilot studies will have to suffice (but rest assured, there are many, many more).

Foot Reflexotherapy Induces Analgesia in Elderly Individuals with Low Back Pain: A Randomized, Double-Blind, Controlled Pilot Study

The aim of this study was to evaluate the effects of foot reflexotherapy on pain and postural balance in elderly individuals with low back pain. And the conclusions drawn by its authors were that this study demonstrated that foot reflexotherapy induced analgesia but did not affect postural balance in elderly individuals with low back pain.

Effect of Tai Chi Training on Dual-Tasking Performance That Involves Stepping Down among Stroke Survivors: A Pilot Study.

The aim of this study was to investigate the effect of Tai Chi training on dual-tasking performance that involved stepping down and compared it with that of conventional exercise among stroke survivors. And the conclusions read: These results suggest a beneficial effect of Tai Chi training on cognition among stroke survivors without compromising physical task performance in dual-tasking.

The Efficacy of Acupuncture on Anthropometric Measures and the Biochemical Markers for Metabolic Syndrome: A Randomized Controlled Pilot Study.

The aim of this study was to evaluate the efficacy [of acupuncture] over 12 weeks of treatment and 12 weeks of follow-up. And the conclusion: Acupuncture decreases WC, HC, HbA1c, TG, and TC values and blood pressure in MetS.

It is almost painfully obvious that these studies are not ‘pilot’ studies as defined above.

So, what are they, and why are they so popular in alternative medicine?

The way I see it, they are the result of amateur researchers conducting pseudo-research for publication in lamentable journals in an attempt to promote their pet therapies (I have yet to find such a study that reports a negative finding). The sequence of events that leads to the publication of such pilot studies is usually as follows:

  • An enthusiast or a team of enthusiasts of alternative medicine decide that they will do some research.
  • They have no or very little know-how in conducting a clinical trial.
  • They nevertheless feel that such a study would be nice as it promotes both their careers and their pet therapy.
  • They design some sort of a plan and start recruiting patients for their trial.
  • At this point they notice that things are not as easy as they had imagined.
  • They have too few funds and too little time to do anything properly.
  • This does not, however, stop them from continuing.
  • The trial progresses slowly, and patient numbers remain low.
  • After a while the would-be researchers get fed up and decide that their study has enough patients to stop the trial.
  • They improvise some statistical analyses with their results.
  • They write up the results the best they can.
  • They submit it for publication in a 3rd class journal and, in order to get it accepted, they call it a ‘pilot study’.
  • They feel that this title is an excuse for even the most obvious flaws in their work.
  • The journal’s reviewers and editors are all proponents of alternative medicine who welcome any study that seems to confirm their belief.
  • Thus the study does get published despite the fact that it is worthless.

Some might say ‘so what? no harm done!’

But I beg to differ: these studies pollute the medical literature and misguide people who are unable or unwilling to look behind the smoke-screen. Enthusiasts of alternative medicine popularise these bogus trials, while hiding the fact that their results are unreliable. Journalists report about them, and many consumers assume they are being told the truth – after all it was published in a ‘peer-reviewed’ medical journal!

My conclusions are as simple as they are severe:

  • Such pilot studies are the result of gross incompetence on many levels (researchers, funders, ethics committees, reviewers, journal editors).
  • They can cause considerable harm, because they mislead many people.
  • In more than one way, they represent a violation of medical ethics.
  • They could be considered scientific misconduct.
  • We should think of stopping this increasingly common form of scientific misconduct.