
On this blog, we have often noted that (almost) all TCM trials from China report positive results. Essentially, this means we might as well discard them, because we simply cannot trust their findings. When I was asked to comment on a related issue, it occurred to me that the situation might not be so different with Korean acupuncture studies. So, I tried to test this hypothesis by running a quick Medline search for Korean acupuncture RCTs. What I found surprised me and eventually turned into a reminder of the importance of critical thinking.

Even though I found plenty of articles on acupuncture coming out of Korea, my search generated merely 3 RCTs. Here are their conclusions:

RCT No1

The results of this study show that moxibustion (3 sessions/week for 4 weeks) might lower blood pressure in patients with prehypertension or stage I hypertension and treatment frequency might affect effectiveness of moxibustion in BP regulation. Further randomized controlled trials with a large sample size on prehypertension and hypertension should be conducted.

RCT No2

The results of this study show that acupuncture might lower blood pressure in prehypertension and stage I hypertension, and further RCT need 97 participants in each group. The effect of acupuncture on prehypertension and mild hypertension should be confirmed in larger studies.

RCT No3

Bee venom acupuncture combined with physiotherapy remains clinically effective 1 year after treatment and may help improve long-term quality of life in patients with AC of the shoulder.

So yes, according to this mini-analysis, 100% of the acupuncture RCTs from Korea are positive. But the sample size is tiny, and I may not have located all RCTs with my ‘rough and ready’ search.
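Just how little a 3-out-of-3 result tells us is easy to quantify; here is a minimal sketch of an exact (Clopper-Pearson) confidence interval, purely for illustration:

```python
# Clopper-Pearson (exact) 95% confidence interval for the proportion of
# positive trials, given 3 positive results out of 3 RCTs found.
from scipy.stats import beta

positives, n = 3, 3
alpha = 0.05

# Lower bound uses the Beta(x, n - x + 1) distribution; the upper bound is 1 when x == n.
lower = beta.ppf(alpha / 2, positives, n - positives + 1)
upper = 1.0 if positives == n else beta.ppf(1 - alpha / 2, positives + 1, n - positives)

print(f"95% CI for the proportion of positive trials: {lower:.2f} to {upper:.2f}")
# Roughly 0.29 to 1.00 -- three positive trials out of three are compatible
# with anything from about 30% to 100% of such trials being positive.
```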

But what are all the other Korean acupuncture articles about?

Many are protocols for RCTs, which is puzzling because some of them are now so old that the RCT itself should long since have emerged. Could it be that some Korean researchers publish protocols without ever publishing the trial? If so, why? But most are systematic reviews of RCTs of acupuncture. There must be about one order of magnitude more systematic reviews than RCTs!

Why so many?

Perhaps I can contribute to the answer to this question; perhaps I am even partly responsible for this bonanza.

In the period between 2008 and 2010, I had several Korean co-workers on my team at Exeter, and we regularly conducted systematic reviews of acupuncture for various indications. In fact, the first 6 such systematic reviews carry my name. This research seems to have created a trend among Korean acupuncture researchers, because ever since they seem unable to stop publishing such articles.

So far, so good: a plethora of systematic reviews is not necessarily a bad thing. But looking at the conclusions of these systematic reviews, I seem to notice a worrying trend: while our reviews from the 2008-2010 period arrived at adequately cautious conclusions, the new reviews are distinctly more positive in their conclusions and uncritical in their tone.

Let me explain this by citing the conclusions of the very first (includes me as senior author) and the very last review (does not include me) currently listed in Medline:

1st review

penetrating or non-penetrating sham-controlled RCTs failed to show specific effects of acupuncture for pain control in patients with rheumatoid arthritis. More rigorous research seems to be warranted.

Last review

Electroacupuncture was an effective treatment for MCI [mild cognitive impairment] patients by improving cognitive function. However, the included studies presented a low methodological quality and no adverse effects were reported. Thus, further comprehensive studies with a design in depth are needed to derive significant results.

Now, you might claim that the evidence for acupuncture has overall become more positive over time, and that this is the cause of the observed shift. Yet, I don’t see that at all. I very much fear that something else is going on, something that could be called the suspension of critical thinking.

Whenever I have asked Chinese researchers why they only publish positive conclusions, the answer has been that, in China, it would be most impolite to publish anything that contradicts the views of one’s peers. Therefore, no Chinese researcher would dream of doing it, and consequently, critical thinking is dangerously thin on the ground.

I think that a similar phenomenon might be at the heart of what I observe in the Korean acupuncture literature: as long as I was there to make sure that the conclusions were adequately based on the data, the systematic reviews were fine. When my influence disappeared and the reviews were done exclusively by Korean researchers, the pressure of pleasing Korean peers (and funders) became dominant. I suggest that this is why conclusions now tend to first state that the evidence is positive and only then (almost as an afterthought) add that the primary trials were flimsy. The consequences of this phenomenon could be serious:

  • progress is being stifled,
  • the public is being misled,
  • funds are being wasted,
  • the reputation of science is being tarnished.

Of course, the only correct way to express this situation is something like this:

BECAUSE THE QUALITY OF THE PRIMARY TRIALS IS INADEQUATE, THE EFFECTIVENESS OF ACUPUNCTURE REMAINS UNPROVEN.

 

 

Some people seem to think that all so-called alternative medicine (SCAM) is ineffective, harmful or both. And some believe that I am hell-bent on making sure that this message gets out there. I recommend that these guys read my latest book or this 2008 article (sadly now outdated) and find those (admittedly few) SCAMs that demonstrably generate more good than harm.

The truth, as far as this blog is concerned, is that I am constantly on the lookout for research showing or suggesting that a therapy is effective or a diagnostic technique is valid (if you see such a paper that is sound and new, please let me know). And yesterday, I got lucky:

This paper has just been presented at the ESC Congress in Paris.

Its authors are: A Pandey (1), N Huq (1), M Chapman (1), A Fongang (1), P Poirier (2)

(1) Cambridge Cardiac Care Centre – Cambridge – Canada

(2) Université Laval, Faculté de Pharmacie – Laval – Canada

Here is the abstract in full:

Introduction: Regular physical activity may modulate the inflammatory process and be cardio-protective. Yoga is a form of exercise that may have cardiovascular benefits. The effects of yoga on global cardiovascular risk have not been adequately described. The purpose of this study is to determine whether the addition of yoga to a regular exercise regimen reduces global cardiovascular risk.
Methods: Sixty consecutive individuals with essential hypertension were recruited in a lifestyle intervention program. All individuals with known hypertensive end organ damage, known cardiovascular diseases, as well as those taking medications/supplements that affected blood pressure, blood sugar, cholesterol or vascular inflammation were excluded. Participants were randomized to either a yoga group or similar duration stretching control group. Participants, over the 3-month intervention regimen, performed 15 minutes of either yoga or stretching in addition to 30 minutes of aerobic exercises thrice weekly. Blood pressure, cholesterol levels and hs-CRP were measured, and Reynold’s Global Cardiovascular Risk Score was calculated at baseline and at the end of the 3-month intervention program.
Results: At screening, there were no statistically significant differences between the groups in any measured parameters or the 10-year risk of a cardiovascular event as measured by the Reynolds Risk Score. (8.2 vs. 9.0%; yoga vs. control group) After the 3-month intervention period, there was a statistically significantly greater decrease in the Reynold’s Risk Score in the yoga vs. the control group. (7.0 vs. 8.4%, p=0.003, relative reduction 13.2 vs. 6.5%, p<0.0001)
Conclusions: In patients with essential hypertension on no medications and with no known end organ damage, the practice of yoga incorporated into a 3-month exercise intervention program was associated with significant greater improvement in the Reynold’s Risk of a 10-year cardiovascular event, when compared to the control stretching group. If these results are validated in more diverse populations over a longer duration of follow up, yoga may represent an important addition to traditional cardiovascular disease prevention programs.

Yes, this study was small, too small to draw far-reaching conclusions. And no, we don’t know what precisely ‘yoga’ entailed (we need to wait for the full publication to get this information plus all the other details needed to evaluate the study properly). Yet, this is surely promising: yoga has few adverse effects, is liked by many consumers, and could potentially help millions to reduce their cardiovascular risk. What is more, there is at least some encouraging previous evidence.

But what I like most about this abstract is the fact that the authors are sufficiently cautious in their conclusions and even state ‘if these results are validated…’

SCAM-researchers, please take note!

The journal NATURE has just published an excellent article by Andrew D. Oxman and an alliance of 24 leading scientists outlining the importance and key concepts of critical thinking in healthcare and beyond. The authors state: “The Key Concepts for Informed Choices is not a checklist. It is a starting point. Although we have organized the ideas into three groups (claims, comparisons and choices), they can be used to develop learning resources that include any combination of these, presented in any order. We hope that the concepts will prove useful to people who help others to think critically about what evidence to trust and what to do, including those who teach critical thinking and those responsible for communicating research findings.”

Here I take the liberty of citing a short excerpt from this paper:

CLAIMS:

Claims about effects should be supported by evidence from fair comparisons. Other claims are not necessarily wrong, but there is an insufficient basis for believing them.

Claims should not assume that interventions are safe, effective or certain.

  • Interventions can cause harm as well as benefits.
  • Large, dramatic effects are rare.
  • We can rarely, if ever, be certain about the effects of interventions.

Seemingly logical assumptions are not a sufficient basis for claims.

  • Beliefs alone about how interventions work are not reliable predictors of the presence or size of effects.
  • An outcome may be associated with an intervention but not caused by it.
  • More data are not necessarily better data.
  • The results of one study considered in isolation can be misleading.
  • Widely used interventions or those that have been used for decades are not necessarily beneficial or safe.
  • Interventions that are new or technologically impressive might not be better than available alternatives.
  • Increasing the amount of an intervention does not necessarily increase its benefits and might cause harm.

Trust in a source alone is not a sufficient basis for believing a claim.

  • Competing interests can result in misleading claims.
  • Personal experiences or anecdotes alone are an unreliable basis for most claims.
  • Opinions of experts, authorities, celebrities or other respected individuals are not solely a reliable basis for claims.
  • Peer review and publication by a journal do not guarantee that comparisons have been fair.

COMPARISONS:

Studies should make fair comparisons, designed to minimize the risk of systematic errors (biases) and random errors (the play of chance).

Comparisons of interventions should be fair.

  • Comparison groups and conditions should be as similar as possible.
  • Indirect comparisons of interventions across different studies can be misleading.
  • The people, groups or conditions being compared should be treated similarly, apart from the interventions being studied.
  • Outcomes should be assessed in the same way in the groups or conditions being compared.
  • Outcomes should be assessed using methods that have been shown to be reliable.
  • It is important to assess outcomes in all (or nearly all) the people or subjects in a study.
  • When random allocation is used, people’s or subjects’ outcomes should be counted in the group to which they were allocated.

Syntheses of studies should be reliable.

  • Reviews of studies comparing interventions should use systematic methods.
  • Failure to consider unpublished results of fair comparisons can bias estimates of effects.
  • Comparisons of interventions might be sensitive to underlying assumptions.

Descriptions should reflect the size of effects and the risk of being misled by chance.

  • Verbal descriptions of the size of effects alone can be misleading.
  • Small studies might be misleading.
  • Confidence intervals should be reported for estimates of effects.
  • Deeming results to be ‘statistically significant’ or ‘non-significant’ can be misleading.
  • Lack of evidence for a difference is not the same as evidence of no difference.

CHOICES:

What to do depends on judgements about the problem, the relevance (applicability or transferability) of evidence available and the balance of expected benefits, harm and costs.

Problems, goals and options should be defined.

  • The problem should be diagnosed or described correctly.
  • The goals and options should be acceptable and feasible.

Available evidence should be relevant.

  • Attention should focus on important, not surrogate, outcomes of interventions.
  • There should not be important differences between the people in studies and those to whom the study results will be applied.
  • The interventions compared should be similar to those of interest.
  • The circumstances in which the interventions were compared should be similar to those of interest.

Expected pros should outweigh cons.

  • Weigh the benefits and savings against the harm and costs of acting or not.
  • Consider how these are valued, their certainty and how they are distributed.
  • Important uncertainties about the effects of interventions should be reduced by further fair comparisons.

__________________________________________________________________________

END OF QUOTE

I have nothing to add to this, except perhaps to point out how very relevant all of this is, of course, for SCAM, and to warmly recommend that you study the full text of this brilliant paper.

John Dormandy was a consultant vascular surgeon, researcher, and medical educator best known for innovative work on the diagnosis and management of peripheral arterial disease. He had a leading role in developing, and garnering international support for, uniform guidelines that had a major impact on vascular care among specialists.

The Trans-Atlantic Inter-Society Consensus on Management of Peripheral Arterial Disease (TASC) was published in 2000. Dormandy, a former president of clinical medicine at the Royal Society of Medicine, was the genial force behind it, steering cooperation between medical and surgical society experts in Europe and North America.

“TASC became the standard for describing the severity of the problem that patients had and then defining what options there were to try and treat them,” says Alison Halliday, professor of vascular surgery at Oxford University, who worked with Dormandy at St George’s Hospital, London. “It was the first time anybody had tried to get this general view on the complex picture of lower limb artery disease,” she says.

After stumbling across this totally unexpected obituary in the BMJ, I was deeply saddened. John was a close friend and mentor; I admired and loved him. He has influenced my life more than anyone else.

Our paths first crossed in 1979 when I applied for a post in his lab at St George’s Hospital, London. Even though I had never really envisaged a career in research, I wanted this job badly. At the time, I had been working as an SHO in a psychiatric hospital and was most unhappy. All I wished for at that stage was to get out of psychiatry.

John offered me the position (mainly because of my MD thesis on blood clotting, I think), which was to run his haemorheology (the study of the flow properties of blood) lab. At the time, St George’s consisted of a research tract, a library, a squash court and a mega-building site for the main hospital.

John’s supervision was more than relaxed. As he was a busy surgeon then operating at a different site, I saw him only about once per fortnight, usually for less than 5 minutes. John gave me plenty of time to read (and to play squash!). As he was one of the world leaders in haemorheology research, the lab was always full of foreign visitors who wanted to learn our methodologies. We all learnt from each other and had a great time!

After about two years, I had become a budding scientist. John’s mentoring had been minimal but nevertheless most effective. After I left to go back to Germany and finish my clinical training, we stayed in contact. In Munich, I managed to build up my own lab and continued to do haemorheology research. We thus met regularly, published papers and a book together, organised conferences, etc. It was during this time that my former boss became my friend.

Later, he also visited us in Vienna several times, and when I told him that I wanted to come back to England to do research in alternative medicine, he was puzzled but remained supportive (even wrote one of the two references that got me the Exeter job). I think he initially felt this might be a waste of a talent, but he soon changed his mind when he saw what I was up to.

John was one of the most original thinkers I have ever met. His intellect was as sharp as a razor and as fast as lightning. His research activities (>220 Medline-listed papers) focussed on haemorheology, vascular surgery and multi-national mega-trials. And, of course, he had a wicked sense of humour. When he became the clinical director of St George’s, he had to implement a strict no-smoking policy throughout the hospital. As an enthusiastic cigar smoker, he found this somewhat of a problem. The solution was simple: at the entrance of his office, John put up a sign: ‘You are now leaving the premises of St George’s Hospital’.

I saw John last in February this year. My wife and I had invited him for dinner, and when I phoned him to confirm the booking he said: ‘We only need a table for three; Klari (his wife) won’t join us, she died just before Christmas.’ I know how he must have suffered but, in typical Dormandy style, he tried to dissimulate and make light of his bereavement. During dinner he told me about the book he had just published: ‘Not a bestseller, in fact, it’s probably the most boring book you can find.’ He then explained the concept of his next book, a history of medicine seen through the medical histories of famous people, and asked, ‘What’s your next one?’ ‘It’s called Don’t Believe What You Think.’ ‘Marvellous title!’, he exclaimed.

We parted that evening saying ‘see you soon’.

I will miss my friend very badly.

Many so-called alternative medicine (SCAM) traditions have their very own diagnostic techniques, unknown to conventional clinicians. Think, for instance, of:

  • iridology,
  • applied kinesiology,
  • tongue diagnosis,
  • pulse diagnosis,
  • Kirlian photography,
  • live blood cell analysis,
  • the Vega test,
  • dowsing.

(Those interested in more detail can find a critical assessment of these and other diagnostic SCAM methods in my new book.)

And what about homeopathy?

Yes, homeopathy is also a diagnostic method.

Let me explain.

According to Hahnemann’s classical homeopathy, the homeopath should not be interested in conventional diagnostic labels. Instead, classical homeopaths are focussed on the symptoms and characteristics of the patient. They conduct a lengthy history to learn all about them, and they show little or no interest in a physical examination of their patient or other diagnostic procedures. Once they are confident that they have all the information they need, they try to find the optimal homeopathic remedy.

This is done by matching the symptoms with the drug pictures of homeopathic remedies. Any homeopathic drug picture is essentially based on what has been noted in homeopathic provings, where healthy volunteers take a remedy and monitor all the symptoms, sensations and feelings they experience subsequently.

The perfect match is what homeopaths strive to find with their long and tedious procedure of taking a history. And the perfectly matching homeopathic remedy is essentially the homeopathic diagnosis.

Now, here is the thing: most SCAM diagnostic techniques have been tested (and found to be useless), but homeopathy as a diagnostic tool has – as far as I know – never been submitted to any rigorous tests (if you know otherwise, please let me know). And this, of course,  begs an important question: is it right – ethical, legal, moral – to use homeopathy without such evidence being available?

The simplest such test would be quite easy to conduct: one would send the same patient to 10 or 20 experienced homeopaths and see how many of them prescribe the same remedy.

Simple! But I shudder to think what such an experiment might reveal.
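If anyone did run such an experiment, quantifying the agreement would be trivial; here is a minimal sketch (the homeopaths’ prescriptions are, of course, entirely hypothetical):

```python
from collections import Counter
from itertools import combinations

# Hypothetical remedies prescribed by 12 homeopaths for the SAME patient.
prescriptions = [
    "Sulphur", "Lycopodium", "Pulsatilla", "Sulphur", "Natrum mur.",
    "Sepia", "Lycopodium", "Phosphorus", "Sulphur", "Nux vomica",
    "Calcarea carb.", "Arsenicum album",
]

# Pairwise agreement: how often do any two homeopaths pick the same remedy?
pairs = list(combinations(prescriptions, 2))
pairwise_agreement = sum(a == b for a, b in pairs) / len(pairs)

# Modal agreement: what share picked the single most popular remedy?
remedy, count = Counter(prescriptions).most_common(1)[0]
modal_agreement = count / len(prescriptions)

print(f"Pairwise agreement: {pairwise_agreement:.0%}")
print(f"Most frequently chosen remedy ({remedy}): {modal_agreement:.0%} of homeopaths")
```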

The Society of Homeopaths (SoH) is the professional organisation of UK lay homeopaths (those with no medical training). The SoH has recently published a membership survey. Here are some of its findings:

  • 89% of all respondents are female.
  • 70% are between the ages of 35 and 64.
  • 91% of respondents are currently in practice.
  • 87% are RSHoms.
  • The majority has been in practice for an average of 11 – 15 years.
  • 64% identified their main place of work as their home.
  • 51% work within a multidisciplinary clinic.
  • 43% work in a beauty clinic.
  • 85% offer either telephone or video call consultations.
  • Just under 50% see 5 or fewer patients each week.
  • 38% are satisfied with the number of patients they are seeing.
  • 80% felt confident or very confident about their future.
  • 65% feel supported by the SoH.

What can we conclude from these data?

Nothing!

Why?

Because this truly homeopathic survey is based on exactly 132 responses, which equates to 14% of all SoH members.

If, however, we were able to conclude anything at all, it would be that the amateur researchers at the SoH are causing Hahnemann to turn in his grave. Offering telephone/video consultations and working in a beauty salon would probably have annoyed the old man. But what would definitely have made him jump with fury in his Paris grave is a stupid survey like this one.

George Vithoulkas has been mentioned on this blog repeatedly. He is a lay homeopath – one without a medical background – and has, over the years, become an undisputed hero within the world of homeopathy. Yet, Vithoulkas’ contribution to homeopathy research is perilously close to zero. Judging from a recent article in which he outlines the rules of rigorous research, his understanding of research methodology is even closer to zero. Here is a crucial excerpt from this paper, interspersed with a few comments from me in brackets and bold print.

Which are [the] homoeopathic principles to be respected [in clinical trials and meta-analyses]?

1. Homoeopathy does not treat diseases, but only diseased individuals. Therefore, every case may need a different remedy although the individuals may be suffering from the same pathology. This rule was violated by almost all the trials in most meta-analyses. (This statement is demonstrably false; there has even been a meta-analysis of 32 trials that respected this demand.)

2. In the homoeopathic treatment of serious chronic pathology, if the remedy is correct usually a strong initial aggravation takes place []. Such an aggravation may last from a few hours to a few weeks and even then we may have a syndrome-shift and not the therapeutic results expected. If the measurements take place in the aggravation period, the outcome will be classified negative. (Homeopathic aggravations exist only in the mind of homeopaths; our systematic review failed to find proof for their existence.)

This factor was also ignored in most trials []. At least sufficient time should be given in the design of the trial, in order to account for the aggravation period. The contrary happened in a recent study [], where the aggravation period was evaluated as a negative sign and the homoeopathic group was pronounced worse than the placebo []. (There are plenty of trials where the follow-up period is long enough to account for this [non-existing] phenomenon.)

3. In severe chronic conditions, the homoeopath may need to correctly prescribe a series of remedies before the improvement is apparent. Such a second or third prescription should take place only after evaluating the effects of the previous remedies []. Again, this rule has also been ignored in most studies. (Again, this is demonstrably wrong; there are many trials where the homeopath was able to adjust his/her prescription according to the clinical response of the patient.)

4. As the prognosis of a chronic condition and the length of time after which any amelioration set in may differ from one to another case [], the treatment and the study-design respectively should take into consideration the length of time the disease was active and also the severity of the case. (This would mean that conditions that have a short history, like post-operative ileus, bruising after injury, common cold, etc. should respond well after merely a short treatment with homeopathics. As this is not so, Vithoulkas’ argument seems to be invalid.)

5. In our experience, Homeopathy has its best results in the beginning stages of chronic diseases, where it might be possible to prevent the further development of the chronic state and this is its most important contribution. Examples of pathologies to be included in such RCTs trials are ulcerative colitis, sinusitis, asthma, allergic conditions, eczema, gangrene rheumatoid arthritis as long as they are within the first six months of their appearance. (Why then is there a lack of evidence that any of the named conditions respond to homeopathy?)

In conclusion, three points should be taken into consideration relating to trials that attempt to evaluate the effectiveness of homoeopathy.

First, it is imperative that from the point of view of homoeopathy, the above-mentioned principles should be discussed with expert homoeopaths before researchers undertake the design of any homoeopathic protocol. (I am not aware of any trial where this was NOT done!)

Second, it would be helpful if medical journals invited more knowledgeable peer-reviewers who understand the principles of homoeopathy. (I am not aware of any trial where this was NOT done!)

Third, there is a need for at least one standardized protocol for clinical trials that will respect not only the state-of-the-art parameters from conventional medicine but also the homoeopathic principles []. (Any standardised protocol would be severely criticised; a good study protocol must always take account of the specific research question and therefore cannot be standardised.)

Fourth, experience so far has shown that the therapeutic results in homeopathy vary according to the expertise of the practitioner. Therefore, if the objective is to validate the homeopathic therapeutic modality, the organizers of the trial have to pick the best possible prescribers existing in the field. (I am not aware of any trial where this was NOT done!)

Only when these points are transposed and put into practice, the trials will be respected and accepted by both homoeopathic practitioners and conventional medicine and can be eligible for meta-analysis.

___________________________________________________________________

I suspect that what the ‘GREAT VITHOULKAS’ really wanted to express is ‘THE TWO ESSENTIAL PRINCIPLES OF HOMEOPATHY RESEARCH’:

  1. A well-designed study of homeopathy can always be recognised by its positive result.
  2. Any trial that fails to yield a positive finding is, by definition, wrongly designed.

A team from Israel conducted a pragmatic trial to evaluate the impact of so-called alternative medicine (SCAM) treatments on postoperative symptoms. Patients ≥ 18 years referred to SCAM treatments by surgical medical staff were allocated to standard of care with SCAM treatment (SCAM group) or without SCAM. Referral criteria were patient preference and practitioner availability. SCAM treatments included acupuncture, reflexology, or guided imagery. The primary outcome variable was the change from baseline in symptom severity, measured by a Visual Analogue Scale (VAS).

A total of 1127 patients were enrolled: 916 undergoing 1214 SCAM treatments and 211 controls. Socio-demographic characteristics were similar in both groups. Patients in the SCAM group had more severe baseline symptoms. Symptom reduction was greater in the SCAM group compared with controls. No significant adverse events were reported with any of the SCAM therapies.

The authors concluded that SCAM treatments provide additional relief to Standard Of Care (SOC) for perioperative symptoms. Larger randomized control trial studies with longer follow-ups are needed to confirm these benefits.

Imagine a situation where postoperative patients are being asked: “do you want merely our standard care, or do you prefer a lot of extra care, fuss and attention?” Few would opt for the former – perhaps just 211 out of a total of 1127, as in the trial above. Now imagine being one of those patients receiving a lot of extra care and attention; would you not feel better, and would your symptoms not improve faster?

I am sure you have long guessed where I am heading. The infamous A+B versus B design has been discussed often enough on this blog. Researchers using it can be certain that they will generate a positive result for their beloved SCAM – even if the SCAM itself is utterly ineffective. The extra care and attention plus the raised expectations will do the trick, as the sketch below illustrates. If the researchers want to make extra sure that their bogus treatments come out of this study smelling of roses, they can – like our Israeli investigators – omit to randomise patients to the two groups and let them choose according to their preference.
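To see why, here is a minimal simulation sketch; all numbers (the size of the attention/expectation effect, the natural recovery, the noise) are invented for illustration, and the SCAM’s specific effect is deliberately set to zero:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

n_scam, n_control = 916, 211   # group sizes as in the trial above
natural_recovery = 2.0         # postoperative symptoms improve anyway (invented value)
attention_effect = 0.8         # extra care, fuss and raised expectations (invented value)
scam_specific_effect = 0.0     # assume the SCAM itself is completely inert
noise_sd = 2.5

# Simulated reduction in VAS symptom score from baseline for each patient.
control = natural_recovery + rng.normal(0, noise_sd, n_control)
scam = natural_recovery + attention_effect + scam_specific_effect + rng.normal(0, noise_sd, n_scam)

t, p = ttest_ind(scam, control, equal_var=False)
print(f"Mean reduction: SCAM {scam.mean():.2f} vs control {control.mean():.2f}, p = {p:.4g}")
# The A+B arm comes out 'significantly' better in (almost) every run, although
# its specific effect was set to zero -- the design, not the SCAM, produces the result.
```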

To cut a long story short: this study had zero chance to yield a negative result.

  • As such it was not a test but a promotion of SCAM.
  • As such it was not science but pseudoscience.
  • As such it was not ethical but unethical.

WHEN WILL WE FINALLY STOP PUBLISHING SUCH MISLEADING NONSENSE?

Tian Jiu (TJ) therapy is a so-called alternative medicine (SCAM) that has been widely utilized in the management of allergic rhinitis (AR). TJ is also known as “drug moxibustion” or “vesiculating moxibustion.” Herbal patches are applied on the selected acupoints or the diseased body part. In TCM, this treatment is said to regulate the functions of meridians and zang-fu organs, warm the channels, disperse coldness, invigorate qi movement, harmonize nutrient absorption and defence mechanisms, and resolve stagnation in the body and stasis of the blood.

But does it work? This single-blinded, three-arm, randomized controlled study evaluated the efficacy of TJ therapy in AR. A total of 138 AR patients were enrolled. The TJ group and the placebo group both received 4 weeks of treatment with either TJ or placebo patches applied for 2 hours per session. The patches were applied to Dazhui (GV 14), bilateral Feishu (UB 13), and bilateral Shenshu (UB 23) points. Patients received one session per week and then underwent a 4-week follow-up. The waitlist group received no treatment during the corresponding treatment period but was given compensatory TJ treatment in the subsequent 4 weeks.

The primary outcome was the change of the Total Nasal Symptom Score (TNSS) after treatment. The secondary outcomes included the changes of Rhinoconjunctivitis Quality of Life Questionnaire (RQLQ) and rescue medication score (RMS).

After the treatment period, the total TNSS in the TJ group was significantly reduced compared with baseline but showed no statistically significant difference compared with placebo. Among the four domains of the TNSS, only the change in nasal obstruction showed a statistically significant difference compared with the placebo group. The total RQLQ score in the TJ group was significantly reduced compared with both the placebo and waitlist groups. The need for rescue medication did not differ between the groups.

There were no serious adverse events. The common adverse events included flushing, pruritus, blisters, and pigmentation, occurring in 17, 23, 3, and 36 person-times in the TJ group, and 3, 7, 1, and 4 person-times in the placebo group, respectively. These adverse events were generally tolerated and disappeared quickly after removal of the patches.

The authors (from the Hong Kong Chinese Medicine Clinical Study Centre, School of Chinese Medicine, Hong Kong Baptist University) concluded that this randomized, single-blinded, controlled trial served primary evidence of the efficacy and safety of TJ therapy on AR in Hong Kong. This pilot study provided a fundamental TJ protocol for future research. Through adjusting treatment timing, frequency, retention time, and even body response settings, it has the potential to develop into an optimal therapeutic method for future application.

The authors of this poorly written paper seem to ignore their own findings by concluding as they do. The fact is that the primary endpoint of this trial failed to show a significant difference between TJ and placebo. Moreover, TJ does have considerable adverse effects. Therefore, this study fails to demonstrate both the effectiveness and the safety of TJ as a treatment of AR.

PS

I often hesitate whether or not to discuss the plethora of such frightfully incompetent research. The reason I sometimes do it is to alert the public to the fact that so much utter rubbish is published by incompetent researchers in trashy (but Medline-listed) journals, passed by incompetent ethics committees, supported by naïve funding agencies, and accepted by reviewers and editors who evidently do not do their job properly. Have all these people forgotten that they have a responsibility towards the public?

It is time to stop this nonsense!

It gives a bad name to science, misleads the public and inhibits progress.

I have become used to lamentably poor research in the realm of SCAM, particularly homeopathy. Thus, there is little that can amaze me these days; at least this is what I had thought. But this paper is an exception. The new trial is entitled ‘ETHICAL CLINICAL TRIAL OF LESSER KNOWN HOMEOPATHIC REMEDIES IN INFERTILITY IN FEMALES’, and it is truly outstanding. Here is the abstract:

Background & Objective:  Homoeopathy with time honoured results, has a great number of cured cases of infertility, but without much evidence. So, it is imperative to show scientifically the scope of homoeopathy in treating infertility cases. Materials and Methodology: 7 lesser known medicines (Alteris farinosa, Janosia Ashoka, Viburnum opulus, Euphonium, Ustilago, Bacillus sycocuss, Bacillus morgan) were prescribed to the sample size (n=23), at the project site O.P.D/I.P.D. of Homoeopathy university, Saipura, Jaipur and Dr Madan Pratap Khunteta Homoeopathic Medical College, Hospital & Research Centre, Station Road, Jaipur & its extension O.P.D.’s. for study within 12 months. Result-In the present study 7 (30.43%) patients were prescribed Janosia Ashoka amongst whom 2(28.57%) showed marked improvement, while 5(71.43%) remained in the state of status quo. Conclusion- Study has shown encouraging and effective treatment in infertility in females.

It does not tell us much; therefore, let me copy several crucial passages from the paper itself:

Objectives of the study-

  • To study the efficacy of homoeopathic medicines in the treatment of infertility in females.
  • To enhance the knowledge of materia medica in cases of infertility in females.

Material and Methodology-

The study was conducted at O.P.D./I.P.D.of Homoeopathy University, Saipura, Sanganer and Dr M.P.K. Homoeopathic Medical College &Research Centre, Station Road, Jaipur from 2010 to 2013 for a total period of 3 Years. A sample size of n=23 and 7 lesser known remedies were selected for the studies.

Result-

Inferences- Based on clinical symptoms and pathological investigations. It was inferred that out of 23 patients taken for study, 2 (8.69%) patients showed marked improvement, while 21 (91.31%) patients remained in the state of status quo.

_________________________________________________

No, I am not kidding you. There is no further relevant information about the trial methodology or the results. Therefore, I feel unable even to criticise this study; it is too awful for a critique.

As I said: outstanding!

And all this could be quite funny – except, of course, that some nutter will undoubtedly use this paper to claim that there is evidence that homeopathy can effectively treat female infertility.

You have to be a homeopath to call this an ethical trial!
