Prof Frass’ criticism of the Lancet meta-analysis of homeopathy: a rebuttal of a rebuttal

by Norbert Aust as ‘guest blogger’ and Edzard Ernst

Professor Frass has repeatedly stated that his published criticism of the Lancet meta-analysis has never been refuted and that homeopathy is therefore a valid therapy. The last time we heard him say this was during a TV discussion (March 2018), where he claimed that, if one succeeded in scientifically refuting the arguments set out in his paper, one would have demonstrated the ineffectiveness of homeopathy.

In today’s post, we quote the paper Frass refers to, a ‘letter to the editor’ by Frass et al published in the journal Homeopathy (in bold type), and provide our rebuttal of it (in normal print):

Even with careful selection, it remains problematic to compare studies of a pool of 165 for homeopathy vs ~200,000 for conventional medicine. This factor of ~1000 already contains asymmetry.

We see no good reason why this asymmetry poses a problem: it does not conceivably impact on the outcome, nor does it bias the results. In fact, such asymmetries are common in research.

Furthermore, it appears that there is discrimination when publications in English (94/110, 85% in the conventional medicine group vs 58/110, 53% in the homeopathy group) are rated higher quality (Table 2).

We cannot confirm that the table demonstrates such a discrimination, nor do we understand how this would disadvantage homeopathy.

Neither the Summary nor the Introduction clearly specify the aim of the study.

The authors stated that they “analysed trials of homoeopathy and conventional medicine and estimated treatment effects in trials least likely to be affected by bias”. It is hardly difficult to transform this into their aim: the authors aimed at analysing trials of homoeopathy and conventional medicine and estimating treatment effects in trials least likely to be affected by bias.

Furthermore, the design of the study differs substantially from the final analysis and therefore the prolonged description of how the papers and databases were selected is misleading: instead of analysing all 110 studies retrieved by their defined inclusion and exclusion criteria, the authors reduce the number of investigated studies to ‘larger trials of higher quality’. By using these sub-samples, the results seem to differ between conventional medicine and homeopathy.

This statement discloses a misconception of the approach used in the meta-analysis. The meta-analysis of all 110 trials found some advantages of homeopathy. When the authors performed a sensitivity analysis with high-quality and larger studies, this advantage disappeared. The sensitivity analysis was to determine whether the overall treatment effect seen in the initial analysis was real or false-positive. In the case of homeopathy, it turned out to be false-positive (and presumably for this reason, the authors hardly mention it in their paper), whereas for the trials of conventional medicine, it was real. This procedure is in keeping with the authors’ stated aims.
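
For readers unfamiliar with the procedure, here is a minimal sketch, in Python and with entirely invented trial data (not the figures from Shang et al), of how such a sensitivity analysis works: the trials are pooled twice with a standard inverse-variance method, once in full and once restricted to the larger, higher-quality subset, and the two pooled estimates are compared.

```python
import numpy as np

# Invented trial-level results for illustration only (NOT data from Shang et al):
# log odds ratios, their standard errors, and a flag for 'larger, higher quality'.
log_or = np.array([-0.70, -0.50, -0.60, -0.80, -0.05, 0.04, -0.10, 0.08])
se     = np.array([ 0.25,  0.30,  0.28,  0.22,  0.10, 0.12,  0.09, 0.11])
larger_high_quality = np.array([False, False, False, False, True, True, True, True])

def pool_fixed_effect(log_or, se):
    """Inverse-variance (fixed-effect) pooled log odds ratio and its standard error."""
    w = 1.0 / se**2
    pooled = np.sum(w * log_or) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

for label, mask in [("all trials", np.ones(len(log_or), dtype=bool)),
                    ("larger, higher-quality trials", larger_high_quality)]:
    est, est_se = pool_fixed_effect(log_or[mask], se[mask])
    lo, hi = np.exp(est - 1.96 * est_se), np.exp(est + 1.96 * est_se)
    print(f"{label}: OR = {np.exp(est):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With these invented numbers, the apparent advantage seen in the full pool disappears once the analysis is restricted to the more reliable trials, which is precisely the pattern such a sensitivity analysis is designed to detect.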

The meta-analysis does not compare studies of homeopathy vs studies of conventional medicine, but specific effects of these two methods in separate analyses. Therefore, a direct comparison must not be made from this study.

We fail to see the significance of this point in terms of the research question stated by the authors. In any case, Frass et al themselves make direct comparisons above.

However, there remains great uncertainty about the selection of the eight homeopathy and the six conventional medicine studies: the cut-off point seems to be arbitrarily chosen: if one looks at Figure 2, the data look very much the same for both groups. This holds true even if various levels of SE are considered. Therefore, the selection of larger trials of higher quality is a post-festum hypothesis but not a pre-set criterion.

This is not true. Shang et al clearly stated in their paper: “Trials with SE (standard error) in the lowest quartile were considered larger trials.” It is common, reasonable and in keeping with the authors’ aims to conduct sensitivity analyses using a subset of trials that seem more reliable than the average.
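
This criterion is also trivially easy to apply. As a purely illustrative sketch (with invented standard errors, not the actual trial data), the ‘larger trials’ are simply those whose standard error falls in the lowest quartile:

```python
import numpy as np

# Invented standard errors of the trials' log odds ratios (illustration only);
# a smaller standard error generally corresponds to a larger trial.
se = np.array([0.45, 0.12, 0.30, 0.08, 0.51, 0.22, 0.10, 0.38, 0.15, 0.60, 0.09, 0.27])

cutoff = np.percentile(se, 25)            # boundary of the lowest quartile
larger_trials = np.flatnonzero(se <= cutoff)

print(f"SE cutoff (25th percentile): {cutoff:.3f}")
print(f"trials counted as 'larger': {larger_trials}")
```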

The question remains: was the restriction to larger trials of higher quality part of the original protocol or was this a data-driven decision? Since we cannot find this proposed reduction in the abstract, we doubt that it was included a priori.

We are puzzled by this statement and fail to understand why Frass et al insist that this information should have been in the abstract.

However, even if one assumes that this was a predefined selection, there are still some problems with the authors’ interpretation: for larger trials of higher reported methodological quality, the odds ratio was 0.88 (CI 95%: 0.65–1.19) based on eight trials of homeopathy: although this finding does not prove an effect of the study design on the 5% level, neither does it disprove the hypothesis that the results might have been achieved by homeopathy. For conventional medicine, the odds ratio was 0.58 (CI 95% 0.39–0.85), which indicates that the results may not be explained by mere chance with a 5% uncertainty.

As the outcome failed to reach the level of significance, the null hypothesis (“there is no difference”) cannot be rejected; in other words, these trials provide no evidence for the effectiveness of homeopathy. The comment by Frass et al seems to be based on a misunderstanding of how science operates.
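
To spell the logic out: with the usual normal approximation on the log scale, a 95% confidence interval for the odds ratio that includes 1 is the same statement as a two-sided p-value of 0.05 or more:

```latex
1 \in \exp\!\bigl(\widehat{\ln \mathrm{OR}} \pm 1.96\,\mathrm{SE}\bigr)
\;\Longleftrightarrow\;
\bigl|\widehat{\ln \mathrm{OR}}\bigr| \le 1.96\,\mathrm{SE}
\;\Longleftrightarrow\;
|z| = \frac{\bigl|\widehat{\ln \mathrm{OR}}\bigr|}{\mathrm{SE}} \le 1.96
\;\Longleftrightarrow\;
p \ge 0.05
```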

Although the authors acknowledge that ‘to prove a negative is impossible’ the authors clearly favour the view that there is evidence that homoeopathy exhibits no effect beyond the placebo-effect. However, this conclusion was drawn after a substantial modification of the original protocol which considerably weakens its validity from the methodological point of view. After acquiring the trials by their original inclusion- and exclusion criteria they introduced a further criterion, ‘larger trials of higher reported methodological quality’. Thus, eight trials (=46% of the larger trials) in the homoeopathy group were left and only six (32%) in conventional medicine group (an odds ratio of 0.75 in favour of homoeopathy).

As explained above, the authors’ reasoning was clear and rational; it did not follow the logic suggested by Frass et al. This confirms our suspicion, already voiced above, that Frass et al misunderstood the concept of the Shang meta-analysis.

But the decisive point is that it is unlikely that these six trials are still matched to the eight samples of homoeopathy (although each of the 110 in the original was matched). Consequently, one cannot conclude that these trials are still comparable. Thus, any comparisons of results between them are unjustified.

Further evidence that Frass et al misunderstood the concept of the Shang meta-analysis.

The rationale for this major alteration of the study protocol was the assumption, that these larger, higher quality trials are not biased, but no evidence or databased justification is given. Neither the actual data (odds ratio, matching parameters…) nor a funnel plot (to indicate that there is no bias) of the final 14 trials are supplied although these parameters constitute the ground of their conclusion.

Further evidence that Frass et al misunderstood the concept of the Shang meta-analysis.

The other 206 trials (94% of the originally selected according to the protocol) were discarded because of possible publication biases as visualized by the funnel plots. However, the use of funnel plots is also questionable. Funnel plots are thought to detect publication bias, and heterogeneity to detect fundamental differences between studies.

Further evidence that Frass et al misunderstood the concept of the Shang meta-analysis.

New evidence suggests that both of these common beliefs are badly flawed. Using 198 published meta-analyses, Tang and Liu demonstrate that the shape of a funnel plot is largely determined by the arbitrary choice of the method to construct the plot. When a different definition of precision and/or effect measure was used, the conclusion about the shape of the plot was altered in 37 (86%) of the 43 meta-analyses with an asymmetrical plot suggesting selection bias. In the absence of a consensus on how the plot should be constructed, asymmetrical funnel plots should be interpreted cautiously.

Further evidence that Frass et al misunderstood the concept of the Shang meta-analysis.

These findings also suggest that the discrepancies between large trials and corresponding meta-analyses and heterogeneity in meta-analyses may also be determined by how they are evaluated. Researchers tend to read asymmetric funnel plots as evidence of publication bias, even though meta-analyses without publication bias frequently have asymmetric plots and meta-analyses with publication bias frequently have symmetric plots, simply due to chance.

Perhaps we should mention that the senior author of the Lancet meta-analysis, Matthias Egger, is the clinical epidemiologist who developed the standard regression test for funnel-plot asymmetry (Egger’s test) and certainly knows how to use and interpret funnel plots.
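
For readers who wish to see what a formal asymmetry check looks like, here is a minimal sketch of the widely used Egger regression test (the standardised effect is regressed on precision; an intercept far from zero suggests funnel-plot asymmetry). The data are randomly generated for illustration; this is not a reconstruction of the analysis in the Lancet paper.

```python
import numpy as np
import statsmodels.api as sm

# Randomly generated trial results for illustration only.
rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.50, size=30)        # standard errors of log odds ratios
log_or = rng.normal(loc=-0.2, scale=se)      # common true effect, no bias built in

std_effect = log_or / se                     # standardised effect (z-score)
precision = 1.0 / se

# Egger test: regress the standardised effect on precision;
# the intercept estimates the degree of funnel-plot asymmetry.
fit = sm.OLS(std_effect, sm.add_constant(precision)).fit()
print(f"intercept = {fit.params[0]:.2f}, p-value = {fit.pvalues[0]:.2f}")
```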

Use of funnel plots is even more unreliable when there is heterogeneity. Apart from the questionable selection of the samples there is a further aspect of randomness which further weakens their conclusion: the odds ratio of the eight trials of homoeopathy was 0.88 (CI 0.65–1.19), which might be significant around the 7–8% level. Actually, the reader might be interested to know at which exact level homeopathy would have become significant. Thus, there is no support of their conclusion any more when you shift the level of significance by mere, say 2–3%.

What number of grains is required to make a heap? Certainly there is such a limit: five grains are not a heap, five billion are. But whatever specific value you select, you will find it hard to explain why one grain fewer turns a heap into a mere number of grains. The same applies here. If p = 0.05 is the limit of significance, then p = 0.05001 is not significant, let alone a p-value that is 2–3% higher than that.
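
For what it is worth, the approximate p-value behind a reported odds ratio and its 95% confidence interval can be reconstructed with a back-of-the-envelope normal approximation on the log scale; the sketch below uses the figures quoted above as inputs and is only a rough check, not a re-analysis of the original data.

```python
import numpy as np
from scipy.stats import norm

def p_from_or_ci(odds_ratio, ci_low, ci_high):
    """Approximate two-sided p-value for H0: OR = 1, reconstructed from a reported
    odds ratio and its 95% CI (normal approximation on the log scale)."""
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    z = np.log(odds_ratio) / se
    return 2 * norm.sf(abs(z))

# Figures as quoted above: homeopathy 0.88 (0.65-1.19), conventional 0.58 (0.39-0.85).
print(f"homeopathy:            p ~ {p_from_or_ci(0.88, 0.65, 1.19):.2f}")
print(f"conventional medicine: p ~ {p_from_or_ci(0.58, 0.39, 0.85):.3f}")
```

Within the limits of this approximation, the homeopathy estimate is nowhere near the 5% threshold, whereas the conventional-medicine estimate is well below it.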

In addition, with such controversial hypotheses the scientific community would tend to use a level of significance of 1% in which case the odds ratio of the conventional studies would not be significant either.

The level of 5% is commonly applied in medical research; it is the accepted standard. Frass et al apply it in their own studies too; yet here they want to change it. Why? To suit their preconceived ideas?

From a statistical point of view, the power of the test, considering the small sample sizes, should have been stated, especially in the case of a nonsignificant result.

This might have been informative but is rarely done in meta-analyses.
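
Where such a figure is wanted, the approximate power of a two-sided Wald test for a pooled odds ratio can be sketched in a few lines; the assumed ‘true’ odds ratio and standard error below are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def wald_power(true_or, se, alpha=0.05):
    """Approximate power of a two-sided Wald test of H0: OR = 1, given a
    hypothesised true odds ratio and the standard error of the pooled log(OR)."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = abs(np.log(true_or)) / se
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

# Illustrative numbers only -- not taken from the meta-analysis:
print(f"power to detect OR = 0.75 at SE(log OR) = 0.15: {wald_power(0.75, 0.15):.2f}")
```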

Above all, the choice of which trials are to be evaluated is crucial. By choosing a different sample of eight trials (eg the eight trials in ‘acute infections of the upper respiratory tract’, as mentioned in the Discussion section) a radically different conclusion would have had to be drawn (namely a substantial beneficial effect of homeopathy—as the authors state).

Further evidence that Frass et al misunderstood the concept of the Shang meta-analysis.

The authors may not be aware that larger trials are usually not ‘classical’ homeopathic interventions, because the main principle of homeopathy, individualization are difficult to apply in large trials. In this respect, the whole study lacks sound understanding of what homeopathy really is.

This is a red herring. Firstly, the authors did not aim to evaluate individualised homeopathy. Secondly, Frass et al know very well that clinical homeopathy is not individualised and is regarded as entirely legitimate by homeopaths. And finally, the largest trial included in Mathie’s review of individualised homeopathy had 251 participants.


So, why has so far no rebuttal of this ‘letter to the Editor’ been published? We suspect that the journal Homeopathy has little incentive to publish a critical response, and critics of homeopathy have even less motivation to submit one to this journal. Other journals have no reason at all to pursue a discussion started in ‘Homeopathy’. In other words, Frass et al were safe from any rebuttal – until today, that is.
