
Yesterday, I wrote about a new acupuncture trial. Amongst other things, I wanted to find out whether the author who had previously insisted I answer his questions about my view on the new NICE guideline would himself answer a few questions when asked politely. To remind you, this is what I wrote:

This new study was designed as a randomized, sham-controlled trial of acupuncture for persistent allergic rhinitis in adults and investigated possible modulation of mucosal immune responses. A total of 151 individuals were randomized into real and sham acupuncture groups (who received twice-weekly treatments for 8 weeks) and a no-acupuncture group. Various cytokines, neurotrophins, proinflammatory neuropeptides, and immunoglobulins were measured in saliva or plasma from baseline to the 4-week follow-up.

Statistically significant reduction in allergen specific IgE for house dust mite was seen only in the real acupuncture group. A statistically significant down-regulation was also seen in the pro-inflammatory neuropeptide substance P (SP) 18 to 24 hours after the first treatment. No significant changes were seen in the other neuropeptides, neurotrophins, or cytokines tested. Nasal obstruction, nasal itch, sneezing, runny nose, eye itch, and unrefreshed sleep improved significantly in the real acupuncture group (post-nasal drip and sinus pain did not) and continued to improve up to the 4-week follow-up.

The authors concluded that acupuncture modulated mucosal immune response in the upper airway in adults with persistent allergic rhinitis. This modulation appears to be associated with down-regulation of allergen specific IgE for house dust mite, which this study is the first to report. Improvements in nasal itch, eye itch, and sneezing after acupuncture are suggestive of down-regulation of transient receptor potential vanilloid 1.

…Anyway, the trial itself raises a number of questions – unfortunately I have no access to the full paper – which I will post here in the hope that my acupuncture friend, who is clearly impressed by this paper, might provide the answers in the comments section below:

  1. Which was the primary outcome measure of this trial?
  2. What was the power of the study, and how was it calculated?
  3. For which outcome measures was the power calculated?
  4. How were the subjective endpoints quantified?
  5. Were validated instruments used for the subjective endpoints?
  6. What type of sham was used?
  7. Are the reported results the findings of comparisons between verum and sham, or verum and no acupuncture, or intra-group changes in the verum group?
  8. What other treatments did each group of patients receive?
  9. Does anyone really think that this trial shows that “acupuncture is a safe, effective and cost-effective treatment for allergic rhinitis”?

In the comments section, the author wrote: “after you have read the full text and answered most of your questions for yourself, it might then be a more appropriate time to engage in any meaningful discussion, if that is in fact your intent”, and I asked him to send me his paper. As he does not seem to have the intention to do so, I will answer the questions myself and encourage everyone to have a close look at the full paper [which I can supply on request].

  1. The myriad of lab tests were defined as primary outcome measures.
  2. Two sentences are offered, but they do not allow me to reconstruct how this was done.
  3. No details are provided.
  4. Most were quantified with a 3-point scale.
  5. Mostly not.
  6. Needle insertion at non-acupoints.
  7. The results are a mixture of inter- and intra-group differences.
  8. Patients were allowed to use conventional treatments and the frequency of this use was reported in patient diaries.
  9. I don’t think so.

So, here is my interpretation of this study:

  • It lacked power for many outcome measures, certainly the clinical ones.
  • There were hardly any differences between the real and the sham acupuncture group.
  • Most of the relevant results were based on intra-group changes rather than on comparisons of sham with real acupuncture, a fact which is obfuscated in the abstract.
  • In a controlled trial, fluctuations within one group must never be interpreted as caused by the treatment.
  • There were dozens of tests for statistical significance, and there seems to be no correction for multiple testing (a brief simulation of this problem follows after this list).
  • Thus the few significant results that emerged when comparing sham with real acupuncture might easily be false positives.
  • Patient-blinding seems questionable.
  • McDonald, as the only therapist in the study, might be suspected of having influenced his patients through verbal and non-verbal communication.
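
To illustrate the multiple-testing point above, here is a minimal simulation of my own (not anything taken from the paper). It assumes roughly 40 independent significance tests with no true effect anywhere – both the number and the independence are illustrative assumptions – and shows how often at least one test comes out "significant" at the 5% level, with and without a Bonferroni correction.

```python
# Minimal simulation of the multiple-testing problem: with dozens of
# uncorrected significance tests and NO true effects, how often does at least
# one test come out "significant"? The figure of 40 tests and their
# independence are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(seed=1)
n_tests, alpha, n_sim = 40, 0.05, 10_000

# Under the null hypothesis, every p-value is uniformly distributed on [0, 1].
p_values = rng.uniform(size=(n_sim, n_tests))

fwer_uncorrected = np.mean((p_values < alpha).any(axis=1))
fwer_bonferroni = np.mean((p_values < alpha / n_tests).any(axis=1))

print(f"Analytic chance of >=1 false positive: {1 - (1 - alpha) ** n_tests:.2f}")  # ~0.87
print(f"Simulated, uncorrected:                {fwer_uncorrected:.2f}")           # ~0.87
print(f"Simulated, Bonferroni-corrected:       {fwer_bonferroni:.2f}")            # ~0.05
```

In the actual paper the tests are correlated and not exactly 40 in number, so the precise figure would differ, but the qualitative point stands: with dozens of uncorrected tests, a handful of "significant" p-values is exactly what chance alone predicts.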

I am sure there are many more flaws, particularly in the stats, and I leave it to others to identify them. The ones I found are, however, already serious enough, in my view, to call for a withdrawal of this paper. Essentially, the authors seem to have presented a study with largely negative findings as a trial with positive results showing that acupuncture is an effective therapy for allergic rhinitis. Subsequently, McDonald went on social media to inflate his findings even more. One might easily ask: is this scientific misconduct or just poor science?

I would be most interested to hear what you think about it [if you want to see the full article, please send me an email].

57 Responses to Acupuncture for allergic rhinitis: scientific misconduct or just poor science?

  • In terms of the outcome measurements, the following are listed:

    Total and Specific Antibodies
    Cytokines
    Neuropeptides and Neurotrophins
    Eosinophilic Cationic Protein
    Instantaneous Total Nasal Symptom Score
    Mini-Rhinoconjunctivitis Quality of Life Questionnaire
    Daily Symptom and Medication Diary
    Mean Inferior Turbinate Obstruction
    Peak Nasal Inspiratory Flow

    The Daily Symptom and Medication Diary consisted of the binary recording of seven symptoms:

    Participants completed a daily symptom and medication diary from week 1 to week 12. The presence or absence of 7 symptoms was recorded daily: nasal itch, eye itch, sneezing, runny nose, postnasal drip, unrefreshed sleep, and sinus pain.

    Other initial observations:

    No demographic data and characteristics of the participants
    Only partial data on baseline measurements
    The dropout rate seems to be high (~25%) but no details given

    Intriguingly, the paper states:

    There were no statistically significant differences between groups at baseline in sex, age, total IgE, allergen specific IgE, NGF, BDNF, SP, VIP, CGRP, ECP, eotaxin, symptom severity, PNIF or MITO. However, there were some significant between-group differences in IL-2, IL-4, IL-10, IL-12 (p70), and IFN-g, with the no acupuncture group having significantly higher levels of these cytokines than the sham acupuncture group. Because these cytokines had no significant changes after treatment, these baseline differences are unlikely to have had any effect on the results of the study.

    • “the paper states”

      You are a liar! The paper does not states this. The original quote is this:

      “There was no correlation between objective measures of nasal patency and subjective measures of perceived nasal obstruction at any time point… Acupuncture treatment for persistent allergic rhinitis appeared to have a significant effect in decreasing the total IgE and allergen specific IgE for house dust mite, which persisted at 4-week followup.

      • Alan Henness quoted from the Results section of the paper. Egger quoted from:

        Discussion section
        “There was no correlation between objective measures of nasal patency and subjective measures of perceived nasal obstruction at any time point (Table 6). Lack of correlation between objective and subjective measures of nasal obstruction has been previously reported.”

        Conclusions section
        “Acupuncture treatment for persistent allergic rhinitis appeared to have a significant effect in decreasing the total IgE and allergen specific IgE for house dust mite, which persisted at 4-week follow-up.”

      • Egger said:

        “the paper states”

        You are a liar! The paper does not states this. The original quote is this:

        “There was no correlation between objective measures of nasal patency and subjective measures of perceived nasal obstruction at any time point… Acupuncture treatment for persistent allergic rhinitis appeared to have a significant effect in decreasing the total IgE and allergen specific IgE for house dust mite, which persisted at 4-week followup.“

        Now, now, Egger. Please try to be civil.

        I quoted this from the paper:

        There were no statistically significant differences between groups at baseline in sex, age, total IgE, allergen specific IgE, NGF, BDNF, SP, VIP, CGRP, ECP, eotaxin, symptom severity, PNIF or MITO. However, there were some significant between-group differences in IL-2, IL-4, IL-10, IL-12 (p70), and IFN-g, with the no acupuncture group having significantly higher levels of these cytokines than the sham acupuncture group. Because these cytokines had no significant changes after treatment, these baseline differences are unlikely to have had any effect on the results of the study.

        Here it is:

        Yet I cannot find the exact text you quoted. Perhaps you could provide your copy so we can compare them?

        • I know a man called Egger; he is Prof in Bern. He is civil.
          And then I see another Egger; the one who occasionally posts nonsense on my blog. He seems unable to be civil.

      • @Egger
        “…which persisted at 4-week followup.”

        I hadn’t spotted this one before. The paper provides IgE data for weeks 0 and 12. There is no mention in the results or experimental design of any samples taken for IgE measurements at a 4-week follow-up!

      • Sorry, I’m mistaken: the 12-week data are those for the 4-week follow-up. My objection remains that writing that something ‘persists’ suggests measurements at more than a single time point.

  • There appear to be some worrying aspects to this work.
    Selection was for multiple allergies, so some patients may not have had an allergy to house dust mite at all, yet results for HDM are all that are reported.

    As Professor Ernst notes, there was no blinding of the acupuncturist. There is also the strange comment that no sham procedure is inert.

    The specific IgE results show a statistically significant difference between baseline and week 12 for the active arm, 19 (10–28) vs 18 (10–26), but no significant difference between the sham and active groups at baseline, 31 (20–42) vs 19 (10–28). As Frank Odds mentioned in the previous article, recording these particular results to four digits of precision is remarkable.

    No mention is made of correction for multiple p value determinations.

    All in all, the methodology and the results do not merit the conclusions and certainly not the press release. Some of the statistical results are surprising and require some explanation.

  • You are absolutely right Professor Ernst. This paper is an example of blatant misuse of the scientific toolkit to produce something resembling results. I obtained a full copy and read through it.

    The method – or let us rather call it a mistake, because I believe this kind of scientific misconduct seldom entails intentional deception – exemplified here is unfortunately very common. It involves both inappropriate study design and erroneous use of statistics, i.e. omitting to apply a correction for multiple tests when doing an extraordinarily large number of significance tests.
    Think of it as carpet-bombing research.
    This grand fallacy, so frequently encountered, was intentionally and brilliantly demonstrated by John Bohannon et al., who did a study of the effect of a piece of chocolate a day on multiple outcomes. They simply selected their hypothesis after the fact, thereby pretending to have demonstrated that chocolate helped with weight loss.
    That is why one of the mainstays of science is testing an a priori hypothesis.
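
As a small, purely hypothetical illustration of the correction Björn mentions (the p-values below are invented for the example, not taken from the paper), here is how a Holm adjustment changes which of a batch of results survive:

```python
# Purely hypothetical illustration of a multiple-testing correction: the raw
# p-values below are invented and are NOT taken from the paper.
from statsmodels.stats.multitest import multipletests

raw_p = [0.004, 0.03, 0.04, 0.20, 0.45, 0.60, 0.71, 0.83]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method='holm')

for p, pa, sig in zip(raw_p, adj_p, reject):
    print(f"raw p = {p:.3f}   Holm-adjusted p = {pa:.3f}   significant: {sig}")
# Only the smallest raw p-value survives; the 0.03 and 0.04 results, which look
# "significant" in isolation, do not once the whole family of tests is counted.
```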

    • Björn

      As I suggested in my comment under the first post on this yesterday, we needed the full paper to understand the various dates.

      The trial registration gives the registration date as 26 May 2009, but the trial was ‘Retrospectively registered’ 18 months later, on 30 November 2010. The paper tells us that recruitment took place between October 2009 and May 2012 and that the trial was completed in August 2012 – so the trial was registered more than a year into recruitment.

      • Bjorn,

        “or let us rather call it mistake because I believe this kind of scientific misconduct seldom entails intentional deception”

        My experience at the NICM tells me that there is a 99% likelihood that this “mistake” was intentional. Why? My former colleagues at the NICM are highly intelligent, experienced people who know their stuff. The co-author on this paper, CA Smith, has decades of experience in clinical trials etc., and she was even named researcher of the year in 2015: “Awarded for her excellence in research, Professor Caroline Smith has made a sustained and significant contribution to establishing the evidence base of acupuncture….”

        http://nicm.edu.au/news/excellence_in_research_-_researcher_of_the_year_awarded_to_professor_caroline_smith

        My golden rule: as soon as I see the NICM affiliation on any scientific publication, I tend to look extremely cautiously and sceptically at the results, and especially at the interpretation of the results. There are many examples where they “misrepresent” their results.

          • Fraud, misconduct or mistake? It is difficult to see how any researcher could draw this author’s conclusions given the data. However, the incompetence is clear, and it matches the general incompetence I see in the vast majority of workers in pseudomedicine. These people are just not trained to evaluate their own work critically.

          • It would be good if McDonald responded to our questions – after all, he was soooooo insistent that I respond to his questions a few days ago. If he doesn’t respond, I will have to write to the editor of the journal and ask him to withdraw the paper.

          • Always difficult, and unfortunately they use exactly this argument when they are caught out: “We did not know”. When you look at the link you will see that the co-author, Smith, was awarded researcher of the year for her decades of experience in clinical trials etc. She should know. But the only way to ascertain whether this is indeed intentional will be to have a look at all of the NICM’s acupuncture clinical trial results (about 10) and see if there is a tendency to do this sort of thing. I might be a bit biased, so I invite anyone to have a look.

  • I posted this comment in the earlier thread about this paper before I read Edzard’s post today. I’m linking to it now as it seems to complement the other comments made so far here.

  • I am still mystified as to how they can derive significance from baseline to 12 weeks for the active arm of the house dust mite specific IgE. This is important, as it forms a major part of the discussion and the conclusions.

    The specific IgE result at baseline is 19, 95% confidence limits (10–28).
    The week-12 result is 18 (10–26).

    It does not seem possible that there is a statistically significant difference, and certainly not a clinically relevant one.

    The standard deviations of these populations are 25.0 and 22.3 respectively. Being rusty in these methods, I checked, using an online confidence-limit calculator, that these standard deviations do indeed reproduce the given confidence limits.

    Personally, I wouldn’t bother testing for significance given the wide variance, but here goes: plugging in the numbers, I get a significance of 0.87, or rather an insignificant difference, a result quite different from the author, who calculates it at <0.05. I would be grateful if someone else checked my results with their own calculation (a rough sketch of one such check follows below).

    The wide variation would be expected from a population that may have contained patients who were not sensitive to house dust mite. Also the results of four patients have been somehow lost.
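
For what it is worth, the check described above can be reproduced approximately from the published means and 95% confidence intervals alone. The sketch below assumes normal-approximation intervals, roughly 30 participants per group (an assumption; the exact per-group n is not quoted here), and treats the two time points as independent samples, as the comment does:

```python
# Rough reconstruction of the check above: compare the 'real acupuncture'
# group's HDM-specific IgE at baseline, 19 (95% CI 10-28), with week 12,
# 18 (95% CI 10-26), treated as independent samples.
# Assumptions (not stated in the paper as quoted here): normal-approximation
# CIs and roughly 30 subjects per time point.
from math import sqrt
from scipy import stats

def se_from_ci(lo, hi, z=1.96):
    """Standard error of the mean implied by a 95% confidence interval."""
    return (hi - lo) / (2 * z)

n = 30                                    # assumed group size
m1, se1 = 19, se_from_ci(10, 28)          # baseline
m2, se2 = 18, se_from_ci(10, 26)          # week 12

sd1, sd2 = se1 * sqrt(n), se2 * sqrt(n)   # ~25 and ~22, matching the comment
t = (m1 - m2) / sqrt(se1**2 + se2**2)
p = 2 * stats.t.sf(abs(t), df=2 * n - 2)

print(f"SD estimates: {sd1:.1f}, {sd2:.1f}")
print(f"t = {t:.2f}, p = {p:.2f}")        # p comes out around 0.87, not < 0.05
```

A proper paired analysis would need the within-subject correlation, which is not reported, so this is only indicative; still, it is hard to see how a drop from a mean of 19 to a mean of 18 against standard errors of this size could reach p < 0.05.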

    • @Acleron

      You are right to be mystified. Equally mystifying is the 19 result (or 18.87, as the authors would have it) at week 0 for the ‘real acupuncture’ group. In the ‘no acupuncture’ group it was 30 (95% CI 18, 41) and in the ‘sham acupuncture’ group it was 31 (20, 42). Even these fairly large baseline differences from the ‘real acupuncture’ group are not statistically significant by t-test. But a reduction from a mean of 19 to a mean of 18, wow! That’s evidence that acupuncture is effective therapy for allergic rhinitis.

      The paper is statistically incompetent and bears signs of carelessness with detail. The study design is flawed. Like Edzard says, the paper should be withdrawn from publication.

  • Acupuncture is a safe and effective treatment, with significant improvements in clinical symptoms, quality of life and a reduction in use of relief medications.1

    1 McDonald, JL. 2014, ‘ The Effects of Acupuncture on Mucosal Immunity in the Upper Respiratory Tract’, PhD thesis, Griffith University
    END OF QUOTE

    I just picked this up from an acupuncture site [http://www.kiacupuncture.com.au/challenge.html].
    Interesting how they interpret his data, I find.

    • The terms and conditions applying to the 8 week hay fever challenge (point 5 contradicts the first three points somewhat) (http://www.kiacupuncture.com.au/terms.html)

      8 Week hayfever challenge (Commenced September 2015)

      1. All 16 acupuncture treatments must be purchased at $85 each (total $1360) at the time of the initial consultation.
      2. All 16 acupuncture treatments to be booked at the time of the initial consultation.
      3. Customers must attend a minimum of 2 acupuncture treatments per week for 8 consecutive weeks.
      4. Symptom questionnaire must be completed at the time of each treatment to track symptom improvement.
      5. Customers completing less than 8 or more than 9 treatments will not be eligible for a refund. ????
      6. Herbs, supplements and other products purchased during the course of treatment will not be subject to refund as part of the 8 week hayfever challenge.
      7. Customers who comply with all the aforementioned conditions and show no improvement at all (as measured by self reporting of symptoms during treatment sessions) in allergic rhinitis (hayfever) symptoms by the completion of the 8th treatment, will be eligible for a complete refund of the $1360 and acupuncture treatment will cease at that time.

      At the end of the day, it always seems as if money plays the biggest role in a CM scientific result!

  • I do wonder if this is the first time McDonald’s paper has actually come under any sort of scrutiny or whether it’s just been uncritically accepted by needlers everywhere?

    • AAAI is claimed to be peer-reviewed, and as it comes out monthly, the delays you observed would indicate that some changes were made, just not enough.

      Whether it is uncritically accepted by acupuncturists? Lol.

  • I tried to email Prof Smith about this, but the email bounced. does anyone have her current email address?

  • There is yet another problem: the patient selection and disposal.

    Selection was for either HDM or grass-pollen allergy. Why should acupuncture affect one and not the other?

    Disposal
    152 were screened, 142 were randomised, but 151 commenced treatment; so 10 were lost. The numbers were made up by recycling 9 from the no-treatment group, but it appears that these were not randomised. This becomes important when the specific IgE results for the no-acupuncture group are so different from those of the active arm: where were these 9 placed? The statistics, such as they are, depend on the patients and measurements being independent. Having the same patients in more than one arm negates the mathematics.

  • As McDonald does not respond, I just sent this email to one of the senior authors:

    Dear Prof Smith,

    the recent study of acupuncture for allergic rhinitis is the source of some concern. Details can be found here [http://edzardernst.com/2016/05/acupuncture-for-allergic-rhinitis-a-case-of-scientific-misconduct/] and in the comments following my article. As the 1st author does not seem to want to respond, could I ask you to reply to the post and the comments made by others, please? You can do this by responding to this email or by posting comments on the blog.

    Many thanks for your efforts and best regards

    E Ernst

  • Here are some of my concerns about the paper…

    1. It is a very confusing paper. I think my confusion is perhaps due to its meandering obfuscations rather than due to a failure on my part to grasp the essence of clearly written medical reports.

    2. The endless repetition of the word “significant” applied to sample sizes in the thirties flies in the face of everything I’ve learnt about test and measurement.

    3. “The study excluded adults with asthma, but it is estimated that 40% of those with allergic rhinitis also have asthma and as many as 80% to 95% of those with allergic asthma have concomitant rhinitis.” So why did the authors choose asthma as an exclusion criterion, and why did they omit this from the Abstract (the section that mentions the “American College of Allergy, Asthma & Immunology”)?

    4. Table 1 is bizarre. Why does the “real acupuncture” group have a very much lower level of both total IgE and house dust mite IgE at Week 0? This is the only aspect of the paper that I think is worthy of the term “significant”. From my engineering perspective, such a large initial discrepancy between the groups would automatically nullify the test results, therefore the test would be terminated at this point rather than wasting time and money by continuing with the test. I shan’t repeat the details provided by Frank Odds on Saturday 28 May 2016 at 09:07.

    5. The paper contains numeric values that have a precision much greater than their accuracy. I was instructed during high school maths and science classes that, in any profession, doing this is regarded as a strong indicator of incompetence.

    • @Pete Attkins

      Re your point 5. I, too, was taught at an early age about precision and accuracy, but it is amazing how, in a remarkable number of scientific journals, authors seem to be oblivious of the difference. “I just copied the readout from the machine” is the excuse most commonly trotted out. I don’t regard this as a sign of incompetence: it’s worse. It indicates the experimenter doesn’t think about what they’re doing and saying/writing.

      • Frank, It’s the same with those who use spreadsheets — every result formatted to (usually) two decimal places with no thought given to what the numbers actually mean. It’s a lack of thought and a dire lack of numeracy skills.

        • @Pete Attkins

          “George, Your semantic filibustering is nothing other than semantic filibustering.”

          I am beginning to detect a trend in your writing style. It is opaque, unclear, contradictory and extremely peculiar. It is obvious that you should not meddle in highly academic conversations.

          The conversations you initiate show that you are only concerned with yourself. If that makes you happy, I accept it. However, I would not like you to become a bitter, frustrated old gentleman who expresses himself in oracular fashion just to get a little excitement into his modest life. Please learn to offer well-balanced and well-founded criticism in future. I do not appreciate your conclusions.

          Have a nice
          evening

    • “5. The paper contains numeric values that have a precision much greater than their accuracy. I was instructed during high school maths and science classes that, in any profession, doing this is regarded as a strong indicator of incompetence.”

      I was interested in the judgement in number 5. “Precision” is when repeated observations are close together; “accuracy” is when the observations are close to the actual value of the quantity. In theory, we know that one can have precision without accuracy, but what are the numeric values that you refer to in that statement? It becomes even more confusing when the reviewer doesn’t provide a clear, detailed rationale to accompany the statement.

      Bests

      • @George

        It was really the implied precision of the data that concerned me. To express a mean as, for example, 408.74 implies that, if additional data were available, the mean would still be 408.74 to the second decimal place. Since the standard error of the mean was given as 299.12 (standard deviation ~1700), this is highly improbable. Two-digit precision would be more appropriate (mean 410, SE 300, SD 1700). The change should not affect the calculated p-values.
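
A trivial way to avoid this kind of false precision is to round the reported mean to the precision justified by its standard error. The helper below merely illustrates the convention described above (keep the mean to the decimal place set by one or two significant figures of the SE); it is not anything used in the paper, and the function name is my own:

```python
# Small illustration of trimming false precision: report a mean only to the
# precision justified by its standard error. Rounding to the SE's leading
# significant figure(s) is a common convention, not a rule from the paper.
from math import floor, log10

def round_to_uncertainty(mean, se, sig_figs=1):
    """Round mean and SE to the decimal place set by the SE's significant figures."""
    if se == 0:
        return mean, se
    decimals = -int(floor(log10(abs(se)))) + (sig_figs - 1)
    return round(mean, decimals), round(se, decimals)

print(round_to_uncertainty(408.74, 299.12, sig_figs=2))  # -> (410.0, 300.0)
print(round_to_uncertainty(408.74, 299.12, sig_figs=1))  # -> (400.0, 300.0)
```

With two significant figures of the SE this reproduces the “mean 410, SE 300” suggested above; with one it gives 400 and 300.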

      • “False precision (also called overprecision, fake precision, misplaced precision and spurious accuracy) occurs when numerical data are presented in a manner that implies better precision than is actually the case; since precision is a limit to accuracy, this often leads to overconfidence in the accuracy as well.[1]”
        https://en.wikipedia.org/wiki/False_precision

        1. “Implausibly precise statistics…are often bogus.” — John Allen Paulos, A Mathematician Reads the Newspaper (Anchor, 1995), p. 139.
        http://www.fallacyfiles.org/fakeprec.html

  • Perhaps someone who has a PubMed Commons login should leave a comment under the paper’s entry there?

  • I have now read the full paper. As Prof Ernst says, the glaring problem with this paper is that almost all the statistically significant results are from within group changes over time i.e. what you would get from an uncontrolled trial, and there is no correction for multiple comparisons.

    Apart from the substance P result, the few significant biochemical results are clearly meaningless false positives from testing 16 substances at many intervals and not correcting for multiple comparisons. For example they report a significant reduction in specific IgE for Bahia grass in the no acupuncture group, which they describe as “unexplained” but I think shows they are detecting random variations in biomarkers.

    There appears to be an error in their analysis of the Total IgE. They state there was no statistically significant difference between groups at week zero and 12 but significant within group differences between week 1 and 12. It is hard to see how the between group difference between baseline IgE of 338 vs 218 is considered insignificant while the within group change from 218 to 192 is considered statistically significant. These within group changes are 3 to 6 times smaller than baseline differences between groups. For all three groups total IgE dropped by between 9.3% and 11.8% yet the 9.3% in the no acupuncture group is described as “but no significant difference” while 10.0% and 11.8% are described as significant. Similarly house dust mite specific IgE baselines ranging from 18.87 to 31.22 are described as not significantly different yet a drop from 18.87 to 17.82 (5.6% or about 1.3 x the standard error by my calculation) is described as significant.

    My estimate is that the group baseline total IgE and house dust mite specific IgE are statistically significantly different, while the reductions of between 9 and 12% in total IgE are similar to each other and do not reflect a treatment effect. Even if they are statistically significant, they are far smaller than the between-group baseline differences and so cannot be meaningful. My rough calculations find the house dust mite specific IgE drop in the real acupuncture group to be less than 1.5 times the standard error, even without correction for multiple comparisons, but I am no expert.

    Examining table 4, the claimed daily-diary clinical improvements all result from the real acupuncture group’s having worse mean symptoms in week one. That group’s best score was fractionally worse than weeks 8 and 9 of the no-intervention group, and the confidence intervals of the final week’s mean scores for the three groups overlap substantially, showing that the difference is not statistically significant. The group with the best questionnaire symptoms at week 12 was the sham acupuncture group. Other problems with these data include: the lack of a pre-treatment diary score as an actual baseline; a 40% dropout rate for the diary, which gives potential for the diary keepers to be a biased, self-selected sample; and the crudeness of the underlying question (yes/no for seven symptoms is a very crude measure even when totalled over a week).

    The Mini RQLQ data is a little more convincing, but sham acupuncture’s achieving a bigger reduction in symptoms at week 9 than real acupuncture and a statistically significant reduction at 12 weeks shows that the reductions are not an acupuncture specific effect.

    The items chosen for the abstract are interesting:
    1. The reduction in mite-specific IgE, when the baseline groups are significantly different and my calculation is that the reduction is not, even without multiple-comparison correction.
    2. A reduction in substance P in saliva 18–24 hours after the first treatment, which is not listed as an outcome in the trial registration, has no non-intervention control and is a fairly meaningless single parameter at a single point in time. Saliva substance P does seem to be lower at each test 18–24 hours after acupuncture intervention.
    3. Five individual clinical signs that improved in the real acupuncture group, four of which are from the self-recorded daily symptom diaries, where the real acupuncture group reported worse week-1 status than the other groups and no better result at weeks 9 or 12 than the other groups, and the other one of which is not reported in the body of the paper as published.

    Two of these three abstract results appear to me to be invalid. The substance P result appears valid although lacking in clinical significance.

    I think this shows that the abstract misrepresents the actual study data and the study has been poorly analysed.

    • Thanks David. Very interesting.

      “Apart from the substance P result, the few significant biochemical results are clearly meaningless false positives from testing 16 substances at many intervals and not correcting for multiple comparisons.” …

      Do you think that, even with corrections (I am talking hypothetically, i.e. in response to your point about “testing 16 substances”), applying a correction is paramount versus the decision to use a different statistical approach?

      If you were critiquing the study and also making balanced recommendations, what would you have done (to avoid further loss of power and Type I error)?

      Bests

      • George, from this data I would conclude that acupuncture does not work for allergic rhinitis so there is no point in hunting for biomarkers. The study’s clinical outcomes are not different between sham and real acupuncture.

        Instead the authors pretend the sham acupuncture must have been effective. Logically their position implies that acupuncture requires almost no expertise because sticking the needles in anywhere works. In fact it is known that acupuncture is just a theatrical placebo plus a relaxed lie down.

        To avoid power loss and type 1 error, make a precise hypothesis, test just 1 primary outcome, not 16, and don’t test things that are known not to work.
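
To make the “one pre-specified primary outcome” point concrete, here is the kind of a-priori sample-size calculation that questions 2 and 3 in the post are about. The effect size, alpha and power below are illustrative assumptions, not values taken from the trial:

```python
# Sketch of an a-priori power calculation for a single pre-specified primary
# outcome. Effect size (Cohen's d = 0.5), alpha = 0.05 and power = 0.8 are
# illustrative assumptions, not values from the trial.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

n_single = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Per-group n for one primary outcome:         {n_single:.0f}")   # ~64

# If 16 outcomes were all treated as primary and Bonferroni-corrected,
# the same power would require a substantially larger sample.
n_sixteen = analysis.solve_power(effect_size=0.5, alpha=0.05 / 16, power=0.8)
print(f"Per-group n with Bonferroni for 16 outcomes: {n_sixteen:.0f}")
```

Either way, a three-arm trial of 151 participants with a sizeable dropout rate is unlikely to be adequately powered for clinical endpoints, which is consistent with the first bullet point in the post above.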

        • @Davidp.
          “George, from this data I would conclude that acupuncture does not work for allergic rhinitis so there is no point in hunting for biomarkers. The study’s clinical outcomes are not different between sham and real acupuncture.”

          It is an interesting concept, David, because on the one hand surrogate measures:
          replace the variable of interest;
          may not translate into the expected clinical outcome(s).

          On the other hand, surrogate measures:
          could confirm that a mechanism is acting as predicted;
          may be practical and logistically convenient (although at what scientific expense?).

          “To avoid power loss and type 1 error, make a precise hypothesis, test just 1 primary outcome, not 16, and don’t test things that are known not to work.”

          As it appears that this study set out to be “Exploratory” (important to note) – I would have suggested maybe a “Principle Component Analysis” would of been useful.

          Bests

          • @George

            “As it appears that this study set out to be “Exploratory” (important to note)…” Not true! The word “exploratory” appears nowhere in the paper.

            “… I would have suggested maybe a “Principle [sic] Component Analysis” would of [sic] been useful.” A multivariate analysis might have been appropriate in some instances, but the authors state: “Valid multivariate analysis of variance tests could not be performed because normality tests and multivariate tests failed for most of the data.” In other words multivariate analysis didn’t show what the authors wanted their study to show.

          • George, Is there such a thing as a non-exploratory study?

            Exploratory [adjective]: relating to or involving exploration or investigation.

          • @Frank

            @George

            “As it appears that this study set out to be “Exploratory” (important to note)…”

            Frank: “Not true! The word “exploratory” appears nowhere in the paper.”

            In a response posted on Sun 29th May at 2:02 under the “Acupuncture for allergic rhinitis” topic, the author stated that his study was:

            “Early Exploratory of potential immune reactions”. I think it’s fair to suggest that this is highly likely.

            Bests

          • Sadly, the [first] author seems to say all sorts of things about his study which are not true.

          • Hello Pete,

            “George, Is there such a thing as a non-exploratory study?
            Exploratory [adjective]: relating to or involving exploration or investigation.”

            Pete: “Non-Exploratory..”

            Well, when it comes to “research” of course there are different designs. Yes. Most certainly!

            An “Exploratory” study is characterised, for example (in case you need this information), by more tightly defined inclusion/exclusion criteria and by a placebo, when compared to, say, a pragmatic trial design.

            The “Exploratory” study design may also be applied in the EARLY stages of developing an intervention. Pete, it really is about understanding terms in context. All research investigates something; there are different ways of going about it.

            I hope that helps.
            Bests

          • @Frank

            “Valid multivariate analysis of variance tests could not be performed because normality tests and multivariate tests failed for most of the data.” In other words multivariate analysis didn’t show what the authors wanted their study to show.”

            I see, thank you.

          • George, Your semantic filibustering is nothing other than semantic filibustering.

    • Thank you David. I found your post informative.
      Bests

  • ANF president Matthew Bauer calls this study ‘ground-breaking research’ in his latest video.

  • Interesting. I’ve found medical acupuncture to help hay fever sufferers instantly. I’ll keep doing it regardless of research results or mechanism of action.

    • Well, that’s it, then! And I have definitely seen ghosts, I’ve found standing on one leg at midnight instantly helps my sinusitis and I know my astrologer can predict when bad things are going to happen. I’ll keep believing in the evidence of my own senses regardless of research results or mechanism of action, so there! And, of course, no way am I a blinkered dupe.

    • …and you missed a “w” on the “surname”… “Roger” that?

    • I’ve found “medical acupuncture” helps….” is this different from other types? Like Chiropractic-acupuncture or aeronautical-acupuncture or military-acupuncture?
      And how do you know you are doing “medical acupuncture”? Is it because it has no research or mechanism of action? like your brain?
