The journal NATURE has just published an excellent article by Andrew D. Oxman and an alliance of 24 leading scientists outlining the importance and key concepts of critical thinking in healthcare and beyond. The authors state: "The Key Concepts for Informed Choices is not a checklist. It is a starting point. Although we have organized the ideas into three groups (claims, comparisons and choices), they can be used to develop learning resources that include any combination of these, presented in any order. We hope that the concepts will prove useful to people who help others to think critically about what evidence to trust and what to do, including those who teach critical thinking and those responsible for communicating research findings."
Here I take the liberty of citing a short excerpt from this paper:
CLAIMS:
Claims about effects should be supported by evidence from fair comparisons. Other claims are not necessarily wrong, but there is an insufficient basis for believing them.
Claims should not assume that interventions are safe, effective or certain.
- Interventions can cause harm as well as benefits.
- Large, dramatic effects are rare.
- We can rarely, if ever, be certain about the effects of interventions.
Seemingly logical assumptions are not a sufficient basis for claims.
- Beliefs alone about how interventions work are not reliable predictors of the presence or size of effects.
- An outcome may be associated with an intervention but not caused by it.
- More data are not necessarily better data.
- The results of one study considered in isolation can be misleading.
- Widely used interventions or those that have been used for decades are not necessarily beneficial or safe.
- Interventions that are new or technologically impressive might not be better than available alternatives.
- Increasing the amount of an intervention does not necessarily increase its benefits and might cause harm.
Trust in a source alone is not a sufficient basis for believing a claim.
- Competing interests can result in misleading claims.
- Personal experiences or anecdotes alone are an unreliable basis for most claims.
- Opinions of experts, authorities, celebrities or other respected individuals are not, on their own, a reliable basis for claims.
- Peer review and publication by a journal do not guarantee that comparisons have been fair.
COMPARISONS:
Studies should make fair comparisons, designed to minimize the risk of systematic errors (biases) and random errors (the play of chance).
Comparisons of interventions should be fair.
- Comparison groups and conditions should be as similar as possible.
- Indirect comparisons of interventions across different studies can be misleading.
- The people, groups or conditions being compared should be treated similarly, apart from the interventions being studied.
- Outcomes should be assessed in the same way in the groups or conditions being compared.
- Outcomes should be assessed using methods that have been shown to be reliable.
- It is important to assess outcomes in all (or nearly all) the people or subjects in a study.
- When random allocation is used, people’s or subjects’ outcomes should be counted in the group to which they were allocated.
Syntheses of studies should be reliable.
- Reviews of studies comparing interventions should use systematic methods.
- Failure to consider unpublished results of fair comparisons can bias estimates of effects.
- Comparisons of interventions might be sensitive to underlying assumptions.
Descriptions should reflect the size of effects and the risk of being misled by chance.
- Verbal descriptions of the size of effects alone can be misleading.
- Small studies might be misleading.
- Confidence intervals should be reported for estimates of effects.
- Deeming results to be ‘statistically significant’ or ‘non-significant’ can be misleading.
- Lack of evidence for a difference is not the same as evidence of no difference.
CHOICES:
What to do depends on judgements about the problem, the relevance (applicability or transferability) of evidence available and the balance of expected benefits, harm and costs.
Problems, goals and options should be defined.
- The problem should be diagnosed or described correctly.
- The goals and options should be acceptable and feasible.
Available evidence should be relevant.
- Attention should focus on important, not surrogate, outcomes of interventions.
- There should not be important differences between the people in studies and those to whom the study results will be applied.
- The interventions compared should be similar to those of interest.
- The circumstances in which the interventions were compared should be similar to those of interest.
Expected pros should outweigh cons.
- Weigh the benefits and savings against the harm and costs of acting or not.
- Consider how these are valued, their certainty and how they are distributed.
- Important uncertainties about the effects of interventions should be reduced by further fair comparisons.
__________________________________________________________________________
END OF QUOTE
I have nothing to add to this, except perhaps to point out how very relevant all of this is, of course, for SCAM, and to warmly recommend that you study the full text of this brilliant paper.
BRAVO!
Excellent paper, thank you for the link.
Great! Let's apply these to vaccinations and other mass-dosing medical approaches instead of just assuming that [all] vaccinations are safe and effective [for all people all the time]. No more one-size-fits-all medicine!
As before, your insight into medicine is very limited, Roger. No one, least of all those who deal with public health and vaccine science, assumes that all vaccinations are safe and effective for everyone.
Like a singer who needs to learn and train before appearing in public in order to avoid being sarcastically criticised and bombarded with rotten eggs and tomatoes, you need to read up on issues you wish to comment on. And by that I am not referring to your reading of mercola, naturalnews, greenmedinfo, whale-dot-to, or similar fake and fraudulent health-info sites. These should be easy to avoid if you really care about truth and reliable information.
I do not share your enthusiasm about this article. Yes, the authors make quite a nice effort at pointing out several important issues.
But when promoting critical thinking… why this biased focus on health-related issues?!
In my opinion, the authors missed the opportunity to emphasize the value of critical thinking for assessing ALL claims and personal beliefs. They could have pointed out far more clearly that critical thinking should be a fundamental tool for epistemology in general, including e.g. supernatural claims.
Maybe they wanted to avoid offending the religious sentiments of the broad readership of the journal Nature, but limiting the "key concepts for informed choices" mainly to health-related claims left a somewhat lukewarm impression on me.
As I said in the intro: I selected those items relevant to SCAM.
No, I read the whole thing… and I am sorry to say that I am not impressed.
The article was published in Nature, one of the scientific “high-impact” journals.
I would think that most of the readers of Nature are scientifically literate and for this audience, the points made are pretty much “no duh”.
When I read the title, I was expecting more, and after reading the whole article, I was somewhat disappointed. It is too tame, specific suggestions on how to promote critical thinking in the general public are missing, and controversial topics like religious or other supernatural claims were not even addressed.
So in my opinion, the article is “ok”, but not much more.
But never mind, maybe I am just grumpy because I just had to come back home from a beautiful village in the Alps to a far less beautiful city in NRW.
😉
So, because the audience is “scientifically literate,” they should make it more, what, sciency?
As a lay person interested in the truth, I welcome this article. Scientists can talk among themselves all day long but then the public—you know, real people? The ones who need to know?—will never hear about it.
I’m glad I was told about it. I will pass it along to others.
Thanks, Edzard.
Dear Ron,
First, let me point out that scientists are also "real people". I don't like to be painted as someone sitting in an ivory tower with my colleagues and looking down on laypersons.
Second, NATURE is a journal freely available to the public, so everybody can "hear about" its content. However, I assume that those people reading this and similar journals like SCIENCE are (generally speaking) more knowledgeable in terms of critical thinking than the average person. It is a self-selecting group.
Now, I do not know what you mean by "more (…) sciency", so I can't comment.
But let me be clear:
*Do I disagree with the content of the article? No, not at all.
*Should it be read by as many people as possible? Sure, why not.
*But: Do I agree with EE that this is a (quote) "excellent article" and "brilliant paper"? No, I don't.
I have mentioned some reasons for my lack of enthusiasm. To be more specific:
*The article was published in the “Comment” section of Nature. Articles published in this section are basically opinion pieces and can therefore be more courageous than classical research articles, which can be found in the “Research” section of Nature. Imo, this article is not very courageous.
*Let me quote the title & sub-heading of this article: “Key concepts for making informed choices. Teach people to think critically about claims and comparisons using these concepts, urge Andrew D. Oxman and an alliance of 24 researchers — they will make better decisions.”
Note: this is not limited to health-related claims, but the text itself is strongly focused on these aspects (just count how often they use the word "intervention").
I think that the concept of critical thinking should be applied to ALL aspects of life and the authors have missed the opportunity to be clear about this.
*Finally, let me point out just two aspects that are missing for a “brilliant article” on critical thinking:
1. The readers should be encouraged to have a very critical look at their own lives first. What do they believe and why? What methods do they use to distinguish right from wrong (or “real” from “not real”)? How can we improve our internal critical skills?
Due to my job and my personal interests, I spend a lot of time with scientists and scientifically literate people. I am amazed at the level of cognitive dissonance that even highly scientifically literate people can have.
Let me illustrate this problem with the well-known example of Prof. Francis Collins. Very recently, an article about him (more or less a hymn of praise) was published in the journal SCIENCE.
https://www.sciencemag.org/news/2019/08/decade-francis-collins-has-shielded-nih-while-making-waves-his-own
As NIH director, Prof. Collins is one of the most influential scientists on this planet – and, imo, a poster boy for cognitive dissonance, in spite of all his scientific knowledge. He is an outspoken Christian and, according to his own account, a major factor that convinced him that Christianity is true was that, during a hike, he saw a waterfall frozen into three parts. In his mind, this was a sign of the Christian Trinity (LOL). Of course, none of this cognitive dissonance was mentioned in the hymn of praise.
The prevalence of such cognitive dissonance, also amongst scientists, is a big problem that should be addressed in an opinion piece about critical thinking. It wasn't mentioned at all here.
2. For a “brilliant article”, more concrete suggestions on how to promote critical thinking should have been offered. The authors could e.g. have:
– encouraged people to stop politely respecting all kinds of supernatural claims in public and private discussions and be more outspoken when promoting critical thinking about such claims, even in a climate of “political correctness”.
– encouraged people to join the sceptical movement of their countries.
– shown some concrete ideas on how to incorporate critical thinking in politics and education.
@Jashak: thank you very much for your perspicacious posting!
Kind of you, Michael, thanks!