The “felt need to speak with a single voice,” worries about lost credibility should they overstate their case, and an unwillingness to estimate at all in the face of contradictory data have left scientists providing assessments to policy-makers and the public that dangerously underestimate the pace of climate change, a new analysis warns.
In a book published in March and summarized for Scientific American, researchers Naomi Oreskes, Michael Oppenheimer, and Dale Jamieson look at “the workings of scientific assessments for policy, with particular attention to their internal dynamics.” They say they’re particularly concerned with “how scientists respond to the pressures—sometimes subtle, sometimes overt—that arise when they know that their conclusions will be disseminated beyond the research community—in short, when they know that the world is watching.”
Contrary to the fulminations of the climate denier set, Oreskes and her fellow researchers say they “found little reason to doubt the results of scientific assessments overall,” and likewise reported “no evidence of fraud, malfeasance, or deliberate deception or manipulation.” Nor did they discover “any reason to doubt that scientific assessments accurately reflect the views of their expert communities.”
What they did find, however, was a tendency for climate scientists “to underestimate the severity of threats and the rapidity with which they might unfold.” They cite “the perceived need for consensus” and “the felt need to speak in a single voice” as major drivers behind a conservatism that is also powered by concern that, if dissent is made public, policy-makers will either “conflate differences of opinion with ignorance and use this as justification for inaction,” or flounder in the face of ambiguity.
Rather than risk that kind of outcome, scientists “will actively seek to find their common ground and focus on areas of agreement,” the authors say. “In some cases, they will only put forward conclusions on which they can all agree.”
In practice, Oreskes and colleagues say, that approach can lead to a lowest-common-denominator result.
“Consider a case in which most scientists think the correct answer to a question is in the range 1–10, but some believe that it could be as high as 100,” they write. “In such a case, everyone will agree that it is at least 1–10, but not everyone will agree that it could be as high as 100. Therefore, the area of agreement is 1–10, and this is reported as the consensus view.”
More generally, “wherever there is a range of possible outcomes that includes a long, high-end tail of probability, the area of overlap will necessarily lie at or near the low end.”
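The arithmetic behind the authors’ example can be sketched in a few lines: if each expert’s estimate is treated as an interval, the “area of agreement” is simply the intersection of all the intervals. This is a minimal illustration of the effect they describe, not a method from the book, and the interval values are hypothetical.

```python
# Minimal sketch of the consensus-as-overlap effect described above.
# Each tuple is one hypothetical expert's estimated range for some quantity.
estimates = [
    (1, 10),   # most experts: the answer lies in 1-10
    (1, 10),
    (1, 100),  # one expert allows a long high-end tail up to 100
]

# The area of agreement is the intersection of all intervals:
# the largest lower bound and the smallest upper bound.
low = max(lo for lo, hi in estimates)
high = min(hi for lo, hi in estimates)

print(f"Consensus range: {low}-{high}")  # -> Consensus range: 1-10
```

Because the intersection discards every interval’s high end, the reported range sits at the low end whenever even one expert’s tail extends higher, which is exactly the asymmetry the authors point to.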
The push for consensus “may also be driven by a mental model that sees facts as matters about which all reasonable people should be able to agree, versus differences of opinion or judgements that are potentially irresolvable.”
Another contributor to the drift towards underestimation is “an asymmetry in how scientists think about error and its effects on their reputations.” As a general rule, “many scientists worry that if they overestimate a threat, they will lose credibility, whereas if they underestimate it, it will have little (if any) reputational impact.” Among climate scientists, this worry is “reinforced by the drumbeat of climate denial, in which scientists are accused of being ‘alarmists’ who ‘exaggerate the threat’.”
The pressure to “speak with a single voice” can also drive scientists to “avoid discussing tricky issues” and err “on the side of least drama” in discussing climate policies, the authors say. “In short, the push for agreement and caution may undermine other important goals, including inclusivity, accuracy, and comprehension.”
To avoid such dangerous foreshortening of understanding, Oreskes and her fellow investigators recommend that, “while scientists in assessments generally aim for consensus,” consensus should not necessarily be the goal.
“Depending on the state of scientific knowledge, consensus may or may not emerge from an assessment, but it should not be viewed as something that needs to be achieved and certainly not as something to be enforced,” they write. “Where there are substantive differences of opinion, they should be acknowledged and the reasons for them explained (to the extent that they can be explained).” And scientific communities themselves should “be open to experimenting with alternative models for making and expressing group judgments, and to learning more about how policy-makers actually interpret the findings that result.”