All three IPCC Working Groups in the AR5 have agreed to use two metrics for communicating the degree of certainty in key findings (Mastrandrea et al., 2010)[1]:

  • Confidence in the validity of a finding, based on the type, amount, quality, and consistency of evidence (e.g., data, mechanistic understanding, theory, models, expert judgment) and the degree of agreement. Confidence is expressed qualitatively.
  • Quantified measures of uncertainty in a finding expressed probabilistically (based on statistical analysis of observations or model results, or expert judgement).

A level of confidence synthesizes the Chapter teams’ judgements about the validity of findings as determined through evaluation of the available evidence and the degree of scientific agreement. The evidence and agreement scale underpins the assessment, as it is on the basis of evidence and agreement that statements can be made with scientific confidence (in this sense, the evidence and agreement scale replaces the ‘level of scientific understanding’ scale used in previous WGI assessments). There is flexibility in this relationship; for a given evidence and agreement statement, different confidence levels could be assigned, but increasing levels of evidence and degrees of agreement are correlated with increasing confidence. Confidence cannot necessarily be assigned for all combinations of evidence and agreement, but where key variables are highly uncertain, the available evidence and scientific agreement regarding that variable are presented and discussed. Confidence should not be interpreted probabilistically, and it is distinct from ‘statistical confidence’.

Figure 1.11 The basis for the confidence level is given as a combination of evidence (limited, medium, robust) and agreement (low, medium, high) (Mastrandrea et al., 2010)[1].

The confidence level is based on the evidence (robust, medium or limited) and the agreement (high, medium or low). A combination of different methods, e.g., observations and modelling, is important for evaluating the confidence level. Figure 1.11 shows how combinations of evidence and agreement map onto the five confidence levels used in this assessment.
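
To make Figure 1.11 concrete, the sketch below encodes one plausible reading of the evidence–agreement matrix in Python. The function name confidence_level and the fixed rank-sum lookup are illustrative assumptions: as noted above, the guidance deliberately allows different confidence levels to be assigned for a given combination of evidence and agreement, so this is not the IPCC's prescribed rule.

  # One plausible encoding of Figure 1.11, assumed for illustration only:
  # the AR5 guidance allows flexibility, so a fixed lookup is a simplification.
  EVIDENCE = {"limited": 0, "medium": 1, "robust": 2}
  AGREEMENT = {"low": 0, "medium": 1, "high": 2}
  CONFIDENCE = ["very low", "low", "medium", "high", "very high"]

  def confidence_level(evidence: str, agreement: str) -> str:
      """Map an (evidence, agreement) pair to one of five confidence levels.

      Summing the two ranks (0-2 each) gives a score of 0-4 indexing the
      five-step scale, which reproduces the diagonal gradient of Figure
      1.11 under this simplifying assumption.
      """
      return CONFIDENCE[EVIDENCE[evidence] + AGREEMENT[agreement]]

  print(confidence_level("robust", "high"))   # very high
  print(confidence_level("limited", "low"))   # very low
  print(confidence_level("medium", "high"))   # high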

The qualifier ‘likelihood’ provides calibrated language for describing quantified uncertainty. It can be used to express a probabilistic estimate of the occurrence of a single event or of an outcome, for example, a climate parameter, observed trend, or projected change lying in a given range. Statements made using the likelihood scale may be based on statistical or modelling analyses, elicitation of expert views, or other quantitative analyses. Where sufficient information is available it is preferable to eschew the likelihood qualifier in favour of the full probability distribution or the appropriate probability range. See Table 1.2 for the list of ‘likelihood’ qualifiers to be used in AR5.

Table 1.2 Likelihood terms associated with outcomes used in the AR5.

  Term                        Likelihood of the outcome
  Virtually certain           99–100% probability
  Very likely                 90–100% probability
  Likely                      66–100% probability
  About as likely as not      33–66% probability
  Unlikely                    0–33% probability
  Very unlikely               0–10% probability
  Exceptionally unlikely      0–1% probability

Notes: Additional terms that were used in limited circumstances in the AR4 (extremely likely: 95–100% probability, more likely than not: >50–100% probability, and extremely unlikely: 0–5% probability) may also be used in the AR5 when appropriate.
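
As a worked illustration of the calibrated scale, the sketch below maps a probability to a Table 1.2 term in Python. The ranges are taken directly from the table; because they overlap by design (e.g., 95% is both likely and very likely), the helper returns the narrowest matching term, a tie-breaking rule adopted here for illustration rather than one prescribed by the IPCC guidance.

  # Hypothetical helper mapping a probability to the Table 1.2 terms.
  LIKELIHOOD_TERMS = [  # (term, lower bound, upper bound), as fractions
      ("virtually certain",       0.99, 1.00),
      ("very likely",             0.90, 1.00),
      ("likely",                  0.66, 1.00),
      ("about as likely as not",  0.33, 0.66),
      ("unlikely",                0.00, 0.33),
      ("very unlikely",           0.00, 0.10),
      ("exceptionally unlikely",  0.00, 0.01),
  ]

  def likelihood_term(p: float) -> str:
      """Return the narrowest AR5 likelihood term whose range contains p."""
      if not 0.0 <= p <= 1.0:
          raise ValueError("p must be a probability in [0, 1]")
      # Collect every matching term, then prefer the smallest range width.
      matches = [(hi - lo, term) for term, lo, hi in LIKELIHOOD_TERMS
                 if lo <= p <= hi]
      return min(matches)[1]

  print(likelihood_term(0.95))   # very likely
  print(likelihood_term(0.50))   # about as likely as not
  print(likelihood_term(0.995))  # virtually certain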

Many social science studies have found that the interpretation of uncertainty depends on how the information is presented, the context in which statements are placed, and the interpreter’s own lexical preferences. Readers often adjust their interpretation of probabilistic language according to the magnitude of perceived potential consequences (Patt and Schrag, 2003[2]; Patt and Dessai, 2005[3]). Furthermore, the framing of a probabilistic statement affects how it is interpreted (Kahneman and Tversky, 1979[4]): for example, a 10% chance of dying is interpreted more negatively than a 90% chance of surviving.

In addition, work examining expert judgement and decision making shows that people—including scientific experts—are prone to a range of heuristics and biases that affect their judgement (e.g., Kahneman et al., 1982)[5]. For example, in the case of expert judgements there is a tendency towards overconfidence both at the individual level (Morgan et al., 1990[6]) and at the group level as people converge on a view and draw confidence in its reliability from each other. However, in an assessment of the state of scientific knowledge across a field such as climate change—characterized by complexity of process and heterogeneity of data constraints—some degree of expert judgement is inevitable (Mastrandrea et al., 2010)[1].

These issues were brought to the attention of chapter teams so that contributors to the AR5 would be alert to the ways presentation, framing, context and potential biases might affect their own assessments, and to how these factors might shape readers’ understanding of the information presented in this assessment. There will always be room for debate about how to summarize such a large and growing literature. The uncertainty guidance aims to provide a consistent, calibrated set of words for communicating the uncertainty, confidence and degree of consensus prevailing in the scientific literature. In this sense the guidance notes and practices adopted by the IPCC for the presentation of uncertainties should be regarded as an interdisciplinary work in progress rather than a finalized, comprehensive approach. One further precaution: translation of this assessment from English into other languages may entail some loss of precision.

Notes

  1. Mastrandrea, M. D., et al., 2010: Guidance notes for lead authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. Available at http://www.ipcc.ch (accessed 07-10-2013).
  2. Patt, A. G., and D. P. Schrag, 2003: Using specific language to describe risk and probability. Clim. Change, 61, 17–30.
  3. Patt, A. G., and S. Dessai, 2005: Communicating uncertainty: Lessons learned and suggestions for climate change assessment. C. R. Geosci., 337, 425–441.
  4. Kahneman, D., and A. Tversky, 1979: Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
  5. Kahneman, D., P. Slovic, and A. Tversky, Eds., 1982: Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 544 pp.
  6. Morgan, M. G., M. Henrion, and M. Small, 1990: Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 332 pp.