Quantifying statistical uncertainty in the attribution of human influence on severe weather
Event attribution in the context of climate change seeks to understand the
role of anthropogenic greenhouse gas emissions in extreme weather events,
either specific events or classes of events. A common approach to event
attribution uses climate model output under factual (real-world) and
counterfactual (world that might have been without anthropogenic greenhouse gas
emissions) scenarios to estimate the probabilities of the event of interest
under the two scenarios. Event attribution is then quantified by the ratio of
the two probabilities. While this approach has been applied many times in the
last 15 years, the statistical techniques used to estimate the risk ratio based
on climate model ensembles have not drawn on the full set of methods available
in the statistical literature and have in some cases used and interpreted the
bootstrap method in non-standard ways. We present a precise frequentist
statistical framework for quantifying the effect of sampling uncertainty on
estimation of the risk ratio, propose the use of statistical methods that are
new to event attribution, and evaluate a variety of methods using statistical
simulations. We conclude that existing statistical methods not yet in use for
event attribution have several advantages over the widely-used bootstrap,
including better statistical performance in repeated samples and robustness to
small estimated probabilities. Software implementing these methods is
available in the climextRemes package for R and Python. While we focus on
frequentist statistical methods, Bayesian methods are likely to be particularly
useful when considering sources of uncertainty beyond sampling uncertainty.
Comment: 41 pages, 11 figures, 1 table
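The basic quantity described above can be sketched in a few lines. This is an illustrative textbook calculation only, not the climextRemes methodology the paper advocates: it estimates the risk ratio from event counts in the two ensembles and attaches a simple normal-approximation (Wald) interval on the log scale, which is exactly the kind of interval the paper shows can be improved upon. All names below are our own.

```python
import math

def risk_ratio_ci(k1, n1, k0, n0, z=1.96):
    """Risk ratio p1/p0 from event counts in the factual (k1 of n1)
    and counterfactual (k0 of n0) ensembles, with an approximate 95%
    Wald confidence interval computed on the log scale."""
    p1, p0 = k1 / n1, k0 / n0
    rr = p1 / p0
    # Delta-method standard error of log(RR) for two binomial proportions
    se = math.sqrt((1 - p1) / (n1 * p1) + (1 - p0) / (n0 * p0))
    lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
    return rr, lo, hi

# Hypothetical counts: the event occurs in 40 of 400 factual runs
# but only 10 of 400 counterfactual runs
rr, lo, hi = risk_ratio_ci(k1=40, n1=400, k0=10, n0=400)
print(rr, lo, hi)  # risk ratio of 4 with a fairly wide interval
```

Note how the interval degrades as the counterfactual count shrinks: with very small estimated probabilities the normal approximation becomes unreliable, which is one motivation for the alternative methods the paper evaluates.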
On the uncertainty of long-period return values of extreme daily precipitation
Methods for calculating return values of extreme precipitation and their uncertainty are compared using daily precipitation rates over the Western U.S. and Southwestern Canada from a large ensemble of climate model simulations. The roles of return-value estimation procedures and sample size in uncertainty are evaluated for various return periods. We compare two different generalized extreme value (GEV) parameter estimation techniques, namely L-moments and maximum likelihood estimation (MLE), as well as empirical techniques. Even for very large datasets, confidence intervals calculated using GEV techniques are narrower than those calculated using empirical methods. Furthermore, the more efficient L-moments parameter estimation techniques result in narrower confidence intervals than MLE at small sample sizes, but similar best estimates. It should be noted that we do not claim that either parameter-fitting technique is better calibrated than the other for estimating long-period return values. While a non-stationary MLE methodology is readily available for estimating GEV parameters, no comparable non-stationary L-moments methodology exists. Comparisons of uncertainty quantification methods are found to yield significantly different estimates for small sample sizes but converge to similar results as sample size increases. Finally, we offer practical recommendations about the length and size of climate model ensemble simulations, and about the choice of statistical methods, for robustly estimating long-period return values of extreme daily precipitation and quantifying their uncertainty.
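The L-moments route mentioned above can be sketched end-to-end in pure Python: compute sample L-moments from probability-weighted moments, convert them to GEV parameters via Hosking's rational approximation for the shape, and invert the GEV distribution function at non-exceedance probability 1 − 1/T to get the T-year return level. This is a minimal illustration on synthetic data, not the paper's analysis pipeline; the synthetic "annual maxima" and all function names are our own.

```python
import math
import random

def sample_l_moments(data):
    """First three sample L-moments via probability-weighted moments."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i / (n - 1) * x[i] for i in range(n)) / n
    b2 = sum(i * (i - 1) / ((n - 1) * (n - 2)) * x[i] for i in range(n)) / n
    l1 = b0                      # L-location (the mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2       # return L-skewness ratio tau_3

def gev_from_l_moments(l1, l2, t3):
    """GEV (location, scale, shape) via Hosking's approximation for
    the shape parameter k (Hosking's sign convention)."""
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    g = math.gamma(1 + k)
    alpha = l2 * k / ((1 - 2 ** (-k)) * g)   # scale
    xi = l1 - alpha * (1 - g) / k            # location
    return xi, alpha, k

def return_value(xi, alpha, k, T):
    """T-year return level: GEV quantile at probability 1 - 1/T."""
    y = -math.log(1.0 - 1.0 / T)
    return xi + alpha * (1.0 - y ** k) / k

random.seed(0)
# Synthetic annual-maximum sample: Gumbel draws (shape near zero)
annmax = [20.0 - 5.0 * math.log(-math.log(random.random())) for _ in range(200)]
l1, l2, t3 = sample_l_moments(annmax)
xi, alpha, k = gev_from_l_moments(l1, l2, t3)
print(return_value(xi, alpha, k, 100))  # estimated 100-year return level
```

An MLE fit of the same GEV family (e.g. via numerical optimization of the log-likelihood) would give a similar best estimate here, consistent with the abstract's finding that the two estimators differ mainly in the width of their confidence intervals at small sample sizes.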
Progress and challenges of demand-led co-produced sub-seasonal-to-seasonal (S2S) climate forecasts in Nigeria
This paper identifies fundamental issues that prevent the effective uptake of climate information services in Nigeria. We propose solutions that involve extending short-range (1 to 5 days) forecasts beyond medium-range (7 to 15 days) timescales through the operational use of current forecast data, as well as improved collaboration and communication with forecast users. Using newly available data to provide seamless operational forecasts from short-term to sub-seasonal timescales, we examine evidence to determine whether effective demand-led sub-seasonal-to-seasonal (S2S) climate forecasts can be co-produced. This evidence involves: itemization of the forecast products delivered to stakeholders, together with their development methodology; enumeration of the inferences drawn from forecast products and their influence on decisions taken by stakeholders; user-focused discussions of improvements to co-produced products; and the methods of evaluating the performance of the forecast products.
We find that extending the production pipeline of short-range forecasts beyond the medium-range, so that medium-range forecasts can be fed into existing tools for applying short-range forecasts, assisted in mitigating the risks of sub-seasonal climate variability to socio-economic activities in Nigeria. We also find that enhancing collaboration and communication channels between producers and forecast users helps to enhance the development of user-tailored impact-based forecasts, increase users' trust in the forecasts, and improve forecast evaluations. In general, these measures lead to smoother delivery and increased uptake of climate information services in Nigeria.
Rapid systematic assessment of the detection and attribution of regional anthropogenic climate change
Despite being a well-established research field, the detection and attribution of observed climate change to anthropogenic forcing is not yet provided as a climate service. One reason for this is the lack of a methodology for performing tailored detection and attribution assessments on a rapid time scale. Here we develop such an approach, based on translating quantitative analysis into the “confidence” language employed in recent Assessment Reports of the Intergovernmental Panel on Climate Change. While its systematic nature necessarily ignores some nuances examined in detailed expert assessments, the approach nevertheless goes beyond most detection and attribution studies in considering contributors to building confidence, such as errors in observational data products arising from sparse monitoring networks. When compared against recent expert assessments, the results of this approach closely match the existing assessments. Where there are small discrepancies, these variously reflect ambiguities in the details of what is being assessed, reveal nuances or limitations of the expert assessments, or indicate limits to the accuracy of the systematic approach employed here. Deployment of the method on 116 regional assessments of recent temperature and precipitation changes indicates that existing rules of thumb concerning the detectability of climate change ignore the full range of sources of uncertainty, most particularly the importance of adequate observational monitoring.