
    State space models for non‐stationary intermittently coupled systems: an application to the North Atlantic oscillation

    This is the final version. Available on open access from Wiley via the DOI in this record. Data availability: the data that are analysed in the paper and the programs that were used to analyse them can be obtained from https://rss.onlinelibrary.wiley.com/hub/journal/14679876/seriescdatasets
    We develop Bayesian state space methods for modelling changes to the mean level or temporal correlation structure of an observed time series due to intermittent coupling with an unobserved process. Novel intervention methods are proposed to model the effect of repeated coupling as a single dynamic process. Latent time-varying auto-regressive components are developed to model changes in the temporal correlation structure. Efficient filtering and smoothing methods are derived for the resulting class of models. We propose methods for quantifying the component of variance attributable to an unobserved process, the effect during individual coupling events and the potential for skilful forecasts. The methodology proposed is applied to the study of wintertime variability in the dominant pattern of climate variation in the northern hemisphere: the North Atlantic oscillation. Around 70% of the interannual variance in the winter (December–January–February) mean level is attributable to an unobserved process. Skilful forecasts for the winter (December–January–February) mean are possible from the beginning of December.
    Funding: Natural Environment Research Council (NERC).
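
    As a rough illustration of the state space idea (a generic local-level sketch, not the authors' intervention model or their time-varying auto-regressive components), the Python fragment below filters a series whose mean gains an extra latent component only during "coupled" periods. The coupling indicator and all parameter values are invented for the example.

# A minimal sketch, assuming a local-level state space model with an
# intermittent coupling effect; all numbers are illustrative.
import numpy as np

def kalman_filter(y, coupled, sigma_obs=1.0, sigma_level=0.1, sigma_couple=0.5):
    """Filter y with a random-walk level plus a latent coupling component
    that is only evolved and observed when coupled[t] is True."""
    n = len(y)
    m = np.zeros(2)                  # filtered state mean: [level, coupling effect]
    P = np.eye(2) * 10.0             # filtered state covariance (vague prior)
    F = np.eye(2)                    # state transition (random walks)
    means = np.zeros((n, 2))
    for t in range(n):
        Q = np.diag([sigma_level**2, sigma_couple**2 if coupled[t] else 0.0])
        m = F @ m                    # predict
        P = F @ P @ F.T + Q
        H = np.array([[1.0, 1.0 if coupled[t] else 0.0]])   # observation map
        S = H @ P @ H.T + sigma_obs**2
        K = P @ H.T / S              # Kalman gain
        m = m + (K * (y[t] - H @ m)).ravel()
        P = P - K @ H @ P
        means[t] = m
    return means

# toy data: 50 "winters", coupling active in the second half
rng = np.random.default_rng(0)
coupled = np.arange(50) >= 25
y = rng.normal(0.0, 1.0, 50) + np.where(coupled, 1.5, 0.0)
print(kalman_filter(y, coupled)[-1])   # final filtered level and coupling effect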

    A geostatistical extreme-value framework for fast simulation of natural hazard events

    This is the final version of the article. Available from the publisher via the DOI in this record.
    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters, while a Student's t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements.
    This work has been kindly funded by the Willis Research Network.
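
    The following sketch shows the general recipe (not the paper's implementation): generalized Pareto margins fitted at each site, with spatial dependence supplied by a Student's t copula. The site layout, correlation length, degrees of freedom and the synthetic exceedance data are all illustrative assumptions.

# A minimal sketch: GPD margins per site + Student's t copula for dependence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# illustrative site coordinates and an exponential correlation over distance
sites = rng.uniform(0, 10, size=(20, 2))
d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
corr = np.exp(-d / 5.0)

# fit GPD margins to synthetic exceedances at each site (real use: gust data)
exceedances = stats.genpareto.rvs(c=0.1, scale=5.0, size=(500, 20), random_state=rng)
margins = [stats.genpareto.fit(exceedances[:, j], floc=0.0) for j in range(20)]

# simulate a large synthetic event set: t-copula sample -> uniforms -> GPD quantiles
nu = 5.0
t_sample = stats.multivariate_t.rvs(loc=np.zeros(20), shape=corr, df=nu,
                                    size=10000, random_state=rng)
u = stats.t.cdf(t_sample, df=nu)
events = np.column_stack([
    stats.genpareto.ppf(u[:, j], c=margins[j][0], loc=margins[j][1], scale=margins[j][2])
    for j in range(20)
])
print(events.shape)   # (10000 simulated events, 20 sites)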

    Inference for spatial processes using imperfect data from measurements and numerical simulations

    This is the final version of the article. Available from arXiv.org via the link in this record.
    We present a framework for inference for spatial processes whose actual values are imperfectly represented by data. Environmental processes represented as spatial fields, either at fixed time points or aggregated over fixed time periods, are studied. Data from both measurements and simulations performed by complex computer models are used to infer the actual values of the spatial fields. Methods from geostatistics and statistical emulation are used to explicitly capture discrepancies between a spatial field's actual and simulated values. A geostatistical model captures the spatial discrepancy: the difference in spatial structure between simulated and actual values. An emulator represents the intensity discrepancy: the bias in simulated values of a given intensity. Measurement error is also represented. Gaussian process priors represent each source of error, which gives an analytical expression for the posterior distribution of the actual spatial field. Actual footprints for 50 European windstorms, which represent maximum wind gust speeds on a grid over a 72-hour period, are derived from wind gust speed measurements taken at stations across Europe and from output simulated by a downscaled version of the Met Office Unified Model. The derived footprints have realistic spatial structure and gust speeds closer to the measurements than originally simulated.
    We thank Phil Sansom for helpful discussion. We thank the Willis Research Network for supporting this work, the Met Office for providing the windstorm measurement data, and Julia Roberts for help with data provision.
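
    A minimal sketch of the Gaussian-process conditioning step (not the paper's full discrepancy model): the actual field receives a GP prior and is conditioned jointly on noisy station measurements and on simulator output treated as a second, biased and noisier data source. The covariance function, noise levels and the constant bias are illustrative assumptions.

# A minimal sketch: one-dimensional GP posterior combining two data sources.
import numpy as np

def sq_exp(x1, x2, var=1.0, length=2.0):
    """Squared-exponential covariance between two sets of 1-D locations."""
    d = np.abs(x1[:, None] - x2[None, :])
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(2)
grid = np.linspace(0, 10, 50)          # locations where the actual field is wanted
x_meas = rng.uniform(0, 10, 8)         # station locations
x_sim = np.linspace(0, 10, 25)         # simulator grid

# synthetic truth, measurements, and simulator output with an additive bias
truth = lambda x: np.sin(x) + 0.1 * x
y_meas = truth(x_meas) + rng.normal(0, 0.1, x_meas.size)     # small measurement error
y_sim = truth(x_sim) + 0.5 + rng.normal(0, 0.3, x_sim.size)  # bias + discrepancy

# stack both sources, with per-source noise variances on the diagonal
x_obs = np.concatenate([x_meas, x_sim])
y_obs = np.concatenate([y_meas, y_sim - 0.5])   # subtract the assumed bias
noise = np.concatenate([np.full(x_meas.size, 0.1**2), np.full(x_sim.size, 0.3**2)])

K_oo = sq_exp(x_obs, x_obs) + np.diag(noise)
K_go = sq_exp(grid, x_obs)
post_mean = K_go @ np.linalg.solve(K_oo, y_obs)  # analytic GP posterior mean
print(post_mean[:5])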

    Three recommendations for evaluating climate predictions

    This is the final version of the article. Available from Wiley / Royal Meteorological Society via the DOI in this record.
    Evaluation is important for improving climate prediction systems and establishing the credibility of their predictions of the future. This paper shows how the choices that must be made about how to evaluate predictions affect the outcome and ultimately our view of the prediction system's quality. The aim of evaluation is to measure selected attributes of the predictions, but some attributes are susceptible to having their apparent performance artificially inflated by the presence of climate trends, thus rendering past performance an unreliable indicator of future performance. We describe a class of performance measures that are immune to such spurious skill. The way in which an ensemble prediction is interpreted also has strong implications for the apparent performance, so we give recommendations about how evaluation should be tailored to different interpretations. Finally, we explore the role of the timescale of the predictand in evaluation and suggest ways to describe the relationship between timescale and performance. The ideas in this paper are illustrated using decadal temperature hindcasts from the CMIP5 archive.
    This work was part of the EQUIP project (http://www.equip.leeds.ac.uk) funded by NERC Directed Grant NE/H003509/1. The authors thank Leon Hermanson, Doug Smith and Holger Pohlmann for useful discussion, Helen Hanlon for assistance with obtaining data, and two anonymous reviewers for comments that helped us to improve the presentation of our ideas.
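
    The following toy example illustrates the trend-inflation issue in general terms (it is not one of the paper's proposed measures): a forecast with no skill beyond reproducing the trend looks skilful against a fixed climatological reference, but not against a trend-based reference.

# Illustration only: skill relative to a climatological vs a trend reference.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1960, 2010)
trend = 0.02 * (years - years[0])
obs = trend + rng.normal(0, 0.2, years.size)
fcst = trend + rng.normal(0, 0.2, years.size)     # no real skill beyond the trend

def msss(forecast, observed, reference):
    """Mean squared error skill score relative to a reference forecast."""
    mse_f = np.mean((forecast - observed) ** 2)
    mse_r = np.mean((reference - observed) ** 2)
    return 1.0 - mse_f / mse_r

clim_ref = np.full(years.size, obs.mean())                  # fixed climatology
trend_ref = np.polyval(np.polyfit(years, obs, 1), years)    # linear-trend reference

print("skill vs climatology:", round(msss(fcst, obs, clim_ref), 2))   # apparently skilful
print("skill vs trend:      ", round(msss(fcst, obs, trend_ref), 2))  # no skill beyond the trend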

    A Weibull approach for improving climate model projections of tropical cyclone wind-speed distributions

    This is the final version of the article. Available from the publisher via the DOI in this record. Open access article.
    Reliable estimates of future changes in extreme weather phenomena, such as tropical cyclone maximum wind speeds, are critical for climate change impact assessments and the development of appropriate adaptation strategies. However, global and regional climate model outputs are often too coarse for direct use in these applications, with variables such as wind speed having truncated probability distributions compared to those of observations. This poses two problems: How can model-simulated variables best be adjusted to make them more realistic? And how can such adjustments be used to make more reliable predictions of future changes in their distribution? This study investigates North Atlantic tropical cyclone maximum wind speeds from observations (1950-2010) and regional climate model simulations (1995-2005 and 2045-55 at 12- and 36-km spatial resolutions). The wind speed distributions in these datasets are well represented by the Weibull distribution, albeit with different scale and shape parameters. A power-law transfer function is used to recalibrate the Weibull variables and obtain future projections of wind speeds. Two different strategies, bias correction and change factor, are tested by using 36-km model data to predict future 12-km model data (pseudo-observations). The strategies are also applied to the observations to obtain likely predictions of the future distributions of wind speeds. The strategies yield similar predictions of likely changes in the fraction of events within Saffir-Simpson categories: for example, an increase from 21% (1995-2005) to 27%-37% (2045-55) for category 3 or above events, and an increase from 1.6% (1995-2005) to 2.8%-9.8% (2045-55) for category 5 events. © 2014 American Meteorological Society.
    Acknowledgments: Support for this work was provided by the Willis Research Network, the Research Program to Secure Energy for America, NSF EASM Grant S1048841, and the NCAR Weather and Climate Assessment Science Program. We thank Sherrie Fredrick for extracting data, and Cindy Bruyère, James Done, and Ben Youngman for productive discussions that enhanced this research. We also thank Dr. Adam Monahan and one anonymous reviewer for their insightful comments and suggestions.
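
    Quantile matching between two Weibull distributions reduces algebraically to a power law, y = lam_to * (x / lam_from)^(k_from / k_to), which is presumably the form of transfer function referred to. The sketch below uses synthetic wind speeds; all fitted parameters are illustrative, not the paper's values.

# A minimal sketch: Weibull fits and the implied power-law transfer function.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# synthetic "model" and "observed" maximum wind speeds (m/s); illustrative only
model_winds = stats.weibull_min.rvs(c=1.8, scale=35.0, size=2000, random_state=rng)
obs_winds = stats.weibull_min.rvs(c=2.2, scale=45.0, size=2000, random_state=rng)

# fit Weibull shape (k) and scale (lam) to each sample
k_m, _, lam_m = stats.weibull_min.fit(model_winds, floc=0.0)
k_o, _, lam_o = stats.weibull_min.fit(obs_winds, floc=0.0)

def power_law_transfer(x, k_from, lam_from, k_to, lam_to):
    """Map quantiles of one Weibull distribution onto another."""
    return lam_to * (x / lam_from) ** (k_from / k_to)

corrected = power_law_transfer(model_winds, k_m, lam_m, k_o, lam_o)
print(np.percentile(obs_winds, [50, 95, 99]).round(1))
print(np.percentile(corrected, [50, 95, 99]).round(1))   # should be similar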

    The importance of sea ice area biases in 21st century multimodel projections of Antarctic temperature and precipitation

    This is the final version of the article. Available from the publisher via the DOI in this record.
    Climate models exhibit large biases in sea ice area (SIA) in their historical simulations. This study explores the impacts of these biases on multimodel uncertainty in Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble projections of 21st century change in Antarctic surface temperature, net precipitation, and SIA. The analysis is based on time slice climatologies in the Representative Concentration Pathway 8.5 future scenario (2070-2099) and historical (1970-1999) simulations across 37 different CMIP5 models. Projected changes in net precipitation, temperature, and SIA are found to be strongly associated with simulated historical mean SIA (e.g., cross-model correlations of r = 0.77, 0.71, and -0.85, respectively). Furthermore, historical SIA bias is found to have a large impact on the simulated ratio between net precipitation response and temperature response. This ratio is smaller in models with smaller-than-observed SIA. These strong emergent relationships on SIA bias could, if found to be physically robust, be exploited to give more precise climate projections for Antarctica.
    We acknowledge the World Climate Research Programme’s Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups (listed in Table S1 of this paper) for producing and making available their model output. For CMIP the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provided the coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. The original CMIP5 data can be accessed through the ESGF data portals (see http://pcmdi-cmip.llnl.gov/cmip5/availability.html). This study is part of the British Antarctic Survey Polar Science for Planet Earth Programme. It was funded by the UK Natural Environment Research Council (grant reference NE/K00445X/1). We would like to thank Paul Holland for his useful discussions and comments on an earlier version of this manuscript.
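
    A sketch of how such an emergent relationship could be exploited, if judged physically robust (all numbers below are synthetic, not CMIP5 values): regress each model's projected change on its historical SIA, then read the regression off at an observed SIA value.

# Illustration only: an emergent-constraint style regression across models.
import numpy as np

rng = np.random.default_rng(5)
n_models = 37
hist_sia = rng.normal(10.0, 3.0, n_models)                        # 10^6 km^2, illustrative
proj_dT = 4.0 - 0.15 * hist_sia + rng.normal(0, 0.3, n_models)    # synthetic strong relation

slope, intercept = np.polyfit(hist_sia, proj_dT, 1)
obs_sia = 11.5                                                    # illustrative "observed" SIA
constrained = intercept + slope * obs_sia

print("unconstrained multimodel mean:", round(proj_dT.mean(), 2), "K")
print("constrained at observed SIA:  ", round(constrained, 2), "K")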

    Best practices for post-processing ensemble climate forecasts, part I: selecting appropriate recalibration methods

    This is the final version of the article. Available from the publisher via the DOI in this record.
    This study describes a systematic approach to selecting optimal statistical recalibration methods and hindcast designs for producing reliable probability forecasts on seasonal-to-decadal time scales. A new recalibration method is introduced that includes adjustments for both unconditional and conditional biases in the mean and variance of the forecast distribution, and linear time-dependent bias in the mean. The complexity of the recalibration can be systematically varied by restricting the parameters. Simple recalibration methods may outperform more complex ones given limited training data. A new cross-validation methodology is proposed that allows the comparison of multiple recalibration methods and varying training periods using limited data. Part I considers the effect on forecast skill of varying the recalibration complexity and training period length. The interaction between these factors is analysed for grid box forecasts of annual mean near-surface temperature from the CanCM4 model. Recalibration methods that include conditional adjustment of the ensemble mean outperform simple bias correction by issuing climatological forecasts where the model has limited skill. Trend-adjusted forecasts outperform forecasts without trend adjustment at almost 75% of grid boxes. The optimal training period is around 30 years for trend-adjusted forecasts, and around 15 years otherwise. The optimal training period is strongly related to the length of the optimal climatology. Longer training periods may increase overall performance, but at the expense of very poor forecasts where skill is limited.
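
    A minimal sketch in the same spirit (not the paper's exact estimator or its cross-validation design): regress observations on the ensemble mean and on time, which adjusts unconditional bias, conditional amplitude bias and a linear trend in the mean, and set the forecast spread from the residual variance. The data and training length are synthetic.

# Illustration only: a simple regression-based recalibration of ensemble forecasts.
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1980, 2010)
ens = rng.normal(0.5 + 0.01 * (years - 1980), 0.4, size=(20, years.size))  # 20 members
obs = 0.02 * (years - 1980) + rng.normal(0, 0.3, years.size)

ens_mean = ens.mean(axis=0)
X = np.column_stack([np.ones(years.size), ens_mean, years - years.mean()])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)        # [intercept, scaling, trend]
resid_sd = np.std(obs - X @ coef, ddof=3)

# recalibrated forecast for the final year: adjusted mean, spread from residuals
mu = X[-1] @ coef
sd = resid_sd
print(f"recalibrated forecast: N({mu:.2f}, {sd:.2f}^2)")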

    On the use of Bayesian decision theory for issuing natural hazard warnings

    This is the final version of the article. Available from the Royal Society via the DOI in this record.
    Warnings for natural hazards improve societal resilience and are a good example of decision-making under uncertainty. A warning system is only useful if well defined and thus understood by stakeholders. However, most operational warning systems are heuristic: not formally or transparently defined. Bayesian decision theory provides a framework for issuing warnings under uncertainty but has not been fully exploited. Here, a decision-theoretic framework is proposed for hazard warnings. The framework allows any number of warning levels and future states of nature, and a mathematical model for constructing the necessary loss functions for both generic and specific end-users is described. The approach is illustrated using one-day-ahead warnings of daily severe precipitation over the UK, and compared with the current decision tool used by the UK Met Office. A probability model is proposed to predict precipitation, given ensemble forecast information, and loss functions are constructed for two generic stakeholders: an end-user and a forecaster. Results show that the Met Office tool issues fewer high-level warnings compared with our system for the generic end-user, suggesting that the former may not be suitable for risk-averse end-users. In addition, raw ensemble forecasts are shown to be unreliable and result in higher losses from warnings.
    This work was supported by the Natural Environment Research Council (Consortium on Risk in the Environment: Diagnostics, Integration, Benchmarking, Learning and Elicitation (CREDIBLE); grant no. NE/J017043/1).
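
    The decision-theoretic step itself is simple to sketch: issue the warning level that minimises expected loss under the predictive distribution of the hazard state. The loss matrix and probabilities below are illustrative placeholders, not the elicited loss functions or precipitation model from the paper.

# Illustration only: choose the warning level with minimum expected loss.
import numpy as np

# loss[w, s]: loss of issuing warning level w when state of nature s occurs
# (rows: no warning, yellow, amber, red; columns: none, moderate, severe, extreme)
loss = np.array([
    [0.0, 2.0, 10.0, 50.0],
    [0.5, 1.0,  6.0, 30.0],
    [1.5, 1.2,  2.0, 10.0],
    [4.0, 3.0,  2.5,  3.0],
])

def best_warning(p_state, loss):
    """Return the warning level with minimum expected loss and all expected losses."""
    expected = loss @ p_state
    return int(np.argmin(expected)), expected

p_state = np.array([0.55, 0.25, 0.15, 0.05])   # illustrative predictive probabilities
level, expected = best_warning(p_state, loss)
print("expected losses:", expected.round(2), "-> issue level", level)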