356 research outputs found

    A geostatistical extreme-value framework for fast simulation of natural hazard events

    This is the final version of the article. Available from the publisher via the DOI in this record. We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student's t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. This work has been kindly funded by the Willis Research Network.
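    As an illustration of the kind of simulation the abstract describes, the sketch below draws a spatially correlated Student's t field and maps it to generalized Pareto margins via the probability integral transform. This is a minimal sketch, not the authors' implementation: the grid, exponential correlation range, degrees of freedom and GPD parameters are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): simulate one hazard "event"
# by drawing a spatially correlated Student's t field and mapping it to
# generalized Pareto margins via the probability integral transform.
# Grid size, correlation range, degrees of freedom and GPD parameters are
# illustrative assumptions.
import numpy as np
from scipy import stats
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)

# Regular grid of "sites"
x, y = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
sites = np.column_stack([x.ravel(), y.ravel()])

# Exponential spatial correlation with an assumed range parameter
corr = np.exp(-cdist(sites, sites) / 3.0)

# Draw a multivariate Student's t field (df controls tail dependence)
df = 5
z = rng.multivariate_normal(np.zeros(len(sites)), corr)
chi2 = rng.chisquare(df)
t_field = z / np.sqrt(chi2 / df)

# Map to uniform margins, then to GPD margins (shape and scale assumed)
u = stats.t.cdf(t_field, df)
gusts = stats.genpareto.ppf(u, c=0.1, scale=5.0)  # "wind gust excesses"

print(gusts.reshape(x.shape).max())  # peak simulated excess for this event
```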

    Inference for spatial processes using imperfect data from measurements and numerical simulations

    This is the final version of the article. Available from arXiv.org via the link in this record. We present a framework for inference for spatial processes that have actual values imperfectly represented by data. Environmental processes represented as spatial fields, either at fixed time points or aggregated over fixed time periods, are studied. Data from both measurements and simulations performed by complex computer models are used to infer actual values of the spatial fields. Methods from geostatistics and statistical emulation are used to explicitly capture discrepancies between a spatial field's actual and simulated values. A geostatistical model captures spatial discrepancy: the difference in spatial structure between simulated and actual values. An emulator represents the intensity discrepancy: the bias in simulated values of given intensity. Measurement error is also represented. Gaussian process priors represent each source of error, which gives an analytical expression for the posterior distribution for the actual spatial field. Actual footprints for 50 European windstorms, which represent maximum wind gust speeds on a grid over a 72-hour period, are derived from wind gust speed measurements taken at stations across Europe and output simulated from a downscaled version of the Met Office Unified Model. The derived footprints have realistic spatial structure, and gust speeds closer to the measurements than originally simulated. We thank Phil Sansom for helpful discussion. We thank the Willis Research Network for supporting this work, the Met Office for providing the windstorm measurement data, and Julia Roberts for help with data provision.
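    A minimal sketch of the kind of Gaussian-process update the abstract relies on is given below: the simulator output serves as a prior mean, a GP prior is placed on the discrepancy, and conditioning on noisy station measurements yields a closed-form posterior for the actual field. The 1-D grid, covariance function, noise level and all numerical values are illustrative assumptions, not the paper's model.

```python
# Minimal sketch under assumed values (not the paper's model): treat the
# simulator output as a prior mean for the actual field, put a Gaussian-process
# prior on the discrepancy, and condition on noisy station measurements.
# Because everything is Gaussian, the posterior for the actual field is
# available in closed form.
import numpy as np

rng = np.random.default_rng(0)

grid = np.linspace(0.0, 100.0, 101)           # 1-D "grid" for illustration
simulated = 20.0 + 5.0 * np.sin(grid / 15.0)  # assumed simulator output

def sq_exp(a, b, var=4.0, length=10.0):
    """Squared-exponential covariance for the discrepancy field (assumed)."""
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# A few "stations" with noisy measurements of the actual field
stations = np.array([10.0, 35.0, 60.0, 85.0])
noise_sd = 1.0
actual_at_stations = 22.0 + 5.0 * np.sin(stations / 15.0)  # pretend truth
obs = actual_at_stations + rng.normal(0.0, noise_sd, stations.size)

# GP update: posterior mean of the actual field on the grid
K_ss = sq_exp(stations, stations) + noise_sd**2 * np.eye(stations.size)
K_gs = sq_exp(grid, stations)
resid = obs - np.interp(stations, grid, simulated)  # obs minus prior mean
posterior_mean = simulated + K_gs @ np.linalg.solve(K_ss, resid)

print(posterior_mean[::25])  # posterior "footprint" at a few grid points
```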

    Three recommendations for evaluating climate predictions

    This is the final version of the article. Available from Wiley / Royal Meteorological Society via the DOI in this record. Evaluation is important for improving climate prediction systems and establishing the credibility of their predictions of the future. This paper shows how the choices that must be made about how to evaluate predictions affect the outcome and ultimately our view of the prediction system's quality. The aim of evaluation is to measure selected attributes of the predictions, but some attributes are susceptible to having their apparent performance artificially inflated by the presence of climate trends, thus rendering past performance an unreliable indicator of future performance. We describe a class of performance measures that are immune to such spurious skill. The way in which an ensemble prediction is interpreted also has strong implications for the apparent performance, so we give recommendations about how evaluation should be tailored to different interpretations. Finally, we explore the role of the timescale of the predictand in evaluation and suggest ways to describe the relationship between timescale and performance. The ideas in this paper are illustrated using decadal temperature hindcasts from the CMIP5 archive. This work was part of the EQUIP project (http://www.equip.leeds.ac.uk) funded by NERC Directed Grant NE/H003509/1. The authors thank Leon Hermanson, Doug Smith and Holger Pohlmann for useful discussion, Helen Hanlon for assistance with obtaining data, and two anonymous reviewers for comments that helped us to improve the presentation of our ideas.
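    One simple way to see how a shared climate trend can inflate apparent skill, and how a trend-insensitive evaluation differs, is sketched below: correlation is computed on raw series and again after linearly detrending both hindcasts and observations. The detrending choice and the synthetic data are illustrative assumptions and not necessarily the class of measures proposed in the paper.

```python
# Illustrative sketch (not necessarily the measures proposed in the paper):
# correlation skill computed on raw series can be inflated by a shared trend,
# so compare it with skill computed after linearly detrending both series.
# The synthetic hindcasts and observations below are assumptions.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1960, 2010)

trend = 0.02 * (years - years[0])                    # shared warming trend
obs = trend + rng.normal(0.0, 0.3, years.size)
hindcast = trend + rng.normal(0.0, 0.3, years.size)  # no real year-to-year skill

def detrend(x, t):
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

raw_r = np.corrcoef(obs, hindcast)[0, 1]
detrended_r = np.corrcoef(detrend(obs, years), detrend(hindcast, years))[0, 1]
print(f"raw correlation {raw_r:.2f}, detrended correlation {detrended_r:.2f}")
```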

    The importance of sea ice area biases in 21st century multimodel projections of Antarctic temperature and precipitation

    This is the final version of the article. Available from the publisher via the DOI in this record. Climate models exhibit large biases in sea ice area (SIA) in their historical simulations. This study explores the impacts of these biases on multimodel uncertainty in Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble projections of 21st century change in Antarctic surface temperature, net precipitation, and SIA. The analysis is based on time slice climatologies in the Representative Concentration Pathway 8.5 future scenario (2070-2099) and historical (1970-1999) simulations across 37 different CMIP5 models. Projected changes in net precipitation, temperature, and SIA are found to be strongly associated with simulated historical mean SIA (e.g., cross-model correlations of r = 0.77, 0.71, and -0.85, respectively). Furthermore, historical SIA bias is found to have a large impact on the simulated ratio between net precipitation response and temperature response. This ratio is smaller in models with smaller-than-observed SIA. These strong emergent relationships on SIA bias could, if found to be physically robust, be exploited to give more precise climate projections for Antarctica. We acknowledge the World Climate Research Programme's Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups (listed in Table S1 of this paper) for producing and making available their model output. For CMIP the U.S. Department of Energy's Program for Climate Model Diagnosis and Intercomparison provided the coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. The original CMIP5 data can be accessed through the ESGF data portals (see http://pcmdi-cmip.llnl.gov/cmip5/availability.html). This study is part of the British Antarctic Survey Polar Science for Planet Earth Programme. It was funded by the UK Natural Environment Research Council (grant reference NE/K00445X/1). We would like to thank Paul Holland for his useful discussions and comments on an earlier version of this manuscript.
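    The cross-model correlations quoted above are the basis of an emergent relationship. The sketch below shows how such a relationship is typically quantified and, if judged physically robust, used to constrain a projection with an observed value; the 37-member ensemble values and the assumed observed SIA are synthetic, not CMIP5 output.

```python
# Sketch of the cross-model diagnostic behind an "emergent relationship":
# correlate a historical quantity (here, sea ice area) with a projected change
# across an ensemble of models. The ensemble values below are synthetic
# assumptions, not CMIP5 output.
import numpy as np

rng = np.random.default_rng(3)
n_models = 37

historical_sia = rng.normal(10.0, 2.0, n_models)  # 10^6 km^2, assumed
projected_warming = 3.0 - 0.15 * historical_sia + rng.normal(0.0, 0.3, n_models)

r = np.corrcoef(historical_sia, projected_warming)[0, 1]
slope, intercept = np.polyfit(historical_sia, projected_warming, 1)

# Constrain the projection using an assumed observed SIA of 11.5 x 10^6 km^2
constrained = slope * 11.5 + intercept
print(f"cross-model r = {r:.2f}, constrained warming = {constrained:.2f} K")
```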

    Best practices for post-processing ensemble climate forecasts, part I: selecting appropriate recalibration methods

    This is the final version of the article. Available from the publisher via the DOI in this record. This study describes a systematic approach to selecting optimal statistical recalibration methods and hindcast designs for producing reliable probability forecasts on seasonal-to-decadal time scales. A new recalibration method is introduced that includes adjustments for both unconditional and conditional biases in the mean and variance of the forecast distribution, and linear time-dependent bias in the mean. The complexity of the recalibration can be systematically varied by restricting the parameters. Simple recalibration methods may outperform more complex ones given limited training data. A new cross-validation methodology is proposed that allows the comparison of multiple recalibration methods and varying training periods using limited data. Part I considers the effect on forecast skill of varying the recalibration complexity and training period length. The interaction between these factors is analysed for grid box forecasts of annual mean near-surface temperature from the CanCM4 model. Recalibration methods that include conditional adjustment of the ensemble mean outperform simple bias correction by issuing climatological forecasts where the model has limited skill. Trend-adjusted forecasts outperform forecasts without trend adjustment at almost 75% of grid boxes. The optimal training period is around 30 years for trend-adjusted forecasts, and around 15 years otherwise. The optimal training period is strongly related to the length of the optimal climatology. Longer training periods may increase overall performance, but at the expense of very poor forecasts where skill is limited.
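    A generic sketch of the simplest member of this family of recalibration methods is given below: the observation is regressed on the ensemble mean and a linear time trend to correct conditional and time-dependent mean biases, and the ensemble spread is rescaled to match the residual variance over the training period. The synthetic hindcast data and the particular regression form are assumptions, not the authors' implementation.

```python
# Generic sketch of ensemble recalibration in the spirit described above
# (not the authors' implementation): regress observations on the ensemble mean
# and a linear time trend to correct conditional and time-dependent bias in the
# mean, then rescale the spread so the forecast variance matches the residual
# variance over the training period. All data below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_years, n_members = 30, 10
years = np.arange(n_years)

truth = 0.03 * years + rng.normal(0.0, 0.5, n_years)
ensemble = truth[:, None] + 0.8 + rng.normal(0.0, 0.9, (n_years, n_members))

ens_mean = ensemble.mean(axis=1)

# Mean adjustment: obs ~ a + b * ensemble_mean + c * year
X = np.column_stack([np.ones(n_years), ens_mean, years])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
calibrated_mean = X @ coef

# Variance adjustment: inflate/deflate member spread to match residual spread
resid_sd = np.std(truth - calibrated_mean, ddof=X.shape[1])
spread_sd = np.std(ensemble - ens_mean[:, None])
calibrated = calibrated_mean[:, None] + (ensemble - ens_mean[:, None]) * resid_sd / spread_sd

print(coef, resid_sd / spread_sd)
```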

    On the use of Bayesian decision theory for issuing natural hazard warnings

    This is the final version of the article. Available from the Royal Society via the DOI in this record. Warnings for natural hazards improve societal resilience and are a good example of decision-making under uncertainty. A warning system is only useful if well defined and thus understood by stakeholders. However, most operational warning systems are heuristic: not formally or transparently defined. Bayesian decision theory provides a framework for issuing warnings under uncertainty but has not been fully exploited. Here, a decision theoretic framework is proposed for hazard warnings. The framework allows any number of warning levels and future states of nature, and a mathematical model for constructing the necessary loss functions for both generic and specific end-users is described. The approach is illustrated using one-day ahead warnings of daily severe precipitation over the UK, and compared to the current decision tool used by the UK Met Office. A probability model is proposed to predict precipitation, given ensemble forecast information, and loss functions are constructed for two generic stakeholders: an end-user and a forecaster. Results show that the Met Office tool issues fewer high-level warnings compared with our system for the generic end-user, suggesting the former may not be suitable for risk averse end-users. In addition, raw ensemble forecasts are shown to be unreliable and result in higher losses from warnings. This work was supported by the Natural Environment Research Council (Consortium on Risk in the Environment: Diagnostics, Integration, Benchmarking, Learning and Elicitation (CREDIBLE); grant no. NE/J017043/1).
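    The decision rule at the heart of such a framework can be sketched in a few lines: given forecast probabilities over future states and a loss for issuing each warning level in each state, issue the level with the smallest expected loss. The three-level loss matrix and forecast probabilities below are illustrative assumptions, not the loss functions constructed in the paper.

```python
# Minimal sketch of the Bayesian decision rule: issue the warning level with
# the smallest expected loss under the forecast distribution. The loss matrix
# and forecast probabilities are illustrative assumptions.
import numpy as np

# States: no severe precipitation, moderate, severe
forecast_probs = np.array([0.70, 0.20, 0.10])

# loss[w, s]: loss of issuing warning level w (none, amber, red) in state s
loss = np.array([
    [0.0, 4.0, 10.0],   # no warning: costly misses
    [1.0, 1.0,  5.0],   # amber: small false-alarm cost, partial protection
    [3.0, 2.0,  1.0],   # red: high false-alarm cost, best protection
])

expected_loss = loss @ forecast_probs
print(expected_loss, "-> issue level", int(np.argmin(expected_loss)))
```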

    Detecting improvements in forecast correlation skill: Statistical testing and power analysis

    This is the final version. Available from the American Meteorological Society via the DOI in this record. The skill of weather and climate forecast systems is often assessed by calculating the correlation coefficient between past forecasts and their verifying observations. Improvements in forecast skill can thus be quantified by correlation differences. The uncertainty in the correlation difference needs to be assessed to judge whether the observed difference constitutes a genuine improvement, or is compatible with random sampling variations. A widely used statistical test for correlation difference is known to be unsuitable, because it assumes that the competing forecasting systems are independent. In this paper, appropriate statistical methods are reviewed to assess correlation differences when the competing forecasting systems are strongly correlated with one another. The methods are used to compare correlation skill between seasonal temperature forecasts that differ in initialization scheme and model resolution. A simple power analysis framework is proposed to estimate the probability of correctly detecting skill improvements, and to determine the minimum number of samples required to reliably detect improvements. The proposed statistical test has a higher power of detecting improvements than the traditional test. The main examples suggest that sample sizes of climate hindcasts should be increased to about 40 years to ensure sufficiently high power. It is found that seasonal temperature forecasts are significantly improved by using realistic land surface initial conditions. The authors acknowledge support by the European Union Program FP7/2007-13 under Grant Agreement 3038378 (SPECS). The work of O. Bellprat was funded by ESA under the Climate Change Initiative (CCI) Living Planet Fellowship VERITAS-CCI.
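    One standard test for comparing two dependent correlations that share the verifying observations is the Meng-Rosenthal-Rubin z test based on Fisher-transformed correlations; it is sketched below as an example of the kind of method reviewed, with synthetic hindcasts standing in for the two competing forecast systems.

```python
# Sketch of one standard test for comparing two dependent correlations that
# share the observations (Meng, Rosenthal and Rubin, 1992), as an example of
# the kind of test reviewed in the paper. The synthetic hindcasts below are
# assumptions for demonstration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 40                                         # hindcast length in years

obs = rng.normal(size=n)
fc_old = 0.5 * obs + rng.normal(0.0, 1.0, n)   # baseline forecast system
fc_new = 0.7 * obs + 0.6 * (fc_old - 0.5 * obs) + rng.normal(0.0, 0.6, n)

r1 = np.corrcoef(obs, fc_new)[0, 1]            # skill of new system
r2 = np.corrcoef(obs, fc_old)[0, 1]            # skill of old system
r12 = np.corrcoef(fc_new, fc_old)[0, 1]        # dependence between systems

z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher transforms
rbar2 = (r1**2 + r2**2) / 2
f = min((1 - r12) / (2 * (1 - rbar2)), 1.0)
h = (1 - f * rbar2) / (1 - rbar2)
z = (z1 - z2) * np.sqrt((n - 3) / (2 * (1 - r12) * h))
p = 2 * stats.norm.sf(abs(z))

print(f"r_new={r1:.2f}, r_old={r2:.2f}, z={z:.2f}, p={p:.3f}")
```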

    Simulating multimodal seasonality in extreme daily precipitation occurrence

    Floods pose multi-dimensional hazards to critical infrastructure and society and these hazards may increase under climate change. While flood conditions are dependent on catchment type and soil conditions, seasonal precipitation extremes also play an important role. The extreme precipitation events driving flood occurrence may arrive non-uniformly in time. In addition, their seasonal and inter-annual patterns may also cause sequences of several events and enhance likely flood responses. Spatial and temporal patterns of extreme daily precipitation occurrence are characterized across the UK. Extreme and very heavy daily precipitation is not uniformly distributed throughout the year, but exhibits spatial differences, arising from the relative proximity to the North Atlantic Ocean or North Sea. Periods of weeks or months are identified during which extreme daily precipitation occurrences are most likely to occur, with some regions of the UK displaying multimodal seasonality. A Generalized Additive Model is employed to simulate extreme daily precipitation occurrences over the UK from 1901-2010 and to allow robust statistical testing of temporal changes in the seasonal distribution. Simulations show that seasonality has the strongest correlation with intra-annual variations in extreme event occurrence, while Sea Surface Temperature (SST) and Mean Sea Level Pressure (MSLP) have the strongest correlation with inter-annual variations. The north and west of the UK are dominated by MSLP in the mid-North Atlantic and the south and east are dominated by local SST. All regions now have a higher likelihood of autumnal extreme daily precipitation than earlier in the twentieth century. This equates to extreme daily precipitation occurring earlier in the autumn in the north and west, and later in the autumn in the south and east. The change in timing is accompanied by increases in the probability of extreme daily precipitation occurrences during the autumn, and in the number of days with a very high probability of an extreme event. These results indicate a higher probability of several extreme occurrences in succession and a potential increase in flooding. NCAR is sponsored by the National Science Foundation. M.R.T. was partially supported by NSF EASM grant S1048841, the NCAR Weather and Climate Assessment Science Program and a NERC funded Postgraduate Research Studentship NE/G523498/1 (2008-2012). H.J.F. was supported by a NERC Postdoctoral Fellowship Award NE/D009588/1 (2006-2010) and is now funded by the Wolfson Foundation and the Royal Society as a Royal Society Wolfson Research Merit Award holder (WM140025).
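    The occurrence modelling idea can be sketched with a logistic regression in which harmonic day-of-year terms stand in for the smooth seasonal term of a Generalized Additive Model, alongside SST and MSLP covariates. The synthetic data, harmonic basis and coefficient values below are illustrative assumptions, not the model fitted in the paper.

```python
# Sketch of the occurrence modelling idea with synthetic data: a logistic
# regression with harmonic day-of-year terms standing in for the smooth
# seasonal term of a Generalized Additive Model, plus assumed SST and MSLP
# covariates. Covariate values and coefficients are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
doy = np.tile(np.arange(1, 366), 30)           # day of year, 30 synthetic years

# Assumed drivers: bimodal seasonality plus slowly varying SST and MSLP indices
sst = rng.normal(0.0, 1.0, 30).repeat(365)
mslp = rng.normal(0.0, 1.0, 30).repeat(365)
eta = (-4.0
       + 0.8 * np.sin(2 * np.pi * doy / 365) + 0.5 * np.cos(4 * np.pi * doy / 365)
       + 0.4 * sst - 0.3 * mslp)
extreme = rng.binomial(1, 1 / (1 + np.exp(-eta)))  # daily occurrence indicator

# Harmonic (cyclic) basis for day of year plus the large-scale covariates
X = sm.add_constant(np.column_stack([
    np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365),
    np.sin(4 * np.pi * doy / 365), np.cos(4 * np.pi * doy / 365),
    sst, mslp,
]))
fit = sm.GLM(extreme, X, family=sm.families.Binomial()).fit()
print(fit.params.round(2))
```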

    Antipyretic medication for a feverish planet

    This is the final version. Available on open access from Springer Nature via the DOI in this record. University of Geneva.