Spatio-temporal modelling of extreme storms
A flexible spatio-temporal model is implemented to analyse extreme
extra-tropical cyclones objectively identified over the Atlantic and Europe in
6-hourly re-analyses from 1979-2009. Spatial variation in the extremal
properties of the cyclones is captured using a 150-cell spatial regularisation,
latitude as a covariate, and spatial random effects. The North Atlantic
Oscillation (NAO) is also used as a covariate and is found to have a
significant effect on intensifying extremal storm behaviour, especially over
Northern Europe and the Iberian peninsula. Estimates of lower bounds on minimum
sea-level pressure are typically 10-50 hPa below the minimum values observed
for historical storms, with the largest differences occurring when the NAO index is positive.

Comment: Published at http://dx.doi.org/10.1214/14-AOAS766 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
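As background to the "lower bound" estimates mentioned above: in extreme-value analysis, a generalised Pareto fit to threshold excesses with a negative shape parameter implies a finite endpoint, and hence a lower bound on attainable pressure. The sketch below is purely illustrative and is not the paper's hierarchical spatial model: the synthetic data, the 960 hPa threshold, the parameter values, and the crude grid-search maximum-likelihood fit are all assumptions.

```python
import numpy as np

def gpd_loglik(y, xi, sigma):
    """Log-likelihood of GPD excesses y > 0 (-inf outside the support)."""
    if sigma <= 0:
        return -np.inf
    z = 1.0 + xi * y / sigma
    if np.any(z <= 0):
        return -np.inf
    if abs(xi) < 1e-9:                      # exponential limit as xi -> 0
        return -len(y) * np.log(sigma) - y.sum() / sigma
    return -len(y) * np.log(sigma) - (1.0 + 1.0 / xi) * np.log(z).sum()

def fit_gpd_grid(y):
    """Crude maximum-likelihood fit by grid search (illustration only)."""
    best = (None, None, -np.inf)
    for xi in np.linspace(-0.9, 0.9, 181):
        for s in np.linspace(0.5, 30.0, 60):
            ll = gpd_loglik(y, xi, s)
            if ll > best[2]:
                best = (xi, s, ll)
    return best[:2]

rng = np.random.default_rng(42)
u = 960.0                                   # hypothetical pressure threshold (hPa)
xi_true, sigma_true = -0.3, 10.0            # xi < 0 => finite endpoint
p = rng.uniform(size=2000)
y = sigma_true / xi_true * ((1 - p) ** (-xi_true) - 1)  # GPD inverse CDF

xi_hat, sigma_hat = fit_gpd_grid(y)
if xi_hat < 0:
    lower_bound = u - sigma_hat / -xi_hat   # estimated lowest attainable pressure
    print(f"xi={xi_hat:.2f}, sigma={sigma_hat:.2f}, lower bound={lower_bound:.1f} hPa")
```

With a negative fitted shape, the estimated bound sits below every observed value, mirroring the abstract's finding that bound estimates lie 10-50 hPa below observed minima.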
Inherent bounds on forecast accuracy due to observation uncertainty caused by temporal sampling
© Copyright 2015 American Meteorological Society (AMS).

Author affiliations: Marion P. Mittermaier (Numerical Modelling, Weather Science, Met Office, Exeter, United Kingdom); David B. Stephenson (Exeter Climate Systems, Department of Mathematics and Computer Science, Exeter University, Exeter, United Kingdom).

Synoptic observations are often treated as error-free representations of the true state of the real world. For example, when observations are used to verify numerical weather prediction (NWP) forecasts, forecast-observation differences (the total error) are often entirely attributed to forecast inaccuracy. Such simplification is no longer justifiable for short-lead forecasts made with increasingly accurate higher-resolution models. For example, at least 25% of t + 6 h individual Met Office site-specific (postprocessed) temperature forecasts now typically have total errors of less than 0.2 K, which are comparable to typical instrument measurement errors of around 0.1 K. In addition to instrument errors, uncertainty is introduced by measurements not being taken concurrently with the forecasts. For example, synoptic temperature observations in the United Kingdom are typically taken 10 min before the hour, whereas forecasts are generally extracted as instantaneous values on the hour. This study develops a simple yet robust statistical modeling procedure for assessing how serially correlated subhourly variations limit the forecast accuracy that can be achieved. The methodology is demonstrated by application to synoptic temperature observations sampled every minute at several locations around the United Kingdom. Results show that subhourly variations lead to sizeable forecast errors of 0.16-0.44 K for observations taken 10 min before the forecast issue time. The magnitude of this error depends on spatial location and the annual cycle, with the greatest errors occurring in the warmer seasons and at inland sites. This important source of uncertainty consists of a bias due to the diurnal cycle, plus irreducible uncertainty due to unpredictable subhourly variations that fundamentally limit forecast accuracy.
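The mechanism described here (a diurnal-cycle bias plus serially correlated subhourly noise) can be sketched with a toy simulation. This is not the paper's statistical procedure or its data: the 3 K diurnal amplitude, the AR(1) parameters, and the 60-day synthetic record are all illustrative assumptions chosen only to show how a 10-minute sampling offset produces both a systematic hourly bias and an irreducible random error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, per_day = 60, 1440                  # 60 days of minute-resolution data
n = n_days * per_day
t = np.arange(n)

# Diurnal cycle (3 K amplitude is an illustrative assumption, not a fitted value)
diurnal = 3.0 * np.sin(2 * np.pi * t / per_day)

# AR(1) subhourly variability; phi sets how fast serial correlation decays
phi, sd_innov = 0.98, 0.05
noise = np.zeros(n)
eps = rng.normal(0.0, sd_innov, n)
for i in range(1, n):
    noise[i] = phi * noise[i - 1] + eps[i]
temp = 15.0 + diurnal + noise

# "Observation" taken 10 min before the hour vs the on-the-hour value
hours = np.arange(per_day // 60, n // 60) * 60   # on-the-hour minute indices
diff = temp[hours - 10] - temp[hours]
rmse = np.sqrt(np.mean(diff ** 2))

# The diurnal cycle contributes a bias that depends on the hour of day
hod = (hours // 60) % 24
bias_by_hour = np.array([diff[hod == h].mean() for h in range(24)])
print(f"RMSE = {rmse:.3f} K, max |hourly bias| = {np.abs(bias_by_hour).max():.3f} K")
```

Even this toy setup yields an RMSE of a few tenths of a kelvin, the same order as the 0.16-0.44 K range reported in the abstract, though the agreement is coincidental to the chosen parameters.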
Appalachia in the Sixties: Decade of Reawakening
In The Southern Appalachian Region: A Survey, published by the University Press of Kentucky in 1962, Rupert Vance suggested a decennial review of the region's progress. No systematic study comparable to that made at the beginning of the decade is available to answer the question of how far Appalachia has come since then, but David S. Walls and John B. Stephenson have assembled a broad range of firsthand reports which together convey the story of Appalachia in the sixties. These observations of journalists, field workers, local residents, and social scientists have been gathered from a variety of sources ranging from national magazines to county weeklies.
Focusing mainly on the coalfields of West Virginia, eastern Kentucky, southwestern Virginia, and north-central Tennessee, the editors first present selections that reflect the “rediscovery” of the region as a problem area in the early sixties and describe the federal programs designed to rehabilitate it and their results. Other sections focus on the politics of the coal industry, the extent and impact of the continued migration from the region, and the persistence of human suffering and environmental devastation. A final section moves into the 1970s with proposals for the future. Although they conclude that there is little ground for claiming success in solving the region's problems, the editors find signs of hope in the scattered movements toward grass-roots organization described by some of the contributors, and in the new tendency to define solutions in terms of reconstruction rather than amelioration.
David S. Walls, professor emeritus of sociology at Sonoma State University, served on the staff of the Appalachian Volunteers, doing community-organizing work in central Appalachia. He is the author of The Activist's Almanac: The Concerned Citizen's Guide to the Leading Advocacy Organizations in America.
John B. Stephenson, a native of the Appalachian mountains, was the first director of the Appalachian Center at the University of Kentucky and served as the president of Berea College from 1984 to 1994. He was the author of numerous books, including Shiloh: A Mountain Community.
A Bayesian framework for verification and recalibration of ensemble forecasts: How uncertain is NAO predictability?
Predictability estimates of ensemble prediction systems are uncertain due to
limited numbers of past forecasts and observations. To account for such
uncertainty, this paper proposes a Bayesian inferential framework that provides
a simple 6-parameter representation of ensemble forecasting systems and the
corresponding observations. The framework is probabilistic, and thus allows for
quantifying uncertainty in predictability measures such as correlation skill
and signal-to-noise ratios. It also provides a natural way to produce
recalibrated probabilistic predictions from uncalibrated ensemble forecasts.
The framework is used to address important questions concerning the skill of
winter hindcasts of the North Atlantic Oscillation for 1992-2011 issued by the
Met Office GloSea5 climate prediction system. Although there is much
uncertainty in the correlation between ensemble mean and observations, there is
strong evidence of skill: the 95% credible interval of the correlation
coefficient of [0.19,0.68] does not overlap zero. There is also strong evidence
that the forecasts are not exchangeable with the observations: With over 99%
certainty, the signal-to-noise ratio of the forecasts is smaller than the
signal-to-noise ratio of the observations, which suggests that raw forecasts
should not be taken as representative scenarios of the observations. Forecast
recalibration is thus required, which can be coherently addressed within the
proposed framework.

Comment: 36 pages, 10 figures.
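The headline point, that a 20-year hindcast record leaves large sampling uncertainty in the correlation skill, can be illustrated without the paper's Bayesian machinery. The sketch below is an assumption-laden stand-in: a simple signal-plus-noise simulation (all parameter values invented) with a bootstrap interval for the correlation, rather than the 6-parameter Bayesian inference the paper actually develops.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_members = 20, 24                 # hindcast length and ensemble size

# Signal-plus-noise model; all parameter values are illustrative assumptions
signal = rng.normal(0.0, 1.0, n_years)      # predictable signal shared with obs
obs = 0.8 * signal + rng.normal(0.0, 1.0, n_years)
ens = signal[:, None] + rng.normal(0.0, 2.0, (n_years, n_members))
ens_mean = ens.mean(axis=1)

r = np.corrcoef(ens_mean, obs)[0, 1]        # correlation skill point estimate

# Bootstrap over years: sampling uncertainty of r with only 20 hindcasts
boot = np.empty(5000)
for b in range(5000):
    idx = rng.integers(0, n_years, n_years)
    boot[b] = np.corrcoef(ens_mean[idx], obs[idx])[0, 1]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"r = {r:.2f}, 95% interval ~ [{lo:.2f}, {hi:.2f}]")
```

The resulting interval is wide, echoing the paper's broad [0.19, 0.68] credible interval, though a percentile bootstrap is only a rough frequentist analogue of the posterior interval reported in the abstract.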
On the predictability of extremes: Does the butterfly effect ever decrease?
This is the peer reviewed version of the following article: Sterk, A. E., Stephenson, D. B., Holland, M. P. and Mylne, K. R. (2015), On the predictability of extremes: Does the butterfly effect ever decrease? Quarterly Journal of the Royal Meteorological Society, which has been published in final form at http://dx.doi.org/10.1002/qj.2627. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving: http://olabout.wiley.com/WileyCDA/Section/id-820227.html#terms

This study investigates whether or not predictability always decreases for more extreme events. Predictability is measured by the Mean Squared Error (MSE), estimated here from the difference of pairs of ensemble forecasts, conditioned on one of the forecast variables (the 'pseudo-observation') exceeding a threshold. Using an exchangeable linear regression model for pairs of forecast variables, we show that the MSE can be decomposed into the sum of three terms: a threshold-independent constant, a mean term that always increases with threshold, and a variance term that can either increase, decrease, or stay constant with threshold. Using the generalised Pareto distribution to model wind speed excesses over a threshold, we show that the MSE always increases with threshold at sufficiently high thresholds. However, the MSE can be a decreasing function of threshold at lower thresholds, but only if the forecasts have finite upper bounds. The methods are illustrated by application to daily wind speed forecasts for London made using the 24-member Met Office Global and Regional Ensemble Prediction System from 1 January 2009 to 31 May 2011. For this example, the mean term increases faster than the variance term decreases with increasing threshold, and so predictability decreases for more extreme events.

Funding: Engineering and Physical Sciences Research Council (EPSRC); Netherlands Organisation for Scientific Research (NWO).
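The threshold-conditioned MSE and its mean/variance decomposition are easy to reproduce in a toy setting. The sketch below uses an exchangeable bivariate Gaussian pair (unbounded forecasts, correlation 0.8 chosen arbitrarily) rather than the paper's wind-speed data; because Gaussian forecasts have no finite upper bound, the conditional MSE should only increase with the threshold, consistent with the abstract's argument.

```python
import numpy as np

rng = np.random.default_rng(7)
n, rho = 2_000_000, 0.8                     # sample size and pair correlation

# Exchangeable pair of forecast variables: correlated standard normals
z = rng.normal(size=(2, n))
x1 = z[0]                                   # the 'pseudo-observation'
x2 = rho * z[0] + np.sqrt(1 - rho ** 2) * z[1]

def cond_mse(u):
    """MSE of the pair, conditioned on the pseudo-observation exceeding u.

    Returns (MSE, squared mean term, variance term); the identity
    MSE = mean^2 + variance holds exactly for each threshold."""
    d = (x1 - x2)[x1 > u]
    return np.mean(d ** 2), np.mean(d) ** 2, np.var(d)

results = {u: cond_mse(u) for u in [0.0, 1.0, 2.0]}
for u, (mse, mean_sq, var) in results.items():
    print(f"u={u:.0f}: MSE={mse:.3f} (mean^2={mean_sq:.3f} + var={var:.3f})")
```

In this unbounded-forecast case the mean term grows with the threshold while the variance term stays roughly flat, so the conditional MSE rises monotonically, i.e. the more extreme the conditioning event, the less predictable it is.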
Equitability revisited: why the “equitable threat score” is not equitable
In the forecasting of binary events, verification measures that are “equitable” were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used “equitable threat score” (ETS), as well as numerous others, satisfies neither of these requirements and only satisfies the first requirement in the limit of an infinite sample size. Such measures are referred to as “asymptotically equitable.” In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around −0.5, reducing in magnitude to −0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy’s two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. 
This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
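The claim that random forecasts receive a positive expected ETS at small sample sizes can be checked by simulation. The sketch below is an illustration with invented settings (n = 16 samples, base rate 0.5, forecasts independent of observations), not the paper's analysis; it estimates the expected ETS of a random forecasting system by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n, p = 200_000, 16, 0.5           # illustrative sample size and base rate

obs = rng.random((n_trials, n)) < p
fcst = rng.random((n_trials, n)) < p        # random forecasts, independent of obs

a = (fcst & obs).sum(axis=1).astype(float)      # hits
b = (fcst & ~obs).sum(axis=1).astype(float)     # false alarms
c = (~fcst & obs).sum(axis=1).astype(float)     # misses
a_r = (a + b) * (a + c) / n                     # hits expected by chance

# ETS = (a - a_r) / (a + b + c - a_r); guard the rare degenerate tables
denom = a + b + c - a_r
ets = np.where(denom == 0, 0.0, (a - a_r) / np.where(denom == 0, 1.0, denom))
print(f"mean ETS of random forecasts (n={n}): {ets.mean():.3f}")
```

The mean lands comfortably above zero, in line with the abstract's point that ETS is only "asymptotically equitable": its expected score for random forecasts falls below 0.01 only once the sample size exceeds roughly 30.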