747 research outputs found

    Improving on the empirical covariance matrix using truncated PCA with white noise residuals

    The empirical covariance matrix is not necessarily the best estimator for the population covariance matrix: we describe a simple method which gives better estimates in two examples. The method models the covariance matrix using truncated PCA with white noise residuals. Jack-knife cross-validation is used to find the truncation that maximises the out-of-sample likelihood score.
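The estimator described above can be sketched as follows: eigendecompose the empirical covariance, keep the top-k principal components, and replace the remaining eigenvalues by their average (the white-noise residual level). This is an illustrative reconstruction under stated assumptions, not the authors' code, and it omits the jack-knife step used to choose k:

```python
import numpy as np

def truncated_pca_cov(X, k):
    """Covariance estimate keeping the top-k principal components,
    with the residual modelled as white noise (the remaining
    eigenvalues are replaced by their mean). Illustrative sketch."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)            # empirical covariance
    vals, vecs = np.linalg.eigh(S)         # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder to descending
    noise = vals[k:].mean() if k < p else 0.0  # white-noise residual level
    new_vals = np.concatenate([vals[:k], np.full(p - k, noise)])
    return (vecs * new_vals) @ vecs.T      # V diag(new_vals) V^T

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
C = truncated_pca_cov(X, 2)
```

Note that averaging the trailing eigenvalues preserves the total variance (the trace) of the empirical estimate; only the shape of the spectrum is regularised.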

    Probabilistic temperature forecasting: a comparison of four spread-regression models

    Spread regression is an extension of linear regression that allows for the inclusion of a predictor that contains information about the variance. It can be used to take the information from a weather forecast ensemble and produce a probabilistic prediction of future temperatures. There are a number of ways that spread regression can be formulated in detail. We perform an empirical comparison of four of the most obvious methods applied to the calibration of a year of ECMWF temperature forecasts for London Heathrow.
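One of the several possible formulations of spread regression can be sketched as below: the forecast mean is linear in the ensemble mean, the forecast variance is linear in the squared ensemble spread, and all four parameters are fitted by maximum likelihood. The specific parameterisation, starting values and synthetic data here are illustrative assumptions, not any one of the four models compared in the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_spread_regression(m, s, y):
    """Fit the model y ~ N(a + b*m, c + d*s**2) by maximum likelihood,
    where m is the ensemble mean and s the ensemble spread.
    One possible formulation; illustrative sketch only."""
    def nll(params):
        a, b, c, d = params
        var = c + d * s**2
        if np.any(var <= 0):
            return np.inf                  # keep the variance positive
        return -np.sum(norm.logpdf(y, loc=a + b * m, scale=np.sqrt(var)))
    res = minimize(nll, x0=[0.0, 1.0, 1.0, 0.1], method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(1)
m = rng.normal(10, 3, 500)        # synthetic ensemble means
s = rng.uniform(0.5, 2.0, 500)    # synthetic ensemble spreads
y = m + rng.normal(0, s)          # observations with spread-dependent noise
a, b, c, d = fit_spread_regression(m, s, y)
```

On data whose uncertainty really does scale with the spread, the fitted d should be clearly positive; when it shrinks towards zero, the spread term is adding nothing over ordinary regression.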

    Improving probabilistic weather forecasts using seasonally varying calibration parameters

    We show that probabilistic weather forecasts of site-specific temperatures can be dramatically improved by using seasonally varying rather than constant calibration parameters.
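One simple way to let a calibration parameter vary with season is a constant plus a single annual harmonic, as sketched below; the exact parameterisation used in the paper may differ, and the numbers here are illustrative:

```python
import numpy as np

def seasonal_parameter(day_of_year, p0, p1, p2):
    """A calibration parameter that varies smoothly over the year:
    a constant level plus one annual sine/cosine harmonic.
    Illustrative sketch, not the paper's parameterisation."""
    omega = 2 * np.pi * day_of_year / 365.25
    return p0 + p1 * np.sin(omega) + p2 * np.cos(omega)

days = np.arange(365)
bias = seasonal_parameter(days, 0.5, 0.3, -0.2)  # e.g. a seasonally varying bias
```

The harmonic coefficients can then be fitted jointly with the rest of the calibration model, replacing each constant parameter with a smooth annual cycle.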

    Probabilistic temperature forecasting: a summary of our recent research results

    We summarise the main results from a number of our recent articles on the subject of probabilistic temperature forecasting.

    Probabilistic forecasts of temperature: measuring the utility of the ensemble spread

    The spread of ensemble weather forecasts contains information about the spread of possible future weather scenarios. But how much information does it contain, and how useful is that information in predicting the probabilities of future temperatures? One traditional answer to this question is to calculate the spread-skill correlation. We discuss the spread-skill correlation and how it interacts with some simple calibration schemes. We then point out why it is not, in fact, a useful measure for the amount of information in the ensemble spread, and discuss a number of other measures that are more useful.

    Moment based methods for ensemble assessment and calibration

    We describe various moment-based ensemble interpretation models for the construction of probabilistic temperature forecasts from ensembles. We apply the methods to one year of medium-range ensemble forecasts and perform in-sample and out-of-sample testing. Our main conclusion is that probabilistic forecasts derived from the ensemble mean using regression are just as good as those based on the ensemble mean and the ensemble spread using a more complex calibration algorithm. The explanation for this seems to be that the predictable component of the variability of the forecast uncertainty is only a small fraction of the total forecast uncertainty. Users of ensemble temperature forecasts are advised, until further evidence becomes available, to ignore the ensemble spread and build probabilistic forecasts based on the ensemble mean alone.
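The mean-only benchmark described above is just linear regression of observations on the ensemble mean, with a constant residual variance supplying the forecast spread. A minimal sketch, with illustrative names and synthetic data:

```python
import numpy as np

def fit_mean_only_calibration(m, y):
    """Probabilistic forecast from the ensemble mean alone:
    regress observations y on ensemble means m, and use the
    residual standard deviation as a constant forecast spread.
    Illustrative sketch of the benchmark, not the authors' code."""
    b, a = np.polyfit(m, y, 1)             # slope, intercept
    resid = y - (a + b * m)
    return a, b, resid.std(ddof=2)         # forecast: N(a + b*m, sd**2)

rng = np.random.default_rng(2)
m = rng.normal(15, 4, 300)                 # synthetic ensemble means
y = 0.5 + 0.9 * m + rng.normal(0, 1.2, 300)  # synthetic observations
a, b, sd = fit_mean_only_calibration(m, y)
```

The paper's point is that a spread-dependent variance model struggles to beat this two-moment baseline out of sample, because little of the day-to-day variation in forecast uncertainty is predictable.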

    Do probabilistic medium-range temperature forecasts need to allow for non-normality?

    The Gaussian spread regression model for the calibration of site-specific ensemble temperature forecasts depends on the apparently restrictive assumption that the uncertainty around temperature forecasts is normally distributed. We generalise the model using a kernel density to allow for much more flexible distribution shapes. However, we do not find any meaningful improvement in the resulting probabilistic forecast when evaluated using likelihood-based scores. We conclude that the distribution of uncertainty is either very close to normal, or if it is not close to normal, then the non-normality is not being predicted by the ensemble forecast that we test.

    Use of the likelihood for measuring the skill of probabilistic forecasts

    We define the likelihood and give a number of justifications for its use as a skill measure for probabilistic forecasts. We describe a number of different scores based on the likelihood, and briefly investigate the relationships between the likelihood, the mean square error and the ignorance.
    Comment: Version 1 (3rd August 2003) contained some incorrect statements about the relationship between the likelihood and the Brier score. These have now been removed.
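As a sketch of one likelihood-based score, the ignorance is the negative base-2 logarithm of the forecast density evaluated at the verifying observation (lower is better). The Gaussian forecast form and the function name below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def ignorance(mu, sd, obs):
    """Ignorance score for a Gaussian forecast N(mu, sd**2):
    minus log2 of the forecast density at the observation.
    Lower is better. Illustrative sketch."""
    return -norm.logpdf(obs, loc=mu, scale=sd) / np.log(2)

# a sharp, well-centred forecast beats a vague one for the same observation
sharp = ignorance(10.0, 1.0, 10.2)
vague = ignorance(10.0, 5.0, 10.2)
```

Because it is a monotonic transform of the likelihood, ranking forecasts by mean ignorance is equivalent to ranking them by out-of-sample likelihood.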

    The problem with the Brier score

    The Brier score is frequently used by meteorologists to measure the skill of binary probabilistic forecasts. We show, however, that in simple idealised cases it gives counterintuitive results. We advocate the use of an alternative measure that has a more compelling intuitive justification.
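For reference, the Brier score is the mean squared difference between forecast probabilities and binary outcomes (lower is better). A minimal sketch; the idealised cases the paper analyses are not reproduced here:

```python
import numpy as np

def brier(p, o):
    """Brier score for binary probability forecasts p against
    outcomes o in {0, 1}; lower is better."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)
```

A perfectly confident and correct set of forecasts scores 0, and a constant forecast of 0.5 scores 0.25 whatever happens.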

    Do medium-range ensemble forecasts give useful predictions of temporal correlations?

    Medium-range ensemble forecasts are typically used to derive predictions of the conditional marginal distributions of future events on individual days. We assess whether they can also be used to predict the conditional correlations between different days.
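An ensemble's implied prediction of day-to-day correlation can be sketched by treating each member's trajectory as a sample and correlating across members, as below; the array layout and synthetic data are illustrative assumptions, not the paper's method:

```python
import numpy as np

def ensemble_temporal_corr(ens):
    """Predicted correlation between lead days from an ensemble:
    ens has shape (members, days); correlate the day columns
    across members. Illustrative sketch."""
    return np.corrcoef(ens, rowvar=False)

rng = np.random.default_rng(3)
base = rng.normal(0, 1, (50, 1))                  # signal shared across days
ens = base + rng.normal(0, 0.5, (50, 3))          # 50 members, 3 lead days
C = ensemble_temporal_corr(ens)
```

Whether such implied correlations verify against observed day-to-day correlations is exactly the question the abstract poses.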