
    Beyond probabilities: A possibilistic framework to interpret ensemble predictions and fuse imperfect sources of information

    Ensemble forecasting is widely used in medium‐range weather prediction to account for the uncertainty that is inherent in the numerical prediction of high‐dimensional, nonlinear systems with high sensitivity to initial conditions. Ensemble forecasting allows one to sample possible future scenarios in a Monte‐Carlo‐like approximation through small strategic perturbations of the initial conditions and, in some cases, stochastic parametrization schemes of the atmosphere–ocean dynamical equations. Results are generally interpreted in a probabilistic manner by turning the ensemble into a predictive probability distribution. Yet, due to model bias and dispersion errors, this interpretation is often not reliable and statistical postprocessing is needed to reach probabilistic calibration. This is all the more true for extreme events which, for dynamical reasons, cannot generally be associated with a significant density of ensemble members. In this work we propose a novel approach: a possibilistic interpretation of ensemble predictions, taking inspiration from possibility theory. This framework allows us to integrate in a consistent manner other imperfect sources of information, such as the insight about the system dynamics provided by the analogue method. We thereby show that probability distributions may not be the best way to extract the valuable information contained in ensemble prediction systems, especially for large lead times. Indeed, shifting to possibility theory provides more meaningful results without the need to resort to additional calibration, while maintaining or improving skill. Our approach is tested on an imperfect version of the Lorenz '96 model, and results for extreme-event prediction are compared against those given by a standard probabilistic ensemble dressing.
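The abstract does not spell out how an ensemble is turned into a possibility distribution. As a minimal sketch of the general idea, the following uses the classical probability-to-possibility transform from possibility theory (the possibility of an outcome is the total probability of all outcomes no more likely than it); this is a standard textbook construction, not necessarily the one used by the authors, and all names here are illustrative.

```python
# Illustrative sketch: turn an ensemble into a possibility distribution
# via the classical probability-possibility transform. The paper's actual
# construction may differ; this only conveys the general idea.

def possibility_from_probabilities(probs):
    """Map a discrete probability distribution to a possibility distribution.

    pi_i = sum of p_j over all j with p_j <= p_i, so the most probable
    outcome gets possibility 1 and rarer outcomes get smaller values.
    """
    return [sum(q for q in probs if q <= p) for p in probs]

def ensemble_to_possibility(members, bins):
    """Histogram the ensemble members into bins, then apply the transform."""
    counts = [0] * (len(bins) - 1)
    for m in members:
        for i in range(len(bins) - 1):
            if bins[i] <= m < bins[i + 1]:
                counts[i] += 1
                break
    total = max(sum(counts), 1)
    probs = [c / total for c in counts]
    return possibility_from_probabilities(probs)

# Example: a 10-member ensemble forecast of some scalar quantity.
members = [1.2, 1.3, 1.1, 1.4, 1.2, 2.8, 1.3, 1.1, 1.2, 1.3]
pi = ensemble_to_possibility(members, bins=[0, 1, 2, 3, 4])
# The bin holding most members gets possibility 1; empty bins get 0.
```

Unlike the raw histogram, the resulting possibility profile assigns full possibility to the best-supported outcome while still distinguishing degrees of surprise among the rest, which is the kind of cautious statement possibility theory is designed to make.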

    Computing with confidence

    The advent of accessible high-speed computing has revolutionised engineering, transforming it from a largely solitary pencil-and-paper endeavour into a collective enterprise based on computer calculations and widely shared software tools.

    Singhing with confidence: visualising the performance of confidence procedures

    Confidence intervals are an established means of portraying uncertainty about an inferred parameter and can be generated through the use of confidence distributions. For a confidence distribution to be ideal, it must maintain frequentist coverage of the true parameter. This can be represented for a precise distribution by adherence to a cumulative unit uniform distribution, referred to here as a Singh plot. This manuscript extends this to imprecise confidence structures with bounds around the uniform distribution, and describes how deviations convey information regarding the characteristics of confidence structures designed for inference and prediction. This quick visual representation, in a manner similar to ROC curves, aids the development of robust structures and methods that make use of confidence. A demonstration of the utility of Singh plots is provided with an assessment of the coverage of the ProUCL Chebyshev upper confidence limit estimator for the mean of an unknown distribution.
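The idea behind the diagnostic can be sketched in a few lines: evaluate the confidence distribution at the true parameter across many simulated datasets; for an exact procedure those values are Uniform(0,1), so their empirical CDF should hug the diagonal. The setup below (normal-mean confidence distribution, names, and sample sizes) is an illustrative assumption, not the paper's code.

```python
import math
import random

# Minimal sketch of the idea behind a Singh plot: for repeated datasets,
# evaluate the confidence distribution at the TRUE parameter. An exact
# procedure yields Uniform(0,1) values, so their empirical CDF should
# track the diagonal; systematic deviation reveals under- or over-coverage.

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def singh_values(theta=0.0, n=25, reps=2000, seed=1):
    """C(theta_true) for the exact normal-mean confidence distribution."""
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        xbar = sum(rng.gauss(theta, 1.0) for _ in range(n)) / n
        # Confidence distribution for the mean of N(theta, 1) data:
        # C(t) = Phi(sqrt(n) * (t - xbar)); evaluate it at the true theta.
        vals.append(normal_cdf(math.sqrt(n) * (theta - xbar)))
    return sorted(vals)

vals = singh_values()
# Largest gap between the empirical CDF and the diagonal y = x
# (the "cumulative unit uniform" reference line of the Singh plot).
ecdf_gap = max(abs((i + 1) / len(vals) - v) for i, v in enumerate(vals))
```

Plotting the sorted values against their empirical CDF gives the Singh plot itself; a conservative procedure would push the curve systematically off the diagonal rather than leaving only sampling noise.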

    Coverage Probability Fails to Ensure Reliable Inference

    Satellite conjunction analysis is the assessment of collision risk during a close encounter between a satellite and another object in orbit. A counterintuitive phenomenon has emerged in the conjunction analysis literature, namely, probability dilution, in which lower quality data paradoxically appear to reduce the risk of collision. We show that probability dilution is a symptom of a fundamental deficiency in probabilistic representations of statistical inference, in which there are propositions that will consistently be assigned a high degree of belief, regardless of whether or not they are true. We call this deficiency false confidence. In satellite conjunction analysis, it results in a severe and persistent underestimate of collision risk exposure. We introduce the Martin–Liu validity criterion as a benchmark by which to identify statistical methods that are free from false confidence. Such inferences will necessarily be non-probabilistic. In satellite conjunction analysis, we show that uncertainty ellipsoids satisfy the validity criterion. Performing collision avoidance maneuvers based on ellipsoid overlap will ensure that collision risk is capped at the user-specified level. Further, this investigation into satellite conjunction analysis provides a template for recognizing and resolving false confidence issues as they occur in other problems of statistical inference.
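Probability dilution can be reproduced in a toy one-dimensional version of the problem (an illustrative simplification, not the paper's model): the collision probability is computed as P(|X| < R) for X normally distributed around the measured miss distance. Once the measurement noise exceeds the miss distance, inflating the noise further spreads the density so thin that the computed probability falls, so worse data appears safer.

```python
import math

# Toy 1-D illustration of "probability dilution" (not the paper's model):
# the miss distance d is measured with noise sigma, and the computed
# collision probability is P(|X| < R) for X ~ Normal(d, sigma^2).
# For sigma at or beyond d, growing sigma spreads the density out and
# the computed probability shrinks toward zero: WORSE data, LOWER risk.

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def collision_probability(d, sigma, radius):
    """P(|X| < radius) for X ~ Normal(d, sigma^2)."""
    return normal_cdf((radius - d) / sigma) - normal_cdf((-radius - d) / sigma)

d, radius = 5.0, 1.0  # measured miss distance and combined hard-body radius
probs = [collision_probability(d, s, radius) for s in (5.0, 20.0, 100.0)]
# probs decreases as sigma grows: the estimate drifts toward "safe"
# precisely because the data got less informative.
```

The paper's point is that this is not a quirk of the formula but a symptom of false confidence: the proposition "no collision" accrues belief merely from ignorance, which is why a validity-respecting, non-probabilistic summary such as an uncertainty ellipsoid is needed instead.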