
    How to create an operational multi-model of seasonal forecasts?

    Seasonal forecasts of variables like near-surface temperature or precipitation are becoming increasingly important for a wide range of stakeholders. Because ensemble forecasts can be recalibrated, combined, and verified in many different ways, it is often unclear which methods are most suitable. To address this, we compare approaches for processing and verifying multi-model seasonal forecasts, based on a scientific assessment performed within the framework of the EU Copernicus Climate Change Service (C3S) Quality Assurance for Multi-model Seasonal Forecast Products (QA4Seas) contract C3S 51 lot 3. Our results underpin the importance of processing raw ensemble forecasts differently depending on the final forecast product needed. While ensemble forecasts benefit considerably from bias correction using climate conserving recalibration, this is not the case for the intrinsically bias-adjusted multi-category probability forecasts; the same applies to multi-model combination. In this paper, we apply simple but effective approaches for multi-model combination of both forecast formats. Further, based on the existing literature, we recommend using proper scoring rules, namely a sample version of the continuous ranked probability score (CRPS) for the verification of ensemble forecasts and the ranked probability score for multi-category probability forecasts. For a detailed global visualization of calibration as well as bias and dispersion errors, the Chi-square decomposition of rank histograms proved appropriate for the analysis performed within QA4Seas.

    The research leading to these results is part of the Copernicus Climate Change Service (C3S) (Framework Agreement number C3S_51_Lot3_BSC), a programme implemented by the European Centre for Medium-Range Weather Forecasts (ECMWF) on behalf of the European Commission. Francisco Doblas-Reyes acknowledges the support of the H2020 EUCP project (GA 776613) and the MINECO-funded CLINSA project (CGL2017-85791-R).
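
    The sample CRPS recommended above has a simple closed-form estimator. A minimal NumPy sketch (the function name sample_crps is illustrative, not from the paper):

```python
import numpy as np

def sample_crps(ensemble, obs):
    """Sample estimator of the continuous ranked probability score (CRPS)
    for a single ensemble forecast (1-D array of M members) against a
    scalar observation.  Lower is better."""
    x = np.asarray(ensemble, dtype=float)
    # Mean absolute error between members and the observation.
    term1 = np.mean(np.abs(x - obs))
    # Half the mean absolute difference over all member pairs.
    # (A "fair" variant divides by M*(M-1) instead of M**2.)
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Toy example: a 5-member ensemble verifying against obs = 1.2
print(sample_crps([0.8, 1.0, 1.1, 1.4, 1.6], 1.2))
```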

    The DWD climate predictions website: Towards a seamless outlook based on subseasonal, seasonal and decadal predictions

    The climate predictions website of the Deutscher Wetterdienst (DWD, https://www.dwd.de/climatepredictions) presents a consistent operational outlook for the coming weeks, months and years, focusing on the needs of German users. At the global scale, subseasonal predictions from the European Centre for Medium-Range Weather Forecasts are used, together with seasonal and decadal predictions from the DWD. Statistical downscaling is applied to achieve high resolution over Germany. A lead-time dependent bias correction is performed on all time scales, and decadal predictions are additionally recalibrated. The website offers ensemble mean and probabilistic predictions for temperature and precipitation, combined with their skill (mean squared error skill score, ranked probability skill score). Two levels of complexity are offered: basic climate predictions display simple, regionally averaged information for Germany, German regions and cities as maps, time series and tables, with skill presented as a traffic light. Expert climate predictions show complex, gridded predictions for Germany (at high resolution), Europe and the world as maps and time series; here skill is displayed as the size of dots, whose colour indicates the signal in the prediction. The website was developed in cooperation with users from different sectors via surveys, workshops and meetings to guarantee its understandability and usability. Users recognize the potential of climate predictions, but some need guidance in using probabilistic predictions and skill information. Future activities will include the further development of the predictions to improve skill (multi-model ensembles, teleconnections), the introduction of additional products (data provision, extremes) and the further clarification of the information (interactivity, video clips).
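
    The abstract does not detail the bias correction itself; a common minimal form of a lead-time dependent correction estimates and removes a separate mean bias per lead time from hindcasts, as in this illustrative sketch (all names are assumptions, not DWD code):

```python
import numpy as np

def lead_time_bias_correction(hindcasts, observations, forecast):
    """Remove the lead-time dependent mean bias from a forecast.

    hindcasts    : array (n_years, n_leads) of past predictions
    observations : array (n_years, n_leads) of verifying values
    forecast     : array (n_leads,) to be corrected

    A separate mean bias is estimated and subtracted for every lead
    time, so systematic drifts that grow with lead time are handled.
    """
    hindcasts = np.asarray(hindcasts, dtype=float)
    observations = np.asarray(observations, dtype=float)
    bias = hindcasts.mean(axis=0) - observations.mean(axis=0)  # per lead time
    return np.asarray(forecast, dtype=float) - bias
```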

    Assessing Credibility In Subjective Probability Judgment

    Subjective probability judgments (SPJs) are an essential component of decision making under uncertainty. Yet, research shows that SPJs are vulnerable to a variety of errors and biases. From a practical perspective, this exposes decision makers to risk: if SPJs are (reasonably) valid, then expectations and choices will be rational; if they are not, then expectations may be erroneous and choices suboptimal. However, existing methods for evaluating SPJs depend on information that is typically not available to decision makers (e.g., ground truth or other correspondence criteria). To address this issue, I develop a method for evaluating SPJs based on a construct I call credibility. At the conceptual level, credibility describes the relationship between an individual’s SPJs and the most defensible beliefs that one could hold, given all available information. Thus, coefficients describing credibility (i.e., “credibility estimates”) ought to reflect an individual’s tendencies towards error and bias in judgment. To determine whether empirical models of credibility can capture this information, this dissertation examines the reliability, validity, and utility of credibility estimates derived from a model that I call the linear credibility framework. In Chapter 1, I introduce the linear credibility framework and demonstrate its potential for validity and utility in a proof-of-concept simulation. In Chapter 2, I apply the linear credibility framework to SPJs from three empirical sources and examine the reliability and validity of credibility estimates as predictors of judgmental accuracy (among other measures of “good” judgment). In Chapter 3, I use credibility estimates from the same three sources to recalibrate and improve SPJs (i.e., increase accuracy) out-of-sample. In Chapter 4, I discuss the robustness of empirical models of credibility and present two studies in which I use exploratory research methods to (a) tailor the linear credibility framework to the data at hand and (b) boost performance. Across nine studies, I conclude that the linear credibility framework is a robust (albeit imperfect) model of credibility that can provide reliable, valid, and useful estimates of credibility. Because the linear credibility framework is an intentionally weak model, I argue that these results represent a lower bound on the performance of empirical models of credibility more generally.
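
    The abstract does not spell out the form of the linear credibility framework. As a generic illustration only, recalibrating probability judgments with a linear model in log-odds space (Platt scaling, a stand-in rather than the author's specification) can look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def logit(p):
    """Log-odds transform, clipped away from 0 and 1 to avoid infinities."""
    p = np.clip(np.asarray(p, dtype=float), 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def recalibrate(judged_train, outcomes_train, judged_new):
    """Fit a linear model in log-odds space on judged probabilities and
    realized binary outcomes, then apply it to new judgments.  This is
    one standard recalibration scheme, not the dissertation's method."""
    X = logit(judged_train).reshape(-1, 1)
    model = LogisticRegression().fit(X, outcomes_train)
    return model.predict_proba(logit(judged_new).reshape(-1, 1))[:, 1]

# Toy example: overconfident judgments pulled back towards the base rate
print(recalibrate([0.9, 0.8, 0.2, 0.1, 0.95, 0.05],
                  [1, 0, 0, 0, 1, 0],
                  [0.9, 0.5]))
```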

    Beyond probabilities: A possibilistic framework to interpret ensemble predictions and fuse imperfect sources of information

    Ensemble forecasting is widely used in medium‐range weather prediction to account for the uncertainty that is inherent in the numerical prediction of high‐dimensional, nonlinear systems with high sensitivity to initial conditions. Ensemble forecasting allows one to sample possible future scenarios in a Monte‐Carlo‐like approximation through small strategic perturbations of the initial conditions and, in some cases, stochastic parametrization schemes of the atmosphere–ocean dynamical equations. Results are generally interpreted in a probabilistic manner by turning the ensemble into a predictive probability distribution. Yet, due to model bias and dispersion errors, this interpretation is often not reliable, and statistical postprocessing is needed to reach probabilistic calibration. This is all the more true for extreme events which, for dynamical reasons, cannot generally be associated with a significant density of ensemble members. In this work we propose a novel approach: a possibilistic interpretation of ensemble predictions, taking inspiration from possibility theory. This framework allows us to integrate other imperfect sources of information in a consistent manner, such as the insight about the system dynamics provided by the analogue method. We thereby show that probability distributions may not be the best way to extract the valuable information contained in ensemble prediction systems, especially for long lead times. Indeed, shifting to possibility theory provides more meaningful results without the need to resort to additional calibration, while maintaining or improving skill. Our approach is tested on an imperfect version of the Lorenz '96 model, and results for extreme event prediction are compared against those given by a standard probabilistic ensemble dressing.
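
    The paper's construction of the possibility distribution is not given in the abstract. One simple, generic way to obtain a possibility distribution from an ensemble is to rescale a density estimate so that its maximum equals one, the usual normalization in possibility theory; the sketch below illustrates this choice only, not the authors' method:

```python
import numpy as np
from scipy.stats import gaussian_kde

def ensemble_possibility(members, grid):
    """Turn ensemble members into a possibility distribution pi on `grid`
    by rescaling a kernel density estimate so that max(pi) == 1."""
    density = gaussian_kde(members)(grid)
    return density / density.max()

def possibility_of_event(pi, grid, lo, hi):
    """Possibility of the event lo <= x <= hi: the supremum of pi there."""
    mask = (grid >= lo) & (grid <= hi)
    return pi[mask].max() if mask.any() else 0.0

# Toy example: 20-member ensemble; possibility that the outcome exceeds 1.5
members = np.random.default_rng(0).normal(1.0, 0.5, 20)
grid = np.linspace(-1.0, 3.0, 401)
pi = ensemble_possibility(members, grid)
print(possibility_of_event(pi, grid, 1.5, 3.0))
```

    Unlike a probability, the possibility of an event is the supremum of pi over that event, which makes the measure deliberately conservative for poorly sampled extremes.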

    An entropy-based machine learning algorithm for combining macroeconomic forecasts

    This paper applies a machine-learning approach to produce a single aggregated prediction from a set of individual predictions. Departing from the well-known maximum-entropy inference methodology, we introduce a factor capturing the distance between the true and the estimated aggregated predictions, which poses a new estimation problem. Regularization algorithms such as ridge, lasso and elastic net help in devising a methodology to tackle this issue. We carry out a simulation study to evaluate the performance of the procedure, and we apply it to forecast Spanish gross domestic product and to measure predictive ability on a dataset of individual predictions.
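
    The abstract gives only the outline of the method. A generic entropy-regularized sketch of forecast combination over the probability simplex (an illustration under assumed names, not the paper's exact algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def entropy_combination_weights(preds, target, lam=0.1):
    """Combine individual predictions into one aggregate.

    preds  : array (n_obs, n_forecasters) of individual predictions
    target : array (n_obs,) of realized values
    lam    : strength of the entropy penalty pulling the weights
             towards the uniform (maximum-entropy) combination

    Minimizes mean squared error of the weighted combination minus
    lam times the Shannon entropy of the weights, over the simplex.
    """
    preds = np.asarray(preds, dtype=float)
    target = np.asarray(target, dtype=float)
    n = preds.shape[1]

    def objective(w):
        err = target - preds @ w
        entropy = -np.sum(w * np.log(np.clip(w, 1e-12, None)))
        return np.mean(err ** 2) - lam * entropy

    res = minimize(
        objective,
        x0=np.full(n, 1.0 / n),                # start from equal weights
        bounds=[(0.0, 1.0)] * n,               # weights stay nonnegative
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x
```

    Larger values of lam shrink the combination towards the simple average, which mirrors the maximum-entropy starting point described above.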