Point and interval forecasts of age-specific life expectancies: A model averaging approach
Background: Any improvement in the forecast accuracy of life expectancy would benefit policy decisions regarding the allocation of current and future resources. In this paper, we revisit some methods for forecasting age-specific life expectancies. Objective: This paper proposes a model averaging approach to produce accurate point forecasts of age-specific life expectancies. Methods: Using data from fourteen developed countries, we compare point and interval forecasts among ten principal component methods, two random walk methods, and two univariate time-series methods. Results: Based on averaged one-step-ahead and ten-step-ahead forecast errors, the random walk with drift and Lee-Miller methods are the two most accurate methods for producing point forecasts. By combining their forecasts, point forecast accuracy is improved. As measured by averaged coverage probability deviance, the Hyndman-Ullah methods generally provide more accurate interval forecasts than the Lee-Carter methods. However, the Hyndman-Ullah methods produce wider prediction-interval half-widths than the Lee-Carter methods. Conclusions: A model averaging approach should be considered to produce more accurate point forecasts.
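The combination step described in this abstract is simple equal-weight averaging of point forecasts from two methods. A minimal sketch, with made-up life-expectancy numbers (not the paper's data), might look like:

```python
import numpy as np

# Hypothetical life-expectancy point forecasts (ages 60-64) from two methods;
# the values are invented for illustration, not taken from the paper.
rw_drift = np.array([25.1, 24.3, 23.5, 22.7, 21.9])    # random walk with drift
lee_miller = np.array([25.5, 24.6, 23.9, 23.0, 22.2])  # Lee-Miller method

# Equal-weight model averaging: the combined point forecast is the mean
# of the individual point forecasts.
combined = (rw_drift + lee_miller) / 2

# Mean absolute forecast error against (hypothetical) observed values.
observed = np.array([25.2, 24.5, 23.8, 22.9, 22.1])
mae = lambda f: np.mean(np.abs(f - observed))
print(mae(rw_drift), mae(lee_miller), mae(combined))
```

In this toy setup the combined forecast has a smaller mean absolute error than either individual method, which is the effect the abstract reports for the real data.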
Model confidence sets and forecast combination: an application to age-specific mortality
Background: Model averaging combines forecasts obtained from a range of models, and it often produces more accurate forecasts than a forecast from a single model.
Objective: The crucial part of improving forecast accuracy with model averaging lies in determining optimal weights from a finite sample. If the weights are selected sub-optimally, this can affect the accuracy of the model-averaged forecasts. Instead of choosing optimal weights, we consider trimming a set of models before equally averaging forecasts from the selected superior models. Motivated by Hansen et al. (2011), we apply and evaluate the model confidence set procedure when combining mortality forecasts.
Data & Methods: The proposed model averaging procedure is motivated by Samuels and Sekkel (2017) based on the concept of model confidence sets as proposed by Hansen et al. (2011) that incorporates the statistical significance of the forecasting performance. As the model confidence level increases, the set of superior models generally decreases. The proposed model averaging procedure is demonstrated via national and sub-national Japanese mortality for retirement ages between 60 and 100+.
Results: Illustrated by national and sub-national Japanese mortality for ages between 60 and 100+, the proposed model averaging procedure gives the smallest interval forecast errors, especially for males. Conclusion: We find that robust out-of-sample point and interval forecasts may be obtained from the trimming method. By robust, we mean robustness against model misspecification.
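The trim-then-average idea can be sketched in a few lines. The sketch below uses a crude fixed-threshold trimming rule on synthetic data; the actual model confidence set of Hansen et al. (2011) uses an equivalence test with bootstrapped test statistics, and all model names and numbers here are invented:

```python
import numpy as np

# Rank candidate models by out-of-sample error, keep a "superior" set,
# and equally weight the forecasts of the kept models.
rng = np.random.default_rng(0)
actual = np.linspace(0.02, 0.05, 20)           # hypothetical mortality rates
forecasts = {                                   # four hypothetical models
    "A": actual + rng.normal(0, 0.001, 20),
    "B": actual + rng.normal(0, 0.0012, 20),
    "C": actual + rng.normal(0, 0.01, 20),      # clearly inferior model
    "D": actual + rng.normal(0, 0.0011, 20),
}

mse = {m: np.mean((f - actual) ** 2) for m, f in forecasts.items()}
best = min(mse.values())
# Crude stand-in for the model confidence set: keep models whose MSE is
# within a factor of the best model's MSE.
superior = [m for m in mse if mse[m] <= 3.0 * best]
combined = np.mean([forecasts[m] for m in superior], axis=0)
print(sorted(superior))
```

The clearly inferior model is trimmed away, so the equally weighted average of the remaining models is not dragged down by it.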
Calibrating Weather Forecast using Bayesian Model Averaging and Geostatistical Output Perturbation
Numerical Weather Prediction (NWP) has not yet been able to produce accurate weather forecasts. One approach to overcoming this is ensemble post-processing. An ensemble combines several methods to improve accuracy and precision, yet it still tends to be underdispersive. Bayesian Model Averaging (BMA) is intended to calibrate the ensemble prediction and create more reliable intervals, but it does not consider spatial correlation. Unlike BMA, Geostatistical Output Perturbation (GOP) accounts for spatial correlation among many locations simultaneously. BMA and GOP are applied to calibrate temperature forecasts at eight meteorological sites within Jakarta, Bogor, Tangerang, and Bekasi (Jabotabek). The ensemble members for BMA are the predictions of PLS, PCR, and Ridge regression. For a training period of 30 days, and based on several assessment indicators, BMA outperforms GOP in terms of accuracy, precision, and calibration.
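The calibrated BMA forecast is a mixture density with one component per ensemble member. A minimal sketch of evaluating that density, assuming Gaussian components, made-up member forecasts, and weights and spread that would in practice be fitted by EM over a training window:

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bma_density(x, member_forecasts, weights, sigma):
    """BMA predictive density: sum_k w_k * N(x; f_k, sigma^2)."""
    return sum(w * normal_pdf(x, f, sigma)
               for f, w in zip(member_forecasts, weights))

# Hypothetical PLS / PCR / Ridge temperature forecasts (deg C) and
# assumed fitted weights (must sum to 1) and common spread.
members = [29.4, 30.1, 28.8]
weights = [0.5, 0.3, 0.2]
sigma = 1.2

print(bma_density(30.0, members, weights, sigma))
```

Prediction intervals are then read off this mixture (e.g. its 5th and 95th percentiles), which is what makes the BMA forecast "calibrated" relative to the raw, underdispersive ensemble.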
Operational solar forecasting for the real-time market
Despite the significant progress made in solar forecasting over the last decade, most of the proposed models cannot be readily used by independent system operators (ISOs). This article proposes an operational solar forecasting algorithm that is closely aligned with the real-time market (RTM) forecasting requirements of the California ISO (CAISO). The algorithm first uses the North American Mesoscale (NAM) forecast system to generate hourly forecasts for a 5-h period that are issued 12 h before the actual operating hour, satisfying the lead-time requirement. Subsequently, the world's fastest similarity search algorithm is adopted to downscale the hourly forecasts generated by NAM to a 15-min resolution, satisfying the forecast-resolution requirement. The 5-h-ahead forecasts are repeated every hour, following the actual rolling update rate of CAISO. Both deterministic and probabilistic forecasts generated using the proposed algorithm are empirically evaluated over a period of 2 years at 7 locations in 5 climate zones
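The downscaling step pairs each new hourly forecast with the most similar hourly value in a historical archive and reuses the accompanying 15-min values. The abstract's algorithm relies on a fast similarity search library; a brute-force nearest-neighbour sketch on synthetic data conveys the idea (all numbers invented):

```python
import numpy as np

# Hypothetical archive: past hourly irradiance values (W/m^2) and the four
# 15-min values observed within each of those hours.
rng = np.random.default_rng(1)
archive_hourly = rng.uniform(0, 1000, 500)
archive_15min = archive_hourly[:, None] + rng.normal(0, 20, (500, 4))

def downscale(hourly_forecast):
    # Find the most similar past hourly value and reuse its 15-min profile.
    idx = np.argmin(np.abs(archive_hourly - hourly_forecast))
    return archive_15min[idx]

quarter_hour = downscale(640.0)   # four 15-min values for one forecast hour
print(quarter_hour.shape)
```

The real system searches over multi-dimensional feature vectors rather than a single scalar, but the retrieve-and-reuse structure is the same.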
Forecasting multiple functional time series in a group structure: an application to mortality
When modeling sub-national mortality rates, we should consider three features: (1) how to incorporate any possible correlation among sub-populations to potentially improve forecast accuracy through multi-population joint modeling; (2) how to reconcile sub-national mortality forecasts so that they aggregate adequately across various levels of a group structure; (3) among the forecast reconciliation methods, how to combine their forecasts to achieve improved forecast accuracy. To address these issues, we introduce an extension of the grouped univariate functional time series method. We first consider a multivariate functional time series method to jointly forecast multiple related series. We then evaluate the impact and benefit of using forecast combinations among the forecast reconciliation methods. Using Japanese regional age-specific mortality rates, we investigate one-step-ahead to 15-step-ahead point and interval forecast accuracy of our proposed extension and make recommendations.
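The reconciliation requirement in point (2) means forecasts at different levels of the group structure must add up. The simplest reconciliation method, bottom-up, can be sketched as follows; the region names and counts are illustrative, not the Japanese data:

```python
# Bottom-up reconciliation for a two-level group structure: forecast each
# region separately, then obtain the national forecast as the sum, so the
# forecasts are coherent with the aggregation structure by construction.
regional_forecasts = {"R1": 1200.0, "R2": 850.0, "R3": 430.0}

# An independently produced national forecast need not equal the sum of
# the regional ones, violating coherence.
national_independent = 2600.0

# Bottom-up replaces it with the aggregate of the base forecasts.
national_reconciled = sum(regional_forecasts.values())
print(national_reconciled)
```

More sophisticated reconciliation methods (e.g. trace minimization) instead project all base forecasts onto the coherent subspace; the abstract's contribution is combining forecasts across several such methods.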
Probabilistic wind speed forecasting in Hungary
Prediction of various weather quantities is mostly based on deterministic numerical weather forecasting models. Multiple runs of these models with different initial conditions result in ensembles of forecasts, which are applied to estimate the distribution of future weather quantities. However, the ensembles are usually under-dispersive and uncalibrated, so post-processing is required.
In the present work, Bayesian Model Averaging (BMA) is applied to calibrate ensembles of wind speed forecasts produced by the operational Limited Area Model Ensemble Prediction System of the Hungarian Meteorological Service (HMS). We describe two possible BMA models for wind speed data of the HMS and show that BMA post-processing significantly improves the calibration and precision of forecasts.
Multi-model ensemble hydrologic prediction using Bayesian model averaging
A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighting individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble.
The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
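The Box-Cox transformation applied to the streamflow values before fitting the BMA weights is a standard power transform. A minimal sketch with an assumed lambda (in practice lambda is estimated from the training data) and invented flow values:

```python
import numpy as np

def box_cox(y, lam):
    # Box-Cox power transform: log(y) for lambda = 0, else (y^lambda - 1)/lambda.
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_box_cox(z, lam):
    # Inverse transform, used to map BMA predictions back to flow units.
    z = np.asarray(z, dtype=float)
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

flows = np.array([0.5, 2.0, 15.0, 120.0])  # hypothetical streamflow (m^3/s)
z = box_cox(flows, 0.3)                    # fit BMA in transformed space
back = inv_box_cox(z, 0.3)                 # map predictions back
print(np.allclose(back, flows))
```

Because streamflow errors are strongly right-skewed, fitting the Gaussian BMA machinery in the transformed space and inverting afterwards is what keeps the approximate-normality assumption tenable.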
Re-examining the consumption-wealth relationship: the role of model uncertainty
This paper discusses the consumption-wealth relationship. Following the recent influential work of Lettau and Ludvigson [e.g. Lettau and Ludvigson (2001), (2004)], we use data on consumption, assets and labor income and a vector error correction framework. Key findings of their work are that consumption does respond to permanent changes in wealth in the expected manner, but that most changes in wealth are transitory and have no effect on consumption. We investigate the robustness of these results to model uncertainty and argue for the use of Bayesian model averaging. We find that there is model uncertainty with regard to the number of cointegrating vectors, the form of deterministic components, lag length, and whether the cointegrating residuals affect consumption and income directly. Whether this uncertainty has important empirical implications depends on the researcher's attitude towards the economic theory used by Lettau and Ludvigson. If we work with their model, our findings are very similar to theirs. However, if we work with a broader set of models and let the data speak, we obtain somewhat different results. In the latter case, we find that the exact magnitude of the role of permanent shocks is hard to estimate precisely. Thus, although some support exists for the view that their role is small, we cannot rule out the possibility that they have a substantive role to play.