
    Forecasting in dynamic factor models using Bayesian model averaging

    This paper considers the problem of forecasting in dynamic factor models using Bayesian model averaging. Theoretical justifications for averaging across models, as opposed to selecting a single model, are given. Practical methods for implementing Bayesian model averaging with factor models are described. These methods involve algorithms which simulate from the space defined by all possible models. We discuss how these simulation algorithms can also be used to select the model with the highest marginal likelihood (or highest value of an information criterion) in an efficient manner. We apply these methods to the problem of forecasting GDP and inflation using quarterly U.S. data on 162 time series. For both GDP and inflation, we find that the models which contain factors do out-forecast an AR(p), but only by a relatively small amount and only at short horizons. We attribute these findings to the presence of structural instability and to the fact that lags of the dependent variable seem to contain most of the information relevant for forecasting. Relative to the small forecasting gains provided by including factors, the gains provided by using Bayesian model averaging over forecasting methods based on a single model are appreciable.
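
    As a minimal illustration of the averaging step described in this abstract (not the paper's implementation), the hypothetical Python sketch below weights each candidate model's forecast by its posterior model probability, computed from log marginal likelihoods under equal prior model probabilities; the function name and all numbers are invented.

    import numpy as np

    def bma_forecast(log_marginal_liks, forecasts):
        """Combine per-model forecasts by Bayesian model averaging.

        log_marginal_liks : log marginal likelihood of each candidate model
        forecasts         : point forecast of each candidate model
        Assumes equal prior model probabilities.
        """
        # Subtract the max before exponentiating for numerical stability.
        log_w = log_marginal_liks - np.max(log_marginal_liks)
        weights = np.exp(log_w)
        weights /= weights.sum()              # posterior model probabilities
        return np.dot(weights, forecasts), weights

    # Toy example: three candidate models (e.g., an AR(p) and two factor models).
    logml = np.array([-102.3, -101.7, -104.9])
    fcst = np.array([1.8, 2.1, 1.5])          # hypothetical one-step forecasts
    point, w = bma_forecast(logml, fcst)
    print(point, w)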

    Prediction of settlement delay in critical illness insurance claims by using the generalized beta of the second kind distribution

    We analyse the delay between diagnosis of illness and claim settlement in critical illness insurance by using generalized linear-type models under a generalized beta of the second kind family of distributions. A Bayesian approach is employed which allows us to incorporate parameter and model uncertainty and also to impute missing data in a natural manner. We propose methodology involving a latent likelihood ratio test to compare missing data models and a version of posterior predictive p-values to assess different models. Bayesian variable selection is also performed, supporting a small number of models with small Bayes factors between them, and therefore we base our predictions on model averaging instead of on a single best-fitting model.
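
    For readers unfamiliar with the GB2 family, the following sketch evaluates its log density from the standard four-parameter form (shape parameters a, p, q and scale b); the function name and the toy parameter values are assumptions for illustration, not taken from the paper.

    import numpy as np
    from scipy.special import betaln

    def gb2_logpdf(x, a, b, p, q):
        """Log density of the generalized beta of the second kind (GB2).

        f(x) = a * x**(a*p - 1) / (b**(a*p) * B(p, q) * (1 + (x/b)**a)**(p + q))
        for x > 0.
        """
        x = np.asarray(x, dtype=float)
        return (np.log(a) + (a * p - 1) * np.log(x)
                - a * p * np.log(b) - betaln(p, q)
                - (p + q) * np.log1p((x / b) ** a))

    # Toy example: log-likelihood of a few settlement delays (in days).
    delays = np.array([12.0, 45.0, 98.0, 210.0])
    print(gb2_logpdf(delays, a=2.0, b=60.0, p=1.5, q=2.0).sum())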

    Testing the thresholds of toxicological concern values using a new database for food-related substances.

    The Threshold of Toxicological Concern (TTC) concept integrates data on exposure, chemical structure, toxicity and metabolism to identify a safe exposure threshold value for chemicals with insufficient toxicity data for risk assessment. The TTC values were originally derived from a non-cancer dataset of 613 compounds with a potentially small domain of applicability. There is interest in testing whether the TTC values are applicable to a broader range of substances, particularly those relevant to food safety, using EFSA's new OpenFoodTox database. After exclusion of genotoxic compounds, organophosphates, carbamates and substances belonging to the TTC exclusion categories, the remaining 329 substances in the EFSA OpenFoodTox database were categorized, using the Cramer decision tree, into low (Class I), moderate (Class II), or high (Class III) toxicity profiles. For Cramer Classes I and III the threshold values were 1000 μg/person per day (90% confidence interval: 187–2190) and 87 μg/person per day (90% confidence interval: 60–153), respectively, compared with the corresponding original threshold values of 1800 and 90 μg/person per day. This confirms the applicability of the TTC values to substances relevant to food safety. Cramer Class II was excluded from our analysis because it contained too few compounds. Comparison with the Globally Harmonized System of classification confirmed that the Cramer classification scheme in the TTC approach is conservative for substances relevant to food safety.
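
    The sketch below illustrates the general logic by which TTC values of this kind are commonly derived: the 5th percentile of a distribution of no-observed-effect levels (NOELs) is divided by a safety factor of 100 and scaled to a 60 kg person. This is a toy reconstruction on simulated data under those stated assumptions, not the analysis performed in this study.

    import numpy as np

    def ttc_from_noels(noels_mg_kg_day, safety_factor=100, body_weight_kg=60):
        """Derive a TTC value from NOELs for one Cramer class.

        Takes the 5th percentile of the NOEL distribution (mg/kg bw/day),
        divides by a safety factor, and scales to a per-person daily
        intake in micrograms.
        """
        p5 = np.percentile(noels_mg_kg_day, 5)             # 5th percentile NOEL
        return p5 / safety_factor * body_weight_kg * 1000  # mg -> ug/person/day

    # Toy example: simulated lognormal NOELs standing in for one Cramer class.
    rng = np.random.default_rng(0)
    noels = rng.lognormal(mean=1.0, sigma=1.2, size=300)
    print(ttc_from_noels(noels))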

    Last Night a Shrinkage Saved My Life: Economic Growth, Model Uncertainty and Correlated Regressors

    We compare the predictive ability of Bayesian methods which deal simultaneously with model uncertainty and correlated regressors in the framework of cross-country growth regressions. In particular, we assess methods with spike and slab priors combined with different prior specifications for the slope parameters in the slab. Our results indicate that moving away from Gaussian g-priors towards Bayesian ridge, LASSO or elastic net specifications has clear advantages for prediction when dealing with datasets of (potentially highly) correlated regressors, a pervasive characteristic of the data used hitherto in the econometric growth literature.
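
    As a rough illustration of the prior structure being compared (not the authors' code), the hypothetical sketch below draws regression coefficients from a spike-and-slab prior whose slab is either Gaussian (ridge-type) or Laplace (LASSO-type); the function name, inclusion probability and scales are invented.

    import numpy as np

    rng = np.random.default_rng(1)

    def spike_slab_prior_draw(k, inclusion_prob=0.5, slab="ridge", scale=1.0):
        """Draw k regression coefficients from a spike-and-slab prior.

        Each coefficient is exactly zero (the spike) with probability
        1 - inclusion_prob; otherwise it is drawn from the slab, here a
        Gaussian (ridge-type) or Laplace (LASSO-type) density.
        """
        gamma = rng.random(k) < inclusion_prob        # inclusion indicators
        if slab == "ridge":
            beta = rng.normal(0.0, scale, size=k)     # Gaussian slab
        elif slab == "lasso":
            beta = rng.laplace(0.0, scale, size=k)    # Laplace slab
        else:
            raise ValueError(slab)
        return np.where(gamma, beta, 0.0)

    print(spike_slab_prior_draw(10, slab="lasso"))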

    A Bayesian approach to detect QTL affecting a simulated binary and quantitative trait

    Background - We analyzed simulated data from the 14th QTL-MAS workshop using a Bayesian approach implemented in the program iBay. The data contained individuals genotyped for 10,031 SNPs and phenotyped for a quantitative and a binary trait. Results - For the quantitative trait we successfully mapped 8 out of 30 additive QTL, 1 out of 3 imprinted QTL and both epistatic pairs of QTL. For the binary trait we successfully mapped 11 out of 22 additive QTL. Four out of 22 pleiotropic QTL were detected as such. Conclusions - The Bayesian variable selection method proved successful for genome-wide association, and it was reasonably fast on a dense marker map.
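
    A minimal sketch of how results from Bayesian variable selection of this kind are typically summarized, assuming MCMC draws of 0/1 inclusion indicators per SNP; the detection threshold and all numbers are illustrative, not iBay's actual output.

    import numpy as np

    def posterior_inclusion_probs(indicator_draws):
        """Posterior inclusion probability per SNP from MCMC indicator draws.

        indicator_draws : (n_draws, n_snps) 0/1 array, where entry (t, j)
        is 1 if SNP j was included in the model at MCMC iteration t.
        """
        return np.asarray(indicator_draws).mean(axis=0)

    # Toy example: 1,000 draws over 5 SNPs; SNP 2 is frequently included.
    rng = np.random.default_rng(2)
    draws = rng.random((1000, 5)) < np.array([0.02, 0.05, 0.85, 0.03, 0.10])
    pips = posterior_inclusion_probs(draws)
    declared = np.flatnonzero(pips > 0.5)   # simple detection rule
    print(pips, declared)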