    Analysis of variance for Bayesian inference

    This paper develops a multi-way analysis of variance for non-Gaussian multivariate distributions and provides a practical simulation algorithm to estimate the corresponding components of variance. It specifically addresses variance in Bayesian predictive distributions, showing that it may be decomposed into the sum of extrinsic variance, arising from posterior uncertainty about parameters, and intrinsic variance, which would exist even if parameters were known. Depending on the application at hand, further decomposition of extrinsic or intrinsic variance (or both) may be useful. The paper shows how to produce simulation-consistent estimates of all of these components, and the method demands little additional effort or computing time beyond that already invested in the posterior simulator. It illustrates the methods using a dynamic stochastic general equilibrium model of the US economy, both before and during the global financial crisis. JEL Classification: C11, C53. Keywords: analysis of variance, Bayesian inference, posterior simulation, predictive distributions
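
    The extrinsic/intrinsic split is the law of total variance applied to the predictive distribution: Var(y) = E[Var(y | θ)] + Var(E[y | θ]). A minimal sketch of a simulation-consistent estimate, using a hypothetical normal model with known conditional variance rather than the paper's DSGE application:

```python
import random

random.seed(0)

M = 200_000
sigma2 = 2.0                                           # Var(y | theta): intrinsic
theta = [random.gauss(1.0, 0.5) for _ in range(M)]     # posterior draws of theta
y = [random.gauss(t, sigma2 ** 0.5) for t in theta]    # one predictive draw per theta

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Law of total variance: Var(y) = E[Var(y|theta)] + Var(E[y|theta])
intrinsic = sigma2           # E[Var(y | theta)] is constant in this sketch
extrinsic = var(theta)       # Var(E[y | theta]) = Var(theta) here
total = var(y)

# Decomposition holds up to Monte Carlo error: both should be near 2.0 + 0.25
print(round(intrinsic + extrinsic, 2), round(total, 2))
```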

    Hierarchical Markov normal mixture models with applications to financial asset returns

    With the aim of constructing predictive distributions for daily returns, we introduce a new Markov normal mixture model in which the components are themselves normal mixtures. We derive the restrictions on the autocovariances and linear representation of integer powers of the time series in terms of the number of components in the mixture and the roots of the Markov process. We use the model's prior predictive distribution to study its implications for some interesting functions of returns. We apply the model to construct predictive distributions of daily S&P 500 returns, dollar-pound returns, and one- and ten-year bonds. We compare the performance of the model with ARCH and stochastic volatility models using predictive likelihoods. The model's performance is about the same as its competitors' for the bond returns, better than its competitors' for the S&P 500 returns, and much better for the dollar-pound returns. Validation exercises identify some potential improvements. JEL Classification: C53, G12, C11, C14. Keywords: asset returns, Bayesian, forecasting, MCMC, mixture models
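
    A hypothetical simulator for such a process: a two-state Markov chain whose state-specific return distributions are themselves two-component normal mixtures. All parameter values below are illustrative placeholders, not estimates from the paper:

```python
import random

random.seed(1)

P      = [[0.95, 0.05], [0.10, 0.90]]       # state transition probabilities
mix_w  = [[0.7, 0.3], [0.5, 0.5]]           # within-state component weights
mix_mu = [[0.05, -0.02], [0.0, -0.10]]      # component means
mix_sd = [[0.5, 1.0], [1.5, 3.0]]           # component standard deviations

def draw(probs):
    """Sample an index according to a list of probabilities."""
    u, c = random.random(), 0.0
    for i, p in enumerate(probs):
        c += p
        if u < c:
            return i
    return len(probs) - 1

T, s = 1000, 0
returns = []
for _ in range(T):
    s = draw(P[s])                          # advance the hidden Markov state
    k = draw(mix_w[s])                      # pick a normal component within the state
    returns.append(random.gauss(mix_mu[s][k], mix_sd[s][k]))
```

    Persistence in the hidden state produces volatility clustering, while the within-state mixtures give each regime a flexible, possibly skewed and fat-tailed return distribution.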

    Optimal Prediction Pools

    A prediction model is any statement of a probability distribution for an outcome not yet observed. This study considers the properties of weighted linear combinations of n prediction models, or linear pools, evaluated using the conventional log predictive scoring rule. The log score is a concave function of the weights and, in general, an optimal linear combination will include several models with positive weights despite the fact that exactly one model has limiting posterior probability one. The paper derives several interesting formal results: for example, a prediction model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with prediction models from the ARCH, stochastic volatility and Markov mixture families. In this example models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools, and these pools substantially outperform their best components. JEL Classification: C11, C53. Keywords: forecasting, GARCH, log scoring, Markov mixture, model combination, S&P 500 returns, stochastic volatility
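
    The pooling problem can be sketched for n = 2: choose the weight w maximizing the average log score of the pooled predictive density. The density values below are made-up illustrative numbers, not fitted models; since the log score is concave in w, a one-dimensional grid search suffices:

```python
import math

# Hypothetical predictive density values two models assign to each realized
# outcome y_1..y_T (each model is better on some days, worse on others).
p1 = [0.40, 0.05, 0.30, 0.50, 0.02, 0.45]
p2 = [0.10, 0.35, 0.20, 0.15, 0.40, 0.10]

def log_score(w):
    """Average log predictive score of the pool w*p1 + (1-w)*p2."""
    return sum(math.log(w * a + (1 - w) * b) for a, b in zip(p1, p2)) / len(p1)

grid = [i / 1000 for i in range(1001)]      # w in [0, 1]
w_star = max(grid, key=log_score)
print(w_star, round(log_score(w_star), 4))
```

    Here the optimum is interior: both models receive positive weight, and the pool's log score beats either model alone, mirroring the paper's finding that pools outperform their best components.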

    Comparing and evaluating Bayesian predictive distributions of asset returns

    Bayesian inference in a time series model provides exact, out-of-sample predictive distributions that fully and coherently incorporate parameter uncertainty. This study compares and evaluates Bayesian predictive distributions from alternative models, using as an illustration five alternative models of asset returns applied to daily S&P 500 returns from 1976 through 2005. The comparison exercise uses predictive likelihoods and is inherently Bayesian. The evaluation exercise uses the probability integral transform and is inherently frequentist. The illustration shows that the two approaches can be complementary, each identifying strengths and weaknesses in models that are not evident using the other. JEL Classification: C11, C53. Keywords: forecasting, GARCH, inverse probability transform, Markov mixture, predictive likelihood, S&P 500 returns, stochastic volatility
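
    The frequentist half of the exercise rests on the probability integral transform: if the predictive CDF F_t is correctly specified, u_t = F_t(y_t) should behave like independent Uniform(0,1) draws. A minimal sketch with a (here correctly specified) standard normal predictive distribution:

```python
import math
import random

random.seed(2)

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Outcomes drawn from the same distribution the forecaster asserts,
# so the PITs should look uniform on (0, 1).
ys = [random.gauss(0, 1) for _ in range(10_000)]
us = [normal_cdf(y) for y in ys]

mean_u = sum(us) / len(us)      # close to 0.5 under correct specification
print(round(mean_u, 3))
```

    In practice one inspects a histogram of the u_t: a U-shape signals predictive tails that are too thin, a hump signals tails that are too fat.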

    Mixture of normals probit models

    This paper generalizes the normal probit model of dichotomous choice by introducing mixtures of normals distributions for the disturbance term. By mixing on both the mean and variance parameters and by increasing the number of distributions in the mixture, these models effectively remove the normality assumption and are much closer to semiparametric models. When a Bayesian approach is taken, there is an exact finite-sample distribution theory for the choice probability conditional on the covariates. The paper uses artificial data to show how posterior odds ratios can discriminate between normal and nonnormal distributions in probit models. The method is also applied to female labor force participation decisions in a sample with 1,555 observations from the PSID. In this application, Bayes factors strongly favor mixture of normals probit models over the conventional probit model, and the most favored models have mixtures of four normal distributions for the disturbance term. Keywords: econometric models
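
    Under such a model the choice probability has a closed form: if y = 1{x'β + ε > 0} and ε is a k-component normal mixture with weights w_k, means μ_k and standard deviations σ_k, then P(y = 1 | x) = Σ_k w_k Φ((x'β + μ_k)/σ_k). A sketch with illustrative (not estimated) mixture parameters:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical 3-component mixture for the disturbance term.
w  = [0.5, 0.3, 0.2]       # mixing weights (sum to 1)
mu = [0.0, -1.0, 1.5]      # component means
sd = [1.0, 0.5, 2.0]       # component standard deviations

def choice_prob(xb):
    """P(y = 1 | x) for linear index xb = x'beta under the mixture probit."""
    return sum(wk * normal_cdf((xb + mk) / sk) for wk, mk, sk in zip(w, mu, sd))

print(round(choice_prob(0.0), 3))
```

    With mixing on both means and variances, the link function can be asymmetric and fat-tailed, which is how the model relaxes the normality assumption of the conventional probit.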