10,677 research outputs found

    A Monte Carlo comparison of Bayesian testing for cointegration rank

    This article considers Bayesian testing for cointegration rank, using an approach developed by Strachan and van Dijk (2007) that is based on Koop, Leon-Gonzalez, and Strachan (2006). Bayes factors are calculated for selecting the cointegrating rank. We calculate the Bayes factors using two methods: the Schwarz BIC approximation and Chib's (1995) algorithm for calculating the marginal likelihood. We run Monte Carlo simulations to compare the two methods.
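    As a rough illustration of the selection step described above, the sketch below assumes hypothetical log marginal likelihoods for each candidate rank (in practice these would come from the Schwarz BIC approximation or from Chib's estimator applied to the model fitted at each rank) and converts them into Bayes factors:

```python
import numpy as np

# Hypothetical sketch of rank selection by Bayes factors.  The log marginal
# likelihoods below are placeholders, not values from the article.
log_marglik = {0: -512.4, 1: -498.7, 2: -501.9, 3: -505.2}  # rank -> log p(y | rank)

ranks = sorted(log_marglik)
# Bayes factor of each candidate rank against rank 0 (equal prior model probabilities)
bf_vs_rank0 = {r: np.exp(log_marglik[r] - log_marglik[0]) for r in ranks}
selected = max(ranks, key=log_marglik.get)  # rank with the largest evidence
print(bf_vs_rank0, "selected rank:", selected)
```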

    Easy computation of the Bayes Factor to fully quantify Occam's razor

    The Bayes factor is the gold-standard figure of merit for comparing fits of models to data, for hypothesis selection and parameter estimation. However, it is little used because it is computationally very intensive. Here it is shown how Bayes factors can be calculated accurately and easily, so that any least-squares or maximum-likelihood fit may be routinely followed by the calculation of Bayes factors to guide the best choice of model and hence the best estimates of parameters. Approximations to the Bayes factor, such as the Bayesian Information Criterion (BIC), are increasingly used. Occam's razor expresses a primary intuition, that parameters should not be multiplied unnecessarily, and that is quantified by the BIC. The Bayes factor quantifies two further intuitions. Models with physically meaningful parameters are preferable to models with physically meaningless parameters. Models that could fail to fit the data, yet which do fit, are preferable to models which span the data space and are therefore guaranteed to fit the data. The outcomes of using Bayes factors are often very different from those of traditional statistical tests and from the BIC. Three examples are given. In two of these examples, the easy calculation of the Bayes factor is exact. The third example illustrates the rare conditions under which it has some error and shows how to diagnose and correct the error.
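    As a minimal illustration of the contrast drawn above, the following sketch (a hypothetical one-parameter Gaussian test with known variance, not one of the paper's three examples) compares an exact Bayes factor with its BIC approximation:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical example: y_i ~ N(mu, 1), testing H0: mu = 0 against
# H1: mu free with prior mu ~ N(0, tau^2).  Data and prior are illustrative.
rng = np.random.default_rng(1)
n, tau = 50, 1.0
y = rng.normal(0.2, 1.0, size=n)
ybar = y.mean()

# Exact Bayes factor BF_01 (closed form in this conjugate setup):
bf01_exact = np.sqrt(1 + n * tau**2) * np.exp(
    -0.5 * n * ybar**2 * (n * tau**2) / (1 + n * tau**2)
)

# Schwarz/BIC approximation: log p(y | M) ~ max log-likelihood - (k/2) ln n,
# so BF_01 ~ exp((BIC_1 - BIC_0) / 2).
bic0 = -2 * norm.logpdf(y, 0.0, 1.0).sum()               # k = 0 free parameters
bic1 = -2 * norm.logpdf(y, ybar, 1.0).sum() + np.log(n)  # k = 1 free parameter
bf01_bic = np.exp(0.5 * (bic1 - bic0))

print(f"exact BF_01 = {bf01_exact:.3f}, BIC approximation = {bf01_bic:.3f}")
```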

    Computing analytic Bayes factors from summary statistics in repeated-measures designs

    Bayes factors are an increasingly popular tool for indexing evidence from experiments. For two competing population models, the Bayes factor reflects the relative likelihood of observing some data under one model compared to the other. In general, computing a Bayes factor is difficult, because computing the marginal likelihood of each model requires integrating the product of the likelihood and a prior distribution on the population parameter(s). In this paper, we develop a new analytic formula for computing Bayes factors directly from minimal summary statistics in repeated-measures designs. This work is an improvement on previous methods for computing Bayes factors from summary statistics (e.g., the BIC method), which produce Bayes factors that violate the Sellke upper bound of evidence for smaller sample sizes. The new approach taken in this paper requires knowing only the F-statistic and degrees of freedom, both of which are commonly reported in most empirical work. In addition to providing computational examples, we report a simulation study that benchmarks the new formula against other methods for computing Bayes factors in repeated-measures designs. Our new method provides an easy way for researchers to compute Bayes factors directly from a minimal set of summary statistics, allowing users to index the evidential value of their own data, as well as data reported in published studies.
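    For context, the BIC method that this abstract contrasts against can itself be sketched from an F-statistic and its degrees of freedom alone. The function below is an illustrative implementation of that earlier approach, not the paper's new analytic formula, and the numbers in the example call are hypothetical:

```python
import numpy as np

def bic_bayes_factor_from_F(F, df1, df2, n):
    """BIC-based approximation to BF_01 (null over alternative) for nested
    linear models, computed from an F statistic.  `n` is the effective sample
    size, whose choice in repeated-measures designs is itself a modelling
    decision.  Sketch of the 'BIC method', not the paper's new formula."""
    # BIC_1 - BIC_0 = -n * ln(1 + df1 * F / df2) + df1 * ln(n)
    delta_bic = -n * np.log1p(df1 * F / df2) + df1 * np.log(n)
    return np.exp(0.5 * delta_bic)

# Hypothetical report: F(1, 29) = 6.5 from 30 subjects.
print(bic_bayes_factor_from_F(F=6.5, df1=1, df2=29, n=30))
```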

    A Bayesian approach for predicting with polynomial regression of unknown degree.

    This article presents a comparison of four methods to compute the posterior probabilities of the possible orders in polynomial regression models. These posterior probabilities are used for forecasting by using Bayesian model averaging. It is shown that Bayesian model averaging provides a closer relationship between the theoretical coverage of the high density predictive interval (HDPI) and the observed coverage than that obtained by selecting the best model. The performance of the different procedures is illustrated with simulations and some known engineering data.
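    As a rough sketch of the general idea (not of the specific procedures compared in the article), the following code assigns BIC-based posterior probabilities to candidate polynomial orders and averages their predictions; the data, orders, and forecast point are hypothetical:

```python
import numpy as np

# Hypothetical sketch: BIC-based posterior probabilities over polynomial
# orders, followed by Bayesian model averaging (BMA) of the predictions.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.1, x.size)

orders = list(range(5))
log_ml, fits = [], []
for k in orders:
    coefs = np.polyfit(x, y, deg=k)
    resid = y - np.polyval(coefs, x)
    sigma2 = resid @ resid / x.size
    loglik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1.0)
    log_ml.append(loglik - 0.5 * (k + 2) * np.log(x.size))  # ~ log marginal likelihood
    fits.append(coefs)

log_ml = np.asarray(log_ml)
weights = np.exp(log_ml - log_ml.max())
weights /= weights.sum()           # posterior model probabilities (equal priors)

x_new = 1.1                        # hypothetical forecast point
y_bma = sum(w * np.polyval(c, x_new) for w, c in zip(weights, fits))
print(dict(zip(orders, np.round(weights, 3))), y_bma)
```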

    Consistency of objective Bayes factors as the model dimension grows

    In the class of normal regression models with a finite number of regressors, and for a wide class of prior distributions, a Bayesian model selection procedure based on the Bayes factor is consistent [Casella and Moreno, J. Amer. Statist. Assoc. 104 (2009) 1261--1271]. However, in models where the number of parameters increases as the sample size increases, properties of the Bayes factor are not totally understood. Here we study consistency of the Bayes factors for nested normal linear models when the number of regressors increases with the sample size. We pay attention to two successful tools for model selection: the Schwarz approximation to the Bayes factor [Schwarz, Ann. Statist. 6 (1978) 461--464], and the Bayes factor for intrinsic priors [Berger and Pericchi, J. Amer. Statist. Assoc. 91 (1996) 109--122; Moreno, Bertolino and Racugno, J. Amer. Statist. Assoc. 93 (1998) 1451--1460]. We find that the Schwarz approximation and the Bayes factor for intrinsic priors are consistent when the rate of growth of the dimension of the bigger model is O(n^b) for b < 1. When b = 1, the Schwarz approximation is always inconsistent under the alternative, while the Bayes factor for intrinsic priors is consistent except for a small set of alternative models, which is characterized. Published in the Annals of Statistics (http://dx.doi.org/10.1214/09-AOS754) by the Institute of Mathematical Statistics (http://www.imstat.org).
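    For reference, the Schwarz approximation studied here can be written in its standard form (generic notation, not taken from the paper):

```latex
% Schwarz (BIC-type) approximation to the log Bayes factor for nested normal
% linear models M_0 \subset M_1 with k_0 < k_1 parameters and maximized
% log-likelihoods \ell_0, \ell_1 (generic notation, not the paper's own):
\[
  \log B^{S}_{10}(n) \;=\; \ell_1(\hat{\theta}_1) - \ell_0(\hat{\theta}_0)
  \;-\; \frac{k_1 - k_0}{2}\,\log n .
\]
```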