    A Monte Carlo comparison of Bayesian testing for cointegration rank

    This article considers Bayesian testing for cointegration rank, using an approach developed by Strachan and van Dijk (2007) that builds on Koop, Leon-Gonzalez, and Strachan (2006). Bayes factors are calculated for selecting the cointegrating rank. We compute the Bayes factors using two methods: the Schwarz BIC approximation and Chib's (1995) algorithm for calculating the marginal likelihood. We run Monte Carlo simulations to compare the two methods.
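
    The abstract does not reproduce either computation, but the Schwarz BIC approximation it mentions reduces to an exponentiated half-difference of BIC values. A minimal sketch, assuming the maximised log-likelihoods for each candidate rank are already available (e.g., from a reduced-rank VECM fit, which is not shown here); the function names and arguments are illustrative, not the paper's code:

```python
import numpy as np

def bic(log_lik_max, n_params, n_obs):
    """Schwarz BIC: -2 * (maximised log-likelihood) + k * log(n)."""
    return -2.0 * log_lik_max + n_params * np.log(n_obs)

def log_bayes_factor_bic(log_lik_0, k0, log_lik_1, k1, n_obs):
    """BIC approximation to log BF_01 = log p(y|M0) - log p(y|M1).

    Uses the large-sample identity log p(y|M) ~ -BIC/2, so
    log BF_01 ~ (BIC_1 - BIC_0) / 2.  Working in log space avoids
    overflow when the evidence gap is large.
    """
    return 0.5 * (bic(log_lik_1, k1, n_obs) - bic(log_lik_0, k0, n_obs))
```

    Comparing rank r against rank r + 1 then amounts to plugging in the two fitted models' log-likelihoods and parameter counts and exponentiating the result.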

    Easy computation of the Bayes Factor to fully quantify Occam's razor

    20 pages plus 5 pages of Supplementary Material.
    The Bayes factor is the gold-standard figure of merit for comparing fits of models to data, for hypothesis selection and parameter estimation. However, it is little used because it is computationally very intensive. Here it is shown how Bayes factors can be calculated accurately and easily, so that any least-squares or maximum-likelihood fit may be routinely followed by the calculation of Bayes factors to guide the best choice of model and hence the best estimates of the parameters. Approximations to the Bayes factor, such as the Bayesian Information Criterion (BIC), are increasingly used. Occam's razor expresses a primary intuition, that parameters should not be multiplied unnecessarily, and it is this intuition that the BIC quantifies. The Bayes factor quantifies two further intuitions: models with physically meaningful parameters are preferable to models with physically meaningless parameters, and models that could fail to fit the data, yet do fit, are preferable to models that span the data space and are therefore guaranteed to fit. The outcomes of using Bayes factors are often very different from those of traditional statistical tests and from the BIC. Three examples are given. In two of these examples, the easy calculation of the Bayes factor is exact; the third illustrates the rare conditions under which it has some error and shows how to diagnose and correct it.
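
    The abstract does not spell out its calculation, but one standard route from a maximum-likelihood fit to a marginal likelihood, and one that makes the Occam factors above explicit, is the Laplace approximation. The sketch below is an illustration of that generic technique, not the paper's method; it assumes a uniform (box) prior whose side lengths `prior_widths` encode the physically meaningful range of each parameter, and that the posterior mass lies inside that box:

```python
import numpy as np

def log_evidence_laplace(log_lik_max, hessian, prior_widths):
    """Laplace approximation to log p(y | M) under a uniform box prior.

    log p(y|M) ~ log L_max + (k/2) log(2*pi) - (1/2) log|H| - sum(log widths)

    where H is the Hessian of the negative log-likelihood at the maximum.
    The last two terms are the Occam factor: parameters whose posterior is
    much narrower than their prior range are penalised.
    """
    k = len(prior_widths)
    sign, logdet = np.linalg.slogdet(hessian)
    assert sign > 0, "Hessian must be positive definite at the maximum"
    return (log_lik_max + 0.5 * k * np.log(2.0 * np.pi)
            - 0.5 * logdet - np.sum(np.log(prior_widths)))

# The Bayes factor between two fitted models is then
# exp(log_evidence_laplace(model A) - log_evidence_laplace(model B)).
```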

    Computing analytic Bayes factors from summary statistics in repeated-measures designs

    Bayes factors are an increasingly popular tool for indexing evidence from experiments. For two competing population models, the Bayes factor reflects the relative likelihood of observing some data under one model compared to the other. In general, computing a Bayes factor is difficult, because computing the marginal likelihood of each model requires integrating the product of the likelihood and a prior distribution over the population parameter(s). In this paper, we develop a new analytic formula for computing Bayes factors directly from minimal summary statistics in repeated-measures designs. This work improves on previous methods for computing Bayes factors from summary statistics (e.g., the BIC method), which produce Bayes factors that violate the Sellke upper bound of evidence for smaller sample sizes. The new approach requires knowing only the F-statistic and the degrees of freedom, both of which are commonly reported in most empirical work. In addition to providing computational examples, we report a simulation study that benchmarks the new formula against other methods for computing Bayes factors in repeated-measures designs. Our new method provides an easy way for researchers to compute Bayes factors directly from a minimal set of summary statistics, allowing users to index the evidential value of their own data as well as data reported in published studies.
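
    For context, the BIC method that the paper benchmarks against can itself be written in closed form from the F-statistic and degrees of freedom (the transformation popularised by Wagenmakers, 2007). The sketch below shows that baseline, not the paper's new formula, with the caveat that the appropriate choice of n in repeated-measures designs is itself a point of debate:

```python
import numpy as np

def bf01_bic_from_f(f_stat, df1, df2, n):
    """BIC approximation to the null-vs-alternative Bayes factor BF_01.

    For nested linear models, SSE_0 / SSE_1 = 1 + F * df1 / df2, so the
    BIC difference gives  BF_01 ~ sqrt(n**df1 * (1 + F*df1/df2)**(-n)).
    Computed in log space to avoid overflow for large n.
    """
    log_bf01 = 0.5 * (df1 * np.log(n) - n * np.log1p(f_stat * df1 / df2))
    return np.exp(log_bf01)

# Example: F(1, 29) = 6.5 from n = 30 subjects gives BF_01 of about 0.26,
# i.e. evidence of roughly 3.8 : 1 in favour of the alternative.
print(bf01_bic_from_f(6.5, 1, 29, 30))
```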

    A general Bayes theory of nested model comparisons

    PhD Thesis.
    We propose a general Bayes analysis for nested model comparisons which does not suffer from Lindley's paradox. It does not use Bayes factors, but uses the posterior distribution of the likelihood ratio between the models, evaluated at the true values of the nuisance parameters; this is obtained directly from the posterior distribution of the full model parameters. The analysis requires only conventional uninformative or flat priors, and prior odds on the models. The conclusions from the posterior distribution of the likelihood ratio are in general in conflict with Bayes factor conclusions, but are in agreement with frequentist likelihood ratio test conclusions. Bayes factor conclusions, and those from the BIC, are even in simple cases in conflict with conclusions from HPD intervals for the same parameters, and appear untenable in general. Examples of the new analysis are given, with comparisons to classical P-values and Bayes factors.
    Funder: Engineering and Physical Sciences Research Council.
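
    The posterior distribution of the likelihood ratio is easy to simulate once the full-model posterior is available. A minimal sketch for a toy normal-mean comparison (M0: mu = 0 versus M1: mu free, variance known, flat prior on mu); this illustrates the general idea only, not the thesis's full treatment of nuisance parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: is the mean zero?
y = rng.normal(0.3, 1.0, size=50)
n, sigma = y.size, 1.0
ybar = y.mean()

# Flat prior on mu  =>  posterior  mu | y ~ Normal(ybar, sigma^2 / n)
mu_post = rng.normal(ybar, sigma / np.sqrt(n), size=100_000)

# Log-likelihood ratio  log L(mu = 0) - log L(mu)  at each posterior draw
log_lr = -0.5 * n * (ybar**2 - (ybar - mu_post) ** 2) / sigma**2

# Posterior probability that the restricted model is at least as likely
print("P(LR >= 1 | y) =", np.mean(log_lr >= 0.0))
```

    The model comparison is then read off this posterior distribution of the likelihood ratio (e.g., its quantiles), rather than from a single Bayes factor.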

    A Bayesian approach for predicting with polynomial regression of unknown degree

    This article presents a comparison of four methods for computing the posterior probabilities of the possible orders in polynomial regression models. These posterior probabilities are then used for forecasting via Bayesian model averaging. It is shown that Bayesian model averaging yields a closer match between the theoretical coverage of the high-density predictive interval (HDPI) and the observed coverage than selecting the single best model. The performance of the different procedures is illustrated with simulations and some well-known engineering data.
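
    The abstract does not name the four methods, but BIC weights are one common way to approximate posterior order probabilities. A minimal sketch of a Bayesian-model-averaged forecast over polynomial degrees, on synthetic data, under that assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data from a degree-2 polynomial with Gaussian noise
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(0.0, 0.2, x.size)

def fit_poly(x, y, d):
    """OLS fit of degree d; returns coefficients (highest power first)
    and the maximised Gaussian log-likelihood."""
    X = np.vander(x, d + 1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = x.size
    sigma2 = resid @ resid / n                   # ML noise variance
    log_lik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return beta, log_lik

degrees = range(5)
fits = [fit_poly(x, y, d) for d in degrees]
# log p(y | degree d) ~ log_lik - (k/2) log n, with k = d + 2 parameters
# (d + 1 coefficients plus the noise variance)
log_ev = np.array([ll - 0.5 * (d + 2) * np.log(x.size)
                   for d, (_, ll) in zip(degrees, fits)])

# Posterior degree probabilities under equal prior odds (BIC weights)
w = np.exp(log_ev - log_ev.max())
w /= w.sum()

# BMA forecast at a new point: probability-weighted average of the fits
x_new = 0.5
y_bma = sum(wi * np.polyval(beta, x_new) for wi, (beta, _) in zip(w, fits))
print("posterior degree probabilities:", np.round(w, 3))
print("BMA forecast at x = 0.5:", y_bma)
```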