Easy computation of the Bayes Factor to fully quantify Occam's razor

Abstract

Comments: 20 pages plus 5 pages of Supplementary Material

The Bayes factor is the gold-standard figure of merit for comparing fits of models to data, for hypothesis selection and parameter estimation. However, it is little used because it is computationally very intensive. Here it is shown how Bayes factors can be calculated accurately and easily, so that any least-squares or maximum-likelihood fit may be routinely followed by the calculation of a Bayes factor to guide the best choice of model and hence the best estimates of parameters. Approximations to the Bayes factor, such as the Bayesian Information Criterion (BIC), are increasingly used. Occam's razor expresses a primary intuition, that parameters should not be multiplied unnecessarily, and that intuition is quantified by the BIC. The Bayes factor quantifies two further intuitions: models with physically meaningful parameters are preferable to models with physically meaningless parameters, and models that could fail to fit the data, yet do fit, are preferable to models that span the data space and are therefore guaranteed to fit. The outcomes of using Bayes factors are often very different from those of traditional statistical tests and from the BIC. Three examples are given. In two of them, the easy calculation of the Bayes factor is exact. The third illustrates the rare conditions under which it has some error and shows how to diagnose and correct that error.
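The abstract does not spell out the paper's easy calculation, but the BIC approximation it mentions is straightforward to compute after any least-squares fit. The sketch below is an illustrative assumption, not the paper's method: it fits a straight line and a quadratic to synthetic linear data and compares their BIC values, where BIC = k ln n − 2 ln L̂ and the Gaussian maximum likelihood gives −2 ln L̂ = n (ln(2π RSS/n) + 1).

```python
import numpy as np

def bic(rss, n, k):
    """BIC for a Gaussian least-squares fit with k fitted parameters:
    BIC = k*ln(n) - 2*ln(L_max), using the MLE noise variance RSS/n,
    so that -2*ln(L_max) = n*(ln(2*pi*RSS/n) + 1)."""
    return k * np.log(n) + n * (np.log(2 * np.pi * rss / n) + 1)

# Synthetic data drawn from a straight line with Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, x.size)

rss = {}
for deg in (1, 2):  # straight line vs. quadratic
    coeffs = np.polyfit(x, y, deg)
    resid = y - np.polyval(coeffs, x)
    rss[deg] = float(resid @ resid)
    # k counts the polynomial coefficients plus the noise variance.
    print(f"degree {deg}: BIC = {bic(rss[deg], x.size, deg + 2):.2f}")
```

Because the quadratic nests the straight line, its residual sum of squares can only be smaller; the BIC penalty k ln n is what lets the simpler model win, which is exactly the Occam's-razor intuition the abstract says the BIC quantifies (and the Bayes factor refines).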
