
    Bayesian astrostatistics: a backward look to the future

    This perspective chapter briefly surveys: (1) past growth in the use of Bayesian methods in astrophysics; (2) current misconceptions about both frequentist and Bayesian statistical inference that hinder wider adoption of Bayesian methods by astronomers; and (3) multilevel (hierarchical) Bayesian modeling as a major future direction for research in Bayesian astrostatistics, exemplified in part by presentations at the first ISI invited session on astrostatistics, commemorated in this volume. It closes with an intentionally provocative recommendation for astronomical survey data reporting, motivated by the multilevel Bayesian perspective on modeling cosmic populations: that astronomers cease producing catalogs of estimated fluxes and other source properties from surveys. Instead, summaries of likelihood functions (or marginal likelihood functions) for source properties should be reported (not posterior probability density functions), including nontrivial summaries (not simply upper limits) for candidate objects that do not pass traditional detection thresholds.
    Comment: 27 pp, 4 figures. A lightly revised version of a chapter in "Astrostatistical Challenges for the New Astronomy" (Joseph M. Hilbe, ed., Springer, New York, forthcoming in 2012), the inaugural volume for the Springer Series in Astrostatistics. Version 2 has minor clarifications and an additional reference.
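
    To make the reporting recommendation concrete, here is a minimal sketch (not from the chapter) of what a per-source likelihood summary could look like, assuming a toy Poisson counting model with a known background; the function name, model, and numbers are all hypothetical.

```python
import numpy as np
from scipy.stats import poisson

def flux_likelihood_summary(n_obs, background, exposure, flux_grid):
    """Tabulate an (unnormalized) likelihood curve for a source flux given
    observed counts, assuming a simple Poisson counting model:
    n_obs ~ Poisson(background + exposure * flux).
    Such a curve could be reported per candidate source instead of a single
    catalog estimate, even for candidates below a detection threshold."""
    expected = background + exposure * flux_grid
    likelihood = poisson.pmf(n_obs, expected)
    return likelihood / likelihood.max()  # scale so the peak value is 1

# Hypothetical faint candidate: 5 observed counts over an expected background of 3.2
flux_grid = np.linspace(0.0, 10.0, 201)
summary = flux_likelihood_summary(n_obs=5, background=3.2, exposure=1.0,
                                  flux_grid=flux_grid)
print(flux_grid[summary.argmax()])  # flux value at the likelihood peak
```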

    Frequentist Estimation of Cosmological Parameters from the MAXIMA-1 Cosmic Microwave Background Anisotropy Data

    We use a frequentist statistical approach to set confidence intervals on the values of cosmological parameters using the MAXIMA-1 and COBE measurements of the angular power spectrum of the cosmic microwave background. We define a $\Delta\chi^2$ statistic, simulate the measurements of MAXIMA-1 and COBE, determine the probability distribution of the statistic, and use it and the data to set confidence intervals on several cosmological parameters. We compare the frequentist confidence intervals to Bayesian credible regions. The frequentist and Bayesian approaches give best estimates for the parameters that agree within 15%, and confidence-interval widths that agree within 30%. The results also suggest that a frequentist analysis gives slightly broader confidence intervals than a Bayesian analysis. The frequentist analysis gives values of $\Omega = 0.89^{+0.26}_{-0.19}$, $\Omega_{\rm B}h^2 = 0.026^{+0.020}_{-0.011}$, and $n = 1.02^{+0.31}_{-0.10}$, and the Bayesian analysis gives values of $\Omega = 0.98^{+0.14}_{-0.19}$, $\Omega_{\rm B}h^2 = 0.029^{+0.015}_{-0.010}$, and $n = 1.18^{+0.10}_{-0.23}$, all at the 95% confidence level.
    Comment: 10 pages, 9 Postscript figures, changes made to reflect published version.
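
    The general recipe described here (define a $\Delta\chi^2$ statistic, simulate the measurements, calibrate its distribution, and read off confidence intervals) can be sketched on a toy one-parameter Gaussian model; the sketch below is not the MAXIMA-1/COBE pipeline, and the model, grid, and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the band-power fit: 20 data points with Gaussian errors
# around a one-parameter model (the real analysis uses the MAXIMA-1 and
# COBE angular power spectrum likelihoods and several parameters).
x = np.linspace(1.0, 10.0, 20)
sigma = 1.0
theta_true = 2.0

def model(theta):
    return theta * x  # hypothetical one-parameter model

def chi2(theta, data):
    return np.sum((data - model(theta)) ** 2) / sigma**2

theta_grid = np.linspace(1.5, 2.5, 201)

# Step 1: simulate many datasets at the fiducial parameter and build the
# sampling distribution of Delta chi^2 = chi^2(theta) - min chi^2.
sims = [model(theta_true) + rng.normal(0.0, sigma, x.size) for _ in range(2000)]
dchi2_at_true = [chi2(theta_true, d) - min(chi2(t, d) for t in theta_grid)
                 for d in sims]
crit_95 = np.quantile(dchi2_at_true, 0.95)

# Step 2: apply the simulated 95% critical value to the "observed" data to
# read off a frequentist confidence interval from the Delta chi^2 curve.
data_obs = model(theta_true) + rng.normal(0.0, sigma, x.size)
dchi2_obs = np.array([chi2(t, data_obs) for t in theta_grid])
dchi2_obs -= dchi2_obs.min()
inside = theta_grid[dchi2_obs <= crit_95]
print(f"95% confidence interval for theta: [{inside.min():.3f}, {inside.max():.3f}]")
```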

    Comment on `Tainted evidence: cosmological model selection versus fitting', by Eric V. Linder and Ramon Miquel (astro-ph/0702542v2)

    In astro-ph/0702542v2, Linder and Miquel seek to criticize the use of Bayesian model selection for data analysis and for survey forecasting and design. Their discussion is based on three serious misunderstandings of the conceptual underpinnings and application of model-level Bayesian inference, which invalidate all their main conclusions. Their paper includes numerous further inaccuracies, including an erroneous calculation of the Bayesian Information Criterion. Here we seek to set the record straight.
    Comment: 6 pages, RevTeX.
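
    For reference, since the comment flags an erroneous Bayesian Information Criterion calculation, a minimal sketch of the standard form BIC = -2 ln L_max + k ln N is given below; the numbers are made up and the sketch does not reproduce either paper's calculation.

```python
import math

def bic(max_log_likelihood, n_params, n_data):
    """Standard Bayesian Information Criterion:
    BIC = -2 ln L_max + k ln N,
    with L_max the maximized likelihood, k the number of free parameters,
    and N the number of data points."""
    return -2.0 * max_log_likelihood + n_params * math.log(n_data)

# Hypothetical comparison of two models fitted to the same N = 300 points;
# a lower BIC is preferred, and Delta BIC crudely approximates -2 ln(Bayes factor).
bic_simple = bic(max_log_likelihood=-152.4, n_params=2, n_data=300)
bic_complex = bic(max_log_likelihood=-149.8, n_params=4, n_data=300)
print(bic_simple, bic_complex, bic_complex - bic_simple)
```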

    Chain ladder method: Bayesian bootstrap versus classical bootstrap

    The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, a novel numerical procedure utilising Markov chain Monte Carlo (MCMC), ABC and a Bayesian bootstrap procedure is developed in a truly distribution-free setting. The ABC methodology arises because we work in a distribution-free setting with no parametric assumptions, so we cannot evaluate the likelihood point-wise or, in this case, simulate directly from the likelihood model. The bootstrap procedure allows us to generate samples from the intractable likelihood without requiring distributional assumptions, which is crucial to the ABC framework. The developed methodology is used to obtain the empirical distribution of the DFCL model parameters and the predictive distribution of the outstanding loss liabilities conditional on the observed claims. We then obtain predictive Bayesian capital estimates, the Value at Risk (VaR) and the mean square error of prediction (MSEP). The latter is compared with the classical bootstrap and credibility methods.
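
    A minimal sketch of the general idea, assuming a toy scale-parameter problem rather than the actual DFCL model: pseudo-data are generated by bootstrap resampling (no parametric likelihood is evaluated), and parameters are retained when summary statistics of the pseudo-data are close to those of the observations. A plain rejection-ABC sampler is used here in place of the paper's MCMC-ABC scheme, and all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "claims-like" observations; we deliberately treat their
# distribution as unknown, so the likelihood of a scale parameter theta
# cannot be evaluated point-wise.
observed = rng.gamma(shape=2.0, scale=3.0, size=50)
obs_summary = np.array([observed.mean(), observed.std()])

def summary(data):
    return np.array([data.mean(), data.std()])

def simulate_pseudo_data(theta):
    """Bootstrap-style simulator: resample the observed values scaled to unit
    mean, then rescale by theta, avoiding any parametric assumption."""
    resampled = rng.choice(observed / observed.mean(), size=observed.size,
                           replace=True)
    return theta * resampled

# Rejection ABC: draw theta from a prior and keep it when the summary of the
# bootstrap pseudo-data is within a tolerance of the observed summary.
tolerance = 1.0
accepted = []
for _ in range(20000):
    theta = rng.uniform(2.0, 12.0)  # vague prior on the scale parameter
    distance = np.linalg.norm(summary(simulate_pseudo_data(theta)) - obs_summary)
    if distance < tolerance:
        accepted.append(theta)

accepted = np.array(accepted)
print(accepted.mean(), np.quantile(accepted, [0.05, 0.95]))
```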