49,604 research outputs found

    Improving power posterior estimation of statistical evidence

    The statistical evidence (or marginal likelihood) is a key quantity in Bayesian statistics, allowing one to assess the probability of the data given the model under investigation. This paper focuses on refining the power posterior approach to improve estimation of the evidence. The power posterior method involves transitioning from the prior to the posterior by powering the likelihood by an inverse temperature. In common with other tempering algorithms, the power posterior involves some degree of tuning. The main contributions of this article are twofold: we present a result from the numerical analysis literature which can reduce the bias in the estimate of the evidence by addressing the error arising from numerically integrating across the inverse temperatures, and we tackle the selection of the inverse temperature ladder, additionally applying this approach to the Stepping Stone sampler estimate of the evidence. Comment: Revised version (to appear in Statistics and Computing). This version corrects the typo in Equation (17), with thanks to Sabine Hug for pointing this out.
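The thermodynamic-integration idea behind the power posterior can be sketched on a toy conjugate model, where the expectation of the log likelihood under each power posterior is available in closed form, isolating exactly the numerical-integration error across inverse temperatures that the paper targets. The model, parameter values, and power-5 ladder below are illustrative assumptions, not taken from the paper:

```python
import math

# Assumed toy conjugate model: theta ~ N(0, tau^2) prior,
# one observation y ~ N(theta, sigma^2).
y, sigma2, tau2 = 1.2, 1.0, 4.0

def expected_loglik(t):
    """E[log L(theta)] under the power posterior at inverse temperature t.

    Powering the likelihood by t keeps the model conjugate, so the power
    posterior is N(mu_t, v_t) and the expectation has a closed form.
    """
    prec = t / sigma2 + 1.0 / tau2
    mu_t = (t * y / sigma2) / prec
    v_t = 1.0 / prec
    return -0.5 * math.log(2 * math.pi * sigma2) - ((y - mu_t) ** 2 + v_t) / (2 * sigma2)

# Inverse-temperature ladder concentrated near t = 0 (a common power schedule).
ts = [(i / 100) ** 5 for i in range(101)]
fs = [expected_loglik(t) for t in ts]

# Trapezoidal rule across the ladder: the thermodynamic-integration
# estimate of the log evidence, log Z = integral over [0,1] of E_t[log L].
log_z_hat = sum(0.5 * (fs[i] + fs[i + 1]) * (ts[i + 1] - ts[i]) for i in range(100))

# Exact log evidence for this conjugate model: y ~ N(0, sigma^2 + tau^2).
log_z = -0.5 * math.log(2 * math.pi * (sigma2 + tau2)) - y ** 2 / (2 * (sigma2 + tau2))
```

With a fine ladder the trapezoidal estimate matches the exact evidence closely; the discretisation bias the paper corrects grows as the ladder coarsens.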

    Bayesian model selection for exponential random graph models via adjusted pseudolikelihoods

    Models with intractable likelihood functions arise in areas including network analysis and spatial statistics, especially those involving Gibbs random fields. Posterior parameter estimation in these settings is termed a doubly-intractable problem because both the likelihood function and the posterior distribution are intractable. The comparison of Bayesian models is often based on the statistical evidence, the integral of the un-normalised posterior distribution over the model parameters, which is rarely available in closed form. For doubly-intractable models, estimating the evidence adds another layer of difficulty. Consequently, the selection of the model that best describes an observed network among a collection of exponential random graph models for network analysis is a daunting task. Pseudolikelihoods offer a tractable approximation to the likelihood but should be treated with caution because they can lead to unreasonable inference. This paper specifies a method to adjust pseudolikelihoods in order to obtain a reasonable, yet tractable, approximation to the likelihood. This allows implementation of widely used computational methods for evidence estimation and pursuit of Bayesian model selection of exponential random graph models for the analysis of social networks. Empirical comparisons to existing methods show that our procedure yields similar evidence estimates, but at a lower computational cost. Comment: Supplementary material attached. To view attachments, please download and extract the gzipped source file listed under "Other formats".
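As a minimal illustration of why pseudolikelihoods are tractable, consider a hypothetical edges-only ERGM: each dyad's full conditional depends on the parameter only through its change statistic (always 1 for the edge-count term), so the pseudolikelihood is a product of independent Bernoulli factors and the maximum pseudolikelihood estimate is the empirical log-odds of an edge. The network and all names below are assumptions for illustration, not the paper's adjustment procedure:

```python
import math

def log_pseudolikelihood(adj, theta):
    """Log pseudolikelihood of a toy edges-only ERGM.

    Each dyad's conditional log-odds of being an edge is theta times the
    change statistic, which for the edge-count term is always 1, so every
    dyad is conditionally Bernoulli(sigmoid(theta)).
    """
    n = len(adj)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            p = 1.0 / (1.0 + math.exp(-theta))  # change statistic = 1
            total += math.log(p) if adj[i][j] else math.log(1.0 - p)
    return total

# 4-node toy network with 3 edges out of 6 dyads (density 0.5).
adj = [[0, 1, 1, 0],
       [1, 0, 0, 1],
       [1, 0, 0, 0],
       [0, 1, 0, 0]]
n_dyads = 6
n_edges = sum(adj[i][j] for i in range(4) for j in range(i + 1, 4))

# For the edges-only model the maximum pseudolikelihood estimate is the
# empirical log-odds of an edge; density 0.5 gives theta_hat = 0.
theta_hat = math.log(n_edges / (n_dyads - n_edges))
```

For this dyad-independent special case the pseudolikelihood coincides with the true likelihood; the caution in the abstract applies once dependence terms (e.g. triangles) enter the model, which is what the paper's adjustment addresses.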

    Accounting for choice of measurement scale in extreme value modeling

    We investigate the effect that the choice of measurement scale has upon inference and extrapolation in extreme value analysis. Separate analyses of variables from a single process on scales which are linked by a nonlinear transformation may lead to discrepant conclusions concerning the tail behavior of the process. We propose the use of a Box--Cox power transformation incorporated as part of the inference procedure to account parametrically for the uncertainty surrounding the scale of extrapolation. This has the additional feature of increasing the rate of convergence of the distribution tails to an extreme value form in certain cases and thus reducing bias in the model estimation. Inference without reparameterization is practicably infeasible, so we explore a reparameterization which exploits the asymptotic theory of normalizing constants required for nondegenerate limit distributions. Inference is carried out in a Bayesian setting, an advantage of this being the availability of posterior predictive return levels. The methodology is illustrated on both simulated data and significant wave height data from the North Sea. Comment: Published in at http://dx.doi.org/10.1214/10-AOAS333 the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
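The Box--Cox family the abstract refers to is a one-parameter bridge between measurement scales; the short sketch below shows the assumed parametrisation (with the logarithmic limit at lambda = 0) and how two nonlinearly linked scales correspond to different lambda values. It illustrates only the transformation itself, not the paper's reparameterized inference:

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform of x > 0: (x**lam - 1) / lam,
    with the limiting case log(x) when lam == 0."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

# Two analysts observing the same positive process on scales linked by a
# nonlinear transformation (here x versus x**2) correspond to different
# lambda values; treating lambda as a model parameter lets the inference
# account for the choice of scale.
h = 4.0
on_original_scale = box_cox(h, 1.0)        # analysing x directly
on_squared_scale = box_cox(h ** 2, 0.5)    # analysing x**2 with lambda halved
```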

    Bayesian Methods for Exoplanet Science

    Full text link
    Exoplanet research is carried out at the limits of the capabilities of current telescopes and instruments. The studied signals are weak, and often embedded in complex systematics from instrumental, telluric, and astrophysical sources. Combining repeated observations of periodic events, simultaneous observations with multiple telescopes, different observation techniques, and existing information from theory and prior research can help to disentangle the systematics from the planetary signals, and offers synergistic advantages over analysing observations separately. Bayesian inference provides a self-consistent statistical framework that addresses both the necessity for complex systematics models, and the need to combine prior information and heterogeneous observations. This chapter offers a brief introduction to Bayesian inference in the context of exoplanet research, with focus on time series analysis, and finishes with an overview of a set of freely available programming libraries. Comment: Invited review
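The combination of prior information with heterogeneous observations that the chapter describes reduces, in the simplest Gaussian case, to precision-weighted updating. The scenario below (one parameter measured by two instruments with the same noise level, starting from a nearly flat prior) is invented for illustration:

```python
def combine(mu, var, observations):
    """Fold each (value, noise variance) measurement into a Gaussian belief
    by adding precisions (conjugate normal-normal updating)."""
    for y, v in observations:
        prec = 1.0 / var + 1.0 / v
        mu = (mu / var + y / v) / prec
        var = 1.0 / prec
    return mu, var

# Nearly flat prior combined with two instruments' measurements of the
# same hypothetical parameter; the posterior mean sits between the two
# measurements and the posterior variance shrinks below either one's.
mu, var = combine(0.0, 1e6, [(10.0, 4.0), (12.0, 4.0)])
```

With equal noise variances the posterior mean is close to the average of the two measurements, and the posterior variance is roughly half each instrument's; unequal variances would weight the more precise instrument more heavily.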
