Bayesian inference for Gibbs random fields using composite likelihoods
Gibbs random fields play an important role in statistics; for example, the
autologistic model is commonly used to model the spatial distribution of binary
variables defined on a lattice. However, they are complicated to work with due
to the intractability of the likelihood function. It is therefore natural to
consider tractable approximations to the likelihood function. Composite
likelihoods offer a principled approach to constructing such approximations.
The contribution of this paper is to examine the performance of a collection of
composite likelihood approximations in the context of Bayesian inference.
Comment: To appear in the proceedings of the 2012 Winter Simulation Conference
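To make the idea concrete (this sketch is not from the paper): one common conditional composite likelihood for a +/-1-coded autologistic model multiplies the full conditionals of each site given its four nearest neighbours. A minimal NumPy version, assuming free boundary conditions:

    import numpy as np

    def autologistic_pseudolikelihood(field, alpha, beta):
        # Log pseudolikelihood (a conditional composite likelihood) for a
        # +/-1-coded autologistic field on a 2-D lattice with a first-order
        # neighbourhood and free boundaries.  Illustrative sketch only.
        nb = np.zeros(field.shape)
        nb[1:, :] += field[:-1, :]   # neighbour above
        nb[:-1, :] += field[1:, :]   # neighbour below
        nb[:, 1:] += field[:, :-1]   # neighbour to the left
        nb[:, :-1] += field[:, 1:]   # neighbour to the right
        # Full conditional of each site: sigmoid(2 * x_i * (alpha + beta * nb_i)).
        eta = 2.0 * field * (alpha + beta * nb)
        return -np.sum(np.logaddexp(0.0, -eta))  # sum of log conditionals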
Bayesian Inference from Composite Likelihoods, with an Application to Spatial Extremes
Composite likelihoods are increasingly used in applications where the full
likelihood is analytically unknown or computationally prohibitive. Although the
maximum composite likelihood estimator has frequentist properties akin to those
of the usual maximum likelihood estimator, Bayesian inference based on
composite likelihoods has yet to be explored. In this paper we investigate the
use of the Metropolis--Hastings algorithm to compute a pseudo-posterior
distribution based on the composite likelihood. Two methodologies for adjusting
the algorithm are presented, and their performance in approximating the true
posterior distribution is investigated using simulated data sets and real data
on spatial extremes of rainfall.
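A minimal sketch of the sampler described here, assuming the composite log-likelihood and log-prior are available as black-box functions (names and tuning constants are illustrative, not the paper's):

    import numpy as np

    def mh_pseudo_posterior(comp_loglik, log_prior, theta0, n_iter=5000,
                            prop_sd=0.1, seed=None):
        # Random-walk Metropolis--Hastings targeting the pseudo-posterior
        # proportional to exp(comp_loglik(theta) + log_prior(theta)).
        rng = np.random.default_rng(seed)
        theta = np.atleast_1d(np.asarray(theta0, dtype=float))
        lp = comp_loglik(theta) + log_prior(theta)
        draws = np.empty((n_iter, theta.size))
        for t in range(n_iter):
            prop = theta + prop_sd * rng.standard_normal(theta.size)
            lp_prop = comp_loglik(prop) + log_prior(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # MH accept/reject
                theta, lp = prop, lp_prop
            draws[t] = theta
        return draws

In practice prop_sd would be tuned to a sensible acceptance rate; the adjustments the paper studies modify the target before running such a sampler.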
Calibration of conditional composite likelihood for Bayesian inference on Gibbs random fields
Gibbs random fields play an important role in statistics; however, the
resulting likelihood is typically unavailable due to an intractable normalizing
constant. Composite likelihoods offer a principled means to construct useful
approximations. This paper provides a means to calibrate the posterior
distribution resulting from using a composite likelihood and illustrates its
performance in several examples.
Comment: JMLR Workshop and Conference Proceedings, 18th International
Conference on Artificial Intelligence and Statistics (AISTATS), San Diego,
California, USA, 9-12 May 2015 (Vol. 38, pp. 921-929). arXiv admin note:
substantial text overlap with arXiv:1207.575
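As an illustration of one possible calibration (the paper's exact adjustment may differ): temper the composite log-likelihood by a scalar weight w chosen so that the curvature of the pseudo-posterior matches the Godambe (sandwich) information. The trace-matching rule below is one heuristic from the composite-likelihood literature, not necessarily the paper's:

    import numpy as np

    def godambe_weight(H, J):
        # Pick w so that the tempered curvature w * H matches the Godambe
        # information G = H J^{-1} H in trace.  H: expected negative Hessian
        # of the composite log-likelihood (sensitivity); J: variance of the
        # composite score (variability), both at the composite MLE.
        G = H @ np.linalg.solve(J, H)
        return np.trace(G) / np.trace(H)

    def calibrated_comp_loglik(comp_loglik, theta, w):
        # Tempered composite log-likelihood used inside MCMC in place of
        # the raw one: cl_w(theta) = w * cl(theta).
        return w * comp_loglik(theta)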
Hidden Gibbs random fields model selection using Block Likelihood Information Criterion
Performing model selection between Gibbs random fields is a very challenging
task. Indeed, due to the Markovian dependence structure, the normalizing
constant of the fields cannot be computed using standard analytical or
numerical methods. Furthermore, such unobserved fields cannot be integrated
out, and likelihood evaluation is a doubly intractable problem. This makes it
a central challenge to pick the model that best fits observed data. We
introduce a new approximate version of the Bayesian Information Criterion. We
partition the lattice into contiguous rectangular blocks and approximate the
probability measure of the hidden Gibbs field by the product of Gibbs
distributions over the blocks. On that basis, we estimate the likelihood and
derive the Block Likelihood Information Criterion (BLIC), which answers model
choice questions such as the selection of the dependency structure or the
number of latent states. We study the performance of BLIC on these questions.
In addition, we present a comparison with ABC algorithms to show that the
novel criterion offers a better trade-off between time efficiency and
reliable results.
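A rough sketch of such a criterion under stated assumptions: the blocks tile the lattice, and block_loglik is a user-supplied placeholder returning the exact log-likelihood of one small block (feasible for small blocks, e.g. by recursion); the paper's exact construction may differ:

    import numpy as np

    def blic(field, block_shape, block_loglik, theta_hat, n_params):
        # Approximate the intractable likelihood by a product of Gibbs
        # distributions over contiguous rectangular blocks, then apply a
        # BIC-style penalty.
        bh, bw = block_shape
        H, W = field.shape
        ll = 0.0
        for i in range(0, H, bh):           # tile the lattice with blocks
            for j in range(0, W, bw):
                ll += block_loglik(field[i:i + bh, j:j + bw], theta_hat)
        return -2.0 * ll + n_params * np.log(field.size)  # lower is better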
Bayesian model selection for exponential random graph models via adjusted pseudolikelihoods
Models with intractable likelihood functions arise in areas including network
analysis and spatial statistics, especially those involving Gibbs random
fields. Posterior parameter estimation in these settings is termed a
doubly-intractable problem because both the likelihood function and the
posterior distribution are intractable. The comparison of Bayesian models is
often based on the statistical evidence, the integral of the un-normalised
posterior distribution over the model parameters, which is rarely available in
closed form. For doubly-intractable models, estimating the evidence adds
another layer of difficulty. Consequently, the selection of the model that best
describes an observed network among a collection of exponential random graph
models for network analysis is a daunting task. Pseudolikelihoods offer a
tractable approximation to the likelihood but should be treated with caution
because they can lead to unreasonable inference. This paper specifies a
method to adjust pseudolikelihoods in order to obtain a reasonable, yet
tractable, approximation to the likelihood. This allows implementation of
widely used computational methods for evidence estimation and pursuit of
Bayesian model selection of exponential random graph models for the analysis of
social networks. Empirical comparisons to existing methods show that our
procedure yields similar evidence estimates, but at a lower computational cost.
Comment: Supplementary material attached. To view attachments, please download
and extract the gzipped source file listed under "Other formats".
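A sketch of the generic affine adjustment used in this line of work, assuming mode, curvature and magnitude corrections (theta_pl, theta_ml, W and log_c are placeholders that would be estimated elsewhere, e.g. by simulation):

    import numpy as np

    def adjusted_log_pl(theta, log_pl, theta_pl, theta_ml, W, log_c):
        # Mode-, curvature- and magnitude-adjusted log pseudolikelihood:
        # the affine map sends the maximiser to theta_ml, W matches the
        # curvature there, and log_c matches the magnitude.  theta_pl is
        # the maximum pseudolikelihood estimate; theta_ml, W and log_c are
        # placeholders, not quantities given in the abstract.
        theta = np.asarray(theta, dtype=float)
        return log_c + log_pl(theta_pl + W @ (theta - theta_ml))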