MCMC for doubly-intractable distributions
Markov Chain Monte Carlo (MCMC) algorithms are routinely used to draw samples from distributions with intractable normalization constants. However, standard MCMC algorithms do not apply to doubly-intractable distributions in which there are additional parameter-dependent normalization terms; for example, the posterior over parameters of an undirected graphical model. An ingenious auxiliary-variable scheme (Møller et al., 2004) offers a solution: exact sampling (Propp and Wilson, 1996) is used to sample from a Metropolis–Hastings proposal for which the acceptance probability is tractable. Unfortunately, the acceptance probability of these expensive updates can be low. This paper provides a generalization of Møller et al. (2004) and a new MCMC algorithm, which obtains better acceptance probabilities for the same amount of exact sampling, and removes the need to estimate model parameters before sampling begins.
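To make the normalizer cancellation concrete, here is a minimal Python sketch of the auxiliary-variable idea on a toy one-observation exponential family where exact sampling is trivial. The model, the N(0, 1) prior, the step size, and all names are illustrative choices for this sketch, not details from the paper.

```python
import math
import random

def unnorm(x, theta):
    # Unnormalised density of a toy exponential family on {0, 1}:
    # f(x | theta) ∝ exp(theta * x), with normaliser Z(theta) = 1 + e^theta.
    return math.exp(theta * x)

def exact_sample(theta):
    # Exact draw from f(. | theta): P(X = 1) = e^theta / (1 + e^theta).
    return 1 if random.random() < math.exp(theta) / (1.0 + math.exp(theta)) else 0

def exchange_step(theta, x_obs, step=0.8):
    # One auxiliary-variable update: an exact draw y from the proposed
    # parameter makes the intractable normalisers Z(theta) and Z(theta_prop)
    # cancel in the acceptance ratio.
    theta_prop = theta + random.gauss(0.0, step)
    y = exact_sample(theta_prop)               # the exact-sampling step
    log_alpha = (math.log(unnorm(x_obs, theta_prop)) - math.log(unnorm(x_obs, theta))
                 + math.log(unnorm(y, theta)) - math.log(unnorm(y, theta_prop))
                 - 0.5 * theta_prop ** 2 + 0.5 * theta ** 2)   # N(0, 1) prior
    return theta_prop if random.random() < math.exp(min(0.0, log_alpha)) else theta

random.seed(0)
theta, x_obs, chain = 0.0, 1, []
for _ in range(5000):
    theta = exchange_step(theta, x_obs)
    chain.append(theta)
post_mean = sum(chain) / len(chain)
```

Note how `unnorm` never needs `Z(theta)`: the auxiliary draw `y` supplies exactly the terms that make the normalizers cancel.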
Efficient computational strategies for doubly intractable problems with applications to Bayesian social networks
Powerful ideas that have recently appeared in the literature are adjusted and
combined to design improved samplers for Bayesian exponential random graph models.
Different forms of adaptive Metropolis-Hastings proposals (vertical, horizontal
and rectangular) are tested and combined with the Delayed rejection (DR)
strategy with the aim of reducing the variance of the resulting Markov chain
Monte Carlo estimators for a given computational time. In the examples treated
in this paper the best combination, namely horizontal adaptation with delayed
rejection, leads to a variance reduction that varies between 92% and 144%
relative to the adaptive direction sampling approximate exchange algorithm of
Caimo and Friel (2011). These results correspond to an increased performance
which varies from 10% to 94% if we take simulation time into account. The
highest improvements are obtained when highly correlated posterior
distributions are considered.
Comment: 23 pages, 8 figures. Accepted to appear in Statistics and Computing.
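As a rough illustration of the delayed rejection (DR) mechanism combined here with adaptation, the following Python sketch runs two-stage DR on a toy one-dimensional target: a bold first proposal, and on rejection a timid second proposal accepted with the corrected probability that preserves detailed balance. The target, proposal scales, and names are invented for the sketch; the paper applies DR to ERGM posteriors, not to this toy.

```python
import math
import random

def log_target(x):
    # Toy target: standard normal log-density, up to a constant.
    return -0.5 * x * x

def log_q(centre, point, s):
    # Log-density, up to a constant, of a Gaussian proposal point ~ N(centre, s^2).
    return -0.5 * ((point - centre) / s) ** 2

def dr_step(x, s1=2.0, s2=0.5):
    # Stage 1: a bold random-walk proposal.
    y1 = x + random.gauss(0.0, s1)
    a1 = min(1.0, math.exp(log_target(y1) - log_target(x)))
    if random.random() < a1:
        return y1
    # Stage 2 (delayed rejection): a timid second proposal, accepted with the
    # corrected probability so that detailed balance still holds.
    y2 = x + random.gauss(0.0, s2)
    a1_rev = min(1.0, math.exp(log_target(y1) - log_target(y2)))
    if a1_rev >= 1.0:
        return x                 # numerator factor (1 - a1_rev) is zero
    log_a2 = (log_target(y2) + log_q(y2, y1, s1) + math.log(1.0 - a1_rev)
              - log_target(x) - log_q(x, y1, s1) - math.log(1.0 - a1))
    return y2 if random.random() < math.exp(min(0.0, log_a2)) else x

random.seed(0)
x, chain = 0.0, []
for _ in range(20000):
    x = dr_step(x)
    chain.append(x)
mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
```

The second stage recycles the rejected point `y1`, which is what buys variance reduction per unit of computing compared with plain rejection.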
Noisy Hamiltonian Monte Carlo for doubly-intractable distributions
Hamiltonian Monte Carlo (HMC) has been progressively incorporated within the
statistician's toolbox as an alternative sampling method in settings where
standard Metropolis-Hastings is inefficient. HMC generates a Markov chain on an
augmented state space with transitions based on a deterministic differential
flow derived from Hamiltonian mechanics. In practice, the evolution of
Hamiltonian systems cannot be solved analytically, requiring numerical
integration schemes. Under numerical integration, the resulting approximate
solution no longer preserves the measure of the target distribution, therefore
an accept-reject step is used to correct the bias. For doubly-intractable
distributions -- such as posterior distributions based on Gibbs random fields
-- HMC suffers from some computational difficulties: both the gradients in the
differential flow and the accept-reject probability involve intractable terms.
In this paper, we study the behaviour of HMC when these quantities are
replaced by Monte Carlo estimates.
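A minimal sketch of the "noisy gradient" idea, on a toy exponential family where the intractable part of the score is an expectation that can be estimated from auxiliary draws. The model, the prior, and the parameter values are assumptions of this sketch, not the paper's setting.

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def noisy_grad(theta, x_obs, n_aux=50):
    # Toy family f(x | theta) ∝ exp(theta * x) on {0, 1}:
    # d/dtheta log f(x_obs | theta) = x_obs - E_theta[X].  The intractable
    # expectation is replaced by a Monte Carlo average of exact auxiliary
    # draws, mimicking a noisy gradient of the log posterior.
    e_hat = sum(random.random() < sigmoid(theta) for _ in range(n_aux)) / n_aux
    return (x_obs - e_hat) - theta      # the -theta term is a N(0, 1) prior

def noisy_leapfrog(theta, p, x_obs, eps=0.1, n_steps=10):
    # Leapfrog integration of the Hamiltonian flow driven by the noisy
    # gradient; a full noisy-HMC transition would follow the trajectory with
    # an accept-reject step that itself has to be estimated.
    p = p + 0.5 * eps * noisy_grad(theta, x_obs)     # half step for momentum
    for step in range(n_steps):
        theta = theta + eps * p                       # full step for position
        if step < n_steps - 1:
            p = p + eps * noisy_grad(theta, x_obs)
    p = p + 0.5 * eps * noisy_grad(theta, x_obs)      # final half step
    return theta, p

random.seed(0)
theta0, p0 = 0.0, random.gauss(0.0, 1.0)
theta_new, p_new = noisy_leapfrog(theta0, p0, x_obs=1)
```

Increasing `n_aux` reduces the gradient noise at the cost of more auxiliary simulation, which is exactly the trade-off the abstract is about.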
Markov Chain Monte Carlo Based on Deterministic Transformations
In this article we propose a novel MCMC method based on deterministic
transformations T: X x D --> X where X is the state-space and D is some set
which may or may not be a subset of X. We refer to our new methodology as
Transformation-based Markov chain Monte Carlo (TMCMC). One of the remarkable
advantages of our proposal is that even if the underlying target distribution
is very high-dimensional, deterministic transformation of a one-dimensional
random variable is sufficient to generate an appropriate Markov chain that is
guaranteed to converge to the high-dimensional target distribution. Apart from
clearly leading to massive computational savings, this idea of
deterministically transforming a single random variable very generally leads to
excellent acceptance rates, even though all the random variables associated
with the high-dimensional target distribution are updated in a single block.
Since it is well-known that joint updating of many random variables using
Metropolis-Hastings (MH) algorithm generally leads to poor acceptance rates,
TMCMC, in this regard, seems to provide a significant advance. We validate our
proposal theoretically, establishing the convergence properties. Furthermore,
we show that TMCMC can be very effectively adopted for simulating from doubly
intractable distributions.
TMCMC is compared with MH using the well-known Challenger data, demonstrating
the effectiveness of the former in the case of highly correlated variables.
Moreover, we apply our methodology to a challenging posterior simulation
problem associated with the geostatistical model of Diggle et al. (1998),
updating 160 unknown parameters jointly, using a deterministic transformation
of a one-dimensional random variable. Remarkable computational savings as well
as good convergence properties and acceptance rates are the results.
Comment: 28 pages, 3 figures; longer abstract inside article.
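The core trick can be sketched very compactly in the additive case: one one-dimensional draw, multiplied by random signs, updates every coordinate of the chain at once. The target, dimension, and scale below are illustrative assumptions, not the paper's examples.

```python
import math
import random

def log_target(x):
    # Stand-in high-dimensional target: d independent standard normals.
    return -0.5 * sum(v * v for v in x)

def additive_tmcmc_step(x, scale=0.2):
    # Additive TMCMC sketch: a single one-dimensional draw eps drives the
    # update of every coordinate, x_i -> x_i + b_i * eps with independent
    # random signs b_i.  The additive transformation has unit Jacobian, so
    # the acceptance probability reduces to min(1, pi(x') / pi(x)).
    eps = abs(random.gauss(0.0, scale))
    prop = [v + random.choice((-1.0, 1.0)) * eps for v in x]
    log_a = log_target(prop) - log_target(x)
    return prop if random.random() < math.exp(min(0.0, log_a)) else x

random.seed(0)
d = 20
x, samples = [0.0] * d, []
for it in range(8000):
    x = additive_tmcmc_step(x)
    if it >= 2000:
        samples.extend(x)        # pool all coordinates after burn-in
pooled_mean = sum(samples) / len(samples)
pooled_var = sum((v - pooled_mean) ** 2 for v in samples) / len(samples)
```

Even though all `d` coordinates move in one block, only one random variate is generated per iteration, which is where the computational savings claimed above come from.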
Delayed acceptance ABC-SMC
Approximate Bayesian computation (ABC) is now an established technique for
statistical inference used in cases where the likelihood function is
computationally expensive or not available. It relies on the use of a model
that is specified in the form of a simulator, and approximates the likelihood
at a parameter value by simulating auxiliary data sets and
evaluating their distance from the true data. However, ABC is not
computationally feasible in cases where running the simulator for each
proposed parameter value is very expensive. This paper investigates this
situation in cases where
a cheap, but approximate, simulator is available. The approach is to employ
delayed acceptance Markov chain Monte Carlo (MCMC) within an ABC sequential
Monte Carlo (SMC) sampler in order to, in a first stage of the kernel, use the
cheap simulator to rule out parts of the parameter space that are not worth
exploring, so that the ``true'' simulator is only run (in the second stage of
the kernel) where there is a reasonable chance of accepting proposed
parameter values. We show that this approach can be used quite automatically,
with few tuning parameters. Applications to stochastic differential equation
models and latent doubly intractable distributions are presented.
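A deliberately simplified screening sketch of the two-stage idea, in Python: a cheap single-draw simulator filters proposals, and the expensive many-draw simulator runs only for survivors. The simulators, tolerances, flat prior, and observed value are all assumptions of this toy; the paper's kernel is a proper delayed-acceptance ABC-SMC construction, not this plain MCMC screen.

```python
import math
import random

Y_OBS = 1.0

def true_sim(theta, n=100):
    # Stand-in for the "expensive" simulator: a summary needing many draws.
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

def cheap_sim(theta):
    # Cheap, rough surrogate simulator: a single draw.
    return random.gauss(theta, 1.0)

def da_abc_step(theta, eps_cheap=2.0, eps=0.3, step=0.5):
    prop = theta + random.gauss(0.0, step)
    if not -10.0 < prop < 10.0:              # flat prior on [-10, 10]
        return theta
    # Stage 1: the cheap simulator screens out hopeless proposals...
    if abs(cheap_sim(prop) - Y_OBS) > eps_cheap:
        return theta
    # Stage 2: ...so the expensive simulator only runs for survivors.
    if abs(true_sim(prop) - Y_OBS) <= eps:
        return prop
    return theta

random.seed(7)
theta, chain = Y_OBS, []
for _ in range(3000):
    theta = da_abc_step(theta)
    chain.append(theta)
post_mean = sum(chain) / len(chain)
```

The payoff is in stage 1: every proposal rejected there costs one cheap draw instead of a full run of the expensive simulator.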
Bayesian model selection for exponential random graph models via adjusted pseudolikelihoods
Models with intractable likelihood functions arise in areas including network
analysis and spatial statistics, especially those involving Gibbs random
fields. Posterior parameter estimation in these settings is termed a
doubly-intractable problem because both the likelihood function and the
posterior distribution are intractable. The comparison of Bayesian models is
often based on the statistical evidence, the integral of the un-normalised
posterior distribution over the model parameters, which is rarely available in
closed form. For doubly-intractable models, estimating the evidence adds
another layer of difficulty. Consequently, the selection of the model that best
describes an observed network among a collection of exponential random graph
models for network analysis is a daunting task. Pseudolikelihoods offer a
tractable approximation to the likelihood but should be treated with caution
because they can lead to an unreasonable inference. This paper specifies a
method to adjust pseudolikelihoods in order to obtain a reasonable, yet
tractable, approximation to the likelihood. This allows implementation of
widely used computational methods for evidence estimation and pursuit of
Bayesian model selection of exponential random graph models for the analysis of
social networks. Empirical comparisons to existing methods show that our
procedure yields similar evidence estimates, but at a lower computational cost.
Comment: Supplementary material attached. To view attachments, please download
and extract the gzipped source file listed under "Other formats".
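To show what a pseudolikelihood is before any adjustment, here is a toy Python sketch for a small Ising model: the intractable joint likelihood is replaced by a product of tractable full conditionals, one per spin. The grid, parameterization, and function name are illustrative; the paper's contribution is the adjustment step, which this sketch omits.

```python
import math

def ising_pseudolikelihood(theta, grid):
    # Log pseudolikelihood of a toy Ising model on an n x n grid with spins
    # in {-1, +1}: the product of tractable full conditionals
    #   p(x_ij | rest) = 1 / (1 + exp(-2 * theta * x_ij * s_ij)),
    # where s_ij is the sum of the four nearest-neighbour spins.
    n = len(grid)
    ll = 0.0
    for i in range(n):
        for j in range(n):
            s = 0
            if i > 0:
                s += grid[i - 1][j]
            if i < n - 1:
                s += grid[i + 1][j]
            if j > 0:
                s += grid[i][j - 1]
            if j < n - 1:
                s += grid[i][j + 1]
            ll += -math.log(1.0 + math.exp(-2.0 * theta * grid[i][j] * s))
    return ll

aligned = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
pl_pos = ising_pseudolikelihood(0.5, aligned)
pl_neg = ising_pseudolikelihood(-0.5, aligned)
```

Each conditional is a logistic probability, so the whole expression is available in closed form; the caution raised above is that this product is not the true likelihood and can mislead inference unless adjusted.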
- …