Noisy Hamiltonian Monte Carlo for doubly-intractable distributions
Hamiltonian Monte Carlo (HMC) has been progressively incorporated within the
statistician's toolbox as an alternative sampling method in settings when
standard Metropolis-Hastings is inefficient. HMC generates a Markov chain on an
augmented state space with transitions based on a deterministic differential
flow derived from Hamiltonian mechanics. In practice, the evolution of
Hamiltonian systems cannot be solved analytically, requiring numerical
integration schemes. Under numerical integration, the resulting approximate
solution no longer preserves the measure of the target distribution, therefore
an accept-reject step is used to correct the bias. For doubly-intractable
distributions -- such as posterior distributions based on Gibbs random fields
-- HMC suffers from computational difficulties: both the gradients in the
differential flow and the accept-reject ratio are intractable. In this paper,
we study the behaviour of HMC when these quantities are replaced by Monte
Carlo estimates.
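The HMC transition the abstract builds on can be sketched in a few lines: leapfrog integration of the Hamiltonian flow, then the Metropolis accept-reject step that corrects the integrator's bias. This is a minimal sketch, not the paper's noisy variant; `logpi` and `grad_logpi` stand for a generic log-density and its gradient, exactly the quantities that the doubly-intractable setting forces one to estimate.

```python
import numpy as np

def leapfrog(q, p, grad_logpi, eps, n_steps):
    """Approximate the Hamiltonian flow with the leapfrog integrator."""
    q, p = q.copy(), p.copy()
    p += 0.5 * eps * grad_logpi(q)            # initial half step in momentum
    for _ in range(n_steps - 1):
        q += eps * p                          # full step in position
        p += eps * grad_logpi(q)              # full step in momentum
    q += eps * p
    p += 0.5 * eps * grad_logpi(q)            # final half step in momentum
    return q, p

def hmc_step(q, logpi, grad_logpi, eps=0.1, n_steps=20, rng=None):
    """One HMC transition with the accept-reject correction."""
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(q.shape)         # resample auxiliary momentum
    q_new, p_new = leapfrog(q, p0, grad_logpi, eps, n_steps)
    # Accept-reject corrects the bias introduced by numerical integration.
    log_alpha = (logpi(q_new) - 0.5 * p_new @ p_new) - (logpi(q) - 0.5 * p0 @ p0)
    return q_new if np.log(rng.uniform()) < log_alpha else q

# Usage sketch: sampling a standard bivariate normal.
logpi = lambda q: -0.5 * q @ q
grad_logpi = lambda q: -q
rng = np.random.default_rng(0)
q = np.zeros(2)
for _ in range(100):
    q = hmc_step(q, logpi, grad_logpi, rng=rng)
```

The leapfrog integrator is chosen because it is time-reversible and volume-preserving, which is what makes the simple Metropolis ratio above valid.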
Recent advances in directional statistics
Mainstream statistical methodology is generally applicable to data observed
in Euclidean space. There are, however, numerous contexts of considerable
scientific interest in which the natural supports for the data under
consideration are Riemannian manifolds like the unit circle, torus, sphere and
their extensions. Typically, such data can be represented using one or more
directions, and directional statistics is the branch of statistics that deals
with their analysis. In this paper we provide a review of the many recent
developments in the field since the publication of Mardia and Jupp (1999),
still the most comprehensive text on directional statistics. Many of those
developments have been stimulated by interesting applications in fields as
diverse as astronomy, medicine, genetics, neurology, aeronautics, acoustics,
image analysis, text mining, environmetrics, and machine learning. We begin by
considering developments for the exploratory analysis of directional data
before progressing to distributional models, general approaches to inference,
hypothesis testing, regression, nonparametric curve estimation, methods for
dimension reduction, classification and clustering, and the modelling of time
series, spatial and spatio-temporal data. An overview of currently available
software for analysing directional data is also provided, and potential future
developments discussed.
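A one-line illustration of why mainstream Euclidean methodology fails on circular supports, together with the standard directional fix (the mean direction, i.e. the angle of the resultant of the unit vectors). This is a minimal sketch of a textbook construction, not anything specific to the review:

```python
import numpy as np

def circular_mean(theta):
    """Mean direction: angle of the resultant of the unit vectors (cos t, sin t)."""
    return np.arctan2(np.mean(np.sin(theta)), np.mean(np.cos(theta)))

# Two angles clustered around 0 degrees (350 and 10 degrees):
angles = np.radians([350.0, 10.0])
print(np.degrees(np.mean(angles)))        # Euclidean mean: 180, points the wrong way
print(np.degrees(circular_mean(angles)))  # circular mean: ~0, the correct direction
```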
Hidden Gibbs random fields model selection using Block Likelihood Information Criterion
Performing model selection between Gibbs random fields is a very challenging
task. Indeed, due to the Markovian dependence structure, the normalizing
constant of the fields cannot be computed using standard analytical or
numerical methods. Furthermore, such unobserved fields cannot be integrated out
and the likelihood evaluation is a doubly intractable problem. This is a
central issue when picking the model that best fits observed data. We introduce a
new approximate version of the Bayesian Information Criterion. We partition the
lattice into contiguous rectangular blocks and we approximate the probability
measure of the hidden Gibbs field by the product of some Gibbs distributions
over the blocks. On that basis, we estimate the likelihood and derive the Block
Likelihood Information Criterion (BLIC) that answers model choice questions
such as the selection of the dependency structure or the number of latent
states. We study the performance of BLIC for those questions. In addition, we
present a comparison with ABC algorithms to show that the new criterion
offers a better trade-off between time efficiency and reliability of the results.
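The block approximation is what makes such a criterion computable: a small block's normalising constant can be enumerated exactly even though the full field's cannot. Below is a hedged sketch for a fully observed (not hidden) Ising field; the single interaction parameter `beta`, the BIC-style penalty `2 log L - d log n`, and all function names are illustrative assumptions, not the paper's exact construction.

```python
import itertools
import numpy as np

def ising_block_loglik(x_block, beta):
    """Exact log-likelihood of one small block under an Ising model with
    interaction beta; the block's normalising constant is enumerable
    (2**(block size) configurations), unlike the full field's."""
    def energy(s):
        h, w = s.shape
        e = 0.0
        for i in range(h):
            for j in range(w):
                if i + 1 < h: e += s[i, j] * s[i + 1, j]   # vertical neighbours
                if j + 1 < w: e += s[i, j] * s[i, j + 1]   # horizontal neighbours
        return e
    h, w = x_block.shape
    states = itertools.product([-1, 1], repeat=h * w)
    log_z = np.logaddexp.reduce(
        [beta * energy(np.array(s).reshape(h, w)) for s in states])
    return beta * energy(x_block) - log_z

def blic(x, beta, block=2):
    """Sum exact block log-likelihoods over a partition of the lattice and
    apply a BIC-style penalty (here d = 1 free parameter, beta)."""
    loglik = sum(ising_block_loglik(x[i:i + block, j:j + block], beta)
                 for i in range(0, x.shape[0], block)
                 for j in range(0, x.shape[1], block))
    return 2 * loglik - 1 * np.log(x.size)
```

Candidate models (different dependency structures or parameter counts) would then be compared by their criterion values, with the penalty term adjusted to each model's dimension.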
Estimating the transition matrix of a Markov chain observed at random times
In this paper we develop a statistical estimation technique to recover the
transition kernel of a Markov chain in the presence
of censored data. We consider the situation where only a sub-sequence of the chain is
available and the time gaps between the observations are iid random variables.
Under the assumption that neither the time gaps nor their distribution are
known, we provide an estimation method which applies when some transitions in
the initial Markov chain are known to be unfeasible. A consistent estimator
of the transition matrix is derived in closed form as the solution of a minimization problem. The
asymptotic performance of the estimator is then discussed in theory and through
numerical simulations.
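To see why random observation times bias naive estimation, one can simulate a chain, subsample it at iid random gaps, and inspect the empirical transition matrix of the sub-sequence: it estimates a mixture of powers of the true matrix, so a transition that is unfeasible in one step appears feasible in the data. The sketch below is not the paper's estimator; the matrix `Q` and the geometric gap law are made-up illustrations of the censoring problem and of why known-unfeasible transitions carry identifying information.

```python
import numpy as np

rng = np.random.default_rng(0)

# True transition matrix with one unfeasible transition (Q[0, 2] = 0).
Q = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5]])

def simulate(Q, n, rng):
    x = [0]
    for _ in range(n - 1):
        x.append(rng.choice(len(Q), p=Q[x[-1]]))
    return np.array(x)

def empirical_transitions(x, k):
    """Row-normalised transition counts of an observed sequence."""
    C = np.zeros((k, k))
    for a, b in zip(x[:-1], x[1:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

x = simulate(Q, 50_000, rng)
gaps = rng.geometric(0.5, size=len(x))     # iid random time gaps, law unknown in the paper
times = np.cumsum(gaps)
sub = x[times[times < len(x)]]             # the censored sub-sequence actually observed

P_hat = empirical_transitions(sub, 3)
# P_hat estimates a mixture of Q, Q^2, Q^3, ... weighted by the gap law,
# so the unfeasible entry 0 -> 2 is no longer zero under subsampling.
print(P_hat[0, 2])
```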
Particle-based likelihood inference in partially observed diffusion processes using generalised Poisson estimators
This paper concerns the use of the expectation-maximisation (EM) algorithm
for inference in partially observed diffusion processes. In this context, a
well known problem is that all except a few diffusion processes lack
closed-form expressions for their transition densities. Thus, in order to
estimate the EM intermediate quantity efficiently, we construct, using novel
techniques for unbiased estimation of diffusion transition densities, a
random-weight fixed-lag auxiliary particle smoother, which avoids the
well-known problem of particle-trajectory degeneracy in smoothing mode. The
estimator is justified theoretically and demonstrated on a simulated example.
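The fixed-lag mechanism can be illustrated in isolation. In the sketch below the state transition is a known Gaussian random walk, so there are no random weights or unbiased density estimators as in the paper; only the fixed-lag idea is shown: resampling touches just the last `lag` states of each trajectory, and older states are frozen, which limits the path degeneracy that plagues full-trajectory particle smoothers. The model and all parameter names are illustrative assumptions.

```python
import numpy as np

def fixed_lag_smoother(y, n_part, lag, sigma_x, sigma_y, rng):
    """Bootstrap particle filter with fixed-lag smoothing for the toy model
    X_t = X_{t-1} + sigma_x * N(0,1),  Y_t = X_t + sigma_y * N(0,1)."""
    T = len(y)
    paths = np.zeros((T, n_part))
    smoothed = np.zeros(T)
    x = rng.standard_normal(n_part)                      # initial particles
    for t in range(T):
        x = x + sigma_x * rng.standard_normal(n_part)    # propagate
        logw = -0.5 * ((y[t] - x) / sigma_y) ** 2        # likelihood weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_part, size=n_part, p=w)       # resample
        x = x[idx]
        lo = max(0, t - lag)
        paths[lo:t + 1] = paths[lo:t + 1][:, idx]        # resample only the lag window
        paths[t] = x
        if t >= lag:
            smoothed[t - lag] = paths[t - lag].mean()    # freeze the state leaving the window
    tail = max(T - lag, 0)
    smoothed[tail:] = paths[tail:].mean(axis=1)
    return smoothed

# Usage sketch on simulated data.
rng = np.random.default_rng(0)
T, sigma_x, sigma_y = 100, 0.1, 0.5
x_true = np.cumsum(sigma_x * rng.standard_normal(T))
y = x_true + sigma_y * rng.standard_normal(T)
sm = fixed_lag_smoother(y, n_part=500, lag=10, sigma_x=sigma_x, sigma_y=sigma_y, rng=rng)
```

The lag trades bias for variance: a short lag discards future information, while a long lag reintroduces trajectory degeneracy.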
Bayesian Inference from Composite Likelihoods, with an Application to Spatial Extremes
Composite likelihoods are increasingly used in applications where the full
likelihood is analytically unknown or computationally prohibitive. Although the
maximum composite likelihood estimator has frequentist properties akin to those
of the usual maximum likelihood estimator, Bayesian inference based on
composite likelihoods has yet to be explored. In this paper we investigate the
use of the Metropolis--Hastings algorithm to compute a pseudo-posterior
distribution based on the composite likelihood. Two methodologies for adjusting
the algorithm are presented, and their performance in approximating the true
posterior distribution is investigated using simulated data sets and real data
on spatial extremes of rainfall.
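The simplest composite likelihood is the independence likelihood, which multiplies the marginal densities and ignores dependence. The sketch below runs a plain Metropolis-Hastings sampler on the resulting pseudo-posterior for the common mean of correlated bivariate data; the data-generating model, flat prior, and step size are illustrative assumptions. The pseudo-posterior typically centres correctly but its spread is miscalibrated under dependence, which is why adjustments of the kind the paper studies are needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated bivariate data; the composite (independence) likelihood
# multiplies the two N(mu, 1) marginals and ignores the correlation.
n, rho = 200, 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])
data = rng.multivariate_normal([0.5, 0.5], cov, size=n)

def log_composite_lik(mu):
    return -0.5 * np.sum((data - mu) ** 2)   # product of marginal densities

def mh_pseudo_posterior(n_iter=5000, step=0.05):
    """Metropolis-Hastings on the pseudo-posterior (flat prior on mu)."""
    mu, chain = 0.0, []
    ll = log_composite_lik(mu)
    for _ in range(n_iter):
        prop = mu + step * rng.standard_normal()
        ll_prop = log_composite_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept-reject
            mu, ll = prop, ll_prop
        chain.append(mu)
    return np.array(chain)

chain = mh_pseudo_posterior()
print(chain[1000:].mean())   # pseudo-posterior mean, close to the true mean 0.5
```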
On the Challenges of Physical Implementations of RBMs
Restricted Boltzmann machines (RBMs) are powerful machine learning models,
but learning and some kinds of inference in the model require sampling-based
approximations, which, in classical digital computers, are implemented using
expensive MCMC. Physical computation offers the opportunity to reduce the cost
of sampling by building physical systems whose natural dynamics correspond to
drawing samples from the desired RBM distribution. Such a system avoids the
burn-in and mixing cost of a Markov chain. However, hardware implementations of
this variety usually entail limitations such as low precision and a limited
range for the parameters, and restrictions on the size and topology of the RBM.
We conduct software simulations to determine how harmful each of these
restrictions is. Our simulations are designed to reproduce aspects of the
D-Wave quantum computer, but the issues we investigate arise in most forms of
physical computation.
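The kind of software simulation described above can be sketched as follows: run block Gibbs sampling (the MCMC that physical dynamics would replace) in a small RBM, once with full-precision weights and once with weights clipped and rounded to mimic a low-precision, limited-range hardware parameter. The RBM size, quantisation scheme, and all names are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid = 6, 4
W = 0.5 * rng.standard_normal((n_vis, n_hid))   # full-precision weights
b, c = np.zeros(n_vis), np.zeros(n_hid)         # visible / hidden biases

def quantize(W, levels=16, w_max=1.0):
    """Mimic hardware limits: clip to [-w_max, w_max] and round to
    `levels` evenly spaced values."""
    Wc = np.clip(W, -w_max, w_max)
    step = 2 * w_max / (levels - 1)
    return np.round((Wc + w_max) / step) * step - w_max

def gibbs_sample(W, n_steps=1000):
    """Block Gibbs sampling in the RBM: alternate hidden and visible layers."""
    v = (rng.uniform(size=n_vis) < 0.5).astype(float)
    for _ in range(n_steps):
        h = (rng.uniform(size=n_hid) < sigmoid(v @ W + c)).astype(float)
        v = (rng.uniform(size=n_vis) < sigmoid(W @ h + b)).astype(float)
    return v

v_full = gibbs_sample(W)
v_quant = gibbs_sample(quantize(W, levels=8))   # 3-bit weights
```

Comparing statistics of samples drawn under `W` and `quantize(W)` is one way to measure how harmful the precision and range restrictions are.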
Variational semi-blind sparse deconvolution with orthogonal kernel bases and its application to MRFM
We present a variational Bayesian method of joint image reconstruction and point spread function (PSF) estimation when the PSF of the imaging device is only partially known. To solve this semi-blind deconvolution problem, prior distributions are specified for the PSF and the 3D image. Joint image reconstruction and PSF estimation is then performed within a Bayesian framework, using a variational algorithm to estimate the posterior distribution. The image prior distribution imposes an explicit atomic measure that corresponds to image sparsity. Importantly, the proposed Bayesian deconvolution algorithm does not require hand tuning. Simulation results clearly demonstrate that the semi-blind deconvolution algorithm compares favorably with a previous Markov chain Monte Carlo (MCMC) version of myopic sparse reconstruction. It significantly outperforms mismatched non-blind algorithms that assume perfect knowledge of the PSF. The algorithm is illustrated on real data from magnetic resonance force microscopy (MRFM).
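The penalty for a mismatched PSF, which motivates estimating the PSF jointly with the image, can be demonstrated with a 1-D toy: blur a sparse signal with a true PSF, then deconvolve it non-blindly (here with a simple Wiener-style filter, not the paper's variational method) using either the correct PSF or a wrong one. The signal, PSF widths, and regularisation constant are all made-up illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

def wiener_deconv(y, psf, reg=1e-2):
    """Non-blind deconvolution in the Fourier domain (Wiener-style filter)."""
    n = len(y)
    H = np.fft.fft(psf, n)
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft(X))

def gaussian_psf(width, half=5):
    p = np.exp(-0.5 * (np.arange(-half, half + 1) / width) ** 2)
    return p / p.sum()

n = 128
x = np.zeros(n)
x[[20, 50, 90]] = [1.0, -0.5, 0.8]              # sparse "image" (isolated spikes)
psf_true, psf_wrong = gaussian_psf(1.5), gaussian_psf(3.0)

# Observation: circular convolution with the true PSF plus noise.
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf_true, n)))
y += 0.01 * rng.standard_normal(n)

err_matched = np.linalg.norm(wiener_deconv(y, psf_true) - x)
err_mismatched = np.linalg.norm(wiener_deconv(y, psf_wrong) - x)
print(err_matched, err_mismatched)   # the mismatched PSF gives the larger error
```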