Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference
We describe an embarrassingly parallel, anytime Monte Carlo method for
likelihood-free models. The algorithm starts with the view that the
stochasticity of the pseudo-samples generated by the simulator can be
controlled externally by a vector of random numbers u, in such a way that the
outcome, knowing u, is deterministic. For each instantiation of u we run an
optimization procedure to minimize the distance between summary statistics of
the simulator and the data. After reweighting these samples using the prior and
the Jacobian (accounting for the change of volume in transforming from the
space of summary statistics to the space of parameters) we show that this
weighted ensemble represents a Monte Carlo estimate of the posterior
distribution. The procedure can be run embarrassingly parallel (each node
handling one sample) and anytime (by allocating resources to the worst
performing sample). The procedure is validated on six experiments.Comment: NIPS 2015 camera read
Variational Sequential Monte Carlo
Many recent advances in large scale probabilistic inference rely on
variational methods. The success of variational approaches depends on (i)
formulating a flexible parametric family of distributions, and (ii) optimizing
the parameters to find the member of this family that most closely approximates
the exact posterior. In this paper we present a new approximating family of
distributions, the variational sequential Monte Carlo (VSMC) family, and show
how to optimize it in variational inference. VSMC melds variational inference
(VI) and sequential Monte Carlo (SMC), providing practitioners with flexible,
accurate, and powerful Bayesian inference. The VSMC family is a variational
family that can approximate the posterior arbitrarily well, while still
allowing for efficient optimization of its parameters. We demonstrate its
utility on state space models, stochastic volatility models for financial data,
and deep Markov models of brain neural circuits.
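The VSMC objective is the log of the SMC evidence estimate; because that estimate is unbiased, its expected log lower-bounds the log marginal likelihood. A minimal sketch, assuming a toy linear-Gaussian state space model and a one-parameter proposal whose parameter lam interpolates between the prior mean and the current observation; the function name vsmc_objective and all constants are illustrative, not from the paper.

```python
import numpy as np

def log_norm_pdf(x, mean, sd):
    return -0.5 * np.log(2 * np.pi * sd ** 2) - 0.5 * ((x - mean) / sd) ** 2

def vsmc_objective(lam, ys, n_particles=200, seed=0):
    """Log of the SMC evidence estimate Z_hat; since Z_hat is unbiased,
    E[log Z_hat] lower-bounds log p(y), giving the VSMC objective."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)            # particles start at the prior mean
    log_z = 0.0
    for y in ys:
        prior_mean = 0.9 * x             # model: x_t = 0.9 x_{t-1} + N(0, 1)
        prop_mean = (1 - lam) * prior_mean + lam * y   # variational proposal
        x = rng.normal(prop_mean, 1.0)
        log_w = (log_norm_pdf(x, prior_mean, 1.0)      # transition density
                 + log_norm_pdf(y, x, 0.5)             # observation density
                 - log_norm_pdf(x, prop_mean, 1.0))    # proposal density
        m = log_w.max()
        w = np.exp(log_w - m)
        log_z += m + np.log(w.mean())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                        # multinomial resampling
    return log_z

rng = np.random.default_rng(1)
ys = 0.5 * rng.normal(size=15)           # toy observations
obj = vsmc_objective(0.8, ys)            # objective at one proposal setting
```

In the full method this objective is maximized over the proposal parameters (here lam) with stochastic gradients; the sketch only evaluates it.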
Population Monte Carlo algorithms
We give a cross-disciplinary survey of "population" Monte Carlo algorithms.
In these algorithms, a set of "walkers" or "particles" is used as a
representation of a high-dimensional vector. The computation is carried out by
a random walk and split/deletion of these objects. The algorithms have been
developed in various fields of physics and the statistical sciences and are
known under many different names: "quantum Monte Carlo", "transfer-matrix
Monte Carlo", "Monte Carlo filter (particle filter)", "sequential Monte Carlo",
"PERM", etc. Here we discuss them in a coherent framework. We also touch on
related algorithms: genetic algorithms and annealed importance sampling.
Comment: Title changed (Population-based Monte Carlo -> Population Monte
Carlo). A number of small but important corrections and additions; references
also added. The original version was read at the 2000 Workshop on
Information-Based Induction Sciences (July 17-18, 2000, Syuzenji, Shizuoka,
Japan). No figures
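The split/deletion step mentioned above can be sketched as a branching rule: each walker is replaced by an integer number of copies whose mean is proportional to its weight, so heavy walkers split and light walkers die. A sketch under assumed details (Gaussian walkers reweighted toward a narrower target; split_delete is a hypothetical name), in the spirit of DMC/PERM-style branching rather than any one surveyed algorithm.

```python
import numpy as np

def split_delete(walkers, weights, rng):
    """Branching step: replace each walker by an integer number of copies
    with mean n * w_i / sum(w), splitting heavy walkers and deleting
    light ones while keeping the population size stable on average."""
    n = len(walkers)
    mean_copies = n * weights / weights.sum()
    floors = np.floor(mean_copies).astype(int)
    extra = rng.random(n) < (mean_copies - floors)   # stochastic rounding
    return np.repeat(walkers, floors + extra)

rng = np.random.default_rng(2)
walkers = rng.normal(size=1000)          # population drawn from N(0, 1)
weights = np.exp(-walkers ** 2)          # reweight toward exp(-3x^2/2), var 1/3
new_pop = split_delete(walkers, weights, rng)
```

The stochastic rounding keeps the expected copy count exactly proportional to the weight, which is what makes the branched population an unbiased representation of the reweighted one.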
Efficient Sequential Monte-Carlo Samplers for Bayesian Inference
In many problems, complex non-Gaussian and/or nonlinear models are required
to accurately describe a physical system of interest. In such cases, Monte
Carlo algorithms are remarkably flexible and extremely powerful approaches to
solving such inference problems. However, in the presence of a high-dimensional
and/or multimodal posterior distribution, it is widely documented that standard
Monte Carlo techniques can lead to poor performance. In this paper, the study
focuses on the Sequential Monte Carlo (SMC) sampler framework, a more robust
and efficient Monte Carlo algorithm. Although this approach presents many
advantages over traditional Monte Carlo methods, its potential remains largely
underexploited in signal processing. In this work, we propose novel strategies
that improve the efficiency and facilitate the practical implementation of the
SMC sampler, specifically for signal processing applications. First, we propose
an automatic and adaptive strategy
that selects the sequence of distributions within the SMC sampler that
minimizes the asymptotic variance of the estimator of the posterior
normalization constant. This is critical for performing model selection in
modelling applications in Bayesian signal processing. The second original
contribution we present improves the global efficiency of the SMC sampler by
introducing a novel correction mechanism that allows the use of the particles
generated through all the iterations of the algorithm (instead of only
particles from the last iteration). This is a significant contribution, as it
removes the need to discard a large portion of the samples, as is standard in
SMC methods, and improves estimation performance in practical settings where
the computational budget matters.
Comment: arXiv admin note: text overlap with arXiv:1303.3123 by other authors
Statistical Inference for Partially Observed Markov Processes via the R Package pomp
Partially observed Markov process (POMP) models, also known as hidden Markov
models or state space models, are ubiquitous tools for time series analysis.
The R package pomp provides a very flexible framework for Monte Carlo
statistical investigations using nonlinear, non-Gaussian POMP models. A range
of modern statistical methods for POMP models have been implemented in this
framework including sequential Monte Carlo, iterated filtering, particle Markov
chain Monte Carlo, approximate Bayesian computation, maximum synthetic
likelihood estimation, nonlinear forecasting, and trajectory matching. In this
paper, we demonstrate the application of these methodologies using some simple
toy problems. We also illustrate the specification of more complex POMP models,
using a nonlinear epidemiological model with a discrete population,
seasonality, and extra-demographic stochasticity. We discuss the specification
of user-defined models and the development of additional methods within the
programming environment provided by pomp.
Comment: In press at the Journal of Statistical Software. A version of this
paper is provided at the pomp package website: http://kingaa.github.io/pom
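The workhorse behind several of the methods listed above is the bootstrap particle filter over a user-specified process/measurement pair. A minimal Python sketch (pomp itself is an R package) on a Ricker-type model of the kind used in pomp's examples; the function name ricker_pfilter and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

def ricker_pfilter(ys, r=5.0, sigma=0.3, phi=10.0, n_particles=500, seed=0):
    """Bootstrap particle filter log-likelihood for a Ricker-type POMP:
    N_t = r * N_{t-1} * exp(-N_{t-1} + e_t), e_t ~ N(0, sigma^2),
    y_t ~ Poisson(phi * N_t)."""
    rng = np.random.default_rng(seed)
    N = np.full(n_particles, 1.5)                    # initial population density
    log_lik = 0.0
    for y in ys:
        e = rng.normal(0.0, sigma, n_particles)
        N = r * N * np.exp(-N + e)                   # stochastic process step
        w = poisson.pmf(y, phi * N)                  # measurement density
        m = w.mean()
        if m == 0.0:                                 # filtering failure
            return -np.inf
        log_lik += np.log(m)                         # evidence increment
        N = N[rng.choice(n_particles, n_particles, p=w / w.sum())]
    return log_lik
```

The same simulate-weight-resample loop underlies iterated filtering and particle MCMC, which wrap it in parameter perturbation and Metropolis steps respectively.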
Orthogonal parallel MCMC methods for sampling and optimization
Monte Carlo (MC) methods are widely used for Bayesian inference and
optimization in statistics, signal processing and machine learning. A
well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms.
In order to foster better exploration of the state space, especially in
high-dimensional applications, several schemes employing multiple parallel MCMC
chains have recently been introduced. In this work, we describe a novel
parallel interacting MCMC scheme, called {\it orthogonal MCMC} (O-MCMC), where
a set of "vertical" parallel MCMC chains share information using some
"horizontal" MCMC techniques working on the entire population of current
states. More specifically, the vertical chains are led by random-walk
proposals, whereas the horizontal MCMC techniques employ independent proposals,
thus allowing an efficient combination of global exploration and local
approximation. The interaction is contained in these horizontal iterations.
Within the analysis of different implementations of O-MCMC, novel schemes in
order to reduce the overall computational cost of parallel multiple try
Metropolis (MTM) chains are also presented. Furthermore, a modified version of
O-MCMC for optimization is provided by considering parallel simulated annealing
(SA) algorithms. Numerical results show the advantages of the proposed sampling
scheme in terms of estimation efficiency, as well as robustness with respect to
the initial values and the choice of the parameters.
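The vertical/horizontal split can be sketched as follows. This is a simplified interpretation, not the paper's exact scheme: vertical steps are plain random-walk Metropolis per chain, and the horizontal step has each chain perform independent Metropolis-Hastings with a proposal built from the other chains' current states, which is how information spreads across the population.

```python
import numpy as np

def log_target(x):
    """Bimodal toy target: equal mixture of N(-3, 1) and N(3, 1)."""
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

def log_mix(z, centers, sd=1.0):
    # Log density of an equal-weight Gaussian mixture at the given centers.
    return (np.logaddexp.reduce(-0.5 * ((z - centers) / sd) ** 2)
            - np.log(len(centers)) - 0.5 * np.log(2.0 * np.pi * sd ** 2))

def o_mcmc(n_chains=8, n_iters=4000, exchange_every=10, seed=3):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 0.5, n_chains)   # deliberately poor initialisation
    out = []
    for it in range(n_iters):
        if it % exchange_every:
            # Vertical step: independent random-walk Metropolis per chain.
            prop = x + rng.normal(0.0, 1.0, n_chains)
            accept = np.log(rng.random(n_chains)) < log_target(prop) - log_target(x)
            x = np.where(accept, prop, x)
        else:
            # Horizontal step: each chain runs independent MH, proposing
            # from a mixture centred at the *other* chains' current states.
            for i in range(n_chains):
                others = np.delete(x, i)
                y = rng.choice(others) + rng.normal(0.0, 1.0)
                log_acc = (log_target(y) + log_mix(x[i], others)
                           - log_target(x[i]) - log_mix(y, others))
                if np.log(rng.random()) < log_acc:
                    x[i] = y
        out.append(x.copy())
    return np.concatenate(out[n_iters // 2:])   # discard burn-in

samples = o_mcmc()
```

Conditioning each horizontal move on the complementary ensemble keeps every chain a valid Metropolis-Hastings update for the target, while letting chains jump between modes that their local random walks would rarely cross.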