Nested Sequential Monte Carlo Methods
We propose nested sequential Monte Carlo (NSMC), a methodology to sample from
sequences of probability distributions, even where the random variables are
high-dimensional. NSMC generalises the SMC framework by requiring only
approximate, properly weighted samples from the SMC proposal distribution,
while still resulting in a correct SMC algorithm. Furthermore, NSMC can in
itself be used to produce such properly weighted samples. Consequently, one
NSMC sampler can be used to construct an efficient high-dimensional proposal
distribution for another NSMC sampler, and this nesting of the algorithm can be
done to an arbitrary degree. This allows us to consider complex and
high-dimensional models using SMC. We show results that demonstrate the efficacy of our approach on several filtering problems with dimensions on the order of 100 to 1,000.
Comment: Extended version of a paper published in the Proceedings of the 32nd International Conference on Machine Learning (ICML), Lille, France, 2015.
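To make the notion of a properly weighted sample concrete, here is a minimal Python sketch under toy assumptions of our own (a Gaussian target and proposal; `q_unnorm` and `inner_properly_weighted` are illustrative names, not the authors' code). An inner importance sampler returns an approximate draw together with an unbiased weight estimate, and the outer level uses these pairs as if they were exact draws, which is the mechanism NSMC nests to arbitrary depth.

```python
import numpy as np

rng = np.random.default_rng(1)

def q_unnorm(x):
    # stand-in "intractable" proposal density (unnormalised N(0, 1))
    return np.exp(-0.5 * x**2)

def inner_properly_weighted(m=16):
    """Return one properly weighted pair (x, w) targeting q_unnorm,
    built from an inner importance sampler with proposal N(0, 2^2)."""
    xs = rng.normal(0.0, 2.0, size=m)
    r = np.exp(-xs**2 / 8.0) / (2.0 * np.sqrt(2.0 * np.pi))  # proposal pdf
    w = q_unnorm(xs) / r                                     # IS weights
    x = rng.choice(xs, p=w / w.sum())   # resample one draw, approximately ~ q
    return x, w.mean()                  # unbiased estimate of the normaliser

# Outer level: the pairs behave like exact draws from q once the weights
# enter a self-normalised estimator, since E[f(X) W] = Z_q * E_q[f].
pairs = [inner_properly_weighted() for _ in range(5000)]
x, w = map(np.array, zip(*pairs))
print("E_q[x^2] ~", np.sum(w * x**2) / np.sum(w))  # close to 1 for N(0, 1)
```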
Unbiased and Consistent Nested Sampling via Sequential Monte Carlo
We introduce a new class of sequential Monte Carlo methods called Nested
Sampling via Sequential Monte Carlo (NS-SMC), which reframes the Nested
Sampling method of Skilling (2006) in terms of sequential Monte Carlo
techniques. This new framework allows convergence results to be obtained in the
setting where Markov chain Monte Carlo (MCMC) is used to produce new samples. An
additional benefit is that marginal likelihood estimates are unbiased. In
contrast to NS, the analysis of NS-SMC does not require the (unrealistic)
assumption that the simulated samples be independent. As the original NS
algorithm is a special case of NS-SMC, this provides insights as to why NS
seems to produce accurate estimates despite a typical violation of its
assumptions. For applications of NS-SMC, we give advice on tuning MCMC kernels
in an automated manner via a preliminary pilot run, and present a new method
for appropriately choosing the number of MCMC repeats at each iteration.
Finally, a numerical study is conducted where the performance of NS-SMC and
temperature-annealed SMC is compared on several challenging and realistic
problems. MATLAB code for our experiments is made available at
https://github.com/LeahPrice/SMC-NS.
Comment: 45 pages; some minor typographical errors fixed since the last version.
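For readers coming to this from the SMC side, the sketch below is a toy version of the original nested sampling loop that NS-SMC reframes, under assumptions of our own making: a uniform prior on [0, 1] and a monotone likelihood, so the constrained-prior draw can be done exactly rather than by MCMC. All names are illustrative, and this is plain NS, not the paper's NS-SMC algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def loglike(theta):
    # toy likelihood L(theta) = exp(-theta^2 / (2 * 0.1^2)) on a U(0, 1) prior;
    # the true evidence Z = int_0^1 L(t) dt is about 0.1253
    return -theta**2 / (2 * 0.1**2)

n_live, n_iter = 200, 1500
live = rng.uniform(size=n_live)
logL = loglike(live)
log_terms = []
for i in range(n_iter):
    k = np.argmin(logL)                                       # worst live point
    width = np.exp(-i / n_live) - np.exp(-(i + 1) / n_live)   # expected shrinkage
    log_terms.append(logL[k] + np.log(width))                 # slab L_k * dX_k
    # exact draw from the prior constrained to L > L_k, possible here only
    # because L is decreasing in theta; NS and NS-SMC use MCMC moves instead
    live[k] = rng.uniform(0.0, live[k])
    logL[k] = loglike(live[k])
# remaining live points fill the final prior volume exp(-n_iter / n_live)
log_terms.extend(logL - n_iter / n_live - np.log(n_live))
m = max(log_terms)
print("log Z estimate:", m + np.log(np.sum(np.exp(np.array(log_terms) - m))))
print("log Z truth   :", np.log(0.1253))
```

The deterministic shrinkage factors exp(-i / n_live) are where the classical NS analysis leans on its independence assumption; the SMC-based convergence results described in the abstract avoid that assumption.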
Split Sampling: Expectations, Normalisation and Rare Events
In this paper we develop a methodology, which we call split sampling, to
estimate high-dimensional expectations and rare-event probabilities. Split
sampling uses an auxiliary-variable MCMC simulation and expresses the
expectation of interest as an integrated set of rare-event probabilities. We
derive our estimator from a Rao-Blackwellised estimate of a marginal auxiliary
variable distribution. We illustrate our method with two applications. First,
we compute a shortest-path rare-event probability in a network and compare our
method with a cross-entropy approach. Then, we compute the normalisation
constant of a high-dimensional mixture of Gaussians and compare our estimate to
one based on nested sampling. We discuss the relationship
between our method and other alternatives such as the product of conditional
probability estimator and importance sampling. The methods developed here are
available in the R package SplitSampling.
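The reformulation at the heart of this abstract, writing an expectation as an integrated set of rare-event probabilities, is the layer-cake identity E[f(X)] = int_0^inf P(f(X) > t) dt for non-negative f. The sketch below (our toy check in Python, not code from the SplitSampling package) verifies it with plain Monte Carlo at each level; the paper's contribution is to estimate those level probabilities efficiently via auxiliary-variable MCMC and Rao-Blackwellisation.

```python
import numpy as np

rng = np.random.default_rng(3)

# layer-cake check: E[f(X)] = int_0^inf P(f(X) > t) dt for f >= 0.
# Here f(x) = x^2 with X ~ N(0, 1), so E[f(X)] = 1.
fx = rng.normal(size=100_000) ** 2
ts = np.linspace(0.0, 30.0, 301)                    # truncate the integral at 30
exceed = np.array([(fx > t).mean() for t in ts])    # crude MC at each level
dt = ts[1] - ts[0]
layer_cake = 0.5 * (exceed[:-1] + exceed[1:]).sum() * dt   # trapezoid rule
print("integrated tail probabilities:", layer_cake)  # ~ 1.0
print("direct Monte Carlo mean      :", fx.mean())   # ~ 1.0
```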
On Nesting Monte Carlo Estimators
Many problems in machine learning and statistics involve nested expectations
and thus do not permit conventional Monte Carlo (MC) estimation. For such
problems, one must nest estimators, such that terms in an outer estimator
themselves involve the calculation of a separate, nested estimation. We
investigate the statistical implications of nesting MC estimators, including
cases of multiple levels of nesting, and establish the conditions under which
they converge. We derive corresponding rates of convergence and provide
empirical evidence that these rates are observed in practice. We further
identify a number of pitfalls that can arise from naive nesting of MC
estimators, provide guidelines about how these can be avoided, and lay out
novel methods for reformulating certain classes of nested expectation problems
into single expectations, leading to improved convergence rates. We demonstrate
the applicability of our work by using our results to develop a new estimator
for discrete Bayesian experimental design problems and derive error bounds for
a class of variational objectives.
Comment: To appear at the International Conference on Machine Learning (ICML) 2018.
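One of the pitfalls the abstract refers to is easy to reproduce. In the sketch below (our construction, not an example from the paper), a plain MC estimate of an inner expectation is pushed through a nonlinear outer function g(z) = z^2; the resulting nested estimator carries a bias of order 1/M in the number of inner samples M, so both sample sizes must grow for convergence, consistent with the kind of rate results the paper derives.

```python
import numpy as np

rng = np.random.default_rng(4)

# nested expectation gamma = E_y[ g( E_x[ f(x, y) ] ) ] with
# f(x, y) = x + y, x, y ~ N(0, 1) and g(z) = z^2, so gamma = E[y^2] = 1.
# For this g the naive nested estimator has bias Var(inner) = 1/M.
def nested_mc(n_outer, n_inner):
    y = rng.normal(size=n_outer)
    x = rng.normal(size=(n_outer, n_inner))
    inner = (x + y[:, None]).mean(axis=1)   # inner MC estimate of E_x[f(x, y)]
    return (inner ** 2).mean()              # outer average of g(inner)

for m in (1, 10, 100):
    print(f"M = {m:3d}: estimate {nested_mc(50_000, m):.3f}"
          f"  (theory: 1 + 1/M = {1 + 1/m:.3f})")
```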