SMCTC : sequential Monte Carlo in C++
Sequential Monte Carlo methods are a general class of Monte Carlo methods for sampling from sequences of distributions. Simple examples of these algorithms are used widely in the tracking and signal-processing literature, and recent developments show that the techniques have much broader applicability and can be applied very effectively to statistical inference problems. Unfortunately, these methods are often perceived as computationally expensive and difficult to implement. This article seeks to address both of these problems. A C++ template class library for the efficient and convenient implementation of very general sequential Monte Carlo algorithms is presented. Two example applications are provided: a simple particle filter for illustrative purposes and a state-of-the-art algorithm for rare-event estimation.
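SMCTC itself is a C++ template library; as a language-neutral illustration of the kind of algorithm it supports, a minimal bootstrap particle filter can be sketched in Python. All names below are illustrative, not SMCTC's API, and the model functions are supplied by the caller:

```python
import math
import random

def bootstrap_filter(observations, n_particles, init, propagate, loglik):
    """Minimal bootstrap particle filter: propagate, weight, resample.

    init()        draws a particle from the initial distribution,
    propagate(x)  samples the next state given the current one,
    loglik(y, x)  is the log observation density of y given state x.
    """
    particles = [init() for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate each particle through the assumed state dynamics.
        particles = [propagate(x) for x in particles]
        # Weight by the observation likelihood (log-space for stability).
        logw = [loglik(y, x) for x in particles]
        m = max(logw)
        w = [math.exp(l - m) for l in logw]
        s = sum(w)
        w = [wi / s for wi in w]
        # Filtering estimate of the posterior state mean.
        estimates.append(sum(wi * xi for wi, xi in zip(w, particles)))
        # Multinomial resampling to combat weight degeneracy.
        particles = random.choices(particles, weights=w, k=n_particles)
    return estimates
```

For instance, with a scalar AR(1) state and Gaussian observation noise, the filtering estimates track the observations closely when the observation noise is small relative to the prior spread.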
Nudging the particle filter
We investigate a new sampling scheme aimed at improving the performance of
particle filters whenever (a) there is a significant mismatch between the
assumed model dynamics and the actual system, or (b) the posterior probability
tends to concentrate in relatively small regions of the state space. The
proposed scheme pushes some particles towards specific regions where the
likelihood is expected to be high, an operation known as nudging in the
geophysics literature. We re-interpret nudging in a form applicable to any
particle filtering scheme, as it does not involve any changes in the rest of
the algorithm. Since the particles are modified, but the importance weights do
not account for this modification, the use of nudging leads to additional bias
in the resulting estimators. However, we prove analytically that nudged
particle filters can still attain asymptotic convergence with the same error
rates as conventional particle methods. Simple analysis also yields an
alternative interpretation of the nudging operation that explains its
robustness to model errors. Finally, we show numerical results that illustrate
the improvements that can be attained using the proposed scheme. In particular,
we present nonlinear tracking examples with synthetic data and a model
inference example using real-world financial data.
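The nudging operation described above can be sketched as a single step that moves part of the particle cloud toward higher likelihood while leaving the importance weights untouched. This is a hypothetical sketch: the gradient step, step size, and nudged fraction are illustrative choices, not the paper's exact scheme:

```python
def nudge(particles, grad_loglik, step=0.1, fraction=0.5):
    """Push a fraction of the particles toward regions of higher
    likelihood via a gradient step. The importance weights are
    deliberately NOT corrected for this move, which introduces the
    (provably controlled) bias discussed in the abstract."""
    n_nudged = int(fraction * len(particles))
    return [x + step * grad_loglik(x) if i < n_nudged else x
            for i, x in enumerate(particles)]
```

Because the rest of the filter is unchanged, a step like this can be dropped into any particle filtering scheme between propagation and weighting.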
Inverse Problems and Data Assimilation
These notes are designed with the aim of providing a clear and concise
introduction to the subjects of Inverse Problems and Data Assimilation, and
their inter-relations, together with citations to some relevant literature in
this area. The first half of the notes is dedicated to studying the Bayesian
framework for inverse problems. Techniques such as importance sampling and
Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the
desirable property that in the limit of an infinite number of samples they
reproduce the full posterior distribution. Since it is often computationally
intensive to implement these methods, especially in high dimensional problems,
approximate techniques such as approximating the posterior by a Dirac or a
Gaussian distribution are discussed. The second half of the notes covers data
assimilation. This refers to a particular class of inverse problems in which
the unknown parameter is the initial condition of a dynamical system (and, in
the stochastic-dynamics case, the subsequent states of the system), and the
data comprise partial and noisy observations of that (possibly stochastic)
dynamical system. We will also demonstrate that methods developed in data
assimilation may be employed to study generic inverse problems, by introducing
an artificial time to generate a sequence of probability measures interpolating
from the prior to the posterior.
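As a concrete instance of the importance-sampling technique introduced in the first half of the notes, a self-normalized estimator of the posterior mean, using the prior as proposal, can be sketched as follows (a minimal illustration under the Bayesian framework described, not code from the notes):

```python
import math
import random

def posterior_mean_is(prior_sample, loglik, n=20000):
    """Self-normalized importance sampling with the prior as proposal:
    each draw from the prior is weighted by its likelihood, and the
    weighted average converges to the posterior mean as n grows."""
    xs = [prior_sample() for _ in range(n)]
    logw = [loglik(x) for x in xs]
    m = max(logw)                      # subtract max for numerical stability
    w = [math.exp(l - m) for l in logw]
    s = sum(w)
    return sum(wi * xi for wi, xi in zip(w, xs)) / s
```

For a Gaussian prior N(0, 1) and a single observation y = 1 with unit-variance Gaussian noise, the posterior mean is exactly 0.5, which the estimator recovers to within Monte Carlo error.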
Long-term stability of sequential Monte Carlo methods under verifiable conditions
This paper discusses particle filtering in general hidden Markov models
(HMMs) and presents novel theoretical results on the long-term stability of
bootstrap-type particle filters. More specifically, we establish that the
asymptotic variance of the Monte Carlo estimates produced by the bootstrap
filter is uniformly bounded in time. In contrast to most previous results
of this type, which generally presuppose that the state space of the hidden
state process is compact (an assumption rarely satisfied in practice),
our very mild assumptions are satisfied for a large class of HMMs with possibly
noncompact state space. In addition, we derive a similar time uniform bound on
the asymptotic error. Importantly, our results hold for
misspecified models; that is, we do not assume that the data entering the
particle filter originate from the model governing the dynamics of the
particles, nor even from an HMM. (Published in the Annals of Applied
Probability, http://dx.doi.org/10.1214/13-AAP962, by the Institute of
Mathematical Statistics, http://www.imstat.org/aap/.)
Particle-based likelihood inference in partially observed diffusion processes using generalised Poisson estimators
This paper concerns the use of the expectation-maximisation (EM) algorithm
for inference in partially observed diffusion processes. In this context, a
well-known problem is that all but a few diffusion processes lack closed-form
expressions for their transition densities. Thus, in order to estimate the EM
intermediate quantity efficiently, we construct, using novel techniques for
unbiased estimation of diffusion transition densities, a random-weight
fixed-lag auxiliary particle smoother, which avoids the well-known problem of
particle-trajectory degeneracy in smoothing mode. The estimator is justified
theoretically and demonstrated on a simulated example.
Stochastic Volatility Filtering with Intractable Likelihoods
This paper is concerned with particle filtering for α-stable
stochastic volatility models. The α-stable distribution provides a
flexible framework for modeling asymmetry and heavy tails, which is useful when
modeling financial returns. An issue with this distributional assumption is the
lack of a closed form for the probability density function. To estimate the
volatility of financial returns in this setting, we develop a novel auxiliary
particle filter. The algorithm we develop can be easily applied to any hidden
Markov model for which the likelihood function is intractable or
computationally expensive. The approximate target distribution of our auxiliary
filter is based on the idea of approximate Bayesian computation (ABC). ABC
methods allow for inference on posterior quantities in situations when the
likelihood of the underlying model is not available in closed form, but
simulating samples from it is possible. The ABC auxiliary particle filter
(ABC-APF) that we propose provides not only a good alternative to state
estimation in stochastic volatility models, but it also improves on the
existing ABC literature. It allows for more flexibility in state estimation
while improving on the accuracy through better proposal distributions in cases
when the optimal importance density of the filter is unavailable in closed
form. We assess the performance of the ABC-APF on a dataset simulated from the
α-stable stochastic volatility model and compare it to other existing
ABC filters.
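The core ABC idea in the filter described above can be sketched as a single filtering step: because the observation density is intractable, each propagated particle simulates a pseudo-observation from the model and is weighted by a kernel on its distance to the real observation. This is a generic sketch with a Gaussian kernel of bandwidth `eps`; it omits the auxiliary-variable look-ahead that distinguishes the actual ABC-APF:

```python
import math
import random

def abc_filter_step(particles, y, propagate, simulate_obs, eps):
    """One ABC particle-filtering step. The intractable observation
    density is replaced by a Gaussian kernel comparing a simulated
    pseudo-observation simulate_obs(x) with the actual observation y."""
    particles = [propagate(x) for x in particles]
    w = [math.exp(-0.5 * ((simulate_obs(x) - y) / eps) ** 2)
         for x in particles]
    s = sum(w)
    w = [wi / s for wi in w]
    # Multinomial resampling using the kernel-based weights.
    resampled = random.choices(particles, weights=w, k=len(particles))
    return resampled, w
```

Only the ability to simulate from the observation model is required, which is exactly the setting in which the likelihood is unavailable in closed form.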