Long-term stability of sequential Monte Carlo methods under verifiable conditions
This paper discusses particle filtering in general hidden Markov models
(HMMs) and presents novel theoretical results on the long-term stability of
bootstrap-type particle filters. More specifically, we establish that the
asymptotic variance of the Monte Carlo estimates produced by the bootstrap
filter is uniformly bounded in time. In contrast to most previous results
of this type, which in general presuppose that the state space of the hidden
state process is compact (an assumption that is rarely satisfied in practice),
our very mild assumptions are satisfied for a large class of HMMs with possibly
noncompact state space. In addition, we derive a similar time-uniform bound on
the asymptotic error. Importantly, our results hold for misspecified models;
that is, we do not assume that the data entering the particle filter originate
from the model governing the dynamics of the particles, or even from an HMM at
all.
Comment: Published in the Annals of Applied Probability (http://www.imstat.org/aap/)
by the Institute of Mathematical Statistics (http://www.imstat.org),
http://dx.doi.org/10.1214/13-AAP962
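As a concrete illustration of the bootstrap filter the paper analyses, the sketch below (not from the paper; the model, parameters, and particle count are illustrative choices) runs the mutation/weighting/selection recursion on a linear-Gaussian HMM whose state space, the real line, is noncompact:

```python
import numpy as np

def bootstrap_filter(y, n_particles=1000, phi=0.9, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap particle filter for the linear-Gaussian HMM
    X_t = phi * X_{t-1} + sigma_x * V_t,   Y_t = X_t + sigma_y * W_t.
    Returns filtering-mean estimates of E[X_t | Y_1:t] for each t."""
    rng = np.random.default_rng(seed)
    # Initialise from the stationary distribution of the state process
    x = rng.normal(0.0, sigma_x / np.sqrt(1 - phi**2), n_particles)
    means = []
    for yt in y:
        # Mutation: propagate particles through the state dynamics (prior proposal)
        x = phi * x + sigma_x * rng.normal(size=n_particles)
        # Weighting: Gaussian observation likelihood g(y_t | x_t), log-scale for stability
        logw = -0.5 * ((yt - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        # Selection: multinomial resampling
        x = rng.choice(x, size=n_particles, p=w)
    return np.array(means)

# Simulate data from the same model and run the filter on it
rng = np.random.default_rng(1)
T, phi = 100, 0.9
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + rng.normal()
y = x_true + rng.normal(size=T)
est = bootstrap_filter(y)
```

Multinomial resampling is used here for simplicity; systematic or residual resampling are common lower-variance alternatives.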
The Alive Particle Filter
In the following article we develop a particle filter for approximating
Feynman-Kac models with indicator potentials. Examples of such models include
approximate Bayesian computation (ABC) posteriors associated with hidden Markov
models (HMMs) or rare-event problems. Such models require advanced particle
filter or Markov chain Monte Carlo (MCMC) algorithms, e.g. Jasra et al. (2012),
to perform estimation. One drawback of existing particle filters is that they
may 'collapse', in that the algorithm may terminate early due to the indicator
potentials. In this article, using a special case of the locally adaptive
particle filter in Lee et al. (2013), which is closely related to Le Gland &
Oudjane (2004), we employ an algorithm that deals with this problem, at the
price of a random cost per time step. This
algorithm is investigated from a theoretical perspective and several results
are given which help to validate the algorithms and to provide guidelines for
their implementation. In addition, we show how this algorithm can be used
within MCMC, using particle MCMC (Andrieu et al. 2010). Numerical examples are
presented for ABC approximations of HMMs.
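A minimal sketch of the "keep sampling until enough particles are alive" idea, assuming an indicator potential built from an ABC pseudo-observation; the model, tolerance, and parameter values are illustrative, not the paper's:

```python
import numpy as np

def alive_particle_filter(y, n=200, eps=1.0, phi=0.8, seed=0):
    """Alive particle filter (sketch) for an ABC approximation of an HMM with
    indicator potential G_t(x) = 1{ |y_sim - y_t| < eps }, where y_sim is a
    pseudo-observation simulated from the observation density given x.
    At each step, particles are proposed until n+1 of them are 'alive'
    (nonzero potential); the (n+1)-th is discarded. Returns the surviving
    particles at the final step and the random cost (trials) per step."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)  # initial particles, assumed already alive
    costs = []
    for yt in y:
        alive, trials = [], 0
        while len(alive) < n + 1:  # sample until n+1 alive particles are found
            trials += 1
            xp = phi * x[rng.integers(n)] + rng.normal()  # resample ancestor, mutate
            y_sim = xp + rng.normal()                     # simulate pseudo-observation
            if abs(y_sim - yt) < eps:                     # indicator potential fires
                alive.append(xp)
        x = np.array(alive[:n])  # discard the (n+1)-th alive particle
        costs.append(trials)
    return x, np.array(costs)

y = np.array([0.3, -0.5, 0.1, 0.8, -0.2])
x_final, costs = alive_particle_filter(y)
```

The cost per step is random: each step uses however many proposals it takes to collect n+1 alive particles, so the tolerance eps must not be chosen too small relative to the model.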
Moderate deviations for particle filtering
Consider the state space model (X_t,Y_t), where (X_t) is a Markov chain, and
(Y_t) are the observations. In order to solve the so-called filtering problem,
one has to compute L(X_t|Y_1,...,Y_t), the law of X_t given the observations
(Y_1,...,Y_t). The particle filtering method gives an approximation of the law
L(X_t|Y_1,...,Y_t) by an empirical measure \frac{1}{n}\sum_1^n\delta_{x_{i,t}}.
In this paper we establish the moderate deviation principle for the empirical
mean \frac{1}{n}\sum_1^n\psi(x_{i,t}) (centered and properly rescaled) when the
number of particles grows to infinity, enhancing the central limit theorem.
Several extensions and examples are also studied.
Comment: Published in the Annals of Applied Probability (http://www.imstat.org/aap/)
by the Institute of Mathematical Statistics (http://www.imstat.org),
http://dx.doi.org/10.1214/105051604000000657
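Schematically, and under suitable regularity conditions (this is the textbook form of a moderate deviation principle, not a quotation from the paper), the MDP sits between the CLT scale and the large-deviation scale:

```latex
% CLT (scale \sqrt{n}): writing m_n = \frac{1}{n}\sum_{i=1}^{n}\psi(x_{i,t}) and
% m = \mathbb{E}\bigl[\psi(X_t)\mid Y_1,\dots,Y_t\bigr],
\sqrt{n}\,(m_n - m) \;\xrightarrow{d}\; \mathcal{N}(0,\sigma_t^2).
% MDP (intermediate scales): for any speeds b_n \to \infty with b_n/\sqrt{n} \to 0,
\lim_{n\to\infty} \frac{1}{b_n^2}
  \log \mathbb{P}\!\left( \frac{\sqrt{n}}{b_n}\,\bigl|m_n - m\bigr| \ge \delta \right)
  \;=\; -\frac{\delta^2}{2\sigma_t^2}, \qquad \delta > 0.
```

Taking b_n constant recovers the CLT scale, while b_n of order \sqrt{n} would reach the large-deviation scale; the MDP covers everything strictly in between.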
Gradient free parameter estimation for hidden Markov models with intractable likelihoods
In this article we focus on maximum likelihood estimation (MLE) for the static model parameters of hidden Markov models (HMMs). We consider the case where one cannot, or does not want to, compute the conditional likelihood density of the observation given the hidden state, because of increased computational complexity or analytical intractability. Instead, we assume that one may obtain samples from this conditional likelihood and hence use approximate Bayesian computation (ABC) approximations of the original HMM. Although these ABC approximations induce a bias, this can be controlled to arbitrary precision via a positive parameter ε, so that the bias decreases with decreasing ε. We first establish that when using an ABC approximation of the HMM for a fixed batch of data, the bias of the resulting log-marginal likelihood and its gradient is no worse than O(nε), where n is the total number of data points. Therefore, when using gradient methods to perform MLE for the ABC approximation of the HMM, one may expect parameter estimates of reasonable accuracy. To compute an estimate of the unknown and fixed model parameters, we propose a gradient approach based on simultaneous perturbation stochastic approximation (SPSA) and sequential Monte Carlo (SMC) for the ABC approximation of the HMM. The performance of this method is illustrated using two numerical examples.
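The SPSA component can be sketched as follows. This is a generic SPSA ascent, not the paper's implementation: the SMC-based ABC likelihood estimator is stood in for by an arbitrary (possibly noisy) log-likelihood function, and all gain-sequence constants are illustrative defaults.

```python
import numpy as np

def spsa_maximize(loglik, theta0, n_iter=500, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Simultaneous perturbation stochastic approximation (SPSA) for maximizing
    a noisy log-likelihood. Only two evaluations of `loglik` are needed per
    iteration, regardless of the parameter dimension."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k**alpha  # decaying step size
        ck = c / k**gamma  # decaying perturbation size
        # Rademacher (±1) perturbation of all coordinates simultaneously
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two-sided finite difference along the random direction estimates the gradient
        g_hat = (loglik(theta + ck * delta) - loglik(theta - ck * delta)) \
                / (2 * ck) * (1.0 / delta)
        theta = theta + ak * g_hat  # ascent step (MLE maximizes the log-likelihood)
    return theta

# Toy check: noisy quadratic log-likelihood peaked at (1, -2)
rng = np.random.default_rng(1)
target = np.array([1.0, -2.0])
noisy_loglik = lambda th: -np.sum((th - target) ** 2) + 0.01 * rng.normal()
theta_hat = spsa_maximize(noisy_loglik, theta0=[0.0, 0.0])
```

SPSA's appeal in this setting is exactly the two-evaluations-per-iteration property: each log-likelihood evaluation would be an expensive SMC run, so coordinate-wise finite differences would be far more costly in higher dimensions.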