A new approach to particle based smoothed marginal MAP
We present a new method for finding the MAP state estimate from the weighted-particle representation of the marginal smoother distribution. This contrasts with the usual practice of selecting the particle with the highest weight as the MAP estimate, even though that particle is not necessarily the most probable state. The method developed here uses only the particles together with their corresponding filtering and smoothing weights. We apply this estimator to finding the unknown initial state of a dynamical system and to the parameter estimation problem.
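As a rough illustration of the idea (a hypothetical sketch, not the paper's algorithm; for simplicity it uses only smoothing weights, whereas the method above combines filtering and smoothing weights), one can evaluate a weighted kernel density estimate of the marginal smoother at every particle location and return the maximiser, rather than the particle with the largest weight:

```python
import numpy as np

def smoothed_map_estimate(particles, smooth_weights, bandwidth=0.1):
    """Particle-based MAP sketch: instead of returning the particle with
    the largest smoothing weight, evaluate a weighted Gaussian-kernel
    density estimate of the marginal smoother at each particle location
    and return the location where that density is highest.
    (`bandwidth` is an illustrative tuning parameter, not from the paper.)
    """
    particles = np.asarray(particles, dtype=float)
    w = np.asarray(smooth_weights, dtype=float)
    w = w / w.sum()
    # Pairwise kernel evaluations between all particle locations, (N, N)
    diffs = particles[:, None] - particles[None, :]
    kernel = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    density = kernel @ w          # estimated density at each particle
    return particles[np.argmax(density)]
```

Because nearby particles reinforce each other's estimated density, a cluster of moderately weighted particles can beat a single isolated high-weight particle, which is exactly why the max-weight particle need not be the most probable state.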
Evaluating Structural Models for the U.S. Short Rate Using EMM and Particle Filters
We combine the efficient method of moments with appropriate algorithms from the optimal filtering literature to study a collection of models for the U.S. short rate. Our models include two continuous-time stochastic volatility models and two regime switching models, which provided the best fit in previous work that examined a large collection of models. The continuous-time stochastic volatility models fall into the class of nonlinear, non-Gaussian state space models for which we apply particle filtering and smoothing algorithms. Our results demonstrate the effectiveness of the particle filter for continuous-time processes. Our analysis also provides an alternative and complementary approach to the reprojection technique of Gallant and Tauchen (1998) for studying the dynamics of volatility.
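As a minimal sketch of the filtering machinery involved (the model below is an illustrative discrete-time stochastic volatility specification with made-up parameter names, not the paper's continuous-time models), a bootstrap particle filter approximates the likelihood one observation at a time:

```python
import numpy as np

def sv_loglik_bootstrap(y, phi=0.95, sigma=0.2, beta=1.0, n_part=500, seed=0):
    """Bootstrap particle filter log-likelihood for a basic discrete-time
    stochastic volatility model (an illustrative stand-in):
        h_t = phi * h_{t-1} + sigma * eta_t,   eta_t ~ N(0, 1)
        y_t = beta * exp(h_t / 2) * eps_t,     eps_t ~ N(0, 1)
    """
    rng = np.random.default_rng(seed)
    # Initialise from the stationary distribution of the AR(1) volatility
    h = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_part)
    loglik = 0.0
    for yt in y:
        h = phi * h + sigma * rng.normal(size=n_part)   # propagate
        sd = beta * np.exp(h / 2)
        logw = -0.5 * np.log(2 * np.pi * sd**2) - 0.5 * (yt / sd) ** 2
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                  # log p(y_t | y_1:t-1)
        idx = rng.choice(n_part, size=n_part, p=w / w.sum())
        h = h[idx]                                      # multinomial resample
    return loglik
```

The running sum of per-step normalising constants is the simulated log-likelihood, which is the quantity a moment-matching estimator such as EMM can be combined with.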
Particle Learning and Smoothing
Particle learning (PL) provides state filtering, sequential parameter learning and smoothing in a general class of state space models. Our approach extends existing particle methods by incorporating the estimation of static parameters via a fully-adapted filter that utilizes conditional sufficient statistics for parameters and/or states as particles. State smoothing in the presence of parameter uncertainty is also solved as a by-product of PL. In a number of examples, we show that PL outperforms existing particle filtering alternatives and proves to be a competitor to MCMC.

Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics, http://dx.doi.org/10.1214/10-STS325.
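To illustrate the core idea of carrying conditional sufficient statistics alongside states (a much-simplified, Storvik-style sketch with an invented model, not PL's fully-adapted resample-propagate scheme), consider a local-level model whose observation variance has an inverse-gamma posterior that each particle tracks through two scalars:

```python
import numpy as np

def pl_local_level(y, tau=0.1, a0=2.0, b0=1.0, n_part=1000, seed=0):
    """Sequential state + parameter learning sketch: each particle carries
    the latent state x plus conditional sufficient statistics (a, b) for an
    inverse-gamma posterior on the unknown observation variance sigma^2.
    Model (illustrative, not from the paper):
        x_t = x_{t-1} + N(0, tau^2),   y_t = x_t + N(0, sigma^2).
    Returns the final particle means of x and sigma^2.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_part)
    a = np.full(n_part, a0)
    b = np.full(n_part, b0)
    for yt in y:
        sigma2 = 1.0 / rng.gamma(a, 1.0 / b)    # draw sigma^2 ~ IG(a, b)
        x = x + tau * rng.normal(size=n_part)   # propagate state
        logw = -0.5 * np.log(2 * np.pi * sigma2) - 0.5 * (yt - x) ** 2 / sigma2
        w = np.exp(logw - logw.max())
        idx = rng.choice(n_part, size=n_part, p=w / w.sum())
        x, a, b = x[idx], a[idx], b[idx]        # resample states and stats
        a = a + 0.5                             # conjugate sufficient-stat
        b = b + 0.5 * (yt - x) ** 2             # update for sigma^2
    return x.mean(), (b / (a - 1)).mean()
```

Because the parameter is regenerated from its sufficient statistics at each step rather than frozen inside each particle, the filter avoids the particle-degeneracy problem that plagues naive static-parameter augmentation.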
A Probabilistic Perspective on Gaussian Filtering and Smoothing
We present a general probabilistic perspective on Gaussian filtering and smoothing. This allows us to show that common approaches to Gaussian filtering/smoothing can be distinguished solely by their methods of computing/approximating the means and covariances of joint probabilities. This implies that novel filters and smoothers can be derived straightforwardly by providing methods for computing these moments. Based on this insight, we derive the cubature Kalman smoother and propose a novel robust filtering and smoothing algorithm based on Gibbs sampling.
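The shared conditioning step this perspective isolates can be written down directly (standard Gaussian-conditioning formulas; the function name is ours): once the joint mean and covariance of state x and measurement z are available, by whatever approximation, the update is the same closed-form expression for every Gaussian filter:

```python
import numpy as np

def gaussian_conditioning(mu_x, S_x, mu_z, S_z, S_xz, z_obs):
    """Generic Gaussian filter/smoother update. However the moments of the
    joint p(x, z) were computed or approximated (linearization, unscented or
    cubature points, sampling), conditioning on z = z_obs is the same step:
        K  = S_xz S_z^{-1}
        mu = mu_x + K (z_obs - mu_z)
        S  = S_x - K S_z K^T
    """
    K = S_xz @ np.linalg.inv(S_z)          # "Kalman gain"
    mu_post = mu_x + K @ (z_obs - mu_z)
    S_post = S_x - K @ S_z @ K.T
    return mu_post, S_post
```

In the linear-Gaussian case the exact joint moments recover the classical Kalman update; plugging in cubature-point moments instead yields a cubature Kalman filter, with no change to this function.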
Ecological non-linear state space model selection via adaptive particle Markov chain Monte Carlo (AdPMCMC)
We develop a novel advanced Particle Markov chain Monte Carlo algorithm that is capable of sampling from the posterior distribution of non-linear state space models for both the unobserved latent states and the unknown model parameters. We apply this novel methodology to five population growth models, including models with strong and weak Allee effects, and test if it can efficiently sample from the complex likelihood surface that is often associated with these models. Utilising real and also synthetically generated data sets, we examine the extent to which observation noise and process error may frustrate efforts to choose between these models. Our novel algorithm involves an Adaptive Metropolis proposal combined with an SIR Particle MCMC algorithm (AdPMCMC). We show that the AdPMCMC algorithm samples complex, high-dimensional spaces efficiently, and is therefore superior to standard Gibbs or Metropolis-Hastings algorithms that are known to converge very slowly when applied to the non-linear state space ecological models considered in this paper. Additionally, we show how the AdPMCMC algorithm can be used to recursively estimate the Bayesian Cramér-Rao Lower Bound of Tichavský (1998). We derive expressions for these Cramér-Rao Bounds and estimate them for the models considered. Our results demonstrate a number of important features of common population growth models, most notably their multi-modal posterior surfaces and dependence between the static and dynamic parameters. We conclude by sampling from the posterior distribution of each of the models, and use Bayes factors to highlight how observation noise significantly diminishes our ability to select among some of the models, particularly those that are designed to reproduce an Allee effect.