
    The iterated auxiliary particle filter

    We present an offline, iterated particle filter to facilitate statistical inference in general state space hidden Markov models. Given a model and a sequence of observations, the associated marginal likelihood L is central to likelihood-based inference for unknown statistical parameters. We define a class of "twisted" models: each member is specified by a sequence of positive functions psi and has an associated psi-auxiliary particle filter that provides unbiased estimates of L. We identify a sequence psi* that is optimal in the sense that the psi*-auxiliary particle filter's estimate of L has zero variance. In practical applications, psi* is unknown, so the psi*-auxiliary particle filter cannot straightforwardly be implemented. We use an iterative scheme to approximate psi*, and demonstrate empirically that the resulting iterated auxiliary particle filter significantly outperforms the bootstrap particle filter in challenging settings. Applications include parameter estimation using a particle Markov chain Monte Carlo algorithm.
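
    As a point of reference for the twisted construction, the sketch below shows a plain bootstrap particle filter whose product of per-step weight averages is an unbiased estimator of the marginal likelihood L, on a hypothetical linear-Gaussian toy model (the model, parameter values, and function names are illustrative assumptions, not the paper's setup). The psi-auxiliary filter of the abstract replaces the proposal and weights with twisted counterparts to drive the variance of this estimate down.

```python
import numpy as np

# Toy linear-Gaussian state space model (an assumption for illustration):
#   x_t = a * x_{t-1} + N(0, q),   y_t = x_t + N(0, r)

def bootstrap_pf_loglik(y, n_particles=500, a=0.9, q=1.0, r=1.0, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(q / (1 - a**2)), size=n_particles)  # stationary init
    log_L = 0.0
    for y_t in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), size=n_particles)   # propagate
        logw = -0.5 * ((y_t - x) ** 2 / r + np.log(2 * np.pi * r))  # observation density
        m = logw.max()
        w = np.exp(logw - m)
        # the product over time of these per-step weight averages is unbiased for L
        log_L += m + np.log(w.mean())
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]  # resample
    return log_L

# Example: simulate data from the same toy model and estimate log L.
rng = np.random.default_rng(0)
xs = [rng.normal(0.0, np.sqrt(1.0 / (1 - 0.9**2)))]
ys = []
for _ in range(50):
    xs.append(0.9 * xs[-1] + rng.normal(0.0, 1.0))
    ys.append(xs[-1] + rng.normal(0.0, 1.0))
print(bootstrap_pf_loglik(np.array(ys), seed=1))
```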

    The Coordinate Particle Filter - A novel Particle Filter for High Dimensional Systems

    Parametric filters, such as the Extended Kalman Filter and the Unscented Kalman Filter, typically scale well with the dimensionality of the problem, but they are known to fail if the posterior state distribution cannot be closely approximated by a density of the assumed parametric form. For nonparametric filters, such as the Particle Filter, the converse holds. Such methods are able to approximate any posterior, but the computational requirements scale exponentially with the number of dimensions of the state space. In this paper, we present the Coordinate Particle Filter, which alleviates this problem. We propose to compute the particle weights recursively, dimension by dimension. This allows us to explore one dimension at a time, and resample after each dimension if necessary. Experimental results on simulated as well as real data confirm that the proposed method has a substantial performance advantage over the Particle Filter in high-dimensional systems where not all dimensions are highly correlated. We demonstrate the benefits of the proposed method for the problem of multi-object and robotic manipulator tracking.
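
    A minimal sketch of the dimension-by-dimension weighting and resampling idea, under the simplifying assumption that both the dynamics and the observation likelihood factorize over state coordinates (the model, parameter values, and names below are illustrative, not the paper's):

```python
import numpy as np

def coordinate_pf_step(x_prev, y, rng, sigma_proc=0.3, sigma_obs=0.5, ess_frac=0.5):
    """One filtering step; x_prev has shape (n_particles, dim)."""
    n, dim = x_prev.shape
    x = x_prev.copy()
    logw = np.zeros(n)
    for d in range(dim):
        # propagate only coordinate d, then weight it against its own observation
        x[:, d] = x[:, d] + rng.normal(0.0, sigma_proc, size=n)
        logw += -0.5 * ((y[d] - x[:, d]) ** 2) / sigma_obs**2
        # resample whenever the effective sample size drops too low
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w**2) < ess_frac * n:
            idx = rng.choice(n, size=n, p=w)
            x, logw = x[idx], np.zeros(n)
    return x

rng = np.random.default_rng(1)
particles = rng.normal(size=(200, 20))        # 200 particles, 20-dimensional state
particles = coordinate_pf_step(particles, y=np.zeros(20), rng=rng)
```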

    Likelihood Consensus and Its Application to Distributed Particle Filtering

    We consider distributed state estimation in a wireless sensor network without a fusion center. Each sensor performs a global estimation task---based on the past and current measurements of all sensors---using only local processing and local communications with its neighbors. In this estimation task, the joint (all-sensors) likelihood function (JLF) plays a central role as it epitomizes the measurements of all sensors. We propose a distributed method for computing, at each sensor, an approximation of the JLF by means of consensus algorithms. This "likelihood consensus" method is applicable if the local likelihood functions of the various sensors (viewed as conditional probability density functions of the local measurements) belong to the exponential family of distributions. We then use the likelihood consensus method to implement a distributed particle filter and a distributed Gaussian particle filter. Each sensor runs a local particle filter, or a local Gaussian particle filter, that computes a global state estimate. The weight update in each local (Gaussian) particle filter employs the JLF, which is obtained through the likelihood consensus scheme. For the distributed Gaussian particle filter, the number of particles can be significantly reduced by means of an additional consensus scheme. Simulation results are presented to assess the performance of the proposed distributed particle filters for a multiple target tracking problem.
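
    As a rough illustration of the likelihood-consensus step, the sketch below assumes scalar-state, linear-Gaussian sensors (chosen so the sufficient statistics are just x and x^2): each sensor's log-likelihood reduces to a coefficient vector over those statistics, and iterative neighbor averaging with Metropolis weights approximates the network-wide sum that defines the log-JLF. All names, the ring topology, and the parameter values are assumptions of this sketch.

```python
import numpy as np

def local_coefficients(y_k, h_k, r_k):
    # expand -(y - h x)^2 / (2 r) into coefficients of (1, x, x^2), dropping constants
    return np.array([-y_k**2 / (2 * r_k), h_k * y_k / r_k, -h_k**2 / (2 * r_k)])

def average_consensus(Z, neighbors, n_iter=50):
    """Z: (n_sensors, 3) local coefficient vectors; synchronous Metropolis-weight updates."""
    Z = Z.copy()
    deg = {k: len(v) for k, v in neighbors.items()}
    for _ in range(n_iter):
        Z_new = Z.copy()
        for k, nbrs in neighbors.items():
            for j in nbrs:
                w = 1.0 / (1 + max(deg[k], deg[j]))
                Z_new[k] += w * (Z[j] - Z[k])
        Z = Z_new
    return Z   # each row converges to the network-wide average

# Ring of 4 sensors observing the same scalar state x = 1.0
rng = np.random.default_rng(2)
x_true, sensors = 1.0, [(1.0, 0.5), (0.8, 0.4), (1.2, 0.6), (0.9, 0.5)]
Z = np.array([local_coefficients(h * x_true + rng.normal(0, np.sqrt(r)), h, r)
              for h, r in sensors])
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
avg = average_consensus(Z, neighbors)
print(len(sensors) * avg[0])   # approximates the summed (log-JLF) coefficients at sensor 0
```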

    The Neural Particle Filter

    The robust estimation of dynamically changing features, such as the position of prey, is one of the hallmarks of perception. On an abstract, algorithmic level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing signals based on the history of observations, provides a mathematical framework for dynamic perception in real time. Since the general, nonlinear filtering problem is analytically intractable, particle filters are considered among the most powerful approaches to approximating the solution numerically. Yet, these algorithms prevalently rely on importance weights, and thus it remains an unresolved question how the brain could implement such an inference strategy with a neuronal population. Here, we propose the Neural Particle Filter (NPF), a weight-less particle filter that can be interpreted as the neuronal dynamics of a recurrently connected neural network that receives feed-forward input from sensory neurons and represents the posterior probability distribution in terms of samples. Specifically, this algorithm bridges the gap between the computational task of online state estimation and an implementation that allows networks of neurons in the brain to perform nonlinear Bayesian filtering. The model captures not only the properties of temporal and multisensory integration according to Bayesian statistics, but also allows online learning with a maximum likelihood approach. With an example from multisensory integration, we demonstrate that the numerical performance of the model is adequate to account for both filtering and identification problems. Due to the weight-less approach, our algorithm alleviates the 'curse of dimensionality' and thus outperforms conventional, weighted particle filters in higher dimensions for a limited number of particles.
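
    A toy, weight-less filter in the spirit of the abstract can be sketched as follows: each particle follows the signal dynamics plus a feedback term proportional to its own prediction error, and the posterior mean is just the particle average. The gain here is set heuristically from the empirical particle variance and the observation map is assumed to be the identity; both are assumptions of this sketch, not the paper's construction, which learns its parameters online by maximum likelihood.

```python
import numpy as np

def npf_step(x, y, rng, dt=0.01, obs_noise=0.5, drift=lambda z: -z):
    gain = np.var(x) / obs_noise**2                    # heuristic feedback gain (assumption)
    innovation = y - x                                 # per-particle prediction error (identity observation assumed)
    noise = rng.normal(0.0, np.sqrt(dt), size=x.shape)
    return x + drift(x) * dt + gain * innovation * dt + noise

rng = np.random.default_rng(3)
x = rng.normal(size=1000)                              # samples representing the posterior
for y_t in [0.2, 0.25, 0.3]:                           # a short observation stream
    x = npf_step(x, y_t, rng)
print(x.mean())                                        # posterior mean: plain, unweighted average
```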

    Interacting Multiple Model-Feedback Particle Filter for Stochastic Hybrid Systems

    In this paper, a novel feedback control-based particle filter algorithm for the continuous-time stochastic hybrid system estimation problem is presented. This particle filter is referred to as the interacting multiple model-feedback particle filter (IMM-FPF), and is based on the recently developed feedback particle filter. The IMM-FPF is comprised of a series of parallel FPFs, one for each discrete mode, and an exact filter recursion for the mode association probability. The proposed IMM-FPF represents a generalization of the Kalman filter-based IMM algorithm to the general nonlinear filtering problem. The remarkable conclusion of this paper is that the IMM-FPF algorithm retains the innovation error-based feedback structure even for the nonlinear problem. The interaction/merging process is also handled via a control-based approach. The theoretical results are illustrated with the aid of a numerical example problem for a maneuvering target tracking application.
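
    The IMM bookkeeping (mixing, per-mode filtering, mode-probability recursion) can be sketched with one bootstrap particle cloud per discrete mode standing in for the per-mode feedback particle filters; this substitution, together with the scalar model and parameter values below, is purely an assumption made to keep the example short.

```python
import numpy as np

def imm_pf_step(clouds, mu, y, P, drifts, rng, q=0.1, r=0.5):
    n_modes, n = len(clouds), len(clouds[0])
    # interaction/mixing: mix[i, j] = P(previous mode i | current mode j)
    mix = P * mu[:, None]
    mix /= mix.sum(axis=0, keepdims=True)
    new_clouds, lik = [], np.zeros(n_modes)
    for j in range(n_modes):
        src = rng.choice(n_modes, size=n, p=mix[:, j])            # ancestor modes
        x = np.array([clouds[i][rng.integers(n)] for i in src])   # draw from mixed clouds
        x = x + drifts[j] + rng.normal(0, np.sqrt(q), n)          # mode-j dynamics
        w = np.exp(-0.5 * (y - x) ** 2 / r**2) + 1e-300           # observation weights
        lik[j] = w.mean()                                         # mode likelihood
        new_clouds.append(x[rng.choice(n, size=n, p=w / w.sum())])
    mu = lik * (P.T @ mu)                                         # mode-probability recursion
    return new_clouds, mu / mu.sum()

rng = np.random.default_rng(5)
P = np.array([[0.95, 0.05], [0.05, 0.95]])       # mode transition matrix
clouds, mu = [np.zeros(300), np.zeros(300)], np.array([0.5, 0.5])
for y_t in [0.1, 0.3, 0.9, 1.4]:                 # measurements of a maneuvering target
    clouds, mu = imm_pf_step(clouds, mu, y_t, P, drifts=[0.0, 0.5], rng=rng)
print(mu)                                        # posterior mode-association probabilities
```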

    The Alive Particle Filter

    In the following article we develop a particle filter for approximating Feynman-Kac models with indicator potentials. Examples of such models include approximate Bayesian computation (ABC) posteriors associated with hidden Markov models (HMMs) or rare-event problems. Such models require the use of advanced particle filter or Markov chain Monte Carlo (MCMC) algorithms, e.g. Jasra et al. (2012), to perform estimation. One of the drawbacks of existing particle filters is that they may 'collapse', in that the algorithm may terminate early, due to the indicator potentials. In this article, using a special case of the locally adaptive particle filter in Lee et al. (2013), which is closely related to Le Gland & Oudjane (2004), we use an algorithm which can deal with this latter problem, whilst introducing a random cost per time step. This algorithm is investigated from a theoretical perspective and several results are given which help to validate the algorithms and to provide guidelines for their implementation. In addition, we show how this algorithm can be used within MCMC, using particle MCMC (Andrieu et al. 2010). Numerical examples are presented for ABC approximations of HMMs.
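
    One "alive" step for an indicator (ABC-style) potential might look like the sketch below: ancestors are redrawn and propagated until N + 1 alive particles have been produced, so the step cannot collapse but incurs a random cost. The toy model, tolerance, and the exact form of the likelihood factor are assumptions of this sketch; see the paper for the precise estimator and its justification.

```python
import numpy as np

def alive_step(x_prev, y_obs, eps, rng, a=0.9, q=1.0, r=1.0):
    """One ABC filtering step with indicator potential 1{|y_sim - y_obs| < eps}."""
    n = len(x_prev)
    alive, trials = [], 0
    while len(alive) < n + 1:                    # sample until N + 1 alive particles
        trials += 1
        x = a * x_prev[rng.integers(n)] + rng.normal(0, np.sqrt(q))  # propagate one proposal
        y_sim = x + rng.normal(0, np.sqrt(r))                        # simulate pseudo-observation
        if abs(y_sim - y_obs) < eps:                                 # indicator potential
            alive.append(x)
    # n / (trials - 1): negative-binomial-style likelihood factor (hedged; see paper)
    return np.array(alive[:n]), n / (trials - 1)

rng = np.random.default_rng(6)
x = rng.normal(size=200)
x, lhat = alive_step(x, y_obs=0.5, eps=0.3, rng=rng)
print(lhat)
```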

    Path sampling for particle filters with application to multi-target tracking

    In recent work (arXiv:1006.3100v1), we have presented a novel approach for improving particle filters for multi-target tracking. The suggested approach was based on drift homotopy for stochastic differential equations. Drift homotopy was used to design a Markov Chain Monte Carlo step which is appended to the particle filter and aims to bring the particle filter samples closer to the observations. In the current work, we present an alternative way to append a Markov Chain Monte Carlo step to a particle filter to bring the particle filter samples closer to the observations. Both current and previous approaches stem from the general formulation of the filtering problem. We have used the currently proposed approach on the problem of multi-target tracking for both linear and nonlinear observation models. The numerical results show that the suggested approach can significantly improve the performance of a particle filter.
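
    The generic pattern the abstract builds on can be sketched as follows: after the usual propagate-weight-resample step, each particle is moved by a few Metropolis steps that leave p(x_t | x_{t-1}, y_t) invariant, which pulls the cloud toward the observation. A plain random-walk proposal and a linear-Gaussian toy model are used here as assumptions; the paper's contribution is the specific construction of this move (via drift homotopy in the earlier work, and an alternative here).

```python
import numpy as np

def mcmc_move(x, x_prev, y, rng, a=0.9, q=1.0, r=1.0, n_steps=5, step=0.3):
    """Random-walk Metropolis moves targeting p(x_t | x_{t-1}, y_t) for each particle."""
    def log_target(xt):
        return -0.5 * (xt - a * x_prev) ** 2 / q - 0.5 * (y - xt) ** 2 / r
    lp = log_target(x)
    for _ in range(n_steps):
        prop = x + rng.normal(0, step, size=x.shape)
        lp_prop = log_target(prop)
        accept = np.log(rng.uniform(size=x.shape)) < lp_prop - lp
        x, lp = np.where(accept, prop, x), np.where(accept, lp_prop, lp)
    return x

rng = np.random.default_rng(7)
x_prev = rng.normal(size=500)                          # resampled ancestors
x = 0.9 * x_prev + rng.normal(0, 1.0, size=500)        # bootstrap proposal
x = mcmc_move(x, x_prev, y=2.0, rng=rng)               # MCMC move toward the observation
```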