
    Particle Gaussian Mixture Filters for Nonlinear Non-Gaussian Bayesian Estimation

    Nonlinear filtering is the problem of estimating the state of a stochastic nonlinear dynamical system using noisy observations. It is well known that the posterior state estimates in nonlinear problems may assume non-Gaussian multimodal probability densities. We first present an unscented Kalman-particle hybrid filtering framework for tracking the three-dimensional motion of a space object. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse, without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. The performance of the hybrid filtering approach is assessed by simulating two test cases of space objects that are assumed to undergo full three-dimensional orbital motion.

    Having established its performance in the space object tracking problem, we extend the hybrid approach to the general multimodal estimation problem. We propose a particle Gaussian mixture-I (PGM-I) filter for nonlinear estimation that is free of the particle depletion problem inherent to most particle filters. The PGM-I filter employs an ensemble of randomly sampled states for the propagation of the state probability density. A Gaussian mixture model (GMM) of the propagated PDF is then recovered by clustering the ensemble, and the posterior density is obtained through a Kalman measurement update of the mixture modes. We prove the convergence in probability of the resultant density to the true filter density, assuming exponential forgetting of initial conditions by the true filter. The PGM-I filter can handle non-Gaussianity of the state PDF arising from the dynamics, initial conditions or process noise. A more general estimation scheme, the PGM-II filter, which can also handle non-Gaussianity in the measurement update, is considered next. The PGM-II filter employs a parallel Markov chain Monte Carlo (MCMC) method to sample from the posterior PDF. The PGM-II filter update is asymptotically exact and does not enforce any assumptions on the number of Gaussian modes.

    We test the performance of the PGM filters on a number of benchmark filtering problems chosen from recent literature. The results indicate that the PGM filters can perform on par with or better than other general purpose nonlinear filters such as the feedback particle filter (FPF) and the log-homotopy-based particle flow filters. Based on the results, we derive guidelines on the choice between the PGM-I and PGM-II filters. Furthermore, we conceive an extension of the PGM-I filter, the augmented PGM-I filter, for handling the nonlinear/non-Gaussian measurement update without incurring a large computational penalty. A preliminary design for a decentralized PGM-I filter for the distributed estimation problem is also obtained. Finally, we conduct a more detailed study of the performance of the parallel MCMC algorithm. Running several parallel Markov chains is found to yield significant computational savings in sampling problems involving multimodal target densities, and the parallel MCMC method can also be used to solve global optimization problems.
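To make the propagate-cluster-update cycle described above concrete, here is a minimal sketch of one PGM-I-style step, assuming k-means clustering and an EKF-style Kalman update of each mixture mode. The function names (f, h, h_jac), the clustering choice, and all parameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a single PGM-I-style filter cycle: propagate ensemble -> cluster
# into a GMM -> Kalman-update each mode. Illustrative only.
import numpy as np
from scipy.cluster.vq import kmeans2

def pgm1_step(particles, z, f, h, h_jac, Q, R, n_modes=3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = particles.shape[1]

    # 1) Propagate the ensemble through the nonlinear dynamics f with process noise Q.
    particles = np.array([f(x) for x in particles])
    particles += rng.multivariate_normal(np.zeros(d), Q, size=len(particles))

    # 2) Recover a Gaussian mixture model by clustering the propagated ensemble.
    _, labels = kmeans2(particles, n_modes, minit='++')
    modes = []
    for k in range(n_modes):
        members = particles[labels == k]
        if len(members) < 2:                      # skip empty/degenerate clusters
            continue
        mean = members.mean(axis=0)
        cov = np.cov(members.T) + 1e-9 * np.eye(d)
        modes.append((mean, cov, len(members) / len(particles)))

    # 3) Kalman (EKF-style) measurement update of each mixture mode.
    post = []
    for m, P, w in modes:
        H = h_jac(m)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        innov = z - h(m)
        m_post = m + K @ innov
        P_post = (np.eye(d) - K @ H) @ P
        # Re-weight each mode by the Gaussian likelihood of its innovation.
        lw = -0.5 * (innov @ np.linalg.solve(S, innov)
                     + np.log(np.linalg.det(2 * np.pi * S)))
        post.append((m_post, P_post, w * np.exp(lw)))

    total = sum(w for _, _, w in post)
    # The next cycle would resample the ensemble from this posterior mixture.
    return [(m, P, w / total) for m, P, w in post]
```

In this sketch the measurement update is linearized per mode; the PGM-II variant described in the abstract would instead sample the posterior with parallel MCMC rather than assume a Gaussian update.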

    Analysis of error propagation in particle filters with approximation

    This paper examines the impact of approximation steps that become necessary when particle filters are implemented on resource-constrained platforms. We consider particle filters that perform intermittent approximation, either by subsampling the particles or by generating a parametric approximation. For such algorithms, we derive time-uniform bounds on the weak-sense $L_p$ error and present associated exponential inequalities. We motivate the theoretical analysis by considering the leader node particle filter and present numerical experiments exploring its performance and the relationship to the error bounds. Comment: Published at http://dx.doi.org/10.1214/11-AAP760 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
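The two approximation modes the abstract mentions (subsampling the particle cloud and replacing it by a parametric fit) can be illustrated with the following minimal sketch, which is an assumed toy bootstrap filter, not the paper's algorithm; the model functions, noise levels, and the approximation period are illustrative.

```python
# Hedged sketch of a bootstrap particle filter with intermittent approximation:
# every `period` steps the particle set is either subsampled or collapsed to a
# Gaussian parametric approximation and re-drawn. Illustrative assumptions only.
import numpy as np

def approx_pf(z_seq, f, h, Q_std, R_std, n=1000, m=100, period=10, mode="subsample"):
    rng = np.random.default_rng(0)
    x = rng.normal(size=n)                                   # scalar state, toy prior
    est = []
    for t, z in enumerate(z_seq):
        x = f(x) + Q_std * rng.normal(size=x.size)           # propagate
        w = np.exp(-0.5 * ((z - h(x)) / R_std) ** 2)         # weight by likelihood
        w /= w.sum()
        est.append(np.sum(w * x))                            # posterior-mean estimate
        x = x[rng.choice(x.size, size=n, p=w)]               # multinomial resampling
        if (t + 1) % period == 0:                            # intermittent approximation
            if mode == "subsample":
                x = x[rng.choice(n, size=m, replace=False)]  # keep a small random subset
                x = x[rng.choice(m, size=n)]                 # ...then replenish to n
            else:                                            # parametric (Gaussian) approx.
                x = rng.normal(x.mean(), x.std(), size=n)
    return np.array(est)

# Usage sketch on a toy scalar model with direct noisy observations.
rng = np.random.default_rng(1)
zs = 0.9 ** np.arange(50) + 0.5 * rng.normal(size=50)
print(approx_pf(zs, lambda x: 0.9 * x, lambda x: x, Q_std=0.3, R_std=0.5)[-5:])
```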

    Sequential Kernel Herding: Frank-Wolfe Optimization for Particle Filtering

    Recently, the Frank-Wolfe optimization algorithm was suggested as a procedure to obtain adaptive quadrature rules for integrals of functions in a reproducing kernel Hilbert space (RKHS), with a potentially faster rate of convergence than Monte Carlo integration ("kernel herding" was shown to be a special case of this procedure). In this paper, we propose to replace the random sampling step in a particle filter by Frank-Wolfe optimization. By optimizing the positions of the particles, we can obtain better accuracy than random or quasi-Monte Carlo sampling. In applications where the evaluation of the emission probabilities is expensive (such as in robot localization), the additional computational cost of generating the particles through optimization can be justified. Experiments on standard synthetic examples as well as on a robot localization task indeed indicate an improvement in accuracy over random and quasi-Monte Carlo sampling. Comment: In 18th International Conference on Artificial Intelligence and Statistics (AISTATS), May 2015, San Diego, United States. Vol. 38, JMLR Workshop and Conference Proceedings.
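As a rough illustration of the herding idea behind this approach, the sketch below greedily picks particle locations from a large candidate pool so that their empirical kernel mean tracks the pool's mean embedding; this is a simplified, assumed version of kernel herding (a special case of Frank-Wolfe in an RKHS), and the RBF kernel, bandwidth, and pool are illustrative, not the paper's setup.

```python
# Hedged sketch of kernel herding used to choose particle positions from a candidate
# pool of samples. Simplified (selection without replacement); illustrative only.
import numpy as np

def rbf(a, b, gamma=1.0):
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.sum(d * d, axis=-1))

def herd_particles(pool, n_particles, gamma=1.0):
    """Greedily select n_particles points from `pool` (samples of the target)."""
    K = rbf(pool, pool, gamma)
    mu = K.mean(axis=1)                    # empirical mean embedding mu_p(x_i)
    chosen, herd_sum = [], np.zeros(len(pool))
    for t in range(n_particles):
        # Herding / Frank-Wolfe step: maximize mu_p(x) minus the current herd average.
        score = mu - herd_sum / (t + 1)
        score[chosen] = -np.inf            # do not re-pick a point, for simplicity
        j = int(np.argmax(score))
        chosen.append(j)
        herd_sum += K[:, j]
    return pool[chosen]

# Usage sketch: summarize a bimodal sample cloud with 50 herded particles.
rng = np.random.default_rng(1)
pool = np.concatenate([rng.normal(-2, 1, (1000, 1)), rng.normal(3, 0.5, (1000, 1))])
particles = herd_particles(pool, 50, gamma=0.5)
```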

    Belief Consensus Algorithms for Fast Distributed Target Tracking in Wireless Sensor Networks

    In distributed target tracking for wireless sensor networks, agreement on the target state can be achieved by constructing and maintaining a communication path in order to exchange information regarding local likelihood functions. Such an approach lacks robustness to failures and is not easily applicable to ad-hoc networks. To address this, several methods have been proposed that allow agreement on the global likelihood through fully distributed belief consensus (BC) algorithms operating on the local likelihoods in distributed particle filtering (DPF). However, a unified comparison of their convergence speed and communication cost has not been performed. In this paper, we provide such a comparison and propose a novel BC algorithm based on belief propagation (BP). According to our study, DPF based on Metropolis belief consensus (MBC) is the fastest in loopy graphs, while DPF based on BP consensus is the fastest in tree graphs. Moreover, we find that BC-based DPF methods have lower communication overhead than data flooding when the network is sufficiently sparse.
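To show the flavor of Metropolis belief consensus mentioned above, the sketch below iterates neighbour-only averaging of local log-likelihoods with Metropolis weights, so every node converges toward the network-wide average (and hence, scaled by the number of nodes, the global log-likelihood). The graph and values are illustrative assumptions, not the paper's experiments.

```python
# Hedged sketch of Metropolis belief consensus (MBC) over local log-likelihoods.
import numpy as np

def metropolis_weights(adj):
    """Doubly-stochastic Metropolis weight matrix for an undirected graph."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def belief_consensus(local_loglik, adj, iters=50):
    W = metropolis_weights(adj)
    x = np.array(local_loglik, dtype=float)
    for _ in range(iters):
        x = W @ x                 # one round of exchanges with neighbours only
    # Each node now holds approx. the average; scaling by N gives the global
    # log-likelihood (the product of local likelihoods in log space).
    return x * len(x)

# Usage sketch on a 4-node ring: every node recovers the sum of local log-likelihoods.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
print(belief_consensus([-1.2, -0.4, -2.0, -0.7], adj))
```

Because the Metropolis weight matrix is symmetric and doubly stochastic, the iteration converges to the exact average on any connected graph; convergence speed, which the paper compares across BC variants, depends on the graph's spectral gap.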