
    Bayesian Filtering With Random Finite Set Observations

    This paper presents a novel and mathematically rigorous Bayes’ recursion for tracking a target that generates multiple measurements, with a state-dependent sensor field of view and clutter. Our Bayesian formulation is mathematically well-founded due to our use of a consistent likelihood function derived from random finite set theory. It is established that, under certain assumptions, the proposed Bayes’ recursion reduces to the cardinalized probability hypothesis density (CPHD) recursion for a single target. A particle implementation of the proposed recursion is given. Under linear Gaussian and constant sensor field of view assumptions, an exact closed-form solution to the proposed recursion is derived, and efficient implementations are given. Extensions of the closed-form recursion to accommodate mild nonlinearities are also given using linearization and unscented transforms.
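    A rough sketch of the kind of particle implementation the abstract mentions is given below: a generic sequential importance resampling step in which the measurement-set likelihood is supplied by the caller (for instance, one derived from random-finite-set theory as in the paper). The linear-Gaussian motion model, function names, and resampling threshold are illustrative assumptions, not the paper's actual construction.

```python
# Minimal particle (SIR) sketch of a Bayes recursion with a user-supplied
# measurement-set likelihood. All names and the motion model are illustrative.
import numpy as np

def particle_filter_step(particles, weights, Z, rng, F, Q_chol, likelihood_rfs):
    """One predict/update/resample cycle.

    particles : (N, d) array of state samples
    weights   : (N,) normalized weights
    Z         : received measurement set (possibly several returns plus clutter)
    Q_chol    : lower Cholesky factor of the process-noise covariance
    likelihood_rfs(Z, x) -> scalar likelihood of the set Z given state x
    """
    N, d = particles.shape
    # Predict: propagate each particle through the motion model x' = F x + noise.
    particles = particles @ F.T + rng.standard_normal((N, d)) @ Q_chol.T
    # Update: reweight by the (set-valued) measurement likelihood.
    weights = weights * np.array([likelihood_rfs(Z, x) for x in particles])
    weights /= weights.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights**2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights
```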

    Particle Learning for General Mixtures

    This paper develops particle learning (PL) methods for the estimation of general mixture models. The approach is distinguished from alternative particle filtering methods in two major ways. First, each iteration begins by resampling particles according to the posterior predictive probability, leading to a more efficient set for propagation. Second, each particle tracks only the "essential state vector", leading to reduced-dimensional inference. In addition, we describe how the approach applies to more general mixture models of current interest in the literature; it is hoped that this will inspire a greater number of researchers to adopt sequential Monte Carlo methods for fitting their sophisticated mixture-based models. Finally, we show that PL leads to straightforward tools for marginal likelihood calculation and posterior cluster allocation.
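    The resample/propagate pattern described above can be sketched as follows; the predictive and allocation-update functions are placeholders for a specific conjugate mixture model and are not taken from the paper.

```python
# Illustrative sketch of one particle-learning iteration: resample particles
# by posterior predictive weight, then propagate only an "essential state
# vector" (e.g. component counts and sufficient statistics) per particle.
import numpy as np

def pl_step(particles, y, rng, predictive, allocate_and_update):
    """One PL iteration for a new observation y.

    particles : list of essential state vectors (counts + sufficient stats)
    predictive(y, s) -> p(y | s), marginal predictive under particle s
    allocate_and_update(y, s, rng) -> updated essential state vector after
        sampling y's component allocation and folding it into the statistics
    """
    N = len(particles)
    # 1. Resample particles proportional to the posterior predictive of y.
    w = np.array([predictive(y, s) for s in particles])
    idx = rng.choice(N, size=N, p=w / w.sum())
    # 2. Propagate each resampled particle by sampling the allocation of y
    #    and updating its sufficient statistics.
    return [allocate_and_update(y, particles[i], rng) for i in idx]
```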

    Deterministic Mean-field Ensemble Kalman Filtering

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland et al. (2011) is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to the standard EnKF when the dimension d < 2κ. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from the non-linearity/non-Gaussianity of the model, which exists for both the DMFEnKF and the standard EnKF. Numerical results support and extend the theory.
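    For reference, a standard stochastic EnKF analysis step, whose mean-field limit the paper studies, can be sketched as follows; the linear observation operator and Gaussian observation noise are assumptions of this sketch rather than statements about the paper's setting.

```python
# Standard (stochastic, perturbed-observation) EnKF analysis step, written
# here only to fix notation; H is a linear observation operator, R the
# observation-noise covariance.
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """ensemble: (N, d) forecast members; y: observed vector of length m."""
    N, d = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)        # ensemble anomalies
    C = X.T @ X / (N - 1)                       # sample forecast covariance
    S = H @ C @ H.T + R                         # innovation covariance
    K = np.linalg.solve(S, H @ C).T             # Kalman gain C H^T S^{-1}
    # Perturbed observations give each member its own innovation.
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (Y - ensemble @ H.T) @ K.T
```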

    A Bayesian framework for functional time series analysis

    The paper introduces a general framework for the statistical analysis of functional time series from a Bayesian perspective. The proposed approach, based on an extension of the popular dynamic linear model to Banach-space-valued observations and states, is very flexible yet easy to implement in many cases. For many kinds of data, such as continuous functions, we show how the general theory of stochastic processes provides a convenient tool to specify priors and transition probabilities of the model. Finally, we show how standard Markov chain Monte Carlo methods for posterior simulation can be employed under consistent discretizations of the data.
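    A minimal illustration of the discretization idea: once a functional observation is evaluated on a finite grid, the Banach-space dynamic linear model reduces to an ordinary vector DLM, and a single filtering step looks as below. The matrices and notation are illustrative assumptions, not taken from the paper.

```python
# One forecast/update step of a discretized dynamic linear model
# y_t = F x_t + v_t,  x_t = G x_{t-1} + w_t,  evaluated on a finite grid.
import numpy as np

def dlm_kalman_step(m, C, y_grid, F, G, W, V):
    """m, C: posterior mean/covariance of the state at time t-1;
    y_grid: functional observation evaluated on the grid at time t."""
    a, R = G @ m, G @ C @ G.T + W                 # state forecast
    f, Q = F @ a, F @ R @ F.T + V                 # observation forecast
    K = R @ F.T @ np.linalg.inv(Q)                # Kalman gain
    return a + K @ (y_grid - f), R - K @ Q @ K.T  # filtered mean/covariance
```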

    Fast MCMC sampling for Markov jump processes and extensions

    Markov jump processes (or continuous-time Markov chains) are a simple and important class of continuous-time dynamical systems. In this paper, we tackle the problem of simulating from the posterior distribution over paths in these models, given partial and noisy observations. Our approach is an auxiliary-variable Gibbs sampler based on the idea of uniformization. This sets up a Markov chain over paths by alternately sampling a finite set of virtual jump times given the current path, and then sampling a new path given the set of extant and virtual jump times, using a standard hidden Markov model forward filtering-backward sampling algorithm. Our method is exact and does not involve approximations such as time discretization. We demonstrate how our sampler extends naturally to MJP-based models such as Markov-modulated Poisson processes and continuous-time Bayesian networks, and show significant computational benefits over state-of-the-art MCMC samplers for these models. (Accepted at the Journal of Machine Learning Research.)
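    A hedged sketch of one Gibbs sweep of a uniformization-based sampler of this kind is given below, for a homogeneous MJP with rate matrix A on [0, T]: first lay down virtual jump times by thinning a Poisson process against the current path, then run discrete-time forward filtering-backward sampling over the resulting grid with transition matrix B = I + A/Ω. The per-state observation log-likelihood function, the uniform initial distribution, and the choice of uniformization constant are assumptions of this sketch.

```python
# Sketch of one uniformization-based Gibbs sweep for a homogeneous MJP.
import numpy as np

def gibbs_sweep(path_times, path_states, A, obs_loglik, rng, kappa=2.0):
    """path_times: current grid with path_times[0] = 0 and path_times[-1] = T;
    path_states[i] is the state held on [path_times[i], path_times[i+1]).
    obs_loglik(times) -> (len(times), n) per-state log-likelihoods (assumed).
    """
    n = A.shape[0]
    Omega = kappa * np.max(-np.diag(A))            # uniformization constant
    B = np.eye(n) + A / Omega                      # uniformized transition matrix

    # (1) Sample virtual jumps: along the current path, in state s candidate
    # events arrive at rate Omega and are kept with prob. 1 + A[s, s] / Omega.
    grid = [0.0]
    for t0, t1, s in zip(path_times[:-1], path_times[1:], path_states):
        k = rng.poisson(Omega * (t1 - t0))
        cand = np.sort(rng.uniform(t0, t1, size=k))
        keep = rng.random(k) < 1.0 + A[s, s] / Omega
        grid.extend(cand[keep])
        grid.append(t1)                            # extant jump times are kept
    grid = np.array(grid)

    # (2) Forward filter over the grid (uniform initial distribution assumed),
    # then backward-sample a new discrete state sequence.
    m = len(grid)
    lik = np.exp(obs_loglik(grid))                 # shape (m, n)
    alphas = [np.full(n, 1.0 / n) * lik[0]]
    alphas[0] /= alphas[0].sum()
    for t in range(1, m):
        a = (alphas[-1] @ B) * lik[t]
        alphas.append(a / a.sum())
    states = np.empty(m, dtype=int)
    states[-1] = rng.choice(n, p=alphas[-1])
    for t in range(m - 2, -1, -1):
        w = alphas[t] * B[:, states[t + 1]]
        states[t] = rng.choice(n, p=w / w.sum())

    # Drop self-transitions to recover the new (thinned) MJP trajectory.
    keep = np.concatenate(([True], states[1:] != states[:-1]))
    return grid[keep], states[keep]
```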