
    Algorithm design for parallel implementation of the SMC-PHD filter

    The sequential Monte Carlo (SMC) implementation of the probability hypothesis density (PHD) filter suffers from low computational efficiency because a large number of particles are often required, especially when there are many targets and dense clutter. To speed up the computation, an algorithmic framework for parallel SMC-PHD filtering based on multiple processors is proposed. The algorithm fully parallelizes all four steps of the SMC-PHD filter and distributes the computational load approximately equally among the parallel processors, yielding a high parallelization benefit when there are multiple targets and dense clutter. The parallelization is theoretically unbiased as it provides the same result as the serial implementation, without introducing any approximation. Experiments on multi-core computers have demonstrated that our parallel implementation achieves considerable speedup compared to the serial implementation of the same algorithm.
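    The abstract above describes a data-parallel decomposition of the SMC-PHD recursion. As a rough illustration only, the Python sketch below splits the particle set into equal chunks and runs a simplified prediction and weight-update step per chunk in separate processes; the transition model, the pseudo-likelihood, and the names (_predict_and_weight, parallel_predict_update) are hypothetical placeholders, and the paper's load-balanced, unbiased parallelization of all four steps (including resampling) is not reproduced here.

        import numpy as np
        from multiprocessing import Pool

        def _predict_and_weight(args):
            # Hypothetical per-chunk work: random-walk propagation followed by a
            # Gaussian pseudo-likelihood against all measurements. A real SMC-PHD
            # weight update also involves detection probabilities and clutter terms.
            # (A production version would also give each worker its own RNG seed.)
            particles, weights, measurements = args
            particles = particles + np.random.normal(0.0, 0.1, size=particles.shape)
            dists = np.linalg.norm(measurements[None, :, :] - particles[:, None, :], axis=2)
            weights = weights * np.exp(-0.5 * dists**2).sum(axis=1)
            return particles, weights

        def parallel_predict_update(particles, weights, measurements, n_workers=4):
            # Equal-sized chunks give each worker a similar computational load.
            xs = np.array_split(particles, n_workers)
            ws = np.array_split(weights, n_workers)
            with Pool(n_workers) as pool:
                out = pool.map(_predict_and_weight,
                               [(x, w, measurements) for x, w in zip(xs, ws)])
            # In the SMC-PHD filter the weights sum to the expected number of
            # targets rather than to one, so no normalisation is applied here.
            return np.concatenate([o[0] for o in out]), np.concatenate([o[1] for o in out])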

    Parallel resampling in the particle filter

    Modern parallel computing devices, such as the graphics processing unit (GPU), have gained significant traction in scientific and statistical computing. They are particularly well-suited to data-parallel algorithms such as the particle filter, or more generally Sequential Monte Carlo (SMC), which are increasingly used in statistical inference. SMC methods carry a set of weighted particles through repeated propagation, weighting and resampling steps. The propagation and weighting steps are straightforward to parallelise, as they require only independent operations on each particle. The resampling step is more difficult, as standard schemes require a collective operation, such as a sum, across particle weights. Focusing on this resampling step, we analyse two alternative schemes that do not involve a collective operation (Metropolis and rejection resamplers), and compare them to standard schemes (multinomial, stratified and systematic resamplers). We find that, in certain circumstances, the alternative resamplers can perform significantly faster on a GPU, and to a lesser extent on a CPU, than the standard approaches. Moreover, in single precision, the standard approaches are numerically biased for upwards of hundreds of thousands of particles, while the alternatives are not. This is particularly important given greater single- than double-precision throughput on modern devices, and the consequent temptation to use single precision with a greater number of particles. Finally, we provide auxiliary functions useful for implementation, such as for the permutation of ancestry vectors to enable in-place propagation.
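    The Metropolis resampler analysed in the abstract can be sketched in a few lines: each ancestor index is produced by a short, independent Metropolis chain over particle indices that targets the weight distribution, so no sum or prefix-sum over the weights is required and every chain can run in parallel. The NumPy sketch below is illustrative rather than the authors' reference implementation; the step count n_steps is a tuning parameter, and the result is only approximately multinomial for finite n_steps.

        import numpy as np

        def metropolis_resample(weights, n_steps, rng=None):
            # One Metropolis chain per output index; the bias shrinks as n_steps grows.
            rng = np.random.default_rng() if rng is None else rng
            weights = np.asarray(weights, dtype=float)
            n = weights.size
            ancestors = np.arange(n)
            for _ in range(n_steps):
                proposals = rng.integers(0, n, size=n)   # uniform proposal for every chain
                u = rng.random(n)
                # accept with probability min(1, w[proposal] / w[current])
                accept = u * weights[ancestors] <= weights[proposals]
                ancestors = np.where(accept, proposals, ancestors)
            return ancestors

    A typical use is ancestors = metropolis_resample(w, n_steps=20) followed by particles = particles[ancestors], optionally permuting the ancestry vector so that propagation can be done in place, as the abstract mentions.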

    Regional variance for multi-object filtering

    Recent progress in multi-object filtering has led to algorithms that compute the first-order moment of multi-object distributions based on sensor measurements. The number of targets in arbitrarily selected regions can be estimated from this first-order moment. In this work, we introduce explicit formulae for the computation of the second-order statistic of the target number. The proposed concept of regional variance quantifies the level of confidence in target-number estimates for arbitrary regions and facilitates information-based decisions. We provide algorithms for its computation for the Probability Hypothesis Density (PHD) and the Cardinalized Probability Hypothesis Density (CPHD) filters. We demonstrate the behaviour of the regional statistics through simulation examples.
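    For reference, the first-order relation the abstract relies on is standard: the expected number of targets of the multi-object state $X_k$ in an arbitrary region $R$ is the integral of the intensity function (PHD) $D_k$ over $R$,

        \hat{N}_k(R) \;=\; \mathbb{E}\bigl[\,|X_k \cap R|\,\bigr] \;=\; \int_{R} D_k(x)\,\mathrm{d}x .

    The paper's contribution is the corresponding second-order statistic, the regional variance of the target number in $R$; its explicit formulae for the PHD and CPHD filters are specific to the paper and are not reproduced here.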

    A track-before-detect labelled multi-Bernoulli particle filter with label switching

    This paper presents a multitarget tracking particle filter (PF) for general track-before-detect measurement models. The PF is presented in the random finite set framework and uses a labelled multi-Bernoulli approximation. We also present a label switching improvement algorithm based on Markov chain Monte Carlo that is expected to increase filter performance when targets remain in close proximity for a sufficiently long time. The PF is tested in two challenging numerical examples. Accepted for publication in IEEE Transactions on Aerospace and Electronic Systems.
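    As a loose illustration of the label switching idea (not the paper's algorithm, which operates on the labelled multi-Bernoulli particle representation), the sketch below shows only the accept/reject mechanics of a generic Metropolis-Hastings label-swap move for a user-supplied log_posterior over label assignments; all names are hypothetical.

        import numpy as np

        def label_swap_move(labels, log_posterior, rng=None):
            # Propose swapping two randomly chosen labels; the proposal is symmetric,
            # so the acceptance probability is min(1, posterior(proposal) / posterior(current)).
            rng = np.random.default_rng() if rng is None else rng
            i, j = rng.choice(len(labels), size=2, replace=False)
            proposal = np.array(labels)          # copy of the current labelling
            proposal[i], proposal[j] = proposal[j], proposal[i]
            if np.log(rng.random()) < log_posterior(proposal) - log_posterior(labels):
                return proposal
            return labels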

    Bayesian subset simulation

    We consider the problem of estimating a probability of failure $\alpha$, defined as the volume of the excursion set of a function $f:\mathbb{X} \subseteq \mathbb{R}^{d} \to \mathbb{R}$ above a given threshold, under a given probability measure on $\mathbb{X}$. In this article, we combine the popular subset simulation algorithm (Au and Beck, Probab. Eng. Mech. 2001) and our sequential Bayesian approach for the estimation of a probability of failure (Bect, Ginsbourger, Li, Picheny and Vazquez, Stat. Comput. 2012). This makes it possible to estimate $\alpha$ when the number of evaluations of $f$ is very limited and $\alpha$ is very small. The resulting algorithm is called Bayesian subset simulation (BSS). A key idea, as in the subset simulation algorithm, is to estimate the probabilities of a sequence of excursion sets of $f$ above intermediate thresholds, using a sequential Monte Carlo (SMC) approach. A Gaussian process prior on $f$ is used to define the sequence of densities targeted by the SMC algorithm, and to drive the selection of evaluation points of $f$ to estimate the intermediate probabilities. Adaptive procedures are proposed to determine the intermediate thresholds and the number of evaluations to be carried out at each stage of the algorithm. Numerical experiments illustrate that BSS achieves significant savings in the number of function evaluations with respect to other Monte Carlo approaches.
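    The decomposition BSS inherits from subset simulation (Au and Beck, 2001) writes the small probability as a product of larger conditional probabilities over an increasing sequence of thresholds, each of which is estimated by one SMC stage; the Gaussian-process-driven choice of evaluation points and the adaptive threshold selection are the paper's additions and are not shown here.

        \alpha \;=\; \mathbb{P}\bigl(f(X) > u\bigr)
               \;=\; \mathbb{P}\bigl(f(X) > u_1\bigr)\,
                     \prod_{k=2}^{m} \mathbb{P}\bigl(f(X) > u_k \,\big|\, f(X) > u_{k-1}\bigr),
        \qquad u_1 < u_2 < \cdots < u_m = u .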