Parallel resampling in the particle filter
Modern parallel computing devices, such as the graphics processing unit
(GPU), have gained significant traction in scientific and statistical
computing. They are particularly well-suited to data-parallel algorithms such
as the particle filter, or more generally Sequential Monte Carlo (SMC), which
are increasingly used in statistical inference. SMC methods carry a set of
weighted particles through repeated propagation, weighting and resampling
steps. The propagation and weighting steps are straightforward to parallelise,
as they require only independent operations on each particle. The resampling
step is more difficult, as standard schemes require a collective operation,
such as a sum, across particle weights. Focusing on this resampling step, we
analyse two alternative schemes that do not involve a collective operation
(Metropolis and rejection resamplers), and compare them to standard schemes
(multinomial, stratified and systematic resamplers). We find that, in certain
circumstances, the alternative resamplers can perform significantly faster on a
GPU, and to a lesser extent on a CPU, than the standard approaches. Moreover,
in single precision, the standard approaches are numerically biased for upwards
of hundreds of thousands of particles, while the alternatives are not. This is
particularly important given greater single- than double-precision throughput
on modern devices, and the consequent temptation to use single precision with a
greater number of particles. Finally, we provide auxiliary functions useful for
implementation, such as for the permutation of ancestry vectors to enable
in-place propagation. (Comment: 21 pages, 6 figures)
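The contrast between collective-operation and collective-free resamplers can be made concrete with a short sketch. The following Python/NumPy code is a minimal, illustrative Metropolis resampler in which each particle chooses its ancestor using only pairwise weight ratios, so no sum over all weights is needed; the iteration count B and the serial outer loop are simplifications (on a GPU each particle's loop would run in its own thread), and the value of B is an assumption for illustration rather than a recommendation from the paper.

```python
import numpy as np

def metropolis_resample(weights, B=32, rng=None):
    """Illustrative Metropolis resampler: each particle draws its ancestor
    using only ratios of pairs of weights, so no collective sum is required.
    B trades off bias against speed."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    ancestors = np.empty(n, dtype=int)
    for i in range(n):                  # on a GPU, each i would be one thread
        k = i
        for _ in range(B):
            j = rng.integers(n)         # propose a candidate ancestor
            if rng.random() * weights[k] <= weights[j]:
                k = j                   # accept with probability min(1, w_j / w_k)
        ancestors[i] = k
    return ancestors

# Example: resample 8 particles with unnormalised weights
w = np.array([0.1, 2.0, 0.5, 1.2, 0.05, 0.9, 1.5, 0.3])
print(metropolis_resample(w, B=16))
```

By contrast, the standard multinomial, stratified and systematic schemes all need the cumulative sum of the weights, which is the collective operation the paper seeks to avoid.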
A Parallel Histogram-based Particle Filter for Object Tracking on SIMD-based Smart Cameras
We present a parallel implementation of a histogram-based particle filter for object tracking on smart cameras based on SIMD processors. We specifically focus on parallel computation of the particle weights and parallel construction of the feature histograms, since these are the major bottlenecks in standard implementations of histogram-based particle filters. The proposed algorithm can be applied with any histogram-based feature set—we show in detail how the parallel particle filter can employ simple color histograms as well as more complex histograms of oriented gradients (HOG). The algorithm was successfully implemented on an SIMD processor and performs robust object tracking at up to 30 frames per second—a performance difficult to achieve even on a modern desktop computer.
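As a rough illustration of the weighting step that this implementation parallelises, the sketch below scores particles by the Bhattacharyya similarity between a patch colour histogram and a reference histogram; this similarity measure and the parameter lam are common choices for histogram-based trackers but are assumptions here, not necessarily the paper's exact formulation.

```python
import numpy as np

def patch_histogram(image, cx, cy, w, h, bins=8):
    """Flattened RGB histogram of the patch centred at (cx, cy)."""
    x0, x1 = max(cx - w // 2, 0), cx + w // 2
    y0, y1 = max(cy - h // 2, 0), cy + h // 2
    patch = image[y0:y1, x0:x1].reshape(-1, 3).astype(float)
    hist, _ = np.histogramdd(patch, bins=(bins,) * 3, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / max(hist.sum(), 1.0)

def particle_weights(image, particles, ref_hist, lam=20.0):
    """Weight each particle by the Bhattacharyya coefficient between its
    patch histogram and the reference (target) histogram. Each iteration is
    independent, which is what makes this step amenable to SIMD parallelism."""
    weights = np.empty(len(particles))
    for i, (cx, cy, w, h) in enumerate(particles):
        hist = patch_histogram(image, cx, cy, w, h)
        bc = np.sum(np.sqrt(hist * ref_hist))      # Bhattacharyya coefficient
        weights[i] = np.exp(-lam * (1.0 - bc))
    return weights / weights.sum()
```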
Forest resampling for distributed sequential Monte Carlo
This paper brings explicit considerations of distributed computing
architectures and data structures into the rigorous design of Sequential Monte
Carlo (SMC) methods. A theoretical result established recently by the authors
shows that adapting interaction between particles to suitably control the
Effective Sample Size (ESS) is sufficient to guarantee stability of SMC
algorithms. Our objective is to leverage this result and devise algorithms
which are thus guaranteed to work well in a distributed setting. We make three
main contributions to achieve this. Firstly, we study mathematical properties
of the ESS as a function of matrices and graphs that parameterize the
interaction amongst particles. Secondly, we show how these graphs can be
induced by tree data structures which model the logical network topology of an
abstract distributed computing environment. Thirdly, we present efficient
distributed algorithms that achieve the desired ESS control, perform resampling
and operate on forests associated with these trees.
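For reference, the quantity that drives the adaptive interaction is the Effective Sample Size; the sketch below shows the standard ESS of a weighted particle set and the usual threshold rule for triggering resampling. The paper's contribution, generalising the ESS to interaction matrices and tree-structured topologies, is not reproduced here.

```python
import numpy as np

def effective_sample_size(log_weights):
    """Standard ESS of a weighted particle set: (sum w)^2 / sum w^2,
    computed from log-weights for numerical stability."""
    lw = log_weights - np.max(log_weights)   # guard against overflow
    w = np.exp(lw)
    return w.sum() ** 2 / np.sum(w ** 2)

def should_interact(log_weights, threshold=0.5):
    """Trigger interaction/resampling when the ESS drops below a fraction of N."""
    return effective_sample_size(log_weights) < threshold * len(log_weights)
```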
Massively parallel implicit equal-weights particle filter for ocean drift trajectory forecasting
Forecasting of ocean drift trajectories is important for many applications, including search and rescue operations, oil spill cleanup and iceberg risk mitigation. In an operational setting, forecasts of drift trajectories are produced based on computationally demanding forecasts of three-dimensional ocean currents. Herein, we investigate a complementary approach for shorter time scales by using the recently proposed two-stage implicit equal-weights particle filter applied to a simplified ocean model. To achieve this, we present a new algorithmic design for a data-assimilation system in which all components – including the model, model errors, and particle filter – take advantage of massively parallel compute architectures, such as graphics processing units. Faster computations can enable in-situ and ad-hoc model runs for emergency management, and larger ensembles for better uncertainty quantification. Using a challenging test case with near-realistic chaotic instabilities, we run data-assimilation experiments based on synthetic observations from drifting and moored buoys, and analyze the trajectory forecasts for the drifters. Our results show that even sparse drifter observations are sufficient to significantly improve short-term drift forecasts up to twelve hours. With equidistant moored buoys observing only 0.1% of the state space, the ensemble gives an accurate description of the true state after data assimilation, followed by a high-quality probabilistic forecast.
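As a schematic of the data-parallel structure such a system exploits, the sketch below propagates all ensemble members at once and weights them against a handful of observed state components. The names model_step, H and obs_var are hypothetical, and this plain bootstrap-style update is only a stand-in for, and much simpler than, the two-stage implicit equal-weights particle filter used in the paper.

```python
import numpy as np

def ensemble_step(states, model_step, obs, H, obs_var, rng):
    """Generic data-parallel ensemble update: propagate every member in one
    vectorised call, then weight against sparse observations (if any)."""
    states = model_step(states)                            # shape (N_members, state_dim)
    states += rng.normal(scale=1e-2, size=states.shape)    # additive model error (assumed form)
    if obs is None:                                         # no observation at this step
        return states, np.full(len(states), 1.0 / len(states))
    innov = obs - states[:, H]                              # H: indices of observed components
    log_w = -0.5 * np.sum(innov ** 2, axis=1) / obs_var     # Gaussian observation errors assumed
    w = np.exp(log_w - log_w.max())
    return states, w / w.sum()
```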
Parallelized Particle and Gaussian Sum Particle Filters for Large Scale Freeway Traffic Systems
Large-scale traffic systems require techniques able to: 1) deal with large amounts of heterogeneous data coming from different types of sensors, 2) provide robustness in the presence of sparse sensor data, 3) incorporate different models that can deal with various traffic regimes, and 4) cope with multimodal conditional probability density functions for the states. Centralized architectures often face challenges due to high communication demands. This paper develops new estimation techniques able to cope with these problems of large traffic network systems. These are Parallelized Particle Filters (PPFs) and a Parallelized Gaussian Sum Particle Filter (PGSPF) that are suitable for on-line traffic management. We show how complex probability density functions of the high dimensional traffic state can be decomposed into functions with simpler forms, and the whole estimation problem solved in an efficient way. The proposed approach is general, with limited interactions, which reduces the computational time and provides high estimation accuracy. The efficiency of the PPFs and the PGSPF is evaluated in terms of accuracy, complexity and communication demands, and compared with the case where all processing is centralized.
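The decomposition idea, with a local filter per road segment and limited interaction between neighbours, can be illustrated with a toy sketch. Everything below (the random-walk segment dynamics, the Gaussian sensor model, the inflow coupling) is an illustrative placeholder, not the paper's traffic model, PPFs or PGSPF.

```python
import numpy as np

class SegmentPF:
    """Toy particle filter for one freeway segment, tracking vehicle density."""
    def __init__(self, n_particles, rng):
        self.rng = rng
        self.particles = rng.uniform(0.0, 100.0, n_particles)   # vehicles/km
        self.inflow = 0.0                                        # set by neighbour exchange

    def predict(self):
        # placeholder dynamics: random walk plus inflow from the upstream segment
        self.particles += self.inflow + self.rng.normal(0.0, 2.0, len(self.particles))

    def update(self, z, obs_std=5.0):
        # weight particles by a Gaussian likelihood of the local sensor reading z
        w = np.exp(-0.5 * ((z - self.particles) / obs_std) ** 2)
        w /= w.sum()
        idx = self.rng.choice(len(self.particles), len(self.particles), p=w)
        self.particles = self.particles[idx]                     # local resampling only

    def estimate(self):
        return self.particles.mean()

rng = np.random.default_rng(0)
segments = [SegmentPF(500, rng) for _ in range(4)]
measurements = [42.0, 55.0, 60.0, 30.0]
for pf, z in zip(segments, measurements):            # independent: can run on separate processors
    pf.predict()
    pf.update(z)
for up, down in zip(segments[:-1], segments[1:]):    # limited interaction between neighbours
    down.inflow = 0.1 * up.estimate()
print([round(pf.estimate(), 1) for pf in segments])
```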
Acceleration of MCMC-based algorithms using reconfigurable logic
Monte Carlo (MC) methods such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) have emerged as popular tools to sample from high dimensional probability distributions. Because these algorithms can draw samples effectively from arbitrary distributions in Bayesian inference
problems, they have been widely used in a range of statistical applications. However, they are often too time-consuming due to prohibitively costly likelihood evaluations, and thus cannot be practically applied to complex models with large-scale datasets. Currently, the lack of sufficiently fast MCMC methods limits their applicability in many modern applications such as genetics and machine
learning, and this situation is bound to get worse given the increasing adoption of big data in many fields. The objective of this dissertation is to develop, design and build efficient hardware architectures for MCMC-based algorithms on Field Programmable Gate Arrays (FPGAs), and thereby bring them closer to practical applications. The contributions of this work include: 1) Novel parallel FPGA architectures of the state-of-the-art resampling algorithms for SMC methods. The proposed architectures allow for parallel implementations and thus improve the processing speed. 2) A novel mixed precision MCMC algorithm, along with a tailored FPGA architecture. The proposed design allows for more parallelism and achieves low latency for a given set of hardware resources, while still guaranteeing unbiased estimates. 3) A new
variant of subsampling MCMC based on unequal probability sampling, along with a highly optimized FPGA architecture. The proposed method significantly reduces off-chip memory access and achieves high accuracy in estimates for a given time budget. This work has resulted in the development of hardware accelerators of MCMC and SMC for very large-scale Bayesian tasks by applying
the above techniques. Notable speed improvements compared to the respective state-of-the-art CPU and GPU implementations have been achieved in this work.
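For orientation, the sketch below is a plain random-walk Metropolis-Hastings loop in Python; its per-iteration cost is dominated by the full-data likelihood evaluation, which is the step the FPGA architectures (and the mixed-precision and subsampling variants) target. The Gaussian mean model and the flat prior are arbitrary stand-ins, not the dissertation's benchmark problems.

```python
import numpy as np

def log_likelihood(theta, data):
    """Full-data log-likelihood (Gaussian mean model as a stand-in).
    For large datasets this sum over all observations is the bottleneck."""
    return -0.5 * np.sum((data - theta) ** 2)

def metropolis_hastings(data, n_iter=5000, step=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    theta = 0.0
    ll = log_likelihood(theta, data)
    samples = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.normal()        # random-walk proposal
        ll_prop = log_likelihood(prop, data)      # costly: touches every data point
        if np.log(rng.random()) < ll_prop - ll:   # accept/reject (flat prior assumed)
            theta, ll = prop, ll_prop
        samples[t] = theta
    return samples

data = np.random.default_rng(1).normal(3.0, 1.0, 100_000)
print(metropolis_hastings(data, n_iter=2000).mean())
```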
Sequential Bayesian inference for implicit hidden Markov models and current limitations
Hidden Markov models can describe time series arising in various fields of
science, by treating the data as noisy measurements of an arbitrarily complex
Markov process. Sequential Monte Carlo (SMC) methods have become standard tools
to estimate the hidden Markov process given the observations and a fixed
parameter value. We review some of the recent developments allowing the
inclusion of parameter uncertainty as well as model uncertainty. The
shortcomings of the currently available methodology are emphasised from an
algorithmic complexity perspective. The statistical objects of interest for
time series analysis are illustrated on a toy "Lotka-Volterra" model used in
population ecology. Some open challenges are discussed regarding the
scalability of the reviewed methodology to longer time series,
higher-dimensional state spaces and more flexible models. (Comment: Review article written for ESAIM: Proceedings and Surveys. 25 pages, 10 figures)
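To make the SMC machinery referred to above concrete, the sketch below is a minimal bootstrap particle filter for a generic state-space (hidden Markov) model; the transition and observation densities are illustrative placeholders supplied by the caller, not the Lotka-Volterra model discussed in the paper.

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles, transition, log_obs_density, init, rng=None):
    """Minimal bootstrap particle filter. transition(x, rng) samples x_t | x_{t-1};
    log_obs_density(y, x) returns log p(y_t | x_t) for each particle."""
    rng = np.random.default_rng() if rng is None else rng
    x = init(n_particles, rng)
    log_marginal, means = 0.0, []
    for y in ys:
        x = transition(x, rng)                           # propagate every particle
        logw = log_obs_density(y, x)                     # weight by the observation
        m = logw.max()
        w = np.exp(logw - m)
        log_marginal += m + np.log(w.mean())             # running likelihood estimate (up to constants)
        w /= w.sum()
        means.append(np.sum(w * x))                      # filtering mean at time t
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        x = x[idx]
    return np.array(means), log_marginal

# Toy linear-Gaussian example standing in for an arbitrarily complex Markov process
rng = np.random.default_rng(0)
T, sx, sy = 50, 0.5, 1.0
xs = np.cumsum(rng.normal(0, sx, T))
ys = xs + rng.normal(0, sy, T)
means, lml = bootstrap_particle_filter(
    ys, 1000,
    transition=lambda x, r: x + r.normal(0, sx, x.shape),
    log_obs_density=lambda y, x: -0.5 * ((y - x) / sy) ** 2,
    init=lambda n, r: r.normal(0, 1.0, n),
    rng=rng)
print(round(lml, 2))
```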