8 research outputs found

    Parallel Computing of Particle Filtering Algorithms for Target Tracking Applications

    Particle filtering has been a very popular method for solving nonlinear/non-Gaussian state estimation problems for more than twenty years. Particle filters (PFs) have found many applications in areas that include nonlinear filtering of noisy signals and data, especially in target tracking. However, implementing high-dimensional PFs in real time for large-scale problems is a very challenging computational task. Parallel and distributed (P&D) computing is a promising way to deal with the computational challenges of PF methods. The main goal of this dissertation is to develop, implement and evaluate computationally efficient PF algorithms for target tracking, and thereby bring them closer to practical applications. To reach this goal, a number of parallel PF algorithms are designed and implemented on different parallel hardware architectures, such as computer clusters, graphics processing units (GPUs), and field-programmable gate arrays (FPGAs). An improved PF implementation for computer clusters is proposed, the Particle Transfer Algorithm (PTA), which takes advantage of the cluster architecture and significantly outperforms existing algorithms. Also, a novel GPU PF implementation is designed that is highly efficient on GPU architectures. The proposed implementations on the different parallel computing environments are applied and tested on target tracking problems such as space object tracking, ground multitarget tracking using an image sensor, and UAV multisensor tracking. A comprehensive evaluation and comparison of the algorithms' tracking and computational performance is carried out. The simulation results demonstrate that the proposed implementations greatly help overcome the computational issues of particle filtering for realistic practical problems.
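
    A generic bootstrap (SIR) particle filter, the baseline that such parallel implementations accelerate, can be sketched in a few lines. The scalar growth model, noise levels, and particle count below are illustrative assumptions, not the dissertation's actual test problems:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500):
    """Bootstrap (SIR) particle filter for the classic scalar benchmark
    x_k = 0.5 x + 25 x / (1 + x^2) + v_k,  y_k = x_k^2 / 20 + w_k."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # 1. Propagate every particle through the state-transition model.
        particles = (0.5 * particles
                     + 25 * particles / (1 + particles**2)
                     + rng.normal(0.0, np.sqrt(10.0), n_particles))
        # 2. Weight particles by the measurement likelihood (log-domain,
        #    with a max-shift to avoid numerical underflow).
        logw = -0.5 * (y - particles**2 / 20.0) ** 2
        weights = np.exp(logw - logw.max())
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # 3. Systematic resampling -- the step whose global data dependency
        #    makes parallel PFs hard, motivating schemes such as the PTA.
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0  # guard against floating-point round-off
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        particles = particles[np.searchsorted(cumulative, positions)]
    return np.array(estimates)
```

    Step 3 is the bottleneck the abstract alludes to: resampling needs the normalized weights of all particles, so distributing particles across processing units forces either a gather at a central node or a redesigned exchange pattern.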

    Novel methods for multi-target tracking with applications in sensor registration and fusion

    Maintaining surveillance over vast volumes of space is an increasingly important capability for the defence industry. A clearer and more accurate picture of a surveillance region can be obtained through sensor fusion across a network of sensors. However, this accurate picture depends on the sensor registration problem being resolved. Any inaccuracies in sensor location or orientation can manifest themselves in the sensor measurements used in the fusion process and lead to poor target tracking performance. Solutions previously proposed in the literature for the sensor registration problem have been based on a number of assumptions that do not always hold in practice, such as having a synchronous network and small, static registration errors. This thesis proposes a number of solutions for resolving the sensor registration and sensor fusion problems jointly in an efficient manner. The assumptions made in previous works are loosened or removed, making the solutions more applicable to problems that we are likely to see in practice. The proposed methods are applied to both simulated data and a segment of data taken from a live field trial.
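
    The effect of an unresolved registration error is easy to quantify. The toy example below (an illustration, not a method from the thesis) shows how a small orientation bias in a 2-D sensor translates into a large position error at range:

```python
import numpy as np

def to_global(local_meas, sensor_pos, sensor_yaw):
    """Rotate a 2-D sensor-frame measurement into the global frame
    and translate it by the sensor's position."""
    c, s = np.cos(sensor_yaw), np.sin(sensor_yaw)
    rotation = np.array([[c, -s], [s, c]])
    return sensor_pos + rotation @ local_meas

sensor_pos = np.array([0.0, 0.0])
measurement = np.array([1000.0, 0.0])   # target 1 km away, dead ahead

truth = to_global(measurement, sensor_pos, 0.0)
biased = to_global(measurement, sensor_pos, np.deg2rad(2.0))  # 2-degree yaw bias
error = np.linalg.norm(biased - truth)  # ~35 m of induced position error
```

    A bias of only two degrees displaces the fused track by roughly 35 metres at one kilometre, which is why registration must be estimated jointly with the target state rather than ignored.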

    Novel Hybrid Resampling Algorithms for Parallel/Distributed Particle Filters

    Particle filters, also known as sequential Monte Carlo (SMC) methods, use Bayesian inference and stochastic sampling techniques to estimate the states of dynamic systems from given observations. Parallel/distributed particle filters were introduced to improve the performance of sequential particle filters by using multiple processing units (PUs). The classical resampling algorithm used in parallel/distributed particle filters is a centralized scheme, called centralized resampling, which needs a central unit (CU) to serve as a hub for data transfers. As a result, centralized resampling incurs extra communication costs, which lower the speedup factors in parallel computing. Even though some efficient particle routing policies have been introduced, centralized resampling still suffers from high communication costs. A decentralized resampling algorithm was introduced to decrease the communication cost in parallel/distributed particle filters. In decentralized resampling, each PU independently handles its particles and transfers a portion of them to its neighboring PUs after the resampling step. Because of the lack of global information, the estimation accuracy of decentralized resampling is relatively low compared to that of centralized resampling. Hybrid resampling algorithms were proposed to improve performance by alternating between centralized and decentralized resampling, which can reduce the communication costs without losing estimation accuracy. In this study, we propose novel hybrid resampling algorithms that adjust the intervals between centralized and decentralized resampling steps based on the measured system convergence.
We analyze the computation time, communication time, and speedup factors of parallel/distributed particle filters with various resampling algorithms, state sizes, system complexities, numbers of processing units, and model dimensions. The experimental results indicate that decentralized resampling achieves the highest speedup factors due to the local transfer of particles, centralized resampling always has the lowest speedup factors because of the global transfer of particles, and hybrid resampling attains speedup factors in between. Moreover, we define the complexity-state ratio as the ratio between the system complexity and the system state size, and study how it impacts the speedup factor. The experiments show that a high complexity-state ratio results in higher speedup factors. This is one of the earliest attempts to analyze and compare the performance of parallel/distributed particle filters with different resampling algorithms. The analysis can provide potential solutions for further performance improvements and guide the appropriate selection of the resampling algorithm for parallel/distributed particle filters. Meanwhile, we formalize the various hybrid resampling algorithms as a generic resampling algorithm and prove it to be uniformly convergent. The proof provides a solid theoretical foundation for their wide adoption in parallel/distributed particle filters.
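
One way to read such a hybrid schedule is as a rule that decides, at each time step, which resampling mode to run. The sketch below is only an illustration: it stands in for the abstract's "measured system convergence" with a per-PU effective-sample-size test, which is our assumption, not the authors' criterion:

```python
import numpy as np

def choose_resampling(step, weights_per_pu, interval, ess_frac=0.5):
    """Hybrid schedule (illustrative): run the expensive centralized
    resampling every `interval` steps, or early whenever any PU's
    effective sample size (ESS) falls below ess_frac of its particle
    count; otherwise each PU resamples locally (decentralized)."""
    for w in weights_per_pu:
        w = np.asarray(w, dtype=float)
        w = w / w.sum()
        ess = 1.0 / np.sum(w ** 2)   # effective sample size of this PU
        if ess < ess_frac * len(w):  # local weight degeneracy detected
            return "centralized"
    return "centralized" if step % interval == 0 else "decentralized"
```

Widening `interval` trades estimation accuracy for communication cost, which is exactly the dial the hybrid algorithms tune adaptively.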

    Aeronautical engineering: A continuing bibliography with indexes (supplement 262)

    This bibliography lists 474 reports, articles, and other documents introduced into the NASA scientific and technical information system in Jan. 1991. Subject coverage includes: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway's life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR's applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms' performance on Amazon's Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
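
    The MR streaming pattern for discrete life can be illustrated with a plain mapper/reducer pair; the key format and function names below are our assumptions for illustration, and the strip-partitioning optimization is not shown. In Hadoop streaming, such functions would be wrapped in small scripts reading stdin and writing tab-separated key/value pairs:

```python
def map_line(line):
    """Mapper: for each live cell "x y", emit a liveness marker for the
    cell itself plus a neighbour-count contribution to all 8 neighbours."""
    x, y = map(int, line.split())
    out = [f"{x},{y}\tALIVE"]
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                out.append(f"{x + dx},{y + dy}\t1")
    return out

def reduce_cell(key, values):
    """Reducer: given all values grouped under one cell key by the
    shuffle, apply Conway's rules and report whether the cell lives."""
    alive = "ALIVE" in values
    neighbours = sum(1 for v in values if v == "1")
    return neighbours == 3 or (alive and neighbours == 2)
```

    The shuffle phase does the work a hand-written MPI exchange would otherwise do: grouping every cell's neighbour contributions onto one reducer, which is what makes the MR formulation so much simpler than the lower-level frameworks mentioned above.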