
    Communication Optimizations for a Wireless Distributed Prognostic Framework

    Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme, called parameterized resampling, that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes are also presented. A battery health management system is used as a target application. A new resampling scheme for distributed implementation of particle filters is discussed in this paper, along with analysis and comparison of this new scheme with existing resampling schemes in the context of minimizing communication overhead. Our proposed resampling scheme performs significantly better than other schemes by reducing both the length and the total number of communication messages exchanged, without compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as full implementation of the new schemes on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
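    The parameterized resampling scheme itself is defined in the paper. As a point of reference for why resampling is the communication bottleneck, below is a minimal sketch of standard systematic resampling in Python, which needs the full (global) weight vector; the function and variable names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Systematic resampling: draws new particle indices from the cumulative
    weight distribution. Needing the complete weight vector is what forces
    inter-node communication in a distributed particle filter."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one stratified draw per slot
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against round-off
    indices = np.searchsorted(cumulative, positions)
    return particles[indices], np.full(n, 1.0 / n)

# Toy example: particles for a battery state-of-charge estimate (hypothetical numbers)
particles = np.random.normal(0.80, 0.05, size=1000)
weights = np.exp(-0.5 * ((particles - 0.78) / 0.02) ** 2)
weights /= weights.sum()
resampled, new_weights = systematic_resample(particles, weights)
```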

    Distributed Particle Filters for Data Assimilation in Simulation of Large Scale Spatial Temporal Systems

    Assimilating real-time sensor data into a running simulation model can improve simulation results for large-scale spatial temporal systems such as wildfires, road traffic and floods. Particle filters are important methods to support data assimilation. While particle filters can work effectively with sophisticated simulation models, they have high computation cost due to the large number of particles needed in order to converge to the true system state. This is especially true for large-scale spatial temporal simulation systems that have high dimensional state spaces and high computation costs by themselves. To address the performance issue of particle filter-based data assimilation, this dissertation developed distributed particle filters and applied them to large-scale spatial temporal systems. We first implemented a particle filter-based data assimilation framework and carried out data assimilation to estimate system state and model parameters based on an application of wildfire spread simulation. We then developed advanced particle routing methods in distributed particle filters to route particles among the Processing Units (PUs) after resampling in an effective and efficient manner. In particular, for distributed particle filters with centralized resampling, we developed two routing policies, named the minimal transfer particle routing policy and the maximal balance particle routing policy. For distributed particle filters with decentralized resampling, we developed a hybrid particle routing approach that combines global routing with local routing to take advantage of both. The developed routing policies are evaluated in terms of communication cost and data assimilation accuracy based on the application of data assimilation for large-scale wildfire spread simulations. Moreover, as cloud computing is gaining popularity, we developed a parallel and distributed particle filter based on Hadoop & MapReduce to support large-scale data assimilation.
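    The routing policies themselves are specified in the dissertation. As a rough illustration of the idea behind a minimal-transfer style policy (bring every PU back to its target particle count after centralized resampling while moving as few particles as possible), here is a short Python sketch; the names and the pairing strategy are illustrative assumptions, not the dissertation's code.

```python
def minimal_transfer_routing(counts, target):
    """Route surplus particles from overloaded PUs to underloaded PUs so that
    every PU ends up with `target` particles, moving as few as possible.

    counts -- particle count per PU after centralized resampling
    Returns a list of (source_pu, dest_pu, num_particles) transfers.
    """
    surplus = [(pu, c - target) for pu, c in enumerate(counts) if c > target]
    deficit = [(pu, target - c) for pu, c in enumerate(counts) if c < target]
    transfers = []
    i = j = 0
    while i < len(surplus) and j < len(deficit):
        src, extra = surplus[i]
        dst, need = deficit[j]
        moved = min(extra, need)
        transfers.append((src, dst, moved))
        surplus[i] = (src, extra - moved)
        deficit[j] = (dst, need - moved)
        if surplus[i][1] == 0:
            i += 1
        if deficit[j][1] == 0:
            j += 1
    return transfers

# Example: 4 PUs, 1000 particles total, target 250 per PU
print(minimal_transfer_routing([400, 300, 200, 100], 250))
# [(0, 2, 50), (0, 3, 100), (1, 3, 50)]
```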

    Novel Hybrid Resampling Algorithms for Parallel/Distributed Particle Filters

    Particle filters, also known as sequential Monte Carlo (SMC) methods, use Bayesian inference and stochastic sampling techniques to estimate the states of dynamic systems from given observations. Parallel/distributed particle filters were introduced to improve the performance of sequential particle filters by using multiple processing units (PUs). The classical resampling algorithm used in parallel/distributed particle filters is a centralized scheme, called centralized resampling, which needs a central unit (CU) to serve as a hub for data transfers. As a result, the centralized resampling procedure produces extra communication costs, which lowers the speedup factors in parallel computing. Even though some efficient particle routing policies have been introduced, centralized resampling still suffers from high communication costs. A decentralized resampling algorithm was introduced to decrease the communication cost in parallel/distributed particle filters. In decentralized resampling, each PU independently handles its particles and transfers a portion of its particles to its neighboring PUs after the resampling step. Because of the lack of global information, the estimation accuracy of decentralized resampling is relatively low compared to that of centralized resampling. Hybrid resampling algorithms were proposed to improve performance by alternately executing centralized resampling and decentralized resampling, which can reduce communication costs without losing estimation accuracy. In this study, we propose novel hybrid resampling algorithms that adjust the intervals between the centralized resampling steps and the decentralized resampling steps based on the measured system convergence. We analyze the computation time, the communication time, and the speedup factors of parallel/distributed particle filters with various resampling algorithms, state sizes, system complexities, numbers of processing units, and model dimensions. The experimental results indicate that decentralized resampling achieves the highest speedup factors due to the local transfer of particles, centralized resampling always has the lowest speedup factors because of the global transfer of particles, and hybrid resampling attains speedup factors in between. Moreover, we define the complexity-state ratio as the ratio between the system complexity and the system state size to study how it impacts the speedup factor. The experiments show that a high complexity-state ratio results in higher speedup factors. This is one of the earliest attempts to analyze and compare the performance of parallel/distributed particle filters with different resampling algorithms. The analysis can provide potential solutions for further performance improvements and guide the appropriate selection of the resampling algorithm for parallel/distributed particle filters. Meanwhile, we formalize various hybrid resampling algorithms into a generic resampling algorithm and prove it to be uniformly convergent. The proof provides a solid theoretical foundation for their wide adoption in parallel/distributed particle filters.
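    The convergence measure used to adjust the interval between centralized and decentralized resampling steps is defined in the paper; the Python sketch below only illustrates the general idea of switching modes when the per-PU estimates drift apart. The spread-based test, the threshold, and all names are assumptions for illustration, not the proposed algorithm.

```python
import numpy as np

def choose_resampling_mode(local_estimates, threshold=0.05):
    """Illustrative convergence test for a hybrid schedule: if the per-PU
    state estimates have drifted apart, fall back to centralized resampling;
    otherwise keep the cheaper decentralized step.

    local_estimates -- array of shape (num_PUs, state_dim), one mean per PU
    """
    spread = np.std(local_estimates, axis=0)                 # disagreement across PUs
    scale = np.abs(np.mean(local_estimates, axis=0)) + 1e-12
    relative_spread = np.max(spread / scale)
    return "centralized" if relative_spread > threshold else "decentralized"

# Example: 4 PUs tracking a 2-D state
estimates = np.array([[1.00, 2.0], [1.01, 2.1], [0.99, 1.9], [1.02, 2.0]])
print(choose_resampling_mode(estimates))   # prints "decentralized" for this spread
```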

    Parallelized Particle and Gaussian Sum Particle Filters for Large Scale Freeway Traffic Systems

    Large scale traffic systems require techniques able to: 1) deal with large amounts of heterogeneous data coming from different types of sensors, 2) provide robustness in the presence of sparse sensor data, 3) incorporate different models that can deal with various traffic regimes, 4) cope with multimodal conditional probability density functions for the states. Centralized architectures often face challenges due to high communication demands. This paper develops new estimation techniques able to cope with these problems of large traffic network systems. These are Parallelized Particle Filters (PPFs) and a Parallelized Gaussian Sum Particle Filter (PGSPF) that are suitable for on-line traffic management. We show how complex probability density functions of the high dimensional traffic state can be decomposed into functions with simpler forms, and the whole estimation problem solved in an efficient way. The proposed approach is general, with limited interactions, which reduces the computational time and provides high estimation accuracy. The efficiency of the PPFs and the PGSPF is evaluated in terms of accuracy, complexity and communication demands, and compared with the case where all processing is centralized.
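    As a small illustration of the Gaussian sum idea behind a PGSPF (the traffic-state density is approximated by a mixture whose components can be propagated by separate particle filters in parallel), here is a simplified mixture-weight update in Python. It is not the paper's algorithm; all names and numbers are illustrative.

```python
import numpy as np

def update_mixture_weights(mixture_weights, component_likelihoods):
    """Gaussian-sum-style weight update: each mixture component (handled by
    its own particle filter, possibly on its own processor) reports the
    likelihood of the new measurement; the mixture weights are then
    reweighted and renormalized."""
    w = np.asarray(mixture_weights) * np.asarray(component_likelihoods)
    return w / w.sum()

# Example: three components, i.e. three parallel filters reporting likelihoods
print(update_mixture_weights([0.5, 0.3, 0.2], [0.1, 0.4, 0.2]))
```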

    PPF - A Parallel Particle Filtering Library

    We present the parallel particle filtering (PPF) software library, which enables hybrid shared-memory/distributed-memory parallelization of particle filtering (PF) algorithms, combining the Message Passing Interface (MPI) with multithreading for multi-level parallelism. The library is implemented in Java and relies on OpenMPI's Java bindings for inter-process communication. It includes dynamic load balancing, multi-thread balancing, and several algorithmic improvements for PF, such as input-space domain decomposition. The PPF library hides the difficulties of efficient parallel programming of PF algorithms and provides application developers with the necessary tools for parallel implementation of PF methods. We demonstrate the capabilities of the PPF library using two distributed PF algorithms in two scenarios with different numbers of particles. The PPF library runs a 38 million particle problem, corresponding to more than 1.86 GB of particle data, on 192 cores with 67% parallel efficiency. To the best of our knowledge, the PPF library is the first open-source software that offers a parallel framework for PF applications.
    Comment: 8 pages, 8 figures; will appear in the proceedings of the IET Data Fusion & Target Tracking Conference 201
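    The PPF library itself is written in Java on top of OpenMPI's Java bindings. The short Python/mpi4py sketch below only illustrates the general distributed-memory pattern it builds on (local weight updates plus a scalar reduction for normalization and a global effective-sample-size check); none of the names correspond to the PPF API.

```python
# Illustrative mpi4py sketch of one distributed PF weight-normalization step.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_particles = np.random.normal(0.0, 1.0, size=1000)   # this rank's particle block
local_weights = np.exp(-0.5 * local_particles ** 2)       # local likelihood evaluation

# Global normalization needs only one scalar reduction, not the full weight vectors.
global_sum = comm.allreduce(local_weights.sum(), op=MPI.SUM)
local_weights /= global_sum

# Global effective sample size decides whether a resampling step is needed.
global_sq_sum = comm.allreduce(np.sum(local_weights ** 2), op=MPI.SUM)
ess = 1.0 / global_sq_sum
if rank == 0:
    print(f"global ESS = {ess:.1f} over {size * 1000} particles")
```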