    Distributed Greedy Sensor Scheduling for Model-based Reconstruction of Space-Time Continuous Physical Phenomena

    A novel distributed sensor scheduling method for large-scale sensor networks observing space-time continuous physical phenomena is introduced. In the first step, the model of the distributed phenomenon is decomposed spatially and temporally, leading to a linear probabilistic finite-dimensional model. Based on this representation, the information gain of sensor measurements is evaluated by means of the so-called covariance reduction function. For this reward function, it is shown that the performance of greedy sensor scheduling is at least half that of the optimal schedule when long-term effects are considered. This finding is the key to distributed sensor scheduling, in which a central processing unit or fusion center is unnecessary, so that both scalability and reliability are ensured. Hence, greedy scheduling combined with the proposed hierarchical communication scheme requires only local sensor information and communication.
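
    A minimal sketch of one greedy step under a linear-Gaussian model, where the covariance reduction function is taken as the trace decrease produced by a Kalman update (the sensor list, budget, and function names are illustrative, not the paper's exact formulation):

        import numpy as np

        def covariance_reduction(P, H, R):
            # Decrease in total variance (trace) from fusing one sensor's
            # linear measurement z = H x + v, v ~ N(0, R), into prior cov P.
            S = H @ P @ H.T + R                 # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            P_post = P - K @ H @ P              # posterior covariance
            return np.trace(P) - np.trace(P_post), P_post

        def greedy_schedule(P0, sensors, budget):
            # sensors: list of (H, R) pairs; choose `budget` measurements one
            # at a time, each time taking the largest immediate reduction.
            P, chosen = P0, []
            for _ in range(budget):
                gains = [covariance_reduction(P, H, R) for (H, R) in sensors]
                best = max(range(len(sensors)), key=lambda i: gains[i][0])
                chosen.append(best)
                P = gains[best][1]
            return chosen, P

    Greedy choices of this kind are what the abstract's guarantee covers: for the covariance reduction reward, they achieve at least half the long-term performance of the optimal schedule.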

    Distributed Particle Filters for Data Assimilation in Simulation of Large Scale Spatial Temporal Systems

    Assimilating real-time sensor data into a running simulation model can improve simulation results for large-scale spatial-temporal systems such as wildfires, road traffic, and floods. Particle filters are important methods for supporting data assimilation. While particle filters work effectively with sophisticated simulation models, they have high computation cost due to the large number of particles needed to converge to the true system state. This is especially true for large-scale spatial-temporal simulation systems, which have high-dimensional state spaces and high computation cost in themselves. To address the performance issue of particle filter-based data assimilation, this dissertation developed distributed particle filters and applied them to large-scale spatial-temporal systems. We first implemented a particle filter-based data assimilation framework and carried out data assimilation to estimate system state and model parameters for a wildfire spread simulation application. We then developed advanced particle routing methods in distributed particle filters to route particles among the Processing Units (PUs) after resampling in an effective and efficient manner. In particular, for distributed particle filters with centralized resampling, we developed two routing policies, named the minimal transfer particle routing policy and the maximal balance particle routing policy. For distributed particle filters with decentralized resampling, we developed a hybrid particle routing approach that combines global routing with local routing to take advantage of both. The developed routing policies are evaluated in terms of communication cost and data assimilation accuracy for data assimilation in large-scale wildfire spread simulations. Moreover, as cloud computing is gaining more and more popularity, we developed a parallel and distributed particle filter based on Hadoop & MapReduce to support large-scale data assimilation.
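
    As an illustration of the minimal-transfer idea, the sketch below rebalances particle counts across PUs after centralized resampling by shipping only surpluses to deficits; the dissertation's actual routing policies are more elaborate, and the names here are assumptions:

        def minimal_transfer_routing(counts, target):
            # counts[i]: particles held by PU i after resampling; target:
            # desired count per PU. Returns (src, dst, n) transfers that move
            # only surplus particles, keeping communication cost low.
            surplus = [(i, c - target) for i, c in enumerate(counts) if c > target]
            deficit = [(i, target - c) for i, c in enumerate(counts) if c < target]
            plan = []
            while surplus and deficit:
                (s, ns), (d, nd) = surplus[-1], deficit[-1]
                n = min(ns, nd)
                plan.append((s, d, n))
                surplus[-1], deficit[-1] = (s, ns - n), (d, nd - n)
                if surplus[-1][1] == 0:
                    surplus.pop()
                if deficit[-1][1] == 0:
                    deficit.pop()
            return plan

        # e.g. counts [10, 2, 6] with target 6 yields the plan [(0, 1, 4)]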

    Decentralised particle filtering for multiple target tracking in wireless sensor networks

    This paper presents algorithms for consistent joint localisation and tracking of multiple targets in wireless sensor networks under the decentralised data fusion (DDF) paradigm, where particle representations of the state posteriors are communicated. This work differs from previous work [1], [2] in that more generalised methods have been developed to account for the correlated estimation errors that arise from common past information between two discrete particle sets. The particle sets are converted to continuous distributions for communication and inter-nodal fusion. Common past information is then removed by a division operation on two estimates, so that only new information is updated at the node. In previous work, the continuous distribution used was limited to a Gaussian kernel function. The new method is compared to the optimal centralised solution, in which each node sends all observation information to a central fusion node as it is received. Results presented include a real-time application of the DDF division operation on data logged from field trials.
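
    In the Gaussian special case, the division that removes common past information has a closed form in the information (canonical) parameterization; a minimal sketch, assuming the common-information estimate is available and the difference of information matrices stays positive definite (the paper's more generalised, non-Gaussian machinery is not reproduced here):

        import numpy as np

        def gaussian_divide(mean_a, cov_a, mean_c, cov_c):
            # Quotient of two Gaussians in information form: subtracting the
            # common information (mean_c, cov_c) from the estimate
            # (mean_a, cov_a) leaves only the new information, in the style
            # of a channel-filter DDF update.
            Y_a, Y_c = np.linalg.inv(cov_a), np.linalg.inv(cov_c)
            y_a, y_c = Y_a @ mean_a, Y_c @ mean_c
            Y_new = Y_a - Y_c            # assumed positive definite
            cov_new = np.linalg.inv(Y_new)
            mean_new = cov_new @ (y_a - y_c)
            return mean_new, cov_new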

    High-level Information Fusion for Constrained SMC Methods and Applications

    Information Fusion is a field that studies processes utilizing data from various input sources, and techniques exploiting these data to produce estimates and knowledge about objects and situations. Human computation, on the other hand, is a new and evolving research area that uses human intelligence to solve computational problems beyond the scope of existing artificial intelligence algorithms. In previous systems, humans' role was mostly restricted to analysing a finished fusion product; in current systems, however, humans are an integral element of a distributed framework in which many tasks can be accomplished by either humans or machines. Moreover, some information can be provided only by humans, because the observational capabilities and opportunities of traditional electronic (hard) sensors are limited. A source-reliability-adaptive distributed non-linear estimation method applicable to a number of distributed state estimation problems is proposed. The proposed method requires only local data exchange among neighbouring sensor nodes. It therefore provides enhanced reliability, scalability, and ease of deployment. In particular, by taking into account the estimation reliability of each sensor node at any point in time, it yields a more robust distributed estimation. To perform Multi-Model Particle Filtering (MMPF) in an adaptive distributed manner, a Gaussian approximation of the particle cloud obtained at each sensor node, along with a weighted Consensus Propagation (CP)-based distributed data aggregation scheme, is deployed to dynamically re-weight the particle clouds. The filter is a soft-data-constrained variant of the multi-model particle filter, capable of processing both soft human-generated data and conventional hard sensory data. If permanent noise occurs in the estimate provided by a sensor node, due to either a faulty sensing device or misleading soft data, the contribution of that node to the weighted consensus process is immediately reduced in order to alleviate its effect on the estimates of the neighbouring nodes and the entire network. The robustness of the proposed source-reliability-adaptive distributed estimation method is demonstrated through simulation results for agile target tracking scenarios, where agility refers to cases in which the observed dynamics of targets deviate from the given probabilistic characterization. Furthermore, the same concept is applied to a soft-data-constrained multiple-model Probability Hypothesis Density (PHD) filter that can track multiple agile targets with non-linear dynamics, which is a challenging problem. In this case, a Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) filter deploys a Random Set (RS) theoretic formulation, together with a Sequential Monte Carlo approximation, as a variant of Bayes filtering. In general, the performance of Bayesian filtering methods can be enhanced by incorporating extra information as constraints in the filtering process. Following the same principle, the new approach uses a constrained variant of the SMC-PHD filter, in which a fuzzy logic approach transforms the inherently vague human-generated data into a set of constraints. These constraints are then enforced on the filtering process by applying them as coefficients to the particles' weights. Because the human-generated Soft Data (SD) reports on target agility level, the proposed constrained-filtering approach is capable of handling multiple agile target tracking scenarios.
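
    A toy illustration of enforcing a soft-data constraint as coefficients on particle weights; the membership function, its thresholds, and the use of speed as the agility feature are hypothetical stand-ins for the thesis's fuzzy-logic transformation:

        import numpy as np

        def agility_membership(speed, low=5.0, high=15.0):
            # Hypothetical ramp membership for a "target is agile" report:
            # particles consistent with the report keep weight near 1, others
            # are attenuated; a floor of 0.1 avoids zeroing weights outright.
            return np.clip((speed - low) / (high - low), 0.1, 1.0)

        def apply_soft_constraint(weights, particle_speeds):
            # Enforce the constraint by applying the fuzzy coefficients to
            # the particle weights, then renormalising.
            w = weights * agility_membership(particle_speeds)
            return w / w.sum()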

    Decentralized Riemannian Particle Filtering with Applications to Multi-Agent Localization

    The primary focus of this research is to develop consistent nonlinear decentralized particle filtering approaches to the problem of multiple agent localization. A key aspect of our development is the use of Riemannian geometry to exploit the inherently non-Euclidean characteristics that are typical of multiple agent localization scenarios. A decentralized formulation is considered due to the practical advantages it provides over centralized fusion architectures. Inspiration is taken from the relatively new field of information geometry and the more established research field of computer vision. Differential geometric tools such as manifolds, geodesics, tangent spaces, and exponential and logarithmic mappings are used extensively to describe probabilistic quantities. Numerous probabilistic parameterizations were examined, and the efficient square-root probability density function parameterization was selected. The square-root parameterization has the benefit of allowing filter calculations to be carried out on the well-studied Riemannian unit hypersphere. A key advantage of selecting the unit hypersphere is that it permits closed-form calculations, a characteristic not shared by current solution approaches. Through the use of the Riemannian geometry of the unit hypersphere, we are able to demonstrate the ability to produce estimates that are not overly optimistic. Results are presented that clearly show the ability of the proposed approaches to outperform current state-of-the-art decentralized particle filtering methods. In particular, results are presented that emphasize the achievable improvement in estimation error, estimator consistency, and required computational burden.
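
    The square-root parameterization is convenient because a discretised density p sums to one, so its elementwise square root has unit L2 norm and lies on the unit hypersphere, where the exponential and logarithmic maps are known in closed form. A small illustrative sketch (not the thesis implementation):

        import numpy as np

        def sphere_log(p, q):
            # Log map on the unit hypersphere: tangent vector at p toward q.
            cos_t = np.clip(p @ q, -1.0, 1.0)
            theta = np.arccos(cos_t)
            if theta < 1e-12:
                return np.zeros_like(p)
            return theta * (q - cos_t * p) / np.sin(theta)

        def sphere_exp(p, v):
            # Exponential map: follow the geodesic from p along tangent v.
            t = np.linalg.norm(v)
            if t < 1e-12:
                return p.copy()
            return np.cos(t) * p + np.sin(t) * (v / t)

        # Square roots of discretised pdfs are unit vectors, so geodesic
        # operations stay on the sphere and square back to valid pdfs.
        a, b = np.sqrt([0.2, 0.5, 0.3]), np.sqrt([0.6, 0.1, 0.3])
        midpoint = sphere_exp(a, 0.5 * sphere_log(a, b))
        assert abs((midpoint**2).sum() - 1.0) < 1e-9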

    Cooperative Vehicle Tracking in Large Environments

    Vehicle position tracking and prediction over large areas is of significant importance in many industrial applications, such as mining operations. In a small area, this can be easily achieved by providing vehicles with a constant communication link to a control centre and having the vehicles broadcast their position. The problem changes dramatically when vehicles operate within a large environment of potentially hundreds of square kilometres and in difficult terrain. This thesis presents algorithms for cooperative tracking of vehicles based on a vehicle motion model that incorporates the properties of the working area, and on information collected by infrastructure collection points and other mobile agents. The probabilistic motion prediction approach provides long-term estimates of vehicle positions using motion profiles built for the particular environment and considering the vehicle stopping probability. A limited number of data collection points distributed around the field are used to update the position estimates, with negative information also used to improve the estimation. The thesis introduces the concept of observation harvesting, a process in which peer-to-peer communication between vehicles allows egocentric position updates and inter-vehicle measurements to be relayed among vehicles and finally conveyed to the collection points for an improved position estimate. It uses a store-and-synchronise concept to deal with intermittent communication and aims to disseminate data in an opportunistic manner. A nonparametric filtering algorithm for cooperative tracking is proposed to incorporate the harvested information, including the negative, relative, and time-delayed observations. An important contribution of this thesis is to enable the optimisation of fleet scheduling when full-coverage networks are not available or feasible. The proposed approaches were validated with comprehensive experimental results using data collected from a large-scale mining operation.
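
    As a toy illustration of exploiting negative information, a grid-based Bayes update can down-weight the cells covered by a collection point that reported no detection (the grid model and detection probabilities are hypothetical; the thesis itself uses a nonparametric filter over motion profiles):

        import numpy as np

        def negative_update(belief, detect_prob):
            # belief: prior probability of the vehicle being in each cell.
            # detect_prob: per-cell probability that a collection point would
            # have detected the vehicle if present (zero outside coverage).
            # Bayes: P(cell | no detection) is proportional to
            # P(no detection | cell) * P(cell).
            posterior = belief * (1.0 - detect_prob)
            return posterior / posterior.sum()

    Probability mass shifts away from the covered cells toward the uncovered ones, which is how "the vehicle was not seen here" sharpens the position estimate.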
