
    Likelihood Consensus and Its Application to Distributed Particle Filtering

    We consider distributed state estimation in a wireless sensor network without a fusion center. Each sensor performs a global estimation task---based on the past and current measurements of all sensors---using only local processing and local communications with its neighbors. In this estimation task, the joint (all-sensors) likelihood function (JLF) plays a central role as it epitomizes the measurements of all sensors. We propose a distributed method for computing, at each sensor, an approximation of the JLF by means of consensus algorithms. This "likelihood consensus" method is applicable if the local likelihood functions of the various sensors (viewed as conditional probability density functions of the local measurements) belong to the exponential family of distributions. We then use the likelihood consensus method to implement a distributed particle filter and a distributed Gaussian particle filter. Each sensor runs a local particle filter, or a local Gaussian particle filter, that computes a global state estimate. The weight update in each local (Gaussian) particle filter employs the JLF, which is obtained through the likelihood consensus scheme. For the distributed Gaussian particle filter, the number of particles can be significantly reduced by means of an additional consensus scheme. Simulation results are presented to assess the performance of the proposed distributed particle filters for a multiple target tracking problem.
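    The following minimal sketch (not from the paper) illustrates the likelihood-consensus idea for the simplest case of scalar linear-Gaussian local likelihoods: each sensor's log-likelihood is quadratic in the state, so averaging its coefficients over the network by consensus and rescaling by the number of sensors recovers the joint log-likelihood at every node. The network topology, measurement model, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Assumed model: scalar state x, local likelihood log p(z_k | x) = -0.5*(z_k - h*x)^2 / r
# (up to a constant), which is quadratic in x: a_k*x^2 + b_k*x + c_k.

def local_coeffs(z, h, r):
    """Quadratic coefficients of one sensor's local log-likelihood in x."""
    return np.array([-0.5 * h * h / r, z * h / r, -0.5 * z * z / r])

def average_consensus(values, neighbors, iters=50, eps=0.2):
    """Plain average-consensus iterations on one coefficient vector per node."""
    v = np.array(values, dtype=float)
    for _ in range(iters):
        new_v = v.copy()
        for k, nbrs in enumerate(neighbors):
            new_v[k] += eps * sum(v[j] - v[k] for j in nbrs)
        v = new_v
    return v

# Toy network: 4 sensors in a ring observing the same scalar state (assumed values).
rng = np.random.default_rng(0)
x_true, h, r = 1.5, 1.0, 0.1
z = h * x_true + np.sqrt(r) * rng.standard_normal(4)
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
avg = average_consensus([local_coeffs(zk, h, r) for zk in z], neighbors)

def joint_loglik(node, x):
    """Approximate JLF (log scale) as seen by one node, evaluated at particles x."""
    a, b, c = len(neighbors) * avg[node]   # N * average = sum over all sensors
    return a * x ** 2 + b * x + c

print(joint_loglik(0, np.array([1.0, 1.5, 2.0])))
```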

    Multi-sensing Data Fusion: Target tracking via particle filtering

    This Master's thesis first introduces multi-sensing data fusion with a focus on perception and on the concepts underlying this work, such as the mathematical tools that make it possible. Particle filters are one class of such tools that allow a computer to fuse numerical information perceived from the real environment by sensors. For this reason they are described, and state-of-the-art mathematical formulations and algorithms for particle filtering are also presented. At the core of the project, a simple piece of software has been developed in order to test these tools in practice. More specifically, a Target Tracking Simulator is presented in which a virtual trackable object moves freely in a 2-dimensional simulated environment while distributed sensor agents, dispersed in the same environment, perceive the object through a state-dependent measurement affected by additive Gaussian noise. Each sensor employs particle filtering, along with communication with neighboring sensors, to update the perceived state of the object and track it as it moves through the environment. The combination of the Java and AgentSpeak languages is used as the platform for developing this application.
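    As an illustration of the kind of update each sensor agent performs (a Python sketch rather than the thesis' Java/AgentSpeak implementation), the code below runs one bootstrap particle-filter cycle for a 2-D target observed through a range measurement with additive Gaussian noise; the motion model, noise levels, and sensor placement are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(particles, dt=1.0, q=0.05):
    """Random-walk motion model on (x, y) positions (assumed dynamics)."""
    return particles + np.sqrt(q) * dt * rng.standard_normal(particles.shape)

def update(weights, particles, z, sensor_pos, r=0.1):
    """Reweight particles with a range (distance-to-sensor) measurement."""
    predicted_range = np.linalg.norm(particles - sensor_pos, axis=1)
    weights = weights * np.exp(-0.5 * (z - predicted_range) ** 2 / r)
    return weights / weights.sum()

def resample(particles, weights):
    """Multinomial resampling back to uniform weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy cycle: one sensor at the origin, true target at (3, 4) -> true range 5.
particles = rng.uniform(0.0, 6.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
particles = predict(particles)
z = 5.0 + np.sqrt(0.1) * rng.standard_normal()
weights = update(weights, particles, z, sensor_pos=np.zeros(2))
particles, weights = resample(particles, weights)
print("estimated position:", particles.mean(axis=0))
```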

    Distributed implementations of the particle filter with performance bounds

    The focus of the thesis is on developing distributed estimation algorithms for systems with nonlinear dynamics. Of particular interest are agent or sensor networks (AN/SN) consisting of a large number of local processing and observation agents/nodes, which can communicate and cooperate with each other to perform a predefined task. Examples of such AN/SNs are distributed camera networks, acoustic sensor networks, networks of unmanned aerial vehicles, social networks, and robotic networks. Signal processing in AN/SNs is traditionally centralized and developed for systems with linear dynamics. In the centralized architecture, the participating nodes communicate their observations (either directly or indirectly via a multi-hop relay) to a central processing unit, referred to as the fusion centre, which is responsible for performing the predefined task. For centralized systems with linear dynamics, the Kalman filter provides the optimal approach but suffers from several drawbacks, e.g., it is generally unscalable and also susceptible to failure in case the fusion centre breaks down. In general, no analytic solution can be determined for systems with nonlinear dynamics. Consequently, the conventional Kalman filter cannot be used and one has to rely on numerical approaches. In such cases, sequential Monte Carlo approaches, also known as particle filters, are widely used as approximations of the Bayesian estimators, but mostly in the centralized configuration. Recently there has been a growing interest in distributed signal processing algorithms where: (i) There is no fusion centre; (ii) The local nodes do not have (require) global knowledge of the network topology; and (iii) Each node exchanges data only within its local neighborhood. Distributed estimation has been widely explored for estimation/tracking problems in linear systems. Distributed particle filter implementations for nonlinear systems are still in their infancy and are the focus of this thesis. In the first part of this thesis, four different consensus-based distributed particle filter implementations are proposed. First, a constrained sufficient statistic based distributed implementation of the particle filter (CSS/DPF) is proposed for bearing-only tracking (BOT) and joint bearing/range tracking problems encountered in a number of applications including radar target tracking and robot localization. Although the number of parallel consensus runs in the CSS/DPF is lower compared to the existing distributed implementations of the particle filter, the CSS/DPF still requires a large number of iterations for the consensus runs to converge. To further reduce the consensus overhead, the CSS/DPF is extended to a distributed implementation of the unscented particle filter, referred to as the CSS/DUPF, which requires only a limited number of consensus iterations. Both the CSS/DPF and CSS/DUPF are specific to BOT and joint bearing/range tracking problems. Next, the unscented, consensus-based, distributed implementation of the particle filter (UCD/DPF) is proposed, which is generalizable to systems with any dynamics.
In terms of contributions, the UCD/DPF makes two important improvements to the existing distributed particle filter framework: (i) Unlike existing distributed implementations of the particle filter, the UCD/DPF uses all available global observations, including the most recent ones, in deriving the proposal distribution based on the distributed UKF; and (ii) Computation of the global estimates from local estimates during the consensus step is based on an optimal fusion rule. Finally, a multi-rate consensus/fusion based framework for distributed implementation of the particle filter, referred to as the CF/DPF, is proposed. Separate fusion filters are designed to consistently assimilate the local filtering distributions into the global posterior by compensating for the common past information between neighbouring nodes. The CF/DPF offers two distinct advantages over its counterparts. First, the CF/DPF framework is suitable for scenarios where network connectivity is intermittent and consensus cannot be reached between two consecutive observations. Second, the CF/DPF is not limited to the Gaussian approximation for the global posterior density. Numerical simulations verify the near-optimal performance of the proposed distributed particle filter implementations. The second half of the thesis focuses on the distributed computation of the posterior Cramér-Rao lower bound (PCRLB). Current PCRLB approaches assume a centralized or hierarchical architecture. The exact expression for distributed computation of the PCRLB is not yet available and only an approximate expression has recently been derived. Motivated by distributed adaptive resource management problems with the objective of dynamically activating a time-variant subset of observation nodes to optimize the network's performance, the thesis derives the exact expression, referred to as the dPCRLB, for computing the PCRLB for any AN/SN configured in a distributed fashion. The dPCRLB computational algorithms are derived both for the offline conventional (non-conditional) PCRLB, determined primarily from the state model, observation model, and prior knowledge of the initial state of the system, and for the online conditional PCRLB, expressed as a function of the past history of the observations. Compared to the non-conditional dPCRLB, its conditional counterpart provides a more accurate representation of the estimator's performance and, consequently, a better criterion for sensor selection. The thesis then extends the dPCRLB algorithms to quantized observations. Particle filter realizations are used to compute these bounds numerically and to quantify their performance for data fusion problems through Monte Carlo simulations.
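    As a hedged sketch of what a consensus-step fusion of local estimates can look like (a standard information-form rule assumed for illustration, not the thesis' exact derivation), the code below fuses local Gaussian means and covariances; in a network, the two sums would be obtained by average consensus followed by rescaling with the known number of nodes.

```python
import numpy as np

def information_fusion(means, covs):
    """Fuse local Gaussian estimates (x_k, P_k) in information form.

    In a network, the two sums below would be obtained by average consensus
    followed by multiplication with the (known) number of nodes.
    """
    infos = [np.linalg.inv(P) for P in covs]           # information matrices
    info_vecs = [Y @ x for Y, x in zip(infos, means)]  # information vectors
    Y_global = sum(infos)
    y_global = sum(info_vecs)
    P_global = np.linalg.inv(Y_global)
    return P_global @ y_global, P_global

# Toy example: three nodes holding noisy 2-D local estimates of the same state.
means = [np.array([1.0, 2.1]), np.array([0.9, 1.9]), np.array([1.1, 2.0])]
covs = [0.20 * np.eye(2), 0.30 * np.eye(2), 0.25 * np.eye(2)]
x_fused, P_fused = information_fusion(means, covs)
print(x_fused)
```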

    Distributed Target Tracking and Synchronization in Wireless Sensor Networks

    Wireless sensor networks provide useful information for various applications but pose challenges in scalable information processing and network maintenance. This dissertation focuses on statistical methods for distributed information fusion and sensor synchronization for target tracking in wireless sensor networks. We perform target tracking using particle filtering. For scalability, we extend centralized particle filtering to distributed particle filtering via distributed fusion of local estimates provided by individual sensors. We derive a distributed fusion rule from Bayes' theorem and implement it via average consensus. We approximate each local estimate as a Gaussian mixture and develop a sampling-based approach to the nonlinear fusion of Gaussian mixtures. By using the sampling-based approach in the fusion of Gaussian mixtures, we do not require each Gaussian mixture to have a uniform number of mixture components, and thus give each sensor the flexibility to adaptively learn a Gaussian mixture model with the optimal number of mixture components, based on its local information. Given such flexibility, we develop an adaptive method for Gaussian mixture fitting through a combination of hierarchical clustering and the expectation-maximization algorithm. Using numerical examples, we show that the proposed distributed particle filtering algorithm improves the accuracy and communication efficiency of distributed target tracking, and that the proposed adaptive Gaussian mixture learning method improves the accuracy and computational efficiency of distributed target tracking. We also consider the synchronization problem of a wireless sensor network. When sensors in a network are not synchronized, we model their relative clock offsets as unknown parameters in a state-space model that connects sensor observations to target state transition. We formulate the synchronization problem as a joint state and parameter estimation problem and solve it via the expectation-maximization algorithm to find the maximum likelihood solution for the unknown parameters, without knowledge of the target states. We also study the performance of the expectation-maximization algorithm under the Monte Carlo approximations used by particle filtering in target tracking. Numerical examples show that the proposed synchronization method converges to the ground truth, and that sensor synchronization significantly improves the accuracy of target tracking.
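    Sampling-based fusion of Gaussian mixtures can be sketched as importance sampling: draw samples from one local mixture, weight them by the density of the other, and resample, so the surviving points approximately follow the product (Bayes fusion) of the two densities. The 1-D mixtures and all parameter values below are illustrative assumptions, not the dissertation's models.

```python
import numpy as np

rng = np.random.default_rng(2)

def gm_pdf(x, weights, means, stds):
    """Density of a 1-D Gaussian mixture evaluated at the points x."""
    x = np.atleast_1d(x)[:, None]
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2.0 * np.pi))
    return comp @ weights

def gm_sample(n, weights, means, stds):
    """Draw n samples from a 1-D Gaussian mixture."""
    idx = rng.choice(len(weights), size=n, p=weights)
    return means[idx] + stds[idx] * rng.standard_normal(n)

# Two local posteriors as Gaussian mixtures; note they may have different
# numbers of components, as allowed by the sampling-based fusion.
wA, mA, sA = np.array([0.6, 0.4]), np.array([0.0, 2.0]), np.array([0.5, 0.7])
wB, mB, sB = np.array([1.0]), np.array([1.5]), np.array([0.8])

samples = gm_sample(5000, wA, mA, sA)          # draw from the first mixture
imp_w = gm_pdf(samples, wB, mB, sB)            # weight by the second mixture
imp_w /= imp_w.sum()
fused = samples[rng.choice(len(samples), size=5000, p=imp_w)]
# `fused` approximates the product density and could now be refit as a new
# Gaussian mixture (e.g. via hierarchical clustering plus EM, as in the abstract).
```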

    On the Development of Distributed Estimation Techniques for Wireless Sensor Networks

    Wireless sensor networks (WSNs) have lately witnessed tremendous demand, as evidenced by the increasing number of day-to-day applications. The sensor nodes aim at estimating the parameters of their corresponding adaptive filters to achieve the desired response for the event of interest. This thesis addresses some of the pressing issues related to linear parameter estimation in WSNs, focusing mainly on the reduction of communication overhead and latency, and on robustness to noise. The first issue concerns the high communication overhead and latency of distributed parameter estimation techniques such as the diffusion least mean squares (DLMS) and incremental least mean squares (ILMS) algorithms. The poor performance demonstrated by these distributed techniques in the presence of impulsive noise is then dealt with separately. The issue of source localization, i.e., estimation of the source bearing in WSNs, where existing decentralized algorithms fail to perform satisfactorily, is also resolved in this thesis; the same issue is further treated independently of nodal connectivity. This thesis proposes two algorithms, namely the block diffusion least mean squares (BDLMS) and block incremental least mean squares (BILMS) algorithms, for reducing the communication overhead in WSNs. The theoretical and simulation studies demonstrate that the BDLMS and BILMS algorithms provide the same performance as DLMS and ILMS, but with a significant reduction in communication overhead per node. The latency also reduces by a factor as high as the block size used in the proposed algorithms. With an aim to develop robustness towards impulsive noise, this thesis proposes three robust distributed algorithms, i.e., the saturation nonlinearity incremental LMS (SNILMS), saturation nonlinearity diffusion LMS (SNDLMS) and Wilcoxon norm diffusion LMS (WNDLMS) algorithms. The steady-state analysis of the SNILMS algorithm is carried out based on the spatial-temporal energy conservation principle. The theoretical and simulation results show that these algorithms are robust to impulsive noise. The SNDLMS algorithm is found to provide better performance than the SNILMS and WNDLMS algorithms. In order to develop a distributed source localization technique, a novel diffusion maximum likelihood (ML) bearing estimation algorithm is proposed in this thesis, which needs less communication overhead than centralized algorithms. After forming a random array with its neighbours, each sensor node estimates the source bearing by optimizing the ML function locally using a diffusion particle swarm optimization algorithm. The simulation results show that the proposed algorithm performs better than the centralized multiple signal classification (MUSIC) algorithm in terms of probability of resolution and root mean square error. Further, in order to make the proposed algorithm independent of nodal connectivity, a distributed in-cluster bearing estimation technique is proposed. Each cluster of sensors estimates the source bearing by optimizing the ML function locally in cooperation with other clusters. The simulation results demonstrate improved performance of the proposed method in comparison to the centralized and decentralized MUSIC algorithms, as well as the distributed in-network algorithm.
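    For reference, here is a sketch of the baseline that the proposed block and robust variants build on: a standard diffusion LMS (adapt-then-combine) update in which every node runs a local LMS step on its own data and then averages the intermediate estimates of its neighbourhood. The network, step size, and uniform combination weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

M, N = 4, 3                                          # nodes, filter length
w_true = np.array([0.5, -1.0, 0.25])                 # unknown parameter vector
neighbors = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3]]   # each list includes the node itself
w = np.zeros((M, N))                                 # local estimates
mu = 0.05                                            # LMS step size

for _ in range(2000):
    # Adaptation: each node runs one LMS update on its own streaming data.
    psi = np.empty_like(w)
    for k in range(M):
        u = rng.standard_normal(N)                         # local regressor
        d = u @ w_true + 0.05 * rng.standard_normal()      # local desired response
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    # Combination: each node averages the intermediate estimates of its
    # neighbourhood (uniform combination weights assumed).
    for k, nbrs in enumerate(neighbors):
        w[k] = psi[nbrs].mean(axis=0)

print(np.round(w, 3))   # every node's estimate should be close to w_true
```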

    Architectures and GPU-Based Parallelization for Online Bayesian Computational Statistics and Dynamic Modeling

    Recent work demonstrates that coupling Bayesian computational statistics methods with dynamic models can facilitate the analysis of complex systems associated with diverse time series, including those involving social and behavioural dynamics. Particle Markov Chain Monte Carlo (PMCMC) methods constitute a particularly powerful class of Bayesian methods combining aspects of batch Markov Chain Monte Carlo (MCMC) and the sequential Monte Carlo method of Particle Filtering (PF). PMCMC can flexibly combine theory-capturing dynamic models with diverse empirical data. Online machine learning is a subcategory of machine learning algorithms characterized by sequential, incremental execution as new data arrives, which can give updated results and predictions with growing sequences of available incoming data. While many machine learning and statistical methods have been adapted to online algorithms, PMCMC is one example of the many methods whose compatibility with and adaptation to online learning remains unclear. In this thesis, I proposed a data-streaming solution supporting PF and PMCMC methods with dynamic epidemiological models and demonstrated several successful applications. By constructing an automated, easy-to-use streaming system, analytic applications and simulation models gain access to arriving real-time data, shortening the time gap between data and the resulting model-supported insight. The well-defined architecture design emerging from the thesis would substantially expand traditional simulation models' potential by allowing such models to be offered as continually updated services. Contingent on sufficiently fast execution time, simulation models within this framework can consume the incoming empirical data in real time and generate informative predictions on an ongoing basis as new data points arrive. In a second line of work, I investigated the platform's flexibility and capability by extending this system to support the use of a powerful class of PMCMC algorithms with dynamic models while ameliorating such algorithms' traditionally stiff performance limitations. Specifically, this work designed and implemented a GPU-enabled parallel version of a PMCMC method with dynamic simulation models. The resulting codebase has readily enabled researchers to adapt their models to state-of-the-art statistical inference methods and to ensure that the computation-heavy PMCMC method can perform significant sampling between the successive arrivals of new data points. Investigating this method's impact with several realistic PMCMC application examples showed that GPU-based acceleration allows for up to a 160x speedup compared to a corresponding CPU-based version not exploiting parallelism. The GPU-accelerated PMCMC and the streaming processing system can complement each other, jointly providing researchers with a powerful toolset to greatly accelerate learning and secure additional insight from the high-velocity data increasingly prevalent within social and behavioural spheres. The design philosophy applied supported a platform with broad generalizability and potential for ready future extensions. The thesis discusses common barriers and difficulties in designing and implementing such systems and offers solutions to solve or mitigate them.
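    The particle marginal Metropolis-Hastings flavour of PMCMC can be sketched as follows (a toy illustration, not the thesis' streaming or GPU code): a bootstrap particle filter provides an unbiased estimate of the marginal likelihood of a candidate parameter, which drives a Metropolis accept/reject step. The state-space model and all tuning constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def pf_log_marginal(y, q, r, n_particles=200):
    """Bootstrap PF estimate of log p(y | q, r) for the toy model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r)."""
    x = rng.standard_normal(n_particles)
    log_ml = 0.0
    for yt in y:
        x = x + np.sqrt(q) * rng.standard_normal(n_particles)            # propagate
        logw = -0.5 * (yt - x) ** 2 / r - 0.5 * np.log(2.0 * np.pi * r)
        log_ml += np.log(np.mean(np.exp(logw)))                           # likelihood increment
        w = np.exp(logw)
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]   # resample
    return log_ml

# Synthetic data from the toy model (q = 0.1, r = 0.5 assumed).
T, q_true, r_true = 100, 0.1, 0.5
states = np.cumsum(np.sqrt(q_true) * rng.standard_normal(T))
y = states + np.sqrt(r_true) * rng.standard_normal(T)

# Particle marginal Metropolis-Hastings over the state-noise variance q.
q_cur, ll_cur, chain = 0.5, pf_log_marginal(y, 0.5, r_true), []
for _ in range(300):
    q_prop = abs(q_cur + 0.05 * rng.standard_normal())        # reflected random-walk proposal
    ll_prop = pf_log_marginal(y, q_prop, r_true)
    if np.log(rng.uniform()) < ll_prop - ll_cur:               # flat prior assumed
        q_cur, ll_cur = q_prop, ll_prop
    chain.append(q_cur)

print("posterior mean of q:", np.mean(chain[100:]))
```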

    Robust and Efficient Inference of Scene and Object Motion in Multi-Camera Systems

    Multi-camera systems have the ability to overcome some of the fundamental limitations of single-camera based systems. Having multiple viewpoints of a scene goes a long way in limiting the influence of field of view, occlusion, blur and poor resolution of an individual camera. This dissertation addresses robust and efficient inference of object motion and scene in multi-camera and multi-sensor systems. The first part of the dissertation discusses the role of constraints introduced by projective imaging towards robust inference of multi-camera/sensor based object motion. We discuss the role of the homography and epipolar constraints for fusing object motion perceived by individual cameras. For planar scenes, the homography constraints provide a natural mechanism for data association. For scenes that are not planar, the epipolar constraint provides a weaker multi-view relationship. We use the epipolar constraint for tracking in multi-camera and multi-sensor networks. In particular, we show that the epipolar constraint reduces the dimensionality of the state space of the problem by introducing a "shared" state space for the joint tracking problem. This allows for robust tracking even when one of the sensors fails due to poor SNR or occlusion. The second part of the dissertation deals with challenges in the computational aspects of tracking algorithms that are common to such systems. Much of the inference in multi-camera and multi-sensor networks deals with complex nonlinear models corrupted with non-Gaussian noise. Particle filters provide approximate Bayesian inference in such settings. We analyze the computational drawbacks of traditional particle filtering algorithms, and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze the implementations of the proposed algorithm, and in particular concentrate on implementations that have minimum processing times. The last part of the dissertation deals with the efficient sensing paradigm of compressive sensing (CS) applied to signals in imaging, such as natural images and reflectance fields. We propose a hybrid signal model based on the assumption that most real-world signals exhibit subspace compressibility as well as sparse representations. We show that several real-world visual signals such as images, reflectance fields and videos are better approximated by this hybrid of two models. We derive optimal hybrid linear projections of the signal and show that theoretical guarantees and algorithms designed for CS can be easily extended to hybrid subspace-compressive sensing. Such methods reduce the amount of information sensed by a camera, and help in reducing the so-called data deluge problem in large multi-camera systems.
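    A small sketch of the Independent Metropolis-Hastings (IMH) move referred to above, on a 1-D toy target standing in for the filtering posterior: because every candidate is drawn independently of the current state, proposal generation and density evaluation can be pipelined and parallelized, which is the property the dissertation exploits. The target, proposal, and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):
    """Unnormalised log posterior (toy Gaussian likelihood times Gaussian prior)."""
    return -0.5 * (x - 1.0) ** 2 / 0.3 - 0.5 * x ** 2 / 2.0

def log_proposal(x):
    """Independent proposal density (toy prior-like Gaussian)."""
    return -0.5 * x ** 2 / 2.0

def imh_chain(n):
    x = 0.0
    out = np.empty(n)
    for i in range(n):
        cand = np.sqrt(2.0) * rng.standard_normal()   # draw from the independent proposal
        # IMH acceptance ratio: target(cand)*proposal(x) / (target(x)*proposal(cand))
        log_a = (log_target(cand) - log_target(x)) + (log_proposal(x) - log_proposal(cand))
        if np.log(rng.uniform()) < log_a:
            x = cand
        out[i] = x
    return out

samples = imh_chain(5000)
print("posterior mean estimate:", samples[500:].mean())
```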

    Algorithms and architectures for MCMC acceleration in FPGAs

    Markov Chain Monte Carlo (MCMC) is a family of stochastic algorithms which are used to draw random samples from arbitrary probability distributions. This task is necessary to solve a variety of problems in Bayesian modelling, e.g. prediction and model comparison, making MCMC a fundamental tool in modern statistics. Nevertheless, due to the increasing complexity of Bayesian models, the explosion in the amount of data they need to handle and the computational intensity of many MCMC algorithms, performing MCMC-based inference is often impractical in real applications. This thesis tackles this computational problem by proposing Field Programmable Gate Array (FPGA) architectures for accelerating MCMC and by designing novel MCMC algorithms and optimization methodologies which are tailored for FPGA implementation. The contributions of this work include: 1) An FPGA architecture for the Population-based MCMC algorithm, along with two modified versions of the algorithm which use custom arithmetic precision in large parts of the implementation without introducing error in the output. Mapping the two modified versions to an FPGA allows for more parallel modules to be instantiated in the same chip area. 2) An FPGA architecture for the Particle MCMC algorithm, along with a novel algorithm which combines Particle MCMC and Population-based MCMC to tackle multi-modal distributions. A proposed FPGA architecture for the new algorithm achieves higher datapath utilization than the Particle MCMC architecture. 3) A generic method to optimize the arithmetic precision of any MCMC algorithm that is implemented on FPGAs. The method selects the minimum precision among a given set of precisions, while guaranteeing a user-defined bound on the output error. By applying the above techniques to large-scale Bayesian problems, it is shown that significant speedups (one or two orders of magnitude) are possible compared to state-of-the-art MCMC algorithms implemented on CPUs and GPUs, opening the way for handling complex statistical analyses in the era of ubiquitous, ever-increasing data.
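    Population-based MCMC (parallel tempering) can be sketched on a bimodal toy target as below: several chains at different temperatures run Metropolis updates (the independent, replicable modules an FPGA implementation would instantiate in parallel) and periodically propose state swaps between adjacent temperatures. The target, temperature ladder, and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_target(x):
    """Bimodal target: mixture of two well-separated unit-variance Gaussians."""
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

temps = np.array([1.0, 2.0, 4.0, 8.0])      # temperature ladder (assumed)
x = np.zeros(len(temps))                    # one chain per temperature
cold_samples = []

for it in range(20000):
    # Local Metropolis update for every chain; these updates are mutually
    # independent and could run in parallel hardware modules.
    for i, T in enumerate(temps):
        cand = x[i] + rng.standard_normal()
        if np.log(rng.uniform()) < (log_target(cand) - log_target(x[i])) / T:
            x[i] = cand
    # Exchange move between a random pair of adjacent temperatures.
    i = rng.integers(len(temps) - 1)
    log_swap = (log_target(x[i + 1]) - log_target(x[i])) * (1.0 / temps[i] - 1.0 / temps[i + 1])
    if np.log(rng.uniform()) < log_swap:
        x[i], x[i + 1] = x[i + 1], x[i]
    cold_samples.append(x[0])               # keep only the cold (T = 1) chain

print("fraction of cold-chain samples in the right mode:",
      np.mean(np.array(cold_samples) > 0.0))
```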