
    Gossip Algorithms for Distributed Signal Processing

    Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
    Comment: Submitted to Proceedings of the IEEE, 29 pages
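    To make the averaging mechanism behind these convergence results concrete, the following is a minimal sketch of randomized pairwise gossip for distributed averaging; the node count, ring topology, iteration budget, and all numerical values are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Minimal sketch of randomized pairwise gossip averaging (illustrative only;
# node count, topology, and iteration budget are assumptions, not from the paper).
rng = np.random.default_rng(0)
n = 20                           # number of sensor nodes (assumed)
x = rng.normal(25.0, 5.0, n)     # local measurements; goal: every node learns the mean
target = x.mean()

# Ring topology as a simple stand-in for a wireless connectivity graph.
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

for t in range(20000):
    i = rng.integers(n)              # a random node wakes up
    j = rng.choice(neighbors[i])     # and picks a random neighbor
    avg = 0.5 * (x[i] + x[j])        # both replace their values with the pairwise average
    x[i] = x[j] = avg

print("max deviation from true average:", np.max(np.abs(x - target)))
```

    Each iteration costs one pairwise message exchange, which is why the convergence rate translates directly into the message count and energy cost discussed in the article.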

    Distributed Estimation and Performance Limits in Resource-constrained Wireless Sensor Networks

    Distributed inference arising in sensor networks has been an interesting and promising discipline in recent years. The goal of this dissertation is to investigate several issues related to distributed inference in sensor networks, emphasizing parameter estimation and target tracking with resource-constrained networks.

    To reduce the transmissions between sensors and the fusion center, thereby saving bandwidth and energy in sensor networks, a novel methodology is proposed for the sequential Bayesian estimation problem, in which each local sensor performs a censoring procedure based on the normalized innovation square (NIS). In this methodology, each sensor sends only the informative measurements, and the fusion center fuses both the missing measurements and the received ones to yield more accurate inference. The new methodology is derived for both linear and nonlinear dynamic systems, and for both scalar and vector measurements. The relationship between the censoring rule based on the NIS and the one based on the Kullback-Leibler (KL) divergence is investigated.

    A probabilistic transmission model over multiple access channels (MACs) is investigated. With this model, a relationship between the sensor management and compressive sensing problems is established, based on which the sensor management problem becomes a constrained optimization problem: the goal is to determine the optimal probabilities with which each sensor should transmit such that the determinant of the Fisher information matrix (FIM) at any given time step is maximized. The performance of the proposed compressive sensing based sensor management methodology in terms of accuracy of inference is investigated.

    For the Bayesian parameter estimation problem, a framework is proposed where quantized observations from local sensors are not directly fused at the fusion center; instead, an additive noise is injected independently into each quantized observation. The injected noise acts as a low-pass filter in the characteristic function (CF) domain and is therefore capable of recovering the original analog data if certain conditions are satisfied. The optimal estimator based on the new framework is derived, as is the performance bound in terms of Fisher information. Moreover, a sub-optimal estimator, namely the linear minimum mean square error (LMMSE) estimator, is derived, since the proposed framework theoretically justifies the additive noise modeling of the quantization process. The bit allocation problem based on the framework is also investigated.

    A source localization problem in a large-scale sensor network is explored. The maximum-likelihood (ML) estimator based on the quantized data from local sensors and its performance bound in terms of the Cramér-Rao lower bound (CRLB) are derived. Since the number of sensors is large, the law of large numbers (LLN) is utilized to obtain a closed-form version of the performance bound, which clearly shows the dependence of the bound on the sensor density, i.e., the Fisher information is a linearly increasing function of the sensor density. The error incurred by the LLN approximation is also theoretically analyzed. Furthermore, the design of sub-optimal local sensor quantizers based on the closed-form solution is proposed.

    The problem of on-line performance evaluation for state estimation of a moving target is studied. In particular, a compact and efficient recursive conditional posterior Cramér-Rao lower bound (PCRLB) is proposed. This bound provides theoretical justification for a heuristic bound proposed by other researchers in this area. A theoretical complexity analysis is provided to show the efficiency of the proposed bound compared to the existing one.
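    As a rough illustration of the censoring idea described above, the sketch below evaluates a normalized innovation square (NIS) test for a scalar linear-Gaussian measurement and transmits only when the NIS exceeds a threshold; the model matrices, threshold, and numbers are assumptions made for illustration, and the dissertation's treatment also covers nonlinear systems and vector measurements.

```python
import numpy as np

# Hedged sketch of an NIS-based censoring rule at one sensor for a
# linear-Gaussian measurement model (all matrices and the threshold are
# assumed values, chosen only to demonstrate the test).
def nis_censor(z, x_pred, P_pred, H, R, threshold):
    """Return (send, nis): transmit the measurement only if its normalized
    innovation square exceeds the threshold, i.e. it is 'informative'."""
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R                              # innovation covariance
    nis = (innovation.T @ np.linalg.solve(S, innovation)).item()
    return nis > threshold, nis

# Toy usage: 1-D state observed directly.
H = np.array([[1.0]]); R = np.array([[0.5]])
x_pred = np.array([[2.0]]); P_pred = np.array([[1.0]])
send, nis = nis_censor(np.array([[2.1]]), x_pred, P_pred, H, R, threshold=1.0)
print(send, nis)   # small innovation -> likely censored (not transmitted)
```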

    Distributed Location Estimation of a Moving Target Characterized by a Spatial Poisson Field

    Wireless Sensor Networks (WSNs) are traditionally employed to collect spatial and temporal data characterizing various events. These data are then used to solve inference problems such as object detection, counting, classification, estimation, and tracking. Distributed solutions provided by WSNs are often cost effective and characterized by high performance indices.

    In this work, we model and simulate a distributed sensor network composed of radiation detectors and analyze its ability to make inferences. Radiation detectors are deployed over a known area, and a radiological point source is positioned in the interior of the area. Detectors take measurements of the field generated by the point source and transmit them (without any interaction with one another) to a remotely installed supercomputer, called here the Fusion Center (FC), for joint processing. To minimize the consumption of resources such as power and transmission bandwidth in the network, the measurements are locally preprocessed prior to transmission. Our model assumes two Gaussian channels, observation and transmission: the first distorts data at the receiver end of each sensor during data acquisition, and the second distorts data during transmission. Sensor measurements are modeled as an inhomogeneous spatial counting random process (a Poisson process). The location of the radiological point source in the area and the strength of the field generated by the substance are unknown parameters, and the goal of the FC is to estimate these parameters from the distributed measurements provided by the WSN.

    To find the distributed estimates, we adopt the maximum likelihood approach, which requires knowledge of the joint probability density function of the distributed measurements observed by the FC. Since this joint density is nonlinear in the unknown parameters, we propose an iterative approach to solve for the maximum likelihood estimates; the solution combines the bisection and secant methods, adjusted to seek the solution in a multidimensional parameter space. The performance of the distributed estimator is measured in terms of the mean square error and analyzed with respect to various parameters of the WSN: (1) the number of sensors, (2) the signal-to-noise ratio in the observation and transmission channels, (3) the strength of the original field, and (4) the number of quantization levels used by a sensor to convert an analog measurement into a digital signal. We also propose a distributed tracking algorithm for monitoring the position of the object in real time.
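    As a simplified illustration of the estimation problem described above, the sketch below draws Poisson counts from an assumed inverse-square intensity model and locates the source by maximizing the likelihood over a coarse grid; the field model, sensor layout, grid search, and known source strength are assumptions made for brevity, whereas the work itself uses a bisection/secant iteration and estimates the field strength as well.

```python
import numpy as np

# Illustrative sketch: ML localization of a point source from Poisson counts.
# The inverse-square intensity model and all constants are assumptions.
rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, size=(30, 2))      # known sensor positions (assumed)
true_src, true_A = np.array([60.0, 40.0]), 2000.0

def intensity(src, A):
    d2 = np.sum((sensors - src) ** 2, axis=1)
    return A / (1.0 + d2)                         # assumed inverse-square field model

counts = rng.poisson(intensity(true_src, true_A))  # counts observed by each sensor

def neg_loglik(src, A):
    lam = intensity(src, A)
    return np.sum(lam - counts * np.log(lam))      # Poisson log-likelihood up to a constant

# Coarse grid search over candidate source locations (strength treated as known here).
xs = ys = np.linspace(0, 100, 101)
grid = [(x, y) for x in xs for y in ys]
best = min(grid, key=lambda p: neg_loglik(np.array(p), true_A))
print("ML location estimate (strength known):", best)
```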

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference.

    In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies, and game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network; the error-correcting codes help correct the erroneous information from the Byzantines and thereby counter their attack.

    The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted: the effect of unskilled humans sharing beliefs with a central observer, called the CEO, is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls under the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit as the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance is established between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems. This result can be summarized as follows: sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions is even easier (discrete). Besides theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network. The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models; the implications of such modeling for the design of large human-machine systems are discussed. Furthermore, an error-correcting-codes based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of using the proposed approach in comparison to the majority-voting based approach are highlighted using simulated and real datasets.

    In the final part of the thesis, a human-machine inference framework is developed where humans and machines interact to perform complex tasks in a faster and more efficient manner. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines constantly interact with each other to perform even the simplest of tasks. While machines perform best in some tasks, humans still give better results in tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for the two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such collaboration, which provides design modules for larger and more complex human-machine systems; however, the details of such larger systems need to be further explored.
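    To give a flavor of the error-correcting-codes idea for fusing unreliable responses, the sketch below decodes a class label from noisy binary answers by minimum Hamming distance to an assumed codebook; the codebook, per-worker error rate, and task setup are illustrative assumptions and do not reproduce the specific code designs developed in the thesis.

```python
import numpy as np

# Minimal sketch of code-based fusion of unreliable binary answers
# (codebook and error rate are assumptions; this only illustrates
# minimum-Hamming-distance decoding, not the thesis's code design).
rng = np.random.default_rng(2)

# Four classes encoded with a length-7 binary codebook (rows chosen for spread).
codebook = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 1],
    [0, 0, 1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1, 1, 0],
])

true_class = 2
p_err = 0.2   # assumed probability that a worker answers their bit incorrectly

# Each of the 7 "workers" reports one bit of the true codeword, possibly flipped.
answers = codebook[true_class] ^ (rng.random(7) < p_err).astype(int)

# Fusion: pick the class whose codeword is closest to the answers in Hamming distance.
dists = np.sum(codebook != answers, axis=1)
print("decoded class:", int(np.argmin(dists)), "distances:", dists)
```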