
    Efficient distributed information fusion using value of information based censoring

    In many distributed sensing applications, not all agents have valuable information at all times, so requiring all agents to communicate at all times can be resource-intensive. In this work, the notion of Value of Information (VoI) is used to improve the efficiency of distributed sensing algorithms. In particular, only agents with high VoI broadcast their measurements to the network, while the others censor their measurements. New VoI-based data fusion algorithms are introduced, and an in-depth analysis of the costs incurred by these algorithms and by conventional distributed data fusion algorithms is presented. Numerical simulations are used to compare the performance of the VoI-based algorithms with traditional data fusion algorithms. A VoI-based algorithm that adaptively adjusts the criterion for being informative is presented and shown to strike a good balance between reduced communication cost and increased accuracy. Funding: United States Army Research Office (MURI grant W911NF-11-1-0391).
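
    The broadcast-or-censor decision can be sketched for a scalar Gaussian estimate as below. This is only an illustration of the idea described in the abstract: the VoI measure (a KL divergence between the fused and un-fused beliefs), the threshold adaptation rule, and all class and parameter names are assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch: VoI-censored broadcasting for a scalar Gaussian parameter.
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence KL( N(mu0, var0) || N(mu1, var1) ) for scalar Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

class VoICensoringAgent:
    def __init__(self, noise_var, voi_threshold=0.05, target_rate=0.3, adapt_gain=0.01):
        self.noise_var = noise_var      # local measurement noise variance
        self.threshold = voi_threshold  # current VoI threshold
        self.target_rate = target_rate  # desired long-run fraction of broadcasts
        self.adapt_gain = adapt_gain    # step size for threshold adaptation

    def step(self, z, mu, var):
        """Decide whether to broadcast measurement z given the shared belief N(mu, var)."""
        # Posterior the network would hold if z were fused (standard Gaussian update).
        k = var / (var + self.noise_var)
        mu_post = mu + k * (z - mu)
        var_post = (1.0 - k) * var
        voi = kl_gauss(mu_post, var_post, mu, var)  # information gained by fusing z
        broadcast = voi > self.threshold
        # Adapt the threshold toward the target broadcast rate.
        self.threshold += self.adapt_gain * ((1.0 if broadcast else 0.0) - self.target_rate)
        self.threshold = max(self.threshold, 1e-6)
        if broadcast:
            return True, (mu_post, var_post)
        return False, (mu, var)
```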

    Distributed Estimation and Performance Limits in Resource-constrained Wireless Sensor Networks

    Distributed inference arising in sensor networks has been an interesting and promising discipline in recent years. The goal of this dissertation is to investigate several issues related to distributed inference in sensor networks, with an emphasis on parameter estimation and target tracking in resource-constrained networks.

    To reduce the number of transmissions between sensors and the fusion center, and thereby save bandwidth and energy, a novel methodology is proposed for the sequential Bayesian estimation problem in which each local sensor performs a censoring procedure based on the normalized innovation square (NIS). Each sensor sends only its informative measurements, and the fusion center fuses both the missing measurements and the received ones to yield more accurate inference. The methodology is derived for both linear and nonlinear dynamic systems, and for both scalar and vector measurements. The relationship between the censoring rule based on NIS and the one based on the Kullback-Leibler (KL) divergence is investigated.

    A probabilistic transmission model over multiple access channels (MACs) is also investigated. With this model, a relationship between the sensor management and compressive sensing problems is established; the sensor management problem then becomes a constrained optimization problem whose goal is to determine the optimal transmission probability of each sensor such that the determinant of the Fisher information matrix (FIM) at any given time step is maximized. The performance of the proposed compressive-sensing-based sensor management methodology in terms of inference accuracy is investigated.

    For the Bayesian parameter estimation problem, a framework is proposed in which quantized observations from local sensors are not fused directly at the fusion center; instead, additive noise is injected independently into each quantized observation. The injected noise acts as a low-pass filter in the characteristic function (CF) domain and is therefore capable of recovering the original analog data if certain conditions are satisfied. The optimal estimator based on the new framework is derived, as is the performance bound in terms of Fisher information. Moreover, a sub-optimal estimator, namely the linear minimum mean square error (LMMSE) estimator, is derived, since the proposed framework theoretically justifies the additive-noise modeling of the quantization process. The bit allocation problem based on this framework is also investigated.

    A source localization problem in a large-scale sensor network is explored. The maximum-likelihood (ML) estimator based on quantized data from local sensors and its performance bound in terms of the Cramér-Rao lower bound (CRLB) are derived. Since the number of sensors is large, the law of large numbers (LLN) is utilized to obtain a closed-form version of the performance bound, which clearly shows the dependence of the bound on the sensor density, i.e., the Fisher information is a linearly increasing function of the sensor density. The error incurred by the LLN approximation is also analyzed theoretically. Furthermore, the design of sub-optimal local sensor quantizers based on the closed-form solution is proposed.

    Finally, the problem of on-line performance evaluation for state estimation of a moving target is studied. In particular, a compact and efficient recursive conditional posterior Cramér-Rao lower bound (PCRLB) is proposed. This bound provides theoretical justification for a heuristic bound proposed by other researchers in this area. A theoretical complexity analysis is provided to show the efficiency of the proposed bound compared to the existing one.
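
    As a rough sketch of the NIS-based censoring rule in the linear-Gaussian case, a sensor running a Kalman filter might gate its measurement as below. The chi-square gate level and the simplified fusion-center update are assumptions; in particular, the dissertation's fusion center also exploits the information carried by censored (non-transmitted) measurements, which this sketch omits.

```python
# Minimal sketch: NIS-based censoring at a local sensor (linear-Gaussian case).
import numpy as np
from scipy.stats import chi2

def nis_censor(z, x_pred, P_pred, H, R, alpha=0.95):
    """Return (transmit, nis): transmit only if the NIS exceeds a chi-square gate."""
    nu = z - H @ x_pred                          # innovation
    S = H @ P_pred @ H.T + R                     # innovation covariance
    nis = float(nu.T @ np.linalg.solve(S, nu))   # normalized innovation square
    gate = chi2.ppf(alpha, df=z.shape[0])        # measurements inside the gate are censored
    return nis > gate, nis

def kf_update(z, x_pred, P_pred, H, R):
    """Standard Kalman update applied at the fusion center to received measurements."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x, P
```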

    Cost aware Inference for IoT Devices

    Networked embedded devices (IoTs) of limited CPU, memory and power resources are revolutionizing data gathering, remote monitoring and planning in many consumer and business applications. Nevertheless, resource limitations place a significant burden on their service life and operation, warranting cost-aware methods that are capable of distributively screening redundancies in device information and transmitting informative data. We propose to train a decentralized gated network that, given an observed instance at test-time, allows for activation of select devices to transmit information to a central node, which then performs inference. We analyze our proposed gradient descent algorithm for Gaussian features and establish convergence guarantees under good initialization. We conduct experiments on a number of real-world datasets arising in IoT applications and show that our model results in over 1.5X service life with negligible accuracy degradation relative to a performance achievable by a neural network. http://proceedings.mlr.press/v89/zhu19d/zhu19d.pdf (Published version)
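
    The gating idea can be illustrated as a small PyTorch model: each device computes a cheap local gate from its own features, and the central classifier only sees the (soft-)gated features. This is a sketch under assumed tensor shapes and module names, not the authors' published architecture or training procedure.

```python
# Illustrative sketch of a gated-transmission model for cost-aware IoT inference.
import torch
import torch.nn as nn

class GatedIoTClassifier(nn.Module):
    def __init__(self, n_devices, feat_dim, n_classes, hidden=32):
        super().__init__()
        # One lightweight gate per device, computed locally from that device's features.
        self.gates = nn.ModuleList([nn.Linear(feat_dim, 1) for _ in range(n_devices)])
        self.central = nn.Sequential(
            nn.Linear(n_devices * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def forward(self, x):                                   # x: (batch, n_devices, feat_dim)
        gate_logits = torch.stack(
            [g(x[:, i]) for i, g in enumerate(self.gates)], dim=1)  # (batch, n_devices, 1)
        gate_probs = torch.sigmoid(gate_logits)
        # Soft gating during training; at test time, gates are thresholded and only
        # devices with open gates transmit their features to the central node.
        masked = (x * gate_probs).flatten(1)
        return self.central(masked), gate_probs

# A communication penalty on gate_probs.mean() can be added to the training loss
# to trade classification accuracy against how many devices transmit.
```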

    Value of information based distributed inference and planning

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2013. This electronic version was submitted and approved by the author's academic department as part of an electronic thesis pilot project. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the department-submitted PDF version of the thesis. Includes bibliographical references (p. 85-92).

    In multi-agent applications it is often the case that not all information is equally valuable to the mission, and agents are typically resource limited; it is therefore important to ensure that resources are spent on obtaining and conveying valuable information. This thesis presents efficient distributed sensing and planning algorithms that improve resource planning efficiency by taking into account the obtainable Value of Information (VoI), and improve distributed sensing efficiency by ensuring that agents only broadcast high-value measurements. The first result focuses on communication-efficient distributed sensing algorithms. In particular, agents broadcast their measurements only when the VoI in their measurements exceeds a pre-defined threshold. The VoI threshold is further adjusted adaptively to better balance the communication cost incurred against the long-term accuracy of the estimation. Theoretical results are presented establishing almost sure convergence of the communication cost and estimation error for distributions in the exponential family. Moreover, an algorithm that automatically forgets old information is also developed to estimate dynamically changing parameters. Validation through numerical simulations and real datasets shows that the new VoI-based algorithms can yield better parameter estimates than those achieved by previously published hyperparameter consensus algorithms while incurring only a fraction of the communication cost. The second result focuses on efficient distributed planning algorithms. In particular, in a system with heterogeneous agents, a coupled planning framework is presented that evaluates sensing/exploration activities by their improvement of mission returns. Numerical results show that the coupling between exploration and tasking agents encourages better cooperation between them, leading to better performance than decoupled approaches. A hardware testbed is developed to demonstrate the performance improvements of the coupled approach in the context of distributed planning with uncertain target classifications.

    by Beipeng Mu. S.M.
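
    The estimator that "automatically forgets old information" so it can track time-varying parameters can be illustrated with a scalar-Gaussian sketch that exponentially discounts old sufficient statistics. The forgetting factor, the Gaussian model, and the class name are illustrative assumptions; the thesis develops the idea for distributions in the exponential family.

```python
# Sketch: tracking a drifting scalar parameter by exponentially discounting old broadcasts.
class ForgettingGaussianEstimator:
    def __init__(self, noise_var, lam=0.98):
        self.noise_var = noise_var   # variance of broadcast measurements (assumed known)
        self.lam = lam               # forgetting factor in (0, 1]; smaller = faster forgetting
        self.info = 1e-9             # discounted accumulated information (inverse variance)
        self.weighted_sum = 0.0      # discounted information-weighted sum of measurements

    def update(self, z):
        # Discount the old sufficient statistics, then fold in the new measurement.
        self.info = self.lam * self.info + 1.0 / self.noise_var
        self.weighted_sum = self.lam * self.weighted_sum + z / self.noise_var
        return self.estimate

    @property
    def estimate(self):
        return self.weighted_sum / self.info

    @property
    def variance(self):
        return 1.0 / self.info
```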

    Heterogeneous Sensor Signal Processing for Inference with Nonlinear Dependence

    Inferring events of interest by fusing data from multiple heterogeneous sources has been an interesting and important topic in recent years. Several issues related to inference using heterogeneous data with complex and nonlinear dependence are investigated in this dissertation. We apply copula theory to characterize the dependence among heterogeneous data.

    In centralized detection, where sensor observations are available at the fusion center (FC), we study copula-based fusion. We design detection algorithms based on sample-wise copula selection and on a mixture-of-copulas model for different scenarios of the true dependence. The proposed approaches are theoretically justified and perform well when applied to the fusion of acoustic and seismic sensor data for personnel detection.

    Besides traditional sensors, access to massive amounts of social media data provides a unique opportunity for extracting information about unfolding events. We further study how sensor networks and social media complement each other in facilitating the data-to-decision making process. We propose a copula-based joint characterization of multiple dependent time series from sensors and social media. As a proof of concept, this model is applied to the fusion of Google Trends (GT) data and stock/flu data for prediction, where the stock/flu data serves as a surrogate for sensor data.

    In energy-constrained networks, local observations are compressed before they are transmitted to the FC. In these cases, conditional dependence and heterogeneity particularly complicate the system design. We consider the classification of discrete random signals in Wireless Sensor Networks (WSNs), where, for communication efficiency, only local decisions are transmitted. We derive the necessary conditions for the optimal decision rules at the sensors and the FC by introducing a hidden random variable. An iterative algorithm is designed to search for the optimal decision rules, and its convergence and asymptotic optimality are proved. The performance of the proposed scheme is illustrated for the distributed Automatic Modulation Classification (AMC) problem.

    Censoring is another communication-efficient strategy, in which sensors transmit only informative observations to the FC and censor those deemed uninformative. We design detectors that take into account the spatial dependence among observations. Fusion rules for censored data are proposed with continuous and discrete local messages, respectively. Their computationally efficient counterparts, based on the key idea of injecting controlled noise at the FC before fusion, are also investigated.

    Finally, with heterogeneous and dependent sensor observations, we consider not only inference in parallel frameworks but also the problem of collaborative inference, where collaboration exists among local sensors. Each sensor forms a coalition with other sensors and shares information within the coalition to maximize its inference performance. The collaboration strategy is investigated under a communication constraint. To characterize the influence of inter-sensor dependence on inference performance, and thus on the collaboration strategy, we quantify the gain and loss of forming a coalition by introducing copula-based definitions of diversity gain and redundancy loss for both estimation and detection problems. A coalition formation game is proposed for the distributed inference problem, through which the information contained in the inter-sensor dependence is fully explored and utilized for improved inference performance.
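
    As a minimal sketch of copula-based fusion for binary detection, the snippet below forms a log-likelihood ratio for two dependent sensor statistics (say, acoustic and seismic energy features) using Gaussian marginals tied together by a Gaussian copula. The choice of copula, the marginal models, and all function and parameter names are assumptions for illustration; the dissertation selects among copula families and mixtures rather than fixing a single Gaussian copula.

```python
# Minimal sketch of copula-based fusion of two dependent sensor statistics.
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_logdensity(u, v, rho):
    """log c(u, v; rho) for a bivariate Gaussian copula."""
    z = norm.ppf(np.column_stack([u, v]))            # map uniforms to standard normals
    cov = np.array([[1.0, rho], [rho, 1.0]])
    joint = multivariate_normal(mean=[0.0, 0.0], cov=cov).logpdf(z)
    return joint - norm.logpdf(z).sum(axis=1)        # copula density = joint / product of marginals

def copula_llr(x_acoustic, x_seismic, m0, m1, s0, s1, rho0, rho1):
    """Log-likelihood ratio H1 vs H0 with Gaussian marginals and Gaussian copulas."""
    def loglik(m, s, rho):
        lm = norm.logpdf(x_acoustic, m[0], s[0]) + norm.logpdf(x_seismic, m[1], s[1])
        u = norm.cdf(x_acoustic, m[0], s[0])
        v = norm.cdf(x_seismic, m[1], s[1])
        return lm + gaussian_copula_logdensity(u, v, rho)
    return loglik(m1, s1, rho1) - loglik(m0, s0, rho0)

# Usage: declare a detection when copula_llr(...) exceeds a threshold chosen
# for the desired false-alarm rate.
```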