
    Fusing Censored Dependent Data for Distributed Detection

    In this paper, we consider a distributed detection problem for a censoring sensor network in which each sensor's communication rate is significantly reduced by transmitting only "informative" observations to the Fusion Center (FC) and censoring those deemed "uninformative". While previous research has usually assumed the independence of data from censoring sensors, we explore spatial dependence among observations. Our focus is on designing fusion rules under the Neyman-Pearson (NP) framework that take this spatial dependence into account. Two transmission scenarios are considered: one in which uncensored observations are transmitted directly to the FC, and another in which they are first quantized and then transmitted to further improve transmission efficiency. Copula-based Generalized Likelihood Ratio Tests (GLRTs) for censored data are proposed for both the continuous and the discrete messages received at the FC under the two transmission strategies. We address the computational issues of the copula-based GLRTs, which involve multidimensional integrals, by presenting more efficient fusion rules based on the key idea of injecting controlled noise at the FC before fusion. Although the signal-to-noise ratio (SNR) is reduced by introducing controlled noise at the receiver, simulation results demonstrate that the resulting noise-aided fusion approach performs very close to the exact copula-based GLRTs. By exploiting the spatial dependence, the copula-based GLRTs and their noise-aided counterparts greatly improve detection performance compared with the fusion rule derived under the independence assumption.
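    As a rough illustration of the noise-injection idea, the sketch below (Python, with made-up parameters MU1 and RHO1; not the paper's exact test) evaluates a continuous-message Gaussian-copula log-likelihood ratio and a noise-aided variant that dithers quantized messages at the FC so the continuous-data statistic can be reused, sidestepping the multidimensional integrals of the exact discrete GLRT.

```python
import numpy as np
from scipy import stats

# Illustrative model (assumed, not from the paper): two sensors with
# N(0,1) marginals under H0 and N(MU1,1) under H1, and H1 spatial
# dependence captured by a Gaussian copula with correlation RHO1.
MU1, RHO1 = 1.0, 0.6

def gauss_copula_logpdf(u1, u2, rho):
    # Log-density of a bivariate Gaussian copula at uniforms (u1, u2).
    z1, z2 = stats.norm.ppf(u1), stats.norm.ppf(u2)
    quad = (rho**2 * (z1**2 + z2**2) - 2 * rho * z1 * z2) / (2 * (1 - rho**2))
    return -0.5 * np.log(1 - rho**2) - quad

def copula_llr(y1, y2):
    # Continuous-message copula log-likelihood ratio, H1 vs. H0.
    marg = (stats.norm.logpdf(y1, MU1) + stats.norm.logpdf(y2, MU1)
            - stats.norm.logpdf(y1) - stats.norm.logpdf(y2))
    dep = gauss_copula_logpdf(stats.norm.cdf(y1, MU1),
                              stats.norm.cdf(y2, MU1), RHO1)
    return float(np.sum(marg + dep))

def noise_aided_llr(q1, q2, step, rng):
    # Key idea, sketched: dither quantized messages with controlled noise
    # at the FC, then reuse the continuous-data statistic above.
    d1 = q1 + rng.uniform(-step / 2, step / 2, size=q1.shape)
    d2 = q2 + rng.uniform(-step / 2, step / 2, size=q2.shape)
    return copula_llr(d1, d2)
```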

    Heterogeneous Sensor Signal Processing for Inference with Nonlinear Dependence

    Inferring events of interest by fusing data from multiple heterogeneous sources has been an interesting and important topic in recent years. Several issues related to inference using heterogeneous data with complex and nonlinear dependence are investigated in this dissertation. We apply copula theory to characterize the dependence among heterogeneous data. In centralized detection, where sensor observations are available at the fusion center (FC), we study copula-based fusion. We design detection algorithms based on sample-wise copula selection and on a mixture-of-copulas model for different scenarios of the true dependence. The proposed approaches are theoretically justified and perform well when applied to fuse acoustic and seismic sensor data for personnel detection. Besides traditional sensors, access to the massive amount of social media data provides a unique opportunity for extracting information about unfolding events. We further study how sensor networks and social media complement each other in facilitating the data-to-decision-making process. We propose a copula-based joint characterization of multiple dependent time series from sensors and social media. As a proof of concept, this model is applied to the fusion of Google Trends (GT) data with stock/flu data for prediction, where the stock/flu data serve as a surrogate for sensor data. In energy-constrained networks, local observations are compressed before they are transmitted to the FC. In these cases, conditional dependence and heterogeneity particularly complicate the system design. We consider the classification of discrete random signals in Wireless Sensor Networks (WSNs), where, for communication efficiency, only local decisions are transmitted. We derive the necessary conditions for the optimal decision rules at the sensors and the FC by introducing a hidden random variable. An iterative algorithm is designed to search for the optimal decision rules; its convergence and asymptotic optimality are also proved. The performance of the proposed scheme is illustrated on the distributed Automatic Modulation Classification (AMC) problem. Censoring is another communication-efficient strategy, in which sensors transmit only informative observations to the FC and censor those deemed uninformative. We design detectors that take into account the spatial dependence among observations. Fusion rules for censored data are proposed with continuous and discrete local messages, respectively. Their computationally efficient counterparts, based on the key idea of injecting controlled noise at the FC before fusion, are also investigated. With heterogeneous and dependent sensor observations, this dissertation considers not only inference in parallel frameworks but also the problem of collaborative inference, where collaboration exists among local sensors. Each sensor forms a coalition with other sensors and shares information within the coalition to maximize its inference performance. The collaboration strategy is investigated under a communication constraint. To characterize the influence of inter-sensor dependence on inference performance, and thus on the collaboration strategy, we quantify the gain and loss in forming a coalition by introducing copula-based definitions of diversity gain and redundancy loss for both estimation and detection problems. A coalition formation game is proposed for the distributed inference problem, through which the information contained in the inter-sensor dependence is fully explored and utilized for improved inference performance.
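    The mixture-of-copulas idea mentioned above can be sketched as follows (a minimal illustration with a two-component Gaussian/Clayton mixture and hand-picked parameters w, rho, and theta; in practice these would be estimated, e.g. by EM):

```python
import numpy as np
from scipy import stats

def clayton_logpdf(u, v, theta):
    # Log-density of the bivariate Clayton copula (theta > 0).
    s = u**(-theta) + v**(-theta) - 1.0
    return (np.log1p(theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(s))

def gauss_copula_logpdf(u, v, rho):
    # Log-density of the bivariate Gaussian copula.
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    quad = (rho**2 * (z1**2 + z2**2) - 2 * rho * z1 * z2) / (2 * (1 - rho**2))
    return -0.5 * np.log(1 - rho**2) - quad

def mixture_logpdf(u, v, w=0.5, rho=0.5, theta=2.0):
    # Log-density of a two-component mixture of copulas, evaluated stably.
    return np.logaddexp(np.log(w) + gauss_copula_logpdf(u, v, rho),
                        np.log1p(-w) + clayton_logpdf(u, v, theta))
```

    Mixing copula families in this way lets one fit dependence structures, such as the lower-tail dependence contributed by the Clayton component, that no single family captures well.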

    Multiple Hypothesis Testing Framework for Spatial Signals

    The problem of identifying regions of spatially interesting, different, or adversarial behavior is inherent to many practical applications involving distributed multi-sensor systems. In this work, we develop a general framework stemming from multiple hypothesis testing to identify such regions. A discrete spatial grid is assumed for the monitored environment. The spatial grid points associated with different hypotheses are identified while controlling the false discovery rate at a pre-specified level. Measurements are acquired using a large-scale sensor network. We propose a novel, data-driven method to estimate local false discovery rates based on the spectral method of moments. Our method is agnostic to specific spatial propagation models of the underlying physical phenomenon and relies on a broadly applicable density model for local summary statistics. Locations between sensors are assigned to regions associated with different hypotheses based on interpolated local false discovery rates. The benefits of our method are illustrated by applications to spatially propagating radio waves.
    Comment: Submitted to IEEE Transactions on Signal and Information Processing over Networks.
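    A sketch of the final decision step (a standard local-fdr thresholding rule, shown under the assumption that lfdr estimates are already available; the paper's spectral method-of-moments estimator is not reproduced here):

```python
import numpy as np

def lfdr_discoveries(lfdr, alpha=0.05):
    # lfdr[i] estimates P(H0 | data at grid point i), however obtained.
    # Flag the largest set of grid points whose average lfdr (an estimate
    # of the FDR among the discoveries) stays below alpha.
    order = np.argsort(lfdr)
    running_mean = np.cumsum(lfdr[order]) / np.arange(1, lfdr.size + 1)
    k = np.searchsorted(running_mean, alpha, side="right")
    reject = np.zeros(lfdr.size, dtype=bool)
    reject[order[:k]] = True
    return reject

# Between sensors, decisions could use interpolated lfdrs, e.g.:
# lfdr_grid = np.interp(grid_x, sensor_x, lfdr_at_sensors)
```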

    Distributed Detection in the Presence of Byzantine Attacks


    Algorithms for energy-efficient adaptive wireless sensor networks

    In this thesis we focus on the development of energy-efficient adaptive algorithms for Wireless Sensor Networks (WSNs). Its contributions can be grouped into two main lines. First, we focus on the efficient management of energy resources in WSNs equipped with finite-size batteries and energy-harvesting devices. To that end, we propose a censoring scheme by which a node can decide whether transmitting a message is worthwhile given its current energy condition. We model the system as an infinite-horizon Markov Decision Process (MDP) and use this model to derive optimal censoring policies under certain assumptions. These policies are then analyzed in simplified scenarios to gain insight into their features. Finally, using Stochastic Approximation, we develop low-complexity censoring algorithms that approximate the optimal policy with less computational complexity and faster convergence than other approaches such as Q-learning. Second, we propose a novel diffusion scheme for adaptive distributed estimation in WSNs. This strategy, which we call Decoupled Adapt-then-Combine (D-ATC), is based on keeping an estimate that each node adapts using purely local information and then combines with the estimates diffused by the other nodes in its neighborhood. Our strategy, which is especially suitable for heterogeneous networks, is theoretically analyzed using two different techniques: the classical procedure for the transient analysis of adaptive systems and the energy conservation method. Since different combination rules are needed in the transient and steady-state regimes to obtain the best performance, we then propose two adaptive rules to learn the combination coefficients for our diffusion strategy: one based on a least-squares (LS) approximation and one based on the Affine Projection Algorithm (APA). Several experiments simulating both stationary estimation and tracking problems show that our method outperforms state-of-the-art techniques in relevant scenarios; some of these simulations also reveal the robustness of our scheme under node failures. Finally, we show that both approaches can be combined in a common setup: a WSN composed of harvesting nodes aiming to solve an adaptive distributed estimation problem. As a result, a censoring scheme is added on top of D-ATC. We show how our censoring approach helps to improve both the steady-state and the convergence performance of the diffusion scheme.
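    A minimal sketch of one D-ATC iteration as described above (our reading of the abstract, not the thesis' exact recursion; the LMS step size mu and combination matrix A are illustrative):

```python
import numpy as np

def d_atc_step(psi, A, X, d, mu):
    """One Decoupled Adapt-then-Combine iteration, sketched.
    psi[k]: node k's purely local estimate (never overwritten by fusion),
    A[l, k]: weight node k assigns to node l's diffused estimate,
    X[k]:   regressor at node k, d[k]: desired-signal sample at node k."""
    # Adapt: each node runs an LMS update with its own data only.
    for k in range(psi.shape[0]):
        err = d[k] - X[k] @ psi[k]
        psi[k] = psi[k] + mu * err * X[k]
    # Combine: fused estimates for output; the decoupled local estimates
    # psi remain the adaptation state for the next iteration.
    w = A.T @ psi  # w[k] = sum_l A[l, k] * psi[l]
    return psi, w
```

    Keeping the adaptation state purely local is what would distinguish this from standard ATC diffusion, where the combined estimate also feeds the next adaptation step.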

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as accelerometers, gyroscopes, microphones, and cameras. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time monitoring of road traffic, air and noise pollution, crime, and wildlife, to name a few. Unlike prior sensing paradigms, humans are now the primary actors of the sensing process, since they are fundamental to retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work.
    Comment: To appear in ACM Transactions on Sensor Networks (TOSN).

    Statistical models for energy-efficient selective communications in sensor networks

    An inherent characteristic of Wireless Sensor Networks is their ability to operate autonomously even though sensor node devices are resource-constrained. Optimizing energy consumption with the goal of achieving a longer network lifetime is a major challenge. This thesis focuses on energy-efficient strategies based on reducing communication processes, which are by far the most energy-expensive tasks. In particular, we analyze selective communication policies that allow sensor nodes to save energy while assuring the quantity and quality of the transmitted information. This thesis proposes selective communication strategies for energy-constrained Wireless Sensor Networks that are based on statistical models of the information flowing through the nodes. Assuming that messages are graded according to an importance/priority value (whose traffic can be statistically modeled), and that the energy consumption patterns of each individual node are known (or can be estimated), we design and evaluate optimal selective communication policies that maximize the quality of the information arriving at the destination over the network lifetime. The problem is initially stated from a decision-theoretic perspective and later reformulated as a dynamic programming problem based on Markov Decision Processes. The total importance of the transmitted, forwarded, or finally delivered messages is used as the performance measure in designing optimal transmission policies. The proposed solutions are fairly simple, being based on forwarding thresholds whose values can be estimated adaptively. Simulated numerical tests, including a target tracking scenario, corroborate the analytical claims and reveal that significant energy savings can be obtained, extending network lifetime, when implementing the proposed schemes.
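    A toy version of the threshold structure (a finite-horizon dynamic program under assumed unit transmission costs and Exp(1) message importances; the thesis' actual models and traffic statistics differ):

```python
import numpy as np

# V[t, e]: maximum expected importance sum with e energy units and
# T - t decision stages left; transmit iff importance > tau[t, e],
# where tau is the marginal value of one energy unit.
T, E = 50, 10
V = np.zeros((T + 1, E + 1))       # row e = 0 stays 0: nothing can be sent
tau = np.full((T, E + 1), np.inf)  # inf threshold = censor everything
for t in range(T - 1, -1, -1):
    for e in range(1, E + 1):
        tau[t, e] = V[t + 1, e] - V[t + 1, e - 1]
        # For X ~ Exp(1): E[(X - tau)+] = exp(-tau) when tau >= 0.
        V[t, e] = V[t + 1, e] + np.exp(-tau[t, e])
    V[t, 0] = V[t + 1, 0]
```

    The resulting policy forwards a message only when its importance exceeds the marginal value of the energy it consumes, so the node grows pickier as its battery empties, matching the forwarding-threshold behavior described above.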

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues involved in ensuring optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies, and game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine-tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identities of Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine-tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network: the error-correcting codes help correct the erroneous information from the Byzantines and thereby counter their attack. The second line of research in this thesis considers human-only networks, referred to as human networks. A similar research strategy is adopted for human networks: the effect of unskilled humans sharing beliefs with a central observer called the CEO is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls into the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit when the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance, between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems, is established. This result can be summarized as follows: sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions is even easier (discrete). Besides theoretical analysis, experimental results are reported from experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network. The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling for the design of large human-machine systems are discussed. Furthermore, an error-correcting-codes-based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of using the proposed approach in comparison to a majority-voting-based approach are highlighted using simulated and real datasets. In the final part of the thesis, a human-machine inference framework is developed in which humans and machines interact to perform complex tasks in a faster and more efficient manner. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios, where humans and machines constantly interact to perform even the simplest of tasks. While machines perform best in some tasks, humans still give better results in tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for two cases of human-machine collaboration: "machine as a coach" and "machine as a colleague". For simple systems, we demonstrate tangible performance gains from such collaboration, which provides design modules for larger and more complex human-machine systems; the details of such larger systems, however, need to be further explored.
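    As a flavor of the error-correcting-codes idea (a generic minimum-Hamming-distance fusion sketch with an arbitrary illustrative code matrix, not the thesis' specific construction):

```python
import numpy as np

# Each of 3 hypotheses is assigned a row (codeword) of a binary code
# matrix; each of 7 sensors sends one bit. The minimum pairwise Hamming
# distance here is 4, so one flipped (Byzantine/faulty) bit is corrected.
C = np.array([[0, 0, 0, 1, 1, 1, 1],
              [1, 1, 1, 0, 0, 0, 1],
              [0, 1, 1, 1, 0, 1, 0]])

def fuse(received_bits):
    # Decide for the hypothesis whose codeword is closest in Hamming distance.
    dists = np.abs(C - received_bits).sum(axis=1)
    return int(np.argmin(dists))

# True hypothesis 1, with sensor 0's bit flipped by a Byzantine:
r = np.array([0, 1, 1, 0, 0, 0, 1])
print(fuse(r))  # -> 1 despite the corrupted bit
```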