
    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be that of detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies. Game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network. Error-correcting codes help in correcting the erroneous information from these Byzantines and thereby counter their attack.

    The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks, where the effect of unskilled humans sharing beliefs with a central observer, called the CEO, is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls under the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit when the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems is established. This result can be summarized as follows: sharing beliefs (uniform) is fundamentally easier, in terms of convergence rate, than sharing measurements (Gaussian), but sharing decisions (discrete) is easier still. Besides theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network. The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling for the design of large human-machine systems are discussed. Furthermore, a scheme based on error-correcting codes is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of using the proposed approach in comparison to the majority-voting approach are highlighted using simulated and real datasets.

    In the final part of the thesis, a human-machine inference framework is developed where humans and machines interact to perform complex tasks in a faster and more efficient manner. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines constantly interact with each other to perform even the simplest of tasks. While machines perform best in some tasks, humans still give better results in tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for the two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such a collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be explored further.
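    As a rough, self-contained illustration of the majority-voting baseline against which the proposed coding-based crowdsourcing scheme is compared, the Python sketch below simulates majority-vote fusion of binary responses from workers of limited reliability; the reliability values and the number of workers are hypothetical parameters chosen for the example, not figures from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def majority_vote_error(n_workers, p_correct, n_trials=10_000):
    """Empirical error rate of majority-vote fusion of binary decisions.

    Each worker independently reports the true binary label correctly with
    probability p_correct (an illustrative reliability parameter).
    """
    truth = rng.integers(0, 2, size=n_trials)
    correct = rng.random((n_trials, n_workers)) < p_correct
    votes = np.where(correct, truth[:, None], 1 - truth[:, None])
    fused = (votes.sum(axis=1) > n_workers / 2).astype(int)
    return np.mean(fused != truth)

# Fused error rate as individual worker reliability degrades.
for p in (0.9, 0.7, 0.55):
    print(f"p_correct={p}: fused error ≈ {majority_vote_error(15, p):.3f}")
```

    The fused error shrinks quickly when individual reliability is well above one half but degrades sharply as it approaches one half, which is the regime in which the thesis reports gains for the coding-based approach over plain majority voting.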

    Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey

    Growing progress in sensor technology has constantly expanded the number and range of low-cost, small, and portable sensors on the market, increasing the number and type of physical phenomena that can be measured with wirelessly connected sensors. Large-scale deployments of wireless sensor networks (WSN) involving hundreds or thousands of devices and limited budgets often constrain the choice of sensing hardware, which generally has reduced accuracy, precision, and reliability. Therefore, it is challenging to achieve good data quality and maintain error-free measurements during the whole system lifetime. Self-calibration or recalibration in ad hoc sensor networks to preserve data quality is essential, yet challenging, for several reasons, such as the existence of random noise and the absence of suitable general models. Calibration performed in the field, without accurate and controlled instrumentation, is said to be performed in an uncontrolled environment. This paper surveys current and fundamental self-calibration approaches and models for wireless sensor networks in uncontrolled environments.
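    As a minimal, hypothetical illustration of the simplest calibration model covered by such surveys, the sketch below fits a linear gain/offset response of a low-cost sensor against a co-located reference instrument and then inverts it; the signal, drift, and noise values are invented for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical low-cost sensor whose raw output drifts by a linear
# gain/offset relative to a co-located reference instrument.
true_signal = rng.uniform(10, 40, size=200)                # reference values (e.g. °C)
raw = 0.8 * true_signal + 3.5 + rng.normal(0, 0.3, 200)    # drifted, noisy readings

# Least-squares fit of the response model raw ≈ a * true + b,
# then inversion to map raw readings back to calibrated values.
a, b = np.polyfit(true_signal, raw, deg=1)
calibrated = (raw - b) / a

print(f"estimated gain = {a:.3f}, offset = {b:.3f}")
print(f"RMSE before: {np.sqrt(np.mean((raw - true_signal) ** 2)):.2f}, "
      f"after: {np.sqrt(np.mean((calibrated - true_signal) ** 2)):.2f}")
```

    In uncontrolled environments such a reference is typically unavailable, which is precisely why the blind and collaborative self-calibration methods surveyed in the paper are needed.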

    Novel pattern recognition methods for classification and detection in remote sensing and power generation applications

    Novel pattern recognition methods for classification and detection in remote sensing and power generation applications.

    Sensors Fault Diagnosis Trends and Applications

    Fault diagnosis has always been a concern for industry. In general, diagnosis in complex systems requires the acquisition of information from sensors and the processing and extraction of the required features for the classification or identification of faults. Therefore, fault diagnosis of sensors is clearly important, as faulty information from a sensor may lead to misleading conclusions about the whole system. As engineering systems grow in size and complexity, it becomes more and more important to diagnose faulty behavior before it can lead to total failure. In light of the above issues, this book is dedicated to trends and applications in modern sensor fault diagnosis.
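    As a flavour of the simplest kind of sensor fault detection discussed in this literature, the sketch below flags a biased sensor by thresholding its residual against the median of a redundant set; the fault model and the threshold are assumptions made for the example, not methods taken from the book.

```python
import numpy as np

rng = np.random.default_rng(2)

# Five redundant sensors observing the same quantity; sensor 3 develops
# a bias fault halfway through the record (an assumed fault model).
readings = rng.normal(25.0, 0.2, size=(100, 5))
readings[60:, 3] += 4.0

# Residual of each sensor against the median of the set, flagged with a
# simple fixed threshold.
consensus = np.median(readings, axis=1, keepdims=True)
residual = np.abs(readings - consensus)
faulty = residual > 1.0

print("suspected faulty sensor index(es):", np.unique(np.where(faulty)[1]))
```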

    A New Method for Multisensor Data Fusion Based on Wavelet Transform in a Chemical Plant

    This paper presents a new multi-sensor data fusion method based on the combination of the wavelet transform (WT) and the extended Kalman filter (EKF). Input data are first filtered by a wavelet transform using Daubechies "db4" wavelet functions, and the filtered data are then fused based on variance weights in terms of minimum mean square error. The fused data are finally processed by an extended Kalman filter for the final state estimation. The recent data are recursively utilized to apply the wavelet transform and extract the variance of the updated data, which makes the method suitable for both static and dynamic systems corrupted by noisy environments. The method shows suitable performance in state estimation in comparison with alternative algorithms. A three-tank benchmark system is adopted to demonstrate the performance merits of the method compared with a known algorithm in terms of signal-to-noise ratio (SNR) and mean square error (MSE) criteria.
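    A minimal sketch of the first two stages described above, db4 wavelet filtering followed by inverse-variance (minimum mean square error) weighting of two sensors, is given below using the PyWavelets library; the EKF stage is omitted, and the signal, noise levels, and threshold are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(3)

def db4_denoise(x, level=4, thresh=0.5):
    """Soft-threshold the detail coefficients of a db4 decomposition."""
    coeffs = pywt.wavedec(x, "db4", level=level)
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, "db4")

# Two noisy sensors observing the same slowly varying level (e.g. a tank level).
truth = 2.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, 512))
s1 = truth + rng.normal(0, 0.30, truth.size)
s2 = truth + rng.normal(0, 0.15, truth.size)

f1, f2 = db4_denoise(s1), db4_denoise(s2)

# Inverse-variance (minimum-MSE) weighting of the filtered signals, using the
# removed noise as a rough estimate of each sensor's noise variance.
v1, v2 = np.var(s1 - f1), np.var(s2 - f2)
w1 = (1 / v1) / (1 / v1 + 1 / v2)
w2 = 1.0 - w1
fused = w1 * f1 + w2 * f2

print(f"MSE sensor 1: {np.mean((s1 - truth) ** 2):.4f}, "
      f"fused: {np.mean((fused - truth) ** 2):.4f}")
```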

    Distributed Inference and Learning with Byzantine Data

    We are living in an increasingly networked world with sensing networks of varying shapes and sizes: the network often comprises several tiny devices (or nodes) communicating with each other via different topologies. To make the problem even more complicated, the nodes in the network can be unreliable for a variety of reasons: noise, faults, and attacks, thus providing corrupted data. Although statistical inference has been an active area of research in the past, distributed learning and inference in a networked setup with potentially unreliable components has only gained attention recently. The emergence of the big and dirty data era demands new distributed learning and inference solutions to tackle the problem of inference with corrupted data. Distributed inference networks (DINs) consist of a group of networked entities which acquire observations regarding a phenomenon of interest (POI) and collaborate with other entities in the network by sharing their inference via different topologies to make a global inference. The central goal of this thesis is to analyze the effect of corrupted (or falsified) data on the inference performance of DINs and to design robust strategies that ensure reliable overall performance for several practical network architectures. Specifically, the inference (or learning) process can be that of detection, estimation, or classification, and the topology of the system can be parallel, hierarchical, or fully decentralized (peer to peer). Note that the corrupted data model may seem similar to the scenario where local decisions are transmitted over a binary symmetric channel (BSC) with a certain crossover probability; however, there are fundamental differences. Over the last three decades, the research community has extensively studied the impact of transmission channels or faults on distributed detection systems and related problems due to their importance in several applications. However, the corrupted (Byzantine) data models considered in this thesis are philosophically different from the BSC or faulty sensor cases. Byzantines are intentional and intelligent; therefore, they can optimize over the data corruption parameters. Thus, in contrast to channel-aware detection, both the fusion center (FC) and the Byzantines can optimize their utility by choosing their actions based on the knowledge of their opponent’s behavior. The study of these practically motivated scenarios in the presence of Byzantines is of utmost importance and is missing from the channel-aware detection and fault-tolerant detection literature. This thesis advances the distributed inference literature by providing fundamental limits of distributed inference with Byzantine data and provides optimal counter-measures (using the insights provided by these fundamental limits) from a network designer’s perspective. Note that the analysis of problems related to the strategic interaction between Byzantines and the network designer is very challenging (NP-hard in many cases). However, we show that by utilizing the properties of the network architecture, efficient solutions can be obtained. Specifically, we find that several problems related to the design of optimal counter-measures in the inference context are, in fact, special cases of these NP-hard problems that can be solved in polynomial time.

    First, we consider the problem of distributed Bayesian detection in the presence of data falsification (or Byzantine) attacks in the parallel topology. Byzantines considered in this thesis are those nodes that are compromised and reprogrammed by an adversary to transmit false information to a centralized fusion center to degrade detection performance. We show that, above a certain fraction of Byzantine attackers in the network, the detection scheme becomes completely incapable (blind) of utilizing the sensor data for detection. When the fraction of Byzantines is not sufficient to blind the FC, we also provide closed-form expressions for the optimal attacking strategies for the Byzantines that most degrade the detection performance. Optimal attacking strategies in certain cases have the minimax property, and therefore the knowledge of these strategies has practical significance and can be used to implement a robust detector at the FC.

    In several practical situations, the parallel topology cannot be implemented due to limiting factors such as the FC being outside the communication range of the nodes and the limited energy budget of the nodes. In such scenarios, a multi-hop network is employed, where nodes are organized hierarchically into multiple levels (tree networks). Next, we study the problem of distributed inference in tree topologies in the presence of Byzantines under several practical scenarios. We analytically characterize the effect of Byzantines on the inference performance of the system. We also look at possible counter-measures from the FC’s perspective to protect the network from these Byzantines. These counter-measures are of two kinds: Byzantine identification schemes and Byzantine tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. For scenarios where this is not possible, Byzantine tolerant schemes, which use game theory and error-correcting codes, are developed that tolerate the effect of Byzantines while maintaining a reasonably good inference performance in the network.

    Going a step further, we also consider scenarios where a centralized FC is not available. In such scenarios, a solution is to employ detection approaches based on fully distributed consensus algorithms, where all of the nodes exchange information only with their neighbors. For such networks, we analytically characterize the negative effect of Byzantines on the steady-state and transient detection performance of conventional consensus-based detection schemes. To avoid performance deterioration, we propose a distributed weighted-average consensus algorithm that is robust to Byzantine attacks. Next, we exploit the statistical distribution of the nodes’ data to devise techniques for mitigating the influence of data-falsifying Byzantines on the distributed detection system. Since some parameters of the statistical distribution of the nodes’ data might not be known a priori, we propose learning-based techniques to enable an adaptive design of the local fusion or update rules.

    The above considerations highlight the negative effect of corrupted data on the inference performance. However, it is possible for a system designer to utilize corrupted data for the network’s benefit. Finally, we consider the problem of detecting a high-dimensional signal based on compressed measurements with secrecy guarantees. We consider a scenario where the network operates in the presence of an eavesdropper who wants to discover the state of nature being monitored by the system. To keep the data secret from the eavesdropper, we propose to use cooperating trustworthy nodes that assist the FC by injecting corrupted data into the system to deceive the eavesdropper. We also design the system by determining the optimal values of parameters that maximize the detection performance at the FC while ensuring perfect secrecy at the eavesdropper.
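    The blinding phenomenon mentioned above can be illustrated with a toy simulation in which a fraction alpha of the sensors simply flip their binary decisions before sending them to the FC; this flipping model and the chosen probabilities are assumptions made for the sketch rather than the general attack strategies analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

def fused_statistic_gap(alpha, n_sensors=100, pd=0.8, pfa=0.2, n_trials=5_000):
    """Gap between the mean fusion statistic under H1 and under H0 when a
    fraction alpha of sensors flip their local decisions."""
    n_byz = int(alpha * n_sensors)
    means = []
    for p_one in (pd, pfa):                              # H1, then H0
        decisions = (rng.random((n_trials, n_sensors)) < p_one).astype(int)
        decisions[:, :n_byz] = 1 - decisions[:, :n_byz]  # Byzantines flip
        means.append(decisions.mean())
    return means[0] - means[1]

# As alpha approaches 0.5 the fused statistic carries no information about
# the true hypothesis: the fusion center is "blinded".
for alpha in (0.0, 0.25, 0.5):
    print(f"alpha = {alpha}: mean gap ≈ {fused_statistic_gap(alpha):.3f}")
```

    Under this simple model the gap scales as (1 − 2·alpha)·(Pd − Pfa), so the fused statistic becomes uninformative as alpha approaches one half, matching the intuition that beyond a critical fraction of Byzantines the FC can no longer exploit the sensor data.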

    Discrete Time Systems

    Discrete-time systems constitute an important and broad research field. The ongoing consolidation of digital computational means pushes this technology into the field with tremendous impact in areas such as Control, Signal Processing, Communications, System Modelling, and related Applications. This book attempts to give scope to the wide area of discrete-time systems. Its contents are conveniently grouped into sections according to significant areas, namely Filtering, Fixed and Adaptive Control Systems, Stability Problems, and Miscellaneous Applications. We believe that the contributions of the book enlarge the field of discrete-time systems with significance for the present state of the art. Despite the rapid advances in the field, we also believe that the topics described here allow us to discern some of the main research trends of the coming years.