
    Secure State Estimation in the Presence of False Information Injection Attacks

    In this dissertation, we first investigate the problem of source location estimation in wireless sensor networks (WSNs) based on quantized data in the presence of false information attacks. Using a Gaussian mixture to model the possible attacks, we develop a maximum likelihood estimator (MLE) of the source location and derive the Cramér-Rao lower bound (CRLB) for this estimation problem. We then drop the assumption that the fusion center knows the attack probability and the attack noise power: modeling both as random variables that follow uniform distributions, we derive the corresponding MLE and CRLB. The proposed estimator is shown to be robust across a range of attack probabilities and parameter mismatches. We also consider the linear state estimation problem subject to false information injection. The relationship between the attacker and the defender is modeled from a minimax perspective: the attacker tries to maximize a cost function, while the defender selects the detection threshold to minimize it. We treat both deterministic and random attack biases; in both cases we derive the probabilities of detection and miss, and the probability of false alarm based on the chi-squared distribution. The minimax optimization problem is solved numerically for both cases.
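The Gaussian-mixture attack model above can be sketched numerically. This is a simplified analog-measurement variant (the dissertation treats quantized data), with an illustrative power-decay signal model and made-up parameters: each sensor report is honest with probability 1 - p, or corrupted by high-variance attack noise with probability p, and the source location is found by a coarse grid-search MLE over the mixture likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 25 sensors in a 10x10 field measure a power-decay
# signal from a source; a fraction of reports carries added attack noise.
sensors = rng.uniform(0, 10, size=(25, 2))
true_src = np.array([4.0, 6.0])
P0, sigma, sigma_a, p_att = 100.0, 1.0, 10.0, 0.2

def signal(src):
    d2 = np.sum((sensors - src) ** 2, axis=1) + 1.0  # +1 avoids singularity
    return P0 / d2

meas = signal(true_src) + rng.normal(0, sigma, 25)
attacked = rng.random(25) < p_att
meas[attacked] += rng.normal(0, sigma_a, attacked.sum())

def log_lik(src):
    r = meas - signal(src)
    # Two-component Gaussian mixture: honest vs. attacked reports
    # (normalization constants common to both terms are dropped).
    honest = (1 - p_att) * np.exp(-r**2 / (2 * sigma**2)) / sigma
    attack = p_att * np.exp(-r**2 / (2 * (sigma**2 + sigma_a**2))) \
             / np.sqrt(sigma**2 + sigma_a**2)
    return np.sum(np.log(honest + attack))

# Coarse grid-search MLE over candidate source locations.
grid = np.linspace(0, 10, 101)
cands = [(x, y) for x in grid for y in grid]
est = max(cands, key=lambda s: log_lik(np.array(s)))
print(est)  # lands near the true source (4, 6)
```

The mixture term keeps a single grossly attacked report from dominating the likelihood, which is what makes the estimator robust to the attack parameters.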

    Distributed Detection and Estimation in Wireless Sensor Networks

    In this article, we consider the problems of distributed detection and estimation in wireless sensor networks. In the first part, we provide a general framework showing how the efficient design of a sensor network requires the joint organization of in-network processing and communication. We then recall the basic features of the consensus algorithm, a fundamental tool for reaching globally optimal decisions through a distributed approach. The main part of the paper addresses the distributed estimation problem. We first show an entirely decentralized approach, in which observations and estimations are performed without the intervention of a fusion center. We then consider the case where estimation is performed at a fusion center, showing how to allocate quantization bits and transmit powers on the links between the nodes and the fusion center so as to meet a requirement on the maximum estimation variance under a constraint on the global transmit power. We extend the approach to the detection problem. Here too we consider both the distributed approach, in which every node can reach a globally optimal decision, and the case where the decision is taken at a central node; in the latter case, we show how to allocate coding bits and transmit power to maximize the detection probability under constraints on the false alarm rate and the global transmit power. We then generalize consensus algorithms, illustrating a distributed procedure that converges to the projection of the observation vector onto a signal subspace. Next, we address the issue of energy consumption in sensor networks, showing how to optimize the network topology to minimize the energy needed to reach global consensus. Finally, we address the problem of matching the topology of the network to the graph describing the statistical dependencies among the observed variables.
    Comment: 92 pages, 24 figures. To appear in E-Reference Signal Processing, R. Chellapa and S. Theodoridis, Eds., Elsevier, 201
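The consensus algorithm recalled above is, at its core, iterated local averaging. A minimal sketch, assuming a ring topology and Metropolis-style weights (both choices are illustrative, not from the paper): every node repeatedly replaces its value with a weighted average of its own value and its two neighbours', and all values converge to the global mean.

```python
import numpy as np

n = 8
x = np.arange(n, dtype=float)     # initial local observations, mean = 3.5

# Doubly stochastic weight matrix for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(200):
    x = W @ x                     # one synchronous consensus step

print(np.round(x, 6))             # every entry has converged to 3.5
```

Because W is doubly stochastic with second-largest eigenvalue modulus below one, the disagreement shrinks geometrically; no fusion center is ever involved, which is the decentralized approach the abstract describes.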

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues needed to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, we consider the presence of malicious sensors, referred to as Byzantines: sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies, and game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance.
    When such schemes are not possible, Byzantine tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network; the codes correct the erroneous information from the Byzantines and thereby counter their attack. The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks: the effect of unskilled humans sharing beliefs with a central observer called the CEO is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls within the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit as the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance is established between the exponential behavior of discrete CEO problems and the 1/R behavior of Gaussian CEO problems. In short, sharing beliefs (uniform) is fundamentally easier, in terms of convergence rate, than sharing measurements (Gaussian), but sharing decisions (discrete) is easier still. Besides the theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network. The act of fusing decisions from multiple agents is observed in humans, and their behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling for the design of large human-machine systems are discussed. Furthermore, an error-correcting-codes-based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process.
    For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of the proposed approach over majority voting are highlighted using simulated and real datasets. In the final part of the thesis, a human-machine inference framework is developed in which humans and machines interact to perform complex tasks faster and more efficiently. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important today, when humans and machines constantly interact to perform even the simplest of tasks. While machines perform best at some tasks, humans still give better results at tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for two modes of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such collaboration, providing design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be explored further.
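The error-correcting-code idea for Byzantine tolerance can be sketched in a few lines. The setup below is hypothetical (a hand-picked 4-codeword, 7-bit code with minimum distance 4, not the thesis's construction): each hypothesis is assigned a codeword, each sensor reports one bit of the codeword for the hypothesis it detects, and the fusion center decodes to the nearest codeword in Hamming distance, absorbing a flipped (Byzantine) bit.

```python
import numpy as np

# Hypothetical code: 4 hypotheses -> 7-bit codewords, min distance 4,
# so any single flipped bit is corrected by nearest-codeword decoding.
code = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 0],
    [1, 1, 0, 0, 1, 1, 1],
])

true_h = 2
received = code[true_h].copy()
received[5] ^= 1                  # one Byzantine sensor flips its bit

# Fusion center: decode to the codeword at minimum Hamming distance.
decoded = int(np.argmin(np.sum(code != received, axis=1)))
print(decoded)  # -> 2, the true hypothesis, despite the flipped bit
```

The same decoding rule degrades gracefully as more sensors turn Byzantine, which is the "tolerant" (as opposed to "identification") flavor of mitigation described above.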

    Distributed Detection and Estimation in Wireless Sensor Networks

    Wireless sensor networks (WSNs) are typically formed by a large number of densely deployed, spatially distributed sensors with limited sensing, computing, and communication capabilities that cooperate with each other to achieve a common goal. In this dissertation, we investigate the problems of distributed detection, classification, estimation, and localization in WSNs. In this context, the sensors observe the conditions of their surrounding environment, locally process their noisy observations, and send the processed data to a central entity, known as the fusion center (FC), through parallel communication channels corrupted by fading and additive noise. The FC then combines the received information to make a global inference about the underlying phenomenon, which can be either the detection or classification of a discrete variable or the estimation of a continuous one. In the domain of distributed detection and classification, we propose a novel scheme that enables the FC to make a multi-hypothesis classification of an underlying hypothesis using only binary detections from spatially distributed sensors. This goal is achieved by exploiting the relationship between the influence fields characterizing different hypotheses and the accumulated noisy versions of the local binary decisions as received by the FC, where the influence field of a hypothesis is defined as the spatial region around it in which it can be sensed using some sensing modality. In the realm of distributed estimation and localization, we make four main contributions: (a) we formulate a general framework that estimates a vector of parameters associated with a deterministic function using spatially distributed noisy samples of the function, for both analog and digital local processing schemes; (b) we consider the estimation of a scalar random signal at the FC and derive an optimal power-allocation scheme that assigns the optimal local amplification gains to the sensors performing analog local processing, minimizing the L2-norm of the vector of local transmission powers given a maximum estimation distortion at the FC (we also propose a variant that uses a limited-feedback strategy to eliminate the requirement of perfect feedback of the instantaneous channel fading coefficients from the FC to the local sensors through infinite-rate, error-free links); (c) we propose a linear spatial collaboration scheme in which sensors collaborate by sharing their local noisy observations, and derive the optimal set of coefficients used to form linear combinations of the shared observations at the local sensors so as to minimize the total estimation distortion at the FC, given a constraint on the maximum average cumulative transmission power in the entire network; and (d) using a novel performance measure called the estimation outage, we analyze the effects of the spatial randomness of the sensor locations on the quality and performance of localization algorithms, considering an energy-based source-localization scheme under the assumption that the sensors are positioned according to a uniform clustering process.
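The core of fusion at the FC, which the distortion-constrained schemes above build on, is inverse-variance (BLUE-style) weighting: noisier sensors get smaller weights. A minimal sketch with illustrative noise variances (not the dissertation's channel-aware power allocation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Four sensors observe the same scalar theta in noise of differing
# variance; the BLUE fuses them with weights proportional to 1/variance.
theta = 3.0
var = np.array([0.5, 1.0, 2.0, 4.0])
obs = theta + rng.normal(0, np.sqrt(var), size=(20000, 4))

w = (1 / var) / np.sum(1 / var)   # inverse-variance weights, sum to 1
fused = obs @ w

print(fused.mean())               # close to theta = 3.0 (unbiased)
print(fused.var())                # below the best single-sensor variance 0.5
```

The fused variance is 1 / sum(1/var) ≈ 0.267, better than even the best individual sensor; power allocation and quantization in the dissertation then decide how cheaply each sensor can deliver its share of that gain.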

    Statistical Tools for Digital Image Forensics

    A digitally altered image, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic image. The tampering, however, may disturb some underlying statistical properties of the image. Under this assumption, we propose five techniques that quantify and detect the statistical perturbations found in different forms of tampered images: (1) re-sampled images (e.g., scaled or rotated); (2) manipulated color filter array interpolated images; (3) double JPEG compressed images; (4) images with duplicated regions; and (5) images with inconsistent noise patterns. These techniques work in the absence of any embedded watermarks or signatures. For each technique we develop the theoretical foundation, show its effectiveness on credible forgeries, and analyze its sensitivity and robustness to simple counter-attacks.
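Technique (4), duplicated-region detection, can be illustrated with exact block matching on a synthetic image. This is a deliberately simplified sketch: practical detectors match on robust block features (e.g., quantized DCT coefficients) so that recompression does not break the match, while exact pixel matching keeps the illustration short.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 32x32 "image" with a copy-move forgery: one 4x4 region
# is duplicated elsewhere in the image.
img = rng.integers(0, 256, size=(32, 32))
img[20:24, 20:24] = img[4:8, 4:8]

# Slide a 4x4 block over the image, hash each block's content, and
# report any pair of positions whose blocks are identical.
B = 4
seen, dupes = {}, []
for i in range(32 - B + 1):
    for j in range(32 - B + 1):
        key = img[i:i+B, j:j+B].tobytes()
        if key in seen:
            dupes.append((seen[key], (i, j)))
        else:
            seen[key] = (i, j)

print(dupes)  # reports the copied block pair (4, 4) <-> (20, 20)
```

On random content, accidental exact matches are vanishingly unlikely, so any reported pair is strong evidence of region duplication; sorting blocks lexicographically gives the same result in O(n log n) for full-size images.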

    Improving the Inertial Navigation System of the CV90 Platform Using Sensor Fusion

    The aim of this thesis was to synthesize and evaluate an inertial navigation system (INS) for the Combat Vehicle 90 platform. The INS that was created utilizes sensor fusion to combine the signals from the vehicle's multitude of sensors and estimate the vehicle's position and heading in a known global reference frame. The CV90's standard INS, the NAV90 system, had performed the navigation task unpredictably because it relied on heading estimates from a magnetic compass, which is strongly influenced by nearby metallic objects, e.g. other vehicles. This thesis demonstrates that, with a two-axis gyroscope mounted on the weapon's rotational axis, the position and heading estimates from the INS can continue to provide reliable information even during long periods without GPS signal reception.
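The fusion idea above can be sketched with a scalar Kalman filter on the heading alone: integrate a biased, noisy gyro rate for a smooth but drifting heading, then correct it with intermittent GPS course fixes. All rates, noise figures, and update intervals here are illustrative, not CV90 or NAV90 specifications.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, T = 0.1, 600                   # 60 s of driving at 10 Hz
true_rate = 0.05                   # rad/s constant turn (illustrative)
gyro_bias = 0.01                   # uncompensated gyro drift

heading_est, P = 0.0, 1.0          # state estimate and its variance
Q, R = 1e-4, 0.05**2               # process / GPS measurement noise
true_heading = 0.0
for k in range(T):
    true_heading += true_rate * dt
    # Predict: integrate the (biased, noisy) gyro rate.
    rate_meas = true_rate + gyro_bias + rng.normal(0, 0.005)
    heading_est += rate_meas * dt
    P += Q
    # Correct: a GPS course fix arrives once per second.
    if k % 10 == 0:
        z = true_heading + rng.normal(0, 0.05)
        K = P / (P + R)
        heading_est += K * (z - heading_est)
        P = (1 - K) * P

print(abs(heading_est - true_heading))  # error stays small despite drift
```

Without the correction step the bias alone would accumulate about 0.6 rad over this run; the periodic fixes keep the error bounded, which is exactly why the fused system survives GPS outages better than a compass-driven one, provided the outages are not unbounded.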

    Estimation and control with limited information and unreliable feedback

    Advancement in sensing technology is introducing new sensors that can provide information that was not available before. This creates many opportunities for the development of new control systems. However, the measurements provided by these sensors may not satisfy the classical assumptions from the control literature, so standard control tools fail to maximize performance in control systems that use them. In this work, we formulate new assumptions on the measurements that reflect these new sensing capabilities, and we develop and analyze control tools that outperform the standard tools under these assumptions. Specifically, we assume that the measurements are quantized. This assumption applies, for example, to low-resolution sensors, remote sensing over limited-bandwidth communication links, and vision-based control. We also assume that some of the measurements may be faulty. This assumption applies to advanced sensors such as GPS and video surveillance, as well as to remote sensing over unreliable communication links. The first tool that we develop is a dynamic quantization scheme that renders a control system stable under any bounded disturbance using the minimum number of quantization regions. Both full state feedback and output feedback are considered, as well as nonlinear systems. We further show that our approach remains stable under modeling errors and delays. The main analysis tool we use to prove these results is the nonlinear input-to-state stability property. The second tool that we analyze is the Minimum Sum of Distances estimator, which is robust to faulty measurements. We prove that this robustness is maintained when the measurements are also corrupted by noise, and that the estimate is stable with respect to such noise. We also develop an algorithm to compute the maximum number of faulty measurements to which this estimator is robust.
    The last tool we consider is motivated by vision-based control systems. We use a nonlinear optimization over both the model parameters and the state of the plant to estimate these quantities. Using the example of an automatic landing controller, we demonstrate the performance improvement attainable with such a tool.
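A minimum-sum-of-distances estimate in the plane is the geometric median, the point minimizing the sum of Euclidean distances to the measurements. The sketch below computes it with Weiszfeld's fixed-point iteration on made-up data with one grossly faulty reading; it illustrates the robustness property the thesis analyzes, not the thesis's own algorithm.

```python
import numpy as np

# Four consistent measurements near (1, 1) and one faulty outlier.
meas = np.array([
    [1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 0.95],
    [50.0, -40.0],                      # faulty sensor
])

x = meas.mean(axis=0)                   # start at the (badly biased) mean
for _ in range(100):
    d = np.linalg.norm(meas - x, axis=1)
    d = np.maximum(d, 1e-9)             # guard against division by zero
    w = 1.0 / d                         # Weiszfeld reweighting
    x = (w[:, None] * meas).sum(axis=0) / w.sum()

print(np.round(x, 2))  # stays near (1, 1); the mean would be ~(10.8, -7.2)
```

Because each measurement's influence on the objective is bounded by its unit direction vector, a minority of arbitrarily bad measurements cannot drag the estimate away, which is the robustness-to-faults guarantee the abstract refers to.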