
    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues involved in ensuring optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference task may be detection, estimation, or classification, and the agents in the system may be sensors, humans, or both. Components can be unreliable for a variety of reasons: faulty sensors, security attacks that cause sensors to send falsified information, or unskilled human workers who provide imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference.

    The first part of the thesis studies systems containing only sensors, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. The effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies, as illustrated by the sketch below, and game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine-tolerant schemes. Learning-based Byzantine identification schemes infer the identities of the Byzantines in the network and use this information to improve system performance. When such schemes are not feasible, Byzantine-tolerant schemes based on error-correcting codes are developed that withstand the effect of Byzantines and maintain good performance; the error-correcting codes correct the erroneous information injected by the Byzantines and thereby counter the attack.

    The second line of research considers human-only networks, referred to as human networks. A similar strategy is adopted: the effect of unskilled humans sharing beliefs with a central observer, called the CEO, is analyzed, and the loss in performance due to such unskilled humans is characterized. This problem belongs to the family of problems known in the information theory literature as the CEO problem, here posed for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit as the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance is established between the exponential behavior of discrete CEO problems and the 1/R behavior of Gaussian CEO problems. In short, sharing beliefs (uniform) is fundamentally easier, in terms of convergence rate, than sharing measurements (Gaussian), but sharing decisions (discrete) is easier still. Beyond the theoretical analysis, experimental results are reported from experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network.
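    To make the Byzantine attack analysis above concrete, the following Python sketch (not from the thesis; the sensor count, local detection probability, and simple decision-flipping attack model are illustrative assumptions) simulates majority-rule decision fusion as the Byzantine fraction alpha grows. Once alpha approaches 0.5, the flipped votes cancel the honest ones and the fusion center is effectively blinded, which is the kind of optimal-attack characterization the thesis develops.

```python
import numpy as np

rng = np.random.default_rng(0)

def fusion_error_rate(n_sensors, p_correct, alpha, n_trials=20000):
    """Monte Carlo estimate of majority-rule fusion error when a fraction
    `alpha` of the sensors are Byzantine and flip their local decisions."""
    n_byz = int(alpha * n_sensors)
    errors = 0
    for _ in range(n_trials):
        # True hypothesis is H1 (the problem is symmetric by construction).
        honest_votes = rng.random(n_sensors - n_byz) < p_correct  # correct w.p. p
        byz_votes = rng.random(n_byz) >= p_correct                # flipped decisions
        if honest_votes.sum() + byz_votes.sum() <= n_sensors / 2:
            errors += 1                                           # majority decides H0
    return errors / n_trials

for alpha in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"alpha = {alpha:.1f}: fusion error rate ~ {fusion_error_rate(25, 0.8, alpha):.3f}")
```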
    The act of fusing decisions from multiple agents is also observed in humans, and this behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling for the design of large human-machine systems are discussed. Furthermore, an error-correcting-code-based scheme is proposed to improve system performance when unreliable humans take part in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of the proposed approach over a majority-voting-based approach are demonstrated using simulated and real datasets (see the sketch following this abstract).

    In the final part of the thesis, a human-machine inference framework is developed in which humans and machines interact to perform complex tasks faster and more efficiently. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is important for current scenarios in which humans and machines constantly interact to perform even the simplest of tasks. While machines perform best on some tasks, humans still give better results on tasks such as identifying new patterns; by using humans and machines together, one can extract more complete information about a phenomenon of interest. This architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, tangible performance gains from such collaboration are demonstrated, providing design modules for larger and more complex human-machine systems; the details of such larger systems need to be explored further.
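    A minimal sketch of the coded-crowdsourcing idea, under assumed parameters (the 4-class, length-5 code with minimum distance 3 and the independent worker-error model are illustrative, not the thesis's construction): classes are mapped to codewords, each worker answers one easy binary micro-task, and the fusion center decodes the collected bits to the nearest codeword, so a limited number of wrong answers is corrected rather than merely outvoted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: M = 4 classes, each mapped to a 5-bit codeword of an
# illustrative code with minimum Hamming distance 3 (corrects 1 bit error).
CODE = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 1, 0, 1],
                 [1, 0, 1, 1, 0],
                 [1, 1, 0, 1, 1]])

def coded_crowdsourcing_error(p_err, n_trials=50000):
    """Each worker answers one binary micro-task (one code bit) and is wrong
    independently with probability `p_err`; the fusion center decodes the
    received bits to the nearest codeword in Hamming distance."""
    errors = 0
    for _ in range(n_trials):
        true_class = rng.integers(len(CODE))
        flips = rng.random(CODE.shape[1]) < p_err       # worker mistakes
        received = CODE[true_class] ^ flips
        dists = (CODE ^ received).sum(axis=1)           # Hamming distances
        if dists.argmin() != true_class:
            errors += 1
    return errors / n_trials

for p in (0.05, 0.10, 0.20):
    print(f"p_err = {p:.2f}: misclassification rate ~ {coded_crowdsourcing_error(p):.3f}")
```

    A per-bit majority-voting baseline would instead replicate each micro-task across several workers and decide each bit separately; the coded scheme exploits the distance structure of the code to correct errors jointly across bits.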

    Target localization in wireless sensor networks for industrial control with selected sensors

    This paper presents a novel energy-based target localization method for wireless sensor networks with selected sensors. In this method, sensors use a Turbo Product Code (TPC) to transmit their decisions to the fusion center; TPC reduces the bit error probability when the communication channel introduces errors. Moreover, the thresholds for energy-based target localization are designed using a heuristic method suitable for uniformly distributed sensors and normally distributed targets. Furthermore, to save sensor energy, a sensor selection method is presented. Simulation results show that if sensors use TPC instead of a Hamming code to transmit decisions to the fusion center, localization performance improves. The sensor selection method also substantially reduces the energy consumption of the localization method while still providing satisfactory localization performance.
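    The following Python sketch illustrates the general flavor of energy-based target localization from one-bit sensor decisions (the path-loss model, threshold, noise level, and maximum-likelihood grid search are assumptions for illustration; the paper's heuristic threshold design, TPC-coded transmission, and sensor selection are not reproduced here):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical setup: sensors on a coarse grid observe signal energy that
# decays with distance from the target, threshold it to a single bit, and
# report that bit to the fusion center.
P0, THRESH, NOISE = 100.0, 1.5, 0.5           # illustrative parameters
sensors = np.array([(x, y) for x in range(0, 50, 10)
                    for y in range(0, 50, 10)], dtype=float)

def mean_energy(target):
    d2 = ((sensors - target) ** 2).sum(axis=1)
    return P0 / (1.0 + d2)                    # assumed path-loss model

def localize(bits, grid):
    """Pick the grid point maximizing the likelihood of the received bits."""
    best, best_ll = None, -np.inf
    for g in grid:
        p1 = norm.sf(THRESH, loc=mean_energy(g), scale=NOISE)  # P(bit = 1)
        ll = np.sum(bits * np.log(p1 + 1e-12) + (1 - bits) * np.log(1 - p1 + 1e-12))
        if ll > best_ll:
            best, best_ll = g, ll
    return best

target = np.array([23.0, 31.0])
bits = (mean_energy(target) + rng.normal(0.0, NOISE, len(sensors)) > THRESH).astype(int)
grid = np.array([(x, y) for x in range(0, 50, 2) for y in range(0, 50, 2)], dtype=float)
print("true target:", target, " ML estimate:", localize(bits, grid))
```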

    Cloud-aided wireless systems: communications and radar applications

    This dissertation focuses on cloud-assisted radio technologies for communication, including mobile cloud computing and the Cloud Radio Access Network (C-RAN), and for radar systems.

    The dissertation first concentrates on cloud-aided communications. Mobile cloud computing, which allows mobile users to run computationally heavy applications on battery-limited devices such as cell phones, is considered initially. Mobile cloud computing enables the offloading of computation-intensive applications from a mobile device to a cloud processor via a wireless interface. The interplay between offloading decisions at the application layer and physical-layer parameters, which determine the energy and latency associated with mobile-cloud communication, motivates the inter-layer optimization of fine-grained task offloading across both layers. This problem is modeled using application call graphs, and the joint optimization of application-layer and physical-layer parameters is carried out via a message-passing algorithm that minimizes the total energy expenditure of the mobile user.

    The concept of cloud radio is also considered for two cellular architectures, known as Distributed RAN (D-RAN) and C-RAN, in which the baseband processing of base stations is carried out in a remote Baseband Processing Unit (BBU). These architectures can reduce the capital and operating expenses of dense deployments at the cost of increased communication latency. The effect of this latency, which is due to the fronthaul transmission between the Remote Radio Head (RRH) and the BBU, is studied for the implementation of Hybrid Automatic Repeat Request (HARQ) protocols. Specifically, two novel solutions based on the control-data separation architecture are proposed. The trade-offs involving resources such as the number of transmitting and receiving antennas, the transmission power, and the blocklength of the transmitted codeword, as well as the performance of the proposed solutions, are investigated through analysis and numerical results.

    The detection of a target in radar systems requires processing of the signal received by the sensors. As in cloud radio access networks for communications, this processing can be carried out at a remote Fusion Center (FC) connected to all sensors via limited-capacity fronthaul links. The last part of the dissertation explores the application of cloud radio to radar systems. In particular, the problem of maximizing the detection performance at the FC, jointly over the code vector used by the transmitting antenna and over the statistics of the noise introduced by quantization at the sensors for fronthaul transmission, is investigated by adopting the information-theoretic criterion of the Bhattacharyya distance together with information-theoretic bounds on the quantization rate.
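    As a small illustration of the design criterion used in the radar part, the sketch below computes the Bhattacharyya distance between the Gaussian hypothesis distributions of a scalar detection problem when fronthaul quantization is modeled, as a simplifying assumption, as additive noise of variance q; increasing q shrinks the distance and thereby bounds the achievable detection performance. The additive-quantization-noise model and all parameter values are illustrative, not the dissertation's exact setup.

```python
import numpy as np

def bhattacharyya_gaussian(mu0, var0, mu1, var1):
    """Bhattacharyya distance between N(mu0, var0) and N(mu1, var1)."""
    return (0.25 * (mu0 - mu1) ** 2 / (var0 + var1)
            + 0.5 * np.log(0.5 * (var0 + var1) / np.sqrt(var0 * var1)))

# H0: noise only; H1: target echo present. Quantization for fronthaul
# transmission is modeled, as a simplifying assumption, as additive noise
# of variance q on the samples forwarded to the fusion center.
mu_echo, var_noise = 2.0, 1.0
for q in (0.0, 0.5, 1.0, 2.0):
    b = bhattacharyya_gaussian(0.0, var_noise + q, mu_echo, var_noise + q)
    print(f"quantization noise q = {q:.1f}: Bhattacharyya distance = {b:.3f}")
```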