
    Monitoring of Wireless Sensor Networks


    High Accuracy Distributed Target Detection and Classification in Sensor Networks Based on Mobile Agent Framework

    High-accuracy distributed information exploitation plays an important role in sensor networks. This dissertation describes a mobile-agent-based framework for target detection and classification in sensor networks. Specifically, we tackle the challenging problems of multiple-target detection, high-fidelity target classification, and unknown-target identification. We present a progressive multiple-target detection approach that estimates the number of targets sequentially and implement it using a mobile-agent framework. To further improve performance, we present a cluster-based distributed approach in which the estimates from different clusters are fused. Experimental results show that the distributed scheme with Bayesian fusion performs best, with the highest detection probability and the most stable performance. In addition, the progressive intra-cluster estimation reduces data transmission by 83.22% and conserves energy by 81.64% compared to the centralized scheme. For collaborative target classification, we develop a general-purpose multi-modality, multi-sensor fusion hierarchy for information integration in sensor networks. The hierarchy is composed of four levels of enabling algorithms: local signal processing, temporal fusion, multi-modality fusion, and multi-sensor fusion using a mobile-agent-based framework. The fusion hierarchy ensures fault tolerance and thus generates robust results, while also taking energy efficiency into account. Experimental results based on two field demos show consistent improvement of classification accuracy across the levels of the hierarchy. Unknown-target identification in sensor networks corresponds to the capability of detecting targets without any a priori information and of modifying the knowledge base dynamically. We present a collaborative method to solve this problem among multiple sensors. When applied to the military-vehicle data set collected in a field demo, about 80% of unknown target samples are recognized correctly, while the known-target classification accuracy stays above 95%.
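
    As a rough illustration of the fusion step described above, the sketch below (not the dissertation's code) fuses per-cluster posteriors over the number of targets with a simple Bayesian product rule; the posterior representation, the conditional-independence assumption across clusters, and the function name are illustrative.

```python
# Hedged sketch: Bayesian fusion of per-cluster estimates of the target count.
# Each cluster reports p(k | its local data) for k = 0..K_MAX; assuming
# conditionally independent cluster observations, the fused posterior is
# proportional to the product of the cluster posteriors.
import numpy as np

def bayes_fuse_target_count(cluster_posteriors, prior=None):
    """cluster_posteriors: list of 1-D arrays, each summing to 1 over k = 0..K_MAX."""
    posteriors = np.asarray(cluster_posteriors, dtype=float)
    k_max = posteriors.shape[1] - 1
    fused = np.ones(k_max + 1) if prior is None else np.asarray(prior, dtype=float)
    for p in posteriors:
        fused *= p                       # combine evidence from each cluster
    fused /= fused.sum()                 # renormalize to a valid distribution
    return fused, int(np.argmax(fused))  # fused posterior and MAP target count

# Example: three clusters with noisy local estimates of the target count.
clusters = [
    [0.05, 0.20, 0.60, 0.15],   # cluster 1 favors 2 targets
    [0.10, 0.15, 0.55, 0.20],   # cluster 2 favors 2 targets
    [0.05, 0.40, 0.45, 0.10],   # cluster 3 is less certain
]
posterior, k_hat = bayes_fuse_target_count(clusters)
print(posterior, k_hat)          # MAP estimate: 2 targets
```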

    A Fault Tolerant System for an Integrated Avionics Sensor Configuration

    An aircraft sensor fault tolerant system methodology for the Transport Systems Research Vehicle in a Microwave Landing System (MLS) environment is described. The fault tolerant system provides reliable estimates in the presence of possible failures both in ground-based navigation aids and in on-board flight control and inertial sensors. Sensor failures are identified by utilizing the analytic relationships between the various sensors arising from the aircraft point-mass equations of motion. The estimation and failure detection performance of the software implementation (called FINDS) of the developed system was analyzed on a nonlinear digital simulation of the research aircraft. Simulation results showing the detection performance of FINDS, using a dual-redundant sensor complement, are presented for bias, hardover, null, ramp, increased-noise and scale-factor failures. In general, the results show that FINDS can distinguish between normal operating sensor errors and failures while providing excellent detection speed for bias failures in the MLS, indicated airspeed, attitude and radar altimeter sensors.
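
    The sketch below illustrates the general analytic-redundancy idea in a heavily simplified form, not the FINDS design: a residual between a sensor reading and an estimate derived from other sensors is monitored over a sliding window, and a persistent offset is flagged as a bias failure. The window length, threshold, and noise level are assumed values.

```python
# Hedged sketch of a minimal analytic-redundancy bias-failure test.
# The residual between a sensor reading and an estimate obtained from other
# sensors should be zero-mean under normal operation; a persistent offset in
# a sliding window suggests a bias failure.
from collections import deque

class BiasFailureDetector:
    def __init__(self, window=50, threshold=3.0, noise_sigma=1.0):
        self.residuals = deque(maxlen=window)
        self.threshold = threshold      # in units of residual standard deviations
        self.noise_sigma = noise_sigma  # assumed residual noise level

    def update(self, measured, estimated):
        self.residuals.append(measured - estimated)
        n = len(self.residuals)
        mean_residual = sum(self.residuals) / n
        # Standard error of the windowed mean under the no-failure hypothesis.
        std_err = self.noise_sigma / (n ** 0.5)
        return abs(mean_residual) > self.threshold * std_err  # True -> declare bias failure

# Example: a 0.8-unit bias appears halfway through the sequence.
det = BiasFailureDetector(window=20, threshold=3.0, noise_sigma=0.1)
flags = [det.update(measured=(0.8 if k >= 50 else 0.0), estimated=0.0) for k in range(100)]
print(flags.index(True))   # first sample at which the bias is declared
```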

    Definition and Empirical Evaluation of Voters for Redundant Smart Sensor Systems

    This study is a first attempt to integrate voting algorithms with fault diagnosis devices. Voting algorithms are used to arbitrate between the results of redundant modules in fault-tolerant systems. Smart sensors are used for FDI (Fault Detection and Isolation) purposes by means of their built-in intelligence. Integrating fault masking and FDI strategies is necessary in the construction of ultra-available/safe systems with on-line fault detection capability. This article introduces a range of novel software voting algorithms that adjudicate among the results of redundant smart sensors in a Triple Modular Redundant (TMR) system. Techniques to integrate replicated smart sensors with the fault-masking approach are discussed, and a classification of hybrid voters is provided based on result and confidence values, which affect the metrics of availability and safety. Thus, voters are classified into four groups: Independent-diagnostic safety-optimised voters, Integrated-diagnostic safety-optimised voters, Independent-diagnostic availability-optimised voters and Integrated-diagnostic availability-optimised voters. The properties of each category are explained, and sample versions of each class as well as their possible application areas are discussed. Keywords: Ultra-Available System, Smart Sensor, Fault Masking, Triple Modular Redundancy.
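
    A minimal sketch of what one safety-optimised hybrid voter might look like, assuming each smart sensor reports a value plus a self-assessed confidence; the confidence threshold, agreement tolerance, and fail-safe behavior are illustrative assumptions rather than the algorithms defined in the article.

```python
# Hedged sketch of a safety-optimised TMR voter: it only accepts a result when
# at least two high-confidence channels agree within a tolerance; otherwise it
# raises a fail-safe exception.
from itertools import combinations

def safety_optimised_voter(readings, confidences, conf_min=0.8, tol=0.5):
    """readings, confidences: length-3 sequences from the three smart sensors."""
    trusted = [(r, c) for r, c in zip(readings, confidences) if c >= conf_min]
    # Look for any pair of trusted channels that agree within the tolerance.
    for (r1, _), (r2, _) in combinations(trusted, 2):
        if abs(r1 - r2) <= tol:
            return (r1 + r2) / 2.0       # masked (voted) output
    raise RuntimeError("No agreement among trusted channels: fail safe")

# Example: sensor 3 reports a faulty value but flags low confidence.
print(safety_optimised_voter([10.1, 10.3, 42.0], [0.95, 0.9, 0.2]))  # -> ~10.2
```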

    Robust and reliable decision-making systems and algorithms

    We investigate robustness and reliability in decision-making systems and algorithms based on the tradeoff between cost and performance. We propose two abstract frameworks to investigate robustness and reliability concerns, which critically impact the design and analysis of systems and algorithms built from unreliable components. We consider robustness in online systems and algorithms under the framework of online optimization subject to adversarial perturbations. The framework of online optimization models a rich class of problems from information theory, machine learning, game theory, optimization, and signal processing. This is a repeated-game framework where, on each round, a player selects an action from a decision set using a randomized strategy, and then Nature reveals a loss function for this action, for which the player incurs a loss. Using a worst-case adversary framework to model the perturbations, we introduce a randomized algorithm that is provably robust even against such adversarial attacks. In particular, we show that this algorithm is Hannan-consistent with respect to a rich class of randomized strategies under mild regularity conditions. We next focus on reliability of decision-making systems and algorithms based on the problem of fusing several unreliable computational units that perform the same task under cost and fidelity constraints. In particular, we model the relationship between the fidelity of the outcome and the cost of computing it as an additive perturbation. We analyze the performance of repetition-based strategies that distribute cost across several unreliable units and fuse their outcomes. When the cost is a convex function of fidelity, the optimal repetition-based strategy, in terms of minimizing total incurred cost while achieving a target mean-square error (MSE), may fuse several computational units. For concave and linear costs, a single more reliable unit incurs lower cost than a fusion of several cheaper, less reliable units while achieving the same MSE performance. We show how our results give insight into problems from theoretical neuroscience, circuits, and crowdsourcing. We finally study an application of a partial-information extension of the cost-fidelity framework of this dissertation to a stochastic gradient descent problem, where the underlying cost-fidelity function is assumed to be unknown. We present a generic framework for trading off fidelity and cost in computing stochastic gradients when the costs of acquiring stochastic gradients of different quality are not known a priori. We consider a mini-batch oracle that distributes a limited query budget over a number of stochastic gradients and aggregates them to estimate the true gradient. Since the optimal mini-batch size depends on the unknown cost-fidelity function, we propose an algorithm, EE-Grad, that sequentially explores the performance of mini-batch oracles and exploits the accumulated knowledge to estimate the one achieving the best performance in terms of cost efficiency. We provide performance guarantees for EE-Grad with respect to the optimal mini-batch oracle, and illustrate these results in the case of strongly convex objectives.
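
    The repetition-based tradeoff can be illustrated with a small numerical sketch (an assumption-laden toy model, not the dissertation's analysis): averaging n independent units of per-unit fidelity f gives MSE 1/(n·f) at total cost n·c(f), and the cheapest configuration meeting a target MSE flips from many cheap units to a single reliable unit as c moves from convex to concave.

```python
# Hedged toy model of the repetition-based strategy: per-unit error variance
# is 1/f, so averaging n i.i.d. units gives MSE = 1/(n*f) at cost n * c(f).
import math

def cheapest_repetition(cost_fn, target_mse, fidelities, max_units=50):
    best = None
    for f in fidelities:
        n = math.ceil(1.0 / (target_mse * f))   # smallest n meeting the MSE target
        if n <= max_units:
            total_cost = n * cost_fn(f)
            if best is None or total_cost < best[0]:
                best = (total_cost, n, f)
    return best                                  # (total cost, number of units, per-unit fidelity)

fidelities = [0.5 * k for k in range(1, 41)]     # candidate per-unit fidelities 0.5 .. 20
convex_cost  = lambda f: f ** 2                  # convex in fidelity: many cheap units win
concave_cost = lambda f: math.sqrt(f)            # concave in fidelity: one reliable unit wins
print(cheapest_repetition(convex_cost, target_mse=0.1, fidelities=fidelities))
print(cheapest_repetition(concave_cost, target_mse=0.1, fidelities=fidelities))
```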

    Intelligent fault detection and classification based on hybrid deep learning methods for Hardware-in-the-Loop test of automotive software systems

    Hardware-in-the-Loop (HIL) testing has been recommended by ISO 26262 as an essential test bench for determining the safety and reliability characteristics of automotive software systems (ASSs). However, due to the complexity and the huge amount of data recorded by the HIL platform during the testing process, conventional data analysis methods that rely on human experts for detecting and classifying faults are not practical. Therefore, effective means based on historical data sets are required to analyze the records of the testing process efficiently. Even though data-driven fault diagnosis is superior to other approaches, selecting the appropriate technique from the wide range of Deep Learning (DL) techniques is challenging. Moreover, training data containing automotive faults are rare and considered highly confidential by the automotive industry. Using hybrid DL techniques, this study proposes a novel intelligent fault detection and classification (FDC) model to be utilized during the V-cycle development process, i.e., the system integration testing phase. To this end, an HIL-based real-time fault injection framework is used to generate faulty data without altering the original system model. In addition, a combination of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) is employed to build the model structure. In this study, eight types of sensor faults are considered to cover the most common potential faults in the signals of ASSs. As a case study, a gasoline engine system model is used to demonstrate the capabilities and advantages of the proposed method and to verify the performance of the model. The results show that the proposed method achieves better detection and classification performance than standalone DL methods. Specifically, the overall detection accuracies of the proposed structure in terms of precision, recall and F1-score are 98.86%, 98.90% and 98.88%, respectively. For classification, the experimental results also demonstrate its superiority on unseen test data, with an average accuracy of 98.8%.
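
    A minimal PyTorch sketch of a CNN + LSTM hybrid in the spirit of the model described above; the layer sizes, window length, number of input signals, and the healthy-plus-eight-fault class layout are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a CNN + LSTM hybrid for sensor-fault classification.
import torch
import torch.nn as nn

class CNNLSTMFaultClassifier(nn.Module):
    def __init__(self, n_signals=4, n_classes=9, window=256):
        super().__init__()
        # 1-D convolutions extract local features from the recorded HIL signals.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_signals, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models the temporal evolution of the convolutional features.
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)     # healthy + 8 sensor-fault classes

    def forward(self, x):                        # x: (batch, n_signals, window)
        feats = self.cnn(x)                      # (batch, 64, window // 4)
        feats = feats.transpose(1, 2)            # (batch, time, features)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                # class logits

# Example forward pass on a random batch of signal windows.
model = CNNLSTMFaultClassifier()
logits = model(torch.randn(8, 4, 256))
print(logits.shape)   # torch.Size([8, 9])
```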

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies. Game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network. Error-correcting codes help in correcting the erroneous information from these Byzantines and thereby counter their attack. The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks: the effect of unskilled humans sharing beliefs with a central observer, called the CEO, is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls under the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit when the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems is established. This result can be summarized as the fact that sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions is even easier (discrete). Besides theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network.
The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling for the design of large human-machine systems are discussed. Furthermore, an error-correcting-codes-based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of using the proposed approach in comparison to the majority-voting-based approach are highlighted using simulated and real datasets. In the final part of the thesis, a human-machine inference framework is developed where humans and machines interact to perform complex tasks in a faster and more efficient manner. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines constantly interact with each other to perform even the simplest of tasks. While machines perform best in some tasks, humans still give better results in tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for the two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such a collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be further explored.
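
    The coding idea can be sketched as follows (an illustration under simplified assumptions, not the thesis's exact construction): each class is assigned a codeword, each worker answers one binary micro-task corresponding to a column of the code matrix, and the final decision is the class whose codeword is closest in Hamming distance to the collected, possibly erroneous, answers.

```python
# Hedged sketch of error-correcting-code-based aggregation of unreliable answers.
import numpy as np

# Code matrix A: rows = classes, columns = one binary question per worker.
# The rows have pairwise Hamming distance 4, so a single flipped answer is tolerated.
A = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0, 1],
])

def decode(worker_answers):
    """worker_answers: length-7 vector of 0/1 responses, possibly noisy."""
    distances = (A != np.asarray(worker_answers)).sum(axis=1)
    return int(np.argmin(distances))     # minimum-Hamming-distance class

true_class = 2
answers = A[true_class].copy()
answers[4] ^= 1                          # one unreliable worker flips an answer
print(decode(answers))                   # still decodes class 2
```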

    A comparison of different approaches to target differentiation with sonar

    Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Science of Bilkent University, 2001. Thesis (Ph.D.) -- Bilkent University, 2001. Includes bibliographical references (leaves 180-197). This study compares the performances of different classification schemes and fusion techniques for target differentiation and localization of commonly encountered features in indoor robot environments using sonar sensing. Differentiation of such features is of interest for intelligent systems in a variety of applications such as system control based on acoustic signal detection and identification, map building, navigation, obstacle avoidance, and target tracking. The classification schemes employed include the target differentiation algorithm developed by Ayrulu and Barshan, statistical pattern recognition techniques, the fuzzy c-means clustering algorithm, and artificial neural networks. The fusion techniques used are Dempster-Shafer evidential reasoning and different voting schemes. To solve the consistency problem arising in simple majority voting, different voting schemes including preference ordering and reliability measures are proposed and verified experimentally. To improve the performance of neural network classifiers, different input signal representations, two different training algorithms, and both modular and non-modular network structures are considered. The best classification and localization scheme is found to be the neural network classifier trained with the wavelet transform of the sonar signals. This method is applied to map building in mobile robot environments. Physically different sensors, such as infrared sensors and structured light systems besides sonar sensors, are also considered to improve the performance in target classification and localization. Ayrulu (Erdem), Birsel. Ph.D.
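
    For the Dempster-Shafer fusion mentioned above, a minimal sketch of Dempster's rule of combination for two sonar classifiers over a two-hypothesis frame ("plane" vs. "corner") is given below; the mass assignments and the restriction to singleton focal elements plus ignorance are illustrative simplifications.

```python
# Hedged sketch of Dempster's rule of combination: each sonar classifier assigns
# belief mass to "plane", "corner" and to the whole frame (ignorance, "theta").
def dempster_combine(m1, m2, frame=("plane", "corner")):
    """m1, m2: dicts mapping 'plane', 'corner' and 'theta' (ignorance) to masses."""
    combined = {h: 0.0 for h in frame}
    combined["theta"] = m1["theta"] * m2["theta"]
    conflict = 0.0
    for h in frame:
        combined[h] = (m1[h] * m2[h]
                       + m1[h] * m2["theta"]
                       + m1["theta"] * m2[h])
        for g in frame:
            if g != h:
                conflict += m1[h] * m2[g]    # mass assigned to contradictory pairs
    # Normalize by 1 - conflict (Dempster's rule).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

sensor_a = {"plane": 0.6, "corner": 0.1, "theta": 0.3}
sensor_b = {"plane": 0.5, "corner": 0.2, "theta": 0.3}
print(dempster_combine(sensor_a, sensor_b))   # fused masses favor "plane"
```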

    Classification techniques on computerized systems to predict and/or to detect Apnea: A systematic review

    Sleep apnea syndrome (SAS), which can significantly decrease quality of life, is associated with major health risks such as increased cardiovascular disease, sudden death, depression, irritability, hypertension, and learning difficulties. Thus, it is relevant and timely to present a systematic review describing significant applications in the framework of computational intelligence-based SAS detection, including performance, beneficial and challenging effects, and modeling for decision-making across multiple scenarios.