
    Model-based detection in cyber-physical systems


    A Bayesian Approach to Sensor Placement and System Health Monitoring

    System health monitoring and sensor placement are areas of great technical and scientific interest. Prognostics and health management of a complex system require multiple sensors to extract the required information from the sensed environment, because no single sensor can obtain all of the required information reliably at all times. The increasing costs of aging systems and infrastructures have become a major concern, and system health monitoring techniques can ensure increased safety and reliability of these systems. Similar concerns also exist for newly designed systems. The main objectives of this research were: (1) to find an effective way to perform optimal functional sensor placement under uncertainty, and (2) to develop a system health monitoring approach with both prognostic and diagnostic capabilities given limited and uncertain information from sensing and monitoring points. This dissertation provides a functional/information-based sensor placement methodology for monitoring the health (state of reliability) of a system and utilizes it in a new system health monitoring approach. The developed sensor placement method is based on Bayesian techniques and is capable of functional sensor placement under uncertainty; it also takes into account the uncertainty inherent in the characteristics of the sensors themselves. It uses Bayesian networks for modeling and reasoning about these uncertainties and for updating the state of knowledge about the unknowns of interest, and it uses information metrics to rank candidate placements by the amount of information each possible sensor placement scenario provides. A new system health monitoring methodology is also developed which: (1) can assess the current state of a system's health and predict the remaining life of the system (prognosis), and (2) through appropriate data processing and interpretation, can point to elements of the system that have caused or are likely to cause system failure or degradation (diagnosis). It can also be set up as a dynamic monitoring system in which, at consecutive time steps, the system sensors perform observations and send data to the Bayesian network for continuous health assessment. The proposed methodology is designed to answer important questions such as how to infer the health of a system from a limited number of monitoring points at certain subsystems (upward propagation); how to infer the health of a subsystem from knowledge of the health of the main system (downward propagation); and how to infer the health of a subsystem from knowledge of the health of other subsystems (distributed propagation).
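
    The dissertation's exact formulation is not given in this abstract, but the general idea of ranking candidate sensor placements by the information they provide about an uncertain health state can be sketched as follows; the prior fault probability, detection and false-alarm rates, and placement names are illustrative assumptions, not values from the work.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(prior_fault, p_detect, p_false_alarm):
    """I(H; S) between a binary health state H and a binary sensor reading S.

    prior_fault   : P(H = faulty)
    p_detect      : P(S = alarm | H = faulty)
    p_false_alarm : P(S = alarm | H = healthy)
    """
    p_alarm = prior_fault * p_detect + (1 - prior_fault) * p_false_alarm
    # I(H; S) = H(S) - H(S | H)
    h_s = entropy(p_alarm)
    h_s_given_h = (prior_fault * entropy(p_detect)
                   + (1 - prior_fault) * entropy(p_false_alarm))
    return h_s - h_s_given_h

# Hypothetical candidate placements:
# (name, prior fault prob of the monitored element, detection prob, false-alarm prob)
candidates = [
    ("pump_vibration",   0.10, 0.95, 0.05),
    ("valve_pressure",   0.10, 0.80, 0.02),
    ("pipe_temperature", 0.02, 0.99, 0.01),
]

# Rank placements by how much information each sensor provides.
ranked = sorted(candidates,
                key=lambda c: mutual_information(c[1], c[2], c[3]),
                reverse=True)
for name, prior, pd, pfa in ranked:
    print(f"{name}: I(H;S) = {mutual_information(prior, pd, pfa):.3f} bits")
```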

    A reliable and resource aware framework for data dissemination in wireless sensor networks

    Distinct from traditional wireless ad hoc networks, wireless sensor networks (WSNs) comprise a large number of low-cost miniaturized nodes, each acting autonomously and equipped with a short-range wireless communication mechanism, limited memory, limited processing power, and a physical sensing capability. Since sensor networks are resource constrained in terms of power, bandwidth, and computational capability, an optimal system design radically changes the performance of the sensor network. Here, a comprehensive information dissemination scheme for wireless sensor networks is developed. Two main research issues are considered: (1) a collaborative flow of information packets from the source to the sink, and (2) energy efficiency of the sensor nodes and the entire system. For the first issue, we designed and evaluated a reactive, on-demand routing paradigm for distributed sensing applications, named IDLF (Information Dissemination via Label Forwarding). IDLF incorporates point-to-point data transmission in which the source initiates the routing scheme and disseminates the information toward the sink (destination) node. Prior to transmission of the actual data packets, a data tunnel is formed: the source node issues small label information to its neighbors locally, and these labels are in turn disseminated through the network. By using small labels, IDLF avoids generating unnecessary network traffic and transmitting duplicate packets to nodes. To study the impact of node failures and to improve the reliability of the network, we developed an extension of IDLF called RM-IDLF (Reliable Multipath Information Dissemination via Label Forwarding), which employs alternate disjoint paths. This alternate-path scheme (RM-IDLF) may have a higher path cost in terms of energy consumption, but it is more reliable in terms of data packet delivery to the sink than the single-path scheme (IDLF). In RM-IDLF, the protocol establishes multiple (alternate) disjoint paths from source to destination with negligible control overhead to balance the load of heavy data traffic among the intermediate nodes. Another point of interest in this framework is the study of the trade-off between the routing reliability achieved with multiple disjoint paths and the extra energy consumed by the additional paths. The effect of failed nodes on network performance is also evaluated within the sensor system. The performance of the label dissemination scheme is evaluated and compared with classic flooding and SPIN. (Abstract shortened by UMI.)
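
    The abstract omits the protocol details, but the core IDLF idea of first disseminating a small label to set up a data tunnel and only then forwarding full data packets can be sketched roughly as below; the topology, node names, and forwarding logic are illustrative assumptions rather than the protocol's actual specification.

```python
from collections import deque

# Hypothetical topology: node -> set of one-hop neighbours.
topology = {
    "S": {"A", "B"},
    "A": {"S", "C"},
    "B": {"S", "C", "D"},
    "C": {"A", "B", "T"},
    "D": {"B", "T"},
    "T": {"C", "D"},
}

def label_phase(source, sink):
    """Disseminate a small label hop by hop; each node remembers where it
    first heard the label so a single data tunnel can be traced back."""
    heard_from = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == sink:
            break
        for nbr in topology[node]:
            if nbr not in heard_from:   # ignore duplicate labels
                heard_from[nbr] = node
                queue.append(nbr)
    # Trace the tunnel back from sink to source.
    path, node = [], sink
    while node is not None:
        path.append(node)
        node = heard_from[node]
    return list(reversed(path))

def send_data(packets, tunnel):
    """Forward full data packets only along the established tunnel."""
    for pkt in packets:
        print(f"delivered {pkt} via {' -> '.join(tunnel)}")

tunnel = label_phase("S", "T")
send_data(["reading#1", "reading#2"], tunnel)
```

    Because only the small label is flooded and the full packets follow a single established tunnel, duplicate data transmissions are avoided; a multipath variant in the spirit of RM-IDLF would keep a second, node-disjoint tunnel as a backup.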

    A framework for energy based performability models for wireless sensor networks

    The idea of alternating node operation between Active and Sleep modes in a Wireless Sensor Network (WSN) has been used successfully to reduce node power consumption. The idea, which started off as a simple timer implementation in most protocols, has been improved over the years to adapt dynamically to traffic conditions and the nature of the application area. Recently, a second low-power radio transceiver has also been used to trigger the Active/Sleep modes. Active/Sleep operation modes have also been used to separately model and evaluate the performance and availability of WSNs. Advances in technology, continuous improvements of the existing protocols, and growing application demands continue to pose great challenges to the existing performance and availability models. In this study, the need for integrating performance and availability studies of WSNs in the presence of both channel and node failures and repairs is investigated. A framework that characterizes the key models required for such an integrated performance and availability analysis of WSNs is then outlined, and possible solution techniques for these models are highlighted. Finally, it is shown that the resulting models may be used to comparatively evaluate the energy consumption of existing motes and WSNs, as well as to derive the required performance measures.
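
    As a rough illustration of the kind of energy-based performability calculation such a framework targets, the sketch below combines a duty-cycle power model with a simple steady-state availability term; all parameter values and function names are illustrative assumptions, not results from the study.

```python
def duty_cycle_power(p_active_mw, p_sleep_mw, duty_cycle):
    """Mean power draw of a node that alternates Active/Sleep modes."""
    return duty_cycle * p_active_mw + (1.0 - duty_cycle) * p_sleep_mw

def availability(mttf_h, mttr_h):
    """Steady-state availability of a node subject to failures and repairs."""
    return mttf_h / (mttf_h + mttr_h)

def performability(throughput_pkts_s, duty_cycle, p_active_mw, p_sleep_mw,
                   mttf_h, mttr_h, battery_mwh):
    """Combine a performance measure with availability and energy:
    expected delivered throughput, mean power, and battery lifetime."""
    a = availability(mttf_h, mttr_h)
    p_mean = duty_cycle_power(p_active_mw, p_sleep_mw, duty_cycle)
    return {
        "expected_throughput_pkts_s": a * duty_cycle * throughput_pkts_s,
        "mean_power_mw": p_mean,
        "battery_lifetime_h": battery_mwh / p_mean,
    }

# Illustrative mote-like numbers, not measurements from the paper.
print(performability(throughput_pkts_s=10.0, duty_cycle=0.05,
                     p_active_mw=60.0, p_sleep_mw=0.09,
                     mttf_h=5000.0, mttr_h=24.0, battery_mwh=8000.0))
```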

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues needed to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies, and game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine tolerance schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of the Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine-tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network; the error-correcting codes help correct the erroneous information from the Byzantines and thereby counter their attack. The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks: the effect of unskilled humans sharing beliefs with a central observer called the CEO is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls under the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit when the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance between the exponential behavior of discrete CEO problems and the 1/R behavior of Gaussian CEO problems is established. This result can be summarized as follows: sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions is even easier (discrete). Besides the theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network. The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models; the implications of such modeling for the design of large human-machine systems are discussed. Furthermore, an error-correcting-codes-based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of the proposed approach in comparison to the majority-voting-based approach are highlighted using simulated and real datasets. In the final part of the thesis, a human-machine inference framework is developed in which humans and machines interact to perform complex tasks in a faster and more efficient manner. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines constantly interact with each other to perform even the simplest of tasks. While machines perform best at some tasks, humans still give better results at tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be explored further.
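
    The coding-based fusion idea can be illustrated with a toy sketch: each agent is assigned one column of a binary code matrix, answers its binary sub-question (possibly incorrectly), and the fusion center picks the class whose code row is closest in Hamming distance to the received bit vector. The code matrix, error rate, and problem sizes below are illustrative assumptions, not the designs analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative code matrix: 4 classes x 7 agents (rows are codewords).
A = np.array([[0, 0, 0, 0, 1, 1, 1],
              [0, 1, 1, 1, 0, 0, 1],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 1, 0, 0]])

def agent_reports(true_class, flip_prob):
    """Each agent answers the binary sub-question defined by its column;
    an unreliable agent flips its bit with probability flip_prob."""
    bits = A[true_class].copy()
    flips = rng.random(bits.shape) < flip_prob
    return np.bitwise_xor(bits, flips.astype(int))

def decode(received):
    """Minimum-Hamming-distance decoding over the class codewords."""
    distances = np.sum(A != received, axis=1)
    return int(np.argmin(distances))

trials, correct = 1000, 0
for _ in range(trials):
    true_class = rng.integers(A.shape[0])
    received = agent_reports(true_class, flip_prob=0.2)
    correct += decode(received) == true_class
print(f"accuracy with 20% unreliable bits: {correct / trials:.2f}")
```

    The redundancy in the code rows is what absorbs a fraction of flipped bits, which is the mechanism the thesis exploits to tolerate Byzantine sensors and unreliable crowd workers.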

    USING PROBABILISTIC GRAPHICAL MODELS TO DRAW INFERENCES IN SENSOR NETWORKS WITH TRACKING APPLICATIONS

    Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology used as a detection and surveillance paradigm in many real-world applications. The individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, a few physical limitations can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, the wireless band can become very cluttered when multiple sensors try to transmit at the same time, and the limited communication range of individual sensors means the network may not have a one-hop communication topology, so routing can be a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks, the detection and tracking of targets. It develops feasible inference techniques for sensor networks using statistical graphical-model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking under different network topology settings. Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed networks to randomly distributed networks; the results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall detection system was also simulated with real-world settings: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards, scanning a typical 800 sq ft apartment. The Bumblebee radars were calibrated to detect the fall of a human body, and the two-tier tracking algorithm was used with the ultrasonic sensors to track the location of the elderly occupants.
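
    The two-tier strategy (rough global inference from binary detections, then a dynamic cluster around the target for detailed computation) can be sketched as follows; the deployment geometry, detection range, and cluster size are illustrative assumptions, not the parameters used in the experiments.

```python
import numpy as np

# Hypothetical deployment: sensors on a grid, each with a binary detection range.
sensor_xy = np.array([(x, y) for x in range(0, 10, 2) for y in range(0, 10, 2)],
                     dtype=float)
DETECT_RANGE = 3.0
CLUSTER_SIZE = 4

def binary_detections(target_xy):
    """Tier 1 input: each sensor only reports whether the target is in range."""
    dists = np.linalg.norm(sensor_xy - target_xy, axis=1)
    return dists < DETECT_RANGE

def rough_estimate(detections):
    """Rough global inference from binary data: centroid of triggered sensors."""
    triggered = sensor_xy[detections]
    return triggered.mean(axis=0) if len(triggered) else None

def dynamic_cluster(estimate):
    """Tier 2: recruit the closest sensors around the rough estimate
    for detailed (e.g. range-based) computation."""
    dists = np.linalg.norm(sensor_xy - estimate, axis=1)
    return np.argsort(dists)[:CLUSTER_SIZE]

target = np.array([4.3, 5.1])
det = binary_detections(target)
est = rough_estimate(det)
cluster = dynamic_cluster(est)
print("rough estimate:", est, "cluster sensors:", cluster.tolist())
```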

    Vehicle Remote Health Monitoring and Prognostic Maintenance System


    Reconfigurable middleware architectures for large scale sensor networks

    Wireless sensor networks, in an effort to be energy efficient, typically lack the high-level abstractions of advanced programming languages. Though stark, the dichotomy between these two paradigms can be overcome. The SENSIX software framework, described in this dissertation, uniquely integrates constraint-dominated wireless sensor networks with the flexibility of object-oriented programming models, without violating the principles of either. Though these two computing paradigms are contradictory in many ways, SENSIX bridges them to yield a dynamic middleware abstraction unifying low-level, resource-aware task reconfiguration and high-level object recomposition. Through the layered approach of SENSIX, the software developer creates a domain-specific sensing architecture by defining a customized task specification and utilizing object inheritance. In addition, SENSIX performs better at large scales (on the order of 1,000 nodes or more) than other sensor network middleware that do not include such unified facilities for vertical integration.
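
    As a loose illustration of defining a domain-specific sensing task through object inheritance (the general idea the abstract describes), the sketch below uses hypothetical class and method names; it is not SENSIX's actual API.

```python
from abc import ABC, abstractmethod

class SensingTask(ABC):
    """Base task specification; subclasses customize sensing behaviour
    through ordinary inheritance."""
    def __init__(self, period_s):
        self.period_s = period_s

    @abstractmethod
    def sample(self):
        """Read the underlying sensor."""

    def process(self, reading):
        """Default processing: pass the reading through unchanged."""
        return reading

class ThresholdTemperatureTask(SensingTask):
    """Domain-specific task: only report temperatures above a threshold."""
    def __init__(self, period_s, threshold_c, read_fn):
        super().__init__(period_s)
        self.threshold_c = threshold_c
        self.read_fn = read_fn

    def sample(self):
        return self.read_fn()

    def process(self, reading):
        return reading if reading >= self.threshold_c else None

def run_once(task):
    """Middleware hook: execute one sensing cycle of a task."""
    return task.process(task.sample())

# Stand-in sensor driver for illustration only.
task = ThresholdTemperatureTask(period_s=30, threshold_c=40.0,
                                read_fn=lambda: 42.5)
print(run_once(task))   # -> 42.5
```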