
    Tolerating failures of continuous-valued sensors

    One aspect of fault tolerance in process control programs is the ability to tolerate sensor failure. A methodology is presented for transforming a process control program that cannot tolerate sensor failures into one that can. Issues addressed include modifying specifications to accommodate uncertainty in sensor values and averaging sensor values in a fault-tolerant manner. In addition, a hierarchy of sensor failure models is identified, and both the attainable accuracy and the run-time complexity of sensor averaging with respect to this hierarchy are discussed.
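    The abstract omits the averaging procedure itself. As a rough illustration, the sketch below fuses interval-valued readings in the style of Marzullo's fault-tolerant averaging, assuming each correct sensor reports an interval containing the true value and at most f sensors are faulty (the function name is ours, not the paper's):

```python
def fuse(intervals, f):
    """Fault-tolerant averaging of interval sensors (Marzullo-style sketch):
    return the smallest interval containing every point that lies in at
    least n - f of the n reported intervals.  Returns (None, None) if no
    point is covered by n - f intervals."""
    n = len(intervals)
    # -1 marks an interval start, +1 an end; starts sort first at ties, so
    # touching intervals are treated as overlapping.
    events = sorted([(lo, -1) for lo, hi in intervals] +
                    [(hi, +1) for lo, hi in intervals])
    best_lo = best_hi = None
    covered = 0
    for x, kind in events:
        if kind == -1:
            covered += 1
            if covered >= n - f and best_lo is None:
                best_lo = x   # first point reaching the coverage threshold
        else:
            if covered >= n - f:
                best_hi = x   # last point still at the coverage threshold
            covered -= 1
    return (best_lo, best_hi)
```

    With three correct readings (0, 4), (1, 5), (2, 6) and one faulty reading (10, 11), fuse(..., f=1) yields (2, 4): the outlier is masked.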

    Masking failures of multidimensional sensors (extended abstract)

    When a computer monitors a physical process, the computer uses sensors to determine the values of the physical variables that represent the state of the process. A sensor can sometimes fail, however, and in the worst case report a value completely unrelated to the true physical value. The work described is motivated by a methodology for transforming a process control program that cannot tolerate sensor failure into one that can. In this methodology, a reliable abstract sensor is created by combining information from several real sensors that measure the same physical value. To be useful, an abstract sensor must deliver reasonably accurate information at reasonable computational cost. Sensors are considered that deliver multidimensional values (e.g., location or velocity in three dimensions, or both temperature and pressure). Geometric techniques are used to derive upper bounds on abstract sensor accuracy and to develop efficient algorithms for implementing abstract sensors.
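    The geometric techniques are not given in the abstract; a brute-force sketch for axis-aligned boxes conveys the idea, assuming each correct sensor's box contains the true point and at most f sensors are faulty (the paper's actual algorithms are far more efficient than this subset enumeration):

```python
from itertools import combinations

def fuse_boxes(boxes, f):
    """Multidimensional abstract sensor (brute-force sketch): each real
    sensor reports an axis-aligned box (lo_vec, hi_vec).  With at most f
    faults, the true point lies in the intersection of some (n - f)-subset
    of boxes; return the bounding box of all non-empty such intersections."""
    n = len(boxes)
    dims = len(boxes[0][0])
    best = None
    for subset in combinations(boxes, n - f):
        lo = [max(b[0][d] for b in subset) for d in range(dims)]
        hi = [min(b[1][d] for b in subset) for d in range(dims)]
        if all(l <= h for l, h in zip(lo, hi)):  # intersection non-empty
            if best is None:
                best = (lo, hi)
            else:
                best = ([min(a, c) for a, c in zip(best[0], lo)],
                        [max(a, c) for a, c in zip(best[1], hi)])
    return best
```

    Two overlapping 2-D boxes plus one faulty outlier, with f=1, fuse to the overlap of the two correct boxes; the outlier contributes nothing because it intersects neither.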

    Self-stabilizing Numerical Iterative Computation

    Many challenging tasks in sensor networks, including sensor calibration, ranking of nodes, monitoring, event region detection, collaborative filtering, collaborative signal processing, etc., can be formulated as a problem of solving a linear system of equations. Several recent works propose different distributed algorithms for solving these problems, usually using linear iterative numerical methods. In this work, we extend the settings of the above approaches by adding another dimension to the problem. Specifically, we are interested in self-stabilizing algorithms that continuously run and converge to a solution from any initial state. This aspect of the problem is highly important due to the dynamic nature of the network and the frequent changes in the measured environment. In this paper, we link together algorithms from two different domains. On the one hand, we use the rich linear algebra literature on linear iterative methods for solving systems of linear equations, which are naturally distributed and have rapid convergence properties. On the other hand, we are interested in self-stabilizing algorithms, where the input to the computation is constantly changing and we would like the algorithms to converge from any initial state. We propose a simple novel method, called \syncAlg, as a self-stabilizing variant of the linear iterative methods. We prove that under mild conditions the self-stabilizing algorithm converges to the desired result. We further extend these results to handle the asynchronous case. As a case study, we discuss the sensor calibration problem and provide simulation results to support the applicability of our approach.
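    The abstract does not spell out the iterative scheme; classical Jacobi iteration, a plausible building block under suitable "mild conditions" (here: strict diagonal dominance, our assumption), already exhibits the convergence-from-any-state behavior the paper exploits:

```python
import numpy as np

def jacobi_step(A, b, x):
    """One synchronous Jacobi update x <- D^(-1) (b - R x), where D is the
    diagonal of A and R = A - D.  For strictly diagonally dominant A the
    iteration contracts toward the solution of A x = b from *any* starting
    state; that is the self-stabilization property of interest."""
    D = np.diag(A)
    R = A - np.diag(D)
    return (b - R @ x) / D

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.array([100.0, -50.0])   # arbitrary, e.g. corrupted, initial state
for _ in range(100):
    x = jacobi_step(A, b, x)
# x is now close to the true solution regardless of where it started
```

    Each Jacobi update uses only a node's own row of A and its neighbors' current values, which is why such methods distribute naturally over a network.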

    Detection of global state predicates

    The problem addressed here arises in the context of Meta: how can a set of processes monitor the state of a distributed application in a consistent manner? For example, consider a simple distributed application with three processes. Each of the three processes has a light, and the control processes would each like to take an action when some specified subset of the lights is on. The application processes are instrumented with stubs that determine when a process turns its light on or off. This information is disseminated to the control processes, each of which then determines when its condition of interest is met. Meta is built on top of the ISIS toolkit, and so we first built the sensor dissemination mechanism using atomic broadcast. Atomic broadcast guarantees that all recipients receive the messages in the same order and that this order is consistent with causality. Unfortunately, the control processes are somewhat limited in what they can deduce when they find that their condition of interest holds.
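    Why identical delivery order matters can be shown with a toy replay (process names and the predicate are hypothetical, not from the paper): every control process applies the same totally ordered event stream and therefore evaluates its condition over the same sequence of states.

```python
def monitor(events, predicate):
    """Replay a totally ordered stream of (process, light_on) events and
    return the indices at which the predicate over the light states holds.
    Atomic broadcast delivers the same order to every control process, so
    all of them compute exactly the same sequence of states."""
    lights = {}
    hits = []
    for i, (proc, on) in enumerate(events):
        lights[proc] = on
        if predicate(lights):
            hits.append(i)
    return hits

# Hypothetical run: "p1 and p2 are both on" holds only right after the
# second event, and every control process agrees on that.
events = [("p1", True), ("p2", True), ("p1", False), ("p3", True)]
hits = monitor(events, lambda s: s.get("p1") and s.get("p2"))
```

    Without a total order, two control processes could apply the same events in different orders and disagree on whether the condition ever held.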

    The Isis project: Fault-tolerance in large distributed systems

    This final status report covers activities of the Isis project during the first half of 1992. During the report period, the Isis effort achieved a major milestone in its effort to redesign and reimplement the Isis system using Mach and Chorus as target operating system environments. In addition, we completed a number of publications that address issues raised in our prior work; some of these have recently appeared in print, while others are now being considered for publication in a variety of journals and conferences.

    Attack-Resilient Sensor Fusion

    This work considers the problem of attack-resilient sensor fusion in an autonomous system where multiple sensors measure the same physical variable. A malicious attacker may corrupt a subset of these sensors and send wrong measurements to the controller on their behalf, potentially compromising the safety of the system. We formalize the goals and constraints of such an attacker, who also wants to avoid detection by the system. We argue that the attacker's capabilities depend on the amount of information she has about the correct sensors' measurements. In the presence of a shared bus, where messages are broadcast to all components connected to the network, the attacker may consider all other measurements before sending her own in order to achieve maximal impact. Consequently, we investigate the effects of communication schedules on sensor fusion performance. We provide worst- and average-case results in support of the Ascending schedule, where sensors send their measurements in a fixed succession based on their precision, starting from the most precise sensors. Finally, we provide a case study to illustrate the use of this approach.
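    The advantage of transmitting last can be seen in a toy model (the fusion rule and numbers are ours and much simpler than the paper's): an attacker who has heard every other measurement can stay consistent with all of them while maximally biasing the fused value.

```python
def fused_midpoint(intervals):
    """Toy fusion rule (not the paper's): midpoint of the common
    intersection of all reported intervals."""
    lo = max(iv[0] for iv in intervals)
    hi = min(iv[1] for iv in intervals)
    return (lo + hi) / 2

def last_sender_attack(seen, width):
    """An attacker transmitting last on a shared bus has seen every other
    measurement.  To bias the fused value upward while still overlapping
    all of them (and so avoiding easy detection), she starts her interval
    at the smallest upper bound she observed."""
    start = min(hi for lo, hi in seen)
    return (start, start + width)

correct = [(0.0, 2.0), (0.5, 1.5)]           # honest fused midpoint: 1.0
attack = last_sender_attack(correct, 1.0)    # overlaps both honest intervals
biased = fused_midpoint(correct + [attack])  # dragged up to 1.5
```

    This only illustrates why transmission order matters; the paper's analysis of the Ascending schedule quantifies the attacker's worst- and average-case impact under its actual fusion algorithm.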

    Optimal Distributed Fault-Tolerant Sensor Fusion: Fundamental Limits and Efficient Algorithms

    Distributed estimation is a fundamental problem in signal processing which finds applications in a variety of scenarios of interest, including distributed sensor networks, robotics, group decision problems, and monitoring and surveillance applications. The problem considers a scenario where distributed agents are given a set of measurements and are tasked with estimating a target variable. This work considers distributed estimation in the context of sensor networks, where a subset of sensor measurements are faulty and the distributed agents are agnostic to these faulty sensor measurements. The objective is to minimize i) the mean square error in estimating the target variable at each node (accuracy objective), and ii) the mean square distance between the estimates at each pair of nodes (consensus objective). It is shown that there is an inherent tradeoff between satisfying the former and latter objectives. The tradeoff is explicitly characterized and the fundamental performance limits are derived under specific statistical assumptions on the sensor output statistics. Assuming a general stochastic model, the sensor fusion algorithm optimizing this tradeoff is characterized through a computable optimization problem. Finding the optimal sensor fusion algorithm is computationally complex. To address this, a general class of low-complexity Brooks-Iyengar algorithms is introduced, and their performance, in terms of accuracy and consensus objectives, is compared to that of optimal linear estimators through case study simulations of various scenarios.
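    The classical Brooks-Iyengar algorithm underlying the class mentioned above combines interval intersection with weighted averaging. A simplified single-node sketch (omitting the distributed value-exchange step, and assuming at most f faulty sensors) is:

```python
def brooks_iyengar(intervals, f):
    """Simplified single-node Brooks-Iyengar step: find the maximal regions
    covered by at least n - f of the n reported intervals and return the
    average of region midpoints, weighted by how many intervals cover each
    region."""
    n = len(intervals)
    pts = sorted({p for iv in intervals for p in iv})
    num = den = 0.0
    # Between consecutive endpoints the set of covering intervals is constant,
    # so test coverage at each segment midpoint.
    for lo, hi in zip(pts, pts[1:]):
        mid = (lo + hi) / 2
        cover = sum(1 for a, b in intervals if a <= mid <= b)
        if cover >= n - f:
            num += cover * mid
            den += cover
    return num / den
```

    With three overlapping readings and one outlier (f=1), only the region common to the three correct intervals passes the coverage threshold, so the estimate lands in that region and the outlier is ignored.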

    Ensuring Reliable Measurements In Remote Aquatic Sensor Networks

    A flood monitoring system comprises an extensive network of water sensors, a bundle of forecast simulation models, and a decision-support information system. A cascade of uncertainties present in each part of the system affects reliable flood alerting and response. The timeliness and quality of data gathering, used subsequently in forecasting models, are affected by the pervasive nature of the monitoring network, where aquatic sensors are vulnerable to external disturbances that degrade the accuracy of data acquisition. Existing solutions for aquatic monitoring are composed of heterogeneous sensors that are usually unable to ensure reliable measurements in complex scenarios, owing to technology-specific effects such as transient loss of availability, errors, and limited coverage. In this paper, we present a broader study of the criticality of sensor networks in the aquatic monitoring process and motivate the need for reliable data collection in harsh coastal and marine environments. We give an overview of the main challenges, such as sensor power life, sensor hardware compatibility, reliability, and long-range communication, which must be addressed to improve the robustness of sensor measurements. Developing solutions that automatically adjust sensor measurements to each disturbance would significantly improve measurement quality, thus supplying the other parts of a flood monitoring system with dependable monitoring data. With the purpose of providing software solutions to hardware failures, we also introduce context-awareness techniques, such as data processing, filtering, and sensor fusion methods, applied to a real working monitoring network with several proprietary probes (measuring conductivity, temperature, depth, and various water quality parameters) at distant sites in Portugal. The goal is to assess the best technique to overcome each detected faulty measurement without compromising the time frame of the monitoring process.
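    The filtering step can be illustrated with a minimal sketch (the window size, threshold, and sample data are ours; the deployed system uses proprietary probes and richer fusion): a running-median filter that suppresses transient spikes without delaying the stream.

```python
import statistics

def clean(series, window=5, k=3.0):
    """Running-median outlier filter (a toy stand-in for the data
    processing / filtering step described above): replace any reading
    that deviates from its window median by more than k times the median
    absolute deviation (MAD) of that window."""
    out = []
    for i, x in enumerate(series):
        w = series[max(0, i - window + 1): i + 1]   # trailing window
        med = statistics.median(w)
        mad = statistics.median(abs(v - med) for v in w) or 1e-9
        out.append(med if abs(x - med) > k * mad else x)
    return out

# A conductivity-like trace with one transient spike:
readings = [1.0, 1.1, 0.9, 50.0, 1.0, 1.1]
cleaned = clean(readings)   # the 50.0 spike is replaced by the local median
```

    Using only a trailing window keeps the filter causal, so it can run online without compromising the time frame of the monitoring process.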