8 research outputs found

    Beyond the golden run: evaluating the use of reference run models in fault injection analysis

    Fault injection (FI) has been shown to be an effective approach to assessing the dependability of software systems. To determine the impact of faults injected during FI, an oracle is needed. This oracle can take a variety of forms; prominent oracles include (i) specifications, (ii) error detection mechanisms and (iii) golden runs. Focusing on golden runs, in this paper we show that there are classes of software which a golden run based approach cannot be used to analyse. Specifically, we demonstrate that a golden run based approach cannot be used when analysing systems which employ a main control loop with an irregular period. Further, we show how a simple model, which has been refined using FI, can be employed as an oracle in the analysis of such a system.
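    As a hedged illustration of how a golden-run oracle is typically used, the sketch below records a trace from a fault-free run of a deterministic toy computation and flags any divergence in a fault-injected rerun; the function names, the toy filter, and the fault model are illustrative and not taken from the paper.

```python
# Minimal golden-run oracle sketch (illustrative names and fault model).
import random

def run_target(sensor_values, fault=None):
    """Toy deterministic computation; `fault` optionally corrupts one input."""
    values = list(sensor_values)
    if fault is not None:
        index, corrupted = fault
        values[index] = corrupted
    trace = []                          # recorded intermediate states stand in for the run trace
    acc = 0.0
    for v in values:
        acc = 0.9 * acc + 0.1 * v       # simple low-pass filter step
        trace.append(round(acc, 6))
    return trace

def golden_run_oracle(inputs, fault):
    """Report a failure whenever the faulty trace diverges from the golden run."""
    golden = run_target(inputs)                 # reference (golden) run
    faulty = run_target(inputs, fault=fault)    # fault-injected run
    return golden != faulty

inputs = [random.uniform(0.0, 1.0) for _ in range(16)]
print(golden_run_oracle(inputs, fault=(3, 42.0)))   # True -> the injected fault propagated
```

    The paper's argument is that this step-by-step comparison presupposes that the golden and faulty traces stay aligned, which fails for systems whose main control loop has an irregular period.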

    Verifying Quantitative Reliability of Programs That Execute on Unreliable Hardware

    Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and recovery from soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. In this paper we present Rely, a programming language that enables developers to reason about the quantitative reliability of an application -- namely, the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification that characterizes the reliability of the underlying hardware components, and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented in Rely. This research was supported in part by the National Science Foundation (Grants CCF-0905244, CCF-1036241, CCF-1138967, and IIS-0835652), the United States Department of Energy (Grant DE-SC0008923), and DARPA (Grants FA8650-11-C-7192, FA8750-12-2-0110).
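    The core idea of a quantitative reliability bound can be illustrated with a small sketch, assuming the simple composition rule that a result is correct only if every unreliable operation and input it depends on is correct; the reliability numbers below are made up for illustration and are not Rely's hardware model or syntax.

```python
# Sketch of a lower bound on quantitative reliability (illustrative numbers only).
from math import prod

def reliability_bound(op_reliabilities, input_reliabilities):
    """Lower bound on the probability that the output is correct,
    assuming independent failures of operations and inputs."""
    return prod(op_reliabilities) * prod(input_reliabilities)

# e.g. three unreliable arithmetic operations on two values read from unreliable memory
ops = [0.999999, 0.999999, 0.999999]
ins = [0.9999, 0.9999]
bound = reliability_bound(ops, ins)
print(bound, bound >= 0.99)   # check the bound against a hypothetical 0.99 specification
```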

    Automated Derivation of Application-Aware Error Detectors Using Compiler Analysis

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. National Science Foundation / NSF ACI CNS-040634 and NSF CNS 05-24695. Gigascale Systems Research Center. Motorola Corp.

    A Study of Software Input Failure Propagation Mechanisms

    Probabilistic Risk Assessment (PRA) is a well-established technique to assess the probability of failure or success of a system. Classical PRA does not consider the contributions of software to risk. Dr. B. Li and C. Smidts have established a framework to integrate software into PRA which recognizes the existence of four classes of risk contributors: functional, input, output and support failures. Input/output failures have been shown to make up 57.4% of the failures experienced during software development of major aerospace systems and have been at the origin of a number of major accidents such as the Mars Polar Lander. This research quantifies the contribution of input failures. More specifically, this dissertation 1) defines the concept of input failure, 2) studies the related propagation mechanisms, 3) estimates the propagation probability for different types of input failures, and 4) applies the fault propagation analysis to the framework of integrating software into PRA. The dissertation defines the concept of artifact as a reference point to identify expected inputs and consequently input failures (inputs which differ from the expected ones). Input failures are divided into value-related failures (including value, range, type and amount failures) and time-related failures (including time, rate and duration failures). Value failures are examined first. The concept of masking areas and flat parts is defined, and the dissertation proposes an Image Reconstruction Method (IRM) to estimate the propagation probability of input value failures. This method is proven to require fewer test cases than one based on random testing to reach the same relative error. For the other input failure modes, the dissertation reveals how they transform into data state errors and formalizes their propagation criteria so that the IRM can be applied to estimate the propagation probability. The contributions are thus: 1. a clear definition of the concept of input failure; 2. the definition of a systematic process for identifying and quantifying the contributions of input failures to risk; 3. a systematic analysis of the propagation mechanisms of each type of input failure.
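    As a hedged baseline for what the dissertation's Image Reconstruction Method improves upon, the sketch below estimates an input value failure's propagation probability by random testing: perturb the input, rerun the module, and count how often the output changes. The module, its masking region, and the perturbation model are purely illustrative.

```python
# Random-testing estimate of input value failure propagation (illustrative module).
import random

def module(x):
    """Toy module with a masking area: small inputs are clamped to zero."""
    return 0.0 if abs(x) < 0.5 else x * x

def propagation_probability(n_samples=10_000, perturbation=0.1):
    propagated = 0
    for _ in range(n_samples):
        x = random.uniform(-2.0, 2.0)
        # Did the corrupted input produce a different output, or was it masked?
        if module(x + perturbation) != module(x):
            propagated += 1
    return propagated / n_samples

print(propagation_probability())
```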

    From experiment to design – fault characterization and detection in parallel computer systems using computational accelerators

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long-latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation has gained importance as heterogeneous processors have become an essential component of state-of-the-art supercomputers; GPUs were used in three of the five fastest supercomputers operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program state, including dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protection and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing an understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.
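    The kind of experiment used to expose silent data corruption (SDC) can be sketched as follows: flip one bit in an intermediate value, rerun the computation, and compare against a fault-free result. The kernel and the single-bit-flip model are illustrative assumptions, not the dissertation's actual injection framework.

```python
# Bit-flip fault injection sketch for observing silent data corruption (illustrative kernel).
import random
import struct

def flip_bit(value, bit):
    """Flip one bit of a float's IEEE-754 representation."""
    as_int = struct.unpack("<Q", struct.pack("<d", value))[0]
    return struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))[0]

def kernel(data, fault_site=None, fault_bit=0):
    total = 0.0
    for i, v in enumerate(data):
        if i == fault_site:
            v = flip_bit(v, fault_bit)      # inject the fault at one site
        total += v * v
    return total

data = [random.random() for _ in range(1024)]
golden = kernel(data)
faulty = kernel(data, fault_site=17, fault_bit=random.randrange(64))
# No crash and no error signal, yet the result may differ: a silent data corruption.
print("SDC observed" if faulty != golden else "fault masked")
```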

    On the Placement of Software Mechanisms for Detection of Data Errors

    An important aspect in the development of dependable software is deciding where to locate mechanisms for efficient error detection and recovery. We present a comparison between two methods for selecting locations for error detection mechanisms, in this case executable assertions (EAs), in black-box modular software. Our results show that by placing EAs based on error propagation analysis one may reduce the memory and execution time requirements as compared to experience- and heuristic-based placement while maintaining the obtained detection coverage. Further, we show the sensitivity of the EA-provided coverage estimation to the choice of the underlying error model. Subsequently, we extend the analysis framework so that error-model effects are also addressed and introduce measures for classifying signals according to their effect on system output when errors are present. The extended framework facilitates profiling of software systems from varied dependability perspectives and is also less susceptible to the effects of having different error models for estimating detection coverage.
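    A minimal sketch of the kind of mechanism being placed, assuming a range-based executable assertion on one signal in a black-box module chain; the module names, bounds, and chosen location are illustrative and do not reproduce the placement strategies compared in the paper.

```python
# Executable assertion (EA) sketch on a signal between two black-box modules.
def ea_check(name, value, low, high):
    """Range-based executable assertion; returns True if an error is detected."""
    if not (low <= value <= high):
        print(f"EA at {name}: value {value} outside [{low}, {high}]")
        return True
    return False

def module_a(x):
    return 2.0 * x + 1.0          # nominal output range is roughly [1, 21] for x in [0, 10]

def module_b(y):
    return y / 10.0

def pipeline(x):
    y = module_a(x)
    # Place the EA on the signal that error propagation analysis ranks as most critical.
    if ea_check("module_a.out", y, 1.0, 21.0):
        return None               # detected: stop the error instead of letting it propagate
    return module_b(y)

print(pipeline(5.0))              # nominal run passes the assertion
print(pipeline(500.0))            # corrupted input triggers the assertion
```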
