
    Fault-tolerant computer study

    A set of building-block circuits is described which can be used with commercially available microprocessors and memories to implement fault-tolerant distributed computer systems. Each building-block circuit is intended for VLSI implementation as a single chip. Several building blocks and associated processor and memory chips form a self-checking computer module with self-contained input/output and interfaces to redundant communications buses. Fault tolerance is achieved by connecting self-checking computer modules into a redundant network in which backup buses and computer modules are provided to circumvent failures. The requirements and design methodology that led to the definition of the building-block circuits are discussed.
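
    As a rough, hedged illustration of the redundancy scheme sketched in this abstract (not the paper's actual building-block design), the following Python snippet models self-checking computer modules attached to redundant buses, with execution falling back to a backup module or bus when an internal check fails; every class, function, and name is invented for the example.

        # Minimal sketch under assumed behaviour, not the paper's design: a
        # self-checking module raises on an internal check failure, and the
        # redundant network circumvents the failure via backup buses/modules.
        class SelfCheckingModule:
            def __init__(self, name):
                self.name = name
                self.faulty = False

            def compute(self, x):
                # A real SCCM would duplicate the computation and compare the
                # results internally; here a flag stands in for that check.
                if self.faulty:
                    raise RuntimeError(f"{self.name}: internal check failed")
                return x * 2  # placeholder workload

        class RedundantNetwork:
            def __init__(self, primary, backup, buses=("bus-A", "bus-B")):
                self.primary, self.backup = primary, backup
                self.buses = list(buses)

            def execute(self, x):
                for module in (self.primary, self.backup):
                    for bus in self.buses:
                        try:
                            return module.compute(x), module.name, bus
                        except RuntimeError:
                            continue  # try the next bus/module
                raise RuntimeError("all modules and buses exhausted")

        if __name__ == "__main__":
            primary, backup = SelfCheckingModule("SCCM-1"), SelfCheckingModule("SCCM-2")
            net = RedundantNetwork(primary, backup)
            primary.faulty = True      # simulate a failed primary module
            print(net.execute(21))     # -> (42, 'SCCM-2', 'bus-A')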

    Resilience of an embedded architecture using hardware redundancy

    In the last decade, the market dominance of general computing systems has been overtaken by embedded systems, with billions of units manufactured every year. Embedded systems appear in contexts where continuous operation is of utmost importance and failure can have profound consequences. Nowadays, radiation poses a serious threat to the reliable operation of safety-critical systems. Fault avoidance techniques, such as radiation hardening, have been commonly used in space applications. However, radiation-hardened components are expensive, lag behind commercial components in performance, and do not eliminate faults entirely. Without fault-tolerant mechanisms, many of these faults can become errors at the application or system level, which, in turn, can result in catastrophic failures. In this work we study the concepts of fault tolerance and dependability and extend them, providing our own definition of resilience. We analyse the physics of radiation-induced faults, the damage mechanisms of particles, and the process that leads to computing failures. We provide extensive taxonomies of (1) existing fault-tolerant techniques and (2) the effects of radiation on state-of-the-art electronics, analysing and comparing their characteristics. We propose a detailed model of faults and provide a classification of the different types of faults at various levels. We introduce a fault tolerance algorithm and define the system states and actions necessary to implement it. We introduce novel hardware and system software techniques that provide a more efficient combination of reliability, performance, and power consumption than existing techniques. We propose a new system element, called the syndrome, that is the core of a resilient architecture whose software and hardware can adapt to reliable and unreliable environments. We implement a software simulator and disassembler and introduce a testing framework used in combination with ERA’s assembler and commercial hardware simulators.
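
    The thesis's syndrome-based architecture and fault tolerance algorithm are not reproduced here; purely as a hedged, generic illustration of the kind of technique catalogued in such taxonomies, the Python sketch below masks a simulated single-event upset with software triple modular redundancy (TMR) and majority voting. All names and the fault model are invented for the example.

        # Illustrative sketch only: software TMR masking one simulated bit flip,
        # a classic fault tolerance technique, not the thesis's architecture.
        import random

        def bit_flip(value, width=32):
            """Simulate a single-event upset: flip one random bit of an integer."""
            return value ^ (1 << random.randrange(width))

        def tmr_execute(task, *args):
            """Run the task on three replicas and majority-vote the results;
            a single corrupted replica is masked by the two healthy ones."""
            results = [task(*args) for _ in range(3)]
            i = random.randrange(3)
            results[i] = bit_flip(results[i])      # inject one upset to exercise the voter
            for candidate in results:
                if results.count(candidate) >= 2:  # simple majority vote
                    return candidate
            raise RuntimeError("no majority: multiple faults detected")

        if __name__ == "__main__":
            print(tmr_execute(lambda a, b: a + b, 40, 2))  # prints 42 despite the upset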

    Systematic Model-based Design Assurance and Property-based Fault Injection for Safety Critical Digital Systems

    With advances in sensing, wireless communications, computing, control, and automation technologies, we are witnessing the rapid uptake of Cyber-Physical Systems across many applications, including connected vehicles, healthcare, energy, manufacturing, and smart homes. Many of these applications are safety-critical in nature, and they depend on the correct and safe execution of software and hardware that are intrinsically subject to faults. These faults can be design faults (software faults, specification faults, etc.) or physically occurring faults (hardware failures, single-event upsets, etc.). Both types of faults must be addressed during the design and development of these critical systems. Several safety-critical industries have widely adopted Model-Based Engineering paradigms to manage the design assurance processes of these complex CPSs. This thesis studies the application of an IEC 61508-compliant model-based design assurance methodology to a representative safety-critical digital architecture targeted at nuclear power generation facilities. The study presents detailed experiences and results to demonstrate the benefits of model testing in finding design flaws and its relevance to subsequent verification steps in the workflow. Additionally, to study the impact of physical faults on the digital architecture, we develop a novel property-based fault injection method that overcomes a few deficiencies of traditional fault injection methods. The model-based fault injection approach presented here guarantees high efficiency and near-exhaustive input/state/fault space coverage by utilizing formal model-checking principles to identify fault activation conditions and prove the fault tolerance features. The fault injection framework facilitates automated integration of fault saboteurs throughout the model, enabling exhaustive fault-location coverage.
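
    The abstract does not give the fault injection framework itself; as a loose, hedged analogue, the Python sketch below attaches a simple stuck-at "saboteur" to a redundant channel and exhaustively sweeps the input and fault locations to check a fault tolerance property, a brute-force stand-in for the model-checking step described above. All names and the voted circuit are hypothetical.

        # Hedged sketch: exhaustive sweep of a tiny input/fault space with a
        # stuck-at saboteur, loosely analogous to property-based fault
        # injection; not the thesis's framework.
        from itertools import product

        def majority(a, b, c):
            return (a & b) | (a & c) | (b & c)

        def voted_system(x, stuck_at=None):
            """Three redundant channels feeding a majority voter; 'stuck_at'
            optionally forces channel i to a constant (a simple saboteur)."""
            channels = [x, x, x]
            if stuck_at is not None:
                i, value = stuck_at
                channels[i] = value
            return majority(*channels)

        def check_single_fault_tolerance():
            """Property: for every input bit and every single stuck-at fault,
            the voter output still equals the fault-free output."""
            for x, i, value in product((0, 1), range(3), (0, 1)):
                if voted_system(x, stuck_at=(i, value)) != x:
                    return False, (x, i, value)   # counterexample found
            return True, None

        if __name__ == "__main__":
            print(check_single_fault_tolerance())  # -> (True, None): property holds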

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol to be built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work on clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
    Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials.
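
    As a small, hedged illustration of the style of network verification such surveys cover (not a tool or example from the paper), the Python sketch below encodes a toy forwarding table and exhaustively traces packets to check reachability and loop freedom; the topology and rule format are invented.

        # Toy data-plane check in the spirit of formal network verification:
        # follow forwarding rules and verify delivery, drops, and loop freedom.
        FORWARDING = {                       # switch -> {destination: next hop}
            "s1": {"h2": "s2"},
            "s2": {"h2": "h2"},
            "s3": {"h2": "s1"},
        }

        def trace(src, dst, max_hops=10):
            """Follow next-hop rules from src toward dst; detect drops and loops."""
            node, visited = src, []
            while node != dst:
                if node in visited or len(visited) > max_hops:
                    return visited + [node], "loop"
                visited.append(node)
                nxt = FORWARDING.get(node, {}).get(dst)
                if nxt is None:
                    return visited, "drop"   # packet is black-holed
                node = nxt
            return visited + [dst], "delivered"

        if __name__ == "__main__":
            for sw in FORWARDING:            # check the property from every switch
                path, verdict = trace(sw, "h2")
                print(sw, "->", verdict, path)  # expect 'delivered' with no loops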

    Software verification plan for GCS

    This verification plan is written as part of an experiment designed to study the fundamental characteristics of the software failure process. The experiment will be conducted using several implementations of software that were produced according to industry-standard guidelines, namely the Radio Technical Commission for Aeronautics guidelines RTCA/DO-178A, Software Considerations in Airborne Systems and Equipment Certification, for the development of flight software. This plan fulfills the DO-178A requirements for providing instructions on the testing of each implementation of the software. The plan details the verification activities to be performed at each phase in the development process, contains a step-by-step description of the testing procedures, and discusses all of the tools used throughout the verification process.
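
    The plan's concrete procedures are not included in this abstract; purely as a hedged illustration of requirements-based testing in the spirit of DO-178A-era guidance, the Python sketch below pairs a hypothetical flight-software function with a test case traced to an invented requirement identifier.

        # Hedged illustration of requirements-based testing; the requirement ID,
        # function, and limits are invented and not taken from the GCS plan.
        def limit_descent_rate(commanded_rate, max_rate=10.0):
            """Hypothetical flight-software function: clamp the commanded
            descent rate (m/s) to a configured maximum."""
            return min(commanded_rate, max_rate)

        def test_req_gcs_001_descent_rate_is_clamped():
            """Traces to (invented) requirement GCS-001: the commanded descent
            rate never exceeds the configured maximum."""
            assert limit_descent_rate(5.0) == 5.0    # nominal case
            assert limit_descent_rate(25.0) == 10.0  # limit case

        if __name__ == "__main__":
            test_req_gcs_001_descent_rate_is_clamped()
            print("GCS-001 test passed")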

    On Provably Correct Decision-Making for Automated Driving

    The introduction of driving automation in road vehicles can potentially reduce road traffic crashes and significantly improve road safety. Automation in road vehicles also brings several other benefits, such as the possibility of providing independent mobility for people who cannot and/or should not drive. Many different hardware and software components (e.g. sensing, decision-making, actuation, and control) interact to solve the autonomous driving task. Correctness of such automated driving systems is crucial, as incorrect behaviour may have catastrophic consequences. Autonomous vehicles operate in complex and dynamic environments, which requires decision-making and planning at different levels. The aim of such decision-making components in these systems is to make safe decisions at all times. Safety verification of these systems is a crucial challenge for the commercial deployment of full autonomy in vehicles. Testing for safety is expensive, impractical, and can never guarantee the absence of errors. In contrast, formal methods, which are techniques that use rigorous mathematical models to build hardware and software systems, can provide a mathematical proof of the correctness of the system. The focus of this thesis is to address some of the challenges in the safety verification of decision-making in automated driving systems. A central question here is how to establish formal verification as an efficient tool for automated driving software development. A key finding is the need for an integrated formal approach to prove correctness and to provide a complete safety argument. This thesis provides insights into how three different formal verification approaches, namely supervisory control theory, model checking, and deductive verification, differ in their application to automated driving, and it identifies the challenges associated with each method. It identifies the need for more rigour in the requirement refinement process and presents one possible solution using a formal model-based safety analysis approach. To address challenges in the manual modelling process, a possible solution based on automatically learning formal models directly from code is proposed.
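
    As a minimal, hedged illustration of applying model checking to a driving decision policy (not the thesis's actual models or tools), the Python sketch below exhaustively explores a toy two-vehicle state space and checks a safety invariant, reporting either that the invariant holds over the toy model or a counterexample state; the discretisation, policy, and thresholds are all invented.

        # Toy explicit-state model check of a driving decision policy; all
        # dynamics, thresholds, and the policy itself are invented.
        from collections import deque

        MIN_GAP, MAX_GAP, SPEEDS = 2, 12, (0, 1, 2)   # discretised toy model

        def policy(gap, ego_speed):
            """Hypothetical decision rule: stop when the gap is small, else speed up."""
            return 0 if gap <= 4 else min(ego_speed + 1, 2)

        def successors(state):
            gap, ego_speed = state
            new_speed = policy(gap, ego_speed)
            for lead_speed in SPEEDS:                  # lead vehicle is adversarial
                new_gap = min(max(gap + lead_speed - new_speed, 0), MAX_GAP)
                yield (new_gap, new_speed)

        def check_invariant():
            """BFS over reachable states; report any state with gap < MIN_GAP."""
            init = [(g, v) for g in range(MIN_GAP, MAX_GAP + 1) for v in SPEEDS]
            seen, frontier = set(init), deque(init)
            while frontier:
                state = frontier.popleft()
                if state[0] < MIN_GAP:
                    return False, state                # counterexample found
                for nxt in successors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return True, None                          # invariant holds

        if __name__ == "__main__":
            print(check_invariant())                   # -> (True, None) for this policy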