
    Advancing Dynamic Fault Tree Analysis

    This paper presents a new state space generation approach for dynamic fault trees (DFTs) together with a technique to synthesise failure rates in DFTs. Our state space generation technique aggressively exploits the DFT structure: detecting symmetries, spurious non-determinism, and don't cares. Benchmarks show a gain of more than two orders of magnitude in terms of state space generation and analysis time. Our approach supports DFTs with symbolic failure rates and is complemented by parameter synthesis. This enables determining the maximal tolerable failure rate of a system component while ensuring that the mean time to failure stays below a threshold.
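
    As a toy illustration of the parameter-synthesis step (a sketch, not the paper's algorithm), the snippet below binary-searches for the maximal tolerable failure rate of one component in a hypothetical two-component series system with exponentially distributed failures, reading the constraint as keeping the mean time to failure at or above a minimum threshold T. All names (mttf, synthesize_max_rate, T) are invented for the example.

    ```python
    # Toy rate synthesis (not the paper's algorithm): in a two-component
    # series system with exponential failures, the system fails when either
    # component fails, so MTTF = 1 / (lam1 + lam2). Binary-search the
    # maximal lam1 keeping MTTF >= a threshold T.

    def mttf(lam1: float, lam2: float) -> float:
        """Mean time to failure of a two-component exponential series system."""
        return 1.0 / (lam1 + lam2)

    def synthesize_max_rate(lam2: float, T: float, tol: float = 1e-9) -> float:
        """Maximal lam1 such that mttf(lam1, lam2) >= T, by binary search."""
        lo, hi = 0.0, 1.0 / T  # beyond 1/T the constraint is surely violated
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if mttf(mid, lam2) >= T:
                lo = mid  # constraint still holds: try a larger rate
            else:
                hi = mid
        return lo

    # Analytically the answer is 1/T - lam2 = 1/2.0 - 0.1 = 0.4.
    print(synthesize_max_rate(lam2=0.1, T=2.0))
    ```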

    Finding the Median (Obliviously) with Bounded Space

    We prove that any oblivious algorithm using space $S$ to find the median of a list of $n$ integers from $\{1,\ldots,2n\}$ requires time $\Omega(n \log\log_S n)$. This bound also applies to the problem of determining whether the median is odd or even. It is nearly optimal since Chan, following Munro and Raman, has shown that there is a (randomized) selection algorithm using only $s$ registers, each of which can store an input value or $O(\log n)$-bit counter, that makes only $O(\log\log_s n)$ passes over the input. The bound also implies a size lower bound for read-once branching programs computing the low-order bit of the median and implies the analog of $P \ne NP \cap coNP$ for length $o(n \log\log n)$ oblivious branching programs.
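
    The upper-bound side of this trade-off can be illustrated with a far simpler multi-pass scheme than Chan's: since the inputs come from $\{1,\ldots,2n\}$, one can binary-search the value range with one counting pass per step. The sketch below uses $O(1)$ registers and $O(\log n)$ passes, much weaker than the $O(\log\log_s n)$ passes cited above, but its access pattern is oblivious in the same sense: the input is only ever read in full sequential passes.

    ```python
    # Bounded-space selection sketch: binary search over the value range
    # {1, ..., 2n}, with one full sequential (hence oblivious) counting
    # pass over the input per step. O(1) registers, O(log n) passes.

    def median_by_passes(data, n):
        """Lower median of n values from {1,...,2n}, read only in passes."""
        k = (n + 1) // 2       # rank of the lower median
        lo, hi = 1, 2 * n      # the median lies somewhere in [lo, hi]
        while lo < hi:
            mid = (lo + hi) // 2
            count = 0
            for x in data:     # one sequential pass over the input
                if x <= mid:
                    count += 1
            if count >= k:
                hi = mid       # at least k values are <= mid
            else:
                lo = mid + 1
        return lo

    xs = [7, 2, 9, 4, 4]       # n = 5, values drawn from {1, ..., 10}
    assert median_by_passes(xs, 5) == sorted(xs)[2] == 4
    ```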

    Synthesizing FDIR Recovery Strategies for Space Systems

    Dynamic Fault Trees (DFTs) are powerful tools to drive the design of fault-tolerant systems. However, semantic pitfalls limit their practical utility for interconnected systems that require complex recovery strategies to maximize their reliability. This thesis discusses the shortcomings of DFTs in the context of analyzing Fault Detection, Isolation and Recovery (FDIR) concepts with a particular focus on the needs of space systems. To tackle these shortcomings, we introduce an inherently non-deterministic model for DFTs. Deterministic recovery strategies are synthesized by transforming these non-deterministic DFTs into Markov automata that represent all possible choices between recovery actions. From the corresponding scheduler, optimized to maximize a given RAMS (Reliability, Availability, Maintainability and Safety) metric, an optimal recovery strategy can then be derived and represented by a model we call recovery automaton. We discuss dedicated techniques for reducing the state space of this recovery automaton and analyze their soundness and completeness. Moreover, modularized approaches to handle the complexity added by the state-based transformation approach are discussed. Furthermore, we consider the non-deterministic approach in a partially observable setting and propose an approach to lift the model from the fully observable case. We give an implementation of our approach within the Model-Based Systems Engineering (MBSE) framework Virtual Satellite. Finally, the implementation is evaluated based on the FFORT benchmark. The results show that basic non-deterministic DFTs generally scale well. However, we also found that semantically enriched non-deterministic DFTs employing repair or delayed observability mechanisms pose a challenge.
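
    The central synthesis step, resolving non-deterministic recovery choices by an optimizing scheduler, can be caricatured as solving a small Markov decision process. The sketch below runs value iteration on an invented three-state model and extracts the recovery action maximizing the probability of reaching an operational state; it stands in for the Markov automata machinery, not for the thesis's actual algorithms, and all state and action names are hypothetical.

    ```python
    # Caricature of recovery-strategy synthesis: per state, pick the
    # recovery action maximizing the probability of reaching the target
    # state "operational". The 3-state MDP below is invented; it stands
    # in for the Markov automaton derived from a non-deterministic DFT.

    mdp = {  # state -> action -> list of (probability, successor) pairs
        "degraded": {
            "switch_to_spare": [(0.9, "operational"), (0.1, "failed")],
            "restart_unit":    [(0.5, "operational"), (0.5, "failed")],
        },
    }
    TARGET = "operational"

    def synthesize(mdp, iters=100):
        """Value iteration for maximal reachability probability, plus the
        recovery strategy (choice of action per state) achieving it."""
        values = {s: 0.0 for acts in mdp.values()
                  for succs in acts.values() for _, s in succs}
        values.update({s: 0.0 for s in mdp})
        values[TARGET] = 1.0
        expect = lambda succs: sum(p * values[s] for p, s in succs)
        for _ in range(iters):
            for s, acts in mdp.items():
                values[s] = max(expect(succs) for succs in acts.values())
        strategy = {s: max(acts, key=lambda a: expect(acts[a]))
                    for s, acts in mdp.items()}
        return values, strategy

    values, strategy = synthesize(mdp)
    print(strategy["degraded"], values["degraded"])  # switch_to_spare 0.9
    ```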

    Parallel programming using functional languages

    It has been argued for many years that functional programs are well suited to parallel evaluation. This thesis investigates this claim from a programming perspective; that is, it investigates parallel programming using functional languages. The approach taken has been to determine the minimum programming which is necessary in order to write efficient parallel programs. This has been attempted without the aid of clever compile-time analyses. It is argued that parallel evaluation should be explicitly expressed, by the programmer, in programs. To achieve this, a lazy functional language is extended with parallel and sequential combinators.

    The mathematical nature of functional languages means that programs can be formally derived by program transformation. To date, most work on program derivation has concerned sequential programs. In this thesis Squigol has been used to derive three parallel algorithms. Squigol is a functional calculus for program derivation, which is becoming increasingly popular. It is shown that some aspects of Squigol are suitable for parallel program derivation, while other aspects are specifically orientated towards sequential algorithm derivation.

    In order to write efficient parallel programs, parallelism must be controlled: to limit storage usage, the number of tasks, and the minimum size of tasks. In particular, over-eager evaluation or generating excessive numbers of tasks can consume too much storage. Also, tasks can be too small to be worth evaluating in parallel. Several programming techniques for parallelism control were tried. These were compared with a run-time system heuristic for parallelism control. It was discovered that the best control was effected by a combination of run-time system and programmer control of parallelism.

    One of the problems with parallel programming using functional languages is that non-deterministic algorithms cannot be expressed. A bag (multiset) data type is proposed to allow a limited form of non-determinism to be expressed. Bags can be given a non-deterministic parallel implementation. However, providing the operations used to combine bag elements are associative and commutative, the result of bag operations will be deterministic. The onus is on the programmer to prove this, but usually this is not difficult. Also, bags' insensitivity to ordering means that more transformations are directly applicable than if, say, lists were used instead.

    It is necessary to be able to reason about and measure the performance of parallel programs. For example, sometimes algorithms which seem intuitively to be good parallel ones are not. For some higher-order functions it is possible to devise parameterised formulae describing their performance. This is done for divide-and-conquer functions, which enables constraints to be formulated which guarantee that they have a good performance. Pipelined parallelism is difficult to analyse. Therefore a formal semantics for calculating the performance of pipelined programs is devised. This is used to analyse the performance of a pipelined Quicksort. By treating the performance semantics as a set of transformation rules, the simulation of parallel programs may be achieved by transforming programs.

    Some parallel programs perform poorly due to programming errors. A pragmatic method of debugging such programming errors is illustrated by some examples.
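
    The granularity problem described above, tasks too small to be worth evaluating in parallel, is easy to mimic outside a lazy functional language. The sketch below, in Python rather than the thesis's combinator-extended language, controls parallelism in a divide-and-conquer sum by only creating tasks for pieces above a size threshold: the threshold is the programmer's half of the control, the executor's worker pool the run-time system's half. THRESHOLD and the splitting scheme are illustrative choices, not taken from the thesis.

    ```python
    # Programmer-controlled task granularity in divide and conquer,
    # sketched in Python rather than the thesis's par/seq-extended lazy
    # language. Pieces at or below THRESHOLD stay sequential, so no task
    # is too small to outweigh its spawning overhead.

    from concurrent.futures import ThreadPoolExecutor

    THRESHOLD = 10_000  # illustrative cut-off for spawning a task

    def split(xs):
        """Divide until every piece is at or below the threshold."""
        if len(xs) <= THRESHOLD:
            yield xs
        else:
            mid = len(xs) // 2
            yield from split(xs[:mid])
            yield from split(xs[mid:])

    def par_sum(xs, executor):
        # One task per piece; combining the partial sums is cheap, so it
        # stays sequential in the calling thread.
        futures = [executor.submit(sum, piece) for piece in split(xs)]
        return sum(f.result() for f in futures)

    data = list(range(100_000))
    with ThreadPoolExecutor(max_workers=4) as ex:
        assert par_sum(data, ex) == sum(data)
    ```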

    Proposed extension of specification approach to meet needs of RCA

    This document is deliverable D10.2, describing extensions of the MBSE specification approach to the needs of future Functional Railway System Architectures within Task 10.3 of work package WP10, Formal Methods for Functional Railway System Architecture, of the X2Rail-5 project. This deliverable is concerned with a specification approach meeting the needs of ongoing and future developments of ERTMS and of the European initiatives RCA and EULYNX. This is a rather large scope, whose general high-level goal may be formulated as: determine a suitable approach to specify, verify, and validate system requirements that can meet the needs of the RCA and EULYNX initiatives, which define a future system architecture.

    Many-core architectures with time-predictable execution support for hard real-time applications

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 183-193). By Michel A. Kinsy.

    Hybrid control systems are a growing domain of application. They are pervasive and their complexity is increasing rapidly. Distributed control systems for future "Intelligent Grid" and renewable energy generation systems are demanding high-performance, hard real-time computation, and more programmability. General-purpose computer systems are primarily designed to process data and not to interact with physical processes as required by these systems. Generic general-purpose architectures, even with the use of real-time operating systems, fail to meet the hard real-time constraints of hybrid system dynamics. ASIC, FPGA, or traditional embedded design approaches to these systems often result in expensive, complicated systems that are hard to program, reuse, or maintain.

    In this thesis, we propose a domain-specific architecture template targeting hybrid control system applications. Using power electronics control applications, we present new modeling techniques, synthesis methodologies, and a parameterizable computer architecture for these large distributed control systems. We propose a new system modeling approach, called Adaptive Hybrid Automaton, based on previous work in control system theory, that uses mixed-model abstractions and lends itself well to digital processing. We develop a domain-specific architecture based on this modeling that uses heterogeneous processing units and predictable execution, called MARTHA. We develop a hard real-time aware router architecture to enable deterministic on-chip interconnect network communication. We present several algorithms for scheduling task-based applications onto these types of heterogeneous architectures. We create Heracles, an open-source, functional, parameterized, synthesizable many-core system design toolkit that can be used to explore future multi/many-core processors with different topologies, routing schemes, processing elements or cores, and memory system organizations. Using the Heracles design tool, we build a prototype of the proposed architecture using a state-of-the-art FPGA-based platform, and deploy and test it in actual physical power electronics systems. We develop and release an open-source, small representative set of power electronics system applications that can be used for hard real-time application benchmarking.
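
    The thesis's scheduling algorithms are not reproduced here, but a greedy earliest-finish-time list scheduler conveys the underlying problem of mapping a task DAG onto heterogeneous processing elements, where each task's cost depends on which element runs it. Everything below (the tasks, dependencies, and cost table) is a made-up example.

    ```python
    # Greedy earliest-finish-time list scheduling onto heterogeneous
    # processing elements (PEs), a stand-in for, not a reproduction of,
    # the thesis's scheduling algorithms. cost[t][p] is the (invented)
    # execution time of task t on PE p.

    cost = {"A": [2, 4], "B": [3, 1], "C": [2, 2], "D": [4, 3]}
    deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
    order = ["A", "B", "C", "D"]  # a topological order of the task DAG

    def schedule(order, deps, cost, num_pes=2):
        pe_free = [0.0] * num_pes  # time at which each PE becomes idle
        finish, placement = {}, {}
        for t in order:
            ready = max((finish[d] for d in deps[t]), default=0.0)
            # place t on the PE that lets it finish earliest
            best = min(range(num_pes),
                       key=lambda p: max(ready, pe_free[p]) + cost[t][p])
            finish[t] = max(ready, pe_free[best]) + cost[t][best]
            pe_free[best] = finish[t]
            placement[t] = best
        return placement, max(finish.values())

    placement, makespan = schedule(order, deps, cost)
    print(placement, makespan)  # {'A': 0, 'B': 1, 'C': 0, 'D': 1} 7.0
    ```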

    Defining Safe Training Datasets for Machine Learning Models Using Ontologies

    Machine Learning (ML) models have been gaining popularity in recent years in a wide variety of domains, including safety-critical domains. While ML models have shown high accuracy in their predictions, they are still considered black boxes, meaning that developers and users do not know how the models make their decisions. While this is simply a nuisance in some domains, in safety-critical domains this makes ML models difficult to trust. To fully utilize ML models in safety-critical domains, there needs to be a method to improve trust in their safety and accuracy without human experts checking each decision. This research proposes a method to increase trust in ML models used in safety-critical domains by ensuring the safety and completeness of the model's training dataset. Since most of the complexity of the model is built through training, ensuring the safety of the training dataset could help to increase the trust in the safety of the model. The method proposed in this research uses a domain ontology and an image quality characteristic ontology to validate the domain completeness and image quality robustness of a training dataset. This research also presents an experiment as a proof of concept for this method where ontologies are built for the emergency road vehicle domain.
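
    The completeness half of this method can be pictured as a set-coverage test between ontology concepts and dataset annotations. The sketch below is a hypothetical reduction of that idea for the emergency road vehicle domain: it flattens a toy ontology into its leaf concepts and flags any concept with no training examples. The ontology structure and class names are invented, and a real validation would also exercise the image quality characteristic ontology.

    ```python
    # Hypothetical ontology-coverage check for a training dataset: flag
    # leaf concepts of a toy domain ontology that no training image is
    # labelled with. The ontology and labels are invented examples.

    ontology = {  # concept -> sub-concepts (a leaf has none)
        "emergency_vehicle": ["ambulance", "fire_truck", "police_car"],
        "ambulance": [], "fire_truck": [], "police_car": [],
    }

    def leaves(ontology, concept):
        """All leaf concepts beneath (or equal to) the given concept."""
        children = ontology.get(concept, [])
        if not children:
            return {concept}
        return set().union(*(leaves(ontology, c) for c in children))

    def missing_concepts(ontology, root, dataset_labels):
        """Domain concepts the training dataset fails to cover."""
        return leaves(ontology, root) - set(dataset_labels)

    labels = ["ambulance", "ambulance", "police_car"]  # per-image labels
    print(missing_concepts(ontology, "emergency_vehicle", labels))
    # -> {'fire_truck'}: the dataset is incomplete for this domain
    ```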

    Time and space complexity of rule-based graph programs

    This thesis concerns the time and space efficiency of programs in GP 2, a rule-based graph transformation language that facilitates formal program analysis. Such programs are a sequence of control structures in which rules are called. A rule describes how part of a host graph is changed into another by specifying a subgraph that is to be replaced. We call the process of finding the specified subgraph in the host graph matching, which takes polynomial time in general. In practice, however, we often want rule application to take constant time since it likely corresponds to a single step in a classical algorithm. Several case studies show that the time complexity of GP 2 programs can be on the same level as that of their imperative counterparts. We give linear-time programs for connectedness checking, DAG recognition, and topological sorting, as well as an efficient implementation of Boruvka's algorithm for finding minimum spanning trees. This efficiency is achieved via roots, which are special nodes in rules and graphs that can be accessed in constant time and allow matching to happen locally around them. The given programs also use depth-first search to traverse graphs in linear time instead of iterating over nodes because GP 2 abstracts away from internal graph data structures. In the spirit of formal program analysis, we give a framework in which to describe the time complexity of these efficient programs. This framework is underpinned by a formal semantics that describes program execution in a sequence of steps that do not cover more than one rule application. On the topic of space efficiency, we give a theoretical result showing that GP 2, like some graph-based machine models, can simulate Turing machines using less space and only quadratic time overhead.
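
    One of the named case studies, linear-time connectedness checking, is easy to mimic outside GP 2: a depth-first search from any node visits every node exactly when the graph is connected. The sketch below is plain Python over an adjacency-list dictionary, without GP 2's roots or rule-matching machinery, and runs in time linear in the number of nodes and edges.

    ```python
    # Connectedness checking by iterative depth-first search, mimicking
    # one of the GP 2 case studies in plain Python. Linear time in the
    # number of nodes plus edges.

    def is_connected(adj):
        """adj maps each node to its list of (undirected) neighbours."""
        if not adj:
            return True
        start = next(iter(adj))
        seen, stack = {start}, [start]
        while stack:
            for nbr in adj[stack.pop()]:
                if nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        return len(seen) == len(adj)

    assert is_connected({1: [2], 2: [1, 3], 3: [2]})
    assert not is_connected({1: [2], 2: [1], 3: []})
    ```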