
    Extended Fault Trees Analysis supported by Stochastic Petri Nets

    This work presents several extensions to the Fault Tree [90] formalism used to build models oriented to the Dependability [103] analysis of systems. In this way, we increase the modelling power of Fault Trees, which turn from simple combinatorial models into a high-level language able to represent more complicated aspects of the behaviour and failure modes of systems. Together with the extensions to the Fault Tree formalism, this work proposes solution methods for extended Fault Trees in order to cope with the new modelling facilities. These methods are mainly based on the use of Stochastic Petri Nets. Some of the formalisms described in this work are already present in the literature; for these we propose alternative solution methods with respect to the existing ones. Other formalisms are part of the original contribution of this work.
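As a concrete illustration of the combinatorial models the abstract starts from, the sketch below evaluates a tiny static fault tree with AND/OR gates over independent basic events. The event names and probabilities are hypothetical, and real fault-tree tools account for repeated events, which this naive gate-by-gate evaluation does not.

```python
# Minimal static fault-tree evaluation, assuming independent basic events
# and no repeated events. Event names and probabilities are hypothetical.

def and_gate(probs):
    """AND gate: the output fails only if all inputs fail."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """OR gate: the output fails if any input fails."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Top = OR(pump_fails, AND(valve_a_fails, valve_b_fails))
pump, valve_a, valve_b = 0.01, 0.05, 0.05
top = or_gate([pump, and_gate([valve_a, valve_b])])
print(round(top, 6))  # 0.012475
```

Stochastic Petri Nets, as proposed in the work above, come into play precisely where such a static evaluation breaks down, e.g. when failure and repair behaviour depends on time or state.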

    Methods for the efficient measurement of phased mission system reliability and component importance

    An increasing number of systems operate over a number of consecutive time periods, in which their reliability structure and the consequences of failure differ, in order to perform some overall operation. Each distinct time period is known as a phase and the overall operation is known as a phased mission. Generally, a phased mission fails immediately if the system fails at any point and is considered a success only if all phases are completed without failure. The work presented in this thesis provides efficient methods for the prediction and optimisation of phased mission reliability. A number of techniques and methods for the analysis of phased mission reliability have been previously developed. Due to the component and system failure time dependencies introduced by the phases, the computational expense of these methods is high and this limits the size of the systems that can be analysed in reasonable time frames on modern computers. Two importance measures, which provide an index of the influence of each component on the system reliability, have also been previously developed. This is useful for the optimisation of the reliability of a phased mission; however, a much larger number have been developed for non-phased missions, and the different perspectives and functions they provide are advantageous. This thesis introduces new methods as well as improvements and extensions to existing methods for the analysis of both non-repairable and repairable systems, with an emphasis on improved efficiency in the derivation of phase and mission reliability. New importance measures for phased missions are also presented, including interpretations of those currently available for non-phased missions. These provide a number of interpretations of component importance, allowing those most suitable in a given context to be employed and thus aiding in the optimisation of mission reliability.
In addition, extensive computer code has been produced that implements and tests the majority of the newly developed techniques and methods. (EThOS - Electronic Theses Online Service, United Kingdom)
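The phase-induced dependencies mentioned above are exactly what makes exact analysis expensive; the sketch below deliberately ignores them and shows only the simplest non-repairable case: constant failure rates, a series structure in every phase, and a component that must merely survive until the end of the last phase that uses it. Component names, rates and durations are hypothetical.

```python
import math

# Simplified phased-mission reliability sketch: non-repairable components
# with constant (exponential) failure rates and a series structure in each
# phase. Names, rates and phase durations are hypothetical.

phases = [  # (duration, components required in that phase)
    (2.0, {"engine", "nav"}),
    (5.0, {"engine", "radio"}),
    (1.0, {"engine", "nav", "radio"}),
]
rates = {"engine": 0.001, "nav": 0.002, "radio": 0.004}

def mission_reliability(phases, rates):
    t, needed_until = 0.0, {}
    for dur, comps in phases:
        t += dur
        for c in comps:
            needed_until[c] = t  # end time of the last phase using c
    r = 1.0
    for c, t_end in needed_until.items():
        r *= math.exp(-rates[c] * t_end)  # survival up to that time
    return r

print(round(mission_reliability(phases, rates), 6))
```

Real phased-mission methods of the kind the thesis improves must handle structures that differ per phase and failures that carry over between phases, which is where the combinatorial cost arises.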

    Formal verification of automotive embedded UML designs

    Software applications are increasingly dominating safety critical domains. Safety critical domains are domains where the failure of any application could impact human lives. Software application safety was overlooked for quite some time, but more focus and attention are currently directed to this area due to the exponential growth of embedded software applications. Software systems have continuously faced challenges in managing the complexity associated with functional growth, the flexibility of systems so that they can be easily modified, the scalability of solutions across several product lines, the quality and reliability of systems, and finally the ability to detect defects early in design phases. AUTOSAR was established to develop open standards to address these challenges. ISO-26262, the automotive functional safety standard, aims to ensure the functional safety of automotive systems by providing requirements and processes to govern the software lifecycle. Each functional system needs to be classified in terms of safety goals, risks and Automotive Safety Integrity Level (ASIL: A, B, C or D), with ASIL D denoting the most stringent safety level. As the risk of the system increases, the ASIL increases and the standard mandates more stringent methods to ensure safety. ISO-26262 mandates that systems classified as ASIL C or D utilize walkthrough, semi-formal verification, inspection, control flow analysis, data flow analysis, static code analysis and semantic code analysis techniques to verify software unit design and implementation. Ensuring software specification compliance via formal methods has remained an academic endeavor for quite some time. Several factors discourage formal methods adoption in the industry. One major factor is the complexity of using formal methods.
Software specification compliance in automotive remains largely dependent on traceability matrices, human-based reviews, and testing activities conducted at either the actual production software level or the simulation level. The ISO-26262 automotive safety standard recommends, although not strongly, using formal notations in automotive systems that exhibit high risk in case of failure, yet the industry still relies heavily on semi-formal notations such as UML. The use of semi-formal notations leaves specification compliance heavily dependent on manual processes and testing efforts. In this research, we propose a framework in which UML finite state machines are compiled into formal notations, specification requirements are mapped into formal model theorems, and SAT/SMT solvers are utilized to validate implementation compliance with the specification. The framework allows semi-formal verification of AUTOSAR UML designs via an automated formal framework backbone. This semi-formal verification framework allows automotive software to comply with the ISO-26262 ASIL C and D unit design and implementation formal verification guidelines. Semi-formal UML finite state machines are automatically compiled into formal notations based on the Symbolic Analysis Laboratory (SAL) formal notation. Requirements are captured in the UML design and compiled automatically into theorems. Model checkers are run against the compiled formal model and theorems to detect counterexamples that violate the requirements in the UML model. Semi-formal verification of the design allows us to uncover issues that were previously detected only in the testing and production stages. The methodology is applied to several automotive systems to show how the framework automates the verification of UML-based designs, the de facto standard for automotive systems design, based on an implicit formal methodology while hiding the drawbacks that discouraged the industry from using it.
Additionally, the framework automates the ISO-26262 system design verification guideline, which would otherwise be verified via error-prone manual approaches.
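The core idea described above, checking a requirement against a state machine and reporting a counterexample trace when it fails, can be sketched without any solver at all for a finite machine. The states, events and the "never reach 'unsafe'" requirement below are hypothetical; the thesis itself compiles UML state machines into SAL and discharges the theorems with SAT/SMT-based model checkers.

```python
from collections import deque

# Explicit-state sketch of requirement checking on a finite state machine:
# explore all reachable states and report a violating path (counterexample)
# if the safety requirement "never reach the bad state" fails.
# States, events and transitions are hypothetical.

transitions = {
    "idle":    {"start": "running"},
    "running": {"stop": "idle", "fault": "unsafe"},
    "unsafe":  {},
}

def check_never_reaches(initial, bad):
    """BFS over the state graph; return a counterexample trace or None."""
    queue, seen = deque([(initial, [initial])]), {initial}
    while queue:
        state, path = queue.popleft()
        if state == bad:
            return path  # requirement violated: here is the trace
        for event, nxt in transitions[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # requirement holds on all reachable states

print(check_never_reaches("idle", "unsafe"))  # ['idle', 'running', 'unsafe']
```

Symbolic SAT/SMT-based checking, as used in the framework, scales this idea to state spaces far too large to enumerate explicitly.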

    Non deterministic Repairable Fault Trees for computing optimal repair strategy

    In this paper, the Non deterministic Repairable Fault Tree (NdRFT) formalism is proposed: it allows the failure modes of complex systems to be modelled, as well as their repair processes. The originality of this formalism with respect to other Fault Tree extensions is that it addresses repair strategy optimisation problems: in an NdRFT model, the decision on whether or not to start a given repair action is non deterministic, so that all the possibilities are left open. The formalism is rather powerful, allowing one to specify which failure events are observable, whether local or global repair can be applied, and the resources needed to start a repair action. The optimal repair strategy can then be computed by solving an optimisation problem on a Markov Decision Process (MDP) derived from the NdRFT. A software framework is proposed to automatically derive an MDP from an NdRFT model and to solve the resulting MDP.
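The solution step named above, computing an optimal repair policy on the derived MDP, can be illustrated with value iteration on a toy three-state model. The states, actions, transition probabilities and costs are hypothetical, not taken from the paper.

```python
# Value-iteration sketch for an optimal-repair MDP. Each action maps to a
# list of (probability, next_state, cost) outcomes; we minimise expected
# discounted cost. States, actions and numbers are hypothetical.

mdp = {
    "ok":       {"wait":   [(0.9, "ok", 0.0), (0.1, "degraded", 0.0)]},
    "degraded": {"wait":   [(0.5, "degraded", 1.0), (0.5, "failed", 1.0)],
                 "repair": [(1.0, "ok", 5.0)]},
    "failed":   {"repair": [(1.0, "ok", 20.0)]},
}
gamma = 0.9  # discount factor

def value_iteration(mdp, gamma, iters=500):
    v = {s: 0.0 for s in mdp}
    for _ in range(iters):
        v = {s: min(sum(p * (c + gamma * v[t]) for p, t, c in outs)
                    for outs in mdp[s].values())
             for s in mdp}
    policy = {s: min(mdp[s], key=lambda a: sum(
        p * (c + gamma * v[t]) for p, t, c in mdp[s][a])) for s in mdp}
    return v, policy

v, policy = value_iteration(mdp, gamma)
print(policy["degraded"])  # prints "repair"
```

In this toy model the cheap early repair dominates waiting for a costly full failure, which is the kind of trade-off the NdRFT-to-MDP translation makes solvable automatically.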

    Design-time detection of physical-unit changes in product lines

    Software product lines evolve over time, both as new products are added to the product line and as existing products are updated. This evolution creates unintended as well as planned changes to systems. A persistent problem is that unintended changes are hard to detect. Often they are not discovered until testing or operations. Late discovery is a problem especially in safety-critical, cyber-physical product lines such as avionics, pacemakers, and smart-braking systems, where unintended changes may lead to accidents. This thesis proposes an approach and a prototype tool to detect unintended changes earlier in the development of a new product in the product line. The capability to detect potentially risky, unintended changes at the design stage is beneficial because repair is easier, less costly, and safer in design than when detection is delayed to testing or operations. The Product Line Change Detector (PLCD) introduced here analyzes products’ SysML block and parametric diagrams, which are typical project artifacts for cyber-physical systems, in order to detect problematic, unintended changes. The PLCD software automatically detects potential change-related issues, ranks them in terms of severity using the products’ safety-analysis artifacts, and reports them to developers in a graphical format. Developers select and fix the reported issues with the assistance of the tool’s displays, with the tool recording the fixes and updating the SysML diagrams accordingly. The evaluation of PLCD’s performance and capabilities uses three product lines, extended from cyber-physical systems in the literature: a NASA astronaut jetpack, vehicle dynamics, and a low-earth satellite. The evaluation focuses on unintended changes that cause physical-unit inconsistencies, such as between meters and feet, since those may lead to accidents in cyber-physical product lines. The evaluation results show that PLCD successfully detects such unintended changes both in a single product and between products in a software product line.
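The essence of the check described above, flagging parameters whose declared physical unit differs between two product variants, reduces to a comparison over annotated design parameters. The parameter names and units below are hypothetical; PLCD itself extracts this information from SysML block and parametric diagrams and ranks findings by severity.

```python
# Sketch of a cross-product physical-unit consistency check: each design
# parameter carries a declared unit, and parameters shared by two product
# variants are flagged when their units disagree (e.g. meters vs feet).
# Parameter names and units are hypothetical.

product_a = {"altitude": "m",  "thrust": "N", "burn_time": "s"}
product_b = {"altitude": "ft", "thrust": "N", "burn_time": "s"}

def unit_changes(old, new):
    """Return parameters present in both products whose units differ."""
    return {p: (old[p], new[p])
            for p in old.keys() & new.keys() if old[p] != new[p]}

print(unit_changes(product_a, product_b))  # {'altitude': ('m', 'ft')}
```

Catching such a mismatch at design time, rather than in testing or operations, is precisely the benefit the thesis argues for.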

    Methodology for automated Petri Net model generation to support Reliability Modelling

    As the complexity of engineering systems and processes increases, determining their optimal performance also becomes increasingly complex. There are various reliability methods available to model performance, but generating the models can become a significant task that is cumbersome, error-prone and tedious. Hence, over the years, work has been undertaken into automatically generating reliability models in order to detect the most critical components and design errors at an early stage, supporting alternative designs. Earlier work lacks full automation, resulting in semi-automated methods, since it requires user intervention to import system information into the algorithm, focuses on specific domains, and cannot accurately model systems or processes with control loops and dynamic features. This thesis develops a novel method that can generate reliability models for complex systems and processes, based on Petri Net models. The process has been fully automated with software developed that extracts the information required for the model from a topology diagram describing the system or process considered, and generates the corresponding mathematical and graphical representations of the Petri Net model. Such topology diagrams are used in industrial sectors ranging from aerospace and automotive engineering to finance, defence, government, entertainment and telecommunications. Complex real-life scenarios are studied to demonstrate the application of the proposed method, followed by the verification, validation and simulation of the developed Petri Net models. Thus, the proposed method is seen to be a powerful tool to automatically obtain the PN modelling formalism from a topology diagram, commonly used in industry, by:
    - Handling and efficiently modelling systems and processes with a large number of components and activities respectively, as well as dependent events and control loops.
    - Providing generic domain applicability.
    - Providing software independence by generating models readily understandable by the user without requiring further manipulation by any industrial software.
    Finally, the method documented in this thesis enables engineers to conduct reliability and performance analysis in a timely manner that ensures the results feed into the design process.
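The Petri Net formalism that the generated models use can be summarised in a few lines: a marking assigns tokens to places, and a transition fires when all of its input places hold a token, consuming from inputs and producing to outputs. The places and transitions below (a simple fail-repair cycle) are hypothetical, not from the thesis.

```python
# Minimal Petri Net firing sketch: a marking maps places to token counts;
# a transition is enabled when every input place holds at least one token,
# and firing it moves tokens from inputs to outputs.
# Place and transition names are hypothetical.

transitions = {
    "start_repair": {"in": ["failed", "crew_free"], "out": ["repairing"]},
    "end_repair":   {"in": ["repairing"], "out": ["working", "crew_free"]},
}

def enabled(marking, t):
    return all(marking.get(p, 0) >= 1 for p in transitions[t]["in"])

def fire(marking, t):
    assert enabled(marking, t), f"{t} is not enabled"
    m = dict(marking)
    for p in transitions[t]["in"]:
        m[p] -= 1
    for p in transitions[t]["out"]:
        m[p] = m.get(p, 0) + 1
    return m

m0 = {"failed": 1, "crew_free": 1}   # system down, repair crew available
m1 = fire(m0, "start_repair")
m2 = fire(m1, "end_repair")
print(m2)
```

The "crew_free" place illustrates how repair resources and dependencies, the features earlier semi-automated methods struggled with, are expressed naturally as shared tokens.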

    SAPHIRE 8 Volume 2 - Technical Reference

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of computer programs that were developed to create and analyze probabilistic risk assessments (PRAs). Herein, information is provided on the principles used in the construction and operation of Version 8.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply under various assumptions concerning repairability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, Workspace algorithms, cut set "recovery", end state manipulation, and use of "compound events".
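The cut-set quantification step mentioned above can be illustrated with the standard minimal cut set upper bound for the top-event probability, 1 - prod_i (1 - P(C_i)), where each P(C_i) is the product of the basic-event probabilities in minimal cut set C_i (assuming independent basic events). The event names, probabilities and cut sets below are hypothetical, not taken from the SAPHIRE manual.

```python
# Top-event probability from minimal cut sets via the minimal cut set
# upper bound, assuming independent basic events.
# Event names, probabilities and cut sets are hypothetical.

probs = {"a": 0.01, "b": 0.02, "c": 0.005}
cut_sets = [{"a", "b"}, {"c"}]  # top fails if (a AND b) OR c

def cut_set_prob(cs):
    """Probability of a cut set: product over its basic events."""
    p = 1.0
    for e in cs:
        p *= probs[e]
    return p

def min_cut_upper_bound(cut_sets):
    """1 - prod(1 - P(C_i)): a conservative bound on the top event."""
    q = 1.0
    for cs in cut_sets:
        q *= 1.0 - cut_set_prob(cs)
    return 1.0 - q

print(round(min_cut_upper_bound(cut_sets), 8))  # 0.005199
```

The bound is exact when cut sets share no events, and conservative otherwise, which is why quantification tools commonly report it alongside rare-event approximations.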