
    Hybrid Causal Logic Methodology for Risk Assessment

    Probabilistic Risk Assessment (PRA) is increasingly used in a number of industries, such as nuclear, aerospace, and chemical processing. PRA characterizes risk in terms of three questions: (1) What can go wrong? (2) How likely is it? (3) What are the consequences? PRA studies answer these questions by systematically postulating and quantifying undesired scenarios in a highly integrated, top-down fashion. The PRA process for technological systems typically includes the following steps: objective and scope definition, system familiarization, identification of initiating events, scenario modeling, quantification, uncertainty analysis, sensitivity analysis, importance ranking, and data analysis. Fault trees and event trees are widely used tools for risk scenario analysis in PRAs of technological systems. This methodology is most suitable for systems made of hardware components.

    A more comprehensive treatment of the risks of technical systems needs to consider the entire environment within which such systems are designed and operated. This environment includes the physical environment, the socio-economic environment, and in some cases the regulatory and oversight environment. The technical system, supported by an organization of people in charge of its operation, sits at the intersection of these environments. To develop a more comprehensive risk model for these systems, an important step is to extend the modeling capabilities of conventional PRA to include risks associated with human activities and organizational factors, in addition to hardware and software failures and adverse conditions of the physical environment. The causal modeling should also extend to the influence of regulatory and oversight functions.

    This research offers such a methodology. It proposes a multi-layered modeling approach so that the most appropriate techniques are applied to the different individual domains of the system. The approach is called the Hybrid Causal Logic (HCL) methodology. The main layers include: (a) a model to define the safety/risk context, using the event sequence diagram (ESD) method, which helps define the kinds of accidents and incidents that can occur in relation to the system being considered; (b) a model that captures the behaviors of the physical system (hardware, software, and environmental factors) as possible causes or contributing factors to the accidents and incidents delineated by the event sequence diagrams, using common system modeling techniques such as fault trees (FT); and (c) a model that extends the causal chain of events to their potential human and organizational roots, using Bayesian belief networks (BBN). Bayesian belief networks are particularly useful because they do not require complete knowledge of the relation between causes and effects. The integrated model is therefore a hybrid causal model with corresponding sets of taxonomies and analytical and computational procedures. In this research, a methodology to combine fault trees, event trees or event sequence diagrams, and Bayesian belief networks has been introduced. Since such hybrid models involve significant interdependencies, the nature of these dependencies is first determined to pave the way for developing proper algorithmic solutions of the logic model.
Major achievements of this work are: (1) development of the Hybrid Causal Logic model concept and quantification algorithms; (2) development and testing of a computer implementation of the algorithms (collaborative work); (3) development and implementation of algorithms for HCL-based importance measures, an uncertainty propagation method for the BBN models, and algorithms for qualitative-quantitative Bayesian belief networks; and (4) development and testing of the Integrated Risk Information System (IRIS) software based on the HCL methodology.
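To make the layered idea concrete, the following is a minimal sketch, assuming independent basic events and entirely hypothetical names and numbers, of how a probability supplied by a BBN layer for an organizational factor can condition a fault-tree basic event before propagation through the gates. It illustrates the hybrid concept only; it is not the IRIS implementation or the paper's quantification algorithm.

# Minimal illustrative sketch (hypothetical model, not the IRIS implementation):
# a fault-tree layer whose basic-event probability is conditioned on an
# organizational factor supplied by a BBN layer, assuming independent events.

def or_gate(probs):
    # P(at least one input event occurs), assuming independence
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def and_gate(probs):
    # P(all input events occur), assuming independence
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# BBN layer (assumed output): probability that maintenance oversight is inadequate
p_oversight_poor = 0.10

# Basic event conditioned on the organizational factor (law of total probability)
p_valve_misaligned = 0.15 * p_oversight_poor + 0.02 * (1.0 - p_oversight_poor)

# Physical-system layer: top event occurs if both redundant pumps fail,
# OR the valve is left misaligned
p_pumps_fail = and_gate([0.01, 0.01])
p_top_event = or_gate([p_pumps_fail, p_valve_misaligned])
print(f"P(top event) = {p_top_event:.4f}")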

    Addressing Complexity and Intelligence in Systems Dependability Evaluation

    Engineering and computing systems are increasingly complex, intelligent, and open adaptive. When it comes to the dependability evaluation of such systems, certain challenges are posed by the characteristics of “complexity” and “intelligence”. The first aspect of complexity is the dependability modelling of large systems with many interconnected components and dynamic behaviours such as priority, sequencing, and repairs. To address this, the thesis proposes a novel hierarchical solution to dynamic fault tree analysis using semi-Markov processes. A second aspect of complexity is the environmental conditions that may impact dependability and their modelling. For instance, weather and logistics can influence maintenance actions and hence the dependability of an offshore wind farm. The thesis proposes a semi-Markov-based maintenance model, the “Butterfly Maintenance Model (BMM)”, to model this complexity and accommodate it in dependability evaluation. A third aspect of complexity is the open nature of systems of systems, such as swarms of drones, which makes complete design-time dependability analysis infeasible. To address this aspect, the thesis proposes a dynamic dependability evaluation method using fault trees and Markov models at runtime. The challenge of “intelligence” arises because Machine Learning (ML) components do not exhibit programmed behaviour; their behaviour is learned from data. In traditional dependability analysis, however, systems are assumed to be programmed or designed. When a system has learned from data, a distributional shift of operational data away from the training data may cause the ML component to behave incorrectly, e.g., to misclassify objects. To address this, a new approach called SafeML is developed that uses statistical distance measures to monitor the performance of ML against such distributional shifts. The thesis develops the proposed models and evaluates them on case studies, highlighting improvements to the state of the art, limitations, and future work.
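As an illustration of the distance-based monitoring idea behind SafeML, the sketch below compares an operational feature sample against the corresponding training sample with a two-sample Kolmogorov-Smirnov test. The feature, data, and alert threshold are hypothetical, and the authors' SafeML tooling may use different or additional distance measures; this is only a sketch of the general approach.

# Sketch of distance-based distributional-shift monitoring (assumed setup):
# compare an operational feature sample against the training sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training data (assumed)
op_feature = rng.normal(loc=0.6, scale=1.0, size=500)      # shifted operational data (assumed)

# Kolmogorov-Smirnov distance between the empirical distributions; a large
# statistic (small p-value) flags a distributional shift, so the ML output
# should be treated with reduced confidence or handed to a fallback.
res = stats.ks_2samp(train_feature, op_feature)
shift_detected = res.pvalue < 0.01  # assumed alert threshold
print(f"KS statistic = {res.statistic:.3f}, p = {res.pvalue:.3g}, shift = {shift_detected}")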

    Inclusion-exclusion principle for belief functions

    The inclusion-exclusion principle is a well-known property in probability theory, and is instrumental in some computational problems such as the evaluation of system reliability or the calculation of the probability of a Boolean formula in diagnosis. However, in the setting of uncertainty theories more general than probability theory, this principle no longer holds in general. It is therefore useful to know for which families of events it continues to hold. This paper investigates this question in the setting of belief functions. After exhibiting original necessary and sufficient conditions for the principle to hold, we illustrate its use on the uncertainty analysis of Boolean and non-Boolean systems in reliability.
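For reference, the classical inclusion-exclusion identity that the paper examines reads, for a probability measure P and events A_1, ..., A_n:

P\left(\bigcup_{i=1}^{n} A_i\right) \;=\; \sum_{\emptyset \neq I \subseteq \{1,\dots,n\}} (-1)^{|I|+1}\, P\left(\bigcap_{i \in I} A_i\right)

A belief function Bel, being totally monotone, in general guarantees only the corresponding inequality with "≥" in place of equality; the paper characterizes the families of events for which equality still holds when P is replaced by Bel.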

    Failure analysis of a complex system based on partial information about subsystems, with potential applications to aircraft maintenance

    In many real-life applications (e.g., in aircraft maintenance), we need to estimate the probability of failure of a complex system (such as an aircraft as a whole or one of its subsystems). Complex systems are usually built with redundancy that allows them to withstand the failure of a small number of components. In this paper, we assume that we know the structure of the system and, as a result, for each possible set of failed components, we can tell whether this set will lead to a system failure. For each component A, we know the probability P(A) of its failure only with some uncertainty: e.g., we know lower and upper bounds on this probability. Usually, it is assumed that failures of different components are independent events. Our objective is to use all this information to estimate the probability of failure of the entire complex system. In this paper, we describe several methods for solving this problem, including a new efficient estimation method based on Cauchy deviates.
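To illustrate the problem setting, the sketch below bounds the failure probability of a small redundant system when only interval bounds on the component failure probabilities are known, by exploiting the monotonicity of the failure probability in each component's probability and evaluating at the interval endpoints. The structure and numbers are hypothetical, and this simple endpoint method is only one of the straightforward approaches; it is not the paper's Cauchy-deviate technique.

# Sketch (assumed structure): bounding system failure probability from
# interval bounds on component failure probabilities, assuming independence.
# The system fails if component C fails, or if both redundant components
# A and B fail.

def system_failure_prob(pA, pB, pC):
    # P((A and B) or C) under independence
    p_ab = pA * pB
    return 1.0 - (1.0 - p_ab) * (1.0 - pC)

# Interval bounds [lower, upper] on each component's failure probability (assumed)
bounds = {"A": (0.01, 0.03), "B": (0.02, 0.05), "C": (0.001, 0.002)}

# The failure probability is non-decreasing in each argument, so the extremes
# are attained at the interval endpoints.
p_low = system_failure_prob(bounds["A"][0], bounds["B"][0], bounds["C"][0])
p_high = system_failure_prob(bounds["A"][1], bounds["B"][1], bounds["C"][1])
print(f"P(system failure) in [{p_low:.5f}, {p_high:.5f}]")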

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.