144 research outputs found

    Quantification of uncertainty in probabilistic safety analysis

    This thesis develops methods for the quantification and interpretation of uncertainty in probabilistic safety analysis, focusing on fault trees. The output of a fault tree analysis is usually the probability of occurrence of an undesirable event (the top event), calculated from the failure probabilities of identified basic events. The standard method for evaluating the uncertainty distribution is Monte Carlo simulation, but this is computationally intensive and does not readily reveal the dominant contributors to the uncertainty. This thesis develops a closed-form approximation for the fault tree top event uncertainty distribution for models whose inputs are all lognormally distributed. Its output is compared with that of two sampling-based approximation methods: standard Monte Carlo analysis, and Wilks' method, which is based on order statistics and uses small sample sizes. Wilks' method can provide an upper bound for percentiles of the top event distribution and is computationally cheap. The combination of the lognormal approximation and Wilks' method gives, respectively, the overall shape of the distribution and high confidence on particular percentiles of interest. This is an attractive, practical option for evaluating uncertainty in fault trees and, more generally, in certain multilinear models. A new practical method of ranking uncertainty contributors in lognormal models is developed, based on cutset uncertainty, which can be evaluated in closed form. The method is demonstrated through examples, including a simple fault tree model and a model the size of a commercial PSA model for a nuclear power plant. Finally, quantification of "hidden uncertainties" is considered; these are uncertainties not typically included in PSA models, but which may contribute considerably to the overall results if included.
A specific example of the inclusion of a missing uncertainty is explained in detail, and the effects on PSA quantification are considered. It is demonstrated that the effect on the PSA results can be significant, potentially permuting the order of the most important cutsets, which is of practical concern for the interpretation of PSA models. Finally, suggestions are made for the identification and inclusion of further hidden uncertainties.
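Wilks' method as described above rests on a simple order-statistics identity: the probability that the maximum of n independent samples exceeds the p-th percentile is 1 − p^n, so the smallest n achieving confidence β satisfies 1 − p^n ≥ β. A minimal sketch of the idea follows; the three lognormal basic-event parameters and the OR-gate rare-event approximation are hypothetical, not taken from the thesis:

```python
import math
import random

def wilks_sample_size(p, beta):
    """Smallest n such that the sample maximum exceeds the p-th
    percentile with confidence beta, i.e. 1 - p**n >= beta."""
    return math.ceil(math.log(1.0 - beta) / math.log(p))

n = wilks_sample_size(0.95, 0.95)  # classic 95/95 case: n = 59

# Hypothetical top event: OR gate over three lognormal basic events,
# evaluated with the rare-event approximation (sum of probabilities).
# The lognormal parameters below are illustrative only.
random.seed(0)
def top_event_sample():
    return sum(random.lognormvariate(mu, 0.5) for mu in (-7.0, -7.5, -8.0))

samples = [top_event_sample() for _ in range(n)]
bound = max(samples)  # 95%-confidence upper bound on the 95th percentile
```

The appeal noted in the abstract is visible here: 59 model evaluations give a defensible percentile bound, where a full Monte Carlo percentile estimate would need orders of magnitude more.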

    Risk assessment of a bulk cryogenic tank: Beyond the Leak-Before-Break criterion

    The increase in the size and production capacity of air separation plants has increased the need for methodologies to properly assess the risk of major releases of liquefied gas. The Leak-Before-Break (LBB) assessment is currently used to demonstrate the safety of structures containing liquefied gas, under the assumption that the tank always operates in nominal conditions. This paper questions that assumption and proposes a new methodology for assessing the risks of catastrophic cryogenic tank rupture. The methodology provides a comprehensive understanding of the issues associated with the worst-case rupture scenario, from the investigation of the causes of off-nominal operating conditions to the analysis of the resulting structural consequences, within a probabilistic framework.

    Aggregation of importance measures for decision making in reliability engineering

    This article investigates the aggregation of rankings based on component importance measures, to guide design and maintenance decisions. Ranking aggregation algorithms from the literature are considered, a procedure for ensuring that the aggregated ranking complies with the Condorcet majority criterion is presented, and two original ranking aggregation approaches are proposed. Comparisons are made on a case study of the auxiliary feed-water system of a nuclear pressurized water reactor.
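As an illustration of the kind of aggregation discussed (the article's own algorithms are not reproduced here), a Borda-count aggregation with a Condorcet-winner check can be sketched as follows; the component names and the three input rankings are hypothetical:

```python
def borda(rankings):
    """Aggregate best-first rankings by Borda count (higher score = better)."""
    m = len(rankings[0])
    score = {c: 0 for c in rankings[0]}
    for r in rankings:
        for pos, c in enumerate(r):
            score[c] += m - 1 - pos
    return sorted(score, key=score.get, reverse=True)

def condorcet_winner(rankings):
    """Candidate beating every other in pairwise majority, or None."""
    for a in rankings[0]:
        if all(2 * sum(r.index(a) < r.index(b) for r in rankings) > len(rankings)
               for b in rankings[0] if b != a):
            return a
    return None

# Three importance-measure rankings of four components (hypothetical)
ranks = [["pump", "valve", "sensor", "pipe"],
         ["pump", "sensor", "valve", "pipe"],
         ["valve", "pump", "sensor", "pipe"]]
```

A Borda aggregation need not agree with pairwise majority in general; the Condorcet check above is one way to verify the compliance the article requires of an aggregated ranking.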

    Genetic algorithms for condition-based maintenance optimization under uncertainty

    This paper proposes and compares several techniques for maintenance optimization based on Genetic Algorithms (GAs) when the parameters of the maintenance model are affected by uncertainty and fitness values are represented by Cumulative Distribution Functions (CDFs). The main issues addressed are the development of a method to rank the uncertain fitness values and the definition of a novel Pareto dominance concept. The GA-based methods are applied to a practical case study concerning a condition-based maintenance policy for the degrading nozzles of a gas turbine operated in an energy production plant.
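One standard way to rank fitness values given as empirical CDFs is first-order stochastic dominance; the sketch below is for illustration only and is not necessarily the ranking method developed in the paper, and the two cost samples are hypothetical:

```python
def dominates(a, b):
    """First-order stochastic dominance for minimisation: sample a
    dominates sample b if its empirical CDF lies at or above b's at
    every observed value, strictly above somewhere."""
    xs = sorted(set(a) | set(b))
    def cdf(s, x):  # empirical CDF of sample s at x
        return sum(v <= x for v in s) / len(s)
    at_least = all(cdf(a, x) >= cdf(b, x) for x in xs)
    somewhere = any(cdf(a, x) > cdf(b, x) for x in xs)
    return at_least and somewhere

# Hypothetical maintenance-cost samples under two candidate policies
cost_a = [10, 12, 14, 15]
cost_b = [11, 13, 15, 18]
```

When neither CDF dominates the other, the two policies are incomparable under this ordering, which is exactly the situation that motivates a generalised Pareto dominance concept for uncertain fitness values.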

    Modeling and analysis of process failures using probabilistic functional model

    Failure analysis is an important tool for effective safety management in the chemical process industry. This thesis applies a probabilistic approach to two failure analysis techniques. The first focuses on fault detection and diagnosis (FDD); the second on vulnerability analysis of plant components. The FDD strategy uses a class of functional model called multilevel flow modeling (MFM). Since this model is not commonly used for chemical processes, it was tested on a crude distillation unit and validated against a simulation flowsheet implemented in Aspen HYSYS (Version 8.4) to demonstrate its suitability. Within the proposed FDD framework, probabilistic information was added by transforming the MFM model into an equivalent fault tree model, providing the ability to predict the likelihood of component failures. This model was then converted into an equivalent Bayesian network model using HUGIN 8.1 software to facilitate computation. Evaluation of the system on a heat exchanger pilot plant highlights the model's capability to detect process faults and identify the associated root causes. The proposed technique also incorporates multi-state functional outcomes, in addition to the binary states offered by the typical MFM model. The second tool proposed is a new methodology called the basic event ranking approach (BERA), which measures the relative vulnerabilities of plant components and can assist plant maintenance and upgrade planning. The framework was applied to a case study involving toxic prevention barriers in a typical process plant. The method was compared with common importance index methodologies, and the results confirmed BERA's suitability as a tool to support risk-based maintenance planning decisions in a process plant.
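For context on the importance indices that BERA is compared against, one of the most common, the Fussell-Vesely measure, can be computed in a few lines from minimal cut sets under the rare-event approximation. The toy fault tree and probabilities below are hypothetical, not from the thesis:

```python
def fussell_vesely(cutsets, p):
    """Fussell-Vesely importance under the rare-event approximation:
    the fraction of top-event probability carried by the minimal cut
    sets containing each basic event."""
    def prob(cs):
        out = 1.0
        for e in cs:
            out *= p[e]
        return out
    top = sum(prob(cs) for cs in cutsets)  # rare-event approximation
    return {e: sum(prob(cs) for cs in cutsets if e in cs) / top for e in p}

# Toy fault tree: TOP = (A AND B) OR C, minimal cut sets {A, B} and {C}
p = {"A": 0.01, "B": 0.02, "C": 0.001}
fv = fussell_vesely([{"A", "B"}, {"C"}], p)
```

Here the single-event cut set {C} dominates even though C is the least likely basic event, which is the kind of ranking behaviour a vulnerability measure such as BERA is benchmarked against.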

    Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners (Second Edition)

    Probabilistic Risk Assessment (PRA) is a comprehensive, structured, and logical analysis method aimed at identifying and assessing risks in complex technological systems for the purpose of cost-effectively improving their safety and performance. NASA's objective is to better understand and effectively manage risk, and thus more effectively ensure mission and programmatic success, and to achieve and maintain high safety standards at NASA. NASA intends to use risk assessment in its programs and projects to support optimal management decision making for the improvement of safety and program performance. In addition to using quantitative/probabilistic risk assessment to improve safety and enhance the safety decision process, NASA has incorporated quantitative risk assessment into its system safety assessment process, which until now has relied primarily on a qualitative representation of risk. Also, NASA has recently adopted the Risk-Informed Decision Making (RIDM) process [1-1] as a valuable addition to supplement existing deterministic and experience-based engineering methods and tools. Over the years, NASA has been a leader in most of the technologies it has employed in its programs. One would think that PRA should be no exception. In fact, it would be natural for NASA to be a leader in PRA because, as a technology pioneer, NASA uses risk assessment and management implicitly or explicitly on a daily basis. NASA has probabilistic safety requirements (thresholds and goals) for crew transportation system missions to the International Space Station (ISS) [1-2]. NASA intends to have probabilistic requirements for any new human spaceflight transportation system acquisition. Methods to perform risk and reliability assessment in the early 1960s originated in U.S. aerospace and missile programs. Fault tree analysis (FTA) is an example. It would have been a reasonable extrapolation to expect that NASA would also become the world leader in the application of PRA. 
That was, however, not to happen. Early in the Apollo program, estimates of the probability of a successful round-trip human mission to the Moon yielded disappointingly low (and suspect) values, and NASA was discouraged from performing further quantitative risk analyses until some two decades later, when the methods had become more refined, rigorous, and repeatable. Instead, NASA decided to rely primarily on Hazard Analysis (HA) and Failure Modes and Effects Analysis (FMEA) methods for system safety assessment.

    Integrated Scenario-Based Methodology for Project Risk Management

    Project risk management is currently used in several industries and mandated by government acquisition agencies around the world to manage uncertainty and improve a project's probability of success. Common practice involves developing a list of risk items scored on ordinal probability and consequence scales by committee, usually focusing on cost and schedule issues. A scenario-based process modeling construct is introduced using a hybrid Probabilistic Risk Assessment and Decision Analysis framework, integrating project development risks with operational system risks. Project management's decisions are explicitly modeled and ranked by risk importance to the project. Multiple consequence attributes are unified, providing a basis for computing total project risk. This study shows that such an approach leads to an analysis system in which scenarios tracing risk items to many possible consequences are explicitly understood; the interaction between cost, schedule, and performance models drives the analysis; probabilities of overruns, delays, and increased system hazards are determined directly; and state-of-the-art quantification techniques are directly applicable. All of these enhance project management's capability to respond with more effective decisions.

    A holistic framework of degradation modeling for reliability analysis and maintenance optimization of nuclear safety systems

    Components of nuclear safety systems are in general highly reliable, which makes modeling their degradation and failure behavior difficult because of the limited amount of data available. Moreover, the complexity of the modeling task is increased by the fact that these systems are often subject to multiple competing degradation processes, which can be mutually dependent under certain circumstances and influenced by a number of external factors (e.g. temperature, stress, mechanical shocks). In this complicated setting, this PhD work develops a holistic framework of models and computational methods for the reliability analysis and maintenance optimization of nuclear safety systems, taking into account the available knowledge of the systems, their degradation and failure behavior, their dependencies, the external influencing factors, and the associated uncertainties. The original scientific contributions of the work are: (1) for single components, random shocks are integrated into multi-state physics models for component reliability analysis, considering general dependencies between the degradation and two types of random shocks.
(2) For multi-component systems with a limited number of components: (a) a piecewise-deterministic Markov process modeling framework is developed to treat degradation dependency in systems whose degradation processes are described by physics-based and multi-state models; (b) epistemic uncertainty due to incomplete or imprecise knowledge is considered, and a finite-volume scheme is extended to assess the (fuzzy) system reliability; (c) mean absolute deviation importance measures are extended to components with multiple dependent competing degradation processes and subject to maintenance; (d) the optimal maintenance policy under epistemic uncertainty and degradation dependency is derived by combining the finite-volume scheme, differential evolution, and non-dominated sorting differential evolution; (e) the modeling framework of (a) is extended to include the impacts of random shocks on the dependent degradation processes. (3) For multi-component systems with a large number of components, a reliability assessment method is proposed that accounts for degradation dependency, combining binary decision diagrams and Monte Carlo simulation to reduce computational cost.

    Efficient Reliability and Sensitivity Analysis of Complex Systems and Networks with Imprecise Probability

    Complex systems and networks, such as grid systems and transportation networks, are backbones of our society, so performing RAMS (Reliability, Availability, Maintainability, and Safety) analysis on them is essential. Complex systems consist of multiple component types, which are time-consuming to analyse using cut sets or system signature methods. Analytical solutions (when available) are always preferable to simulation methods, since their computational time is in general negligible. However, analytical solutions are not always available, or are restricted to particular cases: for instance, if there are imprecisions in the components' failure time distributions, or if empirical distributions of component failure times are used, no analytical method can be applied without resorting to some degree of simplification or approximation. In real applications, there sometimes exist common cause failures within complex systems, which invalidate the assumption of component independence. In this dissertation, the concept of the survival signature is used for reliability analysis of complex systems and realistic networks with multiple types of components. It opens a new pathway for a structured approach with high computational efficiency, based on a complete probabilistic description of the system. An efficient algorithm for evaluating the survival signature of a complex system, based on binary decision diagrams, is introduced in the thesis. In addition, the proposed novel survival signature-based simulation techniques can be applied to any system, irrespective of the probability distribution used for component failure times. Hence, the advantage of the simulation methods over the analytical methods lies not in computational time, but in the possibility of analysing any kind of system without introducing simplifications or unjustified assumptions.
The thesis extends survival signature analysis to repairable systems and illustrates imprecise probability methods for modelling uncertainty in lifetime distribution specifications. Based on these methodologies, the dissertation proposes applications for calculating importance measures and performing sensitivity analysis. Specifically, the novel methodologies are based on the survival signature and allow the most critical component, or set of components, to be identified at different survival times of the system. The imprecision caused by limited data or incomplete information on the system is taken into consideration when performing sensitivity analysis and calculating the component importance index. To adapt these methods to systems with components subject to common cause failures, α-factor models are presented in this dissertation. The approaches are based on the survival signature and can be applied to complex systems with multiple component types. Furthermore, imprecision and uncertainty in the α-factor parameters or component failure distribution parameters are considered as well. Numerical examples are presented in each chapter to show the applicability and efficiency of the proposed methodologies for reliability and sensitivity analysis of complex systems and networks with imprecise probability.
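The survival signature Φ(l) is the probability that the system works given that exactly l of its m exchangeable components work; for a single component type with i.i.d. survival probability r, system reliability follows as Σ_l Φ(l) C(m,l) r^l (1−r)^(m−l). A minimal sketch using brute-force state enumeration (not the BDD-based algorithm of the thesis) on a 2-out-of-3 example:

```python
from itertools import combinations
from math import comb

def survival_signature(m, struct):
    """Phi(l): fraction of the C(m, l) equally likely states with
    exactly l working components in which the system functions."""
    phi = []
    for l in range(m + 1):
        states = list(combinations(range(m), l))
        phi.append(sum(struct(set(s)) for s in states) / len(states))
    return phi

def system_reliability(phi, r):
    """P(system survives) for i.i.d. component survival probability r."""
    m = len(phi) - 1
    return sum(phi[l] * comb(m, l) * r**l * (1 - r)**(m - l)
               for l in range(m + 1))

# 2-out-of-3 system: functions when at least two components work
phi = survival_signature(3, lambda up: len(up) >= 2)  # [0, 0, 1, 1]
rel = system_reliability(phi, 0.9)
```

The separation visible here, structure (Φ) computed once versus the probabilistic part evaluated for any lifetime model, is what makes the signature attractive for the imprecise-probability extensions the thesis develops.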

    An Integrated Framework to Evaluate Off-Nominal Requirements and Reliability of Novel Aircraft Architectures in Early Design

    One of the barriers to the development of novel aircraft architectures and technologies is the uncertainty related to their reliability and the safety risk they pose. In the conceptual and preliminary design stages, traditional system safety techniques rely on heuristics, experience, and historical data to assess these requirements. The limitations and off-nominal operational considerations generally postulated during traditional safety analysis may not be complete or correct for new concepts. Additionally, the dearth of available reliability data results in poor treatment of epistemic and aleatory uncertainty for novel aircraft architectures. Two performance-based methods are demonstrated to improve the identification and characterization of safety-related off-nominal requirements in early design. The problem of allocating requirements to the unit level is solved using a network-based bottom-up analysis algorithm combined with the Critical Flow Method. A Bayesian probability approach is utilized to better handle epistemic and aleatory uncertainty while assessing unit-level failure rates. When combined with a Bayesian decision-theoretic approach, it provides a mathematically backed framework for compliance finding under uncertainty. To estimate the multi-state reliability of complex systems, this dissertation contributes a modified Monte Carlo algorithm that uses the previously generated Bayesian failure-rate posteriors. Finally, multi-state importance measures are introduced to determine the sensitivity of different hazard severities to unit reliability. The tools, techniques, and methods developed in this dissertation are combined into an integrated framework capable of performing trade studies informed by safety and reliability considerations for novel aircraft architectures in early preliminary design.
A test distributed electric propulsion (T-DEP) aircraft inspired by the X-57 is utilized as a test problem to demonstrate this framework.
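The Bayesian treatment of unit failure rates can be illustrated with the standard conjugate Gamma-Poisson update; the prior parameters, failure count, and exposure time below are assumed for illustration and are not the dissertation's actual numbers:

```python
import random

def gamma_poisson_posterior(a, b, failures, exposure_hours):
    """Conjugate Bayesian update for a Poisson failure rate:
    Gamma(a, b) prior plus k failures in T hours -> Gamma(a+k, b+T)."""
    return a + failures, b + exposure_hours

# Hypothetical unit: Gamma(1, 2000) prior, 2 failures in 10,000 hours
a, b = gamma_poisson_posterior(1.0, 2000.0, failures=2, exposure_hours=10000.0)
post_mean = a / b  # posterior mean failure rate per hour

# Posterior draws of the failure rate, usable as inputs to a
# Monte Carlo reliability simulation over sampled rates.
random.seed(0)
draws = [random.gammavariate(a, 1.0 / b) for _ in range(1000)]
```

Sampling the rate from its posterior, rather than fixing a point estimate, is what propagates the epistemic uncertainty about sparse failure data into the downstream multi-state reliability results.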