5,006 research outputs found

    Risk-informed decision-making in the presence of epistemic uncertainty

    An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where the aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from the elicitation of knowledge about parameters to the decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, the results provided by these techniques, often in terms of probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a single indicator of the likelihood of risk, called the confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
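    As a rough illustration of the propagation scheme described above, the sketch below mixes Monte Carlo sampling of an aleatory variable with interval analysis of an epistemic one for a hypothetical toy model Z = X * Y, and then collapses the resulting probability interval with a Hurwicz-style pessimism weight. The model, the parameter values, and the weighting rule are illustrative assumptions, not the paper's exact construction.

```python
import random

def propagate(n_samples=10_000, alpha=0.7, seed=0):
    """Hybrid uncertainty propagation for a toy risk model Z = X * Y.

    X is aleatory (random) and is sampled by Monte Carlo; Y is epistemic,
    known only as an interval [y_lo, y_hi] and propagated by interval
    arithmetic. The output is a probability interval for the event
    Z > threshold, not a single probability.
    """
    rng = random.Random(seed)
    y_lo, y_hi = 0.8, 1.2          # epistemic parameter: an interval, not a distribution
    threshold = 2.0
    exceed_lo = exceed_hi = 0
    for _ in range(n_samples):
        x = rng.gauss(1.5, 0.5)    # aleatory parameter: a known distribution
        z_lo, z_hi = sorted((x * y_lo, x * y_hi))  # interval image of Z for this sample
        exceed_lo += z_lo > threshold   # the event is certain for this sample
        exceed_hi += z_hi > threshold   # the event is merely possible for this sample
    p_low, p_high = exceed_lo / n_samples, exceed_hi / n_samples
    # Collapse the probability interval with a Hurwicz-style weight (an assumed
    # rule): alpha = 1 is fully pessimistic (ambiguity-averse), alpha = 0 fully
    # optimistic.
    confidence_index = alpha * p_high + (1 - alpha) * p_low
    return (p_low, p_high), confidence_index

(p_low, p_high), ci = propagate()
print(f"P(Z > 2) in [{p_low:.3f}, {p_high:.3f}], confidence index = {ci:.3f}")
```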

    The concept of plausibility in a risk analysis context: Review and clarifications of defining ideas and interpretations

    The plausibility concept has gained increasing attention in recent years in risk analysis settings. A number of definitions exist, most of which interpret plausibility as an expression of uncertainty. The concept is frequently referred to in scenario analysis and emerging risk contexts, which are characterized by large uncertainties. The difficulty of assigning probabilities in such cases has led some to claim that, by offering a purely qualitative approach, plausibility is a more suitable tool for measuring uncertainty. However, a proper clarification of what the plausibility concept means in a risk analysis context is missing; current definitions of the concept do not provide a clear understanding of how plausibility is linked to fundamental aspects of risk and uncertainty. The present paper aims to rectify these issues by i) reviewing and discussing how the plausibility concept is interpreted and used in the literature, ii) providing a suggested interpretation of the concept in a risk analysis context, and iii) giving our recommendations on how the practical application of the plausibility concept can be enhanced by drawing on contemporary risk science, specifically with regard to highlighting the likelihood and knowledge dimensions of risk. Based on the review, it is shown that the concept of plausibility should be seen as a measure of uncertainty capturing a combination of likelihood and judgments on the supporting knowledge. We conclude that prudent use of the concept requires that each of these dimensions is addressed explicitly, using imprecise probabilities and strength-of-knowledge judgments.
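    The recommendation above can be made concrete with a small sketch: a judgement object that carries both dimensions explicitly, an imprecise probability interval and a strength-of-knowledge rating, plus a purely hypothetical classification rule. None of the thresholds or labels below come from the paper.

```python
from dataclasses import dataclass

@dataclass
class PlausibilityJudgement:
    """An uncertainty statement with both dimensions made explicit:
    an imprecise probability interval plus a strength-of-knowledge rating."""
    p_low: float    # lower probability of the scenario
    p_high: float   # upper probability of the scenario
    knowledge: str  # "weak" | "medium" | "strong" supporting knowledge

    def label(self) -> str:
        # Hypothetical classification rule, for illustration only.
        if self.p_high < 0.01 and self.knowledge == "strong":
            return "implausible"
        if self.p_low > 0.1:
            return "plausible"
        return "cannot be ruled out"

scenario = PlausibilityJudgement(p_low=0.02, p_high=0.3, knowledge="weak")
print(scenario.label())   # -> "cannot be ruled out"
```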

    A proposed framework for characterising uncertainty and variability in rock mechanics and rock engineering

    This thesis develops a novel understanding of the fundamental issues in characterising and propagating unpredictability in rock engineering design. This unpredictability stems from the inherent complexity and heterogeneity of fractured rock masses as engineering media. It establishes the importance of: a) recognising that unpredictability results from epistemic uncertainty (i.e. resulting from a lack of knowledge) and aleatory variability (i.e. due to inherent randomness); and b) the means by which the uncertainty and variability associated with the parameters that characterise fractured rock masses are propagated through the modelling and design process. Through a critical review of the literature, this thesis shows that in geotechnical engineering, and in rock mechanics and rock engineering in particular, there is a lack of recognition of the existence of epistemic uncertainty and aleatory variability, and hence inappropriate design methods are often used. To overcome this, a novel taxonomy is developed and presented that facilitates the characterisation of epistemic uncertainty and aleatory variability in the context of rock mechanics and rock engineering. Using this taxonomy, a new framework is developed that gives a protocol for correctly propagating uncertainty and variability through engineering calculations. The effectiveness of the taxonomy and the framework is demonstrated through their application to simple challenge problems commonly found in rock engineering. This new taxonomy and framework will provide engineers engaged in preparing rock engineering designs with an objective means of characterising unpredictability in the parameters commonly used to define the properties of fractured rock masses. These new tools will also provide engineers with a means of clearly understanding the true nature of the unpredictability inherent in rock mechanics and rock engineering, and thus direct the selection of an appropriate unpredictability model to propagate unpredictability faithfully through engineering calculations. The taxonomy and framework developed in this thesis thus provide practical tools to improve the safety of rock engineering designs through an improved understanding of unpredictability concepts.
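    A minimal sketch of what such a taxonomy-driven protocol might look like in code: a parameter is first classified as epistemic or aleatory, and the classification then dispatches to a matching propagation technique. The two-way split and the method names are simplifications of the thesis's richer taxonomy.

```python
from enum import Enum

class Unpredictability(Enum):
    EPISTEMIC = "epistemic uncertainty"  # lack of knowledge: reducible with more data
    ALEATORY = "aleatory variability"    # inherent randomness: irreducible

def propagation_method(kind: Unpredictability) -> str:
    """Dispatch from the classification to a propagation technique:
    set-based methods for knowledge gaps, probabilistic methods for randomness."""
    return {
        Unpredictability.EPISTEMIC: "interval / possibilistic propagation",
        Unpredictability.ALEATORY: "probabilistic (Monte Carlo) propagation",
    }[kind]

# e.g. intact rock strength estimated from a handful of tests is dominated
# by lack of knowledge, so it would be carried through as an interval:
print(propagation_method(Unpredictability.EPISTEMIC))
```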

    Generalized Bayesian inference under prior-data conflict

    This thesis is concerned with the generalisation of Bayesian inference towards the use of imprecise or interval probability, with a focus on model behaviour in the case of prior-data conflict. Bayesian inference is one of the main approaches to statistical inference. It requires (subjective) knowledge on the parameter(s) of interest that is not incorporated in the data to be expressed as a so-called prior distribution. All inferences are then based on the so-called posterior distribution, which combines prior knowledge and the information in the data via Bayes' rule. The adequate choice of priors has always been intensely debated in the Bayesian literature. While a considerable part of the literature is concerned with so-called non-informative priors, aiming to eliminate (or at least to standardise) the influence of priors on posterior inferences, inclusion of specific prior information into the model may be necessary if data are scarce or do not contain much information about the parameter(s) of interest; also, shrinkage estimators, common in frequentist approaches, can be considered as Bayesian estimators based on informative priors. When substantial information is used to elicit the prior distribution through, e.g., an expert's assessment, and the sample size is not large enough to eliminate the influence of the prior, prior-data conflict can occur: information from outlier-free data suggests parameter values that are surprising from the viewpoint of prior information, and it may not be clear whether the prior specifications or the integrity of the data-collecting method (the measurement procedure could, e.g., be systematically biased) should be questioned. In any case, such a conflict should be reflected in the posterior, leading to very cautious inferences, and most statisticians would thus expect to observe, e.g., wider credibility intervals for parameters in case of prior-data conflict. However, at least when modelling is based on conjugate priors, prior-data conflict is in most cases completely averaged out, giving a false certainty in posterior inferences. Here, imprecise or interval probability methods offer sound strategies to counter this issue, by mapping parameter uncertainty over sets of priors and posteriors instead of over single distributions. This approach is supported by recent research in economics, risk analysis and artificial intelligence, corroborating the multi-dimensional nature of uncertainty and concluding that standard probability theory as founded on Kolmogorov's or de Finetti's framework may be too restrictive, being appropriate only for describing one dimension, namely ideal stochastic phenomena. The thesis studies how to efficiently describe sets of priors in the setting of samples from an exponential family. Models are developed that offer enough flexibility to express a wide range of (partial) prior information, give reasonably cautious inferences in case of prior-data conflict while yielding more precise inferences when prior and data agree well, and still remain easily tractable in order to be useful for statistical practice. The general inference properties of these models are derived, showing a clear and comprehensible relationship between model uncertainty and the precision of inferences. Applications in various areas, e.g. common-cause failure modelling and Bayesian linear regression, are explored, and the developed approach is compared to other imprecise probability models, discussing their respective strengths and weaknesses, in particular with regard to the precision of inferences under prior-data conflict.
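    The core mechanism can be sketched with a Beta-Binomial model and a set of conjugate priors parametrised by a prior mean y0 and a prior strength (pseudo-count) n0, in the spirit of the models studied in the thesis; the parameter ranges below are illustrative. Updating every prior in the set and reporting bounds on the posterior mean yields cautious inferences under prior-data conflict and tighter ones when prior and data agree.

```python
def posterior_mean_interval(successes, trials, n0_range, y0_range):
    """Generalised Bayesian update with a *set* of conjugate Beta priors,
    parametrised by prior mean y0 and prior strength (pseudo-count) n0.
    Each prior is updated by the standard conjugate rule; the set of
    posterior means is summarised by its lower and upper bounds."""
    means = [(n0 * y0 + successes) / (n0 + trials)
             for n0 in n0_range for y0 in y0_range]
    return min(means), max(means)

n0_range = [1, 2, 4, 8]       # prior strength varied over a set
y0_range = [0.6, 0.7, 0.8]    # prior mean: success probability expected near 0.7

# Data agreeing with the prior (14/20 = 0.70) vs. conflicting data (2/20 = 0.10):
print(posterior_mean_interval(14, 20, n0_range, y0_range))  # ~(0.67, 0.73): narrow
print(posterior_mean_interval(2, 20, n0_range, y0_range))   # ~(0.12, 0.30): wide
```

    The widened posterior-mean interval under conflicting data is exactly the cautious behaviour that single conjugate priors fail to show.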

    An advanced fuzzy Bayesian-based FMEA approach for assessing maritime supply chain risks

    This paper aims to develop a novel model to assess the risk factors of maritime supply chains by incorporating a fuzzy belief rule approach with Bayesian networks. Compared to traditional risk analysis methods, the new model is capable of improving the accuracy of results under high uncertainty in risk data. A real case of a world-leading container shipping company is investigated, and the research results reveal that the most significant risk factors are, in order, transportation of dangerous goods, fluctuation of fuel price, fierce competition, unattractive markets, and change of exchange rates. Such findings will provide useful insights for accident prevention.
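    The belief-rule ingredient can be sketched as follows: instead of forcing an expert to pick a single linguistic grade per risk factor, belief is distributed over grades, and unassigned mass is tracked as ignorance. The grades, utility scores, and belief values below are hypothetical; the paper's full model additionally feeds such assessments through a fuzzy rule base into a Bayesian network.

```python
# Assumed utility score per linguistic grade (illustrative values).
GRADES = {"low": 0.1, "medium": 0.5, "high": 0.9}

def risk_score(belief: dict[str, float]) -> tuple[float, float]:
    """Expected risk score of a belief distribution over linguistic grades.
    Belief mass not assigned to any grade is returned as ignorance."""
    score = sum(GRADES[g] * b for g, b in belief.items())
    return score, 1.0 - sum(belief.values())

# "Transportation of dangerous goods": mostly 'high', with 0.1 left unassigned.
score, ignorance = risk_score({"high": 0.7, "medium": 0.2})
print(f"score={score:.2f}, unassigned belief={ignorance:.2f}")  # score=0.73, 0.10
```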

    Alternative sweetener from curculigo fruits

    This study gives an overview of the advantages of Curculigo latifolia as an alternative sweetener and a health product. The purpose of this research is to provide another option for people who suffer from diabetes. Curculigo latifolia was chosen due to its unique properties and because it is a widely known species in Malaysia. To obtain the sweet protein from the fruit, several steps are required. First, the fruits were harvested from Curculigo trees that grow wild in the garden. Next, the fruits were dried in an oven at 50 °C for 3 days. Finally, the dried fruits were blended to obtain a fine powder. Curculin is a sweet protein with a taste-modifying activity that converts sourness to sweetness. The curculin content of the samples was directly proportional to the mass of Curculigo fine powder, while the FTIR spectrum of the sample shows a peak at 1634 cm⁻¹, indicating secondary amines, and a peak at 3307 cm⁻¹, indicating alkynes.

    Dynamic safety analysis of decommissioning and abandonment of offshore oil and gas installations

    The global oil and gas industry has seen an increase in the number of installations moving towards decommissioning. Offshore decommissioning is a complex, challenging and costly activity, making safety one of the major concerns. The decommissioning operation is, therefore, riskier than capital projects, partly because of the uniqueness of every offshore installation, and mainly because these installations were not designed for removal during their development phases. The extent of the associated risks is deep and wide due to limited data and incomplete knowledge of the equipment conditions. For this reason, it is important to capture every uncertainty that can be introduced at the operational level, as well as existing hazards due to the hostile environment, technical difficulties, and the timing of the decommissioning operations. Conventional accident modelling techniques cannot capture the complex interactions among contributing elements. To assess the safety risks, a dynamic safety analysis of the accident is thus necessary. In this thesis, a dynamic integrated safety analysis model is proposed and developed to capture both planned and evolving risks during the various stages of decommissioning. First, failure data are obtained from multiple sources and processed using Hierarchical Bayesian Analysis. Then, the system failure and potential accident scenarios are built on a bowtie model, which is mapped into a Bayesian network with advanced relaxation techniques. Dynamic Integrated Safety Analysis (DISA) combines reliability tools to identify safety-critical causal factors and their evolution into a single undesirable failure, using source-to-source variability, time-dependent prediction, diagnostics, and economic risk assessment to support effective recommendations and decision-making. The DISA framework is applied to the Elgin platform well abandonment and the Brent Alpha jacket structure decommissioning, and the results are validated through sensitivity analysis. Through a dynamic-diagnostic and multi-factor regression analysis, the loss values of accident contributory factors are also presented. The study shows that integrating Hierarchical Bayesian Analysis (HBA) and dynamic Bayesian networks (DBN) to model time-variant risks is essential to achieve a well-informed decommissioning decision, through the identification of safety-critical barriers that could be mitigated to drive down the cost of remediation.
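    A minimal sketch of the bowtie-to-Bayesian-network mapping step, using a noisy-OR conditional probability model as one common relaxation of a deterministic OR gate; the cause names, probabilities, and leak term below are hypothetical, not values from the thesis.

```python
from itertools import product

def noisy_or(active_links, leak=0.01):
    """Noisy-OR gate: each active cause independently triggers the top event
    with its link probability; a small leak covers unmodelled causes."""
    q = 1.0 - leak
    for p in active_links:
        q *= 1.0 - p
    return 1.0 - q

# Two basic causes feeding the top event of a minimal bowtie fragment.
p_cause = {"lifting_failure": 0.05, "structural_collapse": 0.02}  # P(cause active)
link = {"lifting_failure": 0.6, "structural_collapse": 0.9}       # P(top | cause)

# Marginalise over the four joint cause states to get P(top event).
names = list(p_cause)
p_top = 0.0
for states in product([0, 1], repeat=len(names)):
    p_state = 1.0
    active = []
    for name, s in zip(names, states):
        p_state *= p_cause[name] if s else 1.0 - p_cause[name]
        if s:
            active.append(link[name])
    p_top += p_state * noisy_or(active)
print(f"P(top event) = {p_top:.4f}")  # ~0.0570 with these illustrative numbers
```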