67 research outputs found

    Risk-informed decision-making in the presence of epistemic uncertainty

    An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations in which the aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from the elicitation of knowledge about parameters to decision. It proposes an elicitation methodology in which the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, the results provided by these techniques, often expressed as probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a single indicator of the likelihood of risk, called the confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
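    The abstract does not give the confidence index in closed form. As a rough, hypothetical illustration of how a probability interval and the decision-maker's attitude toward ambiguity might be collapsed into one number, a Hurwicz-style weighting could look as follows; the function name, the `optimism` weight and the numbers are assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's definition): combine the lower and
# upper probabilities of an undesired event into a single confidence index
# using a Hurwicz-style optimism/pessimism weight chosen by the decision-maker.

def confidence_index(p_lower: float, p_upper: float, optimism: float = 0.5) -> float:
    """Collapse a probability interval [p_lower, p_upper] for the risk event
    into one number; optimism = 1 trusts the lower bound, optimism = 0 the upper."""
    if not (0.0 <= p_lower <= p_upper <= 1.0):
        raise ValueError("expected 0 <= p_lower <= p_upper <= 1")
    if not (0.0 <= optimism <= 1.0):
        raise ValueError("optimism must lie in [0, 1]")
    return optimism * p_lower + (1.0 - optimism) * p_upper

# Example: an interval [0.02, 0.15] obtained from hybrid Monte Carlo / interval
# propagation, judged by a rather ambiguity-averse decision-maker.
print(confidence_index(0.02, 0.15, optimism=0.3))  # -> 0.111
```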

    Propagation of aleatory and epistemic uncertainties in the model for the design of a flood protection dike

    Traditionally, probability distributions are used in risk analysis to represent the uncertainty associated with random (aleatory) phenomena. The parameters of these distributions (e.g., their mean and variance) are usually affected by epistemic (state-of-knowledge) uncertainty, due to limited experience and incomplete knowledge about the phenomena that the distributions represent: the uncertainty framework is then characterized by two hierarchical levels of uncertainty. Probability distributions may also be used to characterize the epistemic uncertainty affecting the parameters of the probability distributions. However, when sufficiently informative data are not available, an alternative and proper way to do this might be by means of possibility distributions. In this paper, we use probability distributions to represent aleatory uncertainty and possibility distributions to describe the epistemic uncertainty associated with the poorly known parameters of such probability distributions. A hybrid method is used to hierarchically propagate the two types of uncertainty. The results obtained on a risk model for the design of a flood protection dike are compared with those of a traditional, purely probabilistic, two-dimensional (or double) Monte Carlo approach. To the best of the authors' knowledge, this is the first time that a hybrid Monte Carlo and possibilistic method is tailored to propagate the uncertainties in a risk model whose uncertainty framework is characterized by two hierarchical levels. The results of the case study show that the hybrid approach produces risk estimates that are more conservative than (or at least comparable to) those obtained by the two-dimensional Monte Carlo method.
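    A minimal sketch of the hierarchical idea, not of the dike model itself: a simple exponential aleatory model whose rate is only known through a triangular possibility distribution, propagated by alpha-cuts with a Monte Carlo estimate at each cut endpoint. The model and all numerical values are invented for illustration.

```python
# Illustrative sketch of two-level (hybrid) propagation: aleatory uncertainty
# handled by Monte Carlo, epistemic uncertainty on the rate handled by
# alpha-cuts of a triangular possibility distribution (a, m, b).
import numpy as np

rng = np.random.default_rng(0)
a, m, b = 0.8, 1.0, 1.3          # triangular possibility for the rate (invented)
threshold = 2.0                   # critical level (invented)
n_mc = 100_000

def exceedance_mc(rate: float) -> float:
    """Monte Carlo estimate of P(X > threshold) for X ~ Exponential(rate)."""
    x = rng.exponential(scale=1.0 / rate, size=n_mc)
    return float(np.mean(x > threshold))

for alpha in (0.0, 0.5, 1.0):
    lo = a + alpha * (m - a)      # alpha-cut of the fuzzy rate
    hi = b - alpha * (b - m)
    # P(X > t) decreases with the rate, so the cut endpoints bound the
    # exceedance probability from above and below.
    p_min, p_max = exceedance_mc(hi), exceedance_mc(lo)
    print(f"alpha={alpha:.1f}: P(X > {threshold}) in [{p_min:.3f}, {p_max:.3f}]")
```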

    Statistical reasoning with set-valued information: Ontic vs. epistemic views

    In information processing tasks, sets may have a conjunctive or a disjunctive reading. In the conjunctive reading, a set represents an object of interest and its elements are subparts of the object, forming a composite description. In the disjunctive reading, a set contains mutually exclusive elements and refers to the representation of incomplete knowledge. It does not model an actual object or quantity, but partial information about an underlying object or a precise quantity. This distinction between what we call ontic vs. epistemic sets remains valid for fuzzy sets, whose membership functions, in the disjunctive reading, are possibility distributions over deterministic or random values. This paper examines the impact of this distinction in statistics. We show its importance because there is a risk of misusing basic notions and tools, such as conditioning, distance between sets, variance, and regression, when data are set-valued. We discuss several examples where the ontic and epistemic points of view yield different approaches to these concepts.
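    As a small, hypothetical illustration of how the two readings diverge for a familiar statistic, the sketch below contrasts an ontic-style variance of interval-valued data (intervals treated as objects, crudely summarized here by their midpoints) with an epistemic-style variance (the set of variances of all precise values compatible with the intervals, inner-approximated by sampling). The data and the midpoint summary are invented for the example, not taken from the paper.

```python
# Illustrative sketch of the ontic/epistemic contrast for the variance of
# interval-valued data.  Epistemic reading: each interval only brackets an
# unknown precise value, so the "variance" is a set of possible values.
# Ontic reading: the intervals themselves are the objects of interest.
import numpy as np

rng = np.random.default_rng(1)
data = np.array([[1.0, 2.0], [2.5, 3.0], [0.5, 2.5], [3.0, 4.0]])  # [lo, hi] rows

# Ontic-style summary: treat each interval as one object via its midpoint.
ontic_var = np.var(data.mean(axis=1))

# Epistemic-style summary: variances of randomly selected compatible values
# (an inner approximation of the exact range of possible variances).
selections = rng.uniform(data[:, 0], data[:, 1], size=(20_000, len(data)))
epistemic_vars = selections.var(axis=1)

print(f"ontic (midpoint) variance: {ontic_var:.3f}")
print(f"epistemic variance range (inner approx.): "
      f"[{epistemic_vars.min():.3f}, {epistemic_vars.max():.3f}]")
```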

    Fuzzy Reliability Assessment of Systems with Multiple Dependent Competing Degradation Processes

    Components are often subject to multiple competing degradation processes. For multi-component systems, the degradation dependencies within one component and/or among components need to be considered. Physics-based models (PBMs) and multi-state models (MSMs) are often used to describe component degradation processes, particularly when statistical data are limited. In this paper, we treat dependencies between degradation processes within a piecewise-deterministic Markov process (PDMP) modeling framework. Epistemic (subjective) uncertainty can arise due to incomplete or imprecise knowledge about the degradation processes and the governing parameters: to take this into account, we describe the parameters of the PDMP model as fuzzy numbers. We then extend the finite-volume (FV) method to quantify the (fuzzy) reliability of the system. The proposed method is tested on one subsystem of the residual heat removal system (RHRS) of a nuclear power plant, and a comparison is offered with a Monte Carlo (MC) simulation solution: the results show that our method can be more efficient.
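    The PDMP/finite-volume machinery is well beyond a short snippet, but the fuzzy-parameter idea can be sketched on a much simpler reliability model: a single exponential component whose failure rate is a triangular fuzzy number, so that each alpha-cut of the rate maps to an alpha-cut of the fuzzy reliability. The rates and mission time below are invented for illustration.

```python
# Illustrative sketch (far simpler than the paper's PDMP/FV model): fuzzy
# reliability of one component with exponential lifetime, where the failure
# rate is a triangular fuzzy number (a, m, b).  R(t) = exp(-rate * t) is
# monotone decreasing in the rate, so alpha-cuts propagate directly.
import math

a, m, b = 1e-4, 2e-4, 4e-4   # fuzzy failure rate per hour (invented values)
t = 5_000.0                  # mission time in hours (invented)

def rate_cut(alpha: float) -> tuple[float, float]:
    """Alpha-cut [lo, hi] of the triangular fuzzy failure rate."""
    return a + alpha * (m - a), b - alpha * (b - m)

for alpha in (0.0, 0.5, 1.0):
    lo, hi = rate_cut(alpha)
    r_lo, r_hi = math.exp(-hi * t), math.exp(-lo * t)   # monotone: swap endpoints
    print(f"alpha={alpha:.1f}: R({t:.0f} h) in [{r_lo:.3f}, {r_hi:.3f}]")
```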

    A fuzzy expectation maximization based method for estimating the parameters of a multi-state degradation model from imprecise maintenance outcomes

    Multi-State (MS) reliability models are used in practice to describe the evolution of degradation in industrial components and systems. To estimate the MS model parameters, we propose a method based on the Fuzzy Expectation-Maximization (FEM) algorithm, which integrates the evidence of field inspection outcomes with information elicited from maintenance operators about the transition times from one state to another. Possibility distributions are used to describe the imprecision in the expert statements. A procedure for estimating the Remaining Useful Life (RUL) based on the MS model and conditional on such imprecise evidence is then developed. The proposed method is applied to a case study concerning the degradation of pipe welds in the coolant system of a Nuclear Power Plant (NPP). The results show that combining field data with expert knowledge can reduce the uncertainty in degradation estimation and RUL prediction.
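    The RUL step can be illustrated on a much simpler, crisp model than the fuzzy-evidence procedure of the paper: given a continuous-time multi-state Markov degradation model with known transition rates (invented here), the mean RUL from each state is the expected time to reach the failed state, obtained from a small linear system. This is only a sketch of the general principle, not of the paper's method.

```python
# Illustrative sketch: mean RUL from each degradation state of a crisp
# continuous-time Markov model, via mean time to absorption in the failed
# state.  States: 0 = nominal, 1 = micro-crack, 2 = macro-crack, 3 = failed.
import numpy as np

rates = {(0, 1): 1e-3, (1, 2): 5e-3, (2, 3): 1e-2}   # per hour (invented)

n = 4
Q = np.zeros((n, n))                  # infinitesimal generator
for (i, j), r in rates.items():
    Q[i, j] = r
np.fill_diagonal(Q, -Q.sum(axis=1))

# Mean times to absorption m over the transient states solve Q_TT @ m = -1.
transient = slice(0, 3)
m = np.linalg.solve(Q[transient, transient], -np.ones(3))
for state, rul in enumerate(m):
    print(f"mean RUL from state {state}: {rul:.0f} h")
```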

    Probabilistic assessment of performance under uncertain information using a generalised maximum entropy principle

    When information about a distribution consists of statistical moments only, a self-consistent approach to deriving a subjective probability density function (pdf) is Maximum Entropy. Nonetheless, the available information may itself be uncertain, and the statistical moments may be known only to lie in a certain domain. If Maximum Entropy is used to find the distribution with the largest entropy whose statistical moments lie within the domain, the information at only a single point in the domain would be used and the rest would be discarded. In this paper, the bounded information on statistical moments is used to construct a family of Maximum Entropy distributions, leading to an uncertain probability function. This uncertainty description enables the investigation of how the uncertainty in the probabilistic assignment affects the predicted performance of an engineering system with respect to safety, quality and design constraints. It is shown that the pdf which maximizes (or equivalently minimizes) an engineering metric is potentially different from the pdf which maximizes the entropy. The feasibility of the proposed uncertainty model is shown through its application to: (i) fatigue failure analysis of a structural joint; (ii) evaluation of the probability that a response variable of an engineering system exceeds a critical level; and (iii) random vibration.
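    A minimal sketch of the idea, under the simplifying assumption that only the mean and variance are constrained: on the real line the maximum-entropy density for a fixed mean and variance is Gaussian, so interval-valued moments induce a family of maximum-entropy densities, and an engineering metric such as an exceedance probability is bounded by sweeping that family. The moment intervals and the critical level are invented for the example.

```python
# Illustrative sketch: bounds on P(X > x_crit) over the family of
# maximum-entropy (Gaussian) densities whose mean and standard deviation lie
# in given intervals.  All numbers are invented.
import math
import numpy as np

mu_range = (9.0, 11.0)        # interval-valued mean
sigma_range = (1.5, 2.5)      # interval-valued standard deviation
x_crit = 14.0                 # critical response level

def exceedance(mu: float, sigma: float) -> float:
    """P(X > x_crit) for the max-entropy (Gaussian) member with moments (mu, sigma^2)."""
    return 0.5 * math.erfc((x_crit - mu) / (sigma * math.sqrt(2.0)))

probs = [exceedance(mu, s)
         for mu in np.linspace(*mu_range, 21)
         for s in np.linspace(*sigma_range, 21)]
print(f"P(X > {x_crit}) ranges over [{min(probs):.4f}, {max(probs):.4f}] "
      "across the maximum-entropy family")
```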

    Empirical comparison of methods for the hierarchical propagation of hybrid uncertainty in risk assessment, in presence of dependences

    Risk analysis models describing aleatory (i.e., random) events contain parameters (e.g., probabilities, failure rates) that are epistemically uncertain, i.e., known with poor precision. Whereas aleatory uncertainty is always described by probability distributions, epistemic uncertainty may be represented in different ways (e.g., probabilistic or possibilistic), depending on the information and data available. The work presented in this paper addresses the issue of accounting for (in)dependence relationships between epistemically uncertain parameters. When a probabilistic representation of epistemic uncertainty is considered, uncertainty propagation is carried out by a two-dimensional (or double) Monte Carlo (MC) simulation approach; instead, when possibility distributions are used, two approaches are undertaken: the hybrid MC and Fuzzy Interval Analysis (FIA) method and the MC-based Dempster-Shafer (DS) approach employing Independent Random Sets (IRSs). The objectives are: i) to study the effects of (in)dependence between the epistemically uncertain parameters of the aleatory probability distributions (when a probabilistic or possibilistic representation of epistemic uncertainty is adopted), and ii) to study the effect of the probabilistic or possibilistic representation of epistemic uncertainty (when the state of dependence between the epistemic parameters is fixed). The Dependency Bound Convolution (DBC) approach is then undertaken within a hierarchical setting of hybrid (probabilistic and possibilistic) uncertainty propagation, in order to account for all kinds of (possibly unknown) dependences between the random variables. The analyses are carried out with reference to two toy examples, built in such a way as to allow a fair quantitative comparison between the methods and an evaluation of their rationale and appropriateness in relation to risk analysis.
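    A minimal sketch of the two-dimensional (double) Monte Carlo scheme and of how the assumed (in)dependence between two epistemic parameters changes the answer: the outer loop samples the epistemic parameters (independently or comonotonically), the inner loop samples the aleatory variables. The aleatory model and all distributions are invented for the example.

```python
# Illustrative sketch of double Monte Carlo with independent vs fully
# dependent (comonotonic) sampling of two epistemically uncertain rates.
import numpy as np

rng = np.random.default_rng(2)
n_epistemic, n_aleatory = 500, 2_000
threshold = 5.0

def outer_loop(dependent: bool) -> np.ndarray:
    """Return an epistemic sample of P(T1 + T2 > threshold)."""
    u1 = rng.uniform(size=n_epistemic)
    u2 = u1 if dependent else rng.uniform(size=n_epistemic)  # comonotonic vs independent
    rate1 = 0.5 + u1              # epistemic interval [0.5, 1.5] (invented)
    rate2 = 0.5 + u2
    probs = np.empty(n_epistemic)
    for k in range(n_epistemic):  # inner, aleatory loop
        t = (rng.exponential(1.0 / rate1[k], n_aleatory)
             + rng.exponential(1.0 / rate2[k], n_aleatory))
        probs[k] = np.mean(t > threshold)
    return probs

for dep in (False, True):
    p = outer_loop(dep)
    print(f"dependent={dep}: 5th-95th epistemic percentiles of P(T > {threshold}): "
          f"[{np.percentile(p, 5):.3f}, {np.percentile(p, 95):.3f}]")
```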

    Regression modeling based on improved genetic algorithm

    The regression model is a well-established method in data analysis with applications in various fields. The selection of independent variables, and of their mathematical transformations, in a regression model is often a challenging problem. Recently, some scholars have applied evolutionary computation to this problem, but the results have not been as effective as desired. The crossover operation of the GA is redesigned using Latin hypercube sampling, and, by combining two commonly used statistical criteria (AIC, BIC), an improved genetic algorithm for solving the statistical model selection problem is presented. The proposed algorithm can overcome the strong path-dependence and reliance on experience of classical approaches. A comparison of simulation results on statistical model selection problems between this improved GA, a traditional genetic algorithm and a classical model selection algorithm shows that the new GA is superior in solution quality, convergence rate and other indices.
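    A minimal sketch of a genetic algorithm for regression variable selection scored by AIC. It uses plain uniform crossover and bit-flip mutation, not the Latin-hypercube-based crossover proposed in the paper, and the synthetic data and GA settings are invented for the example.

```python
# Illustrative GA for subset selection in linear regression, scored by AIC.
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=n)

def aic(mask: np.ndarray) -> float:
    """AIC (up to an additive constant) of the OLS fit using the masked columns."""
    cols = mask.astype(bool)
    k = int(cols.sum())
    if k == 0:
        rss = float(np.sum((y - y.mean()) ** 2))
    else:
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
    return n * np.log(rss / n) + 2 * (k + 1)

pop_size, n_gen, mut_rate = 40, 60, 0.05
pop = rng.integers(0, 2, size=(pop_size, p))      # binary chromosomes

for _ in range(n_gen):
    scores = np.array([aic(ind) for ind in pop])
    order = np.argsort(scores)                    # lower AIC is better
    parents = pop[order[: pop_size // 2]]         # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        mix = rng.integers(0, 2, size=p).astype(bool)   # uniform crossover
        child = np.where(mix, a, b)
        flip = rng.uniform(size=p) < mut_rate            # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmin([aic(ind) for ind in pop])]
print("selected variables:", np.flatnonzero(best))
```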