
    Evaluating the Uncertainty of a Boolean Formula with Belief Functions

    In fault-tree analysis, probabilities of failure of components are often assumed to be precise and the events are assumed to be independent, but this is not always verified in practice. By giving up some of these assumptions, results can still be computed, even though doing so may require more expensive algorithms or provide more imprecise results. Once compared to those obtained with the simplified model, the impact of these assumptions can be evaluated. This paper investigates the case where the probability intervals of atomic propositions come from independent sources of information. In this case, the problem is solved by means of belief functions. We provide the general framework, discuss computation methods, and compare this setting with other approaches to evaluating the uncertainty of formulas.
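
    The belief-function setting sketched above can be illustrated with a small toy (the intervals, the formula, and the enumeration strategy are illustrative assumptions, not the paper's algorithm): each atomic proposition with interval [l, u] induces the mass function m({T}) = l, m({F}) = 1 - u, m({T, F}) = u - l, atoms are combined as independent random sets, and belief/plausibility of a formula follow by enumerating focal elements:

```python
from itertools import product

def bel_pl(formula, intervals):
    """Belief and plausibility of a Boolean formula over independent atoms.

    intervals: dict atom -> (l, u) probability interval.  Each atom gets
    the mass function m({T}) = l, m({F}) = 1 - u, m({T, F}) = u - l, and
    atoms are combined as independent random sets."""
    focal = {a: [({True}, l), ({False}, 1 - u), ({True, False}, u - l)]
             for a, (l, u) in intervals.items()}
    atoms = list(intervals)
    bel = pl = 0.0
    for combo in product(*(focal[a] for a in atoms)):
        mass = 1.0
        for _, m in combo:
            mass *= m
        # evaluate the formula on every completion of the focal element
        vals = [formula(dict(zip(atoms, vs)))
                for vs in product(*(s for s, _ in combo))]
        if all(vals):
            bel += mass   # formula certainly true on this focal element
        if any(vals):
            pl += mass    # formula possibly true on this focal element
    return bel, pl

# a OR b with P(a) in [0.2, 0.4] and P(b) = 0.5:
# analytically Bel = 0.6 and Pl = 0.7
f = lambda v: v["a"] or v["b"]
print(bel_pl(f, {"a": (0.2, 0.4), "b": (0.5, 0.5)}))
```

    The enumeration is exponential in the number of atoms, which is why the paper's discussion of computation methods matters beyond toy sizes.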

    Other uncertainty theories based on capacities

    The two main uncertainty representations in the literature that tolerate imprecision are possibility distributions and random disjunctive sets. This chapter devotes special attention to the theories that have emerged from them. The first part of the chapter discusses epistemic logic and derives the need for capturing imprecision in information representations. It bridges the gap between uncertainty theories and epistemic logic, showing that imprecise probabilities subsume the modalities of possibility and necessity as well as probability. The second part presents possibility and evidence theories, their origins, assumptions and semantics, and discusses the connections between them and the general framework of imprecise probability. Finally, the chapter points out the remaining discrepancies between the different theories regarding various basic notions, such as conditioning, independence or information fusion, and the existing bridges between them.
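
    The possibility/necessity modalities mentioned above can be made concrete in a few lines (the distribution values and event below are toy assumptions): possibility is the best case over an event's elements, necessity is the dual via the complement, and together they act as upper and lower bounds in the imprecise-probability reading:

```python
def possibility(pi, event):
    """Possibility of an event: best case over its elements."""
    return max(pi[w] for w in event)

def necessity(pi, event):
    """Necessity is the dual: 1 - possibility of the complement."""
    rest = set(pi) - set(event)
    return 1.0 - (max(pi[w] for w in rest) if rest else 0.0)

# A normalized possibility distribution on three worlds (toy values)
pi = {"a": 1.0, "b": 0.7, "c": 0.2}
A = {"a", "b"}
print(necessity(pi, A), possibility(pi, A))  # N(A) <= Pi(A) always holds
```

    Any probability measure compatible with pi lies between N and Pi, which is the bridge to imprecise probability the chapter develops.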

    Inclusion-exclusion principle for belief functions

    The inclusion-exclusion principle is a well-known property in probability theory, and is instrumental in some computational problems such as the evaluation of system reliability or the calculation of the probability of a Boolean formula in diagnosis. However, in the setting of uncertainty theories more general than probability theory, this principle no longer holds in general. It is therefore useful to know for which families of events it continues to hold. This paper investigates this question in the setting of belief functions. After exhibiting original necessary and sufficient conditions for the principle to hold, we illustrate its use on the uncertainty analysis of Boolean and non-Boolean systems in reliability.
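
    The failure of inclusion-exclusion beyond probability can be seen on a two-line example (the vacuous mass function below is an illustrative assumption, not the paper's characterization): for a belief function only the 2-monotonicity inequality Bel(A ∪ B) >= Bel(A) + Bel(B) - Bel(A ∩ B) is guaranteed, and it can be strict:

```python
def bel(mass, event):
    """Belief of an event = total mass of the focal sets it contains."""
    return sum(m for focal, m in mass.items() if focal <= frozenset(event))

# Vacuous belief function: all mass on the whole frame (total ignorance)
mass = {frozenset({1, 2, 3}): 1.0}
A, B = {1, 2}, {2, 3}
lhs = bel(mass, A | B)                                  # Bel(A ∪ B) = 1
rhs = bel(mass, A) + bel(mass, B) - bel(mass, A & B)    # 0 + 0 - 0 = 0
print(lhs, rhs)  # strict inequality: inclusion-exclusion fails here
```

    For a probability measure (all focal sets singletons) lhs and rhs would coincide, which is exactly the special case the paper's conditions delimit.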

    Addressing ambiguity in randomized reinsurance stop-loss treaties using belief functions

    The aim of the paper is to model ambiguity in a randomized reinsurance stop-loss treaty. To this end, we consider the lower envelope of the set of bivariate joint probability distributions having a precise discrete marginal and an ambiguous Bernoulli marginal. Under an independence assumption, since the lower envelope fails 2-monotonicity, inner/outer Dempster-Shafer approximations are considered, so as to select the optimal retention level by maximizing the insurer's lower expected annual profit under reinsurance. We show that the inner approximation is not suitable in the reinsurance problem, while the outer approximation preserves the given marginal information, weakens the independence assumption, and does not introduce spurious information into the retention level selection problem. Finally, we provide a characterization of the optimal retention level.
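
    The "lower expected profit" criterion can be sketched generically as a Choquet integral with respect to a belief function (the mass values and the two-outcome profit below are illustrative assumptions, not the paper's treaty model): each focal set contributes its mass times the worst profit it allows, and the conjugate upper expectation uses the best profit instead:

```python
def lower_expectation(mass, f):
    """Lower (Choquet) expectation of f w.r.t. a belief function:
    each focal set B contributes m(B) * min of f over B."""
    return sum(m * min(f(x) for x in B) for B, m in mass.items())

def upper_expectation(mass, f):
    """Conjugate upper expectation: m(B) * max of f over B."""
    return sum(m * max(f(x) for x in B) for B, m in mass.items())

# Toy claim indicator: 0 = no claim (profit 5), 1 = claim (profit -2),
# with 0.3 of the mass left uncommitted between the two outcomes.
mass = {frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({0, 1}): 0.3}
profit = {0: 5.0, 1: -2.0}.get
print(lower_expectation(mass, profit), upper_expectation(mass, profit))
```

    Maximizing the lower expectation over a decision parameter (here, the retention level) is the conservative criterion the paper uses for treaty selection.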

    Constructing copulas from shock models with imprecise distributions

    The omnipotence of copulas when modeling dependence given marginal distributions in a multivariate stochastic situation is assured by Sklar's theorem. Montes et al. (2015) suggest the notion of what they call an "imprecise copula" that brings some of this power in the bivariate case to the imprecise setting. When there is imprecision about the marginals, one can model the available information by means of p-boxes, that is, pairs of ordered distribution functions. By analogy, they introduce pairs of bivariate functions satisfying certain conditions. In this paper we introduce the imprecise versions of some classes of copulas emerging from shock models that are important in applications. The pairs of functions so obtained are not only imprecise copulas but satisfy an even stronger condition. The fact that this condition really is stronger is shown in Omladič and Stopar (2019), thus raising the importance of our results. The main technical difficulty in developing our imprecise copulas lies in introducing an appropriate stochastic order on these bivariate objects.
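
    A concrete instance of a shock-model copula and a naive imprecise version (the interval bounds and the corner-point envelope below are assumptions for illustration; the paper's construction is more general and more careful): the Marshall-Olkin family C(u, v) = min(u^(1-a) v, u v^(1-b)) is nondecreasing in both shock parameters on (0, 1), so interval-valued parameters yield pointwise lower and upper values at the interval corners:

```python
def mo_copula(u, v, a, b):
    """Marshall-Olkin copula arising from a common-shock model.
    a = b = 0 gives independence u*v; a = b = 1 gives comonotone min(u, v)."""
    return min(u ** (1 - a) * v, u * v ** (1 - b))

def mo_envelope(u, v, a_iv, b_iv):
    """Pointwise lower/upper copula values when the shock parameters are
    only known up to intervals; monotonicity puts the envelope at corners."""
    return (mo_copula(u, v, a_iv[0], b_iv[0]),
            mo_copula(u, v, a_iv[1], b_iv[1]))

lo, hi = mo_envelope(0.5, 0.5, (0.2, 0.6), (0.2, 0.6))
print(lo, hi)
```

    Checking that such a pair of bounds satisfies the imprecise-copula conditions (and the stronger one the paper establishes) is precisely the nontrivial part.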

    Statistical reasoning with set-valued information: Ontic vs. epistemic views

    In information processing tasks, sets may have a conjunctive or a disjunctive reading. In the conjunctive reading, a set represents an object of interest and its elements are subparts of the object, forming a composite description. In the disjunctive reading, a set contains mutually exclusive elements and refers to the representation of incomplete knowledge. It does not model an actual object or quantity, but partial information about an underlying object or a precise quantity. This distinction between what we call ontic vs. epistemic sets remains valid for fuzzy sets, whose membership functions, in the disjunctive reading, are possibility distributions over deterministic or random values. This paper examines the impact of this distinction in statistics. We show its importance because there is a risk of misusing basic notions and tools, such as conditioning, distance between sets, variance, regression, etc., when data are set-valued. We discuss several examples where the ontic and epistemic points of view yield different approaches to these concepts.
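
    The variance example can be made concrete for interval-valued data (the data set and the grid resolution below are illustrative assumptions): under the epistemic reading the variance is itself interval-valued, being the set of variances over all selections from the intervals, while an ontic summary such as the variance of the midpoints is a single number answering a different question:

```python
from itertools import product
from statistics import pvariance

data = [(1.0, 2.0), (3.0, 3.0), (4.0, 6.0)]  # interval-valued observations

# Epistemic reading: intervals bracket unknown precise values, so the
# variance is the set of variances of all selections.  Population variance
# is convex in the data vector, so its maximum over the box is attained at
# a vertex; the minimum is searched here on an endpoint-including grid.
upper = max(pvariance(sel) for sel in product(*data))

def grid(iv, k=11):
    lo, hi = iv
    return [lo + (hi - lo) * i / (k - 1) for i in range(k)]

lower = min(pvariance(sel) for sel in product(*(grid(iv) for iv in data)))

# Ontic reading: each interval IS the object; summarize, e.g., by the
# variance of the midpoints.
mid_var = pvariance([(l + u) / 2 for l, u in data])
print(lower, upper, mid_var)
```

    The two readings disagree: the midpoint variance is one point, generally strictly inside the epistemic interval [lower, upper], which is exactly the kind of mismatch the paper warns against.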

    Empirical comparison of methods for the hierarchical propagation of hybrid uncertainty in risk assessment, in presence of dependences

    Risk analysis models describing aleatory (i.e., random) events contain parameters (e.g., probabilities, failure rates, ...) that are epistemically uncertain, i.e., known with poor precision. Whereas aleatory uncertainty is always described by probability distributions, epistemic uncertainty may be represented in different ways (e.g., probabilistic or possibilistic), depending on the information and data available. The work presented in this paper addresses the issue of accounting for (in)dependence relationships between epistemically uncertain parameters. When a probabilistic representation of epistemic uncertainty is considered, uncertainty propagation is carried out by a two-dimensional (or double) Monte Carlo (MC) simulation approach; instead, when possibility distributions are used, two approaches are undertaken: the hybrid MC and Fuzzy Interval Analysis (FIA) method and the MC-based Dempster-Shafer (DS) approach employing Independent Random Sets (IRSs). The objectives are: i) studying the effects of (in)dependence between the epistemically uncertain parameters of the aleatory probability distributions (when a probabilistic/possibilistic representation of epistemic uncertainty is adopted) and ii) studying the effect of the probabilistic/possibilistic representation of epistemic uncertainty (when the state of dependence between the epistemic parameters is defined). The Dependency Bound Convolution (DBC) approach is then undertaken within a hierarchical setting of hybrid (probabilistic and possibilistic) uncertainty propagation, in order to account for all kinds of (possibly unknown) dependences between the random variables. The analyses are carried out with reference to two toy examples, built in such a way as to allow a fair quantitative comparison between the methods and an evaluation of their rationale and appropriateness in relation to risk analysis.
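
    A minimal sketch of the two-dimensional MC scheme (the exponential lifetime model, the uniform epistemic interval, and the mission time are assumptions for illustration, not the paper's case studies): the outer loop samples the epistemically uncertain parameter, the inner loop propagates aleatory uncertainty, and the spread of the inner-loop estimates across the outer loop reflects epistemic uncertainty:

```python
import random

def double_loop_mc(n_epistemic=200, n_aleatory=1000, seed=0):
    """Two-dimensional (double-loop) Monte Carlo.

    Outer loop: sample the epistemically uncertain failure rate lam
    (uniform on [0.01, 0.1] here, an assumption for illustration).
    Inner loop: sample the aleatory time to failure T ~ Exp(lam) and
    estimate P(T < t_mission).  Returns the range of the estimates."""
    rng = random.Random(seed)
    t_mission = 10.0
    estimates = []
    for _ in range(n_epistemic):
        lam = rng.uniform(0.01, 0.1)   # epistemic parameter sample
        fails = sum(rng.expovariate(lam) < t_mission
                    for _ in range(n_aleatory))
        estimates.append(fails / n_aleatory)
    return min(estimates), max(estimates)

lo, hi = double_loop_mc()
print(lo, hi)
```

    Sampling the epistemic parameters jointly (totally dependent) instead of independently in the outer loop is the kind of modeling choice whose effect the paper quantifies.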