
    Evaluating epistemic uncertainty under incomplete assessments

    This study proposes an extended methodology for laboratory-based Information Retrieval evaluation under incomplete relevance assessments. The methodology aims to identify potential uncertainty during system comparison that may result from incompleteness. Adopting it is advantageous because detecting epistemic uncertainty - the amount of knowledge (or ignorance) we have about the estimate of a system's performance - during the evaluation process can guide and direct researchers when evaluating new systems over existing and future test collections. Across a series of experiments we demonstrate how this methodology leads to a finer-grained analysis of systems. In particular, we show through experimentation how the current practice in Information Retrieval evaluation of using a measurement depth larger than the pooling depth increases uncertainty during system comparison.
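    The widening effect described above can be illustrated with a small sketch (the function and the run below are hypothetical, not taken from the paper): treating unjudged documents as nonrelevant gives a lower bound on precision@k, treating them as relevant gives an upper bound, and the gap between the two bounds is a direct expression of the epistemic uncertainty introduced by incompleteness.

```python
# Hypothetical sketch: bound a system's precision@k when some retrieved
# documents are unjudged (incomplete assessments). Treating unjudged
# documents as nonrelevant gives a lower bound; treating them as
# relevant gives an upper bound. A wide gap signals high epistemic
# uncertainty in any comparison based on this run.

def precision_bounds(judgments, k):
    """judgments: list of 'R' (relevant), 'N' (nonrelevant), or None
    (unjudged) for the top-ranked documents, in rank order."""
    top = judgments[:k]
    lower = sum(1 for j in top if j == 'R') / k
    upper = sum(1 for j in top if j in ('R', None)) / k
    return round(lower, 3), round(upper, 3)

# Measuring deeper than the pooling depth adds unjudged documents,
# which can only widen the interval:
run = ['R', 'N', 'R', None, None]   # judged to depth 3, measured to depth 5
print(precision_bounds(run, 3))     # (0.667, 0.667) -- no uncertainty
print(precision_bounds(run, 5))     # (0.4, 0.8)     -- uncertainty appears
```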

    Uncertainty Analysis of the Adequacy Assessment Model of a Distributed Generation System

    Due to the inherent aleatory uncertainties in renewable generators, the reliability/adequacy assessments of distributed generation (DG) systems have been particularly focused on the probabilistic modeling of random behaviors, given sufficient informative data. However, another type of uncertainty (epistemic uncertainty) must be accounted for in the modeling, due to incomplete knowledge of the phenomena and imprecise evaluation of the related characteristic parameters. In circumstances of few informative data, this type of uncertainty calls for alternative methods of representation, propagation, analysis and interpretation. In this study, we make a first attempt to identify, model, and jointly propagate aleatory and epistemic uncertainties in the context of DG systems modeling for adequacy assessment. Probability and possibility distributions are used to model the aleatory and epistemic uncertainties, respectively. Evidence theory is used to incorporate the two uncertainties under a single framework. Based on the plausibility and belief functions of evidence theory, the hybrid propagation approach is introduced. A demonstration is given on a DG system adapted from the IEEE 34-node distribution test feeder. Compared to the pure probabilistic approach, it is shown that the hybrid propagation is capable of explicitly propagating the imprecision in the knowledge of the DG parameters into the final assessed adequacy values. It also effectively captures the growth of uncertainties with higher DG penetration levels.
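    The joint propagation idea can be sketched in a few lines (a toy model, not the paper's DG system): an aleatory input is sampled by Monte Carlo while an epistemic input, represented by a triangular possibility distribution, is propagated through alpha-cuts, so each iteration yields an interval for the output from which belief and plausibility of an event can be estimated.

```python
# Minimal sketch of hybrid aleatory/epistemic propagation (assumed toy
# model): aleatory input X (probability distribution), epistemic input Y
# (triangular possibility distribution), output Z = X + Y. X is sampled
# by Monte Carlo; Y is carried as an alpha-cut interval, so each sample
# yields an interval for Z. Counting intervals fully inside the event
# {Z <= t} estimates belief; intervals merely touching it, plausibility.
import random

random.seed(0)

def alpha_cut(a, m, b, alpha):
    """Alpha-cut of a triangular possibility distribution (a, m, b)."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def hybrid_propagate(n=10000, alpha=0.5, t=3.0):
    bel_count = pl_count = 0
    for _ in range(n):
        x = random.gauss(1.0, 0.5)                     # aleatory input
        y_lo, y_hi = alpha_cut(0.5, 1.0, 2.0, alpha)   # epistemic input
        z_lo, z_hi = x + y_lo, x + y_hi                # interval for Z
        if z_hi <= t: bel_count += 1   # whole interval inside event
        if z_lo <= t: pl_count += 1    # interval intersects event
    return bel_count / n, pl_count / n  # (Bel, Pl) estimates

bel, pl = hybrid_propagate()
print(bel <= pl)  # belief never exceeds plausibility
```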

    Risk-informed decision-making in the presence of epistemic uncertainty

    An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from elicitation of knowledge about parameters to decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte-Carlo simulation and interval analysis techniques. Nevertheless, results provided by these techniques, often in terms of probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a unique indicator of the likelihood of risk, called a confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
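    A confidence index of this kind can be sketched as follows (the Hurwicz-style form below is an assumption in the spirit of the abstract, not necessarily the paper's exact definition): when propagation yields a probability interval [Bel, Pl] for the risk event, the index blends the two bounds according to the decision-maker's degree of pessimism.

```python
# Sketch of a single "confidence index" over a probability interval
# [bel, pl] for a risk event. The weight lam encodes the decision-
# maker's attitude toward ambiguity: lam = 1 is fully pessimistic
# (uses the upper bound Pl), lam = 0 fully optimistic (uses Bel).

def confidence_index(bel, pl, lam):
    assert 0.0 <= bel <= pl <= 1.0 and 0.0 <= lam <= 1.0
    return lam * pl + (1.0 - lam) * bel

print(confidence_index(0.25, 0.75, 0.5))  # neutral attitude -> 0.5
```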

    Compilation of Active Fault Data in Portugal for Use in Seismic Hazard Analysis

    To estimate where future earthquakes are likely to occur, it is essential to combine information about past earthquakes with knowledge about the location and seismogenic properties of active faults. For this reason, robust probabilistic seismic hazard analysis (PSHA) integrates seismicity and active fault data. Existing seismic hazard assessments for Portugal rely exclusively on seismicity data and do not incorporate data on active faults. Project SHARE (Seismic Hazard Harmonization in Europe) is an EC-funded initiative (FP7) that aims to evaluate European seismic hazards using an integrated, standardized approach. In the context of SHARE, we are developing a fully parameterized active fault database for Portugal that incorporates existing compilations, updated according to the most recent publications. The seismogenic source model derived for SHARE will be the first model for Portugal to include fault data and follow an internationally standardized approach. This model can be used to improve both seismic hazard and risk analyses and will be combined with the Spanish database for use in Iberian- and European-scale assessments.

    Propagation of epistemic uncertainty in queueing models with unreliable server using chaos expansions

    In this paper, we develop a numerical approach based on chaos expansions to analyze the sensitivity and the propagation of epistemic uncertainty through a queueing system with breakdowns. Here, the quantity of interest is the stationary distribution of the model, which is a function of uncertain parameters. Polynomial chaos expansions provide an efficient alternative to more traditional Monte Carlo simulations for modelling the propagation of uncertainty arising from those parameters. Furthermore, polynomial chaos expansions afford a natural framework for computing Sobol' indices. Such indices give reliable information on the relative importance of each uncertain input parameter. Numerical results show the benefit of using polynomial chaos over standard Monte-Carlo simulations when considering statistical moments and Sobol' indices as output quantities.
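    The link between chaos coefficients and Sobol' indices can be shown on a toy problem (the model below is illustrative, not the paper's queueing system): a Legendre polynomial chaos expansion is built by Gauss-Legendre projection for two independent uniform inputs, and the Sobol' indices are read directly off the squared coefficients.

```python
# Illustrative sketch: degree-1-per-variable Legendre PCE of
# f(x1, x2) = x1 + 0.5*x2 + x1*x2 for independent Uniform(-1, 1)
# inputs, built by Gauss-Legendre projection. Sobol' indices follow
# from the squared coefficients weighted by the basis norms.
import numpy as np
from numpy.polynomial.legendre import leggauss
from itertools import product

def f(x1, x2):
    return x1 + 0.5 * x2 + x1 * x2   # toy model with an interaction term

def legendre(n, x):
    return np.polynomial.legendre.Legendre.basis(n)(x)

nodes, weights = leggauss(5)    # quadrature on [-1, 1]
weights = weights / 2.0         # uniform density on [-1, 1] is 1/2

coeffs = {}
for i, j in product(range(2), repeat=2):   # multi-indices up to degree 1
    num = 0.0
    for (x1, w1), (x2, w2) in product(zip(nodes, weights), repeat=2):
        num += w1 * w2 * f(x1, x2) * legendre(i, x1) * legendre(j, x2)
    norm = (1.0 / (2 * i + 1)) * (1.0 / (2 * j + 1))  # E[P_i^2] E[P_j^2]
    coeffs[(i, j)] = num / norm

# Variance decomposition: each nonconstant term contributes c^2 * norm.
var = sum(c**2 * (1 / (2 * i + 1)) * (1 / (2 * j + 1))
          for (i, j), c in coeffs.items() if (i, j) != (0, 0))
S1 = coeffs[(1, 0)]**2 * (1 / 3) / var    # first-order index of x1
S2 = coeffs[(0, 1)]**2 * (1 / 3) / var    # first-order index of x2
S12 = coeffs[(1, 1)]**2 * (1 / 9) / var   # interaction index
print(round(S1, 3), round(S2, 3), round(S12, 3))  # indices sum to 1
```

For this additive-plus-interaction model the exact indices are 12/19, 3/19, and 4/19, which the quadrature recovers exactly since degree-5 Gauss-Legendre integrates the polynomial integrands without error.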

    Challenges of Sustaining the International Space Station through 2020 and Beyond: Including Epistemic Uncertainty in Reassessing Confidence Targets

    This paper introduces an analytical approach, Probability and Confidence Trade-space (PACT), which can be used to assess uncertainty in International Space Station (ISS) hardware sparing necessary to extend the life of the vehicle. There are several key areas under consideration in this research. We investigate what sparing confidence targets may be reasonable to ensure vehicle survivability and completion of science on the ISS. The results of the analysis will provide a methodological basis for reassessing vehicle subsystem confidence targets. An ongoing annual analysis currently compares the probability of existing spares exceeding the total expected unit demand of the Orbital Replacement Unit (ORU) in functional hierarchies approximating the vehicle subsystems. In cases where a functional hierarchy's availability does not meet subsystem confidence targets, the current sparing analysis further identifies which ORUs may require additional spares to extend the life of the ISS. The resulting probability is dependent upon hardware reliability estimates. However, the ISS hardware fleet carries considerable epistemic uncertainty (uncertainty in the knowledge of the true hardware failure rate), which does not currently factor into the annual sparing analysis. The existing confidence targets may therefore be conservative. This paper will also discuss how confidence targets may be relaxed based on the inclusion of epistemic uncertainty for each ORU. The paper will conclude with strengths and limitations of implementing the analytical approach in sustaining the ISS through end of life, 2020 and beyond.
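    The effect of epistemic uncertainty on sparing confidence can be sketched numerically (a hypothetical model, not NASA's PACT analysis): with a known failure rate, demand for an ORU over a horizon is Poisson; putting a Gamma prior on the rate to represent epistemic uncertainty mixes this into a negative binomial, fattening the demand tail and lowering the confidence achieved by the same number of spares.

```python
# Hypothetical sketch: probability that s on-hand spares cover the
# demand of one ORU over time t. Known rate -> Poisson(lam*t) demand.
# Gamma(k, theta)-uncertain rate -> negative binomial demand with the
# same mean but heavier tail, i.e. lower confidence for equal spares.
from math import exp, factorial

def poisson_confidence(spares, mean_demand):
    return sum(exp(-mean_demand) * mean_demand**n / factorial(n)
               for n in range(spares + 1))

def neg_binomial_confidence(spares, k, theta, t):
    # Poisson demand whose rate is Gamma(shape k, scale theta) over time t
    p = 1.0 / (1.0 + theta * t)      # success probability of the mixture
    pmf, total = p**k, 0.0
    for n in range(spares + 1):
        total += pmf
        pmf *= (k + n) / (n + 1) * (1.0 - p)
    return total

# Same mean demand (k * theta * t = 2.0), with and without rate uncertainty:
p_known = poisson_confidence(3, 2.0)
p_epistemic = neg_binomial_confidence(3, k=2.0, theta=0.1, t=10.0)
print(round(p_known, 3), round(p_epistemic, 3))  # epistemic case is lower
```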

    Stochastic and epistemic uncertainty propagation in LCA

    Purpose: When performing uncertainty propagation, most LCA practitioners choose to represent uncertainties by single probability distributions and to propagate them using stochastic methods. However, the selection of single probability distributions often appears arbitrary when faced with scarce information or expert judgement (epistemic uncertainty). Possibility theory has been developed over the last decades to address this problem. The objective of this study is to present a methodology that combines probability and possibility theories to represent stochastic and epistemic uncertainties in a consistent manner and apply it to LCA. A case study is used to show the uncertainty propagation performed with the proposed method and compare it to propagation performed using probability and possibility theories alone. Methods: Basic knowledge of probability theory is first recalled, followed by a detailed description of epistemic uncertainty representation using fuzzy intervals. The propagation methods used are Monte Carlo analysis for probability distributions and an optimisation on alpha-cuts for fuzzy intervals. The proposed method (denoted IRS) generalizes the process of random sampling to probability distributions as well as fuzzy intervals, thus making the simultaneous use of both representations possible.
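    The IRS idea of generalized random sampling can be sketched as follows (the details below are assumptions for illustration): each iteration draws the probabilistic input by ordinary random sampling and draws a random alpha level for the fuzzy input, taking the corresponding alpha-cut, so every iteration yields an interval for the output and the collection of intervals bounds the output distribution.

```python
# Minimal sketch of IRS-style joint sampling (assumed details): a
# probabilistic input is sampled directly, while the fuzzy input is
# sampled by drawing a random alpha level and taking its alpha-cut.
# The model here is monotone in the fuzzy input, so evaluating the cut
# endpoints suffices; in general an optimisation over the cut is needed.
import random

random.seed(1)

def triangular_cut(a, m, b, alpha):
    return (a + alpha * (m - a), b - alpha * (b - m))

def irs(model, n=5000):
    intervals = []
    for _ in range(n):
        x = random.lognormvariate(0.0, 0.25)               # probabilistic
        alpha = random.random()                            # random level
        y_lo, y_hi = triangular_cut(1.0, 2.0, 4.0, alpha)  # fuzzy input
        intervals.append((model(x, y_lo), model(x, y_hi)))
    return intervals

intervals = irs(lambda x, y: x * y)
# Lower/upper bounds on P(output <= 3) from the interval endpoints:
frac_lo = sum(hi <= 3.0 for lo, hi in intervals) / len(intervals)
frac_hi = sum(lo <= 3.0 for lo, hi in intervals) / len(intervals)
print(frac_lo <= frac_hi)  # bounds are consistently ordered
```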

    Robust analysis of uncertainty in scientific assessments

    Uncertainty refers to any limitation in knowledge. Identifying and characterizing uncertainty in conclusions is important to ensure transparency and avoid over- or under-confidence in scientific assessments. Quantitative expressions of uncertainty are less ambiguous than uncertainty expressed qualitatively, or not at all. Subjective probability is an example of a quantitative expression of epistemic uncertainty which, combined with Bayesian inference, makes it possible to integrate evidence and characterize uncertainty in quantities of interest. This thesis contributes to the understanding and implementation of robust Bayesian analysis as a way to integrate expert judgment and data into assessments and to quantify uncertainty by bounded probability. The robust Bayesian framework is based on sets of probability for epistemic uncertainty, where precise probability is seen as a special case. This thesis covers applications relevant for scientific assessments, including evidence synthesis and quantitative risk assessment.

    Paper I proposes to combine two sampling methods, iterative importance sampling and Markov chain Monte Carlo (MCMC) sampling, for quantifying uncertainty by bounded probability when Bayesian updating requires MCMC sampling. This opens the way for robust Bayesian analysis to be applied to complex statistical models. To achieve this, an effective sample size of importance sampling that accounts for correlated MCMC samples is proposed. For illustration, the proposed method is applied to estimate the overall effect with bounded probability in a published meta-analysis within the Collaboration for Environmental Evidence on the effect of biomanipulation on freshwater lakes.

    Paper II demonstrates robust Bayesian analysis as a way to quantify uncertainty in a quantity of interest by bounded probability; it explicitly distinguishes between epistemic and aleatory uncertainty in the assessment and learns parameters by integrating evidence into the model. Robust Bayesian analysis is described as a generalization of Bayesian analysis, including Bayesian analysis through precise probability as a special case. Both analyses are applied to an intake assessment.

    Paper III describes a way to consider uncertainty arising from ignorance or ambiguity about bias terms in a quantitative bias analysis by characterizing bias with imprecision. This is done by specifying bias with a set of bias terms and using robust Bayesian analysis to estimate the overall effect in the meta-analysis. The approach provides a structured framework to transform qualitative judgments concerning risk of bias into quantitative expressions of uncertainty in quantitative bias analysis.

    Paper IV compares the effect of different diversified farming practices on biodiversity and crop yields. This is done by applying a Bayesian network meta-analysis to a new public global database from a systematic protocol on diversified farming. A portfolio analysis calibrated by the network meta-analyses showed that uncertainty about the mean performance is large compared to the variability in performance across different farms.
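    The notion of bounded probability can be sketched in a few lines (an illustrative conjugate example, not one of the thesis's models): updating every prior in a set of Beta priors on the same binomial data and taking the minimum and maximum posterior mean yields an interval rather than a single posterior estimate, and the interval tightens as data accumulate.

```python
# Illustrative sketch of robust Bayesian analysis: a set of Beta(a, b)
# priors for a binomial success probability. Conjugate updating gives
# posterior mean (a + successes) / (a + b + trials); min/max over the
# prior set yields "bounded probability" instead of a point estimate.

def posterior_mean_bounds(successes, trials, prior_set):
    means = [(a + successes) / (a + b + trials) for a, b in prior_set]
    return min(means), max(means)

# A prior set expressing ambiguity about the prior mean (0.2 to 0.8):
priors = [(2.0 * w, 2.0 * (1.0 - w)) for w in (0.2, 0.4, 0.6, 0.8)]
b10 = posterior_mean_bounds(7, 10, priors)
b100 = posterior_mean_bounds(70, 100, priors)
print(b10)   # bounds around 7/10
print(b100)  # more data -> tighter bounds
```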