
    Tolerance analysis approach based on the classification of uncertainty (aleatory / epistemic)

    Uncertainty is ubiquitous in tolerance analysis problems. This paper deals with the formulation of tolerance analysis and, more particularly, with the uncertainty that must be taken into account in the foundations of that formulation. It presents: a brief view of the classification of uncertainty, in which aleatory uncertainty arises from inherently random phenomena and epistemic uncertainty from lack of knowledge; a formulation of the tolerance analysis problem based on this classification; and its development, in which aleatory uncertainty is modeled by probability distributions and epistemic uncertainty by intervals, with Monte Carlo simulation employed for the probabilistic analysis and nonlinear optimization for the interval analysis. “AHTOLA” project (ANR-11-MONU-013).
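    The formulation described above can be sketched as a double loop: an inner Monte Carlo loop over the aleatory (probabilistic) dimensions, and an outer loop bounding the result over the epistemic interval. The stack-up function, distributions, and interval below are hypothetical stand-ins, and a grid search replaces the paper's nonlinear optimizer:

```python
import random

def assembly_gap(x1, x2, d):
    # Hypothetical stack-up function: clearance between two toleranced
    # dimensions and an epistemically uncertain datum offset d.
    return x1 + x2 - d

def failure_probability(d, n_samples=20_000, seed=0):
    """Aleatory loop: Monte Carlo over the dimensional tolerances."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        x1 = rng.gauss(10.0, 0.05)   # nominal 10 mm, sigma 0.05 mm
        x2 = rng.gauss(10.0, 0.05)
        if assembly_gap(x1, x2, d) < 0.0:
            failures += 1
    return failures / n_samples

# Epistemic loop: the offset d is only known to lie in [19.9, 20.1] mm.
# A coarse grid search stands in for a nonlinear optimizer.
d_interval = [19.9 + i * 0.01 for i in range(21)]
probs = [failure_probability(d) for d in d_interval]
p_lower, p_upper = min(probs), max(probs)
print(f"P(failure) in [{p_lower:.3f}, {p_upper:.3f}]")
```

    The output is an interval of failure probabilities rather than a single number: the width of the interval is what the epistemic (lack-of-knowledge) uncertainty contributes.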

    Investigation of robust optimization and evidence theory with stochastic expansions for aerospace applications under mixed uncertainty

    One of the primary objectives of this research is to develop a method to model and propagate mixed (aleatory and epistemic) uncertainty in aerospace simulations using Dempster-Shafer Theory of Evidence (DSTE). To avoid the excessive computational cost associated with large-scale applications and the evaluation of Dempster-Shafer structures, stochastic expansions are implemented for efficient uncertainty quantification (UQ). The mixed UQ with DSTE approach was demonstrated on an analytical example and a high-fidelity computational fluid dynamics (CFD) study of transonic flow over a RAE 2822 airfoil. Another objective is to devise a DSTE-based performance assessment framework through the use of quantification of margins and uncertainties. Efficient uncertainty propagation in system design performance metrics and performance boundaries is achieved through the use of stochastic expansions. The technique is demonstrated on: (1) a model problem with nonlinear analytical functions representing the outputs and performance boundaries of two coupled systems and (2) a multi-disciplinary analysis of a supersonic civil transport. Finally, the stochastic expansions are applied to aerodynamic shape optimization under uncertainty. A robust optimization algorithm is presented for computationally efficient airfoil design under mixed uncertainty using a multi-fidelity approach. This algorithm exploits stochastic expansions to create surrogate models utilized in the optimization process. To reduce the computational cost, an output space mapping technique is implemented to replace the high-fidelity CFD model with a suitably corrected low-fidelity one. The proposed algorithm is demonstrated on the robust optimization of NACA 4-digit airfoils under mixed uncertainties in transonic flow. --Abstract, page iii
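    A minimal illustration of the surrogate idea behind stochastic expansions: a cheap polynomial model is built from a handful of expensive runs and then sampled in place of the full model. The response function and collocation nodes below are invented for the example; a production polynomial chaos expansion would use orthogonal polynomial bases and more dimensions:

```python
import math
import random

def expensive_model(x):
    # Stand-in for a costly CFD evaluation: any smooth response will do.
    return math.exp(0.3 * x) + 0.1 * x * x

def quadratic_surrogate(f, nodes):
    """Degree-2 Lagrange interpolant through three collocation nodes --
    a minimal stand-in for a non-intrusive stochastic expansion."""
    x0, x1, x2 = nodes
    y0, y1, y2 = f(x0), f(x1), f(x2)
    def surrogate(x):
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2
    return surrogate

# Three model runs replace the tens of thousands of direct evaluations
# that the Monte Carlo loop below would otherwise require.
surrogate = quadratic_surrogate(expensive_model, (-1.7, 0.0, 1.7))

rng = random.Random(42)
samples = [surrogate(rng.gauss(0.0, 1.0)) for _ in range(50_000)]
mean = sum(samples) / len(samples)
print(f"surrogate-based mean response: {mean:.3f}")
```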

    Model uncertainty in non-linear numerical analyses of slender reinforced concrete members

    The present study aims to characterize the epistemic uncertainty associated with the use of global non-linear numerical analyses (NLNAs) for the design and assessment of slender reinforced concrete (RC) members. The epistemic uncertainty associated with NLNAs may be represented by the approximations and choices made during the definition of a structural numerical model. In order to quantify the epistemic uncertainty associated with a non-linear numerical simulation, the resistance model uncertainty random variable has to be characterized by comparing experimental and numerical results. With this aim, a set of experimental tests on slender RC columns known from the literature is considered. The experimental results in terms of maximum axial load are then compared with the outcomes of the NLNAs. Nine different modeling hypotheses are considered to characterize the resistance model uncertainty random variable. The probabilistic analysis of the results is performed following a Bayesian approach, accounting for both prior knowledge from the scientific literature and the influence of experimental uncertainty on the estimation of the statistics of the resistance model uncertainty random variable. Finally, the resistance model uncertainty partial safety factor is evaluated in line with the global resistance format of the fib Model Code for Concrete Structures 2010, with reference to both new and existing RC structures.
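    The characterization step can be sketched as follows, with invented experimental and numerical values. The partial factor expression is a generic lognormal design-value format, shown only for illustration, not the exact fib Model Code 2010 expression:

```python
import math
import statistics

# Hypothetical paired results: experimental vs. numerically predicted
# maximum axial loads (kN) for slender RC columns.
experimental = [512.0, 478.0, 643.0, 395.0, 560.0, 487.0]
predicted    = [498.0, 455.0, 668.0, 410.0, 531.0, 470.0]

# Resistance model uncertainty samples: theta_i = R_exp / R_num.
theta = [e / p for e, p in zip(experimental, predicted)]
log_theta = [math.log(t) for t in theta]

mu_ln = statistics.mean(log_theta)       # lognormal location parameter
sigma_ln = statistics.stdev(log_theta)   # lognormal scale parameter

# Generic lognormal design-value format for the model uncertainty partial
# factor: gamma_Rd = 1 / theta_d = exp(-mu_ln + alpha_R * beta * sigma_ln).
alpha_R, beta = 0.32, 3.8   # typical FORM sensitivity and reliability index
gamma_Rd = math.exp(-mu_ln + alpha_R * beta * sigma_ln)
print(f"mean(theta)={statistics.mean(theta):.3f}  gamma_Rd={gamma_Rd:.3f}")
```

    A Bayesian treatment, as in the study, would replace the plain sample statistics with a posterior that blends these data with prior knowledge from the literature.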

    Application of Single-Station Sigma and Site-Response Characterization in a Probabilistic Seismic-Hazard Analysis for a New Nuclear Site

    Aleatory variability in ground-motion prediction, represented by the standard deviation (sigma) of a ground-motion prediction equation, exerts a very strong influence on the results of probabilistic seismic-hazard analysis (PSHA). This is especially so at the low annual exceedance frequencies considered for nuclear facilities; in these cases, even small reductions in sigma can have a marked effect on the hazard estimates. Proper separation and quantification of aleatory variability and epistemic uncertainty can lead to defensible reductions in sigma. One such approach is the single-station sigma concept, which removes the part of sigma corresponding to repeatable site-specific effects. However, the site-to-site component must then be constrained by site-specific measurements or else modeled as epistemic uncertainty and incorporated into the modeling of site effects. The practical application of the single-station sigma concept, including the characterization of the dynamic properties of the site and the incorporation of site-response effects into the hazard calculations, is illustrated for a PSHA conducted at a rock site under consideration for the potential construction of a nuclear power plant.
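    The sigma decomposition behind the single-station concept can be illustrated with assumed component values (the magnitudes below are typical orders for ln-unit ground-motion residuals, not values from this study):

```python
import math

# Illustrative ground-motion variability components (ln units), assumed:
tau     = 0.35   # between-event variability
phi_ss  = 0.45   # single-station within-event variability
phi_s2s = 0.40   # site-to-site variability: the repeatable site term that
                 # can be removed once it is constrained by site-specific
                 # measurements (or carried separately as epistemic)

# The ergodic total sigma keeps the site-to-site term ...
sigma_ergodic = math.sqrt(tau**2 + phi_ss**2 + phi_s2s**2)
# ... while single-station sigma removes it.
sigma_ss = math.sqrt(tau**2 + phi_ss**2)

reduction = 1.0 - sigma_ss / sigma_ergodic
print(f"sigma_ergodic={sigma_ergodic:.3f}  sigma_ss={sigma_ss:.3f} "
      f"({reduction:.0%} reduction)")
```

    Even this modest-looking reduction in sigma translates into a substantial change in computed hazard at the very low annual exceedance frequencies relevant to nuclear sites.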

    Uncertainty Assessment in High-Risk Environments Using Probability, Evidence Theory and Expert Judgment Elicitation

    The level of uncertainty in advanced system design is assessed by comparing the results of expert judgment elicitation to probability and evidence theory. This research shows how one type of monotone measure, namely the Dempster-Shafer Theory of Evidence, can expand the framework of uncertainty to provide decision makers with a more robust solution space. The issues embedded in this research focus on how the relevant predictive uncertainty produced by similar actions is measured. The methodology uses the established approaches of traditional probability theory and Dempster-Shafer evidence theory to combine two classes of uncertainty, aleatory and epistemic. Probability theory provides the mathematical structure traditionally used in the representation of aleatory uncertainty. The uncertainty in analysis outcomes is represented by probability distributions and typically summarized as Complementary Cumulative Distribution Functions (CCDFs). The main comparison in this research is between the probability of X in probability theory and the basic probability assignment m(X) in evidence theory. Using this comparison, an epistemic model is developed to obtain the upper limits, given by the Complementary Cumulative Plausibility Function (CCPF), and the lower limits, given by the Complementary Cumulative Belief Function (CCBF), relative to the traditional probability function. A conceptual design for the Thermal Protection System (TPS) of future Crew Exploration Vehicles (CEV) is used as an initial test case. A questionnaire is tailored to elicit judgment from experts in high-risk environments. The answers to the questionnaire produce information that serves as the qualitative semantics used for the evidence theory functions. The computational mechanism provides a heuristic approach for the compilation and presentation of the results. A follow-up evaluation serves as validation of the findings and provides useful information in terms of consistency and adaptability to other domains.
    The results of this methodology provide a useful and practical approach in conceptual design to aid the decision maker in assessing the level of uncertainty of the experts. The methodology presented is well suited to decision makers working with similar conceptual design instruments.
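    The CCBF/CCPF construction can be sketched for a small Dempster-Shafer structure; the focal intervals and masses below are hypothetical, not values elicited in the study:

```python
# A Dempster-Shafer body of evidence, e.g. from expert elicitation:
# focal intervals for some response quantity with basic probability masses.
focal_elements = [
    ((0.0, 4.0), 0.2),
    ((2.0, 6.0), 0.5),
    ((5.0, 9.0), 0.3),
]

def ccbf(threshold):
    """Complementary Cumulative Belief Function: total mass of focal
    elements that certainly exceed the threshold (entire interval above)."""
    return sum(m for (lo, hi), m in focal_elements if lo > threshold)

def ccpf(threshold):
    """Complementary Cumulative Plausibility Function: total mass of focal
    elements that could exceed the threshold (interval not entirely below)."""
    return sum(m for (lo, hi), m in focal_elements if hi > threshold)

for t in (1.0, 4.5, 7.0):
    print(f"t={t}: Bel(X>t)={ccbf(t):.2f} <= Pl(X>t)={ccpf(t):.2f}")
```

    The gap between the two curves at each threshold is exactly the epistemic part of the uncertainty; a single probabilistic CCDF would lie somewhere between them.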

    Quantification of uncertainty in probabilistic safety analysis

    This thesis develops methods for the quantification and interpretation of uncertainty in probabilistic safety analysis (PSA), focusing on fault trees. The output of a fault tree analysis is usually the probability of occurrence of an undesirable event (the top event), calculated using the failure probabilities of the identified basic events. The standard method for evaluating the uncertainty distribution is Monte Carlo simulation, but this is a computationally intensive approach to uncertainty estimation and does not readily reveal the dominant sources of the uncertainty. A closed-form approximation for the fault tree top event uncertainty distribution, for models using only lognormal distributions for the inputs, is developed in this thesis. Its output is compared with the output of two sampling-based approximation methods: standard Monte Carlo analysis, and Wilks’ method, which is based on order statistics and uses small sample sizes. Wilks’ method can be used to provide an upper bound on the percentiles of the top event distribution, and is computationally cheap. The combination of the lognormal approximation and Wilks’ method can be used to give, respectively, the overall shape of the distribution and high confidence on particular percentiles of interest. This is an attractive, practical option for the evaluation of uncertainty in fault trees and, more generally, in certain multilinear models. A new practical method of ranking uncertainty contributors in lognormal models, based on cutset uncertainty, is developed which can be evaluated in closed form. The method is demonstrated via examples, including a simple fault tree model and a model the size of a commercial PSA model for a nuclear power plant. Finally, the quantification of “hidden uncertainties” is considered; hidden uncertainties are those which are not typically considered in PSA models, but which may contribute considerable uncertainty to the overall results if included.
    A specific example of the inclusion of a missing uncertainty is explained in detail, and the effects on PSA quantification are considered. It is demonstrated that the effect on the PSA results can be significant, potentially permuting the order of the most important cutsets, which is of practical concern for the interpretation of PSA models. Suggestions are then made for the identification and inclusion of further hidden uncertainties.
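    The appeal of Wilks' method is the small, closed-form sample size it requires. A sketch of the first-order, one-sided criterion: the largest of N samples bounds the coverage quantile with at least the stated confidence once 1 - coverage**N >= confidence:

```python
import math

def wilks_sample_size(coverage, confidence):
    """Smallest N such that the maximum of N i.i.d. samples upper-bounds
    the `coverage` quantile with probability `confidence` (one-sided,
    first order): 1 - coverage**N >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

# Classic 95%/95% criterion used in nuclear safety analysis:
n = wilks_sample_size(coverage=0.95, confidence=0.95)
print(f"N = {n} Monte Carlo runs suffice for a 95/95 upper bound")
```

    Fifty-nine runs versus the many thousands needed to resolve the same percentile directly is what makes the method computationally cheap, at the price of giving only a bound rather than the full distribution shape.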

    Propagation of aleatory and epistemic uncertainties in the model for the design of a flood protection dike

    Traditionally, probability distributions are used in risk analysis to represent the uncertainty associated with random (aleatory) phenomena. The parameters of these distributions (e.g., their means and variances) are usually affected by epistemic (state-of-knowledge) uncertainty, due to limited experience and incomplete knowledge of the phenomena that the distributions represent: the uncertainty framework is then characterized by two hierarchical levels of uncertainty. Probability distributions may also be used to characterize the epistemic uncertainty affecting the parameters of the probability distributions. However, when sufficiently informative data are not available, a proper alternative may be possibility distributions. In this paper, we use probability distributions to represent aleatory uncertainty and possibility distributions to describe the epistemic uncertainty associated with the poorly known parameters of those probability distributions. A hybrid method is used to hierarchically propagate the two types of uncertainty. The results obtained on a risk model for the design of a flood protection dike are compared with those of a traditional, purely probabilistic, two-dimensional (or double) Monte Carlo approach. To the best of the authors' knowledge, this is the first time that a hybrid Monte Carlo and possibilistic method is tailored to propagate the uncertainties in a risk model whose uncertainty framework is characterized by two hierarchical levels. The results of the case study show that the hybrid approach produces risk estimates that are more conservative than (or at least comparable to) those obtained by the two-dimensional Monte Carlo method.
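    The hybrid propagation can be sketched as a two-level loop: Monte Carlo sampling of the aleatory variable on the outside, alpha-cut interval propagation of the possibilistic parameter on the inside. The flood model, the discharge distribution, and the triangular possibility distribution below are hypothetical stand-ins for the paper's dike model:

```python
import random

def flood_level(discharge, strickler):
    # Hypothetical monotone flood model: water level rises with discharge
    # and falls with the Strickler roughness coefficient.
    return (discharge / (30.0 * strickler)) ** 0.6

def alpha_cut(core, support, alpha):
    """Alpha-cut interval of a triangular possibility distribution."""
    lo, hi = support
    return (lo + alpha * (core - lo), hi - alpha * (hi - core))

rng = random.Random(7)
alphas = [i / 10 for i in range(11)]
lower_curves, upper_curves = [], []

for _ in range(500):                      # outer loop: aleatory sampling
    q = rng.lognormvariate(6.0, 0.5)      # yearly maximal discharge (m^3/s)
    lows, highs = [], []
    for a in alphas:                      # inner loop: epistemic alpha-cuts
        ks_lo, ks_hi = alpha_cut(30.0, (15.0, 45.0), a)
        # The model is decreasing in ks, so the cut endpoints bound the output.
        lows.append(flood_level(q, ks_hi))
        highs.append(flood_level(q, ks_lo))
    lower_curves.append(lows)
    upper_curves.append(highs)

# Averaging over the aleatory samples gives, at each alpha level, an interval
# that a purely probabilistic analysis would collapse to a single value.
mean_low  = sum(c[5] for c in lower_curves) / len(lower_curves)
mean_high = sum(c[5] for c in upper_curves) / len(upper_curves)
print(f"alpha=0.5 interval for the mean level: [{mean_low:.2f}, {mean_high:.2f}]")
```

    Reporting the interval's upper endpoint is what makes the hybrid estimates more conservative than the single values produced by a double Monte Carlo loop.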