14 research outputs found

    A PDEM-COM framework for uncertainty quantification of backward issues involving both aleatory and epistemic uncertainties

    Get PDF
    Uncertainties, whether inherent in nature or due to lack of knowledge, have been widely recognized by researchers and engineering practitioners in engineering design and analysis for decades. Although great effort has been devoted to uncertainty quantification (UQ) in various aspects, the methodologies for quantifying aleatory and epistemic uncertainty are usually logically inconsistent: aleatory uncertainty is typically quantified in the framework of probability theory, whereas epistemic uncertainty is mostly quantified by non-probabilistic methods. In the present paper, a probabilistically consistent framework for quantifying both aleatory and epistemic uncertainty is outlined, synthesizing the probability density evolution method (PDEM) and the change of probability measure (COM). The framework is then applied to backward issues of uncertainty quantification, in particular the uncertainty model updating problem. A numerical example is presented, and the results indicate the flexibility and efficiency of the proposed PDEM-COM framework.
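
    The change-of-probability-measure step lends itself to a compact illustration: responses computed once under a nominal input distribution can be reweighted to reflect an updated distribution without re-running the model. Below is a minimal Python sketch of that reweighting idea, assuming Gaussian nominal and updated densities and a placeholder response model (all illustrative choices, not the paper's actual PDEM formulation):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        # Responses computed once under the nominal input density f0 = N(0, 1).
        x = rng.normal(loc=0.0, scale=1.0, size=10_000)
        y = x**2 + 0.1 * rng.normal(size=x.size)  # placeholder response model g(x)

        # Change of probability measure: the weight of each sample is the density
        # ratio f1/f0, with f1 = N(0.5, 0.8) as the updated input density -- an
        # illustrative choice standing in for the result of model updating.
        w = norm.pdf(x, loc=0.5, scale=0.8) / norm.pdf(x, loc=0.0, scale=1.0)
        w /= w.sum()

        # Response statistics under the updated measure, reusing the stored
        # responses -- no additional model evaluations are required.
        mean_updated = np.sum(w * y)
        var_updated = np.sum(w * (y - mean_updated) ** 2)
        print(mean_updated, var_updated)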

    Bayesian update of the parameters of probability distributions for risk assessment in a two-level hybrid probabilistic-possibilistic uncertainty framework

    No full text
    Risk analysis models describing aleatory (i.e., random) events contain parameters (e.g., probabilities, failure rates, ...) that are epistemically uncertain, i.e., known with poor precision. Whereas probability distributions are always used to describe aleatory uncertainty, alternative representation frameworks may be considered for describing epistemic uncertainty, depending on the information and data available. In this paper, we use possibility distributions to describe the epistemic uncertainty in the parameters of the (aleatory) probability distributions. We address the issue of updating, in a Bayesian framework, the possibilistic representation of the epistemically uncertain parameters of (aleatory) probability distributions as new information (e.g., data) becomes available. A purely possibilistic counterpart of the classical, well-grounded probabilistic Bayes theorem is adopted. The feasibility of the method is shown on a literature case study involving the risk-based design of a flood protection dike.
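
    For readers unfamiliar with possibilistic counterparts of Bayes' theorem, one common product-based form combines the prior possibility distribution with a possibilistic likelihood and renormalizes by the supremum; the paper's adopted rule may differ in detail. A minimal sketch on a discretized parameter grid, with illustrative triangular shapes that are not the case-study inputs:

        import numpy as np

        # Discretized parameter grid with a triangular prior possibility
        # distribution centred at 0.3 (illustrative, not the dike case study).
        theta = np.linspace(0.0, 1.0, 201)
        prior = np.clip(1.0 - np.abs(theta - 0.3) / 0.3, 0.0, None)

        # Possibilistic likelihood of the newly observed data (illustrative).
        likelihood = np.clip(1.0 - np.abs(theta - 0.5) / 0.4, 0.0, None)

        # Product form of a purely possibilistic Bayes rule: combine prior
        # and likelihood, then renormalize so the supremum equals one.
        posterior = prior * likelihood
        posterior /= posterior.max()
        print(theta[posterior.argmax()])  # most plausible parameter value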

    Non-intrusive stochastic analysis with parameterized imprecise probability models: I. Performance estimation

    Get PDF
    Uncertainty propagation through simulation models is critical in computational mechanics for providing robust and reliable designs in the presence of polymorphic uncertainty. This set of companion papers presents a general framework, termed non-intrusive imprecise stochastic simulation, for uncertainty propagation under imprecise probability. The framework is composed of a set of methods developed for different goals; this paper concerns performance estimation. First, the local extended Monte Carlo simulation (EMCS) is reviewed, and the global EMCS is devised to improve global performance. Second, cut-HDMR (High-Dimensional Model Representation) is introduced for decomposing the probabilistic response functions, and the local EMCS method is used for estimating the cut-HDMR component functions. Third, RS (Random Sampling)-HDMR is introduced to decompose the probabilistic response functions, and the global EMCS is applied for estimating the RS-HDMR component functions. The statistical errors of all estimators are derived, and the truncation errors are estimated by two global sensitivity indices, which can also be used for identifying the influential HDMR components. Reliability and rare-event analysis are treated in the companion paper. The effectiveness of the proposed methods is demonstrated by numerical and engineering examples.
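
    The core idea behind the extended Monte Carlo simulation estimator is that a single sample set drawn under nominal distribution parameters can be reweighted, via the likelihood ratio, to estimate the probabilistic response at any other parameter value. A minimal sketch, assuming a Gaussian input and a placeholder response model (illustrative choices, not the paper's examples):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)

        # One stochastic simulation under a fixed sampling density f(x | theta0).
        theta0 = (0.0, 1.0)                      # nominal mean and std
        x = rng.normal(*theta0, size=20_000)
        g = np.sin(x) + 0.5 * x                  # placeholder response model

        def response_mean(mu, sigma):
            """EMCS-style estimator of E[g(X)] when X ~ N(mu, sigma), reusing
            the single sample set via the ratio f(x|theta)/f(x|theta0)."""
            w = norm.pdf(x, mu, sigma) / norm.pdf(x, *theta0)
            return np.mean(w * g)

        # The response mean as a function of the epistemic parameters, with no
        # additional model evaluations.
        for mu in (-0.5, 0.0, 0.5):
            print(mu, response_mean(mu, 1.0))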

    Non-intrusive stochastic analysis with parameterized imprecise probability models: II. Reliability and rare events analysis

    Get PDF
    Structural reliability analysis for rare failure events in the presence of hybrid uncertainties is a challenging task drawing increasing attention in both academia and engineering practice. Based on the imprecise stochastic simulation framework developed in the companion paper, this work develops efficient methods to estimate failure probability functions for rare failure events when the hybrid uncertainties are characterized by imprecise probability models. The imprecise stochastic simulation methods are first improved by an active learning procedure to reduce the computational costs. For the more challenging rare failure events, two extended subset-simulation-based sampling methods are proposed to provide better performance in both local and global parameter spaces; their computational costs are the same as those of the classical subset simulation method. These two methods are also combined with the active learning procedure to further reduce the computational costs substantially. The estimation errors of all methods are analyzed based on sensitivity indices and the statistical properties of the developed estimators. All these developments enrich the imprecise stochastic simulation framework. The feasibility and efficiency of the proposed methods are demonstrated with numerical and engineering test examples.
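
    Since both proposed methods build on subset simulation, a bare-bones version of that baseline may help fix ideas: the failure probability is expressed as a product of larger conditional probabilities over adaptively chosen intermediate thresholds, with MCMC used to populate each level. The sketch below works in standard normal space with a random-walk Metropolis resampler and a placeholder limit-state function; it illustrates classical subset simulation, not the paper's extended variants:

        import numpy as np

        rng = np.random.default_rng(2)

        def g(x):
            # Placeholder limit-state function; failure when g(x) <= 0.
            return 6.0 - x.sum(axis=-1)

        def subset_simulation(dim=2, n=2000, p0=0.1, max_levels=10):
            """Bare-bones subset simulation in standard normal space with a
            random-walk Metropolis resampler (illustrative, unoptimized)."""
            x = rng.normal(size=(n, dim))
            pf = 1.0
            for _ in range(max_levels):
                y = g(x)
                thresh = np.quantile(y, p0)   # adaptive intermediate threshold
                if thresh <= 0.0:             # true failure level reached
                    return pf * np.mean(y <= 0.0)
                pf *= p0
                seeds = x[y <= thresh]
                # Repopulate the level by MCMC, targeting the standard normal
                # restricted to the intermediate domain {g <= thresh}.
                x = seeds[rng.integers(len(seeds), size=n)].copy()
                for _ in range(5):
                    prop = x + 0.5 * rng.normal(size=x.shape)
                    log_ratio = 0.5 * (np.sum(x**2, axis=1)
                                       - np.sum(prop**2, axis=1))
                    accept = np.log(rng.random(n)) < log_ratio
                    accept &= g(prop) <= thresh
                    x[accept] = prop[accept]
            return pf * np.mean(g(x) <= 0.0)

        # P(x1 + x2 >= 6) for independent standard normals, about 1.1e-5.
        print(subset_simulation())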

    Fuzzy finite element model updating of the DLR AIRMOD test structure

    Get PDF
    This article presents the application of finite element fuzzy model updating to the DLR AIRMOD structure. The proposed approach is first demonstrated on a simulated mass-spring system with three degrees of freedom. To capture the effect of the assembly process on measurement variability, modal tests were carried out on the repeatedly disassembled and reassembled DLR AIRMOD structure. The histograms of the measured data, attributed to the uncertainty of the structural components in terms of mass and stiffness, are utilised to obtain the membership functions of the chosen fuzzy outputs and to determine the updated membership functions of the uncertain input parameters represented by fuzzy variables. In this regard, a fuzzy parameter represents a set of interval parameters through its membership function, and a metamodel (kriging, in this work) is used to speed up the updating. The use of non-probabilistic models, i.e. interval and fuzzy models, for updating models with uncertainties is often more practical when the large quantities of test data necessary for probabilistic model updating are unavailable.
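
    Fuzzy model updating typically relies on alpha-cut propagation: at each membership level, the fuzzy input reduces to an interval, and the output interval is found by minimizing and maximizing the model over that cut. A minimal sketch with an illustrative triangular fuzzy parameter and a coarse grid search standing in for the optimizer and kriging metamodel used in the article:

        import numpy as np

        def model(k):
            # Placeholder output, loosely shaped like a natural frequency.
            return np.sqrt(k) / (2.0 * np.pi)

        # Triangular fuzzy stiffness-like parameter: support [8, 12], core 10
        # (illustrative values, not the AIRMOD parameters).
        lo, core, hi = 8.0, 10.0, 12.0

        for alpha in np.linspace(0.0, 1.0, 5):
            # The alpha-cut of a triangular fuzzy number is an interval that
            # shrinks from the full support (alpha = 0) to the core (alpha = 1).
            a = lo + alpha * (core - lo)
            b = hi - alpha * (hi - core)
            # Interval propagation: min/max of the model over the cut; a coarse
            # grid search stands in for the optimizer and metamodel.
            out = model(np.linspace(a, b, 101))
            print(f"alpha={alpha:.2f}: output in [{out.min():.4f}, {out.max():.4f}]")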

    The role of the Bhattacharyya distance in stochastic model updating

    Get PDF
    The Bhattacharyya distance is a stochastic measure of the discrepancy between two samples that takes their probability distributions into account. The objective of this work is to further generalize the Bhattacharyya distance as a novel uncertainty quantification metric by developing an approximate Bayesian computation model updating framework in which the distance is fully embedded. The Bhattacharyya distance between sample sets is evaluated via a binning algorithm. The approximate likelihood function built upon the distance is then developed within a two-step Bayesian updating framework, where the Euclidean and Bhattacharyya distances are utilized in the first and second steps, respectively. The performance of the proposed procedure is demonstrated with two exemplary applications: a simulated mass-spring example and a challenging benchmark problem for uncertainty treatment. These examples demonstrate a gain in the quality of the stochastic updating from the superior features of the Bhattacharyya distance, which represents a convenient, efficient, and capable metric for stochastic model updating and uncertainty characterization.
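
    A simple reading of the binning-based evaluation of the Bhattacharyya distance: histogram both sample sets over common bin edges, normalize to bin probabilities, and take the negative logarithm of the Bhattacharyya coefficient. The sketch below makes the edge and bin-count choices explicit as assumptions; the paper's actual binning algorithm may differ in detail:

        import numpy as np

        def bhattacharyya_distance(a, b, bins=30):
            """Bhattacharyya distance between two univariate sample sets,
            estimated by binning both over a common range."""
            edges = np.linspace(min(a.min(), b.min()),
                                max(a.max(), b.max()), bins + 1)
            p, _ = np.histogram(a, bins=edges)
            q, _ = np.histogram(b, bins=edges)
            p = p / p.sum()
            q = q / q.sum()
            bc = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient
            return -np.log(max(bc, 1e-12))     # guard against zero overlap

        rng = np.random.default_rng(3)
        sim = rng.normal(0.0, 1.0, 1000)       # e.g., model-predicted samples
        obs = rng.normal(0.3, 1.2, 1000)       # e.g., measured samples
        print(bhattacharyya_distance(sim, obs))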

    Stochastic simulation methods for structural reliability under mixed uncertainties

    Get PDF
    Uncertainty quantification (UQ) has been widely recognized as one of the most important, yet challenging, tasks in both structural and system engineering. Current research mainly concerns the proper treatment of different types of uncertainties, resulting from either natural randomness or lack of information, in all related sub-problems of UQ, such as uncertainty characterization, uncertainty propagation, sensitivity analysis, model updating, model validation, and risk and reliability analysis. It is widely accepted that these uncertainties can be grouped as either aleatory or epistemic, depending on whether they are reducible. To address this challenge, many non-traditional uncertainty characterization models have been developed; these can be grouped as either imprecise probability models (e.g., the probability-box model, evidence theory, the second-order probability model, and the fuzzy probability model) or non-probabilistic models (e.g., interval/convex models and fuzzy set theory).

    This thesis concerns the efficient numerical propagation of these three kinds of uncertainty characterization models; for simplicity, the precise probability model, the distributional probability-box model, and the interval model are taken as examples. The target is to develop efficient numerical algorithms for learning the functional behavior of the probabilistic responses (e.g., response moments and failure probability) with respect to the epistemic parameters of the model inputs, which is especially useful for making reliable decisions even when the available information on the model inputs is imperfect.

    To achieve this target, the thesis presents three main developments for improving Non-intrusive Imprecise Stochastic Simulation (NISS), a general methodological framework for propagating imprecise probability models with only one stochastic simulation. The first development generalizes the NISS methods to problems whose inputs include both imprecise probability models and non-probabilistic models. The algorithm is established by combining Bayes' rule and kernel density estimation, and the sensitivity indices of the epistemic parameters are produced as by-products. The NASA Langley UQ challenge is then successfully solved using the generalized NISS method. The second development injects classical line sampling into the NISS framework to substantially improve the efficiency of the algorithm for rare failure event analysis. Two strategies, based on different interpretations of line sampling, are developed: the first is based on hyperplane approximations, while the second is derived from one-dimensional integrals. Both strategies can be regarded as post-processing of classical line sampling, but the results show that the resulting NISS estimators perform differently. The third development aims at further improving the efficiency of line sampling and its suitability for highly nonlinear problems, targeting complex structures and systems where one deterministic simulation may take hours. To this end, an active learning strategy based on Gaussian process regression is embedded into the line sampling procedure to accurately estimate the intersection point for each sample line with only a small number of deterministic simulations.

    These three developments have largely improved the suitability and efficiency of the NISS methods, especially for real-world engineering applications. Their efficiency and effectiveness are illustrated with toy examples and demonstrated by real-world test examples in system, civil, and mechanical engineering.
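
    Classical line sampling, on which the second and third developments build, estimates a rare failure probability by shooting lines along an important direction in standard normal space and averaging the per-line contributions Phi(-beta_i), where beta_i is the distance from the line's anchor point to the limit state. A minimal sketch with a linear placeholder limit state, for which the estimator is exact and every line returns the same distance; the direction and root-search bracket are illustrative assumptions:

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import norm

        rng = np.random.default_rng(4)

        def g(u):
            # Placeholder linear limit state in standard normal space;
            # failure when g(u) <= 0.
            return 5.0 - u[0] - 0.5 * u[1]

        dim, n_lines = 2, 50
        alpha = np.array([1.0, 0.5])
        alpha /= np.linalg.norm(alpha)        # assumed important direction

        betas = []
        for _ in range(n_lines):
            u = rng.normal(size=dim)
            u_perp = u - (u @ alpha) * alpha  # project onto the hyperplane
            # Root search along the line u_perp + c * alpha for g = 0;
            # the bracket [0, 10] is an illustrative assumption.
            betas.append(brentq(lambda c: g(u_perp + c * alpha), 0.0, 10.0))

        # Each line contributes Phi(-beta_i); the average estimates pf.
        pf = np.mean(norm.cdf(-np.array(betas)))
        print(pf)  # ~3.9e-6, exact here because the limit state is linear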

    Distribution-free stochastic simulation methodology for model updating under hybrid uncertainties

    Get PDF
    In the real world, a significant challenge in the safe operation and maintenance of infrastructure is the lack of available information or data. This results in a large degree of uncertainty and the need for robust and efficient uncertainty quantification (UQ) tools to derive realistic estimates of structural behavior. While the probabilistic approach has long been an essential tool for the quantitative mathematical representation of uncertainty, a common criticism is that it often involves unsubstantiated subjective assumptions because of the scarcity or imprecision of available information. To avoid such subjectivity, the concepts of imprecise probabilities have been developed, and the distributional probability-box (p-box) has gained the most attention among imprecise probability models since it provides a clear separation between aleatory and epistemic uncertainty. This thesis concerns the realistic consideration and numerically efficient calibration and propagation of aleatory and epistemic uncertainties (hybrid uncertainties) based on the distributional p-box. Recent developments, including the Bhattacharyya distance-based approximate Bayesian computation (ABC) and non-intrusive imprecise stochastic simulation (NISS) methods, have strengthened the subjective-assumption-free approach to uncertainty calibration and propagation. However, these distributional p-box methods rely on prior knowledge that fixes a specific distribution family for the p-box. The target of this thesis is hence to develop a distribution-free approach for the calibration and propagation of hybrid uncertainties, further strengthening the assumption-free UQ approach.

    To achieve this target, the thesis presents five main developments that improve the Bhattacharyya distance-based ABC and NISS frameworks. The first development improves the scope of application and efficiency of the Bhattacharyya distance-based ABC. A dimension reduction procedure is proposed to evaluate the Bhattacharyya distance when the system under investigation is described by time-domain sequences. Moreover, an efficient Bayesian inference method within the Bayesian updating with structural reliability methods (BUS) framework is developed by combining BUS with the adaptive Kriging-based reliability method AK-MCMC. The second development is a distribution-free stochastic model updating framework based on the combined application of staircase density functions and the Bhattacharyya distance. Staircase density functions can approximate a wide range of distributions arbitrarily closely; hence the Bhattacharyya distance-based ABC can be performed without limiting hypotheses on the distribution families of the parameters to be updated. These two developments are then integrated in the third development to provide a solution to the latest (2019) edition of the NASA UQ challenge problem. The model updating tasks, under the very challenging condition that prior information on the aleatory parameters is extremely limited beyond a common boundary, are successfully addressed with the distribution-free stochastic model updating framework.

    Moreover, a NISS approach that simplifies the high-dimensional optimization to a set of one-dimensional searches, via a first-order high-dimensional model representation (HDMR) decomposition with respect to each design parameter, is developed to efficiently solve the reliability-based design optimization tasks. This challenge also elucidates the limitations of the developments so far; hence the fourth development addresses the limitation that staircase density functions are designed for univariate random variables and cannot account for parameter dependencies. To calibrate the joint distribution of correlated parameters, the distribution-free stochastic model updating framework is extended by characterizing the aleatory parameters with Gaussian copula functions whose marginal distributions are staircase density functions. This further strengthens the assumption-free approach to uncertainty calibration, in which no prior information on parameter dependencies is required. Finally, the fifth development is a distribution-free uncertainty propagation framework based on another application of the staircase density functions to the NISS class of methods, applied to efficiently solve the reliability analysis subproblem of the NASA UQ challenge 2019. These five developments have successfully strengthened the assumption-free approach for both uncertainty calibration and propagation, thanks to the ability of staircase density functions to approximate arbitrary distributions. Their efficiency and effectiveness are demonstrated on real-world applications, including the NASA UQ challenge 2019.
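
    The fourth development's combination of staircase densities with a Gaussian copula can be sketched compactly: sample correlated standard normals, map them to uniforms through the normal CDF, and push the uniforms through the inverse CDF of a piecewise-constant (staircase) marginal. The bin edges, heights, and copula correlation below are illustrative assumptions, not the thesis's calibrated values:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)

        # A staircase (piecewise-constant) density on [0, 1]: bin heights are
        # illustrative and normalized to integrate to one.
        edges = np.linspace(0.0, 1.0, 6)
        heights = np.array([0.5, 1.5, 2.0, 0.7, 0.3])
        heights /= np.sum(heights * np.diff(edges))
        cdf = np.concatenate([[0.0], np.cumsum(heights * np.diff(edges))])

        def staircase_ppf(u):
            # Inverse CDF of the staircase density: its CDF is piecewise
            # linear, so linear interpolation inverts it exactly.
            return np.interp(u, cdf, edges)

        # Gaussian copula: correlated normals -> uniforms -> target marginals.
        corr = np.array([[1.0, 0.6], [0.6, 1.0]])  # assumed copula correlation
        z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=5000)
        u = norm.cdf(z)                            # correlated uniforms
        samples = staircase_ppf(u)                 # staircase marginals, both dims
        print(np.corrcoef(samples, rowvar=False))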

    Model-data interaction in groundwater studies: Review of methods, applications and future directions

    Get PDF
    We define model-data interaction (MDI) as a two-way process between models and data: on one hand, data serve the modeling purpose by supporting model discrimination, parameter refinement, uncertainty analysis, etc.; on the other hand, models provide a tool for data fusion, interpretation, interpolation, etc. MDI has many applications in the realm of groundwater and has been the topic of extensive research in the groundwater community for the past several decades, leading to the development of a multitude of increasingly sophisticated methods. The progress of data acquisition technologies and the evolution of models are continuously changing the landscape of groundwater MDI, creating new challenges and opportunities that must be properly understood and addressed. This paper reviews, analyzes, and classifies research on MDI in groundwater applications, and discusses several related aspects, including: (1) basic theoretical concepts and classification of methods, (2) sources of uncertainty and how they are commonly addressed, (3) specific characteristics of groundwater models and data that affect the choice of methods, (4) how models and data can interact to provide added value in groundwater applications, (5) software and codes for MDI, and (6) key issues that will likely form future research directions. The review shows that there are many tools and techniques for groundwater MDI, and this diversity is needed to support different MDI objectives, assumptions, model and data types, and computational constraints. The study identifies eight categories of applications for MDI in the groundwater literature, and highlights the growing gap between MDI practices in the research community and those in consulting, industry, and government.