6,295 research outputs found

    Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    Get PDF
    This paper presents a computational framework for uncertainty characterization, propagation, and sensitivity analysis in the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying the important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem, which deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables that significantly affect system-level performance (the limit of four, specified in the NASA-LUQC, reflects practical engineering situations where not all epistemic variables can be refined due to time and budget constraints). The most significant contribution of this paper is the development of a sequential refinement methodology in which the epistemic variables for refinement are not identified all at once. Instead, only one variable is identified first, and then the Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure continues until all four variables are identified and the refinement in system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once refinement approach are explained, and the methodology is applied to the NASA-LUQC problem.
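
    A minimal sketch of the sequential refinement loop described above, under stated assumptions: the GTM subsystem models and challenge data are not reproduced here, so `model` is a toy function, the epistemic variables are represented by plain intervals, and interval shrinking stands in for the Bayesian inference step; the Sobol indices are estimated with a standard pick-freeze Monte Carlo estimator rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the system-level model (the GTM of the challenge is not public here).
def model(x):
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] ** 2 + np.sin(3 * x[:, 2]) + 0.3 * x[:, 3] * x[:, 4]

# Epistemic variables described by intervals [lo, hi]; "refinement" shrinks the interval.
bounds = np.array([[0.0, 1.0]] * 5)

def sample(bounds, n):
    u = rng.random((n, len(bounds)))
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

def first_order_sobol(bounds, n=20000):
    """Pick-freeze Monte Carlo estimator of first-order Sobol indices."""
    d = len(bounds)
    A, B = sample(bounds, n), sample(bounds, n)
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # freeze all columns except the i-th
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

refined = []
for step in range(4):                            # refine at most four variables, one at a time
    S = first_order_sobol(bounds)
    S[refined] = -np.inf                         # already-refined variables are no longer candidates
    i = int(np.argmax(S))                        # most important remaining epistemic variable
    refined.append(i)
    mid = bounds[i].mean()
    half = 0.1 * (bounds[i, 1] - bounds[i, 0])   # crude stand-in for Bayesian refinement of variable i
    bounds[i] = [mid - half, mid + half]
    print(f"step {step + 1}: refined variable {i}, indices {np.round(S, 3)}")
```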

    Reliability-based design optimization using kriging surrogates and subset simulation

    Full text link
    The aim of the present paper is to develop a strategy for solving reliability-based design optimization (RBDO) problems that remains applicable when the performance models are expensive to evaluate. Starting with the premise that simulation-based approaches are not affordable for such problems, and that most-probable-failure-point-based approaches do not allow the error on the estimated failure probability to be quantified, an approach based on both metamodels and advanced simulation techniques is explored. The kriging metamodeling technique is chosen to surrogate the performance functions because it allows the surrogate error to be genuinely quantified. The surrogate error on the limit-state surfaces is propagated to the failure-probability estimates in order to provide an empirical error measure. This error is then sequentially reduced by means of a population-based adaptive refinement technique until the kriging surrogates are accurate enough for reliability analysis. This original refinement strategy makes it possible to add several observations to the design of experiments at the same time. Reliability and reliability sensitivity analyses are performed by means of the subset simulation technique for the sake of numerical efficiency. The adaptive surrogate-based strategy for reliability estimation is finally embedded in a classical gradient-based optimization algorithm in order to solve the RBDO problem. The kriging surrogates are built in a so-called augmented reliability space, making them reusable from one nested RBDO iteration to the next. The strategy is compared to other approaches available in the literature on three academic examples in the field of structural mechanics. Comment: 20 pages, 6 figures, 5 tables. Preprint submitted to Springer-Verlag.
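
    A minimal sketch of the core idea of propagating kriging error to the failure-probability estimate, under stated assumptions: `g` is an invented limit-state function, the design-of-experiments size is arbitrary, and crude Monte Carlo on the surrogate replaces the subset simulation used in the paper; the bounds come from classifying samples with the kriging mean shifted by ±1.96 standard deviations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# Hypothetical limit-state function; g(x) <= 0 denotes failure.
def g(x):
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

# Small design of experiments for the kriging surrogate.
X = rng.normal(size=(30, 2))
y = g(X)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                              normalize_y=True).fit(X, y)

# Crude Monte Carlo on the surrogate (the paper uses subset simulation instead).
U = rng.normal(size=(200_000, 2))
mu, sd = gp.predict(U, return_std=True)

pf_mean = np.mean(mu <= 0)               # plug-in estimate of the failure probability
pf_low = np.mean(mu + 1.96 * sd <= 0)    # surrogate error propagated to the estimate
pf_high = np.mean(mu - 1.96 * sd <= 0)   # as an empirical interval
print(f"P_f ~ {pf_mean:.4e}  (surrogate-error bounds: [{pf_low:.4e}, {pf_high:.4e}])")
# Samples with |mu| < 1.96*sd are natural candidates for adaptive enrichment of the design.
```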

    Context-aware model hierarchies for higher-dimensional uncertainty quantification

    Get PDF
    We formulate four novel context-aware algorithms based on model hierarchies, aimed at enabling an efficient quantification of uncertainty in complex, computationally expensive problems such as fluid-structure interaction and plasma microinstability simulations. Our results show that our algorithms are more efficient than standard approaches and that they are able to cope with the challenges of quantifying uncertainty in higher-dimensional, complex problems.
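
    The abstract does not detail the four algorithms, so the sketch below shows only a generic instance of hierarchy-based uncertainty quantification: a two-level multifidelity Monte Carlo control-variate estimator with invented high- and low-fidelity models `f_hi` and `f_lo`; it is not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical high- and low-fidelity models of the same quantity of interest.
def f_hi(x):                    # "expensive" model
    return np.sin(x) + 0.05 * x ** 2

def f_lo(x):                    # cheap lower-fidelity approximation
    return np.sin(x)

# Pilot samples to estimate the control-variate weight.
xp = rng.normal(size=200)
alpha = np.cov(f_hi(xp), f_lo(xp))[0, 1] / np.var(f_lo(xp))

# Few expensive evaluations, many cheap ones.
x_hi = rng.normal(size=200)
x_lo = rng.normal(size=100_000)
estimate = np.mean(f_hi(x_hi) - alpha * f_lo(x_hi)) + alpha * np.mean(f_lo(x_lo))
print(f"multifidelity estimate of E[f_hi]: {estimate:.4f}")
```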

    Adaptive numerical designs for the calibration of computer codes

    Full text link
    Making good predictions of a physical system using a computer code requires the inputs to be carefully specified. Some of these inputs, called control variables, have to reproduce physical conditions, whereas other inputs, called parameters, are specific to the computer code and most often uncertain. The goal of statistical calibration is to estimate these parameters with the help of a statistical model that links the code outputs to the field measurements. In a Bayesian setting, the posterior distribution of these parameters is normally sampled using MCMC methods. However, these are impractical when the code runs are very time-consuming. One way to circumvent this issue is to replace the computer code with a Gaussian process emulator and then sample a cheap-to-evaluate posterior distribution based on it. In doing so, calibration is subject to an error that strongly depends on the numerical design of experiments used to fit the emulator. We aim at reducing this error by building a suitable sequential design by means of the Expected Improvement criterion. Numerical illustrations in several dimensions assess the efficiency of such sequential strategies.
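
    A minimal sketch of Expected-Improvement-driven sequential design for calibration, under stated assumptions: `code`, the field observations, and the one-dimensional parameter are invented; standard Expected Improvement is applied directly to a sum-of-squares misfit emulated by a Gaussian process, whereas the paper builds the emulator on the code itself and adapts the criterion to the calibration setting.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)

x_field = np.linspace(0, 1, 10)
theta_true = 0.4
observations = np.sin(6 * theta_true * x_field) + rng.normal(0, 0.05, x_field.size)

def code(theta):                     # hypothetical expensive computer code
    return np.sin(6 * theta * x_field)

def misfit(theta):                   # calibration objective emulated by the GP
    return np.sum((code(theta) - observations) ** 2)

def expected_improvement(mu, sd, best):
    z = (best - mu) / np.maximum(sd, 1e-12)
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

thetas = list(rng.uniform(0, 1, 5))              # initial numerical design
values = [misfit(t) for t in thetas]
grid = np.linspace(0, 1, 500).reshape(-1, 1)

for _ in range(15):                              # sequential enrichment of the design
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(thetas).reshape(-1, 1), values)
    mu, sd = gp.predict(grid, return_std=True)
    t_next = float(grid[np.argmax(expected_improvement(mu, sd, min(values))), 0])
    thetas.append(t_next)
    values.append(misfit(t_next))

print(f"calibrated theta ~ {thetas[int(np.argmin(values))]:.3f}  (true value {theta_true})")
```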

    Data-driven modelling of biological multi-scale processes

    Full text link
    Biological processes involve a variety of spatial and temporal scales. A holistic understanding of many biological processes therefore requires multi-scale models which capture the relevant properties on all these scales. In this manuscript we review mathematical modelling approaches used to describe the individual spatial scales and how they are integrated into holistic models. We discuss the relation between spatial and temporal scales and its implications for multi-scale modelling. Based upon this overview of state-of-the-art modelling approaches, we formulate key challenges in mathematical and computational modelling of biological multi-scale and multi-physics processes. In particular, we consider the availability of analysis tools for multi-scale models and model-based multi-scale data integration. We provide a compact review of methods for model-based data integration and model-based hypothesis testing. Furthermore, novel approaches and recent trends are discussed, including computation-time reduction using reduced-order and surrogate models, which contribute to the solution of inference problems. We conclude the manuscript by providing a few ideas for the development of tailored multi-scale inference methods. Comment: This manuscript will appear in the Journal of Coupled Systems and Multiscale Dynamics (American Scientific Publishers).

    Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations

    Get PDF
    We construct a new framework for accelerating Markov chain Monte Carlo in posterior sampling problems where standard methods are limited by the computational cost of the likelihood, or of numerical models embedded therein. Our approach introduces local approximations of these models into the Metropolis-Hastings kernel, borrowing ideas from deterministic approximation theory, optimization, and experimental design. Previous efforts at integrating approximate models into inference typically sacrifice either the sampler's exactness or efficiency; our work seeks to address these limitations by exploiting useful convergence characteristics of local approximations. We prove the ergodicity of our approximate Markov chain, showing that it samples asymptotically from the \emph{exact} posterior distribution of interest. We describe variations of the algorithm that employ either local polynomial approximations or local Gaussian process regressors. Our theoretical results reinforce the key observation underlying this paper: when the likelihood has some \emph{local} regularity, the number of model evaluations per MCMC step can be greatly reduced without biasing the Monte Carlo average. Numerical experiments demonstrate multiple order-of-magnitude reductions in the number of forward model evaluations used in representative ODE and PDE inference problems, with both synthetic and real data. Comment: A major update of the theory and examples.
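
    A minimal sketch of the idea of local approximations inside a Metropolis-Hastings kernel, under stated assumptions: `log_post` is an invented stand-in for an expensive posterior, the approximation is a local linear fit over nearest stored evaluations rather than the paper's local polynomial or Gaussian process regressors, and only a decaying random-refinement schedule is kept (the paper additionally triggers refinement via cross-validation error indicators).

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(theta):                          # stand-in for an expensive forward-model posterior
    return -0.5 * np.sum((theta - np.array([1.0, -0.5])) ** 2 / np.array([0.5, 0.2]))

# Pool of true model evaluations used to build local approximations.
pool_x = [rng.normal(size=2) for _ in range(10)]
pool_y = [log_post(x) for x in pool_x]

def local_approx(theta, k=8):
    """Local linear fit of the log-posterior from the k nearest stored evaluations."""
    X = np.array(pool_x)
    d = np.linalg.norm(X - theta, axis=1)
    idx = np.argsort(d)[:k]
    A = np.hstack([np.ones((len(idx), 1)), X[idx] - theta])   # linear basis centred at theta
    coef, *_ = np.linalg.lstsq(A, np.array(pool_y)[idx], rcond=None)
    return coef[0]                                            # fitted value at theta

theta = np.zeros(2)
samples, n_true = [], 0
for t in range(1, 5001):
    prop = theta + 0.3 * rng.normal(size=2)
    beta = 1.0 / np.sqrt(t)                   # decaying random-refinement probability
    if rng.random() < beta:                   # occasionally run the true model and grow the pool
        pool_x.append(prop)
        pool_y.append(log_post(prop))
        n_true += 1
    if np.log(rng.random()) < local_approx(prop) - local_approx(theta):
        theta = prop
    samples.append(theta)

print(f"posterior mean ~ {np.mean(samples, axis=0).round(3)}, true-model calls: {n_true + 10}")
```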

    Practical Bayesian optimization in the presence of outliers

    Get PDF
    Inference in the presence of outliers is an important field of research, as outliers are ubiquitous and may arise across a variety of problems and domains. Bayesian optimization is a method that heavily relies on probabilistic inference. This allows outstanding sample efficiency because the probabilistic machinery provides a memory of the whole optimization process. However, that virtue becomes a disadvantage when the memory is populated with outliers, inducing bias in the estimation. In this paper, we present an empirical evaluation of Bayesian optimization methods in the presence of outliers. The empirical evidence shows that Bayesian optimization with robust regression often produces suboptimal results. We then propose a new algorithm which combines robust regression (a Gaussian process with Student-t likelihood) with outlier diagnostics to classify data points as outliers or inliers. By using a scheduler for the classification of outliers, our method is more efficient and converges better than standard robust regression. Furthermore, we show that even in controlled situations with no expected outliers, our method is able to produce better results. Comment: 10 pages (2 of references), 6 figures, 1 algorithm.
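
    A minimal sketch of Bayesian optimization with scheduled outlier diagnostics, under stated assumptions: `objective` is an invented function with injected outliers, a plain Gaussian process with residual-based outlier classification replaces the Student-t likelihood GP of the paper, and the acquisition is standard Expected Improvement on a 1-D grid.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(5)

def objective(x):                 # hypothetical target with occasional large outliers
    y = np.sin(3 * x) + x ** 2
    return y + (10.0 if rng.random() < 0.1 else rng.normal(0, 0.05))

X = list(rng.uniform(-2, 2, 6))
Y = [objective(x) for x in X]
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for it in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5) + WhiteKernel(), normalize_y=True)
    gp.fit(np.array(X).reshape(-1, 1), Y)
    if it % 5 == 0:                                       # scheduler: run diagnostics periodically
        mu, sd = gp.predict(np.array(X).reshape(-1, 1), return_std=True)
        inlier = np.abs(np.array(Y) - mu) < 3 * np.maximum(sd, 1e-6)
        if inlier.sum() >= 3 and not inlier.all():        # refit on points classified as inliers
            gp.fit(np.array(X).reshape(-1, 1)[inlier], np.array(Y)[inlier])
    mu, sd = gp.predict(grid, return_std=True)
    best = min(Y)
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)     # Expected Improvement acquisition
    x_next = float(grid[np.argmax(ei), 0])
    X.append(x_next)
    Y.append(objective(x_next))

print(f"best observed x ~ {X[int(np.argmin(Y))]:.3f}")
```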