
    Shifting Data Collection from a Fixed to an Adaptive Sampling Paradigm

    In domains where data are difficult to obtain due to human or resource limitations, the dimensions of an information space must be explored efficiently to acquire any given response of interest. Many disciplines are still transitioning from brute-force, dense, full-factorial exploration of their information spaces to a more efficient design-of-experiments approach, the latter having been used successfully for decades in agricultural and automotive applications. Although this transition is still incomplete, groundwork must be laid for the next generation of algorithms that adaptively explore the information space in response to the data collected and to any resulting empirical models (i.e., metamodels). The present work compares the quality of metamodels built with a fixed sampling technique against those built with an adaptive sampling technique driven by metamodel variance. To quantify metamodeling errors, a delta method was used to provide quantitative estimates of model variance. The methodology was applied to a design space with an air-breathing engine performance response. It was shown that the adaptive sampling technique achieved competitive metamodel quality with lower associated error for the same level of effort as a fixed, a priori sampling technique.
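    As a rough illustration of the variance-driven adaptive sampling idea described above (not the paper's implementation), the Python sketch below grows a Gaussian-process metamodel by repeatedly adding the candidate point with the largest predicted standard deviation. The toy response f, the candidate grid, and the evaluation budget are assumptions made only for the example.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)
        f = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2   # placeholder response, not the engine model

        X = rng.uniform(0.0, 1.0, size=(5, 2))          # small initial design
        y = f(X)
        candidates = rng.uniform(0.0, 1.0, size=(500, 2))

        for _ in range(20):                              # fixed evaluation budget
            gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
            _, std = gp.predict(candidates, return_std=True)
            i = np.argmax(std)                           # most uncertain candidate
            x_new = candidates[i]
            candidates = np.delete(candidates, i, axis=0)
            X = np.vstack([X, x_new])                    # refit with the new sample next pass
            y = np.append(y, f(x_new[None, :]))

    A fixed-design baseline at the same level of effort would simply evaluate an equally sized a priori design up front and fit the metamodel once, which is the comparison the abstract describes.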

    Kriging Metamodeling in Simulation: A Review

    This article reviews Kriging (also called spatial correlation modeling). It presents the basic Kriging assumptions and formulas, contrasting Kriging and classic linear regression metamodels. Furthermore, it extends Kriging to random simulation and discusses bootstrapping to estimate the variance of the Kriging predictor. Besides classic one-shot statistical designs such as Latin Hypercube Sampling, it reviews sequentialized and customized designs. It ends with topics for future research.
    Keywords: Kriging; Metamodel; Response Surface; Interpolation; Design
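    To make the workflow concrete, here is a minimal sketch, assuming a scikit-learn Gaussian process as the Kriging implementation, of fitting a metamodel on a one-shot Latin Hypercube design and bootstrapping over the design points to gauge the variance of the Kriging predictor at a point of interest. The stand-in simulator and all sample sizes are illustrative; the review itself discusses more careful bootstrap schemes for random simulation.

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        sim = lambda x: np.exp(-x[:, 0]) * np.cos(4 * x[:, 1])   # stand-in simulator

        sampler = qmc.LatinHypercube(d=2, seed=1)
        X = sampler.random(n=30)                   # one-shot space-filling design
        y = sim(X)

        x_star = np.array([[0.4, 0.6]])            # prediction point of interest
        rng = np.random.default_rng(1)
        preds = []
        for _ in range(200):                       # bootstrap resamples of the design
            idx = rng.integers(0, len(X), len(X))
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,  # jitter for duplicated rows
                                          normalize_y=True)
            gp.fit(X[idx], y[idx])
            preds.append(gp.predict(x_star)[0])

        print(np.mean(preds), np.var(preds))       # bootstrap estimate of predictor variance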

    Development of reduced polynomial chaos-Kriging metamodel for uncertainty quantification of computational aerodynamics

    Computational fluid dynamics (CFD) simulations are a critical component of the design and development of aerodynamic bodies. However, as engineers attempt to capture more detailed physics, the computational cost of the simulations increases. This limits the ability of engineers to use robust or multidisciplinary design methodologies for practical engineering applications, because the computational model is too expensive to evaluate for uncertainty quantification studies and off-design performance analysis. Metamodels (surrogate models) are closed-form mathematical approximations fit to only a few simulation responses; they can remedy this situation by estimating the off-design performance and stochastic responses of the CFD simulation at far less computational cost. The development of a reduced polynomial chaos-Kriging (RPC-K) metamodel is another step toward eliminating simulation gridlock by capturing the relevant physics of the problem in a cheap-to-evaluate metamodel built from fewer CFD simulations. The RPC-K metamodel is superior to existing technologies because its model-reduction methodology eliminates the design parameters that contribute little variance to the problem before a high-fidelity metamodel is fit to the remaining data. The metamodel can capture non-linear physics because it combines the long-range trend information of a polynomial chaos expansion with local variations in the simulation data captured through Kriging. In this thesis, the RPC-K metamodel is developed, validated on a convection-diffusion-reaction problem, and applied to the NACA 4412 airfoil and aircraft engine nacelle problems. This research demonstrates the metamodel's effectiveness over existing polynomial chaos and Kriging metamodels for aerodynamics applications because of its ability to fit non-linear fluid flows with far fewer CFD simulations. This research will allow aerospace engineers to take more effective advantage of detailed CFD simulations in the development of next-generation aerodynamic bodies by using the RPC-K metamodel to save computational cost.
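    The following Python sketch illustrates the general "screen, then fit" pattern the abstract describes, not the RPC-K algorithm itself: inputs whose crude one-at-a-time variance contribution is negligible are dropped before a Kriging model is fit to the remaining dimensions. The toy function, the screening rule, and the 5% threshold are assumptions for illustration.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(2)
        f = lambda x: x[:, 0] ** 2 + np.sin(3 * x[:, 1]) + 0.001 * x[:, 2]   # x2 barely matters

        X = rng.uniform(0.0, 1.0, size=(80, 3))
        y = f(X)

        # Crude one-at-a-time screening: variance of y explained by each input alone
        # (a quadratic fit per input stands in for a proper variance decomposition).
        scores = np.array([np.var(np.poly1d(np.polyfit(X[:, j], y, 2))(X[:, j]))
                           for j in range(X.shape[1])])
        keep = scores > 0.05 * scores.sum()        # drop inputs with negligible contribution

        # Fit the higher-fidelity metamodel only on the surviving dimensions.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(keep.sum())),
                                      normalize_y=True).fit(X[:, keep], y)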

    Metamodel based high-fidelity stochastic analysis of composite laminates: A concise review with critical comparative assessment

    This paper presents a concise state-of-the-art review, along with an exhaustive comparative investigation of surrogate models, for critical assessment of the uncertainty in the natural frequencies of composite plates on the basis of computational efficiency and accuracy. Both individual and combined variations of the input parameters have been considered, to account for the effect of low- and high-dimensional input parameter spaces in the surrogate-based uncertainty quantification algorithms, including the rate of convergence. Probabilistic characterization of the first three stochastic natural frequencies is carried out using a finite element model that includes the effects of transverse shear deformation based on Mindlin’s theory, in conjunction with a layer-wise random variable approach. The results obtained with the different metamodels have been compared against traditional Monte Carlo simulation (MCS) as the reference for high-fidelity uncertainty quantification. The crucial issue of how sampling techniques influence the performance of metamodel-based uncertainty quantification is addressed as an integral part of this article.
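    A minimal sketch of the surrogate-versus-Monte-Carlo comparison workflow is shown below, with a closed-form toy frequency standing in for the Mindlin finite element model; the input distributions and sample sizes are assumptions for illustration only.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(3)
        # Toy "first natural frequency" of a one-degree-of-freedom stand-in, NOT the Mindlin FE model.
        freq = lambda x: np.sqrt(x[:, 0] / x[:, 1]) / (2.0 * np.pi)
        sample = lambda n: np.column_stack([rng.normal(1.00, 0.05, n),   # normalized stiffness
                                            rng.normal(1.00, 0.04, n)])  # normalized mass

        X_train = sample(40)                       # small training design for the surrogate
        gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_train, freq(X_train))

        X_mc = sample(10_000)                      # direct Monte Carlo reference
        mc, sur = freq(X_mc), gp.predict(X_mc)
        print(mc.mean(), sur.mean())               # compare the first two moments
        print(mc.std(), sur.std())

    In practice the comparison would also cover the full probability distributions of the frequencies and the rate of convergence with the number of training samples, as the abstract indicates.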

    Inverse Uncertainty Quantification using the Modular Bayesian Approach based on Gaussian Process, Part 1: Theory

    In nuclear reactor system design and safety analysis, the Best Estimate plus Uncertainty (BEPU) methodology requires that computer-model output uncertainties be quantified in order to prove that the investigated design stays within acceptance criteria. "Expert opinion" and "user self-evaluation" have been widely used to specify computer-model input uncertainties in previous uncertainty, sensitivity, and validation studies. Inverse Uncertainty Quantification (UQ) is the process of inversely quantifying input uncertainties from experimental data, in order to replace such ad-hoc specifications of the input uncertainty information with more precise ones. In this paper, we use Bayesian analysis to establish the inverse UQ formulation, with systematically and rigorously derived metamodels constructed by Gaussian Process (GP). Because of incomplete or inaccurate underlying physics, as well as numerical approximation errors, computer models always have a discrepancy/bias in representing reality, which can cause over-fitting if neglected in the inverse UQ process. The model discrepancy term is accounted for in our formulation through the "model updating equation". We provide a detailed introduction to, and comparison of, the full and modular Bayesian approaches for inverse UQ, and point out their limitations when extrapolated to the validation/prediction domain. Finally, we propose an improved modular Bayesian approach that avoids extrapolating the model discrepancy learnt in the inverse UQ domain to the validation/prediction domain.
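    For reference, a "model updating equation" of this kind is commonly written in the Kennedy-O'Hagan form sketched below; the notation (experimental response, computer model, calibration parameters, discrepancy, measurement error) is assumed here rather than taken from the paper.

        y^{E}(\mathbf{x}) \;=\; y^{M}(\mathbf{x}, \boldsymbol{\theta}) \;+\; \delta(\mathbf{x}) \;+\; \epsilon,
        \qquad \epsilon \sim \mathcal{N}\!\left(0, \Sigma_{\mathrm{exp}}\right)

    Here y^{E} is the experimental observation, y^{M} the computer model evaluated at calibration parameters \boldsymbol{\theta}, \delta(\mathbf{x}) the model discrepancy (typically modeled with a GP), and \epsilon the measurement error.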