
    Optimal Model Management for Multifidelity Monte Carlo Estimation

    This work presents an optimal model management strategy that exploits multifidelity surrogate models to accelerate the estimation of statistics of outputs of computationally expensive high-fidelity models. Existing acceleration methods typically exploit a multilevel hierarchy of surrogate models with known rates of error decay and computational cost; however, a general collection of surrogate models, which may include projection-based reduced models, data-fit models, support vector machines, and simplified-physics models, does not necessarily give rise to such a hierarchy. Our multifidelity approach provides a framework to combine an arbitrary number of surrogate models of any type. Instead of relying on error and cost rates, an optimization problem balances the number of model evaluations across the high-fidelity and surrogate models with respect to error and cost. We show that a unique analytic solution of the model management optimization problem exists under mild conditions on the models. Our multifidelity method makes occasional recourse to the high-fidelity model; in doing so, it provides an unbiased estimator of the statistics of the high-fidelity model, even in the absence of error bounds and error estimators for the surrogate models. Numerical experiments with linear and nonlinear examples show speedups of orders of magnitude compared to Monte Carlo estimation that invokes a single model only.
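    The core idea described above, combining a few high-fidelity evaluations with many cheap surrogate evaluations via control variates to obtain an unbiased estimator, can be sketched as follows. This is a minimal two-model illustration with hypothetical toy models `f_hi` and `f_lo` (not from the paper), not the paper's full multi-model method:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical models: f_hi stands in for an expensive simulation,
    # f_lo for a cheap, correlated (but biased) surrogate.
    def f_hi(z):
        return np.sin(z) + 0.1 * z**2

    def f_lo(z):
        return np.sin(z)

    # Budget: n_hi high-fidelity and n_lo >> n_hi surrogate evaluations.
    n_hi, n_lo = 100, 10_000
    z = rng.normal(size=n_lo)      # shared random inputs
    y_hi = f_hi(z[:n_hi])          # high-fidelity on the first n_hi samples
    y_lo = f_lo(z)                 # surrogate on all samples

    # Control-variate coefficient, estimated from the paired evaluations.
    cov = np.cov(y_hi, y_lo[:n_hi])
    alpha = cov[0, 1] / cov[1, 1]

    # Multifidelity estimator: unbiased for E[f_hi(Z)] because the
    # surrogate terms enter only as a zero-mean correction.
    s_mf = y_hi.mean() + alpha * (y_lo.mean() - y_lo[:n_hi].mean())
    ```

    Note that the surrogate's bias never contaminates the estimate; a poor surrogate only reduces the variance gain, which is why no error bounds on the surrogate are needed.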

    Multifidelity optimization under uncertainty for a tailless aircraft

    This paper presents a multifidelity method for optimization under uncertainty for aerospace problems. In this work, the effectiveness of the method is demonstrated for the robust optimization of a tailless aircraft based on the Boeing Insitu ScanEagle. Aircraft design is often affected by uncertainties in manufacturing and operating conditions. Accounting for uncertainties during optimization ensures a robust design that is more likely to meet performance requirements. Designing robust systems can be computationally prohibitive due to the numerous evaluations of expensive-to-evaluate high-fidelity numerical models required to estimate system-level statistics at each optimization iteration. This work uses a multifidelity Monte Carlo approach to estimate the mean and the variance of the system outputs for robust optimization. The method uses control variates to exploit multiple fidelities and optimally allocates resources to different fidelities to minimize the variance in the estimates for a given budget. The results for the ScanEagle application show that the proposed multifidelity method achieves substantial speedups compared to regular Monte-Carlo-based robust optimization.
    United States. Air Force. Office of Scientific Research. Multidisciplinary University Research Initiative (FA9550-15-1-0038)
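    The "optimally allocates resources to different fidelities" step admits an analytic solution in the multifidelity-Monte-Carlo setting: given per-evaluation costs and correlations with the high-fidelity model, the optimal sample counts follow a closed-form ratio. A sketch of that allocation rule (function name and example numbers are illustrative, assuming models ordered by decreasing correlation):

    ```python
    import numpy as np

    def mfmc_allocation(budget, w, rho):
        """Optimal MFMC sample allocation (sketch).

        budget : total computational budget in units of cost
        w      : cost per evaluation, w[0] = high-fidelity model
        rho    : correlation with the high-fidelity output, rho[0] = 1,
                 assumed strictly decreasing
        Returns the integer number of evaluations per model.
        """
        rho = np.append(rho, 0.0)  # convention: rho_{K+1} = 0
        # Ratio r_i = m_i / m_1 from the analytic optimum.
        r = np.sqrt(w[0] * (rho[:-1] ** 2 - rho[1:] ** 2)
                    / (w * (1.0 - rho[1] ** 2)))
        m1 = budget / np.dot(w, r)  # spend the whole budget
        return np.floor(m1 * r).astype(int)

    # Example: a surrogate 100x cheaper, correlation 0.95.
    m = mfmc_allocation(100.0, np.array([1.0, 0.01]), np.array([1.0, 0.95]))
    ```

    The cheaper and better-correlated a surrogate is, the more samples it receives, while the high-fidelity model retains enough evaluations to keep the estimator unbiased.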

    On information fusion for reliability estimation with multifidelity models

    Multifidelity models attempt to reduce computational effort by combining simulation models of different approximation quality and from different sources. Information fusion combines outputs from a model hierarchy in order to obtain efficient estimators for a quantity of interest. In this paper, information fusion is applied to reliability estimation. To this end, efficient multifidelity estimators for the probability of failure are developed by combining additive and multiplicative information fusion with importance sampling and importance splitting (notably the moving particles method). Importance sampling and importance splitting based multifidelity reliability estimators are compared, focusing on relative error and coefficient of variation.
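    A common pattern behind multifidelity importance sampling for failure probabilities is: use the cheap model to locate the failure region and build a biasing density there, then reweight a modest number of high-fidelity evaluations. The following sketch uses hypothetical one-dimensional limit-state functions (`g_hi`, `g_lo` are illustrative, not the paper's models) and a Gaussian biasing density; the paper's fusion estimators are more elaborate:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical limit states: failure when g(z) < 0, z ~ N(0, 1).
    def g_hi(z):                  # "expensive" high-fidelity model
        return 3.0 - z + 0.02 * z**2

    def g_lo(z):                  # cheap surrogate, slightly shifted
        return 3.1 - z

    # Step 1: many cheap runs locate the failure region; fit a Gaussian
    # biasing density to the low-fidelity failure samples.
    z_pilot = rng.normal(size=50_000)
    fail_lo = z_pilot[g_lo(z_pilot) < 0]
    mu_b, sig_b = fail_lo.mean(), fail_lo.std()

    # Step 2: importance sampling with few high-fidelity runs, reweighting
    # by nominal density / biasing density to stay unbiased.
    def phi(x, m=0.0, s=1.0):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    n = 500
    z = rng.normal(mu_b, sig_b, size=n)
    weights = phi(z) / phi(z, mu_b, sig_b)
    p_fail = np.mean((g_hi(z) < 0) * weights)
    ```

    Because the weights correct for the biased sampling, even a crude surrogate-based biasing density yields an unbiased high-fidelity failure-probability estimate; the surrogate quality only affects the variance.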

    Scalable Environment for Quantification of Uncertainty and Optimization in Industrial Applications (SEQUOIA)

    Full text link (peer reviewed): https://deepblue.lib.umich.edu/bitstream/2027.42/143027/1/6.2017-1327.pd

    Multifidelity approaches for uncertainty estimation in wildfire spread simulators

    A variety of wildfire models are currently used for prescribed fire management, fire behaviour studies and decision support during wildfire emergencies, among other applications. All these applications are based on predictive analysis, and therefore require careful estimation of aleatoric and epistemic uncertainties such as weather conditions, vegetation properties and model parameters. However, the large computational cost of high-fidelity computational fluid dynamics models prohibits the straightforward utilization of traditional Monte Carlo methods. Conversely, low-fidelity fire models are several orders of magnitude faster, but they typically do not provide enough accuracy and do not resolve all relevant phenomena. Multifidelity frameworks offer a viable solution to this limitation through the efficient combination of high- and low-fidelity simulations. While high-fidelity models provide the required level of accuracy, low-fidelity simulations are used to economically improve the confidence in the estimated uncertainty. In this work, we assessed the suitability of multifidelity methodologies to quantify uncertainty in wildfire simulations. A collection of different multifidelity strategies, including Multilevel and Control Variates Monte Carlo, were tested and their computational efficiency compared. Fire spread was predicted in a canonical scenario using popular simulators such as the Wildland-Urban Interface Fire Dynamics Simulator (WFDS) and FARSITE. Results show that multifidelity estimators allow speedups on the order of 100× to 1000× with respect to traditional Monte Carlo.
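    The Multilevel Monte Carlo strategy mentioned above can be sketched in its simplest two-level form: many cheap coarse-model samples estimate the bulk of the mean, and a small number of paired fine/coarse samples correct the coarse model's bias. The fire-spread functions below are illustrative stand-ins (not WFDS or FARSITE):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical rate-of-spread models as a function of wind speed u.
    def ros_fine(u):      # stands in for an expensive CFD simulator
        return 0.5 * u**1.5 + 0.1 * np.sin(5 * u)

    def ros_coarse(u):    # stands in for a fast empirical spread model
        return 0.5 * u**1.5

    # Two-level estimator: E[fine] = E[coarse] + E[fine - coarse].
    n0, n1 = 20_000, 200                 # many coarse, few paired samples
    u0 = rng.uniform(1, 5, size=n0)
    u1 = rng.uniform(1, 5, size=n1)
    mean_coarse = ros_coarse(u0).mean()
    mean_corr = (ros_fine(u1) - ros_coarse(u1)).mean()
    ml_estimate = mean_coarse + mean_corr
    ```

    The correction term has small variance whenever the two models are strongly correlated, so only a handful of expensive evaluations are needed, which is the mechanism behind the reported 100× to 1000× speedups.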