Review of multi-fidelity models
Multi-fidelity models provide a framework for integrating computational
models of varying complexity, allowing for accurate predictions while
optimizing computational resources. These models are especially beneficial when
acquiring high-accuracy data is costly or computationally intensive. This
review offers a comprehensive analysis of multi-fidelity models, focusing on
their applications in scientific and engineering fields, particularly in
optimization and uncertainty quantification. It classifies publications on
multi-fidelity modeling according to several criteria, including application
area, surrogate model selection, types of fidelity, combination methods and
year of publication. The study investigates techniques for combining different
fidelity levels, with an emphasis on multi-fidelity surrogate models. This work
discusses reproducibility, open-sourcing methodologies and benchmarking
procedures to promote transparency. The manuscript also includes educational
toy problems to enhance understanding. Additionally, this paper outlines best
practices for presenting multi-fidelity-related savings in a standardized,
succinct and yet thorough manner. The review concludes by examining current
trends in multi-fidelity modeling, including emerging techniques, recent
advancements, and promising research directions.
(Comment: 50 pages, 20 figures)
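As an illustration of the kind of educational toy problem such reviews use, the classic Forrester function pair gives a cheap low-fidelity surrogate of an expensive high-fidelity response (a standard example in the multi-fidelity literature, not taken from this review):

```python
import numpy as np

# Classic Forrester et al. low/high-fidelity test pair, a standard
# educational toy problem for multi-fidelity methods (illustrative only).

def f_high(x):
    """Expensive 'high-fidelity' response."""
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def f_low(x, a=0.5, b=10.0, c=-5.0):
    """Cheap 'low-fidelity' response: scaled and linearly shifted f_high."""
    return a * f_high(x) + b * (x - 0.5) + c

x = np.linspace(0.0, 1.0, 201)
print(f"LF/HF correlation: {np.corrcoef(f_low(x), f_high(x))[0, 1]:.3f}")
```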
Some considerations regarding the use of multi-fidelity Kriging in the construction of surrogate models
Surrogate models, or metamodels, are commonly used to exploit expensive computational simulations within a design optimization framework. Multi-fidelity surrogate modeling approaches have recently been gaining ground due to their potential for further reductions in simulation effort over single-fidelity approaches. However, given a black-box problem, when exactly should a designer select a multi-fidelity approach over a single-fidelity one, and vice versa? Using a series of analytical test functions and engineering design examples from the literature, this paper illustrates the potential pitfalls of choosing one technique over the other without careful consideration of the optimization problem at hand. These examples are then used to define and validate a set of guidelines for the construction of a multi-fidelity Kriging model. The resulting guidelines state that the different fidelity functions should be well correlated, that the amount of low-fidelity data in the model should exceed the amount of high-fidelity data, and that more than 10% and less than 80% of the total simulation budget should be spent on low-fidelity simulations in order for the resulting multi-fidelity model to perform better than a high-fidelity model of equivalent cost.
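A minimal sketch of how the stated guidelines could be checked in practice, assuming a known cost ratio between one high-fidelity and one low-fidelity run (the function and parameter names here are hypothetical, not from the paper):

```python
# Hypothetical check of the stated guidelines; cost_ratio is the assumed
# cost of one high-fidelity run measured in units of low-fidelity runs.

def guidelines_ok(n_lf, n_hf, cost_ratio):
    """More LF points than HF points, and 10%-80% of the total
    simulation budget spent on low-fidelity runs."""
    lf_cost = float(n_lf)              # budget units: one LF run
    hf_cost = n_hf * float(cost_ratio)
    lf_share = lf_cost / (lf_cost + hf_cost)
    return n_lf > n_hf and 0.10 < lf_share < 0.80

print(guidelines_ok(n_lf=40, n_hf=10, cost_ratio=20))  # True: LF share ~17%
```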
A New Cokriging Method for Variable-Fidelity Surrogate Modeling of Aerodynamic Data
Cokriging is a statistical interpolation method for the enhanced prediction of a less intensively sampled primary variable of interest with assistance of intensively sampled auxiliary variables. In the geostatistics community it is referred to as two- or multi-variable kriging. In this paper, a new cokriging method is proposed and used for variable-fidelity surrogate modeling of aerodynamic data obtained with an expensive high-fidelity CFD code, assisted by data computed with cheaper lower-fidelity codes or by gradients computed with an adjoint version of the high-fidelity CFD code, or both. A self-contained derivation as well as the numerical implementation of this new cokriging method is presented and the comparison with the autoregressive model of Kennedy and O’Hagan is discussed. The developed cokriging method is validated against an analytical problem and applied to construct global approximation models of the aerodynamic coefficients as well as the drag polar of an RAE 2822 airfoil based on sampled CFD data. The numerical examples show
that it is efficient, robust and practical for the surrogate modeling of aerodynamic data based on a set of CFD methods with varying degrees of fidelity and computational expense. It can potentially be applied to efficient CFD-based aerodynamic analysis and design optimization of aircraft.
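For reference, the autoregressive model of Kennedy and O'Hagan mentioned above expresses the high-fidelity response as a scaled low-fidelity response plus an independent Gaussian-process discrepancy:

```latex
% Kennedy-O'Hagan autoregressive multi-fidelity model: rho scales the
% low-fidelity response; delta is an independent GP discrepancy term.
\[
  y_{\mathrm{hi}}(x) = \rho\, y_{\mathrm{lo}}(x) + \delta(x),
  \qquad \delta \sim \mathcal{GP}\big(m(x),\, k(x, x')\big),
  \quad \delta \perp y_{\mathrm{lo}} .
\]
```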
Propagation of Input Uncertainty in Presence of Model-Form Uncertainty: A Multi-fidelity Approach for CFD Applications
Proper quantification and propagation of uncertainties in computational
simulations are of critical importance. This issue is especially challenging
for CFD applications. A particular obstacle to uncertainty quantification in
CFD problems is the large model discrepancy associated with the CFD models
used for uncertainty propagation. Neglecting or improperly representing the
model discrepancy leads to inaccurate and distorted uncertainty distributions
for the quantities of interest. High-fidelity models, being accurate yet
expensive, can accommodate only a small ensemble of simulations and thus lead
to large interpolation errors and/or sampling errors; low-fidelity models can
propagate a large ensemble, but can introduce large modeling errors. In this
work, we propose a multi-model strategy to account for the influences of model
discrepancies in uncertainty propagation and to reduce their impact on the
predictions. Specifically, we take advantage of CFD models of multiple
fidelities to estimate the model discrepancies associated with the
lower-fidelity model in the parameter space. A Gaussian process is adopted to
construct the model discrepancy function, and a Bayesian approach is used to
infer the discrepancies and corresponding uncertainties in the regions of the
parameter space where the high-fidelity simulations are not performed. The
proposed multi-model strategy combines information from models with different
fidelities and computational costs, and is of particular relevance for CFD
applications, where a hierarchy of models with a wide range of complexities
exists. Several examples of relevance to CFD applications are performed to
demonstrate the merits of the proposed strategy. Simulation results suggest
that, by combining low- and high-fidelity models, the proposed approach
produces better results than either model can achieve individually.
(Comment: 18 pages, 8 figures)
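A minimal sketch of the discrepancy-learning idea described above, with stand-in functions f_hi and f_lo in place of the CFD models (the toy responses are assumptions, and scikit-learn's GP regressor stands in for the paper's Bayesian treatment):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def f_hi(x):  # stand-in for the expensive high-fidelity CFD model
    return np.sin(8.0 * x) + x

def f_lo(x):  # stand-in for the cheap low-fidelity CFD model
    return np.sin(8.0 * x)

# Observed discrepancy delta(x) = f_hi(x) - f_lo(x) at a few HF runs.
x_hf = np.linspace(0.0, 1.0, 6).reshape(-1, 1)
delta = (f_hi(x_hf) - f_lo(x_hf)).ravel()

# The GP posterior mean corrects the LF model elsewhere in parameter space;
# the posterior std quantifies the remaining model-form uncertainty.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              normalize_y=True)
gp.fit(x_hf, delta)

x_new = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
mu, sd = gp.predict(x_new, return_std=True)
corrected = f_lo(x_new).ravel() + mu  # multi-fidelity prediction
```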
Safe Policy Synthesis in Multi-Agent POMDPs via Discrete-Time Barrier Functions
A multi-agent partially observable Markov decision process (MPOMDP) is a
modeling paradigm used for high-level planning of heterogeneous autonomous
agents subject to uncertainty and partial observation. Despite their modeling
efficiency, MPOMDPs have not received significant attention in safety-critical
settings. In this paper, we use barrier functions to design policies for
MPOMDPs that ensure safety. Notably, our method relies neither on
discretization of the belief space nor on finite memory. To this end, we
formulate necessary and sufficient conditions for the safety of a given set
based on discrete-time
barrier functions (DTBFs) and we demonstrate that our formulation also allows
for Boolean compositions of DTBFs for representing more complicated safe sets.
We show that the proposed method can be implemented online by a sequence of
one-step greedy algorithms, either as a standalone safe controller or as a
safety filter for a nominal planning policy. We illustrate the efficiency of
the proposed DTBF-based methodology using a high-fidelity simulation of
heterogeneous robots.
(Comment: 8 pages, 4 figures)
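A schematic of the one-step greedy safety-filter idea described above, under a simplified generic interface; the condition h(b') >= (1 - gamma) * h(b) is one common discrete-time barrier condition and is assumed here, not necessarily the paper's exact formulation:

```python
# Schematic one-step greedy safety filter. belief_update, h, and gamma are
# stand-in interfaces, not the paper's actual formulation; a common DTBF
# condition h(b') >= (1 - gamma) * h(b) is assumed, with h >= 0 on the
# safe set of beliefs.

def safety_filter(belief, nominal_action, actions, belief_update, h,
                  gamma=0.1):
    """Return the nominal action if it satisfies the DTBF condition;
    otherwise greedily pick the action with the largest barrier value."""
    threshold = (1.0 - gamma) * h(belief)
    if h(belief_update(belief, nominal_action)) >= threshold:
        return nominal_action  # nominal plan is already safe
    # Fall back to the safest available action (one-step greedy choice).
    return max(actions, key=lambda a: h(belief_update(belief, a)))
```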
Accelerated Modeling of Near and Far-Field Diffraction for Coronagraphic Optical Systems
Accurately predicting the performance of coronagraphs and tolerancing optical
surfaces for high-contrast imaging requires a detailed accounting of
diffraction effects. Unlike simple Fraunhofer diffraction modeling, near and
far-field diffraction effects, such as the Talbot effect, are captured by
plane-to-plane propagation using Fresnel and angular spectrum propagation. This
approach requires a sequence of computationally intensive Fourier transforms
and quadratic phase functions, which limit the design and aberration
sensitivity parameter space that can be explored at high fidelity in the
course of coronagraph design. This study presents the results of optimizing the
multi-surface propagation module of the open source Physical Optics Propagation
in PYthon (POPPY) package. This optimization was performed by implementing and
benchmarking Fourier transforms and array operations on graphics processing
units, as well as optimizing multithreaded numerical calculations using the
NumExpr python library where appropriate, to speed the end-to-end simulation of
observatory and coronagraph optical systems. Using realistic systems, this
study demonstrates a greater than five-fold decrease in wall-clock runtime over
POPPY's previous implementation and describes opportunities for further
improvements in diffraction modeling performance.
(Comment: Presented at SPIE ASTI 2018, Austin, Texas. 11 pages, 6 figures)
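For context, a generic textbook angular-spectrum plane-to-plane propagator of the kind POPPY's module performs; this sketch is not POPPY's API, and its FFT calls are exactly the operations the study moves to the GPU:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a square 2-D complex field a distance z (all units meters).
    Generic textbook angular-spectrum method, not POPPY's implementation."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                   # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0.0)          # cut evanescent waves
    # These two FFTs dominate the runtime; replacing them with GPU
    # equivalents (e.g. cupy.fft) is the kind of optimization benchmarked.
    return np.fft.ifft2(np.fft.fft2(field) * H)
```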
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
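The abstract does not name its correlation metric, but the Modal Assurance Criterion (MAC) is the standard measure for this kind of test/analysis mode-shape comparison; a minimal sketch:

```python
import numpy as np

def mac(phi_test, phi_analysis):
    """Modal Assurance Criterion between two real mode-shape vectors.
    Values near 1 indicate well-correlated modes; near 0, uncorrelated."""
    num = np.abs(phi_test @ phi_analysis) ** 2
    den = (phi_test @ phi_test) * (phi_analysis @ phi_analysis)
    return num / den
```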
Computational Investigations on Polymerase Actions in Gene Transcription and Replication Combining Physical Modeling and Atomistic Simulations
Polymerases are protein enzymes that move along nucleic acid chains and
catalyze template-based polymerization reactions during gene transcription and
replication. The polymerases also substantially improve transcription or
replication fidelity through the non-equilibrium enzymatic cycles. We briefly
review computational efforts that have been made toward understanding
mechano-chemical coupling and fidelity control mechanisms of the polymerase
elongation. The polymerases are regarded as molecular information motors during
the elongation process, and understanding the full polymerase functional cycle
requires a spectrum of computational approaches spanning multiple time and
length scales. We set aside quantum-mechanics-based approaches to polymerase
catalysis, which have been surveyed extensively elsewhere, and address only the
statistical-physics modeling approach and the all-atom molecular dynamics
simulation approach. We organize this review around our own modeling and
simulation work on the single-subunit T7 RNA polymerase, and summarize
comparable studies on structurally similar DNA polymerases. For the
multi-subunit RNA polymerases that have been intensively studied in recent
years, we leave detailed discussion of the simulation achievements to other
computational chemistry surveys and introduce only recently published
representative studies, including our own preliminary structure-based modeling
of yeast RNA polymerase II. Finally, we briefly review kinetic modeling of
elongation pauses and backtracking. We emphasize the fluctuation and control
mechanisms of polymerase actions, highlight the non-equilibrium physical nature
of the system, and offer perspectives toward understanding replication and
transcription regulation from single-molecule details to the genome-wide scale.
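As a pointer to the kinetic-modeling viewpoint invoked above, the simplest picture of fidelity control sets the error fraction by competing incorporation rates; the sketch and rate values below are purely illustrative, not the paper's model:

```python
# Illustrative kinetic partitioning; the rate values below are made up.

def error_fraction(k_correct, k_incorrect):
    """Misincorporation fraction when correct and incorrect nucleotide
    incorporation compete with first-order rates."""
    return k_incorrect / (k_correct + k_incorrect)

def multi_step_error(fractions):
    """Independent sequential checkpoints (e.g. binding, then catalysis):
    small per-step error fractions approximately multiply."""
    out = 1.0
    for f in fractions:
        out *= f
    return out

print(error_fraction(1000.0, 0.1))     # single step: ~1e-4
print(multi_step_error([1e-2, 1e-2]))  # two steps: ~1e-4 combined
```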
