Multi-Information Source Fusion and Optimization to Realize ICME: Application to Dual Phase Materials
Integrated Computational Materials Engineering (ICME) calls for the
integration of computational tools into the materials and parts development
cycle, while the Materials Genome Initiative (MGI) calls for the acceleration
of the materials development cycle through the combination of experiments,
simulation, and data. As they stand, neither ICME nor MGI prescribes how to
achieve the necessary tool integration or how to efficiently exploit the
computational tools, in combination with experiments, to accelerate the
development of new materials and materials systems. This paper addresses the
first issue by putting forward a framework for the fusion of information that
exploits correlations among sources/models and between the sources and 'ground
truth'. The second issue is addressed through a multi-information source
optimization framework that identifies, given current knowledge, the next best
information source to query and where in the input space to query it via a
novel value-gradient policy. The querying decision takes into account the
ability to learn correlations between information sources, the resource cost of
querying an information source, and what a query is expected to provide in
terms of improvement over the current state. The framework is demonstrated on
the optimization of a dual-phase steel to maximize its strength-normalized
strain hardening rate. The ground truth is represented by a
microstructure-based finite element model while three low fidelity information
sources---i.e. reduced order models---based on different homogenization
assumptions---isostrain, isostress and isowork---are used to efficiently and
optimally query the materials design space.
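A minimal sketch of this kind of cost-aware, multi-source querying loop, assuming one scikit-learn Gaussian process per information source and a simple inverse-variance fusion of their predictions; the paper's correlation-exploiting fusion and value-gradient policy are not reproduced here, and every model, cost, and constant in the code is an illustrative placeholder.

```python
# Sketch only: toy sources, simplified fusion, and a cost-weighted expected
# improvement as a stand-in for the paper's value-gradient policy.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy stand-ins for the reduced-order models and the finite element "ground truth".
def isostrain(x):    return np.sin(3 * x) + 0.30 * x
def isostress(x):    return np.sin(3 * x) - 0.10 * x
def ground_truth(x): return np.sin(3 * x)

sources = {"isostrain": (isostrain, 1.0),
           "isostress": (isostress, 1.0),
           "ground_truth": (ground_truth, 25.0)}   # (model, query cost)

# One GP per information source, fit on a handful of initial evaluations.
gps = {}
for name, (f, _) in sources.items():
    X = rng.uniform(0.0, 2.0, 4).reshape(-1, 1)
    gps[name] = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, f(X).ravel())

# Fuse the per-source predictions by inverse-variance weighting (simplified fusion).
Xc = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
preds = {name: gp.predict(Xc, return_std=True) for name, gp in gps.items()}
w = np.array([1.0 / (s**2 + 1e-9) for _, s in preds.values()])
mu_all = np.array([m for m, _ in preds.values()])
fused_mu = (w * mu_all).sum(axis=0) / w.sum(axis=0)
best = fused_mu.max()          # current best fused estimate (maximization)

# Expected improvement of each source, divided by its query cost: a stand-in
# for the value-gradient policy that also weighs learning and resource cost.
def expected_improvement(mu, sigma, best):
    z = (mu - best) / (sigma + 1e-9)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

scores = {name: expected_improvement(*preds[name], best) / cost
          for name, (_, cost) in sources.items()}
name = max(scores, key=lambda n: scores[n].max())
x_next = Xc[scores[name].argmax(), 0]
print(f"next query: {name} at x = {x_next:.3f}")
```

In the paper the ground truth is a microstructure-based finite element model and the cheap sources are the isostrain, isostress, and isowork reduced-order models; the toy functions above only mimic that structure.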
Sparsifying to optimize over multiple information sources: an augmented Gaussian process based algorithm
Optimizing a black-box, expensive, and multi-extremal function, given multiple approximations, is a challenging task known as multi-information source optimization (MISO), where each source has a different cost and the level of approximation (aka fidelity) of each source can change over the search space. While most current approaches fuse the Gaussian processes (GPs) modelling each source, we propose to use GP sparsification to select only "reliable" function evaluations performed over all the sources. These selected evaluations are used to create an augmented Gaussian process (AGP), so named because the evaluations on the most expensive source are augmented with the reliable evaluations from less expensive sources. A new acquisition function, based on a confidence bound, is also proposed; it accounts for both the cost of the next source to query and the location-dependent approximation quality of that source, which is estimated through a model discrepancy measure and the prediction uncertainty of the GPs. MISO-AGP and the MISO-fused GP counterpart are compared on two test problems and on the hyperparameter optimization of a machine learning classifier on a large dataset.
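A minimal sketch of the augmented-GP idea under strong simplifications: cheap-source evaluations are kept only if they are statistically consistent with a GP fit to the expensive source, the retained points augment the expensive data, and a confidence-bound score penalised by cost and local discrepancy picks the next query. The reliability test and acquisition below are stand-ins, not the MISO-AGP algorithm itself, and all functions and costs are illustrative.

```python
# Sketch only: a simple consistency check replaces the paper's GP sparsification.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def expensive(x): return np.sin(3 * x)                          # high-fidelity source (toy)
def cheap(x):     return np.sin(3 * x) + 0.2 * np.cos(7 * x)    # approximation (toy)

# A few expensive evaluations and many cheap ones.
X_hi = rng.uniform(0, 2, 5).reshape(-1, 1);  y_hi = expensive(X_hi).ravel()
X_lo = rng.uniform(0, 2, 30).reshape(-1, 1); y_lo = cheap(X_lo).ravel()

# GP on the expensive source alone, used to judge reliability of cheap points.
gp_hi = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_hi, y_hi)
mu, sigma = gp_hi.predict(X_lo, return_std=True)
reliable = np.abs(y_lo - mu) <= 2.0 * sigma      # keep points consistent with gp_hi

# Augmented GP: expensive data plus the reliable cheap evaluations.
X_aug = np.vstack([X_hi, X_lo[reliable]])
y_aug = np.concatenate([y_hi, y_lo[reliable]])
agp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True, alpha=1e-4).fit(X_aug, y_aug)

# Confidence-bound acquisition (maximization), penalised by source cost and by
# the local discrepancy between the cheap source and the AGP prediction.
Xc = np.linspace(0, 2, 400).reshape(-1, 1)
mu_a, sig_a = agp.predict(Xc, return_std=True)
discrepancy = np.abs(cheap(Xc).ravel() - mu_a)
cost = {"cheap": 1.0, "expensive": 15.0}
ucb_cheap = (mu_a + 2.0 * sig_a - discrepancy) / cost["cheap"]
ucb_exp = (mu_a + 2.0 * sig_a) / cost["expensive"]
if ucb_cheap.max() >= ucb_exp.max():
    source, idx = "cheap", int(ucb_cheap.argmax())
else:
    source, idx = "expensive", int(ucb_exp.argmax())
print(f"next query: {source} source at x = {Xc[idx, 0]:.3f}")
```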
mfEGRA: Multifidelity Efficient Global Reliability Analysis through Active Learning for Failure Boundary Location
This paper develops mfEGRA, a multifidelity active learning method using
data-driven adaptively refined surrogates for failure boundary location in
reliability analysis. This work addresses the issue of prohibitive cost of
reliability analysis using Monte Carlo sampling for expensive-to-evaluate
high-fidelity models by using cheaper-to-evaluate approximations of the
high-fidelity model. The method builds on the Efficient Global Reliability
Analysis (EGRA) method, which is a surrogate-based method that uses adaptive
sampling for refining Gaussian process surrogates for failure boundary location
using a single-fidelity model. Our method introduces a two-stage adaptive
sampling criterion that uses a multifidelity Gaussian process surrogate to
leverage multiple information sources with different fidelities. The method
combines the expected feasibility criterion from EGRA with a one-step lookahead
information gain to refine the surrogate around the failure boundary. The
computational savings from mfEGRA depend on the discrepancy between the
different models and on the relative cost of evaluating the lower-fidelity
models compared to the high-fidelity model. We show that accurate estimation of
reliability using mfEGRA leads to computational savings of 46% for an
analytic multimodal test problem and 24% for a three-dimensional acoustic horn
problem, when compared to single-fidelity EGRA. We also show the effect of
using a priori drawn Monte Carlo samples in the implementation for the acoustic
horn problem, where mfEGRA leads to computational savings of 45% for the
three-dimensional case and 48% for a rarer-event four-dimensional case, as
compared to single-fidelity EGRA.
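A minimal sketch of the failure-boundary refinement step, using the expected feasibility function from single-fidelity EGRA and a crude cost-weighted stand-in for the fidelity choice; mfEGRA's multifidelity GP surrogate and one-step lookahead information gain are not reproduced, and the limit-state functions and costs are illustrative.

```python
# Sketch only: single-fidelity GP, EFF for locating the g = 0 boundary, and a
# heuristic cost-weighted source choice in place of the lookahead information gain.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def g_hi(x): return x**2 - 1.5                         # high-fidelity limit state (failure: g <= 0)
def g_lo(x): return x**2 - 1.5 + 0.1 * np.sin(5 * x)   # cheap approximation (toy)

X = rng.uniform(-2, 2, 8).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, g_hi(X).ravel())

def expected_feasibility(mu, sigma, z=0.0):
    """Expected feasibility function (Bichon et al.) with eps = 2*sigma around z."""
    eps = 2.0 * sigma
    zm, zp = z - eps, z + eps
    t, tm, tp = (z - mu) / sigma, (zm - mu) / sigma, (zp - mu) / sigma
    return ((mu - z) * (2 * norm.cdf(t) - norm.cdf(tm) - norm.cdf(tp))
            - sigma * (2 * norm.pdf(t) - norm.pdf(tm) - norm.pdf(tp))
            + eps * (norm.cdf(tp) - norm.cdf(tm)))

# Stage 1: pick the location where the EFF (refinement value near g = 0) peaks.
Xc = np.linspace(-2, 2, 500).reshape(-1, 1)
mu, sigma = gp.predict(Xc, return_std=True)
eff = expected_feasibility(mu, np.maximum(sigma, 1e-9))
x_star = Xc[eff.argmax(), 0]

# Stage 2 (stand-in): pick the information source by uncertainty reduced per
# unit cost; mfEGRA uses a one-step lookahead information-gain criterion instead.
cost = {"g_lo": 1.0, "g_hi": 10.0}
sigma_star = sigma[eff.argmax()]
score = {"g_lo": 0.5 * sigma_star / cost["g_lo"],   # assume the cheap model removes
         "g_hi": sigma_star / cost["g_hi"]}          # about half the uncertainty (assumption)
src = max(score, key=score.get)
print(f"refine boundary at x = {x_star:.3f} using {src}")
```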