Multi-output multilevel best linear unbiased estimators via semidefinite programming
Multifidelity forward uncertainty quantification (UQ) problems often involve
multiple quantities of interest and heterogeneous models (e.g., different
grids, equations, dimensions, physics, surrogate and reduced-order models).
While computational efficiency is key in this context, multi-output strategies
in multilevel/multifidelity methods are either sub-optimal or non-existent. In
this paper we extend multilevel best linear unbiased estimators (MLBLUE) to
multi-output forward UQ problems and we present new semidefinite programming
formulations for their optimal setup. Not only do these formulations yield the
optimal number of samples required, but also the optimal selection of
low-fidelity models to use. While existing MLBLUE approaches are single-output
only and require a non-trivial nonlinear optimization procedure, the new
multi-output formulations can be solved reliably and efficiently. We
demonstrate the efficacy of the new methods and formulations in practical UQ
problems with model heterogeneity.
Comment: 22 pages, 5 figures, 3 tables
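The core BLUE building block behind such estimators can be illustrated in a few lines. This is a minimal sketch of the classical best linear unbiased combination of correlated unbiased estimators of a single quantity, not the paper's multi-output semidefinite programming formulation; the covariance matrix and sample estimates below are toy assumptions.

```python
import numpy as np

def blue_weights(C):
    """Best-linear-unbiased weights for combining unbiased estimators of the
    same quantity with covariance matrix C: minimize the variance of the
    weighted combination subject to the weights summing to one."""
    ones = np.ones(C.shape[0])
    Cinv_ones = np.linalg.solve(C, ones)
    return Cinv_ones / (ones @ Cinv_ones)

# Toy example: three correlated unbiased estimators of the same mean,
# e.g. produced by models of different fidelity.
C = np.array([[1.0, 0.8, 0.5],
              [0.8, 1.0, 0.6],
              [0.5, 0.6, 2.0]])
w = blue_weights(C)
estimates = np.array([1.02, 0.98, 1.10])
combined = w @ estimates          # BLUE of the common mean
var = w @ C @ w                   # its variance, never above the best single one
```

The multi-output extension in the paper replaces this closed-form weight computation with a semidefinite program that also selects which low-fidelity models to sample and how often.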
Multifidelity Modeling for Physics-Informed Neural Networks (PINNs)
Multifidelity simulation methodologies are often used in an attempt to
judiciously combine low-fidelity and high-fidelity simulation results in an
accuracy-increasing, cost-saving way. Candidates for this approach are
simulation methodologies in which differences in fidelity come with
significant differences in computational cost. Physics-informed Neural
Networks (PINNs) are candidates for these types of approaches due to the
significant difference in training times required when different fidelities
(expressed in terms of architecture width and depth as well as optimization
criteria) are employed. In this paper, we propose a particular multifidelity
approach applied to PINNs that exploits low-rank structure. We demonstrate that
width, depth, and optimization criteria can be used as parameters related to
model fidelity, and show numerical justification of cost differences in
training due to fidelity parameter choices. We test our multifidelity scheme on
various canonical forward PDE models that have been presented in the emerging
PINNs literature.
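The general pattern of correcting a cheap model with a few expensive evaluations can be sketched without neural networks at all. The following is a generic linear multifidelity correction (scaling plus additive discrepancy fitted by least squares), not the low-rank PINN scheme of the paper; both model functions are invented toy stand-ins.

```python
import numpy as np

# Hypothetical low- and high-fidelity models of one scalar output.
def f_lf(x):  # cheap, biased low-fidelity model
    return np.sin(x) + 0.3

def f_hf(x):  # expensive high-fidelity model (only a few samples affordable)
    return 1.2 * np.sin(x) + 0.05 * x

# Fit y_hf(x) ~ rho * y_lf(x) + a + b*x from a handful of HF samples.
x_hf = np.linspace(0.0, 3.0, 6)
A = np.column_stack([f_lf(x_hf), np.ones_like(x_hf), x_hf])
coef, *_ = np.linalg.lstsq(A, f_hf(x_hf), rcond=None)
rho, a, b = coef

def f_mf(x):  # multifidelity surrogate: corrected low-fidelity model
    return rho * f_lf(x) + a + b * x

x_test = np.linspace(0.0, 3.0, 50)
err_lf = np.max(np.abs(f_lf(x_test) - f_hf(x_test)))
err_mf = np.max(np.abs(f_mf(x_test) - f_hf(x_test)))
```

Here the correction space happens to contain the true discrepancy, so the multifidelity error collapses to numerical precision; in the PINN setting the low-fidelity network output plays the role of `f_lf` and the correction is itself learned.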
Multifidelity domain-aware learning for the design of re-entry vehicles
The multidisciplinary design optimization (MDO) of re-entry vehicles presents many challenges associated with the plurality
of the domains that characterize the design problem and the multi-physics interactions. Aerodynamic and thermodynamic
phenomena are strongly coupled and relate to the heat loads that affect the vehicle along the re-entry trajectory, which drive
the design of the thermal protection system (TPS). The preliminary design and optimization of re-entry vehicles would benefit
from accurate high-fidelity aerothermodynamic analyses, which usually require expensive computational fluid dynamics simulations.
We propose an original formulation for multifidelity active learning that considers both the information extracted from
data and domain-specific knowledge. Our scheme is developed for the design of re-entry vehicles and is demonstrated for
the case of an Orion-like capsule entering the Earth's atmosphere. The design process aims to minimize the mass of propellant
burned during the entry maneuver, the mass of the TPS, and the temperature experienced by the TPS along the re-entry.
The results demonstrate that our multifidelity strategy achieves a significant improvement of the design solution over the
baseline. In particular, the outcomes of our method are superior to the design obtained through a single-fidelity
framework, as a result of the principled selection of a limited number of high-fidelity evaluations.
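The selection step at the heart of multifidelity active learning can be illustrated with a cost-aware acquisition rule: pick the candidate point and fidelity that promise the most uncertainty reduction per unit cost. This is a generic sketch, not the paper's domain-aware formulation; the costs and the stub uncertainty function are invented for illustration.

```python
import numpy as np

# Hypothetical per-fidelity evaluation costs (e.g. CFD vs. engineering model).
costs = {"low": 1.0, "high": 50.0}
candidates = np.linspace(0.0, 1.0, 11)   # normalized design variable

def predicted_std(x, fidelity):
    # Stub surrogate uncertainty: in practice this would come from a
    # multifidelity surrogate model; here it peaks mid-domain, and a
    # high-fidelity sample would reduce it more.
    base = np.exp(-((x - 0.5) ** 2) / 0.02)
    return base * (1.0 if fidelity == "low" else 1.6)

# Cost-aware acquisition: expected uncertainty reduction per unit cost.
best = max(
    ((x, f) for x in candidates for f in costs),
    key=lambda xf: predicted_std(*xf) / costs[xf[1]],
)
```

With these toy numbers the rule spends the budget on cheap low-fidelity samples where uncertainty is high, reserving high-fidelity runs for when their larger information gain outweighs their fifty-fold cost.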
Multi-Fidelity Covariance Estimation in the Log-Euclidean Geometry
We introduce a multi-fidelity estimator of covariance matrices that employs
the log-Euclidean geometry of the symmetric positive-definite manifold. The
estimator fuses samples from a hierarchy of data sources of differing
fidelities and costs for variance reduction while guaranteeing definiteness, in
contrast with previous approaches. The new estimator makes covariance
estimation tractable in applications where simulation or data collection is
expensive; to that end, we develop an optimal sample allocation scheme that
minimizes the mean-squared error of the estimator given a fixed budget.
Guaranteed definiteness is crucial to metric learning, data assimilation, and
other downstream tasks. Evaluations of our approach using data from physical
applications (heat conduction, fluid dynamics) demonstrate more accurate metric
learning and speedups of more than one order of magnitude compared to
benchmarks.
Comment: To appear at the International Conference on Machine Learning (ICML) 202
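The definiteness guarantee comes from working in the log-Euclidean metric: corrections are applied to matrix logarithms, and the matrix exponential maps the result back onto the symmetric positive-definite manifold. The sketch below shows a control-variate-style fusion in that geometry with invented toy covariances; it illustrates the geometric idea only, not the paper's estimator or its optimal sample allocation.

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_fuse(cov_hf_small, cov_lf_small, cov_lf_large, alpha=1.0):
    """Fuse a small-sample high-fidelity covariance with the discrepancy
    between large- and small-sample low-fidelity covariances, working in
    log-Euclidean coordinates so the result is always positive definite."""
    L = (logm(cov_hf_small)
         + alpha * (logm(cov_lf_large) - logm(cov_lf_small))).real
    L = 0.5 * (L + L.T)          # symmetrize against round-off
    return expm(L)               # expm of a symmetric matrix is SPD

# Toy SPD matrices standing in for sample covariances.
cov_hf_small = np.array([[2.1, 0.4], [0.4, 1.2]])
cov_lf_small = np.array([[1.9, 0.5], [0.5, 1.1]])
cov_lf_large = np.array([[2.0, 0.45], [0.45, 1.15]])

fused = log_euclidean_fuse(cov_hf_small, cov_lf_small, cov_lf_large)
eigvals = np.linalg.eigvalsh(fused)   # all positive by construction
```

A naive linear combination of covariance estimates can produce indefinite matrices; the exponential map rules this out regardless of the correction weight `alpha`.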
Context-aware model hierarchies for the quantification of higher-dimensional uncertainty
We formulate four novel context-aware algorithms based on model hierarchies, aimed at enabling efficient quantification of uncertainty in complex, computationally expensive problems such as fluid-structure interaction and plasma microinstability simulations. Our results show that our algorithms are more efficient than standard approaches and that they are able to cope with the challenges of quantifying uncertainty in higher-dimensional, complex problems.
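The baseline that such model-hierarchy methods improve on is the multilevel Monte Carlo telescoping estimator. The sketch below shows that standard idea on an invented toy hierarchy whose per-level bias halves at each level; it is not one of the four context-aware algorithms, only the starting point they build on.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical model hierarchy: level l approximates E[Z] with bias 2**-l,
# standing in for e.g. successively refined simulation grids.
def model(z, level):
    return z + 2.0 ** (-level)

def mlmc_estimate(n_per_level):
    """Telescoping-sum estimator
        E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    with each correction term estimated from its own coupled sample set."""
    est = 0.0
    for level, n in enumerate(n_per_level):
        z = rng.standard_normal(n)    # shared inputs couple adjacent levels
        if level == 0:
            est += np.mean(model(z, 0))
        else:
            est += np.mean(model(z, level) - model(z, level - 1))
    return est

# Many cheap coarse samples, few expensive fine ones.
estimate = mlmc_estimate([10000, 2000, 400])   # targets E[P_2] = 0.25
```

Because the correction terms have small variance, most samples can be taken on the cheap coarse level; context-aware approaches additionally exploit problem-specific structure when choosing the hierarchy.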
Multi-Fidelity Gaussian Process Emulation And Its Application In The Study Of Tsunami Risk Modelling
Investigating uncertainties in computer simulations can be prohibitive in terms of computational cost, since the simulator needs to be run over a large number of input values. Building a statistical surrogate model of the simulator, using a small design of experiments, greatly alleviates the computational burden of such investigations. Nevertheless, this can still exceed the computational budget of many studies. We present a novel method that combines both approaches: the multilevel adaptive sequential design of computer experiments (MLASCE) in the framework of Gaussian process (GP) emulators. MLASCE is based on two major approaches: efficient design of experiments, such as sequential designs, and the combination of training data of different degrees of sophistication in a so-called multi-fidelity method, or multilevel method when these fidelities are ordered, typically by increasing resolution. This dual strategy allows us to efficiently allocate limited computational resources over simulations of different levels of fidelity and build the GP emulator. The allocation of computational resources is shown to be the solution of a simple optimization problem in a special case where we theoretically prove the validity of our approach. MLASCE is compared with other existing models of multi-fidelity Gaussian process emulation. Gains of orders of magnitude in accuracy for medium-size computing budgets are demonstrated in numerical examples. MLASCE should be useful in computer experiments of natural disaster risk, and more than a mere tool for calculating the scale of natural disasters. To show that MLASCE meets this expectation, we propose the first end-to-end example of a risk model for household asset loss due to a possible future tsunami.
As a follow-up to this proposed framework, MLASCE provides a reliable statistical surrogate for realistic tsunami risk assessment under restricted computational resources, and provides accurate and instant predictions of future tsunami risks.
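The sequential-design ingredient of such schemes can be sketched with a one-dimensional GP: repeatedly add the candidate input where the emulator's predictive variance is largest. This is a generic variance-based sequential design with an assumed RBF kernel and made-up length scale, not MLASCE's multilevel allocation.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def posterior_var(x_train, x_cand, noise=1e-8):
    """GP predictive variance at candidates given (noise-free) training inputs:
    var(x) = k(x,x) - k(x,X) K^{-1} k(X,x), with unit prior variance."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_cand, x_train)
    v = np.linalg.solve(K, Ks.T)
    return 1.0 - np.sum(Ks * v.T, axis=1)

# Sequential design: start from the domain endpoints, then greedily add
# the candidate with the largest predictive variance.
x_train = np.array([0.0, 1.0])
candidates = np.linspace(0.0, 1.0, 101)
for _ in range(3):
    var = posterior_var(x_train, candidates)
    x_train = np.append(x_train, candidates[np.argmax(var)])
```

The first point chosen is the domain midpoint, the point farthest from both training inputs; MLASCE extends this greedy loop by also deciding at which fidelity level each new run should be spent.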