Multi-fidelity modeling with different input domain definitions using Deep Gaussian Processes
Multi-fidelity approaches combine models built on a scarce but accurate
data-set (high-fidelity) with models built on a large but approximate one
(low-fidelity) in order to improve prediction accuracy. Gaussian Processes
(GPs) are a popular approach for capturing the correlations between these
fidelity levels. Deep Gaussian Processes (DGPs), which are functional
compositions of GPs, have also been adapted to the multi-fidelity setting
through the Multi-Fidelity Deep Gaussian Process model (MF-DGP). This model
is more expressive than GPs because it captures non-linear correlations
between fidelities within a Bayesian framework. These multi-fidelity
methods, however, consider only the case where the inputs of the different
fidelity models are defined over the same domain of definition (e.g., same
variables, same dimensions). Yet, due to simplifications in the low-fidelity
model, some variables may be omitted or a different parametrization may be
used compared to the high-fidelity model. In this paper, Deep Gaussian
Processes for multi-fidelity (MF-DGP) are extended to the case where a
different parametrization is used for each fidelity. The performance of the
proposed multi-fidelity modeling technique is assessed on analytical test
cases and on structural and aerodynamic real-world physical problems.
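As background for the multi-fidelity idea that MF-DGP generalizes (not the MF-DGP model itself), a minimal two-fidelity GP surrogate in the spirit of the classical autoregressive scheme can be sketched with scikit-learn: a GP fitted on plentiful low-fidelity data feeds its prediction as an extra input to a second GP trained on the scarce high-fidelity data. The test functions, sample sizes, and kernels below are all assumed for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical fidelities: the low-fidelity model is a cheap, biased
# approximation of the expensive high-fidelity one.
def f_high(x):
    return np.sin(8 * x) * x

def f_low(x):
    return 0.8 * f_high(x) + 0.3 * (x - 0.5)

X_lo = rng.uniform(0, 1, (50, 1))   # plentiful low-fidelity samples
X_hi = rng.uniform(0, 1, (8, 1))    # scarce high-fidelity samples

# Step 1: fit a GP to the low-fidelity data alone.
gp_lo = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_lo, f_low(X_lo).ravel())

# Step 2: a second GP learns the mapping (x, f_lo(x)) -> f_hi(x),
# i.e. the correlation between fidelities.
feat_hi = np.hstack([X_hi, gp_lo.predict(X_hi).reshape(-1, 1)])
gp_hi = GaussianProcessRegressor(kernel=RBF(0.3)).fit(feat_hi, f_high(X_hi).ravel())

def predict_mf(X):
    feats = np.hstack([X, gp_lo.predict(X).reshape(-1, 1)])
    return gp_hi.predict(feats)

X_test = np.linspace(0, 1, 100).reshape(-1, 1)
err = np.mean((predict_mf(X_test) - f_high(X_test).ravel()) ** 2)
```

MF-DGP replaces the second-stage GP's linear feature passing with a full Bayesian composition of GPs, which is what allows non-linear correlations between fidelities.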
Disentangled Multi-Fidelity Deep Bayesian Active Learning
To balance quality and cost, many areas of science and engineering run
simulations at multiple levels of sophistication. Multi-fidelity active
learning aims to learn a direct mapping from input parameters to simulation
outputs at the highest fidelity by actively acquiring data from multiple
fidelity levels. However, existing approaches based on Gaussian processes
scale poorly to high-dimensional data. Deep learning-based methods often
impose a hierarchical structure on hidden representations, which only
supports passing information from low fidelity to high fidelity and can
therefore propagate errors from low-fidelity representations to
high-fidelity ones. We propose a novel framework, Disentangled
Multi-fidelity Deep Bayesian Active Learning (D-MFDAL), which learns
surrogate models conditioned on the distribution of functions at multiple
fidelities. On benchmark tasks of learning deep surrogates of partial
differential equations, including the heat equation, Poisson's equation, and
fluid simulations, our approach significantly outperforms the state of the
art in prediction accuracy and sample efficiency. Our code is available at
https://github.com/Rose-STL-Lab/Multi-Fidelity-Deep-Active-Learning.
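D-MFDAL itself learns disentangled latent representations; as a much simpler illustration of the acquisition side of multi-fidelity active learning, a variance-per-cost heuristic with one independent GP per fidelity can be sketched. All functions, costs, and kernels below are assumed, and the independent-GP simplification is precisely what the paper improves upon.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
costs = {0: 1.0, 1: 10.0}  # hypothetical query cost per fidelity level

def f(x, fid):
    base = np.sin(6 * x).ravel()
    return base if fid == 1 else base + 0.2 * np.cos(3 * x).ravel()

# One independent GP per fidelity (a deliberate simplification; D-MFDAL
# instead shares information across fidelities through latent functions).
X = {fid: rng.uniform(0, 1, (3, 1)) for fid in costs}
y = {fid: f(X[fid], fid) for fid in costs}
gps = {fid: GaussianProcessRegressor(RBF(0.3), alpha=1e-6).fit(X[fid], y[fid])
       for fid in costs}

pool = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(10):  # acquisition loop: 10 queries total
    best = None
    for fid, gp in gps.items():
        _, std = gp.predict(pool, return_std=True)
        gain = std / costs[fid]          # predictive uncertainty per unit cost
        i = int(np.argmax(gain))
        if best is None or gain[i] > best[0]:
            best = (gain[i], fid, i)
    _, fid, i = best
    X[fid] = np.vstack([X[fid], pool[i:i + 1]])
    y[fid] = np.concatenate([y[fid], f(pool[i:i + 1], fid)])
    gps[fid] = GaussianProcessRegressor(RBF(0.3), alpha=1e-6).fit(X[fid], y[fid])
```

With a cost ratio of 10:1, the heuristic tends to spend most of its budget on cheap low-fidelity queries, resorting to high-fidelity evaluations only where they are worth the price.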
A Comprehensive Review of Digital Twin -- Part 1: Modeling and Twinning Enabling Technologies
As an emerging technology in the era of Industry 4.0, digital twin is gaining
unprecedented attention because of its promise to further optimize process
design, quality control, health monitoring, decision and policy making, and
more, by comprehensively modeling the physical world as a group of
interconnected digital models. In a two-part series of papers, we examine the
fundamental role of different modeling techniques, twinning enabling
technologies, and uncertainty quantification and optimization methods commonly
used in digital twins. This first paper presents a thorough literature review
of digital twin trends across many disciplines currently pursuing this area of
research. Then, digital twin modeling and twinning enabling technologies are
further analyzed by classifying them into two main categories,
physical-to-virtual and virtual-to-physical, based on the direction in which
data flows. Finally, this paper provides perspectives on the trajectory of
digital twin technology over the next decade, and introduces a few emerging
areas of research which will likely be of great use in future digital twin
research. In part two of this review, the role of uncertainty quantification
and optimization is discussed, a battery digital twin is demonstrated, and
further perspectives on the future of digital twin technology are shared.
Bayesian Quadrature for Multiple Related Integrals
Bayesian probabilistic numerical methods are a set of tools providing
posterior distributions on the output of numerical methods. The use of these
methods is usually motivated by the fact that they can represent our
uncertainty due to incomplete/finite information about the continuous
mathematical problem being approximated. In this paper, we demonstrate that
this paradigm can provide additional advantages, such as the possibility of
transferring information between several numerical methods. This allows users
to represent uncertainty in a more faithful manner and, as a by-product,
provide increased numerical efficiency. We propose the first such numerical
method by extending the well-known Bayesian quadrature algorithm to the case
where we are interested in computing the integral of several related functions.
We then prove convergence rates for the method in the well-specified and
misspecified cases, and demonstrate its efficiency in the context of
multi-fidelity models for complex engineering systems and a problem of global
illumination in computer graphics.

Comment: Proceedings of the 35th International Conference on Machine
Learning (ICML), PMLR 80:5369-5378, 2018
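The paper's contribution is Bayesian quadrature for several related integrals; a minimal sketch of standard single-integral Bayesian quadrature (RBF kernel, uniform measure on [0, 1]; the integrand and hyperparameters below are assumed, not tuned) shows the building block being extended.

```python
import numpy as np
from scipy.special import erf

# Vanilla Bayesian quadrature for I = int_0^1 f(x) dx with a fixed RBF kernel.
ell, s2 = 0.2, 1.0  # kernel length-scale and variance (assumed)

def k(a, b):
    return s2 * np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def kernel_mean(x):
    # z_i = int_0^1 k(t, x_i) dt, closed form for the RBF kernel via erf.
    return s2 * ell * np.sqrt(np.pi / 2) * (
        erf((1 - x) / (ell * np.sqrt(2))) + erf(x / (ell * np.sqrt(2))))

f = lambda x: np.sin(3 * x) + x          # integrand with a known true integral
X = np.linspace(0, 1, 12)
y = f(X)

K = k(X, X) + 1e-8 * np.eye(len(X))      # jitter for numerical stability
z = kernel_mean(X)
w = np.linalg.solve(K, z)                # BQ weights

# Double integral of the kernel over [0,1]^2, also closed form for RBF.
ZZ = s2 * (2 * ell * np.sqrt(np.pi / 2) * erf(1 / (ell * np.sqrt(2)))
           - 2 * ell ** 2 * (1 - np.exp(-1 / (2 * ell ** 2))))

I_mean = w @ y                           # posterior mean of the integral
I_var = ZZ - z @ w                       # posterior variance of the integral

true_I = (1 - np.cos(3)) / 3 + 0.5       # analytic value for comparison
```

The posterior variance `I_var` is the "uncertainty due to finite information" the abstract refers to; the multi-output extension in the paper shares information across several related integrands so that each integral's posterior tightens using evaluations of the others.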