In this study, we develop a novel multi-fidelity deep learning approach that
transforms low-fidelity solution maps into high-fidelity ones by incorporating
parametric space information into a standard autoencoder architecture. This
method's integration of parametric space information significantly reduces the
amount of training data needed to predict high-fidelity solutions accurately
from low-fidelity ones. As a test case, we examine a two-dimensional steady-state
heat transfer problem within a highly heterogeneous material microstructure.
The heat conductivity coefficients of the two constituent materials are coarsened
from a 101 x 101 grid to smaller grids. We then solve the boundary value
problem on the coarsest grid using a pre-trained physics-informed neural
operator network known as Finite Operator Learning (FOL). The resulting
low-fidelity solution is subsequently upscaled back to a 101 x 101 grid using a
newly designed enhanced autoencoder. The novelty of this enhanced autoencoder
lies in concatenating heat conductivity maps of different resolutions to the
decoder at distinct stages. Hence, the developed algorithm is named the
microstructure-embedded autoencoder (MEA). We compare the
MEA outcomes with those from finite element methods, the standard U-Net, and
various other upscaling techniques, including interpolation functions and
feedforward neural networks (FFNN). Our analysis shows that the MEA outperforms
these methods in both computational efficiency and accuracy on the test cases. As
a result, the MEA serves as a potential supplement to neural operator networks,
effectively upscaling low-fidelity solutions to high fidelity while preserving
critical details that traditional upscaling methods, such as interpolation,
often lose, particularly at sharp material interfaces.
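To make the decoder-side concatenation described above concrete, the following is a minimal sketch written in PyTorch. It is not the authors' implementation: the class name MEASketch, the layer widths, the number of decoder stages, and the grid sizes (an assumed 11 x 11 coarse grid upscaled to 101 x 101) are illustrative assumptions; only the idea of re-injecting the conductivity map at each decoder resolution follows the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MEASketch(nn.Module):
    """Illustrative enhanced autoencoder: the decoder re-injects the
    conductivity map, resampled to each intermediate resolution."""

    def __init__(self, channels: int = 16):
        super().__init__()
        # Encoder: compress the coarse (low-fidelity) solution field.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU()
        )
        # Decoder stages: each takes its features concatenated with the
        # conductivity map at the current resolution (hence channels + 1).
        self.dec1 = nn.Sequential(
            nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1), nn.ReLU()
        )
        self.dec2 = nn.Sequential(
            nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1), nn.ReLU()
        )
        self.head = nn.Conv2d(channels + 1, 1, kernel_size=3, padding=1)

    def forward(self, coarse_solution, conductivity):
        # coarse_solution: (B, 1, 11, 11) low-fidelity field (grid size assumed)
        # conductivity:    (B, 1, 101, 101) full-resolution conductivity map
        x = self.encoder(coarse_solution)
        # Upscale in steps; at each step concatenate the conductivity map
        # resampled to the current resolution before the convolution.
        for stage, size in zip((self.dec1, self.dec2), ((51, 51), (101, 101))):
            x = F.interpolate(x, size=size, mode="bilinear", align_corners=False)
            k = F.interpolate(conductivity, size=size, mode="bilinear",
                              align_corners=False)
            x = stage(torch.cat([x, k], dim=1))
        # Final concatenation of the full-resolution map before the output head.
        return self.head(torch.cat([x, conductivity], dim=1))


# Usage: upscale a batch of coarse solutions to the 101 x 101 grid.
model = MEASketch()
coarse = torch.rand(4, 1, 11, 11)                   # low-fidelity solutions
kappa = (torch.rand(4, 1, 101, 101) > 0.5).float()  # two-phase conductivity map
fine = model(coarse, kappa)                         # -> (4, 1, 101, 101)
```

The repeated concatenation is what distinguishes this sketch from a plain autoencoder: the decoder does not have to infer the microstructure from the latent code, because the conductivity map is supplied explicitly at every resolution, which is how sharp material interfaces can be retained in the upscaled solution.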