Physics-aware registration based auto-encoder for convection dominated PDEs
We design a physics-aware auto-encoder to specifically reduce the
dimensionality of solutions arising from convection-dominated nonlinear
physical systems. Although existing nonlinear manifold learning methods seem to
be compelling tools to reduce the dimensionality of data characterized by a
large Kolmogorov n-width, they typically lack a straightforward mapping from
the latent space to the high-dimensional physical space. Moreover, the realized
latent variables are often hard to interpret. Therefore, many of these methods
are often dismissed in reduced-order modeling of dynamical systems governed
by partial differential equations (PDEs). Accordingly, we propose an
auto-encoder type nonlinear dimensionality reduction algorithm. The
unsupervised learning problem trains a diffeomorphic spatio-temporal grid that
registers the output sequence of the PDEs on a non-uniform
parameter/time-varying grid, such that the Kolmogorov n-width of the mapped
data on the learned grid is minimized. We demonstrate the efficacy and
interpretability of our approach in separating convection/advection from
diffusion/scaling on various manufactured and physical systems.
Comment: 10 pages, 6 figures
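The benefit of registration can be seen in a toy setting. The sketch below is illustrative only and is not the paper's auto-encoder: it uses a pure translation (estimated by cross-correlation) instead of a learned diffeomorphic grid, and a single moving Gaussian pulse as the convection-dominated data. On the fixed grid the snapshot matrix has slowly decaying singular values (large Kolmogorov n-width); after registering each snapshot onto a shifted grid, the data become nearly rank-1.

```python
import numpy as np

# Illustrative sketch (NOT the paper's algorithm): a translating Gaussian
# pulse has slow singular-value decay on a fixed Eulerian grid, but after
# registering each snapshot onto a shifted grid the data are near rank-1.
# Shifts are estimated here by cross-correlation; the paper instead learns
# a diffeomorphic spatio-temporal grid with an auto-encoder.

x = np.linspace(0.0, 1.0, 256)
times = np.linspace(0.0, 0.5, 40)
snapshots = np.stack([np.exp(-200.0 * (x - 0.25 - t) ** 2) for t in times])

# Singular values on the fixed grid decay slowly (large n-width).
sv_fixed = np.linalg.svd(snapshots, compute_uv=False)

# Register: estimate each snapshot's shift relative to the first snapshot
# via cross-correlation, then interpolate back onto the reference grid.
ref = snapshots[0]
registered = []
for u in snapshots:
    corr = np.correlate(u, ref, mode="full")
    shift = (np.argmax(corr) - (len(x) - 1)) * (x[1] - x[0])
    registered.append(np.interp(x, x - shift, u))
registered = np.stack(registered)

# After registration the leading mode captures nearly all the energy.
sv_reg = np.linalg.svd(registered, compute_uv=False)
print(sv_fixed[1] / sv_fixed[0])  # order one: slow decay
print(sv_reg[1] / sv_reg[0])      # small: approximately rank-1
```

The residual second singular value after registration comes only from the shift being quantized to the grid spacing; a smooth learned grid, as in the paper, avoids this quantization.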
Learning Nonautonomous Systems via Dynamic Mode Decomposition
We present a data-driven learning approach for unknown nonautonomous
dynamical systems with time-dependent inputs based on dynamic mode
decomposition (DMD). To circumvent the difficulty of approximating the
time-dependent Koopman operators for nonautonomous systems, a modified system
derived from local parameterization of the external time-dependent inputs is
employed as an approximation to the original nonautonomous system. The modified
system comprises a sequence of local parametric systems, which can be well
approximated by a parametric surrogate model using our previously proposed
framework for dimension reduction and interpolation in parameter space (DRIPS).
The offline step of DRIPS relies on DMD to build a linear surrogate model,
endowed with reduced-order bases (ROBs), for the observables mapped from
training data. The online step then constructs a sequence of iterative
parametric surrogate models from interpolations on suitable manifolds, where
the target/test parameter points are specified by the local parameterization of
the test external time-dependent inputs. We present a number of numerical
examples to demonstrate the robustness of our method and compare its
performance with that of deep neural networks in the same settings.
Comment: arXiv admin note: text overlap with arXiv:2006.02392 by other authors
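A minimal DMD-style surrogate for a system with a time-dependent input can be sketched as follows. This is an assumption-laden toy, not the DRIPS framework: it fits the operators of a known linear discrete-time system x_{k+1} = A x_k + B u_k by least squares on snapshot pairs (DMD with control), with no dimension reduction or manifold interpolation, and the matrices `A_true`, `B_true` and the sinusoidal input are invented for illustration.

```python
import numpy as np

# Toy DMD-with-control sketch (not DRIPS): fit x_{k+1} ~ A x_k + B u_k
# from snapshots of a linear system driven by a time-dependent input,
# then replay the trajectory with the fitted surrogate.

A_true = np.array([[0.95, 0.10], [-0.10, 0.95]])  # stable rotation/decay
B_true = np.array([[0.0], [1.0]])                 # input enters 2nd state

steps = 200
u_in = np.sin(0.1 * np.arange(steps))  # external time-dependent input
x = np.zeros((2, steps + 1))
x[:, 0] = [1.0, 0.0]
for k in range(steps):
    x[:, k + 1] = A_true @ x[:, k] + B_true[:, 0] * u_in[k]

# Least-squares fit of the stacked operator [A B]: Xp ~ [A B] [X; U].
X, Xp, U = x[:, :-1], x[:, 1:], u_in[None, :]
G = Xp @ np.linalg.pinv(np.vstack([X, U]))
A_fit, B_fit = G[:, :2], G[:, 2:]

# Replay from the initial condition using only the fitted operators.
y = x[:, 0].copy()
for k in range(steps):
    y = A_fit @ y + B_fit[:, 0] * u_in[k]

print(np.linalg.norm(y - x[:, -1]))  # reconstruction error of the surrogate
```

Since the toy system is linear and noise-free, the least-squares fit recovers the operators essentially exactly; the abstract's setting is harder because the true system is unknown and nonautonomous, which is why the inputs are locally parameterized and the surrogates interpolated in parameter space.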