Reduced order modeling of fluid flows: Machine learning, Kolmogorov barrier, closure modeling, and partitioning
In this paper, we put forth a long short-term memory (LSTM) nudging framework
for the enhancement of reduced order models (ROMs) of fluid flows utilizing
noisy measurements. We build on the fact that in a realistic application, there
are uncertainties in initial conditions, boundary conditions, model parameters,
and/or field measurements. Moreover, conventional nonlinear ROMs based on
Galerkin projection (GROMs) suffer from imperfection and solution instabilities
due to the modal truncation, especially for advection-dominated flows with slow
decay in the Kolmogorov width. In the presented LSTM-Nudge approach, we fuse
forecasts from a combination of imperfect GROM and uncertain state estimates,
with sparse Eulerian sensor measurements to provide more reliable predictions
in a dynamical data assimilation framework. We illustrate the idea with the
viscous Burgers problem as a benchmark test bed with quadratic nonlinearity
and Laplacian dissipation. We investigate the effects of measurement noise and
state-estimate uncertainty on the performance of the LSTM-Nudge approach. We
also demonstrate that it can handle different levels of temporal
and spatial measurement sparsity. This first step in our assessment of the
proposed model shows that LSTM nudging could represent a viable real-time
predictive tool in emerging digital twin systems.
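The nudging step at the heart of this framework can be sketched in a few lines: a forecast of the ROM modal coefficients is corrected toward sparse sensor measurements through an innovation term. In the paper that correction is produced by a trained LSTM; the constant-gain version below is a simplified, hypothetical stand-in, and the observation operator `H`, the gain value, and all dimensions are illustrative only.

```python
import numpy as np

def nudge(a_forecast, y_obs, H, gain):
    """One nudging step: pull the ROM forecast toward sparse Eulerian
    measurements. The paper replaces the constant gain with a learned
    LSTM correction; this linear form only illustrates the structure."""
    innovation = y_obs - H @ a_forecast      # measurement mismatch
    return a_forecast + gain * (H.T @ innovation)

# Toy setting: 4 ROM modes observed through 2 sensors (all hypothetical).
rng = np.random.default_rng(0)
a_true = np.array([1.0, -0.5, 0.25, 0.1])    # "true" modal coefficients
H = rng.standard_normal((2, 4))              # sparse observation operator
y = H @ a_true + 0.01 * rng.standard_normal(2)   # noisy measurements
a_fore = a_true + 0.2 * rng.standard_normal(4)   # uncertain state estimate
a_corr = nudge(a_fore, y, H, gain=0.1)
```

In a full data-assimilation loop this correction would be applied after every GROM time step, with the gain (or the LSTM) tuned against the expected noise levels.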
Physics-informed Neural Networks for Solving Inverse Problems of Nonlinear Biot's Equations: Batch Training
In biomedical engineering, earthquake prediction, and underground energy
harvesting, it is crucial to estimate the physical properties of porous media
indirectly, since their direct measurement is usually impractical or
prohibitive. Here we apply physics-informed neural networks to
solve the inverse problem for the nonlinear Biot's equations.
Specifically, we consider batch training and explore the effect of different
batch sizes. The results show that training with small batch sizes, i.e., a few
examples per batch, provides better approximations (lower percentage error) of
the physical parameters than using large batches or the full batch. The
increased accuracy of the physical parameters comes at the cost of longer
training time. Specifically, we find that the batch size should not be too small, since a
very small batch size requires a very long training time without a
corresponding improvement in estimation accuracy. We find that a batch size of
8 or 32 is a good compromise, which is also robust to additive noise in the
data. The learning rate also plays an important role and should be tuned as a
hyperparameter.
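The batch-size effect described above can be illustrated on a toy inverse problem. The sketch below recovers a single physical parameter with mini-batch gradient descent; it is a hypothetical stand-in for the PINN/Biot setting (a scalar least-squares model, not the actual network), and the learning rate, epoch count, and data sizes are illustrative assumptions.

```python
import numpy as np

def estimate_param(batch_size, lr=0.05, epochs=200, seed=0):
    """Mini-batch gradient descent recovering scalar k in y = k * x
    from noisy samples -- a toy analogue of estimating a physical
    parameter, used only to show where batch size enters the loop."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.5, 2.0, 256)
    y = 3.0 * x + 0.01 * rng.standard_normal(256)   # true k = 3
    k = 0.0
    for _ in range(epochs):
        idx = rng.permutation(256)                  # reshuffle each epoch
        for start in range(0, 256, batch_size):
            b = idx[start:start + batch_size]
            grad = np.mean(2.0 * (k * x[b] - y[b]) * x[b])  # d/dk of MSE
            k -= lr * grad
    return k

k8 = estimate_param(batch_size=8)   # small batch, many updates per epoch
```

Smaller batches take more gradient steps per epoch (hence longer training per epoch), which mirrors the accuracy-versus-time trade-off the abstract reports.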
Cross-language Text Classification with Convolutional Neural Networks From Scratch
Cross-language classification is an important task in multilingual learning, where documents in different languages often share the same set of categories. The main goal is to reduce the cost of labeling training data for a classification model for each individual language. This article proposes a novel approach to multilingual text classification using convolutional neural networks. The model learns a representation of the knowledge gained from the training languages. Moreover, the method also works for a new language that was not used in training. The results of an empirical study on a large dataset of 21 languages demonstrate the robustness and competitiveness of the presented approach.
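A language-agnostic CNN text classifier typically operates on characters rather than words, so the same filters apply across languages. The sketch below shows the basic building block, a character-level 1D convolution with max-over-time pooling; it is a hypothetical illustration, not the article's actual architecture, and the vocabulary, filter shapes, and random weights are all assumptions.

```python
import numpy as np

def char_cnn_features(text, vocab, filters):
    """Character-level 1D convolution + max-over-time pooling.
    `filters` has shape (n_filters, width, vocab_size). Returns one
    feature per filter, independent of text length."""
    # One-hot encode characters; characters outside vocab stay all-zero.
    x = np.zeros((len(text), len(vocab)))
    for i, ch in enumerate(text):
        if ch in vocab:
            x[i, vocab[ch]] = 1.0
    n_f, w, _ = filters.shape
    # Slide each filter over the sequence, then pool over time.
    conv = np.stack([
        np.array([np.sum(f * x[t:t + w]) for t in range(len(text) - w + 1)])
        for f in filters
    ])
    return conv.max(axis=1)   # shape: (n_filters,)

vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz ")}
rng = np.random.default_rng(0)
filters = rng.standard_normal((4, 3, len(vocab)))   # 4 width-3 filters
feats = char_cnn_features("hello world", vocab, filters)
```

In a full classifier these pooled features would feed a shared softmax layer over the common category set, which is what lets one trained model label documents in languages unseen during training.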