
    Reduced order modeling of subsurface multiphase flow models using deep residual recurrent neural networks

    We present a reduced order modeling (ROM) technique for subsurface multi-phase flow problems building on the recently introduced deep residual recurrent neural network (DR-RNN) [1]. DR-RNN is a physics-aware recurrent neural network for modeling the evolution of dynamical systems. The DR-RNN architecture is inspired by the iterative update techniques of line search methods, where a fixed number of layers are stacked together to minimize the residual (or reduced residual) of the physical model under consideration. In this manuscript, we combine DR-RNN with proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM) to reduce the computational complexity associated with high-fidelity numerical simulations. In the presented formulation, POD is used to construct an optimal set of reduced basis functions and DEIM is employed to evaluate the nonlinear terms independently of the full-order model size. We demonstrate the proposed reduced model on two uncertainty quantification test cases using Monte-Carlo simulation of subsurface flow with random permeability fields. The obtained results demonstrate that DR-RNN combined with POD-DEIM provides an accurate and stable reduced model at a fixed computational budget that is much less than the computational cost of a standard POD-Galerkin reduced model combined with DEIM for nonlinear dynamical systems.
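    The POD-DEIM pipeline the abstract describes is standard and can be sketched compactly. The following Python fragment is a minimal illustration, assuming `snapshots` is a matrix whose columns are full-order solution states; it shows POD basis extraction and greedy DEIM point selection, and deliberately omits the DR-RNN itself.

```python
import numpy as np

def pod_basis(snapshots, r):
    # POD: the leading left singular vectors of the snapshot matrix
    # give the optimal rank-r linear basis in the least-squares sense.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def deim_indices(U):
    # Greedy DEIM selection over the columns of the nonlinear-term basis U:
    # each step interpolates the next basis vector at the points chosen so
    # far and adds the location of the largest interpolation residual.
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)
```

    With these pieces, the nonlinear term only ever needs to be evaluated at the rows picked by `deim_indices`, which is what makes the reduced model's cost independent of the full-order size.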

    Approximation bounds for convolutional neural networks in operator learning

    Recently, deep Convolutional Neural Networks (CNNs) have proven to be successful when employed in areas such as reduced order modeling of parametrized PDEs. Despite their accuracy and efficiency, the approaches available in the literature still lack a rigorous justification of their mathematical foundations. Motivated by this fact, in this paper we derive rigorous error bounds for the approximation of nonlinear operators by means of CNN models. More precisely, we address the case in which an operator maps a finite-dimensional input $\boldsymbol{\mu}\in\mathbb{R}^{p}$ onto a functional output $u_{\boldsymbol{\mu}}:[0,1]^{d}\to\mathbb{R}$, and a neural network model is used to approximate a discretized version of the input-to-output map. The resulting error estimates provide a clear interpretation of the hyperparameters defining the neural network architecture. All the proofs are constructive, and they ultimately reveal a deep connection between CNNs and the Fourier transform. Finally, we complement the derived error bounds with numerical experiments that illustrate their application.
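    To make the setting concrete: the operator takes a parameter vector and returns a function sampled on a grid. Below is a hypothetical PyTorch sketch of such a model; the layer sizes, depth, and 64x64 output grid are arbitrary illustrative choices, not the architectures for which the paper derives its bounds.

```python
import torch
import torch.nn as nn

class ParamToField(nn.Module):
    # Maps mu in R^p to a discretization of u_mu on a 64x64 grid over [0,1]^2.
    def __init__(self, p):
        super().__init__()
        self.lift = nn.Linear(p, 16 * 8 * 8)  # lift mu to a coarse 8x8 feature map
        self.decode = nn.Sequential(          # three stride-2 layers: 8x8 -> 64x64
            nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 4, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(4, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, mu):                    # mu: (batch, p)
        z = self.lift(mu).view(-1, 16, 8, 8)
        return self.decode(z).squeeze(1)      # (batch, 64, 64)
```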

    A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning

    Academic tabular benchmarks often contain small sets of curated features. In contrast, data scientists typically collect as many features as possible into their datasets, and even engineer new features from existing ones. To prevent overfitting in subsequent downstream modeling, practitioners commonly use automated feature selection methods that identify a reduced subset of informative features. Existing benchmarks for tabular feature selection consider classical downstream models, toy synthetic datasets, or do not evaluate feature selectors on the basis of downstream performance. Motivated by the increasing popularity of tabular deep learning, we construct a challenging feature selection benchmark evaluated on downstream neural networks, including transformers, using real datasets and multiple methods for generating extraneous features. We also propose an input-gradient-based analogue of Lasso for neural networks that outperforms classical feature selection methods on challenging problems such as selecting from corrupted or second-order features.
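    One plausible reading of an input-gradient analogue of Lasso is to score features by the magnitude of the loss gradient with respect to each input, then keep the top-ranked ones. The PyTorch sketch below is a hypothetical post-hoc variant, not necessarily the exact benchmark method (which may instead impose the penalty during training).

```python
import torch

def input_gradient_scores(model, X, y, loss_fn):
    # Average |d loss / d x_j| over the batch: one importance score per
    # feature, playing the role of Lasso coefficient magnitudes.
    X = X.clone().requires_grad_(True)
    loss = loss_fn(model(X), y)
    (grad,) = torch.autograd.grad(loss, X)
    return grad.abs().mean(dim=0)

# Example: keep the 20 highest-scoring features.
# scores = input_gradient_scores(net, X_train, y_train, torch.nn.functional.mse_loss)
# selected = torch.topk(scores, k=20).indices
```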

    Data-Driven Reduced-Order Modeling of Unsteady Nonlinear Shock Wave using Physics-Informed Neural Network (PINN) Based Solution

    This article presents a preliminary study on data-driven reduced-order modeling (ROM) of unsteady nonlinear shock waves. A basic form of such a problem can be modeled using the Burgers' equation. The physics-informed neural network (PINN) approach is used to obtain numerical solutions to the problem at certain time steps. PINN is a computational framework that seamlessly integrates deep neural networks with the governing physics of the problem, and it is proving promising for enhancing the accuracy and efficiency of numerical solutions in a wide array of scientific and engineering applications. Next, the Proper Orthogonal Decomposition (POD) modes are extracted from the solution field, providing a compact representation of the system's dominant spatial patterns. Subsequently, temporal coefficients are computed at specific time intervals, allowing for a reduced-order representation of the temporal evolution of the system. These temporal coefficients are then employed as input data to train a deep neural network (DNN) model designed to predict the temporal coefficients at various time steps; the predicted coefficients can be used to reconstruct the solution. The synergy between the POD-based spatial decomposition and the data-driven capabilities of the DNN results in an efficient and accurate model for approximating the solution. The trained DNN takes the value of the Reynolds number and historical POD coefficients as inputs, generating predictions for future temporal coefficients. The study demonstrates the potential of combining model reduction techniques with machine learning approaches for solving complex partial differential equations, and it showcases the use of physics-informed deep learning for obtaining numerical solutions. The idea presented can be extended to more complicated problems involving the Navier-Stokes equations.
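    The PINN step hinges on penalizing the PDE residual through automatic differentiation. As a minimal PyTorch sketch (with the commonly used viscosity value nu = 0.01/pi as an assumption), the residual of the viscous Burgers' equation u_t + u u_x - nu u_xx = 0 can be computed as:

```python
import torch

def burgers_residual(u_net, x, t, nu=0.01 / torch.pi):
    # u_net maps (x, t) -> u; during training this residual is driven to
    # zero at collocation points, alongside initial/boundary-condition losses.
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = u_net(torch.cat([x, t], dim=1))
    u_x, u_t = torch.autograd.grad(u, (x, t), torch.ones_like(u), create_graph=True)
    (u_xx,) = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)
    return u_t + u * u_x - nu * u_xx
```

    The POD step then stacks such PINN solutions as snapshot columns, and the temporal coefficients are the projections of each snapshot onto the leading modes.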

    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments, and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that can be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.
    Comment: To appear in the Annual Reviews of Fluid Mechanics, 202