1,682 research outputs found

    Physics-Constrained Deep Learning for High-dimensional Surrogate Modeling and Uncertainty Quantification without Labeled Data

    Full text link
    Surrogate modeling and uncertainty quantification tasks for PDE systems are most often considered as supervised learning problems where input and output data pairs are used for training. The construction of such emulators is by definition a small data problem, which poses challenges to deep learning approaches that have been developed to operate in the big data regime. Even in cases where such models have been shown to have good predictive capability in high dimensions, they fail to address constraints in the data implied by the PDE model. This paper provides a methodology that incorporates the governing equations of the physical model in the loss/likelihood functions. The resulting physics-constrained, deep learning models are trained without any labeled data (e.g. employing only input data) and provide predictive responses comparable to data-driven models while obeying the constraints of the problem at hand. This work employs a convolutional encoder-decoder neural network approach as well as a conditional flow-based generative model for the solution of PDEs, surrogate model construction, and uncertainty quantification tasks. The methodology is posed as a minimization problem of the reverse Kullback-Leibler (KL) divergence between the model predictive density and the reference conditional density, where the latter is defined as the Boltzmann-Gibbs distribution at a given inverse temperature with the underlying potential relating to the PDE system of interest. The generalization capability of these models to out-of-distribution input is considered. Quantification and interpretation of the predictive uncertainty are provided for a number of problems. Comment: 51 pages, 18 figures, submitted to Journal of Computational Physics
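    A minimal sketch of the training objective described above, in notation assumed here for illustration (model predictive density p_theta(u|x), inverse temperature beta, and a potential V built from the PDE residual and boundary terms):

```latex
% Reference conditional density as a Boltzmann-Gibbs distribution at inverse
% temperature \beta, with potential V(u, x) built from the PDE residual and
% boundary terms (symbols chosen here for illustration):
%   p_ref(u | x) \propto exp(-\beta V(u, x))
% Training minimizes the reverse KL divergence from the model predictive
% density p_\theta(u | x) to this reference, which requires no labeled outputs:
\mathcal{L}(\theta)
  = \mathrm{KL}\!\left( p_\theta(u \mid x) \,\middle\|\, p_{\mathrm{ref}}(u \mid x) \right)
  = \mathbb{E}_{p_\theta(u \mid x)}\!\left[ \log p_\theta(u \mid x) + \beta\, V(u, x) \right] + \mathrm{const}.
```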

    Surrogate Modeling for Fluid Flows Based on Physics-Constrained Deep Learning Without Simulation Data

    Full text link
    Numerical simulations of fluid dynamics problems primarily rely on spatial and/or temporal discretization of the governing equations into a finite-dimensional algebraic system solved by computers. Due to the complicated nature of the physics and geometry, such a process can be computationally prohibitive for most real-time applications and many-query analyses. Therefore, developing a cost-effective surrogate model is of great practical significance. Deep learning (DL) has shown new promise for surrogate modeling due to its capability of handling strong nonlinearity and high dimensionality. However, off-the-shelf DL architectures fail to operate when the data becomes sparse. Unfortunately, data is often insufficient in most parametric fluid dynamics problems since each data point in the parameter space requires an expensive numerical simulation based on first principles, e.g., the Navier-Stokes equations. In this paper, we provide a physics-constrained DL approach for surrogate modeling of fluid flows without relying on any simulation data. Specifically, a structured deep neural network (DNN) architecture is devised to enforce the initial and boundary conditions, and the governing partial differential equations are incorporated into the loss of the DNN to drive the training. Numerical experiments are conducted on a number of internal flows relevant to hemodynamics applications, and the forward propagation of uncertainties in fluid properties and domain geometry is studied as well. The results show excellent agreement on the flow field and forward-propagated uncertainties between the DL surrogate approximations and the first-principle numerical simulations. Comment: 43 pages, 12 figures
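    A minimal sketch of the idea of training from the governing equations alone, not the paper's actual network: the boundary conditions are enforced by construction and the loss is the squared PDE residual at random collocation points. A 1D Poisson-type equation is assumed here purely for brevity.

```python
import torch
import torch.nn as nn

# Illustrative example equation: u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def u(x):
    # Hard-encode the homogeneous Dirichlet BCs: u(x) = x (1 - x) * NN(x).
    return x * (1.0 - x) * net(x)

def pde_residual_loss(x):
    x = x.requires_grad_(True)
    out = u(x)
    ux = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    f = -torch.ones_like(x)                 # source term f(x) = -1, chosen for illustration
    return ((uxx - f) ** 2).mean()          # squared PDE residual drives the training

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    x = torch.rand(256, 1)                  # random collocation points, no labeled data
    opt.zero_grad()
    pde_residual_loss(x).backward()
    opt.step()
```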

    Theory-guided Auto-Encoder for Surrogate Construction and Inverse Modeling

    Full text link
    A Theory-guided Auto-Encoder (TgAE) framework is proposed for surrogate construction and is further used for uncertainty quantification and inverse modeling tasks. The framework is built on the Auto-Encoder (or Encoder-Decoder) architecture of a convolutional neural network (CNN) via a theory-guided training process. To achieve theory-guided training, the governing equations of the studied problems are discretized and the resulting finite-difference scheme is embedded into the training of the CNN. The residual of the discretized governing equations and the data mismatch together constitute the loss function of the TgAE. The trained TgAE can be used to construct a surrogate that approximates the relationship between the model parameters and responses with limited labeled data. To test the performance of the TgAE, several subsurface flow cases are introduced. The results show satisfactory accuracy of the TgAE surrogate, and the efficiency of uncertainty quantification tasks can be improved with it. The TgAE also shows good extrapolation ability for cases with different correlation lengths and variances. Furthermore, the parameter inversion task has been implemented with the TgAE surrogate and satisfactory results are obtained.
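    A minimal sketch, not the TgAE code, of how a finite-difference residual and a data-mismatch term can be combined into one loss. The 2D Laplace equation with a standard 5-point stencil and the names below are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

# 5-point finite-difference stencil for the Laplacian (grid spacing h = 1).
laplace_stencil = torch.tensor([[[[0., 1., 0.],
                                  [1., -4., 1.],
                                  [0., 1., 0.]]]])

def theory_guided_loss(pred, obs, obs_mask, lam=1.0):
    # pred: (N, 1, H, W) predicted field; obs: sparse labels on the same grid;
    # obs_mask: 1 where a labeled value exists, 0 elsewhere; lam weights the physics term.
    residual = F.conv2d(pred, laplace_stencil)        # interior finite-difference residual
    physics = (residual ** 2).mean()
    data = (((pred - obs) * obs_mask) ** 2).sum() / obs_mask.sum().clamp(min=1.0)
    return data + lam * physics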

    PhyGeoNet: Physics-Informed Geometry-Adaptive Convolutional Neural Networks for Solving Parameterized Steady-State PDEs on Irregular Domain

    Full text link
    Recently, the advent of deep learning has spurred interest in the development of physics-informed neural networks (PINN) for efficiently solving partial differential equations (PDEs), particularly in a parametric setting. Among all different classes of deep neural networks, the convolutional neural network (CNN) has attracted increasing attention in the scientific machine learning community, since the parameter-sharing feature of CNNs enables efficient learning for problems with large-scale spatiotemporal fields. However, one of the biggest challenges is that CNNs can only handle regular geometries with an image-like format (i.e., rectangular domains with uniform grids). In this paper, we propose a novel physics-constrained CNN learning architecture, aiming to learn solutions of parametric PDEs on irregular domains without any labeled data. In order to leverage powerful classic CNN backbones, elliptic coordinate mapping is introduced to enable coordinate transforms between the irregular physical domain and the regular reference domain. The proposed method has been assessed by solving a number of PDEs on irregular domains, including heat equations and steady Navier-Stokes equations with parameterized boundary conditions and varying geometries. Moreover, the proposed method has also been compared against the state-of-the-art PINN with fully-connected neural network (FC-NN) formulation. The numerical results demonstrate the effectiveness of the proposed approach and exhibit notable superiority over the FC-NN based PINN in terms of efficiency and accuracy. Comment: 57 pages, 26 figures
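    A minimal sketch, under stated assumptions, of the kind of chain rule involved when a residual is evaluated on a regular reference grid that maps to an irregular physical domain: the mesh coordinates x(xi, eta), y(xi, eta) provide the metric terms relating reference-space and physical-space derivatives. The function name and the use of finite differences on a unit-spaced grid are choices made here for illustration, not the paper's implementation.

```python
import torch

def physical_gradients(u, x, y):
    # u, x, y: (H, W) tensors on the uniform reference grid (xi along dim 0, eta along dim 1).
    du_dxi, du_deta = torch.gradient(u)
    dx_dxi, dx_deta = torch.gradient(x)
    dy_dxi, dy_deta = torch.gradient(y)
    jac = dx_dxi * dy_deta - dx_deta * dy_dxi          # Jacobian determinant of the mapping
    # Chain rule with the inverse mapping: derivatives in physical coordinates.
    du_dx = (du_dxi * dy_deta - du_deta * dy_dxi) / jac
    du_dy = (du_deta * dx_dxi - du_dxi * dx_deta) / jac
    return du_dx, du_dy
```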

    Transfer learning based multi-fidelity physics informed deep neural network

    Full text link
    For many systems in science and engineering, the governing differential equation is either not known or known only in an approximate sense. Analyses and design of such systems are governed by data collected from the field and/or laboratory experiments. This challenging scenario is further worsened when data collection is expensive and time-consuming. To address this issue, this paper presents a novel multi-fidelity physics informed deep neural network (MF-PIDNN). The proposed framework is particularly suitable when the physics of the problem is known in an approximate sense (low-fidelity physics) and only a few high-fidelity data are available. MF-PIDNN blends physics informed and data-driven deep learning techniques by using the concept of transfer learning. The approximate governing equation is first used to train a low-fidelity physics informed deep neural network. This is followed by transfer learning, where the low-fidelity model is updated by using the available high-fidelity data. MF-PIDNN is able to encode useful information on the physics of the problem from the approximate governing differential equation and hence provides accurate predictions even in zones with no data. Additionally, no low-fidelity data is required for training this model. Applicability and utility of MF-PIDNN are illustrated in solving four benchmark reliability analysis problems. Case studies to illustrate interesting features of the proposed approach are also presented.
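    A minimal sketch of the two-stage idea, not the MF-PIDNN code: pre-train on the residual of an assumed low-fidelity equation, then reuse the weights and fine-tune on a few high-fidelity samples. The toy equation u'(x) = -u(x) with u(0) = 1, the placeholder data, and the learning rates are assumptions made here for illustration.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 50), nn.Tanh(), nn.Linear(50, 50), nn.Tanh(), nn.Linear(50, 1))

def lf_physics_loss(x):
    # Residual of the assumed low-fidelity equation u'(x) = -u(x), plus a soft IC u(0) = 1.
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    ic = (net(torch.zeros(1, 1)) - 1.0) ** 2
    return ((du + u) ** 2).mean() + ic.mean()

# Stage 1: physics-informed pre-training, no data at all.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    lf_physics_loss(torch.rand(128, 1)).backward()
    opt.step()

# Stage 2: transfer learning -- fine-tune the pre-trained weights on a few
# high-fidelity observations (placeholder tensors stand in for experimental data).
x_hf, y_hf = torch.rand(8, 1), torch.rand(8, 1)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for _ in range(500):
    opt.zero_grad()
    ((net(x_hf) - y_hf) ** 2).mean().backward()
    opt.step()
```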

    Simulator-free Solution of High-Dimensional Stochastic Elliptic Partial Differential Equations using Deep Neural Networks

    Full text link
    Stochastic partial differential equations (SPDEs) are ubiquitous in engineering and computational sciences. The stochasticity arises as a consequence of uncertainty in input parameters, constitutive relations, initial/boundary conditions, etc. Because of these functional uncertainties, the stochastic parameter space is often high-dimensional, requiring hundreds, or even thousands, of parameters to describe it. This poses an insurmountable challenge to response surface modeling since the number of forward model evaluations needed to construct an accurate surrogate grows exponentially with the dimension of the uncertain parameter space; a phenomenon referred to as the curse of dimensionality. State-of-the-art methods for high-dimensional uncertainty propagation seek to alleviate the curse of dimensionality by performing dimensionality reduction in the uncertain parameter space. However, one still needs to perform forward model evaluations that potentially carry a very high computational burden. We propose a novel methodology for high-dimensional uncertainty propagation of elliptic SPDEs which lifts the requirement for a deterministic forward solver. Our approach is as follows. We parameterize the solution of the elliptic SPDE using a deep residual network (ResNet). In a departure from the traditional squared residual (SR) based loss function for training the ResNet, we introduce a novel physics-informed loss function derived from variational principles. Specifically, our loss function is the expectation of the energy functional of the PDE over the stochastic variables. We demonstrate our solver-free approach through various examples where the elliptic SPDE is subjected to different types of high-dimensional input uncertainties. Also, we solve high-dimensional uncertainty propagation and inverse problems. Comment: 63 pages, 32 figures
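    A sketch of what "expectation of the energy functional over the stochastic variables" looks like, assuming for illustration a linear elliptic operator with homogeneous Dirichlet boundary conditions; the specific operator and symbols are assumptions, not taken from the paper.

```latex
% Variational (energy-functional) loss for the elliptic SPDE
% -\nabla\cdot\big(a(x,\xi)\,\nabla u\big) = f(x) on domain D, u = 0 on the boundary,
% where \xi is the high-dimensional stochastic input and u_\theta the ResNet
% parameterization of the solution:
\mathcal{L}(\theta)
  = \mathbb{E}_{\xi}\!\left[
      \int_{D} \left( \tfrac{1}{2}\, a(x,\xi)\,\lvert \nabla u_\theta(x,\xi) \rvert^{2}
      - f(x)\, u_\theta(x,\xi) \right) \mathrm{d}x
    \right].
```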

    PhyCRNet: Physics-informed Convolutional-Recurrent Network for Solving Spatiotemporal PDEs

    Full text link
    Partial differential equations (PDEs) play a fundamental role in modeling and simulating problems across a wide range of disciplines. Recent advances in deep learning have shown the great potential of physics-informed neural networks (PINNs) to solve PDEs as a basis for data-driven modeling and inverse analysis. However, the majority of existing PINN methods, based on fully-connected NNs, pose intrinsic limitations to low-dimensional spatiotemporal parameterizations. Moreover, since the initial/boundary conditions (I/BCs) are softly imposed via penalty, the solution quality heavily relies on hyperparameter tuning. To this end, we propose the novel physics-informed convolutional-recurrent learning architectures (PhyCRNet and PhyCRNet-s) for solving PDEs without any labeled data. Specifically, an encoder-decoder convolutional long short-term memory network is proposed for low-dimensional spatial feature extraction and temporal evolution learning. The loss function is defined as the aggregated discretized PDE residuals, while the I/BCs are hard-encoded in the network to ensure forcible satisfaction (e.g., periodic boundary padding). The networks are further enhanced by autoregressive and residual connections that explicitly simulate time marching. The performance of our proposed methods has been assessed by solving three nonlinear PDEs (namely the 2D Burgers' equations, the λ-ω and FitzHugh-Nagumo reaction-diffusion equations), and compared against state-of-the-art baseline algorithms. The numerical results demonstrate the superiority of our proposed methodology in the context of solution accuracy, extrapolability and generalizability. Comment: 22 pages
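    A minimal sketch, not the PhyCRNet code, of two ingredients named above: hard-encoded periodic BCs via circular padding, and an autoregressive residual connection that mimics explicit time marching, u_{t+1} = u_t + dt * f(u_t). The layer sizes, channel count and time step are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Step(nn.Module):
    def __init__(self, channels=2, dt=0.01):
        super().__init__()
        self.dt = dt
        self.conv = nn.Sequential(
            nn.Conv2d(channels, 32, 3), nn.ReLU(),
            nn.Conv2d(32, channels, 3),
        )

    def forward(self, u):
        # Periodic boundary padding "hard-encodes" the BCs before the convolutions.
        z = F.pad(u, (2, 2, 2, 2), mode="circular")
        return u + self.dt * self.conv(z)              # residual connection = forward Euler step

model = Step()
u = torch.randn(1, 2, 64, 64)                          # initial condition snapshot
trajectory = [u]
for _ in range(10):                                    # autoregressive rollout, no labeled data
    u = model(u)
    trajectory.append(u)
```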

    A probabilistic generative model for semi-supervised training of coarse-grained surrogates and enforcing physical constraints through virtual observables

    Full text link
    The data-centric construction of inexpensive surrogates for fine-grained, physical models has been at the forefront of computational physics due to its significant utility in many-query tasks such as uncertainty quantification. Recent efforts have taken advantage of the enabling technologies from the field of machine learning (e.g. deep neural networks) in combination with simulation data. While such strategies have shown promise even in higher-dimensional problems, they generally require large amounts of training data even though the construction of surrogates is by definition a Small Data problem. Rather than employing data-based loss functions, it has been proposed to make use of the governing equations (in the simplest case at collocation points) in order to imbue domain knowledge into the training of the otherwise black-box-like interpolators. The present paper provides a flexible, probabilistic framework that accounts for physical structure and information both in the training objectives as well as in the surrogate model itself. We advocate a probabilistic (Bayesian) model in which equalities that are available from the physics (e.g. residuals, conservation laws) can be introduced as virtual observables and can provide additional information through the likelihood. We further advocate a generative model, i.e. one that attempts to learn the joint density of inputs and outputs, which is capable of making use of unlabeled data (i.e. only inputs) in a semi-supervised fashion in order to promote the discovery of lower-dimensional embeddings that are nevertheless predictive of the fine-grained model's output.
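    A sketch of the "virtual observables" idea in notation assumed here for illustration: physical equalities such as a residual R(u; x) = 0 are treated as noisy observations of the value zero and enter the likelihood alongside any real data.

```latex
% A residual-type equality from the physics, R(u; x) = 0, is introduced as a
% virtual observable \hat{y} = 0 with an assumed Gaussian tolerance \sigma,
% contributing to the likelihood of the probabilistic (Bayesian) model:
p\!\left(\hat{y} = 0 \mid u, x\right)
  = \mathcal{N}\!\left( 0 \;\middle|\; R(u; x),\, \sigma^{2} I \right).
```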

    Physics-Constrained Bayesian Neural Network for Fluid Flow Reconstruction with Sparse and Noisy Data

    Full text link
    In many applications, flow measurements are sparse and possibly noisy. The reconstruction of a high-resolution flow field from limited and imperfect flow information is significant yet challenging. In this work, we propose an innovative physics-constrained Bayesian deep learning approach to reconstruct flow fields from sparse, noisy velocity data, where equation-based constraints are imposed through the likelihood function and uncertainty of the reconstructed flow can be estimated. Specifically, a Bayesian deep neural network is trained on sparse measurement data to capture the flow field. Meanwhile, violations of the physical laws are penalized on a large number of spatiotemporal points where measurements are not available. A non-parametric variational inference approach is applied to enable efficient physics-constrained Bayesian learning. Several test cases on idealized vascular flows with synthetic measurement data are studied to demonstrate the merit of the proposed method. Comment: 17 pages, 5 figures
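    A sketch of how the physics can enter the likelihood of such a Bayesian model; the Gaussian forms, noise scales and symbols below are assumptions made for illustration, not the paper's exact formulation.

```latex
% Sparse, noisy velocity data D = {(x_i, u_i)} and PDE residuals R evaluated at
% collocation points {x_j} (where no measurements exist) both contribute to the
% posterior over network weights w:
\log p(w \mid \mathcal{D})
  \;\propto\; \sum_{i} \log \mathcal{N}\!\left( u_i \mid u_w(x_i),\, \sigma_d^{2} \right)
  \;+\; \sum_{j} \log \mathcal{N}\!\left( 0 \mid R\!\left(u_w(x_j)\right),\, \sigma_r^{2} \right)
  \;+\; \log p(w).
```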

    Modeling the Dynamics of PDE Systems with Physics-Constrained Deep Auto-Regressive Networks

    Full text link
    In recent years, deep learning has proven to be a viable methodology for surrogate modeling and uncertainty quantification for a vast number of physical systems. However, in their traditional form, such models can require a large amount of training data. This is of particular importance for various engineering and scientific applications where data may be extremely expensive to obtain. To overcome this shortcoming, physics-constrained deep learning provides a promising methodology as it only utilizes the governing equations. In this work, we propose a novel auto-regressive dense encoder-decoder convolutional neural network to solve and model non-linear dynamical systems without training data at a computational cost that is potentially orders of magnitude lower than standard numerical solvers. This model includes a Bayesian framework that allows for uncertainty quantification of the predicted quantities of interest at each time-step. We rigorously test this model on several non-linear transient partial differential equation systems including the turbulence of the Kuramoto-Sivashinsky equation, multi-shock formation and interaction with the 1D Burgers' equation, and 2D wave dynamics with coupled Burgers' equations. For each system, the predictive results and uncertainty are presented and discussed together with comparisons to the results obtained from traditional numerical analysis methods. Comment: 48 pages, 30 figures, accepted to Journal of Computational Physics
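    A minimal sketch, under stated assumptions, of time-step-wise uncertainty from an auto-regressive surrogate: roll out several models (standing in here for posterior samples of a Bayesian network) and report the per-step mean and standard deviation. The function names and the toy linear "models" are hypothetical, used only to make the snippet self-contained.

```python
import torch

def rollout(model, u0, steps):
    u, traj = u0, []
    for _ in range(steps):
        u = model(u)                       # auto-regressive: next state from current state
        traj.append(u)
    return torch.stack(traj)               # (steps, ...) predicted trajectory

def predictive_moments(models, u0, steps):
    samples = torch.stack([rollout(m, u0, steps) for m in models])  # (n_models, steps, ...)
    return samples.mean(dim=0), samples.std(dim=0)                  # per-step mean and uncertainty

# Toy usage with hypothetical ensemble members:
models = [torch.nn.Linear(4, 4) for _ in range(8)]
mean, std = predictive_moments(models, torch.randn(1, 4), steps=10)
```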