
    A Discussion on Solving Partial Differential Equations using Neural Networks

    Can neural networks learn to solve partial differential equations (PDEs)? We investigate this question for two (systems of) PDEs, namely the Poisson equation and the steady Navier-Stokes equations. The contributions of this paper are five-fold. (1) Numerical experiments show that small neural networks (< 500 learnable parameters) can accurately learn complex solutions of systems of partial differential equations. (2) We investigate the influence of random weight initialization on the quality of the neural network approximate solution and demonstrate how this non-determinism can be exploited via ensemble learning. (3) We investigate the suitability of the loss function used in this work. (4) We study the benefits and drawbacks of solving (systems of) PDEs with neural networks compared to classical numerical methods. (5) We propose an exhaustive list of possible directions for future work. Comment: 9 pages, 2 figures.
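
    The claim in (1) is easy to make concrete. Below is a minimal sketch (my own illustration, not the paper's code) of a PINN-style setup in PyTorch: a network with roughly 320 learnable parameters is trained to satisfy -u''(x) = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0, whose exact solution is sin(pi x).

```python
# Sketch of a small PINN (~320 parameters) for -u'' = pi^2 sin(pi x),
# u(0) = u(1) = 0; the exact solution is u(x) = sin(pi x).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.linspace(0.0, 1.0, 64).reshape(-1, 1).requires_grad_(True)

for step in range(5000):
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde = (-d2u - torch.pi ** 2 * torch.sin(torch.pi * x)).pow(2).mean()  # PDE residual
    bc = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()                  # boundary residual
    loss = pde + bc
    opt.zero_grad(); loss.backward(); opt.step()
```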

    PFNN: A Penalty-Free Neural Network Method for Solving a Class of Second-Order Boundary-Value Problems on Complex Geometries

    We present PFNN, a penalty-free neural network method, to efficiently solve a class of second-order boundary-value problems on complex geometries. To reduce the smoothness requirement, the original problem is reformulated in weak form so that evaluations of high-order derivatives are avoided. Two neural networks, rather than just one, are employed to construct the approximate solution, with one network satisfying the essential boundary conditions and the other handling the rest of the domain. In this way, an unconstrained optimization problem, instead of a constrained one, is solved without adding any penalty terms. The entanglement of the two networks is eliminated with the help of a length factor function that is scale invariant and can adapt to complex geometries. We prove the convergence of the PFNN method and conduct numerical experiments on a series of linear and nonlinear second-order boundary-value problems to demonstrate that PFNN is superior to several existing approaches in terms of accuracy, flexibility and robustness.
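
    A minimal sketch of the two-network construction described above, under assumed notation (not the authors' code): the approximation is u(x) = g(x) + L(x) f(x), where g fits the essential boundary data, the length factor L vanishes on the essential boundary, and f is trained on the weak-form loss over the interior. For the unit disk, L(x, y) = 1 - x^2 - y^2 is a natural choice.

```python
# Sketch of the penalty-free ansatz u = g + L * f on the unit disk.
import torch

g = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
f = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def length_factor(xy):
    # Vanishes exactly on the unit circle, positive inside the disk.
    return 1.0 - (xy ** 2).sum(dim=1, keepdim=True)

def u(xy):
    # Equals g on the boundary by construction, so the interior loss that
    # trains f needs no boundary penalty term.
    return g(xy) + length_factor(xy) * f(xy)

xy = torch.randn(8, 2) * 0.5          # example interior query points
print(u(xy).shape)                    # (8, 1)
```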

    Physics Informed Extreme Learning Machine (PIELM) -- A rapid method for the numerical solution of partial differential equations

    There has been rapid progress recently on the application of deep networks to the solution of partial differential equations, collectively labelled as Physics Informed Neural Networks (PINNs). In this paper, we develop the Physics Informed Extreme Learning Machine (PIELM), a rapid version of PINNs that can be applied to stationary and time-dependent linear partial differential equations. We demonstrate that PIELM matches or exceeds the accuracy of PINNs on a range of problems. We also discuss the limitations of neural network based approaches, including our PIELM, in the solution of PDEs on large domains and suggest an extension, a distributed version of our algorithm, DPIELM. We show that DPIELM produces excellent results comparable to conventional numerical techniques in the solution of time-dependent problems. Collectively, this work contributes towards making neural networks a competitive alternative to conventional discretization techniques for the solution of partial differential equations in complex domains. Comment: 29 pages, 30 figures.
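
    A minimal sketch of the PIELM idea (an assumed toy setup, not the authors' code) for -u'' = f on [0, 1] with homogeneous Dirichlet conditions: because the hidden weights are random and frozen, the PDE residual at collocation points is linear in the output weights, so training reduces to a single least-squares solve.

```python
# Sketch of PIELM for -u'' = f on [0, 1], u(0) = u(1) = 0.
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_colloc = 50, 100
w = rng.normal(size=n_hidden) * 5.0          # frozen random hidden weights
b = rng.uniform(-5.0, 5.0, size=n_hidden)    # frozen random hidden biases

def phi(x):                                   # hidden features tanh(w x + b)
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):                                # second derivative, analytically
    t = phi(x)
    return (w ** 2) * (-2.0 * t * (1.0 - t ** 2))

x = np.linspace(0.0, 1.0, n_colloc)
f = np.pi ** 2 * np.sin(np.pi * x)            # manufactured source term

A = np.vstack([-phi_xx(x), phi(np.array([0.0, 1.0]))])   # PDE rows + BC rows
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # output weights in one solve

print(np.abs(phi(x) @ c - np.sin(np.pi * x)).max())  # error vs. exact sin(pi x)
```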

    PhyGeoNet: Physics-Informed Geometry-Adaptive Convolutional Neural Networks for Solving Parameterized Steady-State PDEs on Irregular Domain

    Recently, the advent of deep learning has spurred interest in the development of physics-informed neural networks (PINNs) for efficiently solving partial differential equations (PDEs), particularly in a parametric setting. Among the different classes of deep neural networks, the convolutional neural network (CNN) has attracted increasing attention in the scientific machine learning community, since the parameter-sharing feature of CNNs enables efficient learning for problems with large-scale spatiotemporal fields. However, one of the biggest challenges is that CNNs can only handle regular geometries with an image-like format (i.e., rectangular domains with uniform grids). In this paper, we propose a novel physics-constrained CNN learning architecture, aiming to learn solutions of parametric PDEs on irregular domains without any labeled data. To leverage powerful classic CNN backbones, an elliptic coordinate mapping is introduced to enable coordinate transforms between the irregular physical domain and a regular reference domain. The proposed method has been assessed by solving a number of PDEs on irregular domains, including heat equations and steady Navier-Stokes equations with parameterized boundary conditions and varying geometries. Moreover, the proposed method has also been compared against the state-of-the-art PINN with a fully-connected neural network (FC-NN) formulation. The numerical results demonstrate the effectiveness of the proposed approach and exhibit notable superiority over the FC-NN based PINN in terms of efficiency and accuracy. Comment: 57 pages, 26 figures.
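
    A minimal sketch of the physics-constrained CNN ingredient (my own toy, restricted to a regular grid): the PDE residual is evaluated by applying a fixed finite-difference stencil as a convolution, so no labeled solution data is needed. PhyGeoNet's additional contribution, the elliptic coordinate mapping from an irregular physical domain to such a reference grid, is omitted here.

```python
# Sketch: physics-constrained CNN for -Laplacian(u) = 1 on a 32x32 grid,
# with the PDE residual computed by a fixed 5-point stencil convolution.
import torch
import torch.nn.functional as F

h = 1.0 / 31                                  # grid spacing
laplace = torch.tensor([[[[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]]]) / h ** 2

cnn = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)
source = torch.ones(1, 1, 32, 32)             # f(x, y) = 1 on the reference grid
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)

for step in range(2000):
    u = cnn(source)
    lap_u = F.conv2d(u, laplace)              # Laplacian on interior points only
    res_pde = (-lap_u - source[:, :, 1:-1, 1:-1]).pow(2).mean()
    res_bc = (u[:, :, 0, :].pow(2).mean() + u[:, :, -1, :].pow(2).mean()
              + u[:, :, :, 0].pow(2).mean() + u[:, :, :, -1].pow(2).mean())
    loss = res_pde + res_bc                   # no labeled solution data anywhere
    opt.zero_grad(); loss.backward(); opt.step()
```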

    Physics-Informed Neural Networks for Solving Multiscale Mode-Resolved Phonon Boltzmann Transport Equation

    The Boltzmann transport equation (BTE) is an ideal tool for describing multiscale phonon transport phenomena, which are critical to applications like microelectronics cooling. Numerically solving the phonon BTE is extremely computationally challenging due to the high dimensionality of such problems, especially when mode-resolved properties are considered. In this work, we demonstrate the use of physics-informed neural networks (PINNs) to efficiently solve the phonon BTE for multiscale thermal transport problems with consideration of phonon dispersion and polarization. In particular, a PINN framework is devised to predict the phonon energy distribution by minimizing the residuals of the governing equations and boundary conditions, without the need for any labeled training data. Moreover, geometric parameters, such as the characteristic length scale, are included as part of the input to the PINN, which enables learning BTE solutions in a parametric setting. The effectiveness of the present scheme is demonstrated by solving a number of phonon transport problems in different spatial dimensions (from 1D to 3D). Compared to existing numerical BTE solvers, the proposed method exhibits superior efficiency and accuracy, showing great promise for practical applications such as the thermal design of electronic devices.
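
    A minimal sketch of the parametric-input idea (an assumed stand-in, not the BTE residual itself): the characteristic length scale is concatenated to the spatial coordinate as an extra network input, so a single trained network covers a family of transport problems.

```python
# Sketch: geometric parameter L as an extra input alongside position x.
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
x = torch.rand(256, 1, requires_grad=True)    # spatial collocation points
L = 10 ** (2 * torch.rand(256, 1) - 1)        # length scales spanning two decades
e = net(torch.cat([x, L], dim=1))             # predicted energy distribution
# The BTE residual and boundary losses would be built from `e` as in a
# standard PINN, with derivatives taken w.r.t. x via torch.autograd.grad.
```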

    Surrogate Modeling for Fluid Flows Based on Physics-Constrained Deep Learning Without Simulation Data

    Numerical simulations of fluid dynamics problems primarily rely on spatial and/or temporal discretization of the governing equations into finite-dimensional algebraic systems solved by computers. Due to the complicated nature of the physics and geometry, this process can be computationally prohibitive for most real-time applications and many-query analyses. Therefore, developing a cost-effective surrogate model is of great practical significance. Deep learning (DL) has shown new promise for surrogate modeling due to its capability of handling strong nonlinearity and high dimensionality. However, off-the-shelf DL architectures fail to operate when the data becomes sparse. Unfortunately, data is often insufficient in most parametric fluid dynamics problems, since each data point in the parameter space requires an expensive numerical simulation based on first principles, e.g., the Navier-Stokes equations. In this paper, we provide a physics-constrained DL approach for surrogate modeling of fluid flows without relying on any simulation data. Specifically, a structured deep neural network (DNN) architecture is devised to enforce the initial and boundary conditions, and the governing partial differential equations are incorporated into the loss of the DNN to drive the training. Numerical experiments are conducted on a number of internal flows relevant to hemodynamics applications, and the forward propagation of uncertainties in fluid properties and domain geometry is studied as well. The results show excellent agreement on the flow field and forward-propagated uncertainties between the DL surrogate approximations and the first-principle numerical simulations. Comment: 43 pages, 12 figures.
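
    A minimal sketch of the forward uncertainty propagation mentioned above, with a hypothetical trained surrogate (the untrained network here is only a placeholder): once the surrogate takes the uncertain fluid property as an input, propagating its uncertainty reduces to cheap Monte Carlo over network evaluations instead of repeated CFD solves.

```python
# Sketch of Monte Carlo uncertainty propagation through a surrogate.
import torch

# Placeholder architecture; in practice this network would already be
# trained with the physics-constrained loss described in the abstract.
surrogate = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))

xy = torch.tensor([[0.5, 0.5]]).repeat(10_000, 1)              # fixed query point
nu = torch.normal(mean=3.5e-3, std=3.5e-4, size=(10_000, 1))   # uncertain viscosity
with torch.no_grad():
    vel = surrogate(torch.cat([xy, nu], dim=1))                # velocity samples
print(vel.mean(dim=0), vel.std(dim=0))                         # propagated statistics
```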

    Solving for high dimensional committor functions using artificial neural networks

    In this note we propose a method based on artificial neural networks to study transitions between states governed by stochastic processes. In particular, we aim at numerical schemes for the committor function, the central object of transition path theory, which satisfies a high-dimensional Fokker-Planck equation. By working with the variational formulation of this partial differential equation and parameterizing the committor function in terms of a neural network, approximations can be obtained by optimizing the neural network weights using stochastic algorithms. The numerical examples show that moderate accuracy can be achieved for high-dimensional problems. Comment: 12 pages, 6 figures.
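
    A minimal sketch of the variational formulation the note works with (stand-in Gaussian samples in place of proper equilibrium samples): the committor q minimizes the expected squared gradient E[|grad q(X)|^2] over the equilibrium distribution, subject to q = 0 on state A and q = 1 on state B, imposed here with penalty terms.

```python
# Sketch of the variational committor loss in d = 10 dimensions.
import torch

q_net = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

x = torch.randn(1024, 10, requires_grad=True)      # stand-in equilibrium samples
xa = torch.randn(128, 10) - 2.0                    # points near metastable state A
xb = torch.randn(128, 10) + 2.0                    # points near metastable state B

for step in range(2000):
    q = q_net(x)
    grad_q = torch.autograd.grad(q.sum(), x, create_graph=True)[0]
    energy = grad_q.pow(2).sum(dim=1).mean()                  # E[|grad q|^2]
    bc = q_net(xa).pow(2).mean() + (q_net(xb) - 1.0).pow(2).mean()
    loss = energy + 10.0 * bc                                 # penalty weight assumed
    opt.zero_grad(); loss.backward(); opt.step()
```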

    Variational training of neural network approximations of solution maps for physical models

    A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models. The solve-training framework uses the neural network as the ansatz of the solution map and trains the network variationally via loss functions derived from the underlying physical models. It thereby avoids the expensive data preparation of the traditional supervised training procedure, which requires labels for the input data, and still achieves an effective representation of the solution map adapted to the input data distribution. The efficiency of the solve-training framework is demonstrated by obtaining solution maps for linear and nonlinear elliptic equations, and maps from potentials to ground states of linear and nonlinear Schrödinger equations.
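
    A minimal sketch of the solve-training idea for a linear elliptic problem A u = f (my own toy discretization, not the authors' code): the network is the ansatz for the solution map f -> u, and the loss ||A net(f) - f||^2 needs only samples of f from the input distribution, never precomputed solution labels.

```python
# Sketch of solve-training for the 1D discrete Poisson problem A u = f.
import torch

n = 32
A = (2 * torch.eye(n) - torch.diag(torch.ones(n - 1), 1)
     - torch.diag(torch.ones(n - 1), -1)) * (n + 1) ** 2   # 1D discrete Laplacian

net = torch.nn.Sequential(torch.nn.Linear(n, 128), torch.nn.Tanh(), torch.nn.Linear(128, n))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    f = torch.randn(64, n)            # right-hand sides from the input distribution
    u = net(f)                        # candidate solutions from the solution-map ansatz
    loss = (u @ A - f).pow(2).mean()  # residual of A u = f; no labels needed (A symmetric)
    opt.zero_grad(); loss.backward(); opt.step()
```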

    Learning optimal multigrid smoothers via neural networks

    Multigrid methods are among the most efficient techniques for solving linear systems arising from partial differential equations (PDEs) and from graph Laplacians in machine learning applications. One of the key components of multigrid is smoothing, which aims at reducing high-frequency errors on each grid level. However, finding optimal smoothing algorithms is problem-dependent and can pose challenges for many problems. In this paper, we propose an efficient adaptive framework for learning optimized smoothers from operator stencils in the form of convolutional neural networks (CNNs). The CNNs are trained on small-scale problems from a given type of PDE with a supervised loss function derived from multigrid convergence theory, and can then be applied to large-scale problems of the same class of PDEs. Numerical results on anisotropic rotated Laplacian problems demonstrate improved convergence rates and solution times compared with classical hand-crafted relaxation methods.
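
    A minimal sketch of the learned-smoother idea (my own toy; the crude error-damping loss below is a stand-in for the paper's loss derived from multigrid convergence theory): a small learnable stencil maps the current residual to a correction, u <- u + S(r), and is trained on a small grid so it can be reused on larger problems of the same PDE class.

```python
# Sketch: a learnable 3x3 stencil as a smoother for the 2D Poisson operator.
import torch
import torch.nn.functional as F

h = 1.0 / 16
lap = torch.tensor([[[[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]]]) / h ** 2
smoother = torch.nn.Conv2d(1, 1, 3, padding=1, bias=False)    # learnable stencil S
opt = torch.optim.Adam(smoother.parameters(), lr=1e-3)

for step in range(2000):
    err = torch.randn(64, 1, 17, 17)          # random error fields on a small grid
    res = F.conv2d(err, lap, padding=1)       # residual r = -A e, with A = -Laplacian
    new_err = err + smoother(res)             # one smoothing sweep: e <- e + S(r)
    loss = new_err.pow(2).mean()              # crude stand-in: damp the error norm
    opt.zero_grad(); loss.backward(); opt.step()
```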

    Deep Domain Decomposition Method: Elliptic Problems

    This paper proposes a deep-learning-based domain decomposition method (DeepDDM), which leverages deep neural networks (DNNs) to discretize the subproblems produced by domain decomposition methods (DDMs) for solving partial differential equations (PDEs). Using a DNN to solve a PDE is a physics-informed learning problem whose objective involves two terms, a domain term and a boundary term, which respectively make the desired solution satisfy the PDE and the corresponding boundary conditions. DeepDDM exchanges subproblem information across the interface in the DDM by adjusting the boundary term when solving each subproblem with a DNN. Benefiting from the simple implementation and mesh-free strategy of using DNNs for PDEs, DeepDDM simplifies the implementation of DDM and makes DDM more flexible for complex PDEs, e.g., those with complex interfaces in the computational domain. This paper first investigates the performance of DeepDDM on elliptic problems, including a model problem and an interface problem. The numerical examples demonstrate that DeepDDM exhibits behavior consistent with conventional DDM: the number of iterations taken by DeepDDM is independent of the network architecture and decreases with increasing overlap size. The performance of DeepDDM on elliptic problems encourages further investigation of its performance on other kinds of PDEs and may provide new insights for improving PDE solvers through deep learning.
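
    A minimal sketch (my own toy setup, not the paper's code) of the DeepDDM structure for -u'' = 1 on (0, 1) with overlapping subdomains (0, 0.6) and (0.4, 1): each outer Schwarz iteration retrains one subdomain PINN using the other network's current interface value as Dirichlet data, which is exactly the boundary-term adjustment described above.

```python
# Sketch of DeepDDM-style alternating Schwarz with two subdomain PINNs.
import torch

def make_net():
    return torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def train(net, a, b, ua, ub, steps=500):
    # Standard PINN solve of -u'' = 1 on (a, b) with u(a) = ua, u(b) = ub.
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    x = torch.linspace(a, b, 64).reshape(-1, 1).requires_grad_(True)
    for _ in range(steps):
        u = net(x)
        du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
        bc = (net(torch.tensor([[a]])) - ua) ** 2 + (net(torch.tensor([[b]])) - ub) ** 2
        loss = (-d2u - 1.0).pow(2).mean() + bc.sum()
        opt.zero_grad(); loss.backward(); opt.step()

net1, net2 = make_net(), make_net()
g1, g2 = torch.tensor([[0.6]]), torch.tensor([[0.4]])     # interface points
for outer in range(10):                                   # Schwarz outer iterations
    train(net1, 0.0, 0.6, 0.0, net2(g1).detach().item())  # uses net2's interface trace
    train(net2, 0.4, 1.0, net1(g2).detach().item(), 0.0)  # uses net1's interface trace
```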