722 research outputs found

    An Expert's Guide to Training Physics-informed Neural Networks

    Physics-informed neural networks (PINNs) have been popularized as a deep learning framework that can seamlessly synthesize observational data and partial differential equation (PDE) constraints. Their practical effectiveness, however, can be hampered by training pathologies, and oftentimes by poor choices made by users who lack deep learning expertise. In this paper we present a series of best practices that can significantly improve the training efficiency and overall accuracy of PINNs. We also put forth a series of challenging benchmark problems that highlight some of the most prominent difficulties in training PINNs, and present comprehensive and fully reproducible ablation studies that demonstrate how different architecture choices and training strategies affect the test accuracy of the resulting models. We show that the methods and guiding principles put forth in this study lead to state-of-the-art results and provide strong baselines that future studies should use for comparison purposes. To this end, we also release a highly optimized library in JAX that can be used to reproduce all results reported in this paper, enable future research studies, and facilitate easy adaptation to new use-case scenarios. Comment: 36 pages, 25 figures, 13 tables
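The composite loss that PINNs minimize, a data-fitting term plus a PDE-residual term with scaling weights, can be sketched as follows. This is a minimal illustration, not the paper's JAX library: it uses the toy ODE u'' + u = 0 and finite differences in place of automatic differentiation, and the names (`pinn_loss`, `lam_pde`, `lam_data`) are invented for this sketch.

```python
import numpy as np

def pinn_loss(u, x, u_data, x_data, lam_pde=1.0, lam_data=1.0):
    """Composite PINN-style loss for the toy ODE u'' + u = 0 on a uniform grid.

    The PDE residual is estimated with central finite differences purely
    for illustration; a real PINN would use automatic differentiation.
    """
    h = x[1] - x[0]
    u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2   # interior second derivative
    pde_residual = u_xx + u[1:-1]                    # residual of u'' + u = 0
    loss_pde = np.mean(pde_residual**2)
    # Data term: mismatch against observations at the sensor locations x_data
    loss_data = np.mean((np.interp(x_data, x, u) - u_data)**2)
    return lam_pde * loss_pde + lam_data * loss_data

x = np.linspace(0.0, np.pi, 201)
u = np.sin(x)                                        # exact solution of u'' + u = 0
loss = pinn_loss(u, x, u_data=np.sin(x[::50]), x_data=x[::50])
```

Because the exact solution is plugged in, both terms are near zero; during training, a candidate solution that violates either the data or the PDE raises the corresponding term.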

    Physics-informed neural networks for modeling rate- and temperature-dependent plasticity

    This work presents a physics-informed neural network (PINN) based framework to model the strain-rate and temperature dependence of the deformation fields in elastic-viscoplastic solids. To avoid unbalanced back-propagated gradients during training, the proposed framework uses a simple strategy, with no added computational complexity, for selecting scalar weights that balance the interplay between the different terms in the physics-based loss function. In addition, we highlight a fundamental challenge involving the selection of appropriate model outputs so that the mechanical problem can be faithfully solved using a PINN-based approach. We demonstrate the effectiveness of this approach by studying two test problems modeling elastic-viscoplastic deformation in solids at different strain rates and temperatures, respectively. Our results show that the proposed PINN-based approach can accurately predict the spatio-temporal evolution of deformation in elastic-viscoplastic materials. Comment: 11 pages, 7 figures; accepted at the NeurIPS 2022 Machine Learning and the Physical Sciences workshop
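One common gradient-balancing heuristic of the kind described above (the paper's exact rule may differ) scales each auxiliary loss term so that its back-propagated gradient magnitude matches that of the PDE residual term. A minimal sketch, with all names invented:

```python
import numpy as np

def balance_weights(grad_pde, grad_terms):
    """Choose scalar weights so each auxiliary loss term's back-propagated
    gradient is on the same scale as the PDE residual gradient.

    grad_pde   : 1-D array, gradient of the PDE loss w.r.t. the parameters
    grad_terms : dict name -> 1-D gradient array for each auxiliary term
    The rule max|grad_pde| / mean|grad_i| is one common heuristic;
    the paper's specific strategy is not reproduced here.
    """
    scale = np.max(np.abs(grad_pde))
    return {name: scale / np.mean(np.abs(g)) for name, g in grad_terms.items()}

rng = np.random.default_rng(0)
g_pde = rng.normal(0.0, 1.0, 1000)    # large-magnitude PDE-residual gradient
g_bc = rng.normal(0.0, 1e-3, 1000)    # tiny boundary-condition gradient
w = balance_weights(g_pde, {"bc": g_bc})
```

After weighting, the boundary term's mean gradient magnitude is lifted to the PDE gradient's scale, so neither term dominates the descent direction.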

    Modeling Power Systems Dynamics with Symbolic Physics-Informed Neural Networks

    In recent years, scientific machine learning, particularly physics-informed neural networks (PINNs), has introduced innovative methods for understanding the differential equations that describe power system dynamics, providing a more efficient alternative to traditional methods. However, using a single neural network to capture the patterns of all variables requires a sufficiently large network, leading to long training times and still-high computational costs. In this paper, we interface PINNs with symbolic techniques to construct multiple single-output neural networks, taking the loss function apart and integrating it over the relevant domain. We also reweight the components of the loss function to improve the performance of the network on unstable systems. Our results show that the symbolic PINNs provide higher accuracy with significantly fewer parameters and faster training. By using the adaptive weighting method, the symbolic PINNs avoid the vanishing gradient problem and numerical instability.
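The decomposition idea, one single-output surrogate per state variable and one loss term per governing equation, can be illustrated with the classical swing equation, a standard model of power system dynamics (the paper's exact system and decomposition are assumptions here):

```python
import numpy as np

# Classical per-unit swing equation:
#   d(delta)/dt = omega
#   M * d(omega)/dt = P - D * omega - sin(delta)
# Instead of one network with two outputs, represent delta and omega by
# separate single-output surrogates, and split the physics loss into one
# term per equation, which can then be reweighted individually.
M, D, P = 1.0, 0.2, 0.0

def split_losses(delta, omega, t):
    """Per-equation residual losses (derivatives via finite differences
    for illustration; a PINN would use automatic differentiation)."""
    h = t[1] - t[0]
    r1 = np.gradient(delta, h) - omega                       # first equation
    r2 = M * np.gradient(omega, h) - (P - D * omega - np.sin(delta))  # second
    return np.mean(r1**2), np.mean(r2**2)

t = np.linspace(0.0, 1.0, 101)
delta = np.zeros_like(t)          # equilibrium trajectory for P = 0
omega = np.zeros_like(t)
loss1, loss2 = split_losses(delta, omega, t)
```

At the equilibrium both residuals vanish; away from it, the two terms can be scaled independently, which is the hook for the adaptive reweighting described in the abstract.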

    Physics-Informed Neural Networks for 2nd order ODEs with sharp gradients

    In this work, four different methods based on Physics-Informed Neural Networks (PINNs) for solving Differential Equations (DEs) are compared: Classic-PINN, which uses Deep Neural Networks (DNNs) to approximate the DE solution; Deep-TFC, which improves the efficiency of classic-PINN by employing the constrained expression from the Theory of Functional Connections (TFC) so as to analytically satisfy the DE constraints; PIELM, which improves the accuracy of classic-PINN by employing a single-layer NN trained via the Extreme Learning Machine (ELM) algorithm; and X-TFC, which uses both the constrained expression and ELM. The last of these was recently introduced to solve challenging problems affected by discontinuities, learning solutions in cases where the other three methods fail. The four methods are compared by solving the boundary value problem arising from the 1D steady-state advection–diffusion equation for different values of the diffusion coefficient. The solutions of the DEs exhibit steeper gradients as the diffusion coefficient decreases, increasing the difficulty of the problem.
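The TFC constrained expression mentioned above can be shown concretely for Dirichlet boundary conditions on [0, 1]: the boundary values are embedded analytically, so they hold for any free function and never enter the training loss. A minimal sketch with invented names:

```python
import numpy as np

def constrained_expression(g, x, u0, u1):
    """TFC-style constrained expression on [0, 1]:
        u(x) = g(x) + (1 - x) * (u0 - g(0)) + x * (u1 - g(1))
    Satisfies u(0) = u0 and u(1) = u1 exactly for ANY free function g,
    so the boundary conditions are met analytically rather than penalized.
    """
    return g(x) + (1.0 - x) * (u0 - g(0.0)) + x * (u1 - g(1.0))

g = lambda x: np.sin(3.0 * x) + 0.5 * x**2   # arbitrary free function
x = np.linspace(0.0, 1.0, 5)
u = constrained_expression(g, x, u0=2.0, u1=-1.0)
# u[0] == 2.0 and u[-1] == -1.0 regardless of the choice of g
```

In Deep-TFC the free function g is a DNN and in X-TFC it is an ELM-trained single-layer network; either way, training only has to fit the differential equation itself.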

    Neural Eikonal Solver: improving accuracy of physics-informed neural networks for solving eikonal equation in case of caustics

    The concept of physics-informed neural networks has become a useful tool for solving differential equations due to its flexibility. A few approaches use this concept to solve the eikonal equation, which describes the first-arrival traveltimes of acoustic and elastic waves in smooth heterogeneous velocity models. However, the eikonal problem is exacerbated by velocity models that produce caustics, resulting in instabilities and deteriorating accuracy due to the non-smooth solution behaviour. In this paper, we revisit the problem of solving the eikonal equation with neural networks to tackle these caustic pathologies. We introduce the novel Neural Eikonal Solver (NES) for solving the isotropic eikonal equation in two formulations: the one-point problem, for a fixed source location, and the two-point problem, for an arbitrary source-receiver pair. We present several techniques that provide stability in velocity models producing caustics: improved factorization, a non-symmetric loss function based on the Hamiltonian, Gaussian activation, and symmetrization. In our tests, NES achieved a relative mean absolute error of about 0.2-0.4% with respect to the second-order factored Fast Marching Method, and outperformed existing neural-network solvers, giving 10-60 times lower errors and 2-30 times faster training. The inference time of NES is comparable with Fast Marching. The one-point NES provides the most accurate solution, whereas the two-point NES provides slightly lower accuracy but an extremely compact representation. It can be useful in various seismic applications where massive computations are required (millions of source-receiver pairs): ray modeling, traveltime tomography, hypocenter localization, and Kirchhoff migration. Comment: The paper has 14 pages and 6 figures. Source code is available at https://github.com/sgrubas/NE
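Factorization for the eikonal equation is commonly of the form T(x) = T0(x) * tau(x), where T0 is the analytic homogeneous-medium traveltime that absorbs the source singularity, leaving only the smooth factor tau for the network to learn (NES's improved factorization may differ; this sketch checks the idea in a 1D homogeneous medium):

```python
import numpy as np

# Factored eikonal: T(x) = T0(x) * tau(x), with
# T0(x) = |x - xs| / v(xs) the analytic traveltime for a homogeneous
# medium at the source velocity. T0 carries the singular behaviour at
# the source, so the learned factor tau stays smooth.
v0 = 2.0                          # homogeneous velocity (km/s) for this check
xs = 0.0                          # source position
x = np.linspace(0.1, 5.0, 500)    # receivers away from the source
T0 = np.abs(x - xs) / v0
tau = np.ones_like(x)             # in a homogeneous medium tau == 1 exactly
T = T0 * tau

# The eikonal equation |dT/dx| = 1/v should hold pointwise
slowness = np.abs(np.gradient(T, x[1] - x[0]))
```

In a heterogeneous model tau deviates from 1 and becomes the network's output, while T0 keeps the product well behaved near the source.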

    About optimal loss function for training physics-informed neural networks under respecting causality

    A method is presented that reduces a problem described by differential equations with initial and boundary conditions to one described by differential equations alone. The advantage of using the modified problem within the physics-informed neural network (PINN) methodology is that the loss function can be represented as a single term associated with the differential equations, eliminating the need to tune scaling coefficients for the terms related to boundary and initial conditions. Weighted loss functions respecting causality are modified, and new weighted loss functions based on generalized functions are derived. Numerical experiments have been carried out for a number of problems, demonstrating the accuracy of the proposed methods. Comment: 25 pages, 7 figures, 6 tables
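A causality-respecting weighting of the kind this paper modifies assigns each temporal slab of collocation points a weight that decays with the accumulated residual of earlier slabs, so later times are only trained in earnest once earlier times are resolved. A sketch under that assumption (the paper's modified weights are not reproduced here):

```python
import numpy as np

def causal_weights(slab_losses, eps=1.0):
    """Causality-respecting weights for temporal loss slabs:
        w_i = exp(-eps * sum_{k < i} L_k)
    The residual at time t_i only receives full weight once all
    earlier residuals L_k are small.
    """
    cum = np.concatenate(([0.0], np.cumsum(slab_losses)[:-1]))
    return np.exp(-eps * cum)

L = np.array([0.5, 0.5, 0.5, 0.5])   # equal residuals in four time slabs
w = causal_weights(L)
# w[0] == 1.0, and weights decay for later slabs while earlier
# residuals remain large
```

As training drives the early residuals toward zero, the cumulative sums shrink and the later weights recover toward 1, sweeping the effective training window forward in time.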