145 research outputs found

    Physics-informed neural networks with residual/gradient-based adaptive sampling methods for solving PDEs with sharp solutions

    We consider solving forward and inverse problems for PDEs with sharp solutions using physics-informed neural networks (PINNs). In particular, to better capture the sharpness of the solution, we propose adaptive sampling methods (ASMs) based on the residual and the gradient of the solution. We first present a residual-only ASM algorithm, denoted ASM I. In this approach, we train the neural network with a small number of residual points and divide the computational domain into a number of sub-domains; we then identify the sub-domain with the largest mean absolute residual and add, as new residual points, the points with the largest absolute residual values in that sub-domain. We further develop a second algorithm (denoted ASM II) based on both the residual and the gradient of the solution, since the residual alone may not efficiently capture the sharpness of the solution. The procedure of ASM II is almost the same as that of ASM I, except that ASM II adds new residual points that have both large residual and large gradient. To demonstrate the effectiveness of the present methods, we employ both ASM I and ASM II to solve a number of PDEs, including the Burgers equation, the compressible Euler equations, the Poisson equation over an L-shaped domain, and a high-dimensional Poisson equation. The numerical results show that sharp solutions are well approximated by either algorithm, and both deliver much more accurate solutions than the original PINNs with the same number of residual points. Moreover, ASM II outperforms ASM I in terms of accuracy, efficiency, and stability. Comment: 22 pages, 9 figures
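    A minimal sketch of one ASM I selection step as described above, for a 1D domain: the candidate pool size, the number of sub-domains, and the count of added points are illustrative choices, and `pde_residual` stands in for the trained network's PDE residual. ASM II would additionally weight the selection by the magnitude of the solution gradient.

```python
import numpy as np

def asm1_new_points(pde_residual, domain=(0.0, 1.0), n_subdomains=10,
                    n_candidates=1000, k_new=20, rng=None):
    """One ASM I selection step (sketch): locate the sub-domain with the largest
    mean |residual| over a random candidate pool, then return the k_new candidates
    with the largest |residual| inside that sub-domain as new residual points."""
    rng = np.random.default_rng() if rng is None else rng
    a, b = domain
    x = rng.uniform(a, b, size=n_candidates)            # candidate pool
    r = np.abs(pde_residual(x))                         # |PDE residual| at candidates
    edges = np.linspace(a, b, n_subdomains + 1)
    bins = np.clip(np.digitize(x, edges) - 1, 0, n_subdomains - 1)
    means = np.array([r[bins == j].mean() if np.any(bins == j) else 0.0
                      for j in range(n_subdomains)])
    worst = int(np.argmax(means))                       # sub-domain with largest mean |residual|
    xs, rs = x[bins == worst], r[bins == worst]
    return xs[np.argsort(rs)[-k_new:]]                  # largest-|residual| points within it

# Stand-in residual with a sharp feature near x = 0.5 (for illustration only):
new_points = asm1_new_points(lambda x: np.exp(-500.0 * (x - 0.5) ** 2))
```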

    Analysis and Hermite spectral approximation of diffusive-viscous wave equations in unbounded domains arising in geophysics

    The diffusive-viscous wave equation (DVWE) is widely used in seismic exploration since it can explain frequency-dependent seismic reflections in a reservoir containing hydrocarbons. Most existing numerical approximations of the DVWE are based on domain truncation with ad hoc boundary conditions, which generates artificial reflections as well as truncation errors. To this end, we consider the DVWE directly in unbounded domains. We first show the existence, uniqueness, and regularity of the solution of the DVWE. We then develop a Hermite spectral Galerkin scheme and derive the corresponding error estimate, showing that the Hermite spectral Galerkin approximation attains a spectral rate of convergence provided the solution is sufficiently smooth. Several numerical experiments with constant and discontinuous coefficients are provided to verify the theoretical results and to demonstrate the effectiveness of the proposed method. In particular, we verify the error estimate for both smooth and non-smooth source terms and initial conditions. In view of the error estimate and the regularity result, we show the sharpness of the convergence rate in terms of the regularity of the source term. We also show that no artificial reflections occur with the present method. Comment: 32 pages, 27 figures
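    For context, a commonly used form of the DVWE (the notation here is generic and may differ from the paper's) is
    \[ \partial_t^2 u + \gamma\,\partial_t u - \eta\,\partial_t \Delta u - c^2 \Delta u = f, \]
    where \(\gamma\) and \(\eta\) are the diffusive and viscous attenuation coefficients, \(c\) is the wave speed, and \(f\) is the source term; because Hermite functions decay at infinity, the Galerkin expansion requires no domain truncation or artificial boundary condition.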

    Operator Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations Characterized by Sharp Solutions

    Physics-informed Neural Networks (PINNs) have been shown to be a promising approach for solving both forward and inverse problems of partial differential equations (PDEs). Meanwhile, the neural operator approach, including methods such as the Deep Operator Network (DeepONet) and the Fourier neural operator (FNO), has been introduced and extensively employed to approximate solutions of PDEs. Nevertheless, solving problems with sharp solutions poses a significant challenge for both approaches. To address this issue, we propose a novel framework termed Operator Learning Enhanced Physics-informed Neural Networks (OL-PINN). We first use DeepONet to learn the solution operator for a set of smooth problems related to the PDEs characterized by sharp solutions. We then integrate the pre-trained DeepONet with a PINN to solve the target sharp-solution problem. We showcase the efficacy of OL-PINN on various problems, including the nonlinear diffusion-reaction equation, the Burgers equation, and the incompressible Navier-Stokes equation at high Reynolds number. Compared with the vanilla PINN, the proposed method requires only a small number of residual points to achieve strong generalization; it also substantially improves accuracy while ensuring a robust training process. Furthermore, OL-PINN inherits the advantage of PINNs for solving inverse problems. To this end, we apply OL-PINN to problems with only partial boundary conditions, which usually cannot be solved by classical numerical methods, demonstrating its capacity to solve ill-posed and, consequently, more complex inverse problems. Comment: Preprint submitted to Elsevier
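    A sketch of one plausible coupling in the spirit of OL-PINN, assuming a frozen, pre-trained DeepONet prediction at the collocation points serves as an extra supervision target alongside the usual PDE-residual and boundary losses; the 1D Poisson setup, network sizes, loss weights, and the stand-in operator output are illustrative and not the paper's configuration.

```python
import torch

torch.manual_seed(0)
pinn = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                           torch.nn.Linear(32, 32), torch.nn.Tanh(),
                           torch.nn.Linear(32, 1))

def pde_residual(net, x):
    """Residual of a stand-in 1D Poisson problem -u'' = f with a sharp source."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.exp(-500.0 * (x - 0.5) ** 2)
    return -d2u - f

x_res = torch.rand(200, 1)                         # collocation (residual) points
x_bc = torch.tensor([[0.0], [1.0]])                # boundary points
u_bc = torch.zeros(2, 1)                           # boundary data
u_op = torch.zeros(200, 1)                         # stand-in for the frozen DeepONet prediction

opt = torch.optim.Adam(pinn.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = (pde_residual(pinn, x_res).pow(2).mean()     # physics loss
            + (pinn(x_bc) - u_bc).pow(2).mean()         # boundary loss
            + (pinn(x_res) - u_op).pow(2).mean())       # operator-consistency loss
    loss.backward()
    opt.step()
```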

    DeepM&Mnet for hypersonics: Predicting the coupled flow and finite-rate chemistry behind a normal shock using neural-network approximation of operators

    In high-speed flow past a normal shock, the fluid temperature rises rapidly, triggering downstream chemical dissociation reactions. The chemical changes lead to appreciable changes in fluid properties, and the resulting coupled multiphysics and multiscale dynamics are challenging to resolve numerically; conventional computational fluid dynamics (CFD) incurs excessive computational cost. Here, we propose a new, efficient approach, assuming that some sparse measurements of the state variables are available and can be seamlessly integrated into the simulation algorithm. We employ a special neural network for approximating nonlinear operators, the DeepONet, to predict each individual field separately, given inputs from the remaining fields of the coupled multiphysics system. We demonstrate the effectiveness of DeepONet by predicting five species in the non-equilibrium chemistry downstream of a normal shock at high Mach numbers, as well as the velocity and temperature fields. We show that, once trained, DeepONets can be over five orders of magnitude faster than the CFD solver used to generate the training data and yield good accuracy for unseen Mach numbers within the training range. Outside this range, DeepONet can still predict accurately and quickly if a few sparse measurements are available. We then propose a composite supervised neural network, DeepM&Mnet, that uses multiple pre-trained DeepONets as building blocks together with scattered measurements to infer all seven fields in the entire domain of interest. Two DeepM&Mnet architectures are tested, and we demonstrate their accuracy and capacity for efficient data assimilation. DeepM&Mnet is simple and general: it can be employed to construct complex multiphysics and multiscale models and to assimilate sparse measurements using pre-trained DeepONets in a "plug-and-play" mode. Comment: 30 pages, 17 figures
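    A minimal DeepONet forward pass for reference, with the input function sampled at a fixed set of sensors; the layer widths, sensor count, and 1D query coordinate are illustrative only. In a DeepM&Mnet-style assembly, several pre-trained operators of this kind would be frozen and coupled, with scattered measurements constraining the remaining unknowns.

```python
import torch

class TinyDeepONet(torch.nn.Module):
    """Minimal DeepONet sketch: a branch net encodes the input function sampled at
    m sensor locations, a trunk net encodes the query coordinate y, and the output
    is their inner product, G(f)(y) ~ sum_k b_k(f) * t_k(y)."""
    def __init__(self, m_sensors=50, p=64):
        super().__init__()
        self.branch = torch.nn.Sequential(torch.nn.Linear(m_sensors, 64), torch.nn.Tanh(),
                                          torch.nn.Linear(64, p))
        self.trunk = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                                         torch.nn.Linear(64, p))

    def forward(self, f_sensors, y):
        # f_sensors: (batch, m_sensors) samples of the input field at the sensors
        # y:         (batch, 1) query coordinates
        return (self.branch(f_sensors) * self.trunk(y)).sum(dim=-1, keepdim=True)

net = TinyDeepONet()
u_pred = net(torch.rand(8, 50), torch.rand(8, 1))   # predicted output field at 8 query points
```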