
    A spectral penalty method for two-sided fractional differential equations with general boundary conditions

    We consider spectral approximations to the conservative form of the two-sided Riemann-Liouville (R-L) and Caputo fractional differential equations (FDEs) with nonhomogeneous Dirichlet (fractional and classical, respectively) and Neumann (fractional) boundary conditions. In particular, we develop a spectral penalty method (SPM) that uses a Jacobi poly-fractonomial approximation for the conservative R-L FDEs and a polynomial approximation for the conservative Caputo FDEs. We establish the well-posedness of the corresponding weak problems and analyze sufficient conditions for the coercivity of the SPM for different types of fractional boundary value problems. This analysis allows us to estimate proper values of the penalty parameters at the boundary points. We present several numerical examples to verify the theory and demonstrate the high accuracy of the SPM for both stationary and time-dependent FDEs. Moreover, we compare the results against a Petrov-Galerkin spectral tau method (PGS-τ, an extension of [Z. Mao, G.E. Karniadakis, SIAM J. Numer. Anal., 2018]) and demonstrate the superior accuracy of the SPM in all cases considered.
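
    The abstract refers to a Jacobi poly-fractonomial basis for the R-L case. As a minimal illustration, the sketch below evaluates the left-sided poly-fractonomials of Zayernouri and Karniadakis, (1+x)^μ P_n^(-μ,μ)(x) on [-1, 1], whose (1+x)^μ factor captures the endpoint singularity typical of R-L solutions. The specific basis, normalization, and two-sided construction used in the paper may differ; this is only the general form of such a basis function.

    ```python
    # Sketch of a left-sided Jacobi poly-fractonomial (an assumption: the standard
    # Zayernouri-Karniadakis form, not necessarily the paper's exact basis).
    import numpy as np
    from scipy.special import eval_jacobi

    def poly_fractonomial(n, mu, x):
        """(1+x)^mu * P_n^{(-mu, mu)}(x) on [-1, 1]: a non-polynomial basis function
        whose (1+x)^mu factor matches the endpoint singularity of R-L solutions."""
        return (1.0 + x) ** mu * eval_jacobi(n, -mu, mu, x)

    x = np.linspace(-1.0, 1.0, 5)
    print(poly_fractonomial(3, 0.5, x))  # sample values for n = 3, mu = 1/2
    ```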

    Variational Physics-Informed Neural Networks For Solving Partial Differential Equations

    Physics-informed neural networks (PINNs) [31] use automatic differentiation to solve partial differential equations (PDEs) by penalizing the PDE residual in the loss function at a random set of points in the domain of interest. Here, we develop a Petrov-Galerkin version of PINNs based on the nonlinear approximation of deep neural networks (DNNs), selecting the trial space to be the space of neural networks and the test space to be the space of Legendre polynomials. We formulate the variational residual of the PDE using the DNN approximation by incorporating the variational form of the problem into the loss function of the network, and we construct a variational physics-informed neural network (VPINN). By integrating the integrand in the variational form by parts, we lower the order of the differential operators represented by the neural networks, effectively reducing the training cost of VPINNs while increasing their accuracy compared to PINNs, which essentially employ delta test functions. For shallow networks with one hidden layer, we analytically obtain explicit forms of the variational residual. We demonstrate the performance of the new formulation on several examples that show clear advantages of VPINNs over PINNs in terms of both accuracy and speed.
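
    To make the construction concrete, the following is a minimal sketch of a VPINN loss for the model problem -u''(x) = f(x) on (-1, 1) with u(±1) = 0. Several choices here are illustrative assumptions rather than the paper's setup: the test functions φ_k = P_{k+1} - P_{k-1} (Legendre combinations vanishing at the endpoints, so the boundary term from integration by parts drops out), hard-enforced Dirichlet conditions via a (1 - x²) factor, and a manufactured solution u = sin(πx). The key step the abstract describes is visible in the residual: after one integration by parts, only first derivatives of the network appear.

    ```python
    import numpy as np
    import torch

    # Gauss-Legendre quadrature on (-1, 1) for the variational integrals
    xq, wq = np.polynomial.legendre.leggauss(64)

    def leg(n, x, deriv=False):
        """Evaluate the Legendre polynomial P_n (or its derivative) at points x."""
        c = np.zeros(n + 1); c[n] = 1.0
        if deriv:
            c = np.polynomial.legendre.legder(c)
        return np.polynomial.legendre.legval(x, c)

    K = 10  # number of Legendre-based test functions
    # phi_k = P_{k+1} - P_{k-1} vanishes at x = +-1, killing the boundary term
    phi  = np.stack([leg(k + 1, xq) - leg(k - 1, xq) for k in range(1, K + 1)])
    dphi = np.stack([leg(k + 1, xq, True) - leg(k - 1, xq, True) for k in range(1, K + 1)])

    x = torch.tensor(xq, dtype=torch.float64, requires_grad=True).unsqueeze(1)
    w = torch.tensor(wq, dtype=torch.float64).unsqueeze(1)
    phi_t, dphi_t = torch.tensor(phi), torch.tensor(dphi)   # shape (K, Nq)
    f = np.pi**2 * torch.sin(np.pi * x.detach())             # source for u = sin(pi x)

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1),
    ).double()

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(3000):
        opt.zero_grad()
        u = (1 - x**2) * net(x)                                      # u(+-1) = 0 by construction
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]   # u'(x) via autodiff
        # Variational residual after integration by parts:
        #   R_k = \int u' phi_k' dx - \int f phi_k dx   (boundary term vanishes)
        R = dphi_t @ (w * du) - phi_t @ (w * f)
        loss = (R**2).sum()
        loss.backward()
        opt.step()

    err = ((1 - x**2) * net(x) - torch.sin(np.pi * x)).abs().max()
    print(f"max error vs sin(pi x): {err.item():.2e}")
    ```

    Note the contrast with a plain PINN loss, which would require u'' (a second autograd pass) at random collocation points; here the weak form needs only u', which is what the abstract means by lowering the order of the differential operators represented by the network.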