    Deep Conservation: A latent-dynamics model for exact satisfaction of physical conservation laws

    This work proposes an approach for latent-dynamics learning that exactly enforces physical conservation laws. The method comprises two steps. First, it computes a low-dimensional embedding of the high-dimensional dynamical-system state using deep convolutional autoencoders; this defines a low-dimensional nonlinear manifold on which the state is subsequently enforced to evolve. Second, it defines a latent-dynamics model that associates the time evolution of the latent state with the solution to a constrained optimization problem. Here, the objective function is the sum of squared conservation-law violations over control volumes within a finite-volume discretization of the problem, while nonlinear equality constraints explicitly enforce conservation over prescribed subdomains. Under modest conditions, the resulting dynamics model guarantees that the time evolution of the latent state exactly satisfies the conservation laws over those subdomains.
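
    As a rough illustration of the second step, the following Python sketch advances a latent state by solving the constrained optimization problem described above. The linear decoder, random residual operator, flux term, and subdomain grouping are all toy assumptions standing in for the paper's convolutional autoencoder and finite-volume residuals; this is a minimal sketch, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_full, n_latent, n_cells = 50, 4, 10          # toy problem sizes

        D = rng.standard_normal((n_full, n_latent))    # stand-in linear "decoder"
        R = rng.standard_normal((n_cells, n_full))     # toy per-cell residual operator
        f = rng.standard_normal(n_cells)               # stand-in flux/source term

        def fv_residuals(z_next, z_now, dt):
            # One conservation-law violation per control volume of the toy
            # finite-volume discretization.
            return R @ (D @ z_next - D @ z_now) / dt + f

        subdomains = [np.arange(0, 5), np.arange(5, 10)]   # prescribed cell groups

        def step(z_now, dt=0.01):
            # Objective: sum of squared per-cell violations. Equality
            # constraints: violations summed over each prescribed subdomain
            # must vanish exactly at the accepted latent update.
            obj = lambda z: np.sum(fv_residuals(z, z_now, dt) ** 2)
            cons = [{"type": "eq",
                     "fun": lambda z, idx=idx: fv_residuals(z, z_now, dt)[idx].sum()}
                    for idx in subdomains]
            return minimize(obj, z_now, constraints=cons, method="SLSQP").x

        z1 = step(np.zeros(n_latent))

    Because the subdomain sums enter as equality constraints rather than penalties, the returned latent update cancels the decoded conservation violations exactly over each prescribed subdomain, mirroring the exact-satisfaction guarantee.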

    Fast and Optimal Solution Algorithms for Parameterized Partial Differential Equations

    This dissertation presents efficient and optimal numerical algorithms for solving parameterized partial differential equations (PDEs) in the context of stochastic Galerkin discretization. The stochastic Galerkin method often leads to a large coupled system of algebraic equations whose solution is expensive to compute with traditional solvers. For efficient computation, we present low-rank iterative solvers, which compute low-rank approximations to the solutions of these systems with little loss of accuracy. We first introduce a low-rank iterative solver for linear systems obtained from the stochastic Galerkin discretization of linear elliptic parameterized PDEs. We then present a low-rank nonlinear iterative solver for efficiently computing approximate solutions of nonlinear parameterized PDEs, specifically the incompressible Navier–Stokes equations. Beyond this computational issue, the stochastic Galerkin method also suffers from an optimality issue: in general, it does not minimize the solution error in any measure. To address this, we present an optimal projection method, the least-squares Petrov–Galerkin (LSPG) method. The method is optimal in the sense that it produces the solution minimizing a weighted l2-norm of the solution error over all solutions in a given finite-dimensional subspace, and it can be adapted to minimize the error in different weighted l2-norms simply by choosing a specific weighting function within the least-squares formulation.
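
    The LSPG idea can be stated compactly: restrict the solution to a subspace spanned by a basis V and minimize the weighted residual norm. A minimal Python sketch, assuming a generic linear system and an arbitrary positive weighting function (both toy choices, not taken from the dissertation):

        import numpy as np

        rng = np.random.default_rng(1)
        n, k = 100, 8
        A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned toy system
        b = rng.standard_normal(n)
        V = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal trial basis
        w = rng.uniform(0.5, 2.0, size=n)                 # samples of a weighting function

        # LSPG step: minimize || W^(1/2) (A V y - b) ||_2 over y, then x ~ V y.
        Wh = np.sqrt(w)
        y, *_ = np.linalg.lstsq(Wh[:, None] * (A @ V), Wh * b, rcond=None)
        x_lspg = V @ y

    Swapping in a different weighting function w retargets the minimization to a different weighted l2-norm, which is the adaptability the abstract refers to.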

    Reduced-order modeling for parameterized PDEs via implicit neural representations

    We present a new data-driven reduced-order modeling approach to efficiently solve parametrized partial differential equations (PDEs) for many-query problems. The work is inspired by the concept of implicit neural representations (INRs), which model physics signals continuously and independently of spatial/temporal discretization. The proposed framework encodes the PDE and uses a parametrized neural ODE (PNODE) to learn latent dynamics characterized by multiple PDE parameters; the PNODE can be inferred by a hypernetwork, reducing the difficulty of learning it with a complex multilayer perceptron (MLP). The framework then uses an INR to decode the latent dynamics and reconstruct accurate PDE solutions. Further, a physics-informed loss is introduced to correct predictions for unseen parameter instances; incorporating this loss also enables the model to be fine-tuned in an unsupervised manner on unseen PDE parameters. A numerical experiment on a two-dimensional Burgers equation with a large variation of PDE parameters, evaluated at a large Reynolds number, shows speedups of up to O(10^3) with roughly 1% relative error against ground-truth values. (9 pages, 5 figures; Machine Learning and the Physical Sciences Workshop, NeurIPS 2023.)
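
    The pipeline can be summarized structurally: a hypernetwork maps PDE parameters to the PNODE's weights, the PNODE advances a latent state in time, and an INR decodes that state at arbitrary query coordinates. Below is a minimal untrained Python sketch; all layer sizes, the explicit Euler integrator, and the zero initial latent are illustrative assumptions rather than the paper's architecture.

        import numpy as np

        rng = np.random.default_rng(2)

        def mlp(sizes):
            Ws = [rng.standard_normal((m, n)) / np.sqrt(m)
                  for m, n in zip(sizes, sizes[1:])]
            def forward(x):
                for W in Ws[:-1]:
                    x = np.tanh(x @ W)
                return x @ Ws[-1]
            return forward

        d_z, d_mu, h = 4, 2, 16
        # Hypernetwork: PDE parameters mu -> flattened weights of the PNODE.
        hyper = mlp([d_mu, 32, d_z * h + h * d_z])

        def pnode(z, mu):
            # Latent dynamics dz/dt = f(z; weights inferred from mu).
            w = hyper(mu)
            W1 = w[:d_z * h].reshape(d_z, h)
            W2 = w[d_z * h:].reshape(h, d_z)
            return np.tanh(z @ W1) @ W2

        inr = mlp([1 + d_z, 32, 1])      # INR decoder: (x, z(t)) -> u(x, t)

        def rollout(mu, ts, xs):
            z, us = np.zeros(d_z), []
            for t0, t1 in zip(ts, ts[1:]):           # explicit Euler in latent space
                z = z + (t1 - t0) * pnode(z, mu)
                # The INR evaluates the field continuously at any coordinates xs.
                us.append(inr(np.column_stack([xs, np.tile(z, (len(xs), 1))])))
            return np.stack(us)                      # (n_steps, n_points, 1)

        u = rollout(np.array([0.1, 0.5]), np.linspace(0, 1, 5), np.linspace(0, 1, 8))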

    DPM: A Novel Training Method for Physics-Informed Neural Networks in Extrapolation

    We present a method for learning the dynamics of complex physical processes described by time-dependent nonlinear partial differential equations (PDEs). Our particular interest lies in extrapolating solutions in time beyond the temporal domain used in training. Our baseline is the physics-informed neural network (PINN) [Raissi et al., J. Comput. Phys., 378:686--707, 2019], because the method parameterizes not only the solutions but also the equations that describe the dynamics of physical processes. We demonstrate that PINNs perform poorly on extrapolation tasks in many benchmark problems. To address this, we propose a novel method for training PINNs and demonstrate that the enhanced PINNs can accurately extrapolate solutions in time, with up to 72% smaller errors than existing methods in the standard L2-norm metric. (Accepted by AAAI 2021.)
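
    For context, a PINN parameterizes the solution with a network and penalizes the PDE residual at collocation points. DPM's specific training modification is not detailed in the abstract and is not reproduced here; the Python sketch below shows only a toy baseline loss for u_t + c u_x = 0, with an untrained random-weight network and finite differences standing in for the automatic differentiation a real PINN would use.

        import numpy as np

        rng = np.random.default_rng(3)
        sizes = [2, 32, 32, 1]
        Ws = [rng.standard_normal((m, n)) / np.sqrt(m)
              for m, n in zip(sizes, sizes[1:])]

        def u(t, x):
            # Network ansatz u_theta(t, x) evaluated at collocation points.
            z = np.column_stack([t, x])
            for W in Ws[:-1]:
                z = np.tanh(z @ W)
            return (z @ Ws[-1]).ravel()

        def pinn_loss(t, x, c=1.0, eps=1e-4):
            # PDE residual u_t + c u_x via central finite differences.
            u_t = (u(t + eps, x) - u(t - eps, x)) / (2 * eps)
            u_x = (u(t, x + eps) - u(t, x - eps)) / (2 * eps)
            residual = u_t + c * u_x
            # Toy initial condition u(0, x) = sin(pi x) as a data term.
            ic = u(np.zeros(len(x)), x) - np.sin(np.pi * x)
            return np.mean(residual ** 2) + np.mean(ic ** 2)

        t, x = rng.uniform(0, 1, 64), rng.uniform(-1, 1, 64)
        print(pinn_loss(t, x))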