Multipole Graph Neural Operator for Parametric Partial Differential Equations
One of the main challenges in using deep learning-based methods for simulating physical systems and solving partial differential equations (PDEs) is formulating physics-based data in the desired structure for neural networks. Graph neural networks (GNNs) have gained popularity in this area since graphs offer a natural way of modeling particle interactions and provide a clear way of discretizing the continuum models. However, the graphs constructed for approximating such tasks usually ignore long-range interactions due to unfavorable scaling of the computational complexity with respect to the number of nodes. The errors due to these approximations scale with the discretization of the system, thereby not allowing for generalization under mesh-refinement. Inspired by the classical multipole methods, we propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity. Our multi-level formulation is equivalent to recursively adding inducing points to the kernel matrix, unifying GNNs with multi-resolution matrix factorization of the kernel. Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
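The linear-complexity multi-level idea can be caricatured in one dimension: aggregate locally, coarsen, and fold each coarse summary back into every node, so each level costs O(n) and all interaction ranges are covered. A toy sketch, where the pair-averaging coarsening and the `levels` parameter are illustrative choices rather than the paper's kernel construction:

```python
import numpy as np

def multilevel_aggregate(x, levels=3):
    """Toy multi-level aggregation on a 1-D node field: repeatedly coarsen
    by pair-averaging, then broadcast each coarse summary back to all nodes.
    Every node receives information from all ranges in O(n) work per level.
    Hypothetical simplification of the multipole idea, not the paper's model."""
    n = len(x)
    out = x.copy()
    scale = x
    for _ in range(levels):
        # coarsen: average adjacent pairs (pad if odd length)
        if len(scale) % 2:
            scale = np.append(scale, scale[-1])
        scale = 0.5 * (scale[0::2] + scale[1::2])
        # refine: broadcast the coarse summary back to full resolution
        up = np.repeat(scale, n // len(scale) + 1)[:n]
        out = out + up
    return out

field = np.linspace(0.0, 1.0, 8)
print(multilevel_aggregate(field).shape)  # (8,)
```

The recursion mirrors a V-cycle: each level halves the number of nodes, so total work across all levels stays linear.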
Multiscale Neural Operators for Solving Time-Independent PDEs
Time-independent Partial Differential Equations (PDEs) on large meshes pose
significant challenges for data-driven neural PDE solvers. We introduce a novel
graph rewiring technique to tackle some of these challenges, such as
aggregating information across scales and on irregular meshes. Our proposed
approach bridges distant nodes, enhancing the global interaction capabilities
of GNNs. Our experiments on three datasets reveal that GNN-based methods set
new performance standards for time-independent PDEs on irregular meshes.
Finally, we show that our graph rewiring strategy boosts the performance of
baseline methods, achieving state-of-the-art results in one of the tasks.Comment: The Symbiosis of Deep Learning and Differential Equations III @
NeurIPS 202
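One simple way to bridge distant nodes is to promote a few mesh nodes to "hubs" connected to every other node, so any two nodes are at most two hops apart. The sketch below is a hypothetical simplification of graph rewiring, not necessarily the paper's exact scheme:

```python
import random

def rewire_with_hubs(num_nodes, edges, num_hubs=4, seed=0):
    """Toy graph rewiring: pick a few existing nodes as hubs and connect
    each hub to every other node, shrinking the graph diameter to at most
    two hops. Illustrative stand-in for bridging distant mesh nodes."""
    rng = random.Random(seed)
    hubs = rng.sample(range(num_nodes), min(num_hubs, num_nodes))
    new_edges = set(edges)
    for h in hubs:
        for v in range(num_nodes):
            if v != h:
                # store undirected edges in canonical (low, high) order
                new_edges.add((min(h, v), max(h, v)))
    return sorted(new_edges)

# chain graph 0-1-2-...-9: long-range exchange needs 9 hops before rewiring
chain = [(i, i + 1) for i in range(9)]
rewired = rewire_with_hubs(10, chain)
print(len(chain), len(rewired))
```

After rewiring, message passing needs only a constant number of GNN layers for global information exchange, instead of a number that grows with mesh resolution.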
Rapid Seismic Waveform Modeling and Inversion with Universal Neural Operators
Seismic waveform modeling is a powerful tool for determining earth structure
models and unraveling earthquake rupture processes, but it is computationally
expensive. We introduce a scheme to vastly accelerate these calculations with a
recently developed machine learning paradigm called the neural operator. Once
trained, these models can simulate a full wavefield for arbitrary velocity
models at negligible cost. We use a U-shaped neural operator to learn a general
solution operator to the 2D elastic wave equation from an ensemble of numerical
simulations performed with random velocity models and source locations. We show
that full waveform modeling with neural operators is nearly two orders of
magnitude faster than conventional numerical methods, and more importantly, the
trained model enables accurate simulation for arbitrary velocity models, source
locations, and mesh discretization, even when distinctly different from the
training dataset. The method also enables efficient full-waveform inversion
with automatic differentiation.
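The inversion loop described above can be sketched with a toy differentiable surrogate standing in for the trained neural operator; here the gradient is a central finite difference, whereas the paper obtains it by automatic differentiation through the network:

```python
import numpy as np

def invert_velocity(surrogate, observed, v0, lr=0.01, steps=200):
    """Toy waveform-inversion loop: gradient-descend a scalar velocity
    parameter so the surrogate's predicted waveform matches observations.
    The gradient is a central finite difference; with a neural operator
    it would come from automatic differentiation instead."""
    def loss(v):
        return 0.5 * np.sum((surrogate(v) - observed) ** 2)
    v = v0
    for _ in range(steps):
        eps = 1e-6
        grad = (loss(v + eps) - loss(v - eps)) / (2 * eps)
        v -= lr * grad
    return v

t = np.linspace(0.0, 1.0, 64)
basis = np.sin(2 * np.pi * t)
surrogate = lambda v: v * basis       # stand-in for a trained operator
observed = surrogate(2.0)             # "data" generated with true velocity 2.0
v_est = invert_velocity(surrogate, observed, v0=1.5)
print(round(v_est, 2))  # 2.0
```

The appeal of the neural-operator surrogate is exactly this loop: each forward evaluation is cheap, so many gradient steps over the velocity model become affordable.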
Optimizing Carbon Storage Operations for Long-Term Safety
To combat global warming and mitigate the risks associated with climate
change, carbon capture and storage (CCS) has emerged as a crucial technology.
However, safely sequestering CO2 in geological formations for long-term storage
presents several challenges. In this study, we address these issues by modeling
the decision-making process for carbon storage operations as a partially
observable Markov decision process (POMDP). We solve the POMDP using belief
state planning to optimize injector and monitoring well locations, with the
goal of maximizing stored CO2 while maintaining safety. Empirical results in
simulation demonstrate that our approach is effective in ensuring safe
long-term carbon storage operations. We showcase the flexibility of our
approach by introducing three different monitoring strategies and examining
their impact on decision quality. Additionally, we introduce a neural network
surrogate model for the POMDP decision-making process to handle the complex
dynamics of the multi-phase flow. We also investigate the effects of different
fidelity levels of the surrogate model on decision quality.
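Belief state planning of the kind described above rests on a belief update step. A generic particle-filter update, shown here as an illustration rather than the paper's planner or flow model, looks like:

```python
import random

def update_belief(particles, weights, observation, likelihood, transition, rng):
    """One belief-update step for a POMDP: propagate each sampled hidden
    state through the dynamics, reweight by the observation likelihood,
    and renormalize. Generic particle-filter illustration."""
    new_particles = [transition(p, rng) for p in particles]
    new_weights = [w * likelihood(observation, p)
                   for w, p in zip(weights, new_particles)]
    total = sum(new_weights) or 1.0
    return new_particles, [w / total for w in new_weights]

# toy example: hidden CO2 plume parameter with a noisy scalar observation
rng = random.Random(1)
particles = [0.1, 0.5, 0.9]           # hypotheses about the hidden state
weights = [1 / 3] * 3
obs = 0.5
lik = lambda o, s: max(1e-9, 1.0 - abs(o - s))  # toy likelihood model
trans = lambda s, r: s                # static dynamics for illustration
particles, weights = update_belief(particles, weights, obs, lik, trans, rng)
print(round(sum(weights), 6))  # 1.0
```

In the paper's setting the expensive part is the transition model (multi-phase flow), which is why a neural surrogate for the dynamics pays off.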
Accelerated Solutions of Coupled Phase-Field Problems using Generative Adversarial Networks
Multiphysics problems such as multicomponent diffusion, phase transformations
in multiphase systems and alloy solidification involve numerical solution of a
coupled system of nonlinear partial differential equations (PDEs). Numerical
solutions of these PDEs using mesh-based methods require spatiotemporal
discretization of these equations. Hence, the numerical solutions are often
sensitive to discretization parameters and may have inaccuracies (resulting
from grid-based approximations). Moreover, the choice of a finer mesh for
higher accuracy makes these methods computationally expensive. Neural
network-based PDE solvers are emerging as robust alternatives to conventional
numerical methods because they use machine-learnable structures that are
grid-independent, fast, and accurate. However, neural network-based solvers
require large amounts of training data, which affects their generalizability
and scalability. These
concerns become more acute for coupled systems of time-dependent PDEs. To
address these issues, we develop a new neural network based framework that uses
encoder-decoder based conditional Generative Adversarial Networks with ConvLSTM
layers to solve a system of Cahn-Hilliard equations. These equations govern
microstructural evolution of a ternary alloy undergoing spinodal decomposition
when quenched inside a three-phase miscibility gap. We show that the trained
models are mesh and scale-independent, thereby warranting application as
effective neural operators.
Comment: 18 pages, 21 figures (including subfigures). Will be submitted to the journal "Computational Materials Science" soon.
Bayesian Inversion with Neural Operator (BINO) for Modeling Subdiffusion: Forward and Inverse Problems
Fractional diffusion equations have been an effective tool for modeling
anomalous diffusion in complicated systems. However, traditional numerical
methods incur high computational and storage costs because of the
memory effect brought by the convolution integral of time fractional
derivative. We propose a Bayesian Inversion with Neural Operator (BINO) to
overcome the difficulty in traditional methods as follows. We employ a deep
operator network to learn the solution operators for the fractional diffusion
equations, allowing us to swiftly and precisely solve a forward problem for
given inputs (including fractional order, diffusion coefficient, source terms,
etc.). In addition, we integrate the deep operator network with a Bayesian
inversion method for modeling subdiffusion processes and solving inverse
subdiffusion problems, which significantly reduces time costs without
demanding excessive storage. A large number of numerical
experiments demonstrate that the operator learning method proposed in this work
can efficiently solve the forward problems and Bayesian inverse problems of the
subdiffusion equation.
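A deep operator network of the kind BINO employs splits into a branch net that embeds the input function (sampled at fixed sensor points) and a trunk net that embeds the query coordinate, with the solution value given by their dot product. A minimal single-layer sketch, where the widths and tanh layers are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

def deeponet_forward(u_samples, y, Wb, Wt):
    """Minimal DeepONet-style forward pass: a branch embedding of the input
    function and a trunk embedding of the query point, combined by a dot
    product. Single linear+tanh layers for illustration only."""
    branch = np.tanh(Wb @ u_samples)        # (p,) embedding of input function
    trunk = np.tanh(Wt @ np.atleast_1d(y))  # (p,) embedding of query point
    return float(branch @ trunk)

rng = np.random.default_rng(0)
m, p = 16, 8                          # number of sensors, embedding width
Wb = rng.normal(size=(p, m))          # untrained weights, for illustration
Wt = rng.normal(size=(p, 1))
u = np.sin(np.linspace(0, np.pi, m))  # an input function sampled at sensors
print(deeponet_forward(u, 0.3, Wb, Wt))
```

Once trained, evaluating such a network at new fractional orders or source terms is a cheap forward pass, which is what makes the Bayesian inversion loop affordable.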
GNN-based physics solver for time-independent PDEs
Physics-based deep learning frameworks have been shown to be effective in
accurately modeling the dynamics of complex physical systems with
generalization capability across problem inputs. However, time-independent
problems pose the challenge of requiring long-range exchange of information
across the computational domain for obtaining accurate predictions. In the
context of graph neural networks (GNNs), this calls for deeper networks, which,
in turn, may compromise or slow down the training process. In this work, we
present two GNN architectures to overcome this challenge - the Edge Augmented
GNN and the Multi-GNN. We show that both these networks perform significantly
better (by a factor of 1.5 to 2) than baseline methods when applied to
time-independent solid mechanics problems. Furthermore, the proposed
architectures generalize well to unseen domains, boundary conditions, and
materials. Here, the treatment of variable domains is facilitated by a novel
coordinate transformation that enables rotation and translation invariance. By
broadening the range of problems that neural operators based on graph neural
networks can tackle, this paper provides the groundwork for their application
to complex scientific and industrial settings.
Comment: 12 pages, 2 figures
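A coordinate transformation enabling rotation and translation invariance can be sketched by centering the node cloud at its centroid and aligning its principal axes with the coordinate axes; this is one plausible construction, not necessarily the paper's exact transform:

```python
import numpy as np

def canonicalize(coords):
    """Hypothetical canonical coordinate transform: translate the node
    cloud to its centroid and rotate its principal axes onto the coordinate
    axes, so rigid motions of the domain map to the same representation
    (up to per-axis sign). Illustrative sketch only."""
    centered = coords - coords.mean(axis=0)
    # principal axes from the eigenvectors of the scatter matrix
    _, vecs = np.linalg.eigh(centered.T @ centered)
    return centered @ vecs

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0], [0.0, 2.0]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
a = canonicalize(pts)
b = canonicalize(pts @ R.T + 3.0)  # rotated and translated copy
print(np.allclose(np.abs(a), np.abs(b), atol=1e-6))  # True (up to axis signs)
```

Feeding such canonical coordinates to a GNN means a rotated or shifted domain produces the same node features, so the network need not learn rigid-motion equivariance from data.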
Fourier Neural Operator for Parametric Partial Differential Equations
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation (including the turbulent regime). Our Fourier neural operator shows state-of-the-art performance compared to existing neural network methodologies and is up to three orders of magnitude faster than traditional PDE solvers.
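The core Fourier layer can be sketched in a few lines: transform the input to frequency space, keep only the lowest modes, multiply by learned complex weights, and transform back. A single-channel NumPy illustration (real FNO layers also carry channel mixing and a pointwise linear path):

```python
import numpy as np

def spectral_conv_1d(u, weights, modes):
    """Sketch of the Fourier-layer kernel operation: FFT the input, truncate
    to the lowest `modes` frequencies, multiply by learned complex weights,
    and inverse FFT. Single-channel illustration of the idea, not the full
    FNO layer."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights[:modes] * u_hat[:modes]  # truncate + multiply
    return np.fft.irfft(out_hat, n=len(u))

n, modes = 64, 8
rng = np.random.default_rng(0)
w = rng.normal(size=modes) + 1j * rng.normal(size=modes)  # "learned" weights
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)                     # low-frequency input passes through
v = spectral_conv_1d(u, w, modes)
print(v.shape)  # (64,)
```

Because the weights act on Fourier coefficients rather than grid points, the same layer can be evaluated on any discretization, which is the source of the operator's resolution invariance.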