Graph Neural Networks and Applied Linear Algebra
Sparse matrix computations are ubiquitous in scientific computing. With the
recent interest in scientific machine learning, it is natural to ask how sparse
matrix computations can leverage neural networks (NNs). Unfortunately,
multi-layer perceptron (MLP) neural networks are typically not natural for
either graph or sparse matrix computations. The issue lies with the fact that
MLPs require fixed-sized inputs while scientific applications generally
generate sparse matrices with arbitrary dimensions and a wide range of nonzero
patterns (or matrix graph vertex interconnections). While convolutional NNs
could possibly address matrix graphs where all vertices have the same number of
nearest neighbors, a more general approach is needed for arbitrary sparse
matrices, e.g., those arising from discretized partial differential equations on
unstructured meshes. Graph neural networks (GNNs) are one approach well suited to
sparse matrices. GNNs define aggregation functions (e.g., summations) that
operate on variable size input data to produce data of a fixed output size so
that MLPs can be applied. The goal of this paper is to provide an introduction
to GNNs for a numerical linear algebra audience. Concrete examples are provided
to illustrate how many common linear algebra tasks can be accomplished using
GNNs. We focus on iterative methods that employ computational kernels such as
matrix-vector products, interpolation, relaxation methods, and
strength-of-connection measures. Our GNN examples include cases where
parameters are determined a priori as well as cases where parameters must be
learned. The intent of this article is to help computational scientists
understand how GNNs can be used to adapt machine learning concepts to
computational tasks associated with sparse matrices. It is hoped that this
understanding will stimulate data-driven extensions of classical sparse linear
algebra tasks.
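To make the aggregation idea concrete, the sketch below (not from the paper; all names are illustrative) expresses one of the kernels mentioned above, the sparse matrix-vector product, as GNN-style message passing: each nonzero A[i, j] is an edge of the matrix graph that carries a message vals * x[j] to vertex i, and each vertex sums its incoming messages.

```python
import numpy as np

def spmv_as_gnn(rows, cols, vals, x):
    """Compute y = A @ x for a sparse A in COO format via sum aggregation.

    Each nonzero A[i, j] (an edge j -> i in the matrix graph) sends the
    message vals * x[j]; vertex i aggregates its incoming messages by
    summation, independent of how many neighbors it has.
    """
    messages = vals * x[cols]            # one message per edge (nonzero)
    y = np.zeros_like(x, dtype=float)
    np.add.at(y, rows, messages)         # sum aggregation at each vertex
    return y

# Example: a 3x3 sparse matrix with 5 nonzeros in COO format
rows = np.array([0, 0, 1, 2, 2])
cols = np.array([0, 1, 1, 0, 2])
vals = np.array([2.0, -1.0, 3.0, 4.0, 1.0])
x = np.array([1.0, 2.0, 3.0])
print(spmv_as_gnn(rows, cols, vals, x))  # matches the dense A @ x
```

Because summation accepts any number of incoming messages, the same code handles vertices with arbitrary numbers of neighbors, which is exactly the property that makes GNNs a better fit for unstructured sparse matrices than fixed-input MLPs or convolutional networks.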