In recent years, operator networks have emerged as promising deep learning
tools for approximating the solutions of partial differential equations (PDEs).
These networks map input functions that describe material properties, forcing
functions, and boundary data to the solution of a PDE. This work describes a new
architecture for operator networks that mimics the form of the numerical
solution obtained from an approximate variational or weak formulation of the
problem. The application of these ideas to a generic elliptic PDE leads to a
variationally mimetic operator network (VarMiON). Like the conventional Deep
Operator Network (DeepONet), the VarMiON is composed of one sub-network that
constructs the basis functions for the output and another that constructs the
coefficients of these basis functions. In contrast to the DeepONet, however,
the architecture of these sub-networks in the VarMiON is precisely determined
by the weak formulation of the problem.
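For concreteness, a minimal sketch of this shared two-sub-network structure is given below. It is PyTorch-style illustration code under assumed sizes; the class name, layer widths, and the generic branch architecture are placeholders and do not reproduce the paper's exact VarMiON construction, in which the sub-network form is dictated by the discrete weak formulation rather than chosen freely.

```python
import torch
import torch.nn as nn

class BranchTrunkNet(nn.Module):
    """Sketch of the shared DeepONet/VarMiON structure:
    u(x) ~ sum_i beta_i(f) * tau_i(x).
    The trunk constructs the output basis functions tau_i(x); the
    branch constructs their coefficients beta_i from samples of the
    input functions. All sizes here are illustrative."""

    def __init__(self, n_sensors: int, n_basis: int, width: int = 64):
        super().__init__()
        # Branch: maps n_sensors samples of the input function(s)
        # to n_basis coefficients.
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, width), nn.Tanh(),
            nn.Linear(width, n_basis),
        )
        # Trunk: maps an output location x in R^2 to n_basis basis values.
        self.trunk = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, n_basis),
        )

    def forward(self, f_samples: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        beta = self.branch(f_samples)   # (batch, n_basis)
        tau = self.trunk(x)             # (n_points, n_basis)
        # Output field evaluated at all points x for each input function.
        return beta @ tau.T             # (batch, n_points)
```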
An analysis of the error in the VarMiON solution reveals contributions from
the error in the training data, the training error, the quadrature error
incurred in sampling the input and output functions, and a "covering error"
that measures the distance between a test input function and the nearest
input functions in the training dataset. The error also depends on the
stability constants of the exact solution operator and of its VarMiON
approximation.
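Schematically, these contributions can be collected into a bound of the following shape. The notation here is an illustrative placeholder, not the precise statement proved in the paper: S denotes the exact solution operator, S_h the trained VarMiON, and the epsilons the four error sources listed above.

```latex
% Illustrative shape of the error bound; all symbols are placeholders.
\[
  \| S(f) - S_h(f) \|
    \;\lesssim\;
    C_{\mathrm{stab}} \left(
      \epsilon_{\mathrm{cov}}     % covering: distance from f to nearest training input
      + \epsilon_{\mathrm{train}} % training (optimization) error
      + \epsilon_{\mathrm{quad}}  % quadrature error in sampling input/output functions
      + \epsilon_{\mathrm{data}}  % error in the training data itself
    \right)
\]
```

Here C_stab stands in for the stability constants of the exact solution operator and its VarMiON approximation.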
of the VarMiON to a canonical elliptic PDE and a nonlinear PDE reveals that for
approximately the same number of network parameters, on average the VarMiON
incurs smaller errors than a standard DeepONet and a recently proposed
multiple-input operator network (MIONet). Further, its performance is more
robust to variations in input functions, the techniques used to sample the
input and output functions, the techniques used to construct the basis
functions, and the number of input functions.Comment: 49 pages, 18 figures, 1 Appendi