A burgeoning line of research leverages deep neural networks to approximate
the solutions to high-dimensional PDEs, opening lines of theoretical inquiry
focused on explaining how it is that these models appear to evade the curse of
dimensionality. However, most prior theoretical analyses have been limited to
linear PDEs. In this work, we take a step towards studying the representational
power of neural networks for approximating solutions to nonlinear PDEs. We
focus on a class of PDEs known as \emph{nonlinear elliptic variational PDEs},
whose solutions minimize an \emph{Euler-Lagrange} energy functional
$E(u)=\int_{\Omega} L(x, u(x), \nabla u(x)) - f(x)u(x)\,dx$. We show
that if composing a function with Barron norm $b$ with partial derivatives of
$L$ produces a function of Barron norm at most $B_L b^p$, the solution to the
PDE can be $\epsilon$-approximated in the $L^2$ sense by a function with Barron
norm $O\big((dB_L)^{\max\{p \log(1/\epsilon),\, p^{\log(1/\epsilon)}\}}\big)$. By a classical result due to Barron [1993],
this correspondingly bounds the size of a 2-layer neural network needed to
approximate the solution. Treating $p$, $\epsilon$, and $B_L$ as constants, this
quantity is polynomial in dimension, thus showing neural networks can evade the
curse of dimensionality. Our proof technique involves neurally simulating
(preconditioned) gradient descent in an appropriate Hilbert space, which
converges exponentially fast to the solution of the PDE and allows us to bound
the increase of the Barron norm at each iterate. Our results subsume and
substantially generalize analogous prior results for linear elliptic PDEs over
a unit hypercube.
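
As a purely schematic sketch of the proof strategy described above (the iterate $u_t$, step size $\eta$, preconditioner $P$, and limit $u^\star$ are illustrative notation introduced here, not taken from the abstract), the argument simulates iterations of the form
\[
  u_{t+1} \;=\; u_t \;-\; \eta\, P^{-1} \nabla E(u_t),
  \qquad
  \|u_T - u^\star\|_{L^2} \;\le\; \epsilon
  \;\;\text{after } T = O(\log(1/\epsilon)) \text{ iterations},
\]
and uses the stated composition assumption (Barron norm at most $B_L b^p$ after composing a Barron-norm-$b$ function with the partial derivatives of $L$) to control how much the Barron norm of $u_{t+1}$ can exceed that of $u_t$, which over $T$ iterations yields the final $O\big((dB_L)^{\max\{p\log(1/\epsilon),\, p^{\log(1/\epsilon)}\}}\big)$ bound.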