We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, for approximating classifier functions in $L^2$. As a model class, we consider the set $E^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f \colon [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$,
regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with the ReLU activation function that approximate functions from $E^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the
number of weights, we show that achieving the optimal approximation rate requires ReLU networks of a certain minimal depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given, up to a multiplicative constant, by $\beta/d$. Up to a logarithmic factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU
networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized as $f = g \circ \tau$ into a smooth, dimension-reducing feature map $\tau$ and a classifier function $g$ defined on a low-dimensional feature space. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension.

Comment: Generalized some estimates to $L^p$ norms for $0 < p < \infty$.
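In schematic form, the approximation result stated above says that for every $f \in E^\beta(\mathbb{R}^d)$ and every accuracy $\varepsilon > 0$ there is a ReLU network $\Phi_\varepsilon$ satisfying
\[
\big\| f - R(\Phi_\varepsilon) \big\|_{L^2([-1/2,1/2]^d)} \leq \varepsilon,
\qquad
\mathrm{depth}(\Phi_\varepsilon) = L(d, \beta),
\qquad
\#\,\mathrm{weights}(\Phi_\varepsilon) = O\big(\varepsilon^{-2(d-1)/\beta}\big),
\]
where $R(\Phi_\varepsilon)$ denotes the function realized by the network. This display is only a paraphrase for orientation: the realization map $R(\cdot)$, the depth function $L(d, \beta)$, and the weight-counting notation are conventions assumed here and are not taken from the abstract itself.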