Two aspects of neural networks that have been extensively studied in the
recent literature are their function approximation properties and their
training by gradient descent methods. The approximation problem seeks accurate
approximations with a minimal number of weights. In most of the current
literature these weights are fully or partially hand-crafted, demonstrating the
capabilities of neural networks but not necessarily reflecting their
performance in practice. In contrast, optimization theory for neural networks heavily
relies on an abundance of weights in over-parametrized regimes.
This paper balances these two demands and provides an approximation result
for shallow networks in 1d with non-convex weight optimization by gradient
descent. We consider finite-width networks and the infinite-sample limit, which is
the typical setup in approximation theory. Technically, this problem is not
over-parametrized; however, some form of redundancy reappears as a loss in
approximation rate compared to the best possible rates.
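
To make the setup concrete, the following is a minimal sketch of the kind of problem studied here: a shallow (one-hidden-layer) ReLU network on a 1d interval, with all weights trained by plain gradient descent, and the infinite-sample (population) loss approximated by a dense grid. The width, step size, target function, and grid are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Sketch of the setting: shallow network f(x) = sum_i a_i * relu(w_i * x + b_i)
# on [0, 1], trained by gradient descent on a dense grid that stands in for
# the infinite-sample limit. All specific choices below are illustrative.

rng = np.random.default_rng(0)
width = 32                       # finite width m (assumption)
x = np.linspace(0.0, 1.0, 2048)  # dense grid approximating the population loss
target = np.sin(2 * np.pi * x)   # hypothetical 1d target function

# Non-convex parametrization: inner weights w, b and outer weights a
# are all trained, so the loss is non-convex in the parameters.
w = rng.normal(size=width)
b = rng.normal(size=width)
a = rng.normal(size=width) / width

lr = 1e-2
for step in range(5000):
    pre = np.outer(x, w) + b      # (n, m) pre-activations
    act = np.maximum(pre, 0.0)    # ReLU
    f = act @ a                   # network output on the grid
    r = f - target                # residual
    loss = 0.5 * np.mean(r ** 2)  # grid approximation of the L2 loss

    # Gradients of the grid-approximated population loss.
    grad_a = act.T @ r / x.size
    mask = (pre > 0).astype(float)  # ReLU derivative
    grad_w = ((r[:, None] * mask * a) * x[:, None]).mean(axis=0)
    grad_b = (r[:, None] * mask * a).mean(axis=0)

    a -= lr * grad_a
    w -= lr * grad_w
    b -= lr * grad_b

print(f"final L2 loss: {loss:.3e}")
```

With a finite width and no over-parametrization, such training can still reduce the loss, but the achievable approximation rate is generally worse than the best rates known from hand-crafted constructions, which is the trade-off the paper quantifies.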