On Sharpness of Error Bounds for Single Hidden Layer Feedforward Neural Networks
A new non-linear variant of a quantitative extension of the uniform
boundedness principle is used to show the sharpness of error bounds for
univariate approximation by sums of sigmoid and ReLU functions. Single hidden
layer feedforward neural networks with one input node perform such operations.
Errors of best approximation can be expressed using moduli of smoothness of
the function to be approximated (i.e., to be learned). In this context, the
quantitative extension of the uniform boundedness principle indeed allows one
to construct counterexamples showing that the stated approximation rates are
best possible: the approximation errors do not belong to the little-o class of
the given bounds. By choosing piecewise linear activation functions, the
problem under discussion becomes one of free knot spline approximation. The
results of the present paper also hold for non-polynomial (and not piecewise
defined) activation functions such as the inverse tangent. Based on the
Vapnik-Chervonenkis dimension, first results are shown for the logistic
function.

Comment: pre-print of a paper accepted by Results in Mathematics
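
For orientation, the following is a minimal sketch of the objects the abstract
refers to; the notation (g_n, E_n, \omega_r, the sup norm, and the constant C)
is illustrative and not quoted from the paper. A single hidden layer
feedforward network with one input node and activation function \sigma
computes sums of the form

\[
  g_n(x) = \sum_{k=1}^{n} c_k \, \sigma(a_k x + b_k),
  \qquad a_k, b_k, c_k \in \mathbb{R}.
\]

A direct (Jackson-type) estimate bounds the error of best approximation by a
modulus of smoothness,

\[
  E_n(f) := \inf_{g_n} \| f - g_n \|_{\infty}
  \le C \, \omega_r\!\left( f, \tfrac{1}{n} \right),
\]

and sharpness in the sense of the abstract means that for some counterexample
f this bound cannot be improved to a little-o statement:

\[
  E_n(f) \neq o\!\left( \omega_r\!\left( f, \tfrac{1}{n} \right) \right)
  \quad (n \to \infty).
\]

For the ReLU activation \sigma(t) = \max(t, 0), each term
c_k \sigma(a_k x + b_k) is piecewise linear with a single breakpoint at
x = -b_k / a_k, so g_n is a polygonal (degree-one) spline with at most n free
knots; this is the reduction to free knot spline approximation mentioned
above.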