This paper is motivated by an open problem around deep networks, namely, the
apparent absence of over-fitting despite large over-parametrization, which
allows perfect fitting of the training data. We analyze this phenomenon in
the case of regression problems in which each unit evaluates a
periodic activation function. We argue that the minimal expected value of the
square loss is inappropriate for measuring the generalization error in the
approximation of compositional functions, because it fails to take full
advantage of the compositional structure. Instead, we measure the
generalization error in the sense of the maximum loss, and sometimes as a
pointwise error. We give estimates of exactly how many parameters ensure both
zero training error and a
good generalization error. We prove that a solution of a regularization problem
is guaranteed to yield both a good training error and a good generalization
error, and we estimate how much error to expect at any given test point.
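
As a quick numerical illustration of the distinction between the expected square loss and the maximum (sup-norm) loss, the following sketch fits an over-parametrized one-hidden-layer network with sine units to noiseless training data and then evaluates both error measures on a dense grid. This is a minimal sketch under illustrative assumptions (random inner weights, a least-squares fit of the outer coefficients, an arbitrary target function), not the construction analyzed in the paper.

```python
# Minimal sketch (illustrative, not the paper's construction): contrast the
# empirical mean square loss with the maximum (sup-norm) loss for an
# interpolating network whose units evaluate a periodic (sine) activation.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative smooth target on [-1, 1].
def target(x):
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

n_train = 40
x_train = np.sort(rng.uniform(-1.0, 1.0, n_train))
y_train = target(x_train)

# Over-parametrized one-hidden-layer network with sine units:
# f(x) = sum_j c_j * sin(w_j * x + b_j), with random inner weights/phases.
n_units = 200  # more units than training points
w = rng.normal(0.0, 5.0, n_units)
b = rng.uniform(0.0, 2 * np.pi, n_units)

def features(x):
    # Feature matrix of shape (len(x), n_units).
    return np.sin(np.outer(x, w) + b)

# Fit the outer coefficients by least squares; with n_units > n_train this
# generically interpolates the training data (zero training error).
c, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

def model(x):
    return features(x) @ c

# Compare the two error measures on a dense test grid.
x_test = np.linspace(-1.0, 1.0, 2000)
residual = model(x_test) - target(x_test)

train_err = np.max(np.abs(model(x_train) - y_train))
mse = np.mean(residual ** 2)        # expected square loss (empirical)
sup_err = np.max(np.abs(residual))  # maximum (sup-norm) loss

print(f"max training error : {train_err:.2e}")
print(f"mean square loss   : {mse:.2e}")
print(f"sup-norm loss      : {sup_err:.2e}")
```

Since the squared residual is bounded pointwise by its supremum, the sup-norm error always dominates the root-mean-square error; controlling the maximum loss is therefore the more demanding requirement, which is the sense in which the abstract's generalization guarantees are stated.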