Deep neural networks (DNNs) achieve impressive results in complex tasks such as
object detection in images and speech recognition. Motivated by this
practical success, there is now a strong interest in establishing theoretical
guarantees for DNNs. Understanding their performance is a key challenge, since it
would clarify for which tasks DNNs perform well and when they fail. The aim of this
paper is to contribute to the current statistical theory of DNNs. We apply DNNs
to high-dimensional data and show that least squares regression
estimates based on DNNs are able to achieve dimensionality reduction when
the regression function has locally low dimensionality. Consequently, the rate
of convergence of the estimate depends not on the input dimension d but only
on the local dimension d∗, so DNNs are able to circumvent the curse of
dimensionality when d∗ is much smaller than d.
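To illustrate what this means (as an illustration only; the precise assumptions and rate are stated in the paper, and the smoothness parameter p below is not specified in this abstract): for a p-smooth regression function the classical nonparametric rate of convergence is
\[
  n^{-\frac{2p}{2p+d}},
\]
which deteriorates quickly as the input dimension d grows, whereas circumventing the curse of dimensionality in the above sense means attaining, possibly up to logarithmic factors, a rate of the form
\[
  n^{-\frac{2p}{2p+d^{*}}},
\]
in which d is replaced by the much smaller local dimension d∗.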
In our simulation study, we provide numerical experiments that support this
theoretical result and compare our estimate with other conventional nonparametric
regression estimates. The performance of the estimate is also validated in
experiments with real data.