The realization of complex classification tasks requires training of deep
learning (DL) architectures consisting of tens or even hundreds of
convolutional and fully connected hidden layers, which is far from the reality
of the human brain. According to the DL rationale, the first convolutional
layer reveals localized patterns in the input, while the following layers
reveal increasingly large-scale patterns, until a class of inputs is reliably
characterized. Here, we
demonstrate that with a fixed ratio between the depths of the first and second
convolutional layers, the error rates of the generalized shallow LeNet
architecture, consisting of only five layers, decay as a power law with the
number of filters in the first convolutional layer. The extrapolation of this
power law indicates that the generalized LeNet can achieve small error rates
that were previously obtained for the CIFAR-10 database using DL architectures.
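Schematically, the reported scaling can be summarized as a power law relating the error rate to the depth of the first convolutional layer; the notation below ($d_1$, $d_2$, the prefactor $A$ and exponent $\rho$) is illustrative rather than taken verbatim from the text:
\[
  \epsilon(d_1) \simeq A\, d_1^{-\rho}, \qquad d_2 = c\, d_1 ,
\]
where $d_1$ and $d_2$ denote the numbers of filters in the first and second convolutional layers, $c$ is the fixed depth ratio, $\epsilon$ is the error rate, and $A$, $\rho$ are fitted constants; extrapolating this relation to larger $d_1$ yields the small error rates mentioned above.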
A power law with a similar exponent also characterizes the generalized VGG-16
architecture. However, the generalized VGG-16 requires a significantly larger
number of operations than LeNet to achieve a given error rate. This
power law phenomenon governs various generalized LeNet and VGG-16
architectures, hinting at a universal behavior and suggesting a quantitative
hierarchical time-space complexity among machine learning architectures.
Additionally, a conservation law along the convolutional layers, the
preservation of the square root of a layer's size times its depth, is found to
asymptotically minimize error rates. The efficient shallow learning demonstrated in
this study calls for further quantitative examination using various databases
and architectures, and for its accelerated implementation using future dedicated
hardware developments.
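The conservation law noted above can be written schematically as follows; the symbols $s_\ell$ and $d_\ell$ are illustrative placeholders for the size and depth of convolutional layer $\ell$:
\[
  \sqrt{s_\ell}\, d_\ell \approx \mathrm{const} \quad \text{for every convolutional layer } \ell ,
\]
i.e., architectures in which the product of the square root of a layer's size and its depth is preserved along the convolutional layers are found to asymptotically minimize error rates.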