Deep architectures consist of tens or hundreds of convolutional layers (CLs)
that terminate with a few fully connected (FC) layers and an output layer
representing the possible labels of a complex classification task. According to
the existing deep learning (DL) rationale, the first CL reveals localized
features from the raw data, whereas the subsequent layers progressively extract
higher-level features required for refined classification. This article
presents an efficient three-phase procedure for quantifying the mechanism
underlying successful DL. First, a deep architecture is trained to maximize the
success rate (SR). Next, the weights of the first several CLs are frozen, and
only a newly concatenated FC layer connected to the output is trained, yielding
SRs that progressively increase with the number of retained CLs; a minimal
sketch of this phase is given below.
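The following PyTorch sketch illustrates this second phase under stated assumptions, not as the authors' exact implementation: a torchvision VGG-16 whose ImageNet weights stand in for the phase-1 trained network, CIFAR-10-sized inputs, and placeholder names (`probe_after_layer`, `truncate_at`, `train_loader`) introduced here for illustration.

```python
# Minimal sketch of phase 2 (illustrative): freeze the first CLs of a
# trained VGG-16 and train only a new FC layer concatenated to the output.
import torch
import torch.nn as nn
from torchvision.models import vgg16

def probe_after_layer(truncate_at: int, num_classes: int = 10) -> nn.Sequential:
    # Pretrained weights stand in for the phase-1 trained network (assumption).
    backbone = vgg16(weights="IMAGENET1K_V1").features[:truncate_at]
    for p in backbone.parameters():
        p.requires_grad = False  # phase-1 CL weights stay fixed

    # Infer the flattened feature dimension with a dummy forward pass,
    # assuming CIFAR-10-sized 32x32 inputs.
    with torch.no_grad():
        feat_dim = backbone(torch.zeros(1, 3, 32, 32)).flatten(1).shape[1]

    return nn.Sequential(backbone, nn.Flatten(),
                         nn.Linear(feat_dim, num_classes))

model = probe_after_layer(truncate_at=10)  # e.g., probe after the 2nd max-pool
optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                            lr=1e-2, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# Standard supervised loop over a `train_loader` (placeholder):
# for x, y in train_loader:
#     optimizer.zero_grad(); criterion(model(x), y).backward(); optimizer.step()
```

Repeating this probe at increasing truncation depths yields the per-layer SRs.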
Finally, the trained FC weights are silenced, except for those emerging from a
single filter, enabling the functionality of this filter to be quantified
using a correlation matrix between input labels and averaged output fields; a
well-defined set of quantifiable features is thereby obtained, as sketched
after this paragraph.
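Under the same illustrative assumptions, this third phase can be sketched by masking the trained FC weight matrix so that only the columns fed by the chosen filter's output map survive, then averaging the output fields per input label; `loader`, `filter_idx`, and `fmap_size` are placeholder names, not part of the original procedure.

```python
# Minimal sketch of phase 3 (illustrative): silence all trained FC weights
# except those emerging from one filter, then build the correlation matrix
# between input labels (rows) and averaged output fields (columns).
import torch

@torch.no_grad()
def filter_correlation_matrix(model, loader, filter_idx, fmap_size,
                              num_classes=10):
    backbone, flatten, fc = model[0], model[1], model[2]
    # After nn.Flatten, filter `filter_idx` owns a contiguous slice of
    # `fmap_size` columns (its spatial output map) in the FC weight matrix.
    w = torch.zeros_like(fc.weight)
    start = filter_idx * fmap_size
    w[:, start:start + fmap_size] = fc.weight[:, start:start + fmap_size]

    sums = torch.zeros(num_classes, num_classes)  # row: input label
    counts = torch.zeros(num_classes)
    for x, y in loader:  # assumes the loader covers every input label
        out = flatten(backbone(x)) @ w.T          # FC bias silenced as well
        sums.index_add_(0, y, out)
        counts.index_add_(0, y, torch.ones_like(y, dtype=torch.float))
    return sums / counts.unsqueeze(1)             # averaged output fields

# Example (placeholders): with truncate_at=10 on 32x32 inputs, each filter's
# output map is 8x8, so fmap_size=64:
# M = filter_correlation_matrix(model, test_loader, filter_idx=0, fmap_size=64)
```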
Each filter essentially selects a single output label, independent of the
input label, which would seem to preclude high SRs; counterintuitively,
however, it also identifies a small subset of the possible output labels.
This feature is an essential part of the underlying DL mechanism and is
progressively sharpened with layers, resulting in enhanced signal-to-noise
ratios and SRs. Quantitatively, this mechanism is exemplified using the
VGG-16, VGG-6, and AVGG-16 architectures. The proposed mechanism underlying
DL provides an accurate
tool for identifying each filter's quality and is expected to direct additional
procedures to improve the SR and to reduce the computational complexity and
latency of DL.