A Group Theoretic Perspective on Unsupervised Deep Learning
Why does Deep Learning work? What representations does it capture? How do
higher-order representations emerge? We study these questions from the
perspective of group theory, thereby opening a new approach towards a theory of
Deep Learning.
One factor behind the recent resurgence of the subject is a key algorithmic
step called {\em pre-training}: first search for a good generative model for
the input samples, then repeat the process one layer at a time. We show deeper
implications of this simple principle by establishing a connection with the
interplay of orbits and stabilizers of group actions. Although the neural
networks themselves may not form groups, we show the existence of {\em shadow}
groups whose elements serve as close approximations.
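For reference, and in standard notation that is ours rather than the paper's:
when a group $G$ acts on a set $X$, the orbit and stabilizer of a point
$x \in X$ are
\[
\mathrm{Orb}(x) = \{\, g \cdot x \;:\; g \in G \,\}, \qquad
\mathrm{Stab}(x) = \{\, g \in G \;:\; g \cdot x = x \,\},
\]
and for finite $G$ the orbit-stabilizer theorem gives
$|\mathrm{Orb}(x)| = |G| / |\mathrm{Stab}(x)|$: a small orbit is equivalent to
a large stabilizer, i.e., to invariance under a large part of the group.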
Over the shadow groups, the pre-training step, originally introduced as a
mechanism to better initialize a network, becomes equivalent to a search for
features with minimal orbits. Intuitively, these features are in a way the {\em
simplest}, which explains why a deep learning network learns simple features
first. Next, we show how the same principle, when repeated in the deeper
layers, can capture higher order representations, and why representation
complexity increases as the layers get deeper.
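Schematically, and as a restatement in the notation above rather than the
paper's own formalism, the claim is that pre-training selects a feature $f$ by
\[
\min_{f} \; |\mathrm{Orb}(f)| \quad \Longleftrightarrow \quad \max_{f} \; |\mathrm{Stab}(f)|,
\]
so the features found first are the most invariant, hence the {\em simplest};
repeating the same search over the outputs of one layer composes these
invariances into the higher-order representations of the next.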
Comment: 2-page version of arXiv:1412.6621, prepared for presentation at the
ICLR 2015 workshop (as required by the ICLR PC). This version has some minor
formatting changes as required by the conference.