3,465 research outputs found
Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks
Stochastic gradient descent (SGD) is widely believed to perform implicit
regularization when used to train deep neural networks, but the precise manner
in which this occurs has thus far been elusive. We prove that SGD minimizes an
average potential over the posterior distribution of weights along with an
entropic regularization term. However, this potential is, in general, not the
original loss function: SGD does perform variational inference, but for a
different loss than the one used to compute the gradients. Even more
surprisingly, SGD does not even converge in the classical sense: we show that
the most likely trajectories of SGD for deep networks do not behave like
Brownian motion around critical points. Instead, they resemble closed loops
with deterministic components. We prove that such "out-of-equilibrium" behavior
is a consequence of highly non-isotropic gradient noise in SGD; the covariance
matrix of mini-batch gradients for deep networks has a rank as small as 1% of
its dimension. We provide extensive empirical validation of these claims,
which are proven in the appendix.
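The low-rank claim above is concrete enough to probe numerically. The sketch below is not the authors' code; the toy gradient model, the subspace dimension, and the 99%-variance cutoff are illustrative assumptions. It shows one way to estimate the effective rank of the mini-batch gradient covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_batches = 200, 500           # parameter dimension, number of mini-batches

# Toy stand-in for mini-batch gradients: noise confined to a k-dimensional
# subspace, mimicking highly non-isotropic gradient noise (~1% of d).
k = max(1, d // 100)
basis = rng.standard_normal((d, k))
grads = basis @ rng.standard_normal((k, n_batches))   # shape (d, n_batches)

# Covariance of mini-batch gradients around their mean.
centered = grads - grads.mean(axis=1, keepdims=True)
cov = centered @ centered.T / n_batches

# Effective rank: number of eigenvalues needed to capture 99% of the variance.
eigvals = np.clip(np.linalg.eigvalsh(cov)[::-1], 0, None)
cum = np.cumsum(eigvals) / eigvals.sum()
eff_rank = int(np.searchsorted(cum, 0.99) + 1)
print(f"effective rank: {eff_rank} of {d} ({100 * eff_rank / d:.1f}%)")
```

For a real network, `grads` would instead hold per-mini-batch gradients flattened into vectors; in this toy setup the effective rank comes out near 1% of the dimension by construction.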
Demystifying Deep Learning: A Geometric Approach to Iterative Projections
Parametric approaches to learning, such as deep learning (DL), are highly
popular in nonlinear regression, despite the fact that training becomes
extremely difficult as their complexity increases (e.g., the number of
layers in DL). In this paper,
we present an alternative semi-parametric framework which foregoes the
ordinarily required feedback, by introducing the novel idea of geometric
regularization. We show that certain deep learning techniques, such as the
residual network (ResNet) architecture, are closely related to our approach;
hence, our technique can be used to analyze such deep learning methods. Moreover, we
present preliminary results which confirm that our approach can be easily
trained to obtain complex structures.

Comment: To appear in the ICASSP 2018 proceedings.
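The ResNet connection mentioned in the abstract can be made concrete with a toy example. The following sketch is an illustration under our own assumptions, not the paper's construction: the convex set, step size, and function names are invented here. It reads the residual update x <- x + f(x) as one step of an iterative projection scheme:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def residual_step(x, alpha=0.5):
    """ResNet-style update: identity plus a small correction toward the set."""
    return x + alpha * (project_ball(x) - x)

x = np.array([3.0, 4.0])          # start outside the unit ball
for _ in range(20):               # stacked residual blocks ~ iterated steps
    x = residual_step(x)
print(x, np.linalg.norm(x))       # approaches the projection; norm -> 1
```

Stacking residual blocks then corresponds to iterating the projection step: here the iterate converges to the Euclidean projection of the starting point onto the unit ball.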
- …