2 research outputs found
Stochastic Gradient Methods with Block Diagonal Matrix Adaptation
Adaptive gradient approaches that automatically adjust the learning rate on a
per-feature basis have been very popular for training deep networks. This rich
class of algorithms includes Adagrad, RMSprop, Adam, and recent extensions. All
these algorithms have adopted diagonal matrix adaptation, due to the
prohibitive computational burden of manipulating full matrices in
high-dimensions. In this paper, we show that block-diagonal matrix adaptation
can be a practical and powerful solution that can effectively utilize
structural characteristics of deep learning architectures, and significantly
improve convergence and out-of-sample generalization. We present a general
framework with block-diagonal matrix updates via coordinate grouping, which
includes counterparts of the aforementioned algorithms, and we prove their
convergence in non-convex optimization, highlighting benefits over the
diagonal versions. In addition, we propose an efficient spectrum-clipping
scheme that inherits the superior generalization performance of SGD. Extensive experiments
reveal that block-diagonal approaches achieve state-of-the-art results on
several deep learning tasks, and can outperform adaptive diagonal methods,
vanilla SGD, as well as a modified version of full-matrix adaptation proposed
very recently.

Comment: 31 pages
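As an illustrative sketch (not the authors' implementation), a block-diagonal Adagrad-style update via coordinate grouping might look like the following. The grouping, step size, and eigendecomposition-based inverse square root are assumptions chosen for clarity; each block maintains a full second-moment matrix instead of the usual per-coordinate scalar:

```python
import numpy as np

def block_adagrad_step(x, grad, state, blocks, lr=0.1, eps=1e-8):
    """One block-diagonal Adagrad-style step (illustrative sketch).

    blocks: list of index arrays grouping coordinates (e.g. by layer).
    state:  per-block accumulated gradient outer-product matrices G_b.
    """
    for b, idx in enumerate(blocks):
        g = grad[idx]
        state[b] += np.outer(g, g)  # full-matrix accumulation within the block
        # inverse square root of the regularized block preconditioner
        w, V = np.linalg.eigh(state[b] + eps * np.eye(len(idx)))
        P = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
        x[idx] -= lr * (P @ g)      # preconditioned gradient step
    return x
```

With a single block this recovers full-matrix adaptation, and with one block per coordinate it reduces to diagonal Adagrad; the per-block eigendecomposition keeps the cost cubic only in the block size rather than the full dimension.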
A General Family of Stochastic Proximal Gradient Methods for Deep Learning
We study the training of regularized neural networks where the regularizer
can be non-smooth and non-convex. We propose a unified framework for stochastic
proximal gradient descent, which we term ProxGen, that allows for arbitrary
positive preconditioners and lower semi-continuous regularizers. Our framework
encompasses standard stochastic proximal gradient methods without
preconditioners as special cases, which have been extensively studied in
various settings. Not only that, we present two important update rules beyond
the well-known standard methods as a byproduct of our approach: (i) the first
closed-form proximal mappings of l_q regularization (0 <= q <= 1)
for adaptive stochastic gradient methods, and (ii) a revised version of
ProxQuant that fixes a caveat of the original approach for
quantization-specific regularizers. We analyze the convergence of ProxGen and
show that the whole family of ProxGen enjoys the same convergence rate as
stochastic proximal gradient descent without preconditioners. We also
empirically show the superiority of proximal methods compared to
subgradient-based approaches via extensive experiments. Interestingly, our
results indicate that proximal methods with non-convex regularizers are more
effective than those with convex regularizers.

Comment: 21 pages
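A minimal sketch of one preconditioned stochastic proximal step, using an l1 regularizer because its proximal mapping is the simple soft-thresholding operator (the paper's closed-form mappings cover a broader regularizer class). The diagonal preconditioner, learning rate, and regularization weight below are illustrative assumptions:

```python
import numpy as np

def prox_step_l1(x, grad, precond, lr=0.1, lam=0.01):
    """One preconditioned proximal gradient step with an l1 regularizer (sketch).

    precond: positive diagonal preconditioner D (as in adaptive methods).
    First a preconditioned gradient step on the smooth loss, then the
    proximal mapping of lam * ||x||_1, with thresholds scaled by D.
    """
    z = x - lr * grad / precond        # preconditioned gradient step
    thresh = lr * lam / precond        # per-coordinate soft-threshold level
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
```

Setting `precond` to ones recovers the standard (unpreconditioned) stochastic proximal gradient step; plugging in an Adam-style second-moment estimate gives an adaptive variant. The soft-thresholding explicitly zeroes small coordinates, which subgradient-based approaches cannot do exactly.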