Fast Convergence in Learning Two-Layer Neural Networks with Separable Data
Normalized gradient descent has shown substantial success in speeding up the
convergence of exponentially-tailed loss functions (which includes exponential
and logistic losses) on linear classifiers with separable data. In this paper,
we go beyond linear models by studying normalized GD on two-layer neural nets.
We prove for exponentially-tailed losses that using normalized GD leads to a
linear rate of convergence of the training loss to the global optimum. This is
made possible by showing certain gradient self-boundedness conditions and a
log-Lipschitzness property. We also study generalization of normalized GD for
convex objectives via an algorithmic-stability analysis. In particular, we show
that normalized GD does not overfit during training by establishing finite-time
generalization bounds.
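As a rough illustration of the scheme described above, here is a minimal NumPy sketch of normalized GD (gradient descent whose update is divided by the current gradient norm) on a small two-layer tanh network with logistic loss and separable toy data. The network width, activation, step size, and data are illustrative assumptions, not the paper's setting.

```python
# Minimal sketch of normalized GD on a two-layer net with logistic loss.
# The toy data, tanh activation, width, and step size are illustrative
# assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # separable toy data
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)    # labels in {-1, +1}

d, m = X.shape[1], 16                             # input dim, hidden width
W = rng.normal(scale=0.5, size=(m, d))            # first-layer weights
a = rng.normal(scale=0.5, size=m)                 # second-layer weights

def loss_and_grads(X, y, W, a):
    H = np.tanh(X @ W.T)                          # hidden activations, (n, m)
    margins = y * (H @ a)                         # y_i * f(x_i)
    loss = np.mean(np.logaddexp(0.0, -margins))   # logistic loss, stably
    s = -y * np.exp(-np.logaddexp(0.0, margins)) / len(y)   # d loss / d f(x_i)
    grad_a = H.T @ s
    grad_W = (s[:, None] * (1.0 - H**2) * a[None, :]).T @ X
    return loss, grad_W, grad_a

eta = 0.5
for t in range(500):
    loss, gW, ga = loss_and_grads(X, y, W, a)
    gnorm = np.sqrt(np.sum(gW**2) + np.sum(ga**2)) + 1e-12
    # Normalized GD: divide by the gradient norm so the parameter step has
    # length eta no matter how small the gradient of the exp-tailed loss gets.
    W -= eta * gW / gnorm
    a -= eta * ga / gnorm

print(f"training loss after normalized GD: {loss:.4f}")
```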
The Implicit Bias of Gradient Descent on Separable Data
We examine gradient descent on unregularized logistic regression problems,
with homogeneous linear predictors on linearly separable datasets. We show the
predictor converges to the direction of the max-margin (hard margin SVM)
solution. The result also generalizes to other monotone decreasing loss
functions with an infimum at infinity, to multi-class problems, and to training
a weight layer in a deep network in a certain restricted setting. Furthermore,
we show this convergence is very slow, and only logarithmic in the convergence
of the loss itself. This can help explain the benefit of continuing to optimize
the logistic or cross-entropy loss even after the training error is zero and
the training loss is extremely small, and, as we show, even if the validation
loss increases. Our methodology can also aid in understanding implicit
regularization in more complex models and with other optimization methods. Comment: Final JMLR version, with improved discussions over v3. Main
improvements in the journal version over the conference version (v2 appeared in
ICLR): we proved the measure zero case for the main theorem (with implications
for the rates), and the multi-class case.
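The following toy sketch illustrates the stated implicit bias: running plain gradient descent on unregularized logistic regression with separable data, the weight norm grows without bound while the direction w/||w|| stabilizes and the normalized margin improves only very slowly, consistent with the logarithmic rate mentioned above. The dataset, step size, and iteration schedule are illustrative assumptions.

```python
# Minimal sketch of the implicit-bias phenomenon: plain GD on unregularized
# logistic regression with separable data. Dataset, step size, and iteration
# counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = np.where(X @ np.array([1.0, -0.5]) > 0, 1.0, -1.0)   # linearly separable labels

w = np.zeros(2)
eta = 0.1
for t in range(1, 200_001):
    margins = y * (X @ w)
    s = np.exp(-np.logaddexp(0.0, margins))               # sigmoid(-margins), stably
    grad = -(s * y) @ X / len(y)                          # gradient of mean logistic loss
    w -= eta * grad
    if t in (100, 1_000, 10_000, 100_000, 200_000):
        direction = w / np.linalg.norm(w)
        # ||w|| keeps growing, but the direction stabilizes and the normalized
        # margin inches up only logarithmically slowly.
        print(f"t={t:>7}  ||w||={np.linalg.norm(w):8.2f}  "
              f"min normalized margin={np.min(y * (X @ direction)):.4f}")
```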
Parallel coordinate descent for the Adaboost problem
We design a randomised parallel version of Adaboost based on previous studies
on parallel coordinate descent. The algorithm uses the fact that the logarithm
of the exponential loss has a coordinate-wise Lipschitz continuous gradient
in order to define the step lengths. We provide the proof of
convergence for this randomised Adaboost algorithm and a theoretical
parallelisation speedup factor. Finally, we provide numerical examples on
learning problems of various sizes showing that the algorithm is competitive
with concurrent approaches, especially for large-scale problems. Comment: 7 pages, 3 figures, extended version of the paper presented at
ICMLA'1
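As a rough illustration of the approach summarized above, the sketch below runs randomized, parallel-style coordinate descent on the logarithm of the AdaBoost exponential loss, using the coordinate-wise Lipschitz bound (entries y_i h_j(x_i) in {-1, +1} give a constant of at most 1) to set the step lengths. The weak-learner matrix, block size tau, and damping factor beta are illustrative assumptions rather than the paper's exact rule or speedup factor.

```python
# Minimal sketch of randomized parallel-style coordinate descent on the
# logarithm of the AdaBoost exponential loss. The weak-learner matrix, block
# size tau, and damping factor beta are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 50                                   # training examples, weak learners
A = rng.choice([-1.0, 1.0], size=(n, p))         # A[i, j] = y_i * h_j(x_i), entries +/-1
A[:, 0] = 1.0                                    # toy construction: learner 0 is always
                                                 # correct, so the loss can be driven down

def log_exp_loss_and_grad(alpha):
    """F(alpha) = log((1/n) * sum_i exp(-(A @ alpha)_i)) and its gradient."""
    z = -(A @ alpha)
    zmax = z.max()
    e = np.exp(z - zmax)
    F = zmax + np.log(e.sum()) - np.log(n)       # stable log-sum-exp
    grad = -A.T @ (e / e.sum())                  # softmax-weighted correlations
    return F, grad

alpha = np.zeros(p)
tau, beta = 8, 2.0     # coordinates updated per round, damping for simultaneous updates
L = 1.0                # coordinate-wise Lipschitz constant of F: |A[i, j]| <= 1
for t in range(2000):
    F, grad = log_exp_loss_and_grad(alpha)
    S = rng.choice(p, size=tau, replace=False)   # random block of coordinates
    alpha[S] -= grad[S] / (beta * L)             # step lengths from the Lipschitz bound
print(f"log exponential loss after training: {log_exp_loss_and_grad(alpha)[0]:.4f}")
```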