Convergence of Unregularized Online Learning Algorithms
In this paper we study the convergence of online gradient descent algorithms
in reproducing kernel Hilbert spaces (RKHSs) without regularization. We
establish a sufficient condition and a necessary condition for the convergence
of excess generalization errors in expectation. A sufficient condition for the
almost sure convergence is also given. With high probability, we provide
explicit convergence rates of the excess generalization errors for both
averaged iterates and the last iterate, which in turn also imply convergence
rates with probability one. To the best of our knowledge, this is the first
high-probability convergence rate for the last iterate of online gradient
descent algorithms without strong convexity. Without any boundedness
assumptions on iterates, our results are derived by a novel use of two measures
of the algorithm's one-step progress, respectively by generalization errors and
by distances in RKHSs, where the variances of the involved martingales are
cancelled out by the descent property of the algorithm.
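Below is a minimal sketch of the kind of algorithm studied: one pass of unregularized online gradient descent for least-squares regression in the RKHS of a Gaussian kernel, with polynomially decaying step sizes eta_t = t^(-theta). The kernel width, the loss, the toy data, and the step-size schedule are illustrative assumptions, not the paper's exact setting.

```python
# A minimal sketch, not the paper's exact setting: unregularized online
# gradient descent for least-squares regression in the RKHS of a Gaussian
# kernel.  The kernel width, the loss, the toy data, and the step sizes
# eta_t = t^(-theta) are illustrative assumptions.
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def online_kernel_gd(stream, theta=0.5, sigma=1.0):
    """One pass over (x_t, y_t); returns the last iterate as a kernel expansion.

    The update f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .) keeps the
    iterate in the span of the observed inputs, so it is stored as
    (centers, coefficients).
    """
    centers, coeffs = [], []
    for t, (x_t, y_t) in enumerate(stream, start=1):
        eta_t = t ** (-theta)                          # decaying step size
        f_xt = sum(c * gaussian_kernel(x_t, z, sigma)  # evaluate f_t(x_t)
                   for z, c in zip(centers, coeffs))
        centers.append(x_t)
        coeffs.append(-eta_t * (f_xt - y_t))           # gradient step for (1/2)(f(x) - y)^2
    return centers, coeffs

# Toy usage: noisy samples of a sine target, streamed once.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
Y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
centers, coeffs = online_kernel_gd(zip(X, Y))
```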
Convergence of Online Mirror Descent
In this paper we consider online mirror descent (OMD) algorithms, a class of
scalable online learning algorithms exploiting data geometric structures
through mirror maps. Necessary and sufficient conditions are presented in terms
of the step size sequence $\{\eta_t\}_t$ for the convergence of an OMD
algorithm with respect to the expected Bregman distance induced by the mirror
map. The condition is $\lim_{t\to\infty}\eta_t=0$, $\sum_{t=1}^{\infty}\eta_t=\infty$ in the case of positive variances. It is
reduced to $\sum_{t=1}^{\infty}\eta_t=\infty$ in the case of zero variances, for
which linear convergence may be achieved by taking a constant step size
sequence. A sufficient condition on the almost sure convergence is also given.
We establish tight error bounds under mild conditions on the mirror map, the
loss function, and the regularizer. Our results are achieved by some novel
analysis on the one-step progress of the OMD algorithm using smoothness and
strong convexity of the mirror map and the loss function.
Comment: Published in Applied and Computational Harmonic Analysis, 202
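As a concrete, hedged illustration of an OMD update (not the exact setting analysed in the paper), the sketch below runs mirror descent on the probability simplex with the negative-entropy mirror map, i.e. exponentiated gradient, for online least-squares prediction. The toy data and the step sizes eta_t = 1/sqrt(t), which vanish and are non-summable, are assumptions made for illustration.

```python
# A minimal sketch, assuming a negative-entropy mirror map on the probability
# simplex (exponentiated gradient) and an online least-squares loss; the toy
# data and step sizes eta_t = 1/sqrt(t) are illustrative, not the paper's setup.
import numpy as np

def omd_entropy(data, d):
    """Online mirror descent on the d-simplex for least-squares prediction."""
    w = np.full(d, 1.0 / d)                  # uniform starting point
    for t, (x, y) in enumerate(data, start=1):
        eta_t = 1.0 / np.sqrt(t)             # vanishing, non-summable step sizes
        g = (w @ x - y) * x                  # gradient of (1/2)(<w, x> - y)^2 at w
        w = w * np.exp(-eta_t * g)           # mirror step under negative entropy
        w = w / w.sum()                      # Bregman projection onto the simplex
    return w

# Toy usage: a stream generated by a fixed point w_star on the simplex.
rng = np.random.default_rng(1)
w_star = np.array([0.6, 0.1, 0.1, 0.1, 0.1])
stream = ((x, w_star @ x) for x in rng.standard_normal((1000, 5)))
w_hat = omd_entropy(stream, d=5)
```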
Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to Novel Algorithms
This paper studies the generalization performance of multi-class
classification algorithms, for which we obtain, for the first time, a
data-dependent generalization error bound with a logarithmic dependence on the
class size, substantially improving the state-of-the-art linear dependence in
the existing data-dependent generalization analysis. The theoretical analysis
motivates us to introduce a new multi-class classification machine based on
$\ell_p$-norm regularization, where the parameter $p$ controls the complexity
of the corresponding bounds. We derive an efficient optimization algorithm
based on Fenchel duality theory. Benchmarks on several real-world datasets show
that the proposed algorithm can achieve significant accuracy gains over the
state of the art.
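To make the regularizer concrete, here is a hedged sketch of an $\ell_p$-norm regularized multi-class linear classifier with a Crammer-Singer style hinge loss. It uses plain stochastic subgradient descent and an entrywise $\ell_p$ penalty as a simple surrogate, not the Fenchel-duality algorithm derived in the paper; the choice p = 1.5, the learning rate, and the toy data are assumptions.

```python
# A minimal sketch, not the Fenchel-duality optimizer from the paper: a linear
# multi-class model with a Crammer-Singer style hinge loss and an entrywise
# l_p-norm penalty (a simple surrogate regularizer), trained by stochastic
# subgradient descent.  p = 1.5, lam, lr, and the toy data are assumptions.
import numpy as np

def lp_penalty_grad(W, p):
    """Gradient of (1/p) * sum_ij |W_ij|^p."""
    return np.sign(W) * np.abs(W) ** (p - 1)

def train_lp_multiclass(X, y, n_classes, p=1.5, lam=1e-3, lr=1e-2, epochs=20):
    n, d = X.shape
    W = np.zeros((n_classes, d))
    for _ in range(epochs):
        for i in range(n):
            scores = W @ X[i]
            scores[y[i]] -= 1.0                    # require a unit margin over the true class
            j = int(np.argmax(scores))             # most violating class
            if j != y[i]:                          # hinge active: subgradient step
                W[j] -= lr * X[i]
                W[y[i]] += lr * X[i]
            W -= lr * lam * lp_penalty_grad(W, p)  # l_p-norm regularization step
    return W

# Toy usage on random data with 3 roughly separated classes.
rng = np.random.default_rng(2)
centers = rng.standard_normal((3, 10)) * 3.0
y = rng.integers(0, 3, size=300)
X = centers[y] + rng.standard_normal((300, 10))
W = train_lp_multiclass(X, y, n_classes=3)
```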
Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning
We show a Talagrand-type concentration inequality for Multi-Task Learning
(MTL), using which we establish sharp excess risk bounds for MTL in terms of
distribution- and data-dependent versions of the Local Rademacher Complexity
(LRC). We also give a new bound on the LRC for norm regularized as well as
strongly convex hypothesis classes, which applies not only to MTL but also to
the standard i.i.d. setting. Combining both results, one can now easily derive
fast-rate bounds on the excess risk for many prominent MTL methods,
including---as we demonstrate---Schatten-norm, group-norm, and
graph-regularized MTL. The derived bounds reflect a relationship akin to a
conservation law of asymptotic convergence rates. This very relationship allows
for trading off slower rates w.r.t. the number of tasks for faster rates with
respect to the number of available samples per task, when compared to the rates
obtained via a traditional, global Rademacher analysis.
Comment: In this version, some arguments and results (of the previous version)
have been corrected or modified.
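As one concrete instance of a method covered by such guarantees, the sketch below fits Schatten-1 (trace-norm) regularized multi-task least squares by proximal gradient descent with singular-value soft-thresholding. The paper itself derives risk bounds rather than this particular solver; the shared design matrix, the step size, and the value of lam are illustrative assumptions.

```python
# A minimal sketch of one regularizer the MTL bounds cover: trace-norm
# (Schatten-1) regularized multi-task least squares, solved by proximal
# gradient descent with singular-value soft-thresholding.  The shared design
# matrix, the step size, and lam are illustrative assumptions.
import numpy as np

def svd_soft_threshold(W, tau):
    """Proximal operator of tau * ||W||_* (nuclear / Schatten-1 norm)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_mtl(X, Y, lam=0.1, step=None, iters=300):
    """X: (n, d) shared inputs; Y: (n, T) with one response column per task."""
    n, d = X.shape
    if step is None:
        # 1 / Lipschitz constant of the gradient of the averaged squared loss.
        step = n / (np.linalg.norm(X, 2) ** 2)
    W = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) / n                      # gradient of (1/2n)||XW - Y||_F^2
        W = svd_soft_threshold(W - step * grad, step * lam)
    return W

# Toy usage: 4 tasks sharing a rank-2 weight matrix.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 20))
W_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 4))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 4))
W_hat = trace_norm_mtl(X, Y)
```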
