Convergence of Unregularized Online Learning Algorithms
In this paper we study the convergence of online gradient descent algorithms
in reproducing kernel Hilbert spaces (RKHSs) without regularization. We
establish a sufficient condition and a necessary condition for the convergence
of excess generalization errors in expectation. A sufficient condition for the
almost sure convergence is also given. With high probability, we provide
explicit convergence rates of the excess generalization errors for both the
averaged iterates and the last iterate, which in turn also imply convergence
rates with probability one. To the best of our knowledge, this is the first
high-probability convergence rate for the last iterate of online gradient
descent algorithms without strong convexity. Without any boundedness
assumptions on the iterates, our results are derived by a novel use of two
measures of the algorithm's one-step progress, one in terms of generalization
errors and the other in terms of distances in the RKHS, where the variances of
the involved martingales are cancelled out by the descent property of the
algorithm.
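The unregularized online gradient descent scheme the abstract studies can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes a Gaussian kernel, squared loss, and a decaying step size `eta0 / sqrt(t)`, all of which are choices made here for concreteness. Each iterate lives in the RKHS and is stored via its kernel-expansion coefficients; both the last iterate and the averaged iterate (the two objects the rates concern) are returned.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # Gaussian RBF kernel; its RKHS is the hypothesis space of the iterates.
    return np.exp(-np.sum((np.asarray(x) - np.asarray(z)) ** 2) / (2 * sigma ** 2))

def online_kernel_gd(stream, eta0=0.5, sigma=1.0):
    """Unregularized online gradient descent for least squares in an RKHS.

    Each iterate is represented by kernel-expansion coefficients:
        f_t(x) = sum_i alphas[i] * K(x_i, x).
    Returns the support points plus the coefficients of the last iterate
    and of the averaged iterate.
    """
    xs, alphas = [], []
    for t, (x, y) in enumerate(stream, start=1):
        pred = sum(a * gaussian_kernel(xi, x, sigma) for xi, a in zip(xs, alphas))
        eta = eta0 / np.sqrt(t)           # decaying step size, no regularization
        xs.append(x)
        alphas.append(-eta * (pred - y))  # gradient step on the loss (f(x) - y)^2 / 2
    T = len(alphas)
    # The coefficient appended at step i+1 enters iterates f_{i+2}, ..., f_{T+1},
    # so averaging the T iterates f_2, ..., f_{T+1} weights it by (T - i) / T.
    avg = [a * (T - i) / T for i, a in enumerate(alphas)]
    return xs, alphas, avg
```

Note that no projection or norm bound is imposed on the iterates, matching the abstract's setting of no boundedness assumptions.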
A Parallel Best-Response Algorithm with Exact Line Search for Nonconvex Sparsity-Regularized Rank Minimization
In this paper, we propose a convergent parallel best-response algorithm with
exact line search for the nondifferentiable, nonconvex sparsity-regularized
rank minimization problem. On the one hand, it exhibits a faster convergence
than subgradient algorithms and block coordinate descent algorithms. On the
other hand, its convergence to a stationary point is guaranteed, while ADMM
algorithms only converge for convex problems. Furthermore, the exact line
search procedure in the proposed algorithm is performed efficiently in
closed form, avoiding the meticulous stepsize tuning that is a common
bottleneck for subgradient algorithms and successive convex approximation
algorithms. Finally, the proposed algorithm is numerically tested.

Comment: Submitted to IEEE ICASSP 201