A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
In this note, we present a new averaging technique for the projected
stochastic subgradient method. By using a weighted average with a weight of t+1
for each iterate w_t at iteration t, we obtain a convergence rate of O(1/t)
with both an easy proof and an easy implementation. The new scheme is compared
empirically to existing techniques, with similar performance behavior.
Comment: 8 pages, 6 figures. Changes from the previous version: added reference
to concurrently submitted work arXiv:1212.1824v1; clarifications added; typos
corrected; title changed to 'subgradient method', as 'subgradient descent' is a
misnomer
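The weighted-averaging idea above is simple to implement. Below is a minimal sketch, assuming a lam-strongly-convex objective, the classic step size 2/(lam*(t+1)), and the (t+1)-weighted running average described in the abstract; the function names and the test problem are illustrative, not taken from the paper.

```python
import numpy as np

def averaged_psg(subgrad, project, w0, lam, steps, rng):
    """Projected stochastic subgradient method with (t+1)-weighted averaging.

    Minimizes a lam-strongly-convex objective over a convex set (given by
    `project`) and returns wbar = sum_t (t+1) w_t / sum_t (t+1).
    """
    w = np.asarray(w0, dtype=float).copy()
    wbar = w.copy()          # iterate w_0 carries weight 0 + 1 = 1
    weight_sum = 1.0
    for t in range(1, steps):
        g = subgrad(w, rng)                 # stochastic subgradient at w_t
        eta = 2.0 / (lam * (t + 1))         # step size for strong convexity
        w = project(w - eta * g)            # projected subgradient step
        weight_sum += t + 1
        wbar += ((t + 1) / weight_sum) * (w - wbar)  # running weighted mean
    return wbar
```

As a sanity check, running this on noisy gradients of 0.5*(w - mu)^2 with projection onto a box recovers mu to high accuracy, while a single final iterate would still carry noticeable noise.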
Domain decomposition methods for compressed sensing
We present several domain decomposition algorithms for sequential and
parallel minimization of functionals formed by a discrepancy term with respect
to data and total variation constraints. The convergence properties of the
algorithms are analyzed. We provide several numerical experiments showing the
successful application of the algorithms to the restoration of 1D and 2D
signals in interpolation/inpainting problems, respectively, and to a
compressed sensing problem of recovering piecewise constant medical-type
images from partial Fourier ensembles.
Comment: 4 pages
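The sequential flavor of domain decomposition can be illustrated on a toy 1D problem. The sketch below is not the paper's algorithm: it splits a signal into two overlapping subdomains and minimizes a discrepancy-plus-smoothed-total-variation objective a few gradient steps at a time on each subdomain while the rest of the signal is held fixed. The smoothing parameter `eps`, the two-subdomain split, and the step-size estimate are all illustrative assumptions.

```python
import numpy as np

def tv_grad(u, eps):
    """Gradient of the smoothed total variation sum_i sqrt((u_{i+1}-u_i)^2 + eps)."""
    d = np.diff(u)
    t = d / np.sqrt(d * d + eps)
    g = np.zeros_like(u)
    g[:-1] -= t
    g[1:] += t
    return g

def dd_tv_denoise(f, alpha, sweeps=60, inner=10, eps=1e-2):
    """Sequential domain decomposition for min_u 0.5||u - f||^2 + alpha * TV_eps(u).

    Each sweep runs a few gradient steps restricted to one overlapping
    subdomain at a time, with the other part of u held fixed.
    """
    n = len(f)
    u = f.copy()
    mid, ov = n // 2, n // 8
    domains = [slice(0, mid + ov), slice(mid - ov, n)]
    step = 1.0 / (1.0 + 4.0 * alpha / np.sqrt(eps))  # crude Lipschitz estimate
    for _ in range(sweeps):
        for s in domains:
            for _ in range(inner):
                g = (u - f) + alpha * tv_grad(u, eps)
                u[s] -= step * g[s]
    return u
```

On a noisy piecewise constant signal, the subdomain sweeps jointly reduce the reconstruction error relative to the noisy input, which is the qualitative behavior the abstract's 1D experiments report.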
The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than $1/k^2$
The {\it forward-backward algorithm} is a powerful tool for solving
optimization problems with an {\it additively separable} and {\it smooth} + {\it
nonsmooth} structure. In the convex setting, a simple but ingenious
acceleration scheme developed by Nesterov has been proved useful to improve the
theoretical rate of convergence for the function values from the standard
$O(1/k)$ down to $O(1/k^2)$. In this short paper, we
prove that the rate of convergence of a slight variant of Nesterov's
accelerated forward-backward method, which produces {\it convergent} sequences,
is actually $o(1/k^2)$, rather than $O(1/k^2)$. Our arguments rely
on the connection between this algorithm and a second-order differential
inclusion with vanishing damping.
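A minimal sketch of an accelerated forward-backward method of this flavor is given below. It uses the inertial coefficient (k-1)/(k+alpha-1) with alpha > 3, a common choice for the variant with convergent iterates; the function names, the default alpha, and the lasso test problem are assumptions for illustration, not details from the paper.

```python
import numpy as np

def accelerated_fb(grad_f, prox_g, L, x0, iters, alpha=4.0):
    """Accelerated forward-backward method with momentum (k-1)/(k + alpha - 1).

    grad_f is the gradient of the smooth part (L-Lipschitz), and
    prox_g(v, tau) is the proximal map of the nonsmooth part with step tau.
    """
    x = np.asarray(x0, dtype=float).copy()
    x_prev = x.copy()
    for k in range(1, iters + 1):
        y = x + ((k - 1) / (k + alpha - 1)) * (x - x_prev)  # inertial step
        x_prev = x
        x = prox_g(y - grad_f(y) / L, 1.0 / L)              # forward-backward step
    return x
```

On a small lasso instance (smooth least-squares part plus an l1 proximal map), the iterates settle to a fixed point of the forward-backward operator, which certifies near-optimality.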
Fast Primal-Dual Gradient Method for Strongly Convex Minimization Problems with Linear Constraints
In this paper we consider a class of optimization problems with a strongly
convex objective function and the feasible set given by the intersection of a
simple convex set with a set defined by a number of linear equality and
inequality constraints. A number of optimization problems in applications can
be stated in this form, examples being entropy-linear programming,
ridge regression, the elastic net, regularized optimal transport, etc. We
extend the Fast Gradient Method applied to the dual problem in order to make it
primal-dual, so that it not only solves the dual problem but also
constructs a nearly optimal and nearly feasible solution of the primal problem. We
also prove a theorem about the convergence rate for the proposed algorithm in
terms of the objective function and the linear constraint infeasibility.
Comment: Submitted for DOOR 201
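The primal-dual mechanism described above can be sketched on the simplest instance of this problem class: minimizing 0.5||x||^2 (which is 1-strongly convex) subject to Ax = b. The code below runs an accelerated gradient ascent on the concave dual and recovers the primal approximation as a weighted average of the primal responses x(y) = -A.T @ y; the momentum coefficient and the weighting scheme are common choices, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def primal_dual_fgm(A, b, iters):
    """Fast gradient method on the dual of  min 0.5||x||^2  s.t.  Ax = b.

    Returns a weighted average of the primal responses x(y) = -A.T @ y,
    which serves as a nearly optimal, nearly feasible primal solution.
    """
    m, n = A.shape
    Ld = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the dual gradient
    lam = np.zeros(m)
    lam_prev = lam.copy()
    x_hat = np.zeros(n)
    wsum = 0.0
    for k in range(1, iters + 1):
        y = lam + ((k - 1) / (k + 2)) * (lam - lam_prev)  # Nesterov momentum
        x_y = -A.T @ y                   # primal minimizer of the Lagrangian at y
        g = A @ x_y - b                  # dual gradient (= constraint residual)
        lam_prev = lam
        lam = y + g / Ld                 # ascent step on the concave dual
        wsum += k
        x_hat += (k / wsum) * (x_y - x_hat)  # k-weighted primal average
    return x_hat, lam
```

For A = [[1, 1]], b = [1], the exact solution is x = (0.5, 0.5) with dual multiplier -0.5, and the averaged primal iterate approaches it while its constraint infeasibility |A x_hat - b| shrinks, matching the two convergence measures in the abstract.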