Sharpness, Restart and Acceleration
The Łojasiewicz inequality shows that sharpness bounds on the minimum of convex optimization problems hold almost generically. Sharpness directly controls the performance of restart schemes, as observed by Nemirovskii and Nesterov [1985]. The constants quantifying these sharpness bounds are of course unobservable, but we show that optimal restart strategies are robust, in the sense that, in some important cases, finding the best restart scheme only requires a log-scale grid search. Searching over this grid increases the complexity by only a logarithmic factor compared to the optimal bound. Overall then, restart schemes generically accelerate accelerated first-order methods.
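To make the restart mechanism concrete, here is a minimal sketch (ours, not the paper's code) of restarting Nesterov's accelerated gradient method over a log-scale grid of restart periods; the gradient oracle f_grad, the smoothness constant L, and all names are illustrative assumptions:

```python
import numpy as np

def agd(f_grad, x0, L, iters):
    # Plain Nesterov accelerated gradient for an L-smooth convex objective.
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - f_grad(y) / L
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

def restarted_agd(f_grad, x0, L, total_iters, period):
    # Restart AGD from its last iterate every `period` inner iterations.
    x = x0.copy()
    for _ in range(max(1, total_iters // period)):
        x = agd(f_grad, x, L, period)
    return x

def best_restart_on_log_grid(f, f_grad, x0, L, total_iters):
    # Try restart periods 2, 4, 8, ... and keep the best final point:
    # under a sharpness bound, one of these periods is near-optimal.
    periods = [2 ** j for j in range(1, int(np.log2(total_iters)) + 1)]
    finals = [restarted_agd(f_grad, x0, L, total_iters, p) for p in periods]
    return min(finals, key=f)
```

Since the grid only has logarithmically many candidate periods, the search costs a logarithmic factor over the best fixed restart scheme, which is the robustness claim of the abstract.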
Mirror Descent and Convex Optimization Problems With Non-Smooth Inequality Constraints
We consider the problem of minimizing a convex function over a simple set under a convex non-smooth inequality constraint, and describe first-order methods for solving such problems in different situations: smooth or non-smooth objective function; convex or strongly convex objective and constraint; deterministic or randomized information about the objective and constraint. We hope it is convenient for the reader to have all the methods for these different settings in one place. The described methods are based on the Mirror Descent algorithm and the switching subgradient scheme. One of our goals is to propose, for each of the listed settings, a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule, so that neither the stepsizes nor the stopping rule requires knowledge of the Lipschitz constant of the objective or the constraint. We also construct Mirror Descent methods for problems whose objective function is not Lipschitz continuous, e.g. a quadratic function. Besides that, we address the problem of recovering a solution of the dual problem.
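As a hedged illustration of the switching subgradient scheme in its simplest Euclidean instance (a sketch under our own assumptions: the simple set is taken to be all of R^n, the names f_sub, g, g_sub are ours, and a fixed iteration budget replaces the paper's adaptive stopping rule):

```python
import numpy as np

def switching_subgradient(f_sub, g, g_sub, x0, eps, iters=10000):
    # Euclidean Mirror Descent with the switching subgradient scheme for
    # min f(x) s.t. g(x) <= 0. When the constraint is nearly satisfied,
    # take a "productive" step on f; otherwise step on g. Stepsizes depend
    # only on eps and the observed subgradients, so no Lipschitz constant
    # of f or g is needed.
    x = x0.copy()
    avg, weight = np.zeros_like(x0), 0.0
    for _ in range(iters):
        if g(x) <= eps:
            h = f_sub(x)                 # productive step on the objective
            step = eps / np.dot(h, h)
            avg, weight = avg + step * x, weight + step
        else:
            h = g_sub(x)                 # non-productive step on the constraint
            step = g(x) / np.dot(h, h)
        x = x - step * h
    # Return the stepsize-weighted average of the productive iterates.
    return avg / weight if weight > 0 else x
```

Averaging only the productive iterates, weighted by their stepsizes, is the standard way such schemes certify an approximate solution; the adaptive stopping rule described in the abstract would terminate the loop once enough productive weight has accumulated.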
Computational Complexity versus Statistical Performance on Sparse Recovery Problems
We show that several classical quantities controlling compressed sensing
performance directly match classical parameters controlling algorithmic
complexity. We first describe linearly convergent restart schemes on
first-order methods solving a broad range of compressed sensing problems, where
sharpness at the optimum controls convergence speed. We show that for sparse
recovery problems, this sharpness can be written as a condition number, given
by the ratio between true signal sparsity and the largest signal size that can
be recovered by the observation matrix. In a similar vein, Renegar's condition
number is a data-driven complexity measure for convex programs, generalizing
classical condition numbers for linear systems. We show that for a broad class
of compressed sensing problems, the worst case value of this algorithmic
complexity measure taken over all signals matches the restricted singular value
of the observation matrix which controls robust recovery performance. Overall,
this means in both cases that, in compressed sensing problems, a single
parameter directly controls both computational complexity and recovery
performance. Numerical experiments illustrate these points using several
classical algorithms.
Comment: Final version, to appear in Information and Inference.
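To fix notation for the link the abstract draws between sharpness and a condition number, here is a hedged sketch in our own symbols (A, b, x⋆ and γ are illustrative, not the paper's notation):

```latex
% Exact l1-recovery of a signal x* from linear observations b = A x*:
\min_{x} \; \|x\|_1 \quad \text{s.t.} \quad Ax = b .
% A sharpness bound at the (assumed unique) optimum x*:
\|x\|_1 - \|x^\star\|_1 \;\ge\; \gamma \, \|x - x^\star\|_1
\quad \text{for all } x \text{ with } Ax = b .
```

With a bound of this form, restarted first-order methods converge linearly at a rate governed by γ, and 1/γ plays the role of the condition number described above: the ratio between the true signal sparsity and the largest sparsity the observation matrix can recover.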