A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
We propose a new first-order primal-dual optimization framework for a convex
optimization template with broad applications. Our optimization algorithms
feature optimal convergence guarantees under a variety of common structure
assumptions on the problem template. Our analysis relies on a novel combination
of three classic ideas applied to the primal-dual gap function: smoothing,
acceleration, and homotopy. The algorithms arising from this new approach achieve the best-known convergence rates, in particular when the template consists only of non-smooth functions. We also outline a restart strategy for the acceleration that significantly enhances practical performance. We demonstrate relations with the augmented Lagrangian method and show how to exploit strongly convex objectives with rigorous convergence rate guarantees. We provide numerical evidence with two examples and illustrate that the new methods can outperform the state of the art, including the Chambolle-Pock and alternating direction method of multipliers algorithms.
Comment: 35 pages, accepted for publication in SIAM J. Optimization. Tech. Report, Oct. 2015 (last update Sept. 2016).
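As a rough illustration of how the three ingredients interact, the Python sketch below applies Huber-type smoothing, Nesterov acceleration, and a homotopy that geometrically shrinks the smoothing parameter, on the simple nonsmooth problem min_x ||Ax - b||_1. It is a minimal sketch of the general recipe, not the authors' primal-dual algorithm; all names and parameter choices are illustrative.

```python
import numpy as np

def huber_grad(r, mu):
    """Gradient of the Huber (Moreau-envelope) smoothing of |.| applied to r."""
    return np.clip(r / mu, -1.0, 1.0)

def smoothed_accel_homotopy(A, b, iters=500, mu0=1.0, mu_min=1e-4, decay=0.99):
    """Minimize ||Ax - b||_1 via smoothing + acceleration + homotopy (illustrative)."""
    n = A.shape[1]
    x = z = np.zeros(n)
    t, mu = 1.0, mu0
    L = np.linalg.norm(A, 2) ** 2        # smoothed gradient is (L / mu)-Lipschitz
    for _ in range(iters):
        g = A.T @ huber_grad(A @ z - b, mu)   # gradient of the smoothed objective at z
        x_new = z - (mu / L) * g              # gradient step with step size mu / L
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov momentum
        x, t = x_new, t_new
        mu = max(mu * decay, mu_min)          # homotopy: gradually sharpen the smoothing
    return x

# Usage: recover x from a consistent overdetermined system under an l1 loss.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20)); x_true = rng.standard_normal(20)
x_hat = smoothed_accel_homotopy(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```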
Smoothing technique for nonsmooth composite minimization with linear operator
We introduce and analyze an algorithm for the minimization of convex
functions that are the sum of differentiable terms and proximable terms
composed with linear operators. The method builds upon the recently developed
smoothed gap technique. In addition to a precise convergence rate result, valid
even in the presence of linear inclusion constraints, this new method allows an
explicit treatment of the gradient of differentiable functions and can be
enhanced with line-search. We also study the consequences of restarting the
acceleration of the algorithm at a given frequency. These new features are not
classical for primal-dual methods and allow us to solve difficult large-scale
convex optimization problems. We numerically illustrate the superior performance of the algorithm against the state of the art on basis pursuit, TV-regularized least-squares regression, and L1 regression problems.
Comment: 26 pages, 5 figures.
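The restart idea is easy to state in code. Below is a minimal sketch, assuming the simpler setting of an l1-regularized least-squares problem solved by accelerated proximal gradient (not the paper's primal-dual method with linear operators): momentum is reset every restart_every iterations, which is the fixed-frequency restart discussed above.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_restart(A, b, lam, iters=300, restart_every=50):
    """Accelerated proximal gradient for lasso, restarting momentum at a fixed frequency."""
    n = A.shape[1]
    x = z = np.zeros(n)
    t = 1.0
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the least-squares gradient
    for k in range(1, iters + 1):
        g = A.T @ (A @ z - b)
        x_new = soft_threshold(z - g / L, lam / L)
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
        if k % restart_every == 0:   # restart: drop all momentum, as if starting afresh
            z, t = x, 1.0
    return x
```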
Convex optimization over intersection of simple sets: improved convergence rate guarantees via an exact penalty approach
We consider the problem of minimizing a convex function over the intersection
of finitely many simple sets which are easy to project onto. This is an
important problem arising in various domains such as machine learning. The main
difficulty lies in finding the projection of a point in the intersection of
many sets. Existing approaches yield an infeasible point with an iteration complexity of O(1/ε^2) for nonsmooth problems, with no guarantees on the infeasibility. By reformulating the problem through exact penalty functions, we derive first-order algorithms which not only guarantee that the distance to the intersection is small but also improve the complexity to O(1/ε) and O(1/√ε) for smooth functions. For
composite and smooth problems, this is achieved through a saddle-point
reformulation where the proximal operators required by the primal-dual
algorithms can be computed in closed form. We illustrate the benefits of our
approach on a graph transduction problem and on graph matching.
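To make the exact-penalty reformulation concrete, the following sketch minimizes f(x) + lam * sum_i dist(x, C_i) by plain subgradient descent, using only projection oracles onto the simple sets. It is an illustrative baseline, not the paper's accelerated saddle-point algorithms; the step size and penalty weight are ad hoc.

```python
import numpy as np

def dist_grad(x, proj):
    """Subgradient of x -> dist(x, C), given a projection oracle for C."""
    p = proj(x)
    d = np.linalg.norm(x - p)
    return (x - p) / d if d > 1e-12 else np.zeros_like(x)

def exact_penalty_subgradient(grad_f, projs, x0, lam=10.0, iters=2000):
    """Minimize f(x) + lam * sum_i dist(x, C_i) by subgradient descent (illustrative).

    For lam above a problem-dependent threshold, minimizers of the penalized
    problem coincide with minimizers of f over the intersection of the C_i.
    """
    x = x0.copy()
    for k in range(1, iters + 1):
        g = grad_f(x) + lam * sum(dist_grad(x, P) for P in projs)
        x = x - g / (lam * np.sqrt(k))   # diminishing step size
    return x

# Usage: minimize ||x - c||^2 over (unit ball) ∩ (nonnegative orthant).
c = np.array([2.0, -1.0, 0.5])
grad_f = lambda x: 2 * (x - c)
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
proj_pos = lambda x: np.maximum(x, 0.0)
print(exact_penalty_subgradient(grad_f, [proj_ball, proj_pos], np.zeros(3)))
```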
Smooth Primal-Dual Coordinate Descent Algorithms for Nonsmooth Convex Optimization
We propose a new randomized coordinate descent method for a convex
optimization template with broad applications. Our analysis relies on a novel
combination of four ideas applied to the primal-dual gap function: smoothing,
acceleration, homotopy, and coordinate descent with non-uniform sampling. As a
result, our method features the first convergence rate guarantees among coordinate descent methods that are the best known under a variety of common structure assumptions on the template. We provide numerical evidence supporting the theoretical results, with a comparison to state-of-the-art algorithms.
Comment: NIPS 2017.
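Of the four ingredients, coordinate descent with non-uniform sampling is the easiest to isolate. The sketch below applies it to a smooth least-squares objective, sampling coordinates proportionally to their coordinate-wise Lipschitz constants; the paper's method additionally handles nonsmooth terms via smoothing, acceleration, and homotopy, none of which appear here.

```python
import numpy as np

def coord_descent_nonuniform(A, b, iters=5000, seed=0):
    """Randomized coordinate descent for min 0.5 * ||Ax - b||^2 with Lipschitz-based sampling."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = np.sum(A * A, axis=0)      # coordinate Lipschitz constants L_i = ||A_i||^2
    p = L / L.sum()                # non-uniform sampling: pick coordinate i w.p. L_i / sum_j L_j
    x = np.zeros(n)
    r = A @ x - b                  # maintain the residual incrementally
    for _ in range(iters):
        i = rng.choice(n, p=p)
        g_i = A[:, i] @ r          # partial derivative along coordinate i
        step = g_i / L[i]
        x[i] -= step
        r -= step * A[:, i]        # update the residual to match the new x
    return x
```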
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
This paper develops a general framework for solving a variety of convex cone
problems that frequently arise in signal processing, machine learning,
statistics, and other fields. The approach works as follows: first, determine a
conic formulation of the problem; second, determine its dual; third, apply
smoothing; and fourth, solve using an optimal first-order method. A merit of
this approach is its flexibility: for example, all compressed sensing problems
can be solved via this approach. These include models with objective
functionals such as the total-variation norm, ||Wx||_1 where W is arbitrary, or
a combination thereof. In addition, the paper also introduces a number of
technical contributions such as a novel continuation scheme, a novel approach
for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our
framework, these lead to novel, stable and computationally efficient
algorithms. For instance, our general implementation is competitive with
state-of-the-art methods for solving intensively studied problems such as the
LASSO. Further, numerical experiments show that one can solve the Dantzig
selector problem, for which no efficient large-scale solvers exist, in a few
hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms.
Comment: The TFOCS software is available at http://tfocs.stanford.edu. This version has updated references.
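The four-step recipe (conic formulation, dualize, smooth, solve with an optimal first-order method) can be sketched generically. The code below follows it for basis pursuit, min ||x||_1 s.t. Ax = b: it adds the proximity term (mu/2) * ||x - x0||^2, runs accelerated gradient ascent on the smoothed dual, and re-centers x0 as a crude continuation step. This is a hedged Python approximation of the approach, not the TFOCS implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def smoothed_basis_pursuit(A, b, mu=0.1, outer=5, inner=200):
    """Basis pursuit via dual smoothing plus continuation (illustrative).

    Smoothing the primal with (mu/2) * ||x - x0||^2 makes the dual differentiable,
    and the primal point induced by a dual iterate has a soft-thresholding form.
    """
    m, n = A.shape
    Lg = np.linalg.norm(A, 2) ** 2 / mu   # Lipschitz constant of the smoothed dual gradient
    x0 = np.zeros(n)
    for _ in range(outer):
        y = yz = np.zeros(m)
        t = 1.0
        for _ in range(inner):
            # primal point induced by the dual iterate (closed form)
            x = soft_threshold(x0 + A.T @ yz / mu, 1.0 / mu)
            grad = b - A @ x               # gradient of the smoothed dual at yz
            y_new = yz + grad / Lg         # ascent step on the concave dual
            t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
            yz = y_new + ((t - 1) / t_new) * (y_new - y)
            y, t = y_new, t_new
        x0 = soft_threshold(x0 + A.T @ y / mu, 1.0 / mu)  # continuation: re-center
    return x0
```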