Semi-proximal Mirror-Prox for Nonsmooth Composite Minimization
We propose a new first-order optimization algorithm to solve high-dimensional
non-smooth composite minimization problems. Typical examples of such problems
have an objective that decomposes into a non-smooth empirical risk part and a
non-smooth regularization penalty. The proposed algorithm, called Semi-Proximal
Mirror-Prox, leverages the Fenchel-type representation of one part of the
objective while handling the other part of the objective via linear
minimization over the domain. The algorithm stands in contrast with more
classical proximal gradient algorithms with smoothing, which require the
computation of proximal operators at each iteration and can therefore be
impractical for high-dimensional problems. We establish the theoretical
convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal
complexity bound, i.e. O(1/ε²), for the number of calls to the linear
minimization oracle. We present promising experimental results showing the
merits of the approach in comparison to competing methods.
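To make the contrast concrete, here is a minimal sketch (our illustration, not the Semi-Proximal Mirror-Prox algorithm itself) of the two oracle types on the ℓ1 ball: the linear minimization oracle returns a sparse vertex in one pass over the gradient, while the proximal operator is the soft-thresholding map that classical proximal methods must evaluate at each iteration.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <grad, s>.
    One pass over the gradient, no projection required."""
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -radius * np.sign(grad[i])
    return s

def prox_l1(x, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding),
    the per-iteration workhorse of classical proximal methods."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

g = np.array([0.3, -2.0, 0.7])
print(lmo_l1_ball(g))   # [0. 1. 0.]  -- a sparse vertex of the ball
print(prox_l1(g, 0.5))  # [ 0.  -1.5  0.2]
```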
Inexact Model: A Framework for Optimization and Variational Inequalities
In this paper we propose a general algorithmic framework for first-order
methods in optimization in a broad sense, including minimization problems,
saddle-point problems, and variational inequalities. This framework allows us to
obtain many known methods as special cases, including the accelerated
gradient method, composite optimization methods, level-set methods, and proximal
methods. The idea of the framework is to construct an inexact model of
the main problem component, i.e. objective function in optimization or operator
in variational inequalities. Besides reproducing known results, our framework
allows us to construct new methods, which we illustrate by deriving a
universal method for variational inequalities with composite structure. This
method works for smooth and non-smooth problems with optimal complexity without
a priori knowledge of the problem smoothness. We also generalize our framework
for strongly convex objectives and strongly monotone variational inequalities.
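To make the notion of an inexact model concrete, one common way to state it (our paraphrase; the paper's exact conditions may differ) is the following (δ, L)-model inequality, which reduces to the standard quadratic upper bound when ψ is the linearization of f:

```latex
% Our paraphrase of a (\delta, L)-model \psi(y, x) of f at x, with \psi(x, x) = 0:
\[
  0 \;\le\; f(y) - f(x) - \psi(y, x) \;\le\; \frac{L}{2}\,\lVert y - x \rVert^2 + \delta
  \qquad \text{for all feasible } y.
\]
% Each step minimizes the model plus a quadratic, in place of a gradient step:
\[
  x_{k+1} \in \operatorname*{arg\,min}_{y} \Big\{ \psi(y, x_k) + \frac{L}{2}\,\lVert y - x_k \rVert^2 \Big\}.
\]
% Choosing \psi(y, x) = \langle \nabla f(x),\, y - x \rangle recovers standard gradient descent.
```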
Convex optimization over intersection of simple sets: improved convergence rate guarantees via an exact penalty approach
We consider the problem of minimizing a convex function over the intersection
of finitely many simple sets which are easy to project onto. This is an
important problem arising in various domains such as machine learning. The main
difficulty lies in finding the projection of a point in the intersection of
many sets. Existing approaches yield an infeasible point with an
iteration-complexity of O(1/ε²) for nonsmooth problems, with no
guarantees on the infeasibility. By reformulating the problem through exact
penalty functions, we derive first-order algorithms which not only guarantee
that the distance to the intersection is small but also improve the complexity
to O(1/ε), and to O(1/√ε) for smooth functions. For
composite and smooth problems, this is achieved through a saddle-point
reformulation where the proximal operators required by the primal-dual
algorithms can be computed in closed form. We illustrate the benefits of our
approach on a graph transduction problem and on graph matching.
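As a toy illustration of the exact-penalty idea (our own sketch with a made-up instance, not the paper's saddle-point method), the hard intersection constraint can be traded for penalized distance terms dist(x, C_i), each of which needs only the projection onto a single simple set:

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def proj_ball(x, c, r):
    """Projection onto the Euclidean ball of center c and radius r."""
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

def dist_subgrad(x, proj):
    """Subgradient of x -> dist(x, C) via the projection P_C:
    (x - P_C(x)) / ||x - P_C(x)||, and 0 inside C."""
    d = x - proj(x)
    n = np.linalg.norm(d)
    return d / n if n > 1e-12 else np.zeros_like(x)

# Toy instance: f(x) = 0.5 * ||x - (2, 0)||^2 over box ∩ ball (both "simple").
grad_f = lambda x: x - np.array([2.0, 0.0])
projs = [proj_box, lambda x: proj_ball(x, np.array([0.5, 0.0]), 1.0)]

x, rho = np.zeros(2), 10.0
for k in range(1, 3001):
    g = grad_f(x) + rho * sum(dist_subgrad(x, P) for P in projs)
    x = x - g / (rho * np.sqrt(k))   # subgradient step, diminishing size
print(x)  # approximately (1, 0), the minimizer over the intersection
```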
Variable metric inexact line-search based methods for nonsmooth optimization
We develop a new proximal-gradient method for minimizing the sum of a
differentiable, possibly nonconvex, function and a convex, possibly
nondifferentiable, function. The key features of the proposed method are the
definition of a suitable descent direction, based on the proximal operator
associated to the convex part of the objective function, and an Armijo-like
rule to determine the step size along this direction ensuring the sufficient
decrease of the objective function. Within this framework, we specifically address the
possibility of adopting a metric that may change at each iteration, as well as an
inexact computation of the proximal point defining the descent direction. For
the more general nonconvex case, we prove that all limit points of the iterates
sequence are stationary, while for convex objective functions we prove the
convergence of the whole sequence to a minimizer, under the assumption that a
minimizer exists. In the latter case, assuming also that the gradient of the
smooth part of the objective function is Lipschitz, we also give a convergence
rate estimate, showing the O(1/k) complexity with respect to the function
values. We also discuss verifiable sufficient conditions for the inexact
proximal point and present the results of numerical experiments on a convex
total-variation-based image restoration problem, showing that the proposed
approach is competitive with another state-of-the-art method.
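A stripped-down sketch of the two key ingredients, a descent direction from the proximal point plus an Armijo-like backtracking, using a fixed identity metric and an exact prox (both of which the paper's method relaxes); the lasso-type demo instance is our own:

```python
import numpy as np

def prox_l1(x, lam):
    """Soft-thresholding: prox of lam * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_grad_armijo(x, f, grad_f, g, prox_g, alpha=1.0, sigma=1e-4, beta=0.5):
    """One iteration: proximal point -> descent direction -> Armijo backtracking
    on the composite objective F = f + g (identity metric, exact prox)."""
    y = prox_g(x - alpha * grad_f(x), alpha)   # proximal point
    d = y - x                                  # descent direction
    # Model decrease: <grad f(x), d> + g(y) - g(x) <= 0 at the exact prox point.
    delta = grad_f(x) @ d + g(y) - g(x)
    t = 1.0
    while f(x + t * d) + g(x + t * d) > f(x) + g(x) + sigma * t * delta:
        t *= beta                              # backtrack until sufficient decrease
    return x + t * d

# Tiny demo: f(x) = 0.5 * ||Ax - b||^2, g = lam * ||.||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
g = lambda x: lam * np.sum(np.abs(x))
x = np.zeros(5)
for _ in range(200):
    x = prox_grad_armijo(x, f, grad_f, g, lambda z, a: prox_l1(z, a * lam),
                         alpha=1.0 / np.linalg.norm(A, 2) ** 2)
print(f(x) + g(x))
```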
A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
We propose a new first-order primal-dual optimization framework for a convex
optimization template with broad applications. Our optimization algorithms
feature optimal convergence guarantees under a variety of common structure
assumptions on the problem template. Our analysis relies on a novel combination
of three classic ideas applied to the primal-dual gap function: smoothing,
acceleration, and homotopy. The algorithms resulting from the new approach achieve the
best known convergence rates, in particular when the template consists
of only non-smooth functions. We also outline a restart strategy for the
acceleration to significantly enhance the practical performance. We demonstrate
relations with the augmented Lagrangian method and show how to exploit
strongly convex objectives with rigorous convergence rate guarantees. We
provide numerical evidence with two examples and illustrate that the new
methods can outperform the state-of-the-art, including the Chambolle-Pock and
the alternating direction method of multipliers algorithms.
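To isolate the smoothing ingredient (our sketch; the full framework adds acceleration and a homotopy that drives the smoothing parameter to zero), one can replace the nonsmooth absolute value with its Moreau envelope, whose gradient is Lipschitz with constant proportional to 1/μ, and run plain gradient descent on a toy ℓ1 regression problem:

```python
import numpy as np

def huber_grad(r, mu):
    """Gradient of the Moreau envelope of |.|, applied elementwise:
    r/mu on [-mu, mu], sign(r) outside (the Huber loss derivative)."""
    return np.clip(r / mu, -1.0, 1.0)

rng = np.random.default_rng(1)
A, b = rng.normal(size=(30, 5)), rng.normal(size=30)

x, mu = np.zeros(5), 0.1
L = np.linalg.norm(A, 2) ** 2 / mu   # Lipschitz constant of the smoothed gradient
for _ in range(500):
    x -= (A.T @ huber_grad(A @ x - b, mu)) / L   # plain gradient descent
print(np.sum(np.abs(A @ x - b)))     # original nonsmooth objective ||Ax - b||_1
```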