Semi-proximal Mirror-Prox for Nonsmooth Composite Minimization
We propose a new first-order optimization algorithm to solve high-dimensional
non-smooth composite minimization problems. Typical examples of such problems
have an objective that decomposes into a non-smooth empirical risk part and a
non-smooth regularization penalty. The proposed algorithm, called Semi-Proximal
Mirror-Prox, leverages the Fenchel-type representation of one part of the
objective while handling the other part of the objective via linear
minimization over the domain. The algorithm stands in contrast with more
classical proximal gradient algorithms with smoothing, which require the
computation of proximal operators at each iteration and can therefore be
impractical for high-dimensional problems. We establish the theoretical
convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal
complexity bound, i.e. O(1/ε²), for the number of calls to the linear
minimization oracle. We present promising experimental results showing the
interest of the approach in comparison to competing methods.
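To make the oracle concrete, here is a minimal sketch of a linear minimization oracle over the ℓ1 ball, one standard domain where an LMO call is far cheaper than a proximal operator; the function name and the example are ours, not the paper's.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||x||_1 <= radius} <grad, x>.
    The minimizer is a signed vertex, so the call costs a single
    pass over the coordinates, with no projection needed."""
    i = np.argmax(np.abs(grad))
    x = np.zeros_like(grad, dtype=float)
    x[i] = -radius * np.sign(grad[i])
    return x

g = np.array([0.5, -2.0, 1.0])
v = lmo_l1_ball(g)  # vertex aligned against the largest gradient entry
```

For matrix domains the gap is larger still: over the nuclear-norm ball the LMO needs only a leading singular pair, while the proximal operator needs a full SVD.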
The Forward-Backward-Forward Method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces
Tseng's forward-backward-forward algorithm is a valuable alternative for
Korpelevich's extragradient method when solving variational inequalities over a
convex and closed set governed by monotone and Lipschitz continuous operators,
as it requires only one projection operation per step. However, it is
well known that Korpelevich's method converges, and can therefore also be used,
for solving variational inequalities governed by pseudo-monotone and Lipschitz
continuous operators. In this paper, we first associate to a pseudo-monotone
variational inequality a forward-backward-forward dynamical system and carry
out an asymptotic analysis for the generated trajectories. The explicit time
discretization of this system results in Tseng's forward-backward-forward
algorithm with relaxation parameters, which we prove to converge also when it
is applied to pseudo-monotone variational inequalities. In addition, we show
that linear convergence is guaranteed under strong pseudo-monotonicity.
Numerical experiments are carried out for pseudo-monotone variational
inequalities over polyhedral sets and fractional programming problems.
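The iteration above can be sketched as follows (without the relaxation parameters studied in the paper), assuming a box constraint set and a skew-symmetric, hence monotone, linear operator; the names and the toy problem are our own, not the paper's experiments.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def tseng_fbf(F, x0, lam, iters=2000):
    """Tseng's forward-backward-forward iteration for VI(F, C):
      y = P_C(x - lam * F(x))        # forward-backward step
      x <- y + lam * (F(x) - F(y))   # correcting forward step
    Only one projection per iteration, unlike the extragradient
    method, which needs two."""
    x = x0.astype(float)
    for _ in range(iters):
        y = project_box(x - lam * F(x))
        x = y + lam * (F(x) - F(y))
    return x

# Rotation operator: monotone and 1-Lipschitz; the unique VI solution is 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: A @ z
sol = tseng_fbf(F, np.array([0.9, -0.7]), lam=0.3)
```

With a step size below the reciprocal Lipschitz constant, the iterates contract toward the solution of this rotation-operator problem.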
Solving Variational Inequalities with Monotone Operators on Domains Given by Linear Minimization Oracles
The standard algorithms for solving large-scale convex-concave saddle point
problems, or, more generally, variational inequalities with monotone operators,
are proximal type algorithms which at every iteration need to compute a
prox-mapping, that is, to minimize over the problem's domain the sum of a
linear form and the specific convex distance-generating function underlying the
algorithms in question. Relative computational simplicity of prox-mappings,
which is the standard requirement when implementing proximal algorithms,
clearly implies the possibility to equip the domain with a relatively
computationally cheap Linear Minimization Oracle (LMO) able to minimize linear
forms over the domain.
There are, however, important situations where a cheap LMO indeed is available,
but where no proximal setup with easy-to-compute prox-mappings is known. This
fact motivates our goal in this paper, which is to develop techniques for
solving variational inequalities with monotone operators on domains given by
Linear Minimization Oracles. The techniques we develop can be viewed as a
substantial extension of the method proposed in [5] for nonsmooth convex
minimization over an LMO-represented domain.
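For contrast, here is one classical case where the prox-mapping the abstract refers to is cheap: the entropy distance-generating function on the probability simplex, whose prox-mapping has a closed form. This is a standard mirror-descent setup, not specific to this paper, and the names are ours.

```python
import numpy as np

def entropy_prox(x, xi):
    """Prox-mapping for the entropy distance-generating function on the
    probability simplex: argmin_u <xi, u> + KL(u, x).
    Closed form: u_i proportional to x_i * exp(-xi_i)."""
    w = x * np.exp(-xi)
    return w / w.sum()

x = np.array([0.25, 0.25, 0.5])                 # current point on the simplex
u = entropy_prox(x, np.array([1.0, 0.0, 0.0]))  # penalize coordinate 0
```

For domains such as nuclear-norm balls, by contrast, no comparably cheap prox-mapping is known while the LMO (one leading singular pair) remains affordable, which is exactly the regime the paper targets.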
Inexact Model: A Framework for Optimization and Variational Inequalities
In this paper we propose a general algorithmic framework for first-order
methods in optimization in a broad sense, including minimization problems,
saddle-point problems and variational inequalities. This framework allows one
to obtain many known methods as special cases, including the accelerated
gradient method, composite optimization methods, level-set methods, and
proximal methods. The idea of the framework is based on constructing an inexact model of
the main problem component, i.e. objective function in optimization or operator
in variational inequalities. Besides reproducing known results, our framework
allows us to construct new methods, which we illustrate by deriving a
universal method for variational inequalities with composite structure. This
method works for smooth and non-smooth problems with optimal complexity without
a priori knowledge of the problem smoothness. We also generalize our framework
for strongly convex objectives and strongly monotone variational inequalities.
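The universal flavor can be illustrated by a backtracking step that accepts a point once an inexact quadratic model holds up to tolerance eps/2, so no smoothness constant is needed in advance. This is a rough sketch in the spirit of universal gradient methods, under our own simplifications, not the paper's algorithm.

```python
import numpy as np

def universal_grad(f, grad, x, eps=1e-3, L0=1.0, iters=50):
    """Gradient method with backtracking on an inexact quadratic model:
    double L until
      f(y) <= f(x) + <grad f(x), y - x> + (L/2)||y - x||^2 + eps/2,
    so L adapts to the local (Holder) smoothness of f."""
    L = L0
    for _ in range(iters):
        g = grad(x)
        while True:
            y = x - g / L
            d = y - x
            if f(y) <= f(x) + g @ d + 0.5 * L * (d @ d) + eps / 2:
                break
            L *= 2.0
        x, L = y, L / 2.0  # optimistically shrink L for the next step
    return x

f = lambda z: 0.5 * (z @ z)   # smooth test objective
grad = lambda z: z
xstar = universal_grad(f, grad, np.array([3.0, -4.0]))
```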
Generalized Forward-Backward Splitting
This paper introduces the generalized forward-backward splitting algorithm
for minimizing convex functions of the form F + ∑_{i=1}^n G_i, where F
has a Lipschitz-continuous gradient and the G_i's are simple in the sense
that their Moreau proximity operators are easy to compute. While the
forward-backward algorithm cannot deal with more than one non-smooth
function, our method generalizes it to the case of arbitrary n. Our method
makes an explicit use of the regularity of F in the forward step, and the
proximity operators of the G_i's are applied in parallel in the backward
step. This allows the generalized forward-backward to efficiently address an
important class of convex problems. We prove its convergence in infinite
dimension, and its robustness to errors in the computation of the proximity
operators and of the gradient of F. Examples on inverse problems in imaging
demonstrate the advantage of the proposed methods in comparison to other
splitting algorithms.
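The splitting above can be sketched on a toy problem with n = 2 simple terms, an ℓ1 penalty and a nonnegativity constraint; the step size, uniform weights, and test problem are our choices, not the paper's imaging experiments.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def gfb(grad_F, proxes, x0, gamma, iters=500):
    """Generalized forward-backward for min F + sum_i G_i:
    a forward (gradient) step on F, then the proximity operators of
    the G_i's applied in parallel to auxiliary variables z_i."""
    n = len(proxes)
    z = [x0.copy() for _ in range(n)]
    x = x0.copy()
    for _ in range(iters):
        g = grad_F(x)
        for i in range(n):
            # each prox receives the scaled step n * gamma (uniform weights)
            z[i] = z[i] + proxes[i](2.0 * x - z[i] - gamma * g, n * gamma) - x
        x = sum(z) / n
    return x

b, lam = np.array([2.0, 0.3, -1.0]), 0.5
grad_F = lambda x: x - b                        # F(x) = 0.5 * ||x - b||^2
prox_l1 = lambda v, t: soft_threshold(v, lam * t)
prox_nonneg = lambda v, t: np.maximum(v, 0.0)   # indicator of x >= 0
x = gfb(grad_F, [prox_l1, prox_nonneg], np.zeros(3), gamma=1.0)
# minimizer of 0.5||x - b||^2 + lam||x||_1 over x >= 0 is max(b - lam, 0)
```

Only the gradient of F is evaluated at the averaged point; the two proximity operators never need to be composed, which is the point of the parallel backward step.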