Decomposition Techniques for Bilinear Saddle Point Problems and Variational Inequalities with Affine Monotone Operators on Domains Given by Linear Minimization Oracles
The majority of first-order methods for large-scale convex-concave saddle
point problems and variational inequalities with monotone operators are
proximal algorithms which at every iteration need to minimize, over the
problem's domain X, the sum of a linear form and a strongly convex function. To
make such an algorithm practical, X should be proximal-friendly -- it should
admit a strongly convex function whose linear perturbations are easy to
minimize. As a byproduct, X then admits a computationally cheap Linear
Minimization Oracle (LMO) capable of minimizing linear forms over X. There are,
however, important situations where a cheap LMO is indeed available but X is
not proximal-friendly, which motivates the search for algorithms based solely
on LMOs. For smooth convex minimization,
there exists a classical LMO-based algorithm -- Conditional Gradient. In
contrast, the LMO-based techniques known to us for other problems with convex
structure (nonsmooth convex minimization, convex-concave saddle point problems,
even ones as simple as bilinear, and variational inequalities with monotone
operators, even ones as simple as affine) are quite recent and rely on a common
approach based on Fenchel-type representations of the associated
objectives/vector fields. The goal of this paper is to develop alternative
(and seemingly much simpler) LMO-based decomposition techniques for bilinear
saddle point problems and for variational inequalities with affine monotone
operators.
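The classical Conditional Gradient (Frank-Wolfe) scheme mentioned above can be sketched in a few lines. The following is a minimal illustration -- not the paper's decomposition technique -- minimizing a quadratic over the probability simplex, where the LMO simply returns the vertex minimizing the linear form:

```python
import numpy as np

def lmo_simplex(g):
    # LMO over the probability simplex: the minimizer of <g, x> over the
    # simplex is the vertex e_i with i = argmin_i g_i.
    e = np.zeros_like(g)
    e[np.argmin(g)] = 1.0
    return e

def conditional_gradient(grad_f, lmo, x0, iters=200):
    # Classical Frank-Wolfe with the standard 2/(k+2) step size.
    x = x0
    for k in range(iters):
        s = lmo(grad_f(x))              # one LMO call per iteration
        gamma = 2.0 / (k + 2.0)
        x = (1 - gamma) * x + gamma * s  # convex combination stays in X
    return x

# Toy problem (illustrative): f(x) = 0.5 * ||x - b||^2 over the simplex.
b = np.array([0.2, 0.5, 0.1])
x = conditional_gradient(lambda x: x - b, lmo_simplex, np.ones(3) / 3)
```

Note that each iteration touches X only through the LMO, which is the property the abstract's proximal-free algorithms are built around.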
Bounded perturbation resilience of extragradient-type methods and their applications
In this paper we study the bounded perturbation resilience of the
extragradient and the subgradient extragradient methods for solving variational
inequality (VI) problems in real Hilbert spaces. This is an important property
of algorithms which guarantees the convergence of the scheme under summable
errors, meaning that an inexact version of the methods can also be considered.
Moreover, once an algorithm is proved to be bounded perturbation resilient,
superiorization can be used; this allows flexibility in choosing the bounded
perturbations in order to obtain a superior solution, as explained in the
paper. We also discuss some inertial extragradient methods. Under mild and
standard assumptions of monotonicity and Lipschitz continuity of the VI's
associated mapping, convergence of the perturbed extragradient and subgradient
extragradient methods is proved. In addition, we show that the perturbed
algorithms converge at the rate of . Numerical illustrations are given
to demonstrate the performance of the algorithms.
Comment: Accepted for publication in The Journal of Inequalities and Applications.
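For reference, the unperturbed extragradient method of Korpelevich that the paper builds on can be sketched as follows; the affine skew-symmetric operator and the ball constraint are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def project_ball(x, r=1.0):
    # Euclidean projection onto the closed ball of radius r.
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

def extragradient(F, proj, x0, tau, iters=2000):
    # Korpelevich's extragradient method for a monotone Lipschitz VI:
    # predictor step y = P(x - tau*F(x)), corrector x = P(x - tau*F(y)),
    # with step size tau below 1/L for L the Lipschitz constant of F.
    x = x0
    for _ in range(iters):
        y = proj(x - tau * F(x))
        x = proj(x - tau * F(y))
    return x

# Toy monotone operator: F(x) = A x + b with A skew-symmetric (so F is
# monotone but not a gradient), solution at A x = -b inside the ball.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([0.5, -0.5])
x = extragradient(lambda x: A @ x + b, project_ball, np.zeros(2), tau=0.1)
```

The two projection evaluations per iteration are exactly where the paper's bounded perturbations would be injected.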
Accelerating two projection methods via perturbations with application to Intensity-Modulated Radiation Therapy
Constrained convex optimization problems arise naturally in many real-world
applications. One strategy for solving them approximately is to translate
them into a sequence of convex feasibility problems via the recently developed
level set scheme and then to solve each feasibility problem using projection
methods. However, if the problem is ill-conditioned, projection methods often
exhibit zigzagging behavior and therefore converge slowly.
To address this issue, we exploit the bounded perturbation resilience of the
projection methods and introduce two new perturbations which avoid zigzagging
behavior. The first perturbation is in the spirit of -step methods and uses
gradient information from previous iterates. The second uses the approach of
surrogate constraint methods combined with relaxed, averaged projections.
We apply two different projection methods in the unperturbed version, as well
as the two perturbed versions, to linear feasibility problems along with
nonlinear optimization problems arising from intensity-modulated radiation
therapy (IMRT) treatment planning. We demonstrate that for all the considered
problems the perturbations can significantly accelerate the convergence of the
projection methods and hence the overall procedure of the level set scheme. For
the IMRT optimization problems the perturbed projection methods found an
approximate solution up to 4 times faster than the unperturbed methods while at
the same time achieving objective function values which were 0.5 to 5.1% lower.
Comment: Accepted for publication in Applied Mathematics & Optimization.
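For context, a bare-bones projection method for linear feasibility -- cyclic projections onto halfspaces, without the paper's accelerating perturbations -- might look like this (the specific constraints are a made-up toy example):

```python
import numpy as np

def project_halfspace(x, a, beta):
    # Euclidean projection onto the halfspace {x : <a, x> <= beta}.
    viol = a @ x - beta
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def cyclic_projections(A, b, x0, sweeps=200):
    # Cyclic (POCS-style) projections for the linear feasibility problem
    # A x <= b: project onto each violated halfspace in turn.  This is
    # the kind of unperturbed baseline the paper accelerates.
    x = x0
    for _ in range(sweeps):
        for a, beta in zip(A, b):
            x = project_halfspace(x, a, beta)
    return x

# Toy feasibility problem: x <= 1, y <= 1, x + y >= 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, 0.0])
x = cyclic_projections(A, b, np.array([3.0, -2.0]))
```

On ill-conditioned systems, successive projections like these can nearly cancel each other, which is the zigzagging the abstract's perturbations are designed to suppress.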
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
In this paper, we present a new stochastic algorithm, namely the stochastic
block mirror descent (SBMD) method for solving large-scale nonsmooth and
stochastic optimization problems. The basic idea of this algorithm is to
incorporate the block-coordinate decomposition and an incremental block
averaging scheme into the classic (stochastic) mirror-descent method, in order
to significantly reduce the cost per iteration of the latter algorithm. We
establish the rate of convergence of the SBMD method along with its associated
large-deviation results for solving general nonsmooth and stochastic
optimization problems. We also introduce different variants of this method and
establish their rate of convergence for solving strongly convex, smooth, and
composite optimization problems, as well as certain nonconvex optimization
problems. To the best of our knowledge, all these developments related to the
SBMD methods are new in the stochastic optimization literature. Moreover, some
of our results also seem to be new for block coordinate descent methods for
deterministic optimization.
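A minimal sketch of the block-coordinate idea behind SBMD, using the Euclidean mirror map, single-coordinate blocks, and a toy noisy gradient (all problem data here are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sbmd(stoch_grad_block, x0, n_blocks, steps, lr):
    # Stochastic block mirror descent with the Euclidean mirror map:
    # at each step, sample one block at random, take a stochastic
    # gradient step on that block only, and average the iterates.
    # Updating a single block keeps the per-iteration cost low, which
    # is the point of the block-coordinate decomposition.
    x = x0.copy()
    avg = np.zeros_like(x)
    for _ in range(steps):
        i = rng.integers(n_blocks)            # random block index
        x[i] -= lr * stoch_grad_block(x, i)   # update that block only
        avg += x
    return avg / steps

# Toy problem: f(x) = 0.5 * ||x - c||^2 with noisy per-coordinate
# gradients; each coordinate is its own block.
c = np.array([1.0, -2.0, 3.0])
g = lambda x, i: (x[i] - c[i]) + 0.01 * rng.standard_normal()
x = sbmd(g, np.zeros(3), n_blocks=3, steps=20000, lr=0.05)
```

Replacing the Euclidean squared norm with another distance-generating function would give a non-Euclidean mirror map; the block-sampling structure stays the same.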