Primal-dual extragradient methods for nonlinear nonsmooth PDE-constrained optimization
We study the extension of the Chambolle--Pock primal-dual algorithm to
nonsmooth optimization problems involving nonlinear operators between function
spaces. Local convergence is shown under technical conditions including metric
regularity of the corresponding primal-dual optimality conditions. We also show
convergence for a Nesterov-type accelerated variant provided one part of the
functional is strongly convex.
We show the applicability of the accelerated algorithm to examples of inverse
problems with $L^1$- and $L^\infty$-fitting terms as well as of
state-constrained optimal control problems, where convergence can be guaranteed
after introducing an (arbitrarily small, still nonsmooth) Moreau--Yosida
regularization. This is verified in numerical examples.
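For context, the basic Chambolle--Pock (PDHG) iteration in the linear, finite-dimensional case can be sketched as follows. This is a minimal illustration of the underlying primal-dual scheme, not the nonlinear extension studied in the paper; the function name and the toy problem (l1-regularized least squares) are illustrative assumptions.

```python
import numpy as np

def chambolle_pock(K, b, lam, n_iter=500):
    """Chambolle--Pock (PDHG) iteration for the toy problem
    min_x 0.5 * ||K x - b||^2 + lam * ||x||_1,
    using the splitting F(z) = 0.5 * ||z - b||^2 at z = K x and
    G(x) = lam * ||x||_1.
    """
    L = np.linalg.norm(K, 2)          # operator norm ||K||
    tau = sigma = 0.9 / L             # step sizes with tau * sigma * ||K||^2 < 1
    m, n = K.shape
    x = np.zeros(n)
    x_bar = np.zeros(n)
    y = np.zeros(m)
    for _ in range(n_iter):
        # dual step: y <- prox_{sigma F*}(y + sigma K x_bar)
        y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
        # primal step: x <- prox_{tau G}(x - tau K^T y), i.e. soft-thresholding
        v = x - tau * (K.T @ y)
        x_new = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
        # over-relaxation (extrapolation) with theta = 1
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

The extrapolation step with theta = 1 is what distinguishes the method from a plain primal-dual (Arrow--Hurwicz) iteration and is essential for its convergence guarantees in the convex linear case.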
Some recent advances in projection-type methods for variational inequalities
Projection-type methods are a class of simple methods for solving variational inequalities, especially for complementarity problems. In this paper we review and summarize recent developments in this class of methods, and focus mainly on some new trends in projection-type methods.
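To make the class concrete: the simplest projection-type method for a variational inequality VI(F, C) alternates a gradient-like step with a projection onto C. A minimal sketch, with an affine complementarity problem as an illustrative test case (names and problem data are assumptions, not from the paper):

```python
import numpy as np

def projected_iteration(F, proj, x0, tau, n_iter=1000, tol=1e-10):
    """Basic projection method for VI(F, C): find x* in C such that
    <F(x*), y - x*> >= 0 for all y in C.
    Converges when F is strongly monotone and Lipschitz and tau is
    small enough.
    """
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        x_new = proj(x - tau * F(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For the linear complementarity problem F(x) = M x + q on the nonnegative orthant, the projection is a componentwise maximum with zero, and the solution satisfies x >= 0, F(x) >= 0, and x^T F(x) = 0.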
Accelerating two projection methods via perturbations with application to Intensity-Modulated Radiation Therapy
Constrained convex optimization problems arise naturally in many real-world
applications. One strategy to solve them in an approximate way is to translate
them into a sequence of convex feasibility problems via the recently developed
level set scheme and then solve each feasibility problem using projection
methods. However, if the problem is ill-conditioned, projection methods often
show zigzagging behavior and therefore converge slowly.
To address this issue, we exploit the bounded perturbation resilience of the
projection methods and introduce two new perturbations which avoid zigzagging
behavior. The first perturbation is in the spirit of multi-step methods and uses
gradient information from previous iterates. The second uses the approach of
surrogate constraint methods combined with relaxed, averaged projections.
We apply two different projection methods in the unperturbed version, as well
as the two perturbed versions, to linear feasibility problems along with
nonlinear optimization problems arising from intensity-modulated radiation
therapy (IMRT) treatment planning. We demonstrate that for all the considered
problems the perturbations can significantly accelerate the convergence of the
projection methods and hence the overall procedure of the level set scheme. For
the IMRT optimization problems the perturbed projection methods found an
approximate solution up to 4 times faster than the unperturbed methods while at
the same time achieving objective function values which were 0.5 to 5.1% lower.
Comment: Accepted for publication in Applied Mathematics & Optimization
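The flavor of bounded perturbation resilience can be illustrated on a linear feasibility problem: summable perturbation steps steer the iterates toward lower values of some objective without destroying convergence of the underlying projection method. A rough sketch under illustrative assumptions (cyclic half-space projections and a hypothetical objective f; these are not the paper's specific perturbations):

```python
import numpy as np

def perturbed_feasibility(A, b, grad_f, x0, n_iter=500, beta0=1.0, gamma=0.9):
    """Cyclic half-space projections for A x <= b, with bounded perturbations.

    Before each sweep, take a small step along -grad_f with summable sizes
    beta0 * gamma**k; bounded perturbation resilience guarantees that the
    perturbed method still solves the feasibility problem.
    """
    x = np.asarray(x0, float)
    for k in range(n_iter):
        g = grad_f(x)
        gn = np.linalg.norm(g)
        if gn > 0:
            # perturbation toward lower f; step sizes are summable
            x = x - beta0 * gamma ** k * g / gn
        for a, bi in zip(A, b):
            # project onto the half-space {x : a.x <= bi} if violated
            viol = a @ x - bi
            if viol > 0:
                x = x - (viol / (a @ a)) * a
    return x
```

Because the perturbation sizes are summable, the total deviation they introduce is bounded, which is exactly the resilience property the paper exploits.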
Iterative Methods for Stochastic Variational Inequalities
In this work, we consider stochastic variational inequalities arising from a certain class of equilibrium problems with uncertainties. Uncertainties in the models are introduced through data known only through their probability distributions. We consider several extragradient methods for the solution of the variational inequalities and compare their relative efficiency and effectiveness through thorough numerical comparisons. Several applications such as traffic equilibrium, environmental games, and oligopolistic market equilibrium are considered.
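As background, Korpelevich's extragradient scheme, the deterministic prototype for the methods compared here, uses two projections per iteration; the extra "predictor" evaluation is what buys convergence for merely monotone operators. A minimal deterministic sketch (the stochastic variants replace F by sampled estimates; names are illustrative):

```python
import numpy as np

def extragradient(F, proj, x0, tau, n_iter=1000):
    """Korpelevich's extragradient method for VI(F, C).

    Converges for monotone, Lipschitz F with tau < 1/L, where the
    one-projection scheme can fail (e.g. for rotation-type operators).
    """
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        y = proj(x - tau * F(x))   # predictor step
        x = proj(x - tau * F(y))   # corrector step evaluates F at y
    return x
```

A standard illustration is the rotation operator F(x) = (x2, -x1) on C = R^2 (so the projection is the identity and the solution is x* = 0): the plain projection step spirals outward, while the extragradient iterates contract.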
Principled Analyses and Design of First-Order Methods with Inexact Proximal Operators
Proximal operations are among the most common primitives appearing in both
practical and theoretical (or high-level) optimization methods. This basic
operation typically consists in solving an intermediary (hopefully simpler)
optimization problem. In this work, we survey notions of inaccuracies that can
be used when solving those intermediary optimization problems. Then, we show
that worst-case guarantees for algorithms relying on such inexact proximal
operations can be systematically obtained through a generic procedure based on
semidefinite programming. This methodology is primarily based on the approach
introduced by Drori and Teboulle (Mathematical Programming, 2014) and on convex
interpolation results, and allows producing non-improvable worst-case analyses.
In other words, for a given algorithm, the methodology generates both
worst-case certificates (i.e., proofs) and problem instances on which those
bounds are achieved.
Relying on this methodology, we provide three new methods with conceptually
simple proofs: (i) an optimized relatively inexact proximal point method, (ii)
an extension of the hybrid proximal extragradient method of Monteiro and
Svaiter (SIAM Journal on Optimization, 2013), and (iii) an inexact accelerated
forward-backward splitting supporting backtracking line-search, and both (ii)
and (iii) supporting possibly strongly convex objectives. Finally, we use the
methodology for studying a recent inexact variant of the Douglas-Rachford
splitting due to Eckstein and Yao (Mathematical Programming, 2018).
We showcase and compare the different variants of the accelerated inexact
forward-backward method on a factorization and a total variation problem.
Comment: Minor modifications including acknowledgments and references. Code available at https://github.com/mathbarre/InexactProximalOperator
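To illustrate the basic object of study, an inexact proximal operation, here is a sketch of a proximal point method whose prox subproblem is only solved approximately by a few inner gradient steps. This is a toy illustration of the general idea, not any of the specific methods (i)-(iii) above; all names and parameters are assumptions.

```python
import numpy as np

def inexact_proximal_point(grad_f, x0, lam=1.0, n_outer=50, n_inner=20,
                           eta=0.1):
    """Proximal point method with inexact prox evaluations.

    The exact step would be x_{k+1} = prox_{lam f}(x_k), the minimizer of
    z -> f(z) + ||z - x_k||^2 / (2 lam); here that subproblem is merely
    approximated by n_inner gradient steps, so each outer step carries an
    error of the kind the surveyed inexactness notions quantify.
    """
    x = np.asarray(x0, float)
    for _ in range(n_outer):
        z = x.copy()
        for _ in range(n_inner):
            # gradient of the prox subproblem z -> f(z) + ||z - x||^2/(2 lam)
            g = grad_f(z) + (z - x) / lam
            z = z - eta * g
        x = z   # accept the approximate prox as the next outer iterate
    return x
```

Worst-case analyses of the kind surveyed in the paper quantify exactly how much such inner-loop inexactness degrades (or does not degrade) the outer method's convergence rate.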