The Forward-Backward-Forward Method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces
Tseng's forward-backward-forward algorithm is a valuable alternative to
Korpelevich's extragradient method when solving variational inequalities over a
closed convex set governed by monotone and Lipschitz continuous operators, as
it requires only one projection operation per iteration. It is well known that
Korpelevich's method converges, and can therefore also be used, when the
governing operator is merely pseudo-monotone and Lipschitz continuous. In this
paper, we first associate with a pseudo-monotone
variational inequality a forward-backward-forward dynamical system and carry
out an asymptotic analysis for the generated trajectories. The explicit time
discretization of this system yields Tseng's forward-backward-forward
algorithm with relaxation parameters, which we prove converges also when
applied to pseudo-monotone variational inequalities. In addition, we show
that linear convergence is guaranteed under strong pseudo-monotonicity.
Numerical experiments are carried out for pseudo-monotone variational
inequalities over polyhedral sets and for fractional programming problems.
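The forward-backward-forward iteration described above can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the feasible set (the nonnegative orthant), the affine operator, and the step-size and relaxation values are all invented for the example, and the operator is monotone rather than merely pseudo-monotone.

```python
import numpy as np

# Sketch of Tseng's forward-backward-forward (FBF) iteration with a
# relaxation parameter for VI(F, C): find x* in C such that
# <F(x*), x - x*> >= 0 for all x in C.
# Illustrative data: C is the nonnegative orthant and F(x) = M x + q,
# which is monotone and Lipschitz since the symmetric part of M is
# positive definite.

def project_C(x):
    # projection onto C = nonnegative orthant
    return np.maximum(x, 0.0)

M = np.array([[2.0, 1.0],
              [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q

L = np.linalg.norm(M, 2)          # Lipschitz constant of F
lam = 0.9 / L                     # step size, requires lam < 1/L
rho = 0.9                         # relaxation parameter (rho = 1: plain FBF)

x = np.zeros(2)
for _ in range(2000):
    y = project_C(x - lam * F(x)) # forward-backward step: the only projection
    z = y + lam * (F(x) - F(y))   # second forward (correction) step
    x = (1 - rho) * x + rho * z   # relaxed update

# the natural-map residual vanishes exactly at solutions of VI(F, C)
residual = np.linalg.norm(x - project_C(x - F(x)))
```

Note that each iteration evaluates `F` twice but projects only once, which is the advantage over the extragradient method when projecting onto `C` is expensive.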
Cyclic Coordinate Dual Averaging with Extrapolation
Cyclic block coordinate methods are a fundamental class of optimization
methods widely used in practice and implemented as part of standard software
packages for statistical learning. Nevertheless, their convergence is generally
not well understood and so far their good practical performance has not been
explained by existing convergence analyses. In this work, we introduce a new
block coordinate method that applies to the general class of variational
inequality (VI) problems with monotone operators. This class includes composite
convex optimization problems and convex-concave min-max optimization problems
as special cases and has not been addressed by the existing work. The resulting
convergence bounds match the optimal convergence bounds of full gradient
methods, but are provided in terms of a novel gradient Lipschitz condition
with respect to a Mahalanobis norm. For coordinate blocks, the resulting
gradient Lipschitz constant in our bounds exceeds the traditional Euclidean
Lipschitz constant by at most a bounded factor, while it is possible
for it to be much smaller. Further, when the operator in the VI
has finite-sum structure, we propose a variance reduced variant of our method
which further decreases the per-iteration cost and has better convergence rates
in certain regimes. To obtain these results, we use a gradient extrapolation
strategy that allows us to view a cyclic collection of block coordinate-wise
gradients as one implicit gradient.

Comment: 27 pages, 2 figures. Accepted to SIAM Journal on Optimization.
Version prior to final copy editing.
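The cyclic pass underlying this class of methods can be sketched on a toy problem. This is not the paper's CODER method (which adds dual averaging and a gradient-extrapolation correction, its actual contributions); it only illustrates how the block gradients collected over one cycle stack into a single implicit gradient. All problem data below are invented for the example.

```python
import numpy as np

# Cyclic block coordinate gradient pass on a smooth, strongly convex
# quadratic f(x) = 0.5 x^T A x - b^T x, split into m coordinate blocks.
# Each block gradient is evaluated at the partially updated point, so the
# stacked per-block gradients form one "implicit" full gradient.

rng = np.random.default_rng(0)
n, m = 6, 3                          # dimension and number of blocks
blocks = np.split(np.arange(n), m)

B = rng.standard_normal((n, n))
A = np.eye(n) + 0.1 * (B.T @ B) / n  # well-conditioned positive definite
b = rng.standard_normal(n)
grad = lambda x: A @ x - b

eta = 1.0 / np.linalg.norm(A, 2)     # step size 1/L

x = np.zeros(n)
for _ in range(2000):
    implicit_grad = np.empty(n)
    for j in blocks:
        gj = grad(x)[j]              # block gradient at the partially updated point
        implicit_grad[j] = gj        # stack the cyclic collection of block gradients
        x[j] -= eta * gj             # update block j before moving to the next

x_star = np.linalg.solve(A, b)
err = np.linalg.norm(x - x_star)
```

The point of the paper's extrapolation strategy is precisely that `implicit_grad` is not the gradient at any single iterate; correcting for this mismatch is what makes the analysis match full-gradient bounds.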