Nonconvex Generalization of ADMM for Nonlinear Equality Constrained Problems
The ever-increasing demand for efficient and distributed optimization
algorithms for large-scale data has led to the growing popularity of the
Alternating Direction Method of Multipliers (ADMM). However, although the use
of ADMM to solve linear equality constrained problems is well understood, a
generic framework for solving problems with nonlinear equality constraints,
which are common in practical applications (e.g., spherical constraints), is
still lacking. To address this problem, we propose neADMM, a new generic ADMM
framework for handling nonlinear equality constraints. After
introducing the generalized problem formulation and the neADMM algorithm, the
convergence properties of neADMM are discussed, along with its sublinear
convergence rate o(1/k), where k is the number of iterations. Next, two
important applications of neADMM are considered, and the paper concludes by
describing extensive experiments on several synthetic and real-world datasets
to demonstrate the convergence and effectiveness of neADMM compared to existing
state-of-the-art methods.
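The abstract does not state the neADMM updates themselves, so as background here is a minimal sketch of the classic linearly constrained ADMM that neADMM generalizes, on a toy scalar problem; the objective, problem data, and penalty parameter below are illustrative choices, not taken from the paper.

```python
# Classic (scaled-form) ADMM sketch on the toy problem
#   minimize (x - a)^2 + (z - b)^2  subject to  x - z = 0,
# whose optimum is x = z = (a + b) / 2.
def admm(a, b, rho=1.0, iters=200):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x (x - a)^2 + (rho/2)(x - z + u)^2
        x = (2 * a + rho * (z - u)) / (2 + rho)
        # z-update: argmin_z (z - b)^2 + (rho/2)(x - z + u)^2
        z = (2 * b + rho * (x + u)) / (2 + rho)
        # dual (scaled multiplier) update on the residual x - z
        u = u + x - z
    return x, z

x, z = admm(1.0, 3.0)  # analytic optimum is x = z = 2.0
```

The nonlinear-constraint case the paper targets replaces the linear residual x - z with a nonlinear map, which breaks the closed-form structure of these subproblems.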
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains the scientific program both in survey style and in full detail, together with information on the social program, the venue, special meetings, and more.
Semi-Anchored Multi-Step Gradient Descent Ascent Method for Structured Nonconvex-Nonconcave Composite Minimax Problems
Minimax problems, such as generative adversarial networks, adversarial
training, and fair training, are widely solved in practice by the multi-step
gradient descent ascent (MGDA) method. However, its convergence guarantees are
limited. In this paper, inspired by the primal-dual hybrid gradient method, we
propose a new semi-anchoring (SA) technique for the MGDA method. This makes the
MGDA method find a stationary point of a structured nonconvex-nonconcave
composite minimax problem whose saddle-subdifferential operator satisfies the
weak Minty variational inequality condition. The resulting method, named
SA-MGDA, is built upon a Bregman proximal point method. We further develop its
backtracking line-search version, and its non-Euclidean version for smooth
adaptable functions. Numerical experiments, including fair classification
training, are provided.
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
In this paper, we present a new stochastic algorithm, namely the stochastic
block mirror descent (SBMD) method for solving large-scale nonsmooth and
stochastic optimization problems. The basic idea of this algorithm is to
incorporate the block-coordinate decomposition and an incremental block
averaging scheme into the classic (stochastic) mirror-descent method, in order
to significantly reduce the cost per iteration of the latter algorithm. We
establish the rate of convergence of the SBMD method along with its associated
large-deviation results for solving general nonsmooth and stochastic
optimization problems. We also introduce different variants of this method and
establish their rate of convergence for solving strongly convex, smooth, and
composite optimization problems, as well as certain nonconvex optimization
problems. To the best of our knowledge, all these developments related to the
SBMD methods are new in the stochastic optimization literature. Moreover, some
of our results also seem to be new for block coordinate descent methods for
deterministic optimization.
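The block-decomposition idea can be sketched with the Euclidean mirror map, under which the prox-mapping reduces to a plain gradient step on one randomly sampled coordinate block. The quadratic objective, noise model, and step size below are illustrative assumptions, and the paper's incremental block averaging scheme is omitted.

```python
import random

# SBMD sketch with the Euclidean mirror map: each iteration samples one
# coordinate block and takes a stochastic gradient step on it only.
# Toy objective: f(x) = sum_i (x_i - c_i)^2 with additive gradient noise.
def sbmd(c, block_size=2, step=0.1, iters=4000, seed=0):
    rng = random.Random(seed)
    x = [0.0] * len(c)
    n_blocks = len(c) // block_size
    for _ in range(iters):
        b = rng.randrange(n_blocks)               # sample a block uniformly
        for j in range(b * block_size, (b + 1) * block_size):
            g = 2 * (x[j] - c[j]) + rng.gauss(0.0, 0.01)  # noisy partial gradient
            x[j] -= step * g
    return x

x = sbmd([1.0, -2.0, 0.5, 3.0])
```

The per-iteration cost is that of one block, which is the source of the savings over full mirror descent that the abstract emphasizes.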
Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions
Single-call stochastic extragradient methods, like stochastic past
extragradient (SPEG) and stochastic optimistic gradient (SOG), have gained a
lot of interest in recent years and are one of the most efficient algorithms
for solving large-scale min-max optimization and variational inequality
problems (VIPs) arising in various machine learning tasks. However, despite
their undoubted popularity, current convergence analyses of SPEG and SOG
require a bounded variance assumption. In addition, several important questions
regarding the convergence properties of these methods are still open, including
mini-batching, efficient step-size selection, and convergence guarantees under
different sampling strategies. In this work, we address these questions and
provide convergence guarantees for two large classes of structured non-monotone
VIPs: (i) quasi-strongly monotone problems (a generalization of strongly
monotone problems) and (ii) weak Minty variational inequalities (a
generalization of monotone and Minty VIPs). We introduce the expected residual
condition, explain its benefits, and show how it can be used to obtain a
strictly weaker bound than previously used growth conditions, expected
co-coercivity, or bounded variance assumptions. Equipped with this condition,
we provide theoretical guarantees for the convergence of single-call
extragradient methods for different step-size selections, including constant,
decreasing, and step-size-switching rules. Furthermore, our convergence
analysis holds under the arbitrary sampling paradigm, which includes importance
sampling and various mini-batching strategies as special cases.
Comment: 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
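The "single-call" structure can be illustrated in the deterministic case: the optimistic-gradient (past-extragradient) step makes one operator evaluation per iteration and reuses the previous one as the extrapolation term. The bilinear toy problem and step size below are illustrative, chosen because plain gradient descent ascent diverges on it.

```python
# Single-call optimistic gradient (past-extragradient) sketch on the
# bilinear toy problem min_x max_y x*y, i.e. the operator F(x, y) = (y, -x).
# Update: z_{k+1} = z_k - eta * (2 F(z_k) - F(z_{k-1})), one F-call per step.
def sog(x=1.0, y=1.0, eta=0.2, iters=1000):
    fx_prev, fy_prev = y, -x              # F at the initial point
    for _ in range(iters):
        fx, fy = y, -x                    # the single operator call this step
        x -= eta * (2 * fx - fx_prev)     # optimistic (past-gradient) correction
        y -= eta * (2 * fy - fy_prev)
        fx_prev, fy_prev = fx, fy
    return x, y
```

Replacing the exact operator evaluation with a sampled estimate gives the stochastic variants (SPEG/SOG) whose analysis the paper carries out under the expected residual condition.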
Adjoint-based predictor-corrector sequential convex programming for parametric nonlinear optimization
This paper proposes an algorithmic framework for solving parametric
optimization problems which we call adjoint-based predictor-corrector
sequential convex programming. After presenting the algorithm, we prove a
contraction estimate that guarantees the tracking performance of the algorithm.
Two variants of this algorithm are investigated. The first one can be used to
solve nonlinear programming problems while the second variant is aimed to treat
online parametric nonlinear programming problems. The local convergence of
these variants is proved. An application to a large-scale benchmark problem
that originates from nonlinear model predictive control of a hydro power plant
is implemented to examine the performance of the algorithms.
Comment: This manuscript consists of 25 pages and 7 figures.
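The predictor-corrector idea behind such tracking schemes can be sketched on a scalar parametric root-finding problem standing in for the optimality system of a parametric program; the system g(x, t) = x^3 + x - t, the parameter grid, and the single Newton corrector step are illustrative assumptions, not the paper's adjoint-based SCP scheme or its hydro-power benchmark.

```python
# Predictor-corrector tracking sketch for a parametric optimality system
# g(x, t) = x^3 + x - t = 0 as the parameter t moves from 0 to 1:
# a tangent predictor along dx/dt, then one Newton corrector step per stage.
def track(t_end=1.0, steps=20):
    x, t, dt = 0.0, 0.0, t_end / steps
    for _ in range(steps):
        t += dt
        dgdx = 3 * x * x + 1
        x += dt / dgdx                # predictor: dx/dt = -g_t / g_x = 1 / g_x
        dgdx = 3 * x * x + 1
        x -= (x**3 + x - t) / dgdx    # corrector: one Newton step on g(., t)
    return x

x = track()  # tracks the root of x^3 + x = t up to t = 1
```

A single inexact corrector step per parameter change is what makes such schemes attractive online: the iterate stays within a contraction neighborhood of the moving solution, which is the kind of tracking guarantee the paper's contraction estimate formalizes.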