Randomized Lagrangian Stochastic Approximation for Large-Scale Constrained Stochastic Nash Games
In this paper, we consider stochastic monotone Nash games where each player's
strategy set is characterized by a possibly large number of explicit convex
inequality constraints. Notably, the functional constraints of each player may
depend on the strategies of the other players, capturing a subclass of
generalized Nash equilibrium problems (GNEPs). Only limited work provides
guarantees for this class of stochastic GNEPs, even when the functional
constraints of the players are independent of each other, and the majority of
the existing methods rely on projected stochastic approximation (SA) schemes.
However, projected SA methods perform poorly when the constraint set
involves a large number of possibly nonlinear functional inequalities.
Motivated by the absence of performance guarantees for
computing the Nash equilibrium in constrained stochastic monotone Nash games,
we develop a single timescale randomized Lagrangian multiplier stochastic
approximation method where in the primal space, we employ an SA scheme, and in
the dual space, we employ a randomized block-coordinate scheme where only a
randomly selected Lagrangian multiplier is updated. We show that our method
achieves a provable convergence rate, for suitably defined suboptimality and
infeasibility metrics, in a mean sense.
Comment: The results of this paper have been presented at the International
Conference on Continuous Optimization (ICCOPT) 2022 and the East Coast
Optimization Meeting (ECOM) 202
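The primal-dual scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: the function names, the toy problem, the noise model, and the step-size choices below are all assumptions made for illustration. The key structural features are the two the abstract names: a full SA step in the primal space, and an update of only one randomly selected multiplier in the dual space (scaled by the number of constraints so the step is unbiased).

```python
import random

def randomized_lagrangian_sa(grad_f, constraints, grad_constraints,
                             x0, steps=20000, gamma=0.01,
                             noise_std=0.1, seed=0):
    """Single-timescale sketch (illustrative, not the paper's method):
    a noisy SA step on the Lagrangian in the primal space, and a
    randomized block-coordinate update of one multiplier in the dual."""
    rng = random.Random(seed)
    x = x0
    m = len(constraints)
    lam = [0.0] * m  # Lagrange multipliers, kept nonnegative
    for _ in range(steps):
        # primal SA step: stochastic gradient of the Lagrangian at (x, lam)
        g = grad_f(x) + rng.gauss(0.0, noise_std)
        g += sum(lam[i] * grad_constraints[i](x) for i in range(m))
        x -= gamma * g
        # dual step: update a single randomly selected multiplier,
        # scaled by m so the update is unbiased for the full dual gradient
        i = rng.randrange(m)
        lam[i] = max(0.0, lam[i] + gamma * m * constraints[i](x))
    return x, lam

# toy problem: minimize (x - 2)^2 subject to x <= 1 and x >= -3;
# the constrained minimizer sits on the active constraint x = 1
x_star, lam = randomized_lagrangian_sa(
    grad_f=lambda x: 2.0 * (x - 2.0),
    constraints=[lambda x: x - 1.0, lambda x: -3.0 - x],
    grad_constraints=[lambda x: 1.0, lambda x: -1.0],
    x0=0.0,
)
```

In the toy run, the iterate settles near the active constraint x = 1 while the corresponding multiplier grows to offset the objective gradient, mirroring the suboptimality and infeasibility metrics the paper bounds in expectation.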
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program both in survey style and in full detail, along with information on the social program, the venue, special meetings, and more.
The Geometry of Monotone Operator Splitting Methods
We propose a geometric framework to describe and analyze a wide array of
operator splitting methods for solving monotone inclusion problems. The initial
inclusion problem, which typically involves several operators combined through
monotonicity-preserving operations, is seldom solvable in its original form. We
embed it in an auxiliary space, where it is associated with a surrogate
monotone inclusion problem with a more tractable structure and which allows for
easy recovery of solutions to the initial problem. The surrogate problem is
solved by successive projections onto half-spaces containing its solution set.
The outer approximation half-spaces are constructed by using the individual
operators present in the model separately. This geometric framework is shown to
encompass traditional methods as well as state-of-the-art asynchronous
block-iterative algorithms, and its flexible structure provides a pattern for
designing new ones.
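The outer-approximation idea can be made concrete with a small sketch. The following is an assumption-laden illustration, not the framework of the paper: it uses a classical hyperplane-projection step for a single Lipschitz monotone operator T. By monotonicity, every zero z of T satisfies ⟨z − y, T(y)⟩ ≤ 0 for any point y, so the half-space through a forward-step point y contains the solution set, and projecting the iterate onto it is a Fejér-monotone step.

```python
def halfspace_projection_method(T, x0, gamma=0.2, steps=2000):
    """Illustrative hyperplane-projection sketch for finding a zero of a
    monotone operator T: take a forward step, build an outer-approximation
    half-space through that point, then project the iterate onto it."""
    x = list(x0)
    for _ in range(steps):
        tx = T(x)
        y = [xi - gamma * ti for xi, ti in zip(x, tx)]  # forward step
        t = T(y)                                        # half-space normal
        tt = sum(ti * ti for ti in t)
        if tt == 0.0:
            return y  # y is already a zero of T
        # H = {z : <z - y, t> <= 0} contains zer T by monotonicity;
        # project x onto H (no move needed if x is already inside)
        s = max(0.0, sum((xi - yi) * ti
                         for xi, yi, ti in zip(x, y, t)) / tt)
        x = [xi - s * ti for xi, ti in zip(x, t)]
    return x

# toy monotone affine operator T(x) = A x - b with its zero at (1, 1)
A = [[2.0, 1.0], [-1.0, 2.0]]
b = [3.0, 1.0]
T = lambda x: [sum(A[i][j] * x[j] for j in range(2)) - b[i]
               for i in range(2)]
sol = halfspace_projection_method(T, [0.0, 0.0])
```

The paper's framework generalizes this picture: the half-spaces are built from the individual operators of a structured inclusion, in an auxiliary space, possibly asynchronously and block-iteratively.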
Provably effective algorithms for min-max optimization
Many fundamental machine learning tasks can be formulated as min-max optimization. This motivates us to design effective and efficient first-order methods that provably converge to global min-max points. To this end, this thesis focuses on designing practical algorithms for several specific machine learning tasks. We consider several settings: unconstrained or constrained strongly-convex-(strongly-)concave, constrained convex-concave, and nonconvex-concave problems. We tackle the following concrete questions by studying the above problems:
1. Can we reformulate a single minimization problem to two-player games to help reduce
the computational complexity of finding global optimal points?
2. Can projection-free algorithms achieve last-iterate convergence for constrained min-max
optimization problems with the convex-concave landscape?
3. Can we show that stochastic gradient descent-ascent, a method commonly used in practice for GAN training, actually finds global optima and can learn a target distribution?
We make progress on these questions by proposing practical algorithms with theoretical guarantees. We also present extensive empirical studies to verify the effectiveness of our proposed methods.
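The method named in question 3 can be sketched in a few lines. This is only a toy illustration under stated assumptions, not the thesis's analysis: the objective, step size, and noise model below are made up, and the problem is strongly-convex-strongly-concave so that plain simultaneous updates converge (on, say, bilinear problems they need not).

```python
import random

def sgda(grad_x, grad_y, x0, y0, eta=0.1, steps=3000, noise_std=0.1, seed=0):
    """Simultaneous stochastic gradient descent-ascent: descend in the
    minimizing variable x, ascend in the maximizing variable y, both
    using noisy gradient estimates."""
    rng = random.Random(seed)
    x, y = x0, y0
    for _ in range(steps):
        gx = grad_x(x, y) + rng.gauss(0.0, noise_std)
        gy = grad_y(x, y) + rng.gauss(0.0, noise_std)
        x, y = x - eta * gx, y + eta * gy  # simultaneous update
    return x, y

# toy saddle problem: min_x max_y  x^2 + x*y - y^2, saddle point at (0, 0)
x, y = sgda(grad_x=lambda x, y: 2 * x + y,
            grad_y=lambda x, y: x - 2 * y,
            x0=2.0, y0=-2.0)
```

In GAN training, x plays the role of the generator parameters and y the discriminator parameters; the thesis's question is whether such dynamics can reach global optima and learn a target distribution, which this toy run does not address.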