Approximate Convex Optimization by Online Game Playing
Lagrangian relaxation and approximate optimization algorithms have received
much attention in the last two decades. Typically, the running time of these
methods to obtain an ε-approximate solution is proportional to 1/ε².
Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for
fractional packing linear programs which runs in O*(1/ε) iterations. The
latter algorithm requires solving a convex quadratic program at every
iteration, an optimization subroutine which dominates the theoretical
running time.
We give an algorithm for convex programs with strictly convex constraints
which runs in time proportional to 1/ε. The algorithm does NOT require
solving any quadratic program, but uses only gradient steps and elementary
operations. Problems which have strictly convex constraints include
maximum entropy frequency estimation, portfolio optimization with loss risk
constraints, and various computational problems in signal processing.
As a side product, we also obtain a simpler version of Bienstock and
Iyengar's result for general linear programming, with similar running time.
We derive these algorithms using a new framework for deriving convex
optimization algorithms from online game playing algorithms, which may be of
independent interest.
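The gradient-steps-only flavor of such methods can be illustrated with a minimal primal-dual sketch. This is a generic Lagrangian gradient method on an assumed toy program with one strictly convex constraint; it is not the paper's algorithm and carries none of its rate guarantees:

```python
# Toy sketch (NOT the paper's algorithm or its rate): plain primal-dual
# gradient steps on the Lagrangian of
#     minimize x1 + x2   subject to   x1^2 + x2^2 <= 1,
# a convex program with a single strictly convex constraint. Only gradient
# steps and elementary arithmetic appear; no quadratic program is solved.
def solve(eta=0.01, iters=50_000):
    x = [0.0, 0.0]  # primal iterate
    lam = 0.0       # Lagrange multiplier of the ball constraint
    for _ in range(iters):
        # primal descent on L(x, lam) = x1 + x2 + lam * (||x||^2 - 1)
        gx = (1.0 + 2.0 * lam * x[0], 1.0 + 2.0 * lam * x[1])
        x = [x[0] - eta * gx[0], x[1] - eta * gx[1]]
        # dual ascent on the constraint violation, clipped at zero
        lam = max(0.0, lam + eta * (x[0] ** 2 + x[1] ** 2 - 1.0))
    return x, lam

x, lam = solve()
obj = x[0] + x[1]  # optimum is -sqrt(2), attained at x = (-0.7071, -0.7071)
```

Because the constraint is strictly convex, the Lagrangian is strongly convex in x whenever the multiplier is positive, which is what lets simple alternating gradient steps settle at the saddle point instead of cycling.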
Convex-Concave Min-Max Stackelberg Games
Min-max optimization problems (i.e., min-max games) have been attracting a
great deal of attention because of their applicability to a wide range of
machine learning problems. Although significant progress has been made
recently, the literature to date has focused on games with independent strategy
sets; little is known about solving games with dependent strategy sets, which
can be characterized as min-max Stackelberg games. We introduce two first-order
methods that solve a large class of convex-concave min-max Stackelberg games,
and show that our methods converge in polynomial time. Min-max Stackelberg
games were first studied by Wald, under the posthumous name of Wald's maximin
model, a variant of which is the main paradigm used in robust optimization,
which means that our methods can likewise solve many convex robust optimization
problems. We observe that the computation of competitive equilibria in Fisher
markets also comprises a min-max Stackelberg game. Further, we demonstrate the
efficacy and efficiency of our algorithms in practice by computing competitive
equilibria in Fisher markets with varying utility structures. Our experiments
suggest potential ways to extend our theoretical results, by demonstrating how
different smoothness properties can affect the convergence rate of our
algorithms.
Comment: 25 pages, 4 tables, 1 figure, Forthcoming in NeurIPS 202
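To make the dependent-strategy-set structure concrete, here is an assumed toy min-max Stackelberg game, not one of the paper's two methods: the inner (follower) player's feasible set {y : 0 ≤ y ≤ 1 − x} shifts with the outer (leader) player's choice x, and the leader runs projected gradient descent on the value function using the follower's closed-form best response:

```python
# Illustrative toy game (not the paper's algorithms):
#     min_{x in [0,1]}  max_{y : 0 <= y <= 1 - x}  f(x, y)
# The follower's strategy set depends on x, the defining feature of a
# min-max Stackelberg game.
def f(x, y):
    return (x - 0.5) ** 2 + y

def inner_best_response(x):
    # f is increasing in y, so the follower plays its upper bound 1 - x
    return max(0.0, 1.0 - x)

def value(x):
    # leader's value function V(x) = max_y f(x, y)
    return f(x, inner_best_response(x))

x, eta, h = 0.0, 0.05, 1e-5
for _ in range(2000):
    grad = (value(x + h) - value(x - h)) / (2.0 * h)  # numeric dV/dx
    x = min(1.0, max(0.0, x - eta * grad))            # project onto [0, 1]

# equilibrium: x* = 1 squeezes the follower to y* = 0, with value 0.25
```

The leader deliberately sacrifices its unconstrained optimum x = 0.5 in order to shrink the follower's feasible set, which is exactly the coupling between strategy sets that independent-strategy min-max solvers cannot express.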
Randomized Lagrangian Stochastic Approximation for Large-Scale Constrained Stochastic Nash Games
In this paper, we consider stochastic monotone Nash games where each player's
strategy set is characterized by a possibly large number of explicit convex
constraint inequalities. Notably, the functional constraints of each player may
depend on the strategies of other players, allowing us to capture a subclass of
generalized Nash equilibrium problems (GNEPs). While there is limited work that
provides guarantees for this class of stochastic GNEPs, even when the functional
constraints of the players are independent of each other, the majority of
existing methods rely on projected stochastic approximation (SA) schemes.
However, projected SA methods perform poorly when the constraint set involves
a large number of possibly nonlinear functional inequalities. Motivated by the
absence of performance guarantees for
computing the Nash equilibrium in constrained stochastic monotone Nash games,
we develop a single-timescale randomized Lagrangian multiplier stochastic
approximation method in which we employ an SA scheme in the primal space and a
randomized block-coordinate scheme in the dual space, where only one randomly
selected Lagrange multiplier is updated per iteration. We show that our method
achieves a provable convergence rate for suitably defined suboptimality and
infeasibility metrics in a mean sense.
Comment: The result of this paper has been presented at International
Conference on Continuous Optimization (ICCOPT) 2022 and East Coast
Optimization Meeting (ECOM) 202
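The primal-dual pattern described above can be sketched on an assumed toy instance (a single-agent constrained problem standing in for the Nash game): a noisy gradient step in the primal space, and an update of one randomly selected Lagrange multiplier in the dual space, scaled by the number of constraints m so the randomized dual step is unbiased for the full dual gradient:

```python
import random

random.seed(0)

# Assumed toy instance (not the paper's game): minimize a strongly convex
# loss subject to m = 3 linear inequalities a_i . x <= b_i, with noisy
# (stochastic) primal gradients and a randomized dual update that touches
# only ONE randomly chosen Lagrange multiplier per iteration.
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b = [1.0, 1.0, 1.5]
m = len(A)

def grad_f(x):
    # gradient of f(x) = (x1 - 2)^2 + (x2 - 2)^2
    return (2.0 * (x[0] - 2.0), 2.0 * (x[1] - 2.0))

eta, T = 0.002, 100_000
x, lam, avg = [0.0, 0.0], [0.0] * m, [0.0, 0.0]

for t in range(T):
    g = grad_f(x)
    for i in range(2):
        # primal SA step: gradient + Lagrangian term + zero-mean noise
        lag = sum(lam[j] * A[j][i] for j in range(m))
        x[i] -= eta * (g[i] + lag + random.gauss(0.0, 0.1))
    # dual step: one randomly selected multiplier, scaled by m so the
    # randomized block update is unbiased for the full dual gradient
    j = random.randrange(m)
    slack = A[j][0] * x[0] + A[j][1] * x[1] - b[j]
    lam[j] = max(0.0, lam[j] + eta * m * slack)
    if t >= T // 2:  # average the tail iterates to damp the noise
        avg[0] += x[0] / (T - T // 2)
        avg[1] += x[1] / (T - T // 2)

# KKT point of this instance: x* = (0.75, 0.75), only the third
# constraint active
```

The appeal of the randomized dual step is that its per-iteration cost is independent of m: with thousands of functional constraints, touching one multiplier per iteration is far cheaper than projecting onto the full constraint set.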