On the existence of solutions to stochastic quasi-variational inequality and complementarity problems
Variational inequality problems allow for capturing an expansive class of
problems, including convex optimization problems, convex Nash games and
economic equilibrium problems, amongst others. Yet in most practical settings,
such problems are complicated by uncertainty, motivating the examination of a
stochastic generalization of the variational inequality problem and its
extensions in which the components of the mapping contain expectations. When
the associated sets are unbounded, ascertaining existence requires having
access to analytical forms of the expectations. Naturally, in practical
settings, such expressions are often difficult to derive, severely limiting the
applicability of such an approach. Consequently, our goal lies in developing
techniques that obviate the need for integration and our emphasis lies in
developing tractable and verifiable sufficiency conditions for claiming
existence. We begin by recapping almost-sure sufficiency conditions for
stochastic variational inequality problems with single-valued maps provided in
our prior work [44] and provide extensions to multi-valued mappings. Next, we
extend these statements to quasi-variational regimes where maps can be either
single or set-valued. Finally, we refine the obtained results to accommodate
stochastic complementarity problems where the maps are either general or
co-coercive. The applicability of our results is demonstrated on practically
occurring instances of stochastic quasi-variational inequality problems and
stochastic complementarity problems, arising as nonsmooth generalized
Nash-Cournot games and power markets, respectively.
Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
We consider the stochastic variational inequality problem in which the map is
expectation-valued in a component-wise sense. Much of the available convergence
theory and rate statements for stochastic approximation schemes are limited to
monotone maps. However, non-monotone stochastic variational inequality problems
are not uncommon and are seen to arise from product pricing, fractional
optimization problems, and subclasses of economic equilibrium problems.
Motivated by the need to address a broader class of maps, we make the following
contributions: (i) We present an extragradient-based stochastic approximation
scheme and prove that the iterates converge to a solution of the original
problem under either pseudomonotonicity requirements or a suitably defined
acute angle condition. Such statements are shown to be generalizable to the
stochastic mirror-prox framework; (ii) Under strong pseudomonotonicity, we show
that the mean-squared error in the solution iterates produced by the
extragradient SA scheme converges at the optimal rate of O(1/K), a statement
that was hitherto unavailable in this regime. Notably, we optimize the initial
steplength by obtaining an ε-infimum of a discontinuous nonconvex
function. Similar statements are derived for mirror-prox generalizations and
can accommodate monotone SVIs under a weak-sharpness requirement. Finally, both
the asymptotics and the empirical rates of the schemes are studied on a set of
variational problems where it is seen that the theoretically specified initial
steplength leads to significant performance benefits.
Comment: Computational Optimization and Applications, 201
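As a hedged illustration (not drawn from the paper), a stochastic extragradient iteration for a toy strongly monotone affine map F(x) = Ax - b over a box can be sketched as follows; the map, noise model, and steplength rule are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # SPD, so F(x) = Ax - b is strongly monotone
b = np.array([3.0, 3.0])                 # the VI over the box is solved by x* = (1, 1)

def F_sample(x):
    """One noisy sample of the expectation-valued map F(x) = Ax - b."""
    return A @ x - b + 0.1 * rng.standard_normal(2)

def proj(x):
    return np.clip(x, 0.0, 5.0)          # projection onto the box X = [0, 5]^2

x = np.array([4.0, 0.5])
for k in range(5000):
    gamma = 1.0 / (k + 10)               # diminishing steplength
    y = proj(x - gamma * F_sample(x))    # extrapolation step
    x = proj(x - gamma * F_sample(y))    # update using the sampled map at y

print(np.round(x, 2))                    # close to x* = (1, 1)
```

The two projected steps per iteration are the signature of extragradient-type schemes; the rate statements in the abstract concern exactly this kind of iterate.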
Tractable ADMM Schemes for Computing KKT Points and Local Minimizers for ℓ₀-Minimization Problems
We consider an ℓ₀-minimization problem in which the sum of a function f and an
ℓ₀-norm regularizer is minimized over a polyhedral set; the regularizer implicitly
emphasizes sparsity of the solution. Such a setting captures a range of
problems in image processing and statistical learning. Given the nonconvex and
discontinuous nature of this norm, convex regularizers are often employed as
substitutes. Therefore, far less is known about directly solving the
-minimization problem. Inspired by [19], we consider resolving an
equivalent formulation of the ℓ₀-minimization problem as a mathematical
program with complementarity constraints (MPCC) and make the following
contributions towards the characterization and computation of its KKT points:
(i) First, we show that feasible points of this formulation satisfy the
relatively weak Guignard constraint qualification. Furthermore, under a
suitable convexity assumption on f, an equivalence is derived between
first-order KKT points and local minimizers of the MPCC formulation. (ii) Next,
we apply two alternating direction method of multipliers (ADMM) algorithms to
exploit the special structure of the MPCC formulation. Both ADMM schemes have
tractable subproblems. Specifically, in spite of the overall nonconvexity, we
show that the first of the ADMM updates can be effectively reduced to a
closed-form expression by recognizing a hidden convexity property while the
second necessitates solving a convex program. For one of these schemes, we prove subsequential convergence to a perturbed KKT point under mild
assumptions. Our preliminary numerical experiments suggest that the tractable
ADMM schemes are more scalable than their standard counterpart and compare well
with competing methods for solving the ℓ₀-minimization problem.
Comment: 47 pages, 3 tables
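The closed-form-update phenomenon can be illustrated on a generic ℓ₀-regularized least-squares toy problem (not the paper's MPCC schemes): despite the nonconvexity, the z-subproblem of ADMM reduces to hard thresholding. All problem data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = 3.0                     # sparse ground truth
b = A @ x_true                           # noise-free observations

lam, rho = 0.2, 1.0
thresh = np.sqrt(2.0 * lam / rho)        # hard-threshold level for the l0 prox
AtA, Atb = A.T @ A, A.T @ b
x = np.zeros(10); z = np.zeros(10); u = np.zeros(10)

for _ in range(200):
    # x-update: an unconstrained convex quadratic, solved in closed form
    x = np.linalg.solve(AtA + rho * np.eye(10), Atb + rho * (z - u))
    # z-update: nonconvex l0 subproblem, yet closed form via hard thresholding
    v = x + u
    z = np.where(np.abs(v) > thresh, v, 0.0)
    u = u + x - z                        # scaled dual update

print(np.flatnonzero(z))                 # recovered support
```

The hard-thresholding step is the textbook prox of the ℓ₀-norm; it plays the same role as the "hidden convexity" closed form referenced in the abstract, though the paper's schemes operate on the MPCC reformulation instead.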
On the resolution of misspecified convex optimization and monotone variational inequality problems
We consider a misspecified optimization problem that requires minimizing a
function f(x;q*) over a closed and convex set X where q* is an unknown vector
of parameters that may be learnt by a parallel learning process. In this
context, we examine the development of coupled schemes that generate iterates
{x_k, q_k} such that, as k goes to infinity, {x_k} converges to x*, a minimizer of
f(x;q*) over X, and {q_k} converges to q*. In the first part of the paper, we
consider the solution of problems where f is either smooth or nonsmooth under
various convexity assumptions on the function f. In addition, rate statements are
provided to quantify the degradation in the convergence rate resulting from the
learning process. In the second part of the paper, we consider the solution of
misspecified monotone variational inequality problems to contend with more
general equilibrium problems as well as the possibility of misspecification in
the constraints. We first present a constant steplength misspecified
extragradient scheme and prove its asymptotic convergence. This scheme is
reliant on problem parameters (such as Lipschitz constants) and leads us to
present a misspecified variant of iterative Tikhonov regularization. Numerics
support the asymptotic and rate statements.
Comment: 35 pages, 5 figures
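A minimal sketch of such a coupled computation-and-learning scheme, on a hypothetical scalar problem f(x;q) = ½(x - q)² with a quadratic parameter-learning problem (all constants are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
q_true = 1.5                     # the unknown q*; only noisy observations are available

def grad_f(x, q):
    """Gradient in x of f(x;q) = 0.5 * (x - q)^2 (hypothetical objective)."""
    return x - q

x, q = 0.0, 0.0
for k in range(20000):
    alpha = 1.0 / (k + 5)        # diminishing steplength shared by both updates
    # learning update: noisy stochastic gradient step on the parameter estimate
    q -= alpha * (q - q_true + 0.1 * rng.standard_normal())
    # computation update: projected gradient step on f(.; q_k), with X = [0, 2]
    x = float(np.clip(x - alpha * grad_f(x, q), 0.0, 2.0))

print(round(x, 2), round(q, 2))
```

The point of the coupling is that the x-update always uses the current estimate q_k rather than the unavailable q*; the abstract's rate statements quantify how this learning error degrades the optimization rate.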
Asynchronous Schemes for Stochastic and Misspecified Potential Games and Nonconvex Optimization
The distributed computation of equilibria and optima has seen growing
interest in a broad collection of networked problems. We consider the
computation of equilibria of convex stochastic Nash games characterized by a
possibly nonconvex potential function. Our focus is on two classes of
stochastic Nash games: (P1): A potential stochastic Nash game, in which each
player solves a parameterized stochastic convex program; and (P2): A
misspecified generalization, where the player-specific stochastic program is
complicated by a parametric misspecification. In both settings, exact proximal
best-response (BR) solutions are generally unavailable in finite time since they necessitate
solving parameterized stochastic programs. Consequently, we design two
asynchronous inexact proximal BR schemes to solve the problems, where in each
iteration a single player is randomly chosen to compute an inexact proximal BR
solution with rivals' possibly outdated information. In the misspecified
regime (P2), each player additionally maintains an estimate of the misspecified
parameter and updates it via a projected stochastic gradient (SG)
algorithm. Since any stationary point of the potential function is a Nash
equilibrium of the associated game, we believe this paper is amongst the first
ones for stochastic nonconvex (but block convex) optimization problems equipped
with almost-sure convergence guarantees. These statements can be extended to
allow for accommodating weighted potential games and generalized potential
games. Finally, we present preliminary numerics based on applying the proposed
schemes to congestion control and Nash-Cournot games.
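A hedged sketch of an inexact proximal BR scheme with random player selection, on a toy two-player quadratic potential game (the potential, proximal parameter, and inexactness model are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# toy two-player game whose potential is P(x) = 0.5 * x^T Q x - b^T x
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 3.0])
mu = 1.0                         # proximal parameter (illustrative)
x = np.array([5.0, -2.0])

for k in range(500):
    i = rng.integers(2)          # a single randomly chosen player updates
    j = 1 - i
    # inexact proximal best response: closed form plus a decaying inexactness
    # error; for simplicity the rival's value x[j] is current, though the
    # schemes in the abstract also tolerate outdated rival information
    br = (b[i] - Q[i, j] * x[j] + x[i] / mu) / (Q[i, i] + 1.0 / mu)
    x[i] = br + rng.standard_normal() / (k + 1) ** 2

print(np.round(x, 2))            # approaches the Nash equilibrium (1, 1)
```

The decaying additive error stands in for the inexactness incurred by solving each player's stochastic program only approximately.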
SI-ADMM: A Stochastic Inexact ADMM Framework for Stochastic Convex Programs
We consider a structured stochastic convex program requiring the minimization
of an expectation-valued objective subject to a linear constraint coupling two
blocks of variables. Motivated by the need for
decentralized schemes and structure, we propose a stochastic inexact ADMM
(SI-ADMM) framework where subproblems are solved inexactly via stochastic
approximation schemes. Based on this framework, we prove the following: (i)
under suitable assumptions on the associated batch-size of samples utilized at
each iteration, the SI-ADMM scheme produces a sequence that converges to the
unique solution almost surely; (ii) If the number of gradient steps (or
equivalently, the number of sampled gradients) utilized for solving the
subproblems in each iteration increases at a geometric rate, the mean-squared
error diminishes to zero at a prescribed geometric rate; (iii) The overall
iteration complexity in terms of gradient steps (or equivalently samples) is
found to be consistent with the canonical complexity of stochastic approximation schemes.
Preliminary applications on LASSO and distributed regression suggest that the
scheme performs well compared to its competitors.
Comment: 37 pages, 2 figures, 3 tables
On robust solutions to uncertain linear complementarity problems and their variants
A popular approach for addressing uncertainty in variational inequality
problems is by solving the expected residual minimization (ERM) problem. This
avenue necessitates distributional information associated with the uncertainty
and requires minimizing nonconvex expectation-valued functions. We consider a
distinctly different approach in the context of uncertain linear
complementarity problems with a view towards obtaining robust solutions.
Specifically, we define a robust solution to a complementarity problem as one
that minimizes the worst-case of the gap function. In what we believe is
amongst the first efforts to comprehensively address such problems in a
distribution-free environment, we show that under specified assumptions on the
uncertainty sets, the robust solutions to uncertain monotone linear
complementarity problems can be tractably obtained through the solution of a
single convex program. We also define uncertainty sets that ensure that robust
solutions to non-monotone generalizations can also be obtained by solving
convex programs. More generally, robust counterparts of uncertain non-monotone
LCPs are proven to be low-dimensional nonconvex quadratically constrained
quadratic programs. We show that these problems may be globally resolved by
customizing an existing branching scheme. We further extend the tractability
results to include uncertain affine variational inequality problems defined
over uncertain polyhedral sets as well as to hierarchical regimes captured by
mathematical programs with uncertain complementarity constraints. Preliminary
numerics on uncertain linear complementarity and traffic equilibrium problems
suggest that the presented avenues hold promise.
Comment: 37 pages, 3 figures, 8 tables
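The worst-case gap objective can be illustrated on a toy monotone LCP with box uncertainty. The brute-force corner enumeration below is only a sanity check of the robust objective, not the paper's single-convex-program reformulation; all data are illustrative.

```python
import numpy as np
from itertools import product

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite -> monotone LCP
q_bar = np.array([-3.0, -3.0])           # nominal data; x = (1, 1) solves LCP(q_bar, M)
r = 0.5                                  # box uncertainty: q = q_bar + d with |d_i| <= r

def gap(x, q):
    """Natural-residual gap of LCP(q, M): zero iff x solves the LCP."""
    return np.linalg.norm(np.minimum(x, M @ x + q))

x = np.array([1.0, 1.0])
# exhaustive corner enumeration of the uncertainty box for this 2-d example
worst = max(gap(x, q_bar + r * np.array(s)) for s in product([-1.0, 1.0], repeat=2))
print(round(worst, 4))                   # 0.7071 = sqrt(1/2)
```

A robust solution in the abstract's sense would minimize this worst-case gap over x; here we merely evaluate it at the nominal solution.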
On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
Classical extragradient schemes and their stochastic counterpart represent a
cornerstone for resolving monotone variational inequality problems. Yet, such
schemes have a per-iteration complexity of two projections onto a convex set X
and require two evaluations of the map, the former of which could be relatively
expensive if X is a complicated set. We consider two related avenues where
the per-iteration complexity is significantly reduced: (i) A stochastic
projected reflected gradient method requiring a single evaluation of the map
and a single projection; and (ii) A stochastic subgradient extragradient method
that requires two evaluations of the map, a single projection onto X, and a
significantly cheaper projection (onto a halfspace) computable in closed form.
Under a variance-reduced framework reliant on a sample-average of the map based
on an increasing batch-size, we prove almost sure (a.s.) convergence of the
iterates to a random point in the solution set for both schemes. Additionally,
both schemes display a non-asymptotic rate of O(1/K), where K denotes the number
of iterations; notably, both rates match those obtained in deterministic regimes.
deterministic regimes. To address feasibility sets given by the intersection of
a large number of convex constraints, we adapt both of the aforementioned
schemes to a random projection framework. We then show that the random
projection analogs of both schemes also display a.s. convergence under a
weak-sharpness requirement; furthermore, without imposing the weak-sharpness
requirement, both schemes are characterized by a provable rate in terms of the
gap function of the projection of the averaged sequence onto X as well as the
infeasibility of this sequence.
Preliminary numerics support theoretical findings and the schemes outperform
standard extragradient schemes in terms of the per-iteration complexity.
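A hedged sketch of the variance-reduced stochastic projected reflected gradient idea on a toy strongly monotone affine map; batch sizes, steplength, and problem data are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 3.0])                 # x* = (1, 1) solves the VI over the box

def F_batch(x, n):
    """Sample average of the noisy map F(x) = Ax - b over a batch of size n."""
    return A @ x - b + 0.1 * rng.standard_normal((n, 2)).mean(axis=0)

def proj(x):
    return np.clip(x, 0.0, 5.0)          # the single projection per iteration

gamma = 0.1                              # constant steplength (illustrative)
x_prev = np.array([4.0, 0.5])
x = proj(x_prev - gamma * F_batch(x_prev, 1))
for k in range(1, 2000):
    y = 2.0 * x - x_prev                 # reflected point: evaluated, never projected
    x_prev, x = x, proj(x - gamma * F_batch(y, k + 1))   # batch size grows with k
print(np.round(x, 2))
```

The contrast with extragradient is visible in the loop body: one projection and one fresh map evaluation per iteration, with the increasing batch size supplying the variance reduction.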
Distributed Variable Sample-Size Gradient-response and Best-response Schemes for Stochastic Nash Equilibrium Problems over Graphs
This paper considers a stochastic Nash game in which each player minimizes an
expectation-valued composite objective. We make the following contributions.
(I) Under suitable monotonicity assumptions on the concatenated gradient map,
we derive optimal rate statements and oracle complexity bounds for the proposed
variable sample-size proximal stochastic gradient-response (VS-PGR) scheme when
the sample-size increases at a geometric rate. If the sample-size increases at
a polynomial rate, the mean-squared error decays at a corresponding polynomial
rate, while the iteration and oracle complexities to obtain an ε-NE are
quantified accordingly. (II) We then overlay (VS-PGR)
with a consensus phase with a view towards developing distributed protocols for
aggregative stochastic Nash games. In the resulting scheme, when the
sample-size and the consensus steps grow at a geometric and linear rate,
computing an ε-NE requires iteration and oracle complexities similar to those of
(VS-PGR), together with an additional communication overhead. (III) Under a suitable contractive property
associated with the proximal best-response (BR) map, we design a variable
sample-size proximal BR (VS-PBR) scheme, where each player solves a
sample-average BR problem. Akin to (I), we also give the rate statements,
oracle and iteration complexity bounds. (IV) Akin to (II), the distributed
variant achieves similar iteration and oracle complexities to the centralized
(VS-PBR), at the price of an additional communication overhead, when the
communication rounds per iteration increase at a linear rate. Finally,
we present some preliminary numerics to provide empirical support for the rate
and complexity statements.
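A minimal sketch of a variable sample-size projected gradient-response on a toy two-player game with a strongly monotone concatenated map; the map, noise level, steplength, and batch-size schedule are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
G = np.array([[2.0, 1.0], [1.0, 2.0]])   # strongly monotone concatenated gradient map
b = np.array([3.0, 3.0])                 # Nash equilibrium at x* = (1, 1)

x = np.array([5.0, 0.0])
gamma = 0.2                              # constant steplength (illustrative)
for k in range(100):
    Nk = int(np.ceil(1.1 ** k))          # geometrically increasing sample-size
    # sample average of the noisy concatenated gradient over Nk samples
    g = G @ x - b + 0.5 * rng.standard_normal((Nk, 2)).mean(axis=0)
    x = np.clip(x - gamma * g, 0.0, 10.0)   # all players take a projected step
print(np.round(x, 2))                    # approaches the equilibrium (1, 1)
```

The geometric batch growth is what permits a constant steplength here: the sampling error shrinks fast enough that the deterministic contraction dominates, mirroring the rate regime described in contribution (I).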
A Shared-Constraint Approach to Multi-leader Multi-follower Games
Multi-leader multi-follower games are a class of hierarchical games in which
a collection of leaders compete in a Nash game constrained by the equilibrium
conditions of another Nash game amongst the followers. The resulting
equilibrium problem with equilibrium constraints is complicated by nonconvex
agent problems and therefore providing tractable conditions for existence of
global or even local equilibria for it has proved challenging. Consequently,
much of the extant research on this topic is either model specific or relies on
weaker notions of equilibria. We consider a modified formulation in which every
leader is cognizant of the equilibrium constraints of all leaders. Equilibria
of this modified game contain the equilibria, if any, of the original game. The
new formulation has a constraint structure called shared constraints, and our
main result shows that if the leader objectives admit a potential function, the
global minimizers of the potential function over the shared constraint are
equilibria of the modified formulation. We provide another existence result
using fixed point theory that does not require potentiality. Additionally,
local minima, B-stationary, and strong-stationary points of this minimization
are shown to be local Nash equilibria, Nash B-stationary, and Nash
strong-stationary points of the corresponding multi-leader multi-follower game.
We demonstrate the relationship between variational equilibria associated with
this modified shared-constraint game and equilibria of the original game from
the standpoint of the multiplier sets and show how equilibria of the original
formulation may be recovered. We note through several examples that such
potential multi-leader multi-follower games capture a breadth of application
problems of interest and demonstrate our findings on a multi-leader
multi-follower Cournot game.
Comment: The earlier manuscript was rejected. We felt it had too many themes
crowding it and decided to make a separate paper from each theme. This
submission draws some parts from the earlier manuscript and adds new results.
Another part is under review with the IEEE TAC (on arXiv) and another was
published in Proc. IEEE CDC, 2013. This submission is under review with
Set-valued and Variational Analysis
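The main result can be sanity-checked numerically on a hypothetical two-leader toy game with a quadratic potential and a shared constraint (all functions and the constraint are assumptions for the example, not the paper's Cournot model). Grid search confirms that the potential's constrained minimizer admits no profitable unilateral deviation:

```python
import numpy as np

# toy leaders' objectives theta_i(x) = x_i^2 + x_1*x_2 admit the potential
# P(x) = x_1^2 + x_2^2 + x_1*x_2
def P(x1, x2):
    return x1 ** 2 + x2 ** 2 + x1 * x2

def theta(xi, xj):
    return xi ** 2 + xi * xj

grid = np.linspace(0.0, 2.0, 201)
# shared constraint: x1 + x2 >= 1 (small tolerance guards grid arithmetic)
feas = [(a, c) for a in grid for c in grid if a + c >= 1 - 1e-12]
x1s, x2s = min(feas, key=lambda p: P(*p))   # global minimizer of the potential

# equilibrium check: player 1's best feasible unilateral deviation given x2s
dev = min((a for a in grid if a + x2s >= 1 - 1e-12), key=lambda a: theta(a, x2s))
print(x1s, x2s, dev)
```

Since the best deviation coincides with the minimizer's own coordinate, the constrained potential minimizer is an equilibrium of this shared-constraint game, in the spirit of the paper's main result.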