Single timescale regularized stochastic approximation schemes for monotone Nash games under uncertainty

Abstract

In this paper, we consider the distributed computation of equilibria arising in monotone stochastic Nash games over continuous strategy sets. Such games arise in settings where the gradient map of the player objectives is a monotone mapping over the Cartesian product of strategy sets, leading to a monotone stochastic variational inequality. We consider the application of projection-based stochastic approximation schemes. However, such techniques are characterized by a key shortcoming: they can accommodate strongly monotone mappings only. In fact, standard extensions of stochastic approximation schemes for merely monotone mappings require the solution of a sequence of related strongly monotone problems, a natively two-timescale scheme. Accordingly, we consider the development of single timescale techniques for computing equilibria when the associated gradient map does not admit strong monotonicity. We first show that, under suitable assumptions, standard projection schemes can indeed be extended to allow for strict, rather than strong, monotonicity. Furthermore, we introduce a class of regularized stochastic approximation schemes in which the regularization parameter is updated at every step, leading to a single timescale method. The scheme is a stochastic extension of an iterative Tikhonov regularization method, and its global convergence is established. To aid networked implementations, we consider an extension of this result in which players are allowed to choose their steplengths independently, and show that if the deviation across their choices is suitably constrained, then the convergence of the scheme can still be claimed.
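
To make the single timescale idea concrete, below is a minimal sketch of an iterative Tikhonov stochastic approximation update of the form x_{k+1} = Pi_K(x_k - gamma_k (F(x_k) + w_k + eps_k x_k)), in which the steplength gamma_k and the regularization parameter eps_k are both decayed at every iteration, so no inner sequence of strongly monotone subproblems is solved. This is an illustrative sketch, not the authors' algorithm statement: the map F, the box constraint set, the noise model, and the decay exponents are all assumptions chosen here for illustration.

    import numpy as np

    def project_box(x, lo, hi):
        """Euclidean projection onto the box [lo, hi]^n (stand-in for Pi_K)."""
        return np.clip(x, lo, hi)

    def tikhonov_sa(F, x0, lo=-1.0, hi=1.0, iters=50_000, seed=0):
        """Single timescale iterative Tikhonov stochastic approximation (sketch).

        Update: x_{k+1} = Pi_K( x_k - gamma_k * (F(x_k) + noise + eps_k * x_k) ),
        with gamma_k and eps_k decayed at every step. The exponents below are
        illustrative choices (gamma_k * eps_k not summable, gamma_k / eps_k -> 0),
        not the conditions stated in the paper.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(1, iters + 1):
            gamma_k = 1.0 / k**0.6      # steplength, decays faster
            eps_k = 1.0 / k**0.3        # regularization, decays more slowly
            noise = rng.normal(scale=0.1, size=x.shape)  # stochastic error in F
            x = project_box(x - gamma_k * (F(x) + noise + eps_k * x), lo, hi)
        return x

    # Example: a merely monotone (skew-symmetric) map with no strong monotonicity;
    # the unique least-norm solution of the associated VI is the origin.
    F = lambda x: np.array([x[1], -x[0]])
    print(tikhonov_sa(F, x0=np.array([0.8, -0.5])))

Because the map in the example is skew-symmetric, it is monotone but not strongly monotone, which is exactly the regime where the per-step regularization eps_k * x_k is needed; a plain projection scheme with eps_k = 0 is not guaranteed to converge here.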
