
Distributed Learning for Stochastic Generalized Nash Equilibrium Problems

Abstract

This work examines a stochastic formulation of the generalized Nash equilibrium problem (GNEP) in which agents are subject to randomness in the environment whose statistical distribution is unknown. We focus on fully-distributed online learning by the agents and employ penalized individual cost functions to handle the coupled constraints. Three stochastic gradient strategies with constant step-sizes are developed. We allow the agents to use heterogeneous step-sizes and show that the penalty solution approaches the Nash equilibrium in a stable manner within $O(\mu_\text{max})$, for a sufficiently small step-size value $\mu_\text{max}$ and sufficiently large penalty parameters. The operation of the algorithm is illustrated by considering the network Cournot competition problem.
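
The following is a minimal sketch of the penalized stochastic-gradient idea described above, applied to a network Cournot game. The specific model (linear inverse demand p = a - b * sum(q), a shared capacity constraint, a quadratic penalty, and all parameter values) is assumed for illustration and is not taken from the paper; each agent simply descends its own penalized cost with a constant, agent-specific step-size.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative network Cournot setup (assumed, not from the paper) ---
N = 5                              # number of firms (agents)
c = rng.uniform(1.0, 2.0, N)       # marginal production costs
b = 0.5                            # slope of the assumed linear inverse demand p = a - b * sum(q)
a_mean, a_std = 10.0, 1.0          # random demand intercept; its distribution is unknown to the agents
C = 8.0                            # shared capacity: coupled constraint sum(q) <= C

rho = 50.0                         # large penalty parameter, as suggested by the abstract
mu = rng.uniform(0.005, 0.01, N)   # heterogeneous constant step-sizes mu_i

q = np.zeros(N)                    # production quantities (agents' actions)

for t in range(20000):
    a_t = a_mean + a_std * rng.standard_normal()   # stochastic demand realization at time t
    total = q.sum()
    violation = max(0.0, total - C)                # coupled-constraint violation

    # Stochastic gradient of each agent's penalized cost
    #   f_i(q) = -(a_t - b*total)*q_i + c_i*q_i + rho*max(0, total - C)^2
    grad = -(a_t - b * total) + b * q + c + 2.0 * rho * violation

    # Constant-step-size stochastic gradient update, projected onto q_i >= 0
    q = np.maximum(0.0, q - mu * grad)

print("approximate penalized equilibrium quantities:", np.round(q, 3))
print("total output:", round(q.sum(), 3), "capacity:", C)
```

Under this kind of penalty scheme, the iterates hover in a small neighborhood of the penalized equilibrium whose size shrinks with the largest step-size, which is the behavior the abstract summarizes as an $O(\mu_\text{max})$ guarantee.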
