
    A distributed adaptive steplength stochastic approximation method for monotone stochastic Nash games

    We consider a distributed stochastic approximation (SA) scheme for computing an equilibrium of a stochastic Nash game. Standard SA schemes employ diminishing steplength sequences that are square summable but not summable. Such requirements provide little or no guidance on how to leverage Lipschitzian and monotonicity properties of the problem, and naive choices generally do not perform uniformly well across a breadth of problems. While a centralized adaptive stepsize SA scheme is proposed in [1] for the optimization framework, such a scheme provides no freedom for the agents in choosing their own stepsizes. Thus, a direct application of centralized stepsize schemes is impractical for solving Nash games. Furthermore, extensions to game-theoretic regimes where players may independently choose steplength sequences are limited to recent work by Koshal et al. [2]. Motivated by these shortcomings, we present a distributed algorithm in which each player updates his steplength based on the previous steplength and some problem parameters. The steplength rules are derived by minimizing an upper bound on the errors associated with players' decisions. It is shown that these rules generate sequences that converge almost surely to an equilibrium of the stochastic Nash game. Importantly, variants of this rule are suggested in which players independently select steplength sequences while abiding by an overall coordination requirement. Preliminary numerical results are promising.
    Comment: 8 pages, Proceedings of the American Control Conference, Washington, 201
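
    As a concrete illustration of the recursive steplength idea, the following Python sketch runs projected SA on a hypothetical two-player quadratic game. The product-form rule gamma_{k+1} = gamma_k * (1 - c_i * gamma_k) is one rule of the kind derived from minimizing an error bound; the game, the constants c_i, and the noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-player game with quadratic costs:
# f_i(x) = 0.5 * a_i * x_i^2 + b * x_i * x_{-i}. The gradient map is
# strongly monotone when a_i > |b|, so the unique equilibrium is x* = (0, 0).
a = np.array([2.0, 3.0])
b = 0.5

def noisy_partial_grad(i, x):
    """Sampled partial gradient of player i's cost (additive noise)."""
    return a[i] * x[i] + b * x[1 - i] + rng.normal(scale=0.1)

def project(z, lo=-5.0, hi=5.0):
    return np.clip(z, lo, hi)

x = np.array([4.0, -3.0])      # initial strategies
gamma = np.array([0.5, 0.4])   # players' individual initial steplengths
c = np.array([0.3, 0.3])       # per-player constants (stand-ins for problem parameters)

for k in range(5000):
    g = np.array([noisy_partial_grad(i, x) for i in range(2)])
    x = project(x - gamma * g)          # simultaneous projected SA update
    # Each player updates its steplength from its previous steplength and a
    # local constant only; the product form decays like 1/(c_i * k), so the
    # resulting sequence is square summable but not summable.
    gamma = gamma * (1.0 - c * gamma)

print("approximate equilibrium:", x)    # should be near (0, 0)
```

    Note that each player's update uses only its own previous steplength and a local constant, matching the distributed flavor of the scheme: no central coordinator prescribes a common steplength sequence.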

    Single timescale regularized stochastic approximation schemes for monotone Nash games under uncertainty

    In this paper, we consider the distributed computation of equilibria arising in monotone stochastic Nash games over continuous strategy sets. Such games arise in settings where the gradient map of the player objectives is a monotone mapping over the Cartesian product of strategy sets, leading to a monotone stochastic variational inequality. We consider the application of projection-based stochastic approximation schemes. However, such techniques are characterized by a key shortcoming: they can accommodate strongly monotone mappings only. In fact, standard extensions of stochastic approximation schemes to merely monotone mappings require the solution of a sequence of related strongly monotone problems, an inherently two-timescale scheme. Accordingly, we consider the development of single-timescale techniques for computing equilibria when the associated gradient map does not admit strong monotonicity. We first show that, under suitable assumptions, standard projection schemes can indeed be extended to allow for strict, rather than strong, monotonicity. Furthermore, we introduce a class of regularized stochastic approximation schemes, in which the regularization parameter is updated at every step, leading to a single-timescale method. The scheme is a stochastic extension of an iterative Tikhonov regularization method, and its global convergence is established. To aid in networked implementations, we consider an extension of this result in which players are allowed to choose their steplengths independently, and show that if the deviation across their choices is suitably constrained, then convergence of the scheme can still be claimed.
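
    The following Python sketch illustrates the single-timescale iterative Tikhonov idea on an assumed skew-symmetric (hence merely monotone) map: the update x_{k+1} = Pi_X(x_k - gamma_k * (F(x_k) + noise + eps_k * x_k)) shrinks the steplength gamma_k and the regularization eps_k together, with no inner loop. The map, box constraint, and the exponents 0.6 and 0.3 are illustrative assumptions (one choice making gamma_k^2 summable and gamma_k * eps_k not summable), not the paper's exact conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Merely monotone (not strongly or even strictly monotone) map: F(x) = A x
# with A skew-symmetric, so <F(x) - F(y), x - y> = 0 for all x, y. Plain
# projected SA can cycle on such maps; the vanishing Tikhonov term eps * x
# restores strong monotonicity at every step. The VI solution on this box
# is x* = (0, 0).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def noisy_F(x):
    return A @ x + rng.normal(scale=0.1, size=2)

def project(z, lo=-2.0, hi=2.0):
    return np.clip(z, lo, hi)

x = np.array([1.5, -1.0])
for k in range(1, 100001):
    gamma = 1.0 / k**0.6   # steplength: square summable, not summable
    eps = 1.0 / k**0.3     # regularization decays more slowly than gamma;
                           # gamma * eps ~ k^(-0.9), so its sum diverges
    # Single-timescale update: both sequences shrink together; there is no
    # inner loop solving a strongly monotone subproblem to completion.
    x = project(x - gamma * (noisy_F(x) + eps * x))

print("approximate solution:", x)   # should be near (0, 0)
```

    The contrast with the two-timescale approach is that a standard Tikhonov scheme would fix eps, solve the resulting strongly monotone problem to near-completion, then shrink eps and repeat; here eps is reduced after every projection step.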

    On the convergence of mirror descent beyond stochastic convex programming

    In this paper, we examine the convergence of mirror descent in a class of stochastic optimization problems that are not necessarily convex (or even quasi-convex), and which we call variationally coherent. Since the standard technique of "ergodic averaging" offers no tangible benefits beyond convex programming, we focus directly on the algorithm's last generated sample (its "last iterate"), and we show that it converges with probability 1 if the underlying problem is coherent. We further consider a localized version of variational coherence which ensures local convergence of stochastic mirror descent (SMD) with high probability. These results contribute to the landscape of non-convex stochastic optimization by showing that (quasi-)convexity is not essential for convergence to a global minimum: rather, variational coherence, a much weaker requirement, suffices. Finally, building on the above, we reveal an interesting insight regarding the convergence speed of SMD: in problems with sharp minima (such as generic linear programs or concave minimization problems), SMD reaches a minimum point in a finite number of steps (a.s.), even in the presence of persistent gradient noise. This result is to be contrasted with existing black-box convergence rate estimates that are only asymptotic.
    Comment: 30 pages, 5 figures
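
    To illustrate the sharp-minimum phenomenon, here is a minimal SMD sketch with the entropic mirror map (exponentiated gradient) on the probability simplex, tracking the last iterate. The linear objective, noise level, and 1/sqrt(k) steplength are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical problem with a sharp minimum: minimize the linear objective
# c . x over the probability simplex. The minimizer is the vertex e_j with
# j = argmin(c); we monitor the LAST iterate rather than an ergodic average.
c = np.array([0.9, 0.3, 1.4, 0.8])

def noisy_grad(x):
    return c + rng.normal(scale=0.2, size=c.size)   # persistent gradient noise

x = np.full(c.size, 1.0 / c.size)   # uniform start in the simplex interior
for k in range(1, 20001):
    gamma = 1.0 / np.sqrt(k)                  # standard SMD steplength
    w = x * np.exp(-gamma * noisy_grad(x))    # multiplicative (mirror) step
    x = w / w.sum()                           # renormalize onto the simplex

print("last iterate:", np.round(x, 4))  # mass concentrates on index 1 = argmin(c)
```

    Because the minimum sits at a vertex, the gap between the best and second-best coordinates of the accumulated (noisy) gradients grows linearly while the noise grows much more slowly, so the last iterate locks onto the minimizing vertex despite the noise never vanishing.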