
    Single timescale regularized stochastic approximation schemes for monotone Nash games under uncertainty

    Abstract—In this paper, we consider the distributed computation of equilibria arising in monotone stochastic Nash games over continuous strategy sets. Such games arise in settings where the gradient map of the player objectives is a monotone mapping over the Cartesian product of strategy sets, leading to a monotone stochastic variational inequality. We consider the application of projection-based stochastic approximation schemes. However, such techniques suffer from a key shortcoming: they can accommodate strongly monotone mappings only. In fact, standard extensions of stochastic approximation schemes to merely monotone mappings require the solution of a sequence of related strongly monotone problems, which is inherently a two-timescale scheme. Accordingly, we consider the development of single-timescale techniques for computing equilibria when the associated gradient map does not admit strong monotonicity. We first show that, under suitable assumptions, standard projection schemes can indeed be extended to allow for strict, rather than strong, monotonicity. Furthermore, we introduce a class of regularized stochastic approximation schemes in which the regularization parameter is updated at every step, leading to a single-timescale method. The scheme is a stochastic extension of an iterative Tikhonov regularization method, and its global convergence is established. To aid networked implementations, we consider an extension of this result in which players are allowed to choose their steplengths independently, and we show that if the deviation across their choices is suitably constrained, then convergence of the scheme can still be claimed.
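    The sketch below is a rough illustration of the kind of single-timescale update the abstract describes: a projected stochastic approximation step in which a Tikhonov regularization term is added to the sampled gradient map and the regularization parameter decays at every iteration alongside the steplength. The specific mapping (a small skew-symmetric, hence merely monotone, linear map), the box constraint set, the noise model, and the decay exponents are illustrative assumptions and not the paper's exact conditions or algorithm.

```python
import numpy as np

def projection_box(x, lo, hi):
    """Euclidean projection onto a box [lo, hi]^n, standing in for the
    Cartesian product of players' strategy sets."""
    return np.clip(x, lo, hi)

def sampled_gradient_map(x, rng):
    """Noisy evaluation F(x, xi) of a merely monotone gradient map.
    A skew-symmetric map is monotone but not strongly monotone;
    this particular choice is an illustrative assumption."""
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return A @ x + 0.1 * rng.standard_normal(x.shape)

def single_timescale_tikhonov_sa(x0, num_iters=50_000, seed=0):
    """Single-timescale iteratively regularized SA sketch:
        x_{k+1} = Proj_X( x_k - gamma_k * ( F(x_k, xi_k) + eps_k * x_k ) ),
    where both the steplength gamma_k and the regularization eps_k are
    updated at every step, rather than solving a sequence of strongly
    monotone subproblems."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for k in range(1, num_iters + 1):
        gamma_k = 1.0 / k**0.6   # steplength
        eps_k = 1.0 / k**0.3     # Tikhonov parameter; decays slower than gamma_k
        g = sampled_gradient_map(x, rng) + eps_k * x
        x = projection_box(x - gamma_k * g, -1.0, 1.0)
    return x

if __name__ == "__main__":
    print(single_timescale_tikhonov_sa(np.array([0.8, -0.5])))
```

    In a networked implementation of the kind the abstract mentions, each player would run its own copy of this update on its block of coordinates with a privately chosen steplength sequence; the convergence claim then hinges on the deviation across those choices being suitably constrained.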

    On distributed optimization problems with variational inequality constraints: Algorithms, complexity analysis, and applications

    Traditionally, constrained optimization models include constraints in the form of inequalities and equations. In this dissertation, we consider a unifying class of optimization problems with variational inequality (VI) constraints that allows for capturing a wide range of applications that may not be formulated by the existing standard constrained models. The main motivation arises from the notion of efficiency of equilibria in multi-agent networks. To this end, we first consider a class of optimization problems with Cartesian variational inequality (CVI) constraints, where the objective function is convex and the CVI is associated with a monotone mapping and a convex Cartesian product set. Motivated by the absence of performance guarantees for addressing this class of problems, we develop an averaged randomized block iteratively regularized gradient scheme. The main contributions include: (i) when the set of the CVI is bounded, we derive new non-asymptotic rate statements for suboptimality and infeasibility error metrics; (ii) when the set of the CVI is unbounded, we establish global convergence in an almost-sure and a mean sense. We numerically validate the proposed method on a networked Nash-Cournot competition. We also implement our scheme on classical image deblurring applications and numerically demonstrate that the proposed scheme outperforms the standard sequential regularization method.

    In the second part, we consider a class of constrained multi-agent optimization problems where the goal is to cooperatively minimize the sum of agent-specific objectives. In this framework, the objective function and the VI mappings are locally known. We develop an iteratively regularized incremental gradient method where the agents communicate over a cycle graph. We derive new non-asymptotic agent-wise convergence rates for suboptimality and infeasibility metrics. We numerically validate the proposed scheme on a transportation network problem. We also apply the proposed scheme to address a special case of this distributed formulation, where the VI constraint characterizes a feasible set, and show the superiority of the proposed scheme to existing incremental gradient methods. A potential future direction is to extend the results of this dissertation to employ gradient tracking techniques and to address multi-agent systems requiring weaker assumptions on the network topology with asynchronous communications.
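    The sketch below illustrates the flavor of the first scheme described above, a randomized block iteratively regularized gradient method for minimizing a convex objective over the solutions of a monotone Cartesian VI: at each step one block is sampled, and a projected step is taken along the VI mapping plus a vanishing multiple of the objective gradient, with a running average of iterates maintained. The skew-symmetric mapping, quadratic objective, scalar blocks, box sets, decay exponents, and all function names are illustrative assumptions, not the dissertation's exact algorithm or analysis conditions; the incremental-gradient variant over a cycle graph is not shown.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Projection onto a box, standing in for each block's convex set X_i."""
    return np.clip(x, lo, hi)

def F_map(x):
    """Monotone mapping of the Cartesian VI constraint; a skew-symmetric
    tridiagonal linear map is used purely for illustration."""
    n = x.size
    A = np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    return A @ x

def grad_f(x):
    """Gradient of the convex outer objective; here f(x) = 0.5*||x - c||^2
    with an arbitrary target c, an illustrative choice only."""
    c = np.linspace(0.0, 1.0, x.size)
    return x - c

def randomized_block_ir_gradient(x0, num_iters=100_000, seed=1):
    """Averaged randomized block iteratively regularized gradient sketch:
    at each step pick one block i at random (blocks are scalar coordinates
    here for simplicity) and update
        x_i <- Proj_{X_i}( x_i - gamma_k * ( F_i(x) + eta_k * grad_i f(x) ) ),
    where eta_k -> 0 so the VI constraint dominates asymptotically,
    and return the running average of the iterates."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    x_avg = np.zeros_like(x)
    n = x.size
    for k in range(1, num_iters + 1):
        gamma_k = 1.0 / k**0.75  # steplength
        eta_k = 1.0 / k**0.15    # regularization weight on the objective
        i = rng.integers(n)      # random block (coordinate) selection
        g_i = F_map(x)[i] + eta_k * grad_f(x)[i]
        x[i] = proj_box(x[i] - gamma_k * g_i)
        x_avg += (x - x_avg) / k  # running average of iterates
    return x_avg

if __name__ == "__main__":
    print(randomized_block_ir_gradient(np.zeros(4)))
```

    Reporting the averaged iterate rather than the last one is what makes non-asymptotic suboptimality and infeasibility statements of the kind described above natural to state, since the error metrics are typically evaluated at the average.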