
    Distributed strategy-updating rules for aggregative games of multi-integrator systems with coupled constraints

    In this paper, we explore aggregative games over networks of multi-integrator agents with coupled constraints. To reach the generalized Nash equilibrium of an aggregative game, a distributed strategy-updating rule is proposed that combines the coordination of Lagrange multipliers with the estimation of the aggregate. Each player has access only to partial decision information and communicates with its neighbors over a weight-balanced digraph, whose weights characterize how much each player values the information received from its neighbors. We first consider networks of double-integrator agents and then turn to multi-integrator agents. The effectiveness of the proposed strategy-updating rules is demonstrated by analyzing the convergence of the corresponding dynamical systems via Lyapunov stability theory, singular perturbation theory and passivity theory. Numerical examples are given to illustrate our results. (9 pages, 4 figures)
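    A minimal sketch of the idea follows. It is not the paper's rule: it assumes single-integrator players (the paper treats multi-integrator dynamics), quadratic costs, a single scalar coupled constraint, and an undirected graph; all parameter values are invented.

```python
import numpy as np

# Minimal sketch (not the paper's exact rule) of distributed generalized Nash
# equilibrium seeking in an aggregative game with a coupled constraint
# sum(x) <= C.  Hypothetical simplifications: single-integrator players
# (the paper treats multi-integrators), quadratic costs
#   J_i(x_i, sigma) = 0.5*a_i*(x_i - d_i)^2 + b_i*x_i*sigma,  sigma = mean(x),
# and an undirected (hence weight-balanced) communication graph.

N = 4
a = np.array([2.0, 3.0, 2.5, 4.0])   # local curvatures (made up)
b = np.array([1.0, 0.5, 0.8, 1.2])   # coupling to the aggregate (made up)
d = np.array([1.0, 1.5, 1.2, 2.0])   # local targets (made up)
C = 2.0                              # coupled resource bound: sum(x) <= C
Adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(1)) - Adj        # graph Laplacian
kc = 10.0                            # consensus gain (fast estimation time scale)

x = np.zeros(N)                      # strategies
s = np.zeros(N)                      # local estimates of the aggregate, s(0) = x(0)
lam = np.zeros(N)                    # local Lagrange multipliers
dt, steps = 0.01, 30000

for _ in range(steps):
    # pseudo-gradient of J_i with the true aggregate replaced by the estimate s_i
    grad = a * (x - d) + b * s + b * x / N
    x_dot = -(grad + lam)            # gradient play plus the coupled-constraint price
    s_dot = -kc * (L @ s) + x_dot    # dynamic average consensus tracks mean(x)
    lam_dot = -kc * (L @ lam) + (x - C / N)   # multiplier coordination + local violation
    x = x + dt * x_dot
    s = s + dt * s_dot
    lam = np.maximum(lam + dt * lam_dot, 0.0) # projection keeps multipliers nonnegative

print("strategies:", x.round(3), "  sum:", round(float(x.sum()), 3))
```

    The three coupled updates mirror the ingredients named in the abstract: gradient play on the strategies, consensus-based estimation of the aggregate, and consensus-coordinated multipliers that price the coupled constraint.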

    Network games with dynamic players: Stabilization and output convergence to Nash equilibrium

    This paper addresses a class of network games played by dynamic agents using their outputs. Unlike most existing related works, the Nash equilibrium in this work is defined in terms of the agents' outputs rather than their full states, which allows the agents to have more general and heterogeneous dynamics and to maintain some privacy of their local states. The network game under consideration is formulated with agents modeled by uncertain linear systems subject to external disturbances. The cost function of each agent is a linear quadratic function of its own output and the outputs of its neighbors in the underlying graph. The main challenge stemming from this formulation is that merely driving the agent outputs to the Nash equilibrium does not guarantee the stability of the agent dynamics. Using each agent's local output and the outputs of its neighbors, we aim to design game strategies that achieve output Nash equilibrium seeking and stabilization of the closed-loop dynamics. In particular, when each agent knows how the actions of its neighbors affect its cost function, a game strategy is developed for network games with digraph topology. When each agent is also allowed to exchange part of its compensator state, a distributed strategy can be designed for networks with connected undirected graphs or connected digraphs.

    Linear quadratic network games with dynamic players: Stabilization and output convergence to Nash equilibrium

    This paper addresses a class of network games played by dynamic agents using their outputs. Unlike most existing related works, the Nash equilibrium in this work is defined in terms of the agents' outputs rather than their full states, which allows the agents to have more general and heterogeneous dynamics and to maintain some privacy of their local states. The network game under consideration is formulated with agents modeled by uncertain linear systems subject to external disturbances. The cost function of each agent is a linear quadratic function of its own output and the outputs of its neighbors in the underlying graph. The main challenge stemming from this formulation is that merely driving the agent outputs to the Nash equilibrium does not guarantee the stability of the agent dynamics. Using each agent's local output and the outputs of its neighbors, we aim to design game strategies that achieve output Nash equilibrium seeking and stabilization of the closed-loop dynamics. In particular, when each agent knows how the actions of its neighbors affect its cost function, a game strategy is developed for network games with digraph topology. When each agent is also allowed to exchange part of its compensator state, a distributed strategy can be designed for networks with connected undirected graphs or weakly connected digraphs.
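    As a rough illustration of the "stabilize and converge to the output Nash equilibrium" objective in this pair of papers, the sketch below uses scalar agents with known (possibly unstable) dynamics and quadratic output costs. That is my simplification: the papers treat uncertain linear systems with disturbances and build internal compensators. All numbers are made up.

```python
import numpy as np

# Simplified sketch of output Nash-equilibrium seeking with stabilization.
# Assumptions (mine, not the paper's): scalar agents y_i' = a_i*y_i + b_i*u_i
# with KNOWN (possibly unstable) a_i, b_i, and output costs
#   J_i(y) = 0.5*q_i*(y_i - r_i)^2 + y_i * sum_j w_ij * y_j  over graph neighbours.

N = 3
a = np.array([0.5, -1.0, 0.2])       # open-loop poles (agents 1 and 3 unstable)
b = np.array([1.0, 2.0, 1.0])
q = np.array([4.0, 5.0, 6.0])
r = np.array([1.0, -1.0, 2.0])
W = np.array([[0.0, 1.0, 0.0],        # w_ij: influence of neighbour j on agent i's cost
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
k = 5.0                               # tracking gain

y = np.zeros(N)                       # agent outputs
z = np.zeros(N)                       # local references produced by gradient play
dt, steps = 0.001, 50000

for _ in range(steps):
    grad = q * (z - r) + W @ y        # pseudo-gradient using measured neighbour outputs
    z_dot = -grad                     # gradient play on the local reference
    # feedback-linearising tracking control (possible here because a_i, b_i are known)
    u = (-a * y - k * (y - z) + z_dot) / b
    y_dot = a * y + b * u
    y, z = y + dt * y_dot, z + dt * z_dot

# Check against the Nash outputs of the quadratic game: (diag(q) + W) y* = q*r
y_star = np.linalg.solve(np.diag(q) + W, q * r)
print("outputs         :", y.round(3))
print("Nash equilibrium:", y_star.round(3))
```

    The verification line uses the first-order condition of this quadratic output game, (diag(q) + W) y* = q*r; the closed loop both stabilizes the unstable agents (through the tracking term) and drives their outputs to that equilibrium.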

    Distributed aggregative optimization with quantized communication

    In this paper, we focus on an aggregative optimization problem under communication bottlenecks. The aggregative optimization problem is to minimize the sum of local cost functions, where each cost function depends not only on the local state variables but also on the sum of functions of the global state variables. The goal is to solve the aggregative optimization problem through distributed computation and efficient local communication over a network of agents without a central coordinator. Using a variable-tracking method to estimate the global aggregate and a quantization scheme to reduce the communication cost incurred during optimization, we develop a novel distributed quantized algorithm, called D-QAGT, which tracks the optimal variables with finite-bit communication. Although quantization discards part of the transmitted information, the algorithm still achieves the exact optimal solution with a linear convergence rate. Simulation experiments on an optimal placement problem are carried out to verify the correctness of the theoretical results.
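    To make the ingredients concrete, here is a toy version of quantized aggregative gradient tracking; it is not D-QAGT itself. I assume each agent's cost is f_i(x_i, sigma) = 0.5*(x_i - d_i)^2 + 0.5*c*(sigma - s_ref)^2 with sigma = mean(x) and the same aggregate penalty for every agent, and I use a crude uniform quantizer, whereas the paper uses a finite-bit encoding scheme that preserves exact linear convergence.

```python
import numpy as np

# Toy sketch of quantized aggregative gradient tracking (not the exact D-QAGT
# scheme).  Hypothetical setup: agent i holds
#   f_i(x_i, sigma) = 0.5*(x_i - d_i)^2 + 0.5*c*(sigma - s_ref)^2,  sigma = mean(x),
# with the same aggregate penalty for every agent, so only the aggregate itself
# has to be tracked.  Neighbours receive uniformly quantized copies of the
# aggregate estimates.

def quantize(v, delta=0.02):
    """Uniform quantizer standing in for the paper's finite-bit scheme."""
    return delta * np.round(v / delta)

N = 5
d = np.array([1.0, 2.0, 0.5, 1.5, 3.0])   # local targets (made up)
c, s_ref = 2.0, 1.0                        # shared aggregate penalty (assumption)
W = np.zeros((N, N))
for i in range(N):                         # ring graph with doubly stochastic weights
    W[i, i], W[i, (i - 1) % N], W[i, (i + 1) % N] = 0.5, 0.25, 0.25
Lg = np.eye(N) - W                         # weighted Laplacian, 1^T Lg = 0

x = np.zeros(N)
s = x.copy()                               # aggregate trackers, initialised at x(0)
alpha = 0.05                               # step size

for _ in range(3000):
    grad = (x - d) + c * (s - s_ref)       # local gradient using the tracked aggregate
    x_new = x - alpha * grad
    # quantized consensus + local innovation; the Laplacian form keeps
    # mean(s) == mean(x) exactly even though only quantized values are shared
    s = s - Lg @ quantize(s) + (x_new - x)
    x = x_new

A = np.eye(N) + (c / N) * np.ones((N, N))  # optimality condition of the summed cost
print("distributed solution:", x.round(3))
print("centralised solution:", np.linalg.solve(A, d + c * s_ref).round(3))
```

    Because the quantization step here is fixed, the iterates only reach a small neighbourhood of the centralised optimum; removing that residual error while transmitting only finitely many bits is precisely what the adaptive scheme described in the abstract achieves.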

    Distributed accelerated Nash equilibrium learning for two-subnetwork zero-sum game with bilinear coupling

    This paper proposes a distributed accelerated first-order continuous-time algorithm with O(1/t^2) convergence to Nash equilibria in a class of two-subnetwork zero-sum games with bilinear couplings. First-order methods, which use only subgradients of the cost functions, are widely adopted in distributed and parallel algorithms for large-scale and big-data problems because of their simple structure. In the worst case, however, first-order methods for two-subnetwork zero-sum games only achieve asymptotic or O(1/t) convergence. In contrast to existing time-invariant first-order methods, this paper designs a distributed accelerated algorithm by combining saddle-point dynamics with time-varying derivative feedback. Under suitable parameter choices, the algorithm achieves O(1/t^2) convergence in terms of the duality gap function without any uniform or strong convexity requirement. Numerical simulations show the efficacy of the algorithm.
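    One way to picture the acceleration mechanism, inertial saddle-point dynamics with vanishing 3/t damping plus feedback of the time derivative of the gradients, is the centralised toy below. Everything in it is my own simplification: the saddle function carries small quadratic terms purely to keep the simulation well behaved, the gains are hand-picked, and there is no network, whereas the paper's algorithm is distributed over two subnetworks, uses subgradients, and needs no strong convexity.

```python
import numpy as np

# Centralised toy of inertial saddle-point dynamics with derivative feedback for
#   L(x, y) = 0.5*||x - x0||^2 + x^T A y - 0.5*||y - y0||^2.
# The quadratic terms and all gains are my assumptions; the paper's distributed
# dynamics and parameter conditions differ.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
x0 = np.array([1.0, -1.0])
y0 = np.array([0.5, 2.0])

x, y = np.zeros(2), np.zeros(2)
vx, vy = np.zeros(2), np.zeros(2)    # velocities of the inertial (second-order) dynamics
beta = 2.0                           # derivative-feedback gain (hand-picked)
dt, t = 1e-3, 1.0                    # start at t0 = 1 so the 3/t damping is finite

for _ in range(100000):
    gx = (x - x0) + A @ y            # grad_x L
    gy = A.T @ x - (y - y0)          # grad_y L
    dgx = vx + A @ vy                # time derivative of grad_x L along the trajectory
    dgy = A.T @ vx - vy              # time derivative of grad_y L
    ax = -(3.0 / t) * vx - (gx + beta * dgx)   # descent on x with vanishing damping
    ay = -(3.0 / t) * vy + (gy + beta * dgy)   # ascent on y with vanishing damping
    x, y = x + dt * vx, y + dt * vy
    vx, vy = vx + dt * ax, vy + dt * ay
    t += dt

# Saddle point from the first-order conditions: x + A y = x0,  A^T x - y = -y0
K = np.block([[np.eye(2), A], [A.T, -np.eye(2)]])
print("simulated (x, y):", np.concatenate([x, y]).round(3))
print("saddle point    :", np.linalg.solve(K, np.concatenate([x0, -y0])).round(3))
```

    The toy only illustrates the structure of saddle-point dynamics with time-varying damping and derivative feedback; the O(1/t^2) duality-gap bound in the abstract comes from the paper's Lyapunov analysis, not from this simulation.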

    Generalized Nash Equilibrium Seeking Algorithm Design for Distributed Constrained Multi-Cluster Games

    This paper addresses multi-cluster games, in which every player cooperates with the players in its own cluster and competes against the players in other clusters to minimize the cost function of its own cluster. The decision of every player is subject to coupling inequality constraints, local inequality constraints and local convex set constraints. Our problem extends well-known noncooperative game problems and resource allocation problems by considering the competition between clusters and the cooperation within clusters at the same time. Moreover, existing game algorithms and resource allocation algorithms cannot solve this problem, because they do not simultaneously account for the resource allocation within clusters, the noncooperative game between clusters, and the aforementioned constraints. To seek the variational generalized Nash equilibrium (GNE) of the multi-cluster game, we design a distributed algorithm based on gradient descent and projections, and we analyze its convergence with the help of variational analysis and Lyapunov stability theory. Under the algorithm, all players asymptotically converge to the variational GNE of the multi-cluster game. Simulation examples are presented to verify the effectiveness of the algorithm.
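    A toy, centralised-information version of the projected pseudo-gradient idea is sketched below for two clusters of two players each, with a shared multiplier for the coupled constraint and box constraints as the local convex sets. The cost functions and all numbers are invented, and the paper's actual algorithm is fully distributed with coordinated local multipliers.

```python
import numpy as np

# Toy projected-gradient sketch of variational GNE seeking in a two-cluster game
# (centralised information; the paper's algorithm is fully distributed).
# Hypothetical setup: cluster 0 = players {0,1}, cluster 1 = players {2,3};
# cluster c's cost is
#   J_c(x) = sum_{i in c} [ 0.5*(x_i - d_i)^2 + 0.3*x_i*(total decision of the other cluster) ],
# the coupled constraint is sum(x) <= C, and the local convex sets are boxes [0, 1].

clusters = [[0, 1], [2, 3]]
d = np.array([0.8, 0.6, 0.9, 0.7])   # made-up targets
C, alpha = 1.5, 0.02                  # resource bound and step size

x = np.zeros(4)
lam = 0.0                             # shared multiplier (variational GNE: equal multipliers)

for _ in range(20000):
    grad = np.zeros(4)
    for c, members in enumerate(clusters):
        other_total = x[clusters[1 - c]].sum()
        # gradient of cluster c's cost w.r.t. its own players' decisions
        grad[members] = (x[members] - d[members]) + 0.3 * other_total
    # projected pseudo-gradient step; clipping enforces the local box constraints
    x = np.clip(x - alpha * (grad + lam), 0.0, 1.0)
    # projected dual ascent on the coupled constraint sum(x) <= C
    lam = max(0.0, lam + alpha * (x.sum() - C))

print("decisions:", x.round(3), "  sum:", round(float(x.sum()), 3), "  multiplier:", round(lam, 3))
```

    Every player applies the gradient of its own cluster's cost together with one common multiplier value, which reflects the equal-multiplier structure that characterises the variational GNE the abstract refers to.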