
    Network games with dynamic players: Stabilization and output convergence to Nash equilibrium

    This paper addresses a class of network games played by dynamic agents using their outputs. Unlike most existing related works, the Nash equilibrium in this work is defined by functions of agent outputs instead of full agent states, which allows the agents to have more general and heterogeneous dynamics and to maintain some privacy of their local states. The network game is formulated with agents modeled by uncertain linear systems subject to external disturbances. The cost function of each agent is a linear quadratic function depending on its own output and the outputs of its neighbors in the underlying graph. The main challenge stemming from this formulation is that merely driving the agent outputs to the Nash equilibrium does not guarantee the stability of the agent dynamics. Using each agent's local output and the outputs of its neighbors, we aim to design game strategies that achieve output Nash equilibrium seeking and stabilization of the closed-loop dynamics. In particular, when each agent knows how the actions of its neighbors affect its cost function, a game strategy is developed for network games with digraph topology. When each agent is also allowed to exchange part of its compensator state, a distributed strategy can be designed for networks with connected undirected graphs or connected digraphs.
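    For reference, a cost of the kind described above is typically a quadratic function of an agent's own output and its neighbors' outputs; a generic form, with notation chosen here for illustration rather than taken from the paper, is

    J_i(y_i, y_{-i}) = \frac{1}{2} y_i^\top Q_i y_i + y_i^\top \sum_{j \in \mathcal{N}_i} R_{ij} y_j + q_i^\top y_i,

    where y_i is agent i's output, \mathcal{N}_i its neighbor set in the graph, and Q_i, R_{ij}, q_i are weighting terms; an output Nash equilibrium is then an output profile at which no agent can decrease its own J_i by unilaterally changing y_i.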

    Distributed Game Strategy Design With Application To Multi-Agent Formation Control

    In this paper, we consider a multi-agent formation control problem from a game-theoretic point of view. A well-known difficulty in communication-network-based formation control is that each agent can only exchange information with other agents according to the communication topology. This information constraint often prevents game strategy design approaches that require individual agents to have global information from being implemented. We formulate the formation control problem so that individual agents try to minimize their locally measured formation errors, and we solve it as a differential game. We consider both non-cooperative and cooperative games and propose a novel distributed design approach that utilizes the relationship between the initial and terminal state variables. The approach is applied to an illustrative formation control example with three agents, and the formation errors under various scenarios are compared and analyzed.
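    For orientation, a locally measured formation error of the type minimized here is commonly expressed through relative states with respect to neighbors, with a quadratic running cost over the game horizon; an illustrative form, not the paper's exact notation, is

    e_i(t) = \sum_{j \in \mathcal{N}_i} \bigl( x_i(t) - x_j(t) - d_{ij} \bigr), \qquad J_i = \int_0^T \bigl( \| e_i(t) \|^2 + \| u_i(t) \|_{R_i}^2 \bigr) \, dt,

    where x_i is agent i's state, d_{ij} the desired offset to neighbor j, and u_i the control input; in the non-cooperative case each agent minimizes its own J_i, while in the cooperative case a weighted sum of the costs is minimized jointly.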