
    Nash equilibrium seeking over digraphs with row-stochastic matrices and network-independent step-sizes

    In this paper, we address the challenge of Nash equilibrium (NE) seeking in non-cooperative convex games with partial-decision information. We propose a distributed algorithm in which each agent refines its strategy through projected-gradient steps and an averaging procedure, using estimates of its competitors' actions obtained solely from local neighbor interactions over a directed communication network. Unlike previous approaches that rely on (strong) monotonicity assumptions, this work establishes convergence to an NE under a diagonal dominance property of the pseudo-gradient mapping that can be checked locally by the agents. This condition is physically interpretable and relevant for many applications, as it reflects that an agent's objective function is influenced primarily by its own strategic decisions rather than by the actions of its competitors. By virtue of a novel block-infinity-norm convergence argument, we provide explicit bounds on the constant step-size that are independent of the communication structure and can be computed in a fully decentralized way. Numerical simulations on an optical network power control problem validate the algorithm's effectiveness.
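
    As a rough illustration of this update (a sketch under assumed conventions, not the paper's exact recursion), one iteration could combine an averaging step over the row-stochastic weights with a projected-gradient step on each agent's own action block. The names x_est, W, grad_i, proj_i, and alpha below are illustrative: local estimates of the full action profile, the weight matrix of the digraph, a partial-gradient oracle, a local projection, and the constant step-size.

```python
import numpy as np

# Sketch of one consensus-plus-projected-gradient iteration for NE seeking
# with partial-decision information (illustrative, not the paper's algorithm).
# x_est[i]: agent i's estimate of the full action profile, an (n, d) array.
# W: row-stochastic weight matrix of the directed communication graph.
# grad_i(i, z): partial gradient of agent i's cost w.r.t. its own action at z.
# proj_i(i, v): projection of v onto agent i's feasible action set.
def ne_seeking_step(x_est, W, grad_i, proj_i, alpha):
    n = len(x_est)
    # Averaging step: each agent mixes the estimates of its in-neighbors.
    z = [sum(W[i][j] * x_est[j] for j in range(n)) for i in range(n)]
    new_est = []
    for i in range(n):
        zi = z[i].copy()
        # Projected-gradient step on agent i's own action block only.
        zi[i] = proj_i(i, z[i][i] - alpha * grad_i(i, z[i]))
        new_est.append(zi)
    return new_est
```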

    Distributed Nash Equilibrium Seeking with Limited Cost Function Knowledge via A Consensus-Based Gradient-Free Method

    This paper considers a distributed Nash equilibrium seeking problem in which the players have only partial access to the other players' actions, such as those of their neighbors. The players must therefore communicate with each other to estimate the other players' actions. To solve the problem, a leader-following consensus, gradient-free, distributed Nash equilibrium seeking algorithm is proposed. The algorithm uses only measurements of each player's local cost function, without knowledge of its explicit expression or any smoothness requirement; hence, it is gradient-free throughout the entire updating process. Moreover, the convergence of the algorithm to the Nash equilibrium is analyzed for both diminishing and constant step-sizes. Specifically, with a diminishing step-size the players' actions converge to the Nash equilibrium almost surely, while with a constant step-size convergence to a neighborhood of the Nash equilibrium is achieved. The performance of the proposed algorithm is verified through numerical simulations.
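
    The sketch below shows only the gradient-free ingredient, assuming a two-point zeroth-order estimate of a player's partial gradient built from cost measurements, combined here with a plain consensus averaging step rather than the authors' leader-following scheme; f_i, W, delta, and alpha_k are illustrative names for the measurable local cost, the consensus weights, the exploration radius, and the step-size.

```python
import numpy as np

# Illustrative gradient-free update for player i (not the paper's exact scheme).
# x_est[j]: player j's estimate of the full action profile, an (n, d) array.
def gradient_free_step(x_est, W, f_i, i, delta, alpha_k, rng):
    n = len(x_est)
    # Consensus step: average the neighbors' estimates of the action profile.
    z = sum(W[i][j] * x_est[j] for j in range(n))
    u = rng.standard_normal(z[i].shape)       # random exploration direction
    z_plus = z.copy()
    z_plus[i] = z[i] + delta * u
    # Two-point estimate of the partial gradient from cost measurements only.
    g_hat = (f_i(i, z_plus) - f_i(i, z)) / delta * u
    z_new = z.copy()
    z_new[i] = z[i] - alpha_k * g_hat         # gradient-free action update
    return z_new
```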

    Fully Distributed Nash Equilibrium Seeking in N-Cluster Games

    Distributed optimization and Nash equilibrium (NE) seeking problems have recently drawn much attention in the control community. This paper studies a class of non-cooperative games, known as N-cluster games, which combine the cooperative and non-cooperative features of these two problems: the agents solve a distributed optimization problem within each cluster, while the clusters play a non-cooperative game against one another. Moreover, we consider a partial-decision information setup, i.e., the agents do not have direct access to other agents' decisions and hence need to communicate with each other through a directed graph, whose associated adjacency matrix is not required to be doubly stochastic. To solve the N-cluster game, we propose a fully distributed NE seeking algorithm that combines leader-following consensus and gradient tracking: the leader-following consensus protocol is adopted to estimate the other agents' decisions, and the gradient tracking method is employed to track a weighted average of the gradients. Furthermore, the algorithm is equipped with uncoordinated constant step-sizes, which allows the agents to choose their own preferred step-sizes instead of a uniform coordinated step-size. We prove that all agents' decisions converge linearly to the corresponding NE as long as the largest step-size and the heterogeneity of the step-sizes are sufficiently small. We verify the derived results through a numerical example of a Cournot competition game.
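
    A minimal sketch of the gradient-tracking ingredient alone is given below; it omits the leader-following consensus used to estimate the other agents' decisions, and the names x, y, g_old, A, grad, and alpha (with per-agent entries alpha[i]) are assumptions, not the paper's notation.

```python
import numpy as np

# Gradient tracking with uncoordinated constant step-sizes (illustrative only).
# x[i]: agent i's decision estimate; y[i]: its tracker of the average gradient;
# g_old[i]: agent i's gradient at the previous iterate; A: consensus weights.
def gradient_tracking_step(x, y, g_old, A, grad, alpha):
    n = len(x)
    # Decision update: consensus plus a step along the tracked gradient,
    # with each agent using its own preferred step-size alpha[i].
    x_new = [sum(A[i][j] * x[j] for j in range(n)) - alpha[i] * y[i]
             for i in range(n)]
    # Tracker update: y_i follows a weighted average of the local gradients.
    g_new = [grad(i, x_new[i]) for i in range(n)]
    y_new = [sum(A[i][j] * y[j] for j in range(n)) + g_new[i] - g_old[i]
             for i in range(n)]
    return x_new, y_new, g_new
```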

    Geometric Convergence of Distributed Heavy-Ball Nash Equilibrium Algorithm over Time-Varying Digraphs with Unconstrained Actions

    We propose a new distributed algorithm that combines heavy-ball momentum and a consensus-based gradient method to find a Nash equilibrium (NE) in a class of non-cooperative convex games with unconstrained action sets. In this approach, each agent has access to its own smooth local cost function and can exchange information with its neighbors over a communication network. The proposed method is designed to work on a general sequence of time-varying directed graphs and allows for non-identical step-sizes and momentum parameters. Our work is the first to incorporate heavy-ball momentum in the context of non-cooperative games, and we provide a rigorous proof of its geometric convergence to the NE under the common assumptions of strong convexity and Lipschitz continuity of the agents' cost functions. Moreover, we establish explicit bounds on the step-sizes and momentum parameters based on the characteristics of the cost functions, mixing matrices, and graph connectivity structures. To showcase the efficacy of the proposed method, we perform numerical simulations on a Nash-Cournot game, demonstrating its accelerated convergence compared to existing methods.
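
    A sketch of such a momentum-augmented update, under assumed notation (x[i] and x_prev[i] for agent i's current and previous profile estimates, W_k for the current row-stochastic weights, alpha[i] and beta[i] for per-agent step-size and momentum), is shown below; it illustrates the structure rather than the paper's exact recursion.

```python
import numpy as np

# Consensus-based gradient step with heavy-ball momentum (illustrative sketch).
def heavy_ball_ne_step(x, x_prev, W_k, grad_i, alpha, beta):
    n = len(x)
    x_new = []
    for i in range(n):
        # Consensus over the current time-varying digraph.
        z = sum(W_k[i][j] * x[j] for j in range(n))
        zi = z.copy()
        # Gradient step on agent i's own action plus heavy-ball momentum,
        # with non-identical parameters alpha[i] and beta[i].
        zi[i] = (z[i] - alpha[i] * grad_i(i, z)
                 + beta[i] * (x[i][i] - x_prev[i][i]))
        x_new.append(zi)
    return x_new, x   # the returned pair serves as (x, x_prev) next iteration
```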

    Gradient-Free Nash Equilibrium Seeking in N-Cluster Games with Uncoordinated Constant Step-Sizes

    In this paper, we consider the problem of simultaneous global cost minimization and Nash equilibrium seeking, which commonly arises in N-cluster non-cooperative games. Specifically, the agents in the same cluster collaborate to minimize a global cost function, defined as the sum of their individual cost functions, while jointly playing a non-cooperative game with the other clusters as players. In our problem setting, we suppose that the explicit analytical expressions of the agents' local cost functions are unknown, but their values can be measured. We propose a gradient-free Nash equilibrium seeking algorithm based on a synthesis of Gaussian smoothing techniques and gradient tracking. Furthermore, instead of a uniform coordinated step-size, we allow the agents across different clusters to choose different constant step-sizes. When the largest step-size is sufficiently small, we prove linear convergence of the agents' actions to a neighborhood of the unique Nash equilibrium under a strongly monotone game mapping condition, with the error gap proportional to the largest step-size and the smoothing parameter. The performance of the proposed algorithm is validated by numerical simulations.
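
    As an illustration of the Gaussian smoothing step that replaces the true gradient inside such a scheme, the estimator below uses only measured cost values; f, mu, and rng are assumed names for the measurable local cost, the smoothing parameter mentioned above, and a random generator.

```python
import numpy as np

# Gaussian-smoothing estimate of agent i's partial gradient from measurements.
# x: list of the agents' actions (numpy arrays); mu: smoothing parameter.
def gaussian_smoothed_gradient(f, i, x, mu, rng):
    u = rng.standard_normal(x[i].shape)   # random Gaussian direction
    x_pert = [v.copy() for v in x]
    x_pert[i] = x[i] + mu * u             # perturb only agent i's own action
    # Two-point estimate built solely from measured cost values; its bias
    # shrinks with the smoothing parameter mu.
    return (f(i, x_pert) - f(i, x)) / mu * u
```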