A relaxed-inertial forward-backward-forward algorithm for Stochastic Generalized Nash equilibrium seeking
In this paper we propose a new operator splitting algorithm for distributed
Nash equilibrium seeking under stochastic uncertainty, featuring relaxation and
inertial effects. Our work is inspired by recent deterministic operator
splitting methods designed for structured monotone inclusion problems.
The algorithm is derived from a forward-backward-forward scheme for such
inclusions, featuring a Lipschitz continuous and
monotone game operator. To the best of our knowledge, this is the first
distributed (generalized) Nash equilibrium seeking algorithm featuring
acceleration techniques in stochastic Nash games without assuming cocoercivity.
Numerical examples illustrate the effect of inertia and relaxation on the
performance of our proposed algorithm.
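The forward-backward-forward (Tseng) template underlying the abstract can be illustrated on a toy monotone inclusion. The sketch below is a hypothetical deterministic example, not the paper's algorithm: it solves 0 ∈ A(x) + N_C(x) for a skew-symmetric affine operator A (monotone and Lipschitz, but not cocoercive) with box constraints, and the inertia and relaxation parameters `alpha` and `rho` are illustrative choices.

```python
import numpy as np

# Toy inclusion 0 in A(x) + N_C(x): A(x) = M x + q with M skew-symmetric
# (monotone and Lipschitz, but NOT cocoercive), N_C the normal cone of a box.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([1.0, -1.0])
A = lambda x: M @ x + q
proj = lambda x: np.clip(x, -2.0, 2.0)   # resolvent of N_C = projection onto C

L = np.linalg.norm(M, 2)                 # Lipschitz constant of A
lam = 0.5 / L                            # step size, lam < 1/L
alpha, rho = 0.1, 0.9                    # inertia and relaxation (illustrative)

x_prev = x = np.zeros(2)
for _ in range(2000):
    w = x + alpha * (x - x_prev)         # inertial extrapolation
    y = proj(w - lam * A(w))             # forward-backward step
    z = y + lam * (A(w) - A(y))          # Tseng's forward correction
    x_prev, x = x, (1 - rho) * w + rho * z   # relaxed update

print(np.round(x, 4))                    # approximates the zero (-1, -1)
```

The forward correction step is what removes the cocoercivity requirement: a plain forward-backward iteration on a skew-symmetric operator would not converge with a fixed step.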
Learning multi-robot coordination from demonstrations
This paper develops a Distributed Differentiable Dynamic Game (DDDG)
framework, which enables learning multi-robot coordination from demonstrations.
We represent multi-robot coordination as a dynamic game, where the behavior of
a robot is dictated by its own dynamics and objective that also depends on
others' behavior. The coordination thus can be adapted by tuning the objective
and dynamics of each robot. The proposed DDDG enables each robot to
automatically tune its individual dynamics and objectives in a distributed
manner by minimizing the mismatch between its trajectory and demonstrations.
This process requires a new distributed design of the forward-pass, where all
robots collaboratively seek Nash equilibrium behavior, and a backward-pass,
where gradients are propagated via the communication graph. We test the DDDG in
simulation with a team of quadrotors given different task configurations. The
results demonstrate the capability of DDDG for learning multi-robot
coordination from demonstrations. Comment: 6 figures.
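The forward/backward structure described above can be illustrated in miniature by differentiating through the equilibrium of a toy two-player quadratic game. This is a hypothetical sketch, not the DDDG implementation: with costs f_i = 0.5(x_i - th_i)^2 + c x_i x_{-i}, the Nash condition is K x = th for K = [[1, c], [c, 1]], so the equilibrium map x*(th) = K^{-1} th is differentiable and the mismatch gradient follows from the implicit function theorem.

```python
import numpy as np

# Toy version of learning game parameters from a demonstration:
# forward pass solves the game, backward pass differentiates through it.
c = 0.3
K = np.array([[1.0, c], [c, 1.0]])       # Nash condition: K x = th
x_demo = np.array([0.8, -0.2])           # demonstrated behavior

th = np.zeros(2)
for _ in range(200):
    x_star = np.linalg.solve(K, th)      # forward pass: equilibrium x*(th)
    residual = x_star - x_demo           # trajectory mismatch
    grad_th = np.linalg.solve(K.T, residual)  # (dx*/dth)^T r = K^{-T} r
    th -= 0.5 * grad_th                  # gradient step on 0.5*||r||^2

print(np.round(np.linalg.solve(K, th), 4))  # learned th reproduces x_demo
```

In DDDG the same two passes are carried out distributedly: the forward pass is a Nash-seeking iteration among the robots, and the backward pass propagates these implicit gradients over the communication graph rather than through a centralized solve.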
Semi-decentralized generalized Nash equilibrium seeking in monotone aggregative games
We address the generalized Nash equilibrium seeking problem for a population
of agents playing aggregative games with affine coupling constraints. We focus
on semi-decentralized communication architectures, where there is a central
coordinator able to gather and broadcast signals of aggregative nature to the
agents. By exploiting the framework of monotone operator theory and operator
splitting, we first critically review the most relevant available algorithms
and then design two novel schemes: (i) a single-layer, fixed-step algorithm
with convergence guarantees for general (non-cocoercive, non-strictly) monotone
aggregative games and (ii) a single-layer proximal-type algorithm for a class
of monotone aggregative games with linearly coupled cost functions. We also
design novel accelerated variants of the algorithms via (alternating) inertial
and over-relaxation steps. Finally, we show via numerical simulations that the
proposed algorithms outperform those in the literature in terms of convergence
speed.
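The accelerated ingredients mentioned above, inertia and over-relaxation, can be sketched on a toy strongly monotone aggregative game. The cost functions, step sizes, and the alternating-inertia schedule below are illustrative assumptions, not the paper's algorithms: each agent minimizes a quadratic cost coupled through the average action, which a central coordinator gathers and broadcasts.

```python
import numpy as np

# Toy aggregative game (assumed form): agent i minimizes
#   f_i(x_i, sigma) = 0.5*a_i*x_i**2 + x_i*sigma - b_i*x_i,
# where sigma = mean(x) is broadcast by a central coordinator.
rng = np.random.default_rng(0)
N = 5
a = rng.uniform(1.0, 2.0, N)             # strong-convexity moduli
b = rng.uniform(-0.5, 0.5, N)
proj = lambda x: np.clip(x, -1.0, 1.0)   # local constraint sets

gamma, rho, alpha = 0.3, 1.3, 0.2        # step, over-relaxation, inertia

x_prev = x = np.zeros(N)
for k in range(500):
    w = x + alpha * (x - x_prev) if k % 2 == 0 else x  # alternating inertia
    sigma = w.mean()                     # coordinator gathers and broadcasts
    grad = a * w + sigma + w / N - b     # d f_i / d x_i (sigma depends on x_i)
    T = proj(w - gamma * grad)           # projected pseudo-gradient map
    x_prev, x = x, w + rho * (T - w)     # over-relaxed update (rho > 1)

print(np.round(x, 4))                    # approximate Nash equilibrium
```

The semi-decentralized structure shows up in the information flow: agents only ever see their own data and the broadcast aggregate `sigma`, never each other's individual actions.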
An asynchronous distributed and scalable generalized Nash equilibrium seeking algorithm for strongly monotone games
In this paper, we present three distributed algorithms to solve a class of Generalized Nash Equilibrium (GNE) seeking problems in strongly monotone games. The first one (SD-GENO) is based on synchronous updates of the agents, while the second and the third (AD-GEED and AD-GENO) represent asynchronous solutions that are robust to communication delays. AD-GENO can be seen as a refinement of AD-GEED, since it only requires auxiliary variables for the nodes, enhancing the scalability of the algorithm. Our main contribution is to prove convergence to a variational GNE (v-GNE) of the game via an operator-theoretic approach. Finally, we apply the algorithms to network Cournot games and show how different activation sequences and delays affect convergence. We also compare the proposed algorithms to a state-of-the-art algorithm solving a similar problem, and observe that AD-GENO outperforms it.
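The effect of random activation sequences and bounded delays can be illustrated on a toy strongly monotone game. This is a generic asynchronous-gradient sketch under assumed parameters, not AD-GENO itself: at each tick one randomly activated agent updates its own action using a possibly stale snapshot of the others'.

```python
import numpy as np

# Toy strongly monotone game: pseudo-gradient F(x) = G @ x - b with G
# positive definite and diagonally dominant, unique equilibrium G^{-1} b.
rng = np.random.default_rng(1)
N, gamma, delay = 4, 0.2, 3
G = np.diag([2.0, 2.5, 3.0, 2.2]) + 0.1  # couplings of 0.1 everywhere
b = rng.uniform(-1.0, 1.0, N)

x = np.zeros(N)
history = [x.copy()]                      # past iterates, to model delays
for _ in range(3000):
    i = rng.integers(N)                   # random agent activation
    lag = rng.integers(delay + 1)         # bounded communication delay
    stale = history[max(0, len(history) - 1 - lag)]
    local = stale.copy()
    local[i] = x[i]                       # own action is always current
    x[i] -= gamma * (G[i] @ local - b[i]) # partial pseudo-gradient step
    history.append(x.copy())

print(np.round(x, 4))                     # approaches G^{-1} b despite delays
```

Strong monotonicity (here, diagonal dominance of G) is what tolerates both the random activation order and the stale reads; with a merely monotone game this naive scheme need not converge.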