On the linear convergence of distributed Nash equilibrium seeking for multi-cluster games under partial-decision information
This paper considers the distributed strategy design for Nash equilibrium
(NE) seeking in multi-cluster games under a partial-decision information
scenario. In the considered game, there are multiple clusters, each of which consists of a group of agents. A cluster is viewed as a virtual noncooperative player that aims to minimize its local payoff function, while the agents in a cluster are the actual players that cooperate within the cluster, communicating over a connected graph, to optimize the cluster's payoff function.
In our setting, agents have only partial-decision information; that is, they know only local information and do not have full access to opponents' decisions. To solve the NE seeking problem of the formulated game, a discrete-time distributed algorithm, called the distributed gradient tracking (DGT) algorithm, is devised based on inter- and intra-cluster communication. In the designed algorithm, each agent maintains strategy variables consisting of its own strategy and estimates of the other clusters' strategies. With the help of a weighted Frobenius norm and a weighted
Euclidean norm, theoretical analysis is presented to rigorously show the linear
convergence of the algorithm. Finally, a numerical example is given to
illustrate the proposed algorithm.
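As a rough illustration of the consensus-plus-gradient idea described above (an assumption-laden sketch, not the paper's DGT algorithm), the following Python snippet runs partial-decision-information NE seeking on a simple N-player quadratic game, treating each player as a single-agent cluster; the game data, the ring communication graph, and the step size are choices made purely for illustration.

```python
import numpy as np

# Minimal sketch (not the paper's DGT method): NE seeking under
# partial-decision information for an N-player quadratic game where player i
# minimizes f_i(x) = 0.5*(x_i - a_i)^2 + b_i * x_i * mean(x_{-i})
# and keeps an estimate z_i of the FULL decision profile.
# The data (a_i, b_i), ring graph, and step size are assumptions.

np.random.seed(0)
N = 5
a = np.random.randn(N)
b = 0.3 * np.ones(N)

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 1.0 / 3.0
    W[i, (i - 1) % N] = 1.0 / 3.0
    W[i, (i + 1) % N] = 1.0 / 3.0

def grad_own(i, z):
    """Partial gradient of f_i w.r.t. x_i, evaluated on player i's estimate z."""
    others_mean = (z.sum() - z[i]) / (N - 1)
    return (z[i] - a[i]) + b[i] * others_mean

alpha = 0.1                 # step size, assumed small enough for convergence
Z = np.zeros((N, N))        # row i = player i's estimate of the whole profile

for k in range(2000):
    Z = W @ Z               # consensus step: average estimates over the graph
    for i in range(N):
        Z[i, i] -= alpha * grad_own(i, Z[i])   # gradient step on own decision only

x_ne = np.diag(Z).copy()    # each player's own decision after the run
print("approximate Nash equilibrium:", np.round(x_ne, 4))
```

Each player mixes its estimate of the full decision profile with its neighbors' estimates and then takes a gradient step only on its own coordinate, which mirrors how estimates of other clusters' strategies substitute for opponents' decisions that cannot be observed directly.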
Distributed Algorithms for Computing a Fixed Point of Multi-Agent Nonexpansive Operators
This paper investigates the problem of finding a fixed point for a global
nonexpansive operator under time-varying communication graphs in real Hilbert
spaces, where the global operator is separable and composed of an aggregate sum
of local nonexpansive operators. Each local operator is privately accessible only to its corresponding agent, and all agents constitute a network. To seek a fixed
point of the global operator, it is indispensable for agents to exchange local
information and update their solutions cooperatively. To solve the problem, two algorithms are developed, called the distributed Krasnosel'skiĭ-Mann (D-KM) and distributed block-coordinate Krasnosel'skiĭ-Mann (D-BKM) iterations, where the D-BKM iteration is a block-coordinate version of the D-KM iteration: at each time, each agent randomly chooses and computes only one block coordinate of its local operator. It is shown that the two proposed algorithms both converge weakly to a fixed point of
the global operator. Meanwhile, the designed algorithms are applied to recover
the classical distributed gradient descent (DGD) algorithm, devise a new
block-coordinate DGD algorithm, handle a distributed shortest distance problem
in the Hilbert space for the first time, and solve linear algebraic equations in a novel distributed manner. Finally, the theoretical results are corroborated by a few numerical examples.
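To make the fixed-point setting concrete, here is a minimal Krasnosel'skiĭ-Mann-style sketch in the spirit of (but not identical to) the D-KM iteration described above: each agent's local nonexpansive operator is taken to be the projection onto a private ball, the communication graph is a fixed ring rather than time-varying, and the relaxation parameter is an assumed constant.

```python
import numpy as np

# Illustrative sketch in the spirit of a distributed Krasnosel'skii-Mann
# iteration (not the paper's exact D-KM scheme). Each agent i holds a local
# nonexpansive operator T_i -- here the projection onto a private ball -- and
# the network seeks a fixed point of the average T = (1/m) * sum_i T_i.
# The ball data, fixed ring graph, and relaxation parameter are assumptions.

np.random.seed(1)
m, d = 4, 2
centers = np.random.randn(m, d)
radius = 2.0                      # chosen large enough that the balls intersect

def T_local(i, x):
    """Projection onto agent i's ball -- a (firmly) nonexpansive operator."""
    v = x - centers[i]
    n = np.linalg.norm(v)
    return x if n <= radius else centers[i] + radius * v / n

# Doubly stochastic mixing matrix for a ring of m agents.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

alpha = 0.5                       # KM relaxation parameter in (0, 1)
X = np.random.randn(m, d)         # row i = agent i's current iterate

for k in range(500):
    V = W @ X                     # consensus step over the communication graph
    X = np.array([(1 - alpha) * V[i] + alpha * T_local(i, V[i])
                  for i in range(m)])

print("agents' iterates (expected to agree on a point lying in all balls):")
print(np.round(X, 3))
```

When the balls have a common point, the fixed points of the averaged projection operator are exactly that intersection, so in this simple setup the agents' iterates should agree on a point contained in every ball; this choice of local operators corresponds to the distributed shortest distance application mentioned in the abstract.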
Decentralized Non-Convex Learning with Linearly Coupled Constraints
Motivated by the need for decentralized learning, this paper aims at
designing a distributed algorithm for solving nonconvex problems with general
linear constraints over a multi-agent network. In the considered problem, each
agent owns some local information and a local variable for jointly minimizing a
cost function, but the local variables are coupled by linear constraints. Most existing methods for such problems apply only to convex problems or to problems with specific linear constraints; a distributed algorithm that handles general linear constraints in the nonconvex setting is still lacking. To tackle this problem, we propose a new algorithm, called the "proximal dual consensus" (PDC) algorithm, which combines a proximal
technique and a dual consensus method. We establish theoretical convergence conditions and show that the proposed PDC algorithm converges to an ε-Karush-Kuhn-Tucker solution, with an explicit bound on the number of iterations required. To reduce computation, the PDC algorithm can instead perform a cheap gradient descent step per iteration while preserving the same order of iteration complexity. Numerical results are presented
to demonstrate the good performance of the proposed algorithms for solving a
regression problem and a classification problem over a network where agents
have only partial observations of the data features.
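As a deliberately simplified illustration of the linearly coupled structure targeted above, the sketch below runs plain dual ascent on a small convex quadratic instance with a centralized multiplier update; it is not the PDC algorithm, which additionally handles nonconvex local costs via a proximal term and replaces the centralized dual step with a consensus scheme among the agents. All problem data and the step-size rule are assumptions.

```python
import numpy as np

# Simplified, centralized illustration (NOT the PDC algorithm): dual ascent on
# a convex quadratic instance of the linearly coupled problem
#   minimize  sum_i 0.5*||x_i - c_i||^2   subject to  sum_i A_i x_i = b,
# shown only to make the coupling structure and KKT residuals concrete.
# The data (c_i, A_i, b) and the step-size rule are assumptions.

np.random.seed(2)
m, d, p = 3, 4, 2                 # agents, local dimension, coupling constraints
c = np.random.randn(m, d)
A = np.random.randn(m, p, d)
b = np.random.randn(p)

# Dual gradient is Lipschitz with constant ||sum_i A_i A_i^T||; use step 1/L.
L = np.linalg.norm(sum(A[i] @ A[i].T for i in range(m)), 2)
rho = 1.0 / L
lam = np.zeros(p)                 # multiplier for the coupling constraint

for k in range(3000):
    # Primal step: each x_i minimizes its local Lagrangian term in closed form,
    #   argmin_x 0.5*||x - c_i||^2 + lam^T A_i x  =  c_i - A_i^T lam.
    X = np.array([c[i] - A[i].T @ lam for i in range(m)])
    # Dual ascent on the coupling-constraint residual.
    residual = sum(A[i] @ X[i] for i in range(m)) - b
    lam = lam + rho * residual

# KKT residuals: stationarity of each local Lagrangian and primal feasibility.
stationarity = max(np.linalg.norm(X[i] - c[i] + A[i].T @ lam) for i in range(m))
feasibility = np.linalg.norm(sum(A[i] @ X[i] for i in range(m)) - b)
print("stationarity residual:", stationarity)
print("feasibility residual:", feasibility)
```

The printed stationarity and feasibility residuals are the kind of quantities that an ε-Karush-Kuhn-Tucker criterion requires to be bounded by ε.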