Distributed convergence to Nash equilibria in two-network zero-sum games
This paper considers a class of strategic scenarios in which two networks of
agents have opposing objectives with regard to the optimization of a common
objective function. In the resulting zero-sum game, individual agents
collaborate with neighbors in their respective network and have only partial
knowledge of the state of the agents in the other network. For the case when
the interaction topology of each network is undirected, we synthesize a
distributed saddle-point strategy and establish its convergence to the Nash
equilibrium for the class of strictly concave-convex and locally Lipschitz
objective functions. We also show that these dynamics do not converge in
general if the topologies are directed. This justifies the introduction, in the
directed case, of a generalization of this distributed dynamics which we show
converges to the Nash equilibrium for the class of strictly concave-convex
differentiable functions with locally Lipschitz gradients. The technical
approach combines tools from algebraic graph theory, nonsmooth analysis,
set-valued dynamical systems, and game theory.
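The undirected saddle-point dynamics described above can be sketched numerically under strong simplifying assumptions: a particular quadratic strictly convex-concave objective, ring topologies, and the "partial knowledge" of the other network crudely modeled by its average state. All concrete choices below (the objective, step size, and topology) are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Assumed toy objective, strictly convex in x and strictly concave in y:
# f(x, y) = (x - 1)^2 / 2 + x*y - (y + 1)^2 / 2, with saddle point (1, 0).
def grad_x(x, y):                # gradient in the minimizing variable
    return (x - 1) + y

def grad_y(x, y):                # gradient in the maximizing variable
    return x - (y + 1)

n = 4                            # agents per network
rng = np.random.default_rng(0)
x = rng.normal(size=n)           # minimizing network's local estimates
y = rng.normal(size=n)           # maximizing network's local estimates

# Undirected ring topology; L is its graph Laplacian (used by both networks)
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = np.diag(A.sum(axis=1)) - A

eta = 0.05                       # step size
for _ in range(5000):
    # Laplacian term: each agent averages toward its neighbors (consensus);
    # gradient term: the minimizing network descends, the maximizing
    # network ascends, each using the opponent network's mean state.
    x = x - eta * (L @ x + grad_x(x, y.mean()))
    y = y - eta * (L @ y - grad_y(x.mean(), y))

print(x, y)                      # x_i -> 1 and y_j -> 0 (the saddle point)
```

The Laplacian term drives each network to internal agreement while the descent/ascent terms pull the agreed-upon values toward the saddle point; with directed (non-symmetric) topologies this balance can fail, which is the motivation the abstract gives for the generalized dynamics.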
Non-Convex Distributed Optimization
We study distributed non-convex optimization on a time-varying multi-agent
network. Each node has access to its own smooth local cost function, and the
collective goal is to minimize the sum of these functions. We generalize the
results obtained previously to the case of non-convex functions. Under some
additional technical assumptions on the gradients, we prove the convergence of
the distributed push-sum algorithm to a critical point of the objective
function. By adding perturbations to the update process, we show the almost
sure convergence of the perturbed dynamics to a local minimum of the global
objective function. Our analysis shows that this perturbed procedure converges
at a rate of …
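The (sub)gradient push-sum iteration underlying this abstract can be sketched as follows. The local costs, the time-varying directed topology, and the step-size schedule are all illustrative assumptions chosen for a runnable toy; each node pushes scaled shares of a value and a weight along its out-edges of a column-stochastic mixing matrix and takes a gradient step at the de-biased ratio of the two.

```python
import numpy as np

# Assumed smooth non-convex local costs f_i(z) = (z - a_i)^2 + 0.1*sin(z);
# the collective goal is to minimize their sum.
a = np.array([-1.0, 0.0, 1.0, 2.0])
def grad(i, z):
    return 2.0 * (z - a[i]) + 0.1 * np.cos(z)

n = 4
c = np.array([0.5, 0.6, 0.7, 0.8])   # self-weights; the rest is pushed out
x = np.zeros(n)                      # local values
y = np.ones(n)                       # push-sum weights
for t in range(1, 10001):
    # Directed ring whose orientation flips each step (a toy time-varying
    # topology); C is column-stochastic but not row-stochastic, so the
    # weight ratio below is what removes the resulting bias.
    S = np.roll(np.eye(n), (-1) ** t, axis=0)
    C = np.diag(c) + S @ np.diag(1.0 - c)
    w = C @ x                        # push values along out-edges
    y = C @ y                        # push weights the same way
    z = w / y                        # de-biased estimates of the average
    step = 1.0 / t                   # diminishing step size
    x = w - step * np.array([grad(i, z[i]) for i in range(n)])

print(z)   # all entries agree, near a critical point of sum_i f_i
```

For these quadratic-plus-sine costs the sum has a single critical point near z ≈ 0.455, and all nodes' de-biased estimates z_i settle there; the ratio w/y is the push-sum correction that makes this work on directed, merely column-stochastic topologies.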