Lyapunov Approach to Consensus Problems
This paper investigates the weighted-averaging dynamic for unconstrained and
constrained consensus problems. Through the use of a suitably defined adjoint
dynamic, quadratic Lyapunov comparison functions are constructed to analyze the
behavior of the weighted-averaging dynamic. As a result, new convergence rate
results are obtained that capture the graph structure in a novel way. In
particular, the exponential convergence rate is established for unconstrained
consensus with the exponent of the order of . Also, the
exponential convergence rate is established for constrained consensus, which
extends the existing results limited to the use of doubly stochastic weight
matrices.
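As a rough illustration of the weighted-averaging dynamic analyzed in this paper, the minimal Python sketch below iterates x_{k+1} = W x_k with a hypothetical row-stochastic weight matrix W on a four-agent path graph; the matrix, graph, and iteration count are illustrative assumptions, not the construction studied in the paper.

    import numpy as np

    # Hypothetical row-stochastic weight matrix for a 4-agent path graph
    # (uniform weights over self and neighbors); an illustrative choice only.
    W = np.array([
        [1/2, 1/2, 0.0, 0.0],
        [1/3, 1/3, 1/3, 0.0],
        [0.0, 1/3, 1/3, 1/3],
        [0.0, 0.0, 1/2, 1/2],
    ])

    x = np.array([1.0, 3.0, -2.0, 5.0])   # initial agent values

    for k in range(200):
        x = W @ x                          # each agent averages over its neighbors

    print(x)                               # entries approach a common consensus value

Because this particular W is row stochastic but not doubly stochastic, the common limit is a weighted, rather than uniform, average of the initial values.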
Distributed Stochastic Optimization under Imperfect Information
We consider a stochastic convex optimization problem that requires minimizing
a sum of misspecified agent-specific expectation-valued convex functions over
the intersection of a collection of agent-specific convex sets. This
misspecification is manifested in a parametric sense and may be resolved
through solving a distinct stochastic convex learning problem. Our interest
lies in the development of distributed algorithms in which every agent makes
decisions based on the knowledge of its objective and feasibility set while
learning the decisions of other agents by communicating with its local
neighbors over a time-varying connectivity graph. While a significant body of
research exists in the context of such problems, we believe that the
misspecified generalization of this problem is both important and has seen
little, if any, study. Accordingly, our focus lies on the simultaneous
resolution of both problems through a joint set of schemes that combine three
distinct steps: (i) An alignment step in which every agent updates its current
belief by averaging over the beliefs of its neighbors; (ii) A projected
(stochastic) gradient step in which every agent further updates this averaged
estimate; and (iii) A learning step in which agents update their belief of the
misspecified parameter by utilizing a stochastic gradient step. Under an
assumption of mere convexity on agent objectives and strong convexity of the
learning problems, we show that the sequences generated by this collection of
update rules converge almost surely to the solution of the correctly specified
stochastic convex optimization problem and the stochastic learning problem,
respectively.
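A minimal sketch of one iteration of the three-step scheme, from the perspective of a single agent, is given below in Python; the box constraint, the quadratic objective, and the learning loss are hypothetical placeholders meant only to make the alignment, projected stochastic-gradient, and learning steps concrete, not the problem data assumed in the paper.

    import numpy as np

    def project_box(z, lo=-1.0, hi=1.0):
        # Euclidean projection onto a box, standing in for the agent-specific
        # convex set in the abstract.
        return np.clip(z, lo, hi)

    def one_iteration(x_i, neighbor_xs, theta_i, alpha, beta, rng):
        # (i) Alignment: average the agent's belief with its neighbors' beliefs.
        v_i = np.mean(np.vstack([x_i] + neighbor_xs), axis=0)

        # (ii) Projected stochastic gradient step on the averaged estimate,
        # using a noisy gradient of the illustrative objective
        # f_i(x; theta) = 0.5 * ||x - theta||^2.
        g_x = (v_i - theta_i) + 0.1 * rng.standard_normal(v_i.shape)
        x_next = project_box(v_i - alpha * g_x)

        # (iii) Learning step: stochastic gradient update of the misspecified
        # parameter for an illustrative strongly convex learning loss
        # 0.5 * ||theta - theta_star||^2 observed through noise.
        theta_star = np.ones_like(theta_i)     # hypothetical learning target
        g_theta = (theta_i - theta_star) + 0.1 * rng.standard_normal(theta_i.shape)
        theta_next = theta_i - beta * g_theta

        return x_next, theta_next

In a full run, the neighbor sets would come from the time-varying connectivity graph and the stepsizes alpha and beta would be diminishing, consistent with the almost-sure convergence statement above.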
A distributed adaptive steplength stochastic approximation method for monotone stochastic Nash Games
We consider a distributed stochastic approximation (SA) scheme for computing
an equilibrium of a stochastic Nash game. Standard SA schemes employ
diminishing steplength sequences that are square summable but not summable.
Such requirements provide little or no guidance on how to leverage the
Lipschitzian and monotonicity properties of the problem, and naive choices
generally do not perform uniformly well on a breadth of problems. While a
centralized adaptive stepsize SA scheme is proposed in [1] for the optimization
framework, such a scheme provides no freedom for the agents in choosing their
own stepsizes. Thus, a direct application of centralized stepsize schemes is
impractical in solving Nash games. Furthermore, extensions to game-theoretic
regimes where players may independently choose steplength sequences are limited
to recent work by Koshal et al. [2]. Motivated by these shortcomings, we
present a distributed algorithm in which each player updates his steplength
based on the previous steplength and some problem parameters. The steplength
rules are derived from minimizing an upper bound of the errors associated with
players' decisions. It is shown that these rules generate sequences that
converge almost surely to an equilibrium of the stochastic Nash game.
Importantly, variants of this rule are suggested where players independently
select steplength sequences while abiding by an overall coordination
requirement. Preliminary numerical results are seen to be promising.
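To convey the flavor of per-player steplength adaptation, the Python sketch below runs a distributed SA iteration in which each player scales its own steplength by a simple recursion; the monotone mapping, noise model, and the particular recursion gamma <- gamma * (1 - c * gamma) are illustrative assumptions, not the rule derived in the paper from minimizing an error bound.

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[2.0, 0.5],
                  [0.5, 2.0]])             # strongly monotone linear map (illustrative)

    def noisy_map(x):
        # Noisy evaluation of the players' pseudo-gradient mapping F(x) = A x.
        return A @ x + 0.05 * rng.standard_normal(2)

    x = np.array([1.0, -1.0])              # one decision coordinate per player
    gamma = np.array([0.5, 0.4])           # per-player steplengths (players may differ)
    c = 0.1                                # hypothetical recursion parameter

    for k in range(2000):
        x = x - gamma * noisy_map(x)       # each player uses its own steplength
        gamma = gamma * (1.0 - c * gamma)  # self-tuning steplength recursion

    print(x)                               # drifts toward the equilibrium x* = 0

The recursion produces steplengths that decay roughly like 1/k, so they are square summable but not summable, matching the standard SA requirement quoted above.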
Differentially-private Distributed Algorithms for Aggregative Games with Guaranteed Convergence
The distributed computation of a Nash equilibrium in aggregative games has
gained increasing traction in recent years. Of particular interest is the
mediator-free scenario where individual players only access or observe the
decisions of their neighbors due to practical constraints. Given the
competitive rivalry among participating players, protecting the privacy of
individual players becomes imperative when sensitive information is involved.
We propose a fully distributed equilibrium-computation approach for aggregative
games that can achieve both rigorous differential privacy and guaranteed
computation accuracy of the Nash equilibrium. This is in sharp contrast to
existing differential-privacy solutions for aggregative games that have to
either sacrifice the accuracy of equilibrium computation to gain rigorous
privacy guarantees, or allow the cumulative privacy budget to grow unbounded,
hence losing privacy guarantees, as iterations proceed. Our approach uses
independent noises across players, thus making it effective even when
adversaries have access to all shared messages as well as the underlying
algorithm structure. The encryption-free nature of the proposed approach also
ensures efficiency in computation and communication. The approach is also
applicable to stochastic aggregative games, ensuring both rigorous
differential privacy and guaranteed computation accuracy of the Nash
equilibrium when individual players only have stochastic estimates of their
pseudo-gradient mappings. Numerical comparisons with existing counterparts
confirm the effectiveness of the proposed approach.
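A highly simplified sketch of a noise-masked, mediator-free update of the kind described above is shown below; the ring communication graph, the Laplace noise with decaying scale, the diminishing stepsize, and the quadratic costs are all illustrative assumptions, not the specific mechanism whose differential-privacy and convergence guarantees are established in the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5                                   # players arranged on a ring graph
    x = rng.standard_normal(n)              # each player's decision
    agg = x.copy()                          # local estimates of the average decision

    for k in range(1, 500):
        gamma = 1.0 / k                     # diminishing stepsize
        scale = 1.0 / k ** 1.1              # decaying Laplace noise scale

        # Each player shares a noise-masked estimate with its two ring neighbors;
        # noises are drawn independently across players (illustrative DP mask).
        shared = agg + rng.laplace(scale=scale, size=n)
        mixed = (shared + np.roll(shared, 1) + np.roll(shared, -1)) / 3.0

        # Pseudo-gradient step for the illustrative cost
        # J_i(x_i, agg) = 0.5 * x_i**2 + x_i * agg.
        x_new = x - gamma * (x + mixed)

        # Dynamic-average-consensus correction so agg keeps tracking mean(x).
        agg = mixed + (x_new - x)
        x = x_new

    print(x)                                # decisions settle near the illustrative equilibrium

Only the noise-masked estimates are exchanged, so an adversary observing all shared messages sees independently perturbed values, which reflects the abstract's point that independent per-player noises can protect privacy without encryption.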