Non-Convex Distributed Optimization
We study distributed non-convex optimization on a time-varying multi-agent
network. Each node has access to its own smooth local cost function, and the
collective goal is to minimize the sum of these functions. We generalize the
results obtained previously to the case of non-convex functions. Under some
additional technical assumptions on the gradients, we prove convergence of the
distributed push-sum algorithm to a critical point of the objective function.
By adding perturbations to the update process, we show almost sure convergence
of the perturbed dynamics to a local minimum of the global objective function.
Our analysis shows that this perturbed procedure converges at a rate of
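To make the procedure concrete, below is a minimal sketch of a perturbed push-sum gradient method. It assumes a fixed directed network with a column-stochastic mixing matrix A, hypothetical step-size and noise schedules, and simple example local costs; it illustrates the general technique rather than the exact time-varying setting and algorithm analyzed above.

import numpy as np

# Sketch of a perturbed push-sum gradient method on a fixed directed graph.
def perturbed_push_sum(grads, A, x0, steps=3000, seed=0):
    # grads: list of local gradient functions (scalar decision variable here)
    # A[i, j]: weight with which node j pushes its value to node i (columns sum to 1)
    rng = np.random.default_rng(seed)
    n = len(grads)
    x = np.array(x0, dtype=float)      # push-sum numerators
    y = np.ones(n)                     # push-sum weights
    z = x / y                          # de-biased local estimates
    for t in range(1, steps + 1):
        alpha = 1.0 / (t + 20)         # diminishing step size (illustrative)
        sigma = 1.0 / (t + 20)         # diminishing perturbation (illustrative)
        w = A @ x                      # mix numerators along incoming edges
        y = A @ y                      # mix weights the same way
        z = w / y                      # corrected local estimates
        grad = np.array([g(zi) for g, zi in zip(grads, z)])
        x = w - alpha * grad + sigma * rng.standard_normal(n)
    return z

if __name__ == "__main__":
    # Three agents; the first local cost (x^2 - 1)^2 makes the problem non-convex.
    grads = [lambda v: 4 * v**3 - 4 * v,   # gradient of (x^2 - 1)^2
             lambda v: 2 * (v - 1.0),      # gradient of (x - 1)^2
             lambda v: 2 * (v - 1.5)]      # gradient of (x - 1.5)^2
    A = np.array([[0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0],
                  [0.0, 0.5, 0.5]])        # column-stochastic mixing matrix
    # Estimates should settle near x ~ 1.08, a critical point of the summed cost.
    print(perturbed_push_sum(grads, A, x0=[1.5, 0.0, 0.5]))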
On Endogenous Random Consensus and Averaging Dynamics
Motivated by various random variations of the Hegselmann-Krause model for
opinion dynamics and by gossip algorithms in endogenously changing
environments, we propose a general framework for the study of endogenously
varying random averaging dynamics, i.e., averaging dynamics whose evolution is
subject to history-dependent sources of randomness. We show that, under
general assumptions on the averaging dynamics, such dynamics converge almost
surely. We also determine the limiting behavior of such dynamics and show that
they admit infinitely many time-varying Lyapunov functions.
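As an illustration of an endogenously varying random averaging dynamics, the sketch below runs a randomized Hegselmann-Krause-style update in which the averaging matrix depends on the current opinion profile (and hence on the history) as well as on random agent activations. The confidence radius eps, activation probability p, and initial opinions are illustrative choices, not parameters from the paper.

import numpy as np

# Sketch of a history-dependent random averaging dynamics: each activated agent
# averages over the agents currently within its confidence radius.
def random_hk_step(x, eps=0.5, p=0.8, rng=None):
    rng = rng or np.random.default_rng()
    x_new = x.copy()
    for i in range(len(x)):
        if rng.random() < p:                     # agent i is activated at random
            neighbors = np.abs(x - x[i]) <= eps  # state-dependent confidence set
            x_new[i] = x[neighbors].mean()       # average over that set
    return x_new

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 5.0, size=10)           # initial opinions (illustrative)
    for _ in range(200):
        x = random_hk_step(x, rng=rng)
    print(np.round(x, 3))                        # opinions settle into clusters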
On Convergence Rate of Scalar Hegselmann-Krause Dynamics
In this work, we derive a new upper bound on the termination time of the
Hegselmann-Krause model for opinion dynamics. Using a novel method, we show
that these dynamics terminate no later than , which improves the best known
upper bound of by a factor of .
Comment: 5 pages, 2 figures, submitted to The American Control Conference, Sep. 201
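For intuition on the quantity being bounded, the sketch below simulates scalar Hegselmann-Krause dynamics with the standard unit confidence radius and counts the number of updates until the opinion profile stops changing (up to numerical tolerance); the number of agents and the initial opinions are illustrative.

import numpy as np

# Sketch of deterministic scalar Hegselmann-Krause dynamics; returns the number
# of synchronous updates until a fixed point is reached.
def hk_termination_time(x, max_steps=100000, tol=1e-9):
    x = np.array(x, dtype=float)
    for t in range(1, max_steps + 1):
        x_new = np.array([x[np.abs(x - xi) <= 1.0].mean() for xi in x])
        if np.allclose(x_new, x, atol=tol):      # fixed point (up to tolerance)
            return t
        x = x_new
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 50                                       # number of agents (illustrative)
    opinions = np.sort(rng.uniform(0.0, n / 4.0, size=n))
    print("terminated after", hk_termination_time(opinions), "steps")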