Consider the consensus problem of minimizing $f(x)=\sum_{i=1}^n f_i(x)$, where
each $f_i$ is only known to one individual agent $i$ out of a connected network
of $n$ agents. All the agents collaboratively solve this problem and
obtain the solution, with data exchanges restricted to neighboring
agents. Such algorithms avoid the need for a fusion center, offer better network
load balance, and improve data privacy. We study the decentralized gradient
descent method in which each agent $i$ updates its variable $x_{(i)}$, which is
a local approximation of the unknown variable $x$, by combining the average of
its neighbors' variables with the negative gradient step $-\alpha\nabla f_i(x_{(i)})$.
The iteration is
$$x_{(i)}(k+1) \leftarrow \sum_{\text{neighbor } j \text{ of } i} w_{ij}\, x_{(j)}(k) - \alpha \nabla f_i\big(x_{(i)}(k)\big), \quad \text{for each agent } i,$$
where the averaging coefficients form a symmetric doubly stochastic matrix
$W=[w_{ij}]\in\mathbb{R}^{n\times n}$. We analyze the convergence of this
iteration and derive its convergence rate, assuming that each $f_i$ is proper,
closed, convex, and lower bounded, $\nabla f_i$ is Lipschitz continuous with
constant $L_{f_i}$, and the stepsize $\alpha$ is fixed. Provided that
$\alpha < O(1/L_h)$, where $L_h = \max_i \{L_{f_i}\}$, the objective error at the averaged
solution, $f\big(\frac{1}{n}\sum_i x_{(i)}(k)\big) - f^*$, reduces at a speed of $O(1/k)$
until it reaches $O(\alpha)$. If the $f_i$ are further (restricted) strongly
convex, then both $\frac{1}{n}\sum_i x_{(i)}(k)$ and each $x_{(i)}(k)$ converge
to the global minimizer $x^*$ at a linear rate until reaching an
$O(\alpha)$-neighborhood of $x^*$. We also develop an iteration for
decentralized basis pursuit and establish its linear convergence to an
$O(\alpha)$-neighborhood of the true unknown sparse signal.
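The fixed-stepsize iteration above can be sketched numerically. The following is a minimal, illustrative implementation on a toy setup of our own choosing — five agents on a path graph, private scalar quadratics $f_i(x)=\frac12(x-b_i)^2$, and Metropolis averaging weights; none of these specifics come from the paper. With a fixed stepsize, the local copies $x_{(i)}$ settle in a small neighborhood of the global minimizer rather than reaching it exactly, matching the $O(\alpha)$ behavior described above.

```python
# Minimal sketch of the decentralized gradient descent (DGD) iteration:
#   x_{(i)}(k+1) = sum_j w_ij x_{(j)}(k) - alpha * grad f_i(x_{(i)}(k)).
# The network, weights, and data below are illustrative assumptions.
import numpy as np

n = 5
b = np.array([1.0, 3.0, -2.0, 4.0, 0.0])   # private data: f_i(x) = 0.5*(x - b_i)^2
x_star = b.mean()                          # minimizer of sum_i f_i

# Symmetric doubly stochastic W from Metropolis weights on a path graph:
# w_ij = 1/(max(deg_i, deg_j) + 1) for neighbors, diagonal fills each row to 1.
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0 / 3.0
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

alpha = 0.05                               # fixed stepsize; here L_h = 1, so alpha < 1/L_h
x = np.zeros(n)                            # x[i] is agent i's local copy x_{(i)}
for k in range(2000):
    grad = x - b                           # grad[i] = f_i'(x[i]), uses only local data
    x = W @ x - alpha * grad               # one DGD step: neighbor averaging + local gradient

# Residual distance to x_star is small but nonzero; it shrinks with alpha.
print(np.abs(x - x_star).max())
```

Because $W$ is doubly stochastic and the $f_i$ here are quadratics with identical curvature, the network average $\frac{1}{n}\sum_i x_{(i)}(k)$ converges to $x^*$ essentially exactly in this toy, while each individual $x_{(i)}$ stops at an $O(\alpha)$ offset determined by the disagreement between the private data $b_i$.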