Distributed proximal gradient algorithm for non-smooth non-convex optimization over time-varying networks
This paper studies the distributed non-convex optimization problem with
non-smooth regularization, which has wide applications in decentralized
learning, estimation, and control. The objective is the sum of local
objective functions, each consisting of a differentiable (possibly
non-convex) cost function and a non-smooth convex function. We present a
distributed proximal gradient algorithm for this non-smooth non-convex
optimization problem over time-varying multi-agent networks, in which each
agent updates its local estimate via a multi-step consensus operator and the
proximal operator. We prove that the generated local variables achieve
consensus and converge to the set of critical points with a guaranteed
convergence rate. Finally, we verify the efficacy of the proposed algorithm
through numerical simulations.
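The update described in the abstract, multi-step consensus followed by a local proximal gradient step, can be sketched as below. This is an illustrative reading, not the authors' exact scheme: the toy problem (quadratic local costs with an ℓ1 regularizer), the step size, and the mixing matrices `W1`, `W2` are our own assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the non-smooth convex term).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_prox_grad(grads, W_seq, x0, step, lam, n_iters,
                          consensus_steps=3):
    # x has one row per agent; W_seq(t) returns the (time-varying,
    # doubly stochastic) mixing matrix used at global consensus step t.
    x = x0.copy()
    for k in range(n_iters):
        # Multi-step consensus over the time-varying network.
        for s in range(consensus_steps):
            x = W_seq(k * consensus_steps + s) @ x
        # Local gradient step on each agent's smooth cost, then prox.
        g = np.stack([grads[i](x[i]) for i in range(len(grads))])
        x = soft_threshold(x - step * g, step * lam)
    return x

# Toy instance: f_i(x) = 0.5 * (x - a_i)^2 and g(x) = lam * |x|,
# with 4 agents on a time-varying graph (two alternating topologies
# whose union is connected).
a = np.array([1.0, 2.0, 3.0, 4.0])
grads = [lambda xi, ai=ai: xi - ai for ai in a]
W1 = np.array([[.5, .5, 0., 0.], [.5, .5, 0., 0.],
               [0., 0., .5, .5], [0., 0., .5, .5]])
W2 = np.array([[.5, 0., 0., .5], [0., .5, .5, 0.],
               [0., .5, .5, 0.], [.5, 0., 0., .5]])
W_seq = lambda t: W1 if t % 2 == 0 else W2
x = distributed_prox_grad(grads, W_seq, np.zeros((4, 1)), 0.05, 0.1, 300)
# Agents roughly agree near the minimizer of sum_i f_i + 4 * lam * |.|.
```

With a constant step size the agents reach a neighborhood of a common critical point (here, the iterates cluster around x = 2.4); a diminishing step size would tighten both the consensus and the optimality error.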