Distributed Stochastic Optimization under Imperfect Information
We consider a stochastic convex optimization problem that requires minimizing
a sum of misspecified agent-specific expectation-valued convex functions over
the intersection of a collection of agent-specific convex sets. This
misspecification is manifested in a parametric sense and may be resolved
through solving a distinct stochastic convex learning problem. Our interest
lies in the development of distributed algorithms in which every agent makes
decisions based on the knowledge of its objective and feasibility set while
learning the decisions of other agents by communicating with its local
neighbors over a time-varying connectivity graph. While a significant body of
research currently exists in the context of such problems, we believe that the
misspecified generalization of this problem is important and has seen little,
if any, study. Accordingly, our focus lies on the simultaneous
resolution of both problems through a joint set of schemes that combine three
distinct steps: (i) An alignment step in which every agent updates its current
belief by averaging over the beliefs of its neighbors; (ii) A projected
(stochastic) gradient step in which every agent further updates this averaged
estimate; and (iii) A learning step in which agents update their belief of the
misspecified parameter by utilizing a stochastic gradient step. Under an
assumption of mere convexity on agent objectives and strong convexity of the
learning problems, we show that the sequences generated by this collection of
update rules converge almost surely to the solution of the correctly specified
stochastic convex optimization problem and the stochastic learning problem,
respectively.
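The three-step scheme above can be sketched on a hypothetical toy instance. All of the data below (the ring topology, mixing weights, local objectives, and the learning problem) are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative instance (not from the paper): 4 agents on a ring, each
# with local objective f_i(x; theta) = E[(x - theta*c_i + noise)^2] over
# the common feasible set X = [-5, 5]. The misspecified parameter has
# true value theta* = 2 and is learned from a separate strongly convex
# stochastic problem g(theta) = E[(theta - theta* + noise)^2].
n = 4
c = np.array([1.0, 0.5, -0.5, 1.5])
theta_star = 2.0

# Doubly stochastic mixing matrix for a 4-cycle (lazy Metropolis weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = rng.normal(size=n)      # each agent's decision estimate
theta = np.zeros(n)         # each agent's belief of the parameter

for k in range(1, 20001):
    gamma = 1.0 / k         # diminishing step size
    # (i) alignment: average over neighbors' current beliefs
    v = W @ x
    # (ii) projected stochastic gradient step on the local objective,
    #      evaluated at the current (possibly misspecified) belief theta
    noise = rng.normal(scale=0.1, size=n)
    grad_f = 2.0 * (v - theta * c) + noise
    x = np.clip(v - gamma * grad_f, -5.0, 5.0)
    # (iii) learning step: stochastic gradient on the learning problem
    grad_g = 2.0 * (theta - theta_star) + rng.normal(scale=0.1, size=n)
    theta = theta - gamma * grad_g
```

With these quadratic objectives the correctly specified solution is x* = theta* · mean(c) = 1.25, and the iterates approach it as the learned beliefs theta approach 2; the diminishing step sizes play the role of the square-summable step-size conditions typical of such almost-sure convergence results.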
On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
In decentralized consensus optimization, the agents of a connected network
collaboratively minimize the sum of their local objective functions over a
common decision variable, with information exchange restricted to neighboring
agents. To this end, one can first obtain a problem
reformulation and then apply the alternating direction method of multipliers
(ADMM). The method alternates iterative computation at the individual agents
with information exchange between neighbors, and has been observed to converge
quickly and to be powerful in practice. This paper establishes its linear
convergence rate for the decentralized consensus optimization problem with
strongly convex local objective functions. The theoretical convergence rate is
explicitly given in terms of the network topology, the properties of local
objective functions, and the algorithm parameter. This result is not only a
performance guarantee but also a guideline toward accelerating the ADMM
convergence.
Comment: 11 figures, IEEE Transactions on Signal Processing, 201
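A minimal sketch of decentralized consensus ADMM on a hypothetical toy instance follows. The ring graph, the data b, the penalty rho, and the quadratic local objectives are illustrative assumptions; the closed-form primal update holds only for this quadratic choice of f_i:

```python
import numpy as np

# Illustrative instance (not from the paper): 4 agents on a ring, each
# with strongly convex local objective f_i(x) = 0.5*(x - b_i)^2. The
# consensus problem min_x sum_i f_i(x) has solution x* = mean(b) = 2.5.
b = np.array([1.0, 2.0, 3.0, 4.0])
n = len(b)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
deg = np.array([len(neighbors[i]) for i in range(n)], dtype=float)

rho = 1.0           # ADMM penalty parameter
x = np.zeros(n)     # local copies of the common decision variable
p = np.zeros(n)     # aggregated dual variables (their sum stays 0)

for _ in range(500):
    x_old = x.copy()
    # Primal update: each agent minimizes its local augmented Lagrangian,
    # using only its own data and its neighbors' previous iterates.
    # For the quadratic f_i this is available in closed form.
    for i in range(n):
        s = deg[i] * x_old[i] + sum(x_old[j] for j in neighbors[i])
        x[i] = (b[i] - p[i] + rho * s) / (1.0 + 2.0 * rho * deg[i])
    # Dual update: driven by each agent's disagreement with its neighbors.
    p = p + (rho / 2.0) * np.array(
        [sum(x[i] - x[j] for j in neighbors[i]) for i in range(n)])
```

Consistent with the linear-rate result described above, on this strongly convex instance the disagreement and the distance to x* = 2.5 shrink geometrically, at a rate that depends on the ring topology, the curvature of the f_i, and the penalty rho.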