A Stochastic Second-Order Proximal Method for Distributed Optimization
In this paper, we propose a distributed stochastic second-order proximal
method that enables agents in a network to cooperatively minimize the sum of
their local loss functions without any centralized coordination. The proposed
algorithm, referred to as St-SoPro, incorporates a decentralized second-order
approximation into an augmented Lagrangian function and then randomly samples
the agents' local gradients and Hessian matrices, making it efficient in both
computation and memory, particularly for large-scale
optimization problems. We show that for globally restricted strongly convex
problems, the expected optimality error of St-SoPro asymptotically drops below
an explicit error bound at a linear rate, and this bound can be made
arbitrarily small under appropriate parameter settings. Simulations on real machine
learning datasets demonstrate that St-SoPro outperforms several
state-of-the-art distributed stochastic first-order methods in terms of
convergence speed as well as computation and communication costs.
Comment: 6 pages, 8 figures
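As a rough illustration of the kind of update the abstract describes, the sketch below simulates a decentralized stochastic second-order proximal scheme on a synthetic least-squares problem. It is not the St-SoPro update from the paper: the quadratic subproblem, the penalty parameter rho, the ring topology, and the dual update are all illustrative assumptions; only the general recipe (a sampled second-order model embedded in an augmented-Lagrangian consensus step) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_local = 5, 3, 100   # network size, variable dimension, samples per agent
batch = 10                           # mini-batch size for sampled gradients/Hessians
rho, n_iters = 1.0, 300              # penalty parameter (assumed) and iteration count

# Synthetic local least-squares losses: agent i holds (A[i], b[i]).
A = rng.normal(size=(n_agents, n_local, dim))
b = rng.normal(size=(n_agents, n_local))

# Ring communication topology (an assumption; any connected graph would do).
neighbors = [[(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)]

x = np.zeros((n_agents, dim))    # local primal iterates
lam = np.zeros((n_agents, dim))  # local dual variables for the consensus constraints

for k in range(n_iters):
    x_old = x.copy()
    for i in range(n_agents):
        # Randomly sample a mini-batch to form a stochastic gradient and Hessian.
        idx = rng.choice(n_local, size=batch, replace=False)
        Ai, bi = A[i][idx], b[i][idx]
        g = Ai.T @ (Ai @ x_old[i] - bi) / batch   # sampled gradient
        H = Ai.T @ Ai / batch                     # sampled Hessian
        deg = len(neighbors[i])
        # Second-order proximal step: minimize the sampled quadratic model of
        # the local loss plus an augmented-Lagrangian consensus penalty that
        # pulls the iterate toward the averages with each neighbor.
        M = H + 2.0 * rho * deg * np.eye(dim)
        rhs = (H @ x_old[i] - g - lam[i]
               + rho * (deg * x_old[i] + sum(x_old[j] for j in neighbors[i])))
        x[i] = np.linalg.solve(M, rhs)
    for i in range(n_agents):
        # Dual ascent on the disagreement with neighbors.
        lam[i] += rho * sum(x[i] - x[j] for j in neighbors[i])

print("max disagreement across agents:", np.abs(x - x.mean(axis=0)).max())
```

Because the gradients and Hessians are only sampled, the iterates settle into a neighborhood of the consensus optimum rather than converging exactly, which loosely mirrors the abstract's claim of a linear rate toward an explicit, parameter-tunable error bound.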