Distributed Adaptive Newton Methods with Globally Superlinear Convergence
This paper considers the distributed optimization problem over a network, where the global objective is to optimize a sum of local functions using only local computation and communication.
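In standard notation (ours, not necessarily the paper's), with $n$ nodes and node $i$ privately holding a local function $f_i \colon \mathbb{R}^p \to \mathbb{R}$, the problem reads

\[
  \min_{x \in \mathbb{R}^p} \; f(x) = \sum_{i=1}^{n} f_i(x),
\]

where each node may evaluate only its own $f_i$ and exchange messages with its immediate neighbors.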
Existing algorithms either adopt a linear consensus mechanism, which converges at best linearly, or assume that each node starts sufficiently close to an optimal solution; hence they cannot achieve globally superlinear convergence. To break through the linear consensus rate, we propose a finite-time set-consensus method and then incorporate it into Polyak's adaptive Newton method, leading to our distributed adaptive Newton algorithm (DAN).
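For intuition on why finite-time agreement matters, the sketch below shows a classical finite-time consensus primitive, max-consensus, which reaches exact agreement in at most diam(G) rounds on a connected graph. The paper's set-consensus method is its own construction; this generic example only illustrates the contrast with linearly convergent averaging.

    # A classical finite-time consensus primitive (illustration only; the
    # paper's finite-time set-consensus is a different construction):
    # max-consensus reaches exact agreement in at most diam(G) synchronous
    # rounds, whereas averaging consensus converges only linearly.
    def max_consensus(values, neighbors, rounds):
        """values: local scalars per node; neighbors: adjacency list."""
        x = list(values)
        for _ in range(rounds):  # rounds >= graph diameter suffices
            x = [max([x[i]] + [x[j] for j in neighbors[i]])
                 for i in range(len(x))]
        return x

    # Path graph 0-1-2-3 has diameter 3, so 3 rounds give exact agreement.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(max_consensus([4.0, 1.0, 7.0, 2.0], adj, rounds=3))
    # -> [7.0, 7.0, 7.0, 7.0]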
To avoid transmitting local Hessians, we adopt a low-rank approximation idea to compress the Hessian and design a communication-efficient variant, DAN-LA. The size of the transmitted messages in DAN-LA is then reduced to $\mathcal{O}(p)$ per iteration, where $p$ is the dimension of the decision vectors, matching the message size of first-order methods.
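To make the communication saving concrete, here is a minimal sketch of the low-rank idea, assuming a rank-1 compression via the leading eigenpair of the local Hessian; the exact DAN-LA construction may differ, and the function names are ours.

    import numpy as np

    # A minimal sketch of the low-rank idea, not necessarily the exact
    # DAN-LA construction: each node sends only the leading eigenpair of
    # its local Hessian, i.e. p + 1 numbers per iteration instead of the
    # O(p^2) entries of the full matrix. Function names are illustrative.
    def compress_hessian(H):
        """Rank-1 summary of a symmetric p x p Hessian: (eigval, eigvec)."""
        w, V = np.linalg.eigh(H)          # eigenvalues in ascending order
        return w[-1], V[:, -1]            # leading eigenpair, O(p) numbers

    def decompress_hessian(lam, u, reg=1e-3):
        """Rebuild a positive-definite surrogate lam * u u^T + reg * I."""
        p = u.shape[0]
        return lam * np.outer(u, u) + reg * np.eye(p)

    p = 5
    A = np.random.randn(p, p)
    H = A @ A.T + np.eye(p)               # synthetic SPD "local Hessian"
    lam, u = compress_hessian(H)          # message of size p + 1
    H_hat = decompress_hessian(lam, u)    # receiver's usable surrogate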
We show that DAN and DAN-LA can globally achieve quadratic and superlinear convergence rates, respectively. Numerical
experiments on logistic regression problems are finally conducted to show the
advantages over existing methods.

Comment: Submitted to IEEE Transactions on Automatic Control. 14 pages, 4 figures.