Distributed Nonconvex Multiagent Optimization Over Time-Varying Networks
We study nonconvex distributed optimization in multiagent networks where the
communication between nodes is modeled as a time-varying sequence of arbitrary
digraphs. We introduce a novel broadcast-based distributed algorithmic
framework for the (constrained) minimization of the sum of a smooth (possibly
nonconvex and nonseparable) function, i.e., the agents' sum-utility, plus a
convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually
employed to enforce some structure in the solution, typically sparsity. The
proposed method hinges on Successive Convex Approximation (SCA) techniques
coupled with i) a tracking mechanism instrumental to locally estimate the
gradients of agents' cost functions; and ii) a novel broadcast protocol to
disseminate information and distribute the computation among the agents.
Asymptotic convergence to stationary solutions is established. A key feature of
the proposed algorithm is that it requires neither double-stochasticity of the
consensus matrices (column stochasticity suffices) nor knowledge of the graph
sequence to be implemented. To the best of our knowledge, the proposed
framework is the first broadcast-based distributed algorithm for convex and
nonconvex constrained optimization over arbitrary, time-varying digraphs.
Numerical results show that our algorithm outperforms current schemes on both
convex and nonconvex problems.
Comment: Copyright 2001 SS&C. Published in the Proceedings of the 50th Annual
Asilomar Conference on Signals, Systems, and Computers, Nov. 6-9, 2016, CA, US
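The scheme this abstract describes combines SCA with gradient tracking over column-stochastic, broadcast-friendly mixing. Below is a minimal Python sketch of those two ingredients under stated simplifications: it drops the constraints, the nonsmooth regularizer, and the full SCA subproblem (a plain gradient step stands in), uses illustrative quadratic local costs and a random graph model, and uses a push-sum correction as a stand-in for the paper's exact broadcast protocol.

```python
# Hedged sketch: gradient tracking with push-sum over time-varying digraphs.
# Not the paper's algorithm verbatim; costs, step size, and graphs are toys.
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 5, 3, 300                       # agents, variable dimension, iterations
A = rng.normal(size=(n, d, d))            # data defining the illustrative costs
b = rng.normal(size=(n, d))

def grad(i, z):
    # gradient of the illustrative local cost f_i(z) = 0.5 * ||A_i z - b_i||^2
    return A[i].T @ (A[i] @ z - b[i])

x = np.zeros((n, d))                      # local iterates
phi = np.ones(n)                          # push-sum correction weights
z = x / phi[:, None]                      # de-biased iterates
y = np.stack([grad(i, z[i]) for i in range(n)])   # gradient-tracking variables
step = 0.02

for t in range(T):
    # Time-varying digraph: every agent keeps a self-loop and broadcasts to one
    # random out-neighbor; normalizing columns makes W_t column-stochastic,
    # which is all this scheme needs (no double stochasticity).
    W = np.eye(n)
    for j in range(n):
        W[rng.integers(n), j] += 1.0
    W /= W.sum(axis=0, keepdims=True)

    g_old = np.stack([grad(i, z[i]) for i in range(n)])
    x = W @ (x - step * y)                # local step, then broadcast mixing
    phi = W @ phi                         # push-sum weight recursion
    z = x / phi[:, None]                  # remove the bias of directed mixing
    g_new = np.stack([grad(i, z[i]) for i in range(n)])
    y = W @ y + (g_new - g_old)           # track the average gradient

print("disagreement across agents:", np.linalg.norm(z - z.mean(axis=0)))
```

The role of the push-sum weights phi is that column-stochastic mixing alone biases the consensus average; dividing by phi removes that bias, which is why double stochasticity is not needed.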
Near-Optimal Decentralized Momentum Method for Nonconvex-PL Minimax Problems
Minimax optimization plays an important role in many machine learning tasks
such as generative adversarial networks (GANs) and adversarial training.
Although recently a wide variety of optimization methods have been proposed to
solve the minimax problems, most of them ignore the distributed setting where
the data is distributed on multiple workers. Meanwhile, the existing
decentralized minimax optimization methods rely on the strictly assumptions
such as (strongly) concavity and variational inequality conditions. In the
paper, thus, we propose an efficient decentralized momentum-based gradient
descent ascent (DM-GDA) method for the distributed nonconvex-PL minimax
optimization, which is nonconvex in primal variable and is nonconcave in dual
variable and satisfies the Polyak-Lojasiewicz (PL) condition. In particular,
our DM-GDA method simultaneously uses momentum-based techniques to update the
variables and to estimate the stochastic gradients. Moreover, we provide a solid
convergence analysis for our DM-GDA method, and prove that it obtains a
near-optimal gradient complexity of $\tilde{O}(\epsilon^{-3})$ for finding an
$\epsilon$-stationary solution of the nonconvex-PL stochastic minimax problem,
which matches the lower bound of nonconvex stochastic optimization. To the best
of our knowledge, this is the first study of a decentralized algorithm for
nonconvex-PL stochastic minimax optimization over a network.
Comment: 31 pages
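As a rough illustration of the decentralized momentum GDA idea, here is a hedged Python sketch: each worker keeps exponential-moving-average estimates of its stochastic primal and dual gradients and mixes its iterates with the network after every descent/ascent step. The bilinear toy objective (convex-concave rather than nonconvex-PL), the complete-graph gossip matrix, the EMA momentum (in place of the paper's variance-reduced estimator), and all parameter values are illustrative assumptions only.

```python
# Hedged sketch of decentralized momentum-based gradient descent ascent.
# Toy saddle problem; not the paper's DM-GDA estimator or network model.
import numpy as np

rng = np.random.default_rng(1)
n, T = 4, 500                            # workers, iterations
c = rng.normal(size=n)                   # agent-specific coupling coefficients

def stoch_grad(i, x, y):
    # noisy gradients of the toy saddle f_i(x, y) = x**2/2 + c[i]*x*y - y**2/2
    gx = x + c[i] * y + rng.normal(scale=0.1)
    gy = c[i] * x - y + rng.normal(scale=0.1)
    return gx, gy

x, y = rng.normal(size=n), rng.normal(size=n)   # local primal/dual iterates
vx, vy = np.zeros(n), np.zeros(n)               # momentum gradient estimates
W = np.full((n, n), 1.0 / n)             # gossip matrix (complete graph, for brevity)
lr, beta = 0.05, 0.9

for t in range(T):
    for i in range(n):
        gx, gy = stoch_grad(i, x[i], y[i])
        vx[i] = beta * vx[i] + (1 - beta) * gx   # EMA estimate of the x-gradient
        vy[i] = beta * vy[i] + (1 - beta) * gy   # EMA estimate of the y-gradient
    x = W @ (x - lr * vx)                # descent step on the primal + mixing
    y = W @ (y + lr * vy)                # ascent step on the dual + mixing

print("average saddle iterate:", x.mean(), y.mean())
```

On this toy problem the averaged iterates approach the saddle point at the origin; the momentum estimates also smooth the gradient noise, which is the intuition behind using one momentum mechanism for both updating and estimation.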
Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a
convex (possibly) nonsmooth regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with the nonconvexity
of the cost function, the novel scheme hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a tracking mechanism
instrumental to locally estimate gradient averages; and ii) a novel block-wise
consensus-based protocol to perform local block-averaging operations and
gradient tracking. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Finally, numerical results show the
effectiveness of the proposed algorithm and highlight how the block dimension
affects the communication overhead and the practical convergence speed.
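To make the block-wise idea concrete, the following Python sketch has every agent update and then average only one block of its decision vector per iteration, so the per-round communication is blk numbers instead of the full dimension d. It is a toy under assumptions: quadratic local costs, a block index shared across agents (the paper allows uncoordinated selection), and no regularizer or gradient tracking; it illustrates only the block-wise communication pattern, not the paper's convergence guarantees.

```python
# Hedged sketch of block-iterative updates with block-wise averaging.
# Only the communication pattern is illustrated; tracking is omitted.
import numpy as np

rng = np.random.default_rng(2)
n, d, B, T = 4, 12, 4, 400               # agents, dimension, blocks, iterations
blk = d // B                             # variables per block
A = rng.normal(size=(n, d, d))
b = rng.normal(size=(n, d))

def grad(i, z):
    # gradient of the illustrative local cost f_i(z) = 0.5 * ||A_i z - b_i||^2
    return A[i].T @ (A[i] @ z - b[i])

x = np.zeros((n, d))                     # local iterates
step = 0.005

for t in range(T):
    k = rng.integers(B)                  # block picked this round (shared, for brevity)
    sl = slice(k * blk, (k + 1) * blk)
    for i in range(n):
        x[i, sl] -= step * grad(i, x[i])[sl]   # update only the chosen block
    x[:, sl] = x[:, sl].mean(axis=0)     # average only blk entries, not all d

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
```

The payoff is visible in the averaging line: each round moves blk numbers per agent across the network instead of d, which is exactly the big-data trade-off between block dimension and communication overhead that the abstract studies.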