Communication Lower Bounds for Distributed-Memory Computations
In this paper we propose a new approach to the study of the communication requirements of distributed computations, which advocates for the removal of the restrictive assumptions under which earlier results were derived. We illustrate our approach by giving tight lower bounds on the communication complexity required to solve several computational problems in a distributed-memory parallel machine, namely standard matrix multiplication, stencil computations, comparison sorting, and the Fast Fourier Transform. Our bounds rely only on a mild assumption on work distribution, and significantly strengthen previous results, which require either the computation to be balanced among the processors, or specific initial distributions of the input data, or an upper bound on the size of processors' local memories.
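For a sense of the flavor of these bounds: a representative memory-independent lower bound from this line of work (quoted here from the broader literature, so the paper's exact statements and constants may differ) says that multiplying two $n \times n$ matrices on $p$ processors forces some processor to communicate $\Omega(n^2 / p^{2/3})$ words, regardless of how much local memory each processor has.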
Multi-consensus Decentralized Accelerated Gradient Descent
This paper considers the decentralized optimization problem, which has
applications in large-scale machine learning, sensor networks, and control
theory. We propose a novel algorithm that achieves near-optimal
communication complexity, matching the known lower bound up to a logarithmic
factor of the condition number of the problem. Our theoretical results give
an affirmative answer to the open problem of whether there exists an algorithm
that can achieve a communication complexity (nearly) matching the lower bound
depending on the global condition number instead of the local one. Moreover,
the proposed algorithm achieves the optimal computation complexity matching the
lower bound up to universal constants. Furthermore, to achieve a linear
convergence rate, our algorithm does not require the individual functions
to be (strongly) convex. Our method relies on a novel combination of known
techniques, including Nesterov's accelerated gradient descent, multi-consensus,
and gradient tracking. The analysis is new and may be applied to other related
problems. Empirical studies demonstrate the effectiveness of our method for
machine learning applications.
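To make the combination of techniques concrete, below is a minimal, illustrative Python sketch of gradient tracking with multi-consensus on a ring of agents. It is not the paper's algorithm: the Nesterov acceleration step is omitted, plain (rather than accelerated) consensus is used, and all objectives, topology, and hyperparameters are hypothetical.

```python
# Minimal sketch of decentralized gradient tracking with multi-consensus.
# NOT the paper's algorithm: acceleration is omitted and all problem data,
# topology, and hyperparameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_samples, dim = 5, 10, 3

# Hypothetical local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = rng.standard_normal((n_agents, n_samples, dim))
b = rng.standard_normal((n_agents, n_samples))

def local_grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring topology.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1 / 3
    W[i, i] = 1 / 3

def multi_consensus(V, rounds=3):
    # Multi-consensus: apply the mixing matrix several times per iteration,
    # spending extra communication to tighten agreement among agents.
    for _ in range(rounds):
        V = W @ V
    return V

eta, iters = 0.01, 500
x = np.zeros((n_agents, dim))  # one row of iterates per agent
y = np.stack([local_grad(i, x[i]) for i in range(n_agents)])  # trackers

for _ in range(iters):
    g_old = np.stack([local_grad(i, x[i]) for i in range(n_agents)])
    x = multi_consensus(x) - eta * y  # consensus step + descent step
    g_new = np.stack([local_grad(i, x[i]) for i in range(n_agents)])
    # Gradient tracking: y_i follows the average of all local gradients.
    y = multi_consensus(y) + g_new - g_old

print("disagreement between agents:", np.linalg.norm(x - x.mean(axis=0)))
```

The `rounds` argument of `multi_consensus` is where extra per-iteration communication buys tighter agreement among agents; trading this off against gradient computations is the balance the paper's accelerated variant optimizes.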
On The Communication Complexity of Linear Algebraic Problems in the Message Passing Model
We study the communication complexity of linear algebraic problems over
finite fields in the multi-player message passing model, proving a number of
tight lower bounds. Specifically, for a matrix which is distributed among a
number of players, we consider the problem of determining its rank, of
computing entries in its inverse, and of solving linear equations. We also
consider related problems such as computing the generalized inner product of
vectors held on different servers. We give a general framework for reducing
these multi-player problems to their two-player counterparts, showing that the
randomized $s$-player communication complexity of these problems is at least
$s$ times the randomized two-player communication complexity. Provided the
problem has a certain amount of algebraic symmetry, which we formally define,
we can show the hardest input distribution is a symmetric distribution, and
therefore apply a recent multi-player lower bound technique of Phillips et al.
Further, we give new two-player lower bounds for a number of these problems. In
particular, our optimal lower bound for the two-player version of the matrix
rank problem resolves an open question of Sun and Wang.
A common feature of our lower bounds is that they apply even to the special
"threshold promise" versions of these problems, wherein the underlying
quantity, e.g., rank, is promised to be one of just two values, one on each
side of some critical threshold. These kinds of promise problems are
commonplace in the literature on data streaming as sources of hardness for
reductions giving space lower bounds.
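In symbols, writing $R_s(f)$ for the randomized $s$-player communication complexity of a problem $f$, the reduction framework above can be summarized as $R_s(f) \geq s \cdot R_2(f)$ (the notation here is ours, introduced only to restate the claim compactly).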
GIANT: Globally Improved Approximate Newton Method for Distributed Optimization
In a distributed computing environment, we consider the empirical risk
minimization problem and propose a distributed and communication-efficient
Newton-type optimization method. At every iteration, each worker locally finds
an Approximate NewTon (ANT) direction, which is sent to the main driver. The
main driver then averages all the ANT directions received from workers to
form a Globally Improved ANT (GIANT) direction. GIANT is highly
communication-efficient and naturally exploits the trade-off between local
computations and global communications in that more local computations result
in fewer overall rounds of communications. Theoretically, we show that GIANT
enjoys an improved convergence rate as compared with first-order methods and
existing distributed Newton-type methods. Further, and in sharp contrast with
many existing distributed Newton-type methods, as well as popular first-order
methods, a highly advantageous practical feature of GIANT is that it
involves only one tuning parameter. We conduct large-scale experiments on a computer
cluster and empirically demonstrate the superior performance of GIANT.
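As a concrete illustration of GIANT's two communication rounds per iteration, here is a minimal single-process simulation in Python for ridge-regularized least squares. This is a sketch under simplifying assumptions (exact local solves, unit step size, synthetic data shards; all names are hypothetical), not the authors' implementation:

```python
# Minimal single-process simulation of the GIANT update for
# ridge-regularized least squares. A sketch under simplifying assumptions
# (exact local solves, unit step size, synthetic shards); NOT the
# authors' implementation.
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_local, dim, lam = 4, 50, 5, 0.1
n_total = n_workers * n_local

# Hypothetical data shards: worker i holds (X[i], y[i]).
X = rng.standard_normal((n_workers, n_local, dim))
y = rng.standard_normal((n_workers, n_local))

w = np.zeros(dim)
for _ in range(10):
    # Communication round 1: aggregate the exact global gradient.
    g = sum(X[i].T @ (X[i] @ w - y[i]) for i in range(n_workers)) / n_total
    g += lam * w
    # Communication round 2: each worker solves its local Newton system
    # against the *global* gradient; the driver averages the directions.
    dirs = [np.linalg.solve(X[i].T @ X[i] / n_local + lam * np.eye(dim), g)
            for i in range(n_workers)]
    w -= np.mean(dirs, axis=0)  # the Globally Improved ANT step

obj = (0.5 * sum(np.sum((X[i] @ w - y[i]) ** 2) for i in range(n_workers))
       / n_total + 0.5 * lam * (w @ w))
print("objective after 10 GIANT steps:", obj)
```

Averaging the locally preconditioned directions, rather than the raw gradients, is what lets more local computation reduce the number of communication rounds.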