Super-Fast MST Algorithms in the Congested Clique Using o(m) Messages
In a sequence of recent results (PODC 2015 and PODC 2016), the running time of the fastest algorithm for the minimum spanning tree (MST) problem in the Congested Clique model was first improved to O(log(log(log(n)))) from O(log(log(n))) (Hegeman et al., PODC 2015) and then to O(log^*(n)) (Ghaffari and Parter, PODC 2016). All of these algorithms use Theta(n^2) messages independent of the number of edges in the input graph.
This paper answers in the affirmative a question raised in Hegeman et al., and presents the first "super-fast" MST algorithm with o(m) message complexity for input graphs with m edges. Specifically, we present an algorithm running in O(log^*(n)) rounds with message complexity ~O(sqrt{m * n}), and then build on this algorithm to derive a family of algorithms containing, for any epsilon with 0 < epsilon <= 1, an algorithm running in O(log^*(n)/epsilon) rounds using ~O(n^{1 + epsilon}/epsilon) messages. Setting epsilon = log(log(n))/log(n) yields the first sub-logarithmic-round Congested Clique MST algorithm that uses only ~O(n) messages.
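As a quick sanity check on the last claim (all logarithms base 2), the stated choice of epsilon gives both the message bound and the round bound:

```latex
n^{1+\epsilon} = n \cdot n^{\frac{\log\log n}{\log n}} = n \cdot 2^{\log\log n}
= n \log n = \tilde{O}(n),
\qquad
\frac{\log^* n}{\epsilon} = \frac{\log^* n \cdot \log n}{\log\log n} = o(\log n),
```

where the round bound is sub-logarithmic because \log^* n = o(\log\log n).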
Our primary tools in achieving these results are
(i) a component-wise bound on the number of candidates for MST edges, extending the sampling lemma of Karger, Klein, and Tarjan (JACM 1995), and
(ii) Theta(log(n))-wise-independent linear graph sketches (Cormode and Firmani, Dist. Par. Databases, 2014) for generating MST candidate edges.
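The linearity property behind graph sketches as in (ii) can be illustrated with a deliberately simplified, hypothetical toy version (not the paper's Theta(log n)-wise-independent construction): each node stores the XOR of encodings of its incident edges; since XOR is linear over GF(2), combining the sketches of a component's nodes cancels internal edges (each counted twice) and exposes an outgoing candidate edge.

```python
# Toy, hypothetical illustration of linear graph sketching: node sketches are
# XORs of incident-edge encodings. Assumes node IDs fit in 20 bits.

def encode(u, v):
    a, b = min(u, v), max(u, v)
    return (a << 20) | b          # pack the edge endpoints into one integer

def node_sketch(node, edges):
    s = 0
    for (u, v) in edges:
        if node in (u, v):
            s ^= encode(u, v)
    return s

def component_outgoing_edge(component, edges):
    # XOR of node sketches: internal edges cancel; if exactly one edge leaves
    # the component, its encoding is what remains.
    s = 0
    for node in component:
        s ^= node_sketch(node, edges)
    return (s >> 20, s & ((1 << 20) - 1))   # decode the surviving edge

edges = [(0, 1), (1, 2), (0, 2), (2, 5), (3, 4), (4, 5)]
# component {0, 1, 2} has exactly one outgoing edge, (2, 5)
print(component_outgoing_edge({0, 1, 2}, edges))  # -> (2, 5)
```

The real sketches additionally use random edge sampling so that an outgoing edge can be recovered even when many edges leave the component; this toy version works only when exactly one does.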
On the Distributed Complexity of Large-Scale Graph Computations
Motivated by the increasing need to understand the distributed algorithmic
foundations of large-scale graph computations, we study some fundamental graph
problems in a message-passing model for distributed computing where k
machines jointly perform computations on graphs with n nodes (typically, n >> k). The input graph is assumed to be initially randomly partitioned among
the k machines, a common implementation in many real-world systems.
Communication is point-to-point, and the goal is to minimize the number of
communication {\em rounds} of the computation.
Our main contribution is the {\em General Lower Bound Theorem}, a theorem
that can be used to show non-trivial lower bounds on the round complexity of
distributed large-scale data computations. The General Lower Bound Theorem is
established via an information-theoretic approach that relates the round
complexity to the minimal amount of information required by machines to solve
the problem. Our approach is generic and this theorem can be used in a
"cookbook" fashion to show distributed lower bounds in the context of several
problems, including non-graph problems. We present two applications by showing
(almost) tight lower bounds for the round complexity of two fundamental graph
problems, namely {\em PageRank computation} and {\em triangle enumeration}. Our
approach, as demonstrated in the case of PageRank, can yield tight lower bounds
for problems (including, and especially, under a stochastic partition of the
input) where communication complexity techniques are not obvious.
Our approach, as demonstrated in the case of triangle enumeration, can yield
stronger round lower bounds as well as message-round tradeoffs compared to
approaches that use communication complexity techniques.
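For reference, triangle enumeration (one of the two problems for which the lower bounds are shown) asks to output every triple of mutually adjacent nodes. A minimal sequential version, included only to pin down the problem and not the distributed setting studied here, is:

```python
from itertools import combinations

def triangles(n, edges):
    # build adjacency sets, then test every triple of nodes
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return [(a, b, c) for a, b, c in combinations(range(n), 3)
            if b in adj[a] and c in adj[a] and c in adj[b]]

print(triangles(4, [(0, 1), (0, 2), (1, 2), (2, 3)]))  # -> [(0, 1, 2)]
```

The difficulty addressed by the lower bound is, of course, not this local test but the communication needed when the edge set is split across machines.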
Fast Distributed Algorithms for Connectivity and MST in Large Graphs
Motivated by the increasing need to understand the algorithmic foundations of
distributed large-scale graph computations, we study a number of fundamental
graph problems in a message-passing model for distributed computing where k machines jointly perform computations on graphs with n nodes
(typically, n >> k). The input graph is assumed to be initially randomly
partitioned among the k machines, a common implementation in many real-world
systems. Communication is point-to-point, and the goal is to minimize the
number of communication rounds of the computation.
Our main result is an (almost) optimal distributed randomized algorithm for
graph connectivity. Our algorithm runs in ~O(n/k^2) rounds
(the ~O notation hides a \poly\log(n) factor and an additive
\poly\log(n) term). This improves over the best previously known bound of
~O(n/k) [Klauck et al., SODA 2015], and is optimal (up to a
polylogarithmic factor) in view of an existing lower bound of
~Omega(n/k^2). Our improved algorithm uses several techniques,
including linear graph sketching, that prove useful in the design of efficient
distributed graph algorithms. Using the connectivity algorithm as a building
block, we then present fast randomized algorithms for computing minimum
spanning trees, (approximate) min-cuts, and for many graph verification
problems. All these algorithms take ~O(n/k^2) rounds, and are optimal
up to polylogarithmic factors. We also show an almost matching lower bound of
~Omega(n/k^2) rounds for many graph verification problems by
leveraging lower bounds in random-partition communication complexity.
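The component-merging pattern that underlies many such connectivity algorithms (and the Boruvka-style MST algorithms built on them) can be sketched sequentially. This is only an illustration of the merging skeleton, not the distributed k-machine algorithm itself:

```python
# Boruvka-style merging: in each phase every component picks one outgoing
# edge and contracts along it, so the number of components at least halves,
# giving O(log n) phases.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def connected_components(n, edges):
    parent = list(range(n))
    while True:
        # pick one outgoing edge per current component
        chosen = {}
        for (u, v) in edges:
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                chosen.setdefault(ru, (u, v))
                chosen.setdefault(rv, (u, v))
        if not chosen:
            break
        # contract along the chosen edges
        for (u, v) in chosen.values():
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv
    return len({find(parent, i) for i in range(n)})

print(connected_components(6, [(0, 1), (1, 2), (3, 4)]))  # -> 3
```

In the distributed setting the hard part is exactly the step this sketch makes trivial: finding the outgoing edges, which the paper does communication-efficiently via linear graph sketches.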
Time and Space Optimal Massively Parallel Algorithm for the 2-Ruling Set Problem
In this work, we present a constant-round algorithm for the 2-ruling set
problem in the Congested Clique model. As a direct consequence, we obtain a
constant-round algorithm in the MPC model with linear space per machine and
optimal total space. Our results improve on the earlier, non-constant-round
algorithms by [HPS, DISC'14] and [GGKMR, PODC'18]. Our techniques can also be
applied to the semi-streaming model to obtain an algorithm using a small
number of passes. Our main technical contribution is a novel sampling
procedure that returns a small subgraph such that almost all nodes in the
input graph are adjacent to the sampled subgraph. An MIS on the sampled
subgraph provides a 2-ruling set for a large fraction of the input graph. As a
technical challenge, we must handle the remaining part of the graph, which
might still be relatively large. We overcome this challenge by showing useful
structural properties of the remaining graph and by showing that running our
process twice yields a 2-ruling set of the original input graph with high
probability.
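The sampling idea can be sketched sequentially in a hypothetical simplified form (not the paper's actual procedure): sample nodes, take an MIS of the sampled subgraph, then finish off the nodes with no sampled neighbor by a second MIS.

```python
import random

def greedy_mis(nodes, adj):
    # maximal independent set of the subgraph induced by `nodes`
    mis, blocked = set(), set()
    for v in sorted(nodes):
        if v not in blocked:
            mis.add(v)
            blocked.add(v)
            blocked |= adj[v] & set(nodes)
    return mis

def two_ruling_set(nodes, adj, p=0.5, seed=0):
    rng = random.Random(seed)
    sampled = {v for v in nodes if rng.random() < p}
    mis1 = greedy_mis(sampled, adj)
    # sampled nodes and neighbors of sampled nodes are within distance 2
    # of mis1 (a sampled node is in mis1 or adjacent to it)
    covered = sampled | {v for v in nodes if adj[v] & sampled}
    # leftover nodes have no sampled neighbor, hence no neighbor in mis1,
    # so a second MIS on them stays independent from mis1
    leftover = set(nodes) - covered
    mis2 = greedy_mis(leftover, adj)
    return mis1 | mis2

# demo on a path graph 0-1-2-...-9
adj = {v: set() for v in range(10)}
for i in range(9):
    adj[i].add(i + 1)
    adj[i + 1].add(i)
print(two_ruling_set(list(range(10)), adj))
```

Because leftover nodes have no sampled neighbors, the two independent sets cannot conflict, and every node ends up within distance 2 of the returned set.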
Being Fast Means Being Chatty: The Local Information Cost of Graph Spanners
We introduce a new measure for quantifying the amount of information that the
nodes in a network need to learn to jointly solve a graph problem. We show that
the local information cost (LIC) presents a natural lower bound on
the communication complexity of distributed algorithms. For the synchronous
CONGEST-KT1 model, where each node has initial knowledge of its neighbors' IDs,
we prove a lower bound, stated in terms of the local information cost, on the
number of bits required by any algorithm that solves a graph problem within a
given number of rounds and with a given error probability. Our result is the
first lower bound that yields a general trade-off between communication and
time for graph problems in the CONGEST-KT1 model.
We demonstrate how to apply the local information cost by deriving a lower
bound on the communication complexity of computing a sparse spanner. Our main
result is that any sufficiently fast spanner algorithm must have large
communication complexity in the CONGEST model under the KT1 assumption;
previously, only a trivial lower bound was known for this problem.
A consequence of our lower bound is that achieving both time- and
communication-optimality is impossible when designing a distributed spanner
algorithm. In light of the work of King, Kutten, and Thorup (PODC 2015), this
shows that computing a minimum spanning tree can be done significantly faster
than finding a spanner when considering algorithms with small
communication complexity. Our result also implies time complexity lower bounds
for constructing a spanner in the node-congested clique of Augustine et al.
(2019) and in the push-pull gossip model with limited bandwidth.
A Deterministic Algorithm for the MST Problem in Constant Rounds of Congested Clique
In this paper, we show that the Minimum Spanning Tree problem can be solved
\emph{deterministically} in O(1) rounds of the Congested Clique model.
In the Congested Clique model, there are n players
that perform computation in synchronous rounds. Each round consists of a phase
of local computation and a phase of communication, in which each pair of
players is allowed to exchange O(log n) bit messages.
The study of this model began with the MST problem: in the paper by Lotker
et al. [SPAA'03, SICOMP'05] that defines the Congested Clique
model, the authors give a deterministic O(log log n) round algorithm, which
improved over a trivial O(log n) round
adaptation of Bor\r{u}vka's algorithm.
There was a sequence of gradual improvements to this result: an
O(log log log n) round algorithm by Hegeman et al. [PODC'15], an
O(log^* n) round algorithm by Ghaffari and Parter [PODC'16], and
an O(1) round algorithm by Jurdzi\'nski and Nowicki [SODA'18]. However,
all those algorithms were randomized, which left open the question of the
existence of any deterministic o(log log n) round algorithm for the
Minimum Spanning Tree problem.
Our result resolves this question and establishes that O(1)
rounds is enough to solve the MST problem in the Congested Clique
model, even if we are not allowed to use any randomness.
Furthermore, the amount of communication needed by the algorithm makes it
applicable to some variants of the model
- …