Derandomizing Local Distributed Algorithms under Bandwidth Restrictions
This paper addresses the cornerstone family of local problems in distributed computing, and investigates the curious gap between randomized and deterministic solutions under bandwidth restrictions.
Our main contribution is a set of tools for derandomizing solutions to local problems when the n nodes can only send O(log n)-bit messages in each round of communication. We combine bounded independence, which we show to be sufficient for some algorithms, with the method of conditional expectations and with additional machinery, to obtain the following results.
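The method of conditional expectations mentioned above can be illustrated on a toy sequential example. The sketch below derandomizes a random cut (a classic textbook instance of the technique, not the paper's distributed algorithm): each vertex is fixed to the side that keeps the conditional expected cut size from decreasing, which guarantees at least m/2 cut edges.

```python
# Sketch: the method of conditional expectations on a toy Max-Cut instance.
# Illustrative of the general derandomization technique only; the paper
# applies it to distributed local problems under bandwidth limits.

def derandomized_max_cut(n, edges):
    """Fix each vertex's side greedily; a uniformly random cut has
    expected size m/2, and each greedy choice preserves that bound."""
    side = {}
    for v in range(n):
        # Edges to already-fixed neighbors are decided now; edges to
        # unfixed neighbors are cut with probability 1/2 either way.
        gain_0 = sum(1 for (a, b) in edges
                     if (a == v and side.get(b) == 1) or (b == v and side.get(a) == 1))
        gain_1 = sum(1 for (a, b) in edges
                     if (a == v and side.get(b) == 0) or (b == v and side.get(a) == 0))
        side[v] = 0 if gain_0 >= gain_1 else 1
    cut = sum(1 for (a, b) in edges if side[a] != side[b])
    return side, cut

# 4-cycle with one chord: m = 5, so the cut must have at least 3 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
side, cut = derandomized_max_cut(4, edges)
assert cut >= 3
```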
First, we show that in the Congested Clique model, which allows all-to-all communication, there is a deterministic maximal independent set (MIS) algorithm that runs in O(log^2 Delta) rounds, where Delta is the maximum degree. When Delta=O(n^(1/3)), the bound improves to O(log Delta).
Adapting the above to the CONGEST model gives an O(D log^2 n)-round deterministic MIS algorithm, where D is the diameter of the graph. Apart from a previous unproven claim of an O(D log^3 n)-round algorithm, the only known deterministic solutions for the CONGEST model are a coloring-based O(Delta + log^* n)-round algorithm, where Delta is the maximum degree of the graph, and a 2^O(sqrt(log n log log n))-round algorithm, which is super-polylogarithmic in n.
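For background, the randomized baseline that such results derandomize is the classic Luby-style MIS procedure, sketched here as a sequential simulation (the distributed rounds correspond to the while-loop iterations; this is the generic textbook algorithm, not the paper's):

```python
import random

# Luby-style randomized MIS, simulated sequentially. Each "round", every
# undecided node draws a random value and joins the MIS if it beats all
# undecided neighbors; MIS nodes and their neighbors are then removed.

def luby_mis(adj, seed=0):
    rng = random.Random(seed)
    alive = set(adj)  # nodes still undecided
    mis = set()
    while alive:
        r = {v: rng.random() for v in alive}
        winners = {v for v in alive
                   if all(r[v] < r[u] for u in adj[v] if u in alive)}
        mis |= winners
        # Winners and their neighbors are decided; remove them.
        alive -= winners
        alive -= {u for v in winners for u in adj[v]}
    return mis

# 5-cycle example.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
mis = luby_mis(adj)
assert all(u not in mis for v in mis for u in adj[v])            # independent
assert all(v in mis or any(u in mis for u in adj[v]) for v in adj)  # maximal
```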
In addition, we deterministically construct a (2k-1)-spanner with O(kn^(1+1/k) log n) edges in O(k log n) rounds in the Congested Clique model. For comparison, in the more stringent CONGEST model, where the communication graph is identical to the input graph, the best deterministic algorithm for constructing a (2k-1)-spanner with O(kn^(1+1/k)) edges runs in O(n^(1-1/k)) rounds.
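As background for the spanner result, the classic sequential greedy (2k-1)-spanner (due to Althöfer et al., and a standard point of comparison rather than the distributed construction above) can be sketched as follows: an edge is kept only if its endpoints are currently farther apart than 2k-1 in the spanner built so far.

```python
from collections import deque

# Greedy (2k-1)-spanner: keep an edge iff the current spanner distance
# between its endpoints exceeds 2k-1. Kept edges then form a subgraph
# whose distances stretch original distances by at most 2k-1.

def bounded_dist(adj, s, t, limit):
    """BFS distance from s to t, exploring at most `limit` hops;
    returns limit + 1 if t is farther than that."""
    if s == t:
        return 0
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        if dist[v] == limit:
            continue
        for u in adj.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                if u == t:
                    return dist[u]
                q.append(u)
    return limit + 1

def greedy_spanner(n, edges, k):
    adj = {v: [] for v in range(n)}
    spanner = []
    for (u, v) in edges:
        if bounded_dist(adj, u, v, 2 * k - 1) > 2 * k - 1:
            spanner.append((u, v))
            adj[u].append(v)
            adj[v].append(u)
    return spanner

# k = 1 (stretch 1) must keep every edge of a triangle; k = 2 (stretch 3)
# can drop one edge of a 4-cycle.
assert len(greedy_spanner(3, [(0, 1), (1, 2), (0, 2)], 1)) == 3
assert len(greedy_spanner(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2)) == 3
```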
Towards a complexity theory for the congested clique
The congested clique model of distributed computing has been receiving
attention as a model for densely connected distributed systems. While there has
been significant progress on the side of upper bounds, we have very little in
terms of lower bounds for the congested clique; indeed, it is now known that
proving explicit congested clique lower bounds is as difficult as proving
circuit lower bounds.
In this work, we use various more traditional complexity-theoretic tools to
build a clearer picture of the complexity landscape of the congested clique:
-- Nondeterminism and beyond: We introduce the nondeterministic congested
clique model (analogous to NP) and show that there is a natural canonical
problem family that captures all problems solvable in constant time with
nondeterministic algorithms. We further generalise these notions by introducing
the constant-round decision hierarchy (analogous to the polynomial hierarchy).
-- Non-constructive lower bounds: We lift the prior non-uniform counting
arguments to a general technique for proving non-constructive uniform lower
bounds for the congested clique. In particular, we prove a time hierarchy
theorem for the congested clique, showing that there are decision problems of
essentially all complexities, both in the deterministic and nondeterministic
settings.
-- Fine-grained complexity: We map out relationships between various natural
problems in the congested clique model, arguing that a reduction-based
complexity theory currently gives us a fairly good picture of the complexity
landscape of the congested clique.
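The nondeterministic model introduced above has a "prover supplies a certificate, nodes verify locally in constant rounds" flavor, which the following toy simulation illustrates (an assumed illustrative example, not one from the paper): a prover claims the graph contains a triangle by naming its three vertices, every node checks the claimed edges it is incident to, and the network accepts iff all nodes accept.

```python
# Toy nondeterministic verification: certificate = the three vertices of a
# claimed triangle; each node locally checks only the claimed edges that
# touch it. Accept iff every node accepts.

def node_accepts(v, nbrs, cert):
    a, b, c = cert
    for (x, y) in [(a, b), (b, c), (a, c)]:
        # v verifies only the triangle edges it is an endpoint of.
        if v == x and y not in nbrs:
            return False
        if v == y and x not in nbrs:
            return False
    return True

def network_accepts(adj, cert):
    return all(node_accepts(v, set(adj[v]), cert) for v in adj)

# Graph: triangle 0-1-2 plus a pendant vertex 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
assert network_accepts(adj, (0, 1, 2))      # correct certificate accepted
assert not network_accepts(adj, (0, 1, 3))  # bogus certificate rejected
```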
Improved Deterministic Network Decomposition
Network decomposition is a central tool in distributed graph algorithms. We
present two improvements on the state of the art for network decomposition,
which thus lead to improvements in the (deterministic and randomized)
complexity of several well-studied graph problems.
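For readers new to the notion: a (C, D) network decomposition colors the nodes with C colors so that every connected component of each color class has diameter at most D; problems are then solved color by color, with each small-diameter component acting as one coordinated unit. A minimal sequential checker for the two quality parameters (a background sketch, not the paper's algorithm):

```python
from collections import deque

# Check the quality (C, D) of a candidate network decomposition given as a
# node coloring: C = number of colors used, D = maximum diameter of any
# connected component induced by a single color class.

def decomposition_quality(adj, color):
    def bfs(src, allowed):
        """Distances from src inside the induced subgraph on `allowed`."""
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u in allowed and u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        return dist
    max_diam = 0
    for c in set(color.values()):
        cls = {v for v in adj if color[v] == c}
        for v in cls:
            reach = bfs(v, cls)
            max_diam = max(max_diam, max(reach.values()))
    return len(set(color.values())), max_diam

# Path 0-1-2-3 colored alternately: 2 colors, every color component is a
# single vertex, so the decomposition is a (2, 0) one.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert decomposition_quality(adj, {0: 0, 1: 1, 2: 0, 3: 1}) == (2, 0)
```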
- We provide a deterministic distributed network decomposition algorithm with
polylogarithmic round complexity, using O(log n)-bit messages. This improves
on the algorithms of Rozho\v{n} and Ghaffari [STOC'20]: their faster algorithm
used large messages, and their algorithm with O(log n)-bit messages had a
higher round complexity. This directly leads to similar improvements for a
wide range of deterministic and randomized distributed algorithms whose
solution relies on network decomposition, including the general distributed
derandomization of Ghaffari, Kuhn, and Harris [FOCS'18].
- One drawback of the algorithm of Rozho\v{n} and Ghaffari, in the CONGEST
model, was its dependence on the length of the node identifiers. Because of
this, for instance, the algorithm could not be used in the shattering
framework in the CONGEST model. Thus, the state-of-the-art randomized
complexity of several problems in this model remained with an additive
2^O(sqrt(log log n)) term, which was a clear leftover of the older network
decomposition complexity [Panconesi and Srinivasan STOC'92]. We present a
modified version that remedies this, constructing a decomposition whose
quality does not depend on the identifiers, and thus improves the randomized
round complexity for various problems.