New Classes of Distributed Time Complexity
A number of recent papers -- e.g. Brandt et al. (STOC 2016), Chang et al.
(FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang &
Pettie (FOCS 2017) -- have advanced our understanding of one of the most
fundamental questions in theory of distributed computing: what are the possible
time complexity classes of LCL problems in the LOCAL model? In essence, we have
a graph problem Π in which a solution can be verified by checking all
radius-O(1) neighbourhoods, and the question is what is the smallest T such
that a solution can be computed so that each node chooses its own output based
on its radius-T neighbourhood. Here T is the distributed time complexity of Π.
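
As a toy illustration of local checkability (a sketch of our own, not part of the paper; the graph representation and function names below are assumptions), consider proper vertex colouring: a colouring is globally feasible exactly when every radius-1 neighbourhood looks valid.

# Toy sketch of local checkability (radius 1): a proper vertex colouring is
# globally feasible exactly when every node's immediate neighbourhood is valid.
# graph: {node: list of neighbours}; colouring: {node: colour}.

def neighbourhood_ok(graph, colouring, v):
    # Radius-1 check at v: no neighbour of v shares v's colour.
    return all(colouring[u] != colouring[v] for u in graph[v])

def globally_feasible(graph, colouring):
    # The solution is feasible iff every local check passes.
    return all(neighbourhood_ok(graph, colouring, v) for v in graph)

# Example: the 4-cycle 1-2-3-4-1.
cycle = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
print(globally_feasible(cycle, {1: 0, 2: 1, 3: 0, 4: 1}))  # True
print(globally_feasible(cycle, {1: 0, 2: 0, 3: 1, 4: 1}))  # False

The checker runs centrally here purely for illustration; in the LOCAL model each node would evaluate its own radius-1 check, and the question above is how large a radius-T view a node needs before it can commit to its own output.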
The time complexity classes for deterministic algorithms in bounded-degree
graphs that are known to exist by prior work are Θ(1), Θ(log* n), Θ(log n), Θ(n^{1/k}), and Θ(n). It is also known
that there are two gaps: one between ω(1) and o(log log* n), and
another between ω(log* n) and o(log n). It has been conjectured
that many more gaps exist, and that the overall time hierarchy is relatively
simple -- indeed, this is known to be the case in restricted graph families
such as cycles and grids.
We show that the picture is much more diverse than previously expected. We
present a general technique for engineering LCL problems with numerous
different deterministic time complexities, including
Θ(log^α n) for any α ≥ 1, 2^{Θ(log^α n)} for any α ≤ 1, and
Θ(n^α) for any α < 1/2 in the high end of the complexity
spectrum, and Θ(log^α log* n) for any α ≥ 1,
2^{Θ(log^α log* n)} for any α ≤ 1, and
Θ((log* n)^α) for any α ≤ 1 in the low end; here
α is a positive rational number.
Locality of not-so-weak coloring
Many graph problems are locally checkable: a solution is globally feasible if
it looks valid in all constant-radius neighborhoods. This idea is formalized in
the concept of locally checkable labelings (LCLs), introduced by Naor and
Stockmeyer (1995). Recently, Chang et al. (2016) showed that in bounded-degree
graphs, every LCL problem belongs to one of the following classes:
- "Easy": solvable in rounds with both deterministic and
randomized distributed algorithms.
- "Hard": requires at least rounds with deterministic and
rounds with randomized distributed algorithms.
Hence for any parameterized LCL problem, when we move from local problems
towards global problems, there is some point at which complexity suddenly jumps
from easy to hard. For example, for vertex coloring in d-regular graphs it is
now known that this jump is at precisely d colors: coloring with d+1 colors
is easy, while coloring with d colors is hard.
However, it is currently poorly understood where this jump takes place when
one looks at defective colorings. To study this question, we define k-partial
c-coloring as follows: nodes are labeled with numbers between 1 and c,
and every node is incident to at least k properly colored edges.
It is known that 1-partial 2-coloring (a.k.a. weak 2-coloring) is easy
for any d. As our main result, we show that k-partial 2-coloring
becomes hard as soon as k ≥ 2, no matter how large a d we have.
We also show that this is fundamentally different from k-partial
3-coloring: no matter which k we choose, the problem is always hard
for d = k but it becomes easy when d ≫ k. The same was known previously
for partial c-coloring with c ≥ 4, but the case of c = 3 was open.
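
To make this definition concrete, here is a minimal checker for the k-partial c-coloring condition (a sketch of our own, not code from the paper; the function name and graph representation are assumptions).

# Sketch of a checker for the k-partial c-coloring condition described above.
# graph: {node: list of neighbours}; labels: {node: label in 1..c}.

def is_k_partial_c_coloring(graph, labels, k, c):
    if any(not (1 <= labels[v] <= c) for v in graph):
        return False
    for v, neighbours in graph.items():
        # Count incident edges whose two endpoints carry different labels.
        properly_colored = sum(1 for u in neighbours if labels[u] != labels[v])
        if properly_colored < k:
            return False
    return True

# Example: K4 (a 3-regular graph). Giving one node a unique label yields a
# weak 2-coloring (1-partial 2-coloring) but not a 2-partial 2-coloring.
k4 = {1: [2, 3, 4], 2: [1, 3, 4], 3: [1, 2, 4], 4: [1, 2, 3]}
labels = {1: 1, 2: 2, 3: 2, 4: 2}
print(is_k_partial_c_coloring(k4, labels, k=1, c=2))  # True
print(is_k_partial_c_coloring(k4, labels, k=2, c=2))  # False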
Towards a complexity theory for the congested clique
The congested clique model of distributed computing has been receiving
attention as a model for densely connected distributed systems. While there has
been significant progress on the side of upper bounds, we have very little in
terms of lower bounds for the congested clique; indeed, it is now known that
proving explicit congested clique lower bounds is as difficult as proving
circuit lower bounds.
In this work, we use various more traditional complexity-theoretic tools to
build a clearer picture of the complexity landscape of the congested clique:
-- Nondeterminism and beyond: We introduce the nondeterministic congested
clique model (analogous to NP) and show that there is a natural canonical
problem family that captures all problems solvable in constant time with
nondeterministic algorithms. We further generalise these notions by introducing
the constant-round decision hierarchy (analogous to the polynomial hierarchy).
-- Non-constructive lower bounds: We lift the prior non-uniform counting
arguments to a general technique for proving non-constructive uniform lower
bounds for the congested clique. In particular, we prove a time hierarchy
theorem for the congested clique, showing that there are decision problems of
essentially all complexities, both in the deterministic and nondeterministic
settings.
-- Fine-grained complexity: We map out relationships between various natural
problems in the congested clique model, arguing that a reduction-based
complexity theory currently gives us a fairly good picture of the complexity
landscape of the congested clique.
A Breezing Proof of the KMW Bound
In their seminal paper from 2004, Kuhn, Moscibroda, and Wattenhofer (KMW)
proved a hardness result for several fundamental graph problems in the LOCAL
model: For any (randomized) algorithm, there are input graphs with n nodes
and maximum degree Δ on which Ω(min{√(log n / log log n), log Δ / log log Δ}) (expected) communication rounds are
required to obtain polylogarithmic approximations to a minimum vertex cover,
minimum dominating set, or maximum matching. Via reduction, this hardness
extends to symmetry breaking tasks like finding maximal independent sets or
maximal matchings. Today, more than 15 years later, there is still no proof
of this result that is easy on the reader. Setting out to change this, in this
work, we provide a fully self-contained and simple proof of the KMW
lower bound. The key argument is algorithmic, and it relies on an invariant
that can be readily verified from the generation rules of the lower bound
graphs.
Constant Space and Non-Constant Time in Distributed Computing
While the relationship of time and space is an established topic in traditional centralised complexity theory, this is not the case in distributed computing. We aim to remedy this by studying the time and space complexity of algorithms in a weak message-passing model of distributed computing. While a constant number of communication rounds implies a constant number of states visited during the execution, the other direction is not clear at all. We show that indeed, there exist non-trivial graph problems that are solvable by constant-space algorithms but that require a non-constant running time. Somewhat surprisingly, this holds even when restricted to the class of only cycle and path graphs. Our work provides us with a new complexity class for distributed computing and raises interesting questions about the existence of further combinations of time and space complexity.
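
As a toy illustration (our own example in Python, not one of the problems studied in the paper, and with a simplified simulation of the model), the following sketch runs a synchronous message-passing algorithm on a path in which every node keeps only a constant number of states, yet the number of rounds grows linearly with the path length: one-bit waves spread inward from the two endpoints, and each node outputs the parity of its distance to the nearest endpoint.

# Toy simulation: constant space per node, non-constant running time.
# Path on nodes 0..n-1; every node stores only a three-valued output and a
# one-bit outgoing message, yet roughly n/2 rounds pass before all nodes halt.

def simulate_path(n):
    nbrs = {v: [u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)}
    output = {v: None for v in range(n)}       # constant-size state per node
    outbox = {}
    for v in range(n):                         # round 0: endpoints decide "0"
        if len(nbrs[v]) == 1:
            output[v] = 0
            outbox[v] = 1                      # tell neighbours: your parity is 1
    rounds = 0
    while any(output[v] is None for v in range(n)):
        rounds += 1
        inbox = {v: [outbox[u] for u in nbrs[v] if u in outbox] for v in range(n)}
        outbox = {}
        for v in range(n):
            if output[v] is None and inbox[v]:
                output[v] = inbox[v][0]        # waves from both sides agree on ties
                outbox[v] = 1 - output[v]      # forward the flipped parity
    return output, rounds

for n in (4, 8, 16):
    print(n, simulate_path(n)[1])              # rounds used grows linearly with n

Each node's state here is O(1) bits independent of n, which is the sense in which the algorithm uses constant space, while its running time grows with n; the paper establishes this kind of separation for non-trivial graph problems in a weaker message-passing model.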