Distributed coloring in sparse graphs with fewer colors
This paper is concerned with efficiently coloring sparse graphs in the
distributed setting with as few colors as possible. According to the celebrated
Four Color Theorem, planar graphs can be colored with at most 4 colors, and the
proof gives a (sequential) quadratic algorithm finding such a coloring. A
natural problem is to improve this complexity in the distributed setting. Using
the fact that planar graphs contain linearly many vertices of degree at most 6,
Goldberg, Plotkin, and Shannon obtained a deterministic distributed algorithm
coloring n-vertex planar graphs with 7 colors in O(log n) rounds. Here, we
show how to color planar graphs with 6 colors in polylog(n) rounds.
Our algorithm indeed works more generally in the list-coloring setting and for
sparse graphs (for such graphs we reduce by at least one the number of colors
used by an efficient algorithm of Barenboim and Elkin, at the expense of
a slightly worse complexity). Our bounds on the number of colors turn out to be
quite sharp in general. Among other results, we show that no distributed
algorithm can color every n-vertex planar graph with 4 colors in o(n) rounds.
Comment: 16 pages, 4 figures - An extended abstract of this work was presented
at PODC'18 (ACM Symposium on Principles of Distributed Computing).
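The low-degree-vertex fact this abstract leans on already yields few colors sequentially: every planar graph has a vertex of degree at most 5, so peeling minimum-degree vertices and coloring greedily in reverse order uses at most 6 colors. A minimal sequential sketch of that classic argument (illustrative only; this is not the paper's distributed algorithm, and the function name is an assumption):

```python
def greedy_degeneracy_coloring(adj):
    """Greedy coloring along a minimum-degree peeling order.

    adj: dict mapping each vertex to the set of its neighbours.
    For a planar graph, every peeled vertex has degree <= 5 at peeling
    time, so the greedy pass below never needs more than 6 colors.
    """
    remaining = {v: set(ns) for v, ns in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))  # min-degree vertex
        order.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
        del remaining[v]
    color = {}
    for v in reversed(order):  # color in reverse peeling order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:  # smallest color absent from the neighbourhood
            c += 1
        color[v] = c
    return color

# Example: K4 minus an edge (planar); greedy along the peeling order uses 3 colors.
adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
coloring = greedy_degeneracy_coloring(adj)
```

The distributed setting is exactly where this sequential peeling breaks down: the peeling order is inherently global, which is why the paper needs different machinery to reach 6 colors in polylog(n) rounds.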
Efficient Distributed Decomposition and Routing Algorithms in Minor-Free Networks and Their Applications
In the LOCAL model, low-diameter decomposition is a useful tool in designing
algorithms, as it allows us to shift from the general graph setting to the
low-diameter graph setting, where brute-force information gathering can be done
efficiently. Recently, Chang and Su [PODC 2022] showed that any
high-conductance network excluding a fixed minor contains a high-degree vertex,
so the entire graph topology can be gathered to one vertex efficiently in the
CONGEST model using expander routing. Therefore, in networks excluding a fixed
minor, many problems that can be solved efficiently in LOCAL via low-diameter
decomposition can also be solved efficiently in CONGEST via expander
decomposition.
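The decomposition notion itself has a simple sequential analogue: grow a BFS ball from an uncovered vertex until the next layer would add only a small fraction of new vertices, carve the ball off as a cluster, and repeat; each ball then has logarithmic radius. A minimal sketch of this classic ball-growing argument (illustrative only; the distributed LOCAL/CONGEST algorithms discussed here are far more involved, and the names and `eps` parameter are assumptions):

```python
def low_diameter_decomposition(adj, eps=0.5):
    """Carve clusters by BFS ball growing.

    A ball keeps growing while the next BFS layer enlarges it by more
    than a (1 + eps) factor, so each cluster has radius O(log n / eps).
    adj: dict mapping each vertex to the set of its neighbours.
    Returns a dict mapping each vertex to a cluster id.
    """
    cluster = {}
    next_id = 0
    for s in adj:
        if s in cluster:
            continue  # already carved into an earlier ball
        ball, frontier = {s}, [s]
        while True:
            layer = {u for v in frontier for u in adj[v]
                     if u not in ball and u not in cluster}
            if len(layer) <= eps * len(ball):  # boundary is small: stop
                break
            ball |= layer
            frontier = list(layer)
        for v in ball:
            cluster[v] = next_id
        next_id += 1
    return cluster

# Example: a path on 8 vertices is carved into balls of two vertices each.
adj = {i: {j for j in (i - 1, i + 1) if 0 <= j < 8} for i in range(8)}
clusters = low_diameter_decomposition(adj)
```

The stopping rule is what bounds the number of inter-cluster edges: a ball only stops growing when its outgoing boundary is small relative to its size.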
In this work, we show improved decomposition and routing algorithms for
networks excluding a fixed minor in the CONGEST model. Our algorithms cost
rounds deterministically. For bounded-degree
graphs, our algorithms finish in
rounds.
Our algorithms have a wide range of applications, including the following
results in CONGEST.
1. A -approximate maximum independent set in a network
excluding a fixed minor can be computed deterministically in
rounds, nearly matching the
lower bound of Lenzen and Wattenhofer [DISC
2008].
2. Property testing of any additive minor-closed property can be done
deterministically in rounds if is a constant or
rounds if the maximum degree
is a constant, nearly matching the lower
bound of Levi, Medina, and Ron [PODC 2018].
Comment: To appear in PODC 202
Faster Distributed Shortest Path Approximations via Shortcuts
A long series of recent results and breakthroughs has led to faster and better distributed approximation algorithms for single source shortest paths (SSSP) and related problems in the CONGEST model. The runtime of all these algorithms, however, is Omega~(sqrt{n}), regardless of the network topology, even on nice networks with a (poly)logarithmic network diameter D. While this is known to be necessary for some pathological networks, most topologies of interest are arguably not of this type.
We give the first distributed approximation algorithms for shortest paths problems that adjust to the topology they are run on, thus achieving significantly faster running times on many topologies of interest. The running time of our algorithms depends on and is close to Q, where Q is the quality of the best shortcut that exists for the given topology. While Q = Theta~(sqrt{n} + D) for pathological worst-case topologies, many topologies of interest have Q = Theta~(D), which results in near instance optimal running times for our algorithm, given the trivial Omega(D) lower bound.
The problems we consider are as follows:
- an approximate shortest path tree and SSSP distances,
- a polylogarithmic size distance label for every node such that from the labels of any two nodes alone one can determine their distance (approximately), and
- an (approximately) optimal flow for the transshipment problem.
Our algorithms have a tunable tradeoff between running time and approximation ratio. Our fastest algorithms have an arbitrarily good polynomial approximation guarantee and an essentially optimal O~(Q) running time. On the other end of the spectrum, we achieve polylogarithmic approximations in O~(Q * n^epsilon) rounds for any epsilon > 0. It seems likely that eventually, our non-trivial approximation algorithms for the SSSP tree and transshipment problem can be bootstrapped to give fast Q * 2^O(sqrt{log n log log n}) round (1+epsilon)-approximation algorithms using a recent result by Becker et al.
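The distance-labeling interface in the second bullet can be made concrete with a simple landmark scheme: store each node's distances to a few landmarks and answer queries from the two labels alone via the triangle inequality. This toy construction is far weaker than the polylogarithmic-size labels the abstract claims and is shown only to illustrate the "distance from labels alone" interface; all names here are assumptions:

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted single-source distances by BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def make_labels(adj, landmarks):
    """Label every node with its distance to each landmark."""
    tables = {l: bfs_dist(adj, l) for l in landmarks}
    return {v: {l: tables[l][v] for l in landmarks} for v in adj}

def query(label_u, label_v):
    """Upper bound on d(u, v) computed from the two labels alone."""
    return min(label_u[l] + label_v[l] for l in label_u)

# Example: a 6-cycle with two landmarks; the estimate for d(1, 4) happens to be exact.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
labels = make_labels(adj, {0, 3})
estimate = query(labels[1], labels[4])
```

With an arbitrary landmark set the estimate is only an upper bound; getting a guaranteed approximation with small labels is precisely what makes the labeling results above non-trivial.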
Exponential Speedup over Locality in MPC with Optimal Memory
Locally Checkable Labeling (LCL) problems are graph problems in which a solution is correct if it satisfies some given constraints in the local neighborhood of each node. Example problems in this class include maximal matching, maximal independent set, and coloring problems. A successful line of research has been studying the complexities of LCL problems on paths/cycles, trees, and general graphs, providing many interesting results for the LOCAL model of distributed computing. In this work, we initiate the study of LCL problems in the low-space Massively Parallel Computation (MPC) model. In particular, on forests, we provide a method that, given the complexity of an LCL problem in the LOCAL model, automatically provides an exponentially faster algorithm for the low-space MPC setting that uses optimal global memory, that is, truly linear.
While restricting to forests may seem to weaken the result, we emphasize that all known (conditional) lower bounds for the MPC setting are obtained by lifting lower bounds obtained in the distributed setting in tree-like networks (either forests or high girth graphs), and hence the problems that we study are challenging already on forests. Moreover, the most important technical feature of our algorithms is that they use optimal global memory, that is, memory linear in the number of edges of the graph. In contrast, most of the state-of-the-art algorithms use more than linear global memory. Further, they typically start with a dense graph, sparsify it, and then solve the problem on the residual graph, exploiting the relative increase in global memory. On forests, this is not possible, because the given graph is already as sparse as it can be, and using optimal memory requires new solutions.
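The "locally checkable" condition can be made concrete with maximal independent set, one of the example problems named above: a claimed solution is valid if and only if every node passes a radius-1 check on its own neighborhood. A minimal sketch simulating those per-node checks centrally (illustrative only, not from the paper):

```python
def is_valid_mis(adj, in_set):
    """Simulate the radius-1 LCL check for maximal independent set at each node.

    Independence: a node in the set must have no neighbour in the set.
    Maximality:   a node outside the set must have a neighbour in it.
    adj: vertex -> set of neighbours; in_set: vertex -> bool.
    """
    for v, neighbours in adj.items():
        if in_set[v]:
            if any(in_set[u] for u in neighbours):
                return False  # independence violated at v
        elif not any(in_set[u] for u in neighbours):
            return False  # maximality violated at v
    return True

# Example on a 3-vertex path: {0, 2} is a valid MIS, the empty set is not.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
good = {0: True, 1: False, 2: True}
bad = {0: False, 1: False, 2: False}
```

That each check only reads a constant-radius neighborhood is exactly what makes the problem an LCL, and what the lifting from LOCAL complexities to low-space MPC algorithms exploits.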