
    Deterministic Distributed Edge-Coloring via Hypergraph Maximal Matching

    We present a deterministic distributed algorithm that computes a $(2\Delta-1)$-edge-coloring, or even list-edge-coloring, in any $n$-node graph with maximum degree $\Delta$, in $O(\log^7 \Delta \log n)$ rounds. This answers one of the long-standing open questions of \emph{distributed graph algorithms} from the late 1980s, which asked for a polylogarithmic-time algorithm. See, e.g., Open Problem 4 in the Distributed Graph Coloring book of Barenboim and Elkin. The previous best round complexities were $2^{O(\sqrt{\log n})}$ by Panconesi and Srinivasan [STOC'92] and $\tilde{O}(\sqrt{\Delta}) + O(\log^* n)$ by Fraigniaud, Heinrich, and Kosowski [FOCS'16]. A corollary of our deterministic list-edge-coloring also improves the randomized complexity of $(2\Delta-1)$-edge-coloring to poly$(\log\log n)$ rounds. The key technical ingredient is a deterministic distributed algorithm for \emph{hypergraph maximal matching}, which we believe will be of interest beyond this result. In any hypergraph of rank $r$ --- where each hyperedge has at most $r$ vertices --- with $n$ nodes and maximum degree $\Delta$, this algorithm computes a maximal matching in $O(r^5 \log^{6+\log r} \Delta \log n)$ rounds. This hypergraph matching algorithm and its extensions lead to a number of other results. In particular, we obtain a polylogarithmic-time deterministic distributed maximal independent set algorithm for graphs with bounded neighborhood independence, hence answering Open Problem 5 of Barenboim and Elkin's book; a $(\log \Delta/\varepsilon)^{O(\log(1/\varepsilon))}$-round deterministic algorithm for $(1+\varepsilon)$-approximation of maximum matching; and a quasi-polylogarithmic-time deterministic distributed algorithm for orienting $\lambda$-arboricity graphs with out-degree at most $(1+\varepsilon)\lambda$, for any constant $\varepsilon>0$, hence partially answering Open Problem 10 of Barenboim and Elkin's book.
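    To see why palettes of size $2\Delta-1$ always suffice, note that an edge shares an endpoint with at most $2(\Delta-1) = 2\Delta-2$ other edges, so a greedy pass can always find a free color in its list. The sketch below is a minimal sequential illustration of this counting argument, not the deterministic distributed algorithm of the paper; the graph representation and function names are assumptions made for illustration.

```python
# Sequential greedy (2*Delta - 1) list-edge-coloring sketch.
# Each edge conflicts with at most 2*(Delta - 1) incident edges, so any list
# of 2*Delta - 1 colors per edge always contains a free color.

def greedy_list_edge_coloring(edges, lists):
    """edges: list of (u, v) pairs; lists[i]: allowed colors for edge index i."""
    color = {}                       # edge index -> chosen color
    incident = {}                    # vertex -> set of edge indices touching it
    for i, (u, v) in enumerate(edges):
        incident.setdefault(u, set()).add(i)
        incident.setdefault(v, set()).add(i)

    for i, (u, v) in enumerate(edges):
        # Colors already used by edges sharing an endpoint with edge i.
        used = {color[j] for j in (incident[u] | incident[v]) if j in color}
        # |lists[i]| >= 2*Delta - 1 and |used| <= 2*Delta - 2, so a free color exists.
        color[i] = next(c for c in lists[i] if c not in used)
    return color

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]        # maximum degree Delta = 3
    palette = list(range(2 * 3 - 1))                # 2*Delta - 1 = 5 colors
    lists = {i: palette for i in range(len(edges))}
    print(greedy_list_edge_coloring(edges, lists))
```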

    Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms

    There has been significant progress in understanding the parallelism inherent to iterative sequential algorithms: for many classic algorithms, the depth of the dependence structure is now well understood, and scheduling techniques have been developed to exploit this shallow dependence structure for efficient parallel implementations. A related, applied research strand has studied methods by which certain iterative task-based algorithms can be efficiently parallelized via relaxed concurrent priority schedulers. These allow for high concurrency when inserting and removing tasks, at the cost of executing superfluous work due to the relaxed semantics of the scheduler. In this work, we take a step towards unifying these two research directions, by showing that there exists a family of relaxed priority schedulers that can efficiently and deterministically execute classic iterative algorithms such as greedy maximal independent set (MIS) and matching. Our primary result shows that, given a randomized scheduler with an expected relaxation factor of $k$ in terms of the maximum allowed priority inversions on a task, and any graph on $n$ vertices, the scheduler is able to execute greedy MIS with only an additive factor of poly$(k)$ expected additional iterations compared to an exact (but not scalable) scheduler. This counter-intuitive result demonstrates that the overhead of relaxation when computing MIS does not depend on the size or structure of the input graph. Experimental results show that this overhead can be clearly offset by the gain in performance due to the highly scalable scheduler. In sum, we present an efficient method to deterministically parallelize iterative sequential algorithms, with provable runtime guarantees in terms of the number of executed tasks to completion. Comment: PODC 2018, pages 377-386 in proceedings
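    As a rough illustration of the setting, the toy simulation below runs greedy MIS under a $k$-relaxed scheduler that may return any of the $k$ highest-priority remaining tasks; a pop that arrives "too early" (a lower-priority undecided neighbour still exists) counts as wasted work. This is a hypothetical sequential model of a relaxed scheduler, not the paper's construction; the relaxation rule and all names are assumptions.

```python
import random

def relaxed_greedy_mis(adj, k, seed=0):
    """Greedy MIS under a k-relaxed priority scheduler (toy sequential simulation).

    adj: dict vertex -> set of neighbours.  Returns (mis, wasted_pops)."""
    rng = random.Random(seed)
    prio = {v: rng.random() for v in adj}        # random priorities = greedy order
    undecided = set(adj)
    mis, wasted = set(), 0

    while undecided:
        # k-relaxed pop: choose uniformly among the k smallest-priority undecided tasks.
        top = sorted(undecided, key=prio.get)[:k]
        v = rng.choice(top)
        if any(u in undecided and prio[u] < prio[v] for u in adj[v]):
            wasted += 1                          # popped out of order: retry later
            continue
        # All lower-priority neighbours are decided, so v's greedy decision is fixed.
        if not any(u in mis for u in adj[v]):
            mis.add(v)
        undecided.discard(v)
    return mis, wasted

if __name__ == "__main__":
    # Path graph 0-1-2-3-4.
    adj = {i: set() for i in range(5)}
    for a, b in [(0, 1), (1, 2), (2, 3), (3, 4)]:
        adj[a].add(b); adj[b].add(a)
    print(relaxed_greedy_mis(adj, k=2))
```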

    Super-Fast Distributed Algorithms for Metric Facility Location

    This paper presents a distributed O(1)-approximation algorithm, with expected $O(\log\log n)$ running time, in the $\mathcal{CONGEST}$ model for the metric facility location problem on a size-$n$ clique network. Though metric facility location has been considered by a number of researchers in low-diameter settings, this is the first sub-logarithmic-round algorithm for the problem that yields an O(1)-approximation in the setting of non-uniform facility opening costs. In order to obtain this result, our paper makes three main technical contributions. First, we show a new lower bound for metric facility location, extending the lower bound of Bădoiu et al. (ICALP 2005) that applies only to the special case of uniform facility opening costs. Next, we demonstrate a reduction of the distributed metric facility location problem to the problem of computing an O(1)-ruling set of an appropriate spanning subgraph. Finally, we present a sub-logarithmic-round (in expectation) algorithm for computing a 2-ruling set in a spanning subgraph of a clique. Our algorithm accomplishes this by using a combination of randomized and deterministic sparsification. Comment: 15 pages, 2 figures. This is the full version of a paper that appeared in ICALP 201
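    For context, a $\beta$-ruling set is an independent set $S$ such that every vertex of the graph is within distance $\beta$ of some member of $S$. The sketch below computes a 2-ruling set with a simple sequential greedy sweep; it only illustrates the object being computed, not the paper's sub-logarithmic-round distributed algorithm or its randomized/deterministic sparsification. Names and representation are assumptions.

```python
from collections import deque

def greedy_two_ruling_set(adj):
    """Sequential sketch of a 2-ruling set: an independent set S such that
    every vertex is within distance 2 of S.  adj: dict vertex -> set of neighbours."""
    ruled = set()                      # vertices already within distance 2 of S
    S = set()
    for v in adj:                      # any fixed sweep order works sequentially
        if v in ruled:
            continue
        S.add(v)                       # v is not yet dominated, so pick it
        # Mark everything within distance 2 of v as ruled (BFS to depth 2).
        frontier, dist = deque([(v, 0)]), {v: 0}
        while frontier:
            u, d = frontier.popleft()
            ruled.add(u)
            if d < 2:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = d + 1
                        frontier.append((w, d + 1))
    return S

if __name__ == "__main__":
    # 6-cycle on vertices 0..5.
    adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    print(sorted(greedy_two_ruling_set(adj)))
```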

    Optimal Dynamic Distributed MIS

    Finding a maximal independent set (MIS) in a graph is a cornerstone task in distributed computing. The local nature of an MIS allows for fast solutions in a static distributed setting, which are logarithmic in the number of nodes or in their degrees. The result trivially applies for the dynamic distributed model, in which edges or nodes may be inserted or deleted. In this paper, we take a different approach which exploits locality to the extreme, and show how to update an MIS in a dynamic distributed setting, either \emph{synchronous} or \emph{asynchronous}, with only \emph{a single adjustment} and in a single round, in expectation. These strong guarantees hold for the \emph{complete fully dynamic} setting: insertions and deletions, of edges as well as nodes, gracefully and abruptly. This strongly separates the static and dynamic distributed models, as super-constant lower bounds exist for computing an MIS in the former. Our results are obtained by a novel analysis of the surprisingly simple solution of carefully simulating the greedy \emph{sequential} MIS algorithm with a random ordering of the nodes. As such, our algorithm has a direct application as a 3-approximation algorithm for correlation clustering. This adds to the important toolbox of distributed graph decompositions, which are widely used as crucial building blocks in distributed computing. Finally, our algorithm enjoys a useful \emph{history-independence} property, meaning the output is independent of the history of topology changes that constructed the graph. This means the output cannot be chosen, or even biased, by the adversary in case its goal is to prevent us from optimizing some objective function. Comment: 19 pages including appendix and references
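    The random-order greedy MIS that the paper simulates has a purely local characterization: a vertex is in the MIS iff none of its lower-priority neighbours is in the MIS. The memoized sketch below evaluates that rule directly; it is only meant to show why a topology change can affect only the decisions that depend on the changed edge, and why the output is independent of the update history, not the paper's distributed update procedure. Names and structure are assumptions.

```python
import random
from functools import lru_cache

def greedy_mis_with_random_order(adj, seed=0):
    """Greedy MIS induced by a uniformly random vertex order.

    A vertex joins the MIS iff no lower-priority neighbour joins it.  The output
    depends only on the current graph and the priorities, never on the sequence
    of insertions/deletions that produced the graph (history independence)."""
    rng = random.Random(seed)
    prio = {v: rng.random() for v in adj}        # distinct priorities almost surely

    @lru_cache(maxsize=None)
    def in_mis(v):
        # Recurse only toward lower-priority neighbours, so the recursion terminates.
        return all(not in_mis(u) for u in adj[v] if prio[u] < prio[v])

    return {v for v in adj if in_mis(v)}

if __name__ == "__main__":
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}   # triangle with a pendant vertex
    print(sorted(greedy_mis_with_random_order(adj)))
```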