
    Parameterized Distributed Algorithms

    In this work, we initiate a thorough study of graph optimization problems parameterized by the output size in the distributed setting. In such a problem, an algorithm decides whether a solution of size bounded by $k$ exists and, if so, finds one. We study fundamental problems, including Minimum Vertex Cover (MVC), Maximum Independent Set (MaxIS), Maximum Matching (MaxM), and many others, in both the LOCAL and CONGEST distributed computation models. We present lower bounds for the round complexity of solving parameterized problems in both models, together with optimal and near-optimal upper bounds. Our results extend beyond the scope of parameterized problems. We show that any LOCAL $(1+\epsilon)$-approximation algorithm for the above problems must take $\Omega(\epsilon^{-1})$ rounds. Combined with the $(\epsilon^{-1}\log n)^{O(1)}$-round algorithm of [Ghaffari et al., 2017] and the $\Omega(\sqrt{\log n / \log\log n})$ lower bound of [Kuhn et al., 2016], the lower bounds match the upper bound up to polynomial factors in both parameters. We also show that our parameterized approach reduces the runtime of exact and approximate CONGEST algorithms for MVC and MaxM when the optimal solution is small, without knowing its size beforehand. Finally, we propose the first $o(n^2)$-round CONGEST algorithms that approximate MVC within a factor strictly smaller than 2.
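
    To make the "decide whether a solution of size at most k exists and, if so, find one" task concrete, here is a minimal sketch of the classic centralized bounded-search-tree routine for Minimum Vertex Cover. It is illustrative only and is not the paper's distributed algorithm.

```python
# Classic 2^k branching for parameterized Vertex Cover (centralized,
# illustrative only; the paper's algorithms are distributed).

def vertex_cover_at_most_k(edges, k):
    """Return a vertex cover of size <= k as a set, or None if none exists."""
    if not edges:
        return set()              # nothing left to cover
    if k == 0:
        return None               # edges remain but the budget is spent
    u, v = edges[0]               # any uncovered edge: u or v must be chosen
    for w in (u, v):              # branch on the two endpoints
        rest = [(a, b) for (a, b) in edges if w not in (a, b)]
        sub = vertex_cover_at_most_k(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

# A 4-cycle has a vertex cover of size 2 (e.g. {0, 2}) but none of size 1.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover_at_most_k(cycle, 2))  # a cover such as {0, 2}
print(vertex_cover_at_most_k(cycle, 1))  # None
```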

    Fast Distributed Approximation for Max-Cut

    Finding a maximum cut is a fundamental task in many computational settings. Surprisingly, it has been insufficiently studied in the classic distributed settings, where vertices communicate by synchronously sending messages to their neighbors according to the underlying graph, known as the $\mathcal{LOCAL}$ or $\mathcal{CONGEST}$ models. We amend this by obtaining almost optimal algorithms for Max-Cut on a wide class of graphs in these models. In particular, for any $\epsilon > 0$, we develop randomized approximation algorithms achieving a ratio of $(1-\epsilon)$ to the optimum for Max-Cut on bipartite graphs in the $\mathcal{CONGEST}$ model, and on general graphs in the $\mathcal{LOCAL}$ model. We further present efficient deterministic algorithms, including a $1/3$-approximation for Max-Dicut in our models, thus improving the best known (randomized) ratio of $1/4$. Our algorithms make non-trivial use of the greedy approach of Buchbinder et al. (SIAM Journal on Computing, 2015) for maximizing an unconstrained (non-monotone) submodular function, which may be of independent interest.
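
    For intuition, the folklore randomized baseline already achieves a $1/2$-approximation in expectation with zero communication: every vertex picks a side uniformly at random, so each edge is cut with probability $1/2$. The sketch below illustrates this baseline only; the paper's $(1-\epsilon)$-approximation algorithms are substantially more involved.

```python
# Folklore randomized baseline for Max-Cut: each vertex independently picks
# a side uniformly at random, so each edge is cut with probability 1/2 and
# the expected cut is at least |E|/2 >= OPT/2. In LOCAL/CONGEST this needs
# zero communication rounds.
import random

def random_cut(num_vertices, edges, seed=None):
    rng = random.Random(seed)
    side = [rng.randrange(2) for _ in range(num_vertices)]  # one coin per vertex
    cut = sum(1 for u, v in edges if side[u] != side[v])    # edges crossing the cut
    return side, cut

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small example graph
side, cut = random_cut(4, edges, seed=1)
print(side, cut)  # expected cut size is |E|/2 = 2.5
```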

    Towards a complexity theory for the congested clique

    The congested clique model of distributed computing has been receiving attention as a model for densely connected distributed systems. While there has been significant progress on the side of upper bounds, we have very little in terms of lower bounds for the congested clique; indeed, it is now known that proving explicit congested clique lower bounds is as difficult as proving circuit lower bounds. In this work, we use various more traditional complexity-theoretic tools to build a clearer picture of the complexity landscape of the congested clique:
    -- Nondeterminism and beyond: We introduce the nondeterministic congested clique model (analogous to NP) and show that there is a natural canonical problem family that captures all problems solvable in constant time with nondeterministic algorithms. We further generalise these notions by introducing the constant-round decision hierarchy (analogous to the polynomial hierarchy).
    -- Non-constructive lower bounds: We lift the prior non-uniform counting arguments to a general technique for proving non-constructive uniform lower bounds for the congested clique. In particular, we prove a time hierarchy theorem for the congested clique, showing that there are decision problems of essentially all complexities, both in the deterministic and nondeterministic settings.
    -- Fine-grained complexity: We map out relationships between various natural problems in the congested clique model, arguing that a reduction-based complexity theory currently gives us a fairly good picture of the complexity landscape of the congested clique.
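
    As a toy illustration of the model itself (not of the paper's results), the following sketch simulates synchronous congested clique rounds, in which every node may send one small message to every other node per round; all names and interfaces here are our own.

```python
# Toy synchronous simulator for the congested clique (illustrative only):
# n nodes, and in each round every node may send one small, O(log n)-bit
# message to every other node.

def run_congested_clique(n, local_algorithm, rounds):
    """local_algorithm(v, r, inbox) -> dict mapping target node -> message."""
    inboxes = [dict() for _ in range(n)]
    for r in range(rounds):
        outgoing = [local_algorithm(v, r, inboxes[v]) for v in range(n)]
        inboxes = [dict() for _ in range(n)]
        for v in range(n):
            for target, msg in outgoing[v].items():
                inboxes[target][v] = msg   # delivered at the start of the next round
    return inboxes

# Example: in one round, every node sends its own id to all other nodes.
final = run_congested_clique(4, lambda v, r, inbox: {u: v for u in range(4) if u != v}, 1)
print(final[0])  # {1: 1, 2: 2, 3: 3}
```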

    A Faster Distributed Single-Source Shortest Paths Algorithm

    We devise new algorithms for the single-source shortest paths (SSSP) problem with non-negative edge weights in the CONGEST model of distributed computing. While close-to-optimal solutions, in terms of the number of rounds spent by the algorithm, have recently been developed for computing SSSP approximately, the fastest known exact algorithms are still far away from matching the lower bound of $\tilde{\Omega}(\sqrt{n} + D)$ rounds by Peleg and Rubinovich [SIAM Journal on Computing 2000], where $n$ is the number of nodes in the network and $D$ is its diameter. The state of the art is Elkin's randomized algorithm [STOC 2017] that performs $\tilde{O}(n^{2/3} D^{1/3} + n^{5/6})$ rounds. We significantly improve upon this upper bound with our two new randomized algorithms for polynomially bounded integer edge weights, the first performing $\tilde{O}(\sqrt{nD})$ rounds and the second performing $\tilde{O}(\sqrt{n}\,D^{1/4} + n^{3/5} + D)$ rounds. Our bounds also compare favorably to the independent result by Ghaffari and Li [STOC 2018]. As side results, we obtain a $(1+\epsilon)$-approximation $\tilde{O}((\sqrt{n}\,D^{1/4} + D)/\epsilon)$-round algorithm for directed SSSP and a new work/depth trade-off for exact SSSP on directed graphs in the PRAM model. Presented at the 59th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2018).
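
    As a point of reference (not the paper's algorithm), the classic exact SSSP baseline in CONGEST is distributed Bellman-Ford, which needs up to $n-1$ rounds; the sketch below simulates it round by round, and it is this linear-in-$n$ behavior that the paper's bounds improve upon for moderate diameters.

```python
# Distributed Bellman-Ford, the classic exact SSSP baseline in CONGEST
# (illustrative only): in each synchronous round every node sends its
# tentative distance to its neighbors; convergence takes up to n - 1 rounds.
import math

def distributed_bellman_ford(n, adj, source):
    """adj[v] is a list of (neighbor, nonneg_weight) pairs."""
    dist = [math.inf] * n
    dist[source] = 0
    for _ in range(n - 1):              # one synchronous round per iteration
        msgs = list(dist)               # every node broadcasts dist[v] this round
        for v in range(n):
            for u, w in adj[v]:
                dist[v] = min(dist[v], msgs[u] + w)  # relax on received values
    return dist

# Example: path 0 -(1)- 1 -(2)- 2, plus a heavier direct edge 0 -(5)- 2.
adj = {0: [(1, 1), (2, 5)], 1: [(0, 1), (2, 2)], 2: [(1, 2), (0, 5)]}
print(distributed_bellman_ford(3, adj, 0))  # [0, 1, 3]
```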

    Low Diameter Graph Decompositions by Approximate Distance Computation

    In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low diameter graph decomposition with small edge cutting probabilities, with all existing techniques heavily building on single source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computations, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this poses a fundamental challenge, since the existing constructions of low diameter graph decompositions with small edge cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low diameter decompositions with small edge cutting probabilities that replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is "used" only logarithmically many times, which is of interest for capacitated problems and for simulating CONGEST algorithms on the tree into which the graph is embedded.
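
    For context, here is a minimal centralized sketch of the exponential-shift clustering of Miller et al. (SPAA 2013) that the construction builds on: each vertex draws an i.i.d. exponential shift, and every vertex joins the cluster of the center minimizing its shifted distance. The sketch uses exact Dijkstra distances; the paper's blurry ball growing technique, which makes an approximate-distance variant possible, is not reproduced here.

```python
# Centralized sketch of Miller-Peng-Xu style clustering (SPAA 2013):
# vertex u joins the cluster of the center c minimizing dist(c, u) - delta_c,
# where delta_c ~ Exp(beta). Uses exact multi-source Dijkstra; the paper
# replaces such exact distance computations with approximate ones.
import heapq, random

def mpx_clusters(n, adj, beta, seed=None):
    """adj[v]: list of (neighbor, weight) pairs; returns center[v] for all v."""
    rng = random.Random(seed)
    delta = [rng.expovariate(beta) for _ in range(n)]
    shift = max(delta)                     # make all starting keys non-negative
    heap = [(shift - delta[v], v, v) for v in range(n)]  # (key, vertex, center)
    heapq.heapify(heap)
    center = [None] * n
    while heap:
        d, v, c = heapq.heappop(heap)
        if center[v] is not None:
            continue                       # v was already claimed by a closer center
        center[v] = c
        for u, w in adj[v]:
            if center[u] is None:
                heapq.heappush(heap, (d + w, u, c))
    return center

# Example: a weighted path; larger beta gives smaller shifts, hence more clusters.
adj = {0: [(1, 1)], 1: [(0, 1), (2, 1)], 2: [(1, 1), (3, 1)], 3: [(2, 1)]}
print(mpx_clusters(4, adj, beta=1.0, seed=0))
```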

    Algorithms for the Minimum Dominating Set Problem in Bounded Arboricity Graphs: Simpler, Faster, and Combinatorial

    We revisit the minimum dominating set problem on graphs with arboricity bounded by $\alpha$. In the (standard) centralized setting, Bansal and Umboh [BU17] gave an $O(\alpha)$-approximation LP rounding algorithm. Moreover, [BU17] showed that it is NP-hard to achieve an asymptotic improvement. On the other hand, the previous two non-LP-based algorithms, by Lenzen and Wattenhofer [LW10] and Jones et al. [JLR+13], achieve an approximation factor of $O(\alpha^2)$ in linear time. There is a similar situation in the distributed setting: while there are $\text{poly}\log n$-round LP-based $O(\alpha)$-approximation algorithms [KMW06, DKM19], the best non-LP-based algorithm, by Lenzen and Wattenhofer [LW10], is an implementation of their centralized algorithm, providing an $O(\alpha^2)$-approximation within $O(\log n)$ rounds with high probability. We address the question of whether one can achieve a simple, elementary $O(\alpha)$-approximation algorithm not based on LP methods, in either the centralized or the distributed setting. We resolve this question in the affirmative. More specifically, our contribution is two-fold:
    1. In the centralized setting, we provide a surprisingly simple combinatorial algorithm that is asymptotically optimal in terms of both approximation factor and running time: an $O(\alpha)$-approximation in linear time.
    2. Based on our centralized algorithm, we design a distributed combinatorial $O(\alpha)$-approximation algorithm in the $\mathsf{CONGEST}$ model that runs in $O(\alpha \log n)$ rounds with high probability.
    Our round complexity outperforms the best LP-based distributed algorithm for a wide range of parameters.
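
    For contrast only, the classic centralized greedy below achieves an $O(\log n)$ approximation on general graphs by always picking the vertex that dominates the most not-yet-dominated vertices. The paper's combinatorial algorithm, which attains the arboricity-dependent $O(\alpha)$ factor, is different and is not reproduced here.

```python
# Classic greedy for dominating set (illustrative baseline only): repeatedly
# pick the vertex covering the most not-yet-dominated vertices (itself plus
# its neighbors). This gives an O(log n) approximation on general graphs.

def greedy_dominating_set(n, adj):
    """adj[v]: list of neighbors of v."""
    undominated = set(range(n))
    dom = set()
    while undominated:
        best = max(range(n),
                   key=lambda v: len(undominated & ({v} | set(adj[v]))))
        dom.add(best)
        undominated -= {best} | set(adj[best])
    return dom

# Example: a star with center 0 is dominated by {0} alone.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(greedy_dominating_set(4, adj))  # {0}
```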

    Distributed distance-r covering problems on sparse high-girth graphs

    We prove that the distance-$r$ dominating set, distance-$r$ connected dominating set, distance-$r$ vertex cover, and distance-$r$ connected vertex cover problems admit constant-factor approximations in the CONGEST model of distributed computing in a constant number of rounds on classes of sparse high-girth graphs. In this paper, sparse means bounded expansion, and high-girth means girth at least $4r + 2$. Our algorithm is quite simple; however, the proof of its approximation guarantee is non-trivial. To complement the algorithmic results, we show tightness of our approximation by providing a loosely matching lower bound on rings. Our result is the first to show the existence of constant-factor approximations in a constant number of rounds for distance-$r$ covering problems on non-trivial classes of graphs.
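
    As a quick illustration of the objects involved (not of the paper's algorithm), the sketch below checks the defining property of a distance-$r$ dominating set by a multi-source BFS truncated at depth $r$.

```python
# Definition check (illustrative only): D is a distance-r dominating set
# iff every vertex is within r hops of some vertex in D. A multi-source
# BFS truncated at depth r verifies this directly.
from collections import deque

def is_distance_r_dominating_set(n, adj, D, r):
    dist = [None] * n
    queue = deque()
    for v in D:                      # multi-source BFS from all of D
        dist[v] = 0
        queue.append(v)
    while queue:
        v = queue.popleft()
        if dist[v] == r:
            continue                 # do not explore past depth r
        for u in adj[v]:
            if dist[u] is None:
                dist[u] = dist[v] + 1
                queue.append(u)
    return all(d is not None for d in dist)

# Example: on the path 0-1-2-3-4, {2} is distance-2 dominating but {0} is not.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(is_distance_r_dominating_set(5, adj, {2}, 2))  # True
print(is_distance_r_dominating_set(5, adj, {0}, 2))  # False
```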