    Parallel algorithms for two processors precedence constraint scheduling

    The final publication is available at link.springer.com. Peer reviewed. Postprint (author's final draft).

    Low Diameter Graph Decompositions by Approximate Distance Computation

    In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low diameter graph decompositions with small edge cutting probabilities, with all existing techniques heavily building on single source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computation, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this poses a fundamental challenge, since the existing constructions of low diameter graph decompositions with small edge cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low diameter decompositions with small edge cutting probabilities that replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is "used" only logarithmically many times, which is of interest for capacitated problems and for simulating CONGEST algorithms on the tree into which the graph is embedded.
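
    To make the "clever algorithmic idea of Miller et al. (SPAA 2013)" mentioned above concrete, here is a minimal Python sketch of exponential-shift clustering on an unweighted graph; the function name `mpx_decompose` and the exact multi-source search are illustrative assumptions, and the paper's contribution is precisely to replace the exact distance computations used here with approximate ones via blurry ball growing.

```python
import heapq
import random

def mpx_decompose(adj, beta):
    """Illustrative sketch of exponential-shift clustering in the vein of
    Miller et al. (SPAA 2013); names and interface are hypothetical.

    adj:  dict mapping each vertex to an iterable of neighbors (unweighted).
    beta: rate parameter; larger beta yields smaller clusters but cuts
          each edge with probability O(beta).
    Returns a dict mapping each vertex to its cluster center.
    """
    # Every vertex draws an i.i.d. exponential "head start".
    delta = {v: random.expovariate(beta) for v in adj}
    # Vertex u joins the center v minimizing dist(v, u) - delta[v]; one
    # multi-source search where v starts at time -delta[v] computes this.
    pq = [(-delta[v], v, v) for v in adj]
    heapq.heapify(pq)
    center = {}
    while pq:
        d, c, u = heapq.heappop(pq)
        if u in center:
            continue  # u was already claimed by an earlier (closer) center
        center[u] = c
        for w in adj[u]:
            if w not in center:
                heapq.heappush(pq, (d + 1, c, w))
    return center
```

    Roughly, the memorylessness of the exponential shifts is what bounds the cut probability: a unit-length edge is cut only when the two earliest arrival times at its endpoints land within distance one of each other, an event of probability O(beta).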

    Sparse Hopsets in Congested Clique

    We give the first Congested Clique algorithm that computes a sparse hopset with polylogarithmic hopbound in polylogarithmic time. Given a graph $G=(V,E)$, a $(\beta,\epsilon)$-hopset $H$ with "hopbound" $\beta$ is a set of edges added to $G$ such that for any pair of nodes $u$ and $v$ in $G$ there is a path with at most $\beta$ hops in $G \cup H$ whose length is within a factor of $(1+\epsilon)$ of the shortest path between $u$ and $v$ in $G$. Our hopsets are significantly sparser than the recent construction of Censor-Hillel et al. [6], which constructs a hopset of size $\tilde{O}(n^{3/2})$ but with a smaller polylogarithmic hopbound. On the other hand, the previously known constructions of sparse hopsets with polylogarithmic hopbound in the Congested Clique model, proposed by Elkin and Neiman [10], [11], [12], all require a polynomial number of rounds. One tool that we use is an efficient algorithm that constructs an $\ell$-limited neighborhood cover, which may be of independent interest. Finally, as a side result, we also give a hopset construction in a variant of the low-memory Massively Parallel Computation model, with improved running time over existing algorithms.
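
    The hopset guarantee defined above can be checked directly on small instances. The following Python sketch is a brute-force validator of the $(\beta,\epsilon)$ property (helper names are hypothetical; this is an illustration of the definition, not the paper's Congested Clique construction): it compares $\beta$-hop-limited distances in $G \cup H$ against exact distances in $G$.

```python
import math

def hop_limited_dists(n, edges, src, beta):
    """Distances from src over paths of at most beta edges ("hops"),
    via Bellman-Ford truncated to beta rounds (non-negative weights).
    edges: list of undirected (u, v, w) triples on vertices 0..n-1."""
    dist = [math.inf] * n
    dist[src] = 0.0
    for _ in range(beta):
        prev = dist[:]  # freeze the last round so each round adds one hop
        for u, v, w in edges:
            dist[v] = min(dist[v], prev[u] + w)
            dist[u] = min(dist[u], prev[v] + w)  # undirected edge
    return dist

def is_hopset(n, g_edges, h_edges, beta, eps):
    """Check: for all u, v, the beta-hop distance in G ∪ H is within a
    (1 + eps) factor of the true shortest-path distance in G."""
    exact = [hop_limited_dists(n, g_edges, s, n - 1) for s in range(n)]
    approx = [hop_limited_dists(n, g_edges + h_edges, s, beta)
              for s in range(n)]
    return all(approx[s][t] <= (1 + eps) * exact[s][t] + 1e-9
               for s in range(n) for t in range(n))
```

    Running `is_hopset(n, g_edges, [], beta, eps)` with an empty $H$ shows why hopsets matter: on graphs of large hop-diameter, a small $\beta$ fails until shortcut edges are added.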

    Distributed Strong Diameter Network Decomposition

    For a pair of positive parameters $D,\chi$, a partition ${\cal P}$ of the vertex set $V$ of an $n$-vertex graph $G = (V,E)$ into disjoint clusters of diameter at most $D$ each is called a $(D,\chi)$ network decomposition if the supergraph ${\cal G}({\cal P})$, obtained by contracting each of the clusters of ${\cal P}$, can be properly $\chi$-colored. The decomposition ${\cal P}$ is said to be strong (resp., weak) if each of the clusters has strong (resp., weak) diameter at most $D$, i.e., if for every cluster $C \in {\cal P}$ and every two vertices $u,v \in C$, the distance between them in the induced graph $G(C)$ of $C$ (resp., in $G$) is at most $D$. Network decomposition is a powerful construct, very useful in distributed computing and beyond. It was shown by Awerbuch et al. \cite{AGLP89} and Panconesi and Srinivasan \cite{PS92} that strong $(2^{O(\sqrt{\log n})}, 2^{O(\sqrt{\log n})})$ network decompositions can be computed in $2^{O(\sqrt{\log n})}$ distributed time. Linial and Saks \cite{LS93} devised an ingenious randomized algorithm that constructs {\em weak} $(O(\log n), O(\log n))$ network decompositions in $O(\log^2 n)$ time. It was, however, open until now whether {\em strong} network decompositions with both parameters $2^{o(\sqrt{\log n})}$ can be constructed in distributed $2^{o(\sqrt{\log n})}$ time. In this paper we answer this long-standing open question in the affirmative, and show that strong $(O(\log n), O(\log n))$ network decompositions can be computed in $O(\log^2 n)$ time. We also present a tradeoff between the parameters of our network decomposition. Our work is inspired by and relies on the "shifted shortest path approach" due to Blelloch et al. \cite{BGKMPT11} and Miller et al. \cite{MPX13}. These authors developed this approach for PRAM algorithms for padded partitions. We adapt their approach to network decompositions in the distributed model of computation.
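
    To illustrate the $(D,\chi)$ definition above, here is a hedged Python sketch that validates a candidate strong decomposition: it runs a BFS inside each induced cluster to confirm strong diameter at most $D$, then greedily colors the contracted supergraph. The greedy color count only upper-bounds $\chi$, the name `check_strong_decomposition` is illustrative, and this is a checker, not the paper's $O(\log^2 n)$-time distributed algorithm.

```python
from collections import deque

def check_strong_decomposition(adj, cluster, D):
    """Validate a candidate (D, chi) strong network decomposition.

    adj:     dict vertex -> iterable of neighbors (unweighted graph G).
    cluster: dict vertex -> cluster id (the partition P).
    Returns (diameter_ok, colors_used); colors_used upper-bounds chi.
    """
    # Strong diameter: BFS from every vertex, staying inside its cluster.
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if cluster[w] == cluster[s] and w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        members = [v for v in adj if cluster[v] == cluster[s]]
        if any(v not in dist or dist[v] > D for v in members):
            return False, None
    # Contract clusters and greedily color the supergraph G(P).
    super_adj = {c: set() for c in set(cluster.values())}
    for u in adj:
        for w in adj[u]:
            if cluster[u] != cluster[w]:
                super_adj[cluster[u]].add(cluster[w])
    colors = {}
    for c in super_adj:
        used = {colors[d] for d in super_adj[c] if d in colors}
        colors[c] = next(k for k in range(len(super_adj) + 1)
                         if k not in used)
    return True, (max(colors.values()) + 1 if colors else 0)
```

    Greedy coloring uses at most one more color than the maximum degree of the supergraph, so the returned count is a valid, if loose, certificate for $\chi$.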