60 research outputs found

    Massively Parallel Approximate Distance Sketches

    Get PDF
    Data structures that allow efficient distance estimation (distance oracles, distance sketches, etc.) have been extensively studied, and are particularly well studied in centralized models and classical distributed models such as CONGEST. We initiate their study in newer (and arguably more realistic) models of distributed computation: the Congested Clique model and the Massively Parallel Computation (MPC) model. We provide efficient constructions in both of these models, but our core results are for MPC. In MPC we give two main results: an algorithm that constructs stretch/space optimal distance sketches but takes a (small) polynomial number of rounds, and an algorithm that constructs distance sketches with worse stretch but takes only polylogarithmic rounds. Along the way, we show that other useful combinatorial structures can also be computed in MPC. In particular, one key component we use to construct distance sketches is an MPC construction of the hopsets of [Elkin and Neiman, 2016]. This result has additional applications, such as the first polylogarithmic-time algorithm for constant-approximate single-source shortest paths on weighted graphs in the low-memory MPC setting.
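
    A minimal, centralized sketch of the query interface such distance sketches expose, assuming a simple landmark-based scheme in the spirit of Thorup-Zwick; this is not the paper's MPC construction, and the Sketch/estimate names are illustrative:

```python
# Minimal, centralized illustration of a distance-sketch query interface
# (not the paper's MPC construction). Each vertex stores exact distances to a
# small set of landmark vertices; a query combines two sketches through a
# common landmark, giving an upper bound on d(u, v).

class Sketch:
    def __init__(self, landmark_dists):
        # landmark_dists: dict mapping landmark -> exact distance from this vertex
        self.landmark_dists = landmark_dists

def estimate(sketch_u, sketch_v):
    """Upper-bound d(u, v) via the best common landmark w: d(u, w) + d(w, v)."""
    common = sketch_u.landmark_dists.keys() & sketch_v.landmark_dists.keys()
    if not common:
        return float("inf")
    return min(sketch_u.landmark_dists[w] + sketch_v.landmark_dists[w]
               for w in common)

# Example: vertices u and v both know their distances to landmarks a and b.
u = Sketch({"a": 2.0, "b": 5.0})
v = Sketch({"a": 4.0, "b": 1.0})
print(estimate(u, v))  # 6.0, i.e. min(2 + 4, 5 + 1)
```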

    Almost Shortest Paths with Near-Additive Error in Weighted Graphs

    Get PDF
    Let $G=(V,E,w)$ be a weighted undirected graph with $n$ vertices and $m$ edges, and fix a set of $s$ sources $S \subseteq V$. We study the problem of computing {\em almost shortest paths} (ASP) for all pairs in $S \times V$ in both classical centralized and parallel (PRAM) models of computation. Consider the regime of multiplicative approximation of $1+\epsilon$, for an arbitrarily small constant $\epsilon > 0$. In this regime existing centralized algorithms require $\Omega(\min\{|E|s, n^\omega\})$ time, where $\omega < 2.372$ is the matrix multiplication exponent. Existing PRAM algorithms with polylogarithmic depth (aka time) require work $\Omega(\min\{|E|s, n^\omega\})$. Our centralized algorithm has running time $O((m + ns)n^\rho)$, and its PRAM counterpart has polylogarithmic depth and work $O((m + ns)n^\rho)$, for an arbitrarily small constant $\rho > 0$. For a pair $(s,v) \in S \times V$, it provides a path of length $\hat{d}(s,v)$ that satisfies $\hat{d}(s,v) \le (1+\epsilon)d_G(s,v) + \beta \cdot W(s,v)$, where $W(s,v)$ is the weight of the heaviest edge on some shortest $s$-$v$ path. Hence our additive term depends linearly on a {\em local} maximum edge weight, as opposed to the global maximum edge weight in previous works. Finally, our $\beta = (1/\rho)^{O(1/\rho)}$. We also extend a centralized algorithm of Dor et al. \cite{DHZ00}. For a parameter $\kappa = 1,2,\ldots$, this algorithm provides for {\em unweighted} graphs a purely additive approximation of $2(\kappa - 1)$ for {\em all pairs almost shortest paths} (APASP) in time $\tilde{O}(n^{2+1/\kappa})$. Within the same running time, our algorithm for {\em weighted} graphs provides a purely additive error of $2(\kappa - 1) W(u,v)$, for every vertex pair $(u,v) \in {V \choose 2}$, with $W(u,v)$ defined as above. On the way to these results we devise a suite of novel constructions of spanners, emulators and hopsets.
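
    For concreteness, a tiny numeric check of the stated guarantee $\hat{d}(s,v) \le (1+\epsilon)d_G(s,v) + \beta \cdot W(s,v)$; the numbers and the helper name below are illustrative, not from the paper:

```python
# Small numeric illustration (not from the paper) of the near-additive
# guarantee d_hat(s, v) <= (1 + eps) * d_G(s, v) + beta * W(s, v),
# where W(s, v) is the heaviest edge weight on some shortest s-v path.

def meets_guarantee(d_hat, d_true, w_local, eps, beta):
    """Check the (1 + eps, beta * W) bound for one source-vertex pair."""
    return d_hat <= (1 + eps) * d_true + beta * w_local

# Toy numbers: a path of true length 10 whose heaviest edge weighs 3.
eps, beta = 0.1, 4   # in the paper, beta = (1/rho)^{O(1/rho)}
print(meets_guarantee(d_hat=14.0, d_true=10.0, w_local=3.0,
                      eps=eps, beta=beta))  # True: 14 <= 11 + 12
```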

    Undirected $(1+\varepsilon)$-Shortest Paths via Minor-Aggregates: Near-Optimal Deterministic Parallel & Distributed Algorithms

    Full text link
    This paper presents near-optimal deterministic parallel and distributed algorithms for computing $(1+\varepsilon)$-approximate single-source shortest paths in any undirected weighted graph. On a high level, we deterministically reduce this and other shortest-path problems to $\tilde{O}(1)$ Minor-Aggregations. A Minor-Aggregation computes an aggregate (e.g., max or sum) of node-values for every connected component of some subgraph. Our reduction immediately implies: optimal deterministic parallel (PRAM) algorithms with $\tilde{O}(1)$ depth and near-linear work, and universally-optimal deterministic distributed (CONGEST) algorithms whenever deterministic Minor-Aggregate algorithms exist. For example, an optimal $\tilde{O}(\mathrm{HopDiameter}(G))$-round deterministic CONGEST algorithm for excluded-minor networks. Several novel tools developed for the above results are interesting in their own right: a local iterative approach for reducing shortest path computations "up to distance $D$" to computing low-diameter decompositions "up to distance $\frac{D}{2}$" (compared to the recursive vertex-reduction approach of [Li20], our approach is simpler, suitable for distributed algorithms, and eliminates many derandomization barriers); a simple graph-based $\tilde{O}(1)$-competitive $\ell_1$-oblivious routing based on low-diameter decompositions that can be evaluated in near-linear work (the previous such routing [ZGY+20] was $n^{o(1)}$-competitive and required $n^{o(1)}$ more work); a deterministic algorithm to round any fractional single-source transshipment flow into an integral tree solution; and the first distributed algorithms for computing Eulerian orientations.
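
    As a point of reference, here is a small sequential sketch of what a single Minor-Aggregation computes (the paper's contribution is implementing this primitive efficiently in parallel and distributed models); the function name and the union-find realization are illustrative assumptions:

```python
# Sequential illustration of one Minor-Aggregation: for every connected
# component of a chosen subgraph, aggregate the node values (here with `sum`;
# `max`/`min` work the same way) and hand the result back to every node.

from collections import defaultdict

def minor_aggregate(nodes, subgraph_edges, values, aggregate=sum):
    # Union-find over the subgraph's edges to identify components.
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in subgraph_edges:
        parent[find(u)] = find(v)
    groups = defaultdict(list)
    for v in nodes:
        groups[find(v)].append(values[v])
    # Every node of a component receives the same aggregate.
    return {v: aggregate(groups[find(v)]) for v in nodes}

nodes = [1, 2, 3, 4]
edges = [(1, 2), (3, 4)]            # two components: {1, 2} and {3, 4}
values = {1: 5, 2: 7, 3: 1, 4: 2}
print(minor_aggregate(nodes, edges, values))  # {1: 12, 2: 12, 3: 3, 4: 3}
```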

    Sparse Hopsets in Congested Clique

    Get PDF
    We give the first Congested Clique algorithm that computes a sparse hopset with polylogarithmic hopbound in polylogarithmic time. Given a graph $G=(V,E)$, a $(\beta,\epsilon)$-hopset $H$ with "hopbound" $\beta$ is a set of edges added to $G$ such that for any pair of nodes $u$ and $v$ in $G$ there is a path with at most $\beta$ hops in $G \cup H$ whose length is within $(1+\epsilon)$ of the shortest path between $u$ and $v$ in $G$. Our hopsets are significantly sparser than the recent construction of Censor-Hillel et al. [6], which constructs a hopset of size $\tilde{O}(n^{3/2})$ but with a smaller polylogarithmic hopbound. On the other hand, the previously known constructions of sparse hopsets with polylogarithmic hopbound in the Congested Clique model, proposed by Elkin and Neiman [10], [11], [12], all require a polynomial number of rounds. One tool that we use is an efficient algorithm that constructs an $\ell$-limited neighborhood cover, which may be of independent interest. Finally, as a side result, we also give a hopset construction in a variant of the low-memory Massively Parallel Computation model, with improved running time over existing algorithms.
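
    To illustrate how the hopset definition is typically used (this is not the Congested Clique construction itself), a $\beta$-limited Bellman-Ford over $G \cup H$ already recovers approximate distances; the toy graph, the shortcut edge, and the function name below are illustrative:

```python
# Illustration of how a (beta, eps)-hopset is *used*: after adding the hopset
# edges H to G, a Bellman-Ford limited to beta rounds/hops already yields
# (1 + eps)-approximate distances from the source.

def beta_limited_distances(n, edges, source, beta):
    """Bellman-Ford restricted to paths of at most `beta` hops.
    edges: list of (u, v, w) for an undirected weighted graph on vertices 0..n-1."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(beta):
        new_dist = dist[:]
        for u, v, w in edges:
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
            if dist[v] + w < new_dist[u]:
                new_dist[u] = dist[v] + w
        dist = new_dist
    return dist

# Toy example: a path 0-1-2-3 plus a single "shortcut" hopset edge (0, 3).
G = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
H = [(0, 3, 3.0)]  # hopset edge preserving the exact 0-3 distance
# With hopbound beta = 1, vertex 3 is already reached at its true distance,
# while vertex 2 is not (it needs more hops without further hopset edges).
print(beta_limited_distances(4, G + H, source=0, beta=1))
```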

    Improved Parallel Algorithms for Spanners and Hopsets

    Full text link
    We use exponential start time clustering to design faster and more work-efficient parallel graph algorithms involving distances. Previous algorithms usually rely on graph decomposition routines with strict restrictions on the diameters of the decomposed pieces. We weaken these bounds in favor of stronger local probabilistic guarantees. This allows more direct analyses of the overall process, giving:
    * Linear-work parallel algorithms that construct spanners with $O(k)$ stretch and size $O(n^{1+1/k})$ in unweighted graphs, and size $O(n^{1+1/k} \log k)$ in weighted graphs.
    * Hopsets that lead to the first parallel algorithm for approximating shortest paths in undirected graphs with $O(m\;\mathrm{polylog}\;n)$ work.
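
    A sequential toy sketch of exponential start time clustering, assuming the standard formulation in which each vertex draws an exponential "head start" and every vertex joins the cluster of the center with the earliest shifted arrival; the parallel implementation and parameter choices in the paper differ:

```python
# Sequential sketch of exponential start time clustering on a small unweighted
# graph. Each vertex u draws an exponential head start delta_u; vertex v joins
# the cluster of the vertex maximizing delta_u - d(u, v). Names are illustrative.

import random
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from `source` via BFS."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def exponential_start_time_clustering(adj, beta=0.5, seed=0):
    random.seed(seed)
    shifts = {u: random.expovariate(beta) for u in adj}          # head starts
    dist = {u: bfs_distances(adj, u) for u in adj}               # fine for a toy graph
    # Assign each vertex to the center with the earliest (shifted) arrival time.
    return {v: max(adj, key=lambda u: shifts[u] - dist[u].get(v, float("inf")))
            for v in adj}

# Toy graph: a 6-cycle.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(exponential_start_time_clustering(adj))  # vertex -> cluster center
```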

    DISTRIBUTED, PARALLEL AND DYNAMIC DISTANCE STRUCTURES

    Get PDF
    Many fundamental computational tasks can be modeled by distances on a graph. This has inspired studying various structures that preserve approximate distances, but trade off this approximation factor with size, running time, or the number of hops on the approximate shortest paths. Our focus is on three important objects involving preservation of graph distances: hopsets, in which our goal is to ensure that small-hop paths also provide approximate shortest paths; distance oracles, in which we build a small data structure that supports efficient distance queries; and spanners, in which we find a sparse subgraph that approximately preserves all distances. We study efficient constructions and applications of these structures in various models of computation that capture different aspects of computational systems. Specifically, we propose new algorithms for constructing hopsets and distance oracles in two modern distributed models: the Massively Parallel Computation (MPC) model and the Congested Clique model. These models have received significant attention recently due to their close connection to present-day big data platforms. In a different direction, we consider a centralized dynamic model in which the input changes over time. We propose new dynamic algorithms for constructing hopsets and distance oracles that lead to state-of-the-art approximate single-source, multi-source and all-pairs shortest path algorithms with respect to update time. Finally, we study the problem of finding optimal spanners in a different distributed model, the LOCAL model. Unlike our other results, for this problem our goal is to find the best solution for a specific input graph rather than giving a general guarantee that holds for all inputs. One contribution of this work is to emphasize the significance of the tools and the techniques used for these distance problems rather than heavily focusing on a specific model. In other words, we show that our techniques are broad enough that they can be extended to different models.

    A Distributed Algorithm for Directed Minimum-Weight Spanning Tree

    Get PDF