    Dynamic Low-Stretch Trees via Dynamic Low-Diameter Decompositions

    Spanning trees of low average stretch on the non-tree edges, as introduced by Alon et al. [SICOMP 1995], are a natural graph-theoretic object. In recent years, they have found significant applications in solvers for symmetric diagonally dominant (SDD) linear systems. In this work, we provide the first dynamic algorithm for maintaining such trees under edge insertions and deletions to the input graph. Our algorithm has update time $n^{1/2+o(1)}$, and the average stretch of the maintained tree is $n^{o(1)}$, which matches the stretch in the seminal result of Alon et al. Similar to Alon et al., our dynamic low-stretch tree algorithm employs a dynamic hierarchy of low-diameter decompositions (LDDs). As a major building block we use a dynamic LDD that we obtain by adapting the random-shift clustering of Miller et al. [SPAA 2013] to the dynamic setting. The major technical challenge in our approach is to control the propagation of updates within our hierarchy of LDDs: each update to one level of the hierarchy could potentially induce several insertions and deletions to the next level of the hierarchy. We achieve this goal by a sophisticated amortization approach. We believe that the dynamic random-shift clustering might be useful for independent applications. One of these applications is the dynamic spanner problem. By combining the random-shift clustering with the recent spanner construction of Elkin and Neiman [SODA 2017], we obtain a fully dynamic algorithm for maintaining a spanner of stretch $2k-1$ and size $O(n^{1+1/k} \log n)$ with amortized update time $O(k \log^2 n)$, for any integer $2 \leq k \leq \log n$. Compared to the state of the art in this regime [Baswana et al. TALG '12], we improve upon the size of the spanner and the update time by a factor of $k$.
    Comment: To be presented at the 51st Annual ACM Symposium on the Theory of Computing (STOC 2019); abstract shortened to respect the arXiv limit of 1920 characters
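
    The static version of the random-shift clustering used here as a building block is compact enough to sketch. Below is a minimal Python sketch (interface and names are ours, purely illustrative): every vertex draws an exponential shift and joins the center with the smallest shifted distance, computed in one multi-source Dijkstra pass. The dynamic maintenance of this clustering, which is the paper's actual contribution, is not reproduced.

        import heapq
        import random

        def random_shift_clustering(adj, beta, seed=None):
            # Static sketch of the Miller et al. [SPAA 2013] random-shift
            # clustering (interface and names are ours). adj maps each
            # vertex to a list of (neighbor, weight) pairs; beta trades a
            # cluster diameter of O(log n / beta) w.h.p. against an edge
            # cutting probability of O(beta * w(e)).
            rng = random.Random(seed)
            # Every vertex draws a shift delta_u ~ Exp(beta) and acts as
            # a potential cluster center.
            delta = {u: rng.expovariate(beta) for u in adj}
            # Multi-source Dijkstra in which center u starts at "time"
            # -delta_u; vertex v joins the center u minimizing
            # d(u, v) - delta_u.
            dist, center = {}, {}
            heap = [(-delta[u], u, u) for u in adj]
            heapq.heapify(heap)
            while heap:
                d, u, c = heapq.heappop(heap)
                if u in dist:
                    continue
                dist[u], center[u] = d, c
                for v, w in adj[u]:
                    if v not in dist:
                        heapq.heappush(heap, (d + w, v, c))
            return center  # vertex -> its cluster center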

    Low Diameter Graph Decompositions by Approximate Distance Computation

    In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low diameter graph decomposition with small edge cutting probabilities, with all existing techniques heavily building on single source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computations, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this imposes a fundamental challenge since the existing constructions of low diameter graph decomposition with small edge cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low diameter decompositions with small edge cutting probabilities which replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is "used" only logarithmically many times, which is of interest for capacitated problems and simulating CONGEST algorithms on the tree into which the graph is embedded.
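
    The blurry ball growing procedure itself is not spelled out in the abstract, so it is not reproduced here. For context, the classical construction it replaces can be sketched in a few lines of Python (our illustration, in the vein of Bartal [FOCS 1996], not the paper's algorithm); the truncated exact Dijkstra below is precisely the SSSP step that the paper substitutes with approximate distance computations.

        import heapq
        import math
        import random

        def ball_growing_ldd(adj, delta, seed=None):
            # Classical exponential ball growing: carve a ball of random
            # radius around an arbitrary unclustered vertex and repeat.
            # Cluster diameter is at most delta; an edge of weight w is
            # cut with probability O(w * log(n) / delta).
            rng = random.Random(seed)
            n = len(adj)
            unclustered = set(adj)
            clusters = []
            while unclustered:
                src = next(iter(unclustered))
                # Exponential radius, truncated so the diameter bound
                # delta holds with certainty in this sketch.
                radius = min(rng.expovariate(4 * math.log(n + 1) / delta),
                             delta / 2)
                # Exact truncated Dijkstra from src over the unclustered
                # vertices; the comparisons against radius are where exact
                # distances (and the subtractive triangle inequality in
                # the analysis) enter.
                dist = {src: 0.0}
                heap = [(0.0, src)]
                ball = set()
                while heap:
                    d, u = heapq.heappop(heap)
                    if u in ball:
                        continue
                    ball.add(u)
                    for v, w in adj[u]:
                        nd = d + w
                        if (v in unclustered and v not in ball
                                and nd <= radius
                                and nd < dist.get(v, math.inf)):
                            dist[v] = nd
                            heapq.heappush(heap, (nd, v))
                clusters.append(ball)
                unclustered -= ball
            return clusters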

    On Strong Diameter Padded Decompositions

    Given a weighted graph G=(V,E,w), a partition of V is Delta-bounded if the diameter of each cluster is bounded by Delta. A distribution over Delta-bounded partitions is a beta-padded decomposition if every ball of radius gamma * Delta is contained in a single cluster with probability at least e^{-beta * gamma}. The weak diameter of a cluster C is measured w.r.t. distances in G, while the strong diameter is measured w.r.t. distances in the induced graph G[C]. The decomposition is weak/strong according to the diameter guarantee. Previously, it was proven that K_r-free graphs admit weak decompositions with padding parameter O(r), while for strong decompositions only an O(r^2) padding parameter was known. Furthermore, for the case of a graph G for which the induced shortest path metric d_G has doubling dimension ddim, a weak O(ddim)-padded decomposition was constructed, which is also known to be tight. For the case of strong diameter, nothing was known. We construct strong O(r)-padded decompositions for K_r-free graphs, matching the state of the art for weak decompositions. Similarly, for graphs with doubling dimension ddim we construct a strong O(ddim)-padded decomposition, which is also tight. We use this decomposition to construct an (O(ddim), O~(ddim))-sparse cover scheme for such graphs. Our new decompositions and covers have implications for approximating unique games, for the construction of light and sparse spanners, and for path reporting distance oracles.
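
    To make the padding guarantee concrete (a direct instantiation of the definition above, not a result quoted from the paper): choosing gamma = ln(2)/beta, a ball whose radius is a Theta(1/beta) fraction of the diameter bound stays inside a single cluster with probability at least 1/2.

        % Instantiating the padded-decomposition guarantee with
        % gamma = ln(2) / beta:
        \Pr\bigl[\, B(v,\gamma\Delta) \subseteq C(v) \,\bigr] \;\ge\; e^{-\beta\gamma}
        \qquad\Longrightarrow\qquad
        \Pr\Bigl[\, B\bigl(v,\tfrac{\ln 2}{\beta}\Delta\bigr) \subseteq C(v) \,\Bigr] \;\ge\; \tfrac{1}{2}.
        % With beta = O(r) for K_r-free graphs (as achieved in this
        % paper), balls of radius Omega(Delta / r) are thus padded with
        % constant probability.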

    An Improved Random Shift Algorithm for Spanners and Low Diameter Decompositions

    Spanners have been shown to be a powerful tool in graph algorithms. Many spanner constructions use a certain type of clustering at their core, where each cluster has small diameter and there are relatively few spanner edges between clusters. In this paper, we provide a clustering algorithm that, given k ≥ 2, can be used to compute a spanner of stretch 2k-1 and expected size O(n^{1+1/k}) in k rounds in the CONGEST model. This improves upon the state of the art (by Elkin and Neiman [TALG '19]) by making the bounds on both running time and stretch independent of the random choices of the algorithm, whereas they only hold with high probability in previous results. Spanners are used in certain synchronizers, so our improvement directly carries over to such synchronizers. Furthermore, for keeping the total number of inter-cluster edges small in low diameter decompositions, our clustering algorithm provides the following guarantee: given ε ∈ (0,1], we compute a low diameter decomposition with diameter bound O((log n)/ε) such that each edge e ∈ E is an inter-cluster edge with probability at most ε · w(e), in O((log n)/ε) rounds in the CONGEST model. Again, this improves upon the state of the art (by Miller, Peng, and Xu [SPAA '13]) by making the bounds on both running time and diameter independent of the random choices of the algorithm, whereas they only hold with high probability in previous results.
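
    The following standard calculation for exponential-shift clustering (our sketch; the truncation below is a guess at the flavor of how the bounds are made worst-case, not necessarily the paper's exact mechanism) shows why capping the random shifts makes the diameter bound hold for every outcome of the randomness. Suppose every vertex u draws a shift delta_u ≤ delta_max and vertex v joins the center u minimizing d(u,v) - delta_u; comparing the winning center u against v itself gives:

        % v is itself a candidate center, so the winning center u
        % satisfies
        d(u,v) - \delta_u \;\le\; d(v,v) - \delta_v \;=\; -\delta_v \;\le\; 0
        \quad\Longrightarrow\quad
        d(u,v) \;\le\; \delta_u \;\le\; \delta_{\max}.
        % Hence every cluster has radius at most delta_max regardless of
        % the random outcome; capping delta_max = O((log n)/eps) makes
        % the diameter bound deterministic. For the cutting bound, an
        % edge e = {x,y} of weight w(e) is cut only if the two smallest
        % shifted distances to it differ by less than w(e); by
        % memorylessness of the exponential distribution this happens
        % with probability at most 1 - e^{-eps * w(e)} <= eps * w(e).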

    Distributed Strong Diameter Network Decomposition

    For a pair of positive parameters $D, \chi$, a partition $\mathcal{P}$ of the vertex set $V$ of an $n$-vertex graph $G = (V,E)$ into disjoint clusters of diameter at most $D$ each is called a $(D, \chi)$ network decomposition, if the supergraph $\mathcal{G}(\mathcal{P})$, obtained by contracting each of the clusters of $\mathcal{P}$, can be properly $\chi$-colored. The decomposition $\mathcal{P}$ is said to be strong (resp., weak) if each of the clusters has strong (resp., weak) diameter at most $D$, i.e., if for every cluster $C \in \mathcal{P}$ and every two vertices $u, v \in C$, the distance between them in the induced graph $G(C)$ of $C$ (resp., in $G$) is at most $D$. Network decomposition is a powerful construct, very useful in distributed computing and beyond. It was shown by Awerbuch et al. [AGLP89] and Panconesi and Srinivasan [PS92] that strong $(2^{O(\sqrt{\log n})}, 2^{O(\sqrt{\log n})})$ network decompositions can be computed in $2^{O(\sqrt{\log n})}$ distributed time. Linial and Saks [LS93] devised an ingenious randomized algorithm that constructs weak $(O(\log n), O(\log n))$ network decompositions in $O(\log^2 n)$ time. It was, however, open till now whether strong network decompositions with both parameters $2^{o(\sqrt{\log n})}$ can be constructed in distributed $2^{o(\sqrt{\log n})}$ time. In this paper we answer this long-standing open question in the affirmative, and show that strong $(O(\log n), O(\log n))$ network decompositions can be computed in $O(\log^2 n)$ time. We also present a tradeoff between the parameters of our network decomposition. Our work is inspired by and relies on the "shifted shortest path approach" due to Blelloch et al. [BGKMPT11] and Miller et al. [MPX13]. These authors developed this approach for PRAM algorithms for padded partitions. We adapt their approach to network decompositions in the distributed model of computation.
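
    A $(D, \chi)$ network decomposition is typically consumed as follows (a textbook pattern, sketched here as a sequential Python simulation rather than actual CONGEST code; names are ours). Color classes are processed one at a time; clusters of the same color are pairwise non-adjacent because the supergraph coloring is proper, so they can work independently, and the diameter bound $D$ lets a cluster leader gather its cluster's subproblem, solve it, and broadcast the answer in $O(D)$ rounds, giving $O(D \cdot \chi)$ rounds overall, e.g., for maximal independent set:

        def mis_via_decomposition(adj, cluster, color, num_colors):
            # adj: vertex -> list of neighbors; cluster: vertex ->
            # cluster id; color: cluster id -> color in
            # range(num_colors), proper on the contracted supergraph.
            # Sequential simulation of the O(D * chi)-round schedule.
            in_mis = set()
            for c in range(num_colors):
                # Clusters of color c are pairwise non-adjacent, so in
                # the distributed setting they all run simultaneously
                # without conflicts; here we simply visit their vertices.
                for v in adj:
                    if (color[cluster[v]] == c
                            and all(u not in in_mis for u in adj[v])):
                        in_mis.add(v)
            return in_mis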