
    Distributed Edge Connectivity in Sublinear Time

    We present the first sublinear-time algorithm for a distributed message-passing network to compute its edge connectivity $\lambda$ exactly in the CONGEST model, as long as there are no parallel edges. Our algorithm takes $\tilde O(n^{1-1/353}D^{1/353}+n^{1-1/706})$ time to compute $\lambda$ and a cut of cardinality $\lambda$ with high probability, where $n$ and $D$ are the number of nodes and the diameter of the network, respectively, and $\tilde O$ hides polylogarithmic factors. This running time is sublinear in $n$ (i.e. $\tilde O(n^{1-\epsilon})$) whenever $D$ is. Previous sublinear-time distributed algorithms can solve this problem either (i) exactly only when $\lambda=O(n^{1/8-\epsilon})$ [Thurimella PODC'95; Pritchard, Thurimella, ACM Trans. Algorithms'11; Nanongkai, Su, DISC'14] or (ii) approximately [Ghaffari, Kuhn, DISC'13; Nanongkai, Su, DISC'14]. To achieve this we develop and combine several new techniques. First, we design the first distributed algorithm that can compute a $k$-edge connectivity certificate for any $k=O(n^{1-\epsilon})$ in time $\tilde O(\sqrt{nk}+D)$. Second, we show that by combining the recent distributed expander decomposition technique of [Chang, Pettie, Zhang, SODA'19] with techniques from the sequential deterministic edge connectivity algorithm of [Kawarabayashi, Thorup, STOC'15], we can decompose the network into a sublinear number of clusters with small average diameter and without any mincut separating a cluster (except the `trivial' ones). Finally, by extending the tree packing technique from [Karger, STOC'96], we can find the minimum cut in time proportional to the number of components. As a byproduct of this technique, we obtain an $\tilde O(n)$-time algorithm for computing the exact minimum cut of weighted graphs.

    Comment: Accepted at the 51st ACM Symposium on Theory of Computing (STOC 2019).
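    The certificate in the first step has a well-known sequential counterpart: by a result of Nagamochi and Ibaraki (also the basis of Thurimella's distributed certificates), the union of $k$ greedily computed maximal spanning forests preserves every cut of capacity at most $k$. Below is a minimal sequential sketch of that invariant, not the paper's distributed $\tilde O(\sqrt{nk}+D)$-time construction.

```python
# Sequential sketch of a k-edge-connectivity certificate: the union of
# k edge-disjoint maximal spanning forests keeps every cut of capacity
# <= k intact (Nagamochi-Ibaraki). Assumes no parallel edges, as in the
# paper. This is not the distributed construction, only what it certifies.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def spanning_forest(n, edges):
    """Greedy maximal spanning forest via union-find; returns chosen edges."""
    parent = list(range(n))
    forest = []
    for u, v in edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v))
    return forest

def k_certificate(n, edges, k):
    """Union of k maximal spanning forests of the remaining graph."""
    remaining = list(edges)
    certificate = []
    for _ in range(k):
        f = spanning_forest(n, remaining)
        if not f:
            break
        certificate.extend(f)
        chosen = set(f)
        remaining = [e for e in remaining if e not in chosen]
    return certificate  # at most k*(n-1) edges

# Example: a 4-cycle is 2-edge-connected; its 2-certificate keeps all 4 edges.
print(k_certificate(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2))
```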

    Brief Announcement: Faster Asynchronous MST and Low Diameter Tree Construction with Sublinear Communication

    Building a spanning tree, a minimum spanning tree (MST), or a BFS tree in a distributed network are fundamental problems that are still not fully understood in terms of time and communication cost. The first work to compute a spanning tree with communication sublinear in the number of edges in an asynchronous CONGEST network appeared in DISC 2018. That algorithm, which constructs an MST, is sequential in the worst case: its running time is proportional to the total number of messages sent. Our paper matches its message complexity but brings the running time down to linear in $n$. Our techniques can also be used to give an asynchronous algorithm with sublinear communication that constructs a tree in which the distance from a source to each node is within an additive $\sqrt{n}$ of its actual distance.
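    The merging pattern underlying GHS-style synchronous and asynchronous MST algorithms is Borůvka's: each fragment selects its minimum-weight outgoing edge, and fragments merge along the selected edges, so the fragment count at least halves per phase. Below is a minimal sequential sketch of that pattern, assuming distinct edge weights; it does not model the asynchronous message accounting that is the paper's actual contribution.

```python
# Sequential sketch of Boruvka fragment merging, the pattern behind
# GHS-style distributed MST: each fragment selects its minimum-weight
# outgoing edge, fragments merge, repeat. Assumes distinct weights.

def boruvka_mst(n, edges):
    """edges: list of (weight, u, v) with distinct weights."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, fragments = [], n
    while fragments > 1:
        # each fragment selects its minimum-weight outgoing edge
        best = {}
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in best or w < best[r][0]:
                    best[r] = (w, u, v)
        if not best:
            break  # remaining fragments are mutually disconnected
        # merge along selected edges; distinct weights prevent cycles
        for w, u, v in best.values():
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append((w, u, v))
                fragments -= 1
    return mst

# 4-cycle plus a chord; the MST takes the three lightest cycle edges.
print(boruvka_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]))
```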

    Estimating the weight of metric minimum spanning trees in sublinear time

    In this paper we present a sublinear-time $(1+\varepsilon)$-approximation randomized algorithm to estimate the weight of the minimum spanning tree of an $n$-point metric space. The running time of the algorithm is $\widetilde{\mathcal{O}}(n/\varepsilon^{\mathcal{O}(1)})$. Since the full description of an $n$-point metric space has size $\Theta(n^2)$, the complexity of our algorithm is sublinear in the input size. Our algorithm is almost optimal, as it is not possible to approximate the weight of the minimum spanning tree to within any factor in $o(n)$ time. We also show that no deterministic algorithm can achieve a $B$-approximation in $o(n^2/B^3)$ time. Furthermore, it has been previously shown that no $o(n^2)$-time algorithm can return a spanning tree whose weight is within a constant factor of the optimum.
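    For context, the classic route to sublinear MST-weight estimation (due to Chazelle, Rubinfeld, and Trevisan for bounded-degree graphs, which the metric-space setting builds on) rewrites the MST weight as a sum of connected-component counts: for integer edge weights in $\{1,\dots,W\}$, $\mathrm{MST}(G)=n-W+\sum_{i=1}^{W-1}c_i$, where $c_i$ is the number of components of the subgraph on edges of weight at most $i$. The sublinear algorithms estimate each $c_i$ by random sampling; the sketch below just checks the exact identity on a toy graph.

```python
# Exact check of the identity MST(G) = n - W + sum_{i=1}^{W-1} c_i,
# where c_i = #components of the subgraph with edge weights <= i
# (Chazelle-Rubinfeld-Trevisan). Sublinear algorithms estimate the c_i
# by sampling instead of computing them exactly, as done here.

def components(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    c = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            c -= 1
    return c

def mst_weight_via_components(n, weighted_edges, W):
    total = n - W
    for i in range(1, W):
        sub = [(u, v) for w, u, v in weighted_edges if w <= i]
        total += components(n, sub)
    return total

# Toy graph: triangle with weights 1, 2, 3; MST weight = 1 + 2 = 3.
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2)]
print(mst_weight_via_components(3, edges, W=3))  # -> 3
```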

    Almost-Tight Distributed Minimum Cut Algorithms

    We study the problem of computing the minimum cut in a weighted distributed message-passing network (the CONGEST model). Let $\lambda$ be the minimum cut, $n$ the number of nodes in the network, and $D$ the network diameter. Our algorithm can compute $\lambda$ exactly in $O((\sqrt{n}\log^{*}n+D)\lambda^4\log^2 n)$ time. To the best of our knowledge, this is the first paper to explicitly study computing the exact minimum cut in the distributed setting. Previously, non-trivial sublinear-time algorithms for this problem were known only for unweighted graphs with $\lambda\leq 3$, due to Pritchard and Thurimella's $O(D)$-time and $O(D+n^{1/2}\log^* n)$-time algorithms for computing $2$-edge-connected and $3$-edge-connected components. By using Karger's edge sampling technique, we can convert this algorithm into a $(1+\epsilon)$-approximation $O((\sqrt{n}\log^{*}n+D)\epsilon^{-5}\log^3 n)$-time algorithm for any $\epsilon>0$. This improves over the previous $(2+\epsilon)$-approximation $O((\sqrt{n}\log^{*}n+D)\epsilon^{-5}\log^2 n\log\log n)$-time algorithm and $O(\epsilon^{-1})$-approximation $O(D+n^{\frac{1}{2}+\epsilon}\,\mathrm{poly}\log n)$-time algorithm of Ghaffari and Kuhn. Due to the lower bound of $\Omega(D+n^{1/2}/\log n)$ by Das Sarma et al., which holds for any approximation algorithm, this running time is tight up to a $\mathrm{poly}\log n$ factor. To get the stated running time, we developed an approximation algorithm that combines the ideas of Thorup's algorithm and Matula's contraction algorithm; it saves an $\epsilon^{-9}\log^{7}n$ factor compared to applying Thorup's tree packing theorem directly. Then, we combine Kutten and Peleg's tree partitioning algorithm and Karger's dynamic programming to obtain an efficient distributed algorithm that finds the minimum cut when given a spanning tree that crosses the minimum cut exactly once.
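    The last ingredient reduces to a clean subproblem: if a spanning tree crosses the minimum cut exactly once, that cut is obtained by deleting a single tree edge, so it suffices to score the $n-1$ single-tree-edge cuts. Below is a naive $O(nm)$ sequential sketch of this reduction; the paper performs it distributedly via tree partitioning and dynamic programming, and tree packing supplies a tree with the required crossing property.

```python
# Naive sequential sketch: if a spanning tree crosses the minimum cut
# exactly once, that cut is obtained by deleting one tree edge. Score
# all n-1 such cuts directly (O(n*m) here; the paper achieves this
# distributedly via tree partitioning and dynamic programming).

def min_cut_respecting_tree(n, tree_edges, graph_edges):
    """tree_edges: list of (u, v); graph_edges: list of (w, u, v)."""
    # adjacency list of the tree
    adj = {v: [] for v in range(n)}
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    best = float('inf')
    for tu, tv in tree_edges:
        # vertices on tu's side when tree edge (tu, tv) is removed
        side, stack = {tu}, [tu]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in side and {x, y} != {tu, tv}:
                    side.add(y)
                    stack.append(y)
        # cut value: weight of graph edges with exactly one endpoint in side
        cut = sum(w for w, u, v in graph_edges if (u in side) != (v in side))
        best = min(best, cut)
    return best

# Path tree 0-1-2 inside a weighted triangle:
tree = [(0, 1), (1, 2)]
graph = [(1, 0, 1), (2, 1, 2), (3, 0, 2)]
print(min_cut_respecting_tree(3, tree, graph))  # cut {0} vs {1,2}: 1+3 = 4
```

    Note the caveat visible in the toy example: the global minimum cut here ($\{1\}$ versus the rest, of weight $3$) is crossed twice by this particular tree, so the procedure only finds the best cut this tree crosses once; the tree packing step is what guarantees some tree crosses the minimum cut the right number of times.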

    A Simple Deterministic Distributed MST Algorithm, with Near-Optimal Time and Message Complexities

    The distributed minimum spanning tree (MST) problem is one of the most central and fundamental problems in distributed graph algorithms. Garay et al. \cite{GKP98,KP98} devised an algorithm with running time $O(D+\sqrt{n}\cdot\log^* n)$, where $D$ is the hop-diameter of the input $n$-vertex, $m$-edge graph, and with message complexity $O(m+n^{3/2})$. Peleg and Rubinovich \cite{PR99} showed that the running time of the algorithm of \cite{KP98} is essentially tight, and asked if one can achieve near-optimal running time **together with near-optimal message complexity**. In a recent breakthrough, Pandurangan et al. \cite{PRS16} answered this question in the affirmative and devised a **randomized** algorithm with time $\tilde{O}(D+\sqrt{n})$ and message complexity $\tilde{O}(m)$. They asked if such simultaneous time- and message-optimality can be achieved by a **deterministic** algorithm. In this paper, building upon the work of \cite{PRS16}, we answer this question in the affirmative and devise a **deterministic** algorithm that computes the MST in time $O((D+\sqrt{n})\cdot\log n)$, using $O(m\cdot\log n+n\log n\cdot\log^* n)$ messages. The polylogarithmic factors in the time and message complexities of our algorithm are significantly smaller than the respective factors in the result of \cite{PRS16}. Also, our algorithm and its analysis are very **simple** and self-contained, as opposed to the rather complicated previous sublinear-time algorithms \cite{GKP98,KP98,E04b,PRS16}.
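    The sublinear-time algorithms cited above share the controlled-GHS outline: run Borůvka-style merging only while fragments are small, then finish the roughly $\sqrt{n}$ remaining fragments through a global pipeline over a BFS tree. Below is a toy sequential model of the size-capped first phase, assuming distinct edge weights, with a parameter cap standing in for $\sqrt{n}$; the distributed pipelining of the second phase is not modeled.

```python
# Toy model of the controlled-GHS first phase: Boruvka merging with a
# fragment size cap, as used by sublinear-time distributed MST algorithms.
# Fragments at the cap stop proposing merges; the real algorithms then
# finish the ~sqrt(n) remaining fragments over a BFS tree.

def capped_boruvka(n, edges, cap):
    """edges: (weight, u, v), distinct weights; cap plays the role of sqrt(n)."""
    parent, size = list(range(n)), [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    while True:
        # each fragment below the cap proposes its minimum outgoing edge
        best = {}
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if size[r] < cap and (r not in best or w < best[r][0]):
                    best[r] = (w, u, v)
        # one merge per fragment per phase, so fragment sizes at most double
        merged, progress = set(), False
        for w, u, v in best.values():
            ru, rv = find(u), find(v)
            if ru != rv and ru not in merged and rv not in merged:
                merged.update((ru, rv))
                parent[ru] = rv
                size[rv] += size[ru]
                mst.append((w, u, v))
                progress = True
        if not progress:
            return mst  # no fragment below the cap has an outgoing edge left

# Weighted 6-cycle with cap ~ sqrt(6): merging stops once fragments are large.
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 4), (5, 4, 5), (6, 5, 0)]
print(capped_boruvka(6, edges, cap=3))
```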