
    Almost-Tight Distributed Minimum Cut Algorithms

    We study the problem of computing the minimum cut in weighted distributed message-passing networks (the CONGEST model). Let $\lambda$ be the minimum cut, $n$ the number of nodes in the network, and $D$ the network diameter. Our algorithm computes $\lambda$ exactly in $O((\sqrt{n}\log^* n + D)\lambda^4\log^2 n)$ time. To the best of our knowledge, this is the first paper to explicitly study computing the exact minimum cut in the distributed setting. Previously, non-trivial sublinear-time algorithms for this problem were known only for unweighted graphs when $\lambda \leq 3$, due to Pritchard and Thurimella's $O(D)$-time and $O(D + n^{1/2}\log^* n)$-time algorithms for computing 2-edge-connected and 3-edge-connected components. Using Karger's edge sampling technique, we can convert this algorithm into a $(1+\epsilon)$-approximation $O((\sqrt{n}\log^* n + D)\epsilon^{-5}\log^3 n)$-time algorithm for any $\epsilon > 0$. This improves over the previous $(2+\epsilon)$-approximation $O((\sqrt{n}\log^* n + D)\epsilon^{-5}\log^2 n\log\log n)$-time algorithm and the $O(\epsilon^{-1})$-approximation $O(D + n^{1/2+\epsilon}\,\mathrm{poly}\log n)$-time algorithm of Ghaffari and Kuhn. Due to the lower bound of $\Omega(D + n^{1/2}/\log n)$ by Das Sarma et al., which holds for any approximation algorithm, this running time is tight up to a $\mathrm{poly}\log n$ factor. To obtain the stated running time, we develop an approximation algorithm that combines the ideas of Thorup's algorithm and Matula's contraction algorithm; it saves an $\epsilon^{-9}\log^7 n$ factor compared to applying Thorup's tree packing theorem directly. We then combine Kutten and Peleg's tree partitioning algorithm with Karger's dynamic programming to obtain an efficient distributed algorithm that finds the minimum cut when we are given a spanning tree that crosses the minimum cut exactly once.
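
    For intuition about the sampling step: Karger's technique keeps each edge independently with probability $p = \Theta(\log n/(\epsilon^2\lambda))$, and with high probability every cut of the sampled graph then has size within $(1\pm\epsilon)$ of $p$ times its original size. The following is a minimal sketch of that step, assuming an edge-list representation and an illustrative constant C; it is a sketch of the generic technique, not the paper's algorithm.

        import math
        import random

        def karger_sample(edges, n, lam_guess, eps, C=3.0):
            """Keep each edge independently with probability
            p ~ C * log(n) / (eps^2 * lam_guess); by Karger's sampling
            theorem, every cut of the sample is within (1 +/- eps) of
            p times its original size, with high probability."""
            p = min(1.0, C * math.log(n) / (eps * eps * lam_guess))
            return [e for e in edges if random.random() < p], p

    A cut of size $c$ found in the sample then estimates a cut of size roughly $c/p$ in the original graph, which is how an approximate minimum cut in the sample transfers back.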

    Distributed Minimum Cut Approximation

    We study the problem of computing approximate minimum edge cuts by distributed algorithms. We use a standard synchronous message-passing model where in each round, $O(\log n)$ bits can be transmitted over each edge (a.k.a. the CONGEST model). We present a distributed algorithm that, for any weighted graph and any $\epsilon \in (0,1)$, with high probability finds a cut of size at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2+\epsilon})$ rounds, where $\lambda$ is the size of the minimum cut. This algorithm is based on a simple approach for analyzing random edge sampling, which we call the random layering technique. In addition, we present another distributed algorithm, based on a centralized algorithm due to Matula [SODA '93], that with high probability computes a cut of size at most $(2+\epsilon)\lambda$ in $\tilde{O}((D+\sqrt{n})/\epsilon^5)$ rounds for any $\epsilon > 0$. The time complexities of both algorithms almost match the $\tilde{\Omega}(D+\sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus answering an open question raised by Elkin [SIGACT News '04] and Das Sarma et al. [STOC '11]. Furthermore, we strengthen the lower bound of Das Sarma et al. by extending it to unweighted graphs. We show that the same lower bound also holds for unweighted multigraphs (or, equivalently, for weighted graphs in which $O(w\log n)$ bits can be transmitted in each round over an edge of weight $w$), even if the diameter is $D = O(\log n)$. For unweighted simple graphs, we show that even for networks of diameter $\tilde{O}(\frac{1}{\lambda}\sqrt{\frac{n}{\alpha\lambda}})$, finding an $\alpha$-approximate minimum cut in networks of edge connectivity $\lambda$, or computing an $\alpha$-approximation of the edge connectivity, requires $\tilde{\Omega}(D+\sqrt{\frac{n}{\alpha\lambda}})$ rounds.
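
    The random layering technique, as described above, begins by assigning edges to layers at random; the analysis then reasons about how connected components merge as layers are accumulated. A minimal sketch of the layering step in Python (the function name and interface are illustrative assumptions; the actual analysis is in the paper):

        import random

        def random_layering(edges, L):
            """Independently assign each edge a uniform random layer in
            {1, ..., L}; layers[i] holds the edges of layer i+1. The
            analysis considers the subgraphs formed by the union of
            layers 1..i for increasing i."""
            layers = [[] for _ in range(L)]
            for e in edges:
                layers[random.randrange(L)].append(e)
            return layers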

    Optimal Output Sensitive Fault Tolerant Cuts

    In this paper we consider two classic cut problems, Global Min-Cut and Min k-Cut, through the lens of fault tolerant network design. In particular, given a graph $G$ on $n$ vertices and a positive integer $f$, our objective is to compute an upper bound on the size of the sparsest subgraph $H$ of $G$ that preserves the edge connectivity of $G$ (denoted by $\lambda(G)$) in the case of Global Min-Cut, and $\lambda(G,k)$ (the minimum number of edges whose removal partitions the graph into at least $k$ connected components) in the case of Min k-Cut, upon failure of any $f$ edges of $G$. The subgraph $H$ corresponding to Global Min-Cut and Min k-Cut is called an f-FTCS and an f-FT-k-CS, respectively. We obtain the following results about the sizes of f-FTCS and f-FT-k-CS.
    - There exists an f-FTCS with $(n-1)(f+\lambda(G))$ edges. We complement this upper bound with a matching lower bound, by constructing an infinite family of graphs where any f-FTCS must have at least $\frac{(n-\lambda(G)-1)(\lambda(G)+f-1)}{2}+(n-\lambda(G)-1)+\frac{\lambda(G)(\lambda(G)+1)}{2}$ edges.
    - There exists an f-FT-k-CS with $\min\{(2f+\lambda(G,k)-(k-1))(n-1),\ (f+\lambda(G,k))(n-k)+?\}$ edges. We complement this upper bound with a lower bound, by constructing an infinite family of graphs where any f-FT-k-CS must have at least $\frac{(n-\lambda(G,k)-1)(\lambda(G,k)+f-k+1)}{2}+n-\lambda(G,k)+k-3+\frac{(\lambda(G,k)-k+3)(\lambda(G,k)-k+2)}{2}$ edges.
    Our upper bounds exploit the structural properties of k-connectivity certificates. For our lower bounds, we construct an infinite family of graphs such that, for any graph in the family, any f-FTCS (or f-FT-k-CS) must contain all its edges. We also note that our upper bounds are constructive: there exist polynomial-time algorithms that construct $H$ with the aforementioned number of edges.
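
    The k-connectivity certificates mentioned above can be made concrete: one standard certificate (in the Nagamochi-Ibaraki sense; not necessarily the paper's exact construction) is the union of $k$ edge-disjoint maximal spanning forests, which has at most $k(n-1)$ edges and contains, for every cut, at least $\min(k, |\text{cut}|)$ of its edges. Taking $k = f + \lambda(G)$ lines up with the $(n-1)(f+\lambda(G))$ upper bound. A sketch in Python:

        class DSU:
            """Union-find with path halving."""
            def __init__(self, n):
                self.parent = list(range(n))
            def find(self, x):
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]
                    x = self.parent[x]
                return x
            def union(self, a, b):
                ra, rb = self.find(a), self.find(b)
                if ra == rb:
                    return False
                self.parent[ra] = rb
                return True

        def sparse_certificate(n, edges, k):
            """Peel k maximal spanning forests off the graph; their
            union is a k-connectivity certificate with <= k*(n-1) edges."""
            remaining, certificate = list(edges), []
            for _ in range(k):
                dsu, leftover = DSU(n), []
                for (u, v) in remaining:
                    (certificate if dsu.union(u, v) else leftover).append((u, v))
                remaining = leftover
            return certificate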

    Fast Augmenting Paths by Random Sampling from Residual Graphs

    Consider an $n$-vertex, $m$-edge, undirected graph with integral capacities and max-flow value $v$. We give a new $\tilde{O}(m+nv)$-time maximum flow algorithm. After assigning certain special sampling probabilities to edges in $\tilde{O}(m)$ time, our algorithm is very simple: repeatedly find an augmenting path in a random sample of edges from the residual graph. Breaking from past work, we demonstrate that we can benefit from random sampling in directed (residual) graphs. We also slightly improve an algorithm for approximating flows of arbitrary value, finding a flow of value $(1-\epsilon)$ times the maximum in $\tilde{O}(m\sqrt{n}/\epsilon)$ time.
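
    A minimal sketch of the core loop described above, in Python: sample residual edges, look for an augmenting path in the sample, and push flow along it. Uniform sampling and the matrix representation are simplifying assumptions for illustration; the paper's contribution includes the specially chosen per-edge probabilities, which are omitted here.

        import random
        from collections import deque

        def augment_on_sample(cap, flow, s, t, p):
            """One iteration: sample each residual edge with probability
            p, BFS for an s-t path in the sample, and push one unit of
            flow along it. Returns False if the sample has no s-t path."""
            n = len(cap)
            adj = [[v for v in range(n)
                    if cap[u][v] - flow[u][v] > 0 and random.random() < p]
                   for u in range(n)]
            parent = [-1] * n
            parent[s] = s
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if parent[v] == -1:
                        parent[v] = u
                        queue.append(v)
            if parent[t] == -1:
                return False
            v = t
            while v != s:  # augment by one unit along the found path
                u = parent[v]
                flow[u][v] += 1
                flow[v][u] -= 1
                v = u
            return True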

    Faster Algorithms for Edge Connectivity via Random 2-Out Contractions

    We provide a simple new randomized contraction approach to the global minimum cut problem for simple undirected graphs. The contractions exploit 2-out edge sampling from each vertex rather than the standard uniform edge sampling. We demonstrate the power of our new approach by obtaining better algorithms for sequential, distributed, and parallel models of computation. Our end results include the following randomized algorithms for computing edge connectivity with high probability:
    -- Two sequential algorithms with complexities $O(m\log n)$ and $O(m+n\log^3 n)$. These improve on a long line of developments, including a celebrated $O(m\log^3 n)$ algorithm of Karger [STOC '96] and the state-of-the-art $O(m\log^2 n(\log\log n)^2)$ algorithm of Henzinger et al. [SODA '17]. Moreover, our $O(m+n\log^3 n)$ algorithm is optimal whenever $m = \Omega(n\log^3 n)$. Within our new time bounds, whp, we can also construct the cactus representation of all minimum cuts.
    -- An $\tilde{O}(n^{0.8}D^{0.2}+n^{0.9})$-round distributed algorithm, where $D$ denotes the graph diameter. This improves substantially on a recent breakthrough of Daga et al. [STOC '19], which achieved a round complexity of $\tilde{O}(n^{1-1/353}D^{1/353}+n^{1-1/706})$, hence providing the first sublinear distributed algorithm for exactly computing the edge connectivity.
    -- The first $O(1)$-round algorithm for the massively parallel computation setting with linear memory per machine.
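
    The 2-out sampling step itself is simple to sketch: every vertex picks two random incident edges, and the connected components spanned by the picked edges are contracted. The following Python sketch (adjacency-list input, sampling neighbors with replacement) illustrates that step only, not the full algorithm or its analysis:

        import random

        class DSU:
            """Union-find with path halving."""
            def __init__(self, n):
                self.parent = list(range(n))
            def find(self, x):
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]
                    x = self.parent[x]
                return x
            def union(self, a, b):
                ra, rb = self.find(a), self.find(b)
                if ra != rb:
                    self.parent[ra] = rb

        def two_out_contract(n, adj):
            """Each vertex samples two incident edges; contracting all
            sampled edges yields the components that the paper argues
            preserve a minimum cut with good probability while
            shrinking the graph."""
            dsu = DSU(n)
            for u in range(n):
                for _ in range(2):
                    if adj[u]:
                        dsu.union(u, random.choice(adj[u]))
            return [dsu.find(u) for u in range(n)]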

    Algorithms for Fundamental Problems in Computer Networks.

    Traditional studies of algorithms consider the sequential setting, where the whole input is fed into a single device that computes the solution. Today, a network such as the Internet contains a vast amount of information, and the overhead of aggregating all of it into a single device is too expensive, so a distributed approach is often preferable. In this thesis, we aim to develop efficient algorithms for the following fundamental graph problems that arise in networks, in both sequential and distributed settings. Graph coloring is a basic symmetry-breaking problem in distributed computing: each node is to be assigned a color such that adjacent nodes receive different colors. Both the efficiency and the quality of the coloring are important measures of an algorithm. One of our main contributions is providing tools for obtaining colorings of good quality whose existence is non-trivial. We also consider other optimization problems in the distributed setting. For example, we investigate efficient methods for identifying the connectivity as well as the bottleneck edges in a distributed network. Our approximation algorithm is almost tight in the sense that its running time matches the known lower bound up to a poly-logarithmic factor. As another example, we model how task allocation can be done in ant colonies when the ants may have different capabilities for different tasks. Matching problems are among the classic combinatorial optimization problems. We study weighted matching problems in the sequential setting. We give a new scaling algorithm for finding the maximum weight perfect matching in general graphs, which improves on the long-standing algorithm of Gabow and Tarjan (1991) and matches the running time of the best weighted bipartite perfect matching algorithm (Gabow and Tarjan, 1989). Furthermore, for the maximum weight matching problem in bipartite graphs, we give a faster scaling algorithm whose running time beats Gabow and Tarjan's weighted bipartite perfect matching algorithm.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113540/1/hsinhao_1.pd

    Profile-guided redundancy elimination

    Program optimisations analyse and transform programs so that better performance can be achieved. Classical optimisations mainly use the static properties of programs to analyse code and ensure that the optimisations work for every possible combination of program and input data. This approach is conservative in those cases where a program shows the same runtime behaviour for most of its execution time. Profile-guided optimisations, on the other hand, use runtime profiling information to discover such common behaviours and exploit optimisation opportunities that classical, non-profile-guided optimisations miss. Redundancy elimination is one of the most powerful optimisations in compilers. In this thesis, a new partial redundancy elimination (PRE) algorithm and a new partial dead code elimination (PDE) algorithm are proposed for a profile-guided redundancy elimination framework. During the design and implementation of the algorithms, we address three critical issues: optimality, feasibility and profitability. First, we prove that both our speculative PRE algorithm and our region-based PDE algorithm are optimal for given edge profiling information: the total number of dynamic occurrences of redundant expressions or dead code cannot be further reduced by any other code motion. Moreover, our speculative PRE algorithm is lifetime optimal, meaning that the lifetimes of newly introduced temporary variables are minimised. Second, we show that both algorithms are practical and can be implemented efficiently in production compilers. For the SPEC CPU2000 benchmarks, the average compilation overhead is 3% for our PRE algorithm and less than 2% for our PDE algorithm. Moreover, edge profiling, rather than expensive path profiling, is sufficient to guarantee the optimality of the algorithms. Finally, we demonstrate through a thorough performance evaluation that the proposed profile-guided redundancy elimination techniques provide speedups on real machines. To the best of our knowledge, this is the first performance evaluation of profile-guided redundancy elimination techniques on real machines.
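
    To make partial redundancy concrete, here is a small hypothetical before/after pair (plain Python standing in for compiler IR; the thesis works inside a production compiler and uses edge profiles to decide profitability):

        # Before PRE: a*b is evaluated twice on the path where flag is
        # True, so the later evaluation is partially redundant.
        def before(a, b, flag):
            if flag:
                x = a * b
            else:
                x = 0
            y = a * b  # redundant whenever flag was True
            return x + y

        # After PRE: each path computes a*b exactly once into a
        # temporary, and the later use reads the temporary.
        def after(a, b, flag):
            if flag:
                t = a * b
                x = t
            else:
                t = a * b  # insertion here makes the later use fully available
                x = 0
            y = t
            return x + y

    With edge profiles, speculative PRE generalises this idea: an insertion is allowed even on paths where it adds work, provided the profile-weighted total number of evaluations decreases.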