Almost-Tight Distributed Minimum Cut Algorithms
We study the problem of computing the minimum cut in weighted distributed
message-passing networks (the CONGEST model). Let λ be the value of the minimum cut,
n be the number of nodes in the network, and D be the network diameter. Our
algorithm can compute λ exactly in Õ((√n log* n + D) λ⁴) time. To the best of our knowledge, this is the first paper that
explicitly studies computing the exact minimum cut in the distributed setting.
Previously, non-trivial sublinear-time algorithms for this problem were known
only for unweighted graphs with λ ≤ 3, due to Pritchard and
Thurimella's O(D)-time and O(D + n^{1/2} log* n)-time algorithms for
computing 2-edge-connected and 3-edge-connected components.
By using Karger's edge sampling technique, we can convert this
algorithm into a (1+ε)-approximation Õ((√n log* n + D) ε^{-5})-time algorithm for any ε > 0. This improves
over the previous (2+ε)-approximation Õ((√n log* n + D) ε^{-5})-time algorithm and
O(ε^{-1})-approximation (O(D) + Õ(n^{1/2+ε}))-time algorithm of Ghaffari and Kuhn. Due to the lower
bound of Ω̃(D + √n) by Das Sarma et al., which holds for any
approximation algorithm, this running time is tight up to a polylogarithmic factor.
To get the stated running time, we developed an approximation algorithm that
combines the ideas of Thorup's tree packing algorithm and Matula's contraction algorithm. It
saves a factor polynomial in λ as compared to applying Thorup's tree
packing theorem directly. Then, we combine Kutten and Peleg's tree partitioning
algorithm and Karger's dynamic programming to achieve an efficient distributed
algorithm that finds the minimum cut when we are given a spanning tree that
crosses the minimum cut exactly once.
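The random contraction idea that Karger's edge sampling technique builds on can be sketched sequentially as follows (a minimal centralized toy, not the distributed algorithm of the abstract; the example graph and trial count are illustrative):

```python
import random

def karger_trial(n, edges):
    """One contraction trial: merge endpoints of random edges until 2 supernodes remain."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    supernodes = n
    while supernodes > 2:
        u, v = edges[random.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru != rv:           # contracting a self-loop does nothing
            parent[ru] = rv
            supernodes -= 1
    # edges whose endpoints landed in different supernodes form a cut
    return sum(1 for u, v in edges if find(u) != find(v))

def karger_min_cut(n, edges, trials=200):
    """Repeat trials; each trial preserves the minimum cut with prob >= 2/(n(n-1))."""
    return min(karger_trial(n, edges) for _ in range(trials))

# two triangles joined by a single bridge (2, 3): the minimum cut is 1
random.seed(0)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(karger_min_cut(6, edges))  # finds the bridge cut of size 1
```

Over 200 trials on this 6-node graph, missing the size-1 cut in every trial is astronomically unlikely, which is the amplification argument the sampling-based distributed algorithms reuse.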
Distributed Minimum Cut Approximation
We study the problem of computing approximate minimum edge cuts by
distributed algorithms. We use a standard synchronous message-passing model
where in each round, O(log n) bits can be transmitted over each edge (a.k.a.
the CONGEST model). We present a distributed algorithm that, for any weighted
graph and any ε ∈ (0, 1), with high probability finds a cut of size
at most O(ε^{-1} λ) in (O(D) + Õ(n^{1/2+ε}))
rounds, where λ is the size of the minimum cut and D is the network diameter. This algorithm is based
on a simple approach for analyzing random edge sampling, which we call the
random layering technique. In addition, we also present another distributed
algorithm, which is based on a centralized algorithm due to Matula [SODA '93],
that with high probability computes a cut of size at most (2 + ε)λ
in Õ((D + √n) ε^{-5}) rounds for any ε ∈ (0, 1).
The time complexities of both of these algorithms almost match the
Ω̃(D + √n) lower bound of Das Sarma et al. [STOC '11], thus
leading to an answer to an open question raised by Elkin [SIGACT News '04] and
Das Sarma et al. [STOC '11].
Furthermore, we also strengthen the lower bound of Das Sarma et al. by
extending it to unweighted graphs. We show that the same lower bound also holds
for unweighted multigraphs (or equivalently for weighted graphs in which
O(w log n) bits can be transmitted in each round over an edge of weight w),
even if the diameter is D = O(log n). For unweighted simple graphs, we show
that even for networks of diameter O(√(n/(αλ))/λ), finding an α-approximate minimum cut
in networks of edge connectivity λ or computing an
α-approximation of the edge connectivity requires Ω̃(D + √(n/(αλ))) rounds.
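The sampling intuition behind this line of work can be illustrated centrally (a toy sketch, not the random layering technique itself; graph and parameters are illustrative): if each edge is kept independently with probability p, a cut of size c survives intact with probability p^c, so small cuts are the first to disconnect the sampled graph, and component boundaries of the sample are candidate small cuts.

```python
import random

def components(n, edges):
    """Component label for each vertex, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return [find(v) for v in range(n)]

def sampled_cut(n, edges, p, trials=100):
    """Keep each edge with probability p; whenever the sample disconnects,
    measure the boundary (in the ORIGINAL graph) of one side as a candidate cut."""
    best = len(edges)
    for _ in range(trials):
        sample = [e for e in edges if random.random() < p]
        comp = components(n, sample)
        if len(set(comp)) > 1:
            side = {v for v in range(n) if comp[v] == comp[0]}
            best = min(best, sum(1 for u, v in edges if (u in side) != (v in side)))
    return best

random.seed(1)
# a graph whose minimum cut is the single bridge (2, 3)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(sampled_cut(6, edges, p=0.5))  # recovers the bridge cut of size 1
```

The distributed algorithms turn this one-shot intuition into layered sampling with a careful analysis; the toy only shows why sparse samples expose small cuts.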
Optimal Output Sensitive Fault Tolerant Cuts
In this paper we consider two classic cut problems, Global Min-Cut and Min k-Cut, via the lens of fault tolerant network design. In particular, given a graph G on n vertices and a positive integer f, our objective is to compute an upper bound on the size of the sparsest subgraph H of G that preserves the edge connectivity of G (denoted by λ(G)) in the case of Global Min-Cut, and λ(G,k) (the minimum number of edges whose removal partitions the graph into at least k connected components) in the case of Min k-Cut, upon failure of any f edges of G. The subgraph H corresponding to Global Min-Cut and Min k-Cut is called an f-FTCS and an f-FT-k-CS, respectively. We obtain the following results about the sizes of f-FTCS and f-FT-k-CS.
- There exists an f-FTCS with (n-1)(f+λ(G)) edges. We complement this upper bound with a matching lower bound, by constructing an infinite family of graphs where any f-FTCS must have at least ((n-λ(G)-1)(λ(G)+f-1))/2 + (n-λ(G)-1) + (λ(G)(λ(G)+1))/2 edges.
- There exists an f-FT-k-CS with min{(2f+λ(G,k)-(k-1))(n-1), (f+λ(G,k))(n-k)+?} edges. We complement this upper bound with a lower bound, by constructing an infinite family of graphs where any f-FT-k-CS must have at least ((n-λ(G,k)-1)(λ(G,k)+f-k+1))/2 + n-λ(G,k)+k-3 + ((λ(G,k)-k+3)(λ(G,k)-k+2))/2 edges. Our upper bounds exploit the structural properties of k-connectivity certificates. On the other hand, for our lower bounds we construct an infinite family of graphs such that, for any graph in the family, any f-FTCS (or f-FT-k-CS) must contain all its edges. We also add that our upper bounds are constructive: there exist polynomial time algorithms that construct H with the aforementioned number of edges.
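The k-connectivity certificates that the upper bounds exploit can be illustrated by the classical construction (a simplified sketch, not the paper's algorithm): the union of k successively computed maximal spanning forests of G has at most k(n-1) edges yet preserves every cut of G up to size k.

```python
def sparse_certificate(n, edges, k):
    """Union of k edge-disjoint spanning forests (a Nagamochi-Ibaraki-style
    certificate): at most k*(n-1) edges, preserves all cuts of size <= k."""
    remaining = list(edges)
    certificate = []
    for _ in range(k):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        forest, leftover = [], []
        for u, v in remaining:
            ru, rv = find(u), find(v)
            if ru != rv:       # edge joins two components: add to this forest
                parent[ru] = rv
                forest.append((u, v))
            else:              # edge stays available for a later forest
                leftover.append((u, v))
        certificate.extend(forest)
        remaining = leftover
    return certificate

# complete graph on 5 vertices: 10 edges, but a 2-certificate needs at most 8
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]
cert = sparse_certificate(5, edges, k=2)
print(len(cert))  # 7, within the 2*(5-1) = 8 bound
```

Intuitively, an edge of a cut of size at most k cannot be skipped by all k forests, which is why the certificate preserves such cuts; the fault tolerant subgraphs H in the abstract strengthen this to survive f edge failures.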
Fast Augmenting Paths by Random Sampling from Residual Graphs
Consider an n-vertex, m-edge, undirected graph with integral capacities and max-flow value v. We give a new Õ(m + nv)-time maximum flow algorithm. After assigning certain special sampling probabilities to edges in Õ(m) time, our algorithm is very simple: repeatedly find an augmenting path in a random sample of edges from the residual graph. Breaking from past work, we demonstrate that we can benefit from random sampling from directed (residual) graphs. We also slightly improve an algorithm for approximating flows of arbitrary value, finding a flow of value (1 - ε) times the maximum in Õ(m√n/ε) time.
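The augmenting-path primitive that the abstract accelerates can be seen in its plain form below (a standard BFS-based Ford-Fulkerson baseline, without the paper's residual-graph sampling; the example network is illustrative):

```python
from collections import deque

def max_flow(n, cap_edges, s, t):
    """BFS augmenting paths (Edmonds-Karp) on an adjacency-matrix residual graph."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in cap_edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        prev = [-1] * n
        prev[s] = s
        q = deque([s])
        while q and prev[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and prev[v] == -1:
                    prev[v] = u
                    q.append(v)
        if prev[t] == -1:      # no augmenting path: flow is maximum
            return flow
        # find the bottleneck along the path, then push flow and update residuals
        bottleneck = float('inf')
        v = t
        while v != s:
            bottleneck = min(bottleneck, cap[prev[v]][v])
            v = prev[v]
        v = t
        while v != s:
            cap[prev[v]][v] -= bottleneck
            cap[v][prev[v]] += bottleneck
            v = prev[v]
        flow += bottleneck

# s=0, t=3: 2 units via 0->1->3 and 1 unit via 0->2->3
edges = [(0, 1, 2), (0, 2, 2), (1, 3, 3), (2, 3, 1)]
print(max_flow(4, edges, 0, 3))  # 3
```

The paper's contribution is to make each augmentation cheap by searching only a random sample of residual edges; the control flow above is otherwise the same.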
Faster Algorithms for Edge Connectivity via Random 2-Out Contractions
We provide a simple new randomized contraction approach to the global minimum
cut problem for simple undirected graphs. The contractions exploit 2-out edge
sampling from each vertex rather than the standard uniform edge sampling. We
demonstrate the power of our new approach by obtaining better algorithms for
sequential, distributed, and parallel models of computation. Our end results
include the following randomized algorithms for computing edge connectivity
with high probability:
-- Two sequential algorithms with complexities O(m log n) and O(m + n log³ n). These improve on a long line of developments including a celebrated
O(m log³ n) algorithm of Karger [STOC'96] and the state-of-the-art O(m log² n (log log n)²) algorithm of Henzinger et al. [SODA'17]. Moreover,
our O(m + n log³ n) algorithm is optimal whenever m = Ω(n log³ n).
Within our new time bounds, whp, we can also construct the cactus
representation of all minimum cuts.
-- An Õ(n^{0.8} D^{0.2} + n^{0.9})-round distributed algorithm, where D
denotes the graph diameter. This improves substantially on a recent
breakthrough of Daga et al. [STOC'19], which achieved a round complexity of
Õ(n^{1-1/353} D^{1/353} + n^{1-1/706}) and thereby provided the first sublinear
distributed algorithm for exactly computing the edge connectivity.
-- The first O(1)-round algorithm for the massively parallel computation
setting with linear memory per machine.
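The 2-out sampling step at the core of the approach can be sketched as follows (a toy centralized illustration on an illustrative graph; the full algorithm then processes the small contracted multigraph by other means):

```python
import random

def two_out_contract(n, adj):
    """Each vertex samples 2 incident edges ('2-out'); contracting the sampled
    edges collapses the graph to few supernodes, while any fixed non-trivial
    minimum cut survives the contraction with constant probability."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for v in range(n):
        for u in random.choices(adj[v], k=2):   # 2 random neighbours, with repetition
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    return len({find(v) for v in range(n)})

random.seed(2)
# complete graph on 8 vertices
adj = [[u for u in range(8) if u != v] for v in range(8)]
print(two_out_contract(8, adj))  # far fewer than 8 supernodes remain
```

Contrast with Karger's uniform edge sampling: here every vertex is guaranteed to participate in the sample, which is what drives the stronger contraction guarantees for simple graphs.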
Algorithms for Fundamental Problems in Computer Networks.
Traditional studies of algorithms consider the sequential setting, where the whole input is fed into a single device that computes the solution. Today, a network such as the Internet contains a vast amount of information. The overhead of aggregating all the information into a single device is too expensive, so a distributed approach to solving the problem is often preferable. In this thesis, we aim to develop efficient algorithms for the following fundamental graph problems that arise in networks, in both sequential and distributed settings.
Graph coloring is a basic symmetry-breaking problem in distributed computing. Each node is to be assigned a color such that adjacent nodes are assigned different colors. Both the efficiency and the quality of the coloring are important measures of an algorithm. One of our main contributions is providing tools for obtaining colorings of good quality whose existence is non-trivial. We also consider other optimization problems in the distributed setting. For example, we investigate efficient methods for identifying the connectivity as well as the bottleneck edges in a distributed network. Our approximation algorithm is almost tight in the sense that its running time matches the known lower bound up to a poly-logarithmic factor. For another example, we model how task allocation can be done in ant colonies, when the ants may have different capabilities for different tasks.
Matching problems are among the classic combinatorial optimization problems. We study weighted matching problems in the sequential setting. We give a new scaling algorithm for finding the maximum weight perfect matching in general graphs, which improves on the long-standing algorithm of Gabow and Tarjan (1991) and matches the running time of the best weighted bipartite perfect matching algorithm (Gabow and Tarjan, 1989). Furthermore, for the maximum weight matching problem in bipartite graphs, we give a scaling algorithm whose running time is faster than Gabow and Tarjan's weighted bipartite perfect matching algorithm.
PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113540/1/hsinhao_1.pd
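The coloring problem described above has a simple sequential baseline worth keeping in mind (a greedy sketch, not one of the thesis's distributed algorithms; the 5-cycle is illustrative): scanning vertices in any order and giving each the smallest colour unused by its neighbours needs at most Δ+1 colours, where Δ is the maximum degree.

```python
def greedy_coloring(adj):
    """Assign each vertex the smallest colour not used by an already-coloured
    neighbour; this never needs more than max_degree + 1 colours."""
    colour = {}
    for v in adj:
        taken = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in taken:
            c += 1
        colour[v] = c
    return colour

# 5-cycle: maximum degree 2, so at most 3 colours
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colours = greedy_coloring(adj)
print(colours)  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
```

The distributed difficulty lies in running such assignments concurrently under symmetry, and in achieving colorings of better quality than Δ+1, which is what the thesis addresses.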
Profile-guided redundancy elimination
Program optimisations analyse and transform programs so that
better performance can be achieved. Classical optimisations
mainly use the static properties of the programs to analyse program
code and make sure that the optimisations work for every possible
combination of the program and the input data. This approach
is conservative in those cases when the programs show the same runtime
behaviours for most of their execution time. On the other hand,
profile-guided optimisations use runtime profiling information to discover
the aforementioned common behaviours of the programs and explore
more optimisation opportunities, which are missed in the classical,
non-profile-guided optimisations. Redundancy elimination is one of the
most powerful optimisations in compilers. In this thesis, a new partial
redundancy elimination (PRE) algorithm and a partial dead code elimination
(PDE) algorithm are proposed for a profile-guided redundancy
elimination framework. During the design and implementation of the
algorithms, we address three critical issues: optimality, feasibility and
profitability.
First, we prove that both our speculative PRE algorithm and our
region-based PDE algorithm are optimal for given edge profiling information.
That is, the total number of dynamic occurrences of redundant expressions
or dead code cannot be further reduced by any other code
motion. Moreover, our speculative PRE algorithm is lifetime optimal,
which means that the lifetimes of newly introduced temporary variables
are minimised.
Second, we show that both algorithms are practical and can be efficiently
implemented in production compilers. For SPEC CPU2000
benchmarks, the average compilation overhead for our PRE algorithm
is 3%, and the average overhead for our PDE algorithm is less than 2%.
Moreover, edge profiling rather than expensive path profiling is sufficient
to guarantee the optimality of the algorithms.
Finally, we demonstrate that the proposed profile-guided redundancy
elimination techniques can provide speedups on real machines by conducting
a thorough performance evaluation. To the best of our knowledge,
this is the first performance evaluation of the profile-guided redundancy
elimination techniques on real machines.