Maximum Weight Disjoint Paths in Outerplanar Graphs via Single-Tree Cut Approximators
Since 1997 there has been a steady stream of advances for the maximum
disjoint paths problem. Achieving tractable results has usually required
focusing on relaxations such as: (i) allowing some bounded edge congestion
in solutions; (ii) considering only the unit-weight (cardinality) setting;
(iii) requiring only fractional routability of the selected demands (the
all-or-nothing flow setting). For the general form (no congestion, general
weights, integral routing) of edge-disjoint paths ({\sc edp}), even the case
of unit-capacity trees that are stars generalizes the maximum matching
problem, for which Edmonds provided an exact algorithm. For general
capacitated trees, Garg, Vazirani, and Yannakakis showed the problem is
APX-hard, and Chekuri, Mydlarz, and Shepherd provided a constant-factor
approximation. This is essentially the only setting where a constant
approximation is known for the general form of \textsc{edp}.
We extend their result by giving a constant-factor approximation algorithm for
general-form \textsc{edp} in outerplanar graphs. A key component for the
algorithm is to find a {\em single-tree} cut approximator for
outerplanar graphs. Previously, cut approximators were only known via
distributions over trees; these were based implicitly on the results of
Gupta, Newman, Rabinovich, and Sinclair for distance tree embeddings,
combined with results of Andersen and Feige.
Comment: 19 pages, 6 figures
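To make the star reduction above concrete: a demand between two leaves of a star occupies exactly the two star edges at its endpoints, so an edge-disjoint routing is precisely a matching on the leaves. A minimal brute-force sketch (the function name and input format are my own, not from the paper; for real instances one would use a maximum-weight matching algorithm instead of enumeration):

```python
from itertools import combinations

def max_weight_disjoint_paths_star(demands):
    """Brute-force max-weight EDP on a unit-capacity star.

    demands: list of (leaf_u, leaf_v, weight). A path leaf_u-center-leaf_v
    uses only the two star edges at its endpoints, so a set of demands is
    edge-disjoint iff no leaf is shared, i.e. it is a matching on leaves.
    """
    best = 0
    for r in range(1, len(demands) + 1):
        for subset in combinations(demands, r):
            leaves = [x for (u, v, _) in subset for x in (u, v)]
            if len(leaves) == len(set(leaves)):  # no shared leaf
                best = max(best, sum(w for (_, _, w) in subset))
    return best
```

For instance, with demands `[(1, 2, 5), (2, 3, 4), (3, 4, 5)]` the optimum picks the two non-conflicting demands of weight 5 each.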
Parallel Approximate Maximum Flows in Near-Linear Work and Polylogarithmic Depth
We present a parallel algorithm for the $(1+\epsilon)$-approximate maximum
flow problem in capacitated, undirected graphs with $n$ vertices and $m$
edges, achieving polylogarithmic depth and near-linear work in the PRAM
model. Although near-linear time sequential
algorithms for this problem have been known for almost a decade, no parallel
algorithms that simultaneously achieved polylogarithmic depth and near-linear
work were known.
At the heart of our result is a polylogarithmic depth, near-linear work
recursive algorithm for computing congestion approximators. Our algorithm
involves a recursive step to obtain a low-quality congestion approximator
followed by a "boosting" step to improve its quality which prevents a
multiplicative blow-up in error. Similar to Peng [SODA'16], our boosting step
builds upon the hierarchical decomposition scheme of R\"acke, Shah, and
T\"aubig [SODA'14].
A direct implementation of this approach, however, falls short of
polylogarithmic depth and near-linear work. To get around this, we introduce a
new hierarchical decomposition scheme, in which we only need to solve maximum
flows on subgraphs obtained by contracting vertices, as opposed to
vertex-induced subgraphs used in R\"acke, Shah, and T\"aubig [SODA'14]. In
particular, we are able to directly extract congestion approximators for the
subgraphs from a congestion approximator for the entire graph, thereby avoiding
additional recursion on those subgraphs. Along the way, we also develop a
parallel flow-decomposition algorithm that is crucial to achieving
polylogarithmic depth and may be of independent interest.
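To illustrate the object at the center of this result: a single spanning tree already gives a crude congestion approximator, since every tree edge induces a cut, and routing all demands along tree paths measures the load on those cuts. A small sketch under that simplification (the names and encoding are assumptions, not the paper's machinery):

```python
def tree_congestion(parent, demands, capacity):
    """Route each demand along its unique tree path and report the maximum
    load-to-capacity ratio over tree edges (the tree's congestion).

    parent:   dict child -> parent, defining a rooted spanning tree.
    demands:  list of (u, v, amount).
    capacity: dict keyed by child node c, the capacity of edge (c, parent[c]).
    """
    def path_to_root(u):
        p = [u]
        while p[-1] in parent:
            p.append(parent[p[-1]])
        return p

    load = {e: 0.0 for e in capacity}
    for u, v, amt in demands:
        pu, pv = path_to_root(u), path_to_root(v)
        common = set(pu) & set(pv)  # shared suffix up to the root
        for p in (pu, pv):
            for node in p:
                if node in common:  # reached the meeting point
                    break
                load[node] += amt   # charges edge (node, parent[node])
    return max(load[e] / capacity[e] for e in capacity)
```

The full congestion approximators in the abstract replace the single tree by a hierarchical family of cuts with provable quality guarantees; this sketch only shows what "congestion" measures.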
Near-Optimal Distributed Maximum Flow
We present a near-optimal distributed algorithm for $(1+o(1))$-approximation of single-commodity maximum flow in undirected weighted networks that runs in $(D + \sqrt{n})\cdot n^{o(1)}$ communication rounds in the \Congest model. Here, $n$ and $D$ denote the number of nodes and the network diameter, respectively. This is the first improvement over the trivial bound, and it nearly matches the $\tilde{\Omega}(D + \sqrt{n})$ round complexity lower bound. The development of the algorithm contains two results of independent interest: (i) a $(D + \sqrt{n})\cdot n^{o(1)}$-round distributed construction of a spanning tree of average stretch $n^{o(1)}$; (ii) a $(D + \sqrt{n})\cdot n^{o(1)}$-round distributed construction of an $n^{o(1)}$-congestion approximator consisting of the cuts induced by $O(\log n)$ virtual trees. The distributed representation of the cut approximator allows for evaluation in $(D + \sqrt{n})\cdot n^{o(1)}$ rounds. All our algorithms make use of randomization and succeed with high probability.
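For intuition on ingredient (i): the average stretch of a spanning tree measures how much tree routing inflates graph distances, averaged over the graph's edges. A small centralized sketch (hypothetical function; the paper's construction is distributed and far more involved):

```python
def average_stretch(n, edges, tree_edges):
    """Average stretch of spanning tree `tree_edges` with respect to graph
    `edges`: the mean over graph edges (u, v, w) of d_T(u, v) / w, where
    d_T is the weighted distance in the tree.
    """
    adj = {i: [] for i in range(n)}
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def tree_dist(s, t):
        # Iterative DFS; a tree has a unique s-t path, so this is exact.
        stack = [(s, None, 0.0)]
        while stack:
            node, prev, d = stack.pop()
            if node == t:
                return d
            for nxt, w in adj[node]:
                if nxt != prev:
                    stack.append((nxt, node, d + w))

    return sum(tree_dist(u, v) / w for u, v, w in edges) / len(edges)
```

On a unit-weight 4-cycle with a path as its spanning tree, three edges have stretch 1 and the closing edge has stretch 3, for an average of 1.5.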
On non-linear network embedding methods
As a linear method, spectral clustering is the only network embedding algorithm that offers both provably fast computation and a well-developed theoretical understanding. The accuracy of spectral clustering depends on the Cheeger ratio, defined as the ratio between the graph conductance and the second-smallest eigenvalue of its normalized Laplacian. In several graph families whose Cheeger ratio reaches its upper bound of $\Theta(n)$, spectral clustering is proven to perform poorly. Moreover, recent non-linear network embedding methods have surpassed spectral clustering with state-of-the-art performance, yet with little to no theoretical understanding to back them.
The dissertation includes work that: (1) extends the theory of spectral clustering in order to address its weaknesses and provide grounds for a theoretical understanding of existing non-linear network embedding methods; (2) provides non-linear extensions of spectral clustering with theoretical guarantees, e.g., via different spectral modification algorithms; (3) demonstrates the potential of this approach on different types and sizes of graphs from industrial applications; and (4) makes a theory-informed use of artificial networks.
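For concreteness, the numerator of the Cheeger ratio, the graph conductance, can be computed by brute force on small graphs (exponential-time illustration only; the function name and encoding are mine):

```python
from itertools import combinations

def conductance(n, edges):
    """Graph conductance: the minimum over nonempty proper vertex sets S of
    cut(S) / min(vol(S), vol(V \\ S)), where vol is the sum of degrees.
    Exhaustive search, so only usable for small n.
    """
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total_vol = sum(deg)
    best = float("inf")
    for r in range(1, n):
        for S in combinations(range(n), r):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            vol = sum(deg[u] for u in S)
            best = min(best, cut / min(vol, total_vol - vol))
    return best
```

On two triangles joined by a single bridge, the best cut is the bridge itself, giving conductance 1/7; dividing such conductances by the second-smallest normalized-Laplacian eigenvalue yields the Cheeger ratio discussed above.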
Fast Generation of Random Spanning Trees and the Effective Resistance Metric
We present a new algorithm for generating a uniformly random spanning tree in
an undirected graph. Our algorithm samples such a tree in expected
$\tilde{O}(m^{4/3})$ time. This improves over the best previously known bound
of $\min\{\tilde{O}(m\sqrt{n}), O(n^{\omega})\}$ -- that follows from the work
of Kelner and M\k{a}dry [FOCS'09] and of Colbourn et al. [J. Algorithms'96] --
whenever the input graph is sufficiently sparse.
At a high level, our result stems from carefully exploiting the interplay of
random spanning trees, random walks, and the notion of effective resistance, as
well as from devising a way to algorithmically relate these concepts to the
combinatorial structure of the graph. This involves, in particular,
establishing a new connection between the effective resistance metric and the
cut structure of the underlying graph.
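As background for the random-walk connection exploited here, the classical Aldous-Broder algorithm samples a uniformly random spanning tree by recording, for each vertex other than the start, the edge by which a simple random walk first enters it. This is a well-known baseline whose expected running time is the cover time, not the faster algorithm of the abstract:

```python
import random

def aldous_broder(adj, seed=None):
    """Sample a uniformly random spanning tree of a connected undirected
    graph via the Aldous-Broder walk.

    adj: dict vertex -> list of neighbours.
    Returns a list of (from, to) first-entrance edges; by construction it
    has exactly len(adj) - 1 edges and spans all vertices.
    """
    rng = random.Random(seed)
    vertices = list(adj)
    current = vertices[0]
    visited = {current}
    tree = []
    while len(visited) < len(vertices):
        nxt = rng.choice(adj[current])  # one step of the random walk
        if nxt not in visited:          # first entrance: keep this edge
            visited.add(nxt)
            tree.append((current, nxt))
        current = nxt
    return tree
```

The uniformity of the output over all spanning trees is exactly the kind of walk/tree interplay the abstract refers to; the paper's contribution is achieving this much faster than the cover time by also bringing effective resistance into play.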