Improved Approximation Bounds for Minimum Weight Cycle in the CONGEST Model
Minimum Weight Cycle (MWC) is the problem of finding a simple cycle of
minimum weight in a graph. This is a fundamental graph problem with
classical sequential algorithms whose running times are polynomial in
n = |V| and m = |E|. In recent years this problem has received significant
attention both in the context of hardness through fine-grained sequential
complexity and in the design of faster sequential approximation algorithms.
For computing minimum weight cycle in the distributed CONGEST model,
near-linear (in n) lower and upper bounds on round complexity are known for
directed graphs (weighted and unweighted) and for undirected weighted graphs;
these lower bounds also apply to sufficiently close approximations of MWC.
This paper focuses on round complexity bounds for approximating MWC in the
CONGEST model. For coarse approximations we show that, for any constant
approximation ratio, approximating MWC requires polynomially many rounds (in n)
on weighted undirected graphs and on directed graphs, even if unweighted. We
complement these lower bounds with sublinear-round algorithms for approximating
MWC close to a factor of 2 in these classes of graphs.
A key ingredient of our approximation algorithms is an efficient algorithm
for computing approximate shortest paths from k sources in directed and
weighted graphs, which may be of independent interest for other CONGEST
problems. We present an algorithm whose round complexity depends on k and
smoothly interpolates between the best known upper bounds for approximate
(or exact) SSSP when k = 1 and APSP when k = n.
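The classical sequential baseline the abstract alludes to can be illustrated with a simple (non-distributed) sketch: for each source s, run Dijkstra and close a cycle using any edge pointing back into s. This is an illustrative reconstruction of the standard n-Dijkstra approach, not the paper's CONGEST algorithm; the function name and graph encoding are assumptions.

```python
import heapq

def min_weight_cycle(n, edges):
    """Sketch of the classical O(n * m log n) approach for a directed
    graph with positive edge weights: for each source s, run Dijkstra,
    then close a cycle with any edge (u, s) back into s."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    best = float("inf")
    for s in range(n):
        # Dijkstra from s
        dist = [float("inf")] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        # close the cycle: path s -> u, then edge u -> s
        for u, v, w in edges:
            if v == s and dist[u] != float("inf"):
                best = min(best, dist[u] + w)
    return best
```

With positive weights the shortest path s -> u is simple and avoids revisiting s, so dist[u] + w(u, s) is the weight of a simple cycle through s; minimizing over all s and all closing edges yields the MWC.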
Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
There has been significant recent interest in parallel graph processing due
to the need to quickly analyze the large graphs available today. Many graph
codes have been designed for distributed memory or external memory. However,
today even the largest publicly-available real-world graph (the Hyperlink Web
graph with over 3.5 billion vertices and 128 billion edges) can fit in the
memory of a single commodity multicore server. Nevertheless, most experimental
work in the literature reports results on much smaller graphs, and work on
the Hyperlink graph uses distributed or external memory. Therefore, it is
natural to ask whether we can efficiently solve a broad class of graph problems
on this graph in memory.
This paper shows that theoretically-efficient parallel graph algorithms can
scale to the largest publicly-available graphs using a single machine with a
terabyte of RAM, processing them in minutes. We give implementations of
theoretically-efficient parallel algorithms for 20 important graph problems. We
also present the optimizations and techniques that we used in our
implementations, which were crucial in enabling us to process these large
graphs quickly. We show that the running times of our implementations
outperform existing state-of-the-art implementations on the largest real-world
graphs. For many of the problems that we consider, this is the first time they
have been solved on graphs at this scale. We have made the implementations
developed in this work publicly-available as the Graph-Based Benchmark Suite
(GBBS).Comment: This is the full version of the paper appearing in the ACM Symposium
on Parallelism in Algorithms and Architectures (SPAA), 201
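GBBS builds on the Ligra-style frontier abstraction, in which each round maps over the out-edges of the current vertex frontier to produce the next one. A minimal sequential Python sketch of frontier-based BFS in that style (the function name and graph encoding are illustrative, not from GBBS itself):

```python
def bfs_levels(adj, src):
    """Frontier-based BFS: each round expands the current frontier's
    out-edges to discover the next frontier, recording BFS levels."""
    n = len(adj)
    parent = [-1] * n
    parent[src] = src
    levels = {src: 0}
    frontier = [src]
    level = 0
    while frontier:
        level += 1
        nxt = []
        for u in frontier:          # in GBBS this map is parallel
            for v in adj[u]:
                if parent[v] == -1:  # claim v for this round
                    parent[v] = u
                    levels[v] = level
                    nxt.append(v)
        frontier = nxt
    return parent, levels
```

In the parallel setting the per-frontier edge map runs concurrently (with atomic claims on vertices), and direction-optimizing variants switch between sparse and dense frontier representations; the sketch above shows only the round structure.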
Space-Efficient Routing Tables for Almost All Networks and the Incompressibility Method
We use the incompressibility method based on Kolmogorov complexity to
determine the total number of bits of routing information for almost all
network topologies. In most models for routing, we determine the number of
bits that is necessary and sufficient for shortest path routing on almost all
labeled graphs. By `almost all graphs' we mean the Kolmogorov random graphs,
which constitute a fraction of 1 - 1/n^c of all graphs on n nodes, where c is
an arbitrary fixed constant. There is a model for which the average case lower
bound is higher, and another model for which the average case upper bound is
lower. This clearly exposes the sensitivity of such bounds
to the model under consideration. If paths have to be short, but need not be
shortest (that is, if the stretch factor may be larger than 1), then much less
space is needed on average, even in the more demanding models. We also
determine the number of bits full-information routing requires on average. For
worst-case static networks we prove lower bounds for shortest path routing and
all stretch factors in some networks where free relabeling is not allowed.
Comment: 19 pages, LaTeX, 1 table, 1 figure; SIAM J. Comput., to appear
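The object being counted in this abstract is the per-node routing table: for every destination, a node stores the neighbor to which it forwards. A minimal sketch for undirected, unweighted graphs, where the next hop from u toward d is u's parent in a BFS tree rooted at d (the function name and encoding are illustrative):

```python
from collections import deque

def routing_tables(adj):
    """table[u][d] = neighbor of u on a shortest path toward d
    (shortest-path routing tables for an undirected, unweighted graph)."""
    n = len(adj)
    table = [[None] * n for _ in range(n)]
    for d in range(n):
        # BFS from the destination d
        parent = [None] * n
        seen = [False] * n
        seen[d] = True
        q = deque([d])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    parent[v] = u  # u is one hop closer to d than v
                    q.append(v)
        for u in range(n):
            table[u][d] = parent[u]
    return table
```

Each of the n tables has n entries, each naming a neighbor, which is the kind of total bit count the incompressibility argument bounds from below for Kolmogorov random graphs.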