Massively Parallel Algorithms for Distance Approximation and Spanners
Over the past decade, there has been increasing interest in
distributed/parallel algorithms for processing large-scale graphs. By now, we
have quite fast algorithms -- usually sublogarithmic-time or even
faster -- for a number of fundamental graph
problems in the massively parallel computation (MPC) model. This model is a
widely-adopted theoretical abstraction of MapReduce style settings, where a
number of machines communicate in an all-to-all manner to process large-scale
data. Contributing to this line of work on MPC graph algorithms, we present
sublogarithmic-round MPC algorithms for computing graph
spanners in the strongly sublinear regime of local memory. To
the best of our knowledge, these are the first sublogarithmic-time MPC
algorithms for spanner construction. As primary applications of our spanners,
we get two important implications, as follows:
- For the MPC setting, we get a sublogarithmic-round algorithm for
approximating all pairs shortest paths (APSP) in the
near-linear regime of local memory. To the best of our knowledge, this is the
first sublogarithmic-time MPC algorithm for distance approximations.
- Our result above also extends to the Congested Clique model of distributed
computing, with the same round complexity and approximation guarantee. This
gives the first sublogarithmic-round algorithm for approximating APSP in
weighted graphs in the Congested Clique model.
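The spanner-based route to distance approximation can be illustrated with the classic sequential greedy t-spanner -- a hedged sketch, not the MPC construction from the abstract; `greedy_spanner` and `dijkstra` are hypothetical helper names, and the quadratic-ish running time is fine only for small graphs:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths over an adjacency dict {u: [(v, w), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def greedy_spanner(n, edges, t):
    """Classic greedy t-spanner: scan edges by increasing weight and keep an
    edge only if the spanner built so far cannot already t-approximate the
    distance between its endpoints. Guarantees stretch at most t."""
    adj = {u: [] for u in range(n)}
    spanner = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if dijkstra(adj, u).get(v, float("inf")) > t * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            spanner.append((u, v, w))
    return spanner
```

Once such a sparse spanner is in hand, APSP can be approximated by running shortest paths on the spanner instead of the full graph, which is the high-level idea the abstract exploits.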
An Efficient Construction of Yao-Graph in Data-Distributed Settings
A sparse graph that preserves an approximation of the shortest paths between
all pairs of points in a plane is called a geometric spanner. Using range trees
of sublinear size, we design an algorithm in the massively parallel computation
(MPC) model for constructing a geometric spanner known as the Yao graph. This
improves the total time and the total memory of existing algorithms for
geometric spanners from subquadratic to near-linear.
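The Yao graph itself has a simple sequential definition, sketched below (a brute-force baseline, not the range-tree or MPC construction from the abstract; `yao_graph` is a hypothetical name): split the plane around each point into k equal-angle cones and keep an edge to the nearest other point in each cone.

```python
import math

def yao_graph(points, k=6):
    """Brute-force Yao graph: O(k*n) edges, each point linked to its
    nearest neighbor within each of its k cones."""
    edges = set()
    for i, (px, py) in enumerate(points):
        best = {}  # cone index -> (distance, neighbor index)
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            ang = math.atan2(qy - py, qx - px) % (2 * math.pi)
            cone = int(ang / (2 * math.pi / k))
            d = math.hypot(qx - px, qy - py)
            if cone not in best or d < best[cone][0]:
                best[cone] = (d, j)
        for _, j in best.values():
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)
```

This quadratic baseline is what the range-tree approach accelerates: the nearest point inside a cone is found by sublinear-size range queries rather than a linear scan.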
A Massively Parallel Dynamic Programming for Approximate Rectangle Escape Problem
The massively parallel computation (MPC) model requires sublinear time
complexity. We achieve this by breaking dynamic programs into a set of sparse
dynamic programs that can be divided, solved, and merged in sublinear time.
The rectangle escape problem (REP) is defined as follows: given a set of
axis-aligned rectangles inside an axis-aligned bounding box, extend each
rectangle in only one of the four directions -- up, down, left, or right --
until it reaches the boundary of the box, so that the density is minimized,
where the density is the maximum number of rectangle extensions that pass
through a point inside the bounding box. REP is NP-hard. If the rectangles are
points of a grid (or unit squares of a grid), the problem is called the square
escape problem (SEP), and it is still NP-hard.
We give an approximation algorithm for SEP whose time complexity improves on
that of existing algorithms, which are at least quadratic. Moreover, the
approximation ratio our algorithm achieves in this case is tight. We also give
an approximation algorithm for REP and an MPC version of this algorithm, which
is the first parallel algorithm for this problem.
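The density objective in REP can be made concrete with a small sketch (illustrative only, on an integer grid with half-open rectangles; `escape_density` and `brute_force_rep` are hypothetical names, and the exhaustive baseline is exponential, nothing like the paper's dynamic program):

```python
from itertools import product

def escape_density(rects, dirs, W, H):
    """Density of one escape assignment: rectangles are half-open integer
    boxes (x1, y1, x2, y2) inside a W x H grid; each escapes in its assigned
    direction, and density is the max number of corridors covering a cell."""
    cover = [[0] * W for _ in range(H)]
    for (x1, y1, x2, y2), d in zip(rects, dirs):
        if d == "up":
            xs, ys = range(x1, x2), range(y2, H)
        elif d == "down":
            xs, ys = range(x1, x2), range(0, y1)
        elif d == "left":
            xs, ys = range(0, x1), range(y1, y2)
        else:  # "right"
            xs, ys = range(x2, W), range(y1, y2)
        for y in ys:
            for x in xs:
                cover[y][x] += 1
    return max(max(row) for row in cover) if rects else 0

def brute_force_rep(rects, W, H):
    """Exact minimum density by trying all 4^n direction assignments
    (feasible only for tiny instances)."""
    best = None
    for dirs in product(["up", "down", "left", "right"], repeat=len(rects)):
        d = escape_density(rects, dirs, W, H)
        if best is None or d < best:
            best = d
    return best
```

The algorithmic content of the abstract is precisely about avoiding this 4^n search: the dynamic program is split into sparse subproblems that can be solved and merged in sublinear time per machine.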
Recommended from our members
Streaming Algorithms Via Reductions
In the streaming algorithms model of computation we must process data in order and without enough memory to remember the entire input. We study reductions between problems in the streaming model with an eye to using reductions as an algorithm design technique. Our contributions include:
* Linear Transformation reductions, which compose with existing linear sketch techniques. We use these for small-space algorithms for numeric measurements of distance-from-periodicity, finding the period of a numeric stream, and detecting cyclic shifts.
* The first streaming graph algorithms in the 'sliding window' model, where we must consider only the most recent L elements for some fixed threshold L. We develop basic algorithms for connectivity and unweighted maximum matching, then develop a variety of other algorithms via reductions to these problems.
* A new reduction from maximum weighted matching to maximum unweighted matching. This reduction immediately yields improved approximation guarantees for maximum weighted matching in the semistreaming, sliding window, and MapReduce models, and extends to the more general problem of finding maximum independent sets in p-systems.
* Algorithms in a stream-of-samples model which exhibit clear sample vs. space tradeoffs. These algorithms are also inspired by examining reductions. We provide algorithms for calculating F_k frequency moments and graph connectivity.
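As a baseline for the connectivity results above, the standard one-pass, insert-only streaming connectivity algorithm (plain union-find over the edge stream -- not the sliding-window or stream-of-samples variants from the thesis; names here are illustrative) looks like this:

```python
class UnionFind:
    """Union-find with path halving; O(n) words of state."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            return True
        return False

def stream_connected_components(n, edge_stream):
    """One-pass semi-streaming connectivity: each edge is inspected once
    and discarded; only O(n) memory is retained across the stream."""
    uf = UnionFind(n)
    components = n
    for u, v in edge_stream:
        if uf.union(u, v):
            components -= 1
    return components
```

The sliding-window setting is strictly harder: an edge that expires may disconnect the graph, so the union-find state alone no longer suffices, which is what motivates the dedicated algorithms in the thesis.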
Coresets Meet EDCS: Algorithms for Matching and Vertex Cover on Massive Graphs
As massive graphs become more prevalent, there is a rapidly growing need for
scalable algorithms that solve classical graph problems, such as maximum
matching and minimum vertex cover, on large datasets. For massive inputs,
several different computational models have been introduced, including the
streaming model, the distributed communication model, and the massively
parallel computation (MPC) model that is a common abstraction of
MapReduce-style computation. In each model, algorithms are analyzed in terms of
resources such as space used or rounds of communication needed, in addition to
the more traditional approximation ratio.
In this paper, we give a single unified approach that yields better
approximation algorithms for matching and vertex cover in all these models. The
highlights include:
* The first one-pass, significantly-better-than-2 approximation for matching
in random-arrival streams that uses subquadratic space.
* The first 2-round, better-than-2 approximation for matching in the MPC
model that uses subquadratic space per machine.
By building on our unified approach, we further develop parallel algorithms
in the MPC model that approximate matching and vertex cover in only a few MPC
rounds and with subquadratic memory per machine. These results settle multiple
open questions posed in the recent paper of Czumaj et al. [STOC 2018]
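The edge-degree constrained subgraph (EDCS) underlying this unified approach can be built sequentially by local fixing. Below is a hedged sketch of a (beta, beta-1)-EDCS construction (not the paper's coreset/streaming machinery; `build_edcs` is a hypothetical name): repeatedly delete subgraph edges whose endpoint degrees sum above beta and add outside edges whose endpoint degrees sum below beta - 1.

```python
def build_edcs(n, edges, beta):
    """Local-fixing (beta, beta-1)-EDCS: at the fixpoint, every edge in H has
    deg_H(u) + deg_H(v) <= beta, and every edge of G outside H has
    deg_H(u) + deg_H(v) >= beta - 1. A standard potential argument shows the
    fixing loop terminates."""
    H = set()
    deg = [0] * n
    changed = True
    while changed:
        changed = False
        for e in edges:
            u, v = e
            if e in H and deg[u] + deg[v] > beta:
                H.remove(e)                      # fix an overfull edge
                deg[u] -= 1; deg[v] -= 1
                changed = True
            elif e not in H and deg[u] + deg[v] < beta - 1:
                H.add(e)                         # fix an underfull edge
                deg[u] += 1; deg[v] += 1
                changed = True
    return H
```

The point of an EDCS is that H is sparse (maximum degree at most beta) yet still contains a large matching of G, which is why it serves as a coreset across the streaming, communication, and MPC models discussed above.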
Local Algorithms for Bounded Degree Sparsifiers in Sparse Graphs
In graph sparsification, the goal has almost always been of a global nature: compress a graph into a smaller subgraph (sparsifier) that maintains certain features of the original graph.
Algorithms can then run on the sparsifier, which in many cases leads to improvements in the overall runtime and memory.
This paper studies sparsifiers that have bounded (maximum) degree, and are thus locally sparse, aiming to improve local measures of runtime and memory. To improve those local measures, it is important to be able to compute such sparsifiers locally.
We initiate the study of local algorithms for bounded degree sparsifiers in unweighted sparse graphs, focusing on the problems of vertex cover, matching, and independent set. Let eps > 0 be a slack parameter and alpha >= 1 be a density parameter.
We devise local algorithms for computing:
1. A (1+eps)-vertex cover sparsifier of degree O(alpha / eps), for any graph of arboricity alpha. (In a graph of arboricity alpha, the average degree of any induced subgraph is at most 2 alpha.)
2. A (1+eps)-maximum matching sparsifier and also a (1+eps)-maximal matching sparsifier of degree O(alpha / eps), for any graph of arboricity alpha.
3. A (1+eps)-independent set sparsifier of degree O(alpha^2 / eps), for any graph of average degree alpha.
Our algorithms require only a single communication round in the standard message passing model of distributed computing,
and moreover, they can be simulated locally in a trivial way.
As an immediate application we can extend results from distributed computing and local computation algorithms that apply to graphs of degree bounded by d to graphs of arboricity O(d / eps) or average degree O(d^2 / eps), at the expense of increasing the approximation guarantee by a factor of (1+eps). In particular, we can extend the plethora of recent local computation algorithms for approximate maximum and maximal matching from bounded degree graphs to bounded arboricity graphs with a negligible loss in the approximation guarantee.
The inherently local behavior of our algorithms can be used to amplify the approximation guarantee of any sparsifier in time roughly linear in its size,
which has immediate applications in the area of dynamic graph algorithms. In particular, the state-of-the-art algorithm for
maintaining a (2-eps)-vertex cover (VC) is at least linear in the graph size, even in dynamic forests. We provide a reduction from the dynamic to the static case, showing that if a t-VC can be computed from scratch in time T(n) in any (sub)family of graphs with arboricity bounded by alpha, for an arbitrary t >= 1,
then a (t+eps)-VC can be maintained with update time T(n) / O((n / alpha) * eps^2), for any eps > 0. For planar graphs this yields an algorithm for maintaining a (1+eps)-VC with constant update time for any constant eps > 0.
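The dynamic-to-static reduction can be illustrated with an insertions-only sketch (a simplification of the result above; `LazyDynamicVC`, `two_approx_vc`, and `rebuild_every` are hypothetical names): keep the cover valid cheaply on each insertion by adding both endpoints of any uncovered edge, and rebuild from scratch with any static t-VC routine every `rebuild_every` updates, giving amortized update time T(n) / rebuild_every.

```python
class LazyDynamicVC:
    """Insertions-only sketch of the dynamic-to-static reduction for
    vertex cover: cheap local patches between periodic full rebuilds."""

    def __init__(self, static_vc_solver, rebuild_every):
        self.solve = static_vc_solver  # assumption: any static t-VC routine
        self.k = rebuild_every
        self.edges = set()
        self.cover = set()
        self.updates = 0

    def insert(self, u, v):
        self.edges.add((u, v))
        if u not in self.cover and v not in self.cover:
            self.cover.update((u, v))  # patch keeps the cover valid
        self.updates += 1
        if self.updates % self.k == 0:
            self.cover = self.solve(self.edges)  # rebuild restores the ratio

    def is_cover(self):
        return all(u in self.cover or v in self.cover for u, v in self.edges)
```

Between rebuilds the cover only grows by a bounded amount, which is where the (t+eps) slack in the stated bound comes from; handling deletions as well requires the additional machinery from the paper.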