Coresets Meet EDCS: Algorithms for Matching and Vertex Cover on Massive Graphs
As massive graphs become more prevalent, there is a rapidly growing need for
scalable algorithms that solve classical graph problems, such as maximum
matching and minimum vertex cover, on large datasets. For massive inputs,
several different computational models have been introduced, including the
streaming model, the distributed communication model, and the massively
parallel computation (MPC) model that is a common abstraction of
MapReduce-style computation. In each model, algorithms are analyzed in terms of
resources such as space used or rounds of communication needed, in addition to
the more traditional approximation ratio.
In this paper, we give a single unified approach that yields better
approximation algorithms for matching and vertex cover in all these models. The
highlights include:
* The first one-pass, significantly-better-than-2-approximation for matching
in random arrival streams that uses subquadratic space, namely a
(1.5+\epsilon)-approximation streaming algorithm that uses O(n^{1.5}) space
for constant \epsilon > 0.
* The first 2-round, better-than-2-approximation for matching in the MPC
model that uses subquadratic space per machine, namely a
(1.5+\epsilon)-approximation algorithm with O(\sqrt{mn} + n) memory per
machine for constant \epsilon > 0.
By building on our unified approach, we further develop parallel algorithms
in the MPC model that give a (1+\epsilon)-approximation to matching and an
O(1)-approximation to vertex cover in only O(\log\log n) MPC rounds and
O(n/\polylog(n)) memory per machine. These results settle multiple open
questions posed in the recent paper of Czumaj et al. [STOC 2018].
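The unifying object behind these results is the edge degree constrained subgraph (EDCS) of Bernstein and Stein: a sparse subgraph H of G in which every edge of H has bounded degree sum, while every edge of G missing from H has large degree sum. As a minimal illustration only (a toy in-memory construction, not the paper's streaming or MPC algorithm; the function and parameter names are ours), the two invariants can be enforced greedily:

```python
from collections import defaultdict

def edcs(edges, beta, beta_minus):
    """Greedy toy construction of an EDCS H of G = (V, edges), i.e. a
    subgraph satisfying (assuming beta_minus < beta, which also yields
    termination via a standard potential-function argument):
      (P1) every edge (u, v) in H has deg_H(u) + deg_H(v) <= beta;
      (P2) every edge (u, v) of G not in H has deg_H(u) + deg_H(v) >= beta_minus.
    """
    H, deg = set(), defaultdict(int)

    def fix_one_violation():
        for (u, v) in list(H):
            if deg[u] + deg[v] > beta:            # (P1) violated: drop the edge
                H.discard((u, v)); deg[u] -= 1; deg[v] -= 1
                return True
        for (u, v) in edges:
            if (u, v) not in H and deg[u] + deg[v] < beta_minus:
                H.add((u, v)); deg[u] += 1; deg[v] += 1   # (P2) violated: add it
                return True
        return False

    while fix_one_violation():
        pass
    return H
```

For suitable \beta and \beta^- = (1-\lambda)\beta, such an H is known to contain a matching within roughly a 2/3 factor of the maximum matching of G, which is where the (1.5+\epsilon) bounds above originate.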
Densest Subgraph in Dynamic Graph Streams
In this paper, we consider the problem of approximating the densest subgraph
in the dynamic graph stream model. In this model of computation, the input
graph is defined by an arbitrary sequence of edge insertions and deletions and
the goal is to analyze properties of the resulting graph given memory that is
sub-linear in the size of the stream. We present a single-pass algorithm that
returns a (1+\epsilon) approximation of the maximum density with high
probability; the algorithm uses O(\epsilon^{-2} n \polylog n) space,
processes each stream update in \polylog(n) time, and uses \poly(n)
post-processing time, where n is the number of nodes. The space used by our
algorithm matches the lower bound of Bahmani et al.~(PVLDB 2012) up to a
poly-logarithmic factor for constant \epsilon. The best existing results for
this problem were established recently by Bhattacharya et al.~(STOC 2015). They
presented a (2+\epsilon) approximation algorithm using similar space and
another algorithm that both processed each update and maintained a
(4+\epsilon) approximation of the current maximum density in \polylog(n)
time per update.
Comment: To appear in MFCS 2015
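One standard route to such a result, and the template the space bound suggests, is sample-then-solve: maintain a uniform sample of the surviving edges under insertions and deletions (which \ell_0-samplers provide in dynamic streams), then estimate the maximum density from the sample in post-processing. The sketch below is a toy, offline illustration of that template under our own naming; for brevity it solves the sampled instance with Charikar's greedy peeling (a 2-approximation) where an exact densest-subgraph routine would be used, and it rescales by m/k as the natural unbiased correction:

```python
import random
from collections import defaultdict

def peel_max_density(edges):
    """Charikar's greedy peeling: repeatedly remove a minimum-degree
    vertex and return the best density |E(S)|/|S| seen along the way
    (a 2-approximation to the maximum subgraph density)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    m, best = len(edges), 0.0
    while adj:
        best = max(best, m / len(adj))
        v = min(adj, key=lambda x: len(adj[x]))   # minimum-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        m -= len(adj[v])
        del adj[v]
    return best

def estimate_max_density(all_edges, k, seed=0):
    """Offline stand-in for the streaming algorithm: draw k edges
    uniformly (in the stream this sample would be maintained with
    l0-samplers under insertions and deletions) and rescale the
    sampled maximum density by m/k."""
    m = len(all_edges)
    sample = random.Random(seed).sample(all_edges, min(k, m))
    return (m / len(sample)) * peel_max_density(sample)
```

With k = O(\epsilon^{-2} n \polylog n) sampled edges, the rescaled estimate concentrates within a (1+\epsilon) factor of the true maximum density, which is the shape of the space bound claimed above.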
Optimal lower bounds for universal relation, and for samplers and finding duplicates in streams
In the communication problem \mathbf{UR} (universal relation) [KRW95],
Alice and Bob respectively receive x, y \in \{0,1\}^n with the promise that
x \neq y. The last player to receive a message must output an index i such
that x_i \neq y_i. We prove that the randomized one-way communication
complexity of this problem in the public coin model is exactly
\Theta(\min\{n, \log(1/\delta)\log^2(n/\log(1/\delta))\}) for failure
probability \delta. Our lower bound holds even if promised
\operatorname{support}(y) \subset \operatorname{support}(x). As a corollary,
we obtain optimal lower bounds for \ell_p-sampling in strict turnstile
streams for 0 \le p < 2, as well as for the problem of finding duplicates in
a stream. Our lower bounds do not need to use large weights, and hold even if
promised x \in \{0,1\}^n at all points in the stream.
We give two different proofs of our main result. The first proof demonstrates
that any algorithm \mathcal{A} solving sampling problems in turnstile streams
in low memory can be used to encode subsets of [n] of certain sizes into a
number of bits below the information-theoretic minimum. Our encoder makes
adaptive queries to \mathcal{A} throughout its execution, taking care
not to violate correctness. This is accomplished by injecting random
noise into the encoder's interactions with \mathcal{A}, which is loosely
motivated by techniques in differential privacy. Our second proof is via a
novel randomized reduction from Augmented Indexing [MNSW98], which needs to
interact with \mathcal{A} adaptively. To handle the adaptivity we identify
certain likely interaction patterns and union bound over them to guarantee
correct interaction on all of them. To guarantee correctness, it is important
that the interaction hides some of its randomness from \mathcal{A} in the
reduction.
Comment: merge of arXiv:1703.08139 and of work of Kapralov, Woodruff, and
Yahyazadeh
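To make the corollary concrete: an \ell_0-sampler is a low-memory linear sketch of a turnstile stream vector x from which one can report a (near-)uniform member of \operatorname{support}(x). The sketch below is a minimal textbook construction in the spirit of Jowhari, Sağlam, and Tarokh, not this paper's; class and variable names are ours, and failure handling is simplified to returning None:

```python
import random

P = (1 << 61) - 1  # prime modulus for the fingerprint test

class L0Sampler:
    """Minimal l0-sampler for a turnstile stream over coordinates
    0..n-1. Level l keeps each coordinate with probability 2^-l and
    maintains a 1-sparse recovery sketch of the survivors:
    s0 = sum of values, s1 = sum of index*value, and a polynomial
    fingerprint certifying (w.h.p.) that exactly one survivor is
    nonzero."""

    def __init__(self, n, seed=0):
        self.n = n
        rng = random.Random(seed)
        self.z = rng.randrange(2, P)          # fingerprint evaluation point
        self.salt = rng.randrange(1 << 61)    # seeds per-coordinate subsampling
        self.levels = n.bit_length() + 1
        self.sk = [[0, 0, 0] for _ in range(self.levels)]  # [s0, s1, fp]

    def _max_level(self, i):
        # deepest subsampling level at which coordinate i survives
        u = random.Random(hash((self.salt, i))).random()
        lvl = 0
        while u < 0.5 and lvl + 1 < self.levels:
            u, lvl = 2 * u, lvl + 1
        return lvl

    def update(self, i, delta):
        """Process the stream update x_i += delta (delta may be negative)."""
        for lvl in range(self._max_level(i) + 1):
            s = self.sk[lvl]
            s[0] += delta
            s[1] += delta * i
            s[2] = (s[2] + delta * pow(self.z, i, P)) % P

    def query(self):
        """Return some coordinate of support(x), or None on failure."""
        for s0, s1, fp in self.sk:
            if s0 != 0 and s1 % s0 == 0:
                i = s1 // s0
                if 0 <= i < self.n and fp == (s0 % P) * pow(self.z, i, P) % P:
                    return i  # this level was 1-sparse w.h.p.
        return None
```

One copy succeeds with constant probability, so \Theta(\log(1/\delta)) independent copies drive the failure probability down to \delta. Because the sketch is linear, Bob can subtract his y from Alice's sketch of x and query a coordinate where they differ; this is the standard route from samplers to one-way \mathbf{UR} protocols, and the reason the lower bound above transfers to samplers.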