Distributed Approximation of Maximum Independent Set and Maximum Matching
We present a simple distributed $\Delta$-approximation algorithm for maximum
weight independent set (MaxIS) in the CONGEST model which completes
in $O(\texttt{MIS}(G)\cdot \log W)$ rounds, where $\Delta$ is the maximum
degree, $\texttt{MIS}(G)$ is the number of rounds needed to compute a maximal
independent set (MIS) on $G$, and $W$ is the maximum weight of a node. Whether
our algorithm is randomized or deterministic depends on the \texttt{MIS}
algorithm used as a black-box.
Plugging in the best known algorithm for MIS gives a randomized solution in
$O(\log n \cdot \log W)$ rounds, where $n$ is the number of nodes.
We also present a deterministic $O(\Delta + \log^* n)$-round algorithm based
on coloring.
We then show how to use our MaxIS approximation algorithms to compute a
$2$-approximation for maximum weight matching without incurring any additional
round penalty in the CONGEST model. We use a known reduction for
simulating algorithms on the line graph while incurring congestion $O(\Delta)$,
but we show our algorithm is part of a broad family of \emph{local aggregation
algorithms} for which we describe a mechanism that allows the simulation to run
in the CONGEST model without an additional overhead.
Next, we show that for maximum weight matching, relaxing the approximation
factor to $(2+\varepsilon)$ allows us to devise a distributed algorithm
requiring $O(\frac{\log \Delta}{\log\log \Delta})$ rounds for any constant
$\varepsilon > 0$. For the unweighted case, we can even obtain a
$(1+\varepsilon)$-approximation in this number of rounds. These algorithms are
the first to achieve the provably optimal round complexity with respect to the
dependency on $\Delta$.
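To illustrate the classical fact the first result builds on --- that any maximal independent set is a $\Delta$-approximation of the maximum independent set (each MIS node can dominate at most $\Delta$ optimal nodes) --- here is a minimal sequential sketch in the unweighted case. This is not the paper's distributed CONGEST algorithm; the greedy pass merely stands in for the MIS black-box, and the example graph is invented for illustration:

```python
import itertools

def greedy_mis(adj):
    """Greedily build a maximal independent set (a sequential
    stand-in for the distributed MIS black-box)."""
    mis, blocked = set(), set()
    for v in adj:
        if v not in blocked:
            mis.add(v)
            blocked.add(v)
            blocked.update(adj[v])  # neighbors can never join later
    return mis

def brute_force_max_is(adj):
    """Exact maximum independent set by exhaustive search (tiny graphs only)."""
    nodes = list(adj)
    for r in range(len(nodes), 0, -1):
        for cand in itertools.combinations(nodes, r):
            if all(u not in adj[v] for u, v in itertools.combinations(cand, 2)):
                return set(cand)
    return set()

# Hypothetical instance: a 6-cycle with chord {0, 3}, so Delta = 3.
adj = {0: {1, 5, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4, 0}, 4: {3, 5}, 5: {4, 0}}
delta = max(len(nbrs) for nbrs in adj.values())
mis = greedy_mis(adj)
opt = brute_force_max_is(adj)
# Any MIS dominates every node, and each MIS node accounts for at most
# Delta optimal nodes, hence |OPT| <= Delta * |MIS|.
assert len(opt) <= delta * len(mis)
```

The distributed contribution of the paper is computing such (weighted) solutions round-efficiently, not the approximation bound itself.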
On Conceptually Simple Algorithms for Variants of Online Bipartite Matching
We present a series of results regarding conceptually simple algorithms for
bipartite matching in various online and related models. We first consider a
deterministic adversarial model. The best approximation ratio possible for a
one-pass deterministic online algorithm is $1/2$, which is achieved by any
greedy algorithm. D\"urr et al. recently presented a $2$-pass algorithm called
Category-Advice that achieves approximation ratio $3/5$. We extend their
algorithm to multiple passes. We prove the exact approximation ratio for the
$k$-pass Category-Advice algorithm for all $k \ge 1$, and show that the
approximation ratio converges to the inverse of the golden ratio
$2/(1+\sqrt{5}) \approx 0.618$ as $k$ goes to infinity. The convergence is
extremely fast --- the $5$-pass Category-Advice algorithm is already within
$0.01\%$ of the inverse of the golden ratio.
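The $1/2$ lower bound for one-pass greedy follows from a standard argument: any greedy rule produces a maximal matching, and a maximal matching has at least half the edges of a maximum one. A small sketch on a hypothetical two-by-two instance (not taken from the paper) where greedy is in fact forced down to exactly half of the optimum:

```python
import itertools

def online_greedy(online_neighbors):
    """One-pass greedy: match each arriving online vertex to the first
    still-free offline neighbor (any such tie-breaking rule is 'greedy')."""
    matched_offline, matching = set(), {}
    for v, nbrs in online_neighbors:
        for u in nbrs:
            if u not in matched_offline:
                matching[v] = u
                matched_offline.add(u)
                break
    return matching

def max_matching_size(online_neighbors):
    """Exact maximum matching size by brute force (tiny instances only)."""
    edges = [(v, u) for v, nbrs in online_neighbors for u in nbrs]
    for r in range(len(edges), 0, -1):
        for cand in itertools.combinations(edges, r):
            if (len({v for v, _ in cand}) == r
                    and len({u for _, u in cand}) == r):
                return r
    return 0

# Adversarial-style instance: 'a' arrives first and greedily grabs 'x',
# leaving 'b' (whose only neighbor is 'x') unmatched.
instance = [("a", ["x", "y"]), ("b", ["x"])]
greedy = online_greedy(instance)
opt = max_matching_size(instance)
assert 2 * len(greedy) >= opt  # greedy is maximal, hence a 1/2-approximation
```

Here greedy matches one edge while the optimum (`a`&rarr;`y`, `b`&rarr;`x`) matches two, showing the ratio $1/2$ is tight for this rule.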
We then consider a natural greedy algorithm in the online stochastic IID
model---MinDegree. This algorithm is an online version of a well-known and
extensively studied offline algorithm MinGreedy. We show that MinDegree cannot
achieve an approximation ratio better than $1-1/e$, which is guaranteed by any
consistent greedy algorithm in the known IID model.
Finally, following the work of Besser and Poloczek, we depart from an
adversarial or stochastic ordering and investigate a natural randomized
algorithm (MinRanking) in the priority model. Although the priority model
allows the algorithm to choose the input ordering in a general but well-defined
way, this natural algorithm cannot obtain the approximation ratio of the
Ranking algorithm in the ROM model.
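For context, the Ranking algorithm referenced above is the classical randomized algorithm of Karp, Vazirani, and Vazirani: fix a uniformly random priority order over the offline vertices, then match each arriving online vertex to its free neighbor of best rank. A minimal sketch on a hypothetical instance (the adjacency lists and the maximality-based size bound are illustrative, not from the paper):

```python
import random

def ranking(offline, online_neighbors, rng=random.Random(0)):
    """KVV Ranking: random priority order over offline vertices;
    each arriving online vertex takes its free neighbor of best rank."""
    order = rng.sample(offline, len(offline))  # uniformly random permutation
    rank = {u: r for r, u in enumerate(order)}
    matched, matching = set(), {}
    for v, nbrs in online_neighbors:
        free = [u for u in nbrs if u not in matched]
        if free:
            u = min(free, key=rank.__getitem__)  # best-ranked free neighbor
            matching[v] = u
            matched.add(u)
    return matching

offline = ["x", "y", "z"]
instance = [("a", ["x", "y"]), ("b", ["x", "z"]), ("c", ["y"])]
m = ranking(offline, instance)
# Ranking always outputs a maximal matching; here the optimum is 3,
# so the output size lies between 2 and 3 regardless of the permutation.
assert 2 <= len(m) <= 3
```

In the ROM (random-order) model Ranking achieves ratio $1-1/e$; the result above says MinRanking cannot match that guarantee even with the ordering freedom the priority model grants.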