Robust randomized matchings
The following game is played on a weighted graph: Alice selects a matching M
and Bob selects a number k. Alice's payoff is the ratio of the weight of
the k heaviest edges of M to the maximum weight of a matching of size at
most k. If M guarantees a payoff of at least α, it is called
α-robust. In 2002, Hassin and Rubinstein gave an algorithm that returns
a 1/√2-robust matching, which is best possible.
We show that Alice can improve her payoff to 1/ln(4) by playing a
randomized strategy. This result extends to a very general class of
independence systems that includes matroid intersection, b-matchings, and
strong 2-exchange systems. It also implies an improved approximation factor for
a stochastic optimization variant known as the maximum priority matching
problem and translates to an asymptotic robustness guarantee for deterministic
matchings, in which Bob can only select numbers larger than a given constant.
Moreover, we give a new LP-based proof of Hassin and Rubinstein's bound.
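As a toy illustration of the payoff defined above (not the paper's algorithm), the robustness of a fixed matching on a small graph can be brute-forced; the graph and helper names below are hypothetical:

```python
from itertools import combinations

# Tiny weighted graph (a 4-cycle): edges as (u, v, weight).
edges = [("a", "b", 3.0), ("b", "c", 2.0), ("c", "d", 2.0), ("a", "d", 1.0)]

def is_matching(subset):
    """A set of edges is a matching if no two edges share a vertex."""
    seen = set()
    for u, v, _ in subset:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def opt(k):
    """Brute force: best total weight over all matchings with at most k edges."""
    best = 0.0
    for r in range(1, k + 1):
        for subset in combinations(edges, r):
            if is_matching(subset):
                best = max(best, sum(w for _, _, w in subset))
    return best

def robustness(matching):
    """min over k of (weight of the k heaviest edges of M) / opt(k)."""
    weights = sorted((w for _, _, w in matching), reverse=True)
    ratio = 1.0
    for k in range(1, len(edges) + 1):
        ratio = min(ratio, sum(weights[:k]) / opt(k))
    return ratio
```

On this instance, the matching {ab, cd} is 1-robust, while {bc, ad} only achieves ratio 3/5 (Bob plays k = 2).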
Robust Assignments via Ear Decompositions and Randomized Rounding
Many real-life planning problems require making a priori decisions before all
parameters of the problem have been revealed. An important special case arises
in scheduling, where a set of tasks needs to be assigned to the available set
of machines or personnel (resources) so that every task is assigned a resource
and no two tasks share the same
resource. In its nominal form, the resulting computational problem becomes the
\emph{assignment problem} on general bipartite graphs.
This paper deals with a robust variant of the assignment problem modeling
situations where certain edges in the corresponding graph are \emph{vulnerable}
and may become unavailable after a solution has been chosen. The goal is to
choose a minimum-cost collection of edges such that if any vulnerable edge
becomes unavailable, the remaining part of the solution contains an assignment
of all tasks.
We present approximation results and hardness proofs for this type of
problems, and establish several connections to well-known concepts from
matching theory, robust optimization, and LP-based techniques.
Comment: Full version of ICALP 2016 paper.
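To make the robustness condition concrete, the following sketch (hypothetical names; a feasibility check only, not the paper's minimum-cost algorithm) verifies that a chosen edge set survives the loss of any single vulnerable edge, using Kuhn's augmenting-path algorithm for bipartite matching:

```python
def bipartite_matching(tasks, adj):
    """Kuhn's augmenting-path algorithm: size of a maximum matching
    of tasks into resources, given adjacency adj[task] -> resources."""
    match = {}  # resource -> task

    def try_assign(task, visited):
        for res in adj.get(task, ()):
            if res in visited:
                continue
            visited.add(res)
            # Free resource, or its current task can be reassigned elsewhere.
            if res not in match or try_assign(match[res], visited):
                match[res] = task
                return True
        return False

    return sum(try_assign(t, set()) for t in tasks)

def is_robust(tasks, chosen_edges, vulnerable):
    """True if, after removing any one vulnerable edge from the chosen
    edge set, all tasks can still be assigned distinct resources."""
    for bad in vulnerable:
        adj = {}
        for (t, r) in chosen_edges:
            if (t, r) != bad:
                adj.setdefault(t, []).append(r)
        if bipartite_matching(tasks, adj) < len(tasks):
            return False
    return True
```

For example, choosing edges (T1,R1), (T1,R2), (T2,R2), (T2,R3) is robust against losing (T1,R1), whereas a bare perfect matching with no redundant edges is not.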
Mix and match: a strategyproof mechanism for multi-hospital kidney exchange
As kidney exchange programs are growing, manipulation by hospitals becomes more of an issue. Assuming that hospitals wish to maximize the number of their own patients who receive a kidney, they may have an incentive to withhold some of their incompatible donor–patient pairs and match them internally, thus harming social welfare. We study mechanisms for two-way exchanges that are strategyproof, i.e., make it a dominant strategy for hospitals to report all their incompatible pairs. We establish lower bounds on the welfare loss of strategyproof mechanisms, both deterministic and randomized, and propose a randomized mechanism that guarantees at least half of the maximum social welfare in the worst case. Simulations using realistic distributions for blood types and other parameters suggest that in practice our mechanism performs much closer to optimal.
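A minimal sketch of the underlying combinatorics, assuming two-way exchanges are modeled as a maximum matching on a compatibility graph (a toy brute force over a reported pool, not the paper's mechanism):

```python
from itertools import combinations

def max_exchanges(pool, compat_edges):
    """Brute-force maximum matching on the compatibility graph restricted
    to the reported pool; each edge is one feasible two-way exchange."""
    usable = [e for e in compat_edges if e[0] in pool and e[1] in pool]
    # Try the largest sets of exchanges first.
    for r in range(len(usable), 0, -1):
        for subset in combinations(usable, r):
            touched = [v for e in subset for v in e]
            if len(touched) == len(set(touched)):  # no pair reused
                return r
    return 0
```

For instance, with pairs a1, a2 at one hospital and b1, b2 at another, and feasible exchanges (a1,b1), (a2,b2), (a1,a2), full reporting allows two exchanges; if the first hospital withholds a1 and a2 for an internal match, the reported pool supports none, halving the number of matched patients.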
A No-Go Theorem for Derandomized Parallel Repetition: Beyond Feige-Kilian
In this work we show a barrier towards proving a randomness-efficient
parallel repetition, a promising avenue for achieving many tight
inapproximability results. Feige and Kilian (STOC'95) proved an impossibility
result for randomness-efficient parallel repetition for two prover games with
small degree, i.e., when each prover has only a few possibilities for the
question of the other prover. In recent years, there have been indications that
randomness-efficient parallel repetition (also called derandomized parallel
repetition) might be possible for games with large degree, circumventing the
impossibility result of Feige and Kilian. In particular, Dinur and Meir
(CCC'11) construct games with large degree whose repetition can be derandomized
using a theorem of Impagliazzo, Kabanets and Wigderson (SICOMP'12). However,
obtaining derandomized parallel repetition theorems that would yield optimal
inapproximability results has remained elusive.
This paper presents an explanation for the current impasse in progress, by
proving a limitation on derandomized parallel repetition. We formalize two
properties which we call "fortification-friendliness" and "yields robust
embeddings." We show that any proof of derandomized parallel repetition
achieving almost-linear blow-up cannot both (a) be fortification-friendly and
(b) yield robust embeddings. Unlike Feige and Kilian, we do not require the
small degree assumption.
Given that virtually all existing proofs of parallel repetition, including
the derandomized parallel repetition result of Dinur and Meir, share these two
properties, our no-go theorem highlights a major barrier to achieving
almost-linear derandomized parallel repetition.
Coresets Meet EDCS: Algorithms for Matching and Vertex Cover on Massive Graphs
As massive graphs become more prevalent, there is a rapidly growing need for
scalable algorithms that solve classical graph problems, such as maximum
matching and minimum vertex cover, on large datasets. For massive inputs,
several different computational models have been introduced, including the
streaming model, the distributed communication model, and the massively
parallel computation (MPC) model that is a common abstraction of
MapReduce-style computation. In each model, algorithms are analyzed in terms of
resources such as space used or rounds of communication needed, in addition to
the more traditional approximation ratio.
In this paper, we give a single unified approach that yields better
approximation algorithms for matching and vertex cover in all these models. The
highlights include:
* The first one pass, significantly-better-than-2-approximation for matching
in random arrival streams that uses subquadratic space, namely a
(1.5+ε)-approximation streaming algorithm that uses O(n^1.5) space
for constant ε > 0.
* The first 2-round, better-than-2-approximation for matching in the MPC
model that uses subquadratic space per machine, namely a
(1.5+ε)-approximation algorithm with O(n^1.5) memory per
machine for constant ε > 0.
By building on our unified approach, we further develop parallel algorithms
in the MPC model that give a (1+ε)-approximation to matching and an
O(1)-approximation to vertex cover in only O(log log n) MPC rounds and
O(n / polylog(n)) memory per machine. These results settle multiple open
questions posed in the recent paper of Czumaj et al. [STOC 2018].
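For contrast with these bounds, the classic one-pass baseline such algorithms improve upon (greedy maximal matching, a 2-approximation using only O(n) space) can be sketched as follows; this is illustrative only and is not the paper's EDCS-based algorithm:

```python
def greedy_streaming_matching(edge_stream):
    """One pass over the edge stream: keep an edge iff both endpoints are
    still unmatched. The result is a maximal matching, whose size is at
    least half the maximum matching (a 2-approximation)."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

On the path 1-2-3-4 streamed as (2,3), (1,2), (3,4), greedy keeps only the middle edge, exactly half the optimum, showing the factor 2 is tight.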
Hamilton cycles in graphs and hypergraphs: an extremal perspective
As one of the most fundamental and well-known NP-complete problems, the
Hamilton cycle problem has been the subject of intensive research. Recent
developments in the area have highlighted the crucial role played by the
notions of expansion and quasi-randomness. These concepts and other recent
techniques have led to the solution of several long-standing problems in the
area. New aspects have also emerged, such as resilience, robustness and the
study of Hamilton cycles in hypergraphs. We survey these developments and
highlight open problems, with an emphasis on extremal and probabilistic
approaches.
Comment: to appear in the Proceedings of the ICM 2014; due to given page
limits, this final version is slightly shorter than the previous arXiv
version.
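As one concrete example of the extremal perspective the survey takes, Dirac's classical theorem (every graph on n ≥ 3 vertices with minimum degree at least n/2 has a Hamilton cycle) can be checked directly on small graphs; the brute-force Hamiltonicity test below is for illustration only:

```python
from itertools import permutations

def has_hamilton_cycle(n, edges):
    """Brute force over vertex orders; fine for tiny n."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    for perm in permutations(range(1, n)):
        order = (0,) + perm
        if all((order[i], order[(i + 1) % n]) in adj for i in range(n)):
            return True
    return False

def satisfies_dirac(n, edges):
    """Dirac's condition: n >= 3 and every vertex has degree >= n/2."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return n >= 3 and min(deg) >= n / 2
```

The 4-cycle meets Dirac's condition and is Hamiltonian; the 4-vertex path fails the condition and indeed has no Hamilton cycle (the converse of Dirac's theorem does not hold in general).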