
    Robust randomized matchings

    The following game is played on a weighted graph: Alice selects a matching $M$ and Bob selects a number $k$. Alice's payoff is the ratio of the weight of the $k$ heaviest edges of $M$ to the maximum weight of a matching of size at most $k$. If $M$ guarantees a payoff of at least $\alpha$, then it is called $\alpha$-robust. In 2002, Hassin and Rubinstein gave an algorithm that returns a $1/\sqrt{2}$-robust matching, which is best possible. We show that Alice can improve her payoff to $1/\ln(4)$ by playing a randomized strategy. This result extends to a very general class of independence systems that includes matroid intersection, b-matchings, and strong 2-exchange systems. It also implies an improved approximation factor for a stochastic optimization variant known as the maximum priority matching problem and translates to an asymptotic robustness guarantee for deterministic matchings, in which Bob can only select numbers larger than a given constant. Moreover, we give a new LP-based proof of Hassin and Rubinstein's bound.
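
    As an illustration of the robustness notion above, the sketch below brute-forces the payoff $\min_k w(M_k)/\mathrm{OPT}_k$ of a fixed matching $M$ on a toy instance (the graph, weights, and function names are hypothetical, and exhaustive enumeration stands in for the authors' algorithm). On this small path no single matching achieves a ratio better than $1/\sqrt{2}$, matching the deterministic bound quoted above.

```python
from itertools import combinations

def is_matching(edges):
    """True if no two edges share an endpoint."""
    seen = set()
    for u, v, w in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def all_matchings(graph):
    # every edge subset with pairwise-disjoint endpoints (fine for tiny instances)
    for r in range(len(graph) + 1):
        for subset in combinations(graph, r):
            if is_matching(subset):
                yield list(subset)

def opt(graph, k):
    # maximum weight of any matching with at most k edges
    return max(sum(w for _, _, w in m) for m in all_matchings(graph) if len(m) <= k)

def robustness(graph, M):
    # Alice's guaranteed payoff when she commits to M and Bob picks k adversarially
    weights = sorted((w for _, _, w in M), reverse=True)
    nu = max(len(m) for m in all_matchings(graph))   # maximum matching size
    return min(sum(weights[:k]) / opt(graph, k) for k in range(1, nu + 1))

# hypothetical toy path a-b-c-d with edge weights 1, sqrt(2), 1
G = [("a", "b", 1.0), ("b", "c", 2 ** 0.5), ("c", "d", 1.0)]
print(robustness(G, [("a", "b", 1.0), ("c", "d", 1.0)]))  # ~0.707 = 1/sqrt(2)
print(robustness(G, [("b", "c", 2 ** 0.5)]))              # ~0.707 as well
```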

    Robust Assignments via Ear Decompositions and Randomized Rounding

    Many real-life planning problems require making a priori decisions before all parameters of the problem have been revealed. An important special case of such problems arises in scheduling, where a set of tasks needs to be assigned to the available set of machines or personnel (resources) in such a way that every task has an assigned resource and no two tasks share the same resource. In its nominal form, the resulting computational problem becomes the \emph{assignment problem} on general bipartite graphs. This paper deals with a robust variant of the assignment problem modeling situations where certain edges in the corresponding graph are \emph{vulnerable} and may become unavailable after a solution has been chosen. The goal is to choose a minimum-cost collection of edges such that if any vulnerable edge becomes unavailable, the remaining part of the solution contains an assignment of all tasks. We present approximation results and hardness proofs for this type of problem, and establish several connections to well-known concepts from matching theory, robust optimization and LP-based techniques. Comment: Full version of ICALP 2016 paper.
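
    A minimal way to make the robustness requirement concrete, under the single-edge-failure reading above: given a candidate edge set, delete each vulnerable edge in turn and check that every task can still be assigned. The instance and helper names below are hypothetical, and this is only a feasibility check, not the paper's approximation algorithm.

```python
def max_matching(tasks, adj):
    # Kuhn's augmenting-path algorithm for bipartite matching
    match = {}                       # resource -> task currently assigned to it
    def try_assign(task, seen):
        for res in adj.get(task, ()):
            if res in seen:
                continue
            seen.add(res)
            if res not in match or try_assign(match[res], seen):
                match[res] = task
                return True
        return False
    return sum(try_assign(t, set()) for t in tasks)

def survives_single_failures(tasks, chosen, vulnerable):
    # feasible iff, after deleting any one vulnerable edge, all tasks remain assignable
    for bad in vulnerable:
        adj = {}
        for t, r in chosen:
            if (t, r) != bad:
                adj.setdefault(t, []).append(r)
        if max_matching(tasks, adj) < len(tasks):
            return False
    return True

# hypothetical instance: two tasks, three machines, edge (t1, m1) may fail
tasks = ["t1", "t2"]
chosen = [("t1", "m1"), ("t1", "m2"), ("t2", "m2"), ("t2", "m3")]
print(survives_single_failures(tasks, chosen, vulnerable=[("t1", "m1")]))  # True
```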

    Mix and match: a strategyproof mechanism for multi-hospital kidney exchange

    As kidney exchange programs are growing, manipulation by hospitals becomes more of an issue. Assuming that hospitals wish to maximize the number of their own patients who receive a kidney, they may have an incentive to withhold some of their incompatible donor–patient pairs and match them internally, thus harming social welfare. We study mechanisms for two-way exchanges that are strategyproof, i.e., make it a dominant strategy for hospitals to report all their incompatible pairs. We establish lower bounds on the welfare loss of strategyproof mechanisms, both deterministic and randomized, and propose a randomized mechanism that guarantees at least half of the maximum social welfare in the worst case. Simulations using realistic distributions for blood types and other parameters suggest that in practice our mechanism performs much closer to optimal.
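
    The social-welfare objective here is simply a maximum matching in the compatibility graph over incompatible pairs. The hypothetical sketch below (using networkx, and not implementing the paper's mechanism) shows how a hospital that withholds pairs and matches them internally can shrink the overall number of matched patients.

```python
import networkx as nx

def num_exchanges(pairs, compatible):
    """Maximum number of two-way exchanges = maximum matching in the compatibility graph."""
    G = nx.Graph()
    G.add_nodes_from(pairs)
    G.add_edges_from(compatible)
    return len(nx.max_weight_matching(G, maxcardinality=True))

# hypothetical instance: hospital A owns pairs a1, a2; hospital B owns b1, b2
pairs = ["a1", "a2", "b1", "b2"]
compatible = [("a1", "a2"), ("a1", "b1"), ("a2", "b2")]

full = num_exchanges(pairs, compatible)          # everyone reports: 2 exchanges, 4 patients matched
withheld = 1 + num_exchanges(["b1", "b2"], [])   # A matches a1-a2 internally: 1 exchange, 2 patients matched
print(full, withheld)
```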

    A No-Go Theorem for Derandomized Parallel Repetition: Beyond Feige-Kilian

    In this work we show a barrier towards proving a randomness-efficient parallel repetition theorem, a promising avenue for achieving many tight inapproximability results. Feige and Kilian (STOC'95) proved an impossibility result for randomness-efficient parallel repetition for two-prover games with small degree, i.e., when each prover has only a few possibilities for the question of the other prover. In recent years, there have been indications that randomness-efficient parallel repetition (also called derandomized parallel repetition) might be possible for games with large degree, circumventing the impossibility result of Feige and Kilian. In particular, Dinur and Meir (CCC'11) construct games with large degree whose repetition can be derandomized using a theorem of Impagliazzo, Kabanets and Wigderson (SICOMP'12). However, obtaining derandomized parallel repetition theorems that would yield optimal inapproximability results has remained elusive. This paper presents an explanation for this lack of progress by proving a limitation on derandomized parallel repetition. We formalize two properties, which we call "fortification-friendliness" and "yields robust embeddings." We show that any proof of derandomized parallel repetition achieving almost-linear blow-up cannot both (a) be fortification-friendly and (b) yield robust embeddings. Unlike Feige and Kilian, we do not require the small-degree assumption. Given that virtually all existing proofs of parallel repetition, including the derandomized parallel repetition result of Dinur and Meir, share these two properties, our no-go theorem highlights a major barrier to achieving almost-linear derandomized parallel repetition.

    Coresets Meet EDCS: Algorithms for Matching and Vertex Cover on Massive Graphs

    As massive graphs become more prevalent, there is a rapidly growing need for scalable algorithms that solve classical graph problems, such as maximum matching and minimum vertex cover, on large datasets. For massive inputs, several different computational models have been introduced, including the streaming model, the distributed communication model, and the massively parallel computation (MPC) model that is a common abstraction of MapReduce-style computation. In each model, algorithms are analyzed in terms of resources such as space used or rounds of communication needed, in addition to the more traditional approximation ratio. In this paper, we give a single unified approach that yields better approximation algorithms for matching and vertex cover in all these models. The highlights include: * The first one-pass, significantly-better-than-2 approximation for matching in random arrival streams that uses subquadratic space, namely a $(1.5+\epsilon)$-approximation streaming algorithm that uses $O(n^{1.5})$ space for constant $\epsilon > 0$. * The first 2-round, better-than-2 approximation for matching in the MPC model that uses subquadratic space per machine, namely a $(1.5+\epsilon)$-approximation algorithm with $O(\sqrt{mn} + n)$ memory per machine for constant $\epsilon > 0$. By building on our unified approach, we further develop parallel algorithms in the MPC model that give a $(1+\epsilon)$-approximation to matching and an $O(1)$-approximation to vertex cover in only $O(\log\log n)$ MPC rounds and $O(n/\mathrm{polylog}(n))$ memory per machine. These results settle multiple open questions posed in the recent paper of Czumaj et al. [STOC 2018].
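
    For contrast with the $(1.5+\epsilon)$ bounds above, the classical one-pass baseline they improve on is greedy maximal matching, which is a 2-approximation and needs space roughly proportional to the number of vertices. The sketch below uses a hypothetical edge stream and is not the paper's EDCS-based algorithm.

```python
def greedy_streaming_matching(edge_stream):
    """One pass over the stream: keep an edge iff both endpoints are still unmatched.
    Produces a maximal matching, hence at most a factor 2 from optimal."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# hypothetical stream of edges
stream = [(1, 2), (2, 3), (3, 4), (5, 6), (4, 5)]
print(greedy_streaming_matching(stream))   # [(1, 2), (3, 4), (5, 6)]
```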

    Hamilton cycles in graphs and hypergraphs: an extremal perspective

    As one of the most fundamental and well-known NP-complete problems, the Hamilton cycle problem has been the subject of intensive research. Recent developments in the area have highlighted the crucial role played by the notions of expansion and quasi-randomness. These concepts and other recent techniques have led to the solution of several long-standing problems in the area. New aspects have also emerged, such as resilience, robustness and the study of Hamilton cycles in hypergraphs. We survey these developments and highlight open problems, with an emphasis on extremal and probabilistic approaches. Comment: to appear in the Proceedings of the ICM 2014; due to page limits, this final version is slightly shorter than the previous arXiv version.

    Decentralized Erasure Codes for Distributed Networked Storage

    We consider the problem of constructing an erasure code for storage over a network when the data sources are distributed. Specifically, we assume that there are n storage nodes with limited memory and k < n sources generating the data. We want a data collector, who can appear anywhere in the network, to query any k storage nodes and be able to retrieve the data. We introduce Decentralized Erasure Codes, which are linear codes with a specific randomized structure inspired by network coding on random bipartite graphs. We show that decentralized erasure codes are optimally sparse, and lead to reduced communication, storage and computation cost over random linear coding. Comment: to appear in IEEE Transactions on Information Theory, Special Issue: Networking and Information Theory.
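
    A minimal sketch of the random-linear-coding baseline mentioned above (dense random coefficients over a prime field; the decentralized construction additionally enforces sparsity, which is not modeled here): each storage node keeps one random linear combination of the k source symbols, and a collector recovers the data from any k nodes by solving a k x k linear system, which is invertible with high probability over the random coefficients.

```python
import random

P = 2_147_483_647   # a prime modulus for the coefficient field (assumption; any large prime works)

def solve_mod_p(A, y, p=P):
    """Gaussian elimination over GF(p); returns x with A x = y (mod p), or None if singular."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], -1, p)                  # modular inverse (Python 3.8+)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

k, n = 3, 8                                  # k data sources, n storage nodes
data = [random.randrange(P) for _ in range(k)]

# each storage node stores one random linear combination of the k source symbols
coeffs = [[random.randrange(P) for _ in range(k)] for _ in range(n)]
stored = [sum(c * d for c, d in zip(row, data)) % P for row in coeffs]

# a collector queries any k storage nodes and inverts the resulting k x k system
queried = random.sample(range(n), k)
A = [coeffs[i] for i in queried]
y = [stored[i] for i in queried]
print(solve_mod_p(A, y) == data)             # True with high probability
```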