635 research outputs found

    Robust randomized matchings

    The following game is played on a weighted graph: Alice selects a matching $M$ and Bob selects a number $k$. Alice's payoff is the ratio of the weight of the $k$ heaviest edges of $M$ to the maximum weight of a matching of size at most $k$. If $M$ guarantees a payoff of at least $\alpha$ then it is called $\alpha$-robust. In 2002, Hassin and Rubinstein gave an algorithm that returns a $1/\sqrt{2}$-robust matching, which is best possible. We show that Alice can improve her payoff to $1/\ln(4)$ by playing a randomized strategy. This result extends to a very general class of independence systems that includes matroid intersection, b-matchings, and strong 2-exchange systems. It also implies an improved approximation factor for a stochastic optimization variant known as the maximum priority matching problem and translates to an asymptotic robustness guarantee for deterministic matchings, in which Bob can only select numbers larger than a given constant. Moreover, we give a new LP-based proof of Hassin and Rubinstein's bound.
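
    The payoff in this game is easy to evaluate on small instances by brute force. The sketch below (our own illustration, not the Hassin-Rubinstein algorithm; the graph and matching are made up) computes, for a fixed matching, the worst ratio over Bob's choices of $k$, i.e. its robustness.

```python
# Brute-force check of alpha-robustness on a tiny weighted graph.
# Illustrative sketch only: exponential enumeration, not the paper's algorithm.
from itertools import combinations

# Edges as (u, v, weight); a small example instance (assumed, not from the paper).
edges = [("a", "b", 3.0), ("b", "c", 2.0), ("c", "d", 2.5), ("a", "d", 1.0)]

def is_matching(edge_set):
    seen = set()
    for u, v, _ in edge_set:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def all_matchings(edges):
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            if is_matching(subset):
                yield list(subset)

def max_weight_matching_of_size_at_most(edges, k):
    return max(sum(w for *_, w in m) for m in all_matchings(edges) if len(m) <= k)

def robustness(M, edges):
    """min over k of (weight of k heaviest edges of M) / (optimal matching of size <= k)."""
    weights = sorted((w for *_, w in M), reverse=True)
    ratios = []
    for k in range(1, len(edges) + 1):
        top_k = sum(weights[:k])
        opt_k = max_weight_matching_of_size_at_most(edges, k)
        ratios.append(top_k / opt_k)
    return min(ratios)

M = [("a", "b", 3.0), ("c", "d", 2.5)]  # Alice's candidate matching
print(robustness(M, edges))
```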

    Changing Bases: Multistage Optimization for Matroids and Matchings

    This paper is motivated by the fact that many systems need to be maintained continually while the underlying costs change over time. The challenge is to continually maintain near-optimal solutions to the underlying optimization problems, without creating too much churn in the solution itself. We model this as a multistage combinatorial optimization problem where the input is a sequence of cost functions (one for each time step); while we can change the solution from step to step, we incur an additional cost for every such change. We study the multistage matroid maintenance problem, where we need to maintain a base of a matroid in each time step under the changing cost functions and acquisition costs for adding new elements. The online version of this problem generalizes online paging. E.g., given a graph, we need to maintain a spanning tree $T_t$ at each step: we pay $c_t(T_t)$ for the cost of the tree at time $t$, and also $|T_t \setminus T_{t-1}|$ for the number of edges changed at this step. Our main result is an $O(\log m \log r)$-approximation, where $m$ is the number of elements/edges and $r$ is the rank of the matroid. We also give an $O(\log m)$-approximation for the offline version of the problem. These bounds hold when the acquisition costs are non-uniform, in which case both these results are the best possible unless P=NP. We also study the perfect matching version of the problem, where we must maintain a perfect matching at each step under changing cost functions and costs for adding new elements. Surprisingly, the hardness drastically increases: for any constant $\epsilon > 0$, there is no $O(n^{1-\epsilon})$-approximation to the multistage matching maintenance problem, even in the offline case.
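
    The multistage objective itself is just bookkeeping: sum the per-step solution costs $c_t(T_t)$ and pay an acquisition cost for every element of $T_t \setminus T_{t-1}$. A minimal sketch of that accounting follows; the solutions, costs, and uniform acquisition cost are illustrative assumptions, and this is not the approximation algorithm.

```python
# Total cost of a multistage solution: per-step element costs plus acquisition
# costs for elements newly added at each step. Illustrative sketch only; the
# solutions and costs below are made up, and a uniform acquisition cost is assumed.
def multistage_cost(solutions, step_costs, acquisition_cost=1.0):
    """solutions: list of sets (e.g. edge sets of spanning trees T_1, ..., T_T).
    step_costs: list of dicts mapping element -> cost at that step."""
    total = 0.0
    previous = set()
    for T_t, c_t in zip(solutions, step_costs):
        total += sum(c_t[e] for e in T_t)                 # c_t(T_t)
        total += acquisition_cost * len(T_t - previous)   # |T_t \ T_{t-1}| newly acquired
        previous = T_t
    return total

# Two steps on a 4-cycle: keep two edges, swap one as the costs change.
solutions = [{"ab", "bc", "cd"}, {"ab", "bc", "ad"}]
step_costs = [
    {"ab": 1, "bc": 1, "cd": 1, "ad": 5},
    {"ab": 1, "bc": 1, "cd": 5, "ad": 1},
]
print(multistage_cost(solutions, step_costs))  # 3 + 3 (tree costs) + 3 + 1 (acquisitions) = 10
```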

    Randomized Strategies for Robust Combinatorial Optimization

    In this paper, we study the following robust optimization problem. Given an independence system and candidate objective functions, we choose an independent set, and then an adversary chooses one objective function, knowing our choice. Our goal is to find a randomized strategy (i.e., a probability distribution over the independent sets) that maximizes the expected objective value. To solve the problem, we propose two types of schemes for designing approximation algorithms. One scheme is for the case when the objective functions are linear. It first finds an approximately optimal aggregated strategy and then retrieves a desired solution with little loss in objective value. The approximation ratio depends on a relaxation of the independence system polytope. As applications, we provide approximation algorithms for a knapsack constraint and for matroid intersection by developing appropriate relaxations and retrievals. The other scheme is based on the multiplicative weights update method. A key technique is to introduce a new concept called $(\eta,\gamma)$-reductions for objective functions with parameters $\eta, \gamma$. We show that our scheme outputs a nearly $\alpha$-approximate solution if there exists an $\alpha$-approximation algorithm for a subproblem defined by $(\eta,\gamma)$-reductions. This improves the approximation ratios of previous results. Using our result, we provide approximation algorithms for the cases when the objective functions are submodular or correspond to the cardinality robustness of the knapsack problem.
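
    On instances small enough to enumerate, the optimal randomized strategy is exactly the solution of a max-min linear program over the independent sets. The sketch below solves that LP with scipy on a toy rank-1 uniform matroid with two linear objectives; the instance and the use of scipy are our own assumptions, not part of the paper.

```python
# Exact max-min randomized strategy on a toy instance, solved as an LP.
# Illustrative sketch only: it enumerates all independent sets explicitly
# (exponential in general) and assumes numpy/scipy are available.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

elements = [0, 1, 2]
rank = 1  # toy independence system: uniform matroid of rank 1 (sets of size <= 1)
independent_sets = [set(S) for r in range(rank + 1) for S in combinations(elements, r)]

# Two linear objectives (one weight per element); the adversary picks one after we commit.
objectives = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

N, K = len(independent_sets), len(objectives)
# Variables: p_1..p_N (distribution over independent sets) and z (guaranteed expected value).
c = np.zeros(N + 1)
c[-1] = -1.0                                   # maximize z  <=>  minimize -z
A_ub = np.zeros((K, N + 1))
A_ub[:, -1] = 1.0                              # z - sum_S p_S * f_j(S) <= 0 for every objective j
for j, w in enumerate(objectives):
    for i, S in enumerate(independent_sets):
        A_ub[j, i] = -sum(w[e] for e in S)
A_eq = np.ones((1, N + 1))
A_eq[0, -1] = 0.0                              # sum_S p_S = 1
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(K), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * N + [(None, None)])

print("guaranteed expected value:", -res.fun)  # 0.5 on this toy instance
for S, p in zip(independent_sets, res.x[:N]):
    if p > 1e-9:
        print(sorted(S), round(p, 3))
```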

    Online Contention Resolution Schemes

    We introduce a new rounding technique designed for online optimization problems, which is related to contention resolution schemes, a technique initially introduced in the context of submodular function maximization. Our rounding technique, which we call online contention resolution schemes (OCRSs), is applicable to many online selection problems, including Bayesian online selection, oblivious posted pricing mechanisms, and stochastic probing models. It allows for handling a wide set of constraints, and shares many strong properties of offline contention resolution schemes. In particular, OCRSs for different constraint families can be combined to obtain an OCRS for their intersection. Moreover, we can approximately maximize submodular functions in the online settings we consider. We thus get a broadly applicable framework for several online selection problems, which improves on previous approaches in terms of the types of constraints that can be handled, the objective functions that can be dealt with, and the assumptions on the strength of the adversary. Furthermore, we resolve two open problems from the literature; namely, we present the first constant-factor constrained oblivious posted price mechanism for matroid constraints, and the first constant-factor algorithm for weighted stochastic probing with deadlines.
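
    To make the notion of selectability concrete, the sketch below simulates a naive single-item scheme: each element is active independently with probability $x_e$ (with $\sum_e x_e \le 1$), and an active element is accepted with probability 1/2 if nothing has been accepted yet, so every active element ends up selected with probability at least 1/4. This toy scheme and instance are our own illustration, not the OCRSs constructed in the paper.

```python
# Monte Carlo look at a naive single-item online scheme: each element e is
# "active" independently with probability x[e] (sum(x) <= 1); on arrival, an
# active element is accepted with probability 1/2 if nothing was accepted yet.
# A short calculation shows each element, conditioned on being active, is then
# selected with probability at least 1/4. Toy illustration only, not the paper's
# construction.
import random

def run_once(x, rng):
    accepted = None
    active = []
    for e, xe in enumerate(x):
        if rng.random() < xe:                 # element e is active
            active.append(e)
            if accepted is None and rng.random() < 0.5:
                accepted = e                  # greedy: take it with probability 1/2
    return accepted, active

def estimate_selectability(x, trials=200_000, seed=0):
    rng = random.Random(seed)
    active_counts = [0] * len(x)
    selected_counts = [0] * len(x)
    for _ in range(trials):
        accepted, active = run_once(x, rng)
        for e in active:
            active_counts[e] += 1
        if accepted is not None:
            selected_counts[accepted] += 1
    return [s / a if a else float("nan") for s, a in zip(selected_counts, active_counts)]

x = [0.5, 0.3, 0.2]  # a feasible fractional point for the single-item constraint
print(estimate_selectability(x))  # each entry should be >= 0.25 (up to sampling noise)
```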