
    Optimal Online Contention Resolution Schemes via Ex-Ante Prophet Inequalities

    Online contention resolution schemes (OCRSs) were proposed by Feldman, Svensson, and Zenklusen [Moran Feldman et al., 2016] as a generic technique for rounding a fractional solution in the matroid polytope in an online fashion. They have found applications in several stochastic combinatorial problems with a commitment constraint: on seeing the value of a stochastic element, the algorithm must immediately and irrevocably decide whether to select it, while always maintaining an independent set in the matroid. Although OCRSs immediately lead to prophet inequalities, these prophet inequalities are not optimal. Can we instead use prophet inequalities to design optimal OCRSs? We design the first optimal 1/2-OCRS for matroids by reducing the problem to designing a matroid prophet inequality against the stronger benchmark of an ex-ante relaxation. We also introduce and design optimal (1-1/e)-random-order CRSs for matroids, which are similar to OCRSs except that the arrival order is chosen uniformly at random.
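
    For intuition, here is a minimal Python sketch of the classical 1/2-selectable OCRS for the single-item (rank-1 matroid) special case; it is an illustration of the selectability guarantee, not the paper's matroid scheme. Given marginals x summing to at most 1, accepting an active element i, while nothing has been selected yet, with probability 1/(2(1 - sum_{j<i} x_j/2)) makes every element selected with probability exactly x_i/2.

    ```python
    import random

    def single_item_ocrs(x, active):
        """Classic 1/2-selectable OCRS for the rank-1 matroid (select at
        most one element). `x` are marginals with sum(x) <= 1; `active`
        is the online realization. A short induction shows the probability
        of still being free before step i is 1 - sum_{j<i} x_j/2, so each
        element i is selected with probability exactly x_i / 2."""
        prefix = 0.0
        for i, xi in enumerate(x):
            if active[i]:
                alpha = 1.0 / (2.0 * (1.0 - prefix / 2.0))  # acceptance prob.
                if random.random() < alpha:
                    return i  # commit irrevocably to element i
            prefix += xi
        return None

    # Monte Carlo sanity check: each element is selected w.p. ~ x_i / 2.
    x = [0.3, 0.5, 0.2]
    counts = [0.0, 0.0, 0.0]
    trials = 200_000
    for _ in range(trials):
        active = [random.random() < xi for xi in x]
        sel = single_item_ocrs(x, active)
        if sel is not None:
            counts[sel] += 1
    print([c / trials for c in counts])  # approximately [0.15, 0.25, 0.10]
    ```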

    When the Optimum is also Blind: a New Perspective on Universal Optimization

    Consider the following variant of the set cover problem. We are given a universe U = {1,...,n} and a collection of subsets C = {S_1,...,S_m}, where each S_i is a subset of U. For every element u in U we need to choose a set phi(u) from C such that u belongs to phi(u). Once we construct and fix the mapping phi from U to C, a subset X of the universe U is revealed, and we must cover all elements of X with exactly phi(X) = {phi(u) : u in X}. The goal is to find a mapping phi such that the cover phi(X) is as cheap as possible. This is an example of a universal problem, where the solution has to be created before the actual instance is revealed. Such problems appear naturally when we need to optimize under uncertainty and it may be too expensive to start computing a good solution only once the input is revealed. A rich body of work has investigated such problems under worst-case analysis, i.e., measuring the quality of a solution by the worst-case ratio between the universal solution on a given instance and the optimum solution for the same instance. As the universal solution is significantly more constrained, such worst-case ratios are typically quite large. One way to make the problem less vulnerable to such extremes is to assume that the instance for which we must create a solution is drawn from some probability distribution; one then minimizes the expected ratio between the universal solution and the optimum solution. The bounds obtained here are indeed smaller than in the worst-case model. But even then we compare apples to oranges, as no universal solution can match the optimum solution on every possible instance. What if we instead compared our approximate universal solution against an optimal universal solution that obeys the same rules as we do? We show that under this viewpoint, still in the stochastic variant, we can indeed obtain better bounds than in the expected-ratio model. For example, for the set cover problem we obtain an H_n-approximation (where H_n is the n-th harmonic number), matching the approximation ratio of the classic deterministic setting. Moreover, we show this for all probability distributions over U with a polynomially large carrier, while all previous results pertained to a model in which elements were sampled independently. Our result is based on rounding a proper configuration IP that captures the optimal universal solution, using tools from submodular optimization. The same basic approach leads to improved approximation algorithms for other related problems, including Vertex Cover, Edge Cover, Directed Steiner Tree, Multicut, and Facility Location.
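
    To make the universal model concrete, here is a minimal Python sketch that builds a mapping phi with the classical greedy set-cover heuristic run on the whole universe; it only illustrates what a universal solution is, and is not the configuration-IP rounding developed in the paper.

    ```python
    def greedy_universal_mapping(universe, sets, cost):
        """Build a universal mapping phi: element -> index of a covering set,
        by running the greedy set-cover heuristic on the full universe and
        assigning each element to the set that first covers it."""
        uncovered = set(universe)
        phi = {}
        while uncovered:
            # Pick the set minimizing cost per newly covered element.
            best = min(
                (i for i in range(len(sets)) if sets[i] & uncovered),
                key=lambda i: cost[i] / len(sets[i] & uncovered),
            )
            for u in sets[best] & uncovered:
                phi[u] = best
            uncovered -= sets[best]
        return phi

    def cover_cost(phi, X, cost):
        """Cost of the precomputed cover phi(X) = {phi(u) : u in X}."""
        return sum(cost[i] for i in {phi[u] for u in X})

    # Toy instance: once phi is fixed, any revealed X is covered by phi(X).
    sets = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
    cost = [1.0, 1.0, 1.0]
    phi = greedy_universal_mapping({1, 2, 3, 4}, sets, cost)
    print(cover_cost(phi, {1, 4}, cost))  # cost of the fixed cover of X = {1, 4}
    ```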

    (Near) Optimal Adaptivity Gaps for Stochastic Multi-Value Probing

    Consider a kidney-exchange application where we want to find a maximum matching in a random graph. To find out whether an edge e exists, we need to perform an expensive test, in which case the edge e appears independently with a known probability p_e. Given a budget on the total cost of the tests, our goal is to find a testing strategy that maximizes the expected maximum-matching size. This application is an example of the stochastic probing problem. In general, the optimal stochastic probing strategy is difficult to find because it is adaptive: it decides which edge to probe next based on the outcomes of the edges probed so far. An alternate approach is to show that the adaptivity gap is small, i.e., that the best non-adaptive strategy always has value close to that of the best adaptive strategy. This lets us focus on designing non-adaptive strategies, which are much simpler. Previous works, however, have focused on Bernoulli random variables, which can only capture whether an edge appears or not. In this work we introduce a multi-value stochastic probing problem, which can also model situations where the weight of an edge follows a probability distribution over multiple values. Our main technical contribution is to obtain (near) optimal bounds on the (worst-case) adaptivity gaps for multi-value stochastic probing over prefix-closed constraints. For a monotone submodular function, we show the adaptivity gap is at most 2 and provide a matching lower bound. For a weighted rank function of a k-extendible system (a generalization of the intersection of k matroids), we show the adaptivity gap is between O(k log k) and k. None of these results were known even in the Bernoulli case, where both our upper and lower bounds also apply, thereby resolving an open question of Gupta et al. [Gupta et al., 2017].
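
    The following toy Monte Carlo sketch (a made-up instance, not from the paper) estimates the value of a fixed, non-adaptive probe set under a cardinality budget; an adaptive strategy would instead choose each next probe based on the outcomes seen so far, and the adaptivity gap bounds how much that extra power can help.

    ```python
    import itertools
    import random

    def nonadaptive_value(elements, probes, f, trials=20_000):
        """Monte Carlo estimate of the expected objective value when the
        fixed set `probes` is probed. `elements` maps each element to a
        pair (activation probability p_e, weight w_e); `f` evaluates the
        objective on the realized weights of active probed elements."""
        total = 0.0
        for _ in range(trials):
            realized = {e: elements[e][1] for e in probes
                        if random.random() < elements[e][0]}
            total += f(realized)
        return total / trials

    # Toy instance: probe 2 of 3 elements; the objective is the largest
    # active weight (a weighted rank function of a 1-uniform matroid).
    elements = {"a": (0.5, 1.0), "b": (0.5, 1.0), "c": (0.9, 0.5)}
    f = lambda realized: max(realized.values(), default=0.0)
    best = max(itertools.combinations(elements, 2),
               key=lambda S: nonadaptive_value(elements, S, f))
    print(best)  # the best non-adaptive probe set, here ('a', 'b')
    ```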

    Submodular Dominance and Applications

    In submodular optimization we often deal with the expected value of a submodular function f over a distribution D on sets of elements. In this work we study such submodular expectations for negatively dependent distributions. We introduce a natural notion of negative dependence, which we call Weak Negative Regression (WNR), that generalizes both Negative Association and Negative Regression. We observe that WNR distributions satisfy Submodular Dominance, whereby the expected value of f under D is at least the expected value of f under a product distribution with the same element marginals. Next, we give several applications of Submodular Dominance to submodular optimization. In particular, we improve the best known submodular prophet inequalities, develop new rounding techniques for polytopes of set systems that admit negatively dependent distributions, and prove the existence of contention resolution schemes for WNR distributions.
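
    As a small numerical illustration of Submodular Dominance (not the paper's proof), the Python sketch below compares E[f] under a uniformly random k-subset, a classical negatively associated distribution, against E[f] under the product distribution with the same marginals k/n, for a coverage function f; the first expectation should dominate.

    ```python
    import itertools
    from math import comb

    def expect_uniform_k_subsets(n, k, f):
        """Exact E[f(S)] for S a uniformly random k-subset of {0..n-1},
        a classical negatively associated (hence WNR) distribution."""
        return sum(f(frozenset(S))
                   for S in itertools.combinations(range(n), k)) / comb(n, k)

    def expect_product(n, p, f):
        """Exact E[f(S)] under the product distribution with marginal p."""
        total = 0.0
        for r in range(n + 1):
            for S in itertools.combinations(range(n), r):
                total += p ** r * (1 - p) ** (n - r) * f(frozenset(S))
        return total

    # Coverage function (monotone submodular): f(S) = # ground items covered.
    covers = {0: {"x"}, 1: {"x", "y"}, 2: {"y", "z"}, 3: {"z"}}
    f = lambda S: len(set().union(*(covers[i] for i in S))) if S else 0

    n, k = 4, 2
    lhs = expect_uniform_k_subsets(n, k, f)  # 2.5
    rhs = expect_product(n, k / n, f)        # 2.25
    print(lhs, rhs, lhs >= rhs)              # Submodular Dominance: True
    ```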

    Fault Tolerance in Euclidean Committee Selection

    In the committee selection problem, the goal is to choose a subset of size k from a set of candidates C that collectively gives the best representation to a set of voters. We consider this problem in Euclidean d-space, where each voter/candidate is a point and voters' preferences are implicitly represented by their Euclidean distances to candidates. We explore fault tolerance in committee selection and study the following three variants: (1) given a committee and a set of f failing candidates, find their optimal replacement; (2) compute the worst-case replacement score for a given committee under failure of f candidates; and (3) design a committee with the best replacement score under worst-case failures. The score of a committee is determined using the well-known (min-max) Chamberlin-Courant rule: minimize the maximum distance between any voter and its closest candidate in the committee. Our main results include the following: (1) in one dimension, all three problems can be solved in polynomial time; (2) in dimension d ≥ 2, all three problems are NP-hard; and (3) all three problems admit a constant-factor approximation in any fixed dimension, and the optimal committee problem has an FPT bicriterion approximation. (To appear in the proceedings of ESA 2023.)
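
    Here is a brute-force Python sketch of the Chamberlin-Courant score and of variant (2), the worst-case replacement score under f failures, in one dimension on a made-up instance; the paper's polynomial-time one-dimensional algorithms are of course far more efficient than this exponential enumeration.

    ```python
    import itertools

    def cc_score(voters, committee):
        """Min-max Chamberlin-Courant score: the maximum over voters of the
        distance to the nearest committee member (1-D Euclidean)."""
        return max(min(abs(v - c) for c in committee) for v in voters)

    def worst_replacement_score(voters, candidates, committee, f):
        """Variant (2): for every set F of f failing committee members,
        find the best f replacements from the remaining candidates, and
        report the worst resulting score over all choices of F."""
        others = [c for c in candidates if c not in committee]
        worst = 0.0
        for F in itertools.combinations(committee, f):
            survivors = [c for c in committee if c not in F]
            best = min(cc_score(voters, survivors + list(R))
                       for R in itertools.combinations(others, f))
            worst = max(worst, best)
        return worst

    voters = [0.0, 1.0, 4.0, 9.0]
    candidates = [0.5, 2.0, 5.0, 8.0, 10.0]
    committee = [0.5, 5.0, 8.0]
    print(cc_score(voters, committee))                                  # 1.0
    print(worst_replacement_score(voters, candidates, committee, f=1))  # 2.0
    ```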

    Applications of Strong Convex Relaxations to Allocation Problems

    Approximation algorithms are a commonly used tool for designing efficient algorithmic solutions to intractable problems, at the expense of the quality of the output solution. A prominent technique for designing such algorithms is the use of Linear Programming (LP) relaxations. An optimal solution to such a relaxation provides a bound on the objective value of the optimal integral solution, against which we compare the integral solution we return. In this context, when studying a specific problem, two natural questions often arise: what is a strong LP relaxation for this problem, and how can we exploit it? Over the past few decades, the research community has expended significant effort to answer these questions for a variety of interesting intractable problems. Although there are multiple problems for which we have designed LP relaxations that achieve best-possible guarantees, there remain numerous problems for which we either have no strong LP relaxations or do not know how to use them. The main focus of this thesis is extending our understanding of such strong relaxations. We focus on designing good approximation algorithms for certain allocation problems by employing a class of strong LP relaxations called configuration-LPs. For many such allocation problems, the best-known results are derived from simple and natural LP relaxations, whereas configuration-LPs have been used successfully on several occasions to break pre-existing barriers set by weaker relaxations. However, our understanding of configuration-LPs is far from complete for many problems. Understanding and using these relaxations to the fullest extent possible is therefore an intriguing question, and answering it could yield improved approximation algorithms for a wide variety of allocation problems.

    The first problem we address in this thesis is the restricted max-min fair allocation problem. Prior to our work, the best known result provided an Ω(1)-approximation that ran in polynomial time. It was also known how to estimate the value of an optimal solution within a factor of 1/(4+c), for any c > 0, by solving the corresponding configuration-LP. Our first contribution is the design of a 1/13-approximation algorithm for the problem using the configuration-LP. Although our algorithm is fully combinatorial, it consists of a local-search procedure that is guaranteed to succeed only when the configuration-LP is feasible; the configuration-LP is thus crucial to establishing the correctness and running time of the algorithm.

    The second problem we study is scheduling jobs on unrelated machines to minimize the sum of weighted completion times. For this problem, the best known approximation algorithm achieves a ratio of 3/2 − r, for some small r > 0. Our second contribution is an improvement of this ratio to (1+√2)/2 + c, for any c > 0, for the special case where the jobs have uniform Smith ratios. To achieve this ratio, we design a randomized rounding algorithm for solutions to the corresponding configuration-LP. Through a careful examination of the distributions this randomized algorithm can output, we identify the one that maximizes the approximation ratio, and we then upper bound the ratio this worst-case distribution exhibits by (1+√2)/2 + c.
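
    For reference, the configuration-LP for max-min fair allocation is usually written in the following feasibility form for a guessed target value T (generic textbook notation, not necessarily the thesis's), where C_i(T) denotes the configurations, i.e., sets of resources of total value at least T, that player i may receive:

    ```latex
    % y_{i,C}: fractional extent to which player i receives configuration C.
    \begin{align*}
      \sum_{C \in \mathcal{C}_i(T)} y_{i,C} &\ge 1
        && \text{for every player } i, \\
      \sum_{i} \sum_{C \in \mathcal{C}_i(T)\,:\, j \in C} y_{i,C} &\le 1
        && \text{for every resource } j, \\
      y_{i,C} &\ge 0.
    \end{align*}
    ```

    The LP has exponentially many variables, one per configuration, so it is typically solved approximately through the dual, whose separation problem is a knapsack-type problem.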