
    The Price of Information in Combinatorial Optimization

    Consider a network design application where we wish to lay down a minimum-cost spanning tree in a given graph; however, we only have stochastic information about the edge costs. To learn the precise cost of any edge, we have to conduct a study that incurs a price. Our goal is to find a spanning tree while minimizing the disutility, which is the sum of the tree cost and the total price that we spend on the studies. In a different application, each edge gives a stochastic reward value. Our goal is to find a spanning tree while maximizing the utility, which is the tree reward minus the prices that we pay. Situations such as the above two often arise in practice where we wish to find a good solution to an optimization problem, but we start with only some partial knowledge about the parameters of the problem. The missing information can be found only after paying a probing price, which we call the price of information. What strategy should we adopt to optimize our expected utility/disutility? A classical example of the above setting is Weitzman's "Pandora's box" problem, where we are given probability distributions on the values of n independent random variables. The goal is to choose a single variable with a large value, but we can find the actual outcomes only after paying a price. Our work is a generalization of this model to other combinatorial optimization problems such as matching, set cover, facility location, and prize-collecting Steiner tree. We give a technique that reduces such problems to their non-price counterparts, and use it to design exact/approximation algorithms to optimize our utility/disutility. Our techniques extend to situations where there are additional constraints on what parameters can be probed or when we can simultaneously probe a subset of the parameters.
    Comment: SODA 201
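
    As a concrete illustration of the Pandora's box baseline, the sketch below (Python; the helper names and discrete-distribution input format are illustrative assumptions, not the paper's notation) computes Weitzman's reservation value sigma_i for each box, defined by E[(v_i - sigma_i)^+] = price_i, and simulates the classical index policy: open boxes in decreasing order of sigma_i and stop once the best value seen so far exceeds the next index.

        import random

        def reservation_value(values, probs, price, lo=0.0, hi=1e9, iters=60):
            # Binary-search the Weitzman index sigma solving E[(v - sigma)^+] = price,
            # for a discrete distribution given by parallel lists `values` and `probs`.
            for _ in range(iters):
                mid = (lo + hi) / 2
                excess = sum(p * max(v - mid, 0.0) for v, p in zip(values, probs))
                if excess > price:
                    lo = mid  # sigma too small: expected excess still exceeds the price
                else:
                    hi = mid
            return (lo + hi) / 2

        def pandora_run(boxes):
            # boxes: list of (values, probs, price). One simulated run of the index
            # policy; returns the realized utility (best opened value minus prices paid).
            ranked = sorted(((reservation_value(*b), b) for b in boxes),
                            key=lambda t: -t[0])
            best, paid = 0.0, 0.0
            for sigma, (values, probs, price) in ranked:
                if best >= sigma:  # stopping rule: current best beats the next index
                    break
                paid += price
                best = max(best, random.choices(values, probs)[0])
            return best - paid

        # Example with two boxes (made-up numbers): estimate expected utility by simulation.
        boxes = [([0.0, 10.0], [0.5, 0.5], 1.0),
                 ([0.0, 6.0], [0.2, 0.8], 0.5)]
        print(sum(pandora_run(boxes) for _ in range(10000)) / 10000)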

    Knapsack Cover Subject to a Matroid Constraint

    We consider the Knapsack Cover problem subject to a matroid constraint. In this problem, we are given a universe U of n items, where item i has two attributes: a cost c(i) and a size s(i). We are also given a demand D and a matroid M = (U, I) on the set U. A feasible solution S to the problem is one such that (i) the cumulative size of the items chosen is at least D, and (ii) the set S is independent in the matroid M (i.e., S is in I). The objective is to minimize the total cost of the items selected, sum_{i in S} c(i). Our main result is a 2-factor approximation for this problem. The problem described above falls in the realm of mixed packing-covering problems. We also consider packing extensions of certain other covering problems and prove that in such cases it is not possible to derive any constant-factor approximations.
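
    To make the constraints concrete, here is a tiny brute-force reference solver (Python; the function name and the independence-oracle interface are assumptions made for illustration, and this is not the paper's 2-approximation algorithm): it enumerates subsets of U, keeps those that are independent in M and cover the demand D, and returns the cheapest.

        from itertools import combinations

        def knapsack_cover_matroid(costs, sizes, demand, is_independent):
            # Exhaustive reference for tiny instances.
            # costs, sizes: dicts item -> c(i), s(i); demand: D;
            # is_independent(S): independence oracle for the matroid M = (U, I).
            items = list(costs)
            best_set, best_cost = None, float("inf")
            for r in range(len(items) + 1):
                for subset in combinations(items, r):
                    S = frozenset(subset)
                    if not is_independent(S):
                        continue
                    if sum(sizes[i] for i in S) < demand:
                        continue
                    cost = sum(costs[i] for i in S)
                    if cost < best_cost:
                        best_set, best_cost = S, cost
            return best_set, best_cost

        # Example with a uniform matroid of rank 2 (at most two items may be picked).
        costs = {"a": 3, "b": 2, "c": 4}
        sizes = {"a": 5, "b": 4, "c": 6}
        print(knapsack_cover_matroid(costs, sizes, demand=9,
                                     is_independent=lambda S: len(S) <= 2))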

    Submodular Maximization with Matroid and Packing Constraints in Parallel

    We consider the problem of maximizing the multilinear extension of a submodular function subject to a single matroid constraint or multiple packing constraints, using a small number of adaptive rounds of evaluation queries. We obtain the first algorithms with low adaptivity for submodular maximization with a matroid constraint. Our algorithms achieve a $1-1/e-\epsilon$ approximation for monotone functions and a $1/e-\epsilon$ approximation for non-monotone functions, which nearly matches the best guarantees known in the fully adaptive setting. The number of rounds of adaptivity is $O(\log^2{n}/\epsilon^3)$, which is an exponential speedup over the existing algorithms. We obtain the first parallel algorithm for non-monotone submodular maximization subject to packing constraints. Our algorithm achieves a $1/e-\epsilon$ approximation using $O(\log(n/\epsilon)\log(1/\epsilon)\log(n+m)/\epsilon^2)$ parallel rounds, which is again an exponential speedup in parallel time over the existing algorithms. For monotone functions, we obtain a $1-1/e-\epsilon$ approximation in $O(\log(n/\epsilon)\log(m)/\epsilon^2)$ parallel rounds. The number of parallel rounds of our algorithm matches that of the state-of-the-art algorithm for solving packing LPs with a linear objective. Our results apply more generally to the problem of maximizing a diminishing-returns submodular (DR-submodular) function.
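
    The evaluation queries above are to the multilinear extension F(x) = E[f(R(x))], where R(x) contains each element i independently with probability x_i. A minimal Monte Carlo estimator for such a query might look as follows (Python; the sample count and the coverage-style example function are illustrative assumptions, not the paper's implementation).

        import random

        def multilinear_estimate(f, x, samples=1000):
            # Estimate F(x) = E[f(R(x))] by sampling random sets R(x) that contain
            # element i independently with probability x[i]; f is a set-function oracle.
            n = len(x)
            total = 0.0
            for _ in range(samples):
                R = {i for i in range(n) if random.random() < x[i]}
                total += f(R)
            return total / samples

        # Example: coverage function f(S) = size of the union of the sets indexed by S,
        # a standard monotone submodular function.
        ground = [{0, 1}, {1, 2}, {2, 3}]
        f = lambda S: len(set().union(*(ground[i] for i in S))) if S else 0
        print(multilinear_estimate(f, x=[0.5, 0.5, 0.5]))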

    A Unifying Hierarchy of Valuations with Complements and Substitutes

    We introduce a new hierarchy over monotone set functions, which we refer to as $\mathcal{MPH}$ (Maximum over Positive Hypergraphs). Levels of the hierarchy correspond to the degree of complementarity in a given function. The highest level of the hierarchy, $\mathcal{MPH}$-$m$ (where $m$ is the total number of items), captures all monotone functions. The lowest level, $\mathcal{MPH}$-$1$, captures all monotone submodular functions and, more generally, the class of functions known as $\mathcal{XOS}$. Every monotone function that has a positive hypergraph representation of rank $k$ (in the sense defined by Abraham, Babaioff, Dughmi and Roughgarden [EC 2012]) is in $\mathcal{MPH}$-$k$. Every monotone function that has supermodular degree $k$ (in the sense defined by Feige and Izsak [ITCS 2013]) is in $\mathcal{MPH}$-$(k+1)$. In both cases, the converse direction does not hold, even in an approximate sense. We present additional results that demonstrate the expressive power of $\mathcal{MPH}$-$k$. One can obtain good approximation ratios for some natural optimization problems, provided that the functions are required to lie in low levels of the $\mathcal{MPH}$ hierarchy. We present two such applications. One shows that the maximum welfare problem can be approximated within a ratio of $k+1$ if all players hold valuation functions in $\mathcal{MPH}$-$k$. The other is an upper bound of $2k$ on the price of anarchy of simultaneous first-price auctions. Being in $\mathcal{MPH}$-$k$ can be shown to involve two requirements: one is monotonicity and the other is a certain requirement that we refer to as $\mathcal{PLE}$ (Positive Lower Envelope). Removing the monotonicity requirement, one obtains the $\mathcal{PLE}$ hierarchy over all non-negative set functions (whether monotone or not), which can be fertile ground for further research.
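
    For intuition about the definition, an MPH-k valuation is a pointwise maximum over positive-hypergraph (PH-k) clauses, where each clause sums non-negative weights of hyperedges of size at most k that are fully contained in the chosen set. A small evaluation sketch (Python; the data layout and example numbers are illustrative assumptions):

        def ph_value(S, hyperedges):
            # PH-k clause: sum the non-negative weights of hyperedges contained in S.
            return sum(w for edge, w in hyperedges if edge <= S)

        def mph_value(S, clauses):
            # MPH-k valuation: the maximum of its PH-k clauses.
            return max(ph_value(S, clause) for clause in clauses)

        # Example MPH-2 valuation over items {1, 2, 3}: the first clause expresses a
        # complementarity between items 1 and 2; the second values item 3 on its own.
        clauses = [
            [(frozenset({1}), 2.0), (frozenset({1, 2}), 3.0)],
            [(frozenset({3}), 4.0)],
        ]
        print(mph_value(frozenset({1, 2}), clauses))  # -> 5.0
        print(mph_value(frozenset({3}), clauses))     # -> 4.0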