
    Uniform unweighted set cover: The power of non-oblivious local search

    We are given n base elements and a finite collection of subsets of them. The size of each subset is between p and k (p < k). In addition, we assume that the input contains all possible subsets of size p. Our objective is to find a minimum-cardinality subcollection that covers all the elements. This problem is known to be NP-hard. We provide two approximation algorithms for it: one for the generic case, and an improved one for the special case (p, k) = (2, 4).

    The algorithm for the generic case is a greedy one based on packing phases: at each phase we pick a collection of disjoint subsets covering i new elements, starting from i = k down to i = p + 1. At a final step we cover the remaining base elements by the subsets of size p. We derive the exact performance guarantee of this algorithm for all values of k and p; it is less than H_k, where H_k is the k-th harmonic number. The algorithm also incorporates the known improvement methods for the greedy algorithm for the unweighted k-set cover problem (in which subset sizes are only restricted not to exceed k), and hence it serves as a benchmark for our improved algorithm.

    The improved algorithm for the special case (p, k) = (2, 4) is based on non-oblivious local search: it starts with a feasible cover, and then repeatedly tries to replace sets of size 3 and 4 so as to maximize an objective function that prefers large sets over small ones. For this case, our generic algorithm achieves an asymptotic approximation ratio of 1.5 + ε, and the local search algorithm achieves a better ratio, bounded by 1.458333… + ε.
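
    As a concrete illustration, here is a minimal Python sketch of the phased greedy algorithm described above. The representation (frozensets for subsets, a fixpoint loop per phase, orderable element labels, and padding the final size-p chunk with already-covered elements) is our own assumption, not the paper's implementation.

        def phased_greedy_cover(elements, subsets, p, k):
            # subsets: iterable of frozensets whose sizes lie in [p, k].
            # All p-element subsets are assumed available for the final step.
            uncovered = set(elements)
            cover = []
            for i in range(k, p, -1):            # phases i = k, k-1, ..., p+1
                changed = True
                while changed:                   # take a maximal family per phase
                    changed = False
                    for s in subsets:
                        if len(s & uncovered) >= i:   # s covers i new elements
                            cover.append(s)
                            uncovered -= s
                            changed = True
            # final step: cover the leftovers with subsets of size p
            rest = sorted(uncovered)
            for j in range(0, len(rest), p):
                chunk = rest[j:j + p]
                if len(chunk) < p:               # pad with already-covered elements
                    chunk += [e for e in elements if e not in chunk][:p - len(chunk)]
                cover.append(frozenset(chunk))
            return cover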

    Algorithms as Mechanisms: The Price of Anarchy of Relax-and-Round

    Many algorithms that are originally designed without explicitly considering incentive properties are later combined with simple pricing rules and used as mechanisms. The resulting mechanisms are often natural and simple to understand. But how good are these algorithms as mechanisms? Truthful reporting of valuations is typically not a dominant strategy (certainly not with a pay-your-bid, first-price rule, but it is likely not a good strategy even with a critical-value, second-price-style rule either). Our goal is to show that a wide class of approximation algorithms yields, in this way, mechanisms with low Price of Anarchy. The seminal result of Lucier and Borodin [SODA 2010] shows that combining a greedy α-approximation algorithm with a pay-your-bid payment rule yields a mechanism whose Price of Anarchy is O(α). In this paper we significantly extend the class of algorithms for which such a result is available by showing that this close connection between approximation ratio on the one hand and Price of Anarchy on the other also holds for the design principle of relaxation and rounding, provided that the relaxation is smooth and the rounding is oblivious. We demonstrate the far-reaching consequences of our result by showing its implications for sparse packing integer programs, such as multi-unit auctions and generalized matching, for the maximum traveling salesman problem, for combinatorial auctions, and for single-source unsplittable flow problems. In all these problems our approach leads to novel, simple, near-optimal mechanisms whose Price of Anarchy either matches or beats the performance guarantees of known mechanisms.

    Comment: Extended abstract appeared in Proc. of the 16th ACM Conference on Economics and Computation (EC'15).
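
    To make the pay-your-bid pattern concrete, below is a minimal Python sketch of the Lucier-Borodin-style construction the abstract references: a greedy allocation rule paired with first-price payments. The specific ordering (bid divided by the square root of the bundle size, a standard greedy rule for single-minded combinatorial auctions) is our illustrative choice; this is not the paper's relax-and-round framework.

        import math

        def greedy_pay_your_bid(bids):
            # bids: list of (bidder, bundle_frozenset, bid).
            # Allocate greedily by bid / sqrt(|bundle|); winners pay their
            # reported bid (first-price), so truthfulness is not dominant.
            order = sorted(bids, key=lambda b: b[2] / math.sqrt(len(b[1])),
                           reverse=True)
            taken, winners, payments = set(), [], {}
            for bidder, bundle, bid in order:
                if not (bundle & taken):         # bundle still fully available
                    taken |= bundle
                    winners.append(bidder)
                    payments[bidder] = bid       # pay-your-bid rule
            return winners, payments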

    Bidimensionality and EPTAS

    Bidimensionality theory is a powerful framework for the development of meta-algorithmic techniques. It was introduced by Demaine et al. as a tool to obtain sub-exponential time parameterized algorithms for problems on H-minor-free graphs. Demaine and Hajiaghayi extended the theory to obtain PTASs for bidimensional problems, and subsequently improved these results to EPTASs. Fomin et al. related the theory to the existence of linear kernels for parameterized problems. In this paper we revisit bidimensionality theory from the perspective of approximation algorithms and redesign the framework for obtaining EPTASs to be more powerful, easier to apply and easier to understand. Two of the most widely used approaches to obtain PTASs on planar graphs are the Lipton-Tarjan separator based approach and Baker's approach. Demaine and Hajiaghayi strengthened both approaches using bidimensionality and obtained EPTASs for a multitude of problems. We unify the two strengthened approaches to combine the best of both worlds. At the heart of our framework is a decomposition lemma which states that for "most" bidimensional problems, there is a polynomial time algorithm which, given an H-minor-free graph G and an ε > 0 as input, outputs a vertex set X of size ε · OPT such that the treewidth of G − X is f(ε). Here, OPT is the objective function value of the problem in question and f is a function depending only on ε. This allows us to obtain EPTASs on (apex)-minor-free graphs for all problems covered by the previous framework, as well as for a wide range of packing problems, partial covering problems and problems that are neither closed under taking minors, nor contractions. To the best of our knowledge, for many of these problems, including cycle packing, vertex-h-packing, maximum leaf spanning tree, and partial r-dominating set, no EPTASs on planar graphs were previously known.
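
    For intuition, here is a minimal Python sketch of the layering step behind Baker's approach mentioned above. The function name and graph representation are our own assumptions, and this is only the classical n/k-vertex deletion step, not the paper's decomposition lemma (which deletes a set of size ε · OPT).

        from collections import deque

        def baker_delete_class(adj, root, k):
            # adj: dict mapping each vertex to its neighbor list (connected graph).
            # Partition vertices into classes by BFS depth mod k; deleting one
            # class leaves components spanning < k consecutive BFS layers, which
            # have bounded treewidth in planar graphs.
            depth = {root: 0}
            queue = deque([root])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in depth:
                        depth[v] = depth[u] + 1
                        queue.append(v)
            classes = [[] for _ in range(k)]
            for v, d in depth.items():
                classes[d % k].append(v)
            return min(classes, key=len)   # smallest class: at most n/k vertices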

    Bidimensionality and Geometric Graphs

    In this paper we use several of the key ideas from Bidimensionality to give a new generic approach to design EPTASs and subexponential time parameterized algorithms for problems on classes of graphs which are not minor-closed, but instead exhibit a geometric structure. In particular, we present EPTASs and subexponential time parameterized algorithms for Feedback Vertex Set, Vertex Cover, Connected Vertex Cover and Diamond Hitting Set on map graphs and unit disk graphs, and for Cycle Packing and Minimum-Vertex Feedback Edge Set on unit disk graphs. Our results are based on the recent decomposition theorems proved by Fomin et al. [SODA 2011], and our algorithms work directly on the input graph. Thus it is not necessary to compute the geometric representation of the input graph. To the best of our knowledge, these results were previously unknown, with the exception of the EPTAS and the subexponential time parameterized algorithm on unit disk graphs for Vertex Cover, which were obtained by Marx [ESA 2005] and Alber and Fiala [J. Algorithms 2004], respectively. We proceed to show that our approach cannot be extended in its full generality to more general classes of geometric graphs, such as intersection graphs of unit balls in R^d, d >= 3. Specifically, we prove that Feedback Vertex Set on unit-ball graphs in R^3 admits neither a PTAS unless P = NP, nor a subexponential time algorithm unless the Exponential Time Hypothesis fails. Additionally, we show that the decomposition theorems on which our approach is based fail for disk graphs, and that therefore any extension of our results to disk graphs would require new algorithmic ideas. On the other hand, we prove that our EPTASs and subexponential time algorithms for Vertex Cover and Connected Vertex Cover carry over both to disk graphs and to unit-ball graphs in R^d for every fixed d.
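
    For reference, a minimal Python sketch of what a unit disk graph is: the intersection graph of equal-radius disks in the plane. The construction below takes the geometric representation (disk centers) as input, which, notably, the algorithms in this abstract do not require.

        import math

        def unit_disk_graph(centers, radius=1.0):
            # centers: list of (x, y) disk centers. Two vertices are adjacent
            # iff their disks intersect, i.e. the centers lie within 2 * radius.
            n = len(centers)
            adj = {i: [] for i in range(n)}
            for i in range(n):
                for j in range(i + 1, n):
                    (x1, y1), (x2, y2) = centers[i], centers[j]
                    if math.hypot(x1 - x2, y1 - y2) <= 2 * radius:
                        adj[i].append(j)
                        adj[j].append(i)
            return adj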

    The Price of Information in Combinatorial Optimization

    Consider a network design application where we wish to lay down a minimum-cost spanning tree in a given graph; however, we only have stochastic information about the edge costs. To learn the precise cost of any edge, we have to conduct a study that incurs a price. Our goal is to find a spanning tree while minimizing the disutility, which is the sum of the tree cost and the total price that we spend on the studies. In a different application, each edge gives a stochastic reward value. Our goal is to find a spanning tree while maximizing the utility, which is the tree reward minus the prices that we pay. Situations such as the above two often arise in practice where we wish to find a good solution to an optimization problem, but we start with only some partial knowledge about the parameters of the problem. The missing information can be found only after paying a probing price, which we call the price of information. What strategy should we adopt to optimize our expected utility/disutility? A classical example of the above setting is Weitzman's "Pandora's box" problem, where we are given probability distributions on the values of n independent random variables. The goal is to choose a single variable with a large value, but we can find the actual outcomes only after paying a price. Our work is a generalization of this model to other combinatorial optimization problems such as matching, set cover, facility location, and prize-collecting Steiner tree. We give a technique that reduces such problems to their non-price counterparts, and use it to design exact/approximation algorithms to optimize our utility/disutility. Our techniques extend to situations where there are additional constraints on what parameters can be probed or when we can simultaneously probe a subset of the parameters.

    Comment: SODA 2018.
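
    As a concrete illustration of the classical special case, here is a minimal Python sketch of Weitzman's index rule for the Pandora's box problem. The function names, the discrete-distribution representation, and the bisection tolerance are our own assumptions; the paper generalizes well beyond this setting.

        def reservation_value(dist, price, lo=0.0, hi=1e9, iters=60):
            # dist: list of (value, probability). The reservation value sigma
            # solves E[(X - sigma)^+] = price; the left side is decreasing in
            # sigma, so bisection applies.
            def excess(sigma):
                return sum(max(v - sigma, 0.0) * p for v, p in dist)
            for _ in range(iters):
                mid = (lo + hi) / 2.0
                lo, hi = (mid, hi) if excess(mid) > price else (lo, mid)
            return (lo + hi) / 2.0

        def pandora(boxes, sample):
            # boxes: list of (dist, price); sample(dist) draws a realized value.
            # Weitzman's rule: probe boxes in decreasing reservation value and
            # stop once the best value seen beats every unopened reservation.
            sigmas = [reservation_value(d, c) for d, c in boxes]
            order = sorted(range(len(boxes)), key=sigmas.__getitem__, reverse=True)
            best, spent = float("-inf"), 0.0
            for i in order:
                if best >= sigmas[i]:
                    break
                spent += boxes[i][1]
                best = max(best, sample(boxes[i][0]))
            return best - spent   # realized utility of this run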

    Vector Bin Packing with Multiple-Choice

    We consider a variant of bin packing called multiple-choice vector bin packing. In this problem we are given a set of items, where each item can be selected in one of several D-dimensional incarnations. We are also given T bin types, each with its own cost and D-dimensional size. Our goal is to pack the items in a set of bins of minimum overall cost. The problem is motivated by scheduling in networks with guaranteed quality of service (QoS), but due to its general formulation it has many other applications as well. We present an approximation algorithm that is guaranteed to produce a solution whose cost is about ln D times the optimum. For the running time to be polynomial we require D = O(1) and T = O(log n). This extends previous results for vector bin packing, in which each item has a single incarnation and there is only one bin type. To obtain our result we also present a PTAS for the multiple-choice version of multidimensional knapsack, where we are given only one bin and the goal is to pack a maximum-weight set of (incarnations of) items in that bin.
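
    To fix the problem definition, here is a minimal Python sketch of a naive first-fit heuristic for multiple-choice vector bin packing. This is our own illustrative baseline; the paper's algorithm is LP-based and attains the roughly ln D guarantee, which this sketch does not.

        def first_fit_multichoice(items, bin_types):
            # items: list of incarnation lists; each incarnation is a D-tuple
            #        of resource demands.
            # bin_types: list of (cost, capacity_D_tuple).
            # Assumes every item has some incarnation fitting some empty bin.
            bins = []   # entries: [type_index, remaining_capacity_list, contents]
            for incarnations in items:
                placed = False
                for b in bins:                       # try existing bins first
                    for inc in incarnations:
                        if all(r >= d for r, d in zip(b[1], inc)):
                            b[1] = [r - d for r, d in zip(b[1], inc)]
                            b[2].append(inc)
                            placed = True
                            break
                    if placed:
                        break
                if not placed:                       # open the cheapest feasible bin
                    cost, t, inc = min((c, t, inc)
                                       for t, (c, cap) in enumerate(bin_types)
                                       for inc in incarnations
                                       if all(x >= d for x, d in zip(cap, inc)))
                    rem = [x - d for x, d in zip(bin_types[t][1], inc)]
                    bins.append([t, rem, [inc]])
            return bins   # opened bins with their contents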