
    Stochastic Combinatorial Optimization via Poisson Approximation

    We study several stochastic combinatorial problems, including the expected utility maximization problem, the stochastic knapsack problem and the stochastic bin packing problem. A common technical challenge in these problems is to optimize some function of the sum of a set of random variables. The difficulty is mainly due to the fact that the probability distribution of the sum is the convolution of a set of distributions, which is not an easy objective function to work with. To tackle this difficulty, we introduce the Poisson approximation technique. The technique is based on the Poisson approximation theorem discovered by Le Cam, which enables us to approximate the distribution of the sum of a set of random variables using a compound Poisson distribution. We first study the expected utility maximization problem introduced recently in [Li and Deshpande, FOCS11]. For monotone and Lipschitz utility functions, we obtain an additive PTAS if there is a multidimensional PTAS for the multi-objective version of the problem, strictly generalizing the previous result. For the stochastic bin packing problem (introduced in [Kleinberg, Rabani and Tardos, STOC97]), we show there is a polynomial-time algorithm which uses at most the optimal number of bins, if we relax the size of each bin and the overflow probability by eps. For stochastic knapsack, we show a (1+eps)-approximation using eps extra capacity, even when the size and reward of each item may be correlated and cancellations of items are allowed. This generalizes the previous work [Bhalgat, Goel and Khanna, SODA11] for the case without correlation and cancellation; our algorithm is also simpler. We also present a factor (2+eps) approximation algorithm for stochastic knapsack with cancellations, improving upon the currently known approximation factor of 8 [Gupta, Krishnaswamy, Molinaro and Ravi, FOCS11]. Comment: 42 pages, 1 figure. A preliminary version appears in the Proceedings of the 45th ACM Symposium on Theory of Computing (STOC '13).
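    As a rough illustration of the bound this technique builds on, the sketch below (not taken from the paper; the function name and parameters are made up) numerically checks the classical Bernoulli form of Le Cam's theorem: the total variation distance between a sum of independent Bernoulli(p_i) variables and a Poisson distribution with the same mean is at most 2*sum(p_i^2), which is what makes a (compound) Poisson surrogate a faithful stand-in for the convolution when the individual probabilities are small.

```python
# Illustrative sketch only (not the paper's algorithm): compare a sum of independent
# Bernoulli(p_i) variables with a Poisson of the same mean, and check the result
# against the Le Cam bound 2 * sum(p_i^2). Names and parameters are hypothetical.
import numpy as np
from scipy.stats import poisson

def tv_distance_bernoulli_sum_vs_poisson(ps, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    lam = float(np.sum(ps))
    # Sample S = X_1 + ... + X_n with X_i ~ Bernoulli(p_i).
    samples = (rng.random((n_samples, len(ps))) < np.asarray(ps)).sum(axis=1)
    # Empirical pmf of S on a support wide enough that the Poisson tail beyond it is negligible.
    support = len(ps) + 40
    empirical = np.bincount(samples, minlength=support) / n_samples
    return 0.5 * np.abs(empirical - poisson.pmf(np.arange(support), lam)).sum()

ps = [0.02, 0.05, 0.01, 0.03, 0.04]
print("empirical TV distance  :", tv_distance_bernoulli_sum_vs_poisson(ps))
print("Le Cam bound 2*sum p^2 :", 2 * sum(p * p for p in ps))
```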

    Small Extended Formulation for Knapsack Cover Inequalities from Monotone Circuits

    Initially developed for the min-knapsack problem, the knapsack cover inequalities are used in the current best relaxations for numerous combinatorial optimization problems of covering type. In spite of their widespread use, these inequalities yield linear programming (LP) relaxations of exponential size, over which it is not known how to optimize exactly in polynomial time. In this paper we address this issue and obtain LP relaxations of quasi-polynomial size that are at least as strong as that given by the knapsack cover inequalities. For the min-knapsack cover problem, our main result can be stated formally as follows: for any $\varepsilon > 0$, there is a $(1/\varepsilon)^{O(1)} n^{O(\log n)}$-size LP relaxation with an integrality gap of at most $2+\varepsilon$, where $n$ is the number of items. Prior to this work, there was no known relaxation of subexponential size with a constant upper bound on the integrality gap. Our construction is inspired by a connection between extended formulations and monotone circuit complexity via Karchmer-Wigderson games. In particular, our LP is based on $O(\log^2 n)$-depth monotone circuits with fan-in 2 for evaluating weighted threshold functions with $n$ inputs, as constructed by Beimel and Weinreb. We believe that a further understanding of this connection may lead to more positive results complementing the numerous lower bounds recently proved for extended formulations. Comment: 21 pages.
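    For reference, the knapsack cover inequalities discussed above take the following standard form from the covering literature (stated for the min-knapsack problem with item costs $c_i$, weights $w_i$, demand $D$, and binary variables $x_i$; this is background, not a restatement of the paper's relaxation):

```latex
% Min-knapsack:  minimize \sum_i c_i x_i  subject to  \sum_i w_i x_i \ge D,  x \in \{0,1\}^n.
% Knapsack cover inequality, one for every A \subseteq [n] with w(A) := \sum_{i \in A} w_i < D:
\sum_{i \notin A} \min\bigl\{ w_i,\; D - w(A) \bigr\}\, x_i \;\ge\; D - w(A).
```

    Since there is one inequality per subset $A$, the full family is exponentially large, which is exactly the size issue the quasi-polynomial relaxation above is designed to avoid.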

    Throughput Maximization in Multiprocessor Speed-Scaling

    We are given a set of $n$ jobs that have to be executed on a set of $m$ speed-scalable machines that can vary their speeds dynamically using the energy model introduced in [Yao et al., FOCS'95]. Every job $j$ is characterized by its release date $r_j$, its deadline $d_j$, its processing volume $p_{i,j}$ if $j$ is executed on machine $i$, and its weight $w_j$. We are also given a budget of energy $E$ and our objective is to maximize the weighted throughput, i.e. the total weight of jobs that are completed between their respective release dates and deadlines. We propose a polynomial-time approximation algorithm where the preemption of the jobs is allowed but not their migration. Our algorithm uses a primal-dual approach on a linearized version of a convex program with linear constraints. Furthermore, we present two optimal algorithms for the non-preemptive case where the number of machines is bounded by a fixed constant. More specifically, we consider: {\em (a)} the case of identical processing volumes, i.e. $p_{i,j}=p$ for every $i$ and $j$, for which we present a polynomial-time algorithm for the unweighted version, which becomes a pseudopolynomial-time algorithm for the weighted throughput version, and {\em (b)} the case of agreeable instances, i.e. for which $r_i \le r_j$ if and only if $d_i \le d_j$, for which we present a pseudopolynomial-time algorithm. Both algorithms are based on a discretization of the problem and the use of dynamic programming.
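    For context, the speed-scaling energy model of [Yao et al., FOCS'95] referenced above can be summarized as follows (standard background, not specific to this paper): a machine running at speed $s$ draws power $s^{\alpha}$ for some constant $\alpha > 1$, so executing a processing volume $p$ at constant speed $s$ takes time $p/s$ and costs energy

```latex
E(p, s) \;=\; \underbrace{s^{\alpha}}_{\text{power}} \cdot \underbrace{\frac{p}{s}}_{\text{time}}
        \;=\; p\, s^{\alpha - 1}, \qquad \alpha > 1 .
```

    Because this cost grows super-linearly in the speed, running slower (within the release-date/deadline windows) saves energy, and throughput maximization under the budget $E$ becomes a trade-off between which jobs to complete and how fast to run them.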

    A Weight-coded Evolutionary Algorithm for the Multidimensional Knapsack Problem

    A revised weight-coded evolutionary algorithm (RWCEA) is proposed for solving multidimensional knapsack problems. This RWCEA uses a new decoding method and incorporates a heuristic method in initialization. Computational results show that the RWCEA performs better than the weight-coded evolutionary algorithm proposed by Raidl (1999) and that, on some existing benchmarks, it yields better results than the ones reported in the OR-Library. Comment: Submitted to Applied Mathematics and Computation on April 8, 201
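    To make the weight-coding idea concrete, here is a minimal sketch of a generic weight-coded decoder for the MKP in the spirit of Raidl (1999): the genotype is a vector of real-valued biases that perturbs the item profits, and a greedy decoder turns the biased instance into a feasible 0/1 solution. The RWCEA's actual decoding and initialization heuristics may differ; all names below are illustrative.

```python
# Hedged sketch of a generic weight-coded decoder for the multidimensional knapsack
# problem (MKP): the evolutionary algorithm evolves the bias vector, and this decoder
# maps it to a feasible solution. Not the RWCEA's exact decoding method.
from typing import Sequence

def decode(biases: Sequence[float],
           profits: Sequence[float],
           weights: Sequence[Sequence[float]],   # weights[i][k]: demand of item i in dimension k
           capacities: Sequence[float]) -> list:
    n, m = len(profits), len(capacities)
    # Consider items in order of biased profit, most attractive first.
    order = sorted(range(n), key=lambda i: biases[i] * profits[i], reverse=True)
    used = [0.0] * m
    solution = [0] * n
    for i in order:
        if all(used[k] + weights[i][k] <= capacities[k] for k in range(m)):
            for k in range(m):
                used[k] += weights[i][k]
            solution[i] = 1
    return solution

# Toy usage: 4 items, 2 knapsack dimensions, neutral biases.
print(decode([1.0, 1.0, 1.0, 1.0],
             profits=[10, 7, 6, 3],
             weights=[[5, 4], [4, 3], [3, 3], [2, 1]],
             capacities=[8, 7]))
```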

    Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints

    We investigate two new optimization problems -- minimizing a submodular function subject to a submodular lower bound constraint (submodular cover) and maximizing a submodular function subject to a submodular upper bound constraint (submodular knapsack). We are motivated by a number of real-world applications in machine learning including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost). These problems are often posed as minimizing the difference between submodular functions [14, 35], which is in the worst case inapproximable. We show, however, that by phrasing these problems as constrained optimization, which is more natural for many applications, we achieve a number of bounded approximation guarantees. We also show that both these problems are closely related and that an approximation algorithm solving one can be used to obtain an approximation guarantee for the other. We provide hardness results for both problems, thus showing that our approximation factors are tight up to log-factors. Finally, we empirically demonstrate the performance and good scalability properties of our algorithms. Comment: 23 pages. A short version of this appeared in Advances of NIPS-2013.
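    As a point of reference for the constrained setting, the sketch below shows the classical cost-benefit greedy for submodular cover when the cost side is modular (additive); the contribution above is precisely the harder case where the cost is itself submodular, for which the algorithms and guarantees differ. All names in the sketch are illustrative.

```python
# Illustrative baseline only: greedy for submodular set cover with a *modular* cost,
# picking the element with the best marginal-gain-per-cost ratio until the coverage
# requirement is met. The paper's setting (submodular cost) needs different machinery.
def greedy_submodular_cover(ground, f, cost, target):
    """ground: iterable of elements; f: monotone submodular set function;
    cost: per-element cost; target: required value of f."""
    chosen, remaining = set(), set(ground)
    while f(chosen) < target and remaining:
        best = max(remaining,
                   key=lambda e: (f(chosen | {e}) - f(chosen)) / max(cost(e), 1e-12))
        chosen.add(best)
        remaining.remove(best)
    return chosen

# Toy usage: cover at least 4 of the elements {1,...,5} with unit-cost sets.
sets = {"a": {1, 2}, "b": {2, 3, 4}, "c": {4, 5}}
coverage = lambda S: len(set().union(*(sets[x] for x in S))) if S else 0
print(greedy_submodular_cover(sets.keys(), coverage, lambda e: 1.0, target=4))
```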