
    Improved Approximation Results for Stochastic Knapsack Problems

    Packing a Knapsack of Unknown Capacity

    We study the problem of packing a knapsack without knowing its capacity. Whenever we attempt to pack an item that does not fit, the item is discarded; if the item fits, we have to include it in the packing. We show that there is always a policy that packs a value within factor 2 of the optimum packing, irrespective of the actual capacity. If all items have unit density, we achieve a factor equal to the golden ratio. Both factors are shown to be best possible. In fact, we obtain the above factors using packing policies that are universal in the sense that they fix a particular order of the items and try to pack the items in this order, independent of the observations made while packing. We give efficient algorithms computing these policies. On the other hand, we show that, for any alpha > 1, the problem of deciding whether a given universal policy achieves a factor of alpha is coNP-complete. If alpha is part of the input, the same problem is shown to be coNP-complete for items with unit densities. Finally, we show that it is coNP-hard to decide, for given alpha, whether a set of items admits a universal policy with factor alpha, even if all items have unit densities.
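
    To make the notion of a universal policy concrete, here is a minimal simulation sketch: items are tried in one fixed order against whatever capacity the knapsack turns out to have, items that do not fit are discarded, and items that fit must be packed. The item data and the order used below (decreasing value) are illustrative assumptions only, not the order computed by the paper's algorithm.

def pack_with_universal_policy(order, sizes, values, capacity):
    """Total value packed when items are tried in the fixed `order`
    against a capacity that is only revealed through fit/no-fit feedback."""
    remaining = capacity
    packed_value = 0.0
    for i in order:
        if sizes[i] <= remaining:   # the item fits, so it must be included
            remaining -= sizes[i]
            packed_value += values[i]
        # otherwise the item is discarded and we simply move on
    return packed_value

if __name__ == "__main__":
    sizes = [4, 3, 2, 1]                      # illustrative items
    values = [5.0, 4.0, 2.5, 1.5]
    order = sorted(range(len(sizes)), key=lambda i: -values[i])  # fixed in advance
    for capacity in (3, 5, 7, 10):            # the policy never changes with capacity
        print(capacity, pack_with_universal_policy(order, sizes, values, capacity))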

    The Price of Information in Combinatorial Optimization

    Consider a network design application where we wish to lay down a minimum-cost spanning tree in a given graph; however, we only have stochastic information about the edge costs. To learn the precise cost of any edge, we have to conduct a study that incurs a price. Our goal is to find a spanning tree while minimizing the disutility, which is the sum of the tree cost and the total price that we spend on the studies. In a different application, each edge gives a stochastic reward value. Our goal is to find a spanning tree while maximizing the utility, which is the tree reward minus the prices that we pay. Situations such as the above two often arise in practice where we wish to find a good solution to an optimization problem, but we start with only some partial knowledge about the parameters of the problem. The missing information can be found only after paying a probing price, which we call the price of information. What strategy should we adopt to optimize our expected utility/disutility? A classical example of the above setting is Weitzman's "Pandora's box" problem where we are given probability distributions on the values of n independent random variables. The goal is to choose a single variable with a large value, but we can find the actual outcomes only after paying a price. Our work is a generalization of this model to other combinatorial optimization problems such as matching, set cover, facility location, and prize-collecting Steiner tree. We give a technique that reduces such problems to their non-price counterparts, and use it to design exact/approximation algorithms to optimize our utility/disutility. Our techniques extend to situations where there are additional constraints on what parameters can be probed or when we can simultaneously probe a subset of the parameters.
    Comment: SODA 201
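
    The classical special case mentioned above has a well-known solution, Weitzman's index (reservation-value) rule, and the sketch below simulates that rule as a point of reference. The value distributions, probing prices, and the Monte Carlo/bisection computation of the reservation values are illustrative assumptions; this is not the paper's reduction technique for combinatorial problems.

import random

def reservation_value(samples, price, lo=0.0, hi=100.0, iters=60):
    """Solve E[(X - sigma)^+] = price for sigma by bisection on Monte Carlo samples."""
    def expected_excess(sigma):
        return sum(max(x - sigma, 0.0) for x in samples) / len(samples)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if expected_excess(mid) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def pandora_run(samplers, prices, sigmas, rng):
    """One run of Weitzman's rule: probe boxes in decreasing reservation value,
    stop once the best value found beats every remaining reservation value."""
    order = sorted(range(len(samplers)), key=lambda i: -sigmas[i])
    best, spent = 0.0, 0.0
    for i in order:
        if best >= sigmas[i]:
            break
        spent += prices[i]                  # pay the price of information
        best = max(best, samplers[i](rng))  # ...and observe the realized value
    return best - spent

if __name__ == "__main__":
    rng = random.Random(0)
    samplers = [lambda r: r.uniform(0.0, 10.0),   # made-up value distributions
                lambda r: r.expovariate(0.2)]     # mean 5, heavier upper tail
    prices = [1.0, 2.0]                           # made-up probing prices
    sigmas = [reservation_value([s(rng) for _ in range(50000)], p)
              for s, p in zip(samplers, prices)]
    avg = sum(pandora_run(samplers, prices, sigmas, rng) for _ in range(2000)) / 2000
    print("reservation values:", sigmas, " average utility:", avg)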

    Stochastic Combinatorial Optimization via Poisson Approximation

    We study several stochastic combinatorial problems, including the expected utility maximization problem, the stochastic knapsack problem and the stochastic bin packing problem. A common technical challenge in these problems is to optimize some function of the sum of a set of random variables. The difficulty is mainly due to the fact that the probability distribution of the sum is the convolution of a set of distributions, which is not an easy objective function to work with. To tackle this difficulty, we introduce the Poisson approximation technique. The technique is based on the Poisson approximation theorem discovered by Le Cam, which enables us to approximate the distribution of the sum of a set of random variables using a compound Poisson distribution. We first study the expected utility maximization problem introduced recently [Li and Deshpande, FOCS11]. For monotone and Lipschitz utility functions, we obtain an additive PTAS if there is a multidimensional PTAS for the multi-objective version of the problem, strictly generalizing the previous result. For the stochastic bin packing problem (introduced in [Kleinberg, Rabani and Tardos, STOC97]), we show there is a polynomial time algorithm which uses at most the optimal number of bins, if we relax the size of each bin and the overflow probability by eps. For stochastic knapsack, we show a (1+eps)-approximation using eps extra capacity, even when the size and reward of each item may be correlated and cancellations of items are allowed. This generalizes the previous work [Bhalgat, Goel and Khanna, SODA11] for the case without correlation and cancellation. Our algorithm is also simpler. We also present a factor (2+eps) approximation algorithm for stochastic knapsack with cancellations, improving the currently known approximation factor of 8 [Gupta, Krishnaswamy, Molinaro and Ravi, FOCS11].
    Comment: 42 pages, 1 figure. A preliminary version appears in the Proceedings of the 45th ACM Symposium on the Theory of Computing (STOC13)
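
    The Le Cam approximation the abstract builds on can be checked numerically: for independent Bernoulli(p_i) variables with small p_i, the distribution of their sum is close in total variation to a Poisson distribution with mean sum_i p_i, with an error on the order of sum_i p_i^2. The snippet below only measures this distance empirically on made-up parameters; it is not the paper's compound-Poisson construction or any of its algorithms.

import math
import random

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def empirical_tv_distance(ps, trials, rng):
    """Empirical total variation distance between sum_i Bernoulli(p_i) and
    Poisson(sum_i p_i), ignoring the tiny Poisson tail beyond the observed range."""
    lam = sum(ps)
    counts = {}
    for _ in range(trials):
        s = sum(1 for p in ps if rng.random() < p)
        counts[s] = counts.get(s, 0) + 1
    support = range(max(counts) + 20)
    return 0.5 * sum(abs(counts.get(k, 0) / trials - poisson_pmf(lam, k))
                     for k in support)

if __name__ == "__main__":
    rng = random.Random(1)
    ps = [0.02] * 100   # made-up probabilities: lambda = 2, sum of p_i^2 = 0.04
    print("empirical TV distance:", empirical_tv_distance(ps, 200000, rng))
    print("sum of p_i^2 (the scale of the Le Cam bound):", sum(p * p for p in ps))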

    Approximation Algorithms for Correlated Knapsacks and Non-Martingale Bandits

    In the stochastic knapsack problem, we are given a knapsack of size B, and a set of jobs whose sizes and rewards are drawn from a known probability distribution. However, we know the actual size and reward only when the job completes. How should we schedule jobs to maximize the expected total reward? We know O(1)-approximations when we assume that (i) rewards and sizes are independent random variables, and (ii) we cannot prematurely cancel jobs. What can we say when either or both of these assumptions are changed? The stochastic knapsack problem is of interest in its own right, but techniques developed for it are applicable to other stochastic packing problems. Indeed, ideas for this problem have been useful for budgeted learning problems, where one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to pull the arms a total of B times to maximize the reward obtained. Much recent work on this problem focuses on the case when the evolution of the arms follows a martingale, i.e., when the expected reward from the future is the same as the reward at the current state. What can we say when the rewards do not form a martingale? In this paper, we give constant-factor approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations, and also for budgeted learning problems where the martingale condition is not satisfied. Indeed, we can show that previously proposed LP relaxations have large integrality gaps. We propose new time-indexed LP relaxations, convert the fractional solutions into distributions over strategies, and then use the LP values and the time-ordering information from these strategies to devise a randomized adaptive scheduling algorithm. We hope our LP formulation and decomposition methods may provide a new way to address other correlated bandit problems with more general contexts.
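
    As a point of reference for the model (a job's size is observed only when it completes, and a job that overflows the knapsack yields no reward), here is a small Monte Carlo harness that evaluates a naive fixed greedy order. The policy and the job distributions are illustrative assumptions; they are not the time-indexed LP relaxation or the adaptive algorithm of the paper.

import random

def simulate_greedy(jobs, capacity, rng):
    """Run jobs in decreasing expected-reward / expected-size order; a job's
    size is observed only when it is run, and the process stops (with no
    reward for the offending job) as soon as the knapsack overflows."""
    order = sorted(range(len(jobs)),
                   key=lambda i: -jobs[i]["exp_reward"] / jobs[i]["exp_size"])
    remaining, total = capacity, 0.0
    for i in order:
        size, reward = jobs[i]["sample"](rng)
        if size > remaining:
            break
        remaining -= size
        total += reward
    return total

if __name__ == "__main__":
    rng = random.Random(7)
    def job(mean_size, reward):   # exponentially distributed size, fixed reward
        return {"exp_size": mean_size, "exp_reward": reward,
                "sample": lambda r: (r.expovariate(1.0 / mean_size), reward)}
    jobs = [job(2.0, 3.0), job(1.0, 1.5), job(4.0, 5.0)]
    runs = 20000
    est = sum(simulate_greedy(jobs, capacity=5.0, rng=rng) for _ in range(runs)) / runs
    print("estimated expected reward of the greedy order:", est)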

    Models, algorithms and performance analysis for adaptive operating room scheduling

    The complex optimisation problems arising in the scheduling of operating rooms have received considerable attention in recent scientific literature because of their impact on costs, revenues and patient health. To a large extent, this complexity stems from the stochastic nature of the problem. In practice, this stochastic nature often leads to schedule adaptations on the day of schedule execution. While operating room performance is thus substantially affected by such adaptations, decision-making on adaptations is hardly addressed in the scientific literature. Building on previous literature on adaptive scheduling, we develop adaptive operating room scheduling models and problems, and analyse the performance of corresponding adaptive scheduling policies. As previously proposed (fully) adaptive scheduling models and policies are infeasible in operating room scheduling practice, we extend adaptive scheduling theory by introducing the novel concept of committing. Moreover, the core of the proposed adaptive policies with committing is formed by a new, exact, pseudo-polynomial algorithm to solve a general class of stochastic knapsack problems. Using these theoretica
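
    For readers unfamiliar with the term, "pseudo-polynomial" means a running time polynomial in the numeric value of the capacity rather than in its bit length. The sketch below is only the textbook dynamic program for the deterministic 0/1 knapsack, shown to illustrate that notion; the thesis' exact algorithm for the stochastic class is different.

def knapsack_dp(sizes, values, capacity):
    """Exact 0/1 knapsack via dynamic programming over integer capacities,
    running in O(n * capacity) time, i.e. pseudo-polynomial in the capacity."""
    best = [0.0] * (capacity + 1)   # best[c] = max value achievable with capacity c
    for size, value in zip(sizes, values):
        for c in range(capacity, size - 1, -1):   # downwards: each item used at most once
            best[c] = max(best[c], best[c - size] + value)
    return best[capacity]

if __name__ == "__main__":
    print(knapsack_dp([4, 3, 2, 1], [5.0, 4.0, 2.5, 1.5], capacity=7))  # -> 9.0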

    Learning Strategies in Decentralized Matching Markets under Uncertain Preferences

    We study the problem of decision-making in the setting of a scarcity of shared resources when the preferences of agents are unknown a priori and must be learned from data. Taking the two-sided matching market as a running example, we focus on the decentralized setting, where agents do not share their learned preferences with a central authority. Our approach is based on the representation of preferences in a reproducing kernel Hilbert space, and a learning algorithm for preferences that accounts for uncertainty due to the competition among the agents in the market. Under regularity conditions, we show that our estimator of preferences converges at a minimax optimal rate. Given this result, we derive optimal strategies that maximize agents' expected payoffs and we calibrate the uncertain state by taking opportunity costs into account. We also derive an incentive-compatibility property and show that the outcome from the learned strategies has a stability property. Finally, we prove a fairness property that asserts that there exists no justified envy according to the learned strategies.
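
    The reproducing-kernel ingredient can be illustrated in isolation: below is a generic kernel ridge regression of observed payoffs on partner features, with a made-up RBF kernel, bandwidth, and regularization. It is not the paper's estimator and does not include its competition-aware uncertainty calibration or the decentralized strategies.

import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def fit_kernel_ridge(X, y, reg=1e-2, bandwidth=1.0):
    """Return a predictor f(x) = k(x, X) @ alpha with alpha = (K + reg*I)^{-1} y."""
    K = rbf_kernel(X, X, bandwidth)
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), y)
    return lambda X_new: rbf_kernel(X_new, X, bandwidth) @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))                        # made-up partner features
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)     # noisy observed payoffs
    predict = fit_kernel_ridge(X, y)
    print(predict(np.array([[0.5, 0.0], [1.5, -1.0]])))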