Packing a Knapsack of Unknown Capacity
We study the problem of packing a knapsack without knowing its capacity.
Whenever we attempt to pack an item that does not fit, the item is discarded;
if the item fits, we have to include it in the packing. We show that there is
always a policy that packs a value within factor 2 of the optimum packing,
irrespective of the actual capacity. If all items have unit density, we achieve
a factor equal to the golden ratio. Both factors are shown to be best possible.
In fact, we obtain the above factors using packing policies that are universal
in the sense that they fix a particular order of the items and try to pack the
items in this order, independent of the observations made while packing. We
give efficient algorithms computing these policies. On the other hand, we show
that, for any alpha>1, the problem of deciding whether a given universal policy
achieves a factor of alpha is coNP-complete. If alpha is part of the input, the
same problem is shown to be coNP-complete for items with unit densities.
Finally, we show that it is coNP-hard to decide, for given alpha, whether a set
of items admits a universal policy with factor alpha, even if all items have
unit densities.
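To make the interaction model concrete, here is a minimal Python sketch of executing a universal policy against an unknown capacity. The item ordering itself, which is the hard part the paper computes, is taken as given, and the (value, size) representation is our assumption, not the paper's notation.

```python
def execute_universal_policy(order, capacity):
    """Run a universal policy: attempt items in a fixed order against an
    unknown capacity.  An item that fits must be packed; an item that
    does not fit is discarded, and we move on to the next one."""
    remaining = capacity
    packed_value = 0.0
    for value, size in order:
        if size <= remaining:        # fits: inclusion is mandatory
            remaining -= size
            packed_value += value
        # otherwise the item is lost; nothing else is revealed
    return packed_value

# The paper's guarantee: some order achieves, for every capacity,
# execute_universal_policy(order, capacity) >= OPT(capacity) / 2.
```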
Overcommitment in Cloud Services -- Bin Packing with Chance Constraints
This paper considers a traditional problem of resource allocation, scheduling
jobs on machines. One such recent application is cloud computing, where jobs
arrive in an online fashion with capacity requirements and need to be
immediately scheduled on physical machines in data centers. It is often
observed that the requested capacities are not fully utilized, hence offering
an opportunity to employ an overcommitment policy, i.e., selling resources
beyond capacity. Setting the right overcommitment level can induce a
significant cost reduction for the cloud provider, while only inducing a very
low risk of violating capacity constraints. We introduce and study a model that
quantifies the value of overcommitment by modeling the problem as bin packing
with chance constraints. We then propose an alternative formulation that
transforms each chance constraint into a submodular function. We show that our
model captures the risk pooling effect and can guide scheduling and
overcommitment decisions. We also develop a family of online algorithms that
are intuitive, easy to implement and provide a constant factor guarantee from
optimal. Finally, we calibrate our model using realistic workload data, and
test our approach in a practical setting. Our analysis and experiments
illustrate the benefit of overcommitment in cloud services, and suggest a cost
reduction of 1.5% to 17%, depending on the provider's risk tolerance.
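The abstract does not spell out the chance constraint. One standard way to make it concrete, assuming each job's actual usage is an independent Gaussian with known mean and variance (our assumption, not necessarily the paper's model), turns the probabilistic capacity check into a deterministic one whose left-hand side is submodular in the job set:

```python
from math import sqrt
from statistics import NormalDist

def fits_with_overcommitment(bin_jobs, new_job, capacity, eps):
    """Chance-constrained feasibility check for one bin.

    Each job is a (mean, variance) pair for its actual usage.  Under
    independent Gaussian usage, P(total usage > capacity) <= eps becomes
        sum(mu_i) + z_{1-eps} * sqrt(sum(var_i)) <= capacity,
    and the left-hand side is submodular in the set of jobs -- the kind
    of structure the paper's reformulation exploits."""
    jobs = bin_jobs + [new_job]
    mean = sum(mu for mu, _ in jobs)
    std = sqrt(sum(var for _, var in jobs))
    z = NormalDist().inv_cdf(1 - eps)   # standard normal quantile
    return mean + z * std <= capacity

# Example: jobs nominally request 1.0 unit each but typically use ~0.3,
# so four of them fit a bin of capacity 1.7 at a 1% risk level.
print(fits_with_overcommitment([(0.3, 0.01)] * 3, (0.3, 0.01),
                               capacity=1.7, eps=0.01))
```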
Probabilistic analysis of algorithms for dual bin packing problems
In the dual bin packing problem, the objective is to assign items of given size to the largest possible number of bins, subject to the constraint that the total size of the items assigned to any bin is at least 1. We carry out a probabilistic analysis of this problem under the assumption that the items are drawn independently from the uniform distribution on [0, 1], and reveal connections between this problem, the classical bin packing problem, and renewal theory.
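For concreteness, a simple algorithm in the family such probabilistic analyses cover is next-fit: keep adding items to the current bin until its total reaches 1, then close it. This is a minimal illustrative sketch of the problem, not pseudocode from the paper.

```python
def next_fit_dual(items):
    """Greedy next-fit heuristic for dual bin packing: accumulate items
    into the current bin until its total size reaches 1, then close the
    bin and start a new one.  Returns the number of bins filled to at
    least 1; a final partial bin is wasted."""
    bins = 0
    load = 0.0
    for size in items:
        load += size
        if load >= 1.0:     # bin is covered; open a fresh one
            bins += 1
            load = 0.0
    return bins
```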
Pressure screening and fluctuations at the bottom of a granular column
We report sets of precise and reproducible measurements on the static
pressure at the bottom of a granular column. We make a quantitative analysis of
the pressure saturation when the column height is increased. We demonstrate that
the measurements are highly sensitive to the global packing fraction and to the
possible presence of shear bands at the boundaries. We also show the limits of
the classical Janssen model and discuss these experimental results in light of
recently proposed theoretical frameworks.
Comment: 17 pages, LaTeX, 8 EPS figures; to appear in the European Physical
Journal B (1999).
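For reference, the classical Janssen model mentioned above predicts an exponential saturation of the bottom pressure with fill height (textbook form, not quoted from the paper):

```latex
% Janssen model: bottom pressure saturates with fill height h over a
% screening length \lambda set by wall friction.
\[
  P(h) = P_\infty \left(1 - e^{-h/\lambda}\right),
  \qquad
  P_\infty = \rho g \lambda,
  \qquad
  \lambda = \frac{D}{4 \mu_w K},
\]
% D: column diameter, \mu_w: grain-wall friction coefficient,
% K: ratio of horizontal to vertical stress, \rho: bulk density.
```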
Algorithms to Approximate Column-Sparse Packing Problems
Column-sparse packing problems arise in several contexts in both
deterministic and stochastic discrete optimization. We present two unifying
ideas, (non-uniform) attenuation and multiple-chance algorithms, to obtain
improved approximation algorithms for some well-known families of such
problems. As three main examples, we attain the integrality gap, up to
lower-order terms, for known LP relaxations for k-column sparse packing integer
programs (Bansal et al., Theory of Computing, 2012) and stochastic k-set
packing (Bansal et al., Algorithmica, 2012), and go "half the remaining
distance" to optimal for a major integrality-gap conjecture of Furedi, Kahn and
Seymour on hypergraph matching (Combinatorica, 1993).
Comment: Extended abstract appeared in SODA 2018. Full version in ACM
Transactions on Algorithms.
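The attenuation idea can be illustrated with the standard sample-and-alter scheme for k-column-sparse packing programs, shown below for unit entries. This is a textbook-style sketch under our own simplifications; the paper's non-uniform attenuation refines the uniform scaling used here.

```python
import random

def sample_and_alter(x_lp, constraints, k, c=2.0):
    """Illustrative rounding for a k-column-sparse packing program with
    unit entries: constraints are (item_set, capacity) pairs, and every
    item appears in at most k of them.

    Attenuation: keep item j with probability x_lp[j] / (c * k) rather
    than x_lp[j].  Alteration: drop items from any violated constraint."""
    picked = {j for j, xj in x_lp.items() if random.random() < xj / (c * k)}
    for items, capacity in constraints:
        chosen = [j for j in items if j in picked]
        while len(chosen) > capacity:   # constraint violated: drop items
            picked.discard(chosen.pop())
    return picked
```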
Granular Pressure and the Thickness of a Layer Jamming on a Rough Incline
Dense granular media have a compaction between the random loose and random
close packings. For these dense media the concept of a granular pressure
depending on compaction is not unanimously accepted because they are often in a
"frozen" state which prevents them to explore all their possible microstates, a
necessary condition for defining a pressure and a compressibility
unambiguously. While periodic tapping or cyclic fluidization have already being
used for that exploration, we here suggest that a succession of flowing states
with velocities slowly decreasing down to zero can also be used for that
purpose. And we propose to deduce the pressure in \emph{dense and flowing}
granular media from experiments measuring the thickness of the granular layer
that remains on a rough incline just after the flow has stopped.Comment: 10 pages, 2 figure
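For the measurement just described, the basal pressure under the arrested layer follows from elementary statics; this gloss is ours, not a formula quoted from the abstract:

```latex
% Basal normal pressure under a uniform granular layer of thickness h
% resting on an incline at angle \theta (elementary statics):
\[
  P = \phi \, \rho_g \, g \, h \cos\theta ,
\]
% \phi: packing fraction, \rho_g: grain material density, g: gravity.
% Measuring the stopping thickness h(\theta) thus gives access to P.
```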
The Price of Information in Combinatorial Optimization
Consider a network design application where we wish to lay down a
minimum-cost spanning tree in a given graph; however, we only have stochastic
information about the edge costs. To learn the precise cost of any edge, we
have to conduct a study that incurs a price. Our goal is to find a spanning
tree while minimizing the disutility, which is the sum of the tree cost and the
total price that we spend on the studies. In a different application, each edge
gives a stochastic reward value. Our goal is to find a spanning tree while
maximizing the utility, which is the tree reward minus the prices that we pay.
Situations such as the above two often arise in practice where we wish to
find a good solution to an optimization problem, but we start with only some
partial knowledge about the parameters of the problem. The missing information
can be found only after paying a probing price, which we call the price of
information. What strategy should we adopt to optimize our expected
utility/disutility?
A classical example of the above setting is Weitzman's "Pandora's box"
problem where we are given probability distributions on values of
independent random variables. The goal is to choose a single variable with a
large value, but we can find the actual outcomes only after paying a price. Our
work is a generalization of this model to other combinatorial optimization
problems such as matching, set cover, facility location, and prize-collecting
Steiner tree. We give a technique that reduces such problems to their non-price
counterparts, and use it to design exact/approximation algorithms to optimize
our utility/disutility. Our techniques extend to situations where there are
additional constraints on what parameters can be probed or when we can
simultaneously probe a subset of the parameters.
Comment: SODA 2018.
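Since Weitzman's Pandora's box is the paper's starting point, here is a minimal sketch of his classical index policy for discrete value distributions. This covers only the single-variable special case; the paper's reduction handles far more general problems, and all names here are ours.

```python
import random

def reservation_value(outcomes, probs, price, lo=0.0, hi=1e9, iters=60):
    """Weitzman's index sigma for one box, solving E[(X - sigma)^+] = price
    by bisection, where X has the given discrete distribution."""
    def expected_surplus(sigma):
        return sum(p * max(x - sigma, 0.0) for x, p in zip(outcomes, probs))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if expected_surplus(mid) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pandora(boxes):
    """Run Weitzman's optimal policy on boxes = [(outcomes, probs, price)]:
    open boxes in decreasing order of reservation value, stopping once the
    best value seen exceeds the next index.  Returns realized utility:
    best value found minus total prices paid."""
    order = sorted(boxes, key=lambda b: -reservation_value(*b))
    best, paid = 0.0, 0.0
    for outcomes, probs, price in order:
        if best >= reservation_value(outcomes, probs, price):
            break                    # stopping rule: index below best seen
        paid += price
        best = max(best, random.choices(outcomes, weights=probs)[0])
    return best - paid
```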