Improved approximation algorithm for k-level UFL with penalties, a simplistic view on randomizing the scaling parameter
The state of the art in approximation algorithms for facility location
problems are complicated combinations of various techniques. In particular, the
currently best 1.488-approximation algorithm for the uncapacitated facility
location (UFL) problem by Shi Li is presented as a result of a non-trivial
randomization of a certain scaling parameter in the LP-rounding algorithm by
Chudak and Shmoys combined with a primal-dual algorithm of Jain et al. In this
paper we first give a simple interpretation of this randomization process in
terms of solving an auxiliary (factor-revealing) LP. Then, armed with this
simple viewpoint, we exercise the randomization on a more
complicated algorithm for the k-level version of the problem with penalties in
which the planner may pay a penalty instead of connecting certain clients,
and obtain an improved approximation algorithm.
Approximation algorithms for Capacitated Facility Location Problem with Penalties
In this paper, we address the capacitated facility location problem with
penalties (CapFLPP), where penalties are paid per unit of unserved demand. In
the uncapacitated FLP with penalties, the demand of a client is either
entirely met or entirely rejected with a penalty; there is no reason to serve
a client partially. In CapFLPP, by contrast, it may be beneficial to serve a
client partially rather than not at all, paying the penalty only for the
unmet demand. Charikar et al.
\cite{charikar2001algorithms}, Jain et al. \cite{jain2003greedy} and Xu-Xu
\cite{xu2009improved} gave approximation algorithms for the uncapacitated
case. We present approximation algorithms for the case of uniform capacities
and for non-uniform capacities.
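The contrast between all-or-nothing service and partial service can be made concrete with a tiny numeric example (all numbers below are hypothetical, chosen for illustration and not taken from the paper):

```python
# Illustrative CapFLPP instance (hypothetical numbers): one open facility
# with capacity 6 serves one client with demand 10; connection cost is
# 1 per unit served, penalty is 3 per unit rejected.
capacity, demand = 6, 10
connect_cost, penalty = 1.0, 3.0

# All-or-nothing (the uncapacitated model's choice): serving the full
# demand is infeasible here, so the only option is to reject everything.
reject_all = demand * penalty  # 10 * 3 = 30.0

# Partial service, allowed in CapFLPP: serve up to capacity and pay the
# penalty only for the unmet portion of the demand.
partial = capacity * connect_cost + (demand - capacity) * penalty  # 6 + 12 = 18.0

assert partial < reject_all  # partial service is strictly cheaper
```

This is exactly the gap the abstract points to: with capacities, fractional service can beat both full rejection and (infeasible) full service.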
The Price of Information in Combinatorial Optimization
Consider a network design application where we wish to lay down a
minimum-cost spanning tree in a given graph; however, we only have stochastic
information about the edge costs. To learn the precise cost of any edge, we
have to conduct a study that incurs a price. Our goal is to find a spanning
tree while minimizing the disutility, which is the sum of the tree cost and the
total price that we spend on the studies. In a different application, each edge
gives a stochastic reward value. Our goal is to find a spanning tree while
maximizing the utility, which is the tree reward minus the prices that we pay.
Situations such as the above two often arise in practice where we wish to
find a good solution to an optimization problem, but we start with only some
partial knowledge about the parameters of the problem. The missing information
can be found only after paying a probing price, which we call the price of
information. What strategy should we adopt to optimize our expected
utility/disutility?
A classical example of the above setting is Weitzman's "Pandora's box"
problem where we are given probability distributions on values of
independent random variables. The goal is to choose a single variable with a
large value, but we can find the actual outcomes only after paying a price. Our
work is a generalization of this model to other combinatorial optimization
problems such as matching, set cover, facility location, and prize-collecting
Steiner tree. We give a technique that reduces such problems to their non-price
counterparts, and use it to design exact/approximation algorithms to optimize
our utility/disutility. Our techniques extend to situations where there are
additional constraints on what parameters can be probed or when we can
simultaneously probe a subset of the parameters.
Comment: SODA 201
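Weitzman's optimal strategy for the single-variable "Pandora's box" setting has a clean closed form: each box gets a reservation value sigma solving E[(V - sigma)^+] = price, boxes are opened in decreasing order of sigma, and the process stops once the best value seen so far exceeds the next box's reservation value. A minimal sketch for discrete distributions (the function names and structure are our own, not from the paper):

```python
import random

def reservation_value(values, probs, price):
    # sigma solves E[(V - sigma)^+] = price, found by bisection.
    # Assumes 0 <= price <= E[V], so the root lies in [0, max(values)].
    lo, hi = 0.0, max(values)
    def excess(s):
        return sum(p * max(v - s, 0.0) for v, p in zip(values, probs))
    for _ in range(100):
        mid = (lo + hi) / 2
        if excess(mid) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pandora(boxes, rng):
    # boxes: list of (values, probs, price). Open boxes in decreasing
    # reservation value; stop when the best observed value already beats
    # the next box's reservation value. Returns realized utility
    # (best value found minus total probing price).
    order = sorted(boxes, key=lambda b: -reservation_value(*b))
    best, total_price = 0.0, 0.0
    for values, probs, price in order:
        if best >= reservation_value(values, probs, price):
            break
        total_price += price
        best = max(best, rng.choices(values, probs)[0])
    return best - total_price
```

For example, a box worth 1 with probability 1/2 (else 0) at price 1/4 has reservation value 1/2, since (1 - sigma) * 1/2 = 1/4. The paper's contribution is extending this index-style reasoning from picking one variable to combinatorial problems such as matching and spanning tree.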
Data-Collection for the Sloan Digital Sky Survey: a Network-Flow Heuristic
The goal of the Sloan Digital Sky Survey is ``to map in detail one-quarter of
the entire sky, determining the positions and absolute brightnesses of more
than 100 million celestial objects''. The survey will be performed by taking
``snapshots'' through a large telescope. Each snapshot can capture up to 600
objects from a small circle of the sky. This paper describes the design and
implementation of the algorithm that is being used to determine the snapshots
so as to minimize their number. The problem is NP-hard in general; the
algorithm described is a heuristic, based on Lagrangian relaxation and
min-cost network flow. It gets within 5-15% of a naive lower bound, whereas
using a ``uniform'' cover only gets within 25-35%.
Comment: proceedings version appeared in ACM-SIAM Symposium on Discrete
Algorithms (1998)
Multi-level Facility Location Problems
We conduct a comprehensive review of multi-level facility location problems, which extend several classical facility location problems and can be regarded as a subclass of the well-established field of hierarchical facility location. We first present the main characteristics of these problems and discuss similarities and differences with related areas. Based on the types of decisions involved in the optimization process, we identify three categories of multi-level facility location problems. We present overviews of formulations, algorithms and applications, and we trace the historical development of the field.