On Randomized Memoryless Algorithms for the Weighted k-server Problem
The weighted k-server problem is a generalization of the k-server problem
in which the cost of moving a server of weight β through a distance d
is β·d. The weighted k-server problem on uniform spaces models
caching where caches have different write costs. We prove tight bounds on the
performance of randomized memoryless algorithms for this problem on uniform
metric spaces. We prove that there is an α_k-competitive memoryless
algorithm for this problem, where α_k is defined by a recurrence in terms of
α_{k-1}. On the other hand, we also prove that no randomized memoryless
algorithm can have a competitive ratio better than α_k.
To prove the upper bound of α_k, we develop a framework to bound from
above the competitive ratio of any randomized memoryless algorithm for this
problem. The key technical contribution is a method for working with potential
functions defined implicitly as the solution of a linear system. The result is
robust in the sense that a small change in the probabilities used by the
algorithm results in a small change in the upper bound on the competitive
ratio. The above result has two important implications. Firstly, it yields an
α_k-competitive memoryless algorithm for the weighted k-server problem
on uniform spaces; this is the first competitive algorithm for this problem
that is memoryless. Secondly, it helps us bound the competitive ratio of the
Harmonic algorithm, which chooses probabilities in inverse proportion to
weights.Comment: Published at the 54th Annual IEEE Symposium on Foundations of
Computer Science (FOCS 2013)
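As a rough illustration of the Harmonic rule described above (on a uniform space, serve an uncovered request by moving a server chosen with probability inversely proportional to its weight), here is a minimal sketch; the function name and data layout are illustrative assumptions, not taken from the paper:

```python
import random

def harmonic_step(servers, weights, request):
    """One step of the Harmonic rule on a uniform metric space.

    servers: list of points currently covered, one entry per server.
    weights: weights beta_i of the servers (moving server i costs beta_i).
    request: the requested point.

    If the request is uncovered, a server is chosen with probability
    inversely proportional to its weight and moved to the request.
    Returns the movement cost incurred on this step.
    """
    if request in servers:
        return 0.0  # request already covered, no movement needed
    inv = [1.0 / w for w in weights]
    total = sum(inv)
    # pick server i with probability (1/beta_i) / sum_j (1/beta_j)
    r = random.random() * total
    for i, p in enumerate(inv):
        r -= p
        if r <= 0:
            servers[i] = request
            return weights[i]
    servers[-1] = request  # numerical fallback for rounding at the boundary
    return weights[-1]
```

Heavy servers are moved rarely, which is the intuition behind the Harmonic algorithm's bounded (though large) competitive ratio on uniform spaces.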
The Geometry of Scheduling
We consider the following general scheduling problem: The input consists of n
jobs, each with an arbitrary release time, size, and a monotone function
specifying the cost incurred when the job is completed at a particular time.
The objective is to find a preemptive schedule of minimum aggregate cost. This
problem formulation is general enough to include many natural scheduling
objectives, such as weighted flow, weighted tardiness, and sum of flow squared.
Our main result is a randomized polynomial-time algorithm with an approximation
ratio O(log log nP), where P is the maximum job size. We also give an O(1)
approximation in the special case when all jobs have identical release times.
The main idea is to reduce this scheduling problem to a particular geometric
set-cover problem which is then solved using the local ratio technique and
Varadarajan's quasi-uniform sampling technique. This general algorithmic
approach improves the best known approximation ratios by at least an
exponential factor (and much more in some cases) for essentially all of the
nontrivial common special cases of this problem. Our geometric interpretation
of scheduling may be of independent interest.Comment: Conference version in FOCS 2010
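To make the problem formulation above concrete, the following sketch evaluates the aggregate cost of a given preemptive schedule under per-job monotone cost functions; the schedule encoding (one job per unit time slot) is an assumption made only for illustration:

```python
def aggregate_cost(jobs, schedule):
    """Evaluate the aggregate cost of a preemptive schedule.

    jobs: dict job_id -> (release, size, cost_fn), where cost_fn(t) is the
          monotone cost of completing the job at time t.
    schedule: list of job_ids, one per unit time slot t = 1, 2, ...
              (None for an idle slot); preemption is free.
    Returns the sum of cost_fn(completion_time) over all jobs.
    """
    done = {}    # units of processing each job has received so far
    finish = {}  # completion time of each job
    for t, j in enumerate(schedule, start=1):
        if j is None:
            continue
        release, size, _ = jobs[j]
        assert t > release, "job scheduled before its release time"
        done[j] = done.get(j, 0) + 1
        if done[j] == size:
            finish[j] = t
    assert set(finish) == set(jobs), "some job was not completed"
    return sum(cost_fn(finish[j]) for j, (_, _, cost_fn) in jobs.items())
```

For example, weighted flow time corresponds to cost_fn(t) = w * (t - release), and weighted tardiness to cost_fn(t) = w * max(0, t - deadline); both are monotone, so they fit the formulation.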
An O(log k)-competitive algorithm for generalized caching
In the generalized caching problem, we have a set of pages and a cache of size k. Each page p has a size w_p ≥ 1 and a fetching cost c_p for loading the page into the cache. At any point in time, the sum of the sizes of the pages stored in the cache cannot exceed k. The input consists of a sequence of page requests. If a page is not present in the cache at the time it is requested, it has to be loaded into the cache, incurring a cost of c_p. We give a randomized O(log k)-competitive online algorithm for the generalized caching problem, improving the previous bound of O(log^2 k) by Bansal, Buchbinder, and Naor (STOC'08). This improved bound is tight and of the same order as the known bounds for the classic problem with uniform weights and sizes. We use the same LP-based techniques as Bansal et al. but provide improved and slightly simplified methods for rounding fractional solutions online.
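The cost model can be illustrated with a small simulation; note that the eviction policy below is plain LRU, chosen only to keep the sketch short, and is not the LP-based O(log k)-competitive algorithm of the paper:

```python
from collections import OrderedDict

def serve_requests(requests, size, cost, k):
    """Simulate the generalized caching model: page p has size size[p] >= 1
    and fetch cost cost[p]; the cached sizes may never exceed k in total.

    Eviction here is plain LRU, used only to illustrate the cost model.
    Returns the total fetch cost incurred on the request sequence.
    """
    cache = OrderedDict()  # page -> size, in LRU order (oldest first)
    used = 0
    total = 0.0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)   # cache hit: just refresh recency
            continue
        total += cost[p]           # miss: pay the fetch cost
        while used + size[p] > k:  # evict until the new page fits
            _, s = cache.popitem(last=False)
            used -= s
        cache[p] = size[p]
        used += size[p]
    return total
```

With non-uniform sizes and costs, an eviction heuristic like LRU can be forced to repeatedly evict expensive pages, which is why the competitive algorithms for this problem instead round fractional LP solutions online.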
Caching Connections in Matchings
Motivated by the desire to utilize a limited number of configurable optical
switches, made possible by recent advances in Software Defined Networks
(SDNs), we define an
online problem which we call the Caching in Matchings problem. This problem has
a natural combinatorial structure and therefore may find additional
applications in theory and practice.
In the Caching in Matchings problem our cache consists of matchings of
connections between servers that form a bipartite graph. To cache a connection
we insert it into one of the matchings possibly evicting at most two other
connections from this matching. This problem resembles the problem known as
Connection Caching, where we also cache connections but our only restriction is
that they form a graph of bounded degree. Our results show a somewhat
surprising qualitative separation between the problems: The competitive ratio
of any online algorithm for caching in matchings must depend on the size of the
graph.
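The insertion rule described above (caching a connection into one matching may evict at most two other connections, one at each endpoint) can be sketched as follows; the data layout, with each matching stored as a dict mapping every endpoint to its partner, is an illustrative assumption:

```python
def cache_connection(matchings, u, v, i):
    """Insert connection (u, v) into matching i of the cache.

    matchings: list of dicts; matchings[i] maps each matched endpoint to
    its partner, so every cached connection is stored under both endpoints.
    Inserting (u, v) evicts at most two connections: the one currently
    matching u and the one currently matching v in matching i.
    Returns the list of evicted connections.
    """
    m = matchings[i]
    evicted = []
    for x in (u, v):
        if x in m:             # x is already matched in this matching
            y = m.pop(x)
            m.pop(y, None)     # remove the connection under both endpoints
            evicted.append((x, y))
    m[u], m[v] = v, u          # cache the new connection
    return evicted
```

Because each matching touches every server at most once, a new connection conflicts with at most one cached connection per endpoint, which is exactly why at most two evictions suffice.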
Specifically, we give deterministic and randomized competitive algorithms for
caching in matchings whose competitive ratios depend on n, the number of
servers, and k, the number of matchings. We also prove lower bounds on the
competitive ratio of any deterministic algorithm and of any randomized
algorithm; in particular, the lower bound for randomized algorithms holds
regardless of k. We also show that if we allow the algorithm to use more
matchings than the optimum uses, then we match the competitive ratios of
connection caching, which are independent of n. Interestingly, we also show
that even a single extra matching allows the algorithm to obtain substantially
better bounds
- …