Online Coded Caching
We consider a basic content distribution scenario consisting of a single
origin server connected through a shared bottleneck link to a number of users
each equipped with a cache of finite memory. The users issue a sequence of
content requests from a set of popular files, and the goal is to operate the
caches as well as the server such that these requests are satisfied with the
minimum number of bits sent over the shared link. Assuming a basic Markov model
for renewing the set of popular files, we characterize approximately the
optimal long-term average rate of the shared link. We further prove that the
optimal online scheme has approximately the same performance as the optimal
offline scheme, in which the cache contents can be updated based on the entire
set of popular files before each new request. To support these theoretical
results, we propose an online coded caching scheme termed coded least-recently
sent (LRS) and simulate it for a demand time series derived from the dataset
made available by Netflix for the Netflix Prize. For this time series, we show
that the proposed coded LRS algorithm significantly outperforms the popular
least-recently used (LRU) caching algorithm.
Comment: 15 pages
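For reference, the LRU baseline against which coded LRS is compared can be simulated in a few lines. This is a generic miss-counting sketch of plain LRU (the coded LRS scheme itself relies on coded multicast transmissions and is not reproduced here):

```python
from collections import OrderedDict

def lru_misses(requests, cache_size):
    """Count cache misses under the classic LRU eviction policy."""
    cache = OrderedDict()  # keys in recency order; end = most recent
    misses = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)  # hit: mark as most recently used
        else:
            misses += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = True
    return misses
```

A simulation like this, run over a request time series (such as one derived from the Netflix dataset), gives the baseline miss count that a coded scheme would be compared against.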
The K-Server Dual and Loose Competitiveness for Paging
This paper has two results. The first is based on the surprising observation
that the well-known ``least-recently-used'' paging algorithm and the
``balance'' algorithm for weighted caching are linear-programming primal-dual
algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that
generalizes them both and has an optimal performance guarantee for weighted
caching.
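For uniform page sizes, the Greedy-Dual strategy for weighted caching admits a compact implementation. The sketch below follows one standard formulation (on a hit, a page's credit is reset to its full fetch cost; on a miss with a full cache, all credits are lowered uniformly and a zero-credit page is evicted); the tie-breaking rule is an assumption, as any zero-credit victim is valid:

```python
def greedy_dual(requests, costs, cache_size):
    """Greedy-Dual for weighted caching with uniform page sizes.

    `costs` maps each page to its fetch cost. The credits H(p) play the
    role of dual variables in the primal-dual view. Returns the total
    fetch cost paid.
    """
    H = {}      # cached page -> remaining credit
    total = 0
    for p in requests:
        if p in H:
            H[p] = costs[p]           # hit: restore full credit
            continue
        if len(H) >= cache_size:
            m = min(H.values())       # lower all credits uniformly
            for q in H:
                H[q] -= m
            victim = next(q for q, h in H.items() if h == 0)
            del H[victim]             # evict a zero-credit page
        H[p] = costs[p]               # fetch p with full credit
        total += costs[p]
    return total
```

With all costs equal this behaves like an unweighted paging rule, which is consistent with the observation that Greedy-Dual generalizes both LRU and the balance algorithm.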
For the second result, the paper presents empirical studies of paging
algorithms, documenting that in practice, on ``typical'' cache sizes and
sequences, the performance of paging strategies are much better than their
worst-case analyses in the standard model suggest. The paper then presents
theoretical results that support and explain this. For example: on any input
sequence, with almost all cache sizes, either the performance guarantee of
least-recently-used is O(log k) or the fault rate (in an absolute sense) is
insignificant.
Both of these results are strengthened and generalized in "On-line File Caching" (1998).
Comment: conference version: "On-Line Caching as Cache Size Varies", SODA (1991)
Online paging and file caching with expiration times
We consider a paging problem in which each page is assigned an expiration time at the time it is brought into the cache. The expiration time indicates the latest time that the fetched copy of the page may be used. Requests that occur later than the expiration time must be satisfied by bringing a new copy of the page into the cache. The problem has applications in caching of documents on the World Wide Web (WWW). We show that a natural extension of the well-studied least recently used (LRU) paging algorithm is strongly competitive for the uniform retrieval cost, uniform size case. We then describe a similar extension of the recently proposed Landlord algorithm for the case of arbitrary retrieval costs and sizes, and prove that it is strongly competitive. The results extend to the loose model of competitiveness as well.
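The natural extension of LRU described above, which treats an expired copy as absent, can be sketched as follows for the uniform-cost, uniform-size case. The request format (a time, a page, and a lifetime determining the expiration time) is an illustrative assumption:

```python
from collections import OrderedDict

def lru_with_expiry(requests, cache_size):
    """Requests are (time, page, lifetime) triples; a copy of `page`
    fetched at time t may be used until t + lifetime, its expiration
    time. Returns the number of fetches (misses)."""
    cache = OrderedDict()  # page -> expiration time; end = most recent
    misses = 0
    for t, page, lifetime in requests:
        if page in cache and cache[page] >= t:
            cache.move_to_end(page)        # valid copy: ordinary LRU hit
        else:
            misses += 1
            cache.pop(page, None)          # discard any expired copy
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = t + lifetime     # fetch a fresh copy
    return misses
```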
Dynamic Balanced Graph Partitioning
This paper initiates the study of the classic balanced graph partitioning
problem from an online perspective: Given an arbitrary sequence of pairwise
communication requests between nodes, with patterns that may change over
time, the objective is to service these requests efficiently by partitioning
the nodes into clusters, each of size , such that frequently
communicating nodes are located in the same cluster. The partitioning can be
updated dynamically by migrating nodes between clusters. The goal is to devise
online algorithms which jointly minimize the amount of inter-cluster
communication and migration cost.
The problem features interesting connections to other well-known online
problems. For example, scenarios with generalize online paging, and
scenarios with constitute a novel online variant of maximum matching. We
present several lower bounds and algorithms for settings both with and without
cluster-size augmentation. In particular, we prove that any deterministic
online algorithm has a competitive ratio of at least , even with significant
augmentation. Our main algorithmic contributions are an -competitive
deterministic algorithm for the general setting with constant augmentation, and
a constant-competitive algorithm for the maximum matching variant.
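As a toy illustration of the cost model only (not the paper's algorithm), the simulator below charges one unit per inter-cluster request and `alpha` per node migration, and collocates a pair via a balance-preserving swap once it has communicated `threshold` times. All parameter names and the heuristic itself are illustrative assumptions:

```python
from collections import defaultdict

def simulate(requests, clusters, alpha, threshold):
    """Toy online repartitioner. `clusters` is a list of node lists
    (mutated in place); each cluster is assumed to hold >= 2 nodes.
    Returns total cost: 1 per inter-cluster request, alpha per
    migration."""
    loc = {u: i for i, c in enumerate(clusters) for u in c}
    pair_count = defaultdict(int)
    cost = 0
    for u, v in requests:
        if loc[u] == loc[v]:
            continue                      # intra-cluster: free
        cost += 1                         # inter-cluster communication
        key = (min(u, v), max(u, v))
        pair_count[key] += 1
        if pair_count[key] >= threshold:
            # Swap v with some node w in u's cluster, keeping sizes equal.
            w = next(x for x in clusters[loc[u]] if x != u)
            cu, cv = loc[u], loc[v]
            clusters[cu].remove(w); clusters[cu].append(v)
            clusters[cv].remove(v); clusters[cv].append(w)
            loc[w], loc[v] = cv, cu
            cost += 2 * alpha             # two node migrations
            pair_count[key] = 0
    return cost
```

The tension this exposes, paying repeated communication cost versus paying migration cost to collocate, is exactly the trade-off the online algorithms in the paper must balance.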