7,122 research outputs found
Updating Content in Cache-Aided Coded Multicast
Motivated by applications to the delivery of dynamically updated but correlated data in settings such as content distribution networks and distributed file-sharing systems, we study a single-source, multiple-destination network-coded multicast problem in a cache-aided network. We focus on models where the caches
are primarily located near the destinations, and where the source has no cache.
The source observes a sequence of correlated frames, and is expected to do
frame-by-frame encoding with no access to prior frames. We present a novel
scheme that shows how the caches can be advantageously used to decrease the
overall cost of multicast, even though the source encodes without access to
past data. Our cache design and update scheme works with any choice of network
code designed for a corresponding cache-less network, is largely decentralized,
and works for an arbitrary network. We study a convex relaxation of the optimization problem that results from the overall cost function. The solution of this optimization problem determines the rate allocation and caching strategies. Numerous simulation results are presented to substantiate the theory developed.
Comment: To Appear in IEEE Journal on Selected Areas in Communications: Special Issue on Caching for Communication Systems and Network
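A rough illustration of the kind of optimization the abstract alludes to is sketched below: a toy convex program that jointly chooses per-destination cache allocations and network delivery rates to minimize a linear delivery cost. The cost model, the shared cache budget, and all names (frame_size, link_cost, cache_budget) are illustrative assumptions, not the paper's formulation.

```python
# A toy convex relaxation in the spirit of the abstract: choose per-destination cache
# allocations and multicast delivery rates to minimize total delivery cost. The cost
# model, the shared cache budget, and all variable names are illustrative assumptions,
# not the formulation used in the paper.
import cvxpy as cp
import numpy as np

n_dest = 4
frame_size = 1.0                              # normalized size of one frame
link_cost = np.array([1.0, 3.0, 1.5, 2.0])    # cost per unit of rate to each destination
cache_budget = 2.0                            # total cache capacity across destinations

c = cp.Variable(n_dest)                       # fraction of a frame held in each cache
r = cp.Variable(n_dest)                       # rate delivered over the network

constraints = [
    c >= 0, c <= frame_size,                  # each cache holds at most one full frame
    cp.sum(c) <= cache_budget,                # shared cache budget
    r >= frame_size - c,                      # cache plus network delivery covers the frame
    r >= 0,
]
prob = cp.Problem(cp.Minimize(link_cost @ r), constraints)
prob.solve()
print("cache allocation:", np.round(c.value, 3))
print("delivery rates:  ", np.round(r.value, 3))
```

Even in this stripped-down form, the solver places cache where delivery is most expensive, which matches the intuition that destination-side caches reduce multicast cost the most where links are costly.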
On-Line File Caching
In the on-line file-caching problem, the input is a sequence of
requests for files, given on-line (one at a time). Each file has a non-negative
size and a non-negative retrieval cost. The problem is to decide which files to
keep in a fixed-size cache so as to minimize the sum of the retrieval costs for
files that are not in the cache when requested. The problem arises in web
caching by browsers and by proxies. This paper describes a natural
generalization of LRU called Landlord and gives an analysis showing that it has
an optimal performance guarantee (among deterministic on-line algorithms).
The paper also gives an analysis of the algorithm in a so-called ``loosely''
competitive model, showing that on a ``typical'' cache size, either the
performance guarantee is O(1) or the total retrieval cost is insignificant.
Comment: ACM-SIAM Symposium on Discrete Algorithms (1998)
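The eviction rule behind Landlord can be stated compactly; the sketch below follows the description above, with illustrative sizes, costs, and request trace, and resets a file's credit to its full retrieval cost on a hit (one of the choices the algorithm permits).

```python
# A minimal sketch of the Landlord algorithm described above. Sizes, costs, and the
# request trace are illustrative; it assumes every requested file fits in the cache.
def landlord(requests, size, cost, capacity):
    credit = {}                                  # cached files and their remaining credit
    total_cost = 0.0
    for f in requests:
        if f in credit:
            credit[f] = cost[f]                  # hit: renew credit (any value up to cost[f] is allowed)
            continue
        total_cost += cost[f]                    # miss: pay the retrieval cost
        while sum(size[g] for g in credit) + size[f] > capacity:
            delta = min(credit[g] / size[g] for g in credit)
            for g in list(credit):
                credit[g] -= delta * size[g]     # decrease credit in proportion to size
                if credit[g] <= 1e-12:
                    del credit[g]                # evict files whose credit reaches zero
        credit[f] = cost[f]                      # bring the requested file into the cache
    return total_cost

# Example: files with differing sizes and retrieval costs, cache of size 2.
size = {"a": 1, "b": 1, "c": 2}
cost = {"a": 1.0, "b": 3.0, "c": 2.0}
print(landlord(["a", "b", "c", "a", "b"], size, cost, capacity=2))
```

With unit sizes and unit costs, the proportional credit decrease becomes uniform and the rule evicts the least-recently-renewed file, which is the sense in which Landlord generalizes LRU.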
The K-Server Dual and Loose Competitiveness for Paging
This paper has two results. The first is based on the surprising observation
that the well-known ``least-recently-used'' paging algorithm and the
``balance'' algorithm for weighted caching are linear-programming primal-dual
algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that
generalizes them both and has an optimal performance guarantee for weighted
caching.
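A sketch of that strategy, in the usual statement of Greedy-Dual for weighted caching with uniform page sizes, is given below; the weights, request trace, and tie-breaking choice are illustrative assumptions.

```python
# A minimal sketch of Greedy-Dual for weighted caching (uniform page sizes): every
# cached page keeps a credit, a hit renews it, and on a miss with a full cache all
# credits drop by the minimum credit and a zero-credit page is evicted. Weights and
# the request trace are illustrative.
def greedy_dual(requests, weight, k):
    credit = {}                          # cached pages and their current credit
    total = 0.0
    for p in requests:
        if p in credit:
            credit[p] = weight[p]        # hit: renew credit to the page's weight
            continue
        total += weight[p]               # miss: pay the page's retrieval cost
        if len(credit) >= k:
            m = min(credit.values())
            for q in credit:
                credit[q] -= m           # lower every credit by the minimum
            victim = min(credit, key=credit.get)
            del credit[victim]           # evict a page whose credit is now zero
        credit[p] = weight[p]
    return total

# Example: four distinct pages with unequal weights and a cache of two pages.
print(greedy_dual(["a", "b", "c", "a", "d"],
                  {"a": 1.0, "b": 5.0, "c": 2.0, "d": 1.0}, k=2))
```

How the credit is renewed on a hit determines which classical policy the rule specializes to; the full-renewal choice used here is the LRU-like one.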
For the second result, the paper presents empirical studies of paging
algorithms, documenting that in practice, on ``typical'' cache sizes and
sequences, the performance of paging strategies is much better than their
worst-case analyses in the standard model suggest. The paper then presents
theoretical results that support and explain this. For example: on any input
sequence, with almost all cache sizes, either the performance guarantee of
least-recently-used is O(log k) or the fault rate (in an absolute sense) is
insignificant.
Both of these results are strengthened and generalized in ``On-line File Caching'' (1998).
Comment: conference version: "On-Line Caching as Cache Size Varies", SODA (1991)
- …