On-Line File Caching
In the on-line file-caching problem, the input is a sequence of
requests for files, given on-line (one at a time). Each file has a non-negative
size and a non-negative retrieval cost. The problem is to decide which files to
keep in a fixed-size cache so as to minimize the sum of the retrieval costs for
files that are not in the cache when requested. The problem arises in web
caching by browsers and by proxies. This paper describes a natural
generalization of LRU called Landlord and gives an analysis showing that it has
an optimal performance guarantee (among deterministic on-line algorithms).
The paper also gives an analysis of the algorithm in a so-called ``loosely''
competitive model, showing that on a ``typical'' cache size, either the
performance guarantee is O(1) or the total retrieval cost is insignificant.
Comment: ACM-SIAM Symposium on Discrete Algorithms (1998)
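As a rough illustration (my own simplified rendering, not the paper's pseudocode), Landlord can be sketched as follows: each cached file carries a credit, a hit refreshes it, and a miss drains credit per unit size until enough room is freed.

```python
# Sketch of the Landlord algorithm, in my own notation (not the paper's
# pseudocode). Assumes every file fits in the cache on its own.

def landlord(requests, size, cost, capacity):
    """Serve `requests`; return the total retrieval cost paid on misses."""
    credit = {}      # cached file -> remaining credit
    used = 0         # total size currently in the cache
    total = 0
    for f in requests:
        if f in credit:
            credit[f] = cost[f]            # a hit refreshes the file's credit
            continue
        total += cost[f]                   # miss: pay the retrieval cost
        while used + size[f] > capacity:   # drain credit until f fits
            delta = min(credit[g] / size[g] for g in credit)
            for g in list(credit):
                credit[g] -= delta * size[g]
                if credit[g] <= 1e-12:     # evict files with no credit left
                    used -= size[g]
                    del credit[g]
        credit[f] = cost[f]                # cache f with full credit
        used += size[f]
    return total
```

With unit sizes and unit costs this degenerates to an LRU-like policy, which is why Landlord is a natural generalization of LRU.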
Jointly Optimal Routing and Caching for Arbitrary Network Topologies
We study a problem of fundamental importance to ICNs, namely, minimizing
routing costs by jointly optimizing caching and routing decisions over an
arbitrary network topology. We consider both source routing and hop-by-hop
routing settings. The respective offline problems are NP-hard. Nevertheless, we
show that there exist polynomial-time approximation algorithms producing
solutions within a constant factor of the optimal. We also produce
distributed, adaptive algorithms with the same approximation guarantees. We
simulate our adaptive algorithms over a broad array of different topologies.
Our algorithms reduce routing costs by several orders of magnitude compared to
prior art, including algorithms optimizing caching under fixed routing.
Comment: This is the extended version of the paper "Jointly Optimal Routing and Caching for Arbitrary Network Topologies", appearing in the 4th ACM Conference on Information-Centric Networking (ICN 2017), Berlin, Sep. 26-28, 2017
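Constant-factor guarantees of this kind typically rest on the caching gain being a monotone submodular function of the placement. A bare-bones centralized greedy (an illustration of that machinery only, not the paper's distributed adaptive algorithm; all names and the toy gain are hypothetical) looks like:

```python
# Greedy maximization of a monotone submodular set function under a
# cardinality budget; illustrative only, not the paper's algorithm.

def greedy_max(ground, gain, budget):
    """Pick up to `budget` elements, each time adding the one with the
    largest marginal gain; (1 - 1/e)-approximate for submodular `gain`."""
    chosen = set()
    for _ in range(budget):
        best = max((x for x in ground if x not in chosen),
                   key=lambda x: gain(chosen | {x}) - gain(chosen),
                   default=None)
        if best is None:
            break
        chosen.add(best)
    return chosen

# Toy "caching gain": requests arrive at the given rates, and caching an
# item saves one unit of routing cost per request (a modular special case).
rates = {"a": 5, "b": 3, "c": 1}
cached = greedy_max(set(rates), lambda s: sum(rates[i] for i in s), 2)
```

The toy gain here is modular, so greedy is exact; the interesting case is when caching decisions at different nodes overlap and the gain is only submodular.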
The K-Server Dual and Loose Competitiveness for Paging
This paper has two results. The first is based on the surprising observation
that the well-known ``least-recently-used'' paging algorithm and the
``balance'' algorithm for weighted caching are linear-programming primal-dual
algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that
generalizes them both and has an optimal performance guarantee for weighted
caching.
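In sketch form (my own Python rendering of the standard description, not the paper's notation), Greedy-Dual for unit-size pages keeps a credit H(p) per cached page; on a fault with a full cache it lowers every credit by the minimum and evicts a page whose credit reaches zero:

```python
# Greedy-Dual sketch for weighted caching with k unit-size slots.

def greedy_dual(requests, cost, k):
    """Serve `requests`; return the total cost paid on faults."""
    H = {}                    # cached page -> remaining credit
    total = 0
    for p in requests:
        if p in H:
            H[p] = cost[p]    # on a hit, restore full credit (one common choice)
            continue
        total += cost[p]      # fault: pay the retrieval cost
        if len(H) == k:       # cache full: make room
            m = min(H.values())
            for q in H:
                H[q] -= m     # lower all credits by the minimum...
            victim = next(q for q in H if H[q] == 0)
            del H[victim]     # ...and evict a page whose credit hit zero
        H[p] = cost[p]
    return total
```

With uniform costs the credits behave like recency information and the rule mimics LRU; with cost proportional to weight it mimics the balance algorithm, which is the sense in which Greedy-Dual generalizes both.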
For the second result, the paper presents empirical studies of paging
algorithms, documenting that in practice, on ``typical'' cache sizes and
sequences, the performance of paging strategies is much better than their
worst-case analyses in the standard model suggest. The paper then presents
theoretical results that support and explain this. For example: on any input
sequence, with almost all cache sizes, either the performance guarantee of
least-recently-used is O(log k) or the fault rate (in an absolute sense) is
insignificant.
Both of these results are strengthened and generalized in ``On-Line File
Caching'' (1998).
Comment: conference version: "On-Line Caching as Cache Size Varies", SODA (1991)
On Randomized Memoryless Algorithms for the Weighted k-server Problem
The weighted k-server problem is a generalization of the k-server problem
in which the cost of moving a server of weight w through a distance d
is wd. The weighted k-server problem on uniform spaces models
caching where caches have different write costs. We prove tight bounds on the
performance of randomized memoryless algorithms for this problem on uniform
metric spaces. We prove that there is an α_k-competitive memoryless
algorithm for this problem, where α_k = α_{k-1}^2 + 3α_{k-1} + 1 and
α_1 = 1. On the other hand, we also prove that no randomized memoryless
algorithm can have a competitive ratio better than α_k.
To prove the upper bound of α_k, we develop a framework to bound from
above the competitive ratio of any randomized memoryless algorithm for this
problem. The key technical contribution is a method for working with potential
functions defined implicitly as the solution of a linear system. The result is
robust in the sense that a small change in the probabilities used by the
algorithm results in a small change in the upper bound on the competitive
ratio. The above result has two important implications. Firstly, this yields
an α_k-competitive memoryless algorithm for the weighted k-server problem
on uniform spaces; this is the first competitive algorithm for this problem
that is memoryless. Secondly, this helps us pin down the competitive ratio of
the Harmonic algorithm, which chooses probabilities in inverse proportion to
the weights.
Comment: Published at the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2013)
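The Harmonic rule itself is simple to state; a toy single-step sketch on a uniform metric (function and variable names are mine, not the paper's) could read:

```python
import random

# One step of the Harmonic algorithm on a uniform metric space: on a miss,
# move server i with probability proportional to 1/weights[i], paying its
# weight as the movement cost. Illustrative names, not the paper's notation.

def harmonic_step(positions, weights, request, rng=random):
    """Mutates `positions`; returns (index of the server moved, cost paid)."""
    if request in positions:
        return None, 0               # a server is already there: free hit
    inv = [1.0 / w for w in weights]
    r = rng.random() * sum(inv)
    acc = 0.0
    for i, v in enumerate(inv):
        acc += v
        if r <= acc:
            positions[i] = request
            return i, weights[i]
    positions[-1] = request          # guard against float round-off
    return len(weights) - 1, weights[-1]
```

Heavier servers move less often, which is the intuition behind the algorithm, but as the paper shows, quantifying its competitive ratio requires the implicit potential-function machinery.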
Joint Service Placement and Request Routing in Multi-cell Mobile Edge Computing Networks
The proliferation of innovative mobile services such as augmented reality,
networked gaming, and autonomous driving has spurred a growing need for
low-latency access to computing resources that cannot be met solely by existing
centralized cloud systems. Mobile Edge Computing (MEC) is expected to be an
effective solution to meet the demand for low-latency services by enabling the
execution of computing tasks at the network periphery, in proximity to
end-users. While a number of recent studies have addressed the problem of
determining the execution of service tasks and the routing of user requests to
corresponding edge servers, the focus has primarily been on the efficient
utilization of computing resources, neglecting the fact that non-trivial
amounts of data need to be stored to enable service execution, and that many
emerging services exhibit asymmetric bandwidth requirements. To fill this gap,
we study the joint optimization of service placement and request routing in
MEC-enabled multi-cell networks with multidimensional
(storage-computation-communication) constraints. We show that this problem
generalizes several problems in the literature and propose an algorithm that
achieves close-to-optimal performance using randomized rounding. Evaluation
results demonstrate that our approach can effectively utilize the available
resources to maximize the number of requests served by low-latency edge cloud
servers.
Comment: IEEE Infocom 201
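As a minimal, hypothetical sketch of the rounding step in isolation (the paper's algorithm also couples routing decisions and derives the fractional solution from a relaxation), one might round a fractional placement like this:

```python
import random

# Randomized rounding of a fractional service placement; `frac` and all
# names are illustrative, not the paper's formulation.

def round_placement(frac, capacity, rng=random):
    """frac[v][s] in [0, 1]: fractional placement of service s at node v.
    Keep each service with probability frac[v][s], resampling a node's
    selection until it fits within that node's storage capacity."""
    placement = {}
    for v, xs in frac.items():
        while True:
            chosen = {s for s, x in xs.items() if rng.random() < x}
            if len(chosen) <= capacity[v]:
                placement[v] = chosen
                break
    return placement
```

Resampling until feasible is the crudest way to honor the storage constraint; more careful rounding is what lets the paper argue close-to-optimal performance.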
An Efficient Coded Multicasting Scheme Preserving the Multiplicative Caching Gain
Coded multicasting has been shown to be a promising approach to
significantly improve the caching performance of content delivery networks with
multiple caches downstream of a common multicast link. However, achievable
schemes proposed to date have been shown to achieve the proved order-optimal
performance only in the asymptotic regime in which the number of packets per
requested item goes to infinity. In this paper, we first extend the asymptotic
analysis of the achievable scheme in [1], [2] to the case of heterogeneous
cache sizes and demand distributions, providing the best known upper bound on
the fundamental limiting performance when the number of packets goes to
infinity. We then show that the scheme achieving this upper bound quickly loses
its multiplicative caching gain for finite content packetization. To overcome
this limitation, we design a novel polynomial-time algorithm based on random
greedy graph-coloring that, while keeping the same finite content
packetization, recovers a significant part of the multiplicative caching gain.
Our results show that the order-optimal coded multicasting schemes proposed to
date, while useful in quantifying the fundamental limiting performance, must be
properly designed for practical regimes of finite packetization.
Comment: 6 pages, 7 figures, Published in Infocom CNTCV 201
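The coloring subroutine at the core of such schemes can be sketched generically (in the paper's setting, vertices are packets, edges are conflicts, and each color class becomes one coded multicast transmission; the names here are generic, not the paper's):

```python
import random

# Random greedy graph coloring: visit vertices in a random order and give
# each the smallest color not used by an already-colored neighbor.

def random_greedy_coloring(adj, rng=random):
    """`adj`: vertex -> list of neighbors. Returns {vertex: color}."""
    order = list(adj)
    rng.shuffle(order)
    color = {}
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color
```

Randomizing the visiting order is what makes repeated greedy runs explore different colorings, and hence different coded transmission schedules, at polynomial cost.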