7 research outputs found
Temporal Locality in Today's Content Caching: Why it Matters and How to Model it
The dimensioning of caching systems represents a difficult task in the design
of infrastructures for content distribution in the current Internet. This paper
addresses the problem of defining a realistic arrival process for the content
requests generated by users, due to its critical importance for both analytical
and simulative evaluations of the performance of caching systems. First, with
the aid of YouTube traces collected inside operational residential networks, we
identify the characteristics of real traffic that need to be considered or can
be safely neglected in order to accurately predict the performance of a cache.
Second, we propose a new parsimonious traffic model, named the Shot Noise Model
(SNM), which natively captures the dynamics of content popularity while
remaining simple enough to be employed effectively in both analytical and
scalable simulative studies of caching systems.
Finally, our results show that the SNM accounts for the temporal locality
observed in real traffic much better than existing approaches.

Comment: 7 pages, 7 figures. Accepted for publication in ACM Computer
Communication Review.
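The shot-noise idea can be illustrated with a short simulation sketch. The rectangular shot shape, the parameter names, and the distributions below are assumptions of this illustration, not the paper's exact formulation: each content appears at a random time, stays popular for an exponentially distributed lifespan, and its requests fall uniformly within that window.

```python
import random

def snm_trace(num_contents=100, horizon=1000.0, mean_volume=20.0,
              mean_life=50.0, seed=1):
    """Illustrative shot-noise-style request trace (hypothetical parameters).

    Each content is one 'shot': it arrives at a random time, remains popular
    for an exponentially distributed lifespan, and draws a random number of
    requests placed uniformly within that lifespan.
    """
    rng = random.Random(seed)
    events = []
    for cid in range(num_contents):
        t_on = rng.uniform(0.0, horizon)         # shot arrival time
        life = rng.expovariate(1.0 / mean_life)  # popularity lifespan
        volume = max(1, round(rng.expovariate(1.0 / mean_volume)))
        for _ in range(volume):
            events.append((t_on + rng.uniform(0.0, life), cid))
    events.sort()                                # time-ordered trace
    return events  # list of (timestamp, content_id)

trace = snm_trace()
```

Because each content's requests cluster inside its own shot, the resulting trace exhibits the kind of temporal locality that a stationary IRM-style generator cannot reproduce.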
Asymptotically-Optimal Incentive-Based En-Route Caching Scheme
Content caching at intermediate nodes, so that future requests can be served
without going back to the origin of the content, is a very effective way to
optimize the operation of computer networks. Several caching techniques have been
proposed since the emergence of the concept, including techniques that require
major changes to the Internet architecture such as Content Centric Networking.
Few of these techniques consider providing caching incentives for the nodes or
quality of service guarantees for content owners. In this work, we present a
low complexity, distributed, and online algorithm for making caching decisions
based on content popularity, while taking into account the aforementioned
issues. Our algorithm performs en-route caching. Therefore, it can be
integrated with the current TCP/IP model. In order to measure the performance
of any online caching algorithm, we define the competitive ratio as the ratio
of the performance of the online algorithm in terms of traffic savings to the
performance of the optimal offline algorithm that has a complete knowledge of
the future. We show that under our settings, no online algorithm can achieve a
better competitive ratio than , where is the number of
nodes in the network. Furthermore, we show that under realistic scenarios, our
algorithm has an asymptotically optimal competitive ratio in terms of the
number of nodes in the network. We also study an extension to the basic
algorithm and show its effectiveness through extensive simulations.
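A toy sketch can convey the flavour of an online en-route caching decision. This is a simplified illustration, not the paper's algorithm: the threshold rule and the savings accounting are assumptions. A node counts the requests it forwards, caches a content once its observed popularity crosses a threshold, and credits one unit of traffic savings per hop avoided when it later serves that content locally.

```python
from collections import defaultdict

class EnRouteCache:
    """Toy online en-route cache (illustrative threshold rule, hypothetical)."""

    def __init__(self, capacity, threshold=3):
        self.capacity = capacity
        self.threshold = threshold
        self.counts = defaultdict(int)   # requests observed per content
        self.cache = set()
        self.savings = 0                 # traffic saved, in hop-units

    def request(self, cid, hops_to_origin=1):
        """Process one request; return True if served from this node."""
        if cid in self.cache:
            self.savings += hops_to_origin   # request stops here, en route
            return True
        self.counts[cid] += 1
        if self.counts[cid] >= self.threshold and len(self.cache) < self.capacity:
            self.cache.add(cid)              # content became popular enough
        return False

node = EnRouteCache(capacity=10)
for cid in [1, 1, 1, 1, 2]:
    node.request(cid, hops_to_origin=4)
print(node.savings)  # → 4: the fourth request for content 1 is served locally
```

Because the rule needs only local counters and the current cache contents, it runs online and distributed, which is the property the abstract emphasizes.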
Online algorithms for content caching: an economic perspective
Content caching at intermediate nodes, such that future requests can be served without going back to the origin of the content, is an effective way to optimize the operations of computer networks. In doing so, content caching reduces the delivery delay and improves the users' Quality of Experience (QoE). The current literature either proposes offline algorithms that have complete knowledge of the request profile a priori, or proposes heuristics without provable performance. In this dissertation, online algorithms are presented for content caching in three different network settings: the current Internet Network, collaborative multi-cell coordinated networks, and future Content Centric Networks (CCN). Due to the difficulty of obtaining prior knowledge of contents' popularities in real scenarios, an algorithm has to decide whether or not to cache a content at the moment a request for it is made, without knowledge of any future requests. The performance of the online algorithms is measured through a competitive ratio analysis, comparing the performance of the online algorithm to that of an omniscient optimal offline algorithm. Through theoretical analyses, it is shown that the proposed online algorithms achieve either the optimal or close to the optimal competitive ratio. Moreover, the algorithms have low complexity and can be implemented in a distributed way. The theoretical analyses are complemented with simulation-based experiments, and it is shown that the online algorithms outperform state-of-the-art caching schemes.
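The competitive-ratio yardstick used throughout the dissertation can be made concrete with a small sketch. The model below is a deliberate simplification (an assumption of this illustration): savings are counted as requests served from cache, and the omniscient offline benchmark simply caches the overall most-requested contents for the whole trace.

```python
from collections import Counter

def offline_optimal_savings(trace, capacity):
    """Omniscient benchmark: with the whole trace known in advance, cache the
    'capacity' most-requested contents; every request to them counts as a
    saving.  (Counting all such requests is a simplification of this sketch.)
    """
    counts = Counter(trace)
    return sum(c for _, c in counts.most_common(capacity))

def empirical_competitive_ratio(online_savings, trace, capacity):
    """Ratio of the online algorithm's savings to the offline optimum."""
    opt = offline_optimal_savings(trace, capacity)
    return online_savings / opt if opt else 1.0

trace = [1, 1, 2, 1, 3, 2, 1]
print(offline_optimal_savings(trace, 1))  # → 4: content 1 is requested 4 times
```

An online scheme that achieved, say, 3 savings on this trace with capacity 1 would have an empirical competitive ratio of 0.75 against this benchmark.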
The cache inference problem and its application to content and request routing
In many networked applications, independent caching agents cooperate by
servicing each other's miss streams, without revealing the operational
details of the caching mechanisms they employ. Inference of such details
could be instrumental for many other processes. For example, it could be
used for optimized forwarding (or routing) of one's own miss stream (or
content) to available proxy caches, or for making cache-aware resource
management decisions. In this paper, we introduce the Cache Inference
Problem (CIP) as that of inferring the characteristics of a caching
agent, given the miss stream of that agent. While CIP is not solvable in
its most general form, there are special cases of practical importance
in which it is, including when the request stream follows an Independent
Reference Model (IRM) with generalized power-law (GPL) demand
distribution. To that end, we design two basic "litmus" tests that
are able to detect the LFU and LRU replacement policies, the effective
size of the cache and of the object universe, and the skewness of the
GPL demand for objects. Using extensive experiments under synthetic as
well as real traces, we show that our methods infer such characteristics
accurately and quite efficiently, and that they remain robust even when
the IRM/GPL assumptions do not hold, and even when the underlying
replacement policies are not "pure" LFU or LRU. We demonstrate the
value of our inference framework by considering example applications.
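The flavour of such a litmus test can be sketched as follows (a simplified illustration; the paper's actual tests are more refined): replay an IRM stream with power-law demand through LFU and LRU cache simulators and compare the resulting miss streams. Under idealized LFU, an object stops appearing in the miss stream once its request count keeps it among the top contents, while under LRU it can still be evicted and missed again.

```python
import bisect
import itertools
import random
from collections import Counter, OrderedDict

def irm_trace(n_objects=200, n_requests=5000, alpha=1.0, seed=7):
    """IRM request stream with Zipf-like (generalized power-law) demand."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** alpha for i in range(n_objects)]
    cum = list(itertools.accumulate(weights))
    return [bisect.bisect_left(cum, rng.uniform(0.0, cum[-1]))
            for _ in range(n_requests)]

def miss_stream_lru(trace, size):
    """Replay the trace through an LRU cache; return its miss stream."""
    cache, misses = OrderedDict(), []
    for x in trace:
        if x in cache:
            cache.move_to_end(x)
        else:
            misses.append(x)
            cache[x] = True
            if len(cache) > size:
                cache.popitem(last=False)    # evict least recently used
    return misses

def miss_stream_lfu(trace, size):
    """Idealized LFU: the cache always holds the most-requested objects so far."""
    counts, cache, misses = Counter(), set(), []
    for x in trace:
        if x not in cache:
            misses.append(x)
        counts[x] += 1
        cache = {o for o, _ in counts.most_common(size)}
    return misses
```

Comparing which object ids keep reappearing late in each miss stream is the kind of signal the paper's litmus tests formalize to tell the two policies apart and to estimate the cache size.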
The Cache Inference Problem and its Application to Content and Request Routing (Extended Version)
Abstract: In many networked applications, independent caching agents cooperate by servicing each other's miss streams, without revealing the operational details of the caching mechanisms they employ. Inference of such details could be instrumental for many other processes. For example, it could be used for optimized forwarding (or routing) of one's own miss stream (or content) to available proxy caches, or for making cache-aware resource management decisions. In this paper, we introduce the Cache Inference Problem (CIP) as that of inferring the characteristics of a caching agent, given the miss stream of that agent. While CIP is not solvable in its most general form, there are special cases of practical importance in which it is, including when the request stream follows an Independent Reference Model (IRM) with generalized power-law (GPL) demand distribution. To that end, we design two basic "litmus" tests that are able to detect the LFU and LRU replacement policies, the effective size of the cache and of the object universe, and the skewness of the GPL demand for objects. Using extensive experiments under synthetic as well as real traces, we show that our methods infer such characteristics accurately and quite efficiently, and that they remain robust even when the IRM/GPL assumptions do not hold, and even when the underlying replacement policies are not "pure" LFU or LRU. We demonstrate the value of our inference framework by considering example applications.