A Literature Survey of Cooperative Caching in Content Distribution Networks
Content distribution networks (CDNs), which serve to deliver web objects
(e.g., documents, applications, music, and video), have seen tremendous
growth since their emergence. To minimize the retrieval delay experienced by a
user requesting a web object, caching strategies are often applied:
contents are replicated at edges of the network, which are closer to the user,
so that the network distance between the user and the object is reduced. In
this literature survey, the evolution of caching is studied. A recent research
paper [15] in the field of large-scale caching for CDNs was chosen as the
anchor paper, which serves as a guide to the topic. Research studies published
after, and relevant to, the anchor paper are also analyzed to better evaluate
the statements and results of the anchor paper and, more importantly, to obtain
an unbiased view of large-scale collaborative caching systems as a whole.
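The edge-replication idea above is typically realized with a cache replacement policy at each edge node. As a minimal illustration (not a strategy from the survey itself), the classic least-recently-used (LRU) policy can be sketched as follows; all names here are our own:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache for an edge node: when full,
    it evicts the object that has gone longest without being requested."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> cached web object

    def get(self, key):
        if key not in self.store:
            return None  # cache miss: the object must be fetched from origin
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, obj):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = obj
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the LRU entry

# Toy usage: a two-object edge cache.
edge = LRUCache(capacity=2)
edge.put("video1", b"...")
edge.put("video2", b"...")
edge.get("video1")          # hit: video1 becomes most recently used
edge.put("video3", b"...")  # evicts video2, the least recently used
```

Cooperative schemes, as surveyed in the paper, go beyond this by letting neighboring edge caches answer each other's misses before falling back to the origin.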
Cooperative announcement-based caching for video-on-demand streaming
Recently, video-on-demand (VoD) streaming services like Netflix and Hulu have gained a lot of popularity. This has led to a strong increase in bandwidth capacity requirements in the network. To reduce this network load, the design of appropriate caching strategies is of the utmost importance. Since a video stream is typically segmented temporally into smaller chunks that can be accessed and decoded independently, cache replacement strategies have been developed that take advantage of this temporal structure in the video. In this paper, two caching strategies are proposed that additionally take advantage of the phenomenon of binge watching, where users stream multiple consecutive episodes of the same series, which recent user behavior studies report is becoming everyday behavior. Taking this information into account allows us to predict future segment requests, even before the video playout has started. The two strategies differ in the level of coordination between the caches in the network. Using a VoD request trace based on binge-watching user characteristics, the presented algorithms have been thoroughly evaluated in multiple network topologies with different characteristics, showing their general applicability. In a realistic scenario, the proposed election-based caching strategy was shown to outperform the state of the art by 20% in terms of cache hit ratio while using 4% less network bandwidth.
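The binge-watching prediction the abstract describes can be sketched very simply: when a viewer finishes an episode, the opening segments of the next episode of the same series become likely requests, so a cache can fetch them before playout starts. The sketch below is our own illustration of that idea (the segment-naming scheme and function names are hypothetical, not from the paper):

```python
def prefetch_candidates(series, episode, n_segments=3):
    """Return IDs of the next episode's opening segments.
    The series/episode/segment naming is purely illustrative."""
    return [f"{series}/ep{episode + 1}/seg{i}" for i in range(n_segments)]

def on_episode_finished(cache, series, episode, fetch):
    """Pre-load the start of the next episode, since binge watchers
    tend to continue with the same series immediately."""
    for seg_id in prefetch_candidates(series, episode):
        if seg_id not in cache:
            cache[seg_id] = fetch(seg_id)

# Toy usage: finishing episode 1 triggers prefetching of episode 2.
cache = {}
on_episode_finished(cache, "series_a", episode=1, fetch=lambda s: b"data")
```

The paper's two strategies differ in where this decision is made: with more coordination, caches can elect a single node to hold the predicted segments instead of each cache fetching them independently.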
Exploiting Traffic Balancing and Multicast Efficiency in Distributed Video-on-Demand Architectures
Distributed Video-on-Demand (DVoD) systems have been proposed as a
solution to the limited streaming capacity and poor scalability of centralized
systems. In a previous work, we proposed a fully distributed large-scale VoD
architecture, called Double P-Tree, which has shown itself to be a good approach
to the design of flexible and scalable DVoD systems. In this paper, we
present relevant design aspects related to video mapping and traffic balancing
that improve the performance of the Double P-Tree architecture. Our simulation
results demonstrate that these techniques yield a more efficient system and
considerably increase its streaming capacity. The results also show the crucial
importance of topology connectivity in improving multicast performance in
DVoD systems. Finally, several DVoD architectures were compared using
simulation, and the results show that the Double P-Tree architecture
incorporating mapping and load-balancing policies outperforms similar
DVoD architectures.
This work was supported by MCyT-Spain under contract TIC 2001-2592 and partially supported by the Generalitat de Catalunya - Grup de Recerca Consolidat 2001SGR-00218.
Cost-Effective Cache Deployment in Mobile Heterogeneous Networks
This paper investigates one of the fundamental issues in cache-enabled
heterogeneous networks (HetNets): how many cache instances should be deployed
at different base stations, in order to provide guaranteed service in a
cost-effective manner. Specifically, we consider two-tier HetNets with
hierarchical caching, where the most popular files are cached at small cell
base stations (SBSs) while the less popular ones are cached at macro base
stations (MBSs). For a given network cache deployment budget, the cache sizes
for MBSs and SBSs are optimized to maximize network capacity while satisfying
the file transmission rate requirements. As cache sizes of MBSs and SBSs affect
the traffic load distribution, inter-tier traffic steering is also employed for
load balancing. Based on stochastic geometry analysis, the optimal cache sizes
for MBSs and SBSs are obtained, which are threshold-based with respect to cache
budget in networks constrained by SBS backhauls. Simulation results are
provided to evaluate the proposed schemes and demonstrate their application
to cost-effective network deployment.
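The paper derives its thresholds from stochastic geometry analysis; purely to illustrate the shape of a threshold-based budget split between the two tiers, one might write something like the following. Everything here (the function, its parameters, and the numbers) is an assumption of ours, not the paper's optimization:

```python
def split_cache_budget(total_budget, sbs_threshold, n_sbs, n_mbs):
    """Illustrative threshold-based split of a network cache budget:
    fill small-cell base station (SBS) caches, which hold the most
    popular files, up to a per-SBS threshold, then assign whatever
    remains to macro base station (MBS) caches. In the paper the
    threshold would come from the analysis; here it is just an input."""
    sbs_total = min(total_budget, sbs_threshold * n_sbs)
    per_sbs = sbs_total // n_sbs
    per_mbs = (total_budget - per_sbs * n_sbs) // n_mbs
    return per_sbs, per_mbs

# With a budget of 1000 units, 20 SBSs capped at 40 units each, and
# 2 MBSs sharing the remainder:
split_cache_budget(total_budget=1000, sbs_threshold=40, n_sbs=20, n_mbs=2)
```

The threshold behavior the abstract mentions shows up here directly: below the SBS saturation point the whole budget goes to the SBS tier, and only the surplus spills over to the MBSs.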
Hierarchical Coded Caching
Caching of popular content during off-peak hours is a strategy to reduce
network loads during peak hours. Recent work has shown significant benefits of
designing such caching strategies not only to deliver part of the content
locally, but also to provide coded multicasting opportunities even among users
with different demands. Exploiting both of these gains was shown to be
approximately optimal for caching systems with a single layer of caches.
Motivated by practical scenarios, we consider in this work a hierarchical
content delivery network with two layers of caches. We propose a new caching
scheme that combines two basic approaches. The first approach provides coded
multicasting opportunities within each layer; the second approach provides
coded multicasting opportunities across multiple layers. By striking the right
balance between these two approaches, we show that the proposed scheme achieves
the optimal communication rates to within a constant multiplicative and
additive gap. We further show that there is no tension between the rates in
each of the two layers up to the aforementioned gap. Thus, both layers can
simultaneously operate at approximately the minimum rate.
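The coded multicasting gain that this line of work builds on can be seen in the smallest possible example (two users, two files, each user caching half of the library), in the style of single-layer coded caching; the toy byte strings below are our own illustration:

```python
def xor(x, y):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

# Two files, each split into two equal halves (toy 4-byte halves).
A1, A2 = b"AAAA", b"aaaa"
B1, B2 = b"BBBB", b"bbbb"

# Placement phase (off-peak): user 1 caches the first half of every
# file, user 2 caches the second half of every file.
cache1 = {"A1": A1, "B1": B1}
cache2 = {"A2": A2, "B2": B2}

# Delivery phase: user 1 requests file A, user 2 requests file B.
# A single coded multicast message serves both different demands.
multicast = xor(A2, B1)

# User 1 cancels B1 (already cached) to recover the missing half A2.
user1_A = cache1["A1"] + xor(multicast, cache1["B1"])
# User 2 cancels A2 (already cached) to recover the missing half B1.
user2_B = xor(multicast, cache2["A2"]) + cache2["B2"]
```

One transmission of half a file serves two users with different demands, where uncoded delivery would need two such transmissions; the hierarchical scheme in the paper creates such opportunities both within each cache layer and across the two layers.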