
    Unravelling the Impact of Temporal and Geographical Locality in Content Caching Systems

    To assess the performance of caching systems, a proper process describing the content requests generated by users must be defined. Starting from the analysis of traces of YouTube video requests collected inside operational networks, we identify the characteristics of real traffic that need to be represented and those that can instead be safely neglected. Based on our observations, we introduce a simple, parsimonious traffic model, named the Shot Noise Model (SNM), that allows us to capture the temporal and geographical locality of content popularity. The SNM is simple enough to be effectively employed in both analytical and scalable simulative studies of caching systems. We demonstrate this by analytically characterizing the performance of the LRU caching policy under the SNM, for both a single cache and a network of caches. With respect to the standard Independent Reference Model (IRM), some paradigmatic shifts concerning the impact of various traffic characteristics on cache performance clearly emerge from our results. Comment: 14 pages, 11 figures, 2 appendices.
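    As a rough illustration of the shot-noise idea described in this abstract, the sketch below generates a synthetic request trace in which contents arrive over time and each content attracts requests only during a finite lifetime. It assumes the simplest rectangular popularity profile, with the same request rate and lifetime for every content; all names and parameter values are placeholders rather than the paper's fitted model.

```python
# Minimal sketch of a Shot Noise Model request trace (assumed rectangular
# popularity profile): contents arrive as a Poisson process, and each content
# then generates requests as a Poisson process of rate lam over lifetime tau.
import random

def snm_trace(horizon, content_rate, lam, tau, seed=0):
    rng = random.Random(seed)
    requests = []          # (time, content_id) pairs
    t, content_id = 0.0, 0
    while t < horizon:
        t += rng.expovariate(content_rate)   # next content arrival
        content_id += 1
        # Requests for this content during its active period [t, t + tau].
        s = t
        while True:
            s += rng.expovariate(lam)
            if s > t + tau or s > horizon:
                break
            requests.append((s, content_id))
    requests.sort()        # interleave requests of all contents in time
    return requests

trace = snm_trace(horizon=1_000.0, content_rate=0.5, lam=2.0, tau=100.0)
print(len(trace), "requests,", len({c for _, c in trace}), "distinct contents")
```

    Such a trace can then be replayed against a cache simulator (e.g., LRU) to observe the temporal-locality effects that the paper analyzes, in contrast to an IRM trace where request probabilities never change over time.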

    Analytical Investigation of On-Path Caching Performance in Information Centric Networks

    Information Centric Networking (ICN) architectures have been proposed to address the Internet's shift from a host-centric model toward an information-centric one. In these architectures, routing nodes are equipped with caching functionality that can influence network traffic and communication quality, since data items can be served from nodes much closer to the requesting users. Realizing effective caching networks therefore requires understanding the cache characteristics of each node and managing system resources with respect to both network-centric metrics (e.g., higher hit ratio) and user-centric metrics (e.g., shorter delay). This thesis studies methodologies for improving the performance of cache management in ICNs. As individual sub-problems, it investigates the LRU-2 and 2-LRU algorithms, geographical locality in the distribution of users' requests, and efficient caching in ICNs. As the first contribution, a mathematical model approximating the behaviour of the LRU-2 algorithm is proposed, and the 2-LRU and LRU-2 cache replacement algorithms are analyzed. The 2-LRU caching strategy has been shown to outperform LRU; the main idea behind both 2-LRU and LRU-2 is to consider frequency (the metric used in LFU) and recency (the metric used in LRU) together in the cache replacement process. Both simulation and numerical results show that the proposed model precisely approximates the miss rate of the LRU-2 algorithm. Next, the influence of geographical locality in users' requests on the performance of networks of caches is investigated. Geographically localized and global request patterns have both been observed to possess Zipf properties (a power-law distribution in which a few data items have high request frequencies while most have low ones), although the local distributions are poorly correlated with the global distribution. This suggests that, in real client request scenarios, several independent Zipf distributions combine into an emergent global Zipf distribution. An algorithm is proposed that generates realistic synthetic traffic for regional caches which possesses Zipf properties locally while also producing a global Zipf distribution; simulation results show that caching performance can differ depending on the distribution that users' requests follow. Finally, the efficiency of cache replacement and replication algorithms in ICNs is studied, since the ICN literature still lacks a deep empirical and analytical understanding of the benefits brought by in-network caching. An analytical model is proposed that optimally distributes a total cache budget among the nodes of an ICN for the LRU cache replacement and LCE cache replication algorithms. The results show how much user-centric and system-centric benefit can be gained through in-network caching, compared to the benefits obtained when caching facilities are provided only at the edge of the network.
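    The 2-LRU idea summarized above (admit a content to the cache only if its identifier was itself requested recently) can be sketched in a few lines. The following is a minimal single-cache illustration under assumed sizes and class names, not the analytical model developed in the thesis.

```python
from collections import OrderedDict

class TwoLRUCache:
    """Minimal 2-LRU sketch: a first LRU stage caches only content IDs,
    and an item is admitted to the second (real) LRU cache only if its ID
    is already present in the first stage, i.e. it was requested recently."""

    def __init__(self, meta_size, cache_size):
        self.meta = OrderedDict()   # stage 1: IDs only ("name cache")
        self.cache = OrderedDict()  # stage 2: actual content cache
        self.meta_size = meta_size
        self.cache_size = cache_size

    def request(self, item):
        # Hit in the real cache: refresh recency and report a hit.
        if item in self.cache:
            self.cache.move_to_end(item)
            return True
        # Miss: admit to the real cache only if the ID survived in stage 1.
        if item in self.meta:
            self.meta.move_to_end(item)
            self.cache[item] = True
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)  # evict LRU content
        else:
            self.meta[item] = True
            if len(self.meta) > self.meta_size:
                self.meta.popitem(last=False)   # evict LRU ID
        return False
```

    Feeding such a cache with a synthetic Zipf request stream and counting the fraction of request() calls that return True gives an empirical hit ratio against which an analytical approximation of the kind proposed in the thesis can be checked.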

    Time-Shifted Prefetching and Edge-Caching of Video Content: Insights, Algorithms, and Solutions

    Video traffic accounts for 82% of global Internet traffic and is growing at an unprecedented rate. As a result of this rapid growth and popularity of video content, the network is heavily burdened. To cope with this, service providers have to spend millions of dollars on infrastructure upgrades, which are typically triggered when reasonably sustained peak usage exceeds 80% of capacity. In this context, with network traffic load being significantly higher during peak periods (up to 5 times as much), we explore the problem of prefetching video content during off-peak periods of the network, even when such periods are substantially separated from the actual usage time. To this end, we collected YouTube and Netflix usage data from over 1500 users, spanning at least a one-year period and covering approximately 8.5 million videos watched collectively. We use the datasets to analyze and present key insights about user-level usage behavior, and show that our analysis can be used by researchers to tackle a myriad of problems in the general domains of networking and communication. Thereafter, equipped with the datasets and our derived insights, we develop a set of data-driven prediction and prefetching solutions, using machine-learning and deep-learning techniques (specifically supervised classifiers and LSTM networks), which anticipate the video content a user will consume based on their prior watching behavior and prefetch it during off-peak periods. We find that our solutions can reduce nearly 35% of peak-time YouTube traffic and 70% of peak-time Netflix series traffic. We also developed and evaluated a proof-of-concept system for prefetching video traffic and show how to integrate the two systems for prefetching YouTube and Netflix content. Furthermore, based on the findings from our algorithms, we develop a framework for prefetching video content regardless of the type of video and the platform on which it is hosted. (Ph.D. dissertation)
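    As a hedged illustration of the kind of sequence model mentioned in this abstract (an LSTM that predicts the next item a user will watch so it can be prefetched off-peak), the sketch below uses Keras with an illustrative catalogue size, sequence length, and random toy data; it is not the dissertation's actual architecture or dataset.

```python
# Minimal sketch of an LSTM-based next-item predictor for off-peak prefetching.
# NUM_ITEMS, SEQ_LEN, layer sizes, and the training data are illustrative.
import numpy as np
import tensorflow as tf

NUM_ITEMS = 5000      # hypothetical catalogue of channel/series IDs
SEQ_LEN = 20          # number of past views fed to the model

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=NUM_ITEMS, output_dim=64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(NUM_ITEMS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy training data: each row is a user's last SEQ_LEN watched item IDs,
# and the label is the item the user watched next.
X = np.random.randint(0, NUM_ITEMS, size=(1024, SEQ_LEN))
y = np.random.randint(0, NUM_ITEMS, size=(1024,))
model.fit(X, y, epochs=1, batch_size=64)

# At off-peak time, prefetch the top-k most likely next items for each user.
probs = model.predict(X[:1])[0]
prefetch_candidates = np.argsort(probs)[-5:][::-1]
print("prefetch during off-peak:", prefetch_candidates)
```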

    NDN content store and caching policies: performance evaluation

    Among the various factors contributing to the performance of named data networking (NDN), the organization of caching is a key one and has been studied intensely by the networking research community. These studies have aimed at (1) finding the best strategy to adopt for content caching; (2) specifying the best location and number of content stores (CS) in the network; and (3) defining the best cache replacement policy. Assessing and comparing the performance of the proposed solutions is as essential as developing the proposals themselves. The present work evaluates and compares the behavior of four caching policies (random, least recently used (LRU), least frequently used (LFU), and first in first out (FIFO)) applied to NDN. Several network scenarios are used for simulation: 2 topologies, a varying percentage of nodes equipped with content stores (5–100%), 1 and 10 producers, and 32 and 41 consumers. Five metrics are considered for the performance evaluation: cache hit ratio (CHR), network traffic, retrieval delay, interest retransmissions, and the number of upstream hops. Content requests follow the Zipf–Mandelbrot distribution (with skewness factors α=1.1 and α=0.75). LFU presents better performance in all considered metrics, except in the NDN testbed scenario with 41 consumers, 1 producer, and a content request rate of 100 packets/s, where LRU presents notably higher performance for content store levels from 50% to 100%. Although the network behavior is similar for both skewness factors, with α=0.75 the CHR is significantly reduced, as expected. This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the R&D Units Project Scope UIDB/00319/2020.
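    For readers who want to reproduce the flavour of such a comparison outside an NDN simulator, the sketch below generates a Zipf–Mandelbrot request stream and measures the hit ratio of LRU, perfect-LFU, and FIFO on a single standalone cache. Only the skewness factor α comes from the paper; the plateau parameter q, catalogue size, and cache size are assumptions.

```python
# Minimal sketch: Zipf-Mandelbrot requests against LRU, LFU, and FIFO caches.
import random
from collections import OrderedDict, Counter, deque

def zipf_mandelbrot_stream(catalog, alpha, q, n, seed=0):
    """P(rank k) proportional to 1 / (k + q)^alpha, for k = 1..catalog."""
    rng = random.Random(seed)
    weights = [1.0 / (k + q) ** alpha for k in range(1, catalog + 1)]
    return rng.choices(range(catalog), weights=weights, k=n)

def hit_ratio(requests, size, policy):
    hits = 0
    if policy == "LRU":
        cache = OrderedDict()
        for r in requests:
            if r in cache:
                hits += 1
                cache.move_to_end(r)
            else:
                cache[r] = True
                if len(cache) > size:
                    cache.popitem(last=False)        # evict least recently used
    elif policy == "FIFO":
        cache, order = set(), deque()
        for r in requests:
            if r in cache:
                hits += 1
            else:
                cache.add(r)
                order.append(r)
                if len(cache) > size:
                    cache.discard(order.popleft())   # evict oldest insertion
    elif policy == "LFU":
        # Perfect LFU: global request counts, evict the least requested item.
        cache, freq = set(), Counter()
        for r in requests:
            freq[r] += 1
            if r in cache:
                hits += 1
            else:
                cache.add(r)
                if len(cache) > size:
                    cache.discard(min(cache, key=lambda x: freq[x]))
    return hits / len(requests)

reqs = zipf_mandelbrot_stream(catalog=10_000, alpha=1.1, q=5, n=100_000)
for p in ("LRU", "LFU", "FIFO"):
    print(p, round(hit_ratio(reqs, 500, p), 3))
```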

    Quality of experience-centric management of adaptive video streaming services : status and challenges

    Video streaming applications currently dominate Internet traffic. In particular, HTTP Adaptive Streaming (HAS) has emerged as the dominant standard for streaming video over the best-effort Internet, thanks to its capability of matching the video quality to the available network resources. In HAS, the video client is equipped with a heuristic that dynamically decides the most suitable quality at which to stream the content, based on information such as the perceived network bandwidth or the video player's buffer status. The goal of this heuristic is to optimize the quality as perceived by the user, the so-called Quality of Experience (QoE). Despite the many advantages brought by the adaptive streaming principle, optimizing users' QoE is far from trivial. Current heuristics are still suboptimal when sudden bandwidth drops occur, especially in wireless environments, thus leading to freezes in the video playout, the main factor influencing users' QoE. This issue is aggravated in the case of live events, where the player buffer has to be kept as small as possible in order to reduce the playout delay between the user and the live signal. In light of the above, several works have been proposed in recent years with the aim of extending the classical, purely client-based structure of adaptive video streaming in order to fully optimize users' QoE. This article presents a survey of research works on this topic, together with a classification based on where the optimization takes place. This classification goes beyond client-based heuristics to investigate the use of server- and network-assisted architectures and of new application- and transport-layer protocols. In addition, we outline the major challenges currently arising in the field of multimedia delivery, which will be highly relevant in the coming years.
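    As a minimal illustration of the client-side heuristic described above, the sketch below picks the next representation from an assumed bitrate ladder based on the estimated throughput and steps down when the playout buffer runs low. The ladder, thresholds, and safety margin are illustrative assumptions, not taken from any specific HAS client.

```python
# Minimal sketch of a throughput- and buffer-based HAS adaptation heuristic.
BITRATES_KBPS = [300, 750, 1500, 3000, 6000]   # assumed representation ladder
LOW_BUFFER_S = 5.0      # below this, prioritise avoiding playout freezes
SAFETY_MARGIN = 0.8     # use only 80% of the estimated bandwidth

def select_quality(estimated_kbps, buffer_level_s):
    """Return the index of the representation to request for the next segment."""
    budget = estimated_kbps * SAFETY_MARGIN
    # Highest bitrate that fits within the throughput budget.
    level = 0
    for i, rate in enumerate(BITRATES_KBPS):
        if rate <= budget:
            level = i
    # When the buffer is nearly empty, drop one level to avoid a freeze.
    if buffer_level_s < LOW_BUFFER_S and level > 0:
        level -= 1
    return level

# Example: a healthy buffer vs. a sudden bandwidth drop with a draining buffer.
print(select_quality(estimated_kbps=4000, buffer_level_s=12.0))  # -> 3 (3000 kbps)
print(select_quality(estimated_kbps=900,  buffer_level_s=3.0))   # -> 0 (300 kbps)
```

    Server- and network-assisted approaches surveyed in the article move parts of this decision out of the client, for example by signalling per-client bitrate recommendations from the network.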