
    Lifecycle-Aware Online Video Caching

    The current explosion of video traffic compels service providers to deploy caches at edge networks. Nowadays, most caching systems store data with a high programming voltage corresponding to the largest possible ‘expiry date’, typically on the order of years, which maximizes cache damage. However, popular videos rarely exhibit lifecycles longer than a couple of months, so the programming voltage can instead be adapted to fit the lifecycle and mitigate cache damage accordingly. In this paper, we propose LiA-cache, a Lifecycle-Aware caching policy for online videos. LiA-cache finds near-optimal caching retention times and cache eviction policies by jointly optimizing traffic delivery cost and cache damage cost. We first investigate temporal patterns of video access in a real-world dataset covering 10 million online videos, collected by one of the largest mobile network operators in the world. We then cluster the videos by their access lifecycles and integrate the clustering into a general model of the caching system: LiA-cache analyzes videos and caches them according to their cluster label. Compared to other popular policies in real-world scenarios, LiA-cache reduces cache damage by up to 90% while keeping a cache hit ratio close to that of a policy relying purely on video popularity.
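    A minimal sketch of the lifecycle-aware idea follows, assuming hypothetical cluster labels and illustrative retention times; the paper's actual clustering and joint cost optimization are more involved than this.

```python
# Hypothetical sketch of lifecycle-aware caching: retention time is set
# per lifecycle cluster instead of a fixed multi-year maximum.
import time

# Assumed cluster labels -> retention times (days); illustrative values only.
RETENTION_DAYS = {"flash": 7, "weekly": 30, "seasonal": 90}

class LiACacheSketch:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # video_id -> expiry timestamp

    def admit(self, video_id, cluster_label, now=None):
        now = now or time.time()
        self._expire(now)
        if len(self.store) >= self.capacity:
            # Evict the entry closest to expiry (shortest remaining lifecycle).
            victim = min(self.store, key=self.store.get)
            del self.store[victim]
        ttl = RETENTION_DAYS.get(cluster_label, 7) * 86400
        # Shorter retention -> lower programming voltage -> less cache damage.
        self.store[video_id] = now + ttl

    def _expire(self, now):
        self.store = {vid: exp for vid, exp in self.store.items() if exp > now}

    def hit(self, video_id, now=None):
        now = now or time.time()
        return video_id in self.store and self.store[video_id] > now
```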

    Reinforcement learning for proactive content caching in wireless networks

    Proactive content caching (PC) at the edge of wireless networks, that is, at the base stations (BSs) and/or user equipments (UEs), is a promising strategy for handling the ever-growing mobile data traffic and improving the quality of service of content delivery over wireless networks. However, factors such as limited storage capacity and time variations in both wireless channel conditions and content demand profiles pose challenges that must be addressed to realise the benefits of PC at the wireless edge. This thesis develops PC solutions that address these challenges. We consider PC directly at UEs equipped with finite-capacity cache memories, within the framework of a dynamic system in which mobile users randomly request contents from a non-stationary content library: new contents are added to the library over time, and each content remains in the library for a random lifetime within which it may be requested. Contents are delivered through wireless channels of time-varying quality, and whenever contents are transmitted, the system incurs a transmission cost that depends on the number of bits downloaded and the channel quality of the receiving user(s) at that time. We formulate each problem as a Markov decision process with the objective of minimising the long-term expected average cost on the system. We then use reinforcement learning (RL) to solve this highly challenging problem with prohibitively large state and action spaces. In particular, we employ policy approximation techniques for compact representation of complex policy structures, and policy gradient RL methods to train the system. In a single-user setting, we show the optimality of a threshold-based PC scheme that adapts to the system dynamics. We use this result to characterise and design a multicast-aware PC scheme, based on a deep RL framework, for a multi-user setting. Extensive numerical simulations show not only significant improvements over state-of-the-art reactive content delivery approaches, but also the near-optimality of the proposed RL solutions, based on comparisons with lower bounds.
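    As a rough illustration of the threshold structure only, not the thesis's actual policy or cost model: the sketch below prefetches content when the instantaneous channel quality exceeds a threshold, and a crude grid search stands in for policy-gradient training. The channel model, request probability, and cost function are all assumptions.

```python
# Illustrative threshold-based proactive caching: prefetch a content item
# only when channel quality is good enough that the prefetch cost beats
# an on-demand delivery at whatever quality prevails later.
import random

def transmission_cost(bits, channel_quality):
    # Cost grows with bits and shrinks with channel quality (assumed model).
    return bits / channel_quality

def simulate(threshold, steps=10_000, bits=1.0):
    total_cost, cached = 0.0, False
    for _ in range(steps):
        q = random.uniform(0.2, 1.0)        # time-varying channel quality
        requested = random.random() < 0.05  # user requests the content
        if requested:
            if not cached:
                total_cost += transmission_cost(bits, q)  # reactive delivery
            cached = False                  # content consumed; new item arrives
        elif not cached and q > threshold:
            total_cost += transmission_cost(bits, q)      # proactive prefetch
            cached = True
    return total_cost

# Crude search over thresholds stands in for policy-gradient training.
best = min((simulate(t), t) for t in [0.5, 0.6, 0.7, 0.8, 0.9])
print(f"best threshold ~ {best[1]} with cost {best[0]:.1f}")
```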

    From Traditional Adaptive Data Caching to Adaptive Context Caching: A Survey

    Context data is in demand more than ever with the rapid increase in the development of context-aware Internet of Things applications. Research in context and context-awareness is being conducted to broaden its applicability in light of many practical and technical challenges. One of these challenges is maintaining performance when responding to a large number of context queries. Context Management Platforms that infer and deliver context to applications measure this problem using Quality of Service (QoS) parameters. Although caching is a proven way to improve QoS, the transiency of context, together with features such as the variability and heterogeneity of context queries, poses an additional real-time cost-management problem. This paper presents a critical survey of the state of the art in adaptive data caching, with the objective of developing a body of knowledge on cost- and performance-efficient adaptive caching strategies. We comprehensively survey a large number of research publications and evaluate, compare, and contrast different techniques, policies, approaches, and schemes in adaptive caching. Our critical analysis is motivated by the focus on adaptively caching context as a core research problem. A formal definition of adaptive context caching is then proposed, followed by the identified features and requirements of a well-designed, objectively optimal adaptive context caching strategy. Comment: This paper was under review with the ACM Computing Surveys journal at the time of publishing on arXiv.org.
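    To make the cost-versus-performance trade-off concrete, here is a hypothetical admission rule, not the survey's formal definition: a transient context item is cached only while its expected retrieval savings over its validity window exceed the cost of holding it. All field names and values are illustrative assumptions.

```python
# Hypothetical adaptive context-caching admission rule: cache a transient
# context item only if expected retrieval savings over its remaining
# lifetime outweigh the cost of holding it.
from dataclasses import dataclass

@dataclass
class ContextItem:
    key: str
    expected_queries_per_sec: float  # demand estimate for this context
    refresh_cost: float              # cost to re-derive/infer the context
    lifetime_sec: float              # validity window before it goes stale
    storage_cost_per_sec: float      # cost of keeping it cached

def should_cache(item: ContextItem) -> bool:
    expected_savings = (item.expected_queries_per_sec
                        * item.lifetime_sec
                        * item.refresh_cost)
    holding_cost = item.storage_cost_per_sec * item.lifetime_sec
    return expected_savings > holding_cost

hot = ContextItem("room42/occupancy", 5.0, 0.8, 30.0, 0.01)
cold = ContextItem("truck17/route", 0.001, 0.8, 30.0, 0.01)
print(should_cache(hot), should_cache(cold))  # True False
```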

    Cross-Edge Orchestration of Serverless Functions with Probabilistic Caching

    Serverless edge computing adopts an event-based paradigm that provides back-end services on an as-used basis, resulting in efficient resource utilization. To improve end-to-end latency and revenue, service providers need to optimize the number and placement of serverless containers while accounting for the system cost incurred by provisioning: frequently creating and destroying containers not only increases the system cost but also degrades responsiveness due to the cold-start process. Function caching is a common approach to mitigating the cold-start issue, but it requires extra hardware resources and hence incurs extra system cost. Furthermore, the dynamic and bursty nature of serverless invocations remains under-explored. It is therefore vital for service providers to adopt a context-aware request distribution and container caching policy for serverless edge computing. In this paper, we study the request distribution and container caching problem in serverless edge computing. We prove that the problem is NP-hard, so a globally optimal solution is difficult to find. We jointly consider the distributed and resource-constrained nature of edge computing and propose an optimized request distribution algorithm that adapts to the dynamics of serverless invocations with a theoretical performance guarantee. We also propose a context-aware probabilistic caching policy that incorporates a number of characteristics of serverless invocations. Through simulation and implementation results, we demonstrate the superiority of the proposed algorithm, which outperforms existing caching policies by reducing the overall system cost and cold-start frequency by up to 62.1% and 69.1%, respectively.
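    A minimal sketch in the spirit of the probabilistic caching idea follows; the paper's policy incorporates more invocation characteristics, and the EWMA estimator, weights, and names here are assumptions.

```python
# Illustrative probabilistic container caching: after each scheduling slot,
# keep the container warm with a probability that grows with observed
# invocation frequency, so bursty functions avoid cold starts while
# rarely used ones release edge resources.
import random
from collections import defaultdict

class ProbabilisticKeeper:
    def __init__(self, alpha=0.8):
        self.alpha = alpha            # EWMA weight for invocation rate
        self.rate = defaultdict(float)

    def observe(self, fn, invoked: bool):
        # Exponentially weighted estimate of per-slot invocation frequency.
        self.rate[fn] = self.alpha * self.rate[fn] + (1 - self.alpha) * invoked

    def keep_warm(self, fn) -> bool:
        # Cache with probability equal to the estimated invocation rate,
        # clamped so even hot functions are occasionally recycled.
        p = min(self.rate[fn], 0.95)
        return random.random() < p

keeper = ProbabilisticKeeper()
for t in range(100):
    invoked = (t % 3 == 0)            # a moderately bursty function
    keeper.observe("thumbnail", invoked)
print(keeper.keep_warm("thumbnail"))
```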