
    Audience-retention-rate-aware caching and coded video delivery with asynchronous demands

    Most of the current literature on coded caching focuses on a static scenario, in which a fixed number of users synchronously place their requests from a content library, and the performance is measured by the latency in satisfying all of these requests. In practice, however, users start watching an online video content asynchronously over time, and often abort watching a video before it is completed. The latter behaviour is captured by the notion of audience retention rate, which measures the portion of a video content watched on average. To bring coded caching one step closer to practice, this paper considers asynchronous user demands, allowing demands to arrive randomly over time, and takes into account both the popularity of video files and the audience retention rates. A decentralized partial coded delivery (PCD) scheme is proposed, together with two cache allocation schemes, namely homogeneous cache allocation (HoCA) and heterogeneous cache allocation (HeCA), which allocate users’ caches among different chunks of the video files in the library. Numerical results validate that the proposed PCD scheme, with either HoCA or HeCA, outperforms conventional uncoded caching as well as state-of-the-art decentralized caching schemes, which consider only the file popularities and are designed for synchronous demand arrivals. An information-theoretic lower bound on the average delivery rate is also presented.
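    The decentralized, retention-aware placement described above can be illustrated with a small sketch: each user independently fills its cache with randomly chosen video chunks, biased by per-chunk weights that could encode popularity and audience retention (earlier chunks of a video are watched more often, so they would receive higher weight). The function and weighting scheme below are illustrative assumptions, not the paper's actual HoCA/HeCA allocations.

```python
import random

def decentralized_placement(num_files, chunks_per_file, cache_chunks, weights, seed=0):
    """Illustrative decentralized placement: a user independently samples
    `cache_chunks` distinct (file, chunk) pairs, biased by weights[f][c],
    which could encode popularity and audience retention per chunk.
    (A sketch under assumed notation, not the paper's scheme.)"""
    rng = random.Random(seed)
    population = [(f, c) for f in range(num_files) for c in range(chunks_per_file)]
    probs = [weights[f][c] for f, c in population]
    total = sum(probs)
    probs = [p / total for p in probs]
    # Weight-biased sampling without replacement via repeated draws.
    cached = set()
    while len(cached) < cache_chunks:
        r = rng.random()
        acc = 0.0
        for item, p in zip(population, probs):
            acc += p
            if r <= acc:
                cached.add(item)
                break
    return cached
```

With such a placement, each chunk ends up cached by a random subset of users, which is what enables XOR-coded multicast opportunities during delivery.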

    Centralized coded caching of correlated contents

    Coded caching and delivery is studied taking into account the correlations among the contents in the library. Correlations are modeled as common parts shared by multiple contents; that is, each file in the database is composed of a group of subfiles, where each subfile is shared by a different subset of files. The number of files that include a certain subfile is defined as the level of commonness of this subfile. First, a correlation-aware uncoded caching scheme is proposed, and it is shown that the optimal placement for this scheme gives priority to the subfiles with the highest levels of commonness. Then, a correlation-aware coded caching scheme is presented, and the cache capacity allocated to subfiles with different levels of commonness is optimized to minimize the delivery rate. The proposed correlation-aware coded caching scheme is shown to remarkably outperform state-of-the-art correlation-ignorant solutions, indicating the benefits of exploiting content correlations in coded caching and delivery networks.
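    The commonness-first placement rule for the uncoded scheme lends itself to a simple greedy sketch: rank subfiles by their level of commonness and cache them in that order until the cache is full, since one cached copy of a highly common subfile serves requests for every file that contains it. The data layout below is an assumed illustration, not the paper's formulation.

```python
def correlation_aware_placement(subfiles, cache_size):
    """Illustrative greedy placement for correlation-aware uncoded caching.
    `subfiles` maps a subfile id to a (size, commonness) pair; subfiles
    shared by more files (higher commonness) are cached first.
    (A sketch under assumed inputs, not the paper's exact scheme.)"""
    ranked = sorted(subfiles.items(), key=lambda kv: kv[1][1], reverse=True)
    cached, used = [], 0
    for sid, (size, _commonness) in ranked:
        if used + size <= cache_size:
            cached.append(sid)
            used += size
    return cached
```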

    Soft-TTL: Time-Varying Fractional Caching

    Standard Time-to-Live (TTL) cache management prescribes the storage of entire files, or possibly fractions thereof, for a given amount of time after a request. As a generalization of this approach, this work proposes the storage of a time-varying, diminishing fraction of a requested file. Accordingly, the cache progressively evicts parts of the file over an interval of time following a request. The strategy, referred to as soft-TTL, is justified by the fact that traffic traces are often characterized by arrival processes that display a decreasing, but non-negligible, probability of observing a request as the time elapsed since the last request increases. An optimization-based analysis of soft-TTL is presented, demonstrating the important role played by the hazard function of the inter-arrival request process, which measures the likelihood of observing a request as a function of the time elapsed since the most recent request.
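    The hazard function mentioned above is the standard quantity h(t) = f(t) / (1 - F(t)). The sketch below computes it for a Weibull inter-arrival distribution, whose hazard decreases in t when the shape parameter is below one, which is the regime where keeping a diminishing cached fraction is intuitively justified. The choice of distribution and the normalized "fraction kept" profile are illustrative assumptions, not taken from the paper.

```python
import math

def weibull_hazard(t, k, lam):
    """Hazard rate h(t) = f(t) / (1 - F(t)) of a Weibull(k, lam)
    inter-arrival distribution; for shape k < 1 the hazard is decreasing
    in t. (Illustrative choice of distribution, not from the paper.)"""
    return (k / lam) * (t / lam) ** (k - 1)

def soft_ttl_fraction(t, k, lam, horizon):
    """A toy soft-TTL profile: keep a fraction of the file proportional to
    the hazard at time t since the last request, normalized by the hazard
    at a small reference time. Purely illustrative."""
    t0 = 0.1 * horizon
    return min(1.0, weibull_hazard(t, k, lam) / weibull_hazard(t0, k, lam))
```

For k < 1, `soft_ttl_fraction` decreases as the time since the last request grows, mirroring the progressive eviction the abstract describes.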

    Cache-Aided Interactive Multiview Video Streaming in Small Cell Wireless Networks

    The emergence of interactive multimedia applications with high data rate and low latency requirements has led to a drastic increase in the video data traffic over wireless cellular networks. Locally caching some of the contents at the small base stations of a macro-cell is a promising technology to cope with the increasing pressure on the backhaul connections, and to reduce the delay for demanding video applications. In this work, delivery of an interactive multiview video over a heterogeneous cellular network is studied. Unlike existing works that ignore the video characteristics, the caching and scheduling policies are jointly optimized, taking into account the quality of the delivered video and the video delivery time constraints. We formulate our joint caching and scheduling problem via submodular set function maximization and propose efficient greedy approaches to find a well performing joint caching and scheduling policy. Numerical evaluations show that our solution significantly outperforms benchmark algorithms based on popularity caching and independent scheduling.
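    The greedy approach for submodular set function maximization referenced above follows a well-known template: repeatedly add the element with the largest marginal gain, which achieves the classic 1 - 1/e approximation for monotone submodular objectives under a cardinality constraint. The generic sketch below shows the template; the objective used in the paper's joint caching and scheduling problem is not reproduced here.

```python
def greedy_submodular(ground_set, value_fn, budget):
    """Generic greedy for monotone submodular maximization under a
    cardinality constraint: at each step, pick the element with the
    largest marginal gain. `value_fn` maps a set of chosen elements to
    its objective value. (A template, not the paper's exact objective.)"""
    chosen = set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for e in ground_set - chosen:
            gain = value_fn(chosen | {e}) - value_fn(chosen)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no remaining element improves the objective
            break
        chosen.add(best)
    return chosen
```

A coverage-style objective (e.g. the number of user demands served by the cached views) is a typical monotone submodular function to which this template applies.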

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.

    A rolling-horizon dynamic programming approach for collaborative caching

    In this paper, we study the online collaborative content caching problem from a network economics point of view. The network consists of small cell base stations (SCBSs) with limited cache capacity and a macrocell base station (MCBS). SCBSs are connected with their neighboring SCBSs through high-speed links and collaboratively decide what data to cache. Contents are placed at the SCBSs "free of charge" during off-peak hours and updated during the day according to the content demands, taking the network usage cost into account. We first model the caching optimization as a finite-horizon Markov Decision Process that incorporates an auto-regressive model to forecast the evolution of the content demands. The problem is NP-hard, and the optimal solution can be found only for a small number of base stations and contents. To derive close-to-optimal solutions for larger networks, we propose the rolling horizon method, which approximates the future network usage cost by considering a small decision horizon. The results show that the rolling horizon approach significantly outperforms comparison schemes. Finally, we examine two simplifications of the problem to accelerate the solution: (a) we restrict the number of content replicas in the network, and (b) we limit the allowed content replacements. The results show that the rolling horizon scheme can reduce the communication cost by over 84% compared to running Least Recently Used (LRU) updates on offline schemes. The results also shed light on the tradeoff between the efficiency of the caching policy and the time needed to run the online algorithm.
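    The rolling-horizon idea can be sketched generically: at each decision epoch, exhaustively search a short lookahead of the MDP for the cheapest action sequence, apply only its first action, then re-plan from the resulting state. All names below are generic placeholders for an assumed state/action/cost interface, not the paper's notation or its demand-forecasting model.

```python
def rolling_horizon(state, steps, horizon, actions, transition, cost):
    """Illustrative rolling-horizon control: at each of `steps` epochs,
    search an exhaustive lookahead of depth `horizon` for the action
    sequence with the lowest total cost, apply only its first action,
    and re-plan. (A generic sketch, not the paper's algorithm.)"""
    def lookahead(s, depth):
        if depth == 0:
            return 0.0, None
        best_cost, best_a = float("inf"), None
        for a in actions(s):
            future, _ = lookahead(transition(s, a), depth - 1)
            c = cost(s, a) + future
            if c < best_cost:
                best_cost, best_a = c, a
        return best_cost, best_a

    trajectory, total = [], 0.0
    for _ in range(steps):
        _, a = lookahead(state, horizon)
        total += cost(state, a)
        trajectory.append(a)
        state = transition(state, a)
    return trajectory, total
```

The lookahead cost grows exponentially in the horizon length, which is exactly why a small decision horizon (and the replica/replacement restrictions mentioned above) keeps the online computation tractable.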