
    Audience-retention-rate-aware caching and coded video delivery with asynchronous demands

    Most of the current literature on coded caching focuses on a static scenario, in which a fixed number of users synchronously place their requests from a content library, and the performance is measured by the latency in satisfying all of these requests. In practice, however, users start watching online video content asynchronously over time, and often abort watching a video before it is completed. The latter behaviour is captured by the notion of audience retention rate, which measures the portion of a video that is watched on average. In order to bring coded caching one step closer to practice, this paper considers asynchronous user demands, allowing requests to arrive randomly over time, and takes into account both the popularity of the video files and their audience retention rates. A decentralized partial coded delivery (PCD) scheme is proposed, together with two cache allocation schemes, namely homogeneous cache allocation (HoCA) and heterogeneous cache allocation (HeCA), which allocate users' cache capacity among the different chunks of the video files in the library. Numerical results validate that the proposed PCD scheme, with either HoCA or HeCA, outperforms conventional uncoded caching as well as state-of-the-art decentralized caching schemes, which consider only the file popularities and are designed for synchronous demand arrivals. An information-theoretic lower bound on the average delivery rate is also presented.
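To make the decentralized placement idea concrete, the following sketch is a toy, two-user view of retention-aware cache allocation; it is not the paper's PCD/HoCA/HeCA algorithm. The library size, popularity values, retention profile and the proportional allocation rule are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical toy numbers: N files split into C chunks; each user caches M chunk-equivalents.
N, C, M = 4, 10, 8
popularity = np.array([0.4, 0.3, 0.2, 0.1])   # assumed file popularities
retention = np.linspace(1.0, 0.2, C)          # assumed retention: later chunks watched less often

# Retention-weighted demand for every (file, chunk) pair, normalized to sum to one.
weights = popularity[:, None] * retention[None, :]
weights /= weights.sum()

# Heterogeneous-style allocation: cache a q-fraction of every chunk, proportional to its demand.
q_het = np.minimum(1.0, M * weights)
# Homogeneous-style allocation: spread the same cache budget evenly over all chunks.
q_hom = np.full((N, C), M / (N * C))

def delivery_stats(q):
    """For a randomly requested chunk (two-user view): fraction served from the local cache,
    and fraction cached at the peer but not at the requester, i.e. deliverable as coded (XOR) content."""
    local = (weights * q).sum()
    coded = (weights * q * (1.0 - q)).sum()
    return round(float(local), 3), round(float(coded), 3)

print("heterogeneous allocation (local, coded):", delivery_stats(q_het))
print("homogeneous allocation  (local, coded):", delivery_stats(q_hom))
```

Under independent random placement with a fraction q of each chunk cached per user, q(1-q) is the expected portion held by the peer but missing at the requester, which is exactly the part a coded multicast can serve to both users at once.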

    Generalized Degrees of Freedom of the Symmetric Cache-Aided MISO Broadcast Channel with Partial CSIT

    We consider the cache-aided MISO broadcast channel (BC) in which a multi-antenna transmitter serves K single-antenna receivers, each equipped with a cache memory. The transmitter has access to partial knowledge of the channel state information. For a symmetric setting, in terms of channel strength levels, partial channel knowledge levels and cache sizes, we characterize the generalized degrees of freedom (GDoF) up to a constant multiplicative factor. The achievability scheme exploits the interplay between spatial multiplexing gains and the coded multicasting gain. For the converse, a cut-set-based argument is used in conjunction with a GDoF outer bound for a parallel MISO BC under channel uncertainty. We further show that the characterized order-optimal GDoF is also attained in a decentralized setting, where no coordination is required for content placement in the caches.

    Online Coded Caching

    We consider a basic content distribution scenario consisting of a single origin server connected through a shared bottleneck link to a number of users, each equipped with a cache of finite memory. The users issue a sequence of content requests from a set of popular files, and the goal is to operate the caches as well as the server such that these requests are satisfied with the minimum number of bits sent over the shared link. Assuming a basic Markov model for renewing the set of popular files, we approximately characterize the optimal long-term average rate of the shared link. We further prove that the optimal online scheme has approximately the same performance as the optimal offline scheme, in which the cache contents can be updated based on the entire set of popular files before each new request. To support these theoretical results, we propose an online coded caching scheme termed coded least-recently sent (LRS) and simulate it for a demand time series derived from the dataset made available by Netflix for the Netflix Prize. For this time series, we show that the proposed coded LRS algorithm significantly outperforms the popular least-recently used (LRU) caching algorithm.
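For reference, the LRU baseline mentioned above is easy to state in code. The sketch below is a plain, uncoded cache simulator with two eviction rules: standard LRU, and a naive "least-recently sent" variant that only refreshes recency when the server actually transmits a file. It illustrates the bookkeeping difference only; it is not the paper's coded LRS scheme, which additionally codes transmissions across users' caches. The request trace and capacity are made up.

```python
from collections import OrderedDict

class LRUCache:
    """Evict the file whose most recent request is oldest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, file_id):
        hit = file_id in self.store
        if hit:
            self.store.move_to_end(file_id)        # refresh recency on every request
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)     # evict least recently used
            self.store[file_id] = True
        return hit

class LRSCache(LRUCache):
    """Toy 'least-recently sent' variant: recency is refreshed only on a miss,
    i.e. when the server actually sends the file over the shared link."""
    def request(self, file_id):
        hit = file_id in self.store
        if not hit:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)     # evict least recently sent
            self.store[file_id] = True             # insertion records the send time
        return hit

# Hypothetical request trace over a slowly changing set of popular files.
trace = [1, 2, 3, 1, 2, 4, 1, 5, 2, 1, 6, 2, 1, 3, 2]
for cls in (LRUCache, LRSCache):
    cache, hits = cls(capacity=3), 0
    for f in trace:
        hits += cache.request(f)
    print(cls.__name__, "hit rate:", round(hits / len(trace), 2))
```

In this uncoded toy, LRU typically wins; the gains reported in the paper come from coding transmissions across the users' cache contents, which this sketch deliberately omits.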

    Exploiting Tradeoff Between Transmission Diversity and Content Diversity in Multi-Cell Edge Caching

    Caching in multi-cell networks faces a well-known dilemma: cache the same contents at multiple edge nodes (ENs) to enable transmission cooperation/diversity and higher transmission efficiency, or cache different contents to enable content diversity and a higher cache hit rate. In this work, we introduce partition-based caching to exploit the tradeoff between transmission diversity and content diversity in a multi-cell edge caching network with a single user. The performance is characterized by the system average outage probability, which can be viewed as the sum of the cache-hit outage probability and the cache-miss probability. We show that (i) in the low signal-to-noise ratio (SNR) region, the ENs are encouraged to cache larger fractions of the most popular files, so as to better exploit the transmission diversity for the most popular content; and (ii) in the high SNR region, the ENs are encouraged to cache more files, with a smaller fraction of each, so as to better exploit the content diversity.
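A toy calculation makes the dilemma concrete. The model below is an assumption-laden sketch, not the paper's analysis: it takes a Zipf popularity profile, approximates the cache-hit outage as p_out^d when a cached file is backed by d cooperating ENs (diversity order d), counts a plain miss when the file is not cached at all, and then sweeps how many distinct files the ENs cache in total.

```python
import numpy as np

# Hypothetical setup: K cooperating edge nodes, each storing S file-equivalents,
# requests drawn from N files with Zipf(alpha) popularity.
K, S, N, alpha = 3, 4, 20, 0.8
pop = np.arange(1, N + 1) ** (-alpha)
pop /= pop.sum()

def avg_outage(m, p_out):
    """Cache the m most popular files within the total budget K*S;
    assume hit-outage p_out**d with diversity order d = floor(K*S/m), capped at K."""
    d = min(K, max(1, (K * S) // m))
    hit_outage = pop[:m].sum() * (p_out ** d)   # cached, but all d cooperative links fail
    miss = pop[m:].sum()                        # not cached at any EN
    return hit_outage + miss

for p_out in (0.5, 0.05):                       # assumed per-link outage: low-SNR vs high-SNR regime
    best_m = min(range(1, K * S + 1), key=lambda m: avg_outage(m, p_out))
    print(f"p_out={p_out}: best to cache the {best_m} most popular files "
          f"(average outage {avg_outage(best_m, p_out):.3f})")
```

With these numbers the sweep picks fewer files (so each cached file enjoys a higher diversity order) when the per-link outage is high, as at low SNR, and all K*S files (maximum content diversity) when the per-link outage is small, matching the qualitative conclusion above.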

    Cooperative Local Caching under Heterogeneous File Preferences

    Local caching is an effective scheme for leveraging the memory of mobile terminals (MTs) and short-range communications to save bandwidth and reduce the download delay in cellular communication systems. Specifically, the MTs first cache files in their local memories during off-peak hours and then exchange the requested files with each other in the vicinity during peak hours. However, prior works largely overlook MTs' heterogeneity in file preferences and their selfish behaviours. In this paper, we categorize the MTs into different interest groups according to their file preferences. Each group of MTs aims to increase the probability of successful file discovery from the neighbouring MTs (from the same or different groups). Hence, we define each group's utility as the probability of successfully discovering the requested file at a neighbouring MT, which is to be maximized by deciding the caching strategies of the different groups. By modelling MTs' mobility as homogeneous Poisson point processes (HPPPs), we analytically characterize the MTs' utilities in closed form. We first consider the fully cooperative case, where a centralizer helps all groups to make caching decisions. We formulate the problem as a weighted-sum utility maximization problem, through which the maximum utility trade-offs of the different groups are characterized. Next, we study two benchmark cases under selfish caching, namely partial cooperation and no cooperation, with and without inter-group file sharing, respectively. The optimal caching distributions for these two cases are derived. Finally, numerical examples are presented to compare the utilities in the different cases and to show the effectiveness of fully cooperative local caching over the two benchmark cases.
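The closed-form utilities alluded to above rest on a standard HPPP property: if neighbouring MTs form an HPPP of density λ and each caches file f independently with probability q_f, the MTs holding f form a thinned HPPP of density λ q_f, so the probability of discovering f within range r is 1 - exp(-λ q_f π r²). The single-group sketch below uses only this textbook expression; the density, radius, Zipf preferences, unit cache size and the SLSQP optimizer are illustrative assumptions, and the paper's multi-group weighted-sum formulation is richer.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical parameters: helper-MT density (per m^2), discovery radius (m), Zipf preferences.
lam, r, N, alpha = 0.01, 30.0, 10, 1.0
pref = np.arange(1, N + 1) ** (-alpha)
pref /= pref.sum()
area = np.pi * r ** 2

def utility(q):
    """Probability that the requested file is found at some neighbour within range r,
    assuming the neighbours caching file f form a thinned HPPP of density lam * q[f]."""
    return float(np.dot(pref, 1.0 - np.exp(-lam * area * q)))

# Unit cache per MT (assumed): the caching distribution q lives on the probability simplex.
constraints = ({"type": "eq", "fun": lambda q: q.sum() - 1.0},)
res = minimize(lambda q: -utility(q), x0=np.full(N, 1.0 / N),
               bounds=[(0.0, 1.0)] * N, constraints=constraints, method="SLSQP")

print("optimized caching distribution:", np.round(res.x, 3))
print("discovery probability:", round(utility(res.x), 3),
      "vs everyone caching only the most popular file:", round(utility(np.eye(N)[0]), 3))
```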