34 research outputs found

    Effect of Number of Users in Multi-level Coded Caching

    It has recently been established that the joint design of content delivery and storage (coded caching) can significantly improve performance over conventional caching. This has also been extended, through several models, to the case where content has non-uniform popularity. In this paper we focus on a multi-level popularity model, in which content is divided into levels based on popularity. We consider two extreme cases of user distribution across caches for the multi-level popularity model: a single user per cache (single-user setup) versus a large number of users per cache (multi-user setup). When the capacity approximation is required to be universal (independent of the number of popularity levels, as well as of the number of users, files, and caches), we demonstrate a dichotomy in the order-optimal strategies for these two extreme cases. In the multi-user case, sharing memory among the levels is order-optimal, whereas in the single-user case, clustering popularity levels and allocating all the memory to them is the order-optimal scheme. In proving these results, we develop new information-theoretic lower bounds for the problem.
    Comment: 13 pages; 2 figures. A shorter version is to appear in IEEE ISIT 201
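    For context, the baseline single-level (uniform-popularity) coded caching trade-off that this line of work builds on is easy to evaluate numerically. The sketch below compares conventional uncoded delivery with centralized coded delivery for K caches, N files, and per-cache memory M; it only illustrates the single-level formula, not the multi-level scheme studied in this paper, and the parameter values are made up.

        # Minimal sketch: baseline single-level coded caching trade-off, for context only.
        def conventional_rate(K, N, M):
            # Uncoded delivery: each of K users receives the uncached (1 - M/N)
            # fraction of its requested file, with no multicasting gain.
            return K * (1 - M / N)

        def coded_rate(K, N, M):
            # Centralized coded caching: the multicasting gain 1 + K*M/N divides
            # the conventional rate (exact at M = t*N/K for integer t).
            return K * (1 - M / N) / (1 + K * M / N)

        K, N = 20, 100  # hypothetical numbers of caches and files
        for M in (5, 25, 50):
            print(f"M={M:3d}  conventional={conventional_rate(K, N, M):5.2f}  coded={coded_rate(K, N, M):5.2f}")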

    Capacity of Cellular Networks with Femtocache

    The capacity of the next generation of cellular networks using femtocaches is studied when multihop communications and decentralized cache placement are considered. We show that the storage capability of future network User Terminals (UTs) can be effectively exploited to increase capacity under random decentralized uncoded caching. We further propose a random decentralized coded caching scheme that achieves higher capacity than random decentralized uncoded caching. The result shows that coded caching, which is suitable for systems with limited storage capabilities, can improve the capacity of cellular networks by a factor of log(n), where n is the number of nodes served by the femtocache.
    Comment: 6 pages, 2 figures, presented at Infocom Workshops on 5G and beyond, San Francisco, CA, April 201
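    As a side note, the random decentralized placement referred to here is usually of the Maddah-Ali/Niesen type, in which every node independently caches a random fraction of the bits of each file. The snippet below sketches that placement step under this assumption, with made-up file and cache sizes; it is not the paper's femtocache delivery scheme.

        import random

        def decentralized_placement(num_files, bits_per_file, cache_bits):
            # Minimal sketch of random decentralized placement: a node independently
            # marks a random cache_bits / (num_files * bits_per_file) fraction of
            # each file's bits as cached (assumed placement model, for illustration).
            frac = cache_bits / (num_files * bits_per_file)
            return {f: {b for b in range(bits_per_file) if random.random() < frac}
                    for f in range(num_files)}

        # Hypothetical example: 4 files of 1000 bits each, a cache holding 1000 bits in total.
        cache = decentralized_placement(num_files=4, bits_per_file=1000, cache_bits=1000)
        print({f: len(bits) for f, bits in cache.items()})  # roughly 250 bits per file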

    Cache-Enabled Broadcast Packet Erasure Channels with State Feedback

    We consider a cache-enabled K-user broadcast erasure packet channel in which a server with a library of N files wishes to deliver a requested file to each user, each of whom is equipped with a cache of finite memory M. Assuming that the transmitter has state feedback and that user caches can be filled reliably during off-peak hours by decentralized cache placement, we characterize the optimal rate region as a function of the memory size and the erasure probability. The proposed delivery scheme, based on the scheme of Gatzianas et al., exploits the receiver side information established during the placement phase. Our results quantify the net benefits of decentralized coded caching in the presence of erasures. State feedback is found to be especially useful when the erasure probability is large and/or the normalized memory size is small.
    Comment: 8 pages, 4 figures, to be presented at the 53rd Annual Allerton Conference on Communication, Control, and Computing, IL, US
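    To picture how state feedback lets the server exploit receiver side information (the idea behind Gatzianas-et-al.-style schemes), consider a toy two-user example: a packet that was erased at its intended receiver but overheard by the other user can later be XORed with a packet in the opposite situation, so a single coded retransmission serves both users. The sketch below only illustrates this mechanism with made-up payloads; it is not the paper's delivery scheme or its rate region.

        # Toy illustration of feedback-aided coded retransmission for two users.
        # p1 was erased at user 1 but overheard by user 2; p2 is the opposite case.
        def xor_bytes(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        p1 = b"\x0f\xa0\x13\x37"  # hypothetical packet payloads
        p2 = b"\xc3\x55\x21\x04"

        coded = xor_bytes(p1, p2)                   # one retransmission serves both users
        recovered_by_user1 = xor_bytes(coded, p2)   # user 1 already overheard p2
        recovered_by_user2 = xor_bytes(coded, p1)   # user 2 already overheard p1
        assert recovered_by_user1 == p1 and recovered_by_user2 == p2
        print("one coded packet delivered both erased packets")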

    Adaptive Delivery in Caching Networks

    The problem of content delivery in caching networks is investigated for scenarios in which multiple users request identical files. Redundant user demands are likely when the file popularity distribution is highly non-uniform or the user demands are positively correlated. An adaptive method is proposed for the delivery of redundant demands in caching networks. Based on the redundancy pattern in the current demand vector, the proposed method decides between transmitting uncoded messages or the coded messages of [1] for delivery. Moreover, a lower bound on the delivery rate of redundant requests is derived from a cut-set bound argument. The performance of the adaptive method is investigated through numerical examples of the delivery rate for several specific demand vectors, as well as the average delivery rate of a caching network with correlated requests. The adaptive method is shown to considerably reduce the gap between the non-adaptive delivery rate and the lower bound; in some specific cases, this gap shrinks by almost 50% for the average rate.
    Comment: 8 pages, 8 figures. Submitted to IEEE Transactions on Communications in 2015. A short version of this article was published as an IEEE Communications Letter with DOI: 10.1109/LCOMM.2016.255814
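    The adaptive idea can be pictured with a toy decision rule (not the actual criterion of the paper): given the demand vector, compare a naive uncoded benchmark that pays once per distinct requested file against the standard coded delivery rate for all-distinct demands, and use whichever is cheaper. The uncoded benchmark and all numbers below are simplifying assumptions for illustration only.

        def coded_rate(K, N, M):
            # Standard centralized coded caching delivery rate (all-distinct demands).
            return K * (1 - M / N) / (1 + K * M / N)

        def uncoded_rate(demands, N, M):
            # Naive uncoded benchmark (an assumption for illustration): broadcast the
            # uncached (1 - M/N) fraction once per *distinct* requested file.
            return len(set(demands)) * (1 - M / N)

        def adaptive_choice(demands, N, M):
            # Toy adaptive rule: transmit whichever of the two deliveries is cheaper.
            K = len(demands)
            r_coded, r_uncoded = coded_rate(K, N, M), uncoded_rate(demands, N, M)
            return ("uncoded", r_uncoded) if r_uncoded < r_coded else ("coded", r_coded)

        # Highly redundant demands favour the uncoded branch in this toy model.
        print(adaptive_choice(demands=[0, 0, 0, 1], N=10, M=2))  # ('uncoded', 1.6)
        print(adaptive_choice(demands=[0, 1, 2, 3], N=10, M=2))  # ('coded', ~1.78)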