
    Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff

    Replicating or caching popular content in memories distributed across the network is a technique to reduce peak network loads. Conventionally, the main performance gain of caching was thought to come from making part of the requested data available closer to end users. Instead, we recently showed that a much more significant gain can be achieved by using caches to create coded-multicasting opportunities, even for users with different demands, through coding across data streams. These coded-multicasting opportunities are enabled by careful content overlap at the various caches in the network, created by a central coordinating server. In many scenarios, such a central coordinating server may not be available, raising the question of whether this multicasting gain can still be achieved in a more decentralized setting. In this paper, we propose an efficient caching scheme in which the content placement is performed in a decentralized manner; in other words, no coordination is required for the content placement. Despite this lack of coordination, the proposed scheme is nevertheless able to create coded-multicasting opportunities and achieves a rate close to that of the optimal centralized scheme.
    Comment: To appear in IEEE/ACM Transactions on Networking
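
    As a concrete illustration of the scheme sketched above, here is a minimal bit-counting simulation in Python, assuming the standard decentralized construction: each user independently caches a uniformly random fraction of every file's bits, and delivery serves each user subset with one XOR-coded transmission. All function and parameter names are illustrative, and the code tallies transmitted bits rather than performing the actual XORs.

        import itertools
        import random

        def simulate_decentralized(N, K, F, M, demands, seed=0):
            # Bit-level sketch: N files of F bits, K users, each cache holds
            # M*F bits (M measured in files). Returns total transmitted bits.
            rng = random.Random(seed)
            per_file = int(M / N * F)  # bits of each file a user caches
            # Placement: independent uniform sampling -- no coordination.
            cache = [[frozenset(rng.sample(range(F), per_file)) for _ in range(N)]
                     for _ in range(K)]

            def holders(f, b):
                # Users whose cache contains bit b of file f.
                return frozenset(k for k in range(K) if b in cache[k][f])

            sent = 0
            # Delivery: for every user subset S, XOR together, for each k in S,
            # the bits of k's demanded file cached by exactly the users S \ {k}.
            # Everyone in S already caches all terms except their own, so one
            # transmission of max-piece length serves the whole subset at once.
            for s in range(K, 0, -1):
                for S in itertools.combinations(range(K), s):
                    Sset = frozenset(S)
                    pieces = [sum(1 for b in range(F)
                                  if holders(demands[k], b) == Sset - {k})
                              for k in S]
                    sent += max(pieces)  # shorter pieces are zero-padded
            return sent

        demands = [0, 1, 2, 3]  # four users, all demanding different files
        coded = simulate_decentralized(N=4, K=4, F=2000, M=2, demands=demands)
        print(coded / 2000, "vs", 4 * (1 - 2 / 4), "file units for uncoded delivery")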

    On Caching with More Users than Files

    Caching appears to be an efficient way to reduce peak-hour network traffic congestion by storing some content in users' caches without knowledge of later demands. Recently, Maddah-Ali and Niesen proposed a two-phase (placement and delivery) coded caching strategy for centralized systems (where coordination among users is possible in the placement phase) and for decentralized systems. This paper investigates the same setup under the further assumption that the number of users is larger than the number of files. The same uncoded placement strategy of Maddah-Ali and Niesen is retained, and a novel coded delivery strategy is proposed to profit from the multicasting opportunities that arise because a file may be demanded by multiple users. The proposed delivery method is proved to be optimal under the constraint of uncoded placement for centralized systems with two files; moreover, it is shown to outperform known caching strategies for both centralized and decentralized systems.
    Comment: 6 pages, 3 figures, submitted to ISIT 201
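
    To make the repeated-demand gain concrete, here is a small numeric illustration. The rate expression used below is one known from the follow-up literature on uncoded placement with redundant demands; it is not claimed to be this paper's exact delivery scheme, and the parameter choices are mine.

        from math import comb

        def rate_with_repeated_demands(K, t, r):
            # Achievable delivery rate (in file units) under Maddah-Ali/Niesen
            # uncoded placement when only r distinct files are demanded, with
            # t = K*M/N an integer. Known expression from the literature on
            # this setup, shown purely to illustrate the gain being targeted.
            return (comb(K, t + 1) - comb(K - r, t + 1)) / comb(K, t)

        K, t, N = 8, 2, 2            # 8 users, 2 files, cache size M = t*N/K = 0.5
        naive = (K - t) / (t + 1)    # delivery that ignores repeated demands
        print(naive, rate_with_repeated_demands(K, t, r=N))  # 2.0 vs ~1.286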

    Fundamental Limits of Caching

    Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term the local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared to previously known schemes. In particular, the improvement can be on the order of the number of users in the network. Moreover, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.
    Comment: To appear in IEEE Transactions on Information Theory
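
    The two gains can be read off the achievable peak-rate expressions associated with this setup: conventional uncoded caching achieves roughly K(1 - M/N) file units (local gain only), while the coded scheme achieves K(1 - M/N) / (1 + KM/N), the extra factor being the global gain driven by the cumulative cache size KM. A quick numeric comparison (parameter choices are mine):

        def uncoded_rate(N, K, M):
            # Conventional caching: only the local caching gain (1 - M/N).
            return K * (1 - M / N)

        def coded_rate(N, K, M):
            # Coded caching adds the global gain factor 1 / (1 + K*M/N),
            # driven by the aggregate cache size K*M across all users.
            return K * (1 - M / N) / (1 + K * M / N)

        N = K = 30
        for M in (3, 10, 15):
            print(M, uncoded_rate(N, K, M), round(coded_rate(N, K, M), 3))
        # M=3: 27.0 vs 6.75;  M=10: 20.0 vs 1.818;  M=15: 15.0 vs 0.938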

    Content Delivery in Erasure Broadcast Channels with Cache and Feedback

    We study a content delivery problem in a K-user erasure broadcast channel in which a content-providing server wishes to deliver requested files to users, each equipped with a cache of finite memory. Assuming that the transmitter has state feedback and that the user caches can be filled reliably during off-peak hours by decentralized content placement, we characterize the achievable rate region as a function of the memory sizes and the erasure probabilities. The proposed delivery scheme, based on the broadcasting scheme by Wang and Gatzianas et al., exploits the receiver side information established during the placement phase. Our results can be extended to centralized content placement as well as to multi-antenna broadcast channels with state feedback.
    Comment: 29 pages, 7 figures. A short version has been submitted to ISIT 201
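
    The feedback-plus-side-information mechanism can be illustrated with back-of-the-envelope arithmetic for two users. This is only a sketch of the underlying idea (the erasure probability and packet count are illustrative), not the paper's full K-user scheme:

        # Toy arithmetic for a symmetric 2-user erasure channel (erasure
        # probability eps for each user), sketching why state feedback helps.
        # After each user's n packets are broadcast once, feedback reveals
        # that, in expectation, eps*(1-eps)*n of each user's packets were
        # missed by their target but overheard by the other user. Pairing
        # those up, a single XOR retransmission serves both users at once.
        # The paper combines this idea with cached side information.
        eps, n = 0.3, 10_000
        overheard = eps * (1 - eps) * n   # per user, usable in XOR pairs
        missed_by_both = eps * eps * n    # must be retransmitted separately
        print(f"XOR pairs, each serving two users: ~{overheard:.0f}")
        print(f"packets needing individual retransmission: ~{2 * missed_by_both:.0f}")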

    Fundamental Limits of Caching with Secure Delivery

    Caching is emerging as a vital tool for alleviating the severe capacity crunch in modern content-centric wireless networks. The main idea behind caching is to store parts of popular content in end users' memory and leverage the locally stored content to reduce peak data rates. By jointly designing content placement and delivery mechanisms, recent works have shown an order-wise reduction in transmission rates compared to traditional methods. In this work, we consider the secure caching problem with the additional goal of minimizing information leakage to an external wiretapper. The fundamental cache memory vs. transmission rate trade-off for the secure caching problem is characterized. Rather surprisingly, these results show that security can be introduced at a negligible cost, particularly for a large number of files and users. It is also shown that the rate achieved by the proposed caching scheme with secure delivery is within a constant multiplicative factor of the information-theoretically optimal rate for almost all parameter values of practical interest.
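
    A minimal sketch of why secure delivery can come cheap: protect each coded multicast with a one-time-pad key placed in the relevant users' caches during placement. The snippet below assumes this key-based construction; the names and single-message setup are illustrative, not the paper's exact scheme.

        import secrets

        def pad(msg: bytes, key: bytes) -> bytes:
            # One-time pad: XOR the message with an equal-length random key.
            return bytes(m ^ k for m, k in zip(msg, key))

        # During placement, each multicast group of users would store a shared
        # random key in their caches; during delivery, the server XORs each
        # coded multicast with that group's key. Group members strip the pad
        # using their caches; a cacheless wiretapper observes only noise.
        coded_multicast = b"piece_A xor piece_B"  # stand-in for a coded message
        group_key = secrets.token_bytes(len(coded_multicast))  # cached at group
        on_air = pad(coded_multicast, group_key)  # all the wiretapper ever sees
        assert pad(on_air, group_key) == coded_multicast  # legitimate users decode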