
    Algorithms for Asynchronous Coded Caching

    The original formulation of the coded caching problem assumes that the file requests from the users are synchronized, i.e., they arrive at the server at the same time. Several subsequent contributions work under the same assumption. Furthermore, the majority of prior work does not consider a scenario where users have deadlines. In our previous work, we formulated the asynchronous coded caching problem, where user requests arrive at different times and each user has a specified deadline, and we proposed a linear program (LP) for obtaining its optimal solution. However, the size of the LP (number of constraints and variables) grows rather quickly with the number of users and the cache sizes. In this work, we explore a dual decomposition based approach for solving the LP under consideration. We demonstrate that the dual function can be evaluated by equivalently solving a number of minimum cost network flow problems. Minimum cost network flow problems have been the subject of much investigation, and current solvers routinely handle instances with millions of nodes in minutes. Our proposed approach leverages these fast solvers and allows us to solve several large-scale instances of the asynchronous coded caching problem with manageable time and memory complexity.
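
    A minimal sketch of the dual decomposition loop described above, assuming a generic projected-subgradient update on a dual multiplier; the min-cost flow subproblem below is a hypothetical toy instance (the actual network construction from the paper is not reproduced here), and networkx's solver stands in for the fast dedicated solvers mentioned in the abstract.

    # Sketch only: each dual-function evaluation reduces to min-cost
    # network flow solves. The graph is a hypothetical toy subproblem,
    # NOT the construction from the paper.
    import networkx as nx

    def solve_flow_subproblem(lam):
        """Evaluate one term of the dual function for multiplier `lam`
        by solving a small min-cost flow instance whose arc costs are
        shifted by the (rounded) dual variable."""
        G = nx.DiGraph()
        G.add_node("s", demand=-2)   # source ships 2 units
        G.add_node("t", demand=2)    # sink receives 2 units
        G.add_node("u", demand=0)
        G.add_edge("s", "u", capacity=2, weight=1 + int(round(lam)))
        G.add_edge("u", "t", capacity=2, weight=1)
        G.add_edge("s", "t", capacity=1, weight=4)
        flow = nx.min_cost_flow(G)
        return nx.cost_of_flow(G, flow)

    def dual_ascent(num_iters=20, step=0.5):
        """Projected-subgradient ascent on a single dual multiplier.
        The subgradient (constraint slack) is a placeholder."""
        lam = 0.0
        for _ in range(num_iters):
            dual_value = solve_flow_subproblem(lam)
            slack = 1.0 - 0.1 * lam             # hypothetical slack
            lam = max(0.0, lam + step * slack)  # keep multiplier >= 0
        return lam, dual_value

    if __name__ == "__main__":
        print(dual_ascent())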

    Fundamental Limits of Caching in Wireless D2D Networks

    We consider a wireless Device-to-Device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an "infrastructure" setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station holding all the files in the library) through a shared bottleneck link. In this work, we consider a D2D "infrastructure-less" version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy in which the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable for a fully decentralized implementation. Under certain conditions, both approaches can achieve the information-theoretic outer bound within a constant multiplicative factor. In our previous work, we showed that a caching D2D wireless network with one-hop communication, random caching, and uncoded delivery achieves the same throughput scaling law as the infrastructure-based coded multicasting scheme, in the regime of a large number of users and files in the library. This shows that the spatial reuse gain of the D2D network is order-equivalent to the coded multicasting gain of single base station transmission. It is therefore natural to ask whether these two gains are cumulative, i.e., whether a D2D network with both local communication (spatial reuse) and coded multicasting can provide an improved scaling law. Somewhat counterintuitively, we show that these gains do not cumulate (in terms of throughput scaling law).
    Comment: 45 pages, 5 figures, submitted to IEEE Transactions on Information Theory. This is the extended version of the conference (ITW) paper arXiv:1304.585
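
    For orientation, a small comparison of per-user delivery rates as they are commonly reported in the coded caching literature: roughly (N/M)(1 - M/N) transmissions for the coded D2D scheme against K(1 - M/N)/(1 + KM/N) for the single-server infrastructure scheme. Both expressions are assumptions recalled from the literature, not exact statements of this paper's bounds.

    # Hedged comparison of delivery rates recalled from the coded
    # caching literature (both formulas are assumptions; regularity
    # conditions such as K*M >= N are glossed over).

    def rate_d2d_coded(K, N, M):
        """Approximate D2D coded delivery rate: (N/M) * (1 - M/N)."""
        return (N / M) * (1 - M / N)

    def rate_infrastructure_coded(K, N, M):
        """Approximate single-server coded multicast rate:
        K * (1 - M/N) / (1 + K*M/N)."""
        return K * (1 - M / N) / (1 + K * M / N)

    if __name__ == "__main__":
        K, N = 30, 30  # users, files (toy values)
        for M in (1, 3, 10, 15):
            print(f"M={M:2d}  D2D={rate_d2d_coded(K, N, M):6.2f}  "
                  f"infra={rate_infrastructure_coded(K, N, M):6.2f}")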

    Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff

    Replicating or caching popular content in memories distributed across the network is a technique to reduce peak network loads. Conventionally, the main performance gain of such caching was thought to result from making part of the requested data available closer to end users. Instead, we recently showed that a much more significant gain can be achieved by using caches to create coded-multicasting opportunities, even for users with different demands, through coding across data streams. These coded-multicasting opportunities are enabled by careful content overlap at the various caches in the network, created by a central coordinating server. In many scenarios, such a central coordinating server may not be available, raising the question of whether this multicasting gain can still be achieved in a more decentralized setting. In this paper, we propose an efficient caching scheme in which the content placement is performed in a decentralized manner; in other words, no coordination is required for the content placement. Despite this lack of coordination, the proposed scheme is nevertheless able to create coded-multicasting opportunities and achieves a rate close to that of the optimal centralized scheme.
    Comment: To appear in IEEE/ACM Transactions on Networking
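
    A rough illustration of the decentralized placement idea: each user independently caches a random M/N fraction of the bits of every file, with no coordination. The rate expression in the second function is the one usually associated with decentralized coded caching in the literature; treat it as a recalled assumption, not a claim taken from this abstract.

    import random

    def decentralized_placement(K, N, M, bits_per_file=1000, seed=0):
        """Each user independently caches a random M/N fraction of each
        file's bits -- no central coordination is needed."""
        rng = random.Random(seed)
        p = M / N
        caches = []
        for _ in range(K):
            cache = {
                f: {b for b in range(bits_per_file) if rng.random() < p}
                for f in range(N)
            }
            caches.append(cache)
        return caches

    def decentralized_rate(K, N, M):
        """Delivery rate commonly attributed to decentralized coded
        caching (recalled from the literature; requires M > 0):
        K*(1 - M/N) * (N/(K*M)) * (1 - (1 - M/N)**K)."""
        q = 1 - M / N
        return K * q * (N / (K * M)) * (1 - q ** K)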

    A Low-Complexity Approach to Distributed Cooperative Caching with Geographic Constraints

    We consider caching in cellular networks in which each base station is equipped with a cache that can store a limited number of files. The popularity of the files is known, and the goal is to place files in the caches such that the probability that a user at an arbitrary location in the plane will find the file that she requires in one of the covering caches is maximized. We develop distributed asynchronous algorithms for deciding which contents to store in which cache. Such cooperative algorithms require communication only between caches with overlapping coverage areas and can operate in an asynchronous manner. The development of the algorithms rests principally on the observation that the problem can be viewed as a potential game. Our basic algorithm is derived from the best response dynamics. We demonstrate that the complexity of each best response step is independent of the number of files, linear in the cache capacity, and linear in the maximum number of base stations that cover a given area. We then show that the overall algorithm complexity for a discrete cache placement is polynomial in both the network size and the catalog size. In practical examples, the algorithm converges in just a few iterations. Also, in most cases of interest, the basic algorithm finds the best Nash equilibrium, corresponding to the global optimum. We provide two extensions of our basic algorithm, based on stochastic and deterministic simulated annealing, which find the global optimum. Finally, we evaluate the hit probability evolution on real and synthetic networks numerically and show that our distributed caching algorithm performs significantly better than storing the most popular content, a probabilistic content placement policy, and Multi-LRU caching policies.
    Comment: 24 pages, 9 figures, presented at SIGMETRICS'1
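
    A minimal sketch of best-response dynamics for a toy version of such a cache placement game. The coverage model (area weights, covering sets) and the greedy top-capacity selection are simplifications assumed here, not the paper's exact construction; in a potential game, round-robin best responses converge to a pure Nash equilibrium.

    def best_response(c, caches, coverage, popularity, capacity):
        """Cache c picks the `capacity` files with the largest marginal
        hit-probability gain, given the other caches' current contents
        (a simplified stand-in for the best-response step)."""
        gains = {}
        for f, pop in enumerate(popularity):
            gain = 0.0
            for area, (weight, covering) in coverage.items():
                if c in covering:
                    others = covering - {c}
                    if not any(f in caches[o] for o in others):
                        gain += weight * pop  # hit only via cache c
            gains[f] = gain
        return set(sorted(gains, key=gains.get, reverse=True)[:capacity])

    def best_response_dynamics(K, coverage, popularity, capacity,
                               rounds=10):
        """Round-robin best responses until no cache wants to change."""
        caches = [set() for _ in range(K)]
        for _ in range(rounds):
            changed = False
            for c in range(K):
                new = best_response(c, caches, coverage, popularity,
                                    capacity)
                if new != caches[c]:
                    caches[c] = new
                    changed = True
            if not changed:
                break
        return caches

    if __name__ == "__main__":
        # Two caches, three areas; area 1 is covered by both.
        coverage = {0: (0.4, {0}), 1: (0.2, {0, 1}), 2: (0.4, {1})}
        popularity = [0.5, 0.3, 0.2]  # toy Zipf-like popularity
        print(best_response_dynamics(2, coverage, popularity, capacity=1))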

    Fundamental Limits of Caching

    Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term the local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared to previously known schemes. In particular, the improvement can be on the order of the number of users in the network. Moreover, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.
    Comment: To appear in IEEE Transactions on Information Theory
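
    To make the two gains concrete, a toy calculation using the peak-rate expression usually quoted for this scheme (recalled from the literature, so treat it as an assumption): the factor (1 - M/N) is the local caching gain, while 1/(1 + KM/N) is the global gain, which shrinks with the aggregate cache size KM.

    def uncoded_rate(K, N, M):
        """Conventional (local-gain-only) peak rate: K*(1 - M/N)."""
        return K * (1 - M / N)

    def coded_rate(K, N, M):
        """Coded caching peak rate usually quoted for this scheme
        (recalled from the literature): K*(1 - M/N) / (1 + K*M/N)."""
        return K * (1 - M / N) / (1 + K * M / N)

    if __name__ == "__main__":
        K, N, M = 100, 100, 20
        print(uncoded_rate(K, N, M), coded_rate(K, N, M))
        # The gap is roughly the factor 1 + K*M/N = 21 here, i.e. on
        # the order of the number of users, as the abstract notes.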

    Cache-Enabled Broadcast Packet Erasure Channels with State Feedback

    We consider a cache-enabled K-user broadcast erasure packet channel in which a server with a library of N files wishes to deliver a requested file to each user, who is equipped with a cache of finite memory M. Assuming that the transmitter has state feedback and that user caches can be filled reliably during off-peak hours by decentralized cache placement, we characterize the optimal rate region as a function of the memory size and the erasure probability. The proposed delivery scheme, based on the scheme proposed by Gatzianas et al., exploits the receiver side information established during the placement phase. Our results enable us to quantify the net benefits of decentralized coded caching in the presence of erasures. State feedback is found to be especially useful when the erasure probability is large and/or the normalized memory size is small.
    Comment: 8 pages, 4 figures, to be presented at the 53rd Annual Allerton Conference on Communication, Control, and Computing, IL, USA
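
    Not the paper's feedback-based scheme, but a minimal illustration of why erasures inflate delivery time and why feedback matters: with per-packet erasure probability delta and perfect ACK feedback (retransmit until received), each packet costs 1/(1 - delta) transmissions in expectation.

    import random

    def expected_transmissions(delta):
        """With ACK feedback, a packet is resent until received:
        geometric with success probability 1 - delta."""
        return 1.0 / (1.0 - delta)

    def simulate(delta, packets=10_000, seed=0):
        """Monte Carlo check of the 1/(1 - delta) cost per packet."""
        rng = random.Random(seed)
        sent = 0
        for _ in range(packets):
            while True:
                sent += 1
                if rng.random() >= delta:  # packet got through
                    break
        return sent / packets

    if __name__ == "__main__":
        for delta in (0.1, 0.3, 0.5):
            print(delta, expected_transmissions(delta),
                  round(simulate(delta), 3))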