    K Users Caching Two Files: An Improved Achievable Rate

    Caching is an approach to smooth out the variability of traffic over time. It has recently been proved that the local memories at the users can be exploited to reduce peak traffic far more efficiently than previously believed. In this work we improve upon the existing results and introduce a novel caching strategy that takes advantage of simultaneous coded placement and coded delivery in order to decrease the worst-case achievable rate with 2 files and K users. We show that for any cache size 1/K < M < 1 our scheme outperforms the state of the art.
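
    For orientation (standard background, not part of the abstract): one natural baseline here is the centralized scheme of Maddah-Ali and Niesen, which for N files and K users achieves, at the cache sizes M = tN/K with t in {0, 1, ..., K}, the worst-case rate

    \[
      R_{\mathrm{MN}}(M) \;=\; K\Bigl(1-\frac{M}{N}\Bigr)\cdot
      \min\Bigl\{\frac{1}{1+KM/N},\;\frac{N}{K}\Bigr\},
      \qquad M=\frac{tN}{K},\; t\in\{0,1,\dots,K\},
    \]

    with intermediate cache sizes handled by memory sharing. Here N = 2, and 1/K < M < 1 is precisely the small-cache regime where coded placement can do better than this baseline.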

    Finite Length Analysis of Caching-Aided Coded Multicasting

    In this work, we study a noiseless broadcast link serving K users whose requests arise from a library of N files. Every user is equipped with a cache of size M files. It has been shown that by splitting all the files into packets and placing individual packets in a random independent manner across all the caches, at most N/M file transmissions are required for any set of demands from the library. The achievable delivery scheme involves linearly combining packets of different files following a greedy clique cover solution to the underlying index coding problem. This remarkable multiplicative gain of random placement and coded delivery has been established in the asymptotic regime where the number of packets per file F scales to infinity. In this work, we initiate the finite-length analysis of random caching schemes in which the number of packets F is a function of the system parameters M, N, K. Specifically, we show that existing random placement and clique cover delivery schemes that achieve optimality in the asymptotic regime can have a multiplicative gain of at most 2 if the number of packets is sub-exponential. Further, for any clique cover based coded delivery and a large class of random caching schemes that includes the existing ones, we show that the number of packets required to obtain a multiplicative gain of (4/3)g is at least O((N/M)^g). We exhibit a random placement and an efficient clique cover based coded delivery scheme that approximately achieves this lower bound. We also provide tight concentration results showing that the average (over the random caching involved) number of transmissions concentrates very well, requiring only a polynomial number of packets in the remaining parameters. Comment: A shorter version appeared in the 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2014.
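
    As a sanity-check illustration of this setting (our sketch, not the paper's improved construction): the snippet below draws a random placement with a finite number of packets F per file, runs the standard subset-based coded delivery, and compares the resulting rate to uncoded delivery. All names and parameter values are illustrative.

    import random
    from collections import defaultdict

    random.seed(0)
    K, N, F, M = 4, 4, 200, 2      # users, files, packets per file, cache size
    p = M / N                      # probability a given packet is cached at a user

    # Random placement: caches[k] is the set of (file, packet) pairs user k stores.
    caches = [{(n, f) for n in range(N) for f in range(F) if random.random() < p}
              for _ in range(K)]
    demands = [random.randrange(N) for _ in range(K)]   # file requested per user

    # Coded delivery over user subsets: a packet of user k's demanded file that is
    # cached at exactly the user set T (with k not in T) rides inside the XOR
    # associated with S = T | {k}; every other member of S can cancel it.
    needed = defaultdict(int)
    for k in range(K):
        for f in range(F):
            holders = frozenset(u for u in range(K) if (demands[k], f) in caches[u])
            if k not in holders:
                needed[(holders | frozenset([k]), k)] += 1

    # Each subset S costs max_{k in S} needed[(S, k)] packet transmissions
    # (shorter XOR operands are zero-padded).
    subsets = {S for (S, _) in needed}
    coded = sum(max(needed.get((S, k), 0) for k in S) for S in subsets)
    uncoded = sum(1 for k in range(K) for f in range(F)
                  if (demands[k], f) not in caches[k])
    print(f"rate (files): uncoded {uncoded / F:.2f}, coded {coded / F:.2f}")

    Shrinking F in this sketch exposes the finite-length loss the paper quantifies: the zero-padding of mismatched XOR segments grows relative to F, eroding the coded gain.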

    Signal Processing for Caching Networks and Non-volatile Memories

    The recent information explosion has created a pressing need for faster and more reliable data storage and transmission schemes. This thesis focuses on two systems: caching networks and non-volatile storage systems. It proposes network protocols to improve the efficiency of information delivery, as well as signal processing schemes to reduce errors at the physical layer. The thesis first investigates caching and delivery strategies for content delivery networks. Caching is a useful technique for reducing the network burden by prefetching some contents during off-peak hours. Coded caching [1], proposed by Maddah-Ali and Niesen, is the foundation of our algorithms; it has been shown to reduce peak traffic rates by encoding transmissions so that different users can extract different information from the same packet. Content delivery networks store information distributed across multiple servers, so as to balance the load and avoid unrecoverable losses in case of node or disk failures. On the one hand, distributed storage limits the capability of combining content from different servers into a single message, causing performance losses in coded caching schemes. On the other hand, the inherent redundancy in distributed storage systems can be used to improve the performance of those schemes through parallelism. This thesis proposes a scheme combining distributed storage of the content in multiple servers with an efficient coded caching algorithm for delivery to the users. This scheme is shown to reduce the peak transmission rate below that of state-of-the-art algorithms.
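
    To make the "different users extract different information from the same packet" idea concrete, here is a toy sketch of the standard two-user, two-file coded caching example underlying [1] (packet contents and names are made up):

    # Files A and B, each split into two equal-size packets.
    A1, A2 = b"A-part1", b"A-part2"
    B1, B2 = b"B-part1", b"B-part2"

    def xor(x, y):
        """Bitwise XOR of two equal-length byte strings."""
        return bytes(a ^ b for a, b in zip(x, y))

    # Placement (cache size M = 1 file per user): user 1 stores the first
    # packet of each file, user 2 stores the second packet of each file.
    cache1 = {"A1": A1, "B1": B1}
    cache2 = {"A2": A2, "B2": B2}

    # Worst-case demands: user 1 wants A, user 2 wants B. Uncoded delivery
    # would send A2 and B1 separately; coded delivery sends only A2 XOR B1,
    # and each user cancels the half it already has cached.
    coded = xor(A2, B1)
    assert xor(coded, cache1["B1"]) == A2   # user 1 recovers its missing half
    assert xor(coded, cache2["A2"]) == B1   # user 2 recovers its missing half

    A single coded packet thus serves two different demands at once; this multicast gain is what the thesis extends to content stored across multiple servers.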

    Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff

    Replicating or caching popular content in memories distributed across the network is a technique for reducing peak network loads. Conventionally, the main performance gain of this caching was thought to result from making part of the requested data available closer to the end users. Instead, we recently showed that a much more significant gain can be achieved by using caches to create coded-multicasting opportunities, even for users with different demands, through coding across data streams. These coded-multicasting opportunities are enabled by careful content overlap at the various caches in the network, created by a central coordinating server. In many scenarios, such a central coordinating server may not be available, raising the question of whether this multicasting gain can still be achieved in a more decentralized setting. In this paper, we propose an efficient caching scheme in which the content placement is performed in a decentralized manner; in other words, no coordination is required for the content placement. Despite this lack of coordination, the proposed scheme is nevertheless able to create coded-multicasting opportunities and achieves a rate close to that of the optimal centralized scheme. Comment: To appear in IEEE/ACM Transactions on Networking.
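
    For context (a recollection of standard results in this line of work, not a quotation from the abstract): with N files, K users, and cache size M per user, decentralized random placement with coded delivery is commonly associated with the achievable rate

    \[
      R_D(M) \;=\; \Bigl(\frac{N}{M}-1\Bigr)\Bigl(1-\bigl(1-\tfrac{M}{N}\bigr)^{K}\Bigr),
    \]

    which tends to N/M - 1 as K grows; the "order-optimal" claim in the title is that such a rate stays within a constant factor of the information-theoretic optimum.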

    Fundamental Limits of Coded Caching: Improved Delivery Rate-Cache Capacity Trade-off

    A centralized coded caching system, consisting of a server delivering N popular files, each of size F bits, to K users through an error-free shared link, is considered. It is assumed that each user is equipped with a local cache memory of capacity MF bits, and that contents can be proactively cached into these memories over a low-traffic period, albeit without knowledge of the user demands. During the peak traffic period each user requests a single file from the server. The goal is to minimize the number of bits delivered by the server over the shared link, known as the delivery rate, over all user demand combinations. A novel coded caching scheme for the cache capacity M = (N-1)/K is proposed. It is shown that the proposed scheme achieves a smaller delivery rate than the existing coded caching schemes in the literature when K > N >= 3. Furthermore, we argue that the delivery rate of the proposed scheme is within a constant multiplicative factor of 2 of the optimal delivery rate for cache capacities 1/K <= M <= (N-1)/K when K > N >= 3. Comment: To appear in IEEE Transactions on Communications.
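
    To make the operating point concrete, here is a back-of-the-envelope baseline (our arithmetic under standard memory sharing, not taken from the paper). Take N = 3 and K = 4, so M = (N-1)/K = 1/2. Sending all files uncoded gives R(0) = 3, and the classic centralized corner point at M = N/K = 3/4 gives

    \[
      R\bigl(\tfrac{3}{4}\bigr) \;=\; K\Bigl(1-\frac{M}{N}\Bigr)\frac{1}{1+KM/N}
      \;=\; 4\cdot\frac{3}{4}\cdot\frac{1}{2} \;=\; \frac{3}{2},
    \]

    so memory sharing with weight 1/3 on M = 0 and 2/3 on M = 3/4 yields R(1/2) <= (1/3)*3 + (2/3)*(3/2) = 2. The proposed scheme targets improvements over such baselines at exactly this cache size.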