13 research outputs found

    Distortion-Memory Tradeoffs in Cache-Aided Wireless Video Delivery

    Mobile network operators are considering caching as one of the strategies to keep up with the increasing demand for high-definition wireless video streaming. By prefetching popular content into memory at wireless access points or end user devices, requests can be served locally, relieving strain on expensive backhaul. In addition, using network coding allows the simultaneous serving of distinct cache misses via common coded multicast transmissions, resulting in significantly larger load reductions compared to those achieved with conventional delivery schemes. However, prior work does not exploit the properties of video and simply treats content as fixed-size files that users would like to fully download. Our work is motivated by the fact that video can be coded in a scalable fashion and that the decoded video quality depends on the number of layers a user is able to receive. Using a Gaussian source model, caching and coded delivery methods are designed to minimize the squared error distortion at end user devices. Our work is general enough to consider heterogeneous cache sizes and video popularity distributions. Comment: To appear in the Proceedings of the 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton 2015).
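
    The distortion model behind this setup can be made concrete. The sketch below is a minimal illustration under our own assumptions (not the paper's actual scheme): for a successively refinable Gaussian source with variance sigma^2, a user decoding a total of R bits per sample across the layers it receives achieves MSE distortion sigma^2 * 2^(-2R). The function name and layer rates are hypothetical.

    # Illustrative sketch: distortion of a successively refinable Gaussian source.
    # Receiving a total rate R bits/sample yields MSE distortion sigma2 * 2**(-2*R).
    def gaussian_distortion(layer_rates, k, sigma2=1.0):
        """MSE after decoding the first k layers (rates in bits per sample)."""
        total_rate = sum(layer_rates[:k])
        return sigma2 * 2 ** (-2 * total_rate)

    # Example: three layers of 0.5 bits/sample each; more layers, less distortion.
    rates = [0.5, 0.5, 0.5]
    for k in range(len(rates) + 1):
        print(f"layers decoded: {k}, distortion: {gaussian_distortion(rates, k):.4f}")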

    Complete Interference Mitigation Through Receiver-Caching in Wyner's Networks

    We present upper and lower bounds on the per-user multiplexing gain (MG) of Wyner's circular soft-handoff model and Wyner's circular full model with cognitive transmitters and receivers equipped with cache memories. The bounds are tight for cache memories with prelog μ ≥ 2D/3 in the soft-handoff model and for μ ≥ D in the full model, where D denotes the number of possibly demanded files. In these cases the per-user MG of the two models is 1 + μ/D, the same as for non-interfering point-to-point links with caches at the receivers. Large receiver cache memories thus make it possible to completely mitigate interference in these networks. Comment: Submitted to ITW 2016 in Cambridge.
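
    As a quick numerical companion to the abstract's formula, the sketch below evaluates the per-user MG 1 + μ/D in the regimes where the bounds are stated to be tight (μ ≥ 2D/3 for the soft-handoff model, μ ≥ D for the full model). The function and parameter names are our own illustration, not from the paper.

    # Per-user multiplexing gain 1 + mu/D where the abstract's bounds are tight.
    def per_user_mg(mu, D, model="soft-handoff"):
        threshold = 2 * D / 3 if model == "soft-handoff" else D
        if mu < threshold:
            raise ValueError("bounds are only shown tight for mu >= threshold")
        return 1 + mu / D

    D = 10  # number of possibly demanded files
    for mu in (7, 8, 10):
        print(f"mu={mu}: per-user MG = {per_user_mg(mu, D):.2f}")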

    Fundamental Limits of Coded Caching: Improved Delivery Rate-Cache Capacity Trade-off

    A centralized coded caching system, consisting of a server delivering N popular files, each of size F bits, to K users through an error-free shared link, is considered. It is assumed that each user is equipped with a local cache memory of capacity MF bits, and contents can be proactively cached into these caches over a low-traffic period, albeit without knowledge of the user demands. During the peak traffic period each user requests a single file from the server. The goal is to minimize the number of bits delivered by the server over the shared link, known as the delivery rate, over all user demand combinations. A novel coded caching scheme for the cache capacity M = (N-1)/K is proposed. It is shown that the proposed scheme achieves a smaller delivery rate than the existing coded caching schemes in the literature when K > N >= 3. Furthermore, we argue that the delivery rate of the proposed scheme is within a constant multiplicative factor of 2 of the optimal delivery rate for cache capacities 1/K <= M <= (N-1)/K, when K > N >= 3. Comment: To appear in IEEE Transactions on Communications.
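
    For context, the sketch below computes the classical Maddah-Ali–Niesen (MN) delivery rate at the cache capacity M = (N-1)/K targeted above. This is a baseline the abstract claims to improve upon, not the proposed scheme itself (whose rate expression is not given here). Corner points t = KM/N = 1, ..., K achieve R = (K-t)/(t+1), the trivial rate min(K, N) is achievable at M = 0, and intermediate capacities are reached by memory-sharing (linear interpolation).

    # Baseline (assumption: classical MN scheme, not the paper's proposal).
    from fractions import Fraction

    def mn_rate(K, N, M):
        """Memory-shared MN delivery rate at cache capacity M."""
        M = Fraction(M)
        pts = [(Fraction(0), Fraction(min(K, N)))]  # send all files uncoded at M=0
        pts += [(Fraction(t * N, K), Fraction(K - t, t + 1)) for t in range(1, K + 1)]
        for (m0, r0), (m1, r1) in zip(pts, pts[1:]):
            if m0 <= M <= m1:  # memory-sharing between adjacent corner points
                return r0 + (r1 - r0) * (M - m0) / (m1 - m0)
        raise ValueError("M must lie in [0, N]")

    K, N = 5, 3                 # the regime K > N >= 3 from the abstract
    M = Fraction(N - 1, K)      # the cache capacity targeted by the paper
    print(f"baseline MN rate at M=(N-1)/K: {mn_rate(K, N, M)}")  # prints 7/3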

    Coded Caching for a Large Number Of Users

    Information theoretic analysis of a coded caching system is considered, in which a server with a database of N equal-size files, each F bits long, serves K users. Each user is assumed to have a local cache that can store M files, i.e., a capacity of MF bits. Proactive caching to user terminals is considered, in which the caches are filled by the server in advance during the placement phase, without knowing the user requests. Each user requests a single file, and all the requests are satisfied simultaneously through a shared error-free link during the delivery phase. First, centralized coded caching is studied assuming both the number and the identities of the active users in the delivery phase are known by the server during the placement phase. A novel group-based centralized coded caching (GBC) scheme is proposed for a cache capacity of M = N/K. It is shown that this scheme achieves a smaller delivery rate than all the known schemes in the literature. The improvement is then extended to a wider range of cache capacities through memory-sharing between the proposed scheme and other known schemes in the literature. Next, the proposed centralized coded caching idea is exploited in the decentralized setting, in which the identities of the users that participate in the delivery phase are assumed to be unknown during the placement phase. It is shown that the proposed decentralized caching scheme also achieves a delivery rate smaller than the state-of-the-art. Numerical simulations are also presented to corroborate our theoretical results.
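
    Since the decentralized comparison point is the standard random-placement scheme, a hedged reference computation may help. The formula below is the well-known decentralized delivery rate of Maddah-Ali and Niesen, R(M) = (N/M - 1)(1 - (1 - M/N)^K); it is shown only as the state-of-the-art baseline the abstract claims to beat, not as the proposed GBC-based scheme.

    # Baseline decentralized coded-caching delivery rate (Maddah-Ali & Niesen),
    # with each user independently caching a random M/N fraction of every file.
    def decentralized_rate(K, N, M):
        if not 0 < M <= N:
            raise ValueError("need 0 < M <= N")
        p = M / N  # fraction of each file stored in every cache
        return (1 / p - 1) * (1 - (1 - p) ** K)

    K, N = 20, 10
    for M in (1, 2, 5):
        print(f"M={M}: baseline decentralized rate = {decentralized_rate(K, N, M):.3f}")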