
    Content Delivery in Erasure Broadcast Channels with Cache and Feedback

    We study a content delivery problem in a K-user erasure broadcast channel in which a content-providing server wishes to deliver requested files to users, each equipped with a cache of finite memory. Assuming that the transmitter has state feedback and that user caches can be filled reliably during off-peak hours by decentralized content placement, we characterize the achievable rate region as a function of the memory sizes and the erasure probabilities. The proposed delivery scheme, based on the broadcasting schemes by Wang and by Gatzianas et al., exploits the receiver side information established during the placement phase. Our results can be extended to centralized content placement as well as to multi-antenna broadcast channels with state feedback.
    Comment: 29 pages, 7 figures. A short version has been submitted to ISIT 201
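
    As a minimal illustration (our own sketch, not taken from the paper), the decentralized content placement assumed above can be thought of as each user independently caching a random fraction M/N of every file's packets; the delivery phase then exploits the side information this creates. All names and parameter values below are illustrative.

        import random

        def decentralized_placement(num_files, packets_per_file, cache_fraction, num_users, seed=0):
            """Each user independently caches a random cache_fraction of every file's packets.
            Returns caches[user][file] = set of cached packet indices (illustrative sketch)."""
            rng = random.Random(seed)
            caches = []
            for _ in range(num_users):
                per_file = {}
                for f in range(num_files):
                    k = int(cache_fraction * packets_per_file)
                    per_file[f] = set(rng.sample(range(packets_per_file), k))
                caches.append(per_file)
            return caches

        # Example: N = 4 files of 20 packets each, K = 3 users, each caching M/N = 1/2 of every file.
        caches = decentralized_placement(num_files=4, packets_per_file=20, cache_fraction=0.5, num_users=3)
        print(len(caches[0][0]))  # -> 10 cached packets of file 0 at user 0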

    Cache-Enabled Broadcast Packet Erasure Channels with State Feedback

    We consider a cache-enabled K-user broadcast erasure packet channel in which a server with a library of N files wishes to deliver a requested file to each of K users, each equipped with a cache of finite memory M. Assuming that the transmitter has state feedback and that user caches can be filled reliably during off-peak hours by decentralized cache placement, we characterize the optimal rate region as a function of the memory size and the erasure probability. The proposed delivery scheme, based on the scheme proposed by Gatzianas et al., exploits the receiver side information established during the placement phase. Our results enable us to quantify the net benefits of decentralized coded caching in the presence of erasures. State feedback is found to be especially useful when the erasure probability is large and/or the normalized memory size is small.
    Comment: 8 pages, 4 figures, to be presented at the 53rd Annual Allerton Conference on Communication, Control, and Computing, IL, US
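
    For intuition only (not the paper's scheme), the sketch below simulates a broadcast packet erasure channel with i.i.d. per-user erasures and state feedback: after each transmission the server learns which users received the packet, so packets overheard by unintended users become side information that Gatzianas-style schemes can later combine. Parameter names and values are illustrative.

        import random

        def broadcast_with_feedback(num_packets, erasure_prob, num_users, seed=1):
            """Broadcast each packet once; each user erases it independently with
            probability erasure_prob. State feedback records who received what."""
            rng = random.Random(seed)
            received = [set() for _ in range(num_users)]
            feedback_log = []  # (packet index, users that received it)
            for p in range(num_packets):
                got_it = tuple(u for u in range(num_users) if rng.random() > erasure_prob)
                for u in got_it:
                    received[u].add(p)
                feedback_log.append((p, got_it))
            return received, feedback_log

        received, feedback = broadcast_with_feedback(num_packets=10, erasure_prob=0.3, num_users=3)
        print(feedback[0])  # e.g. (0, (1, 2)): packet 0 was received by users 1 and 2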

    Random Linear Network Coding for 5G Mobile Video Delivery

    The exponential increase in mobile video delivery will continue with the demand for higher-resolution, multi-view and large-scale multicast video services. The novel fifth-generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both the 5G core and radio access networks. One promising approach for video quality adaptation, throughput enhancement and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing to promising avenues for future research.
    Comment: Invited paper for the Special Issue "Network and Rateless Coding for Video Streaming" - MDPI Information
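
    To make the RLNC idea concrete, here is a minimal packet-level sketch over GF(2) (our own illustration, not the 3GPP/5G NR integration discussed in the paper): coded packets are random XOR combinations of the source packets, and a receiver decodes by Gauss-Jordan elimination once it has collected enough innovative (linearly independent) combinations. Function and variable names are illustrative.

        import random

        def rlnc_encode(source_packets, rng):
            """Emit one coded packet over GF(2): a random nonzero coefficient vector
            and the XOR of the corresponding source packets."""
            n, size = len(source_packets), len(source_packets[0])
            coeffs = [0] * n
            while not any(coeffs):
                coeffs = [rng.randint(0, 1) for _ in range(n)]
            payload = bytearray(size)
            for c, pkt in zip(coeffs, source_packets):
                if c:
                    payload = bytearray(a ^ b for a, b in zip(payload, pkt))
            return coeffs, bytes(payload)

        def rlnc_decode(coded_packet_iter, n):
            """Collect innovative coded packets and solve on the fly, keeping the
            coefficient rows in reduced row-echelon form over GF(2)."""
            rows = []                                   # list of (coeffs, payload)
            for coeffs, payload in coded_packet_iter:
                c, p = list(coeffs), bytearray(payload)
                for rc, rp in rows:                     # reduce against existing pivots
                    if c[rc.index(1)]:
                        c = [a ^ b for a, b in zip(c, rc)]
                        p = bytearray(a ^ b for a, b in zip(p, rp))
                if not any(c):
                    continue                            # not innovative, discard
                for i, (rc, rp) in enumerate(rows):     # clear the new pivot from old rows
                    if rc[c.index(1)]:
                        rows[i] = ([a ^ b for a, b in zip(rc, c)],
                                   bytearray(a ^ b for a, b in zip(rp, p)))
                rows.append((c, p))
                if len(rows) == n:                      # full rank: rows are now unit vectors
                    rows.sort(key=lambda r: r[0].index(1))
                    return [bytes(rp) for _, rp in rows]
            raise ValueError("not enough innovative packets to decode")

        rng = random.Random(0)
        source = [bytes([i] * 8) for i in range(4)]              # 4 source packets of 8 bytes
        stream = (rlnc_encode(source, rng) for _ in range(100))  # stream of coded packets
        print(rlnc_decode(stream, 4) == source)                  # -> True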

    Benefits of Cache Assignment on Degraded Broadcast Channels

    Degraded K-user broadcast channels (BCs) are studied when the receivers are equipped with cache memories. Lower and upper bounds are derived on the capacity-memory tradeoff, i.e., on the largest rate of reliable communication over the BC as a function of the receivers' cache sizes, and the bounds are shown to match for interesting special cases. The lower bounds are achieved by two new coding schemes that benefit from nonuniform cache assignments. Lower and upper bounds are also established on the global capacity-memory tradeoff, i.e., on the largest capacity-memory tradeoff that can be attained by optimizing the receivers' cache sizes subject to a total cache memory budget. The bounds coincide when the total cache memory budget is sufficiently small or sufficiently large, where the thresholds depend on the BC statistics. For small cache memories, it is optimal to assign all the cache memory to the weakest receiver. In this regime, the global capacity-memory tradeoff grows as the total cache memory budget divided by the number of files in the system. In other words, a perfect global caching gain is achievable in this regime, and the performance corresponds to that of a system in which all the cache contents in the network are available to all receivers. For large cache memories, it is optimal to assign a positive cache memory to every receiver, such that weaker receivers are assigned larger cache memories than stronger receivers. In this regime, the growth rate of the global capacity-memory tradeoff is further divided by the number of users, which corresponds to a local caching gain. It is observed numerically that a uniform assignment of the total cache memory is suboptimal in all regimes unless the BC is completely symmetric. For erasure BCs, this claim is proved analytically in the regime of small cache sizes.
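
    A small worked illustration (our own, with arbitrary example values) of the two growth regimes described above: in the small-budget regime each unit of total cache memory buys roughly 1/N of extra rate (perfect global caching gain), while in the large-budget regime the per-unit gain is roughly 1/(NK) (local caching gain).

        # Illustrative slopes of the global capacity-memory tradeoff described above
        # (example values only; the actual thresholds depend on the BC statistics).
        N, K = 100, 4                      # number of files, number of users

        small_budget_slope = 1 / N         # small total budget: perfect global caching gain
        large_budget_slope = 1 / (N * K)   # large total budget: local caching gain

        extra_budget = 50                  # extra total cache memory, in file-size units
        print(extra_budget * small_budget_slope)  # -> 0.5 extra rate in the small-budget regime
        print(extra_budget * large_budget_slope)  # -> 0.125 extra rate in the large-budget regime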