Speeding up Future Video Distribution via Channel-Aware Caching-Aided Coded Multicast
Future Internet usage will be dominated by the consumption of a rich variety
of online multimedia services accessed from an exponentially growing number of
multimedia capable mobile devices. As such, future Internet designs will be
challenged to provide solutions that can deliver bandwidth-intensive,
delay-sensitive, on-demand video-based services over increasingly crowded,
bandwidth-limited wireless access networks. One of the main reasons for the
bandwidth stress facing wireless network operators is the difficulty to exploit
the multicast nature of the wireless medium when wireless users or access
points rarely experience the same channel conditions or access the same content
at the same time. In this paper, we present and analyze a novel wireless video
delivery paradigm based on the combined use of channel-aware caching and coded
multicasting that allows simultaneously serving multiple cache-enabled
receivers that may be requesting different content and experiencing different
channel conditions. To this end, we reformulate the caching-aided coded
multicast problem as a joint source-channel coding problem and design an
achievable scheme that preserves the cache-enabled multiplicative throughput
gains of the error-free scenario, by guaranteeing per-receiver rates unaffected
by the presence of receivers with worse channel conditions.

Comment: 11 pages, 6 figures, to appear in IEEE JSAC Special Issue on Video
Distribution over Future Internet
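The coded-multicast idea this abstract builds on can be illustrated with a minimal error-free sketch (an illustrative assumption; the paper's channel-aware scheme additionally handles receivers with unequal channel conditions). Two cache-enabled receivers requesting different files are both served by a single XOR transmission:

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Library of two files, each split into two subfiles.
A1, A2 = b"AAAA", b"aaaa"
B1, B2 = b"BBBB", b"bbbb"

# Placement phase: user 1 caches {A1, B1}; user 2 caches {A2, B2}.
cache1 = {"A1": A1, "B1": B1}
cache2 = {"A2": A2, "B2": B2}

# Delivery phase: user 1 requests file A, user 2 requests file B.
# The single multicast packet A2 XOR B1 is useful to both receivers.
coded = xor(A2, B1)

# Each user cancels the subfile it already holds in its cache.
user1_A = cache1["A1"] + xor(coded, cache1["B1"])  # recovers A1 || A2
user2_B = xor(coded, cache2["A2"]) + cache2["B2"]  # recovers B1 || B2

assert user1_A == A1 + A2
assert user2_B == B1 + B2
```

One coded transmission thus replaces two unicast transmissions, which is the multiplicative caching gain the paper seeks to preserve under fading channels.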
Cache-Enabled Broadcast Packet Erasure Channels with State Feedback
We consider a cache-enabled K-user broadcast erasure packet channel in which
a server with a library of N files wishes to deliver a requested file to each
user who is equipped with a cache of a finite memory M. Assuming that the
transmitter has state feedback and user caches can be filled during off-peak
hours reliably by decentralized cache placement, we characterize the optimal
rate region as a function of the memory size and the erasure probability. The
proposed delivery scheme, based on the scheme proposed by Gatzianas et al.,
exploits the receiver side information established during the placement phase.
Our results enable us to quantify the net benefits of decentralized coded
caching in the presence of erasure. The role of state feedback is found useful
especially when the erasure probability is large and/or the normalized memory
size is small.

Comment: 8 pages, 4 figures, to be presented at the 53rd Annual Allerton
Conference on Communication, Control, and Computing, IL, US
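As a point of comparison for the rates this abstract characterizes, the error-free decentralized coded-caching delivery rate of Maddah-Ali and Niesen (which this scheme generalizes to erasure channels with state feedback) can be evaluated numerically; the function names below are illustrative, not from the paper:

```python
def decentralized_rate(K: int, M: float, N: int) -> float:
    """Error-free delivery rate (in file units) under decentralized
    placement: (N/M - 1) * (1 - (1 - M/N)^K)."""
    p = M / N                       # fraction of each file cached per user
    if p == 0:
        return float(K)             # no caching: serve every request fully
    return (N / M - 1) * (1 - (1 - p) ** K)

def uncoded_rate(K: int, M: float, N: int) -> float:
    """Rate with only a local caching gain: each of the K users still
    needs the uncached fraction 1 - M/N of its requested file."""
    return K * (1 - M / N)

K, N, M = 10, 20, 5                 # 10 users, 20 files, cache of 5 files
print(f"coded:   {decentralized_rate(K, M, N):.3f} files")
print(f"uncoded: {uncoded_rate(K, M, N):.3f} files")
```

The gap between the two rates is the coded multicasting gain whose net benefit, in the presence of erasures and feedback, the paper quantifies.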
Content Delivery in Erasure Broadcast Channels with Cache and Feedback
We study a content delivery problem in a K-user erasure broadcast channel
such that a content providing server wishes to deliver requested files to
users, each equipped with a cache of a finite memory. Assuming that the
transmitter has state feedback and user caches can be filled during off-peak
hours reliably by the decentralized content placement, we characterize the
achievable rate region as a function of the memory sizes and the erasure
probabilities. The proposed delivery scheme, based on the broadcasting scheme
by Wang and Gatzianas et al., exploits the receiver side information
established during the placement phase. Our results can be extended to
centralized content placement as well as multi-antenna broadcast channels with
state feedback.

Comment: 29 pages, 7 figures. A short version has been submitted to ISIT 201
Benefits of Cache Assignment on Degraded Broadcast Channels
Degraded K-user broadcast channels (BCs) are studied when the receivers are facilitated with cache memories. Lower and upper bounds are derived on the capacity-memory tradeoff, i.e., on the largest rate of reliable communication over the BC as a function of the receivers' cache sizes, and the bounds are shown to match for interesting special cases. The lower bounds are achieved by two new coding schemes that benefit from nonuniform cache assignments. Lower and upper bounds are also established on the global capacity-memory tradeoff, i.e., on the largest capacity-memory tradeoff that can be attained by optimizing the receivers' cache sizes subject to a total cache memory budget. The bounds coincide when the total cache memory budget is sufficiently small or sufficiently large, where the thresholds depend on the BC statistics. For small cache memories, it is optimal to assign all the cache memory to the weakest receiver. In this regime, the global capacity-memory tradeoff grows by the total cache memory budget divided by the number of files in the system. In other words, a perfect global caching gain is achievable in this regime and the performance corresponds to a system where all the cache contents in the network are available to all receivers. For large cache memories, it is optimal to assign a positive cache memory to every receiver, such that the weaker receivers are assigned larger cache memories compared to the stronger receivers. In this regime, the growth rate of the global capacity-memory tradeoff is further divided by the number of users, which corresponds to a local caching gain. It is observed numerically that a uniform assignment of the total cache memory is suboptimal in all regimes, unless the BC is completely symmetric. For erasure BCs, this claim is proved analytically in the regime of small cache sizes.
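The two growth regimes described above can be sketched numerically (a hedged illustration only: the exact regime thresholds depend on the BC statistics and are not reproduced here, and the function below is hypothetical, not from the paper):

```python
def tradeoff_growth(extra_budget: float, N: int, K: int, regime: str) -> float:
    """Approximate increase in the global capacity-memory tradeoff per
    unit of additional total cache memory budget, in the two regimes
    described in the abstract."""
    if regime == "small":
        # All memory goes to the weakest receiver: perfect global
        # caching gain, slope = 1 / (number of files).
        return extra_budget / N
    if regime == "large":
        # Every receiver gets positive memory: only a local caching
        # gain remains, slope further divided by the number of users.
        return extra_budget / (N * K)
    raise ValueError("regime must be 'small' or 'large'")

N, K = 100, 4   # example: 100 files, 4 users
print(tradeoff_growth(1.0, N, K, "small"))   # global caching gain
print(tradeoff_growth(1.0, N, K, "large"))   # local caching gain
```

The factor-of-K gap between the two slopes is what makes the nonuniform cache assignment in the small-budget regime so valuable.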