Cache-Aided Coded Multicast for Correlated Sources
The combination of edge caching and coded multicasting is a promising
approach to improve the efficiency of content delivery over cache-aided
networks. In current solutions, however, the global caching gain arising from
content overlap distributed across the network is limited by the increasingly
personalized nature of the content consumed by users. In this paper, the
cache-aided coded multicast problem is generalized to account for the
correlation among the network content by formulating a source compression
problem with distributed side information. A correlation-aware achievable
scheme is proposed and an upper bound on its performance is derived. It is
shown that considerable load reductions can be achieved, compared to
state-of-the-art correlation-unaware schemes, when the caching and delivery phases
specifically account for the correlation among the content files.
Comment: In proceedings of the IEEE International Symposium on Turbo Codes and Iterative Information Processing (ISTC), 201
Correlation-Aware Distributed Caching and Coded Delivery
Cache-aided coded multicast leverages side information at wireless edge
caches to efficiently serve multiple groupcast demands via common multicast
transmissions, leading to load reductions that are proportional to the
aggregate cache size. However, the increasingly unpredictable and personalized
nature of the content that users consume challenges the efficiency of existing
caching-based solutions in which only exact content reuse is explored. This
paper generalizes the cache-aided coded multicast problem to a source
compression with distributed side information problem that specifically
accounts for the correlation among the content files. It is shown how joint
file compression during the caching and delivery phases can provide load
reductions that go beyond those achieved with existing schemes. This is
accomplished through a lower bound on the fundamental rate-memory trade-off as
well as a correlation-aware achievable scheme, shown to significantly
outperform state-of-the-art correlation-unaware solutions, while approaching
the limiting rate-memory trade-off.
Comment: In proceedings of the IEEE Information Theory Workshop (ITW), 201
Broadcast Caching Networks with Two Receivers and Multiple Correlated Sources
The correlation among the content distributed across a cache-aided broadcast
network can be exploited to reduce the delivery load on the shared wireless
link. This paper considers a two-user three-file network with correlated
content, and studies its fundamental limits for the worst-case demand. A class
of achievable schemes based on a two-step source coding approach is proposed.
Library files are first compressed using Gray-Wyner source coding, and then
cached and delivered using a combination of correlation-unaware cache-aided
coded multicast schemes. The second step is interesting in its own right and
considers a multiple-request caching problem, whose solution requires coding in
the placement phase. A lower bound on the optimal peak rate-memory trade-off is
derived, which is used to evaluate the performance of the proposed scheme. It
is shown that for symmetric sources the two-step strategy achieves the lower
bound for large cache capacities, and it is within half of the joint entropy of
two of the sources conditioned on the third source for all other cache sizes.
Comment: In Proceedings of the Asilomar Conference on Signals, Systems and Computers, Pacific Grove, California, November 201
On Caching with More Users than Files
Caching appears to be an efficient way to reduce peak hour network traffic
congestion by storing some content at the user's cache without knowledge of
later demands. Recently, Maddah-Ali and Niesen proposed a two-phase
(placement and delivery) coded caching strategy for centralized systems (where
coordination among users is possible in the placement phase), and for
decentralized systems. This paper investigates the same setup under the further
assumption that the number of users is larger than the number of files. By
using the same uncoded placement strategy of Maddah-Ali and Niesen, a novel
coded delivery strategy is proposed to profit from the multicasting
opportunities that arise because a file may be demanded by multiple users. The
proposed delivery method is proved to be optimal under the constraint of
uncoded placement for centralized systems with two files; moreover, it is
shown to outperform known caching strategies for both centralized and
decentralized systems.
Comment: 6 pages, 3 figures, submitted to ISIT 201
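To make the two-phase strategy concrete, here is a minimal Python sketch of the standard centralized Maddah-Ali–Niesen scheme that the abstract builds on (not the paper's improved delivery for more users than files); the function names and the subset-based placement encoding are illustrative assumptions.

```python
from itertools import combinations

def man_delivery_rate(K: int, N: int, M: float) -> float:
    """Peak delivery rate (in file units) of the centralized
    Maddah-Ali--Niesen scheme, assuming t = K*M/N is an integer and
    users request distinct files (the worst case when K <= N)."""
    t = K * M / N
    assert t.is_integer(), "this closed form assumes t = K*M/N is an integer"
    t = int(t)
    return (K - t) / (1 + t)

def man_placement(K: int, t: int):
    """Uncoded placement: each file is split into C(K, t) subpackets,
    one per t-subset S of users; every user in S caches subpacket F_S."""
    return list(combinations(range(K), t))
```

For example, with K = 4 users, N = 4 files, and M = 1 file of cache per user (so t = 1), the formula gives a peak load of 1.5 files instead of the 3 files an uncoded scheme would need.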
An Efficient Coded Multicasting Scheme Preserving the Multiplicative Caching Gain
Coded multicasting has been shown to be a promising approach to
significantly improve the caching performance of content delivery networks with
multiple caches downstream of a common multicast link. However, achievable
schemes proposed to date have been shown to achieve the proved order-optimal
performance only in the asymptotic regime in which the number of packets per
requested item goes to infinity. In this paper, we first extend the asymptotic
analysis of the achievable scheme in [1], [2] to the case of heterogeneous
cache sizes and demand distributions, providing the best known upper bound on
the fundamental limiting performance when the number of packets goes to
infinity. We then show that the scheme achieving this upper bound quickly loses
its multiplicative caching gain for finite content packetization. To overcome
this limitation, we design a novel polynomial-time algorithm based on random
greedy graph-coloring that, while keeping the same finite content
packetization, recovers a significant part of the multiplicative caching gain.
Our results show that the order-optimal coded multicasting schemes proposed to
date, while useful in quantifying the fundamental limiting performance, must be
properly designed for practical regimes of finite packetization.
Comment: 6 pages, 7 figures, published in Infocom CNTCV 201
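The random greedy graph-coloring idea can be sketched as follows; the paper's conflict-graph construction and algorithm are more involved, so the function name and the adjacency-dict encoding here are illustrative assumptions.

```python
import random

def random_greedy_coloring(adj: dict, seed: int = 0) -> dict:
    """Color vertices in a random order, giving each vertex the smallest
    color not used by its already-colored neighbors. In coded multicast,
    vertices are requested packets, an edge joins two packets that cannot
    share an XORed transmission, and each color class becomes one multicast
    transmission, so fewer colors means a smaller delivery load."""
    order = list(adj)
    random.Random(seed).shuffle(order)
    color = {}
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:  # smallest color absent among colored neighbors
            c += 1
        color[v] = c
    return color

# A triangle (three mutually conflicting packets) needs 3 transmissions:
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
```

In practice one would run the greedy pass over several random orders and keep the coloring with the fewest colors, which is where the "random" in random greedy coloring pays off.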
A Novel Centralized Strategy for Coded Caching with Non-uniform Demands
Despite significant progress in the caching literature concerning the worst
case and uniform average case regimes, the algorithms for caching with
nonuniform demands are still at a basic stage and mostly rely on simple
grouping and memory-sharing techniques. In this work we introduce a novel
centralized caching strategy for caching with nonuniform file popularities. Our
scheme allows for assigning more cache to the files which are more likely to be
requested, while maintaining the same sub-packetization for all the files. As a
result, in the delivery phase it is possible to perform linear codes across
files with different popularities without resorting to zero-padding or
concatenation techniques. We will describe our placement strategy for arbitrary
range of parameters. The delivery phase will be outlined for a small example
for which we are able to show a noticeable improvement over the state of the
art.
Comment: 4 pages, 3 figures, submitted to the 2018 International Zurich Seminar on Information and Communication
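As background for the placement idea, here is a toy sketch of popularity-proportional cache allocation under a Zipf popularity model; this is a simple heuristic for illustration only, not the paper's actual placement scheme.

```python
def zipf_popularities(N: int, alpha: float) -> list:
    """Zipf file-request probabilities: p_n proportional to n**(-alpha)."""
    w = [n ** -alpha for n in range(1, N + 1)]
    s = sum(w)
    return [x / s for x in w]

def proportional_allocation(pop: list, M: float) -> list:
    """Split a cache of M files' worth of memory across the library in
    proportion to popularity, capping each file's share at one full file.
    (Capped mass is simply left unused here; a real scheme would
    redistribute it among the remaining files.)"""
    return [min(1.0, M * p) for p in pop]
```

The point of the abstract's scheme is precisely that such unequal per-file cache shares can be achieved while keeping the same sub-packetization for every file, so the delivery phase can still code across files of different popularities.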
Cache-Enabled Broadcast Packet Erasure Channels with State Feedback
We consider a cache-enabled K-user broadcast erasure packet channel in which
a server with a library of N files wishes to deliver a requested file to each
user who is equipped with a cache of a finite memory M. Assuming that the
transmitter has state feedback and user caches can be filled during off-peak
hours reliably by decentralized cache placement, we characterize the optimal
rate region as a function of the memory size and the erasure probability. The
proposed delivery scheme, based on the scheme proposed by Gatzianas et al.,
exploits the receiver side information established during the placement phase.
Our results enable us to quantify the net benefits of decentralized coded
caching in the presence of erasures. State feedback is found to be especially
useful when the erasure probability is large and/or the normalized memory
size is small.
Comment: 8 pages, 4 figures, to be presented at the 53rd Annual Allerton Conference on Communication, Control, and Computing, IL, USA
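To see why feedback matters most at high erasure probability, here is a small Monte Carlo sketch (an illustrative simplification, not the paper's rate-region analysis): it counts the broadcast uses needed until all K users have received one common packet, when state feedback lets the sender stop as soon as everyone has it.

```python
import random

def avg_broadcast_uses(K: int, eps: float, trials: int = 5000,
                       seed: int = 1) -> float:
    """Monte Carlo estimate of the expected number of broadcast channel
    uses until each of K users, behind independent erasure channels with
    erasure probability eps, has received one common packet. With state
    feedback the sender retransmits only while some user is missing it."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        got = [False] * K
        uses = 0
        while not all(got):
            uses += 1
            for k in range(K):
                if not got[k] and rng.random() >= eps:  # packet survives
                    got[k] = True
        total += uses
    return total / trials
```

Without feedback the sender would have to retransmit a fixed, conservatively chosen number of times; the gap between that and the stop-on-feedback count grows with eps, matching the abstract's observation.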