Caching with Partial Adaptive Matching
We study the caching problem when we are allowed to match each user to one of
a subset of caches after its request is revealed. We focus on non-uniformly
popular content, specifically when the file popularities obey a Zipf
distribution. We study two extremal schemes, one focusing on coded server
transmissions while ignoring matching capabilities, and the other focusing on
adaptive matching while ignoring potential coding opportunities. We derive the
rates achieved by these schemes and characterize the regimes in which one
outperforms the other. We also compare them to information-theoretic outer
bounds, and finally propose a hybrid scheme that generalizes ideas from the two
schemes and performs at least as well as either of them in most memory regimes.
Comment: 35 pages, 7 figures. Shorter versions have appeared in IEEE ISIT 2017 and IEEE ITW 201
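The Zipf popularity model underlying this comparison is easy to make concrete. The sketch below (function name and parameters are illustrative, not from the paper) computes the normalized Zipf probabilities p_k ∝ k^(-α) over a library of files, which is the popularity profile the two extremal schemes are evaluated against:

```python
def zipf_popularities(n_files: int, alpha: float) -> list:
    """Return Zipf popularities: p_k proportional to k**(-alpha),
    normalized so the probabilities over the n_files ranks sum to 1."""
    weights = [k ** (-alpha) for k in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Popularity profile for a 1000-file library with Zipf parameter 1.2:
probs = zipf_popularities(1000, 1.2)
```

With α > 1 the mass concentrates on the head of the distribution, which is the regime where adaptive matching to a cache that already holds the popular file is most useful; flatter distributions favor coded transmissions.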
Uncoded Caching and Cross-level Coded Delivery for Non-uniform File Popularity
Proactive content caching at user devices and coded delivery are studied
considering a non-uniform file popularity distribution. A novel centralized
uncoded caching and coded delivery scheme, which can be applied to large file
libraries, is proposed. The proposed cross-level coded delivery (CLCD) scheme
is shown to achieve a lower average delivery rate than the state of the art. In the
proposed CLCD scheme, the same subpacketization is used for all the files in
the library in order to prevent additional zero-padding in the delivery phase,
and unlike the existing schemes in the literature, two users requesting files
from different popularity groups can be served by the same multicast message in
order to reduce the delivery rate. Simulation results indicate significant
reduction in the average delivery rate for typical Zipf distribution parameter
values.
Comment: A shorter version of this paper has been presented at IEEE International Conference on Communications (ICC) 201
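The key delivery primitive that lets one multicast message serve two users with different demands is a cached-side-information XOR. The toy sketch below (subfile names and contents are hypothetical; this is the standard coded-caching XOR step, not the full CLCD scheme) shows two users, each missing the subfile the other has cached, both served by a single coded transmission:

```python
def xor_bytes(x: bytes, y: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

# Hypothetical setup: files A and B are each split into two subfiles.
# User 1 caches subfiles A1 and B1; user 2 caches subfiles A2 and B2.
a2 = b"subfile-A2"   # the part of file A that user 1 is missing
b1 = b"subfile-B1"   # the part of file B that user 2 is missing

# User 1 requests A, user 2 requests B: one multicast serves both.
coded = xor_bytes(a2, b1)

# Each user XORs out its cached subfile to recover the missing one.
decoded_by_user1 = xor_bytes(coded, b1)   # user 1 holds b1 in its cache
decoded_by_user2 = xor_bytes(coded, a2)   # user 2 holds a2 in its cache
```

Using the same subpacketization for all files, as the CLCD scheme does, keeps subfiles the same length, so XORs like the one above need no zero-padding even when the two requested files come from different popularity groups.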
Multi-access Coded Caching with Decentralized Prefetching
An extension of coded caching, referred to as multi-access coded caching, in
which each user can access multiple caches and each cache can serve multiple
users, is considered in this paper. Most of the literature on multi-access coded caching
focuses on cyclic wrap-around cache access where each user is allowed to access
an exclusive set of consecutive caches only. In this paper, a more general
framework for the multi-access caching problem is considered, in which each user is
allowed to randomly connect to a specific number of caches and multiple users
can access the same set of caches. For the proposed system model considering
decentralized prefetching, a new delivery scheme is proposed and an expression
for the per-user delivery rate is obtained. A lower bound on the delivery rate is
derived using techniques from index coding. The proposed scheme is shown to be
optimal among all the linear schemes under certain conditions. An improved
delivery rate and a lower bound for the decentralized multi-access coded
caching scheme with cyclic wrap-around cache access can be obtained as a
special case. By giving specific values to certain parameters, the results of
the decentralized shared caching scheme and of the conventional decentralized
caching scheme can be recovered.
Comment: 26 pages, 6 figures, 6 tables, Submitted to IEEE Transactions on Communication
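The system model combines two ingredients that are simple to simulate: decentralized prefetching, where each cache independently stores each bit of each file with probability M/N, and cyclic wrap-around access, the special case where a user reaches a set of consecutive caches. The sketch below (function names and parameters are illustrative, not from the paper) shows both, with a user's effective side information being the union of the caches it can reach:

```python
import random

def decentralized_prefetch(n_files: int, bits_per_file: int,
                           cache_fraction: float, seed: int = 0) -> set:
    """Decentralized prefetching: a cache stores each bit of each file
    independently with probability M/N (here `cache_fraction`).
    Returns the set of (file, bit) pairs stored at this cache."""
    rng = random.Random(seed)
    return {(f, b) for f in range(n_files) for b in range(bits_per_file)
            if rng.random() < cache_fraction}

def cyclic_access(user: int, n_caches: int, span: int) -> list:
    """Cyclic wrap-around access: user u reaches the `span` consecutive
    caches u, u+1, ..., u+span-1 (mod K)."""
    return [(user + i) % n_caches for i in range(span)]

# Five caches, each filled independently; user 4 reaches caches 4 and 0.
caches = [decentralized_prefetch(4, 100, 0.3, seed=c) for c in range(5)]
reachable = cyclic_access(4, 5, 2)
known = set().union(*(caches[c] for c in reachable))
```

In the paper's more general model, the `reachable` list would instead be an arbitrary subset of caches, possibly shared with other users, which is what distinguishes it from the cyclic wrap-around special case above.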