Adaptive Delivery in Caching Networks
The problem of content delivery in caching networks is investigated for
scenarios where multiple users request identical files. Redundant user demands
are likely when the file popularity distribution is highly non-uniform or the
user demands are positively correlated. An adaptive method is proposed for the
delivery of redundant demands in caching networks. Based on the redundancy
pattern in the current demand vector, the proposed method decides between the
transmission of uncoded messages or the coded messages of [1] for delivery.
Moreover, a lower bound on the delivery rate of redundant requests is derived
based on a cutset bound argument. The performance of the adaptive method is
investigated through numerical examples of the delivery rate of several
specific demand vectors as well as the average delivery rate of a caching
network with correlated requests. The adaptive method is shown to considerably
reduce the gap between the non-adaptive delivery rate and the lower bound. In
some specific cases, using the adaptive method, this gap shrinks by almost 50%
for the average rate.

Comment: 8 pages, 8 figures. Submitted to IEEE Transactions on Communications in
2015. A short version of this article was published in IEEE Communications
Letters with DOI: 10.1109/LCOMM.2016.255814
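The choice between uncoded and coded delivery described in the abstract can be sketched as follows. The closed-form coded rate is that of the scheme of [1] (Maddah-Ali and Niesen), but the decision rule here is a deliberately simplified stand-in: the paper's actual method depends on the full redundancy pattern of the demand vector, not just the number of distinct requests.

```python
def coded_rate(K, M, N):
    # Delivery rate of the coded scheme of [1], normalized by the
    # file size F, for K users with caches of M files out of N.
    t = K * M / N
    return K * (1 - M / N) / (1 + t)

def adaptive_rate(demands, M, N):
    # Simplified adaptive rule (hypothetical stand-in for the paper's
    # criterion): serve each *distinct* requested file with one uncoded
    # multicast (every user already caches an M/N fraction of it), or
    # fall back to coded delivery for all users, whichever is cheaper.
    K = len(demands)
    uncoded = len(set(demands)) * (1 - M / N)
    return min(uncoded, coded_rate(K, M, N))
```

For instance, with K = 4 users, M = 1, N = 4 and all users requesting the same file, the uncoded rate is 0.75 while the coded rate is 1.5, so this rule transmits uncoded; with four distinct requests it falls back to coded delivery.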
Coded Caching for a Large Number of Users
Information theoretic analysis of a coded caching system is considered, in
which a server with a database of N equal-size files, each F bits long, serves
K users. Each user is assumed to have a local cache that can store M files,
i.e., capacity of MF bits. Proactive caching to user terminals is considered,
in which the caches are filled by the server in advance during the placement
phase, without knowing the user requests. Each user requests a single file, and
all the requests are satisfied simultaneously through a shared error-free link
during the delivery phase.
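As a reference point for the placement and delivery phases just described, the classical centralized scheme of Maddah-Ali and Niesen can be sketched as below; this is illustrative bookkeeping for the baseline scheme, not the GBC scheme proposed in this paper, and it assumes t = KM/N is an integer.

```python
from itertools import combinations
from math import comb

def mn_centralized(K, M, N):
    # Centralized placement/delivery bookkeeping for the baseline
    # Maddah-Ali--Niesen scheme (illustrative; not the proposed GBC
    # scheme). Assumes t = KM/N is an integer.
    t = K * M // N
    # Placement: each file is split into C(K, t) subfiles, one per
    # t-subset of users; user k caches every subfile whose index set
    # contains k, filling exactly MF bits in total.
    cache = {k: [S for S in combinations(range(K), t) if k in S]
             for k in range(K)}
    # Delivery: one XOR-coded multicast per (t+1)-subset of users.
    multicasts = list(combinations(range(K), t + 1))
    rate = comb(K, t + 1) / comb(K, t)  # = K(1 - M/N) / (1 + KM/N)
    return cache, multicasts, rate
```

For K = 4, M = 1, N = 4 this gives t = 1, six coded multicasts, and a delivery rate of 1.5 files; the GBC scheme of this abstract improves on that rate at the cache capacity M = N/K.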
First, centralized coded caching is studied assuming both the number and the
identity of the active users in the delivery phase are known by the server
during the placement phase. A novel group-based centralized coded caching (GBC)
scheme is proposed for a cache capacity of M = N/K. It is shown that this
scheme achieves a smaller delivery rate than all the known schemes in the
literature. The improvement is then extended to a wider range of cache
capacities through memory-sharing between the proposed scheme and other known
schemes in the literature. Next, the proposed centralized coded caching idea is
exploited in the decentralized setting, in which the identities of the users
that participate in the delivery phase are assumed to be unknown during the
placement phase. It is shown that the proposed decentralized caching scheme
also achieves a delivery rate smaller than the state of the art. Numerical
simulations are also presented to corroborate our theoretical results.
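The decentralized placement referred to above can be sketched as independent random caching, in the spirit of Maddah-Ali and Niesen's decentralized scheme; this is an illustrative baseline rather than the decentralized scheme proposed in this paper.

```python
import random

def decentralized_placement(K, M, N, F, seed=0):
    # Each user independently caches a uniformly random MF/N-bit
    # subset of every file, with no knowledge of which users will be
    # active in the delivery phase. Illustrative baseline only.
    rng = random.Random(seed)
    bits_per_file = int(M * F / N)
    return {k: {n: set(rng.sample(range(F), bits_per_file))
                for n in range(N)}
            for k in range(K)}
```

Each user's cache then holds MF bits in total, and delivery can proceed by opportunistically XOR-ing the fragments of each requested file that happen to be cached by the other active users.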
Generalized Degrees of Freedom of the Symmetric Cache-Aided MISO Broadcast Channel with Partial CSIT
We consider the cache-aided MISO broadcast channel (BC) in which a
multi-antenna transmitter serves single-antenna receivers, each equipped
with a cache memory. The transmitter has access to partial knowledge of the
channel state information. For a symmetric setting, in terms of channel
strength levels, partial channel knowledge levels and cache sizes, we
characterize the generalized degrees of freedom (GDoF) up to a constant
multiplicative factor. The achievability scheme exploits the interplay between
spatial multiplexing gains and the coded-multicasting gain. On the other hand, a
cut-set-based argument, in conjunction with a GDoF outer bound for a parallel
MISO BC under channel uncertainty, is used for the converse. We further show
that the characterized order-optimal GDoF is also attained in a decentralized
setting, where no coordination is required for content placement in the caches.

Comment: first revision