Fundamental Limits of Caching in Wireless D2D Networks
We consider a wireless Device-to-Device (D2D) network where communication is
restricted to be single-hop. Users make arbitrary requests from a finite
library of files and have pre-cached information on their devices, subject to a
per-node storage capacity constraint. A similar problem has already been
considered in an "infrastructure" setting, where all users receive a common
multicast (coded) message from a single omniscient server (e.g., a base station
having all the files in the library) through a shared bottleneck link. In this
work, we consider a D2D "infrastructure-less" version of the problem. We
propose a caching strategy based on deterministic assignment of subpackets of
the library files, and a coded delivery strategy where the users send linearly
coded messages to each other in order to collectively satisfy their demands. We
also consider a random caching strategy, which is more suitable to a fully
decentralized implementation. Under certain conditions, both approaches can
achieve the information theoretic outer bound within a constant multiplicative
factor. In our previous work, we showed that a caching D2D wireless network
with one-hop communication, random caching, and uncoded delivery, achieves the
same throughput scaling law of the infrastructure-based coded multicasting
scheme, in the regime of large number of users and files in the library. This
shows that the spatial reuse gain of the D2D network is order-equivalent to the
coded multicasting gain of single base station transmission. It is therefore
natural to ask whether these two gains are cumulative, i.e., if a D2D network
with both local communication (spatial reuse) and coded multicasting can
provide an improved scaling law. Somewhat counterintuitively, we show that
these gains do not cumulate (in terms of throughput scaling law).
Comment: 45 pages, 5 figures. Submitted to IEEE Transactions on Information
Theory. This is the extended version of the conference (ITW) paper
arXiv:1304.585
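The coded delivery idea in the abstract above can be illustrated with a minimal sketch. This is a toy example, not the paper's full scheme: three users, three files, and a placement in which each user caches every subpacket whose index set contains its own ID, so each user misses exactly one subpacket of its demanded file and each XOR-coded transmission is simultaneously useful to two receivers. All file contents, IDs, and helper names here are hypothetical.

```python
# Toy sketch of XOR-coded D2D delivery (illustrative only, not the paper's
# exact scheme). K = 3 users, N = 3 files; each file is split into three
# subpackets indexed by 2-element user subsets, and user k caches every
# subpacket whose index set contains k.
from itertools import combinations

USERS = (1, 2, 3)
FILES = {"A": b"\x10\x11\x12", "B": b"\x20\x21\x22", "C": b"\x30\x31\x32"}
SUBSETS = list(combinations(USERS, 2))          # (1,2), (1,3), (2,3)

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

# Subpacketization: byte i of file f is subpacket (f, SUBSETS[i]).
subpacket = {(f, S): FILES[f][i:i + 1]
             for f in FILES for i, S in enumerate(SUBSETS)}

# Placement: user k caches every subpacket whose index set contains k.
cache = {k: {idx: data for idx, data in subpacket.items() if k in idx[1]}
         for k in USERS}

# Demands: each user misses exactly one subpacket of its requested file.
demand = {1: "A", 2: "B", 3: "C"}
missing = {k: (demand[k], tuple(u for u in USERS if u != k)) for k in USERS}

# Coded delivery: user k multicasts the XOR of the subpackets missed by the
# other users (all of which sit in k's own cache).
tx = {}
for k in USERS:
    msg = b"\x00"
    for j in USERS:
        if j != k:
            msg = xor(msg, subpacket[missing[j]])
    tx[k] = msg

# Decoding: user i removes the cached terms from another user's message.
def decode(i, sender):
    msg = tx[sender]
    for j in USERS:
        if j not in (i, sender):
            msg = xor(msg, cache[i][missing[j]])
    return msg

for i in USERS:
    sender = next(k for k in USERS if k != i)
    assert decode(i, sender) == subpacket[missing[i]]
print("all users recovered their missing subpackets")
```

In this toy setting each user's missing subpacket is actually recoverable from every other user's transmission; the schemes in the paper split subpackets further to remove that redundancy and approach the outer bound.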
NOMA Assisted Wireless Caching: Strategies and Performance Analysis
Conventional wireless caching assumes that content can be pushed to local
caching infrastructure during off-peak hours in an error-free manner; however,
this assumption is not applicable if local caches need to be frequently updated
via wireless transmission. This paper investigates a new approach to wireless
caching for the case when cache content has to be updated during on-peak hours.
Two non-orthogonal multiple access (NOMA) assisted caching strategies are
developed, namely the push-then-deliver strategy and the push-and-deliver
strategy. In the push-then-deliver strategy, the NOMA principle is applied to
push more content files to the content servers during a short time interval
reserved for content pushing in on-peak hours and to provide more connectivity
for content delivery, compared to the conventional orthogonal multiple access
(OMA) strategy. The push-and-deliver strategy is motivated by the fact that
some users' requests cannot be accommodated locally and the base station has to
serve them directly. These events during the content delivery phase are
exploited as opportunities for content pushing, which further facilitates the
frequent update of the files cached at the content servers. It is also shown
that this strategy can be straightforwardly extended to device-to-device
caching, and various analytical results are developed to illustrate the
superiority of the proposed caching strategies compared to OMA-based schemes.
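The NOMA principle underlying both strategies can be sketched numerically. The following is a minimal illustration (not the paper's system model): a two-user downlink with superposition coding at the transmitter and successive interference cancellation (SIC) at the stronger receiver, using BPSK symbols over a noiseless channel so the decoding steps are easy to trace. The power split and symbol values are hypothetical.

```python
# Minimal sketch of two-user downlink NOMA with superposition coding and
# SIC (illustrative assumptions: BPSK symbols, noiseless channel,
# hypothetical power split).
import math

p_far, p_near = 0.8, 0.2          # more power to the weaker (far) user
s_far, s_near = +1, -1            # BPSK symbols intended for each user

# The base station superimposes both signals in a single transmission.
x = math.sqrt(p_far) * s_far + math.sqrt(p_near) * s_near

def sign(v):
    return 1 if v >= 0 else -1

# Far user: decodes its own symbol directly, treating the near user's
# (weaker) signal component as noise.
far_hat = sign(x)

# Near user: SIC - first decode the far user's symbol, subtract its
# contribution, then decode its own symbol from the residual.
far_at_near = sign(x)
residual = x - math.sqrt(p_far) * far_at_near
near_hat = sign(residual)

assert far_hat == s_far and near_hat == s_near
print("far user decoded:", far_hat, "near user decoded:", near_hat)
```

Because the far user gets the larger power share, its symbol dominates the superimposed signal and both decoding steps succeed; this is the same mechanism that lets the push-then-deliver strategy serve pushing and delivery simultaneously.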
A Learning-Based Approach to Caching in Heterogenous Small Cell Networks
A heterogeneous network with base stations (BSs), small base stations (SBSs)
and users distributed according to independent Poisson point processes is
considered. SBS nodes are assumed to possess high storage capacity and to form
a distributed caching network. Popular files are stored in local caches of
SBSs, so that a user can download the desired files from one of the SBSs in its
vicinity. The offloading-loss is captured via a cost function that depends on
the random caching strategy proposed here. The popularity profile of cached
content is unknown and estimated using instantaneous demands from users within
a specified time interval. An estimate of the cost function is obtained from
which an optimal random caching strategy is devised. The training time needed
to bring the achieved cost within a given tolerance of the optimal cost is
finite provided the user density is greater than a predefined threshold, and
it scales with the support size of the popularity profile. A transfer
learning-based approach to improve this estimate is proposed. The training
time is reduced when the popularity profile is modeled using a parametric
family of distributions; the resulting delay is independent of the support
size and scales linearly with the dimension of the distribution parameter.
Comment: 12 pages, 5 figures, published in IEEE Transactions on
Communications, 2016.
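The estimate-then-cache pipeline described in this abstract can be sketched as follows. This is a simplified stand-in, not the paper's estimator or optimized policy: the popularity profile is estimated empirically from requests observed during a training interval, and each SBS then fills its cache by sampling files with probability proportional to the estimate. The file names, counts, and cache size are all hypothetical.

```python
# Minimal sketch of learning-based random caching (illustrative only):
# estimate an unknown popularity profile from observed demands, then
# devise a simple probabilistic placement from the estimate.
from collections import Counter
import random

CACHE_SIZE = 2  # files each SBS can store (hypothetical)

# Demands observed during the training interval (synthetic counts).
requests = ["f0"] * 50 + ["f1"] * 25 + ["f2"] * 13 + ["f3"] * 8 + ["f4"] * 4

# Empirical estimate of the popularity profile.
counts = Counter(requests)
total = sum(counts.values())
popularity = {f: c / total for f, c in counts.items()}

def place_cache(rng):
    """Each SBS independently samples CACHE_SIZE distinct files with
    probability proportional to estimated popularity (a simple stand-in
    for the optimized random caching strategy)."""
    files, weights = zip(*popularity.items())
    chosen = set()
    while len(chosen) < CACHE_SIZE:
        chosen.add(rng.choices(files, weights=weights)[0])
    return chosen

rng = random.Random(0)
cache = place_cache(rng)

# Probability a request is served locally (the complement of the
# offloading loss in this toy model).
hit_prob = sum(popularity[f] for f in cache)
print("cached:", sorted(cache), "local-hit probability:", round(hit_prob, 2))
```

As the training interval grows, the empirical `popularity` concentrates around the true profile, which is the mechanism behind the finite-training-time guarantee; the transfer-learning variant instead fits a low-dimensional parametric model, shrinking the number of quantities to learn.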