Fundamental Limits of Caching in Wireless D2D Networks
We consider a wireless Device-to-Device (D2D) network where communication is
restricted to be single-hop. Users make arbitrary requests from a finite
library of files and have pre-cached information on their devices, subject to a
per-node storage capacity constraint. A similar problem has already been
considered in an "infrastructure" setting, where all users receive a common
multicast (coded) message from a single omniscient server (e.g., a base station
having all the files in the library) through a shared bottleneck link. In this
work, we consider a D2D "infrastructure-less" version of the problem. We
propose a caching strategy based on deterministic assignment of subpackets of
the library files, and a coded delivery strategy where the users send linearly
coded messages to each other in order to collectively satisfy their demands. We
also consider a random caching strategy, which is more suitable to a fully
decentralized implementation. Under certain conditions, both approaches can
achieve the information theoretic outer bound within a constant multiplicative
factor. In our previous work, we showed that a caching D2D wireless network
with one-hop communication, random caching, and uncoded delivery achieves the
same throughput scaling law as the infrastructure-based coded multicasting
scheme in the regime of a large number of users and files in the library. This
shows that the spatial reuse gain of the D2D network is order-equivalent to the
coded multicasting gain of single-base-station transmission. It is therefore
natural to ask whether these two gains are cumulative, i.e., whether a D2D
network with both local communication (spatial reuse) and coded multicasting
can provide an improved scaling law. Somewhat counterintuitively, we show that
these gains do not cumulate (in terms of throughput scaling law).
Comment: 45 pages, 5 figures, submitted to IEEE Transactions on Information
Theory. This is the extended version of the conference (ITW) paper
arXiv:1304.585
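The deterministic subpacket assignment and linearly coded delivery described above follow the combinatorial structure of coded caching. The sketch below is an illustration of that structure, not the paper's exact D2D scheme: it assumes K users with cache parameter t, each file split into C(K, t) subpackets indexed by t-subsets of users, and one XOR transmission per (t+1)-subset.

```python
from itertools import combinations

# Illustrative coded-caching sketch (K, t, and the one-file-per-user
# demand pattern are assumptions for this example).
K, t = 4, 2                      # users; t = K*M/N is the cache parameter
users = list(range(K))
files = list(range(K))           # small library, one demand per user

# Placement: user u caches subpacket (f, S) of every file f,
# for each t-subset S that contains u.
cache = {u: set() for u in users}
for f in files:
    for S in combinations(users, t):
        for u in S:
            cache[u].add((f, S))

# Delivery: for each (t+1)-subset T, a single XOR transmission serves
# every user in T; user u decodes subpacket (d[u], T\{u}) because it
# already caches all the other subpackets combined in that XOR.
d = [0, 1, 2, 3]                 # user u demands file d[u]
received = {u: set() for u in users}
for T in combinations(users, t + 1):
    for u in T:
        others = [(d[v], tuple(sorted(set(T) - {v}))) for v in T if v != u]
        assert all(p in cache[u] for p in others)   # decodability check
        received[u].add((d[u], tuple(sorted(set(T) - {u}))))

# Every user now holds all C(K, t) subpackets of its demanded file.
for u in users:
    need = {(d[u], S) for S in combinations(users, t)}
    assert need <= cache[u] | received[u]
print("all demands satisfied")
```

Each XOR serves t+1 users at once, which is the multicasting gain the abstract compares against the spatial reuse gain.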
Living on the Edge: The Role of Proactive Caching in 5G Wireless Networks
This article explores one of the key enablers of 5G wireless networks
leveraging small cell deployments, namely proactive caching. By endowing the
network edge with predictive capabilities and harnessing recent developments in
storage, context awareness, and social networks, peak traffic demands can be
substantially reduced by proactively serving predictable user demands via
caching at base stations and users' devices. In order to show the effectiveness
of proactive caching, we examine two case studies which exploit the spatial and
social structure of the network, where proactive caching plays a crucial role.
Firstly, in order to alleviate backhaul congestion, we propose a mechanism
whereby files are proactively cached during off-peak periods based on file
popularity and correlations among user and file patterns. Secondly,
leveraging social networks and device-to-device (D2D) communications, we
propose a procedure that exploits the social structure of the network by
predicting the set of influential users to (proactively) cache strategic
contents and disseminate them to their social ties via D2D communications.
Exploiting this proactive caching paradigm, numerical results show that
significant gains can be obtained in each case study, in terms of both backhaul
savings and the ratio of satisfied users. Higher gains can be further obtained
by increasing the storage capability at the network edge.
Comment: accepted for publication in IEEE Communications Magazine
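The backhaul-offloading mechanism of the first case study can be illustrated with a minimal sketch: a base station proactively caches the M most popular files during off-peak periods, and the cache hit ratio equals the fraction of peak-hour backhaul traffic saved. The Zipf popularity model and all parameter values below are assumptions for illustration, not figures from the article.

```python
# Minimal popularity-based proactive caching sketch (Zipf model and
# parameter values are illustrative assumptions).

def zipf_popularity(n_files, skew):
    # Request probability of the file at each popularity rank.
    weights = [1.0 / (rank ** skew) for rank in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_ratio(popularity, cache_size):
    # Cache the most popular files; a request is a hit iff its file is cached.
    return sum(sorted(popularity, reverse=True)[:cache_size])

pop = zipf_popularity(1000, 1.2)
# Because popularity is skewed, caching a small fraction of the library
# offloads a disproportionately large fraction of the requests.
print(round(hit_ratio(pop, 50), 3))
print(round(hit_ratio(pop, 100), 3))
```

The skew parameter plays the role of the "file popularity and correlation" statistics the article estimates from context information.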
Cooperative Local Caching under Heterogeneous File Preferences
Local caching is an effective scheme that leverages the memory of mobile
terminals (MTs) and short-range communications to save bandwidth and reduce
download delay in cellular systems. Specifically, MTs first cache files in
their local memories during off-peak hours and then exchange the requested
files with each other in the vicinity during peak hours. However, prior works
largely overlook MTs' heterogeneity in file preferences and their selfish
behaviours. In this paper, we categorize the MTs into
different interest groups according to the MTs' preferences. Each group of MTs
aims to increase the probability of successful file discovery from the
neighbouring MTs (from the same or different groups). Hence, we define the
groups' utilities as the probability of successfully discovering the file in
the neighbouring MTs, which should be maximized by deciding the caching
strategies of different groups. By modelling MTs' mobilities as homogeneous
Poisson point processes (HPPPs), we analytically characterize the MTs'
utilities in closed form. We first consider the fully cooperative case, where a
centralizer
helps all groups to make caching decisions. We formulate the problem as a
weighted-sum utility maximization problem, through which the maximum utility
trade-offs of different groups are characterized. Next, we study two benchmark
cases under selfish caching, namely, partial and no cooperation, with and
without inter-group file sharing, respectively. The optimal caching
distributions for these two cases are derived. Finally, numerical examples are
presented to compare the utilities under different cases and show the
effectiveness of the fully cooperative local caching compared to the two
benchmark cases.
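The utility defined above has a simple closed form under the HPPP model: if the number of neighbouring MTs in range is Poisson with mean mu and each caches file f independently with probability q_f, Poisson thinning gives P(discover f) = 1 - exp(-mu * q_f). The sketch below evaluates this utility for two simple caching distributions; the parameter values and the Zipf preference model are illustrative assumptions, not the paper's optimized solutions.

```python
import math

# Success-probability utility under an HPPP contact model (illustrative
# assumptions: mu neighbours on average, Zipf file preferences, C cache
# slots per MT; not the paper's derived optimal distributions).
N, C, MU = 10, 2, 5.0            # library size, cache slots, mean neighbours

def zipf(n, s=1.0):
    w = [1.0 / (r ** s) for r in range(1, n + 1)]
    t = sum(w)
    return [x / t for x in w]

def utility(pref, q, mu):
    # Expected probability of discovering the requested file from a
    # Poisson(mu) number of neighbours, each caching file f w.p. q[f].
    return sum(p * (1.0 - math.exp(-mu * qf)) for p, qf in zip(pref, q))

pref = zipf(N)
q_popular = [1.0 if r < C else 0.0 for r in range(N)]  # everyone caches top-C
q_prop = [C * p for p in pref]                         # popularity-proportional
                                                       # (each entry stays <= 1 here)
u_mp = utility(pref, q_popular, MU)
u_prop = utility(pref, q_prop, MU)
```

With many neighbours available, spreading the cache over more files beats everyone caching the same top files, which is the kind of trade-off the weighted-sum maximization in the paper captures.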
Mitigating Interference in Content Delivery Networks by Spatial Signal Alignment: The Approach of Shot-Noise Ratio
Multimedia content, especially video, is expected to dominate data traffic in
next-generation mobile networks. Caching popular content at the network edge
has emerged as a solution for low-latency content delivery. Compared with
the traditional wireless communication, content delivery has a key
characteristic that many signals coexisting in the air carry identical popular
content. They, however, can interfere with each other at a receiver if their
modulation-and-coding (MAC) schemes are adapted to individual channels
following the classic approach. To address this issue, we present a novel idea
of content-adaptive MAC (CAMAC), where adapting MAC schemes to content ensures
that all signals carrying identical content are encoded using an identical MAC
scheme, achieving spatial MAC alignment. Consequently, interference can be
harnessed as useful signal to improve the reliability of wireless delivery. In the
remaining part of the paper, we focus on quantifying the gain CAMAC can bring
to a content-delivery network using a stochastic-geometry model. Specifically,
content helpers are distributed as a Poisson point process, each of which
transmits a file from a content database based on a given popularity
distribution. It is discovered that the successful content-delivery probability
is closely related to the distribution of the ratio of two independent shot
noise processes, named a shot-noise ratio. The distribution itself is an open
mathematical problem that we tackle in this work. Using stable-distribution
theory and tools from stochastic geometry, the distribution function is derived
in closed form. Extending the result in the context of content-delivery
networks with CAMAC yields the content-delivery probability in different closed
forms. In addition, the gain in the probability due to CAMAC is shown to grow
with the level of skewness in the content popularity distribution.
Comment: 32 pages, to appear in IEEE Trans. on Wireless Communications
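The shot-noise ratio at the heart of the analysis can be probed numerically: each shot noise is a sum of path-loss terms over a Poisson point process, and the ratio of two independent copies is the quantity whose distribution the paper derives in closed form. The Monte Carlo sketch below uses a bounded path-loss law and arbitrary parameter values (both assumptions for this sketch, not the paper's model) and checks the symmetry property P(I1 > I2) = 1/2 for i.i.d. shot noises.

```python
import math, random

random.seed(7)

def poisson(mu):
    # Knuth's method; adequate for moderate mu.
    L, k, p = math.exp(-mu), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def shot_noise(lam, alpha, radius):
    """One sample of I = sum_i ell(|x_i|) over a PPP of intensity lam in a
    disc, with bounded path loss ell(d) = (1 + d)^(-alpha) to avoid the
    singularity at the origin (a modelling assumption of this sketch)."""
    n = poisson(lam * math.pi * radius ** 2)
    total = 0.0
    for _ in range(n):
        d = radius * math.sqrt(random.random())   # radius of a uniform point
        total += (1.0 + d) ** (-alpha)
    return total

# Shot-noise ratio Z = I1 / I2 of two i.i.d. shot noises: by symmetry
# P(Z > 1) = 1/2, which the Monte Carlo estimate should reproduce.
trials = 20000
wins = sum(shot_noise(1.0, 4.0, 3.0) > shot_noise(1.0, 4.0, 3.0)
           for _ in range(trials))
print(round(wins / trials, 3))
```

In the paper's setting the two processes are not identically distributed (useful helpers versus interferers), so the interesting quantity is the full distribution of Z rather than this symmetric special case.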
Modeling and Performance of Uplink Cache-Enabled Massive MIMO Heterogeneous Networks
A significant burden on wireless networks is brought by the uploading of user-generated content to the Internet by means of applications such as social media. To cope with this mobile data tsunami, we develop a novel multiple-input multiple-output (MIMO) network architecture with randomly located base stations (BSs), each equipped with a large number of antennas and employing cache-enabled uplink transmission. In particular, we formulate a scenario where the users upload their content to their strongest BSs, which are distributed as a Poisson point process. In addition, the BSs, exploiting the benefits of massive MIMO, forward their contents to the core network by means of a finite-rate backhaul. After proposing the caching policies, in which the modified von Mises distribution serves as the popularity distribution function, we derive the outage probability and the average delivery rate by taking advantage of tools from deterministic-equivalent and stochastic-geometry analyses. Numerical results investigate the realistic performance gains of the proposed cache-enabled heterogeneous uplink in terms of cardinal operating parameters. For example, insights regarding the BSs' storage size are exposed. Moreover, the impacts of key parameters such as the file popularity distribution and the target bitrate are investigated. Specifically, the outage probability decreases as the storage size is increased, while the average delivery rate increases. In addition, the concentration parameter, which defines the number of files stored at the intermediate nodes (popularity), affects the proposed metrics directly. Furthermore, a higher target rate results in higher outage because fewer users obey this constraint. Also, we demonstrate that a denser network decreases the outage and increases the delivery rate. Hence, the introduction of caching at the uplink of the system design ameliorates the network performance.
Peer reviewed
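The role of the von Mises concentration parameter mentioned above can be illustrated with a small sketch: placing files at evenly spaced angles and weighting them by a von Mises density, a larger concentration kappa skews requests toward fewer files, so caching a fixed number of the most popular files captures more traffic. The circular file placement and all values are assumptions for illustration, and this plain von Mises density is a stand-in for the paper's "modified" variant.

```python
import math

def von_mises_popularity(n_files, kappa):
    """Popularity from a von Mises density exp(kappa*cos(theta)) with files
    at evenly spaced angles in [-pi, pi); discrete normalisation makes the
    Bessel normalising constant cancel. (Illustrative assumption, not the
    paper's exact modified distribution.)"""
    thetas = [-math.pi + 2.0 * math.pi * i / n_files for i in range(n_files)]
    w = [math.exp(kappa * math.cos(t)) for t in thetas]
    s = sum(w)
    return [x / s for x in w]

def hit_probability(popularity, cache_size):
    # The node caches the cache_size most popular files.
    return sum(sorted(popularity, reverse=True)[:cache_size])

low = hit_probability(von_mises_popularity(100, 0.5), 10)    # mild skew
high = hit_probability(von_mises_popularity(100, 5.0), 10)   # strong skew
print(round(low, 3), round(high, 3))
```

At kappa = 0 the distribution is uniform and the hit probability is just cache_size / n_files; increasing kappa raises it, which mirrors the abstract's observation that the concentration parameter directly affects the outage and delivery-rate metrics.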