How Much Can D2D Communication Reduce Content Delivery Latency in Fog Networks with Edge Caching?
A Fog-Radio Access Network (F-RAN) is studied in which cache-enabled Edge
Nodes (ENs) with dedicated fronthaul connections to the cloud aim at delivering
contents to mobile users. Using an information-theoretic approach, this work
tackles the problem of quantifying the potential latency reduction that can be
obtained by enabling Device-to-Device (D2D) communication over out-of-band
broadcast links. Following prior work, the Normalized Delivery Time (NDT) --- a
metric that captures the high signal-to-noise ratio worst-case latency --- is
adopted as the performance criterion of interest. Joint edge caching, downlink
transmission, and D2D communication policies based on compress-and-forward are
proposed that are shown to be information-theoretically optimal to within a
constant multiplicative factor of two for all values of the problem parameters,
and to achieve the minimum NDT in a number of special cases. The analysis
provides insights into the role of D2D cooperation in improving the delivery
latency.
Comment: Submitted to the IEEE Transactions on Communications
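For context, the NDT invoked above is typically defined as follows. This is a sketch in the standard notation of the F-RAN latency literature; the symbols μ (fractional cache size) and r (fronthaul rate) are illustrative and may differ from this paper's exact setup:

```latex
% NDT: expected delivery time E[T] for L bits per user, normalized by
% the reference time L / \log P of an interference-free link at high SNR P
\delta(\mu, r) \;=\; \lim_{P \to \infty} \lim_{L \to \infty}
    \frac{\mathbb{E}\left[T(\mu, r, P)\right]}{L / \log P}
```

An NDT of δ means delivery takes δ times as long as an ideal interference-free baseline, so the factor-of-two optimality claimed above says the achievable NDT is at most twice the minimum for all parameter values.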
Separation Framework: An Enabler for Cooperative and D2D Communication for Future 5G Networks
Soaring capacity and coverage demands dictate that future cellular networks
need to soon migrate towards ultra-dense networks. However, network
densification comes with a host of challenges that include compromised energy
efficiency, complex interference management, cumbersome mobility management,
burdensome signaling overheads, and higher backhaul costs. Interestingly, most
of the problems that beleaguer network densification stem from one feature
common to legacy networks, i.e., tight coupling between the control and data
planes, regardless of their degree of heterogeneity and cell density.
Consequently, in the wake of 5G, the control and data planes separation
architecture (SARC) has recently been conceived as a promising paradigm with
the potential to address most of the aforementioned challenges. In this
article, we review various proposals that have been presented in the
literature so far to enable SARC.
More specifically, we analyze how and to what degree various SARC proposals
address the four main challenges in network densification namely: energy
efficiency, system level capacity maximization, interference management and
mobility management. We then focus on two salient features of future cellular
networks that have not yet been adopted at wide scale in legacy networks and
thus remain a hallmark of 5G: coordinated multipoint (CoMP) and
device-to-device (D2D) communications. After providing the necessary
background on CoMP and D2D, we analyze how SARC can act as a major enabler for
CoMP and D2D in the context of 5G. This article thus serves as both a tutorial
and an up-to-date survey on SARC, CoMP, and D2D. Most importantly, it provides
an extensive outlook on the challenges and opportunities that lie at the
crossroads of these three mutually entangled emerging technologies.
Comment: 28 pages, 11 figures, IEEE Communications Surveys & Tutorials 201
Cooperative Local Caching under Heterogeneous File Preferences
Local caching is an effective scheme for leveraging the memory of mobile
terminals (MTs) and short-range communications to save bandwidth and reduce
download delay in cellular communication systems. Specifically, MTs first
cache files in their local memories during off-peak hours and then exchange
the requested files with nearby MTs during peak hours. However,
prior works largely overlook MTs' heterogeneity in file preferences and their
selfish behaviours. In this paper, we practically categorize the MTs into
different interest groups according to the MTs' preferences. Each group of MTs
aims to increase the probability of successful file discovery from the
neighbouring MTs (from the same or different groups). Hence, we define a
group's utility as the probability of successfully discovering the file among
neighbouring MTs, which is to be maximized by deciding the caching strategies
of the different groups. By modelling MTs' mobilities as homogeneous Poisson
point processes (HPPPs), we analytically characterize MTs' utilities in
closed form. We first consider the fully cooperative case, where a centralizer
helps all groups to make caching decisions. We formulate the problem as a
weighted-sum utility maximization problem, through which the maximum utility
trade-offs of the different groups are characterized. Next, we study two
benchmark cases under selfish caching, namely partial cooperation and no
cooperation, with and without inter-group file sharing, respectively. The
optimal caching distributions for these two cases are derived. Finally,
numerical examples are presented to compare the utilities in the different
cases and to show the effectiveness of fully cooperative local caching over
the two benchmark cases.
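Closed-form utilities in such HPPP models typically follow from the void probability of a thinned Poisson process. A minimal sketch under illustrative assumptions (MT intensity lam, independent caching probability p, discovery range r; these parameter names are ours, not necessarily the paper's exact model):

```python
import math
import random

def hit_probability(lam, p, r):
    """P(at least one MT caching the file lies within range r).
    Caching MTs form a thinned HPPP of intensity lam * p, so the
    void probability of a disc of area pi*r^2 is exp(-lam*p*pi*r^2)."""
    return 1.0 - math.exp(-lam * p * math.pi * r ** 2)

def poisson_sample(rng, mu):
    """Knuth's method for sampling a Poisson(mu) count."""
    threshold = math.exp(-mu)
    k, prod = 0, rng.random()
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k

def monte_carlo_hit(lam, p, r, side=10.0, trials=20000, seed=7):
    """Empirical check: drop the thinned process of caching MTs on a
    side x side square and test for one within distance r of the centre."""
    rng = random.Random(seed)
    cx = cy = side / 2.0
    hits = 0
    for _ in range(trials):
        n = poisson_sample(rng, lam * p * side * side)
        found = any(
            (rng.random() * side - cx) ** 2
            + (rng.random() * side - cy) ** 2 <= r * r
            for _ in range(n)
        )
        hits += found
    return hits / trials
```

With lam = 1.0, p = 0.3, r = 1.0, the closed form gives 1 - exp(-0.3π) ≈ 0.61, and the Monte Carlo estimate agrees closely.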
A Survey on Applications of Cache-Aided NOMA
Contrary to orthogonal multiple access (OMA), non-orthogonal multiple access (NOMA) schemes can serve a pool of users simultaneously without consuming additional scarce frequency- or time-domain resources. This is useful in meeting future network requirements (5G and beyond systems) such as low latency, massive connectivity, user fairness, and high spectral efficiency. On the other hand, content caching restricts duplicate data transmission by storing popular contents in advance at the network edge, which reduces data traffic. In this survey, we focus on cache-aided NOMA-based wireless networks, which can reap the benefits of both caching and NOMA; switching from OMA to NOMA enables cache-aided networks to push additional files to content servers in parallel and improve the cache hit probability. Beginning with the fundamentals of cache-aided NOMA technology, we summarize the performance goals of cache-aided NOMA systems, present the associated design challenges, and categorize the recent related literature based on their application verticals. Concomitant standardization activities and open research challenges are highlighted as well.
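The parallel-serving idea behind NOMA can be made concrete with the textbook two-user power-domain model. This is an illustrative sketch only; the channel gains and power split below are assumptions, not values from any surveyed paper:

```python
import math

def noma_rates(p_near, p_far, g_near, g_far, noise=1.0):
    """Two-user downlink power-domain NOMA with SIC (textbook model).
    The far (weak) user decodes its signal treating the near user's
    signal as noise; the near (strong) user first cancels the far
    user's signal (SIC), then decodes its own interference-free."""
    r_far = math.log2(1 + p_far * g_far / (p_near * g_far + noise))
    r_near = math.log2(1 + p_near * g_near / noise)
    return r_near, r_far

def oma_rates(p_total, g_near, g_far, noise=1.0):
    """Baseline OMA: each user gets half the time slot at full power."""
    r_near = 0.5 * math.log2(1 + p_total * g_near / noise)
    r_far = 0.5 * math.log2(1 + p_total * g_far / noise)
    return r_near, r_far
```

With disparate gains (say g_near = 10, g_far = 1) and total power 10 split 2/8, the NOMA sum rate exceeds the equal-time-share OMA baseline, because the strong user transmits over the whole slot after SIC while the weak user still meets a useful rate.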
On the Rate-Memory Tradeoff of D2D Coded Caching with Three Users
The device-to-device (D2D) centralized coded caching problem is studied for
the three-user scenario, where two models are considered. One is the 3-user D2D
coded caching model proposed by Ji et al., and the other is a simpler model
named the 3-user D2D coded caching with two random requesters and one sender
(2RR1S), proposed in this paper, where, in the delivery phase, any two of the
three users make file requests, and the user that does not make a file
request is the designated sender. We allow for coded cache placement and
non-one-shot delivery schemes. We first find the optimal caching and delivery
schemes for the model of the 3-user D2D coded caching with 2RR1S for any number
of files. Next, we propose a new caching and delivery scheme for the 3-user D2D
coded caching problem using the optimal scheme of the 3-user D2D coded caching
with 2RR1S as a base scheme. The proposed scheme employs coded cache
placement; when the number of files is equal to 2 and the cache size is in
the medium range, it outperforms existing schemes, which focus on uncoded
cache placement. We further characterize the optimal rate-memory
tradeoff for the 3-user D2D coded caching problem when the number of files is
equal to 2. As a result, we show that the new caching and delivery scheme
proposed is in fact optimal when the cache size is in the medium range.
Comment: To be submitted for possible journal publication
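The coded-multicast idea underlying D2D schemes of this kind can be illustrated with a toy byte-level XOR exchange. This mirrors the 2RR1S setting only loosely and is not the paper's actual scheme: two requesters each cache the other's requested subfile, and the third user acts as the sender:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical equal-length subfiles.
subfile_a = b"subfileA"  # requested by user 1, cached by user 2
subfile_b = b"subfileB"  # requested by user 2, cached by user 1

# The sender (user 3) broadcasts a single coded transmission.
coded = xor_bytes(subfile_a, subfile_b)

# Each requester peels off its cached side information.
decoded_1 = xor_bytes(coded, subfile_b)  # user 1 recovers subfile A
decoded_2 = xor_bytes(coded, subfile_a)  # user 2 recovers subfile B

assert decoded_1 == subfile_a and decoded_2 == subfile_b
```

One broadcast thus serves two distinct requests; this multicast saving is the source of the rate reduction that the rate-memory tradeoff quantifies.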