730 research outputs found
Optimal Content Placement for Offloading in Cache-enabled Heterogeneous Wireless Networks
Caching at base stations (BSs) is a promising way to offload traffic and
eliminate the backhaul bottleneck in heterogeneous networks (HetNets). In this
paper, we investigate the optimal content placement maximizing the successful
offloading probability in a cache-enabled HetNet where a tier of multi-antenna
macro BSs (MBSs) is overlaid with a tier of helpers with caches. Based on the
probabilistic caching framework, we resort to stochastic geometry theory to
derive the closed-form successful offloading probability and formulate the
caching probability optimization problem, which is not concave in general. In
two extreme cases with high and low user-to-helper density ratios, we obtain
the optimal caching probability and analyze the impacts of BS density and
transmit power of the two tiers and the signal-to-interference-plus-noise ratio
(SINR) threshold. In the general case, we obtain the optimal caching probability
that maximizes the lower bound of successful offloading probability and analyze
the impact of user density. Simulation and numerical results show that when the
ratios of MBS-to-helper density, MBS-to-helper transmit power and
user-to-helper density, and the SINR threshold are large, the optimal caching
policy tends to cache the most popular files everywhere. Comment: Submitted to IEEE Globecom 201
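As an illustrative sketch of the probabilistic placement discussed above: assuming a Zipf popularity profile and the simplification that a request is offloaded whenever the nearest helper caches the file (the library size, Zipf exponent, and cache size are arbitrary choices, not the paper's closed-form analysis), the "cache the most popular files everywhere" policy can be compared against uniform random caching.

```python
import numpy as np

def zipf_popularity(n_files, gamma):
    """Zipf content popularity: p_f proportional to rank^(-gamma)."""
    ranks = np.arange(1, n_files + 1)
    p = ranks.astype(float) ** (-gamma)
    return p / p.sum()

def offloading_probability(popularity, caching_prob):
    """P(request served by the nearest helper) when each helper
    independently caches file f with probability q_f."""
    return float(np.dot(popularity, caching_prob))

n_files, cache_size = 100, 10
p = zipf_popularity(n_files, gamma=0.8)

# "Most popular everywhere" policy: deterministically cache the top-C files.
q_popular = np.zeros(n_files)
q_popular[:cache_size] = 1.0

# Uniform random caching with the same expected cache occupancy.
q_uniform = np.full(n_files, cache_size / n_files)

print(offloading_probability(p, q_popular))
print(offloading_probability(p, q_uniform))
```

With a sufficiently skewed popularity profile the most-popular policy dominates uniform caching, consistent with the abstract's observation that caching the most popular files everywhere becomes optimal in certain regimes.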
Caching at the Wireless Edge: Design Aspects, Challenges and Future Directions
Caching at the wireless edge is a promising way of boosting spectral
efficiency and reducing energy consumption of wireless systems. These
improvements are rooted in the fact that popular contents are reused,
asynchronously, by many users. In this article, we first introduce methods to
predict the popularity distributions and user preferences, and the impact of
erroneous information. We then discuss the two aspects of caching systems,
namely content placement and delivery. We expound the key differences between
wired and wireless caching, and outline the differences in the system arising
from where the caching takes place, e.g., at base stations, or on the wireless
devices themselves. Special attention is paid to the essential limitations in
wireless caching, and possible tradeoffs between spectral efficiency, energy
efficiency and cache size. Comment: Published in IEEE Communications Magazine
A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications
With the explosive growth of smart devices and the advent of many new
applications, traffic volume has been growing exponentially. The traditional
centralized network architecture cannot accommodate such user demands due to
heavy burden on the backhaul links and long latency. Therefore, new
architectures which bring network functions and contents to the network edge
are proposed, i.e., mobile edge computing and caching. Mobile edge networks
provide cloud computing and caching capabilities at the edge of cellular
networks. In this survey, we present an exhaustive review of state-of-the-art
research efforts on mobile edge networks. We first give an overview of mobile
edge networks, including their definition, architecture and advantages. Next, a
comprehensive survey of computing, caching and communication techniques at the
network edge is presented. The applications and use cases of mobile edge
networks are discussed. Subsequently, key enablers of mobile edge networks such
as cloud technology, SDN/NFV and smart devices are discussed. Finally, open
research challenges and future directions are presented.
Performance Analysis and Optimization of Cache-Assisted CoMP for Clustered D2D Networks
Caching at mobile devices and leveraging cooperative device-to-device (D2D)
communications are two promising approaches to support massive content delivery
over wireless networks while mitigating the effects of interference. To show
the impact of cooperative communication on the performance of cache-enabled D2D
networks, the notion of device clustering must be factored in to convey a
realistic description of the network performance. In this regard, this paper
develops a novel mathematical model, based on stochastic geometry and an
optimization framework for cache-assisted coordinated multi-point (CoMP)
transmissions with clustered devices. Devices are spatially distributed into
disjoint clusters and are assumed to have a surplus memory to cache files from
a known library, following a random probabilistic caching scheme. Desired
contents that are not self-cached can be obtained via D2D CoMP transmissions
from neighboring devices or, as a last resort, from the network. For this
model, we analytically characterize the offloading gain and rate coverage
probability as functions of the system parameters. An optimal caching strategy
is then defined as the content placement scheme that maximizes the offloading
gain. For a tractable optimization framework, we pursue two separate approaches
to obtain a lower bound and a provably accurate approximation of the offloading
gain, which allows us to obtain optimized caching strategies.
Joint Caching and Resource Allocation in D2D-Assisted Wireless HetNet
5G networks are required to provide fast and reliable communications while
handling ever-increasing user traffic. In Heterogeneous Networks
(HetNets) assisted with Device-to-Device (D2D) communication, traffic can be
offloaded to Small Base Stations or to users to improve the network's
successful data delivery rate. In this paper, we aim at maximizing the average
number of files that are successfully delivered to users, by jointly optimizing
caching placement and channel allocation in cache-enabled D2D-assisted HetNets.
At first, an analytical upper-bound on the average content delivery delay is
derived. Then, the joint optimization problem is formulated. The non-convexity
of the problem is alleviated, and the optimal solution is determined. Due to
the high time complexity of the obtained solution, a low-complexity sub-optimal
approach is proposed. Numerical results illustrate the efficacy of the proposed
solutions and compare them to conventional approaches. Finally, by
investigating the impact of key parameters, e.g. power, caching capacity, QoS
requirements, etc., guidelines to design these networks are obtained. Comment: 24 pages, 5 figures, submitted to IEEE Transactions on Wireless
Communications (12-Feb-2019)
Analysis of Cache-Enabled Hybrid Millimeter Wave & Sub-6 GHz Massive MIMO Networks
This paper focuses on edge caching in mmWave/µWave hybrid wireless networks,
in which all mmWave SBSs and µWave MBSs are capable of storing contents to
alleviate the traffic burden on the backhaul links that connect the BSs to the
core network for retrieving non-cached contents. The main aim of this work is
to address the effect of capacity-limited backhaul on the average success
probability (ASP) of file delivery and latency. In particular, we consider a
more practical mmWave hybrid beamforming in small cells and massive MIMO
communication in macro cells. Based on stochastic geometry and a simple
retransmission protocol, we derive the association probabilities by which the
ASP of file delivery and latency are derived. Taking the no-caching case as the
benchmark, we evaluate these QoS performance metrics under MC and UC placement
policies. The theoretical results demonstrate that backhaul capacity indeed has
a significant impact on network performance, especially when the backhaul
capacity is weak. Besides, we also show the tradeoff among cache size,
retransmission attempts, ASP of file delivery, and latency. This interplay
shows that cache size and retransmission under different caching placement
schemes alleviate the backhaul requirements. Simulation results are presented
to validate our analysis.
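Under the simplifying assumption (not from the paper) that delivery attempts succeed independently with a fixed probability, the effect of the retransmission protocol on the ASP of file delivery and on latency can be sketched as:

```python
def asp_with_retx(p_single, max_attempts):
    """ASP of file delivery with up to max_attempts i.i.d. attempts,
    each succeeding with probability p_single."""
    return 1.0 - (1.0 - p_single) ** max_attempts

def mean_attempts(p_single, max_attempts):
    """Expected number of attempts used, E[N] = sum_k P(N >= k);
    a crude latency proxy (attempts times slot duration)."""
    q = 1.0 - p_single
    return sum(q ** (k - 1) for k in range(1, max_attempts + 1))

# More retransmissions raise the ASP but also the expected latency.
print(asp_with_retx(0.6, 3))   # approx. 1 - 0.4^3 = 0.936
print(mean_attempts(0.6, 3))   # approx. 1 + 0.4 + 0.16 = 1.56
```

This exhibits the qualitative tradeoff the abstract describes: retransmission attempts trade latency for delivery success; the paper's actual ASP additionally depends on association probabilities and backhaul capacity, which are not modeled here.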
Optimizing Joint Probabilistic Caching and Communication for Clustered D2D Networks
Caching at mobile devices and leveraging device-to-device (D2D) communication
are two promising approaches to support massive content delivery over wireless
networks. The analysis of such D2D caching networks based on a physical
interference model is usually carried out by assuming that devices are
uniformly distributed. However, this approach does not capture the fact that
devices are usually grouped into clusters. Motivated
by this fact, this paper presents a comprehensive performance analysis and
joint communication and caching optimization for a clustered D2D network.
Devices are distributed according to a Thomas cluster process (TCP) and are
assumed to have a surplus memory which is exploited to proactively cache files
from a known library, following a random probabilistic caching scheme. Devices
can retrieve the requested files from their caches, from neighbouring devices
in their proximity (cluster), or from the base station as a last resort. Three
key performance metrics are optimized in this paper, namely, the offloading
gain, energy consumption, and latency.
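For Monte Carlo studies, a Thomas cluster process like the one assumed here can be sampled directly: parent points form a homogeneous PPP and devices are Gaussian-scattered around their parents. The intensity, mean cluster size, and scattering width below are illustrative values, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_thomas_cluster(lambda_p, mean_devices, sigma, side):
    """Sample a Thomas cluster process on a [0, side]^2 window:
    Poisson(lambda_p * area) parent points, each spawning a
    Poisson(mean_devices) number of devices displaced by N(0, sigma^2)
    per coordinate (devices may fall slightly outside the window)."""
    n_parents = rng.poisson(lambda_p * side ** 2)
    parents = rng.uniform(0.0, side, size=(n_parents, 2))
    clusters = [c + rng.normal(0.0, sigma, size=(rng.poisson(mean_devices), 2))
                for c in parents]
    devices = np.vstack(clusters) if clusters else np.empty((0, 2))
    return parents, devices

parents, devices = sample_thomas_cluster(
    lambda_p=10.0, mean_devices=5, sigma=0.02, side=1.0)
print(len(parents), len(devices))  # on average ~10 clusters, ~50 devices
```

Performance metrics such as the offloading gain can then be estimated by averaging over many such realizations, in place of (or to validate) closed-form stochastic-geometry expressions.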
Caching Policy for Cache-enabled D2D Communications by Learning User Preference
Prior works on caching policy design do not distinguish content popularity
from user preference. In this paper, we illustrate the caching gain by
exploiting individual user behavior in sending requests. After showing the
connection between the two concepts, we provide a model for synthesizing user
preference from content popularity. We then optimize the caching policy with
the knowledge of user preference and active level to maximize the offloading
probability for cache-enabled device-to-device communications, and develop a
low-complexity algorithm to find the solution. In order to learn user
preference, we model the user request behavior resorting to probabilistic
latent semantic analysis, and learn the model parameters by expectation
maximization algorithm. By analyzing a MovieLens dataset, we find that the
preferences of different users have low similarity, and that the active level
and topic preference of each user change slowly over time. Based on this
observation, we introduce a prior-knowledge-based learning algorithm for user
preference, which can shorten the
learning time. Simulation results show remarkable performance gain of the
caching policy with user preference over existing policy with content
popularity, both with a realistic dataset and with synthetic data validated by
the real dataset.
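The pLSA request model referenced above, P(f|u) = Σ_z P(z|u) P(f|z), can be fitted with a compact EM loop. This is a generic textbook EM sketch on a toy user-by-file request-count matrix, not the paper's prior-knowledge-based algorithm; the topic count and the counts matrix are made-up examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def plsa_em(counts, n_topics, n_iters=50):
    """Tiny pLSA fit via EM on a (users x files) request-count matrix.
    Model: P(f|u) = sum_z P(z|u) P(f|z)."""
    n_users, n_files = counts.shape
    p_z_u = rng.random((n_users, n_topics))
    p_z_u /= p_z_u.sum(1, keepdims=True)
    p_f_z = rng.random((n_topics, n_files))
    p_f_z /= p_f_z.sum(1, keepdims=True)
    for _ in range(n_iters):
        # E-step: responsibilities P(z|u,f), shape (users, topics, files).
        joint = p_z_u[:, :, None] * p_f_z[None, :, :]
        resp = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate from count-weighted responsibilities.
        w = counts[:, None, :] * resp
        p_z_u = w.sum(2)
        p_z_u /= p_z_u.sum(1, keepdims=True) + 1e-12
        p_f_z = w.sum(0)
        p_f_z /= p_f_z.sum(1, keepdims=True) + 1e-12
    # Predicted per-user preference P(f|u).
    return p_z_u @ p_f_z

counts = np.array([[5, 3, 0, 0], [4, 4, 0, 0], [0, 0, 6, 2]], float)
pref = plsa_em(counts, n_topics=2)
print(np.round(pref, 2))
```

The returned matrix of per-user preferences P(f|u) is what a preference-aware caching policy would consume in place of a single global popularity vector.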
Caching Policy Optimization for D2D Communications by Learning User Preference
Cache-enabled device-to-device (D2D) communications can boost network
throughput. By pre-downloading contents to local caches of users, the content
requested by a user can be transmitted via D2D links by other users in
proximity. Prior works optimize the caching policy at users with the knowledge
of content popularity, defined as the probability distribution of requests for
every file in a library across all users. However, content popularity cannot
reflect the interest of each individual user, and thus a popularity-based
caching policy may not fully capture the performance gain introduced by
caching. In
this paper, we optimize caching policy for cache-enabled D2D by learning user
preference, defined as the conditional probability distribution of a user's
request for a file given that the user sends a request. We first formulate an
optimization problem with given user preference to maximize the offloading
probability, which is proved to be NP-hard, and then provide a greedy algorithm to
find the solution. In order to predict the preference of each individual user,
we model the user request behavior by probabilistic latent semantic analysis
(pLSA), and then apply expectation maximization (EM) algorithm to estimate the
model parameters. Simulation results show that the user preference can be
learnt quickly. Compared to the popularity-based caching policy, the offloading
gain achieved by the proposed policy can be remarkably improved even with
predicted user preference. Comment: Accepted by VTC Spring 201
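A greedy placement of the kind the abstract describes can be sketched in a toy single-cluster setting. The objective below assumes a request is offloaded whenever any device in the cluster caches the file; that coverage function is monotone submodular, so greedy (device, file) additions come with standard constant-factor guarantees. The preference matrix, activity levels, and connectivity model are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def offloading_prob(pref, activity, placement):
    """P(random request served by some cache in the cluster),
    assuming every device can reach every other device."""
    covered = placement.any(axis=0)        # is file f cached anywhere?
    return float(activity @ (pref @ covered))

def greedy_placement(pref, activity, cache_size):
    """Repeatedly add the (device, file) pair with the largest marginal
    gain in offloading probability; zero-gain picks still fill caches."""
    n_users, n_files = pref.shape
    placement = np.zeros((n_users, n_files), dtype=bool)
    for _ in range(n_users * cache_size):
        base = offloading_prob(pref, activity, placement)
        best, best_gain = None, -1.0
        for u in range(n_users):
            if placement[u].sum() >= cache_size:
                continue                   # this device's cache is full
            for f in range(n_files):
                if placement[u, f]:
                    continue
                placement[u, f] = True
                gain = offloading_prob(pref, activity, placement) - base
                placement[u, f] = False
                if gain > best_gain:
                    best, best_gain = (u, f), gain
        if best is None:
            break
        placement[best] = True
    return placement

# 2 users with one cache slot each; their preferences differ.
pref = np.array([[0.6, 0.3, 0.1, 0.0],
                 [0.0, 0.1, 0.3, 0.6]])
activity = np.array([0.5, 0.5])            # equally active users
plc = greedy_placement(pref, activity, cache_size=1)
print(offloading_prob(pref, activity, plc))
```

Because caching duplicates add nothing under this coverage objective, greedy naturally diversifies: each device caches its own most-preferred uncovered file rather than both caching the same globally popular one.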
Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence
Along with rapid developments in communication technologies and the surge in
the use of mobile devices, a brand-new computing paradigm, Edge Computing, has
emerged. Meanwhile, Artificial Intelligence (AI) applications
are thriving with the breakthroughs in deep learning and the many improvements
in hardware architectures. Billions of data bytes, generated at the network
edge, put massive demands on data processing and structural optimization. Thus,
there exists a strong demand to integrate Edge Computing and AI, which gives
birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI
for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial
Intelligence on Edge). The former focuses on providing more optimal solutions
to key problems in Edge Computing with the help of popular and effective AI
technologies while the latter studies how to carry out the entire process of
building AI models, i.e., model training and inference, on the edge. This paper
provides insights into this new inter-disciplinary field from a broader
perspective. It discusses the core concepts and the research road-map, which
should provide the necessary background for potential future research
initiatives in Edge Intelligence. Comment: 13 pages, 3 figures