Online Learning Models for Content Popularity Prediction In Wireless Edge Caching
Caching popular content in advance is an important technique for meeting
low-latency requirements and reducing backhaul costs in future wireless
communications. Considering a network with base stations distributed as a
Poisson point process (PPP), optimal content placement caching probabilities
are derived for a known popularity profile, which is unknown in practice. In this
paper, online prediction (OP) and online learning (OL) methods are presented
based on popularity prediction model (PPM) and Grassmannian prediction model
(GPM), to predict the content profile for future time slots for time-varying
popularities. In OP, the problem of finding the coefficients is modeled as a
constrained non-negative least squares (NNLS) problem, which is solved with a
modified NNLS algorithm. In addition, these two models are compared with the
log-request prediction model (RPM), information prediction model (IPM) and
average success probability (ASP) based model. Next, in OL methods for the
time-varying case, the cumulative mean squared error (MSE) is minimized and the
MSE regret is analyzed for each of the models. Moreover, for the quasi-time-varying
case, where the popularity changes block-wise, the KWIK (knows what it knows)
learning method is modified for these models to improve the prediction MSE and
ASP performance. Simulation results show that for OP, PPM and GPM provide the
best ASP among these models, implying that minimum mean squared error based
models do not necessarily result in optimal ASP. OL-based models yield
approximately similar ASP and MSE, while for the quasi-time-varying case, the KWIK
methods provide better performance, as verified on the MovieLens
dataset. Comment: 29 pages, 9 figures
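The constrained NNLS step described in this abstract can be sketched with a plain projected-gradient solver; the data, step size, and names below are illustrative assumptions, not the modified NNLS algorithm the paper proposes.

```python
# Hedged sketch: predict the next popularity profile as a non-negative
# combination of past profiles, i.e. solve min ||Ax - b||^2 s.t. x >= 0.
# All data and parameters here are illustrative assumptions.

def nnls(A, b, steps=2000, lr=0.5):
    """Projected-gradient solver for min ||Ax - b||^2 subject to x >= 0."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # Residual r = Ax - b.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # Gradient g = 2 A^T r; take a step, then project onto x >= 0.
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x

# Columns of A are past popularity profiles; b is the latest observation.
A = [[0.5, 0.1], [0.3, 0.4], [0.2, 0.5]]
b = [0.40, 0.33, 0.27]
coeffs = nnls(A, b)
# Predicted next-slot profile: a non-negative mix of the past profiles.
predicted = [sum(A[i][j] * coeffs[j] for j in range(2)) for i in range(3)]
```

For larger catalogs a dedicated active-set NNLS solver would replace the fixed-step iteration above.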
Living on the Edge: The Role of Proactive Caching in 5G Wireless Networks
This article explores one of the key enablers of 5G wireless
networks leveraging small cell network deployments, namely proactive caching.
By endowing the network with predictive capabilities and harnessing recent
developments in storage, context-awareness, and social networks, peak traffic
demands can be substantially reduced by proactively serving predictable user
demands via caching at base stations and users' devices. In order to show the effectiveness
of proactive caching, we examine two case studies which exploit the spatial and
social structure of the network, where proactive caching plays a crucial role.
Firstly, in order to alleviate backhaul congestion, we propose a mechanism
whereby files are proactively cached during off-peak periods based on file
popularity and correlations among user and file patterns. Secondly,
leveraging social networks and device-to-device (D2D) communications, we
propose a procedure that exploits the social structure of the network by
predicting the set of influential users to (proactively) cache strategic
contents and disseminate them to their social ties via D2D communications.
Exploiting this proactive caching paradigm, numerical results show that
important gains can be obtained for each case study, with backhaul savings and
a higher ratio of satisfied users of up to and , respectively.
Higher gains can be further obtained by increasing the storage capability at
the network edge. Comment: accepted for publication in IEEE Communications Magazine
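The off-peak prefetching in the first case study can be illustrated as a popularity-greedy cache fill under a storage budget; the file names, sizes, and greedy rule below are assumptions for illustration, not the article's actual mechanism.

```python
# Illustrative sketch (assumed data): fill a base station cache during
# off-peak hours with the most popular files that fit the storage budget.

def proactive_cache(popularity, sizes, capacity):
    """Greedily cache files in decreasing popularity until storage is full."""
    cached, used = [], 0
    for f in sorted(popularity, key=popularity.get, reverse=True):
        if used + sizes[f] <= capacity:
            cached.append(f)
            used += sizes[f]
    return cached

popularity = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
sizes = {"a": 4, "b": 3, "c": 2, "d": 1}
cache = proactive_cache(popularity, sizes, capacity=6)  # ['a', 'c']
```

Note the greedy rule skips "b" because it no longer fits after "a" is cached, then takes the smaller "c"; a knapsack-style placement could do better in general.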
A Transfer Learning Approach for Cache-Enabled Wireless Networks
Locally caching contents at the network edge constitutes one of the most
disruptive approaches in 5G wireless networks. Reaping the benefits of edge
caching hinges on solving a myriad of challenges such as how, what and when to
strategically cache contents subject to storage constraints, traffic load,
unknown spatio-temporal traffic demands and data sparsity. Motivated by this,
we propose a novel transfer learning-based caching procedure carried out at
each small cell base station. This is done by exploiting the rich contextual
information (i.e., users' content viewing history, social ties, etc.) extracted
from device-to-device (D2D) interactions, referred to as source domain. This
prior information is incorporated in the so-called target domain where the goal
is to optimally cache strategic contents at the small cells as a function of
storage, estimated content popularity, traffic load and backhaul capacity. It
is shown that the proposed approach overcomes the notorious data sparsity and
cold-start problems, yielding significant gains in terms of users'
quality-of-experience (QoE) and backhaul offloading, with gains reaching up to
in a setting consisting of four small cell base stations. Comment: some small fixes in notation
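The transfer idea above, carrying source-domain (D2D/social) information into the target domain to combat sparsity, can be hinted at with a simple prior-smoothed popularity estimate; the blending rule and all names are assumptions, not the paper's procedure.

```python
# Hedged sketch: blend sparse target-domain request counts with a
# source-domain popularity prior (Dirichlet-style smoothing) to ease
# the cold-start problem. The strength parameter is an assumption.

def blended_popularity(target_counts, source_prior, strength=10.0):
    """Smooth the empirical popularity estimate toward the prior."""
    total = sum(target_counts.values())
    return {
        f: (target_counts.get(f, 0) + strength * source_prior[f])
           / (total + strength)
        for f in source_prior
    }

counts = {"a": 3}                       # very few target-domain requests
prior = {"a": 0.2, "b": 0.5, "c": 0.3}  # source-domain (D2D) estimate
est = blended_popularity(counts, prior)
```

With few observations the estimate stays close to the source-domain prior; as target-domain requests accumulate, the empirical counts dominate.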
Big Data Meets Telcos: A Proactive Caching Perspective
Mobile cellular networks are becoming increasingly complex to manage while
classical deployment/optimization techniques and current solutions (i.e., cell
densification, acquiring more spectrum, etc.) are cost-ineffective and thus
seen as stopgaps. This calls for the development of novel approaches that
leverage recent advances in storage/memory, context-awareness, and edge/cloud
computing, and fall into the framework of big data. However, big data is itself
yet another complex phenomenon to handle and comes with its notorious 4Vs:
velocity, veracity, volume and variety. In this work, we address these issues in
optimization of 5G wireless networks via the notion of proactive caching at the
base stations. In particular, we investigate the gains of proactive caching in
terms of backhaul offloading and request satisfaction, while tackling the
large amount of available data for content popularity estimation. In order to
estimate the content popularity, we first collect users' mobile traffic data
from a Turkish telecom operator from several base stations in hours of time
interval. Then, an analysis is carried out locally on a big data platform and
the gains of proactive caching at the base stations are investigated via
numerical simulations. It turns out that several gains are possible depending
on the level of available information and storage size. For instance, with 10%
of content ratings and 15.4 Gbyte of storage size (87% of total catalog size),
proactive caching achieves 100% of request satisfaction and offloads 98% of the
backhaul when considering 16 base stations. Comment: 8 pages, 5 figures
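The two gains studied above, request satisfaction and backhaul offload, can be computed from a request log and a cache set in a toy sketch; the definitions below (satisfaction as hit ratio by count, offload as share of traffic volume served locally) are plausible assumptions rather than the paper's exact metrics.

```python
# Toy sketch (assumed metric definitions): request satisfaction as the
# cache hit ratio, backhaul offload as the share of bits served locally.

def caching_gains(requests, cache, file_bits):
    hits = [f for f in requests if f in cache]
    satisfaction = len(hits) / len(requests)
    served = sum(file_bits[f] for f in hits)
    total = sum(file_bits[f] for f in requests)
    return satisfaction, served / total

reqs = ["a", "a", "b", "c", "a"]
bits = {"a": 1, "b": 4, "c": 2}
sat, off = caching_gains(reqs, {"a", "b"}, bits)  # sat = 0.8, off = 7/9
```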
GreenDelivery: Proactive Content Caching and Push with Energy-Harvesting-based Small Cells
The explosive growth of mobile multimedia traffic calls for scalable wireless
access with high quality of service and low energy cost. Motivated by the
emerging energy harvesting communications, and the trend of caching multimedia
contents at the access edge and user terminals, we propose a paradigm-shift
framework, namely GreenDelivery, enabling efficient content delivery with
energy harvesting based small cells. To resolve the two-dimensional randomness
of energy harvesting and content request arrivals, proactive caching and push
are jointly optimized, with respect to the content popularity distribution and
battery states. We thus develop a novel way of understanding the interplay
between content and energy over time and space. Case studies are provided to
show the substantial reduction of macro BS activities, and thus the related
energy consumption from the power grid is reduced. Research issues of the
proposed GreenDelivery framework are also discussed. Comment: 15 pages, 5 figures, accepted by IEEE Communications Magazine
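GreenDelivery's joint caching-and-push idea could be caricatured by a threshold policy on the battery state; this rule, and every name in it, is an assumption for illustration, not the paper's joint optimization.

```python
# Assumed threshold policy: a harvesting small cell pushes the most
# popular not-yet-cached content only when its battery can cover the
# transmission cost; otherwise it idles to save harvested energy.

def push_decision(battery, tx_cost, popularity, cached):
    candidates = [f for f in popularity if f not in cached]
    if battery >= tx_cost and candidates:
        return max(candidates, key=popularity.get)
    return None

choice = push_decision(5.0, 3.0, {"a": 0.6, "b": 0.4}, {"a"})  # "b"
```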