26 research outputs found
A Feature-Based Bayesian Method for Content Popularity Prediction in Edge-Caching Networks
Edge-caching is recognized as an efficient technique for future wireless
cellular networks to improve network capacity and user-perceived quality of
experience. Due to the random content requests and the limited cache memory,
designing an efficient caching policy is a challenge. To enhance the
performance of caching systems, an accurate content request prediction
algorithm is essential. Here, we introduce a flexible model, a Poisson
regressor based on a Gaussian process, for the content request distribution in
stationary environments. Our proposed model can incorporate the content
features as side information for prediction enhancement. In order to learn the
model parameters, which yield the Poisson rates, or equivalently the content
popularities, we adopt a Bayesian approach, which is robust against
over-fitting.
However, the posterior distribution in the Bayes formula is analytically
intractable to compute. To tackle this issue, we apply a Markov chain Monte
Carlo (MCMC) method to approximate the posterior distribution. Two types of
predictive distributions are formulated for the requests of existing contents
and for the requests of a newly-added content. Finally, simulation results are
provided to confirm the accuracy of the developed content popularity learning
approach.
Comment: arXiv admin note: substantial text overlap with arXiv:1903.0306
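The Bayesian machinery described above can be illustrated with a minimal sketch: a random-walk Metropolis sampler approximating the posterior over a single content's Poisson rate. This is a deliberately simplified one-dimensional stand-in (a log-normal prior instead of the paper's Gaussian-process prior, and synthetic request counts), not the authors' implementation.

```python
import math
import random

random.seed(0)

def log_posterior(lam, counts, mu=0.0, sigma=1.0):
    """Log posterior (up to a constant) of a Poisson rate with a log-normal
    prior -- a one-dimensional stand-in for the paper's GP-based prior."""
    if lam <= 0:
        return float("-inf")
    log_prior = -((math.log(lam) - mu) ** 2) / (2 * sigma ** 2)
    log_lik = sum(c * math.log(lam) - lam for c in counts)  # log(c!) terms dropped
    return log_prior + log_lik

def metropolis(counts, n_iter=5000, step=0.3):
    """Random-walk Metropolis in log-space, approximating the posterior over lam."""
    lam, samples = 1.0, []
    for _ in range(n_iter):
        prop = lam * math.exp(step * random.gauss(0.0, 1.0))
        # log acceptance ratio; +log(prop/lam) is the Jacobian of the log-space walk
        log_alpha = (log_posterior(prop, counts) - log_posterior(lam, counts)
                     + math.log(prop) - math.log(lam))
        if math.log(random.random()) < log_alpha:
            lam = prop
        samples.append(lam)
    return samples

# Request counts for one content over 10 time slots (synthetic data)
counts = [3, 5, 4, 6, 2, 4, 5, 3, 4, 4]
samples = metropolis(counts)
post_mean = sum(samples[1000:]) / len(samples[1000:])
print(round(post_mean, 2))
```

With the empirical mean of the counts at 4.0 and a weak prior, the posterior mean lands near the sample mean, which is the sanity check one would run before trusting the sampler on real traces.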
Online Reinforcement Learning of X-Haul Content Delivery Mode in Fog Radio Access Networks
We consider a Fog Radio Access Network (F-RAN) with a Base Band Unit (BBU) in
the cloud and multiple cache-enabled enhanced Remote Radio Heads (eRRHs). The
system aims at delivering contents on demand with minimal average latency from
a time-varying library of popular contents. Information about uncached
requested files can be transferred from the cloud to the eRRHs by following
either backhaul or fronthaul modes. The backhaul mode transfers fractions of
the requested files, while the fronthaul mode transmits quantized baseband
samples as in Cloud-RAN (C-RAN). The backhaul mode allows the caches of the
eRRHs to be updated, which may lower future delivery latencies. In contrast,
the fronthaul mode enables cooperative C-RAN transmissions that may reduce the
current delivery latency. Taking into account the trade-off between current and
future delivery performance, this paper proposes an adaptive selection method
between the two delivery modes to minimize the long-term delivery latency.
Assuming an unknown and time-varying popularity model, the method is based on
model-free Reinforcement Learning (RL). Numerical results confirm the
effectiveness of the proposed RL scheme.
Comment: 5 pages, 2 figures
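The backhaul/fronthaul trade-off above lends itself to a tabular Q-learning sketch. The environment below is a hypothetical one-file toy (invented latencies and churn rate, not the paper's F-RAN model); it only illustrates how model-free RL can learn to pay a higher current latency in exchange for future cache hits.

```python
import random

random.seed(1)

# Toy F-RAN-style environment with one popular file (an illustrative
# simplification, not the paper's model). State: is the file cached at the eRRH?
# Action 0 = fronthaul (cheaper now, cache unchanged);
# Action 1 = backhaul  (costlier now, but the eRRH caches the file).
LAT_HIT, LAT_FRONT, LAT_BACK = 1.0, 2.0, 4.0

def step(cached, action):
    """Return (next_state, reward); reward is negative delivery latency."""
    if cached:
        return 1, -LAT_HIT
    if action == 1:
        return 1, -LAT_BACK
    return 0, -LAT_FRONT

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma, eps = 0.1, 0.9, 0.2
state = 0
for t in range(10000):
    a = random.choice((0, 1)) if random.random() < eps \
        else max((0, 1), key=lambda x: Q[(state, x)])
    nxt, reward = step(state, a)
    # Standard model-free Q-learning update
    Q[(state, a)] += alpha * (reward + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)])
                              - Q[(state, a)])
    state = nxt
    if random.random() < 0.05:   # library churn: the cached copy goes stale
        state = 0

# The cached state should end up worth more than the uncached one
print(max(Q[(1, 0)], Q[(1, 1)]) > max(Q[(0, 0)], Q[(0, 1)]))
```

Under these made-up latencies the learner discovers that, from the uncached state, the costlier backhaul mode pays off through subsequent cache hits, which is the current-versus-future trade-off the abstract describes.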
Using Grouped Linear Prediction and Accelerated Reinforcement Learning for Online Content Caching
Proactive caching is an effective way to alleviate peak-hour traffic
congestion by prefetching popular contents at the wireless network edge.
Maximizing caching efficiency requires knowledge of the content popularity
profile, which, however, is often unavailable in advance. In this paper, we first
propose a new linear prediction model, named grouped linear model (GLM) to
estimate future content requests from historical data. Unlike many
existing works that assume a static content popularity profile, our model
can adapt to the temporal variation of the content popularity in practical
systems due to the arrival of new contents and dynamics of user preference.
Based on the predicted content requests, we then propose a reinforcement
learning approach with model-free acceleration (RLMA) for online cache
replacement by taking into account both the cache hits and replacement cost.
This approach accelerates the learning process in non-stationary environments by
generating imaginary samples for Q-value updates. Numerical results based on
real-world traces show that the proposed prediction and learning based online
caching policy outperforms all considered existing schemes.
Comment: 6 pages, 4 figures, ICC 2018 workshop
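The "imaginary samples" idea mentioned above is essentially Dyna-style planning: replaying transitions from a learned model between real interactions. The sketch below applies it to a hypothetical two-content, size-one cache with an assumed popularity profile and replacement cost; it is not the paper's RLMA algorithm.

```python
import random

random.seed(2)

# Toy cache of size 1 over two contents (hypothetical setup, not the paper's
# full RLMA scheme). Action a = content to keep cached next; reward = hit
# indicator minus a replacement cost when the cached item changes.
POPULARITY = [0.7, 0.3]      # assumed request probabilities
REPLACE_COST = 0.2

def env_step(cached, action):
    req = 0 if random.random() < POPULARITY[0] else 1
    hit = 1.0 if req == action else 0.0
    cost = REPLACE_COST if action != cached else 0.0
    return action, hit - cost

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
model = {}                   # (state, action) -> observed (next_state, reward) list
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for t in range(2000):
    a = random.choice((0, 1)) if random.random() < eps \
        else max((0, 1), key=lambda x: Q[(state, x)])
    nxt, r = env_step(state, a)
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)]) - Q[(state, a)])
    model.setdefault((state, a), []).append((nxt, r))
    # Acceleration: extra "imaginary" updates replayed from the learned model
    for _ in range(5):
        (s, ia), outcomes = random.choice(list(model.items()))
        inxt, ir = random.choice(outcomes)
        Q[(s, ia)] += alpha * (ir + gamma * max(Q[(inxt, 0)], Q[(inxt, 1)]) - Q[(s, ia)])
    state = nxt

# Keeping the more popular content cached should carry the higher value
print(Q[(0, 0)] > Q[(0, 1)])
```

The five replayed updates per real step are what "accelerates" learning: the Q-table is refined from stored experience without waiting for new requests.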
Modified reinforcement learning-based caching system for mobile edge computing
Caching contents at the edge of mobile networks is an efficient mechanism that can alleviate the backhaul link load and reduce the transmission delay. For this purpose, choosing an adequate caching strategy becomes an important issue. Recently, the tremendous growth of Mobile Edge Computing (MEC) has empowered edge network nodes with more computation and storage capabilities, allowing the execution of resource-intensive tasks within the mobile network edges, such as running artificial intelligence (AI) algorithms. Exploiting users' context information intelligently makes it possible to design an intelligent context-aware mobile edge caching. To maximize the caching performance, the suitable methodology is to consider both context awareness and intelligence, so that the caching strategy is aware of the environment while caching the appropriate content by making the right decision. Inspired by the success of reinforcement learning (RL), which uses agents to deal with decision-making problems, we present a modified reinforcement learning (mRL) to cache contents in the network edges. Our proposed solution aims to maximize the cache hit rate and requires multi-awareness of the factors influencing cache performance. The modified RL differs from other RL algorithms in the learning rate, which uses the method of stochastic gradient descent (SGD), besides taking advantage of learning using the optimal caching decision obtained from fuzzy rules.
Index Terms — Caching, Reinforcement Learning, Fuzzy Logic, Mobile Edge Computing
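The abstract does not spell out its SGD-based learning rate, so the following sketch shows one common reading, a per-state-action diminishing step size alpha_n = 1/n, under which the Q-value becomes a running mean of its targets. Treat this rule as an assumption for illustration, not the paper's definition.

```python
def sgd_q_update(Q, counts, s, a, reward, next_best, gamma=0.9):
    """One Q-value update with a per-pair diminishing step size alpha_n = 1/n.
    This is one common reading of an 'SGD-style' learning rate, assumed here
    for illustration; the paper's exact rule is not given in the abstract."""
    counts[(s, a)] = counts.get((s, a), 0) + 1
    alpha = 1.0 / counts[(s, a)]
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (reward + gamma * next_best - q)

# With alpha_n = 1/n and next_best fixed at 0, the Q-value is exactly the
# running mean of the observed rewards:
Q, counts = {}, {}
for r in [1.0, 0.0, 1.0, 0.0]:
    sgd_q_update(Q, counts, 0, 0, r, next_best=0.0)
print(Q[(0, 0)])
```

A diminishing step size of this kind trades adaptivity for stability, which is why it is usually paired with another signal (here, the abstract's fuzzy-rule decisions) in non-stationary caching environments.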
Deep learning-based edge caching for multi-cluster heterogeneous networks
In this work, we consider time- and space-evolving cache refreshing in multi-cluster heterogeneous networks. We consider a two-step content placement probability optimization. In the initial complete cache refreshing optimization, the joint optimization of the activated base station density and the content placement probability is considered, and we transform this optimization problem into a GP problem. In the subsequent partial cache refreshing optimization, we take the time-space evolution into consideration and derive a convex optimization problem subject to the cache capacity constraint and the backhaul limit constraint. We exploit the redundant information across different content popularities using a deep neural network (DNN) to avoid repeated calculation when the content popularity distribution changes across time slots. The trained DNN can respond online to content placement in a multi-cluster HetNet model instantaneously. Numerical results demonstrate close approximation to the optimum and good generalization ability.
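The placement step the DNN is trained to approximate can be illustrated by its simplest ingredient: projecting raw placement scores onto the capacity-constrained set {0 <= p_i <= 1, sum(p) = C}. The bisection below is a hypothetical stand-in for the paper's convex solver, shown only to make the constraint concrete.

```python
def project_capacity(scores, cache_size):
    """Project raw placement scores onto {0 <= p_i <= 1, sum(p) = cache_size}
    by bisecting on a water-level offset. A hypothetical stand-in for the
    convex-solver output the paper's DNN is trained to reproduce."""
    lo, hi = min(scores) - 1.0, max(scores) + 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        total = sum(min(1.0, max(0.0, x - mid)) for x in scores)
        if total > cache_size:
            lo = mid        # level too low: placements over-fill the cache
        else:
            hi = mid
    level = (lo + hi) / 2.0
    return [min(1.0, max(0.0, x - level)) for x in scores]

# Three contents, cache budget of one content on average
probs = project_capacity([0.9, 0.5, 0.1], 1.0)
print([round(p, 3) for p in probs])
```

A trained network can emit such probability vectors in a single forward pass, which is the "online response" advantage over re-solving the optimization whenever the popularity distribution shifts.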
Online Learning Models for Content Popularity Prediction In Wireless Edge Caching
Caching popular contents in advance is an important technique to achieve the
low latency requirement and to reduce the backhaul costs in future wireless
communications. Considering a network with base stations distributed as a
Poisson point process (PPP), optimal content placement caching probabilities
are derived for known popularity profile, which is unknown in practice. In this
paper, online prediction (OP) and online learning (OL) methods are presented
based on popularity prediction model (PPM) and Grassmannian prediction model
(GPM), to predict the content profile for future time slots for time-varying
popularities. In OP, the problem of finding the coefficients is modeled as a
constrained non-negative least squares (NNLS) problem which is solved with a
modified NNLS algorithm. In addition, these two models are compared with
log-request prediction model (RPM), information prediction model (IPM) and
average success probability (ASP) based model. Next, in OL methods for the
time-varying case, the cumulative mean squared error (MSE) is minimized and the
MSE regret is analyzed for each of the models. Moreover, for the quasi-time-varying
case, where the popularity changes block-wise, the KWIK (knows what it knows)
learning method is modified for these models to improve the prediction MSE and
ASP performance. Simulation results show that for OP, PPM and GPM provide the
best ASP among these models, indicating that minimum mean squared error based
models do not necessarily result in optimal ASP. OL-based models yield
approximately similar ASP and MSE, while for the quasi-time-varying case, the KWIK
methods provide better performance, as verified on the MovieLens dataset.
Comment: 9 figures, 29 pages
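The constrained NNLS fit at the heart of the OP method can be sketched with a plain projected-gradient solver (a generic method, not the paper's modified NNLS algorithm): past popularity profiles form the columns of A, the current profile is b, and the non-negative coefficients combine past profiles into a prediction.

```python
def nnls_pg(A, b, iters=500):
    """Minimize ||A x - b||^2 subject to x >= 0 by projected gradient descent.
    A plain stand-in for the paper's modified NNLS solver, shown only to
    illustrate the constrained fit."""
    m, n = len(A), len(A[0])
    # conservative step size from a bound on the largest eigenvalue of A^T A
    lr = 1.0 / (n * max(sum(A[i][j] ** 2 for i in range(m)) for j in range(n)))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]  # residual
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]         # gradient
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]                    # project onto x >= 0
    return x

# Columns of A are two past popularity profiles; b is the current profile.
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
coeffs = nnls_pg(A, b)
print([round(c, 3) for c in coeffs])
```

For this small example the unconstrained least-squares solution is already non-negative, so the projected iterates converge to it; when a coefficient would go negative, the projection clips it to zero, which is the behavior that distinguishes NNLS from ordinary least squares.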