Hit and Bandwidth Optimal Caching for Wireless Data Access Networks
For many data access applications, the availability of the most up-to-date information is a fundamental and rigid requirement. Despite many technological improvements, wireless channels (or bandwidth) remain the scarcest, and hence most expensive, resource in wireless networks, and data access from remote sites depends heavily on them. With affordable smart mobile devices and the tremendous popularity of various Internet-based services, demand for data from these mobile devices is growing very fast. In many cases, it is becoming impossible for wireless data service providers to satisfy this demand using current network infrastructure. An efficient caching scheme at the client side can alleviate the problem by reducing the amount of data transferred over the wireless channels. However, an update event makes the associated cached data objects obsolete and useless for the applications. The frequencies of data updates, as well as of data accesses, play essential roles in cache access and replacement policies. Intuitively, frequently accessed but infrequently updated objects should be given higher preference for preservation in the cache. Modeling this intuition is challenging, however, particularly in a network environment where updates are injected by both the server and the clients, distributed across the network.
In this thesis, we make three inter-related contributions. First, we propose two enhanced cache access policies. These policies ensure strong consistency of the cached data objects through proactive or reactive interactions with the data server; at the same time, they collect information about the access and update frequencies of hosted objects to support efficient deployment of the cache replacement policy. Second, we design a replacement policy that acts as the decision maker when a new object must be accommodated in a fully occupied cache. The statistical information collected by the access policies drives this decision, which is modeled around the idea of preserving frequently accessed but less frequently updated objects in the cache. Third, we show analytically that a cache management scheme combining the proposed replacement policy with either of the access policies guarantees optimal data transmission by increasing the number of effective hits in the cache system. Results from both analysis and our extensive simulations demonstrate that the proposed policies outperform the popular Least Frequently Used (LFU) policy in terms of both effective hits and bandwidth consumption. Moreover, our flexible system model makes the proposed policies equally applicable to existing 3G networks as well as LTE, LTE-Advanced, and WiMAX wireless data access networks.
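To make the replacement intuition concrete, here is a minimal sketch, not the thesis's exact policy: each cached object is scored by its observed access-to-update ratio, and the lowest-scoring object is evicted on a miss. The class names and the score function are illustrative assumptions.

```python
# Minimal sketch (assumed names/score, not the thesis's exact policy):
# prefer objects that are accessed often but updated rarely.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    key: str
    accesses: int = 1   # observed access count
    updates: int = 1    # observed update count (kept >= 1 to avoid /0)

class FrequencyAwareCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: dict[str, CacheEntry] = {}

    def _score(self, e: CacheEntry) -> float:
        # Higher score = more worth keeping in the cache.
        return e.accesses / e.updates

    def access(self, key: str) -> None:
        if key in self.entries:
            self.entries[key].accesses += 1
        elif len(self.entries) < self.capacity:
            self.entries[key] = CacheEntry(key)
        else:
            victim = min(self.entries.values(), key=self._score)
            del self.entries[victim.key]
            self.entries[key] = CacheEntry(key)

    def update(self, key: str) -> None:
        # A server- or client-side update makes the copy stale; a full
        # policy would also refetch or invalidate the object here.
        if key in self.entries:
            self.entries[key].updates += 1
```

Under such a rule, a hot but rarely rewritten object accumulates a high score and survives, while a frequently updated object is evicted first.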
Using Grouped Linear Prediction and Accelerated Reinforcement Learning for Online Content Caching
Proactive caching is an effective way to alleviate peak-hour traffic congestion by prefetching popular contents at the wireless network edge. Maximizing the caching efficiency requires knowledge of the content popularity profile, which, however, is often unavailable in advance. In this paper, we first propose a new linear prediction model, named the grouped linear model (GLM), to estimate future content requests based on historical data. Unlike many existing works that assume a static content popularity profile, our model can adapt to the temporal variation of content popularity in practical systems caused by the arrival of new contents and the dynamics of user preference. Based on the predicted content requests, we then propose a reinforcement learning approach with model-free acceleration (RLMA) for online cache replacement that takes into account both cache hits and replacement cost. This approach accelerates the learning process in non-stationary environments by generating imaginary samples for Q-value updates. Numerical results based on real-world traces show that the proposed prediction and learning based online caching policy outperforms all considered existing schemes.
Comment: 6 pages, 4 figures, ICC 2018 workshop
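The "imaginary samples" idea resembles Dyna-style acceleration of Q-learning: real transitions train a Q-table and fit a transition model, which is then replayed for extra updates. A hedged sketch under that reading follows; states, actions, rewards, and hyperparameters are placeholders, not the paper's RLMA specifics.

```python
# Dyna-style sketch of "imaginary samples for Q-value updates" (assumed
# reading of the paper's RLMA; state/action encodings are placeholders).
import random
from collections import defaultdict

alpha, gamma, n_imaginary = 0.1, 0.9, 10
Q = defaultdict(float)           # Q[(state, action)]
model = {}                       # (state, action) -> (reward, next_state)

def q_update(s, a, r, s_next, actions):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def learn(s, a, r, s_next, actions):
    q_update(s, a, r, s_next, actions)    # update from the real sample
    model[(s, a)] = (r, s_next)           # remember the observed transition
    for _ in range(n_imaginary):          # replay imaginary samples
        (si, ai), (ri, sni) = random.choice(list(model.items()))
        q_update(si, ai, ri, sni, actions)
```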
Echo State Networks for Proactive Caching in Cloud-Based Radio Access Networks with Mobile Users
In this paper, the problem of proactive caching is studied for cloud radio
access networks (CRANs). In the studied model, the baseband units (BBUs) can predict the content request distribution and mobility pattern of each user, and determine which content to cache at remote radio heads and BBUs. This problem
is formulated as an optimization problem which jointly incorporates backhaul
and fronthaul loads and content caching. To solve this problem, an algorithm
that combines the machine learning framework of echo state networks with
sublinear algorithms is proposed. Using echo state networks (ESNs), the BBUs
can predict each user's content request distribution and mobility pattern while
having only limited information on the network's and user's state. In order to
predict each user's periodic mobility pattern with minimal complexity, the
memory capacity of the corresponding ESN is derived for a periodic input. This
memory capacity is shown to be able to record the maximum amount of user
information for the proposed ESN model. Then, a sublinear algorithm is proposed
to determine which content to cache while using a limited number of content request distribution samples. Simulation results using real data from Youku and the Beijing University of Posts and Telecommunications show that the proposed approach yields significant gains in sum effective capacity, reaching up to 27.8% and 30.7%, respectively, compared to random caching with clustering and random caching without clustering.
Comment: Accepted in the IEEE Transactions on Wireless Communications
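For readers unfamiliar with ESNs, the sketch below shows the standard recipe they follow: a fixed random reservoir driven by the input, with only a ridge-regression readout trained. Sizes and scaling constants are illustrative, not the values derived in the paper.

```python
# Standard ESN recipe as a sketch: fixed random reservoir, trained readout.
# Dimensions and scalings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 4, 200                        # input features, reservoir neurons
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(U):
    """U: (T, n_in) input sequence -> (T, n_res) reservoir states."""
    x, states = np.zeros(n_res), []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)       # recurrent state update
        states.append(x.copy())
    return np.array(states)

def train_readout(U, Y, ridge=1e-6):
    """Ridge regression from reservoir states to targets Y: (T, n_out)."""
    X = run_reservoir(U)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

# Prediction: run_reservoir(U_new) @ W_out could estimate, e.g., a user's
# content request distribution from recent context features.
```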
Online Reinforcement Learning of X-Haul Content Delivery Mode in Fog Radio Access Networks
We consider a Fog Radio Access Network (F-RAN) with a Base Band Unit (BBU) in
the cloud and multiple cache-enabled enhanced Remote Radio Heads (eRRHs). The
system aims at delivering contents on demand with minimal average latency from
a time-varying library of popular contents. Information about uncached
requested files can be transferred from the cloud to the eRRHs by following
either backhaul or fronthaul modes. The backhaul mode transfers fractions of
the requested files, while the fronthaul mode transmits quantized baseband
samples as in Cloud-RAN (C-RAN). The backhaul mode allows the caches of the
eRRHs to be updated, which may lower future delivery latencies. In contrast,
the fronthaul mode enables cooperative C-RAN transmissions that may reduce the
current delivery latency. Taking into account the trade-off between current and
future delivery performance, this paper proposes an adaptive selection method
between the two delivery modes to minimize the long-term delivery latency.
Assuming an unknown and time-varying popularity model, the method is based on
model-free Reinforcement Learning (RL). Numerical results confirm the
effectiveness of the proposed RL scheme.
Comment: 5 pages, 2 figures
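A minimal sketch of the mode-selection loop follows, assuming tabular Q-learning with an epsilon-greedy policy and negative delivery latency as the reward; the paper's state definition and RL variant may differ.

```python
# Sketch: epsilon-greedy tabular Q-learning over the two delivery modes.
# State features and the negative-latency reward are assumptions.
import random
from collections import defaultdict

ACTIONS = ("backhaul", "fronthaul")
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.1

def select_mode(state):
    if random.random() < eps:                         # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

def observe(state, action, latency, next_state):
    r = -latency                                      # lower latency, higher reward
    best = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best - Q[(state, action)])
```

The discount factor is what lets the agent trade a slower backhaul delivery now (which updates the eRRH caches) against lower latencies in future epochs.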
Learning-Based Optimization of Cache Content in a Small Cell Base Station
Optimal cache content placement in a wireless small cell base station (sBS)
with limited backhaul capacity is studied. The sBS has a large cache memory and
provides content-level selective offloading by delivering high data rate
contents to users in its coverage area. The goal of the sBS content controller
(CC) is to store the most popular contents in the sBS cache memory such that
the maximum amount of data can be fetched directly from the sBS, not relying on
the limited backhaul resources during peak traffic periods. If the popularity
profile is known in advance, the problem reduces to a knapsack problem.
However, it is assumed in this work that the popularity profile of the files
is not known by the CC, and it can only observe the instantaneous demand for
the cached content. Hence, the cache content placement is optimised based on
the demand history. By refreshing the cache content at regular time intervals,
the CC tries to learn the popularity profile, while exploiting the limited
cache capacity in the best way possible. Three algorithms are studied for this
cache content placement problem, leading to different exploitation-exploration
trade-offs. We provide extensive numerical simulations in order to study the
time-evolution of these algorithms, and the impact of the system parameters,
such as the number of files, the number of users, the cache size, and the
skewness of the popularity profile, on the performance. It is shown that the
proposed algorithms quickly learn the popularity profile for a wide range of
system parameters.
Comment: Accepted to IEEE ICC 2014, Sydney, Australia. Minor typos corrected; algorithm MCUCB corrected.
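The MCUCB name points to a UCB-flavored bandit treatment of the placement problem. As a hedged illustration only (the actual MCUCB algorithm differs in detail), a refresh rule could rank files by an upper confidence bound on their estimated popularity and cache the top ones:

```python
# Illustrative UCB-style refresh (the paper's MCUCB differs in detail):
# cache the files with the highest upper confidence bound on popularity.
import math

def ucb_refresh(counts, pulls, t, cache_size):
    """counts[f]: demand observed for file f while it was cached;
    pulls[f]: number of intervals f spent in the cache; t >= 1: interval index."""
    def ucb(f):
        if pulls[f] == 0:
            return float("inf")   # untried files get priority (exploration)
        mean = counts[f] / pulls[f]
        return mean + math.sqrt(2 * math.log(t) / pulls[f])
    ranked = sorted(counts, key=ucb, reverse=True)
    return set(ranked[:cache_size])  # cache content for the next interval
```

Because the CC only observes demand for files that are actually cached, the exploration bonus is what forces it to occasionally cache unproven files instead of exploiting the current popularity estimates.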
A Deep Reinforcement Learning-Based Framework for Content Caching
Content caching at the edge nodes is a promising technique to reduce the data
traffic in next-generation wireless networks. Inspired by the success of Deep
Reinforcement Learning (DRL) in solving complicated control problems, this work
presents a DRL-based framework with Wolpertinger architecture for content
caching at the base station. The proposed framework is aimed at maximizing the
long-term cache hit rate, and it requires no knowledge of the content
popularity distribution. To evaluate the proposed framework, we compare the
performance with other caching algorithms, including Least Recently Used (LRU),
Least Frequently Used (LFU), and First-In First-Out (FIFO) caching strategies.
Meanwhile, since the Wolpertinger architecture can effectively limit the action
space size, we also compare the performance with Deep Q-Network to identify the
impact of dropping a portion of the actions. Our results show that the proposed framework achieves a higher short-term cache hit rate and a higher, more stable long-term cache hit rate than the LRU, LFU, and FIFO schemes.
Additionally, the performance is shown to be competitive in comparison to Deep
Q-learning, while the proposed framework can provide significant savings in
runtime.
Comment: 6 pages, 3 figures
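The Wolpertinger step that limits the action space works roughly as follows: a continuous proto-action from the actor is mapped to its k nearest valid caching actions, and the critic picks the best of those k. In the sketch below, fixed random linear maps stand in for the framework's trained actor and critic networks; all dimensions are assumptions.

```python
# Wolpertinger-style action selection sketch. The random linear actor and
# critic below are placeholders for the framework's trained networks.
import numpy as np

rng = np.random.default_rng(1)
n_items, state_dim, embed_dim = 1000, 16, 8
action_embeds = rng.normal(size=(n_items, embed_dim))  # one per cacheable item
W_actor = rng.normal(size=(state_dim, embed_dim))
W_critic = rng.normal(size=(state_dim + embed_dim,))

def actor(state):                      # proto-action in the embedding space
    return np.tanh(state @ W_actor)

def critic(state, action_embed):       # placeholder Q(s, a) estimate
    return float(np.concatenate([state, action_embed]) @ W_critic)

def wolpertinger_select(state, k=10):
    proto = actor(state)
    dists = np.linalg.norm(action_embeds - proto, axis=1)
    candidates = np.argsort(dists)[:k]           # k nearest valid actions
    return max(candidates, key=lambda i: critic(state, action_embeds[i]))

# Example: wolpertinger_select(rng.normal(size=state_dim)) returns the
# index of the item to cache, having evaluated only k of n_items actions.
```

Evaluating the critic on only k candidates instead of all n_items actions is the source of the runtime savings the abstract reports over plain Deep Q-learning.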