A Delay-Aware Caching Algorithm for Wireless D2D Caching Networks
Recently, wireless caching techniques have been studied as a way to meet low-delay
requirements and to offload traffic from peak periods. By storing parts of the
popular files at mobile users, users can find some of their requested files in
their own caches or in the caches of their neighbors. In the latter case,
when a user receives files from its neighbors, device-to-device (D2D)
communication is enabled. D2D communication underlaid with cellular networks is
also a new paradigm for the upcoming 5G wireless systems. By allowing a pair of
adjacent D2D users to communicate directly, D2D communication can achieve
higher throughput, better energy efficiency and lower traffic delay. In this
work, we propose an efficient caching algorithm for D2D-enabled cellular
networks that minimizes the average transmission delay. Instead of searching over
all possible solutions, our algorithm identifies, in each iteration, the pairs
that yield the greatest delay improvement, building up a caching policy
with very low transmission delay and high throughput. This algorithm is also
extended to address a more general scenario, in which the distributions of
fading coefficients and values of system parameters potentially change over
time. Via numerical results, the superiority of the proposed algorithm is
verified by comparing it with a naive algorithm, in which all users simply
cache their favorite files.
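The greedy placement described above can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: the delay values, the neighbor sets, the uniform popularity model shared by all users, and the function name are all assumptions made for the example.

```python
import itertools

def greedy_cache_placement(num_users, num_files, capacity,
                           popularity, d_self, d_d2d, d_bs, neighbors):
    """Greedy delay-aware cache placement (illustrative sketch).

    popularity[f] : request probability of file f (assumed equal across users)
    d_self < d_d2d < d_bs : delay when served from own cache, a neighbor's
                            cache via D2D, or the base station, respectively
    neighbors[u]  : list of users within D2D range of user u
    """
    caches = {u: set() for u in range(num_users)}

    def avg_delay():
        # Expected per-user transmission delay under the current placement.
        total = 0.0
        for u in range(num_users):
            for f in range(num_files):
                if f in caches[u]:
                    d = d_self
                elif any(f in caches[v] for v in neighbors[u]):
                    d = d_d2d
                else:
                    d = d_bs
                total += popularity[f] * d
        return total / num_users

    # In each iteration, place the single (user, file) pair that yields
    # the greatest delay improvement, instead of searching all placements.
    while any(len(caches[u]) < capacity for u in range(num_users)):
        best, best_delay = None, avg_delay()
        for u, f in itertools.product(range(num_users), range(num_files)):
            if len(caches[u]) >= capacity or f in caches[u]:
                continue
            caches[u].add(f)
            d = avg_delay()
            caches[u].remove(f)
            if d < best_delay:
                best, best_delay = (u, f), d
        if best is None:  # no placement improves the delay further
            break
        caches[best[0]].add(best[1])
    return caches
```

With two neighboring users, one cache slot each, and popularity skewed toward file 0, the greedy rule caches file 0 at one user and file 1 at the other, so the most popular files are reachable locally or over one D2D hop rather than duplicated, which is exactly why it beats the naive "everyone caches their favorite file" baseline.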
A Deep Reinforcement Learning-Based Framework for Content Caching
Content caching at the edge nodes is a promising technique to reduce the data
traffic in next-generation wireless networks. Inspired by the success of Deep
Reinforcement Learning (DRL) in solving complicated control problems, this work
presents a DRL-based framework with Wolpertinger architecture for content
caching at the base station. The proposed framework is aimed at maximizing the
long-term cache hit rate, and it requires no knowledge of the content
popularity distribution. To evaluate the proposed framework, we compare the
performance with other caching algorithms, including Least Recently Used (LRU),
Least Frequently Used (LFU), and First-In First-Out (FIFO) caching strategies.
Meanwhile, since the Wolpertinger architecture can effectively limit the action
space size, we also compare the performance with Deep Q-Network to identify the
impact of dropping a portion of the actions. Our results show that the proposed
framework achieves a higher short-term cache hit rate, as well as a higher and
more stable long-term cache hit rate, than the LRU, LFU, and FIFO schemes.
Additionally, the performance is shown to be competitive in comparison to Deep
Q-learning, while the proposed framework can provide significant savings in
runtime.
Comment: 6 pages, 3 figures
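The classical baselines this framework is evaluated against can be reproduced with a short simulation. This is a self-contained sketch of standard LRU, FIFO, and LFU eviction, not the paper's DRL framework; the request traces and cache capacity used below are illustrative assumptions.

```python
from collections import Counter, OrderedDict, deque

def simulate(policy, requests, capacity):
    """Return the cache hit rate of a baseline eviction policy
    ('lru', 'fifo', or 'lfu') on a sequence of file requests."""
    hits = 0
    if policy == "lru":
        cache = OrderedDict()  # insertion order tracks recency
        for r in requests:
            if r in cache:
                hits += 1
                cache.move_to_end(r)          # mark as most recently used
            else:
                if len(cache) >= capacity:
                    cache.popitem(last=False)  # evict least recently used
                cache[r] = True
    elif policy == "fifo":
        cache, order = set(), deque()
        for r in requests:
            if r in cache:
                hits += 1
            else:
                if len(cache) >= capacity:
                    cache.remove(order.popleft())  # evict oldest insertion
                cache.add(r)
                order.append(r)
    elif policy == "lfu":
        cache, freq = set(), Counter()
        for r in requests:
            freq[r] += 1
            if r in cache:
                hits += 1
            else:
                if len(cache) >= capacity:
                    # Evict the cached file with the lowest request count
                    # (ties broken arbitrarily by set iteration order).
                    cache.remove(min(cache, key=lambda f: freq[f]))
                cache.add(r)
    return hits / len(requests)
```

All three baselines react only to the observed request stream, with no model of the popularity distribution, which is the same informational constraint the DRL framework operates under when maximizing the long-term hit rate.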