Asymptotically-Optimal Incentive-Based En-Route Caching Scheme
Content caching at intermediate nodes is a very effective way to optimize the
operations of computer networks, so that future requests can be served without
going back to the origin of the content. Several caching techniques have been
proposed since the emergence of the concept, including techniques that require
major changes to the Internet architecture such as Content Centric Networking.
Few of these techniques consider providing caching incentives for the nodes or
quality of service guarantees for content owners. In this work, we present a
low complexity, distributed, and online algorithm for making caching decisions
based on content popularity, while taking into account the aforementioned
issues. Our algorithm performs en-route caching. Therefore, it can be
integrated with the current TCP/IP model. In order to measure the performance
of any online caching algorithm, we define the competitive ratio as the ratio
of the performance of the online algorithm in terms of traffic savings to the
performance of the optimal offline algorithm that has a complete knowledge of
the future. We show that under our settings, no online algorithm can achieve a
competitive ratio better than a bound determined by the number of
nodes in the network. Furthermore, we show that under realistic scenarios, our
algorithm has an asymptotically optimal competitive ratio in terms of the
number of nodes in the network. We also study an extension to the basic
algorithm and show its effectiveness through extensive simulations.
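As a minimal sketch of the kind of decision rule the abstract describes, the following is an illustrative popularity-based en-route caching policy, not the paper's algorithm: each node counts requests it forwards and caches an item locally once its observed popularity crosses a threshold. The class name, threshold rule, and capacities are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's algorithm): an online, distributed
# en-route caching rule. Each node decides locally whether to cache a
# passing content item, based on the request popularity it has observed.

from collections import defaultdict

class EnRouteNode:
    def __init__(self, capacity, threshold=3):
        self.capacity = capacity          # items the node can hold
        self.threshold = threshold        # popularity needed before caching
        self.requests = defaultdict(int)  # observed request count per item
        self.cache = set()

    def on_request(self, item):
        """Called as a request passes through this node toward the origin.

        Returns True if the item is served from the local cache (a traffic
        saving), False if the request must continue toward the origin."""
        if item in self.cache:
            return True
        self.requests[item] += 1
        if self.requests[item] >= self.threshold and len(self.cache) < self.capacity:
            self.cache.add(item)  # store the item on its way back downstream
        return False

node = EnRouteNode(capacity=2)
hits = [node.on_request("video-7") for _ in range(5)]
# first three requests travel to the origin; the item is then cached locally
```

Because every decision uses only local counters, the rule is online and distributed, matching the setting in which the competitive-ratio bound above is stated.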
A Deep Reinforcement Learning-Based Framework for Content Caching
Content caching at the edge nodes is a promising technique to reduce the data
traffic in next-generation wireless networks. Inspired by the success of Deep
Reinforcement Learning (DRL) in solving complicated control problems, this work
presents a DRL-based framework with Wolpertinger architecture for content
caching at the base station. The proposed framework is aimed at maximizing the
long-term cache hit rate, and it requires no knowledge of the content
popularity distribution. To evaluate the proposed framework, we compare the
performance with other caching algorithms, including Least Recently Used (LRU),
Least Frequently Used (LFU), and First-In First-Out (FIFO) caching strategies.
Meanwhile, since the Wolpertinger architecture can effectively limit the action
space size, we also compare the performance with Deep Q-Network to identify the
impact of dropping a portion of the actions. Our results show that the proposed
framework can achieve improved short-term cache hit rate and improved and
stable long-term cache hit rate in comparison with LRU, LFU, and FIFO schemes.
Additionally, the performance is shown to be competitive in comparison to Deep
Q-learning, while the proposed framework can provide significant savings in
runtime.
Comment: 6 pages, 3 figures
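The three baseline policies the abstract compares against are standard and easy to make concrete. The sketch below implements LRU, FIFO, and LFU and measures their cache hit rate on a synthetic, skewed request trace; the DRL/Wolpertinger agent itself is beyond a short sketch and is omitted. The trace and cache size are illustrative choices.

```python
# Sketch of the LRU, LFU, and FIFO baselines, scored by cache hit rate.

from collections import OrderedDict, Counter, deque

class LRU:
    def __init__(self, size):
        self.size, self.d = size, OrderedDict()
    def access(self, k):
        if k in self.d:
            self.d.move_to_end(k)          # mark as most recently used
            return True
        if len(self.d) >= self.size:
            self.d.popitem(last=False)     # evict least recently used
        self.d[k] = True
        return False

class FIFO:
    def __init__(self, size):
        self.size, self.q, self.s = size, deque(), set()
    def access(self, k):
        if k in self.s:
            return True
        if len(self.q) >= self.size:
            self.s.discard(self.q.popleft())  # evict oldest insertion
        self.q.append(k); self.s.add(k)
        return False

class LFU:
    def __init__(self, size):
        self.size, self.freq, self.s = size, Counter(), set()
    def access(self, k):
        self.freq[k] += 1
        if k in self.s:
            return True
        if len(self.s) >= self.size:
            victim = min(self.s, key=lambda x: self.freq[x])  # least frequent
            self.s.discard(victim)
        self.s.add(k)
        return False

def hit_rate(policy, trace):
    return sum(policy.access(k) for k in trace) / len(trace)

trace = [0, 1, 0, 2, 0, 3, 0, 1, 0, 2] * 50   # skewed: item 0 dominates
rates = {name: hit_rate(cls(2), trace)
         for name, cls in [("LRU", LRU), ("FIFO", FIFO), ("LFU", LFU)]}
```

On this trace, policies that keep the dominant item resident (LRU, LFU) approach a hit rate of 0.5, which is the kind of baseline the DRL framework is evaluated against.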
On the Theory of Spatial and Temporal Locality
This paper studies the theory of caching and temporal and spatial locality. We show the following results: (1) hashing can be used to guarantee that caches with limited associativity behave as well as a fully associative cache; (2) temporal locality cannot be characterized using one or a few parameters; (3) temporal locality and spatial locality cannot be studied separately; and (4) unlike temporal locality, spatial locality cannot be managed efficiently online.
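Result (1) can be illustrated with a toy simulation, sketched below under assumptions of my own (a direct-mapped cache and a simple XOR-folding index, not the paper's construction): a strided access pattern maps every block to the same set under plain modulo indexing and thrashes the cache, while hashing the block address first spreads the blocks across sets.

```python
# Toy illustration of result (1): modulo indexing vs. a hashed index
# on a direct-mapped cache, under a strided access pattern.

def simulate(trace, num_sets, index_fn):
    cache = [None] * num_sets   # direct-mapped: one block per set
    hits = 0
    for block in trace:
        s = index_fn(block)
        if cache[s] == block:
            hits += 1
        else:
            cache[s] = block    # miss: the new block evicts the old one
    return hits

NUM_SETS = 8
stride = [i * NUM_SETS for i in range(4)] * 100  # blocks 0, 8, 16, 24 repeated

modulo = lambda b: b % NUM_SETS                   # all four blocks -> set 0
hashed = lambda b: (b ^ (b >> 3)) % NUM_SETS      # XOR-fold tag bits into index

mod_hits = simulate(stride, NUM_SETS, modulo)     # every access conflicts
hash_hits = simulate(stride, NUM_SETS, hashed)    # blocks land in distinct sets
```

With modulo indexing the four blocks fight over one set and never hit; with the hashed index they occupy distinct sets, so only the four compulsory misses remain, which is exactly the fully-associative behavior the result guarantees for this trace.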
Cache-Related Preemption Delay Computation for Set-Associative Caches - Pitfalls and Solutions
In preemptive real-time systems, scheduling analyses need - in addition to the worst-case execution time - the context-switch cost. In case of preemption, the preempted and the preempting task may interfere on the cache memory. These interferences lead to additional reloads in the preempted task. The delay due to these reloads is referred to as the cache-related preemption delay (CRPD). The CRPD constitutes a large part of the context-switch cost. In this article, we focus on the computation of upper bounds on the CRPD based on the concepts of useful cache blocks (UCBs) and evicting cache blocks (ECBs). We explain how these concepts can be used to bound the CRPD in case of direct-mapped caches. Then we consider set-associative caches with LRU, FIFO, and PLRU replacement. We show potential pitfalls when using UCBs and ECBs to bound the CRPD in case of LRU and
demonstrate that neither UCBs nor ECBs can be used to bound the CRPD in case of FIFO and PLRU. Finally, we sketch a new approach to circumvent these limitations by using the concept of relative competitiveness.
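For the direct-mapped case the abstract mentions, the standard UCB/ECB bound is simple to state: with one block per cache set, a useful cache block of the preempted task needs an extra reload only if the preempting task evicts it, i.e. only if its set is also an evicting cache block. The sketch below encodes that bound; the block reload time and example sets are illustrative values, and the sketch does not apply to the LRU, FIFO, or PLRU cases discussed above.

```python
# Direct-mapped UCB/ECB bound: each useful block evicted by the
# preempting task is reloaded at most once after the preemption.

def crpd_bound_direct_mapped(ucbs, ecbs, block_reload_time):
    """Upper bound on the cache-related preemption delay for a
    direct-mapped cache: block_reload_time per set that holds a
    useful block (UCB) and is also touched by the preempter (ECB)."""
    return len(set(ucbs) & set(ecbs)) * block_reload_time

ucbs = {0, 2, 5, 7}   # sets holding blocks the preempted task will reuse
ecbs = {1, 2, 3, 7}   # sets accessed by the preempting task
bound = crpd_bound_direct_mapped(ucbs, ecbs, block_reload_time=10)
# sets 2 and 7 conflict, so at most 2 reloads are charged
```

This bound then feeds into the context-switch cost used by the scheduling analysis; the article's point is that transferring the same intersection argument to set-associative replacement policies is where the pitfalls arise.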