Performance Evaluation of Caching Policies in NDN - an ICN Architecture
Information Centric Networking (ICN) advocates the philosophy of accessing
content independently of its location. Owing to this location independence in
ICN, the routers en route can be enabled to cache content in order to serve
future requests for the same content locally. Several ICN architectures have
been proposed in the literature, along with various algorithms for caching
and cache replacement at the en-route routers. The aim of this paper is to
critically evaluate various caching policies using Named Data Networking
(NDN), an ICN architecture proposed in the literature. We present a
performance comparison of three caching policies, namely First In First Out
(FIFO), Least Recently Used (LRU), and Universal Caching (UC), in two network
models: the Watts-Strogatz (WS) model (suitable for dense, short-link networks
such as sensor networks) and the Sprint topology (better suited for large
Internet Service Provider (ISP) networks), using ndnSIM, an ns-3-based
discrete event simulator for the NDN architecture. Our results indicate that
UC outperforms the other caching policies, LRU and FIFO, making UC a better
alternative for both sensor networks and ISP networks.
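The abstract does not spell out the eviction rules it compares, so the minimal Python sketch below only illustrates how a FIFO and an LRU content store differ. It is a generic illustration (the class and parameter names are invented here), not the paper's ndnSIM implementation, and Universal Caching is omitted because its rule is not described in the abstract.

```python
from collections import OrderedDict

class ContentStore:
    """Minimal router content store illustrating FIFO vs. LRU eviction."""

    def __init__(self, capacity, policy="LRU"):
        self.capacity = capacity
        self.policy = policy            # "FIFO" or "LRU"
        self.store = OrderedDict()      # insertion order doubles as eviction order

    def request(self, name):
        """Return True on a cache hit; on a miss, cache the content."""
        if name in self.store:
            if self.policy == "LRU":
                self.store.move_to_end(name)   # a hit refreshes recency under LRU
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)     # evict the entry at the head of the order
        self.store[name] = True
        return False

# LRU keeps 'a' after it is re-requested; FIFO evicts it regardless of reuse.
lru, fifo = ContentStore(2, "LRU"), ContentStore(2, "FIFO")
for name in ["a", "b", "a", "c"]:
    lru.request(name), fifo.request(name)
print("a" in lru.store, "a" in fifo.store)     # True False
```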
Dynamic Coded Caching in Wireless Networks
We consider distributed and dynamic caching of coded content at small base
stations (SBSs) in an area served by a macro base station (MBS). Specifically,
content is encoded using a maximum distance separable code and cached according
to a time-to-live (TTL) cache eviction policy, which allows coded packets to be
removed from the caches at periodic times. Mobile users requesting a particular
content download coded packets from SBSs within communication range. If
additional packets are required to decode the file, these are downloaded from
the MBS. We formulate an optimization problem that is efficiently solved
numerically, providing TTL caching policies minimizing the overall network
load. We demonstrate that distributed coded caching using TTL caching policies
can offer significant reductions in terms of network load when request arrivals
are bursty. We show how the distributed coded caching problem utilizing TTL
caching policies can be analyzed as a specific single cache, convex
optimization problem. Our problem encompasses static caching and the single
cache as special cases. We prove that, interestingly, static caching is optimal
under a Poisson request process, and that for a single cache the optimization
problem has a surprisingly simple solution.
Comment: To appear in IEEE Transactions on Communications
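As a rough illustration of the setting, the sketch below counts how many coded packets a user must still fetch from the MBS when each in-range SBS holds MDS-coded packets whose TTL may already have expired. The function name and parameters are assumptions made for illustration; this is not the paper's optimization model.

```python
def packets_from_mbs(k, sbs_caches, now):
    """Coded packets a user must download from the MBS when decoding requires
    k MDS-coded packets and each in-range SBS caches (packets, expiry) pairs
    evicted by a TTL policy.  Illustrative sketch only, not the paper's model."""
    available = sum(n for n, expiry in sbs_caches if expiry > now)
    return max(0, k - available)

# Example: the file decodes from any k = 8 coded packets; at time t = 5 the
# second SBS's cached packets have already expired under the TTL policy.
caches = [(3, 10.0), (4, 2.0), (2, 7.0)]
print(packets_from_mbs(8, caches, now=5.0))    # 8 - (3 + 2) = 3
```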
Optimistic No-regret Algorithms for Discrete Caching
We take a systematic look at the problem of storing whole files in a cache
with limited capacity in the context of optimistic learning, where the caching
policy has access to a prediction oracle (provided by, e.g., a Neural Network).
The successive file requests are assumed to be generated by an adversary, and
no assumption is made on the accuracy of the oracle. In this setting, we
provide a universal lower bound for prediction-assisted online caching and
proceed to design a suite of policies with a range of performance-complexity
trade-offs. All proposed policies offer sublinear regret bounds commensurate
with the accuracy of the oracle. Our results substantially improve upon all
recently proposed online caching policies, which, being unable to exploit the
oracle predictions, offer only O(√T) regret. In this pursuit, we design, to
the best of our knowledge, the first comprehensive optimistic
Follow-the-Perturbed-Leader policy, which generalizes beyond the caching
problem. We also study the problem of caching files with different sizes and
the bipartite network caching problem. Finally, we evaluate the efficacy of the
proposed policies through extensive numerical experiments using real-world
traces.
Comment: Accepted to ACM SIGMETRICS 202
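To make the optimistic approach concrete, here is a small sketch of a Follow-the-Perturbed-Leader caching rule that ranks files by past request counts plus an oracle's predicted next request, with Gaussian perturbation. The perturbation scaling, prediction weighting, and synthetic request stream are illustrative assumptions, not the policy or parameters proposed in the paper.

```python
import numpy as np

def optimistic_ftpl_cache(counts, prediction, k, eta, rng):
    """One round of an optimistic Follow-the-Perturbed-Leader caching rule:
    rank files by past request counts plus the oracle's predicted next request,
    perturbed by Gaussian noise, and cache the top k files.  Scaling choices
    here are illustrative, not the paper's exact policy."""
    score = counts + prediction + eta * rng.standard_normal(len(counts))
    return set(np.argsort(score)[-k:])          # indices of the k cached files

rng = np.random.default_rng(0)
N, k, T = 20, 5, 1000                           # catalog size, cache size, horizon
counts, hits = np.zeros(N), 0
for t in range(1, T + 1):
    req = int(rng.zipf(1.3)) % N                # synthetic, skewed request stream
    pred = np.eye(N)[req]                       # a perfect one-step oracle, for illustration
    cache = optimistic_ftpl_cache(counts, pred, k, eta=np.sqrt(t), rng=rng)
    hits += req in cache
    counts[req] += 1
print(f"empirical hit rate: {hits / T:.2f}")
```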
When Exploiting Individual User Preference Is Beneficial for Caching at Base Stations
Most prior works optimize caching policies based on the following
assumptions: 1) every user initiates requests according to content popularity,
2) all users have the same activity level, and 3) users are uniformly located
in the considered region. In practice, these assumptions often do not hold. In
this paper, we explore the benefit of optimizing caching policies for base
stations by exploiting user preference, taking into account the spatial
locality and different activity levels of users. We obtain optimal caching
policies,
respectively minimizing the download delay averaged over all file requests and
user locations in the network (namely network average delay), and minimizing
the maximal weighted download delay averaged over the file requests and
location of each user (namely maximal weighted user average delay), as well as
minimizing the weighted sum of both. The analysis and simulation results show
that exploiting heterogeneous user preference and activity level can improve user
fairness, and can also improve network performance when users exhibit spatial
locality.
Comment: Accepted by IEEE ICC 2018 Workshop on Information-Centric Edge
Computing and Caching for Future Networks
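The abstract does not state the optimal policy in closed form, so the sketch below only illustrates the modeling ingredients it names: per-user file preferences, heterogeneous activity levels, and spatial locality via user-to-base-station association, combined in a simple greedy placement. The function, names, and greedy rule are assumptions for illustration, not the paper's delay-optimal solution.

```python
import numpy as np

def greedy_placement(pref, activity, assoc, cache_size, num_bs):
    """Greedy sketch of preference-aware caching: each base station caches the
    files with the largest aggregate request rate over its own users, weighting
    every user's file preference by that user's activity level."""
    placement = []
    for b in range(num_bs):
        users = np.where(assoc == b)[0]          # spatial locality: users served by BS b
        demand = (activity[users, None] * pref[users]).sum(axis=0)
        placement.append(set(np.argsort(demand)[-cache_size:]))
    return placement

# Toy example: 4 users, 2 base stations, 6 files, each cache holds 2 files.
rng = np.random.default_rng(1)
pref = rng.dirichlet(np.ones(6), size=4)         # per-user file preference distributions
activity = np.array([5.0, 1.0, 2.0, 0.5])        # heterogeneous per-user request rates
assoc = np.array([0, 0, 1, 1])                   # user-to-BS association
print(greedy_placement(pref, activity, assoc, cache_size=2, num_bs=2))
```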
Building a flexible web caching system.
Web caching is a technology that has been demonstrated to
improve traffic on the Internet. Finding out how to
implement a Web caching architecture that assures
improvements is not an easy task. The problem is more
difficult when we are interested in deploying a distributed
and cooperative Web caching system. We have found that
some cooperative Web caching architectures can become
unviable when changes in the network environment appear.
This situation suggests that a cooperative Web caching
system could end up providing worse access to Web objects.
However, in this paper we present an architecture that
combines the best of several Web caching configurations
that we have previously analyzed. Our architecture gives
basic ideas for implementing a cooperative Web caching
system using groups of HTTP proxy servers, which can
improve access to remote Web objects regardless of the
changes that might occur in the network environment
(changes that could produce modifications in Web object
validation policies and/or types of caching communication).
Peer Reviewed
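As a toy illustration of cooperative caching with groups of HTTP proxy servers, the sketch below has a proxy answer from its local cache, then query sibling proxies in its group, and only then fetch from the origin server. The classes and lookup order are generic assumptions about cooperative caching, not the specific architecture proposed in the paper.

```python
class Proxy:
    """Toy cooperative Web cache: local lookup, then siblings, then origin."""

    def __init__(self, name):
        self.name, self.cache, self.siblings = name, {}, []

    def fetch(self, url, origin):
        if url in self.cache:                     # local hit
            return self.cache[url], f"hit:{self.name}"
        for s in self.siblings:                   # sibling hit (one hop, no recursion)
            if url in s.cache:
                self.cache[url] = s.cache[url]
                return self.cache[url], f"sibling:{s.name}"
        self.cache[url] = origin[url]             # miss everywhere: go to the origin server
        return self.cache[url], "origin"

origin = {"/index.html": "<html>...</html>"}
p1, p2 = Proxy("p1"), Proxy("p2")
p1.siblings, p2.siblings = [p2], [p1]
print(p1.fetch("/index.html", origin)[1])   # origin
print(p2.fetch("/index.html", origin)[1])   # sibling:p1
```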