11 research outputs found
No-Regret Caching with Noisy Request Estimates
Online learning algorithms have been successfully used to design caching
policies with regret guarantees. Existing algorithms assume that the cache
knows the exact request sequence, but this may not be feasible in high load
and/or memory-constrained scenarios, where the cache may have access only to
sampled requests or to approximate requests' counters. In this paper, we
propose the Noisy-Follow-the-Perturbed-Leader (NFPL) algorithm, a variant of
the classic Follow-the-Perturbed-Leader (FPL) algorithm for the setting where
request estimates are noisy, and we show that it achieves sublinear regret
under specific conditions on the request estimator. The experimental
evaluation compares NFPL against classic caching policies and validates the
approach on both synthetic and real request traces.
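The core FPL step — perturb the (possibly noisy) cumulative request counters and cache the top-k files — can be sketched as follows. This is a minimal illustration assuming exponentially distributed perturbations; the function name, the `eta` scale parameter, and the default values are ours, not the paper's:

```python
import random

def nfpl_cache(noisy_counts, k, eta=1.0):
    """Pick the k files to cache: add an independent exponential
    perturbation (mean eta) to each noisy cumulative request
    estimate, then keep the k files with the largest perturbed
    scores. A sketch of the Follow-the-Perturbed-Leader step with
    noisy counters; eta is an illustrative knob, not a tuned value.
    """
    perturbed = {f: c + random.expovariate(1.0 / eta)
                 for f, c in noisy_counts.items()}
    return set(sorted(perturbed, key=perturbed.get, reverse=True)[:k])
```

With a very small `eta` the perturbation becomes negligible and the rule degenerates to caching the k most-requested files according to the noisy counters.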
Online Caching with no Regret: Optimistic Learning via Recommendations
The design of effective online caching policies is an increasingly important
problem for content distribution networks, online social networks and edge
computing services, among other areas. This paper proposes a new algorithmic
toolbox for tackling this problem through the lens of optimistic online
learning. We build upon the Follow-the-Regularized-Leader (FTRL) framework,
which is developed further here to include predictions for the file requests,
and we design online caching algorithms for bipartite networks with fixed-size
caches or elastic leased caches subject to time-average budget constraints. The
predictions are provided by a content recommendation system that influences the
users' viewing activity and hence can naturally reduce the caching network's
uncertainty about future requests. We also extend the framework to learn and
utilize the best request predictor in cases where many are available. We prove
that the proposed optimistic learning caching policies can achieve sub-zero
performance loss (regret) for perfect predictions, and maintain the sub-linear
regret bound $O(\sqrt{T})$, which is the best achievable bound for policies
that do not use predictions, even for arbitrarily bad predictions. The performance of
the proposed algorithms is evaluated with detailed trace-driven numerical
tests. (arXiv admin note: substantial text overlap with arXiv:2202.1059.)
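A simplified integral sketch of the optimistic idea — score each file by its past cumulative requests plus the recommender's prediction of upcoming requests, and cache the top k — might look as follows. The names and the omission of the regularizer and of fractional caching variables are our simplifications of the FTRL framework described above:

```python
def optimistic_cache(counts, prediction, k):
    """Cache the k files with the largest optimistic score:
    past cumulative requests plus the predicted upcoming requests
    supplied by a recommendation system. A simplified integral
    sketch; the actual policy works with fractional caching
    variables and a regularizer (FTRL)."""
    score = {f: counts.get(f, 0) + prediction.get(f, 0)
             for f in set(counts) | set(prediction)}
    return set(sorted(score, key=score.get, reverse=True)[:k])
```

When the prediction is empty the rule reduces to plain Follow-the-Leader on the request counters; a confident prediction can pull a so-far-unpopular file into the cache.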
A Swiss Army Knife for Online Caching in Small Cell Networks
We consider a dense cellular network, in which a limited-size cache is available at every base station (BS). Coordinating content allocation across the different caches can lead to significant performance gains, but is a difficult problem even when full information about the network and the request process is available. In this paper we present qLRU-Δ, a general-purpose online caching policy that can be tailored to optimize different performance metrics, also in the presence of coordinated multipoint transmission techniques. The policy requires neither direct communication among BSs nor a priori knowledge of content popularity, and, under stationary request processes, it has provable performance guarantees.
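A hypothetical sketch of a qLRU-Δ-style policy — probabilistic insertion on misses, probabilistic promotion on hits weighted by a metric-specific gain — is given below. Here `delta_fn` is a placeholder we introduce for the paper's per-content marginal gain term (assumed to lie in [0, 1]), and `q` is the usual qLRU insertion probability:

```python
import random
from collections import OrderedDict

class QLRUDelta:
    """Sketch of a qLRU-Δ-style cache: on a miss, the content is
    inserted only with probability q (evicting the LRU item if the
    cache is full); on a hit, the content is moved to the front
    with probability delta_fn(item), a stand-in for the
    metric-specific marginal gain of keeping that content."""

    def __init__(self, capacity, q=0.1, delta_fn=lambda item: 1.0):
        self.capacity, self.q, self.delta_fn = capacity, q, delta_fn
        self.store = OrderedDict()  # front = most recently promoted

    def request(self, item):
        if item in self.store:                      # hit
            if random.random() < self.delta_fn(item):
                self.store.move_to_end(item, last=False)
            return True
        if random.random() < self.q:                # miss: insert w.p. q
            if len(self.store) >= self.capacity:
                self.store.popitem(last=True)       # evict LRU item
            self.store[item] = None
            self.store.move_to_end(item, last=False)
        return False
```

The probabilistic insertion (q < 1) slows adaptation but filters out one-hit wonders, which is what makes the policy well behaved under stationary request processes without any popularity estimates.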
AÇAI: Ascent Similarity Caching with Approximate Indexes
Similarity search is a key operation in multimedia retrieval systems and
recommender systems, and it will play an important role also for future machine
learning and augmented reality applications. When these systems need to serve
large objects with tight delay constraints, edge servers close to the end-user
can operate as similarity caches to speed up the retrieval. In this paper we
present AÇAI, a new similarity caching policy which improves on the state
of the art by using (i) an (approximate) index for the whole catalog to decide
which objects to serve locally and which to retrieve from the remote server,
and (ii) a mirror ascent algorithm to update the set of local objects with
strong guarantees even when the request process does not exhibit any
statistical regularity.
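The mirror-ascent idea can be sketched as an exponentiated-gradient update on a fractional cache state, followed by a crude renormalization to the cache capacity. This is an illustration under our own simplifying assumptions (negative-entropy mirror map, naive capping); AÇAI's actual update projects properly onto the capped simplex:

```python
import numpy as np

def mirror_ascent_step(y, grad, k, eta=0.1):
    """One sketched mirror-ascent update of a fractional cache
    state y (y[i] = fraction of object i kept locally, with
    sum(y) <= k): an exponentiated-gradient step driven by the
    (sub)gradient of the caching gain, then a crude clipping and
    rescaling back into the capacity budget k."""
    y = y * np.exp(eta * grad)          # multiplicative (mirror) update
    y = np.minimum(y, 1.0)              # each object cached at most once
    if y.sum() > k:
        y *= k / y.sum()                # scale back into the budget
    return y
```

The multiplicative form keeps the state strictly positive, so every object retains a chance of (re)entering the cache — a property that matters precisely because no statistical regularity of the request process is assumed.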