The K-Server Dual and Loose Competitiveness for Paging
This paper has two results. The first is based on the surprising observation
that the well-known ``least-recently-used'' paging algorithm and the
``balance'' algorithm for weighted caching are linear-programming primal-dual
algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that
generalizes them both and has an optimal performance guarantee for weighted
caching.
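For illustration, here is a minimal sketch of the Greedy-Dual rule for weighted caching with equal-size pages, as it is usually stated: each cached page keeps a credit, a fault lowers all credits by the smallest one and evicts a zero-credit page, and a hit restores the page's credit to its retrieval cost. The interface and variable names below are ours, not the paper's.

```python
def greedy_dual(requests, cost, k):
    """Greedy-Dual for weighted caching with k equal-size cache slots.

    requests: iterable of page ids; cost: dict page -> retrieval cost.
    Returns the total retrieval cost paid on faults.
    """
    credit = {}                       # cached pages and their credits
    total = 0.0
    for p in requests:
        if p in credit:
            credit[p] = cost[p]       # on a hit, restore full credit
            continue                  # (this reset choice is LRU-like)
        total += cost[p]              # fault: pay to retrieve p
        if len(credit) >= k:
            delta = min(credit.values())
            for q in credit:          # lower every credit uniformly
                credit[q] -= delta
            victim = next(q for q, c in credit.items() if c == 0.0)
            del credit[victim]        # evict a page whose credit hit zero
        credit[p] = cost[p]
    return total
```

With all costs equal to one, the credits behave like recency information, which is one way to see how the strategy generalizes least-recently-used.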
For the second result, the paper presents empirical studies of paging
algorithms, documenting that in practice, on ``typical'' cache sizes and
sequences, the performance of paging strategies is much better than their
worst-case analyses in the standard model suggest. The paper then presents
theoretical results that support and explain this. For example: on any input
sequence, with almost all cache sizes, either the performance guarantee of
least-recently-used is O(log k) or the fault rate (in an absolute sense) is
insignificant.
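As a hedged illustration of the kind of study this refers to, the following sketch sweeps cache sizes and reports the absolute LRU fault rate on a trace; the synthetic Zipf-like trace and the size grid are placeholders, not the paper's data.

```python
import random
from collections import OrderedDict

def lru_fault_rate(requests, k):
    """Fraction of requests that fault under LRU with a cache of size k."""
    cache, faults = OrderedDict(), 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)            # refresh recency on a hit
        else:
            faults += 1
            if len(cache) >= k:
                cache.popitem(last=False)   # evict the least recently used
            cache[p] = None
    return faults / len(requests)

# Placeholder trace with skewed (Zipf-like) popularity, standing in for a
# real reference string; sweep cache sizes as in the paper's experiments.
pages, weights = range(1000), [1 / (r + 1) for r in range(1000)]
trace = random.choices(pages, weights=weights, k=50_000)
for k in (16, 32, 64, 128, 256):
    print(k, round(lru_fault_rate(trace, k), 3))
```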
Both of these results are strengthened and generalized in ``On-Line File
Caching'' (1998).
Comment: conference version: "On-Line Caching as Cache Size Varies", SODA
(1991).
Using Grouped Linear Prediction and Accelerated Reinforcement Learning for Online Content Caching
Proactive caching is an effective way to alleviate peak-hour traffic
congestion by prefetching popular contents at the wireless network edge.
Maximizing caching efficiency requires knowledge of the content popularity
profile, which, however, is often unavailable in advance. In this paper, we first
propose a new linear prediction model, named grouped linear model (GLM) to
estimate future content requests based on historical data. Unlike many
existing works that assume a static content popularity profile, our model
can adapt to the temporal variation of content popularity in practical
systems caused by the arrival of new contents and the dynamics of user preference.
Based on the predicted content requests, we then propose a reinforcement
learning approach with model-free acceleration (RLMA) for online cache
replacement by taking into account both the cache hits and replacement cost.
This approach accelerates the learning process in a non-stationary environment by
generating imaginary samples for Q-value updates. Numerical results based on
real-world traces show that the proposed prediction- and learning-based online
caching policy outperforms all of the existing schemes considered.
Comment: 6 pages, 4 figures, ICC 2018 workshop
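The paper's RLMA algorithm is not reproduced here; the sketch below only illustrates the general acceleration idea it alludes to (replaying imaginary samples from a learned model for extra Q-value updates, in the style of Dyna-Q). The environment interface, state/action sets, and hyperparameters are all our assumptions.

```python
import random
from collections import defaultdict

def dyna_q(env_step, states, actions, steps=5000, planning=10,
           alpha=0.1, gamma=0.9, eps=0.1):
    """Q-learning with imagined updates from a learned model (Dyna-Q style).

    env_step(s, a) -> (reward, next_state) is supplied by the caller; in a
    caching setting the action would pick which item to replace and the
    reward would trade off cache hits against replacement cost.
    """
    Q = defaultdict(float)
    model = {}                                    # (s, a) -> (r, s')
    s = random.choice(states)
    for _ in range(steps):
        a = (random.choice(actions) if random.random() < eps
             else max(actions, key=lambda b: Q[(s, b)]))
        r, s2 = env_step(s, a)
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        model[(s, a)] = (r, s2)
        for _ in range(planning):                 # "imaginary" samples
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            ptarget = pr + gamma * max(Q[(ps2, b)] for b in actions)
            Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
        s = s2
    return Q

# Toy usage with a purely illustrative four-state environment:
S, A = list(range(4)), [0, 1]
step = lambda s, a: (1.0 if s == 3 else 0.0, (s + a) % 4)
Q = dyna_q(step, S, A)
```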
Query Load Balancing by Caching Search Results in Peer-to-Peer Information Retrieval Networks
For peer-to-peer web search engines, it is important to keep the delay between receiving a query and providing search results within a range acceptable to the end user. How to achieve this remains an open challenge. One way to reduce delays is to cache search results for queries and allow peers to access each other's caches. In this paper we explore the limitations of search-result caching in large-scale peer-to-peer information retrieval networks by simulating such networks with increasing levels of realism. We find that cache hit ratios of at least thirty-three percent are attainable.
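As a toy illustration only (not the paper's simulator), the sketch below estimates the hit ratio when each query can be served from the local cache or one neighbour's cache; the Zipf-like query distribution, network size, and cache capacity are assumptions.

```python
import random
from collections import OrderedDict

class PeerCache:
    """A peer's fixed-size LRU cache of search results (toy model)."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()
    def get(self, q):
        if q in self.store:
            self.store.move_to_end(q)
            return True
        return False
    def put(self, q):
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)
        self.store[q] = "results"               # placeholder result set

# Toy network: each query first checks the issuing peer's cache, then one
# randomly chosen neighbour's, before falling back to full retrieval.
peers = [PeerCache(500) for _ in range(50)]
N, M = 5_000, 100_000
weights = [1 / (r + 1) for r in range(N)]       # Zipf-like query popularity
queries = random.choices(range(N), weights=weights, k=M)
hits = 0
for q in queries:
    local, neighbour = random.sample(peers, 2)
    if local.get(q) or neighbour.get(q):
        hits += 1
    else:
        local.put(q)
print(f"hit ratio: {hits / M:.2%}")
```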
Modeling Data-Plane Power Consumption of Future Internet Architectures
With current efforts to design Future Internet Architectures (FIAs), the
evaluation and comparison of different proposals is an interesting research
challenge. Previously, metrics such as bandwidth or latency have commonly been
used to compare FIAs to IP networks. We suggest the use of power consumption as
a metric to compare FIAs. While low power consumption is an important goal in
its own right (as lower energy use translates to smaller environmental impact
as well as lower operating costs), power consumption can also serve as a proxy
for other metrics such as bandwidth and processor load.
Lacking power consumption statistics about either commodity FIA routers or
widely deployed FIA testbeds, we propose models for power consumption of FIA
routers. Based on our models, we simulate scenarios for measuring power
consumption of content delivery in different FIAs. Specifically, we address two
questions: 1) which of the proposed FIA candidates achieves the lowest energy
footprint; and 2) which set of design choices yields a power-efficient network
architecture? Although the lack of real-world data makes numerous assumptions
necessary for our analysis, we explore the uncertainty of our calculations
through a sensitivity analysis of the input parameters.
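The paper's router models are not reproduced here; the sketch below only shows the general shape such a data-plane power model might take (idle power plus per-packet and per-byte terms) and a one-at-a-time sensitivity sweep. All constants are placeholders, not measurements of any real router.

```python
def router_power(pps, byps, p_idle=50.0, e_pkt=2e-7, e_byte=5e-9):
    """Toy linear data-plane power model (watts).

    pps: packets per second; byps: bytes per second. The constants are
    placeholders, not measurements of any real FIA or IP router.
    """
    return p_idle + e_pkt * pps + e_byte * byps

baseline = dict(pps=1e6, byps=1e9)
base = router_power(**baseline)
# One-at-a-time sensitivity analysis: perturb each input by +10% and
# report the relative change in modeled power.
for name, value in baseline.items():
    perturbed = dict(baseline, **{name: value * 1.1})
    print(name, f"{(router_power(**perturbed) - base) / base:+.2%}")
```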
Echo State Networks for Proactive Caching in Cloud-Based Radio Access Networks with Mobile Users
In this paper, the problem of proactive caching is studied for cloud radio
access networks (CRANs). In the studied model, the baseband units (BBUs) can
predict the content request distribution and mobility pattern of each user, and
determine which content to cache at the remote radio heads and the BBUs. This problem
is formulated as an optimization problem which jointly incorporates backhaul
and fronthaul loads and content caching. To solve this problem, an algorithm
that combines the machine learning framework of echo state networks with
sublinear algorithms is proposed. Using echo state networks (ESNs), the BBUs
can predict each user's content request distribution and mobility pattern while
having only limited information on the network's and user's state. In order to
predict each user's periodic mobility pattern with minimal complexity, the
memory capacity of the corresponding ESN is derived for a periodic input. This
memory capacity is shown to be able to record the maximum amount of user
information for the proposed ESN model. Then, a sublinear algorithm is proposed
to determine which content to cache while using limited content request
distribution samples. Simulation results using real data from Youku and the
Beijing University of Posts and Telecommunications show that the proposed
approach yields significant gains in sum effective capacity, reaching up to
27.8% and 30.7% compared to random caching with clustering and random caching
without clustering, respectively.
Comment: Accepted in the IEEE Transactions on Wireless Communications
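As a rough illustration of the prediction machinery named here, below is a minimal echo state network (fixed random reservoir, ridge-regression readout) trained for one-step-ahead prediction of a periodic signal; the dimensions, toy signal, and regularization are our assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 200                                   # reservoir size (assumption)

# Fixed random reservoir, rescaled so the spectral radius is below 1,
# a standard sufficient condition for the echo state property.
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))

# Toy periodic "mobility" signal; the task is one-step-ahead prediction.
u = np.sin(2 * np.pi * np.arange(1000) / 50).reshape(-1, 1)

x = np.zeros(n_res)
states = np.zeros((len(u), n_res))
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in @ u[t])          # reservoir state update
    states[t] = x

# Ridge-regression readout mapping the reservoir state to the next input.
X, Y = states[100:-1], u[101:]                # discard a washout period
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)
pred = X @ W_out
print("one-step RMSE:", float(np.sqrt(np.mean((pred - Y) ** 2))))
```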
On-Line File Caching
In the on-line file-caching problem, the input is a sequence of
requests for files, given on-line (one at a time). Each file has a non-negative
size and a non-negative retrieval cost. The problem is to decide which files to
keep in a fixed-size cache so as to minimize the sum of the retrieval costs for
files that are not in the cache when requested. The problem arises in web
caching by browsers and by proxies. This paper describes a natural
generalization of LRU called Landlord and gives an analysis showing that it has
an optimal performance guarantee (among deterministic on-line algorithms).
The paper also gives an analysis of the algorithm in a so-called ``loosely''
competitive model, showing that on a ``typical'' cache size, either the
performance guarantee is O(1) or the total retrieval cost is insignificant.
Comment: ACM-SIAM Symposium on Discrete Algorithms (1998).
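A compact sketch of the Landlord rule as commonly stated: on a miss, credits of cached files are decreased in proportion to file size until some file's credit reaches zero, zero-credit files are evicted to make room, and the new file receives credit equal to its retrieval cost. The interface below is ours.

```python
def landlord(requests, size, cost, k):
    """Landlord file caching with capacity k; size/cost map file -> value.

    Returns the total retrieval cost paid on misses.
    """
    credit, used, total = {}, 0, 0.0
    for f in requests:
        if f in credit:
            credit[f] = cost[f]            # renew credit on a hit (LRU-like)
            continue
        total += cost[f]
        if size[f] > k:
            continue                       # larger than the cache: never cached
        while used + size[f] > k:
            victim = min(credit, key=lambda g: credit[g] / size[g])
            delta = credit[victim] / size[victim]
            for g in credit:               # charge "rent" pro rata to size
                credit[g] -= delta * size[g]
            used -= size[victim]
            del credit[victim]             # victim's credit reached zero
        credit[f] = cost[f]
        used += size[f]
    return total
```

With unit sizes this reduces to the Greedy-Dual rule sketched earlier, and with unit sizes and costs to LRU-like paging, matching the generalization of LRU the abstract describes.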