Optimal Error Correcting Delivery Scheme for Coded Caching with Symmetric Batch Prefetching
Coded caching is used to reduce network congestion during peak hours. A single server is connected to a set of users through a bottleneck link, which is generally assumed to be error-free. During non-peak hours, all users have full access to the files and fill their local caches with portions of the available files. During the delivery phase, each user requests a file, and the server makes coded transmissions that meet the demands while taking the cache contents into account. In this paper we assume that the shared link is error-prone. A new delivery scheme is required so that the demand of each user is met even after a finite number of transmissions are received in error. We characterize the minimum average rate and the minimum peak rate for this problem. We find closed-form expressions for these rates for a particular caching scheme, namely \textit{symmetric batch prefetching}. We also propose an optimal error-correcting delivery scheme for the coded caching problem with symmetric batch prefetching.
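For context, symmetric batch prefetching as commonly defined is the uncoded placement of the classical Maddah-Ali–Niesen scheme: each file is split into one batch per t-subset of users (t = KM/N, assumed integer), and delivery sends one XOR per (t+1)-subset. Below is a minimal Python sketch of that placement and delivery, assuming equal-length files divisible into the required number of batches; the error-correcting layer proposed in the paper is not shown.

```python
from itertools import combinations

def symmetric_batch_placement(num_users, t, files):
    """Split each file into C(K, t) equal batches, indexed by t-subsets of users;
    user k caches every batch whose index set contains k."""
    subsets = list(combinations(range(num_users), t))
    caches = {k: {} for k in range(num_users)}
    batches = {}
    for name, data in files.items():
        size = len(data) // len(subsets)
        for i, subset in enumerate(subsets):
            piece = data[i * size:(i + 1) * size]
            batches[(name, subset)] = piece
            for k in subset:
                caches[k][(name, subset)] = piece
    return batches, caches

def coded_delivery(num_users, t, demands, batches):
    """For every (t+1)-subset of users, transmit the XOR of the batches that
    each member demands but does not cache (they are cached by the others)."""
    transmissions = []
    for group in combinations(range(num_users), t + 1):
        payload = None
        for k in group:
            rest = tuple(u for u in group if u != k)
            piece = batches[(demands[k], rest)]
            payload = piece if payload is None else bytes(a ^ b for a, b in zip(payload, piece))
        transmissions.append((group, payload))
    return transmissions

# Toy example: K = 4 users, t = KM/N = 2, two 12-byte files -> C(4,2) = 6 batches each.
files = {"A": b"aaaaaaaaaaaa", "B": b"bbbbbbbbbbbb"}
batches, caches = symmetric_batch_placement(4, 2, files)
demands = {0: "A", 1: "B", 2: "A", 3: "B"}
for group, payload in coded_delivery(4, 2, demands, batches):
    print(group, payload.hex())
```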
A performance model of speculative prefetching in distributed information systems
Previous studies of speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area that has been largely ignored: performance modelling. We use the improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (the time available and the time required for prefetching) and speculative parameters (the probabilities of the next access). The performance-maximization problem is expressed as a stretch knapsack problem. We develop an algorithm that maximizes the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration of speculative prefetching with caching is also investigated, albeit under the assumption of equal item sizes.
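To make the ingredients of such a model concrete, the sketch below scores each prefetch candidate by its access probability times the access time it would save, bounded by the idle time available, and then selects candidates greedily. The scoring rule, the greedy selection, and all names are illustrative assumptions; this is not the paper's formula or its stretch-knapsack algorithm.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    access_prob: float   # speculative parameter: probability of next access
    fetch_time: float    # resource parameter: time required to prefetch

def expected_saving(c: Candidate, idle_time: float) -> float:
    # A prefetch only saves time if it can complete (at least partly) before
    # the next access; the saving cannot exceed the fetch time itself.
    return c.access_prob * min(c.fetch_time, idle_time)

def greedy_prefetch(candidates, idle_time):
    """Pick candidates by expected saving per unit of prefetch time until the
    idle time is exhausted (a heuristic, not the stretch-knapsack solution)."""
    chosen, remaining = [], idle_time
    ranked = sorted(candidates,
                    key=lambda c: expected_saving(c, idle_time) / c.fetch_time,
                    reverse=True)
    for c in ranked:
        if c.fetch_time <= remaining:
            chosen.append(c)
            remaining -= c.fetch_time
    return chosen

candidates = [Candidate("page-a", 0.6, 2.0),
              Candidate("page-b", 0.3, 1.0),
              Candidate("page-c", 0.1, 0.5)]
print([c.name for c in greedy_prefetch(candidates, idle_time=2.5)])
```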
Using Intelligent Prefetching to Reduce the Energy Consumption of a Large-scale Storage System
Many high-performance large-scale storage systems will experience significant workload increases as their user base and content availability grow over time. The U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) center hosts one such system, which has recently undergone a period of rapid growth as its user population grew nearly 400% in about three years. When administrators of these massive storage systems face the challenge of meeting the demands of an ever-increasing number of requests, the easiest solution is to integrate more advanced hardware into existing systems. However, additional investment in hardware may significantly increase the system cost as well as daily power consumption. In this paper, we present evidence that well-selected software-level optimization is capable of achieving comparable levels of performance without the cost and power-consumption overhead caused by physically expanding the system. Specifically, we develop intelligent prefetching algorithms that are suitable for the unique workloads and user behaviors of the world's largest satellite image distribution system, managed by USGS EROS. Our experimental results, derived from real-world traces with over five million requests sent by users around the globe, show that the EROS hybrid storage system could maintain the same performance with over 30% energy savings by utilizing our proposed prefetching algorithms, compared with the alternative solution of doubling the size of the current FTP server farm.
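As a generic illustration of history-based prefetching in a download archive of this kind (not the algorithms developed in the paper), the sketch below counts which items tend to follow one another in the request stream and prefetches the most frequent followers of the item just served; the class, parameters, and toy trace are all assumptions.

```python
from collections import defaultdict, deque

class CoAccessPrefetcher:
    """Track which items tend to be requested shortly after one another and
    suggest the most frequent followers of the item just served."""
    def __init__(self, history_window: int = 5, max_prefetch: int = 2):
        self.followers = defaultdict(lambda: defaultdict(int))
        self.recent = deque(maxlen=history_window)
        self.max_prefetch = max_prefetch

    def record_request(self, item: str):
        # Every item in the recent window gains this item as a follower.
        for earlier in self.recent:
            self.followers[earlier][item] += 1
        self.recent.append(item)

    def suggest(self, item: str):
        ranked = sorted((kv for kv in self.followers[item].items() if kv[0] != item),
                        key=lambda kv: kv[1], reverse=True)
        return [name for name, _ in ranked[:self.max_prefetch]]

p = CoAccessPrefetcher()
for scene in ["LC08_A", "LC08_B", "LC08_A", "LC08_C", "LC08_A", "LC08_B"]:
    p.record_request(scene)
print(p.suggest("LC08_A"))   # ['LC08_B', 'LC08_C'] under this tiny trace
```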
Efficient Proactive Caching for Supporting Seamless Mobility
We present a distributed proactive caching approach that exploits user mobility information to decide where to proactively cache data to support seamless mobility, while efficiently utilizing cache storage through a congestion pricing scheme. The proposed approach is applicable to the case where objects have different sizes and to a two-level cache hierarchy, for both of which the proactive caching problem is hard. Additionally, our modeling framework considers both the case where the delay is independent of the requested data object's size and the case where the delay is a function of the object size. Our evaluation results show how various system parameters influence the delay gains of the proposed approach, which achieves robust and good performance relative to an oracle and to an optimal scheme for a flat cache structure.
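A minimal sketch of what a congestion-pricing rule for proactive caching could look like is given below: an object is proactively cached at a candidate location only if the expected delay gain (the probability that the user moves there times the delay avoided) exceeds a storage cost whose per-byte price grows with cache occupancy. The rule, parameters, and names are illustrative assumptions, not the scheme from the paper.

```python
class PricedCache:
    """Cache whose per-byte price rises with occupancy (congestion pricing)."""
    def __init__(self, capacity_bytes: int, base_price: float = 1e-9):
        self.capacity = capacity_bytes
        self.used = 0
        self.base_price = base_price
        self.objects = {}

    def price_per_byte(self) -> float:
        utilization = self.used / self.capacity
        # Price grows without bound as the cache fills up.
        return self.base_price / max(1e-9, 1.0 - utilization)

    def try_prefetch(self, obj_id: str, size: int,
                     move_prob: float, delay_saving: float) -> bool:
        """Cache the object only if the expected delay gain outweighs the
        congestion-priced storage cost (both in abstract 'seconds' units)."""
        if self.used + size > self.capacity:
            return False
        cost = self.price_per_byte() * size
        gain = move_prob * delay_saving
        if gain > cost:
            self.objects[obj_id] = size
            self.used += size
            return True
        return False

# A user is predicted to move toward "cell-7" with probability 0.4; caching a
# 5 MB object there would avoid an estimated 120 ms of retrieval delay.
cell7 = PricedCache(capacity_bytes=100 * 2**20)
print(cell7.try_prefetch("video-chunk-42", 5 * 2**20, move_prob=0.4, delay_saving=0.120))
```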