12 research outputs found

    Cache Digests

    This paper presents Cache Digest, a novel protocol and optimization technique for cooperative Web caching. Cache Digest allows proxies to make information about their cache contents available to peers in a compact form. A peer uses digests to identify neighbors that are likely to have a given document. Cache Digest is a promising alternative to traditional per-request query/reply schemes such as ICP. We discuss the design ideas behind Cache Digest and its implementation in the Squid proxy cache. The performance of Cache Digest is compared to ICP using real-world Web caches operated by NLANR. Our analysis shows that Cache Digest outperforms ICP in several categories. Finally, we outline improvements to these techniques that we are currently working on.
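    A Cache Digest is essentially a Bloom filter over a cache's contents: a peer can test membership locally, at the cost of occasional false positives, instead of sending a per-request ICP query over the network. The sketch below is a toy illustration of that idea, not Squid's actual digest format; the class name, bit sizes, and hash construction are assumptions made for the example.

```python
import hashlib

class CacheDigest:
    """Toy Bloom-filter digest of a cache's URL set (illustrative only;
    Squid's real Cache Digest uses its own hash and bit layout)."""

    def __init__(self, size_bits=1024, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, url):
        # Derive num_hashes bit positions from independent salted hashes
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, url):
        for pos in self._positions(url):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, url):
        # False positives are possible; false negatives are not
        # (for a freshly built digest)
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(url))

digest = CacheDigest()
digest.add("http://example.com/page.html")
assert digest.maybe_contains("http://example.com/page.html")
```

    In this scheme a proxy periodically fetches each peer's digest and consults it locally; a stale digest only degrades accuracy, never correctness, since a wrong guess falls back to a normal cache miss.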

    Eliminating the I/O Bottleneck in Large Web Caches

    This paper presents a technique for eliminating the disk bottleneck in large Web caches. Our objective is twofold. First, the presented algorithm substantially decreases disk activity during peak server load. Second, it maintains the hit ratio at the level of traditional caching policies. We evaluate the performance of the algorithm using trace-driven simulations based on access logs from several top-level Web caches.

    On Performance of Caching Proxies

    This paper presents a performance study of the state-of-the-art caching proxy called Squid. We instrumented Squid to measure per-request network and disk activities and conducted a series of experiments on large Web caches. We have discovered many interesting and consistent patterns across a wide variety of environments. Our data and analysis are essential for understanding, modeling, benchmarking, and tuning the performance of a proxy server. Keywords: performance analysis, Web caching, caching proxy, profiling, Squid. 1 Introduction: The World Wide Web has clearly become the environment for global information distribution, exchange, and sharing. Caching proxies play an important role in handling Web traffic, and Web caching is one of the key methods for coping with the exponential growth of the Web. Caching proxies are usually installed where many clients access the Internet through a single path. During the first client request to a Web object, a proxy cache stores a copy..

    The CANDID Video-on-Demand Server

    The paper presents a simulation study of CANDID, a video-on-demand server. CANDID is a distributed, scalable, cost-efficient server capable of providing service to hundreds or thousands of clients. The server consists of several processing nodes with several disks each, connected in a shared-nothing manner. Compressed movies are declustered across all disks of the server. Movies are read in fragments of equal play-back time rather than fragments of equal size as in traditional systems. Knowledge of fragment lengths allows deadlines for requests to be estimated accurately. We design a new real-time disk scheduling algorithm that emphasizes delivering video fragments close to their deadlines. The performance of CANDID is compared against that of SPIFFI, an advanced system based on a traditional approach. The comparison demonstrates that the new fragment delivery scheme and disk scheduling algorithm substantially reduce memory requirements on both the server and clients. The interac..
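    Reading movies in fragments of equal play-back time makes deadlines trivial to compute: fragment i of a stream that began at start_time must be buffered before start_time + i * fragment_duration. A minimal sketch of that deadline model and an earliest-deadline-first serving order follows; function and parameter names are assumptions for illustration, not CANDID's actual interface.

```python
import heapq

def fragment_deadlines(start_time, fragment_duration, num_fragments):
    """Fragment i must be delivered before its play-back begins at
    start_time + i * fragment_duration (simplified model)."""
    return [start_time + i * fragment_duration for i in range(num_fragments)]

def edf_order(requests):
    """Drain pending (deadline, stream_id, fragment_no) requests in
    earliest-deadline-first order, as a real-time disk scheduler might."""
    heapq.heapify(requests)
    while requests:
        yield heapq.heappop(requests)

# With 0.5 s fragments, the deadlines of the first four fragments
# of a stream started at t=0 are 0.0, 0.5, 1.0, 1.5 seconds.
assert fragment_deadlines(0.0, 0.5, 4) == [0.0, 0.5, 1.0, 1.5]
```

    Serving fragments close to (but not after) their deadlines, rather than as early as possible, is what keeps per-stream buffer requirements low.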

    Caching Policies for Reducing Disk I/Os in a Web Proxy

    Web proxy servers are a standard tool for caching Web traffic. The I/O subsystem appears to be a bottleneck in a proxy server. We are looking for a way to significantly reduce the amount of disk traffic during peak hours in a proxy server. We concentrate on designing alternative caching policies for a proxy server. The idea is to divide cacheable documents into two categories, "known" and "new", and store them in separate caches. "Known" documents are those that were accessed in the previous few days; they can be stored in the disk cache. "New" documents are the other documents, i.e., those that are fresh or have not been accessed for several days; they are cached in the memory cache. The disk cache content is mostly static and is updated only once a day, at off-peak hours, using a daily trace and one of the replacement algorithms. The memory cache content is dynamic and managed by its own replacement algorithm. We investigate a number of replacement algorithms for the disk cache and for the memory cache and ..
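    The "known"/"new" split can be sketched as two tiers: a disk cache whose contents are recomputed once a day from the trace, and a small LRU memory cache for everything else. The frequency-based rebuild rule and the LRU memory policy below are illustrative assumptions; the paper evaluates several replacement algorithms for each tier.

```python
from collections import Counter, OrderedDict

class TwoTierCache:
    """Sketch of the 'known'/'new' split: a static disk cache rebuilt
    off-peak from a daily trace, plus an LRU memory cache for new
    documents. Policies and sizes here are illustrative, not the paper's."""

    def __init__(self, mem_capacity):
        self.disk = set()            # static during the day: no peak-hour writes
        self.mem = OrderedDict()     # LRU for "new" documents
        self.mem_capacity = mem_capacity

    def rebuild_disk_cache(self, daily_trace, top_n):
        # Off-peak: keep the N most frequently accessed ("known") documents
        self.disk = {url for url, _ in Counter(daily_trace).most_common(top_n)}

    def lookup(self, url):
        if url in self.disk:
            return "disk-hit"
        if url in self.mem:
            self.mem.move_to_end(url)    # refresh LRU position
            return "mem-hit"
        # "New" document: cache in memory only, evicting LRU entry if full
        if len(self.mem) >= self.mem_capacity:
            self.mem.popitem(last=False)
        self.mem[url] = True
        return "miss"

cache = TwoTierCache(mem_capacity=2)
cache.rebuild_disk_cache(["a", "a", "b", "c"], top_n=1)
assert cache.lookup("a") == "disk-hit"
assert cache.lookup("x") == "miss"
assert cache.lookup("x") == "mem-hit"
```

    The key property is that peak-hour disk activity is limited to reads from the static tier; all churn from fresh documents is absorbed by the memory tier.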

    Static Caching in Web Servers

    Most existing studies on Web document caching consider Web proxies. The focus of this paper is caching in primary Web servers. This type of caching decreases request response time and server load. We use log files from four Web servers to analyze the performance of various proposed cache policies for Web servers: LRU-Threshold, LFU, LRU-SIZE, LRU-MIN, and the Pitkow/Recker policy. We also study the application of LRU-k, a caching policy initially designed for database systems, to Web servers. Augmenting LRU-k with a threshold parameter makes it an efficient caching policy for Web servers. Web document access patterns change very slowly: documents that are popular today will stay popular tomorrow. Based on this fact, we propose and evaluate static caching, a novel cache policy for Web servers. In static caching, the set of documents kept in the cache is determined periodically by analyzing the request log file for the previous period. The cache is filled with documents to maximize cache p..
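    Static caching as described above can be sketched as a periodic, log-driven cache fill: once per period, rank documents from the previous period's log and admit them greedily until capacity is exhausted. The hits-per-byte ranking below is an assumed metric (the abstract's objective is truncated); the mechanism of filling the cache offline from the log is what the paper describes.

```python
from collections import Counter

def build_static_cache(log, sizes, capacity):
    """Greedy sketch of static caching: rank documents by hits-per-byte
    over the previous period's request log, then fill the cache in that
    order. The ranking metric is an illustrative assumption."""
    hits = Counter(log)
    ranked = sorted(hits, key=lambda doc: hits[doc] / sizes[doc], reverse=True)
    cache, used = set(), 0
    for doc in ranked:
        if used + sizes[doc] <= capacity:
            cache.add(doc)
            used += sizes[doc]
    return cache

log = ["a", "a", "a", "b", "b", "c"]
sizes = {"a": 2, "b": 1, "c": 4}   # bytes (toy values)
assert build_static_cache(log, sizes, capacity=3) == {"a", "b"}
```

    Because the cache contents change only at period boundaries, there is no per-request replacement work and no eviction churn between rebuilds.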

    Buffer/Latency Tradeoff in Video Servers

    The study presents a queuing model for a variable workload of a video server. We show that to keep queuing delay small, the server must have some extra capacity, i.e., support more video streams. Most other studies consider the case when the load is constant and equal to the maximum number of streams supported by the server. In such a case, it is reasonable to start new movies as soon as possible because there is no external queuing delay. However, when several videos are started too close to each other in the service round, fragments for all of these videos but one have to be prefetched. This introduces a high memory requirement. We discuss a possible approach to decreasing the memory requirement: if video starts are distributed evenly within the service round, no prefetching is needed and the maximum required buffer is much smaller. The additional start delay introduced by this approach should be relatively small when compared with the queuing delay due to the required extra capacity of the se..
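    The even spacing of stream starts within the service round can be modeled in a few lines: each admitted stream is assigned one of max_streams evenly spaced offsets, so the streams' fragment deadlines are staggered across the round and no prefetching is needed. A new request waits for the next free slot instead of starting immediately, trading a small start delay for a much smaller per-stream buffer. Names and the slotting model are illustrative simplifications, not the paper's exact scheme.

```python
def evenly_spaced_starts(round_length, max_streams):
    """Offsets within the service round at which streams may start.
    Evenly spaced starts stagger fragment deadlines so that at most one
    stream's fragment is due at any instant (simplified model)."""
    slot = round_length / max_streams
    return [i * slot for i in range(max_streams)]

# With a 1-second service round and capacity for 4 streams,
# starts are admitted only at offsets 0.0, 0.25, 0.5, 0.75.
assert evenly_spaced_starts(round_length=1.0, max_streams=4) == [0.0, 0.25, 0.5, 0.75]
```

    The worst-case extra start delay under this model is one slot, i.e., round_length / max_streams, which is small relative to the queuing delay the abstract discusses.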