
    Variable QoS from Shared Web Caches: User-Centered Design and Value-Sensitive Replacement

    Full text link
    Due to differences in server capacity, external bandwidth, and client demand, some Web servers value cache hits more than others. Assuming that a shared cache knows the extent to which different servers value hits, it may employ a value-sensitive replacement policy in order to generate maximum aggregate value for servers. We consider both the prediction and value aspects of this problem and introduce a novel value-sensitive LFU/LRU hybrid that biases the allocation of cache space toward documents whose origin servers value caching most highly. We compare our algorithm with others from the Web caching literature and discuss, from an economic standpoint, the problems associated with obtaining servers' private valuation information.
    http://deepblue.lib.umich.edu/bitstream/2027.42/50430/1/varp.pd

    The Case for Market-based Push Caching

    Full text link
    http://deepblue.lib.umich.edu/bitstream/2027.42/60417/1/webcache.pd

    One Size Doesn't Fit All: Improving Network QoS Through Preference-driven Web Caching

    Full text link
    In order to combat Internet congestion, Web caches use replacement policies that attempt to keep in the cache the objects most likely to be requested in the future. We adopt the economic perspective that the objects with the greatest value to users should be in a cache. Using trace-driven simulations, we implement an incentive-compatible, market-based Web cache in which servers push content into the cache. This system decentralizes the caching process: servers provide information in the form of bids for space in the cache. The mechanism elicits truthful information from servers about object valuations and predicted hit rates. This information is used in filling the cache, which can provide increased aggregate value and differential quality of service to servers when compared to LFU and LRU.
    http://deepblue.lib.umich.edu/bitstream/2027.42/50429/1/berlin.pd
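    One way to picture the bid-driven cache fill described above is as a greedy knapsack allocation: servers bid a value for each object, and the cache admits objects in decreasing value-per-byte order until space runs out. This is a simplified sketch, not the paper's actual mechanism; the function and field names are hypothetical.

    ```python
    def fill_cache(bids, capacity):
        """Greedy, value-density-ordered cache fill (illustrative sketch).

        bids: list of (object_id, value, size) tuples submitted by servers.
        capacity: total cache space in the same units as size.
        Returns the list of admitted object ids.
        """
        admitted, used = [], 0
        # Consider objects in decreasing value-per-byte order.
        for obj, value, size in sorted(bids, key=lambda b: b[1] / b[2], reverse=True):
            if used + size <= capacity:
                admitted.append(obj)
                used += size
        return admitted
    ```

    A truthful pricing rule (e.g., a second-price-style payment) would be layered on top of such an allocation to make honest bidding incentive compatible; the sketch covers only the allocation step.
    
    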

    Biased Replacement Policies for Web Caches: Differential Quality-of-Service and Aggregate User Value

    Full text link
    Disk space in shared Web caches can be diverted to serve some system users at the expense of others. Cache hits reduce server loads, and if servers desire load reduction to different degrees, a replacement policy which prioritizes cache space across servers can provide differential quality-of-service (QoS). We present a simple generalization of least-frequently-used (LFU) replacement that is sensitive to varying levels of server valuation for cache hits. Through trace-driven simulation we show that under a particular assumption about server valuations our algorithm delivers a reasonable QoS relationship: higher byte hit rates for servers that value hits more. We furthermore adopt the economic perspective that value received by system users is a more appropriate performance metric than hit rate or byte hit rate, and demonstrate that our algorithm delivers higher "social welfare" (aggregate value to servers) than LRU or LFU.
    http://deepblue.lib.umich.edu/bitstream/2027.42/60420/1/weighted-lfu.pd
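    The valuation-weighted LFU generalization described above can be sketched as follows: each cached object carries a request count and an origin-server weight, and on a miss with a full cache the policy evicts the object minimizing weight times frequency. This is an illustrative sketch under assumed data structures; the class and parameter names (`WeightedLFUCache`, `server_weight`) are not from the paper.

    ```python
    class WeightedLFUCache:
        """Valuation-weighted LFU replacement (illustrative sketch)."""

        def __init__(self, capacity, server_weight):
            self.capacity = capacity            # max number of cached objects
            self.server_weight = server_weight  # server -> valuation per hit
            self.freq = {}                      # object -> request count
            self.origin = {}                    # object -> origin server

        def request(self, obj, server):
            if obj in self.freq:
                self.freq[obj] += 1
                return "hit"
            if len(self.freq) >= self.capacity:
                # Evict the object with the lowest (server weight * frequency).
                victim = min(
                    self.freq,
                    key=lambda o: self.server_weight[self.origin[o]] * self.freq[o],
                )
                del self.freq[victim]
                del self.origin[victim]
            self.freq[obj] = 1
            self.origin[obj] = server
            return "miss"
    ```

    With all weights equal this degenerates to plain LFU; unequal weights bias cache residency toward high-valuation servers, which is the source of the differential QoS the abstract describes.
    
    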

    Towards capturing representative AS-level Internet topologies

    Full text link

    BU/NSF Workshop on Internet Measurement, Instrumentation and Characterization

    Full text link
    OBJECTIVES AND OVERVIEW
    Because of its growth in size, scope, and complexity--as well as its increasingly central role in society--the Internet has become an important object of study and evaluation. Many significant innovations in the networking community in recent years have been directed at obtaining a more accurate understanding of the fundamental behavior of the complex system that is the Internet. These innovations have come in the form of better models of components of the system, better tools which enable us to measure the performance of the system more accurately, and new techniques coupled with performance evaluation which have delivered better system utilization. The continued development and improvement of our understanding of the properties of the Internet is essential to guide designers of hardware, protocols, and applications for the next decade of Internet growth. As a research community, an important next step involves a comprehensive look at the challenges that lie ahead in this area. This includes an evaluation of both the current unsolved challenges and the upcoming challenges the Internet will present us with in the near future, and a discussion of the promising new techniques that innovators in the field are currently developing. To this end, the Web and InterNetworking Research Group at Boston University (WING@BU), with support from the National Science Foundation (grant #9985484), organized a one-day workshop which was held at Boston University on Monday, August 30, 1999 (immediately preceding ACM SIGCOMM '99).
    National Science Foundation (9985484)

    A Measurement-Based Admission Control Algorithm for Integrated Services Packet Networks

    No full text
    Network traffic is long-range dependent, rising and ebbing with possibly long ebb times. One dangerous implication of long-range dependent traffic for any measurement-based admission control algorithm is that the algorithm may allow too many new flows into the network during the ebb times, resulting in prolonged delay-bound violations during the ensuing tides. In our simulations, besides traditional source models, we also use source models that exhibit long-range dependence, both in themselves and in their aggregation. As with most measurement-based control systems, there are several knobs that govern the degree of conservativeness of the measured values and resulting decisions. We will explore these and also look at some dynamic interactions between flows with different resource requirements. We will present results from a comparative study of several measurement-based admission control algorithms, and finally conclude this dissertation by pointing out several possible ext
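    The "knob" governing conservativeness that the abstract mentions can be illustrated with a minimal admission test: a new flow is admitted only if the measured aggregate load plus the flow's requested rate stays under the link capacity scaled by a utilization target. This is a generic sketch of the measurement-based admission control idea, not the dissertation's specific algorithm; all names are illustrative.

    ```python
    def admit(measured_load, requested_rate, capacity, utilization_target=0.9):
        """Minimal measurement-based admission test (illustrative sketch).

        measured_load: current aggregate load estimated from recent measurements.
        requested_rate: rate the new flow asks for.
        utilization_target: the conservativeness knob; lower values leave more
        headroom against long-range-dependent bursts ("tides") that follow
        quiet periods ("ebbs").
        """
        return measured_load + requested_rate <= utilization_target * capacity
    ```

    Under long-range dependent traffic, a load measured during an ebb underestimates the coming tide, which is why lowering `utilization_target` (more headroom) is the natural defense the abstract alludes to.
    
    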

    To peer or not to peer: Modeling the evolution of the Internet’s AS-level topology

    No full text
    Abstract — Internet connectivity at the AS level, defined in terms of pairwise logical peering relationships, is constantly evolving. This evolution is largely a response to economic, political, and technological changes that impact the way ASs conduct their business. We present a new framework for modeling this evolutionary process by identifying a set of criteria that ASs consider either in establishing a new peering relationship or in reassessing an existing relationship. The proposed framework is intended to capture key elements in the decision processes underlying the formation of these relationships. We present two decision processes that are executed by an AS, depending on its role in a given peering decision, as a customer or a peer of another AS. When acting as a peer, a key feature of the AS’s corresponding decision model is its reliance on realistic inter-AS traffic demands. To reflect the enormous heterogeneity among customer or peer ASs, our decision models are flexible enough to accommodate a wide range of AS-specific objectives. We demonstrate the potential of this new framework by considering different decision models in various realistic “what if” experiment scenarios. We implement these decision models to generate and study the evolution of the resulting AS graphs over time, and compare them against observed historical evolutionary features of the Internet at the AS level.