11,766 research outputs found

    Web Hosting Service Level Agreements

    This paper proposes a model for a relatively simple Web hosting provider. The model assumes the existence of a load dispatcher and a finite number of Web servers. We quantify the quality of service delivered to the clients of this facility based on a service level agreement between the two parties: the Web hosting provider and the client. We assume that the client has the knowledge and resources to quantify its needs. Based on these quantifications, which become parameters of our model, the provider can establish a service offer. In our model, this offer covers the quality of service and the pricing options for it.
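
    The abstract does not spell out the underlying queueing assumptions, but the load-dispatcher-plus-finite-servers setup can be illustrated with a standard M/M/c model. The sketch below is only an assumption-laden illustration, not the paper's model: hypothetical client-supplied parameters (arrival rate, mean service time, a target probability of queueing) determine the number of servers the provider would have to commit, which could then anchor the price in the offer.

```python
import math

def erlang_c(c: int, offered_load: float) -> float:
    """Probability an arriving request must wait in an M/M/c queue
    with c servers and offered load a = lambda / mu (Erlang C formula)."""
    if offered_load >= c:
        return 1.0  # unstable regime: effectively every request waits
    summation = sum(offered_load ** k / math.factorial(k) for k in range(c))
    top = offered_load ** c / math.factorial(c) * (c / (c - offered_load))
    return top / (summation + top)

def servers_for_offer(arrival_rate: float, service_rate: float,
                      max_wait_prob: float) -> int:
    """Smallest number of Web servers meeting the client's queueing-probability target."""
    load = arrival_rate / service_rate
    c = max(1, math.ceil(load))
    while erlang_c(c, load) > max_wait_prob:
        c += 1
    return c

# Hypothetical client quantification: 120 req/s, 50 ms mean service time,
# and at most 5% of requests may queue at the dispatcher.
servers = servers_for_offer(arrival_rate=120.0, service_rate=20.0, max_wait_prob=0.05)
print(f"offer: {servers} servers")  # a price option could scale with this count
```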

    An Optimal Trade-off between Content Freshness and Refresh Cost

    Caching is an effective mechanism for reducing bandwidth usage and alleviating server load. However, the use of caching entails a compromise between content freshness and refresh cost. An excessive refresh rate yields a high degree of content freshness but consumes more system resources. Conversely, a deficient refresh rate saves resources but degrades content freshness. To address this freshness-cost problem, we formulate the refresh scheduling problem with a generic cost model and use this model to determine an optimal refresh frequency that gives the best trade-off between refresh cost and content freshness. We prove the existence and uniqueness of an optimal refresh frequency under the assumptions that content updates arrive according to a Poisson process and that the age-related cost increases monotonically with decreasing freshness. In addition, we provide an analytic comparison of system performance under fixed refresh scheduling and random refresh scheduling, showing that, for the same average refresh frequency, the two scheduling policies are mathematically equivalent in terms of the long-run average cost.
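
    The paper's cost model is generic; the sketch below is a concrete, assumption-laden instance rather than the paper's formulation. It assumes a fixed cost per refresh, an age-related cost that is linear in the average age of the cached copy, Poisson updates at rate lambda, and periodic (fixed) refresh, and it searches numerically for the refresh frequency that minimizes the long-run average cost. The average-age expression is the standard one for Poisson updates; the cost coefficients are hypothetical.

```python
import numpy as np

def average_age(freq: float, update_rate: float) -> float:
    """Time-averaged age of a cached copy refreshed every T = 1/freq slots of time,
    with content updates arriving as a Poisson process of rate update_rate.
    Uses E[age at time t since refresh] = t - (1 - exp(-lambda*t)) / lambda."""
    T, lam = 1.0 / freq, update_rate
    return T / 2 - 1 / lam + (1 - np.exp(-lam * T)) / (lam ** 2 * T)

def long_run_cost(freq: float, refresh_cost: float, age_cost: float,
                  update_rate: float) -> float:
    """Cost per unit time: refresh cost times frequency, plus a penalty
    proportional to the time-averaged age of the cached content."""
    return refresh_cost * freq + age_cost * average_age(freq, update_rate)

# Hypothetical parameters: one update per unit time on average, refresh cost 0.2,
# age penalty 1.0 per unit of staleness-time.  Grid-search the optimum frequency.
freqs = np.linspace(0.01, 20.0, 4000)
costs = [long_run_cost(f, refresh_cost=0.2, age_cost=1.0, update_rate=1.0) for f in freqs]
best = freqs[int(np.argmin(costs))]
print(f"optimal refresh frequency ~ {best:.2f} refreshes per unit time")
```

    Under the monotone age-cost assumption the cost curve is convex in the refresh frequency, which is why a single interior minimum exists; the grid search here merely locates it numerically.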

    Cloud-based Content Distribution on a Budget

    To leverage the elastic nature of cloud computing, a solution provider must be able to accurately gauge demand for its offering. For applications that involve swarm-to-cloud interactions, gauging such demand is not straightforward. In this paper, we propose a general framework, analyze a mathematical model, and present a prototype implementation of a canonical swarm-to-cloud application, namely peer-assisted content delivery. Our system, called Cyclops, dynamically adjusts the off-cloud bandwidth consumed by content servers (which represents the bulk of the provider's cost) to feed a set of swarming clients, based on a feedback signal that gauges the real-time health of the swarm. Our extensive evaluation of Cyclops in a variety of settings, including controlled PlanetLab and live Internet experiments involving thousands of users, shows a significant reduction in content distribution costs (by as much as two orders of magnitude) compared to non-feedback-based swarming solutions, with minor impact on content delivery times.
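
    The abstract describes a feedback signal that drives the servers' off-cloud bandwidth but does not reproduce the controller. The snippet below is only a sketch of that idea, not the Cyclops implementation: it assumes a measurable aggregate swarm download rate, a target rate derived from the desired delivery time, and a simple proportional adjustment with saturation. All names, gains, and units are hypothetical.

```python
def adjust_server_bandwidth(current_bw: float,
                            swarm_rate: float,
                            target_rate: float,
                            gain: float = 0.5,
                            min_bw: float = 0.0,
                            max_bw: float = 1_000.0) -> float:
    """One control step: raise off-cloud (server) bandwidth when the swarm's
    aggregate download rate falls below the target, lower it when the swarm
    is healthy enough to sustain itself.  Rates and bandwidth in Mbit/s."""
    error = target_rate - swarm_rate          # > 0 means the swarm is unhealthy
    new_bw = current_bw + gain * error        # proportional correction
    return max(min_bw, min(max_bw, new_bw))   # stay within the provider's budget

# Hypothetical control loop: start with 100 Mbit/s of server capacity and
# react to periodic swarm-health measurements as peers contribute more.
bw = 100.0
for measured_rate in [350.0, 420.0, 515.0, 600.0]:
    bw = adjust_server_bandwidth(bw, measured_rate, target_rate=500.0)
    print(f"measured {measured_rate:6.1f} Mbit/s -> server bandwidth {bw:6.1f} Mbit/s")
```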

    AngelCast: cloud-based peer-assisted live streaming using optimized multi-tree construction

    Increasingly, commercial content providers (CPs) offer streaming solutions using peer-to-peer (P2P) architectures, which promise significant scalability by leveraging clients' upstream capacity. A major limitation of P2P live streaming is that playout rates are constrained by clients' upstream capacities, which are typically much lower than their downstream capacities, limiting the quality of the delivered stream. To leverage P2P architectures without sacrificing quality, CPs must commit additional resources to complement clients' resources. In this work, we propose AngelCast, a cloud-based service that enables CPs to complement P2P streaming. By subscribing to AngelCast, a CP is able to deploy extra resources (angels) on demand from the cloud to maintain a desirable stream quality. Angels do not download the whole stream, nor are they in possession of it. Rather, angels relay only the minimal fraction of the stream necessary to achieve the desired quality. We provide a lower bound on the minimum angel capacity needed to maintain a desired client bit-rate, and develop a fluid-model construction to achieve it. Recognizing the limitations of the fluid-model construction, we design a practical multi-tree construction that captures the spirit of the optimal construction while avoiding its limitations. We present a prototype implementation of AngelCast, along with experimental results confirming the feasibility of our service. Supported in part by NSF awards #0720604, #0735974, #0820138, #0952145, #1012798, #1430145, and #1414119.
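
    The abstract states a lower bound on the angel capacity needed to sustain a target client bit-rate but does not reproduce it. As a rough illustration in the spirit of fluid models of peer-assisted streaming (and not necessarily the exact AngelCast bound): with n clients each requiring rate R, a source upload u_s, and aggregate client upload U, total upload must cover n*R, so angels must supply at least n*R - u_s - U. The names and numbers below are hypothetical.

```python
def min_angel_capacity(num_clients: int,
                       target_rate: float,
                       source_upload: float,
                       client_uploads: list[float]) -> float:
    """Fluid-style lower bound on aggregate angel upload capacity (Mbit/s):
    every client must receive target_rate, and every uploaded byte comes
    from the source (CP server), the clients, or the angels."""
    demand = num_clients * target_rate
    supply = source_upload + sum(client_uploads)
    return max(0.0, demand - supply)

# Hypothetical swarm: 1000 clients watching a 2 Mbit/s stream, a 100 Mbit/s
# CP server, and clients uploading 1 Mbit/s each on average.
uploads = [1.0] * 1000
print(min_angel_capacity(1000, 2.0, 100.0, uploads), "Mbit/s of angel capacity needed")
```

    The multi-tree construction in the paper is about how to organize angels and clients so this capacity is actually usable; the bound above only says how much upload must exist somewhere in the system.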

    Pinwheel Scheduling for Fault-tolerant Broadcast Disks in Real-time Database Systems

    The design of programs for broadcast disks that incorporate real-time and fault-tolerance requirements is considered. A generalized model for real-time fault-tolerant broadcast disks is defined. It is shown that designing programs for broadcast disks specified in this model is closely related to the scheduling of pinwheel task systems. Some new results in pinwheel scheduling theory are derived, which facilitate the efficient generation of real-time fault-tolerant broadcast disk programs. National Science Foundation (CCR-9308344, CCR-9596282).
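
    The abstract does not define pinwheel scheduling, so for context: an instance is a set of items with window lengths a_1, ..., a_n, and a valid schedule broadcasts one item per time slot so that item i appears in every window of a_i consecutive slots. The sketch below is a standard textbook-style construction, not the algorithm from the paper, and it ignores fault tolerance: it rounds each window down to a power of two and assigns each item a residue class of slots, which succeeds whenever the rounded density sum(1/b_i) is at most 1.

```python
def power_of_two_pinwheel(gaps: list[int]):
    """Build a perfectly periodic broadcast program by rounding each window down
    to a power of two and giving item i the slots t == offset[i] (mod b[i]).
    Returns one period of the schedule (idle slots are -1), or None if the
    rounded density sum(1/b_i) exceeds 1."""
    b = [1 << (g.bit_length() - 1) for g in gaps]   # round each gap down to a power of 2
    if sum(1.0 / x for x in b) > 1.0:
        return None
    offsets = {}
    for i in sorted(range(len(gaps)), key=lambda j: b[j]):
        # smallest residue mod b[i] that does not collide with already-placed items
        for r in range(b[i]):
            if all(r % b[j] != offsets[j] for j in offsets):
                offsets[i] = r
                break
    period = max(b)                                  # every b[i] divides the period
    schedule = [-1] * period
    for i, r in offsets.items():
        for t in range(r, period, b[i]):
            schedule[t] = i
    return schedule

def recurs_within(schedule: list[int], gaps: list[int]) -> bool:
    """Check that, repeating the schedule forever, item i appears in every
    window of gaps[i] consecutive slots."""
    n = len(schedule)
    return all(any(schedule[(start + k) % n] == i for k in range(g))
               for i, g in enumerate(gaps)
               for start in range(n))

# Hypothetical broadcast-disk instance: four items with timing windows 2, 5, 9, 19.
gaps = [2, 5, 9, 19]
s = power_of_two_pinwheel(gaps)
print(s, recurs_within(s, gaps))
```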