
    Cloud-based Content Distribution on a Budget

    To leverage the elastic nature of cloud computing, a solution provider must be able to accurately gauge demand for its offering. For applications that involve swarm-to-cloud interactions, gauging such demand is not straightforward. In this paper, we propose a general framework, analyze a mathematical model, and present a prototype implementation of a canonical swarm-to-cloud application, namely peer-assisted content delivery. Our system, called Cyclops, dynamically adjusts the off-cloud bandwidth consumed by content servers (which represents the bulk of the provider's cost) to feed a set of swarming clients, based on a feedback signal that gauges the real-time health of the swarm. Our extensive evaluation of Cyclops in a variety of settings, including controlled PlanetLab and live Internet experiments involving thousands of users, shows a significant reduction in content distribution costs (by as much as two orders of magnitude) compared to non-feedback-based swarming solutions, with minor impact on content delivery times.
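
    The feedback idea described in this abstract lends itself to a short sketch. The Python fragment below is a minimal illustration only: the health signal, thresholds, and additive update rule are assumptions for exposition and do not come from Cyclops itself.

    # Minimal sketch of a feedback-driven bandwidth controller, in the
    # spirit of the abstract above. Names and constants are illustrative
    # assumptions, not Cyclops's actual design.

    def swarm_health(clients, target_rate):
        """Fraction of clients currently downloading at the target rate."""
        ok = sum(1 for c in clients if c.download_rate >= target_rate)
        return ok / len(clients) if clients else 1.0

    def adjust_server_bandwidth(bw, health, target=0.9, step=1.0,
                                bw_min=0.0, bw_max=1000.0):
        """Raise off-cloud server bandwidth when the swarm is unhealthy;
        lower it when the swarm can sustain itself."""
        if health < target:
            bw = min(bw_max, bw + step)   # swarm struggling: feed it more
        else:
            bw = max(bw_min, bw - step)   # swarm healthy: save cost
        return bw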

    Correction. Brownian models of open processing networks: canonical representation of workload

    Due to a printing error the above mentioned article [Annals of Applied Probability 10 (2000) 75--103, doi:10.1214/aoap/1019737665] had numerous equations appearing incorrectly in the print version of this paper. The entire article follows as it should have appeared. IMS apologizes to the author and the readers for this error. A recent paper by Harrison and Van Mieghem explained in general mathematical terms how one forms an "equivalent workload formulation" of a Brownian network model. Denoting by Z(t) the state vector of the original Brownian network, one has a lower dimensional state descriptor W(t) = M Z(t) in the equivalent workload formulation, where M can be chosen as any basis matrix for a particular linear space. This paper considers Brownian models for a very general class of open processing networks, and in that context develops a more extensive interpretation of the equivalent workload formulation, thus extending earlier work by Laws on alternate routing problems. A linear program called the static planning problem is introduced to articulate the notion of "heavy traffic" for a general open network, and the dual of that linear program is used to define a canonical choice of the basis matrix M. To be specific, rows of the canonical M are alternative basic optimal solutions of the dual linear program. If the network data satisfy a natural monotonicity condition, the canonical matrix M is shown to be nonnegative, and another natural condition is identified which ensures that M admits a factorization related to the notion of resource pooling. Comment: Published at http://dx.doi.org/10.1214/105051606000000583 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
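
    The central objects in this abstract can be written schematically. In the LaTeX sketch below, the data (R, A, b, \lambda) are assumed for illustration only; the paper's own notation and formulation may differ.

    % Equivalent workload formulation: the workload process W(t) is a
    % lower-dimensional linear image of the network state Z(t).
    \[ W(t) = M\,Z(t) \]

    % Static planning problem (schematic form): choose activity rates x
    % meeting the demand vector \lambda at minimal utilization \rho.
    \[ \min_{\rho,\,x}\ \rho \quad \text{s.t.} \quad
       R x = \lambda, \qquad A x \le \rho\, b, \qquad x \ge 0 . \]

    % "Heavy traffic" corresponds to an optimal value \rho^* = 1; rows of
    % the canonical M are alternative basic optimal solutions of the dual
    % of this linear program.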

    Modeling the interaction between TCP and Rate Adaptation

    In this paper, we model and investigate the interaction between the TCP protocol and rate adaptation at intermediate routers. Rate adaptation aims at saving energy by controlling the offered capacity of links, adapting it to the amount of traffic. However, when TCP is used at the transport layer, the control loop of rate adaptation and that of the TCP congestion control mechanism might interact and disturb each other, compromising throughput and Quality of Service (QoS). Our investigation is led through mathematical modeling, depicting the behavior of TCP and of rate adaptation through a set of Delay Differential Equations (DDEs). The model is validated against simulation results and is shown to be accurate. A sensitivity analysis of system performance with respect to the control parameters shows that rate adaptation can be effective, but careful parameter setting is needed to avoid undesired, disruptive interaction between controllers at different levels that would impair QoS.
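
    The DDE approach admits a compact numerical sketch. The Python fragment below Euler-integrates the classic single-flow TCP fluid model coupled with a naive capacity-tracking rule; the equations, constants, and the headroom factor are illustrative assumptions, not the paper's model.

    import numpy as np

    # Euler integration of a delayed-feedback fluid model: TCP window w
    # reacts to a one-RTT-delayed loss signal; link capacity c tracks the
    # measured traffic (the energy-saving rate adaptation).
    dt, T, rtt = 0.001, 20.0, 0.1
    n = int(T / dt)
    lag = int(rtt / dt)              # one-RTT feedback delay, in steps

    w = np.ones(n)                   # TCP congestion window (packets)
    c = np.full(n, 100.0)            # link capacity (packets/s)

    def loss_prob(rate, cap):
        # Loss grows once the sending rate exceeds the offered capacity.
        return max(0.0, (rate - cap) / max(rate, 1e-9))

    for k in range(lag, n - 1):
        rate_d = w[k - lag] / rtt                 # delayed sending rate
        p = loss_prob(rate_d, c[k - lag])         # delayed loss signal
        # TCP fluid model: additive increase, multiplicative decrease.
        w[k + 1] = max(1.0, w[k] + dt * (1.0 / rtt - w[k] * rate_d * p / 2.0))
        # Rate adaptation: track measured traffic, keeping 20% headroom.
        c[k + 1] = c[k] + dt * 5.0 * (1.2 * w[k] / rtt - c[k])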

    Modeling and Control of Rare Segments in BitTorrent with Epidemic Dynamics

    Despite its existing incentives for leecher cooperation, BitTorrent file sharing fundamentally relies on the presence of seeder peers. Seeder peers essentially operate outside the BitTorrent incentives, with two caveats: slow downlinks lead to increased numbers of "temporary" seeders (who have left their console but will terminate their seeder role when they return), and file segmentation offers a copyright-liability boon to permanent seeders. Using a simple epidemic model for a two-segment BitTorrent swarm, we focus on the BitTorrent rule to disseminate the (locally) rarest segments first. With our model, we show that the rarest-segment-first rule minimizes the transition time to seeder (complete file acquisition) and equalizes the segment populations in steady state. We discuss how alternative dissemination rules may beneficially increase file acquisition times, causing leechers to remain in the system longer (particularly as temporary seeders). The result is that leechers are further enticed to cooperate, which reduces the threat of extinction of rare segments, a threat otherwise averted only by the presence of permanent seeders. Our model allows us to study the corresponding trade-offs between performance improvement, load on permanent seeders, and content availability, which we leave for future work. Finally, interpreting the two-segment model as one involving a rare segment and a "lumped" segment representing the rest, we study a model that jointly considers control of rare segments and differing uplinks that cause "choking," where high-uplink peers will not engage in certain transactions with low-uplink peers. Comment: 18 pages, 6 figures; a shorter version of this paper, which did not include the N-segment lumped model, was presented in May 2011 at IEEE ICC, Kyoto.
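
    A rough flavor of the two-segment epidemic dynamics can be given in a few lines of Python. The state split, contact rates, and the mean-field rarest-first rule below (using global rather than local segment counts) are illustrative assumptions, not the paper's exact model.

    # Toy two-segment epidemic model: (x0, x1, x2, s) counts leechers with
    # no segment, segment 1 only, segment 2 only, and seeders.
    beta, arrival, depart = 0.01, 1.0, 0.5   # contact, arrival, departure rates
    x0, x1, x2, s = 50.0, 10.0, 2.0, 1.0
    dt = 0.01

    for _ in range(int(100 / dt)):
        have1, have2 = x1 + s, x2 + s        # peers able to upload each segment
        # Rarest-first: empty-handed leechers fetch the rarer segment.
        to1 = beta * x0 * have1 if have1 <= have2 else 0.0
        to2 = beta * x0 * have2 if have2 < have1 else 0.0
        fin1 = beta * x1 * have2             # x1 leechers finish by getting segment 2
        fin2 = beta * x2 * have1             # x2 leechers finish by getting segment 1
        x0 += dt * (arrival - to1 - to2)
        x1 += dt * (to1 - fin1)
        x2 += dt * (to2 - fin2)
        s  += dt * (fin1 + fin2 - depart * s)

    print(f"segment populations: x1={x1:.1f}, x2={x2:.1f}, seeders={s:.1f}")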