
    Optimizing on-demand resource deployment for peer-assisted content delivery (PhD thesis)

    Full text link
    Increasingly, content delivery solutions leverage client resources in exchange for service in a peer-to-peer (P2P) fashion. Such peer-assisted service paradigms promise significant infrastructure cost reduction, but suffer from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity of these services to the clients. In this thesis, we propose a novel architectural service model that enables the establishment of higher-fidelity services through (1) coordinating the content delivery to optimally utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on-demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the uplink capacity of clients. We target three applications that require the delivery of fresh as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time -- the time it takes to deliver the content to all clients in a group. The second application is live streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, and especially for bandwidth-intensive applications. For each of the above applications, we develop mathematical models that optimally allocate the already available resources, as well as the additional on-demand resources needed to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate those techniques through simulation and/or implementation. (Major Advisor: Azer Bestavros)

    Optimizing on-demand resource deployment for peer-assisted content delivery

    Full text link
    Increasingly, content delivery solutions leverage client resources in exchange for services in a peer-to-peer (P2P) fashion. Such a peer-assisted service paradigm promises significant infrastructure cost reduction, but suffers from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity of these services to clients, especially for real-time applications where content cannot be cached. In this thesis, we propose a novel architectural service model that enables the establishment of higher-fidelity services through (1) coordinating the content delivery to efficiently utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on-demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the upstream capacity of clients. We target three applications that require the delivery of real-time as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time -- the time it takes to deliver the content to all clients in a group. The second application is live video streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, and especially for clients running bandwidth-intensive applications. For each of the above applications, we develop analytical models that efficiently allocate the already available resources, as well as the additional on-demand resources needed to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate these techniques through simulation and/or implementation.
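
    To make the service model concrete, the core provisioning decision can be read as a small optimization problem. The formulation below is only an illustrative sketch; the symbols (client uplinks u_i, a fidelity-dependent demand D(f), angel capacity a) are stand-ins chosen for exposition, not the notation used in the thesis.

    ```latex
    % Illustrative formulation only (requires amsmath); symbols are assumptions,
    % not the thesis' notation:
    %   u_i  : uplink capacity contributed by client i
    %   D(f) : aggregate upload demand implied by a target fidelity f
    %   a    : leased angel (cloud) capacity
    \[
      \min_{a \ge 0} \; a
      \quad \text{subject to} \quad
      \sum_{i} u_i + a \;\ge\; D(f)
    \]
    ```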

    Extending snBench to Support a Graphical Programming Interface for a Sensor Network Tasking Language (STEP)

    Full text link
    The purpose of this project is the creation of a graphical "programming" interface for a sensor network tasking language called STEP. The graphical interface allows the user to specify a program execution graphically from an extensible palette of functionalities and save the result as a properly formatted STEP file. Moreover, the software is able to load a file in STEP format and convert it into the corresponding graphical representation. During both phases a type-checker runs in the background to ensure that both the graphical representation and the STEP file are syntactically correct. This project has been motivated by the Sensorium project at Boston University. In this technical report we present the basic features of the software and the process followed during its design and implementation. Finally, we describe the approach used to test and validate our software.
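
    The background type-checking step is easy to picture as a pass over the dataflow graph the user draws. The sketch below is illustrative only; the node names, their signatures, and the checking rule are hypothetical stand-ins, since the actual STEP grammar is not reproduced here.

    ```python
    # Minimal sketch of dataflow type-checking (node names and types are hypothetical,
    # not taken from the actual STEP language).
    NODE_SIGNATURES = {
        "camera":    ([], "image"),         # no inputs, produces an image
        "threshold": (["image"], "image"),  # image in, image out
        "count":     (["image"], "int"),    # image in, integer out
        "notify":    (["int"], None),       # integer in, no output
    }

    def type_check(edges):
        """edges: list of (producer_node, consumer_node, input_slot)."""
        errors = []
        for producer, consumer, slot in edges:
            out_type = NODE_SIGNATURES[producer][1]
            in_types = NODE_SIGNATURES[consumer][0]
            if slot >= len(in_types) or in_types[slot] != out_type:
                errors.append(f"{producer} -> {consumer}: expected "
                              f"{in_types[slot] if slot < len(in_types) else 'nothing'}, got {out_type}")
        return errors

    # A well-typed camera -> threshold -> count -> notify pipeline produces no errors.
    print(type_check([("camera", "threshold", 0), ("threshold", "count", 0), ("count", "notify", 0)]))
    ```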

    AngelCast: cloud-based peer-assisted live streaming using optimized multi-tree construction

    Full text link
    Increasingly, commercial content providers (CPs) offer streaming solutions using peer-to-peer (P2P) architectures, which promise significant scalability by leveraging clients’ upstream capacity. A major limitation of P2P live streaming is that playout rates are constrained by clients’ upstream capacities – typically much lower than downstream capacities – which limits the quality of the delivered stream. To leverage P2P architectures without sacrificing quality, CPs must commit additional resources to complement clients’ resources. In this work, we propose a cloud-based service, AngelCast, that enables CPs to complement P2P streaming. By subscribing to AngelCast, a CP is able to deploy extra resources (angels), on-demand from the cloud, to maintain a desirable stream quality. Angels do not download the whole stream, nor are they in possession of it. Rather, angels only relay the minimal fraction of the stream necessary to achieve the desired quality. We provide a lower bound on the minimum angel capacity needed to maintain a desired client bit-rate, and develop a fluid model construction to achieve it. Realizing the limitations of the fluid model construction, we design a practical multi-tree construction that captures the spirit of the optimal construction and avoids its limitations. We present a prototype implementation of AngelCast, along with experimental results confirming the feasibility of our service. Supported in part by NSF awards #0720604, #0735974, #0820138, #0952145, #1012798, #1430145, and #1414119.
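
    For intuition about the lower bound mentioned above, the textbook fluid model for P2P live streaming requires the source to upload one full copy of the stream and the aggregate upload of source, clients, and relays to cover the target rate for every client. The sketch below computes the angel capacity those assumptions imply; it is not necessarily the exact bound derived in the paper.

    ```python
    def min_angel_capacity(target_rate, source_up, client_ups):
        """Least extra relay (angel) upload capacity needed so that all n clients can
        receive `target_rate`, under textbook fluid-model assumptions: the source must
        be able to push out one full copy of the stream, and the aggregate upload of
        source, clients, and angels must cover n * target_rate."""
        n = len(client_ups)
        if source_up < target_rate:
            raise ValueError("no amount of relay capacity helps if the source cannot "
                             "upload one full copy of the stream")
        return max(0.0, n * target_rate - source_up - sum(client_ups))

    # Example: 100 clients at 1 Mbps uplink each, a 5 Mbps source, and a 2 Mbps target
    # bit-rate imply at least 95 Mbps of angel capacity under these assumptions.
    print(min_angel_capacity(2.0, 5.0, [1.0] * 100))  # -> 95.0
    ```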

    Angels: In-Network Support for Minimum Distribution Time in P2P Overlays

    Full text link
    This thesis proposes the use of in-network caches (which we call Angels) to reduce the Minimum Distribution Time (MDT) of a file from a seeder – a node that possesses the file – to a set of leechers – nodes that are interested in downloading the file. An Angel is not a leecher in the sense that it is not interested in receiving the entire file; rather, it is interested in minimizing the MDT to all leechers, and as such uses its storage and up/down-link capacity to cache and forward parts of the file to other peers. We extend the analytical results of Kumar and Ross (2006) to account for the presence of angels by deriving a new lower bound for the MDT. We show that this newly derived lower bound is tight by proposing a distribution strategy under the assumptions of a fluid model. We present a GroupTree heuristic that addresses the impracticalities of the fluid model. We evaluate our designs through simulations showing that our GroupTree heuristic outperforms other heuristics, scales well as the number of leechers increases, and closely approaches the optimal theoretical bounds.
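
    The Kumar and Ross result referenced above has a simple closed form: the MDT is bounded below by the time for the seeder to upload the file once, the time for the slowest leecher to download it, and the time implied by the aggregate upload capacity. The sketch below computes that bound and, purely as an illustrative assumption, folds angel uplink into the aggregate term; the thesis derives the exact extended bound.

    ```python
    def mdt_lower_bound(file_size, seeder_up, leecher_ups, leecher_downs, angel_up=0.0):
        """Kumar-and-Ross-style lower bound on the minimum distribution time (MDT).
        Folding angel uplink into the aggregate-capacity term is an illustrative
        assumption here; the thesis derives the exact extended bound."""
        n = len(leecher_ups)
        return max(
            file_size / seeder_up,            # the seeder must upload the file at least once
            file_size / min(leecher_downs),   # the slowest leecher must download the whole file
            n * file_size / (seeder_up + sum(leecher_ups) + angel_up),  # aggregate upload capacity
        )

    # 1000 MB file, 10 MB/s seeder, 50 leechers (1 MB/s up, 10 MB/s down), 100 MB/s of angels.
    print(mdt_lower_bound(1000, 10, [1.0] * 50, [10.0] * 50, angel_up=100))  # -> 312.5 seconds
    ```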

    Angels In the Cloud – A Peer-Assisted Bulk-Synchronous Content Distribution Service

    Full text link
    Leveraging client upload capacity through peer-assisted content distribution was shown to decrease the load on content providers, while also improving average distribution times. These benefits, however, are limited by the disparity between client upload and download speeds, especially in scenarios requiring a minimum distribution time (MDT) of a piece of content to a set of clients. Achieving the MDT is crucial for bulk-synchronous applications, in which every client in a set must wait for all other clients in the set to finish their downloads before being able to make use of the downloaded content. In this paper, we propose the use of dedicated servers, which we call angels, to accelerate peer-assisted content distribution in general, and to minimize the MDT in particular. An angel is not itself the content origin, nor is it interested in fully downloading the content; its only purpose is to enable a peer-assisted content distribution scheme to approach the theoretical lower bound for the MDT. To overcome scalability issues inherent in an optimal MDT construction, we propose and evaluate a content exchange strategy involving angels, which we call "Group Tree". In addition to simulation results that demonstrate the near-optimal performance of our proposed approach, we present the architecture and implementation of CloudAngels – a service that allows the elastic, on-the-fly deployment of angels (in the cloud) to assist a content provider (off the cloud) in realizing its MDT objective.

    Enhancing Tor performance for bandwidth-intensive applications

    Full text link
    When it was first introduced a decade ago, Tor, the anonymous onion routing protocol, aimed at providing anonymity for latency-sensitive applications, such as web browsing, as opposed to bandwidth-intensive applications, such as on-demand or live video streaming. This emphasis on latency-sensitive applications is evident from proposed Tor circuit-scheduling techniques [23], [10] that throttle bandwidth-intensive applications in favor of bursty, latency-sensitive applications. In this paper, we deviate from this traditional view by identifying key attributes and design decisions that negatively impact Tor’s performance in general and its ability to cater to bandwidth-intensive applications in particular, and by proposing new capabilities that aim to enhance Tor’s performance as it relates to anonymizing bandwidth-intensive traffic. We present results from in-vivo measurement studies that shed light on Tor’s approach to managing load across relays, which manifests itself in the way source-based routing at the end-systems (clients) is handled. We present an analytical model that captures the key attributes of the feedback control inherent in Tor’s approach to load management – namely, probing and circuit selection. We show that changing some of these key attributes yields measurable improvement in terms of overall network utilization as well as better load balancing across relays, resulting in better predictability of individual circuit performance. To boost the performance of bandwidth-intensive circuits, we propose the use of on-demand relays (angels) to not only increase the capacity in the Tor network, but also to implement special bandwidth-boosting functionality using multi-path routing. Our conclusions are backed up with results from simulation experiments. National Science Foundation (0735974, 0820138, 0963974, 1012798); Google (2011 Faculty Research Award).
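
    For context on the probing and circuit-selection loop discussed above: a stock Tor client weights relay choices roughly in proportion to advertised (consensus-weighted) bandwidth. The sketch below shows that selection step in simplified form, ignoring the flag, position, family, and exit-policy constraints that real Tor applies; the relay names and weights are made up.

    ```python
    import random

    def pick_relay(relays):
        """Pick one relay with probability proportional to its bandwidth weight.
        `relays` maps relay name -> weight (made-up values); real Tor additionally
        applies flag, position, family, and exit-policy constraints."""
        total = sum(relays.values())
        r = random.uniform(0, total)
        acc = 0.0
        for name, weight in relays.items():
            acc += weight
            if r <= acc:
                return name
        return name  # floating-point fall-through

    def build_circuit(relays, length=3):
        """Choose `length` distinct relays (guard, middle, exit in a real circuit)."""
        chosen, pool = [], dict(relays)
        for _ in range(length):
            hop = pick_relay(pool)
            chosen.append(hop)
            del pool[hop]
        return chosen

    print(build_circuit({"relayA": 100, "relayB": 50, "relayC": 10, "relayD": 5}))
    ```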

    AngelCast: cloud-based peer-assisted live streaming using optimized multi-tree construction

    Full text link
    Increasingly, commercial content providers (CPs) offer streaming and IPTV solutions that leverage an underlying peer-to-peer (P2P) stream distribution architecture. The use of P2P protocols promises significant scalability and cost savings by leveraging the local resources of clients -- specifically, uplink capacity. A major limitation of P2P live streaming is that playout rates are constrained by the uplink capacities of clients, which are typically much lower than downlink capacities, thus limiting the quality of the delivered stream. To leverage P2P architectures without sacrificing the quality of the delivered stream, CPs must commit additional resources to complement those available through clients. In this paper, we propose a cloud-based service--AngelCast--that enables CPs to elastically complement P2P streaming "as needed". By subscribing to AngelCast, a CP is able to deploy extra resources ("angels"), on-demand from the cloud, to maintain a desirable stream (bit-rate) quality. Angels need not download the whole stream (they are not "leechers"), nor are they in possession of it (they are not "seeders"). Rather, angels only relay (download once and upload as many times as needed) the minimal possible fraction of the stream that is necessary to achieve the desirable stream quality, while maximally utilizing available client resources. We provide a lower bound on the minimum amount of angel capacity needed to maintain a certain bit-rate to all clients, and develop a fluid model construction that achieves this lower bound. Realizing the limitations of the fluid model construction--namely, susceptibility to potentially arbitrary start-up delays and significant degradation due to churn--we present a practical multi-tree construction that captures the spirit of the optimal construction, while avoiding its limitations. In particular, our AngelCast protocol achieves near optimal performance (compared to the fluid-model construction) while ensuring a low startup delay by maintaining a logarithmic-length path between any client and the provider, and while gracefully dealing with churn by adopting a flexible membership management approach. We present the blueprints of a prototype implementation of AngelCast, along with experimental results confirming the feasibility and performance potential of our AngelCast service when deployed on Emulab and PlanetLab. National Science Foundation (0720604, 0735974, 0820138, 0952145, 1012798).
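
    The multi-tree construction described above follows the familiar substream pattern: split the stream into several substreams, push each down its own short tree, and let a client spend its uplink as an interior node in essentially one tree. The toy layout below illustrates that pattern only; it is not AngelCast's actual tree-building algorithm.

    ```python
    def build_trees(clients, k, fanout=2):
        """Toy multi-tree layout: the clients of group t form the interior of tree t
        (so each client spends its uplink in essentially one tree), all other clients
        hang off as leaves, and complete-fanout layering keeps paths short (logarithmic
        in the number of clients). Illustrative only; not AngelCast's actual algorithm."""
        trees = []
        for t in range(k):
            interior = [c for i, c in enumerate(clients) if i % k == t]
            leaves = [c for i, c in enumerate(clients) if i % k != t]
            order = interior + leaves
            # Parent of the i-th node in a complete `fanout`-ary tree rooted at the source.
            edges = [(("source" if i < fanout else order[i // fanout - 1]), node)
                     for i, node in enumerate(order)]
            trees.append(edges)
        return trees

    # Six hypothetical clients split across two substream trees.
    for t, edges in enumerate(build_trees([f"c{i}" for i in range(6)], k=2)):
        print(f"tree {t}: {edges}")
    ```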

    Colocation as a Service

    Full text link
    By colocating with other tenants of an Infrastructure as a Service (IaaS) offering, IaaS users could reap significant cost savings by judiciously sharing their use of the fixed-size instances offered by IaaS providers. This paper presents the blueprints of a Colocation as a Service (CaaS) framework. CaaS strategic services identify coalitions of self-interested users that would benefit from colocation on shared instances. CaaS operational services provide the information necessary for, and carry out, the reconfigurations mandated by the strategic services. CaaS could be incorporated into an IaaS offering by providers; it could be implemented as a value-added proposition by IaaS resellers; or it could be directly leveraged in a peer-to-peer fashion by IaaS users. To establish the practicality of such offerings, this paper presents XCS – a prototype implementation of CaaS on top of the Xen hypervisor. XCS makes specific choices with respect to the various elements of the CaaS framework: it implements strategic services based on a game-theoretic formulation of colocation; it features novel concurrent migration heuristics which are shown to be efficient; and it offers monitoring and accounting services at both the hypervisor and VM layers. Extensive experimental results obtained by running PlanetLab trace-driven workloads on the XCS prototype confirm the premise of CaaS by demonstrating the efficiency and scalability of XCS, and by quantifying the potential cost savings accrued through the use of XCS.
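
    A toy illustration of why colocation pays off: tenants whose demands fit together on one fixed-size instance each pay less than they would renting separate instances. The first-fit packing and proportional cost split below are assumptions chosen for the example, not the game-theoretic mechanism implemented by the CaaS strategic services.

    ```python
    def colocation_costs(demands, instance_capacity, instance_price):
        """First-fit pack tenant demands onto fixed-size instances, then split each
        instance's price among its tenants in proportion to demand. The proportional
        split is an illustrative rule, not the game-theoretic one used by CaaS/XCS."""
        instances = []  # each instance is a list of (tenant, demand) pairs
        for tenant, demand in demands.items():
            for inst in instances:
                if sum(d for _, d in inst) + demand <= instance_capacity:
                    inst.append((tenant, demand))
                    break
            else:
                instances.append([(tenant, demand)])
        costs = {}
        for inst in instances:
            total = sum(d for _, d in inst)
            for tenant, demand in inst:
                costs[tenant] = instance_price * demand / total
        return costs

    # Alone, each tenant would pay 1.0 for a full instance; colocated, t1 and t2 share one.
    print(colocation_costs({"t1": 0.5, "t2": 0.3, "t3": 0.6}, instance_capacity=1.0, instance_price=1.0))
    ```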