
    Optimizing on-demand resource deployment for peer-assisted content delivery

    Increasingly, content delivery solutions leverage client resources in exchange for services in a peer-to-peer (P2P) fashion. Such a peer-assisted service paradigm promises significant infrastructure cost reduction, but suffers from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity to clients, especially for real-time applications where content cannot be cached. In this thesis, we propose a novel architectural service model that enables the establishment of higher-fidelity services through (1) coordinating the content delivery to efficiently utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the upstream capacity of clients. We target three applications that require the delivery of real-time as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time: the time it takes to deliver the content to all clients in a group. The second application is live video streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, especially for clients running bandwidth-intensive applications. For each of the above applications, we develop analytical models that efficiently allocate the already available resources, as well as the additional on-demand resources needed to achieve a given level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted, angel-enabled cloud services. We evaluate these techniques through simulation and/or implementation.
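
    The bulk-synchronous objective admits a well-known fluid-model lower bound on distribution time. As a sketch of the resource-allocation idea (not the thesis's actual model), the Python fragment below computes that standard bound and, from it, the least extra upload capacity an angel would need to contribute to meet a target time; all rates and sizes are invented.

    def min_distribution_time(F, u_s, uploads, downloads):
        # Fluid-model lower bound on bulk distribution time: the server
        # must upload one full copy, the slowest client must download
        # one, and the aggregate upload capacity must carry N copies.
        N = len(uploads)
        return max(F / u_s,
                   F / min(downloads),
                   N * F / (u_s + sum(uploads)))

    def least_angel_upload(F, u_s, uploads, downloads, T_target):
        # Least extra (angel) upload capacity that brings the aggregate
        # term down to T_target; the two per-node terms cannot be helped
        # by extra upload, so the target must already clear them.
        N = len(uploads)
        if max(F / u_s, F / min(downloads)) > T_target:
            return float("inf")  # unreachable by leasing upload alone
        return max(0.0, N * F / T_target - (u_s + sum(uploads)))

    # Example: 1 GB file, 100 Mbit/s server, 20 clients (5 up / 50 down Mbit/s).
    F = 8e9  # bits
    print(min_distribution_time(F, 100e6, [5e6] * 20, [50e6] * 20))    # 800.0 s
    print(least_angel_upload(F, 100e6, [5e6] * 20, [50e6] * 20, 400))  # 2e8 bit/s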

    Building accurate radio environment maps from multi-fidelity spectrum sensing data

    In cognitive wireless networks, active monitoring of the wireless environment is often performed through advanced spectrum sensing and network sniffing. This yields a set of spatially distributed measurements collected from different sensing devices. Several interpolation methods (e.g., Kriging) are available and can be used to combine these measurements into a single, globally accurate radio environment map that covers a certain geographical area. However, calibrating multi-fidelity measurements from heterogeneous sensing devices and integrating them into one map is a challenging problem. In this paper, the auto-regressive co-Kriging model is proposed as a novel solution. The algorithm is applied to measurements collected in a heterogeneous wireless testbed environment, and the effectiveness of the new methodology is validated.
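
    To illustrate the auto-regressive co-Kriging idea (in the Kennedy and O'Hagan style), the sketch below fits one Gaussian-process model to dense low-fidelity sensor readings and a second to the residuals of a few calibrated high-fidelity readings; the coordinates, RSSI values, kernel settings, and the linear fit for the scale factor are invented stand-ins for the paper's actual data and estimation procedure.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic readings: many cheap sensors, few calibrated analyzers.
    rng = np.random.default_rng(0)
    X_lo = rng.uniform(0, 100, size=(200, 2))   # (x, y) positions in meters
    y_lo = -40 - 20 * np.log10(1 + np.linalg.norm(X_lo - 50, axis=1))  # RSSI, dBm
    X_hi = rng.uniform(0, 100, size=(20, 2))
    y_hi = -38 - 20 * np.log10(1 + np.linalg.norm(X_hi - 50, axis=1))

    kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=1.0)
    gp_lo = GaussianProcessRegressor(kernel=kernel).fit(X_lo, y_lo)

    # y_hi(x) ~ rho * f_lo(x) + delta(x): estimate the scale factor rho,
    # then model the residual delta with a second GP.
    f_lo_at_hi = gp_lo.predict(X_hi)
    rho = np.polyfit(f_lo_at_hi, y_hi, 1)[0]
    gp_delta = GaussianProcessRegressor(kernel=kernel).fit(
        X_hi, y_hi - rho * f_lo_at_hi)

    def radio_map(points):
        # Predicted radio environment map value at the given locations.
        points = np.atleast_2d(points)
        return rho * gp_lo.predict(points) + gp_delta.predict(points)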

    Flexible, wide-area storage for distributed systems using semantic cues

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009, by Jeremy Andrew Stribling. Includes bibliographical references (p. 81-87). There is a growing set of Internet-based services that are too big, or too important, to run at a single site. Examples include Web services for e-mail, video and image hosting, and social networking. Splitting such services over multiple sites can increase capacity, improve fault tolerance, and reduce network delays to clients. These services often need storage infrastructure to share data among the sites. This dissertation explores the use of a new file system (WheelFS) specifically designed to be the storage infrastructure for wide-area distributed services. WheelFS allows applications to adjust the semantics of their data via semantic cues, which provide application control over consistency, failure handling, and file and replica placement. This dissertation describes a particular set of semantic cues that reflect the specific challenges of storing data over the wide-area network: high-latency and low-bandwidth links, coupled with increased node and link failures, compared to local-area networks. By augmenting a familiar POSIX interface with support for semantic cues, WheelFS provides a wide-area distributed storage system intended to help multi-site applications share data and gain fault tolerance, in the form of a distributed file system. Its design allows applications to adjust the tradeoff between prompt visibility of updates from other sites and the ability of sites to operate independently despite failures and long delays. WheelFS is implemented as a user-level file system and is deployed on PlanetLab and Emulab. Six applications (an all-pairs-pings script, a distributed Web cache, an email service, large file distribution, distributed compilation, and protein sequence alignment software) demonstrate that WheelFS's file system interface simplifies construction of distributed applications by allowing reuse of existing software. These applications would perform poorly with the strict semantics implied by a traditional file system interface, but by providing cues to WheelFS they are able to achieve good performance. Measurements show that applications built on WheelFS deliver performance comparable to services such as CoralCDN and BitTorrent that use specialized wide-area storage systems.
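
    As a sketch of how such cues can ride on a POSIX interface, the fragment below builds WheelFS-style paths in which cues appear as ordinary pathname components, so an unmodified program controls consistency and placement simply through the paths it opens. The cue names follow the published WheelFS work (.EventualConsistency, .MaxTime=<ms>, .HotSpot); the exact set and spellings should be checked against the dissertation, and running it would of course require a WheelFS mount at /wfs.

    import os

    def cued_path(mount, cues, rel):
        # Cues are ordinary pathname components between mount and file.
        return os.path.join(mount, *cues, rel)

    # A cooperative web cache tolerates stale data but wants bounded delay:
    p = cued_path("/wfs/cache",
                  [".EventualConsistency", ".MaxTime=1000", ".HotSpot"],
                  "article.html")
    with open(p, "wb") as f:   # plain POSIX I/O; the cues do the rest
        f.write(b"<html>...</html>")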

    The Emergence of Dominant Design(s) in Large Scale Cyberinfrastructure Systems

    Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three alternative theories of IT adoption (path dependence, project management, technology framing) and on a qualitative study of archival and interview data, I examine how design and development influence the adoption trajectory of four competing cyber-infrastructure systems comprising the Global Environment for Network Innovations (www.geni.net) over a period of ten years (2001-2011). Findings indicate that (a) early design decisions, particularly those related to similar pre-existing systems, set a path of adoption in motion, leading to the early dominance of one system; (b) coordination of milestones led to increased adoption for the high-performing teams; and (c) framing technology presentations and demos as a social influence strategy was less effective in “breaking” the dominant system's adoption path in the long term, but enabled most of the development teams to challenge that dominance and increase the adoption of their systems in the short term. While studies in path dependence and dominant design assume that adoption and dominance occur through users' actions after development is completed, this study's findings show that developers and managers of competing systems can also influence adoption and even “break” the dominant system's adoption path while it is still under development. Understanding how cyber-infrastructure systems are developed is key to promoting their adoption and use. This research has import for understanding the ramifications of early-stage design decisions, as well as the impact of project coordination and technology presentation strategies such as framing on the adoption of such systems.

    Surrogate modeling based cognitive decision engine for optimization of WLAN performance

    Due to the rapid growth of wireless networks and the scarcity of the electromagnetic spectrum, increasing interference is imposed on wireless terminals, which constrains their performance. To mitigate such performance degradation, this paper proposes a novel, experimentally verified surrogate-model-based cognitive decision engine that aims at performance optimization of IEEE 802.11 links. The surrogate model takes the current state and configuration of the network as input and predicts the QoS parameter, which assists the decision engine in steering the network towards the optimal configuration. The decision engine was applied in two realistic interference scenarios; in both cases, the network driven by the cognitive decision engine significantly outperformed the same network without it.
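
    As an illustration of the surrogate-model pattern (not the paper's actual model or feature set), the sketch below trains a regression surrogate mapping (observed interference, candidate channel, transmit power) to throughput, then has the decision engine pick the candidate configuration with the highest predicted QoS; all training data are invented.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Columns: interference RSSI (dBm), channel, TX power (dBm) -> Mbit/s.
    X = np.array([[-70, 1, 10], [-70, 6, 10], [-55, 1, 15],
                  [-55, 6, 15], [-60, 11, 20], [-75, 11, 5]], dtype=float)
    y = np.array([18.0, 24.0, 9.0, 21.0, 23.5, 20.0])
    surrogate = RandomForestRegressor(n_estimators=200).fit(X, y)

    def best_config(interference_rssi, channels=(1, 6, 11), powers=(5, 10, 15, 20)):
        # Score every candidate configuration; steer toward the best one.
        cands = np.array([[interference_rssi, c, p]
                          for c in channels for p in powers])
        pred = surrogate.predict(cands)
        return cands[np.argmax(pred)], pred.max()

    cfg, qos = best_config(-65.0)
    print(f"channel {int(cfg[1])}, {int(cfg[2])} dBm: predicted {qos:.1f} Mbit/s")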

    MS

    In this research, a computerized motion planning and control system for multiple robots is presented. Medium-scale wheeled mobile robot couriers move wireless antennas within a semi-controlled environment. The systems described in this work are integrated as components within Mobile Emulab, a wireless research testbed that is publicly available to remote users via the Internet. Experimenters use a computer interface to specify desired paths and configurations for multiple robots. The robot control and coordination system autonomously creates complex movements and behaviors from high-level instructions. Multiple trajectory types may be created by Mobile Emulab. Baseline paths are composed of line segments connecting waypoints, which require robots to stop and pivot between segments. Filleted circular arcs between line segments allow constant-motion trajectories. To avoid the curvature discontinuities inherent in line-arc segmented paths, higher-order continuous polynomial spirals and splines are constructed in place of the constant-radius arcs. Polar-form nonlinear state feedback controllers, executing on a computer system connected to the robots over a wireless network, accomplish posture stabilization, path following, and trajectory tracking control. State feedback is provided by an overhead camera-based visual localization system integrated into the testbed. Kinematic control is used to generate velocity commands sent to wheel velocity servo loop controllers built into the robots. Obstacle avoidance in Mobile Emulab is accomplished through visibility graph methods. The Virtualized Phase Portrait Method is presented as an alternative: a virtual velocity field overlay is created from workspace obstacle zone data, and the system is designed for global stability to a single equilibrium point, with local instability in proximity to obstacle regions.
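
    For a flavor of the polar-form control law described above, the sketch below implements the classic kinematic posture-stabilization controller from the mobile-robotics literature; the gains are illustrative, and the pose inputs stand in for Mobile Emulab's camera-based localization rather than reproducing its actual controller.

    import math

    # Stability requires k_rho > 0, k_beta < 0, k_alpha > k_rho.
    K_RHO, K_ALPHA, K_BETA = 3.0, 8.0, -1.5

    def wrap(a):
        # Wrap an angle into [-pi, pi).
        return (a + math.pi) % (2 * math.pi) - math.pi

    def posture_control(x, y, theta, xg, yg, thetag):
        # (v, omega) commands driving pose (x, y, theta) to the goal pose.
        dx, dy = xg - x, yg - y
        rho = math.hypot(dx, dy)                    # distance to goal
        alpha = wrap(math.atan2(dy, dx) - theta)    # heading error toward goal
        beta = wrap(thetag - theta - alpha)         # final-orientation error
        return K_RHO * rho, K_ALPHA * alpha + K_BETA * beta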

    Study on the Performance of TCP over 10Gbps High Speed Networks

    Internet traffic is expected to grow phenomenally over the next five to ten years. To cope with such large traffic volumes, high-speed networks are expected to scale to capacities of terabits per second and beyond. Increasing the role of optics for packet forwarding and transmission inside high-speed networks seems to be the most promising way to accomplish this capacity scaling. Unfortunately, unlike electronic memory, it remains a formidable challenge to build integrated all-optical buffers of even a few dozen packets. On the other hand, many high-speed networks depend on the TCP/IP protocol for reliability; TCP is typically implemented in software and is sensitive to buffer size. For example, TCP requires a buffer equal to the bandwidth-delay product in switches/routers to maintain nearly 100% link utilization; otherwise, performance degrades substantially. But such a large buffer challenges hardware design and power consumption, and introduces queuing delay and jitter, which cause further problems. Therefore, improving TCP performance over tiny-buffered high-speed networks is a top priority. This dissertation studies TCP performance in 10Gbps high-speed networks. First, a 10Gbps reconfigurable optical networking testbed is developed as a research environment. Second, a 10Gbps traffic sniffing tool is developed for measuring and analyzing TCP performance. New expressions for evaluating TCP loss synchronization are presented by carefully examining TCP congestion events. Based on these observations, two basic causes of performance problems are studied. We find that minimizing TCP loss synchronization and reducing the impact of flow burstiness are the critical keys to improving TCP performance in tiny-buffered networks. Finally, we present a new TCP protocol called Multi-Channel TCP and a new congestion control algorithm called Desynchronized Multi-Channel TCP (DMCTCP). Our implementation takes advantage of the potential parallelism of Multi-Path TCP in Linux. Over an emulated 10Gbps network whose routers have only a few dozen packets of buffering, our experimental results confirm that DMCTCP improves bottleneck link utilization much more than many other TCP variants. Our study is a new step towards the deployment of optical packet switching/routing networks.
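
    The buffer-sizing rule the dissertation pushes against is easy to quantify. The sketch below works out the classic bandwidth-delay-product requirement and the sqrt(n) reduction available when n flows are desynchronized (the "sizing router buffers" result); the 100 ms RTT and flow count are illustrative, not the testbed's figures.

    def bdp_bytes(rate_bps, rtt_s):
        # Classic rule of thumb: buffer = bandwidth x round-trip time.
        return rate_bps * rtt_s / 8

    def small_buffer_bytes(rate_bps, rtt_s, n_flows):
        # Desynchronized flows let the buffer shrink by about sqrt(n).
        return bdp_bytes(rate_bps, rtt_s) / n_flows ** 0.5

    print(bdp_bytes(10e9, 0.1) / 1e6)                # 10 Gbps x 100 ms = 125 MB
    print(small_buffer_bytes(10e9, 0.1, 400) / 1e6)  # ~6.25 MB for 400 flows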