911 research outputs found

    A Unified End-to-End Communication Paradigm for Heterogeneous Networks

    The aim of this thesis research is to develop a unified communication paradigm that provides an end-to-end bursting model across heterogeneous realms. This model generates end-to-end bursts, thereby eliminating edge node burst assembly and its effect on TCP performance. Simulation models are developed in ns-2 to validate this work by comparing it with edge burst assembly on OBS networks. Analysis shows improved end-to-end performance for a variety of burst sizes, timeouts, and other network parameters.
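
    As a rough illustration of the edge-node burst assembly this thesis argues against, the sketch below (not taken from the thesis; all names and thresholds are hypothetical) shows the usual size/timeout trigger logic whose assembly delay end-to-end bursting is meant to remove.

```python
# Minimal sketch of size/timeout burst assembly at an OBS edge node.
# Not from the thesis; thresholds and names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BurstAssembler:
    max_burst_bytes: int = 64_000      # size trigger (assumed value)
    timeout_s: float = 0.005           # timeout trigger (assumed value)
    _buffer: List[bytes] = field(default_factory=list)
    _buffered_bytes: int = 0
    _first_arrival: Optional[float] = None

    def on_packet(self, pkt: bytes, now: float) -> Optional[bytes]:
        """Queue a packet; emit a burst when either trigger fires."""
        if not self._buffer:
            self._first_arrival = now
        self._buffer.append(pkt)
        self._buffered_bytes += len(pkt)

        size_trigger = self._buffered_bytes >= self.max_burst_bytes
        time_trigger = now - self._first_arrival >= self.timeout_s
        if size_trigger or time_trigger:
            return self._emit()
        return None

    def _emit(self) -> bytes:
        # Packets wait here until a trigger fires; this queueing delay is
        # the burstification latency that end-to-end bursting avoids.
        burst, self._buffer = self._buffer, []
        self._buffered_bytes, self._first_arrival = 0, None
        return b"".join(burst)
```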

    Max-min Fairness in 802.11 Mesh Networks

    In this paper we build upon the recent observation that the 802.11 rate region is log-convex and, for the first time, characterise max-min fair rate allocations for a large class of 802.11 wireless mesh networks. By exploiting features of the 802.11e/n MAC, in particular TXOP packet bursting, we are able to use this characterisation to establish a straightforward, practically implementable approach for achieving max-min throughput fairness. We demonstrate that this approach can be readily extended to encompass time-based fairness in multi-rate 802.11 mesh networks.
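
    For readers unfamiliar with the fairness notion, the following sketch shows the textbook progressive-filling procedure for max-min fairness on a simplified fixed-capacity link model. It only illustrates the objective; it is not the paper's characterisation over the log-convex 802.11 rate region or its TXOP-based implementation.

```python
# Progressive filling: raise all flows together and freeze those that hit a
# saturated link. Simplified wired-style model for illustration only.
from typing import Dict, Set


def max_min_fair(flows: Dict[str, Set[str]], capacity: Dict[str, float],
                 step: float = 0.001) -> Dict[str, float]:
    """flows: flow -> set of links it crosses; capacity: link -> link capacity."""
    rate = {f: 0.0 for f in flows}
    frozen: Set[str] = set()                  # flows that hit a saturated link
    while len(frozen) < len(flows):
        # Raise every unfrozen flow by one step.
        for f in flows:
            if f not in frozen:
                rate[f] += step
        # Freeze every flow crossing a link that is now saturated.
        for link, cap in capacity.items():
            load = sum(rate[f] for f in flows if link in flows[f])
            if load >= cap:
                frozen |= {f for f in flows if link in flows[f]}
    return rate


# Example: f1 and f2 share link "a"; f2 also crosses the narrower link "b".
# The max-min fair outcome is roughly f1 = 6, f2 = 4.
print(max_min_fair({"f1": {"a"}, "f2": {"a", "b"}}, {"a": 10.0, "b": 4.0}))
```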

    Provider-Controlled Bandwidth Management for HTTP-based Video Delivery

    Over the past few years, a revolution in video delivery technology has taken place as mobile viewers and over-the-top (OTT) distribution paradigms have significantly changed the landscape of video delivery services. For decades, high quality video was only available in the home via linear television or physical media. Though Web-based services brought video to desktop and laptop computers, the dominance of proprietary delivery protocols and codecs inhibited research efforts. The recent emergence of HTTP adaptive streaming protocols has prompted a re-evaluation of legacy video delivery paradigms and introduced new questions as to the scalability and manageability of OTT video delivery. This dissertation addresses the question of how to enable content and network service providers to monitor and manage large numbers of HTTP adaptive streaming clients in an OTT environment. Our early work focused on demonstrating the viability of server-side pacing schemes to produce an HTTP-based streaming server. We also investigated the ability of client-side pacing schemes to work with both commodity HTTP servers and our HTTP streaming server. Continuing our client-side pacing research, we developed our own client-side data proxy architecture which was implemented on a variety of mobile devices and operating systems. We used the portable client architecture as a platform for investigating different rate adaptation schemes and algorithms. We then concentrated on evaluating the network impact of multiple adaptive bitrate clients competing for limited network resources, and developing schemes for enforcing fair access to network resources. The main contribution of this dissertation is the definition of segment-level client and network techniques for enforcing class of service (CoS) differentiation between OTT HTTP adaptive streaming clients. We developed a segment-level network proxy architecture which works transparently with adaptive bitrate clients through the use of segment replacement. We also defined a segment-level rate adaptation algorithm which uses download aborts to enforce CoS differentiation across distributed independent clients. The segment-level abstraction more accurately models application-network interactions and highlights the difference between segment-level and packet-level time scales. Our segment-level CoS enforcement techniques provide a foundation for creating scalable managed OTT video delivery services.
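
    The segment-level, download-abort idea can be pictured with the hypothetical client loop below: bitrate selection is scaled by a class-of-service weight, and a segment that overruns its deadline is aborted and refetched one rung lower. The fetch_segment callable, bitrate ladder, and weights are assumptions made for illustration, not the dissertation's algorithm or API.

```python
# Sketch of a CoS-weighted, segment-level adaptation loop with download aborts.
# fetch_segment(index, bitrate, deadline_s) is a hypothetical callable that
# returns segment bytes or raises TimeoutError when the deadline expires.
import time
from typing import Callable, List


def choose_bitrate(ladder: List[int], throughput_bps: float, cos_weight: float) -> int:
    """Highest rung not exceeding the CoS-weighted throughput estimate."""
    budget = throughput_bps * cos_weight
    eligible = [r for r in ladder if r <= budget]
    return max(eligible) if eligible else min(ladder)


def fetch_adaptively(fetch_segment: Callable[[int, int, float], bytes],
                     ladder: List[int], segment_duration_s: float,
                     cos_weight: float, n_segments: int) -> None:
    throughput = float(ladder[0])              # conservative initial estimate
    for idx in range(n_segments):
        rate = choose_bitrate(ladder, throughput, cos_weight)
        start = time.monotonic()
        try:
            data = fetch_segment(idx, rate, segment_duration_s)
        except TimeoutError:
            # The abort enforces CoS: retry the same segment one rung lower
            # with a relaxed deadline so playback can catch up.
            rate = choose_bitrate(ladder, rate * 0.5, cos_weight)
            data = fetch_segment(idx, rate, segment_duration_s * 2)
        elapsed = max(time.monotonic() - start, 1e-6)
        throughput = 8 * len(data) / elapsed   # instantaneous estimate, no smoothing
```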

    Interoperability in the Heterogeneous Cloud Environment: A Survey of Recent User-centric Approaches

    Cloud computing provides users the ability to access shared, online computing resources. However, providers often offer their own proprietary applications, interfaces, APIs and infrastructures, resulting in a heterogeneous cloud environment. This heterogeneous environment makes it difficult for users to change cloud service providers; exploring capabilities to support the automated migration from one provider to another is an active, open research area. Many standards bodies (IEEE, NIST, DMTF and SNIA), industry (middleware) and academia have been pursuing approaches to reduce the impact of vendor lock-in by investigating the cloud migration problem at the level of the VM. However, the migration downtime, decoupling the VM from underlying systems and the security of live channels remain open issues. This paper focuses on analysing recently proposed live, cloud migration approaches for VMs at the infrastructure level in the cloud architecture. The analysis reveals issues with flexibility, performance, and security of the approaches, including additional loads to the CPU and disk I/O drivers of the physical machine where the VM initially resides. The next steps of this research are to develop and evaluate a new approach, LibZam (Libya Zamzem), that will work towards addressing the identified limitations.

    Spectrum Utilization and Congestion of IEEE 802.11 Networks in the 2.4 GHz ISM Band

    Wi-Fi technology plays a major role in society thanks to its widespread availability, ease of use and low cost. To assure its long-term viability in terms of capacity and ability to share the spectrum efficiently, it is of paramount importance to study spectrum utilization and congestion mechanisms in live environments. In this paper the service level in the 2.4 GHz ISM band is investigated, with a focus on today's IEEE 802.11 WLAN systems with support for the 802.11e extension. Here service level means the overall Quality of Service (QoS), i.e. can all devices fulfill their communication needs? A cross-layer approach is used, since the service level can be measured at several levels of the protocol stack. The focus is on monitoring the Physical (PHY) and Medium Access Control (MAC) layers simultaneously, by performing power measurements with a spectrum analyzer to assess spectrum utilization and packet sniffing to measure congestion. Compared to traditional QoS analysis in 802.11 networks, packet sniffing allows the occurring congestion mechanisms to be studied more thoroughly. The monitoring is applied to two cases. First, the influence of interference between WLAN networks sharing the same radio channel is investigated in a controlled environment. It turns out that the retry rate and Clear-To-Send (CTS), Request-To-Send (RTS) and (Block) Acknowledgment (ACK) frames can be used to identify congestion, whereas the spectrum analyzer is employed to identify the source of interference. Secondly, live measurements are performed at three locations to identify this type of interference in real-life situations. Results show inefficient use of the wireless medium in certain scenarios, due to a large portion of management and control frames compared to data frames (i.e. only 21% of the frames are identified as data frames).
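
    A minimal sketch of the packet-sniffing side of such a measurement, assuming a monitor-mode capture parsed with scapy (rdpcap and the Dot11 layer): it tallies management/control/data frames and the retry rate, the indicators the paper uses to flag congestion. This is illustrative only, not the paper's measurement code.

```python
# Classify captured 802.11 frames by type and compute the retry rate.
# Assumes a monitor-mode pcap readable by scapy; the file name is hypothetical.
from collections import Counter
from scapy.all import rdpcap
from scapy.layers.dot11 import Dot11

TYPE_NAMES = {0: "management", 1: "control", 2: "data"}


def summarise_capture(pcap_path: str) -> None:
    counts, retries, total = Counter(), 0, 0
    for pkt in rdpcap(pcap_path):
        if not pkt.haslayer(Dot11):
            continue
        dot11 = pkt[Dot11]
        counts[TYPE_NAMES.get(dot11.type, "other")] += 1
        if int(dot11.FCfield) & 0x08:        # Retry bit in the Frame Control field
            retries += 1
        total += 1
    if total == 0:
        print("no 802.11 frames found")
        return
    for name, n in counts.items():
        print(f"{name:>10}: {100 * n / total:5.1f}%")
    print(f"retry rate: {100 * retries / total:5.1f}%")


# summarise_capture("channel6_monitor.pcap")   # hypothetical capture file
```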

    Checkpointing as a Service in Heterogeneous Cloud Environments

    A non-invasive, cloud-agnostic approach is demonstrated for extending existing cloud platforms to include checkpoint-restart capability. Most cloud platforms currently rely on each application to provide its own fault tolerance. A uniform mechanism within the cloud itself serves two purposes: (a) direct support for long-running jobs, which would otherwise require a custom fault-tolerant mechanism for each application; and (b) the administrative capability to manage an over-subscribed cloud by temporarily swapping out jobs when higher priority jobs arrive. An advantage of this uniform approach is that it also supports parallel and distributed computations, over both TCP and InfiniBand, thus allowing traditional HPC applications to take advantage of an existing cloud infrastructure. Additionally, an integrated health-monitoring mechanism detects when long-running jobs either fail or incur exceptionally low performance, perhaps due to resource starvation, and proactively suspends the job. The cloud-agnostic feature is demonstrated by applying the implementation to two very different cloud platforms: Snooze and OpenStack. The use of a cloud-agnostic architecture also enables, for the first time, migration of applications from one cloud platform to another.
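
    The health-monitoring idea can be sketched as a simple progress watchdog, shown below; the Job interface, thresholds, and checkpoint/suspend hooks are hypothetical stand-ins, not the paper's implementation.

```python
# Watchdog sketch: sample a job's progress and suspend (after checkpointing)
# jobs that have failed or whose progress rate has collapsed.
# Job fields and thresholds are assumptions for illustration.
import time
from dataclasses import dataclass
from typing import Callable


@dataclass
class Job:
    job_id: str
    is_alive: Callable[[], bool]        # e.g. polls the cloud platform for liveness
    progress: Callable[[], float]       # monotone work counter reported by the job
    checkpoint: Callable[[], None]      # saves job state via the platform's mechanism
    suspend: Callable[[], None]         # releases the job's resources


def monitor(job: Job, interval_s: float = 60.0, min_rate: float = 0.01) -> None:
    last = job.progress()
    while True:
        time.sleep(interval_s)
        if not job.is_alive():
            print(f"{job.job_id}: failed; restart from the last checkpoint")
            return
        current = job.progress()
        rate = (current - last) / interval_s
        if rate < min_rate:             # exceptionally low progress, e.g. starvation
            job.checkpoint()            # preserve state before giving up resources
            job.suspend()
            print(f"{job.job_id}: suspended (progress rate {rate:.4f})")
            return
        last = current
```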