41 research outputs found

    End-to-end single-rate multicast congestion detection using support vector machines

    Magister Scientiae - MSc
    IP multicast is an efficient mechanism for simultaneously transmitting bulk data to multiple receivers. Many applications can benefit from multicast, such as audio and video conferencing, multi-player games, multimedia broadcasting, distance education, and data replication. For either technical or policy reasons, IP multicast has still not been deployed in today’s Internet. Congestion is one of the most important issues impeding the development and deployment of IP multicast and multicast applications.
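    The abstract stops short of implementation detail, but the core idea lends itself to a short sketch: train a classifier on end-to-end path measurements and use it to label the current state as congested or not. The feature set (RTT, jitter, loss rate), the toy training data, and the use of scikit-learn are illustrative assumptions, not the thesis's actual method.

        # Hypothetical sketch: classify receiver-side path measurements as
        # "congested" vs "not congested" with a support vector machine.
        # Features, training data, and model choice are illustrative.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Each sample: [mean RTT (ms), delay jitter (ms), packet loss rate]
        X_train = np.array([
            [40.0, 2.0, 0.00],    # lightly loaded path
            [45.0, 3.5, 0.01],
            [120.0, 25.0, 0.08],  # congested path
            [150.0, 40.0, 0.12],
        ])
        y_train = np.array([0, 0, 1, 1])  # 0 = no congestion, 1 = congestion

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        model.fit(X_train, y_train)

        # A receiver would feed in its current end-to-end measurements:
        sample = np.array([[110.0, 20.0, 0.05]])
        print("congestion detected" if model.predict(sample)[0] else "path OK")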

    Deployable transport services for low-latency multimedia applications

    Low-latency multimedia applications generate a large and growing share of all Internet traffic. These applications are characterised by tight bounds on end-to-end latency that typically range from tens to a few hundred milliseconds. Operating within these bounds is challenging, with the best-effort delivery service of the Internet giving rise to unreliable delivery with unpredictable latency. The way in which the upper layers of the protocol stack manage this unreliability and unpredictability can greatly impact the quality-of-experience that applications can provide. In this thesis, I focus on the services and abstractions that the transport layer provides to applications. The delivery model provided by the transport layer can have a significant impact on the quality-of-experience that can be provided by the application. Reliability and order, for example, introduce delay while packet loss is detected and the lost data retransmitted. This enforces a particular trade-off between latency, loss, and application quality-of-experience, with reliability taking priority. This trade-off is not suitable for low-latency multimedia applications, which prefer predictable and bounded latency to strict reliability and order. No widely-deployed transport protocol provides a delivery model that fully supports low-latency applications: UDP provides no reliability guarantees, while TCP enforces reliability. Implementing a protocol that does support these applications is difficult: ossification restricts protocols to appearing as UDP or TCP on the wire. To meet both challenges -- of better supporting low-latency multimedia applications, and of deploying a new protocol within an ossified transport layer -- I propose TCP Hollywood, a protocol that maintains wire compatibility with TCP while exposing the trade-off between reliability and delay so that applications can improve their quality-of-experience. I show that TCP Hollywood is deployable on the public Internet, and that it achieves its goal of improving support for low-latency multimedia applications. I conclude by evaluating the API changes that are required to support TCP Hollywood, distilling the protocol into the set of transport services that it provides.
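    The abstract describes exposing the reliability/delay trade-off to applications; a hypothetical message-oriented send API in that spirit might look as follows. The names, the per-message lifetime field, and the retransmission hook are assumptions for illustration, not the actual TCP Hollywood API.

        # Hypothetical sketch: messages carry an application-set lifetime,
        # and expired messages are not retransmitted, trading strict
        # reliability for bounded latency. Not the real TCP Hollywood API.
        import time
        from dataclasses import dataclass, field

        @dataclass
        class Message:
            payload: bytes
            lifetime_s: float  # how long the data remains useful
            sent_at: float = field(default_factory=time.monotonic)

            def expired(self) -> bool:
                return time.monotonic() - self.sent_at > self.lifetime_s

        def on_retransmit_timeout(msg: Message, send_fn) -> None:
            """Retransmit only if the data can still arrive in time;
            an expired message is skipped rather than resent."""
            if not msg.expired():
                send_fn(msg.payload)
            # else: drop silently; the media decoder conceals the loss

        # Example: a video frame that is useless after 100 ms
        frame = Message(payload=b"\x00" * 1200, lifetime_s=0.100)
        on_retransmit_timeout(frame, send_fn=lambda p: print(f"resent {len(p)} bytes"))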

    Cross-layer optimisation of quality of experience for video traffic

    Real-time video is currently the dominant network traffic and is set to increase in volume for the foreseeable future. As this traffic is bursty, providing perceptually good video quality is a challenging task. Bursty traffic refers to the variability of the video traffic level: the rate is high at some times and low at others. Many video traffic measurement algorithms have been proposed for measurement-based admission control. Despite all of this effort, there is no entirely satisfactory admission algorithm for variable rate flows. Furthermore, video frames are subject to loss and delay, which cause quality degradation when frames are sent without reacting to network congestion. The trade-off between perceived Quality of Experience (QoE) and the number of sessions can be optimised by exploiting the bursty nature of video traffic. This study introduces a cross-layer QoE-aware optimisation architecture for video traffic. QoE is a measure of the user's perception of the quality of a network service. The architecture addresses the problem of QoE degradation in a bottleneck network. It proposes that video sources at the application layer adapt their rate to the network environment by dynamically controlling their transmitted bit rate, while the edge of the network protects the quality of active video sessions by controlling the acceptance of new sessions through QoE-aware admission control. In particular, it seeks the most efficient way of accepting new video sessions and adapts sending rates to free up resources for more sessions whilst maintaining the QoE of the current sessions. As a pathway to this objective, the performance of video flows that react to the network load by adapting their sending rate was investigated. Although dynamic rate adaptation enhances video quality, accepting more sessions than a link can accommodate will degrade the QoE. The video's instantaneous aggregate rate was compared to the average aggregate rate, a rate calculated over a measurement time window. It was found that there is no substantial difference between the two rates except for a small number of video flows, a long measurement window, or fast-moving content (such as sport), in which cases the average is smaller than the instantaneous rate. These scenarios do not always represent reality. This finding was the main motivation for proposing a novel, QoE-aware video traffic measurement algorithm. The algorithm finds the upper limit to which the total video rate can exceed a specific link capacity without degrading the QoE of ongoing video sessions. When implemented in a QoE-aware admission control, the algorithm maintained the QoE for a higher number of video sessions compared to calculated rate-based admission controls such as the Internet Engineering Task Force (IETF) standard Pre-Congestion Notification (PCN)-based admission control. Subjective tests were conducted in which human subjects rated the quality of videos delivered with the proposed measurement algorithm. Mechanisms proposed for optimising the QoE of video traffic are surveyed in detail in this dissertation, and the challenges of achieving this objective are discussed. Finally, the current rate adaptation capability of video applications was combined with the proposed QoE-aware admission control in a QoE-aware cross-layer architecture.
    The performance of the proposed architecture was evaluated against an architecture in which video applications perform rate adaptation without being managed by the admission control component. The results showed that the proposed architecture optimises the mean Mean Opinion Score (MOS) and the number of successfully decoded video sessions without compromising delay. The algorithms proposed in this study were implemented and evaluated using Network Simulator version 2 (NS-2), MATLAB, EvalVid and EvalVid-RA. These software tools were selected based on their use in similar studies and their availability at the university. Data obtained from the simulations was analysed with analysis of variance (ANOVA), and Cumulative Distribution Functions (CDF) were calculated for the performance metrics. The proposed architecture will contribute to preparations for the massive growth of video traffic, and the mathematical models of the proposed algorithms contribute to the research community.
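    The admission decision at the heart of the architecture can be illustrated with a minimal sketch: admit a new video session only if the measured aggregate rate plus the session's declared rate stays within a QoE-safe fraction of link capacity. The headroom factor and the rate inputs are assumptions for illustration, not the thesis's exact algorithm.

        # Illustrative sketch of QoE-aware, measurement-based admission
        # control. The 0.95 headroom factor is an assumed QoE-safe bound.

        def admit(measured_aggregate_bps: float,
                  new_session_rate_bps: float,
                  link_capacity_bps: float,
                  qoe_headroom: float = 0.95) -> bool:
            """Return True if the new session can be admitted without
            degrading the QoE of ongoing sessions."""
            return (measured_aggregate_bps + new_session_rate_bps
                    <= qoe_headroom * link_capacity_bps)

        # Example: 10 Mbit/s link, 8.2 Mbit/s measured load, 1 Mbit/s session
        print(admit(8.2e6, 1.0e6, 10e6))  # True: 9.2 Mbit/s <= 9.5 Mbit/s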

    Aggregating the Bandwidth of Multiple Network Interfaces to Increase the Performance of Networked Applications

    Devices capable of connecting to two or more different networks simultaneously, known as host multihoming, are becoming increasingly common. For example, most laptops are equipped with at least a Local Area Network (LAN) and a Wireless LAN (WLAN) interface, and smartphones can connect to both WLANs and 3G networks (High-Speed Downlink Packet Access, HSDPA). Being connected to multiple networks simultaneously allows for desirable features like bandwidth aggregation and redundancy. Enabling and making efficient use of multiple network interfaces or links (network interface and link will be used interchangeably throughout this thesis) requires solving several challenges related to deployment, link heterogeneity and dynamic behavior. Even though multihoming has existed for a long time, for example routers must support connecting to different networks, most existing operating systems, network protocols and applications do not take host multihoming into consideration. The default behavior is still to use a single interface for all traffic. Using a single interface is, for example, often insufficient to meet the requirements of popular, bandwidth-intensive services like video streaming. In this thesis, we have focused on bandwidth aggregation on host-multihomed devices. Even though bandwidth aggregation has been a research field for several years, related works have failed to consider the challenges present in real-world networks properly, or do not apply to scenarios where a device is connected to different heterogeneous networks. In order to solve the deployment challenges and enable the use of multiple links in a way that works in a real-world network environment, we have created a platform-independent framework, called MULTI. MULTI was used as the foundation for designing transparent (to the applications) and application-specific bandwidth aggregation techniques. MULTI works in the presence of Network Address Translation (NAT), automatically detects and configures the device based on changes in link state, and notifies the application(s) of any changes. The application-specific bandwidth aggregation technique presented in this thesis was optimised for and evaluated with quality-adaptive video streaming. The technique was evaluated with different types of streaming in both a controlled network environment and real-world networks. Adding a second link gave a significant increase in both video and playback quality. However, the technique is not limited to video streaming and can be used to improve the performance of several common application types. In many cases, it is not possible to extend applications directly with multilink support. Working at the network layer allows for the creation of bandwidth aggregation techniques that are transparent to applications. Transparent, network-layer bandwidth aggregation techniques must support the behavior of the different transport protocols in order to achieve efficient bandwidth aggregation. The transparent bandwidth aggregation techniques introduced in this thesis are targeted at the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP), the two most common transport protocols in the Internet today.
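    As a rough illustration of the aggregation idea, the sketch below stripes packets across interfaces in proportion to their estimated capacities. A deployable scheduler, as the thesis stresses, must also cope with heterogeneous latency and transport-protocol behaviour; the interface names and rates here are assumptions.

        # Illustrative sketch: bandwidth-proportional striping across
        # multiple interfaces. Interface names and capacities are assumed.
        import random

        links = {
            "wlan0": 20e6,    # estimated capacity, bit/s
            "hsdpa0": 5e6,
        }

        def pick_link(links: dict) -> str:
            """Choose an interface with probability proportional to its
            estimated capacity, so each link carries its fair share."""
            total = sum(links.values())
            r = random.uniform(0, total)
            for name, rate in links.items():
                r -= rate
                if r <= 0:
                    return name
            return name  # floating-point edge case: fall back to last link

        counts = {name: 0 for name in links}
        for _ in range(10_000):
            counts[pick_link(links)] += 1
        print(counts)  # roughly an 80/20 split between wlan0 and hsdpa0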

    A cross-layer quality-oriented energy-efficient scheme for multimedia delivery in wireless local area networks

    Wireless communication technologies, although they emerged only a few decades ago, have grown fast in both popularity and technical maturity. As a result, mobile devices such as Personal Digital Assistants (PDAs) or smartphones equipped with embedded wireless cards have seen remarkable growth in popularity and are quickly becoming one of the most widely used communication tools. This is mainly determined by the flexibility, convenience and relatively low costs associated with these devices and wireless communications. Multimedia applications have become by far one of the most popular applications among mobile users. However, this type of application has very high bandwidth requirements, seriously restricting the usage of portable devices. Moreover, wireless technology involves increased energy consumption and consequently puts huge pressure on the limited battery capacity, which presents many design challenges in the context of battery-powered devices. As a consequence, power management has raised awareness in both the research and industrial communities, and huge efforts have been invested in energy conservation techniques and strategies deployed within different components of mobile devices. Our research presented in this thesis focuses on energy-efficient data transmission in wireless local area networks, and mainly contributes in the following aspects: 1. Static STELA, a Medium Access Control (MAC) layer solution that adapts the sleep/wake-up state schedule of the radio transceiver according to the bursty nature of data traffic and real-time observation of data packet arrival times. The algorithm involves three phases: a slow start phase, an exponential increase phase, and a linear increase phase. The initiation and termination of each phase is self-adapted to real-time traffic and user configuration. It is designed to provide either maximum energy efficiency or best Quality of Service (QoS) according to user preference. 2. Dynamic STELA, a MAC layer solution deployed on mobile devices that provides balanced performance between energy efficiency and QoS. Dynamic STELA consists of the three-phase algorithm used in static STELA, and additionally employs a traffic modeling algorithm to analyze historical traffic data and estimate the arrival time of the next burst. Dynamic STELA achieves energy saving through intelligent and adaptive increase of the Wireless Network Interface Card (WNIC) sleeping interval in the second and third phases, and at the same time guarantees delivery performance through optimal WNIC wake-up timing before the estimated arrival of a new data burst. 3. Q-PASTE, a quality-oriented cross-layer solution for multimedia content delivery, with two components employed at different network layers. The first component, the Packet/ApplicaTion manager (PAT), is deployed at the application layer of both the service gateway and the client host. The gateway-level PAT utilises fast start, a widely supported technique for multimedia content delivery, to achieve high QoS, and shapes traffic into bursts to reduce the wireless transceiver’s duty cycle. Additionally, the gateway-side PAT informs the client host of the starting and ending times of fast start to assist parameter tuning. The client-side PAT monitors each active session and informs the MAC layer about its traffic-related behavior.
    The second component, dynamic STELA, deployed at the MAC layer, adaptively adjusts the sleep/wake-up behavior of the mobile device's wireless interface in order to reduce energy consumption while maintaining high QoS levels. 4. A comprehensive survey of energy-efficiency standards and some of the most important state-of-the-art energy saving technologies is also provided as part of the work.
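    The three-phase interval growth described for static STELA can be sketched as a single update rule: cautious fixed steps first, then exponential doubling, then bounded linear growth. The numeric thresholds and step sizes below are assumptions for illustration; the thesis derives the actual phase transitions from observed traffic and user configuration.

        # Illustrative sketch of a three-phase sleep-interval schedule in
        # the spirit of static STELA. All constants are assumed values.

        def next_sleep_interval(current_ms: float,
                                slow_start_cap_ms: float = 10.0,
                                exp_cap_ms: float = 100.0,
                                linear_step_ms: float = 20.0,
                                max_ms: float = 500.0) -> float:
            """Grow the WNIC sleep interval while no traffic arrives."""
            if current_ms < slow_start_cap_ms:
                return current_ms + 1.0                       # phase 1: slow start
            if current_ms < exp_cap_ms:
                return min(current_ms * 2.0, exp_cap_ms)      # phase 2: exponential
            return min(current_ms + linear_step_ms, max_ms)   # phase 3: linear

        interval = 1.0
        for _ in range(12):  # successive idle periods with no packets
            interval = next_sleep_interval(interval)
        print(f"sleep interval after idle run: {interval:.0f} ms")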

    Decentralising resource management in operating systems

    This dissertation explores operating system mechanisms that allow resource-aware applications to be involved in the process of managing resources, under the premise that these applications (1) potentially have some (implicit) notion of their future resource demands and (2) can adapt their resource demands. The general idea is to provide feedback to resource-aware applications so that they can proactively participate in the management of resources. This approach has the benefit that resource management policies can be removed from central entities, and the operating system only has to provide mechanism. Furthermore, in contrast to centralised approaches, application-specific features can be more easily exploited. To achieve this aim, I propose to deploy a microeconomic theory, namely congestion or shadow pricing, which has recently received attention for managing congestion in communication networks. Applications are charged based on the potential "damage" they cause to other consumers by using resources. Consumers interpret these congestion charges as feedback signals, which they use to adjust their resource consumption. It can be shown theoretically that such a system, with consumers merely acting in their own self-interest, will converge to a social optimum. This dissertation focuses on the operating system mechanisms required to decentralise resource management in this way. In particular, it identifies four mechanisms: pricing & charging, credit accounting, resource usage accounting, and multiplexing. While the latter two are generally required for the accurate management of resources, pricing & charging and credit accounting are novel mechanisms. It is argued that congestion prices are the correct economic model in this context and provide appropriate feedback to applications. The credit accounting mechanism is necessary to ensure the overall stability of the system by assigning value to credits.
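    The feedback loop the dissertation builds on can be sketched in a few lines: the system computes a congestion (shadow) price that rises with utilisation, and each self-interested consumer scales its demand down as its charge grows. The price function, demand model, and constants below are illustrative assumptions, not the dissertation's mechanisms.

        # Illustrative sketch of congestion pricing as a feedback signal.
        # All constants and the demand-response rule are assumed.

        CAPACITY = 100.0  # abstract resource units per charging period

        def congestion_price(total_demand: float) -> float:
            """Shadow price: zero while uncongested, rising as aggregate
            demand approaches capacity (kicks in above 80% utilisation)."""
            utilisation = total_demand / CAPACITY
            return max(0.0, utilisation - 0.8) * 10.0

        demands = [40.0, 35.0, 30.0]  # three resource-aware applications
        for _ in range(20):
            price = congestion_price(sum(demands))
            # Each consumer cuts demand in proportion to the price signal;
            # price sensitivity differs per application.
            demands = [max(d * (1.0 - 0.05 * price * w), 1.0)
                       for d, w in zip(demands, (1.0, 1.5, 2.0))]
        print(f"final demands: {[round(d, 1) for d in demands]}")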

    A World-Class University-Industry Consortium for Wind Energy Research, Education, and Workforce Development: Final Technical Report


    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, such as hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and to approach the challenges of delivering the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Towards AI-assisted Healthcare: System Design and Deployment for Machine Learning based Clinical Decision Support

    Over the last decade, American hospitals have widely adopted electronic health records (EHRs). In the next decade, incorporating EHRs with clinical decision support (CDS) into the practice of medicine has the potential to change the way medicine is practiced and advance the quality of patient care. It is a unique opportunity for machine learning (ML), with its ability to process massive datasets beyond the scope of human capability, to provide new clinical insights that aid physicians in planning and delivering care, ultimately leading to better outcomes, lower costs of care, and increased patient satisfaction. However, applying ML-based CDS faces steep system and application challenges. No open platform exists to support ML and domain experts in developing, deploying, and monitoring ML-based CDS, and no end-to-end solution is available for machine learning algorithms to consume heterogeneous EHRs and deliver CDS in real time. Building ML-based CDS from scratch can be expensive and time-consuming. In this dissertation, CDS-Stack, an open cloud-based platform, is introduced to help ML practitioners deploy ML-based CDS into healthcare practice. CDS-Stack integrates various components into the infrastructure for the development, deployment, and monitoring of ML-based CDS. It provides an ETL engine to transform heterogeneous EHRs, either historical or online, into a common data model (CDM) in parallel, so that ML algorithms can directly consume health data for training or prediction. It introduces both pull- and push-based online CDS pipelines to deliver CDS in real time. CDS-Stack has been adopted by the Johns Hopkins Medical Institute (JHMI) to deliver a sepsis early warning score since November 2017 and is beginning to show promising results. Furthermore, we believe CDS-Stack can be extended to outpatients too. A case study of outpatient CDS has been conducted which utilizes smartphones and machine learning to quantify the severity of Parkinson disease. In this study, a mobile Parkinson disease severity score (mPDS) is generated using a novel machine learning approach. The results show it can detect response to dopaminergic therapy, correlate strongly with traditional rating scales, and capture intraday symptom fluctuation.
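    The push-based pipeline pattern the dissertation describes can be sketched end to end: a raw EHR event is normalised into a common data model, scored, and an alert is pushed when the score crosses a threshold. The field names, the toy scoring rule, and the threshold are assumptions, not CDS-Stack's actual schema or its sepsis model.

        # Illustrative sketch of a push-based online CDS pipeline:
        # raw event -> common data model (CDM) -> score -> alert.
        # Schema and scoring rule are assumed for illustration.
        from dataclasses import dataclass

        @dataclass
        class CDMObservation:
            patient_id: str
            concept: str   # e.g. "heart_rate", "lactate"
            value: float

        def etl(raw_event: dict) -> CDMObservation:
            """Map one heterogeneous source record into the common model."""
            return CDMObservation(
                patient_id=raw_event["pid"],
                concept=raw_event["obs_name"].lower().replace(" ", "_"),
                value=float(raw_event["obs_value"]),
            )

        def risk_score(observations: list) -> float:
            # Toy stand-in for a trained model: elevated heart rate and
            # lactate push the score up.
            hr = max((o.value for o in observations
                      if o.concept == "heart_rate"), default=70.0)
            lac = max((o.value for o in observations
                       if o.concept == "lactate"), default=1.0)
            return 0.01 * (hr - 70.0) + 0.3 * (lac - 1.0)

        stream = [
            {"pid": "p1", "obs_name": "Heart Rate", "obs_value": "118"},
            {"pid": "p1", "obs_name": "Lactate", "obs_value": "3.4"},
        ]
        score = risk_score([etl(e) for e in stream])
        if score > 0.5:  # assumed alerting threshold: push a warning
            print(f"ALERT patient p1: early-warning score {score:.2f}")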

    Cyber Security and Critical Infrastructures 2nd Volume

    The second volume of the book contains the manuscripts that were accepted for publication in the MDPI Special Topic "Cyber Security and Critical Infrastructure" after a rigorous peer-review process. Authors from academia, government and industry contributed their innovative solutions, consistent with the interdisciplinary nature of cybersecurity. The book contains 16 articles: an editorial that explains the current challenges, innovative solutions, and real-world experiences involving critical infrastructure, and 15 original papers that present state-of-the-art innovative solutions to attacks on critical systems.