61 research outputs found

    A delay-based aggregate rate control for P2P streaming systems

    In this paper we consider mesh-based P2P streaming systems, focusing on the problem of regulating peer transmission rate to match the system demand without overloading each peer's upload link capacity. We propose Hose Rate Control (HRC), a novel scheme to control the speed at which peers offer chunks to other peers, ultimately controlling peer uplink capacity utilization. This is of critical importance in heterogeneous scenarios like the one faced in the Internet, where peer upload capacity is unknown and varies widely. HRC adapts to the actual peer available upload bandwidth and system demand, so that Quality of Experience is greatly enhanced. To support our claims we present both simulations and actual experiments involving more than 1,000 peers to assess performance in real scenarios. Results show that HRC consistently delivers better Quality of Experience than non-adaptive schemes.
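The offer-rate adaptation described above can be illustrated as a simple delay-driven AIMD control loop; the class name, constants, and the AIMD policy below are illustrative assumptions, not HRC's actual control law:

```python
# Hypothetical sketch of delay-based rate control in the spirit of HRC:
# back off when the uplink queue builds up, otherwise probe for capacity.

class DelayBasedRateController:
    """Adapt the rate at which a peer offers chunks from measured delay."""

    def __init__(self, initial_rate=10.0, target_delay=0.2,
                 increase_step=1.0, decrease_factor=0.5,
                 min_rate=1.0, max_rate=100.0):
        self.rate = initial_rate          # chunk offers per second
        self.target_delay = target_delay  # tolerated queuing delay (s)
        self.increase_step = increase_step
        self.decrease_factor = decrease_factor
        self.min_rate = min_rate
        self.max_rate = max_rate

    def update(self, measured_delay):
        """One control step per delay measurement."""
        if measured_delay > self.target_delay:
            # Uplink queue is building up: back off multiplicatively.
            self.rate = max(self.min_rate, self.rate * self.decrease_factor)
        else:
            # Headroom available: probe additively for spare capacity.
            self.rate = min(self.max_rate, self.rate + self.increase_step)
        return self.rate
```

Because the controller reacts only to measured delay, it needs no prior knowledge of the peer's upload capacity, matching the heterogeneous-uplink setting the abstract describes.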

    A Survey of Machine Learning Techniques for Video Quality Prediction from Quality of Delivery Metrics

    A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of delivery (QoD) measurements, which capture the end-to-end performances of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey provides a review of studies that focus on ML techniques for predicting the QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches for predicting QoD for video are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting the video quality in HAS applications, (4) predicting the video quality in SDN applications, (5) predicting the video quality in wireless settings, and (6) predicting the video quality in WebRTC applications. Throughout the survey, some research challenges and directions in this area are discussed, including (1) machine learning over deep learning; (2) adaptive deep learning for improved video delivery; (3) computational cost and interpretability; (4) self-healing networks and failure recovery. 
The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. These algorithms hold considerable potential because they are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.
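As a toy illustration of predicting a quality label from QoD-style features with a traditional ML algorithm, here is a 1-nearest-neighbour sketch; the feature set (throughput in Mbps, RTT in ms, loss in %), training points, and labels are invented for illustration:

```python
import math

# Invented QoD-style training data: (throughput, RTT, loss) -> quality label.
TRAIN = [
    ((8.0, 30.0, 0.1), "good"),
    ((5.0, 80.0, 0.5), "fair"),
    ((1.5, 150.0, 2.0), "poor"),
    ((9.5, 20.0, 0.0), "good"),
    ((2.0, 120.0, 1.5), "poor"),
]

def predict_quality(sample, train=TRAIN):
    """Return the label of the closest training point (1-NN, Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda t: dist(sample, t[0]))[1]
```

Real studies would normalise the features and use far richer datasets; the point here is only the shape of the mapping from QoD metrics to a quality label.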

    Ad-hoc Stream Adaptive Protocol

    With the growing market of smartphones, sophisticated applications that perform extensive computation are common on mobile platforms; and with consumers' high expectation of staying connected on the go, academic researchers and industry have been making efforts to find ways to stream multimedia content to mobile devices. However, the restricted wireless channel bandwidth, the unstable nature of wireless channels, and the unpredictable nature of mobility have been the major roadblocks to the advance of wireless streaming. In this paper, we explain and analyze various recent studies on mobility and P2P system proposals, and propose a new design based on existing P2P systems, aimed at solving the wireless and mobility issues.

    Optimizing on-demand resource deployment for peer-assisted content delivery

    Increasingly, content delivery solutions leverage client resources in exchange for services in a peer-to-peer (P2P) fashion. Such a peer-assisted service paradigm promises significant infrastructure cost reduction, but suffers from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity to clients, especially for real-time applications where content cannot be cached. In this thesis, we propose a novel architectural service model that enables the establishment of higher-fidelity services through (1) coordinating the content delivery to efficiently utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on-demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the upstream capacity of clients. We target three applications that require the delivery of real-time as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time, i.e., the time it takes to deliver the content to all clients in a group. The second application is live video streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, especially for clients running bandwidth-intensive applications.
For each of the above applications, we develop analytical models that efficiently allocate the already available resources. They also efficiently allocate additional on-demand resources to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate these techniques through simulation and/or implementation.
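For the bulk-synchronous transfer application, the P2P literature gives a standard fluid lower bound on the minimum distribution time; the sketch below uses that textbook model, which is not necessarily the exact analytical model developed in the thesis:

```python
def min_distribution_time(F, u_s, uploads, downloads):
    """Fluid lower bound on the time to distribute F bits to all peers.

    F         -- file size in bits
    u_s       -- server (seed) upload rate, bits/s
    uploads   -- list of peer upload rates, bits/s
    downloads -- list of peer download rates, bits/s
    """
    n = len(downloads)
    return max(F / u_s,                       # server must push F out at least once
               F / min(downloads),            # slowest peer must receive all of F
               n * F / (u_s + sum(uploads)))  # aggregate upload must carry n * F
```

The third term is where "angel" nodes help: adding on-demand upload capacity raises the denominator and so lowers the bound when aggregate upload is the bottleneck.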

    Optimizing on-demand resource deployment for peer-assisted content delivery (PhD thesis)

    Increasingly, content delivery solutions leverage client resources in exchange for service in a peer-to-peer (P2P) fashion. Such peer-assisted service paradigms promise significant infrastructure cost reduction, but suffer from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity of these services to the clients. In this thesis, we propose a novel architectural service model that enables the establishment of higher fidelity services through (1) coordinating the content delivery to optimally utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on-demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the uplink capacity of clients. We target three applications that require the delivery of fresh as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time -- the time it takes to deliver the content to all clients in a group. The second application is live streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, and especially for bandwidth-intensive applications. For each of the above applications, we develop mathematical models that optimally allocate the already available resources. 
They also optimally allocate additional on-demand resources to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate those techniques through simulation and/or implementation. (Major Advisor: Azer Bestavros.)

    Investigating the Effects of Network Dynamics on Quality of Delivery Prediction and Monitoring for Video Delivery Networks

    Video streaming over the Internet requires an optimized delivery system given the advances in network architecture, for example, Software Defined Networks. Machine Learning (ML) models have been deployed in an attempt to predict the quality of video streams. Some of these efforts have considered the prediction of Quality of Delivery (QoD) metrics of the video stream in an effort to measure its quality from the network perspective. In most cases, these models have either treated the ML algorithms as black boxes or failed to capture the network dynamics of the associated video streams. This PhD thesis investigates the effects of network dynamics on QoD prediction using ML techniques. The hypothesis that this thesis investigates is that ML techniques that model the underlying network dynamics achieve accurate QoD and video quality predictions and measurements. The results demonstrate that the proposed techniques offer performance gains over approaches that fail to consider network dynamics, and highlight that adopting the correct model, by modelling the dynamics of the network infrastructure, is crucial to the accuracy of the ML predictions. These results are significant as they demonstrate that improved performance is achieved at no additional computational or storage cost. These techniques can help network managers, data center operators and video service providers take proactive and corrective actions for improved network efficiency and effectiveness.

    Network traffic classification: from theory to practice

    Since its inception until today, the Internet has been in constant transformation. The analysis and monitoring of data networks try to shed some light on this huge black box of interconnected computers. In particular, the classification of network traffic has become crucial for understanding the Internet. During the last years, the research community has proposed many solutions to accurately identify and classify the network traffic. However, the continuous evolution of Internet applications and their techniques to avoid detection make their identification a very challenging task, which is far from being completely solved. This thesis addresses the network traffic classification problem from a more practical point of view, filling the gap between the real-world requirements of the network industry and the research carried out. The first block of this thesis aims to facilitate the deployment of existing techniques in production networks. To achieve this goal, we study the viability of using NetFlow, a monitoring protocol already implemented in most routers, as input to our classification technique. Since the application of packet sampling has become almost mandatory in large networks, we also study its impact on the classification and propose a method to improve the accuracy in this scenario. Our results show that it is possible to achieve high accuracy with both sampled and unsampled NetFlow data, despite the limited information provided by NetFlow. Once the classification solution is deployed it is important to maintain its accuracy over time. Current network traffic classification techniques have to be regularly updated to adapt them to traffic changes. The second block of this thesis focuses on this issue with the goal of automatically maintaining the classification solution without human intervention.
Using the knowledge of the first block, we propose a classification solution that combines several techniques using only Sampled NetFlow as input for the classification. Then, we show that classification models suffer from temporal and spatial obsolescence and, therefore, we design an autonomic retraining system that is able to automatically update the models and keep the classifier accurate over time. Going one step further, we next introduce the use of stream-based Machine Learning techniques for network traffic classification. In particular, we propose a classification solution based on Hoeffding Adaptive Trees. Apart from the features of stream-based techniques (i.e., processing an instance at a time and inspecting it only once, with a predefined amount of memory and a bounded amount of time), our technique is able to automatically adapt to changes in the traffic by using only NetFlow data as input for the classification. The third block of this thesis aims to be a first step towards the impartial validation of state-of-the-art classification techniques. The wide range of techniques, datasets, and ground-truth generators makes the comparison of different traffic classifiers a very difficult task. To achieve this goal we evaluate the reliability of different Deep Packet Inspection (DPI) techniques commonly used in the literature for ground-truth generation. The results we obtain show that some well-known DPI techniques present several limitations that make them inadvisable as ground-truth generators in their current state. In addition, we publish some of the datasets used in our evaluations to address the lack of publicly available datasets and to make the comparison and validation of existing techniques easier.
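The Hoeffding Adaptive Trees mentioned above rest on the Hoeffding bound, which dictates when a leaf has seen enough instances to commit to a split; a sketch of that test, where the symbols follow the standard formulation and the example numbers are illustrative:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)): with probability 1 - delta,
    the empirical mean of a variable with range R over n samples lies within
    epsilon of its true mean."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain, second_gain, value_range, delta, n):
    """A Hoeffding tree splits a leaf once the observed gain advantage of
    the best attribute over the runner-up exceeds epsilon, so the choice is
    (probably) the same one a batch learner would make on infinite data."""
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)
```

Because epsilon shrinks as n grows, a leaf with a clear winner splits after few instances while close races wait for more evidence, which is what makes the tree usable on an unbounded NetFlow stream.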

    A Study on Multi-Attribute-Based Optimization of Multi-Network Operation in Smartphones

    Doctoral dissertation, Department of Electrical and Computer Engineering, Seoul National University, February 2015. Advisor: Sunghyun Choi. Today's smartphones integrate multiple radio access technologies (multi-RAT), e.g., 3G, 4G, WiFi, and Bluetooth. Moreover, state-of-the-art smartphones can activate multiple RAT interfaces simultaneously for parallel transmission. Therefore, it is becoming more important to select the best RAT set among the available RATs, and to determine how much data to transfer via each selected RAT network. We propose Energy, Service charge, and Performance Aware (ESPA), an adaptive set of multi-RAT operation policies for smartphones, with a supporting system design and a multi-attribute cost function for smartphone Internet services including multimedia file transfer and video streaming. ESPA's cost function simultaneously incorporates battery energy, data usage quota, and service-specific performance. These attributes are motivated by the growing sensitivity of today's smartphone users to them. Each time the individual attributes are calculated and updated, ESPA selects the optimal RAT set that minimizes the overall cost. It can activate only the single best RAT interface or exploit multiple RATs simultaneously. The primary benefit of ESPA is that it enables the smartphone to always operate in the best mode without the need for the user's manual control: the energy-saving mode if the remaining battery energy is nearly depleted; the cost-saving mode if the remaining data quota is almost running out; or the performance-oriented mode if both the remaining data quota and battery energy are sufficient. From Chapter 2 to Chapter 4, we address file transfer, video streaming, and standby mode with our proposed algorithms. The proposed algorithms are based on service-specific cost or utility models, which also take into account practical issues related to user satisfaction metrics.
First, for the file transfer mode, we apply the transfer completion time as the performance metric, while the energy consumption and service charge for downloading a file of a specific size are simultaneously considered. Furthermore, we specifically address the problem that the computational complexity increases exponentially as the number of available RATs increases, and propose a heuristic linear search algorithm to find the optimal RAT set without significant performance degradation. Secondly, for the video streaming mode, we consider an HTTP-based video streaming model exploiting multipath over LTE and WiFi networks. Based on an analysis of the energy consumption and data usage of video streaming services, we propose a multi-RAT video streaming algorithm that balances the video quality, i.e., the performance metric, against the total playback time given the currently available battery energy and data quota. Finally, we address the battery energy leakage issue of the smartphone in the standby mode due to intermittent traffic generated by applications running in the background. We analyze the energy-consuming factors in the standby mode and the smartphone usage patterns of multiple users, and then propose a usage pattern-aware deep sleep operation algorithm to save battery energy in the standby mode.
Simulation results based on real measurement data from smartphones show that the ESPA algorithms indeed choose the best operational mode by maintaining a dynamic balance among the performance, energy consumption, and service charge, considering the currently provided services and the remaining resources.
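The multi-attribute RAT selection at the heart of ESPA can be sketched as minimizing a weighted cost over all candidate RAT sets; the cost model, weights, and numbers below are invented for illustration, and the parallelism model is deliberately crude:

```python
from itertools import chain, combinations

# Hypothetical per-RAT attributes: (energy cost, service charge, completion time).
RATS = {
    "LTE":  (3.0, 5.0, 2.0),
    "WiFi": (1.0, 0.0, 3.0),
}

def cost(rat_set, w_energy=1.0, w_charge=1.0, w_perf=1.0):
    """Weighted multi-attribute cost of activating a set of RAT interfaces."""
    energy = sum(RATS[r][0] for r in rat_set)
    charge = sum(RATS[r][1] for r in rat_set)
    # Crude model: completion time is that of the fastest member; a real
    # multipath model would account for the aggregated rate of the set.
    perf = min(RATS[r][2] for r in rat_set)
    return w_energy * energy + w_charge * charge + w_perf * perf

def best_rat_set(**weights):
    """Exhaustively search all non-empty RAT subsets for the minimum cost."""
    sets = chain.from_iterable(
        combinations(RATS, k) for k in range(1, len(RATS) + 1))
    return min(sets, key=lambda s: cost(s, **weights))
```

Raising the performance weight mimics the performance-oriented mode, while large energy or charge weights mimic the energy-saving and cost-saving modes; the exhaustive subset search is also the exponential step that the thesis's heuristic linear search is meant to avoid.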

    Incentive-driven QoS in peer-to-peer overlays

    A well-known problem in peer-to-peer overlays is that no single entity has control over the software, hardware and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS overlays: resource allocation protocols that provide strategic peers with participation incentives, while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to sybil attacks, and allows peers to achieve time- and space-deferred contribution reciprocity. Then, we present a novel, QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction, while at the same time allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be attributed entirely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels.
We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation.
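The Vickrey-auction-based allocation mentioned above relies on the second-price rule, under which truthful bidding is a dominant strategy; a minimal single-item sketch (how bids map to contributed resources or service quality is left abstract):

```python
def vickrey_outcome(bids):
    """Single-item second-price (Vickrey) auction.

    bids -- dict mapping peer id to its bid (e.g. pledged contribution).
    Returns (winner, price): the highest bidder wins but pays only the
    second-highest bid, so overstating or understating one's true value
    can never help, which is why truthful bidding is dominant.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price
```

In a QoS overlay, the "price" would be settled in contribution credits (such as those tracked by PledgeRoute) rather than money.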