
    An energy efficient http adaptive streaming protocol design for mobile hand-held devices

    Internet traffic generated from mobile devices has grown enormously in the last few years. With the increasing popularity of streaming applications on mobile devices, video traffic from these devices is also increasing. One of the big challenges of streaming applications on mobile devices is their energy-intensive behaviour: energy management has always been a critical issue for mobile devices, and the wireless network interface consumes a significant portion of the total system energy while active. During video streaming, the network interface is kept awake for long periods, causing a large energy drain. Several research works have focused on reducing energy consumption during video streaming on mobile devices. HTTP adaptive streaming is gaining popularity as a method of video delivery because of its significant advantages in terms of both user-perceived quality and resource utilization: by changing the requested video version, it adapts its rate to the varying available network capacity. Several research works aim to improve the performance of rate adaptation, but none of them has focused on reducing energy consumption during HTTP adaptive streaming. In this thesis, an energy-efficient HTTP adaptive streaming protocol is designed. The new protocol uses an efficient buffer management approach and a three-step bitrate selection mechanism, and is implemented by modifying the Adobe OSMF player version 1.6. Performance evaluation is carried out through experiments in a lab environment and three real-world environments. The experimental results show that the proposed protocol achieves high amounts of network interface sleep time (estimated at more than 70% for WiFi and more than 35% for 3G/EDGE) and reduces energy consumption during data transfer. It can also reduce data wastage by 80% when video playback is interrupted.
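The burst-and-sleep intuition behind such an energy-efficient streaming protocol can be sketched numerically. The function below is a minimal steady-state model with illustrative parameters and a simplified drain-to-empty cycle; it is not the thesis's actual buffer management scheme.

```python
# Minimal sketch: download segments in bursts to fill the playback buffer,
# then let the radio sleep while the buffer drains. All parameters are
# illustrative assumptions, not the protocol's actual values.

def sleep_fraction(buffer_s, video_kbps, link_kbps):
    """Fraction of time the radio can sleep in steady state.

    Fill `buffer_s` seconds of video in one burst, then sleep until the
    buffer drains (to empty, for simplicity; a real player would use a
    low watermark).
    """
    if link_kbps <= video_kbps:
        return 0.0  # link cannot outpace playback: radio is always busy
    burst_time = buffer_s * video_kbps / link_kbps  # seconds to fill buffer
    cycle_time = buffer_s                           # buffer drains in real time
    return 1.0 - burst_time / cycle_time

# e.g. a 30 s buffer of 2 Mbps video over a 20 Mbps WiFi link:
# the radio is active for only 3 s of each 30 s cycle, i.e. 90% sleep.
```

The faster the link relative to the video bitrate, the larger the sleep fraction, which is why the WiFi savings reported above exceed the 3G/EDGE ones.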

    Enhancing User Experience by Extracting Application Intelligence from Network Traffic

    Internet Service Providers (ISPs) continue to get complaints from users about poor experience for diverse Internet applications, ranging from video streaming and gaming to social media and teleconferencing. Identifying and rectifying the root cause of these experience events requires the ISP to know more than just coarse-grained measures like link utilizations and packet losses. Application classification and experience measurement using traditional deep packet inspection (DPI) techniques are starting to fail with the increasing adoption of traffic encryption, and are not cost-effective given the explosive growth in traffic rates. This thesis leverages the emerging paradigms of machine learning and programmable networks to design and develop systems that can deliver application-level intelligence to ISPs at a scale, cost, and accuracy not previously achieved. This thesis makes four new contributions. Our first contribution develops a novel transformer-based neural network model that classifies applications based on their traffic shape, agnostic to encryption. We show that this approach achieves an F1-score of over 97% for diverse application classes such as video streaming and gaming. Our second contribution builds and validates algorithmic and machine learning models to estimate user experience metrics for on-demand and live video streaming applications, such as bitrate, resolution, buffer states, and stalls. For our third contribution, we analyse ten popular latency-sensitive online multiplayer games and develop data structures and algorithms to rapidly and accurately detect each game using automatically generated signatures. By combining this with active latency measurement and geolocation analysis of the game servers, we help ISPs determine better routing paths to reduce game latency. Our fourth and final contribution develops a prototype of a self-driving network that autonomously intervenes just-in-time to alleviate the suffering of applications impacted by transient congestion. We design and build a complete system that extracts application-aware network telemetry from programmable switches and dynamically adapts QoS policies to manage the bottleneck resources in an application-fair manner, and show that it outperforms known queue management techniques in various traffic scenarios. Taken together, our contributions allow ISPs to measure and tune their networks in an application-aware manner to offer their users the best possible experience.
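Classifying applications "based on their traffic shape, agnostic to encryption" implies featurising each flow from packet metadata alone. The sketch below shows one plausible featurisation: bytes per time slot in each direction. The slotting scheme and event format are illustrative assumptions, not the thesis's actual representation.

```python
# Sketch: turn a flow's packet metadata into a fixed-length "traffic shape"
# vector that a model (e.g. a transformer) could classify without ever
# touching the (encrypted) payload. Slot width and window length are
# hypothetical parameters.

def traffic_shape(packets, slot_ms=100, n_slots=20):
    """packets: time-sorted list of (timestamp_ms, size_bytes, direction),
    with direction 'up' or 'down'. Returns (up_vector, down_vector) of
    bytes observed per slot."""
    up = [0] * n_slots
    down = [0] * n_slots
    for ts, size, direction in packets:
        slot = int(ts // slot_ms)
        if slot >= n_slots:
            break  # truncate the flow to the observation window
        (up if direction == 'up' else down)[slot] += size
    return up, down
```

A video stream would show periodic large downstream bursts, while a game would show small, steady bidirectional traffic; that contrast is what a shape-based classifier exploits.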

    Quality-driven management of video streaming services in segment-based cache networks


    Performance evaluation of caching placement algorithms in named data network for video on demand service

    The purpose of this study is to evaluate the performance of caching placement algorithms (LCD, LCE, Prob, Pprob, Cross, Centrality, and Rand) in Named Data Networking (NDN) for Video on Demand (VoD), with the aim of improving service quality and reducing download time. The study proceeded in two stages: the first determined the causes of delay in NDN cache algorithms under a VoD workload; the second evaluated the seven cache placement algorithms on a cloud of video content in terms of the key performance metrics: delay time, average cache hit ratio, total reduction in network footprint, and reduction in server load. NS3 simulations over the Internet2 topology were used to evaluate, analyse, and compare the algorithms across cache sizes of 1 GB, 10 GB, 100 GB, and 1 TB. The study shows that heterogeneous user requests for online videos lead to delay in network performance, and that delay also grows with the volume of video requests. The results further indicate that increasing cache capacity gives the placement algorithms a significant increase in average cache hit ratio, a reduction in server load, and a larger total reduction in network footprint, which together minimise delay time. Finally, Centrality proved to be the worst cache placement algorithm in the results obtained.
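Two of the placement algorithms compared above differ only in where a copy is left on the delivery path: LCE (Leave Copy Everywhere) caches the object at every node between the hit and the client, while LCD (Leave Copy Down) caches it only one hop below the node where the hit occurred. The sketch below illustrates that difference; the LRU eviction policy and the tiny two-cache path are illustrative assumptions, not the study's simulation setup.

```python
# Sketch of LCE vs LCD placement on a linear path of caches.
from collections import OrderedDict

class LruCache:
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()
    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # refresh recency on a hit
            return True
        return False
    def put(self, key):
        self.store[key] = True
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

def request(path, item, policy):
    """path: caches ordered from client edge to server side.
    Returns the hop index of the hit (len(path) = served by origin)."""
    for hop, cache in enumerate(path):
        if cache.get(item):
            hit = hop
            break
    else:
        hit = len(path)  # miss everywhere: origin server answers
    if policy == 'LCE':              # leave a copy at every cache passed
        for cache in path[:hit]:
            cache.put(item)
    elif policy == 'LCD' and hit > 0:  # leave one copy, one hop down
        path[hit - 1].put(item)
    return hit
```

Under LCD an object migrates one hop closer to the client per request, so only genuinely popular objects reach the edge; under LCE a single request fills every cache on the path.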

    Improving video streaming experience through network measurements and analysis

    Multimedia traffic dominates today’s Internet. In particular, the most prevalent traffic carried over wired and wireless networks is video. Most popular streaming providers (e.g. Netflix, Youtube) utilise HTTP adaptive streaming (HAS) for video content delivery to end-users. The power of HAS lies in the ability to change video quality in real time depending on the current state of the network (i.e. available network resources). The main goal of HAS algorithms is to maximise video quality while minimising re-buffering events and switching between different qualities. However, these requirements are opposite in nature, so striking a perfect blend is challenging, as there is no single widely accepted metric that captures user experience based on the aforementioned requirements. In recent years, researchers have put a lot of effort into designing subjectively validated metrics that can be used to map quality, re-buffering and switching behaviour of HAS players to the overall user experience (i.e. video QoE). This thesis demonstrates how data analysis can contribute in improving video QoE. One of the main characteristics of mobile networks is frequent throughput fluctuations. There are various underlying factors that contribute to this behaviour, including rapid changes in the radio channel conditions, system load and interaction between feedback loops at the different time scales. These fluctuations highlight the challenge to achieve a high video user experience. In this thesis, we tackle this issue by exploring the possibility of throughput prediction in cellular networks. The need for better throughput prediction comes from data-based evidence that standard throughput estimation techniques (e.g. exponential moving average) exhibit low prediction accuracy. Cellular networks deploy opportunistic exponential scheduling algorithms (i.e. proportional-fair) for resource allocation among mobile users/devices. 
These algorithms take into account a user’s physical layer information together with throughput demand. While the algorithm itself is proprietary to the manufacturer, physical layer and throughput information are exchanged between devices and base stations. The availability of this information allows for a data-driven approach to throughput prediction. This thesis utilises a machine-learning approach to predict available throughput based on measurements in the near past. As a result, a prediction accuracy with an error of less than 15% in 90% of samples is achieved. Adding information from other devices served by the same base station (network-based information) further improves accuracy while lessening the need for a large history (i.e. how far to look into the past). Finally, the throughput prediction technique is incorporated into state-of-the-art HAS algorithms. The approach is validated in a commercial cellular network and on a stock mobile device. As a result, better throughput prediction helps improve user experience by up to 33%, while reducing re-buffering events by up to 85%. In contrast to wireless networks, channel characteristics of the wired medium are more stable, resulting in less prominent throughput variations. However, all traffic traverses network queues (i.e. a router or switch), unlike in cellular networks where each user gets a dedicated queue at the base station. Furthermore, network operators usually deploy a simple first-in-first-out queuing discipline at these queues. As a result, traffic can experience excessive delays due to the large queue sizes usually deployed in order to minimise packet loss and maximise throughput. This effect, also known as bufferbloat, negatively impacts delay-sensitive applications, such as web browsing and voice. While there exist guidelines for modelling queue size, there is no work analysing its impact on video streaming traffic generated by multiple users.
To address this gap, the performance of multiple video clients sharing a bottleneck link is analysed. Moreover, the analysis is extended to a realistic case including heterogeneous round-trip times (RTT) and traffic (i.e. web browsing). Based on the experimental results, a simple two-queue discipline is proposed for scheduling heterogeneous traffic by taking application characteristics into account. Compared to the state-of-the-art Active Queue Management (AQM) discipline, Controlled Delay (CoDel), the proposed discipline decreases the median Page Load Time (PLT) of web traffic by up to 80%, with no significant negative impact on video QoE.
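The standard throughput estimator the thesis argues against, the exponential moving average mentioned above, can be sketched in a few lines. The smoothing factor is an illustrative choice, not a value from the thesis.

```python
# Sketch of the EWMA baseline: predict the next throughput sample as an
# exponentially weighted average of past samples. With bursty cellular
# throughput, this estimator lags sudden changes, which is the motivation
# for the machine-learning predictor described above.

def ewma_predict(samples_mbps, alpha=0.25):
    """Return the EWMA over a history of throughput samples (Mbps).
    Higher alpha weights recent samples more heavily."""
    estimate = samples_mbps[0]
    for s in samples_mbps[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate
```

On a history like [20, 20, 2] Mbps (a sudden drop), the EWMA still predicts well above 2 Mbps, illustrating the lag that motivates learning-based prediction from physical-layer features instead.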

    Mathematical analysis, jitter computation methods, and applications to Internet networks

    In recent years, the Internet Protocol has seen huge use for delivering multimedia traffic. Developments in broadband networks have driven progress in multimedia applications such as voice over IP, video streaming, and real-time interactive video. However, the stochastic nature of networks, in particular mobile networks, makes it difficult to maintain good quality at all times. This PhD thesis deals with improving the quality of service (QoS) for this kind of application. Current network protocols provide multiple QoS control mechanisms: congestion control and transmission delay optimization are provided by packet scheduling strategies and bandwidth planning, while flow control adjusts the mismatch between the video server rate and the receiver's available bandwidth. Nevertheless, video applications, in particular interactive video, are very sensitive to delay variation, commonly called jitter. Indeed, the customers' perceived video quality depends closely on it: an increase in jitter may cause a long video start-up delay, video interruptions, and a decrease in image quality. The main objective of this thesis is the study of jitter, a parameter that has received little attention in the IP networking literature, and of the impact of its increase on video transmitted over IP, one of the most popular applications today. Beyond the difficulties of modeling traffic and networks, this objective raises several questions. How can jitter be calculated analytically for traffic modeled by general packet-level distributions? Are the proposed models sufficiently simple and easy to compute? How can these new formulations be integrated into performance monitoring? How can the analytical estimate minimize the control-packet traffic exchanged for each video connection? We first explore the jitter calculation in queues with traffic other than Poisson, a model widely used for Internet traffic because of its simplicity, at the cost of accuracy. The idea is to compute jitter with the same formula as in the Poisson case but with other distributions, using approximations and assumptions when an analytical characterization of the transit time is not possible. We adopt simulation to validate the approximate models: across the set of simulations, the average jitter calculated by our model and that obtained by simulation coincide within appropriate confidence intervals. Moreover, the execution time needed to evaluate jitter is small, which facilitates the use of the proposed formulas in control tools and optimization models. We then study the possibility of exploiting these analytical results to control jitter buffers, an important component in video transmission, and find that their performance can be evaluated analytically by estimating jitter inside this type of buffer.
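The validation-by-simulation approach described above can be illustrated for the Poisson baseline case. The sketch below simulates an M/M/1 queue and measures jitter as the mean absolute difference between consecutive packet transit times; this jitter definition and the parameters are illustrative assumptions, not the thesis's exact formulation.

```python
# Sketch: estimate mean jitter in a simulated M/M/1 queue (Poisson
# arrivals, exponential service, single server), the baseline traffic
# model that the thesis generalises beyond.

import random

def mm1_jitter(arrival_rate, service_rate, n_packets, seed=1):
    """Mean absolute difference between consecutive sojourn (transit)
    times, one simple measure of delay variation."""
    rng = random.Random(seed)
    t, depart_prev = 0.0, 0.0
    transits = []
    for _ in range(n_packets):
        t += rng.expovariate(arrival_rate)        # Poisson arrival process
        start = max(t, depart_prev)               # wait if server is busy
        depart_prev = start + rng.expovariate(service_rate)
        transits.append(depart_prev - t)          # sojourn time of packet
    return sum(abs(a - b) for a, b in zip(transits[1:], transits)) / (len(transits) - 1)
```

Running this for several seeds and comparing the empirical mean against an analytical formula is exactly the kind of confidence-interval validation the abstract describes.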

    Understanding the effect of network conditions on user quality of experience

    It has been predicted that, by 2022, approximately 82% of Internet traffic will be video traffic (CISCO, 2019). People are expected to watch videos on a range of devices, such as mobile phones, smart TVs, computers, and tablets, while at the same time becoming ever more demanding about video quality. In this context, it is crucial that Internet providers understand how network conditions affect video quality, since this directly impacts the user's quality of experience (QoE). The main goal of this work is to study the relationship between the WiFi driver buffer size and perceived QoE, using interpretable methods. The analysis is based on experiments that collect data from a video application streamed over a monitored network. YouTube video metrics are collected using a Google Chrome extension implemented in JavaScript. More specifically, the collected data yields: start-up latency, video bitrate, bitrate switches, and the occurrence and duration of rebuffering events. These metrics serve as proxies for the QoE perceived by the user. To understand how the QoE metrics respond to changes in network performance, the network conditions are varied, for example the packet loss rate and, crucially, the WiFi driver buffer size of the router. In the future, experiments will be run with volunteer customers of an Internet provider to build a model that infers QoE metrics from network metrics and the WiFi driver buffer size.
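The QoE proxy metrics listed above can be derived from a player event log. The sketch below assumes a hypothetical event format ('request', 'playing', 'stall', 'resume'); it is not the actual output of the Chrome extension described in the work.

```python
# Sketch: compute start-up latency, rebuffering count, and total stall
# time from a time-ordered player event log. Event names are illustrative
# assumptions.

def qoe_proxies(events):
    """events: list of (timestamp_s, name) tuples in time order."""
    startup, stalls, stall_total = None, 0, 0.0
    t0, stall_start = None, None
    for ts, name in events:
        if name == 'request':           # user pressed play
            t0 = ts
        elif name == 'playing' and startup is None:
            startup = ts - t0           # first frame rendered
        elif name == 'stall':           # buffer ran dry
            stalls += 1
            stall_start = ts
        elif name == 'resume':          # playback recovered
            stall_total += ts - stall_start
    return {'startup_s': startup, 'stalls': stalls, 'stall_time_s': stall_total}
```

Sweeping the router's WiFi driver buffer size while logging such events is one way to expose the relationship between buffer size and these QoE proxies.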

    QoE Evaluation Across a Range of User Age Groups in Video Applications

    Quality of Service (QoS) measures are network parameters (delay, jitter, and loss) and do not reflect the actual quality of the service received by the end user. To get a true view of performance from a user's perspective, the Quality of Experience (QoE) measure is now used. Traditionally, QoS network measurements are carried out on actual network components, such as routers and switches, since these are the key network elements. In this thesis, however, the experimentation has been done on real video traffic. The experimental setup made use of a very popular tool, the Network Emulator (NetEm) created by the Linux Foundation, which allows network emulation without using actual network devices such as routers and traffic generators. The commonly offered NetEm features are those that researchers have used in the past. These share a limitation with traditional simulators: the inability of NetEm's delay-jitter model to represent realistic network traffic, i.e. to reflect the behaviour of real-world networks. NetEm's default method of inputting delay and jitter adds or subtracts a fixed amount of delay on the outgoing traffic, and NetEm also allows this variation to be added in a correlated fashion. However, with this technique the output packet delays are very limited in range and hence unlike real Internet traffic, which exhibits a vast range of delays. The standard alternative NetEm allows is to generate the delays from either a Normal (Gaussian) or a Pareto distribution. This research, however, has shown that a Gaussian or Pareto distribution also has very severe limitations, which are fully discussed in Chapter 5 on page 68 of this thesis. This research adopts another approach that NetEm also allows (with more difficulty): by measuring a very large number of packet delays generated from a double-exponential distribution, a packet-delay profile is created that far better imitates the delays actually seen in Internet traffic. In this thesis, a large set of statistical delay values was gathered and used to create delay distribution tables. Additionally, to overcome NetEm's default behaviour of re-ordering packets once jitter is applied, a PFIFO queuing discipline was deployed to retain the original packet order regardless of the level of implemented jitter. Furthermore, this extension of NetEm's functionality also incorporates the ability to combine delay, jitter, and loss, which NetEm does not allow by default; no prior work has been found that used NetEm with such an extension. Focusing on Video on Demand (VoD), it was discovered that reported QoE may differ widely for users of different age groups, and that the most demanding age group (the youngest) can require an order of magnitude lower packet loss probability (PLP) to achieve the same QoE as the most widely studied age group of users. A bottleneck TCP model was then used to evaluate the capacity cost of achieving an order-of-magnitude decrease in PLP, which was found to be (almost always) a 3-fold increase in link capacity. The results are potentially very useful to service providers and network designers seeking to provide a satisfactory service to their customers and, in return, maintain a prosperous business. EPSRC (1589943)
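Custom NetEm distributions are loaded from tables of normalised deviates that the emulator scales by the requested mean and jitter. The sketch below shows the first stage of building such a table from double-exponential (Laplace) samples: generating, centring, and normalising the deviates. The sample count is an illustrative choice, and the final fixed-point quantisation that iproute2's table tooling performs is omitted.

```python
# Sketch: produce sorted, zero-mean, unit-variance deviates from a
# double-exponential (Laplace) distribution, the raw material for a
# custom NetEm delay distribution table. Parameters are illustrative.

import math
import random

def laplace_table(n=4096, seed=7):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # inverse-CDF sampling of a standard Laplace distribution:
        # u ~ Uniform(-0.5, 0.5), x = -sign(u) * ln(1 - 2|u|)
        u = rng.random() - 0.5
        sign = -1.0 if u < 0 else 1.0
        samples.append(-sign * math.log(1 - 2 * abs(u)))
    mean = sum(samples) / n
    sd = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5
    # centre and normalise so NetEm can rescale by the configured jitter
    return sorted((s - mean) / sd for s in samples)
```

The heavy tails of the Laplace distribution, relative to a Gaussian, are what let such a table reproduce the occasional very large delays seen in real Internet traffic.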

    Over-the-top multimedia content delivery: a case study of automatic recordings (Catch-up TV)

    Over-The-Top (OTT) multimedia delivery is a very appealing approach for providing ubiquitous, flexible, and globally accessible services capable of low-cost and unrestrained device targeting. In spite of its appeal, the underlying delivery architecture must be carefully planned and optimized to maintain a high Quality of Experience (QoE) and rational resource usage, especially when migrating from services running on managed networks with established quality guarantees. To address the lack of holistic research on OTT multimedia delivery systems, this thesis focuses on an end-to-end optimization challenge, considering the migration of a popular Catch-up TV service from managed IP Television (IPTV) networks to OTT. A global study is conducted on the importance of Catch-up TV and its impact on today's society, demonstrating the growing popularity of this time-shift service, its relevance in the multimedia landscape, and its fitness as an OTT migration use case. Catch-up TV consumption logs are obtained from a Pay-TV operator's live production IPTV service with over 1 million subscribers, to characterize demand and extract insights from service utilization at a scale and scope not yet addressed in the literature. This characterization is used to build demand-forecasting models relying on machine learning techniques, enabling static and dynamic optimization of OTT multimedia delivery solutions; the models produce accurate bandwidth and storage requirement forecasts and may be used to achieve considerable power and cost savings whilst maintaining a high QoE. A novel caching algorithm, Most Popularly Used (MPU), is proposed, implemented, and shown to outperform established caching algorithms in both simulation and experimental scenarios. The need for accurate QoE measurements in OTT scenarios supporting HTTP Adaptive Streaming (HAS) motivates the creation of a new QoE model capable of taking into account the impact of key HAS aspects. By addressing the complete content delivery pipeline in the envisioned content-aware OTT Content Delivery Network (CDN), this thesis demonstrates that significant improvements are possible in next-generation multimedia delivery solutions.
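A popularity-driven cache in the spirit of the MPU (Most Popularly Used) algorithm named above can be sketched as follows. The exact MPU policy is defined in the thesis and may differ; this sketch only illustrates the general idea of letting request counts, rather than recency, drive admission and eviction.

```python
# Sketch of a popularity-count cache: an item displaces the least popular
# cached item only once its own request count exceeds that item's count.
# This is an illustrative policy, not the thesis's actual MPU algorithm.

from collections import Counter

class PopularityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.popularity = Counter()  # request counts, cached or not
        self.store = set()

    def request(self, item):
        """Record a request; return True on a cache hit."""
        self.popularity[item] += 1
        if item in self.store:
            return True
        if len(self.store) < self.capacity:
            self.store.add(item)
        else:
            victim = min(self.store, key=lambda i: self.popularity[i])
            if self.popularity[item] > self.popularity[victim]:
                self.store.discard(victim)
                self.store.add(item)
        return False
```

For heavy-tailed Catch-up TV demand, such count-based admission keeps the small set of very popular recordings pinned at the edge instead of letting one-off requests churn the cache.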