
    Quality of experience-centric management of adaptive video streaming services: status and challenges

    Video streaming applications currently dominate Internet traffic. In particular, HTTP Adaptive Streaming (HAS) has emerged as the dominant standard for streaming video over the best-effort Internet, thanks to its capability of matching the video quality to the available network resources. In HAS, the video client is equipped with a heuristic that dynamically decides the most suitable quality at which to stream the content, based on information such as the perceived network bandwidth or the video player's buffer status. The goal of this heuristic is to optimize the quality as perceived by the user, the so-called Quality of Experience (QoE). Despite the many advantages brought by the adaptive streaming principle, optimizing users' QoE is far from trivial. Current heuristics are still suboptimal when sudden bandwidth drops occur, especially in wireless environments, leading to freezes in the video playout, the main factor influencing users' QoE. This issue is aggravated in the case of live events, where the player buffer has to be kept as small as possible in order to reduce the playout delay between the user and the live signal. In light of the above, several works have been proposed in recent years with the aim of extending the classical, purely client-based structure of adaptive video streaming in order to fully optimize users' QoE. In this article, we survey research works on this topic and classify them based on where the optimization takes place. This classification goes beyond client-based heuristics to investigate the usage of server- and network-assisted architectures and of new application- and transport-layer protocols. In addition, we outline the major challenges currently arising in the field of multimedia delivery, which will be of great relevance in the coming years.
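
    As a rough illustration of the kind of client-side heuristic discussed above, the sketch below picks the highest representation that fits the measured throughput and steps down when the playout buffer runs low. The bitrate ladder, safety margin and buffer threshold are hypothetical, not taken from any particular player.

        # Minimal sketch of a client-side HAS quality heuristic (illustrative only).
        BITRATES_KBPS = [500, 1200, 2500, 5000, 8000]  # hypothetical bitrate ladder

        def select_quality(throughput_kbps: float, buffer_s: float,
                           safety: float = 0.8, low_buffer_s: float = 5.0) -> int:
            """Return the index of the representation to request next."""
            candidate = 0
            for i, rate in enumerate(BITRATES_KBPS):
                if rate <= throughput_kbps * safety:   # highest rate that still fits
                    candidate = i
            if buffer_s < low_buffer_s and candidate > 0:
                candidate -= 1                         # near-empty buffer: back off one level
            return candidate

        print(select_quality(throughput_kbps=3500, buffer_s=12))  # -> 2 (2500 kbps)
        print(select_quality(throughput_kbps=3500, buffer_s=3))   # -> 1 (1200 kbps)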

    Context-Aware Adaptive Prefetching for DASH Streaming over 5G Networks

    The increasing consumption of video streams and the demand for higher-quality content drive the evolution of telecommunication networks and the development of new network accelerators to boost media delivery while optimizing network usage. Multi-access Edge Computing (MEC) makes it possible to enhance media delivery by deploying caching instances at the network edge, close to the Radio Access Network (RAN). Content can thus be prefetched and served from the MEC host, reducing network traffic and increasing both the Quality of Service (QoS) and the Quality of Experience (QoE). This paper proposes a novel mechanism to prefetch Dynamic Adaptive Streaming over HTTP (DASH) streams at the MEC, employing a Machine Learning (ML) classification model to select the media segments to prefetch. The model is trained with media session metrics to improve the forecasts with application-layer information. The proposal is tested with Mobile Network Operators' (MNOs') 5G MEC and RAN and compared with other strategies by assessing cache and player performance metrics.
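
    A minimal sketch of the general idea, assuming a scikit-learn classifier and made-up session features (throughput, buffer level, last requested quality); the paper's actual model and feature set are not reproduced here.

        # Illustrative ML-driven prefetch decision at the MEC (hypothetical features).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Toy training data: [throughput_kbps, buffer_s, last_quality_index] per request,
        # labelled with the quality index the client requested next.
        X_train = np.array([[2500, 10, 2], [800, 3, 1], [6000, 20, 3], [1200, 5, 1]])
        y_train = np.array([2, 0, 3, 1])

        model = RandomForestClassifier(n_estimators=50, random_state=0)
        model.fit(X_train, y_train)

        def representation_to_prefetch(session_metrics):
            """Predict which representation of the next segment to cache at the edge."""
            return int(model.predict([session_metrics])[0])

        print(representation_to_prefetch([3000, 12, 2]))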

    Enhancing Video Streaming Quality of DASH over Cloud/Edge Integrated Networks

    With the advancement of mobile technologies and the popularity of mobile devices, mobile video streaming applications and services have grown considerably in recent years. Dynamic Adaptive Streaming over HTTP (DASH), or MPEG-DASH, is one of the most widely used video streaming techniques over the Internet. It adapts the video sending bit rate according to the available network resources; however, under low bandwidth DASH performs poorly, causing video quality degradation and stalling. Mobile Edge Computing (MEC), or Multi-access Edge Computing, in connection with the backend cloud, has been used to reduce latency and overcome some of the video quality degradation problems for mobile video streaming services. However, an end user may still suffer video quality drops when moving out of the coverage of one node to another or when mobile network conditions deteriorate. To tackle these degradation problems and assure enhanced video streaming quality, a novel follow-me Edge Node Prefetching (ENP) scheme was proposed and developed in this project, which prefetches video segments in advance at the upcoming node used by the end user. A test bed was set up consisting of a backend cloud (OpenStack), two edge nodes (LXD containers) and one mobile device, and the ENP algorithm was implemented on the cloud server and client sides. Experiments were carried out for the DASH streaming service based on Dash.js from the DASH Industry Forum. Preliminary results show that the ENP scheme can maintain higher video quality and lower service migration time when moving from one mobile node to another, compared to existing approaches, i.e. live migration in Follow-me-Edge and the C-up scheme. The proposed scheme could be useful in smart city applications or for providing seamless mobile video streaming services in cloud/edge integrated networks.
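
    The sketch below illustrates the follow-me prefetching idea in a very simplified form: when a handover to a new edge node is anticipated, the next few segments are pushed to that node's cache. Node names, data structures and the prefetch window are assumptions, not the project's implementation.

        # Simplified follow-me prefetching sketch (hypothetical names and structures).
        from collections import defaultdict

        origin = {f"seg_{i}.m4s": f"<segment {i} bytes>" for i in range(100)}
        edge_caches = defaultdict(dict)  # node_id -> {segment_name: payload}

        def prefetch_on_handover(next_node: str, current_segment: int, window: int = 5):
            """Push the upcoming `window` segments to the edge node the user will join."""
            for i in range(current_segment + 1, current_segment + 1 + window):
                name = f"seg_{i}.m4s"
                if name in origin and name not in edge_caches[next_node]:
                    edge_caches[next_node][name] = origin[name]

        prefetch_on_handover(next_node="edge-B", current_segment=42)
        print(sorted(edge_caches["edge-B"]))  # seg_43.m4s ... seg_47.m4s held at edge-B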

    Provider-Controlled Bandwidth Management for HTTP-based Video Delivery

    Over the past few years, a revolution in video delivery technology has taken place as mobile viewers and over-the-top (OTT) distribution paradigms have significantly changed the landscape of video delivery services. For decades, high-quality video was only available in the home via linear television or physical media. Though Web-based services brought video to desktop and laptop computers, the dominance of proprietary delivery protocols and codecs inhibited research efforts. The recent emergence of HTTP adaptive streaming protocols has prompted a re-evaluation of legacy video delivery paradigms and introduced new questions as to the scalability and manageability of OTT video delivery. This dissertation addresses the question of how to enable content and network service providers to monitor and manage large numbers of HTTP adaptive streaming clients in an OTT environment. Our early work focused on demonstrating the viability of server-side pacing schemes to produce an HTTP-based streaming server. We also investigated the ability of client-side pacing schemes to work with both commodity HTTP servers and our HTTP streaming server. Continuing our client-side pacing research, we developed our own client-side data proxy architecture, which was implemented on a variety of mobile devices and operating systems. We used the portable client architecture as a platform for investigating different rate adaptation schemes and algorithms. We then concentrated on evaluating the network impact of multiple adaptive bitrate clients competing for limited network resources, and on developing schemes for enforcing fair access to network resources. The main contribution of this dissertation is the definition of segment-level client and network techniques for enforcing class-of-service (CoS) differentiation between OTT HTTP adaptive streaming clients. We developed a segment-level network proxy architecture which works transparently with adaptive bitrate clients through the use of segment replacement. We also defined a segment-level rate adaptation algorithm which uses download aborts to enforce CoS differentiation across distributed independent clients. The segment-level abstraction more accurately models application-network interactions and highlights the difference between segment-level and packet-level time scales. Our segment-level CoS enforcement techniques provide a foundation for creating scalable managed OTT video delivery services.
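
    As a rough sketch of the download-abort idea described above (with invented class budgets, not the dissertation's parameters), a client can abort a segment download whose projected completion time exceeds the time budget of its service class and re-request a lower representation:

        # Hedged sketch of CoS enforcement via download aborts (illustrative budgets).
        SEGMENT_DURATION_S = 4.0
        CLASS_BUDGET = {"premium": 1.0, "standard": 0.75}  # fraction of segment duration

        def should_abort(bytes_received: int, segment_bytes: int,
                         elapsed_s: float, service_class: str) -> bool:
            """Abort if the projected download time exceeds the class's time budget."""
            if bytes_received == 0:
                return False
            projected_s = elapsed_s * segment_bytes / bytes_received
            return projected_s > SEGMENT_DURATION_S * CLASS_BUDGET[service_class]

        # A standard-class client on a slow link aborts and switches to a lower bitrate,
        # leaving headroom for premium-class clients sharing the bottleneck.
        print(should_abort(bytes_received=500_000, segment_bytes=2_000_000,
                           elapsed_s=1.2, service_class="standard"))  # True (4.8 s > 3.0 s)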

    QoE-Assured 4K HTTP live streaming via transient segment holding at mobile edge

    HTTP-based live streaming has become increasingly popular in recent years, and more users have started generating 4K live streams from their devices (e.g., mobile phones) through social-media service providers such as Facebook or YouTube. If the audience is located far from a live stream source across the global Internet, TCP throughput becomes substantially suboptimal due to slow-start and congestion control mechanisms. This is especially the case when the end-to-end content delivery path involves a radio access network (RAN) at the last mile. As a result, the data rate perceived by a mobile receiver may not meet the high requirement of 4K video streams, which causes deteriorated Quality of Experience (QoE). In this paper, we propose a scheme named Edge-based Transient Holding of Live sEgment (ETHLE), which addresses this issue by performing context-aware transient holding of video segments at the mobile edge with virtualized content caching capability. By holding the minimum number of live video segments at the mobile edge cache in a context-aware manner, the ETHLE scheme achieves seamless 4K live streaming experiences across the global Internet, eliminating buffering and substantially reducing initial startup delay and live stream latency. It has been deployed as a virtual network function in an LTE-A network, and its performance has been evaluated using real live stream sources distributed around the world. The significance of this paper is that, by leveraging virtualized caching resources at the mobile edge, we address the conventional transport-layer bottleneck and enable QoE-assured Internet-wide live streaming to support emerging live streaming services with high data rate requirements.
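
    A much-simplified sketch of the transient-holding behaviour (hold count and names are assumptions, not the ETHLE implementation): the edge cache keeps fetching live segments from the distant origin and only starts releasing them to the nearby client once a minimum number are held, so later throughput dips can be absorbed without stalling.

        # Illustrative transient segment holding at the edge.
        import collections

        class TransientHolder:
            def __init__(self, hold_count: int = 2):
                self.hold_count = hold_count     # minimum segments kept at the edge
                self.held = collections.deque()  # (segment_name, payload)

            def on_segment_from_origin(self, name: str, payload: bytes):
                self.held.append((name, payload))

            def serve_to_client(self):
                """Release the oldest segment once enough are held to absorb variations."""
                if len(self.held) > self.hold_count:
                    return self.held.popleft()
                return None  # keep holding; adds a small, bounded startup delay

        holder = TransientHolder(hold_count=2)
        for i in range(4):
            holder.on_segment_from_origin(f"live_{i}.m4s", b"...")
            released = holder.serve_to_client()
            print(i, released[0] if released else "holding")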

    QoE-based mobility-aware collaborative video streaming on the edge of 5G

    Today's Internet traffic is dominated by video streaming applications transmitted through the wireless/cellular interfaces of mobile devices. Although ultra-high-definition videos are now easily delivered to mobile devices, the video quality level that users perceive is generally lower than expected due to the distance-induced latency between sources and end users. The mobile edge computing (MEC) paradigm is expected to address this issue and provide users with a higher perceived Quality of Experience (QoE) for latency-critical applications by deploying MEC servers at the edges. However, due to capacity concerns on MEC servers, a more comprehensive approach is needed to meet users' expectations, applying all possible operations over the available resources, such as caching, prefetching, and task offloading policies, depending on data repetition or memory/CPU utilization. To address these issues, this article proposes a novel collaborative QoE-based mobility-aware video streaming scheme deployed at MEC servers. Throughout the article, we demonstrate how the proposed scheme can be implemented so as to preserve the desired QoE level per user during entire video sessions. The performance of the proposed scheme has been investigated through extensive simulations. In comparison to existing schemes, the results illustrate that high efficiency is achieved through collaboration among MEC servers, utilizing explicit window size adaptation, collaborative prefetching, and handover among the edges.
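
    The snippet below is only a toy illustration of the collaboration aspect (latency figures and identifiers are invented): a request is served from the local MEC cache when possible, from a neighbouring MEC that already holds the segment otherwise, and from the remote origin only as a last resort.

        # Toy illustration of collaborative serving among MEC servers (assumed latencies).
        LATENCY_MS = {"local": 5, "neighbor": 15, "origin": 80}

        def resolve_segment(name, local_cache, neighbor_caches):
            """Return (serving_point, latency_ms) for a segment request."""
            if name in local_cache:
                return "local", LATENCY_MS["local"]
            for mec_id, cache in neighbor_caches.items():
                if name in cache:
                    return f"neighbor:{mec_id}", LATENCY_MS["neighbor"]
            return "origin", LATENCY_MS["origin"]

        local = {"seg_10.m4s"}
        neighbors = {"mec-2": {"seg_11.m4s"}, "mec-3": set()}
        for seg in ("seg_10.m4s", "seg_11.m4s", "seg_12.m4s"):
            print(seg, resolve_segment(seg, local, neighbors))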

    Cache-Aware Adaptive Video Streaming in 5G networks

    Dynamic Adaptive Streaming over HTTP (DASH) has prevailed as the dominant way of transmitting video over the Internet. The technology is based on downloading small, sequential video segments from a server. One challenge that has not been adequately examined, however, is obtaining video segments from more than one server in a way that serves both the needs of the network and the improvement of the user's Quality of Experience (QoE). This thesis investigates this problem by simulating a network with multiple video servers and video clients. It then implements both peer-to-many communication in the context of adaptive video streaming and a video server selection algorithm based on proposed criteria intended to improve the status of the network and/or the user. All of this is explored in Mininet, a network emulator, in order to simulate the DASH technology with the help of the emulator's network nodes. Initially, the video was split into small segments using the ffmpeg tool, and then experiments were conducted in which a client requested the video from a cache server. If a segment could not be found at the cache server, a request was sent from the cache server to a server containing all segments of the video (the main server). These experiments also examined the added network traffic, leading to the conclusion that the Mininet environment imposes unavoidable limitations in this respect: we observed that the main server's channel remained inactive throughout the cache server's requests, resulting in unrealistic network conditions. For this reason, we implemented a new approach, eliminating the Mininet environment and working on new techniques for adding network traffic, as well as modifying how the servers communicate with each other. In this way, we were able to show more clearly the limitations of the previous approach and to conclude that cache servers are a useful tool for increasing the user's Quality of Experience. The general tendency observed was that, as the available cache size increased, the video playback quality improved to some extent. At the same time, however, this improvement is closely tied to the segment selection algorithm used; for even better results, it is necessary to find the right balance between cache capacity and the selection algorithm. The thesis is organized as follows: Chapter 1 reviews the historical background of networking. Chapter 2 analyzes Dynamic Adaptive Streaming over HTTP. Chapter 3 analyzes the different caching techniques. Chapter 4 presents the concept of Quality of Experience and its correlation with many other factors. Chapter 5 describes in detail the process of setting up the environment and the various tools necessary for our implementation. Chapter 6 covers the Mininet experiments, the topology and the full set-up, as well as the reasons that led us to a different approach. Chapter 7 proposes the different approach and presents the methodology and the metrics; diagrams extracted from the analysis of these metrics are also discussed there. Finally, Chapter 8 presents the conclusions and topics for future research to further improve the user's Quality of Experience.
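
    A minimal sketch of the cache-server behaviour the thesis experiments with (hypothetical code, not the actual Mininet or Dash.js setup): on a miss, the cache server fetches the segment from the main server, stores it subject to a simple capacity limit, and serves it to the client.

        # Minimal cache-server sketch: fetch from the main server on a miss (FIFO eviction).
        from collections import OrderedDict

        class CacheServer:
            def __init__(self, main_server: dict, capacity: int):
                self.main_server = main_server   # segment_name -> payload
                self.capacity = capacity
                self.store = OrderedDict()

            def get(self, name: str):
                if name in self.store:               # cache hit
                    return self.store[name], "hit"
                payload = self.main_server[name]     # miss: ask the main server
                self.store[name] = payload
                if len(self.store) > self.capacity:  # evict the oldest entry when full
                    self.store.popitem(last=False)
                return payload, "miss"

        main = {f"seg_{i}.m4s": b"..." for i in range(10)}
        cache = CacheServer(main, capacity=3)
        for seg in ("seg_0.m4s", "seg_1.m4s", "seg_0.m4s"):
            print(seg, cache.get(seg)[1])            # miss, miss, hit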

    Over-the-top multimedia content distribution in wireless networks

    Nowadays, the Internet is considered an essential good, due to the constant need to communicate, but also to access and share content. The increasing use of the Internet, together with the increased bandwidth provided by telecommunication operators, has created excellent conditions for the growth of Over-The-Top (OTT) multimedia services, as demonstrated by the success of Netflix and YouTube. OTT services encompass the delivery of video and audio over the Internet without direct control by telecommunication operators, presenting an attractive, low-cost and profitable proposition. Although OTT delivery is appealing, it has some limitations. In order to keep the number of clients growing while maintaining high Quality of Experience (QoE) standards, an enhanced content distribution network architecture is needed, one that can adapt to the different types of content and make careful use of resources. The enhanced architecture needs to provide good quality to the user in an efficient and scalable way, supporting the requirements imposed by current and future mobile networks. This dissertation addresses content distribution in wireless networks through a cache model distributed among the different access points, thereby increasing the effective cache size and decreasing the traffic towards the servers or caches of the aggregation layer above, which allows greater scalability and more available bandwidth at the upstream servers. The distributed cache model was tested in three scenarios: the consumer is at home with what is considered fixed access; the consumer moves between several access points in the street; and the consumer is on a high-speed train. Several solutions, such as Redis2, Cachelot and Memcached, were evaluated to serve as caches, and several proxies were assessed against the required features. Moreover, two content distribution algorithms were tested, namely Consistent and Rendezvous Hashing. This dissertation also integrated an existing prefetching proposal, which consists of placing content in the caches before it is requested by the consumers. In the end, it was verified that the distributed model integrated with prefetching improved the consumers' QoE and reduced the load on the upstream servers.
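
    As an illustration of one of the two placement algorithms evaluated, the sketch below shows rendezvous (highest-random-weight) hashing: every access-point cache gets a score per content name and the highest score wins, so adding or removing a node only remaps the items that node would own. Node names are assumptions.

        # Illustrative rendezvous (HRW) hashing for picking the responsible cache node.
        import hashlib

        def rendezvous_node(content_name: str, nodes):
            """Pick the cache node responsible for `content_name`."""
            def score(node):
                digest = hashlib.sha256(f"{node}:{content_name}".encode()).hexdigest()
                return int(digest, 16)
            return max(nodes, key=score)

        access_points = ["ap-1", "ap-2", "ap-3"]
        for name in ("movie/seg_1.m4s", "movie/seg_2.m4s"):
            print(name, "->", rendezvous_node(name, access_points))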