133 research outputs found

    Investigating the Effects of Network Dynamics on Quality of Delivery Prediction and Monitoring for Video Delivery Networks

    Video streaming over the Internet requires an optimized delivery system given advances in network architecture such as Software Defined Networks. Machine Learning (ML) models have been deployed to predict the quality of video streams. Some of these efforts have considered the prediction of Quality of Delivery (QoD) metrics, which measure the quality of the video stream from the network perspective. In most cases, these models have either treated the ML algorithms as black boxes or failed to capture the network dynamics of the associated video streams. This PhD thesis investigates the effects of network dynamics on QoD prediction using ML techniques. The hypothesis investigated is that ML techniques that model the underlying network dynamics achieve accurate QoD and video quality predictions and measurements. The results demonstrate that the proposed techniques offer performance gains over approaches that fail to consider network dynamics, and highlight that modelling the dynamics of the network infrastructure is crucial to the accuracy of the ML predictions. These results are significant because the improved performance is achieved at no additional computational or storage cost. These techniques can help network managers, data center operators and video service providers take proactive and corrective actions for improved network efficiency and effectiveness.
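    As a rough, hypothetical illustration of the modelling choice this abstract argues for, the Python sketch below predicts a QoD metric using both snapshot features and rolling-window features that capture recent network dynamics. The feature names, the synthetic data, and the window size are our assumptions, not details from the thesis.

```python
# Minimal sketch (assumed feature names and synthetic data, not the thesis's
# setup): predict a QoD metric from network measurements, adding rolling
# statistics that capture recent network dynamics alongside the snapshot.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "throughput_mbps": rng.gamma(4.0, 2.0, n),
    "rtt_ms": rng.normal(40, 10, n).clip(5),
})
# Synthetic QoD target that depends on recent history, not just the snapshot.
df["qod"] = 0.7 * df["throughput_mbps"].rolling(5).mean() - 0.05 * df["rtt_ms"]

features = ["throughput_mbps", "rtt_ms"]            # static snapshot
for col in features[:]:
    df[f"{col}_mean5"] = df[col].rolling(5).mean()  # recent trend
    df[f"{col}_std5"] = df[col].rolling(5).std()    # recent variability
    features += [f"{col}_mean5", f"{col}_std5"]

df = df.dropna()
split = int(0.8 * len(df))                          # time-ordered split
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(df[features][:split], df["qod"][:split])
pred = model.predict(df[features][split:])
print("MAE:", mean_absolute_error(df["qod"][split:], pred))
```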

    Quality of service and dependability of cellular vehicular communication networks

    Improving the dependability of mobile network applications is a complicated task for many reasons. Especially in Germany, the development of cellular infrastructure has not always kept pace with growing demand, resulting in many blind spots that cause communication outages. However, even where infrastructure is available, the mobility of the users still poses a major challenge to the dependability of applications: as the user moves, the capacity of the channel can change substantially. This can mean that applications like adjustable-bitrate video streaming cannot infer future performance from past download rates, as these only carry old information about the data rate at a different location. In this work, we explore the use of 4G LTE for dependable communication in mobile vehicular scenarios.

    We first look at the performance of LTE, especially in mobile environments, and how it has developed over time. We compare measurements performed several years apart and look at performance differences between urban and rural areas. We find that even though the continued development of the 4G standard has enabled better performance in theory, this has not always been reflected in real-life performance due to the slow development of infrastructure, especially along highways. We also explore the possibility of performance prediction in LTE networks without the need for active measurements. For this, we look at the relationship between the measured signal quality and the achievable data rates and latencies. We find that while there is a strong correlation between some of the signal quality indicators and the achievable data rates, the relationship between them is stochastic: a higher signal quality makes better performance more probable but does not guarantee it. We then use our empirical measurement results as the basis for a model that uses signal quality measurements to predict a throughput distribution. The resulting estimate of the obtainable throughput can then be used by adjustable-bitrate applications like video streaming to improve their dependability.

    Mobile networks also confront TCP congestion control algorithms with a new challenge. Usually, senders use TCP congestion control to avoid congesting the network by sending too many packets, and to divide the network bandwidth fairly. This is already a challenging task, since the number of senders in the network is unknown and the network load can change at any time. In mobile vehicular networks, TCP congestion control faces the additional problem of a constantly changing capacity: as users change their location, the quality of the channel changes with them, and the capacity of the channel can drop drastically even for very small changes in location. Additionally, in our measurements we observed that packet losses only rarely occur (packets are instead delayed and retransmitted), meaning that loss-based algorithms like Reno or CUBIC can be at a significant disadvantage. In this thesis, we compare several popular congestion control algorithms in both stationary and mobile scenarios. We find that many loss-based algorithms tend to cause bufferbloat and thus increase delays excessively, while many delay-based algorithms tend to underestimate the network capacity and thus achieve data rates that are too low.
    The algorithm that performed best in our measurements was TCP BBR: it utilized the full capacity of the channel without causing bufferbloat and reacted to changes in capacity by adjusting its window. However, since TCP BBR can be unfair towards other algorithms in wired networks, its use could be problematic. Finally, we propose how our data rate prediction model can be used to improve the dependability of mobile video streaming. For this, we develop an adaptive bitrate streaming algorithm that guarantees that the video freeze probability does not exceed a pre-selected upper threshold. For the algorithm to work, it needs to know the distribution of obtainable throughput. We verify the algorithm in a simulation, using a distribution obtained through the previously proposed data rate prediction model. In our simulation, the algorithm limited the video freeze probability as intended, but at the cost of frequent video bitrate switches, which can diminish the quality of user experience. In future work, we want to explore algorithms that offer a trade-off between the video freeze probability and the frequency of bitrate switches.
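    Two mechanisms stand out in this abstract: estimating a conditional throughput distribution from signal quality measurements, and selecting the highest bitrate whose freeze probability stays under a chosen bound. The Python sketch below is a hedged reconstruction of that idea, not the thesis's actual algorithm; the RSRQ feature, the binning scheme, and all numbers are assumptions.

```python
# Hypothetical sketch of the two ideas described above:
# (1) build an empirical conditional throughput distribution from past
#     (signal quality, throughput) samples by binning the signal quality;
# (2) pick the highest bitrate whose probability of exceeding the
#     obtainable throughput (a proxy for a freeze) stays below eps.
import numpy as np

def conditional_throughput(samples, rsrq_bins):
    """samples: list of (rsrq_db, throughput_mbps); returns bin -> sorted throughputs."""
    table = {i: [] for i in range(len(rsrq_bins) - 1)}
    for rsrq, tput in samples:
        i = np.searchsorted(rsrq_bins, rsrq) - 1
        if 0 <= i < len(rsrq_bins) - 1:
            table[i].append(tput)
    return {i: np.sort(v) for i, v in table.items() if v}

def select_bitrate(dist, rsrq, rsrq_bins, bitrates, eps=0.05):
    """Highest bitrate with P(throughput < bitrate) <= eps, else the lowest one."""
    i = int(np.clip(np.searchsorted(rsrq_bins, rsrq) - 1, 0, len(rsrq_bins) - 2))
    tputs = dist.get(i)
    if tputs is None:
        return min(bitrates)  # no data for this signal quality: be conservative
    best = min(bitrates)
    for b in sorted(bitrates):
        p_freeze = np.searchsorted(tputs, b) / len(tputs)  # empirical CDF at b
        if p_freeze <= eps:
            best = b
    return best

rsrq_bins = np.array([-20, -15, -10, -5, 0])  # dB, assumed bin edges
samples = [(-12, 8.0), (-12, 5.5), (-7, 20.0), (-7, 14.0), (-7, 18.0)]
dist = conditional_throughput(samples, rsrq_bins)
print(select_bitrate(dist, rsrq=-7, rsrq_bins=rsrq_bins, bitrates=[1, 4, 8, 16]))
```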

    Fair and Scalable Orchestration of Network and Compute Resources for Virtual Edge Services

    The combination of service virtualization and edge computing allows for low-latency services while keeping data storage and processing local. However, given the limited resources available at the edge, a conflict in resource usage arises when both virtualized user applications and network functions need to be supported. Further, the concurrent resource requests of user applications and network functions are often entangled, since the data generated by the former has to be transferred by the latter, and vice versa. In this paper, we first show through experimental tests the correlation between a video-based application and a vRAN. Then, owing to the complex dynamics involved, we develop a scalable reinforcement learning framework for resource orchestration at the edge, which leverages a Pareto analysis for provably fair and efficient decisions. We validate our framework, named VERA, through a real-time proof-of-concept implementation, which we also use to obtain datasets reporting real-world operational conditions and performance. Using such experimental datasets, we demonstrate that VERA meets the KPI targets for over 96% of the observation period and performs similarly when executed in our real-time implementation, with KPI differences below 12.4%. Further, its scaling cost is 54% lower than that of a centralized framework based on deep Q-networks.
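    The abstract does not detail how VERA's Pareto analysis works, but the general technique of pruning dominated resource allocations is easy to illustrate. A minimal, hypothetical Python sketch under assumed KPIs and candidate splits (the numbers are made up):

```python
# Hypothetical sketch of a Pareto filter over candidate resource allocations.
# Each candidate scores several KPIs (higher is better, e.g. application
# throughput and vRAN processing headroom). A candidate is dominated if
# another is at least as good on every KPI and strictly better on one;
# a learning agent would then only choose among the non-dominated
# (Pareto-efficient) candidates.
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (app KPI, vRAN KPI) per candidate CPU split -- made-up numbers.
candidates = [(0.9, 0.3), (0.7, 0.7), (0.4, 0.9), (0.6, 0.6), (0.3, 0.2)]
print(pareto_front(candidates))  # -> [(0.9, 0.3), (0.7, 0.7), (0.4, 0.9)]
```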

    Architectures and Algorithms for Content Delivery in Future Networks

    Content Delivery Networks (CDNs) built with traditional Internet technology are less and less able to cope with today’s tremendous content growth. Enhancing infrastructures with storage and computation capabilities may help remedy the situation. Information-Centric Networks (ICNs), a proposed future Internet technology, decouple information from its sources and, unlike the current Internet, provide in-network storage. However, content delivery over in-network storage-enabled networks still faces significant issues, such as the stability and accuracy of the estimated bitrate when using Dynamic Adaptive Streaming over HTTP (DASH). Still, implementing new infrastructures with in-network storage can lead to other challenges. For instance, the extensive deployment of such networks will require a significant upgrade of the installed IP infrastructure. Furthermore, network slicing enables services and applications with very different characteristics to co-exist on the same network infrastructure. Another challenge is that traditional architectures cannot meet future expectations for streaming in terms of latency and network load when it comes to content such as 360° videos and immersive services. In-Network Computing (INC), also known as Computing in the Network (COIN), allows computation tasks to be distributed across the network instead of being executed on servers, and is expected to provide lower latency, lower network traffic, and higher throughput. Infrastructures with in-network computing can thus help fulfill the specific requirements of future 360° video streaming, so the delivery of 360° video and immersive services can benefit from INC. This thesis elaborates and addresses the key architectural and algorithmic research challenges related to content delivery in future networks. To tackle the first challenge, we propose algorithms that address the inaccuracy of rate estimation for future CDN implementations with in-network storage (a key feature of future networks). For the second challenge, we propose an algorithm for implementing in-network storage in IP settings for CDNs. Finally, for the third challenge, we propose an architecture for provisioning INC-enabled slices for 360° video streaming in next-generation networks. We consider a P4-enabled Software-Defined Network (SDN) as the physical infrastructure and significantly reduce latency and traffic load for video streaming.
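    The abstract does not spell out its rate-estimation algorithms, but the instability problem it refers to is easy to see in a standard DASH client estimator. Below is a small, hypothetical Python sketch comparing a last-sample estimate with a harmonic-mean smoother (a common DASH technique, not necessarily this thesis's proposal); the per-segment rates are made up.

```python
# Hypothetical sketch: smoothing DASH throughput estimates. With in-network
# storage, consecutive segments may be served from different caches, so
# per-segment rates jump around; a harmonic mean over recent segments is a
# common way to stabilize the estimate (it is dominated by the slow samples,
# which keeps the bitrate choice conservative).
def harmonic_mean_estimate(rates_mbps, window=5):
    recent = rates_mbps[-window:]
    return len(recent) / sum(1.0 / r for r in recent)

# Made-up per-segment download rates: edge-cache hits (fast) mixed with
# origin fetches (slow).
rates = [25.0, 24.0, 3.0, 26.0, 4.0, 25.0]
print("last sample  :", rates[-1])                 # 25.0 -- overly optimistic
print("harmonic mean:", round(harmonic_mean_estimate(rates), 2))  # ~7.11
```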

    Machine Learning-Powered Management Architectures for Edge Services in 5G Networks

    The abstract is provided in the attachment.

    A Survey of Machine Learning Techniques for Video Quality Prediction from Quality of Delivery Metrics

    A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers, who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of Delivery (QoD) measurements, which capture the end-to-end performance of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey reviews studies that focus on ML techniques for predicting QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches for predicting QoD for video are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting the video quality in HAS applications, (4) predicting the video quality in SDN applications, (5) predicting the video quality in wireless settings, and (6) predicting the video quality in WebRTC applications. Throughout the survey, research challenges and directions in this area are discussed, including (1) machine learning over deep learning; (2) adaptive deep learning for improved video delivery; (3) computational cost and interpretability; and (4) self-healing networks and failure recovery. The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. This family of algorithms has considerable potential because its members are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.

    Adaptive Streaming: From Bitrate Maximization to Rate-Distortion Optimization

    The fundamental conflict between the increasing consumer demand for better Quality-of-Experience (QoE) and the limited supply of network resources has become a significant challenge for modern video delivery systems. State-of-the-art adaptive bitrate (ABR) streaming algorithms are dedicated to draining the available bandwidth in the hope of improving viewers' QoE, resulting in inefficient use of network resources. In this thesis, we develop an alternative design paradigm, namely rate-distortion optimized streaming (RDOS), to balance the contrasting demands of video consumers and service providers. Distinct from the traditional bitrate maximization paradigm, RDOS can operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. The new paradigm finds plausible explanations in information theory, economics, and visual perception. To instantiate the new philosophy, we decompose adaptive streaming algorithms into three mutually independent components: throughput predictor, reward function, and bitrate selector. We provide a unified framework for understanding the connections among all existing ABR algorithms; the new perspective also illustrates the fundamental limitations of each algorithm by examining its underlying assumptions. Based on these insights, we propose novel improvements to each of the three functional components. To alleviate a series of unrealistic assumptions behind bitrate-based QoE models, we develop a theoretically grounded objective QoE model that combines information from subject-rated streaming videos with prior knowledge about the human visual system (HVS) in a principled way. By analyzing a corpus of psychophysical experiments, we show that QoE function estimation can be formulated as a projection-onto-convex-sets problem. The proposed model presents strong generalization capability over a broad range of source contents, video encoders, and viewing conditions. Most importantly, the QoE model disentangles bitrate from quality, making it an ideal component in the RDOS framework. In contrast to existing throughput estimators that approximate the marginal probability distribution over all connections, we optimize the throughput predictor conditioned on each client. Although training data are scarce for each individual Internet Protocol connection, we can leverage the latest advances in meta learning to incorporate the knowledge embedded in similar tasks. With a deliberately designed objective function, the algorithm learns to identify similar structures among different network characteristics from millions of realistic throughput traces. During the test phase, the model can quickly adapt to connection-level network characteristics with only a small amount of data from novel streaming video clients and a small number of gradient steps. The enormous space of streaming videos, constantly progressing encoding schemes, and great diversity of throughput characteristics make it extremely challenging for modern data-driven bitrate selectors, trained with limited samples, to generalize well. To this end, we propose a Bayesian bitrate selection algorithm that adaptively fuses an online, robust, short-term optimal controller with an offline, susceptible, long-term optimal planner. Depending on the reliability of the two controllers in a given system state, the algorithm dynamically prioritizes one of the two decision rules to obtain the optimal decision.
    To faithfully evaluate the performance of RDOS, we construct a large-scale streaming video dataset, the Waterloo Streaming Video database. It contains a wide variety of high-quality source contents, encoders, encoding profiles, realistic throughput traces, and viewing devices. Extensive objective evaluation demonstrates that the proposed algorithm can deliver QoE identical to state-of-the-art ABR algorithms at a much lower cost. The improvement is also supported by the largest subjective video quality assessment experiment to date.
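    To make the operating-point idea concrete: instead of maximizing bitrate, an RDOS-style selector can minimize a Lagrangian cost D + λR, where the trade-off parameter λ picks the point on the rate-distortion curve. The following Python sketch is illustrative only; the distortion values are a made-up proxy, not the thesis's QoE model.

```python
# Hypothetical sketch of rate-distortion optimized bitrate selection.
# Each candidate ladder rung has a rate R (Mbps) and a distortion D
# (lower is better; made-up numbers, not the thesis's QoE model).
# Bitrate maximization would always take the top feasible rung; RDOS
# instead minimizes D + lambda * R, so the trade-off parameter lambda
# selects the operating point on the rate-distortion curve.
ladder = [(1.0, 9.0), (2.5, 5.0), (5.0, 2.5), (8.0, 1.8), (16.0, 1.5)]  # (R, D)

def rdos_select(ladder, lam, predicted_throughput):
    feasible = [(r, d) for r, d in ladder if r <= predicted_throughput]
    return min(feasible, key=lambda rd: rd[1] + lam * rd[0])

print(rdos_select(ladder, lam=0.0, predicted_throughput=20.0))  # (16.0, 1.5): pure quality
print(rdos_select(ladder, lam=0.5, predicted_throughput=20.0))  # (5.0, 2.5): balanced
```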

    A REVIEW STUDY OF EUROPEAN R&D PROJECTS FOR SATELLITE COMMUNICATIONS IN 5G/6G ERA

    Over the last decades, satellite telecommunication systems have offered a range of multimedia services such as satellite TV, satellite telephony, and broadband Internet access. Long-term technological upgrades, together with the addition of new high-throughput geostationary and non-geostationary satellite systems and the integration of computer science technologies, have pushed the peak bandwidth of individual satellites to around 1 Gbps, while satellite constellations can exceed 1 Tbps in total capacity. Combined with latencies that have become competitive with terrestrial infrastructures, this opens up new opportunities and new roles within a heterogeneous 5G network ecosystem.
    In this thesis, we study Research and Development (R&D) projects funded by the European Space Agency (ESA) and the European Union's (EU) Horizon 2020 programme in order to describe the capabilities of satellites within a heterogeneous 5G network. We discuss the evolution of digital satellite communication systems and their ability to integrate with current and future terrestrial telecommunication infrastructures, enabled by new technologies in electronic and free-space optical communications alongside computer science technologies such as Software Defined Networking (SDN) and Network Function Virtualization (NFV). We present the goals of each project individually, categorized chronologically and by the following fields of research:
    - Satellite integration studies and strategies for terrestrial 5G networks
    - Integration of SDN and NFV technologies in the satellite component of 5G networks
    - The role of satellites in Internet of Things applications in conjunction with terrestrial 5G networks
    - The role of satellites in content distribution networks, and the impact of Internet protocols on the user's Quality of Experience (QoE) over a satellite link
    - Future improvements and applications of satellite systems, with emphasis on upcoming physical-layer standards
    Finally, we provide an annex with technical analyses of the evolution of the physical layer of satellite systems, together with the corresponding bibliography for further study.