
    An HTTP/2 push-based approach for low-latency live streaming with super-short segments

    Over the last few years, streaming of multimedia content has become more prominent than ever. To meet increasing user requirements, the concept of HTTP Adaptive Streaming (HAS) has recently been introduced. In HAS, video content is temporally divided into multiple segments, each encoded at several quality levels. A rate adaptation heuristic selects the quality level for every segment, allowing the client to take into account the observed available bandwidth and the buffer filling level when deciding on the most appropriate quality level for every new video segment. Despite the ability of HAS to deal with changing network conditions, a low average quality and a large camera-to-display delay are often observed in live streaming scenarios. Meanwhile, the HTTP/2 protocol was standardized in February 2015, providing new features that target a reduction of the page loading time in web browsing. In this paper, we propose a novel push-based approach for HAS, in which HTTP/2's push feature is used to actively push segments from server to client. Using this approach with video segments of sub-second duration, referred to as super-short segments, it is possible to reduce the startup time and end-to-end delay in HAS live streaming. Evaluation of the proposed approach, through emulation of a multi-client scenario with highly variable bandwidth and latency, shows that the startup time can be reduced by 31.2% compared to traditional solutions over HTTP/1.1 in mobile, high-latency networks. Furthermore, the end-to-end delay in live streaming scenarios can be reduced by 4 s, while providing the content at similar video quality.
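    The abstract describes the client-side rate adaptation heuristic only in general terms (a quality level chosen from the observed bandwidth and the buffer filling level). The sketch below is a minimal, hypothetical illustration of such a heuristic, not the paper's algorithm; the bitrate ladder, safety margin, and low-buffer threshold are assumptions.

```python
# Minimal, hypothetical sketch of a client-side rate adaptation heuristic:
# pick a quality level from the observed throughput and the current buffer
# filling level. Not the algorithm proposed in the paper.

BITRATES_KBPS = [300, 750, 1500, 3000, 6000]  # assumed quality ladder

def select_quality(throughput_kbps: float, buffer_s: float,
                   safety: float = 0.8, low_buffer_s: float = 1.0) -> int:
    """Return the index of the quality level to request next."""
    # When the buffer is nearly empty, play it safe with the lowest quality.
    if buffer_s < low_buffer_s:
        return 0
    # Otherwise choose the highest bitrate that fits within a safety
    # margin of the estimated available bandwidth.
    budget = safety * throughput_kbps
    candidates = [i for i, b in enumerate(BITRATES_KBPS) if b <= budget]
    return max(candidates) if candidates else 0

# Example: 2.4 Mbps estimated throughput
print(select_quality(2400, 0.8))  # -> 0 (buffer too low, stay conservative)
print(select_quality(2400, 3.0))  # -> 2 (1500 kbps fits within 80% of 2400)
```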

    A machine learning-based framework for preventing video freezes in HTTP adaptive streaming

    HTTP Adaptive Streaming (HAS) represents the dominant technology to deliver videos over the Internet, due to its ability to adapt the video quality to the available bandwidth. Despite that, HAS clients can still suffer from freezes in the video playout, the main factor influencing users' Quality of Experience (QoE). To reduce video freezes, we propose a network-based framework, where a network controller prioritizes the delivery of particular video segments to prevent freezes at the clients. This framework is based on OpenFlow, a widely adopted protocol to implement the software-defined networking principle. The main element of the controller is a Machine Learning (ML) engine based on the random undersampling boosting algorithm and fuzzy logic, which can detect when a client is close to a freeze and drive the network prioritization to avoid it. This decision is based on measurements collected from the network nodes only, without any knowledge on the streamed videos or on the clients' characteristics. In this paper, we detail the design of the proposed ML-based framework and compare its performance with other benchmarking HAS solutions, under various video streaming scenarios. Particularly, we show through extensive experimentation that the proposed approach can reduce video freezes and freeze time with about 65% and 45% respectively, when compared to benchmarking algorithms. These results represent a major improvement for the QoE of the users watching multimedia content online
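    To make the controller's role concrete, the sketch below shows a simplified, hypothetical version of the control loop: estimate each client's buffer from network-level measurements only and prioritize the segment flows of clients judged close to a freeze. The paper's engine combines random undersampling boosting and fuzzy logic; here a plain threshold check stands in for it, and all names, field values, and the 2 s segment duration are assumptions.

```python
# Simplified, hypothetical stand-in for the network-side freeze-prevention
# logic: flag clients likely to freeze and prioritize their segment delivery.

from dataclasses import dataclass

@dataclass
class ClientStats:
    client_id: str
    est_buffer_s: float    # buffer estimated from observed segment requests
    measured_kbps: float   # throughput measured at the network node

def freeze_risk(stats: ClientStats, segment_kbps: float) -> bool:
    """Flag a client whose estimated buffer will not cover the next download."""
    download_s = segment_kbps * 2.0 / max(stats.measured_kbps, 1.0)  # 2 s segments assumed
    return stats.est_buffer_s < download_s

def select_prioritized(clients: list[ClientStats], segment_kbps: float) -> list[str]:
    """Return the clients whose segment delivery should be prioritized."""
    return [c.client_id for c in clients if freeze_risk(c, segment_kbps)]

clients = [ClientStats("a", est_buffer_s=4.0, measured_kbps=5000),
           ClientStats("b", est_buffer_s=1.2, measured_kbps=1800)]
print(select_prioritized(clients, segment_kbps=3000))  # -> ['b']
```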

    Dynamic adaptive video streaming with minimal buffer sizes

    Recently, adaptive streaming has been widely adopted in video streaming services to improve the Quality-of-Experience (QoE) of video delivery over the Internet. However, state-of-the-art bitrate adaptation achieves satisfactory performance only with extensive buffering of several tens of seconds. This leads to high playback latency in video delivery, which is undesirable especially in the context of live content with a low upper bound on the latency. Therefore, this thesis aims at pushing the application of adaptive streaming to its limit with respect to the buffer size, which is the dominant factor in streaming latency. In this work, we first address the minimum buffer size required in adaptive streaming, which provides us with guidelines for determining a reasonably low latency for streaming systems. Then, we tackle the fundamental challenge of achieving such low-latency streaming by developing a novel adaptation algorithm that stabilizes buffer dynamics despite a small buffer size. We also present further improvements by designing a novel adaptation architecture with low-delay feedback for the bitrate selection and by optimizing the underlying transport layer to offer efficient real-time streaming. Experimental evaluations demonstrate that our approach achieves superior QoE in adaptive video streaming, especially in the particularly challenging case of low-latency streaming.
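    The thesis's adaptation algorithm is not reproduced in the abstract; as a rough illustration of what "stabilizing buffer dynamics with a small buffer" can mean, the sketch below picks the highest bitrate whose expected download keeps the post-download buffer at or above a small target. It is a minimal sketch under assumed parameters (bitrate ladder, 1 s segments, 2 s target buffer), not the thesis's method.

```python
# Minimal sketch of a buffer-stabilizing bitrate choice for low-latency
# streaming (illustrative only, not the thesis's algorithm).

BITRATES_KBPS = [300, 750, 1500, 3000, 6000]   # assumed quality ladder
SEGMENT_S = 1.0                                 # assumed segment duration
TARGET_BUFFER_S = 2.0                           # assumed small target buffer

def stabilizing_bitrate(throughput_kbps: float, buffer_s: float) -> int:
    """Highest bitrate that keeps the buffer at or above TARGET_BUFFER_S."""
    # Buffer after fetching one segment at bitrate r:
    #   buffer_s - (r * SEGMENT_S / throughput_kbps) + SEGMENT_S
    # Require this to stay >= TARGET_BUFFER_S and solve for r.
    max_kbps = throughput_kbps * (buffer_s + SEGMENT_S - TARGET_BUFFER_S) / SEGMENT_S
    feasible = [b for b in BITRATES_KBPS if b <= max_kbps]
    return max(feasible) if feasible else BITRATES_KBPS[0]

print(stabilizing_bitrate(4000, 2.0))  # -> 3000: buffer is held at the target
print(stabilizing_bitrate(4000, 1.5))  # -> 1500: build the buffer back up first
```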

    CLEVER: a cooperative and cross-layer approach to video streaming in HetNets

    We investigate the problem of providing a video streaming service to mobile users in a heterogeneous cellular network composed of micro e-NodeBs (eNBs) and macro e-NodeBs (MeNBs). More specifically, we target a cross-layer dynamic allocation of the bandwidth resources available over a set of eNBs and one MeNB, with the goal of reducing the per-chunk delay experienced by users. After optimally formulating the problem of minimizing the chunk delay, we detail the Cross LayEr Video stReaming (CLEVER) algorithm to tackle it in practice. CLEVER makes allocation decisions on the basis of information retrieved from the application layer as well as from lower layers. Results, obtained over two representative case studies, show that CLEVER is able to limit the chunk delay, while also reducing the amount of bandwidth reserved for offloaded users on the MeNB, as well as the number of offloaded users. In addition, we show that CLEVER clearly outperforms two selected reference algorithms, while remaining very close to a best bound. Finally, we show that our solution is able to achieve high fairness indexes and good levels of Quality of Experience (QoE).
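    As a toy illustration of the kind of decision such a cross-layer allocator faces (and explicitly not the CLEVER algorithm), the sketch below greedily routes each user's next chunk over whichever node, its serving eNB or the shared MeNB, gives the lower estimated chunk delay, while drawing down a shared MeNB bandwidth budget. All names and rate figures are assumptions for the example.

```python
# Illustrative greedy chunk-to-node assignment; not the CLEVER algorithm.

def chunk_delay_s(chunk_kbit: float, rate_kbps: float) -> float:
    """Transmission delay of one chunk at the given rate."""
    return chunk_kbit / max(rate_kbps, 1e-6)

def greedy_allocation(users, menb_budget_kbps):
    """users: list of (user_id, chunk_kbit, enb_rate_kbps, menb_rate_kbps)."""
    decisions = {}
    remaining = menb_budget_kbps
    for user_id, chunk_kbit, enb_rate, menb_rate in users:
        menb_rate = min(menb_rate, remaining)  # respect the shared MeNB budget
        if chunk_delay_s(chunk_kbit, menb_rate) < chunk_delay_s(chunk_kbit, enb_rate):
            decisions[user_id] = "MeNB"        # offload: the macro cell is faster
            remaining -= menb_rate
        else:
            decisions[user_id] = "eNB"         # stay on the local micro cell
    return decisions

users = [("u1", 4000, 2000, 8000),   # congested eNB: offload to the MeNB
         ("u2", 4000, 6000, 8000)]   # healthy eNB: stay local
print(greedy_allocation(users, menb_budget_kbps=10000))
# -> {'u1': 'MeNB', 'u2': 'eNB'}
```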