17,079 research outputs found

    Design and evaluation of a DASH-compliant second screen video player for live events in mobile scenarios

    The huge diffusion of mobile devices is rapidly changing the way multimedia content is consumed. Mobile devices are often used as a second screen, providing information complementary to the content shown on the primary screen, such as different camera angles during a sports event. The introduction of multiple camera angles poses many challenges with respect to guaranteeing a high Quality of Experience (QoE) to the end user, especially when the live aspect, heterogeneous devices and the highly variable network conditions typical of mobile environments come into play. Because HTTP Adaptive Streaming (HAS) protocols can dynamically adapt to bandwidth fluctuations, they are especially suited for the delivery of multimedia content in mobile environments. In HAS, each video is temporally segmented and stored in different quality levels. Rate adaptation heuristics, deployed at the video player, dynamically request the most appropriate quality level based on the current network conditions. Recently, a standardized solution called Dynamic Adaptive Streaming over HTTP (DASH) has been proposed by the MPEG consortium. In this paper we present a DASH-compliant iOS video player designed to support research on rate adaptation heuristics for live second screen scenarios in mobile environments. The video player monitors the battery consumption and CPU usage of the mobile device and provides this information to the heuristic. Live and Video-on-Demand streaming scenarios, as well as real-time multi-video switching, are supported. Quantitative results based on real 3G traces report how the developed prototype has been used to benchmark two existing heuristics and to analyse the main aspects affecting battery lifetime in mobile video streaming.
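
    As an illustration of the kind of heuristic such a player can host, the sketch below (in Python rather than the player's native iOS code) picks a quality level from a measured throughput and becomes more conservative when the battery is low; the bitrate ladder, the safety margin and the battery threshold are assumed values, not the heuristics benchmarked in the paper.

        # Minimal throughput-based rate adaptation sketch. The quality ladder,
        # the 0.8 safety margin and the 20% battery threshold are assumptions
        # for illustration, not the heuristics evaluated in the paper.
        QUALITY_LEVELS_KBPS = [350, 700, 1500, 3000]   # assumed bitrate ladder

        def select_quality(measured_throughput_kbps, battery_level=1.0):
            """Return the index of the highest sustainable quality level."""
            budget = measured_throughput_kbps * 0.8    # keep a safety margin
            if battery_level < 0.2:                    # low battery: save energy
                budget *= 0.5
            viable = [i for i, b in enumerate(QUALITY_LEVELS_KBPS) if b <= budget]
            return viable[-1] if viable else 0         # fall back to the lowest level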

    Localized Application for Video Capture for a Multimedia Sensor Node with Name-Based Segment Streaming

    The Internet of Things (IoT) has become a more pervasive part of everyday life. IoT networks, such as wireless sensor networks, depend greatly on limiting unnecessary power consumption. As such, providing low-power, adaptable software can greatly improve network design. For streaming live video content, Wireless Video Sensor Network Platform compatible Dynamic Adaptive Streaming over HTTP (WVSNP-DASH) aims to revolutionize wireless segmented video streaming by providing a low-power, adaptable framework that can compete with modern adaptive streaming frameworks such as Moving Picture Experts Group DASH (MPEG-DASH) and Apple’s HTTP Live Streaming (HLS). Each segment is independently playable and does not depend on a manifest file, resulting in greatly improved power performance. My work was to show that WVSNP-DASH is capable of further power savings at the level of the wireless sensor node itself if a native capture program is implemented at the camera sensor node. I created a native capture program in the C language that fulfills the name-based segmentation requirements of WVSNP-DASH. I present this program with the intent to measure its power consumption on a hardware test-bed in the future. To my knowledge, this is the first program to generate WVSNP-DASH playable video segments. The results show that the program could be utilized by WVSNP-DASH, but there are efficiency issues, so an outline of further improvements is also provided.
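
    To make the idea of name-based segmentation concrete, the sketch below writes segments whose file names carry the information a player would otherwise read from a manifest; the naming fields are a hypothetical illustration, not the actual WVSNP-DASH naming convention, and the sketch is in Python rather than the thesis's C implementation.

        # Hypothetical name-based segmenter. The name layout below is an assumed
        # illustration of the general idea and does not reproduce the WVSNP-DASH
        # naming convention implemented in the thesis.
        import os

        def segment_name(stream_id, index, duration_s, quality):
            # Encode stream, position, duration and quality in the file name,
            # so a player needs no separate manifest to locate segments.
            return f"{stream_id}_seg{index:05d}_{duration_s}s_{quality}.mp4"

        def write_segment(payload, stream_id, index, duration_s, quality, out_dir="."):
            path = os.path.join(out_dir, segment_name(stream_id, index, duration_s, quality))
            with open(path, "wb") as f:
                f.write(payload)            # independently playable segment data
            return path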

    Llama - Low Latency Adaptive Media Algorithm

    In recent years, HTTP Adaptive Bit Rate (ABR) streaming, including Dynamic Adaptive Streaming over HTTP (DASH), has become the most popular technology for video streaming over the Internet. The client device requests segments of content using HTTP, with an ABR algorithm selecting the quality at which to request each segment so as to trade off video quality against the avoidance of stalling. This introduces high latency compared to traditional broadcast methods, mostly in the client buffer, which needs to hold enough data to absorb any changes in network conditions. Clients employ an ABR algorithm which monitors network conditions and adjusts the quality at which segments are requested to maximise the user's Quality of Experience. The size of the client buffer depends on the ABR algorithm's capability to respond to changes in network conditions in a timely manner; hence, low latency live streaming requires an ABR algorithm that can perform well with a small client buffer. In this paper, we present Llama, a new ABR algorithm specifically designed to operate in such scenarios. Our new ABR algorithm employs the novel idea of using two independent throughput measurements made over different timescales. We have evaluated Llama by comparing it against four popular ABR algorithms in terms of multiple QoE metrics, across multiple client settings, and in various network scenarios based on CDN logs of a commercial live TV service. Llama outperforms the other ABR algorithms, improving the P.1203 Mean Opinion Score (MOS) as well as reducing rebuffering by 33% when using DASH, and by 68% with CMAF in the lowest latency scenario.
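
    The abstract describes Llama's core idea as combining two independent throughput measurements taken over different timescales. The sketch below illustrates one plausible reading, where the more conservative of a short-window and a long-window average drives quality selection; the window sizes and the min-combination rule are assumptions, not the published algorithm.

        # Illustrative two-timescale throughput estimator. The window sizes and
        # the rule of taking the more conservative estimate are assumptions for
        # illustration; they are not the published Llama algorithm.
        from collections import deque

        class TwoTimescaleThroughput:
            def __init__(self, short_n=3, long_n=20):        # assumed window sizes
                self.short = deque(maxlen=short_n)
                self.long = deque(maxlen=long_n)

            def add_sample(self, kbps):
                self.short.append(kbps)
                self.long.append(kbps)

            def estimate(self):
                if not self.short:                           # no samples yet
                    return 0.0
                short_avg = sum(self.short) / len(self.short)
                long_avg = sum(self.long) / len(self.long)
                return min(short_avg, long_avg)              # react quickly to drops

        def select_bitrate(levels_kbps, estimator):
            budget = estimator.estimate()
            viable = [b for b in levels_kbps if b <= budget]
            return max(viable) if viable else min(levels_kbps)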

    On the merits of SVC-based HTTP adaptive streaming

    HTTP Adaptive Streaming (HAS) is quickly becoming the dominant type of video streaming in Over-The-Top multimedia services. HAS content is temporally segmented and each segment is offered to the client in different video qualities. It enables a video client to dynamically adapt the consumed video quality to match the capabilities of the network and/or the client's device. As such, the use of HAS allows a service provider to offer video streaming over heterogeneous networks and to heterogeneous devices. Traditionally, the H.264/AVC video codec is used for encoding the HAS content: for each offered video quality, a separate AVC video file is encoded. Obviously, this leads to considerable storage redundancy at the video server, as each video is available in a multitude of qualities. The recent Scalable Video Coding (SVC) extension of H.264/AVC allows a video to be encoded into different quality layers: by downloading one or more additional layers, the video quality can be improved. While this leads to an immediate reduction of the required storage at the video server, the impact of using SVC-based HAS on the network and on the quality perceived by the user is less obvious. In this article, we characterize the performance of AVC- and SVC-based HAS in terms of perceived video quality, network load and client characteristics, with the goal of identifying the advantages and disadvantages of both options.
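
    A back-of-the-envelope comparison makes the storage argument concrete; the bitrates, the one-hour duration and the roughly 10% SVC layering overhead below are assumed numbers, not figures from the article.

        # Rough storage comparison with assumed numbers: four separate AVC
        # representations versus one SVC stream whose layers sum to the top
        # quality plus an assumed ~10% layering overhead.
        bitrates_kbps = [500, 1000, 2000, 4000]   # assumed quality ladder
        duration_s = 3600                         # one hour of content

        avc_kB = sum(b * duration_s / 8 for b in bitrates_kbps)      # one file per quality
        svc_kB = max(bitrates_kbps) * 1.10 * duration_s / 8          # layered single stream

        print(f"AVC storage: {avc_kB / 1e6:.2f} GB, SVC storage: {svc_kB / 1e6:.2f} GB")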

    QoE-Based Low-Delay Live Streaming Using Throughput Predictions

    Recently, HTTP-based adaptive streaming has become the de facto standard for video streaming over the Internet. It allows clients to dynamically adapt media characteristics to network conditions in order to ensure a high quality of experience, that is, to minimize playback interruptions while maximizing video quality at a reasonable level of quality changes. In the case of live streaming, this task becomes particularly challenging due to the latency constraints. The challenge further increases if a client uses a wireless network, where the throughput is subject to considerable fluctuations. Consequently, live streams often exhibit latencies of up to 30 seconds. In the present work, we introduce an adaptation algorithm for HTTP-based live streaming called LOLYPOP (Low-Latency Prediction-Based Adaptation) that is designed to operate with a transport latency of a few seconds. To reach this goal, LOLYPOP leverages TCP throughput predictions on multiple time scales, from 1 to 10 seconds, along with an estimate of the prediction error distribution. In addition to satisfying the latency constraint, the algorithm heuristically maximizes the quality of experience by maximizing the average video quality as a function of the number of skipped segments and quality transitions. In order to select an efficient prediction method, we studied the performance of several time series prediction methods in IEEE 802.11 wireless access networks. We evaluated LOLYPOP under a large set of experimental conditions, limiting the transport latency to 3 seconds, against a state-of-the-art adaptation algorithm from the literature called FESTIVE. We observed that the average video quality is up to a factor of 3 higher than with FESTIVE. We also observed that LOLYPOP is able to reach a broader region in the quality of experience space, and thus it is better adjustable to the user profile or service provider requirements. Comment: Technical Report TKN-16-001, Telecommunication Networks Group, Technische Universitaet Berlin. This TR updated TR TKN-15-00
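
    As a simplified illustration of prediction-based selection, the sketch below picks the highest bitrate whose segment is expected to download within the latency budget with high probability, using a throughput prediction scaled by a pessimistic quantile of the relative prediction error; the numbers and the decision rule are assumptions, not the LOLYPOP algorithm itself.

        # Simplified prediction-based bitrate selection (an assumed sketch, not
        # the LOLYPOP algorithm): scale the predicted throughput by a pessimistic
        # quantile of the relative prediction error and pick the highest bitrate
        # whose segment still downloads within the latency budget.
        def pick_bitrate(levels_kbps, predicted_kbps, error_quantile, segment_s, budget_s):
            # error_quantile: e.g. the 10th percentile of (actual / predicted)
            # observed for past predictions; values below 1.0 mean over-prediction.
            safe_kbps = predicted_kbps * error_quantile
            for bitrate in sorted(levels_kbps, reverse=True):
                if bitrate * segment_s / safe_kbps <= budget_s:
                    return bitrate
            return min(levels_kbps)

        # Example: 2 s segments, a 3 s budget, 4000 kbps predicted, pessimistic factor 0.7.
        print(pick_bitrate([500, 1000, 2000, 4000], 4000, 0.7, 2, 3))   # -> 4000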

    Towards SVC-based adaptive streaming in information centric networks

    HTTP Adaptive Streaming (HAS) is becoming the de facto standard for video streaming services. In HAS, each video is segmented and stored in different qualities. The client can dynamically select the most appropriate quality level to download, allowing it to adapt to varying network conditions. As the Internet was not designed to deliver such applications, optimal support for multimedia delivery is still missing. Information Centric Networking (ICN) is a recently proposed disruptive architecture that could solve this issue, where the focus is on the content rather than on end-to-end connectivity. Due to the bandwidth unpredictability typical of ICN, standard AVC-based HAS performs quality selection sub-optimally, leading to a poor Quality of Experience (QoE). In this article, we propose to overcome this inefficiency by using Scalable Video Coding (SVC) instead. We identify the main advantages of SVC-based HAS over ICN and outline, both theoretically and via simulation, the research challenges to be addressed to optimize the delivered QoE.
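
    The sketch below illustrates the general layered download behaviour that makes SVC attractive under unpredictable bandwidth: the base layer of the next segment is always fetched, and enhancement layers are added only while there is spare time before the playout deadline. The numbers and the scheduling rule are assumptions for illustration, not the approach proposed in the article.

        # Illustrative SVC-style layer scheduling (assumed sketch, not the
        # article's proposal): the base layer is mandatory, enhancement layers
        # are downloaded only while time remains before the playout deadline.
        def schedule_layers(layer_kbps, throughput_kbps, segment_s, slack_s):
            """Return how many layers of the next segment to download."""
            time_left = segment_s + slack_s                 # time until the segment is needed
            layers = 0
            for kbps in layer_kbps:
                cost_s = kbps * segment_s / throughput_kbps
                if layers == 0 or cost_s <= time_left:      # base layer always fetched
                    layers += 1
                    time_left -= cost_s
                else:
                    break
            return layers

        # Example: base + two enhancement layers, 2 s segments, 1 s of slack.
        print(schedule_layers([500, 700, 1300], 1500, 2, 1))   # -> 2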