Minimizing the impact of delay on live SVC-based HTTP adaptive streaming services
HTTP Adaptive Streaming (HAS) is becoming the de-facto standard for Over-The-Top video streaming services. Video content is temporally split into segments which are offered at multiple qualities to the clients. These clients autonomously select the quality layer matching the current state of the network through a quality selection heuristic. Recently, academia and industry have begun evaluating the feasibility of adopting layered video coding for HAS. Instead of downloading one file for a certain quality level, scalable video streaming requires downloading several interdependent layers to obtain the same quality. This implies that the base layer is always downloaded and available for playout, even when throughput fluctuates and enhancement layers cannot be downloaded in time. This layered approach can thus help provide better service quality assurance for video streaming. However, adopting scalable video coding for HAS also introduces new issues: requesting multiple files over HTTP increases the impact of end-to-end delay and thus degrades the service provided to the client. This is even worse in a live TV scenario, where the drift on the live signal should be minimized, requiring smaller segment and buffer sizes. In this paper, we characterize the impact of delay on several measurement-based heuristics. Furthermore, we propose several ways to overcome the end-to-end delay issues, such as parallel and pipelined downloading of segment layers, to provide a higher quality for the video service.
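The gain from pipelining the layer requests can be illustrated with a small back-of-the-envelope model (an illustration with assumed parameters, not the paper's evaluation code): sequential downloading pays one round-trip delay per layer, while pipelining pays it roughly once per segment.

```python
# Illustrative delay model for downloading the SVC layers of one segment.
# All parameters (layer sizes, bandwidth, RTT) are assumptions for the sketch.

def sequential_time(layer_sizes_bits, bandwidth_bps, rtt_s):
    """Each layer is requested only after the previous one has arrived,
    so every layer pays the full round-trip delay."""
    return sum(rtt_s + size / bandwidth_bps for size in layer_sizes_bits)

def pipelined_time(layer_sizes_bits, bandwidth_bps, rtt_s):
    """Requests for all layers are sent back-to-back on one connection,
    so the round-trip delay is paid only once."""
    return rtt_s + sum(size / bandwidth_bps for size in layer_sizes_bits)

layers = [400_000, 300_000, 300_000]   # base + 2 enhancement layers (bits)
bw, rtt = 2_000_000, 0.100             # 2 Mbps link, 100 ms round-trip time

print(sequential_time(layers, bw, rtt))  # three RTTs paid
print(pipelined_time(layers, bw, rtt))   # one RTT paid
```

With three layers and a 100 ms RTT, pipelining saves two full round trips per segment, which matters most in the live scenario where segments (and thus the per-segment transfer time) are small.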
On the merits of SVC-based HTTP adaptive streaming
HTTP Adaptive Streaming (HAS) is quickly becoming the dominant type of video streaming in Over-The-Top multimedia services. HAS content is temporally segmented and each segment is offered in different video qualities to the client. It enables a video client to dynamically adapt the consumed video quality to match the capabilities of the network and/or the client's device. As such, the use of HAS allows a service provider to offer video streaming over heterogeneous networks and to heterogeneous devices. Traditionally, the H.264/AVC video codec is used for encoding the HAS content: for each offered video quality, a separate AVC video file is encoded. Obviously, this leads to considerable storage redundancy at the video server, as each video is available in a multitude of qualities. The recent Scalable Video Coding (SVC) extension of H.264/AVC allows encoding a video into different quality layers: by downloading one or more additional layers, the video quality can be improved. While this leads to an immediate reduction of the required storage at the video server, the impact of using SVC-based HAS on the network load and on the quality perceived by the user is less obvious. In this article, we characterize the performance of AVC- and SVC-based HAS in terms of perceived video quality, network load and client characteristics, with the goal of identifying advantages and disadvantages of both options.
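The storage argument can be made concrete with simple arithmetic (a sketch with assumed file sizes and an assumed SVC coding overhead, not figures from the article): AVC stores one complete file per quality, whereas SVC stores a single layered file roughly the size of the highest quality plus some layering overhead.

```python
# Illustrative server-storage comparison for AVC- vs SVC-based HAS.
# File sizes (GB) and the 10% SVC layering overhead are assumptions.

def avc_storage(quality_sizes_gb):
    """AVC: one complete, independent file per offered quality level."""
    return sum(quality_sizes_gb)

def svc_storage(quality_sizes_gb, overhead=0.10):
    """SVC: one layered file covering all qualities, approximated as the
    highest-quality size plus a typical layered-coding overhead."""
    return max(quality_sizes_gb) * (1 + overhead)

sizes = [1.0, 2.0, 4.0]        # one video offered in three qualities (GB)
print(avc_storage(sizes))      # every quality stored in full
print(svc_storage(sizes))      # single layered file
```

Under these assumptions the layered file needs well under two thirds of the AVC storage, and the gap widens as more quality levels are offered.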
Design of an emulation framework for evaluating large-scale open content aware networks
The popularity of multimedia services has resulted in new revenue opportunities for network and service providers but has also introduced important new challenges. The large amount of resources and the stringent quality requirements imposed by multimedia services have triggered the need for open content aware networks, where specific management algorithms that optimize the delivery of multimedia services can be dynamically plugged in when required. In the past, a plethora of algorithms have been proposed, ranging from specific cache algorithms to video client heuristics that are optimized for a specific multimedia service type and its corresponding delivery. However, it remains difficult to accurately characterize the performance of these algorithms and to investigate the impact of an actual deployment on multimedia services. In this paper, we present a framework that allows evaluating the performance of such algorithms for open content aware networks. The proposed evaluation framework has two important advantages. First, it performs an emulation of the novel algorithms instead of using a simulation approach, which is how performance is often characterized. Second, the emulation framework allows evaluating the impact of combining different multimedia algorithms with each other. We present the architecture of the emulation framework and discuss the main software components used. Furthermore, we present a performance evaluation of an illustrative use case, which identifies the need for emulation-based evaluation.
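The "dynamically plugged in" idea can be sketched as a plug-in interface (purely illustrative; the class and method names are assumptions, not the framework's actual API): an emulated network node delegates its decisions to whatever algorithm implementation it is given, so algorithms can be swapped and combined without changing the node.

```python
# Hedged sketch of a pluggable-algorithm interface for an emulated node.
# Names (CacheAlgorithm, EmulatedCacheNode, on_request) are illustrative.
from abc import ABC, abstractmethod

class CacheAlgorithm(ABC):
    """A replacement strategy that can be plugged into an emulated cache node."""
    @abstractmethod
    def on_request(self, item_id: str) -> bool:
        """Handle a content request; return True on a cache hit."""

class CacheEverything(CacheAlgorithm):
    """Trivial example plug-in: remembers every item it has seen."""
    def __init__(self):
        self.seen = set()
    def on_request(self, item_id: str) -> bool:
        hit = item_id in self.seen
        self.seen.add(item_id)
        return hit

class EmulatedCacheNode:
    def __init__(self, algorithm: CacheAlgorithm):
        self.algorithm = algorithm   # swapped at start-up to compare algorithms

    def handle(self, item_id: str) -> bool:
        return self.algorithm.on_request(item_id)

node = EmulatedCacheNode(CacheEverything())
print(node.handle("video-1"))   # first request: miss
print(node.handle("video-1"))   # repeated request: hit
```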
Design and optimisation of a (FA)Q-learning-based HTTP adaptive streaming client
In recent years, HTTP (Hypertext Transfer Protocol) adaptive streaming (HAS) has become the de facto standard for adaptive video streaming services. A HAS video consists of multiple segments, encoded at multiple quality levels. State-of-the-art HAS clients employ deterministic heuristics to dynamically adapt the requested quality level based on the perceived network conditions. Current HAS client heuristics are, however, hardwired to fit specific network configurations, making them less flexible across a wide range of settings. In this article, a (frequency adjusted) Q-learning HAS client is proposed. In contrast to existing heuristics, the proposed HAS client dynamically learns the optimal behaviour corresponding to the current network environment in order to optimise the quality of experience. Furthermore, the client has been optimised both in terms of global performance and convergence speed. Thorough evaluations show that the proposed client can outperform deterministic algorithms by 11-18% in terms of mean opinion score in a wide range of network configurations.
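The core learning loop of such a client can be sketched with plain tabular Q-learning (a minimal illustration, not the authors' client: the state space, reward shape, and all parameters are assumptions). The client's state is a discretised buffer level, its action is the quality level to request next, and the Q-table is updated from the observed reward.

```python
# Hedged sketch of tabular Q-learning for HAS quality selection.
# State = discretised buffer level; action = quality index; all values assumed.
import random

N_BUFFER_BINS, N_QUALITIES = 5, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

Q = [[0.0] * N_QUALITIES for _ in range(N_BUFFER_BINS)]

def choose_quality(state, explore=True):
    """Epsilon-greedy action selection over the Q-table."""
    if explore and random.random() < EPSILON:
        return random.randrange(N_QUALITIES)                    # explore
    return max(range(N_QUALITIES), key=lambda a: Q[state][a])   # exploit

def update(state, action, reward, next_state):
    """Standard Q-learning update after downloading one segment."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# One illustrative step: from buffer bin 0, quality 1 yielded reward 1.0.
update(state=0, action=1, reward=1.0, next_state=0)
print(choose_quality(0, explore=False))   # now prefers quality 1
```

The "frequency adjusted" variant in the article additionally corrects for how often each state-action pair is visited to speed up convergence; that adjustment is omitted here for brevity.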
An announcement-based caching approach for video-on-demand streaming
The growing popularity of over-the-top (OTT) video streaming services has led to a strong increase in bandwidth capacity requirements in the network. By deploying intermediary caches closer to the end-users, popular content can be served faster and without increasing backbone traffic. Designing an appropriate replacement strategy for such caching networks is of utmost importance to achieve high caching efficiency and reduce the network load. Typically, a video stream is temporally segmented into smaller chunks that can be accessed and decoded independently. This temporal segmentation leads to a strong relationship between consecutive segments of the same video. Therefore, caching strategies have been developed that take into account the temporal structure of the video. In this paper, we propose a novel caching strategy that takes advantage of clients announcing which videos will be watched in the near future, e.g., based on predicted requests for subsequent episodes of the same TV show. Based on a Video-on-Demand (VoD) production request trace, the presented algorithm is evaluated for a wide range of user behavior and request announcement models. In a realistic scenario, a performance increase of 11% can be achieved in terms of hit ratio, compared to the state-of-the-art.
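One simple way to exploit announcements is to shield announced content from eviction. The sketch below is an illustration of that idea (not the paper's algorithm): an LRU cache that, when full, prefers to evict the least recently used item that no client has announced.

```python
# Hedged sketch of an announcement-aware replacement policy: LRU eviction
# that skips items clients have announced they will watch soon.
from collections import OrderedDict

class AnnouncementAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()   # video_id -> None, in LRU order
        self.announced = set()       # ids clients say they will request soon

    def announce(self, video_id):
        self.announced.add(video_id)

    def request(self, video_id):
        """Serve a request; return True on a cache hit."""
        self.announced.discard(video_id)         # announcement fulfilled
        if video_id in self.items:
            self.items.move_to_end(video_id)     # hit: refresh LRU position
            return True
        self._insert(video_id)                   # miss: fetch and cache
        return False

    def _insert(self, video_id):
        if len(self.items) >= self.capacity:
            # Evict the LRU *unannounced* item; fall back to plain LRU
            # if every cached item has been announced.
            victim = next((v for v in self.items if v not in self.announced),
                          next(iter(self.items)))
            del self.items[victim]
        self.items[video_id] = None
```

For example, with capacity 2, requesting A and B, announcing A, and then requesting C evicts B rather than A, so a subsequent request for A still hits.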
PCN based admission control for autonomic video quality differentiation: design and evaluation
The popularity of multimedia services has introduced important new challenges for broadband access network management. As these services are very prone to network anomalies such as packet loss and jitter, accurate admission control mechanisms are needed to avoid congestion. Traditionally, centralized admission control mechanisms often underperform in combination with multimedia services, as they fail to effectively characterize the amount of needed resources. Recently, measurement-based admission control mechanisms have been proposed, such as the IETF Pre-Congestion Notification (PCN) mechanism, where the network load is measured at each intermediate node and signaled to the edge, where the admittance decision takes place. In this article, we design a PCN-based admission control mechanism optimized for protecting bursty traffic such as video services, which is currently not studied in the PCN working group. We evaluated and identified the effect of PCN's configuration in protecting bursty traffic. The proposed admission control mechanism features three main improvements to the original PCN mechanism: first, it uses a new measurement algorithm, which is easier to configure for bursty traffic. Second, it automatically adapts PCN's configuration based on the traffic characteristics of the current sessions. Third, it introduces differentiation between video quality levels to achieve an admission decision per video quality level of each request. The mechanism has been extensively evaluated in a packet-switched simulation environment, which shows that the novel admission control mechanism is able to protect bursty video traffic.
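The per-quality-level admission decision can be sketched as follows (an illustration of the PCN idea, not the article's mechanism: the thresholds and the simple marked-packet ratio are assumptions). Interior nodes mark packets when the load approaches a pre-congestion level; the edge compares the measured marking ratio against a threshold that is stricter for higher quality levels.

```python
# Hedged sketch of a PCN-style, per-quality admission decision at the edge.
# The marked-packet ratio and per-quality thresholds are illustrative.

def admit(marked, total, thresholds, quality):
    """Admit a new session of the given quality level if the measured
    fraction of pre-congestion-marked packets is below its threshold."""
    if total == 0:
        return True                    # no load measured yet: admit
    congestion = marked / total
    return congestion < thresholds[quality]

# Illustrative thresholds: HD sessions admitted only under light marking,
# so under moderate load new requests are downgraded to SD rather than blocked.
THRESHOLDS = {"sd": 0.15, "hd": 0.05}

print(admit(marked=8, total=100, thresholds=THRESHOLDS, quality="sd"))  # True
print(admit(marked=8, total=100, thresholds=THRESHOLDS, quality="hd"))  # False
```

Differentiated thresholds like these give exactly the per-quality-level decision the abstract describes: a request refused at HD may still be admitted at a lower quality.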