Modeling and Evaluation of Multisource Streaming Strategies in P2P VoD Systems
In recent years, multimedia content distribution has largely moved to the Internet, forcing broadcasters, operators and service providers to upgrade their infrastructures at great expense. In this context, streaming solutions that rely on user devices such as set-top boxes (STBs) to offload dedicated streaming servers are particularly appropriate. In these systems, content is usually replicated and scattered over the network formed by STBs placed in users' homes, and the video-on-demand (VoD) service is provisioned through streaming sessions established among neighboring STBs in a peer-to-peer fashion. Until now, the majority of research works have focused on the design and optimization of content replication mechanisms to minimize server costs. This optimization has typically been performed either by considering very crude system performance indicators or by analyzing asymptotic behavior. In this work, instead, we propose an analytical model that complements previous works by providing fairly accurate predictions of system performance (i.e., blocking probability). Our model turns out to be a highly scalable, flexible, and extensible tool that may help both designers and developers efficiently predict the effect of system design choices in large-scale STB-VoD systems.
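The performance indicator this model predicts, blocking probability, is classically computed for loss systems with the Erlang-B formula. The paper's model is considerably richer than this; the sketch below only illustrates that baseline quantity, and the function name, slot count, and load figures are illustrative assumptions.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Erlang-B blocking probability for a loss system with `servers`
    concurrent streaming slots and an offered load (arrival rate times
    mean holding time) given in Erlangs, via the stable recurrence."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# Example: 10 upload slots across neighboring STBs, 7 Erlangs of demand.
print(erlang_b(10, 7.0))
```

Adding STB upload slots drives the blocking probability down for a fixed demand, which is the kind of design trade-off such a model lets one explore quickly.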
QoE-Aware Content Distribution Systems for Adaptive Bitrate Video Streaming
A prodigious increase in video streaming content along with a simultaneous rise in end system capabilities has led to the proliferation of adaptive bit rate video streaming users in the Internet. Today, video streaming services range from Video-on-Demand services like traditional IP TV to more recent technologies such as immersive 3D experiences for live sports events. In order to meet the demands of these services, the multimedia and networking research community continues to strive toward efficiently delivering high quality content across the Internet while also trying to minimize content storage and delivery costs.
The introduction of flexible and adaptable technologies such as compute and storage clouds, Network Function Virtualization, and Software Defined Networking continues to fuel content provider revenue. Today, content providers such as Google and Facebook build their own Software-Defined WANs to efficiently serve millions of users worldwide, while Netflix partners with ISPs such as AT&T (using Open Connect) and with cloud providers such as Amazon EC2 to serve its content and manage the delivery of several petabytes of high-quality video for millions of subscribers at a global scale. In recent years, the unprecedented growth of video traffic in the Internet has spurred innovative systems such as Software Defined Networks and Information Centric Networks, as well as inventive protocols such as QUIC, in an effort to keep up with this remarkable growth. While most existing systems continue to satisfy user requirements only sub-optimally, future video streaming systems will require management of storage and bandwidth resources several orders of magnitude larger than what is deployed today. Moreover, Quality-of-Experience metrics are becoming increasingly fine-grained in order to accurately quantify diverse content and consumer needs.
In this dissertation, we design and investigate innovative adaptive bit rate video streaming systems and analyze the implications of recent technologies on traditional streaming approaches using real-world experimentation methods. We provide useful insights for current and future content distribution network administrators to tackle Quality-of-Experience dilemmas and serve high-quality video content to many users at a global scale. To show how Quality-of-Experience can benefit from core network architectural modifications, we design and evaluate prototypes for video streaming in Information Centric Networks and Software-Defined Networks. We also present a real-world, in-depth analysis of adaptive bitrate video streaming over protocols such as QUIC and MPQUIC to show how end-to-end protocol innovation can yield substantial Quality-of-Experience benefits for adaptive bit rate video streaming systems. Finally, we investigate a cross-layer approach based on QUIC and observe that application-layer information can successfully be used to determine transport-layer parameters for ABR streaming applications.
Quality of experience-centric management of adaptive video streaming services: status and challenges
Video streaming applications currently dominate Internet traffic. In particular, HTTP Adaptive Streaming (HAS) has emerged as the dominant standard for streaming videos over the best-effort Internet, thanks to its capability of matching the video quality to the available network resources. In HAS, the video client is equipped with a heuristic that dynamically decides the most suitable quality at which to stream the content, based on information such as the perceived network bandwidth or the video player buffer status. The goal of this heuristic is to optimize the quality as perceived by the user, the so-called Quality of Experience (QoE). Despite the many advantages brought by the adaptive streaming principle, optimizing users' QoE is far from trivial. Current heuristics are still suboptimal when sudden bandwidth drops occur, especially in wireless environments, leading to freezes in the video playout, the main factor influencing users' QoE. This issue is aggravated for live events, where the player buffer has to be kept as small as possible in order to reduce the playout delay between the user and the live signal. In light of the above, several works have been proposed in recent years with the aim of extending the classical purely client-based structure of adaptive video streaming in order to fully optimize users' QoE. In this article, we present a survey of research works on this topic, together with a classification based on where the optimization takes place. This classification goes beyond client-based heuristics to investigate the usage of server- and network-assisted architectures and of new application- and transport-layer protocols. In addition, we outline the major challenges currently arising in the field of multimedia delivery, which will be of extreme relevance in future years.
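The client-side heuristic described above, which picks a quality from bandwidth and buffer signals, can be sketched in a few lines. The rate ladder, safety factor, and panic threshold below are illustrative assumptions, not any specific player's algorithm.

```python
def pick_bitrate(ladder, est_bandwidth_kbps, buffer_s,
                 safety=0.8, panic_buffer_s=5.0):
    """Hypothetical HAS rate-selection heuristic: choose the highest
    ladder rung sustainable under a safety-discounted bandwidth
    estimate, but fall back to the lowest rung when the buffer is
    nearly empty, to avoid a playout freeze."""
    if buffer_s < panic_buffer_s:          # freeze imminent: be conservative
        return ladder[0]
    budget = est_bandwidth_kbps * safety   # headroom for estimation error
    feasible = [r for r in ladder if r <= budget]
    return feasible[-1] if feasible else ladder[0]

ladder = [300, 750, 1500, 3000, 6000]      # kbps rungs of one content
print(pick_bitrate(ladder, 4000, 20.0))    # ample buffer -> 3000
print(pick_bitrate(ladder, 4000, 2.0))     # near-empty buffer -> 300
```

The survey's classification then asks where logic like this should live: purely in the client, or assisted by the server or the network.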
Crowdsourced Live Streaming over the Cloud
Empowered by today's rich tools for media generation and distribution and by convenient Internet access, crowdsourced streaming generalizes the single-source streaming paradigm by including massive numbers of contributors for a video channel. It calls for a joint optimization along the path from crowdsourcers, through streaming servers, to the end users to minimize the overall latency. The dynamics of the video sources, together with the globalized request demands and the high computation demand from each sourcer, make crowdsourced live streaming challenging even with powerful support from modern cloud computing. In this paper, we present a generic framework that facilitates a cost-effective cloud service for crowdsourced live streaming. Through adaptive leasing, cloud servers can be provisioned at a fine granularity to accommodate geo-distributed video crowdsourcers. We present an optimal solution to deal with service migration among cloud instances with diverse lease prices, which also addresses the impact of location on streaming quality. To understand the performance of the proposed strategies in the real world, we have built a prototype system running over PlanetLab and the Amazon/Microsoft clouds. Our extensive experiments demonstrate the effectiveness of our solution in terms of deployment cost and streaming quality.
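The migration-among-instances idea can be illustrated with a toy dynamic program over lease prices. This is not the paper's optimal solution; the flat migration cost, the price matrix, and the function name are assumptions for illustration.

```python
def plan_leases(prices, migration_cost):
    """Minimum total cost of hosting a channel across time periods,
    given per-period lease prices prices[t][i] for each cloud
    instance i and a flat cost to migrate between instances.
    Classic DP: for each period, either stay put or migrate from the
    cheapest reachable predecessor."""
    n = len(prices[0])
    cost = list(prices[0])                    # cost of ending period 0 on i
    for t in range(1, len(prices)):
        best_prev = min(cost)
        cost = [prices[t][i] + min(cost[i],                   # stay
                                   best_prev + migration_cost)  # migrate
                for i in range(n)]
    return min(cost)

# Two instances over three periods; migrating costs 2.
print(plan_leases([[1, 5], [9, 1], [1, 9]], 2))  # -> 7
```

Here it is cheaper to migrate twice (1 + 2+1 + 2+1 = 7) than to stay on either instance, which is exactly the trade-off diverse lease prices create.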
A policy-based framework towards smooth adaptive playback for dynamic video streaming over HTTP
The growth of video streaming in the Internet over the last few years has been highly significant and promises to continue in the future. This growth is tied to the increasing number of Internet users and especially to the diversification of end-user devices seen nowadays.
Earlier video streaming solutions did not adequately consider Quality of Experience (QoE) from the user's perspective. This weakness has since been overcome with DASH video streaming. The main feature of this protocol is to provide different versions, in terms of quality, of the same content. Depending on the status of the network infrastructure between the video server and the user device, the DASH protocol automatically selects the most adequate content version, thereby providing the user with the best possible quality for the consumption of that content.
The main issue with the DASH protocol is the control loop, between each client and the video server, that regulates the rate of the video stream. As network congestion increases, the client requests a lower-rate video stream from the server. Nevertheless, due to network latency, the DASH protocol on its own may not be able to stabilize the video stream rate at a level that guarantees a satisfactory QoE to end users.
Network programming is a very active and popular topic in the field of network infrastructure management. In this area, the Software Defined Networking paradigm is an approach in which a network controller, with a relatively abstracted view of the physical network infrastructure, tries to manage the data path more efficiently.
The current work studies the combination of the DASH protocol and the Software Defined Networking paradigm in order to achieve a more adequate sharing of network resources that can benefit both users' QoE and network management.
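One way an SDN controller could mediate between competing DASH clients is to cap each flow at the ladder rung implied by its fair share of the bottleneck, so clients converge on a stable representation instead of oscillating. This sketch is purely illustrative; the even-split policy, names, and numbers are assumptions, not this work's design.

```python
def assign_cap(capacity_kbps, ladder, n_clients):
    """Hypothetical controller policy: split bottleneck capacity
    evenly among DASH clients, then cap each flow at the highest
    bitrate-ladder rung that fits within its share."""
    share = capacity_kbps / n_clients
    feasible = [r for r in ladder if r <= share]
    return feasible[-1] if feasible else ladder[0]

ladder = [300, 750, 1500, 3000, 6000]   # kbps rungs
print(assign_cap(10000, ladder, 3))     # each of 3 clients capped at 3000
```

Because the cap is computed from a global view of the link, it sidesteps the per-client feedback-loop instability described above.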
A machine learning-based framework for preventing video freezes in HTTP adaptive streaming
HTTP Adaptive Streaming (HAS) represents the dominant technology to deliver videos over the Internet, due to its ability to adapt the video quality to the available bandwidth. Despite that, HAS clients can still suffer from freezes in the video playout, the main factor influencing users' Quality of Experience (QoE). To reduce video freezes, we propose a network-based framework in which a network controller prioritizes the delivery of particular video segments to prevent freezes at the clients. This framework is based on OpenFlow, a widely adopted protocol implementing the software-defined networking principle. The main element of the controller is a Machine Learning (ML) engine based on the random undersampling boosting algorithm and fuzzy logic, which can detect when a client is close to a freeze and drive the network prioritization to avoid it. This decision is based on measurements collected from the network nodes only, without any knowledge of the streamed videos or of the clients' characteristics. In this paper, we detail the design of the proposed ML-based framework and compare its performance with other benchmark HAS solutions under various video streaming scenarios. In particular, we show through extensive experimentation that the proposed approach can reduce video freezes and freeze time by about 65% and 45%, respectively, compared to benchmark algorithms. These results represent a major improvement in the QoE of users watching multimedia content online.
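The paper's detector is a trained RUSBoost-plus-fuzzy-logic engine, but the underlying signal it exploits can be illustrated much more simply: a client's buffer can be estimated from segment-download completion times observed at network nodes, with no knowledge of the client itself. All names, durations, and thresholds below are assumptions for illustration.

```python
def estimated_buffer(seg_duration_s, downloads):
    """Network-side buffer estimate for one client: each observed
    download adds one segment's worth of playout, while the time
    between completions drains the buffer (floored at zero, i.e. a
    freeze). Returns the estimated buffer after the last download."""
    buffer_s, last_done = 0.0, 0.0
    for done_at in downloads:                 # completion timestamps (s)
        drained = done_at - last_done         # playout consumed meanwhile
        buffer_s = max(0.0, buffer_s - drained) + seg_duration_s
        last_done = done_at
    return buffer_s

# 4 s segments: steady 2 s downloads build buffer; a 9 s stall drains it.
print(estimated_buffer(4.0, [2.0, 4.0, 6.0]))   # -> 8.0
print(estimated_buffer(4.0, [2.0, 4.0, 13.0]))  # -> 4.0
```

A controller could prioritize any client whose estimate falls below a low-water mark, which is the role the ML engine plays far more robustly in the proposed framework.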
Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support
A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of cloud computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided by fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be triggered by request patterns and timing considerations. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. In addition, we evaluate such a migration of video services. Finally, we present potential research challenges and trends.
Design of a 5G Multimedia Broadcast Application Function Supporting Adaptive Error Recovery
The demand for mobile multimedia streaming services has been steadily growing in recent years. Mobile multimedia broadcasting addresses the shortage of radio resources but introduces a network error recovery problem. Retransmitting multimedia segments that are not correctly broadcast can cause service disruptions and increased service latency, affecting the quality of experience perceived by end users. With the advent of networking paradigms based on virtualization technologies, mobile networks have been enabled with more flexibility and agility to deploy innovative services that improve the utilization of available network resources. This paper discusses how mobile multimedia broadcast services can be designed to prevent service degradation by using the computing capabilities provided by multi-access edge computing (MEC) platforms in the context of a 5G network architecture. An experimental platform has been developed to evaluate the feasibility of a MEC application to provide adaptive error recovery for multimedia broadcast services. The results of the experiments carried out show that the proposal provides a flexible mechanism that can be deployed at the network edge to lower the impact of transmission errors on latency and service disruptions.
Comment: 14 pages, 10 figures
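The adaptive part of such error recovery boils down to a per-segment decision an edge application could make. The sketch below is an illustrative policy only (the cost model, units, and names are assumptions, not the paper's mechanism): repair a lost segment by unicast to the few clients that missed it, or re-broadcast it once when losses are widespread.

```python
def recovery_action(n_missed, broadcast_cost=10.0, unicast_cost=1.5):
    """Illustrative per-segment repair decision, with costs in
    abstract radio-resource units: unicast retransmissions scale
    with the number of affected clients, a re-broadcast is a fixed
    cost reaching everyone at once."""
    if n_missed == 0:
        return "none"                      # segment was received by all
    if n_missed * unicast_cost < broadcast_cost:
        return "unicast"                   # few losses: targeted repair
    return "broadcast"                     # widespread loss: resend to all

print(recovery_action(2))    # -> unicast
print(recovery_action(20))   # -> broadcast
```

Running this at the MEC platform rather than in the core is what keeps the repair latency low enough to avoid service disruptions.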