QoE-Aware Content Distribution Systems for Adaptive Bitrate Video Streaming
A prodigious increase in video streaming content, along with a simultaneous rise in end-system capabilities, has led to the proliferation of adaptive bitrate video streaming users on the Internet. Today, video streaming services range from Video-on-Demand services such as traditional IPTV to more recent technologies such as immersive 3D experiences for live sports events. To meet the demands of these services, the multimedia and networking research community continues to strive toward efficiently delivering high-quality content across the Internet while also minimizing content storage and delivery costs.
The introduction of flexible and adaptable technologies such as compute and storage clouds, Network Function Virtualization, and Software-Defined Networking continues to fuel content provider revenue. Today, content providers such as Google and Facebook build their own Software-Defined WANs to efficiently serve millions of users worldwide, while Netflix partners with ISPs such as AT&T (via OpenConnect) and with cloud providers such as Amazon EC2 to serve content and manage the delivery of several petabytes of high-quality video for millions of subscribers at global scale. In recent years, the unprecedented growth of video traffic on the Internet has prompted innovative systems such as Software-Defined Networks and Information-Centric Networks, as well as new protocols such as QUIC, in an effort to keep pace with this remarkable growth. While most existing systems satisfy user requirements only sub-optimally, future video streaming systems will require optimal management of storage and bandwidth resources several orders of magnitude larger than what is deployed today. Moreover, Quality-of-Experience metrics are becoming increasingly fine-grained in order to accurately quantify diverse content and consumer needs.
In this dissertation, we design and investigate innovative adaptive bitrate video streaming systems and analyze the implications of recent technologies on traditional streaming approaches using real-world experimentation methods. We provide useful insights for current and future content distribution network administrators to tackle Quality-of-Experience dilemmas and serve high-quality video content to many users at a global scale. To show how Quality-of-Experience can benefit from core network architectural modifications, we design and evaluate prototypes for video streaming in Information-Centric Networks and Software-Defined Networks. We also present a real-world, in-depth analysis of adaptive bitrate video streaming over protocols such as QUIC and MPQUIC to show how end-to-end protocol innovation can contribute substantial Quality-of-Experience benefits for adaptive bitrate video streaming systems. Finally, we investigate a cross-layer approach based on QUIC and observe that application-layer information can be successfully used to determine transport-layer parameters for ABR streaming applications.
A policy-based framework towards smooth adaptive playback for dynamic video streaming over HTTP
The growth of video streaming on the Internet over the last few years has been substantial and promises to continue. This trend is tied to the growth in the number of Internet users and, especially, to the ongoing diversification of end-user devices.
Earlier video streaming solutions did not adequately consider Quality of Experience from the user's perspective. This weakness has since been overcome by DASH video streaming. The main feature of this protocol is to provide different versions, in terms of quality, of the same content. Depending on the state of the network infrastructure between the video server and the user's device, the DASH protocol automatically selects the most appropriate content version, thereby providing the user with the best possible quality for consuming that content.
The main issue with the DASH protocol is the control loop, between each client and the video server, that governs the rate of the video stream. As network congestion increases, the client requests a lower-rate video stream from the server. Nevertheless, due to network latency, the DASH protocol on its own may not be able to stabilize the video stream rate at a level that guarantees satisfactory QoE to end-users.
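The adaptation loop described above can be illustrated with a minimal throughput-based bitrate selector. This is only an illustrative sketch: the bitrate ladder and safety margin are made-up values, and real DASH players use considerably more elaborate heuristics (buffer occupancy, smoothed throughput estimates, and so on).

```python
# Hypothetical sketch of a throughput-based DASH rate-selection step.
# The ladder and safety margin are illustrative, not from any real player.

BITRATE_LADDER_KBPS = [300, 750, 1500, 3000, 6000]  # available representations

def select_bitrate(throughput_estimate_kbps, safety_margin=0.8):
    """Pick the highest representation the estimated throughput can sustain."""
    budget = throughput_estimate_kbps * safety_margin
    feasible = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    # Fall back to the lowest representation when even that exceeds the budget.
    return feasible[-1] if feasible else BITRATE_LADDER_KBPS[0]
```

Because the client re-runs this selection only once per segment, and throughput estimates lag real network conditions, the loop can oscillate under latency, which is precisely the stabilization problem discussed above.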
Network programming is a very active and popular topic in the field of network infrastructure management. In this area, the Software-Defined Networking paradigm is an approach in which a network controller, with a relatively abstract view of the physical network infrastructure, attempts to manage the data path more efficiently.
This work studies the combination of the DASH protocol and the Software-Defined Networking paradigm in order to achieve a more appropriate sharing of network resources that benefits both the users' QoE and network management.
Machine Learning for Multimedia Communications
Machine learning is revolutionizing the way multimedia information is processed and transmitted to users. After intensive and powerful training, impressive efficiency and accuracy improvements have been achieved across the entire transmission pipeline. For example, the high model capacity of learning-based architectures enables us to accurately model image and video behavior such that tremendous compression gains can be achieved. Similarly, error concealment, streaming strategies, and even user perception modeling have widely benefited from recent learning-oriented developments. However, learning-based algorithms often imply drastic changes to the way data are represented or consumed, meaning that the overall pipeline can be affected even when only a subpart of it is optimized. In this paper, we review the recent major advances proposed across the transmission chain, and we discuss their potential impact and the research challenges they raise.
Improving Resilience of Communication in Information Dissemination for Time-Critical Applications
Severe weather impacts lives, and in such dire conditions people rely on communication to organize relief and stay in touch with their loved ones. In these situations, the cellular network infrastructure (referred to simply as "infrastructure" for the remainder of this document) might be affected by power outages, link failures, and other damage. This urges us to look at ad-hoc modes of communication to offload traffic, partially or fully, from the infrastructure, depending on its status.
We take a threefold approach, ranging from the case where the infrastructure is completely unavailable to the case where it has been replaced by a makeshift, low-capacity mobile cellular base station.
First, we look into communication without infrastructure and the timely dissemination of weather alerts specific to geographical areas. We focus on the specific case of floods, as they affect a significant number of people. The nature of the problem lets us exploit the properties of Information-Centric Networking (ICN) in this context, namely: i) flexibility and high failure resistance, since any node in the network that has the information can satisfy a query; ii) robustness, since only the sensor and the car need to communicate; and iii) fine-grained, geolocation-specific information dissemination. We analyze how message forwarding using ICN on top of an ad-hoc network compares to an infrastructure-based approach, which is less resilient in the case of a disaster. In addition, we compare the performance of different message forwarding strategies in Vehicular Ad-hoc Networks (VANETs) using ICN. Our results show that the ICN strategy outperforms the infrastructure-based approach, being 100 times faster for 63% of the total messages delivered.
Next, we look into the case where the cellular network infrastructure is available but is under pressure due to a rapid increase in traffic volume (as seen during a major event), or has been replaced by a low-capacity mobile tower. Here we aim to offload as much traffic as possible from the infrastructure to device-to-device communication. However, the host-oriented model of the TCP/IP-based Internet poses challenges to this communication pattern. A scheme that uses an ICN model to fetch content from nearby peers increases the resiliency of the network in cases of outages and disasters. We collected content popularity statistics from social media to create a content request pattern and evaluated our approach through the simulation of realistic urban scenarios. Additionally, we analyze the scenario of large crowds in sports venues. Our simulation results show that we can offload up to 51.7% of traffic from the backhaul network, suggesting an advantageous path to support the surge in traffic while keeping complexity and cost for the network operator at manageable levels.
Finally, we look at adaptive bitrate (ABR) streaming, which has contributed significantly to the reduction of video playout stalling, mainly under highly variable bandwidth conditions. ABR clients still suffer from variations in bitrate quality over the duration of a streaming session. Similar to stalling, these variations have a negative impact on users' Quality of Experience (QoE). We use a trace from a large-scale CDN to show that such quality changes occur in a significant number of streaming sessions, and we investigate an ABR video segment retransmission approach to reduce the number of such quality changes. As the new HTTP/2 standard becomes increasingly popular, we also see growing usage of HTTP/2 as an alternative protocol for the transmission of web traffic, including video streaming. Under various network conditions, we conduct a systematic comparison of existing transport-layer approaches for HTTP/2 to determine which is best suited for ABR segment retransmissions. Since it is well known that HTTP/2 provides a series of improvements over HTTP/1.1, we perform experiments both in controlled environments and over transcontinental Internet links, and we find that these benefits also "trickle up" into the application layer for ABR video streaming: HTTP/2 retransmissions can significantly improve the average quality bitrate while simultaneously minimizing bitrate variations over the duration of a streaming session. Taking inspiration from the first two approaches, we account for the resiliency of a multi-path approach and further examine a multi-path, multi-stream approach to ABR streaming, demonstrating that losses on one path of a multi-path connection have very little impact on the other, which increases the throughput and resiliency of communication.
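The per-session metric discussed above, the number of bitrate quality changes, can be sketched as a simple pass over a session's segment bitrates. The segment values below are made up for illustration and are not taken from the CDN trace.

```python
# Illustrative sketch: count bitrate quality switches in one streaming session,
# the QoE-relevant quantity the retransmission approach tries to reduce.

def count_quality_changes(segment_bitrates):
    """Number of times consecutive segments differ in bitrate (kbps)."""
    return sum(1 for a, b in zip(segment_bitrates, segment_bitrates[1:]) if a != b)


# Hypothetical session: two downswitch/upswitch events -> 2 quality changes.
session = [3000, 3000, 1500, 1500, 3000]
```

A retransmission scheme that replaces the two 1500 kbps segments with higher-quality copies before playout would bring this count to zero for the session, which is the effect measured across the trace.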
QoE-Aware Resource Allocation For Crowdsourced Live Streaming: A Machine Learning Approach
In the last decade, empowered by the technological advancements of mobile devices and the revolution in wireless mobile network access, the world has witnessed an explosion in crowdsourced live streaming. Ensuring a stable, high-quality playback experience is essential to maximize viewers' Quality of Experience and content providers' profits. This can be achieved by adopting a geo-distributed cloud infrastructure to allocate multimedia resources as close as possible to viewers, in order to minimize access delay and video stalls.
Additionally, because of the instability of network conditions and the heterogeneity of end-user capabilities, transcoding the original video into multiple bitrates is required. Video transcoding is a computationally expensive process in which, generally, a single cloud instance must be reserved to produce a single video bitrate representation. On-demand renting of resources or inadequate resource reservation may delay video playback or force viewers to be served at a lower quality. On the other hand, if provisioning greatly exceeds what is required, the extra resources are wasted.
In this thesis, we introduce a prediction-driven resource allocation framework to maximize the QoE of viewers and minimize the resource allocation cost. First, by exploiting the viewers' locations available in our unique dataset, we implement a machine learning model to predict the number of viewers near each geo-distributed cloud site. Second, based on the predicted results, which proved close to the actual values, we formulate an optimization problem to proactively allocate resources in the viewers' proximity. Additionally, we present a trade-off between video access delay and the cost of resource allocation.
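The proactive allocation step can be illustrated with a deliberately simplified greedy stand-in: given predicted viewer counts per cloud site and a global budget of transcoding instances, spend the budget where unmet predicted demand is largest. The thesis formulates this as a proper optimization problem; the function below, including the sites, the per-instance capacity, and the budget, is purely hypothetical.

```python
# Hypothetical greedy sketch of prediction-driven instance allocation.
# Sites, capacity, and budget are illustrative assumptions, not from the thesis.
import math

def allocate_instances(predicted_viewers, budget, viewers_per_instance=100):
    """Return {site: instances}, giving the budget to the most loaded sites."""
    demand = {s: math.ceil(v / viewers_per_instance)
              for s, v in predicted_viewers.items()}
    alloc = {s: 0 for s in predicted_viewers}
    for _ in range(budget):
        # Pick the site with the largest remaining (unmet) predicted demand.
        unmet = {s: demand[s] - alloc[s] for s in alloc}
        site = max(unmet, key=unmet.get)
        if unmet[site] <= 0:
            break  # all predicted demand is covered; stop spending budget
        alloc[site] += 1
    return alloc
```

A real formulation would additionally weigh network cost against access delay, which is exactly the trade-off the optimization problem above captures.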
Considering the complexity of our offline optimization and its infeasibility for responding to the volume of viewing requests in real time, we further extend our work by introducing a resource forecasting and reservation framework for geo-distributed cloud sites. First, we formulate an offline optimization problem to allocate transcoding resources in the viewers' proximity while trading off network cost against viewers' QoE. Second, based on the optimizer's resource allocation decisions for historical live videos, we create time series datasets containing historical records of the optimal resources needed at each geo-distributed cloud site. Finally, we adopt machine learning to build distributed time series forecasting models that proactively forecast the exact transcoding resources needed, ahead of time, at each geo-distributed cloud site. The results show that the predicted number of transcoding resources needed at each cloud site is close to the optimal number.
Deep Reinforcement Learning with Importance Weighted A3C for QoE enhancement in Video Delivery Services
Adaptive bitrate (ABR) algorithms adapt the video bitrate to the network conditions to improve the overall video quality of experience (QoE). Recently, reinforcement learning (RL) and asynchronous advantage actor-critic (A3C) methods have been used to generate ABR algorithms and have been shown to improve overall QoE compared to fixed-rule ABR algorithms. However, a common issue in A3C methods is the lag between the behaviour policy and the target policy: the two policies fall out of sync, which results in suboptimal updates. In this work, we present ALISA, an Actor-Learner Architecture with Importance Sampling for efficient learning in ABR algorithms. ALISA incorporates importance sampling weights to give more weight to relevant experience, addressing the lag issue in existing A3C methods. We present the design and implementation of ALISA and compare its performance to state-of-the-art video rate adaptation algorithms, including the vanilla A3C implemented in the Pensieve framework and fixed-rule schedulers such as BB, BOLA, and RB. Our results show that ALISA achieves up to 25%-48% higher average QoE than Pensieve, and even more when compared to fixed-rule schedulers.
Comment: 10 pages, 9 figures. 24th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM).
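The importance-sampling correction described in the abstract can be sketched in a few lines: each experience collected under a lagging behaviour policy is reweighted by the ratio of action probabilities under the current target policy versus the behaviour policy. This is a generic illustration of the technique, not ALISA's actual implementation; the probabilities, clip value, and loss values below are made up.

```python
# Illustrative sketch of importance-sampling weights for off-policy
# actor-critic updates (generic technique, not the ALISA codebase).

def importance_weight(target_prob, behaviour_prob, clip=10.0):
    """Truncated importance-sampling ratio pi_target(a|s) / pi_behaviour(a|s)."""
    return min(target_prob / behaviour_prob, clip)

def weighted_policy_loss(log_probs, advantages, weights):
    """Policy-gradient loss where each sample is scaled by its IS weight."""
    return -sum(w * lp * adv
                for w, lp, adv in zip(weights, log_probs, advantages))
```

Samples whose action the target policy would now choose more often than the behaviour policy did (ratio > 1) contribute more to the update, while stale, off-policy samples are down-weighted, which mitigates the lag-induced suboptimal updates described above.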