Enhancing the broadcasted TV consumption experience with broadband omnidirectional video content
The current wide range of heterogeneous consumption devices and delivery technologies offers the opportunity to provide related content that enhances and enriches the TV consumption experience. This paper describes a solution for handling the delivery and synchronous consumption of traditional broadcast TV content and related broadband omnidirectional video content. The solution supports hybrid (broadcast/broadband) delivery and has been designed to be compatible with the Hybrid Broadcast Broadband TV (HbbTV) standard. In particular, some HbbTV specifications, such as the use of global timestamps and discovery mechanisms, have been adopted. However, additional functionality has been designed to achieve accurate synchronization and to support the playout of omnidirectional video content on current consumption devices. To prove that commercial hybrid environments could be immediately enhanced with this type of content, the proposed solution has been included in a testbed and evaluated both objectively and subjectively. Regarding the omnidirectional video content, the two most common types of projections are supported: equirectangular and cube map. The objective assessment shows that the playout of broadband-delivered omnidirectional video content on companion devices can be accurately synchronized with the playout of traditional broadcast 2D content on the TV. The subjective assessment shows high user interest in this new type of enriched, immersive experience, which contributes to enhancing their Quality of Experience (QoE) and engagement.
This work was supported by the Generalitat Valenciana, Investigacion Competitiva Proyectos, through the Research and Development Program Grants for Research Groups to be Consolidated, under Grant AICO/2017/059 and Grant AICO/2017
Marfil-Reguero, D.; Boronat, F.; López, J.; Vidal Meló, A. (2019). Enhancing the broadcasted TV consumption experience with broadband omnidirectional video content. IEEE Access. 7:171864-171883. https://doi.org/10.1109/ACCESS.2019.2956084
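The synchronization idea described in the abstract, aligning companion-device playout to the broadcast timeline via global timestamps, can be sketched as follows. This is a hypothetical simplification: the function names and the constant-delay model are assumptions, not the paper's implementation.

```python
# Sketch: align companion-device omnidirectional playout with broadcast TV
# playout using a shared (global) wall-clock timestamp.
# All names and the constant-delay model are illustrative assumptions.

def companion_seek_position(tv_media_time: float,
                            tv_timestamp: float,
                            now: float,
                            companion_start_offset: float = 0.0) -> float:
    """Return the media time the companion player should seek to.

    tv_media_time: media time (s) of the TV frame stamped at tv_timestamp.
    tv_timestamp:  global wall-clock time (s) when that frame was shown.
    now:           current global wall-clock time (s) on the companion.
    companion_start_offset: offset (s) between the broadcast and broadband
                            timelines for the same event.
    """
    elapsed = now - tv_timestamp  # time since the stamped TV frame was shown
    return tv_media_time + elapsed + companion_start_offset

# Example: the TV reported media time 120.0 s at wall clock 1000.0 s; the
# companion evaluates this 0.25 s later and seeks to 120.25 s.
print(companion_seek_position(120.0, 1000.0, 1000.25))  # 120.25
```

In practice the result would be compared against the companion player's current position and corrected only when the drift exceeds some tolerance, to avoid constant seeking.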
Streaming and User Behaviour in Omnidirectional Videos
Omnidirectional videos (ODVs) have gone beyond the passive paradigm of traditional video, offering higher degrees of immersion and interaction. The revolutionary novelty of this technology is the possibility for users to interact with the surrounding environment and to feel a sense of engagement and presence in a virtual space. Users are clearly the main driving force of immersive applications, and consequently the services need to be properly tailored to them. In this context, this chapter highlights the new role of users in ODV streaming applications, and thus the need to understand their behaviour while navigating within ODVs. A comprehensive overview of the research efforts aimed at advancing ODV streaming systems is also presented. In particular, the state-of-the-art solutions examined in this chapter are divided into system-centric and user-centric streaming approaches: the former is a fairly straightforward extension of well-established solutions for the 2D video pipeline, while the latter benefits from understanding users' behaviour and enables more personalised ODV streaming.
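One concrete example of exploiting user behaviour is short-horizon viewport prediction from recent head-pose samples. The sketch below uses plain linear extrapolation of yaw; the function name and the wrap-around handling are illustrative assumptions, not a method from the chapter (real user-centric systems use much richer models, e.g. saliency or learned trajectories).

```python
# Sketch: linear extrapolation of head yaw for short-horizon viewport
# prediction. Purely illustrative.

def predict_yaw(samples, horizon: float) -> float:
    """Predict yaw (degrees, in [0, 360)) 'horizon' seconds ahead.

    samples: list of (timestamp_s, yaw_deg) pairs, oldest first;
             only the last two samples are used.
    """
    (t0, y0), (t1, y1) = samples[-2], samples[-1]
    dy = (y1 - y0 + 180.0) % 360.0 - 180.0   # shortest angular difference
    velocity = dy / (t1 - t0)                # angular velocity, deg/s
    return (y1 + velocity * horizon) % 360.0

# A user turning right at ~30 deg/s, crossing the 0/360 boundary,
# predicted 1 s ahead:
print(predict_yaw([(0.0, 350.0), (0.5, 5.0)], 1.0))  # 35.0
```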
An Edge and Fog Computing Platform for Effective Deployment of 360 Video Applications
This paper was presented at the Seventh International Workshop on Cloud Technologies and Energy Efficiency in Mobile Communication Networks (CLEEN 2019): "How cloudy and green will mobile network and services be?", 15 April 2019, Marrakech, Morocco. In press.
Immersive video applications based on 360 video streaming require high-bandwidth, high-reliability and low-latency 5G connectivity, but also flexible, low-latency and cost-effective computing deployment. This paper proposes a novel solution for decomposing and distributing the end-to-end 360 video streaming service across three computing tiers, namely cloud, edge and constrained fog, in order of proximity to the end-user client. The streaming service is aided by an adaptive viewport technique. The proposed solution is based on the H2020 5G-CORAL system architecture, using a micro-services-based design and unified orchestration and control across all three tiers based on Fog05. Performance evaluation of the proposed solution shows a noticeable reduction in bandwidth consumption, energy consumption, and deployment costs, compared to a solution where the streaming service is delivered entirely from one computing location such as the cloud.
This work has been partially funded by the H2020 collaborative Europe/Taiwan research project 5G-CORAL (grant num. 761586)
Video Adaptation for High-Quality Content Delivery
Modern video players employ complex algorithms to adapt the bitrate of the video that is shown to the user. Bitrate adaptation requires a tradeoff between reducing the probability that the video freezes (rebuffers) and enhancing the quality of the video. A bitrate that is too high leads to frequent rebuffering, while a bitrate that is too low leads to poor video quality. In this dissertation we propose video-adaptation algorithms to deliver content and maximize the viewer's quality of experience (QoE).
Video providers partition videos into short segments and encode each segment at multiple bitrates. The video player adaptively chooses the bitrate of each segment to download, possibly choosing different bitrates for successive segments. We formulate bitrate adaptation as a utility-maximization problem, and design algorithms to provide provably near-optimal time-average utility.
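The utility-maximization view of bitrate adaptation can be sketched with a buffer-based score. The variant below is a simplified, hypothetical illustration: the constants, the log utility, and the scoring rule are assumptions, not the dissertation's exact formulation.

```python
import math

# Simplified buffer-based utility score: for each candidate bitrate, trade
# off a log-utility for quality against current buffer occupancy. With a
# small buffer the score favours low bitrates (safety); with a large buffer
# it favours high bitrates (quality). Constants are illustrative.

def choose_bitrate(bitrates_kbps, buffer_s, V=0.9, gamma_p=5.0):
    """Return the bitrate (kbps) with the highest score for this segment."""
    base = bitrates_kbps[0]                   # lowest bitrate as reference
    best, best_score = None, float("-inf")
    for r in bitrates_kbps:
        utility = math.log(r / base)          # diminishing returns in quality
        score = (V * (utility + gamma_p) - buffer_s) / r
        if score > best_score:
            best, best_score = r, score
    return best

ladder = [1000, 2500, 5000, 8000]
print(choose_bitrate(ladder, buffer_s=2.0))   # 1000 (low buffer: play safe)
print(choose_bitrate(ladder, buffer_s=20.0))  # 8000 (ample buffer: quality)
```

The key design point this illustrates is that the decision depends only on the buffer level and the encoding ladder, not on a throughput estimate, which is what makes such rules amenable to theoretical utility guarantees.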
Real-world systems are generally too complex to be fully represented in a theoretical model and thus present a new set of challenges. We design algorithms that deliver video on production systems, maintaining the strengths of the theoretical algorithms while also tackling challenges faced in production. Our algorithms are now part of the official DASH reference player dash.js and are being used by video providers in production environments.
Most online video is streamed via HTTP over TCP. TCP provides reliable delivery at the expense of additional latency incurred when retransmitting lost packets and head-of-line blocking. Using QUIC allows the video player to tolerate some packet loss without incurring the performance penalties. We design and implement algorithms that exploit this added flexibility to provide higher overall QoE by reducing latency and rebuffering while allowing some packet loss.
Recently, virtual reality content has been increasing in popularity, and delivering 360° video comes with new challenges and opportunities. The viewing space is often partitioned into tiles, and a viewer using a head-mounted display only sees a subset of the tiles at any time. We develop an open source simulation environment for fast and reproducible testing of 360° algorithms. We develop adaptation algorithms that provide high QoE by allocating more bandwidth resources to deliver the tiles that the viewer is more likely to see, while ensuring that the video player reacts in a timely manner when the viewer changes their head pose.
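The tile-prioritization idea can be sketched as a probability-weighted bandwidth split with a per-tile floor, so out-of-view tiles still get a base quality. The names and the allocation rule are hypothetical; the dissertation's algorithms are more involved.

```python
# Sketch: split a bandwidth budget across 360-degree tiles in proportion to
# the probability that each tile falls in the viewer's viewport, keeping a
# small floor for every tile. Illustrative assumptions throughout.

def allocate_tile_bitrates(view_probs, budget_kbps, floor_kbps=100.0):
    """view_probs: dict tile_id -> viewing probability (sums to ~1).
    Returns dict tile_id -> allocated bitrate (kbps)."""
    n = len(view_probs)
    spare = budget_kbps - n * floor_kbps   # budget left after the floor
    assert spare >= 0, "budget too small for the per-tile floor"
    total_p = sum(view_probs.values())
    return {tile: floor_kbps + spare * p / total_p
            for tile, p in view_probs.items()}

probs = {"front": 0.6, "left": 0.2, "right": 0.2, "back": 0.0}
alloc = allocate_tile_bitrates(probs, budget_kbps=4000.0)
print(alloc)
# {'front': 2260.0, 'left': 820.0, 'right': 820.0, 'back': 100.0}
```

The floor matters because the prediction can be wrong: if the viewer turns toward a zero-probability tile, the player still has something to show while it reallocates.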
Doctor of Philosophy dissertation
Interactive editing and manipulation of digital media is a fundamental component of digital content creation. One medium in particular, digital imagery, has seen a recent increase in the popularity of large or even massive image formats. Unfortunately, current systems and techniques are rarely concerned with scalability or usability for these large images. Moreover, processing massive (or even large) imagery is assumed to be an off-line, automatic process, although many problems associated with these datasets require human intervention for high-quality results. This dissertation details how to design interactive image techniques that scale. In particular, massive imagery is typically constructed as a seamless mosaic of many smaller images. The focus of this work is the creation of new technologies to enable user interaction in the formation of these large mosaics. While an interactive system for all stages of the mosaic creation pipeline is a long-term research goal, this dissertation concentrates on the last phase of the pipeline: the composition of registered images into a seamless composite. The work detailed in this dissertation provides the technologies to fully realize interactive editing in mosaic composition on image collections ranging from the very small to the massive in scale.
Streaming and 3D mapping of agri-data on mobile devices
Farm monitoring and operations generate heterogeneous AGRI-data from a variety of different sources that have the potential to be delivered to users 'on the go' and in the field to inform farm decision making. A software framework capable of interfacing with existing web mapping services to deliver in-field farm data on commodity mobile hardware was developed and tested. This raised key research challenges related to the robustness of data streaming methods under typical farm connectivity scenarios, and the mapping and 3D rendering of AGRI-data in an engaging and intuitive way. The presentation of AGRI-data in a 3D and interactive context was explored using different visualisation techniques; currently the 2D presentation of AGRI-data is the dominant practice, despite the fact that mobile devices can now support sophisticated 3D graphics via programmable pipelines. The testing found that WebSockets were the most reliable streaming method for high-resolution image/texture data. From our focus groups there was no single visualisation technique that was preferred, demonstrating that offering a range of methods is a good way to satisfy a large user base. Improved 3D experience on mobile phones is set to revolutionize the multimedia market, and a key challenge is identifying useful 3D visualisation methods and navigation tools that support the exploration of data-driven 3D interactive visualisation frameworks for AGRI-data.
Realizing XR Applications Using 5G-Based 3D Holographic Communication and Mobile Edge Computing
3D holographic communication has the potential to revolutionize the way people interact with each other in virtual spaces, offering immersive and realistic experiences. However, the high data rates, extremely low latency, and heavy computation required to enable this technology pose a significant challenge. To address this challenge, we propose a novel job scheduling algorithm that leverages Mobile Edge Computing (MEC) servers in order to minimize the total latency in 3D holographic communication. One motivation for this work is to prevent the uncanny valley effect, which can occur when latency hinders the seamless, real-time rendering of holographic content, leading to a less convincing and less engaging user experience. Our proposed algorithm dynamically allocates computation tasks to MEC servers, considering the network conditions, the computational capabilities of the servers, and the requirements of the 3D holographic communication application. We conduct extensive experiments to evaluate the performance of our algorithm in terms of latency reduction, and the results demonstrate that our approach significantly outperforms other baseline methods. Furthermore, we present a practical scenario involving Augmented Reality (AR), which not only illustrates the applicability of our algorithm but also highlights the importance of minimizing latency in achieving high-quality holographic views. By efficiently distributing the computation workload among MEC servers and reducing the overall latency, our proposed algorithm enhances the user experience in 3D holographic communications and paves the way for the widespread adoption of this technology in various applications, such as telemedicine, remote collaboration, and entertainment.
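The scheduling idea, assigning computation tasks to MEC servers so that total latency stays low, can be sketched with a greedy earliest-finish-time rule. The latency model (network delay + queueing + compute) and all parameters below are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: greedy earliest-finish-time assignment of tasks to MEC servers.
# Latency per task = one-way network delay + queueing (server busy time)
# + compute time. Model and numbers are illustrative assumptions.

def schedule(tasks, servers):
    """tasks: list of (task_id, cycles).
    servers: dict name -> {'net_ms': one-way delay, 'speed': cycles/ms}.
    Returns (assignment dict, makespan in ms)."""
    busy = {name: 0.0 for name in servers}   # accumulated compute per server
    plan, makespan = {}, 0.0
    for task_id, cycles in tasks:
        best, best_finish = None, float("inf")
        for name, s in servers.items():
            finish = s["net_ms"] + busy[name] + cycles / s["speed"]
            if finish < best_finish:
                best, best_finish = name, finish
        plan[task_id] = best
        busy[best] += cycles / servers[best]["speed"]
        makespan = max(makespan, best_finish)
    return plan, makespan

servers = {"edge1": {"net_ms": 2.0, "speed": 10.0},
           "edge2": {"net_ms": 5.0, "speed": 20.0}}
tasks = [("depth", 100.0), ("mesh", 400.0), ("render", 100.0)]
print(schedule(tasks, servers))
# ({'depth': 'edge2', 'mesh': 'edge2', 'render': 'edge1'}, 30.0)
```

Note how the faster but more distant server wins the heavy tasks, while the light render task migrates to the nearer server once the fast one is queued up.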
A Reconfigurable Radio Architecture for Cognitive Radio in Emergency Networks
Cognitive Radio has been proposed as a promising technology to solve today's spectrum scarcity problem. Cognitive Radio is able to sense the spectrum to find free bands, which it can then use optimally without causing interference to the licensed user. Within the scope of the Adaptive Adhoc Freeband (AAF) project, an emergency network built on top of Cognitive Radio is proposed. New functional requirements and system specifications for Cognitive Radio have to be supported by a reconfigurable architecture. In this paper, we propose a heterogeneous reconfigurable System-on-Chip (SoC) architecture to enable the evolution from the traditional software-defined radio to Cognitive Radio.
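The spectrum-sensing step can be illustrated with a toy energy detector: estimate the mean power per channel and flag channels below a threshold as free for opportunistic use. The channel numbers, threshold, and sample data are illustrative assumptions, not part of the AAF design.

```python
# Toy energy-detection spectrum sensing: average power per channel;
# channels below the threshold are considered free for opportunistic use.
# Threshold and sample data are illustrative assumptions.

def free_channels(channel_samples, threshold):
    """channel_samples: dict channel_id -> list of baseband signal samples.
    Returns the channels whose mean power is below 'threshold'."""
    free = []
    for ch, samples in channel_samples.items():
        power = sum(x * x for x in samples) / len(samples)
        if power < threshold:
            free.append(ch)
    return free

bands = {
    36: [0.9, -1.1, 1.0, -0.8],     # licensed user active: high energy
    40: [0.05, -0.02, 0.01, 0.03],  # mostly noise: candidate channel
    44: [0.01, 0.02, -0.01, 0.00],  # mostly noise: candidate channel
}
print(free_channels(bands, threshold=0.1))  # [40, 44]
```

In a real detector the threshold would be derived from the noise floor and a target false-alarm probability, and sensing would be repeated to track returning licensed users.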