
    Enhancing the broadcasted TV consumption experience with broadband omnidirectional video content

    [EN] The current wide range of heterogeneous consumption devices and delivery technologies offers the opportunity to provide related content that enhances and enriches the TV consumption experience. This paper describes a solution to handle the delivery and synchronous consumption of traditional broadcast TV content and related broadband omnidirectional video content. The solution is intended to support both delivery technologies (broadcast and broadband) and has been designed to be compatible with the Hybrid Broadcast Broadband TV (HbbTV) standard. In particular, some specifications of HbbTV, such as the use of global timestamps or discovery mechanisms, have been adopted. However, additional functionalities have been designed to achieve accurate synchronization and to support the playout of omnidirectional video content on current consumption devices. To prove that commercial hybrid environments could be immediately enhanced with this type of content, the proposed solution has been included in a testbed and evaluated both objectively and subjectively. Regarding the omnidirectional video content, the two most common types of projection are supported: equirectangular and cube map. The results of the objective assessment show that the playout of broadband-delivered omnidirectional video content on companion devices can be accurately synchronized with the playout of traditional broadcast 2D content on the TV. The results of the subjective assessment show the high interest of users in this new type of enriched and immersive experience, which helps enhance their Quality of Experience (QoE) and engagement.

    This work was supported by the Generalitat Valenciana, Investigación Competitiva Proyectos, through the Research and Development Program Grants for Research Groups to be Consolidated, under Grant AICO/2017/059 and Grant AICO/2017

Marfil-Reguero, D.; Boronat, F.; López, J.; Vidal Meló, A. (2019). Enhancing the broadcasted TV consumption experience with broadband omnidirectional video content. IEEE Access. 7:171864-171883. https://doi.org/10.1109/ACCESS.2019.2956084
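Of the two projections the testbed supports, the equirectangular one has a particularly simple mapping from viewing direction to panorama pixel coordinates. The sketch below is illustrative only and is not code from the paper:

```python
import math

def equirect_to_pixel(yaw, pitch, width, height):
    """Map a viewing direction (yaw, pitch, in radians) to pixel
    coordinates on an equirectangular panorama of size width x height.
    Yaw is in [-pi, pi), pitch in [-pi/2, pi/2]."""
    u = (yaw / (2 * math.pi) + 0.5) * width   # longitude -> horizontal axis
    v = (0.5 - pitch / math.pi) * height      # latitude  -> vertical axis
    return u, v

# The panorama centre corresponds to yaw = 0, pitch = 0.
print(equirect_to_pixel(0.0, 0.0, 3840, 1920))  # (1920.0, 960.0)
```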

    Visual Distortions in 360-degree Videos.

    Omnidirectional (or 360°) images and videos are emergent signals used in many areas, such as robotics and virtual/augmented reality. In particular, for virtual reality applications, they allow an immersive experience in which the user, wearing a head-mounted display, can interactively navigate through a scene with three degrees of freedom. Current approaches for capturing, processing, delivering, and displaying 360° content, however, present many open technical challenges and introduce several types of distortion in the visual signal. Some of the distortions are specific to the nature of 360° images and often differ from those encountered in classical visual communication frameworks. This paper provides a first comprehensive review of the most common visual distortions that alter 360° signals as they pass through the different processing elements of the visual communication pipeline. While their impact on viewers' visual perception and on the immersive experience at large is still unknown and remains an open research topic, this review serves the purpose of proposing a taxonomy of the visual distortions that can be encountered in 360° signals. Their underlying causes in the end-to-end 360° content distribution pipeline are identified. This taxonomy is essential as a basis for comparing different processing techniques, such as visual enhancement, encoding, and streaming strategies, and for enabling the effective design of new algorithms and applications. It is also a useful resource for the design of psycho-visual studies aiming to characterize human perception of 360° content in interactive and immersive applications.

    Streaming and User Behaviour in Omnidirectional Videos

    Omnidirectional videos (ODVs) have gone beyond the passive paradigm of traditional video, offering higher degrees of immersion and interaction. The revolutionary novelty of this technology is the possibility for users to interact with the surrounding environment, and to feel a sense of engagement and presence in a virtual space. Users are clearly the main driving force of immersive applications, and consequently the services need to be properly tailored to them. In this context, this chapter highlights the importance of the new role of users in ODV streaming applications, and thus the need for understanding their behaviour while navigating within ODVs. A comprehensive overview of the research efforts aimed at advancing ODV streaming systems is also presented. In particular, the state-of-the-art solutions examined in this chapter are distinguished into system-centric and user-centric streaming approaches: the former is a fairly straightforward extension of well-established solutions for the 2D video pipeline, while the latter benefits from an understanding of users' behaviour to enable more personalised ODV streaming.
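As a rough illustration of the user-centric idea, a tile-based streamer might allocate bitrate according to the predicted viewport. The function below is a toy sketch with made-up rates and a made-up budget policy, not a scheme taken from the solutions surveyed in the chapter:

```python
def allocate_tile_qualities(num_tiles, viewport_tiles, budget_kbps,
                            hi_rate=800, lo_rate=100):
    """Toy user-centric allocation: tiles inside the predicted viewport
    get a high-quality representation, the rest get a low one, subject
    to a total bandwidth budget. All rates are illustrative."""
    alloc = {}
    for tile in range(num_tiles):
        want = hi_rate if tile in viewport_tiles else lo_rate
        if sum(alloc.values()) + want > budget_kbps:
            want = lo_rate  # fall back once the budget is exhausted
        alloc[tile] = want
    return alloc

# Two viewport tiles out of four, 2000 kbps budget.
print(allocate_tile_qualities(4, {0, 1}, 2000))
```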

    OPDN: Omnidirectional Position-aware Deformable Network for Omnidirectional Image Super-Resolution

    360° omnidirectional images have gained research attention due to their immersive and interactive experience, particularly in AR/VR applications. However, they suffer from lower angular resolution because they are captured by fisheye lenses with the same sensor size as is used for planar images. To solve this issue, we propose a two-stage framework for 360° omnidirectional image super-resolution. The first stage employs two branches: model A, which incorporates omnidirectional position-aware deformable blocks (OPDB) and Fourier upsampling, and model B, which adds a spatial frequency fusion module (SFF) to model A. Model A aims to enhance the extraction of 360° image positional information, while model B further focuses on the high-frequency information of 360° images. The second stage performs same-resolution enhancement based on the structure of model A with a pixel unshuffle operation. In addition, we collected data from YouTube to improve the fitting ability of the transformer, and created pseudo low-resolution images using a degradation network. Our proposed method achieves superior performance and wins the NTIRE 2023 challenge of 360° omnidirectional image super-resolution.

    Comment: Accepted to CVPRW 202
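The pixel unshuffle used in the second stage is a standard space-to-depth rearrangement (the inverse of pixel shuffle). A minimal NumPy sketch, not the authors' implementation:

```python
import numpy as np

def pixel_unshuffle(x, r):
    """Rearrange an (H, W, C) array into (H//r, W//r, C*r*r) by folding
    each r x r spatial block into the channel dimension. H and W must
    be divisible by r."""
    h, w, c = x.shape
    x = x.reshape(h // r, r, w // r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)  # group the r x r block per output pixel
    return x.reshape(h // r, w // r, c * r * r)

x = np.arange(16).reshape(4, 4, 1)
print(pixel_unshuffle(x, 2).shape)  # (2, 2, 4)
```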

    Image-Based Rendering Of Real Environments For Virtual Reality


    Human-centric quality management of immersive multimedia applications

    Augmented Reality (AR) and Virtual Reality (VR) multimodal systems are the latest trend within the field of multimedia. As they emulate the senses by means of omni-directional visuals, 360-degree sound, motion tracking and touch simulation, they are able to create a strong feeling of presence and interaction with the virtual environment. These experiences can be applied to virtual training (Industry 4.0), tele-surgery (healthcare) or remote learning (education). However, given the strong time and task sensitivity of these applications, it is of great importance to sustain the end-user quality, i.e. the Quality of Experience (QoE), at all times. Lack of synchronization and quality degradation need to be reduced to a minimum to avoid feelings of cybersickness or loss of immersiveness and concentration. This means that there is a need to shift quality management from system-centered performance metrics towards a more human, QoE-centered approach. However, this requires novel techniques in the three areas of the QoE-management loop (monitoring, modelling and control). This position paper identifies open areas of research to fully enable human-centric management of immersive multimedia. To this extent, four main dimensions are put forward: (1) task- and well-being-driven subjective assessment; (2) real-time QoE modelling; (3) accurate viewport prediction; (4) Machine Learning (ML)-based quality optimization and content recreation. This paper discusses the state of the art and provides possible solutions to tackle the open challenges.
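Dimension (3), viewport prediction, can be illustrated in its simplest form by linear extrapolation of head orientation. The sketch below is a naive baseline for intuition only, not a method proposed in the paper (real predictors use richer models over trajectory and content saliency):

```python
def predict_yaw(samples, horizon):
    """Naive viewport prediction: linearly extrapolate head yaw.
    samples: list of (timestamp_s, yaw_deg) pairs, at least two;
    horizon: seconds ahead to predict."""
    (t0, y0), (t1, y1) = samples[-2], samples[-1]
    velocity = (y1 - y0) / (t1 - t0)  # deg/s from the last two samples
    return y1 + velocity * horizon

# Head turning at 10 deg/s; predict 0.5 s ahead.
print(predict_yaw([(0.0, 0.0), (1.0, 10.0)], 0.5))  # 15.0
```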