
    Toward hyper-realistic and interactive social VR experiences in live TV scenarios

    Social Virtual Reality (VR) allows multiple distributed users to get together in shared virtual environments to socially interact and/or collaborate. This article explores the applicability and potential of Social VR in the broadcast sector, focusing on a live TV show use case. For this purpose, a novel and lightweight Social VR platform is introduced. The platform provides three key features that set it apart from state-of-the-art solutions. First, it allows real-time integration of remote users into shared virtual environments, using realistic volumetric representations and affordable capturing systems, thus not relying on synthetic avatars. Second, it supports a seamless and rich integration of heterogeneous media formats, including 3D scenarios, dynamic volumetric representations of users, and (live/stored) stereoscopic 2D and 180º/360º videos. Third, it enables low-latency interaction between the volumetric users and a video-based presenter (via chroma keying), as well as dynamic control of the media playout to adapt to the session's evolution. The production process of an immersive TV show, used to evaluate the experience, is also described. On the one hand, the results from objective tests show the satisfactory performance of the platform. On the other hand, the promising results from user tests support the potential impact of the presented platform, opening up new opportunities in the broadcast sector, among others.

    This work has been partially funded by the European Union's Horizon 2020 program, under agreement nº 762111 (VRTogether project), and partially by ACCIÓ, under agreement COMRDI18-1-0008 (ViVIM project). Work by Mario Montagud has been additionally funded by the Spanish Ministry of Science, Innovation and Universities with a Juan de la Cierva – Incorporación grant (reference IJCI-2017-34611). The authors would also like to thank the EU H2020 VRTogether project consortium for their relevant and valuable contributions.
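As an aside, the chroma-keying step mentioned in the abstract can be illustrated in a few lines of OpenCV. The sketch below is a generic green-screen composite under assumed HSV bounds, not the platform's actual pipeline; the file names in the usage comment are hypothetical.

```python
# Minimal chroma-keying sketch: composites a presenter captured against a
# green screen onto a background frame of the same size.
import cv2
import numpy as np

def composite_presenter(presenter_bgr, background_bgr,
                        lower_hsv=(35, 80, 80), upper_hsv=(85, 255, 255)):
    """Replace green-screen pixels in the presenter frame with the background.
    Both frames must have identical dimensions. HSV bounds are illustrative
    and would be tuned to the actual studio lighting."""
    hsv = cv2.cvtColor(presenter_bgr, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    mask = cv2.medianBlur(green, 5)  # suppress speckle noise in the key mask
    fg = cv2.bitwise_and(presenter_bgr, presenter_bgr,
                         mask=cv2.bitwise_not(mask))
    bg = cv2.bitwise_and(background_bgr, background_bgr, mask=mask)
    return cv2.add(fg, bg)  # fg is zero where bg is kept, and vice versa

# Example usage with hypothetical file names:
# out = composite_presenter(cv2.imread("presenter.png"), cv2.imread("studio.png"))
```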

    Enabling collaboration in virtual reality navigators

    In this paper we characterize a feature superset for Collaborative Virtual Reality Environments (CVRE) and derive a component framework that transforms stand-alone VR navigators into full-fledged multithreaded collaborative environments. Our approach relies on a cost-effective and extensible technique for loading software components into separate POSIX threads for rendering, user interaction, and network communications, with a top layer that manages session collaboration. The framework recasts a VR navigator under a distributed peer-to-peer topology for scene and object sharing, using callback hooks for broadcasting remote events and multi-camera perspective sharing with avatar interaction. We validate the framework by applying it to our own ALICE VR Navigator. Experimental results show that our approach performs well in the collaborative inspection of complex models.
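As a rough illustration of this thread decomposition (not the ALICE code itself), the Python sketch below stands in for the three component threads, with queues playing the role of the event-passing hooks; the names and the echo-based "network" stage are assumptions for the toy example.

```python
# Toy decomposition into rendering, interaction, and network components, each
# running in its own thread and communicating through queues.
import queue
import threading

event_q = queue.Queue()  # user-interaction events headed for the network layer
scene_q = queue.Queue()  # remote scene/object updates headed for the renderer

def interaction_loop():
    # A real component would poll input devices; we emit one placeholder event.
    event_q.put({"type": "move_avatar", "dx": 0.1})

def network_loop():
    # A real component would broadcast events to peers; we echo them locally.
    while not event_q.empty():
        scene_q.put(event_q.get())

def render_loop():
    # A real component would redraw the shared scene; we just consume updates.
    while not scene_q.empty():
        print("applying update:", scene_q.get())

# Python threads wrap POSIX threads on POSIX systems; each component gets its
# own thread, joined in order here so the toy output is deterministic.
for target in (interaction_loop, network_loop, render_loop):
    t = threading.Thread(target=target)
    t.start()
    t.join()
```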

    Network reputation-based quality optimization of video delivery in heterogeneous wireless environments

    The mass-market adoption of high-end mobile devices and the increasing amount of video traffic have led mobile operators to adopt various solutions to cope with the explosion of mobile broadband data traffic while ensuring high Quality of Service (QoS) levels for their services. Deploying small-cell base stations within existing macro-cellular networks and offloading traffic from the large macro-cells to the small cells is seen as a promising way to increase capacity and improve network performance at low cost. Diverse technologies are also used in parallel. The result is a heterogeneous network environment (HetNet), part of next-generation network deployments. In this context, this thesis makes a step toward the "Always Best Experience" paradigm, which considers mobile users seamlessly roaming in the HetNet environment. Supporting ubiquitous connectivity and enabling very good quality of rich mobile services anywhere and anytime is highly challenging, mostly due to the heterogeneity of the selection criteria, such as: application requirements (e.g., voice, video, data); different device types with various capabilities (e.g., smartphones, netbooks, laptops); multiple overlapping networks using diverse technologies (e.g., Wireless Local Area Networks (IEEE 802.11), cellular Long Term Evolution (LTE) networks); and different user preferences. In fact, mobile users face a complex decision when they need to dynamically select the best-value network to connect to in order to get the "Always Best Experience". This thesis presents three major contributions to solve this problem: 1) the Location-based Network Prediction mechanism in heterogeneous wireless networks (LNP) provides a shortlist of the best available networks to the mobile user based on their location, history record, and routing plan; 2) the Reputation-oriented Access Network Selection mechanism (RANS) selects the best-reputation network from the available networks based on the best trade-off between QoS, energy consumption, and monetary cost, where network reputation is defined from previous user-network interactions and the consequent user experience with the network; 3) Network Reputation-based Quality Optimization of Video Delivery in heterogeneous networks (NRQOVD) makes use of a reputation mechanism to enhance video content quality via multipath delivery or delivery adaptation.
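The sketch below is a hedged illustration of the kind of trade-off RANS describes: score each candidate network by weighting QoS, energy, and monetary cost, scaled by a reputation term derived from past user-network interactions. The weights, value ranges, and scoring form are illustrative assumptions, not the thesis's actual formulas.

```python
# Reputation-weighted network selection: all inputs normalized to [0, 1];
# higher qos/reputation and lower energy/cost are better.
def network_score(qos, energy, cost, reputation,
                  w_qos=0.5, w_energy=0.25, w_cost=0.25):
    utility = w_qos * qos + w_energy * (1 - energy) + w_cost * (1 - cost)
    return reputation * utility  # a poorly reputed network is discounted

# Illustrative candidate values for two overlapping networks.
candidates = {
    "LTE":    dict(qos=0.9, energy=0.7, cost=0.8, reputation=0.85),
    "WLAN-1": dict(qos=0.7, energy=0.3, cost=0.1, reputation=0.90),
}
best = max(candidates, key=lambda n: network_score(**candidates[n]))
print(best)  # WLAN-1: cheaper and better reputed, despite lower raw QoS
```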

    H2-Stereo: High-Speed, High-Resolution Stereoscopic Video System

    High-speed, high-resolution stereoscopic (H2-Stereo) video allows us to perceive dynamic 3D content at fine granularity. The acquisition of H2-Stereo video, however, remains challenging with commodity cameras. Existing spatial super-resolution or temporal frame interpolation methods provide compromised solutions that lack temporal or spatial details, respectively. To alleviate this problem, we propose a dual camera system, in which one camera captures high-spatial-resolution low-frame-rate (HSR-LFR) videos with rich spatial details, and the other captures low-spatial-resolution high-frame-rate (LSR-HFR) videos with smooth temporal details. We then devise a Learned Information Fusion network (LIFnet) that exploits the cross-camera redundancies to enhance both camera views to high spatiotemporal resolution (HSTR) for reconstructing the H2-Stereo video effectively. We utilize a disparity network to transfer spatiotemporal information across views even in large-disparity scenes, based on which we propose disparity-guided flow-based warping for the LSR-HFR view and complementary warping for the HSR-LFR view. A multi-scale fusion method in the feature domain is proposed to minimize occlusion-induced warping ghosts and holes in the HSR-LFR view. LIFnet is trained end-to-end on a high-quality stereo video dataset that we collected from YouTube. Extensive experiments demonstrate that our model outperforms existing state-of-the-art methods for both views on synthetic data and on camera-captured real data with large disparity. Ablation studies explore various aspects of our system, including spatiotemporal resolution, camera baseline, camera desynchronization, and long/short exposures, to fully understand its capability for potential applications.
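One concrete detail such a dual-camera system must handle is aligning the two streams in time. The sketch below pairs each high-frame-rate frame index with its temporally nearest low-frame-rate keyframe; the 30/240 fps figures are illustrative assumptions, not the paper's settings.

```python
# Pair each LSR-HFR frame with the nearest HSR-LFR keyframe in time, the kind
# of correspondence a fusion network could then consume.
def nearest_keyframe(t_hfr, lfr_fps=30, hfr_fps=240):
    """Index of the HSR-LFR frame closest in time to HFR frame t_hfr."""
    t_seconds = t_hfr / hfr_fps
    return round(t_seconds * lfr_fps)

pairs = [(t, nearest_keyframe(t)) for t in range(9)]
print(pairs)  # HFR frames 0..8 all map to LFR keyframe 0 or 1
```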

    Multi-party holomeetings: toward a new era of low-cost volumetric holographic meetings in virtual reality

    Fueled by advances in multi-party communications, the adoption of increasingly mature immersive technologies, and the COVID-19 pandemic, a new wave of social virtual reality (VR) platforms has emerged to support socialization, interaction, and collaboration among multiple remote users who are integrated into shared virtual environments. Social VR aims to increase levels of (co-)presence and interaction quality by overcoming the limitations of 2D windowed representations in traditional multi-party video conferencing tools, although most existing solutions rely on 3D avatars to represent users. This article presents a social VR platform that supports real-time volumetric holographic representations of users based on point clouds captured by off-the-shelf RGB-D sensors, and it analyzes the platform's potential for conducting interactive holomeetings (i.e., holoconferencing scenarios). This work evaluates the platform's performance and readiness for conducting meetings with up to four users, and it provides insights into aspects of the user experience when using single-camera and low-cost capture systems in scenarios with both frontal and side viewpoints. Overall, the obtained results confirm the platform's maturity and the potential of holographic communications for conducting interactive multi-party meetings, even when using low-cost, single-camera capture systems in scenarios where users are sitting or have limited translational movement along the X, Y, and Z axes within the 3D virtual environment (commonly known as 3 Degrees of Freedom plus, 3DoF+).

    The authors would like to thank the members of the EU H2020 VR-Together consortium for their valuable contributions, especially Marc Martos and Mohamad Hjeij for their support in development and evaluation tasks. This work has been partially funded by: the EU's Horizon 2020 program, under agreement nº 762111 (VR-Together project); by ACCIÓ (Generalitat de Catalunya), under agreement COMRDI18-1-0008 (ViVIM project); and by Cisco Research and the Silicon Valley Community Foundation, under the grant Extended Reality Multipoint Control Unit (ID: 1779376). The work by Mario Montagud has been additionally funded by Spain's Agencia Estatal de Investigación under grant RYC2020-030679-I (AEI / 10.13039/501100011033) and by Fondo Social Europeo. The work of David Rincón was supported by Spain's Agencia Estatal de Investigación within the Ministerio de Ciencia e Innovación under Project PID2019-108713RB-C51 MCIN/AEI/10.13039/501100011033.
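For context, the point clouds such a platform works with come from standard pinhole back-projection of RGB-D depth maps. The sketch below shows that textbook step with placeholder intrinsics; it is not the platform's capture code.

```python
# Back-project a depth map into a camera-frame point cloud (pinhole model).
import numpy as np

def depth_to_points(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """depth_m: (H, W) depth in meters. Returns (N, 3) XYZ points in the
    camera frame; zero-depth (invalid) pixels are dropped. The intrinsics
    defaults are made-up placeholders for a VGA-resolution RGB-D sensor."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.reshape(-1)
    x = ((u.reshape(-1) - cx) / fx) * z
    y = ((v.reshape(-1) - cy) / fy) * z
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]

# Example: a flat wall 2 m away yields one point per valid pixel, all at Z == 2.
print(depth_to_points(np.full((480, 640), 2.0)).shape)  # (307200, 3)
```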

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., toward the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and who also want to tackle the challenge of ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.
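To make the core problem concrete, the sketch below shows one classic inter-destination synchronization policy of the kind the handbook surveys: align every receiver to the furthest-behind playout position, ignoring offsets below a perceptual threshold. The threshold value and the sync-to-slowest policy are illustrative assumptions.

```python
# Inter-destination sync sketch: receivers report their current playout
# position; each gets the adjustment that aligns it with the slowest one.
def playout_adjustments(playout_positions_ms, threshold_ms=40):
    """Map each receiver to a playout adjustment in ms (negative = pause or
    re-buffer to fall back); offsets under the threshold are left alone."""
    reference = min(playout_positions_ms.values())  # sync to slowest receiver
    return {rid: -(pos - reference) if pos - reference > threshold_ms else 0
            for rid, pos in playout_positions_ms.items()}

print(playout_adjustments({"tv": 120_300, "tablet": 120_050, "phone": 120_080}))
# {'tv': -250, 'tablet': 0, 'phone': 0}  -- only the TV is noticeably ahead
```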

    Performance analysis of multi-source wireless multimedia content delivery

    In order to create an improved experience under variable network delivery conditions, immersive multimedia content can be delivered over existing network environments from multiple sources. These sources are normally servers located in the cloud, in various locations. Storytelling and related content, such as the immersive opera multimedia data in the context of the European Horizon 2020 project TRACTION, require multimedia players to be able to receive content simultaneously from several locations and, at times, merge the content, creating new content in real time. For instance, 360° recordings and polygonal 3D content can be delivered from different locations, and the end-user receives the unified content on his or her device. This paper introduces a study of how devices can be analysed, in terms of metrics, when receiving multimedia content from multiple sources, given that both the network and the devices have constraints regarding performance and video quality.
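A minimal sketch of the kind of per-source measurement such a study relies on follows; the URLs in the usage comment are hypothetical placeholders. Each segment fetch is timed and a per-source throughput is derived, which a player could then log per content source.

```python
# Time a media-segment download and derive its throughput.
import time
from urllib.request import urlopen

def fetch_with_metrics(url):
    """Download one media segment and return (bytes, seconds, Mbit/s)."""
    start = time.monotonic()
    data = urlopen(url).read()
    elapsed = time.monotonic() - start
    mbps = (len(data) * 8) / (elapsed * 1_000_000) if elapsed > 0 else float("inf")
    return len(data), elapsed, mbps

# Example usage with hypothetical segment URLs from two cloud sources:
# for url in ("https://cdn-a.example/seg1.mp4",
#             "https://cdn-b.example/360_seg1.mp4"):
#     print(url, fetch_with_metrics(url))
```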
