
    An Image-Space Split-Rendering Approach to Accelerate Low-Powered Virtual Reality

    Virtual Reality systems provide many opportunities for scientific research and consumer enjoyment; however, they are more demanding than traditional desktop applications and require a wired connection to a desktop to deliver maximum quality. Standalone headsets that are not tethered to a computer exist, but they are powered by mobile GPUs, which offer limited rendering power compared to desktop GPUs. Alternative approaches to improving performance on mobile devices use server rendering to produce frames for a client and treat the client largely as a display device. However, current streaming solutions suffer from high end-to-end latency due to processing and networking requirements, as well as underutilization of the client. We propose a networked split-rendering approach to achieve faster end-to-end image presentation rates on the mobile device while preserving image quality. Our solution uses an image-space division of labour between the server-side GPU and the mobile client, and achieves a significantly faster runtime than both client-only rendering and a thin-client approach that relies almost entirely on the server.
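    The abstract does not spell out how the image-space division of labour is realized; the following minimal sketch (Python with NumPy) only illustrates the general idea of splitting a frame between a server and a mobile client in proportion to their rendering throughput and compositing the result on the client. The row-wise split, the throughput figures, and all function names are assumptions for illustration, not the paper's actual algorithm.

    # Minimal sketch of an image-space split: rows of the frame are divided
    # between server and client in proportion to estimated rendering throughput,
    # then composited on the client. Values and names are illustrative only.
    import numpy as np

    def split_row(height: int, server_px_per_ms: float, client_px_per_ms: float) -> int:
        """Row index at which the frame is split; rows below it are rendered locally."""
        return int(round(height * server_px_per_ms / (server_px_per_ms + client_px_per_ms)))

    def composite(server_band: np.ndarray, client_band: np.ndarray) -> np.ndarray:
        """Stack the server-rendered and client-rendered bands into one frame."""
        return np.vstack([server_band, client_band])

    if __name__ == "__main__":
        H, W = 1440, 1600                                          # example per-eye resolution
        split = split_row(H, server_px_per_ms=300.0, client_px_per_ms=100.0)
        server_band = np.zeros((split, W, 3), dtype=np.uint8)      # received over the network
        client_band = np.zeros((H - split, W, 3), dtype=np.uint8)  # rendered on the device
        frame = composite(server_band, client_band)
        print(f"server: {split} rows, client: {H - split} rows, frame: {frame.shape}")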

    Gaze trajectory prediction in the context of social robotics

    Social robotics is an emerging field of robotics that focuses on interactions between robots and humans. It has attracted much interest due to concerns about an aging society and the need for assistive environments. Within this context, this paper focuses on gaze control and eye tracking as a means of robot control. It aims to improve the usability of gaze-controlled human–machine interfaces by developing advanced algorithms for predicting the trajectory of the human gaze. The paper proposes two approaches to gaze-trajectory prediction, one probabilistic and one symbolic, both based on machine learning. The probabilistic method mixes two state models representing gaze locations and directions. The symbolic method treats gaze-trajectory prediction in the same way that word-prediction problems are handled in web browsers. Comparative experiments demonstrate the feasibility of both approaches and show that the probabilistic approach achieves better prediction results.
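    As a rough illustration of the probabilistic flavour of gaze-trajectory prediction, the sketch below trains a first-order Markov model over discretized gaze locations and predicts the most likely next cell. The paper's actual method mixes two state models (locations and directions); this toy version, with all class and variable names invented here, only conveys the general idea.

    # Toy next-fixation predictor: a first-order Markov chain over grid cells.
    from collections import Counter, defaultdict

    class GazeMarkovPredictor:
        def __init__(self):
            # current cell -> counts of observed next cells
            self.transitions = defaultdict(Counter)

        def fit(self, trajectories):
            """trajectories: iterable of sequences of grid-cell ids, e.g. (row, col) tuples."""
            for traj in trajectories:
                for cur, nxt in zip(traj, traj[1:]):
                    self.transitions[cur][nxt] += 1

        def predict(self, current_cell):
            """Most frequently observed successor of current_cell, or None if unseen."""
            counts = self.transitions.get(current_cell)
            return counts.most_common(1)[0][0] if counts else None

    predictor = GazeMarkovPredictor()
    predictor.fit([[(0, 0), (0, 1), (1, 1)], [(0, 0), (0, 1), (0, 2)]])
    print(predictor.predict((0, 0)))  # -> (0, 1)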

    From Capture to Display: A Survey on Volumetric Video

    Volumetric video, which offers immersive viewing experiences, is gaining increasing prominence. With its six degrees of freedom, it provides viewers with greater immersion and interactivity than traditional videos. Despite their potential, volumetric video services pose significant challenges. This survey conducts a comprehensive review of the existing literature on volumetric video. We first provide a general framework for volumetric video services, followed by a discussion of prerequisites for volumetric video, encompassing representations, open datasets, and quality assessment metrics. We then delve into the current methodologies for each stage of the volumetric video service pipeline, detailing capturing, compression, transmission, rendering, and display techniques. Lastly, we explore various applications enabled by this pioneering technology and present an array of research challenges and opportunities in the domain of volumetric video services. This survey aspires to provide a holistic understanding of this burgeoning field and to shed light on potential future research trajectories, aiming to bring the vision of volumetric video to fruition.

    Streaming and User Behaviour in Omnidirectional Videos

    Omnidirectional videos (ODVs) have gone beyond the passive paradigm of traditional video, offering higher degrees of immersion and interaction. The revolutionary novelty of this technology is the possibility for users to interact with the surrounding environment and to feel a sense of engagement and presence in a virtual space. Users are clearly the main driving force of immersive applications, and services consequently need to be properly tailored to them. In this context, this chapter highlights the new role of users in ODV streaming applications, and thus the need to understand their behaviour while navigating within ODVs. A comprehensive overview of the research efforts aimed at advancing ODV streaming systems is also presented. In particular, the state-of-the-art solutions examined in this chapter are divided into system-centric and user-centric streaming approaches: the former is a fairly straightforward extension of well-established solutions for the 2D video pipeline, while the latter benefits from an understanding of user behaviour to enable more personalised ODV streaming.

    Dynamic Viewport-Adaptive Rendering in Distributed Interactive VR Streaming: Optimizing viewport resolution under latency and viewport orientation constraints

    In streaming Virtual Reality to thin clients, one of the main concerns is the massive bandwidth requirement of VR video. Additionally, streaming VR requires a low latency of less than 25 ms to avoid cybersickness and provide a high Quality of Experience. Since a user views only a portion of the VR content sphere at a time, researchers have leveraged this to increase the relative quality of the user viewport compared to peripheral areas. This saves bandwidth, since the peripheral areas are streamed at a lower bitrate. In streaming 360° video this has resulted in the common strategy of tiling a video frame and delivering different quality tiles based on the currently available bandwidth and the user's viewport location. However, such an approach is not suitable for real-time interactive VR streaming. Furthermore, streaming only the user's viewport results in the user observing unrendered or very low-quality areas at higher latency values. In order to provide high viewport quality in interactive VR, we propose the novel method of Dynamic Viewport-Adaptive Rendering (DVAR). By rotating the frontal direction of the content sphere with the user's gaze, we can dynamically render more or less of the peripheral area and thus increase the proportional resolution of the frontal direction in the video frame. We show that DVAR can successfully compensate for different system RTT values while offering a significantly higher viewport resolution than other implementations. We further discuss how DVAR can easily be extended by other optimization methods and how head-movement prediction can be incorporated to let DVAR optimally determine the amount of peripheral area to render, thus providing an optimal viewport resolution given the system constraints.
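    The core trade-off behind DVAR, rendering just enough peripheral area to cover any head rotation that can occur during one system round trip, can be approximated with a small calculation. The sketch below is a back-of-the-envelope illustration in Python; the assumed maximum head angular velocity and all names are placeholders, not values from the abstract.

    # Estimate how much extra field of view to render for a given round-trip time,
    # assuming the head rotates no faster than max_head_speed_deg_per_s.
    def peripheral_margin_deg(rtt_ms: float, max_head_speed_deg_per_s: float = 120.0) -> float:
        """Extra field of view (degrees, per side) needed to mask one RTT of head motion."""
        return max_head_speed_deg_per_s * (rtt_ms / 1000.0)

    def rendered_fov_deg(viewport_fov_deg: float, rtt_ms: float) -> float:
        """Total horizontal field of view to render, clamped to the full sphere."""
        return min(viewport_fov_deg + 2.0 * peripheral_margin_deg(rtt_ms), 360.0)

    for rtt in (10, 25, 50):
        print(f"RTT {rtt:>3} ms -> render {rendered_fov_deg(100.0, rtt):.0f}° of 360°")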

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with substantial functional overlap. However, there is little interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to better understand the whole rather than only its parts. The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Perception-driven approaches to real-time remote immersive visualization

    In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern Virtual Reality (VR) interfaces, enhances the user's sense of presence in a remote scene through 3D reconstruction, particularly when there is a need to visualize, explore, and perform tasks in environments that are inaccessible, hazardous, or distant. However, a remote visualization system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfies demands on speed, throughput, and visual realism. Especially when using point clouds, there is a fundamental quality difference between the acquired data of the physical world and the displayed data because of network latency and throughput limitations, which negatively impact the sense of presence and provoke cybersickness. This thesis presents research addressing these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: visual acuity is sharpest at the center of the field of view and falls off towards the periphery. Peripheral vision provides lower resolution that guides eye movements so that central vision visits all the crucial parts of interest. As a first contribution, the thesis develops remote visualization strategies that exploit this acuity fall-off to facilitate the processing, transmission, buffering, and VR rendering of 3D reconstructed scenes while simultaneously reducing throughput requirements and latency. As a second contribution, the thesis investigates attentional mechanisms to select and draw user engagement to specific information in the dynamic spatio-temporal environment. It proposes a strategy to analyze the remote scene with respect to its 3D structure, its layout, and the spatial, functional, and semantic relationships between objects in the scene. The strategy analyzes the scene with models of human visual perception, allocates a larger share of computational resources to objects of interest, and creates a more realistic visualization. As a supplementary contribution, a new volumetric point-cloud density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies, and experiments demonstrate that the methods introduced in this thesis are visually superior while significantly reducing latency and throughput requirements.
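    The density-based PSNR metric itself is not described in the abstract; as context, the sketch below computes the conventional point-to-point (geometry-only) PSNR that such point-cloud metrics typically extend, using symmetric nearest-neighbour distances and the bounding-box diagonal as the peak value. The choice of peak and all names are assumptions for illustration, not the thesis's metric.

    # Conventional point-to-point PSNR between a reference and a degraded point cloud.
    import numpy as np
    from scipy.spatial import cKDTree

    def point_to_point_psnr(reference: np.ndarray, degraded: np.ndarray) -> float:
        """Geometry PSNR from symmetric nearest-neighbour squared distances."""
        err_ab = cKDTree(reference).query(degraded)[0] ** 2   # degraded -> reference
        err_ba = cKDTree(degraded).query(reference)[0] ** 2   # reference -> degraded
        mse = max(err_ab.mean(), err_ba.mean())
        peak = np.linalg.norm(reference.max(axis=0) - reference.min(axis=0))  # bbox diagonal
        return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))

    rng = np.random.default_rng(0)
    ref = rng.random((1000, 3))
    deg = ref + rng.normal(scale=0.005, size=ref.shape)  # simulated compression noise
    print(f"point-to-point PSNR: {point_to_point_psnr(ref, deg):.1f} dB")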