
    Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    © 2016 IEEE. Latency, the delay between a user's action and the response to this action, is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay that is constant in time and space, but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so depends on the rendering approach used. In this study, we present an ultra-low latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of 1 ms from tracker to pixel. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high and low speed videos of our system in use, we confirm its latency of 1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system: one with a zero-latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures, and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence, the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
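
    To make the scan-out argument concrete, the following is a minimal illustrative model, not the paper's implementation: it assumes a 90 Hz sequentially scanned display and a fixed 1 ms tracker-to-pixel pipeline delay, and compares per-row latency for a frame-based renderer (pose sampled once per frame) against a frameless one (pose re-sampled just ahead of the scan beam).

```python
# Illustrative model (not the paper's implementation) of per-row latency
# during sequential scan-out. A frame-based renderer samples the tracker
# once per frame, so latency grows as the scan beam moves down the display;
# a frameless renderer re-samples the pose just ahead of the beam.

FRAME_TIME_MS = 1000.0 / 90.0   # assumed 90 Hz display refresh
ROWS = 1080                     # assumed vertical resolution
PIPELINE_MS = 1.0               # assumed fixed tracker-to-pixel delay

def frame_based_latency(row: int) -> float:
    """Pose sampled at frame start, so latency grows with scan-out time."""
    scan_delay = FRAME_TIME_MS * row / ROWS
    return PIPELINE_MS + scan_delay

def frameless_latency(row: int) -> float:
    """Pose re-sampled per row just before scan-out, so latency stays flat."""
    return PIPELINE_MS

if __name__ == "__main__":
    for row in (0, ROWS // 2, ROWS - 1):
        print(f"row {row:4d}: frame-based {frame_based_latency(row):5.2f} ms, "
              f"frameless {frameless_latency(row):5.2f} ms")
```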

    Perceived Acceleration in Stereoscopic Animation

    In stereoscopic media, a sensation of depth is produced through the differences between the images presented to the left and the right eyes. These differences are a result of binocular parallax caused by the separation of the cameras used to capture the scene. Creators of stereoscopic media face the challenge of producing compelling depth while restricting the amount of parallax to a comfortable range. Control of camera separation is a key manipulation for controlling parallax. Sometimes, stereoscopic warping is used in the post-production process to selectively increase or decrease depth in certain regions of the image. However, mismatches between camera geometry and natural stereoscopic geometry can theoretically produce nonlinear distortions of perceived space. The relative expansion or compression of the stereoscopic space should, in theory, affect the perceived acceleration of objects moving through that space. This thesis suggests that viewers are tolerant of the effects of such distortions when perceiving acceleration in a stereoscopic scene.
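
    As background to the distortion argument, the sketch below uses textbook viewer-side stereo geometry rather than anything taken from the thesis; the interocular distance, viewing distance and parallax values are illustrative. It shows that scaling on-screen parallax (as a change in camera separation would, to first order) changes perceived depth nonlinearly.

```python
import numpy as np

# Illustrative stereo viewing geometry (textbook model, not the thesis'
# analysis). On-screen parallax scales roughly linearly with camera
# separation, but the depth the viewer perceives from that parallax
# responds nonlinearly, which is the distortion the abstract refers to.

E_MM = 65.0            # assumed viewer interocular distance (mm)
VIEW_DIST_MM = 700.0   # assumed viewing distance to the screen (mm)

def perceived_depth(parallax_mm: np.ndarray) -> np.ndarray:
    """Perceived distance to a fused point, from similar triangles.
    Positive parallax places the point behind the screen."""
    return E_MM * VIEW_DIST_MM / (E_MM - parallax_mm)

# Parallax produced by some scene points under a baseline rig
# (values chosen for illustration only).
base_parallax = np.array([-10.0, 0.0, 10.0, 20.0])   # mm on screen

for scale in (0.5, 1.0, 1.5):   # relative camera separation
    depths = perceived_depth(scale * base_parallax)
    print(f"camera separation x{scale}: perceived depth (mm) = {np.round(depths, 1)}")
```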

    Visual Distortions in 360-degree Videos.

    Omnidirectional (or 360°) images and videos are emergent signals used in many areas, such as robotics and virtual/augmented reality. In particular, for virtual reality applications, they allow an immersive experience in which the user can interactively navigate through a scene with three degrees of freedom, wearing a head-mounted display. Current approaches for capturing, processing, delivering, and displaying 360° content, however, present many open technical challenges and introduce several types of distortions in the visual signal. Some of these distortions are specific to the nature of 360° images and often differ from those encountered in classical visual communication frameworks. This paper provides a first comprehensive review of the most common visual distortions that alter 360° signals as they pass through the different processing elements of the visual communication pipeline. While their impact on viewers' visual perception and on the immersive experience at large is still unknown, and thus remains an open research topic, this review serves the purpose of proposing a taxonomy of the visual distortions that can be encountered in 360° signals. Their underlying causes in the end-to-end 360° content distribution pipeline are identified. This taxonomy is essential as a basis for comparing different processing techniques, such as visual enhancement, encoding, and streaming strategies, and for allowing the effective design of new algorithms and applications. It is also a useful resource for the design of psycho-visual studies aiming to characterize human perception of 360° content in interactive and immersive applications.
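
    One example of a distortion specific to the 360° representation is the sampling stretch of the equirectangular projection. The short sketch below, with an assumed image height, estimates how strongly rows near the poles are oversampled relative to a uniform sampling of the sphere; it is illustrative only and not drawn from the paper.

```python
import math

# Illustrative computation (not from the paper): in an equirectangular
# panorama every row carries the same number of pixels, so horizontal
# sampling density grows as 1/cos(latitude) and rows near the poles are
# heavily stretched.

HEIGHT = 1024   # assumed equirectangular image height in pixels

def oversampling_factor(row: int, height: int = HEIGHT) -> float:
    """Pixels spent on this row relative to a uniform sphere sampling."""
    latitude = math.pi * (row + 0.5) / height - math.pi / 2.0  # -pi/2..pi/2
    return 1.0 / max(math.cos(latitude), 1e-6)

if __name__ == "__main__":
    for frac in (0.5, 0.75, 0.95, 0.999):
        row = int(frac * HEIGHT)
        lat = math.degrees(math.pi * (row + 0.5) / HEIGHT - math.pi / 2.0)
        print(f"row {row:4d} (latitude {lat:6.1f} deg): "
              f"~{oversampling_factor(row):6.1f}x oversampled")
```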

    Perspective Preserving Solution for Quasi-Orthoscopic Video See-Through HMDs

    In non-orthoscopic video see-through (VST) head-mounted displays (HMDs), depth perception through stereopsis is adversely affected by sources of spatial perception errors. Solutions for parallax-free and orthoscopic VST HMDs have been proposed to ensure proper space perception, but at the expense of increased bulkiness and weight. In this work, we present a hybrid video-optical see-through HMD whose geometry explicitly violates the rigorous conditions of orthostereoscopy. To properly recover natural stereo fusion of the scene within the personal space in a region around a predefined distance from the observer, we partially resolve the eye-camera parallax by warping the camera images through a perspective-preserving homography that accounts for the geometry of the VST HMD and refers to that distance. To validate our solution, we conducted objective and subjective tests. The goal of the tests was to assess the efficacy of our solution in recovering natural depth perception in the space around the reference distance. The results show that the quasi-orthoscopic setting of the HMD, together with the perspective-preserving image warping, allows the recovery of a correct perception of relative depths. The perceived distortion of space around the reference plane proved to be not as severe as predicted by the mathematical models.
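
    The warp described above can be sketched with the standard plane-induced homography. The snippet below is a minimal illustration under assumed intrinsics, an assumed eye-camera offset, and an assumed reference distance, using OpenCV for the warp; it is not the authors' calibration procedure or exact formulation.

```python
import numpy as np
import cv2

# Minimal sketch (not the authors' exact method) of a plane-induced
# homography: it maps the camera image to the eye viewpoint exactly for
# scene points on a fronto-parallel plane at a chosen reference distance,
# which is the general idea behind warping a VST HMD camera image to
# reduce eye-camera parallax around that distance. All values below are
# made-up illustrative parameters.

K_cam = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])      # camera intrinsics (assumed)
K_eye = K_cam.copy()                      # assume same intrinsics for the eye view
R = np.eye(3)                             # assume no rotation between camera and eye
t = np.array([[0.0], [0.03], [0.10]])     # eye-camera offset in metres (assumed)
n = np.array([[0.0, 0.0, 1.0]])           # normal of the reference plane
d_ref = 1.5                               # reference distance in metres (assumed)

# Plane-induced homography: H = K_eye (R - t n^T / d) K_cam^-1
H = K_eye @ (R - (t @ n) / d_ref) @ np.linalg.inv(K_cam)
H /= H[2, 2]

def warp_to_eye(camera_frame: np.ndarray) -> np.ndarray:
    """Warp a captured frame so content at the reference distance
    appears where the eye would see it."""
    h, w = camera_frame.shape[:2]
    return cv2.warpPerspective(camera_frame, H, (w, h))
```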

    Scalable Remote Rendering using Synthesized Image Quality Assessment

    Depth-image-based rendering (DIBR) is widely used to support 3D interactive graphics on low-end mobile devices. Although it reduces the rendering cost on a mobile device, it essentially turns that cost into a depth-image transmission cost, or bandwidth consumption, inducing a performance bottleneck in a remote rendering system. To address this problem, we design a scalable remote rendering framework based on synthesized image quality assessment. Specifically, we design an efficient synthesized image quality metric based on Just Noticeable Distortion (JND), which properly measures human-perceived geometric distortions in synthesized images. Based on this, we predict quality-aware reference viewpoints, with viewpoint intervals optimized by the JND-based metric. An adaptive transmission scheme is also developed to control depth-image transmission based on perceived quality and network bandwidth availability. Experimental results show that our approach effectively reduces transmission frequency and network bandwidth consumption while maintaining perceived quality on mobile devices. A prototype system is implemented to demonstrate the scalability of our proposed framework to multiple clients.
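
    The adaptive transmission idea can be illustrated with a simple client-side control loop. The quality predictor below is only a stand-in (the paper derives its predictor from the JND-based metric), and the viewpoints and threshold are made-up values.

```python
from dataclasses import dataclass

# Illustrative control loop (not the paper's actual metric or protocol):
# the client keeps warping from the last reference viewpoint until the
# predicted synthesized-image quality drops below a threshold, at which
# point it requests a fresh depth image from the server.

@dataclass
class Viewpoint:
    x: float
    y: float
    z: float

def predicted_quality(reference: Viewpoint, current: Viewpoint) -> float:
    """Stand-in predictor: quality degrades with distance from the
    reference viewpoint. A real system would use a JND-based metric."""
    dist = ((reference.x - current.x) ** 2 +
            (reference.y - current.y) ** 2 +
            (reference.z - current.z) ** 2) ** 0.5
    return max(0.0, 1.0 - 0.8 * dist)

QUALITY_THRESHOLD = 0.7   # assumed perceptual acceptability threshold

def needs_new_reference(reference: Viewpoint, current: Viewpoint) -> bool:
    """Request a new depth image only when warping from the existing
    reference is predicted to be noticeably degraded."""
    return predicted_quality(reference, current) < QUALITY_THRESHOLD

if __name__ == "__main__":
    ref = Viewpoint(0.0, 0.0, 0.0)
    for step in range(6):
        cur = Viewpoint(0.1 * step, 0.0, 0.0)
        if needs_new_reference(ref, cur):
            ref = cur   # in a real client this triggers a transmission
        print(f"step {step}: quality={predicted_quality(ref, cur):.2f}, ref at x={ref.x:.1f}")
```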

    Evaluation and optimization of central vision compensation techniques

    Low-cost, non-invasive, safe, and reliable electronic vision enhancement systems (EVES) and their methods have seen huge medical and industrial demand in the early 21st century. Two unique vision compensation and enhancement algorithms are reviewed and compared, each qualitatively optimizing the view of a restricted (or truncated) image. The first is described as the convex or fish-eye technique, and the second is the cartoon superimposition or Peli technique (after the leading author of this research). The novelty in this dissertation is in presenting and analyzing both of these in comparison to a novel technique, motivated by a characterization of quality vision parameters (the distribution of photoreceptors in the eye), in an attempt to account for and compensate for the reported viewing difficulties and low image quality measures associated with these two existing methods.

    This partial cartoon technique is based on introducing the invisible image to the immediate left and right of the truncated image as a superimposed cartoon into the respective sides of the truncated image, yet only on a partial basis so as not to distract from the central view of the image. It is generated and evaluated using MATLAB to warp sample grayscale images according to predefined parameters such as the warping method, cartoon and other warping parameters, and different grayscale values, as well as by comparing both static and movie modes. Warped images are quantitatively compared by evaluating the Root-Mean-Square Error (RMSE) and the Universal Image Quality Index (UIQI), which represent image distortion and quality measures of warped images relative to the originals, for five different scenes: landscape, close-up, obstacle, text, and home (or low-illumination) views. Remapped images are also evaluated through surveys of 115 subjects, in which improvement is assessed using measures of image detail and distortion.

    It is finally concluded that the presented partial cartoon method exhibits superior image quality on all objective measures, as well as on a majority of subjective distortion measures. Justification is provided as to why the technique does not offer superior subjective detail measures. Further improvements are suggested, as well as additional techniques and research.
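
    For reference, the two objective measures named above can be sketched as follows. The UIQI is usually computed over sliding windows and averaged; the single-window form is shown here for brevity, and the test images are synthetic.

```python
import numpy as np

# Sketch of the two objective measures named in the abstract, computed
# globally over grayscale images. The Universal Image Quality Index
# (Wang & Bovik, 2002) is normally evaluated over sliding windows and
# averaged; the single-window form is shown for clarity.

def rmse(original: np.ndarray, warped: np.ndarray) -> float:
    """Root-mean-square error between the original and warped images."""
    diff = original.astype(np.float64) - warped.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def uiqi(original: np.ndarray, warped: np.ndarray) -> float:
    """Universal Image Quality Index: combines correlation, luminance
    similarity, and contrast similarity; 1.0 means identical images."""
    x = original.astype(np.float64).ravel()
    y = warped.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return float(4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
    distorted = np.clip(img + rng.normal(0, 10, img.shape), 0, 255)
    print(f"RMSE: {rmse(img, distorted):.2f}, UIQI: {uiqi(img, distorted):.3f}")
```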