
    Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    © 2016 IEEE. Latency, the delay between a user's action and the response to that action, is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay that is constant in time and space, but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so depends on the rendering approach used. In this study, we present an ultra-low latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of 1 ms from tracker to pixel. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of 1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system: one with a zero-latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures, and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence, the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
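    The contrast the abstract draws between frame-based and frameless scan-out latency can be sketched in a few lines; the timing figures below are illustrative assumptions, not the paper's measured values:

    ```python
    def frame_based_latency_ms(row, rows=1080, frame_time_ms=13.3, base_ms=2.0):
        """Frame-based renderer: the whole frame is drawn from tracking
        data sampled before scan-out begins, so the effective latency of
        a pixel grows with its row position in the scan."""
        return base_ms + (row / rows) * frame_time_ms

    def frameless_latency_ms(row, pipeline_ms=1.0):
        """Frameless ray-caster: each pixel is generated just ahead of
        the scan-beam, so latency is roughly constant across the display."""
        return pipeline_ms
    ```

    Under these assumed numbers the bottom rows of a frame-based display lag well over 10 ms behind the tracker, while the frameless renderer holds a constant ~1 ms everywhere.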

    Performance of a simple remote video-based eye tracker with GPU acceleration

    Eye tracking is a well-established tool that is often utilised in research. There are currently many different types of eye trackers available, but they are either expensive or provide a relatively low sampling frequency. The eye tracker presented in this paper was developed in an effort to address the lack of low-cost high-speed eye trackers. It utilises the Graphics Processing Unit (GPU) to parallelise the localisation of feature points in eye images and thereby attain higher sampling frequencies. Moreover, the proposed implementation allows the system to be used on a variety of different GPUs. The developed solution is capable of sampling at frequencies of 200 Hz and higher, while allowing for head movements within an area of 10×6×10 cm and an average accuracy of one degree of visual angle. The entire system can be built for less than 700 euros, and will run on a mid-range laptop.
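    As a rough illustration of the kind of per-pixel feature-localisation work the paper offloads to the GPU, here is a minimal CPU-only sketch; the threshold value and centroid approach are assumptions for illustration, not the paper's actual method:

    ```python
    import numpy as np

    def pupil_center(eye_image, threshold=40):
        """Estimate a feature point (pupil centre) in a grayscale eye
        image: threshold the dark pupil region, then take the centroid
        of the resulting mask. Each pixel's test is independent, which
        is what makes this style of processing easy to parallelise."""
        mask = eye_image < threshold
        if not mask.any():
            return None
        ys, xs = np.nonzero(mask)
        return float(xs.mean()), float(ys.mean())
    ```

    On a GPU, the thresholding and reduction steps map naturally onto one thread per pixel followed by a parallel sum, which is how such a pipeline reaches 200 Hz and beyond.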

    Master slave en-face OCT/SLO

    Master Slave optical coherence tomography (MS-OCT) is an OCT method that does not require resampling of data and can be used to deliver en-face images from several depths simultaneously. As the MS-OCT method requires significant computational resources, the number of multiple-depth en-face images that can be produced in real time is limited. Here, we demonstrate progress in taking advantage of the parallel processing feature of the MS-OCT technology. Harnessing the capabilities of graphics processing units (GPUs), information from 384 depth positions is acquired in one raster with real-time display of up to 40 en-face OCT images. These exhibit comparable resolution and sensitivity to the images produced using the conventional Fourier-domain-based method. The GPU facilitates versatile real-time selection of parameters, such as the depth positions of the 40 images out of the set of 384 depth locations, as well as their axial resolution. In each updated displayed frame, in parallel with the 40 en-face OCT images, a scanning laser ophthalmoscopy (SLO) lookalike image is presented together with two B-scan OCT images oriented along orthogonal directions. The thickness of the SLO lookalike image is dynamically determined by the choice of number of en-face OCT images displayed in the frame and the choice of differential axial distance between them.
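    The conventional Fourier-domain route that MS-OCT is compared against can be sketched as follows; the array shapes and function names are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def en_face_slices(spectra, depth_indices):
        """Fourier-domain route to en-face OCT images: FFT each spectral
        interferogram into an A-scan, then pick the magnitude at the
        requested depth indices across the whole raster.

        spectra: (ny, nx, n_k) array of k-linearised interferograms.
        Returns: (len(depth_indices), ny, nx) stack of en-face images."""
        a_scans = np.abs(np.fft.fft(spectra, axis=-1))
        return np.stack([a_scans[..., z] for z in depth_indices])
    ```

    Each depth slice is independent of the others, which is why mapping the per-depth work onto GPU threads lets many en-face images (40 of the 384 depths here) be produced in real time.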

    Egocentric Perception using a Biologically Inspired Software Retina Integrated with a Deep CNN

    We presented the concept of a software retina, capable of significant visual data reduction in combination with scale and rotation invariance, for applications in egocentric and robot vision at the first EPIC workshop in Amsterdam [9]. Our method is based on the mammalian retino-cortical transform: a mapping between a pseudo-randomly tessellated retina model (used to sample an input image) and a CNN. The aim of this first pilot study is to demonstrate a functional retina-integrated CNN implementation, and it produced the following results: a network using the full retino-cortical transform yielded an F1 score of 0.80 on a test set during a 4-way classification task, while an identical network not using the proposed method yielded an F1 score of 0.86 on the same task. On a 40K node retina the method reduced the visual data by ×7, the input data to the CNN by 40% and the number of CNN training epochs by 36%. These results demonstrate the viability of our method and hint at the potential of exploiting functional traits of natural vision systems in CNNs. In addition to the above study, we present further recent developments in porting the retina to an Apple iPhone, an implementation in CUDA C for NVIDIA GPU platforms, and extensions of the retina model we have adopted.
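    The paper's retina is pseudo-randomly tessellated; as a simplified stand-in, a regular log-polar grid illustrates the same space-variant retino-cortical sampling idea (dense at the fovea, sparse in the periphery), with all sizes below chosen for illustration:

    ```python
    import numpy as np

    def log_polar_sample(image, n_rings=32, n_sectors=64):
        """Sample a grayscale image on a log-polar grid centred on the
        image: ring radii grow geometrically, so the fovea is sampled
        densely and the periphery coarsely, yielding a large data
        reduction before the result is fed to a CNN."""
        h, w = image.shape[:2]
        cy, cx = h / 2, w / 2
        r_max = min(cy, cx) - 1
        # Geometric radii -> logarithmic radial spacing.
        radii = r_max ** (np.arange(1, n_rings + 1) / n_rings)
        angles = np.linspace(0, 2 * np.pi, n_sectors, endpoint=False)
        ys = (cy + radii[:, None] * np.sin(angles)).astype(int)
        xs = (cx + radii[:, None] * np.cos(angles)).astype(int)
        return image[ys, xs]  # (n_rings, n_sectors) "cortical" image
    ```

    A rotation of the input becomes a shift along the sector axis of the output, and a scaling becomes a shift along the ring axis, which is the source of the scale and rotation invariance mentioned above.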

    A space-variant visual pathway model for data efficient deep learning

    We present an investigation into adopting a model of the retino-cortical mapping, found in biological visual systems, to improve the efficiency of image analysis using Deep Convolutional Neural Nets (DCNNs) in the context of robot vision and egocentric perception systems. This work has now enabled DCNNs to process input images approaching one million pixels in size, in real time, using only consumer-grade graphics processor (GPU) hardware in a single pass of the DCNN.

    Smart Visual Sensing Using a Software Retina Model

    We present an approach to efficient visual sensing and perception based on a non-uniformly sampled, biologically inspired, software retina that, when combined with a DCNN classifier, has enabled megapixel-sized camera input images to be processed in a single pass, while maintaining state-of-the-art recognition performance.

    Development of real-time dual-display handheld and bench-top hybrid-mode SD-OCTs

    Development of a dual-display handheld optical coherence tomography (OCT) system for retina and optic-nerve-head diagnosis beyond the volunteer motion constraints is reported. The developed system is portable and easily movable, containing the compact portable OCT system that includes the handheld probe and computer. Eye posterior chambers were diagnosed using the handheld probe, and the probe could be fixed to the bench-top cradle depending on the volunteers' physical condition. The images obtained using this handheld probe were displayed in real time on the computer monitor and on a small secondary built-in monitor; the displayed images were saved using the handheld probe's built-in button. Large-scale signal-processing procedures such as k-domain linearization, fast Fourier transform (FFT), and log-scaling signal processing can be rapidly applied using graphics-processing-unit (GPU) accelerated processing rather than central-processing-unit (CPU) processing. The LabVIEW-based system resolution is 1,024 × 512 pixels, and the frame rate is 56 frames/s, useful for real-time display. The 3D images of the posterior chambers including the retina, optic-nerve head, blood vessels, and optic nerve were composed using real-time displayed images with 500 × 500 × 500 pixel resolution. A handheld and bench-top hybrid mode with a dual-display handheld OCT was developed to overcome the drawbacks of the conventional method.
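    The signal chain named above (k-domain linearization, FFT, log-scaling) can be sketched on the CPU with NumPy for clarity; the wavelength grid, point count, and function name are illustrative assumptions, and the real system runs these steps on the GPU:

    ```python
    import numpy as np

    def oct_a_scan(spectrum, lam, n_points=1024):
        """SD-OCT A-scan pipeline sketch:
        1. k-domain linearization: the spectrometer samples evenly in
           wavelength, so resample onto an evenly spaced wavenumber grid.
        2. FFT the linearised interferogram into a depth profile.
        3. Log-scale the magnitude for display dynamic range."""
        k = 2 * np.pi / lam                       # wavenumber (decreasing with lam)
        k_lin = np.linspace(k.min(), k.max(), n_points)
        # np.interp needs increasing sample points, so reverse both arrays.
        resampled = np.interp(k_lin, k[::-1], spectrum[::-1])
        depth_profile = np.abs(np.fft.fft(resampled))[: n_points // 2]
        return 20 * np.log10(depth_profile + 1e-12)
    ```

    Each A-scan is independent, so a GPU can linearise and transform hundreds of them in parallel, which is what makes the 56 frames/s real-time display feasible.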