A stereo display prototype with multiple focal distances
Creating effective focus cues in multi-plane 3D displays.
Focus cues are incorrect in conventional stereoscopic displays. This causes a dissociation of vergence and accommodation, which leads to visual fatigue and perceptual distortions. Multi-plane displays can minimize these problems by creating nearly correct focus cues. But to create the appearance of continuous depth in a multi-plane display, one needs depth-weighted blending: i.e., distributing light intensity between adjacent planes. Akeley et al. [ACM Trans. Graph. 23, 804 (2004)] and Liu and Hua [Opt. Express 18, 11562 (2010)] described rather different rules for depth-weighted blending. We examined the effectiveness of those and other rules using a model of a typical human eye and biologically plausible metrics for image quality. We find that the linear blending rule proposed by Akeley and colleagues [ACM Trans. Graph. 23, 804 (2004)] is the best solution for natural stimuli.
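The linear depth-weighted blending rule described above can be sketched as follows. This is an illustrative reading of the rule, not code from either cited paper; the function name and the dioptric sign convention (nearer planes have larger dioptric values) are assumptions for the example:

```python
def linear_blend_weights(z_d, near_d, far_d):
    """Linear depth-weighted blending between two adjacent focal planes.

    z_d    -- simulated depth of the point, in diopters
    near_d -- dioptric distance of the nearer focal plane
    far_d  -- dioptric distance of the farther focal plane
    Returns (w_near, w_far): the fraction of the point's light intensity
    assigned to each plane. Weights sum to 1 and vary linearly with the
    point's dioptric distance from the opposite plane.
    """
    assert far_d <= z_d <= near_d  # point must lie between the two planes
    span = near_d - far_d
    w_near = (z_d - far_d) / span
    return w_near, 1.0 - w_near

# A point dioptically midway between planes at 2 D and 1 D gets equal weights:
print(linear_blend_weights(1.5, 2.0, 1.0))  # -> (0.5, 0.5)
```

A point coincident with one plane receives all of its intensity on that plane, which is what makes the transition across the plane stack appear continuous.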
Vergence tracking: a tool to assess oculomotor performance in stereoscopic displays
Oculomotor conflict induced between the accommodative and vergence components in stereoscopic displays represents an unnatural viewing condition. There is now some evidence that stereoscopic viewing may induce discomfort and changes in oculomotor parameters. The present study sought to measure oculomotor performance during stereoscopic viewing. Using a 3D stereo setup and an eye-tracker, vergence responses were measured during 20-min exposure to a virtual visual target oscillating in depth, which participants had to track. The results showed a significant decline in the amplitude of the in-depth oscillatory vergence response over time. We propose that eye-tracking provides a useful tool to objectively assess the time-varying alterations of the vergence system when using stereoscopic displays.
Design and demonstration of a vari-focal optical see-through head-mounted display using freeform Alvarez lenses
Alvarez lenses offer accurate, high-speed, dynamic tuning of optical power through a lateral shifting of two lens elements, making them an appealing solution to eliminate the inherent decoupling of accommodation and convergence seen in conventional stereoscopic displays. In this paper, we present a design of a compact eyepiece coupled with two lateral-shifting freeform Alvarez lenses to enable a compact, high-resolution, optical see-through head-mounted display (HMD). The proposed design is able to tune its focal depth from 0 to 3 diopters, rendering near-accurate focus cues with high image quality and a large undistorted see-through field of view (FOV). Our design utilizes a 1920×1080 color organic light-emitting diode (OLED) microdisplay to achieve a >30 degree virtual diagonal FOV, with an angular resolution of 0.4 contrast over the full field. We also experimentally demonstrate a fully functional benchtop prototype using mostly off-the-shelf optics.
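The linear power-versus-shift behavior that makes Alvarez lenses attractive for vari-focal displays follows from their cubic surface profile. A minimal sketch, under textbook thin-lens assumptions; the coefficient A, shift range, and refractive index below are illustrative values, not parameters from this paper:

```python
def alvarez_power_diopters(A, d_m, n=1.5):
    """Optical power of an ideal Alvarez lens pair vs. lateral shift.

    Each element carries the cubic thickness profile
    t(x, y) = A * (x*y**2 + x**3 / 3).
    Sliding the two complementary elements by +/- d along x leaves a net
    quadratic (lens-like) thickness proportional to -2*A*d*(x**2 + y**2),
    i.e. a thin lens of power P = 4*A*(n - 1)*d.

    A   -- cubic surface coefficient, in m^-2 (illustrative)
    d_m -- lateral shift of each element, in meters
    n   -- refractive index of the lens material (assumed 1.5)
    Returns the optical power in diopters.
    """
    return 4.0 * A * (n - 1.0) * d_m

# With an illustrative A = 500 m^-2, a 3 mm shift spans the 0-3 D
# focal range mentioned in the abstract:
power = alvarez_power_diopters(500.0, 0.003)
```

Because power scales linearly with displacement, a small, fast mechanical actuator can retarget the focal plane continuously, which is the property the vari-focal HMD design exploits.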
Joint view expansion and filtering for automultiscopic 3D displays
Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they require correctly filtered content from multiple viewpoints. This, however, cannot be easily obtained with current stereoscopic production pipelines. We provide a practical solution that takes a stereoscopic video as an input and converts it to multi-view and filtered video streams that can be used to drive multi-view autostereoscopic displays. The method combines a phase-based video magnification and an inter-perspective antialiasing into a single filtering process. The whole algorithm is simple and can be efficiently implemented on current GPUs to yield near real-time performance. Furthermore, the ability to retarget disparity is naturally supported. Our method is robust and works well for challenging video scenes with defocus blur, motion blur, transparent materials, and specularities. We show that our results are superior when compared to state-of-the-art depth-based rendering methods. Finally, we showcase the method in the context of a real-time 3D videoconferencing system that requires only two cameras.
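To make the view-expansion task concrete, here is the naive depth-based baseline that this work improves upon, not the paper's phase-based method: intermediate views are synthesized by forward-warping pixels along their disparity, scaled by a viewpoint parameter. The function name and the single-scanline simplification are assumptions for the sketch:

```python
def synthesize_view(left_row, disparity_row, alpha):
    """Naive forward-warp of one scanline to an intermediate viewpoint.

    left_row      -- list of pixel intensities from the left view
    disparity_row -- per-pixel disparity (in pixels) toward the right view
    alpha         -- viewpoint parameter: 0.0 = left view, 1.0 = right view
    Pixels are splatted to x + alpha * disparity; later writes overwrite
    earlier ones (a crude occlusion rule), and disoccluded positions are
    left as None, the holes that depth-based rendering must inpaint.
    """
    out = [None] * len(left_row)
    for x, (value, disp) in enumerate(zip(left_row, disparity_row)):
        xt = int(round(x + alpha * disp))
        if 0 <= xt < len(out):
            out[xt] = value
    return out

# Halfway view of a 4-pixel scanline; the foreground pair shifts by 1:
half = synthesize_view([10, 20, 30, 40], [0, 0, 2, 2], 0.5)
```

The holes and aliasing this baseline produces at depth discontinuities are exactly the artifacts the combined magnification-plus-antialiasing filter is designed to avoid.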
Optimizing Virtual Reality for All Users through Gaze-Contingent and Adaptive Focus Displays
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one
Ocular vergence response over anaglyphic stereoscopic videos
The effect of anaglyphic stereographic stimuli on ocular vergence response is examined. An experiment is performed comparing the ocular vergence response induced by anaglyphic stereographic display versus standard monoscopic display. Two visualization tools, synchronized three-dimensional scanpath playback and real-time dynamic heatmap generation, are developed and used to subjectively support the quantitative analysis of ocular disparity. The results of a one-way ANOVA indicate that there is a highly significant effect of anaglyphic stereoscopic display on ocular vergence for a majority of subjects, although the consistency of the vergence response is difficult to predict.
Polarization fields: dynamic light field display using multi-layer LCDs
We introduce polarization field displays as an optically-efficient design for dynamic light field display using multi-layered LCDs. Such displays consist of a stacked set of liquid crystal panels with a single pair of crossed linear polarizers. Each layer is modeled as a spatially-controllable polarization rotator, as opposed to a conventional spatial light modulator that directly attenuates light. Color display is achieved using field sequential color illumination with monochromatic LCDs, mitigating severe attenuation and moiré occurring with layered color filter arrays. We demonstrate such displays can be controlled, at interactive refresh rates, by adopting the SART algorithm to tomographically solve for the optimal spatially-varying polarization state rotations applied by each layer. We validate our design by constructing a prototype using modified off-the-shelf panels. We demonstrate interactive display using a GPU-based SART implementation supporting both polarization-based and attenuation-based architectures. Experiments characterize the accuracy of our image formation model, verifying polarization field displays achieve increased brightness, higher resolution, and extended depth of field, as compared to existing automultiscopic display methods for dual-layer and multi-layer LCDs.
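The tomographic solver mentioned above, SART, is a general iterative method for linear systems A x = b. A minimal generic sketch of the classic SART sweep follows, solving a toy dense system rather than the paper's polarization-rotation model, which this example does not attempt to reproduce:

```python
def sart(A, b, n_iter=50, lam=1.0):
    """Simultaneous Algebraic Reconstruction Technique for A x = b.

    A      -- dense list-of-rows matrix with nonnegative entries
    b      -- measurement vector
    n_iter -- number of full sweeps
    lam    -- relaxation factor, typically in (0, 2)
    Each sweep distributes every measurement's residual back across the
    unknowns, normalized by row sums and column sums (the classic SART
    update), and applies all updates simultaneously.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    row_sum = [sum(row) for row in A]
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        # residual of each measurement under the current estimate
        resid = [b[i] - sum(A[i][j] * x[j] for j in range(n))
                 for i in range(m)]
        for j in range(n):
            if col_sum[j] == 0:
                continue
            upd = sum(A[i][j] * resid[i] / row_sum[i]
                      for i in range(m) if row_sum[i] > 0)
            x[j] += lam * upd / col_sum[j]
    return x

# Tiny overdetermined but consistent system with solution x = [1, 2]:
x = sart([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 2.0, 3.0])
```

In the display setting, each "measurement" corresponds to a target ray intensity and each unknown to a per-layer rotation value, which is why a GPU implementation of this sweep can run at interactive rates.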