
    Accommodation-Free Head Mounted Display with Comfortable 3D Perception and an Enlarged Eye-box.

    Accommodation-free displays, also known as Maxwellian displays, keep the displayed image sharp regardless of the viewer's focal distance. However, they typically suffer from a small eye-box and limited effective field of view (FOV), which require careful alignment before a viewer can see the image. This paper presents a high-quality accommodation-free head-mounted display (aHMD) based on pixel-beam scanning for direct image formation on the retina. It achieves an enlarged eye-box and FOV for easy viewing by replicating the viewing points with an array of beam splitters. A prototype aHMD built on this concept shows high-definition, low colour-aberration 3D augmented reality (AR) images with an FOV of 36°. The advantage of the proposed design over other head-mounted display (HMD) architectures is that, owing to the narrow, collimated pixel beams, the high image quality is unaffected by changes in eye accommodation, and the approach to enlarging the eye-box is scalable. Most importantly, such an aHMD can deliver realistic three-dimensional (3D) viewing perception with no vergence-accommodation conflict (VAC). Viewing accommodation-free 3D images with the aHMD presented in this work is found to be comfortable for viewers and does not cause the nausea or eyestrain side effects commonly associated with conventional stereoscopic 3D or HMD displays, even for all-day use.
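
    As a rough illustration of the eye-box enlargement idea (not taken from the paper), the sketch below models the beam-splitter array as a row of laterally replicated viewing points and checks whether the pupil overlaps any of them; the splitter pitch, replica count and pupil diameter are assumed values chosen for illustration.

```python
# Illustrative sketch, not the paper's design: a beam-splitter array
# replicates the single Maxwellian viewing point into a lateral grid,
# enlarging the effective eye-box. All dimensions are assumed values.

import numpy as np

PUPIL_DIAMETER_MM = 4.0   # typical eye pupil diameter (assumed)
SPLITTER_PITCH_MM = 3.0   # spacing between replicated viewpoints (assumed)
N_REPLICAS = 5            # number of beam splitters in the array (assumed)

def replicated_viewpoints(pitch_mm: float, n: int) -> np.ndarray:
    """Lateral positions of the replicated viewing points, centred on zero."""
    return (np.arange(n) - (n - 1) / 2) * pitch_mm

def visible(eye_x_mm: float, viewpoints: np.ndarray) -> bool:
    """The image stays visible while any replicated viewpoint lies inside the pupil."""
    return bool(np.any(np.abs(viewpoints - eye_x_mm) <= PUPIL_DIAMETER_MM / 2))

vps = replicated_viewpoints(SPLITTER_PITCH_MM, N_REPLICAS)
print("viewpoints (mm):", vps)                             # [-6. -3.  0.  3.  6.]
print("eye-box span (mm):", vps[-1] - vps[0] + PUPIL_DIAMETER_MM)
print("eye at +4.2 mm still sees image:", visible(4.2, vps))
```

    Note that with a pitch no larger than the pupil diameter, coverage across the eye-box is continuous, which is what makes the replication approach scale with the number of splitters.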

    Perceptual Requirements for World-Locked Rendering in AR and VR

    Stereoscopic, head-tracked display systems can show users realistic, world-locked virtual objects and environments. However, discrepancies between the rendering pipeline and physical viewing conditions can lead to perceived instability in the rendered content, resulting in reduced realism, immersion and, potentially, visually induced motion sickness. The requirements for perceptually stable world-locked rendering are unknown because of the challenge of constructing a wide-field-of-view, distortion-free display with highly accurate head and eye tracking. In this work we introduce new hardware and software built upon recently introduced hardware, and present a system capable of rendering virtual objects over real-world references without perceivable drift under these constraints. The platform is used to study acceptable errors in render camera position for world-locked rendering in augmented- and virtual-reality scenarios, where we find an order-of-magnitude difference in perceptual sensitivity between them. We conclude by comparing the study results with an analytic model that examines changes to apparent depth and visual heading in response to camera displacement errors. We identify visual heading as an important consideration for world-locked rendering, alongside depth errors from incorrect disparity.
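
    The paper's analytic model is not reproduced here, but a minimal geometric sketch can illustrate the two quantities the abstract names: a lateral render-camera displacement shifts an object's visual heading by roughly atan(Δx/d) for an object at depth d, and an angular disparity error changes the depth implied by binocular geometry. The interpupillary distance and error magnitudes below are assumed values.

```python
# A minimal sketch of this kind of analytic model (assumed, not the paper's):
# how a lateral render-camera displacement shifts visual heading, and how a
# disparity error shifts apparent depth. Parameter values are illustrative.

import math

IPD_M = 0.063  # interpupillary distance, assumed typical value (m)

def heading_error_deg(dx_m: float, depth_m: float) -> float:
    """Angular shift in an object's apparent direction when the render
    cameras are displaced laterally by dx_m, for an object at depth_m."""
    return math.degrees(math.atan2(dx_m, depth_m))

def apparent_depth_m(depth_m: float, disparity_error_rad: float) -> float:
    """Depth implied by binocular disparity after adding an angular error
    (small-angle geometry: disparity ~ IPD / depth)."""
    true_disparity_rad = IPD_M / depth_m
    return IPD_M / (true_disparity_rad + disparity_error_rad)

for d in (0.5, 1.0, 2.0):
    print(f"object at {d} m: 1 mm camera shift -> "
          f"{heading_error_deg(0.001, d):.3f} deg heading error")
print(f"2 m object, +1 mrad disparity error -> "
      f"appears at {apparent_depth_m(2.0, 0.001):.2f} m")
```

    The heading term shrinks with object distance while the depth term depends on disparity geometry, which is one way to see why the two error sources deserve separate treatment.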

    Design of Participatory Virtual Reality System for visualizing an intelligent adaptive cyberspace

    The concept of 'Virtual Intelligence' is proposed as an intelligent adaptive interaction between the simulated 3-D dynamic environment and the 3-D dynamic virtual image of the participant in the cyberspace created by a virtual reality system. A system design for such interaction is realised utilising only a stereoscopic optical head-mounted LCD display with an ultrasonic head tracker, a pair of gesture-controlled fibre-optic gloves and a speech recognition and synthesiser device, all connected to a Pentium computer. A 3-D dynamic environment is created by physically based modelling and rendering in real time, and by modification of existing object description files with a fractals-based Morph software. It is supported by an extensive library of audio and video functions, and of functions characterising the dynamics of various objects. The multimedia database files so created are retrieved or manipulated through intelligent hypermedia navigation and intelligent integration with existing information. Speech commands control the dynamics of the environment and the corresponding multimedia databases. The concept of a virtual camera developed by Zelter as well as by Thalmann and Thalmann, as automated by Noma and Okada, can be applied to dynamically relate the orientation and actions of the participant's virtual image to the simulated environment. Utilising the fibre-optic gloves, the participant issues gesture-based commands through a gesture language to control his 3-D virtual image. Optimal estimation methods and dataflow techniques enable synchronisation between the participant's gesture-language commands and his 3-D dynamic virtual image. Utilising a framework, developed earlier by the author, for adaptive computational control of distributed multimedia systems, the data access required for the environment, as well as for the virtual image of the participant, can be endowed with adaptive capability.
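
    Purely as a hypothetical sketch (none of this code is from the thesis), the snippet below combines two of the ideas described above: a small gesture-language-to-command table, and an alpha-beta estimator standing in for the 'optimal estimation methods' that keep the participant's virtual image in step with delayed, noisy glove samples. All names, gains and sample values are illustrative assumptions.

```python
# Hypothetical sketch of gesture dispatch plus estimation-based smoothing
# for the participant's virtual image. Vocabulary and gains are assumed.

from dataclasses import dataclass

GESTURE_COMMANDS = {            # assumed gesture-language vocabulary
    "point": "move_forward",
    "fist":  "stop",
    "wave":  "rotate_view",
}

@dataclass
class AvatarEstimator:
    """Alpha-beta filter: predicts the avatar hand position between glove
    samples so rendering does not stall on slow or jittery sensor input."""
    pos: float = 0.0
    vel: float = 0.0
    alpha: float = 0.85   # position correction gain (assumed)
    beta: float = 0.005   # velocity correction gain (assumed)

    def update(self, measured_pos: float, dt: float) -> float:
        predicted = self.pos + self.vel * dt
        residual = measured_pos - predicted
        self.pos = predicted + self.alpha * residual
        self.vel += (self.beta / dt) * residual
        return self.pos

est = AvatarEstimator()
for t, sample in enumerate([0.0, 0.11, 0.19, 0.32]):  # glove x-positions (m)
    print(f"t={t}: rendered hand x = {est.update(sample, dt=0.1):.3f}")
print("gesture 'point' ->", GESTURE_COMMANDS["point"])
```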