662 research outputs found

    High-quality, real-time 3D video visualization in head mounted displays

    The main goal of this thesis research was to develop the ability to visualize high-quality, three-dimensional (3D) data within a virtual reality head mounted display (HMD). High-quality 3D data collection has become easier in recent years due to the development of 3D scanning technologies such as structured light methods. Structured light scanning and modern 3D data compression techniques have improved to the point at which 3D data can be captured, processed, compressed, streamed across a network, decompressed, reconstructed, and visualized all in near real-time. The question now becomes: what can be done with this live 3D information? A web application allows for real-time visualization of and interaction with this 3D video on the web. Streaming this data to the web allows for greater ease of access by a larger population. In the past, only two-dimensional (2D) video streaming has been available to the public via the web or installed desktop software. Commonly, 2D video streaming technologies, such as Skype, FaceTime, or Google Hangouts, are used to connect people around the world for both business and recreational purposes. As society increasingly conducts itself in online environments, improvements to these telecommunication and telecollaboration technologies must be made, as current systems have reached their limitations. These improvements are to ensure that interactions are as natural and as user-friendly as possible. One resolution to the limitations imposed by 2D video streaming is to stream 3D video via the aforementioned technologies to a user in a virtual reality HMD. With 3D data, improvements such as eye-gaze correction, obtaining a natural angle of viewing, and more can be accomplished. One common advantage of using 3D data in lieu of 2D data is what can be done with it during redisplay. 
For example, when a viewer moves about their environment in a physical space while on Skype, the 2D image on their computer monitor does not change; however, via the use of an HMD, the user can naturally view and move about their partner in 3D space almost as if they were sitting directly across from them. With these improvements, increased user perception and level of immersion in the digital world have been achieved. This allows users to perform at an increased level of efficiency in telecollaboration and telecommunication environments due to the increased ability to visualize and communicate more naturally with another human being. This thesis will present some preliminary results which support the notion that users better perceive their environments and also have a greater sense of interpersonal communication when immersed in a 3D video scenario as opposed to a 2D video scenario. This novel technology utilizes high-quality, real-time 3D scanning and 3D compression techniques, which in turn allow the user to experience a realistic reconstruction within a virtual reality HMD.
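The capture-compress-stream-reconstruct pipeline described above can be sketched minimally. The code below is a hypothetical illustration, not the thesis's actual codec: it quantizes a frame of 3D points to 16-bit integers as a stand-in for compression, treats the resulting packet as what would be streamed over the network, and reconstructs the frame on the HMD side with a bounded error.

```python
import numpy as np

def compress(points, scale=1024):
    # Lossy compression stand-in: quantize float XYZ coordinates to
    # 16-bit integers (real systems use structured-light video codecs).
    return np.round(points * scale).astype(np.int16)

def decompress(packet, scale=1024):
    # Reconstruct approximate coordinates on the receiving HMD side.
    return packet.astype(np.float32) / scale

# One simulated frame of scanned 3D points in metres (hypothetical data).
frame = np.random.rand(1000, 3).astype(np.float32)
packet = compress(frame)        # payload that would cross the network
restored = decompress(packet)   # near-real-time reconstruction for display
error = np.abs(restored - frame).max()  # at most half a quantization step
```

The quantization step bounds the reconstruction error at roughly 0.5 mm for this scale, which is the kind of rate/fidelity trade-off a real-time 3D video codec must make.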

    Tailored displays to compensate for visual aberrations

    We introduce tailored displays that enhance visual acuity by decomposing virtual objects and placing the resulting anisotropic pieces into the subject's focal range. The goal is to free the viewer from needing wearable optical corrections when looking at displays. Our tailoring process uses aberration and scattering maps to account for refractive errors and cataracts. It splits an object's light field into multiple instances that are each in-focus for a given eye sub-aperture. Their integration onto the retina leads to a quality improvement of perceived images when observing the display with naked eyes. The use of multiple depths to render each point of focus on the retina creates multi-focus, multi-depth displays. User evaluations and validation with modified camera optics are performed. We propose tailored displays for daily tasks where using eyeglasses is unfeasible or inconvenient (e.g., on head-mounted displays and e-readers, as well as for games); when a multi-focus function is required but not achievable (e.g., driving for farsighted individuals, or checking a portable device while doing physical activities); or for correcting the visual distortions produced by high-order aberrations that eyeglasses are not able to correct. Funding: Conselho Nacional de Pesquisas (Brazil) (CNPq-Brazil fellowships 142563/2008-0, 308936/2010-8, and 480485/2010-0); National Science Foundation (U.S.) (NSF CNS 0913875); Alfred P. Sloan Foundation (fellowship); United States Defense Advanced Research Projects Agency (DARPA Young Faculty Award); Massachusetts Institute of Technology Media Laboratory (Consortium Members).

    Eye tracking in virtual reality: Vive Pro Eye spatial accuracy, precision, and calibration reliability

    A growing number of virtual reality devices now include eye tracking technology, which can facilitate oculomotor and cognitive research in VR and enable use cases like foveated rendering. These applications require different tracking performance, often measured as spatial accuracy and precision. While manufacturers report data quality estimates for their devices, these typically represent ideal performance and may not reflect real-world data quality. Additionally, it is unclear how accuracy and precision change across sessions within the same participant or between devices, and how performance is influenced by vision correction. Here, we measured spatial accuracy and precision of the Vive Pro Eye built-in eye tracker across a range of 30 visual degrees horizontally and vertically. Participants completed ten measurement sessions over multiple days, allowing us to evaluate calibration reliability. Accuracy and precision were highest for central gaze and decreased with greater eccentricity in both axes. Calibration was successful in all participants, including those wearing contacts or glasses, but glasses yielded significantly lower performance. We further found differences in accuracy (but not precision) between two Vive Pro Eye headsets, and estimated participants' inter-pupillary distance. Our metrics suggest high calibration reliability and can serve as a baseline for expected eye tracking performance in VR experiments.
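The two data-quality metrics studied above are commonly computed from fixation data as follows. This is a generic sketch with hypothetical gaze samples (in visual degrees), not the paper's analysis code: accuracy is taken as the angular offset between the mean gaze position and the known target, and precision as the RMS of sample-to-sample angular deviations.

```python
import numpy as np

# Hypothetical gaze samples (degrees) recorded while fixating a target at (10, 0).
target = np.array([10.0, 0.0])
samples = np.array([[10.3, 0.1], [10.2, -0.2], [10.4, 0.0], [10.1, 0.1]])

# Spatial accuracy: angular distance from the mean gaze position to the target.
accuracy = np.linalg.norm(samples.mean(axis=0) - target)

# Spatial precision: RMS of successive sample-to-sample angular distances.
diffs = np.linalg.norm(np.diff(samples, axis=0), axis=1)
precision = np.sqrt(np.mean(diffs ** 2))
```

With these toy samples the accuracy is 0.25 degrees; in practice both metrics would be computed per target position to capture the eccentricity dependence the study reports.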

    Off-Line Camera-Based Calibration for Optical See-Through Head-Mounted Displays

    In recent years, the entry into the market of self-contained optical see-through headsets with integrated multi-sensor capabilities has led the way to innovative and technology-driven augmented reality applications and has encouraged the adoption of these devices across highly challenging medical and industrial settings. Despite this, the display calibration process of consumer-level systems is still sub-optimal, particularly for those applications that require high accuracy in the spatial alignment between computer-generated elements and a real-world scene. State-of-the-art manual and automated calibration procedures designed to estimate all the projection parameters are too complex for real application cases outside laboratory environments. This paper describes an off-line fast calibration procedure that only requires a camera to observe a planar pattern displayed on the see-through display. The camera that replaces the user's eye must be placed within the eye-motion-box of the see-through display. The method exploits standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic position of the camera. At execution time, the projection parameters can then be refined through a planar homography that encapsulates the shift and scaling effect associated with the estimated relative translation from the old camera position to the current user's eye position. Compared to classical SPAAM techniques that still rely on the human element, and to other camera-based calibration procedures, the proposed technique is flexible and easy to replicate in both laboratory environments and real-world settings.
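The homography-based refinement step can be illustrated with a plain direct linear transform (DLT). The code below is a self-contained sketch, not the paper's implementation: it recovers the planar homography mapping the pattern corners rendered on the display to the corners observed from a shifted camera position, here simulated as a pure scale-plus-shift, which is exactly the shift-and-scaling effect the paper encapsulates in a homography.

```python
import numpy as np

def estimate_homography(src, dst):
    # Direct linear transform: 3x3 homography mapping src -> dst (n >= 4 points).
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null-space vector of the constraint matrix.
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Pattern corners in display space, and the same corners as seen from a
# shifted eye/camera position (scale + shift, hypothetical values).
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = src * 1.1 + np.array([0.05, -0.02])
H = estimate_homography(src, dst)
```

Applying H to the original corners reproduces the observed corners, so composing H with the previously estimated projection parameters refines them for the new viewpoint.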

    Optimization of computer-generated holography rendering and optical design for a compact and large-eyebox augmented reality glass

    Thesis (Master of Science in Informatics)--University of Tsukuba, no. 41288, 2019.3.2

    High-speed, image-based eye tracking with a scanning laser ophthalmoscope

    We demonstrate a high-speed, image-based tracking scanning laser ophthalmoscope (TSLO) that can provide high-fidelity structural images, real-time eye tracking, and targeted stimulus delivery. The system was designed for diffraction-limited performance over an 8° field of view (FOV) and operates with a flexible field of view of 1°-5.5°. Stabilized videos of the retina were generated showing an amplitude of motion after stabilization of 0.2 arcmin or less across all frequencies. In addition, the imaging laser can be modulated to place a stimulus on a targeted retinal location. We show a stimulus placement accuracy with a standard deviation less than 1 arcmin. With a smaller field size of 2°, individual cone photoreceptors were clearly visible at eccentricities outside of the fovea. © 2012 Optical Society of America

    Volumetric and Varifocal-Occlusion Augmented Reality Displays

    Augmented reality displays are a next-generation computing platform that offer unprecedented user experience by seamlessly combining physical and digital content, and could revolutionize the way we communicate, visualize, and interact with digital information. However, providing a seamless and perceptually realistic experience requires displays capable of presenting photorealistic imagery and, especially, perceptually realistic depth cues, resulting in virtual imagery being presented at any depth and of any opacity. Today's commercial augmented reality displays are far from perceptually realistic because they do not support important depth cues such as mutual occlusion and accommodation, resulting in a transparent image overlaid onto the real world at a fixed depth. Previous research prototypes fall short by presenting occlusion only for a fixed depth, and by presenting accommodation and defocus blur only for a narrow depth range, or with poor depth or spatial resolution. To address these challenges, this thesis explores a computational display approach, where the display's optics, electronics, and algorithms are co-designed to improve performance or enable new capabilities. In one design, a Volumetric Near-eye Augmented Reality Display was developed to simultaneously present many virtual objects at different depths across a large depth range (15-400 cm) without sacrificing spatial resolution, frame rate, or bit depth. This was accomplished by (1) synchronizing a high-speed Digital Micromirror Device (DMD) projector and a focus-tunable lens to periodically sweep out a volume composed of 280 single-color binary images in front of the user's eye, (2) a new voxel-oriented decomposition algorithm, and (3) per-depth-plane illumination control. In a separate design, for the first time, we demonstrate depth-correct occlusion in optical see-through augmented reality displays. 
This was accomplished with an optical system composed of two fixed-focus lenses and two focus-tunable lenses that dynamically move the occlusion and virtual image planes in depth, and by designing the optics to ensure unit magnification of the see-through real world irrespective of the occlusion or virtual image plane distance. Contributions of this thesis include new optical designs, new rendering algorithms, and prototype displays that demonstrate accommodation, defocus blur, and occlusion depth cues over an extended depth range. Doctor of Philosophy.
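The voxel-oriented decomposition can be illustrated by its plane-assignment step: each pixel of a depth map is lit on exactly one binary image, the one whose plane is nearest in dioptric (1/distance) terms, since a focus-tunable lens sweeps depth roughly linearly in diopters. The sketch below uses a handful of planes and a toy depth map; all values are hypothetical, and the real prototype sweeps 280 binary images over 15-400 cm.

```python
import numpy as np

# A few depth planes swept per frame by the focus-tunable lens, in cm.
planes_cm = np.array([15.0, 25.0, 50.0, 100.0, 400.0])

# Toy per-pixel depth map (cm) for a 2x2 "display".
depth = np.array([[16.0, 48.0], [120.0, 390.0]])

# Assign each pixel to the plane nearest in diopters (1/distance),
# matching how a tunable lens addresses focal depth.
nearest = np.argmin(np.abs(1.0 / planes_cm[:, None, None] - 1.0 / depth), axis=0)

# One binary image per plane: a pixel is lit only on its assigned plane.
binary_stack = np.stack([nearest == i for i in range(len(planes_cm))])
```

Each pixel appears in exactly one binary image, so sweeping the stack in sync with the lens places every virtual point at its correct focal depth.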

    Contributions to virtual reality

    153 p.

    The thesis contributes to three Virtual Reality areas:

    • Visual perception: a calibration algorithm is proposed to estimate stereo projection parameters in head-mounted displays, so that correct shapes and distances can be perceived, and calibration and control procedures are proposed to obtain desired accommodation stimuli at different virtual distances.
    • Immersive scenarios: the thesis analyzes several use cases demanding varying degrees of immersion, and special, innovative visualization solutions are proposed to fulfil their requirements. Contributions focus on machinery simulators, weather radar volumetric visualization, and manual arc welding simulation.
    • Ubiquitous visualization: contributions are presented for scenarios where users access interactive 3D applications remotely. The thesis follows the evolution of Web3D standards and technologies to propose original visualization solutions for volume rendering of weather radar data, e-learning on energy efficiency, virtual e-commerce, and visual product configurators.

    Iterative Solvers for Physics-based Simulations and Displays

    Realistic computer-generated images and simulations require complex models to properly capture the many subtle behaviors of each physical phenomenon. 
The mathematical equations underlying these models are complicated and cannot be solved analytically. Numerical procedures must thus be used to obtain approximate solutions. These procedures are often iterative algorithms, where an initial guess is progressively improved to converge to a desired solution. Iterative methods are a convenient and efficient way to compute solutions to complex systems, and are at the core of most modern simulation methods. In this thesis by publication, we present three papers where iterative algorithms play a major role in a simulation or rendering method. First, we propose a method to improve the visual quality of fluid simulations. By creating a high-resolution surface representation around an input fluid simulation, stabilized with iterative methods, we introduce additional details atop the simulation. Second, we describe a method to compute fluid simulations using model reduction. We design a novel vector field basis to represent fluid velocity, creating a method specifically tailored to improve all iterative components of the simulation. Finally, we present an algorithm to compute high-quality images for multifocal displays in a virtual reality context. Displaying images on multiple display layers incurs significant additional costs, but we formulate the image decomposition problem so as to allow an efficient solution using a simple iterative algorithm.
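The multifocal image-decomposition step described last can be sketched as a small projected-gradient iteration. The additive two-layer model and the per-layer attenuation weights below are assumptions for illustration, not the thesis's actual image-formation model: the loop drives the "seen" image (a weighted sum of the display layers) toward a target image while keeping layer intensities in the physically displayable [0, 1] range.

```python
import numpy as np

# Assumed formation model: the eye sees a weighted sum of the two layers.
w = np.array([0.7, 0.5])            # hypothetical per-layer attenuation weights
target = np.array([0.9, 0.3, 0.6])  # desired retinal intensities for 3 pixels

layers = np.zeros((2, 3))           # layer images, initialized black
step = 0.5                          # gradient step size

for _ in range(200):                # simple projected-gradient iteration
    residual = w @ layers - target           # error of the currently seen image
    layers -= step * np.outer(w, residual)   # gradient step on the squared error
    layers = np.clip(layers, 0.0, 1.0)       # project onto displayable range
```

Each iteration only needs a weighted sum and an outer product, which is why such decompositions can run at interactive rates even for full-resolution layer images.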