
    Computational See-Through Near-Eye Displays

    See-through near-eye displays with the form factor and field of view of eyeglasses are a natural choice for augmented reality systems: the non-encumbering size enables casual and extended use, and the large field of view enables general-purpose, spatially registered applications. However, designing displays with these attributes is currently an open problem. Support for enhanced realism through mutual occlusion and focal depth cues is also not found in eyeglasses-like displays. This dissertation provides a new strategy for eyeglasses-like displays that follows the principles of computational displays, devices that rely on software as a fundamental part of image formation. Such devices allow greater hardware simplicity and flexibility, showing greater promise of meeting form factor and field of view goals while enhancing realism. This computational approach is realized in two novel and complementary see-through near-eye display designs. The first, subtractive approach filters omnidirectional light through a set of optimized patterns displayed on a stack of spatial light modulators, reproducing a light field corresponding to in-focus imagery. The design is thin and scales to wide fields of view; see-through operation is achieved with transparent components placed directly in front of the eye. Preliminary support for focal cues and environment occlusion is also demonstrated. The second, additive approach uses structured point light illumination to form an image with a minimal set of rays. Each of an array of defocused point light sources is modulated by a region of a spatial light modulator, essentially encoding an image in the focal blur. See-through operation is likewise achieved with transparent components, and thin form factors and wide fields of view (≥ 100 degrees) are demonstrated. The designs are examined in theoretical terms, in simulation, and through prototype hardware with public demonstrations. This analysis shows that the proposed computational near-eye display designs offer a significantly different set of trade-offs than conventional optical designs. Several challenges remain to make the designs practical, most notably addressing diffraction limits.
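
    At its core, the subtractive design reduces to a nonnegative factorization problem: a ray passing through front-layer pixel i and rear-layer pixel j is attenuated by the product of the two transmittances, and time multiplexing several pattern pairs raises the achievable rank. The sketch below illustrates this with plain multiplicative NMF updates in NumPy; it is an illustrative stand-in for the dissertation's optimizer, and all shapes and values are invented.

```python
# Illustrative sketch: a ray through front pixel i and rear pixel j of a
# two-layer attenuation stack has transmittance A[i, k] * B[k, j] in frame k,
# and the eye averages over the K time-multiplexed frames. Reproducing a
# target light field T thus resembles nonnegative matrix factorization.
import numpy as np

def factor_light_field(T, K=4, iters=300, eps=1e-8):
    """Factor a nonnegative target T (m x n) into K frames of layer patterns."""
    m, n = T.shape
    rng = np.random.default_rng(0)
    A = rng.uniform(0.1, 1.0, (m, K))    # front-layer patterns (one column per frame)
    B = rng.uniform(0.1, 1.0, (K, n))    # rear-layer patterns (one row per frame)
    for _ in range(iters):
        A *= (T @ B.T) / (A @ B @ B.T + eps)   # Lee-Seung multiplicative updates
        B *= (A.T @ T) / (A.T @ A @ B + eps)   # keep patterns nonnegative by design
    return A, B

T = np.random.default_rng(1).uniform(0.0, 1.0, (64, 64))  # toy 2D slice of a light field
A, B = factor_light_field(T)
print("RMSE:", np.sqrt(np.mean((A @ B - T) ** 2)))
```

    A physical stack would additionally clip the patterns to the panels' transmittance range and trade residual error against brightness.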

    Focus 3D: Compressive Accommodation Display

    We present a glasses-free 3D display design with the potential to provide viewers with nearly correct accommodative depth cues, as well as motion parallax and binocular cues. Building on multilayer attenuator and directional backlight architectures, the proposed design achieves the high angular resolution needed for accommodation by placing spatial light modulators about a large lens: one conjugate to the viewer's eye, and one or more near the plane of the lens. Nonnegative tensor factorization is used to compress a high angular resolution light field into a set of masks that can be displayed on a pair of commodity LCD panels. By constraining the tensor factorization to preserve only those light rays seen by the viewer, we effectively steer narrow high-resolution viewing cones into the user's eyes, allowing binocular disparity, motion parallax, and the potential for nearly correct accommodation over a wide field of view. We verify the design experimentally by focusing a camera at different depths about a prototype display, establish formal upper bounds on the design's accommodation range and diffraction-limited performance, and discuss practical limitations that must be overcome to allow the device to be used with human observers.
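
    The viewer constraint is the distinguishing ingredient: only rays that actually enter the pupil need to be reproduced. A minimal way to express this, assuming a matrix rather than tensor formulation and an invented binary visibility mask, is weighted nonnegative factorization:

```python
# Sketch of viewer-constrained mask optimization: W marks the rays that reach
# the viewer's pupil, and the factorization only pays for error on those rays.
# Weighted NMF here stands in for the paper's nonnegative tensor
# factorization; all shapes and the mask are illustrative.
import numpy as np

def weighted_factor(T, W, K=4, iters=300, eps=1e-8):
    m, n = T.shape
    rng = np.random.default_rng(0)
    A = rng.uniform(0.1, 1.0, (m, K))   # patterns for the first LCD panel
    B = rng.uniform(0.1, 1.0, (K, n))   # patterns for the second LCD panel
    for _ in range(iters):
        A *= ((W * T) @ B.T) / ((W * (A @ B)) @ B.T + eps)
        B *= (A.T @ (W * T)) / (A.T @ (W * (A @ B)) + eps)
    return A, B

T = np.random.default_rng(1).uniform(0.0, 1.0, (64, 64))  # toy light field slice
W = np.zeros_like(T)
W[:, 24:40] = 1.0                       # only a narrow viewing cone is visible
A, B = weighted_factor(T, W)
print("visible-ray RMSE:", np.sqrt(np.mean((W * (A @ B - T)) ** 2)))
```

    Concentrating the factorization budget on visible rays is what lets narrow, high-resolution viewing cones be steered toward the eyes.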

    Neural Étendue Expander for Ultra-Wide-Angle High-Fidelity Holographic Display

    Holographic displays can generate light fields by dynamically modulating the wavefront of a coherent beam of light using a spatial light modulator, promising rich virtual and augmented reality applications. However, the limited spatial resolution of existing dynamic spatial light modulators imposes a tight bound on the diffraction angle. As a result, modern holographic displays possess low étendue, which is the product of the display area and the maximum solid angle of diffracted light. The low étendue forces a sacrifice of either the field of view (FOV) or the display size. In this work, we lift this limitation by presenting neural étendue expanders. This new breed of optical elements, which is learned from a natural image dataset, enables higher diffraction angles for ultra-wide FOV while maintaining both a compact form factor and the fidelity of displayed contents to human viewers. With neural étendue expanders, we experimentally achieve 64× étendue expansion of natural images in full color, expanding the FOV by an order of magnitude horizontally and vertically, with high-fidelity reconstruction quality (measured in PSNR) over 29 dB on retinal-resolution images.
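
    Conceptually, the expander is a fixed high-resolution phase element optimized over an image dataset jointly with the per-image SLM patterns, through a differentiable propagation model. The PyTorch sketch below uses a simple single-FFT far-field model and invented sizes; it shows the joint-optimization structure, not the paper's actual training setup.

```python
# Sketch: learn a static high-resolution phase element ("expander") placed
# after a low-resolution SLM, so the combined field reproduces targets over
# a grid 4x finer per axis than the SLM. All sizes are illustrative.
import torch

N_slm, up = 64, 4                  # SLM pixels per side; 4x expansion per axis
N = N_slm * up                     # expander (and image) resolution
torch.manual_seed(0)
expander = torch.randn(N, N, requires_grad=True)        # learned static phase element
slm = torch.randn(8, N_slm, N_slm, requires_grad=True)  # per-image SLM phase patterns
targets = torch.rand(8, N, N)                           # toy training images
opt = torch.optim.Adam([expander, slm], lr=0.05)

for step in range(500):
    # Upsample the SLM field: each coarse SLM pixel covers an up x up cell
    field = torch.exp(1j * slm)
    field = field.repeat_interleave(up, dim=-1).repeat_interleave(up, dim=-2)
    field = field * torch.exp(1j * expander)            # pass through the expander
    far = torch.fft.fftshift(torch.fft.fft2(field), dim=(-2, -1))
    recon = far.abs()
    recon = recon / recon.amax(dim=(-2, -1), keepdim=True).clamp_min(1e-8)
    loss = torch.mean((recon - targets) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final reconstruction MSE:", loss.item())
```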

    Opportunity rover localization and topographic mapping at the landing site of Meridiani Planum, Mars

    This paper presents the results of Mars topographic mapping and lander and rover localization for the Opportunity rover at Meridiani Planum during the Mars Exploration Rover (MER) 2003 mission. By Sol 458, the Opportunity rover had traversed a distance of 5.20 km. We localized the lander using two-way Doppler radio positioning and cartographic triangulation of craters visible in both orbital and ground images. Additional high-resolution orbital images were taken to verify the determined lander position. Visual odometry and bundle adjustment techniques were applied to overcome wheel slippage, azimuthal angle drift, and other navigation errors (as large as 21% within Eagle crater). In addition, orbit-to-ground image-based adjustment was applied to correct rover location errors where bundle adjustment was not applicable. We generated timely topographic products, including orthoimages, digital terrain models (DTMs), three-dimensional (3-D) crater models, and rover traverse maps. In particular, detailed 3-D terrain models of major features, such as Endurance crater, have been generated using multisite panoramic stereo images based on bundle adjustment and wide-baseline stereo techniques.
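
    As a toy illustration of the cartographic triangulation step, the sketch below recovers a 2D lander position from azimuth bearings to craters whose map coordinates are known from orbital imagery. The real pipeline combined this with two-way Doppler positioning and full bundle adjustment; all coordinates here are made up.

```python
# Hypothetical example: least-squares triangulation of a lander position from
# bearings to landmarks with known map coordinates.
import numpy as np
from scipy.optimize import least_squares

craters = np.array([[120.0, 340.0], [480.0, 90.0], [260.0, 520.0]])  # map coords (m)
true_pos = np.array([300.0, 300.0])
d = craters - true_pos
bearings = np.arctan2(d[:, 1], d[:, 0])       # azimuths measured from the lander

def residuals(p):
    pred = np.arctan2(craters[:, 1] - p[1], craters[:, 0] - p[0])
    return np.angle(np.exp(1j * (pred - bearings)))   # wrap-safe angle differences

fit = least_squares(residuals, x0=np.array([0.0, 0.0]))
print("estimated lander position:", fit.x)    # recovers ~(300, 300)
```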

    Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission

    In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downward-looking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
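
    The core computation inside stereo-based visual odometry is compact enough to sketch: stereo triangulation yields 3D feature positions before and after a motion step, and the rover's motion is the rigid transform that best aligns the matched sets. Below is the standard SVD (Kabsch) solution in NumPy, without the feature matching and outlier rejection a flight system needs; the data are synthetic.

```python
# Sketch: estimate rigid motion (R, t) from matched 3D feature sets, the
# geometric core of stereo visual odometry.
import numpy as np

def rigid_motion(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t for matched 3xN point sets."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(0)
P = rng.uniform(-5.0, 5.0, (3, 30))              # 3D features before the drive step
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = R_true @ P + np.array([[0.4], [0.1], [0.0]]) # features after a 10-degree yaw + slip
R, t = rigid_motion(P, Q)
print("recovered yaw (deg):", np.rad2deg(np.arctan2(R[1, 0], R[0, 0])))
```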

    Holographic optics for thin and lightweight virtual reality


    Computer Vision on Mars

    Increasing the level of spacecraft autonomy is essential for broadening the reach of solar system exploration. Computer vision has played, and will continue to play, an important role in increasing the autonomy of both spacecraft and Earth-based robotic vehicles. This article addresses progress on computer vision for planetary rovers and landers and has four main parts. First, we review major milestones in the development of computer vision for robotic vehicles over the last four decades. Since research on applications for Earth and space has often been closely intertwined, the review includes elements of both. Second, we summarize the design and performance of the computer vision algorithms used on Mars in the NASA/JPL Mars Exploration Rover (MER) mission, which was a major step forward in the use of computer vision in space. These algorithms performed stereo vision and visual odometry for rover navigation, and feature tracking for horizontal velocity estimation on the landers. Third, we summarize ongoing research to improve vision systems for planetary rovers, including various aspects of noise reduction, FPGA implementation, and vision-based slip perception. Finally, we briefly survey other opportunities for computer vision to impact rovers, landers, and orbiters in future solar system exploration missions.
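
    For readers unfamiliar with the stereo step, the underlying geometry is compact: for a rectified camera pair, depth follows from disparity as Z = f·B/d, with focal length f (in pixels), baseline B, and disparity d. A small sketch with illustrative (non-MER) parameters:

```python
# Depth from disparity for a rectified stereo pair; parameters are
# illustrative, not those of the MER cameras.
import numpy as np

f_px = 700.0        # focal length in pixels
baseline_m = 0.20   # distance between the two cameras in meters

def disparity_to_depth(disparity_px):
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, f_px * baseline_m / np.maximum(d, 1e-9), np.inf)

print(disparity_to_depth([35.0, 7.0, 1.0]))  # nearer terrain -> larger disparity
```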

    Pupil-aware Holography

    Holographic displays promise to deliver unprecedented display capabilities in augmented reality applications, featuring a wide field of view, a wide color gamut, high spatial resolution, and depth cues, all in a compact form factor. While emerging holographic display approaches have been successful in achieving large étendue and high image quality as seen by a camera, the large étendue also reveals a problem that makes existing displays impractical: the sampling of the holographic field by the eye pupil. Existing methods have not investigated this issue due to the lack of displays with large enough étendue, and, as such, they suffer from severe artifacts with varying eye pupil size and location. We show that the holographic field as sampled by the eye pupil is highly varying for existing display setups, and we propose pupil-aware holography, which maximizes the perceptual image quality irrespective of the size, location, and orientation of the eye pupil in a near-eye holographic display. We validate the proposed approach both in simulations and on a prototype holographic display, and show that our method eliminates severe artifacts and significantly outperforms existing approaches.
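
    The optimization idea can be sketched compactly: rather than computing a hologram for one fixed pupil, each gradient step applies a randomly placed and sized pupil aperture in the Fourier plane, so the converged phase pattern stays faithful across pupil states. The PyTorch sketch below uses a single-FFT propagation model and invented parameters; it mirrors the stochastic-pupil idea, not the paper's exact perceptual loss.

```python
# Sketch: phase-only hologram optimization with a randomly sampled circular
# eye-pupil aperture applied in the Fourier plane at each step.
import torch

N = 128
torch.manual_seed(0)
target = torch.rand(N, N)                       # toy target image
phase = torch.zeros(N, N, requires_grad=True)   # SLM phase pattern to optimize
opt = torch.optim.Adam([phase], lr=0.1)
yy, xx = torch.meshgrid(torch.arange(N), torch.arange(N), indexing="ij")

for step in range(300):
    # Sample a circular pupil with random center and radius in the Fourier plane
    cy, cx = torch.randint(N // 4, 3 * N // 4, (2,))
    r = torch.randint(N // 8, N // 4, (1,)).item()
    pupil = (((yy - cy) ** 2 + (xx - cx) ** 2) <= r ** 2).float()

    field = torch.fft.fftshift(torch.fft.fft2(torch.exp(1j * phase)))
    img = torch.fft.ifft2(torch.fft.ifftshift(field * pupil)).abs()
    img = img / img.max().clamp_min(1e-8)
    loss = torch.mean((img - target) ** 2)      # averages over pupils via SGD
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```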