6 research outputs found

    Ghost imaging with the human eye

    Computational ghost imaging relies on decomposing an image into patterns that are summed together with weights measuring the overlap of each pattern with the scene being imaged. These tasks normally rely on a computer. Here we demonstrate that the computational integration can be performed directly with the human eye. We use this human ghost imaging technique to evaluate the temporal response of the eye, establishing an image persistence time of around 20 ms followed by a further 20 ms exponential decay. These persistence times agree with previous studies, but the approach can now potentially be extended to a more precise characterisation of visual stimuli and provides a new experimental tool for the study of visual perception.
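The weighted-sum reconstruction the abstract describes can be sketched numerically. The following is a minimal, purely illustrative simulation; the scene, pattern count, and correlation estimator are assumptions for demonstration, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 16x16 binary scene to be imaged (illustrative only).
scene = np.zeros((16, 16))
scene[4:12, 6:10] = 1.0

n_patterns = 10_000
bp_sum = np.zeros_like(scene)  # accumulates bucket * pattern
p_sum = np.zeros_like(scene)   # accumulates patterns
b_sum = 0.0                    # accumulates bucket signals

for _ in range(n_patterns):
    # Random binary illumination pattern projected onto the scene.
    pattern = rng.integers(0, 2, size=scene.shape).astype(float)
    # Single-pixel "bucket" detector: total light returned by the scene.
    bucket = float(np.sum(pattern * scene))
    bp_sum += bucket * pattern
    p_sum += pattern
    b_sum += bucket

# Correlation estimator <B*P> - <B><P>: each pattern is weighted by its
# overlap with the scene, and the mean background term is subtracted.
recon = bp_sum / n_patterns - (b_sum / n_patterns) * (p_sum / n_patterns)
```

In the human-eye variant described above, this summation is performed by the visual system's own temporal integration rather than in software.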

    High-Dynamic-Range and High-Efficiency Near-Eye Display Systems

    Near-eye display systems, which project digital information directly into the human visual system, are expected to revolutionize the interface between digital information and the physical world. However, the image quality of most near-eye displays is still far inferior to that of direct-view displays. Both the light engine and the imaging optics of near-eye display systems contribute to this degraded image quality. In addition, near-eye displays suffer from relatively low optical efficiency, which severely limits device operation time; this efficiency loss originates from both the light engine and the projection process. This dissertation is devoted to addressing these two critical issues from a whole-system perspective. In Chapter 2, we propose design guidelines for miniature light-emitting-diode (mLED) backlit liquid crystal displays (LCDs) that mitigate halo artifacts. After developing a high-dynamic-range (HDR) light engine in Chapter 3, we establish a systematic image quality evaluation model for virtual reality (VR) devices and analyze the requirements it places on light engines. Our guidelines for mLED backlit LCDs have been widely adopted in direct-view displays, and the newly established criteria for light engines should likewise guide future VR display development. To improve the optical efficiency of near-eye displays, each component must be optimized. For the light engine, we focus on color-converted micro-LED microdisplays: we fabricate a pixelated cholesteric liquid crystal film on top of a pixelated quantum-dot (QD) array to recycle the leaked blue light, which in turn doubles the optical efficiency and widens the color gamut. In Chapter 5, we tailor the radiation pattern of the light engine to match the etendue of the imaging optics, greatly reducing the power loss in the projection process. The system efficiency is enhanced by over one-third for both organic light-emitting diode (OLED) displays and LCDs while keeping image nonuniformity imperceptible. In Chapter 6, we briefly summarize our major accomplishments.
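The halo artifact that Chapter 2 targets arises because a coarse mLED backlight lights whole zones while the LCD's finite contrast ratio leaks light through dark pixels. A minimal sketch of that mechanism, with all numbers (zone size, contrast ratio, test image) chosen for illustration rather than taken from the dissertation:

```python
import numpy as np

# Hypothetical HDR target: a small bright dot on a very dark background.
target = np.full((64, 64), 1e-4)
target[30:34, 30:34] = 1.0

zone = 16                 # backlight zone size in pixels (illustrative)
contrast_ratio = 1000.0   # LC panel contrast; leakage floor = 1/contrast_ratio

# Step 1: drive each backlight zone to the peak luminance it must supply.
backlight = np.zeros_like(target)
for i in range(0, target.shape[0], zone):
    for j in range(0, target.shape[1], zone):
        backlight[i:i+zone, j:j+zone] = target[i:i+zone, j:j+zone].max()

# Step 2: the LCD attenuates the backlight toward the target, but cannot
# attenuate below the leakage floor, so dark pixels in a lit zone glow.
lcd = np.clip(target / np.maximum(backlight, 1e-9), 1.0 / contrast_ratio, 1.0)
displayed = lcd * backlight

# The halo: a dark pixel sharing a zone with the dot vs. one far from it.
halo_pixel = displayed[20, 20]   # dark pixel inside a lit zone
dark_pixel = displayed[5, 5]     # dark pixel in an unlit zone
```

Finer zones (more mLEDs) shrink the lit region around bright content, which is the kind of lever such design guidelines tune against cost and power.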

    Low Latency Displays for Augmented Reality

    The primary goal of Augmented Reality (AR) is to bring the real and the virtual together into a common space. Maintaining this illusion, however, requires preserving spatially and temporally consistent registration despite changes in user or object pose. The greatest source of registration error is latency, the delay between when something moves and when the display changes in response, which breaks temporal consistency. Furthermore, the real world varies greatly in brightness, ranging from bright sunlight to deep shadow, so a compelling AR system must also support High-Dynamic-Range (HDR) output to keep its virtual objects' appearance spatially and temporally consistent with the real world. This dissertation presents new methods, implementations, results (both visual and performance), and future steps for low-latency displays, primarily in the context of Optical See-Through Augmented Reality (OST-AR) Head-Mounted Displays, focusing on temporal consistency in registration, HDR color support, and spatial and temporal consistency in brightness. 1. For registration temporal consistency, the primary insight is breaking the conventional display paradigm, in which computers render imagery frame by frame and then transmit it to the display for emission. Instead, the display must also contribute to rendering by performing a post-rendering, post-transmission warp of the computer-supplied imagery in the display hardware. By compensating in the display for system latency using the latest tracking information, much of the latency can be short-circuited. Furthermore, the low-latency display must support ultra-high-frequency (multiple kHz) refreshing to minimize pose displacement between updates. 2. For HDR color support, the primary insight is developing new display modulation techniques. DMDs, a type of ultra-high-frequency display, emit binary output, which requires modulation to produce multiple brightness levels. Conventional modulation breaks the low-latency guarantees, and modulating bright LED illuminators at the frequencies needed for kHz-order HDR exceeds their capabilities; thus the necessary variation in brightness must be synthesized directly. 3. For spatial and temporal brightness consistency, the primary insight is integrating HDR light sensors into the display hardware: the same processes that compensate for latency and generate HDR output can also modify that output in response to the spatially sensed brightness of the real world.
Doctor of Philosophy
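The binary-output constraint in point 2 is conventionally handled by bit-plane pulse-width modulation, which is exactly what breaks the low-latency guarantee: the most significant bit plane alone occupies half the frame period. A minimal sketch of that conventional scheme (the frame contents are illustrative, and this is the baseline the dissertation moves away from, not its proposed method):

```python
import numpy as np

# Hypothetical 8-bit grayscale frame (illustrative values).
frame = np.array([[0, 7],
                  [128, 255]], dtype=np.uint8)

# A DMD mirror is strictly on/off, so gray comes from time: bit plane k
# is displayed for a slice proportional to 2**k of the frame period.
bitplanes = [(frame >> k) & 1 for k in range(8)]

# The viewer's eye integrates the weighted slices back into gray levels.
reconstructed = sum(bp.astype(np.uint16) << k for k, bp in enumerate(bitplanes))

# Time budget per bit plane: the MSB slice alone is ~half the period, so
# the image cannot be updated mid-frame without corrupting gray levels.
slice_fraction = [2**k / 255 for k in range(8)]
```

Because the MSB slice cannot be interrupted, pose updates arriving during it are stale by up to half a frame, motivating the direct brightness synthesis described in the abstract.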

    Volumetric and Varifocal-Occlusion Augmented Reality Displays

    Augmented Reality displays are a next-generation computing platform that offers an unprecedented user experience by seamlessly combining physical and digital content, and could revolutionize the way we communicate, visualize, and interact with digital information. However, providing a seamless and perceptually realistic experience requires displays capable of presenting photorealistic imagery and, especially, perceptually realistic depth cues, so that virtual imagery can be presented at any depth and with any opacity. Today's commercial augmented reality displays are far from perceptually realistic because they do not support important depth cues such as mutual occlusion and accommodation, resulting in a transparent image overlaid onto the real world at a fixed depth. Previous research prototypes fall short by presenting occlusion only at a fixed depth, and by presenting accommodation and defocus blur only over a narrow depth range or with poor depth or spatial resolution. To address these challenges, this thesis explores a computational display approach, in which the display's optics, electronics, and algorithms are co-designed to improve performance or enable new capabilities. In one design, a Volumetric Near-eye Augmented Reality Display was developed to simultaneously present many virtual objects at different depths across a large depth range (15–400 cm) without sacrificing spatial resolution, frame rate, or bit depth. This was accomplished by (1) synchronizing a high-speed Digital Micromirror Device (DMD) projector and a focus-tunable lens to periodically sweep out a volume composed of 280 single-color binary images in front of the user's eye, (2) a new voxel-oriented decomposition algorithm, and (3) per-depth-plane illumination control. In a separate design, we demonstrate, for the first time, depth-correct occlusion in optical see-through augmented reality displays. This was accomplished by building an optical system composed of two fixed-focus lenses and two focus-tunable lenses that dynamically moves the occlusion and virtual-image planes in depth, and by designing the optics to ensure unit magnification of the see-through real world irrespective of the occlusion or virtual-image plane distance. Contributions of this thesis include new optical designs, new rendering algorithms, and prototype displays that demonstrate accommodation, defocus-blur, and occlusion depth cues over an extended depth range.
Doctor of Philosophy
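The focal-sweep idea in the first design can be sketched as assigning each pixel to one of the 280 binary planes swept by the focus-tunable lens. This is only a schematic of the concept; the toy scene, diopter-spaced planes, and crude thresholding are assumptions, not the thesis's voxel-oriented decomposition algorithm:

```python
import numpy as np

n_planes = 280                 # binary subframes per sweep, as in the design
near_cm, far_cm = 15.0, 400.0  # depth range covered by the focus-tunable lens

rng = np.random.default_rng(1)
depth_cm = rng.uniform(near_cm, far_cm, size=(8, 8))  # toy per-pixel depth
intensity = rng.random((8, 8))                        # toy per-pixel brightness

# Sweep linearly in diopters (1/distance): equal optical-power steps give
# roughly uniform accommodative spacing across the 15-400 cm volume.
d_near, d_far = 1.0 / near_cm, 1.0 / far_cm
diopters = 1.0 / depth_cm
plane_idx = np.round(
    (diopters - d_far) / (d_near - d_far) * (n_planes - 1)
).astype(int)

# One binary DMD subframe per focal plane: a pixel fires only while the
# lens is focused at its depth. Brightness is crudely thresholded here;
# the real system also uses per-depth-plane illumination control.
subframes = np.zeros((n_planes, 8, 8), dtype=bool)
for k in range(n_planes):
    subframes[k] = (plane_idx == k) & (intensity > 0.5)
```

Each sweep replays all 280 subframes fast enough that the eye fuses them into a single volume, with every pixel lit at its own focal distance.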