15 research outputs found

    High-dynamic-range Foveated Near-eye Display System

    Wearable near-eye displays have found widespread applications in education, gaming, entertainment, engineering, military training, and healthcare, just to name a few. However, the visual experience provided by current near-eye displays still falls short of what we can perceive in the real world. Three major challenges remain to be overcome: 1) limited dynamic range in display brightness and contrast, 2) inadequate angular resolution, and 3) the vergence-accommodation conflict (VAC). This dissertation is devoted to addressing these three critical issues from both the display panel development and optical system design viewpoints. A high-dynamic-range (HDR) display requires both high peak brightness and an excellent dark state. In the second and third chapters, two mainstream display technologies, namely liquid crystal display (LCD) and organic light-emitting diode (OLED), are investigated to extend their dynamic range. On one hand, an LCD can easily boost its peak brightness to over 1000 nits, but it is challenging to lower its dark state to < 0.01 nits. To achieve HDR, we propose to use a mini-LED local dimming backlight. Based on our simulations and subjective experiments, we establish practical guidelines correlating the device contrast ratio, viewing distance, and required local dimming zone number. On the other hand, a self-emissive OLED display exhibits a true dark state, but boosting its peak brightness would unavoidably compromise its lifetime. We propose a systematic approach to enhance the OLED's optical efficiency while keeping the angular color shift indistinguishable. These findings will shed new light on future HDR display designs. In Chapter four, to improve angular resolution, we demonstrate a multi-resolution foveated display system with two display panels and an optical combiner. The first display panel provides a wide field of view for peripheral vision, while the second panel offers ultra-high resolution for the central fovea. Using an optical minifying system, both 4x and 5x resolution enhancements are demonstrated. In addition, a Pancharatnam-Berry phase deflector is applied to actively shift the high-resolution region, enabling an eye-tracking function. The proposed design effectively reduces the pixelation and screen-door effect in near-eye displays. The VAC in stereoscopic displays is believed to be the main cause of visual discomfort and fatigue when wearing VR headsets. In Chapter five, we propose a novel polarization-multiplexing approach to achieve a multiplane display. A polarization-sensitive Pancharatnam-Berry phase lens and a spatial polarization modulator are employed to create two independent focal planes simultaneously. This method generates two image planes without the need for temporal multiplexing and therefore effectively halves the required frame rate. In Chapter six, we briefly summarize our major accomplishments
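    The polarization-multiplexing idea in Chapter five can be illustrated with simple thin-lens arithmetic: a Pancharatnam-Berry (PB) phase lens has focal length +f for one circular polarization and -f for the other, so each handedness sees a different combined optical power and forms its own image plane. The sketch below is only an illustration; all focal lengths and distances are assumed values, not the dissertation's actual design parameters.

```python
# Sketch: a Pancharatnam-Berry (PB) lens acts as a lens of focal length
# +f_pb for one circular polarization and -f_pb for the other. Stacked
# with a fixed eyepiece, each handedness sees a different combined power,
# so two independent image planes appear simultaneously.
# All numeric values below are illustrative assumptions.

def combined_focal_length(f_eyepiece_mm, f_pb_mm):
    """Thin-lens powers add for two lenses in contact: 1/f = 1/f1 + 1/f2."""
    return 1.0 / (1.0 / f_eyepiece_mm + 1.0 / f_pb_mm)

def image_distance(f_mm, object_distance_mm):
    """Thin-lens equation: 1/v = 1/f - 1/u, with object distance u > 0."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

f_eye = 50.0    # eyepiece focal length in mm (assumed)
f_pb = 1000.0   # PB lens focal length magnitude in mm (assumed)
u = 45.0        # display panel placed inside the eyepiece focus (assumed)

# Right- vs left-handed circular polarization see +f_pb vs -f_pb:
f_plus = combined_focal_length(f_eye, +f_pb)
f_minus = combined_focal_length(f_eye, -f_pb)

v_plus = image_distance(f_plus, u)    # virtual image plane 1 (negative)
v_minus = image_distance(f_minus, u)  # virtual image plane 2 (negative)
print(v_plus, v_minus)  # two distinct virtual image distances
```

With these assumed numbers the two polarizations see virtual image planes at roughly 0.8 m and 0.3 m, i.e. two focal planes rendered at the same time rather than time-sequentially.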

    Optical simulation, modeling and evaluation of 3D medical displays


    High-Dynamic-Range and High-Efficiency Near-Eye Display Systems

    Near-eye display systems, which project digital information directly into the human visual system, are expected to revolutionize the interface between digital information and the physical world. However, the image quality of most near-eye displays is still far inferior to that of direct-view displays. Both the light engine and the imaging optics of a near-eye display system contribute to the degraded image quality. In addition, near-eye displays also suffer from a relatively low optical efficiency, which severely limits the device operation time. This efficiency loss originates from both the light engine and the projection process. This dissertation is devoted to addressing these two critical issues from the entire-system perspective. In Chapter 2, we propose useful design guidelines for miniature light-emitting diode (mLED) backlit liquid crystal displays (LCDs) to mitigate halo artifacts. After developing a high-dynamic-range (HDR) light engine in Chapter 3, we establish a systematic image quality evaluation model for virtual reality (VR) devices and analyze the requirements for light engines. Our guidelines for mLED backlit LCDs have been widely practiced in direct-view displays. Similarly, the newly established criteria for light engines will shed new light on future VR display development. To improve the optical efficiency of near-eye displays, we must optimize each component. For the light engine, we focus on color-converted micro-LED microdisplays. We fabricate a pixelated cholesteric liquid crystal film on top of a pixelated quantum dot (QD) array to recycle the leaked blue light, which in turn doubles the optical efficiency and widens the color gamut. In Chapter 5, we tailor the radiation pattern of the light engine to match the etendue of the imaging system; as a result, the power loss in the projection process is greatly reduced. The system efficiency is enhanced by over one-third for both organic light-emitting diode (OLED) displays and LCDs while keeping the image nonuniformity indistinguishable. In Chapter 6, we briefly summarize our major accomplishments
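    The etendue-matching argument above can be made concrete with a one-line model: a Lambertian emitter radiates the fraction sin²θ of its power inside a cone of half-angle θ, so any emission outside the acceptance cone of the projection optics is simply lost. The snippet below is a hedged sketch of that reasoning; the 30° acceptance angle is an assumption for illustration, not a value from the dissertation.

```python
import math

# Sketch of the etendue-matching argument: for a Lambertian emitter, the
# fraction of emitted power inside a cone of half-angle theta is
# sin^2(theta). A projection lens with a limited acceptance cone discards
# the rest, so tailoring the source radiation pattern toward the lens
# etendue recovers that loss. The acceptance angle is an assumed value.

def lambertian_fraction(half_angle_deg):
    """Power fraction of a Lambertian source within a cone of given half-angle."""
    return math.sin(math.radians(half_angle_deg)) ** 2

acceptance = 30.0  # lens acceptance half-angle in degrees (assumed)
captured = lambertian_fraction(acceptance)  # 0.25 for a 30-degree cone
lost = 1.0 - captured                       # 0.75 wasted in projection
print(f"captured: {captured:.2f}, lost: {lost:.2f}")
```

Under this assumption only a quarter of a Lambertian source's output reaches the eye box, which is why reshaping the angular emission profile of the light engine pays off so directly in system efficiency.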

    Crosstalk in stereoscopic displays

    Crosstalk is an important image quality attribute of stereoscopic 3D displays. The research presented in this thesis examines the presence, mechanisms, simulation, and reduction of crosstalk for a selection of stereoscopic display technologies. High levels of crosstalk degrade the perceived quality of stereoscopic displays, hence it is important to minimise crosstalk. This thesis provides new insights which are critical to a detailed understanding of crosstalk and, consequently, to the development of effective crosstalk reduction techniques
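    As a concrete reference point, one widely used operational definition expresses crosstalk as the black-corrected leakage luminance divided by the black-corrected signal luminance. The sketch below uses made-up luminance values purely for illustration; it is not data from, nor necessarily the exact definition adopted in, the thesis.

```python
# One common operational definition of stereoscopic crosstalk (several
# variants exist in the literature): black-corrected leakage luminance
# over black-corrected signal luminance, in percent. All measurement
# values below are hypothetical, for illustration only.

def crosstalk_percent(l_leak, l_signal, l_black):
    """Crosstalk (%) = 100 * (leakage - black) / (signal - black)."""
    return 100.0 * (l_leak - l_black) / (l_signal - l_black)

# Hypothetical luminance measurements in cd/m^2 for one eye's view:
l_black = 0.5    # both eye channels driven black
l_signal = 80.5  # intended channel white, other channel black
l_leak = 4.5     # intended channel black, other channel white

print(crosstalk_percent(l_leak, l_signal, l_black))  # -> 5.0
```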

    Towards Highly-Integrated Stereovideoscopy for in vivo Surgical Robots

    When compared to traditional surgery, laparoscopic procedures result in better patient outcomes: shorter recovery, reduced post-operative pain, and less trauma to incised tissue. Unfortunately, laparoscopic procedures require specialized training for surgeons, as these minimally-invasive procedures provide an operating environment with limited dexterity and limited vision. Advanced surgical robotics platforms can make minimally-invasive techniques safer and easier for the surgeon to complete successfully. The most common type of surgical robotics platform -- the laparoscopic robot -- accomplishes this with multi-degree-of-freedom manipulators that are capable of a more diversified set of movements than traditional laparoscopic instruments. Also, these laparoscopic robots enable advanced kinematic translation techniques that let the surgeon focus on the surgical site while the robot calculates the best possible joint positions to complete any surgical motion. An important component of these systems is the endoscopic system used to transmit a live view of the surgical environment to the surgeon. Coupled with 3D high-definition endoscopic cameras, the platform, in effect, eliminates the peculiarities associated with laparoscopic procedures, which allows less-skilled surgeons to complete minimally-invasive surgical procedures quickly and accurately. A much newer approach to performing minimally-invasive surgery is the idea of using in-vivo surgical robots -- small robots that are inserted directly into the patient through a single, small incision; once inside, an in-vivo robot can perform surgery at arbitrary positions, with a much wider range of motion. While laparoscopic robots can harness traditional endoscopic video solutions, these in-vivo robots require a fundamentally different video solution that is as flexible as possible and free of bulky cables or fiber optics.
This requires a miniaturized videoscopy system that incorporates an image sensor with a transceiver; because of severe size constraints, this system should be deeply embedded into the robotics platform. Here, early results are presented from the integration of a miniature stereoscopic camera into an in-vivo surgical robotics platform. A 26 mm × 24 mm stereo camera was designed and manufactured. The proposed device features USB connectivity and 1280 × 720 resolution at 30 fps. Resolution testing indicates the device performs much better than similarly-priced analog cameras. The suitability of the platform for 3D computer vision tasks -- including stereo reconstruction -- is examined. The platform was also tested in a living porcine model at the University of Nebraska Medical Center. Results from this experiment suggest that while the platform performs well in controlled, static environments, further work is required to obtain usable results in true surgeries. Concluding, several ideas for improvement are presented, along with a discussion of core challenges associated with the platform. Adviser: Lance C. Pérez
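    The stereo-reconstruction task mentioned above rests on a single triangulation relation for a rectified stereo pair: depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. The sketch below illustrates that relation; the camera parameters are assumptions, not the actual calibration of the device described here.

```python
# Minimal stereo-triangulation sketch for a calibrated, rectified stereo
# pair: depth Z = f * B / d. Parameters below are illustrative, not the
# calibration of the 26 mm x 24 mm camera discussed in the thesis.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulated depth (mm) for a rectified parallel stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

f_px = 700.0      # focal length in pixels (assumed)
baseline = 10.0   # camera separation in mm (assumed)

print(depth_from_disparity(f_px, baseline, 35.0))  # -> 200.0 mm
print(depth_from_disparity(f_px, baseline, 70.0))  # -> 100.0 mm (closer)
```

Note the inverse relation: nearer objects produce larger disparities, and depth resolution degrades quadratically with distance, which is one reason a short-baseline miniature camera struggles in the large, deformable scenes of a real surgery.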

    Enhancement of the viewing characteristics of depth-fused displays using autostereoscopic 3D displays and projection-type displays

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2015. Advisor: Byoungho Lee. In this dissertation, various methods for enhancing the viewing characteristics of the depth-fused display are proposed, in combination with projection-type displays or integral imaging display technologies. The depth-fused display (DFD) is a kind of volumetric three-dimensional (3D) display composed of multiple slices of depth images. With proper weighting of the luminance of the images on the visual axis of the observer, it provides a continuous change of accommodation within the volume confined by the display layers. Because of this volumetric property, depth-fused 3D images can appear very natural, but the base images must be located at exact positions on the viewing axis so that they superimpose completely. If this condition is not satisfied, the images are observed as two separated images instead of a continuous volume. This viewing characteristic severely restricts the viewing condition of the DFD, limiting its applications. While increasing the number of layers can widen the viewing angle and depth range by voxelizing the reconstructed 3D images, the required system complexity also increases with the number of image layers. To solve this problem with a relatively simple system configuration, hybrid techniques are proposed for DFDs. The hybrid technique is the combination of the DFD with other display technologies such as projection-type displays or autostereoscopic displays. The projection-type display can be combined with the polarization-encoded depth method for the projection of 3D information. Because the depth information is conveyed by polarization states, there is no degradation in spatial resolution or video frame rate in the reconstructed 3D images. The polarized depth images are partially selected at the stacked polarization-selective screens according to the given depth states.
As the screen does not require any active component for the reconstruction of images, the projection part and the reconstruction part can be totally separated. Also, the projection property enables the scalability of the reconstructed images, as in a conventional projection display, which can give an immersive 3D experience by providing large 3D images. The separation of the base images due to off-axis observation can be compensated by shifting the base images along the viewer's visual axis. This can be achieved by adopting multi-view techniques. While conventional multi-view displays provide different view images for different viewers' positions, the same mechanism can be used to show shifted base images for the DFD. As a result, multiple users can observe the depth-fused 3D images at the same time. Another hybrid method is the combination of a floating method with the DFD. A convex lens can optically translate the depth position of an object. Based on this principle, the optical gap between the two base images can be extended beyond the physical dimension of the images. By employing a lens with a short focal length, the physical gap between the base images can be greatly reduced. For a practical implementation of the system, the integral imaging method can be used because it is composed of an array of lenses. The floated image can be located in front of the lens as well as behind it. Both cases result in the expansion of the depth range beyond the physical gap of the base images, but real-mode floating enables interactive applications of the DFD. In addition to the expansion of the depth range, the viewing angle of the hybrid system can be increased by employing a tracking method. Viewer tracking also enables dynamic parallax for the DFD, with real-time updates of the base images according to the viewing direction of the tracked viewers.
Each chapter of this dissertation explains the theoretical background of the proposed hybrid method and demonstrates the feasibility of the idea with experimental systems.
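    The luminance-weighting principle behind depth fusion can be sketched with a first-order linear model: when two aligned planes are blended with complementary luminance, the fused image is perceived near the luminance-weighted average of the two plane depths. The code below is a hedged illustration of that model; the plane depths are assumed values, and real perceived depth deviates from this linear rule at large plane separations.

```python
# First-order model of the depth-fused display (DFD) principle: two
# aligned image planes at depths z_front and z_rear, blended with
# complementary luminance weights, are perceived near the
# luminance-weighted average depth. Depth values are illustrative.

def fused_depth(z_front, z_rear, w_front):
    """Perceived depth for a front-plane luminance weight w_front in [0, 1]."""
    if not 0.0 <= w_front <= 1.0:
        raise ValueError("w_front must lie in [0, 1]")
    return w_front * z_front + (1.0 - w_front) * z_rear

z_f, z_r = 1000.0, 1400.0  # plane depths in mm (assumed)
print(fused_depth(z_f, z_r, 1.0))   # -> 1000.0 (all luminance on front plane)
print(fused_depth(z_f, z_r, 0.5))   # -> 1200.0 (midway between the planes)
print(fused_depth(z_f, z_r, 0.25))  # -> 1300.0 (biased toward rear plane)
```

This also makes the dissertation's central problem visible: the formula only holds while the two base images stay superimposed on the visual axis, which is exactly the condition the proposed hybrid methods work to relax.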

    Panoramic, large-screen, 3-D flight display system design

    The report documents and summarizes the results of the required evaluations specified in the Statement of Work (SOW) and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule, as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was the development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of those requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected as the preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system was reviewed in depth and evaluated. Display technology improvements and risk reductions associated with the maturity of the technologies for the preferred display system design concept were identified

    Liquid Crystal on Silicon Devices: Modeling and Advanced Spatial Light Modulation Applications

    Liquid Crystal on Silicon (LCoS) has become one of the most widespread technologies for spatial light modulation in optics and photonics applications. These reflective microdisplays are composed of a high-performance silicon complementary metal-oxide-semiconductor (CMOS) backplane, which controls the light-modulating properties of the liquid crystal layer. State-of-the-art LCoS microdisplays may exhibit a very small pixel pitch (below 4 µm), a very large number of pixels (resolutions beyond 4K), and high fill factors (above 90%). They can modulate illumination sources covering the UV, visible, and far-IR ranges. LCoS devices are used not only as displays but also as polarization, amplitude, and phase-only spatial light modulators, where they achieve full phase modulation. Due to their excellent modulating properties and high degree of flexibility, they are found in all sorts of spatial light modulation applications, such as LCoS-based display systems for augmented and virtual reality, true holographic displays, digital holography, diffractive optical elements, superresolution optical systems, beam-steering devices, holographic optical traps, and quantum optical computing. To fulfil the requirements of this extensive range of applications, specific models and characterization techniques are proposed. These devices may exhibit a number of degradation effects, such as interpixel crosstalk, fringing fields, and time flicker, which may also depend on the analog or digital backplane of the corresponding LCoS device. The use of appropriate characterization and compensation techniques is then necessary
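    The beam-steering application mentioned above is a good minimal example of phase-only modulation: displaying a blazed (sawtooth) phase ramp on the LCoS panel deflects the first diffraction order by an angle given by the grating equation sin θ = λ/Λ. The sketch below illustrates this under assumed values for pixel pitch and wavelength; it is not tied to any specific LCoS device.

```python
import math

# Sketch: a phase-only LCoS panel steers a beam by displaying a blazed
# (sawtooth) phase grating. The first-order deflection follows the
# grating equation sin(theta) = wavelength / period. The pixel pitch
# and wavelength below are illustrative assumptions.

def blazed_phase(n_pixels, period_px):
    """Sawtooth phase profile in radians, wrapped to [0, 2*pi)."""
    return [(2.0 * math.pi * (x % period_px)) / period_px for x in range(n_pixels)]

def deflection_angle_deg(wavelength_um, pixel_pitch_um, period_px):
    """First-order deflection angle from the grating equation."""
    period_um = pixel_pitch_um * period_px
    return math.degrees(math.asin(wavelength_um / period_um))

phase = blazed_phase(n_pixels=16, period_px=8)  # two sawtooth ramps
angle = deflection_angle_deg(0.633, 4.0, 8)     # HeNe line, 4 um pitch assumed
print(f"deflection: {angle:.3f} degrees")
```

Shorter grating periods steer through larger angles, but the 2π phase wrap must land exactly on pixel boundaries; fringing fields and flicker of the kind discussed above blur that wrap and push energy into unwanted diffraction orders, which is why the compensation techniques matter.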