
    Beaming Displays

    Existing near-eye display designs struggle to balance multiple trade-offs such as form factor, weight, computational requirements, and battery life. These design trade-offs are major obstacles on the path towards an all-day-usable near-eye display. In this work, we address these trade-offs by, paradoxically, removing the display from near-eye displays. We present the beaming display, a new type of near-eye display system that uses a projector and an all-passive wearable headset. We modify an off-the-shelf projector with additional lenses and install it in the environment to beam images from a distance to the passive wearable headset. The beaming projection system tracks the current position of the wearable headset to project distortion-free images with correct perspective. In our system, the wearable headset guides the beamed images to a user's retina, where they are perceived as an augmented scene within the user's field of view. In addition to providing the system design of the beaming display, we provide a physical prototype and show that the beaming display can provide resolutions as high as consumer-level near-eye displays. We also discuss the different aspects of the design space for our proposal.
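    The abstract's tracked, distortion-free projection can be illustrated with a standard perspective pre-warp: given four tracked corner points of the headset's optics as seen in projector coordinates, a homography maps projector pixels onto the headset. The direct linear transform (DLT) below is a textbook sketch of this idea, not the authors' published pipeline; the function names and all point values are hypothetical.

    ```python
    import numpy as np

    def homography(src, dst):
        """Estimate the 3x3 homography mapping four src points to four dst points
        via the direct linear transform (smallest singular vector of the DLT matrix)."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        _, _, vt = np.linalg.svd(np.array(A, dtype=float))
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]          # normalize so H[2,2] == 1

    def project(H, pt):
        """Apply homography H to a 2D point (homogeneous multiply, then dehomogenize)."""
        x, y, w = H @ np.array([pt[0], pt[1], 1.0])
        return x / w, y / w
    ```

    In a beaming-style system, `src` would be the corners of the source image and `dst` the tracked headset corners; the projector would then warp each frame with `H` so the image arrives undistorted from the headset's perspective.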

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach lies in the fact that users need not wear any device, providing minimal intrusiveness and accommodating the users' eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in areas such as military reconnaissance. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view of reference. The technical contribution of the proposed HRD system is its multi-system calibration, which involves a motion sensor, projector, cameras, and a robotic arm. Given the purpose of the system, the calibration accuracy must be kept within the millimeter level. The follow-up HRD research focuses on high-accuracy 3D reconstruction of the replica via commodity devices for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive.
We propose a structured-light-based 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflections. Extensive user studies demonstrate the performance of our proposed algorithm. To compensate for the lack of synchronization between the local station and the remote station, caused by latency introduced during data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a system of linear equations with a smoothing coefficient ranging from 0 to 1. This predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts such as low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
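    The 1-step-ahead prediction with a smoothing coefficient in [0, 1] can be sketched as a blend between the latest command and a linear extrapolation of the command trend. This is only an illustrative reading of the abstract's description; the exact linear formulation and cost function are in the thesis, and the function and parameter names here are assumptions.

    ```python
    def predict_next(history, alpha=0.5):
        """One-step-ahead prediction of the next commanded value.

        history : past commanded values, most recent last (hypothetical 1-D state)
        alpha   : smoothing coefficient in [0, 1]; alpha=1 trusts the latest
                  command entirely, alpha=0 trusts the linear extrapolation.
        """
        if len(history) < 2:
            return history[-1]            # not enough data to extrapolate
        # Linear extrapolation of the most recent trend.
        trend = history[-1] + (history[-1] - history[-2])
        # Convex blend controlled by the smoothing coefficient.
        return alpha * history[-1] + (1 - alpha) * trend
    ```

    With a communication latency of one control period, feeding the robot `predict_next(commands)` instead of the last received command is the basic idea behind compensating the local/remote desynchronization the abstract describes.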

    Evaluation of a portable image overlay projector for the visualisation of surgical navigation data: phantom studies

    Introduction: Presenting visual feedback for image-guided surgery on a monitor requires the surgeon to perform time-consuming comparisons and to divert sight and attention away from the patient. Deficiencies in previously developed augmented reality systems for image-guided surgery have, however, prevented the general acceptance of any one technique as a viable alternative to monitor displays. This work presents an evaluation of the feasibility and versatility of a novel augmented reality approach for the visualisation of surgical planning and navigation data. The approach, which utilises a portable image overlay device, was evaluated during integration into existing surgical navigation systems and during application within simulated navigated surgery scenarios. Methods: A range of anatomical models, surgical planning data and guidance information taken from liver surgery, cranio-maxillofacial surgery, orthopaedic surgery and biopsy were displayed on patient-specific phantoms, directly on to the patient's skin and on to cadaver tissue. The feasibility of employing the proposed augmented reality visualisation approach in each of the four tested clinical applications was qualitatively assessed for usability, visibility, workspace, line of sight and obtrusiveness. Results: The visualisation approach was found to assist in spatial understanding and to reduce the need for sight diversion throughout the simulated surgical procedures. The approach enabled structures to be identified and targeted quickly and intuitively. All validated augmented reality scenes were easily visible and were implemented with minimal overhead. The device showed sufficient workspace for each of the presented applications, and the approach was minimally intrusive to the surgical scene.
Conclusion: The presented visualisation approach proved to be versatile and applicable to a range of image-guided surgery applications, overcoming many of the deficiencies of previously described AR approaches. The approach represents an initial step towards a widely accepted alternative to monitor displays for the visualisation of surgical navigation data.

    HOLOGRAPHICS: Combining Holograms with Interactive Computer Graphics

    Among all imaging techniques that have been invented throughout the last decades, computer graphics is one of the most successful tools today. Many areas in science, entertainment, education, and engineering would be unimaginable without the aid of 2D or 3D computer graphics. The reason for this success story might be its interactivity, which is an important property that is still not provided efficiently by competing technologies, such as holography. While optical holography and digital holography are limited to presenting non-interactive content, electroholography or computer-generated holograms (CGH) facilitate the computer-based generation and display of holograms at interactive rates [2,3,29,30]. Holographic fringes can be computed either by rendering multiple perspective images and then combining them into a stereogram [4], or by simulating the optical interference and calculating the interference pattern [5]. Once computed, such a system dynamically visualizes the fringes with a holographic display. Since creating an electrohologram requires processing, transmitting, and storing a massive amount of data, today's computer technology still sets the limits for electroholography. To overcome some of these performance issues, advanced reduction and compression methods have been developed that create truly interactive electroholograms. Unfortunately, most of these holograms are relatively small, low resolution, and cover only a small color spectrum. However, recent advances in consumer graphics hardware may reveal potential acceleration possibilities that can overcome these limitations [6]. In parallel to the development of computer graphics, and despite their non-interactivity, optical and digital holography have created new fields, including interferometry, copy protection, data storage, holographic optical elements, and display holograms. Display holography in particular has conquered several application domains.
Museum exhibits often use optical holograms because they can present 3D objects with almost no loss in visual quality. In contrast to most stereoscopic or autostereoscopic graphics displays, holographic images can provide all depth cues (perspective, binocular disparity, motion parallax, convergence, and accommodation) and can theoretically be viewed simultaneously from an unlimited number of positions. Displaying artifacts virtually removes the need to build physical replicas of the original objects. In addition, optical holograms can be used to make engineering, medical, dental, archaeological, and other recordings for teaching, training, experimentation, and documentation. Archaeologists, for example, use optical holograms to archive and investigate ancient artifacts [7,8]. Scientists can use hologram copies to perform their research without having access to the original artifacts or settling for inaccurate replicas. Optical holograms can store a massive amount of information on a thin holographic emulsion. This technology can record and reconstruct a 3D scene with almost no loss in quality. Natural-color holographic silver halide emulsion with grain sizes of 8 nm is today's state of the art [14]. Today, computer graphics and raster displays offer megapixel resolution and the interactive rendering of megabytes of data. Optical holograms, however, provide terapixel resolution and are able to present information content in the range of terabytes in real time. Both are dimensions that will not be reached by computer graphics and conventional displays within the next years, even if Moore's law continues to hold. Obviously, one has to choose between interactivity and quality when selecting a display technology for a particular application.
While some applications require high visual realism and real-time presentation (which cannot be provided by computer graphics), others depend on user interaction (which is not possible with optical and digital holograms). Consequently, holography and computer graphics are being used as tools to solve individual research, engineering, and presentation problems within several domains. Up until today, however, these tools have been applied separately. The intention of the project summarized in this chapter is to combine both technologies to create a powerful tool for science, industry, and education. This has been referred to as HoloGraphics. Several possibilities have been investigated that allow merging computer-generated graphics and holograms [1]. The goal is to combine the advantages of conventional holograms (i.e., extremely high visual quality and realism, support for all depth cues and for multiple observers at no computational cost, space efficiency, etc.) with the advantages of today's computer graphics capabilities (i.e., interactivity, real-time rendering, simulation and animation, stereoscopic and autostereoscopic presentation, etc.). The results of these investigations are presented in this chapter.
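    The second fringe-computation route mentioned above, simulating the optical interference directly, can be illustrated with a toy example: the intensity pattern formed by a single on-axis point source interfering with a plane reference wave. This is a minimal textbook sketch, not the chapter's CGH method; all parameter values (pixel pitch, distance, wavelength) are illustrative.

    ```python
    import numpy as np

    def point_source_hologram(n=256, pitch=8e-6, z=0.05, wavelength=532e-9):
        """Interference pattern of a spherical object wave (point source at
        distance z) and a unit-amplitude on-axis plane reference wave, sampled
        on an n x n hologram plane with the given pixel pitch."""
        k = 2 * np.pi / wavelength
        coords = (np.arange(n) - n / 2) * pitch
        x, y = np.meshgrid(coords, coords)
        r = np.sqrt(x**2 + y**2 + z**2)       # distance from point source
        obj = np.exp(1j * k * r) / r          # spherical object wave
        ref = np.ones_like(obj)               # plane reference wave
        return np.abs(obj + ref) ** 2         # recorded intensity (fringes)
    ```

    The resulting pattern is the familiar Fresnel zone plate; for a full scene, one object wave per scene point would be summed before taking the intensity, which is exactly why the abstract notes that electroholography is data- and compute-bound.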

    A Portable Augmented Reality Science Laboratory

    Augmented reality (AR) is a technology which overlays virtual objects on the real world; it generates three-dimensional (3D) virtual objects and provides an interactive interface through which people can work in the real world and interact with the 3D virtual objects at the same time. AR has the potential to engage and motivate learners to explore material from a variety of differing perspectives, and has been shown to be particularly useful for teaching subject matter that students could not possibly experience first-hand in the real world. This report provides a conceptual framework for a simulated augmented reality lab which could be used in teaching science in classrooms. In recent years, the importance of lab-based courses and their significant role in science education has become irrefutable. The use of AR in formal education could prove a key component in future learning environments that are richly populated with a blend of hardware and software applications. The aim of this project is to enhance the teaching and learning of science by complementing the existing traditional lab with a simulated augmented reality lab. The system architecture and the technical aspects of the proposed project are described, and implementation issues and benefits of the proposed AR lab are highlighted.

    Robotic Cameraman for Augmented Reality based Broadcast and Demonstration

    In recent years, a number of large enterprises have gradually begun to use various augmented reality technologies to prominently improve audiences' views of their products. Among them, the creation of an immersive virtual interactive scene through projection has received extensive attention; this technique is referred to as projection SAR, short for projection spatial augmented reality. However, as existing projection-SAR systems are immobile and have a limited working range, they are difficult to adopt in daily life. This thesis therefore proposes a technically feasible optimization scheme so that projection SAR can be practically applied to AR broadcasting and demonstrations. Based on the three main techniques required by state-of-the-art projection SAR applications, this thesis creates a novel mobile projection SAR cameraman for AR broadcasting and demonstration. Firstly, by combining a CNN scene-parsing model and multiple contour extractors, the proposed contour extraction pipeline can always detect the optimal contour information in non-HD or blurred images. This algorithm reduces the dependency on high-quality visual sensors and solves the problem of low contour-extraction accuracy in motion-blurred images. Secondly, a plane-based visual mapping algorithm is introduced to overcome the difficulties of visual mapping in low-texture scenarios. Finally, a complete process for designing the projection SAR cameraman robot is introduced. This part solves three main problems in mobile projection-SAR applications: (i) a new method for marking contours on the projection model is proposed to replace the model rendering process; by combining contour features and geometric features, users can easily identify objects on a colourless model. (ii) A camera initial-pose estimation method is developed based on visual tracking algorithms, which can register the start pose of the robot to the whole scene in Unity3D.
(iii) A novel data transmission approach is introduced that establishes a link between the external robot and the robot in the Unity3D simulation workspace. This allows the robotic cameraman to simulate its trajectory in the Unity3D simulation workspace and project correct virtual content. Our proposed mobile projection SAR system makes outstanding contributions to the academic value and practicality of the existing projection SAR technique. It first solves the problem of limited working range: when the system is running in a large indoor scene, it can follow the user and project dynamic interactive virtual content automatically instead of requiring additional visual sensors. It also creates a more immersive experience for the audience, since it allows the user more body gestures and richer virtual-real interactive play. Lastly, a mobile system does not require up-front frameworks, is cheaper, and provides the public with an innovative choice for indoor broadcasting and exhibitions.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Augmented reality X-ray vision on optical see-through head mounted displays

    Abstract. In this thesis, we present the development and evaluation of an augmented reality X-ray system on optical see-through head-mounted displays. Augmented reality X-ray vision allows users to see through solid surfaces such as walls and facades by augmenting the real view with virtual images representing the hidden objects. Our system is built on the optical see-through mixed reality headset Microsoft HoloLens. We have developed an X-ray cutout algorithm that uses the geometric data of the environment to enable seeing through surfaces, along with four different visualizations based on that algorithm. The first visualization simply renders the X-ray cutout without displaying any information about the occluding surface. The other three visualizations display features extracted from the occluder surface to help the user gain better depth perception of the virtual objects; we use Sobel edge detection to extract these features. The three visualizations differ in the way they render the extracted features. A subjective experiment was conducted to test and evaluate the visualizations and to compare them with each other. The experiment consists of two parts: a depth estimation task and a questionnaire. Both the experiment and its results are presented in the thesis.
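    The occluder-feature step the abstract names, Sobel edge detection, is standard and easy to sketch. Below is a minimal NumPy re-implementation of the operator (gradient magnitude from horizontal and vertical 3x3 kernels); the thesis presumably uses an optimized library version, and border handling here (zeros) is a simplifying assumption.

    ```python
    import numpy as np

    def sobel_edges(img):
        """Edge magnitude of a 2D grayscale array via the Sobel operator.

        Convolves each interior pixel with the horizontal (kx) and vertical (ky)
        Sobel kernels and returns the gradient magnitude; border pixels stay 0.
        """
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T                       # vertical kernel is the transpose
        h, w = img.shape
        gx = np.zeros((h, w))
        gy = np.zeros((h, w))
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                patch = img[i - 1:i + 2, j - 1:j + 2]
                gx[i, j] = np.sum(kx * patch)
                gy[i, j] = np.sum(ky * patch)
        return np.hypot(gx, gy)        # per-pixel gradient magnitude
    ```

    Thresholding the returned magnitude yields the occluder edges that the three feature-based visualizations would overlay on the X-ray cutout to restore depth cues.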

    Near-Eye Displays with Wide Field of View Using Anisotropic Optical Elements

    Doctoral dissertation, Seoul National University, Department of Electrical and Computer Engineering, February 2019 (advisor: Byoungho Lee). The near-eye display is considered a promising display technique for realizing augmented reality by virtue of its high sense of immersion and user-friendly interface. Among the important performance characteristics of near-eye displays, the field of view is the most crucial factor for providing a seamless and immersive augmented reality experience. In this dissertation, a transmissive eyepiece is devised in place of the conventional reflective eyepiece, and it is discussed how to widen the field of view without loss of other system performance. To realize the transmissive eyepiece, the element should act as a lens for virtual information and as plain glass for the real-world scene. A polarization multiplexing technique is used to implement this multi-functional optical element, with anisotropic optical elements as its material. To demonstrate the proposed idea, an index-matched anisotropic crystal lens is presented that reacts differently depending on polarization. By combining an isotropic material with an anisotropic crystal, the index-matched anisotropic crystal lens can serve as the transmissive eyepiece and achieve a large field of view. Despite the large field of view provided by the index-matched anisotropic crystal lens, many problems, including form factor, remain to be solved. To overcome the limitations of conventional optics, a metasurface is adopted for the augmented reality application. Exploiting the striking optical performance of metasurfaces, a see-through metasurface lens is proposed and designed for implementing a wide-field-of-view near-eye display.
The proposed novel eyepieces are expected to constitute a pioneering study, not only improving the specifications of existing near-eye displays but also opening the way to the next generation of near-eye displays.