566 research outputs found

    Compact Lens Assembly for the Teleportal Augmented Reality System

    A compact, ultra-light, high-performance projection optics lens assembly using diffractive optics, plastic and glass elements, and aspheric surfaces has been designed with only four elements. The lens may be included as is, or scaled for inclusion in any instrument using conventional double-Gauss lens forms. The preferred lens is only 20 mm in diameter and 15 mm long, and weighs only 8 g; a pair of lenses for a head-mounted stereo display therefore weighs only 16 g. Such a stereo display, known as a head-mounted projective display (HMPD), consists of a pair of miniature projection lenses, beam splitters, and miniature displays mounted on the helmet, together with retro-reflective sheeting materials placed strategically in the environment.

    Head Mounted Projection Display with a Wide Field of View

    An ultra-wide field of view head-mounted display has been realized by integrating an ARC display component having a greater than about 70-degree field of retro-reflection with an optical tiling display that provides a field of view greater than about 80 degrees horizontally by about 50 degrees vertically, whereby an overall binocular horizontal field of view greater than about 120 degrees is realized at a resolution finer than about 2 arc minutes. There is also taught a method of providing a wide field of view head-mounted display by the steps of: combining an ARC display component and an optical tiling display; and integrating said component and said tiling display, whereby an overall field of view greater than about 120 degrees is realized.
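    The "2 arc minutes" resolution figure above can be sanity-checked with simple arithmetic: angular resolution per pixel is the field of view (in degrees) times 60, divided by the pixel count across that field. A minimal sketch; the 2400-pixel display width below is a hypothetical value, not a figure from the patent:

    ```python
    def arcmin_per_pixel(fov_deg: float, pixels: int) -> float:
        """Angular resolution in arc minutes per pixel for a display
        spanning fov_deg degrees across `pixels` pixels."""
        return fov_deg * 60.0 / pixels

    # Hypothetical example: an 80-degree horizontal field rendered on a
    # 2400-pixel-wide display yields 2 arcmin per pixel.
    print(arcmin_per_pixel(80, 2400))  # → 2.0
    ```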

    A Wearable Head-mounted Projection Display

    Conventional head-mounted projection displays (HMPDs) consist of a pair of miniature projection lenses, beamsplitters, and miniature displays mounted on the helmet, as well as a retro-reflective screen placed strategically in the environment. We have extended the HMPD technology by integrating the screen into a fully mobile embodiment. Some initial efforts demonstrating this technology have been captured, followed by an investigation of the diffraction effects versus image degradation caused by integrating the retro-reflective screen within the HMPD. The key contribution of this research is the conception and development of a mobile HMPD (M-HMPD). We have included an extensive analysis of the macro- and microscopic properties of the retro-reflective screen. Furthermore, the overall performance of the optics is assessed both in object space, for the optical designer, and in visual space, for prospective users of this technology. This research effort also focuses on conceiving an M-HMPD aimed at dual indoor/outdoor applications. The M-HMPD shares known advantages such as ultra-lightweight optics (8 g per eye), imperceptible distortion (≤ 2.5%), and a lightweight headset (≤ 2.5 lbs) compared with eyepiece-type head-mounted displays (HMDs) of equal eye relief and field of view. In addition, the M-HMPD presents an advantage over the preexisting HMPD in that it does not require a retro-reflective screen placed strategically in the environment. This newly developed M-HMPD can project clear images at three different locations within near- or far-field observation depths without loss of image quality. This particular M-HMPD embodiment was targeted at mixed reality, augmented reality, and wearable display applications.

    Optical versus video see-through head-mounted displays in medical visualization

    We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context for discussing the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues of each approach, optical versus video, from both a technology and a human-factors point of view. Finally, we point to potentially promising future developments of such devices, including eye tracking and multifocus-plane capabilities, as well as hybrid optical/video technology.

    Scalable multi-view stereo camera array for real world real-time image capture and three-dimensional displays

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (leaves 71-75). The number of three-dimensional displays available is escalating, and yet the capture devices for multiple-view content are limited to either single-camera precision rigs restricted to stationary objects or synthetically created animations. In this work we use inexpensive digital CMOS cameras to explore a multi-image capture paradigm and the gathering of real-world, real-time data of active and static scenes. The capture system can be developed and employed for a wide range of applications such as portrait-based images for multi-view facial recognition systems, hypostereo surgical training systems, and stereo surveillance by unmanned aerial vehicles. The system is adaptable to capturing the correct stereo views based on the environmental scene and the desired three-dimensional display. Issues explored by the system include image calibration, geometric correction, the possibility of object tracking, and transfer of the array technology into other image-capture systems. These features give users more freedom to interact with their specific 3-D content while allowing the computer to take on the difficult role of stereoscopic cinematographer. Samuel L. Hill. S.M.
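    The stereo views captured by such an array ultimately feed disparity-based depth estimation. The standard pinhole triangulation relation, Z = f·B/d, can be sketched as follows; the focal length, baseline, and disparity values are hypothetical illustration values, not figures from the thesis:

    ```python
    def depth_from_disparity(focal_px: float, baseline_m: float,
                             disparity_px: float) -> float:
        """Pinhole stereo triangulation: depth Z = f * B / d, with the
        focal length and disparity in pixels and the baseline in metres."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # Hypothetical rig: 800 px focal length, 6 cm baseline, 16 px disparity.
    print(depth_from_disparity(800, 0.06, 16))  # → 3.0 (metres)
    ```

    The same relation also shows why hypostereo rigs shrink the baseline: halving B halves the disparity produced by a given depth, which suits very close subjects.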

    Scene understanding through semantic image segmentation in augmented reality

    Semantic image segmentation, the task of assigning a label to each pixel in an image, is a major challenge in the field of computer vision. Semantic image segmentation using fully convolutional neural networks (FCNNs) offers an online solution to scene understanding, with a simple training procedure and fast inference if designed efficiently. The semantic information provided by segmentation is a detailed understanding of the current context, and this scene understanding is vital for scene modification in augmented reality (AR), especially if one aims to perform destructive scene augmentation. Augmented reality systems, by nature, aim at real-time modification of the context through head-mounted see-through or video see-through displays, and thus require efficiency in each step. Although there are many solutions to semantic image segmentation in the literature, such as DeepLabV3+ and DeepLab DPC, they fail to offer low-latency inference because of complex architectures aimed at the best possible accuracy. As part of this thesis work, we provide an efficient architecture for semantic image segmentation using an FCNN model and achieve real-time performance on smartphones: 19.65 frames per second (fps) with a mean intersection over union (mIOU) of 67.7% on the Cityscapes validation set with our "Basic" variant, and 15.41 fps with 70.3% mIOU on the Cityscapes test set with our "DPC" variant. The implementation is open-sourced and compatible with TensorFlow Lite, and thus able to run on embedded and mobile devices. Furthermore, the thesis demonstrates an augmented reality implementation in which semantic segmentation masks are tracked online in a 3D environment using Google ARCore. We show that frequent recalculation of semantic information is unnecessary: by tracking the calculated semantic information in 3D space using the inertial-visual odometry provided by the ARCore framework, we achieve savings in battery and CPU usage while maintaining a high mIOU. We further demonstrate a possible use case of the system by inpainting, in 3D space, the objects found by the semantic image segmentation network. The implemented Android application performs real-time augmented reality at 30 fps while running the computationally efficient network proposed in this thesis in parallel.
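    The mIOU figures quoted above follow the standard definition of mean intersection-over-union computed from a per-pixel confusion matrix. A minimal sketch of that metric (a generic implementation, not the thesis code):

    ```python
    import numpy as np

    def mean_iou(conf: np.ndarray) -> float:
        """Mean intersection-over-union from a KxK confusion matrix,
        where conf[i, j] counts pixels of true class i predicted as j."""
        inter = np.diag(conf).astype(float)
        union = conf.sum(axis=0) + conf.sum(axis=1) - inter
        valid = union > 0  # skip classes absent from both GT and prediction
        return float((inter[valid] / union[valid]).mean())

    # Toy 2-class confusion matrix:
    conf = np.array([[8, 2],
                     [1, 9]])
    # class 0 IoU: 8 / (8 + 2 + 1); class 1 IoU: 9 / (9 + 2 + 1)
    print(mean_iou(conf))
    ```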

    Compact Three-Dimensional Displays Using Optical Path Analysis in Transparent Media

    Thesis (Ph.D.)--Seoul National University, Graduate School, Department of Electrical and Computer Engineering, February 2017. Advisor: Byoungho Lee. This dissertation investigates approaches for realizing compact three-dimensional (3D) display systems based on optical path analysis in optically transparent media. Reducing the physical distance between 3D display apparatuses and an observer is an intuitive method to realize compact 3D display systems.
    In addition, a 3D display system is considered compact when it presents more 3D data than conventional systems while preserving the same size. To implement compact 3D display systems with high bandwidth and a minimized structure, two optical phenomena are investigated: total internal reflection (TIR) in isotropic materials and double refraction in birefringent crystals. Both materials are optically transparent in the visible range, and ray-tracing simulations analyzing the optical path in each material are performed to apply these unique optical phenomena to conventional 3D display systems. An optical light-guide exploiting TIR is adopted to realize a compact multi-projection 3D display system. A projection image originating from the projection engine enters the optical light-guide and undergoes multiple folds by TIR. The horizontal projection distance of the system is thereby reduced to roughly the thickness of the optical light-guide. After multiple folds, the projection image emerges from the exit surface of the light-guide and is collimated to form a viewing zone at the optimum viewing position. The optical path governed by TIR is analyzed by adopting an equivalent model of the light-guide, through which image distortion of the multiple view images is evaluated and compensated. To verify the feasibility of the proposed system, a ten-view multi-projection 3D display system with minimized projection distance is implemented. To improve the bandwidth of multi-projection 3D display systems and head-mounted display (HMD) systems, a polarization multiplexing technique using a birefringent plate is proposed. Depending on the polarization state of the image and the direction of the optic axis of the birefringent plate, the optical path of rays varies in the birefringent material.
    Optical path switching in the lateral direction is applied in the multi-projection system to duplicate the viewing zone laterally. Likewise, a multi-focal function in the HMD is realized by adopting optical path switching in the longitudinal direction. To elucidate the detailed optical path switching and image characteristics such as astigmatism and color dispersion in the birefringent material, ray-tracing simulations varying the optical structure, the optic axis, and the wavelength are performed. By combining the birefringent material with a polarization rotation device, the bandwidth of both the multi-projection 3D display and the HMD is doubled in real time. Prototypes of both systems are implemented, and the feasibility of the proposed methods is verified through experiments. In this dissertation, the optical phenomena of TIR and double refraction realize compact 3D display systems: a multi-projection 3D display for public use and a multi-focal HMD for individual use.
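    The light-guide behavior analyzed above rests on Snell's law: rays striking the guide wall beyond the critical angle are folded back by TIR, and the number of folds over a given lateral distance follows from the guide thickness and the propagation angle. A small sketch with assumed values; the refractive index of 1.5 and the geometry below are illustrative, not the dissertation's parameters:

    ```python
    import math

    def critical_angle_deg(n_guide: float, n_outside: float = 1.0) -> float:
        """Critical angle for total internal reflection at a guide/air
        interface: sin(theta_c) = n_outside / n_guide."""
        return math.degrees(math.asin(n_outside / n_guide))

    def fold_count(path_length_mm: float, thickness_mm: float,
                   angle_deg: float) -> int:
        """Number of TIR bounces a ray makes while traversing a guide of
        the given thickness over the given lateral path length, assuming a
        constant propagation angle measured from the surface normal."""
        lateral_per_bounce = thickness_mm * math.tan(math.radians(angle_deg))
        return int(path_length_mm // lateral_per_bounce)

    # Hypothetical BK7-like guide (n = 1.5): theta_c ≈ 41.8 degrees.
    print(round(critical_angle_deg(1.5), 1))
    # Hypothetical: 100 mm lateral path, 5 mm guide, 60-degree propagation.
    print(fold_count(100, 5, 60))
    ```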
    The optical components of the optical light-guide and the birefringent plate can be easily combined with conventional 3D display systems, and it is expected that the proposed method can contribute to the realization of future 3D display systems with compact size and high bandwidth.
    Contents:
    Chapter 1 Introduction
      1.1 Overview of modern 3D displays providing high-quality 3D images
      1.2 Motivation of this dissertation
      1.3 Scope and organization
    Chapter 2 Compact multi-projection 3D displays with optical path analysis of total internal reflection
      2.1 Introduction
      2.2 Principle of compact multi-projection 3D display system using optical light-guide
        2.2.1 Multi-projection 3D display system
        2.2.2 Optical light-guide for multi-projection 3D display system
        2.2.3 Analysis on image characteristics of projection images in optical light-guide
        2.2.4 Pre-distortion method for view image compensation
      2.3 Implementation of prototype of multi-projection 3D display system with reduced projection distance
      2.4 Summary and discussion
    Chapter 3 Compact multi-projection 3D displays with optical path analysis of double refraction
      3.1 Introduction
      3.2 Principle of viewing zone duplication in multi-projection 3D display system
        3.2.1 Polarization-dependent optical path switching in birefringent crystal
        3.2.2 Analysis on image formation through birefringent plane-parallel plate
        3.2.3 Full-color generation of dual projection
      3.3 Implementation of prototype of viewing zone duplication of multi-projection 3D display system
        3.3.1 Experimental setup for viewing zone duplication of multi-projection 3D display system
        3.3.2 Luminance distribution measurement of viewing zone duplication of multi-projection 3D display system
      3.4 Summary and discussion
    Chapter 4 Compact multi-focal 3D HMDs with optical path analysis of double refraction
      4.1 Introduction
      4.2 Principle of multi-focal 3D HMD system
        4.2.1 Multi-focal 3D HMD system using Savart plate
        4.2.2 Astigmatism compensation by modified Savart plate
        4.2.3 Analysis on lateral chromatic aberration of extraordinary plane
        4.2.4 Additive type compressive light field display
      4.3 Implementation of prototype of multi-focal 3D HMD system
      4.4 Summary and discussion
    Chapter 5 Conclusion
    Bibliography
    Appendix
    Abstract (in Korean)

    Towards Highly-Integrated Stereovideoscopy for in vivo Surgical Robots

    When compared to traditional surgery, laparoscopic procedures result in better patient outcomes: shorter recovery, reduced post-operative pain, and less trauma to incised tissue. Unfortunately, laparoscopic procedures require specialized training for surgeons, as these minimally-invasive procedures provide an operating environment with limited dexterity and limited vision. Advanced surgical robotics platforms can make minimally-invasive techniques safer and easier for the surgeon to complete successfully. The most common type of surgical robotics platform -- the laparoscopic robot -- accomplishes this with multi-degree-of-freedom manipulators capable of a more diversified set of movements than traditional laparoscopic instruments. These laparoscopic robots also allow for advanced kinematic translation techniques that let the surgeon focus on the surgical site while the robot calculates the best possible joint positions to complete any surgical motion. An important component of these systems is the endoscopic system used to transmit a live view of the surgical environment to the surgeon. Coupled with 3D high-definition endoscopic cameras, the platform as a whole, in effect, eliminates the peculiarities associated with laparoscopic procedures, allowing less-skilled surgeons to complete minimally-invasive surgical procedures quickly and accurately. A much newer approach to performing minimally-invasive surgery is the use of in-vivo surgical robots -- small robots inserted directly into the patient through a single, small incision; once inside, an in-vivo robot can perform surgery at arbitrary positions, with a much wider range of motion. While laparoscopic robots can harness traditional endoscopic video solutions, these in-vivo robots require a fundamentally different video solution that is as flexible as possible and free of bulky cables or fiber optics.
    This requires a miniaturized videoscopy system that incorporates an image sensor with a transceiver; because of severe size constraints, this system should be deeply embedded in the robotics platform. Here, early results are presented from the integration of a miniature stereoscopic camera into an in-vivo surgical robotics platform. A 26 mm × 24 mm stereo camera was designed and manufactured. The proposed device features USB connectivity and 1280 × 720 resolution at 30 fps. Resolution testing indicates the device performs much better than similarly-priced analog cameras. Suitability of the platform for 3D computer vision tasks -- including stereo reconstruction -- is examined. The platform was also tested in a living porcine model at the University of Nebraska Medical Center. Results from this experiment suggest that while the platform performs well in controlled, static environments, further work is required to obtain usable results in true surgeries. Concluding, several ideas for improvement are presented, along with a discussion of core challenges associated with the platform. Adviser: Lance C. Pérez. [Document = 28 Mb]
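    The stereo-reconstruction suitability mentioned above rests on recovering per-pixel disparity between the two views. A naive sum-of-absolute-differences block matcher along horizontal scanlines can be sketched as follows; this is a generic illustration, not the authors' pipeline, and it assumes rectified single-channel images:

    ```python
    import numpy as np

    def disparity_sad(left: np.ndarray, right: np.ndarray,
                      block: int = 3, max_disp: int = 8) -> np.ndarray:
        """Naive SAD block matching on rectified grayscale images: for each
        left-image pixel, search leftward in the right image for the
        lowest-cost match and record the shift as disparity."""
        h, w = left.shape
        r = block // 2
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(r, h - r):
            for x in range(r, w - r):
                patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
                best, best_d = None, 0
                for d in range(0, min(max_disp, x - r) + 1):
                    cand = right[y - r:y + r + 1,
                                 x - d - r:x - d + r + 1].astype(np.int32)
                    cost = np.abs(patch - cand).sum()
                    if best is None or cost < best:
                        best, best_d = cost, d
                disp[y, x] = best_d
        return disp

    # Synthetic check: shift a random texture by 2 px and recover it.
    rng = np.random.default_rng(0)
    left = rng.integers(0, 256, size=(10, 12), dtype=np.uint8)
    right = np.zeros_like(left)
    right[:, :-2] = left[:, 2:]
    print(disparity_sad(left, right)[5, 6])  # → 2 (the synthetic shift)
    ```

    Real pipelines replace this O(h·w·d) scan with matching-cost aggregation and subpixel refinement, but the search structure is the same.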