15 research outputs found

    ๋ฌด์•ˆ๊ฒฝ์‹ 3 ์ฐจ์› ๋””์Šคํ”Œ๋ ˆ์ด์™€ ํˆฌ์‚ฌํ˜• ๋””์Šคํ”Œ๋ ˆ์ด๋ฅผ ์ด์šฉํ•œ ๊นŠ์ด ์œตํ•ฉ ๋””์Šคํ”Œ๋ ˆ์ด์˜ ๊ด€์ฐฐ ํŠน์„ฑ ํ–ฅ์ƒ

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2015. Advisor: Byoungho Lee. In this dissertation, various methods for enhancing the viewing characteristics of the depth-fused display are proposed in combination with projection-type display or integral imaging technologies. The depth-fused display (DFD) is a kind of volumetric three-dimensional (3D) display composed of multiple slices of depth images. With proper weighting of the luminance of the images along the visual axis of the observer, it provides a continuous change of accommodation within the volume confined by the display layers. Because of this volumetric property, depth-fused 3D images appear very natural, but the base images must be located at exact positions on the viewing axis so that they superimpose completely. If this condition is not satisfied, the images are observed as two separate images instead of a continuous volume. This severely restricts the viewing condition of the DFD and thus limits its applications. While increasing the number of layers widens the viewing angle and depth range by voxelizing the reconstructed 3D images, the system complexity also grows with the number of image layers. To solve this problem with a relatively simple system configuration, hybrid techniques are proposed for DFDs: combinations of the DFD with other display technologies such as projection-type or autostereoscopic displays. The projection-type display can be combined with a polarization-encoded depth method for projecting 3D information. Because the depth information is conveyed by polarization states, there is no degradation of spatial resolution or video frame rate in the reconstructed 3D images. The polarized depth images are partially selected at stacked polarization-selective screens according to the encoded depth states. As the screen requires no active component for image reconstruction, the projection part and the reconstruction part can be completely separated. The projection geometry also makes the reconstructed images scalable, as in a conventional projection display, which can give an immersive 3D experience through large 3D images. The separation of base images caused by off-axis observation can be compensated by shifting the base images along the viewer's visual axis, which can be achieved by adopting multi-view techniques: while conventional multi-view displays provide different view images for different viewer positions, the same mechanism can show appropriately shifted base images for the DFD. As a result, multiple users can observe the depth-fused 3D images at the same time. Another hybrid method combines a floating method with the DFD. A convex lens can optically translate the depth position of an object; based on this principle, the optical gap between two base images can be extended beyond their physical separation, and employing a lens with a short focal length, the physical gap between the base images can be greatly reduced. For a practical implementation, the integral imaging method can be used because it is composed of an array of lenses. The floated image can be located in front of the lens as well as behind it. Both cases expand the depth range beyond the physical gap of the base images, but real-mode floating additionally enables interactive applications of the DFD.
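The luminance-weighting rule mentioned above can be sketched in a few lines. This is the standard linear DFD model from the literature, not necessarily the dissertation's exact weighting; all names and values are illustrative:

```python
import numpy as np

def dfd_layer_images(image, depth, z_front, z_rear):
    """Split an image into front/rear layer images for a depth-fused display.

    image : (H, W) luminance array
    depth : (H, W) per-pixel target depth, between z_front and z_rear
    Linear luminance weighting: a pixel shown on both layers with luminances
    L*(1-w) and L*w is perceived (on the viewing axis) at the fused depth
    z = (1-w)*z_front + w*z_rear.
    """
    w = np.clip((depth - z_front) / (z_rear - z_front), 0.0, 1.0)
    front = image * (1.0 - w)   # near pixels put more luminance on the front layer
    rear = image * w            # far pixels put more luminance on the rear layer
    return front, rear

# Example: a pixel meant to appear 30% of the way from the front to the rear layer
front, rear = dfd_layer_images(np.full((1, 1), 100.0), np.full((1, 1), 0.3), 0.0, 1.0)
print(front, rear)  # 70.0 on the front layer, 30.0 on the rear layer
```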
In addition to the expansion of the depth range, the viewing angle of the hybrid system can be increased by employing a tracking method. Viewer tracking also enables dynamic parallax for the DFD, with real-time updates of the base images according to the viewing direction of the tracked viewers. Each chapter of this dissertation explains the theoretical background of the proposed hybrid method and demonstrates the feasibility of the idea with experimental systems.
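The depth-range extension by optical floating described in this entry follows directly from the thin-lens imaging relation; a small sketch under that assumption, with hypothetical numbers:

```python
def floated_depth(d_obj, f):
    """Image distance for an object d_obj in front of a thin lens of focal
    length f (Gaussian lens formula: 1/d_img = 1/f - 1/d_obj).
    d_img > 0: real image floated in front of the lens (real mode);
    d_img < 0: virtual image behind the lens."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

f = 50.0             # mm, a short-focal-length (lenslet-like) lens
z1, z2 = 60.0, 65.0  # two base images, physically only 5 mm apart
imaged_gap = abs(floated_depth(z2, f) - floated_depth(z1, f))
print(imaged_gap)    # ~83 mm of perceived depth from a 5 mm physical gap
```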

    Near-eye displays with wide field of view using anisotropic optical elements

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, February 2019. Advisor: Byoungho Lee. The near-eye display is considered a promising display technique for realizing augmented reality by virtue of its high sense of immersion and user-friendly interface. Among the important performance factors of a near-eye display, the field of view is the most crucial for providing a seamless and immersive augmented reality experience. In this dissertation, a transmissive eyepiece is devised in place of the conventional reflective eyepiece, and it is discussed how to widen the field of view without sacrificing other system performance. To realize a transmissive eyepiece, the element must act as a lens for the virtual information and as plain glass for the real-world scene. A polarization multiplexing technique is used to implement this multi-functional optical element, with anisotropic optical materials as its basis. To demonstrate the proposed idea, an index-matched anisotropic crystal lens that reacts differently depending on polarization is presented. By combining an isotropic material with an anisotropic crystal, the index-matched anisotropic crystal lens serves as the transmissive eyepiece and achieves a large field of view. Despite this large field of view, the small refractive index contrast of the anisotropic crystal enlarges the system, so problems including the form factor remain to be solved. To overcome the limitations of conventional optics, a metasurface is adopted for the augmented reality application. Exploiting the remarkable optical performance of metasurfaces, a see-through metasurface lens is proposed and designed for implementing a wide field-of-view near-eye display; both its design method and a working implementation are demonstrated. The proposed novel eyepieces are expected to serve as an initiative study, not only improving the specifications of existing near-eye displays but also opening the way for next-generation near-eye displays.
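Why index matching makes the eyepiece polarization-selective can be illustrated with the paraxial immersed-thin-lens formula. The sketch below uses calcite-like indices as stand-ins, not the dissertation's actual design values:

```python
def immersed_focal_length(R, n_lens, n_medium):
    """Thin plano-convex lens of curvature radius R immersed in a medium of
    index n_medium: 1/f = (n_lens / n_medium - 1) / R (paraxial model)."""
    dn = n_lens / n_medium - 1.0
    return float('inf') if dn == 0 else R / dn

R = 30.0                 # mm, illustrative surface curvature
n_o, n_e = 1.658, 1.486  # calcite ordinary / extraordinary indices (example)
n_iso = n_o              # isotropic filler index-matched to the ordinary index

# One polarization sees no index step (flat glass: the see-through path);
# the orthogonal polarization sees a lens (the virtual-image path).
print(immersed_focal_length(R, n_o, n_iso))  # inf: transparent to the real world
print(immersed_focal_length(R, n_e, n_iso))  # finite: lens for virtual imagery
# The sign of the finite focal length only reflects the chosen curvature;
# flipping the surface gives the converging case.
```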

    High-dynamic-range Foveated Near-eye Display System

    Wearable near-eye displays have found widespread applications in education, gaming, entertainment, engineering, military training, and healthcare, to name just a few. However, the visual experience provided by current near-eye displays still falls short of what we can perceive in the real world. Three major challenges remain to be overcome: 1) limited dynamic range in display brightness and contrast, 2) inadequate angular resolution, and 3) the vergence-accommodation conflict (VAC). This dissertation is devoted to addressing these three critical issues from both the display panel development and the optical system design viewpoints. A high-dynamic-range (HDR) display requires both high peak brightness and an excellent dark state. In the second and third chapters, two mainstream display technologies, namely the liquid crystal display (LCD) and the organic light-emitting diode (OLED), are investigated to extend their dynamic range. On one hand, an LCD can easily boost its peak brightness to over 1000 nits, but it is challenging to lower the dark state to < 0.01 nits. To achieve HDR, we propose to use a mini-LED local dimming backlight. Based on our simulations and subjective experiments, we establish practical guidelines correlating the device contrast ratio, viewing distance, and required number of local dimming zones. On the other hand, a self-emissive OLED display exhibits a true dark state, but boosting its peak brightness would unavoidably compromise its lifetime. We propose a systematic approach to enhance the OLED's optical efficiency while keeping the angular color shift indistinguishable. These findings will shed new light on future HDR display designs. In Chapter four, in order to improve angular resolution, we demonstrate a multi-resolution foveated display system with two display panels and an optical combiner. The first display panel provides a wide field of view for peripheral vision, while the second panel offers ultra-high resolution for the central fovea. With an optical minifying system, both 4x and 5x resolution enhancements are demonstrated. In addition, a Pancharatnam-Berry phase deflector is applied to actively shift the high-resolution region in order to support eye tracking. The proposed design effectively reduces the pixelation and screen-door effect in near-eye displays. The VAC issue in stereoscopic displays is believed to be the main cause of visual discomfort and fatigue when wearing VR headsets. In Chapter five, we propose a novel polarization-multiplexing approach to achieve a multiplane display. A polarization-sensitive Pancharatnam-Berry phase lens and a spatial polarization modulator are employed to simultaneously create two independent focal planes. This method generates two image planes without temporal multiplexing and therefore effectively halves the required frame rate. In Chapter six, we briefly summarize our major accomplishments.
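The local-dimming idea can be illustrated with a toy model: the backlight is split into zones, each zone is driven to its local peak, and the LC layer re-modulates it, so the dark-state floor scales with the local backlight rather than the global peak. This is a deliberate simplification (no optical spread between zones) with illustrative parameters, not the dissertation's simulation:

```python
import numpy as np

def local_dimming(target, zones=(8, 8), leakage=1e-3):
    """Toy mini-LED local dimming model.

    target  : (H, W) desired luminance in 0..1 (H, W divisible by zone counts)
    zones   : number of dimming zones along each axis
    leakage : LC dark-state leakage (native contrast is 1/leakage)
    """
    H, W = target.shape
    zh, zw = H // zones[0], W // zones[1]
    # Drive each zone's backlight to the zone's peak target value.
    backlight = target.reshape(zones[0], zh, zones[1], zw).max(axis=(1, 3))
    backlight = np.kron(backlight, np.ones((zh, zw)))   # upsample to pixel grid
    # LC transmission compensates the backlight, bounded by leakage.
    lc = np.divide(target, backlight, out=np.zeros_like(target),
                   where=backlight > 0)
    lc = np.clip(lc, leakage, 1.0)
    return backlight * lc

img = np.zeros((64, 64)); img[20:28, 20:28] = 1.0       # bright patch on black
out = local_dimming(img)
print(out[24, 24], out[24, 31], out[0, 0])
# 1.0 (bright pixel), ~1e-3 halo inside the lit zone, 0.0 in fully dimmed zones
```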

    Optical simulation, modeling and evaluation of 3D medical displays


    Efficient image-based rendering

    Recent advancements in real-time ray tracing and deep learning have significantly enhanced the realism of computer-generated images. However, conventional 3D computer graphics (CG) can still be time-consuming and resource-intensive, particularly when creating photo-realistic simulations of complex or animated scenes. Image-based rendering (IBR) has emerged as an alternative approach that utilizes pre-captured images from the real world to generate realistic images in real time, eliminating the need for extensive modeling. Although IBR has its advantages, it faces challenges in providing the same level of control over scene attributes as traditional CG pipelines and in accurately reproducing complex scenes and objects with different materials, such as transparent objects. This thesis endeavors to address these issues by harnessing the power of deep learning and incorporating the fundamental principles of graphics and physically based rendering. It offers an efficient solution that enables interactive manipulation of real-world dynamic scenes captured from sparse views, lighting positions, and times, as well as a physically based approach that facilitates accurate reproduction of the view-dependent effects resulting from the interaction between transparent objects and their surrounding environment. Additionally, this thesis develops a visibility metric that can identify artifacts in reconstructed IBR images without observing the reference image, thereby contributing to the design of an effective IBR acquisition pipeline. Lastly, a perception-driven rendering technique is developed to provide high-fidelity visual content in virtual reality displays while retaining computational efficiency.
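As an aside, the core of classical IBR is blending nearby captured views. The sketch below shows an unstructured-lumigraph-style angular weighting heuristic, which is only the baseline idea the thesis builds on (its own approach is learned and physically based); all names and values are illustrative:

```python
import numpy as np

def view_blend_weights(novel_dir, captured_dirs, sigma=5.0):
    """Blending weights for sparse-view image-based rendering: weight each
    captured view by a Gaussian falloff in the angle (degrees) between its
    viewing direction and the novel viewing direction. Directions are unit
    3-vectors; weights are normalized to sum to 1."""
    cos = np.clip(captured_dirs @ novel_dir, -1.0, 1.0)
    ang = np.degrees(np.arccos(cos))
    w = np.exp(-(ang / sigma) ** 2)
    return w / w.sum()

dirs = np.array([[0.0, 0, 1], [0.1, 0, 1], [0.2, 0, 1]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)      # three captured views
novel = np.array([0.05, 0, 1]) / np.linalg.norm([0.05, 0, 1])
print(view_blend_weights(novel, dirs))  # nearest views dominate the blend
```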

    Computational See-Through Near-Eye Displays

    See-through near-eye displays with the form factor and field of view of eyeglasses are a natural choice for augmented reality systems: the non-encumbering size enables casual and extended use, and the large field of view enables general-purpose, spatially registered applications. However, designing displays with these attributes is currently an open problem. Support for enhanced realism through mutual occlusion and focal depth cues is also not found in eyeglasses-like displays. This dissertation provides a new strategy for eyeglasses-like displays that follows the principles of computational displays, devices that rely on software as a fundamental part of image formation. Such devices allow more hardware simplicity and flexibility, showing greater promise of meeting form-factor and field-of-view goals while enhancing realism. This computational approach is realized in two novel and complementary see-through near-eye display designs. The first, subtractive approach filters omnidirectional light through a set of optimized patterns displayed on a stack of spatial light modulators, reproducing a light field corresponding to in-focus imagery. The design is thin and scales to wide fields of view; see-through operation is achieved with transparent components placed directly in front of the eye. Preliminary support for focal cues and environment occlusion is also demonstrated. The second, additive approach uses structured point light illumination to form an image with a minimal set of rays. Each of an array of defocused point light sources is modulated by a region of a spatial light modulator, essentially encoding an image in the focal blur. See-through operation is likewise achieved with transparent components, and thin form factors and wide fields of view (>= 100 degrees) are demonstrated. The designs are examined in theoretical terms, in simulation, and through prototype hardware with public demonstrations. This analysis shows that the proposed computational near-eye display designs offer a significantly different set of trade-offs than conventional optical designs. Several challenges remain to make the designs practical, most notably addressing diffraction limits.
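The subtractive design's idea of computing layer patterns by optimization can be shown in miniature. The sketch below factors a two-plane ray matrix into two nonnegative attenuation patterns with rank-1 multiplicative updates: this is standard NMF machinery, related to but far simpler than the dissertation's full light field optimization, and every name here is illustrative:

```python
import numpy as np

def two_layer_patterns(L, iters=500, eps=1e-9):
    """Rank-1 multiplicative-update factorization of a ray matrix.

    L[i, j] is the target radiance of the ray crossing pixel i of the front
    attenuation layer and pixel j of the rear layer; the stack displays
    a[i] * b[j]. Multiplicative updates keep the patterns nonnegative, as
    physical attenuators must be.
    """
    m, n = L.shape
    a = np.full(m, L.mean() ** 0.5)
    b = np.full(n, L.mean() ** 0.5)
    for _ in range(iters):
        a *= (L @ b) / (a * (b @ b) + eps)      # gradient-derived update for a
        b *= (L.T @ a) / (b * (a @ a) + eps)    # symmetric update for b
    return a, b

L = np.outer(np.linspace(0.2, 1, 8), np.linspace(1, 0.3, 8))  # separable target
a, b = two_layer_patterns(L)
print(np.max(np.abs(np.outer(a, b) - L)))  # ~0 for this rank-1 example
```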

    Characteristics of flight simulator visual systems

    The physical parameters of the flight simulator visual system that characterize the system and determine its fidelity are identified and defined. The characteristics of visual simulation systems are discussed in terms of the basic categories of spatial, energy, and temporal properties, corresponding to the three fundamental quantities of length, mass, and time. Each of these parameters is further addressed in relation to its effect, its appropriate units or descriptors, methods of measurement, and its use or importance to image quality.

    ํฌํ† ํด๋ฆฌ๋จธ ์ƒ์˜ ํ™€๋กœ๊ทธ๋ž˜ํ”ฝ ๊ธฐ๋ก ๊ธฐ์ˆ ์„ ์ด์šฉํ•œ ๋ฌด์•ˆ๊ฒฝ์‹ ์‚ผ์ฐจ์› ์ด๋ฏธ์ง• ๋ฐฉ๋ฒ•

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2014. Advisor: Byoungho Lee. The novel holographic recording techniques suggested in this dissertation are experimentally investigated to overcome the limitations of conventional autostereoscopic three-dimensional (3D) displays. Two types of holographic recording techniques for implementing novel autostereoscopic 3D imaging methods are presented: i) a hogel overlapping method for enhancing the lateral resolution of holographic stereograms, and ii) a lens-array holographic optical element (HOE) that provides a see-through property for integral imaging. Photopolymer film is used as the holographic material. Dosage responses of single-wavelength and three-wavelength-multiplexed holograms recorded in the photopolymer film are presented. The see-through property and diffraction efficiencies of the three-wavelength-multiplexed hologram are evaluated by display experiments. Additionally, the shrinkage of the photopolymer film is theoretically analyzed and experimentally measured. The hogel overlapping method for holographic printing is proposed to enhance the lateral resolution of holographic stereograms. A numerical analysis by computer simulation shows that there is a limit to decreasing the hogel size while recording holographic stereograms. Instead of reducing the hogel size, the lateral resolution of holographic stereograms can be improved by printing overlapped hogels. An experimental holographic printing setup is built, and two holographic stereograms are recorded using the conventional and the proposed overlapping methods, respectively. A comparison of the resultant images confirms that the proposed hogel overlapping method improves the lateral resolution of holographic stereograms over the conventional holographic printing method. The lens-array HOE is suggested for see-through 3D imaging based on integral imaging. The full-color lens-array HOE provides a see-through property together with three-dimensional virtual images. An HOE recording setup is built, and full-color lens-array HOEs are recorded using spatial- and wavelength-multiplexed holographic recording techniques.
The experimental results confirm that the suggested method can provide full-color 3D virtual images with a see-through property. The viewing characteristics of the presented autostereoscopic 3D display are evaluated through the optical parameters of the lens-array HOE. Two lens-array HOEs with different optical parameters are fabricated to function as two-dimensional (2D) and 3D transparent screens, respectively. Display experiments for 2D and 3D imaging on the proposed transparent screens are carried out, and the viewing characteristics in both cases are discussed. The autostereoscopic 3D display using the lens-array HOE can present dynamic 3D virtual images because it uses an external spatial light modulator, such as an image projector. Dynamic elemental images are generated by computer graphics, and the feasibility of displaying dynamic 3D virtual images on the lens-array HOE is experimentally verified.
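The stated limit on shrinking the hogel size has a standard back-of-the-envelope explanation: below a certain pitch, diffraction spread (~ lambda * z / p) outgrows the geometric gain, which is exactly what overlapped hogels sidestep by decoupling the sampling pitch from the aperture size. A sketch under that assumption, with illustrative wavelength and depth:

```python
import numpy as np

def hogel_blur(p, wavelength, z):
    """Perceived spot size for hogel pitch p at reconstruction distance z:
    geometric term (the hogel aperture itself) plus a diffraction spread of
    roughly wavelength * z / p."""
    return p + wavelength * z / p

lam = 532e-9   # m, green recording laser (illustrative)
z = 0.05       # m, image depth from the hologram plane (illustrative)
p = np.linspace(50e-6, 2000e-6, 400)
blur = hogel_blur(p, lam, z)
# The optimum pitch balances the two terms at sqrt(lambda * z).
print(p[np.argmin(blur)], np.sqrt(lam * z))  # both ~163 microns here
```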

    Methods for Light Field Display Profiling and Scalable Super-Multiview Video Coding

    Light field 3D displays reproduce the light field of real or synthetic scenes, as observed by multiple viewers, without the necessity of wearing 3D glasses. Reproducing light fields is a technically challenging task in terms of optical setup, content creation, and distributed rendering, among others; however, the impressive visual quality of hologram-like scenes, in full color, at real-time frame rates, and over a very wide field of view justifies the complexity involved. Seeing objects pop far out of the screen plane without glasses impresses even viewers who have experienced other 3D displays before.

Content for these displays can be either synthetic or real. The creation of synthetic (rendered) content is relatively well understood and used in practice. Depending on the technique used, rendering has its own complexities, quite similar to those of rendering techniques for 2D displays. While rendering covers many use cases, the holy grail of all 3D display technologies is to become the future 3DTV, ending up in every living room and showing realistic 3D content without glasses. Capturing, transmitting, and rendering live scenes as light fields is extremely challenging, and it is necessary if we are to experience light field 3D television showing real people and natural scenes, or realistic 3D video conferencing with real eye contact.

In order to provide the required realism, light field displays aim for a wide field of view (up to 180°) while currently reproducing up to ~80 megapixels. Building gigapixel light field displays is realistic within the next few years. Likewise, capturing live light fields involves many synchronized cameras that cover the same wide field of view as the display and provide the same high pixel count. Therefore, light field capture and content creation have to be well optimized with respect to the targeted display technologies. Two major challenges in this process are addressed in this dissertation.

The first challenge is how to characterize the display in terms of its capability to create light fields, that is, how to profile the display in question. In clearer terms, this boils down to finding the equivalent spatial resolution, which is similar to the screen resolution of 2D displays, and the angular resolution, which describes the smallest angle whose color the display can control individually. The light field is formalized as a 4D approximation of the plenoptic function in terms of geometrical optics, through spatially localized and angularly directed light rays in the so-called ray space. Plenoptic sampling theory provides the conditions required to sample and reconstruct light fields. Subsequently, light field displays can be characterized in the Fourier domain by the effective display bandwidth they support. In the thesis, a methodology for display-specific light field analysis is proposed: it regards the display as a signal processing channel and analyzes it as such in the spectral domain. As a result, one is able to derive the display throughput (i.e., the display bandwidth) and, subsequently, the optimal camera configuration to efficiently capture and filter light fields before displaying them. While the geometrical topology of the optical light sources in projection-based light field displays can be used to theoretically derive the display bandwidth and its spatial and angular resolution, in many cases this topology is not available to the user.
Furthermore, there are many implementation details that cause the display to deviate from its theoretical model. In such cases, profiling light field displays in terms of spatial and angular resolution has to be done by measurement. Measurement methods in which the display shows specific test patterns, which are then captured by a single static or moving camera, are proposed in the thesis. Determining the effective spatial and angular resolution of a light field display is then based on an automated frequency-domain analysis of the captured images as reproduced by the display. The analysis reveals the empirical limits of the display in terms of its pass-band in both the spatial and the angular dimension. Furthermore, the spatial resolution measurements are validated by subjective tests confirming that the results are in line with the smallest features human observers can perceive on the same display. The resolution values obtained can be used to design the optimal capture setup for the display in question.

The second challenge is related to the massive number of captured views and pixels that have to be transmitted to the display. It clearly requires effective and efficient compression techniques to fit within the available bandwidth, as an uncompressed representation of such super-multiview video could easily consume ~20 gigabits per second with today's displays. Due to the high number of light rays to be captured, transmitted, and rendered, distributed systems are necessary for both capturing and rendering the light field. During the first attempts to implement real-time light field capturing, transmission, and rendering using a brute-force approach, limitations became apparent. Still, because dense multi-camera light field capturing with light ray interpolation achieves the best possible image quality, this approach was chosen as the basis of further work, despite the massive bandwidth needed. Decompressing all camera images in all rendering nodes, however, is prohibitively time-consuming and does not scale. After analyzing the light field interpolation process and the data-access patterns typical of a distributed light field rendering system, an approach to reduce the amount of data required in the rendering nodes is proposed. This approach, on the other hand, requires rectangular parts (typically vertical bars, in the case of a horizontal-parallax-only light field display) of the captured images to be available in the rendering nodes, which can be exploited to reduce the time spent decompressing video streams. However, partial decoding is not readily supported by common image/video codecs. In the thesis, approaches aimed at achieving partial decoding are proposed for H.264, HEVC, JPEG, and JPEG2000, and the results are compared.

The results of the thesis on display profiling facilitate the design of optimal camera setups for capturing scenes to be reproduced on 3D light field displays. The developed super-multiview content encoding also facilitates light field rendering in real time. This makes live light field transmission and real-time teleconferencing possible in a scalable way, using any number of cameras, and at the spatial and angular resolution the display actually needs to achieve a compelling visual experience.
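The frequency-domain profiling described above can be sketched compactly: display sinusoidal test patterns, read the modulation at each displayed frequency from the FFT of the captured line profile, and report where the modulation falls below a threshold. Synthetic data stands in for the camera captures here, and all parameters are illustrative rather than taken from the thesis:

```python
import numpy as np

def passband_cutoff(freqs, captured_rows, threshold=0.5):
    """Estimate a display's spatial pass-band from captured sinusoidal test
    patterns: for each displayed frequency, read that frequency's modulation
    from the FFT of the captured line profile, then report the highest
    frequency whose normalized modulation stays above `threshold`."""
    mtf = []
    for f, row in zip(freqs, captured_rows):
        spec = np.abs(np.fft.rfft(row - np.mean(row)))
        k = int(round(f * len(row)))          # FFT bin of the displayed frequency
        mtf.append(spec[k - 1:k + 2].sum())   # sum neighbor bins: leakage-robust
    mtf = np.asarray(mtf) / mtf[0]
    above = np.nonzero(mtf >= threshold)[0]
    return freqs[above[-1]] if above.size else 0.0

# Synthetic stand-in for camera captures: modulation rolls off with frequency.
N = 1024
freqs = np.array([8, 32, 64, 128, 256, 384]) / N      # cycles/pixel, exact bins
rows = [0.5 + 0.5 * np.exp(-(f / 0.2) ** 2) * np.sin(2 * np.pi * f * np.arange(N))
        for f in freqs]
print(passband_cutoff(freqs, rows))   # 0.125 cycles/pixel for this roll-off
```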