5 research outputs found

    View and depth preprocessing for view synthesis enhancement

    This paper presents two preprocessing methods for virtual view synthesis. In the first approach, both the horizontal and vertical resolutions of the real views and the corresponding depth maps are doubled, so that view synthesis is performed on images with densely arranged points. In the second method, the real views are filtered to eliminate blurred or improperly shifted object edges. Both methods are performed prior to synthesis, so they can be applied to different Depth-Image-Based Rendering (DIBR) algorithms. For both proposed methods, the paper reports the achieved quality gains.
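    A minimal sketch of the first method, assuming OpenCV (cv2) is available and the view/depth pair is stored as arrays; the choice of bilinear interpolation for the texture and nearest-neighbour for the depth map is an illustrative assumption, not the paper's exact filter:

    import cv2

    def upsample_for_synthesis(view, depth, factor=2):
        """Double the resolution of a real view and its depth map before
        Depth-Image-Based Rendering (DIBR)."""
        h, w = view.shape[:2]
        size = (w * factor, h * factor)
        # Bilinear interpolation keeps the texture smooth.
        view_up = cv2.resize(view, size, interpolation=cv2.INTER_LINEAR)
        # Nearest-neighbour avoids inventing depth values that lie between
        # foreground and background at object edges.
        depth_up = cv2.resize(depth, size, interpolation=cv2.INTER_NEAREST)
        return view_up, depth_up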

    Segmentation-based Method of Increasing The Depth Maps Temporal Consistency

    In this paper, a modification of graph-based depth estimation is presented. The purpose of the proposed modification is to increase the quality of estimated depth maps, reduce the estimation time, and increase the temporal consistency of depth maps. The modification is based on image segmentation using superpixels: in the first step, the segmentation of previous frames is reused for the currently processed frame in order to reduce the overall depth estimation time. In the next step, the depth map from the previous frame is used as the initial values in the depth map optimization for the current frame. This results in a better representation of object silhouettes in depth maps and reduces the computational complexity of the depth estimation process. To evaluate the performance of the proposed modification, the authors performed experiments on a set of multiview test sequences that varied in content and camera arrangement. The results confirmed the increase in depth map quality: depth maps calculated with the proposed modification are of higher quality than those from the unmodified depth estimation method, regardless of the number of performed optimization cycles. The proposed modification therefore allows depth of better quality to be estimated with an almost 40% reduction in estimation time. Moreover, the temporal consistency, measured through the reduction of the bitrate of encoded virtual views, was also considerably increased.
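    A minimal sketch of the initialization idea, using SLIC superpixels from scikit-image as a stand-in for the paper's graph-based segmentation; the function name, segment count, and per-segment median are illustrative assumptions:

    import numpy as np
    from skimage.segmentation import slic

    def init_from_previous_frame(frame, prev_depth, prev_segments=None):
        """Reuse the previous frame's segmentation and depth map as the
        starting point for the current frame's depth optimization."""
        # Reusing the previous segmentation skips a fresh superpixel pass
        # when consecutive frames are similar, reducing estimation time.
        segments = prev_segments if prev_segments is not None else \
            slic(frame, n_segments=2000, start_label=0)
        init_depth = np.empty_like(prev_depth)
        for label in np.unique(segments):
            mask = segments == label
            # Per-segment median of the previous depth as the initial value,
            # which also stabilizes depth over time (temporal consistency).
            init_depth[mask] = np.median(prev_depth[mask])
        return segments, init_depth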

    Light field image coding with flexible viewpoint scalability and random access

    This paper proposes a novel light field image compression approach with viewpoint scalability and random access functionalities. Although current state-of-the-art image coding algorithms for light fields already achieve high compression ratios, they lack support for such functionalities, which are important for ensuring compatibility with different displays/capturing devices, enhanced user interaction, and low decoding delay. The proposed solution enables various encoding profiles with different flexible viewpoint scalability and random access capabilities, depending on the application scenario. Compared to other state-of-the-art methods, the proposed approach consistently achieves higher bitrate savings (44% on average), namely against the pseudo-video sequence coding approach based on HEVC. Moreover, the proposed scalable codec also outperforms the MuLE and WaSP verification models, achieving average bitrate savings of 37% and 47%, respectively. The proposed flexible encoding profiles add fine control over the image prediction dependencies, which allows the tradeoff between coding efficiency and viewpoint random access to be exploited, decreasing the maximum random access penalty, which ranges from 0.60 to 0.15 for lenslet and HDCA light fields.
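    The profile idea can be illustrated with a toy prediction structure over a grid of viewpoints; everything below (grid size, anchor spacing, the cost measure) is a hypothetical sketch of the tradeoff, not the paper's codec:

    def build_profile(rows, cols, stride):
        """Toy encoding profile: every `stride`-th viewpoint is an
        intra-coded anchor; every other view predicts from its anchor."""
        deps = {}
        for r in range(rows):
            for c in range(cols):
                anchor = (r - r % stride, c - c % stride)
                deps[(r, c)] = None if (r, c) == anchor else anchor
        return deps

    def views_to_decode(deps, target):
        """Number of views decoded to reach `target`: a proxy for the
        random access penalty."""
        count, view = 0, target
        while view is not None:
            count += 1
            view = deps[view]
        return count

    # Sparse anchors favour coding efficiency; dense anchors favour fast
    # random access. The chosen profile sets where on that tradeoff to sit.
    deps = build_profile(13, 13, stride=4)
    print(views_to_decode(deps, (5, 7)))  # 2: one anchor plus the view itself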

    A Stackable 3D Light Field System for Free Viewpoint Virtual Reality

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2020. Advisor: Hyuk-Jae Lee.
    Conventional captured-image-based virtual reality (VR) systems only support rotational view-direction changes: roll, yaw, and pitch. This is 3-degree-of-freedom (3-DoF). DoF describes the user's possible movements, and the highest DoF is 6, comprising the three rotational view-direction changes, roll, yaw, and pitch, as well as three translational viewpoint movements along the x, y, and z axes. The limited DoF of conventional captured-image-based VR lowers the user's sense of reality. Light field (LF), which can generate a view at a free viewpoint through a combination of light rays, is a suitable approach to support the freedom to change viewpoints. An LF assumes a planar or spherical surface and generates a view by combining light rays passing through that surface. In particular, a spherical LF system creates a 360-degree view from light rays incident from all directions. In a spherical LF, the viewpoint freely moves along the x, y, and z axes inside the sphere and changes view direction, so 6-DoF is supported. However, it is difficult to acquire light rays for an LF that assumes a planar or spherical surface. In the case of a spherical LF, special equipment that rotates multiple cameras arranged in an arch is used to acquire light rays along the spherical surface. Covering a larger space requires assuming a larger plane or sphere, and the larger the surface, the more difficult light ray acquisition becomes. A 3D LF, unlike a conventional LF that assumes a surface, consists of light rays acquired along a line: a line replaces the plane, and a circular structure replaces the spherical surface. Fixing one parameter this way, a light ray is represented by three variables instead of four. Light rays are easy to acquire, since they can be captured by moving a camera mounted on a slider or dolly along the line. However, a 3D LF cannot capture vertical parallax, because it obtains light rays at only one vertical point, and this distorts the generated views. Assuming a larger structure does not significantly increase the difficulty of acquisition, but it does increase the distortion of 3D LF view generation. This thesis aims to develop a free-viewpoint VR system for large spaces based on the 3D LF. Instead of extending the structure itself, as in existing methods, it assumes a 3D LF Stack in which multiple 3D LFs are stacked front to back. The proposed system keeps light ray acquisition simple while limiting the distortion to a certain range.
    In addition, two 3D LF Stacks are arranged orthogonally to generate a 360-degree view at a free viewpoint. The proposed system faces two challenges. The first is the need to connect independent 3D LFs: the existing LF-based approach creates a view from a single LF, while the proposed system generates a view using four 3D LFs at once. This thesis proposes two 3D LF connection methods and introduces appropriate usage according to various implementation environments. The second is that the 3D LF Stack still contains distortion, and the error is particularly noticeable as the viewpoint moves and the 3D LF generating the view changes. To address this, the thesis proposes a view generation method using a set of light rays related by epipolar geometry within the 3D LF Stack.
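    A minimal sketch of nearest-ray view generation under the three-parameter representation (camera position s along the capture line plus two image coordinates); the geometry conventions, names, and nearest-ray selection are illustrative assumptions rather than the thesis's implementation, and the unchanged vertical coordinate is exactly where the missing vertical parallax shows up:

    import numpy as np

    def render_scanline(cameras, positions, x, z, fov):
        """Assemble one scanline of a virtual view at (x, z) from a 3D LF
        captured by cameras placed at `positions` along the line z = 0.
        `cameras` has shape (n_cams, width): per camera, the pixel sampled
        at each horizontal viewing angle."""
        n_cams, width = cameras.shape
        out = np.empty(width, dtype=cameras.dtype)
        for col in range(width):
            # Horizontal viewing angle of this output column.
            theta = (col / (width - 1) - 0.5) * fov
            # The ray from (x, z) in direction (sin(theta), cos(theta))
            # crosses the capture line z = 0 at s = x - z * tan(theta).
            s = x - z * np.tan(theta)
            cam = int(np.argmin(np.abs(positions - s)))  # nearest captured ray
            # The vertical coordinate passes through unchanged: a 3D LF
            # samples rays at one height, so vertical parallax is lost.
            out[col] = cameras[cam, col]
        return out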