
    ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์—์„œ ์ ์ง„์  ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋ง์„ ์‚ฌ์šฉํ•œ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ๋ Œ๋”๋ง

    ํ•™์œ„๋…ผ๋ฌธ (๋ฐ•์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ „๊ธฐยท์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2021. 2. ์‹ ์˜๊ธธ.Direct volume rendering is a widely used technique for extracting information from 3D scalar fields acquired by measurement or numerical simulation. To visualize the structure inside the volume, the voxels scalar value is often represented by a translucent color. This translucency of direct volume rendering makes it difficult to perceive the depth between the nested structures. Various volume rendering techniques to improve depth perception are mainly based on illustrative rendering techniques, and physically based rendering techniques such as depth of field effects are difficult to apply due to long computation time. With the development of immersive systems such as virtual and augmented reality and the growing interest in perceptually motivated medical visualization, it is necessary to implement depth of field in direct volume rendering. This study proposes a novel method for applying depth of field effects to volume ray casting to improve the depth perception. By performing ray casting using multiple rays per pixel, objects at a distance in focus are sharply rendered and objects at an out-of-focus distance are blurred. To achieve these effects, a thin lens camera model is used to simulate rays passing through different parts of the lens. And an effective lens sampling method is used to generate an aliasing-free image with a minimum number of lens samples that directly affect performance. The proposed method is implemented without preprocessing based on the GPU-based volume ray casting pipeline. Therefore, all acceleration techniques of volume ray casting can be applied without restrictions. We also propose multi-pass rendering using progressive lens sampling as an acceleration technique. More lens samples are progressively used for ray generation over multiple render passes. Each pixel has a different final render pass depending on the predicted maximum blurring size based on the circle of confusion. This technique makes it possible to apply a different number of lens samples for each pixel, depending on the degree of blurring of the depth of field effects over distance. This acceleration method reduces unnecessary lens sampling and increases the cache hit rate of the GPU, allowing us to generate the depth of field effects at interactive frame rates in direct volume rendering. In the experiments using various data, the proposed method generated realistic depth of field effects in real time. These results demonstrate that our method produces depth of field effects with similar quality to the offline image synthesis method and is up to 12 times faster than the existing depth of field method in direct volume rendering.์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง(direct volume rendering, DVR)์€ ์ธก์ • ๋˜๋Š” ์ˆ˜์น˜ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์œผ๋กœ ์–ป์€ 3์ฐจ์› ๊ณต๊ฐ„์˜ ์Šค์นผ๋ผ ํ•„๋“œ(3D scalar fields) ๋ฐ์ดํ„ฐ์—์„œ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๋Š”๋ฐ ๋„๋ฆฌ ์‚ฌ์šฉ๋˜๋Š” ๊ธฐ์ˆ ์ด๋‹ค. ๋ณผ๋ฅจ ๋‚ด๋ถ€์˜ ๊ตฌ์กฐ๋ฅผ ๊ฐ€์‹œํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๋ณต์…€(voxel)์˜ ์Šค์นผ๋ผ ๊ฐ’์€ ์ข…์ข… ๋ฐ˜ํˆฌ๋ช…์˜ ์ƒ‰์ƒ์œผ๋กœ ํ‘œํ˜„๋œ๋‹ค. ์ด๋Ÿฌํ•œ ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์˜ ๋ฐ˜ํˆฌ๋ช…์„ฑ์€ ์ค‘์ฒฉ๋œ ๊ตฌ์กฐ ๊ฐ„ ๊นŠ์ด ์ธ์‹์„ ์–ด๋ ต๊ฒŒ ํ•œ๋‹ค. 
๊นŠ์ด ์ธ์‹์„ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ ์œ„ํ•œ ๋‹ค์–‘ํ•œ ๋ณผ๋ฅจ ๋ Œ๋”๋ง ๊ธฐ๋ฒ•๋“ค์€ ์ฃผ๋กœ ์‚ฝํ™”ํ’ ๋ Œ๋”๋ง(illustrative rendering)์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋ฉฐ, ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„(depth of field, DoF) ํšจ๊ณผ์™€ ๊ฐ™์€ ๋ฌผ๋ฆฌ ๊ธฐ๋ฐ˜ ๋ Œ๋”๋ง(physically based rendering) ๊ธฐ๋ฒ•๋“ค์€ ๊ณ„์‚ฐ ์‹œ๊ฐ„์ด ์˜ค๋ž˜ ๊ฑธ๋ฆฌ๊ธฐ ๋•Œ๋ฌธ์— ์ ์šฉ์ด ์–ด๋ ต๋‹ค. ๊ฐ€์ƒ ๋ฐ ์ฆ๊ฐ• ํ˜„์‹ค๊ณผ ๊ฐ™์€ ๋ชฐ์ž…ํ˜• ์‹œ์Šคํ…œ์˜ ๋ฐœ์ „๊ณผ ์ธ๊ฐ„์˜ ์ง€๊ฐ์— ๊ธฐ๋ฐ˜ํ•œ ์˜๋ฃŒ์˜์ƒ ์‹œ๊ฐํ™”์— ๋Œ€ํ•œ ๊ด€์‹ฌ์ด ์ฆ๊ฐ€ํ•จ์— ๋”ฐ๋ผ ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์—์„œ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„๋ฅผ ๊ตฌํ˜„ํ•  ํ•„์š”๊ฐ€ ์žˆ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์˜ ๊นŠ์ด ์ธ์‹์„ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ ์œ„ํ•ด ๋ณผ๋ฅจ ๊ด‘์„ ํˆฌ์‚ฌ๋ฒ•์— ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ๋ฅผ ์ ์šฉํ•˜๋Š” ์ƒˆ๋กœ์šด ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ํ”ฝ์…€ ๋‹น ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๊ด‘์„ ์„ ์‚ฌ์šฉํ•œ ๊ด‘์„ ํˆฌ์‚ฌ๋ฒ•(ray casting)์„ ์ˆ˜ํ–‰ํ•˜์—ฌ ์ดˆ์ ์ด ๋งž๋Š” ๊ฑฐ๋ฆฌ์— ์žˆ๋Š” ๋ฌผ์ฒด๋Š” ์„ ๋ช…ํ•˜๊ฒŒ ํ‘œํ˜„๋˜๊ณ  ์ดˆ์ ์ด ๋งž์ง€ ์•Š๋Š” ๊ฑฐ๋ฆฌ์— ์žˆ๋Š” ๋ฌผ์ฒด๋Š” ํ๋ฆฌ๊ฒŒ ํ‘œํ˜„๋œ๋‹ค. ์ด๋Ÿฌํ•œ ํšจ๊ณผ๋ฅผ ์–ป๊ธฐ ์œ„ํ•˜์—ฌ ๋ Œ์ฆˆ์˜ ์„œ๋กœ ๋‹ค๋ฅธ ๋ถ€๋ถ„์„ ํ†ต๊ณผํ•˜๋Š” ๊ด‘์„ ๋“ค์„ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ํ•˜๋Š” ์–‡์€ ๋ Œ์ฆˆ ์นด๋ฉ”๋ผ ๋ชจ๋ธ(thin lens camera model)์ด ์‚ฌ์šฉ๋˜์—ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์„ฑ๋Šฅ์— ์ง์ ‘์ ์œผ๋กœ ์˜ํ–ฅ์„ ๋ผ์น˜๋Š” ๋ Œ์ฆˆ ์ƒ˜ํ”Œ์€ ์ตœ์ ์˜ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋ง ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ์ตœ์†Œํ•œ์˜ ๊ฐœ์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์•จ๋ฆฌ์–ด์‹ฑ(aliasing)์ด ์—†๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜์˜€๋‹ค. ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์€ ๊ธฐ์กด์˜ GPU ๊ธฐ๋ฐ˜ ๋ณผ๋ฅจ ๊ด‘์„ ํˆฌ์‚ฌ๋ฒ• ํŒŒ์ดํ”„๋ผ์ธ ๋‚ด์—์„œ ์ „์ฒ˜๋ฆฌ ์—†์ด ๊ตฌํ˜„๋œ๋‹ค. ๋”ฐ๋ผ์„œ ๋ณผ๋ฅจ ๊ด‘์„ ํˆฌ์‚ฌ๋ฒ•์˜ ๋ชจ๋“  ๊ฐ€์†ํ™” ๊ธฐ๋ฒ•์„ ์ œํ•œ์—†์ด ์ ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ๋˜ํ•œ ๊ฐ€์† ๊ธฐ์ˆ ๋กœ ๋ˆ„์ง„ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋ง(progressive lens sampling)์„ ์‚ฌ์šฉํ•˜๋Š” ๋‹ค์ค‘ ํŒจ์Šค ๋ Œ๋”๋ง(multi-pass rendering)์„ ์ œ์•ˆํ•œ๋‹ค. ๋” ๋งŽ์€ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋“ค์ด ์—ฌ๋Ÿฌ ๋ Œ๋” ํŒจ์Šค๋“ค์„ ๊ฑฐ์น˜๋ฉด์„œ ์ ์ง„์ ์œผ๋กœ ์‚ฌ์šฉ๋œ๋‹ค. ๊ฐ ํ”ฝ์…€์€ ์ฐฉ๋ž€์›(circle of confusion)์„ ๊ธฐ๋ฐ˜์œผ๋กœ ์˜ˆ์ธก๋œ ์ตœ๋Œ€ ํ๋ฆผ ์ •๋„์— ๋”ฐ๋ผ ๋‹ค๋ฅธ ์ตœ์ข… ๋ Œ๋”๋ง ํŒจ์Šค๋ฅผ ๊ฐ–๋Š”๋‹ค. ์ด ๊ธฐ๋ฒ•์€ ๊ฑฐ๋ฆฌ์— ๋”ฐ๋ฅธ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ์˜ ํ๋ฆผ ์ •๋„์— ๋”ฐ๋ผ ๊ฐ ํ”ฝ์…€์— ๋‹ค๋ฅธ ๊ฐœ์ˆ˜์˜ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•œ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฐ€์†ํ™” ๋ฐฉ๋ฒ•์€ ๋ถˆํ•„์š”ํ•œ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋ง์„ ์ค„์ด๊ณ  GPU์˜ ์บ์‹œ(cache) ์ ์ค‘๋ฅ ์„ ๋†’์—ฌ ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์—์„œ ์ƒํ˜ธ์ž‘์šฉ์ด ๊ฐ€๋Šฅํ•œ ํ”„๋ ˆ์ž„ ์†๋„๋กœ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ๋ฅผ ๋ Œ๋”๋ง ํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•œ๋‹ค. ๋‹ค์–‘ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•œ ์‹คํ—˜์—์„œ ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์€ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์‚ฌ์‹ค์ ์ธ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ๋ฅผ ์ƒ์„ฑํ–ˆ๋‹ค. 
์ด๋Ÿฌํ•œ ๊ฒฐ๊ณผ๋Š” ์šฐ๋ฆฌ์˜ ๋ฐฉ๋ฒ•์ด ์˜คํ”„๋ผ์ธ ์ด๋ฏธ์ง€ ํ•ฉ์„ฑ ๋ฐฉ๋ฒ•๊ณผ ์œ ์‚ฌํ•œ ํ’ˆ์งˆ์˜ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ๋ฅผ ์ƒ์„ฑํ•˜๋ฉด์„œ ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์˜ ๊ธฐ์กด ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ๋ Œ๋”๋ง ๋ฐฉ๋ฒ•๋ณด๋‹ค ์ตœ๋Œ€ 12๋ฐฐ๊นŒ์ง€ ๋น ๋ฅด๋‹ค๋Š” ๊ฒƒ์„ ๋ณด์—ฌ์ค€๋‹ค.CHAPTER 1 INTRODUCTION 1 1.1 Motivation 1 1.2 Dissertation Goals 5 1.3 Main Contributions 6 1.4 Organization of Dissertation 8 CHAPTER 2 RELATED WORK 9 2.1 Depth of Field on Surface Rendering 10 2.1.1 Object-Space Approaches 11 2.1.2 Image-Space Approaches 15 2.2 Depth of Field on Volume Rendering 26 2.2.1 Blur Filtering on Slice-Based Volume Rendering 28 2.2.2 Stochastic Sampling on Volume Ray Casting 30 CHAPTER 3 DEPTH OF FIELD VOLUME RAY CASTING 33 3.1 Fundamentals 33 3.1.1 Depth of Field 34 3.1.2 Camera Models 36 3.1.3 Direct Volume Rendering 42 3.2 Geometry Setup 48 3.3 Lens Sampling Strategy 53 3.3.1 Sampling Techniques 53 3.3.2 Disk Mapping 57 3.4 CoC-Based Multi-Pass Rendering 60 3.4.1 Progressive Lens Sample Sequence 60 3.4.2 Final Render Pass Determination 62 CHAPTER 4 GPU IMPLEMENTATION 66 4.1 Overview 66 4.2 Rendering Pipeline 67 4.3 Focal Plane Transformation 74 4.4 Lens Sample Transformation 76 CHAPTER 5 EXPERIMENTAL RESULTS 78 5.1 Number of Lens Samples 79 5.2 Number of Render Passes 82 5.3 Render Pass Parameter 84 5.4 Comparison with Previous Methods 87 CHAPTER 6 CONCLUSION 97 Bibliography 101 Appendix 111Docto