330 research outputs found

    ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์—์„œ ์ ์ง„์  ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋ง์„ ์‚ฌ์šฉํ•œ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ๋ Œ๋”๋ง

    ํ•™์œ„๋…ผ๋ฌธ (๋ฐ•์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ „๊ธฐยท์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2021. 2. ์‹ ์˜๊ธธ.Direct volume rendering is a widely used technique for extracting information from 3D scalar fields acquired by measurement or numerical simulation. To visualize the structure inside the volume, the voxels scalar value is often represented by a translucent color. This translucency of direct volume rendering makes it difficult to perceive the depth between the nested structures. Various volume rendering techniques to improve depth perception are mainly based on illustrative rendering techniques, and physically based rendering techniques such as depth of field effects are difficult to apply due to long computation time. With the development of immersive systems such as virtual and augmented reality and the growing interest in perceptually motivated medical visualization, it is necessary to implement depth of field in direct volume rendering. This study proposes a novel method for applying depth of field effects to volume ray casting to improve the depth perception. By performing ray casting using multiple rays per pixel, objects at a distance in focus are sharply rendered and objects at an out-of-focus distance are blurred. To achieve these effects, a thin lens camera model is used to simulate rays passing through different parts of the lens. And an effective lens sampling method is used to generate an aliasing-free image with a minimum number of lens samples that directly affect performance. The proposed method is implemented without preprocessing based on the GPU-based volume ray casting pipeline. Therefore, all acceleration techniques of volume ray casting can be applied without restrictions. We also propose multi-pass rendering using progressive lens sampling as an acceleration technique. More lens samples are progressively used for ray generation over multiple render passes. Each pixel has a different final render pass depending on the predicted maximum blurring size based on the circle of confusion. This technique makes it possible to apply a different number of lens samples for each pixel, depending on the degree of blurring of the depth of field effects over distance. This acceleration method reduces unnecessary lens sampling and increases the cache hit rate of the GPU, allowing us to generate the depth of field effects at interactive frame rates in direct volume rendering. In the experiments using various data, the proposed method generated realistic depth of field effects in real time. These results demonstrate that our method produces depth of field effects with similar quality to the offline image synthesis method and is up to 12 times faster than the existing depth of field method in direct volume rendering.์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง(direct volume rendering, DVR)์€ ์ธก์ • ๋˜๋Š” ์ˆ˜์น˜ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์œผ๋กœ ์–ป์€ 3์ฐจ์› ๊ณต๊ฐ„์˜ ์Šค์นผ๋ผ ํ•„๋“œ(3D scalar fields) ๋ฐ์ดํ„ฐ์—์„œ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๋Š”๋ฐ ๋„๋ฆฌ ์‚ฌ์šฉ๋˜๋Š” ๊ธฐ์ˆ ์ด๋‹ค. ๋ณผ๋ฅจ ๋‚ด๋ถ€์˜ ๊ตฌ์กฐ๋ฅผ ๊ฐ€์‹œํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๋ณต์…€(voxel)์˜ ์Šค์นผ๋ผ ๊ฐ’์€ ์ข…์ข… ๋ฐ˜ํˆฌ๋ช…์˜ ์ƒ‰์ƒ์œผ๋กœ ํ‘œํ˜„๋œ๋‹ค. ์ด๋Ÿฌํ•œ ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์˜ ๋ฐ˜ํˆฌ๋ช…์„ฑ์€ ์ค‘์ฒฉ๋œ ๊ตฌ์กฐ ๊ฐ„ ๊นŠ์ด ์ธ์‹์„ ์–ด๋ ต๊ฒŒ ํ•œ๋‹ค. 
๊นŠ์ด ์ธ์‹์„ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ ์œ„ํ•œ ๋‹ค์–‘ํ•œ ๋ณผ๋ฅจ ๋ Œ๋”๋ง ๊ธฐ๋ฒ•๋“ค์€ ์ฃผ๋กœ ์‚ฝํ™”ํ’ ๋ Œ๋”๋ง(illustrative rendering)์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋ฉฐ, ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„(depth of field, DoF) ํšจ๊ณผ์™€ ๊ฐ™์€ ๋ฌผ๋ฆฌ ๊ธฐ๋ฐ˜ ๋ Œ๋”๋ง(physically based rendering) ๊ธฐ๋ฒ•๋“ค์€ ๊ณ„์‚ฐ ์‹œ๊ฐ„์ด ์˜ค๋ž˜ ๊ฑธ๋ฆฌ๊ธฐ ๋•Œ๋ฌธ์— ์ ์šฉ์ด ์–ด๋ ต๋‹ค. ๊ฐ€์ƒ ๋ฐ ์ฆ๊ฐ• ํ˜„์‹ค๊ณผ ๊ฐ™์€ ๋ชฐ์ž…ํ˜• ์‹œ์Šคํ…œ์˜ ๋ฐœ์ „๊ณผ ์ธ๊ฐ„์˜ ์ง€๊ฐ์— ๊ธฐ๋ฐ˜ํ•œ ์˜๋ฃŒ์˜์ƒ ์‹œ๊ฐํ™”์— ๋Œ€ํ•œ ๊ด€์‹ฌ์ด ์ฆ๊ฐ€ํ•จ์— ๋”ฐ๋ผ ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์—์„œ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„๋ฅผ ๊ตฌํ˜„ํ•  ํ•„์š”๊ฐ€ ์žˆ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์˜ ๊นŠ์ด ์ธ์‹์„ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ ์œ„ํ•ด ๋ณผ๋ฅจ ๊ด‘์„ ํˆฌ์‚ฌ๋ฒ•์— ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ๋ฅผ ์ ์šฉํ•˜๋Š” ์ƒˆ๋กœ์šด ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ํ”ฝ์…€ ๋‹น ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๊ด‘์„ ์„ ์‚ฌ์šฉํ•œ ๊ด‘์„ ํˆฌ์‚ฌ๋ฒ•(ray casting)์„ ์ˆ˜ํ–‰ํ•˜์—ฌ ์ดˆ์ ์ด ๋งž๋Š” ๊ฑฐ๋ฆฌ์— ์žˆ๋Š” ๋ฌผ์ฒด๋Š” ์„ ๋ช…ํ•˜๊ฒŒ ํ‘œํ˜„๋˜๊ณ  ์ดˆ์ ์ด ๋งž์ง€ ์•Š๋Š” ๊ฑฐ๋ฆฌ์— ์žˆ๋Š” ๋ฌผ์ฒด๋Š” ํ๋ฆฌ๊ฒŒ ํ‘œํ˜„๋œ๋‹ค. ์ด๋Ÿฌํ•œ ํšจ๊ณผ๋ฅผ ์–ป๊ธฐ ์œ„ํ•˜์—ฌ ๋ Œ์ฆˆ์˜ ์„œ๋กœ ๋‹ค๋ฅธ ๋ถ€๋ถ„์„ ํ†ต๊ณผํ•˜๋Š” ๊ด‘์„ ๋“ค์„ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ํ•˜๋Š” ์–‡์€ ๋ Œ์ฆˆ ์นด๋ฉ”๋ผ ๋ชจ๋ธ(thin lens camera model)์ด ์‚ฌ์šฉ๋˜์—ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์„ฑ๋Šฅ์— ์ง์ ‘์ ์œผ๋กœ ์˜ํ–ฅ์„ ๋ผ์น˜๋Š” ๋ Œ์ฆˆ ์ƒ˜ํ”Œ์€ ์ตœ์ ์˜ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋ง ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ์ตœ์†Œํ•œ์˜ ๊ฐœ์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์•จ๋ฆฌ์–ด์‹ฑ(aliasing)์ด ์—†๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜์˜€๋‹ค. ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์€ ๊ธฐ์กด์˜ GPU ๊ธฐ๋ฐ˜ ๋ณผ๋ฅจ ๊ด‘์„ ํˆฌ์‚ฌ๋ฒ• ํŒŒ์ดํ”„๋ผ์ธ ๋‚ด์—์„œ ์ „์ฒ˜๋ฆฌ ์—†์ด ๊ตฌํ˜„๋œ๋‹ค. ๋”ฐ๋ผ์„œ ๋ณผ๋ฅจ ๊ด‘์„ ํˆฌ์‚ฌ๋ฒ•์˜ ๋ชจ๋“  ๊ฐ€์†ํ™” ๊ธฐ๋ฒ•์„ ์ œํ•œ์—†์ด ์ ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ๋˜ํ•œ ๊ฐ€์† ๊ธฐ์ˆ ๋กœ ๋ˆ„์ง„ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋ง(progressive lens sampling)์„ ์‚ฌ์šฉํ•˜๋Š” ๋‹ค์ค‘ ํŒจ์Šค ๋ Œ๋”๋ง(multi-pass rendering)์„ ์ œ์•ˆํ•œ๋‹ค. ๋” ๋งŽ์€ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋“ค์ด ์—ฌ๋Ÿฌ ๋ Œ๋” ํŒจ์Šค๋“ค์„ ๊ฑฐ์น˜๋ฉด์„œ ์ ์ง„์ ์œผ๋กœ ์‚ฌ์šฉ๋œ๋‹ค. ๊ฐ ํ”ฝ์…€์€ ์ฐฉ๋ž€์›(circle of confusion)์„ ๊ธฐ๋ฐ˜์œผ๋กœ ์˜ˆ์ธก๋œ ์ตœ๋Œ€ ํ๋ฆผ ์ •๋„์— ๋”ฐ๋ผ ๋‹ค๋ฅธ ์ตœ์ข… ๋ Œ๋”๋ง ํŒจ์Šค๋ฅผ ๊ฐ–๋Š”๋‹ค. ์ด ๊ธฐ๋ฒ•์€ ๊ฑฐ๋ฆฌ์— ๋”ฐ๋ฅธ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ์˜ ํ๋ฆผ ์ •๋„์— ๋”ฐ๋ผ ๊ฐ ํ”ฝ์…€์— ๋‹ค๋ฅธ ๊ฐœ์ˆ˜์˜ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•œ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฐ€์†ํ™” ๋ฐฉ๋ฒ•์€ ๋ถˆํ•„์š”ํ•œ ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋ง์„ ์ค„์ด๊ณ  GPU์˜ ์บ์‹œ(cache) ์ ์ค‘๋ฅ ์„ ๋†’์—ฌ ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์—์„œ ์ƒํ˜ธ์ž‘์šฉ์ด ๊ฐ€๋Šฅํ•œ ํ”„๋ ˆ์ž„ ์†๋„๋กœ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ๋ฅผ ๋ Œ๋”๋ง ํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•œ๋‹ค. ๋‹ค์–‘ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•œ ์‹คํ—˜์—์„œ ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์€ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์‚ฌ์‹ค์ ์ธ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ๋ฅผ ์ƒ์„ฑํ–ˆ๋‹ค. 
์ด๋Ÿฌํ•œ ๊ฒฐ๊ณผ๋Š” ์šฐ๋ฆฌ์˜ ๋ฐฉ๋ฒ•์ด ์˜คํ”„๋ผ์ธ ์ด๋ฏธ์ง€ ํ•ฉ์„ฑ ๋ฐฉ๋ฒ•๊ณผ ์œ ์‚ฌํ•œ ํ’ˆ์งˆ์˜ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ํšจ๊ณผ๋ฅผ ์ƒ์„ฑํ•˜๋ฉด์„œ ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์˜ ๊ธฐ์กด ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ๋ Œ๋”๋ง ๋ฐฉ๋ฒ•๋ณด๋‹ค ์ตœ๋Œ€ 12๋ฐฐ๊นŒ์ง€ ๋น ๋ฅด๋‹ค๋Š” ๊ฒƒ์„ ๋ณด์—ฌ์ค€๋‹ค.CHAPTER 1 INTRODUCTION 1 1.1 Motivation 1 1.2 Dissertation Goals 5 1.3 Main Contributions 6 1.4 Organization of Dissertation 8 CHAPTER 2 RELATED WORK 9 2.1 Depth of Field on Surface Rendering 10 2.1.1 Object-Space Approaches 11 2.1.2 Image-Space Approaches 15 2.2 Depth of Field on Volume Rendering 26 2.2.1 Blur Filtering on Slice-Based Volume Rendering 28 2.2.2 Stochastic Sampling on Volume Ray Casting 30 CHAPTER 3 DEPTH OF FIELD VOLUME RAY CASTING 33 3.1 Fundamentals 33 3.1.1 Depth of Field 34 3.1.2 Camera Models 36 3.1.3 Direct Volume Rendering 42 3.2 Geometry Setup 48 3.3 Lens Sampling Strategy 53 3.3.1 Sampling Techniques 53 3.3.2 Disk Mapping 57 3.4 CoC-Based Multi-Pass Rendering 60 3.4.1 Progressive Lens Sample Sequence 60 3.4.2 Final Render Pass Determination 62 CHAPTER 4 GPU IMPLEMENTATION 66 4.1 Overview 66 4.2 Rendering Pipeline 67 4.3 Focal Plane Transformation 74 4.4 Lens Sample Transformation 76 CHAPTER 5 EXPERIMENTAL RESULTS 78 5.1 Number of Lens Samples 79 5.2 Number of Render Passes 82 5.3 Render Pass Parameter 84 5.4 Comparison with Previous Methods 87 CHAPTER 6 CONCLUSION 97 Bibliography 101 Appendix 111Docto

    Haptic Interaction with 3D oriented point clouds on the GPU

    Real-time point-based rendering and interaction with virtual objects is gaining popularity and importance as different haptic devices and technologies increasingly provide the basis for realistic interaction. Haptic interaction is used for a wide range of applications such as medical training, remote robot operation, tactile displays, and video games. The main focus is virtual object visualization and interaction using haptic devices; this process involves several steps: data acquisition, graphic rendering, haptic interaction, and data modification. This work presents a framework for haptic interaction using the GPU as a hardware accelerator, and includes an approach for modifying the data during interaction. The results demonstrate the limits and capabilities of these techniques in the context of volume rendering for haptic applications. The use of dynamic parallelism to scale the number of accelerator threads to the interaction requirements is also studied, allowing data sets of up to one million points to be edited at interactive haptic frame rates.
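As a rough illustration of the per-tick proximity query such a haptic loop must answer, the sketch below computes a penalty force from the oriented points near the probe. It is a CPU-side Python/NumPy sketch; the framework described here runs the equivalent search on the GPU (with dynamic parallelism spawning extra threads where the interaction demands it), and the stiffness and radius values are placeholder assumptions.

```python
import numpy as np

def haptic_force(probe_pos, points, normals, stiffness=800.0, radius=0.005):
    """One servo tick (haptic loops typically run near 1 kHz): find oriented
    points inside the probe's interaction radius and return a penalty force."""
    dist = np.linalg.norm(points - probe_pos, axis=1)
    inside = dist < radius
    if not inside.any():
        return np.zeros(3)
    penetration = radius - dist[inside]          # how far each point intrudes
    # Push back along each contacted point's normal, averaged over contacts
    force = (stiffness * penetration[:, None] * normals[inside]).sum(axis=0)
    return force / inside.sum()
```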

    Volumetric real-time particle-based representation of large unstructured tetrahedral polygon meshes

    In this paper we propose a particle-based volume rendering approach for unstructured, three-dimensional, tetrahedral polygon meshes. We stochastically generate millions of particles per second and project them onto the screen in real time. In contrast to previous rendering techniques for tetrahedral volume meshes, our method does not require prior depth sorting of the geometry. Instead, the rendered image is generated by choosing the particles closest to the camera. Furthermore, we use spatial superimposing: each pixel is constructed from multiple subpixels. This approach not only increases projection accuracy but also allows combining subpixels into one superpixel, which creates the well-known translucency effect of volume rendering. We show that our method is fast enough for visualizing unstructured three-dimensional grids under hard real-time constraints and that it scales well to a high number of particles.
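The two steps the abstract describes, keeping the particle closest to the camera per subpixel and averaging subpixels into a superpixel, can be sketched as below (Python/NumPy; the stochastic generation of particles inside tetrahedra is assumed to have happened already, particles are assumed projected to normalized [0,1)^2 screen coordinates, and all names are illustrative).

```python
import numpy as np

def splat_particles(xy, depth, color, width, height, sub=4):
    """Nearest-particle depth test per subpixel, then sub x sub averaging
    into superpixels; the averaging is what yields the translucency."""
    sw, sh = width * sub, height * sub
    zbuf = np.full((sh, sw), np.inf)
    cbuf = np.zeros((sh, sw, 3))                 # unhit subpixels stay background
    for (x, y), z, c in zip(xy, depth, color):
        px, py = int(x * sw), int(y * sh)
        if 0 <= px < sw and 0 <= py < sh and z < zbuf[py, px]:
            zbuf[py, px], cbuf[py, px] = z, c    # keep only the closest particle
    # Combine each sub x sub block of subpixels into one superpixel
    return cbuf.reshape(height, sub, width, sub, 3).mean(axis=(1, 3))
```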

    Proxy-guided Image-based Rendering for Mobile Devices

    VR headsets and hand-held devices are not powerful enough to render complex scenes in real time. A server can take on the rendering task, but network latency prohibits a good user experience. We present a new image-based rendering (IBR) architecture for masking this latency. It runs in real time even on very weak mobile devices, supports modern game engine graphics, and maintains high visual quality even for large view displacements. We propose a novel server-side dual-view representation that leverages an optimally-placed extra view and depth peeling to provide the client with coverage for filling disocclusion holes. This representation is directly rendered in a novel wide-angle projection with favorable directional parameterization. A new client-side IBR algorithm uses a pre-transmitted level-of-detail proxy with an encaging simplification and depth-carving to maintain highly complex geometric detail. We demonstrate our approach with typical VR and mobile gaming applications running on mobile hardware. Our technique compares favorably to competing approaches according to perceptual and numerical comparisons.
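The paper's dual-view representation, wide-angle parameterization, and proxy carving go well beyond a short sketch, but the baseline operation any such IBR client performs, forward-reprojecting a received color-and-depth frame into the current head pose, looks roughly like this (Python/NumPy; the 4x4 matrix conventions and names are assumptions).

```python
import numpy as np

def reproject(depth, K, src_cam_to_world, dst_world_to_cam):
    """Map every pixel of a source depth image to its pixel position and
    depth in a destination view; splatting colors there (plus hole filling
    from an extra view) is the IBR step built on top of this."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T              # camera-space directions, z = 1
    pts_src = rays * depth.reshape(-1, 1)        # camera-space points
    pts_w = pts_src @ src_cam_to_world[:3, :3].T + src_cam_to_world[:3, 3]
    pts_dst = pts_w @ dst_world_to_cam[:3, :3].T + dst_world_to_cam[:3, 3]
    uvw = pts_dst @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]                # destination pixel coordinates
    return uv.reshape(h, w, 2), pts_dst[:, 2].reshape(h, w)
```

Pixels that land outside the destination frame, or that were occluded in the source view, are exactly the disocclusion holes the server's optimally-placed extra view is meant to cover.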

    Deep Bilateral Learning for Real-Time Image Enhancement

    Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.
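The slicing step described above can be illustrated roughly as follows (Python/NumPy; nearest-cell lookup is used for brevity where the paper's slicing node interpolates trilinearly in an edge-preserving way, and the grid layout of per-cell 3x4 affine matrices is an assumed convention).

```python
import numpy as np

def slice_and_apply(grid, guide, image):
    """grid:  (gh, gw, gd, 3, 4) bilateral grid of affine color transforms,
    guide: (h, w) full-resolution guidance map in [0, 1],
    image: (h, w, 3) full-resolution input. Returns the transformed image."""
    gh, gw, gd = grid.shape[:3]
    h, w = guide.shape
    ys = np.arange(h) * gh // h                           # spatial cell index
    xs = np.arange(w) * gw // w
    zs = np.clip((guide * gd).astype(int), 0, gd - 1)     # intensity cell index
    A = grid[ys[:, None], xs[None, :], zs]                # (h, w, 3, 4) affines
    rgb1 = np.concatenate([image, np.ones((h, w, 1))], axis=-1)
    return np.einsum('hwij,hwj->hwi', A, rgb1)            # per-pixel affine
```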

    Depth of field guided visualisation on light field displays

    Light field displays are capable of realistic visualization of arbitrary 3D content. However, due to the finite number of light rays reproduced by the display, its bandwidth is limited in terms of angular and spatial resolution. Consequently, 3D content that falls outside of that bandwidth causes aliasing during visualization, so a light field must be properly preprocessed before it is visualized. In this thesis, we propose three methods that filter the parts of the input light field that would cause aliasing. The first method is based on a 2D FIR circular filter applied over the 4D light field. The second method utilizes the structured nature of the epipolar plane images representing the light field. The third method adopts real-time multi-layer depth-of-field rendering using tiled splatting. We also establish a connection between the lens parameters in the proposed depth-of-field rendering and the display's bandwidth in order to determine the optimal amount of blur. Since the light field is prepared for a light field display, a stage is added to the proposed real-time rendering pipeline that renders adjacent views simultaneously. The rendering performance of the proposed methods is demonstrated on Holografika's Holovizio 722RC projection-based light field display.
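As a rough, first-order illustration of that connection between display bandwidth and blur amount: for a projection-based light field display with angular step Δθ, the spatial detail a point can retain shrinks roughly linearly with its distance from the screen plane, which suggests a minimum blur like the sketch below (a Python/NumPy sketch under that assumed linear model, not the derivation used in the thesis).

```python
import numpy as np

def display_limited_blur_px(z_from_screen, angular_step_rad, pixel_pitch):
    """Approximate diameter (in pixels) of the smallest feature a light field
    display can show at distance |z| from the screen plane; content sharper
    than this aliases, so DoF rendering should blur at least this much."""
    return np.abs(z_from_screen) * np.tan(angular_step_rad) / pixel_pitch

# Example: 1-degree angular step, 1 mm pixel pitch, point 100 mm behind the
# screen plane -> about 1.7 px of unavoidable blur.
print(display_limited_blur_px(0.1, np.radians(1.0), 0.001))
```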
    • โ€ฆ
    corecore