16,590 research outputs found

    Depth of Field Rendering Using Progressive Lens Sampling in Direct Volume Rendering

    Get PDF
    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, February 2021. Advisor: 신영길. Direct volume rendering is a widely used technique for extracting information from 3D scalar fields acquired by measurement or numerical simulation. To visualize the structure inside the volume, a voxel's scalar value is often represented by a translucent color. This translucency of direct volume rendering makes it difficult to perceive the depth ordering of nested structures. Most volume rendering techniques for improving depth perception are based on illustrative rendering, while physically based rendering techniques such as depth of field effects are difficult to apply because of their long computation times. With the development of immersive systems such as virtual and augmented reality and the growing interest in perceptually motivated medical visualization, it is necessary to implement depth of field in direct volume rendering. This study proposes a novel method for applying depth of field effects to volume ray casting to improve depth perception. By performing ray casting with multiple rays per pixel, objects at the in-focus distance are rendered sharply and objects at out-of-focus distances are blurred. To achieve these effects, a thin lens camera model is used to simulate rays passing through different parts of the lens. An effective lens sampling method then generates an aliasing-free image with the minimum number of lens samples, which directly determine performance. The proposed method is implemented on the standard GPU-based volume ray casting pipeline without any preprocessing. Therefore, all acceleration techniques of volume ray casting can be applied without restriction. We also propose multi-pass rendering using progressive lens sampling as an acceleration technique. More lens samples are progressively used for ray generation over multiple render passes. Each pixel has a different final render pass depending on the predicted maximum blur size derived from the circle of confusion.
    This technique makes it possible to apply a different number of lens samples to each pixel, depending on how strongly the depth of field effect blurs it with distance. The acceleration method reduces unnecessary lens sampling and increases the GPU cache hit rate, allowing depth of field effects to be generated at interactive frame rates in direct volume rendering. In experiments with various datasets, the proposed method generated realistic depth of field effects in real time. These results demonstrate that our method produces depth of field effects of similar quality to offline image synthesis and is up to 12 times faster than the existing depth of field method in direct volume rendering.
    Table of contents:
    CHAPTER 1 INTRODUCTION
    1.1 Motivation
    1.2 Dissertation Goals
    1.3 Main Contributions
    1.4 Organization of Dissertation
    CHAPTER 2 RELATED WORK
    2.1 Depth of Field on Surface Rendering
    2.1.1 Object-Space Approaches
    2.1.2 Image-Space Approaches
    2.2 Depth of Field on Volume Rendering
    2.2.1 Blur Filtering on Slice-Based Volume Rendering
    2.2.2 Stochastic Sampling on Volume Ray Casting
    CHAPTER 3 DEPTH OF FIELD VOLUME RAY CASTING
    3.1 Fundamentals
    3.1.1 Depth of Field
    3.1.2 Camera Models
    3.1.3 Direct Volume Rendering
    3.2 Geometry Setup
    3.3 Lens Sampling Strategy
    3.3.1 Sampling Techniques
    3.3.2 Disk Mapping
    3.4 CoC-Based Multi-Pass Rendering
    3.4.1 Progressive Lens Sample Sequence
    3.4.2 Final Render Pass Determination
    CHAPTER 4 GPU IMPLEMENTATION
    4.1 Overview
    4.2 Rendering Pipeline
    4.3 Focal Plane Transformation
    4.4 Lens Sample Transformation
    CHAPTER 5 EXPERIMENTAL RESULTS
    5.1 Number of Lens Samples
    5.2 Number of Render Passes
    5.3 Render Pass Parameter
    5.4 Comparison with Previous Methods
    CHAPTER 6 CONCLUSION
    Bibliography
    Appendix
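    The thin-lens geometry and CoC-based pass selection the abstract describes can be sketched compactly. This is an illustrative reconstruction, not the dissertation's code: the formula is the standard thin-lens circle of confusion, and `samples_per_pass` / `max_passes` are assumed parameters, not values from the thesis.

    ```python
    import math

    def coc_diameter(depth, focus_dist, focal_len, aperture):
        """Circle-of-confusion diameter under the thin-lens model for a point
        at `depth`. All distances share one unit (e.g. mm); `aperture` is the
        lens diameter, `focal_len` the focal length, `focus_dist` the in-focus depth.
        """
        # Standard thin-lens CoC: C = A * f * |d - s| / (d * (s - f))
        return (aperture * focal_len * abs(depth - focus_dist)
                / (depth * (focus_dist - focal_len)))

    def final_render_pass(coc_px, samples_per_pass=4, max_passes=8):
        """Map a pixel's predicted blur size (CoC in pixels) to its final render
        pass: sharp pixels stop after one pass, heavily blurred pixels keep
        accumulating lens samples over more passes."""
        needed = max(1, math.ceil(math.pi * (coc_px / 2.0) ** 2 / samples_per_pass))
        return min(needed, max_passes)
    ```

    A point exactly at the focus distance gets a CoC of zero and therefore stops after the first pass, which is the behaviour the progressive scheme exploits.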

    Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    Get PDF
    The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de-facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to direct volume rendering (DVR). With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open source license.
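    The stochastic sampling that makes such effects straightforward in MCRT can be illustrated with delta (Woodcock) tracking, a standard free-path sampler for heterogeneous volumes. This is a minimal sketch of the generic technique, not code from the Exposure Render framework.

    ```python
    import math
    import random

    def delta_track(sigma_t, sigma_max, t_max, rng=random.random):
        """Sample a free-flight distance along a ray through a heterogeneous
        medium by delta (Woodcock) tracking. `sigma_t(t)` returns the
        extinction coefficient at ray parameter t, bounded above by
        `sigma_max`. Returns a scatter distance, or None if the ray leaves
        the medium before a real collision occurs."""
        t = 0.0
        while True:
            # Tentative step against the homogenized (majorant) medium.
            t -= math.log(1.0 - rng()) / sigma_max
            if t >= t_max:
                return None                       # escaped the volume
            if rng() < sigma_t(t) / sigma_max:    # real (not null) collision
                return t
    ```

    In a homogeneous medium the sampled distances follow an exponential distribution with mean 1/sigma_t, which is why the estimator remains unbiased despite the rejection step.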

    Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    Get PDF
    © 2016 IEEE. Latency, the delay between a user's action and the response to that action, is known to be detrimental to virtual reality. Latency is typically considered a discrete value characterising a delay, constant in time and space, but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so depends on the rendering approach used. In this study, we present an ultra-low-latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of 1 ms from tracker to pixel. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of 1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system: one with a zero-latency tracker, renderer, and display running at 1 kHz. Finally, we examine the results of these quality measures and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
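    The abstract's point that latency varies across the display during scan-out reduces to simple arithmetic. The model below is a hypothetical illustration; the frame time, row count, and base latencies are assumed values, not the paper's measurements.

    ```python
    def frame_based_latency(row, rows, frame_time_ms, base_latency_ms):
        """Frame-based pipeline: the whole frame is rendered from a single
        tracker sample and then scanned out top to bottom, so pixel latency
        grows linearly with the display row."""
        return base_latency_ms + (row / rows) * frame_time_ms

    def frameless_latency(base_latency_ms):
        """Frameless (beam-racing) pipeline: each pixel is generated just
        before the scan beam reaches it, so latency is roughly constant
        across the display."""
        return base_latency_ms
    ```

    With an assumed 90 Hz panel (about 11.1 ms scan-out), the bottom rows of a frame-based display lag the top rows by a full frame time, whereas the frameless renderer's per-pixel latency stays at its tracker-to-pixel base.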

    Computational Light Transport for Forward and Inverse Problems.

    Get PDF
    Computational light transport comprises all the techniques used to compute the flow of light in a virtual scene. Its use is ubiquitous across applications, from entertainment and advertising to product design, engineering, and architecture, including the generation of validated data for computational imaging techniques. However, simulating light transport accurately is an expensive process. As a consequence, a balance must be struck between the fidelity of the physical simulation and its computational cost. For example, it is common to assume geometric optics or an infinite speed of light, or to simplify reflectance models by ignoring certain phenomena. In this thesis we introduce several contributions to light transport simulation, aimed both at improving its computational efficiency and at expanding the range of its practical applications. We pay special attention to removing the assumption of an infinite propagation speed, generalizing light transport to its transient state. Regarding efficiency, we present a method for computing the light arriving directly from luminaires in a Monte Carlo image synthesis system, significantly reducing the variance of the resulting images for the same execution time. We also introduce a density-estimation technique in the transient state that allows temporal samples in a participating medium to be reused more effectively. In the application domain, we further introduce two new uses of light transport: a model for simulating a special type of goniochromatic pigments that exhibit pearlescent appearance, with the goal of providing an intuitive form of editing for manufacturing, and a non-line-of-sight imaging technique using time-of-flight information about the light, built on a wave-based model of light propagation.

    Light field image processing : overview and research issues

    Get PDF
    Light field (LF) imaging first appeared in the computer graphics community with the goal of photorealistic 3D rendering [1]. Motivated by a variety of potential applications in various domains (e.g., computational photography, augmented reality, light field microscopy, medical imaging, 3D robotics, particle image velocimetry), imaging from real light fields has recently gained in popularity, both at the research and industrial level. Peer-reviewed.

    Photorealistic physically based render engines: a comparative study

    Full text link
    Pérez Roig, F. (2012). Photorealistic physically based render engines: a comparative study. http://hdl.handle.net/10251/14797