
    Context Preserving Focal Probes for Exploration of Volumetric Medical Datasets

    During real-time exploration of medical data using volume rendering, it is often difficult to enhance a particular region of interest without losing context information. In this paper, we present a new illustrative technique for focusing on a user-driven region of interest while preserving context information. Our focal probes define a region of interest using a distance function that controls the opacity of the voxels within the probe, exploit silhouette enhancement, and use non-photorealistic shading techniques to improve shape depiction.
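    The distance-based opacity modulation can be sketched in a few lines. This is a minimal Python illustration under one plausible reading of the abstract, assuming a spherical probe and a smoothstep falloff; the function name, the falloff shape, and the behavior outside the probe are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def probe_alpha(sample_pos, base_alpha, center, radius):
    """Scale a voxel sample's opacity by a radial distance function.

    One plausible scheme: inside the probe, opacity ramps from opaque at
    the probe boundary down toward transparent near the center, revealing
    inner structures; outside the probe, the base opacity is kept so the
    surrounding context stays visible.
    """
    d = np.linalg.norm(np.asarray(sample_pos) - center) / radius
    if d >= 1.0:
        return base_alpha                 # context: render normally
    t = d * d * (3.0 - 2.0 * d)           # smoothstep(0, 1, d)
    return base_alpha * t                 # transparent at center, opaque at rim
```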

    Enhanced perception in volume visualization

    Due to the nature of scientific data sets, generating effective visualizations can be a difficult task, yet it is crucial for correctly conveying the relevant information in the data. When working with complex volume models, such as anatomical ones, it is important to provide accurate representations, since a misinterpretation can lead to serious mistakes when diagnosing a disease or planning surgery. In these cases, enhancing the perception of the features of interest usually helps in properly understanding the data. Over the years, researchers have focused on different methods to improve the visualization of volume data sets. For instance, the definition of good transfer functions is a key issue in Volume Visualization, since transfer functions determine how materials are classified. Other approaches are based on simulating realistic illumination models to enhance spatial perception, or on using illustrative effects to provide the level of abstraction needed to correctly interpret the data.

    This thesis contributes new approaches to enhance visual and spatial perception in Volume Visualization. Thanks to the computing capabilities of modern graphics hardware, the proposed algorithms are capable of modifying the illumination model and simulating illustrative motifs in real time. To enhance local details, which help in perceiving the shape and surfaces of the volume, our first contribution is an algorithm that employs a common sharpening operator (unsharp masking) to modify the applied lighting. As a result, the overall contrast of the visualization is enhanced by brightening salient features and darkening the deeper regions of the volume model.

    The enhancement of depth perception in Direct Volume Rendering is also covered in the thesis. To this end, we propose two algorithms to simulate ambient occlusion: a screen-space technique that uses depth information to estimate the amount of light occluded, and a view-independent method that uses the density values of the data set to estimate the occlusion. Additionally, depth perception is enhanced by adding halos around the structures of interest. Maximum Intensity Projection images provide a good understanding of the high-intensity features of the data, but lack any contextual information. To enhance depth perception in this case, we present a novel technique based on changing how intensity is accumulated. Furthermore, the perception of the spatial arrangement of the displayed structures is enhanced by adding certain colour cues.

    The last contribution is a new manipulation tool designed for adding contextual information when cutting the volume. Based on traditional illustrative effects, this method allows the user to directly extrude structures from the cross-section of the cut. As a result, the clipped structures are displayed at different heights, preserving the information needed to correctly perceive them.
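    As a sketch of the first contribution's idea (unsharp masking applied to the per-pixel lighting term rather than to the final image), the following assumes a 2D lighting buffer; the function name and the sigma and gain parameters are hypothetical, not values from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_lighting(lighting, sigma=4.0, gain=0.6):
    """Enhance local contrast of a per-pixel lighting buffer via unsharp masking.

    lighting: 2D array of scalar lighting/luminance values in [0, 1].
    Adding back the high-frequency residual (original minus blurred)
    brightens salient surface details and darkens recessed regions.
    """
    blurred = gaussian_filter(lighting, sigma=sigma)
    enhanced = lighting + gain * (lighting - blurred)
    return np.clip(enhanced, 0.0, 1.0)
```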

    The virtual magic lantern: an interaction metaphor for enhanced medical data inspection

    In this paper we present the Virtual Magic Lantern (VML), an interaction tool tailored to facilitate volumetric data inspection. It behaves like a lantern whose virtual illumination cone defines the focal region, which is visualized using a secondary transfer function or a different rendering style. This may be used for simple visual inspection, surgery planning, or injury diagnosis. The VML is a particularly friendly and intuitive interaction tool suitable for an immersive Virtual Reality setup with a large screen, where the user moves a Wanda device like a lantern pointing at the model. We show that this inspection metaphor can be efficiently and easily adapted to a GPU ray-casting volume visualization algorithm. We also present the Virtual Magic Window (VMW) metaphor as an efficient collateral implementation of the VML; it can be seen as a restricted case in which the lantern illuminates along the viewing direction, through a virtual window created as the intersection of the virtual lantern (guided by the Wanda device) and the bounding box of the volume.
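    A minimal sketch of how the per-sample focal test might slot into a ray caster, assuming the lantern is modeled as a simple infinite cone; all names here (in_lantern_cone, tf_main, tf_focus) are hypothetical, and the paper's GPU implementation will differ.

```python
import numpy as np

def in_lantern_cone(sample_pos, apex, axis, half_angle):
    """True if sample_pos lies inside the lantern's illumination cone.

    apex: cone apex (lantern position); axis: unit view direction of the
    lantern; half_angle: cone half-angle in radians.
    """
    v = np.asarray(sample_pos) - apex
    dist = np.linalg.norm(v)
    if dist == 0.0:
        return True
    return np.dot(v / dist, axis) >= np.cos(half_angle)

def classify(sample_value, sample_pos, apex, axis, half_angle, tf_main, tf_focus):
    """Per-sample classification: the secondary transfer function applies
    inside the cone, the main one everywhere else."""
    inside = in_lantern_cone(sample_pos, apex, axis, half_angle)
    return (tf_focus if inside else tf_main)(sample_value)
```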

    Doctor of Philosophy

    Confocal microscopy has become a popular imaging technique in biology research in recent years. It is often used to study the three-dimensional (3D) structures of biological samples. Confocal data are commonly multichannel, with each channel resulting from a different fluorescent staining. The technique also captures finely detailed 3D structures, such as neuron fibers. Despite the plethora of volume rendering techniques that have been available for many years, there is a demand from biologists for a flexible tool that allows interactive visualization and analysis of multichannel confocal data. Together with biologists, we have designed and developed FluoRender. It incorporates volume rendering techniques such as a two-dimensional (2D) transfer function and multichannel intermixing. Rendering results can be enhanced through tone mapping and overlays. To facilitate analysis of confocal data, FluoRender provides interactive operations for extracting complex structures. Furthermore, we developed the Synthetic Brainbow technique, which takes advantage of the asynchronous behavior of Graphics Processing Unit (GPU) framebuffer loops to generate random colorizations for different structures in single-channel confocal data. The results of our Synthetic Brainbows, when applied to a sequence of developing cells, can then be used for tracking the movements of these cells. Finally, we present an application of FluoRender in the workflow of constructing anatomical atlases.
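    As a rough illustration of multichannel intermixing, the sketch below classifies each channel with its own transfer function at a ray sample and combines the results. The weighted-average color and maximum-opacity operators are assumptions chosen for illustration, not FluoRender's actual intermixing modes.

```python
import numpy as np

def intermix(sample_values, transfer_funcs, weights=None):
    """Combine per-channel classified colors into one RGBA sample.

    sample_values: one scalar per channel at the current ray position.
    transfer_funcs: per-channel callables mapping a scalar to (r, g, b, a).
    Colors are blended by weight; opacity takes the per-channel maximum,
    which keeps a faint channel from being washed out by a bright one.
    """
    rgbas = np.array([tf(v) for tf, v in zip(transfer_funcs, sample_values)])
    if weights is None:
        weights = np.full(len(rgbas), 1.0 / len(rgbas))
    rgb = np.average(rgbas[:, :3], axis=0, weights=weights)
    alpha = rgbas[:, 3].max()
    return np.append(rgb, alpha)
```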

    Depth of Field Rendering Using Progressive Lens Sampling in Direct Volume Rendering

    Doctoral dissertation (Ph.D.) -- Department of Electrical and Computer Engineering, College of Engineering, Seoul National University Graduate School, February 2021. Advisor: 신영길.

    Direct volume rendering is a widely used technique for extracting information from 3D scalar fields acquired by measurement or numerical simulation. To visualize the structure inside the volume, a voxel's scalar value is often represented by a translucent color. This translucency of direct volume rendering makes it difficult to perceive depth between nested structures. Existing volume rendering techniques for improving depth perception are mainly based on illustrative rendering, while physically based techniques such as depth of field effects are difficult to apply due to their long computation times. With the development of immersive systems such as virtual and augmented reality, and growing interest in perceptually motivated medical visualization, there is a need to implement depth of field in direct volume rendering.

    This study proposes a novel method for applying depth of field effects to volume ray casting to improve depth perception. By casting multiple rays per pixel, objects at the in-focus distance are rendered sharply while objects at out-of-focus distances are blurred. To achieve these effects, a thin-lens camera model is used to simulate rays passing through different parts of the lens, and an effective lens sampling method generates an aliasing-free image with the minimum number of lens samples, which directly determine performance. The proposed method is implemented on top of the GPU-based volume ray-casting pipeline without preprocessing, so all acceleration techniques of volume ray casting can be applied without restriction.

    We also propose multi-pass rendering with progressive lens sampling as an acceleration technique. Progressively more lens samples are used for ray generation over multiple render passes, and each pixel is assigned a different final render pass depending on its predicted maximum blur size, estimated from the circle of confusion. This makes it possible to apply a different number of lens samples to each pixel according to how strongly the depth of field effect blurs it with distance. This acceleration reduces unnecessary lens sampling and increases the GPU cache hit rate, allowing depth of field effects to be generated at interactive frame rates in direct volume rendering. In experiments on various data sets, the proposed method generated realistic depth of field effects in real time, producing quality similar to offline image synthesis while running up to 12 times faster than the existing depth of field method in direct volume rendering.
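    A minimal sketch of thin-lens ray generation as described above, written in camera space with the lens centered at the origin in the z = 0 plane and the view along +z. Concentric disk mapping (Shirley-Chiu) is a standard low-distortion way to place lens samples; all function and parameter names here are hypothetical, not the dissertation's implementation.

```python
import numpy as np

def concentric_disk(u, v):
    """Shirley-Chiu concentric mapping of [0,1)^2 onto the unit disk."""
    sx, sy = 2.0 * u - 1.0, 2.0 * v - 1.0
    if sx == 0.0 and sy == 0.0:
        return 0.0, 0.0
    if abs(sx) > abs(sy):
        r, theta = sx, (np.pi / 4.0) * (sy / sx)
    else:
        r, theta = sy, np.pi / 2.0 - (np.pi / 4.0) * (sx / sy)
    return r * np.cos(theta), r * np.sin(theta)

def thin_lens_ray(pinhole_dir, focus_dist, lens_radius, u, v):
    """Generate one depth-of-field ray (camera space, lens at the origin).

    The origin is offset across the lens aperture and the direction is
    re-aimed at the point where the central ray meets the focal plane, so
    geometry at focus_dist stays sharp while the rest blurs.
    """
    dx, dy = concentric_disk(u, v)
    origin = np.array([lens_radius * dx, lens_radius * dy, 0.0])
    d = np.asarray(pinhole_dir, dtype=float)
    # Intersect the central ray with the focal plane z = focus_dist
    # (assumes d[2] > 0, i.e. the camera looks down +z).
    focus_point = (focus_dist / d[2]) * d
    direction = focus_point - origin
    return origin, direction / np.linalg.norm(direction)
```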
    Table of contents:
    Chapter 1: Introduction -- Motivation; Dissertation Goals; Main Contributions; Organization of Dissertation
    Chapter 2: Related Work -- Depth of Field on Surface Rendering (Object-Space Approaches; Image-Space Approaches); Depth of Field on Volume Rendering (Blur Filtering on Slice-Based Volume Rendering; Stochastic Sampling on Volume Ray Casting)
    Chapter 3: Depth of Field Volume Ray Casting -- Fundamentals (Depth of Field; Camera Models; Direct Volume Rendering); Geometry Setup; Lens Sampling Strategy (Sampling Techniques; Disk Mapping); CoC-Based Multi-Pass Rendering (Progressive Lens Sample Sequence; Final Render Pass Determination)
    Chapter 4: GPU Implementation -- Overview; Rendering Pipeline; Focal Plane Transformation; Lens Sample Transformation
    Chapter 5: Experimental Results -- Number of Lens Samples; Number of Render Passes; Render Pass Parameter; Comparison with Previous Methods
    Chapter 6: Conclusion; Bibliography; Appendix
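    To make the CoC-based pass assignment described in the abstract concrete, here is one hedged way it could work: predict each pixel's maximum blur radius from the standard thin-lens circle-of-confusion formula and translate that into the last progressive pass the pixel participates in. The sample-budget heuristic and every name below are assumptions for illustration, not the dissertation's exact scheme.

```python
import numpy as np

def coc_radius_px(depth, focus_dist, focal_len, aperture, px_per_world):
    """Thin-lens circle-of-confusion radius, in pixels, for a point at 'depth'."""
    c = aperture * focal_len * abs(depth - focus_dist) / (depth * (focus_dist - focal_len))
    return 0.5 * c * px_per_world

def final_pass(depth, max_passes, samples_per_pass, focus_dist, focal_len,
               aperture, px_per_world, samples_per_px=4.0):
    """Map a pixel's predicted maximum blur to the last render pass it needs.

    Pixels near the focal plane converge after one pass; strongly blurred
    pixels keep accumulating lens samples over more progressive passes.
    """
    r = coc_radius_px(depth, focus_dist, focal_len, aperture, px_per_world)
    needed = samples_per_px * np.pi * r * r   # heuristic budget ~ CoC area
    return int(np.clip(np.ceil(needed / samples_per_pass), 1, max_passes))
```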