
    FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    Journal Article
    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping, and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data.
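As a minimal sketch of the image-space idea described in the abstract (operating on the already-rendered 2D images rather than the 3D volume), the snippet below applies a simple gamma/gain tone curve to two hypothetical per-channel renderings and composites them with a per-pixel maximum. The function names and parameters are illustrative assumptions, not FluoRender's actual API.

```python
import numpy as np

def tone_map(image, gamma=0.8, gain=1.2):
    """Illustrative 2D tone mapping on a rendered image with values in [0, 1]."""
    return np.clip(gain * np.power(image, gamma), 0.0, 1.0)

def composite_max(channels):
    """Illustrative 2D compositing of per-channel renderings: per-pixel maximum."""
    return np.max(np.stack(channels, axis=0), axis=0)

# Two hypothetical single-channel renderings (e.g., two fluorescent stains).
rng = np.random.default_rng(0)
red, green = rng.random((4, 4)), rng.random((4, 4))
combined = composite_max([tone_map(red), tone_map(green)])
```

Because both steps run on small 2D buffers rather than the full volume, they can be recomputed interactively after every render pass, which is the efficiency argument the abstract makes.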

    Doctor of Philosophy

    Dissertation
    Confocal microscopy has become a popular imaging technique in biology research in recent years. It is often used to study three-dimensional (3D) structures of biological samples. Confocal data are commonly multichannel, with each channel resulting from a different fluorescent staining. This technique also results in finely detailed structures in 3D, such as neuron fibers. Despite the plethora of volume rendering techniques that have been available for many years, there is a demand from biologists for a flexible tool that allows interactive visualization and analysis of multichannel confocal data. Together with biologists, we have designed and developed FluoRender. It incorporates volume rendering techniques such as a two-dimensional (2D) transfer function and multichannel intermixing. Rendering results can be enhanced through tone mapping and overlays. To facilitate analyses of confocal data, FluoRender provides interactive operations for extracting complex structures. Furthermore, we developed the Synthetic Brainbow technique, which takes advantage of the asynchronous behavior in Graphics Processing Unit (GPU) framebuffer loops and generates random colorizations for different structures in single-channel confocal data. The results from our Synthetic Brainbows, when applied to a sequence of developing cells, can then be used for tracking the movements of these cells. Finally, we present an application of FluoRender in the workflow of constructing anatomical atlases.
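The two-dimensional transfer function mentioned above assigns opacity from both the scalar value and a second attribute (commonly gradient magnitude), rather than from value alone. The Gaussian widget below is a hedged, minimal sketch of that idea; the center/width parameters are hypothetical and FluoRender's actual transfer-function machinery is more elaborate.

```python
import math

def transfer_2d(intensity, grad_mag, i_center=0.5, g_center=0.5, width=0.2):
    """Hypothetical 2D transfer function: opacity peaks where both the scalar
    value and the gradient magnitude fall near a user-chosen 2D region."""
    di = (intensity - i_center) / width
    dg = (grad_mag - g_center) / width
    return math.exp(-(di * di + dg * dg))
```

A voxel whose value and gradient magnitude sit at the region's center receives full opacity; voxels far away in either dimension fade out, which lets boundary structures (high gradient) be isolated from homogeneous interiors at the same intensity.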

    Enhanced perception in volume visualization

    Due to the nature of scientific data sets, the generation of convenient visualizations may be a difficult task, but it is crucial to correctly convey the relevant information of the data. When working with complex volume models, such as anatomical ones, it is important to provide accurate representations, since a misinterpretation can lead to serious mistakes when diagnosing a disease or planning surgery. In these cases, enhancing the perception of the features of interest usually helps to properly understand the data. Over the years, researchers have focused on different methods to improve the visualization of volume data sets. For instance, the definition of good transfer functions is a key issue in Volume Visualization, since transfer functions determine how materials are classified. Other approaches are based on simulating realistic illumination models to enhance spatial perception, or on using illustrative effects to provide the level of abstraction needed to correctly interpret the data. This thesis contributes new approaches to enhance visual and spatial perception in Volume Visualization. Thanks to the computing capabilities of modern graphics hardware, the proposed algorithms are capable of modifying the illumination model and simulating illustrative motifs in real time. In order to enhance local details, which help to better perceive the shape and surfaces of the volume, our first contribution is an algorithm that employs a common sharpening operator to modify the lighting applied. As a result, the overall contrast of the visualization is enhanced by brightening the salient features and darkening the deeper regions of the volume model. The enhancement of depth perception in Direct Volume Rendering is also covered in the thesis.
To do this, we propose two algorithms to simulate ambient occlusion: a screen-space technique based on using depth information to estimate the amount of light occluded, and a view-independent method that uses the density values of the data set to estimate the occlusion. Additionally, depth perception is also enhanced by adding halos around the structures of interest. Maximum Intensity Projection images provide a good understanding of the high intensity features of the data, but lack any contextual information. In order to enhance the depth perception in such a case, we present a novel technique based on changing how intensity is accumulated. Furthermore, the perception of the spatial arrangement of the displayed structures is also enhanced by adding certain colour cues. The last contribution is a new manipulation tool designed for adding contextual information when cutting the volume. Based on traditional illustrative effects, this method allows the user to directly extrude structures from the cross-section of the cut. As a result, the clipped structures are displayed at different heights, preserving the information needed to correctly perceive them.
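The sharpening-operator contribution (brightening salient features and darkening deeper regions by modifying the lighting) can be sketched roughly as unsharp masking applied to a per-pixel lighting buffer. This is a minimal image-space approximation under assumed inputs, not the thesis's actual algorithm; the naive box blur stands in for whatever low-pass filter the real method uses.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k-by-k box blur with edge padding (illustrative, not optimized)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_lighting(lighting, amount=0.6):
    """Unsharp-mask the lighting: L' = L + amount * (L - blur(L)).
    Salient (high-frequency) features brighten; smoother regions darken."""
    return np.clip(lighting + amount * (lighting - box_blur(lighting)), 0.0, 1.0)
```

On a flat lighting buffer the operator is the identity; an isolated bright detail gains intensity relative to its surroundings, which is the local-contrast boost the abstract describes.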
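For the MIP extension (changing how intensity is accumulated so that depth ordering becomes visible), one way to make the idea concrete is sketched below: each successive local maximum along the ray contributes, attenuated with depth, so nearer structures dominate. The decay scheme here is a hypothetical stand-in, not the thesis's exact accumulation rule.

```python
def mip(ray_samples):
    """Classic Maximum Intensity Projection: keep only the global maximum."""
    return max(ray_samples)

def depth_weighted_mip(ray_samples, decay=0.9):
    """Hypothetical depth-modulated accumulation: every new local maximum
    along the ray adds its increase over the running best, attenuated by a
    per-step decay, so nearer structures contribute more than far ones."""
    best, out, weight = 0.0, 0.0, 1.0
    for s in ray_samples:
        if s > best:
            out += weight * (s - best)  # accumulate only the increase
            best = s
        weight *= decay
    return out
```

Unlike classic MIP, which returns the same value regardless of where along the ray the maximum lies, this accumulation yields a larger result when the bright feature is near the viewer, giving the depth cue the abstract refers to.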