VolumeEVM: A new surface/volume integrated model
Volume visualization is a very active research area in the field of scientific visualization. The Extreme Vertices Model (EVM) has proven to be a complete intermediate model for visualizing and manipulating volume data using a surface-rendering approach. However, integrating the advantages of surface rendering with the superior visual exploration offered by volume rendering would produce a truly complete visualization and editing system for volume data. We therefore define an enhanced EVM-based model that incorporates the volumetric information required to achieve a nearly direct volume visualization technique. Thus, VolumeEVM was designed to retain the same EVM-based data structure plus a sorted list of density values corresponding to the interior voxels of the EVM-based volumes of interest (VoIs); a function relating the interior voxels of the EVM to this set of densities had to be defined. This report presents the definition of this new surface/volume integrated model based on the well-known EVM encoding and proposes implementations of the main software-based direct volume rendering techniques through the proposed model.
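The core of any software-based direct volume rendering technique the abstract alludes to is compositing density samples along viewing rays. The following is a minimal sketch of front-to-back alpha compositing; the plain array of per-ray density samples and the linear transfer function are illustrative stand-ins, not the EVM-encoded structures of the actual model.

```python
import numpy as np

def composite_ray(densities, transfer, step=1.0):
    """Front-to-back alpha compositing of density samples along one ray.

    densities: 1-D array of density samples along the ray (e.g. looked up
    from the sorted interior-voxel density list of the model).
    transfer: maps a density to a (color, opacity) pair in [0, 1].
    """
    color_acc, alpha_acc = 0.0, 0.0
    for d in densities:
        c, a = transfer(d)
        a = 1.0 - (1.0 - a) ** step           # opacity correction for step size
        color_acc += (1.0 - alpha_acc) * c * a
        alpha_acc += (1.0 - alpha_acc) * a
        if alpha_acc >= 0.99:                 # early ray termination
            break
    return color_acc, alpha_acc

# Toy transfer function: density used as both color and opacity.
ray = np.array([0.0, 0.2, 0.5, 0.9])
color, alpha = composite_ray(ray, lambda d: (d, d))
```

In a full renderer this loop runs once per pixel, with samples interpolated from the volume; early ray termination is the standard optimization that makes front-to-back order preferable to back-to-front.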
Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences
Results: We present an application that enables the quantitative analysis of
multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence
microscopy images. The image sequences show stem cells together with blood
vessels, enabling quantification of the dynamic behaviors of stem cells in
relation to their vascular niche, with applications in developmental and cancer
biology. Our application automatically segments, tracks, and lineages the image
sequence data and then allows the user to view and edit the results of
automated algorithms in a stereoscopic 3-D window while simultaneously viewing
the stem cell lineage tree in a 2-D window. Using the GPU to store and render
the image sequence data enables a hybrid computational approach. An
inference-based approach utilizing user-provided edits to automatically correct
related mistakes executes interactively on the system CPU while the GPU handles
3-D visualization tasks. Conclusions: By exploiting commodity computer gaming
hardware, we have developed an application that can be run in the laboratory to
facilitate rapid iteration through biological experiments. There is a pressing
need for visualization and analysis tools for 5-D live cell image data. We
combine accurate unsupervised processes with an intuitive visualization of the
results. Our validation interface allows for each data set to be corrected to
100% accuracy, ensuring that downstream data analysis is accurate and
verifiable. Our tool is the first to combine all of these aspects, leveraging
the synergies obtained by utilizing validation information from stereo
visualization to improve the low-level image processing tasks. Comment: BioVis 2014 conference
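The lineage editing the abstract describes can be pictured with a minimal tree structure in which a user edit reassigns a cell's parentage; the `Cell` class and `reassign_parent` helper below are hypothetical illustrations, not the application's actual data model.

```python
class Cell:
    """One tracked cell in a lineage tree (minimal illustrative structure)."""
    def __init__(self, cell_id, parent=None):
        self.id = cell_id
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def reassign_parent(cell, new_parent):
    """Apply a user edit: move `cell` (with its whole subtree) under
    `new_parent`, so downstream analysis sees the corrected lineage."""
    if cell.parent is not None:
        cell.parent.children.remove(cell)
    cell.parent = new_parent
    new_parent.children.append(cell)

root = Cell(0)
a, b = Cell(1, root), Cell(2, root)
c = Cell(3, a)            # automated tracker placed c under a
reassign_parent(c, b)     # user edit corrects the parentage
```

The inference-based correction in the paper goes further, using such edits as evidence to fix related tracking mistakes automatically; this sketch shows only the direct edit.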
Visualization-Based Mapping of Language Function in the Brain
Cortical language maps, obtained through intraoperative electrical stimulation studies, provide a rich source of information for research on language organization. Previous studies have shown interesting correlations between the distribution of essential language sites and such behavioral indicators as verbal IQ and have provided suggestive evidence for regarding human language cortex as an organization of multiple distributed systems. Noninvasive studies using ECoG, PET, and functional MR lend support to this model; however, there are as yet no studies that integrate these two forms of information. In this paper we describe a method for mapping the stimulation data onto a 3-D MRI-based neuroanatomic model of the individual patient. The mapping is done by comparing an intraoperative photograph of the exposed cortical surface with a computer-based MR visualization of the surface, interactively indicating corresponding stimulation sites, and recording 3-D MR machine coordinates of the indicated sites. Repeatability studies were performed to validate the accuracy of the mapping technique. Six observers (a neurosurgeon, a radiologist, and four computer scientists) independently mapped 218 stimulation sites from 12 patients. The mean distance of a mapping from the mean location of each site was 2.07 mm, with a standard deviation of 1.5 mm, or within 5.07 mm with 95% confidence. Since the surgical sites are accurate within approximately 1 cm, these results show that the visualization-based approach is accurate within the limits of the stimulation maps. When incorporated within the kind of information system envisioned by the Human Brain Project, this anatomically based method will not only provide a key link between noninvasive and invasive approaches to understanding language organization, but will also provide the basis for studying the relationship between language function and anatomical variability.
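The repeatability statistic reported above (each observer's distance from the mean mapped location of a site) is straightforward to compute from the recorded 3-D MR coordinates. The coordinates below are invented for illustration, not the study's data.

```python
import numpy as np

def mapping_spread(mappings):
    """mappings: (n_observers, 3) array of 3-D MR coordinates (mm) that the
    observers recorded for one stimulation site. Returns each observer's
    Euclidean distance from the mean mapped location of that site."""
    mean_loc = mappings.mean(axis=0)
    return np.linalg.norm(mappings - mean_loc, axis=1)

# Hypothetical coordinates for one site, six observers.
site = np.array([[10.0, 20.0, 30.0],
                 [12.0, 20.0, 30.0],
                 [10.0, 22.0, 30.0],
                 [10.0, 20.0, 32.0],
                 [11.0, 21.0, 30.0],
                 [10.0, 20.0, 31.0]])
d = mapping_spread(site)
```

Pooling these per-mapping distances over all 218 sites gives the study's summary figures (mean 2.07 mm, standard deviation 1.5 mm).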
Layer-Wise Relevance Propagation for Explaining Deep Neural Network Decisions in MRI-Based Alzheimer's Disease Classification
Deep neural networks have led to state-of-the-art results in many medical imaging tasks including Alzheimer’s disease (AD) detection based on structural magnetic resonance imaging (MRI) data. However, the network decisions are often perceived as being highly non-transparent, making it difficult to apply these algorithms in clinical routine. In this study, we propose using layer-wise relevance propagation (LRP) to visualize convolutional neural network decisions for AD based on MRI data. Similarly to other visualization methods, LRP produces a heatmap in the input space indicating the importance/relevance of each voxel contributing to the final classification outcome. In contrast to susceptibility maps produced by guided backpropagation (“Which change in voxels would change the outcome most?”), the LRP method is able to directly highlight positive contributions to the network classification in the input space. In particular, we show that (1) the LRP method is very specific for individuals (“Why does this person have AD?”) with high inter-patient variability, (2) there is very little relevance for AD in healthy controls and (3) areas that exhibit a lot of relevance correlate well with what is known from the literature. To quantify the latter, we compute size-corrected metrics of the summed relevance per brain area, e.g., relevance density or relevance gain. Although these metrics produce very individual “fingerprints” of relevance patterns for AD patients, a lot of importance is put on areas in the temporal lobe including the hippocampus. After discussing several limitations such as sensitivity toward the underlying model and computation parameters, we conclude that LRP might have a high potential to assist clinicians in explaining neural network decisions for diagnosing AD (and potentially other diseases) based on structural MRI data.
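LRP redistributes the network's output score backwards, layer by layer, in proportion to each input's contribution to the pre-activations. Below is a minimal sketch of the standard LRP epsilon rule for a single fully connected layer; the toy weights and relevances are invented for illustration, and the paper's actual pipeline applies such rules through an entire convolutional network.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """LRP epsilon rule for one fully connected layer.

    a: (n_in,) input activations; W: (n_in, n_out) weights; b: (n_out,) bias;
    R_out: (n_out,) relevance arriving from the layer above.
    Returns the relevance redistributed onto the n_in inputs."""
    z = a @ W + b                                  # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # stabilize small denominators
    s = R_out / z                                  # per-output relevance ratio
    return a * (W @ s)                             # each input's weighted share

a = np.array([1.0, 2.0])
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.zeros(2)
R_in = lrp_epsilon(a, W, b, np.array([0.5, 0.5]))
```

A useful sanity check is (approximate) conservation: with zero bias and small epsilon, the relevance arriving at the layer's output equals the relevance redistributed to its inputs, which is what lets the final heatmap be read as a decomposition of the classification score.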
Detection of Polyps via Shape and Appearance Modeling
Presented at the MICCAI 2008 Workshop on Computational and Visualization Challenges in the New Era of Virtual Colonoscopy, September 6, 2008, New York, USA. This paper describes a CAD system for the detection of colorectal polyps in CT. It is based on stochastic shape and appearance modeling of structures of the colon and rectum; in contrast to the data-driven approaches more commonly found in the literature, it derives predictive stochastic models for the features used for classification. The method makes extensive use of medical domain knowledge in the design of the models and in the setting of their parameters. The proposed approach was successfully tested on challenging datasets acquired under a protocol with little colonic preparation; such a protocol reduces patient discomfort and potentially improves compliance.
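Classification against predictive stochastic models typically amounts to comparing the likelihood of a candidate's feature vector under the competing models. The sketch below shows that generic form with two Gaussian models (a common choice for such feature models); the feature vectors, means, and covariances are invented placeholders, not the paper's actual models or parameters.

```python
import numpy as np

def gaussian_log_lr(x, mu_p, cov_p, mu_n, cov_n):
    """Log likelihood ratio of feature vector x under two multivariate
    Gaussian models (here: polyp vs. non-polyp). Positive values favor
    the polyp model."""
    def loglik(x, mu, cov):
        d = x - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (d @ inv @ d + logdet + len(x) * np.log(2 * np.pi))
    return loglik(x, mu_p, cov_p) - loglik(x, mu_n, cov_n)

# Hypothetical 2-D shape/appearance features.
mu_polyp, mu_other = np.array([2.0, 1.0]), np.array([0.0, 0.0])
cov = np.eye(2)
score = gaussian_log_lr(np.array([1.8, 0.9]), mu_polyp, cov, mu_other, cov)
```

In a model-based CAD system the medical domain knowledge enters through the choice of features and the setting of these distribution parameters, rather than through training on labeled examples.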