19 research outputs found
Statistically quantitative volume visualization
Journal Article
Visualization users are increasingly in need of techniques for assessing quantitative uncertainty and error in the images produced. Statistical segmentation algorithms compute these quantitative results, yet volume rendering tools typically produce only qualitative imagery via transfer function-based classification. This paper presents a visualization technique that allows users to interactively explore the uncertainty, risk, and probabilistic decision of surface boundaries. Our approach makes it possible to directly visualize the combined "fuzzy" classification results from multiple segmentations by combining these data into a unified probabilistic data space. We represent this unified space, the combination of scalar volumes from numerous segmentations, using a novel graph-based dimensionality reduction scheme. The scheme both dramatically reduces the dataset size and is suitable for efficient, high-quality, quantitative visualization. Lastly, we show that the statistical risk arising from overlapping segmentations is a robust measure for visualizing features and assigning optical properties.
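The unified probabilistic space and the risk measure described above can be illustrated with a minimal sketch. This is not the paper's graph-based dimensionality reduction scheme; `combine_segmentations` and `decision_risk` are hypothetical names, and voxel-wise renormalization is an assumption about how the fuzzy results are unified.

```python
import numpy as np

def combine_segmentations(prob_volumes):
    """Stack per-material fuzzy probability volumes and renormalize so the
    probabilities sum to 1 at every voxel (assumed unification step)."""
    stacked = np.stack(prob_volumes).astype(float)
    total = stacked.sum(axis=0)
    return stacked / np.where(total > 0, total, 1.0)

def decision_risk(unified):
    """Risk of the maximum-probability decision at each voxel: 1 - max_k P(k).
    High values flag voxels where segmentations overlap or disagree."""
    return 1.0 - unified.max(axis=0)
```

In this sketch, high-risk voxels are exactly those near contested surface boundaries, which is where the paper assigns distinguishing optical properties.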
Forensic volumetric visualization of gunshot residue in its anatomic context in forensic post mortem computed tomography: Development of transfer function preset
While the visualization of gunshot injuries has so far focused on solid metal densities in routine forensic post mortem computed tomography (PMCT) as well as in micro-computed tomography, gunshot residue (GSR), as dispersed metal particles, typically succumbs to the partial volume effect. A case series of seven contact shots to the head was evaluated to determine a density range in which GSR is at least three times more likely to be encountered than bone, skin, muscle, or blood. For that, a Bayesian likelihood was determined from normal distributions of the CT densities of blood, bone, skin, muscle, and GSR, identified in correlation with visual evidence. The resulting transfer functions matched ring- and cone-shaped GSR deposits as published elsewhere, representing a plausible result. Only a fast and plausibly specific visualization is suitable for routine use in forensic PMCT, allowing the examination of GSR in real cases on a wider scale.
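The density-range derivation above amounts to a likelihood-ratio test over Hounsfield units. The sketch below is illustrative only: the Gaussian parameters in the usage example are invented placeholders, not the study's fitted values, and `gsr_density_range` is a hypothetical helper.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Normal density, evaluated elementwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def gsr_density_range(gsr, tissues, ratio=3.0, hu=np.arange(-200, 3001)):
    """Return the HU values where GSR is at least `ratio` times as likely
    as every competing tissue. `gsr` and each entry of `tissues` are
    (mean, std) tuples fitted to CT densities."""
    p_gsr = gauss_pdf(hu, *gsr)
    competitors = np.max([gauss_pdf(hu, *t) for t in tissues], axis=0)
    return hu[p_gsr >= ratio * competitors]
```

A transfer function preset would then assign opacity only inside the returned range, separating GSR from the overlapping bone distribution.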
Supervised manifold distance segmentation
In this paper, I propose a simple and robust method for image and volume data segmentation based on manifold distance metrics. In this approach, pixels are not treated merely as color values arranged in a grid. Instead, a transform function maps a traditional 2D image or 3D volume to a manifold in a higher-dimensional feature space. Multiple candidate feature spaces, such as position, gradient, and probabilistic measures, are studied and evaluated experimentally. Graph algorithms and probabilistic classification are employed. Both the time and space complexity of the algorithm are O(N). With an appropriate choice of feature vector, this method produces qualitative and quantitative results similar to those of algorithms such as Level Sets and Random Walks. An analysis of sensitivity to parameters is presented, and segmentation results are compared with ground-truth images to validate the robustness of the method.
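A minimal sketch of the idea, assuming a simple feature vector of position, intensity, and gradient magnitude, with nearest-seed classification in feature space; the paper's actual graph algorithm and probabilistic classifier are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def feature_map(img, pos_weight=1.0):
    """Lift a 2D image onto a manifold in feature space: each pixel
    becomes the vector (x, y, intensity, |gradient|)."""
    gy, gx = np.gradient(img.astype(float))
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return np.stack([pos_weight * xs, pos_weight * ys,
                     img.astype(float), np.hypot(gx, gy)], axis=-1)

def segment(img, seeds, labels):
    """Assign each pixel the label of the nearest seed in feature space.
    `seeds` are (row, col) coordinates with user-supplied `labels`."""
    feats = feature_map(img)
    seed_feats = np.array([feats[r, c] for r, c in seeds])
    d = np.linalg.norm(feats[..., None, :] - seed_feats, axis=-1)
    return np.asarray(labels)[d.argmin(axis=-1)]
```

Weighting the position terms trades off spatial compactness against appearance similarity, which is the kind of parameter sensitivity the paper analyzes.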
Leveraging Self-Supervised Vision Transformers for Neural Transfer Function Design
In volume rendering, transfer functions are used to classify structures of
interest, and to assign optical properties such as color and opacity. They are
commonly defined as 1D or 2D functions that map simple features to these
optical properties. As the process of designing a transfer function is
typically tedious and unintuitive, several approaches have been proposed for
their interactive specification. In this paper, we present a novel method to
define transfer functions for volume rendering by leveraging the feature
extraction capabilities of self-supervised pre-trained vision transformers. To
design a transfer function, users simply select the structures of interest in a
slice viewer, and our method automatically selects similar structures based on
the high-level features extracted by the neural network. Contrary to previous
learning-based transfer function approaches, our method does not require
training of models and allows for quick inference, enabling an interactive
exploration of the volume data. Our approach reduces the amount of necessary
annotations by interactively informing the user about the current
classification, so they can focus on annotating the structures of interest that
still require annotation. In practice, this allows users to design transfer
functions within seconds, instead of minutes. We compare our method to existing
learning-based approaches in terms of annotation and compute time, as well as
with respect to segmentation accuracy. Our accompanying video showcases the
interactivity and effectiveness of our method.
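The similarity-based selection can be sketched as follows, assuming per-voxel feature vectors have already been extracted by a self-supervised pre-trained network (e.g. a DINO-style ViT). The cosine-similarity ramp and the function name are illustrative, not the paper's exact formulation.

```python
import numpy as np

def similarity_opacity(features, annotated_idx, threshold=0.6):
    """Map per-voxel feature vectors (an (N, D) array) to opacity via
    cosine similarity with the mean feature of user-annotated voxels."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    ref = f[annotated_idx].mean(axis=0)
    ref /= np.linalg.norm(ref)
    sim = f @ ref
    # Ramp opacity linearly from the threshold up to perfect similarity.
    return np.clip((sim - threshold) / (1.0 - threshold), 0.0, 1.0)
```

Because no model is trained, recomputing this mapping after each new annotation is cheap, which is what enables the interactive loop the abstract describes.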
Segmentation-based regularization of dynamic SPECT reconstructions
Dynamic SPECT reconstruction using a single slow camera rotation is a highly underdetermined problem, which requires the use of regularization techniques to obtain useful results. We extend the dSPECT algorithm with segmentation-based spatial regularization and test this approach with a digital phantom simulating the kinetics of Tc-99m-DTPA in the renal system, including healthy and unhealthy behaviour. Summed time-activity curves (TACs) for each kidney and the bladder were calculated for the spatially regularized and non-regularized reconstructions and compared to the true values. The TACs for the two kidneys were noticeably improved in every case, while TACs for the smaller bladder region were unchanged. Furthermore, in two cases where the segmentation was intentionally done incorrectly, the spatially regularized reconstructions were still as good as the non-regularized ones. In general, the segmentation-based regularization improves TAC quality within ROIs, as well as image contrast.
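The summed-TAC evaluation described above reduces, for a given segmentation, to integrating activity over each ROI in every time frame. A minimal sketch (hypothetical helper name; a (T, ...spatial...) array layout is assumed):

```python
import numpy as np

def summed_tac(recon, roi_mask):
    """Summed time-activity curve: total activity inside the ROI per frame.

    recon: reconstructed activity, shape (T, ...spatial...).
    roi_mask: boolean array over the spatial dimensions."""
    return recon[:, roi_mask].sum(axis=1)
```

Comparing `summed_tac` of the regularized and non-regularized reconstructions against the phantom's true curves gives the per-ROI quality measure used in the evaluation.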
Gaussian Processes for Uncertainty Visualization
Data is virtually always uncertain in one way or another. Yet, uncertainty information is not routinely included in visualizations and, outside of simple 1D diagrams, there is no established way to do so. One big issue is finding a method that shows the uncertainty without completely cluttering the display. A second important question is how uncertainty and interpolation interact. Interpolated values are inherently uncertain because they are heuristically estimated values, not measurements. But how much more uncertain are they? How can this effect be modeled?
In this thesis, we introduce Gaussian processes, a statistical framework that allows for the smooth interpolation of data with heteroscedastic uncertainty through regression. Its theoretical background makes it a convincing method for analyzing uncertain data and creating a model of the underlying phenomenon and, most importantly, of the uncertainty at and between the data points. For this reason, it is already popular in the GIS community, where it is known as Kriging, and it has applications in machine learning as well.
In contrast to traditional interpolation methods, Gaussian processes do not merely create a surface that runs through the data points; they respect the uncertainty in them. This way, noise, errors, or outliers in the data do not disturb the model inappropriately. Most importantly, the model shows the variance of the interpolated values, which can be higher but also lower than that of the neighboring data points, providing much more insight into the quality of the data and how it influences our uncertainty. This enables us to use uncertainty information in algorithms that need to interpolate between data points, which includes almost all visualization algorithms.
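The core machinery the thesis builds on can be sketched in a few lines of NumPy: GP regression where each observation carries its own noise variance. The squared-exponential kernel and its hyperparameters are chosen for illustration only.

```python
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential covariance between two 1D input arrays."""
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, noise_var, x_test, length=1.0):
    """GP regression with per-point (heteroscedastic) noise variances.
    Returns the posterior mean and variance at x_test."""
    K = rbf(x_train, x_train, length) + np.diag(noise_var)
    Ks = rbf(x_test, x_train, length)
    Kss = rbf(x_test, x_test, length)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

Note how a noisy observation both pulls the posterior mean toward its neighbors and raises the posterior variance at that location, which is exactly the behavior contrasted with traditional interpolation above.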
Semi-automatic transfer function generation for non-domain specific direct volume rendering
The field of volume rendering is focused on the visualization of three-dimensional data sets. Although it is predominantly used in biomedical applications, volume rendering has proven useful in fields such as meteorology, physics, and fluid dynamics as a means of analyzing features of interest in three-dimensional scalar fields. The features visualized by volume rendering differ by application, though most applications focus on providing the user with a model for understanding the physical structure represented in the data, such as materials or the boundaries between materials. One form of volume rendering, direct volume rendering (DVR), has proven to be a particularly powerful tool for visualizing material and boundary structures in volume data through the use of transfer functions, which map each unit of the data to optical properties such as color and opacity. Specifying these transfer functions in a manner that yields an informative rendering is often done manually by trial and error, and it has become the topic of much research. While automated techniques for transfer function creation do exist, many rely on domain-specific knowledge and produce less informative renderings than manually constructed transfer functions. This thesis presents a novel extension to a successful semi-automated transfer function technique in an effort to minimize the time and effort required to create informative transfer functions. In particular, the proposed method provides a means for the semi-automatic generation of transfer functions that highlight and classify material boundaries in a non-domain-specific manner.
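Boundary-emphasis transfer functions of the kind extended here commonly weight opacity by gradient magnitude, since boundaries between materials are where the scalar field changes fastest. A simplified sketch in that spirit (not the thesis's actual method; the helper name is illustrative):

```python
import numpy as np

def boundary_opacity(volume):
    """Assign each voxel an opacity proportional to its normalized
    gradient magnitude, emphasizing material boundaries over homogeneous
    regions (a classic boundary-emphasis heuristic, simplified)."""
    grads = np.gradient(volume.astype(float))
    gmag = np.sqrt(sum(g ** 2 for g in grads))
    return gmag / gmag.max() if gmag.max() > 0 else gmag
```

Homogeneous material interiors get near-zero opacity while transition regions become visible, which is the behavior a boundary-classifying transfer function aims to automate.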