
    A clustering based transfer function for volume rendering using gray-gradient mode histogram

    Volume rendering is an emerging technique widely used in the medical field to visualize human organs from tomography image slices. In volume rendering, sliced medical images are transformed into attributes such as color and opacity through a transfer function. Thus, the design of the transfer function directly affects the result of medical image visualization: a well-designed transfer function can improve both image quality and visualization speed. In one of our previous papers, we designed a multi-dimensional transfer function based on region growth to determine the transparency of a voxel, where both a gray threshold and a gray-change threshold are used to calculate the transparency. In this paper, a new approach to the transfer function is proposed based on clustering analysis of the gray-gradient mode histogram, in which the volume data is represented as a two-dimensional histogram. Clustering analysis is carried out on the spatial information of the volume data in the histogram, and the transfer function is generated automatically from the clustering result. A human thorax dataset is used in our experiments to evaluate the performance of volume rendering with the proposed transfer function. Compared with the original transfer functions implemented in two popular volume rendering systems, the Visualization Toolkit (VTK) and RadiAnt DICOM Viewer, the effectiveness and performance of the proposed transfer function are demonstrated in terms of rendering efficiency and image quality: more accurate and clearer features are presented instead of a blurred red area. Furthermore, our approach avoids complex manual operations on the two-dimensional histogram, and more detailed information can be seen in the final visualized image.
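The pipeline described above — build a 2D gray-gradient histogram, cluster its occupied bins, and derive an opacity per cluster — can be sketched roughly as follows. This is a minimal illustration on a synthetic volume, not the paper's implementation; the choice of k=3 clusters and the rule "higher mean gradient means higher opacity" are assumptions made for the example.

```python
import numpy as np

def gray_gradient_histogram(vol, bins=64):
    """Build a 2D histogram over (gray value, gradient magnitude)."""
    gx, gy, gz = np.gradient(vol.astype(float))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    hist, g_edges, m_edges = np.histogram2d(vol.ravel(), gmag.ravel(), bins=bins)
    return hist, g_edges, m_edges

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on the occupied histogram bins."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Synthetic volume: a bright sphere inside a dim, slightly noisy background.
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
vol = np.where(x**2 + y**2 + z**2 < 100, 200.0, 50.0)
vol += np.random.default_rng(0).normal(0, 2, vol.shape)

hist, g_edges, m_edges = gray_gradient_histogram(vol)
occupied = np.argwhere(hist > 0).astype(float)  # bin coordinates with data
labels, centers = kmeans(occupied, k=3)

# One opacity per cluster: here, clusters with higher mean gradient get
# higher opacity, so material boundaries stand out in the rendering.
opacity = centers[:, 1] / centers[:, 1].max()
```

Each voxel would then be assigned the opacity of the cluster its (gray, gradient) bin belongs to, which is what makes the transfer function automatic.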

    Doctor of Philosophy

    Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, the volumetric datasets used by domain users have evolved from univariate to multivariate. Volume datasets are typically explored and classified via transfer function design and visualized using direct volume rendering. To improve classification results and to enable the exploration of multivariate volume datasets, multivariate transfer functions have emerged. In this dissertation, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) and two-dimensional (2D) transfer function spaces have been proposed; however, these methods work well on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain that makes complex multivariate volume data visualization more accessible to domain users. This method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semi-automatic multivariate volume exploration approach. GuideME provides the user with an easy-to-use, slice-based interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then generated automatically by optimizing a response function. Throughout the exploration process, the user never needs to interact with the parameter views.
Finally, real-world multivariate volume datasets are usually also large, often exceeding the GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations.
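The core idea of ray-guided out-of-core rendering — load only the volume bricks that rays actually touch, and evict the least recently used ones when memory is full — can be illustrated with a tiny LRU brick cache. This is a hypothetical sketch, not the dissertation's system; the brick shape and capacity are arbitrary example values.

```python
from collections import OrderedDict
import numpy as np

class BrickCache:
    """Minimal LRU cache of volume bricks: only bricks requested by rays
    are loaded from backing storage, and the least recently used brick is
    evicted once the capacity (standing in for GPU memory) is exceeded."""
    def __init__(self, load_brick, capacity=4):
        self.load_brick = load_brick   # callable fetching a brick by index
        self.capacity = capacity
        self.cache = OrderedDict()
        self.loads = 0                 # count of actual (slow) loads

    def get(self, idx):
        if idx in self.cache:
            self.cache.move_to_end(idx)          # mark as recently used
        else:
            self.loads += 1
            self.cache[idx] = self.load_brick(idx)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the LRU brick
        return self.cache[idx]

# Hypothetical on-disk volume split into bricks of constant value.
cache = BrickCache(lambda i: np.full((16, 16, 16), float(i)), capacity=2)
for idx in [0, 1, 0, 2, 0]:   # a ray batch that keeps revisiting brick 0
    cache.get(idx)
```

Because brick 0 stays "hot", it survives eviction and is fetched from disk only once, which is the property that makes interactive rates possible on datasets larger than memory.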

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this effect, using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels with higher gradient densities are included by the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture and intensity information, along with the aforementioned initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics to yield the final output segmentation. Experimental results, compared against published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose an extension of the aforementioned methodology in a multi-resolution framework, demonstrated on color images.
Finally, this research also encompasses a 3-D extension of the aforementioned algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
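The first stage described above — detect a vector gradient, then group and label the edge-free pixels to form an initial region map — can be sketched as follows. This is an illustrative simplification (gradient magnitudes summed across channels, 4-connected flood fill, a hand-picked threshold), not the framework's actual detector.

```python
import numpy as np

def vector_gradient_magnitude(img):
    """Gradient magnitude for a multi-channel (e.g. RGB) image, summed
    across channels as a simple stand-in for a vector gradient."""
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))

def initial_region_map(img, edge_thresh):
    """Label connected groups of low-gradient ("edge-free") pixels.

    Pixels whose gradient magnitude exceeds edge_thresh stay unlabeled
    (0) for later assignment, mirroring the two-stage partitioning in
    the abstract above."""
    gmag = vector_gradient_magnitude(img)
    free = gmag < edge_thresh
    labels = np.zeros(free.shape, dtype=int)
    next_label = 0
    for seed in zip(*np.nonzero(free)):
        if labels[seed]:
            continue
        next_label += 1
        stack = [seed]
        while stack:                     # iterative 4-connected flood fill
            r, c = stack.pop()
            if not (0 <= r < free.shape[0] and 0 <= c < free.shape[1]):
                continue
            if not free[r, c] or labels[r, c]:
                continue
            labels[r, c] = next_label
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return labels

# Two flat color regions separated by a vertical boundary.
img = np.zeros((8, 8, 3))
img[:, 4:] = 1.0
labels = initial_region_map(img, edge_thresh=0.1)
```

The two flat regions receive distinct labels while the high-gradient boundary columns remain unlabeled, ready for the dynamic segment-generation pass the abstract describes next.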

    Realistic Virtual Cuts


    Visualization of large medical volume data

    Ph.D. (Doctor of Philosophy)

    Segmentation Assisted Object Distinction For Direct Volume Rendering

    Ray casting is a direct volume rendering technique for visualizing 3D arrays of sampled data. It has vital applications in medical and biological imaging. Nevertheless, it is inherently prone to cluttered classification results: it suffers from overlapping transfer function values and lacks a sufficiently powerful voxel parsing mechanism for object distinction. In this work, we propose an image-processing-based approach to enhancing the ray casting technique's object distinction process. The ray casting architecture is modified to accommodate object membership information generated by a K-means-based hybrid segmentation algorithm. Object membership information is assigned to cubical vertices in the form of ID tags. An intra-object buffer is devised and coordinated with an inter-object buffer, allowing the otherwise global rendering module to embed multiple local (secondary) rendering processes. A local rendering process adds two advantages to the global rendering module. First, depth-oriented manipulation of the interpolation and composition operations allows the interpolation method to be chosen based on the number of objects present at various volumetric depths, improves the level of detail (LOD) for desired objects, and reduces the number of required mathematical computations. Second, localizing the transfer function design enables the use of binary (non-overlapping) transfer functions for color and opacity assignment. A set of image processing techniques is creatively employed in the design of the K-means-based hybrid segmentation algorithm.
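The second advantage above — binary, per-object transfer functions keyed by segmentation ID tags rather than overlapping scalar ranges — can be sketched with a standard front-to-back compositing loop along one ray. The ray contents and the two transfer function tables are hypothetical example data, not the thesis's implementation.

```python
import numpy as np

def composite_ray(samples, ids, color_tf, opacity_tf):
    """Front-to-back compositing of one ray.

    color_tf / opacity_tf map an object ID to a color and an opacity,
    playing the role of the per-object (non-overlapping) transfer
    functions: lookup is by segmentation ID, never by scalar value,
    so objects cannot bleed into each other."""
    color = np.zeros(3)
    alpha = 0.0
    for s, oid in zip(samples, ids):
        a = opacity_tf[oid]
        c = np.asarray(color_tf[oid], dtype=float)
        color += (1.0 - alpha) * a * c   # standard over-operator update
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha

# Hypothetical two-object ray: background (ID 0), then bone (ID 1).
ids = [0, 0, 1, 1, 1]
samples = [10, 12, 200, 210, 205]       # gray values (unused by binary TFs)
color_tf = {0: (0.0, 0.0, 0.0), 1: (1.0, 1.0, 1.0)}
opacity_tf = {0: 0.0, 1: 0.5}
color, alpha = composite_ray(samples, ids, color_tf, opacity_tf)
```

With three half-opaque bone samples, the accumulated opacity is 1 - 0.5^3 = 0.875; the fully transparent background samples contribute nothing, which is exactly the clutter-free behavior binary transfer functions buy.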

    Abstract visualization of large-scale time-varying data

    The explosion of large-scale time-varying datasets has created critical challenges for scientists to study and digest. One core problem for visualization is to develop effective approaches for studying data features and temporal relationships within large-scale time-varying datasets. In this dissertation, we first present two abstract visualization approaches to visualizing and analyzing time-varying datasets. The first approach visualizes time-varying datasets with succinct lines that represent the temporal relationships of the datasets. A time line visualizes time steps as points and a temporal sequence as a line; it is generated by sampling the distributions of virtual words across time to study temporal features. The key idea of the time line is to encode various data properties with virtual words: we apply virtual words to characterize feature points and use their distribution statistics to measure temporal relationships. The second approach is ensemble visualization, which provides a highly abstract platform for visualizing an ensemble of datasets. Both approaches can be used for exploration, analysis, and demonstration purposes. The second component of this dissertation is an animated visualization approach for studying dramatic temporal changes. Animation has been widely used to show trends, dynamic features and transitions in scientific simulations, but animated visualization itself is comparatively new. We present an automatic animation generation approach that simulates the composition and transition of storytelling techniques and synthesizes animations to describe various event features. We also extend the concept of animated visualization to a non-traditional form of time-varying data, network protocols, to visualize key information in abstract sequences. We have evaluated the effectiveness of our animated visualization in a formal user study and demonstrated its advantages for studying time-varying datasets.
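The virtual-word idea — quantize each time step into a word distribution, then compare distributions across time — can be sketched as follows. This is a toy illustration, not the dissertation's method: the 16-word vocabulary, total-variation distance, and synthetic data are assumptions chosen for the example.

```python
import numpy as np

def virtual_word_histogram(field, n_words=16, lo=0.0, hi=1.0):
    """Quantize sample values into n_words bins ("virtual words") and
    return their normalized distribution for one time step."""
    words = np.clip(((field - lo) / (hi - lo) * n_words).astype(int),
                    0, n_words - 1)
    hist = np.bincount(words.ravel(), minlength=n_words).astype(float)
    return hist / hist.sum()

def timeline_distances(steps, **kw):
    """Distance between consecutive time steps' word distributions;
    large values flag candidate 'dramatic change' events on the line."""
    hists = [virtual_word_histogram(s, **kw) for s in steps]
    return [np.abs(hists[i + 1] - hists[i]).sum() / 2   # total variation
            for i in range(len(hists) - 1)]

# Synthetic run: two steady low-valued steps, then an abrupt shift
# to two steady high-valued steps.
rng = np.random.default_rng(1)
steps = [rng.uniform(0.0, 0.3, 1000) for _ in range(2)]
steps += [rng.uniform(0.7, 1.0, 1000) for _ in range(2)]
d = timeline_distances(steps)
```

Plotted as distances between consecutive points on a time line, the middle transition stands out sharply (the two value ranges occupy disjoint words, so their total-variation distance is exactly 1), while the steady stretches stay near zero.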