
    Adaptive transfer functions: improved multiresolution visualization of medical models

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9. Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice the models usually reach up to 512x512x2000 voxels. These resolutions exceed the capabilities of conventional GPUs, the ones usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The data loss reduces the visualization quality, and this is not commonly compensated for with other actions that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that the quality of renderings is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also show an evaluation of these results based on perceptual metrics. Peer reviewed. Postprint (author's final draft).
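    The iterative downsampling that the abstract attributes to commercial viewers can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (block averaging, a byte budget standing in for GPU capacity); the function name and budget are hypothetical, not the paper's method:

    ```python
    import numpy as np

    def downsample_to_fit(volume, budget_bytes):
        """Iteratively halve each axis (averaging 2x2x2 blocks) until the
        volume fits the given memory budget. Illustrative sketch of the
        reduction strategy described in the abstract, not its actual code."""
        v = volume.astype(np.float32)
        while v.nbytes > budget_bytes:
            # trim odd-sized axes so the volume splits cleanly into 2x2x2 blocks
            d, h, w = (s - s % 2 for s in v.shape)
            v = v[:d, :h, :w]
            v = v.reshape(d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))
        return v

    vol = np.random.rand(64, 64, 64).astype(np.float32)
    small = downsample_to_fit(vol, budget_bytes=vol.nbytes // 8)
    print(small.shape)  # one halving step per axis: (32, 32, 32)
    ```

    Each halving step discards high-frequency detail, which is exactly the quality loss the adaptive transfer functions of the paper aim to compensate for.
    
    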

    Facilitating the design of multidimensional and local transfer functions for volume visualization

    The importance of volume visualization is increasing since the sizes of the datasets that need to be inspected grow with every new generation of medical scanners (e.g., CT and MR). Direct volume rendering is a 3D visualization technique that has, in many cases, clear benefits over 2D views. It is able to show 3D information, facilitating mental reconstruction of the 3D shape of objects and their spatial relations. The complexity of the settings required to generate a 3D rendering is, however, one of the main reasons this technique is not used more widely in practice. Transfer functions play an important role in the appearance of volume-rendered images by determining the optical properties of each piece of the data. The transfer function determines what will be seen, and how. The goal of the project on which this PhD thesis reports was to develop and investigate new approaches that facilitate the setting of transfer functions. As shown in the state-of-the-art overview in Chapter 2, there are two main aspects that influence the effectiveness of a transfer function (TF): the choice of the TF domain and the process of defining the shape of the TF. The choice of a TF domain, i.e., the choice of the data properties used, directly determines which aspects of the volume data can be visualized. In many approaches, special attention is given to TF domains that enable an easier selection and visualization of boundaries between materials. Boundaries are an important aspect of the volume data since they reveal the shapes and sizes of objects. Our research in improving the TF definition focused on introducing new user interaction methods and automation techniques that shield the user from the complex process of manually defining the shape and color properties of TFs. Our research dealt with both the TF domain and the TF definition since they are closely related.
    A suitable TF domain can not only greatly improve the manual definition but also, more importantly, increase the possibilities of using automated techniques. Chapter 3 presents a new TF domain. We have used the LH space and the associated LH histogram for TFs based on material boundaries. We showed that the LH space reduces the ambiguity when selecting boundaries compared to the commonly used space of data value and gradient magnitude. Furthermore, boundaries appear as blobs in the LH histogram, which makes them easier to select. The compactness of the LH histogram and the easier selectivity of boundaries make it suitable for the introduction of clustering-based automation. The mirrored extension of the LH space differentiates between the two sides of a boundary. The mirrored LH histogram shows interesting properties of this space, allowing the selection of all boundaries belonging to one material in an easy way. We have also shown that segmentation techniques, such as region growing methods, can benefit from the properties of the LH space. Standard cost functions based on the data value and/or the gradient magnitude may experience problems at boundaries due to the partial volume effect. A cost function based on the LH space, however, is capable of handling the region growing of boundaries better. Chapter 4 presents an interaction framework for the TF definition based on hierarchical clustering of material boundaries. Our framework aims at an easy combination of various similarity measures that reflect the requirements of the user. One of the main benefits of the framework is the absence of similarity-weighting coefficients, which are usually hard to define. Further, the framework enables the user to visualize objects that may exist at different levels of the hierarchy. We also introduced two similarity measures that illustrate the functionality of the framework.
    The main contribution is the first similarity measure, which takes advantage of properties of the LH histogram from Chapter 3. We assumed that the shapes of the peaks in the LH histogram can guide the grouping of clusters. The second similarity measure is based on the spatial relationships of clusters. In Chapter 5, we presented the part of our research that focused on one of the main issues encountered with TFs in general. Standard TFs, as they are applied in the same way everywhere in the volume, become difficult to use when the data properties (measurements) of the same material vary over the volume, for example due to acquisition inaccuracies. We address this problem by introducing the concept and framework of local transfer functions (LTFs). Local transfer functions are based on using locally applicable TFs in cases where it might be difficult or impossible to define a globally applicable TF. We discussed a number of reasons that hamper a global TF and illustrated how LTFs may help to alleviate these problems. We have also discussed how multiple TFs can be combined and automatically adapted. One of our contributions is the use of the similarity of local histograms and their correlation for the combination and adaptation of LTFs.
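    The conventional TF domain that the LH space is compared against, the 2D histogram of data value versus gradient magnitude, can be sketched as follows. This is an illustrative NumPy construction on synthetic data, not the thesis' implementation; in this domain material boundaries smear into arcs, whereas the LH histogram described above collapses them into compact blobs:

    ```python
    import numpy as np

    def value_gradient_histogram(volume, bins=64):
        """Build the classic (data value, gradient magnitude) 2D histogram
        used as a TF domain. Each bin counts the voxels whose value and
        gradient magnitude fall into that cell."""
        gz, gy, gx = np.gradient(volume.astype(np.float32))
        gmag = np.sqrt(gx**2 + gy**2 + gz**2)
        hist, v_edges, g_edges = np.histogram2d(
            volume.ravel(), gmag.ravel(), bins=bins)
        return hist, v_edges, g_edges

    # synthetic two-material volume: a bright sphere in a dark background
    z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    vol = (x**2 + y**2 + z**2 < 0.5).astype(np.float32)
    hist, _, _ = value_gradient_histogram(vol)
    print(hist.shape)  # (64, 64)
    ```

    A TF widget over this domain lets the user assign color and opacity to regions of the histogram; the difficulty of isolating a specific boundary arc here is the ambiguity the LH space is designed to reduce.
    
    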

    Doctor of Philosophy

    Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, the volumetric datasets used by domain users have evolved from univariate to multivariate. Volume datasets are typically explored and classified via transfer function design and visualized using direct volume rendering. To improve classification results and to enable the exploration of multivariate volume datasets, multivariate transfer functions have emerged. In this dissertation, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) or two-dimensional (2D) transfer function spaces have been proposed; however, these methods work on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain to make complex multivariate volume data visualization more accessible for domain users. However, this method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides the user with an easy-to-use, slice-based interface that suggests feature boundaries and allows the user to select features via click and drag, after which an optimal transfer function is automatically generated by optimizing a response function. Throughout the exploration process, the user does not need to interact with the parameter views at all.
    Finally, real-world multivariate volume datasets are also usually large, often larger than the GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations.
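    The basic classification mechanism that all of these approaches extend, a transfer function mapping data values to color and opacity, can be sketched as a simple lookup. This is a generic illustration with hypothetical names, not GuideME's or any specific system's implementation:

    ```python
    import numpy as np

    def apply_1d_tf(volume, tf_table):
        """Classify a scalar volume with a 1D transfer function: normalize
        values into [0, 1], index a precomputed RGBA lookup table, and
        return per-voxel optical properties. Minimal sketch only."""
        v = volume.astype(np.float32)
        v = (v - v.min()) / (v.max() - v.min() + 1e-8)
        idx = (v * (len(tf_table) - 1)).astype(np.int32)
        return tf_table[idx]  # shape: volume.shape + (4,)

    # ramp TF: redness and opacity grow with data value (illustrative choice)
    table = np.zeros((256, 4), dtype=np.float32)
    table[:, 0] = np.linspace(0.0, 1.0, 256)  # red channel
    table[:, 3] = np.linspace(0.0, 1.0, 256)  # alpha channel
    rgba = apply_1d_tf(np.random.rand(16, 16, 16), table)
    print(rgba.shape)  # (16, 16, 16, 4)
    ```

    Multivariate and higher-dimensional TFs generalize this lookup to multiple data attributes per voxel, which is precisely where manual widget-based editing becomes unwieldy and semiautomatic approaches such as GuideME become attractive.
    
    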

    ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored in a ROOT file in a machine-independent compressed binary format. In ROOT, the TTree object container is optimized for statistical data analysis over very large data sets by using vertical (columnar) data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. To analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, ROOT offers packages for complex data modeling and fitting, as well as multivariate classification based on machine learning techniques. A central piece of these analysis tools is the set of histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like PostScript and PDF or in bitmap formats like JPG or GIF. The result can also be stored in ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g., data mining in HEP - by using PROOF, which takes care of optimally distributing the work over the available resources in a transparent way.

    Abstract Feature Space Representation for Volumetric Transfer Function Exploration

    The application of n-dimensional transfer functions for feature segmentation has become increasingly popular in volume rendering. Recent work has focused on the utilization of higher-dimensional transfer functions incorporating spatial dimensions (x, y, and z) along with traditional feature-space dimensions (value and value gradient). However, as the dimensionality increases, it becomes exceedingly difficult to abstract the transfer function into an intuitive and interactive workspace. In this work, we focus on populating the traditional two-dimensional histogram with a set of derived metrics from the spatial (x, y, and z) and feature-space (value, value gradient, etc.) domains to create a set of abstract feature-space transfer function domains. Current two-dimensional transfer function widgets typically consist of a two-dimensional histogram where each entry represents the number of voxels that map to that entry. In the case of an abstract transfer function design, the amount of spatial variance at that histogram coordinate is mapped instead, thereby conveying additional information about the data abstraction in the projected space. Finally, a non-parametric kernel density estimation approach for feature-space clustering is applied in the abstracted space, and the resultant transfer functions are discussed with respect to the space abstraction.
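    The variance-based histogram abstraction described above can be sketched as follows. This is a hedged NumPy illustration: the binning scheme and the use of total positional variance per bin are assumptions for demonstration, not the paper's exact formulation:

    ```python
    import numpy as np

    def spatial_variance_histogram(volume, bins=32):
        """For each (value, gradient-magnitude) histogram bin, store the
        variance of the spatial positions of the voxels falling into it,
        instead of the usual voxel count. Illustrative sketch only."""
        vol = volume.astype(np.float32)
        gz, gy, gx = np.gradient(vol)
        gmag = np.sqrt(gx**2 + gy**2 + gz**2)
        # per-voxel (z, y, x) coordinates, flattened to rows
        coords = np.stack(np.indices(vol.shape), axis=-1).reshape(-1, 3)
        v_bin = np.clip((vol.ravel() - vol.min()) /
                        (vol.max() - vol.min() + 1e-8) * bins, 0, bins - 1).astype(int)
        g_bin = np.clip((gmag.ravel() - gmag.min()) /
                        (gmag.max() - gmag.min() + 1e-8) * bins, 0, bins - 1).astype(int)
        var = np.zeros((bins, bins), dtype=np.float32)
        flat = v_bin * bins + g_bin
        for b in np.unique(flat):
            pts = coords[flat == b]
            # total variance of voxel positions mapped to this bin
            var[b // bins, b % bins] = pts.var(axis=0).sum()
        return var

    z, y, x = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
    vol = np.exp(-(x**2 + y**2 + z**2))
    var = spatial_variance_histogram(vol)
    print(var.shape)  # (32, 32)
    ```

    Bins with low positional variance correspond to spatially compact features, which is the extra cue this abstraction exposes in the projected 2D space.
    
    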