
    Adaptive transfer functions: improved multiresolution visualization of medical models

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9. Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice the models usually reach sizes of up to 512x512x2000 voxels. These resolutions exceed the capabilities of conventional GPUs, such as those usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The resulting data loss reduces visualization quality, and this is not commonly compensated by other measures that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that rendering quality is substantially improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering moderately sized models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also present an evaluation of these results based on perceptual metrics.
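    A minimal sketch of the iterative downsampling step described above, assuming a NumPy voxel array; the function name and the voxel budget are illustrative, and the paper's actual contribution, adapting the transfer function to the reduced data, is not reproduced here.

```python
import numpy as np

def downsample_to_budget(volume: np.ndarray, max_voxels: int) -> np.ndarray:
    """Halve each axis of a voxel model until it fits a voxel budget.

    This mirrors the iterative downsampling strategy mentioned in the
    abstract; averaging 2x2x2 blocks is one common choice of filter.
    """
    while volume.size > max_voxels and min(volume.shape) >= 2:
        # Trim to even dimensions so the volume splits into 2x2x2 blocks.
        z, y, x = (d - d % 2 for d in volume.shape)
        v = volume[:z, :y, :x]
        volume = v.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))
    return volume

# Example: a 512x512x2000 study reduced until it fits a budget of 32M voxels.
# full = np.load("ct_study.npy")        # hypothetical (2000, 512, 512) array
# reduced = downsample_to_budget(full, max_voxels=32 * 1024 * 1024)
```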

    Surface Projection Method for Visualizing Volumetric Data

    The goal of this project was to explore, develop, and implement additional visualization methods for volumetric data within MindSeer. This paper discusses the implementation of one such visualization method, the surface projection method, and compares it to other existing methods.
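    The abstract does not detail the projection itself, so the following is only one plausible reading of a surface projection method: sample volume intensities along each mesh vertex's normal and keep the maximum, then use those values to colour the surface. All names and the sampling scheme are assumptions, not MindSeer's implementation.

```python
import numpy as np

def project_volume_onto_surface(volume, vertices, normals, depth=5):
    """Project volume intensities onto a surface mesh (illustrative sketch).

    `vertices` and `normals` are (N, 3) arrays given in voxel coordinates;
    for each vertex we sample the volume along the inward normal and keep
    the maximum intensity, which can then drive the mesh colouring.
    """
    values = np.zeros(len(vertices))
    for i, (v, n) in enumerate(zip(vertices, normals)):
        samples = []
        for t in range(depth):
            p = np.round(v - t * n).astype(int)   # step inward along -normal
            if np.all(p >= 0) and np.all(p < volume.shape):
                samples.append(volume[tuple(p)])
        values[i] = max(samples) if samples else 0.0
    return values
```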

    Cross-Platform Presentation of Interactive Volumetric Imagery

    Volume data is useful across many disciplines, not just medicine. Thus, it is very important that researchers have a simple and lightweight method of sharing and reproducing such volumetric data. In this paper, we explore some of the challenges associated with volume rendering, both in the classical sense and in the context of Web3D technologies. We describe and evaluate the proposed X3D Volume Rendering Component and its associated styles for their suitability in the visualization of several types of image data. Additionally, we examine the ability of a minimal X3D node set to capture provenance and semantic information from outside ontologies in metadata and integrate it with the scene graph.

    The volume in focus: hardware-assisted focus and context effects for volume visualization

    In many volume visualization applications there is some region of specific interest where we wish to see fine detail, yet we do not want to lose an impression of the overall picture. In this research we apply the notion of focus and context to texture-based volume rendering. A framework has been developed that enables users to achieve fast volumetric distortion and other effects of practical use. The framework has been implemented through direct programming of the graphics processor and integrated into a volume rendering system. Our driving application is the effective visualization of aneurysms, an important issue in neurosurgery. We have developed and evaluated an easy-to-use system that allows a neurosurgical team to explore the nature of cerebral aneurysms, visualizing the aneurysm itself in fine detail while still retaining a view of the surrounding vasculature.
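    A focus-and-context weighting of this kind can be illustrated in a few lines of NumPy; in the paper the effect runs as a fragment program on the GPU, so this CPU version, and every name in it, is only an illustrative sketch of the idea.

```python
import numpy as np

def focus_context_opacity(positions, opacities, focus_center, focus_radius,
                          context_scale=0.2):
    """Modulate per-sample opacity so a spherical focus region stands out.

    Samples inside the focus keep full opacity; context samples are faded
    so the region of interest (e.g. an aneurysm) is shown in detail while
    the surrounding vasculature stays visible as context.
    """
    d = np.linalg.norm(positions - focus_center, axis=1)
    # Smooth ramp from 1 inside the focus down to `context_scale` outside it.
    t = np.clip((d - focus_radius) / focus_radius, 0.0, 1.0)
    weight = (1.0 - t) + t * context_scale
    return opacities * weight
```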

    Integration of multimodal data based on surface registration

    The paper proposes and evaluates a strategy for the alignment of anatomical and functional data of the brain. The method takes as input two different sets of images of the same patient: MR data and SPECT. It proceeds in four steps: first, it constructs two voxel models from the two image sets; next, it extracts from the two voxel models the surfaces of regions of interest; in the third step, the surfaces are interactively aligned in corresponding pairs; finally, a unique volume model is constructed by selectively applying the geometrical transformations associated with the regions and weighting their contributions. The main advantages of this strategy are (i) that it can be applied retrospectively, (ii) that it is three-dimensional, and (iii) that it is local. Its main disadvantage with regard to previously published methods is that it requires the extraction of surfaces. However, this step is often required for other stages of the multimodal analysis, such as visualization, and therefore its cost can be accounted for in the overall cost of the process.
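    A sketch of the final fusion step under simplifying assumptions: a single rigid transform (rotation plus translation) obtained from the surface alignment is applied to the SPECT volume, which is then blended with the MR volume. The paper applies per-region transformations with weighted contributions; the single-transform version below is only illustrative, and all names are assumptions.

```python
import numpy as np
from scipy.ndimage import affine_transform

def fuse_registered_volumes(mr, spect, rotation, translation, weight=0.5):
    """Resample SPECT into the MR frame and blend the two voxel models.

    `rotation` (3x3) and `translation` (3,) are assumed to map SPECT voxel
    coordinates into the MR frame, as estimated from the aligned surfaces.
    """
    # affine_transform pulls values from input coordinates, so we pass the
    # inverse mapping: spect_coord = R^T @ (mr_coord - t).
    inv_rot = rotation.T
    offset = -inv_rot @ translation
    spect_in_mr = affine_transform(spect, inv_rot, offset=offset,
                                   output_shape=mr.shape, order=1)
    return (1.0 - weight) * mr + weight * spect_in_mr
```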

    Crepuscular Rays for Tumor Accessibility Planning


    DoctorEye: A clinically driven multifunctional platform, for accurate processing of tumors in medical images

    Copyright @ Skounakis et al. This paper presents a novel, open-access interactive platform for 3D medical image analysis, simulation and visualization, focusing on oncology images. The platform was developed through constant interaction and feedback from expert clinicians, integrating a thorough analysis of their requirements, with the ultimate goal of assisting in accurately delineating tumors. It allows clinicians not only to work with a large number of 3D tomographic datasets but also to efficiently annotate multiple regions of interest in the same session. Manual and semi-automatic segmentation techniques combined with integrated correction tools assist in the quick and refined delineation of tumors, while different users can add different components related to oncology, such as tumor growth and simulation algorithms for improving therapy planning. The platform has been tested by different users and over a large number of heterogeneous tomographic datasets to ensure stability, usability, extensibility and robustness, with promising results. Availability: the platform, a manual and tutorial videos are available at http://biomodeling.ics.forth.gr. It is free to use under the GNU General Public License.
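    Region growing from a clinician-placed seed is one common semi-automatic delineation scheme; the sketch below uses it only to illustrate the kind of interactive segmentation step mentioned above and is not taken from DoctorEye's code.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance):
    """Grow a region from a user-provided seed point (semi-automatic step).

    Voxels 6-connected to the region are added while their intensity stays
    within `tolerance` of the seed intensity; the resulting mask could then
    be refined with manual correction tools.
    """
    seed_val = float(image[seed])
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque([tuple(seed)])
    while queue:
        p = queue.popleft()
        if mask[p] or abs(float(image[p]) - seed_val) > tolerance:
            continue
        mask[p] = True
        for axis in range(image.ndim):
            for step in (-1, 1):
                q = list(p)
                q[axis] += step
                if 0 <= q[axis] < image.shape[axis]:
                    queue.append(tuple(q))
    return mask
```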

    Design of a multimodal rendering system

    This paper addresses the rendering of aligned regular multimodal datasets. It presents a general framework for multimodal data fusion that includes several data merging methods. We also analyze the requirements of a rendering system able to provide these different fusion methods. On the basis of these requirements, we propose a novel design for a multimodal rendering system. The design has been implemented and tested, proving to be efficient and flexible.
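    The abstract does not enumerate the merging methods, so the two operators below are generic examples of what voxel-level data fusion can look like (an intensity blend and a property-level merge); the callables `colormap` and `opacity_tf` are hypothetical, and neither operator is the paper's specific method set.

```python
import numpy as np

def weighted_merge(vol_a, vol_b, alpha=0.5):
    """Intensity-level fusion: blend two aligned voxel models voxel by voxel."""
    return alpha * vol_a + (1.0 - alpha) * vol_b

def property_merge(anatomical, functional, colormap, opacity_tf):
    """Property-level fusion: one modality drives colour, the other opacity.

    `colormap` maps functional values to RGB triples and `opacity_tf` maps
    anatomical values to opacity; both are hypothetical callables.
    """
    rgb = colormap(functional)                    # (..., 3)
    a = opacity_tf(anatomical)[..., np.newaxis]   # (..., 1)
    return np.concatenate([rgb, a], axis=-1)      # RGBA per voxel
```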