    Visibility-driven PET-CT Visualisation with Region of Interest (ROI) Segmentation

    Multi-modality positron emission tomography – computed tomography (PET-CT) visualises biological and physiological function (from PET) as regions of interest (ROIs) within a higher-resolution anatomical reference frame (from CT). The need to efficiently assess and assimilate the information from these co-aligned volumes simultaneously has stimulated new visualisation techniques that combine 3D volume rendering with interactive transfer functions to enable efficient manipulation of the volumes. However, in typical multi-modality volume rendering, the transfer function of each volume is manipulated in isolation and the resulting volumes are then fused, thus failing to exploit the spatial correlation that exists between the aligned volumes. This lack of feedback makes multi-modality transfer function manipulation complex and time-consuming. Further, a transfer function alone is often insufficient to select the ROIs when they comprise voxel properties similar to those of non-relevant regions. In this study, we propose a new ROI-based multi-modality visibility-driven transfer function (m2-vtf) for PET-CT visualisation. We present a novel 'visibility' metric, a fundamental optical property that represents how much of the ROIs is visible to the user, and use it to measure the visibility of the PET ROIs in relation to how they are affected by transfer function manipulations of the counterpart CT. To overcome the difficulty of ROI selection, we provide an intuitive ROI selection tool based on automated PET segmentation. We further present a multi-modality transfer function automation in which the visibility metric of the PET ROIs is used to automate the CT transfer function. Our GPU implementation achieved interactive visualisation of multi-modality PET-CT with efficient and intuitive transfer function manipulation.
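    The abstract does not give the metric's formula, but 'visibility' in volume rendering is conventionally defined per sample as its opacity weighted by the transmittance accumulated in front of it along the viewing ray. Below is a minimal per-ray sketch in Python/NumPy; the function name, normalisation, and termination threshold are illustrative assumptions, not the paper's m2-vtf implementation.

        import numpy as np

        def roi_visibility(alphas, roi_mask):
            # alphas: per-sample opacities along one viewing ray, front to back
            # roi_mask: True where a sample belongs to the segmented ROI
            transmittance = 1.0                      # light not yet absorbed
            roi_contrib, total_contrib = 0.0, 0.0
            for a, in_roi in zip(alphas, roi_mask):
                contrib = transmittance * a          # what the viewer sees of this sample
                total_contrib += contrib
                if in_roi:
                    roi_contrib += contrib
                transmittance *= 1.0 - a
                if transmittance < 1e-4:             # early ray termination
                    break
            # fraction of the ray's visible contribution that comes from the ROI
            return roi_contrib / total_contrib if total_contrib > 0 else 0.0

        # a semi-opaque ROI sample behind a faint CT layer remains largely visible:
        print(roi_visibility(np.array([0.1, 0.6, 0.3]),
                             np.array([False, True, False])))   # ~0.72

    Raising the opacity of CT samples in front of the ROI lowers this score, which is the kind of feedback a visibility-driven automation of the CT transfer function can exploit.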

    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationship with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), viewpoint (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. Advances in image acquisition techniques have led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal acquisitions to track treatment responses over time, and an increase in the number of imaging modalities used for a single procedure. The manual specification of rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters. Our methods enable visualizations necessary for the diagnostic procedure, in which a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets, and examined the computational performance of our methods for these scenarios.
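    To make the visibility-histogram idea concrete: per-sample visibility (opacity times accumulated transmittance) is binned by voxel intensity, showing which intensity ranges actually reach the screen under the current transfer function. The sketch below uses fixed uniform bins for brevity, whereas the thesis's adaptive binning would redistribute them; all names are illustrative assumptions.

        import numpy as np

        def visibility_histogram(intensities, alphas, n_bins=64):
            # intensities: scalar voxel values sampled along one ray, front to back
            # alphas: opacities the transfer function assigns to those samples
            hist = np.zeros(n_bins)
            edges = np.linspace(intensities.min(), intensities.max(), n_bins + 1)
            transmittance = 1.0
            for s, a in zip(intensities, alphas):
                b = min(np.searchsorted(edges, s, side="right") - 1, n_bins - 1)
                hist[b] += transmittance * a         # visibility contribution of this sample
                transmittance *= 1.0 - a
            return hist / hist.sum() if hist.sum() > 0 else hist

    Summing these per-ray histograms over the image gives the VH; an optimiser can then adjust the rendering parameters until the bins covering a feature of interest carry sufficient visibility.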

    Occlusion and Slice-Based Volume Rendering Augmentation for PET-CT

    Dual-modality positron emission tomography and computed tomography (PET-CT) depicts pathophysiological function with PET in an anatomical context provided by CT. Three-dimensional volume rendering approaches enable visualization of a two-dimensional slice of interest (SOI) from PET combined with direct volume rendering (DVR) from CT. However, because DVR depicts the whole volume, it may occlude a region of interest in the SOI, such as a tumor. Volume clipping can eliminate this occlusion by cutting away parts of the volume, but it requires intensive user involvement to decide on the appropriate clipping depth. Currently available transfer functions can make the regions of interest visible, but this often requires complex parameter tuning coupled with pre-processing of the data to define the regions. Hence, we propose a new visualization algorithm in which a SOI from PET is augmented by volumetric contextual information from a DVR of the counterpart CT, such that the obtrusiveness of the CT in the SOI is minimized. Our approach automatically calculates an augmentation depth parameter by considering the occlusion information derived from the CT voxels in front of the PET SOI. The depth parameter is then used to generate an opacity weight function that controls the amount of contextual information visible from the DVR. We outline the improvements of our visualization approach over other slice-based approaches and our previous work. We present a preliminary clinical evaluation of our visualization in a series of PET-CT studies of patients with non-small cell lung cancer.
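    The abstract leaves the exact occlusion computation and weight function unspecified; one plausible reading, sketched below with assumed names, thresholds, and a sigmoid fall-off, accumulates CT opacity in front of the PET SOI to choose the augmentation depth and then attenuates DVR samples on the viewer's side of it.

        import numpy as np

        def augmentation_depth(ct_alphas_front, threshold=0.3):
            # ct_alphas_front: opacities of the CT samples between the viewer and
            # the PET SOI, front to back; returns the first sample index at which
            # the accumulated opacity would noticeably occlude the SOI
            occlusion = np.cumsum(ct_alphas_front)
            hits = np.nonzero(occlusion >= threshold)[0]
            return int(hits[0]) if hits.size else len(ct_alphas_front)

        def opacity_weight(sample_depths, aug_depth, falloff=4.0):
            # smoothly suppress contextual CT in front of the augmentation depth
            # (weight near 0) and let it fade back in at and beyond it (near 1)
            return 1.0 / (1.0 + np.exp(-(sample_depths - aug_depth) / falloff))

    The weights would multiply the CT opacities before compositing, so the SOI stays unobstructed while more distant anatomy still frames it.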

    Investigation and Correction for the Partial Volume Spill-in Effects in Positron Emission Tomography

    Positron emission tomography (PET) imaging has wide applicability in oncology, cardiology and neurology. However, a major drawback when imaging very active regions, such as the bladder and the bone, is the spill-in effect, which leads to inaccurate quantification and obscured visualisation of nearby lesions. This thesis therefore aims to investigate and correct for the spill-in effect from high-activity regions into their surroundings, as a function of activity in the hot region, lesion size and location, system resolution, and the application of post-filtering, using the background correction technique. The thesis involved analytical simulations with the digital XCAT2 phantom, validated against data acquired from NEMA phantoms and patient datasets on the GE Signa PET/MR and Siemens Biograph mMR/mCT scanners. Reconstructions were performed using the ordered subset expectation maximisation (OSEM) algorithm. A dedicated point spread function model (OSEM+PSF) and the background correction (OSEM+PSF+BC) were incorporated into the reconstruction for spill-in correction. For region of interest (ROI) analysis, semi-automated ellipsoidal ROIs were drawn at the exact locations of the lesions and used to extract the standardised uptake value (SUV). The bias, recovery coefficient (RC), coefficient of variation (CoV) and contrast-to-noise ratio (CNR) were computed from the SUVs and used as figures of merit to compare the performance of all the reconstruction algorithms. The thesis revealed that: (i) lesions within 15-20 mm of the hot region are predominantly affected by the spill-in effect, leading to increased bias and impaired lesion visualisation within that region; (ii) the spill-in effect is further influenced by ROI selection, increasing activity in the hot region, reduced resolution and the application of post-filtering; (iii) the spill-in effect is more evident for SUVmax than for SUVmean; (iv) for proximal lesions (within 2 voxels of the hot region), PSF yields no major improvement over OSEM because of the spill-in effect coupled with the Gibbs effect; (v) with OSEM+PSF+BC, the spill-in contribution from the hot region was removed in all cases (irrespective of ROI selection, proximity of the lesion to the hot source, or application of post-filtering), thereby stabilising quantification and enhancing contrast in lesions with low uptake. The thesis therefore concludes that the background correction (BC) technique is effective in correcting for the spill-in effect from hot regions into surrounding regions of interest, and that it is robust to ROI-induced errors and post-filtering.
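    For reference, the conventional forms of these figures of merit, computed from ROI SUVs, are sketched below; exact conventions (percentages, degrees of freedom, background definition) vary between studies, and the argument names are illustrative assumptions.

        import numpy as np

        def figures_of_merit(lesion_suv, true_suv, background_suv):
            # lesion_suv: measured lesion-ROI SUVs across realisations (>= 2 samples
            # needed for the spread estimates); true_suv: known phantom activity;
            # background_suv: SUVs sampled from a background ROI
            mean = lesion_suv.mean()
            bias = 100.0 * (mean - true_suv) / true_suv          # % bias
            rc = mean / true_suv                                 # recovery coefficient
            cov = 100.0 * lesion_suv.std(ddof=1) / mean          # coefficient of variation, %
            cnr = (mean - background_suv.mean()) / background_suv.std(ddof=1)  # contrast-to-noise
            return bias, rc, cov, cnr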

    Exploration of virtual and augmented reality for visual analytics and 3D volume rendering of functional magnetic resonance imaging (fMRI) data

    Statistical analysis of functional magnetic resonance imaging (fMRI) data, such as independent component analysis, is providing new scientific and clinical insights, with capabilities such as characterising traits of schizophrenia. However, existing approaches to fMRI analysis face a number of challenges that prevent the data from being fully utilised, including understanding exactly what a 'significant activity' pattern is, which structures are consistent and which differ between individuals and across the population, and how to deal with imaging artifacts such as noise. Interactive visual analytics has been presented as a step towards solving these challenges by presenting the data to users in a way that illuminates meaning. This includes circular layouts that represent network connectivity and volume renderings with 'in situ' network diagrams. These visualisations currently rely on traditional 2D 'flat' displays with mouse-and-keyboard input; due to the constrained screen space and an implied concept of depth, they are limited in presenting a meaningful, uncluttered abstraction of the data without compromising anatomical context. In this paper, we present our ongoing research on fMRI visualisation and discuss the potential of virtual reality (VR) and augmented reality (AR), coupled with gesture-based input, to create an immersive environment for visualising fMRI data. We suggest that VR/AR can potentially overcome the identified challenges by reducing visual clutter and by allowing users to navigate the data abstractions in a 'natural' way that keeps their focus on the visualisations. We also present the exploratory research we have performed in creating immersive VR environments for fMRI data.