
    Visibility-driven PET-CT Visualisation with Region of Interest (ROI) Segmentation

    Multi-modality positron emission tomography – computed tomography (PET-CT) visualises biological and physiological functions (from PET) as regions of interest (ROIs) within a higher-resolution anatomical reference frame (from CT). The need to efficiently assess and assimilate the information from these co-aligned volumes simultaneously has stimulated new visualisation techniques that combine 3D volume rendering with interactive transfer functions to enable efficient manipulation of these volumes. However, in typical multi-modality volume rendering, the transfer functions for the volumes are manipulated in isolation and the resulting volumes are then fused, which fails to exploit the spatial correlation between the aligned volumes. This lack of feedback makes multi-modality transfer function manipulation complex and time-consuming. Further, a transfer function alone is often insufficient to select the ROIs when they comprise voxel properties similar to those of non-relevant regions. In this study, we propose a new ROI-based multi-modality visibility-driven transfer function (m2-vtf) for PET-CT visualisation. We present a novel ‘visibility’ metric, a fundamental optical property that represents how much of the ROIs is visible to the user, and use it to measure the visibility of the PET ROIs in relation to how they are affected by transfer function manipulations of the counterpart CT. To overcome the difficulty of ROI selection, we provide an intuitive ROI selection tool based on automated PET segmentation. We further present a multi-modality transfer function automation in which the visibility metrics from the PET ROIs are used to automate the corresponding CT transfer function. Our GPU implementation achieved interactive visualisation of multi-modality PET-CT with efficient and intuitive transfer function manipulation.
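    The visibility metric described above can be sketched as the fraction of the final image contribution that comes from ROI samples during front-to-back compositing. The single-ray function below is a minimal illustrative sketch under that assumption (the function name and the per-sample ROI mask are hypothetical; the paper's method runs on the GPU over all rays):

    ```python
    def composite_visibility(alphas, roi_mask):
        """Front-to-back alpha compositing along one ray, returning the
        fraction of the total opacity contribution that comes from ROI
        samples (a simple per-ray visibility metric)."""
        transmittance = 1.0   # light remaining after samples seen so far
        roi_contrib = 0.0
        total_contrib = 0.0
        for alpha, in_roi in zip(alphas, roi_mask):
            contrib = transmittance * alpha   # this sample's share of the pixel
            total_contrib += contrib
            if in_roi:
                roi_contrib += contrib
            transmittance *= (1.0 - alpha)
        return roi_contrib / total_contrib if total_contrib > 0 else 0.0
    ```

    Summing such per-ray fractions over all pixels would give a global measure of how visible the ROI is under the current transfer functions, which is what the automation can then maximise.
    
    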

    Occlusion and Slice-Based Volume Rendering Augmentation for PET-CT

    Dual-modality positron emission tomography and computed tomography (PET-CT) depicts pathophysiological function with PET in an anatomical context provided by CT. Three-dimensional volume rendering approaches enable visualization of a two-dimensional slice of interest (SOI) from PET combined with direct volume rendering (DVR) of CT. However, because DVR depicts the whole volume, it may occlude a region of interest, such as a tumor in the SOI. Volume clipping can eliminate this occlusion by cutting away parts of the volume, but it requires intensive user involvement to decide on the appropriate clipping depth. Currently available transfer functions can make the regions of interest visible, but this often requires complex parameter tuning and coupled pre-processing of the data to define the regions. Hence, we propose a new visualization algorithm in which a SOI from PET is augmented by volumetric contextual information from a DVR of the counterpart CT so that the obtrusiveness of the CT in the SOI is minimized. Our approach automatically calculates an augmentation depth parameter by considering the occlusion information derived from the CT voxels in front of the PET SOI. The depth parameter is then used to generate an opacity weight function that controls the amount of contextual information visible from the DVR. We outline the improvements of our visualization approach over other slice-based approaches and over our previous approaches. We present a preliminary clinical evaluation of our visualization in a series of PET-CT studies from patients with non-small cell lung cancer.
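    As a rough illustration of the opacity weight idea: given an automatically computed augmentation depth, CT samples just in front of the PET SOI can be attenuated so they do not obscure the slice, while samples farther in front (and behind) keep their context opacity. The linear ramp below is a hypothetical sketch, not the paper's actual weight function:

    ```python
    def opacity_weight(sample_depth, soi_depth, augmentation_depth):
        """Weight multiplied into a CT sample's opacity. Samples within
        augmentation_depth in front of the SOI are linearly attenuated
        (fully suppressed at the slice itself); all others are unchanged.
        A hypothetical linear-ramp sketch of the opacity weight function."""
        if sample_depth >= soi_depth:
            return 1.0  # at or behind the slice: leave opacity as-is
        gap = soi_depth - sample_depth
        if gap >= augmentation_depth:
            return 1.0  # far enough in front: full context opacity
        # inside the augmentation band: ramp from 0 (at SOI) up to 1
        return gap / augmentation_depth
    ```

    A smoother falloff (e.g. a smoothstep) would serve equally well; the key point is that the single augmentation-depth parameter, derived from the occlusion information, controls where the attenuation band begins.
    
    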

    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to a better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationships with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), view-point (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. The advancement of image acquisition techniques has led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal imaging acquisitions to track treatment responses over time, and an increase in the number of imaging modalities used for a single procedure. The manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters.
Our methods enable visualizations necessary for the diagnostic procedure, in which a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets. We also examined the computational performance of our methods for these scenarios.
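A visibility histogram accumulates, per intensity bin, how much the samples in that bin contribute to the rendered image; adaptive binning concentrates bins where the data actually lies instead of spacing them uniformly. The sketch below illustrates one plausible interpretation (equal-count bin edges taken from sorted intensities); the function name and binning rule are assumptions, not the thesis's exact algorithm:

```python
from bisect import bisect_right

def visibility_histogram(intensities, visibilities, n_bins=8):
    """Visibility histogram: per-bin sum of per-sample visibility.
    Adaptive binning sketch: edges are picked at evenly spaced ranks of
    the sorted intensities, so each bin covers roughly the same number
    of samples (equal-count rather than equal-width bins)."""
    pairs = sorted(zip(intensities, visibilities))
    values = [v for v, _ in pairs]
    n = len(values)
    # choose n_bins + 1 edges at evenly spaced ranks, dropping duplicates
    raw = [values[min(round(i * (n - 1) / n_bins), n - 1)]
           for i in range(n_bins + 1)]
    edges = sorted(set(raw))
    hist = [0.0] * (len(edges) - 1)
    for value, vis in pairs:
        i = min(bisect_right(edges, value) - 1, len(hist) - 1)
        hist[max(i, 0)] += vis  # accumulate visibility, not raw counts
    return edges, hist
```

Because the histogram must be recomputed whenever the transfer function changes, reducing the number of bins (and hence the per-sample work) in this way is what makes real-time automation feasible.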
