221 research outputs found

    Automated thresholded region classification using a robust feature selection method for PET-CT

    Fluorodeoxyglucose Positron Emission Tomography - Computed Tomography (FDG PET-CT) is the preferred imaging modality for staging lymphoma. Sites of disease usually appear as foci of increased FDG uptake, and thresholding is the most common method used to identify these regions. Thresholding, however, cannot separate sites of FDG excretion and physiological FDG uptake (sFEPU) from sites of disease. sFEPU can make image interpretation problematic, so the ability to identify and label sFEPU would improve image interpretation and the assessment of total disease burden, and would benefit any computer-aided diagnosis software. Existing classification methods are sub-optimal: because they are unable to identify the optimal features for classification, they tend to over-fit and carry an increased computational burden. In this study, we propose a new method to delineate sFEPU from thresholded PET images. Our feature selection method differs from existing approaches in that it selects optimal features from individual structures rather than from the entire image. Our classification results on 9,222 coronal slices derived from 40 clinical lymphoma patient studies produced higher classification accuracy than existing feature-selection-based methods.
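The thresholding step the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the 41%-of-SUVmax cut-off is a commonly cited PET convention chosen here for concreteness, and `threshold_regions` is a hypothetical name.

```python
import numpy as np

def threshold_regions(suv_slice, fraction=0.41):
    """Binary mask of voxels above a fixed fraction of SUVmax.

    fraction=0.41 (41% of SUVmax) is a commonly cited PET threshold;
    the paper's exact value is not specified here.
    """
    cutoff = fraction * suv_slice.max()
    return suv_slice > cutoff

# Toy coronal SUV slice: the three hottest voxels exceed the cut-off.
slice_ = np.array([[0.5, 1.0, 8.0],
                   [0.8, 6.0, 7.5],
                   [0.4, 0.9, 1.1]])
mask = threshold_regions(slice_)
```

Any region surviving this mask — disease or sFEPU alike — would then be passed to the proposed per-structure feature selection and classification stage.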

    Learning Optimal Deep Projection of 18F-FDG PET Imaging for Early Differential Diagnosis of Parkinsonian Syndromes

    Several parkinsonian syndromes present similar symptoms at an early stage, and no objective, widely used diagnostic method has been approved to date. Positron emission tomography (PET) with 18F-FDG has been shown to assess early neuronal dysfunction in synucleinopathies and tauopathies. Tensor factorization (TF) based approaches have been applied to identify characteristic metabolic patterns for differential diagnosis. However, these conventional dimension-reduction strategies assume linear or multi-linear relationships inside the data, and are therefore insufficient to distinguish nonlinear metabolic differences between the various parkinsonian syndromes. In this paper, we propose a Deep Projection Neural Network (DPNN) to identify characteristic metabolic patterns for early differential diagnosis of parkinsonian syndromes, drawing inspiration from the existing TF methods. The network consists of (i) a compression part, which uses a deep network to learn optimal 2D projections of 3D scans, and (ii) a classification part, which maps the 2D projections to labels. The compression part can be pre-trained using surplus unlabelled datasets. Also, because the classification part operates on these 2D projections, it can be trained end-to-end effectively with limited labelled data, in contrast to 3D approaches. We show that DPNN is more effective than existing state-of-the-art methods and plausible baselines. Comment: 8 pages, 3 figures, conference, MICCAI DLMIA, 201
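The compression idea — collapsing a 3D scan to a learnable 2D projection — can be illustrated with a linear stand-in. The paper's compression network is deep and nonlinear; the per-slice softmax weighting below is only a sketch of the simplest member of that family, and all names are illustrative.

```python
import numpy as np

def learned_projection(volume, weights):
    """Collapse a 3D scan (D, H, W) to a 2D image (H, W) by a
    softmax-weighted sum over depth -- a linear stand-in for the
    deep, nonlinear compression network described in the paper."""
    w = np.exp(weights) / np.exp(weights).sum()   # softmax over depth slices
    return np.tensordot(w, volume, axes=(0, 0))

rng = np.random.default_rng(0)
volume = rng.random((16, 32, 32))   # toy 18F-FDG scan
weights = np.zeros(16)              # untrained: uniform weights = mean projection
proj = learned_projection(volume, weights)
```

Training would adjust `weights` (or, in the paper, the network parameters) jointly with the downstream 2D classifier, so the projection preserves the metabolic differences that separate the syndromes.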

    Stacked fully convolutional networks with multi-channel learning: application to medical image segmentation

    The automated segmentation of regions of interest (ROIs) in medical imaging is a fundamental requirement for deriving high-level semantics for image analysis in clinical decision support systems. Traditional segmentation approaches, such as region-based methods, depend heavily upon hand-crafted features and the a priori knowledge of the user, which makes them difficult to adopt within a clinical environment. Recently, methods based on fully convolutional networks (FCNs) have achieved great success in the segmentation of general images. FCNs leverage a large labeled dataset to hierarchically learn the features that best correspond to the shallow appearance as well as the deep semantics of the images. However, when applied to medical images, FCNs usually produce coarse ROI detection and poor boundary definitions, primarily due to the limited amount of labeled training data and limited constraints on label agreement among neighboring similar pixels. In this paper, we propose a new stacked FCN architecture with multi-channel learning (SFCN-ML). We embed the FCN in a stacked architecture to learn the foreground ROI features and background non-ROI features separately, and then integrate these different channels to produce the final segmentation result. In contrast to traditional FCN methods, our SFCN-ML architecture enables the visual attributes and semantics derived from both the foreground and background channels to be iteratively learned and inferred. We conducted extensive experiments on three public datasets with a variety of visual challenges. Our results show that our SFCN-ML is more effective and robust than a routine FCN, its variants, and other state-of-the-art methods.
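The final integration step — combining a foreground-ROI channel with a background-non-ROI channel — can be sketched as follows. In SFCN-ML this fusion is itself learned; the fixed product rule and the 0.25 decision threshold below are illustrative assumptions, as is the function name.

```python
import numpy as np

def integrate_channels(fg_prob, bg_prob, cutoff=0.25):
    """Fuse foreground-ROI and background-non-ROI probability maps.

    A pixel is labeled ROI when the foreground channel is confident
    AND the background channel disagrees -- a fixed-rule stand-in for
    the learned integration stage of SFCN-ML."""
    combined = fg_prob * (1.0 - bg_prob)
    return combined > cutoff

# Toy 2x2 maps: left column is ROI-like, right column background-like.
fg = np.array([[0.9, 0.2], [0.8, 0.1]])
bg = np.array([[0.1, 0.9], [0.3, 0.8]])
seg = integrate_channels(fg, bg)
```

Requiring agreement between the two channels is what suppresses the coarse, noisy detections that a single-channel FCN tends to produce near ROI boundaries.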

    Cerebral F18-FDG PET CT in Children: Patterns during Normal Childhood and Clinical Application of Statistical Parametric Mapping

    The first aim was to recruit and analyse a high-quality dataset of cerebral FDG PET CT scans in neurologically normal children. Using qualitative, semi-quantitative and statistical parametric mapping (SPM) techniques, the results showed that an adult-like pattern of FDG uptake is not reached by one year of age, as was previously believed; instead, regional FDG uptake changes throughout childhood, driven by differing age-related regional rates of increasing FDG uptake. The second aim was to use this normal dataset in the clinical analysis of cerebral FDG PET CT scans in children with epilepsy and Neurofibromatosis type 1 (NF1). The normal dataset was validated for single-subject-versus-group SPM analysis and was highly specific in identifying the epileptogenic focus likely to result in a good post-operative outcome in children with epilepsy. Qualitative, semi-quantitative and group-versus-group SPM analyses were applied to FDG PET CT scans in children with NF1; the results showed reduced metabolism in the thalami and medial temporal lobes compared to neurologically normal children. This thesis has produced novel findings that advance the understanding of childhood brain development and has developed SPM techniques that can be applied to cerebral FDG PET CT scans in children with neurological disorders.

    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationships with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), view-point (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. The advancement of image acquisition techniques has led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal imaging acquisition to track treatment responses over time, and an increase in the number of imaging modalities used for a single procedure. The manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters.
Our methods enable the visualizations necessary for diagnostic procedures: a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real-time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging datasets, and examined the computational performance of our methods for these scenarios.
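The visibility histogram mentioned above measures how much each intensity range actually contributes to the rendered image under the current transfer function. The sketch below computes it with front-to-back compositing along one axis and fixed-width bins; the thesis's adaptive binning and ray setup are not reproduced, and the function names are illustrative.

```python
import numpy as np

def visibility_histogram(volume, opacity_tf, n_bins=8):
    """Per-intensity-bin visibility under front-to-back compositing
    along axis 0 -- a minimal sketch of the VH idea (fixed-width bins
    here; the thesis uses adaptive binning for efficiency)."""
    hist = np.zeros(n_bins)
    # Map each voxel intensity (assumed in [0, 1)) to a bin index.
    bins = np.clip((volume * n_bins).astype(int), 0, n_bins - 1)
    transmittance = np.ones(volume.shape[1:])   # light remaining per ray
    for depth in range(volume.shape[0]):
        alpha = opacity_tf(volume[depth])
        vis = transmittance * alpha             # visibility of this slab
        np.add.at(hist, bins[depth], vis)       # accumulate per intensity bin
        transmittance *= (1.0 - alpha)          # attenuate the rays
    return hist / max(hist.sum(), 1e-12)        # normalize to a distribution

vol = np.random.default_rng(1).random((8, 4, 4))
hist = visibility_histogram(vol, lambda v: 0.5 * v)   # toy linear opacity TF
```

A TF optimizer can then adjust opacities until the bins covering diagnostically important intensities (e.g. tumor tissue) receive a target share of the total visibility.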

    CT-PET guided target delineation in head and neck cancer and implications for improved outcome

    Aim: Fifty percent of patients with squamous cell carcinoma of the Head and Neck develop loco-regional recurrence after treatment. The factors leading to this failure are most likely altered intra-tumoural glucose metabolism and increased hypoxia. Tissue glucose utilisation and the degree of hypoxia can be visualised by CT-PET imaging with 18FDG and hypoxic radionuclides. This thesis investigated 18FDG CT-PET guided target volume delineation methods and attempted to validate 64Cu-ATSM as a hypoxic radionuclide in patients with squamous cell carcinoma of the Head and Neck. Materials and Methods: Eight patients with locally advanced disease underwent 18FDG CT-PET imaging before and during curative radiotherapy or chemo-radiotherapy. Fixed (SUV cut-off and percentage threshold of the SUVmax) and adaptive thresholds were investigated. The functional volumes automatically delineated by these methods and the SUVmax were compared at each time point and between thresholds. Four patients with locally advanced disease underwent 3D dynamic CT-PET imaging immediately after injection of 64Cu-ATSM, two to seven days prior to surgery. Two patients were also imaged 18 hours after injection, and two underwent a dynamic contrast-enhanced CT to evaluate intra-tumoural perfusion. All patients received pimonidazole before surgery. The pimonidazole, GLUT1, CAIX and HIF1a immunohistochemical hypoxic fractions were defined, and staining was correlated with the retention pattern of 64Cu-ATSM at three time points. Hypoxic target volumes were delineated according to tumour-to-muscle, tumour-to-blood and tumour-to-background ratios. Results: The 18FDG primary and lymph node target volumes delineated by the SUV cut-off method were significantly reduced with radiation dose, and this reduction correlated with the reduction in the SUVmax within the volume. Volume reduction was also found between thresholds for the same delineation method.
The volumes delineated by the other methods were not significantly reduced (except the lymph node functional volume when defined by the adaptive threshold). 64Cu-ATSM retention correlated with hypoxic immunohistochemical staining but not with blood flow. Tumour ratios increased with time after injection, which influenced the delineated hypoxic target volume. Conclusion: Dose-escalated image-guided radiotherapy strategies using these CT-PET guided functional volumes have the potential to improve loco-regional control in patients with squamous cell carcinoma of the Head and Neck. CT-PET 18FDG volume delineation is intricately linked to the delineation method and threshold and to the timing of the imaging. 64Cu-ATSM is promising as a hypoxic radionuclide and warrants further investigation.

    Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation

    Multimodal positron emission tomography-computed tomography (PET-CT) is used routinely in the assessment of cancer. PET-CT combines the high sensitivity of PET for tumor detection with the anatomical information of CT. Tumor segmentation is a critical element of PET-CT, but at present there is no accurate automated segmentation method. Segmentation tends to be done manually by different imaging experts, which is labor-intensive and prone to errors and inconsistency. Previous automated segmentation methods largely focused on fusing information extracted separately from the PET and CT modalities, with the underlying assumption that each modality contains complementary information. However, these methods do not fully exploit the high PET tumor sensitivity that can guide the segmentation. We introduce a multimodal spatial attention module (MSAM) that automatically learns to emphasize regions (spatial areas) related to tumors and suppress normal regions with physiologically high uptake. The resulting spatial attention maps are then employed to target a convolutional neural network (CNN) for the segmentation of areas with a higher tumor likelihood. Our MSAM can be applied to common backbone architectures and trained end-to-end. Our experimental results on two clinical PET-CT datasets of non-small cell lung cancer (NSCLC) and soft tissue sarcoma (STS) validate the effectiveness of the MSAM for these different cancer types. We show that our MSAM, with a conventional U-Net backbone, surpasses the state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice similarity coefficient (DSC).
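The core of the spatial attention mechanism can be sketched as a gating operation: a map derived from the PET branch reweights the CT features before segmentation. Everything below is an illustrative simplification — single-channel features, a plain sigmoid gate, and hypothetical names — not the paper's trained, convolutional MSAM.

```python
import numpy as np

def spatial_attention_fuse(pet_feat, ct_feat):
    """Sketch of the MSAM idea: derive a spatial attention map from
    the PET branch and use it to reweight the CT features before they
    reach the segmentation CNN. The sigmoid gate and single-channel
    features are illustrative assumptions, not the paper's exact
    (learned) architecture."""
    attention = 1.0 / (1.0 + np.exp(-pet_feat))   # (H, W) map in (0, 1)
    return attention * ct_feat                     # emphasized CT features

# Toy 2x2 features: the left column has high PET response (tumor-like),
# the right column has low response (normal tissue).
pet = np.array([[4.0, -4.0], [3.0, -3.0]])
ct = np.ones((2, 2))
fused = spatial_attention_fuse(pet, ct)
```

Because the gate is learned end-to-end with the segmentation loss, it can do what a fixed uptake threshold cannot: keep high-uptake tumor regions while suppressing physiologic hot spots such as the heart or bladder.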