
    Co-Segmentation Methods for Improving Tumor Target Delineation in PET-CT Images

    Positron emission tomography (PET)-computed tomography (CT) plays an important role in cancer management. As a multi-modal imaging technique, it provides both functional and anatomical information about tumor spread. Such information improves cancer treatment in many ways. One important use of PET-CT in cancer treatment is to facilitate radiotherapy planning, because the information it provides helps radiation oncologists better target the tumor region. However, most tumor delineations in radiotherapy planning are currently performed by manual segmentation, which is time-consuming and labor-intensive. Most computer-aided algorithms need a knowledgeable user to roughly locate the tumor area as a starting point, because in PET-CT imaging some tissues, such as the heart and kidneys, may exhibit a high level of activity similar to that of a tumor region. To address this issue, a novel co-segmentation method is proposed in this work to enhance the accuracy of tumor segmentation in PET-CT, and a localization algorithm is developed to differentiate and segment tumor regions from normal regions. On a combined dataset of 29 patients with lung tumors, the combined method shows good segmentation results as well as a good tumor recognition rate.

    Random Walk and Graph Cut for Co-Segmentation of Lung Tumor on PET-CT Images


    Topology polymorphism graph for lung tumor segmentation in PET-CT images

    Accurate lung tumor segmentation is problematic when the tumor boundary or edge, which reflects the advancing edge of the tumor, is difficult to discern on chest CT or PET. We propose a ‘topo-poly’ graph model to improve identification of the tumor extent. Our model incorporates an intensity graph and a topology graph. The intensity graph provides the joint PET-CT foreground similarity to differentiate the tumor from surrounding tissues. The topology graph is defined on the basis of a contour tree to reflect the inclusion and exclusion relationships of regions. By taking into account different topology relations, the edges in our model exhibit topological polymorphism. These polymorphic edges in turn affect the energy cost of crossing different topology regions under a random walk framework, and hence contribute to appropriate tumor delineation. We validated our method on 40 patients with non-small cell lung cancer whose tumors were manually delineated by a clinical expert. The studies were separated into an ‘isolated’ group (n = 20), where the lung tumor was located in the lung parenchyma and away from associated structures/tissues in the thorax, and a ‘complex’ group (n = 20), where the tumor abutted/involved a variety of adjacent structures and had heterogeneous FDG uptake. The methods were validated using the Dice similarity coefficient (DSC) to measure spatial volume overlap and the Hausdorff distance (HD), calculated as the maximum surface distance between the segmentation results and the manual delineations, to compare shape similarity. Our method achieved an average DSC of 0.881 ± 0.046 and HD of 5.311 ± 3.022 mm for the isolated cases, and a DSC of 0.870 ± 0.038 and HD of 9.370 ± 3.169 mm for the complex cases. Student’s t-test showed that our model outperformed the other methods (p-values < 0.05).
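    The two metrics used above are standard; as a minimal sketch, assuming NumPy/SciPy, boolean 3D masks and isotropic voxels (so distances come out in voxel units rather than mm), the DSC and Hausdorff distance can be computed as follows:

        import numpy as np
        from scipy.ndimage import binary_erosion

        def dice_coefficient(seg, ref):
            # Dice similarity coefficient: spatial volume overlap of two boolean masks.
            seg, ref = seg.astype(bool), ref.astype(bool)
            intersection = np.logical_and(seg, ref).sum()
            return 2.0 * intersection / (seg.sum() + ref.sum())

        def hausdorff_distance(seg, ref):
            # Maximum surface distance between two segmentations, in voxel units.
            def surface_points(mask):
                mask = mask.astype(bool)
                return np.argwhere(mask & ~binary_erosion(mask))
            a, b = surface_points(seg), surface_points(ref)
            pairwise = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            return max(pairwise.min(axis=1).max(), pairwise.min(axis=0).max())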

    Computational delineation and quantitative heterogeneity analysis of lung tumor on 18F-FDG PET for radiation dose-escalation

    Quantitative measurement and analysis of tumor metabolic activity could provide a more effective solution for personalized, accurate dose painting. We collected PET images of 58 lung cancer patients in which the tumors exhibit heterogeneous FDG uptake. We designed an automated delineation and quantitative heterogeneity measurement of the lung tumor for dose-escalation. For tumor delineation, our algorithm first separates the tumor from its adjacent high-uptake tissues using 3D projection masks; the tumor boundary is then delineated with our stopping criterion of joint gradient and intensity affinities. For dose-escalation, tumor sub-volumes with low, moderate and high metabolic activities are extracted and measured. Based on our quantitative heterogeneity measurement, a sub-volume-oriented dose-escalation plan is implemented in an intensity-modulated radiation therapy (IMRT) planning system. With respect to manual tumor delineations by two radiation oncologists, the paired t-test demonstrated that our model outperformed the other computational methods in comparison (p < 0.05).
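    As a rough illustration of the sub-volume idea only (not the authors' algorithm), a delineated tumor can be partitioned into low, moderate and high FDG-uptake sub-volumes by thresholding the SUV map at fractions of its maximum; the 40% and 70% cut-offs below are assumptions for the sketch:

        import numpy as np

        def metabolic_subvolumes(suv, tumor_mask, cuts=(0.4, 0.7)):
            # Partition the delineated tumor into low / moderate / high uptake
            # sub-volumes using hypothetical fractions of SUVmax inside the mask.
            suv_max = suv[tumor_mask].max()
            low = tumor_mask & (suv < cuts[0] * suv_max)
            moderate = tumor_mask & (suv >= cuts[0] * suv_max) & (suv < cuts[1] * suv_max)
            high = tumor_mask & (suv >= cuts[1] * suv_max)
            return low, moderate, high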

    Multi-Modality Automatic Lung Tumor Segmentation Method Using Deep Learning and Radiomics

    Delineation of the tumor volume is the initial and fundamental step in the radiotherapy planning process. The current clinical practice of manual delineation is time-consuming and suffers from observer variability. This work seeks to develop an effective automatic framework to produce clinically usable lung tumor segmentations. First, to facilitate the development and validation of our methodology, an expansive database of planning CTs, diagnostic PETs, and manual tumor segmentations was curated, and an image registration and preprocessing pipeline was established. Then a deep learning neural network was constructed and optimized to utilize dual-modality PET and CT images for lung tumor segmentation. The feasibility of incorporating radiomics and other mechanisms, such as a tumor volume-based stratification scheme for training/validation/testing, was investigated to improve the segmentation performance. The proposed methodology was evaluated both quantitatively with similarity metrics and clinically with physician reviews. In addition, external validation with an independent database was also conducted. Our work addressed some of the major limitations that restricted the clinical applicability of existing approaches and produced automatic segmentations that were consistent with the manually contoured ground truth and highly clinically acceptable according to both the quantitative and clinical evaluations. Both the tumor volume-based training/validation/testing stratification strategy and the incorporation of voxel-wise radiomics feature images were shown to improve the segmentation performance. The results showed that the proposed method was effective and robust, producing automatic lung tumor segmentations that could potentially improve both the quality and consistency of manual tumor delineation.
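    The tumor volume-based stratification mentioned above can be sketched as follows, assuming per-case tumor volumes are available; the three volume bins and the 70/15/15 split ratios are illustrative assumptions, not the settings used in the work:

        import numpy as np

        def volume_stratified_split(volumes, ratios=(0.70, 0.15, 0.15), n_bins=3, seed=0):
            # Sort cases by tumor volume, cut them into volume bins, and fill the
            # train/validation/test splits from every bin so that each split
            # covers small, medium and large tumors.
            rng = np.random.default_rng(seed)
            order = np.argsort(volumes)
            splits = {"train": [], "validation": [], "test": []}
            for bin_indices in np.array_split(order, n_bins):
                bin_indices = rng.permutation(bin_indices)
                n_train = int(round(ratios[0] * len(bin_indices)))
                n_val = int(round(ratios[1] * len(bin_indices)))
                splits["train"].extend(bin_indices[:n_train].tolist())
                splits["validation"].extend(bin_indices[n_train:n_train + n_val].tolist())
                splits["test"].extend(bin_indices[n_train + n_val:].tolist())
            return splits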

    Learning Algorithms for Fat Quantification and Tumor Characterization

    Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis among all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools to aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, the first part of the dissertation focuses on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. This dissertation proposes an automatic body region detection method trained with only a single example. Then a new fat quantification approach based on geometric and appearance characteristics is proposed. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNN), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. To address the unavailability of the large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are also proposed. We evaluate our proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges, lung and pancreas, with 1018 CT and 171 MRI scans, respectively. The proposed segmentation, quantification and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision-making in routine clinical practice.

    Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation

    Multimodal positron emission tomography-computed tomography (PET-CT) is used routinely in the assessment of cancer. PET-CT combines the high sensitivity of PET for tumor detection with the anatomical information from CT. Tumor segmentation is a critical element of PET-CT analysis, but at present there is no accurate automated segmentation method. Segmentation tends to be done manually by different imaging experts, which is labor-intensive and prone to errors and inconsistency. Previous automated segmentation methods largely focused on fusing information extracted separately from the PET and CT modalities, with the underlying assumption that each modality contains complementary information. However, these methods do not fully exploit the high PET tumor sensitivity that can guide the segmentation. We introduce a multimodal spatial attention module (MSAM) that automatically learns to emphasize regions (spatial areas) related to tumors and suppress normal regions with physiologically high uptake. The resulting spatial attention maps are subsequently employed to target a convolutional neural network (CNN) for segmentation of areas with higher tumor likelihood. Our MSAM can be applied to common backbone architectures and trained end-to-end. Our experimental results on two clinical PET-CT datasets of non-small cell lung cancer (NSCLC) and soft tissue sarcoma (STS) validate the effectiveness of the MSAM for these different cancer types. We show that our MSAM, with a conventional U-Net backbone, surpasses the state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice similarity coefficient (DSC).
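    As a loose sketch of the attention idea only (not the published MSAM architecture), a small convolutional branch over the PET volume can predict a spatial attention map that re-weights the fused PET-CT features of a segmentation backbone; the PyTorch layer sizes below are illustrative assumptions:

        import torch
        import torch.nn as nn

        class PETSpatialAttention(nn.Module):
            # Predicts a per-voxel attention map from the PET volume and uses it
            # to gate the fused PET-CT feature maps of a backbone such as a U-Net.
            def __init__(self, hidden_channels=16):
                super().__init__()
                self.attention = nn.Sequential(
                    nn.Conv3d(1, hidden_channels, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv3d(hidden_channels, 1, kernel_size=3, padding=1),
                    nn.Sigmoid(),  # attention values in [0, 1]
                )

            def forward(self, pet, fused_features):
                # pet: (B, 1, D, H, W); fused_features: (B, C, D, H, W)
                attention_map = self.attention(pet)
                return fused_features * attention_map  # emphasize likely tumor regions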

    Fast and Accurate Lung Tumor Spotting and Segmentation for Boundary Delineation on CT Slices in a Coarse-to-Fine Framework

    Label noise and class imbalance are two critical challenges when training image-based deep neural networks, especially in the biomedical image processing domain. Our work focuses on how to address these two challenges effectively and accurately in the task of lesion segmentation from biomedical/medical images. To address the pixel-level label noise problem, we propose an advanced transfer training and learning approach with a detailed DICOM pre-processing method. To address the tumor/non-tumor class imbalance problem, we exploit a self-adaptive fully convolutional neural network with an automated weight distribution mechanism to accurately spot the Radiomics lung tumor regions. Furthermore, an improved conditional random field method is employed to obtain sophisticated lung tumor contour delineation and segmentation. Finally, our approach has been evaluated using several well-known evaluation metrics on the lung tumor segmentation dataset used in the 2018 IEEE VIP-CUP Challenge. Experimental results show that our weakly supervised learning algorithm outperforms other deep models and state-of-the-art approaches.
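    One common way to counter the tumor/non-tumor class imbalance described above is to weight the segmentation loss by inverse class frequency; the PyTorch sketch below is a generic illustration of that idea, not the paper's self-adaptive weight distribution mechanism:

        import torch
        import torch.nn.functional as F

        def frequency_weighted_cross_entropy(logits, target):
            # logits: (B, 2, H, W); target: (B, H, W) long tensor, 0 = background, 1 = tumor.
            counts = torch.bincount(target.flatten(), minlength=2).float().clamp(min=1)
            weights = counts.sum() / (2.0 * counts)  # the rarer class gets a larger weight
            return F.cross_entropy(logits, target, weight=weights)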