
    Random Walk and Graph Cut for Co-Segmentation of Lung Tumor on PET-CT Images


    Fast and Accurate Lung Tumor Spotting and Segmentation for Boundary Delineation on CT Slices In A Coarse-To-Fine Framework

    Label noise and class imbalance are two critical challenges when training image-based deep neural networks, especially in the biomedical image processing domain. Our work focuses on addressing these two challenges effectively and accurately in the task of lesion segmentation from biomedical/medical images. To address the pixel-level label noise problem, we propose an advanced transfer-learning approach with a detailed DICOM pre-processing method. To address the tumor/non-tumor class imbalance problem, we exploit a self-adaptive fully convolutional neural network with an automated weight distribution mechanism to accurately spot the lung tumor regions in the Radiomics data. Furthermore, an improved conditional random field method is employed to obtain sophisticated lung tumor contour delineation and segmentation. Finally, our approach has been evaluated using several well-known evaluation metrics on the lung tumor segmentation dataset used in the 2018 IEEE VIP-CUP Challenge. Experimental results show that our weakly supervised learning algorithm outperforms other deep models and state-of-the-art approaches.
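
    As an illustration of the automated weight distribution idea, the sketch below derives class weights from the tumor/non-tumor pixel counts of each training batch. This is a minimal PyTorch sketch; the inverse-frequency rule and the two-class setup are illustrative assumptions, not the authors' exact mechanism.

```python
import torch
import torch.nn.functional as F

def adaptive_weighted_ce(logits, target):
    """Cross-entropy whose class weights are re-derived from each batch.

    logits: (N, 2, H, W) raw scores for the non-tumor/tumor classes.
    target: (N, H, W) integer labels in {0, 1}.
    """
    # Per-batch pixel counts for each class.
    counts = torch.bincount(target.flatten(), minlength=2).float()
    # Inverse-frequency weights; the clamp avoids division by zero when
    # a batch happens to contain no tumor pixels at all.
    weights = counts.sum() / (2.0 * counts.clamp(min=1.0))
    return F.cross_entropy(logits, target, weight=weights.to(logits.device))
```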

    Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation

    Multimodal positron emission tomography-computed tomography (PET-CT) is used routinely in the assessment of cancer. PET-CT combines the high sensitivity of PET for tumor detection with the anatomical information of CT. Tumor segmentation is a critical element of PET-CT analysis, but at present there is no accurate automated segmentation method. Segmentation tends to be done manually by different imaging experts, which is labor-intensive and prone to errors and inconsistency. Previous automated segmentation methods largely focused on fusing information extracted separately from the PET and CT modalities, with the underlying assumption that each modality contains complementary information. However, these methods do not fully exploit the high PET tumor sensitivity that can guide the segmentation. We introduce a multimodal spatial attention module (MSAM) that automatically learns to emphasize regions (spatial areas) related to tumors and suppress normal regions with physiologically high uptake. The resulting spatial attention maps are subsequently employed to target a convolutional neural network (CNN) toward the segmentation of areas with higher tumor likelihood. Our MSAM can be applied to common backbone architectures and trained end-to-end. Our experimental results on two clinical PET-CT datasets of non-small cell lung cancer (NSCLC) and soft tissue sarcoma (STS) validate the effectiveness of the MSAM in these different cancer types. We show that our MSAM, with a conventional U-Net backbone, surpasses the state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice similarity coefficient (DSC).
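
    The sketch below illustrates the spatial attention idea in a minimal PyTorch form: a small convolutional branch on the PET input predicts a per-pixel attention map that reweights the CT feature maps of a segmentation backbone such as a U-Net. The layer sizes, the sigmoid gating, and the 2D setting are illustrative assumptions, not the published MSAM architecture.

```python
import torch
import torch.nn as nn

class SpatialAttentionPET(nn.Module):
    """PET-driven spatial attention applied to CT feature maps."""

    def __init__(self, pet_channels=1, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(pet_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-pixel attention weight in [0, 1]
        )

    def forward(self, pet, ct_features):
        # pet: (N, 1, H, W); ct_features: (N, C, H, W), same spatial size.
        attn = self.net(pet)
        return ct_features * attn  # broadcasts over the channel dimension
```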

    Unsupervised supervoxel-based lung tumor segmentation across patient scans in hybrid PET/MRI

    Tumor segmentation is a crucial but difficult task in the treatment planning and follow-up of cancer patients. The challenge of automating tumor segmentation has recently received considerable attention, but the potential of hybrid positron emission tomography (PET)/magnetic resonance imaging (MRI), a novel and promising imaging modality in oncology, is still under-explored. Recent approaches have either relied on manual user input and/or performed the segmentation patient-by-patient, whereas a fully unsupervised segmentation framework that exploits the available information from all patients is still lacking. We present an unsupervised across-patients supervoxel-based clustering framework for lung tumor segmentation in hybrid PET/MRI. The method consists of two steps: first, each patient is represented by a set of PET/MRI supervoxel features; then the data points from all patients are transformed and clustered on a population level into tumor and non-tumor supervoxels. The proposed framework is tested on the scans of 18 non-small cell lung cancer patients with a total of 19 tumors and evaluated with respect to manual delineations provided by clinicians. Experiments study the performance of several commonly used clustering algorithms within the framework and provide analysis of (i) the effect of tumor size, (ii) the segmentation errors, (iii) the benefit of across-patient clustering, and (iv) the noise robustness. The proposed framework detected 15 out of 19 tumors in an unsupervised manner. Moreover, performance increased considerably by segmenting across patients, with the mean Dice score increasing from 0.169 ± 0.295 (patient-by-patient) to 0.470 ± 0.308 (across-patients). The results demonstrate that both spectral clustering and Manhattan hierarchical clustering have the potential to segment tumors in PET/MRI with a low number of missed tumors and a low number of false positives, but that spectral clustering appears to be more robust to noise.
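
    A minimal sketch of the across-patients clustering step, assuming supervoxel feature vectors have already been extracted per patient. Spectral clustering (one of the algorithms evaluated) is shown; the feature standardization and the two-cluster tumor/non-tumor setup are simplifying assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import SpectralClustering

def cluster_across_patients(per_patient_features, n_clusters=2):
    """per_patient_features: list of (n_supervoxels_i, n_features) arrays."""
    # Pool the supervoxels of all patients into one population-level matrix.
    X = np.vstack(per_patient_features)
    X = StandardScaler().fit_transform(X)  # common feature scale
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="nearest_neighbors",
        random_state=0).fit_predict(X)
    # Split the population-level labels back per patient.
    sizes = [len(f) for f in per_patient_features]
    return np.split(labels, np.cumsum(sizes)[:-1])
```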

    Reproducibility Study of Tumor Biomarkers Extracted from Positron Emission Tomography Images with 18F-Fluorodeoxyglucose

    Introduction and aim: Cancer is one of the main causes of death worldwide. Tumor diagnosis, staging, surveillance, prognosis and assessment of the response to therapy are critical when planning and analyzing the optimal treatment strategies for cancer. 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) imaging has provided reliable prognostic factors in several cancer types by extracting quantitative measures from the images obtained in clinics. The recent addition of digital equipment to the clinical armamentarium of PET raises concerns regarding inter-device data variability. Consequently, assessing the reproducibility of tumor features, as commonly used in clinics and research, extracted from images acquired on analog and new digital PET equipment is of paramount importance for multi-scanner studies, particularly longitudinal patient studies. The aim of this study was to evaluate the inter-equipment reliability of a set of 25 lesion features commonly used in clinics and research.
    Material and methods: In order to assess feature agreement, a dual imaging protocol was designed. Whole-body 18F-FDG PET images from 53 oncological patients were acquired, after a single 18F-FDG injection, with two devices alternately: a Philips Vereos Digital PET/CT (VEREOS, with three different reconstruction protocols; digital) and a Philips GEMINI TF-16 (GEMINI, with a single standard reconstruction protocol; analog). A nuclear medicine physician identified 283 18F-FDG-avid lesions. All lesions (from both devices) were then automatically segmented based on a Bayesian classifier optimized for this study. In total, 25 features (first-order statistics and geometric features) were computed and compared. The intraclass correlation coefficient (ICC) was used as the measure of agreement.
    Results: High agreement (ICC > 0.75) was obtained for most of the lesion features extracted from the imaging data of both devices, for all (GEMINI vs VEREOS) reconstructions. The most frequently used lesion features, maximum standardized uptake value, metabolic tumor volume, and total lesion glycolysis, reached maximum ICCs of 0.90, 0.98 and 0.97, respectively.
    Conclusions: Under controlled acquisition and reconstruction parameters, most of the features studied can be used for research and clinical work whenever multiple scanners (e.g. VEREOS and GEMINI) are involved, particularly during longitudinal patient evaluation.
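
    For illustration, the sketch below computes a two-way random-effects, absolute-agreement ICC(2,1) for one feature measured on both devices. The specific ICC variant is an assumption, since the abstract does not state which model was used.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) for a (n_lesions, n_devices) array of one feature's values."""
    n, k = ratings.shape
    grand = ratings.mean()
    # Mean squares of the two-way ANOVA decomposition.
    ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    sse = ((ratings - ratings.mean(axis=1, keepdims=True)
            - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = sse / ((n - 1) * (k - 1))
    # Shrout & Fleiss ICC(2,1): absolute agreement, single measurement.
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```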

    Multi-Modality Automatic Lung Tumor Segmentation Method Using Deep Learning and Radiomics

    Delineation of the tumor volume is the initial and fundamental step in the radiotherapy planning process. The current clinical practice of manual delineation is time-consuming and suffers from observer variability. This work seeks to develop an effective automatic framework to produce clinically usable lung tumor segmentations. First, to facilitate the development and validation of our methodology, an expansive database of planning CTs, diagnostic PETs, and manual tumor segmentations was curated, and an image registration and preprocessing pipeline was established. Then a deep learning neural network was constructed and optimized to utilize dual-modality PET and CT images for lung tumor segmentation. The feasibility of incorporating radiomics and other mechanisms, such as a tumor volume-based training/validation/testing stratification scheme, was investigated to improve the segmentation performance. The proposed methodology was evaluated both quantitatively with similarity metrics and clinically with physician reviews. In addition, external validation with an independent database was conducted. Our work addressed some of the major limitations that restricted the clinical applicability of existing approaches and produced automatic segmentations that were consistent with the manually contoured ground truth and highly clinically acceptable according to both the quantitative and clinical evaluations. Both novel approaches, implementing a tumor volume-based training/validation/testing stratification strategy and incorporating voxel-wise radiomics feature images, were shown to improve the segmentation performance. The results showed that the proposed method is effective and robust, producing automatic lung tumor segmentations that could potentially improve both the quality and consistency of manual tumor delineation.
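
    A minimal sketch of a tumor volume-based stratification scheme, assuming one volume value per case. The quantile binning and split ratio are illustrative choices, not necessarily those used in this work.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def volume_stratified_split(case_ids, volumes, test_size=0.2, seed=0):
    """Split cases so each subset sees a similar tumor-size distribution."""
    # Bin cases into four strata by tumor volume quartiles.
    bins = np.quantile(volumes, [0.25, 0.5, 0.75])
    strata = np.digitize(volumes, bins)
    # Applying this twice yields a three-way train/validation/test split.
    return train_test_split(case_ids, test_size=test_size,
                            stratify=strata, random_state=seed)
```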