22 research outputs found

    Artificial intelligence-based analysis of whole-body bone scintigraphy: The quest for the optimal deep learning algorithm and comparison with human observer performance

    Purpose: Whole-body bone scintigraphy (WBS) is one of the most widely used modalities for diagnosing malignant bone diseases during the early stages. However, the procedure is time-consuming and requires vigour and experience. Moreover, interpretation of WBS scans in the early stages of the disorders can be challenging because the patterns often resemble normal appearance and are prone to subjective interpretation. To simplify the gruelling, subjective, and error-prone task of interpreting WBS scans, we developed deep learning (DL) models to automate two major analyses, namely (i) classification of scans into normal and abnormal and (ii) discrimination between malignant and non-neoplastic bone diseases, and compared their performance with human observers. Materials and Methods: After applying our exclusion criteria to 7188 patients from three different centers, 3772 and 2248 patients were enrolled for the first and second analyses, respectively. Data were split into training and testing sets, with a fraction of the training data set aside for validation. Ten different CNN models were applied to single- and dual-view input (posterior and anterior views) modes to find the optimal model for each analysis. In addition, three different methods, including squeeze-and-excitation (SE), spatial pyramid pooling (SPP), and attention-augmented (AA), were used to aggregate the features for dual-view input models. Model performance was reported through area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity and was compared with the DeLong test applied to ROC curves. The test dataset was evaluated by three nuclear medicine physicians (NMPs) with different levels of experience to compare the performance of AI and human observers. 
Results: DenseNet121_AA (DenseNet121 with dual-view input aggregated by AA) and InceptionResNetV2_SPP achieved the highest performance (AUC = 0.72) for the first and second analyses, respectively. Moreover, on average, in the first analysis, the Inception V3 and InceptionResNetV2 CNN models and dual-view input with the AA aggregating method had superior performance. In addition, in the second analysis, DenseNet121 and InceptionResNetV2 as CNN methods and dual-view input with the AA aggregating method achieved the best results. Notably, the performance of AI models was significantly higher than that of human observers for the first analysis, whereas their performance was comparable in the second analysis, although the AI model assessed the scans in drastically less time. Conclusion: Using the models designed in this study, a positive step can be taken toward improving and optimizing WBS interpretation. By training DL models with larger and more diverse cohorts, AI could potentially be used to assist physicians in the assessment of WBS images. © 2023 The Author(s).
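The abstract above does not specify the internals of the attention-augmented (AA) dual-view aggregation, so the following is only a minimal NumPy sketch of one plausible form: softmax-weighted fusion of the anterior and posterior per-view feature vectors, with a mean-derived query standing in for a learned attention layer.

```python
import numpy as np

def attention_fuse(anterior_feat, posterior_feat):
    """Fuse two per-view CNN feature vectors with attention-style weights.

    Illustrative only: the attention scores come from a scaled dot product
    with a mean-derived query, a stand-in for a trained attention module.
    """
    feats = np.stack([anterior_feat, posterior_feat])     # (2, d)
    query = feats.mean(axis=0)                            # stand-in for a learned query
    scores = feats @ query / np.sqrt(feats.shape[1])      # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                              # softmax over the two views
    return weights @ feats                                # (d,) fused descriptor

fused = attention_fuse(np.array([1.0, 0.0, 2.0]), np.array([0.5, 1.0, 1.5]))
print(fused.shape)  # (3,)
```

Because the weights form a convex combination, each fused component lies between the corresponding anterior and posterior values; a trained AA module would instead learn the query and per-view projections end to end.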

    Deep-JASC: joint attenuation and scatter correction in whole-body 18F-FDG PET using a deep residual network

    Objective: We demonstrate the feasibility of direct generation of attenuation and scatter-corrected images from uncorrected images (PET-nonASC) using deep residual networks in whole-body 18F-FDG PET imaging. Methods: Two- and three-dimensional deep residual networks using 2D successive slices (DL-2DS), 3D slices (DL-3DS) and 3D patches (DL-3DP) as input were constructed to perform joint attenuation and scatter correction on uncorrected whole-body images in an end-to-end fashion. We included 1150 clinical whole-body 18F-FDG PET/CT studies, among which 900, 100 and 150 patients were randomly partitioned into training, validation and independent validation sets, respectively. The images generated by the proposed approach were assessed using various evaluation metrics, including the root-mean-squared error (RMSE) and absolute relative error (ARE), using CT-based attenuation and scatter-corrected (CTAC) PET images as reference. PET image quantification variability was also assessed through voxel-wise standardized uptake value (SUV) bias calculation in different regions of the body (head, neck, chest, liver-lung, abdomen and pelvis). Results: Our proposed attenuation and scatter correction (Deep-JASC) algorithm provided good image quality, comparable with that produced by CTAC. Across the 150 patients of the independent external validation set, the voxel-wise relative errors (%) were −1.72 ± 4.22, 3.75 ± 6.91 and −3.08 ± 5.64 for DL-2DS, DL-3DS and DL-3DP, respectively. Overall, the DL-2DS approach led to superior performance compared with the other two 3D approaches. The brain and neck regions had the highest and lowest RMSE values between Deep-JASC and CTAC images, respectively. However, the largest ARE was observed in the chest (15.16 ± 3.96) and liver/lung (11.18 ± 3.23) regions for DL-2DS. DL-3DS and DL-3DP performed slightly better in the chest region, leading to AREs of 11.16 ± 3.42 and 11.69 ± 2.71, respectively (p value < 0.05). 
The joint histogram analysis resulted in correlation coefficients of 0.985, 0.980 and 0.981 for the DL-2DS, DL-3DS and DL-3DP approaches, respectively. Conclusion: This work demonstrated the feasibility of direct attenuation and scatter correction of whole-body 18F-FDG PET images using emission-only data via a deep residual network. The proposed approach achieved accurate attenuation and scatter correction without the need for anatomical images, such as CT and MRI. The technique is applicable in a clinical setting on standalone PET or PET/MRI systems. Nevertheless, although Deep-JASC showed promising quantitative accuracy, vulnerability to noise was observed, leading to pseudo hot/cold spots and/or poor organ boundary definition in the resulting PET images. © 2020, Springer-Verlag GmbH Germany, part of Springer Nature.
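The RMSE and ARE metrics computed against the CTAC reference can be sketched as follows; this is a minimal NumPy illustration on toy values, not the study's evaluation code, and the percent convention for ARE is an assumption consistent with the reported figures.

```python
import numpy as np

def rmse(pred, ref):
    """Root-mean-squared error between a corrected PET volume and the reference."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def absolute_relative_error(pred, ref, eps=1e-9):
    """Mean absolute error relative to the reference, in percent.

    eps guards against division by zero in low-activity voxels.
    """
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(100.0 * np.mean(np.abs(pred - ref) / (np.abs(ref) + eps)))

# Toy example: "predicted" vs "reference" SUV values
print(rmse([2.0, 2.0], [1.0, 2.0]))                     # ~0.707
print(absolute_relative_error([2.0, 2.0], [1.0, 2.0]))  # ~50.0
```

In practice both metrics would be evaluated per body region (head, neck, chest, liver-lung, abdomen, pelvis) over masked voxels rather than on whole flattened volumes.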

    Dual-Centre Harmonised Multimodal Positron Emission Tomography/Computed Tomography Image Radiomic Features and Machine Learning Algorithms for Non-small Cell Lung Cancer Histopathological Subtype Phenotype Decoding

    Aims: We aimed to build radiomic models for classifying non-small cell lung cancer (NSCLC) histopathological subtypes through a dual-centre dataset and to comprehensively evaluate the effect of ComBat harmonisation on the performance of single- and multimodality radiomic models. Materials and methods: A public dataset of NSCLC patients from two independent centres was used. Two image fusion methods, namely guided filtering-based fusion and image fusion based on visual saliency map and weighted least square optimisation, were used. Radiomic features were extracted from each scan, including first-order, texture and moment-invariant features. Subsequently, ComBat harmonisation was applied to the extracted features from computed tomography (CT), positron emission tomography (PET) and fused images to correct the centre effect. For feature selection, least absolute shrinkage and selection operator (Lasso) and recursive feature elimination (RFE) were investigated. For machine learning, logistic regression (LR), support vector machine (SVM) and AdaBoost were evaluated for classifying NSCLC subtypes. Training and evaluation of the models were carried out in a robust framework to offset plausible errors, and performance was reported using area under the curve, balanced accuracy, sensitivity and specificity before and after harmonisation. N-way ANOVA was used to assess the effect of different factors on the performance of the models. Results: A support vector machine fed with features selected by recursive feature elimination from a harmonised PET feature set achieved the highest performance (area under the curve = 0.82) in classifying NSCLC histopathological subtypes. Although the performance of the models did not significantly improve for CT images after harmonisation, the performance of PET and guided filtering-based fusion feature signatures significantly improved for almost all models. 
Although the choice of image modality and feature selection method affected model performance (ANOVA P-values <0.001), machine learning and harmonisation did not change the performance significantly (ANOVA P-values = 0.839 and 0.292, respectively). Conclusion: This study confirmed the potential of radiomic analysis on PET, CT and hybrid images for histopathological classification of NSCLC subtypes.
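The best-performing combination reported above (RFE feature selection feeding a linear SVM, scored by AUC) can be sketched with scikit-learn. The data here are synthetic stand-ins for the radiomic feature matrix, and the per-centre ComBat harmonisation step that the study applies beforehand is omitted.

```python
# Sketch of the selection + classification stage: RFE wrapped around a linear
# SVM, evaluated with ROC AUC on a held-out split. Synthetic features stand in
# for the harmonised PET radiomic feature set.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    RFE(SVC(kernel="linear"), n_features_to_select=10),  # drop weakest features
    SVC(kernel="linear", probability=True),              # final classifier
)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(round(auc, 2))
```

Wrapping scaling, selection and classification in one pipeline keeps the RFE step inside cross-validation folds, avoiding the selection leakage that the study's "robust framework" is designed to offset.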

    Robust identification of Parkinson's disease subtypes using radiomics and hybrid machine learning

    Objectives: It is important to subdivide Parkinson's disease (PD) into subtypes, enabling potentially earlier disease recognition and tailored treatment strategies. We aimed to identify reproducible PD subtypes robust to variations in the number of patients and features. Methods: We applied multiple feature-reduction and cluster-analysis methods to cross-sectional and timeless data, extracted from longitudinal datasets (years 0, 1, 2 & 4; Parkinson's Progressive Marker Initiative; 885 PD/163 healthy-control visits; 35 datasets with combinations of non-imaging, conventional-imaging, and radiomics features from DAT-SPECT images). Hybrid machine-learning systems were constructed invoking 16 feature-reduction algorithms, 8 clustering algorithms, and 16 classifiers (C-index clustering evaluation used on each trajectory). We subsequently performed: i) identification of optimal subtypes, ii) multiple independent tests to assess reproducibility, iii) further confirmation by a statistical approach, and iv) a test of reproducibility with respect to sample size. Results: When using no radiomics features, the clusters were not robust to variations in features, whereas utilizing radiomics information enabled consistent generation of clusters through ensemble analysis of trajectories. We arrived at 3 distinct subtypes, confirmed using the training and testing process of k-means, as well as Hotelling's T2 test. The 3 identified PD subtypes were 1) mild; 2) intermediate; and 3) severe, especially in terms of dopaminergic deficit (imaging), with some escalating motor and non-motor manifestations. Conclusion: Appropriate hybrid systems and independent statistical tests enable robust identification of 3 distinct PD subtypes. This was assisted by utilizing radiomics features from SPECT images (segmented using MRI). The PD subtypes provided were robust to the number of subjects and features. © 2020 Elsevier Ltd.
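One trajectory of such a hybrid system (feature reduction, then clustering, then a reproducibility check) can be sketched with scikit-learn. The PCA/k-means/bootstrap choices below are illustrative stand-ins for the 16 reduction and 8 clustering algorithms actually evaluated, and synthetic blobs stand in for the PPMI features.

```python
# Minimal sketch of one hybrid trajectory: feature reduction (PCA) followed by
# k-means with k = 3, with reproducibility checked by refitting on a bootstrap
# resample and comparing the two labelings via adjusted Rand index (ARI).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=300, centers=3, n_features=20, random_state=0)
Z = PCA(n_components=5, random_state=0).fit_transform(X)  # feature reduction

labels_full = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)

rng = np.random.default_rng(0)
boot = rng.choice(len(Z), size=len(Z), replace=True)      # bootstrap resample
labels_boot = KMeans(n_clusters=3, n_init=10, random_state=1).fit(Z[boot]).predict(Z)

ari = adjusted_rand_score(labels_full, labels_boot)       # high (near 1) when stable
print(round(ari, 2))
```

An ensemble of many such trajectories, each scored for cluster quality (the study uses the C-index) and label stability, is what allows the claim that the three subtypes are robust to the choice of samples and features.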

    Noninvasive Fuhrman grading of clear cell renal cell carcinoma using computed tomography radiomic features and machine learning

    Purpose: To identify optimal classification methods for computed tomography (CT) radiomics-based preoperative prediction of clear cell renal cell carcinoma (ccRCC) grade. Materials and methods: Seventy-one ccRCC patients (31 low grade and 40 high grade) were included in this study. Tumors were manually segmented on CT images, followed by the application of three image preprocessing techniques (Laplacian of Gaussian, wavelet filter, and discretization of the intensity values) on delineated tumor volumes. Overall, 2530 radiomics features (tumor shape and size, intensity statistics, and texture) were extracted from each segmented tumor volume. Univariate analysis was performed to assess the association between each feature and the histological condition. Multivariate analysis involved the use of machine learning (ML) algorithms and the following three feature selection algorithms: the least absolute shrinkage and selection operator, Student's t test, and minimum Redundancy Maximum Relevance. These selected features were then used to construct three classification models (SVM, random forest, and logistic regression) to discriminate high- from low-grade ccRCC at nephrectomy. Lastly, multivariate model performance was evaluated on the bootstrapped validation cohort using the area under the receiver operating characteristic curve (AUC) metric. Results: The univariate analysis demonstrated that among the different image sets, 128-bin discretized images had statistically significantly different texture parameters, with a mean AUC of 0.74 ± 3 (q value < 0.05). The three ML-based classifiers showed proficient discrimination between high- and low-grade ccRCC. The AUC was 0.78 for logistic regression, 0.62 for random forest, and 0.83 for the SVM model. Conclusion: CT radiomic features can be considered a useful and promising noninvasive methodology for preoperative evaluation of ccRCC Fuhrman grades. © 2020, Italian Society of Medical Radiology.
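The fixed-bin-count intensity discretisation mentioned among the preprocessing steps (the 128-bin image set performed best in the univariate analysis) can be sketched as follows; the `discretize` helper and the synthetic volume are illustrative, not the study's implementation.

```python
# Sketch of fixed-bin-count intensity discretisation (here 128 bins) applied
# to a tumour volume before texture-feature extraction.
import numpy as np

def discretize(volume, n_bins=128):
    """Map voxel intensities to integer bin labels 1..n_bins (fixed bin count)."""
    vmin, vmax = volume.min(), volume.max()
    edges = np.linspace(vmin, vmax, n_bins + 1)   # n_bins equal-width bins
    return np.digitize(volume, edges[1:-1]) + 1   # 1-based bin labels

vol = np.random.default_rng(0).normal(size=(8, 8, 8))  # synthetic "tumour" volume
d = discretize(vol)
print(d.min(), d.max())  # 1 128
```

Discretisation bounds the grey-level range, which is what makes texture matrices such as GLCM or GLRLM tractable to compute and comparable across patients.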