
    COPD identification and grading based on deep learning of lung parenchyma and bronchial wall in chest CT images

    OBJECTIVE: Chest CT can display the main pathogenic factors of chronic obstructive pulmonary disease (COPD), emphysema and airway wall remodeling. This study aims to establish deep convolutional neural network (CNN) models using these two imaging markers to diagnose and grade COPD. METHODS: Subjects who underwent chest CT and pulmonary function test (PFT) from one hospital (n = 373) were retrospectively included as the training cohort, and subjects from another hospital (n = 226) were used as the external test cohort. According to the PFT results, all subjects were labeled as Global Initiative for Chronic Obstructive Lung Disease (GOLD) Grade 1, 2, 3, 4 or normal. Two DenseNet-201 CNNs were trained using CT images of lung parenchyma and bronchial wall to generate two corresponding confidence levels to indicate the possibility of COPD, then combined with logistic regression analysis. Quantitative CT was used for comparison. RESULTS: In the test cohort, CNN achieved an area under the curve of 0.899 (95%CI: 0.853-0.935) to determine the existence of COPD, and an accuracy of 81.7% (76.2-86.7%), which was significantly higher than the accuracy 68.1% (61.6%-74.2%) using quantitative CT method (p < 0.05). For three-way (normal, GOLD 1-2, and GOLD 3-4) and five-way (normal, GOLD 1, 2, 3, and 4) classifications, CNN reached accuracies of 77.4 and 67.9%, respectively. CONCLUSION: CNN can identify emphysema and airway wall remodeling on CT images to infer lung function and determine the existence and severity of COPD. It provides an alternative way to detect COPD using the extensively available chest CT. ADVANCES IN KNOWLEDGE: CNN can identify the main pathological changes of COPD (emphysema and airway wall remodeling) based on CT images, to infer lung function and determine the existence and severity of COPD. CNN reached an area under the curve of 0.853 to determine the existence of COPD in the external test cohort. 
The CNN approach provides an effective way for early detection of COPD using widely available chest CT, serving as an important alternative to the pulmonary function test.
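The abstract describes fusing the two per-model confidence levels (lung parenchyma and bronchial wall) with logistic regression. A minimal sketch of that fusion step; the intercept and weights below are hypothetical placeholders, not the coefficients fitted in the study:

```python
import math

def combine_confidences(p_parenchyma, p_airway, w0=-2.0, w1=2.5, w2=2.0):
    """Fuse two CNN confidence levels with a logistic model.

    w0-w2 are hypothetical placeholder coefficients, not the study's fit.
    Returns the combined probability of COPD.
    """
    z = w0 + w1 * p_parenchyma + w2 * p_airway
    return 1.0 / (1.0 + math.exp(-z))
```

In practice the weights would be fitted on the training cohort's two confidence levels against the PFT-derived COPD labels.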

Feasibility of bronchial wall quantification in low- and ultralow-dose third-generation dual-source CT: An ex vivo lung study

Purpose To investigate image quality and bronchial wall quantification in low- and ultralow-dose third-generation dual-source computed tomography (CT). Methods A lung specimen from a formerly healthy male was scanned using third-generation dual-source CT at a standard-dose (51 mAs/120 kV, CTDI(vol) 3.41 mGy), low-dose (1/4th and 1/10th of standard dose), and ultralow-dose (1/20th) setting. Low-kV scanning (70, 80, 90, and Sn100 kV) was applied in each low/ultralow-dose setting, combined with adaptive mAs to keep the dose constant. Images were reconstructed at advanced modeled iterative reconstruction (ADMIRE) levels 1, 3, and 5 for each scan. Bronchial walls were semi-automatically measured from the lobar level to the subsegmental level. Spearman correlation analysis was performed between bronchial wall quantification (wall thickness and wall area percentage) and protocol settings (dose, kV, and ADMIRE). ANOVA with a post hoc pairwise test was used to compare signal-to-noise ratio (SNR), noise, and bronchial wall quantification values among standard- and low/ultralow-dose settings, and among ADMIRE levels. Results Bronchial wall quantification showed no significant correlation with dose level, kV, or ADMIRE level (P > 0.05). Generally, there were no significant differences in bronchial wall quantification among the standard- and low/ultralow-dose settings, or among different ADMIRE levels (P > 0.05). Conclusion The combined use of low/ultralow-dose scanning and ADMIRE does not influence bronchial wall quantification compared to standard-dose CT. This specimen study suggests that an ultralow-dose scan can potentially be used for bronchial wall quantification.
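The Spearman analysis above correlates a quantification measurement with an ordinal protocol setting (dose level, kV, or ADMIRE level). A minimal rank-correlation sketch, assuming no tied values so plain ranks suffice:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Simplified sketch assuming all values within each list are distinct
    (no tie handling).
    """
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos + 1.0  # ranks start at 1
        return r

    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A coefficient near zero across dose/kV/ADMIRE settings is what "no significant correlation" corresponds to here; in practice `scipy.stats.spearmanr` with tie correction would be used.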

    Classification of moving coronary calcified plaques based on motion artifacts using convolutional neural networks:a robotic simulating study on influential factors

Abstract Background Motion artifacts affect the images of coronary calcified plaques. This study utilized convolutional neural networks (CNNs) to classify the motion-contaminated images of moving coronary calcified plaques and to determine the influential factors for the classification performance. Methods Two artificial coronary arteries containing four artificial plaques of different densities were placed on a robotic arm in an anthropomorphic thorax phantom. Each artery moved linearly at velocities ranging from 0 to 60 mm/s. CT examinations were performed with four state-of-the-art CT systems. All images were reconstructed with filtered back projection and at least three levels of iterative reconstruction. Each examination was performed at 100%, 80% and 40% radiation dose. Three deep CNN architectures were used for training the classification models. A five-fold cross-validation procedure was applied to validate the models. Results The accuracy of the CNN classification was 90.2 ± 3.1%, 90.6 ± 3.5%, and 90.1 ± 3.2% for the artificial plaques using Inception v3, ResNet101 and DenseNet201 CNN architectures, respectively. In the multivariate analysis, higher density and increasing velocity were significantly associated with higher classification accuracy (all P < 0.05). Conclusions The CNN achieved a high accuracy of 90% when classifying the motion-contaminated images into the actual category, regardless of different vendors, velocities, radiation doses, and reconstruction algorithms, which indicates the potential value of using a CNN to correct calcium scores.
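The five-fold cross-validation procedure mentioned above partitions the data into five disjoint validation folds, training on the remainder each time. A minimal index-splitting sketch (a generic k-fold split, not the study's exact partitioning code):

```python
def five_fold_splits(n_samples, k=5):
    """Yield (train_idx, val_idx) index lists for k-fold cross-validation.

    Each sample appears in exactly one validation fold; the last fold
    absorbs any remainder when n_samples is not divisible by k.
    """
    idx = list(range(n_samples))
    fold_size = n_samples // k
    for f in range(k):
        start = f * fold_size
        end = start + fold_size if f < k - 1 else n_samples
        val = idx[start:end]
        train = idx[:start] + idx[end:]
        yield train, val
```

Per-fold accuracies from such splits are what yields the reported mean ± standard deviation (e.g., 90.2 ± 3.1%).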

    3D radiomics predicts EGFR mutation, exon-19 deletion and exon-21 L858R mutation in lung adenocarcinoma

    Background: To establish a radiomic approach to identify epidermal growth factor receptor (EGFR) mutation status in lung adenocarcinoma patients based on CT images, and to distinguish exon-19 deletion and exon-21 L858R mutation. Methods: Two hundred sixty-three patients who underwent pre-surgical contrast-enhanced CT and molecular testing were included, and randomly divided into the training (80%) and test (20%) cohort. Tumor images were three-dimensionally segmented to extract 1,672 radiomic features. Clinical features (age, gender, and smoking history) were added to build classification models together with radiomic features. Subsequently, the top-10 most relevant features were used to establish classifiers. For the classifying tasks including EGFR mutation, exon-19 deletion, and exon-21 L858R mutation, four logistic regression models were established for each task. Results: The training and test cohort consisted of 210 and 53 patients, respectively. Among the established models, the highest accuracy and sensitivity among the four models were 75.5% (61.7-86.2%) and 92.9% (76.5-99.1%) to classify EGFR mutation, respectively. The highest specificity values were 86.7% (69.3-96.2%) and 70.4% (49.8-86.3%) to classify exon-19 deletion and exon-21 L858R mutation, respectively. Conclusions: CT radiomics can sensitively identify the presence of EGFR mutation, and increase the certainty of distinguishing exon-19 deletion and exon-21 L858R mutation in lung adenocarcinoma patients. CT radiomics may become a helpful non-invasive biomarker to select EGFR mutation patients for invasive sampling
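The study keeps only the top-10 most relevant of 1,672 radiomic (plus clinical) features before fitting logistic regression classifiers. The abstract does not state the ranking criterion, so the sketch below uses a generic univariate filter, ranking features by absolute Pearson correlation with the binary label, purely as an illustration:

```python
def top_k_features(X, y, k=10):
    """Rank feature columns of X by |Pearson correlation| with label y
    and return the indices of the top k.

    X is a list of rows (samples); y is a list of 0/1 labels.
    Illustrative filter only; the study's actual selection method
    is not specified in the abstract.
    """
    n = len(X)
    m = len(X[0])
    scores = []
    for j in range(m):
        col = [row[j] for row in X]
        mc, my = sum(col) / n, sum(y) / n
        cov = sum((a - mc) * (b - my) for a, b in zip(col, y))
        sc = sum((a - mc) ** 2 for a in col) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        r = cov / (sc * sy) if sc > 0 and sy > 0 else 0.0  # constant columns score 0
        scores.append((abs(r), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]
```

The selected columns would then feed the per-task logistic regression models (EGFR mutation, exon-19 deletion, exon-21 L858R).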

    Human-recognizable CT image features of subsolid lung nodules associated with diagnosis and classification by convolutional neural networks

    Objectives The interpretability of convolutional neural networks (CNNs) for classifying subsolid nodules (SSNs) is insufficient for clinicians. Our purpose was to develop CNN models to classify SSNs on CT images and to investigate image features associated with the CNN classification. Methods CT images containing SSNs with a diameter o

    Deep Learning Reconstruction Shows Better Lung Nodule Detection for Ultra-Low-Dose Chest CT

    Background Ultra-low-dose (ULD) CT could facilitate the clinical implementation of large-scale lung cancer screening while minimizing the radiation dose. However, traditional image reconstruction methods are associated with image noise in low-dose acquisitions. Purpose To compare the image quality and lung nodule detectability of deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction-V (ASIR-V) in ULD CT. Materials and Methods Patients who underwent noncontrast ULD CT (performed at 0.07 or 0.14 mSv, similar to a single chest radiograph) and contrast-enhanced chest CT (CECT) from April to June 2020 were included in this prospective study. ULD CT images were reconstructed with filtered back projection (FBP), ASIR-V, and DLIR. Three-dimensional segmentation of lung tissue was performed to evaluate image noise. Radiologists detected and measured nodules with use of a deep learning-based nodule assessment system and recognized malignancy-related imaging features. Bland-Altman analysis and repeated-measures analysis of variance were used to evaluate the differences between ULD CT images and CECT images. Results A total of 203 participants (mean age ± standard deviation, 61 years ± 12; 129 men) with 1066 nodules were included, with 100 scans at 0.07 mSv and 103 scans at 0.14 mSv. The mean lung tissue noise ± standard deviation was 46 HU ± 4 for CECT and 59 HU ± 4, 56 HU ± 4, 53 HU ± 4, 54 HU ± 4, and 51 HU ± 4 in FBP, ASIR-V level 40%, ASIR-V level 80% (ASIR-V-80%), medium-strength DLIR, and high-strength DLIR (DLIR-H), respectively, of ULD CT scans (P < .001). The nodule detection rates of FBP reconstruction, ASIR-V-80%, and DLIR-H were 62.5% (666 of 1066 nodules), 73.3% (781 of 1066 nodules), and 75.8% (808 of 1066 nodules), respectively (P < .001). 
Bland-Altman analysis showed the percentage difference in long diameter from that of CECT was 9.3% (95% CI of the mean: 8.0, 10.6), 9.2% (95% CI of the mean: 8.0, 10.4), and 6.2% (95% CI of the mean: 5.0, 7.4) in FBP reconstruction, ASIR-V-80%, and DLIR-H, respectively (P < .001). Conclusion Compared with adaptive statistical iterative reconstruction-V, deep learning image reconstruction reduced image noise, increased nodule detection rate, and improved measurement accuracy on ultra-low-dose chest CT images. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Lee in this issue
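The Bland-Altman comparison above reports the mean percentage difference in nodule long diameter relative to CECT, with a 95% CI of the mean. A minimal sketch of that computation, using a normal approximation for the CI and the CECT diameter as the reference denominator (an assumption; Bland-Altman variants sometimes use the pairwise mean instead):

```python
def bland_altman_percent(ref, test):
    """Mean percentage difference of test vs. reference measurements,
    with an approximate 95% CI of the mean (normal approximation).

    ref: reference diameters (e.g., CECT); test: compared diameters
    (e.g., ULD CT reconstruction). Returns (mean, (ci_low, ci_high)).
    """
    n = len(ref)
    diffs = [(t - r) / r * 100.0 for r, t in zip(ref, test)]
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    half = 1.96 * sd / n ** 0.5  # half-width of the 95% CI of the mean
    return mean, (mean - half, mean + half)
```

A smaller mean percentage difference, as reported for DLIR-H (6.2%) versus FBP (9.3%), indicates diameters closer to the CECT reference.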

    Lung Nodule Detectability of Artificial Intelligence-assisted CT Image Reading in Lung Cancer Screening

BACKGROUND: Artificial intelligence (AI)-based automatic lung nodule detection systems improve the detection rate of nodules. It is important to evaluate the clinical value of an AI system by comparing AI-assisted nodule detection with actual radiology reports. OBJECTIVE: To compare the detection rate of lung nodules between the actual radiology reports and AI-assisted reading in lung cancer CT screening. METHODS: Participants in chest CT screening from November to December 2019 were retrospectively included. In the real-world radiologist observation, 14 residents and 15 radiologists participated to finalize radiology reports. In AI-assisted reading, one resident and one radiologist reevaluated all subjects with the assistance of an AI system to locate and measure the detected lung nodules. A reading panel determined the type and number of detected lung nodules between these two methods. RESULTS: In 860 participants (57 ± 7 years), the reading panel confirmed 250 patients with >1 solid nodule, of whom radiologists observed 131, lower than the 247 found by AI-assisted reading (p < 0.001). The panel confirmed 111 patients with >1 non-solid nodule, of whom radiologist observation identified 28, lower than the 110 found by AI-assisted reading (p < 0.001). The accuracy and sensitivity of radiologist observation for solid nodules were 86.2% and 52.4%, lower than the 99.1% and 98.8% achieved by AI-assisted reading, respectively. These metrics were 90.4% and 25.2% for non-solid nodules, lower than the 98.8% and 99.1% achieved by AI-assisted reading, respectively. CONCLUSION: Compared with the actual radiology reports, AI-assisted reading greatly improves the accuracy and sensitivity of nodule detection in chest CT, which benefits lung nodule detection, especially for non-solid nodules.
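The per-patient sensitivities quoted above follow directly from the detection counts against the reading-panel reference standard:

```python
def sensitivity(detected, panel_confirmed):
    """Per-patient detection sensitivity (%) relative to the
    reading-panel reference standard."""
    return detected / panel_confirmed * 100.0
```

For example, radiologists observing 131 of the 250 panel-confirmed solid-nodule patients gives 52.4%, and AI-assisted reading's 247 of 250 gives 98.8%, matching the reported values.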

    Machine-learning-based radiomics identifies atrial fibrillation on the epicardial fat in contrast-enhanced and non-enhanced chest CT

Objective: The purpose is to establish and validate a machine-learning-derived radiomics approach to determine the existence of atrial fibrillation (AF) by analyzing epicardial adipose tissue (EAT) in CT images. Methods: Patients with AF based on electrocardiographic tracing who underwent contrast-enhanced (n = 200) or non-enhanced (n = 300) chest CT scans were analyzed retrospectively. After EAT segmentation and radiomics feature extraction, the segmented EAT yielded 1691 radiomics features. The features most contributive to AF were selected by the Boruta algorithm and a machine-learning-based random forest algorithm, and combined to construct a radiomics signature (EAT-score). Multivariate logistic regression was used to build the clinical factor and nested models. Results: In the test cohort of contrast-enhanced scanning (n = 60/200), the AUC of EAT-score for identifying patients with AF was 0.92 (95%CI: 0.84–1.00), higher than the 0.71 (0.58–0.85) of the clinical factor model (total cholesterol and body mass index) (DeLong's p = 0.01) and the 0.73 (0.61–0.86) of the EAT volume model (p = 0.01). In the test cohort of non-enhanced scanning (n = 100/300), the AUC of EAT-score was 0.85 (0.77–0.92), higher than that of the CT attenuation model (p < 0.05). Conclusion: EAT-score generated by machine-learning-based radiomics achieved high performance in identifying patients with AF. Advances in knowledge: A radiomics analysis based on machine learning allows for the identification of AF on the EAT in contrast-enhanced and non-enhanced chest CT.
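The AUC values compared above (EAT-score vs. clinical factor and EAT volume models) can be computed nonparametrically via the Mann-Whitney U statistic: the probability that a randomly chosen AF patient's score exceeds a randomly chosen non-AF patient's score. A minimal sketch:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney U statistic: the probability that a
    random positive (AF) score exceeds a random negative (non-AF)
    score, counting ties as half.
    """
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

DeLong's test, used in the study to compare models, builds on this same U-statistic formulation of the AUC.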