Towards automatic pulmonary nodule management in lung cancer screening with deep learning
The introduction of lung cancer screening programs will produce an
unprecedented amount of chest CT scans in the near future, which radiologists
will have to read in order to decide on a patient follow-up strategy. According
to the current guidelines, the workup of screen-detected nodules strongly
relies on nodule size and nodule type. In this paper, we present a deep
learning system based on multi-stream multi-scale convolutional networks, which
automatically classifies all nodule types relevant for nodule workup. The
system processes raw CT data containing a nodule without the need for any
additional information such as nodule segmentation or nodule size and learns a
representation of 3D data by analyzing an arbitrary number of 2D views of a
given nodule. The deep learning system was trained with data from the Italian
MILD screening trial and validated on an independent set of data from the
Danish DLCST screening trial. We analyze the advantage of processing nodules at
multiple scales with a multi-stream convolutional network architecture, and we
show that the proposed deep learning system classifies nodule type with a
performance that surpasses that of classical machine learning approaches and
is within the inter-observer variability among four experienced human
observers.
Comment: Published in Scientific Reports
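The multi-view, multi-scale idea above can be illustrated with a small, hypothetical preprocessing sketch: orthogonal 2D patches are extracted around a nodule at several scales and downsampled to a common grid, so one network stream per scale sees the same grid size with a different field of view (function names and patch sizes are illustrative, not the paper's actual pipeline):

```python
import numpy as np

def extract_views(volume, center, patch=8, scales=(1, 2)):
    """Extract axial, coronal and sagittal 2D patches around `center`
    at several scales (larger scale = wider field of view, same grid).
    Hypothetical helper; the paper's real preprocessing is not shown here."""
    z, y, x = center
    views = []
    for s in scales:
        half = patch * s // 2
        axial    = volume[z, y-half:y+half, x-half:x+half][::s, ::s]
        coronal  = volume[z-half:z+half, y, x-half:x+half][::s, ::s]
        sagittal = volume[z-half:z+half, y-half:y+half, x][::s, ::s]
        views.extend([axial, coronal, sagittal])
    return views

volume = np.random.rand(64, 64, 64)            # stand-in for a CT crop
views = extract_views(volume, center=(32, 32, 32))
# 3 orthogonal views per scale, all resampled to the same 8x8 grid
assert len(views) == 6 and all(v.shape == (8, 8) for v in views)
```

Each 2D view can then be fed to a shared 2D convolutional stream, which is what makes an arbitrary number of views per nodule possible.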
Interleaving cerebral CT perfusion with neck CT angiography. Part II: clinical implementation and image quality
Detectability of Iodine in Mediastinal Lesions on Photon Counting CT: A Phantom Study
Background/Objectives: To evaluate the detectability of iodine in mediastinal lesions with photon counting CT (PCCT) compared to conventional CT (CCT) in a phantom study. Methods: Mediastinal lesions were simulated by five cylindrical inserts with diameters from 1 to 12 mm within a 10 cm solid water phantom that was placed in the mediastinal area of an anthropomorphic chest phantom with fat ring (QRM-thorax, QRM L-ring, 30 cm × 40 cm cross-section). Inserts were filled with iodine contrast at concentrations of 0.238 to 27.5 mg/mL. A clinical chest protocol at 120 kV on a high-end CCT (Somatom Force, Siemens Healthineers) was compared to the same protocol on a PCCT (Naeotom Alpha, Siemens Healthineers). Images reconstructed with a soft tissue kernel at 1 mm thickness and a 512 matrix served as a reference. For PCCT, we studied the effect of reconstructing virtual mono-energetic images (VMIs) at 40, 50, 60 and 70 keV, reducing exposure dose by up to 66%, reducing slice thickness to 0.4 and 0.2 mm, and increasing matrix size from 512 to 768 and 1024. Two observers with similar experience independently determined the smallest insert size for which iodine enhancement could still be detected. Consensus was reached when detectability thresholds differed between observers. Results: CTDIvol on PCCT and CCT was 3.80 ± 0.12 and 3.60 ± 0.01 mGy, respectively. PCCT was substantially more sensitive than CCT for detection of iodine in small mediastinal lesions: to detect a 3 mm lesion, 11.2 mg/mL iodine was needed with CCT, while only 1.43 mg/mL was required at 40 keV and 50 keV with PCCT. Moreover, a 66% dose reduction resulted in comparable detection of iodine between PCCT and CCT for all lesion sizes except 3 mm, for which the detection threshold improved from 11.2 mg/mL on CCT to 4.54 mg/mL on PCCT. A matrix size of 1024 reduced this detection threshold further, to 0.238 mg/mL at 40 and 50 keV. For 5 mm lesions, this detection threshold of 0.238 mg/mL was already achieved with a 512 matrix.
Very small, 1 mm lesions did not profit from PCCT unless reconstructed with a 1024 matrix, which reduced the detection threshold from 27.5 mg/mL to 11.2 mg/mL. Reduced slice thickness worsened iodine detection for 3-12 mm lesions but not for 1 mm lesions. Conclusions: Iodine detectability with PCCT is at least equal to CCT for simulated mediastinal lesions of 1-12 mm, even at a dose reduction of 66%. Iodine detectability profits further from virtual monoenergetic images of 40 and 50 keV and an increased reconstruction matrix.
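The study determines detectability with human observers; a common quantitative proxy for such tasks is the contrast-to-noise ratio (CNR) between a lesion region of interest and the background. A minimal sketch with illustrative HU values (not taken from the phantom measurements):

```python
import numpy as np

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: mean HU difference over background noise."""
    return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(0)
background = rng.normal(40.0, 10.0, 500)   # solid-water HU with noise (illustrative)
lesion     = rng.normal(80.0, 10.0, 500)   # iodine-filled insert (illustrative)
assert cnr(lesion, background) > 3.0       # comfortably above a Rose-type threshold
```

Lower noise (PCCT, higher dose) or higher iodine enhancement (lower keV VMIs) both raise the CNR, which is consistent with the trends the phantom study reports.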
Computer-aided detection of pulmonary nodules: a comparative study using the public LIDC/IDRI database
Objectives: To benchmark the performance of state-of-the-art computer-aided detection (CAD) of pulmonary nodules using the largest publicly available annotated CT database (LIDC/IDRI), and to show that CAD finds lesions not identified by the LIDC’s four-fold double reading process. Methods: The LIDC/IDRI database contains 888 thoracic CT scans with a section thickness of 2.5 mm or lower. We report the performance of two commercial and one academic CAD system. The influence of the presence of contrast, section thickness, and reconstruction kernel on CAD performance was assessed. Four radiologists independently analyzed the false positive CAD marks of the best CAD system. Results: The updated commercial CAD system showed the best performance with a sensitivity of 82% at an average of 3.1 false positive detections per scan. Forty-five false positive CAD marks were scored as nodules by all four radiologists in our study. Conclusions: On the largest publicly available reference database for lung nodule detection in chest CT, the updated commercial CAD system locates the vast majority of pulmonary nodules at a low false positive rate. Potential for CAD is substantiated by the fact that it identifies pulmonary nodules that were not marked during the extensive four-fold LIDC annotation process.
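An FROC operating point such as the reported 82% sensitivity at 3.1 false positives per scan can be read off from scored CAD candidates. A minimal sketch on toy data (matching real CAD marks to reference nodules is more involved than shown here):

```python
import numpy as np

def sensitivity_at(fp_per_scan, scores, is_tp, n_lesions, n_scans):
    """Sensitivity at the score threshold that allows at most `fp_per_scan`
    false positives per scan (one point on the FROC curve)."""
    order = np.argsort(scores)[::-1]          # walk candidates, highest score first
    tp = fp = 0
    best_sens = 0.0
    for i in order:
        if is_tp[i]:
            tp += 1
        else:
            fp += 1
        if fp / n_scans <= fp_per_scan:       # operating point still admissible
            best_sens = tp / n_lesions
    return best_sens

# toy candidate list over 2 scans with 5 reference nodules
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
is_tp  = np.array([1,   0,   1,   1,   0,   1], dtype=bool)
sens = sensitivity_at(1.0, scores, is_tp, n_lesions=5, n_scans=2)  # -> 0.8
```

At 1.0 FP/scan the toy system detects 4 of 5 nodules, i.e. a sensitivity of 0.8.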
Trends in the incidence of pulmonary nodules in chest computed tomography:10-year results from two Dutch hospitals
Objective: To study trends in the incidence of reported pulmonary nodules and stage I lung cancer in chest CT. Methods: We analyzed the trends in the incidence of detected pulmonary nodules and stage I lung cancer in chest CT scans in the period between 2008 and 2019. Imaging metadata and radiology reports from all chest CT studies were collected from two large Dutch hospitals. A natural language processing algorithm was developed to identify studies with any reported pulmonary nodule. Results: Between 2008 and 2019, a total of 74,803 patients underwent 166,688 chest CT examinations at both hospitals combined. During this period, the annual number of chest CT scans increased from 9955 scans in 6845 patients in 2008 to 20,476 scans in 13,286 patients in 2019. The proportion of patients in whom nodules (old or new) were reported increased from 38% (2595/6845) in 2008 to 50% (6654/13,286) in 2019. The proportion of patients in whom significant new nodules (≥ 5 mm) were reported increased from 9% (608/6954) in 2010 to 17% (1660/9883) in 2017. The number of patients with new nodules and corresponding stage I lung cancer diagnosis tripled and their proportion doubled, from 0.4% (26/6954) in 2010 to 0.8% (78/9883) in 2017. Conclusion: The identification of incidental pulmonary nodules in chest CT has steadily increased over the past decade and has been accompanied by more stage I lung cancer diagnoses. Clinical relevance statement: These findings stress the importance of identifying and efficiently managing incidental pulmonary nodules in routine clinical practice. Key Points: • The number of patients who underwent chest CT examinations substantially increased over the past decade, as did the number of patients in whom pulmonary nodules were identified. • The increased use of chest CT and more frequently identified pulmonary nodules were associated with more stage I lung cancer diagnoses.
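The abstract does not detail the natural language processing algorithm; a minimal keyword-based sketch for flagging reports that mention a nodule might look as follows (the pattern and example reports are hypothetical, and the real reports were presumably in Dutch):

```python
import re

# Illustrative pattern only; the study's actual NLP algorithm is not described here.
# \w* after the stem catches "nodule", "nodules", "nodular", etc.
NODULE_RE = re.compile(r"\bnodul\w*", re.IGNORECASE)

def reports_nodule(report: str) -> bool:
    """True if the radiology report mentions a pulmonary nodule."""
    return bool(NODULE_RE.search(report))

reports = [
    "Solid nodule of 6 mm in the right upper lobe, new since prior exam.",
    "No focal lesions. Normal chest CT.",
]
flags = [reports_nodule(r) for r in reports]
assert flags == [True, False]
```

A production system would additionally need negation handling ("no nodules seen") and size extraction to separate significant new nodules from old or trivial findings.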
Performance evaluation of a 4D similarity filter for dynamic CT angiography imaging of the liver
Background: Dynamic computed tomography (CT) angiography of the abdomen provides perfusion information and characteristics of the tissues present in the abdomen. This information could potentially help characterize liver metastases. However, radiation dose has to be relatively low for the patient, causing the images to have very high noise content. Denoising methods are needed to increase image quality. Purpose: The purpose of this study was to investigate the performance, limitations, and behavior of a new 4D filtering method, called the 4D Similarity Filter (4DSF), to reduce image noise in temporal CT data. Methods: The 4DSF averages voxels with similar time-intensity curves (TICs). Each phase is filtered individually using the information of all phases except for the one being filtered. This approach minimizes the bias toward the noise initially present in this phase. Since the 4DSF does not base similarity on spatial proximity, loss of spatial resolution is avoided. The 4DSF was evaluated on a 12-phase liver dynamic CT angiography acquisition of 52 digital anthropomorphic phantoms, each containing one hypervascular 1 cm lesion with a small necrotic core. The metrics used for evaluation were noise reduction, lesion contrast-to-noise ratio (CNR), CT number accuracy using peak-time and peak-intensity of the TICs, and resolution loss. The results were compared to those obtained by the time-intensity profile similarity (TIPS) filter, which uses the whole TIC for determining similarity, and by the combination of the 4DSF followed by the TIPS filter (4DSF + TIPS). Results: The 4DSF alone resulted in a median noise reduction by a factor of 6.8, which is lower than that obtained by the TIPS filter at 8.1, and 4DSF + TIPS at 12.2. The 4DSF increased the median CNR from 0.44 to 1.85, which is less than the TIPS filter at 2.59 and 4DSF + TIPS at 3.12.
However, the peak-intensity accuracy in the TICs was superior for the 4DSF, with a median intensity decrease of −34 HU compared to −88 and −50 HU for the hepatic artery when using the TIPS filter and 4DSF + TIPS, respectively. The median peak-time accuracy was inferior for the 4DSF and 4DSF + TIPS, with a time shift of −1 phase for the portal vein TIC compared to no shift in time when using the TIPS filter. The analysis of the full-width-at-half-maximum (FWHM) of a small artery showed significantly less spatial resolution loss for the 4DSF, at 3.2 pixels, compared to 4.3 pixels for the TIPS filter and 3.4 pixels for the 4DSF + TIPS. Conclusion: The 4DSF can reduce noise with almost no resolution loss, making it well suited for denoising 4D CT data for detection tasks, even in very low-dose, i.e., very high-noise, situations. In combination with the TIPS filter, the noise reduction can be increased even further.
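The leave-one-phase-out similarity averaging described above can be sketched on toy data. This is a deliberately simplified, hypothetical variant: the real 4DSF operates on 4D volumes and defines TIC similarity more carefully, but the core idea — excluding the filtered phase from the similarity measure to avoid bias toward its noise — is the same:

```python
import numpy as np

def similarity_filter_4d(data, threshold):
    """Toy 4D similarity filter on `data` of shape (phases, voxels).
    Phase p of voxel v becomes the mean over voxels whose TICs, with
    phase p left out, lie within `threshold` RMS distance of voxel v's TIC."""
    phases, voxels = data.shape
    out = np.empty_like(data, dtype=float)
    for p in range(phases):
        rest = np.delete(data, p, axis=0)                 # TICs without phase p
        for v in range(voxels):
            d = np.sqrt(((rest - rest[:, [v]]) ** 2).mean(axis=0))
            out[p, v] = data[p, d <= threshold].mean()    # average similar voxels
    return out

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 6)[:, None], (1, 50))   # 50 voxels, identical TICs
noisy = clean + rng.normal(0, 0.2, clean.shape)
filtered = similarity_filter_4d(noisy, threshold=1.0)
# noise drops because voxels with similar TICs are averaged together
assert np.abs(filtered - clean).std() < np.abs(noisy - clean).std()
```

Because similarity is defined over TICs rather than spatial neighborhoods, averaging does not blur across spatial edges, which is why the 4DSF loses so little resolution.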
Pricing and cost-saving potential for deep-learning computer-aided lung nodule detection software in CT lung cancer screening
OBJECTIVE: An increasing number of commercial deep learning computer-aided detection (DL-CAD) systems are available, but their cost-saving potential is largely unknown. This study aimed to gain insight into appropriate pricing for DL-CAD in different reading modes to be cost-saving and to determine the potentially most cost-effective reading mode for lung cancer screening. METHODS: In three representative settings, DL-CAD was evaluated as a concurrent, pre-screening, and second reader. A scoping review was performed to estimate radiologist reading time with and without DL-CAD. The hourly cost of radiologist time was collected for the USA (€196), UK (€127), and Poland (€45), and the monetary equivalence of saved time was calculated. The minimum number of screening CTs to reach break-even was calculated for a one-time investment of €51,616 for DL-CAD. RESULTS: Mean reading time was 162 (95% CI: 111-212) seconds per case without DL-CAD, which decreased by 77 (95% CI: 47-107) and 104 (95% CI: 71-136) seconds for DL-CAD as concurrent and pre-screening reader, respectively, and increased by 33-41 s for DL-CAD as second reader. This translates into €1.0-4.3 per-case cost for concurrent reading and €0.8-5.7 for pre-screening reading in the USA, UK, and Poland. To achieve break-even with a one-time investment, the minimum number of CT scans was 12,300-53,600 for concurrent reading, and 9400-65,000 for pre-screening reading in the three countries. CONCLUSIONS: Given current pricing, DL-CAD must be priced substantially below €6 in a pay-per-case setting or used in a high-workload environment to reach break-even in lung cancer screening. DL-CAD as pre-screening reader shows the largest potential to be cost-saving. CRITICAL RELEVANCE STATEMENT: Deep-learning computer-aided lung nodule detection (DL-CAD) software must be priced substantially below €6 in a pay-per-case setting or must be used in high-workload environments with a one-time investment in order to achieve break-even.
DL-CAD as a pre-screening reader has the greatest cost-saving potential. KEY POINTS: • DL-CAD must be priced substantially below €6 in a pay-per-case setting to reach break-even. • DL-CAD must be used in a high-workload screening environment to achieve break-even. • DL-CAD as a pre-screening reader shows the largest potential to be cost-saving.
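The break-even arithmetic follows directly from the reported figures: the saved reading time is converted into money per case, which is then divided into the one-time investment. A short sketch reproducing the concurrent-reading numbers:

```python
def per_case_saving(seconds_saved, hourly_cost_eur):
    """Monetary value of radiologist time saved per screening CT."""
    return seconds_saved / 3600 * hourly_cost_eur

def break_even_cases(investment_eur, seconds_saved, hourly_cost_eur):
    """Minimum number of CTs for a one-time DL-CAD investment to pay off."""
    return investment_eur / per_case_saving(seconds_saved, hourly_cost_eur)

# figures from the abstract: €51,616 investment, 77 s saved per case with
# concurrent reading, hourly radiologist cost €196 (USA) and €45 (Poland)
usa    = break_even_cases(51_616, 77, 196)   # ≈ 12,300 scans
poland = break_even_cases(51_616, 77, 45)    # ≈ 53,600 scans
```

These two endpoints match the reported 12,300-53,600 range for the concurrent-reading mode across the three countries.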
Enhancing a deep learning model for pulmonary nodule malignancy risk estimation in chest CT with uncertainty estimation
Objective: To investigate the effect of uncertainty estimation on the performance of a Deep Learning (DL) algorithm for estimating the malignancy risk of pulmonary nodules. Methods and materials: In this retrospective study, we integrated an uncertainty estimation method into a previously developed DL algorithm for nodule malignancy risk estimation. Uncertainty thresholds were developed using CT data from the Danish Lung Cancer Screening Trial (DLCST), containing 883 nodules (65 malignant) collected between 2004 and 2010. We used thresholds on the 90th and 95th percentiles of the uncertainty score distribution to categorize nodules into certain and uncertain groups. External validation was performed on clinical CT data from a tertiary academic center containing 374 nodules (207 malignant) collected between 2004 and 2012. DL performance was measured using the area under the ROC curve (AUC) for the full set of nodules, for the certain cases, and for the uncertain cases. Additionally, nodule characteristics were compared to identify trends for inducing uncertainty. Results: The DL algorithm performed significantly worse in the uncertain group compared to the certain group of DLCST (AUC 0.62 (95% CI: 0.49, 0.76) vs 0.93 (95% CI: 0.88, 0.97); p <.001) and the clinical dataset (AUC 0.62 (95% CI: 0.50, 0.73) vs 0.90 (95% CI: 0.86, 0.94); p <.001). The uncertain group included larger benign nodules as well as more part-solid and non-solid nodules than the certain group. Conclusion: The integrated uncertainty estimation showed excellent performance for identifying uncertain cases in which the DL-based nodule malignancy risk estimation algorithm had significantly worse performance. Clinical relevance statement: Deep Learning algorithms often lack the ability to gauge and communicate uncertainty.
For safe clinical implementation, uncertainty estimation is of pivotal importance to identify cases where the deep learning algorithm harbors doubt in its prediction. Key Points: • Deep learning (DL) algorithms often lack uncertainty estimation, a capability that can reduce the risk of errors and improve safety during clinical adoption of the DL algorithm. • Uncertainty estimation identifies pulmonary nodules for which the discriminative performance of the DL algorithm is significantly worse. • Uncertainty estimation can further enhance the benefits of the DL algorithm and improve its safety and trustworthiness.
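The percentile-based certain/uncertain split described above can be sketched in a few lines, assuming a scalar uncertainty score per nodule (the scores below are synthetic, not from the study):

```python
import numpy as np

def split_by_uncertainty(uncertainty, percentile=90):
    """Flag the most uncertain cases: scores above the given percentile of
    the development-set uncertainty distribution are marked 'uncertain'."""
    threshold = np.percentile(uncertainty, percentile)
    return uncertainty > threshold

rng = np.random.default_rng(0)
scores = rng.random(1000)                      # synthetic uncertainty scores
uncertain = split_by_uncertainty(scores, percentile=90)
assert 0.08 < uncertain.mean() < 0.12          # ~10% of cases flagged
```

In deployment, the threshold would be fixed on the development set (DLCST here) and then applied unchanged to external data, so the fraction flagged on new data may differ from 10%.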
Towards safe and reliable deep learning for lung nodule malignancy estimation using out-of-distribution detection
Artificial Intelligence (AI) models may fail or suffer from reduced performance when applied to unseen data that differs from the training data distribution, referred to as dataset shift. Automatic detection of out-of-distribution (OOD) data contributes to safe and reliable clinical implementation of AI models. In this study, we apply a recognized OOD detection method that utilizes the Mahalanobis distance (MD) and compare its performance to widely known classical methods. The MD measures the similarity between the features of an unseen sample and the distribution of development-sample features at intermediate model layers. We integrate the proposed method into an existing deep learning (DL) model for lung nodule malignancy risk estimation on chest CT and validate it across four dataset shifts known to reduce AI model performance. The results show that the proposed method outperforms the classical methods and can effectively detect near- and far-OOD samples across all datasets with different data distribution shifts. Additionally, we demonstrate that the proposed method can seamlessly incorporate additional in-distribution (ID) data while maintaining the ability to accurately differentiate the remaining OOD cases. Lastly, we searched for an optimal OOD threshold below which the performance of the DL model stays reliable; however, no decline in DL performance was revealed as the OOD score increased.
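A minimal sketch of Mahalanobis-distance OOD scoring on generic feature vectors follows (synthetic features for illustration; the paper applies this to intermediate-layer features of the DL model):

```python
import numpy as np

def fit_gaussian(features):
    """Fit mean and (regularized) inverse covariance of ID features."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    """Mahalanobis distance of one sample to the ID feature distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))          # synthetic ID feature vectors
mu, cov_inv = fit_gaussian(train)
id_score  = mahalanobis(rng.normal(0.0, 1.0, 4), mu, cov_inv)
ood_score = mahalanobis(np.full(4, 8.0), mu, cov_inv)  # strongly shifted sample
assert ood_score > id_score                          # OOD sample scores higher
```

A threshold on this score (e.g., a percentile of the development-set distances) then decides whether a new scan is flagged as OOD before the malignancy estimate is trusted.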
