A Multi-Scale Approach to Directional Field Estimation
This paper proposes a robust method for directional field estimation from fingerprint images that combines estimates at multiple scales. The method provides accurate estimates in scratchy regions while maintaining correct estimates around singular points. Compared to other methods, the penalty for detecting false singular points is much smaller, because false detections do not deteriorate the directional field estimate.
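A common building block for directional field estimation, which multi-scale methods compute per window and then combine across scales, is the gradient-based doubled-angle average. The sketch below illustrates that standard estimate; the function name and toy gradients are illustrative, not from the paper:

```python
import math

def window_orientation(grads):
    """Dominant gradient orientation for one local window, using the
    doubled-angle (structure-tensor) average so that opposite gradient
    directions reinforce rather than cancel.
    grads: list of (gx, gy) image gradients in the window."""
    gxx = sum(gx * gx - gy * gy for gx, gy in grads)   # cos(2*theta) component
    gxy = sum(2.0 * gx * gy for gx, gy in grads)       # sin(2*theta) component
    return 0.5 * math.atan2(gxy, gxx)                  # halve the doubled angle

# gradients that all point along 45 degrees -> orientation pi/4
theta = window_orientation([(1.0, 1.0), (2.0, 2.0), (0.5, 0.5)])
```

Averaging the doubled-angle vector (gxx, gxy) over larger windows gives the coarser-scale estimates that a multi-scale scheme can fall back on in noisy regions.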
Video-based Side-view Face Recognition for Home Safety
In this paper, we introduce a registration method for side-view face recognition that is suitable for home safety applications. We use cameras attached to door posts and recognize people as they pass through doors to estimate their location in the house. First, we present a new database collected with this setup, using side cameras and ambient light; we recorded videos of 14 people passing through doors along 18 different paths. Next, we propose our recognition method, in which we automatically find the profile to register the face images. By applying hierarchical clustering we detect frames that contain falsely detected profiles or pose variations, and automatically remove them from the video sequence to improve our results. After registration, we find the nose tip, apply recognition based on profiles, and analyze our results.
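The frame-pruning step can be illustrated with a toy version: group frames by a per-frame feature and keep only the dominant cluster, discarding outliers such as false profile detections. This sketch uses a simplified 1-D single-linkage grouping on a hypothetical per-frame profile score, not the paper's actual features or clustering configuration:

```python
def largest_cluster(scores, gap=0.5):
    """Single-linkage 1-D clustering sketch: sort frame scores, split
    wherever consecutive values differ by more than `gap`, and keep the
    biggest group (the mutually consistent profile frames).
    Returns the indices of the retained frames."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    clusters, cur = [], [order[0]]
    for a, b in zip(order, order[1:]):
        if scores[b] - scores[a] <= gap:
            cur.append(b)            # close enough: same cluster
        else:
            clusters.append(cur)     # gap too large: start a new cluster
            cur = [b]
    clusters.append(cur)
    return max(clusters, key=len)

# frame 3 is an outlier (e.g. a falsely detected profile) and is dropped
kept = largest_cluster([0.1, 0.12, 0.15, 2.0, 0.13])
```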
Automatic Pulmonary Nodule Detection in CT Scans Using Convolutional Neural Networks Based on Maximum Intensity Projection
Accurate pulmonary nodule detection is a crucial step in lung cancer
screening. Computer-aided detection (CAD) systems are not routinely used by
radiologists for pulmonary nodule detection in clinical practice despite their
potential benefits. Maximum intensity projection (MIP) images improve the
detection of pulmonary nodules in radiological evaluation with computed
tomography (CT) scans. Inspired by the clinical methodology of radiologists, we
aim to explore the feasibility of applying MIP images to improve the
effectiveness of automatic lung nodule detection using convolutional neural
networks (CNNs). We propose a CNN-based approach that takes MIP images of
different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices
as input. Such an approach augments the two-dimensional (2-D) CT slice images
with more representative spatial information that helps discriminate nodules
from vessels through their morphologies. Our proposed method achieves a
sensitivity of 92.67% with 1 false positive per scan and a sensitivity of
94.19% with 2 false positives per scan for lung nodule detection on 888 scans in the
LIDC-IDRI dataset. The use of thick MIP images helps the detection of small
pulmonary nodules (3 mm-10 mm) and results in fewer false positives.
Experimental results show that utilizing MIP images can increase the
sensitivity and lower the number of false positives, which demonstrates the
effectiveness and significance of the proposed MIP-based CNNs framework for
automatic pulmonary nodule detection in CT scans. The results also suggest
that CNN-based nodule detection can benefit from incorporating the clinical
reading procedure into the model design.
Comment: Submitted to IEEE TM
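The MIP inputs themselves are simple to construct: for each slab, take the voxel-wise maximum over the contiguous 1 mm sections it covers. A minimal pure-Python sketch, with a list-of-lists standing in for a real CT volume:

```python
def mip_slabs(volume, slab):
    """Maximum intensity projection over sliding axial slabs.
    volume: list of 2-D slices (rows x cols), one per 1 mm axial section;
    slab: number of consecutive slices per projection (e.g. 5, 10, 15)."""
    rows, cols = len(volume[0]), len(volume[0][0])
    mips = []
    for start in range(len(volume) - slab + 1):
        block = volume[start:start + slab]
        # voxel-wise maximum across the slices in this slab
        mips.append([[max(sl[r][c] for sl in block) for c in range(cols)]
                     for r in range(rows)])
    return mips
```

A thicker slab folds more of a vessel's tube-like extent into one image, which is what helps the network tell vessels from roughly spherical nodules.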
Worst-Case Morphs using Wasserstein ALI and Improved MIPGAN
A morph is a combination of two separate facial images and contains identity
information of two different people. When used in an identity document, both
people can be authenticated by a biometric Face Recognition (FR) system. Morphs
can be generated using either a landmark-based approach or approaches based on
deep learning such as Generative Adversarial Networks (GAN). In a recent paper,
we introduced a "worst-case" upper bound on how challenging morphing
attacks can be for an FR system. The closer morphs are to this upper bound, the
bigger the challenge they pose to FR. We introduced an approach with which it
was possible to generate morphs that approximate this upper bound for a known
FR system (white box), but not for unknown (black box) FR systems.
In this paper, we introduce a morph generation method that can approximate
worst-case morphs even when the FR system is not known. A key contribution is
that we include the goal of generating difficult morphs during training.
Our method is based on Adversarially Learned Inference (ALI) and uses concepts
from Wasserstein GANs trained with Gradient Penalty, which were introduced to
stabilise the training of GANs. We include these concepts to achieve similar
improvement in training stability and call the resulting method Wasserstein ALI
(WALI). We finetune WALI using loss functions designed specifically to improve
the ability to manipulate identity information in facial images and show how it
can generate morphs that are more challenging for FR systems than landmark- or
GAN-based morphs. We also show how our findings can be used to improve MIPGAN,
an existing StyleGAN-based morph generator.
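The gradient penalty the abstract borrows from Wasserstein GANs pushes the critic's gradient norm toward 1 (an approximately 1-Lipschitz critic). For a toy linear critic f(x) = w·x the gradient is w everywhere, which makes the penalty term easy to see in a few lines; this is an illustration of the concept only, not the WALI training code:

```python
def gradient_penalty(w, lam=10.0):
    """WGAN-GP-style penalty for a toy linear critic f(x) = sum(w_i * x_i).
    Its gradient with respect to x is w at every point, so the penalty
    lam * (||grad f|| - 1)^2 simply drives ||w|| toward 1."""
    grad_norm = sum(wi * wi for wi in w) ** 0.5
    return lam * (grad_norm - 1.0) ** 2

# ||w|| = 5, so the critic is heavily penalized until it flattens out
penalty = gradient_penalty([3.0, 4.0])
```

In the real setting the gradient is taken at points interpolated between real and generated samples and computed by automatic differentiation; the quadratic pull toward unit norm is the same.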
Deep convolutional neural networks for multi-planar lung nodule detection: improvement in small nodule identification
Objective: In clinical practice, small lung nodules can be easily overlooked
by radiologists. The paper aims to provide an efficient and accurate detection
system for small lung nodules while keeping good performance for large nodules.
Methods: We propose a multi-planar detection system using convolutional neural
networks. The 2-D convolutional neural network model, U-net++, was trained by
axial, coronal, and sagittal slices for the candidate detection task. All
possible nodule candidates from the three different planes are combined. For
false positive reduction, we apply 3-D multi-scale dense convolutional neural
networks to efficiently remove false positive candidates. We use the public
LIDC-IDRI dataset which includes 888 CT scans with 1186 nodules annotated by
four radiologists. Results: After ten-fold cross-validation, our proposed
system achieves a sensitivity of 94.2% with 1.0 false positive/scan and a
sensitivity of 96.0% with 2.0 false positives/scan. Although it is difficult to
detect small nodules (i.e. < 6 mm), our designed CAD system reaches a
sensitivity of 93.4% (95.0%) of these small nodules at an overall false
positive rate of 1.0 (2.0) false positives/scan. At the nodule candidate
detection stage, results show that the multi-planar method detects more
nodules than a single-plane approach. Conclusion: Our approach
achieves good performance not only for small nodules, but also for large
lesions on this dataset. This demonstrates the effectiveness and efficiency of
our developed CAD system for lung nodule detection. Significance: The proposed
system could support radiologists in the early detection of lung cancer.
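Operating points such as "94.2% at 1.0 false positive/scan" come from sweeping a confidence threshold over the ranked candidates. A minimal sketch of one such FROC operating point, using a flat candidate list; a real evaluation also matches each candidate to the annotated nodules:

```python
def sensitivity_at_fp(detections, n_nodules, n_scans, fp_per_scan):
    """Sensitivity at a fixed false-positive budget (one FROC point).
    detections: (confidence score, is_true_positive) per candidate.
    The threshold is lowered until admitting one more false positive
    would exceed the budget of fp_per_scan * n_scans."""
    budget = fp_per_scan * n_scans
    tp = fp = 0
    for score, is_tp in sorted(detections, key=lambda d: -d[0]):
        if is_tp:
            tp += 1
        else:
            fp += 1
            if fp > budget:
                break          # threshold cannot go any lower
    return tp / n_nodules

dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, False), (0.5, True)]
sens = sensitivity_at_fp(dets, n_nodules=3, n_scans=1, fp_per_scan=1)
```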
Deep learning-based pulmonary nodule detection: Effect of slab thickness in maximum intensity projections at the nodule candidate detection stage
BACKGROUND AND OBJECTIVE: To investigate the effect of the slab thickness in maximum intensity projections (MIPs) on the candidate detection performance of a deep learning-based computer-aided detection (DL-CAD) system for pulmonary nodule detection in CT scans. METHODS: The public LUNA16 dataset includes 888 CT scans with 1186 nodules annotated by four radiologists. From those scans, MIP images were reconstructed with slab thicknesses of 5 to 50 mm (at 5 mm intervals) and 3 to 13 mm (at 2 mm intervals). The architecture in the nodule candidate detection part of the DL-CAD system was trained separately using MIP images with various slab thicknesses. Based on ten-fold cross-validation, the sensitivity and the F2 score were determined to evaluate the performance of each slab thickness at the nodule candidate detection stage. The free-response receiver operating characteristic (FROC) curve was used to assess the performance of the whole DL-CAD system, which combined the results from 16 MIP slab-thickness settings. RESULTS: At the nodule candidate detection stage, combining the results from the 16 MIP slab-thickness settings yielded a high sensitivity of 98.0% with 46 false positives (FPs) per scan. For a single MIP slab thickness of 10 mm, the highest sensitivity of 90.0% with 8 FPs/scan was reached before false positive reduction. The sensitivity increased (82.8% to 90.0%) for slab thicknesses of 1 to 10 mm and decreased (88.7% to 76.6%) for slab thicknesses of 15 to 50 mm. The number of FPs decreased with increasing slab thickness and stabilized at 5 FPs/scan for slab thicknesses of 30 mm or more. After false positive reduction, the DL-CAD system utilizing all 16 MIP slab-thickness settings reached a sensitivity of 94.4% with 1 FP/scan. CONCLUSIONS: Using multiple MIP slab thicknesses improved performance at the nodule candidate detection stage and, in turn, for the whole DL-CAD system.
For a single slab thickness of 10 mm, the highest sensitivity was reached at the nodule candidate detection stage; this thickness is similar to the slab thickness usually applied by radiologists.
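The F2 score used to rank slab thicknesses is the recall-weighted F-measure with beta = 2, which suits a screening task where missed nodules are costlier than false alarms:

```python
def f2_score(tp, fp, fn):
    """F-beta score with beta = 2: recall is weighted four times as
    heavily as precision (F_beta with beta^2 = 4)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 5.0 * precision * recall / (4.0 * precision + recall)

# equal precision and recall of 0.9 give F2 = 0.9
score = f2_score(tp=90, fp=10, fn=10)
```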
Survival prediction for stage I-IIIA non-small cell lung cancer using deep learning
BACKGROUND AND PURPOSE: The aim of this study was to develop and evaluate a prediction model for 2-year overall survival (OS) in stage I-IIIA non-small cell lung cancer (NSCLC) patients who received definitive radiotherapy, considering both clinical variables and image features from pre-treatment CT scans. MATERIALS AND METHODS: NSCLC patients who received stereotactic radiotherapy were prospectively collected at the UMCG and split into a training set and a hold-out test set of 189 and 81 patients, respectively. External validation was performed on 228 NSCLC patients who were treated with radiation or concurrent chemoradiation at the Maastro clinic (Lung1 dataset). A hybrid model that integrated both image and clinical features was implemented using deep learning. Image features were learned from cubic patches containing lung tumours extracted from pre-treatment CT scans. Relevant clinical variables were selected by univariable and multivariable analyses. RESULTS: Multivariable analysis showed that age and clinical stage were significant prognostic clinical factors for 2-year OS. Using these two clinical variables in combination with image features from pre-treatment CT scans, the hybrid model achieved a median AUC of 0.76 [95% CI: 0.65-0.86] and 0.64 [95% CI: 0.58-0.70] on the complete UMCG and Maastro test sets, respectively. The Kaplan-Meier survival curves showed significant separation between low and high mortality risk groups on these two test sets (log-rank test: p < 0.001 and p = 0.012, respectively). CONCLUSION: We demonstrated that a hybrid model can achieve reasonable performance by utilizing both clinical and image features for 2-year OS prediction. Such a model has the potential to identify patients with high mortality risk and guide clinical decision making.
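Kaplan-Meier curves like those used to compare the risk groups follow from the standard product-limit estimator. A compact sketch for untied event times, on toy data rather than the study cohorts:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up time per patient; events: 1 = death, 0 = censored.
    Returns (event time, survival probability) pairs; assumes no ties."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:                          # death observed
            surv *= (at_risk - 1) / at_risk    # multiply by P(survive this step)
            curve.append((times[i], surv))
        at_risk -= 1                           # censored patients leave the risk set too
    return curve

# one death at t=1, a censoring at t=2, then deaths at t=3 and t=4
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1])
```

Plotting one such curve per predicted risk group and comparing them with a log-rank test is the separation analysis the abstract reports.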
Deep learning for automated exclusion of cardiac CT examinations negative for coronary artery calcium
Purpose: Coronary artery calcium (CAC) score has been shown to be an accurate predictor of future cardiovascular events. Early detection by CAC scoring might reduce the number of deaths from cardiovascular disease (CVD), and automatically excluding scans that test negative for CAC could significantly reduce the workload of radiologists. We propose an algorithm that both excludes negative scans and segments the CAC. Method: The training and internal validation data were collected from the ROBINSCA study; the external validation data were collected from the ImaLife study. Both contain annotated low-dose non-contrast cardiac CT scans. Sixty scans were used for training. For both internal and external validation, two sets were collected: 50 CT scans of participants without CAC and 50 CT scans of participants with an Agatston score between 10 and 20. The effect of dilated convolutional layers was tested using two CNN architectures. We used patient-level accuracy as the metric for CAC detection and the Dice coefficient as the metric for CAC segmentation. Results: Of the 50 negative cases in the internal and external validation sets, 62% and 86% were classified correctly, respectively. There were no false negative predictions. For the segmentation task, Dice coefficients of 0.63 and 0.84 were achieved on the internal and external validation datasets, respectively. Conclusions: Our algorithm excluded 86% of all scans without CAC. Radiologists might need to spend less time on participants without CAC and could spend more time on participants that need their attention.
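The Agatston score that defines the 10-20 validation group is computed per calcified lesion from its area and peak attenuation, using the standard 130 HU threshold and 1-4 density weighting. A sketch with illustrative lesion tuples:

```python
def agatston(lesions, pixel_area_mm2=1.0):
    """Agatston score sketch: for each calcified lesion, area in mm^2
    times a density weight derived from the lesion's peak Hounsfield
    units (standard thresholds: 130/200/300/400 HU -> weights 1-4).
    lesions: list of (n_pixels, peak_hu)."""
    def weight(peak_hu):
        if peak_hu >= 400:
            return 4
        if peak_hu >= 300:
            return 3
        if peak_hu >= 200:
            return 2
        if peak_hu >= 130:
            return 1
        return 0          # below threshold: not counted as calcium
    return sum(n_pixels * pixel_area_mm2 * weight(peak)
               for n_pixels, peak in lesions)

# a dense 10-pixel lesion (weight 4) plus a faint 5-pixel lesion (weight 1)
score = agatston([(10, 450), (5, 150)])
```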