Automatic Pulmonary Nodule Detection in CT Scans Using Convolutional Neural Networks Based on Maximum Intensity Projection
Accurate pulmonary nodule detection is a crucial step in lung cancer
screening. Computer-aided detection (CAD) systems are not routinely used by
radiologists for pulmonary nodule detection in clinical practice despite their
potential benefits. Maximum intensity projection (MIP) images improve the
detection of pulmonary nodules in radiological evaluation with computed
tomography (CT) scans. Inspired by the clinical methodology of radiologists, we
aim to explore the feasibility of applying MIP images to improve the
effectiveness of automatic lung nodule detection using convolutional neural
networks (CNNs). We propose a CNN-based approach that takes MIP images of
different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices
as input. Such an approach augments the two-dimensional (2-D) CT slice images
with more representative spatial information that helps discriminate nodules
from vessels through their morphologies. Our proposed method achieves
sensitivity of 92.67% with 1 false positive per scan and sensitivity of 94.19%
with 2 false positives per scan for lung nodule detection on 888 scans in the
LIDC-IDRI dataset. The use of thick MIP images helps the detection of small
pulmonary nodules (3 mm-10 mm) and results in fewer false positives.
Experimental results show that utilizing MIP images can increase the
sensitivity and lower the number of false positives, which demonstrates the
effectiveness and significance of the proposed MIP-based CNNs framework for
automatic pulmonary nodule detection in CT scans. The proposed method also
shows the potential for CNNs to benefit nodule detection by incorporating the
clinical procedure.
Comment: Submitted to IEEE TM
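The slab-based MIP input described above, the maximum over a stack of neighbouring axial slices, can be sketched in a few lines. This is a minimal NumPy sketch; the function and parameter names are assumptions, not the authors' code:

```python
import numpy as np

def axial_mip(volume, slab_thickness_mm, slice_spacing_mm=1.0):
    """Sliding-slab maximum intensity projection along the axial (z)
    axis of a CT volume shaped (z, y, x). Each output slice is the
    voxel-wise maximum over a slab of neighbouring axial slices,
    which makes tubular vessels and round nodules easier to tell
    apart than in a single thin section."""
    n = max(1, int(round(slab_thickness_mm / slice_spacing_mm)))
    z = volume.shape[0]
    # One MIP per axial position: maximum over n consecutive slices.
    return np.stack([volume[i:i + n].max(axis=0) for i in range(z - n + 1)])
```

With 1 mm slice spacing, slab thicknesses of 5, 10, and 15 mm correspond to maxima over 5, 10, and 15 consecutive slices.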
Semantic Segmentation of Pathological Lung Tissue with Dilated Fully Convolutional Networks
Early and accurate diagnosis of interstitial lung diseases (ILDs) is crucial
for making treatment decisions, but can be challenging even for experienced
radiologists. The diagnostic procedure is based on the detection and
recognition of the different ILD pathologies in thoracic CT scans, yet their
manifestation often appears similar. In this study, we propose the use of a
deep purely convolutional neural network for the semantic segmentation of ILD
patterns, as the basic component of a computer aided diagnosis (CAD) system for
ILDs. The proposed CNN, which consists of convolutional layers with dilated
filters, takes as input a lung CT image of arbitrary size and outputs the
corresponding label map. We trained and tested the network on a dataset of 172
sparsely annotated CT scans, within a cross-validation scheme. The training was
performed in an end-to-end and semi-supervised fashion, utilizing both labeled
and non-labeled image regions. The experimental results show a significant
performance improvement with respect to the state of the art.
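The dilated filters at the core of the proposed CNN can be illustrated in one dimension: spacing the kernel taps `dilation` samples apart enlarges the receptive field without adding parameters. A minimal NumPy sketch, with names that are assumptions rather than the authors' implementation:

```python
import numpy as np

def dilated_filter1d(x, kernel, dilation):
    """'Valid' cross-correlation with a dilated kernel: the k taps
    are spaced `dilation` samples apart, so the effective receptive
    field grows to (k - 1) * dilation + 1 while the number of
    learnable weights stays at k."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out
```

Stacking layers with increasing dilation rates is what lets a fully convolutional network aggregate wide context for per-pixel labels without pooling away resolution.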
Lung_PAYNet: a pyramidal attention based deep learning network for lung nodule segmentation
Accurate and reliable lung nodule segmentation in computed tomography (CT) images is required for the early diagnosis of lung cancer. Detecting lung nodules is difficult because of their varied types and shapes, their proximity to other lung structures, and their visual similarity to surrounding tissue. This study proposes a new model named Lung_PAYNet, a pyramidal attention-based architecture, for improved lung nodule segmentation in low-dose CT images. In this architecture, the encoder and decoder are designed using inverted residual blocks and the swish activation function. It also employs a feature pyramid attention network between the encoder and decoder to extract dense features for pixel classification. The proposed architecture was compared to the existing UNet architecture and yielded significantly better results. The model was comprehensively trained and validated using the publicly available LIDC-IDRI dataset. The experimental results revealed that Lung_PAYNet delivered remarkable segmentation, with a Dice similarity coefficient of 95.7%, mIoU of 91.75%, sensitivity of 92.57%, and precision of 96.75%.
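The metrics reported above follow standard definitions for binary masks, which can be computed as follows. A hedged NumPy sketch of those definitions, not the authors' evaluation code:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union
```

mIoU is the IoU averaged over classes (here, nodule and background); sensitivity and precision are the usual true-positive rates over ground-truth and predicted foreground pixels, respectively.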
Registration and analysis of dynamic magnetic resonance image series
Cystic fibrosis (CF) is an autosomal-recessive inherited metabolic disorder that affects all organs in the human body. Patients affected with CF suffer particularly from chronic inflammation and obstruction of the airways. Through early detection, continuous monitoring methods, and new treatments, the life expectancy of patients with CF has increased drastically in the last decades. However, continuous monitoring of disease progression is essential for successful treatment. The current state-of-the-art methods for lung disease detection and monitoring are computed tomography (CT) and X-ray. These techniques are ill-suited for monitoring disease progression because of the ionizing radiation the patient is exposed to during the examination. Through the development of new magnetic resonance imaging (MRI) sequences and evaluation methods, MRI is able to measure physiological changes in the lungs. The process of creating physiological maps of the lungs, i.e. ventilation and perfusion maps, using MRI can be split into three parts: MR acquisition, image registration, and image analysis. In this work, we present methods for the image registration and image analysis parts. We developed a graph-based registration method for 2D dynamic MR image series of the lungs in order to overcome the problem of sliding motion at organ boundaries. Furthermore, we developed a human-inspired learning-based registration method in which registration is defined as a sequence of local transformations. The sequence-based approach combines the advantage of dense transformation models, i.e. a large space of transformations, with the advantage of interpolating transformation models, i.e. smooth local transformations. We also developed a general registration framework called Autograd Image Registration Laboratory (AIRLab), which performs automatic calculation of the gradients for the registration process.
This allows rapid prototyping and easy implementation of existing registration algorithms. For the image analysis part, we developed a deep learning approach based on gated recurrent units that is able to calculate ventilation maps with less than a third of the number of images required by the current method. Automatic defect detection in the estimated MRI ventilation and perfusion maps is essential in clinical routine for automatically evaluating treatment progression. We developed a weakly supervised method that is able to infer a pixel-wise defect segmentation using only a continuous global label during training. In this case, we directly use the lung clearance index (LCI) as a global weak label, without any further manual annotations. The LCI is a global measure describing ventilation inhomogeneities of the lungs and is obtained from a multiple-breath washout test.
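The registration objective, finding a transformation that aligns a moving image to a fixed one by minimising an image distance, can be conveyed with a deliberately simple stand-in: exhaustive search over integer translations minimising the sum of squared differences (SSD). AIRLab instead differentiates the loss automatically and optimises far richer transformation models; this sketch only illustrates the objective, and all names are assumptions:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=3):
    """Toy 2-D registration: try every integer (dy, dx) translation
    within +/- max_shift and keep the one minimising the SSD between
    the fixed image and the shifted moving image."""
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = ((fixed - shifted) ** 2).sum()
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best
```

In a gradient-based framework the discrete search is replaced by differentiating the SSD loss with respect to transformation parameters, which is precisely the bookkeeping AIRLab automates.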
Extracting Lungs from CT Images using Fully Convolutional Networks
Analysis of cancer and other pathological diseases, like the interstitial
lung diseases (ILDs), is usually possible through Computed Tomography (CT)
scans. To aid this, a preprocessing step of segmentation is performed to reduce
the area to be analyzed, segmenting the lungs and removing unimportant regions.
Generally, complex methods are developed to extract the lung region, often
relying on hand-made feature extractors to enhance segmentation. With the
popularity of deep learning techniques and their automated feature learning,
we propose a lung
segmentation approach using fully convolutional networks (FCNs) combined with
fully connected conditional random fields (CRF), employed in many
state-of-the-art segmentation works. Aiming to develop a generalized approach,
the publicly available datasets from University Hospitals of Geneva (HUG) and
VESSEL12 challenge were studied, including many healthy and pathological CT
scans for evaluation. Experiments were conducted on each dataset individually,
on cross-dataset transfer (applying the model trained on one dataset to the
other), and on a combination of both datasets.
The achieved Dice scores outperform prior works on the HUG-ILD dataset and
match state-of-the-art results on the VESSEL12 dataset, showing the
capability of deep learning approaches.
Comment: Accepted for presentation at the International Joint Conference on
Neural Networks (IJCNN) 201
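As context for the kind of hand-made preprocessing the FCN approach aims to replace, a classical baseline thresholds Hounsfield units to the range of air-filled parenchyma and keeps the largest connected components. This is an illustrative sketch with an assumed HU window, not the paper's FCN+CRF pipeline:

```python
import numpy as np
from collections import deque

def lung_candidate_mask(ct_hu, lo=-1000, hi=-400):
    """Threshold CT values (Hounsfield units) to the range where
    air-filled lung parenchyma typically lies; the window is an
    assumption and real pipelines refine it further."""
    return (ct_hu > lo) & (ct_hu < hi)

def largest_components(mask, keep=2):
    """Keep the `keep` largest 4-connected components of a 2-D binary
    mask (e.g. the two lungs), discarding small noise blobs.
    Pure-Python BFS labelling, illustrative rather than efficient."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes, current = {}, 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1
                labels[sy, sx] = current
                q, n = deque([(sy, sx)]), 0
                while q:
                    y, x = q.popleft()
                    n += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
                sizes[current] = n
    top = sorted(sizes, key=sizes.get, reverse=True)[:keep]
    return np.isin(labels, top)
```

An FCN learns the equivalent features end to end, and a fully connected CRF then refines the predicted mask boundaries.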