Improving the Segmentation of Anatomical Structures in Chest Radiographs using U-Net with an ImageNet Pre-trained Encoder
Accurate segmentation of anatomical structures in chest radiographs is
essential for many computer-aided diagnosis tasks. In this paper we investigate
the latest fully-convolutional architectures for the task of multi-class
segmentation of the lung fields, heart and clavicles in a chest radiograph. In
addition, we explore the influence of using different loss functions in the
training process of a neural network for semantic segmentation. We evaluate all
models on a common benchmark of 247 X-ray images from the JSRT database and
ground-truth segmentation masks from the SCR dataset. Our best-performing
architecture is a modified U-Net that benefits from pre-trained encoder
weights. This model outperformed the current state-of-the-art methods tested on
the same benchmark, with Jaccard overlap scores of 96.1% for lung fields, 90.6%
for heart and 85.5% for clavicles.
Comment: Presented at the First International Workshop on Thoracic Image
Analysis (TIA), MICCAI 201
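The Jaccard overlap score used to evaluate these segmentations is intersection over union per class. A minimal NumPy sketch (the function name and toy masks below are illustrative, not from the paper):

```python
import numpy as np

def jaccard_score(pred: np.ndarray, target: np.ndarray, cls: int) -> float:
    """Jaccard (IoU) overlap for one class in integer label masks."""
    p = pred == cls
    t = target == cls
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    # convention: if the class is absent from both masks, score it 1.0
    return inter / union if union else 1.0

# toy 4x4 masks with classes 0 (background) and 1 (e.g. lung field)
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(jaccard_score(pred, target, cls=1))  # 4 shared pixels / 5 total = 0.8
```

Computed per class (lung fields, heart, clavicles) and averaged over the test set, this yields the percentage scores reported above.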
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Anatomy X-Net: A Semi-Supervised Anatomy Aware Convolutional Neural Network for Thoracic Disease Classification
Thoracic disease detection from chest radiographs using deep learning methods
has been an active area of research in the last decade. Most previous methods
attempt to focus on the diseased organs of the image by identifying spatial
regions responsible for significant contributions to the model's prediction. In
contrast, expert radiologists first locate the prominent anatomical structures
before determining if those regions are anomalous. Therefore, integrating
anatomical knowledge within deep learning models could bring substantial
improvement in automatic disease classification. This work proposes an
anatomy-aware attention-based architecture named Anatomy X-Net that
prioritizes the spatial features guided by the pre-identified anatomy regions.
We leverage a semi-supervised learning method using the JSRT dataset containing
organ-level annotation to obtain the anatomical segmentation masks (for lungs
and heart) for the NIH and CheXpert datasets. The proposed Anatomy X-Net uses
the pre-trained DenseNet-121 as the backbone network with two corresponding
structured modules, the Anatomy Aware Attention (AAA) and Probabilistic
Weighted Average Pooling (PWAP), in a cohesive framework for anatomical
attention learning. Our proposed method sets new state-of-the-art performance
on the official NIH test set with an AUC score of 0.8439, proving the efficacy
of utilizing the anatomy segmentation knowledge to improve the thoracic disease
classification. Furthermore, the Anatomy X-Net yields an averaged AUC of 0.9020
on the Stanford CheXpert dataset, improving on existing methods, which
demonstrates the generalizability of the proposed framework.
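The attention-pooling idea behind modules like PWAP can be sketched as a spatially weighted average of a feature map. The code below is a generic NumPy illustration of attention-weighted pooling, not the paper's exact PWAP formulation; all names are illustrative:

```python
import numpy as np

def attention_weighted_pool(features: np.ndarray, attention: np.ndarray) -> np.ndarray:
    """Pool a C x H x W feature map using a spatial attention map.

    `attention` holds unnormalized H x W scores; they are softmax-normalized
    over all spatial positions so the result is a weighted average of the
    feature vectors, emphasizing attended regions (e.g. lungs and heart).
    """
    w = np.exp(attention - attention.max())  # subtract max for stability
    w = w / w.sum()                          # spatial weights sum to 1
    return (features * w[None, :, :]).sum(axis=(1, 2))  # length-C vector

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))  # 8 channels over a 4x4 grid
attn = rng.standard_normal((4, 4))      # e.g. derived from anatomy masks
pooled = attention_weighted_pool(feats, attn)
print(pooled.shape)  # (8,)
```

With a uniform attention map this reduces to plain global average pooling; non-uniform maps let anatomy-derived attention steer which spatial positions dominate the pooled descriptor.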
Full-resolution Lung Nodule Segmentation from Chest X-ray Images using Residual Encoder-Decoder Networks
Lung cancer is the leading cause of cancer death and early diagnosis is
associated with a positive prognosis. Chest X-ray (CXR) provides an inexpensive
imaging mode for lung cancer diagnosis. Suspicious nodules are difficult to
distinguish from vascular and bone structures using CXR. Computer vision has
previously been proposed to assist human radiologists in this task; however,
leading studies use down-sampled images and computationally expensive methods
with unproven generalization. Instead, this study localizes lung nodules using
efficient encoder-decoder neural networks that process full resolution images
to avoid any signal loss resulting from down-sampling. Encoder-decoder networks
are trained and tested using the JSRT lung nodule dataset. The networks are
used to localize lung nodules from an independent external CXR dataset.
Sensitivity and false positive rates are measured using an automated framework
to eliminate any observer subjectivity. These experiments allow for the
determination of the optimal network depth, image resolution and pre-processing
pipeline for generalized lung nodule localization. We find that nodule
localization is influenced by subtlety, with more subtle nodules being detected
in earlier training epochs. Therefore, we propose a novel self-ensemble model
from three consecutive epochs centered on the validation optimum. This ensemble
achieved a sensitivity of 85% in 10-fold internal testing with 8 false
positives per image. A sensitivity of 81% at 6 false positives per image is achieved
following morphological false positive reduction. This result is comparable to
more computationally complex systems based on linear and spatial filtering, but
with a sub-second inference time that is faster than other methods. The
proposed algorithm achieved excellent generalization results on an
external dataset, with a sensitivity of 77% at a false positive rate of 7.6.
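The self-ensemble described above averages predictions from checkpoints at consecutive epochs around the validation optimum. A minimal sketch, assuming per-pixel nodule probability maps as the model output (the maps and threshold below are illustrative):

```python
import numpy as np

def self_ensemble(prob_maps: list) -> np.ndarray:
    """Average per-pixel probability maps from consecutive training epochs.

    Sketch of the epoch self-ensemble idea: predictions from checkpoints
    surrounding the validation optimum are averaged before thresholding,
    smoothing out epoch-to-epoch variation in which nodules are detected.
    """
    return np.mean(np.stack(prob_maps), axis=0)

# hypothetical 2x2 probability maps from three consecutive epochs
maps = [np.array([[0.2, 0.9], [0.1, 0.4]]),
        np.array([[0.4, 0.7], [0.2, 0.6]]),
        np.array([[0.3, 0.8], [0.3, 0.5]])]
avg = self_ensemble(maps)
detections = avg >= 0.5  # threshold chosen arbitrarily for illustration
print(avg)  # [[0.3 0.8] [0.2 0.5]]
```

Averaging before thresholding is what lets nodules detected only in earlier epochs (the more subtle ones noted above) still contribute to the final localization.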