12,897 research outputs found
Learning to detect chest radiographs containing lung nodules using visual attention networks
Machine learning approaches hold great potential for the automated detection
of lung nodules in chest radiographs, but training the algorithms requires very
large amounts of manually annotated images, which are difficult to obtain. Weak
labels indicating whether a radiograph is likely to contain pulmonary nodules
are typically easier to obtain at scale by parsing historical free-text
radiological reports associated with the radiographs. Using a repository of
over 700,000 chest radiographs, in this study we demonstrate that promising
nodule detection performance can be achieved using weak labels through
convolutional neural networks for radiograph classification. We propose two
network architectures for the classification of images likely to contain
pulmonary nodules using both weak labels and manually-delineated bounding
boxes, when these are available. Annotated nodules are used at training time to
drive a visual attention mechanism that informs the model about its localisation
performance. The first architecture extracts saliency maps from high-level
convolutional layers and compares the estimated position of a nodule against
the ground truth, when this is available. A corresponding localisation error is
then back-propagated along with the softmax classification error. The second
approach consists of a recurrent attention model that learns to observe a short
sequence of smaller image portions through reinforcement learning. When a
nodule annotation is available at training time, the reward function is
modified accordingly so that exploring portions of the radiographs away from a
nodule incurs a larger penalty. Our empirical results demonstrate the potential
advantages of these architectures in comparison to competing methodologies.
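The first architecture's objective can be sketched in miniature: a softmax classification loss plus a penalty on the distance between the saliency-map peak and the annotated nodule centre, applied only when a bounding box is available. The function names, the argmax peak estimate, the squared-distance penalty, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def combined_loss(logits, label, saliency, bbox_center=None, lam=0.5):
    """Cross-entropy classification loss, plus a localisation penalty
    when a ground-truth nodule position is available. `saliency` is a
    2-D map from a high-level convolutional layer; the penalty is the
    squared normalised distance between its argmax and the annotated
    nodule centre (an assumed stand-in for the paper's localisation
    error)."""
    probs = softmax(logits)
    cls_loss = -math.log(probs[label] + 1e-12)
    if bbox_center is None:
        # weakly labelled image: classification error only
        return cls_loss
    h, w = len(saliency), len(saliency[0])
    # estimated nodule position = argmax of the saliency map
    peak = max(((r, c) for r in range(h) for c in range(w)),
               key=lambda rc: saliency[rc[0]][rc[1]])
    dy = (peak[0] - bbox_center[0]) / h
    dx = (peak[1] - bbox_center[1]) / w
    return cls_loss + lam * (dy * dy + dx * dx)
```

When the saliency peak already sits on the annotated centre the penalty vanishes and the loss reduces to the plain classification error, mirroring the idea that only mislocalised attention is punished.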
CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison
Large, labeled datasets have driven deep learning methods to achieve
expert-level performance on a variety of medical imaging tasks. We present
CheXpert, a large dataset that contains 224,316 chest radiographs of 65,240
patients. We design a labeler to automatically detect the presence of 14
observations in radiology reports, capturing uncertainties inherent in
radiograph interpretation. We investigate different approaches to using the
uncertainty labels for training convolutional neural networks that output the
probability of these observations given the available frontal and lateral
radiographs. On a validation set of 200 chest radiographic studies which were
manually annotated by 3 board-certified radiologists, we find that different
uncertainty approaches are useful for different pathologies. We then evaluate
our best model on a test set composed of 500 chest radiographic studies
annotated by a consensus of 5 board-certified radiologists, and compare the
performance of our model to that of 3 additional radiologists in the detection
of 5 selected pathologies. On Cardiomegaly, Edema, and Pleural Effusion, the
model ROC and PR curves lie above all 3 radiologist operating points. We
release the dataset to the public as a standard benchmark to evaluate
performance of chest radiograph interpretation models.
The dataset is freely available at
https://stanfordmlgroup.github.io/competitions/chexpert
Comment: Published in AAAI 2019
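The different approaches to the uncertainty labels can be sketched as simple label-mapping policies before training: ignore the uncertain examples, or cast them to negatives or positives. The policy names follow the convention commonly used for this dataset; the function names and the treatment of unmentioned observations are illustrative assumptions.

```python
import math

def resolve_uncertainty(label, policy):
    """Map a CheXpert-style observation label to a training target.

    Labels: 1 = positive, 0 = negative, -1 = uncertain, None = not
    mentioned in the report. Returning None masks the example out of
    the loss for that observation (the "ignore" policy)."""
    if label == -1:
        return {"ignore": None, "zeros": 0, "ones": 1}[policy]
    return label

def masked_bce(prob, target, eps=1e-12):
    """Per-observation binary cross-entropy that skips masked
    (None) targets, so ignored uncertain labels contribute nothing."""
    if target is None:
        return 0.0
    return -(target * math.log(prob + eps)
             + (1 - target) * math.log(1 - prob + eps))
```

In practice the best policy varies by pathology, which is consistent with the abstract's finding that different uncertainty approaches suit different observations.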
Improving the Segmentation of Anatomical Structures in Chest Radiographs using U-Net with an ImageNet Pre-trained Encoder
Accurate segmentation of anatomical structures in chest radiographs is
essential for many computer-aided diagnosis tasks. In this paper we investigate
the latest fully-convolutional architectures for the task of multi-class
segmentation of the lung fields, heart, and clavicles in a chest radiograph. In
addition, we explore the influence of using different loss functions in the
training process of a neural network for semantic segmentation. We evaluate all
models on a common benchmark of 247 X-ray images from the JSRT database and
ground-truth segmentation masks from the SCR dataset. Our best performing
architecture is a modified U-Net that benefits from pre-trained encoder
weights. This model outperformed the current state-of-the-art methods tested on
the same benchmark, with Jaccard overlap scores of 96.1% for lung fields, 90.6%
for the heart, and 85.5% for clavicles.
Comment: Presented at the First International Workshop on Thoracic Image Analysis (TIA), MICCAI 2018
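The Jaccard overlap scores reported above are intersection-over-union between a predicted binary mask and the ground truth; a minimal sketch over flattened 0/1 masks (with the empty-union case defined as perfect overlap by convention here):

```python
def jaccard(pred, truth):
    """Jaccard overlap (intersection over union) between two binary
    segmentation masks given as flat 0/1 sequences - the metric behind
    scores such as 96.1% for lung fields."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0
```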
Adaptive Segmentation of Knee Radiographs for Selecting the Optimal ROI in Texture Analysis
The purposes of this study were to investigate: 1) the effect of placement of
region-of-interest (ROI) for texture analysis of subchondral bone in knee
radiographs, and 2) the ability of several texture descriptors to distinguish
between the knees with and without radiographic osteoarthritis (OA). Bilateral
posterior-anterior knee radiographs were analyzed from the baselines of the OAI and
MOST datasets. A fully automatic method to locate the most informative region
from subchondral bone using adaptive segmentation was developed. We used an
oversegmentation strategy for partitioning knee images into the compact regions
that follow natural texture boundaries. LBP, Fractal Dimension (FD), Haralick
features, Shannon entropy, and HOG methods were computed within the standard
ROI and within the proposed adaptive ROIs. Subsequently, we built logistic
regression models to identify and compare the performances of each texture
descriptor and each ROI placement method in a 5-fold cross-validation setting.
Importantly, we also investigated the generalizability of our approach by
training the models on the OAI dataset and testing them on MOST. We used the area under
the receiver operating characteristic (ROC) curve (AUC) and average precision
(AP) obtained from the precision-recall (PR) curve to compare the results. We
found that the adaptive ROI improves the classification performance (OA vs.
non-OA) over the commonly used standard ROI (up to a 9% increase in AUC).
We also observed that, of all the texture descriptors, LBP yielded the best
performance in all settings with the best AUC of 0.840 [0.825, 0.852] and
associated AP of 0.804 [0.786, 0.820]. Compared to the current state-of-the-art
approaches, our results suggest that the proposed adaptive ROI approach in
texture analysis of subchondral bone can increase the diagnostic performance
for detecting the presence of radiographic OA.
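The LBP descriptor that performed best here can be sketched as follows: each pixel gets an 8-bit code from comparing its 8 neighbours to the centre intensity, and a normalised histogram of those codes over the ROI serves as the feature vector fed to the logistic regression models. The neighbour ordering and the >= comparison are common conventions, assumed here for illustration.

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for pixel (r, c): each
    neighbour sets one bit when its intensity is at least the centre
    value, giving a code in [0, 255]."""
    centre = img[r][c]
    # neighbours clockwise from the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes over the interior
    pixels of an ROI - a texture feature vector usable by a
    classifier such as logistic regression."""
    hist = [0] * 256
    n = 0
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
            n += 1
    return [v / n for v in hist] if n else hist
```

On a perfectly flat region every neighbour ties with the centre, so all bits are set and the histogram mass concentrates in a single bin; textured bone yields a spread over many codes, which is what the OA-vs-non-OA classifier exploits.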