Semantic Segmentation of Pathological Lung Tissue with Dilated Fully Convolutional Networks
Early and accurate diagnosis of interstitial lung diseases (ILDs) is crucial
for making treatment decisions, but can be challenging even for experienced
radiologists. The diagnostic procedure is based on the detection and
recognition of the different ILD pathologies in thoracic CT scans, yet their
manifestation often appears similar. In this study, we propose the use of a
deep purely convolutional neural network for the semantic segmentation of ILD
patterns, as the basic component of a computer aided diagnosis (CAD) system for
ILDs. The proposed CNN, which consists of convolutional layers with dilated
filters, takes as input a lung CT image of arbitrary size and outputs the
corresponding label map. We trained and tested the network on a dataset of 172
sparsely annotated CT scans, within a cross-validation scheme. The training was
performed in an end-to-end and semi-supervised fashion, utilizing both labeled
and non-labeled image regions. The experimental results show a significant
performance improvement with respect to the state of the art.
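The dilated filters at the core of the proposed network can be illustrated with a minimal single-channel NumPy sketch (the function name and setup are illustrative, not the authors' code; a real network stacks many such layers with learned kernels):

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """'Same'-size 2D correlation with a dilated kernel (zero padding).

    Dilation samples the input at spaced-out offsets, enlarging the
    receptive field without adding parameters or downsampling.
    """
    kh, kw = kernel.shape
    # Effective extent of the dilated kernel on the input grid.
    eff_h = dilation * (kh - 1) + 1
    eff_w = dilation * (kw - 1) + 1
    pad_h, pad_w = eff_h // 2, eff_w // 2
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Strided slice picks input pixels at dilated offsets.
            patch = padded[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

Because the output keeps the input's spatial size, stacking such layers lets the network take a CT slice of arbitrary size and emit a label map of matching size, as the abstract describes.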
Deep convolutional networks for automated detection of posterior-element fractures on spine CT
Injuries of the spine, and its posterior elements in particular, are a common
occurrence in trauma patients, with potentially devastating consequences.
Computer-aided detection (CADe) could assist in the detection and
classification of spine fractures. Furthermore, CAD could help assess the
stability and chronicity of fractures, as well as facilitate research into
optimization of treatment paradigms.
In this work, we apply deep convolutional networks (ConvNets) for the
automated detection of posterior element fractures of the spine. First, the
vertebral bodies of the spine, together with their posterior elements, are
segmented in spine CT using multi-atlas label fusion. Then, edge maps of the posterior elements
are computed. These edge maps serve as candidate regions for predicting a set
of probabilities for fractures along the image edges using ConvNets in a 2.5D
fashion (three orthogonal patches in axial, coronal and sagittal planes). We
explore three different methods for training the ConvNet using 2.5D patches
along the edge maps of 'positive' (fractured) and 'negative' (non-fractured)
posterior elements.
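The 2.5D sampling described above, three orthogonal patches per candidate point, can be sketched as follows (function name, patch size, and the in-volume clipping policy are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def extract_25d_patches(volume, center, half=16):
    """Extract axial, coronal and sagittal patches of size (2*half+1)^2
    centred on `center` = (z, y, x) from a 3D CT volume.

    The centre is clipped so every patch lies fully inside the volume.
    """
    z, y, x = (int(np.clip(c, half, s - half - 1))
               for c, s in zip(center, volume.shape))
    axial    = volume[z, y - half:y + half + 1, x - half:x + half + 1]
    coronal  = volume[z - half:z + half + 1, y, x - half:x + half + 1]
    sagittal = volume[z - half:z + half + 1, y - half:y + half + 1, x]
    return axial, coronal, sagittal
```

The three patches would then be fed to the ConvNet jointly, giving it limited 3D context at roughly the cost of 2D inference.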
An experienced radiologist retrospectively marked the location of 55
displaced posterior-element fractures in 18 trauma patients. We randomly split
the data into training and testing cases. In testing, we achieve an
area-under-the-curve of 0.857. This corresponds to 71% or 81% sensitivities at
5 or 10 false-positives per patient, respectively. Analysis of our set of
trauma patients demonstrates the feasibility of detecting posterior-element
fractures in spine CT images using computer vision techniques such as deep
convolutional networks.
Comment: To be presented at SPIE Medical Imaging, 2016, San Diego.
Self-paced Convolutional Neural Network for Computer Aided Detection in Medical Imaging Analysis
Tissue characterization has long been an important component of Computer
Aided Diagnosis (CAD) systems for automatic lesion detection and further
clinical planning. Motivated by the superior performance of deep learning
methods on various computer vision problems, there has been increasing work
applying deep learning to medical image analysis. However, the development of a
robust and reliable deep learning model for computer-aided diagnosis is still
highly challenging due to the combination of the high heterogeneity in the
medical images and the relative lack of training samples. Specifically,
annotation and labeling of the medical images is much more expensive and
time-consuming than other applications and often involves manual labor from
multiple domain experts. In this work, we propose a multi-stage, self-paced
learning framework utilizing a convolutional neural network (CNN) to classify
Computed Tomography (CT) image patches. The key contribution of this approach
is that we augment the size of training samples by refining the unlabeled
instances with a self-paced learning CNN. Implementing the framework on
high-performance computing servers, including an NVIDIA DGX-1 machine, we
found that the self-paced boosted network consistently outperformed the
original network even with very scarce manual
labels. The performance gain indicates that applications with limited training
samples such as medical image analysis can benefit from using the proposed
framework.
Comment: Accepted by the 8th International Workshop on Machine Learning in Medical Imaging (MLMI 2017).
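The self-paced selection idea, promoting confidently predicted unlabeled instances into the training set round by round, can be sketched as below. The thresholds and schedule are illustrative assumptions; the actual framework retrains the CNN between rounds rather than reusing fixed probabilities:

```python
import numpy as np

def self_paced_rounds(unlabeled_probs, start_thresh=0.95, step=0.05, n_rounds=3):
    """Toy self-paced selection over unlabeled instances.

    `unlabeled_probs` is an (N, C) array of predicted class probabilities.
    Each round, instances whose top class probability meets the current
    threshold are promoted (pseudo-labeled); the threshold is then relaxed
    so that harder examples are admitted in later rounds.
    """
    selected = np.zeros(len(unlabeled_probs), dtype=bool)
    thresh = start_thresh
    history = []
    for _ in range(n_rounds):
        # 'Easy' examples first: only confident predictions are promoted.
        newly = (~selected) & (unlabeled_probs.max(axis=1) >= thresh)
        selected |= newly
        history.append(int(newly.sum()))
        thresh -= step  # relax the threshold for the next round
    return selected, history
```

In the full framework each round would retrain the CNN on the enlarged training set before re-scoring the remaining unlabeled pool, which is what yields the reported gains under scarce manual labels.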