188 research outputs found
Deep Learning for Automated Medical Image Analysis
Medical imaging is an essential tool in many areas of medicine, used for both
diagnosis and treatment. However, reading medical images and making diagnostic
or treatment recommendations requires specially trained medical specialists.
The current practice of reading medical images is labor-intensive,
time-consuming, costly, and error-prone. A computer-aided system that can
automatically make diagnostic and treatment recommendations would therefore be
highly desirable. Recent advances in deep learning enable us to rethink how
clinicians diagnose from medical images. In this thesis, we consider 1)
mammograms for detecting breast cancer, the most frequently diagnosed solid
cancer among U.S. women; 2) lung CT images for detecting lung cancer, the most
frequently diagnosed malignant cancer; and 3) head and neck CT images for
automated delineation of organs at risk in radiotherapy. First, we show how to
employ an adversarial scheme to generate hard examples that improve mammogram
mass segmentation. Second, we demonstrate how to use weakly labeled data for
mammogram-based breast cancer diagnosis by efficiently designing deep networks
for multi-instance learning. Third, the thesis walks through the DeepLung
system, which combines deep 3D ConvNets and gradient boosting machines (GBMs)
for automated lung nodule detection and classification. Fourth, we show how to
use weakly labeled data to improve an existing lung nodule detection system by
integrating deep learning with a probabilistic graphical model. Lastly, we
demonstrate AnatomyNet, which is thousands of times faster and more accurate
than previous methods for automated anatomy segmentation.
Comment: PhD Thesis
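The multi-instance learning idea behind the second contribution, in which a single image-level label supervises many patch-level predictions, can be illustrated with a minimal NumPy sketch of two pooling rules (max and noisy-OR) that are common in the MIL literature. The function name and pooling choices here are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

def mil_pool(instance_probs, method="max"):
    """Aggregate patch-level malignancy probabilities into one
    image-level probability, in the multi-instance learning spirit.

    instance_probs : iterable of per-patch probabilities in [0, 1].
    """
    p = np.asarray(instance_probs, dtype=float)
    if method == "max":
        # The image is positive if its most suspicious patch is positive.
        return float(p.max())
    if method == "noisy_or":
        # The image is positive unless every patch is negative.
        return float(1.0 - np.prod(1.0 - p))
    raise ValueError(f"unknown pooling method: {method}")

# A mammogram with one suspicious patch among mostly benign ones.
patches = [0.05, 0.02, 0.9, 0.1]
image_prob = mil_pool(patches, "max")  # 0.9, driven by the suspicious patch
```

Max pooling propagates gradient only through the most suspicious patch, while noisy-OR spreads it across all patches; either way, no patch-level labels are needed.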
CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance
We introduce CASED, a novel curriculum sampling algorithm that facilitates
the optimization of deep learning segmentation or detection models on data sets
with extreme class imbalance. We evaluate the CASED learning framework on the
task of lung nodule detection in chest CT. In contrast to two-stage solutions,
wherein nodule candidates are first proposed by a segmentation model and
refined by a second detection stage, CASED improves the training of deep nodule
segmentation models (e.g. UNet) to the point where state of the art results are
achieved using only a trivial detection stage. CASED improves the optimization
of deep segmentation models by allowing them to first learn how to distinguish
nodules from their immediate surroundings, while continuously adding a greater
proportion of difficult-to-classify global context, until uniformly sampling
from the empirical data distribution. Using CASED during training yields a
minimalist proposal to the lung nodule detection problem that tops the LUNA16
nodule detection benchmark with an average sensitivity score of 88.35%.
Furthermore, we find that models trained using CASED are robust to nodule
annotation quality by showing that comparable results can be achieved when only
a point and radius for each ground truth nodule are provided during training.
Finally, the CASED learning framework makes no assumptions with regard to
imaging modality or segmentation target and should generalize to other medical
imaging problems where class imbalance is a persistent challenge.
Comment: 20th International Conference on Medical Image Computing and Computer Assisted Intervention 2017
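The curriculum described above, which starts from nodule-centred patches and gradually mixes in uniformly sampled patches until reaching the empirical data distribution, can be sketched as an annealed mixture of two samplers. The linear schedule and the names below are illustrative assumptions, not the paper's exact implementation.

```python
import random

def cased_sample(nodule_patches, all_patches, epoch, total_epochs, rng=random):
    """Draw one training patch following a CASED-style curriculum.

    Early in training, patches come almost entirely from around known
    nodules (easy to distinguish from their immediate surroundings); the
    uniform share grows linearly until, by the last epoch, patches are
    drawn uniformly from the empirical data distribution.
    """
    uniform_share = min(1.0, epoch / max(1, total_epochs - 1))
    if rng.random() < uniform_share:
        return rng.choice(all_patches)    # global context, mostly background
    return rng.choice(nodule_patches)     # nodule neighbourhoods

# Epoch 0 draws only nodule-centred patches; the final epoch draws uniformly.
```

Because the model never stops seeing nodule-centred patches (they remain part of the empirical distribution), the curriculum only ever adds difficulty, never removes the easy examples entirely.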
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis: the steps of cancer diagnosis followed by the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. Moreover, considering all types of audience, the basic evaluation criteria are also discussed, including the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. Artificial intelligence is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be used successfully for intelligent image analysis. This study provides the basic framework of how such machine learning operates on medical imaging: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, allowing interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles successfully applied deep learning models for different types of cancer. Given the length of the manuscript, we restrict the discussion to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who choose to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch understanding of state-of-the-art achievements in the field.
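Two of the overlap criteria listed above, the Dice coefficient and the Jaccard index, are straightforward to compute from binary masks. A minimal NumPy sketch follows; the small epsilon guarding against empty masks is an implementation convenience, not part of the definitions.

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

def jaccard(pred, target, eps=1e-8):
    """Jaccard index: |A∩B| / |A∪B| for binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + eps)

# Half-overlapping masks: intersection 1, sizes 2 and 2, union 3.
a = [1, 1, 0, 0]
b = [1, 0, 1, 0]
d, j = dice(a, b), jaccard(a, b)  # ≈ 0.5 and ≈ 0.333
```

The two scores are related by J = D / (2 − D), so they rank segmentations identically and papers often report only one of them.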
Attention Gated Networks: Learning to Leverage Salient Regions in Medical Images
We propose a novel attention gate (AG) model for medical image analysis that
automatically learns to focus on target structures of varying shapes and sizes.
Models trained with AGs implicitly learn to suppress irrelevant regions in an
input image while highlighting salient features useful for a specific task.
This enables us to eliminate the necessity of using explicit external
tissue/organ localisation modules when using convolutional neural networks
(CNNs). AGs can be easily integrated into standard CNN models such as VGG or
U-Net architectures with minimal computational overhead while increasing the
model sensitivity and prediction accuracy. The proposed AG models are evaluated
on a variety of tasks, including medical image classification and segmentation.
For classification, we demonstrate the use case of AGs in scan plane detection
for fetal ultrasound screening. We show that the proposed attention mechanism
can provide efficient object localisation while improving the overall
prediction performance by reducing false positives. For segmentation, the
proposed architecture is evaluated on two large 3D CT abdominal datasets with
manual annotations for multiple organs. Experimental results show that AG
models consistently improve the prediction performance of the base
architectures across different datasets and training sizes while preserving
computational efficiency. Moreover, AGs guide the model activations to be
focused around salient regions, which provides better insights into how model
predictions are made. The source code for the proposed AG models is publicly
available.
Comment: Accepted for Medical Image Analysis (Special Issue on Medical Imaging with Deep Learning). arXiv admin note: substantial text overlap with arXiv:1804.03999, arXiv:1804.0533
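The additive attention gate described above can be sketched at a per-location level: skip-connection features and a gating signal from a coarser layer are projected into a joint space, and a sigmoid produces one attention coefficient per spatial location. This NumPy sketch uses randomly initialised weights and illustrative shapes; it follows the general additive-attention formulation rather than the paper's exact parameterisation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate over a flattened feature map.

    x   : (N, Cx) skip-connection features, one row per spatial location.
    g   : (Cg,)   gating signal from a coarser, deeper layer.
    Returns (alpha * x, alpha), where alpha in (0, 1) weights each location.
    """
    joint = np.maximum(x @ Wx + g @ Wg, 0.0)  # ReLU(W_x x + W_g g)
    alpha = sigmoid(joint @ psi)              # one coefficient per location
    return alpha[:, None] * x, alpha

# Toy shapes: 5 locations, 4 skip channels, 3 gating channels, 8 hidden units.
x = rng.normal(size=(5, 4))
g = rng.normal(size=3)
Wx, Wg, psi = rng.normal(size=(4, 8)), rng.normal(size=(3, 8)), rng.normal(size=8)
gated, alpha = attention_gate(x, g, Wx, Wg, psi)
# After training, irrelevant locations receive alpha near 0 and are suppressed.
```

Because the gate is a few small projections plus an elementwise product, it adds little overhead when inserted on the skip connections of a U-Net-style model.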
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017