Towards automatic pulmonary nodule management in lung cancer screening with deep learning
The introduction of lung cancer screening programs will produce an
unprecedented amount of chest CT scans in the near future, which radiologists
will have to read in order to decide on a patient follow-up strategy. According
to the current guidelines, the workup of screen-detected nodules strongly
relies on nodule size and nodule type. In this paper, we present a deep
learning system based on multi-stream multi-scale convolutional networks, which
automatically classifies all nodule types relevant for nodule workup. The
system processes raw CT data containing a nodule without the need for any
additional information such as nodule segmentation or nodule size and learns a
representation of 3D data by analyzing an arbitrary number of 2D views of a
given nodule. The deep learning system was trained with data from the Italian
MILD screening trial and validated on an independent set of data from the
Danish DLCST screening trial. We analyze the advantage of processing nodules at
multiple scales with a multi-stream convolutional network architecture, and we
show that the proposed deep learning system achieves performance at classifying
nodule type that surpasses that of classical machine learning approaches and
is within the inter-observer variability among four experienced human
observers. Comment: Published in Scientific Reports
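The multi-stream multi-scale input described above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: it extracts the three orthogonal center slices of a cubic nodule patch at two hypothetical crop sizes (one 2D view per stream) and fuses per-view class scores by averaging.

```python
import numpy as np

def extract_views(volume, scales=(32, 64)):
    """Extract axial, coronal, and sagittal center slices of a cubic
    nodule patch at several scales (crop sizes around the center),
    mimicking a multi-stream multi-scale 2D-view input."""
    views = []
    c = volume.shape[0] // 2
    for s in scales:
        h = s // 2
        crop = volume[c - h:c + h, c - h:c + h, c - h:c + h]
        m = crop.shape[0] // 2
        views.append(crop[m, :, :])   # axial view
        views.append(crop[:, m, :])   # coronal view
        views.append(crop[:, :, m])   # sagittal view
    return views

def fuse_predictions(view_scores):
    """Late fusion: average the per-view class scores (one stream per view)."""
    return np.mean(np.stack(view_scores), axis=0)

patch = np.random.rand(64, 64, 64)          # toy nodule patch
views = extract_views(patch)                # 3 views x 2 scales = 6 streams
# hypothetical softmax score vector per view (6 nodule-type classes)
scores = [np.full(6, 1.0 / 6) for _ in views]
fused = fuse_predictions(scores)
```

Because the slice extraction works for an arbitrary list of views, the number of 2D views per nodule can vary, which matches the "arbitrary number of 2D views" property mentioned in the abstract.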
Automatic Pulmonary Nodule Detection in CT Scans Using Convolutional Neural Networks Based on Maximum Intensity Projection
Accurate pulmonary nodule detection is a crucial step in lung cancer
screening. Computer-aided detection (CAD) systems are not routinely used by
radiologists for pulmonary nodule detection in clinical practice despite their
potential benefits. Maximum intensity projection (MIP) images improve the
detection of pulmonary nodules in radiological evaluation with computed
tomography (CT) scans. Inspired by the clinical methodology of radiologists, we
aim to explore the feasibility of applying MIP images to improve the
effectiveness of automatic lung nodule detection using convolutional neural
networks (CNNs). We propose a CNN-based approach that takes MIP images of
different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices
as input. Such an approach augments the two-dimensional (2-D) CT slice images
with more representative spatial information that helps discriminate nodules
from vessels through their morphologies. Our proposed method achieves
sensitivity of 92.67% with 1 false positive per scan and sensitivity of 94.19%
with 2 false positives per scan for lung nodule detection on 888 scans in the
LIDC-IDRI dataset. The use of thick MIP images helps the detection of small
pulmonary nodules (3 mm-10 mm) and results in fewer false positives.
Experimental results show that utilizing MIP images can increase the
sensitivity and lower the number of false positives, which demonstrates the
effectiveness and significance of the proposed MIP-based CNNs framework for
automatic pulmonary nodule detection in CT scans. The proposed method also
shows that CNNs can benefit nodule detection by incorporating the clinical
reading procedure. Comment: Submitted to IEEE TM
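The slab-wise maximum intensity projection used as input above can be computed directly from the CT volume. The sketch below assumes axis 0 is the axial (z) direction and a uniform slice thickness; each output image is the voxel-wise maximum over a sliding slab, as with the 5, 10, and 15 mm slabs in the abstract.

```python
import numpy as np

def mip_slabs(ct_volume, slice_thickness_mm=1.0, slab_mm=10.0):
    """Axial maximum intensity projection (MIP) images from a CT volume.
    Each output slice is the voxel-wise maximum over a slab of the
    requested thickness (slab_mm / slice_thickness_mm slices)."""
    n = max(1, int(round(slab_mm / slice_thickness_mm)))
    z = ct_volume.shape[0]
    return np.stack([ct_volume[i:i + n].max(axis=0) for i in range(z - n + 1)])

vol = np.random.rand(40, 64, 64)        # toy volume, 1 mm slices
mip10 = mip_slabs(vol, 1.0, 10.0)       # 10 mm slabs from 1 mm sections
```

With a 1 mm slab the function reduces to the identity, so the same pipeline can feed both the thin axial sections and the thick MIP images to the network.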
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis: the steps of cancer diagnosis followed by the typical classification methods used by doctors, giving readers a historical overview of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. For completeness, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Since the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. This study provides the basic framework of how such machine learning operates on medical imaging, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, allowing interested readers to experiment with the cited algorithms on their own diagnostic problems.
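Two of the evaluation criteria listed above, the Dice coefficient and the Jaccard index, can be computed from binary masks as in this short sketch; the example masks are made up for illustration.

```python
import numpy as np

def dice_and_jaccard(pred, truth):
    """Dice coefficient and Jaccard index for binary masks.
    Dice = 2|A∩B| / (|A| + |B|), Jaccard = |A∩B| / |A∪B|;
    the two are related by Dice = 2J / (1 + J)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jac = inter / union
    return dice, jac

pred  = np.array([[1, 1, 0], [0, 1, 0]])   # toy segmentation
truth = np.array([[1, 0, 0], [0, 1, 1]])   # toy ground truth
d, j = dice_and_jaccard(pred, truth)       # inter=2, union=4
```

The Dice coefficient weighs the overlap against the average mask size, while Jaccard weighs it against the union, which is why Dice is always the larger of the two for partial overlaps.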
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch overview of state-of-the-art achievements.
Self-paced Convolutional Neural Network for Computer Aided Detection in Medical Imaging Analysis
Tissue characterization has long been an important component of Computer
Aided Diagnosis (CAD) systems for automatic lesion detection and further
clinical planning. Motivated by the superior performance of deep learning
methods on various computer vision problems, there has been increasing work
applying deep learning to medical image analysis. However, the development of a
robust and reliable deep learning model for computer-aided diagnosis is still
highly challenging due to the combination of the high heterogeneity in the
medical images and the relative lack of training samples. Specifically,
annotation and labeling of the medical images is much more expensive and
time-consuming than other applications and often involves manual labor from
multiple domain experts. In this work, we propose a multi-stage, self-paced
learning framework utilizing a convolutional neural network (CNN) to classify
Computed Tomography (CT) image patches. The key contribution of this approach
is that we augment the size of training samples by refining the unlabeled
instances with a self-paced learning CNN. By implementing the framework on high
performance computing servers, including the NVIDIA DGX-1 machine, we obtained
experimental results showing that the self-paced boosted network
consistently outperformed the original network even with very scarce manual
labels. The performance gain indicates that applications with limited training
samples, such as medical image analysis, can benefit from using the proposed
framework. Comment: Accepted by the 8th International Workshop on Machine Learning in Medical
Imaging (MLMI 2017)
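The multi-stage self-paced idea above can be sketched as a pseudo-labeling loop: after each training stage, the most confidently predicted unlabeled patches are added to the training set with their predicted labels. The `train`/`predict` callbacks and the nearest-centroid toy model below are hypothetical stand-ins for the paper's CNN, used only to make the loop runnable.

```python
import numpy as np

def self_paced_rounds(labeled_x, labeled_y, unlabeled_x, train, predict,
                      rounds=3, conf_thresh=0.9):
    """Multi-stage self-paced sketch: pseudo-label confidently predicted
    unlabeled samples and fold them into the training set each round."""
    x, y = labeled_x, labeled_y
    for _ in range(rounds):
        model = train(x, y)
        if len(unlabeled_x) == 0:
            break
        probs = predict(model, unlabeled_x)   # (n, n_classes) class scores
        conf = probs.max(axis=1)
        keep = conf >= conf_thresh            # take the "easy" samples first
        x = np.concatenate([x, unlabeled_x[keep]])
        y = np.concatenate([y, probs[keep].argmax(axis=1)])
        unlabeled_x = unlabeled_x[~keep]
    return x, y

# Toy nearest-centroid stand-in for the CNN, just to exercise the loop.
def toy_train(x, y):
    return {c: x[y == c].mean(axis=0) for c in np.unique(y)}

def toy_predict(model, x):
    d = np.stack([np.linalg.norm(x - m, axis=1) for m in model.values()], axis=1)
    p = np.exp(-d)
    return p / p.sum(axis=1, keepdims=True)

lx = np.array([[0.0], [10.0]])
ly = np.array([0, 1])
ux = np.array([[0.1], [9.9], [5.0]])   # 5.0 is ambiguous and stays unlabeled
bx, by = self_paced_rounds(lx, ly, ux, toy_train, toy_predict)
```

Samples the model is unsure about are never pseudo-labeled, which is what keeps label noise from compounding across stages.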
Deep Learning Based Medical Image Analysis with Limited Data
Deep learning methods have shown great success in the area of computer vision. However, when solving problems in medical imaging, deep learning's power is confined by the limited data available. We present a series of novel methodologies for solving medical image analysis problems with a limited number of computed tomography (CT) scans available. Our method, based on deep learning, uses different strategies, including generative adversarial networks, two-stage training, infusing expert knowledge, voting, and converting to other spaces. It addresses the dataset limitation issue for current medical imaging problems, specifically cancer detection and diagnosis, shows very good performance, and outperforms the state-of-the-art results in the literature. With self-learned features, deep learning based techniques have started to be applied to biomedical imaging problems, and various structures have been designed. In spite of their simplicity and anticipated good performance,
deep learning based techniques cannot perform to their best extent due to the limited size of datasets for medical imaging problems. On the other side, traditional hand-engineered feature based methods have been studied in past decades, and a lot of useful features have been found by this research for the task of detecting and diagnosing pulmonary nodules on CT scans, but these methods are usually performed through a series of complicated procedures with manual, empirical parameter adjustments. Our method significantly reduces the complications of the traditional procedures for pulmonary nodule detection, while retaining and even outperforming the state-of-the-art accuracy. Besides, we make a contribution on how to convert low-dose CT images to full-dose CT so as to adapt current models to newly emerged low-dose CT data.
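One of the strategies named above, voting, can be sketched as a majority vote over per-model class predictions. The vote matrix below is made up for illustration; it is not data from the thesis.

```python
import numpy as np

def majority_vote(predictions):
    """Majority voting over per-model predictions.
    `predictions` is an (n_models, n_samples) array of class labels;
    the returned label for each sample is the most frequent vote."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    counts = np.stack([(preds == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)

votes = np.array([[1, 0, 1],    # model A's labels for 3 samples
                  [1, 1, 0],    # model B
                  [0, 1, 1]])   # model C
final = majority_vote(votes)    # -> [1, 1, 1]
```

With an odd number of models and binary labels there is always a strict majority, so no tie-breaking rule is needed in this two-class sketch.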