Abnormality Detection in Mammography using Deep Convolutional Neural Networks
Breast cancer is the most common cancer in women worldwide. The most common
screening technology is mammography. To reduce the cost and workload of
radiologists, we propose a computer-aided detection approach for classifying
and localizing calcifications and masses in mammogram images. To improve on
conventional approaches, we apply deep convolutional neural networks (CNN) for
automatic feature learning and classifier building. In computer-aided
mammography, deep CNN classifiers cannot be trained directly on full mammogram
images because of the loss of image details from resizing at input layers.
Instead, our classifiers are trained on labelled image patches and then adapted
to work on full mammogram images for localizing the abnormalities.
State-of-the-art deep convolutional neural networks are compared on their
performance in classifying the abnormalities. Experimental results indicate
that VGGNet achieves the best overall classification accuracy at 92.53%. For
localizing abnormalities, ResNet is selected for computing class activation
maps because it can be deployed without structural change or further
training. Our approach demonstrates that deep convolutional neural network
classifiers have remarkable localization capabilities even though no
supervision on the location of abnormalities is provided.
Comment: 6 pages
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis: the steps of cancer diagnosis, followed by the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. Moreover, with all types of audience in mind, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence for cancer diagnosis is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be used successfully for intelligent image analysis. The basic framework of how such machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), the multi-scale convolutional neural network (M-CNN), and the multi-instance learning convolutional neural network (MIL-CNN). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles deep learning models that have been applied successfully to different types of cancer. Given the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who choose to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch knowledge of the state-of-the-art achievements.
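Most of the evaluation criteria listed above derive directly from the binary confusion matrix. A minimal sketch computing several of them (the function name and toy labels are illustrative, not from the paper):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Confusion-matrix-based metrics for binary labels (0/1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    sensitivity = tp / (tp + fn)            # recall / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / len(y_true)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    dice = 2 * tp / (2 * tp + fp + fn)      # equals F1 for binary labels
    jaccard = tp / (tp + fp + fn)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                f1=f1, dice=dice, jaccard=jaccard)

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(round(m["f1"], 3))  # F1 coincides with the Dice coefficient here
```

The ROC curve and AUC additionally require continuous scores rather than hard predictions: sensitivity and (1 − specificity) are traced out as the decision threshold sweeps over the scores.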
Deep Multi-instance Networks with Sparse Label Assignment for Whole Mammogram Classification
Mammogram classification is directly related to computer-aided diagnosis of
breast cancer. Traditional methods rely on regions of interest (ROIs) which
require great efforts to annotate. Inspired by the success of using deep
convolutional features for natural image analysis and multi-instance learning
(MIL) for labeling a set of instances/patches, we propose end-to-end trained
deep multi-instance networks for mass classification based on whole mammograms
without the aforementioned ROIs. We explore three different schemes to
construct deep multi-instance networks for whole mammogram classification.
Experimental results on the INbreast dataset demonstrate the robustness of the
proposed networks compared to previous work that uses segmentation and
detection annotations.
Comment: MICCAI 2017 Camera Ready
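In multi-instance learning of this kind, the whole mammogram is a bag of patch instances, and only the bag carries a label. One common aggregation scheme is max pooling over patch scores; the sketch below is a generic illustration of that idea, not the paper's exact label-assignment scheme:

```python
import numpy as np

def mil_max_pooling(patch_scores):
    """Max-pooling multi-instance aggregation.

    patch_scores: (num_patches,) malignancy probabilities, one per
    image patch (instance). The whole mammogram (the bag) is scored
    by its most suspicious patch: a single malignant-looking patch
    is enough to flag the image, while a bag is negative only when
    every patch looks benign.
    """
    return float(np.max(patch_scores))

# Toy bag: most patches look benign, one patch scores high.
scores = np.array([0.05, 0.10, 0.92, 0.07])
print(mil_max_pooling(scores))  # 0.92
```

Because the max is taken end-to-end, the gradient of the bag label flows back only through the most suspicious patch; alternatives such as mean pooling or sparse top-k assignment spread the supervision over more instances.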
Deep learning networks find unique mammographic differences in previous negative mammograms between interval and screen-detected cancers: a case-case study.
Background: To determine whether mammographic features from deep learning networks can be applied in breast cancer to identify groups at risk of interval invasive cancer due to masking, beyond traditional breast density measures.
Methods: Full-field digital screening mammograms acquired in our clinics between 2006 and 2015 were reviewed. Transfer learning of a deep learning network with weights initialized from ImageNet was performed to classify mammograms that were followed by an invasive interval or screen-detected cancer within 12 months of the mammogram. Hyperparameter optimization was performed and the network was visualized through saliency maps. Prediction loss and accuracy were calculated using this deep learning network. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were generated with the outcome of interval cancer using the deep learning network and compared to predictions from conditional logistic regression, with errors quantified through contingency tables.
Results: Pre-cancer mammograms of 182 interval and 173 screen-detected cancers were split into training/test cases at an 80/20 ratio. Using Breast Imaging-Reporting and Data System (BI-RADS) density alone, the ability to correctly classify interval cancers was moderate (AUC = 0.65). The optimized deep learning model achieved an AUC of 0.82. Contingency table analysis showed that the network correctly classified 75.2% of the mammograms and that incorrect classifications were slightly more common for the interval cancer mammograms. Saliency maps of each cancer case indicated that local information could drive classification more than global image information.
Conclusions: Pre-cancerous mammograms contain imaging information beyond breast density that can be identified with deep learning networks to predict the probability of breast cancer detection.
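The saliency maps used to visualize such a network attribute the class score to individual input pixels, usually via the gradient of the score with respect to the image. A minimal numerical sketch of the idea (finite differences on a toy linear model; the function name is illustrative, and a real implementation would use autodiff gradients instead):

```python
import numpy as np

def saliency_map(model_fn, image, eps=1e-4):
    """Finite-difference saliency: |d score / d pixel| per pixel.

    model_fn: maps a 2-D image array to a scalar class score.
    Each pixel is perturbed by eps and the change in the score is
    recorded, approximating the gradient magnitude at that pixel.
    """
    sal = np.zeros_like(image)
    base = model_fn(image)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        bumped = image.copy()
        bumped[idx] += eps
        sal[idx] = abs(model_fn(bumped) - base) / eps
    return sal

# Toy "model": a weighted sum, so saliency should recover |weights|.
w = np.array([[1.0, -2.0], [0.5, 0.0]])
img = np.ones((2, 2))
sal = saliency_map(lambda x: float(np.sum(w * x)), img)
print(sal)
```

On a real network the same map is computed in one backward pass and overlaid on the mammogram, which is how local versus global contributions to the classification can be compared.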