Detecting and classifying lesions in mammograms with Deep Learning
In the last two decades, Computer Aided Diagnostics (CAD) systems have been developed to help radiologists analyze screening mammograms. The reported benefits of current CAD technologies are contradictory, and these systems must be improved before they can be considered genuinely useful. Since 2012, deep convolutional neural networks (CNNs) have achieved tremendous success in image recognition, reaching human-level performance. These methods have greatly surpassed the traditional approaches, which are similar to currently used CAD solutions. Deep CNNs have the potential to revolutionize medical image analysis. We propose a CAD system based on one of the most successful object detection frameworks, Faster R-CNN. The system detects and classifies malignant or benign lesions on a mammogram without any human intervention. The proposed method sets the state-of-the-art classification performance on the public INbreast database, with AUC = 0.95. The approach described here achieved 2nd place in the Digital Mammography DREAM Challenge with AUC = 0.85. When used as a detector, the system reaches high sensitivity with very few false positive marks per image on the INbreast dataset. Source code, the trained model, and an OsiriX plugin are available online at https://github.com/riblidezso/frcnn_cad
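As a detector, Faster R-CNN keeps only a few high-confidence marks per image by suppressing overlapping candidate boxes. A minimal sketch of that non-maximum suppression step (NumPy; the boxes and scores below are hypothetical illustrations, not outputs of the paper's model):

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union between one box and an array of boxes (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the best-scoring box, drop its overlaps."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) < iou_thresh]
    return keep

# Two heavily overlapping candidates and one separate candidate lesion.
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the lower-scoring overlapping box is suppressed
```

Keeping only the top-scoring box per cluster of overlaps is what lets a detector report high sensitivity with few false positive marks per image.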
Abnormality Detection in Mammography using Deep Convolutional Neural Networks
Breast cancer is the most common cancer in women worldwide. The most common
screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer-aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNNs) for automatic feature learning and classifier building. In computer-aided
mammography, deep CNN classifiers cannot be trained directly on full mammogram
images because of the loss of image details from resizing at input layers.
Instead, our classifiers are trained on labelled image patches and then adapted
to work on full mammogram images for localizing the abnormalities.
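One simple way to adapt a patch-trained classifier to a full image is to slide it across the image and record a score per location, producing an abnormality heatmap. A minimal sketch of that sliding-window adaptation (NumPy; the scoring function is a hypothetical stand-in for the trained patch CNN):

```python
import numpy as np

def patch_score(patch):
    """Hypothetical stand-in for a trained patch classifier: scores mean brightness."""
    return patch.mean()

def scan_image(image, patch_size=8, stride=4):
    """Slide the patch classifier over the full image to build a score heatmap."""
    h, w = image.shape
    rows = (h - patch_size) // stride + 1
    cols = (w - patch_size) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            heatmap[i, j] = patch_score(image[y:y + patch_size, x:x + patch_size])
    return heatmap

# A dark image with one bright square: the heatmap peaks over that region.
image = np.zeros((32, 32))
image[8:16, 8:16] = 1.0
heatmap = scan_image(image)
peak = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print(peak)  # heatmap cell covering the bright region
```

The heatmap's peak locations, mapped back through the stride, give candidate abnormality positions without retraining the patch model on full-size inputs.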
State-of-the-art deep convolutional neural networks are compared on their performance in classifying the abnormalities. Experimental results indicate that VGGNet achieves the best overall accuracy, 92.53%, in classification. For localizing abnormalities, ResNet is selected for computing class activation maps because it can be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities even though no supervision on the location of abnormalities is provided.
Comment: 6 pages
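A class activation map is a weighted sum of the network's final convolutional feature maps, using the fully connected classifier weights of the target class. A minimal sketch of that computation (NumPy; the feature maps and weights are random stand-ins for a ResNet forward pass):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a ResNet forward pass: C feature maps of size H x W before
# global average pooling, plus the final fully connected layer's weights.
C, H, W, num_classes = 4, 6, 6, 2
features = rng.random((C, H, W))
fc_weights = rng.random((num_classes, C))

def class_activation_map(features, fc_weights, target_class):
    """CAM: weight each feature map by the target class's classifier weight and sum."""
    weights = fc_weights[target_class]             # shape (C,)
    cam = np.tensordot(weights, features, axes=1)  # shape (H, W)
    return cam

cam = class_activation_map(features, fc_weights, target_class=1)
print(cam.shape)  # (H, W): upsample to image size to localize the abnormality
```

This works without structural change or retraining precisely because architectures like ResNet already end in global average pooling followed by a single fully connected layer.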
Deep learning networks find unique mammographic differences in previous negative mammograms between interval and screen-detected cancers: a case-case study.
Background: To determine if mammographic features from deep learning networks can be applied in breast cancer to identify groups at interval invasive cancer risk due to masking, beyond using traditional breast density measures.
Methods: Full-field digital screening mammograms acquired in our clinics between 2006 and 2015 were reviewed. Transfer learning of a deep learning network with weights initialized from ImageNet was performed to classify mammograms that were followed by an invasive interval or screen-detected cancer within 12 months of the mammogram. Hyperparameter optimization was performed and the network was visualized through saliency maps. Prediction loss and accuracy were calculated using this deep learning network. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were generated with the outcome of interval cancer using the deep learning network and compared to predictions from conditional logistic regression, with errors quantified through contingency tables.
Results: Pre-cancer mammograms of 182 interval and 173 screen-detected cancers were split into training/test cases at an 80/20 ratio. Using Breast Imaging-Reporting and Data System (BI-RADS) density alone, the ability to correctly classify interval cancers was moderate (AUC = 0.65). The optimized deep learning model achieved an AUC of 0.82. Contingency table analysis showed the network correctly classified 75.2% of the mammograms and that incorrect classifications were slightly more common for the interval cancer mammograms. Saliency maps of each cancer case found that local information could drive classification of cases more than global image information.
Conclusions: Pre-cancerous mammograms contain imaging information beyond breast density that can be identified with deep learning networks to predict the probability of breast cancer detection.
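The study compares models by AUC (0.65 for BI-RADS density alone vs. 0.82 for the deep network). For a binary outcome, AUC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch of computing it from predicted scores (NumPy; the labels and scores below are hypothetical):

```python
import numpy as np

def auc(labels, scores):
    """AUC via the pairwise (Mann-Whitney U) formulation; ties count half a win."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count positive/negative pairs where the positive case outscores the negative.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(labels, scores))  # 8 of 9 positive/negative pairs are ranked correctly
```

An AUC of 0.5 corresponds to chance ranking and 1.0 to perfect separation, which is why the jump from 0.65 to 0.82 is a substantive improvement over density alone.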