Deep learning networks find unique mammographic differences in previous negative mammograms between interval and screen-detected cancers: a case-case study.
Background: To determine whether mammographic features extracted by deep learning networks can identify groups of women at risk of interval invasive cancer due to masking, beyond traditional breast density measures.
Methods: Full-field digital screening mammograms acquired in our clinics between 2006 and 2015 were reviewed. Transfer learning of a deep learning network, with weights initialized from ImageNet, was performed to classify mammograms that were followed by an invasive interval or screen-detected cancer within 12 months of the mammogram. Hyperparameters were optimized, and the network was visualized through saliency maps. Prediction loss and accuracy were calculated for the deep learning network. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were generated for the interval cancer outcome and compared with predictions from conditional logistic regression, with errors quantified through contingency tables.
Results: Pre-cancer mammograms of 182 interval and 173 screen-detected cancers were split into training and test cases at an 80/20 ratio. Using Breast Imaging-Reporting and Data System (BI-RADS) density alone, the ability to correctly classify interval cancers was moderate (AUC = 0.65). The optimized deep learning model achieved an AUC of 0.82. Contingency table analysis showed the network correctly classified 75.2% of the mammograms, with incorrect classifications slightly more common for the interval cancer mammograms. Saliency maps of each cancer case showed that local information, more than global image information, drove the classification of cases.
Conclusions: Pre-cancerous mammograms contain imaging information beyond breast density that can be identified with deep learning networks to predict the probability of breast cancer detection.
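A minimal sketch of the transfer-learning setup this abstract describes, assuming PyTorch, torchvision, and scikit-learn: an ImageNet-initialized backbone is fine-tuned for the binary interval vs. screen-detected outcome, and ROC AUC is computed on held-out cases. The ResNet-50 backbone, optimizer settings, and the `train_loader`/`test_loader` objects are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch of the abstract's transfer-learning setup; the backbone,
# hyperparameters, and data loaders are assumptions, not the authors' code.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

# Weights initialized from ImageNet, as in the abstract; classifier head
# replaced with a single logit for the binary outcome (1 = interval cancer).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(train_loader):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), labels.float())
        loss.backward()
        optimizer.step()

@torch.no_grad()
def test_auc(test_loader):
    model.eval()
    scores, targets = [], []
    for images, labels in test_loader:
        scores += torch.sigmoid(model(images).squeeze(1)).tolist()
        targets += labels.tolist()
    # ROC AUC against the interval cancer outcome, as reported in the abstract.
    return roc_auc_score(targets, scores)
```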
Learning to segment with image-level supervision
Deep convolutional networks have achieved state-of-the-art results for semantic
image segmentation tasks. However, training these networks requires access to
densely labeled images, which are known to be very expensive to obtain. On the
other hand, the web provides an almost unlimited source of images annotated at
the image level. How can one utilize this much larger weakly annotated set for
tasks that require dense labeling? Prior work often relied on localization
cues, such as saliency maps, objectness priors, and bounding boxes, to address
this challenging problem. In this paper, we propose a model that generates
auxiliary labels for each image, while simultaneously forcing the output of the
CNN to satisfy the mean-field constraints imposed by a conditional random
field. We show that one can enforce the CRF constraints by forcing the
distribution at each pixel to be close to the distribution of its neighbors.
This is in stark contrast with methods that compute a recursive expansion of
the mean-field distribution using a recurrent architecture and train the
resultant distribution. Instead, the proposed model adds an extra loss term to
the output of the CNN, and hence, is faster than recursive implementations. We
achieve state-of-the-art results for weakly supervised semantic image
segmentation on the VOC 2012 dataset, assuming no manually labeled pixel-level
information is available. Furthermore, incorporating conditional random fields
into the CNN incurs little extra time during training.
Comment: Published in WACV 201
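One illustrative reading of the extra loss term described above, assuming PyTorch: encourage each pixel's predicted class distribution to stay close, in KL divergence, to the average distribution of its 4-neighbors. The neighbor kernel and the KL form are assumptions made for this sketch; the paper's exact mean-field constraints and loss weighting are not reproduced here.

```python
# Sketch of a neighbor-consistency loss in the spirit of the abstract:
# penalize KL divergence between each pixel's class distribution and the
# average distribution of its 4-neighbors. Illustrative only; not the
# paper's exact mean-field formulation.
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(logits):
    """logits: (B, C, H, W) raw CNN outputs over C classes."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)

    # Depthwise convolution that averages the 4-neighbor distributions.
    num_classes = logits.shape[1]
    kernel = torch.tensor([[0., 1., 0.],
                           [1., 0., 1.],
                           [0., 1., 0.]],
                          dtype=logits.dtype, device=logits.device) / 4.0
    kernel = kernel.view(1, 1, 3, 3).repeat(num_classes, 1, 1, 1)
    p_neighbors = F.conv2d(p, kernel, padding=1, groups=num_classes)

    # KL(p || p_neighbors), summed over classes, averaged over pixels.
    kl = (p * (log_p - p_neighbors.clamp_min(1e-8).log())).sum(dim=1)
    return kl.mean()

# Usage: total_loss = weak_label_loss + lam * neighbor_consistency_loss(logits)
```

Because this term is just added to the CNN's loss, each training step stays a single forward/backward pass, which is the speed advantage the abstract claims over unrolling mean-field updates in a recurrent architecture.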
Visual Saliency Based on Multiscale Deep Features
Visual saliency is a fundamental problem in both cognitive and computational
sciences, including computer vision. In this CVPR 2015 paper, we discover that
a high-quality visual saliency model can be trained with multiscale features
extracted using a popular deep learning architecture, convolutional neural
networks (CNNs), which have had many successes in visual recognition tasks. For
learning such saliency models, we introduce a neural network architecture,
which has fully connected layers on top of CNNs responsible for extracting
features at three different scales. We then propose a refinement method to
enhance the spatial coherence of our saliency results. Finally, aggregating
multiple saliency maps computed for different levels of image segmentation can
further boost the performance, yielding saliency maps better than those
generated from a single segmentation. To promote further research and
evaluation of visual saliency models, we also construct a new large database of
4447 challenging images with pixelwise saliency annotations. Experimental
results demonstrate that our method achieves state-of-the-art performance on
all public benchmarks, improving the F-Measure by 5.0% and 13.2% on the MSRA-B
dataset and our new dataset (HKU-IS), respectively, and lowering the mean
absolute error by 5.7% and 35.1% on these two datasets.
Comment: To appear in CVPR 2015
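The multiscale architecture this abstract outlines can be sketched as follows, assuming PyTorch/torchvision: one feature vector is extracted per scale (here a region, its surrounding context, and the full image), concatenated, and scored by fully connected layers. The ResNet-50 backbone, the specific choice of scales, and the layer widths are assumptions; the paper's actual CNN, spatial refinement step, and multi-segmentation aggregation are not reproduced.

```python
# Sketch of multiscale deep features for saliency: CNN features from three
# scales, concatenated, then scored by fully connected layers. Backbone,
# scales, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class MultiscaleSaliency(nn.Module):
    def __init__(self, feat_dim=2048, hidden=1024):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Everything up to and including global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        # Fully connected layers on top of the concatenated per-scale features.
        self.mlp = nn.Sequential(
            nn.Linear(3 * feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, region, context, full_image):
        # One pooled feature vector per scale, concatenated into a single input.
        feats = [self.features(x).flatten(1)
                 for x in (region, context, full_image)]
        return torch.sigmoid(self.mlp(torch.cat(feats, dim=1)))  # saliency score
```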