Visualizing and Understanding Convolutional Networks
Large Convolutional Network models have recently demonstrated impressive
classification performance on the ImageNet benchmark. However, there is no clear
understanding of why they perform so well, or how they might be improved. In
this paper we address both issues. We introduce a novel visualization technique
that gives insight into the function of intermediate feature layers and the
operation of the classifier. We also perform an ablation study to discover the
performance contribution from different model layers. This enables us to find
model architectures that outperform Krizhevsky et al. on the ImageNet
classification benchmark. We show our ImageNet model generalizes well to other
datasets: when the softmax classifier is retrained, it convincingly beats the
current state-of-the-art results on the Caltech-101 and Caltech-256 datasets.
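To make the transfer step concrete, below is a minimal PyTorch sketch of the two mechanical pieces this abstract describes: reading out intermediate feature maps for visualization, and retraining only the softmax classifier of a pretrained ImageNet model on a new dataset. The model (torchvision's AlexNet as a stand-in for Krizhevsky et al.'s architecture), the layer names, and the class count are illustrative assumptions, not the paper's exact setup or its deconvnet visualization.

# A minimal sketch, assuming torchvision's AlexNet as a stand-in model.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

# 1) Record activations of every convolutional layer via forward hooks,
#    giving the per-layer feature maps one would visualize.
activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for idx, layer in enumerate(model.features):
    if isinstance(layer, nn.Conv2d):
        layer.register_forward_hook(save_activation(f"conv_{idx}"))

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed image
model(x)
for name, feat in activations.items():
    print(name, tuple(feat.shape))

# 2) Retrain only the softmax classifier: freeze the feature extractor,
#    then replace the final fully connected layer for the new dataset.
for p in model.parameters():
    p.requires_grad = False
num_classes = 101                  # e.g. Caltech-101
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
optimizer = torch.optim.SGD(model.classifier[-1].parameters(), lr=1e-3)

Freezing everything but the new final layer is what "when the softmax classifier is retrained" amounts to in practice: the pretrained features are reused as-is and only the last linear map is fit to the target dataset.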
Understanding Anatomy Classification Through Attentive Response Maps
One of the main challenges for broad adoption of deep learning-based models
such as convolutional neural networks (CNNs) is the lack of understanding of
their decisions. In many applications, a simpler, less capable model that can
be easily understood is preferable to a black-box model that has superior
performance. In this paper, we present an approach for designing CNNs based on
visualization of the internal activations of the model. We visualize the
model's response through attentive response maps obtained using a fractional
stride convolution technique and compare the results with known imaging
landmarks from the medical literature. We show that sufficiently deep and
capable models can be successfully trained to use the same medical landmarks a
human expert would use. Our approach not only allows for communicating the
model's decision process, but also offers insight towards detecting biases.
Comment: Accepted at ISBI, 201
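As an illustration of the mechanism this abstract names, the sketch below uses a fractional stride (transposed) convolution in PyTorch to project a low-resolution activation map back to input resolution, where it could be overlaid on the image and compared against known imaging landmarks. The tensor shapes and the single randomly initialized upsampling layer are assumptions for illustration, not the authors' trained model.

# A minimal sketch, assuming a 256-channel, 14x14 response from the
# network's last convolutional layer and a 224x224 input image.
import torch
import torch.nn as nn

response = torch.relu(torch.randn(1, 256, 14, 14))

# A transposed ("fractional stride") convolution with stride 16 maps
# 14x14 -> 224x224, projecting the response back toward pixel space.
project = nn.ConvTranspose2d(in_channels=256, out_channels=1,
                             kernel_size=16, stride=16)
attentive_map = project(response)      # shape: (1, 1, 224, 224)

# Normalize to [0, 1] so the map can be rendered as a heatmap overlay.
attentive_map = attentive_map - attentive_map.min()
attentive_map = attentive_map / attentive_map.max()
print(attentive_map.shape)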