Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
Existing deep convolutional neural networks (CNNs) require a fixed-size
(e.g., 224x224) input image. This requirement is "artificial" and may reduce
recognition accuracy for images or sub-images of arbitrary
size/scale. In this work, we equip the networks with another pooling strategy,
"spatial pyramid pooling", to eliminate the above requirement. The new network
structure, called SPP-net, can generate a fixed-length representation
regardless of image size/scale. Pyramid pooling is also robust to object
deformations. With these advantages, SPP-net should in general improve all
CNN-based image classification methods. On the ImageNet 2012 dataset, we
demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures
despite their different designs. On the Pascal VOC 2007 and Caltech101
datasets, SPP-net achieves state-of-the-art classification results using a
single full-image representation and no fine-tuning.
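To make the fixed-length property concrete, here is a minimal pure-Python sketch of pyramid max-pooling (an illustration of the idea, not the authors' implementation; the pyramid levels 1x1, 2x2, 4x4 are an assumed configuration):

```python
def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a feature map (list of C channels, each an H x W grid of
    floats) into a fixed-length vector.

    For each pyramid level n, the H x W extent is split into an n x n grid
    of bins and each bin is max-pooled per channel, so the output length is
    C * sum(n * n for n in levels) regardless of H and W.
    """
    h, w = len(fmap[0]), len(fmap[0][0])
    out = []
    for n in levels:
        # Bin edges chosen so the n x n bins tile the whole H x W extent.
        ys = [round(i * h / n) for i in range(n + 1)]
        xs = [round(j * w / n) for j in range(n + 1)]
        for i in range(n):
            for j in range(n):
                # Clamp so every bin is at least 1 pixel wide/tall.
                y0, y1 = ys[i], max(ys[i + 1], ys[i] + 1)
                x0, x1 = xs[j], max(xs[j + 1], xs[j] + 1)
                for ch in fmap:  # one max per channel per bin
                    out.append(max(v for row in ch[y0:y1] for v in row[x0:x1]))
    return out
```

A 13x10 map and a 7x7 map both yield a vector of length C * (1 + 4 + 16) = 21C, which is what lets a fixed-size classifier head accept variable-size inputs.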
The power of SPP-net is also significant in object detection. Using SPP-net,
we compute the feature maps from the entire image only once, and then pool
features in arbitrary regions (sub-images) to generate fixed-length
representations for training the detectors. This method avoids repeatedly
computing the convolutional features. In processing test images, our method is
24-102x faster than the R-CNN method, while achieving better or comparable
accuracy on Pascal VOC 2007.
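The speedup comes from amortizing the convolutional pass: features are computed once per image, and each candidate region is then pooled from the shared map. A toy sketch of that per-region pooling step (a simplification under assumed feature-map coordinates, not the paper's code; the 2x2 bin grid is an assumption):

```python
def pool_region(fmap, y0, y1, x0, x1, n=2):
    """Max-pool an arbitrary window [y0:y1, x0:x1] of a precomputed
    feature map (list of C channels, each an H x W grid) into an n x n
    grid of bins, yielding n * n * C values for any window size.

    The expensive convolutional features are computed once for the whole
    image; only this cheap pooling step runs per candidate region.
    """
    out = []
    for ch in fmap:
        for i in range(n):
            for j in range(n):
                # Split the window into n x n bins, each >= 1 pixel.
                by0 = y0 + round(i * (y1 - y0) / n)
                by1 = max(y0 + round((i + 1) * (y1 - y0) / n), by0 + 1)
                bx0 = x0 + round(j * (x1 - x0) / n)
                bx1 = max(x0 + round((j + 1) * (x1 - x0) / n), bx0 + 1)
                out.append(max(v for row in ch[by0:by1] for v in row[bx0:bx1]))
    return out
```

Regions of different sizes all map to the same fixed-length vector, so one detector can score every proposal without re-running the network per region.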
In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our
methods rank #2 in object detection and #3 in image classification among all 38
teams. This manuscript also introduces the improvement made for this
competition.
Comment: This manuscript is the accepted version for IEEE Transactions on
Pattern Analysis and Machine Intelligence (TPAMI) 2015. See Changelog.
Learning scale-variant and scale-invariant features for deep image classification
Convolutional Neural Networks (CNNs) require large image corpora to be
trained on classification tasks. The variation in image resolutions, sizes of
objects and patterns depicted, and image scales, hampers CNN training and
performance, because the task-relevant information varies over spatial scales.
Previous work attempting to deal with such scale variations focused on
encouraging scale-invariant CNN representations. However, scale-invariant
representations are incomplete representations of images, because images
contain scale-variant information as well. This paper addresses the combined
development of scale-invariant and scale-variant representations. We propose a
multi-scale CNN method to encourage the recognition of both types of features
and evaluate it on a challenging image classification task involving
task-relevant characteristics at multiple scales. The results show that our
multi-scale CNN outperforms a single-scale CNN. This leads to the conclusion
that encouraging the combined development of scale-invariant and scale-variant
representations in CNNs is beneficial to image recognition performance.
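The core idea, one shared feature extractor applied to rescaled copies of the image, can be illustrated with a toy sketch (not the paper's architecture; the average-pooling downscaler and the scale factors 1, 2, 4 are assumptions for illustration):

```python
def downscale(img, f):
    """Average-pool an H x W grid by integer factor f (H, W divisible by f)."""
    h, w = len(img) // f, len(img[0]) // f
    return [[sum(img[y * f + dy][x * f + dx]
                 for dy in range(f) for dx in range(f)) / (f * f)
             for x in range(w)] for y in range(h)]

def multi_scale_features(img, extractor, factors=(1, 2, 4)):
    """Apply one (shared-weight) feature extractor to rescaled copies of
    the image. `extractor` must return a fixed-length vector for any
    input size. Keeping the per-scale outputs separate preserves
    scale-VARIANT information; max-pooling them across scales gives a
    scale-INVARIANT summary. Both can be concatenated for the classifier.
    """
    per_scale = [extractor(downscale(img, f)) for f in factors]
    invariant = [max(vals) for vals in zip(*per_scale)]
    return per_scale, invariant
```

Concatenating `per_scale` with `invariant` gives a representation that carries both kinds of information, which is the combination the abstract argues is beneficial.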