Manitest: Are classifiers really invariant?
Invariance to geometric transformations is a highly desirable property of
automatic classifiers in many image recognition tasks. Nevertheless, it is
unclear to what extent state-of-the-art classifiers are invariant to basic
transformations such as rotations and translations. This is mainly due to the
lack of general methods that properly measure such an invariance. In this
paper, we propose a rigorous and systematic approach for quantifying the
invariance to geometric transformations of any classifier. Our key idea is to
cast the problem of assessing a classifier's invariance as the computation of
geodesics along the manifold of transformed images. We propose the Manitest
method, built on the efficient Fast Marching algorithm to compute the
invariance of classifiers. Our new method quantifies in particular the
importance of data augmentation for learning invariance from data, and the
increased invariance of convolutional neural networks with depth. We foresee
that the proposed generic tool for measuring invariance to a large class of
geometric transformations and arbitrary classifiers will have many applications
for evaluating and comparing classifiers based on their invariance, and help
improve the invariance of existing classifiers.
Comment: BMVC 201
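The geodesic idea above can be illustrated with a small sketch (a simplification, not the authors' code): discretize a one-parameter transformation group (here, rotation), approximate the manifold metric by pixel-space distances between neighboring transformed images, and propagate a front Dijkstra-style (a one-dimensional stand-in for Fast Marching) until the classifier's label flips. The geodesic length reached serves as the invariance score; the function name and grid are illustrative.

```python
# Simplified sketch of the Manitest idea (not the paper's implementation):
# geodesic distance along the manifold of rotated images, computed by a
# Dijkstra-style front propagation over a discrete grid of angles.
import heapq
import numpy as np
from scipy.ndimage import rotate

def invariance_score(image, classify, angles):
    """Geodesic distance from the identity transformation to the nearest
    label change, approximated on a discrete grid of rotation angles."""
    base_label = classify(image)
    imgs = [rotate(image, a, reshape=False, order=1) for a in angles]
    n = len(angles)
    dist = np.full(n, np.inf)
    start = int(np.argmin(np.abs(angles)))  # node closest to the identity
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist[i]:
            continue
        if classify(imgs[i]) != base_label:
            return d  # reached the decision boundary along the manifold
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                # Local metric: pixel-space distance between neighbors.
                step = float(np.linalg.norm(imgs[i] - imgs[j]))
                if d + step < dist[j]:
                    dist[j] = d + step
                    heapq.heappush(heap, (d + step, j))
    return np.inf  # label never changes: fully invariant on this grid
```

A small invariance score means a short geodesic suffices to change the decision; an infinite score means the classifier is invariant over the sampled transformations.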
Detail-Preserving Pooling in Deep Networks
Most convolutional neural networks use some method for gradually downscaling
the size of the hidden layers. This is commonly referred to as pooling, and is
applied to reduce the number of parameters, improve invariance to certain
distortions, and increase the receptive field size. Since pooling by nature is
a lossy process, it is crucial that each such layer maintains the portion of
the activations that is most important for the network's discriminability. Yet
the standard choices remain simple maximization or averaging over blocks (max
or average pooling) or plain downsampling in the form of strided convolutions.
In this
paper, we aim to leverage recent results on image downscaling for the purposes
of deep learning. Inspired by the human visual system, which focuses on local
spatial changes, we propose detail-preserving pooling (DPP), an adaptive
pooling method that magnifies spatial changes and preserves important
structural detail. Importantly, its parameters can be learned jointly with the
rest of the network. We analyze some of its theoretical properties and show its
empirical benefits on several datasets and networks, where DPP consistently
outperforms previous pooling approaches.
Comment: To appear at CVPR 201
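The detail-preserving weighting can be sketched in a few lines of NumPy (a simplification of the paper's DPP, not the authors' formulation): within each 2x2 block, activations are averaged with weights that grow with their deviation from the block mean, so high-contrast detail dominates the pooled output. The exponent `lam` stands in for the learnable parameter and is fixed here.

```python
# Minimal sketch of detail-preserving pooling (simplified): weights within
# each 2x2 block grow with deviation from the block mean, so a plain average
# is recovered as lam -> 0, while larger lam emphasizes structural detail.
import numpy as np

def detail_preserving_pool(x, lam=2.0, eps=1e-6):
    """Weighted 2x2 pooling on an (H, W) array with even H and W."""
    h, w = x.shape
    blocks = x.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    blocks = blocks.reshape(h // 2, w // 2, 4)
    mean = blocks.mean(axis=-1, keepdims=True)
    weight = (np.abs(blocks - mean) + eps) ** lam  # reward deviation (detail)
    weight /= weight.sum(axis=-1, keepdims=True)
    return (weight * blocks).sum(axis=-1)
```

For a block of (1, 1, 1, 9), average pooling yields 3, while this weighting yields 7: the outlier that carries the detail is preserved rather than washed out.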
Local Descriptors Optimized for Average Precision
Extraction of local feature descriptors is a vital stage in the solution
pipelines for numerous computer vision tasks. Learning-based approaches improve
performance in certain tasks, but still cannot replace handcrafted features in
general. In this paper, we improve the learning of local feature descriptors by
optimizing the performance of descriptor matching, which is a common stage that
follows descriptor extraction in local feature based pipelines, and can be
formulated as nearest neighbor retrieval. Specifically, we directly optimize a
ranking-based retrieval performance metric, Average Precision, using deep
neural networks. This general-purpose solution can also be viewed as a
listwise learning-to-rank approach, which is advantageous compared to recent
local ranking approaches. On standard benchmarks, descriptors learned with our
formulation achieve state-of-the-art results in patch verification, patch
retrieval, and image matching.
Comment: 13 pages, 8 figures. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 201
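The retrieval metric being optimized can be made concrete with a short sketch (illustrative only; the paper's contribution is a differentiable, listwise approximation of it, which this exact version is not). Given one query descriptor, candidates are ranked by distance, and Average Precision rewards rankings that place all matching patches ahead of non-matching ones.

```python
# Exact Average Precision of nearest-neighbor retrieval for one query
# descriptor (the non-differentiable metric the paper approximates).
import numpy as np

def average_precision(query, candidates, is_match):
    """query: (d,) array; candidates: (n, d) array; is_match: (n,) bools."""
    dists = np.linalg.norm(candidates - query, axis=1)
    order = np.argsort(dists)               # rank candidates by distance
    rel = np.asarray(is_match)[order].astype(float)
    hits = np.cumsum(rel)                   # matches retrieved at each depth
    precision = hits / np.arange(1, len(rel) + 1)
    return float((precision * rel).sum() / rel.sum())
```

A perfect ranking (all matches first) scores 1.0; pushing matches down the list lowers the score, which is exactly the behavior a listwise descriptor loss should penalize.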