Mining Discriminative Triplets of Patches for Fine-Grained Classification
Fine-grained classification involves distinguishing between similar
sub-categories based on subtle differences in highly localized regions;
therefore, accurate localization of discriminative regions remains a major
challenge. We describe a patch-based framework to address this problem. We
introduce triplets of patches with geometric constraints to improve the
accuracy of patch localization, and automatically mine discriminative
geometrically-constrained triplets for classification. The resulting approach
only requires object bounding boxes. Its effectiveness is demonstrated on
four publicly available fine-grained datasets, on which it outperforms or
matches state-of-the-art classification performance.
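The mined triplets combine patch appearance scores with a geometric-consistency term. A minimal sketch of how such a triplet might be scored (the scale-invariant triangle-shape constraint and all function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def triangle_shape(pts):
    """Side-length ratios of the triangle formed by three patch centers,
    normalized by perimeter so the descriptor is scale-invariant."""
    a = np.linalg.norm(pts[0] - pts[1])
    b = np.linalg.norm(pts[1] - pts[2])
    c = np.linalg.norm(pts[2] - pts[0])
    return np.array([a, b, c]) / (a + b + c)

def triplet_score(appearance_scores, centers, ref_centers, alpha=1.0):
    """Sum the three patch appearance scores, then penalize deviation of
    the candidate triangle's shape from a reference triangle mined from
    training images."""
    geom_penalty = np.abs(triangle_shape(centers) - triangle_shape(ref_centers)).sum()
    return float(np.sum(appearance_scores) - alpha * geom_penalty)

# Toy example: a candidate triplet close to the reference configuration.
ref = np.array([[10., 10.], [40., 12.], [25., 40.]])
cand = np.array([[12., 11.], [43., 14.], [26., 42.]])
score = triplet_score(np.array([0.9, 0.8, 0.85]), cand, ref)
```

A geometrically distorted triplet with the same appearance scores receives a lower score, which is the intended effect of the constraint.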
Part-based Multi-stream Model for Vehicle Searching
Due to growing demands in public security and intelligent transportation
systems, searching for a specific vehicle has become increasingly important.
Current studies usually treat a vehicle as a single integral object and then
train a distance metric to measure the similarity between vehicles. However,
raw images of vehicles with different identities may look nearly identical,
and background pixels may disturb the distance metric learning. In this
paper, we propose a method to segment an original vehicle image into several
discriminative foreground parts, where each part consists of fine-grained
regions termed discriminative patches. These parts, combined with the raw
image, are fed into the proposed deep learning network. The similarity of two
vehicle images can then be measured by computing the Euclidean distance
between their FC-layer features. The two main contributions of this paper are
as follows. Firstly, a method is proposed to estimate whether a patch in a
raw vehicle image is discriminative. Secondly, a new Part-based Multi-Stream
Model (PMSM) is designed and optimized for vehicle retrieval and
re-identification tasks. We evaluate the proposed method on the VehicleID
dataset, and the experimental results show that our method outperforms the
baseline.
Comment: Published in International Conference on Pattern Recognition 201
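Retrieval by Euclidean distance over FC-layer embeddings can be sketched as follows (the helper names and feature dimensionality are illustrative; the real embeddings would come from the trained PMSM network):

```python
import numpy as np

def l2_distance(feat_a, feat_b):
    """Euclidean distance between two FC-layer feature vectors."""
    return float(np.linalg.norm(feat_a - feat_b))

def rank_gallery(query, gallery):
    """Return gallery indices sorted by ascending distance to the query,
    i.e. most similar vehicle first."""
    dists = np.linalg.norm(gallery - query, axis=1)
    return np.argsort(dists)
```

For re-identification, the top-ranked gallery image is the best match for the query vehicle.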
Unsupervised Learning of Visual Representations using Videos
Is strong supervision necessary for learning a good visual representation? Do
we really need millions of semantically-labeled images to train a Convolutional
Neural Network (CNN)? In this paper, we present a simple yet surprisingly
powerful approach for unsupervised learning of CNNs. Specifically, we use
hundreds of thousands of unlabeled videos from the web to learn visual
representations. Our key idea is that visual tracking provides the supervision.
That is, two patches connected by a track should have similar visual
representation in deep feature space since they probably belong to the same
object or object part. We design a Siamese-triplet network with a ranking loss
function to train this CNN representation. Without using a single image from
ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train
an ensemble of unsupervised networks that achieves 52% mAP (no bounding box
regression). This performance comes tantalizingly close to its
ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We
also show that our unsupervised network can perform competitively in other
tasks such as surface-normal estimation.
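The ranking loss over an (anchor, tracked, random) patch triplet can be sketched as follows (the margin value and the choice of Euclidean distance over feature vectors are illustrative assumptions; the paper's exact formulation may differ):

```python
import numpy as np

def ranking_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style triplet ranking loss: the tracked (positive) patch
    should be closer to the anchor in feature space than a random
    (negative) patch by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return float(max(0.0, d_pos - d_neg + margin))
```

The loss is zero once the positive is sufficiently closer than the negative, so training focuses on violated triplets.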
Deep Unsupervised Similarity Learning using Partially Ordered Sets
Unsupervised learning of visual similarities is of paramount importance to
computer vision, particularly due to the lack of training data for
fine-grained similarities. Deep learning of similarities is often based on
relationships between pairs or triplets of samples. Many of these relations
are unreliable and mutually contradictory, implying inconsistencies when
training without supervision information that relates different tuples or
triplets to each other. To overcome this problem, we use local estimates of reliable
(dis-)similarities to initially group samples into compact surrogate classes
and use local partial orders of samples to classes to link classes to each
other. Similarity learning is then formulated as a partial ordering task with
soft correspondences of all samples to classes. Adopting a strategy of
self-supervision, a CNN is trained to optimally represent samples in a mutually
consistent manner while updating the classes. The similarity learning and
grouping procedure are integrated in a single model and optimized jointly. The
proposed unsupervised approach shows competitive performance on detailed pose
estimation and object classification.
Comment: Accepted for publication at IEEE Computer Vision and Pattern
Recognition 201
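The initial grouping into compact surrogate classes from locally reliable similarities might look like the following greedy sketch (the cosine-similarity threshold and seed selection are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

def surrogate_classes(features, sim_threshold=0.9):
    """Greedily group samples into compact surrogate classes: only
    cosine similarities above a high threshold are trusted as reliable;
    everything else is left for the partial-ordering stage."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f.T
    unassigned = set(range(len(f)))
    classes = []
    while unassigned:
        seed = min(unassigned)
        # Collect all still-unassigned samples reliably similar to the seed.
        members = [j for j in sorted(unassigned) if sims[seed, j] >= sim_threshold]
        for j in members:
            unassigned.discard(j)
        classes.append(members)
    return classes
```

Samples that fall below the threshold for every seed end up as singleton classes, reflecting that no reliable similarity was found for them.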
Discriminative Feature Learning with Application to Fine-grained Recognition
For various computer vision tasks, finding suitable feature representations is fundamental. Fine-grained recognition, distinguishing sub-categories under the same super-category (e.g., bird species, car makes and models, etc.), serves as a good task to study discriminative feature learning for visual recognition tasks. The main reason is that the inter-class variations between fine-grained categories are very subtle and can be even smaller than the intra-class variations caused by pose or deformation.
This thesis focuses on tasks mostly related to fine-grained categories. After briefly discussing our earlier attempt to capture subtle visual differences using sparse/low-rank analysis, the main part of the thesis reflects the trends of the past few years as deep learning has prevailed.
In the first part of the thesis, we address the problem of fine-grained recognition via a patch-based framework built upon Convolutional Neural Network (CNN) features. We introduce triplets of patches with two geometric constraints to improve the accuracy of patch localization, and automatically mine discriminative geometrically-constrained triplets for recognition.
In the second part we begin to learn discriminative features in an end-to-end fashion. We propose a supervised feature learning approach, Label Consistent Neural Network, which enforces direct supervision in late hidden layers. We associate each neuron in a hidden layer with a particular class and encourage it to be activated for input signals from the same class by introducing a label consistency regularization. This label consistency constraint makes the features more discriminative and also leads to faster convergence.
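The label consistency regularization can be sketched as a simple penalty on hidden activations (the target construction and squared-error form are illustrative assumptions about how the constraint might be written):

```python
import numpy as np

def label_consistency_loss(activations, labels, assignment):
    """Penalize hidden activations that deviate from an ideal
    'discriminative' target: a neuron assigned to class c should fire
    (target 1) for inputs of class c and stay silent (target 0)
    otherwise.

    activations: (batch, n_hidden) hidden-layer outputs
    labels:      (batch,) class label per input
    assignment:  (n_hidden,) class associated with each neuron
    """
    targets = (assignment[None, :] == labels[:, None]).astype(float)
    return float(np.mean((activations - targets) ** 2))
```

Added to the usual classification loss, this term pushes each neuron toward a class-specific firing pattern.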
The third part proposes a more sophisticated and effective end-to-end network specifically designed for fine-grained recognition, which learns discriminative patches within a CNN. We show that patch-level learning capability of CNN can be enhanced by learning a bank of convolutional filters that capture class-specific discriminative patches without extra part or bounding box annotations. Such a filter bank is well structured, properly initialized and discriminatively learned through a novel asymmetric multi-stream architecture with convolutional filter supervision and a non-random layer initialization.
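The class-specific filter bank idea can be sketched with 1x1 filters whose globally max-pooled responses are grouped into class logits (the shapes, pooling, and per-class grouping here are illustrative assumptions, not the thesis architecture):

```python
import numpy as np

def filter_bank_logits(feature_map, filters, n_classes, k):
    """feature_map: (C, H, W) CNN feature map; filters: (n_classes*k, C)
    1x1 convolutional filters, k filters per class. Each filter's response
    map is global-max-pooled, and the k per-class maxima are averaged into
    a class logit, so cross-entropy supervision on the logits pushes each
    filter toward a class-specific discriminative patch."""
    C, H, W = feature_map.shape
    resp = filters @ feature_map.reshape(C, H * W)    # (n_classes*k, H*W)
    pooled = resp.max(axis=1)                         # max response per filter
    return pooled.reshape(n_classes, k).mean(axis=1)  # (n_classes,)
```

Because only the maximum response survives pooling, a filter earns its logit by matching one local patch strongly, which is what makes the learned filters patch detectors.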
In the last part we go beyond obtaining category labels and study the problem of continuous 3D pose estimation for fine-grained object categories. We augment three existing popular fine-grained recognition datasets by annotating each instance in the image with its corresponding fine-grained 3D shape and ground-truth 3D pose. We cast the problem into a detection framework based on Faster/Mask R-CNN. To utilize the 3D information, we also introduce a novel 3D representation, named the location field, that is effective for representing 3D shapes.