Person Re-Identification by Deep Joint Learning of Multi-Loss Classification
Existing person re-identification (re-id) methods rely mostly on either
localised or global feature representations alone, ignoring their joint
benefit and mutually complementary effects. In this work, we show the advantages
of jointly learning local and global features in a Convolutional Neural Network
(CNN) by aiming to discover correlated local and global features in different
contexts. Specifically, we formulate a method for joint learning of local and
global feature selection losses designed to optimise person re-id when using
only generic matching metrics such as the L2 distance. We design a novel CNN
architecture for Jointly Learning Multi-Loss (JLML) classification, with local
and global discriminative features optimised concurrently subject to the same
re-id label information. Extensive comparative evaluations demonstrate the
advantages of this new JLML model for person re-id over a wide range of
state-of-the-art re-id methods on five benchmarks (VIPeR, GRID, CUHK01, CUHK03,
Market-1501).
Comment: Accepted by IJCAI 2017
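The matching step described above can be illustrated with a small sketch. This is not the JLML architecture itself (the paper learns the two branches jointly inside one CNN); it only shows the generic idea of fusing L2-normalised global and local descriptors and ranking a gallery with the plain L2 distance, with hypothetical function names.

```python
import numpy as np

def fuse(global_feat, local_feat):
    """Concatenate L2-normalised global and local feature vectors into a
    single person descriptor (hypothetical fusion step; JLML learns both
    branches jointly under shared re-id labels)."""
    g = global_feat / np.linalg.norm(global_feat)
    l = local_feat / np.linalg.norm(local_feat)
    return np.concatenate([g, l])

def rank_gallery(query, gallery):
    """Rank gallery descriptors by generic L2 distance to the query,
    nearest first -- no learned matching metric is needed."""
    dists = np.linalg.norm(gallery - query, axis=1)
    return np.argsort(dists)
```

With well-learned features, the correct identity should sit at the top of the ranking even under this unlearned distance, which is exactly the regime the abstract targets.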
Multi-view Convolutional Neural Networks for 3D Shape Recognition
A longstanding question in computer vision concerns the representation of 3D
shapes for recognition: should 3D shapes be represented with descriptors
operating on their native 3D formats, such as voxel grids or polygon meshes, or
can they be effectively represented with view-based descriptors? We address
this question in the context of learning to recognize 3D shapes from a
collection of their rendered views on 2D images. We first present a standard
CNN architecture trained to recognize the shapes' rendered views independently
of each other, and show that a 3D shape can be recognized even from a single
view at an accuracy far higher than using state-of-the-art 3D shape
descriptors. Recognition rates further increase when multiple views of the
shapes are provided. In addition, we present a novel CNN architecture that
combines information from multiple views of a 3D shape into a single and
compact shape descriptor offering even better recognition performance. The same
architecture can be applied to accurately recognize human hand-drawn sketches
of shapes. We conclude that a collection of 2D views can be highly informative
for 3D shape recognition and is amenable to emerging CNN architectures and
their derivatives.
Comment: v1: Initial version. v2: An updated ModelNet40 training/test split is
used; results with low-rank Mahalanobis metric learning are added. v3 (ICCV
2015): A second camera setup without the upright orientation assumption is
added; some accuracy and mAP numbers change slightly because a small
issue in mesh rendering related to specularities is fixed.
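The aggregation idea above can be sketched in a few lines. This is an assumption-laden simplification: in the paper the combination happens inside the CNN via a view-pooling layer, whereas here per-view features are stand-ins for CNN activations and the pooling is a plain element-wise max across views.

```python
import numpy as np

def view_pool(view_features):
    """Combine per-view features into one compact shape descriptor by
    element-wise max over views (the 'view pooling' idea).
    view_features: array of shape (num_views, feat_dim);
    returns an array of shape (feat_dim,)."""
    return np.max(view_features, axis=0)
```

The max keeps, for each feature dimension, the strongest response seen from any rendered view, so the descriptor size stays fixed no matter how many views are provided.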
Improving Semantic Embedding Consistency by Metric Learning for Zero-Shot Classification
This paper addresses the task of zero-shot image classification. The key
contribution of the proposed approach is to control the semantic embedding of
images -- one of the main ingredients of zero-shot learning -- by formulating
it as a metric learning problem. The optimized empirical criterion associates
two types of sub-task constraints: metric discriminating capacity and accurate
attribute prediction. This results in a novel expression of zero-shot learning
not requiring the notion of class in the training phase: only pairs of
image/attributes, augmented with a consistency indicator, are given as ground
truth. At test time, the learned model can predict the consistency of a test
image with a given set of attributes, allowing flexible ways to produce
recognition inferences. Despite its simplicity, the proposed approach gives
state-of-the-art results on four challenging datasets used for zero-shot
recognition evaluation.
Comment: In ECCV 2016, Oct 2016, Amsterdam, Netherlands.
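The test-time consistency prediction described above can be sketched as follows. This is a minimal illustration, not the paper's learned criterion: the metric matrix `M` here is just a placeholder positive-definite matrix, whereas the approach learns it from image/attribute pairs with consistency indicators.

```python
import numpy as np

def consistency(image_emb, attributes, M):
    """Metric-based consistency score between an image's semantic
    embedding and a candidate attribute vector: the negative squared
    Mahalanobis-style distance d^T M d (higher = more consistent).
    M stands in for the learned metric."""
    d = image_emb - attributes
    return -float(d @ M @ d)

def zero_shot_predict(image_emb, class_attributes, M):
    """Assign the image to the (possibly unseen) class whose attribute
    vector it is most consistent with -- no class notion is needed
    during metric training, only attribute vectors at test time."""
    scores = [consistency(image_emb, a, M) for a in class_attributes]
    return int(np.argmax(scores))
```

Because inference only compares the image embedding against attribute vectors, the same scoring function supports flexible uses such as ranking attribute sets or rejecting inconsistent ones, as the abstract notes.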