Temporal Model Adaptation for Person Re-Identification
Person re-identification is an open and challenging problem in computer
vision. The majority of efforts have been spent either on designing the best
feature representation or on learning the optimal matching metric, and most
approaches have neglected the problem of adapting the selected features or the
learned model over time. To address this problem, we propose a temporal model
adaptation scheme with a human in the loop. We first introduce a
similarity-dissimilarity learning method that can be trained in an incremental
fashion by means of a stochastic alternating direction method of multipliers
(ADMM) optimization procedure. Then, to achieve temporal adaptation with
limited human effort, we exploit a graph-based approach to present the user
with only the most informative probe-gallery matches that should be used to
update the model. Results on three datasets show that our approach performs on
par with, or even better than, state-of-the-art approaches while reducing the
manual pairwise labeling effort by about 80%.
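The abstract names a stochastic ADMM optimizer without spelling it out. The sketch below illustrates the generic ADMM template that such a method builds on, applied to a standard l1-regularized least-squares problem; the splitting, the proximal z-update, and the dual update are the common ingredients that a stochastic, incremental variant would extend. The choice of problem and all names are illustrative, not the paper's similarity-dissimilarity objective.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.5, rho=10.0, n_iter=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via batch ADMM.

    Splitting: x carries the quadratic loss, z carries the l1 term,
    tied by the consensus constraint x = z (u is the scaled dual).
    """
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    # Factor once: the x-update solves (A^T A + rho*I) x = A^T b + rho*(z - u)
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)   # proximal step on the l1 term
        u = u + x - z                          # dual ascent on x = z
    return z

# Toy usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = admm_lasso(A, b)
```

A stochastic variant, as the abstract's incremental training suggests, would replace the exact x-update with one driven by mini-batches of labeled pairs arriving over time.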
Divide and Fuse: A Re-ranking Approach for Person Re-identification
As re-ranking is a necessary procedure for boosting person re-identification
(re-ID) performance on large-scale datasets, feature diversity becomes crucial
to person re-ID, both for designing pedestrian descriptors and for re-ranking
based on feature fusion. However, in many circumstances, only one type of
pedestrian feature is available. In this paper, we propose a "Divide and Fuse"
re-ranking framework for person re-ID. It exploits the diversity within
different parts of a high-dimensional feature vector for fusion-based
re-ranking when no other features are accessible. Specifically, given an
image, the extracted feature is divided into sub-features. Then the contextual
information of each sub-feature is iteratively encoded into a new feature.
Finally, the new features from the same image are fused into one vector for
re-ranking. Experimental results on two person re-ID benchmarks demonstrate
the effectiveness of the proposed framework. In particular, our method
outperforms the state of the art on the Market-1501 dataset.
Comment: Accepted by BMVC201
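The divide-then-fuse pipeline described in the abstract can be sketched in simplified form. The fragment below splits one feature vector into equal-size sub-features, computes per-part distances to a gallery, normalizes them, and averages them into a single fused distance for ranking. The paper's iterative contextual encoding of each sub-feature is deliberately replaced here by plain per-part distance fusion, so this is the skeleton of the idea under stated assumptions, not the published method.

```python
import numpy as np

def divide_and_fuse_distances(query, gallery, n_parts=4):
    """Part-based distance fusion in the spirit of 'Divide and Fuse'.

    query:   (d,) feature vector of the probe image
    gallery: (N, d) feature matrix of gallery images
    Returns one fused distance per gallery image (lower = better match).
    """
    assert query.shape[0] % n_parts == 0, "sketch assumes equal-size parts"
    q_parts = np.split(query, n_parts)
    g_parts = np.split(gallery, n_parts, axis=1)
    fused = np.zeros(gallery.shape[0])
    for qp, gp in zip(q_parts, g_parts):
        dist = np.linalg.norm(gp - qp, axis=1)            # per-part distances
        span = dist.max() - dist.min()
        fused += (dist - dist.min()) / (span + 1e-12)     # normalize, then fuse
    return fused / n_parts

# Toy usage: the probe is a noisy copy of gallery image 5, so it should rank first.
rng = np.random.default_rng(0)
gallery = rng.standard_normal((12, 16))                # 12 images, 16-D features
query = gallery[5] + 0.01 * rng.standard_normal(16)
fused = divide_and_fuse_distances(query, gallery, n_parts=4)
ranking = np.argsort(fused)                            # re-ranked gallery indices
```

Normalizing each part's distances before averaging keeps one dominant sub-feature from drowning out the others, which is the point of fusing diverse parts.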
Re-identification and semantic retrieval of pedestrians in video surveillance scenarios
Person re-identification consists of recognizing individuals across the
different sensors of a camera network. Whereas clothing appearance cues are
widely used, other modalities, such as anthropometric measures and gait, can
be exploited as additional information sources. In this work we investigate
whether the re-identification accuracy of clothing appearance descriptors can
be improved by fusing them with anthropometric measures extracted from depth
data, acquired with RGB-D sensors in unconstrained settings. We also propose a
dissimilarity-based framework for building and fusing multi-modal descriptors
of pedestrian images for re-identification tasks, as an alternative to the
widely used score-level fusion. The experimental evaluation is carried out on
two datasets including RGB-D data, one of which is a novel, publicly available
dataset that we acquired using Kinect sensors.
In this dissertation we also consider a related task, named semantic retrieval
of pedestrians in video surveillance scenarios, which consists of searching
for images of individuals using a textual description of clothing appearance
as a query, given as a Boolean combination of predefined attributes. This can
be useful in applications like forensic video analysis, where the query can be
obtained from an eyewitness report. We propose a general method for
implementing semantic retrieval as an extension of a given re-identification
system that uses any multiple-part, multiple-component appearance descriptor.
Additionally, we investigate deep learning techniques to improve both the
accuracy of the attribute detectors and their generalization capabilities.
Finally, we experimentally evaluate our methods on several benchmark datasets
originally built for re-identification tasks.
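Querying by a Boolean combination of predefined attributes can be sketched as follows, assuming each image comes with per-attribute detector scores in [0, 1]. The attribute names, threshold, and nested-tuple query encoding are all illustrative assumptions, not the dissertation's implementation.

```python
def matches(scores, query, threshold=0.5):
    """Evaluate a Boolean combination of predefined attributes.

    scores: dict mapping attribute name -> detector score in [0, 1]
    query:  an attribute name, or a nested tuple:
            ('and', q1, q2) | ('or', q1, q2) | ('not', q)
    """
    if isinstance(query, str):                      # leaf: thresholded detector
        return scores.get(query, 0.0) >= threshold
    op = query[0]
    if op == 'not':
        return not matches(scores, query[1], threshold)
    left = matches(scores, query[1], threshold)
    right = matches(scores, query[2], threshold)
    return (left and right) if op == 'and' else (left or right)

# Toy gallery with hypothetical attribute detector scores.
gallery = {
    'img_001': {'red_shirt': 0.9, 'jeans': 0.8, 'backpack': 0.1},
    'img_002': {'red_shirt': 0.2, 'jeans': 0.9, 'backpack': 0.7},
}
# Eyewitness-style query: "red shirt AND (jeans OR backpack)"
q = ('and', 'red_shirt', ('or', 'jeans', 'backpack'))
hits = [name for name, s in gallery.items() if matches(s, q)]
```

In a real system the hard threshold would likely give way to ranking by combined detector confidence, but the Boolean evaluation above is the core of attribute-based retrieval.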