Few-Shot Deep Adversarial Learning for Video-based Person Re-identification
Video-based person re-identification (re-ID) refers to matching people across camera views from arbitrary, unaligned video footage. Existing methods rely on supervision signals to optimise a projected space under which inter-video distances are maximised and intra-video distances are minimised. However, this demands exhaustively labelling people across camera views, rendering such methods unable to scale to large camera networks. Moreover, learning effective view-invariant video representations is not explicitly addressed, even though features from different camera views otherwise exhibit different distributions. Thus, matching videos for person re-ID demands flexible models that capture the dynamics of time-series observations and learn view-invariant representations from limited labelled training samples. In this paper, we propose a novel few-shot deep learning approach to video-based person re-ID that learns comparable representations which are discriminative and view-invariant. The proposed method builds on variational recurrent neural networks (VRNNs) and is trained adversarially to produce latent variables with temporal dependencies that are highly discriminative yet view-invariant for matching persons. Through extensive experiments conducted on three benchmark datasets, we empirically show that our method creates view-invariant temporal features and achieves state-of-the-art performance.

Comment: Appearing in IEEE Transactions on Image Processing