3 research outputs found

    VID-Trans-ReID: Enhanced Video Transformers for Person Re-identification

    Get PDF
    Video-based person Re-identification (Re-ID) has received increasing attention recently due to its important role within surveillance video analysis. Video-based Re-ID expands upon earlier image-based methods by extracting person features temporally across multiple video image frames. The key challenge within person Re-ID is extracting a robust feature representation that is invariant to the challenges of pose and illumination variation across multiple camera viewpoints. Whilst most contemporary methods use a CNN-based methodology, recent advances in vision transformer (ViT) architectures boost fine-grained feature discrimination via the use of multi-head attention without any loss of feature robustness. To specifically enable ViT architectures to effectively address the challenges of video person Re-ID, we propose two novel module constructs, Temporal Clip Shift and Shuffled (TCSS) and Video Patch Part Feature (VPPF), that boost the robustness of the resultant Re-ID feature representation. Furthermore, we combine our proposed approach with current best practices spanning both image- and video-based Re-ID, including camera view embedding. Our proposed approach outperforms existing state-of-the-art work on the MARS, PRID2011, and iLIDS-VID Re-ID benchmark datasets, achieving 96.36%, 96.63%, and 94.67% rank-1 accuracy respectively, and achieving 90.25% mAP on MARS.
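
    A minimal sketch of the camera view embedding practice referenced above, assuming a PyTorch ViT pipeline; the module name, shapes, and dimensions are illustrative assumptions, and this is not the authors' implementation of TCSS or VPPF.

    import torch
    import torch.nn as nn

    class CameraViewEmbedding(nn.Module):
        """Hypothetical sketch: add a learnable per-camera embedding to ViT patch tokens."""
        def __init__(self, num_cameras: int, embed_dim: int = 768):
            super().__init__()
            # one learnable vector per camera viewpoint
            self.cam_embed = nn.Parameter(torch.zeros(num_cameras, embed_dim))

        def forward(self, patch_tokens: torch.Tensor, cam_ids: torch.Tensor) -> torch.Tensor:
            # patch_tokens: (batch, num_patches, embed_dim); cam_ids: (batch,) integer camera indices
            return patch_tokens + self.cam_embed[cam_ids].unsqueeze(1)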

    Not 3D Re-ID: Simple Single Stream 2D Convolution for Robust Video Re-identification

    No full text
    Video-based person re-identification has received increasing attention recently, as it plays an important role within surveillance video analysis. Video-based Re-ID expands upon earlier image-based re-identification methods by learning features from a video via multiple image frames for each person. Most contemporary video Re-ID methods utilise complex CNN-based network architectures using 3D convolution or multi-branch networks to extract spatial-temporal video features. By contrast, in this paper, we illustrate superior performance from a simple single-stream 2D convolution network, leveraging the ResNet50-IBN architecture to extract frame-level features followed by temporal attention for clip-level features. These clip-level features can be generalised to extract video-level features by averaging, without any significant additional cost. Our approach uses best video Re-ID practice and transfer learning between datasets to outperform existing state-of-the-art approaches on the MARS, PRID2011 and iLIDS-VID datasets with 89.62%, 97.75%, and 97.33% rank-1 accuracy respectively, and with 84.61% mAP for MARS, without reliance on complex and memory-intensive 3D convolutions or multi-stream network architectures as found in other contemporary work. Instead, our work shows that global features extracted by the 2D convolution network are a sufficient representation for robust state-of-the-art video Re-ID. Comment: this work has been submitted to ICPR 2020 and has been accepted for presentation and inclusion in the proceedings.
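
    A minimal sketch of the frame-to-clip aggregation described above (frame-level features from a 2D backbone, weighted by temporal attention, then averaged for a video-level feature), assuming PyTorch; layer names and sizes are illustrative assumptions rather than the paper's exact implementation.

    import torch
    import torch.nn as nn

    class TemporalAttentionPool(nn.Module):
        """Hypothetical sketch: score each frame feature and take a weighted sum per clip."""
        def __init__(self, feat_dim: int = 2048):
            super().__init__()
            self.score = nn.Linear(feat_dim, 1)  # per-frame attention score

        def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
            # frame_feats: (batch, num_frames, feat_dim), e.g. from a ResNet50-IBN backbone
            weights = torch.softmax(self.score(frame_feats), dim=1)  # (batch, num_frames, 1)
            return (weights * frame_feats).sum(dim=1)                # clip-level feature (batch, feat_dim)

    # A video-level feature can then be obtained by averaging clip-level features, e.g.:
    # video_feat = torch.stack([pool(clip) for clip in clips], dim=0).mean(dim=0)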

    Re-ID-AR: Improved Person Re-identification in Video via Joint Weakly Supervised Action Recognition

    Get PDF
    We uniquely consider the task of joint person re-identification (Re-ID) and action recognition in video as a multi-task problem. In addition to the broader potential of joint Re-ID and action recognition within the context of automated multi-camera surveillance, we show that the consideration of action recognition in addition to Re-ID results in a model that learns discriminative feature representations that both improve Re-ID performance and are capable of providing viable per-view (clip-wise) action recognition. Our approach uses a single 2D Convolutional Neural Network (CNN) architecture comprising a common ResNet50-IBN backbone to extract frame-level features, with subsequent temporal attention for clip-level feature extraction, followed by two sub-branches: the IDentification (sub-)Network (IDN) for person Re-ID and the Action Recognition (sub-)Network (ARN) for per-view action recognition. The IDN comprises a single fully connected layer, while the ARN comprises multiple attention blocks in a one-to-one ratio with the number of actions to be recognised. This is subsequently trained as a joint Re-ID and action recognition task using a combination of two task-specific, multi-loss terms via weakly labelled actions obtained over two leading benchmark Re-ID datasets (MARS, LPW). Our consideration of Re-ID and action recognition as a multi-task problem results in a multi-branch 2D CNN architecture that outperforms prior work in the field (rank-1 (mAP): MARS 93.21% (87.23%), LPW 79.60%) without any reliance on 3D convolutions or multi-stream network architectures as found in other contemporary work. Our work represents the first benchmark performance for such a joint Re-ID and action recognition video understanding task, hitherto unapproached in the literature, and is accompanied by a new public dataset of supplementary action labels for the seminal MARS and LPW Re-ID datasets.
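
    The sketch below illustrates the two-branch head described above (a single fully connected IDN over a shared clip-level feature, plus one head per action in the ARN), assuming PyTorch; the internal structure of the ARN attention blocks is an assumption for illustration and is not taken from the paper.

    import torch
    import torch.nn as nn

    class ReIDActionHead(nn.Module):
        """Hypothetical sketch of the joint Re-ID / action recognition heads."""
        def __init__(self, feat_dim: int, num_ids: int, num_actions: int):
            super().__init__()
            self.idn = nn.Linear(feat_dim, num_ids)  # IDN: single fully connected layer
            # ARN: one small (assumed) block per action to be recognised
            self.arn = nn.ModuleList(
                nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 1))
                for _ in range(num_actions)
            )

        def forward(self, clip_feat: torch.Tensor):
            # clip_feat: (batch, feat_dim) shared clip-level feature
            id_logits = self.idn(clip_feat)                                           # (batch, num_ids)
            action_logits = torch.cat([head(clip_feat) for head in self.arn], dim=1)  # (batch, num_actions)
            return id_logits, action_logits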