
    A Simple Deep Learning Architecture for City-scale Vehicle Re-identification

    The task of vehicle re-identification aims to identify a vehicle across different cameras with non-overlapping fields of view, and it is a challenging research problem due to viewpoint and orientation variations, scene occlusions, and the intrinsic inter-class similarity of the data. In this paper, we propose a simple approach for one-shot vehicle re-identification based on a siamese/triplet convolutional architecture for feature representation. Our method learns a feature space in which vehicles of the same identity are projected closer to one another than vehicles of different identities. Moreover, we provide an extensive evaluation of loss functions, including a novel combination of triplet loss with classification loss, and of other network parameters applied to our vehicle re-identification system. Compared to most existing state-of-the-art approaches, our approach is simpler and more straightforward to train, using only identity-level annotations. The proposed method is evaluated on the large-scale CityFlow-ReID dataset.
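    As a rough illustration of the combination of triplet loss and classification loss described above (not the paper's actual implementation; the embedding head, dimensions, margin, and loss weight below are assumptions), a minimal PyTorch sketch might look like this:

    import torch
    import torch.nn as nn

    class ReIDHead(nn.Module):
        """Hypothetical embedding head with an identity classifier on top (dimensions are assumed)."""
        def __init__(self, feat_dim=2048, emb_dim=256, num_ids=333):
            super().__init__()
            self.embed = nn.Linear(feat_dim, emb_dim)       # projection into the re-ID embedding space
            self.classifier = nn.Linear(emb_dim, num_ids)   # identity logits for the classification loss

        def forward(self, feats):
            emb = self.embed(feats)
            return emb, self.classifier(emb)

    triplet = nn.TripletMarginLoss(margin=0.3)   # margin value is an assumed hyperparameter
    xent = nn.CrossEntropyLoss()

    def combined_loss(anchor_emb, pos_emb, neg_emb, logits, labels, w=1.0):
        # The triplet term pulls same-identity embeddings together and pushes different identities apart;
        # the classification term uses only identity-level labels, as the abstract describes.
        return triplet(anchor_emb, pos_emb, neg_emb) + w * xent(logits, labels)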

    StRDAN: Synthetic-to-Real Domain Adaptation Network for Vehicle Re-Identification

    Vehicle re-identification aims to identify the same vehicle across different vehicle images. This is challenging but essential for analyzing and predicting traffic flow in a city. Although deep learning methods have achieved enormous progress on this task, their large data requirement is a critical shortcoming. We therefore propose a synthetic-to-real domain adaptation network (StRDAN) framework, which can be trained with inexpensive large-scale synthetic data together with real data to improve performance. The StRDAN training method combines domain adaptation and semi-supervised learning methods and their associated losses. StRDAN offers significant improvement over the baseline model, which can only be trained on real data, on the VeRi and CityFlow-ReID datasets, improving mean average precision by 3.1% and 12.9%, respectively.
    Comment: 7 pages, 2 figures, CVPR Workshop Paper (Revised)
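    The abstract does not specify the individual domain-adaptation and semi-supervised losses, so the following is only a hedged sketch of the general idea of training jointly on real and synthetic data with a feature-alignment term; the model interface, the simple mean-feature alignment loss, and the weight alpha are assumptions rather than the paper's actual formulation:

    import torch
    import torch.nn.functional as F

    def feature_alignment_loss(real_feats, synth_feats):
        # Crude stand-in for a domain-adaptation loss: align the mean features of both domains.
        return F.mse_loss(real_feats.mean(dim=0), synth_feats.mean(dim=0))

    def joint_loss(model, real_imgs, real_labels, synth_imgs, synth_labels, alpha=0.1):
        real_feats, real_logits = model(real_imgs)      # model is assumed to return (features, identity logits)
        synth_feats, synth_logits = model(synth_imgs)
        # Supervised identity losses on real and (cheaply annotated) synthetic images,
        # plus a term that encourages the two feature distributions to align.
        id_loss = F.cross_entropy(real_logits, real_labels) + F.cross_entropy(synth_logits, synth_labels)
        return id_loss + alpha * feature_alignment_loss(real_feats, synth_feats)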

    Dual Embedding Expansion for Vehicle Re-identification

    Vehicle re-identification plays a crucial role in the management of transportation infrastructure and traffic flow. However, it is a challenging task due to large viewpoint variations in appearance and to environmental and instance-related factors. Modern systems deploy CNNs to produce unique representations from the images of each vehicle instance. Most work focuses on leveraging new losses and network architectures to improve the descriptiveness of these representations. In contrast, our work concentrates on re-ranking and embedding expansion techniques. We propose an efficient approach, called dual embedding expansion (DEx), for combining the outputs of multiple models at various scales while exploiting tracklet and neighbor information. Additionally, a comparative study of several common image retrieval techniques is presented in the context of vehicle re-ID. Our system yields competitive performance in the 2020 NVIDIA AI City Challenge. We demonstrate that DEx, when combined with other re-ranking techniques, can produce an even larger gain without any additional attribute labels or manual supervision.
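    The details of DEx are not given here, so the snippet below is only a generic NumPy sketch of the ingredients the abstract mentions: fusing embeddings from multiple models or scales, averaging gallery embeddings over tracklets, and expanding queries with their nearest neighbors. The array shapes, fusion scheme, and k are assumptions, not the paper's method:

    import numpy as np

    def l2norm(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)

    def fuse_models(embeddings_per_model):
        # Concatenate L2-normalized embeddings from several models/scales into one descriptor per image.
        return l2norm(np.concatenate([l2norm(e) for e in embeddings_per_model], axis=1))

    def tracklet_average(gallery, tracklet_ids):
        # Replace each gallery embedding with the mean of its tracklet (one way to use tracklet information).
        out = gallery.copy()
        for t in np.unique(tracklet_ids):
            mask = tracklet_ids == t
            out[mask] = gallery[mask].mean(axis=0)
        return l2norm(out)

    def expand_queries(queries, gallery, k=5):
        # Average each query with its k nearest gallery neighbors (simple embedding/query expansion).
        sims = queries @ gallery.T
        topk = np.argsort(-sims, axis=1)[:, :k]
        return l2norm(queries + gallery[topk].mean(axis=1))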

    Traffic-Aware Multi-Camera Tracking of Vehicles Based on ReID and Camera Link Model

    Multi-target multi-camera tracking (MTMCT), i.e., tracking multiple targets across multiple cameras, is a crucial technique for smart city applications. In this paper, we propose an effective and reliable MTMCT framework for vehicles, which consists of a traffic-aware single-camera tracking (TSCT) algorithm, a trajectory-based camera link model (CLM) for vehicle re-identification (ReID), and a hierarchical clustering algorithm that obtains the cross-camera vehicle trajectories. First, the TSCT, which jointly considers vehicle appearance, geometric features, and some common traffic scenarios, is proposed to track the vehicles in each camera separately. Second, the trajectory-based CLM is adopted to model the relationship between each pair of adjacently connected cameras and to add spatio-temporal constraints for the subsequent vehicle ReID with temporal attention. Third, the hierarchical clustering algorithm is used to merge the vehicle trajectories among all the cameras to obtain the final MTMCT results. Our proposed MTMCT is evaluated on the CityFlow dataset and achieves new state-of-the-art performance with an IDF1 of 74.93%.
    Comment: Accepted by ACM International Conference on Multimedia 202
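    As a loose illustration of the final clustering step only (the camera link model, traffic-aware tracking, and temporal-attention ReID are omitted, and the cosine distance, same-camera exclusion, and threshold below are assumptions), single-camera trajectories could be merged into cross-camera identities with SciPy's hierarchical clustering roughly as follows:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def merge_trajectories(traj_feats, traj_cams, threshold=0.5):
        """Group single-camera trajectory features (N x D array) into cross-camera identities (illustrative only)."""
        feats = traj_feats / (np.linalg.norm(traj_feats, axis=1, keepdims=True) + 1e-12)
        dist = 1.0 - feats @ feats.T                       # cosine (appearance) distance between trajectories
        same_cam = traj_cams[:, None] == traj_cams[None, :]
        dist[same_cam] = 1e6                               # crude constraint: never merge trajectories from one camera
        np.fill_diagonal(dist, 0.0)
        Z = linkage(squareform(dist, checks=False), method='average')
        return fcluster(Z, t=threshold, criterion='distance')   # cluster labels serve as cross-camera identities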