Regional feature learning using attribute structural analysis in bipartite attention framework for vehicle re-identification
Vehicle re-identification identifies target vehicles using images obtained from numerous non-overlapping real-time surveillance cameras. Re-identification is further challenged by illumination changes, pose differences, and the varying resolution of captured images. To overcome these challenges, fine-grained appearance changes in vehicles are recognized in addition to coarse-grained characteristics such as vehicle color and model, along with custom features like logo stickers, annual service signs, and hangings. To demonstrate the efficiency of our proposed bipartite attention framework, a novel dataset called Attributes27, which has 27 labelled attributes for each class, is created. Our framework contains three major sections. First, the overall and semantic characteristics of each individual vehicle image are extracted by a double-branch convolutional neural network (CNN). Second, a self-attention block is linked to each branch to identify regions of interest (ROIs). Lastly, a partition-alignment block is deployed to extract regional features from the obtained ROIs. Evaluation of our proposed system on the Attributes27 and VeRi-776 datasets highlights significant regional attributes of each vehicle and improves accuracy: the Attributes27 and VeRi-776 datasets exhibit 98.5% and 84.3% accuracy respectively, comparatively higher than the 78.6% accuracy of existing methods.
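The self-attention blocks described above weight spatial locations of a feature map so that discriminative regions (ROIs) dominate the pooled representation. A minimal numpy sketch of single-head spatial self-attention is shown below; it is a hypothetical simplification, not the paper's implementation, and omits the learned query/key/value projections a real block would contain.

```python
import numpy as np

def spatial_self_attention(features):
    """Single-head self-attention over an H*W grid of C-dim features.

    features: (H*W, C) array of spatial feature vectors.
    Returns the attended features and the (H*W, H*W) attention map.
    Simplification: queries, keys, and values all reuse the input;
    a trained block would apply learned linear projections first.
    """
    c = features.shape[1]
    scores = features @ features.T / np.sqrt(c)   # pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    return attn @ features, attn
```

Each row of the returned attention map sums to 1, so every output location is a convex combination of all spatial features; locations similar to many others (e.g. a salient vehicle part appearing across the grid) receive higher weight.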
Domain Adaptation with Joint Learning for Generic, Optical Car Part Recognition and Detection Systems (Go-CaRD)
Systems for the automatic recognition and detection of automotive parts are
crucial in several emerging research areas in the development of intelligent
vehicles. They enable, for example, the detection and modelling of interactions
between humans and the vehicle. In this paper, we quantitatively and
qualitatively explore the efficacy of deep learning architectures for the
classification and localisation of 29 interior and exterior vehicle regions on
three novel datasets. Furthermore, we experiment with joint and transfer
learning approaches across datasets and point out potential applications of our
systems. Our best network architecture achieves an F1 score of 93.67 % for
recognition, while our best localisation approach, utilising state-of-the-art
backbone networks, achieves a mAP of 63.01 % for detection. The MuSe-CAR-Part
dataset, which is based on a large variety of human-car interactions in videos,
the weights of the best models, and the code are publicly available to academic
parties for benchmarking and future research.

Comment: Demonstration and instructions to obtain data and models:
https://github.com/lstappen/GoCar
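The mAP detection metric reported above is built on the intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch of the standard IoU computation (generic metric code, not taken from the Go-CaRD repository):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are [x1, y1, x2, y2] with (x1, y1) the top-left and
    (x2, y2) the bottom-right corner. Returns a value in [0, 1].
    """
    # Corners of the intersection rectangle (may be empty).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

In a typical mAP pipeline, a prediction counts as a true positive when its IoU with an unmatched ground-truth box of the same class exceeds a threshold (commonly 0.5); average precision is then computed per class from the resulting precision-recall curve and averaged over the 29 part classes.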