5 research outputs found
SR-POD: sample rotation based on principal-axis orientation distribution for data augmentation in deep object detection
Convolutional neural networks (CNNs) have outperformed most state-of-the-art methods in object detection. However, CNNs have difficulty detecting rotated objects, because the datasets used to train them often do not contain sufficient samples covering various orientations. In this paper, we propose a novel data-augmentation approach to handle rotated samples, which exploits the distribution of object orientations without the time-consuming process of rotating the sample images. First, we present an orientation descriptor, termed the "principal-axis orientation", to describe the orientation of an object's principal axis in an image, and estimate the distribution of principal-axis orientations (POD) over the whole dataset. Second, we define a similarity metric to measure the POD similarity between the training set and an additional dataset built by randomly selecting images from the benchmark ImageNet ILSVRC2012 dataset. Finally, we optimize a cost function to obtain the rotation angle that yields the highest POD similarity between the two datasets. To evaluate our data-augmentation method for object detection, we conducted experiments on the benchmark PASCAL VOC2007 dataset; with the training set augmented by our method, the average precision (AP) of Faster R-CNN on the TV-monitor class improves by 7.5%. Our experimental results also show that new samples generated by random rotation are more likely to degrade detection performance.
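The three steps described in the abstract can be sketched as follows. This is a minimal illustration under assumed choices — orientation estimated from second-order image moments, histogram intersection as the similarity metric, and an exhaustive search over candidate angles — not the paper's exact formulation:

```python
import numpy as np

def principal_axis_orientation(mask):
    """Principal-axis orientation of a binary object mask, in degrees,
    computed from second-order central image moments."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))

def pod_histogram(angles, bins=36):
    """Principal-axis orientation distribution (POD) as a normalised
    histogram over [-90, 90) degrees."""
    hist, _ = np.histogram(angles, bins=bins, range=(-90, 90))
    return hist / max(hist.sum(), 1)

def best_rotation(train_angles, extra_angles, bins=36):
    """Rotation angle (degrees) that maximises the histogram-intersection
    similarity between the training-set POD and the rotated extra-set POD."""
    target = pod_histogram(train_angles, bins)
    best_theta, best_sim = 0.0, -1.0
    for theta in np.arange(-90, 90, 180 / bins):
        # Rotating an image by theta shifts every principal-axis angle
        # by theta (wrapped back into [-90, 90)).
        rotated = (np.asarray(extra_angles) + theta + 90) % 180 - 90
        sim = np.minimum(target, pod_histogram(rotated, bins)).sum()
        if sim > best_sim:
            best_theta, best_sim = theta, sim
    return best_theta
```

Applying the returned angle to the whole extra set then yields augmentation samples whose orientation statistics match the training set, instead of the arbitrary statistics produced by random rotation.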
Joint Group Feature Selection and Discriminative Filter Learning for Robust Visual Object Tracking
We propose a new Group Feature Selection method for Discriminative Correlation Filters (GFS-DCF) based visual object tracking. The key innovation of the proposed method is to perform group feature selection across both channel and spatial dimensions, thereby pinpointing the structural relevance of multi-channel features to the filtering system. In contrast to widely used spatial regularisation or feature selection methods, to the best of our knowledge, this is the first time that channel selection has been advocated for DCF-based tracking. We demonstrate that our GFS-DCF method significantly improves the performance of a DCF tracker equipped with deep neural network features. In addition, GFS-DCF enables joint feature selection and filter learning, achieving enhanced discrimination and interpretability of the learned filters.
To further improve performance, we adaptively integrate historical information by constraining the filters to be smooth across temporal frames, using an efficient low-rank approximation. By design, specific temporal-spatial-channel configurations are dynamically learned during tracking, highlighting the relevant features, alleviating the performance-degrading impact of less discriminative representations, and reducing information redundancy. The experimental results obtained on OTB2013, OTB2015, VOT2017, VOT2018 and TrackingNet demonstrate the merits of our GFS-DCF and its superiority over state-of-the-art trackers. The code is publicly available at https://github.com/XU-TIANYANG/GFS-DCF.
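The channel-selection mechanism advocated in this abstract can be illustrated with a group-sparsity operator. The sketch below is a generic instance of the idea — the proximal operator of an l2,1 norm with each channel's filter treated as one group — and is an assumption-laden illustration, not the GFS-DCF optimisation itself:

```python
import numpy as np

def group_soft_threshold(filters, lam):
    """Proximal operator of lam * sum_c ||filters[c]||_2, with channels as
    groups: each channel's filter is shrunk towards zero, and channels whose
    energy falls below lam are removed entirely -- the channel-selection
    effect that group-sparse regularisation produces."""
    out = np.zeros_like(filters)
    for c in range(filters.shape[0]):
        norm = np.linalg.norm(filters[c])
        if norm > lam:
            out[c] = (1.0 - lam / norm) * filters[c]
    return out
```

Weak channels are pruned outright while informative channels survive with only mild shrinkage, which is why such a penalty yields joint feature selection and filter learning rather than a separate pruning pass.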
Dictionary Integration using 3D Morphable Face Models for Pose-invariant Collaborative-representation-based Classification
The paper presents a dictionary integration algorithm using 3D morphable face models (3DMM) for pose-invariant collaborative-representation-based face classification. To this end, we first fit a 3DMM to the 2D face images of a dictionary to reconstruct the 3D shape and texture of each image. The 3D faces are then used to render a number of virtual 2D face images with arbitrary pose variations, which are merged with the original samples to form an extended dictionary that augments the training data. Second, to reduce the information redundancy of the extended dictionary and improve the sparsity of the reconstruction coefficient vectors obtained by collaborative-representation-based classification (CRC), we exploit an on-line class elimination scheme that optimises the extended dictionary by identifying the training samples of the most representative classes for a given query. The final goal is to perform pose-invariant face classification using the proposed dictionary integration method and the on-line pruning strategy under the CRC framework. Experimental results obtained on a set of well-known face datasets demonstrate the merits of the proposed method, especially its robustness to pose variations.
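The CRC step at the core of this pipeline can be sketched as follows: the query is coded over the entire (possibly extended) dictionary with a ridge regulariser, and the class whose atoms give the smallest reconstruction residual wins. This is the standard CRC formulation under assumed notation (columns of D are training samples), not the paper's full method with 3DMM rendering or on-line class elimination:

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Collaborative-representation-based classification (CRC).
    D: (d, n) dictionary whose columns are training samples;
    labels: (n,) class label of each column; y: (d,) query vector.
    The query is coded collaboratively over the whole dictionary via
    ridge regression, then assigned to the class with the smallest
    class-wise reconstruction residual."""
    n = D.shape[1]
    # Closed-form ridge code: x = (D^T D + lam I)^{-1} D^T y
    x = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        idx = labels == c
        residuals.append(np.linalg.norm(y - D[:, idx] @ x[idx]))
    return classes[int(np.argmin(residuals))]
```

The on-line class elimination described above would, in this picture, drop the columns of unrepresentative classes before solving, shrinking the system and sparsifying the coefficient vector for each query.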
Advanced Biometrics with Deep Learning
Biometrics, such as fingerprint, iris, face, handprint, hand-vein, speech and gait recognition, have become commonplace as a means of identity management in a wide range of applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction and recognition based solely on the biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into four categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.