Experiments on deep face recognition using partial faces
Face recognition is a subject of great current interest in visual computing. Numerous face recognition and authentication approaches have been proposed in the past, though the great majority of them use full frontal faces both for training machine learning algorithms and for measuring recognition rates. In this paper, we present novel experiments that test the performance of machine learning, in particular deep learning, using partial faces as training and recognition cues. This study therefore differs sharply from the common approach of using the full face for recognition tasks. Specifically, we study the recognition rate for various parts of the face such as the eyes, mouth, nose and forehead. We use a convolutional neural network based architecture along with the pre-trained VGG-Face model to extract features for training, and then apply two classifiers, cosine similarity and a linear support vector machine, to measure the recognition rates. We ran our experiments on the Brazilian FEI dataset consisting of 200 subjects. Our results show that the cheek has the lowest recognition rate, at 15%, while the top, bottom and right halves and the 3/4 crop of the face achieve near 100% recognition rates. Supported in part by the European Union's Horizon 2020 Programme H2020-MSCA-RISE-2017, under the project PDE-GIR with grant number 778035.
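The cosine-similarity classifier described above can be sketched as a nearest-neighbour search in angle over precomputed feature vectors. This is a minimal illustration, not the paper's implementation; the toy 2-d features stand in for the high-dimensional VGG-Face descriptors the authors extract:

```python
import numpy as np

def cosine_classify(probe, gallery, labels):
    """Assign the probe the label of the gallery feature vector with the
    highest cosine similarity (nearest neighbour in angle)."""
    probe = probe / np.linalg.norm(probe)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ probe  # cosine similarity of the probe to each gallery vector
    return labels[int(np.argmax(sims))]

# Toy stand-in features; real VGG-Face descriptors are much higher-dimensional.
gallery = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
labels = ["subject_A", "subject_B"]
print(cosine_classify(np.array([0.9, 0.1]), gallery, labels))  # subject_A
```

In the linear-SVM variant, the same feature vectors would instead be fed to a trained linear classifier; cosine similarity has the advantage of requiring no training beyond enrolling one feature per gallery image.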
Spectrum-Guided Adversarial Disparity Learning
It has been a significant challenge to portray intraclass disparity precisely
in the area of activity recognition, as it requires a robust representation of
the correlation between subject-specific variation for each activity class. In
this work, we propose a novel end-to-end knowledge directed adversarial
learning framework, which portrays the class-conditioned intraclass disparity
using two competitive encoding distributions and learns the purified latent
codes by denoising learned disparity. Furthermore, the domain knowledge is
incorporated in an unsupervised manner to guide the optimization and further
boosts the performance. Experiments on four HAR benchmark datasets
demonstrate the robustness and generalization of our proposed methods over a
set of state-of-the-art baselines. We further demonstrate the effectiveness
of automatic domain knowledge incorporation in enhancing performance.
Deep face recognition using imperfect facial data
Today, computer-based face recognition is a mature and reliable mechanism that is practically utilised in many access control scenarios. As such, face recognition or authentication is predominantly performed using ‘perfect’ data of full frontal facial images. In reality, however, there are numerous situations where full frontal faces are not available; the imperfect face images that often come from CCTV cameras demonstrate the case in point. Hence, the problem of computer-based face recognition using partial facial data as probes remains a largely unexplored area of research. Given that humans and computers perform face recognition and authentication in inherently different ways, it is both interesting and intriguing to understand how a computer favours various parts of the face when presented with the challenges of face recognition. In this work, we explore face recognition using partial facial data. We apply novel experiments to test the performance of machine learning using partial faces, together with other manipulations of face images such as rotation and zooming, as training and recognition cues. In particular, we study the recognition rate for various parts of the face such as the eyes, mouth, nose and cheek. We also study the effect of facial rotation on recognition, as well as the effect of zooming out of the facial images. Our experiments use a state-of-the-art convolutional neural network based architecture along with the pre-trained VGG-Face model, through which we extract features for machine learning. We then use two classifiers, cosine similarity and a linear support vector machine, to test the recognition rates. We ran our experiments on two publicly available datasets: the controlled Brazilian FEI dataset and the uncontrolled LFW dataset.
Our results show that individual parts of the face such as the eyes, nose and cheeks have low recognition rates, though the rate of recognition rises quickly when combinations of individual parts are presented as probes.
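The partial-face and zoom manipulations used as probes can be sketched as simple image operations on an aligned face. The fractional crop boxes and the padding-based zoom-out below are illustrative assumptions, not the paper's exact preprocessing:

```python
import numpy as np

def crop_part(face, part):
    """Crop a named region from an aligned H x W face image.
    The fractional boxes (top, bottom, left, right) are assumptions
    chosen only to illustrate the idea of part-based probes."""
    h, w = face.shape[:2]
    boxes = {
        "eyes":     (0.25, 0.45, 0.10, 0.90),
        "nose":     (0.40, 0.65, 0.30, 0.70),
        "mouth":    (0.65, 0.85, 0.25, 0.75),
        "top_half": (0.00, 0.50, 0.00, 1.00),
    }
    t, b, l, r = boxes[part]
    return face[int(t * h):int(b * h), int(l * w):int(r * w)]

def zoom_out(face, factor):
    """Simulate zooming out by padding the face with a black border,
    so the face occupies a smaller fraction of the frame."""
    h, w = face.shape[:2]
    ph, pw = int(h * (factor - 1) / 2), int(w * (factor - 1) / 2)
    return np.pad(face, ((ph, ph), (pw, pw)), mode="constant")

face = np.ones((100, 100))
print(crop_part(face, "eyes").shape)  # (20, 80)
print(zoom_out(face, 2.0).shape)      # (200, 200)
```

Each cropped or zoomed probe would then be resized to the network's input resolution before VGG-Face feature extraction.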
Single-Sample Face Recognition Based on Intra-Class Differences in a Variation Model
In this paper, a novel random facial variation modeling system for sparse representation face recognition is presented. Although Sparse Representation-Based Classification (SRC) has recently represented a breakthrough in face recognition due to its good performance and robustness, it suffers from a critical problem: SRC needs sufficiently many training samples to achieve good performance. To address this issue, we tackle the single-sample face recognition problem using intra-class differences in a facial variation model based on random projection and sparse representation. We present a facial variation modeling system composed only of various facial variations, and further propose a novel facial random noise dictionary learning method that is invariant to different faces. Experimental results on the AR, Yale B, Extended Yale B, MIT and FEI databases validate that our method leads to substantial improvements, particularly in single-sample face recognition problems.
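The core SRC idea referenced above is to represent a probe over a dictionary of training faces and classify by the smallest class-wise reconstruction residual. The sketch below is a simplified variant: real SRC solves a single l1-minimisation over all classes, whereas here a per-class least-squares fit stands in, purely for illustration:

```python
import numpy as np

def src_classify(y, A, labels):
    """Simplified sparse-representation-style classifier.
    Represent probe y over each class's training columns and return the
    class with the smallest reconstruction residual. (Genuine SRC uses a
    single l1-regularised fit over the whole dictionary; least squares
    per class is a rough stand-in used here only for illustration.)"""
    best, best_res = None, np.inf
    for c in set(labels):
        Ac = A[:, [i for i, lab in enumerate(labels) if lab == c]]
        x, *_ = np.linalg.lstsq(Ac, y, rcond=None)
        res = np.linalg.norm(y - Ac @ x)
        if res < best_res:
            best, best_res = c, res
    return best

# Toy dictionary: class "A" training columns lie along axis 0, class "B"
# along axis 1; a probe near axis 0 should be assigned to class "A".
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
labels = ["A", "B"]
print(src_classify(np.array([1.0, 0.1, 0.0]), A, labels))  # A
```

The paper's contribution is to augment such a dictionary with a learned, face-invariant variation component, so that a single gallery sample per subject plus shared variation atoms can reconstruct probes under lighting or expression changes.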
Advanced Biometrics with Deep Learning
Biometrics, such as fingerprint, iris, face, hand print, hand vein, speech and gait recognition, have become commonplace as a means of identity management for various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction and recognition, based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.
Computational Face Recognition Using Machine Learning Models
Faces are among the most complex stimuli that the human visual system
processes. Growing commercial interest in face recognition is encouraging,
but it also turns out to be a challenging endeavour. The challenges arise
when situations are complex and cause varied facial appearance due to, e.g.,
occlusion, low resolution, and ageing. The problem of computer-based face
recognition using partial facial data is still largely an unexplored area of
research, as is the question of how a computer interprets the various parts
of the face. Another challenge is age progression and regression, considered
to be the most revealing topic for understanding how the human face changes
during life.
In this research, various computational face recognition models are
investigated to overcome the challenges posed by ageing and
occlusions/partial faces. For partial-face-based recognition, a pre-trained
VGGF model is employed for feature extraction, followed by popular
classifiers such as support vector machines (SVMs) and cosine similarity
(CS) for classification. In this framework, parts of faces such as the eyes,
nose and forehead are used individually for training and testing. The
results show an improvement in recognition for small parts: for example, the
forehead recognition rate improved from about 0% to nearly 35%, and the eyes
from about 22% to approximately 65%. In the second framework, five
sub-models were built based on Convolutional Neural Networks (CNNs), named
Eyes-CNNs, Nose-CNNs, Mouth-CNNs, Forehead-CNNs, and the combined
EyesNose-CNNs. The experimental results show high recognition rates for
small parts; for example, the eyes increased to about 90.83% and the
forehead reached about 44.5%. Furthermore, the challenge of facial ageing is
approached by proposing an age-template-based framework that generates an
age-based face template for enhanced face generation and recognition. The
results show that the generated aged faces are more reliable compared with
the state-of-the-art.
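The part-specific sub-models described above can be combined at the score level. The averaging fusion rule and the tiny score vectors below are assumptions for illustration only, not the thesis's exact combination method:

```python
import numpy as np

def fuse_scores(part_scores):
    """Average per-part softmax score vectors (one per sub-model, e.g.
    Eyes-CNNs or Nose-CNNs) and return the index of the predicted subject."""
    fused = np.mean(np.stack(part_scores), axis=0)
    return int(np.argmax(fused))

# Hypothetical softmax outputs over 3 subjects from two sub-models.
eyes = np.array([0.7, 0.2, 0.1])  # Eyes-CNNs: confident on subject 0
nose = np.array([0.4, 0.5, 0.1])  # Nose-CNNs: leans toward subject 1
print(fuse_scores([eyes, nose]))  # 0  (fused scores: [0.55, 0.35, 0.1])
```

Score-level fusion is one simple way the combined EyesNose-CNNs idea can outperform any single part, since errors made by one sub-model can be outvoted by the others.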