Infrared face recognition: a comprehensive review of methodologies and databases
Automatic face recognition is an area with immense practical potential which
includes a wide range of commercial and law enforcement applications. Hence it
is unsurprising that it continues to be one of the most active research areas
of computer vision. Even after over three decades of intense research, the
state-of-the-art in face recognition continues to improve, benefitting from
advances in a range of different research fields such as image processing,
pattern recognition, computer graphics, and physiology. Systems based on
visible spectrum images, the most researched face recognition modality, have
reached a significant level of maturity with some practical success. However,
they continue to face challenges in the presence of illumination, pose and
expression changes, as well as facial disguises, all of which can significantly
decrease recognition accuracy. Amongst various approaches which have been
proposed in an attempt to overcome these limitations, the use of infrared (IR)
imaging has emerged as a particularly promising research direction. This paper
presents a comprehensive and timely review of the literature on this subject.
Our key contributions are: (i) a summary of the inherent properties of infrared
imaging which make this modality promising in the context of face recognition,
(ii) a systematic review of the most influential approaches, with a focus on
emerging common trends as well as key differences between alternative
methodologies, (iii) a description of the main databases of infrared facial
images available to the researcher, and lastly (iv) a discussion of the most
promising avenues for future research.
Comment: Pattern Recognition, 2014. arXiv admin note: substantial text overlap
with arXiv:1306.160
On Designing Tattoo Registration and Matching Approaches in the Visible and SWIR Bands
Face, iris and fingerprint based biometric systems are well-explored areas of research. However, there are law enforcement and military applications where none of the aforementioned modalities may be available to be exploited for human identification. In such applications, soft biometrics may be the only clue available for identification or verification purposes. A tattoo is an example of such a soft biometric trait. Unlike face-based biometric systems, which are used in both same-spectral and cross-spectral matching scenarios, tattoo-based human identification is still not a fully explored area of research. At this point in time there are no pre-processing, feature extraction and matching algorithms using tattoo images captured at multiple bands. This thesis is focused on exploring solutions to two challenging problems. The first is cross-spectral tattoo matching. The proposed algorithmic approach takes raw Short-Wave Infrared (SWIR) band tattoo images as input and matches them successfully against their visible band counterparts. The SWIR tattoo images are captured at 1100 nm, 1200 nm, 1300 nm, 1400 nm and 1500 nm. After an empirical study in which multiple photometric normalization techniques were used to pre-process the original multi-band tattoo images, only one was determined to significantly improve cross-spectral tattoo matching performance. The second problem was to develop a fully automatic visible-band tattoo image registration system based on SIFT descriptors and the RANSAC algorithm with a homography model. The proposed automated registration approach significantly reduces the operational cost of a tattoo image identification system (using large-scale tattoo image datasets), in which the alignment of each pair of tattoo images would otherwise have to be performed manually by system operators. At the same time, tattoo matching accuracy is also improved (before vs. after automated alignment) by 45.87% for the NIST-Tatt-C database and 12.65% for the WVU-Tatt database
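The registration pipeline above pairs SIFT correspondences with RANSAC model fitting. As a rough illustration of the RANSAC idea only, the sketch below fits a 2D translation (rather than the full homography the thesis uses, which would typically come from a library such as OpenCV) to noisy point correspondences; the function name, iteration count, and tolerance are all illustrative:

```python
import random

def ransac_translation(matches, iters=200, tol=3.0, seed=0):
    """Estimate a 2D translation (dx, dy) from noisy point
    correspondences with RANSAC. `matches` is a list of
    ((x1, y1), (x2, y2)) pairs. The thesis fits a homography
    (a 4-point model); a translation keeps this sketch short."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1          # 1-point model hypothesis
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < tol
                   and abs(m[1][1] - m[0][1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

# 90 correct matches shifted by (12, -5) plus 10 gross outliers
good = [((i, 2 * i), (i + 12, 2 * i - 5)) for i in range(90)]
bad = [((i, i), (i * 7 + 100, -i)) for i in range(10)]
model, inliers = ransac_translation(good + bad)
print(model, len(inliers))   # → (12, -5) 90
```

The consensus step is what makes the alignment robust: a model hypothesized from an outlier match gathers almost no inliers and is discarded.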
Human Recognition from Video Sequences and Off-Angle Face Images Supported by Respiration Signatures
In this work, we study the problem of human identity recognition using human respiratory waveforms extracted from videos combined with component-based off-angle human facial images. Our proposed system is composed of (i) a physiology-based human clustering module and (ii) an identification module based on facial features (nose, mouth, etc.) extracted from face videos. In our proposed methodology we first passively extract an important vital sign (breath), cluster human subjects into nostril-motion vs. nostril non-motion groups, and then localize a set of facial features before applying feature extraction and matching. Our novel human identity recognition system is very robust, since it works well when dealing with breath signals and a combination of different facial components acquired under uncontrolled lighting conditions. This is achieved by using our proposed Motion Classification approach and Feature Clustering technique based on the breathing waveforms we produce. The contributions of this work are three-fold. First, we collected a set of different datasets on which we tested our proposed approach. Specifically, we considered six different types of facial components and their combinations to generate face-based video datasets, which represent two diverse data collection conditions, i.e. videos acquired with the head in a fully frontal position (baseline) and with the head looking up. Second, we propose a new way of passively measuring human breath from face videos and show output comparable to baseline breathing waveforms produced by an ADInstruments device. Third, we demonstrate good human recognition performance when using the proposed pre-processing procedure of Motion Classification and Feature Clustering, working on partial features of human faces. Our method achieves increased identification rates across all datasets used, and it obtains a significantly high identification rate (ranging from 96%-100% when using a single facial feature or a combination of them), yielding an average 7% improvement over the baseline scenario. To the best of our knowledge, this is the first time that a biometric system fuses an important human vital sign (breath) with facial features in such an efficient manner
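The Motion Classification step described above can be pictured with a toy sketch: frame-to-frame intensity change inside a nostril region of interest separates "motion" from "non-motion" subjects. This is a simplified illustration of the idea, not the work's actual algorithm; the ROI coordinates, threshold, and energy measure are hypothetical:

```python
def motion_energy(frames, roi):
    """Mean absolute frame-to-frame intensity difference inside a
    region of interest (row0, row1, col0, col1) -- here standing in
    for a nostril area that a landmark detector would localize."""
    r0, r1, c0, c1 = roi
    total = count = 0
    for prev, cur in zip(frames, frames[1:]):
        for r in range(r0, r1):
            for c in range(c0, c1):
                total += abs(cur[r][c] - prev[r][c])
                count += 1
    return total / count

def classify_subject(frames, roi, threshold=2.0):
    """Toy Motion Classification: subjects whose nostril region shows
    enough temporal change go to the 'motion' group."""
    return "motion" if motion_energy(frames, roi) > threshold else "non-motion"

# Toy 4x4 video: the top-left 2x2 'nostril' block flickers with breathing
still = [[10] * 4 for _ in range(4)]
breathing = [[[10 + (5 if t % 2 and r < 2 and c < 2 else 0)
               for c in range(4)] for r in range(4)] for t in range(6)]
print(classify_subject(breathing, (0, 2, 0, 2)))      # → motion
print(classify_subject([still] * 6, (0, 2, 0, 2)))    # → non-motion
```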
An Extensive Review on Spectral Imaging in Biometric Systems: Challenges and Advancements
Spectral imaging has recently gained traction for face recognition in
biometric systems. We investigate the merits of spectral imaging for face
recognition and the current challenges that hamper the widespread deployment of
spectral sensors for face recognition. The reliability of conventional face
recognition systems operating in the visible range is compromised by
illumination changes, pose variations and spoof attacks. Recent works have
reaped the benefits of spectral imaging to counter these limitations in
surveillance activities (defence, airport security checks, etc.). However, the
implementation of this technology for biometrics is still in its infancy for
multiple reasons. We present an overview of the existing work in the domain
of spectral imaging for face recognition, the different types of modalities and
their assessment, the availability of public databases for the sake of reproducible
research and the evaluation of algorithms, and recent advancements in the
field, such as the use of deep learning-based methods for recognizing faces
from spectral images
Improving Multi-view Facial Expression Recognition in Unconstrained Environments
Facial expression and emotion-related research has been a longstanding activity in psychology, while computerized/automatic facial expression recognition of emotion is a relatively recent, still emerging but active research area. Although many automatic computer systems have been proposed to address facial expression recognition problems, the majority of them fail to cope with the requirements of many practical application scenarios, arising from either environmental factors or unexpected behavioural bias introduced by the users, such as illumination conditions and large head pose variation relative to the camera. In this thesis, two of the most influential and common issues raised when applying automatic facial expression recognition systems in practical scenarios are comprehensively explored and investigated. Through a series of experiments carried out under a proposed texture-based system framework for multi-view facial expression recognition, several novel texture feature representations are introduced for implementing multi-view facial expression recognition systems in practical environments, with which state-of-the-art performance is achieved. In addition, a variety of novel categorization schemes for the configurations of an automatic multi-view facial expression recognition system are presented to address the impractical discrete categorization of facial expressions of emotion in real-world scenarios. A significant improvement is observed when using the proposed categorizations in the proposed system framework with a novel implementation of the block-based local ternary pattern approach
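The local ternary pattern (LTP) mentioned above extends the local binary pattern with a tolerance band: each neighbour is coded +1, 0, or -1 relative to the centre pixel, and the ternary code is conventionally split into an "upper" and a "lower" binary pattern whose block-wise histograms form the texture descriptor. A minimal single-patch sketch, with an illustrative neighbour ordering and threshold:

```python
def ltp_split(patch, t=5):
    """Upper/lower Local Ternary Pattern codes of a 3x3 patch
    (list of 3 rows). Neighbours are visited clockwise from the
    top-left; the ternary code is split into two binary codes, as
    in the standard LTP formulation."""
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    upper = lower = 0
    for bit, (r, col) in enumerate(order):
        v = patch[r][col]
        if v >= c + t:
            upper |= 1 << bit      # ternary +1 goes to the upper pattern
        elif v <= c - t:
            lower |= 1 << bit      # ternary -1 goes to the lower pattern
    return upper, lower

patch = [[60, 50, 58],
         [40, 50, 70],
         [50, 44, 20]]
print(ltp_split(patch, t=5))       # → (13, 176)
```

In a block-based variant, the image is tiled, the two codes are histogrammed per block, and the concatenated histograms serve as the feature vector, which makes the descriptor less sensitive to noise near the centre value than plain LBP.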
Fusion features ensembling models using Siamese convolutional neural network for kinship verification
The family is one of the most important entities in the community. Mining genetic information from facial images is increasingly being utilized in a wide range of real-world applications, making family-member tracing and kinship analysis remarkably easy, inexpensive, and fast compared to the procedure of profiling deoxyribonucleic acid (DNA). However, the prospects of building reliable models for kinship recognition still suffer from insufficient determination of familial features, unstable reference cues of kinship, and the genetic influence factors of family features. This research proposes enhanced methods for extracting and selecting effective familial features that could provide evidence of kinship, leading to improved kinship verification accuracy from visual facial images. First, the Convolutional Neural Network based on Optimized Local Raw Pixels Similarity Representation (OLRPSR) method is developed to improve accuracy by generating a new matrix representation that removes irrelevant information. Second, the Siamese Convolutional Neural Network and Fusion of the Best Overlapping Blocks (SCNN-FBOB) is proposed to track and identify the most informative kinship clue features in order to achieve higher accuracy. Third, the Siamese Convolutional Neural Network and Ensembling Models Based on Selecting Best Combination (SCNN-EMSBC) is introduced to overcome the weak performance of individual images and classifiers. To evaluate the performance of the proposed methods, a series of experiments is conducted using two popular kinship benchmark databases, KinFaceW-I and KinFaceW-II, and the results are benchmarked against the state-of-the-art algorithms found in the literature. The SCNN-EMSBC method achieves promising results, with average accuracies of 92.42% and 94.80% on KinFaceW-I and KinFaceW-II, respectively. These results significantly improve kinship verification performance and outperform the state-of-the-art algorithms for visual image-based kinship verification
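The Siamese idea underlying SCNN-FBOB and SCNN-EMSBC is that both face images pass through the same embedding network, and a small distance between the two feature vectors indicates kinship. The toy sketch below substitutes a trivial quadrant-mean "embedding" for the shared convolutional branch; the threshold and all helper names are hypothetical, not the authors' models:

```python
def embed(img):
    """Toy shared embedding: mean intensity of each quadrant of a
    square image (list of rows). Stands in for the shared
    convolutional branch of a Siamese network."""
    h, w = len(img), len(img[0])
    feats = []
    for r0 in (0, h // 2):
        for c0 in (0, w // 2):
            block = [img[r][c] for r in range(r0, r0 + h // 2)
                     for c in range(c0, c0 + w // 2)]
            feats.append(sum(block) / len(block))
    return feats

def verify_kin(img_a, img_b, threshold=10.0):
    """Siamese verification: both inputs go through the SAME
    embedding; a small L1 feature distance means 'related'."""
    ea, eb = embed(img_a), embed(img_b)
    dist = sum(abs(a - b) for a, b in zip(ea, eb))
    return dist <= threshold

parent = [[100, 102, 40, 41], [101, 99, 42, 40],
          [60, 61, 80, 82], [59, 62, 81, 79]]
child = [row[:] for row in parent]        # near-identical toy image
child[0][0] += 4
print(verify_kin(parent, child))          # → True
```

Because the two branches share weights, the model learns a single feature space in which related faces cluster, rather than a per-image classifier; that is what makes the architecture natural for verification rather than identification.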
Ubiquitous Technologies for Emotion Recognition
Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions, continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions
Biometric Systems
Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study