Novel algorithms for 3D human face recognition
Automated human face recognition is a computer vision problem of considerable practical significance. Existing two-dimensional (2D) face recognition techniques perform poorly for faces with uncontrolled poses, lighting, and facial expressions. Face recognition technology based on three-dimensional (3D) facial models is now emerging. Geometric facial models can be easily corrected for pose variations; they are illumination invariant and provide structural information about the facial surface. Algorithms for 3D face recognition exist; however, the area is far from mature. In this dissertation we address a number of open questions in the area of 3D human face recognition. First, we make available to qualified researchers in the field, at no cost, the large Texas 3D Face Recognition Database, which was acquired as part of this research work. This database contains 1149 2D and 3D images of 118 subjects. We also provide 25 manually located facial fiducial points for each face in this database. Our next contribution is a completely automatic, novel 3D face recognition algorithm, which employs discriminatory anthropometric distances between carefully selected local facial features. This algorithm neither uses general-purpose pattern recognition approaches nor directly extends 2D face recognition techniques to the 3D domain. Instead, it is based on an understanding of the structurally diverse characteristics of human faces, which we isolate from the scientific discipline of facial anthropometry. We demonstrate the effectiveness and superior performance of the proposed algorithm relative to existing benchmark 3D face recognition algorithms. A related contribution is the development of highly accurate and reliable 2D+3D algorithms for automatically detecting 10 anthropometric facial fiducial points. While developing these algorithms, we identify unique structural/textural properties associated with the facial fiducial points.
Furthermore, unlike previous algorithms for detecting facial fiducial points, we systematically evaluate our algorithms against manually located facial fiducial points on a large database of images. Our third contribution is an effective algorithm for computing the structural dissimilarity of 3D facial surfaces, which uses a recently developed image similarity index called the complex-wavelet structural similarity (CW-SSIM) index. This algorithm is unique in that, unlike existing approaches, it does not require the facial surfaces to be finely registered before they are compared. Furthermore, it is nearly an order of magnitude more accurate than existing facial-surface-matching approaches. Finally, we propose a simple method to combine the two new 3D face recognition algorithms that we developed, resulting in a 3D face recognition algorithm that is competitive with existing state-of-the-art algorithms.
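To illustrate the anthropometric-distance idea described in the abstract, here is a minimal sketch: treat located 3D fiducial points as landmarks, form a signature from their pairwise Euclidean distances, and match by nearest neighbour. This is an illustrative assumption about the general approach, not the dissertation's actual algorithm; the function names and the nearest-neighbour matcher are hypothetical.

```python
import numpy as np

def anthropometric_signature(fiducials):
    """Pairwise Euclidean distances between 3D fiducial points.

    fiducials: (k, 3) array of landmark coordinates on one face scan.
    Returns a flat vector of the k*(k-1)/2 inter-landmark distances.
    """
    f = np.asarray(fiducials, dtype=float)
    diffs = f[:, None, :] - f[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    iu = np.triu_indices(len(f), k=1)   # upper triangle, no diagonal
    return d[iu]

def identify(probe, gallery):
    """Nearest-neighbour match of a probe signature against a gallery.

    gallery: dict mapping subject id -> signature vector.
    Returns the id whose signature is closest in Euclidean distance.
    """
    return min(gallery, key=lambda s: np.linalg.norm(gallery[s] - probe))
```

Because inter-landmark distances depend only on the facial geometry, such a signature is inherently pose-invariant, which is one motivation the abstract gives for working in 3D.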
Recurrent Attention Models for Depth-Based Person Identification
We present an attention-based model that reasons on human body shape and
motion dynamics to identify individuals in the absence of RGB information,
hence in the dark. Our approach leverages unique 4D spatio-temporal signatures
to address the identification problem across days. Formulated as a
reinforcement learning task, our model is based on a combination of
convolutional and recurrent neural networks with the goal of identifying small,
discriminative regions indicative of human identity. We demonstrate that our
model produces state-of-the-art results on several published datasets given
only depth images. We further study the robustness of our model towards
viewpoint, appearance, and volumetric changes. Finally, we share insights
gleaned from interpretable 2D, 3D, and 4D visualizations of our model's
spatio-temporal attention.
Comment: Computer Vision and Pattern Recognition (CVPR) 201
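The paper's model couples convolutional and recurrent networks trained with reinforcement learning; the toy numpy sketch below shows only the structural loop that the abstract describes — take a small glimpse of a depth frame, update a recurrent state, emit the next attention location and class scores. The weights are random and untrained; every name here is illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def glimpse(frame, center, size=8):
    """Crop a small square patch of a depth frame around an attended point."""
    y, x = center
    h, w = frame.shape
    y0 = int(np.clip(y - size // 2, 0, h - size))
    x0 = int(np.clip(x - size // 2, 0, w - size))
    return frame[y0:y0 + size, x0:x0 + size]

class RecurrentAttention:
    """Toy recurrent attention: at each step, take a glimpse, update a
    hidden state, and emit the next location plus identity scores."""

    def __init__(self, patch=8, hidden=32, classes=5):
        self.W_in = rng.standard_normal((patch * patch, hidden)) * 0.1
        self.W_h = rng.standard_normal((hidden, hidden)) * 0.1
        self.W_loc = rng.standard_normal((hidden, 2)) * 0.1
        self.W_out = rng.standard_normal((hidden, classes)) * 0.1

    def run(self, frames, steps=4):
        h = np.zeros(self.W_h.shape[0])
        loc = np.array(frames[0].shape) // 2          # start at the centre
        for t in range(steps):
            g = glimpse(frames[t % len(frames)], loc).ravel()
            h = np.tanh(g @ self.W_in + h @ self.W_h)
            loc = loc + (h @ self.W_loc).astype(int)  # move the attention
            loc = np.clip(loc, 0, np.array(frames[0].shape) - 1)
        scores = h @ self.W_out
        return np.exp(scores) / np.exp(scores).sum()  # softmax over identities
```

The restriction to small glimpses is what forces the model to find the "small, discriminative regions" the abstract mentions, since the classifier never sees the whole frame at once.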
Gait recognition with shifted energy image and structural feature extraction
Copyright © 2012 IEEE. In this paper, we present a novel and efficient gait recognition system. The proposed system uses two novel gait representations, i.e., the shifted energy image and the gait structural profile, which have increased robustness to some classes of structural variations. Furthermore, we introduce a novel method for the simulation of walking conditions and the generation of artificial subjects that are used for the application of linear discriminant analysis. In the decision stage, the two representations are fused. Thorough experimental evaluation, conducted using one traditional and two new databases, demonstrates the advantages of the proposed system in comparison with current state-of-the-art systems.
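Energy-image representations of gait are typically built by averaging binary silhouettes over a walking cycle; a "shifted" variant presumably aligns each silhouette before averaging. The sketch below centres every silhouette on its column centroid and then averages — an assumed stand-in for the paper's alignment procedure, not its exact method.

```python
import numpy as np

def shifted_energy_image(silhouettes):
    """Average of binary silhouettes after horizontally centring each one
    on its column centroid (a simple stand-in for shift-based alignment)."""
    sils = [np.asarray(s, dtype=float) for s in silhouettes]
    h, w = sils[0].shape
    aligned = []
    for s in sils:
        cols = s.sum(axis=0)                 # pixel mass per column
        if cols.sum() == 0:                  # empty frame: keep as-is
            aligned.append(s)
            continue
        cx = int(round((cols * np.arange(w)).sum() / cols.sum()))
        aligned.append(np.roll(s, w // 2 - cx, axis=1))
    return np.mean(aligned, axis=0)

def gait_distance(e1, e2):
    """L1 distance between two energy images (lower = more similar)."""
    return np.abs(e1 - e2).sum()
```

Averaging after alignment means translation of the walker across the frame no longer blurs the representation, which is the kind of structural-variation robustness the abstract claims for its representations.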
A Survey on Soft Biometrics for Human Identification
Security demands have shifted the focus to multi-biometrics. Ancillary information extracted from primary biometric traits (face and body), such as facial measurements, gender, skin colour, ethnicity, and height, is called soft biometrics. Soft biometrics can be integrated to improve the speed and overall performance of a primary biometric system (e.g., fusing the face with facial marks), or to generate a qualitative, human-interpretable semantic description of a person (e.g., "old African male with blue eyes") that limits the search over the whole dataset in a fusion framework. This chapter provides a holistic survey of soft biometrics, highlighting major works with a focus on facial soft biometrics, and discusses the strengths and limitations of the feature extraction and classification techniques that have been proposed.
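The two uses the survey describes — pruning the search space with soft traits and fusing them with a primary matcher's score — can be sketched together as follows. This is a generic illustration of the idea, with hypothetical names and a simple weighted-bonus fusion rule, not a method from the survey.

```python
def soft_filtered_search(probe_soft, face_scores, gallery_soft, weight=0.2):
    """Prune the gallery with soft traits, then fuse a small soft-match
    bonus with the primary face score (higher score = better match).

    probe_soft: dict of observed traits, e.g. {"gender": "male"}.
    face_scores: dict subject_id -> face-matcher similarity in [0, 1].
    gallery_soft: dict subject_id -> dict of that subject's traits.
    """
    fused = {}
    for sid, score in face_scores.items():
        traits = gallery_soft.get(sid, {})
        # hard filter: discard subjects whose recorded traits contradict
        # any observed soft trait (missing traits are not contradictions)
        if any(traits.get(k) not in (None, v) for k, v in probe_soft.items()):
            continue
        # soft fusion: fraction of matching traits adds a weighted bonus
        matches = sum(traits.get(k) == v for k, v in probe_soft.items())
        fused[sid] = score + weight * matches / max(len(probe_soft), 1)
    return max(fused, key=fused.get) if fused else None
```

The filter step is where the speed-up comes from: subjects contradicting an observed trait are never scored by the (expensive) primary matcher in a real system.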
Robust signatures for 3D face registration and recognition
Biometric authentication through face recognition has been an active area of
research for the last few decades, motivated by its application-driven demand. The popularity
of face recognition, compared to other biometric methods, is largely due to its
minimum requirement of subject co-operation, relative ease of data capture and similarity
to the natural way humans distinguish each other.
3D face recognition has recently received particular interest since three-dimensional
face scans eliminate or reduce important limitations of 2D face images, such as illumination
changes and pose variations. In fact, three-dimensional face scans are usually captured
by scanners through the use of a constant structured-light source, making them invariant
to environmental changes in illumination. Moreover, a single 3D scan also captures the
entire face structure and allows for accurate pose normalisation.
However, one of the biggest challenges that still remain in three-dimensional face
scans is the sensitivity to large local deformations due to, for example, facial expressions.
Due to the nature of the data, deformations bring about large changes in the 3D geometry
of the scan. In addition to this, 3D scans are also characterised by noise and artefacts such
as spikes and holes, which are uncommon with 2D images and requires a pre-processing
stage that is speci c to the scanner used to capture the data.
The aim of this thesis is to devise a face signature that is compact in size and
overcomes the above-mentioned limitations. We investigate the use of facial regions and
landmarks towards a robust and compact face signature, and we study, implement and
validate a region-based and a landmark-based face signature. Combinations of regions and
landmarks are evaluated for their robustness to pose and expressions, while the matching
scheme is evaluated for its robustness to noise and data artefacts.
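The spike-and-hole pre-processing the thesis mentions can be illustrated on a scan stored as a depth map: suppress outliers against the local median and fill missing pixels from valid neighbours. This is a deliberately naive toy (per-pixel 3×3 windows, a median/MAD outlier test), not the thesis's scanner-specific pipeline.

```python
import numpy as np

def clean_scan(depth, spike_thresh=3.0):
    """Toy pre-processing for a 3D scan stored as a depth map:
    suppress spikes (outliers vs. the local median) and fill holes
    (NaN pixels) with the median of valid 3x3 neighbours."""
    d = np.asarray(depth, dtype=float)
    h, w = d.shape
    out = d.copy()
    for y in range(h):
        for x in range(w):
            win = d[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            valid = win[~np.isnan(win)]
            if valid.size == 0:
                continue
            med = np.median(valid)
            # median absolute deviation; epsilon avoids division by zero
            mad = np.median(np.abs(valid - med)) + 1e-9
            v = d[y, x]
            if np.isnan(v):                          # hole: fill in
                out[y, x] = med
            elif abs(v - med) / mad > spike_thresh:  # spike: clamp
                out[y, x] = med
    return out
```

Real pipelines would vectorise this (e.g. with a median filter) and tune the window and threshold to the scanner, which is exactly why the thesis calls the stage scanner-specific.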
Effectiveness of Multi-View Face Images and Anthropometric Data In Real-Time Networked Biometrics
Over the years, biometric systems have evolved into a reliable mechanism for establishing identity of individuals in the context of applications such as access control, personnel screening and criminal identification. However, recent terror attacks, security threats and intrusion attempts have necessitated a transition to modern biometric systems that can identify humans under unconstrained environments, in real-time. Specifically, the following are three critical transitions that are needed and which form the focus of this thesis: (1) In contrast to operation in an offline mode using previously acquired photographs and videos obtained under controlled environments, it is required that identification be performed in a real-time dynamic mode using images that are continuously streaming in, each from a potentially different view (front, profile, partial profile) and with different quality (pose and resolution). (2) While different multi-modal fusion techniques have been developed to improve system accuracy, these techniques have mainly focused on combining the face biometrics with modalities such as iris and fingerprints that are more reliable but require user cooperation for acquisition. In contrast, the challenge in a real-time networked biometric system is that of combining opportunistically captured multi-view facial images along with soft biometric traits such as height, gait, attire and color that do not require user cooperation. 
(3) Typical operation is expected to be in an open-set mode, where the number of subjects enrolled in the system is much smaller than the number of probe subjects; yet the system is required to achieve high accuracy. To address these challenges and make a successful transition to real-time human identification systems, this thesis makes the following contributions: (1) A score-based multi-modal, multi-sample fusion technique is designed to combine face images acquired by a multi-camera network, and the effectiveness of opportunistically acquired multi-view face images in improving identification performance is characterized. (2) The multi-view face acquisition system is complemented by a network of Microsoft Kinects for extracting human anthropometric features (specifically height, shoulder width and arm length); the score-fusion technique is augmented to utilize this anthropometric data, and its effectiveness is characterized. (3) The performance of the system is demonstrated using a database of 51 subjects collected with the networked biometric data acquisition system. Our results show improved recognition accuracy when face information from multiple views is utilized, and indicate that a given level of accuracy can be attained with fewer probe images (less time) than with a uni-modal biometric system.
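A common score-level baseline for the kind of fusion this thesis describes is the sum rule across views, optionally blended with an anthropometric similarity. The sketch below is that generic baseline, with an illustrative blending weight `alpha`; it is not the thesis's actual fusion technique.

```python
import numpy as np

def fuse_multiview(view_scores, anthro_score=None, alpha=0.8):
    """Sum-rule fusion of per-view face scores, optionally blended with an
    anthropometric similarity (a standard score-level fusion baseline).

    view_scores: list of dicts, one per camera view, subject -> similarity.
    anthro_score: optional dict subject -> similarity from body measurements.
    Returns the best-scoring subject id.
    """
    subjects = set().union(*view_scores)
    fused = {}
    for s in subjects:
        # average over the views in which this subject was scored
        face = np.mean([v[s] for v in view_scores if s in v])
        if anthro_score is not None and s in anthro_score:
            fused[s] = alpha * face + (1 - alpha) * anthro_score[s]
        else:
            fused[s] = face
    return max(fused, key=fused.get)
```

Averaging over views is what lets a weak frontal match be rescued by a strong profile match, which is the intuition behind the thesis's multi-view accuracy gains.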
Ear Biometrics: A Comprehensive Study of Taxonomy, Detection, and Recognition Methods
Due to recent challenges in access control, surveillance and security, there is an increased need for efficient human authentication solutions. Ear recognition is an appealing choice for identifying individuals in controlled or challenging environments. The outer part of the ear carries highly discriminative information across individuals and has been shown to be robust for recognition. In addition, the data acquisition procedure is contactless, non-intrusive, and covert. This work focuses on using ear images for human authentication in the visible and thermal spectrums. We perform a systematic study of ear features and propose a taxonomy for them. We also investigate which parts of the head's side view provide distinctive identity cues. We then study the different modules of the ear recognition system. First, we propose an ear detection system that uses deep learning models. Second, we compare machine learning methods to establish a baseline ear recognition performance for traditional systems. Third, we explore convolutional neural networks for ear recognition and the optimal learning settings. Fourth, we systematically evaluate performance in the presence of pose variation and various image artifacts, which commonly occur in real-life recognition applications, to characterize the robustness of the proposed ear recognition models. Additionally, we design an efficient ear image quality assessment tool to guide the ear recognition system. Finally, we extend our work to ear recognition in the long-wave infrared domain.