Unconstrained Face Verification using Deep CNN Features
In this paper, we present an algorithm for unconstrained face verification
based on deep convolutional features and evaluate it on the newly released
IARPA Janus Benchmark A (IJB-A) dataset. The IJB-A dataset includes real-world unconstrained faces of 500 subjects with full pose and illumination variations, making it much harder than the traditional Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) datasets. The deep convolutional neural network (DCNN) is trained on the CASIA-WebFace dataset. Extensive experiments on the IJB-A dataset are provided.
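As a rough, hypothetical sketch of how such template-based deep-feature verification is typically scored (the paper's exact network, training details, and score fusion are not reproduced; embed is a placeholder for a DCNN feature extractor trained on CASIA-WebFace):

import numpy as np

def embed(face_crops: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the paper's DCNN feature extractor:
    maps N aligned face crops to N d-dimensional features."""
    raise NotImplementedError

def template_feature(face_crops: np.ndarray) -> np.ndarray:
    """IJB-A groups a subject's media into a template; a common baseline
    is to average the L2-normalized per-image features."""
    f = embed(face_crops)                                  # (N, d)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    return f.mean(axis=0)

def verification_score(t_a: np.ndarray, t_b: np.ndarray) -> float:
    """Cosine similarity between two template features; thresholding this
    score yields the same/different-identity decision."""
    return float(t_a @ t_b / (np.linalg.norm(t_a) * np.linalg.norm(t_b)))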
Local Higher-Order Statistics (LHS) describing images with statistics of local non-binarized pixel patterns
Accepted for publication in Computer Vision and Image Understanding (CVIU). We propose a new image representation for texture categorization and facial analysis, relying on higher-order local differential statistics as features. It has recently been shown that distributions of small local pixel patterns can be highly discriminative while being extremely efficient to compute, in contrast to models based on the global structure of images. Motivated by such works, we propose to use higher-order statistics of local non-binarized pixel patterns for image description. The proposed model requires neither (i) a user-specified quantization of the space of pixel patterns nor (ii) heuristics for discarding low-occupancy volumes of that space. Instead, we use a data-driven soft quantization of the space, with parametric mixture models, combined with higher-order statistics based on Fisher scores. We demonstrate that this leads to a more expressive representation which, when combined with discriminatively learned classifiers and metrics, achieves state-of-the-art performance on challenging texture and facial analysis datasets in a low-complexity setup. Further, it is complementary to higher-complexity features and, when combined with them, improves performance.
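A minimal sketch of the generic pipeline this abstract describes, assuming plain 8-neighbour pixel differences as the non-binarized local patterns and a diagonal-covariance GMM for the data-driven soft quantization; the authors' exact LHS formulation differs in pattern geometry and normalization:

import numpy as np
from sklearn.mixture import GaussianMixture

def local_patterns(img: np.ndarray) -> np.ndarray:
    """8-neighbour pixel differences at each interior pixel, kept
    real-valued (not thresholded into binary codes as in LBP)."""
    img = img.astype(np.float64)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    diffs = [img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - c
             for dy, dx in offsets]
    return np.stack(diffs, axis=-1).reshape(-1, 8)

def fisher_scores(patterns: np.ndarray, gmm: GaussianMixture) -> np.ndarray:
    """First-order Fisher scores w.r.t. the means of a
    diagonal-covariance GMM (soft assignments, not hard quantization)."""
    q = gmm.predict_proba(patterns)                 # (N, K) soft assignments
    parts = []
    for k in range(gmm.n_components):
        d = (patterns - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        parts.append((q[:, [k]] * d).sum(0)
                     / (len(patterns) * np.sqrt(gmm.weights_[k])))
    return np.concatenate(parts)

# usage (diagonal covariances are required by fisher_scores above):
# gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(train_patterns)
# descriptor = fisher_scores(local_patterns(image), gmm)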
Reference face graph for face recognition
Face recognition has been studied extensively; however, real-world face recognition remains a challenging task. The demand for unconstrained practical face recognition is rising with the explosion of online multimedia, such as social networks and video surveillance footage, where face analysis is of significant importance. In this paper, we approach face recognition in the context of graph theory. We recognize an unknown face using an external reference face graph (RFG). An RFG is generated, and recognition of a given face is achieved by comparing it to the faces in the constructed RFG. Centrality measures are utilized to identify distinctive faces in the reference face graph. The proposed RFG-based face recognition algorithm is robust to pose changes and is alignment-free. RFG recognition is used in conjunction with DCT locality-sensitive hashing for efficient retrieval to ensure scalability. Experiments are conducted on several publicly available databases, and the results show that the proposed approach outperforms state-of-the-art methods without any preprocessing requirements such as face alignment. Owing to the richness of the reference set construction, the proposed method can also handle illumination and expression variations.
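A minimal illustration of the centrality idea, assuming a cosine-similarity graph and eigenvector centrality; the paper's actual centrality measures, similarity function, and DCT locality-sensitive hashing step are not shown:

import numpy as np

def reference_centrality(features: np.ndarray, n_iter: int = 100) -> np.ndarray:
    """Eigenvector centrality of a cosine-similarity graph over reference
    faces, computed by power iteration; higher scores mark more central
    (distinctive, well-connected) reference faces."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = np.clip(f @ f.T, 0.0, None)      # non-negative similarity weights
    np.fill_diagonal(W, 0.0)             # no self-loops
    c = np.ones(len(W)) / len(W)
    for _ in range(n_iter):
        c = W @ c
        c /= np.linalg.norm(c)
    return c

# pick the k most central faces as the reference set:
# idx = np.argsort(-reference_centrality(ref_feats))[:k]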
Robust face recognition
University of Technology Sydney, Faculty of Engineering and Information Technology. Face recognition is one of the most important and promising biometric techniques. In face recognition, a similarity score is automatically calculated between face images to decide their identity. Due to its non-invasive characteristics and ease of use, face recognition has shown great potential in many real-world applications, e.g., video surveillance, access control systems, forensics and security, and social networks. This thesis addresses key challenges inherent in real-world face recognition systems, including pose and illumination variations, occlusion, and image blur. To tackle these challenges, a series of robust face recognition algorithms are proposed. These can be summarized as follows:
In Chapter 2, we present a novel, manually designed face image descriptor named “Dual-Cross Patterns” (DCP). DCP efficiently encodes the second-order statistics of facial textures in the most informative directions within a face image. It proves to be more descriptive and discriminative than previous descriptors. We further extend DCP into a comprehensive face representation scheme named “Multi-Directional Multi-Level Dual-Cross Patterns” (MDML-DCPs). MDML-DCPs efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. MDML-DCPs achieves the best performance on the challenging FERET, FRGC 2.0, CAS-PEAL-R1, and LFW databases.
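The second-order idea can be illustrated with a simplified sketch: along each sampling direction, an inner point is compared with the center and an outer point with the inner point, and the resulting bits are packed into a per-pixel code. The true DCP descriptor's dual-cross sampling geometry and code grouping are more specific than this toy version:

import numpy as np

def dcp_like_codes(img: np.ndarray, r1: int = 2, r2: int = 4) -> np.ndarray:
    """Toy second-order pattern code: per direction, one first-order bit
    (inner vs. center) and one second-order bit (outer vs. inner)."""
    img = img.astype(np.float64)
    H, W = img.shape
    dirs = [(0, 1), (1, 1), (1, 0), (1, -1)]   # 4 illustrative directions
    m = r2                                      # margin keeps samples in-bounds
    c = img[m:H - m, m:W - m]
    code = np.zeros(c.shape, dtype=np.int32)
    for i, (dy, dx) in enumerate(dirs):
        inner = img[m + r1 * dy:H - m + r1 * dy, m + r1 * dx:W - m + r1 * dx]
        outer = img[m + r2 * dy:H - m + r2 * dy, m + r2 * dx:W - m + r2 * dx]
        b1 = (inner >= c).astype(np.int32)      # first-order comparison
        b2 = (outer >= inner).astype(np.int32)  # second-order comparison
        code |= (b1 << (2 * i)) | (b2 << (2 * i + 1))
    return code  # histogram these codes per cell to form the descriptor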
In Chapter 3, we develop a deep learning-based face image descriptor named “Multimodal Deep Face Representation” (MM-DFR) to automatically learn face representations from multimodal image data. In brief, convolutional neural networks (CNNs) are designed to extract complementary information from the original holistic face image, the frontal pose image rendered by 3D modeling, and uniformly sampled image patches. The recognition ability of each CNN is optimized by carefully integrating a number of published or newly developed tricks. A feature level fusion approach using stacked auto-encoders is designed to fuse the features extracted from the set of CNNs, which is advantageous for non-linear dimension reduction. MM-DFR achieves over 99% recognition rate on LFW using publicly available training data.
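The fusion mechanism can be sketched as a single tied-weight auto-encoder layer learned on the concatenated CNN features; MM-DFR's actual architecture, losses, and training schedule are not reproduced here:

import numpy as np

def train_autoencoder_layer(X, hidden, lr=1e-3, epochs=100, seed=0):
    """One layer of a stacked auto-encoder with tied weights: learn W so
    that sigmoid(X W) W^T reconstructs X (X should be standardized).
    The hidden code sigmoid(X W) is the fused, lower-dimensional feature;
    stacking repeats this procedure on the codes."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.01, (d, hidden))
    for _ in range(epochs):
        Hc = 1.0 / (1.0 + np.exp(-X @ W))   # encoder (sigmoid)
        R = Hc @ W.T                        # linear decoder, tied weights
        G = R - X                           # gradient of 0.5 * MSE w.r.t. R
        dW = X.T @ ((G @ W) * Hc * (1 - Hc)) + G.T @ Hc   # both paths
        W -= lr * dW / n
    return W

# usage: X = concatenated per-CNN features; fused code = sigmoid(X @ W)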
In Chapter 4, based on our research on handcrafted face image descriptors, we propose a powerful pose-invariant face recognition (PIFR) framework capable of handling the full range of pose variations within ±90° of yaw. The framework has two parts: the first is Patch-based Partial Representation (PBPR), and the second is Multi-task Feature Transformation Learning (MtFTL). PBPR transforms the original PIFR problem into a partial frontal face recognition problem. A robust patch-based face representation scheme is developed to represent the synthesized partial frontal faces. For each patch, a transformation dictionary is learnt under the MtFTL scheme. The transformation dictionary transforms the features of different poses into a discriminative subspace in which face matching is performed. The PBPR-MtFTL framework outperforms previous state-of-the-art PIFR methods on the FERET, CMU-PIE, and Multi-PIE databases.
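As a hypothetical single-task analogue of the feature-transformation step, a per-patch linear map from one pose's features into a common frontal space could be learned by ridge regression; the thesis's multi-task dictionary learning (MtFTL) is not reproduced here:

import numpy as np

def learn_pose_transform(X_pose: np.ndarray, X_frontal: np.ndarray,
                         lam: float = 1e-2) -> np.ndarray:
    """Ridge-regression map A for one patch/pose pair: rows of X_pose and
    X_frontal are corresponding features of the same subjects."""
    d = X_pose.shape[1]
    A = np.linalg.solve(X_pose.T @ X_pose + lam * np.eye(d),
                        X_pose.T @ X_frontal)
    return A  # match faces by comparing X_pose @ A against frontal features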
In Chapter 5, based on our research on deep learning-based face image descriptors, we design a novel framework named Trunk-Branch Ensemble CNN (TBE-CNN) to handle challenges in video-based face recognition (VFR) under surveillance circumstances. Three major challenges are considered: image blur, occlusion, and pose variation. First, to learn blur-robust face representations, we artificially blur training data composed of clear still images to account for the shortfall in real-world video training data. Second, to enhance the robustness of CNN features to pose variations and occlusion, we propose the TBE-CNN architecture, which efficiently extracts complementary information from holistic face images and patches cropped around facial components. Third, to further promote the discriminative power of the representations learnt by TBE-CNN, we propose an improved triplet loss function. With the proposed techniques, TBE-CNN achieves state-of-the-art performance on three popular video face databases: PaSC, COX Face, and YouTube Faces.
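For reference, a minimal sketch of the standard triplet loss that the improved variant builds on; the thesis's modification and its triplet-sampling strategy are not reproduced:

import numpy as np

def triplet_loss(anchor, positive, negative, margin: float = 0.2) -> float:
    """Pull the anchor toward the positive (same identity) and push it away
    from the negative (different identity) by at least `margin` in the
    squared-Euclidean embedding space."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return float(max(d_ap - d_an + margin, 0.0))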
A multimodal deep learning framework using local feature representations for face recognition
The most recent face recognition systems mainly depend on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as the deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with fractal dimension is proposed, termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework, termed the multimodal deep face recognition (MDFR) framework, is proposed to add feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that the representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated in extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
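As a hedged illustration of the fractal-dimension ingredient mentioned above, a standard box-counting estimator on a binary map (e.g., an edge image) is sketched below; the exact estimator used in the paper is not specified in the abstract:

import numpy as np

def box_counting_dimension(binary: np.ndarray) -> float:
    """Estimate fractal dimension as the slope of log N(s) vs. log(1/s),
    where N(s) counts s x s boxes containing foreground pixels.
    Assumes a non-empty foreground; crops to a power-of-two square."""
    n = 2 ** int(np.floor(np.log2(min(binary.shape))))
    img = binary[:n, :n].astype(bool)
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        boxed = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(boxed.sum())
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)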