
    Enhanced Deep Learning Architectures for Face Liveness Detection for Static and Video Sequences

    The major contribution of this research is the development of deep architectures for face liveness detection on static images as well as video sequences, using a combination of texture analysis and a deep Convolutional Neural Network (CNN) to classify the captured image or video as real or fake. Face recognition is a popular and efficient form of biometric authentication used in many software applications. One drawback of this technique is that it is prone to face spoofing attacks, where an impostor can gain access to the system by presenting a photograph or recorded video of a valid user to the sensor. Thus, face liveness detection is a critical preprocessing step in face recognition authentication systems. The first part of our research was on face liveness detection on a static image, where we applied nonlinear diffusion based on an additive operator splitting scheme and a tridiagonal matrix block-solver algorithm to the image, which enhances the edges and surface texture in the real image. The diffused image was then fed to a deep CNN to identify the complex and deep features for classification. We obtained high accuracy on the NUAA Photograph Impostor dataset using one of our enhanced architectures. In the second part of our research, we developed an end-to-end real-time solution for face liveness detection on static images: instead of using a separate preprocessing step for diffusing the images, we used a combined architecture in which the diffusion process and the CNN were implemented in a single step. This integrated approach gave promising results with two different architectures on the Replay-Attack and Replay-Mobile datasets. We also developed a novel deep architecture for face liveness detection on video frames that uses diffusion of the images followed by a deep CNN and a Long Short-Term Memory (LSTM) network to classify the video sequence as real or fake. Performance evaluation of our architecture on the Replay-Attack and Replay-Mobile datasets gave very competitive results. Finally, we performed liveness detection on video sequences using diffusion and the Two-Stream Inflated 3D ConvNet (I3D) architecture, and our experiments on the Replay-Attack and Replay-Mobile datasets also gave very good results.
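    The nonlinear diffusion preprocessing described above can be illustrated with a small sketch. The thesis uses a semi-implicit additive operator splitting (AOS) scheme with a tridiagonal block solver; the explicit Perona-Malik scheme below is a simpler stand-in that shows the same edge-preserving smoothing idea (the function name and parameter values are illustrative, not the authors').

```python
import numpy as np

def nonlinear_diffusion(img, iterations=10, kappa=15.0, step=0.15):
    """Explicit Perona-Malik anisotropic diffusion on a 2-D grayscale image.

    kappa plays the role of the smoothness parameter: gradients much
    weaker than kappa are smoothed away, while strong edges persist.
    """
    u = img.astype(np.float64).copy()
    for _ in range(iterations):
        # differences toward the four cardinal neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping function g(d) = exp(-(d / kappa)^2)
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

In the papers, the diffused image (rather than the raw pixels) is what gets fed to the CNN, so that enhanced edges and surface texture drive the real/fake decision.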

    An efficient multiscale scheme using local zernike moments for face recognition

    In this study, we propose a face recognition scheme using local Zernike moments (LZM), which can be used for both identification and verification. In this scheme, local patches around the landmarks are extracted from the complex components obtained by the LZM transformation. Then, phase-magnitude histograms are constructed within these patches to create descriptors for face images. An image pyramid is utilized to extract features at multiple scales, and the descriptors are constructed for each image in this pyramid. We used three different public datasets to examine the performance of the proposed method: Face Recognition Technology (FERET), Labeled Faces in the Wild (LFW), and Surveillance Cameras Face (SCface). The results revealed that the proposed method is robust against variations such as illumination, facial expression, and pose. Aside from this, it can be used for low-resolution face images acquired in uncontrolled environments or in the infrared spectrum. Experimental results show that our method outperforms state-of-the-art methods on the FERET and SCface datasets.
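    As a rough illustration of the multiscale descriptor construction, the sketch below builds an image pyramid, extracts patches around landmarks at each level, and concatenates per-patch histograms. It substitutes plain intensity histograms for the LZM phase-magnitude histograms used in the paper, so it only shows the pyramid/patch/concatenate scaffolding, not the actual descriptor.

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Image pyramid by 2x2 block averaging (a stand-in for proper smoothing)."""
    pyramid = [img.astype(np.float64)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        down = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(down)
    return pyramid

def patch_histogram(img, center, size=8, bins=16):
    """Normalised intensity histogram of a square patch around a landmark."""
    r, c = center
    patch = img[max(r - size, 0):r + size, max(c - size, 0):c + size]
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 256.0))
    return hist / max(hist.sum(), 1)

def multiscale_descriptor(img, landmarks, levels=3):
    """Concatenate patch histograms over every pyramid level."""
    parts = []
    for lvl, scaled in enumerate(build_pyramid(img, levels)):
        factor = 2 ** lvl  # landmark coordinates shrink with the pyramid
        for (r, c) in landmarks:
            parts.append(patch_histogram(scaled, (r // factor, c // factor)))
    return np.concatenate(parts)
```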

    Optimizing Deep CNN Architectures for Face Liveness Detection

    Face recognition is a popular and efficient form of biometric authentication used in many software applications. One drawback of this technique is that it is prone to face spoofing attacks, where an impostor can gain access to the system by presenting a photograph of a valid user to the sensor. Thus, face liveness detection is a necessary step before granting authentication to the user. In this paper, we develop deep architectures for face liveness detection that use a combination of texture analysis and a convolutional neural network (CNN) to classify the captured image as real or fake. Our development greatly improves upon a recent approach that applies nonlinear diffusion based on an additive operator splitting scheme and a tridiagonal matrix block-solver algorithm to the image, which enhances the edges and surface texture in the real image. We then feed the diffused image to a deep CNN to identify the complex and deep features for classification. We obtained 100% accuracy on the NUAA Photograph Impostor dataset for face liveness detection using one of our enhanced architectures. Further, we gained insight into the enhancement of the face liveness detection architecture by evaluating three different deep architectures: a deep CNN, a residual network, and Inception network version 4. We evaluated the performance of each of these architectures on the NUAA dataset and present experimental results showing under what conditions each architecture is better suited for face liveness detection. While the residual network gave competitive results, Inception network version 4 produced the optimal accuracy of 100% in liveness detection (with nonlinear anisotropic diffused images and a smoothness parameter of 15). Our approach outperformed all current state-of-the-art methods.
    http://dx.doi.org/10.3390/e2104042

    Recognizing Visual Object Using Machine Learning Techniques

    Nowadays, Visual Object Recognition (VOR) has received growing interest from researchers and has become a very active area of research due to its vital applications, including handwriting recognition, disease classification, and face identification. However, extracting the relevant features that faithfully describe the image remains the main challenge for most existing VOR systems. This thesis is mainly dedicated to the development of two VOR systems, which are presented as two different contributions. As a first contribution, we propose a novel generic feature-independent pyramid multilevel (GFIPML) model for extracting features from images. GFIPML addresses the shortcomings of two existing schemes, namely multi-level (ML) and pyramid multi-level (PML), while retaining their advantages. As its name indicates, the proposed model can be used with any of the large variety of existing feature extraction methods. We applied GFIPML to the task of Arabic literal amount recognition. Indeed, this task is challenging due to the specific characteristics of Arabic handwriting. While most previous works have considered structural features that are sensitive to word deformations, we opt for Local Phase Quantization (LPQ) and Binarized Statistical Image Features (BSIF), as Arabic handwriting can be treated as a texture. To further enhance the recognition rates, we considered a multimodal system based on the combination of LPQ with multiple BSIF descriptors, each with a different filter size. As a second contribution, we propose a novel, simple yet efficient and fast TR-ICANet model for extracting features from unconstrained ear images. To cope with unconstrained conditions (e.g., scale and pose variations), we first normalize all images using a CNN. The normalized images are then fed to the TR-ICANet model, which uses ICA to learn filters. Binary hashing and block-wise histogramming are then used to compute the local features. At the final stage of TR-ICANet, we use an effective normalization method, namely tied-rank normalization, to eliminate the disparity within block-wise feature vectors. Furthermore, to improve the identification performance of the proposed system, we propose a softmax-average fusion of CNN-based feature extraction approaches with our TR-ICANet at the decision level, using an SVM classifier.
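    The binary hashing and block-wise histogramming stages of ICANet-style encodings can be sketched as follows. Here random response maps stand in for the ICA-learned filter outputs, and the tied-rank normalization step is omitted; this illustrates the encoding pattern, not the authors' implementation.

```python
import numpy as np

def binary_hash_encode(responses):
    """Combine a stack of filter-response maps into one integer code map.

    responses: array of shape (n_filters, H, W); each map is binarised
    at zero and the bits are packed into an integer per pixel, as in
    PCANet/ICANet-style encodings.
    """
    bits = (responses > 0).astype(np.int64)
    weights = 2 ** np.arange(bits.shape[0]).reshape(-1, 1, 1)
    return (bits * weights).sum(axis=0)

def blockwise_histograms(code_map, block=8, n_codes=None):
    """Histogram the hashed codes over non-overlapping blocks and concatenate."""
    if n_codes is None:
        n_codes = int(code_map.max()) + 1
    h, w = code_map.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            blk = code_map[r:r + block, c:c + block]
            feats.append(np.bincount(blk.ravel(), minlength=n_codes))
    return np.concatenate(feats)
```

With n filters there are 2^n possible codes per pixel, so the feature length is (number of blocks) x 2^n.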

    No intruders - securing face biometric systems from spoofing attacks

    The use of face verification systems as a primary source of authentication has become very common over the past few years, and better and more reliable face recognition systems keep coming into existence. Despite these advances, many open challenges remain in this domain. One practical challenge is to secure face biometric systems from intruder attacks, where an unauthorized person tries to gain access by presenting counterfeit evidence to the face biometric system. A face biometric system with only a single 2-D camera cannot tell that it is facing an attack by an unauthorized person. The idea here is to propose a solution that can be easily integrated into existing systems without any additional hardware deployment. The detection of impostor attempts is still an open research problem, as ever more sophisticated and advanced spoofing attempts come into play. In this thesis, the problem of securing biometric systems from these unauthorized or spoofing attacks is addressed. Moreover, an independent multi-view face detection framework is also proposed. We propose three different countermeasures that can detect these impostor attempts and can be easily integrated into existing systems; the proposed solutions can run in parallel with the face recognition module. Mainly, these countermeasures are designed to counter digital photo, printed photo, and dynamic video attacks. To exploit the characteristics of these attacks, we used a large set of features in the proposed solutions, namely local binary patterns, the gray-level co-occurrence matrix, Gabor wavelet features, space-time autocorrelation of gradients, and image quality based features. We further performed extensive evaluations of these approaches on two different datasets. A Support Vector Machine (SVM) with a linear kernel and Partial Least Squares Regression (PLS) are used as classifiers. The experimental results show improvements over the current state-of-the-art reference techniques under the same attack categories.
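    Of the feature families listed above, the gray-level co-occurrence matrix (GLCM) is the easiest to sketch. The minimal version below computes a single-offset normalised GLCM and one Haralick statistic (contrast); a real spoofing countermeasure would combine several offsets and statistics with the other feature sets.

```python
import numpy as np

def glcm(img, dr=0, dc=1, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dr, dc).

    img is assumed already quantised to integers in [0, levels).
    Entry (i, j) counts how often gray level i co-occurs with gray
    level j at the given offset; the matrix is then normalised.
    """
    h, w = img.shape
    mat = np.zeros((levels, levels), dtype=np.float64)
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                mat[img[r, c], img[r2, c2]] += 1
    return mat / mat.sum()

def glcm_contrast(mat):
    """Haralick contrast: (i - j)^2 weighted by the co-occurrence probability."""
    i, j = np.indices(mat.shape)
    return ((i - j) ** 2 * mat).sum()
```

The intuition for spoofing detection is that reprinted or replayed faces alter the micro-texture statistics that such co-occurrence measures capture.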

    Ear Biometrics: A Comprehensive Study of Taxonomy, Detection, and Recognition Methods

    Due to the recent challenges in access control, surveillance, and security, there is an increased need for efficient human authentication solutions. Ear recognition is an appealing choice for identifying individuals in controlled or challenging environments. The outer part of the ear exhibits highly discriminative information across individuals and has been shown to be robust for recognition. In addition, the data acquisition procedure is contactless, non-intrusive, and covert. This work focuses on using ear images for human authentication in the visible and thermal spectrums. We perform a systematic study of ear features and propose a taxonomy for them. We also investigate which parts of the head's side view provide distinctive identity cues. We then study the different modules of the ear recognition system. First, we propose an ear detection system that uses deep learning models. Second, we compare machine learning methods to establish the baseline ear recognition performance of traditional systems. Third, we explore convolutional neural networks for ear recognition and the optimal learning-process settings. Fourth, we systematically evaluate performance in the presence of pose variation and various image artifacts, which commonly occur in real-life recognition applications, to assess the robustness of the proposed ear recognition models. Additionally, we design an efficient ear image quality assessment tool to guide the ear recognition system. Finally, we extend our work to ear recognition in the long-wave infrared domain.
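    The abstract does not specify the criteria behind the ear image quality assessment tool, so the following is purely an assumed example, not the thesis's method: a common sharpness proxy, the variance of the Laplacian, illustrates the kind of per-image score such a tool might feed to the recognition pipeline.

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness proxy: variance of a 4-neighbour Laplacian response.

    Blurred (low-quality) images give low values; sharp, detailed images
    give high values. This is only one plausible ingredient of a
    quality-assessment tool, used here for illustration.
    """
    u = img.astype(np.float64)
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return lap.var()
```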

    Face recognition using statistical adapted local binary patterns.

    Biometrics is the study of methods for recognizing humans based on their behavioral and physical characteristics or traits. Face recognition is one of the biometric modalities that has received a great amount of attention from researchers during the past few decades because of its potential applications in a variety of security domains. Face recognition, however, is not only concerned with recognizing human faces, but also with recognizing faces of non-biological entities, or avatars. The need for secure and affordable virtual worlds is attracting the attention of many researchers who seek fast, automatic, and reliable ways to identify virtual worlds' avatars. In this work, I propose new techniques for recognizing avatar faces, which can also be applied to recognizing human faces. The proposed methods are based mainly on a well-known and efficient local texture descriptor, the Local Binary Pattern (LBP). I apply different versions of LBP, such as Hierarchical Multi-scale Local Binary Patterns and the Adaptive Local Binary Pattern with Directional Statistical Features, in the wavelet space and discuss the effect of this application on the performance of each LBP version. In addition, I use a new version of LBP called the Local Difference Pattern (LDP) with other well-known descriptors and classifiers to differentiate between human and avatar face images. The original LBP achieves a high recognition rate if the tested images are clean, but its performance degrades if the images are corrupted by noise. To deal with this problem, I propose a new definition of the original LBP in which the descriptor does not threshold all the neighborhood pixels against the central pixel value. Instead, a weight is computed for each pixel in the neighborhood, a new value is calculated for each pixel, and then simple statistical operations are used to compute the new threshold, which changes automatically based on the pixel values. This threshold can be applied with the original LBP or any other version of LBP, and can be extended to work with the Local Ternary Pattern (LTP) or any version of LTP, to produce different versions of LTP for recognizing noisy avatar and human face images.
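    The idea of replacing the fixed central-pixel threshold with a statistically derived one can be sketched as follows. The exact weighting scheme of the thesis is not reproduced; the adaptive variant below simply uses the neighbourhood mean as the threshold, to show how the same code-building machinery accommodates a different statistic.

```python
import numpy as np

def lbp_code(patch, threshold=None):
    """LBP code of a 3x3 patch.

    With threshold=None this is the original LBP (compare the eight
    neighbours against the central pixel); passing a statistic such as
    the neighbourhood mean gives an adaptive variant in the spirit of
    the thesis, whose exact weighting scheme is not reproduced here.
    """
    center = patch[1, 1]
    t = center if threshold is None else threshold
    # clockwise neighbour order starting at the top-left pixel
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= t:
            code |= 1 << bit
    return code

def adaptive_lbp_code(patch):
    """Adaptive variant: threshold on the mean of the 3x3 neighbourhood."""
    return lbp_code(patch, threshold=patch.mean())
```

Because the mean shifts with additive noise while a single central pixel does not, a statistic-based threshold tends to produce more stable codes on noisy images, which is the motivation given in the abstract.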

    Skin Texture as a Source of Biometric Information

    Traditional face recognition systems have achieved remarkable performance when the whole face image is available. However, recognising people from a partial view of their face is a challenging task. Face recognition performance may also be degraded by low image resolution. These limitations can restrict the practicality of such systems in real-world scenarios such as surveillance and forensic applications. There is therefore a need to identify people from whatever information is available, and one possible approach is to use the texture information from the available facial skin regions for biometric identification. This thesis presents the design, implementation, and experimental evaluation of an automated skin-based biometric framework. The proposed system exploits the skin information from facial regions for person recognition, and is applicable where only a partial view of a face is captured by imaging devices. The system automatically detects the regions of interest using a set of facial landmarks. Four regions were investigated in this study: forehead, right cheek, left cheek, and chin. A skin purity assessment scheme determines whether a region of interest contains enough skin pixels for biometric analysis. Texture features were extracted from non-overlapping sub-regions and categorised using a number of classification schemes. To further improve the reliability of the system, the study also investigated techniques to deal with face images acquired at resolutions different from those available at enrolment, or with sub-regions that are themselves partially occluded. The study also presents an adaptive scheme for exploiting the available information from corrupted regions of interest. Extensive experiments were conducted using publicly available databases to evaluate both the performance of the prototype system and the adaptive framework under different operational conditions, such as the level of occlusion and mixtures of different-resolution skin images. The results suggest that skin information can provide useful discriminative characteristics for individual identification, and comparison with state-of-the-art methods shows that the proposed system achieves promising performance.
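    The skin purity assessment step can be illustrated with a simple sketch that scores the fraction of skin-coloured pixels in a region of interest. The RGB rule used here is a widely cited heuristic (R > 95, G > 40, B > 20, R > G, R > B, R - min(G, B) > 15), not necessarily the scheme used in the thesis, and the 0.7 cut-off is an assumed parameter.

```python
import numpy as np

def skin_purity(region_rgb, min_fraction=0.7):
    """Decide whether a region of interest has enough skin pixels.

    region_rgb: (H, W, 3) array of RGB values in [0, 255].
    Returns the fraction of pixels classified as skin and whether it
    meets the minimum threshold for texture analysis to proceed.
    """
    r = region_rgb[..., 0].astype(np.int64)
    g = region_rgb[..., 1].astype(np.int64)
    b = region_rgb[..., 2].astype(np.int64)
    # heuristic skin rule; illustrative, not the thesis's actual scheme
    skin = ((r > 95) & (g > 40) & (b > 20)
            & (r > g) & (r > b)
            & (r - np.minimum(g, b) > 15))
    fraction = skin.mean()
    return fraction, fraction >= min_fraction
```

Gating each landmark-defined region this way prevents hair, background, or occluding objects from contaminating the texture features.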