119 research outputs found

    2D Face Recognition System Based on Selected Gabor Filters and Linear Discriminant Analysis LDA

    Full text link
    We present a new approach to face recognition. The method extracts 2D face image features using a subset of non-correlated, orthogonal Gabor filters instead of the whole Gabor filter bank, and then compresses the resulting feature vector using Linear Discriminant Analysis (LDA). The face image is first enhanced with a multi-stage image processing technique that normalizes it and compensates for illumination variation. Experimental results show that the proposed system achieves both dimension reduction and good recognition performance when compared to the complete Gabor filter bank. The system has been tested on the CASIA, ORL, and Cropped YaleB 2D face image databases and achieved an average recognition rate of 98.9%.
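
    The pipeline described above (a reduced Gabor filter bank followed by LDA compression) can be sketched as follows. The filter parameters, the particular subset of filters kept, and the use of OpenCV and scikit-learn are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: features from a reduced Gabor filter bank, compressed with LDA.
# Filter parameters and the "selected" subset are illustrative assumptions.
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# A small, hand-picked subset of orientations/wavelengths instead of a full filter bank.
SELECTED_PARAMS = [(theta, lam) for theta in (0, np.pi / 4, np.pi / 2) for lam in (4, 8)]

def gabor_features(gray_img):
    """Convolve with each selected Gabor filter and pool the magnitude responses."""
    feats = []
    for theta, lam in SELECTED_PARAMS:
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=lam, gamma=0.5, psi=0)
        response = cv2.filter2D(gray_img.astype(np.float32), cv2.CV_32F, kernel)
        # Downsample each response map so the raw feature vector stays manageable.
        pooled = cv2.resize(np.abs(response), (16, 16)).ravel()
        feats.append(pooled)
    return np.concatenate(feats)

def train_recognizer(images, labels):
    """Fit LDA on the Gabor features; LDA both compresses and discriminates."""
    X = np.stack([gabor_features(img) for img in images])
    lda = LinearDiscriminantAnalysis()
    lda.fit(X, labels)
    return lda
```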

    Learning as a Nonlinear Line of Attraction for Pattern Association, Classification and Recognition

    Get PDF
    Development of a mathematical model for learning a nonlinear line of attraction is presented in this dissertation, in contrast to the conventional recurrent neural network model in which memory is stored in an attractive fixed point at a discrete location in state space. A nonlinear line of attraction is the encapsulation of attractive fixed points scattered in state space as an attractive nonlinear line, describing patterns with similar characteristics as a family of patterns. It is usually of prime importance to guarantee the convergence of the dynamics of the recurrent network for associative learning and recall. We propose to alter this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image. The design of the nonlinear line attractor network to operate between stable and unstable states is the second contribution of this dissertation research. These criteria can be used to circumvent the plasticity-stability dilemma by using the unstable state as an indicator to create a new line for an unfamiliar pattern. This novel learning strategy utilizes the stability (convergence) and instability (divergence) criteria of the designed dynamics to induce self-organizing behavior, and the self-organizing behavior of the nonlinear line attractor model can manifest complex dynamics in an unsupervised manner. The third contribution of this dissertation is the introduction of the concept of a manifold of color perception. The fourth contribution is the development of a nonlinear dimensionality reduction technique that embeds a set of related observations into a low-dimensional space using the memory matrices learned by the nonlinear line attractor network. Development of a system for affective state computation is also presented. This system is capable of extracting the user's mental state in real time using a low-cost computer, and it has been successfully interfaced with an advanced learning environment for human-computer interaction.
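
    The convergence/divergence idea can be illustrated with a minimal, assumed sketch: each pattern dimension is predicted from the others by a low-order polynomial fitted to the training patterns, recall iterates those predictions, and failure to converge is treated as "unfamiliar". This is one illustrative reading of line-attractor dynamics, not the dissertation's exact formulation.

```python
# Minimal sketch of a nonlinear line attractor: each dimension is predicted from the
# others by a low-order polynomial fitted over training patterns; recall iterates the
# predictions, and divergence (non-convergence) flags an unfamiliar pattern.
import numpy as np

def learn_line_attractor(patterns, degree=2):
    """patterns: (num_patterns, dim) array. Returns polynomial coeffs for every (i, j) pair."""
    P = np.asarray(patterns, dtype=float)
    dim = P.shape[1]
    coeffs = np.empty((dim, dim, degree + 1))
    for i in range(dim):
        for j in range(dim):
            # Least-squares polynomial predicting component i from component j.
            coeffs[i, j] = np.polyfit(P[:, j], P[:, i], degree)
    return coeffs

def recall(coeffs, x0, steps=50, tol=1e-4):
    """Iterate the attractor dynamics; report whether the trajectory converged."""
    x = np.asarray(x0, dtype=float)
    dim = x.size
    for _ in range(steps):
        x_next = np.array([
            np.mean([np.polyval(coeffs[i, j], x[j]) for j in range(dim)])
            for i in range(dim)
        ])
        if np.linalg.norm(x_next - x) < tol:
            return x_next, True      # converged: pattern belongs to a learned family
        x = x_next
    return x, False                  # no convergence: treat the input as unfamiliar
```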

    Biometric Authentication System on Mobile Personal Devices

    Get PDF
    We propose a secure, robust, and low-cost biometric authentication system on the mobile personal device for the personal network. The system consists of five key modules: 1) face detection; 2) face registration; 3) illumination normalization; 4) face verification; and 5) information fusion. For the complicated face authentication task on devices with limited resources, the emphasis is largely on the reliability and applicability of the system, and both theoretical and practical considerations are taken into account. The final system is able to achieve an equal error rate of 2% under challenging testing protocols. The low hardware and software cost makes the system well suited to a wide range of security applications.
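
    The reported 2% equal error rate is the operating point where the false accept and false reject rates coincide. A small, assumed sketch for estimating it from genuine and impostor verification scores (the score convention and function names are illustrative):

```python
# Illustrative sketch: estimating the equal error rate (EER) from verification scores.
# Higher scores are assumed to mean "more likely the same person".
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep thresholds and return the point where FAR and FRR are closest."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer, best_thr = np.inf, 1.0, thresholds[0]
    for thr in thresholds:
        far = np.mean(impostor >= thr)   # impostors wrongly accepted
        frr = np.mean(genuine < thr)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer, best_thr = abs(far - frr), (far + frr) / 2, thr
    return eer, best_thr
```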

    Race classification using gaussian-based weight K-nn algorithm for face recognition

    Get PDF
    One of the greatest challenges for facial recognition systems is recognizing faces across different races and illuminations. Chromaticity, an essential factor in facial recognition, reflects the intensity of the color in a pixel and can vary greatly depending on the lighting conditions. The race classification scheme proposed in this paper, a Gaussian-based weighted K-Nearest Neighbor classifier, is very sensitive to illumination intensity. The main idea is first to identify the minority class instances in the training data and then to generalize them with a Gaussian function as the concept for the minority class, combining the K-NN algorithm with the Gaussian formula for race classification. Image processing is divided into two phases. The first is a preprocessing phase comprising three steps: auto contrast balance, noise reduction, and auto color balancing. The second phase is face processing, which contains six steps: face detection, illumination normalization, feature extraction, skin segmentation, race classification, and face recognition. Two datasets are used: the FERET dataset, whose images involve illumination variations, and the Caltech dataset, whose images contain noise.
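
    A minimal sketch of a Gaussian-based weighted K-NN classifier, in which each neighbor's vote is weighted by a Gaussian of its distance to the query; the bandwidth, distance metric, and feature representation are assumptions, not the paper's exact settings.

```python
# Sketch of a Gaussian-weighted K-NN classifier: each of the K nearest neighbours
# votes with weight exp(-d^2 / (2*sigma^2)), so closer samples count more.
import numpy as np
from collections import defaultdict

def gaussian_weighted_knn(train_X, train_y, query, k=5, sigma=1.0):
    train_X = np.asarray(train_X, dtype=float)
    query = np.asarray(query, dtype=float)
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = defaultdict(float)
    for idx in nearest:
        weight = np.exp(-dists[idx] ** 2 / (2.0 * sigma ** 2))
        votes[train_y[idx]] += weight
    # The class with the largest accumulated Gaussian weight wins.
    return max(votes, key=votes.get)
```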

    Human Recognition from Video Sequences and Off-Angle Face Images Supported by Respiration Signatures

    Get PDF
    In this work, we study the problem of human identity recognition using human respiratory waveforms extracted from videos combined with component-based off-angle human facial images. Our proposed system is composed of (i) a physiology-based human clustering module and (ii) an identification module based on facial features (nose, mouth, etc.) fetched from face videos. In our proposed methodology we first passively extract an important vital sign (breath), cluster human subjects into nostril-motion vs. nostril non-motion groups, and then localize a set of facial features before applying feature extraction and matching. Our novel human identity recognition system is very robust, since it works well when dealing with breath signals and combinations of different facial components acquired under uncontrolled lighting conditions. This is achieved by using our proposed Motion Classification approach and Feature Clustering technique based on the breathing waveforms we produce. The contributions of this work are three-fold. First, we collected a set of different datasets on which we tested our proposed approach. Specifically, we considered six different types of facial components and their combinations to generate face-based video datasets representing two diverse data collection conditions, i.e. videos acquired with the head in a fully frontal position (baseline) and with the head looking up. Second, we propose a new way of passively measuring human breath from face videos and show nearly identical output against baseline breathing waveforms produced by an ADInstruments device. Third, we demonstrate good human recognition performance when using the proposed pre-processing procedure of Motion Classification and Feature Clustering, working on partial features of human faces. Our method achieves increased identification rates across all datasets used, and it obtains a significantly high identification rate (ranging from 96% to 100% when using a single facial feature or a combination of facial features), yielding an average improvement of 7% when compared to the baseline scenario. To the best of our knowledge, this is the first time that a biometric system composed of an important human vital sign (breath) fused with facial features has been built in such an efficient manner.
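
    One common way to passively estimate a breathing waveform from face video is to track average intensity in the nostril region across frames and band-pass the signal around typical respiration rates. The sketch below is an assumed illustration of that idea, not the authors' exact pipeline; the region-of-interest coordinates are hypothetical and would normally come from a landmark detector.

```python
# Assumed sketch: estimating a respiration waveform from the nostril region of a face video.
# The ROI coordinates are hypothetical; in practice they would come from a landmark detector.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def respiration_waveform(video_path, roi=(200, 260, 300, 360), fps=30.0):
    """roi = (y0, y1, x0, x1). Returns a band-passed mean-intensity signal."""
    y0, y1, x0, x1 = roi
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        signal.append(gray[y0:y1, x0:x1].mean())   # intensity varies subtly with each breath
    cap.release()
    signal = np.asarray(signal) - np.mean(signal)
    # Keep roughly 0.1-0.7 Hz, i.e. about 6-42 breaths per minute.
    b, a = butter(2, [0.1 / (fps / 2), 0.7 / (fps / 2)], btype="band")
    return filtfilt(b, a, signal)
```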

    Gender Classification from Facial Images

    Get PDF
    Gender classification based on facial images has received increased attention in the computer vision community. In this work, a comprehensive evaluation of state-of-the-art gender classification methods is carried out on publicly available databases and extended to real-life face images, where face detection and face normalization are essential for the success of the system. Next, the possibility of predicting gender from face images acquired in the near-infrared (NIR) spectrum is explored. In this regard, the following two questions are addressed: (a) can gender be predicted from NIR face images; and (b) can a gender predictor learned using visible (VIS) images operate successfully on NIR images and vice versa? The experimental results suggest that NIR face images do have some discriminatory information pertaining to gender, although the degree of discrimination is noticeably lower than that of VIS images. Further, the use of an illumination normalization routine may be essential for facilitating cross-spectral gender prediction. By formulating the problem of gender classification in the framework of both visible and near-infrared images, guidelines for performing gender classification in a real-world scenario are provided, along with the strengths and weaknesses of each methodology. Finally, the general problem of attribute classification is addressed, where attributes such as expression, age, and ethnicity are derived from a face image.
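
    The cross-spectral experiment (train on one spectrum, test on the other, optionally with illumination normalization first) can be sketched as follows; the choice of HOG features, CLAHE normalization, and a linear SVM here are assumptions for illustration, not the methods evaluated in the work.

```python
# Assumed sketch of a cross-spectral gender prediction experiment:
# train on one spectrum (e.g. VIS), test on the other (e.g. NIR),
# optionally applying an illumination normalization step first.
import cv2
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def normalize_illumination(gray_img):
    """Simple illumination normalization via CLAHE (one of many possible routines)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_img)

def hog_features(gray_img):
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    return hog.compute(cv2.resize(gray_img, (64, 64))).ravel()

def cross_spectral_gender(train_imgs, train_labels, test_imgs, test_labels, normalize=True):
    prep = (lambda im: hog_features(normalize_illumination(im))) if normalize else hog_features
    clf = LinearSVC()
    clf.fit(np.stack([prep(im) for im in train_imgs]), train_labels)
    preds = clf.predict(np.stack([prep(im) for im in test_imgs]))
    return accuracy_score(test_labels, preds)
```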

    Biologically Inspired Processing for Lighting Robust Face Recognition

    Get PDF
    ISBN 978-953-307-489-4, hard cover, 314 pages. No abstract.

    Facial Landmarks Detection and Expression Recognition in the Dark

    Get PDF
    Facial landmark detection has been widely adopted for body language analysis and facial identification tasks. A variety of facial landmark detectors have been proposed using different approaches, such as AAM, AdaBoost, LBF, and DPM. However, most detectors were trained and tested on high-resolution images captured in controlled environments. Recent studies have focused on robust landmark detectors and obtained increasingly good performance under different poses and lighting conditions, but how to perform facial landmark detection in extremely dark images remains an open question. Our goal is to build an application for landmark-based facial expression analysis in extremely dark environments. To address this problem, we explored different dark image enhancement methods to facilitate landmark detection, and we designed landmark correctness methods to evaluate landmark localization; this step guarantees the accuracy of expression recognition. We then analyzed feature extraction methods, such as HOG, polar coordinates, and landmark distances, as well as normalization methods for facial expression recognition. Compared with existing facial expression recognition systems, our system is more robust in dark environments and performs very well in detecting happiness and surprise.
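
    The enhance-then-detect idea can be sketched as follows; the gamma correction and CLAHE steps used here are illustrative assumptions, not necessarily the enhancement methods evaluated in the thesis.

```python
# Assumed sketch: enhance an extremely dark face image before running a landmark detector.
# Gamma correction brightens dark regions; CLAHE restores local contrast.
import cv2
import numpy as np

def enhance_dark_image(bgr_img, gamma=2.2):
    # Gamma correction via a lookup table (gamma > 1 brightens dark pixels here).
    table = np.array([(i / 255.0) ** (1.0 / gamma) * 255 for i in range(256)],
                     dtype=np.uint8)
    brightened = cv2.LUT(bgr_img, table)
    # Equalize luminance only, leaving colour ratios roughly intact.
    lab = cv2.cvtColor(brightened, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

# A pretrained landmark detector (e.g. an LBF or dlib model) would then be run on the
# enhanced image, and landmark distances or HOG features fed to the expression classifier.
```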

    Illumination Processing in Face Recognition

    Get PDF