6 research outputs found

    Active illumination and appearance model for face alignment


    Robust Face Alignment for Illumination and Pose Invariant Face Recognition

    In building a face recognition system for real-life scenarios, one usually faces the problem of selecting a feature space and preprocessing methods, such as alignment, under varying illumination conditions and poses. In this study, we developed a robust face alignment approach based on the Active Appearance Model (AAM) by inserting an illumination normalization module into the standard AAM search procedure and by inserting different poses of the same identity into the training set. The modified AAM search can handle both illumination and pose variations in the same epoch, and hence converges better in both the point-to-point and point-to-curve senses. We also investigate how face recognition performance is affected by the choice of feature space as well as by the proposed alignment method. The experimental results show that the combined pose alignment and illumination normalization methods increase the recognition rates considerably for all feature spaces.
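    As a hedged illustration only (not the authors' code), the sketch below shows where an illumination normalization step could be inserted into a standard AAM search iteration. All helper names (warp_to_mean_shape, normalize_illumination, model.R, and so on) are hypothetical placeholders, not functions from any specific library.

```python
# Minimal sketch, under assumed model/helper interfaces, of an AAM search
# loop with an illumination normalization module inserted before the
# residual computation.
import numpy as np

def aam_search(image, model, params, n_iters=30, step=1.0):
    """Fit AAM parameters to `image`, normalizing illumination each iteration."""
    p = params.copy()
    for _ in range(n_iters):
        # 1. Warp the image region under the current shape into the
        #    shape-free (mean-shape) texture frame.
        texture = warp_to_mean_shape(image, model.shape_from_params(p))

        # 2. Inserted module: photometric normalization of the sampled
        #    texture (e.g. zero-mean/unit-variance scaling), so the
        #    residual is less sensitive to lighting changes.
        texture = normalize_illumination(texture)

        # 3. Residual between the normalized texture and the model's
        #    texture reconstruction for the current parameters.
        residual = texture - model.reconstruct_texture(p)

        # 4. Standard AAM update: a precomputed regression matrix R maps
        #    the residual to a parameter update.
        delta_p = model.R @ residual
        p = p - step * delta_p

        if np.linalg.norm(delta_p) < 1e-6:
            break
    return p
```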

    Illumination Processing in Face Recognition


    Human Face Recognition

    Face recognition, as the main biometric used by human beings, has become increasingly popular over the last twenty years. Automatic recognition of human faces has many commercial and security applications in identity validation and verification, and has been one of the hottest topics in image processing and pattern recognition since 1990. The availability of feasible technologies, as well as the increasing demand for reliable security systems in today’s world, has motivated many researchers to develop new methods for face recognition. In automatic face recognition we wish to either identify or verify one or more persons in still or video images of a scene by means of a stored database of faces. One of the important features of face recognition is its non-intrusive and non-contact property, which distinguishes it from other biometrics such as iris or fingerprint recognition that require the subject’s participation. During the last two decades several face recognition algorithms and systems have been proposed and some major advances have been achieved. As a result, the performance of face recognition systems under controlled conditions has now reached a satisfactory level. These systems, however, face challenges in environments with variations in illumination, pose, expression, etc. The objective of this research is to design a reliable automated face recognition system that is robust under varying conditions of noise level, illumination and occlusion. A new method for illumination-invariant feature extraction based on the illumination-reflectance model is proposed which is computationally efficient and does not require any prior information about the face model or the illumination. A weighted voting scheme is also proposed to enhance performance under illumination variations and to compensate for occlusions. The proposed method uses mutual information and entropy of the images to generate different weights for a group of ensemble classifiers based on the input image quality. The method yields outstanding results by reducing the effect of both illumination and occlusion variations in the input face images.
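    For illustration, a minimal sketch of one common way to extract roughly illumination-invariant features under the illumination-reflectance model I(x, y) = R(x, y) · L(x, y): estimate the slowly varying illumination with a low-pass filter in the log domain and keep the reflectance-like residual. The thesis' exact formulation, and its entropy/mutual-information weighting scheme, may well differ; function and parameter names below are assumptions.

```python
# Hedged sketch: reflectance-style, roughly illumination-invariant features.
# Assumes grayscale input; sigma controls how smooth the illumination
# estimate is.
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_invariant_features(face, sigma=8.0, eps=1e-6):
    """Return a reflectance-like image that is less sensitive to lighting."""
    img = face.astype(np.float64) + eps
    log_img = np.log(img)                        # I = R * L  ->  log R + log L
    log_illum = gaussian_filter(log_img, sigma)  # low-pass estimate of log L
    log_reflect = log_img - log_illum            # approximate log R
    # Normalize to zero mean / unit variance so features are comparable
    # across images and classifiers.
    return (log_reflect - log_reflect.mean()) / (log_reflect.std() + eps)
```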

    Feature extraction and fusion techniques for patch-based face recognition

    Face recognition is one of the most addressed pattern recognition problems in recent studies due to its importance in security applications and human-computer interfaces. After decades of research on the face recognition problem, feasible technologies are becoming available. However, there is still room for improvement in challenging cases, so the face recognition problem continues to attract researchers from the image processing, pattern recognition and computer vision disciplines. Although other types of personal identification exist, such as fingerprint recognition and retinal/iris scans, all of these methods require the collaboration of the subject. Face recognition differs from these systems in that facial information can be acquired without the collaboration or knowledge of the subject of interest. Feature extraction is a crucial issue in the face recognition problem, and the performance of face recognition systems depends on the reliability of the extracted features. Previously, several dimensionality reduction methods were proposed for feature extraction in the face recognition problem. In this thesis, in addition to the dimensionality reduction methods used previously for the face recognition problem, we have implemented recently proposed dimensionality reduction methods in a patch-based face recognition system. Patch-based face recognition is a recent approach that analyzes face images locally instead of using a global representation, in order to reduce the effects of illumination changes and partial occlusions. Feature fusion and decision fusion are two distinct ways to make use of the extracted local features. Apart from the well-known decision fusion methods, a novel approach for calculating weights for the weighted sum rule is proposed in this thesis. On two separate databases, we have conducted both feature fusion and decision fusion experiments and presented recognition accuracies for different dimensionality reduction and normalization methods. Improvements in recognition accuracy are shown and the superiority of decision fusion over feature fusion is advocated. Especially on the more challenging AR database, we obtain significantly better results using decision fusion compared to conventional methods and feature fusion methods.
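    As a hedged sketch of the generic weighted sum rule used in decision fusion (not the thesis' specific weight calculation, which is not reproduced here), the snippet below combines per-patch classifier scores with per-patch weights and picks the class with the highest fused score. Array shapes and names are illustrative assumptions.

```python
# Decision fusion with a weighted sum rule over patch-based classifiers.
import numpy as np

def weighted_sum_fusion(patch_scores, weights):
    """
    patch_scores : (n_patches, n_classes) per-patch class scores,
                   e.g. normalized similarities from each local classifier.
    weights      : (n_patches,) one weight per patch.
    Returns the index of the predicted class.
    """
    scores = np.asarray(patch_scores, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    w = w / (w.sum() + 1e-12)                  # normalize the patch weights
    fused = (w[:, None] * scores).sum(axis=0)  # weighted sum rule
    return int(np.argmax(fused))

# Toy usage: 3 patches, 4 identities; the second patch is trusted most.
scores = [[0.2, 0.5, 0.2, 0.1],
          [0.1, 0.6, 0.2, 0.1],
          [0.4, 0.3, 0.2, 0.1]]
print(weighted_sum_fusion(scores, weights=[1.0, 2.0, 0.5]))  # -> 1
```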