
    Facial analysis in video: detection and recognition

    Biometric authentication systems automatically identify or verify individuals using physiological (e.g., face, fingerprint, hand geometry, retina scan) or behavioral (e.g., speaking pattern, signature, keystroke dynamics) characteristics. Among these biometrics, facial patterns have the major advantage of being the least intrusive, so automatic face recognition systems have great potential in a wide spectrum of application areas. Focusing on facial analysis, this dissertation presents a face detection method and numerous feature extraction methods for face recognition. Concerning face detection, a video-based frontal face detection method has been developed that uses motion analysis and color information to derive regions of interest, and a distribution-based distance (DBD) measure together with a support vector machine (SVM) for classification. When applied to 92 still images containing 282 faces, this method achieves a 98.2% face detection rate with two false detections, a performance comparable to state-of-the-art face detection methods; when applied to video streams, it detects faces reliably and efficiently. Regarding face recognition, extensive assessments of face recognition performance in twelve color spaces have been performed, and a color feature extraction method defined by color component images across different color spaces is shown to improve the baseline performance on the Face Recognition Grand Challenge (FRGC) problems. The experimental results show that some color configurations, such as YV in the YUV color space and YI in the YIQ color space, improve face recognition performance. Building on these improved results, a novel feature extraction method combining genetic algorithms (GAs) and the Fisher linear discriminant (FLD) is designed to derive the optimal discriminating features that lead to an effective image representation for face recognition. This method noticeably improves the FRGC ver1.0 Experiment 4 baseline recognition rate from 37% to 73%, and significantly elevates the FRGC ver2.0 Experiment 4 baseline verification rate from 12% to 69%. Finally, four two-dimensional (2D) convolution filters are derived for feature extraction, and a 2D+3D face recognition system using both 2D and 3D imaging modalities is designed to address the FRGC problems; it improves the FRGC ver2.0 Experiment 3 baseline performance from 54% to 72%.
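
    A minimal sketch of the color-configuration idea described above: convert an RGB face image to the YUV or YIQ color space and stack selected component images (e.g., YV or YI) as the input representation for a recognizer. The conversion matrices are the standard BT.601 definitions; the function and variable names are illustrative, not the dissertation's actual implementation.

```python
# Color-space conversion and component selection for face recognition input.
import numpy as np

# Standard BT.601 RGB -> YUV and RGB -> YIQ transforms (rows: output channels).
RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                       [-0.147, -0.289,  0.436],
                       [ 0.615, -0.515, -0.100]])

RGB_TO_YIQ = np.array([[ 0.299,  0.587,  0.114],
                       [ 0.596, -0.274, -0.322],
                       [ 0.211, -0.523,  0.312]])

def color_configuration(rgb, matrix, channels):
    """rgb: H x W x 3 array in [0, 1]. Applies the 3x3 transform per pixel and
    returns the selected component images stacked along the last axis
    (e.g., channels=(0, 2) gives a YV configuration in the YUV space)."""
    converted = rgb @ matrix.T
    return converted[..., list(channels)]

# Example: build the YV configuration from a (stand-in) cropped face image.
face = np.random.rand(128, 128, 3)
yv = color_configuration(face, RGB_TO_YUV, channels=(0, 2))
```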

    A study of eigenvector based face verification in static images

    As one of the most successful applications of image analysis and understanding, face recognition has received significant attention, especially during the past few years. There are at least two reasons for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. The problem of machine recognition of human faces continues to attract researchers from disciplines such as image processing, pattern recognition, neural networks, computer vision, computer graphics, and psychology. The strong need for user-friendly systems that can secure our assets and protect our privacy without losing our identity in a sea of numbers is obvious. Although very reliable methods of biometric personal identification exist, for example, fingerprint analysis and retinal or iris scans, these methods depend on the cooperation of the participants, whereas a personal identification system based on analysis of frontal or profile images of the face is often effective without the participant's cooperation or knowledge. The three categories of facial analysis are face detection, face identification, and face verification. Face detection means extracting the face from the total image of the person. In face identification, the input to the system is an unknown face, and the system reports back the identity determined from a database of known individuals. In face verification, the system confirms or rejects the claimed identity of the input face. This thesis addresses face verification in static images, i.e., images that are not in motion. The eigenvector-based face verification algorithm verifies faces in static images using eigenvectors and the neural network backpropagation algorithm; the eigenvectors capture the geometrical information of the faces. First, we take 10 images of each person at the same angle with different expressions and apply principal component analysis. With an image dimension of 48 x 48, we obtain 48 eigenvalues, of which we keep only the eigenvectors corresponding to the 10 largest. These eigenvectors are given as input to the neural network, which is trained with the backpropagation algorithm. After training, an image taken at a different angle is presented for testing, and we measure the verification rate (the rate at which legitimate users are granted access) and the false acceptance rate (the rate at which impostors are granted access). Because the neural network takes considerable time to train, the proposed algorithm instead uses a modified backpropagation algorithm in which a momentum term is added to decrease the training time. With the modified backpropagation algorithm, the verification rate also increases slightly and the false acceptance rate decreases slightly.
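
    A minimal sketch of the pipeline described above, assuming flattened 48 x 48 grayscale faces: project the images onto the eigenvectors of the 10 largest eigenvalues (PCA), then train a one-hidden-layer network whose weight updates carry a momentum term, as in the modified backpropagation the abstract describes. Network sizes and hyperparameters are illustrative, not the thesis settings.

```python
# Eigenface projection plus backpropagation with a momentum term.
import numpy as np

def top_eigenfaces(images, k=10):
    """images: N x 2304 matrix (48*48 flattened faces). Returns the mean face
    and the k principal eigenvectors (rows), computed via SVD."""
    mean = images.mean(axis=0)
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:k]

def train_mlp(x, y, hidden=16, lr=0.1, mu=0.9, epochs=500, seed=0):
    """x: N x k eigenface projections; y: N x C one-hot identity targets.
    Backpropagation whose weight updates are smoothed by momentum mu."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.1, size=(x.shape[1], hidden))
    w2 = rng.normal(scale=0.1, size=(hidden, y.shape[1]))
    v1, v2 = np.zeros_like(w1), np.zeros_like(w2)   # momentum buffers
    for _ in range(epochs):
        h = np.tanh(x @ w1)                          # forward pass
        out = 1.0 / (1.0 + np.exp(-(h @ w2)))        # sigmoid outputs
        d_out = (out - y) * out * (1 - out)          # backprop through sigmoid
        d_h = (d_out @ w2.T) * (1 - h ** 2)          # backprop through tanh
        v2 = mu * v2 - lr * (h.T @ d_out)            # momentum-smoothed updates
        v1 = mu * v1 - lr * (x.T @ d_h)
        w2 += v2
        w1 += v1
    return w1, w2

# Usage: mean, eig = top_eigenfaces(train_images, k=10)
#        features = (train_images - mean) @ eig.T    # inputs to train_mlp
```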

    Fast Landmark Localization with 3D Component Reconstruction and CNN for Cross-Pose Recognition

    Two approaches are proposed for cross-pose face recognition: one is based on the 3D reconstruction of facial components and the other on a deep Convolutional Neural Network (CNN). Unlike most 3D approaches, which consider holistic faces, the proposed approach considers 3D facial components. It segments a 2D gallery face into components, reconstructs the 3D surface for each component, and recognizes a probe face by component features. The segmentation is based on landmarks located by a hierarchical algorithm that combines the Faster R-CNN for face detection with the Reduced Tree Structured Model for landmark localization. The core part of the CNN-based approach is a revised VGG network. We study the performance with different settings of the training set, including synthesized data from 3D reconstruction, real-life data from an in-the-wild database, and both types of data combined. We also investigate the performance of the network when it is employed as a classifier or designed as a feature extractor. The two recognition approaches and the fast landmark localization are evaluated in extensive experiments and compared to state-of-the-art methods to demonstrate their efficacy.
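
    A hedged sketch of the two roles studied for the CNN above: the same VGG backbone used either as a closed-set classifier over the training identities or as a feature extractor whose penultimate activations serve as the face descriptor. The stock torchvision VGG-16 stands in for the paper's revised VGG network; all names and sizes here are illustrative.

```python
# VGG backbone as classifier vs. feature extractor (PyTorch / torchvision).
import torch
import torch.nn as nn
from torchvision import models

def vgg_as_classifier(num_classes):
    net = models.vgg16(weights=None)
    # Retarget the final fully connected layer at the training identities.
    net.classifier[6] = nn.Linear(4096, num_classes)
    return net

def vgg_as_feature_extractor():
    net = models.vgg16(weights=None)
    # Drop the last linear layer so the forward pass ends at the 4096-d
    # penultimate activations, used here as the face descriptor.
    net.classifier = nn.Sequential(*list(net.classifier.children())[:-1])
    return net

# Matching two faces with the extractor: cosine similarity of descriptors.
extractor = vgg_as_feature_extractor().eval()
with torch.no_grad():
    a, b = torch.randn(2, 1, 3, 224, 224)   # stand-ins for aligned face crops
    score = torch.nn.functional.cosine_similarity(extractor(a),
                                                  extractor(b)).item()
```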

    Unconstrained Face Verification using Deep CNN Features

    In this paper, we present an algorithm for unconstrained face verification based on deep convolutional features and evaluate it on the newly released IARPA Janus Benchmark A (IJB-A) dataset. The IJB-A dataset includes real-world unconstrained faces from 500 subjects with full pose and illumination variations, which makes it much harder than the traditional Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) datasets. The deep convolutional neural network (DCNN) is trained on the CASIA-WebFace dataset. Extensive experiments on the IJB-A dataset are provided.
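
    IJB-A verification is template-based: each identity is represented by a set of images and video frames rather than a single face. A common baseline for scoring, sketched here under assumed feature shapes rather than as this paper's exact protocol, pools the per-image DCNN features of a template into one vector and compares a pair of templates by cosine similarity.

```python
# Template-based verification with pooled DCNN features. The feature
# dimension, pooling scheme, and threshold are assumptions for illustration.
import numpy as np

def pool_template(features):
    """features: M x D array of per-image DCNN features for one template.
    Average-pool across the template's media, then L2-normalize."""
    pooled = features.mean(axis=0)
    return pooled / np.linalg.norm(pooled)

def verify(template_a, template_b, threshold=0.5):
    """Scores a template pair by cosine similarity (dot product of unit
    vectors) and applies an accept/reject threshold."""
    score = float(pool_template(template_a) @ pool_template(template_b))
    return score, score >= threshold

# Example: two templates with 3 and 5 face images of 256-d features each.
rng = np.random.default_rng(0)
score, accepted = verify(rng.normal(size=(3, 256)), rng.normal(size=(5, 256)))
```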