35 research outputs found

    A new approach to face recognition using Curvelet Transform

    Multiresolution tools have been widely employed in face recognition. The Wavelet Transform is the best known of these tools and is widely used for identification of human faces. Of late, following the success of wavelets, a number of new multiresolution tools have been developed. The Curvelet Transform is a recent addition to that list. It has better directional selectivity and represents curved edges more effectively. These two properties make the curvelet transform a powerful tool for extracting edge information from facial images. Our work explores the possibilities of the curvelet transform for feature extraction from human faces, in order to introduce a new alternative approach to face recognition.
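
    As a rough illustration of the kind of pipeline this abstract describes, the sketch below matches faces by the distribution of directional edge energy. A real system would use an actual curvelet transform (for example the CurveLab FDCT); here directional_energies is only a crude gradient-orientation stand-in, so every name and parameter below is an assumption rather than the authors' method.

    # Crude directional-energy stand-in for curvelet subband features.
    # NOT a curvelet transform; it only lets the matching sketch run end to end.
    import numpy as np

    def directional_energies(image, n_orientations=8):
        """Bin gradient energy of a grayscale face image by edge orientation."""
        gy, gx = np.gradient(image.astype(np.float64))
        magnitude = np.hypot(gx, gy)
        angle = np.mod(np.arctan2(gy, gx), np.pi)        # orientation in [0, pi)
        bins = np.minimum((angle / np.pi * n_orientations).astype(int),
                          n_orientations - 1)
        return np.array([magnitude[bins == k].sum() for k in range(n_orientations)])

    def recognise(probe, gallery):
        """gallery: dict mapping identity -> enrolled feature vector."""
        f = directional_energies(probe)
        return min(gallery, key=lambda person: np.linalg.norm(f - gallery[person]))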

    A study of eigenvector based face verification in static images

    As one of the most successful applications of image analysis and understanding, face recognition has received significant attention, especially during the past few years. There are at least two reasons for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. The problem of machine recognition of human faces continues to attract researchers from disciplines such as image processing, pattern recognition, neural networks, computer vision, computer graphics, and psychology. The strong need for user-friendly systems that can secure our assets and protect our privacy without losing our identity in a sea of numbers is obvious. Although very reliable methods of biometric personal identification exist, for example fingerprint analysis and retinal or iris scans, these methods depend on the cooperation of the participants, whereas a personal identification system based on analysis of frontal or profile images of the face is often effective without the participant’s cooperation or knowledge. The three categories of face recognition are face detection, face identification and face verification. Face detection means extracting the face from the full image of the person. In face identification, the input to the system is an unknown face, and the system reports back the identity determined from a database of known individuals. In face verification, the system must confirm or reject the claimed identity of the input. This thesis addresses face verification in static images; here a static image means an image that is not in motion. The eigenvector-based face verification algorithm produces results on face verification in static images using eigenvectors and the neural network backpropagation algorithm. Eigenvectors are used to capture the geometrical information of the faces. First, 10 images of each person are taken at the same angle with different expressions, and principal component analysis is applied. With an image dimension of 48 x 48 we obtain 48 eigenvalues; of these, only the eigenvectors corresponding to the 10 largest eigenvalues are kept. These eigenvectors are given as input to the neural network for training, using the backpropagation algorithm. After training, an image taken at a different angle is presented for testing. We measure the verification rate (the rate at which legitimate users are granted access) and the false acceptance rate (the rate at which impostors are granted access). The neural network takes considerable time to train. The proposed algorithm therefore produces results on face verification in static images using eigenvectors and a modified backpropagation algorithm, in which a momentum term is added to decrease the training time. With the modified backpropagation algorithm, the verification rate also increases slightly and the false acceptance rate decreases slightly.
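
    To make the pipeline above concrete, here is a minimal sketch, assuming scikit-learn and NumPy: principal component analysis keeps the eigenvectors of the 10 largest eigenvalues of 48 x 48 face images, and a small network is trained by backpropagation with a momentum term (the modified backpropagation the abstract refers to). Data loading, the network shape and the acceptance threshold are assumptions, not the thesis implementation.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    def train_verifier(images, labels):
        """images: (n_samples, 48, 48) grayscale faces; labels: subject ids."""
        X = images.reshape(len(images), -1).astype(np.float64)   # flatten to vectors

        # Keep only the eigenvectors belonging to the 10 largest eigenvalues.
        pca = PCA(n_components=10)
        X_pca = pca.fit_transform(X)

        # Backpropagation with a momentum term (solver='sgd' enables momentum).
        net = MLPClassifier(hidden_layer_sizes=(20,), solver="sgd",
                            momentum=0.9, learning_rate_init=0.01,
                            max_iter=2000, random_state=0)
        net.fit(X_pca, labels)
        return pca, net

    def verify(pca, net, image, claimed_id, threshold=0.5):
        """Accept or reject the claimed identity of a probe image."""
        x = pca.transform(image.reshape(1, -1).astype(np.float64))
        prob = net.predict_proba(x)[0][list(net.classes_).index(claimed_id)]
        return prob >= threshold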

    Adaptive 3D facial action intensity estimation and emotion recognition

    Automatic recognition of facial emotion has been widely studied for various computer vision tasks (e.g. health monitoring, driver state surveillance and personalized learning). Most existing facial emotion recognition systems, however, either have not fully considered subject-independent dynamic features or were limited to 2D models, and thus are not robust enough for real-life recognition tasks with subject variation, head movement and illumination change. Moreover, there is also a lack of systematic research on the detection of newly arrived novel emotion classes. To address these challenges, we present a real-time 3D facial Action Unit (AU) intensity estimation and emotion recognition system. It automatically selects 16 motion-based facial feature sets using minimal-redundancy–maximal-relevance (mRMR) criterion based optimization and estimates the intensities of 16 diagnostic AUs using feedforward Neural Networks and Support Vector Regressors. We also propose a set of six novel adaptive ensemble classifiers for robust classification of the six basic emotions and for detection of newly arrived unseen novel emotion classes (emotions that are not included in the training set). Distance-based clustering and uncertainty measures of the base classifiers within each ensemble model are used to inform the novel class detection. Evaluated on the Bosphorus 3D database, the system achieved its best performance of 0.071 overall Mean Squared Error (MSE) for AU intensity estimation using Support Vector Regressors, and 92.2% average accuracy for recognition of the six basic emotions using the proposed ensemble classifiers. In comparison with related work, our approach outperforms other state-of-the-art research on 3D facial emotion recognition for the Bosphorus database. Moreover, in on-line real-time evaluation with real human subjects, the proposed system also shows superior real-time performance, with 84% recognition accuracy and great flexibility and adaptation for detecting newly arrived novel emotions (e.g. ‘contempt’, which is not among the six basic emotions).
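
    A minimal sketch of one component described above, the AU intensity estimation with Support Vector Regressors, is given below. Mutual-information feature selection is used here as a simple stand-in for the mRMR criterion, and all function names, feature shapes and hyperparameters are assumptions rather than the authors' settings.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_regression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    def fit_au_regressors(features, au_intensities, n_selected=16):
        """features: (n_samples, n_raw) motion-based descriptors;
        au_intensities: (n_samples, n_aus) ground-truth AU intensities.
        Fits one feature selector + SVR per Action Unit."""
        models = []
        for au in range(au_intensities.shape[1]):
            model = make_pipeline(
                StandardScaler(),
                SelectKBest(mutual_info_regression, k=n_selected),  # mRMR stand-in
                SVR(kernel="rbf", C=1.0),
            )
            model.fit(features, au_intensities[:, au])
            models.append(model)
        return models

    def predict_au_intensities(models, features):
        """Returns an (n_samples, n_aus) array of estimated AU intensities."""
        return np.column_stack([m.predict(features) for m in models])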

    Machine Analysis of Facial Expressions

    No abstract

    A survey of face recognition techniques under occlusion

    The limited capacity to recognize faces under occlusion is a long-standing problem that presents a unique challenge for face recognition systems and even for humans. Occlusion is less covered by research than other challenges such as pose variation and differences in expression. Nevertheless, occluded face recognition is imperative to exploit the full potential of face recognition in real-world applications. In this paper, we restrict the scope to occluded face recognition. First, we explore what the occlusion problem is and what inherent difficulties can arise. As part of this review, we introduce face detection under occlusion, a preliminary step in face recognition. Second, we present how existing face recognition methods cope with the occlusion problem and classify them into three categories: 1) occlusion-robust feature extraction approaches, 2) occlusion-aware face recognition approaches, and 3) occlusion-recovery-based face recognition approaches. Furthermore, we analyze the motivations, innovations, pros and cons, and performance of representative approaches for comparison. Finally, future challenges and method trends in occluded face recognition are thoroughly discussed.

    Contributions on 3D Biometric Face Recognition for point clouds in low-resolution devices

    MSc dissertation (Dissertação de mestrado), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2020; supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). Recently, many automation processes have made use of computer vision, exploiting digital information in the form of images or data to assist decision making. Recognition from 3D data is a trending topic in computer vision and graphics, and many methods have been proposed for 3D applications in pursuit of better accuracy and robustness. The main goal of this manuscript is to contribute face recognition methods for low-resolution point cloud devices. A face recognition process was carried out on a database of 31 subjects, with three colour (RGB) images and three depth images per subject. The colour images are used for face detection with a Haar cascade, which allows the facial points to be extracted from the depth image and a 3D face point cloud to be generated. From the point cloud, the normal intensity and the curvature index intensity of each point are extracted, allowing the construction of a two-dimensional image, called a curvature map, from which histograms are obtained to perform the face recognition task. Along with the curvature maps, a novel matching method is proposed by adapting the classic Bozorth algorithm, forming a net-based 3D representation of facial landmarks in a low-resolution point cloud in order to provide a descriptor of the cloud key points and extract a unique representation of each individual. The validation was carried out and compared with a baseline technique for 3D face recognition. The manuscript provides multiple testing scenarios (frontal faces, accuracy, scale and orientation) for both methods, achieving an accuracy of 98.92% in the best case for the curvature maps and 100% in the best case for the adapted Bozorth algorithm.
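
    The curvature-map idea can be sketched as follows, assuming OpenCV, NumPy and SciPy: a Haar cascade detects the face in the RGB image, the detected region is back-projected to a 3D point cloud using the depth image and assumed camera intrinsics, a curvature index is estimated per point from the eigenvalues of the local covariance, and the face is described by a histogram of those values. This is an illustrative approximation, not the dissertation's implementation, and the adapted Bozorth matching step is not shown.

    import cv2
    import numpy as np
    from scipy.spatial import cKDTree

    def face_point_cloud(rgb, depth, fx, fy, cx, cy):
        """Detect a face in the RGB image and back-project its depth pixels."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
        x, y, w, h = cascade.detectMultiScale(gray, 1.1, 5)[0]   # first detection
        us, vs = np.meshgrid(np.arange(x, x + w), np.arange(y, y + h))
        z = depth[vs, us].astype(np.float64)
        valid = z > 0                                            # drop missing depth
        return np.column_stack([(us[valid] - cx) * z[valid] / fx,
                                (vs[valid] - cy) * z[valid] / fy,
                                z[valid]])

    def curvature_histogram(points, k=30, bins=32):
        """Surface variation lambda0 / (lambda0 + lambda1 + lambda2) per point,
        summarised as a normalised histogram (a simple curvature-index proxy)."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        curv = np.empty(len(points))
        for i, nbrs in enumerate(idx):
            cov = np.cov(points[nbrs].T)
            eig = np.sort(np.linalg.eigvalsh(cov))
            curv[i] = eig[0] / max(eig.sum(), 1e-12)
        hist, _ = np.histogram(curv, bins=bins, range=(0.0, 1.0 / 3.0))
        return hist / max(hist.sum(), 1)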

    Biometric fusion methods for adaptive face recognition in computer vision

    PhD thesis. Face recognition is a biometric method that uses different techniques to identify individuals based on facial information obtained from digital image data. Face recognition systems are widely used for security purposes but face challenging problems, and solutions to some of the most important challenges are proposed in this study. The aim of this thesis is to investigate face recognition across pose based on the image parameters of camera calibration. Three novel methods are derived to address the challenges of face recognition and to infer the camera parameters from images using a geometric approach based on perspective projection. The following techniques are used: a camera-calibration measurement technique (CMT) and Face Quadtree Decomposition (FQD), combined to develop the face camera measurement technique (FCMT) for human face recognition. A feature extraction and identity-matching algorithm produces the facial information. The success and efficacy of the proposed algorithms are analysed in terms of robustness to noise, accuracy of distance measurement, and face recognition. To obtain the intrinsic and extrinsic camera calibration parameters, a novel technique is developed based on perspective projection, which uses different geometrical shapes to calibrate the camera. The parameters estimated by the novel measurement technique CMT enable the system to infer the real distance of regular and irregular objects from 2-D images. The proposed CMT feeds into FQD to measure the distance between facial points. Quadtree decomposition enhances the representation of edges and other singularities along curves of the face, and thus improves directional features for face detection across pose. The proposed FCMT system is a new combination of CMT and FQD to recognise faces in various poses. The theoretical foundation of the proposed solutions is developed and discussed in detail. The results show that the proposed algorithms outperform existing face recognition algorithms, with a 2.5% improvement in recognition error rate compared with recent studies.
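
    The quadtree decomposition step lends itself to a short illustration. The sketch below is a generic variance-threshold quadtree split of a grayscale face image written in Python with NumPy; it is not the thesis's FQD implementation, and the threshold and minimum block size are assumed parameters. Finer blocks concentrate around edges such as the eyes and mouth, which is what makes the decomposition useful as an edge-sensitive face feature.

    import numpy as np

    def quadtree(image, x=0, y=0, size=None, var_threshold=100.0, min_size=4):
        """Return homogeneous blocks as (x, y, size) tuples by recursively
        splitting while the block's intensity variance exceeds the threshold."""
        if size is None:
            size = min(image.shape)          # assume a square region of interest
        block = image[y:y + size, x:x + size]
        if size <= min_size or block.var() <= var_threshold:
            return [(x, y, size)]
        half = size // 2
        blocks = []
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            blocks += quadtree(image, x + dx, y + dy, half, var_threshold, min_size)
        return blocks

    # Usage: leaves = quadtree(face_image); small leaves mark edge-rich regions.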