
    Entropy Projection Curved Gabor with Random Forest and SVM for Face Recognition

    In this work, we propose a workflow for face recognition under occlusion that uses the entropy projection of the curved Gabor filter to create a representative and compact feature vector describing a face. Although the vector obtained by the entropy projection is already reduced, it still offers room for further dimensionality reduction. We therefore use a Random Forest classifier as an attribute selector, providing a 97% reduction of the original vector while keeping suitable accuracy. A set of experiments on three public image databases, AR Face, Extended Yale B with occlusion, and FERET, illustrates the proposed methodology, evaluated with an SVM classifier. The results are promising when compared to the approaches available in the literature, reaching 98.05% accuracy on the complete AR Face database, 97.26% on FERET, and 81.66% on Extended Yale B with 50% occlusion.
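    The pipeline the abstract describes (entropy-projection features, Random Forest attribute selection, SVM classification) can be approximated with off-the-shelf tools. The sketch below is not the authors' implementation: the feature matrix X and labels y are random placeholders standing in for the curved-Gabor/entropy-projection features, and the number of retained attributes is illustrative rather than tuned to the reported 97% reduction.

        # Hedged sketch: Random Forest as an attribute selector feeding an SVM,
        # assuming X already holds entropy-projection feature vectors and y the
        # face identity labels (both are random placeholders here).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectFromModel
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 512))      # placeholder feature vectors
        y = rng.integers(0, 10, size=200)    # placeholder identity labels

        selector = SelectFromModel(
            RandomForestClassifier(n_estimators=200, random_state=0),
            threshold=-np.inf,               # rank purely by importance
            max_features=16,                 # keep only the top-ranked attributes
        )
        model = make_pipeline(selector, SVC(kernel="linear"))
        model.fit(X, y)
        print(model.score(X, y))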

    EmoNets: Multimodal deep learning approaches for emotion recognition in video

    The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches that combine features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, which captures visual information in detected faces; a deep belief net, which represents the audio stream; a K-Means based "bag-of-mouths" model, which extracts visual features around the mouth region; and a relational autoencoder, which addresses the spatio-temporal aspects of the videos. We explore multiple methods for combining the cues from these modalities into one common classifier, which achieves considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test-set accuracy of 47.67% on the 2014 dataset.
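    The combination step can be illustrated with a simple late-fusion scheme over per-modality class probabilities. This is only a hedged sketch: the modality names, weights, and random probability arrays below are placeholders, and the paper itself explores several fusion strategies rather than this fixed weighted average.

        # Hedged sketch of late fusion: weighted average of per-modality class
        # probabilities, then argmax over the seven emotion classes. Weights and
        # probabilities are illustrative placeholders.
        import numpy as np

        n_clips, n_emotions = 4, 7
        rng = np.random.default_rng(0)

        def fake_probs():
            p = rng.random((n_clips, n_emotions))
            return p / p.sum(axis=1, keepdims=True)

        modality_probs = {
            "faces_cnn": fake_probs(),       # CNN over detected faces
            "audio_dbn": fake_probs(),       # deep belief net over the audio stream
            "bag_of_mouths": fake_probs(),   # K-Means "bag-of-mouths" features
        }
        weights = {"faces_cnn": 0.5, "audio_dbn": 0.3, "bag_of_mouths": 0.2}

        fused = sum(w * modality_probs[m] for m, w in weights.items())
        predicted_emotion = fused.argmax(axis=1)
        print(predicted_emotion)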

    Facial emotion recognition using min-max similarity classifier

    Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent-systems applications. However, automatic recognition of facial expressions using image template matching suffers from the natural variability of facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms the existing template matching methods.
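    A minimal sketch of the described classification step follows, assuming the Min-Max metric takes the common form sum(min(a, b)) / sum(max(a, b)) over normalized pixel vectors; the paper's exact normalization and metric may differ.

        # Hedged sketch: pixel normalization followed by a nearest-neighbor
        # decision under a Min-Max similarity (assumed form, not taken verbatim
        # from the paper).
        import numpy as np

        def normalize(img):
            """Remove the intensity offset and scale pixels to [0, 1]."""
            img = img.astype(float)
            img -= img.min()
            return img / (img.max() + 1e-12)

        def min_max_similarity(a, b):
            """Similarity of two non-negative vectors: sum of mins over sum of maxes."""
            return np.minimum(a, b).sum() / np.maximum(a, b).sum()

        def classify(query, templates, labels):
            """Return the label of the template most similar to the query image."""
            q = normalize(query).ravel()
            sims = [min_max_similarity(q, normalize(t).ravel()) for t in templates]
            return labels[int(np.argmax(sims))]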

    Shape and Texture Combined Face Recognition for Detection of Forged ID Documents

    This paper proposes a face recognition system that can be used to effectively match a face image scanned from an identity (ID) document against the face image stored in the biometric chip of such a document. The purpose of this specific face recognition algorithm is to aid the automatic detection of forged ID documents where the photograph printed on the document's surface has been altered or replaced. The proposed algorithm uses a novel combination of texture and shape features together with subspace representation techniques. In addition, the robustness of the proposed algorithm when dealing with more general face recognition tasks has been demonstrated on the Good, the Bad & the Ugly (GBU) dataset, one of the most challenging datasets containing frontal faces. The proposed algorithm has been complemented with a novel method that adopts two operating points to enhance the reliability of the algorithm's final verification decision.
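    The two-operating-point decision can be pictured as two thresholds on the match score with an uncertain band in between; the threshold values and the three-way outcome below are assumptions for illustration, not rules taken from the paper.

        # Hedged sketch of a two-operating-point verification decision; the
        # thresholds and the "refer" outcome are assumed, not the paper's values.
        def verify(match_score, t_low=0.35, t_high=0.65):
            if match_score >= t_high:
                return "match"       # printed photo agrees with the chip image
            if match_score <= t_low:
                return "no-match"    # photo likely altered or replaced
            return "refer"           # uncertain band: escalate for manual review

        for score in (0.82, 0.50, 0.21):
            print(score, verify(score))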