4,735 research outputs found

    Facial emotion recognition using min-max similarity classifier

    Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, automatic recognition of facial expressions using image template matching suffers from the natural variability in facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms existing template matching methods.
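The pipeline described above can be sketched in a few lines: normalize pixels to remove intensity offsets, then classify with a nearest neighbor rule under a Min-Max similarity (ratio of summed element-wise minima to summed maxima, which damps the influence of outlier pixels). The exact normalization used in the paper is not specified here, so the zero-mean, unit-range form below is an assumption.

```python
import numpy as np

def normalize(img):
    """Pixel normalization to remove intensity offsets: zero-mean,
    unit-range (an assumed form of the paper's normalization step)."""
    img = np.asarray(img, float)
    img = img - img.mean()
    rng = img.max() - img.min()
    return img / rng if rng > 0 else img

def min_max_similarity(a, b):
    """Min-Max metric: sum of element-wise minima over sum of maxima.
    A large outlier inflates numerator and denominator together,
    so its influence on the ratio is suppressed."""
    a, b = np.ravel(a), np.ravel(b)
    lo = min(a.min(), b.min())          # shift to non-negative values
    a, b = a - lo, b - lo
    denom = np.maximum(a, b).sum()
    return np.minimum(a, b).sum() / denom if denom > 0 else 1.0

def classify(query, templates, labels):
    """1-NN: return the label of the template most similar to the query."""
    q = normalize(query)
    sims = [min_max_similarity(q, normalize(t)) for t in templates]
    return labels[int(np.argmax(sims))]
```

Identical images score exactly 1.0 under this metric, and any mismatch pulls the ratio below 1, so nearest neighbor search reduces to taking the argmax of similarities.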

    Pooling Faces: Template based Face Recognition with Pooled Face Images

    We propose a novel approach to template based face recognition. Our dual goal is to both increase recognition accuracy and reduce the computational and storage costs of template matching. To do this, we leverage an approach which has proven effective in many other domains but, to our knowledge, has never been fully explored for face images: average pooling of face photos. We show how (and why!) the space of a template's images can be partitioned and then pooled based on image quality and head pose, and the effect this has on accuracy and template size. We perform extensive tests on the IJB-A and Janus CS2 template based face identification and verification benchmarks. These show not only that our approach outperforms the published state of the art despite requiring far fewer cross-template comparisons, but also, surprisingly, that image pooling performs on par with deep feature pooling. Comment: Appeared in the IEEE Computer Society Workshop on Biometrics, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June, 201
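The partition-then-pool idea can be sketched as follows: bucket a template's images by head pose and quality, average the pixels within each bucket, and compare templates cell-by-cell. The yaw bin edges and quality threshold below are illustrative placeholders, not the paper's values.

```python
import numpy as np

def pool_template(images, yaws, qualities,
                  yaw_edges=(-90, -20, 20, 90), q_thresh=0.5):
    """Partition a template's images into (pose bin, high/low quality)
    cells and average-pool the pixels in each non-empty cell.
    Bin edges and threshold are illustrative assumptions."""
    cells = {}
    for img, yaw, q in zip(images, yaws, qualities):
        key = (int(np.digitize(yaw, yaw_edges)), q >= q_thresh)
        cells.setdefault(key, []).append(np.asarray(img, float))
    return {key: np.mean(stack, axis=0) for key, stack in cells.items()}

def template_distance(t1, t2):
    """Compare pooled templates: mean L2 distance over shared cells.
    Far fewer comparisons than matching every image pair."""
    shared = set(t1) & set(t2)
    if not shared:
        return float("inf")
    return sum(np.linalg.norm(t1[c] - t2[c]) for c in shared) / len(shared)
```

A template of N images collapses to at most (number of pose bins) x 2 pooled images, which is where both the storage and the cross-template comparison savings come from.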

    Recognizing Surgically Altered Face Images and 3D Facial Expression Recognition

    Altering facial appearance using surgical procedures is common nowadays, but it raises challenges for face recognition algorithms. Plastic surgery introduces non-linear variations that are difficult to model with existing face recognition systems. This work presents a multi-objective evolutionary granular algorithm that operates on several granules extracted from a face image at multiple levels of granularity. The granular information is unified in an evolutionary manner using a multi-objective genetic approach. Facial expressions are then identified from the face images, for which 3D facial shapes are considered. A novel automatic feature selection method is proposed based on maximizing the average relative entropy of marginalized class-conditional feature distributions, applied to a complete pool of candidate features composed of normalized Euclidean distances between 83 facial feature points in 3D space. A regularized multi-class AdaBoost classification algorithm is used to obtain the highest average recognition rate.
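The candidate feature pool mentioned above (normalized Euclidean distances between 3D landmark points) is straightforward to construct; for 83 points it yields 83 x 82 / 2 = 3403 pairwise distances. The normalization choice below (dividing by the mean distance, so faces of different scale become comparable) is an assumption for illustration.

```python
import numpy as np
from itertools import combinations

def distance_features(landmarks):
    """Candidate feature pool: Euclidean distances between all pairs of
    3D facial landmarks (83 points -> 3403 pairs in the paper; any N
    works here), normalized by their mean for scale invariance.
    The mean-normalization is an assumed choice."""
    pts = np.asarray(landmarks, float)
    d = np.array([np.linalg.norm(pts[i] - pts[j])
                  for i, j in combinations(range(len(pts)), 2)])
    return d / d.mean()
```

The feature selection and AdaBoost stages would then operate on this vector; they are omitted here.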

    Web-Scale Training for Face Identification

    Scaling machine learning methods to very large datasets has attracted considerable attention in recent years, thanks to easy access to ubiquitous sensing and data from the web. We study face recognition and show that three distinct properties have surprising effects on the transferability of deep convolutional networks (CNNs): (1) the bottleneck of the network serves as an important transfer learning regularizer; (2) in contrast to common wisdom, performance saturation may exist in CNNs (as the number of training samples grows), and we propose a solution for alleviating this by replacing the naive random subsampling of the training set with a bootstrapping process. Moreover, (3) we find a link between the representation norm and the ability to discriminate in a target domain, which sheds light on how such networks represent faces. Based on these discoveries, we are able to improve face recognition accuracy on the widely used LFW benchmark, both in the verification (1:1) and identification (1:N) protocols, and directly compare, for the first time, with the state-of-the-art Commercial-Off-The-Shelf system, showing a sizable leap in performance.
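The bootstrapping idea in point (2) can be sketched as follows: instead of subsampling the training set uniformly at random, keep the examples the current model finds hardest. The margin criterion (true-class score minus best wrong-class score) used below is an illustrative choice, not necessarily the paper's exact rule.

```python
import numpy as np

def bootstrap_sample(scores, labels, n):
    """Select the n hardest training examples for the next round.
    `scores` is (num_examples, num_classes) of current model outputs;
    hardness = margin between the true-class score and the best
    wrong-class score (an assumed, illustrative criterion)."""
    scores = np.asarray(scores, float)
    idx = np.arange(len(labels))
    true = scores[idx, labels]
    wrong = scores.copy()
    wrong[idx, labels] = -np.inf       # mask out the true class
    margin = true - wrong.max(axis=1)  # small margin = hard example
    return np.argsort(margin)[:n]      # hardest first
```

Uniform subsampling would waste capacity re-fitting easy examples; ranking by margin focuses each round on the regions where the model is still saturating.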

    Human metrology for person classification and recognition

    Human metrological features generally refer to geometric measurements extracted from humans, such as height, chest circumference or foot length. Human metrology provides an important soft biometric that can be used in challenging situations, such as person classification and recognition at a distance, where hard biometric traits such as fingerprints and iris information cannot easily be acquired. In this work, we first study the question of predictability and correlation in human metrology. We show that partial or available measurements can be used to predict other missing measurements. We then investigate the use of human metrology for the prediction of other soft biometrics, viz. gender and weight. The experimental results based on our proposed copula-based model suggest that human body metrology contains enough information for reliable prediction of gender and weight. Also, the proposed copula-based technique is observed to reduce the impact of noise on prediction performance. We then study the question of whether face metrology can be exploited for reliable gender prediction. A new method based solely on metrological information from facial landmarks is developed. The performance of the proposed metrology-based method is compared with that of a state-of-the-art appearance-based method for gender classification. Results on several face databases show that the metrology-based approach achieves accuracy comparable to that of the appearance-based method. Furthermore, we study the question of person recognition (classification and identification) via whole body metrology. Using the CAESAR 1D database as a baseline, we simulate intra-class variation with various noise models. The experimental results indicate that, given a sufficient number of features, our metrology-based recognition system can achieve promising performance comparable to several recent state-of-the-art recognition systems.
    We propose a non-parametric feature selection methodology, called the adapted k-nearest neighbor estimator, which does not rely on the intra-class distribution of the query set. This leads to improved results over other nearest neighbor estimators (as feature selection criteria) for a moderate number of features. Finally, we quantify the discrimination capability of human metrology from both individuality and capacity perspectives. Generally, a biometric-based recognition technique relies on the assumption that the given biometric is unique to an individual. However, the validity of this assumption is not yet generally confirmed for most soft biometrics, such as human metrology. In this work, we first develop two schemes that can be used to quantify the individuality of a given soft-biometric system. Then, a Poisson channel model is proposed to analyze the recognition capacity of human metrology. Our study suggests that the performance of such a system depends more on the accuracy of the ground truth or training set.
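The claim that partial measurements can predict missing ones is easy to illustrate with a minimal stand-in for the copula-based model: a plain k-NN regressor over a reference table of body measurements. The table layout and k below are illustrative assumptions.

```python
import numpy as np

def predict_missing(known, data, target_col, known_cols, k=3):
    """Predict a missing body measurement from the available ones.
    `data` is a reference table (rows = people, cols = measurements);
    average the target column over the k rows nearest to the known
    measurements. A minimal k-NN stand-in, not the paper's copula model."""
    data = np.asarray(data, float)
    dists = np.linalg.norm(data[:, known_cols] - np.asarray(known, float),
                           axis=1)
    nearest = np.argsort(dists)[:k]
    return data[nearest, target_col].mean()
```

Because body measurements are strongly correlated (taller people tend to weigh more), even this crude estimator recovers a plausible value from a single known measurement.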

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of both fields of Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, often used by people to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively, while the processing of body movements plays a key role in the action recognition and affective computing fields. The former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements; both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first one, a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) based method for the recognition of sign language and semaphoric hand gestures is proposed; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, in the last module, a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs) is provided. The performances of LSTM-RNNs are explored in depth, due to their ability to model the long-term contextual information of temporal sequences, making them suitable for analysing body movements. All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
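The recurrent unit underlying all three modules can be sketched as a single LSTM step applied across a sequence of skeleton frames: the cell state carries the long-term context, and the final hidden state summarizes the clip. This is a generic NumPy sketch of a standard LSTM cell (gate order i, f, o, g is a common convention assumed here), not the thesis's exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias; gate order i, f, o, g is an assumed,
    common convention."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
    g = np.tanh(z[3 * H:])
    c_new = f * c + i * g          # cell state: long-term context
    h_new = o * np.tanh(c_new)     # hidden state: sequence summary
    return h_new, c_new

def run_sequence(seq, W, U, b, H):
    """Feed a sequence of frame features (e.g. flattened 2D/3D skeleton
    joints) through the cell; return the final hidden state as the
    clip descriptor."""
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h
```

A two-branch variant, as in the action recognition module, would run two such stacks (e.g. on joint positions and joint motions) and merge their final states before classification.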