An efficient P-KCCA algorithm for 2D-3D face recognition using SVM
In this paper, a novel face recognition and identification system based on a combination of Principal Component Analysis and Kernel Canonical Correlation Analysis (P-KCCA) with a Support Vector Machine (SVM) classifier is proposed. First, the P-KCCA method is used to detect and extract the salient features from the input images; this makes it possible to match a 2D face image against enrolled 3D face data. The resulting features are then classified with the SVM. The proposed method was tested on the TEXAS database with 200 subjects. The experimental results on the TEXAS face database are promising in terms of recognition rate and robustness of the face recognition algorithm. We compare the performance of our proposed face recognition method to other commonly used methods. The experimental results show that the combination of P-KCCA with SVM achieves higher performance than the standalone PCA, CCA, and KCCA algorithms.
Face image super-resolution using 2D CCA
In this paper a face super-resolution method using two-dimensional canonical correlation analysis (2D CCA) is presented. A detail compensation step then adds high-frequency components to the reconstructed high-resolution face. Unlike most previous research on face super-resolution, which first transforms the images into vectors, our approach maintains the relationship between the high-resolution and low-resolution face images in their original 2D representation. In addition, rather than approximating the entire face, different parts of a face image are super-resolved separately to better preserve the local structure. The proposed method is compared with various state-of-the-art super-resolution algorithms using multiple evaluation criteria, including face recognition performance. Results on publicly available datasets show that the proposed method super-resolves high-quality face images that are very close to the ground truth, and the performance gain is not dataset-dependent. The method is very efficient in both the training and testing phases compared to the other approaches. © 2013 Elsevier B.V.
Relative Facial Action Unit Detection
This paper presents a subject-independent facial action unit (AU) detection method by introducing the concept of relative AU detection, for scenarios where the neutral face is not provided. We propose a new classification objective function which analyzes the temporal neighborhood of the current frame to decide whether the expression recently increased, decreased, or showed no change. This approach is a significant departure from the conventional absolute method, which decides AU classification from the current frame alone, without an explicit comparison with its neighboring frames. Our proposed method improves robustness to individual differences such as face scale and shape, age-related wrinkles, and transitions among expressions (e.g., lower intensity of expressions). Our experiments on three publicly available datasets (Extended Cohn-Kanade (CK+), Bosphorus, and DISFA) show significant improvement of our approach over conventional absolute techniques.
Keywords: facial action coding system (FACS); relative facial action unit detection; temporal information
Comment: Accepted at IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, Colorado, USA, 201
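The relative labeling idea can be illustrated with a small sketch (not the authors' code): given a per-frame AU intensity sequence, each frame is labeled by comparing it with its recent temporal neighborhood rather than by an absolute threshold; the `window` and `eps` parameters are illustrative assumptions.

```python
# Illustrative sketch of relative AU labeling from temporal context.
import numpy as np

def relative_labels(intensity, window=2, eps=0.1):
    """Label each frame +1 (increased), -1 (decreased) or 0 (no change)
    by comparing it with the mean of the preceding `window` frames."""
    intensity = np.asarray(intensity, dtype=float)
    labels = np.zeros(len(intensity), dtype=int)
    for t in range(window, len(intensity)):
        delta = intensity[t] - intensity[t - window:t].mean()
        if delta > eps:
            labels[t] = 1
        elif delta < -eps:
            labels[t] = -1
    return labels

# A smile ramping up and then fading: onset frames label +1, offset -1.
seq = [0.0, 0.0, 0.5, 1.0, 1.0, 0.4, 0.0]
print(relative_labels(seq))
```

Because only local changes matter, a subject's baseline face shape, wrinkles, or scale cancel out of the comparison, which is the robustness argument the abstract makes.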
Effects of cultural characteristics on building an emotion classifier through facial expression analysis
Facial expressions are an important demonstration of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, while local binary patterns, histograms of oriented gradients (HOG), and Gabor filters were employed to describe the facial expressions for six basic emotions. The most consistent combination, the association of HOG descriptors and support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences in the facial expressions of each culture affected the classifier's performance. Finally, a classifier was trained with images from both occidental and oriental subjects, and its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an efficient classifier. (C) 2015 SPIE and IS&T
Funding: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) [2011/22749-8, 2014/04020-9]; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) [307113/2012-4]
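The best-performing combination reported above (HOG descriptors fed to an SVM) can be sketched as follows. This is an assumption-laden toy: a simplified global orientation histogram stands in for a full HOG implementation, and random arrays stand in for face crops from the cultural datasets.

```python
# Minimal sketch of the HOG + SVM combination described above; a
# simplified gradient-orientation histogram stands in for full HOG.
import numpy as np
from sklearn.svm import SVC

def orientation_histogram(img, bins=8):
    """HOG-style descriptor: a global histogram of gradient
    orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi            # unsigned orientations
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

rng = np.random.default_rng(2)
images = rng.random(size=(60, 48, 48))          # toy grayscale "faces"
labels = rng.integers(0, 6, size=60)            # six basic emotions

feats = np.array([orientation_histogram(img) for img in images])
clf = SVC(kernel="linear").fit(feats, labels)   # SVM over HOG-style features
print(clf.score(feats, labels))
```

Training one such classifier per cultural group, then evaluating it on the other group's images, reproduces the cross-cultural comparison the abstract describes.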