Greedy Search for Descriptive Spatial Face Features
Facial expression recognition methods use a combination of geometric and
appearance-based features. Spatial features are derived from displacements of
facial landmarks, and carry geometric information. These features are either
selected based on prior knowledge, or dimension-reduced from a large pool. In
this study, we produce a large number of potential spatial features using two
combinations of facial landmarks. Among these, we search for a descriptive
subset of features using sequential forward selection. The chosen feature
subset is used to classify facial expressions in the extended Cohn-Kanade
dataset (CK+), and delivers 88.7% recognition accuracy without using any
appearance-based features.
Comment: International Conference on Acoustics, Speech and Signal Processing
(ICASSP), 201
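The abstract's greedy search is standard sequential forward selection: start from an empty subset and repeatedly add the single feature that most improves a score (in the paper, classification accuracy on the spatial features). A minimal sketch, with a toy score function standing in for the cross-validated classifier the paper would actually use:

```python
import numpy as np

def sequential_forward_selection(score_fn, n_features, k):
    """Greedily grow a feature subset: at each step, add the one
    remaining feature whose inclusion maximizes score_fn(subset)."""
    selected = []
    remaining = set(range(n_features))
    while len(selected) < k:
        best_f, best_score = None, -np.inf
        for f in remaining:
            s = score_fn(selected + [f])
            if s > best_score:
                best_f, best_score = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy score: prefers low feature indices (a stand-in for accuracy).
score = lambda subset: -sum(subset)
print(sequential_forward_selection(score, 5, 3))  # → [0, 1, 2]
```

The search is greedy, so it can miss subsets whose features are only useful jointly; the paper accepts this trade-off in exchange for tractability over a large candidate pool.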
Island Loss for Learning Discriminative Features in Facial Expression Recognition
Over the past few years, Convolutional Neural Networks (CNNs) have shown
promise on facial expression recognition. However, the performance degrades
dramatically under real-world settings due to variations introduced by subtle
facial appearance changes, head pose variations, illumination changes, and
occlusions.
In this paper, a novel island loss (IL) is proposed to enhance the discriminative
power of the deeply learned features. Specifically, the IL is designed to
reduce the intra-class variations while simultaneously enlarging the inter-class
differences. Experimental results on four benchmark expression databases
demonstrate that the CNN with the proposed island loss (IL-CNN)
outperforms baseline CNN models trained with either the traditional softmax loss
or the center loss, and achieves comparable or better performance than
state-of-the-art methods for facial expression recognition.
Comment: 8 pages, 3 figures
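The island loss as described combines a center-loss-style intra-class term with a penalty that pushes class centers apart. A minimal numpy sketch of that idea (the exact weighting and the joint training with softmax are simplified away; `lam1` and the shifted-cosine penalty follow the common formulation of this loss):

```python
import numpy as np

def island_loss(features, labels, centers, lam1=0.5):
    """Sketch: intra-class term pulls each sample toward its class
    center; inter-class term penalizes cosine similarity between
    every pair of distinct centers (shifted by +1 so each pairwise
    penalty is non-negative)."""
    # Intra-class variation: squared distance to own class center.
    intra = 0.5 * np.sum((features - centers[labels]) ** 2)
    # Pairwise cosine similarities between class centers.
    unit = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    cos = unit @ unit.T
    # Sum (cos + 1) over ordered pairs j != k (drop the diagonal).
    inter = np.sum(cos + 1.0) - np.trace(cos + 1.0)
    return intra + lam1 * inter
```

With orthogonal centers the inter-class term contributes (0 + 1) per ordered pair, so the loss shrinks as centers spread toward opposite directions (cosine → −1), which is the "islands" effect the name refers to.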
Machine Analysis of Facial Expressions
No abstract
A graphical model based solution to the facial feature point tracking problem
In this paper, a facial feature point tracker motivated by applications
such as human-computer interfaces and facial expression analysis systems is
proposed. The proposed tracker is based on a graphical model framework. The
facial features are tracked through video streams by incorporating statistical
relations in time as well as spatial relations between feature points. By
exploiting the spatial relationships between feature points, the proposed
method provides robustness in real-world conditions such as arbitrary head
movements and occlusions. A Gabor feature-based occlusion detector is developed
and used to handle occlusions. The performance of the proposed tracker has been
evaluated on real video data under various conditions, including occluded
facial gestures and head movements. It is also compared to two popular methods,
one based on Kalman filtering exploiting temporal relations, and the other
based on active appearance models (AAM). Improvements provided by the proposed
approach are demonstrated through both visual displays and quantitative
analysis.
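The key idea of exploiting spatial relations between points can be illustrated with a much simpler update than the paper's graphical model: when a point is flagged occluded (by the Gabor detector in the paper), re-estimate it from the displacement of its visible neighbors instead of trusting the missing observation. The function below is a hypothetical sketch of that step only, not the authors' inference procedure; `neighbors` is an assumed adjacency map over landmark indices:

```python
import numpy as np

def track_step(prev_pts, observed_pts, occluded, neighbors):
    """One hypothetical update step: visible points follow their
    observations; each occluded point moves by the mean displacement
    of its visible neighbors (a crude stand-in for the spatial
    relations encoded in the paper's graphical model)."""
    new_pts = observed_pts.copy()
    for i in np.where(occluded)[0]:
        visible_nb = [j for j in neighbors[i] if not occluded[j]]
        if visible_nb:
            disp = np.mean(observed_pts[visible_nb] - prev_pts[visible_nb],
                           axis=0)
            new_pts[i] = prev_pts[i] + disp
        else:
            # No visible neighbors: keep the previous position.
            new_pts[i] = prev_pts[i]
    return new_pts
```

In the paper the temporal side (Kalman-like statistical relations in time) and the spatial side are fused jointly via the graphical model rather than applied as a hard per-point rule like this.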