An Approach for Improving Automatic Mouth Emotion Recognition
The study proposes and tests a technique for automated emotion recognition
through mouth detection via Convolutional Neural Networks (CNNs). It is
intended to support people whose health conditions impair communication
(e.g. muscle wasting, stroke, autism, or, more simply, pain) by recognizing
emotions and generating real-time feedback, or by feeding data to supporting
systems. The software first determines whether a face is present in the
acquired image, then locates the mouth and extracts the corresponding
features. Both tasks are carried out using Haar feature-based classifiers,
which guarantee fast execution and promising performance. Whereas our
previous works focused on visual micro-expressions for personalized training
on a single user, this strategy aims to also train the system on generalized
face datasets.
ICface: Interpretable and Controllable Face Reenactment Using GANs
This paper presents a generic face animator that is able to control the pose
and expressions of a given face image. The animation is driven by human
interpretable control signals consisting of head pose angles and the Action
Unit (AU) values. The control information can be obtained from multiple sources
including external driving videos and manual controls. Due to the interpretable
nature of the driving signal, one can easily mix the information between
multiple sources (e.g. pose from one image and expression from another) and
apply selective post-production editing. The proposed face animator is
implemented as a two-stage neural network model that is learned in a
self-supervised manner using a large video collection. The proposed
Interpretable and Controllable face reenactment network (ICface) is compared to
the state-of-the-art neural network-based face animation techniques in multiple
tasks. The results indicate that ICface produces better visual quality while
being more versatile than most of the comparison methods. The introduced model
could provide a lightweight and easy to use tool for a multitude of advanced
image and video editing tasks.
Comment: Accepted in WACV-202
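Because the driving signal is just a low-dimensional, human-interpretable vector of pose angles and AU values, mixing sources reduces to assembling one vector from two inputs. A sketch under assumed dimensions (3 pose angles, 17 AUs; the paper's exact signal layout may differ):

```python
import numpy as np

def make_control(pose_deg, aus):
    """Compose a driving signal: pose angles from one source, AUs from another."""
    # Normalize head-pose angles (yaw, pitch, roll) from degrees to [-1, 1].
    pose = np.clip(np.asarray(pose_deg, dtype=float) / 90.0, -1.0, 1.0)
    # Action Unit activations are kept as intensities in [0, 1].
    au = np.clip(np.asarray(aus, dtype=float), 0.0, 1.0)
    return np.concatenate([pose, au])

pose_from_video = [10.0, -5.0, 0.0]   # head pose taken from a driving video
aus_from_image = np.full(17, 0.2)     # expression taken from a different image
signal = make_control(pose_from_video, aus_from_image)
```

Selective post-production editing amounts to overwriting individual entries of `signal` before feeding it to the animator.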
Efficient smile detection by Extreme Learning Machine
Smile detection is a specialized task in facial expression analysis with applications such as photo selection, user experience analysis, and patient monitoring. As one of the most important and informative expressions, the smile conveys underlying emotional states such as joy, happiness, and satisfaction. In this paper, an efficient smile detection approach based on the Extreme Learning Machine (ELM) is proposed. Faces are first detected, and a holistic flow-based face registration is applied that requires no manual labeling or key-point detection. An ELM is then trained as the classifier. The proposed smile detector is tested with different feature descriptors on publicly available databases, including real-world face images. Comparisons against benchmark classifiers, including Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA), suggest that the proposed ELM-based smile detector generally performs better and is very efficient. Compared to state-of-the-art smile detectors, the proposed method achieves competitive results without preprocessing or manual registration.
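ELM training as used above reduces to a fixed random hidden projection followed by a closed-form least-squares fit of the output weights, which is what makes it efficient. A minimal numpy sketch on synthetic feature vectors (the real pipeline's face registration and feature descriptors are omitted; all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for registered-face feature vectors and smile labels.
X = rng.standard_normal((200, 64))
y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0).astype(float)

n_hidden = 128
W = rng.standard_normal((64, n_hidden))   # random input weights, never trained
b = rng.standard_normal(n_hidden)         # random hidden biases

H = np.tanh(X @ W + b)                    # hidden-layer activations
# ELM "training": solve the output weights in closed form by least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

preds = (np.tanh(X @ W + b) @ beta) > 0.5
train_acc = (preds == y.astype(bool)).mean()
```

The absence of iterative weight updates is the source of the speed advantage over SVM training on the same features.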