An efficient deep learning technique for facial emotion recognition
Emotion recognition from facial images is a challenging task due to the varying nature of facial expressions. Prior studies on emotion classification from facial images using deep learning models have suffered performance degradation caused by poor selection of layers in the convolutional neural network model. To address this issue, we propose an efficient deep learning technique using a convolutional neural network model for classifying emotions from facial images and for detecting age and gender from facial expressions. Experimental results show that the proposed model outperforms baseline works, achieving an accuracy of 95.65% for emotion recognition, 98.5% for age recognition, and 99.14% for gender recognition.
Facial Landmark Feature Fusion in Transfer Learning of Child Facial Expressions
Automatic classification of child facial expressions is challenging due to the scarcity of image samples with annotations. Transfer learning of deep convolutional neural networks (CNNs), pretrained on adult facial expressions, can be effectively fine-tuned for child facial expression classification using limited facial images of children. Recent work inspired by facial age estimation and age-invariant face recognition proposes a fusion of facial landmark features with deep representation learning to augment facial expression classification performance. We hypothesize that deep transfer learning of child facial expressions may also benefit from fusing facial landmark features. Our proposed model architecture integrates two input branches: a CNN branch for image feature extraction and a fully connected branch for processing landmark-based features. The model-derived features of these two branches are concatenated into a latent feature vector for downstream expression classification. The architecture is trained on an adult facial expression classification task. Then, the trained model is fine-tuned to perform child facial expression classification. The combined feature fusion and transfer learning approach is compared against multiple models: training on adult expressions only (adult baseline), child expressions only (child baseline), and transfer learning from adult to child data. We also evaluate the effect of feature fusion without transfer learning on classification performance. Training on child data, we find that feature fusion improves the 10-fold cross-validation mean accuracy from 80.32% to 83.72% with similar variance. The proposed fine-tuning with landmark feature fusion of child expressions yields the best mean accuracy of 85.14%, a more than 30% improvement over the adult baseline and nearly 5% improvement over the child baseline.
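The two-branch fusion described above can be sketched with NumPy: the CNN embedding and a fully connected landmark branch are concatenated into one latent vector feeding a softmax expression head. All dimensions, weights, and layer sizes here are illustrative placeholders, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 128-d CNN image embedding and 68 (x, y) landmarks.
cnn_feat = rng.standard_normal(128)        # stand-in for the CNN branch output
landmarks = rng.standard_normal(68 * 2)    # flattened landmark coordinates

# Fully connected landmark branch: one linear layer + ReLU (toy weights).
W_lmk = 0.1 * rng.standard_normal((32, 136))
lmk_feat = np.maximum(W_lmk @ landmarks, 0.0)

# Fusion: concatenate the two branch outputs into one latent vector.
latent = np.concatenate([cnn_feat, lmk_feat])   # 128 + 32 = 160 dims

# Downstream expression head over, e.g., 7 expression classes (softmax).
W_cls = 0.1 * rng.standard_normal((7, 160))
logits = W_cls @ latent
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(latent.shape, int(probs.argmax()))
```

In the fine-tuning scenario the CNN branch would come pretrained on adult expressions, with only the classifier head (and optionally later layers) updated on child data.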
Quantifying Facial Age by Posterior of Age Comparisons
We introduce a novel approach for annotating large quantity of in-the-wild
facial images with high-quality posterior age distribution as labels. Each
posterior provides a probability distribution of estimated ages for a face. Our
approach is motivated by observations that it is easier to distinguish who is
the older of two people than to determine the person's actual age. Given a
reference database with samples of known ages and a dataset to label, we can
transfer reliable annotations from the former to the latter via
human-in-the-loop comparisons. We show an effective way to transform such
comparisons to posterior via fully-connected and SoftMax layers, so as to
permit end-to-end training in a deep network. Thanks to the efficient and
effective annotation approach, we collect a new large-scale facial age dataset,
dubbed `MegaAge', which consists of 41,941 images. Data can be downloaded from
our project page mmlab.ie.cuhk.edu.hk/projects/MegaAge and
github.com/zyx2012/Age_estimation_BMVC2017. With the dataset, we train a
network that jointly performs ordinal hyperplane classification and posterior
distribution learning. Our approach achieves state-of-the-art results on
popular benchmarks such as MORPH2, Adience, and the newly proposed MegaAge. (To appear at BMVC 2017 as an oral presentation; revised version.)
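The comparison-to-posterior idea can be illustrated with a hand-rolled sketch: pairwise "older than this reference?" votes are scored against a candidate age grid and normalized with a softmax. The reference ages, votes, and scoring rule below are invented for illustration; the paper instead learns this mapping end-to-end with fully-connected and SoftMax layers:

```python
import numpy as np

ref_ages = np.array([20, 30, 40, 50])   # ages of reference faces (invented)
votes_older = np.array([1, 1, 0, 0])    # 1 = annotators judged the query older

ages = np.arange(15, 61)                # candidate age grid
score = np.zeros(ages.shape, dtype=float)
for r, v in zip(ref_ages, votes_older):
    # Candidate ages consistent with a vote gain score; the rest lose it.
    consistent = (ages > r) if v else (ages <= r)
    score += np.where(consistent, 1.0, -1.0)

posterior = np.exp(score - score.max())
posterior /= posterior.sum()            # softmax over the age grid
print(ages[posterior.argmax()])         # an age consistent with all votes
```

The resulting posterior concentrates on the 31-40 band, the only ages consistent with all four votes, matching the observation that relative comparisons are easier to collect than exact ages.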
Deep Adaptation of Adult-Child Facial Expressions by Fusing Landmark Features
Imaging of facial affects may be used to measure psychophysiological
attributes of children through their adulthood, especially for monitoring
lifelong conditions like Autism Spectrum Disorder. Deep convolutional neural
networks have shown promising results in classifying facial expressions of
adults. However, classifier models trained with adult benchmark data are
unsuitable for learning child expressions due to discrepancies in
psychophysical development. Similarly, models trained with child data perform
poorly in adult expression classification. We propose domain adaptation to
concurrently align distributions of adult and child expressions in a shared
latent space to ensure robust classification of either domain. Furthermore, age
variations in facial images are studied in age-invariant face recognition yet
remain unleveraged in adult-child expression classification. We take
inspiration from multiple fields and propose deep adaptive FACial Expressions
fusing BEtaMix SElected Landmark Features (FACE-BE-SELF) for adult-child facial
expression classification. For the first time in the literature, a mixture of
Beta distributions is used to decompose and select facial features based on
correlations with expression, domain, and identity factors. We evaluate
FACE-BE-SELF on two pairs of adult-child data sets. Our proposed FACE-BE-SELF
approach outperforms adult-child transfer learning and other baseline domain
adaptation methods in aligning latent representations of adult and child
expressions.
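Aligning adult and child latent distributions can be illustrated with a generic discrepancy measure such as the RBF-kernel maximum mean discrepancy (MMD). This is a stand-in sketch on toy data, not the FACE-BE-SELF adaptation objective:

```python
import numpy as np

def mmd_rbf(x, y, gamma=0.25):
    """Squared maximum mean discrepancy with an RBF kernel, a common
    criterion for measuring (and, used as a loss, reducing) the mismatch
    between two latent feature distributions."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
adult = rng.standard_normal((100, 4))          # toy adult latent codes
child = rng.standard_normal((100, 4)) + 0.5    # child codes with a domain shift
other = rng.standard_normal((100, 4))          # fresh sample, adult distribution

# The shifted (child) domain shows a larger discrepancy from the adult codes
# than a second sample drawn from the adult distribution does.
print(mmd_rbf(adult, child) > mmd_rbf(adult, other))
```

Minimizing such a discrepancy between the two domains' latent codes, jointly with the classification loss, is one standard way to obtain a shared space in which a single classifier serves both domains.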
Image-based family verification in the wild
Facial image analysis has been an important subject of study in the communities of pattern recognition and computer vision. Facial images contain much information about the person they belong to: identity, age, gender, ethnicity, expression and many more. For that reason, the analysis of facial images has many applications in real-world problems such as face recognition, age estimation, gender classification and facial expression recognition.
Visual kinship recognition is a new research topic in the scope of facial image analysis, and it is essential for many real-world applications. However, there exist only a few practical vision systems capable of handling such tasks; vision technology for kinship-based problems has not matured enough to be applied to real-world problems, which leads to unsatisfactory performance on real-world datasets.
Kinship verification is the task of determining pairwise kin relations for a pair of given images. It can be viewed as a typical binary classification problem, i.e., a face pair is either related by kinship or it is not. Prior research works have addressed the kinship types for which pre-existing datasets provide images, annotations and a verification task protocol: namely, father-son, father-daughter, mother-son and mother-daughter.
The main objective of this Master's work is the study and development of feature selection and fusion for the problem of family verification from facial images.
To achieve this objective, one main task is addressed: a comparative study of face descriptors covering classic descriptors as well as deep descriptors.
The main contributions of this Thesis work are:
1. Studying the state of the art of the problem of family verification in images.
2. Implementing and comparing several criteria that correspond to different face representations (Local Binary Patterns (LBP), Histograms of Oriented Gradients (HOG), deep descriptors).
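As a sketch of the first descriptor listed, a basic 8-neighbour LBP histogram can be computed with NumPy. This is a minimal variant; the thesis may use different radii, uniform patterns, or block-wise histograms:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Patterns: each interior pixel is
    encoded by which of its neighbours are at least as bright, and the
    resulting codes are histogrammed into a 256-bin texture descriptor."""
    c = img[1:-1, 1:-1]                       # interior (center) pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                  # normalized 256-bin histogram

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64))    # toy grayscale face crop
desc = lbp_histogram(face)
print(desc.shape)
```

For verification, such a histogram would be extracted from each face of a pair and the pair classified as kin or non-kin from a distance or a learned similarity over the two descriptors.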
Gender Classification from Facial Images
Gender classification based on facial images has received increased attention in the computer vision community. In this work, a comprehensive evaluation of state-of-the-art gender classification methods is carried out on publicly available databases and extended to real-life face images, where face detection and face normalization are essential for the success of the system. Next, the possibility of predicting gender from face images acquired in the near-infrared spectrum (NIR) is explored. In this regard, the following two questions are addressed: (a) Can gender be predicted from NIR face images; and (b) Can a gender predictor learned using visible (VIS) images operate successfully on NIR images and vice-versa? The experimental results suggest that NIR face images do have some discriminatory information pertaining to gender, although the degree of discrimination is noticeably lower than that of VIS images. Further, the use of an illumination normalization routine may be essential for facilitating cross-spectral gender prediction. By formulating the problem of gender classification in the framework of both visible and near-infrared images, guidelines for performing gender classification in a real-world scenario are provided, along with the strengths and weaknesses of each methodology. Finally, the general problem of attribute classification is addressed, where features such as expression, age and ethnicity are derived from a face image.
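One simple illumination normalization routine of the kind the abstract refers to is histogram equalization; the sketch below is a generic implementation, not necessarily the routine used in the study:

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization: remap intensities through the normalized
    cumulative distribution so the output uses the full 0-255 range.
    A simple way to reduce illumination differences (e.g., VIS vs. NIR)
    before feeding faces to a gender classifier."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize CDF to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

rng = np.random.default_rng(1)
dim_face = rng.integers(0, 64, size=(32, 32))   # under-exposed toy image
norm = hist_equalize(dim_face)
print(dim_face.max(), norm.max())               # dynamic range is stretched
```

Applying the same normalization to both spectra gives the classifier inputs with comparable contrast, which is one plausible reason such a routine helps cross-spectral prediction.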
An image-based children age range verification and classification based on facial features angle distribution and face shape elliptical ratio
Verifying children is much easier than verifying adults, based on physical and body appearance. However, it would be rather difficult to verify a child's age referring only to facial properties. Therefore, this research presents an image-based method to classify children from adults and to verify a child's age range. The method consists of two main stages: the process to distinguish children from adults based on an input facial image, and the process to verify the child's age range. The classification and verification algorithm is based on the face-shape elliptical ratio and the facial-feature angle distribution. The angles formed on human face images were calculated from selected facial-feature landmark points. The method was tested on the FG-NET aging database. The classification of children from adults and the verification of children's age range are implemented using SVM and Multi-SVM classification. The results show an accuracy of 92% in classifying children from adults, more accurate than previous works.
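The two geometric cues named above, facial-feature angles at landmark points and the face-shape elliptical ratio, can be sketched as follows. The landmark coordinates and measurements are invented for illustration, not taken from the study:

```python
import numpy as np

def angle_at(p, a, b):
    """Angle (in degrees) at vertex p formed by points a and b: the kind
    of facial-feature angle aggregated over landmark triplets."""
    v1, v2 = np.asarray(a) - p, np.asarray(b) - p
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical landmark coordinates (x, y): nose tip and the two eye centres.
nose = np.array([50.0, 60.0])
left_eye, right_eye = np.array([35.0, 40.0]), np.array([65.0, 40.0])
eye_angle = angle_at(nose, left_eye, right_eye)

# Elliptical ratio: face width over face height of a bounding ellipse;
# children tend toward rounder faces, i.e., a ratio closer to 1.
face_w, face_h = 80.0, 104.0     # toy measurements
ratio = face_w / face_h

print(round(eye_angle, 1), round(ratio, 3))
```

A feature vector of several such angles plus the ratio would then be fed to the SVM (child vs. adult) and a multi-class SVM (age range) as described.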
Recognizing Emotions Conveyed through Facial Expressions
Emotional communication is a key element of habilitation care of persons with dementia. It is therefore highly preferable for assistive robots that are used to supplement human care provided to persons with dementia to possess the ability to recognize and respond to emotions expressed by those being cared for. Facial expressions are one of the key modalities through which emotions are conveyed. This work focuses on computer vision-based recognition of facial expressions of emotions conveyed by the elderly.
Although there has been much work on automatic facial expression recognition, the algorithms have been experimentally validated primarily on young faces; facial expressions on older faces have been largely excluded. This is because the facial expression databases available and used in facial expression recognition research so far do not contain images of facial expressions of people above the age of 65 years. To overcome this problem, we adopt a recently published database, namely the FACES database, which was developed to address exactly this problem in the area of human behavioural research. The FACES database contains 2052 images of six different facial expressions, with almost identical and systematic representation of the young, middle-aged and older age-groups.
In this work, we evaluate and compare the performance of two of the existing image-based approaches for facial expression recognition, over a broad spectrum of ages ranging from 19 to 80 years. The evaluated systems use Gabor filters and uniform local binary patterns (LBP) for feature extraction, and AdaBoost.MH with a multi-threshold stump learner for expression classification. We have experimentally validated the hypotheses that facial expression recognition systems trained only on young faces perform poorly on middle-aged and older faces, and that such systems confuse ageing-related facial features on neutral faces with other expressions of emotions. We also identified that, among the three age-groups, the middle-aged group provides the best generalization performance across the entire age spectrum. The performance of the systems was also compared to the performance of humans in recognizing facial expressions of emotions. Some similarities were observed, such as difficulty in recognizing the expressions on older faces, and difficulty in recognizing the expression of sadness.
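The Gabor-filter feature extraction used by the evaluated systems can be sketched by generating a bank of oriented kernels; the parameter values below are illustrative, not those of the study:

```python
import numpy as np

def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a cosine
    carrier, oriented at angle theta. Convolving a face image with a bank
    of these at several orientations and scales yields texture features."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A small bank of 8 orientations at one scale (a full pipeline would use
# several scales and feed the filter responses to the boosted classifier).
kernels = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
print(len(kernels), kernels[0].shape)
```

Each kernel responds most strongly to edges and wrinkle-like structures at its orientation, which is also why ageing-related features on neutral faces can masquerade as expression cues for such a system.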
The findings of our work establish the need for developing approaches for facial expression recognition that are robust to the effects of ageing on the face. The scientific results of our work can be used as a basis to guide future research in this direction.