4 research outputs found
A novel database of Children's Spontaneous Facial Expressions (LIRIS-CSE)
Computing environments are moving toward human-centered rather than computer-centered designs, and humans tend to communicate a wealth of information through affective states or expressions. Traditional Human Computer Interaction (HCI) systems ignore the bulk of the information communicated through those affective states and cater only for the user's intentional input. Generally, standardized databases are needed to evaluate and benchmark different facial expression analysis algorithms and to enable meaningful comparison. In the absence of comparative tests on such standardized databases, it is difficult to determine the relative strengths and weaknesses of different facial expression recognition algorithms. In this article we present a novel video database of Children's Spontaneous facial Expressions (LIRIS-CSE). The proposed video database contains the six basic spontaneous facial expressions shown by 12 ethnically diverse children between the ages of 6 and 12 years, with a mean age of 7.3 years. To the best of our knowledge, this database is the first of its kind, as it records and shows spontaneous facial expressions of children. The few previously available databases of children's expressions all show posed or exaggerated expressions, which differ from spontaneous or natural expressions. Thus, this database will be a milestone for human behavior researchers and an excellent resource for the vision community for benchmarking and comparing results. In this article we also propose a framework for automatic expression recognition based on a convolutional neural network (CNN) architecture with a transfer learning approach. The proposed architecture achieved an average classification accuracy of 75% on our proposed database, LIRIS-CSE.
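The transfer-learning framework mentioned in the abstract (a pretrained CNN backbone with a newly trained classification head) can be sketched roughly as follows. This is a minimal numpy illustration, not the authors' implementation: a fixed random projection stands in for the pretrained convolutional features, and the synthetic data stands in for LIRIS-CSE frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained CNN backbone (assumption for illustration).
# In practice this would be a network pretrained on a large image corpus;
# its weights are frozen, i.e. never updated during training.
W_backbone = rng.normal(size=(512, 64))

def extract_features(images):
    """Frozen feature extractor: images (N, 512) -> features (N, 64)."""
    return np.maximum(images @ W_backbone, 0.0)  # ReLU

# Trainable classification head for the six basic expressions.
n_classes = 6
W_head = np.zeros((64, n_classes))
b_head = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic "training set" (assumption): real use would load video frames.
X = rng.normal(size=(120, 512))
y = rng.integers(0, n_classes, size=120)
F = extract_features(X)            # computed once, since the backbone is frozen
Y = np.eye(n_classes)[y]           # one-hot labels

lr = 0.1
for _ in range(200):               # gradient descent on the head only
    P = softmax(F @ W_head + b_head)
    W_head -= lr * F.T @ (P - Y) / len(F)   # cross-entropy gradient
    b_head -= lr * (P - Y).mean(axis=0)

train_acc = (softmax(F @ W_head + b_head).argmax(axis=1) == y).mean()
print(f"head-only training accuracy: {train_acc:.2f}")
```

Freezing the backbone and training only the head is the standard remedy when, as here, the target dataset is far too small to train a deep CNN from scratch.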
Progressive ShallowNet for large scale dynamic and spontaneous facial behaviour analysis in children
COVID-19 has severely disrupted every aspect of society and left a negative impact on our lives. Resisting the temptation to engage in face-to-face social connection is not as easy as we imagine. Breaking ties within our social circle makes us lonely and isolated, which in turn increases the likelihood of depression-related disease and can even lead to death by increasing the risk of heart disease. Not only adults but children are equally affected, where the contribution of emotional competence to social competence has long-term implications. Early identification of skills and deficits in recognizing and expressing facial emotions may help prevent low social functioning, as deficits in young children's ability to differentiate human emotions can lead to social functioning impairment. However, existing work focuses mostly on adult emotion recognition and ignores emotion recognition in children. Inspired by the working of pyramidal cells in the cerebral cortex, in this paper we present progressive lightweight shallow learning for classification, efficiently utilizing skip connections for spontaneous facial behaviour recognition in children. Unlike earlier deep neural networks, we limit the alternative path for the gradient in the earlier part of the network and widen it gradually with the depth of the network. By limiting the residual path locally, Progressive ShallowNet is not only able to explore more of the feature space but also resolves the over-fitting issue for smaller data, making the network less vulnerable to perturbations. We have conducted extensive experiments on a benchmark for facial behaviour analysis in children, which showed significant performance gains over comparable methods.
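The idea of a residual path that is suppressed in the early part of the network and opened up with depth might be sketched as follows. This toy numpy forward pass is an illustration of the gating concept only; the linear blocks, the linear gate schedule, and the sizes are assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)
depth, width = 6, 32
weights = [rng.normal(scale=0.1, size=(width, width)) for _ in range(depth)]

def forward(x):
    """Residual net whose skip path is suppressed early and opened with depth."""
    for i, W in enumerate(weights):
        gate = i / (depth - 1)        # 0 at the first block, 1 at the last
        h = np.maximum(x @ W, 0.0)    # block transform F(x)
        x = h + gate * x              # limited residual path: F(x) + gate * x
    return x

x = rng.normal(size=(4, width))
out = forward(x)
print(out.shape)
```

With `gate = 0` the first block behaves like a plain feed-forward layer (no shortcut for the gradient), while later blocks increasingly resemble standard residual blocks.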
Face Expression Classification in Children Using CNN
Turbulent emotions can be recognized from facial expressions. Compared with adults, children's facial expressions are more expressive for positive emotions and more ambiguous for negative emotions, which makes them much more difficult to recognize. Ambiguity in negative emotions means, for example, that when children are angry they sometimes show an expressionless face, making it difficult to know what emotion the child is experiencing. Therefore, this research proposes using a Convolutional Neural Network with the ResNet-50 architecture. According to [1], CNN ResNet-50 is superior to other facial recognition methods, specifically in the classification of facial expressions. CNN ResNet-50 generates a model during the training process, and that model is then used during the testing process. The dataset used is the Children's Spontaneous facial Expressions (LIRIS-CSE) data proposed by [2]. CNN ResNet-50 can identify children's expressions well, including expressions of anger, disgust, fear, happiness, sadness, and surprise. The results showed a very significant increase in accuracy, reaching 99.89% on the test data.
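The building block that gives the ResNet family (including ResNet-50) its name is the identity-shortcut residual block, y = relu(F(x) + x). A minimal numpy sketch of one such block, with illustrative sizes and a two-layer transform standing in for the real convolutional bottleneck:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16
W1 = rng.normal(scale=0.1, size=(d, d))
W2 = rng.normal(scale=0.1, size=(d, d))

def residual_block(x):
    """y = relu(F(x) + x): the shortcut lets gradients bypass F entirely."""
    h = np.maximum(x @ W1, 0.0)    # first layer + ReLU
    h = h @ W2                     # second layer (no activation yet)
    return np.maximum(h + x, 0.0)  # add the identity shortcut, then ReLU

x = rng.normal(size=(3, d))
y = residual_block(x)
print(y.shape)                     # (3, 16)
```

ResNet-50 stacks dozens of such blocks; the identity shortcut is what makes training at that depth feasible.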
Deep Adaptation of Adult-Child Facial Expressions by Fusing Landmark Features
Imaging of facial affects may be used to measure psychophysiological
attributes of children through their adulthood, especially for monitoring
lifelong conditions like Autism Spectrum Disorder. Deep convolutional neural
networks have shown promising results in classifying facial expressions of
adults. However, classifier models trained with adult benchmark data are
unsuitable for learning child expressions due to discrepancies in
psychophysical development. Similarly, models trained with child data perform
poorly in adult expression classification. We propose domain adaptation to
concurrently align distributions of adult and child expressions in a shared
latent space to ensure robust classification of either domain. Furthermore, age
variations in facial images are studied in age-invariant face recognition yet
remain unleveraged in adult-child expression classification. We take
inspiration from multiple fields and propose deep adaptive FACial Expressions
fusing BEtaMix SElected Landmark Features (FACE-BE-SELF) for adult-child facial
expression classification. For the first time in the literature, a mixture of
Beta distributions is used to decompose and select facial features based on
correlations with expression, domain, and identity factors. We evaluate
FACE-BE-SELF on two pairs of adult-child data sets. Our proposed FACE-BE-SELF
approach outperforms adult-child transfer learning and other baseline domain
adaptation methods in aligning latent representations of adult and child
expressions.
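The notion of aligning adult and child feature distributions in a shared latent space can be illustrated with a simple correlation-alignment step, in which source features are whitened and then re-colored with the target covariance. This is a generic CORAL-style domain-adaptation baseline for illustration, not the FACE-BE-SELF method itself; the synthetic "adult" and "child" features are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
# Synthetic source (adult) and target (child) features with differing covariances.
adult = rng.normal(size=(300, d)) @ rng.normal(size=(d, d))
child = rng.normal(size=(300, d))

def coral_align(source, target, eps=1e-6):
    """Whiten source features, then re-color them with the target covariance."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)

    def sqrtm(m, inv=False):
        # Matrix (inverse) square root via eigendecomposition (m is symmetric PSD).
        vals, vecs = np.linalg.eigh(m)
        vals = np.clip(vals, eps, None)
        return vecs @ np.diag(vals ** (-0.5 if inv else 0.5)) @ vecs.T

    return source @ sqrtm(cs, inv=True) @ sqrtm(ct)

aligned = coral_align(adult - adult.mean(0), child - child.mean(0))
gap_before = np.linalg.norm(np.cov(adult, rowvar=False) - np.cov(child, rowvar=False))
gap_after = np.linalg.norm(np.cov(aligned, rowvar=False) - np.cov(child, rowvar=False))
print(gap_before > gap_after)   # the covariance gap shrinks after alignment
```

A classifier trained on the aligned source features then faces target data whose second-order statistics it has already seen, which is the intuition behind the stronger, learned alignment that domain-adaptation methods like the one above pursue.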