Regression-based Multi-View Facial Expression Recognition
We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal views to the frontal view, where the expressions can then be recognised using a state-of-the-art facial expression recognition method. To learn the mapping functions we investigate four regression models: Linear Regression (LR), Support Vector Regression (SVR), Relevance Vector Regression (RVR) and Gaussian Process Regression (GPR). Our extensive experiments on the CMU Multi-PIE facial expression database show that the proposed scheme outperforms view-specific classifiers while using considerably less training data.
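As an illustrative sketch only (not the authors' implementation), the point-mapping idea can be prototyped with scikit-learn: learn a regression from non-frontal 2-D landmark vectors to their frontal-view counterparts, then feed the mapped points to any frontal-view classifier. The landmark arrays below are random placeholders, and RVR is omitted because scikit-learn has no implementation of it.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.multioutput import MultiOutputRegressor

    rng = np.random.default_rng(0)
    n_points = 68                                # e.g. 68 facial landmarks
    X = rng.normal(size=(500, 2 * n_points))     # non-frontal (x, y) coordinates
    Y = rng.normal(size=(500, 2 * n_points))     # corresponding frontal coordinates

    models = {
        "LR": LinearRegression(),
        "SVR": MultiOutputRegressor(SVR(kernel="rbf")),  # SVR is single-output
        "GPR": GaussianProcessRegressor(),
    }
    for name, model in models.items():
        model.fit(X, Y)
        frontalised = model.predict(X[:1])       # mapped points for one face
        print(name, frontalised.shape)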
Improving Multi-view Facial Expression Recognition in Unconstrained Environments
Facial expression and emotion-related research has been a longstanding activity in psychology, while computerized/automatic facial expression recognition of emotion is a relatively recent and still emerging but active research area. Although many automatic computer systems have been proposed to address facial expression recognition problems, the majority of them fail to cope with the requirements of many practical application scenarios, arising either from environmental factors or from unexpected behavioural bias introduced by the users, such as illumination conditions and large head-pose variation relative to the camera. In this thesis, two of the most influential and common issues raised when applying automatic facial expression recognition systems in practical scenarios are comprehensively explored and investigated. Through a series of experiments carried out under a proposed texture-based system framework for multi-view facial expression recognition, several novel texture feature representations are introduced for implementing multi-view facial expression recognition systems in practical environments, achieving state-of-the-art performance. In addition, a variety of novel categorization schemes for the configurations of an automatic multi-view facial expression recognition system are presented to address the impractical discrete categorization of facial expressions of emotion in real-world scenarios. A significant improvement is observed when using the proposed categorizations within the proposed system framework, using a novel implementation of the block-based local ternary pattern approach.
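For illustration, a minimal block-based local ternary pattern (LTP) descriptor might look as follows; the threshold, grid size and histogram layout are assumptions, not the thesis' actual configuration.

    import numpy as np

    def ltp_histograms(image, t=5, grid=(4, 4)):
        # Code each interior pixel's 8-neighbourhood ternarily against a
        # threshold band [centre - t, centre + t], split the ternary code
        # into 'upper' (+1) and 'lower' (-1) binary patterns, then
        # concatenate per-block histograms of both pattern maps.
        img = image.astype(np.int32)
        h, w = img.shape
        centre = img[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        upper = np.zeros_like(centre)
        lower = np.zeros_like(centre)
        for bit, (dy, dx) in enumerate(offsets):
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            upper |= (neigh >= centre + t).astype(np.int32) << bit
            lower |= (neigh <= centre - t).astype(np.int32) << bit
        feats = []
        for code in (upper, lower):
            for rows in np.array_split(code, grid[0], axis=0):
                for cell in np.array_split(rows, grid[1], axis=1):
                    feats.append(np.bincount(cell.ravel(), minlength=256))
        return np.concatenate(feats)

    face = np.random.randint(0, 256, size=(64, 64))
    print(ltp_histograms(face).shape)   # (8192,) = 2 patterns * 16 blocks * 256 bins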
Gaussian process domain experts for model adaptation in facial behavior analysis
We present a novel approach for supervised domain adaptation that is based upon the probabilistic framework of Gaussian processes (GPs). Specifically, we introduce domain-specific GPs as local experts for facial expression classification from face images. The adaptation of the classifier is facilitated in a probabilistic fashion by conditioning the target expert on multiple source experts. Furthermore, in contrast to existing adaptation approaches, we also learn a target expert solely from the available target data. A single, confident classifier is then obtained by combining the predictions from the multiple experts based on their confidence. Learning of the model is efficient and requires no retraining/reweighting of the source classifiers. We evaluate the proposed approach on two publicly available datasets for multi-class (MultiPIE) and multi-label (DISFA) facial expression classification. To this end, we perform adaptation over two contextual factors: where (view) and who (subject). We show in our experiments that the proposed approach consistently outperforms both the source and target classifiers, while using as few as 30 target examples. It also outperforms the state-of-the-art approaches for supervised domain adaptation.
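A schematic of confidence-based fusion of GP experts, shown here for regression with scikit-learn; the precision-weighted averaging below is a simplification in the spirit of the paper, not its exact probabilistic conditioning, and all data are placeholders.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(1)
    X_src, y_src = rng.normal(size=(200, 10)), rng.normal(size=200)  # source domain
    X_tgt, y_tgt = rng.normal(size=(30, 10)), rng.normal(size=30)    # 30 target examples
    X_new = rng.normal(size=(5, 10))

    experts = [GaussianProcessRegressor().fit(X_src, y_src),   # source expert
               GaussianProcessRegressor().fit(X_tgt, y_tgt)]   # target expert

    means, stds = zip(*(gp.predict(X_new, return_std=True) for gp in experts))
    means, stds = np.array(means), np.array(stds)
    precision = 1.0 / (stds ** 2 + 1e-9)       # confidence = inverse variance
    fused = (precision * means).sum(axis=0) / precision.sum(axis=0)
    print(fused)                                # confident experts dominate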
Contrastive Learning of View-Invariant Representations for Facial Expressions Recognition
Although there has been much progress in the area of facial expression recognition (FER), most existing methods suffer when presented with images captured from viewing angles that are non-frontal and substantially different from those used during training. In this paper, we propose ViewFX, a novel view-invariant FER framework based on contrastive learning, capable of accurately classifying facial expressions regardless of the input viewing angle at inference. ViewFX learns view-invariant expression features using a proposed self-supervised contrastive loss, which brings together different views of the same subject with a particular expression in the embedding space. We also introduce a supervised contrastive loss to push the learnt view-invariant features of each expression away from those of other expressions. Since facial expressions are often distinguished by very subtle differences in the learned feature space, we incorporate the Barlow twins loss to reduce redundancy and correlations in the learned representations. The proposed method is a substantial extension of our previously proposed CL-MEx, which had only a self-supervised loss. We test the proposed framework on two public multi-view facial expression recognition datasets, KDEF and DDCF. The experiments demonstrate that our approach outperforms previous work in the area and sets a new state of the art for both datasets, while showing considerably less sensitivity to challenging angles and to the number of output labels used for training. We also perform detailed sensitivity and ablation experiments to evaluate the impact of the different components of our model as well as its sensitivity to different parameters.
Comment: Accepted in ACM Transactions on Multimedia Computing, Communications, and Applications
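To make the redundancy-reduction term concrete, here is a minimal PyTorch sketch of a Barlow twins style loss as the abstract describes it; the batch size, embedding width and trade-off weight are hypothetical, not ViewFX's actual settings.

    import torch

    def barlow_twins_loss(z1, z2, lam=5e-3):
        # Cross-correlate the batch-normalised embeddings of two views and
        # push the correlation matrix towards the identity: diagonal terms
        # to 1 (invariance), off-diagonal terms to 0 (decorrelation).
        n, d = z1.shape
        z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
        z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
        c = (z1.T @ z2) / n                          # d x d cross-correlation
        on_diag = (torch.diagonal(c) - 1).pow(2).sum()
        off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
        return on_diag + lam * off_diag

    z_view_a = torch.randn(128, 256)   # embeddings of one viewing angle
    z_view_b = torch.randn(128, 256)   # same faces from another angle
    print(barlow_twins_loss(z_view_a, z_view_b))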
Web-based visualisation of head pose and facial expressions changes: monitoring human activity using depth data
Despite significant recent advances in the fields of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity still presents serious challenges to current concepts. Motivated by the need to generate comprehensible visual representations from different sets of data, we introduce a system capable of monitoring human activity through head pose and facial expression changes, utilising an affordable 3D sensing technology (the Microsoft Kinect sensor). An approach built on discriminative random regression forests was selected in order to rapidly and accurately estimate head pose changes in unconstrained environments. To complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. A lightweight data exchange format (JavaScript Object Notation, JSON) is then employed to manipulate the data extracted from the two aforementioned settings. Such a mechanism can yield a platform for objective and effortless assessment of human activity within the context of serious gaming and human-computer interaction.
Comment: 8th Computer Science and Electronic Engineering Conference (CEEC 2016), University of Essex, UK, 6 pages
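A hypothetical example of what such a JSON exchange could look like in Python; the field names and value ranges are illustrative assumptions, not the authors' actual schema.

    import json

    frame = {
        "timestamp": "2016-09-28T14:03:07Z",
        "head_pose": {"yaw": -12.4, "pitch": 3.1, "roll": 0.8},    # degrees
        "expression": {"label": "happiness",                       # one of the four
                       "scores": {"happiness": 0.81, "anger": 0.05,
                                  "sadness": 0.04, "surprise": 0.10}},
    }
    payload = json.dumps(frame)                      # emitted by the sensing side
    print(json.loads(payload)["head_pose"]["yaw"])   # consumed by the web client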
Robust Facial Expression Recognition with Convolutional Visual Transformers
Facial Expression Recognition (FER) in the wild is extremely challenging due to occlusions, variant head poses, face deformation and motion blur under unconstrained conditions. Although substantial progress has been made in automatic FER over the past few decades, previous studies were mainly designed for lab-controlled FER. Real-world occlusions, variant head poses and other issues definitely increase the difficulty of FER on account of these information-deficient regions and complex backgrounds. Different from previous pure CNN-based methods, we argue that it is feasible and practical to translate facial images into sequences of visual words and perform expression recognition from a global perspective. Therefore, we propose Convolutional Visual Transformers to tackle FER in the wild in two main steps. First, we propose an attentional selective fusion (ASF) module for leveraging the feature maps generated by two-branch CNNs. The ASF captures discriminative information by fusing multiple features with global-local attention. The fused feature maps are then flattened and projected into sequences of visual words. Second, inspired by the success of Transformers in natural language processing, we propose to model relationships between these visual words with global self-attention. The proposed method is evaluated on three public in-the-wild facial expression datasets (RAF-DB, FERPlus and AffectNet). Under the same settings, extensive experiments demonstrate that our method shows superior performance over other methods, setting a new state of the art on RAF-DB with 88.14%, FERPlus with 88.81% and AffectNet with 61.85%. We also conduct a cross-dataset evaluation on CK+ to show the generalization capability of the proposed method.
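As a rough sketch of the visual-word idea (not the authors' code), the second step can be approximated in PyTorch by flattening fused feature maps into tokens and passing them through a Transformer encoder; all dimensions and the seven-class head are placeholder assumptions.

    import torch
    import torch.nn as nn

    class VisualWordTransformer(nn.Module):
        def __init__(self, channels=256, n_classes=7):
            super().__init__()
            self.proj = nn.Linear(channels, channels)     # feature -> word embedding
            layer = nn.TransformerEncoderLayer(d_model=channels, nhead=8,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(channels, n_classes)

        def forward(self, fmap):                          # fmap: (B, C, H, W)
            tokens = fmap.flatten(2).transpose(1, 2)      # (B, H*W, C) visual words
            tokens = self.encoder(self.proj(tokens))      # global self-attention
            return self.head(tokens.mean(dim=1))          # pooled expression logits

    fused = torch.randn(4, 256, 14, 14)           # stand-in for ASF-fused maps
    print(VisualWordTransformer()(fused).shape)   # torch.Size([4, 7])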
- …