Facial Landmark Based Region of Interest Localization for Deep Facial Expression Recognition
Automated facial expression recognition has gained much attention in recent years due to growing application areas such as computer-animated agents, sociable robots and human-computer interaction. Building a reliable facial expression recognition system through machine learning remains a challenging task, particularly on databases with a large number of images. Convolutional Neural Network (CNN) architectures have been proposed to exploit large amounts of training data for better accuracy. For CNNs, no single architecture performs best across all tasks. In addition, the representation of the input image is as important as the architecture and the training data. Therefore, this study examines the performance of various CNN architectures trained on different regions of interest of the same input data. Experiments are performed on three distinct CNN architectures with three different crops of the same dataset. Results show that by appropriately localizing the facial region and selecting the right CNN architecture, it is possible to boost the recognition rate from 84% to 98% while decreasing the training time of the proposed CNN architectures.
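The landmark-based region-of-interest localization described above can be sketched as a simple bounding-box crop around detected landmark points. This is a minimal illustration, not the paper's exact cropping scheme; the `margin` parameter and the function name are assumptions for the example.

```python
import numpy as np

def crop_face_roi(image, landmarks, margin=0.2):
    """Crop a region of interest around facial landmarks.

    image: H x W (x C) array; landmarks: (N, 2) array of (x, y) points.
    The margin expands the landmark bounding box to retain some facial
    context around the expressive region. (Illustrative sketch only;
    the paper's cropping strategy may differ.)
    """
    xs, ys = landmarks[:, 0], landmarks[:, 1]
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    mx, my = margin * (x1 - x0), margin * (y1 - y0)
    h, w = image.shape[:2]
    x0 = max(int(x0 - mx), 0)
    x1 = min(int(x1 + mx), w)
    y0 = max(int(y0 - my), 0)
    y1 = min(int(y1 + my), h)
    return image[y0:y1, x0:x1]
```

In practice the landmarks would come from a standard detector (e.g. a 68-point facial landmark model), and the resulting crop would be resized to the CNN's fixed input size before training.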
3D-CNN for Facial Micro- and Macro-expression Spotting on Long Video Sequences using Temporal Oriented Reference Frame
Facial expression spotting is the preliminary step for micro- and
macro-expression analysis. The task of reliably spotting such expressions in
video sequences is currently unsolved. The current best systems depend upon
optical flow methods to extract regional motion features, before categorisation
of that motion into a specific class of facial movement. Optical flow is
susceptible to drift error, which introduces a serious problem for motions with
long-term dependencies, such as high frame-rate macro-expression. We propose a
purely deep learning solution which, rather than tracking frame-differential
motion, compares each frame, via a convolutional model, with two temporally
local reference frames. Reference frames are sampled according to calculated
micro- and macro-expression durations. We show that our solution achieves
state-of-the-art performance (F1-score of 0.126) in a dataset of high
frame-rate (200 fps) long video sequences (SAMM-LV) and is competitive in a low
frame-rate (30 fps) dataset (CAS(ME)2). In this paper, we document our deep
learning model and parameters, including how we use local contrast
normalisation, which we show is critical for optimal results. We surpass a
limitation in existing methods, and advance the state of deep learning in the
domain of facial expression spotting.
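The temporally oriented reference-frame scheme described above can be sketched as pairing each frame with an earlier frame at a fixed offset derived from the expected expression duration. This is a minimal sketch; the function name and the clamping at the sequence start are assumptions, and the paper samples two reference frames (one each for the micro- and macro-expression durations).

```python
def reference_frame_pairs(num_frames, k):
    """Pair each frame index i with a reference frame k steps earlier,
    clamped to the start of the sequence. k would be set from the
    expected micro- or macro-expression duration (in frames); calling
    this twice with k_micro and k_macro yields the two temporally
    local references per frame.
    """
    return [(i, max(i - k, 0)) for i in range(num_frames)]
```

Each (frame, reference) pair would then be fed to the convolutional model, which scores the motion between them rather than accumulating optical flow across the whole sequence, avoiding long-term drift.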
Shallow Triple Stream Three-dimensional CNN (STSTNet) for Micro-expression Recognition
In recent years, the state-of-the-art in facial micro-expression recognition
has been significantly advanced by deep neural networks. The robustness of
deep learning has yielded promising performance beyond that of traditional
handcrafted approaches. Most works in the literature have emphasized increasing the
depth of networks and employing highly complex objective functions to learn
more features. In this paper, we design a Shallow Triple Stream
Three-dimensional CNN (STSTNet) that is computationally light whilst capable of
extracting discriminative high level features and details of micro-expressions.
The network learns from three optical flow features (i.e., optical strain,
horizontal and vertical optical flow fields) computed based on the onset and
apex frames of each video. Our experimental results demonstrate the
effectiveness of the proposed STSTNet, which obtained an unweighted average
recall rate of 0.7605 and unweighted F1-score of 0.7353 on the composite
database consisting of 442 samples from the SMIC, CASME II and SAMM databases. Comment: 5 pages, 1 figure, accepted and published in IEEE FG 201
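The three input channels described above (optical strain plus horizontal and vertical flow) can be sketched from a given onset-to-apex flow field. Optical strain is the magnitude of the symmetric gradient of the flow; this is a minimal NumPy sketch, and the function names are assumptions for the example.

```python
import numpy as np

def strain_magnitude(u, v):
    """Optical strain magnitude from horizontal (u) and vertical (v)
    flow fields: eps = 0.5 * (grad F + grad F^T), reduced to a scalar
    per pixel as sqrt(eps_xx^2 + eps_yy^2 + 2 * eps_xy^2)."""
    du_dy, du_dx = np.gradient(u)   # gradient along rows (y), cols (x)
    dv_dy, dv_dx = np.gradient(v)
    eps_xy = 0.5 * (du_dy + dv_dx)  # shear component
    return np.sqrt(du_dx**2 + dv_dy**2 + 2.0 * eps_xy**2)

def ststnet_input(u, v):
    """Stack the three channels (strain, u, v) that would feed the
    shallow triple-stream network. (Illustrative; the actual
    preprocessing pipeline may differ.)"""
    return np.stack([strain_magnitude(u, v), u, v], axis=-1)
```

The u and v fields themselves would come from an optical flow estimator applied between the onset and apex frames of each clip.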
Deep Structure Inference Network for Facial Action Unit Recognition
Facial expressions are combinations of basic components called Action Units
(AU). Recognizing AUs is key for developing general facial expression analysis.
In recent years, most efforts in automatic AU recognition have been dedicated
to learning combinations of local features and to exploiting correlations
between Action Units. In this paper, we propose a deep neural architecture that
tackles both problems by combining learned local and global features in its
initial stages and replicating a message passing algorithm between classes
similar to a graphical model inference approach in later stages. We show that
by training the model end-to-end with increased supervision we improve
state-of-the-art performance by 5.3% and 8.2% on the BP4D and DISFA datasets,
respectively.
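The class-wise message passing described above can be sketched as an iterative refinement in which each AU's score is updated from the (transformed) scores of the other AUs. This is a minimal sketch, not the paper's architecture: the update rule, the correlation matrix `W`, and the number of steps are all illustrative assumptions, and in the actual model such weights are learned end-to-end.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def au_message_passing(logits, W, steps=3):
    """Refine per-AU logits by passing messages between classes,
    loosely mimicking graphical-model inference. W[i, j] weights
    AU j's influence on AU i (learned in the real model; supplied
    here for illustration)."""
    z = logits.copy()
    for _ in range(steps):
        # each AU's logit is its local evidence plus messages
        # aggregated from the current beliefs of the other AUs
        z = logits + W @ sigmoid(z)
    return sigmoid(z)
```

With a zero correlation matrix the refinement reduces to independent per-AU sigmoid classification; non-zero entries let co-occurring AUs reinforce or suppress one another.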