Objective Classes for Micro-Facial Expression Recognition
Micro-expressions are brief, spontaneous facial expressions that appear on a face when a person conceals an emotion, differing from ordinary facial expressions in subtlety and duration. Currently, the emotion classes in the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using the LBP-TOP, HOOF and HOG 3D feature descriptors. The experiments are evaluated on two benchmark FACS-coded datasets: CASME II and SAMM. The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
Comment: 11 pages, 4 figures and 5 tables. This paper will be submitted for journal review
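As a rough illustration of the relabeling idea described above, the sketch below assigns a class from a sample's coded Action Units rather than from its self-reported emotion. The AU groupings and class names here are hypothetical placeholders for the example, not the class definitions used in the paper.

```python
# Hypothetical AU-to-class mapping; the actual groupings in the paper differ.
AU_GROUPS = {
    frozenset({"AU6", "AU12"}): "class_1",
    frozenset({"AU4"}): "class_2",
}

def au_label(sample_aus, groups=AU_GROUPS, default="other"):
    """Return the first class whose AU set is contained in the sample's coded AUs.

    Labeling from objective AU codes sidesteps the self-report bias that
    emotion-word labels carry.
    """
    aus = set(sample_aus)
    for group, label in groups.items():
        if group <= aus:  # all AUs of the group are present in the sample
            return label
    return default
```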
Micro Expression Spotting through Appearance Based Descriptor and Distance Analysis
Micro-Expressions (MEs) are subtle, short-lived expressions that reveal the hidden emotions of human beings. Because processing an entire video imposes a heavy computational burden and consumes considerable time, ME spotting is required to locate the exact frames in which the ME movement occurs. Spotting is regarded as a primary step in ME recognition. This paper proposes a new method for ME spotting comprising three stages: pre-processing, feature extraction and discrimination. Pre-processing aligns the facial region in every frame based on three landmark points derived from three landmark regions; the alignment uses an in-plane rotation matrix that rotates non-aligned coordinates into aligned coordinates. For feature extraction, two texture-based descriptors are deployed: Local Binary Pattern (LBP) and Local Mean Binary Pattern (LMBP). Finally, at the discrimination stage, Feature Difference Analysis is employed using Chi-Squared Distance (CSD), and the distance of each frame is compared with a threshold to spot three frames, namely Onset, Apex and Offset. Simulations are conducted on the standard CASME dataset, and performance is verified through Feature Difference and F1-Score. The obtained results show that the proposed method is superior to the state-of-the-art methods.
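The spotting pipeline above can be sketched roughly as follows. This is a minimal plain-Python illustration, not the authors' implementation: it uses only basic LBP (omitting LMBP and the landmark-based alignment), and the neighbourhood offset `k` and the threshold value are hypothetical choices for the example.

```python
def lbp_histogram(frame):
    """Normalized 8-neighbour Local Binary Pattern histogram of one grayscale frame."""
    h, w = len(frame), len(frame[0])
    hist = [0] * 256
    # clockwise 8-neighbourhood offsets, one bit per neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = frame[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if frame[y + dy][x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    total = (h - 2) * (w - 2)
    return [v / total for v in hist]

def chi_squared(p, q, eps=1e-10):
    """Chi-Squared Distance between two normalized histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

def spot(frames, k=2, threshold=0.01):
    """Feature Difference Analysis: flag frames whose LBP histogram differs
    from the average of the k-th preceding and following frames by more than
    the threshold (candidate Onset/Apex/Offset frames)."""
    hists = [lbp_histogram(f) for f in frames]
    flags = []
    for i in range(k, len(hists) - k):
        avg = [(hists[i - k][j] + hists[i + k][j]) / 2 for j in range(256)]
        flags.append(chi_squared(hists[i], avg) > threshold)
    return flags
```

In this sketch a frame is any 2-D grid of grayscale values; a sudden texture change in one frame raises its CSD against its temporal neighbours above the threshold, while a static sequence stays below it.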
A review of automated micro-expression analysis
Micro-expression is a type of facial expression that is manifested for a very short duration. Such expressions are difficult to recognize manually because they involve very subtle facial movements. They often occur unconsciously and are therefore regarded as a basis for identifying real human emotions. Hence, automated micro-expression recognition has recently become a popular research topic. Historically, early research on automated micro-expression analysis utilized traditional machine learning methods, while more recent work has focused on deep learning. Compared to traditional machine learning, which relies on manual feature processing and formulated rules, deep learning networks achieve more accurate micro-expression recognition through an end-to-end methodology, in which the features of interest are extracted optimally during training on a large set of data. This paper reviews the developments and trends in micro-expression recognition from the earlier studies (hand-crafted approaches) to the present studies (deep learning approaches). Important topics covered include the detection of micro-expressions in short videos, apex frame spotting and micro-expression recognition, together with a performance discussion of the reviewed methods. Furthermore, major limitations that hamper the development of automated micro-expression recognition systems are analyzed, followed by recommendations for possible future research directions.
Face Image and Video Analysis in Biometrics and Health Applications
Computer Vision (CV) enables computers and systems to derive meaningful information from acquired visual inputs, such as images and videos, and to make decisions based on the extracted information. Its goal is to acquire, process, analyze, and understand this information by developing theoretical and algorithmic models. Biometrics are distinctive and measurable human characteristics used to label or describe individuals, combining computer vision with knowledge of human physiology (e.g., face, iris, fingerprint) and behavior (e.g., gait, gaze, voice). The face is one of the most informative biometric traits, and many studies have investigated it from the perspectives of various disciplines, ranging from computer vision and deep learning to neuroscience and biometrics. In this work, we analyze face characteristics from digital images and videos in the areas of morphing attack and defense, and autism diagnosis. For face morphing attack generation, we proposed a transformer-based generative adversarial network that produces more visually realistic morphing attacks by combining several losses: face matching distance, a facial landmark-based loss, perceptual loss and pixel-wise mean squared error. In the face morphing attack detection study, we designed a fusion-based few-shot learning (FSL) method to learn discriminative features from face images for few-shot morphing attack detection (FS-MAD), and extended the current binary detection into multiclass classification, namely few-shot morphing attack fingerprinting (FS-MAF). In the autism diagnosis study, we developed a discriminative few-shot learning method to analyze hour-long video data and explored the fusion of facial dynamics for facial trait classification of autism spectrum disorder (ASD) at three severity levels. The results show the outstanding performance of the proposed fusion-based few-shot framework on the dataset.
Besides, we further explored the possibility of performing face micro-expression spotting and feature analysis on autism video data to classify ASD and control groups. The results indicate the effectiveness of subtle facial expression changes for autism diagnosis.