3 research outputs found

    A review of automated micro-expression analysis

    Micro-expression is a type of facial expression that manifests for a very short duration. It is difficult to recognize manually because it involves very subtle facial movements. Such expressions often occur unconsciously, and are therefore regarded as a basis for identifying genuine human emotions. Hence, automated micro-expression recognition has recently become a popular research topic. Historically, early research on automated micro-expression analysis used traditional machine learning methods, while more recent work has focused on deep learning. Compared to traditional machine learning, which relies on manual feature engineering and formulated rules, deep learning networks achieve more accurate micro-expression recognition through an end-to-end methodology, in which the features of interest are extracted optimally during training on a large set of data. This paper reviews the developments and trends in micro-expression recognition, from the earlier studies (hand-crafted approach) to the present studies (deep learning approach). Important topics covered include the detection of micro-expressions in short videos, apex frame spotting, micro-expression recognition, and a performance discussion of the reviewed methods. Furthermore, major limitations that hamper the development of automated micro-expression recognition systems are analyzed, followed by recommendations of possible future research directions.

    Optimal Compact Network for Micro-Expression Analysis System

    Micro-expression analysis is the study of subtle and fleeting facial expressions that convey genuine human emotions. Since such expressions cannot be controlled, many believe they are an excellent way to reveal a human's inner thoughts. Analyzing micro-expressions manually is a very time-consuming and complicated task, hence many researchers have incorporated deep learning techniques to produce a more efficient analysis system. However, the insufficient amount of micro-expression data has limited the networks' ability to be fully optimized, as overfitting is likely to occur if a deeper network is utilized. In this paper, a complete deep learning-based micro-expression analysis system is introduced that covers the two main components of a general automated system, spotting and recognition, along with an additional element of synthetic data augmentation. For the spotting part, an optimized continuous labeling scheme is introduced to spot the apex frame in a video. Once the apex frames have been spotted, they are passed to a generative adversarial network to produce an additional set of augmented apex frames. Meanwhile, for the recognition part, a novel convolutional neural network, coined the Optimal Compact Network (OC-Net), is introduced for the purpose of emotion recognition. The proposed system achieved the best F1-score of 0.69 in categorizing the emotions, with the highest accuracy of 79.14%. In addition, the synthetic data generated for the training phase contributed a performance improvement of at least 0.61% for all tested networks. Therefore, the proposed optimized and compact deep learning system is suitable for mobile-based micro-expression analysis to detect genuine human emotions.
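The abstract above describes a three-stage pipeline: apex spotting, GAN-based augmentation of the spotted apex frames, and OC-Net emotion recognition. The following is a hedged, high-level sketch of how those stages compose; the function names (`spot_apex_frame`, `augment_with_gan`, `recognise_emotion`) are illustrative assumptions, not the authors' API, and the stage internals are stubbed with placeholders.

```python
# Hypothetical sketch of the three-stage pipeline: spot -> augment -> recognize.
# Stage internals are placeholders; only the data flow mirrors the abstract.

def spot_apex_frame(video_frames):
    # Placeholder: a trained spotting network would predict a continuous
    # label per frame; here the middle frame simply stands in for the apex.
    return video_frames[len(video_frames) // 2]

def augment_with_gan(apex_frames, copies=2):
    # Placeholder: a GAN would synthesise new apex frames; here each frame
    # is duplicated to show how the training set grows by `copies` per frame.
    return [f for f in apex_frames for _ in range(1 + copies)]

def recognise_emotion(apex_frame):
    # Placeholder for the OC-Net classifier's predicted emotion class.
    return "negative"

def analyse(videos):
    """Run spotting on each video, augment, then classify each apex."""
    apexes = [spot_apex_frame(v) for v in videos]
    training_set = augment_with_gan(apexes)  # used to train the recognizer
    emotions = [recognise_emotion(a) for a in apexes]
    return emotions, training_set
```

The point of the sketch is the data flow: augmentation happens between spotting and recognition, so only apex frames (not whole sequences) are synthesised and classified.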

    Apex frame spotting using convolutional neural networks with continuous labeling

    The apex frame is the frame containing the highest-intensity facial movement changes in a video sequence. It plays a crucial role in the analysis of micro-expressions, which generally involve minute facial movements. This frame is hard to identify, requiring laborious and time-consuming effort from highly skilled specialists. Therefore, a convolutional neural network-based technique is proposed to automate apex frame detection using a novel continuous labeling scheme. The network is trained using ascending and descending labels generated by linear and exponential functions, pivoted on the ground-truth apex frame. Two datasets, the CASME II and SAMM databases, are used to verify the proposed algorithm, where the apex frame is determined either by the maximum label intensity or by a sliding window over the label intensities. The results show that the linear continuous label with the sliding-window approach produced the lowest average error of 14.37 frames.
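The continuous labeling scheme described above can be sketched as follows: frames before the ground-truth apex receive ascending labels and frames after it receive descending labels, peaking at the apex, and at inference the apex is picked from the window of highest predicted label intensity. The linear ramp shape (0 at the sequence ends, 1.0 at the apex) and the mean-over-window decision rule are assumptions drawn from the abstract, not the authors' released code.

```python
# Hypothetical sketch of linear continuous labeling and sliding-window spotting.

def linear_labels(num_frames, apex_idx):
    """Linear labels ascending from 0 to 1.0 at the apex, then descending."""
    labels = []
    for i in range(num_frames):
        if i <= apex_idx:
            # Ascending ramp up to the ground-truth apex.
            labels.append(i / apex_idx if apex_idx > 0 else 1.0)
        else:
            # Descending ramp from the apex to the final frame.
            labels.append((num_frames - 1 - i) / (num_frames - 1 - apex_idx))
    return labels

def spot_apex(predicted, window=3):
    """Return the centre of the sliding window with the highest summed label."""
    best_start = max(range(len(predicted) - window + 1),
                     key=lambda s: sum(predicted[s:s + window]))
    return best_start + window // 2
```

Averaging over a window rather than taking the single maximum label smooths out frame-level prediction noise, which is consistent with the sliding-window variant giving the lowest reported error.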