13,655 research outputs found

    Spontaneous facial expression analysis using optical flow

    © 2017 IEEE. Investigation of emotions manifested through facial expressions has valuable applications in predictive behavioural studies. This has piqued interest in developing intelligent visual surveillance that couples facial expression analysis with Closed Circuit Television (CCTV). However, a facial recognition program tailored to evaluating facial behaviour for forensic and security purposes is feasible only if general patterns of emotion can be detected. The present study assesses whether emotional expression derived from frontal or profile views of the face can be used to discriminate between three emotions: Amusement, Sadness and Fear, using the optical flow technique. Analysis took the form of emotion maps constructed from feature vectors obtained with the Lucas-Kanade implementation of optical flow; these feature vectors were then used as inputs for classification. It was anticipated that the findings would assist in improving the optical flow algorithm for feature extraction. However, further data analyses are necessary to confirm whether different types of emotion can be identified clearly using optical flow or other such techniques.
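    As a rough illustration of the kind of feature extraction the abstract describes, below is a minimal sketch using OpenCV's pyramidal Lucas-Kanade tracker. The frame paths and the Shi-Tomasi corner selection are illustrative assumptions; the study's actual facial-point selection and emotion-map construction are not detailed in the abstract.

```python
# Minimal sketch (not the authors' code): extracting Lucas-Kanade optical-flow
# feature vectors between two consecutive frames of a face video using OpenCV.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick salient points to track (Shi-Tomasi corners as a stand-in for whatever
# facial landmarking scheme the study used).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=100, qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade: estimate where each point moved in the next frame.
nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                          winSize=(15, 15), maxLevel=2)

# Keep successfully tracked points and form a displacement (flow) feature vector.
good_prev = pts[status.flatten() == 1].reshape(-1, 2)
good_next = nxt[status.flatten() == 1].reshape(-1, 2)
flow = good_next - good_prev        # per-point (dx, dy) displacements
feature_vector = flow.flatten()     # candidate input to a downstream classifier
print(feature_vector.shape)
```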

    Efficient Neural Architecture Search for Emotion Recognition

    Automated human emotion recognition from facial expressions is a well-studied problem yet remains a very challenging task. Several efficient or accurate deep learning models have been presented in the literature. However, it is quite difficult to design a model that is both efficient and accurate at the same time. Moreover, identifying the minute feature variations in facial regions for both macro- and micro-expressions requires expertise in network design. In this paper, we propose to search for a highly efficient and robust neural architecture for both macro- and micro-level facial expression recognition. To the best of our knowledge, this is the first attempt to design a NAS-based solution for both macro- and micro-expression recognition. We produce lightweight models with a gradient-based architecture search algorithm. To maintain consistency between macro- and micro-expressions, we utilize dynamic imaging to convert each micro-expression sequence into a single frame, preserving the spatio-temporal features in the facial regions. EmoNAS is evaluated on 13 datasets (7 macro-expression datasets: CK+, DISFA, MUG, ISED, OULU-VIS CASIA, FER2013, RAF-DB; and 6 micro-expression datasets: CASME-I, CASME-II, CAS(ME)2, SAMM, SMIC, MEGC2019 challenge). The proposed models outperform the existing state-of-the-art methods and perform very well in terms of speed and space complexity.
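    The dynamic-imaging step mentioned above is commonly realised with approximate rank pooling, which collapses a frame sequence into one image via fixed temporal weights. The sketch below shows that generic technique under assumed array shapes; the paper's exact formulation may differ.

```python
# Minimal sketch (assumed, not the paper's implementation): collapsing a
# micro-expression frame sequence into a single "dynamic image" via
# approximate rank pooling, where each frame receives a fixed temporal weight.
import numpy as np

def dynamic_image(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, C) float array; returns a single (H, W, C) uint8 image."""
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    # Harmonic-number based coefficients from the approximate rank pooling
    # derivation; alpha_t emphasises later frames over earlier ones.
    harmonics = np.cumsum(1.0 / t)                      # H_t = sum_{i<=t} 1/i
    h_prev = np.concatenate(([0.0], harmonics[:-1]))    # H_{t-1}, with H_0 = 0
    alpha = 2 * (T - t + 1) - (T + 1) * (harmonics[-1] - h_prev)
    di = np.tensordot(alpha, frames, axes=(0, 0))       # weighted sum over time
    # Rescale to a displayable 0-255 range before feeding a 2D network.
    di = 255 * (di - di.min()) / (di.max() - di.min() + 1e-8)
    return di.astype(np.uint8)

# Hypothetical usage: 20 RGB frames of size 128x128.
seq = np.random.rand(20, 128, 128, 3).astype(np.float32)
print(dynamic_image(seq).shape)   # (128, 128, 3)
```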

    Sensitivity to fine-grained and coarse visual information: The effect of blurring on anticipation skill

    Copyright © 2009 Edizioni Luigi Pozzi. We examined skilled tennis players' ability to perceive fine and coarse information by assessing their ability to predict serve direction under three levels of visual blur. A temporal occlusion design was used in which skilled players viewed serves struck by two players, occluded at one of four points relative to ball-racquet impact (-320 ms, -160 ms, 0 ms, +160 ms) and shown with one of three levels of blur (no blur, 20% blur, 40% blur). Using a within-task criterion to establish good and poor anticipators, the results revealed a significant interaction between anticipation skill and level of blur. Anticipation skill was significantly disrupted in the 20% blur condition; however, judgment accuracy of both groups then improved in the 40% blur condition while confidence in judgments declined. We conclude that there is evidence for processing of coarse configural information but that anticipation skill in this task was primarily driven by perception of fine-grained information. This research was supported by a University of Hong Kong Seed Funding for Basic Research grant awarded to the second author.

    Spatio-Temporal Analysis of Facial Actions using Lifecycle-Aware Capsule Networks

    Most state-of-the-art approaches for Facial Action Unit (AU) detection rely upon evaluating facial expressions from static frames, encoding a snapshot of heightened facial activity. In real-world interactions, however, facial expressions are usually more subtle and evolve over time, requiring AU detection models to learn temporal as well as spatial information. In this paper, we focus on both spatial and spatio-temporal features encoding the temporal evolution of facial AU activation. For this purpose, we propose the Action Unit Lifecycle-Aware Capsule Network (AULA-Caps), which performs AU detection using both frame- and sequence-level features. While at the frame level the capsule layers of AULA-Caps learn spatial feature primitives to determine AU activations, at the sequence level they learn temporal dependencies between contiguous frames by focusing on relevant spatio-temporal segments in the sequence. The learnt feature capsules are routed together such that the model learns to selectively focus more on spatial or spatio-temporal information depending upon the AU lifecycle. The proposed model is evaluated on the commonly used BP4D and GFT benchmark datasets, obtaining state-of-the-art results on both. Comment: Updated Figure 6 and the Acknowledgements. Corrected typos. 11 pages, 6 figures, 3 tables.
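    For readers unfamiliar with capsule routing, the abstract's "routed together" refers to an agreement-based mechanism between capsule layers. Below is a minimal, generic sketch of the standard squash nonlinearity and routing-by-agreement (Sabour et al.); the specific routing between AULA-Caps' spatial and spatio-temporal capsules is not spelled out in the abstract, so the shapes here are purely illustrative.

```python
# Generic sketch of capsule "squash" and routing-by-agreement; not AULA-Caps itself.
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Shrink each vector's length into [0, 1) while preserving its orientation."""
    sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

def route(u_hat, iterations=3):
    """u_hat: (num_in, num_out, dim) prediction vectors from lower-level capsules."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                              # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)     # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                   # weighted sum per output capsule
        v = squash(s)                                            # output capsule activations
        b = b + np.einsum("iod,od->io", u_hat, v)                # agreement update
    return v

# Hypothetical shapes: 32 lower capsules predicting 10 output capsules of dimension 16.
print(route(np.random.randn(32, 10, 16)).shape)  # (10, 16)
```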

    Py-Feat: Python Facial Expression Analysis Toolbox

    Studying facial expressions is a notoriously difficult endeavor. Recent advances in the field of affective computing have yielded impressive progress in automatically detecting facial expressions from pictures and videos. However, much of this work has yet to be widely disseminated in social science domains such as psychology. Current state-of-the-art models require considerable domain expertise that is not traditionally incorporated into social science training programs. Furthermore, there is a notable absence of user-friendly, open-source software that provides a comprehensive set of tools and functions to support facial expression research. In this paper, we introduce Py-Feat, an open-source Python toolbox that provides support for detecting, preprocessing, analyzing, and visualizing facial expression data. Py-Feat makes it easy for domain experts to disseminate and benchmark computer vision models, and for end users to quickly process, analyze, and visualize facial expression data. We hope this platform will facilitate increased use of facial expression data in human behavior research. Comment: 25 pages, 3 figures, 5 tables.
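    To give a sense of the workflow the toolbox targets, here is a minimal usage sketch based on Py-Feat's documented Detector interface; method names and defaults may vary across toolbox versions, and the input image path is purely hypothetical.

```python
# Minimal usage sketch (assumed from Py-Feat documentation, not verified against
# a specific release).
from feat import Detector

detector = Detector()                              # loads default face, landmark, AU, and emotion models
fex = detector.detect_image("example_face.jpg")    # hypothetical input image

# The result is a Fex object that behaves like a pandas DataFrame, with columns
# for detected action units, emotion probabilities, and face geometry.
print(fex.head())
```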