
    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    A smile is one of the key elements in identifying the emotions and present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures, for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained with a large number of face images, HOG features outperform this model on the overall face smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved classification accuracy. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles into either `spontaneous' or `posed' categories using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods. Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
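    The final stage described in this abstract, an SVM over extracted facial descriptors, can be sketched as below. This is a minimal illustration, not the authors' code: the "HOG descriptors" are random stand-in vectors, and the cluster means, dimensions and SVM parameters are invented for the example.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-ins for HOG descriptors of smile videos: two Gaussian
# clusters playing the roles of "posed" (0) and "spontaneous" (1) smiles.
rng = np.random.default_rng(0)
n, dim = 200, 128
X = np.vstack([rng.normal(0.0, 1.0, (n, dim)),
               rng.normal(0.5, 1.0, (n, dim))])
y = np.array([0] * n + [1] * n)

# Linear SVM as the final posed-vs-spontaneous classifier.
clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(round(acc, 2))
```

    In practice the feature vectors would come from a HOG extractor (e.g. over face crops from video frames) rather than a random generator; only the classification stage is exercised here.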

    A framework for evaluating automatic image annotation algorithms

    Several Automatic Image Annotation (AIA) algorithms have been introduced recently and have been found to outperform previous models. However, each of them has been evaluated using different descriptors, different collections or parts of collections, or "easy" settings. This renders their results non-comparable, and we show that collection-specific properties, rather than the actual models, are responsible for the high reported performance measures. In this paper we introduce a framework for the evaluation of image annotation models, which we use to evaluate two state-of-the-art AIA algorithms. Our findings reveal that a simple Support Vector Machine (SVM) approach using global MPEG-7 features outperforms the state-of-the-art AIA models across several collection settings. These models appear to depend heavily on the set of features and the data used, and it is easy to exploit collection-specific properties, such as tag popularity (especially in the commonly used Corel 5K dataset), and still achieve good performance.
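    The abstract's central claim, that collection-specific properties inflate within-collection scores, can be illustrated with a toy cross-collection evaluation. Everything here is invented for the sketch: the "collections" are synthetic Gaussian clouds whose labels depend on one feature, with a per-collection bias standing in for properties like tag popularity.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_collection(shift, n=150, dim=32):
    # Hypothetical global descriptors (stand-ins for MPEG-7 features);
    # `shift` mimics a collection-specific bias in the feature space.
    X = rng.normal(shift, 1.0, (n, dim))
    y = (X[:, 0] > shift).astype(int)  # label tied to one feature
    return X, y

X_a, y_a = make_collection(0.0)  # collection used for training
X_b, y_b = make_collection(2.0)  # a differently biased collection

clf = SVC(kernel="linear", C=10.0).fit(X_a, y_a)
within = accuracy_score(y_a, clf.predict(X_a))
across = accuracy_score(y_b, clf.predict(X_b))
print(round(within, 2), round(across, 2))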

    Beat-Event Detection in Action Movie Franchises

    While important advances have recently been made towards temporally localizing and recognizing specific human actions or activities in videos, efficient detection and classification of long video chunks belonging to semantically defined categories such as "pursuit" or "romance" remains challenging. We introduce a new dataset, Action Movie Franchises, consisting of a collection of Hollywood action movie franchises. We define 11 non-exclusive semantic categories, called beat-categories, that are broad enough to cover most of the movie footage. The corresponding beat-events are annotated as groups of video shots, possibly overlapping. We propose an approach for localizing beat-events based on classifying shots into beat-categories and learning the temporal constraints between shots. We show that temporal constraints significantly improve the classification performance. We set up an evaluation protocol for beat-event localization as well as for shot classification, depending on whether movies from the same franchise are present in the training data or not.
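    The benefit of temporal constraints over independent per-shot decisions can be sketched with synthetic shot scores. This is not the paper's model: the event position, noise level, and the moving-average prior (standing in for the learned constraints between shots) are all assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-shot scores for one beat-category ("pursuit"): a true
# event spans shots 100-199 of 300; scores are the true labels plus noise.
true = np.zeros(300, dtype=int)
true[100:200] = 1
scores = true + rng.normal(0.0, 0.6, 300)

# Independent per-shot decisions vs. a simple temporal-consistency prior
# (a moving average standing in for learned temporal constraints).
raw = (scores > 0.5).astype(int)
smooth = (np.convolve(scores, np.ones(5) / 5, mode="same") > 0.5).astype(int)

print((raw == true).mean(), (smooth == true).mean())
```

    Because beat-events are contiguous groups of shots, even this crude neighbourhood prior recovers the segment more accurately than thresholding each shot on its own, which mirrors the abstract's finding that temporal constraints help.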

    The effects of motivational climate interventions on psychobiosocial states in high school physical education

    Purpose: The purpose of this study was to examine the effects of task- and ego-involving climate manipulations on students’ climate perceptions and psychobiosocial (PBS) states in a physical education setting. Method: Two subsamples of female students (N = 108, 14-15 years of age) participated in 12 lessons of either a task- or an ego-involving climate intervention grounded in the TARGET model. Results: At the end of the treatment, participants in the ego-involved group reported lower scores on perceived task-involving climate and higher scores on perceived ego-involving climate than their peers in the task-involved group. Lower scores on pleasant/functional PBS states and higher scores on unpleasant/dysfunctional PBS states were also observed in the ego-involved group as a consequence of the intervention. Conclusion(s): Findings suggest that teacher-induced achievement motivational climates can influence students’ perceptions and prompt PBS states consistent with the motivational atmosphere.

    Extraction of emotional impact in colour images

    This paper proposes a method to extract the emotional impact of images. Emotions are often associated with facial expressions, but we decided to consider other features as the primary emotional characteristics of natural images, which, in general, do not contain faces. For the sake of generality we built a new image database composed of a large variety of low-semantic images. We used colour images because colours and emotions are often supposed to be linked. For the modelling of emotions, we considered colour features complemented with other recent and efficient descriptors. We supposed that the different features used could also implicitly encode high-level information about emotions. The concept of emotion is not easy to model: the perception of emotion is influenced not only by the content and the colour of an image, but also by personal experience, such as cultural background and the personal semantics associated with certain colours or objects. This complexity was taken into account in the classification process through psycho-visual tests, in which twenty-five observers assessed the nature and strength of the emotions they felt. These tests allowed us to distinguish three classes of emotions: "Negative", "Neutral" and "Positive". We used a Support Vector Machine for classification; the average success rate is 51.75%, which is relevant compared to equivalent results in the literature.
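    A colour-feature-plus-SVM pipeline of the kind this abstract describes can be sketched as follows. The descriptor (a per-channel colour histogram), the synthetic "images" whose brightness loosely tracks the class, and all parameters are assumptions for the example, not the authors' actual features or data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def colour_histogram(img, bins=8):
    # Per-channel colour histogram as a minimal colour descriptor.
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 1), density=True)[0]
        for c in range(3)
    ])

# Hypothetical dataset: tiny synthetic "images" whose dominant intensity
# loosely tracks the class ("Negative" = 0, "Neutral" = 1, "Positive" = 2).
X, y = [], []
for label, mean in [(0, 0.2), (1, 0.5), (2, 0.8)]:
    for _ in range(60):
        img = np.clip(rng.normal(mean, 0.15, (16, 16, 3)), 0.0, 1.0)
        X.append(colour_histogram(img))
        y.append(label)
X, y = np.array(X), np.array(y)

# Three-class SVM over the colour descriptors.
clf = SVC(kernel="rbf", gamma="scale")
acc = cross_val_score(clf, X, y, cv=5).mean()
print(round(acc, 2))
```

    On real photographs the classes are far less separable than in this toy setup, which is why the reported 51.75% over three emotion classes is a meaningful result rather than a low one.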