
    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    A smile is one of the key elements in identifying emotions and the present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures, for distinguishing posed from spontaneous smiles. Although the deep CNN face model is trained with a large number of face images, HOG features outperform this model on the overall smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved it. Unlike many manual or semi-automatic methodologies, our approach automatically classifies all smiles as either 'spontaneous' or 'posed' using support vector machines (SVMs). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods. Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis.
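    As a concrete illustration of one branch of this pipeline, the sketch below extracts HOG descriptors from face crops and classifies them with a linear SVM. The image size, HOG parameters, and cross-validation setup are illustrative assumptions, not the authors' settings:

    ```python
    # Minimal sketch: HOG features from grayscale face crops fed to a
    # linear SVM. Parameters are common defaults, not the paper's.
    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    def hog_features(face_images, size=(128, 128)):
        """Compute one HOG descriptor per grayscale face crop."""
        feats = []
        for img in face_images:
            img = resize(img, size, anti_aliasing=True)
            feats.append(hog(img, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2), feature_vector=True))
        return np.array(feats)

    # faces: list of 2-D grayscale arrays; labels: 1 = spontaneous, 0 = posed
    # X = hog_features(faces)
    # clf = LinearSVC(C=1.0)
    # print(cross_val_score(clf, X, labels, cv=10).mean())
    ```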

    Extended Calculations of Spectroscopic Data: Energy Levels, Lifetimes, and Transition Rates for O-like Ions from Cr XVII to Zn XXIII

    Employing two state-of-the-art methods, multiconfiguration Dirac--Hartree--Fock and second-order many-body perturbation theory, the excitation energies and lifetimes for the lowest 200 states of the $2s^2 2p^4$, $2s 2p^5$, $2p^6$, $2s^2 2p^3 3s$, $2s^2 2p^3 3p$, $2s^2 2p^3 3d$, $2s 2p^4 3s$, $2s 2p^4 3p$, and $2s 2p^4 3d$ configurations, as well as the multipole (electric dipole (E1), magnetic dipole (M1), and electric quadrupole (E2)) transition rates, line strengths, and oscillator strengths among these states, are calculated for each O-like ion from Cr XVII to Zn XXIII. Our two data sets are compared with the NIST and CHIANTI compiled values and with previous calculations. The data are accurate enough for identification and deblending of new emission lines from the Sun and other astrophysical sources. The amount of high-accuracy data is significantly increased for the $n = 3$ states of several O-like ions of astrophysical interest, where experimental data are very scarce.
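    For reference, the tabulated quantities are interconvertible through standard spectroscopic relations. A sketch in SI units for electric dipole (E1) lines, using textbook formulas rather than equations quoted from the paper ($\lambda$ is the transition wavelength, $g_i$ and $g_k$ the statistical weights of the lower and upper levels, $S$ the line strength):

    ```latex
    % Standard E1 conversions between transition rate A, oscillator
    % strength f, and line strength S (SI units); textbook relations,
    % not equations taken from the paper itself.
    \begin{align}
      A_{ki} &= \frac{2\pi e^{2}}{\varepsilon_{0} m_{e} c\,\lambda^{2}}
                \frac{g_{i}}{g_{k}}\, f_{ik}, \\
      A_{ki} &= \frac{16\pi^{3}}{3 h \varepsilon_{0} \lambda^{3} g_{k}}\, S .
    \end{align}
    ```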

    Volatility distribution in the S&P500 Stock Index

    We study the volatility of the S&P500 stock index from 1984 to 1996 and find that the volatility distribution can be very well described by a log-normal function. Further, using detrended fluctuation analysis we show that the volatility is power-law correlated with Hurst exponent $\alpha \cong 0.9$. Comment: 6 pages, 5 figures.
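    Detrended fluctuation analysis admits a compact implementation: integrate the demeaned series, detrend it in windows of length $n$, and read the exponent $\alpha$ off the log-log slope of the fluctuation function $F(n)$. A minimal NumPy sketch, assuming the common linear-detrending choice and illustrative window sizes:

    ```python
    # Minimal DFA sketch; order-1 (linear) detrending and the window
    # sizes are common defaults, not settings taken from the paper.
    import numpy as np

    def dfa(x, scales):
        """Return the fluctuation F(n) for each window size n in `scales`."""
        y = np.cumsum(x - np.mean(x))           # integrated profile
        F = []
        for n in scales:
            n_seg = len(y) // n
            segs = y[:n_seg * n].reshape(n_seg, n)
            t = np.arange(n)
            res = []
            for seg in segs:
                coef = np.polyfit(t, seg, 1)     # local linear trend
                res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(res)))
        return np.array(F)

    # alpha is the slope of log F(n) vs log n; a value near 0.9 indicates
    # strong long-range power-law correlations in the volatility series.
    # scales = np.unique(np.logspace(1, 3, 20).astype(int))
    # F = dfa(vol, scales)
    # alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    ```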

    Automatic detection of a driver’s complex mental states

    Automatic classification of drivers’ mental states is an important yet relatively unexplored topic. In this paper, we define a taxonomy of complex mental states that are relevant to driving, namely: Happy, Bothered, Concentrated, and Confused. We present our video segmentation and annotation methodology for a spontaneous dataset of natural driving videos from 10 different drivers. We also present the real-time annotation tool used to label the dataset via an emotion perception experiment and discuss the challenges faced in obtaining ground-truth labels. Finally, we present a methodology for automatic classification of drivers’ mental states. We compare SVM models trained on our dataset with an existing nearest-neighbour model pre-trained on a posed dataset, using facial Action Units as input features, and demonstrate that our temporal SVM approach yields better results. The dataset’s extracted features and validated emotion labels, together with the annotation tool, will be made available to the research community.
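    As a rough illustration of the classification step, the sketch below trains an SVM on per-segment statistics of facial Action Unit activations. The AU extraction, the summary statistics, and the four-class labels are assumptions for illustration, not the paper's exact temporal model:

    ```python
    # Hedged sketch: per-segment statistics of Action Unit activations
    # classified with an SVM. AU extraction is assumed done upstream.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def segment_features(au_frames):
        """au_frames: (n_frames, n_AUs) activations for one video segment.
        Summarize the temporal dynamics with simple per-AU statistics."""
        return np.concatenate([au_frames.mean(0), au_frames.std(0),
                               au_frames.max(0)])

    # X = np.array([segment_features(s) for s in segments])
    # y = labels  # e.g. "Happy", "Bothered", "Concentrated", "Confused"
    # clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    # clf.fit(X, y)
    ```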

    The role of motion and intensity in deaf children’s recognition of real human facial expressions of emotion

    There is substantial evidence to suggest that deafness is associated with delays in emotion understanding, which has been attributed to delays in language acquisition and reduced opportunities to converse. However, studies addressing the ability to recognise facial expressions of emotion have produced equivocal findings. The two experiments presented here attempt to clarify emotion recognition in deaf children by considering two aspects: the role of motion and the role of intensity. In Study 1, 26 deaf children were compared to 26 age-matched hearing controls on a computerised facial emotion recognition task involving static and dynamic expressions of 6 emotions. Eighteen of the deaf children and 18 age-matched hearing controls additionally took part in Study 2, involving the presentation of the same 6 emotions at varying intensities. Study 1 showed that deaf children’s emotion recognition was better in the dynamic than in the static condition, whereas the hearing children showed no difference in performance between the two conditions. In Study 2, the deaf children performed no differently from the hearing controls, with both groups showing improved recognition rates as expression intensity increased. With the exception of disgust, no differences in individual emotions were found. These findings highlight the importance of using ecologically valid stimuli to assess emotion recognition.

    Facial expressions depicting compassionate and critical emotions: the development and validation of a new emotional face stimulus set

    Attachment with altruistic others requires the ability to appropriately process affiliative and kind facial cues, yet no stimulus set is available to investigate such processes. Here, we developed a stimulus set depicting compassionate and critical facial expressions and validated its effectiveness using well-established visual-probe methodology. In Study 1, 62 participants rated photographs of actors displaying compassionate/kind and critical faces on strength of emotion type. This produced a new stimulus set based on N = 31 actors whose facial expressions were reliably distinguished as compassionate, critical, or neutral. In Study 2, 70 participants completed a visual-probe task measuring attentional orientation to critical and compassionate/kind faces. This revealed that participants lower in self-criticism demonstrated enhanced attention to compassionate/kind faces, whereas those higher in self-criticism showed no bias. In sum, the new stimulus set produced interpretable findings using visual-probe methodology and is the first to include higher-order, complex positive affect displays.
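    For context, visual-probe (dot-probe) studies typically quantify attentional orientation as the mean reaction-time difference between incongruent trials (probe opposite the emotional face) and congruent trials (probe replacing it). A sketch of that standard bias score, with hypothetical column names and no claim to the authors' exact scoring:

    ```python
    # Standard dot-probe bias score: positive values indicate attention
    # oriented toward the emotional (e.g. compassionate/kind) face.
    # Column names are illustrative assumptions.
    import pandas as pd

    def bias_score(trials: pd.DataFrame) -> float:
        """trials: columns `rt_ms` and `congruent` (True when the probe
        replaced the emotional face)."""
        incongruent = trials.loc[~trials["congruent"], "rt_ms"].mean()
        congruent = trials.loc[trials["congruent"], "rt_ms"].mean()
        return incongruent - congruent
    ```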

    The analysis of facial beauty: an emerging area of research in pattern analysis

    Much recent research supports the idea that the human perception of attractiveness is data-driven and largely independent of the perceiver. This suggests using pattern analysis techniques for beauty analysis. Several scientific papers on this subject are appearing in image processing, computer vision, and pattern analysis contexts, or use techniques from these areas. In this paper, we survey recent studies on the automatic analysis of facial beauty and discuss research lines and practical applications.

    Observational Study Design in Veterinary Pathology, Part 1: Study Design

    Observational studies are the basis for much of our knowledge of veterinary pathology and are highly relevant to the daily practice of pathology. However, recommendations for conducting pathology-based observational studies are not readily available. In part 1 of this series, we offer advice on planning and conducting an observational study with examples from the veterinary pathology literature. Investigators should recognize the importance of creativity, insight, and innovation in devising studies that solve problems and fill important gaps in knowledge. Studies should focus on specific and testable hypotheses, questions, or objectives, with the methodology developed to support these goals. We consider the merits and limitations of different types of analytic and descriptive studies, as well as of prospective vs. retrospective enrollment. Investigators should define clear inclusion and exclusion criteria and select adequate numbers of study subjects, including careful selection of the most appropriate controls. Studies of causality must consider the temporal relationships between variables and the advantages of measuring incident cases rather than prevalent cases. Investigators must consider unique aspects of studies based on archived laboratory case material and take particular care to consider and mitigate the potential for selection bias and information bias. We close by discussing approaches to adding value and impact to observational studies. Part 2 of the series focuses on methodology and validation of methods.

    Recognition of Face Identity and Emotion in Expressive Specific Language Impairment

    Objective: To study face and emotion recognition in children with mostly expressive specific language impairment (SLI-E). Subjects and Methods: A test movie assessing perception and recognition of faces and of facial and gestural expression was administered to 24 children diagnosed with SLI-E and to an age-matched control group of normally developing children. Results: Compared to the normal control group, the SLI-E children scored significantly worse in both the face and the expression recognition tasks, with a preponderant effect on emotion recognition. The performance of the SLI-E group could not be explained by reduced attention during the test session. Conclusion: We conclude that SLI-E is associated with a deficiency in decoding non-verbal emotional facial and gestural information, which might lead to profound and persistent problems in social interaction and development.

    Speaker-independent emotion recognition exploiting a psychologically-inspired binary cascade classification schema

    In this paper, a psychologically inspired binary cascade classification schema is proposed for speech emotion recognition. Performance is enhanced because commonly confused pairs of emotions become distinguishable from one another. Extracted features are related to statistics of pitch, formants, and energy contours, as well as spectrum, cepstrum, perceptual and temporal features, autocorrelation, MPEG-7 descriptors, Fujisaki's model parameters, voice quality, jitter, and shimmer. Selected features are fed as input to a K-nearest-neighbour classifier and to support vector machines. Two kernels are tested for the latter: linear and Gaussian radial basis function. The recently proposed speaker-independent experimental protocol is tested on the Berlin emotional speech database for each gender separately. The best emotion recognition accuracy, achieved by support vector machines with the linear kernel, equals 87.7%, outperforming state-of-the-art approaches. Statistical analysis is carried out first with respect to the classifiers' error rates and then to evaluate the information expressed by the classifiers' confusion matrices.
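    As a sketch of the cascade idea, each node below is a binary SVM that routes a sample toward one of two emotion subsets until a single label remains. The grouping in the usage comment is illustrative only; the paper derives its own psychologically motivated ordering, and feature extraction is assumed to happen upstream:

    ```python
    # Hedged sketch of one node in a binary cascade over emotion labels.
    # The split hierarchy is an illustrative assumption, not the paper's.
    import numpy as np
    from sklearn.svm import SVC

    class CascadeNode:
        def __init__(self, left, right):
            self.left, self.right = left, right   # lists of emotion labels
            self.clf = SVC(kernel="linear", C=1.0)

        def fit(self, X, y):
            """Train the binary SVM on samples from either label subset."""
            mask = np.isin(y, self.left + self.right)
            self.clf.fit(X[mask], np.isin(y[mask], self.left).astype(int))
            return self

        def route(self, x):
            """Send one feature vector to the left or right subset."""
            return self.left if self.clf.predict(x[None])[0] else self.right

    # Example (illustrative grouping, not the paper's): the root separates
    # high- from low-arousal emotions, and each side is split again until
    # single labels remain.
    # root = CascadeNode(["anger", "happiness", "fear"],
    #                    ["sadness", "boredom", "neutral"]).fit(X, y)
    ```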