
    The role of motion and intensity in deaf children’s recognition of real human facial expressions of emotion

    There is substantial evidence to suggest that deafness is associated with delays in emotion understanding, which has been attributed to delays in language acquisition and reduced opportunities to converse. However, studies addressing the ability to recognise facial expressions of emotion have produced equivocal findings. The two experiments presented here attempt to clarify emotion recognition in deaf children by considering two factors: the role of motion and the role of intensity in deaf children's emotion recognition. In Study 1, 26 deaf children were compared to 26 age-matched hearing controls on a computerised facial emotion recognition task involving static and dynamic expressions of 6 emotions. Eighteen of the deaf children and 18 age-matched hearing controls additionally took part in Study 2, involving the presentation of the same 6 emotions at varying intensities. Study 1 showed that deaf children's emotion recognition was better in the dynamic than the static condition, whereas the hearing children showed no difference in performance between the two conditions. In Study 2, the deaf children performed no differently from the hearing controls, showing improved recognition rates with increasing expression intensity. With the exception of disgust, no differences in individual emotions were found. These findings highlight the importance of using ecologically valid stimuli to assess emotion recognition.

    Facial expressions depicting compassionate and critical emotions: the development and validation of a new emotional face stimulus set

    Attachment with altruistic others requires the ability to appropriately process affiliative and kind facial cues, yet no stimulus set is available to investigate such processes. Here, we developed a stimulus set depicting compassionate and critical facial expressions, and validated its effectiveness using well-established visual-probe methodology. In Study 1, 62 participants rated photographs of actors displaying compassionate/kind and critical faces for strength of emotion type. This produced a new stimulus set based on N = 31 actors, whose facial expressions were reliably distinguished as compassionate, critical, and neutral. In Study 2, 70 participants completed a visual-probe task measuring attentional orientation to critical and compassionate/kind faces. This revealed that participants lower in self-criticism demonstrated enhanced attention to compassionate/kind faces, whereas those higher in self-criticism showed no bias. In sum, the new stimulus set produced interpretable findings using visual-probe methodology and is the first to include higher-order, complex positive affect displays.
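    For readers unfamiliar with the paradigm, the sketch below shows one common way an attentional bias index is derived from visual-probe reaction times: faster responses to probes replacing a given expression indicate attention oriented toward it. The trial fields and scoring rule are illustrative assumptions, not the authors' exact procedure.

    # Hypothetical dot-probe scoring sketch; field names are assumptions.
    from statistics import mean

    def attentional_bias(trials, emotion="compassionate"):
        """Bias = mean RT when the probe replaces the neutral face (incongruent)
        minus mean RT when it replaces the emotional face (congruent).
        Positive values suggest attention oriented toward the emotional face."""
        congruent = [t["rt_ms"] for t in trials
                     if t["emotion"] == emotion and t["probe_location"] == "emotional"]
        incongruent = [t["rt_ms"] for t in trials
                       if t["emotion"] == emotion and t["probe_location"] == "neutral"]
        return mean(incongruent) - mean(congruent)

    # Toy data: probes following compassionate/neutral face pairs.
    trials = [
        {"emotion": "compassionate", "probe_location": "emotional", "rt_ms": 412},
        {"emotion": "compassionate", "probe_location": "neutral",   "rt_ms": 455},
        {"emotion": "compassionate", "probe_location": "emotional", "rt_ms": 398},
        {"emotion": "compassionate", "probe_location": "neutral",   "rt_ms": 431},
    ]
    print(attentional_bias(trials))  # 38.0 ms: vigilance toward kind faces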

    Lie experts' beliefs about non-verbal indicators of deception

    Beliefs about behavioral clues to deception were investigated in 212 people, consisting of prisoners, police detectives, patrol police officers, prison guards, customs officers, and college students. Previous studies, mainly conducted with college students as subjects, showed that people hold some incorrect beliefs about behavioral clues to deception. It was hypothesized that prisoners would have the best notion of clues to deception, because they receive the most adequate feedback about successful deception strategies. The results supported this hypothesis.

    Osteochondral lesions in distal tarsal joints of Icelandic horses reveal strong associations between hyaline and calcified cartilage abnormalities

    Funded by the Royal Swedish Academy of Agriculture and Forestry (H10-0265-CFH), the Swedish-Norwegian Foundation for Equine Research (H0847237), and the SLU travel fund.

    Reading faces: differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children

    Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that between dogs and humans, there are additional challenges, since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that the processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in the laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purposes, a similar experiment was run with 4-year-old children, who showed differential processing of facial expressions compared to dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions.
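    Gaze-bias studies of this kind typically summarise eye movements with a laterality index of the form LI = (L - R) / (L + R). The sketch below assumes simple fixation counts; the authors' exact metric may differ.

    # Standard laterality index sketch; fixation counts are assumed inputs.
    def laterality_index(left_fixations: int, right_fixations: int) -> float:
        """Returns a value in [-1, 1]; positive = left-gaze bias, which is
        conventionally read as preferential right-hemisphere engagement."""
        total = left_fixations + right_fixations
        if total == 0:
            raise ValueError("no fixations recorded")
        return (left_fixations - right_fixations) / total

    # e.g. 14 fixations to the left half of a face vs 6 to the right:
    print(laterality_index(14, 6))  # 0.4, a clear left-gaze bias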

    The analysis of facial beauty: an emerging area of research in pattern analysis

    Much recent research supports the idea that the human perception of attractiveness is data-driven and largely independent of the perceiver, which suggests applying pattern analysis techniques to beauty analysis. Several scientific papers on this subject are appearing in the image processing, computer vision, and pattern analysis literature, or use techniques from these areas. In this paper we survey recent studies on the automatic analysis of facial beauty, and discuss research directions and practical applications.
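    To make the data-driven framing concrete, the hypothetical sketch below fits a regressor from landmark-derived facial features to averaged human attractiveness ratings, the typical setup in the surveyed work. All data, features, and model choices here are placeholders, not any specific paper's method.

    # Placeholder sketch of attractiveness-rating regression (not any
    # specific paper's method): geometric features -> mean human rating.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Rows: faces; columns: landmark-derived ratios (eye spacing / face
    # width, etc.). Synthetic stand-ins for a real annotated dataset.
    X = rng.random((200, 12))
    y = X @ rng.random(12) + 0.1 * rng.standard_normal(200)

    model = Ridge(alpha=1.0)
    # Agreement between predicted and observed ratings is the usual
    # benchmark; cross-validated r^2 stands in for it here.
    print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())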

    Recognition of Face Identity and Emotion in Expressive Specific Language Impairment

    Objective: To study face and emotion recognition in children with mostly expressive specific language impairment (SLI-E). Subjects and Methods: A test movie assessing perception and recognition of faces and mimic-gestural expression was administered to 24 children diagnosed with SLI-E and an age-matched control group of normally developing children. Results: Compared to the control group, the SLI-E children scored significantly worse on both the face and expression recognition tasks, with a preponderant effect on emotion recognition. The performance of the SLI-E group could not be explained by reduced attention during the test session. Conclusion: We conclude that SLI-E is associated with a deficiency in decoding non-verbal emotional facial and gestural information, which might lead to profound and persistent problems in social interaction and development.

    Recognising facial expressions in video sequences

    We introduce a system that processes a sequence of images of a front-facing human face and recognises a set of facial expressions. We use an efficient appearance-based face tracker to locate the face in the image sequence and estimate the deformation of its non-rigid components. The tracker works in real time; it is robust to strong illumination changes and factors out appearance changes caused by illumination from those due to face deformation. We adopt a model-based approach to facial expression recognition. In our model, an image of a face is represented by a point in a deformation space. The variability of the classes of images associated with each facial expression is represented by a set of samples that model a low-dimensional manifold in the space of deformations. We introduce a probabilistic procedure based on a nearest-neighbour approach that combines the information provided by the incoming image sequence with the prior information stored in the expression manifold to compute a posterior probability for each facial expression. Our experiments show that the system works in an unconstrained environment with strong changes in illumination and face location, achieving an 89% recognition rate on a set of 333 sequences from the Cohn-Kanade database.
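    A minimal sketch of the nearest-neighbour posterior computation described above, assuming per-frame deformation vectors, stored per-class samples, and a Gaussian kernel; the published system's exact formulation may differ.

    # Sketch only: nearest-neighbour posterior over expressions, under the
    # assumptions stated above (Gaussian kernel, uniform class prior).
    import numpy as np

    def expression_posterior(frames, class_samples, sigma=1.0):
        """frames: (T, D) deformation vectors tracked over the sequence.
        class_samples: dict mapping expression name -> (N, D) stored samples.
        Returns a dict of posterior probabilities over the expressions."""
        log_post = {c: 0.0 for c in class_samples}
        for x in frames:
            for c, samples in class_samples.items():
                # Likelihood from the nearest stored sample on the class manifold.
                d2 = np.min(np.sum((samples - x) ** 2, axis=1))
                log_post[c] += -d2 / (2 * sigma ** 2)
        scores = np.array(list(log_post.values()))
        probs = np.exp(scores - scores.max())  # normalise in log space
        probs /= probs.sum()
        return dict(zip(log_post, probs))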

    Pseudorapidity distributions of charged particles from Au+Au collisions at the maximum RHIC energy, Sqrt(s_NN) = 200 GeV

    We present charged particle densities as a function of pseudorapidity and collision centrality for the 197Au+197Au reaction at Sqrt{s_NN} = 200 GeV. For the 5% most central events we obtain dN_ch/deta(eta=0) = 625 +/- 55 and N_ch(-4.7 <= eta <= 4.7) = 4630 +/- 370, i.e. 14% and 21% increases, respectively, relative to Sqrt{s_NN} = 130 GeV collisions. Charged-particle production per pair of participant nucleons is found to increase from peripheral to central collisions around mid-rapidity. These results constrain current models of particle production at the highest RHIC energy.
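    As a back-of-envelope consistency check of the quoted increases, the implied Sqrt{s_NN} = 130 GeV reference values can be recovered from the ratios (inferred here from the stated percentages, not quoted from the earlier measurement):

    \[
    \frac{dN_{\mathrm{ch}}/d\eta\,(200\ \mathrm{GeV})}{dN_{\mathrm{ch}}/d\eta\,(130\ \mathrm{GeV})}
      \approx \frac{625}{548} \approx 1.14, \qquad
    \frac{N_{\mathrm{ch}}(200\ \mathrm{GeV})}{N_{\mathrm{ch}}(130\ \mathrm{GeV})}
      \approx \frac{4630}{3826} \approx 1.21
    \]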