4,900 research outputs found

    The effect of relationship status on communicating emotions through touch

    Research into emotional communication to date has largely focused on facial and vocal expressions. In contrast, recent studies by Hertenstein, Keltner, App, Bulleit, and Jaskolka (2006) and Hertenstein, Holmes, McCullough, and Keltner (2009) exploring nonverbal communication of emotion discovered that people could identify anger, disgust, fear, gratitude, happiness, love, sadness and sympathy from the experience of being touched on either the arm or body by a stranger, without seeing the touch. The same studies showed that strangers were unable to communicate the self-focused emotions embarrassment, envy and pride, or the universal emotion surprise. Literature relating to touch indicates that the interpretation of a tactile experience is significantly influenced by the relationship between the touchers (Coan, Schaefer, & Davidson, 2006). The present study compared the ability of romantic couples and strangers to communicate emotions solely via touch. Results showed that both strangers and romantic couples were able to communicate universal and prosocial emotions, whereas only romantic couples were able to communicate the self-focused emotions envy and pride.

    Inversion improves the recognition of facial expression in thatcherized images

    The Thatcher illusion provides a compelling example of the face inversion effect. However, the marked effect of inversion in the Thatcher illusion contrasts with other studies that report only a small effect of inversion on the recognition of facial expressions. To address this discrepancy, we compared the effects of inversion and thatcherization on the recognition of facial expressions. We found that inversion of normal faces caused only a small reduction in the recognition of facial expressions. In contrast, local inversion of facial features in upright thatcherized faces resulted in a much larger reduction in the recognition of facial expressions. Paradoxically, inversion of thatcherized faces caused a relative increase in the recognition of facial expressions. Together, these results suggest that different processes explain the effects of inversion on the recognition of facial expressions and on the perception of the Thatcher illusion. The grotesque perception of thatcherized images is based on a more orientation-sensitive representation of the face. In contrast, the recognition of facial expression is dependent on a more orientation-insensitive representation. A similar pattern of results was evident when only the mouth or eye region was visible. These findings demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the features of the face.

    Cultural-based visual expression: Emotional analysis of human face via Peking Opera Painted Faces (POPF)

    Peking Opera, a branch of traditional Chinese culture and art, uses highly distinctive, colourful facial make-up for all actors in stage performance. The make-up is stylised into nonverbal symbolic semantics which combine to form the painted faces that describe and symbolise the background, character, and emotional status of specific roles. A study of Peking Opera Painted Faces (POPF) was taken as an example of how information and meaning can be effectively expressed through changes of facial expression based on facial motion, in both natural and emotional aspects. The study found that POPF provides exaggerated features of facial motion through images, and that the symbolic semantics of POPF provide a high-level expression of human facial information. The study presents a creative structure of information analysis and expression based on POPF to improve the understanding of human facial motion and emotion.

    Recognizing Emotions in a Foreign Language

    Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can be recognized from a speaker's voice, regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language ("in-group advantage"). Our findings suggest that the ability to understand vocally expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.

    Seasonal variation of aerosol water uptake and its impact on the direct radiative effect at Ny-Ålesund, Svalbard

    In this study we investigated the impact of water uptake by aerosol particles in the ambient atmosphere on their optical properties and their direct radiative effect (ADRE, W m⁻²) in the Arctic at Ny-Ålesund, Svalbard, during 2008. To achieve this, we combined three models (a hygroscopic growth model, a Mie model, and a radiative transfer model) with an extensive set of observational data. We found that the seasonal variation of dry aerosol scattering coefficients showed minimum values during the summer season and the beginning of fall (July-August-September), when small particles (< 100 nm in diameter) dominate the aerosol number size distribution. The maximum scattering by dry particles was observed during the Arctic haze period (March-April-May), when the average size of the particles was larger. Taking the hygroscopic growth of aerosol particles in the ambient atmosphere into account had a significant impact on the aerosol scattering coefficients: they were enhanced by, on average, a factor of 4.30 ± 2.26 (mean ± standard deviation), with lower values during the haze period (March-April-May) than during summer and fall. Hygroscopic growth of aerosol particles was found to make the ADRE at the surface 1.6 to 3.7 times more negative, with the smallest effect during the haze period (March-April-May) and the largest during late summer and the beginning of fall (July-August-September).
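The enhancement factor reported above multiplies the dry scattering coefficient to give the ambient value. As a minimal illustration only (not the paper's hygroscopic growth model), the widely used gamma-law parameterization f(RH) = ((1 - RH_ref) / (1 - RH))^γ can be sketched as follows; the γ value and the example numbers are assumptions for illustration:

```python
# Minimal sketch (assumed, not from the paper): applying a hygroscopic
# scattering enhancement factor f(RH) to a dry scattering coefficient,
# using the common gamma-law parameterization.
def scattering_enhancement(rh, rh_ref=0.0, gamma=0.6):
    """Enhancement factor f(RH); gamma is an assumed fit parameter."""
    return ((1.0 - rh_ref) / (1.0 - rh)) ** gamma

def ambient_scattering(sigma_dry, rh, gamma=0.6):
    """Ambient scattering coefficient, in the same units as sigma_dry (e.g. Mm^-1)."""
    return sigma_dry * scattering_enhancement(rh, gamma=gamma)

# Example: a dry scattering coefficient of 5 Mm^-1 at 85 % relative humidity
print(round(ambient_scattering(5.0, 0.85), 2))  # prints 15.61
```

With these toy parameters the ambient coefficient is roughly three times the dry one, the same order as the factor-of-4.30 mean enhancement the abstract reports.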

    Extended Calculations of Spectroscopic Data: Energy Levels, Lifetimes and Transition Rates for O-like Ions from Cr XVII to Zn XXIII

    Employing two state-of-the-art methods, multiconfiguration Dirac-Hartree-Fock and second-order many-body perturbation theory, the excitation energies and lifetimes for the lowest 200 states of the $2s^2 2p^4$, $2s 2p^5$, $2p^6$, $2s^2 2p^3 3s$, $2s^2 2p^3 3p$, $2s^2 2p^3 3d$, $2s 2p^4 3s$, $2s 2p^4 3p$, and $2s 2p^4 3d$ configurations, and multipole (electric dipole (E1), magnetic dipole (M1), and electric quadrupole (E2)) transition rates, line strengths, and oscillator strengths among these states, are calculated for each O-like ion from Cr XVII to Zn XXIII. Our two data sets are compared with the NIST and CHIANTI compiled values and with previous calculations. The data are accurate enough for identification and deblending of new emission lines from the Sun and other astrophysical sources. The amount of high-accuracy data is significantly increased for the $n = 3$ states of several O-like ions of astrophysical interest, where experimental data are very scarce.

    Assessing the impact of verbal and visuospatial working memory load on eye-gaze cueing

    Observers tend to respond more quickly to peripheral stimuli that are being gazed at by a centrally presented face than to stimuli that are not being gazed at. While this gaze-cueing effect was initially seen as reflexive, there have also been some indications that top-down control processes may be involved. Therefore, the present investigation employed a dual-task paradigm to attempt to disrupt the putative control processes involved in gaze cueing. Two experiments examined the impact of working memory load on gaze cueing. In Experiment 1, participants were required to hold a set of digits in working memory during each gaze trial. In Experiment 2, the gaze task was combined with an auditory task that required the manipulation and maintenance of visuospatial information. Gaze-cueing effects were observed, but they were not modulated by dual-task load in either experiment. These results are consistent with traditional accounts of gaze cueing as a highly reflexive process.

    Individual differences in gelotophobia predict responses to joy and contempt

    In a paradigm facilitating smile misattribution, facial responses and ratings to contempt and joy were investigated in individuals with or without gelotophobia (fear of being laughed at). Participants from two independent samples (N1 = 83, N2 = 50) rated the intensity of eight emotions in 16 photos depicting joy, contempt, and different smiles. In the second study, facial responses were coded with the Facial Action Coding System. Compared with non-fearful individuals, gelotophobes rated joy smiles as less joyful and more contemptuous. Moreover, gelotophobes showed less facial joy and more contempt markers. The contempt ratings were comparable between the two groups. Looking at the photos of smiles lifted the positive mood of non-gelotophobes, whereas gelotophobes did not experience an increase. We hypothesize that the interpretation bias of “joyful faces hiding evil minds” (i.e., being also contemptuous) and exhibiting less joy facially may complicate social interactions for gelotophobes and serve as a maintaining factor of gelotophobia. The research leading to these results received funding from a research grant from the Swiss National Science Foundation (SNSF; 100014_126967-1).

    A rule-based approach to implicit emotion detection in text

    Most research in the area of emotion detection in written text has focused on detecting explicit expressions of emotion. In this paper, we present a rule-based pipeline approach, grounded in the OCC Model, for detecting implicit emotions in written text that contains no emotion-bearing words. We have evaluated our approach on three different datasets with five emotion categories. Our results show that the proposed approach outperforms the lexicon matching method consistently across all three datasets by a large margin of 17–30% in F-measure, and gives competitive performance compared to a supervised classifier. In particular, when dealing with formal text which follows grammatical rules strictly, our approach gives an average F-measure of 82.7% on “Happy”, “Angry-Disgust” and “Sad”, even outperforming the supervised baseline by nearly 17% in F-measure. Our preliminary results show the feasibility of the approach for the task of implicit emotion detection in written text.
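The lexicon matching baseline the abstract compares against can be sketched in a few lines; this is an illustrative toy (the lexicon, category names, and examples are assumptions, not the paper's resources). It also shows why the baseline fails on implicit emotion: a sentence with no emotion-bearing words matches nothing.

```python
# Toy lexicon-matching baseline (assumed, not the paper's pipeline):
# label a sentence with every emotion whose lexicon words appear in it.
LEXICON = {
    "happy": {"happy", "joy", "delighted"},
    "sad": {"sad", "grief", "miserable"},
    "angry-disgust": {"angry", "furious", "disgusted"},
}

def lexicon_match(sentence):
    """Return the set of emotion labels whose lexicon words occur in the text."""
    tokens = set(sentence.lower().split())
    return {label for label, words in LEXICON.items() if tokens & words}

print(lexicon_match("She was delighted with the result"))   # {'happy'}
print(lexicon_match("He slammed the door and left"))        # set(): implicit anger is missed
```

A rule-based approach in the spirit of the abstract would instead infer an emotion from event structure (e.g. an undesirable event happening to the subject), which is exactly the case the empty second result leaves uncovered.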

    Automatic detection of a driver’s complex mental states

    Automatic classification of drivers’ mental states is an important yet relatively unexplored topic. In this paper, we define a taxonomy of a set of complex mental states that are relevant to driving, namely: Happy, Bothered, Concentrated and Confused. We present our video segmentation and annotation methodology for a spontaneous dataset of natural driving videos from 10 different drivers. We also present our real-time annotation tool used for labelling the dataset via an emotion perception experiment, and discuss the challenges faced in obtaining the ground-truth labels. Finally, we present a methodology for automatic classification of drivers’ mental states. We compare SVM models trained on our dataset with an existing nearest-neighbour model pre-trained on a posed dataset, using facial Action Units as input features. We demonstrate that our temporal SVM approach yields better results. The dataset’s extracted features and validated emotion labels, together with the annotation tool, will be made available to the research community.
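The nearest-neighbour baseline over facial Action Unit (AU) features can be sketched as follows. This is a hypothetical toy, not the authors' code: the AU vectors, labels, and the choice of Euclidean distance are assumptions for illustration.

```python
# Toy nearest-neighbour classifier over AU intensity vectors (assumed sketch,
# not the pre-trained baseline from the paper; data are synthetic).
import math

def nearest_neighbour(query, examples):
    """examples: list of (au_vector, label); return the label of the closest vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(examples, key=lambda ex: dist(query, ex[0]))[1]

TRAIN = [
    ([0.9, 0.8, 0.1], "Happy"),      # e.g. strong cheek-raiser and lip-corner-puller AUs
    ([0.1, 0.2, 0.9], "Confused"),   # e.g. strong brow-lowerer AU
]
print(nearest_neighbour([0.8, 0.7, 0.2], TRAIN))  # prints Happy
```

A temporal SVM, as favoured by the abstract, would replace the single-frame distance with features aggregated over a video segment; the per-frame AU representation stays the same.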