    EquiFACS: the Equine Facial Action Coding System

    Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now no methodology has been available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS) provide a systematic methodology for identifying and coding facial expressions on the basis of the underlying facial musculature and muscle movements. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed FACS for the domestic horse (Equus caballus) through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high-quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. The reliability with which others could learn this system (EquiFACS) and consistently code behavioural sequences was high, even among people with no previous experience of horses. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats). EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.
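
    The reliability result above rests on a standard FACS-style agreement check between independent coders. Below is a minimal sketch of such a check, using the agreement index (twice the number of jointly scored codes over the total number of codes scored by both coders) common in FACS reliability work; the action-unit codes are invented for illustration and are not taken from the paper.

    ```python
    # Inter-coder agreement for FACS-style action-unit coding (illustrative).

    def agreement_index(coder_a, coder_b):
        """Agreement index: 2 * (codes both coders scored) / (total codes scored)."""
        a, b = set(coder_a), set(coder_b)
        total = len(a) + len(b)
        return 2 * len(a & b) / total if total else 1.0

    # Two hypothetical coders scoring the same video clip (placeholder codes):
    clip_a = ["AU101", "AU145", "AUD1"]
    clip_b = ["AU101", "AU145", "AU47"]
    print(f"agreement = {agreement_index(clip_a, clip_b):.2f}")  # 0.67
    ```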

    Dogs and humans respond to emotionally competent stimuli by producing different facial actions

    The commonality of facial expressions of emotion has been studied in different species since Darwin, with most of the research focusing on closely related primate species. However, it is unclear to what extent common facial expressions exist in species that are more phylogenetically distant but share a need for interspecific emotional understanding. Here we used the objective, anatomically based tools FACS and DogFACS (Facial Action Coding Systems) to quantify and compare human and domestic dog facial expressions in response to emotionally competent stimuli associated with different categories of emotional arousal. We sought to answer two questions. First, do dogs display specific discriminatory facial movements in response to different categories of emotional stimuli? Second, do dogs display facial movements similar to those of humans when reacting in emotionally comparable contexts? We found that dogs displayed distinctive facial actions depending on the category of stimuli. However, dogs produced facial movements different from those of humans in comparable states of emotional arousal. These results refute the commonality of emotional expression across mammals, since dogs do not display human-like facial expressions. Given the unique interspecific relationship between dogs and humans, two highly social but evolutionarily distant species sharing a common environment, these findings give new insight into the origin of emotion expression.
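
    Testing whether particular facial actions discriminate between stimulus categories typically reduces to a contingency-table comparison of action-unit frequencies. The sketch below shows one such test; the categories, action-unit labels, and counts are all invented placeholders, not the study's data.

    ```python
    # Do action-unit frequencies differ across emotional stimulus categories?
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: stimulus categories; columns: action units (all labels hypothetical).
    counts = np.array([[30, 12,  5],    # e.g. fear-inducing stimuli
                       [10, 25,  8],    # e.g. frustration
                       [ 6,  9, 28]])   # e.g. positive anticipation
    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
    ```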

    Medial prefrontal cortex serotonin 1A and 2A receptor binding interacts to predict threat-related amygdala reactivity

    Background: The amygdala and medial prefrontal cortex (mPFC) comprise a key corticolimbic circuit that helps shape individual differences in sensitivity to threat and the related risk for psychopathology. Although serotonin (5-HT) is known to be a key modulator of this circuit, the specific receptors mediating this modulation are unclear. The colocalization of 5-HT1A and 5-HT2A receptors on mPFC glutamatergic neurons suggests that their functional interactions may mediate 5-HT effects on this circuit through top-down regulation of amygdala reactivity. Using a multimodal neuroimaging strategy in 39 healthy volunteers, we determined whether threat-related amygdala reactivity, assessed with blood oxygen level-dependent functional magnetic resonance imaging, was significantly predicted by the interaction between mPFC 5-HT1A and 5-HT2A receptor levels, assessed by positron emission tomography.

    Results: 5-HT1A binding in the mPFC significantly moderated an inverse correlation between mPFC 5-HT2A binding and threat-related amygdala reactivity. Specifically, mPFC 5-HT2A binding was significantly inversely correlated with amygdala reactivity only when mPFC 5-HT1A binding was relatively low.

    Conclusions: Our findings provide evidence that 5-HT1A and 5-HT2A receptors interact to shape serotonergic modulation of a functional circuit between the amygdala and mPFC. The interaction between mPFC 5-HT1A and 5-HT2A binding in predicting amygdala reactivity is consistent with the colocalization of these receptors on glutamatergic neurons in the mPFC.
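
    The analysis described is a moderation (interaction) regression followed by a simple-slopes probe. Here is a minimal sketch of that model form using statsmodels on simulated data; the variable names and simulated effect sizes are assumptions, and only the model structure follows the abstract.

    ```python
    # Moderation model sketch: amygdala reactivity ~ 5-HT1A x 5-HT2A binding.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 39                              # sample size reported in the abstract
    ht1a = rng.normal(size=n)           # mPFC 5-HT1A binding (standardized)
    ht2a = rng.normal(size=n)           # mPFC 5-HT2A binding (standardized)
    # Simulated outcome with the reported pattern: the inverse 5-HT2A effect
    # is strongest when 5-HT1A binding is low (effect sizes are assumptions).
    amygdala = -0.5 * ht2a + 0.4 * ht1a * ht2a + rng.normal(scale=0.5, size=n)

    df = pd.DataFrame({"amygdala": amygdala, "ht1a": ht1a, "ht2a": ht2a})
    fit = smf.ols("amygdala ~ ht1a * ht2a", data=df).fit()
    print(fit.params)

    # Simple-slopes probe: 5-HT2A slope at low (-1 SD) vs high (+1 SD) 5-HT1A.
    for level in (-1.0, 1.0):
        slope = fit.params["ht2a"] + fit.params["ht1a:ht2a"] * level
        print(f"5-HT2A slope at 5-HT1A = {level:+.0f} SD: {slope:.2f}")
    ```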

    Mood Induction in Depressive Patients: A Comparative Multidimensional Approach

    Anhedonia, reduced positive affect and enhanced negative affect are integral characteristics of major depressive disorder (MDD). Emotion dysregulation, e.g. in terms of various emotion-processing deficits, has consistently been reported. The aim of the present study was to investigate mood changes in depressive patients using a multidimensional approach to measuring emotional reactivity to mood induction procedures. Experimentally, mood states can be altered using various mood induction procedures. The present study aimed at validating two different positive mood induction procedures in patients with MDD and investigating which procedure is more effective and applicable in detecting dysfunctions in MDD. The first procedure relied on the presentation of happy vs. neutral faces, while the second used funny vs. neutral cartoons. Emotional reactivity was assessed in 16 depressed and 16 healthy subjects using self-report measures, measurements of electrodermal activity and standardized analyses of facial responses. Positive mood induction was successful in both procedures according to subjective ratings in patients and controls. In the cartoon condition, however, patients showed a discrepancy between reduced facial activity and concurrently enhanced autonomic reactivity. The multidimensional assessment technique yielded a more comprehensive estimate of dysfunctions in emotional reactivity in MDD than self-report measures alone, and this was revealed especially by the cartoon-based mood induction procedure. The divergent facial and autonomic responses in the presence of unaffected subjective reactivity suggest an underlying deficit in the patients' ability to express the felt arousal to funny cartoons. Our results encourage the application of both procedures in functional imaging studies investigating the neural substrates of emotion dysregulation in MDD patients. Mood induction via cartoons appears to be superior to mood induction via faces and autobiographical material in uncovering specific emotional dysfunctions in MDD.
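
    One way to see how a multidimensional assessment separates the groups where self-report alone would not is to compute a condition change score per measurement channel and compare patients with controls channel by channel. The sketch below simulates that logic; the data, effect directions, and channel names are placeholders shaped only by the pattern the abstract reports.

    ```python
    # Per-channel reactivity comparison: patients vs. controls (simulated data).
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(2)
    n = 16                                   # per-group sample size from the study
    # Simulated group differences per channel, following the reported pattern:
    # intact self-report, raised electrodermal response, reduced facial activity.
    shift = {"self_report": 0.0, "electrodermal": 0.6, "facial_activity": -0.8}

    for channel, delta in shift.items():
        controls = rng.normal(loc=1.0, size=n)          # funny-minus-neutral score
        patients = rng.normal(loc=1.0 + delta, size=n)
        t, p = ttest_ind(patients, controls)
        print(f"{channel:>15}: t = {t:+.2f}, p = {p:.3f}")
    ```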

    Music-aided affective interaction between human and service robot

    This study proposes a music-aided framework for affective interaction of service robots with humans. The framework consists of three systems, for perception, memory, and expression respectively, modelled on human brain mechanisms. We propose a novel approach to identifying human emotions in the perception system. Conventional approaches use speech and facial expressions as representative bimodal indicators for emotion recognition. Our approach, however, uses the mood of music as a supplementary indicator to determine emotions more accurately alongside speech and facial expressions. For multimodal emotion recognition, we propose an effective decision criterion that uses records of bimodal recognition results relevant to the musical mood. The memory and expression systems also utilize musical data to provide natural and affective reactions to human emotions. To evaluate our approach, we simulated the proposed human-robot interaction with a service robot, iRobiQ. Our perception system exhibited superior performance over the conventional approach, and most human participants responded favorably to the music-aided affective interaction.
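
    The abstract does not spell out the decision criterion, so the sketch below shows one plausible reading: decision-level fusion in which speech and face classifier posteriors are combined with a prior derived from the mood of the music being played. The emotion classes, weights, and log-linear combination are all illustrative assumptions, not the paper's exact method.

    ```python
    # Decision-level fusion of speech/face posteriors with a music-mood prior.
    import numpy as np

    EMOTIONS = ["happy", "sad", "angry", "neutral"]

    def fuse(speech_post, face_post, music_prior,
             w_speech=0.4, w_face=0.4, w_music=0.2):
        """Weighted log-linear fusion of two modality posteriors and a prior."""
        scores = (w_speech * np.log(speech_post)
                  + w_face * np.log(face_post)
                  + w_music * np.log(music_prior))
        return EMOTIONS[int(np.argmax(scores))]

    speech = np.array([0.50, 0.20, 0.10, 0.20])   # hypothetical classifier outputs
    face   = np.array([0.30, 0.30, 0.10, 0.30])
    music  = np.array([0.60, 0.10, 0.05, 0.25])   # upbeat track -> "happy" prior
    print(fuse(speech, face, music))              # -> "happy"
    ```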

    Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is a focus of research in fields as diverse as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use facial information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings to make interaction more natural and to improve system performance. Here, we experimentally test whether automatic prediction of facial trait judgments (e.g. dominance) can be made using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of the perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable predictions of facial trait judgments are obtained by certain types of holistic descriptions of facial appearance; and c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
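
    The holistic-versus-structural comparison comes down to training the same learner on two feature representations: a flattened appearance vector, and a vector of pairwise distances among facial landmarks. A minimal sketch with scikit-learn follows; the random arrays stand in for real face images, landmarks, and trait ratings, and the choice of SVR is an assumption rather than the paper's exact learner.

    ```python
    # Holistic (appearance) vs. structural (landmark-distance) trait prediction.
    from itertools import combinations
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    n_faces = 200
    trait = rng.normal(size=n_faces)             # e.g. dominance ratings (placeholder)

    # Holistic model: flattened grayscale images (placeholder pixel values).
    appearance = rng.normal(size=(n_faces, 32 * 32))

    # Structural model: pairwise distances among facial salient points.
    landmarks = rng.normal(size=(n_faces, 20, 2))  # 20 (x, y) points per face
    structural = np.array([
        [np.linalg.norm(f[i] - f[j]) for i, j in combinations(range(20), 2)]
        for f in landmarks
    ])

    # With random placeholders the scores are uninformative; with real data
    # this is the head-to-head comparison the study describes.
    for name, X in [("holistic", appearance), ("structural", structural)]:
        score = cross_val_score(SVR(), X, trait, cv=5).mean()
        print(f"{name}: mean R^2 = {score:.2f}")
    ```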

    Structural View of a Non Pfam Singleton and Crystal Packing Analysis

    Comparative genomic analysis has revealed that each genome contains a large number of open reading frames with no homologues in other species. Such singleton genes have attracted the attention of biochemists and structural biologists as a potential untapped source of new folds. Cthe_2751 is a 15.8 kDa singleton from the anaerobic hyperthermophile Clostridium thermocellum. To gain insight into the architecture of the protein and obtain clues about its function, we solved the structure of Cthe_2751.

    The protein crystallized in 4 different space groups, with crystals diffracting X-rays to 2.37 Å (P3(1)21), 2.17 Å (P2(1)2(1)2(1)), 3.01 Å (P4(1)22), and 2.03 Å (C222(1)) resolution. Crystal packing analysis revealed that the 3-D packing of Cthe_2751 dimers in P4(1)22 and C222(1) is similar, with only a rotational difference of 2.69° around the C axes. A new method developed to quantify the differences in packing of dimers in crystals from different space groups corroborated the findings of the crystal packing analysis. Cthe_2751 is an all α-helical protein with a central hydrophobic core that provides thermal stability via π:cation and π:π interactions. A ProFunc analysis retrieved a very low match with a splicing endonuclease, suggesting a role for the protein in the processing of nucleic acids.

    Non-Pfam singleton Cthe_2751 folds into a known all α-helical fold. The structure increases the sequence coverage of non-Pfam proteins, making more protein sequences amenable to modelling. Our work on crystal packing analysis provides a new method for analyzing dimers of a protein crystallized in different space groups. Such analysis can be extended to oligomeric structures of other proteins, especially receptors and signaling molecules, many of which are known to function as oligomers.
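
    Quantifying a small rotational difference between dimer packings, such as the 2.69° reported above, amounts to recovering the angle of the rotation matrix that superposes one dimer onto the other (e.g. from a least-squares fit of equivalent Cα atoms). Below is a minimal sketch of the angle-recovery step; the rotation matrix is a constructed example, not data from the paper.

    ```python
    # Recover the rotation angle of a 3x3 rotation matrix from its trace.
    import numpy as np

    def rotation_angle_deg(R):
        """Angle via theta = arccos((tr(R) - 1) / 2), clipped for safety."""
        cos_theta = (np.trace(R) - 1.0) / 2.0
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    theta = np.radians(2.69)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],   # small rotation
                  [np.sin(theta),  np.cos(theta), 0.0],   # about the z axis
                  [0.0,            0.0,           1.0]])
    print(f"rotation between packings: {rotation_angle_deg(R):.2f} degrees")  # 2.69
    ```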

    How private is private information? The ability to spot deception in an economic game

    We provide experimental evidence on the ability to detect deceit in a buyer–seller game with asymmetric information. Sellers have private information about the value of a good and sometimes have incentives to mislead buyers. We examine whether buyers can spot deception in face-to-face encounters. We vary whether buyers can interrogate the seller, as well as the contextual richness of the encounter. The buyers' prediction accuracy is above chance, and is substantial for confident buyers. There is no evidence that the option to interrogate is important, and only weak support that contextual richness matters. These results show that the information asymmetry is partly eliminated by people's ability to spot deception.
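
    "Above chance" here is the kind of claim a one-sided binomial test settles: compare the proportion of correct buyer judgments against 0.5. A minimal sketch with SciPy; the counts are invented for illustration and do not come from the experiment.

    ```python
    # Is buyers' deception-detection accuracy above the 50% chance level?
    from scipy.stats import binomtest

    correct, total = 130, 220          # hypothetical buyer judgments
    result = binomtest(correct, total, p=0.5, alternative="greater")
    print(f"accuracy = {correct / total:.2f}, one-sided p = {result.pvalue:.4f}")
    ```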