
    Neural correlates of enhanced visual short-term memory for angry faces: An fMRI study

    Copyright: © 2008 Jackson et al.

    Background: Fluid and effective social communication requires that both face identity and emotional expression information are encoded and maintained in visual short-term memory (VSTM) to enable a coherent, ongoing picture of the world and its players. This appears to be of particular evolutionary importance when confronted with potentially threatening displays of emotion: previous research has shown better VSTM for angry than for happy or neutral face identities.

    Methodology/Principal Findings: Using functional magnetic resonance imaging, we investigated the neural correlates of this angry face benefit in VSTM. Participants were shown between one and four to-be-remembered angry, happy, or neutral faces, and after a short retention delay they stated whether a single probe face had been present in the previous display. All faces in any one display expressed the same emotion, and the task required memory for face identity. We found enhanced VSTM for angry face identities and describe the right-hemisphere brain network underpinning this effect, which involves the globus pallidus, superior temporal sulcus, and frontal lobe. Increased activity in the globus pallidus was significantly correlated with the angry benefit in VSTM. Areas modulated by emotion were distinct from those modulated by memory load.

    Conclusions/Significance: Our results provide evidence for a key role of the basal ganglia as an interface between emotion and cognition, supported by a frontal, temporal, and occipital network.

    The authors were supported by a Wellcome Trust grant (grant number 077185/Z/05/Z) and by BBSRC (UK) grant BBS/B/16178.

    Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping

    We present an autoencoder-based semi-supervised approach to classifying perceived human emotions from walking styles obtained from videos or motion-captured data and represented as sequences of 3D poses. Given the motion of each joint in the pose at each time step, extracted from the 3D pose sequences, we hierarchically pool these joint motions in a bottom-up manner in the encoder, following the kinematic chains in the human body. We also constrain the latent embeddings of the encoder to contain the space of psychologically motivated affective features underlying the gaits. We train the decoder to reconstruct the motions per joint per time step in a top-down manner from the latent embeddings. For the annotated data, we also train a classifier to map the latent embeddings to emotion labels. Our semi-supervised approach achieves a mean average precision of 0.84 on the Emotion-Gait benchmark dataset, which contains both labeled and unlabeled gaits collected from multiple sources. We outperform current state-of-the-art algorithms for both emotion recognition and action recognition from 3D gaits by 7%--23% in absolute terms. More importantly, we improve the average precision by 10%--50% in absolute terms on classes that each make up less than 25% of the labeled part of the Emotion-Gait benchmark dataset. Comment: In proceedings of the 16th European Conference on Computer Vision, 2020. 18 pages, 5 figures, tables.
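    The bottom-up pooling over kinematic chains described above can be sketched as follows. The chain layout, the per-joint feature vectors, and the use of simple averaging are illustrative assumptions; the paper learns the pooling inside an autoencoder rather than averaging over fixed chains.

```python
# Sketch of bottom-up hierarchical pooling over kinematic chains.
# The chain layout and mean pooling are illustrative, not the paper's
# learned encoder.

def mean_pool(vectors):
    """Element-wise average of equal-length feature vectors."""
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

# Hypothetical kinematic chains: joints listed from extremity to root.
CHAINS = {
    "left_arm":  ["l_hand", "l_elbow", "l_shoulder"],
    "right_arm": ["r_hand", "r_elbow", "r_shoulder"],
    "left_leg":  ["l_foot", "l_knee", "l_hip"],
    "right_leg": ["r_foot", "r_knee", "r_hip"],
    "spine":     ["head", "neck", "torso"],
}

def hierarchical_pool(joint_features):
    """Pool per-joint motion features chain by chain, then pool the
    chain summaries into a single body-level embedding."""
    chain_embeddings = [
        mean_pool([joint_features[j] for j in joints])
        for joints in CHAINS.values()
    ]
    return mean_pool(chain_embeddings)
```

    In the paper, this embedding is additionally constrained to span affective gait features and is decoded back to per-joint motions; only the pooling structure is shown here.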

    None in Three: The Design and Development of a Low-cost Violence Prevention Game for the Caribbean Region

    Domestic violence is a persistent and universal problem occurring in every culture and social group, with lack of empathy identified as a contributing factor. On average, one in three women and girls in the Caribbean experience domestic violence in their lifetime. In this paper we describe the techniques used to create None in Three, a low-cost violence prevention game targeted at enhancing empathy and awareness among young people in Barbados and Grenada. A research trip was undertaken to gather photographic reference material and to meet with young people. Methods to measure the emotional state of players and their awareness of in-game characters were explored. Cost-saving measures, such as asset store purchases, were evaluated. Custom tools were created to speed up production, including a bespoke event editor for multiple-choice dialogue sequences, and motion-capture libraries and auto-rigging tools were used to accelerate character animation workflows.
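    A branching multiple-choice dialogue of the kind such an event editor would author can be represented with a very small data structure. The node fields, names, and lines below are hypothetical, not the actual None in Three data format.

```python
# Minimal sketch of a branching multiple-choice dialogue graph. All
# node ids, speakers, and lines are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class DialogueNode:
    speaker: str
    line: str
    # Each choice maps player-facing text to the id of the next node.
    choices: dict = field(default_factory=dict)

DIALOGUE = {
    "start":  DialogueNode("Ria", "Can we talk about last night?",
                           {"Listen": "listen", "Walk away": "end"}),
    "listen": DialogueNode("Player", "I'm listening.", {"Continue": "end"}),
    "end":    DialogueNode("Narrator", "The conversation ends.", {}),
}

def run_choice(node_id, choice):
    """Follow one player choice and return the next node id."""
    return DIALOGUE[node_id].choices[choice]
```

    An editor of this kind would emit such a graph as data, letting writers add scenes without touching game code.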

    The Assessment of Post-Vasectomy Pain in Mice Using Behaviour and the Mouse Grimace Scale

    Background: Current behaviour-based pain assessments for laboratory rodents have significant limitations. Assessment of changes in facial expression, as a novel means of pain scoring, may overcome some of these limitations. The Mouse Grimace Scale appears to offer a means of assessing post-operative pain in mice that is as effective as manual behaviour-based scoring, without the limitations of such schemes. Effective assessment of post-operative pain is critical not only for animal welfare but also for the validity of science using animal models.

    Methodology/Principal Findings: This study compared changes in behaviour, assessed using both an automated system ("HomeCageScan") and manual analysis, with changes in facial expression assessed using the Mouse Grimace Scale (MGS). Mice (n = 6/group) were assessed before and after surgery (scrotal approach vasectomy) and received saline, meloxicam, or bupivacaine. Both the MGS and manual scoring of pain behaviours identified clear differences between the pre- and post-surgery periods and between animals receiving analgesia (20 mg/kg meloxicam or 5 mg/kg bupivacaine) or saline post-operatively. The two assessments were highly correlated: animals with high MGS scores also exhibited high frequencies of pain behaviours. Automated behavioural analysis, in contrast, was only able to detect differences between the pre- and post-surgery periods.

    Conclusions: Both the Mouse Grimace Scale and manual scoring of pain behaviours appear to assess the same underlying pain experience.
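    The reported agreement between MGS scores and pain-behaviour frequencies is a correlation; the abstract does not name the coefficient used, so a Pearson correlation is shown here as an illustrative choice.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples.
    Assumes neither input is constant (non-zero standard deviations)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

    A value near +1 would correspond to the pattern described: animals scoring high on the MGS also show frequent pain behaviours.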

    The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, and from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing and computer vision) to investigate the processing of a wider range of natural facial expressions.

    Maternal Use of Antibiotics, Hospitalisation for Infection during Pregnancy, and Risk of Childhood Epilepsy: A Population-Based Cohort Study

    BACKGROUND: Maternal infection during pregnancy may be a risk factor for epilepsy in offspring. Use of antibiotics is a valid marker of infection.

    METHODOLOGY/PRINCIPAL FINDINGS: To examine the relationship between maternal infection during pregnancy and risk of childhood epilepsy, we conducted a historical cohort study of singletons born in northern Denmark from 1998 through 2008 who survived ≥29 days. We used population-based medical databases to ascertain maternal use of antibiotics or hospital contacts with infection during pregnancy, as well as first-time hospital contacts with a diagnosis of epilepsy among offspring. We compared incidence rates (IR) of epilepsy among children of mothers with and without infection during pregnancy. We examined the outcome according to trimester of exposure, type of antibiotic, and total number of prescriptions, using Poisson regression to estimate incidence rate ratios (IRRs) while adjusting for covariates. Among 191,383 children in the cohort, 948 (0.5%) were hospitalised or had an outpatient visit for epilepsy during follow-up, yielding an IR of 91 per 100,000 person-years (PY). The five-year cumulative incidence of epilepsy was 4.5 per 1,000 children. Among children exposed prenatally to maternal infection, the IR was 117 per 100,000 PY, with an adjusted IRR of 1.40 (95% confidence interval (CI): 1.22-1.61) compared with unexposed children. The association was unaffected by trimester of exposure, antibiotic type, or prescription count.

    CONCLUSIONS/SIGNIFICANCE: Prenatal exposure to maternal infection is associated with an increased risk of epilepsy in childhood. The similarity of estimates across types of antibiotics suggests that processes common to all infections underlie this outcome, rather than specific pathogens or drugs.
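    The rate comparison behind an IRR can be sketched as follows. The study used Poisson regression with covariate adjustment; this crude (unadjusted) version with a Wald confidence interval, and the example counts fed to it, are illustrative only, not the study's stratified data.

```python
import math

def crude_irr(cases_exp, py_exp, cases_unexp, py_unexp, z=1.96):
    """Crude incidence rate ratio between exposed and unexposed groups,
    with a Wald 95% CI computed on the log scale."""
    ir_exp = cases_exp / py_exp        # incidence rate, exposed
    ir_unexp = cases_unexp / py_unexp  # incidence rate, unexposed
    irr = ir_exp / ir_unexp
    se = math.sqrt(1 / cases_exp + 1 / cases_unexp)  # SE of log(IRR)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, (lo, hi)

# Illustrative counts only (hypothetical split of cases/person-years):
irr, ci = crude_irr(cases_exp=300, py_exp=256_000,
                    cases_unexp=648, py_unexp=786_000)
```

    Adjustment for covariates, as in the study's Poisson regression, would shift this crude estimate; the formula above shows only the core rate-ratio arithmetic.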

    Interrogating domain-domain interactions with parsimony based approaches

    Background: The identification and characterization of interacting domain pairs is an important step towards understanding protein interactions. In the last few years, several methods to predict domain interactions have been proposed. Understanding the power and the limitations of these methods is key to the development of improved approaches and to a better understanding of the nature of these interactions.

    Results: Building on the previously published Parsimonious Explanation (PE) method for predicting domain-domain interactions, we introduce a new Generalized Parsimonious Explanation (GPE) method, which (i) adjusts the granularity of the domain definition to the granularity of the input data set and (ii) permits domain interactions to have different costs. This allows for preferential selection of so-called "co-occurring domains" as possible mediators of interactions between proteins. The performance of both variants of the parsimony method is competitive with that of the top algorithms for this problem, even though parsimony methods use less information than some of the other methods. We also examined possible enrichment of co-occurring domains and homo-domains among domain interactions mediating the interaction of proteins in the network. The corresponding study was performed by surveying domain interactions predicted by the GPE method, as well as by using a combinatorial counting approach independent of any prediction method. Our findings indicate that, while there is a considerable propensity towards these special domain pairs among predicted domain interactions, this overrepresentation is significantly lower than in the iPfam dataset.

    Conclusion: The Generalized Parsimonious Explanation approach provides a new means to predict and study domain-domain interactions. We showed that, under the assumption that all protein interactions in the network are mediated by domain interactions, the properties of domain interactions mediating interactions in the network deviate significantly from those of the iPfam data.
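    The parsimony principle behind PE and GPE — explain all observed protein interactions with as small (or as cheap) a set of domain pairs as possible — can be illustrated with a greedy set-cover sketch. The published methods use an optimization formulation with per-pair costs; the greedy heuristic and toy data below are illustrative only.

```python
def greedy_parsimony(interactions, candidates):
    """interactions: set of protein pairs to explain.
    candidates: dict mapping a domain pair to the set of protein pairs
    it could mediate. Greedily pick the domain pair that explains the
    most still-unexplained interactions until all are covered."""
    uncovered = set(interactions)
    selected = []
    while uncovered and candidates:
        best = max(candidates, key=lambda d: len(candidates[d] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            break  # remaining interactions cannot be explained
        selected.append(best)
        uncovered -= gain
    return selected

# Toy example: two domain pairs suffice to explain three interactions.
interactions = {("P1", "P2"), ("P1", "P3"), ("P3", "P4")}
candidates = {
    ("A", "B"): {("P1", "P2"), ("P1", "P3")},
    ("C", "D"): {("P3", "P4")},
    ("E", "F"): {("P1", "P2")},
}
```

    GPE's per-pair costs would correspond here to weighting the `max` step, e.g. making co-occurring domain pairs cheaper to select.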

    Sex Differences in Neural Activation to Facial Expressions Denoting Contempt and Disgust

    The facial expression of contempt is thought to communicate feelings of moral superiority. Contempt is closely related to disgust, but in contrast to disgust it is inherently interpersonal and hierarchical. The aim of this study was twofold: first, to investigate the hypothesis of preferential amygdala responses to contempt expressions versus disgust; second, to investigate whether, at a neural level, men respond more strongly than women to biological signals of interpersonal superiority (e.g., contempt). We performed a functional magnetic resonance imaging (fMRI) experiment in which participants watched facial expressions of contempt and disgust in addition to neutral expressions. The faces were presented as distractors in an oddball task in which participants had to react to one target face. Facial expressions of contempt and disgust activated a network of brain regions, including prefrontal areas (superior, middle, and medial prefrontal gyrus), anterior cingulate, insula, amygdala, parietal cortex, fusiform gyrus, occipital cortex, putamen, and thalamus. Contemptuous faces did not elicit stronger amygdala activation than disgusted expressions. To limit the number of statistical comparisons, we confined our analyses of sex differences to the frontal and temporal lobes. Men displayed stronger brain activation than women to facial expressions of contempt in the medial frontal gyrus, inferior frontal gyrus, and superior temporal gyrus. Conversely, women showed stronger neural responses than men to facial expressions of disgust. In addition, the effect of stimulus sex differed for men versus women: women showed stronger responses to male contemptuous faces (as compared to female expressions) in the insula and middle frontal gyrus. Contempt has been conceptualized as signaling perceived moral violations of social hierarchy, whereas disgust signals violations of physical purity. Our results thus suggest a neural basis for sex differences in moral sensitivity regarding hierarchy on the one hand and physical purity on the other.

    Reciprocal Modulation of Cognitive and Emotional Aspects in Pianistic Performances

    Background: High-level piano performance requires complex integration of perceptual, motor, cognitive, and emotive skills. Observations in psychology and neuroscience studies have suggested reciprocal inhibitory modulation of cognition by emotion and of emotion by cognition. However, it is still unclear how cognitive states influence pianistic performance. The aim of the present study was to verify the influence of cognitive and affective attention on piano performances.

    Methods and Findings: Nine pianists were instructed to play the same piece of music twice: first focusing only on cognitive aspects of the musical structure (cognitive performances), and second paying attention solely to affective aspects (affective performances). Audio files from the performances were examined using a computational model that retrieves nine specific musical features (descriptors): loudness, articulation, brightness, harmonic complexity, event detection, key clarity, mode detection, pulse clarity, and repetition. In addition, the number of volunteers' errors in the recording sessions was counted, and comments from the pianists about their thoughts during the performances were evaluated. Analysis of the audio files using the musical descriptors indicated that, compared with the cognitive performances, the affective performances had more agogics, more legato, softer (piano) phrasing, and lower perceived event density. Error analysis demonstrated that volunteers misplayed more left-hand notes in the cognitive performances than in the affective ones. Volunteers also played more wrong notes in affective than in cognitive performances. These results correspond to the volunteers' comments that in the affective performances the cognitive aspects of piano execution are inhibited, whereas in the cognitive performances the expressiveness is inhibited.
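    The loudness descriptor retrieved by such a model can be approximated by frame-wise RMS energy. The abstract does not specify the computational model or its loudness definition, so the following is an illustrative stand-in rather than the study's implementation:

```python
import math

def rms_loudness(samples, frame_size=1024):
    """Frame-wise RMS energy, a simple proxy for perceived loudness.
    samples: a sequence of audio samples in [-1.0, 1.0]."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(x * x for x in f) / len(f)) for f in frames]
```

    A pure sine of amplitude a yields RMS values near a/√2, so comparing frame-wise curves between the cognitive and affective recordings would expose dynamics differences such as softer phrasing.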
    Conclusions: The present results indicate that attention to the emotional aspects of performance enhances expressiveness but constrains cognitive and motor skills in piano execution. In contrast, attention to the cognitive aspects may constrain the expressivity and automatism of piano performances.

    The study was supported by the Brazilian government research agency Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP), grants 08/54844-7 and 07/59826-4.