
    Representational structure of fMRI/EEG responses to dynamic facial expressions

    Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has looked into the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying set of stimuli containing subtle perceptual changes. In the current study, we used 48 short videos varying dimensionally in their intensity and category (happy, angry, surprised) of expression. We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of image-based models. The superior temporal sulcus (STS), inferior temporal (IT) and lateral occipital (LO) areas contained information about both expression category and intensity. In the EEG, the coding of expression category and low-level image features was most pronounced at around 400 ms. The expression intensity model did not, however, correlate significantly at any EEG timepoint. Our results show a specific role for IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity. Peer reviewed

    Spatio-temporal dynamics of face perception

    The temporal and spatial neural processing of faces has been investigated rigorously, but few studies have unified these dimensions to reveal the spatio-temporal dynamics postulated by the models of face processing. We used support vector machine decoding and representational similarity analysis to combine information from different locations (fMRI), time windows (EEG), and theoretical models. By correlating representational dissimilarity matrices (RDMs) derived from multiple pairwise classifications of neural responses to different facial expressions (neutral, happy, fearful, angry), we found early EEG time windows (starting around 130 ms) to match fMRI data from primary visual cortex (V1), and later time windows (starting around 190 ms) to match data from lateral occipital, fusiform face complex, and temporal-parietal-occipital junction (TPOJ). According to model comparisons, the EEG classification results were based more on low-level visual features than expression intensities or categories. In fMRI, the model comparisons revealed change along the processing hierarchy, from low-level visual feature coding in V1 to coding of intensity of expressions in the right TPOJ. The results highlight the importance of a multimodal approach for understanding the functional roles of different brain regions in face processing.
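    The RDM-correlation step described above can be sketched as follows. This is a minimal illustration using synthetic data, not the authors' analysis code: the variable names, the four-condition design, and the idealised fMRI RDM (constructed here as a monotone transform of the EEG RDM, so the two agree perfectly in rank order) are all assumptions for demonstration.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    n_conditions = 4  # e.g. neutral, happy, fearful, angry
    n_pairs = n_conditions * (n_conditions - 1) // 2  # off-diagonal condition pairs

    # Hypothetical RDM for one EEG time window: in practice each entry would be
    # a pairwise decoding dissimilarity between two expression conditions.
    eeg_rdm_130ms = rng.random(n_pairs)

    # Idealised fMRI ROI RDM: a monotone transform of the EEG RDM, so the rank
    # order is identical by construction (a positive control, not real data).
    fmri_rdm_v1 = eeg_rdm_130ms ** 2

    # RSA compares RDMs with a rank correlation over the condition-pair entries.
    rho, p = spearmanr(eeg_rdm_130ms, fmri_rdm_v1)
    print(f"Spearman rho = {rho:.2f}")  # → Spearman rho = 1.00
    ```

    In a full analysis this correlation would be repeated for every EEG time window and every fMRI region, producing the time-by-region matching profile described in the abstract.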

    Obligatory integration of face features in expression discrimination

    Previous composite face studies have shown that an unattended face half that differs in identity or in expression from the attended face half distracts face perception. These studies have typically not controlled for the amount of information in different face halves. We investigated feature integration while participants discriminated angry and happy expressions. The stimuli were scaled using individual thresholds to control the expression strength in face halves. In the first experiment, we varied the relative amount of information in upper and lower parts. In the second experiment, participants tried to ignore one half of the face, which was either congruent or incongruent with the attended half. We found both beneficial and obligatory integration of face halves. A robust face composite effect was found both when attending to the eyes or the mouth, and for both congruent and incongruent expressions, suggesting similar processing of face halves when the amount of information is controlled for.

    Exploring Musical Activities and Their Relationship to Emotional Well-Being in Elderly People across Europe : A Study Protocol

    Music is a powerful, pleasurable stimulus that can induce positive feelings and can therefore be used for emotional self-regulation. Musical activities such as listening to music, playing an instrument, singing or dancing are also an important source of social contact, promoting interaction and a sense of belonging with others. Recent evidence has suggested that after retirement, other functions of music, such as self-conceptual processing related to autobiographical memories, become more salient. However, few studies have addressed the meaningfulness of music in the elderly. This study aims to investigate elderly people's habits and preferences related to music, study the role music plays in their everyday life, and explore the relationship between musical activities and emotional well-being across different countries of Europe. A survey will be administered to elderly people over the age of 65 from five different European countries (Bosnia and Herzegovina, Czechia, Germany, Ireland, and the UK) and to a control group. Participants in both groups will be asked about basic sociodemographic information, habits and preferences in their participation in musical activities, and emotional well-being. Overall, the aim of this study is to gain a deeper understanding of the role of music in the elderly from a psychological perspective. This advanced knowledge could help to develop therapeutic applications, such as musical recreational programs for healthy older people or elderly people in residential care, which are better able to meet their emotional and social needs.

    Extracting locations from sport and exercise-related social media messages using a neural network-based bilingual toponym recognition model

    Sport and exercise contribute to health and well-being in cities. While previous research has mainly focused on activities at specific locations such as sport facilities, "informal sports" that occur at arbitrary locations across the city have been largely neglected. Such activities are more challenging to observe, but this challenge may be addressed using data collected from social media platforms, because social media users regularly generate content related to sports and exercise at given locations. This makes it possible to study all sports, including informal sports at arbitrary locations, and thus to better understand sports- and exercise-related activities in cities. However, user-generated geographical information available on social media platforms is becoming scarcer and coarser. This places increased emphasis on extracting location information from free-form text content on social media, which is complicated by multilingualism and informal language. To support this effort, this article presents an end-to-end deep-learning-based bilingual toponym recognition model for extracting location information from social media content related to sports and exercise. We show that our approach outperforms five state-of-the-art deep learning and machine learning models. We further demonstrate how our model can be deployed in a geoparsing framework to support city planners in promoting healthy and active lifestyles.
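    To make the toponym-recognition task concrete, here is a deliberately simplified gazetteer-matching sketch. This is not the neural model described in the abstract — the paper's approach is a deep-learning token classifier — and the place names, message text, and function name below are all hypothetical examples chosen for illustration.

    ```python
    import re

    # Hypothetical gazetteer of place names; a real geoparsing pipeline would
    # instead tag toponym spans with a trained token-classification model.
    gazetteer = {"Helsinki", "Kaisaniemi Park", "Töölö Bay"}

    def extract_toponyms(text: str, gazetteer: set[str]) -> list[str]:
        """Return gazetteer entries found in the text, longest names first,
        so multi-word places are matched before their substrings."""
        found = []
        for place in sorted(gazetteer, key=len, reverse=True):
            if re.search(re.escape(place), text, flags=re.IGNORECASE):
                found.append(place)
        return found

    msg = "Morning run around Töölö Bay, then a swim in Helsinki!"
    print(extract_toponyms(msg, gazetteer))  # → ['Töölö Bay', 'Helsinki']
    ```

    A gazetteer lookup like this fails on misspellings, informal names, and ambiguous words, which is precisely why the abstract argues for a learned bilingual model robust to informal social media language.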