
    A data-driven investigation of human action representations

    Understanding actions performed by others requires us to integrate different types of information about people, scenes, objects, and their interactions. What organizing dimensions does the mind use to make sense of this complex action space? To address this question, we collected intuitive similarity judgments across two large-scale sets of naturalistic videos depicting everyday actions. We used cross-validated sparse non-negative matrix factorization (NMF) to identify the structure underlying action similarity judgments. A low-dimensional representation, consisting of nine to ten dimensions, was sufficient to accurately reconstruct human similarity judgments. The dimensions were robust to stimulus set perturbations and reproducible in a separate odd-one-out experiment. Human labels mapped these dimensions onto semantic axes relating to food, work, and home life; social axes relating to people and emotions; and one visual axis related to scene setting. While highly interpretable, these dimensions did not share a clear one-to-one correspondence with prior hypotheses of action-relevant dimensions. Together, our results reveal a low-dimensional set of robust and interpretable dimensions that organize intuitive action similarity judgments and highlight the importance of data-driven investigations of behavioral representations.
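
    The dimensionality-reduction step named above can be illustrated with a minimal sketch: factorizing a non-negative similarity matrix with sparse NMF and comparing reconstructions across candidate dimensionalities. The toy data, parameter values, and use of scikit-learn below are assumptions for illustration, not the authors' cross-validated pipeline.

```python
# Hypothetical sketch: sparse NMF on a pairwise similarity matrix,
# scanning the number of dimensions (the abstract reports nine to ten).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_videos = 60
latent = rng.random((n_videos, 9))            # toy ground-truth structure
similarity = latent @ latent.T
similarity /= similarity.max()                # non-negative, scaled to [0, 1]

for k in (5, 9, 10, 15):
    model = NMF(n_components=k, init="nndsvda", l1_ratio=1.0,
                alpha_W=0.01, max_iter=1000, random_state=0)
    W = model.fit_transform(similarity)       # items x dimensions
    H = model.components_                     # dimensions x items
    r = np.corrcoef(similarity.ravel(), (W @ H).ravel())[0, 1]
    print(f"k={k:2d}  reconstruction r = {r:.3f}")
```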

    Event Structure In Vision And Language

    Our visual experience is surprisingly rich: We see not only low-level properties such as colors or contours; we also see events, or what is happening. Within linguistics, the examination of how we talk about events suggests that relatively abstract elements exist in the mind which pertain to the relational structure of events, including general thematic roles (e.g., Agent), Causation, Motion, and Transfer. For example, “Alex gave Jesse flowers” and “Jesse gave Alex flowers” both refer to an event of transfer, with the directionality of the transfer having different social consequences. The goal of the present research is to examine the extent to which abstract event information of this sort (event structure) is generated in visual perceptual processing. Do we perceive this information, just as we do with more ‘traditional’ visual properties like color and shape? In the first study (Chapter 2), I used a novel behavioral paradigm to show that event roles – who is acting on whom – are rapidly and automatically extracted from visual scenes, even when participants are engaged in an orthogonal task, such as color or gender identification. In the second study (Chapter 3), I provided functional magnetic resonance imaging (fMRI) evidence for commonality in content between neural representations elicited by static snapshots of actions and by full, dynamic action sequences. These two studies suggest that relatively abstract representations of events are spontaneously extracted from sparse visual information. In the final study (Chapter 4), I return to language, the initial inspiration for my investigations of events in vision. Here I test the hypothesis that the human brain represents verbs in part via their associated event structures. Using a model of verbs based on event-structure semantic features (e.g., Cause, Motion, Transfer), it was possible to successfully predict fMRI responses in language-selective brain regions as people engaged in real-time comprehension of naturalistic speech. Taken together, my research reveals that in both perception and language, the mind rapidly constructs a representation of the world that includes events with relational structure.
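
    The final step described above, predicting fMRI responses from event-structure features of verbs, follows the general logic of a voxelwise encoding model. The sketch below is a hypothetical, simplified illustration of that logic using ridge regression on synthetic data; the feature names, sizes, and regularization choice are assumptions, not the study's actual pipeline.

```python
# Hypothetical encoding-model sketch: predict voxel responses from
# event-structure features (e.g., Cause, Motion, Transfer) via ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_timepoints, n_features, n_voxels = 300, 6, 50
X = rng.standard_normal((n_timepoints, n_features))     # feature time courses
true_w = rng.standard_normal((n_features, n_voxels))
Y = X @ true_w + rng.standard_normal((n_timepoints, n_voxels))  # synthetic responses

# Cross-validated prediction accuracy for one example voxel.
scores = cross_val_score(Ridge(alpha=1.0), X, Y[:, 0], cv=5, scoring="r2")
print(f"mean cross-validated R^2 = {scores.mean():.2f}")
```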

    From Observed Action Identity to Social Affordances

    Others' observed actions cause continuously changing retinal images, making it challenging to build neural representations of action identity. The monkey anterior intraparietal area (AIP) and its putative human homologue (phAIP) host neurons selective for observed manipulative actions (OMAs). The neuronal activity of both AIP and phAIP allows a stable readout of OMA identity across visual formats, but human neurons exhibit greater invariance and generalize from observed actions to action verbs. These properties stem from the convergence in AIP of superior temporal signals concerning (i) observed body movements and (ii) the changes in the body-object relationship. We propose that evolutionarily preserved mechanisms underlie the specification of observed-action identity and the selection of motor responses afforded by them, thereby promoting social behavior.

    The cognitive and neural representations of actions

    The aim of this thesis is to explore how people assign meaning to actions. For that reason, I investigated the cognitive and neural structures underlying action representations, taking into account the key features of actions and the categories that these actions form. The thesis consists of five chapters. The first chapter provides a general background on the topic of action understanding and the methods I chose for analyzing the data. Chapters 2-4 present the three main studies that I conducted throughout the PhD project. Chapter 5 includes a general discussion, the implications of the studies, their limitations, and ideas for future studies. The article presented in Chapter 2 was published in Behavior Research Methods (impact factor = 5.95) on July 5th, 2022. The manuscript in Chapter 3 has been submitted to Human Brain Mapping (impact factor = 5.40) and is currently in revision. All references have been combined into one bibliography at the end of the thesis. Supplementary files and figures for all three manuscripts have been merged in the Appendix following Chapter 5. No further changes have been made to the text of the published article. The PhD project has been funded by a Research Grant from the German Research Foundation (Li 2840/1-1). I was supported by the Research Grant from the German Research Foundation (Li 2840/1-1) and by a stipend from the Finanzielles Anreizsystem zur Förderung der Gleichstellung.

    Neuroimaging investigations of cortical specialisation for different types of semantic knowledge

    Embodied theories propose that semantic knowledge is grounded in motor and perceptual experiences. This leads to two questions: (1) whether the neural underpinnings of perception are also necessary for semantic cognition; and (2) how biases towards different sensorimotor experiences cause brain regions to specialise for particular types of semantic information. This thesis tackles these questions in a series of neuroimaging and behavioural investigations. Regarding question 1, strong embodiment theory holds that semantic representation is the reenactment of corresponding experiences, and brain regions for perception are necessary for comprehending modality-specific concepts. However, the weak embodiment view argues that reenactment may not be necessary, and areas near perceptual regions may be sufficient to support semantic representation. In the particular case of motion concepts, lateral occipital temporal cortex (LOTC) has long been identified as an important area, but the roles of its different subregions are still uncertain. Chapter 3 examined how different parts of LOTC responded to written descriptions of motion and static events, using multiple analysis methods. A series of anterior-to-posterior sub-regions were analyzed through univariate, multivariate pattern analysis (MVPA), and psychophysiological interaction (PPI) analyses. MVPA revealed the strongest decoding effects for motion vs. static events in the posterior parts of LOTC, including both the visual motion area (V5) and posterior middle temporal gyrus (pMTG). In contrast, only the middle portion of LOTC showed increased activation for motion sentences in univariate analyses. PPI analyses showed increased functional connectivity between posterior LOTC and the multiple demand network for motion events. These findings suggest that posterior LOTC, which overlapped with the motion perception V5 region, is selectively involved in comprehending motion events, while the anterior part of LOTC contributes to general semantic processing. Regarding question 2, the hub-and-spoke theory suggests that the anterior temporal lobe (ATL) acts as a hub, using inputs from modality-specific regions to construct multimodal concepts. However, some researchers propose temporal parietal cortex (TPC) as an additional hub, specialised in processing and integrating interaction and contextual information (e.g., for actions and locations). These hypotheses are summarized as the "dual-hub theory", and different aspects of this theory were investigated in Chapters 4 and 5. Chapter 4 focuses on taxonomic and thematic relations. Taxonomic relations (or categorical relations) occur when two concepts belong to the same category (e.g., ‘dog’ and ‘wolf’ are both canines). In contrast, thematic relations (or associative relations) refer to situations in which two concepts co-occur in events or scenes (e.g., ‘dog’ and ‘bone’), focusing on the interaction or association between concepts. Some studies have indicated ATL specialization for taxonomic relations and TPC specialization for thematic relations, but others have reported inconsistent or even contrary results. Thus Chapter 4 first conducted an activation likelihood estimation (ALE) meta-analysis of neuroimaging studies contrasting taxonomic and thematic relations. This found that thematic relations reliably engage action and location processing regions (left pMTG and SMG), while taxonomic relations only showed consistent effects in the right occipital lobe.
A primed semantic judgement task was then used to test the dual-hub theory’s prediction that taxonomic relations rely heavily on colour and shape knowledge, while thematic relations rely on action and location knowledge. This behavioural experiment revealed that action or location priming facilitated thematic relation processing, but colour and shape priming did not produce similar effects for taxonomic relations. This indicates that thematic relations rely more on action and location knowledge, which may explain why they preferentially engage TPC, whereas taxonomic relations are not specifically linked to shape and colour features, which may explain why they did not preferentially engage left ATL. Chapter 5 concentrates on event and object concepts. Previous studies suggest ATL specialization for coding the similarity of objects’ semantics, and angular gyrus (AG) specialization for sentence and event structure representation. In addition, in neuroimaging studies, event semantics are usually investigated using complex, temporally extended stimuli, unlike the single-concept stimuli used to investigate object semantics. Thus Chapter 5 used representational similarity analysis (RSA), univariate analysis, and PPI analysis to explore neural activation patterns for event and object concepts presented as static images. Bilateral AGs encoded semantic similarity for event concepts, with the left AG also coding object similarity. Bilateral ATLs encoded semantic similarity for object concepts but also for events. Left ATL exhibited stronger coding for events than objects. PPI analysis revealed stronger connections between left ATL and right pMTG, and between right AG and bilateral inferior temporal gyrus (ITG) and middle occipital gyrus, for event concepts compared to object concepts. Consistent with the meta-analysis in Chapter 4, the results in Chapter 5 support the idea of partial specialization in AG for event semantics but do not support ATL specialization for object semantics. In fact, both the meta-analysis and the Chapter 5 findings suggest greater ATL involvement in coding objects' associations compared to their similarity. To conclude, the thesis provides support for the idea that perceptual brain regions are engaged in conceptual processing, in the case of motion concepts. It also provides evidence for a specialised role for TPC regions in processing thematic relations (pMTG) and event concepts (AG). There was mixed evidence for specialisation within the ATLs, and this remains an important target for future research.
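
    As a concrete illustration of the RSA logic used in Chapter 5, the sketch below compares a neural representational dissimilarity matrix (RDM) with a model RDM on synthetic data; the variable names, sizes, and distance/correlation choices are assumptions, not the thesis's actual analysis.

```python
# Hypothetical RSA sketch: correlate a neural RDM with a model RDM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_conditions, n_voxels = 40, 200
neural_patterns = rng.standard_normal((n_conditions, n_voxels))  # e.g. condition-wise betas
model_features = rng.standard_normal((n_conditions, 10))         # e.g. semantic features

neural_rdm = pdist(neural_patterns, metric="correlation")  # condensed upper triangle
model_rdm = pdist(model_features, metric="correlation")

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA: Spearman rho = {rho:.3f}, p = {p:.3g}")
```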

    Spatio-temporal dynamics of oscillatory brain activity during the observation of actions and interactions between point-light agents

    Predicting actions from non-verbal cues and using them to optimise one's response behaviour (i.e. interpersonal predictive coding) is essential in everyday social interactions. We aimed to investigate the neural correlates of different cognitive processes evolving over time during interpersonal predictive coding. Thirty-nine participants watched two agents depicted by moving point-light stimuli while an electroencephalogram (EEG) was recorded. One well-recognizable agent performed either a 'communicative' or an 'individual' action. The second agent was either blended into a cluster of noise dots (i.e. present) or entirely replaced by noise dots (i.e. absent), and participants had to distinguish between these conditions. EEG amplitude and coherence analyses for the theta, alpha and beta frequency bands revealed a dynamic pattern unfolding over time: Watching communicative actions was associated with enhanced coupling within medial anterior regions involved in social and mentalising processes, and with dorsolateral prefrontal activation indicating a higher deployment of cognitive resources. Trying to detect the agent in the cluster of noise dots without having seen communicative cues was related to enhanced coupling in posterior regions for social perception and visual processing. Observing an expected outcome was modulated by motor system activation. Finally, when the agent was detected correctly, activation in posterior areas for visual processing of socially relevant features was increased. Taken together, our results demonstrate that it is crucial to consider the temporal dynamics of social interactions and of their neural correlates to better understand interpersonal predictive coding. This could lead to optimised treatment approaches for individuals with problems in social interactions.
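
    The band-limited coherence measure mentioned above can be sketched with a minimal, hypothetical example on two synthetic "channels"; the sampling rate, band edges, and Welch parameters are assumptions, not the study's EEG pipeline.

```python
# Hypothetical sketch: coherence between two EEG-like channels, averaged
# within theta, alpha and beta bands.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs = 250                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)         # common 10 Hz (alpha) component
ch1 = shared + 0.5 * rng.standard_normal(t.size)
ch2 = shared + 0.5 * rng.standard_normal(t.size)

freqs, coh = coherence(ch1, ch2, fs=fs, nperseg=2 * fs)
for name, (lo, hi) in {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}.items():
    band = (freqs >= lo) & (freqs < hi)
    print(f"{name}: mean coherence = {coh[band].mean():.2f}")
```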

    Psychophysiological evidence of a role for emotion in the learning of trustworthiness from identity-contingent eye-gaze cues

    There is an increasing recognition that emotion influences cognition. This is particularly clear in the domain of social cognition, where the perception of social cues is most often a potent source of emotional arousal. One process likely to be influenced by emotion is the forming of face evaluations. Judgements of trustworthiness are particularly important due to the potential costs and benefits of the decision to rely upon another person. Face trustworthiness is modulated by one particular social cue, eye-gaze. In reaction-time tasks, responses are slower when faces gaze away from target objects (incongruently) than when they gaze towards target objects (congruently). Faces that consistently gaze incongruently are also judged less trustworthy than faces that consistently gaze congruently. In six experiments, we investigated the role of emotion in the learning of trustworthiness from identity-contingent gaze-cues using event-related potentials (ERP) and facial electromyography (EMG), which are sensitive electrophysiological measures of emotion. We found that the learning of trust was paralleled by an increase in the emotion-related late positive potential (LPP) to incongruent faces across blocks. These findings were further supported by the EMG measurements, which showed that corrugator muscle activity, related to negative embodied emotional states, was greater for incongruent faces in those participants who showed the expected changes in trust ratings. Although the effects of gaze-cues on trust were consistent, they were not modulated by extremes in initial trust or by priming of emotions elicited by social exclusion. The effects appeared to be due to a special relationship between faces, gaze, emotion and trust, as the effects of validity on liking ratings were much weaker or non-existent compared to trust ratings, despite evidence of similar emotion-related LPP and EMG activity. The effects also did not generalise to non-social arrow cues. In sum, we conclude that emotion mediates the learning of trust from identity-contingent gaze-cues.

    Social and Affective Neuroscience of Everyday Human Interaction

    This Open Access book presents the current state-of-the-art knowledge on social and affective neuroscience based on empirical findings. The volume is divided into several sections, first guiding the reader through important theoretical topics within affective neuroscience, social neuroscience and moral emotions, and clinical neuroscience. Each chapter addresses everyday social interactions and their various aspects from a different angle, taking the reader on a diverse journey. The last section of the book is methodological in nature: basic information is presented so that the reader can learn about common methodologies used in neuroscience, alongside more advanced input that deepens the understanding and usability of these methods in social and affective neuroscience for more experienced readers.