
    Top-Down and Bottom-Up Contributions to Understanding Sentences Describing Objects in Motion

    Theories of embodied language comprehension propose that the neural systems used for perception, action, and emotion are also engaged during language comprehension. Consistent with these theories, behavioral studies have shown that the comprehension of language describing motion is affected by simultaneously perceiving a moving stimulus (Kaschak et al., 2005). In two neuroimaging studies, we investigate whether comprehension of sentences describing moving objects activates brain areas known to support the visual perception of moving objects (i.e., area MT/V5). Our data indicate that MT/V5 is indeed selectively engaged by sentences describing objects in motion toward the comprehender, compared to sentences describing visual scenes without motion. Moreover, these sentences activate areas along the cortical midline of the brain, known to be engaged when participants process self-referential information. The current data thus suggest that sentences describing situations with potential relevance to one's own actions activate both higher-order visual cortex and brain areas involved in processing information about the self. The data have two consequences for embodied theories of language comprehension: first, they show that perceptual brain areas support sentential-semantic processing; second, they indicate that sensory-motor simulations of events described through language are susceptible to top-down modulation by factors such as the relevance of the described situation to the self.
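
    The central comparison here reduces to a univariate GLM contrast between sentence conditions. As a rough illustration only (not the authors' pipeline), a minimal nilearn sketch might look as follows; the input file, TR, timings, and condition labels are all invented for the example.

        # Hypothetical sketch: contrasting BOLD responses to motion vs. static
        # sentences with a standard first-level GLM (nilearn). File name, TR,
        # onsets, and condition labels are illustrative assumptions.
        import pandas as pd
        from nilearn.glm.first_level import FirstLevelModel

        # Event table: one row per sentence onset (columns nilearn expects).
        events = pd.DataFrame({
            "onset":      [0.0, 12.0, 24.0, 36.0],          # seconds (assumed)
            "duration":   [4.0, 4.0, 4.0, 4.0],
            "trial_type": ["motion_sentence", "static_sentence",
                           "motion_sentence", "static_sentence"],
        })

        model = FirstLevelModel(t_r=2.0, hrf_model="spm")   # assumed TR of 2 s
        model = model.fit("sub01_bold.nii.gz", events=events)

        # Voxelwise z-map for the motion > static contrast; a region such as
        # MT/V5 would then be interrogated in this map.
        z_map = model.compute_contrast("motion_sentence - static_sentence",
                                       output_type="z_score")
        z_map.to_filename("motion_gt_static_zmap.nii.gz")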

    “I know something you don't know”: Discourse and social context effects on the N400 in adolescents

    Adolescence is a time of great cognitive and social development. Despite this, relatively few studies to date have investigated how perspective taking affects on-line language comprehension in adolescents. In the current study, we addressed this gap in the literature, making use of a Joint Comprehension Task in which two individuals with differing background knowledge jointly attend to linguistic stimuli. Using event-related potentials, we investigated adolescents’ electrophysiological responses to (a) semantically anomalous sentence stimuli in discourse context and (b) semantically plausible sentence stimuli that the participants believe another individual finds semantically implausible. Our results demonstrate that a robust “N400 effect” (a well-established event-related potential component known to be sensitive to lexical-semantic integration difficulty) is elicited by semantically anomalous sentences, and that this N400 effect is attenuated by discourse context. Furthermore, a “social N400 effect” is elicited by sentences that are semantically plausible for the participants if they believe that another individual finds the sentences implausible. The results suggest that adolescents integrate the perspective of others during on-line language comprehension via simulation; that is, adolescents use their own language processing system to interpret language input from the perspective of other jointly attending individuals.
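
    To make the dependent measure concrete: an N400 effect is typically quantified as the mean amplitude difference between conditions in a roughly 300-500 ms post-stimulus window. A minimal sketch with fabricated single-channel data (all values assumed, not taken from this study):

        # Illustrative N400 quantification: mean ERP amplitude difference in a
        # 300-500 ms window. Arrays and sampling rate are invented stand-ins.
        import numpy as np

        fs = 500                                  # sampling rate in Hz (assumed)
        times = np.arange(-0.2, 0.8, 1 / fs)      # epoch from -200 to 800 ms

        # Hypothetical baselined single-channel ERPs: trials x samples.
        rng = np.random.default_rng(0)
        erp_anomalous = rng.normal(-2.0, 1.0, (40, times.size))
        erp_plausible = rng.normal(-0.5, 1.0, (40, times.size))

        window = (times >= 0.3) & (times <= 0.5)  # classic N400 window
        effect = erp_anomalous[:, window].mean() - erp_plausible[:, window].mean()

        # More negative mean amplitude for anomalous sentences = N400 effect.
        print(f"N400 effect (anomalous - plausible): {effect:.2f} µV")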

    Context Effects in Embodied Lexical-Semantic Processing

    The embodied view of language comprehension proposes that the meaning of words is grounded in perception and action rather than represented in abstract amodal symbols. Support for embodied theories of language processing comes from behavioral studies showing that understanding a sentence about an action can modulate congruent and incongruent physical responses, suggesting motor involvement during comprehension of sentences referring to bodily movement. Additionally, several neuroimaging studies have provided evidence that comprehending single words denoting manipulable objects elicits specific responses in the neural motor system. An interesting question that remains is whether action semantic knowledge is directly activated as motor simulations in the brain, or rather modulated by the semantic context in which action words are encountered. In the current paper, we investigated the nature of conceptual representations using a go/no-go lexical decision task. Specifically, target words were presented in a semantic context that emphasized either dominant action features (features related to the functional use of an object) or non-dominant action features. Response latencies revealed that participants were faster to respond to words denoting objects whose functional use was congruent with the prepared movement. This facilitation effect, however, was only apparent when the semantic context emphasized the corresponding motor properties. These findings suggest that conceptual processing is a context-dependent process that incorporates motor-related knowledge in a flexible manner.
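
    The facilitation effect described here boils down to a paired comparison of response latencies. A minimal sketch with fabricated per-participant means (sample size and RT values are assumptions, not the study's data):

        # Sketch of the congruency analysis: lexical decision RTs when an
        # object's functional use matches vs. mismatches the prepared movement,
        # within the action-dominant context. RT values are placeholders.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        rt_congruent   = rng.normal(560, 30, 24)   # per-participant means (ms)
        rt_incongruent = rng.normal(585, 30, 24)

        t, p = stats.ttest_rel(rt_congruent, rt_incongruent)
        print(f"facilitation = {np.mean(rt_incongruent - rt_congruent):.1f} ms, "
              f"t(23) = {t:.2f}, p = {p:.3f}")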

    Observing, performing, and understanding actions: revisiting the role of cortical motor areas in processing of action words

    Language content and action/perception have been shown to activate common brain areas in previous neuroimaging studies. However, it is unclear whether overlapping cortical activation reflects a common neural source or adjacent, but distinct, sources. We address this issue by using multivoxel pattern analysis on fMRI data. Specifically, participants were instructed to engage in five tasks: (1) execute hand actions (AE), (2) observe hand actions (AO), (3) observe nonbiological motion (MO), (4) read action verbs, and (5) read nonaction verbs. Classifiers were trained to distinguish between data collected from neural motor areas during (1) AE versus MO and (2) AO versus MO. These two classifiers were then used to test for a distinction between data collected during the reading of action versus nonaction verbs. The results show that the classifier trained to distinguish between AE and MO distinguishes between word categories using signal recorded from the left parietal cortex and pre-SMA, but not from ventrolateral premotor cortex. In contrast, the classifier trained to distinguish between AO and MO discriminates between word categories using the activity pattern in the left premotor and left parietal cortex. This shows that the sensitivity of premotor areas to language content is more similar to the process of observing others acting than to acting oneself. Furthermore, the parts of the brain that show comparable neural patterns for action execution and action word comprehension are high-level integrative motor areas rather than low-level motor areas.
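
    The cross-classification logic (train on one contrast, test on another) can be made concrete with a short scikit-learn sketch. The arrays below are random stand-ins for voxel patterns; nothing here reproduces the authors' analysis.

        # Cross-decoding sketch: train a linear classifier to separate action
        # execution (AE) from nonbiological motion (MO) patterns, then test it
        # on action vs. nonaction verb reading patterns from the same voxels.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(2)
        n_voxels = 200                              # assumed ROI size

        X_train = rng.normal(size=(60, n_voxels))   # AE and MO trial patterns
        y_train = np.repeat([0, 1], 30)             # 0 = AE, 1 = MO

        X_test = rng.normal(size=(40, n_voxels))    # verb-reading patterns
        y_test = np.repeat([0, 1], 20)              # 0 = action, 1 = nonaction

        clf = LinearSVC().fit(X_train, y_train)
        # Above-chance accuracy would suggest the AE/MO distinction
        # generalises to word categories within this region.
        print(f"cross-decoding accuracy: {clf.score(X_test, y_test):.2f}")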

    Shared neural processes support semantic control and action understanding

    Executive-semantic control and action understanding appear to recruit overlapping brain regions, but existing evidence from neuroimaging meta-analyses and neuropsychology lacks spatial precision; we therefore manipulated difficulty and feature type (visual vs. action) in a single fMRI study. Harder judgements recruited an executive-semantic network encompassing medial and inferior frontal regions (including LIFG) and posterior temporal cortex (including pMTG). These regions partially overlapped with brain areas involved in action, but not visual, judgements. In LIFG, the peak responses to action and difficulty were spatially identical across participants, while these responses were overlapping yet spatially distinct in posterior temporal cortex. We propose that the co-activation of LIFG and pMTG allows the flexible retrieval of semantic information appropriate to the current context; this might be necessary both for semantic control and for understanding actions. Feature selection in difficult trials also recruited ventral occipito-temporal areas that were not implicated in action understanding.
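
    The overlap claim rests on a conjunction-style analysis: voxels where both contrasts survive threshold. A minimal sketch with placeholder statistical maps (grid size and threshold are assumptions):

        # Conjunction sketch: where do the difficulty contrast and the
        # action > visual contrast both survive threshold? The z-maps are
        # random placeholders for real statistical images.
        import numpy as np

        rng = np.random.default_rng(3)
        z_difficulty = rng.normal(size=(91, 109, 91))   # hard > easy judgements
        z_action     = rng.normal(size=(91, 109, 91))   # action > visual features

        threshold = 3.1                                 # ~ p < .001, one-tailed
        overlap = (z_difficulty > threshold) & (z_action > threshold)
        print(f"{overlap.sum()} voxels show both effects")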

    Bound Together: Social binding leads to faster processing, spatial distortion and enhanced memory of interacting partners.

    The binding of features into perceptual wholes is a well-established phenomenon, which has previously been studied only in the context of early vision and low-level features such as colour or proximity. We hypothesised that a similar binding process, based on higher-level information, could bind people into interacting groups, facilitating faster processing and enhanced memory of social situations. To investigate this possibility, we used three experimental approaches to explore grouping effects in displays involving interacting people. First, using a visual search task, we demonstrate more rapid processing for interacting (versus non-interacting) pairs in an odd-quadrant paradigm (Experiments 1a & 1b). Second, using a spatial judgment task, we show that interacting individuals are remembered as physically closer than non-interacting individuals (Experiments 2a & 2b). Finally, we show that memory retention of group-relevant and group-irrelevant features is enhanced when recalling interacting partners in a surprise memory task (Experiments 3a & 3b). Each of these results is consistent with the social binding hypothesis, and alternative explanations based on low-level perceptual features and attentional effects are ruled out. We conclude that automatic mid-level grouping processes bind individuals into groups on the basis of their perceived interaction. Such social binding could provide the basis for more sophisticated social processing. Identifying the automatic encoding of social interactions in visual search, distortions of spatial working memory, and facilitated retrieval of object properties from longer-term memory opens new approaches to studying social cognition, with possible practical applications.
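
    The spatial-distortion result in Experiments 2a & 2b amounts to comparing signed distance errors across pair types. A minimal sketch with fabricated per-participant error scores (all values assumed):

        # Sketch of the spatial-judgment analysis: signed error in remembered
        # inter-person distance (remembered - actual, in pixels); negative
        # error = remembered as closer. Data are fabricated placeholders.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        err_interacting    = rng.normal(-12, 15, 30)   # per-participant means
        err_noninteracting = rng.normal(-2, 15, 30)

        t, p = stats.ttest_rel(err_interacting, err_noninteracting)
        diff = err_interacting.mean() - err_noninteracting.mean()
        print(f"extra compression for interacting pairs: {diff:.1f} px, "
              f"t(29) = {t:.2f}, p = {p:.3f}")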

    Neuronal interactions between mentalizing and action systems during indirect request processing

    Human communication relies on the ability to process linguistic structure and to map words and utterances onto our environment. Furthermore, as what we communicate is often not directly encoded in our language (e.g., in the case of irony, jokes, or indirect requests), we need to extract additional cues to infer the beliefs and desires of our conversational partners. Although the functional interplay between language and the ability to mentalize has been discussed in theoretical accounts in the past, the neurobiological underpinnings of these dynamics are currently not well understood. Here, we address this issue using functional imaging (fMRI). Participants listened to question-reply dialogues in which a reply is interpreted as a direct reply, an indirect reply, or a request for action, depending on the question. We show that inferring meaning from indirect replies engages parts of the mentalizing network (mPFC), while requests for action also activate the cortical motor system (IPL). Subsequent connectivity analysis using Dynamic Causal Modelling (DCM) revealed that this pattern of activation is best explained by an increase in effective connectivity from the mentalizing network (mPFC) to the action system (IPL). These results are an important step towards a more integrative understanding of the neurobiological basis of indirect speech processing.
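
    For context, DCM expresses such a modulation through the standard bilinear neuronal state equation (relating it to mPFC and IPL is our gloss, not the authors' published model specification):

        \dot{x} = \Big( A + \sum_j u_j B^{(j)} \Big) x + C u

    Here x is the neuronal state of the modelled regions, A the fixed (endogenous) connectivity, B^{(j)} the change in coupling induced by experimental input u_j (e.g., a request-for-action context strengthening the mPFC-to-IPL connection), and C the direct driving influence of inputs on regions. Bayesian model comparison then asks which pattern of B parameters best explains the data.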

    Imagining Sounds and Images: Decoding the Contribution of Unimodal and Transmodal Brain Regions to Semantic Retrieval in the Absence of Meaningful Input

    In the absence of sensory information, we can generate meaningful images and sounds from representations in memory. However, it remains unclear which neural systems underpin this process and whether tasks requiring the top-down generation of different kinds of features recruit similar or different neural networks. We asked people to internally generate the visual and auditory features of objects, either in isolation (car, dog) or in specific and complex meaning-based contexts (car/dog race). Using an fMRI decoding approach, in conjunction with functional connectivity analysis, we examined the role of auditory/visual cortex and transmodal brain regions. Conceptual retrieval in the absence of external input recruited sensory and transmodal cortex. The response in transmodal regions, including the anterior middle temporal gyrus, was of equal magnitude for visual and auditory features, yet it nevertheless captured modality information in the pattern of response across voxels. In contrast, sensory regions showed greater activation for modality-relevant features in imagination (even when external inputs did not differ). These data are consistent with the view that transmodal regions support internally generated experiences and that they play a role in integrating perceptual features encoded in memory.
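
    The key dissociation (equal response magnitude, yet decodable patterns) can be illustrated in a few lines: a univariate test on mean ROI activation alongside a multivariate classifier on the same trials. The ROI patterns below are random placeholders:

        # Magnitude-vs-pattern sketch: mean activation may not differ between
        # imagined visual and auditory trials, while a classifier can still
        # read out modality from the voxel pattern. Data are stand-ins.
        import numpy as np
        from scipy import stats
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(5)
        visual   = rng.normal(0.0, 1.0, (50, 150))   # trials x voxels
        auditory = rng.normal(0.0, 1.0, (50, 150))

        # Univariate test: mean ROI response per trial, visual vs. auditory.
        t, p = stats.ttest_ind(visual.mean(axis=1), auditory.mean(axis=1))
        print(f"magnitude difference: t = {t:.2f}, p = {p:.3f}")

        # Multivariate test: decode modality from the distributed pattern.
        X = np.vstack([visual, auditory])
        y = np.repeat([0, 1], 50)
        acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
        print(f"modality decoding accuracy: {acc:.2f}")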

    Fractionating the anterior temporal lobe: MVPA reveals differential responses to input and conceptual modality

    Words activate cortical regions in accordance with their modality of presentation (i.e., written vs. spoken), yet there is a long-standing debate about whether patterns of activity in any specific brain region capture modality-invariant conceptual information. Deficits in patients with semantic dementia highlight the anterior temporal lobe (ATL) as an amodal store of semantic knowledge, but these studies do not permit precise localisation of this function. The current investigation used multiple imaging methods in healthy participants to examine functional dissociations within the ATL. Multi-voxel pattern analysis identified spatially segregated regions: a response to input modality in the anterior superior temporal gyrus (aSTG) and a response to meaning in the more ventral anterior temporal lobe (vATL). This functional dissociation was supported by resting-state connectivity analyses, which found greater coupling of the aSTG with primary auditory cortex and of the vATL with the default mode network. A meta-analytic decoding of these connectivity patterns implicated the aSTG in processes closely tied to auditory processing (such as phonology and language) and the vATL in meaning-based tasks (such as comprehension or social cognition). Thus, we provide converging evidence for the segregation of meaning and input modality in the ATL.
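
    Seed-based resting-state connectivity of the kind reported here correlates a seed region's time series with every voxel's time series. A minimal sketch with placeholder time series (the seed definition is an arbitrary assumption):

        # Seed-based connectivity sketch: correlate the mean time series of a
        # seed (e.g., aSTG) with each voxel's time series. All series are
        # random placeholders for preprocessed resting-state data.
        import numpy as np

        rng = np.random.default_rng(6)
        n_timepoints, n_voxels = 200, 5000
        voxel_ts = rng.normal(size=(n_timepoints, n_voxels))
        seed_ts = voxel_ts[:, :20].mean(axis=1)      # assumed 20-voxel seed

        # Pearson correlation of the seed with every voxel (connectivity map).
        seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
        vox_z = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
        conn_map = (seed_z @ vox_z) / n_timepoints
        print(f"strongest coupling: r = {conn_map.max():.2f}")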