    Neural overlap of L1 and L2 semantic representations across visual and auditory modalities: a decoding approach

    This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts from one language could be used to predict neural activation during access to the same concepts from another language, in different language modalities/tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming). It was possible to identify the picture or word named, read or heard in one language (e.g. maan, meaning moon) based on the brain activity in a distributed bilateral brain network while, respectively, naming, reading or listening to the picture or word in the other language (e.g. lune). The brain regions identified differed across tasks. During picture naming, brain activation in the occipital and temporal regions allowed concepts to be predicted across languages. During word listening and word reading, across-language predictions were observed in the rolandic operculum and several motor-related areas (pre- and postcentral, the cerebellum). In addition, across-language predictions during reading were identified in regions typically associated with semantic processing (left inferior frontal, middle temporal cortex, right cerebellum and precuneus) and visual processing (inferior and middle occipital regions and calcarine sulcus). Furthermore, across modalities and languages, the left lingual gyrus showed semantic overlap across production and word reading. These findings support the idea of at least partially language- and modality-independent semantic neural representations.
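    The cross-language decoding logic described in this abstract can be sketched with a standard classifier: train on response patterns evoked in one language, test on patterns for the same concepts in the other language. The sketch below uses synthetic data in place of fMRI voxel patterns, and scikit-learn's `LinearSVC` as an illustrative stand-in for the study's actual MVPA pipeline; all names and parameters here are assumptions, not the authors' implementation.

```python
# Sketch of cross-language MVPA decoding on synthetic "voxel" data.
# A classifier trained on patterns from one language (e.g. Dutch "maan")
# is tested on patterns for the same concepts in the other language
# (e.g. French "lune"); above-chance accuracy suggests shared semantic
# representations across languages.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels, n_concepts = 60, 100, 2

# One shared "semantic" pattern per concept, reused across languages.
concept_patterns = rng.normal(size=(n_concepts, n_voxels))
labels = rng.integers(0, n_concepts, size=n_trials)

def simulate_session(labels, noise=1.0):
    """Concept pattern plus session-specific Gaussian noise."""
    return concept_patterns[labels] + noise * rng.normal(size=(len(labels), n_voxels))

X_train = simulate_session(labels)   # e.g. Dutch naming runs
X_test = simulate_session(labels)    # e.g. French naming runs

clf = LinearSVC().fit(X_train, labels)
accuracy = clf.score(X_test, labels)
print(f"cross-language decoding accuracy: {accuracy:.2f}")
```

    Because the two sessions share the same underlying concept patterns, the classifier generalizes across them; in the real study, generalization across languages plays the same role.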

    Written sentence context effects on acoustic-phonetic perception: fMRI reveals cross-modal semantic-perceptual interactions

    Available online 3 October 2019. This study examines cross-modality effects of a semantically biased written sentence context on the perception of an acoustically ambiguous word target, identifying neural areas sensitive to interactions between sentential bias and phonetic ambiguity. Of interest is whether the locus or nature of the interactions resembles those previously demonstrated for auditory-only effects. fMRI results show significant interaction effects in the right mid-middle temporal gyrus (RmMTG) and bilateral anterior superior temporal gyri (aSTG), regions along the ventral language comprehension stream that map sound onto meaning. These regions are more anterior than those previously identified for auditory-only effects; however, the same cross-over interaction pattern emerged, implying similar underlying computations at play. The findings suggest that the mechanisms that integrate information across modality and across sentence and phonetic levels of processing recruit amodal areas where reading and spoken lexical and semantic access converge. Taken together, the results support interactive accounts of speech and language processing. This work was supported in part by the National Institutes of Health, NIDCD grant RO1 DC006220.

    Semantic memory

    The Encyclopedia of Human Behavior, Second Edition is a comprehensive three-volume reference source on human action and reaction, and the thoughts, feelings, and physiological functions behind those actions.

    Six challenges for embodiment research

    Twenty years after Barsalou's seminal perceptual symbols paper (Barsalou, 1999), embodied cognition, the notion that cognition involves simulations of sensory, motor, or affective states, has moved in status from an outlandish proposal advanced by a fringe movement in psychology to a mainstream position adopted by large numbers of researchers in the psychological and cognitive (neuro)sciences. While it has generated highly productive work in the cognitive sciences as a whole, it has had a particularly strong impact on research into language comprehension. The view of a mental lexicon based on symbolic word representations, which are arbitrarily linked to sensory aspects of their referents, for example, had been generally accepted since the cognitive revolution in the 1950s. This has radically changed. Given the current status of embodiment as a main theory of cognition, it is somewhat surprising that a close look at the state of affairs in the literature reveals that the debate about the nature of the processes involved in language comprehension is far from settled and key questions remain unanswered. We present several suggestions for a productive way forward.

    The Contribution of the Parietal Lobes to Speaking and Writing

    The left parietal lobe has been proposed as a major language area. However, parietal cortical function is more usually considered in terms of the control of actions, contributing both to attention and to cross-modal integration of external and reafferent sensory cues. We used positron emission tomography to study normal subjects while they overtly generated narratives, both spoken and written. The purpose was to identify the parietal contribution to the modality-specific sensorimotor control of communication, separate from the amodal linguistic and memory processes involved in generating a narrative. The majority of left and right parietal activity was associated with the execution of writing under visual and somatosensory control, irrespective of whether the output was a narrative or repetitive reproduction of a single grapheme. In contrast, action-related parietal activity during speech production was confined to primary somatosensory cortex. The only parietal area with a pattern of activity compatible with an amodal central role in communication was the ventral part of the left angular gyrus (AG). The results of this study indicate that the cognitive processing of language within the parietal lobe is confined to the AG and that the major contribution of parietal cortex to communication is in the sensorimotor control of writing.

    On staying grounded and avoiding Quixotic dead ends

    The 15 articles in this special issue on The Representation of Concepts illustrate the rich variety of theoretical positions and supporting research that characterize the area. Although much agreement exists among contributors, much disagreement exists as well, especially about the roles of grounding and abstraction in conceptual processing. I first review theoretical approaches raised in these articles that I believe are Quixotic dead ends, namely, approaches that are principled and inspired but likely to fail. In the process, I review various theories of amodal symbols, their distortions of grounded theories, and fallacies in the evidence used to support them. Incorporating further contributions across articles, I then sketch a theoretical approach that I believe is likely to be successful, which includes grounding, abstraction, flexibility, explaining classic conceptual phenomena, and making contact with real-world situations. This account further proposes that (1) a key element of grounding is neural reuse, (2) abstraction takes the forms of multimodal compression, distilled abstraction, and distributed linguistic representation (but not amodal symbols), and (3) flexible context-dependent representations are a hallmark of conceptual processing.

    On the need for Embodied and Dis-Embodied Cognition

    This essay proposes and defends a pluralistic theory of conceptual embodiment. Our concepts are represented in at least two ways: (i) through sensorimotor simulations of our interactions with objects and events and (ii) through sensorimotor simulations of natural language processing. Linguistic representations are “dis-embodied” in the sense that they are dynamic and multimodal but, in contrast to other forms of embodied cognition, do not inherit semantic content from this embodiment. The capacity to store information in the associations and inferential relationships among linguistic representations extends our cognitive reach and provides an explanation of our ability to abstract and generalize. This theory is supported by a number of empirical considerations, including the large body of evidence from cognitive neuroscience and neuropsychology supporting a multiple semantic code explanation of imageability effects.

    Abstract and Concrete Sentences, Embodiment, and Languages

    One of the main challenges for embodied theories is accounting for the meanings of abstract words. The most common explanation is that abstract words, like concrete ones, are grounded in perception and action systems. According to other explanations, abstract words, unlike concrete ones, would activate situations and introspection; alternatively, they would be represented through metaphoric mapping. However, the evidence provided so far pertains to specific domains. To account for abstract words in their variety, we argue it is necessary to take into account not only the fact that language is grounded in the sensorimotor system, but also that language represents a linguistic–social experience. To study abstractness as a continuum, we combined a concrete (C) verb with both a concrete and an abstract (A) noun, and an abstract verb with the same nouns previously used (grasp vs. describe a flower vs. a concept). To disambiguate between the semantic meaning and the grammatical class of the words, we focused on two syntactically different languages: German and Italian. Compatible combinations (CC, AA) were processed faster than mixed ones (CA, AC). This is in line with the idea that abstract and concrete words are processed preferentially in parallel systems, abstract words in the language system and concrete words more in the motor system, so that processing costs within a single system are lowest. This parallel processing most probably takes place within different anatomically predefined routes. With mixed combinations, when the concrete word preceded the abstract one (CA), participants were faster, regardless of grammatical class and spoken language. This is probably due to the peculiar mode of acquisition of abstract words, which are acquired more linguistically than perceptually. The results confirm embodied theories that assign a crucial role to both perception–action and linguistic experience for abstract words.

    Meta-analytic evidence for a novel hierarchical model of conceptual processing

    Conceptual knowledge plays a pivotal role in human cognition. Grounded cognition theories propose that concepts consist of perceptual-motor features represented in modality-specific perceptual-motor cortices. However, it is unclear whether conceptual processing consistently engages modality-specific areas. Here, we performed an activation likelihood estimation (ALE) meta-analysis across 212 neuroimaging experiments on conceptual processing related to 7 perceptual-motor modalities (action, sound, visual shape, motion, color, olfaction-gustation, and emotion). We found that conceptual processing consistently engages brain regions also activated during real perceptual-motor experience of the same modalities. In addition, we identified multimodal convergence zones that are recruited for multiple modalities. In particular, the left inferior parietal lobe (IPL) and posterior middle temporal gyrus (pMTG) are engaged for three modalities: action, motion, and sound. These “trimodal” regions are surrounded by “bimodal” regions engaged for two modalities. Our findings support a novel model of the conceptual system, according to which conceptual processing relies on a hierarchical neural architecture from modality-specific to multimodal areas up to an amodal hub.
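    The core ALE idea used in this meta-analysis, modeling each reported activation peak as a Gaussian and combining experiments as a voxelwise union of probabilities, can be sketched in miniature. The example below is a 1-D toy, not the BrainMap/GingerALE implementation: the peak coordinates, FWHM, and grid are invented for illustration.

```python
# Minimal 1-D sketch of activation likelihood estimation (ALE).
# Each reported peak becomes a Gaussian "modeled activation" map;
# experiments are combined voxelwise as a union of probabilities,
# so convergence across experiments produces high ALE values.
import numpy as np

grid = np.linspace(0, 100, 101)  # 1-D stand-in for voxel space

def modeled_activation(focus, fwhm=15.0):
    """Gaussian modeled-activation map for one reported peak."""
    sigma = fwhm / 2.355
    return np.exp(-0.5 * ((grid - focus) / sigma) ** 2)

# Peaks reported by three hypothetical experiments; they converge near x=30.
experiments = [[30.0, 62.0], [31.5], [29.0, 70.0]]

# Per-experiment map: probability that at least one focus activates a voxel.
ma_maps = [1 - np.prod([1 - modeled_activation(f) for f in foci], axis=0)
           for foci in experiments]

# ALE map: voxelwise union across experiments.
ale = 1 - np.prod([1 - m for m in ma_maps], axis=0)
peak = grid[np.argmax(ale)]
print(f"strongest convergence near x = {peak:.0f}")
```

    In the real analysis the maps are 3-D, the Gaussian width depends on sample size, and the ALE map is tested against a null distribution of randomly relocated foci; only the probabilistic-union structure is shown here.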