11 research outputs found

    Augmented Modality Exclusivity Norms for Concrete and Abstract Italian Property Words

    Get PDF
    How perceptual information is encoded into language and conceptual knowledge is a debated topic in cognitive (neuro)science. We present modality norms for 643 Italian adjectives, each referring to one of the five perceptual modalities or to abstract content. Overall, words were rated as most strongly connected to the visual modality and least connected to the olfactory and gustatory modalities. Words associated with visual and auditory experience were more unimodal than words associated with the other sensory modalities. A principal components analysis highlighted a strong coupling between gustatory and olfactory information in word meaning, and a tendency for words referring to tactile experience to also include visual information. Abstract words were found to encode only marginal perceptual information, mostly from visual and auditory experience. The modality norms were augmented with corpus-based (e.g., Zipf Frequency, Orthographic Levenshtein Distance 20) and ratings-based psycholinguistic variables (Age of Acquisition, Familiarity, Contextual Availability). Split-half correlations computed for each experimental variable, together with comparisons against similar databases, confirmed that the norms are highly reliable. This database thus provides an important new tool for investigating the interplay between language, perception, and cognition.
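    The abstract names two of these measures without implementation detail. As a minimal illustration, the Python sketch below computes OLD20 (the mean Levenshtein distance from a word to its 20 closest orthographic neighbours) and an uncorrected split-half reliability for a ratings matrix. The data, lexicon, and function names are hypothetical, not the database's own code.

    ```python
    # Minimal sketch of two measures named above: OLD20 and split-half
    # reliability. All data and names here are hypothetical.
    import numpy as np

    def levenshtein(a, b):
        """Classic dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def old20(word, lexicon):
        """Mean distance to the 20 nearest neighbours (needs > 20 words)."""
        dists = sorted(levenshtein(word, w) for w in lexicon if w != word)
        return float(np.mean(dists[:20]))

    def split_half(ratings, n_iter=100, seed=0):
        """Average correlation of item means over random participant halves
        (participants x items matrix; no Spearman-Brown correction)."""
        rng = np.random.default_rng(seed)
        n = ratings.shape[0]
        rs = []
        for _ in range(n_iter):
            perm = rng.permutation(n)
            a, b = ratings[perm[:n // 2]], ratings[perm[n // 2:]]
            rs.append(np.corrcoef(a.mean(0), b.mean(0))[0, 1])
        return float(np.mean(rs))

    # Hypothetical demo: 30 raters x 643 items rated on a 0-5 scale.
    ratings = np.random.default_rng(1).integers(0, 6, size=(30, 643))
    print(split_half(ratings))
    ```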

    Brain Signatures of Embodied Semantics and Language: A Consensus Paper

    Get PDF
    According to embodied theories (including embodied, embedded, extended, enacted, situated, and grounded approaches to cognition), language representation is intrinsically linked to our interactions with the world around us, which is reflected in specific brain signatures during language processing and learning. Moving on from the original rivalry of embodied vs. amodal theories, this consensus paper addresses a series of carefully selected questions that aim at determining when and how, rather than whether, motor and perceptual processes are involved in language processing. We cover a wide range of research areas, from the neurophysiological signatures of embodied semantics, e.g., event-related potentials and fields as well as neural oscillations, to semantic processing and semantic priming effects on concrete and abstract words, to first and second language learning and, finally, the use of virtual reality for examining embodied semantics. Our common aim is to better understand the role of motor and perceptual processes in language representation as indexed by language comprehension and learning. We come to the consensus that, based on seminal research conducted in the field, future directions now call for enhancing the external validity of findings by acknowledging the multimodality, multidimensionality, flexibility, and idiosyncrasy of embodied and situated language and semantic processes.

    The impact of human language on perceptual categorization: electrophysiological insights.

    No full text
    How does learning cultural systems like language impact cognition and perception? The last few years have seen increased interest in this topic, yet with little theoretical advance. One fundamental question concerns the nature of the neural mechanism through which language affects perceptual processes. Some accounts suggest that effects of language are "high-level", meaning that language does not affect early perceptual processes but rather interacts at later conceptual or decision-making stages. More recent proposals posit that language can alter perceptual processes at early sensory levels. This latter account is in line with current predictive processing theories of perception, which suggest that sensory processes are largely influenced by prior knowledge and expectation. The present thesis investigates whether and how language shapes perceptual processing. We focus on two specific types of language-perception interactions: (i) the effect of linguistic labels on the recognition of visual object categories; and (ii) the effect of linguistic knowledge on neural processing of rhythmic sounds. Using an interdisciplinary approach combining behavioral measures, human electrophysiology, and advanced statistical methods, we demonstrate that language can impact visual and auditory perception at early stages of processing, shaping how we perceive sensory events. This thesis also offers novel insights into the neurophysiological implementation of predictive processing in the human neocortex.

    Language experience shapes predictive coding of rhythmic sound sequences

    No full text
    This repository contains the data and scripts (R, Matlab) used in the study "Language Experience Shapes Predictive Coding of Rhythmic Sound Sequences". The preprint of the article can be found here: https://www.biorxiv.org/content/10.1101/2023.04.28.538247v2.abstrac

    The concreteness advantage in lexical decision does not depend on perceptual simulations.

    Get PDF
    Online First Publication, September 9, 2021. Abstract words are typically more difficult to identify than concrete words in lexical-decision, word-naming, and recall tasks. This behavioral advantage, known as the concreteness effect, is often considered as evidence for embodied semantics, which emphasizes the role of sensorimotor experience in the comprehension of word meaning. In this view, online sensorimotor simulations triggered by concrete words, but not by abstract words, facilitate access to word meaning and speed up word identification. To test whether perceptual simulation is the driving force underlying the concreteness effect, we compared data from early-blind and sighted individuals performing an auditory lexical-decision task. Subjects were presented with property words referring to abstract (e.g., "logic"), concrete multimodal (e.g., "spherical"), and concrete unimodal visual concepts (e.g., "blue"). According to the embodied account, the processing advantage for concrete unimodal visual words should disappear in the early blind because they cannot rely on visual experience and simulation during semantic processing (i.e., purely visual words should be abstract for early-blind people). On the contrary, we found that both sighted and blind individuals are faster when processing multimodal and unimodal visual words compared with abstract words. This result suggests that the concreteness effect does not depend on perceptual simulations but might be driven by modality-independent properties of word meaning. This research was supported in part by the Research Projects of National Relevance (PRIN) grant "How Do We Make Sense of Words?" (Project Grant 2015PCNJ5F) awarded to Davide Crepaldi and Olivier Collignon by the Italian Ministry of Education, the European Research Council (ERC) grant "MADVIS—Mapping the Deprived Visual System: Cracking Function for Prediction" (Project 337573, ERC-2013-StG), and the Belgian Excellence of Science (EOS) program (Project 30991544) awarded to Olivier Collignon.
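    The study's key contrast is a group (early blind vs. sighted) by word type (abstract, multimodal, unimodal visual) comparison of lexical-decision reaction times. The Python sketch below illustrates the shape of such a check on simulated data; the data frame, column names, and test are illustrative assumptions, not the authors' actual analysis pipeline.

    ```python
    # Sketch of the group-by-word-type reaction-time comparison described
    # above. All data are simulated; this is not the authors' analysis.
    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(1)
    conditions = ["abstract", "multimodal", "unimodal_visual"]
    df = pd.DataFrame({
        "group": np.repeat(["blind", "sighted"], 300),
        "word_type": np.tile(np.repeat(conditions, 100), 2),
        "rt": rng.normal(900, 120, 600),  # ms, hypothetical
    })

    # Cell means, mirroring the pattern the study inspects.
    print(df.groupby(["group", "word_type"])["rt"].mean().unstack())

    # Embodied prediction: within the blind group, unimodal visual words
    # should lose their advantage over abstract words. A simple contrast:
    blind = df[df.group == "blind"]
    res = stats.ttest_ind(
        blind.loc[blind.word_type == "unimodal_visual", "rt"],
        blind.loc[blind.word_type == "abstract", "rt"],
    )
    print(f"blind: unimodal visual vs. abstract, "
          f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
    ```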

    An #EEGManyLabs study to test the role of the alpha phase on visual perception (a replication and new evidence)

    No full text
    Several studies have suggested that low-frequency brain oscillations could be key to understanding how the brain samples sensory information via rhythmic alternation of low- and high-excitability periods. However, this hypothesis has recently been called into question following the publication of some null findings. As part of the #EEGManyLabs initiative, we set out to undertake a high-powered, multi-site replication of an influential study on this topic. In the original study, Mathewson et al. (2009) showed that during high-amplitude fluctuations of alpha activity (8-13 Hz), the visibility of a visual target stimulus depended on the time the target was presented relative to the phase of the pre-target alpha activity. Furthermore, visual evoked potentials (e.g., N1, P1, P2, and P3) were larger in amplitude when the target was presented at the pre-stimulus alpha peaks, which were also associated with higher visibility. If we are successful in replicating the results of Mathewson et al. (2009), we intend to extend the original findings by conducting a second, original experiment that varies the pre-stimulus time unpredictably, to determine whether the phase-behavioural relationship depends on the target stimulus having a predictable onset time.
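    The core measure here is the phase of pre-target alpha activity and its relationship to detection. The Python sketch below illustrates one common way to extract it, band-pass filtering at 8-13 Hz and taking the Hilbert phase, then binning detection rate by pre-target phase; the simulated signal, onsets, and parameter choices are assumptions for illustration, not the registered pipeline of this study.

    ```python
    # Sketch of pre-target alpha (8-13 Hz) phase estimation and a
    # phase-binned detection-rate readout. Signal and behaviour are
    # simulated; this is not the study's registered analysis.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500                                     # sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(2)
    eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # noisy 10 Hz

    # Band-pass 8-13 Hz, then take the analytic phase.
    b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
    alpha = filtfilt(b, a, eeg)
    phase = np.angle(hilbert(alpha))

    # Hypothetical target onsets and detection outcomes.
    onsets = rng.integers(fs, t.size - fs, 200)      # sample indices
    hits = rng.random(200) < 0.5                     # simulated hits
    pre_phase = phase[onsets - int(0.05 * fs)]       # 50 ms before target

    # Detection rate in eight phase bins.
    edges = np.linspace(-np.pi, np.pi, 9)
    idx = np.digitize(pre_phase, edges) - 1
    rate = [hits[idx == k].mean() if np.any(idx == k) else np.nan
            for k in range(len(edges) - 1)]
    print(np.round(rate, 2))
    ```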
