21 research outputs found

    Synchronising internal and external information: a commentary on Meyer, Sun & Martin (2020)

    Published online: 19 Mar 2020
    AKG was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 798971. NM was supported by the Spanish Ministry of Science, Innovation and Universities (grant RTI2018-096311-B-I00), the Agencia Estatal de Investigación (AEI), and the Fondo Europeo de Desarrollo Regional (FEDER). The authors acknowledge financial support from the “Severo Ochoa” Programme for Centres/Units of Excellence in R&D (SEV-2015-490) and from the Basque Government through the BERC 2018–2021 programme.

    One Way or Another: Cortical Language Areas Flexibly Adapt Processing Strategies to Perceptual And Contextual Properties of Speech

    Published: 07 April 2021
    Cortical circuits rely on the temporal regularities of speech to optimize signal parsing for sound-to-meaning mapping. Bottom-up speech analysis is accelerated by top–down predictions about upcoming words. In everyday communication, however, listeners are regularly presented with challenging input—fluctuations of speech rate or semantic content. In this study, we asked how reducing speech temporal regularity affects its processing—parsing, phonological analysis, and the ability to generate context-based predictions. To ensure that spoken sentences were natural and approximated the semantic constraints of spontaneous speech, we built a neural network to select stimuli from large corpora. We analyzed brain activity recorded with magnetoencephalography during sentence listening using evoked responses, speech-to-brain synchronization, and representational similarity analysis. For normal speech, theta-band (6.5–8 Hz) speech-to-brain synchronization was increased and the left fronto-temporal areas generated stronger contextual predictions. The reverse was true for temporally irregular speech—weaker theta synchronization and reduced top–down effects. Interestingly, delta-band (0.5 Hz) speech tracking was greater when contextual/semantic predictions were lower or if speech was temporally jittered.
We conclude that speech temporal regularity is relevant for (theta) syllabic tracking and robust semantic predictions, while the joint support of temporal and contextual predictability reduces word- and phrase-level cortical tracking (delta).
    This work was supported by the European Union’s Horizon 2020 research and innovation programme (under the Marie Skłodowska-Curie grant agreement No 798971 awarded to A.K.G.); the Spanish Ministry of Science, Innovation and Universities (grant RTI2018-096311-B-I00 to N.M.); the Agencia Estatal de Investigación (AEI) and the Fondo Europeo de Desarrollo Regional (FEDER); the Basque Government (through the BERC 2018–2021 programme); the Spanish State Research Agency through the BCBL Severo Ochoa excellence accreditation (SEV-2015-0490), the DeepText project (KK-2020/00088), and the Ixa excellence research group (IT1343-19); the UPV/EHU (postdoctoral grant ESPDOC18/101 to A.B.); and the NVIDIA Corporation (donation of a Titan V GPU to A.B. used for this research).
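    The speech-to-brain synchronization measure named in the abstract above is commonly computed as spectral coherence between the speech amplitude envelope and the neural signal, averaged within a band of interest. The following is a minimal illustrative sketch with simulated data, not the authors' pipeline; the sampling rate, signal construction, and noise level are all assumptions.

    ```python
    import numpy as np
    from scipy.signal import coherence

    # Simulated example: a 7 Hz "syllabic" rhythm in the speech envelope
    # and a noisy neural signal that tracks it.
    fs = 200.0                               # sampling rate in Hz (assumed)
    t = np.arange(0, 60, 1 / fs)             # 60 s of simulated recording
    rng = np.random.default_rng(0)

    envelope = np.sin(2 * np.pi * 7.0 * t)   # speech amplitude envelope
    meg = envelope + 0.5 * rng.standard_normal(t.size)  # tracking + noise

    # Spectral coherence between envelope and neural signal,
    # then averaged over the theta band (6.5-8 Hz, as in the abstract).
    f, cxy = coherence(envelope, meg, fs=fs, nperseg=1024)
    theta_band = (f >= 6.5) & (f <= 8.0)
    theta_coherence = cxy[theta_band].mean()
    ```

    With a genuine tracking relationship, theta-band coherence is markedly higher than coherence in bands where the envelope carries no rhythm.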

    Phonological deficits in dyslexia impede lexical processing of spoken words: Linking behavioural and MEG data

    Acknowledgements: We thank Nicola Molinaro for sharing the data and input on the conceptualisation of the study, Olaf Hauk for assistance with the ERRC methods, Efthymia Kapnoula for help with the auditory stimuli, Mirjana Bozic and Brechtje Post for feedback on the manuscript, and Manuel Carreiras, María Paz Suárez Coalla and Fernando Cuetos for the recruitment of participants.
    Peer reviewed

    One-to-One or One Too Many? Linking Sound-to-Letter Mappings to Speech Sound Perception and Production in Early Readers

    Published online: Nov 4, 2022
    Purpose: Effects related to literacy acquisition have been observed at different levels of speech processing. This study investigated the link between orthographic knowledge and children’s perception and production of specific speech sounds. Method: Sixty Spanish-speaking second graders, differing in their phonological decoding skills, completed a speech perception and a production task. In the perception task, a behavioral adaptation of the oddball paradigm was used. Children had to detect the orthographically consistent /t/, which has a unique orthographic representation (⟨t⟩), and the inconsistent /k/, which maps onto three different graphemes (⟨c⟩, ⟨qu⟩, and ⟨k⟩), both appearing infrequently within a repetitive auditory sequence. In the production task, children produced these same sounds in meaningless syllables. Results: Perception results show that all children were faster at detecting consistent than inconsistent sounds, regardless of their decoding skills. In the production task, however, the same facilitation for consistent sounds was linked to better decoding skills. Conclusions: These findings demonstrate differences in speech sound processing related to literacy acquisition. Literacy acquisition may therefore affect already-formed speech sound representations. Crucially, the strength of this link in production is modulated by individual decoding skills.
    This project has received funding from the European Research Council under the European Union’s Horizon 2020 Research and Innovation Program (Grant Agreement No. 819093 to C.D.M.) and under the Marie Skłodowska-Curie Grant Agreement No. 843533 to A.S. This work was also supported by the Spanish State Research Agency through the Basque Center on Cognition, Brain and Language Severo Ochoa excellence accreditation CEX2020-001010-S, the Spanish Ministry of Economy and Competitiveness (PSI2017-82941-P and PID2020-113926GB-I00), and the Basque Government (BERC 2022-2025 and PIBA18_29). M.J. was supported by a predoctoral fellowship (associated with the project PSI2017-82941-P; Grant No. PRE-2018-083946) from the Spanish Ministry of Science, Innovation and Universities and the Fondo Social Europeo.

    Balancing Prediction and Sensory Input in Speech Comprehension: The Spatiotemporal Dynamics of Word Recognition in Context.

    Spoken word recognition in context is remarkably fast and accurate, with recognition times of ∼200 ms, typically well before the end of the word. The neurocomputational mechanisms underlying these contextual effects are still poorly understood. This study combines source-localized electroencephalographic and magnetoencephalographic (EMEG) measures of real-time brain activity with multivariate representational similarity analysis to determine directly the timing and computational content of the processes evoked as spoken words are heard in context, and to evaluate the respective roles of bottom-up and predictive processing mechanisms in the integration of sensory and contextual constraints. Male and female human participants heard simple (modifier-noun) English phrases that varied in the degree of semantic constraint that the modifier (W1) exerted on the noun (W2), as in pairs such as “yellow banana.” We used gating tasks to generate estimates of the probabilistic predictions generated by these constraints, as well as measures of their interaction with the bottom-up perceptual input for W2. Representational similarity analysis models of these measures were tested against electroencephalographic and magnetoencephalographic brain data across a bilateral fronto-temporo-parietal language network. Consistent with probabilistic predictive processing accounts, we found early activation of semantic constraints in frontal cortex (LBA45) as W1 was heard. The effects of these constraints (at 100 ms after W2 onset in left middle temporal gyrus and at 140 ms in left Heschl's gyrus) were only detectable, however, after the initial phonemes of W2 had been heard. Within an overall predictive processing framework, bottom-up sensory inputs are still required to achieve early and robust spoken word recognition in context.
    SIGNIFICANCE STATEMENT: Human listeners recognize spoken words in natural speech contexts with remarkable speed and accuracy, often identifying a word well before all of it has been heard. In this study, we investigate the brain systems that support this important capacity, using neuroimaging techniques that can track real-time brain activity during speech comprehension. This makes it possible to locate the brain areas that generate predictions about upcoming words and to show how these expectations are integrated with the evidence provided by the speech being heard. We use the timing and localization of these effects to provide the most specific account to date of how the brain achieves an optimal balance between prediction and sensory input in the interpretation of spoken language.
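    Representational similarity analysis, used in the study above, compares the pairwise dissimilarity structure of a model's predictions with that of measured brain-activity patterns. The following is an illustrative sketch with random data, not the authors' analysis; the item count, pattern dimensionality, and the way the model RDM is constructed are all assumptions.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n_items, n_channels = 8, 32

    # Simulated multichannel activity patterns, one row per stimulus item.
    patterns = rng.standard_normal((n_items, n_channels))

    # Neural RDM: pairwise correlation distance between item patterns.
    neural_rdm = 1 - np.corrcoef(patterns)

    # Hypothetical model RDM: here, the neural structure plus noise,
    # standing in for dissimilarities predicted by a cognitive model.
    model_rdm = neural_rdm + 0.2 * rng.standard_normal((n_items, n_items))
    model_rdm = (model_rdm + model_rdm.T) / 2   # keep it symmetric

    # Compare only the lower triangles (RDMs are symmetric, diagonal is 0);
    # Spearman correlation is the usual model-to-brain fit statistic.
    tri = np.tril_indices(n_items, k=-1)
    rho, _ = spearmanr(neural_rdm[tri], model_rdm[tri])
    ```

    In a real analysis the model RDM comes from an independent source (e.g. behavioral or computational-model predictions), and `rho` is computed at each time point or cortical region to localize where and when the model's structure is expressed in the brain data.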

    Domain-general and domain-specific computations in single word processing

    Available online: 19 August 2019
    Language comprehension relies on a multitude of domain-general and domain-specific cognitive operations. This study asks whether the domain-specific grammatical computations are obligatorily invoked whenever we process linguistic inputs. Using fMRI and three complementary measures of neural activity, we tested how domain-general and domain-specific demands of single word comprehension engage cortical language networks, and whether the left frontotemporal network (commonly taken to support domain-specific grammatical computations) automatically processes grammatical information present in inflectionally complex words. In a natural listening task, participants were presented with words that manipulated domain-general and domain-specific processing demands in a 2 × 2 manner. The results showed that only domain-general demands of mapping words onto their representations consistently engaged the language processing system during single word comprehension, triggering increased activity and connectivity in bilateral frontotemporal regions, as well as bilateral encoding across multivoxel activity patterns. In contrast, inflectional complexity failed to activate left frontotemporal regions in this task, implying that domain-specific grammatical processing in the left hemisphere is not automatically triggered when the processing context does not specifically require such analysis. This suggests that cortical computations invoked by language processing critically depend on the current communicative goals and demands, underlining the importance of domain-general processes in language comprehension, and arguing against the strong domain-specific view of the LH network function.
    This work was supported by the University of Cambridge RCS award to MB and the Marie Skłodowska-Curie award (grant agreement No 798971) to AKG.

    Stimuli

    List of auditory stimuli and selected lexical variables

    Preprint

    Article Preprint

    Extracted ERRCs from Sensor Data

    Averaged Sensor ERRCs for Regression Analysis