
    Cortical tracking of speech in noise accounts for reading strategies in children

    Humans’ propensity to acquire literacy relates to several factors, including the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (1) reading abilities, (2) classical behavioral predictors of reading (phonological awareness, phonological memory, and rapid automatized naming), and (3) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that, in typical readers, cortical representation of the phrasal content of SiN relates to the degree of development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). In individuals with dyslexia, we found preserved integration of visual speech information to optimize processing of syntactic information, but not to sustain acoustic/phonemic processing. Finally, within children with dyslexia, measures of cortical representation of the phrasal content of SiN were negatively related to reading speed and positively related to the compromise between reading precision and reading speed, potentially owing to compensatory attentional mechanisms. These results clarify the nature of the relation between SiN perception and reading abilities in typical child readers and children with dyslexia and identify novel electrophysiological markers of emergent literacy.

    Real-time lexical competitions during speech-in-speech comprehension


    Masked semantic priming in a cocktail-party situation (L'amorçage sémantique masqué en situation de cocktail party)

    This study tested the automaticity of semantic processing during speech perception using a cocktail-party situation. Participants performed a lexical decision task on a target item embedded in a speech cocktail. The cocktail was composed of voices producing words semantically related to the target (prime voices) and other voices producing words semantically unrelated to one another (masking voices). The analyses showed that a priming effect emerged only when the number of prime voices was strictly greater than the number of masking voices, highlighting the need for the prime to be intelligible and the strategic nature of the observed priming effect.

    The effects of adverse conditions on speech recognition by non-native listeners: Electrophysiological and behavioural evidence

    This thesis investigated speech recognition by native (L1) and non-native (L2) listeners (i.e., native English and Korean speakers) in diverse adverse conditions using electroencephalography (EEG) and behavioural measures. Study 1 investigated speech recognition in noise for read and casually produced, spontaneous speech using behavioural measures. The results showed that the detrimental effect of casual speech was greater for L2 than L1 listeners, demonstrating real-life L2 speech recognition problems caused by casual speech. Intelligibility was also shown to decrease when the accents of the talker and listener did not match, for casual speech as well as read speech. Study 2 set out to develop EEG methods to measure L2 speech processing difficulties for natural, continuous speech. This study thus examined neural entrainment to the amplitude envelope of speech (i.e., slow amplitude fluctuations in speech) while subjects listened to their L1, their L2, and a language that they did not understand. The results demonstrate that neural entrainment to the speech envelope is not modulated by whether or not listeners understand the language, contrary to previously reported positive relationships between speech entrainment and intelligibility. Study 3 investigated speech processing in a two-talker situation using measures of neural entrainment and the N400, combined with a behavioural speech recognition task. L2 listeners had greater entrainment for target talkers than did L1 listeners, likely because their difficulty with L2 speech comprehension caused them to focus greater attention on the speech signal. L2 listeners also had a greater degree of lexical processing (i.e., larger N400) for highly predictable words than did native listeners, while native listeners had greater lexical processing when listening to foreign-accented speech. The results suggest that the increased listening effort experienced by L2 listeners during speech recognition modulates their auditory and lexical processing.
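As a minimal illustration of the envelope-entrainment analysis this abstract describes, the sketch below extracts the amplitude envelope of a speech-like signal with an FFT-based Hilbert transform and correlates it with a simulated EEG trace. This is a NumPy toy, not the thesis' actual pipeline: real analyses typically use band-limited envelopes, multiple lags or temporal response functions, and real recordings; all signal parameters here are invented for the demonstration.

```python
import numpy as np

def amplitude_envelope(signal):
    """Amplitude envelope via an FFT-based Hilbert transform."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    analytic = np.fft.ifft(spectrum * h)   # analytic signal
    return np.abs(analytic)

def entrainment(eeg, envelope):
    """Pearson correlation between an EEG channel and the speech envelope,
    a simple stand-in for the entrainment measures used in such studies."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (envelope - envelope.mean()) / envelope.std()
    return float(np.dot(eeg, env) / len(eeg))

# Toy demonstration: an "EEG" trace that partially follows a 4 Hz
# (roughly syllable-rate) amplitude modulation of a fast carrier.
fs = 100
t = np.arange(0, 10, 1 / fs)
slow = 1 + 0.5 * np.sin(2 * np.pi * 4 * t)      # 4 Hz modulation
speech_like = slow * np.sin(2 * np.pi * 30 * t)  # modulated carrier
env = amplitude_envelope(speech_like)
rng = np.random.default_rng(0)
eeg = slow + 0.5 * rng.standard_normal(len(t))   # entrained signal + noise
print(round(entrainment(eeg, env), 2))           # positive correlation
```

The correlation is positive because the simulated EEG contains the same slow modulation that dominates the envelope; uncorrelated noise would drive it toward zero.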

    Relationship between speech-evoked neural responses and perception of speech in noise in older adults

    Speech-in-noise (SPIN) perception involves neural encoding of temporal acoustic cues. Cues include temporal fine structure (TFS) and envelopes that modulate at syllable (Slow-rate ENV) and fundamental frequency (F0-rate ENV) rates. Here the relationship between speech-evoked neural responses to these cues and SPIN perception was investigated in older adults. Theta-band phase-locking values (PLV) that reflect cortical sensitivity to Slow-rate ENV, and peripheral/brainstem frequency-following responses phase-locked to F0-rate ENV (FFRENV_F0) and TFS (FFRTFS), were measured from scalp-EEG responses to a repeated speech syllable in steady-state speech-shaped (SpN) and 16-speaker babble (BbN) noises. The results showed that: 1) SPIN performance and PLV were significantly higher under SpN than BbN, implying that differential cortical encoding may serve as the neural mechanism of SPIN performance that varies as a function of noise type; 2) PLV and FFRTFS at resolved harmonics were significantly related to good SPIN performance, supporting the importance of phase-locked neural encoding of Slow-rate ENV and TFS of resolved harmonics during SPIN perception; 3) FFRENV_F0 was not associated with SPIN performance until audiometric threshold was controlled for, indicating that hearing loss should be carefully controlled when studying the role of neural encoding of F0-rate ENV. Implications are drawn with respect to fitting auditory prostheses.
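The phase-locking value mentioned above can be sketched as follows: take the instantaneous phase of each trial from the analytic signal, and measure the length of the mean unit phase vector across trials (1 = perfectly consistent phase across trials, near 0 = random phase). This is a minimal NumPy illustration, not the study's actual analysis; in practice the data would first be band-pass filtered to the theta range, whereas the toy signals here are already narrow-band.

```python
import numpy as np

def instantaneous_phase(x):
    """Phase of the analytic signal, via an FFT-based Hilbert transform."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.angle(np.fft.ifft(np.fft.fft(x) * h))

def phase_locking_value(trials):
    """PLV across trials: length of the mean unit phase vector at each
    time point, averaged over time."""
    phases = np.array([instantaneous_phase(tr) for tr in trials])
    return float(np.abs(np.exp(1j * phases).mean(axis=0)).mean())

rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 2, 1 / fs)
theta = np.sin(2 * np.pi * 5 * t)  # 5 Hz, i.e. theta-band
# Phase-locked trials: same underlying response plus additive noise.
locked_trials = [theta + 0.3 * rng.standard_normal(len(t)) for _ in range(30)]
# Non-locked trials: same frequency but a random phase on every trial.
random_trials = [np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi))
                 for _ in range(30)]
print(round(phase_locking_value(locked_trials), 2),
      round(phase_locking_value(random_trials), 2))
```

The locked trials yield a high PLV because their phases agree at every time point; the random-phase trials yield a small residual value that shrinks as the number of trials grows.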

    Acquiring L2 sentence comprehension: a longitudinal study of word monitoring in noise

    This study investigated the development of second language online auditory processing with ab initio German learners of Dutch. We assessed the influence of different levels of background noise and different levels of semantic and syntactic target word predictability on word-monitoring latencies. There was evidence of syntactic, but not lexical-semantic, transfer from the L1 to the L2 from the onset of L2 learning. An initial stronger adverse effect of noise on syntactic compared to phonological processing disappeared after two weeks of learning Dutch, suggesting a change towards more robust syntactic processing. At the same time, the L2 learners started to exploit semantic constraints to predict upcoming target words. The use of semantic predictability remained less efficient compared to native speakers until the end of the observation period. The improvement and the persistent problems in semantic processing we found were independent of noise and rather seem to reflect the need for more context information to build up online semantic representations in L2 listening.

    Listening in a second language: a pupillometric investigation of the effect of semantic and acoustic cues on listening effort

    Non-native listeners spend a great part of their day immersed in a second-language environment. Challenges arise because many linguistic interactions happen in noisy environments, and because their linguistic knowledge is imperfect. Pupillometry has been shown to provide a reliable measure of cognitive effort during listening. This research investigates, by means of pupillometry, how listening effort is modulated by the intelligibility level of the listening task, the availability of contextual and acoustic cues, and the language background of listeners (native vs non-native). In Study 1, listening effort in native and non-native listeners was evaluated during a sentence perception task in noise across different intelligibility levels. Results indicated that listening effort was increased for non-native compared to native listeners when the intelligibility levels were equated across the two groups. In Study 2, using a similar method, materials included predictable and semantically anomalous sentences, presented in a plain and a clear speaking style. Results confirmed an increased listening effort for non-native compared to native listeners. Listening effort was overall reduced when participants attended to clear speech. Moreover, effort reduction after the sentence ended was delayed for less proficient non-native listeners. In Study 3, the contribution of semantic content spanning several sentences was evaluated using lists of semantically related and unrelated stimuli. The presence of semantic cues across sentences led to a reduction in listening effort for native listeners, as reflected by the peak pupil dilation, while non-native listeners did not show the same benefit. In summary, this research consistently showed an increased listening effort for non-native compared to native listeners at equated levels of intelligibility. Additionally, the use of a clear speaking style proved to be an effective strategy to enhance comprehension and to reduce cognitive effort in both native and non-native listeners.
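Peak pupil dilation, the effort index used in the study above, is commonly computed as the maximum of the baseline-corrected pupil trace. The sketch below is a minimal illustration of that idea with invented trial parameters (sampling rate, baseline window, pupil sizes), not the study's actual preprocessing, which would also involve steps such as blink interpolation and smoothing.

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0):
    """Baseline-corrected peak pupil dilation for one trial.

    The mean pupil size over the pre-stimulus baseline window is
    subtracted, and the maximum of the post-baseline trace is returned.
    """
    n_base = int(baseline_s * fs)
    baseline = trace[:n_base].mean()
    return float((trace[n_base:] - baseline).max())

# Toy trial: 1 s baseline at 4.0 mm, then a slow dilation peaking
# at +0.25 mm around 2.5 s after trial onset.
fs = 50
t = np.arange(0, 4, 1 / fs)
trace = np.full(len(t), 4.0)
post = t >= 1.0
trace[post] += 0.25 * np.exp(-((t[post] - 2.5) ** 2) / 0.5)
print(round(peak_pupil_dilation(trace, fs), 2))  # prints 0.25
```

Baseline correction matters because absolute pupil size varies widely across participants and lighting conditions; the dilation relative to the pre-stimulus level is what indexes effort.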