84 research outputs found

    Individual theta-band cortical entrainment to speech in quiet predicts word-in-noise comprehension

    Speech elicits brain activity time-locked to its amplitude envelope. The resulting speech-brain synchrony (SBS) is thought to be crucial to speech parsing and comprehension. It has been shown that higher speech-brain coherence is associated with increased speech intelligibility. However, studies depending on the experimental manipulation of speech stimuli do not allow conclusions about the causality of the observed tracking. Here, we investigate whether individual differences in the intrinsic propensity to track the speech envelope when listening to speech in quiet are predictive of individual differences in speech recognition in noise, in an independent task. We evaluated the cerebral tracking of speech in source-localized magnetoencephalography, at timescales corresponding to phrases, words, syllables, and phonemes. We found that individual differences in syllabic tracking in the right superior temporal gyrus and in the left middle temporal gyrus (MTG) were positively associated with recognition accuracy in an independent words-in-noise task. Furthermore, directed connectivity analysis showed that this relationship is partially mediated by top-down connectivity from the premotor cortex—associated with speech processing and active sensing in the auditory domain—to the left MTG. Thus, the extent of SBS—even during clear speech—reflects an active mechanism of the speech processing system that may confer resilience to noise.
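    The core quantity here, speech-brain coherence at a given timescale, can be illustrated with a minimal sketch. The synthetic signals, the sampling rate, and the 4–8 Hz "syllabic" band are all assumptions for illustration, not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical sketch: magnitude-squared coherence between a speech
# amplitude envelope and a simulated cortical signal, averaged over a
# syllable-rate band (~4-8 Hz). All values below are illustrative.
fs = 200.0                      # sampling rate in Hz (assumption)
t = np.arange(0, 60, 1 / fs)    # 60 s of data

rng = np.random.default_rng(0)
envelope = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
# A cortical signal that partially tracks the envelope, plus noise.
brain = 0.6 * envelope + rng.standard_normal(t.size)

f, cxy = coherence(envelope, brain, fs=fs, nperseg=1024)
syllabic = (f >= 4) & (f <= 8)   # syllable-rate frequency band
sbs = cxy[syllabic].mean()       # one scalar "tracking" index per subject
print(round(float(sbs), 3))
```

    In a design like the one described above, one such scalar per participant (from speech in quiet) would then be correlated with words-in-noise recognition accuracy from a separate task.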

    The Predictive Value of Individual Electric Field Modeling for Transcranial Alternating Current Stimulation Induced Brain Modulation

    There is considerable individual variability in the reported effectiveness of non-invasive brain stimulation. This variability has often been ascribed to differences in neuroanatomy and the resulting differences in the electric field induced inside the brain. In this study, we addressed the question of whether individual differences in the induced electric field can predict the neurophysiological and behavioral consequences of gamma-band tACS. In a within-subject experiment, bi-hemispheric gamma-band tACS and sham stimulation were applied in alternating blocks to the participants’ superior temporal lobes, while task-evoked auditory brain activity was measured with concurrent functional magnetic resonance imaging (fMRI) and a dichotic listening task. Gamma tACS was applied with different interhemispheric phase lags. In a recent study, we showed that anti-phase tACS (180° interhemispheric phase lag), but not in-phase tACS (0° interhemispheric phase lag), selectively modulates interhemispheric brain connectivity. Using a T1 structural image of each participant’s brain, an individual simulation of the induced electric field was computed. From these simulations, we derived two predictor variables: maximal strength (the average of the 10,000 voxels with the largest electric field values) and precision of the electric field (the spatial correlation between the electric field and the task-evoked brain activity during sham stimulation). We found considerable variability in the individual strength and precision of the electric fields. Importantly, the strength of the electric field over the right hemisphere predicted individual differences in tACS-induced brain connectivity changes. Moreover, in both hemispheres we found a statistical trend for an effect of electric field strength on tACS-induced BOLD signal changes. In contrast, the precision of the electric field did not predict any neurophysiological measure. Further, neither strength nor precision predicted interhemispheric integration. In conclusion, we found evidence for a dose-response relationship between individual differences in electric fields and tACS-induced activity and connectivity changes in concurrent fMRI. However, the fact that this relationship was stronger in the right hemisphere suggests that the relationship between electric field parameters, neurophysiology, and behavior may be more complex for bi-hemispheric tACS.
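    The two simulation-derived predictors can be sketched as follows. The voxel counts, field magnitudes, and sham-task activity map are invented for illustration; only the top-10,000-voxel average and the spatial correlation follow the description above:

```python
import numpy as np

# Illustrative sketch of the two predictors derived from an individual
# electric-field simulation. Array names, sizes, and distributions are
# assumptions, not the study's code.
rng = np.random.default_rng(1)
efield = rng.gamma(shape=2.0, scale=0.05, size=200_000)       # |E| per voxel (V/m)
task_map = 0.4 * efield + rng.standard_normal(efield.size)    # sham-task activity

# Predictor 1: "maximal strength" = mean of the 10,000 voxels with the
# largest electric-field magnitude.
max_strength = np.sort(efield)[-10_000:].mean()

# Predictor 2: "precision" = spatial correlation between the electric
# field and task-evoked brain activity during sham stimulation.
precision = np.corrcoef(efield, task_map)[0, 1]

print(round(float(max_strength), 3), round(float(precision), 3))
```

    The strength predictor captures dose (how much current reaches cortex), while the precision predictor captures targeting (how well the field overlaps task-responsive tissue); the study found predictive value only for the former.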

    Global and localized network characteristics of the resting brain predict and adapt to foreign language learning in older adults

    Resting brain (rs) activity has been shown to be a reliable predictor of the level of foreign language (L2) proficiency younger adults can achieve in a given time period. Since rs properties change over the lifespan, we investigated whether L2 attainment in older adults (aged 64–74 years) is also predicted by individual differences in rs activity, and to what extent rs activity itself changes as a function of L2 proficiency. To assess how neuronal assemblies communicate at specific frequencies to facilitate L2 development, we examined localized and global measures (Minimum Spanning Trees) of connectivity. Results showed that central organization within the beta band (~ 13–29.5 Hz) predicted measures of L2 complexity, fluency and accuracy, with the latter additionally predicted by a left-lateralized centro-parietal beta network. In contrast, reduced connectivity in a right-lateralized alpha (~ 7.5–12.5 Hz) network predicted development of L2 complexity. As accuracy improved, so did central organization in beta, whereas fluency improvements were reflected in localized changes within an interhemispheric beta network. Our findings highlight the importance of global and localized network efficiency and the role of beta oscillations for L2 learning and suggest plasticity even in the ageing brain. We interpret the findings against the background of networks identified in socio-cognitive processes.
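    The Minimum Spanning Tree (MST) approach mentioned above can be sketched in a toy form. The channel count, random connectivity weights, and the leaf-fraction summary are illustrative assumptions, not the study's measures:

```python
import numpy as np

# Hypothetical sketch: derive an MST from a simulated beta-band
# connectivity matrix and summarize its "central organization".
rng = np.random.default_rng(2)
n = 32                                   # e.g. 32 sensors (assumption)
w = rng.uniform(0.1, 1.0, size=(n, n))
conn = (w + w.T) / 2                     # symmetric connectivity strengths
np.fill_diagonal(conn, 0.0)
dist = 1.0 - conn                        # MSTs minimize weight, so strong
                                         # links must become short distances

# Prim's algorithm: grow the tree one node at a time.
in_tree = np.zeros(n, dtype=bool)
in_tree[0] = True
edges = []
for _ in range(n - 1):
    best = (np.inf, -1, -1)
    for u in np.flatnonzero(in_tree):
        for v in np.flatnonzero(~in_tree):
            if dist[u, v] < best[0]:
                best = (dist[u, v], u, v)
    edges.append((best[1], best[2]))
    in_tree[best[2]] = True

degree = np.zeros(n, dtype=int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
# Leaf fraction: share of the n-1 links that end in a leaf node; values
# near 1 indicate a star-like (centrally organized) tree.
leaf_fraction = (degree == 1).sum() / (n - 1)
print(len(edges), round(float(leaf_fraction), 2))
```

    The appeal of MST metrics in this literature is that the tree always has exactly n − 1 links, so its summary measures can be compared across participants without choosing a connectivity threshold.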

    Executive Control of Language in the Bilingual Brain: Integrating the Evidence from Neuroimaging to Neuropsychology

    In this review, we focus on delineating the neural substrates of the executive control of language in the bilingual brain, based on the existing neuroimaging, intracranial, transcranial magnetic stimulation, and neuropsychological evidence. We also offer insights from ongoing brain-imaging studies into the development of expertise in multilingual language control. We concentrate specifically on evidence regarding how the brain selects and controls languages for comprehension and production. This question has been addressed in a number of ways and using various tasks, including language switching during production or perception, translation, and interpretation. We attempt to synthesize the existing evidence in order to bring to light the neural substrates that are crucial to the executive control of language.

    fMRI of Simultaneous Interpretation Reveals the Neural Basis of Extreme Language Control

    We used functional magnetic resonance imaging (fMRI) to examine the neural basis of extreme multilingual language control in a group of 50 multilingual participants. Comparing brain responses arising during simultaneous interpretation (SI) with those arising during simultaneous repetition revealed activation of regions known to be involved in speech perception and production, alongside a network incorporating the caudate nucleus that is known to be implicated in domain-general cognitive control. The similarity between the networks underlying bilingual language control and general executive control supports the notion that the frequently reported bilingual advantage on executive tasks stems from the day-to-day demands of language control in the multilingual brain. We examined neural correlates of the management of simultaneity by correlating brain activity during interpretation with the duration of simultaneous speaking and hearing. This analysis showed significant modulation of the putamen by the duration of simultaneity. Our findings suggest that, during SI, the caudate nucleus is implicated in the overarching selection and control of the lexico-semantic system, while the putamen is implicated in ongoing control of language output. These findings provide the first clear dissociation of specific dorsal striatum structures in polyglot language control, roles that are consistent with the previously described involvement of these regions in nonlinguistic executive control.

    The effect of phonetic production training with visual feedback on the perception and production of foreign speech sounds

    Second-language learners often experience major difficulties in producing non-native speech sounds. This paper introduces a training method that uses a real-time analysis of the acoustic properties of vowels produced by non-native speakers to provide them with immediate, trial-by-trial visual feedback about their articulation alongside that of the same vowels produced by native speakers. The Mahalanobis acoustic distance between non-native productions and target native acoustic spaces was used to assess L2 production accuracy. The experiment shows that 1 h of training per vowel improves the production of four non-native Danish vowels: the learners' productions were closer to the corresponding Danish target vowels after training. The production performance of a control group remained unchanged. Comparisons of pre- and post-training vowel discrimination performance in the experimental group showed improvements in perception. Correlational analyses of training-related changes in production and perception revealed no relationship. These results suggest, first, that this training method is effective in improving non-native vowel production. Second, training purely on production improves perception. Finally, it appears that improvements in production and perception do not systematically progress at equal rates within individuals.
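    The Mahalanobis accuracy measure can be illustrated with a toy example. The formant values, the native-speaker distribution, and the pre-/post-training productions below are all invented; only the distance computation itself follows the description above:

```python
import numpy as np

# Toy sketch: Mahalanobis distance of a learner's vowel production from
# a native vowel's acoustic space, here represented by F1/F2 formant
# values in Hz. All numbers are hypothetical.
rng = np.random.default_rng(3)
native = rng.multivariate_normal(mean=[300.0, 2200.0],        # target vowel
                                 cov=[[900.0, 0.0], [0.0, 8100.0]],
                                 size=200)                    # native tokens

mu = native.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(native, rowvar=False))

def mahalanobis(x):
    """Distance of one production from the native vowel cloud."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

pre = mahalanobis(np.array([380.0, 1900.0]))    # hypothetical pre-training
post = mahalanobis(np.array([320.0, 2150.0]))   # hypothetical post-training
print(round(pre, 2), round(post, 2))
```

    Unlike raw Euclidean distance in Hz, the Mahalanobis distance scales each acoustic dimension by the native category's own variability, so a production counts as accurate when it falls inside the region native speakers actually occupy.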

    Mothers Reveal More of Their Vocal Identity When Talking to Infants

    Voice timbre – the unique acoustic information in a voice by which its speaker can be recognized – is particularly critical in mother-infant interaction. Correct identification of vocal timbre is necessary for infants to recognize their mothers as familiar both before and after birth, providing a basis for social bonding between infant and mother. The exact mechanisms underlying infant voice recognition remain ambiguous and have predominantly been studied in terms of the cognitive voice-recognition abilities of the infant. Here, we show – for the first time – that caregivers actively maximize their chances of being correctly recognized by presenting more details of their vocal timbre through adjustments to their voices known as infant-directed speech (IDS) or baby talk, a vocal register that is widespread across most of the world’s cultures. Using acoustic modelling (k-means clustering of Mel Frequency Cepstral Coefficients) of IDS in comparison with adult-directed speech (ADS), we found in two cohorts of speakers – US English and Swiss German mothers – that voice timbre clusters in IDS are significantly larger than comparable clusters in ADS. This effect leads to a more detailed representation of timbre in IDS, with subsequent benefits for recognition. Critically, an automatic speaker-identification system using a Gaussian mixture model based on Mel Frequency Cepstral Coefficients performed significantly better in two experiments when trained on IDS as opposed to ADS. We argue that IDS has evolved as part of an adaptive set of evolutionary strategies that serve to promote indexical signalling by caregivers to their offspring, thereby promoting social bonding via voice and the acquisition of linguistic systems.
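    The cluster-size comparison can be sketched in a toy form. Simulated feature frames stand in for real MFCCs, the comparison is simplified to one cluster per register, and the dispersion measure (mean distance to the centroid) is an assumption rather than the study's exact metric:

```python
import numpy as np

# Toy sketch: compare the spread of a speaker's timbre "cluster" in
# adult-directed (ADS) vs infant-directed (IDS) speech. The frames
# below are simulated stand-ins for MFCC vectors; dimensions and
# spreads are illustrative assumptions.
rng = np.random.default_rng(4)
n_frames, n_mfcc = 2000, 13

ads = rng.normal(0.0, 1.0, size=(n_frames, n_mfcc))   # tighter cluster
ids_ = rng.normal(0.0, 1.6, size=(n_frames, n_mfcc))  # broader cluster

def cluster_size(frames):
    """Mean Euclidean distance of feature frames to their centroid."""
    centroid = frames.mean(axis=0)
    return float(np.linalg.norm(frames - centroid, axis=1).mean())

print(cluster_size(ads), cluster_size(ids_))  # IDS cluster is larger here
```

    The paper's argument is that this larger spread exposes more of the speaker's timbre space, which is why a speaker model trained on IDS generalizes better at identification time.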

    Amygdala activation for eye contact despite complete cortical blindness

    Cortical blindness refers to the loss of vision that occurs after destruction of the primary visual cortex. Although there is no functional visual cortex and hence no conscious vision, some cortically blind patients show amygdala activation in response to facial or bodily expressions of emotion. Here we investigated whether direction of gaze can also be processed in the absence of any functional visual cortex. A well-known patient with bilateral destruction of his visual cortex and subsequent cortical blindness was investigated in an fMRI paradigm in which blocks of faces were presented with their gaze directed either toward or away from the viewer. Increased right amygdala activation was found in response to directed compared with averted gaze. Activity in this region was further found to be functionally connected to a larger network associated with face and gaze processing. The present study demonstrates that, in human subjects, the amygdala response to eye contact does not require an intact primary visual cortex.

    Neuroimaging of simultaneous conference interpreters

    Conference interpreting demands the coordination of multiple cognitive processes required to attend to a source message, process that source message, convert it to the target language, and ultimately produce the target utterance. This chapter focuses on simultaneous (as opposed to consecutive) interpreting, which has the particular demand of requiring attention to be divided between the source and the target, with the concomitant demands this places on resource management. At the heart of this sits the concept of cognitive control: specifically, the control that must be exerted over language processes, attention, and motor processes in order to execute a simultaneous interpreting task. This chapter reviews the literature on the neural bases of simultaneous interpreting in order to provide a broad overview of the underlying cerebral systems that are implicated and their relationships to other cognitive domains. It begins by providing a brief primer on the neural basis of language and multilingual language control, alongside neuroimaging methods. A narrative review of studies that employed neuroimaging to study the neural basis of simultaneous interpreting follows, with a proposal for mapping the published data onto a schematic model of the interpreting process. The chapter concludes with suggestions for avenues of potential future research.

    Adolfo M. García. The neurocognition of translation and interpreting
