88 research outputs found

    Electrophysiological and haemodynamic biomarkers of rapid acquisition of novel wordforms

    Humans are unique in developing large lexicons; to achieve this, they are able to learn new words rapidly. However, the neural bases of this rapid learning, which may be an expression of a more general mechanism rooted in plasticity at cellular and synaptic levels, are not yet understood. Here, we highlight a selection of recent EEG and fMRI studies that attempted to trace word learning in the human brain non-invasively. They show a rapid development of cortical memory traces for novel wordforms over a short session of auditory exposure to these items. Moreover, they demonstrate that this effect appears to be independent of attention, reflecting the largely automatic nature of word acquisition. At the same time, it seems to be limited to stimuli with native phonology, likely benefiting from pre-existing perception-articulation links in the brain, and thus suggesting different neural strategies for learning words in native and non-native languages. We also show a complex interplay between overnight consolidation, amount of exposure to novel vocabulary, and attention to speech input, all of which influence learning outcomes. In sum, the available evidence suggests that the brain may effectively form new cortical circuits online, as it is exposed to novel linguistic elements in the sensory input. A number of brain areas, most notably the hippocampus and neocortex, appear to take part in word acquisition. Critically, the currently available data not only demonstrate a hippocampal role in rapid encoding followed by slow-rate consolidation of cortical memory traces, but also suggest immediate neocortical involvement in word memory trace formation.

    Early neurophysiological indices of second language morphosyntax learning

    Humans show variable degrees of success in acquiring a second language (L2). In many cases, morphological and syntactic knowledge remain deficient, although some learners succeed in reaching nativelike levels, even if they begin acquiring their L2 relatively late. In this study, we use psycholinguistic online language proficiency tests and a neurophysiological index of syntactic processing, the syntactic mismatch negativity (sMMN) to local agreement violations, to compare behavioural and neurophysiological markers of grammar processing between native speakers (NS) of English and non-native speakers (NNS). Variable grammar proficiency was measured by the psycholinguistic tests. When NS heard ungrammatical word sequences lacking agreement between subject and verb (e.g. *we kicks), the MMN was enhanced compared with syntactically legal sentences (e.g. he kicks). More proficient NNS also showed this difference, but less proficient NNS did not. The main cortical sources of the MMN responses were localised in bilateral superior temporal areas, where, crucially, the source strength of grammar-related neuronal activity correlated significantly with the grammatical proficiency of individual L2 speakers as revealed by the psycholinguistic tests. As our results show similar, early MMN indices to morpho-syntactic agreement violations among both native speakers and non-native speakers with high grammar proficiency, they appear consistent with the use of similar brain mechanisms for at least certain aspects of L1 and L2 grammars. This research was supported by the Medical Research Council (MC_US_A060_0034, U1055.04.003.00001.01 to F.P.; MC_US_A060_0043, MC-A060-5PQ90 to Y.S.), the Freie Universität Berlin, the Deutsche Forschungsgemeinschaft (Excellence Cluster Languages of Emotion, Project Pu 97/16-1 on “Construction and Combination”) to F.P. and J.H., and the Overseas Research Student Award Scheme, the Cambridge Trust, and the Language Learning Dissertation Grant to J.H.

    Sensitivity of the human auditory cortex to acoustic degradation of speech and non-speech sounds

    The perception of speech is usually an effortless and reliable process, even in highly adverse listening conditions. In addition to external sound sources, the intelligibility of speech can be reduced by degradation of the structure of the speech signal itself, for example by digital compression of sound. This kind of distortion may be even more detrimental to speech intelligibility than external distortion, given that the auditory system cannot utilize sound-source-specific acoustic features, such as spatial location, to separate the distortion from the speech signal. The perceptual effects of acoustic distortions on speech intelligibility have been studied extensively. However, the cortical mechanisms of speech perception in adverse listening conditions are not well known at present, particularly in situations where the speech signal itself is distorted. The aim of this thesis was to investigate the cortical mechanisms underlying speech perception in conditions where speech is less intelligible due to external distortion or as a result of digital compression. In the studies of this thesis, the intelligibility of speech was varied either by digital compression or by the addition of stochastic noise. Cortical activity related to the speech stimuli was measured using magnetoencephalography (MEG). The results indicated that degradation of speech sounds by digital compression enhanced the evoked responses originating from the auditory cortex, whereas the addition of stochastic noise did not modulate the cortical responses. Furthermore, it was shown that if the distortion was presented continuously in the background, the transient activity of the auditory cortex was delayed. On the perceptual level, digital compression reduced the comprehensibility of speech more than additive stochastic noise.
    In addition, it was demonstrated that prior knowledge of speech content substantially enhanced the intelligibility of distorted speech, and this perceptual change was associated with an increase in cortical activity within several regions adjacent to the auditory cortex. In conclusion, the results of this thesis show that the auditory cortex is very sensitive to the acoustic features of the distortion, while at later processing stages several cortical areas reflect the intelligibility of speech. These findings suggest that the auditory system rapidly adapts to the variability of the auditory environment and can efficiently utilize previous knowledge of speech content in deciphering acoustically degraded speech signals.

    Neurophysiological evidence for rapid processing of verbal and gestural information in understanding communicative actions

    During everyday social interaction, gestures are a fundamental part of human communication. The communicative-pragmatic role of hand gestures and their interaction with spoken language has been documented at the earliest stage of language development, in which two types of indexical gestures are most prominent: the pointing gesture for directing attention to objects and the give-me gesture for making requests. Here we study, in adult human participants, the neurophysiological signatures of gestural-linguistic acts of communicating the pragmatic intentions of naming and requesting by simultaneously presenting written words and gestures. Already at ~150 ms, brain responses diverged between naming and request actions expressed by word-gesture combinations, whereas the same gestures presented in isolation elicited their earliest neurophysiological dissociations significantly later (at ~210 ms). There was an early enhancement of request-evoked brain activity as compared with naming, which was due to sources in the frontocentral cortex, consistent with access to action knowledge in request understanding. In addition, an enhanced N400-like response indicated late semantic integration of gesture-language interaction. The present study demonstrates that word-gesture combinations used to express communicative pragmatic intentions speed up the brain correlates of comprehension processes relative to gesture-only understanding, thereby calling into question current serial linguistic models that place pragmatic function decoding at the end of a language comprehension cascade. Instead, information about the social-interactive role of communicative acts is processed instantaneously.

    Integration of Consonant and Pitch Processing as Revealed by the Absence of Additivity in Mismatch Negativity

    Consonants, unlike vowels, are thought to be speech-specific, and therefore no interactions would be expected between consonants and pitch, a basic element of musical tones. The present study used an electrophysiological approach to investigate whether, contrary to this view, there is integrative processing of consonants and pitch by measuring the additivity of changes in the mismatch negativity (MMN) of evoked potentials. The MMN is elicited by discriminable variations occurring in a sequence of repetitive, homogeneous sounds. In the experiment, event-related potentials (ERPs) were recorded while participants heard frequently presented sung consonant-vowel syllables and rare stimuli deviating in either consonant identity only, pitch only, or both dimensions. Every type of deviation elicited a reliable MMN. As expected, the two single-deviant MMNs had similar amplitudes, but the double-deviant MMN was also not significantly different from them. This absence of additivity in the double-deviant MMN suggests that consonant and pitch variations are processed, at least at a pre-attentive level, in an integrated rather than independent way. Domain-specificity of consonants may depend on higher-level processes in the hierarchy of speech perception.
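The additivity logic described in this abstract can be illustrated with a small numerical sketch: under independent processing, the double-deviant MMN should roughly equal the sum of the two single-deviant MMNs, whereas an under-additive double-deviant response (as reported here) points to integrated processing. All amplitude values and the cutoff below are hypothetical illustrations, not data from the study.

```python
# Additivity check for MMN responses (illustrative values, not study data).
# If consonant and pitch deviance were processed independently, the
# double-deviant MMN would approximate the sum of the single-deviant MMNs.

def additivity_index(mmn_consonant, mmn_pitch, mmn_double):
    """Ratio of the observed double-deviant MMN to the additive prediction."""
    predicted = mmn_consonant + mmn_pitch
    return mmn_double / predicted

# Hypothetical peak amplitudes in microvolts (the MMN is negative-going).
consonant_only = -2.1
pitch_only = -2.3
double_deviant = -2.4   # close to the single-deviant MMNs, not to their sum

index = additivity_index(consonant_only, pitch_only, double_deviant)
print(f"additivity index: {index:.2f}")  # well below 1, i.e. under-additive
if index < 0.8:  # arbitrary illustrative cutoff
    print("pattern consistent with integrated consonant-pitch processing")
```

An index near 1 would indicate additive (independent) processing; the illustrative values above yield an index of about 0.55, mimicking the under-additive pattern the abstract reports.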

    Sensorimotor semantics on the spot: brain activity dissociates between conceptual categories within 150 ms

    Although semantic processing has traditionally been associated with brain responses maximal at 350–400 ms, recent studies have reported that words of different semantic types elicit topographically distinct brain responses substantially earlier, at 100–200 ms. These earlier responses have, however, been obtained using insufficiently precise source localisation techniques, casting doubt on the reported differences in brain generators. Here, we used high-density MEG-EEG recordings in combination with individual MRI images and state-of-the-art source reconstruction techniques to compare localised early activations elicited by words from different semantic categories in different cortical areas. Reliable neurophysiological word-category dissociations emerged bilaterally at ~150 ms, at which point action-related words most strongly activated frontocentral motor areas and visual object-words occipitotemporal cortex. These data show that different cortical areas are activated rapidly by words with different meanings and that aspects of their category-specific semantics are reflected by dissociating neurophysiological sources in motor and visual brain systems.

    Sensory and cognitive mechanisms of change detection in the context of speech

    The aim of this study was to dissociate the contributions of memory-based (cognitive) and adaptation-based (sensory) mechanisms underlying deviance detection in the context of natural speech. Twenty healthy right-handed native speakers of English participated in an event-related functional magnetic resonance imaging (fMRI) scan in which the natural speech stimuli /de:/ (“deh”), /deI/ (“day”), /te:/ (“teh”), and /teI/ (“tay”) served as standards and deviants in an “oddball” paradigm designed to elicit the mismatch negativity component. Thus, “oddball” blocks could involve either a word deviant (“day”), resulting in a “word advantage” effect, or a non-word deviant (“deh” or “tay”). We utilized an experimental protocol controlling for refractoriness similar to that used previously when deviance detection was studied in the context of tones. Results showed that the cognitive and sensory mechanisms of deviance detection were located in the anterior and posterior auditory cortices, respectively, as was previously found in the context of tones. The cognitive effect, which was most robust for the word deviant, diminished in the “oddball” condition. In addition, the results indicated that the lexical status of the speech stimulus interacts with acoustic factors, exerting a top-down modulation of the extent to which novel sounds gain access to the subject’s awareness through memory-based processes. Thus, the more salient the deviant stimulus is, the more likely it is to be released from the effects of adaptation exerted by the posterior auditory cortex.
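The oddball design described in this abstract (frequent standards interleaved with rare deviants) can be sketched as a simple sequence generator. The stimulus labels, deviant probability, and minimum spacing below are illustrative assumptions, not the study's actual protocol; real paradigms typically add further counterbalancing constraints.

```python
import random

def oddball_sequence(standard, deviant, n_trials=400,
                     p_deviant=0.125, min_gap=2, seed=0):
    """Generate an oddball stimulus sequence: mostly standards, occasional
    deviants, with at least `min_gap` standards between successive deviants."""
    rng = random.Random(seed)
    seq = []
    since_deviant = min_gap  # allow a deviant from the first trial onward
    for _ in range(n_trials):
        if since_deviant >= min_gap and rng.random() < p_deviant:
            seq.append(deviant)
            since_deviant = 0
        else:
            seq.append(standard)
            since_deviant += 1
    return seq

seq = oddball_sequence("deh", "day")
print(f"{seq.count('day')} deviants out of {len(seq)} trials")
```

Fixing the random seed makes the sequence reproducible across runs, which is the usual practice when the same trial order must be presented to analysis scripts and stimulus-delivery software alike.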