12 research outputs found

    After-effects of 10 Hz tACS over the prefrontal cortex on phonological word decisions

    No full text
    Introduction: Previous work in the language domain has shown that 10 Hz rTMS of the left or right posterior inferior frontal gyrus (pIFG) in the prefrontal cortex impaired phonological decision-making, arguing for a causal contribution of the bilateral pIFG to phonological processing. However, the neurophysiological correlates of these effects are unclear. The present study addressed the question of whether neural activity in the prefrontal cortex could be modulated by 10 Hz tACS and how this would affect phonological decisions. Methods: In three sessions, 24 healthy participants received tACS at 10 Hz or 16.18 Hz (control frequency) or sham stimulation over the bilateral prefrontal cortex before task processing. Resting-state EEG was recorded before and after tACS. We also recorded EEG during task processing. Results: Relative to sham stimulation, 10 Hz tACS significantly facilitated phonological response speed. This effect was task-specific, as tACS did not affect a simple control task. Moreover, 10 Hz tACS significantly increased theta power during phonological decisions. The individual increase in theta power was positively correlated with the behavioral facilitation after 10 Hz tACS. Conclusion: Our results show a facilitation of phonological decisions after 10 Hz tACS over the bilateral prefrontal cortex. This might indicate that 10 Hz tACS increased task-related activity in the stimulated area to a level that was optimal for phonological performance. The significant correlation with the individual increase in theta power suggests that the behavioral facilitation might be related to increased theta power during language processing.

    Development of auditory repetition effects with age : evidence from EEG time-frequency analysis

    Get PDF
    The repeated presentation of unfamiliar sounds leads to repetition effects comprising repetition suppression (RS) and repetition enhancement (RE) of neural activity. These phenomena reflect mechanisms involved in perceptual learning and are associated with decreases or increases in EEG spectral power. The objective of this Master's thesis was to provide a developmental perspective on the cortical activity underlying auditory perceptual learning. EEG was recorded in 101 healthy participants aged 3 to 40 years during a passive auditory paradigm in which 30 pseudowords were each repeated six times. EEG time-frequency spectral power was calculated for each presentation and compared to quantify repetition effects. Linear mixed-model analyses revealed that some repetition effects occurred across ages while others varied with age in specific frequency bands. More precisely, RS and RE were found across ages in the lower theta and gamma frequency bands, respectively, between the first and all subsequent pseudoword presentations. Developmental effects were seen in the RS observed in the higher theta/low alpha band and in the later-occurring RE in the lower theta band. These results show that processes involved in auditory perceptual learning, such as RS and RE, are modulated by maturation. Further, repetition effects reflect different levels of stimulus processing, and these levels seem to develop independently. More research is required to identify the exact functional roles of auditory repetition effects on cognitive development.
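The per-repetition time-frequency power computation described in this abstract can be sketched as follows. This is a minimal single-channel illustration, not the thesis's actual pipeline; the sampling rate, number of wavelet cycles, and band limits are assumptions for demonstration.

```python
import numpy as np

def morlet_power(epoch, fs, freq, n_cycles=6.0):
    """Single-trial spectral power at one frequency via complex Morlet wavelet convolution."""
    sigma_t = n_cycles / (2 * np.pi * freq)            # temporal width of the wavelet
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
    analytic = np.convolve(epoch, wavelet, mode="same")
    return np.abs(analytic) ** 2                       # instantaneous power over time

def band_power(epoch, fs, freqs):
    """Power averaged over a set of frequencies (e.g., a theta band) and over time."""
    return float(np.mean([morlet_power(epoch, fs, f).mean() for f in freqs]))
```

Per-repetition band-power values computed this way could then enter a linear mixed model (e.g., with repetition number and age as predictors) as in the analysis the abstract describes.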

    Scale-free amplitude modulation of neuronal oscillations tracks comprehension of accelerated speech

    Get PDF
    Speech comprehension is preserved up to a threefold acceleration, but deteriorates rapidly at higher speeds. Current models posit that perceptual resilience to accelerated speech is limited by the brain's ability to parse speech into syllabic units using δ/θ oscillations. Here, we investigated whether the involvement of neuronal oscillations in processing accelerated speech also relates to their scale-free amplitude modulation as indexed by the strength of long-range temporal correlations (LRTC). We recorded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensible rates, at a mostly unintelligible rate, and at this same speed interleaved with silence gaps. δ, θ, and low-γ oscillations followed the nonlinear variation of comprehension, with LRTC rising only at the highest speed. In contrast, increasing the rate was associated with a monotonic increase in LRTC in high-γ activity. When intelligibility was restored with the insertion of silence gaps, LRTC in the δ, θ, and low-γ oscillations resumed the low levels observed for intelligible speech. Remarkably, the lower the individual subject scaling exponents of δ/θ oscillations, the greater the comprehension of the fastest speech rate. Moreover, the strength of LRTC of the speech envelope decreased at the maximal rate, suggesting an inverse relationship with the LRTC of brain dynamics when comprehension halts. Our findings show that scale-free amplitude modulation of cortical oscillations and speech signals are tightly coupled to speech uptake capacity. SIGNIFICANCE STATEMENT: One may read this statement in 20–30 s, but reading it in less than five leaves us clueless. Our minds limit how much information we grasp in an instant. Understanding the neural constraints on our capacity for sensory uptake is a fundamental question in neuroscience. Here, MEG was used to investigate neuronal activity while subjects listened to radio news played faster and faster until becoming unintelligible. We found that speech comprehension is related to the scale-free dynamics of the δ and θ bands, whereas this property in high-γ fluctuations mirrors speech rate. We propose that successful speech processing imposes constraints on the self-organization of synchronous cell assemblies, whose scale-free dynamics adjust to the temporal properties of spoken language.
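The scaling exponents this study relies on are typically estimated with detrended fluctuation analysis (DFA) of band-limited amplitude envelopes. A minimal DFA sketch follows; the window sizes and input signal are illustrative assumptions, not the study's settings.

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Detrended fluctuation analysis: returns the scaling exponent alpha.
    alpha ~ 0.5 for uncorrelated noise; alpha > 0.5 indicates LRTC."""
    profile = np.cumsum(signal - np.mean(signal))       # integrated (cumulative) signal
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segments = profile[: n_seg * s].reshape(n_seg, s)
        x = np.arange(s)
        # Remove a linear trend from each window, then take the RMS fluctuation
        sq = [np.mean((seg - np.polyval(np.polyfit(x, seg, 1), x)) ** 2)
              for seg in segments]
        flucts.append(np.sqrt(np.mean(sq)))
    # Slope of log fluctuation vs. log window size is the DFA exponent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```

Applied to, e.g., the Hilbert envelope of δ/θ-filtered MEG, a lower exponent would correspond to the weaker LRTC that the study links to better comprehension of the fastest speech.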

    Atypical MEG inter-subject correlation during listening to continuous natural speech in dyslexia

    Get PDF
    Listening to speech elicits brain activity time-locked to the speech sounds. This so-called neural entrainment to speech was found to be atypical in dyslexia, a reading impairment associated with neural speech processing deficits. We hypothesized that the brain responses of dyslexic readers to real-life speech would differ from those of typical readers, and that this difference would be reflected in the strength of inter-subject correlation (ISC) and in reading-related measures. We recorded magnetoencephalograms (MEG) of 23 dyslexic and 21 typically-reading adults during listening to ~10 min of natural Finnish speech consisting of excerpts from radio news, a podcast, a self-recorded audiobook chapter, and small talk. The amplitude envelopes of band-pass-filtered MEG source signals were correlated between subjects in a cortically-constrained source space in six frequency bands. The resulting ISCs of dyslexic and typical readers were compared with a permutation-based t-test. Neuropsychological measures of phonological processing, technical reading, and working memory were correlated with the ISCs using the Mantel test. During listening to speech, ISCs were mainly reduced in dyslexic compared to typical readers in the delta (0.5–4 Hz) and high gamma (55–90 Hz) frequency bands. In the theta (4–8 Hz), beta (12–25 Hz), and low gamma (25–45 Hz) bands, dyslexic readers had enhanced ISC to speech compared to controls. Furthermore, we found that ISCs across both groups were associated with phonological processing, technical reading, and working memory. The atypical ISC to natural speech in dyslexia supports the temporal sampling deficit theory of dyslexia. It also suggests over-synchronization to phoneme-rate information in speech, which could indicate more effort-demanding sampling of phonemes from speech in dyslexia. These irregularities in parsing speech are likely among the complex neural factors contributing to dyslexia. The associations between neural coupling and reading-related skills further support this notion.
    Peer reviewed
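The envelope-ISC computation described above can be sketched roughly as follows. This is a toy single-signal-per-subject version with assumed filter settings; the study itself used cortically-constrained source signals, six bands, and permutation statistics.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope(x, fs, lo, hi, order=4):
    """Amplitude envelope of a band-pass-filtered signal (Hilbert transform)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))

def envelope_isc(signals, fs, lo, hi):
    """Mean pairwise Pearson correlation of band-limited envelopes across subjects."""
    envs = np.array([band_envelope(s, fs, lo, hi) for s in signals])
    r = np.corrcoef(envs)                              # (n_subjects, n_subjects)
    upper = r[np.triu_indices(len(signals), k=1)]      # unique subject pairs
    return upper.mean()
```

Group differences in such ISC values (dyslexic vs. typical readers) would then be assessed with a permutation-based t-test, as in the study.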

    Lower Beta: A Central Coordinator of Temporal Prediction in Multimodal Speech

    Get PDF
    How the brain decomposes and integrates information in multimodal speech perception is linked to oscillatory dynamics. However, how speech takes advantage of redundancy between different sensory modalities, and how this translates into specific oscillatory patterns, remains unclear. We address the role of lower beta activity (~20 Hz), generally associated with motor functions, as an amodal central coordinator that receives bottom-up delta-theta copies from specific sensory areas and generates top-down temporal predictions for auditory entrainment. Dissociating temporal prediction from entrainment may explain how and why visual input benefits speech processing rather than adding cognitive load in multimodal speech perception. On the one hand, body movements convey prosodic and syllabic features at delta and theta rates (i.e., 1–3 Hz and 4–7 Hz). On the other hand, the natural precedence of visual input before auditory onsets may prepare the brain to anticipate and facilitate the integration of auditory delta-theta copies of the prosodic-syllabic structure. Here, we identify three fundamental criteria, based on recent evidence and hypotheses, which support the notion that the lower motor beta frequency may play a central and generic role in temporal prediction during speech perception. First, beta activity must respond to rhythmic stimulation across modalities. Second, beta power must respond to biological motion and speech-related movements conveying temporal information in multimodal speech processing. Third, temporal prediction may recruit a communication loop between motor and primary auditory cortices (PACs) via delta-to-beta cross-frequency coupling. We discuss evidence related to each criterion and extend these concepts to a beta-motivated framework of multimodal speech processing.
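The delta-to-beta cross-frequency coupling invoked by the third criterion is commonly quantified with phase-amplitude coupling measures. Below is a minimal mean-vector-length sketch; the band limits and filter settings are assumptions for illustration, not a measure the review itself prescribes.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase band-pass filter (second-order sections for numerical stability)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x, fs, phase_band=(1.0, 3.0), amp_band=(15.0, 25.0)):
    """Mean-vector-length PAC: how strongly delta phase modulates beta amplitude."""
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))   # delta phase
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))         # beta amplitude
    # A non-uniform distribution of beta amplitude over delta phase yields a long mean vector
    return np.abs(np.mean(amp * np.exp(1j * phase)))
```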

    Is conscious perception a series of discrete temporal frames?

    Get PDF
    This paper reviews proposals that conscious perception consists, in whole or in part, of successive discrete temporal frames on the sub-second time scale, each frame containing information registered as simultaneous or static. Although the idea of discrete frames in conscious perception cannot be regarded as falsified, there are many problems. Evidence does not consistently support any proposed duration or range of durations for frames. EEG waveforms provide evidence of periodicity in brain activity, but not necessarily in conscious perception. Temporal properties of perceptual processes are flexible in response to competing processing demands, which is hard to reconcile with the relative inflexibility of regular frames. There are also problems concerning the definition of frames, the need for informational connections between frames, the means by which boundaries between frames are established, and the apparent requirement for a storage buffer for information awaiting entry to the next frame.

    Cortical and subcortical speech-evoked responses in young and older adults: Effects of background noise, arousal states, and neural excitability

    Get PDF
    This thesis investigated how the brain processes speech signals in the sensory auditory system in human adults across a wide age range, using electroencephalography (EEG). Two types of speech-evoked phase-locked responses were focused on: (i) cortical responses (theta-band phase-locked responses) that reflect processing of the low-frequency, slowly-varying envelope of speech; and (ii) subcortical/peripheral responses (frequency-following responses; FFRs) that reflect encoding of speech periodicity and temporal fine structure. The aim was to elucidate, through three studies, how these neural activities are affected by different internal (aging, hearing loss, level of arousal, and neural excitability) and external (background noise) factors in daily life. Study 1 investigated theta-band phase-locking and FFRs in young and older adults, examining how aging and hearing loss affect these activities in quiet and noisy environments and how these activities are associated with speech-in-noise perception. The results showed that aging and hearing loss affect speech-evoked phase-locked responses through different mechanisms, and that the effects of aging on cortical and subcortical activities play different roles in speech-in-noise perception. Study 2 investigated how the level of arousal, or consciousness, affects phase-locked responses in young and older adults. The results showed that both theta-band phase-locking and FFRs decrease as the level of arousal decreases. It was further found that the neuro-regulatory role of sleep spindles on theta-band phase-locking differs between young and older adults, indicating that the mechanisms of neuro-regulation of phase-locked responses across arousal states are age-dependent. Study 3 established a causal relationship between auditory cortical excitability and FFRs using combined transcranial direct current stimulation (tDCS) and EEG. FFRs were measured before and after tDCS was applied over the auditory cortices. The results showed that changes in the neural excitability of the right auditory cortex can alter FFR magnitudes along the contralateral pathway. This finding has important theoretical and clinical implications, as it causally links functions of the auditory cortex with the neural encoding of speech periodicity. Taken together, the findings of this thesis advance our understanding of how speech signals are processed via neural phase-locking in everyday life across the lifespan.
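Theta-band phase-locked responses of the kind measured in these studies are often quantified as inter-trial phase coherence (ITPC). A generic sketch follows; the filter band, order, and simulated data are assumptions, not the thesis's exact analysis.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def itpc(trials, fs, lo=4.0, hi=8.0):
    """Inter-trial phase coherence per time point for a (n_trials, n_times) array.
    1 = perfectly phase-locked across trials; ~0 = random phase."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trials, axis=1)
    phases = np.angle(hilbert(filtered, axis=1))
    # Length of the mean unit phase vector across trials
    return np.abs(np.mean(np.exp(1j * phases), axis=0))
```

A stimulus-locked theta response yields high ITPC, whereas activity with trial-to-trial phase jitter (e.g., under reduced arousal) yields values near zero.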

    Testing Low-Frequency Neural Activity in Sentence Understanding

    Full text link
    Human language has the unique characteristic that we can create infinitely many novel phrases or sentences; this stems from the capacity for composition, which allows us to combine smaller units into larger meaningful units. Composition involves following syntactic rules stored in memory and incrementally building well-formed structures. Research has shown that neural circuits can be associated with cognitive faculties such as memory and language, and there is evidence indicating where and when the neural indices of compositional processing arise. However, it is not yet clear "how" neural circuits actually implement compositional processes. This dissertation aims to probe "how" the composition of meaning is represented by neural circuits, by investigating the role of low-frequency neural activity in carrying out composition. Neuroelectric signals were recorded with electroencephalography (EEG) to examine the functional interpretation of low-frequency neural activity in the so-called delta band of 0.5 to 3 Hz. Activity in this band has been associated with the processing of syntactic structures (Ding et al., 2016). First, whether this activity is indeed associated with hierarchy remains under debate. This dissertation uses a novel condition in which the same words are presented but their order is changed to remove the syntactic structure. Only entrainment with syllables was found in this "reversed" condition, supporting the hypothesis that neural activity in the delta band entrains to abstract syntactic structures. Second, we test the timing with which language users combine words and comprehend sentences. How comprehension correlates with this low-frequency neural activity, and whether it represents an endogenous or an evoked neural response, remains unclear. This dissertation manipulates the length of syllables and the regularity between syllables to test the hypotheses.
The results support the view that this neural activity reflects an endogenous response and suggest that it reflects top-down processing. Third, what semantic information modulates this low-frequency neural activity is unknown. This dissertation examines several semantic variables typically associated with different aspects of semantic processing. The stimuli are created by varying the statistical association between words, world knowledge, and the conceptual results of semantic composition. The current results suggest that low-frequency neural activity is not driven by semantic processing. Based on the above findings, we propose that neural activity in the delta band reflects top-down predictive processing that involves syntactic information directly but not semantic information.
PhD, Linguistics, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/169907/1/chiawenl_1.pd
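The delta-band entrainment analysis this dissertation builds on (Ding et al. 2016-style frequency tagging) amounts to averaging trials so that only phase-locked activity survives, then inspecting the spectrum at the syllable, phrase, and sentence rates. A toy sketch with assumed rates and simulated data:

```python
import numpy as np

def evoked_spectrum(trials, fs):
    """Average trials first (keeping only phase-locked activity), then compute power spectrum."""
    evoked = np.mean(trials, axis=0)
    power = np.abs(np.fft.rfft(evoked)) ** 2
    freqs = np.fft.rfftfreq(evoked.size, 1 / fs)
    return freqs, power

def power_at(freqs, power, f):
    """Power at the spectral bin closest to frequency f."""
    return power[np.argmin(np.abs(freqs - f))]
```

With, say, four-syllable sentences presented at 1 Hz, peaks at 1 Hz (sentence), 2 Hz (phrase), and 4 Hz (syllable) would index hierarchical structure; scrambling word order, as in the "reversed" condition, would leave only the syllable-rate peak.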