3 research outputs found

    Cross-modal functional connectivity supports speech understanding in cochlear implant users

    Get PDF
    Sensory deprivation can lead to cross-modal cortical changes, whereby sensory brain regions deprived of input may be recruited to perform atypical functions. Enhanced cross-modal responses to visual stimuli observed in the auditory cortex of postlingually deaf cochlear implant (CI) users are hypothesized to reflect increased activation of cortical language regions, but it is unclear whether this cross-modal activity is adaptive or maladaptive for speech understanding. To determine whether increased activation of language regions is correlated with better speech understanding in CI users, we used functional near-infrared spectroscopy to measure hemodynamic responses and assessed task-related activation and functional connectivity of auditory and visual cortices during auditory and visual speech and non-speech stimulation in CI users (n = 14) and normal-hearing listeners (n = 17). We used visually presented speech and non-speech to investigate neural processes related to linguistic content and observed that CI users show beneficial cross-modal effects. Specifically, increased connectivity between the left auditory and visual cortices (presumed primary sites of cortical language processing) was positively correlated with CI users' ability to understand speech in background noise. Cross-modal activity in the auditory cortex of postlingually deaf CI users may reflect adaptive activity of a distributed, multimodal speech network recruited to enhance speech understanding.
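    A minimal sketch of the connectivity-behaviour analysis described above, assuming preprocessed fNIRS time series per subject; all names, array shapes, and values here are hypothetical placeholders rather than the study's actual pipeline:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical inputs: preprocessed HbO time series per subject
# (channels x samples) and each subject's speech-in-noise score.
def connectivity(ts_a, ts_b):
    """Functional connectivity as the Pearson r between two channel time series."""
    return pearsonr(ts_a, ts_b)[0]

rng = np.random.default_rng(0)
n_subjects = 14
speech_scores = rng.uniform(20, 90, n_subjects)          # % correct in noise (made up)
left_auditory = rng.standard_normal((n_subjects, 1000))  # placeholder HbO traces
left_visual = rng.standard_normal((n_subjects, 1000))

# Per-subject connectivity between the two regions of interest,
# then its correlation with behaviour across subjects.
conn = np.array([connectivity(a, v) for a, v in zip(left_auditory, left_visual)])
r, p = pearsonr(conn, speech_scores)
print(f"connectivity vs. speech-in-noise: r = {r:.2f}, p = {p:.3f}")
```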

    Disentangling the influence of attention in the auditory efferent system during speech processing

    No full text
    Empirical thesis. Bibliography: pages 111-130.
    Contents: Chapter 1. Introduction -- Chapter 2. General methods -- Chapter 3. Results -- Chapter 4. General discussion -- Chapter 5. Implications for future studies and conclusions -- References -- Appendix.
    The physiological mechanisms allowing humans to selectively attend to a single conversation in acoustically adverse situations, such as overlapping conversations or background noise, are poorly understood. In particular, the extent to which goal-directed, top-down processes of auditory attention can modulate inner-ear activity via the auditory efferent system remains unclear. This thesis investigates the relationship between degraded speech and auditory efferent control of the cochlea. Young, normal-hearing participants were assessed in a series of three experiments in which speech intelligibility was manipulated during Active and Passive listening to: 1) noise-vocoded speech; 2) speech in babble noise; and 3) speech in speech-shaped noise. A lexical decision task was used in the “Active” listening condition, where subjects were instructed to press a button each time they heard a non-word. In the “Passive” listening condition they were instructed to ignore all auditory stimuli and watch a movie. Click-evoked otoacoustic emissions (CEOAEs) were obtained from the ear contralateral to the speech stimuli, allowing the measurement of cochlear-gain changes. A 64-channel EEG was synchronized with the CEOAE recording system, enabling the simultaneous measurement of cortical speech-onset event-related potentials (ERPs), click-evoked auditory brainstem responses (ABRs) and behavioural responses. Behavioural results showed that accuracy declined as the speech signals were degraded, while ERP components were enhanced during the Active condition compared to the Passive condition. A decrease in cochlear gain (reduction in CEOAE amplitudes) with increasing task difficulty was observed for noise-vocoded speech, but not for speech in babble or speech-shaped noise. Brainstem components showed decreased activity linked to CEOAE suppression. These findings contribute to an integrative view of auditory attention as an adaptive mechanism that recruits cochlear gain control via the auditory efferent system in a manner dependent upon the auditory scene encountered.
    Mode of access: World Wide Web. 1 online resource (140 pages): diagrams, graphs, tables.
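    A minimal sketch of how the Active-versus-Passive CEOAE suppression contrast described above might be computed, assuming averaged CEOAE waveforms per participant and condition; the data, sample size, and RMS-based amplitude measure are illustrative assumptions, not the thesis's actual analysis:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical inputs: averaged CEOAE waveforms per participant and
# condition (participants x samples); all values here are made up.
def ceoae_amplitude(waveform):
    """CEOAE magnitude as the RMS amplitude of the averaged response."""
    return np.sqrt(np.mean(waveform ** 2))

rng = np.random.default_rng(1)
n = 20
passive = rng.standard_normal((n, 512)) * 1.0  # placeholder responses
active = rng.standard_normal((n, 512)) * 0.9   # slightly reduced gain

amp_passive = np.array([ceoae_amplitude(w) for w in passive])
amp_active = np.array([ceoae_amplitude(w) for w in active])

# A reduction in CEOAE amplitude during Active listening would indicate
# efferent suppression of cochlear gain.
t, p = ttest_rel(amp_active, amp_passive)
print(f"mean suppression: {np.mean(amp_passive - amp_active):.3f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```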

    Deep reinforcement learning guided graph neural networks for brain network analysis

    Full text link
    Modern neuroimaging techniques, such as diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI), enable us to model the human brain as a brain network or connectome. Capturing brain networks' structural information and hierarchical patterns is essential for understanding brain functions and disease states. Recently, the promising network representation learning capability of graph neural networks (GNNs) has prompted the proposal of many GNN-based methods for brain network analysis. Specifically, these methods apply feature aggregation and global pooling to convert brain network instances into meaningful low-dimensional representations used for downstream brain network analysis tasks. However, existing GNN-based methods often neglect that brain networks of different subjects may require different numbers of aggregation iterations, and instead apply a GNN with a fixed number of layers to all brain networks. Fully realizing the potential of GNNs for brain network analysis therefore remains non-trivial. To solve this problem, we propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network. Concretely, BN-GNN employs deep reinforcement learning (DRL) to train a meta-policy that automatically determines the optimal number of feature aggregations (reflected in the number of GNN layers) required for a given brain network. Extensive experiments on eight real-world brain network datasets demonstrate that our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
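    A minimal sketch of the core idea, assuming a simplified tabular stand-in for BN-GNN's DRL meta-policy: a coarse per-graph state selects how many rounds of neighbourhood aggregation (GNN layers) to apply before pooling. Every function, state definition, and value below is a hypothetical simplification of the paper's method, not its implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def normalized_adjacency(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def aggregate(features, adj_norm, k):
    """k rounds of neighbourhood feature aggregation (k 'GNN layers')."""
    h = features
    for _ in range(k):
        h = adj_norm @ h
    return h.mean(axis=0)  # global mean pooling -> graph embedding

# Toy meta-policy: a Q-table over depths 1..max_depth, indexed by a coarse
# state (here: a mean-degree bucket); BN-GNN itself trains a DRL agent.
max_depth, n_buckets = 4, 5
q_table = np.zeros((n_buckets, max_depth))

def choose_depth(adj, eps=0.1):
    state = min(int(adj.sum(axis=1).mean()), n_buckets - 1)
    if rng.random() < eps:  # epsilon-greedy exploration
        return state, int(rng.integers(max_depth)) + 1
    return state, int(np.argmax(q_table[state])) + 1

# One toy "brain network": random symmetric adjacency and node features.
adj = (rng.random((10, 10)) > 0.7).astype(float)
adj = np.triu(adj, 1)
adj += adj.T
x = rng.standard_normal((10, 8))

state, k = choose_depth(adj)
embedding = aggregate(x, normalized_adjacency(adj), k)
reward = rng.random()  # stand-in for downstream task accuracy
q_table[state, k - 1] += 0.1 * (reward - q_table[state, k - 1])  # Q update
print(f"chose {k} aggregation round(s); embedding shape: {embedding.shape}")
```

    In the actual framework, a learned deep network presumably replaces this table and the reward signal comes from downstream task performance, but the per-graph choice of aggregation depth is the idea being illustrated.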