    Saccadic modulation of neural excitability in auditory areas of the neocortex

    In natural "active" vision, humans and other primates use eye movements (saccades) to sample bits of information from visual scenes. In the visual cortex, non-retinal signals linked to saccades shift visual cortical neurons into a high-excitability state as each saccade ends. The extent of this saccadic modulation outside the visual system is unknown. Here, we show that during natural viewing, saccades modulate excitability in numerous auditory cortical areas with a temporal pattern complementary to that seen in visual areas. Control somatosensory cortical recordings indicate that this temporal pattern is unique to auditory areas. Bidirectional functional connectivity patterns suggest that these effects may arise from regions involved in saccade generation. We propose that by using saccadic signals to yoke excitability states in auditory areas to those in visual areas, the brain can improve information processing in complex natural settings.

    Corticocortical evoked potentials reveal projectors and integrators in human brain networks.

    The cerebral cortex is composed of subregions whose functional specialization is largely determined by their incoming and outgoing connections with each other. In the present study, we asked which cortical regions can exert the greatest influence over other regions and over the cortical network as a whole. Previous research on this question has relied on coarse anatomy (mapping large fiber pathways) or on functional connectivity (mapping inter-regional statistical dependencies in ongoing activity). Here we combined direct electrical stimulation with recordings from the cortical surface to provide novel insight into directed, inter-regional influence within the cerebral cortex of awake humans. These networks of directed interaction were reproducible across strength thresholds and across subjects. Directed network properties included: (1) the reciprocity of connections decreased with distance; (2) major projector nodes (sources of influence) were located in peri-Rolandic cortex and in posterior, basal, and polar regions of the temporal lobe; and (3) major receiver nodes (receivers of influence) were located in anterolateral frontal, superior parietal, and superior temporal regions. Connectivity maps derived from electrical stimulation and from resting electrocorticography (ECoG) correlations showed similar spatial distributions for the same source node. However, higher-level network topology analysis revealed differences between electrical stimulation and ECoG that were partially related to the reciprocity of connections. Together, these findings inform our understanding of large-scale corticocortical influence as well as the interpretation of functional connectivity networks.
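    The directed-network summaries this abstract describes (projector and receiver nodes, reciprocity falling with distance) can be illustrated with a short sketch. The code below is a minimal NumPy example on a random toy network, not the paper's analysis: A[i, j] = 1 stands in for "stimulating site i evoked a significant response at site j", and all names, sizes, and thresholds are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 60                                       # stimulation/recording sites (toy)
    coords = rng.uniform(0, 100, size=(n, 2))    # toy electrode positions (mm)
    A = (rng.random((n, n)) < 0.15).astype(int)  # toy directed adjacency matrix
    np.fill_diagonal(A, 0)

    out_degree = A.sum(axis=1)                   # projectors: sources of influence
    in_degree = A.sum(axis=0)                    # receivers: targets of influence
    projectors = np.argsort(out_degree)[::-1][:5]
    receivers = np.argsort(in_degree)[::-1][:5]

    # Reciprocity per connected pair: is the i->j edge matched by j->i?
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    i, j = np.where(np.triu(A | A.T, k=1))       # pairs with at least one edge
    reciprocal = (A[i, j] & A[j, i]).astype(float)

    # Bin reciprocity by distance to test whether it decreases with separation.
    bins = np.linspace(0, dist.max(), 6)
    idx = np.digitize(dist[i, j], bins)
    for b in range(1, len(bins)):
        sel = idx == b
        if sel.any():
            print(f"{bins[b-1]:5.1f}-{bins[b]:5.1f} mm: "
                  f"reciprocity = {reciprocal[sel].mean():.2f} (n={sel.sum()})")
    print("top projector sites:", projectors)
    print("top receiver sites:", receivers)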

    Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception

    Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of “neuro-steered” hearing devices, in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS), in which the information about the attended speech, as decoded from the subject’s brain, is used directly to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract, from a multi-talker speech mixture, the clean audio signal that a listener is attending to. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system makes it a strong candidate for neuro-steered hearing-assistive devices.
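    The core idea, conditioning the separation front-end directly on a brain-decoded attention signal, can be sketched in a few lines of PyTorch. The small GRU masking network below is an illustrative stand-in under our own assumptions, not the architecture from the paper: it simply concatenates the mixture spectrogram with a decoded attended-speech envelope and predicts a mask for the attended talker.

    import torch
    import torch.nn as nn

    class BrainInformedSeparator(nn.Module):
        def __init__(self, n_freq=257, env_dim=1, hidden=128):
            super().__init__()
            # Concatenate mixture frames with the decoded envelope, so the
            # network knows which speaker the listener is attending to.
            self.rnn = nn.GRU(n_freq + env_dim, hidden, num_layers=2,
                              batch_first=True)
            self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

        def forward(self, mix_spec, decoded_env):
            # mix_spec:    (batch, time, n_freq) magnitude spectrogram of mixture
            # decoded_env: (batch, time, 1) envelope decoded from EEG/iEEG
            x = torch.cat([mix_spec, decoded_env], dim=-1)
            h, _ = self.rnn(x)
            return mix_spec * self.mask(h)   # masked estimate of attended speech

    # Toy forward pass with random tensors standing in for real data.
    model = BrainInformedSeparator()
    mix = torch.rand(2, 100, 257)
    env = torch.rand(2, 100, 1)
    print(model(mix, env).shape)             # torch.Size([2, 100, 257])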

    Decoding neural activity in sulcal and white matter areas of the brain to accurately predict individual finger movement and tactile stimuli of the human hand

    Millions of people worldwide suffer motor or sensory impairment due to stroke, spinal cord injury, multiple sclerosis, traumatic brain injury, diabetes, and motor neuron diseases such as amyotrophic lateral sclerosis (ALS). A brain-computer interface (BCI), which links the brain directly to a computer, offers a new way to study the brain and potentially restore function in patients living with these debilitating conditions. One of the challenges currently facing BCI technology, however, is to minimize surgical risk while maintaining efficacy. Minimally invasive techniques such as stereoelectroencephalography (SEEG) have become more widely used in clinical applications in epilepsy patients because they lead to fewer complications. SEEG depth electrodes also give access to sulcal and white matter areas of the brain but have not been widely studied in brain-computer interfaces. Here we show the first demonstration of decoding sulcal and subcortical activity related to both movement and tactile sensation in the human hand, and we compare decoding performance in SEEG-based depth recordings with that obtained from electrocorticography (ECoG) electrodes placed on gyri. Initial decoding performance was poor, and most neural modulation patterns varied in amplitude from trial to trial and were transient (significantly shorter than the sustained finger movements studied). These observations led us to develop a feature selection method based on a repeatability metric that uses temporal correlation to isolate features that repeat consistently across trials (required for accurate decoding) and that carry information about movement or touch-related stimuli. We then used these features, along with deep learning methods, to classify various motor and sensory events for individual fingers with high accuracy. Repeating features were found in sulcal, gyral, and white matter areas and were predominantly phasic or phasic-tonic across a wide frequency range for both high-density (HD) ECoG and SEEG recordings. Because such transient input features are well suited to long short-term memory (LSTM) recurrent neural networks (RNNs), we combined temporal correlation-based feature selection with an LSTM decoder, yielding accuracies of up to 92.04 ± 1.51% for hand movements, up to 91.69 ± 0.49% for individual finger movements, and up to 83.49 ± 0.72% for focal tactile stimuli to individual finger pads, all while using a relatively small number of SEEG electrodes. These findings may lead to a new class of minimally invasive brain-computer interface systems, increasing the applicability of BCIs to a wide variety of conditions.
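    The two-stage pipeline the abstract describes lends itself to a short illustration: score each feature by trial-to-trial repeatability, then feed the top-scoring features to an LSTM classifier. The sketch below defines repeatability as the mean pairwise Pearson correlation of a feature's trial-aligned time courses; this concrete metric and the tiny PyTorch decoder are our own assumptions, not the authors' exact implementation.

    import numpy as np
    import torch
    import torch.nn as nn

    def repeatability(trials):
        """trials: (n_trials, n_time) responses of one feature. Returns the
        mean pairwise Pearson correlation across trials (higher = more
        repeatable)."""
        z = (trials - trials.mean(1, keepdims=True)) / trials.std(1, keepdims=True)
        corr = (z @ z.T) / trials.shape[1]
        n = len(trials)
        return (corr.sum() - n) / (n * (n - 1))   # mean of off-diagonal entries

    # Toy data: 40 trials x 20 features x 50 samples; feature 0 is reliable.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((40, 20, 50))
    X[:, 0, :] += 3 * np.sin(np.linspace(0, np.pi, 50))   # phasic, repeatable

    scores = np.array([repeatability(X[:, f, :]) for f in range(X.shape[1])])
    keep = np.argsort(scores)[::-1][:5]                   # keep top features

    class Decoder(nn.Module):
        def __init__(self, n_feat, n_classes, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_classes)
        def forward(self, x):                 # x: (batch, time, n_feat)
            h, _ = self.lstm(x)
            return self.out(h[:, -1])         # classify from final hidden state

    model = Decoder(n_feat=len(keep), n_classes=5)
    logits = model(torch.tensor(X[:, keep, :].transpose(0, 2, 1),
                                dtype=torch.float32))
    print("selected features:", keep, "logits shape:", logits.shape)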

    Spatiotemporal structure of intracranial electric fields induced by transcranial electric stimulation in humans and nonhuman primates

    Transcranial electric stimulation (TES) is an emerging technique developed to non-invasively modulate brain function. However, the spatiotemporal distribution of the intracranial electric fields induced by TES remains poorly understood. In particular, it is unclear how much current actually reaches the brain and how it distributes across the brain. Lack of this basic information precludes a firm mechanistic understanding of TES effects. In this study we directly measured the spatial and temporal characteristics of the electric field generated by TES using stereotactic EEG (s-EEG) electrode arrays implanted in cebus monkeys and in surgical epilepsy patients. We found a small frequency-dependent decrease (10%) in the magnitude of TES-induced potentials and negligible phase shifts over space. Electric field strengths were strongest in superficial brain regions, with maximum values of about 0.5 mV/mm. Our results provide crucial information on the biophysics underlying TES applications in humans and inform the optimization and design of TES stimulation protocols. In addition, our findings have broad implications concerning electric field propagation in non-invasive recording techniques such as EEG/MEG.
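    To make the mV/mm unit concrete: the projected electric field along a depth-electrode shaft is approximately the negative spatial derivative of the measured stimulation-frequency potential across adjacent contacts. The sketch below shows this arithmetic only; the contact spacing and potential amplitudes are invented illustration values, not measurements from the paper.

    import numpy as np

    spacing_mm = 3.5                       # typical SEEG inter-contact spacing
    # Amplitude of the TES-frequency component at each contact (mV), e.g. taken
    # from a Fourier transform of the recording at the stimulation frequency.
    potential_mv = np.array([1.8, 1.6, 1.35, 1.15, 1.0, 0.9, 0.82, 0.78])

    # Field along the shaft = -dV/dx; np.gradient handles the endpoints.
    field_mv_per_mm = -np.gradient(potential_mv, spacing_mm)
    print(np.round(field_mv_per_mm, 3))    # largest near superficial contacts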

    Deep neural networks effectively model neural adaptation to changing background noise and suggest nonlinear noise filtering methods in auditory cortex

    The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model of the auditory system, the spectro-temporal receptive field (STRF), cannot capture the nonlinear neural dynamics involved in noise adaptation. Here, we use a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the level of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and the shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal changes create noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise-filtering changes in their excitatory regions, suggesting differences in noise-filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
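    One common way to inspect a DNN's "STRF-like" computation over time, sketched below under our own assumptions rather than as the paper's method, is to linearize the network around each input: the gradient of the predicted response with respect to the input spectrogram acts as a locally linear receptive field, whose gain and shape can be compared across noise contexts. The tiny network here is a stand-in for the paper's model.

    import torch
    import torch.nn as nn

    n_freq, n_hist = 32, 40                 # frequency bins, history window

    net = nn.Sequential(                    # toy spectrogram -> response model
        nn.Flatten(),
        nn.Linear(n_freq * n_hist, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

    def dynamic_strf(spectrogram_patch):
        """Gradient of the predicted neural response w.r.t. the input patch:
        a locally linear receptive field at this moment in the stimulus."""
        x = spectrogram_patch.clone().requires_grad_(True)
        net(x.unsqueeze(0)).sum().backward()
        return x.grad                       # shape (n_freq, n_hist)

    # Compare the effective receptive field in two noise contexts.
    quiet = torch.rand(n_freq, n_hist) * 0.2
    noisy = quiet + torch.rand(n_freq, n_hist)
    strf_quiet, strf_noisy = dynamic_strf(quiet), dynamic_strf(noisy)
    gain_change = strf_noisy.abs().mean() / strf_quiet.abs().mean()
    print(f"relative gain after noise onset: {gain_change:.2f}")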