Contextual Feature Extraction Hierarchies Converge in Large Language Models and the Brain
Recent advancements in artificial intelligence have sparked interest in the parallels between large language models (LLMs) and human neural processing, particularly in language comprehension. While prior research has established similarities between the representations of LLMs and the brain, the underlying computational principles that cause this convergence, especially in the context of evolving LLMs, remain elusive. Here, we examined a diverse selection of high-performance LLMs with similar parameter sizes to investigate the factors contributing to their alignment with the brain's language processing mechanisms. We find that as LLMs achieve higher performance on benchmark tasks, they not only become more brain-like, as measured by higher performance in predicting neural responses from LLM embeddings, but their hierarchical feature extraction pathways also map more closely onto the brain's while using fewer layers to do the same encoding. We also compare the feature extraction pathways of the LLMs to each other and identify new ways in which high-performing models have converged toward similar hierarchical processing mechanisms. Finally, we show the importance of contextual information in improving model performance and brain similarity. Our findings reveal the converging aspects of language processing in the brain and LLMs and offer new directions for developing models that align more closely with human cognitive processing.
Comment: 19 pages, 5 figures, and 4 supplementary figures
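The layer-wise encoding analysis described in this abstract can be illustrated with a minimal sketch: regularized regression from one LLM layer's embeddings to measured neural responses, scored per layer. The array shapes, ridge penalty, and correlation-based score below are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical layer-wise encoding analysis: predict neural responses from
# LLM embeddings with ridge regression and score each layer separately.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def encoding_score(layer_embeddings, neural_responses, alpha=100.0, n_splits=5):
    """Mean cross-validated correlation between predicted and measured responses.

    layer_embeddings : (n_stimuli, n_features) activations from one LLM layer
    neural_responses : (n_stimuli, n_channels) responses, e.g. electrode high-gamma
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = []
    for train, test in kf.split(layer_embeddings):
        model = Ridge(alpha=alpha).fit(layer_embeddings[train], neural_responses[train])
        pred = model.predict(layer_embeddings[test])
        # Correlate predicted and measured responses channel by channel.
        r = [np.corrcoef(pred[:, c], neural_responses[test, c])[0, 1]
             for c in range(neural_responses.shape[1])]
        scores.append(np.nanmean(r))
    return float(np.mean(scores))

# Scoring every layer yields a brain-similarity profile across the model's depth:
# profile = [encoding_score(emb, Y) for emb in per_layer_embeddings]
```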
Joint Representation of Spatial and Phonetic Features in the Human Core Auditory Cortex
The human auditory cortex simultaneously processes speech and determines the location of a speaker in space. Neuroimaging studies in humans have implicated core auditory areas in processing the spectrotemporal and the spatial content of sound; however, how these features are represented together is unclear. We recorded directly from human subjects implanted bilaterally with depth electrodes in core auditory areas as they listened to speech from different directions. We found local and joint selectivity to spatial and spectrotemporal speech features, where the spatial and spectrotemporal features are organized independently of each other. This representation enables successful decoding of both spatial and phonetic information. Furthermore, we found that the location of the speaker does not change the spectrotemporal tuning of the electrodes but, rather, modulates their mean response level. Our findings contribute to defining the functional organization of responses in the human auditory cortex, with implications for more accurate neurophysiological models of speech processing.
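The last finding suggests a simple multiplicative picture: an electrode's spectrotemporal receptive field (STRF) stays fixed, and speaker location only scales the response level. A minimal sketch of that picture, with hypothetical shapes and names, not the study's actual model:

```python
# Illustrative gain-modulation model: fixed spectrotemporal tuning, location-
# dependent scaling of the overall response. Shapes and names are assumptions.
import numpy as np

def predicted_response(spectrogram, strf, location_gain):
    """spectrogram : (n_freq, n_time); strf : (n_freq, n_lags); location_gain : scalar."""
    n_freq, n_lags = strf.shape
    n_time = spectrogram.shape[1]
    drive = np.zeros(n_time)
    for t in range(n_lags, n_time):
        # Linear spectrotemporal filtering of the recent stimulus history.
        drive[t] = np.sum(strf * spectrogram[:, t - n_lags:t])
    # Location changes the mean response level, not the tuning itself.
    return location_gain * drive
```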
Saccadic modulation of neural excitability in auditory areas of the neocortex
In natural "active" vision, humans and other primates use eye movements (saccades) to sample bits of information from visual scenes. In the visual cortex, non-retinal signals linked to saccades shift visual cortical neurons into a high excitability state as each saccade ends. The extent of this saccadic modulation outside of the visual system is unknown. Here, we show that during natural viewing, saccades modulate excitability in numerous auditory cortical areas with a temporal pattern complementary to that seen in visual areas. Control somatosensory cortical recordings indicate that the temporal pattern is unique to auditory areas. Bidirectional functional connectivity patterns suggest that these effects may arise from regions involved in saccade generation. We propose that by using saccadic signals to yoke excitability states in auditory areas to those in visual areas, the brain can improve information processing in complex natural settings.
Corticocortical evoked potentials reveal projectors and integrators in human brain networks.
The cerebral cortex is composed of subregions whose functional specialization is largely determined by their incoming and outgoing connections with each other. In the present study, we asked which cortical regions can exert the greatest influence over other regions and the cortical network as a whole. Previous research on this question has relied on coarse anatomy (mapping large fiber pathways) or functional connectivity (mapping inter-regional statistical dependencies in ongoing activity). Here we combined direct electrical stimulation with recordings from the cortical surface to provide a novel insight into directed, inter-regional influence within the cerebral cortex of awake humans. These networks of directed interaction were reproducible across strength thresholds and across subjects. Directed network properties included (1) a decrease in the reciprocity of connections with distance; (2) major projector nodes (sources of influence) in peri-Rolandic cortex and in posterior, basal, and polar regions of the temporal lobe; and (3) major receiver nodes (receivers of influence) in anterolateral frontal, superior parietal, and superior temporal regions. Connectivity maps derived from electrical stimulation and from resting electrocorticography (ECoG) correlations showed similar spatial distributions for the same source node. However, higher-level network topology analysis revealed differences between electrical stimulation and ECoG that were partially related to the reciprocity of connections. Together, these findings inform our understanding of large-scale corticocortical influence as well as the interpretation of functional connectivity networks.
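A minimal sketch of how the directed-network measures named above could be computed from a stimulation-evoked connectivity matrix; the binary adjacency representation, distance matrix, and bin edges are illustrative assumptions rather than the study's exact analysis.

```python
# Directed-network measures on a binary stimulation-evoked connectivity matrix A,
# where A[i, j] = 1 if stimulating region i evokes a response in region j.
import numpy as np

def reciprocity_by_distance(A, dist, edges):
    """Fraction of connections that are reciprocated, binned by inter-region distance.

    A : (n, n) binary adjacency; dist : (n, n) inter-region distances; edges : bin edges.
    """
    recip = np.logical_and(A, A.T)          # edges present in both directions
    frac = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.logical_and(A, (dist >= lo) & (dist < hi))
        frac.append(recip[mask].mean() if mask.any() else np.nan)
    return np.array(frac)

def projector_and_receiver_scores(A):
    """Out-degree identifies projector nodes; in-degree identifies receiver nodes."""
    return A.sum(axis=1), A.sum(axis=0)
```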
Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception
Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of “neuro-steered” hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS) in which the information about the attended speech, as decoded from the subject’s brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a strong candidate for neuro-steered hearing-assistive devices.
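One way to picture the BISS idea is a separation front-end whose input includes the brain-decoded estimate of the attended speech envelope. The sketch below is an assumption-laden illustration (layer sizes, masking approach, and names are invented), not the authors' architecture.

```python
# Hypothetical separation front-end conditioned on a brain-decoded envelope.
import torch
import torch.nn as nn

class EnvelopeConditionedSeparator(nn.Module):
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        # Mixture spectrogram frames plus one decoded-envelope value per frame.
        self.rnn = nn.LSTM(n_freq + 1, hidden, num_layers=2, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, mixture_spec, decoded_envelope):
        """mixture_spec: (batch, time, n_freq); decoded_envelope: (batch, time)."""
        x = torch.cat([mixture_spec, decoded_envelope.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)
        # Time-frequency mask that emphasizes the attended talker.
        return self.mask(h) * mixture_spec
```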
Decoding neural activity in sulcal and white matter areas of the brain to accurately predict individual finger movement and tactile stimuli of the human hand
Millions of people worldwide suffer motor or sensory impairment due to stroke, spinal cord injury, multiple sclerosis, traumatic brain injury, diabetes, and motor neuron diseases such as ALS (amyotrophic lateral sclerosis). A brain-computer interface (BCI), which links the brain directly to a computer, offers a new way to study the brain and potentially restore impairments in patients living with these debilitating conditions. One of the challenges currently facing BCI technology, however, is to minimize surgical risk while maintaining efficacy. Minimally invasive techniques, such as stereoelectroencephalography (SEEG), have become more widely used in clinical applications in epilepsy patients since they can lead to fewer complications. SEEG depth electrodes also give access to sulcal and white matter areas of the brain but have not been widely studied in brain-computer interfaces. Here we show the first demonstration of decoding sulcal and subcortical activity related to both movement and tactile sensation in the human hand. Furthermore, we have compared decoding performance in SEEG-based depth recordings versus those obtained with electrocorticography (ECoG) electrodes placed on gyri. Initial poor decoding performance, together with the observation that most neural modulation patterns varied in amplitude trial-to-trial and were transient (significantly shorter than the sustained finger movements studied), led to the development of a feature selection method based on a repeatability metric using temporal correlation. An algorithm based on temporal correlation was developed to isolate features that consistently repeated (required for accurate decoding) and possessed information content related to movement or touch-related stimuli. We subsequently used these features, along with deep learning methods, to automatically classify various motor and sensory events for individual fingers with high accuracy. Repeating features were found in sulcal, gyral, and white matter areas and were predominantly phasic or phasic-tonic across a wide frequency range for both high-density (HD) ECoG and SEEG recordings. These findings motivated the use of long short-term memory (LSTM) recurrent neural networks (RNNs), which are well suited to handling transient input features. Combining temporal correlation-based feature selection with LSTMs yielded decoding accuracies of up to 92.04 ± 1.51% for hand movements, up to 91.69 ± 0.49% for individual finger movements, and up to 83.49 ± 0.72% for focal tactile stimuli to individual finger pads, while using a relatively small number of SEEG electrodes. These findings may lead to a new class of minimally invasive brain-computer interface systems, increasing their applicability to a wide variety of conditions.
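A minimal sketch in the spirit of the temporal-correlation repeatability metric described above: keep features whose trial-by-trial time courses correlate consistently across repetitions of the same event. The threshold, shapes, and function names are illustrative assumptions, not the paper's implementation.

```python
# Repeatability-based feature selection: retain features whose responses repeat
# consistently across trials, then feed them to a sequence classifier (e.g. LSTM).
import numpy as np

def repeatability(trials):
    """trials : (n_trials, n_timepoints) responses of one feature (channel x band).

    Returns the mean pairwise correlation of the time course across trials.
    """
    n = trials.shape[0]
    r = np.corrcoef(trials)                  # (n_trials, n_trials) correlation matrix
    off_diag = r[np.triu_indices(n, k=1)]    # unique trial pairs
    return float(np.nanmean(off_diag))

def select_repeatable_features(feature_trials, threshold=0.3):
    """feature_trials : (n_features, n_trials, n_timepoints). Keep repeatable features."""
    scores = np.array([repeatability(f) for f in feature_trials])
    return np.where(scores > threshold)[0], scores

# The retained features would then be passed to a recurrent classifier such as an
# LSTM, which is suited to the transient, phasic responses described above.
```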
Spatiotemporal structure of intracranial electric fields induced by transcranial electric stimulation in humans and nonhuman primates
Transcranial electric stimulation (TES) is an emerging technique developed to non-invasively modulate brain function. However, the spatiotemporal distribution of the intracranial electric fields induced by TES remains poorly understood. In particular, it is unclear how much current actually reaches the brain, and how it distributes across the brain. Lack of this basic information precludes a firm mechanistic understanding of TES effects. In this study we directly measure the spatial and temporal characteristics of the electric field generated by TES using stereotactic EEG (s-EEG) electrode arrays implanted in cebus monkeys and surgical epilepsy patients. We found a small frequency-dependent decrease (10%) in the magnitude of TES-induced potentials and negligible phase shifts over space. Electric field strengths were strongest in superficial brain regions, with maximum values of about 0.5 mV/mm. Our results provide crucial information about the underlying biophysics of TES applications in humans and inform the optimization and design of TES stimulation protocols. In addition, our findings have broad implications concerning electric field propagation in non-invasive recording techniques such as EEG/MEG.
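A back-of-the-envelope sketch of how an intracranial field strength on the order of the ~0.5 mV/mm maximum reported above can be estimated from recorded potentials: the projected field along an electrode is the potential difference between neighboring contacts divided by their separation. The numbers in the example are made up for illustration.

```python
# Field strength along a depth electrode from potentials at adjacent contacts.
import numpy as np

def field_strength_mV_per_mm(potentials_mV, positions_mm):
    """Stimulation-locked potentials at successive contacts -> |E| per segment."""
    dv = np.diff(potentials_mV)
    dx = np.diff(positions_mm)
    return np.abs(dv / dx)

# Example: contacts 5 mm apart recording 2.0 and 0.5 mV of stimulation-locked
# potential give |E| = 1.5 mV / 5 mm = 0.3 mV/mm.
print(field_strength_mV_per_mm(np.array([2.0, 0.5]), np.array([0.0, 5.0])))
```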