    Neural Circuit Inference from Function to Structure

    Advances in technology are opening new windows on the structural connectivity and functional dynamics of brain circuits. Quantitative frameworks are needed that integrate these data from anatomy and physiology. Here, we present a modeling approach that creates such a link. The goal is to infer the structure of a neural circuit from sparse neural recordings, using partial knowledge of its anatomy as a regularizing constraint. We recorded visual responses from the output neurons of the retina, the ganglion cells. We then generated a systematic sequence of circuit models that represents retinal neurons and connections and fitted them to the experimental data. The optimal models faithfully recapitulated the ganglion cell outputs. More importantly, they made predictions about dynamics and connectivity among unobserved neurons internal to the circuit, and these were subsequently confirmed by experiment. This circuit inference framework promises to facilitate the integration and understanding of big data in neuroscience.
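
    As a concrete illustration of the inference strategy (our toy sketch, not the paper's actual model or data), the snippet below fits a subunit circuit model to a simulated ganglion-cell firing rate while a penalty term pulls the inferred connectivity toward an assumed anatomical prior.

```python
# Toy sketch: infer subunit -> ganglion-cell weights from a recorded rate,
# with an anatomical prior acting as a regularizer. All data are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, S = 2000, 8                               # time bins, candidate bipolar subunits
stimulus = rng.standard_normal((T, S))       # stimulus projected onto subunit receptive fields

def model_rate(w, stimulus):
    """Subunit rectification -> weighted pooling -> soft output nonlinearity."""
    subunit_out = np.maximum(stimulus, 0.0)  # bipolar-cell-like rectification
    return np.log1p(np.exp(subunit_out @ w)) # softplus output nonlinearity

# Synthetic "recorded" ganglion-cell rate from hidden ground-truth weights
w_true = np.array([0.9, 0.8, 0.7, 0.1, 0.0, 0.0, 0.0, 0.0])
rate_obs = model_rate(w_true, stimulus) + 0.05 * rng.standard_normal(T)

w_prior = np.full(S, 0.4)   # assumed anatomical prior: roughly uniform connectivity
lam = 0.5                   # strength of the anatomical regularizer

def loss(w):
    err = model_rate(w, stimulus) - rate_obs
    return np.mean(err ** 2) + lam * np.mean((w - w_prior) ** 2)

w_fit = minimize(loss, x0=w_prior, method="L-BFGS-B").x
print("inferred subunit weights:", np.round(w_fit, 2))
```

    Sweeping `lam` trades fidelity to the recordings against consistency with the anatomical prior, which is the role the abstract assigns to partial anatomical knowledge.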

    Towards a state-space geometry of neural responses to natural scenes: A steady-state approach

    Our understanding of information processing by the mammalian visual system has come through a variety of techniques, ranging from psychophysics and fMRI to single-unit recording and EEG. Each technique provides unique insights into the processing framework of the early visual system. Here, we focus on the nature of the information that is carried by steady-state visual evoked potentials (SSVEPs). To study the information provided by SSVEPs, we presented human participants with a population of natural scenes and measured the relative SSVEP response. Rather than focus on particular features of this signal, we focused on the full state-space of possible responses and investigated how the evoked responses are mapped onto this space. Our results show that it is possible to map the relatively high-dimensional signal carried by SSVEPs onto a 2-dimensional space with little loss. We also show that a simple biologically plausible model can account for a high proportion of the explainable variance (~73%) in that space. Finally, we describe a technique for measuring the mutual information that is available about images from SSVEPs. The techniques introduced here represent a new approach to understanding the nature of the information carried by SSVEPs. Crucially, this approach is general and can provide a means of comparing results across different neural recording methods. Altogether, our study sheds light on the encoding principles of early vision and provides a much-needed reference point for understanding subsequent transformations of the early visual response space into deeper knowledge structures that link different visual environments.
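
    The mapping result can be illustrated with a small simulation (a hypothetical sketch, not the study's data or exact method): project per-image response vectors into two dimensions with PCA and measure how much variance the 2-D embedding retains.

```python
# Hypothetical sketch of the state-space idea on simulated data: embed per-image
# SSVEP response vectors into two dimensions and check how much variance survives.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_images, n_features = 100, 64           # e.g., channels x harmonics per image

# Simulate responses that live near a 2-D manifold plus measurement noise
latent = rng.standard_normal((n_images, 2))
mixing = rng.standard_normal((2, n_features))
responses = latent @ mixing + 0.3 * rng.standard_normal((n_images, n_features))

pca = PCA(n_components=2).fit(responses)
embedding = pca.transform(responses)     # 2-D state-space coordinates per image
print(f"variance retained in 2-D: {pca.explained_variance_ratio_.sum():.1%}")
```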

    Hierarchical temporal prediction captures motion processing along the visual pathway

    Visual neurons respond selectively to features that become increasingly complex from the eyes to the cortex. Retinal neurons prefer flashing spots of light, primary visual cortical (V1) neurons prefer moving bars, and those in higher cortical areas favor complex features like moving textures. Previously, we showed that V1 simple cell tuning can be accounted for by a basic model implementing temporal prediction – representing features that predict future sensory input from past input (Singer et al., 2018). Here, we show that hierarchical application of temporal prediction can capture how tuning properties change across at least two levels of the visual system. This suggests that the brain does not efficiently represent all incoming information; instead, it selectively represents sensory inputs that help in predicting the future. When applied hierarchically, temporal prediction extracts time-varying features that depend on increasingly high-level statistics of the sensory input.
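
    A deliberately simplified linear version of the hierarchical scheme (our illustration; the published model is a trained neural network) looks like this: each stage learns a compact code of its input's recent past that predicts the input's next frame, and the second stage is trained on the first stage's feature time series, so its features reflect higher-order temporal statistics.

```python
# Simplified linear sketch of hierarchical temporal prediction (our
# illustration, not the published network).
import numpy as np

def temporal_prediction_stage(x, k_past=5, rank=8, ridge=1e-2):
    """Low-rank linear predictor: features of the last k_past frames of x
    that predict the next frame (the rank bottleneck forces a compact code)."""
    T = x.shape[0]
    past = np.stack([x[t - k_past:t].ravel() for t in range(k_past, T - 1)])
    future = x[k_past:T - 1]                  # frame right after each past window
    # Full ridge solution, then a crude truncation to rank `rank`
    W = np.linalg.solve(past.T @ past + ridge * np.eye(past.shape[1]),
                        past.T @ future)
    U, s, _ = np.linalg.svd(W, full_matrices=False)
    encoder = U[:, :rank] * s[:rank]          # maps a past window to features
    return past @ encoder

x = np.random.default_rng(2).standard_normal((600, 16))  # stand-in for movie pixels
feats1 = temporal_prediction_stage(x)        # stage 1: predicts pixel frames
feats2 = temporal_prediction_stage(feats1)   # stage 2: predicts stage-1 features
```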

    A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding

    The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models depends on how the model parameters are estimated. A number of model estimation methods have been published, along with a variety of datasets. It is currently unclear whether any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
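
    The backward-model logic can be sketched as follows (a hedged illustration on simulated stand-in data, not the paper's dataset or exact pipeline): ridge-regress the attended envelope onto time-lagged EEG, then label as attended the talker whose envelope correlates best with the reconstruction.

```python
# Sketch of a ridge-regularized backward model for attention decoding.
# Data here are simulated stand-ins, not real EEG or speech.
import numpy as np

rng = np.random.default_rng(3)
T, C, L = 5000, 32, 16                       # samples, EEG channels, time lags

env_att = rng.standard_normal(T)             # attended-speech envelope
env_unatt = rng.standard_normal(T)           # unattended-speech envelope
eeg = np.empty((T, C))
for c in range(C):                           # EEG = delayed attended envelope + noise
    eeg[:, c] = np.roll(env_att, c % L) + 2.0 * rng.standard_normal(T)

# Lagged design matrix: each row holds EEG at the current and next L-1 samples
X = np.hstack([np.roll(eeg, -lag, axis=0) for lag in range(L)])[: T - L]
y = env_att[: T - L]

lam = 1e2                                    # ridge parameter (cross-validate in practice)
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
recon = X @ w

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("r(attended)   =", round(corr(recon, env_att[: T - L]), 3))
print("r(unattended) =", round(corr(recon, env_unatt[: T - L]), 3))
```

    Classifying the talker with the higher correlation is the discrimination step the abstract describes; the choice of `lam` is exactly the regularization decision the paper compares across estimation methods.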

    Advancing models of the visual system using biologically plausible unsupervised spiking neural networks

    Spikes are thought to provide a fundamental unit of computation in the nervous system. The retina is known to use the relative timing of spikes to encode visual input, whereas primary visual cortex (V1) exhibits sparse and irregular spiking activity – but what do these different spiking patterns represent about sensory stimuli? To address this question, I set out to model the retina and V1 using a biologically realistic spiking neural network (SNN), exploring the idea that temporal prediction underlies the sensory transformation of natural inputs. Firstly, I trained a recurrently connected SNN of excitatory and inhibitory units to predict the sensory future in natural movies under metabolic-like constraints. This network exhibited V1-like spike statistics, simple and complex cell-like tuning, and - advancing prior studies - key physiological and tuning differences between excitatory and inhibitory neurons. Secondly, I modified this spiking network to model the retina and explore its role in visual processing. I found that the model optimized for efficient prediction captured retina-like receptive fields and - in contrast to previous studies - various retinal phenomena, such as latency coding, response omissions, and motion-tuning properties. Notably, the temporal prediction model also more accurately predicted retinal ganglion cell responses to natural images and movies across various animal species. Lastly, I developed a new method to accelerate the simulation and training of SNNs, obtaining a 10-50 times speedup, with performance on a par with the standard training approach on supervised classification benchmarks and for fitting electrophysiological recordings of cortical neurons. The retina and V1 models lay the foundation for developing normative models of increasing biological realism and link sensory processing to spiking activity, suggesting that temporal prediction is an underlying function of visual processing. This is complemented by a new approach to drastically accelerate computational research using SNNs.
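
    For orientation, the basic spiking unit such models build on can be sketched as a leaky integrate-and-fire neuron (a generic illustration, not the thesis code):

```python
# Generic leaky integrate-and-fire (LIF) layer in discrete time - a sketch of
# the kind of spiking unit such SNN models build on, not the thesis code.
import numpy as np

def simulate_lif(inputs, w, tau=20.0, v_thresh=1.0, dt=1.0):
    """Drive a layer of LIF neurons with weighted input; reset voltage on spike."""
    n = w.shape[1]
    v = np.zeros(n)
    spikes = np.zeros((inputs.shape[0], n))
    decay = np.exp(-dt / tau)                # membrane leak per time step
    for t, x in enumerate(inputs):
        v = decay * v + x @ w                # leaky integration of input current
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = 0.0                       # hard reset after a spike
    return spikes

rng = np.random.default_rng(4)
inp = rng.random((200, 10)) * 0.2            # stand-in for movie-driven input
w = rng.random((10, 5)) * 0.5
spk = simulate_lif(inp, w)
print("mean spikes per step, per neuron:", spk.mean(axis=0))
```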

    Perception and Processing of Pitch and Timbre in Human Cortex

    University of Minnesota Ph.D. dissertation. 2018. Major: Psychology. Advisor: Andrew Oxenham. 1 computer file (PDF); 157 pages.
    Pitch and timbre are integral components of auditory perception, yet how they interact with one another and how they are processed cortically remains enigmatic. Through a series of behavioral studies, neuroimaging, and computational modeling, we investigated these attributes. First, we looked at how variations in one dimension affect our perception of the other. Next, we explored how pitch and timbre are processed in the human cortex, in both a passive listening context and in the presence of attention, using univariate and multivariate analyses. Lastly, we used encoding models to predict cortical responses to timbre using natural orchestral sounds. We found that pitch and timbre interact with each other perceptually, and that musicians and non-musicians are similarly affected by these interactions. Our fMRI studies revealed that, in both passive and active listening conditions, pitch and timbre are processed in largely overlapping regions. However, their patterns of activation are separable, suggesting that their underlying circuitry within these regions is unique. Finally, we found that a five-feature, subjectively derived encoding model could predict a significant portion of the variance in the cortical responses to timbre, suggesting that our processing of timbral dimensions may align with our perceptual categorizations of them. Taken together, these findings help clarify aspects of both our perception and processing of pitch and timbre.
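
    The encoding-model step can be illustrated schematically (simulated data; the five features here are placeholders for the dissertation's subjectively derived set):

```python
# Sketch of the encoding-model logic on simulated data: regress a voxel's
# response onto a handful of timbre features and score held-out variance
# explained. The features are placeholders, not the dissertation's set.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_sounds, n_features = 120, 5                # e.g., five subjective timbre ratings

features = rng.standard_normal((n_sounds, n_features))
true_w = np.array([1.0, -0.5, 0.3, 0.0, 0.2])          # hidden ground truth
voxel = features @ true_w + 0.5 * rng.standard_normal(n_sounds)

model = RidgeCV(alphas=np.logspace(-2, 2, 9))          # regularization chosen by CV
r2 = cross_val_score(model, features, voxel, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")
```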

    Characterizing neural mechanisms of attention-driven speech processing


    Machine Learning As Tool And Theory For Computational Neuroscience

    Computational neuroscience is in the midst of constructing a new framework for understanding the brain based on the ideas and methods of machine learning. This effort has been encouraged, in part, by recent advances in neural network models. It is also driven by a recognition of the complexity of neural computation and the challenges that this poses for neuroscience’s methods. In this dissertation, I first work to describe these problems of complexity that have prompted a shift in focus. In particular, I develop machine learning tools for neurophysiology that help test whether tuning curves and other statistical models in fact capture the meaning of neural activity. Then, taking up a machine learning framework for understanding, I consider theories about how neural computation emerges from experience. Specifically, I develop hypotheses about the potential learning objectives of sensory plasticity, the potential learning algorithms in the brain, and finally the consequences for sensory representations of learning with such algorithms. These hypotheses pull from advances in several areas of machine learning, including optimization, representation learning, and deep learning theory. Each of these subfields has insights for neuroscience, offering up links for a chain of knowledge about how we learn and think. Together, this dissertation helps to further an understanding of the brain through the lens of machine learning.
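
    One version of such a model-adequacy test can be sketched as follows (our illustration of the idea, not the dissertation's actual tools): compare a tuning-curve model's held-out predictions against those of a flexible model of the same variable; a large gap suggests the curve misses structure in the activity.

```python
# Sketch of a tuning-curve stress test on simulated data: if a flexible model
# predicts held-out spike counts much better than the parametric tuning curve,
# the tuning curve does not fully capture the neuron's activity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
theta = rng.uniform(0, 2 * np.pi, 400)                  # stimulus direction per trial
rate = np.exp(np.cos(theta - 1.0)) + 0.3 * np.sin(3 * theta)  # hidden extra structure
spikes = rng.poisson(rate)

# Classic parametric description: cosine tuning in theta
X_cos = np.column_stack([np.cos(theta), np.sin(theta)])
r2_tc = cross_val_score(LinearRegression(), X_cos, spikes, cv=5, scoring="r2").mean()

# Flexible nonparametric model of the same variable
rf = RandomForestRegressor(n_estimators=100, random_state=0)
r2_rf = cross_val_score(rf, theta[:, None], spikes, cv=5, scoring="r2").mean()
print(f"cosine tuning R^2 = {r2_tc:.2f}, flexible R^2 = {r2_rf:.2f}")
```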