
    Central auditory neurons have composite receptive fields

    High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, the European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in central olfactory neurons of mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.
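The sparseness-plus-normalization learning rule can be illustrated with a minimal numpy sketch. This is not the authors' network: the data are random stand-ins for spectrogram patches of starling song, and the update rule, normalization constant, and sparseness quantile are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for starling-song spectrogram patches (the real
# stimuli are not available here): 500 patches, 64 dimensions each.
X = rng.standard_normal((500, 64))
X /= np.linalg.norm(X, axis=1, keepdims=True)

n_units, lr, lam = 32, 0.05, 0.1
W = 0.1 * rng.standard_normal((n_units, 64))

for epoch in range(20):
    for x in X:
        a = W @ x                                   # linear responses
        a = a / (lam + np.sqrt(np.mean(a ** 2)))    # divisive normalization
        cutoff = np.quantile(np.abs(a), 0.8)
        a[np.abs(a) < cutoff] = 0.0                 # sparseness: keep top 20%
        recon = a @ W / n_units
        W += lr * np.outer(a, x - recon)            # Hebbian-style update
        W /= np.linalg.norm(W, axis=1, keepdims=True)
```

After training, each row of `W` is one unit's receptive field; with real spectrogram patches, such units can develop multiple distinct features per field.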

    Computational Analysis of Functional Imaging in the Primary Auditory Cortex

    Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Historically, discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging with functional imaging, as such studies have failed to reproduce results seen in electrophysiology. One difference might result from the way most functional imaging studies record the summed activity of multiple neurons. To test this effect, virtual mapping experiments were run to gauge the ability of functional imaging to accurately estimate underlying maps. The experiments suggest that spatial averaging improves the ability to estimate maps with low spatial frequencies or with large amounts of cortical variability, at the cost of decreasing the spatial resolution of the images. Despite the decrease in resolution, the results suggest that current functional imaging studies may be able to depict maps with high spatial frequencies better than electrophysiology can; therefore, the difficulties in recapitulating electrophysiology experiments with imaging may stem from underlying neural circuitry. One possible reason may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. A small percentage of neurons have a response type whose receptive field size increases with higher stimulus intensities, but they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation.
To evaluate the potential influence of neuronal subpopulations upon functional images of the primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite one subpopulation of neurons over the others have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments.
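The effect of spatial averaging described above can be illustrated with a toy virtual-imaging experiment: a smooth one-dimensional tonotopic gradient with neuron-to-neuron scatter is blurred by a Gaussian point-spread function, which improves recovery of the low-spatial-frequency map at the cost of resolution. All parameters (map shape, scatter, blur width) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D "cortical sheet": a smooth tonotopic gradient plus
# neuron-to-neuron scatter. All values are illustrative.
n = 100
true_map = np.linspace(1.0, 8.0, n)                  # best frequency (kHz)
neurons = true_map + rng.normal(0.0, 1.5, size=n)    # per-neuron scatter

def gaussian_blur(x, sigma):
    """Spatial averaging by an imaging point-spread function."""
    idx = np.arange(len(x))
    k = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
    k /= k.sum(axis=1, keepdims=True)
    return k @ x

imaged = gaussian_blur(neurons, sigma=5.0)           # "functional image"

# Averaging suppresses scatter, so the imaged map tracks the underlying
# low-spatial-frequency gradient better than the raw neuronal map does.
r_raw = np.corrcoef(neurons, true_map)[0, 1]
r_imaged = np.corrcoef(imaged, true_map)[0, 1]
```

For a map with high spatial frequency, the same blur would instead smear away the structure, which is the trade-off the virtual mapping experiments quantify.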

    Disruption of large-scale neuronal activity patterns in Alzheimer’s disease models

    The overexpression and aggregation of tau is observed in a class of neurodegenerative diseases termed tauopathies. Individuals with tauopathy, and animal models of tauopathy, show a loss of behavioural and cognitive function, but the neural underpinnings of these symptoms are poorly understood. We investigated changes in neural function in the Tg4510 model of tauopathy in primary visual cortex (V1) - an area where the relationship between stimulus features, single-unit responses, and the circuits and mechanisms underlying them is relatively well characterised - and in CA1. We conducted chronic awake head-fixed recordings in V1 of 5-6.5-month-old mice, presenting a variety of visual stimuli, including drifting grating stimuli that varied across feature dimensions such as orientation, contrast, or size. Mice were also trained to run in a virtual reality environment, either closed-loop, open-loop (playback), or in the dark. Tau+ and Tau- mice displayed clear differences in the oscillatory local field potentials in V1 and CA1; notably, Tau+ mice showed a large decrease in high-frequency power as well as minor changes in stimulus-evoked power and power in relation to running speed. Single-unit responses in V1 of Tau+ mice were also altered. Tau+ mice showed greater orientation selectivity and suppression following orientation adaptation, and improved contrast tuning, but worse selectivity in response to sparse noise stimuli. Responses to other stimulus features, such as spatial frequency and size, were unchanged between the two groups. In conclusion, tauopathy in the Tg4510 mouse has clear effects on information processing in the visual cortex and in CA1. It did not act through a non-selective decrease in responsiveness, but instead enhanced some types of processing, such as orientation selectivity, while disrupting others, such as responses to sparse noise.
These selective effects on neural function may reflect differential effects of tauopathy on different cell classes or brain areas.

    Membrane resonance enables stable and robust gamma oscillations

    Neuronal mechanisms underlying beta/gamma oscillations (20-80 Hz) are not completely understood. Here, we show that in vivo beta/gamma oscillations in the cat visual cortex sometimes exhibit remarkably stable frequency even when inputs fluctuate dramatically. Enhanced frequency stability is associated with stronger oscillations measured in individual units and larger power in the local field potential. Simulations of neuronal circuitry demonstrate that membrane properties of inhibitory interneurons strongly determine the characteristics of emergent oscillations. Exploration of networks containing either integrator or resonator inhibitory interneurons revealed that: (i) resonance, as opposed to integration, promotes robust oscillations with large power and stable frequency via a mechanism called RING (Resonance INduced Gamma); resonance favors synchronization by reducing phase delays between interneurons and imposes bounds on oscillation cycle duration; (ii) stability of frequency and robustness of the oscillation also depend on the relative timing of excitatory and inhibitory volleys within the oscillation cycle; (iii) RING can reproduce characteristics of both Pyramidal INterneuron Gamma (PING) and INterneuron Gamma (ING), transcending such classifications; (iv) in RING, robust gamma oscillations are promoted by slow inputs but impaired by fast ones. These results suggest that interneuronal membrane resonance can be an important ingredient for the generation of robust gamma oscillations with stable frequency.
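The contrast between integrator and resonator interneurons can be sketched at the level of subthreshold membrane impedance: an integrator behaves like a low-pass RC filter, while a resonator has a band-pass peak that can sit in the gamma range. The generic second-order transfer function and all parameter values below are illustrations, not the interneuron models used in the study's simulations.

```python
import numpy as np

# Subthreshold impedance magnitude |Z(f)| for two caricature interneurons:
# an "integrator" (leaky RC low-pass) and a "resonator" (second-order
# band-pass). Parameter values are illustrative only.
f = np.linspace(1.0, 100.0, 200)             # Hz
w = 2 * np.pi * f

tau = 0.02                                   # 20 ms membrane time constant
z_integrator = 1.0 / np.sqrt(1.0 + (w * tau) ** 2)

w0, q = 2 * np.pi * 40.0, 2.0                # resonance near 40 Hz, quality 2
z_resonator = 1.0 / np.sqrt((1.0 - (w / w0) ** 2) ** 2 + (w / (q * w0)) ** 2)

f_peak = f[np.argmax(z_resonator)]           # band-pass peak in the gamma range
```

The integrator's impedance falls monotonically with frequency, whereas the resonator's peak preferentially amplifies inputs near the network's oscillation frequency, which is the ingredient RING exploits.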

    Detecting cells and cellular activity from two-photon calcium imaging data

    To understand how networks of neurons process information, it is essential to monitor their activity in living tissue. Information is transmitted between neurons by electrochemical impulses called action potentials or spikes. Calcium-sensitive fluorescent probes, which emit a characteristic pulse of fluorescence in response to a spike, are used to visualise spiking activity. Combined with two-photon microscopy, they enable the spiking activity of thousands of neurons to be monitored simultaneously at single-cell and single-spike resolution. In this thesis, we develop signal processing tools for detecting cells and cellular activity from two-photon calcium imaging data. Firstly, we present a method to detect the locations of cells within a video. In our framework, an active contour evolves guided by a model-based cost function to identify a cell boundary. We demonstrate that this method, which includes no assumptions about typical cell shape or temporal activity, is able to detect cells with varied properties from real imaging data. Once the location of a cell has been identified, its spiking activity must be inferred from the fluorescence signal. We present a metric that quantifies the similarity between inferred spikes and the ground truth. The proposed metric assesses the similarity of pulse trains obtained from convolution of the spike trains with a smoothing pulse, whose width is derived from the statistics of the data. We demonstrate that the proposed metric is more sensitive than existing metrics to the temporal and rate precision of inferred spike trains. Finally, we extend an existing framework for spike inference to accommodate a wider class of fluorescence signals. Our method, which is based on finite rate of innovation theory, exploits the known parametric structure of the signal to infer the unknown spike times. On in vitro imaging data, we demonstrate that the updated algorithm outperforms a state-of-the-art approach.
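The spike-train similarity metric described above (convolve each train with a smoothing pulse, then compare the resulting pulse trains) can be sketched as follows. The Gaussian pulse shape, the cosine-similarity comparison, and the fixed width are assumptions for illustration; the thesis derives the pulse width from the statistics of the data.

```python
import numpy as np

def pulse_similarity(spikes_a, spikes_b, n_bins, width):
    """Cosine similarity of two spike trains after convolving each with a
    Gaussian smoothing pulse (pulse shape and width are assumed here)."""
    trains = np.zeros((2, n_bins))
    trains[0, spikes_a] = 1.0                # binary spike trains
    trains[1, spikes_b] = 1.0
    support = np.arange(-3 * width, 3 * width + 1)
    pulse = np.exp(-0.5 * (support / width) ** 2)
    a = np.convolve(trains[0], pulse, mode="same")
    b = np.convolve(trains[1], pulse, mode="same")
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Small jitter is tolerated by the smoothing pulse; a large systematic
# shift is penalized.
s_close = pulse_similarity([10, 50, 90], [11, 49, 91], n_bins=120, width=3)
s_far = pulse_similarity([10, 50, 90], [25, 65, 105], n_bins=120, width=3)
```

The pulse width sets the timescale of tolerated jitter, which is why deriving it from the data matters for a fair comparison of inference algorithms.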

    Efficient Solutions to High-Dimensional and Nonlinear Neural Inverse Problems

    Development of various data acquisition techniques has enabled researchers to study the brain as a complex system and gain insight into the high-level functions performed by different regions of the brain. These data are typically high-dimensional as they pertain to hundreds of sensors and span hours of recording. In many experiments involving sensory or cognitive tasks, the underlying cortical activity admits sparse and structured representations in the temporal, spatial, or spectral domains, or combinations thereof. However, current neural data analysis approaches do not exploit this sparsity to manage the high dimensionality. Also, many existing approaches suffer from high bias due to the heavy usage of linear models and estimation techniques, given that cortical activity is known to exhibit various degrees of non-linearity. Finally, the majority of current methods in computational neuroscience are tailored for static estimation in batch-mode and offline settings, and with the advancement of brain-computer interface technologies, these methods need to be extended to capture neural dynamics in a real-time fashion. The objective of this dissertation is to devise novel algorithms for real-time estimation settings and to incorporate the sparsity and non-linear properties of brain activity for providing efficient solutions to neural inverse problems involving high-dimensional data. Along the same lines, our goal is to provide efficient representations of these high-dimensional data that are easy to interpret and assess statistically. First, we consider the problem of spectral estimation from binary neuronal spiking data. Due to the non-linearities involved in spiking dynamics, classical spectral representation methods fail to capture the spectral properties of these data.
To address this challenge, we integrate point process theory, sparse estimation, and non-linear signal processing methods to propose a spectral representation modeling and estimation framework for spiking data. Our model takes into account the sparse spectral structure of spiking data, which is crucial in the analysis of electrophysiology data in conditions such as sleep and anesthesia. We validate the performance of our spectral estimation framework using simulated spiking data as well as multi-unit spike recordings from human subjects under general anesthesia. Next, we tackle the problem of real-time auditory attention decoding from electroencephalography (EEG) or magnetoencephalography (MEG) data in a competing-speaker environment. Most existing algorithms for this purpose operate offline and require access to multiple trials for a reliable performance; hence, they are not suitable for real-time applications. To address these shortcomings, we integrate techniques from state-space modeling, Bayesian filtering, and sparse estimation to propose a real-time algorithm for attention decoding that provides robust, statistically interpretable, and dynamic measures of the attentional state of the listener. We validate the performance of our proposed algorithm using simulated and experimentally-recorded M/EEG data. Our analysis reveals that our algorithms perform comparably to state-of-the-art offline attention decoding techniques, while providing significant computational savings. Finally, we study the problem of dynamic estimation of Temporal Response Functions (TRFs) for analyzing neural response to auditory stimuli. A TRF can be viewed as the impulse response of the brain in a linear stimulus-response model. Over the past few years, TRF analysis has provided researchers with great insight into auditory processing, especially under competing-speaker environments.
However, most existing results correspond to static TRF estimates and do not examine TRF dynamics, especially in multi-speaker environments with attentional modulation. Using state-space models, we provide a framework for a robust and comprehensive dynamic analysis of TRFs using single trial data. TRF components at specific lags may exhibit peaks that arise, persist, and disappear over time according to the attentional state of the listener. To account for this specific behavior in our model, we consider a state-space model with a Gaussian mixture process noise, and devise an algorithm to efficiently estimate the process noise parameters from the recorded M/EEG data. Application to simulated and recorded MEG data shows that the proposed state-space modeling and inference framework can reliably capture the dynamic changes in the TRF, which can in turn improve our access to the attentional state in competing-speaker environments.
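The dynamic-TRF idea can be sketched with a heavily simplified state-space model: a random-walk state for the filter coefficients, tracked by a standard Kalman filter, with a simulated "attention switch" halfway through. Note that the dissertation uses a Gaussian mixture process noise; the single-Gaussian random walk, the filter shapes, and the noise levels here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified dynamic TRF tracking: the true 5-lag filter switches halfway
# through (mimicking an attention switch); a Kalman filter with a
# random-walk state model tracks it from single-trial-like data.
L, T = 5, 400
stim = rng.standard_normal(T + L)
h1 = np.array([0.0, 1.0, 0.5, 0.0, 0.0])    # "attend speaker 1" TRF
h2 = np.array([0.0, 0.0, 0.0, 1.0, 0.5])    # "attend speaker 2" TRF

x_hat = np.zeros(L)                         # TRF estimate (the state)
P = np.eye(L)                               # state covariance
Q, R = 1e-3 * np.eye(L), 0.01               # process / observation noise
estimates = []
for t in range(T):
    u = stim[t:t + L][::-1]                 # lagged stimulus vector
    h = h1 if t < T // 2 else h2
    y = h @ u + 0.1 * rng.standard_normal() # simulated neural response
    P = P + Q                               # predict (random-walk model)
    k = P @ u / (u @ P @ u + R)             # Kalman gain
    x_hat = x_hat + k * (y - u @ x_hat)     # measurement update
    P = P - np.outer(k, u) @ P
    estimates.append(x_hat.copy())

err_early = np.linalg.norm(estimates[T // 2 - 1] - h1)
err_late = np.linalg.norm(estimates[-1] - h2)
```

A Gaussian mixture process noise, as in the dissertation, would let the state make occasional large jumps (peaks appearing or vanishing) while staying nearly constant otherwise, which a single Gaussian cannot capture well.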

    The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude, both of which are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match the tuning characteristics of neurons in the mammalian auditory cortex well. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
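The first-layer idea of separating amplitude and phase, and the relevance of the interaural phase difference (IPD), can be illustrated with an FFT-based analytic-signal sketch on a synthetic pure tone presented with an interaural time delay. The tone frequency, delay, and sampling rate are illustrative; the model itself learned complex-valued basis functions rather than using the analytic signal.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal, separating amplitude and phase."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

# A pure tone presented with an interaural time delay (values illustrative).
fs, f0, itd = 16000, 500.0, 2e-4             # Hz, Hz, 0.2 ms delay
t = np.arange(1024) / fs
left = np.sin(2 * np.pi * f0 * t)
right = np.sin(2 * np.pi * f0 * (t - itd))

al, ar = analytic_signal(left), analytic_signal(right)
ipd = np.angle(al * np.conj(ar))             # interaural phase difference
ipd_est = float(np.median(ipd))
# For a pure tone, IPD = 2*pi*f0*itd (about 0.63 rad here).
```

The phase difference carries the spatial cue while the amplitude envelope carries level cues, which is why a joint amplitude/IPD representation in the second layer can become spatially selective.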

    Bayesian Modeling and Estimation Techniques for the Analysis of Neuroimaging Data

    Brain function is hallmarked by its adaptivity and robustness, arising from underlying neural activity that admits well-structured representations in the temporal, spatial, or spectral domains. While neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) can record rapid neural dynamics at high temporal resolutions, they face several signal processing challenges that hinder their full utilization in capturing these characteristics of neural activity. The objective of this dissertation is to devise statistical modeling and estimation methodologies that account for the dynamic and structured representations of neural activity and to demonstrate their utility in application to experimentally-recorded data. The first part of this dissertation concerns spectral analysis of neural data. In order to capture the non-stationarities involved in neural oscillations, we integrate multitaper spectral analysis and state-space modeling in a Bayesian estimation setting. We also present a multitaper spectral analysis method tailored for spike trains that captures the non-linearities involved in neuronal spiking. We apply our proposed algorithms to both EEG and spike recordings, which reveal significant gains in spectral resolution and noise reduction. In the second part, we investigate cortical encoding of speech as manifested in MEG responses. These responses are often modeled via a linear filter, referred to as the temporal response function (TRF). While the TRFs estimated from the sensor-level MEG data have been widely studied, their cortical origins are not fully understood. We define the new notion of Neuro-Current Response Functions (NCRFs) for simultaneously determining the TRFs and their cortical distribution. We develop an efficient algorithm for NCRF estimation and apply it to MEG data, which provides new insights into the cortical dynamics underlying speech processing.
Finally, in the third part, we consider the inference of Granger causal (GC) influences in high-dimensional time series models with sparse coupling. We analyze a canonical sparse bivariate autoregressive model and define a new statistic for inferring GC influences, which we refer to as the LASSO-based Granger Causal (LGC) statistic. We establish non-asymptotic guarantees for robust identification of GC influences via the LGC statistic. Applications to simulated and real data demonstrate the utility of the LGC statistic in robust GC identification.
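A LASSO-based Granger-causality check in the spirit of the LGC statistic can be sketched on a bivariate AR(1) system in which x drives y but not vice versa. The ISTA solver, penalty weight, and AR coefficients are illustrative assumptions; the actual LGC statistic and its non-asymptotic guarantees are developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Bivariate AR(1) with one-way coupling: x drives y, y does not drive x.
# Coefficients and penalty are illustrative.
T = 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def lasso_ar(target, sources, lam=0.05, iters=500):
    """Lag-1 AR coefficients of `target` on `sources`, fit with an
    l1 penalty via ISTA (proximal gradient descent)."""
    A = np.column_stack([s[:-1] for s in sources])
    b = target[1:]
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    thresh = step * lam * len(b)
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        g = w - step * (A.T @ (A @ w - b))
        w = np.sign(g) * np.maximum(np.abs(g) - thresh, 0.0)
    return w

w_y = lasso_ar(y, [y, x])   # cross-term on x should survive the penalty
w_x = lasso_ar(x, [x, y])   # cross-term on y should shrink to (near) zero
```

The surviving cross-coefficient in `w_y` versus the vanishing one in `w_x` reflects the one-way GC influence; an LGC-style statistic formalizes this comparison with guarantees on false discoveries.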