
    Feature Selectivity of the Gamma-Band of the Local Field Potential in Primate Primary Visual Cortex

    Extracellular voltage fluctuations (local field potentials, LFPs) reflecting neural mass action are ubiquitous across species and brain regions. Numerous studies have characterized the properties of LFP signals in the cortex to study sensory and motor computations as well as cognitive processes like attention, perception and memory. In addition, their extracranial counterpart – the electroencephalogram – is widely used in clinical applications. However, the link between LFP signals and the underlying activity of local populations of neurons remains largely elusive. Here, we review recent work elucidating the relationship between the spiking activity of local neural populations and LFP signals. We focus on oscillations in the gamma band (30–90 Hz) of the LFP in the primary visual cortex (V1) of the macaque, which dominate during visual stimulation. Since much is known about the properties of single neurons and the cortical architecture in area V1, it provides an excellent opportunity to study the mechanisms underlying the generation of the LFP.
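
    Since the gamma band is defined as a frequency range of the LFP, its power can be estimated by band-pass filtering. Below is a minimal sketch, assuming a sampling rate and a synthetic signal for illustration (not data from the study):

```python
# Minimal sketch: extracting gamma-band (30-90 Hz) power from an LFP trace.
# The sampling rate and the synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of data
# Synthetic LFP: broadband noise plus a 50 Hz gamma oscillation
rng = np.random.default_rng(0)
lfp = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 50 * t)

# 4th-order Butterworth band-pass for the gamma band
b, a = butter(4, [30, 90], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, lfp)

# Band-limited power: mean squared amplitude of the filtered signal
gamma_power = np.mean(gamma ** 2)
print(f"gamma-band power: {gamma_power:.3f}")
```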

    The effect of noise correlations in populations of diversely tuned neurons

    The amount of information encoded by networks of neurons critically depends on the correlation structure of their activity. Neurons with similar stimulus preferences tend to have higher noise correlations than others. In homogeneous populations of neurons, this limited-range correlation structure is highly detrimental to the accuracy of a population code. Therefore, reduced spike-count correlations under attention, after adaptation, or after learning have been interpreted as evidence for a more efficient population code. Here we analyze the role of limited-range correlations in more realistic, heterogeneous population models. We use Fisher information and maximum-likelihood decoding to show that reduced correlations do not necessarily improve encoding accuracy. In fact, in populations with more than a few hundred neurons, increasing the level of limited-range correlations can substantially improve encoding accuracy. We found that this improvement results from a decrease in noise entropy that is associated with increasing correlations when the marginal distributions are unchanged. Surprisingly, for constant noise entropy and in the limit of large populations, the encoding accuracy is independent of both the structure and the magnitude of noise correlations.
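
    The quantities involved can be made concrete with a small numerical sketch: linear Fisher information, I(θ) = f'(θ)ᵀ Σ⁻¹ f'(θ), computed for a population with limited-range correlations. All parameter values below are illustrative assumptions, not those used in the paper:

```python
# Minimal sketch of linear Fisher information in a population with
# "limited range" noise correlations (correlations decay with the
# difference in preferred orientation).
import numpy as np

N = 100                                          # population size
phi = np.linspace(0, np.pi, N, endpoint=False)   # preferred orientations
theta = np.pi / 4                                # stimulus orientation

# Von Mises tuning curves and their derivative at theta
kappa, r_max = 2.0, 20.0
f = r_max * np.exp(kappa * (np.cos(2 * (theta - phi)) - 1))
df = -2 * kappa * np.sin(2 * (theta - phi)) * f

# Limited-range correlation structure: c_ij decays with tuning distance
c_max, L = 0.3, 0.5
d = np.abs(phi[:, None] - phi[None, :])
d = np.minimum(d, np.pi - d)                     # circular distance
C = c_max * np.exp(-d / L)
np.fill_diagonal(C, 1.0)

# Poisson-like variances (variance = mean) give the covariance matrix
sigma = np.sqrt(f)
Sigma = C * sigma[:, None] * sigma[None, :]

# Linear Fisher information: I = f'(theta)^T Sigma^{-1} f'(theta)
I = df @ np.linalg.solve(Sigma, df)
print(f"linear Fisher information: {I:.1f}")
```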

    Optimal Population Coding, Revisited

    Cortical circuits perform the computations underlying rapid perceptual decisions within a few dozen milliseconds, with each neuron emitting only a few spikes. Under these conditions, the theoretical analysis of neural population codes is challenging, as the most commonly used theoretical tool – Fisher information – can lead to erroneous conclusions about the optimality of different coding schemes. Here we revisit the effect of tuning function width and correlation structure on neural population codes based on ideal-observer analysis in both a discrimination and a reconstruction task. We show that the optimal tuning function width and the optimal correlation structure in both paradigms depend strongly on the available decoding time in a very similar way. In contrast, population codes optimized for Fisher information do not depend on decoding time and are severely suboptimal when only a few spikes are available. In addition, we use the neurometric functions of the ideal observer in the discrimination task to investigate the differential coding properties of these Fisher-optimal codes for fine and coarse discrimination. We find that the discrimination error for these codes does not decrease to zero with increasing population size, even in simple coarse discrimination tasks. Our results suggest that quite different population codes may be optimal for rapid decoding in cortical computations than those inferred from the optimization of Fisher information.
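
    A minimal sketch of the ideal-observer idea in the reconstruction setting: maximum-likelihood decoding of orientation from Poisson spike counts collected in a short decoding window. Tuning parameters and the window length are assumptions for illustration:

```python
# Minimal sketch: maximum-likelihood decoding of orientation from Poisson
# spike counts under a short decoding window (few spikes per neuron).
import numpy as np

rng = np.random.default_rng(1)
N = 50
phi = np.linspace(0, np.pi, N, endpoint=False)   # preferred orientations
kappa, r_max, T = 1.5, 30.0, 0.02                # T = 20 ms decoding window

def rates(theta):
    """Von Mises tuning curves: firing rate of each neuron at theta."""
    return r_max * np.exp(kappa * (np.cos(2 * (theta - phi)) - 1))

theta_true = np.pi / 3
counts = rng.poisson(rates(theta_true) * T)      # only a handful of spikes

# Evaluate the Poisson log-likelihood on a grid and take the argmax
grid = np.linspace(0, np.pi, 360, endpoint=False)
loglik = [np.sum(counts * np.log(rates(th) * T) - rates(th) * T)
          for th in grid]
theta_hat = grid[int(np.argmax(loglik))]
print(f"true: {theta_true:.3f} rad, ML estimate: {theta_hat:.3f} rad")
```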

    Generalization properties of contrastive world models

    Recent work on object-centric world models aims to factorize representations in terms of objects in a completely unsupervised or self-supervised manner. Such world models are hypothesized to be a key component in addressing the generalization problem. However, while self-supervision has shown improved performance, OOD generalization has not been systematically and explicitly tested. In this paper, we conduct an extensive study of the generalization properties of a contrastive world model. We systematically test the model under a number of different OOD generalization scenarios, such as extrapolation to new object attributes and the introduction of new conjunctions or new attributes. Our experiments show that the contrastive world model fails to generalize under these OOD tests, and that the drop in performance depends on the extent to which the samples are OOD. When visualizing the transition updates and convolutional feature maps, we observe that any change in object attributes (such as previously unseen colors, shapes, or conjunctions of color and shape) breaks down the factorization of object representations. Overall, our work highlights the importance of object-centric representations for generalization and shows that current models are limited in their capacity to learn the representations required for human-level generalization. (Comment: Accepted at the NeurIPS 2023 Workshop: Self-Supervised Learning - Theory and Practice)
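
    For concreteness, the contrastive objective underlying such world models (in the style of C-SWM, Kipf et al., 2020) can be sketched as follows; the encoder, transition network, and all dimensions are illustrative assumptions, not the exact architecture evaluated here:

```python
# Minimal sketch of a contrastive object-centric world model: per-object
# latents, a learned transition, and a hinge-based contrastive loss.
import torch
import torch.nn as nn

class ContrastiveWorldModel(nn.Module):
    def __init__(self, obs_dim=64, n_objects=4, z_dim=8, a_dim=4):
        super().__init__()
        # Encoder maps an observation to a set of per-object latents
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_objects * z_dim))
        # Transition model predicts the latent update given an action
        self.transition = nn.Sequential(
            nn.Linear(n_objects * z_dim + a_dim, 128), nn.ReLU(),
            nn.Linear(128, n_objects * z_dim))
        self.n_objects, self.z_dim = n_objects, z_dim

    def encode(self, obs):
        return self.encoder(obs).view(-1, self.n_objects, self.z_dim)

    def loss(self, obs, action, next_obs, margin=1.0):
        z, z_next = self.encode(obs), self.encode(next_obs)
        delta = self.transition(
            torch.cat([z.flatten(1), action], dim=-1)).view_as(z)
        # Positive energy: predicted next state should match the true one
        pos = ((z + delta - z_next) ** 2).sum(dim=(1, 2))
        # Negative energy: shuffled states from the batch should stay at
        # least `margin` away from the true next state
        z_neg = z[torch.randperm(z.size(0))]
        neg = ((z_neg - z_next) ** 2).sum(dim=(1, 2))
        return (pos + torch.relu(margin - neg)).mean()

model = ContrastiveWorldModel()
obs, action, next_obs = torch.randn(16, 64), torch.randn(16, 4), torch.randn(16, 64)
print(model.loss(obs, action, next_obs))
```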

    A rotation-equivariant convolutional neural network model of primary visual cortex

    Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that models based on convolutional neural networks (CNNs) lead to much more accurate predictions, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this model to the responses of a population of 6000 neurons to natural images, recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network not only outperforms a regular CNN with the same number of feature maps, but also reveals a number of common features shared by many V1 neurons, which deviate from the typical textbook idea of V1 as a bank of Gabor filters. Our findings are a first step towards a powerful new tool for studying the nonlinear computations in V1.
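
    The core idea of rotation equivariance by weight sharing can be sketched in a few lines: one learned filter bank is applied at several orientations, so every feature is extracted at multiple rotations. The sketch below uses exact 90-degree rotations (the cyclic group C4) for simplicity, whereas the paper's network samples orientations more finely; shapes and sizes are illustrative assumptions:

```python
# Minimal sketch of a rotation-equivariant convolution: a single learnable
# filter bank is shared across four orientations via exact 90-deg rotations.
import torch
import torch.nn.functional as F

class C4EquivariantConv(torch.nn.Module):
    def __init__(self, in_ch=1, out_ch=8, k=7):
        super().__init__()
        # One learnable filter bank, shared across all orientations
        self.weight = torch.nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x):
        # Apply the same filters rotated by 0, 90, 180, and 270 degrees
        rotated = [torch.rot90(self.weight, r, dims=(2, 3)) for r in range(4)]
        w = torch.cat(rotated, dim=0)            # (4 * out_ch, in_ch, k, k)
        return F.conv2d(x, w, padding=self.weight.shape[-1] // 2)

layer = C4EquivariantConv()
x = torch.randn(1, 1, 32, 32)
y = layer(x)        # (1, 32, 32, 32): 8 features x 4 orientations
print(y.shape)
```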

    Comparing the Feature Selectivity of the Gamma-Band of the Local Field Potential and the Underlying Spiking Activity in Primate Visual Cortex

    The local field potential (LFP), composed of low-frequency extracellular voltage fluctuations, has been used extensively to study the mechanisms of brain function. In particular, oscillations in the gamma band (30–90 Hz) are ubiquitous in the cortex of many species during various cognitive processes. Surprisingly little is known about the underlying biophysical processes generating this signal. Here, we examine the relationship of the local field potential to the activity of localized populations of neurons by simultaneously recording spiking activity and LFP from the primary visual cortex (V1) of awake, behaving macaques. The spatial organization of orientation tuning and ocular dominance in this area provides an excellent opportunity to study this question, because orientation tuning is organized at a scale around one order of magnitude finer than the size of ocular dominance columns. While we find a surprisingly weak correlation between the preferred orientation of multi-unit activity and gamma-band LFP recorded on the same tetrode, there is a strong correlation between the ocular preferences of the two signals. Given the spatial arrangement of orientation tuning and ocular dominance, this leads us to conclude that the gamma band of the LFP samples an area considerably larger than single orientation columns; rather, its spatial resolution lies at the scale of ocular dominance columns.
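
    Comparing the tuning of two signals requires an estimate of preferred orientation from each tuning curve; a standard approach is the vector-sum (circular mean) method, sketched below with synthetic stand-ins for the multi-unit and gamma-LFP responses:

```python
# Minimal sketch: preferred orientation via the vector-sum method, the
# usual way to compare tuning of two simultaneously recorded signals.
# The tuning curves here are synthetic stand-ins, not recorded data.
import numpy as np

def preferred_orientation(responses, orientations):
    """Circular mean over the 180-degree orientation cycle."""
    z = np.sum(responses * np.exp(2j * orientations))
    return (np.angle(z) / 2) % np.pi

oris = np.linspace(0, np.pi, 16, endpoint=False)
rng = np.random.default_rng(2)
mua = np.exp(2.0 * (np.cos(2 * (oris - np.pi / 3)) - 1)) + 0.1 * rng.random(16)
gamma = np.exp(0.5 * (np.cos(2 * (oris - np.pi / 2)) - 1)) + 0.1 * rng.random(16)

print(f"MUA preferred:   {np.degrees(preferred_orientation(mua, oris)):.1f} deg")
print(f"gamma preferred: {np.degrees(preferred_orientation(gamma, oris)):.1f} deg")
```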

    DataJoint: managing big scientific data using MATLAB or Python

    The rise of big data in modern research poses serious challenges for data management: Large and intricate datasets from diverse instrumentation must be precisely aligned, annotated, and processed in a variety of ways to extract new insights. While high levels of data integrity are expected, research teams have diverse backgrounds, are geographically dispersed, and rarely possess a primary interest in data science. Here we describe DataJoint, an open-source toolbox designed for manipulating and processing scientific data under the relational data model. Designed for scientists who need a flexible and expressive database language with few basic concepts and operations, DataJoint facilitates multi-user access, efficient queries, and distributed computing. With implementations in both MATLAB and Python, DataJoint is not limited to particular file formats, acquisition systems, or data modalities and can be quickly adapted to new experimental designs. DataJoint and related resources are available at http://datajoint.github.com.
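
    A minimal sketch of the DataJoint (Python) workflow: tables are declared as classes, and computed tables populate themselves from upstream data. The schema name, fields, and processing are assumptions for illustration, and a configured database connection is required:

```python
# Minimal sketch of a DataJoint pipeline (schema name and fields assumed).
import datajoint as dj

schema = dj.schema('my_lab_pipeline')   # assumed schema name

@schema
class Session(dj.Manual):
    definition = """
    session_id : int            # unique session number
    ---
    session_date : date         # recording date
    subject      : varchar(32)  # subject identifier
    """

@schema
class SpikeCount(dj.Computed):
    definition = """
    -> Session
    ---
    n_spikes : int              # total spikes in the session
    """

    def make(self, key):
        # In a real pipeline this would load and process raw data
        self.insert1(dict(key, n_spikes=0))

# Insert a session, then populate all computed tables that depend on it
Session.insert1(dict(session_id=1, session_date='2024-01-01', subject='M1'))
SpikeCount.populate()
```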

    Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks

    Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency-bias hypothesis further, we develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations in the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias, but we find that this bias also depends on direction in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
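
    The masking idea can be sketched as follows: images are filtered in the frequency domain by a learnable mask before being fed to the model. The toy model and the sparsity regularizer below are illustrative assumptions, not the paper's exact objective:

```python
# Minimal sketch: a learnable modulatory mask applied to image spectra.
import torch
import torch.nn.functional as F

def apply_fourier_mask(images, mask):
    """Multiply the image spectrum by a mask and return to pixel space."""
    spectrum = torch.fft.fft2(images)
    filtered = spectrum * mask              # mask has shape (H, W)
    return torch.fft.ifft2(filtered).real

# Learnable mask initialized to pass all frequencies; toy linear classifier
H = W = 32
mask = torch.nn.Parameter(torch.ones(H, W))
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(H * W, 10))

images = torch.randn(8, 1, H, W)
labels = torch.randint(0, 10, (8,))

# Invariance-style objective: preserve performance under the masked input
# while encouraging a sparse mask that highlights essential frequencies
logits = model(apply_fourier_mask(images, mask))
loss = F.cross_entropy(logits, labels) + 1e-3 * mask.abs().mean()
loss.backward()
print(mask.grad.abs().mean())
```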

    Few-Shot Attribute Learning

    Semantic concepts are frequently defined by combinations of underlying attributes. As mappings from attributes to classes are often simple, attribute-based representations facilitate novel concept learning with zero or few examples. A significant limitation of existing attribute-based learning paradigms, such as zero-shot learning, is that the attributes are assumed to be known and fixed. In this work we study the rapid learning of attributes that were not previously labeled. Compared to standard few-shot learning of semantic classes, in which novel classes may be defined by attributes that were relevant at training time, learning new attributes poses a stiffer challenge. We found that supervised learning with training attributes does not generalize well to new test attributes, whereas self-supervised pre-training brings significant improvement. We further experimented with random splits of the attribute space and found that the predictability of test attributes provides an informative estimate of a model's generalization ability. (Comment: Technical report, 25 pages)
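
    One simple way to instantiate few-shot learning of a new binary attribute on top of a frozen (e.g., self-supervised pre-trained) embedding is prototype comparison, sketched below with random stand-in embeddings; the embedding source and dimensions are assumptions:

```python
# Minimal sketch: few-shot binary attribute prediction via prototypes
# computed over a frozen embedding space.
import torch

def attribute_prototypes(support_emb, support_labels):
    """Mean embedding of examples with and without the attribute."""
    pos = support_emb[support_labels == 1].mean(dim=0)
    neg = support_emb[support_labels == 0].mean(dim=0)
    return pos, neg

def predict_attribute(query_emb, pos, neg):
    # Attribute present if the query is closer to the positive prototype
    d_pos = (query_emb - pos).norm(dim=-1)
    d_neg = (query_emb - neg).norm(dim=-1)
    return (d_pos < d_neg).long()

# Stand-ins for embeddings from a frozen pre-trained encoder
emb_dim = 128
support = torch.randn(10, emb_dim)
labels = torch.tensor([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
queries = torch.randn(4, emb_dim)

pos, neg = attribute_prototypes(support, labels)
print(predict_attribute(queries, pos, neg))
```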