
    Local field potentials reflect multiple spatial scales in V4

    Local field potentials (LFP) reflect the properties of neuronal circuits or columns recorded in a volume around a microelectrode (Buzsáki et al., 2012). The extent of this integration volume has been a subject of some debate, with estimates ranging from a few hundred microns (Katzner et al., 2009; Xing et al., 2009) to several millimeters (Kreiman et al., 2006). We estimated receptive fields (RFs) of multi-unit activity (MUA) and LFPs at an intermediate level of visual processing, in area V4 of two macaques. The spatial structure of LFP receptive fields varied greatly as a function of time lag following stimulus onset, with the retinotopy of LFPs matching that of MUAs at a restricted set of time lags. A model-based analysis of the LFPs allowed us to recover two distinct stimulus-triggered components: an MUA-like retinotopic component that originated in a small volume around the microelectrodes (~350 μm), and a second component that was shared across the entire V4 region; this second component had tuning properties unrelated to those of the MUAs. Our results suggest that the LFP reflects neural activity across multiple spatial scales, which both complicates its interpretation and offers new opportunities for investigating the large-scale structure of network processing.
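
    The model-based decomposition described above separates the LFP into a locally generated, retinotopic part and a component shared across the array. A minimal sketch of that kind of decomposition, using simulated signals and a crude common-signal estimate rather than the paper's actual model, might look like this:

```python
# Minimal sketch (not the paper's analysis): split stimulus-triggered LFP responses
# into a component shared across electrodes and electrode-specific residuals.
# All dimensions and signals below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_electrodes = 200, 32                      # hypothetical recording size

global_signal = rng.normal(size=(n_stimuli, 1))        # shared across the array
local_tuning = 0.5 * rng.normal(size=(n_stimuli, n_electrodes))  # electrode-specific
lfp = global_signal + local_tuning + 0.1 * rng.normal(size=(n_stimuli, n_electrodes))

# Crude estimate of the shared component: the mean across electrodes.
shared = lfp.mean(axis=1, keepdims=True)
local = lfp - shared                                   # approximates the local part

print("variance explained by the shared component:", 1 - local.var() / lfp.var())
```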

    Community-based benchmarking improves spike rate inference from two-photon calcium imaging data

    In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
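
    The challenge compares inferred spike rates to ground-truth rates; a minimal sketch of such an evaluation, assuming a simple correlation-after-binning score rather than the challenge's official metric code, could look like this:

```python
# Minimal sketch (assumption: a correlation-after-binning score, not the official
# spikefinder scoring code): compare an inferred spike-rate trace to ground truth.
import numpy as np

def bin_trace(x, factor):
    """Average consecutive samples to bring a trace to a coarser time base."""
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def score(inferred, ground_truth, factor=4):
    """Pearson correlation between the two traces after binning."""
    a, b = bin_trace(inferred, factor), bin_trace(ground_truth, factor)
    return np.corrcoef(a, b)[0, 1]

# Toy usage with synthetic traces sampled at 100 Hz, scored after 4x binning (25 Hz).
rng = np.random.default_rng(1)
truth = rng.poisson(0.2, size=10_000).astype(float)
estimate = truth + rng.normal(scale=0.5, size=truth.shape)
print(score(estimate, truth))
```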

    Neuromatch Academy: Teaching Computational Neuroscience with global accessibility

    Neuromatch Academy designed and ran a fully online 3-week Computational Neuroscience summer school for 1757 students with 191 teaching assistants working in virtual inverted (or flipped) classrooms and on small group projects. Fourteen languages, active community management, and low cost allowed for an unprecedented level of inclusivity and universal accessibility.
    Comment: 10 pages, 3 figures. Equal contribution by the executive committee members of Neuromatch Academy: Tara van Viegen, Athena Akrami, Kate Bonnen, Eric DeWitt, Alexandre Hyafil, Helena Ledmyr, Grace W. Lindsay, Patrick Mineault, John D. Murray, Xaq Pitkow, Aina Puce, Madineh Sedigh-Sarvestani, and Carsen Stringer; and equal contribution by the board of directors of Neuromatch Academy: Gunnar Blohm, Konrad Kording, Paul Schrater, Brad Wyble, Sean Escola, and Megan A. K. Peters.

    Neuromatch Academy: a 3-week, online summer school in computational neuroscience

    Neuromatch Academy (https://academy.neuromatch.io; van Viegen et al., 2021) was designed as an online summer school to cover the basics of computational neuroscience in three weeks. The materials cover dominant and emerging computational neuroscience tools, how they complement one another, and, specifically, how they can help us better understand how the brain functions. An original component of the materials is their focus on modeling choices: how do we choose the right approach, how do we build models, and how can we evaluate models to determine whether they provide real (meaningful) insight? This meta-modeling component of the instructional materials asks what questions can be answered by different techniques, and how to apply them meaningfully to get insight about brain function.

    Parametric modelling of visual cortex at multiple scales

    The visual system is confronted with the daunting task of extracting behaviourally relevant visual information from noisy and ambiguous patterns of luminance falling on the retina. It solves this problem through a hierarchical architecture, in which the visual stimulus is iteratively re-encoded into ever more abstract representations which can drive behaviour. This thesis explores the question of how the computations performed by neurons in the visual hierarchy create behaviourally relevant representations. This question requires probing the visual system at multiple scales: computation is the role of single neurons and ensembles of neurons; representation is the function of multiple neurons within an area; hierarchical processing is an emergent process which involves multiple areas; and behaviour is defined at the full scale of the system, the psychophysical observer.

    To study visual processing at multiple scales, I propose to develop and apply parametric modelling methods in the context of systems identification. Systems identification seeks to establish the deterministic relationship between the input and the output of a system, and has proven particularly useful in the study of visual processing, where the input to the system can be easily controlled via sensory stimulation. Parametric modelling, built on the theory of Generalized Linear Models (GLMs), furnishes a common framework to analyze signals with different statistical properties which occur in the analysis of neural systems: spike trains, multi-unit activity, local field potentials and psychophysical decisions.

    In Chapter 2, I develop the parametric modelling framework which is used throughout this thesis in the context of psychophysical classification images. Results show that parametric modelling can infer a psychophysical observer's decision process with fewer trials than previously proposed methods. This allows the exploration of more complex, and potentially more informative, models of decision processes while retaining statistical tractability.

    In Chapter 3, I extend and apply this framework to the analysis of visual representations at the level of neuronal ensembles in area V4. The results show that it is possible to infer, from multi-unit activity and local field potential (LFP) signals, the representation of visual space at a fine-grained scale over several millimeters of cortex. Analysis of the estimated visual representations reveals that LFPs reflect both local sources of input and global biases in visual representation. These results resolve a persistent puzzle in the literature regarding the spatial reach of the local field potential.

    In Chapter 4, I extend and apply the same framework to the analysis of single-neuron responses in area MST of the dorsal visual stream. Results reveal that MST responses can be explained by the integration of their afferent input from area MT, provided that this integration is nonlinear. Estimated models reveal a long-suspected, but previously unconfirmed, receptive field organization in MST neurons that allows them to respond to complex optic flow patterns. This receptive field organization and nonlinear integration allow more accurate estimation of the velocity of approaching objects from the population of MST neurons, thus revealing their possible functional role in vergence control and object motion estimation.

    Put together, these results demonstrate that with powerful statistical methods, it is possible to infer the nature of visual representations at multiple scales. In the discussion, I show how these results may be expanded to gain a better understanding of hierarchical visual processing at large.
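
    As a toy illustration of the parametric, GLM-based system-identification framework described in this abstract, the sketch below fits a Poisson GLM to simulated spike counts and recovers a receptive-field filter. The stimulus layout, the filter, and the use of scikit-learn are assumptions made for illustration; this is not the thesis code:

```python
# Minimal sketch (not the thesis code): Poisson GLM mapping a white-noise stimulus
# to spike counts, a toy instance of parametric systems identification.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
n_trials, n_pixels = 5000, 64                 # hypothetical stimulus dimensionality
X = rng.normal(size=(n_trials, n_pixels))     # white-noise stimulus frames

true_rf = np.exp(-0.5 * ((np.arange(n_pixels) - 32) / 4) ** 2)  # Gaussian filter
rate = np.exp(0.3 * (X @ true_rf) - 1.0)      # log-linear rate, as in a Poisson GLM
y = rng.poisson(rate)                         # simulated spike counts

model = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X, y)
print("correlation between recovered and true filter:",
      np.corrcoef(model.coef_, true_rf)[0, 1])
```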

    Enhanced Spatial Resolution During Locomotion and Heightened Attention in Mouse Primary Visual Cortex

    We do not fully understand how behavioral state modulates the processing and transmission of sensory signals. Here, we studied the cortical representation of the retinal image in mice that spontaneously switched between a state of rest and a constricted pupil, and one of active locomotion and a dilated pupil, indicative of heightened attention. We measured the selectivity of neurons in primary visual cortex for orientation and spatial frequency, as well as their response gain, in these two behavioral states. Consistent with prior studies, we found that preferred orientation and spatial frequency remained invariant across states, whereas response gain increased during locomotion relative to rest. Surprisingly, relative gain, defined as the ratio between the gain during locomotion and the gain during rest, was not uniform across the population. Cells tuned to high spatial frequencies showed larger relative gain compared with those tuned to lower spatial frequencies. The preferential enhancement of high-spatial-frequency information was also reflected in our ability to decode the stimulus from population activity. Finally, we show that changes in gain originate from shifts in the operating point of neurons along a spiking nonlinearity as a function of behavioral state. Differences in the relative gain experienced by neurons with high and low spatial frequency preferences are due to corresponding differences in how these cells shift their operating points between behavioral states.

    Significance statement: How behavioral state modulates the processing and transmission of sensory signals remains poorly understood. Here, we show that the mean firing rate and neuronal gain increase during locomotion as a result of a shift in the operating point of neurons. We define relative gain as the ratio between the gain of neurons during locomotion and rest. Interestingly, relative gain is higher in cells with preferences for higher spatial frequencies than in those with low-spatial-frequency selectivity. This means that, during a state of locomotion and heightened attention, the population activity in primary visual cortex can support better spatial acuity, a phenomenon that parallels the improved spatial resolution observed in human subjects during the allocation of spatial attention.
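
    A minimal sketch of the operating-point account described above, assuming a rectified power-law spiking nonlinearity and an invented state-dependent depolarization (this is not the paper's fitted model):

```python
# Minimal sketch (assumed nonlinearity and operating points, not the paper's fits):
# gain changes follow from shifting the operating point along a fixed spiking
# nonlinearity between rest and locomotion.
import numpy as np

def spiking_nonlinearity(v, threshold=0.0, power=2.0):
    """Rectified power-law nonlinearity, a common description of V1 spiking."""
    return np.maximum(v - threshold, 0.0) ** power

def local_gain(operating_point, eps=1e-3):
    """Numerical slope of the nonlinearity at a given operating point."""
    return (spiking_nonlinearity(operating_point + eps)
            - spiking_nonlinearity(operating_point - eps)) / (2 * eps)

# Hypothetical operating points: locomotion depolarizes the cell relative to rest.
gain_rest = local_gain(0.5)
gain_run = local_gain(0.9)
print("relative gain (locomotion / rest):", gain_run / gain_rest)
```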
