
    Cortical circuits for integration of self-motion and visual-motion signals.

    The cerebral cortex contains cells that respond to movement of the head, and these cells are thought to be involved in the perception of self-motion. In particular, studies in the primary visual cortex of mice show that both running speed and passive whole-body rotation modulate neuronal activity, and modern genetically targeted viral tracing approaches have begun to identify previously unknown circuits that underlie these responses. Here we review recent experimental findings and provide a road map for future work in mice to elucidate the functional architecture and emergent properties of a cortical network potentially involved in the generation of egocentric-based visual representations for navigation.

    Psychophysical properties of odor processing can be quantitatively described by relative action potential latency patterns in mitral and tufted cells

    Electrophysiological and population imaging data in rodents show that olfactory bulb (OB) activity is profoundly modulated by the odor sampling process, while behavioral experiments indicate that odor discrimination can occur within a single sniff. This paper addresses the question of whether action potential (AP) latencies occurring across the mitral and tufted cell (M/TC) population within an individual sampling cycle could account for the psychophysical properties of odor processing. To determine this, we created an OB model (50,000 M/TCs) exhibiting hallmarks of published in vivo properties and used a template-matching algorithm to assess stimulus separation. Such an AP latency-based scheme showed high reproducibility and sensitivity, such that odor stimuli could be reliably separated independent of concentration. As in behavioral experiments, we found that very dissimilar odors (“A vs. B”) were accurately and rapidly discerned, while very similar odors (binary mixtures, 0.4A/0.6B vs. 0.6A/0.4B) required up to 90 ms longer. As in lesion studies, we found that the AP latency-based representation is rather insensitive to disruption of large regions of the OB. The AP latency-based scheme described here therefore captures both temporal and psychophysical properties of olfactory processing and suggests that the onset patterns of M/TC activity in the OB represent stimulus-specific features of olfactory stimuli.
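    The idea of reading out stimulus identity from first-spike latency patterns can be sketched in a few lines. This is an illustrative toy, not the authors' 50,000-cell model: the cell count, the latency rule (stronger drive spikes earlier), the jitter level, and the similarity measure are all invented for illustration.

```python
import random

# Toy sketch of latency-pattern readout: each odor drives a population of
# cells with different strengths; stronger drive -> earlier first spike.
# All numbers here are hypothetical.

def latency_pattern(weights, jitter, seed=None):
    """Simulate first-AP latencies (ms) for one sniff cycle."""
    rng = random.Random(seed)
    return [5.0 + 40.0 * (1.0 - w) + rng.gauss(0.0, jitter) for w in weights]

def match(template, observed):
    """Negative mean absolute latency difference: higher = better match."""
    return -sum(abs(t - o) for t, o in zip(template, observed)) / len(template)

def classify(observed, templates):
    """Template matching: pick the stored pattern closest to the observation."""
    return max(templates, key=lambda name: match(templates[name], observed))

rng = random.Random(0)
n = 200  # hypothetical population size
odor_a = [rng.random() for _ in range(n)]  # per-cell drive for odor A
odor_b = [rng.random() for _ in range(n)]  # per-cell drive for odor B
templates = {"A": latency_pattern(odor_a, jitter=0.0),
             "B": latency_pattern(odor_b, jitter=0.0)}

noisy_a = latency_pattern(odor_a, jitter=2.0, seed=1)
print(classify(noisy_a, templates))  # expect "A": noise is small vs. odor separation
```

    Because the average between-odor latency difference here (roughly 13 ms per cell) is much larger than the 2 ms jitter, the readout is robust, echoing the paper's point that dissimilar odors separate quickly while similar ones need more time.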

    Probabilistic Tensor Decomposition of Neural Population Spiking Activity

    The firing of neural populations is coordinated across cells, in time, and across experimental conditions or repeated experimental trials, and so a full understanding of the computational significance of neural responses must be based on a separation of these different contributions to structured activity. Tensor decomposition is an approach to untangling the influence of multiple factors in data that is common in many fields. However, despite some recent interest in neuroscience, wider applicability of the approach is hampered by the lack of a full probabilistic treatment allowing principled inference of a decomposition from non-Gaussian spike-count data. Here, we extend the Polya-Gamma (PG) augmentation, previously used in sampling-based Bayesian inference, to implement scalable variational inference in non-conjugate spike-count models. Using this new approach, we develop techniques related to automatic relevance determination to infer the most appropriate tensor rank, as well as to incorporate priors based on known brain anatomy, such as the segregation of cell response properties by brain area. We apply the model to neural recordings taken under conditions of visual-vestibular sensory integration, revealing how the encoding of self- and visual-motion signals is modulated by the sensory information available to the animal.
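    The factorization at the heart of tensor decomposition can be illustrated with a minimal rank-1 CP model fit by alternating least squares. This is only the deterministic core idea (neurons × time × trials ≈ outer product of three factor vectors); the paper's probabilistic Polya-Gamma treatment of spike counts is far richer and is not reproduced here.

```python
# Toy rank-1 CP decomposition T[i][j][k] ~ a[i]*b[j]*c[k] by alternating
# least squares, on plain nested lists. Purely illustrative.

def rank1_als(T, n_iters=20):
    I, J, K = len(T), len(T[0]), len(T[0][0])
    a, b, c = [1.0] * I, [1.0] * J, [1.0] * K
    for _ in range(n_iters):
        # Each update is the exact least-squares solution for one factor
        # while the other two are held fixed.
        sb, sc = sum(x * x for x in b), sum(x * x for x in c)
        a = [sum(T[i][j][k] * b[j] * c[k] for j in range(J) for k in range(K))
             / (sb * sc) for i in range(I)]
        sa = sum(x * x for x in a)
        b = [sum(T[i][j][k] * a[i] * c[k] for i in range(I) for k in range(K))
             / (sa * sc) for j in range(J)]
        sb = sum(x * x for x in b)
        c = [sum(T[i][j][k] * a[i] * b[j] for i in range(I) for j in range(J))
             / (sa * sb) for k in range(K)]
    return a, b, c

# An exactly rank-1 "firing rate" tensor (2 neurons x 3 time bins x 2 trials)
# is recovered up to scaling:
a0, b0, c0 = [1.0, 2.0], [1.0, 0.5, 2.0], [3.0, 1.0]
T = [[[ai * bj * ck for ck in c0] for bj in b0] for ai in a0]
a, b, c = rank1_als(T)
err = max(abs(T[i][j][k] - a[i] * b[j] * c[k])
          for i in range(2) for j in range(3) for k in range(2))
print(err)  # near machine precision for an exactly rank-1 tensor
```

    Real spike-count data are not exactly low-rank or Gaussian, which is precisely why the paper replaces this least-squares objective with variational inference under a count-likelihood model.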

    Cortical Integration of Vestibular and Visual Cues for Navigation, Visual Processing, and Perception

    Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated within cortical sensory representation, and how they might be relied upon for sensory-driven decision-making during, for example, spatial navigation, is yet to be understood. Recent novel experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings with a focus on cortical circuits involved in visual perception and spatial navigation and highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating regarding the status of self-motion, and access to such information by the cortex is used for sensory perception and predictions that may be implemented for rapid, navigation-related decision-making.

    Multi-neuron intracellular recording in vivo via interacting autopatching robots

    The activities of groups of neurons in a circuit or brain region are important for neuronal computations that contribute to behaviors and disease states. Traditional extracellular recordings have been powerful and scalable, but much less is known about the intracellular processes that lead to spiking activity. We present a robotic system, the multipatcher, capable of automatically obtaining blind whole-cell patch clamp recordings from multiple neurons simultaneously. The multipatcher significantly extends automated patch clamping, or ’autopatching’, to guide four interacting electrodes in a coordinated fashion, avoiding mechanical coupling in the brain. We demonstrate its performance in the cortex of anesthetized and awake mice. A multipatcher with four electrodes took an average of 10 min to obtain dual or triple recordings in 29% of trials in anesthetized mice, and in 18% of trials in awake mice, thus illustrating practical yield and throughput for obtaining multiple, simultaneous whole-cell recordings in vivo.
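    One way to picture the coordination problem the abstract describes is as a scheduling constraint: electrodes whose targets are close together must not move at the same time, while distant ones may. The sketch below is entirely hypothetical (the function name, the 2-D geometry, and the coupling radius are invented) and is not the multipatcher's actual control logic.

```python
import math

# Hypothetical illustration of coordinated pipette movement: greedily group
# pipettes into batches such that no two pipettes in a batch have targets
# closer than a "mechanical coupling" radius; batches move one after another.

def movement_batches(targets, coupling_radius):
    """targets: list of (x, y) positions (e.g. mm). Returns batches of indices."""
    batches = []
    for idx in range(len(targets)):
        placed = False
        for batch in batches:
            # Safe to join this batch only if far from every member's target.
            if all(math.dist(targets[idx], targets[j]) >= coupling_radius
                   for j in batch):
                batch.append(idx)
                placed = True
                break
        if not placed:
            batches.append([idx])  # start a new, later batch
    return batches

targets = [(0, 0), (0.05, 0), (1, 0), (1, 1)]  # hypothetical layout, mm
print(movement_batches(targets, coupling_radius=0.2))  # [[0, 2, 3], [1]]
```

    Here pipettes 0, 2, and 3 are mutually distant and may descend together, while pipette 1 sits within 0.2 mm of pipette 0's target and is deferred to a second pass.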

    Visualizing anatomically registered data with Brainrender

    Three-dimensional (3D) digital brain atlases and high-throughput brain-wide imaging techniques generate large multidimensional datasets that can be registered to a common reference frame. Generating insights from such datasets depends critically on visualization and interactive data exploration, but this is a challenging task. Currently available software is dedicated to single atlases, model species or data types, and generating 3D renderings that merge anatomically registered data from diverse sources requires extensive development and programming skills. Here, we present brainrender: an open-source Python package for interactive visualization of multidimensional datasets registered to brain atlases. Brainrender facilitates the creation of complex renderings with different data types in the same visualization and enables seamless use of different atlas sources. High-quality visualizations can be used interactively and exported as high-resolution figures and animated videos. By facilitating the visualization of anatomically registered data, brainrender should accelerate the analysis, interpretation, and dissemination of brain-wide multidimensional data.

    BrainGlobe Atlas API: a common interface for neuroanatomical atlases

    Summary: Neuroscientists routinely perform experiments aimed at recording or manipulating neural activity, uncovering physiological processes underlying brain function or elucidating aspects of brain anatomy. Understanding how the brain generates behaviour ultimately depends on merging the results of these experiments into a unified picture of brain anatomy and function. Brain atlases are crucial in this endeavour: by outlining the organization of brain regions, they provide a reference upon which our understanding of brain function can be anchored. More recently, digital high-resolution 3D atlases have been produced for several model organisms, providing an invaluable resource for the research community. Effective use of these atlases depends on the availability of an application programming interface (API) that enables researchers to develop software to access and query atlas data. However, while some atlases come with an API, these are generally specific to individual atlases, and this hinders the development and adoption of open-source neuroanatomy software. The BrainGlobe atlas API (BG-Atlas API) overcomes this problem by providing a common interface for programmers to download and process data across a variety of model organisms. By adopting the BG-Atlas API, software can then be developed agnostic to the atlas, increasing adoption and interoperability of packages in neuroscience and enabling direct integration of different experimental modalities and even comparisons across model organisms.
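    The "common interface" design the abstract describes is essentially an adapter pattern: atlas-specific backends are wrapped behind one interface so downstream tools are written once. The sketch below illustrates that pattern only; the class and method names are invented for illustration and are not the actual BG-Atlas API.

```python
from dataclasses import dataclass

# Illustrative adapter-pattern sketch of a common atlas interface.
# All names here are hypothetical, not the real BG-Atlas API.

@dataclass
class Region:
    acronym: str
    name: str
    id: int

class AtlasInterface:
    """Minimal contract every atlas backend must satisfy."""
    def regions(self):
        raise NotImplementedError
    def region_by_acronym(self, acronym):
        # Shared behaviour implemented once against the common contract.
        return next(r for r in self.regions() if r.acronym == acronym)

class ToyMouseAtlas(AtlasInterface):
    """A stand-in backend with two hard-coded regions."""
    def regions(self):
        return [Region("VISp", "Primary visual area", 385),
                Region("CA1", "Field CA1", 382)]

# Downstream code is atlas-agnostic: it only sees AtlasInterface.
def report(atlas, acronym):
    r = atlas.region_by_acronym(acronym)
    return f"{r.acronym}: {r.name} (id {r.id})"

print(report(ToyMouseAtlas(), "CA1"))  # CA1: Field CA1 (id 382)
```

    Any new species or atlas version then only requires a new backend class, while every tool built against the interface keeps working unchanged.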

    Mapping brain circuitry with a light microscope

    The beginning of the 21st century has seen a renaissance in light microscopy and anatomical tract tracing that together are rapidly advancing our understanding of the form and function of neuronal circuits. The introduction of instruments for automated imaging of whole mouse brains, new cell type–specific and trans-synaptic tracers, and computational methods for handling whole-brain data sets has opened the door to neuroanatomical studies at an unprecedented scale. We present an overview of the present state and future opportunities in charting long-range and local connectivity in the entire mouse brain and in linking brain circuits to function.