
    Unsupervised space-time learning in primary visual cortex

    The mammalian visual system is an incredibly complex computational device, capable of performing the various tasks of seeing: navigation, pattern and object recognition, motor coordination, and trajectory extrapolation, among others. Decades of research have shown that experience-dependent plasticity of cortical circuitry underlies the impressive ability to rapidly learn many of these tasks and to adjust as required. One particular thread of investigation has focused on unsupervised learning, wherein changes to the visual environment lead to corresponding changes in cortical circuits. The most prominent example of unsupervised learning is ocular dominance plasticity, caused by visual deprivation to one eye and leading to a dramatic re-wiring of cortex. Other examples make more subtle changes to the visual environment through passive exposure to novel visual stimuli. Here, we use one such unsupervised paradigm, sequence learning, to study experience-dependent plasticity in the mouse visual system. Through a combination of theory and experiment, we argue that the mammalian visual system is an unsupervised learning device. Beginning with a mathematical exploration of unsupervised learning in biology, engineering, and machine learning, we seek a more precise expression of our fundamental hypothesis. We draw connections between information theory, efficient coding, and common unsupervised learning algorithms such as Hebbian plasticity and principal component analysis. Efficient coding suggests a simple rule for transmitting information in the nervous system: use more spikes to encode unexpected information, and fewer spikes to encode expected information. Expectation violations therefore ought to produce prediction errors: brief periods of heightened firing when an unexpected event occurs. Meanwhile, modern unsupervised learning algorithms show how such expectations can be learned. Next, we review data from decades of visual neuroscience research, highlighting the computational principles and synaptic plasticity processes that support biological learning and seeing. By tracking the flow of visual information from the retina to the thalamus and primary visual cortex, we discuss how the principle of efficient coding is evident in neural activity. One common example is predictive coding in the retina, where ganglion cells with canonical center-surround receptive fields compute a prediction error, sending spikes to the central nervous system only in response to locally unpredictable visual stimuli. This behavior can be learned through simple Hebbian plasticity mechanisms. Similar models explain much of the activity of neurons in primary visual cortex, but we also discuss ways in which the theory fails to capture the rich biological complexity. Finally, we present novel experimental results from physiological investigations of the mouse primary visual cortex. We trained mice by passively exposing them to complex spatiotemporal patterns of light: rapidly flashed sequences of images. We find evidence that visual cortex learns these sequences in a manner consistent with efficient coding, such that unexpected stimuli tend to elicit more firing than expected ones. Overall, we observe dramatic changes in evoked neural activity across days of passive exposure. Neural responses to the first, unexpected sequence element increase with days of training, while responses at other, expected time points either decrease or stay the same. Furthermore, substituting an unexpected element for an expected one, or omitting an expected element, both cause brief bursts of increased firing. Our results therefore provide evidence for unsupervised learning and efficient coding in the mouse visual system, especially because unexpected events drive prediction errors. Overall, our analysis suggests novel experiments that could be performed in the near future and provides a useful framework for understanding visual perception and learning.
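    To make the abstract's link between Hebbian plasticity and principal component analysis concrete, here is a minimal sketch of Oja's rule, a textbook Hebbian learning rule that converges to the first principal component of its inputs. It illustrates the general principle invoked above rather than the specific model used in this work; the input statistics and parameters are invented.

    ```python
    import numpy as np

    # Oja's rule: Hebbian learning with implicit weight normalization.
    rng = np.random.default_rng(0)

    # Correlated 2-D inputs with a dominant axis (stand-in for visual input).
    cov = np.array([[3.0, 1.2],
                    [1.2, 1.0]])
    x = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

    w = rng.normal(size=2)           # synaptic weight vector
    eta = 0.005                      # learning rate

    for xi in x:
        y = w @ xi                   # linear "neuron" output
        w += eta * y * (xi - y * w)  # Hebbian term plus decay (Oja's rule)

    # The learned weights align (up to sign) with the leading eigenvector.
    eigvals, eigvecs = np.linalg.eigh(cov)
    print("learned w :", w / np.linalg.norm(w))
    print("first PC  :", eigvecs[:, np.argmax(eigvals)])
    ```

    Up to sign, the learned weight vector matches the first principal component, which is the usual formal bridge between Hebbian plasticity and PCA.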

    25th Annual Computational Neuroscience Meeting: CNS-2016

    Abstracts of the 25th Annual Computational Neuroscience Meeting: CNS-2016, Seogwipo City, Jeju-do, South Korea, 2–7 July 2016.

    Learning and Decision Making in Social Contexts: Neural and Computational Models

    Social interaction is one of humanity's defining features. Through it, we develop ideas, express emotions, and form relationships. In this thesis, we explore the topic of social cognition by building biologically plausible computational models of learning and decision making. Our goal is to develop mechanistic explanations for how the brain performs a variety of social tasks, to test those theories by simulating neural networks, and to validate our models by comparing them to human and animal data. We begin by introducing social cognition from functional and anatomical perspectives, then present the Neural Engineering Framework, which we use throughout the thesis to specify functional brain models. Over the course of four chapters, we investigate many aspects of social cognition using these models. We first study fear conditioning using an anatomically accurate model of the amygdala. We validate this model by comparing the response properties of our simulated neurons with those of real amygdala neurons, showing that simulated behavior is consistent with animal data, and exploring how simulated fear generalization relates to normal and anxious humans. Next, we show that biologically detailed networks may realize cognitive operations that are essential for social cognition. We validate this approach by constructing a working memory network from multi-compartment cells and conductance-based synapses, then showing that its mnemonic performance is comparable to that of animals performing a delayed match-to-sample task. In the next chapter, we study decision making and the tradeoff between speed and accuracy: our network gathers information from the environment and tracks the value of choice alternatives, making a decision once certain criteria are met. We apply this model to a two-choice decision task, fit model parameters to recreate the behavior of individual humans, and reproduce the speed-accuracy tradeoff evident in the human population. Finally, we combine our networks for learning, working memory, and decision making into a cognitive agent that uses reinforcement learning to play a simple social game. We compare this model with two other cognitive architectures and with human data from an experiment we ran, and show that our three cognitive agents recreate important patterns in the human data, especially those related to social value orientation and cooperative behavior. Our concluding chapter summarizes our contributions to the field of social cognition and proposes directions for further research. The main contribution of this thesis is the demonstration that a diverse set of social cognitive abilities may be explained, simulated, and validated using a functionally descriptive, biologically plausible theoretical framework. Our models lay a foundation for studying increasingly sophisticated forms of social cognition in future work.
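    The speed-accuracy tradeoff studied in the decision-making chapter is commonly formalized as evidence accumulation to a bound: noisy evidence is integrated until it crosses a threshold, and raising the threshold slows responses while improving accuracy. The sketch below shows this abstract drift-diffusion process, not the thesis's spiking NEF implementation; all parameter values are illustrative.

    ```python
    import numpy as np

    def ddm_trial(drift=0.3, noise=1.0, bound=1.0, dt=1e-3, rng=None):
        """One drift-diffusion trial; returns (correct?, reaction time)."""
        if rng is None:
            rng = np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return (x > 0), t  # positive drift, so x > 0 is the correct choice

    rng = np.random.default_rng(1)
    for bound in (0.5, 1.0, 2.0):  # a higher bound trades speed for accuracy
        trials = [ddm_trial(bound=bound, rng=rng) for _ in range(500)]
        acc = np.mean([c for c, _ in trials])
        rt = np.mean([t for _, t in trials])
        print(f"bound={bound:.1f}  accuracy={acc:.2f}  mean RT={rt:.2f} s")
    ```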

    Investigating Information Flows in Spiking Neural Networks With High Fidelity

    The brains of many organisms are capable of a wide variety of complex computations. This capability must be undergirded by a more general-purpose computational capacity. The exact nature of this capacity, how it is distributed across the brains of organisms, and how it arises over the course of development are open topics of scientific investigation. Individual neurons are widely considered to be the fundamental computational units of brains. Moreover, the finest scale at which large-scale recordings of brain activity can be performed is the spiking activity of neurons, and our ability to perform these recordings over large numbers of neurons and with fine spatial resolution is increasing rapidly. This makes the spiking activity of individual neurons a highly attractive data modality on which to study neural computation. The framework of information dynamics has proven to be a successful approach to interrogating the capacity for general-purpose computation. It does this by revealing the atomic information-processing operations of information storage, transfer, and modification. Unfortunately, the study of information flows and other information-processing operations from the spiking activity of neurons has been severely hindered by the lack of effective tools for estimating these quantities on this data modality. This thesis remedies that situation by presenting an estimator for information flows, as measured by Transfer Entropy (TE), that operates in continuous time on event-based data such as spike trains. Unlike the previous approach to the estimation of this quantity, which discretised the process into time bins, this estimator operates on the raw inter-spike intervals. It is demonstrated to be far superior to the previous discrete-time approach in terms of consistency, rate of convergence, and bias. Most importantly, unlike the discrete-time approach, which requires a hard tradeoff between capturing fine temporal precision and capturing history effects occurring over reasonable time intervals, this estimator can capture history effects occurring over relatively large intervals without any loss of temporal precision. This estimator is applied to developing dissociated cultures of cortical rat neurons, thereby providing the first high-fidelity study of information flows on spiking data. It is found that the spatial structure of the flows locks in, to a significant extent, at the point of their emergence, and that certain nodes occupy specialised computational roles as transmitters, receivers, or mediators of information flow. Moreover, these roles are also found to lock in early. In order to fully understand the structure of neural information flows, however, we are required to go beyond pairwise interactions, and indeed multivariate information flows have become an important tool in the inference of effective networks from neuroscience data. These are directed networks where each node is connected to a minimal set of sources which maximally reduce the uncertainty in its present state. However, the application of multivariate information flows to the inference of effective networks from spiking data has been hampered by the above-mentioned issues with preexisting estimation techniques. Here, a greedy algorithm which iteratively builds a set of parents for each target node using multivariate transfer entropies, and which has already been well validated in the context of traditional discretely sampled time series, is adapted for use in conjunction with the newly developed estimator for event-based data. The combination of the greedy algorithm and the continuous-time estimator is then validated on simulated examples for which the ground truth is known. The new capabilities in the estimation of information flows and the inference of effective networks on event-based data presented in this work represent a very substantial step forward in our ability to perform these analyses on the ever-growing set of high-resolution, large-scale recordings of interacting neurons. As such, this work promises to enable substantial quantitative insights in the future regarding how neurons interact, how they process information, and how this changes under different conditions such as disease.
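    For contrast with the continuous-time approach, here is a minimal sketch of the classical discrete-time TE estimator that this work improves upon: both spike trains are binned, and TE is computed from plug-in probabilities with a single bin of history. The hard tradeoff lives in the bin width, since small bins give temporal precision but see little history, while large bins do the opposite. The toy data and parameters are invented, and this is the baseline method, not the thesis's estimator.

    ```python
    import numpy as np

    def binned_te(src, tgt, t_max, bin_width):
        """Plug-in discrete-time TE(src -> tgt), one bin of history, bits/bin."""
        bins = np.arange(0.0, t_max + bin_width, bin_width)
        x = np.histogram(src, bins)[0].clip(0, 1)  # binarized source train
        y = np.histogram(tgt, bins)[0].clip(0, 1)  # binarized target train
        sym = y[1:] * 4 + y[:-1] * 2 + x[:-1]      # symbol (y_t, y_{t-1}, x_{t-1})
        p = np.bincount(sym, minlength=8) / sym.size
        te = 0.0
        for s in range(8):
            if p[s] == 0:
                continue
            yt, yp, xp = s >> 2, (s >> 1) & 1, s & 1
            # Marginals needed for p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1})
            p_yp_xp = p[[yp * 2 + xp, 4 + yp * 2 + xp]].sum()
            p_yt_yp = p[[yt * 4 + yp * 2, yt * 4 + yp * 2 + 1]].sum()
            p_yp = p[[yp * 2, yp * 2 + 1, 4 + yp * 2, 4 + yp * 2 + 1]].sum()
            te += p[s] * np.log2(p[s] * p_yp / (p_yt_yp * p_yp_xp))
        return te

    # Toy example: tgt fires ~10 ms after src, so the src -> tgt flow
    # should clearly exceed the reverse direction.
    rng = np.random.default_rng(2)
    src = np.sort(rng.uniform(0.0, 100.0, 400))
    tgt = src + 0.01 + rng.normal(0.0, 0.002, src.size)
    print("TE(src->tgt):", binned_te(src, tgt, 100.0, 0.02))
    print("TE(tgt->src):", binned_te(tgt, src, 100.0, 0.02))
    ```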

    Taming neuronal noise with large networks

    How does reliable computation emerge from networks of noisy neurons? While individual neurons are intrinsically noisy, the collective dynamics of populations of neurons taken as a whole can be almost deterministic, supporting the hypothesis that, in the brain, computation takes place at the level of neuronal populations. Mathematical models of networks of noisy spiking neurons allow us to study the effects of neuronal noise on the dynamics of large networks. Classical mean-field models, i.e., models where all neurons are identical and where each neuron receives the average spike activity of the other neurons, offer toy examples where neuronal noise is absorbed in large networks; that is, large networks behave like deterministic systems. In particular, the dynamics of these large networks can be described by deterministic neuronal population equations. In this thesis, I first generalize classical mean-field limit proofs to a broad class of spiking neuron models that can exhibit spike-frequency adaptation and short-term synaptic plasticity, in addition to refractoriness. The mean-field limit can be exactly described by a multidimensional partial differential equation, whose long-time behavior can be rigorously studied using deterministic methods. Then, we show that there is a conceptual link between mean-field models for networks of spiking neurons and latent variable models used for the analysis of multi-neuronal recordings. More specifically, we use a recently proposed finite-size neuronal population equation, which we first mathematically clarify, to design a tractable Expectation-Maximization-type algorithm capable of inferring the latent population activities of multi-population spiking neural networks from the spike activity of only a few visible neurons, illustrating the idea that latent variable models can be seen as partially observed mean-field models. In classical mean-field models, neurons in large networks behave like independent, identically distributed processes driven by the average population activity, a deterministic quantity by the law of large numbers. The fact that the neurons are identically distributed processes implies a form of redundancy that has not been observed in the cortex and which seems biologically implausible. To show, numerically, that the redundancy present in classical mean-field models is unnecessary for neuronal noise absorption in large networks, I construct a disordered network model where networks of spiking neurons behave like deterministic rate networks, despite the absence of redundancy. This last result suggests that the concentration of measure phenomenon, which generalizes the "law of large numbers" of classical mean-field models, might be an instrumental principle for understanding the emergence of noise-robust population dynamics in large networks of noisy neurons.
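    The core intuition behind this noise absorption can be shown in a few lines: for independent noisy neurons, fluctuations of the population-averaged activity shrink roughly as 1/sqrt(N), which mean-field limits make exact as N grows. The sketch below demonstrates only this law-of-large-numbers effect on independent Poisson neurons; it is not the interacting-network limits or the disordered model analyzed in the thesis, and all parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    rate, dt, steps = 10.0, 1e-3, 1000   # 10 Hz neurons, 1 s of activity

    for n in (10, 100, 1000, 10000):
        # n independent Poisson neurons; spike counts per time step
        spikes = rng.poisson(rate * dt, size=(steps, n))
        pop = spikes.mean(axis=1) / dt   # population rate in Hz
        print(f"N={n:>5}  std of population rate: {pop.std():7.2f} Hz")
    ```

    Each tenfold increase in N shrinks the standard deviation by about sqrt(10), so the population rate approaches its deterministic mean-field value.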

    Interplay between astrocytic and neuronal networks during virtual navigation in the mouse hippocampus

    Encoding of spatial information in hippocampal place cells is believed to contribute to spatial cognition during navigation. Whether the processing of spatial information is exclusively limited to neuronal cells or also involves other cell types in the brain, e.g., glial cells, is currently unknown. In this thesis work, I developed an analysis pipeline to tackle this question using statistical methods and information theory approaches. I applied these analytical tools to two experimental data sets in which neuronal place cells in the hippocampus were imaged using two-photon microscopy while astrocytic calcium dynamics were selectively manipulated with pharmacogenetics during virtual navigation. Using custom analytical methods, we observed that pharmacogenetic perturbation of astrocytic calcium dynamics, through clozapine-N-oxide (CNO) injection, induced a significant increase in neuronal place field and response profile width compared to control conditions. The distributions of neuronal place field and response profile centers were also significantly different upon perturbation of astrocytic calcium dynamics compared to control conditions. Moreover, we found contrasting effects of the perturbation of astrocytic calcium dynamics on the neuronal content of spatial information in the two data sets. In the first data set, we found that CNO injection resulted in a significant increase in the average information content across all neurons. In the second data set, we instead found that mutual information values were not significantly different upon CNO application compared to controls. Although the presented results are still preliminary and more experiments and analyses are needed, these findings suggest that astrocytic calcium dynamics may actively control the way hippocampal neuronal networks encode spatial information during virtual navigation. These data thus suggest a complex and tight interplay between neuronal and astrocytic networks during higher cognitive functions.
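    The spatial information analyses referred to above are commonly based on the Skaggs information measure, which quantifies how much a cell's firing-rate map tells an observer about the animal's position. The sketch below gives one standard formulation in bits per spike; it is offered as an illustration of the kind of quantity involved, not necessarily the exact pipeline of this work, and the toy rate maps are invented.

    ```python
    import numpy as np

    def spatial_information(rate_map, occupancy):
        """Skaggs spatial information in bits per spike."""
        p = occupancy / occupancy.sum()        # occupancy probability per bin
        mean_rate = np.sum(p * rate_map)       # overall mean firing rate
        valid = (rate_map > 0) & (p > 0)
        r = rate_map[valid] / mean_rate
        return np.sum(p[valid] * r * np.log2(r))

    # Toy 1-D rate maps: a sharp place field carries more bits per spike.
    pos = np.arange(100)                       # 100 spatial bins
    occ = np.ones(100)                         # uniform occupancy
    sharp = 10.0 * np.exp(-0.5 * ((pos - 50) / 3.0) ** 2) + 0.1
    broad = 10.0 * np.exp(-0.5 * ((pos - 50) / 15.0) ** 2) + 0.1
    print(f"sharp field: {spatial_information(sharp, occ):.2f} bits/spike")
    print(f"broad field: {spatial_information(broad, occ):.2f} bits/spike")
    ```

    Under this measure, broader place fields tend to carry fewer bits per spike, which is one way field width and information content are related.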

    SpiNNaker - A Spiking Neural Network Architecture

    Twenty years in conception and fifteen in construction, the SpiNNaker project has delivered the world’s largest neuromorphic computing platform, incorporating over a million ARM mobile phone processors and capable of modelling spiking neural networks at the scale of a mouse brain in biological real time. This machine, hosted at the University of Manchester in the UK, is freely available under the auspices of the EU Flagship Human Brain Project. This book tells the story of the origins of the machine, its development, and its deployment, and of the immense software development effort that has gone into making it openly available and accessible to researchers and students the world over. It also presents exemplar applications, from ‘Talk’, a SpiNNaker-controlled robotic exhibit at the Manchester Art Gallery shown as part of ‘The Imitation Game’, a set of works commissioned in 2016 in honour of Alan Turing, through to a way to solve hard computing problems using stochastic neural networks. The book concludes with a look to the future and the SpiNNaker-2 machine, which is yet to come.

    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors.

    Cardiac Arrhythmias

    The most intimate mechanisms of cardiac arrhythmias are still largely unknown to scientists. Genetic studies of ionic alterations, the electrocardiographic features of cardiac rhythm, and an arsenal of diagnostic tests have accomplished more in the last five years than in all the previous history of cardiology. Similarly, therapy to prevent or cure such diseases is advancing rapidly day by day. In this book the reader will be able to see in a brighter light some of these intimate mechanisms of production, as well as cutting-edge therapies to date. Genetic studies, electrophysiological and electrocardiographic features, ion channel alterations, heart diseases still unknown, and even the relationship between the psychic sphere and the heart are explored in this book. It deserves to be read.