
    Brain Computations and Connectivity [2nd edition]

    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. To understand this, it is essential to know what is computed by different brain systems and how the computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This book is pioneering in taking this approach to brain function, considering both what is computed by many of our brain systems and how it is computed; it updates, with much new evidence including the connectivity of the human brain, the earlier book Rolls (2021) Brain Computations: What and How, Oxford University Press. Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, from medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.

    How does the brain extract acoustic patterns? A behavioural and neural study

    In complex auditory scenes, the brain exploits statistical regularities to group sound elements into streams. Previous studies, using tones that transition from being randomly drawn to regularly repeating, have highlighted a network of brain regions involved in this process of regularity detection, including auditory cortex (AC) and hippocampus (HPC; Barascud et al., 2016). In this thesis, I seek to understand how neurons within AC and HPC detect and maintain a representation of deterministic acoustic regularity. I trained ferrets (n = 6) on a GO/NO-GO task to detect the transition from a random sequence of tones to a repeating pattern of tones, with increasing pattern lengths (3, 5 and 7). All animals performed significantly above chance, with longer reaction times and declining performance as the pattern length increased. During performance of the behavioural task, or during passive listening, I recorded from primary and secondary fields of AC with multi-electrode arrays (behaving: n = 3), or from AC and HPC using Neuropixels probes (behaving: n = 1; passive: n = 1). In the local field potential, I identified no differences in the evoked response between presentations of random and regular sequences. Instead, I observed significant increases in oscillatory power at the rate of the repeating pattern, and decreases at the tone presentation rate, during regularity. Neurons in AC, across the population, showed higher firing with more repetitions of the pattern and for shorter pattern lengths. Single units within AC showed higher precision in their firing when responding to their best frequency during regularity. Neurons in both AC and HPC entrained to the pattern rate during presentation of the regular sequence when compared to the random sequence. Lastly, the development of an optogenetic approach to inactivate AC in the ferret paves the way for future work to probe the causal involvement of these brain regions.
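
    A minimal sketch of the kind of spectral comparison described above, using an assumed tone rate (8 tones/s), an assumed pattern length (5), and a synthetic LFP-like signal rather than the recorded data; it only illustrates how power at the pattern repetition rate can be separated from power at the tone presentation rate.

        # Illustrative only: synthetic signal and assumed rates, not the thesis's data or pipeline.
        import numpy as np
        from scipy.signal import welch

        fs = 1000.0                              # sampling rate (Hz), assumed
        tone_rate = 8.0                          # tone presentation rate (Hz), assumed
        pattern_len = 5                          # tones per repeating pattern
        pattern_rate = tone_rate / pattern_len   # pattern repetition rate (Hz)

        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(0)

        # "Random" segment: modulation only at the tone rate, plus noise.
        random_seg = np.sin(2 * np.pi * tone_rate * t) + rng.normal(0, 1, t.size)
        # "Regular" segment: additional modulation at the pattern repetition rate.
        regular_seg = (0.5 * np.sin(2 * np.pi * tone_rate * t)
                       + np.sin(2 * np.pi * pattern_rate * t)
                       + rng.normal(0, 1, t.size))

        def power_at(x, f0, bw=0.5):
            """Mean power spectral density within +/- bw Hz of frequency f0."""
            f, pxx = welch(x, fs=fs, nperseg=4096)
            return pxx[(f > f0 - bw) & (f < f0 + bw)].mean()

        for name, seg in [("random", random_seg), ("regular", regular_seg)]:
            print(name,
                  "pattern-rate power:", round(power_at(seg, pattern_rate), 3),
                  "tone-rate power:", round(power_at(seg, tone_rate), 3))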

    Efficient Numerical Population Density Techniques with an Application in Spinal Cord Modelling

    MIIND is a neural simulator which uses an innovative numerical population density technique to simulate the behaviour of multiple interacting populations of neurons under the influence of noise. Recent efforts have produced similar techniques but they are often limited to a single neuron model or type of behaviour. Extensions to these require a great deal of further work and specialist knowledge. The technique used in MIIND overcomes this limitation by being agnostic to the underlying neuron model of each population. However, earlier versions of MIIND still required a high level of technical knowledge to set up the software and involved an often time-consuming manual pre-simulation process. It was also limited to only two-dimensional neuron models. This thesis presents the development of an alternative population density technique, based on that already in MIIND, which reduces the pre-simulation step to an automated process. The new technique is much more flexible and has no limit on the number of time-dependent variables in the underlying neuron model. For the first time, the population density over the state space of the Hodgkin-Huxley neuron model can be observed in an efficient manner on a single PC. The technique allows simulation time to be significantly reduced by gracefully degrading the accuracy without losing important behavioural features. The MIIND software itself has also been simplified, reducing technical barriers to entry, so that it can now be run from a Python script and installed as a Python module. With the improved usability, a model of neural populations in the spinal cord was simulated in MIIND. It showed how afferent signals can be integrated into common reflex circuits to produce observed patterns of muscle activation during an isometric knee extension task. The influence of proprioception in motor control is not fully understood as it can be both task and subject-specific. The results of this study show that afferent signals have a significant effect on sub-maximal muscle contractions even when the limb remains static. Such signals should be considered when developing methods to improve motor control in activities of daily living via therapeutic or mechanical means
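
    As an illustration of the general idea behind numerical population density techniques (this is not MIIND's API or its geometric method), the following sketch evolves the membrane-potential density of a leaky integrate-and-fire population under a diffusion approximation of noisy input; all parameters are assumed.

        # Toy population density method (not MIIND): explicit finite-difference update of the
        # Fokker-Planck equation for a leaky integrate-and-fire population; parameters are illustrative.
        import numpy as np

        tau = 0.02                   # membrane time constant (s)
        mu, sigma = 18.0, 5.0        # mean drive and noise amplitude (mV), assumed
        v_reset, v_th = 0.0, 20.0    # reset and threshold potentials (mV)

        n_bins = 200
        v = np.linspace(-10.0, v_th, n_bins)
        dv = v[1] - v[0]
        dt = 1e-5                    # small step for explicit-scheme stability
        reset_bin = np.argmin(np.abs(v - v_reset))

        rho = np.zeros(n_bins)       # probability density over membrane potential
        rho[reset_bin] = 1.0 / dv    # all mass starts at the reset potential

        def step(rho):
            drift = (mu - v) / tau                       # deterministic leak plus mean drive
            D = sigma**2 / (2 * tau)                     # diffusion coefficient from input noise
            flux = drift * rho - D * np.gradient(rho, dv)
            rho = rho + dt * (-np.gradient(flux, dv))
            rate = max(flux[-1], 0.0)                    # probability flux across threshold = firing rate
            rho[-1] = 0.0                                # absorbing boundary at threshold
            rho[reset_bin] += rate * dt / dv             # re-inject fired mass at the reset potential
            rho = np.clip(rho, 0.0, None)
            return rho / (rho.sum() * dv), rate          # renormalise to total probability 1

        rate = 0.0
        for _ in range(10000):                           # about 100 ms of simulated time
            rho, rate = step(rho)
        print("approximate steady-state firing rate (Hz):", round(rate, 1))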

    The Network Science Of Distributed Representational Systems

    From brains to science itself, distributed representational systems store and process information about the world. In brains, complex cognitive functions emerge from the collective activity of billions of neurons, and in science, new knowledge is discovered by building on previous discoveries. In both systems, many small individual units—neurons and scientific concepts—interact to inform complex behaviors in the systems they comprise. The patterns in the interactions between units are telling; pairwise interactions not only trivially affect pairs of units, but they also form structural and dynamic patterns with more than just pairs, on a larger scale of the network. Recently, network science adapted methods from graph theory, statistical mechanics, information theory, algebraic topology, and dynamical systems theory to study such complex systems. In this dissertation, we use such cutting-edge methods in network science to study complex distributed representational systems in two domains: cascading neural networks in the domain of neuroscience and concept networks in the domain of science of science. In the domain of neuroscience, the brain is a system that supports complex behavior by storing and processing information from the environment on long time scales. Underlying such behavior is a network of millions of interacting neurons. Many recent studies measure neural activity on the scale of the whole brain with brain regions as units or on the scale of brain regions with individual neurons as units. While many studies have explored the neural correlates of behaviors on these scales, it is less explored how neural activity can be decomposed into low-level patterns. Network science has shown potential to advance our understanding of large-scale brain networks, and here, we apply network science to further our understanding of low-level patterns in small-scale neural networks. Specifically, we explore how the structure and dynamics of biological neural networks support information storage and computation in spontaneous neural activity in slice recordings of rodent brains. Our results illustrate the relationships between network structure, dynamics, and information processing in neural systems. In the domain of science of science, the practice of science itself is a system that discovers and curates information about the physical and social world. For centuries, philosophers, historians, and sociologists of science have theorized about the process and practice of scientific discovery. Recently, the field of science of science has emerged to use a more data-driven approach to quantify the process of science. However, it remains unclear how recent advances in science of science either support or refute the various theories from the philosophies of science. Here, we use a network science approach to operationalize theories from prominent philosophers of science, and we test those theories using networks of hyperlinked articles in Wikipedia, the largest online encyclopedia. Our results support a nuanced view of philosophies of science—that science does not grow outward, as many may intuit, but by filling in gaps in knowledge. In this dissertation, we examine cascading neural networks first in Chapters 2 through 4 and then concept networks in Chapter 5. The studies in Chapters 2 to 4 highlight the role of patterns in the connections of neural networks in storing information and performing computations. 
The study in Chapter 5 describes patterns in the historical growth of concept networks of scientific knowledge from Wikipedia. Together, these analyses aim to shed light on the network science of distributed representational systems that store and process information about the world
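
    A toy sketch of the cascading picture studied in Chapters 2 to 4 (network size, wiring probability, and transmission probability are all assumed, and this is not the dissertation's analysis code): activity seeded at one node propagates probabilistically along directed connections, and the resulting cascade sizes can be measured.

        # Toy cascade (branching-process) simulation on a random directed network; illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        n, p_conn, p_trans = 200, 0.05, 0.09       # nodes, wiring probability, transmission probability
        A = rng.random((n, n)) < p_conn            # random directed adjacency matrix
        np.fill_diagonal(A, False)

        def cascade_size(seed):
            """Activate one seed node and propagate until the cascade dies out."""
            active = np.zeros(n, dtype=bool)
            frontier = {int(seed)}
            while frontier:
                active[list(frontier)] = True
                nxt = set()
                for i in frontier:
                    targets = np.flatnonzero(A[i] & ~active)
                    fired = targets[rng.random(targets.size) < p_trans]
                    nxt.update(int(j) for j in fired)
                frontier = nxt
            return int(active.sum())

        sizes = [cascade_size(rng.integers(n)) for _ in range(500)]
        print("mean cascade size:", np.mean(sizes), " max:", max(sizes))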

    Simulation and Theory of Large-Scale Cortical Networks

    Cerebral cortex is composed of intricate networks of neurons. These neuronal networks are strongly interconnected: every neuron receives, on average, input from thousands or more presynaptic neurons. In fact, to support such a number of connections, a majority of the volume in the cortical gray matter is filled by axons and dendrites. Besides the networks, neurons themselves are also highly complex. They possess an elaborate spatial structure and support various types of active processes and nonlinearities. In the face of such complexity, it seems necessary to abstract away some of the details and to investigate simplified models. In this thesis, such simplified models of neuronal networks are examined on varying levels of abstraction. Neurons are modeled as point neurons, both rate-based and spike-based, and networks are modeled as block-structured random networks. Crucially, on this level of abstraction, the models are still amenable to analytical treatment using the framework of dynamical mean-field theory. The main focus of this thesis is to leverage the analytical tractability of random networks of point neurons in order to relate the network structure, and the neuron parameters, to the dynamics of the neurons—in physics parlance, to bridge across the scales from neurons to networks. More concretely, four different models are investigated: 1) fully connected feedforward networks and vanilla recurrent networks of rate neurons; 2) block-structured networks of rate neurons in continuous time; 3) block-structured networks of spiking neurons; and 4) a multi-scale, data-based network of spiking neurons. We consider the first class of models in the light of Bayesian supervised learning and compute their kernel in the infinite-size limit. In the second class of models, we connect dynamical mean-field theory with large-deviation theory, calculate beyond mean-field fluctuations, and perform parameter inference. For the third class of models, we develop a theory for the autocorrelation time of the neurons. Lastly, we consolidate data across multiple modalities into a layer- and population-resolved model of human cortex and compare its activity with cortical recordings. In two detours from the investigation of these four network models, we examine the distribution of neuron densities in cerebral cortex and present a software toolbox for mean-field analyses of spiking networks
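
    A hedged sketch of the second class of models described above, a block-structured random network of rate neurons with an excitatory and an inhibitory population; population sizes, block statistics, and the nonlinearity are illustrative choices, not the thesis's parameters.

        # Block-structured random rate network (illustrative parameters), integrated with Euler steps.
        import numpy as np

        rng = np.random.default_rng(2)
        sizes = {"E": 400, "I": 100}
        n = sum(sizes.values())
        idx = {"E": slice(0, 400), "I": slice(400, 500)}

        # Connectivity statistics depend only on the (target, source) population pair,
        # as in block-structured dynamical mean-field treatments.
        mean_w = {("E", "E"): 1.0, ("E", "I"): -2.0, ("I", "E"): 1.0, ("I", "I"): -1.8}
        J = np.zeros((n, n))
        for tgt in sizes:
            for src in sizes:
                block = rng.normal(mean_w[(tgt, src)], 1.0, (sizes[tgt], sizes[src]))
                J[idx[tgt], idx[src]] = block / np.sqrt(sizes[src])

        dt, tau = 0.1, 1.0
        x = rng.normal(0, 1, n)
        for _ in range(2000):                     # Euler integration of tau dx/dt = -x + J tanh(x)
            x += dt / tau * (-x + J @ np.tanh(x))

        rates = np.tanh(x)
        print("mean E rate:", rates[idx["E"]].mean().round(3),
              "  mean I rate:", rates[idx["I"]].mean().round(3))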

    Taming neuronal noise with large networks

    How does reliable computation emerge from networks of noisy neurons? While individual neurons are intrinsically noisy, the collective dynamics of populations of neurons taken as a whole can be almost deterministic, supporting the hypothesis that, in the brain, computation takes place at the level of neuronal populations. Mathematical models of networks of noisy spiking neurons allow us to study the effects of neuronal noise on the dynamics of large networks. Classical mean-field models, i.e., models where all neurons are identical and where each neuron receives the average spike activity of the other neurons, offer toy examples where neuronal noise is absorbed in large networks, that is, where large networks behave like deterministic systems. In particular, the dynamics of these large networks can be described by deterministic neuronal population equations. In this thesis, I first generalize classical mean-field limit proofs to a broad class of spiking neuron models that can exhibit spike-frequency adaptation and short-term synaptic plasticity, in addition to refractoriness. The mean-field limit can be exactly described by a multidimensional partial differential equation, whose long-time behavior can be rigorously studied using deterministic methods. Then, we show that there is a conceptual link between mean-field models for networks of spiking neurons and latent variable models used for the analysis of multi-neuronal recordings. More specifically, we use a recently proposed finite-size neuronal population equation, which we first mathematically clarify, to design a tractable Expectation-Maximization-type algorithm capable of inferring the latent population activities of multi-population spiking neural networks from the spike activity of only a few visible neurons, illustrating the idea that latent variable models can be seen as partially observed mean-field models. In classical mean-field models, neurons in large networks behave like independent, identically distributed processes driven by the average population activity, which is a deterministic quantity by the law of large numbers. The fact that the neurons are identically distributed processes implies a form of redundancy that has not been observed in the cortex and which seems biologically implausible. To show, numerically, that the redundancy present in classical mean-field models is unnecessary for neuronal noise absorption in large networks, I construct a disordered network model where networks of spiking neurons behave like deterministic rate networks, despite the absence of redundancy. This last result suggests that the concentration of measure phenomenon, which generalizes the "law of large numbers" of classical mean-field models, might be an instrumental principle for understanding the emergence of noise-robust population dynamics in large networks of noisy neurons.
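
    To make the noise-absorption point concrete, here is a toy illustration (assumed rates and bin sizes, unrelated to the thesis's specific models): the trial-to-trial variability of the population-averaged activity of independent Poisson-like neurons shrinks roughly as 1/sqrt(N).

        # Law-of-large-numbers illustration: population activity becomes nearly deterministic as N grows.
        import numpy as np

        rng = np.random.default_rng(3)
        rate, dt, steps, trials = 10.0, 1e-3, 1000, 200    # Hz, bin width (s), bins per trial, trials

        for N in (10, 100, 1000, 10000):
            # Number of neurons spiking in each bin ~ Binomial(N, rate * dt), independently per trial.
            counts = rng.binomial(N, rate * dt, size=(trials, steps))
            pop_rate = counts / (N * dt)                   # instantaneous population rate estimate (Hz)
            sd_across_trials = pop_rate.mean(axis=1).std() # variability of the trial-averaged rate
            print(f"N = {N:6d}   across-trial std of mean population rate: {sd_across_trials:.3f} Hz")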

    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors

    Advances in Reinforcement Learning

    Reinforcement Learning (RL) is a very active area in terms of both theory and application. This book brings together many different aspects of current research on the several fields associated with RL, which has been growing rapidly, producing a wide variety of learning algorithms for different applications. Comprising 24 chapters, it covers a very broad variety of topics in RL and their application in autonomous systems. A set of chapters in this book provides a general overview of RL, while other chapters focus mostly on the applications of RL paradigms: Game Theory, Multi-Agent Theory, Robotics, Networking Technologies, Vehicular Navigation, Medicine and Industrial Logistics.
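
    As a concrete example of the kind of RL algorithm the book surveys, here is a minimal tabular Q-learning sketch on a toy five-state chain; the environment and hyperparameters are illustrative and not taken from any chapter.

        # Minimal tabular Q-learning on a 5-state chain MDP; reaching the rightmost state yields reward 1.
        import numpy as np

        n_states, n_actions = 5, 2              # actions: 0 = step left, 1 = step right
        goal = n_states - 1
        alpha, gamma, eps = 0.1, 0.95, 0.1      # learning rate, discount factor, exploration rate
        rng = np.random.default_rng(4)
        Q = np.zeros((n_states, n_actions))

        for episode in range(500):
            s = 0
            while s != goal:
                # Epsilon-greedy action selection.
                a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
                s_next = max(s - 1, 0) if a == 0 else min(s + 1, goal)
                r = 1.0 if s_next == goal else 0.0
                # Q-learning update: bootstrap from the greedy value of the next state.
                Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
                s = s_next

        print("greedy policy per state (0 = left, 1 = right):", Q.argmax(axis=1))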

    Mean-field limit of age and leaky memory dependent Hawkes processes

    We propose a mean-field model of interacting point processes where each process has a memory of the time elapsed since its last event (age) and its recent past (leaky memory), generalizing Age-dependent Hawkes processes. The model is motivated by interacting nonlinear Hawkes processes with Markovian self-interaction and networks of spiking neurons with adaptation and short-term synaptic plasticity. By proving propagation of chaos and using a path integral representation for the law of the limit process, we show that, in the mean-field limit, the empirical measure of the system follows a multidimensional nonlocal transport equation
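
    A hedged sketch of one such process for a single unit, with an illustrative intensity function that is not the paper's: the intensity depends on the age since the last event and on a leaky memory of past events, and the process is simulated by thinning a dominating Poisson process.

        # Age- and leaky-memory-dependent point process, simulated by Ogata-style thinning; parameters assumed.
        import numpy as np

        rng = np.random.default_rng(5)
        T, lam_max = 50.0, 20.0          # horizon (s) and dominating rate for thinning
        tau_mem, refrac = 1.0, 0.2       # leaky-memory decay constant (s), refractory scale (s)

        def intensity(age, mem):
            # The age suppresses firing just after an event; the leaky memory self-excites.
            return min(lam_max, 5.0 * (1.0 - np.exp(-age / refrac)) * (1.0 + mem))

        t, t_last_event, mem, t_mem = 0.0, -np.inf, 0.0, 0.0
        events = []
        while True:
            t += rng.exponential(1.0 / lam_max)          # candidate point of the dominating Poisson process
            if t > T:
                break
            mem_now = mem * np.exp(-(t - t_mem) / tau_mem)
            if rng.random() < intensity(t - t_last_event, mem_now) / lam_max:  # thinning acceptance
                events.append(t)
                t_last_event, mem, t_mem = t, mem_now + 1.0, t
            else:
                mem, t_mem = mem_now, t

        print(f"{len(events)} events in {T} s; mean rate {len(events) / T:.2f} Hz")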

    A Neuromorphic Machine Learning Framework based on the Growth Transform Dynamical System

    As computation increasingly moves from the cloud to the source of data collection, there is a growing demand for specialized machine learning algorithms that can perform learning and inference at the edge in energy- and resource-constrained environments. In this regard, we can take inspiration from small biological systems like insect brains that exhibit high energy efficiency within a small form factor, and show superior cognitive performance using fewer, coarser neural operations (action potentials or spikes) than the high-precision floating-point operations used in deep learning platforms. Attempts at bridging this gap using neuromorphic hardware have produced silicon brains that are orders of magnitude less efficient in both energy dissipation and performance. This is because neuromorphic machine learning (ML) algorithms are traditionally built bottom-up, starting with neuron models that mimic the response of biological neurons and connecting them together to form a network. Neural responses and weight parameters are therefore not optimized w.r.t. any system objective, and it is not evident how individual spikes and the associated population dynamics are related to a network objective. On the other hand, conventional ML algorithms follow a top-down synthesis approach, starting from a system objective (that usually only models task efficiency), and reducing the problem to the model of a non-spiking neuron with non-local updates and little or no control over the population dynamics. I propose that a reconciliation of the two approaches may be key to designing scalable spiking neural networks that optimize for both energy and task efficiency under realistic physical constraints, while enabling spike-based encoding and learning based on local updates in an energy-based framework like traditional ML models. To this end, I first present a neuron model implementing a mapping based on polynomial growth transforms, which allows for independent control over spike forms and transient firing statistics. I show how spike responses are generated as a result of constraint violation while minimizing a physically plausible energy functional involving a continuous-valued neural variable that represents the local power dissipation in a neuron. I then show how the framework can be extended to coupled neurons in a network by remapping synaptic interactions in a standard spiking network. I show how the network can be designed to perform a limited amount of learning in an energy-efficient manner even without synaptic adaptation, by appropriate choices of network structure and parameters: through spiking SVMs that learn to allocate switching energy to neurons that are more important for classification, and through spiking associative memory networks that learn to modulate their responses based on global activity. Lastly, I describe a backpropagation-less learning framework for synaptic adaptation in which weight parameters are optimized w.r.t. a network-level loss function that represents spiking activity across the network, but which produces updates that are local. I show how the approach can be used for unsupervised and supervised learning such that minimizing the training error is equivalent to minimizing the network-level spiking activity. I build upon this framework to introduce end-to-end spiking neural network (SNN) architectures and demonstrate their applicability for energy- and resource-efficient learning using a benchmark dataset.
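
    As a hedged illustration of the growth-transform idea underlying the neuron model (the objective, constant shift, and dimensions below are assumed, not the dissertation's formulation), the following sketch uses a Baum-Eagon style multiplicative update to minimise a quadratic energy over the probability simplex.

        # Growth-transform (Baum-Eagon style) minimisation of a quadratic energy on the simplex; illustrative only.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 5
        Q = rng.normal(size=(n, n))
        Q = Q @ Q.T                                   # positive semi-definite coupling matrix
        b = rng.normal(size=n)

        def energy(p):
            return 0.5 * p @ Q @ p - b @ p

        p = np.full(n, 1.0 / n)                       # start at the centre of the simplex
        C = 10.0 + np.abs(Q).sum() + np.abs(b).sum()  # shift large enough to keep numerators positive
        for it in range(200):
            grad = -(Q @ p - b)                       # gradient of the polynomial to be increased (-energy)
            p = p * (grad + C) / (p @ (grad + C))     # multiplicative update; stays on the simplex
            if it % 50 == 0:
                print(f"iteration {it:3d}   energy {energy(p):.4f}")
        print("final p:", np.round(p, 3), "  sum:", round(float(p.sum()), 6))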