    Simulation and Theory of Large-Scale Cortical Networks

    Cerebral cortex is composed of intricate networks of neurons. These neuronal networks are strongly interconnected: every neuron receives, on average, input from thousands of presynaptic neurons or more. In fact, to support such a number of connections, a majority of the volume in the cortical gray matter is filled by axons and dendrites. Besides the networks, neurons themselves are also highly complex. They possess an elaborate spatial structure and support various types of active processes and nonlinearities. In the face of such complexity, it seems necessary to abstract away some of the details and to investigate simplified models. In this thesis, such simplified models of neuronal networks are examined on varying levels of abstraction. Neurons are modeled as point neurons, both rate-based and spike-based, and networks are modeled as block-structured random networks. Crucially, on this level of abstraction, the models are still amenable to analytical treatment using the framework of dynamical mean-field theory. The main focus of this thesis is to leverage the analytical tractability of random networks of point neurons in order to relate the network structure and the neuron parameters to the dynamics of the neurons; in physics parlance, to bridge across the scales from neurons to networks. More concretely, four different models are investigated: 1) fully connected feedforward networks and vanilla recurrent networks of rate neurons; 2) block-structured networks of rate neurons in continuous time; 3) block-structured networks of spiking neurons; and 4) a multi-scale, data-based network of spiking neurons. We consider the first class of models in the light of Bayesian supervised learning and compute their kernel in the infinite-size limit. In the second class of models, we connect dynamical mean-field theory with large-deviation theory, calculate fluctuations beyond mean-field theory, and perform parameter inference. For the third class of models, we develop a theory for the autocorrelation time of the neurons. Lastly, we consolidate data across multiple modalities into a layer- and population-resolved model of human cortex and compare its activity with cortical recordings. In two detours from the investigation of these four network models, we examine the distribution of neuron densities in cerebral cortex and present a software toolbox for mean-field analyses of spiking networks.
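
    As an illustration of the first class of models, the kernel of a fully connected feedforward network in the infinite-size limit can be computed by a simple layer-wise recursion. The sketch below is not taken from the thesis; it assumes an erf nonlinearity, for which the Gaussian expectation has a closed form (Williams, 1997), and all function and parameter names are hypothetical.

        import numpy as np

        def erf_kernel_layer(K, var_w=1.0, var_b=0.0):
            # One step of the infinite-width kernel recursion for phi = erf:
            # E[erf(z) erf(z')] = (2/pi) arcsin( 2 K_ij / sqrt((1+2 K_ii)(1+2 K_jj)) )
            d = np.diag(K)
            denom = np.sqrt(np.outer(1.0 + 2.0 * d, 1.0 + 2.0 * d))
            return var_b + var_w * (2.0 / np.pi) * np.arcsin(2.0 * K / denom)

        def feedforward_kernel(X, depth=3, var_w=1.0, var_b=0.0):
            # Kernel (Gram matrix) of a deep fully connected network at infinite width,
            # usable as a covariance function for Bayesian (GP) regression.
            K = var_b + var_w * (X @ X.T) / X.shape[1]  # input-layer covariance
            for _ in range(depth):
                K = erf_kernel_layer(K, var_w, var_b)
            return K

        # usage on toy inputs: 5 samples with 10 features each
        X = np.random.randn(5, 10)
        K = feedforward_kernel(X, depth=4)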

    Linking Network and Neuron-level Correlations by Renormalized Field Theory

    It is frequently hypothesized that cortical networks operate close to a critical point. Advantages of criticality include rich dynamics well-suited for computation and critical slowing down, which may offer a mechanism for dynamic memory. However, mean-field approximations, while versatile and popular, inherently neglect the fluctuations responsible for such critical dynamics; thus, a renormalized theory is necessary. We consider the Sompolinsky-Crisanti-Sommers model, which displays a well-studied chaotic transition as well as a magnetic one. Based on the analogue of a quantum effective action, we derive self-consistency equations for the first two renormalized Green's functions. Their self-consistent solution reveals a coupling between the population-level activity and single-neuron heterogeneity. The quantitative theory explains the population autocorrelation function, the single-unit autocorrelation function with its multiple temporal scales, and the cross-correlations.
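
    For reference, a minimal sketch of the model class in question: the Sompolinsky-Crisanti-Sommers dynamics and the standard dynamical mean-field equation for the stationary autocorrelation. This is the textbook mean-field level; the renormalized theory of the paper goes beyond it.

        \dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{N} J_{ij}\,\phi\big(x_j(t)\big), \qquad
        J_{ij} \sim \mathcal{N}\!\left(0, \tfrac{g^2}{N}\right), \quad \phi = \tanh .

        % Dynamical mean-field theory replaces the recurrent input by a Gaussian process;
        % the stationary autocorrelation C(\tau) = \langle x(t)\, x(t+\tau) \rangle then obeys
        \left(1 - \partial_\tau^2\right) C(\tau) = g^2\, C_\phi(\tau), \qquad
        C_\phi(\tau) = \big\langle \phi\big(x(t)\big)\,\phi\big(x(t+\tau)\big) \big\rangle .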

    Large Deviations Approach to Random Recurrent Neuronal Networks: Parameter Inference and Fluctuation-Induced Transitions

    Here we unify the field-theoretical approach to neuronal networks with large-deviations theory. For a prototypical random recurrent network model with continuous-valued units, we show that the effective action is identical to the rate function and derive the latter using field theory. This rate function takes the form of a Kullback-Leibler divergence, which enables data-driven inference of model parameters and the calculation of fluctuations beyond mean-field theory. Lastly, we expose a regime with fluctuation-induced transitions between mean-field solutions. (Comment: extension to multiple populations.)
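
    Schematically, and in hedged notation rather than the paper's own, the large-deviations statement for the empirical measure of unit trajectories reads:

        % Probability of observing an empirical path measure Q in a network of N units
        P\big[\hat{Q}_N \approx Q\big] \asymp e^{-N\, I[Q]}, \qquad
        I[Q] = D_{\mathrm{KL}}\!\big(Q \,\big\|\, P_\theta[Q]\big),

        % where P_\theta[Q] denotes the path measure of a single unit driven by the Gaussian
        % effective input whose statistics are determined self-consistently by Q and by the
        % model parameters \theta. Minimizing I over \theta for an empirically estimated Q
        % gives data-driven parameter inference; expanding I around its minimum gives the
        % fluctuations beyond mean-field theory.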

    Usage and Scaling of an Open-Source Spiking Multi-Area Model of Monkey Cortex

    We are entering an age of 'big' computational neuroscience, in which neural network models are increasing in size and in the number of underlying data sets. Consolidating the zoo of models into large-scale models simultaneously consistent with a wide range of data is only possible through the effort of large teams, which can be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other's work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey with their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as its simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models from the raw experimental data to the visualization of the results, expose the challenges, and give guidance for the construction of ICT infrastructure for neuroscience.
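
    The chapter itself documents the model's own run scripts; the following is merely a generic PyNEST sketch (assuming NEST 3.x) of how a sparse spiking network of the kind used in such models is specified and simulated with NEST, not the multi-area model's actual interface. Population sizes, in-degrees, and weights are placeholder values.

        import nest  # assumes NEST 3.x with the PyNEST bindings

        nest.ResetKernel()
        nest.SetKernelStatus({"resolution": 0.1})  # simulation step in ms

        # two populations of leaky integrate-and-fire neurons
        exc = nest.Create("iaf_psc_exp", 800)
        inh = nest.Create("iaf_psc_exp", 200)
        noise = nest.Create("poisson_generator", params={"rate": 8000.0})
        rec = nest.Create("spike_recorder")

        # sparse random connectivity with fixed in-degree; weights in pA, delays in ms
        nest.Connect(exc, exc + inh,
                     {"rule": "fixed_indegree", "indegree": 80},
                     {"weight": 20.0, "delay": 1.5})
        nest.Connect(inh, exc + inh,
                     {"rule": "fixed_indegree", "indegree": 20},
                     {"weight": -100.0, "delay": 1.5})
        nest.Connect(noise, exc + inh, syn_spec={"weight": 20.0})
        nest.Connect(exc, rec)

        nest.Simulate(1000.0)  # ms
        events = rec.get("events")  # spike senders and times for further analysis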

    A Microscopic Theory of Intrinsic Timescales in Spiking Neural Networks

    A complex interplay of single-neuron properties and the recurrent network structure shapes the activity of cortical neurons. The single-neuron activity statistics differ in general from the respective population statistics, including the spectra and, correspondingly, the autocorrelation times. We develop a theory for self-consistent second-order single-neuron statistics in block-structured sparse random networks of spiking neurons. In particular, the theory predicts the neuron-level autocorrelation times, also known as intrinsic timescales, of the neuronal activity. The theory is based on an extension of dynamical mean-field theory from rate networks to spiking networks, which is validated via simulations. It accounts for both static variability, e.g., due to a distributed number of incoming synapses per neuron, and temporal fluctuations of the input. We apply the theory to balanced random networks of generalized linear model neurons, balanced random networks of leaky integrate-and-fire neurons, and a biologically constrained network of leaky integrate-and-fire neurons. For the generalized linear model network with an error-function nonlinearity, a novel analytical solution of the colored-noise problem allows us to obtain self-consistent firing-rate distributions, single-neuron power spectra, and intrinsic timescales. For the leaky integrate-and-fire networks, we derive an approximate analytical solution of the colored-noise problem, based on the Stratonovich approximation of the Wiener-Rice series and a novel analytical solution for the free upcrossing statistics. Again closing the system self-consistently, this approximation yields, in the fluctuation-driven regime, reliable estimates of the mean firing rate and its variance across neurons, the inter-spike-interval distribution, the single-neuron power spectra, and the intrinsic timescales.
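
    As a practical companion to the notion of an intrinsic timescale, the following sketch (not the paper's method) estimates a single-neuron autocorrelation time from a spike train by fitting an exponential decay to the autocorrelation of the binned spike counts; function and parameter names are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def intrinsic_timescale(spike_times, t_max, bin_size=2.0, max_lag=200.0):
            # Estimate an intrinsic timescale (ms) by fitting exp(-lag/tau) to the
            # normalized autocorrelation of the binned, mean-subtracted spike train.
            bins = np.arange(0.0, t_max + bin_size, bin_size)
            counts = np.histogram(spike_times, bins)[0].astype(float)
            counts -= counts.mean()
            ac = np.correlate(counts, counts, mode="full")[counts.size - 1:]
            ac /= ac[0]  # normalize to unit zero-lag value
            lags = np.arange(ac.size) * bin_size
            sel = (lags > 0) & (lags <= max_lag)
            (tau,), _ = curve_fit(lambda t, tau: np.exp(-t / tau),
                                  lags[sel], ac[sel], p0=[20.0])
            return tau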

    Network Models I & II


    Theory of Intrinsic Timescales in Spiking Neural Networks

    We investigate intrinsic timescales, characterized by single-unit autocorrelation times, in spiking neural network models that incorporate exhaustive experimental data about the network architecture [1, 2, 3]. In vivo, electrophysiological recordings during the resting state reveal a hierarchical structure of intrinsic timescales that matches anatomical hierarchies remarkably well [4]. Using dynamical mean-field theory, we try to elucidate this apparent interrelation between network structure and intrinsic timescales.

    In a first step, we reduce the dynamics of the recurrent network of spiking neurons to a set of self-consistent, one-dimensional stochastic differential equations. To this end, we make use of a dynamical mean-field theory originally developed for spin glasses [5]. The starting point of this theory is the system's characteristic functional, and it proceeds with a disorder average, a Hubbard-Stratonovich transformation, and a saddle-point approximation. Although technically involved, the result is quite intuitive: the massive recurrent input each neuron receives is replaced by an effective Gaussian process.

    To obtain the intrinsic timescale from the reduced dynamics, we have to calculate the correlation function of a spiking neuron driven by a non-Markovian Gaussian process. In the low-firing-rate regime, where the mean interspike interval exceeds the correlation time of the input, a renewal approximation is admissible. As a renewal process is fully characterized by its hazard function, we derive novel approximations for the hazard function of a leaky integrate-and-fire neuron driven by a non-Markovian Gaussian process. This enables us to obtain an analytically closed system of self-consistent equations for the autocorrelation functions of single neurons in recurrent networks. By formulating these analytical expressions, a thorough investigation of the effect of network architecture on intrinsic timescales becomes possible.
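
    For orientation, the standard renewal-theory relations that connect the hazard function to the quantities used above (interspike-interval density, spike-train power spectrum, and hence the autocorrelation) are, schematically:

        % ISI density of a renewal process with hazard function h(\tau)
        \rho(\tau) = h(\tau)\, \exp\!\left(-\int_0^{\tau} h(s)\, ds\right),

        % power spectrum of the stationary renewal spike train with rate r,
        % where \tilde{\rho}(\omega) = \int_0^{\infty} \rho(\tau)\, e^{i\omega\tau}\, d\tau
        S(\omega) = r\, \frac{1 - |\tilde{\rho}(\omega)|^2}{|1 - \tilde{\rho}(\omega)|^2}.

        % The single-neuron autocorrelation follows by inverse Fourier transform of S(\omega),
        % and the intrinsic timescale is read off from its decay.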