23 research outputs found
How Chaotic is the Balanced State?
Large sparse circuits of spiking neurons exhibit a balanced state of highly irregular activity under a wide range of conditions. This state occurs both in sparsely connected random networks that receive excitatory external input and recurrent inhibition and in networks with mixed recurrent inhibition and excitation. Here we analytically investigate these irregular dynamics in finite networks, keeping track of all individual spike times and the identities of individual neurons. For delayed, purely inhibitory interactions we show that the irregular dynamics are not chaotic but stable. Moreover, we demonstrate that after long transients the dynamics converge towards periodic orbits and that every generic periodic orbit of these dynamical systems is stable. We investigate the collective irregular dynamics upon increasing the time scale of the synaptic responses and upon iteratively replacing inhibitory with excitatory interactions. Whereas the dynamics stay stable for small and moderate synaptic time scales as well as for few excitatory interactions, there is a smooth transition to chaos if the synaptic response becomes sufficiently slow (even in purely inhibitory networks) or the number of excitatory interactions becomes too large. These results indicate that chaotic and stable dynamics are equally capable of generating irregular neuronal activity. More generally, chaos is apparently not essential for generating the high irregularity of balanced activity, and we suggest that a mechanism different from chaos and stochasticity contributes significantly to irregular activity in cortical circuits.
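The stability question above can be probed numerically by comparing a trajectory with a minutely perturbed copy. The following sketch (an illustration with arbitrary parameters, not the paper's analytical method) simulates a small sparse inhibitory network of leaky integrate-and-fire neurons twice, with identical wiring but a tiny perturbation to one initial membrane potential, and records the distance between the two trajectories; shrinking distances indicate stable dynamics, growing distances indicate chaos.

```python
# Hedged sketch: perturbation growth/decay in a sparse inhibitory LIF network.
# All names and parameter values are illustrative choices, not from the paper.
import math
import random

def simulate(v0, n=50, k=5, steps=2000, dt=0.1, tau=1.0, i_ext=1.2, g=0.1, seed=0):
    rng = random.Random(seed)                              # fixed seed: same wiring in both runs
    targets = [rng.sample(range(n), k) for _ in range(n)]  # sparse inhibitory wiring
    v = list(v0)
    traj = []
    for _ in range(steps):
        spikes = [i for i in range(n) if v[i] >= 1.0]      # threshold 1, reset 0
        for i in spikes:
            v[i] = 0.0
        inh = [0.0] * n
        for i in spikes:
            for j in targets[i]:
                inh[j] += g                                # inhibitory pulse to targets
        for j in range(n):
            v[j] += dt * (i_ext - v[j]) / tau - inh[j]     # Euler step of LIF dynamics
        traj.append(list(v))
    return traj

rng = random.Random(1)
v0 = [rng.random() for _ in range(50)]
v0p = [x + (1e-8 if i == 0 else 0.0) for i, x in enumerate(v0)]  # tiny perturbation
ref = simulate(v0)
per = simulate(v0p)
dist = [math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y))) for x, y in zip(ref, per)]
print(dist[0], dist[-1])
```

Plotting `dist` on a log scale over time gives a rough finite-size analogue of the largest Lyapunov exponent's sign.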
Conedy: a scientific tool to investigate Complex Network Dynamics
We present Conedy, a performant scientific tool for numerically investigating
dynamics on complex networks. Conedy allows users to create networks and
provides automatic code generation and compilation to ensure performant
treatment of arbitrary node dynamics. Conedy can be interfaced via an internal
script interpreter or via a Python module.
Cell assembly dynamics of sparsely-connected inhibitory networks: a simple model for the collective activity of striatal projection neurons
Striatal projection neurons form a sparsely-connected inhibitory network, and
this arrangement may be essential for the appropriate temporal organization of
behavior. Here we show that a simplified, sparse inhibitory network of
Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal
population activity, as observed in brain slices [Carrillo-Reid et al., J.
Neurophysiology 99 (2008) 1435–1450]. In particular, we develop a new metric to
determine the conditions under which sparse inhibitory networks form
anti-correlated cell assemblies with time-varying activity of individual cells.
We found that under these conditions the network displays an input-specific
sequence of cell assembly switching that effectively discriminates similar
inputs. Our results support the proposal [Ponzi and Wickens, PLoS Comp Biol 9
(2013) e1002954] that GABAergic connections between striatal projection neurons
allow stimulus-selective, temporally-extended sequential activation of cell
assemblies. Furthermore, our results help to show how altered intrastriatal
GABAergic signaling may produce aberrant network-level information processing
in disorders such as Parkinson's and Huntington's diseases.
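The anti-correlated assemblies mentioned above can be quantified at the pair level by binning spike trains into rate vectors and correlating them (a generic measure, not the authors' specific metric). A minimal sketch with synthetic, perfectly alternating spike trains:

```python
# Hedged sketch: detecting anti-correlated activity between two cells.
# Spike times are synthetic; bin width and epoch length are arbitrary.
def binned_rates(spike_times, t_end, bin_width):
    n_bins = int(t_end / bin_width)
    counts = [0] * n_bins
    for t in spike_times:
        b = int(t / bin_width)
        if b < n_bins:
            counts[b] += 1
    return counts

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Two cells firing in alternating 100 ms epochs (anti-correlated assemblies).
cell_a = [t for t in range(0, 1000, 5) if (t // 100) % 2 == 0]
cell_b = [t for t in range(0, 1000, 5) if (t // 100) % 2 == 1]
r = pearson(binned_rates(cell_a, 1000, 100), binned_rates(cell_b, 1000, 100))
print(r)  # ≈ -1 for perfectly alternating activity
```

Strongly negative pairwise correlations at assembly-scale bin widths are the signature of cells belonging to alternating assemblies.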
Death and rebirth of neural activity in sparse inhibitory networks
In this paper, we clarify the mechanisms underlying a general phenomenon
present in pulse-coupled heterogeneous inhibitory networks: inhibition can
induce not only suppression of the neural activity, as expected, but it can
also promote neural reactivation. In particular, for globally coupled systems,
the number of firing neurons monotonically reduces upon increasing the strength
of inhibition (neurons' death). However, the random pruning of the connections
is able to reverse the action of inhibition, i.e. in a sparse network a
sufficiently strong synaptic strength can surprisingly promote, rather than
depress, the activity of the neurons (neurons' rebirth). Thus the number of
firing neurons exhibits a minimum at some intermediate synaptic strength. We
show that this minimum signals a transition from a regime dominated by the
neurons with higher firing activity to a phase where all neurons are
effectively sub-threshold and their irregular firing is driven by current
fluctuations. We explain the origin of the transition by deriving an analytic
mean-field formulation of the problem that provides the fraction of active
neurons as well as the first two moments of their firing statistics. The
introduction of a synaptic time scale does not modify the main aspects of the
reported phenomenon. However, for sufficiently slow synapses the transition
becomes dramatic: the system passes from a perfectly regular evolution to
irregular bursting dynamics. In this latter regime the model provides
predictions consistent with experimental findings for a specific class of
neurons, namely the medium spiny neurons in the striatum.
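The measurement behind the death-and-rebirth scenario is a scan over synaptic strength, counting how many neurons remain active. The toy sketch below (arbitrary parameters, not the paper's model or mean-field theory) sets up such a scan for a sparse, heterogeneous inhibitory LIF network; in the paper, the resulting curve is non-monotonic with a minimum at intermediate coupling.

```python
# Hedged sketch: count active neurons as a function of inhibitory strength g.
# Heterogeneity enters through per-neuron external drive; wiring is sparse.
import random

def active_neurons(g, n=100, k=10, steps=5000, dt=0.05, seed=0):
    rng = random.Random(seed)
    drive = [0.9 + 0.4 * rng.random() for _ in range(n)]   # heterogeneous input
    targets = [rng.sample(range(n), k) for _ in range(n)]  # sparse wiring
    v = [rng.random() for _ in range(n)]
    fired = [False] * n
    for _ in range(steps):
        spikes = [i for i in range(n) if v[i] >= 1.0]      # threshold 1, reset 0
        for i in spikes:
            v[i] = 0.0
            fired[i] = True
        inh = [0.0] * n
        for i in spikes:
            for j in targets[i]:
                inh[j] += g / k                            # inhibition scaled by in-degree
        for j in range(n):
            v[j] += dt * (drive[j] - v[j]) - inh[j]        # Euler step of LIF dynamics
    return sum(fired)

counts = {g: active_neurons(g) for g in (0.0, 0.5, 2.0, 8.0)}
print(counts)
```

At g = 0 only neurons with supra-threshold drive fire; whether intermediate g suppresses and large g revives activity in this toy model depends on the chosen parameters.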
Decorrelation of neural-network activity by inhibitory feedback
Correlations in spike-train ensembles can seriously impair the encoding of
information by their spatio-temporal structure. An inevitable source of
correlation in finite neural networks is common presynaptic input to pairs of
neurons. Recent theoretical and experimental studies demonstrate that spike
correlations in recurrent neural networks are considerably smaller than
expected based on the amount of shared presynaptic input. By means of a linear
network model and simulations of networks of leaky integrate-and-fire neurons,
we show that shared-input correlations are efficiently suppressed by inhibitory
feedback. To elucidate the effect of feedback, we compare the responses of the
intact recurrent network and systems where the statistics of the feedback
channel is perturbed. The suppression of spike-train correlations and
population-rate fluctuations by inhibitory feedback can be observed both in
purely inhibitory and in excitatory-inhibitory networks. The effect is fully
captured by a linear theory and is already apparent at the macroscopic level
of the population-averaged activity. At the microscopic level,
shared-input correlations are suppressed by spike-train correlations: In purely
inhibitory networks, they are canceled by negative spike-train correlations. In
excitatory-inhibitory networks, spike-train correlations are typically
positive. Here, the suppression of input correlations is not a result of the
mere existence of correlations between excitatory (E) and inhibitory (I)
neurons, but a consequence of a particular structure of correlations among the
three possible pairings (EE, EI, II).
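The macroscopic version of this suppression can be seen in a two-unit linear toy model (an illustration, not the paper's network model): both units integrate a shared input plus private noise, and inhibitory feedback subtracts a term proportional to the population mean. Without feedback the shared input induces positive correlation; with feedback the common fluctuations are damped and the correlation can even turn negative, mirroring the cancellation described above.

```python
# Hedged sketch: correlation between two linearly filtered units with shared
# input, with and without population-mean (inhibitory) feedback.
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def corr_pair(feedback, steps=20000, leak=0.1, seed=0):
    rng = random.Random(seed)
    x1 = x2 = 0.0
    xs, ys = [], []
    for _ in range(steps):
        shared = rng.gauss(0, 1)               # common presynaptic input
        mean_x = 0.5 * (x1 + x2)               # population activity
        x1 += -leak * x1 + shared + rng.gauss(0, 1) - feedback * mean_x
        x2 += -leak * x2 + shared + rng.gauss(0, 1) - feedback * mean_x
        xs.append(x1)
        ys.append(x2)
    return pearson(xs, ys)

print(corr_pair(0.0), corr_pair(0.8))  # strongly positive vs. near zero or negative
```

The feedback acts only on the common mode (the population mean), leaving the difference mode untouched, which is why it suppresses shared-input correlations so selectively.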
Emergent Properties of Interacting Populations of Spiking Neurons
Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known for their use in modeling predator-prey relations in population biology, although abundant applications to economic theory have also been described. We present a number of biologically relevant examples of spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
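The population-level equations referred to above are structurally Lotka-Volterra systems. As a generic illustration (the textbook two-population form, not the authors' specific rate equations), here is a predator-prey pair with multiplicative coupling, integrated with a simple Euler scheme:

```python
# Classic Lotka-Volterra system; parameter values are arbitrary.
def lotka_volterra(x0, y0, a=1.0, b=0.5, c=0.5, d=1.0, dt=0.001, steps=20000):
    # dx/dt = x (a - b y)   "prey"-like population rate
    # dy/dt = y (c x - d)   "predator"-like population rate
    x, y = x0, y0
    traj = []
    for _ in range(steps):
        dx = x * (a - b * y)
        dy = y * (c * x - d)
        x += dt * dx
        y += dt * dy
        traj.append((x, y))
    return traj

traj = lotka_volterra(1.0, 1.0)
# Rates remain positive and oscillate around the fixed point (d/c, a/b) = (2, 2).
```

The multiplicative structure (each rate of change proportional to the population's own activity) is what keeps rates non-negative and is the formal link to multiplicative point processes mentioned in the abstract.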
SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks
Spiking Neural Networks (SNNs) are biologically-inspired models that are
capable of processing information in streams of action potentials. However,
simulating and training SNNs is computationally expensive due to the need to
solve large systems of coupled differential equations. In this paper, we
introduce SparseProp, a novel event-based algorithm for simulating and training
sparse SNNs. Our algorithm reduces the computational cost of both the forward
and backward pass operations from O(N) to O(log(N)) per network spike, thereby
enabling numerically exact simulations of large spiking networks and their
efficient training using backpropagation through time. By leveraging the
sparsity of the network, SparseProp eliminates the need to iterate through all
neurons at each spike, employing efficient state updates instead. We
demonstrate the efficacy of SparseProp across several classical
integrate-and-fire neuron models, including a simulation of a sparse SNN with
one million LIF neurons. This results in a speed-up exceeding four orders of
magnitude relative to previous event-based implementations. Our work provides
an efficient and exact solution for training large-scale spiking neural
networks and opens up new possibilities for building more sophisticated
brain-inspired models.
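The O(log N) per-spike cost comes from combining closed-form single-neuron solutions with a priority queue. The sketch below illustrates that core idea in a generic event-based LIF simulator (an illustration of the data structure, not the SparseProp algorithm itself, which additionally uses a change of variables to avoid touching non-target neurons): each neuron's predicted next spike time sits in a binary heap, the earliest event is popped in O(log N), and only the spiking neuron's K postsynaptic targets are re-evaluated, with stale heap entries discarded lazily via version counters.

```python
# Hedged sketch: event-based LIF simulation with a heap of predicted spike
# times. Parameters and network layout are arbitrary illustrative choices.
import heapq
import math
import random

THETA = 1.0  # spike threshold; reset potential is 0

def next_spike_time(v, i_ext, t_now):
    # LIF with constant drive: v(t) = i_ext + (v - i_ext) exp(-(t - t_now)).
    # Solving v(t) = THETA gives the next spike time in closed form.
    if i_ext <= THETA or v >= i_ext:
        return math.inf
    return t_now + math.log((i_ext - v) / (i_ext - THETA))

def simulate(n=200, k=10, g=0.05, t_end=50.0, seed=0):
    rng = random.Random(seed)
    i_ext = [1.2 + 0.3 * rng.random() for _ in range(n)]
    targets = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    v = [rng.random() for _ in range(n)]
    t_last = [0.0] * n   # time of each neuron's last state update
    version = [0] * n    # invalidates stale heap entries (lazy deletion)
    heap = []
    for i in range(n):
        heapq.heappush(heap, (next_spike_time(v[i], i_ext[i], 0.0), version[i], i))
    spikes = []
    while heap:
        t, ver, i = heapq.heappop(heap)
        if ver != version[i]:
            continue                     # stale entry
        if t > t_end:
            break
        spikes.append((t, i))
        v[i], t_last[i] = 0.0, t         # reset and reschedule the spiker
        version[i] += 1
        heapq.heappush(heap, (next_spike_time(0.0, i_ext[i], t), version[i], i))
        for j in targets[i]:             # update only the K postsynaptic neurons
            v[j] = i_ext[j] + (v[j] - i_ext[j]) * math.exp(-(t - t_last[j]))
            v[j] -= g                    # inhibitory pulse
            t_last[j] = t
            version[j] += 1
            heapq.heappush(heap, (next_spike_time(v[j], i_ext[j], t), version[j], j))
    return spikes

spikes = simulate()
print(len(spikes), "spikes")
```

Each spike costs O(K log N) heap operations instead of an O(N) sweep over all membrane potentials, which is where the asymptotic speed-up for sparse networks comes from.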
Biological Robustness: Paradigms, Mechanisms, and Systems Principles
Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution, with relatively few perturbations accounting for the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness; however, each paradigm has a limited scope of applicability, and there has been little discussion of the conditions that determine this scope or of the relationships between paradigms. System properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits; however, common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity, with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness, such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their use of adaptive and self-organization processes that are not well appreciated yet might be suggestive of reusable building blocks for generating robust behavior.