    A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and distinct time scales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, which involve a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on the evolution of the neural network. Furthermore, we show that the sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
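
    The abstract above concerns a discrete-time random recurrent network whose synapses evolve under a Hebbian rule with passive forgetting, and links the sensitivity to learned patterns to the largest Lyapunov exponent. The sketch below is only a minimal illustration of that model class, not the paper's exact equations: the tanh rate dynamics, the update W <- W + eps*(x x^T / N - lambda*W), and all parameter values (N, g, eps, lambda) are assumptions chosen for the example, and the Lyapunov exponent is estimated numerically by tracking a renormalised perturbation.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200                        # network size (illustrative)
        g = 3.0                        # gain; g > 1 puts the untrained network in the chaotic regime
        eps = 1e-3                     # slow Hebbian learning rate
        lam = 1e-2                     # passive-forgetting rate
        W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # random initial connectivity
        x = rng.uniform(-1.0, 1.0, N)

        # Fast neuronal dynamics coupled to slow Hebbian learning with passive forgetting.
        for t in range(20000):
            x = np.tanh(g * (W @ x))
            W += eps * (np.outer(x, x) / N - lam * W)

        # Benettin-style numerical estimate of the largest Lyapunov exponent after learning.
        steps, d0 = 1000, 1e-8
        v = rng.normal(size=N)
        v /= np.linalg.norm(v)
        x1, x2 = x.copy(), x + d0 * v
        lyap = 0.0
        for t in range(steps):
            x1 = np.tanh(g * (W @ x1))
            x2 = np.tanh(g * (W @ x2))
            d = np.linalg.norm(x2 - x1)
            lyap += np.log(d / d0)
            x2 = x1 + (x2 - x1) * (d0 / d)   # renormalise the perturbation
        print("largest Lyapunov exponent (per step):", lyap / steps)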

    The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study

    High-level brain functions such as memory, classification, or reasoning can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for implementing such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear sub-threshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with non-linear, conductance-based synapses. Emulations of these networks on the analog neuromorphic hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm ... Comment: 20 pages, 10 figures, supplement.
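
    A central quantity in this kind of study is the pairwise correlation of spike trains. As a small, self-contained illustration (not code from the paper or from the Spikey toolchain), the following sketch computes the mean pairwise Pearson correlation of binned spike counts; the spike-train representation, the bin size, and the toy Poisson data are assumptions made for the example.

        import numpy as np

        def spike_count_correlations(spike_trains, t_max, bin_size=0.02):
            """Mean pairwise Pearson correlation of spike counts in fixed time bins."""
            edges = np.arange(0.0, t_max + bin_size, bin_size)
            counts = np.array([np.histogram(st, bins=edges)[0] for st in spike_trains])
            c = np.corrcoef(counts)               # neurons x neurons correlation matrix
            iu = np.triu_indices_from(c, k=1)     # distinct pairs only
            return np.nanmean(c[iu])

        # Toy usage: independent Poisson spike trains should give near-zero correlations.
        rng = np.random.default_rng(1)
        rate, t_max = 8.0, 10.0                   # Hz, seconds
        trains = [np.sort(rng.uniform(0.0, t_max, rng.poisson(rate * t_max)))
                  for _ in range(50)]
        print(spike_count_correlations(trains, t_max))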

    Eigenvalue spectral properties of sparse random matrices obeying Dale's law

    Understanding the dynamics of large networks of neurons with heterogeneous connectivity architectures is a complex physics problem that demands novel mathematical techniques. Biological neural networks are inherently spatially heterogeneous, making them difficult to model mathematically. Random recurrent neural networks capture complex network connectivity structures while remaining mathematically tractable. Our paper generalises previous classical results to sparse connectivity matrices that have distinct excitatory (E) and inhibitory (I) neural populations. By investigating sparse networks we structure our analysis to examine the impact of all levels of network sparseness, and discover a novel nonlinear interaction between the connectivity matrix and the resulting network dynamics, in both the balanced and unbalanced cases. Specifically, we deduce new mathematical dependencies describing the influence of sparsity and distinct E/I distributions on the distribution of eigenvalues (the eigenspectrum) of the networked Jacobian. Furthermore, we illustrate that the previous classical results are special cases of the more general results described here. Understanding the impact of sparse connectivities on network dynamics is of particular importance for both theoretical neuroscience and mathematical physics as it pertains to the structure-function relationship of networked systems and their dynamics. Our results are an important step towards developing analysis techniques that are essential to studying the impacts of larger-scale network connectivity on network function, and towards furthering our understanding of brain function and dysfunction. Comment: 18 pages, 6 figures.
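
    A quick way to build intuition for these results is to construct such a matrix numerically and look at its eigenvalues. The sketch below (all parameter choices are assumptions, not the paper's) builds a sparse random connectivity matrix obeying Dale's law, i.e. every presynaptic neuron's outgoing weights share one sign, with inhibition scaled so that E and I roughly balance, and prints the empirical spectral radius.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 1000
        f_exc = 0.8                               # fraction of excitatory neurons
        p = 0.1                                   # connection probability (sparseness)
        w_exc = 1.0
        w_inh = -f_exc / (1.0 - f_exc)            # inhibition scaled for approximate balance

        # Column j holds the outgoing weights of presynaptic neuron j, so giving each
        # column a single sign enforces Dale's law.
        signs = np.where(np.arange(N) < int(f_exc * N), w_exc, w_inh)
        mask = rng.random((N, N)) < p
        J = mask * np.abs(rng.normal(0.0, 1.0, (N, N))) * signs[None, :] / np.sqrt(p * N)

        eig = np.linalg.eigvals(J)
        print("largest |eigenvalue|:", np.abs(eig).max())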

    On the low dimensional dynamics of structured random networks

    Using a generalized random recurrent neural network model, and by extending our recently developed mean-field approach [J. Aljadeff, M. Stern, T. Sharpee, Phys. Rev. Lett. 114, 088101 (2015)], we study the relationship between the network connectivity structure and its low-dimensional dynamics. Each connection in the network is a random number with mean 0 and a variance that depends on the pre- and post-synaptic neurons through a sufficiently smooth function g of their identities. We find that these networks undergo a phase transition from a silent to a chaotic state at a critical point we derive as a function of g. Above the critical point, although unit activation levels are chaotic, their autocorrelation functions are restricted to a low-dimensional subspace. This provides a direct link between the network's structure and some of its functional characteristics. We discuss example applications of the general results to neuroscience, where we derive the support of the spectrum of connectivity matrices with heterogeneous and possibly correlated degree distributions, and to ecology, where we study the stability of the cascade model for food-web structure. Comment: 16 pages, 4 figures.
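
    The connectivity model in this abstract assigns each entry J_ij a mean of 0 and a variance that depends smoothly on the identities of the post- and presynaptic neurons through a function g. The following sketch builds such a matrix for one illustrative choice of g (the specific g, the network size, and the link to the chaos transition at spectral radius near 1 are assumptions for the example, not results quoted from the paper) and inspects its spectrum numerically.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 1000
        u = np.arange(N) / N                       # neuron "identities" in [0, 1)

        def g(post, pre):
            # smooth, positive variance profile; purely illustrative choice
            return 1.0 + 0.5 * np.cos(2.0 * np.pi * (post - pre))

        var = g(u[:, None], u[None, :]) / N        # per-entry variance, O(1/N)
        J = rng.normal(0.0, 1.0, (N, N)) * np.sqrt(var)

        eig = np.linalg.eigvals(J)
        print("empirical spectral radius:", np.abs(eig).max())
        # Rescaling g rescales this radius; in standard random-network models the
        # transition from silence to chaos is expected roughly where it crosses 1.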

    Decorrelation of neural-network activity by inhibitory feedback

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network and of systems where the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully explained by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
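
    The mechanism is easiest to see in a linear rate model, which is a deliberate simplification of the spiking simulations described above. In the sketch below (all parameters are assumptions), N units receive a shared noisy input plus private noise; comparing the open-loop case with the case including sparse inhibitory recurrent feedback shows the drop in mean pairwise correlation.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T, dt, tau = 200, 10000, 0.1, 1.0
        p, w = 0.2, -0.05                          # sparse, purely inhibitory coupling
        W = (rng.random((N, N)) < p) * w
        np.fill_diagonal(W, 0.0)

        def simulate(recurrent):
            x = np.zeros(N)
            xs = np.empty((T, N))
            for t in range(T):
                common = rng.normal()              # shared input -> positive correlations
                private = rng.normal(size=N)       # private noise per unit
                drive = common + private + (W @ x if recurrent else 0.0)
                x = x + dt / tau * (-x + drive)
                xs[t] = x
            return xs

        def mean_pairwise_corr(xs):
            c = np.corrcoef(xs.T)
            iu = np.triu_indices(N, k=1)
            return c[iu].mean()

        print("open loop               :", mean_pairwise_corr(simulate(False)))
        print("with inhibitory feedback:", mean_pairwise_corr(simulate(True)))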

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contributes in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural activity patterns that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
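
    The paper's rule acts on the recurrent weights themselves; as a compact illustration of supervised learning applied to an initially chaotic random recurrent network, the sketch below instead trains a linear readout with feedback using recursive least squares (in the style of Sussillo and Abbott's FORCE learning). This is not the paper's learning rule, and all parameters and the periodic target are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        N, g, dt, tau = 300, 1.5, 0.1, 1.0
        W = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # chaotic regime for g > 1
        w_fb = rng.uniform(-1.0, 1.0, N)                    # feedback weights from the readout
        w_out = np.zeros(N)
        P = np.eye(N)                                       # running inverse correlation (RLS)

        T = 20000
        target = np.sin(2.0 * np.pi * np.arange(T) * dt / 10.0)   # simple periodic target
        x = 0.5 * rng.normal(size=N)
        r = np.tanh(x)
        z = 0.0

        for t in range(T):
            x = x + dt / tau * (-x + W @ r + w_fb * z)
            r = np.tanh(x)
            z = w_out @ r
            if t % 2 == 0:                                  # RLS update every other step
                Pr = P @ r
                k = Pr / (1.0 + r @ Pr)
                P -= np.outer(k, Pr)
                w_out += (target[t] - z) * k

        print("final readout error:", abs(target[-1] - z))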