
    Eigenvalues of block structured asymmetric random matrices

    We study the spectrum of an asymmetric random matrix with block structured variances. The rows and columns of the random square matrix are divided into $D$ partitions of arbitrary size (linear in $N$). The parameters of the model are the variances of the elements in each block, summarized in $g \in \mathbb{R}^{D\times D}_+$. Using the Hermitization approach and by studying the matrix-valued Stieltjes transform, we show that these matrices have a circularly symmetric spectrum, and we give an explicit formula for their spectral radius together with a set of implicit equations for the full density function. We discuss applications of this model to neural networks.
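    As a rough numerical illustration of the setup described above, the sketch below samples an asymmetric random matrix with block structured variances and inspects its spectrum. The choice of two blocks, the block sizes, and the 1/N scaling of the variances are assumptions made for the example, not values taken from the paper.

```python
# Minimal sketch (not the paper's derivation): sample a block-structured
# asymmetric random matrix and inspect its spectrum empirically.
import numpy as np

rng = np.random.default_rng(0)

N = 1000                      # matrix size
sizes = [600, 400]            # partition sizes (linear in N) -- illustrative
g = np.array([[1.0, 0.5],     # g[a, b]: variance scale of block (a, b)
              [1.5, 0.8]])

bounds = np.cumsum([0] + sizes)
J = rng.normal(size=(N, N))
for a in range(len(sizes)):
    for b in range(len(sizes)):
        # scale each block so element variances are g[a, b] / N (assumed scaling)
        J[bounds[a]:bounds[a+1], bounds[b]:bounds[b+1]] *= np.sqrt(g[a, b] / N)

eig = np.linalg.eigvals(J)
print("empirical spectral radius:", np.abs(eig).max())
# A scatter of eig.real vs eig.imag shows the circularly symmetric support.
```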

    On the low dimensional dynamics of structured random networks

    Using a generalized random recurrent neural network model, and by extending our recently developed mean-field approach [J. Aljadeff, M. Stern, T. Sharpee, Phys. Rev. Lett. 114, 088101 (2015)], we study the relationship between the network connectivity structure and its low dimensional dynamics. Each connection in the network is a random number with mean 0 and variance that depends on pre- and post-synaptic neurons through a sufficiently smooth function $g$ of their identities. We find that these networks undergo a phase transition from a silent to a chaotic state at a critical point we derive as a function of $g$. Above the critical point, although unit activation levels are chaotic, their autocorrelation functions are restricted to a low dimensional subspace. This provides a direct link between the network's structure and some of its functional characteristics. We discuss example applications of the general results to neuroscience, where we derive the support of the spectrum of connectivity matrices with heterogeneous and possibly correlated degree distributions, and to ecology, where we study the stability of the cascade model for food web structure. Comment: 16 pages, 4 figures
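    The following sketch simulates a structured random recurrent network in the spirit of this abstract. It assumes standard rate dynamics of the form dx/dt = -x + J*tanh(x), a particular smooth variance function g, and simple Euler integration; none of these specifics are taken from the paper.

```python
# Rough simulation sketch of a recurrent network whose connection variances
# depend smoothly on neuron identities (illustrative g, parameters assumed).
import numpy as np

rng = np.random.default_rng(1)
N, dt, T = 500, 0.05, 2000

# Variance of J[i, j] depends on post- (i) and pre-synaptic (j) identities
# through a simple product form -- an assumed example of a smooth g.
u = np.linspace(0, 1, N)
g = (1.0 + 1.5 * u)[:, None] * (1.0 + 0.5 * u)[None, :]
J = rng.normal(size=(N, N)) * np.sqrt(g / N)

x = rng.normal(scale=0.1, size=N)
rates = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ np.tanh(x))   # Euler step of the rate dynamics
    rates[t] = np.tanh(x)

# Population-averaged autocorrelation as a coarse probe of chaotic activity.
r = rates - rates.mean(axis=0)
ac = np.array([np.mean(r[: T - k] * r[k:]) for k in range(0, 200, 10)])
print(np.round(ac, 4))
```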

    Chaos in heterogeneous neural networks: II. Multiple activity modes

    We study the activity of a recurrent neural network consisting of multiple cell groups through the structure of its correlations, by showing how the rules that govern the strengths of connections between the different cell groups shape the average autocorrelation found in each group. We derive an analytical expression for the number of independent autocorrelation modes the network can concurrently sustain. Each mode corresponds to a non-zero component of the network's autocorrelation when it is projected on a specific set of basis vectors. In a companion abstract we derive the condition for the first mode, and hence the entire network, to become active. When the network is just above the critical point where it becomes active, all groups of cells have the same autocorrelation function up to a constant multiplicative factor. We derive here a formula for this multiplicative factor, which is in fact the ratio of the average firing rates of the groups. As the effective synaptic gain grows, a second activity mode appears, the autocorrelation functions of each group have different shapes, and the network becomes doubly chaotic. We generalize this result to understand how many modes of activity can be found in a heterogeneous network based on its connectivity structure. Finally, we use our theory to understand the dynamics of a clustered network where cells from the same group are strongly connected compared to cells from different groups. We show how this structure can lead to one or more activity modes and to interesting switching effects in the identity of the dominant cluster.
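    A hedged sketch of the clustered-network scenario discussed above: simulate a two-group network with block structured connection strengths and compare the groups' average autocorrelation functions. The rate dynamics, block gains, and group sizes are illustrative assumptions, not values from the study.

```python
# Sketch: two-group recurrent network with strong within-group and weak
# between-group connections; compare per-group average autocorrelations.
import numpy as np

rng = np.random.default_rng(2)
N, dt, T = 600, 0.05, 3000
sizes = [300, 300]
gains = np.array([[2.5, 0.5],    # assumed block variance scales
                  [0.5, 2.0]])

bounds = np.cumsum([0] + sizes)
J = rng.normal(size=(N, N))
for a in range(2):
    for b in range(2):
        J[bounds[a]:bounds[a+1], bounds[b]:bounds[b+1]] *= np.sqrt(gains[a, b] / N)

x = rng.normal(scale=0.1, size=N)
rates = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ np.tanh(x))
    rates[t] = np.tanh(x)

def group_autocorr(r, lags=range(0, 400, 20)):
    # average autocorrelation of the units in one group
    r = r - r.mean(axis=0)
    return np.array([np.mean(r[: T - k] * r[k:]) for k in lags])

for a in range(2):
    ac = group_autocorr(rates[:, bounds[a]:bounds[a+1]])
    print(f"group {a} autocorrelation:", np.round(ac, 4))
```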

    Predicting the stability of large structured food webs

    The stability of ecological systems has been a long-standing focus of ecology. Recently, tools from random matrix theory have identified the main drivers of stability in ecological communities whose network structure is random. However, empirical food webs differ greatly from random graphs. For example, their degree distribution is broader, they contain few trophic cycles, and they are almost interval. Here we derive an approximation for the stability of food webs whose structure is generated by the cascade model, in which 'larger' species consume 'smaller' ones. We predict the stability of these food webs with great accuracy, and our approximation also works well for food webs whose structure is determined empirically or by the niche model. We find that intervality and broad degree distributions tend to stabilize food webs, and that average interaction strength has little influence on stability, compared with the effect of variance and correlation.
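    A minimal sketch, under assumed parameter choices, of a community matrix generated by the cascade model together with a linear stability check; the connectance, interaction-strength statistics, and self-regulation term are illustrative rather than the values used in the study.

```python
# Sketch: cascade-model community matrix and its rightmost eigenvalue.
import numpy as np

rng = np.random.default_rng(3)
S, C, d = 400, 0.1, 1.0     # species, connectance, self-regulation (assumed)

M = np.zeros((S, S))
for i in range(S):          # species are ordered by "size"
    for j in range(i):      # species i can only consume smaller species j
        if rng.random() < C:
            strength = np.abs(rng.normal(0.0, 0.5))
            M[j, i] = -strength          # predator harms prey
            M[i, j] = 0.3 * strength     # prey benefits predator

np.fill_diagonal(M, -d)
lead = np.linalg.eigvals(M).real.max()
print("rightmost eigenvalue (community is stable if negative):", lead)
```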

    Neural Responses to Structured Random Inputs

    Recurrent random network models are a useful theoretical tool for understanding the irregular activity of neural networks in the brain. To preserve analytical tractability, it is often assumed that the connectivity statistics are homogeneous. In contrast, experiments highlight the importance of the heterogeneity found in neural circuits. By extending the dynamic mean field method, we solve the dynamics of a recurrent neural network with cell-type-dependent connectivity. These networks undergo a phase transition from a silent state to a state with chaotic activity, and can sustain multiple global activity modes that are predicted by our analysis. By finding the location of the critical point at which the phase transition occurs, we derive a new mathematical result: the spectral radius of a random matrix with block structured variances, which serves as the network's connectivity matrix. Applying our results, we explain how a small number of hyper-excitable neurons integrated into the network can lead to significant changes in its computational capacity, and show that a clustered architecture, where inter-cluster connectivity is weaker than intra-cluster connectivity, can also lead to network configurations that are advantageous from a computational standpoint. The heterogeneity of neural networks is perhaps rivaled only by the diversity of the external sensory environment. Every organism is constantly bombarded by stimuli that inherit their statistical structure from that environment and tend to have strong correlations. The computation that neurons perform adapts to the stimulus statistics, making it important to find the features a cell is sensitive to using stimuli that are as close to natural as possible. Spike-Triggered Covariance is a popular and computationally efficient dimensionality reduction method that finds the features that are relevant for a cell's computation. Using this technique to analyze model and retinal ganglion cell responses, we show that strong stimulus correlations interfere with the analysis of statistical significance of candidate input dimensions. Using results from random matrix theory, we derive a correction scheme that eliminates these artifacts, allowing for an order of magnitude increase in the sensitivity of the method.
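    As a toy illustration of the second part of this abstract, the sketch below runs a spike-triggered covariance analysis on a model cell driven by a correlated Gaussian stimulus. The stimulus covariance, the model cell, and the spiking threshold are assumptions made for the example, and the correction scheme described in the abstract is not implemented here.

```python
# Toy spike-triggered covariance (STC) with a correlated Gaussian stimulus.
import numpy as np

rng = np.random.default_rng(4)
D, T = 40, 50000

# Correlated stimulus: smooth (exponential-kernel) covariance across dimensions.
idx = np.arange(D)
C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
L = np.linalg.cholesky(C)
stim = rng.normal(size=(T, D)) @ L.T

# Assumed model cell: spikes when a quadratic feature of one hidden filter is large.
w = rng.normal(size=D)
w /= np.linalg.norm(w)
spikes = (stim @ w) ** 2 > 2.0

# STC: spike-triggered covariance minus the prior stimulus covariance.
stc = np.cov(stim[spikes].T) - np.cov(stim.T)
eigvals = np.linalg.eigvalsh(stc)
print("largest STC eigenvalues:", np.round(eigvals[-3:], 3))
# With correlated stimuli the null eigenvalue spread is wider, which is the
# artifact the correction scheme described in the abstract addresses.
```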

    Forgetting Leads to Chaos in Attractor Networks

    Attractor networks are an influential theory for memory storage in brain systems. This theory has recently been challenged by the observation of strong temporal variability in neuronal recordings during memory tasks. In this work, we study a sparsely connected attractor network where memories are learned according to a Hebbian synaptic plasticity rule. After recapitulating known results for the continuous, sparsely connected Hopfield model, we investigate a model in which new memories are learned continuously and old memories are forgotten, using an online synaptic plasticity rule. We show that for a forgetting timescale that optimizes storage capacity, the qualitative features of the network's memory retrieval dynamics are age dependent: the most recent memories are retrieved as fixed-point attractors while older memories are retrieved as chaotic attractors characterized by strong heterogeneity and temporal fluctuations. Therefore, fixed-point and chaotic attractors coexist in the network phase space. The network presents a continuum of statistically distinguishable memory states, where chaotic fluctuations appear abruptly above a critical age and then increase gradually until the memory disappears. We develop a dynamical mean field theory to analyze the age-dependent dynamics and compare the theory with simulations of large networks. We compute the optimal forgetting timescale for which the number of stored memories is maximized. We find that the maximum age at which memories can be retrieved is set by an instability at which old memories destabilize and the network converges instead to a more recent one. Our numerical simulations show that a high degree of sparsity is necessary for the dynamical mean field theory to accurately predict the network capacity. To test the robustness and biological plausibility of our results, we study numerically the dynamics of a network with learning rules and transfer function inferred from in vivo data in the online learning scenario. We find that all aspects of the network's dynamics characterized analytically in the simpler model also hold in this model. These results are highly robust to noise. Finally, our theory provides specific predictions for delay response tasks with aging memoranda. In particular, it predicts a higher degree of temporal fluctuations in retrieval states associated with older memories, and it also predicts that fluctuations should be faster for older memories. Overall, our theory of attractor networks that continuously learn new information at the price of forgetting old memories can account for the observed diversity of retrieval states in the cortex, and in particular, the strong temporal fluctuations of cortical activity.
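    A compact sketch of an attractor network with an online, exponentially forgetting Hebbian rule, in the spirit of the model described above; the network size, connection sparsity, forgetting timescale, and binary synchronous dynamics are illustrative assumptions rather than the paper's model.

```python
# Sketch: sparsely connected Hopfield-type network that learns memories online
# with exponential forgetting, then tries to retrieve a recent and an old memory.
import numpy as np

rng = np.random.default_rng(5)
N, c, tau, P = 2000, 0.05, 15.0, 60   # neurons, connection prob, forgetting timescale, memories

mask = rng.random((N, N)) < c          # sparse (diluted) connectivity
np.fill_diagonal(mask, False)

patterns = rng.choice([-1.0, 1.0], size=(P, N))
J = np.zeros((N, N))
for xi in patterns:                    # older memories are presented first
    J = (1.0 - 1.0 / tau) * J + np.outer(xi, xi) / (c * N)
J *= mask

def overlap_after_retrieval(xi, steps=20):
    s = np.where(rng.random(N) < 0.9, xi, -xi)   # cue: noisy version of the memory
    for _ in range(steps):
        s = np.sign(J @ s + 1e-12)               # synchronous binary updates
    return float(np.mean(s * xi))

print("recent memory overlap:", overlap_after_retrieval(patterns[-1]))
print("old memory overlap:   ", overlap_after_retrieval(patterns[0]))
```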