
    Stationary-State Statistics of a Binary Neural Network Model with Quenched Disorder

    We study the statistical properties of the stationary firing-rate states of a neural network model with quenched disorder. The model has arbitrary size, discrete-time evolution equations and binary firing rates, while the topology and the strength of the synaptic connections are randomly generated from known, generally arbitrary, probability distributions. We derive semi-analytical expressions for the occurrence probability of the stationary states and for the mean multistability diagram of the model, in terms of the distribution of the synaptic connections and of the external stimuli to the network. Our calculations rely on the probability distribution of the bifurcation points of the stationary states with respect to the external stimuli, which, according to extreme value theory, can be calculated in terms of the permanent of special matrices. While our semi-analytical expressions are exact for any network size and for any distribution of the synaptic connections, we also specialize our calculations to the case of statistically homogeneous multi-population networks. For this network topology we calculate the permanent analytically, obtaining a compact formula that outperforms the Balasubramanian-Bax-Franklin-Glynn algorithm by several orders of magnitude. Finally, by applying the Fisher-Tippett-Gnedenko theorem, we derive asymptotic expressions for the stationary-state statistics of multi-population networks in the large-network-size limit, in terms of the Gumbel (double exponential) distribution. We also provide a Python implementation of our formulas and some examples of the results generated by the code. Comment: 30 pages, 6 figures, 2 supplemental Python scripts
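
    As a point of reference for the benchmark mentioned above, a minimal sketch of the Balasubramanian-Bax-Franklin-Glynn permanent formula in Python might look as follows (a generic textbook implementation with our own function names, not the paper's supplemental scripts):

        import itertools
        import numpy as np

        def permanent_bbfg(A):
            """Permanent via the Balasubramanian-Bax-Franklin-Glynn (Glynn)
            formula: O(2^(n-1) * n^2) here, without the Gray-code speed-up."""
            A = np.asarray(A, dtype=float)
            n = A.shape[0]
            total = 0.0
            # Sum over all sign vectors delta with delta[0] fixed to +1.
            for signs in itertools.product((1.0, -1.0), repeat=n - 1):
                delta = np.array((1.0,) + signs)
                total += np.prod(delta) * np.prod(delta @ A)
            return total / 2 ** (n - 1)

        def permanent_brute_force(A):
            """O(n!) definition of the permanent, for cross-checking."""
            n = len(A)
            return sum(np.prod([A[i, p[i]] for i in range(n)])
                       for p in itertools.permutations(range(n)))

        A = np.random.default_rng(0).random((6, 6))
        assert np.isclose(permanent_bbfg(A), permanent_brute_force(A))

    The paper's closed-form permanent for statistically homogeneous multi-population networks replaces this exponential-time sum entirely.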

    Inferring hidden states in Langevin dynamics on large networks: Average case performance

    We present average performance results for dynamical inference problems in large networks, where a set of nodes is hidden while the time trajectories of the others are observed. Examples of this scenario occur in signal transduction and gene regulation networks. We focus on the linear stochastic dynamics of continuous variables interacting via random Gaussian couplings of generic symmetry. We analyze the inference error, given by the variance of the posterior distribution over hidden paths, in the thermodynamic limit and as a function of the system parameters and of the ratio α between the number of hidden and observed nodes. By applying Kalman filter recursions we find that the posterior dynamics is governed by an "effective" drift that incorporates the effect of the observations. We present two approaches for characterizing the posterior variance that allow us to tackle, respectively, equilibrium and nonequilibrium dynamics. The first appeals to random matrix theory and reveals average spectral properties of the inference error and typical posterior relaxation times; the second is based on dynamical functionals and yields the inference error as the solution of an algebraic equation. Comment: 20 pages, 5 figures
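
    The Kalman filter recursions invoked here are the standard ones for linear-Gaussian state-space models; a minimal discrete-time sketch, with generic placeholder matrices A, C, Q, R rather than the paper's random Gaussian couplings, could read:

        import numpy as np

        def kalman_step(mu, P, y, A, C, Q, R):
            """One predict/update step of a standard Kalman filter for
            x_{t+1} = A x_t + noise(Q),  y_t = C x_t + noise(R).
            mu, P: posterior mean and covariance of the hidden state."""
            # Predict
            mu_pred = A @ mu
            P_pred = A @ P @ A.T + Q
            # Update with the observation y
            S = C @ P_pred @ C.T + R                 # innovation covariance
            K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
            mu_new = mu_pred + K @ (y - C @ mu_pred)
            P_new = (np.eye(len(mu)) - K @ C) @ P_pred
            return mu_new, P_new

        # Illustrative 2-hidden/1-observed example with random parameters
        rng = np.random.default_rng(1)
        A = 0.9 * np.eye(2); C = rng.normal(size=(1, 2))
        Q = 0.1 * np.eye(2); R = np.array([[0.05]])
        mu, P = np.zeros(2), np.eye(2)
        for y in rng.normal(size=(20, 1)):
            mu, P = kalman_step(mu, P, y, A, C, Q, R)
        print("posterior variance (inference error proxy):", np.diag(P))

    In this picture the posterior covariance P is the analogue of the inference error the paper analyzes in the thermodynamic limit.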

    The adaptive interpolation method for proving replica formulas. Applications to the Curie-Weiss and Wigner spike models

    In this contribution we give a pedagogic introduction to the recently introduced adaptive interpolation method, which proves replica formulas for Bayes-optimal inference problems in a simple and unified way. Many aspects of the method can already be explained at the level of the simple Curie-Weiss spin system, and this provides a method of solution for the model that does not appear to be known. We then generalize the analysis to a paradigmatic inference problem, namely rank-one matrix estimation, also referred to as the Wigner spike model in statistics. We give many pointers to the recent literature where the method has been successfully applied.
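
    For orientation, the classical mean-field solution of the Curie-Weiss model, which the adaptive interpolation method recovers rigorously, can be evaluated with a short fixed-point iteration; the symbols beta, J and h below follow textbook conventions and are not necessarily the paper's notation:

        import numpy as np

        def curie_weiss_magnetization(beta, J=1.0, h=0.0, m0=0.5,
                                      tol=1e-12, max_iter=10_000):
            """Solve the Curie-Weiss self-consistency equation
            m = tanh(beta * (J*m + h)) by fixed-point iteration."""
            m = m0
            for _ in range(max_iter):
                m_new = np.tanh(beta * (J * m + h))
                if abs(m_new - m) < tol:
                    break
                m = m_new
            return m

        def curie_weiss_free_energy(beta, J=1.0, h=0.0):
            """Mean-field free energy per spin at the self-consistent
            magnetization: -beta*f = -beta*J*m^2/2 + log(2*cosh(beta*(J*m + h)))."""
            m = curie_weiss_magnetization(beta, J, h)
            return (beta * J * m**2 / 2
                    - np.log(2 * np.cosh(beta * (J * m + h)))) / beta

        # Below the critical temperature (beta > 1 for J=1, h=0)
        # a nonzero magnetization appears:
        print(curie_weiss_magnetization(beta=1.5))   # approx. 0.86
        print(curie_weiss_free_energy(beta=1.5))

    This mean-field expression is exact for the Curie-Weiss model in the thermodynamic limit, which is what the interpolation argument establishes; the same variational structure carries over to rank-one matrix estimation.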

    Instability of frozen-in states in synchronous Hebbian neural networks

    The full dynamics of a synchronous recurrent neural network model with Ising binary units and a Hebbian learning rule with a finite self-interaction is studied, in order to determine the stability against synaptic and stochastic noise of the frozen-in states that appear in the absence of both kinds of noise. Both the numerical simulation procedure of Eissfeller and Opper and a new alternative procedure that allows the dynamics to be followed over longer time scales are used in this work. It is shown that synaptic noise destabilizes the frozen-in states and yields either retrieval or paramagnetic states, provided the stochastic noise is not too large. The indications are that the same results may hold in the absence of synaptic noise, for low stochastic noise. Comment: 14 pages and 4 figures; accepted for publication in J. Phys. A: Math. Gen.
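
    A direct finite-size simulation of the model described here (not the Eissfeller-Opper procedure or the new alternative, both of which target the large-network limit; all parameter values below are illustrative assumptions) might look like:

        import numpy as np

        rng = np.random.default_rng(0)

        def hebbian_couplings(patterns, self_coupling=0.0):
            """Hebbian synaptic matrix J_ij = (1/N) sum_mu xi_i^mu xi_j^mu,
            with an explicit self-interaction J_ii."""
            p, n = patterns.shape
            J = patterns.T @ patterns / n
            np.fill_diagonal(J, self_coupling)
            return J

        def synchronous_step(s, J, beta):
            """One parallel (synchronous) stochastic update of all Ising units:
            P(s_i -> +1) = 1 / (1 + exp(-2*beta*h_i)), with fields h = J s."""
            h = J @ s
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            return np.where(rng.random(len(s)) < p_plus, 1, -1)

        # Example: retrieval from a noisy version of a stored pattern
        n, p = 200, 5
        patterns = rng.choice([-1, 1], size=(p, n))
        J = hebbian_couplings(patterns, self_coupling=0.5)
        s = patterns[0] * rng.choice([1, -1], size=n, p=[0.9, 0.1])  # 10% flipped
        for _ in range(50):
            s = synchronous_step(s, J, beta=5.0)
        overlap = s @ patterns[0] / n
        print(f"overlap with stored pattern: {overlap:.2f}")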