
    Scaling and Inverse Scaling in Anisotropic Bootstrap percolation

    In bootstrap percolation it is known that the critical percolation threshold converges slowly to zero with increasing system size or, inversely, that the critical size diverges rapidly as the percolation probability goes to zero. Obtaining higher-order terms (that is, sharp and sharper thresholds) for the percolation threshold is in general a hard problem. In the case of two-dimensional anisotropic models, correction terms can sometimes be obtained by inversion in a relatively simple manner. Comment: Contribution to the proceedings of the 2013 EURANDOM workshop Probabilistic Cellular Automata: Theory, Applications and Future Perspectives; equation typo corrected, constant of generalisation corrected.
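    For intuition about how the threshold drifts with system size, here is a minimal Monte Carlo sketch of anisotropic bootstrap percolation on an L x L torus. The neighbourhood (two horizontal neighbours on each side, one vertical neighbour on each side) and the threshold of 3 are one standard anisotropic choice taken purely for illustration; they are not claimed to be the model analysed in the paper.

# Illustrative sketch only: anisotropic bootstrap percolation on an L x L
# periodic grid. Neighbourhood and threshold are assumptions for illustration.
import numpy as np

# offsets (dx, dy) of the anisotropic neighbourhood
NEIGHBOURS = [(-2, 0), (-1, 0), (1, 0), (2, 0), (0, -1), (0, 1)]
THRESHOLD = 3

def fully_occupied(L, p, rng):
    """Run bootstrap dynamics to fixation; return True if every site ends up active."""
    active = rng.random((L, L)) < p
    while True:
        counts = np.zeros((L, L), dtype=int)
        for dx, dy in NEIGHBOURS:
            # periodic boundaries via np.roll; the neighbourhood is symmetric
            counts += np.roll(active, shift=(dx, dy), axis=(0, 1))
        newly = (~active) & (counts >= THRESHOLD)
        if not newly.any():
            return bool(active.all())
        active |= newly

def spanning_probability(L, p, trials=20, seed=0):
    rng = np.random.default_rng(seed)
    return sum(fully_occupied(L, p, rng) for _ in range(trials)) / trials

# The probability of full occupation rises with p; the p at which it does so
# drifts slowly towards zero as L grows, which is the slow convergence the
# abstract refers to.
for L in (32, 64, 128):
    for p in (0.05, 0.10, 0.15):
        print(L, p, spanning_probability(L, p))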

    On Bootstrap Percolation in Living Neural Networks

    Recent experimental studies of living neural networks reveal that their global activation induced by electrical stimulation can be explained using the concept of bootstrap percolation on a directed random network. The experiment consists of externally activating an initial random fraction of the neurons and observing the firing process until it reaches equilibrium. The final fraction of active neurons depends in a non-linear way on the initial fraction. The main result of this paper is a theorem which enables us to find the asymptotics of the final proportion of fired neurons when a random directed graph with given node degrees is taken as the model of the interacting network. This gives a rigorous mathematical proof of a phenomenon observed by physicists in neural networks.
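    A minimal sketch of the activation process described in the abstract: externally activate a random fraction of nodes of a directed random graph, let a node fire once enough of its in-neighbours have fired, and record the final active fraction. The graph model (directed Erdos-Renyi rather than a graph with a prescribed degree sequence), the threshold theta, and all parameter values are illustrative assumptions, not the paper's construction.

# Sketch under assumed parameters: bootstrap percolation on a directed
# Erdos-Renyi graph, not the paper's configuration-model construction.
import random
from collections import deque

def final_active_fraction(n, mean_out_degree, theta, f0, seed=0):
    """Activate a random fraction f0 externally; a node fires once at least
    theta of its in-neighbours have fired. Return the final active fraction."""
    rng = random.Random(seed)
    p = mean_out_degree / (n - 1)
    # out_edges[u] lists the nodes that receive input from u
    out_edges = [[v for v in range(n) if v != u and rng.random() < p]
                 for u in range(n)]
    fired = [False] * n
    inputs = [0] * n            # fired in-neighbours seen so far
    queue = deque()
    for u in rng.sample(range(n), int(f0 * n)):   # external stimulation
        fired[u] = True
        queue.append(u)
    while queue:
        u = queue.popleft()
        for v in out_edges[u]:
            inputs[v] += 1
            if not fired[v] and inputs[v] >= theta:
                fired[v] = True
                queue.append(v)
    return sum(fired) / n

# The final fraction depends non-linearly on the initially stimulated fraction.
for f0 in (0.02, 0.05, 0.10, 0.20):
    print(f0, final_active_fraction(n=2000, mean_out_degree=15, theta=5, f0=f0))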

    Remarks on Bootstrap Percolation in Metric Networks

    We examine bootstrap percolation in d-dimensional, directed metric graphs in the context of recent measurements of firing dynamics in 2D neuronal cultures. There are two regimes, depending on the graph size N. Large metric graphs are ignited by the occurrence of critical nuclei, which initially occupy an infinitesimal fraction, f_* -> 0, of the graph and then explode throughout a finite fraction. Smaller metric graphs are effectively random in the sense that their ignition requires the initial ignition of a finite, unlocalized fraction of the graph, f_* > 0. The crossover between the two regimes occurs at a size N_* which scales exponentially with the connectivity range \lambda, like N_* \sim \exp(\lambda^d). The neuronal cultures are finite metric graphs of size N \simeq 10^5-10^6 which, for the parameters of the experiment, are effectively random since N << N_*. This explains the apparent contradiction posed by the observed finite f_* in these cultures. Finally, we discuss the dynamics of the firing front.
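    To make the ignition process concrete, here is a rough sketch of bootstrap dynamics on a random geometric ("metric") graph: N points in the unit square connected within range lambda, with threshold dynamics started from an initially activated fraction f0. Using an undirected 2D graph and these particular parameter values is a simplifying assumption for illustration; it is not the directed d-dimensional construction of the paper.

# Illustrative sketch: ignition on an undirected 2D random geometric graph.
import numpy as np
from scipy.spatial import cKDTree

def ignited_fraction(N, lam, m, f0, seed=0):
    rng = np.random.default_rng(seed)
    points = rng.random((N, 2))                 # nodes scattered in the unit square
    neighbours = [[] for _ in range(N)]
    for i, j in cKDTree(points).query_pairs(r=lam):
        neighbours[i].append(j)
        neighbours[j].append(i)
    active = np.zeros(N, dtype=bool)
    active[rng.choice(N, size=int(f0 * N), replace=False)] = True
    changed = True
    while changed:                              # iterate threshold dynamics to fixation
        changed = False
        for v in range(N):
            if not active[v] and sum(active[u] for u in neighbours[v]) >= m:
                active[v] = True
                changed = True
    return active.mean()

# Below the crossover size N_* ~ exp(lambda^d) the graph behaves like a random
# graph: ignition requires a finite initial fraction f_* > 0, so the outcome
# depends strongly on f0.
for f0 in (0.02, 0.05, 0.10):
    print(f0, ignited_fraction(N=3000, lam=0.05, m=4, f0=f0))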