
    Impact of targeted attack on the spontaneous activity in spatial and biologically-inspired neuronal networks

    We study the structural and dynamical consequences of damage in spatial neuronal networks. Inspired by real in vitro networks, we construct directed networks embedded in a two-dimensional space and follow biological rules for designing the wiring of the system. As a result, synthetic cultures display strong metric correlations similar to those observed in real experiments. Neuronal dynamics, in turn, is incorporated through the Izhikevich model, with parameters derived from observations in real cultures. We consider two damage scenarios: targeted attacks on the neurons with the highest out-degree, and random failures. By analyzing the evolution of both the giant connected component and the dynamical patterns of the neurons as nodes are removed, we observe that network activity halts after removal of 50% of the nodes in targeted attacks, much lower than the 70% node removal required in the case of random failures. Notably, the decrease of neuronal activity is not gradual. Both damage scenarios portray "boosts" of activity just before full silencing that are not present in equivalent random (Erdős–Rényi) graphs. These boosts correspond to small, spatially compact subnetworks that are able to maintain high levels of activity. Since these subnetworks are absent in the equivalent random graphs, we hypothesize that metric correlations facilitate the existence of local circuits sufficiently integrated to maintain activity, shaping an intrinsic mechanism for resilience.
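    A minimal sketch of the damage protocol described above, assuming networkx and a toy directed Erdős–Rényi graph in place of the metric-correlated synthetic cultures: nodes are removed either by highest out-degree (targeted attack) or uniformly at random (failure), while the relative size of the giant connected component is tracked.

```python
# Sketch (not the authors' code): targeted attacks on high out-degree nodes vs.
# random failures on a directed network, tracking the giant weakly connected
# component as nodes are removed.
import random
import networkx as nx

def giant_component_fraction(G, n_total):
    """Fraction of the original nodes in the largest weakly connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.weakly_connected_components(G), key=len)
    return len(largest) / n_total

def attack_curve(G, targeted=True, seed=0):
    """Remove nodes one by one and record the giant-component fraction."""
    rng = random.Random(seed)
    G = G.copy()
    n_total = G.number_of_nodes()
    curve = []
    while G.number_of_nodes() > 0:
        if targeted:
            # Targeted attack: remove the node with the highest out-degree.
            node = max(G.nodes, key=lambda v: G.out_degree(v))
        else:
            # Random failure: remove a uniformly chosen node.
            node = rng.choice(list(G.nodes))
        G.remove_node(node)
        curve.append(giant_component_fraction(G, n_total))
    return curve

# Toy stand-in for a synthetic culture: a directed Erdős–Rényi graph.
G0 = nx.gnp_random_graph(500, 0.02, directed=True, seed=1)
targeted_curve = attack_curve(G0, targeted=True)
random_curve = attack_curve(G0, targeted=False)
```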

    Modular architecture facilitates noise-driven control of synchrony in neuronal networks

    Acknowledgments: H.Y., A.H.-I., and S.S. acknowledge MEXT Grant-in-Aid for Transformative Research Areas (B) “Multicellular Neurobiocomputing” (21H05164), JSPS KAKENHI (18H03325, 19H00846, 20H02194, 20K20550, 22H03657, 22K19821, 22KK0177, and 23H03489), JST-PRESTO (JMPJPR18MB), JST-CREST (JPMJCR19K3), and the Tohoku University RIEC Cooperative Research Project Program for financial support. F.P.S., V.P., and J.Z. received support from the Max Planck Society. F.P.S. acknowledges funding by SMARTSTART, the joint training program in computational neuroscience by the VolkswagenStiftung and the Bernstein Network. F.P.S. and V.P. were funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), SFB-1528 “Cognition of Interaction”. V.P. was supported by the DFG under Germany’s Excellence Strategy EXC 2067/1-390729940. V.B. and A.L. were supported by a Sofja Kovalevskaja Award from the Alexander von Humboldt Foundation, endowed by the Federal Ministry of Education and Research. A.L. is a member of the Machine Learning Cluster of Excellence EXC 2064/1-39072764. M.A.M. acknowledges the Spanish Ministry and Agencia Estatal de Investigación (AEI) through the I+D+i project PID2020-113681GB-I00, financed by MICIN/AEI/10.13039/501100011033 and FEDER “A way to make Europe”, and the Consejería de Conocimiento, Investigación y Universidad, Junta de Andalucía and European Regional Development Fund (P20-00173) for financial support. J.Z. received financial support from the Joachim Herz Stiftung. J.S. acknowledges Horizon 2020 Future and Emerging Technologies (grant agreement 964877-NEUChiP), Ministerio de Ciencia, Innovación y Universidades (PID2019-108842GB-C21), and Departament de Recerca i Universitats, Generalitat de Catalunya (2017-SGR-1061 and 2021-SGR-00450) for financial support.

    High-level information processing in the mammalian cortex requires both segregated processing in specialized circuits and integration across multiple circuits. One possible way to implement these seemingly opposing demands is by flexibly switching between states with different levels of synchrony. However, the mechanisms behind the control of complex synchronization patterns in neuronal networks remain elusive. Here, we use precision neuroengineering to manipulate and stimulate networks of cortical neurons in vitro, in combination with an in silico model of spiking neurons and a mesoscopic model of stochastically coupled modules, to show that (i) a modular architecture enhances the sensitivity of the network to noise delivered as external asynchronous stimulation, and that (ii) the persistent depletion of synaptic resources in stimulated neurons is the underlying mechanism for this effect. Together, our results demonstrate that the inherent dynamical state in structured networks of excitable units is determined by both its modular architecture and the properties of the external inputs.
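    The mesoscopic ingredient highlighted above, stochastically coupled modules whose synaptic resources are persistently depleted by external asynchronous stimulation, can be caricatured in a few lines. The sketch below is our own toy construction under assumed parameter names and values, not the paper's model; in this toy, stronger external noise keeps resources low and thereby suppresses network-wide synchrony.

```python
# Toy model (illustrative assumptions only): a few stochastically coupled modules
# whose synaptic resources deplete when they burst; external asynchronous
# stimulation ("noise") keeps resources low and suppresses all-module synchrony.
import numpy as np

rng = np.random.default_rng(0)

n_modules = 4        # number of modules (assumed)
coupling = 0.3       # inter-module coupling strength (assumed)
noise_drive = 0.05   # strength of external asynchronous stimulation (assumed)
depletion = 0.2      # fraction of synaptic resources consumed by a burst
recovery_tau = 50.0  # resource recovery time constant, in steps
steps = 2000

resources = np.ones(n_modules)      # synaptic resources, 1 = fully recovered
active = np.zeros(n_modules, bool)  # which modules burst in the current step
trace = np.zeros((steps, n_modules))

for t in range(steps):
    # Drive on each module: weak spontaneous term, external noise, and input
    # from currently active modules, all gated by the available resources.
    drive = (0.01
             + noise_drive * rng.random(n_modules)
             + coupling * active.sum() / n_modules)
    p_burst = 1.0 - np.exp(-drive * resources)
    active = rng.random(n_modules) < p_burst

    # Resource dynamics: bursts deplete resources, quiescence lets them recover.
    resources = np.where(active,
                         resources * (1.0 - depletion),
                         resources + (1.0 - resources) / recovery_tau)
    trace[t] = active

# Crude synchrony index: fraction of steps in which every module bursts together.
synchrony = (trace.sum(axis=1) == n_modules).mean()
print(f"synchrony index: {synchrony:.3f}")
```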

    Impact of Physical Obstacles on the Structural and Effective Connectivity of in silico Neuronal Circuits

    Scaffolds and patterned substrates are among the most successful strategies to dictate the connectivity between neurons in culture. Here, we used numerical simulations to investigate the capacity of physical obstacles placed on a flat substrate to shape structural connectivity, and in turn collective dynamics and effective connectivity, in biologically realistic neuronal networks. We considered μm-sized obstacles placed in mm-sized networks. Three main obstacle shapes were explored, namely crosses, circles and triangles of isosceles profile. They occupied either a small area fraction of the substrate or populated it entirely in a periodic manner. From the point of view of structure, all obstacles promoted short length-scale connections, shifted the in- and out-degree distributions toward lower values, and increased the modularity of the networks. The capacity of obstacles to shape distinct structural traits depended on their density and the ratio between axonal length and substrate diameter. For high densities, different features were triggered depending on obstacle shape, with crosses trapping axons in their vicinity and triangles funneling axons along the direction opposite to their tip. From the point of view of dynamics, obstacles reduced the capacity of networks to spontaneously activate, with triangles in turn strongly dictating the direction of activity propagation. Effective connectivity networks, inferred using transfer entropy, exhibited distinct modular traits, indicating that the presence of obstacles facilitated the formation of local effective microcircuits. Our study illustrates the potential of physical constraints to shape structural blueprints and remodel collective activity, and may guide investigations aimed at mimicking organizational traits of biological neuronal circuits.
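    The effective-connectivity step mentioned above relies on transfer entropy. The following minimal sketch, a lag-1 estimator on binary time series written with numpy (an assumption of ours, not the authors' exact pipeline), shows how such pairwise directed scores can be computed from binarized activity.

```python
# Sketch of a pairwise transfer-entropy estimate on binarized activity:
# TE(X -> Y) with history length 1, in bits.
import numpy as np

def transfer_entropy(x, y, eps=1e-12):
    """TE(X -> Y) for binary arrays x, y of equal length, history length 1."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    y_next, y_past, x_past = y[1:], y[:-1], x[:-1]

    # Joint probabilities over the 2x2x2 states (y_next, y_past, x_past).
    joint = np.zeros((2, 2, 2))
    for a, b, c in zip(y_next, y_past, x_past):
        joint[a, b, c] += 1
    joint /= joint.sum()

    p_ypx = joint.sum(axis=0)        # p(y_past, x_past)
    p_yny = joint.sum(axis=2)        # p(y_next, y_past)
    p_yp = joint.sum(axis=(0, 2))    # p(y_past)

    te = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                p = joint[a, b, c]
                if p > eps:
                    cond_full = p / (p_ypx[b, c] + eps)        # p(y_next | y_past, x_past)
                    cond_self = p_yny[a, b] / (p_yp[b] + eps)  # p(y_next | y_past)
                    te += p * np.log2(cond_full / (cond_self + eps))
    return te

# Toy usage: y is a noisy one-step-delayed copy of x, so TE(x -> y) > TE(y -> x).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1) ^ (rng.random(5000) < 0.2)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```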

    Leader neurons in leaky integrate and fire neural network simulations

    In this paper, we highlight the topological properties of leader neurons, whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively, network spike) more often than expected by chance given its mean firing rate. This means that leader neurons have some burst-triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate 2D neural networks. For our simulations, we choose the leaky integrate-and-fire (LIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our LIF model produces stable leader neurons in the population bursts that we simulate. These leader neurons are excitatory and have a low membrane potential firing threshold. Beyond these first two properties, the conditions required for a neuron to be a leader are difficult to identify and seem to depend on several parameters of the simulations themselves. However, a detailed linear analysis reveals a trend in the properties required for a neuron to be a leader. Our main finding is that a leader neuron sends signals to many excitatory neurons and to only a few inhibitory neurons, and receives signals from only a few other excitatory neurons. Our linear analysis exhibits five essential properties of leader neurons, each with a different relative importance. This means that, given a neural network with a fixed mean number of connections per neuron, our analysis provides a way of predicting which neurons are good leader neurons and which are not. Our prediction formula correctly assesses leadership for at least ninety percent of the neurons.
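    As a rough illustration of the leader-neuron criterion described above (spiking within the onset window of a burst more often than expected from the mean firing rate), the sketch below assigns each neuron a z-score against a Poisson chance level. The input format, window length, and helper names are hypothetical, not taken from the paper.

```python
# Hypothetical detection pipeline: flag candidate "leader" neurons as those that
# spike in the onset window of population bursts more often than chance, given
# their mean firing rate.
import numpy as np

def leader_scores(spike_times, burst_onsets, window=0.01, recording_time=600.0):
    """
    spike_times: list of 1D arrays, spike times (s) for each neuron.
    burst_onsets: 1D array of burst onset times (s).
    window: onset window (s) in which a spike counts as "leading" a burst.
    Returns one z-score per neuron comparing observed lead counts to chance.
    """
    n_bursts = len(burst_onsets)
    scores = []
    for st in spike_times:
        # Observed: number of bursts this neuron leads (spike inside the onset window).
        leads = sum(np.any((st >= t0) & (st < t0 + window)) for t0 in burst_onsets)
        # Chance: probability of at least one spike in the window for a Poisson
        # neuron with the same mean firing rate.
        rate = len(st) / recording_time
        p_chance = 1.0 - np.exp(-rate * window)
        mean, var = n_bursts * p_chance, n_bursts * p_chance * (1.0 - p_chance)
        scores.append((leads - mean) / np.sqrt(var + 1e-12))
    return np.array(scores)

# Toy usage with random spikes; neurons with a large positive z-score (e.g. > 3)
# would be candidate leaders.
rng = np.random.default_rng(0)
spikes = [np.sort(rng.uniform(0, 600.0, rng.integers(200, 2000))) for _ in range(20)]
onsets = np.sort(rng.uniform(0, 600.0, 50))
print(leader_scores(spikes, onsets).round(2))
```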