1,035 research outputs found

    Collective stability of networks of winner-take-all circuits

    Full text link
    The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But these properties depend on the signal gain derived from positive feedback, so there is a critical trade-off between providing feedback strong enough to support these sophisticated computations and maintaining overall circuit stability. We consider the question of how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason about the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large stable networks. We use nonlinear Contraction Theory to establish conditions for stability in the fully nonlinear case, and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multi-stable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition. (Comment: 7 figures)
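
    As a rough illustration of the circuit class discussed above, the sketch below simulates a minimal rate-based soft WTA: excitatory units with self-excitation alpha share a single inhibitory unit with loop gains beta1 and beta2. The parameter values are illustrative placeholders chosen so the linearized loop for the active winner is stable (alpha - beta1*beta2 < 1), not the bounds derived in the paper.

```python
import numpy as np

# Minimal rate-based soft winner-take-all (WTA) sketch.
# Excitatory units with self-excitation `alpha` share one inhibitory unit,
# coupled with gains `beta1` (inhibitory -> excitatory) and `beta2`
# (excitatory -> inhibitory). All values are illustrative placeholders.

def relu(v):
    return np.maximum(v, 0.0)

def simulate_wta(inputs, alpha=1.2, beta1=2.0, beta2=0.25,
                 tau=0.02, dt=1e-3, steps=2000):
    x = np.zeros(len(inputs))   # excitatory firing rates
    y = 0.0                     # shared inhibitory rate
    for _ in range(steps):
        dx = (-x + relu(inputs + alpha * x - beta1 * y)) / tau
        dy = (-y + relu(beta2 * x.sum())) / tau
        x += dt * dx
        y += dt * dy
    return x

# The unit with the largest input is amplified with steady-state gain
# ~ 1 / (1 - alpha + beta1 * beta2), while the others are driven to zero.
print(simulate_wta(np.array([1.0, 1.1, 0.9])))
```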

    Emergence of Modular Structure in a Large-Scale Brain Network with Interactions between Dynamics and Connectivity

    Get PDF
    A network of 32 or 64 connected neural masses, each representing a large population of interacting excitatory and inhibitory neurons and generating an electroencephalography/magnetoencephalography-like output signal, was used to demonstrate how an interaction between dynamics and connectivity might explain the emergence of complex network features, in particular modularity. Network evolution was modeled by two processes: (i) synchronization-dependent plasticity (SDP) and (ii) growth-dependent plasticity (GDP). In the case of SDP, connections between neural masses were strengthened when they were strongly synchronized, and were weakened when they were not. GDP was modeled as a homeostatic process with random, distance-dependent outgrowth of new connections between neural masses. GDP alone resulted in stable networks with distance-dependent connection strengths and typical small-world features, but no degree correlations and only weak modularity. SDP applied to random networks induced clustering, but no clear modules. Stronger modularity evolved only through an interaction of SDP and GDP, with the number and size of the modules depending on the relative strength of both processes, as well as on the size of the network. Lesioning part of the network, after a stable state was achieved, resulted in a temporary disruption of the network structure. The model gives a possible scenario to explain how modularity can arise in developing brain networks, and makes predictions about the time course of network changes during development and following acute lesions.
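
    The interaction of the two plasticity processes can be caricatured in a few lines. The toy below uses Kuramoto phase oscillators as stand-ins for the paper's neural masses (a deliberate simplification): SDP strengthens connections between phase-synchronized pairs and weakens the rest, while GDP grows new distance-dependent connections for nodes whose total input falls below a homeostatic target. All rates, thresholds, and the ring geometry are assumptions for illustration.

```python
import numpy as np

# Toy interaction of the abstract's two plasticity rules, with Kuramoto
# phase oscillators standing in for the neural masses (a simplification).
# All rates, thresholds, and the ring geometry are illustrative assumptions.

rng = np.random.default_rng(1)
N = 32
theta = rng.uniform(0, 2 * np.pi, N)     # oscillator phases
omega = rng.normal(1.0, 0.05, N)         # natural frequencies
W = 0.1 * rng.random((N, N))             # connection strengths
np.fill_diagonal(W, 0.0)

idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(d, N - d)              # ring distance used by GDP

dt, eta, lam, target = 0.05, 0.01, 4.0, 0.1 * N

for step in range(5000):
    # Phase dynamics (stand-in for neural-mass activity).
    theta += dt * (omega + (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1))
    # (i) SDP: strengthen strongly synchronized pairs, weaken the rest.
    sync = np.cos(theta[:, None] - theta[None, :])
    W += eta * np.where(sync > 0.8, 1.0, -0.1) * (W > 0)
    np.clip(W, 0.0, 1.0, out=W)
    # (ii) GDP: homeostatic, distance-dependent outgrowth of new connections
    # for nodes whose total input strength fell below the target.
    weak = W.sum(axis=1) < target
    grow = weak[:, None] & (rng.random((N, N)) < 0.01 * np.exp(-dist / lam))
    W[grow & (W == 0.0)] = 0.05
    np.fill_diagonal(W, 0.0)

# Modules show up as blocks of strong weights among mutually synchronized nodes.
print("mean degree:", (W > 0).sum(axis=1).mean())
```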

    Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons

    Get PDF
    We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds, where in each round neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the major open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Θ(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons, where the neuron y is required to fire for t consecutive rounds with probability at least 1 - δ and should stop firing after at most 2t rounds with probability 1 - δ, for some input parameter δ in (0,1). Our key result is a construction of a neural timer with O(log log 1/δ) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Ω(min{log log 1/δ, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS '90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring only a small overhead w.r.t. the number of neurons and computation time.
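
    The Θ(log t) deterministic bound reflects the fact that the timer's state fits in ⌈log₂(t+1)⌉ bits. The toy below makes that counting argument concrete with a plain binary countdown driven by the input spikes; it illustrates the state-size argument only and is not the paper's actual threshold-gate construction.

```python
import math

# Illustrates why Θ(log t) states suffice for the deterministic timer: the
# countdown fits in ceil(log2(t+1)) bits. This is a plain binary counter,
# not the paper's threshold-gate construction.

def neural_timer(input_spikes, t):
    bits = math.ceil(math.log2(t + 1))   # auxiliary state, in "neurons" (bits)
    counter = 0                          # joint binary state of those bits
    fired = []
    for x in input_spikes:
        fired.append(counter > 0)        # y fires while the countdown runs
        if x:
            counter = t                  # a spike on x (re)loads the timer
        else:
            counter = max(0, counter - 1)
    return bits, fired

bits, y = neural_timer([1, 0, 0, 0, 0, 0, 0, 0], t=5)
print(bits, [int(v) for v in y])   # 3 auxiliary bits; y fires for t = 5 rounds
```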

    Modular architecture facilitates noise-driven control of synchrony in neuronal networks

    Get PDF
    High-level information processing in the mammalian cortex requires both segregated processing in specialized circuits and integration across multiple circuits. One possible way to implement these seemingly opposing demands is by flexibly switching between states with different levels of synchrony. However, the mechanisms behind the control of complex synchronization patterns in neuronal networks remain elusive. Here, we use precision neuroengineering to manipulate and stimulate networks of cortical neurons in vitro, in combination with an in silico model of spiking neurons and a mesoscopic model of stochastically coupled modules, to show that (i) a modular architecture enhances the sensitivity of the network to noise delivered as external asynchronous stimulation, and that (ii) the persistent depletion of synaptic resources in stimulated neurons is the underlying mechanism for this effect. Together, our results demonstrate that the inherent dynamical state in structured networks of excitable units is determined by both its modular architecture and the properties of the external inputs.
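
    The proposed mechanism, persistent depletion of synaptic resources under asynchronous stimulation, can be sketched with a mesoscopic toy: coupled excitable modules whose transmission is limited by a depressing resource variable (in the style of Tsodyks-Markram short-term depression, an assumption here, not the paper's model). Stronger external noise keeps the resource depleted, which weakens the effective coupling between modules; all parameters are illustrative.

```python
import numpy as np

# Mesoscopic sketch: M coupled excitable modules with a depressing synaptic
# resource r, driven by asynchronous external noise spikes at rate h.
# Strong noise keeps r depleted, weakening the effective coupling u * r.
# All parameters are illustrative assumptions.

rng = np.random.default_rng(3)

def run(h, M=4, steps=50_000, dt=1e-3, tau_a=0.02, tau_r=0.5, u=0.4, amp=2.0):
    A = 0.3 * np.ones((M, M)) + 1.2 * np.eye(M)   # intra-module > inter-module
    a, r = np.zeros(M), np.ones(M)                # activity and synaptic resources
    r_sum = 0.0
    for _ in range(steps):
        noise = (rng.random(M) < h * dt).astype(float)   # async external spikes
        drive = A @ (u * r * a) + amp * noise            # resource-limited coupling
        a += dt * (-a + np.tanh(np.maximum(drive, 0.0))) / tau_a
        r += dt * (1.0 - r) / tau_r - u * r * a * dt / tau_a  # activity depletes r
        r_sum += r.mean()
    return r_sum / steps

print("mean resources, weak noise:  ", run(h=1.0))    # resources recover between spikes
print("mean resources, strong noise:", run(h=50.0))   # persistent depletion
```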

    Dynamics of Coupled Noisy Neural Oscillators with Heterogeneous Phase Resetting Curves

    Get PDF
    Pulse-coupled phase oscillators have been utilized in a variety of contexts. Motivated by neuroscience, we study a network of pulse-coupled phase oscillators receiving independent and correlated noise. An additional physiological attribute, heterogeneity, is incorporated in the phase resetting curve (PRC), which is a vital entity for modeling the biophysical dynamics of oscillators. An accurate probability density or mean field description is high dimensional, requiring reduction methods for tractability. We present a reduction method to capture the pairwise synchrony via the probability density of the phase differences, and explore the robustness of the method. We find the reduced methods can capture some of the synchronous dynamics in these networks. The variance of the noisy period (or spike times) in this network is also considered. In particular, we find phase oscillators with predominantly positive PRCs (type 1) have larger variance with inhibitory pulse-coupling than PRCs with larger negative regions (type 2), but with excitatory pulse-coupling the opposite happens: type 1 oscillators have lower variability than type 2. An analysis of this phenomenon is provided via an asymptotic approximation with weak noise and weak coupling, where we demonstrate how the individual PRC alters variability with pulse-coupling. We make comparisons of the phase oscillators to full oscillator networks and discuss the utility and shortcomings of the approach.
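
    To probe the reported asymmetry between PRC types, the sketch below pulse-couples two noisy phase oscillators using the canonical type 1 PRC (1 - cos θ, nonnegative) and type 2 PRC (sin θ, with a negative region), and compares spike-period variability under excitatory and inhibitory coupling. The PRC shapes, noise level, and coupling strength eps are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Two noisy pulse-coupled phase oscillators with canonical PRCs:
# type 1: 1 - cos(theta) (nonnegative), type 2: sin(theta) (negative region).
# eps > 0 is excitatory pulse-coupling, eps < 0 inhibitory. Parameters are
# illustrative assumptions used to probe the variability effect.

rng = np.random.default_rng(4)

def isi_std(prc, eps, omega=1.0, sigma=0.1, dt=1e-3, T=1000.0):
    theta = rng.uniform(0, 2 * np.pi, 2)
    spikes = [[], []]
    for step in range(int(T / dt)):
        theta += omega * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        for i in range(2):
            if theta[i] >= 2 * np.pi:            # oscillator i fires a pulse
                theta[i] -= 2 * np.pi
                spikes[i].append(step * dt)
                j = 1 - i
                theta[j] += eps * prc(theta[j])  # pulse shifts partner's phase
    return np.std(np.diff(spikes[0]))            # spike-period variability

type1 = lambda th: 1.0 - np.cos(th)
type2 = lambda th: np.sin(th)

for name, prc in (("type 1", type1), ("type 2", type2)):
    print(name, "inhibitory:", isi_std(prc, -0.1), " excitatory:", isi_std(prc, +0.1))
```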