11 research outputs found

    Collective stability of networks of winner-take-all circuits

    Full text link
    The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But these properties depend on the signal gain derived from positive feedback, so there is a critical trade-off between providing feedback strong enough to support sophisticated computation and maintaining overall circuit stability. We consider the question of how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason about the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large stable networks. We use nonlinear Contraction Theory to establish conditions for stability in the fully nonlinear case, and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multi-stable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition.Comment: 7 Figures
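
The amplification/stability trade-off described above can be illustrated with a minimal sketch (toy parameters of my own, not the paper's formulation): two linear threshold units with self-excitation a and shared inhibition b. For a - b < 1 the winner state is stable, yet the winner is amplified with gain 1/(1 - a + b).

```python
import numpy as np

def soft_wta(I, a=1.2, b=1.0, dt=0.01, T=30.0):
    """Euler-integrate dx/dt = -x + max(0, I + a*x - b*sum(x)):
    linear threshold units with self-excitation and shared inhibition."""
    x = np.zeros_like(I)
    for _ in range(int(T / dt)):
        drive = I + a * x - b * x.sum()
        x = x + dt * (-x + np.maximum(drive, 0.0))
    return x

x = soft_wta(np.array([1.0, 0.8]))
print(x)  # winner amplified to I/(1 - a + b) = 1.25, loser silenced
```

With these parameters the circuit is also multi-stable: a state in which the weaker unit has already won is itself persistent, consistent with the state-dependent persistent activities mentioned above.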

    Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks

    Get PDF
    Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of non-saturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven by the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by non-linear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation, and also offer insight into the computational role of dual inhibitory mechanisms in neural circuits.Comment: Accepted manuscript, in press, Neural Computation (2018)
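
The embedding idea can be sketched in toy form (illustrative parameters, not the paper's programming rules): one WTA module per graph node, one unit per color, and inhibitory constraint connections between same-color units of adjacent nodes. For a triangle with three colors, deterministic symmetry-breaking biases suffice; harder instances need the noise-driven exploration the abstract describes.

```python
import numpy as np

# Triangle graph, 3 colors: one WTA module (one unit per color) per node,
# plus inhibitory constraint connections between same-color units of
# adjacent nodes.
edges = [(0, 1), (1, 2), (0, 2)]
V, K = 3, 3
a, b, w = 1.2, 1.0, 0.5      # self-excitation, module inhibition, constraint
x = np.zeros((V, K))
for v in range(V):           # small biases break the symmetry deterministically
    x[v, v % K] = 0.1

dt = 0.01
for _ in range(3000):
    inh = np.zeros((V, K))   # same-color inhibition from neighbors
    for u, v in edges:
        inh[u] += w * x[v]
        inh[v] += w * x[u]
    drive = 1.0 + a * x - b * x.sum(axis=1, keepdims=True) - inh
    x = x + dt * (-x + np.maximum(drive, 0.0))

colors = x.argmax(axis=1)
print(colors, all(colors[u] != colors[v] for u, v in edges))
```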

    Death and rebirth of neural activity in sparse inhibitory networks

    Get PDF
    In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of the neural activity, as expected, but it can also promote neural reactivation. In particular, for globally coupled systems, the number of firing neurons decreases monotonically as the strength of inhibition increases (neurons' death). However, the random pruning of the connections is able to reverse the action of inhibition, i.e. in a sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of the neurons (neurons' rebirth). Thus the number of firing neurons exhibits a minimum at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by the neurons with higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving an analytic mean-field formulation of the problem that provides the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic: the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.Comment: 19 pages, 10 figures, submitted to NJ
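
The globally coupled "neurons' death" branch can be reproduced with a small self-consistent mean-field sketch (assuming a uniform distribution of excitabilities and the standard LIF rate function; this is not the paper's exact formulation):

```python
import numpy as np

def lif_rate(I):
    """Rate of a LIF neuron with unit threshold and membrane time
    constant: zero below threshold, 1/log(I/(I-1)) above."""
    I = np.asarray(I, dtype=float)
    out = np.zeros_like(I)
    sup = I > 1.0
    out[sup] = 1.0 / np.log(I[sup] / (I[sup] - 1.0))
    return out

def active_fraction(g, N=1000, iters=300):
    """Fraction of firing neurons in a globally coupled inhibitory
    population, from the self-consistent mean-field rate nu."""
    a = np.linspace(1.05, 2.0, N)          # heterogeneous excitabilities
    nu = lif_rate(a).mean()
    for _ in range(iters):                 # damped fixed-point iteration
        nu = 0.5 * nu + 0.5 * lif_rate(a - g * nu).mean()
    return (a - g * nu > 1.0).mean()

fracs = [active_fraction(g) for g in (0.0, 0.5, 1.0)]
print(fracs)  # the number of firing neurons shrinks as inhibition grows
```

The rebirth branch requires sparse connectivity and fluctuation-driven firing, which this deterministic mean-field toy deliberately omits.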

    Computation in Dynamically Bounded Asymmetric Systems

    Get PDF
    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably onto this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems
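
A two-unit sketch (weights of my own choosing) shows the expansion-then-contraction behavior: asymmetric cross-inhibition makes the interior fixed point a saddle, so trajectories are first pushed apart along its unstable direction and then each contracts onto a different bounded solution, even though the neurons themselves are unbounded.

```python
import numpy as np

W = np.array([[0.5, -1.0],    # asymmetric: the two cross-inhibitory
              [-0.9, 0.6]])   # weights (and self-weights) differ
b = np.array([1.0, 1.0])

def run(x0, dt=0.01, T=40.0):
    """Euler-integrate dx/dt = -x + max(0, W @ x + b)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

# Trajectories are expelled from the interior saddle along its unstable
# direction, yet each remains bounded and contracts onto an attractor.
xa = run([0.5, 0.0])   # ends near (2.0, 0.0)
xb = run([0.0, 0.5])   # ends near (0.0, 2.5)
print(xa, xb)
```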

    Structural constraints on the emergence of oscillations in multi-population neural networks

    Full text link
    Oscillations arise in many real-world systems and are associated with both functional and dysfunctional states. Whether a network can oscillate can be estimated if we know the strength of interaction between nodes. But in real-world networks (in particular in biological networks) it is usually not possible to know the exact connection weights. Therefore, it is important to determine the structural properties of a network necessary to generate oscillations. Here, we use dynamical systems theory to prove that an odd number of inhibitory nodes and strong enough connections are necessary to generate oscillations in a single-cycle threshold-linear network. We illustrate these analytical results in a biologically plausible network with either firing-rate based or spiking neurons. Our work provides the structural properties necessary to generate oscillations in a network. We use this knowledge to reconcile recent experimental findings about oscillations in basal ganglia with classical findings.Comment: Main text: 30 pages, 5 Figures. Supplementary information: 20 pages, 9 Figures. Supplementary Information is integrated in the main file
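
The structural claim can be checked numerically in a toy ring (illustrative gain, not the paper's construction): a threshold-linear cycle with an odd number of inhibitory connections and strong enough weight oscillates, while an even ring settles into a steady alternating pattern.

```python
import numpy as np

def cycle_net(n, w=-3.0, dt=0.005, T=60.0):
    """Threshold-linear ring x_i' = -x_i + max(0, w*x_{i-1} + 1) in
    which every one of the n connections is inhibitory (w < 0)."""
    x = 0.1 * np.arange(1, n + 1)          # asymmetric initial condition
    trace = []
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(w * np.roll(x, 1) + 1.0, 0.0))
        trace.append(x[0])
    late = np.array(trace[len(trace) * 2 // 3:])
    return late.max() - late.min()         # late-time peak-to-peak amplitude

ptp_odd = cycle_net(3)    # odd number of inhibitory nodes: oscillates
ptp_even = cycle_net(4)   # even ring: settles to a steady pattern
print(ptp_odd, ptp_even)
```

At the symmetric fixed point of the 3-ring the complex eigenvalues have positive real part for |w| > 2, so the oscillation here is not an artifact of the particular initial condition.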

    Winner-take-all in a phase oscillator system with adaptation.

    Get PDF
    We consider a system of generalized phase oscillators with a central element and radial connections. In contrast to conventional phase oscillators of the Kuramoto type, the dynamic variables in our system include not only the phase of each oscillator but also the natural frequency of the central oscillator and the connection strengths from the peripheral oscillators to the central oscillator. With appropriate parameter values the system demonstrates winner-take-all behavior in terms of the competition between peripheral oscillators for synchronization with the central oscillator. Conditions for the winner-take-all regime are derived for stationary and non-stationary types of system dynamics. Bifurcation analysis of the transition from stationary to non-stationary winner-take-all dynamics is presented. A new bifurcation type, called a Saddle Node on Invariant Torus (SNIT) bifurcation, was observed and is described in detail. Computer simulations of the system allow an optimal choice of parameters for winner-take-all implementation
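
A toy version of such a system (adaptation rules of my own, not the paper's equations) shows the winner-take-all outcome: the peripheral oscillator that the central element can frequency-lock to captures the connection strength, while the other connection decays.

```python
import numpy as np

dt, T, eps = 0.01, 400.0, 0.05
omega_p = np.array([1.0, 2.0])     # peripheral natural frequencies
theta_p = np.array([0.0, 1.0])     # peripheral phases
theta_c, omega_c = 0.5, 1.1        # central phase; adaptive natural frequency
w = np.array([0.5, 0.5])           # adaptive connection strengths

for _ in range(int(T / dt)):
    d = theta_p - theta_c
    theta_c = theta_c + dt * (omega_c + np.sum(w * np.sin(d)))
    theta_p = theta_p + dt * omega_p
    omega_c = omega_c + dt * eps * np.sum(w * np.sin(d))  # frequency adaptation
    w = w + dt * eps * (np.cos(d) - w)                    # weight adaptation

print(w, omega_c)  # the oscillator the center locks to wins the weight
```

The phase-locked oscillator keeps cos(d) near 1, so its weight grows toward 1, while the unlocked oscillator's drifting phase averages cos(d) to zero and its weight decays.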

    Modeling the Evolution of Beliefs Using an Attentional Focus Mechanism

    Get PDF
    To make decisions in everyday life we often must first infer the set of environmental features that are relevant for the current task. Here we investigated the computational mechanisms underlying the evolution of beliefs about the relevance of environmental features in a dynamic and noisy environment. For this purpose we designed a probabilistic Wisconsin card sorting task (WCST) with belief solicitation, in which subjects were presented with stimuli composed of multiple visual features. At each moment in time a particular feature was relevant for obtaining reward, and participants had to infer which feature was relevant and report their beliefs accordingly. To test the hypothesis that attentional focus modulates the belief update process, we derived and fitted several probabilistic and non-probabilistic behavioral models, which incorporate either a dynamical model of attentional focus, in the form of a hierarchical winner-take-all neuronal network, or a diffusion model without attention-like features. We used Bayesian model selection to identify the most likely generative model of subjects’ behavior and found that attention-like features in the behavioral model are essential for explaining subjects’ responses. Furthermore, we demonstrate a method for integrating both connectionist and Bayesian models of decision making within a single framework that allowed us to infer hidden belief processes of human subjects
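
For intuition, a minimal Bayesian observer for a WCST-like task can be sketched as follows (hypothetical hazard rate and reward probabilities; this is not one of the paper's fitted models):

```python
import numpy as np

K, h = 3, 0.05              # number of features, per-trial switch probability
p_hit = 0.8                 # P(reward | chosen feature is the relevant one)
p_miss = 0.2                # P(reward | some other feature is relevant)

def update(belief, chosen, reward):
    """One Bayesian belief update over which feature is relevant."""
    like = np.where(np.arange(K) == chosen,
                    p_hit if reward else 1 - p_hit,
                    p_miss if reward else 1 - p_miss)
    belief = belief * like
    belief = belief / belief.sum()
    # hazard step: the relevant feature may switch before the next trial
    return (1 - h) * belief + h * (1 - belief) / (K - 1)

b = np.ones(K) / K
for _ in range(5):          # five rewarded choices of feature 0
    b = update(b, chosen=0, reward=True)
print(b)  # belief concentrates on feature 0
```

The paper's point is that such an ideal observer alone fits human data worse than a model whose updates are gated by a winner-take-all attentional focus; this sketch only shows the belief-update backbone.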

    Complex Dynamics in Winner-Take-All Neural Nets With Slow Inhibition

    No full text
    We consider a layer of excitatory neurons with small asymmetric excitatory connections and strong coupling to a single inhibitory interneuron. If the inhibition is fast, the network behaves as a winner-take-all network in which one cell fires at the expense of all others. As the inhibition slows down, oscillatory behavior begins. This is followed by a symmetric rotating solution in which neurons share the activity in a round-robin fashion. Finally, if the inhibition is sufficiently slower than excitation the neurons completely synchronize to a global periodic solution. Conditions guaranteeing stable synchrony are given
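
The role of the inhibitory time constant can be sketched with a small rate model (illustrative parameters, not the paper's formulation): fast inhibition yields a single stable winner, while slowing the same inhibitory feedback destabilizes the winner state into sustained oscillation.

```python
import numpy as np

def run(tau_inh, I=(1.0, 0.8, 0.6), a=1.2, g=1.0, dt=0.005, T=100.0):
    """Excitatory linear threshold units with self-excitation a, all
    inhibited by a single interneuron G with time constant tau_inh."""
    I = np.array(I)
    E, G = np.zeros(3), 0.0
    trace = []
    for _ in range(int(T / dt)):
        E = E + dt * (-E + np.maximum(I + a * E - g * G, 0.0))
        G = G + dt * (E.sum() - G) / tau_inh
        trace.append(E.sum())
    late = np.array(trace[len(trace) * 2 // 3:])
    return E, late.max() - late.min()

E_fast, ptp_fast = run(tau_inh=0.1)   # fast inhibition: one stable winner
E_slow, ptp_slow = run(tau_inh=10.0)  # slow inhibition: sustained oscillation
print(E_fast, ptp_fast, ptp_slow)
```

In the winner subsystem the trace of the Jacobian changes sign as tau_inh grows past the excitatory gain margin, which is the onset of the oscillatory behavior described above.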

    Neurons' death and rebirth in sparse heterogeneous inhibitory networks

    Get PDF
    Inhibition is a key aspect of neural dynamics playing a fundamental role for the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures and the comprehension of their dynamics is strategical for the understanding of neural processing. In this paper, we discuss a general mechanism present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of the neural activity, as expected, but it can also promote neural reactivation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neurons' death). The introduction of a sparse connectivity in the network is able to reverse the action of inhibition, i.e. a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of the neurons (neurons' rebirth). Specifically, for small synaptic strengths, one observes an asynchronous activity of nearly independent supra-threshold neurons. By increasing the inhibition, a transition occurs towards a regime where the neurons are all effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain this transition from a mean-driven to a fluctuation-driven regime by deriving an analytic mean field approach able to provide the fraction of active neurons together with the first two moments of the firing time distribution. We show that, by varying the synaptic time scale, the mechanism underlying the reported phenomenon remains unchanged. However, for sufficiently slow synapses the effect becomes dramatic. For small synaptic coupling the fraction of active neurons is frozen over long times and their firing activity is perfectly regular. For larger inhibition the active neurons display an irregular bursting behaviour induced by the emergence of correlations in the current fluctuations. In this latter regime the model gives predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum

    Synchronization Analysis of Winner-Take-All Neuronal Networks

    Get PDF
    With the physical limitations of current CMOS technology, it becomes necessary to design and develop new methods to perform simple and complex computations. Nature is efficient, so many in the scientific community attempt to mimic it when optimizing or creating new systems and devices. The human brain is looked to as an efficient computing device, inspiring strong interest in developing powerful computer systems that resemble its architecture and behavior, such as neural networks. There is much research focusing both on circuit designs that behave like neurons and on arrangements of these electronic neurons to compute complex operations. It has been shown previously that the synchronization characteristics of neural oscillators can be used not only for primitive computation functions such as convolution but also for complex non-Boolean computations. With strong interest in the research community to develop biologically representative neural networks, this dissertation analyzes and simulates biologically plausible networks, the four-dimensional Hodgkin-Huxley and the simpler two-dimensional FitzHugh-Nagumo neural models, fashioned into winner-take-all neuronal networks. The synchronization behavior of these neurons coupled together is studied in detail. Different neural network topologies are considered, including lateral inhibition and inhibition via a global interneuron. Then, this dissertation analyzes the winner-take-all behaviors, in terms of both firing rates and phases, of neuronal networks with different topologies. A technique based on phase response curves is suggested for the analysis of the synchronization phase characteristics of winner-take-all networks. Simulations are performed to validate the analytical results. This study promotes the understanding of winner-take-all operations in biological neuronal networks and provides a fundamental basis for applications of winner-take-all networks in modern computing systems