Collective stability of networks of winner-take-all circuits
The neocortex has a remarkably uniform neuronal organization, suggesting that
common principles of processing are employed throughout its extent. In
particular, the patterns of connectivity observed in the superficial layers of
the visual cortex are consistent with the recurrent excitation and inhibitory
feedback required for cooperative-competitive circuits such as the soft
winner-take-all (WTA). WTA circuits offer interesting computational properties
such as selective amplification, signal restoration, and decision making. But,
these properties depend on the signal gain derived from positive feedback, and
so there is a critical trade-off between providing feedback strong enough to
support the sophisticated computations, while maintaining overall circuit
stability. We consider the question of how to reason about stability in very
large distributed networks of such circuits. We approach this problem by
approximating the regular cortical architecture as many interconnected
cooperative-competitive modules. We demonstrate that by properly understanding
the behavior of this small computational module, one can reason about the
stability and convergence of very large networks composed of these modules. We
obtain parameter ranges in which the WTA circuit operates in a high-gain
regime, is stable, and can be aggregated arbitrarily to form large stable
networks. We use nonlinear Contraction Theory to establish conditions for
stability in the fully nonlinear case, and verify these solutions using
numerical simulations. The derived bounds allow modes of operation in which the
WTA network is multi-stable and exhibits state-dependent persistent activities.
Our approach is sufficiently general to reason systematically about the
stability of any network, biological or technological, composed of networks of
small modules that express competition through shared inhibition.
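As a concrete illustration of these dynamics, the sketch below simulates a small rate-based soft WTA built from non-saturating linear threshold units and one shared inhibitory unit. The network size and all parameter values are illustrative assumptions, not the bounds derived in the paper.

```python
import numpy as np

# Minimal rate-based soft WTA: N excitatory linear-threshold units with
# self-excitation alpha, coupled through one shared inhibitory unit.
# All values are illustrative, not the parameter ranges from the paper.
f = lambda u: np.maximum(u, 0.0)          # non-saturating linear threshold

N, alpha, beta = 4, 1.2, 1.0              # units, excitatory gain, inhibitory gain
I = np.array([1.0, 1.05, 0.9, 0.95])      # external inputs; unit 1 is strongest
x = np.zeros(N)                           # excitatory firing rates
y = 0.0                                   # shared inhibitory firing rate
dt = 0.01

for _ in range(3000):
    dx = -x + f(alpha * x - beta * y + I)  # recurrent excitation minus inhibition
    dy = -y + f(beta * np.sum(x))          # inhibition tracks total excitation
    x, y = x + dt * dx, y + dt * dy

print(np.round(x, 3))  # unit 1 is selected and amplified; the rest are suppressed
```

In this toy circuit the winner settles at I_win/(2 - alpha), so an excitatory gain alpha between 1 and 2 amplifies the winner (here by a factor of 1.25) while the loop stays stable; pushing alpha toward 2 raises the gain at the cost of stability margin, which is the trade-off the abstract refers to.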
Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks
Finding actions that satisfy the constraints imposed by both external inputs
and internal representations is central to decision making. We demonstrate that
some important classes of constraint satisfaction problems (CSPs) can be solved
by networks composed of homogeneous cooperative-competitive modules that have
connectivity similar to motifs observed in the superficial layers of neocortex.
The winner-take-all modules are sparsely coupled by programming neurons that
embed the constraints onto the otherwise homogeneous modular computational
substrate. We show rules that embed any instance of the CSPs planar four-color
graph coloring, maximum independent set, and Sudoku on this substrate, and
provide mathematical proofs that guarantee these graph coloring problems will
converge to a solution. The network is composed of non-saturating linear
threshold neurons. Their lack of right saturation allows the overall network to
explore the problem space driven through the unstable dynamics generated by
recurrent excitation. The direction of exploration is steered by the constraint
neurons. While many problems can be solved using only linear inhibitory
constraints, network performance on hard problems benefits significantly when
these negative constraints are implemented by non-linear multiplicative
inhibition. Overall, our results demonstrate the importance of instability
rather than stability in network computation, and also offer insight into the
computational role of dual inhibitory mechanisms in neural circuits.
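A hedged toy version of this embedding, with assumed weights and direct lateral inhibition as shorthand for the paper's inhibitory mechanisms: each node of a graph becomes a small WTA over colors, and constraint connections inhibit the same color in adjacent nodes. For a 3-colorable triangle the network typically settles into a valid coloring; nothing here reproduces the paper's proven rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding (assumed weights): color a triangle graph with 3 colors.
# Each node is a WTA over color units; edges inhibit shared colors.
edges = [(0, 1), (1, 2), (0, 2)]
n_nodes, n_colors = 3, 3
alpha, beta, gamma = 0.9, 1.2, 0.8       # self-excitation, WTA inh., constraint inh.
f = lambda u: np.maximum(u, 0.0)         # non-saturating linear threshold

x = rng.uniform(0.0, 0.1, (n_nodes, n_colors))   # small random initial rates
dt = 0.05
for _ in range(4000):
    within = x.sum(axis=1, keepdims=True) - x    # other colors at the same node
    between = np.zeros_like(x)
    for i, j in edges:                           # same color at adjacent nodes
        between[i] += x[j]
        between[j] += x[i]
    x += dt * (-x + f(alpha * x - beta * within - gamma * between + 1.0))

print(x.argmax(axis=1))   # expected: a valid coloring, all three nodes distinct
```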
State-Dependent Computation Using Coupled Recurrent Networks
Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how
networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent
networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
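The state-holding mechanism can be sketched with illustrative weights (not the paper's design rules): matching units across two coupled sWTA maps excite each other, so a state selected by a transient cue persists after the cue is withdrawn.

```python
import numpy as np

# State holding with two coupled sWTA maps (illustrative weights): the
# cross-coupling gamma lets matching units sustain each other, so the
# cued state stays active after the cue is removed.
f = lambda u: np.maximum(u, 0.0)          # non-saturating linear threshold

n = 3                                      # units per map = number of embedded states
alpha, beta, gamma = 0.7, 0.4, 0.5         # self-excitation, map inhibition, cross-coupling
tonic = 0.2                                # uniform background drive
dt = 0.02

x, z = np.zeros(n), np.zeros(n)            # rates of map 1 and map 2
cue = np.array([0.5, 0.0, 0.0])            # transient cue selecting state 0

for t in range(4000):
    I = cue if t < 1000 else 0.0           # the cue is withdrawn at t = 20
    dx = -x + f(alpha * x + gamma * z - beta * x.sum() + tonic + I)
    dz = -z + f(alpha * z + gamma * x - beta * z.sum() + tonic)
    x, z = x + dt * dx, z + dt * dz

print(np.round(x, 2), np.round(z, 2))      # state 0 persists after the cue is gone
```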
Competition through selective inhibitory synchrony
Models of cortical neuronal circuits commonly depend on inhibitory feedback
to control gain, provide signal normalization, and to selectively amplify
signals using winner-take-all (WTA) dynamics. Such models generally assume that
excitatory and inhibitory neurons are able to interact easily, because their
axons and dendrites are co-localized in the same small volume. However,
quantitative neuroanatomical studies of the dimensions of axonal and dendritic
trees of neurons in the neocortex show that this co-localization assumption is
not valid. In this paper we describe a simple modification to the WTA circuit
design that permits the effects of distributed inhibitory neurons to be coupled
through synchronization, and so allows a single WTA to be distributed widely in
cortical space, well beyond the arborization of any single inhibitory neuron,
and even across different cortical areas. We prove by non-linear contraction
analysis, and demonstrate by simulation that distributed WTA sub-systems
combined by such inhibitory synchrony are inherently stable. We show
analytically that synchronization is substantially faster than winner
selection. This circuit mechanism allows networks of independent WTAs to fully
or partially compete with each other.
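A rate-level cartoon of this coupling idea, under the simplifying assumption that a synchronized inhibitory population can be summarized by mutually coupled inhibitory rate units (the paper itself analyzes synchronized spiking inhibition):

```python
import numpy as np

# Two spatially separate sub-WTAs, each with its own inhibitory unit.
# Coupling the inhibitory units (kappa) makes them track each other,
# approximating one shared inhibitory pool, so a single winner emerges
# across both sub-networks. Weights are assumptions for illustration.
f = lambda u: np.maximum(u, 0.0)
alpha, beta, kappa = 1.2, 1.0, 5.0       # excitation, inhibition, inhibitory coupling
dt = 0.01

x1, x2 = np.zeros(2), np.zeros(2)        # excitatory units of sub-WTA 1 and 2
y1, y2 = 0.0, 0.0                        # one local inhibitory unit per sub-WTA
I1 = np.array([1.0, 0.9])
I2 = np.array([1.1, 0.95])               # the overall best input lies in sub-WTA 2

for _ in range(6000):
    dx1 = -x1 + f(alpha * x1 - beta * y1 + I1)
    dx2 = -x2 + f(alpha * x2 - beta * y2 + I2)
    dy1 = -y1 + f(beta * x1.sum() + kappa * (y2 - y1))
    dy2 = -y2 + f(beta * x2.sum() + kappa * (y1 - y2))
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    y1, y2 = y1 + dt * dy1, y2 + dt * dy2

print(np.round(x1, 2), np.round(x2, 2))  # only the unit with the largest input wins
```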
Six networks on a universal neuromorphic computing substrate
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
Experience-driven formation of parts-based representations in a model of layered visual memory
Growing neuropsychological and neurophysiological evidence suggests that the
visual cortex uses parts-based representations to encode, store and retrieve
relevant objects. In such a scheme, objects are represented as a set of
spatially distributed local features, or parts, arranged in stereotypical
fashion. To encode the local appearance and to represent the relations between
the constituent parts, there has to be an appropriate memory structure formed
by previous experience with visual objects. Here, we propose a model of how a
hierarchical memory structure supporting efficient storage and rapid recall of
parts-based representations can be established by an experience-driven process
of self-organization. The process is based on the collaboration of slow
bidirectional synaptic plasticity and homeostatic unit activity regulation,
both running on top of fast activity dynamics with winner-take-all
character modulated by an oscillatory rhythm. These neural mechanisms lay down
the basis for cooperation and competition between the distributed units and
their synaptic connections. Choosing human face recognition as a test task, we
show that, under the condition of open-ended, unsupervised incremental
learning, the system is able to form memory traces for individual faces in a
parts-based fashion. On a lower memory layer the synaptic structure is
developed to represent local facial features and their interrelations, while
the identities of different persons are captured explicitly on a higher layer.
An additional property of the resulting representations is the sparseness of
both the activity during the recall and the synaptic patterns comprising the
memory traces.
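One ingredient of this scheme, homeostatic regulation of unit activity riding on top of fast WTA dynamics, can be sketched as follows; the adaptation rule below is an assumed form for illustration, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed-form homeostasis: each unit adapts its threshold so that its
# long-run win rate in a fast WTA competition approaches a common
# target, keeping all units recruitable despite unequal drive.
n, target, eta = 8, 1.0 / 8, 0.01
bias = np.linspace(0.0, 0.5, n)        # some units receive stronger drive
thresh = np.zeros(n)
wins = np.zeros(n)

for _ in range(20000):
    drive = bias + rng.normal(0.0, 0.3, n)
    w = np.argmax(drive - thresh)      # fast WTA: one winner per cycle
    wins[w] += 1
    thresh -= eta * target             # all thresholds relax downward...
    thresh[w] += eta                   # ...while the winner's is raised

print(np.round(wins / wins.sum(), 3))  # win rates end up near the uniform target
```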
Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity
In rodent visual cortex, synaptic connections between orientation-selective neurons are unspecific at the time of eye opening, and become functionally specific to some degree only later during development. An explanation for this two-stage process was proposed in terms of Hebbian plasticity based on visual experience that would eventually enhance connections between neurons with similar response features. For this to work, however, two conditions must be satisfied: First, orientation-selective neuronal responses must exist before specific recurrent synaptic connections can be established. Second, Hebbian learning must be compatible with the recurrent network dynamics contributing to orientation selectivity, and the resulting specific connectivity must remain stable under unspecific background activity. Previous studies have mainly focused on very simple models, where the receptive fields of neurons were essentially determined by feedforward mechanisms, and where the recurrent network was small, lacking the complex recurrent dynamics of large-scale networks of excitatory and inhibitory neurons. Here we studied the emergence of functionally specific connectivity in large-scale recurrent networks with synaptic plasticity. Our results show that balanced random networks, which already exhibit highly selective responses at eye opening, can develop feature-specific connectivity if appropriate rules of synaptic plasticity are invoked within and between excitatory and inhibitory populations. If these conditions are met, the initial orientation selectivity guides the process of Hebbian learning and, as a result, functionally specific connectivity and a surplus of bidirectional connections emerge. Our results thus demonstrate the cooperation of synaptic plasticity and recurrent dynamics in large-scale functional networks with realistic receptive fields, highlight the role of inhibition as a critical element in this process, and pave the way for further computational studies of sensory processing in neocortical network models equipped with synaptic plasticity.
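A minimal sketch of the qualitative outcome, assuming simple rate units with fixed orientation tuning and a normalized Hebbian rule (the paper's networks are large-scale balanced spiking networks with plasticity within and between excitatory and inhibitory populations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rate units with fixed preferred orientations are driven by random
# oriented stimuli; a Hebbian rule with row normalization strengthens
# connections between similarly tuned units. All functional forms are
# assumptions for illustration.
n = 50
pref = np.linspace(0, np.pi, n, endpoint=False)   # preferred orientations
W = rng.uniform(0.0, 0.1, (n, n))                 # unspecific initial weights
np.fill_diagonal(W, 0.0)

eta = 0.01
for _ in range(2000):
    theta = rng.uniform(0, np.pi)                 # random oriented stimulus
    r = np.exp(np.cos(2 * (pref - theta)) / 0.5)  # tuned responses
    W += eta * np.outer(r, r)                     # Hebbian co-activation
    np.fill_diagonal(W, 0.0)                      # no self-connections
    W *= 0.1 * n / W.sum(axis=1, keepdims=True)   # keep total input weight fixed

# Weights now fall off with orientation difference: functional specificity.
d = np.abs(((pref[:, None] - pref[None, :]) + np.pi / 2) % np.pi - np.pi / 2)
print(np.corrcoef(W.ravel(), -d.ravel())[0, 1])   # clearly positive correlation
```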