Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks
Finding actions that satisfy the constraints imposed by both external inputs
and internal representations is central to decision making. We demonstrate that
some important classes of constraint satisfaction problems (CSPs) can be solved
by networks composed of homogeneous cooperative-competitive modules that have
connectivity similar to motifs observed in the superficial layers of neocortex.
The winner-take-all modules are sparsely coupled by programming neurons that
embed the constraints onto the otherwise homogeneous modular computational
substrate. We show rules that embed any instance of the CSPs planar four-color
graph coloring, maximum independent set, and Sudoku on this substrate, and
provide mathematical proofs that guarantee that these graph coloring problems
converge to a solution. The network is composed of non-saturating linear
threshold neurons. Their lack of right saturation allows the overall network to
explore the problem space driven through the unstable dynamics generated by
recurrent excitation. The direction of exploration is steered by the constraint
neurons. While many problems can be solved using only linear inhibitory
constraints, network performance on hard problems benefits significantly when
these negative constraints are implemented by non-linear multiplicative
inhibition. Overall, our results demonstrate the importance of instability
rather than stability in network computation, and also offer insight into the
computational role of dual inhibitory mechanisms in neural circuits.
Comment: Accepted manuscript, in press, Neural Computation (2018)
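A single cooperative-competitive module of this kind can be sketched as a small rate model: non-saturating rectified-linear units with recurrent self-excitation and a shared inhibitory unit. The sketch below is a minimal illustration, not the paper's substrate; the parameter names and values (`alpha`, `beta`, the time step) are assumptions chosen to put the module in a working soft winner-take-all regime.

```python
import numpy as np

def relu(x):
    # Non-saturating linear-threshold transfer function: no upper bound.
    return np.maximum(x, 0.0)

def simulate_wta(inputs, alpha=1.2, beta=2.0, steps=2000, dt=0.01):
    """Euler-integrate a soft winner-take-all module: each excitatory unit
    has self-excitation alpha and receives feedback -beta*y from a shared
    inhibitory unit y driven by the summed excitatory activity."""
    x = np.zeros(len(inputs))  # excitatory rates
    y = 0.0                    # shared inhibitory rate
    for _ in range(steps):
        dx = -x + relu(inputs + alpha * x - beta * y)
        dy = -y + relu(x.sum())
        x = x + dt * dx
        y = y + dt * dy
    return x

rates = simulate_wta(np.array([1.0, 1.5, 0.9]))
print(np.argmax(rates))  # unit 1 wins: it has the largest input
```

With these gains the inhibitory feedback outweighs the self-excitation at the population level, so the transient instability that separates the units resolves into a single active winner, while the losing units are driven below threshold and decay to zero.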
Flexible Memory Networks
Networks of neurons in some brain areas are flexible enough to encode new
memories quickly. Using a standard firing rate model of recurrent networks, we
develop a theory of flexible memory networks. Our main results characterize
networks having the maximal number of flexible memory patterns, given a
constraint graph on the network's connectivity matrix. Modulo a mild
topological condition, we find a close connection between maximally flexible
networks and rank 1 matrices. The topological condition is H_1(X;Z)=0, where X
is the clique complex associated to the network's constraint graph; this
condition is generically satisfied for large random networks that are not
overly sparse. In order to prove our main results, we develop some
matrix-theoretic tools and present them in a self-contained section independent
of the neuroscience context.
Comment: Accepted to Bulletin of Mathematical Biology, 11 July 201
Diversity of emergent dynamics in competitive threshold-linear networks: a preliminary report
Threshold-linear networks consist of simple units interacting in the presence
of a threshold nonlinearity. Competitive threshold-linear networks have long
been known to exhibit multistability, where the activity of the network settles
into one of potentially many steady states. In this work, we find conditions
that guarantee the absence of steady states, while maintaining bounded
activity. These conditions lead us to define a combinatorial family of
competitive threshold-linear networks, parametrized by a simple directed graph.
By exploring this family, we discover that threshold-linear networks are
capable of displaying a surprisingly rich variety of nonlinear dynamics,
including limit cycles, quasiperiodic attractors, and chaos. In particular,
several types of nonlinear behaviors can co-exist in the same network. Our
mathematical results also enable us to engineer networks with multiple dynamic
patterns. Taken together, these theoretical and computational findings suggest
that threshold-linear networks may be a valuable tool for understanding the
relationship between network connectivity and emergent dynamics.
Comment: 12 pages, 9 figures. Preliminary report
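The combinatorial construction is simple enough to sketch: the weight matrix of a combinatorial threshold-linear network (CTLN) is read off a directed graph, and the 3-cycle is the smallest graph whose network has no stable steady state. The parameter values below follow the standard choices (eps = 0.25, delta = 0.5, theta = 1), but the integration details are illustrative.

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """Weight matrix of a combinatorial threshold-linear network (CTLN).
    adj[i][j] == 1 means there is an edge j -> i in the directed graph.
    W_ij = -1 + eps if j -> i, W_ij = -1 - delta otherwise, zero diagonal."""
    W = np.where(np.asarray(adj) == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_ctln(W, theta=1.0, x0=None, steps=5000, dt=0.01):
    """Euler-integrate dx/dt = -x + [W x + theta]_+ and return the trajectory."""
    x = np.full(W.shape[0], 0.1) if x0 is None else np.asarray(x0, float)
    traj = np.empty((steps, W.shape[0]))
    for t in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj[t] = x
    return traj

# The 3-cycle 1 -> 2 -> 3 -> 1: no stable steady state, but activity stays
# bounded, and the units take turns firing in a limit cycle.
adj = [[0, 0, 1],
       [1, 0, 0],
       [0, 1, 0]]
traj = simulate_ctln(ctln_weights(adj), x0=[0.2, 0.1, 0.0])
print(traj[-1])
```

Because every weight is negative and theta = 1, the rectified input never exceeds 1, which is what keeps the activity bounded even though no steady state exists.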
Computation in Dynamically Bounded Asymmetric Systems
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by the negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to shrink until an acceptable solution manifold is reached. The system then contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross-inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. We therefore propose that very simple organizational constraints combining these motifs can lead to spontaneous computation, and so to the spontaneous modification of entropy that is characteristic of living systems.
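Within any one linear region of such a network, the coexistence of expansion and net contraction is an eigenvalue statement: the Jacobian can have a positive eigenvalue (an unstable direction along which the state expands) while its trace, which is the divergence of the flow, remains negative, so phase-space volume still shrinks. A toy check with illustrative numbers:

```python
import numpy as np

# Jacobian of a 2-unit linear-threshold network in a region where both
# units are above threshold: self-excitation 0.5 on unit 1, asymmetric
# cross terms, leak -1.5 on unit 2 (numbers illustrative).
J = np.array([[0.5, -0.2],
              [2.0, -1.5]])

eigvals = np.linalg.eigvals(J)
print("eigenvalues:", np.sort(eigvals.real))   # approx [-1.27, 0.27]
print("divergence (trace):", np.trace(J))      # -1.0 < 0: volumes contract
```

Expansion along the unstable eigendirection explores the space, while the negative divergence guarantees that, on balance, dimensions are being squeezed out; this is the geometric picture behind the shrinking solution space described above.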
Collective stability of networks of winner-take-all circuits
The neocortex has a remarkably uniform neuronal organization, suggesting that
common principles of processing are employed throughout its extent. In
particular, the patterns of connectivity observed in the superficial layers of
the visual cortex are consistent with the recurrent excitation and inhibitory
feedback required for cooperative-competitive circuits such as the soft
winner-take-all (WTA). WTA circuits offer interesting computational properties
such as selective amplification, signal restoration, and decision making. But
these properties depend on the signal gain derived from positive feedback, and
so there is a critical trade-off between providing feedback strong enough to
support the sophisticated computations, while maintaining overall circuit
stability. We consider the question of how to reason about stability in very
large distributed networks of such circuits. We approach this problem by
approximating the regular cortical architecture as many interconnected
cooperative-competitive modules. We demonstrate that by properly understanding
the behavior of this small computational module, one can reason about the
stability and convergence of very large networks composed of these modules. We
obtain parameter ranges in which the WTA circuit operates in a high-gain
regime, is stable, and can be aggregated arbitrarily to form large stable
networks. We use nonlinear Contraction Theory to establish conditions for
stability in the fully nonlinear case, and verify these solutions using
numerical simulations. The derived bounds allow modes of operation in which the
WTA network is multi-stable and exhibits state-dependent persistent activities.
Our approach is sufficiently general to reason systematically about the
stability of any network, biological or technological, composed of networks of
small modules that express competition through shared inhibition.
Comment: 7 Figures
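The gain–stability trade-off can be made concrete in the simplest reduction: once a single winner and the shared inhibitory unit remain active, the module is linear, and stability is read off a 2x2 Jacobian. This reduced model and its parameter values are illustrative assumptions, not the paper's Contraction Theory analysis, but they show the same qualitative boundary: raising the excitatory gain `alpha` eventually destabilizes the circuit.

```python
import numpy as np

def winner_jacobian(alpha, beta):
    """Jacobian of the active winner x and shared inhibitory unit y in a
    soft WTA reduced to  dx = -x + alpha*x - beta*y + b,  dy = -y + x
    (only the winner is above threshold, so the dynamics are linear here)."""
    return np.array([[alpha - 1.0, -beta],
                     [1.0, -1.0]])

beta = 2.0
for alpha in (1.2, 1.8, 2.5):
    top = np.linalg.eigvals(winner_jacobian(alpha, beta)).real.max()
    print(f"alpha={alpha}: max Re(eig) = {top:+.2f} ->",
          "stable" if top < 0 else "unstable")
```

In this toy reduction with beta = 2, the module is stable exactly for alpha < 2: the gain can be pushed close to that boundary while every eigenvalue keeps a negative real part, which is the kind of high-gain-yet-stable regime that the derived bounds characterize.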
A Step Towards Uncovering The Structure of Multistable Neural Networks
We study the structure of multistable recurrent neural networks. The
activation function is simplified to a nonsmooth Heaviside step function. This
nonlinearity partitions the phase space into regions with different, yet linear
dynamics. We derive how multistability is encoded within the network
architecture. Stable states are identified by their semipositivity constraints
on the synaptic weight matrix. The restrictions can be separated by their
effects on the signs or the strengths of the connections. Exact results on
network topology, sign stability, weight matrix factorization, pattern
completion and pattern coupling are derived and proven. These results may lay
the foundation for more complex recurrent neural networks and neurocomputing.
Comment: 33 pages, 9 figures
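The fixed-point condition behind those constraints can be sketched directly: a binary state s is a steady state of dx/dt = -x + H(Wx + b) precisely when the Heaviside output reproduces s, which is a set of sign inequalities on W and b. The weight matrix and states below are illustrative, and this is a simplified version of the sign conditions (the paper's semipositivity constraints are finer-grained).

```python
import numpy as np

def heaviside(u):
    # Nonsmooth step nonlinearity: 1 above threshold, 0 at or below it.
    return (u > 0).astype(float)

def is_fixed_point(W, b, s):
    """s (a 0/1 vector) is a steady state of dx/dt = -x + H(W x + b)
    iff H(W s + b) == s, i.e. active units stay above threshold and
    inactive units stay below it."""
    return np.array_equal(heaviside(W @ s + b), s)

# Toy 3-unit network: mutual excitation between units 0 and 1, mutual
# inhibition with unit 2 (weights and biases illustrative).
W = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])
b = np.array([-0.5, -0.5, 0.5])

print(is_fixed_point(W, b, np.array([1.0, 1.0, 0.0])))  # True
print(is_fixed_point(W, b, np.array([0.0, 0.0, 1.0])))  # True
print(is_fixed_point(W, b, np.array([1.0, 0.0, 0.0])))  # False
```

Two distinct states pass the check, so the network is multistable; the third state fails it, showing that not every binary pattern satisfies the sign constraints the weights impose.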