Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks
Finding actions that satisfy the constraints imposed by both external inputs
and internal representations is central to decision making. We demonstrate that
some important classes of constraint satisfaction problems (CSPs) can be solved
by networks composed of homogeneous cooperative-competitive modules that have
connectivity similar to motifs observed in the superficial layers of neocortex.
The winner-take-all modules are sparsely coupled by programming neurons that
embed the constraints onto the otherwise homogeneous modular computational
substrate. We show rules that embed any instance of the CSPs planar four-color
graph coloring, maximum independent set, and Sudoku on this substrate, and
provide mathematical proofs that guarantee convergence to a solution for these
graph coloring problems. The network is composed of non-saturating linear
threshold neurons. Their lack of right saturation allows the overall network to
explore the problem space, driven by the unstable dynamics generated by
recurrent excitation. The direction of exploration is steered by the constraint
neurons. While many problems can be solved using only linear inhibitory
constraints, network performance on hard problems benefits significantly when
these negative constraints are implemented by non-linear multiplicative
inhibition. Overall, our results demonstrate the importance of instability
rather than stability in network computation, and also offer insight into the
computational role of dual inhibitory mechanisms in neural circuits. Comment: Accepted manuscript, in press, Neural Computation (2018).
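A minimal sketch of the circuit idea described above, under assumptions of our own: coupled WTA modules of non-saturating (rectified-only) linear threshold units, with constraint neurons realized as plain linear inhibition between same-color units of adjacent nodes. The gains alpha, beta, gamma and the constant drive are illustrative choices, not the paper's parameters, and the multiplicative-inhibition variant the authors recommend for hard problems is not shown.

```python
import numpy as np

def color_graph(adj, n_colors=4, steps=4000, dt=0.01, seed=0):
    """Sketch of coupled WTA modules for graph coloring.

    One WTA module per node, one rectified linear-threshold unit per color.
    alpha/beta/gamma/drive are illustrative, not the paper's values.
    """
    rng = np.random.default_rng(seed)
    n = len(adj)
    r = rng.uniform(0.0, 0.1, size=(n, n_colors))   # firing rates
    alpha, beta, gamma, drive = 1.2, 1.5, 2.0, 0.1
    for _ in range(steps):
        inp = alpha * r                              # recurrent self-excitation
        # within-module competition: inhibition from the other color units
        inp -= beta * (r.sum(axis=1, keepdims=True) - r)
        # constraint neurons as linear inhibition: same-color units of
        # adjacent nodes suppress each other
        for i, nbrs in enumerate(adj):
            for j in nbrs:
                inp[i] -= gamma * r[j]
        inp += drive
        # non-saturating linear-threshold dynamics: rectified at zero only,
        # no upper saturation, so recurrent excitation keeps the search unstable
        r += dt * (-r + np.maximum(inp, 0.0))
        np.minimum(r, 1e3, out=r)                    # numerical guard only
    return r.argmax(axis=1)                          # winning color per module

# toy instance: a 4-cycle, trivially colorable with 4 colors
print(color_graph([[1, 3], [0, 2], [1, 3], [0, 2]]))
```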
Death and rebirth of neural activity in sparse inhibitory networks
In this paper, we clarify the mechanisms underlying a general phenomenon
present in pulse-coupled heterogeneous inhibitory networks: inhibition can
induce not only suppression of the neural activity, as expected, but it can
also promote neural reactivation. In particular, for globally coupled systems,
the number of firing neurons decreases monotonically as the strength of
inhibition increases (neurons' death). However, random pruning of the
connections can reverse the action of inhibition: in a sparse network,
sufficiently strong synaptic coupling can, surprisingly, promote rather than
depress the activity of the neurons (neurons' rebirth). Thus the number of
firing neurons exhibits a minimum at some intermediate synaptic strength. We
show that this minimum signals a transition from a regime dominated by the
neurons with higher firing activity to a phase where all neurons are
effectively sub-threshold and their irregular firing is driven by current
fluctuations. We explain the origin of the transition by deriving an analytic
mean field formulation of the problem that provides the fraction of active
neurons as well as the first two moments of their firing statistics. The
introduction of a synaptic time scale does not modify the main aspects of the
reported phenomenon. However, for sufficiently slow synapses the transition
becomes dramatic: the system passes from a perfectly regular evolution to an
irregular bursting dynamics. In this latter regime the model provides
predictions consistent with experimental findings for a specific class of
neurons, namely the medium spiny neurons in the striatum. Comment: 19 pages, 10 figures, submitted to NJ
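The death/rebirth effect can be probed with a toy pulse-coupled network. The sketch below uses leaky integrate-and-fire units with heterogeneous drive and delta-pulse inhibition scaled by the in-degree k; all parameter values are our own illustrative choices, not the paper's, and whether the minimum in the firing fraction appears at exactly these settings is not guaranteed.

```python
import numpy as np

def fraction_firing(g, k, n=200, t_max=200.0, dt=0.05, seed=1):
    """Fraction of neurons that fire (after a transient) in a pulse-coupled
    inhibitory LIF network with inhibitory strength g and in-degree k.
    k = n - 1 recovers the globally coupled case."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.95, 1.15, n)        # heterogeneous drive; threshold = 1
    nodes = np.arange(n)
    pre = np.array([rng.choice(np.delete(nodes, i), k, replace=False)
                    for i in range(n)])   # k presynaptic neighbours each
    v = rng.uniform(0.0, 1.0, n)
    fired = np.zeros(n, dtype=bool)
    n_steps = int(t_max / dt)
    for step in range(n_steps):
        v += dt * (a - v)                 # leaky integration
        spikes = v >= 1.0
        if spikes.any():
            v[spikes] = 0.0               # reset
            # delta-pulse inhibition, scaled by the in-degree
            v -= (g / k) * spikes[pre].sum(axis=1)
            if step > n_steps // 2:       # discard the transient
                fired |= spikes
    return fired.mean()

for g in (0.5, 2.0, 8.0):
    print(f"g={g}: global {fraction_firing(g, k=199):.2f}, "
          f"sparse {fraction_firing(g, k=20):.2f}")
```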
Analysis and design of a distributed k-winners-take-all model
The k-winners-take-all (k-WTA) problem is to find the k largest inputs from a given set of inputs. In this paper, we design and propose a novel distributed k-WTA model, for which no central unit is needed to realize the computation of the winners. As a result, the proposed model has the general advantages of distributed models over centralized ones, such as better robustness to faults of agents. The global asymptotic convergence of the proposed distributed model is proven. In addition, two numerical examples on networks of agents, with static inputs and with time-varying inputs, are presented to validate the performance of the proposed model.
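One way to make the distributed aspect concrete (a hedged sketch of our own, not the paper's model): each agent keeps a local threshold estimate z[i], exchanges it only with its graph neighbours through a consensus term, and adds a local correction whose sum across agents vanishes exactly when k inputs exceed the threshold. The gain eps, step size, and iteration count are illustrative.

```python
import numpy as np

def distributed_kwta(u, adj, k, steps=8000, dt=0.05, eps=0.05):
    """Each agent i sees only its own input u[i], its threshold z[i], and the
    thresholds of its graph neighbours adj[i]; no central unit is involved."""
    n = len(u)
    z = np.zeros(n)
    for _ in range(steps):
        # consensus: pull each threshold toward its neighbours' thresholds
        lap = np.array([sum(z[i] - z[j] for j in adj[i]) for i in range(n)])
        # local correction: sums to zero across agents exactly when k inputs
        # exceed the (shared) threshold
        corr = (u > z).astype(float) - k / n
        z += dt * (-lap + eps * corr)
    return u > z                          # boolean winner mask

u = np.array([3.0, 1.0, 4.0, 1.5, 2.0])
adj = [[1], [0, 2], [1, 3], [2, 4], [3]]  # path graph: local communication only
print(distributed_kwta(u, adj, k=2))      # expect winners at indices 0 and 2
```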
Universal neural field computation
Turing machines and G\"odel numbers are important pillars of the theory of
computation. Thus, any computational architecture needs to show how it could
relate to Turing machines and how stable implementations of Turing computation
are possible. In this chapter, we implement universal Turing computation in a
neural field environment. To this end, we employ the canonical symbologram
representation of a Turing machine obtained from a G\"odel encoding of its
symbolic repertoire and generalized shifts. The resulting nonlinear dynamical
automaton (NDA) is a piecewise affine-linear map acting on the unit square that
is partitioned into rectangular domains. Instead of looking at point dynamics
in phase space, we then consider functional dynamics of probability
distribution functions (p.d.f.s) over phase space. This is generally described
by a Frobenius-Perron integral transformation that can be regarded as a neural
field equation over the unit square as feature space of a dynamic field theory
(DFT). Solving the Frobenius-Perron equation yields that uniform p.d.f.s with
rectangular support are mapped onto uniform p.d.f.s with rectangular support,
again. We call the resulting representation \emph{dynamic field automaton}. Comment: 21 pages; 6 figures. arXiv admin note: text overlap with arXiv:1204.546
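A toy rendition of the construction (with made-up coefficients, not a Gödel encoding of any actual Turing machine): a piecewise affine-linear map on a 2x2 partition of the unit square, together with the key observation that a uniform p.d.f. with rectangular support inside one cell is pushed forward to another uniform rectangle, so the Frobenius-Perron dynamics reduces to tracking two corners.

```python
# Illustrative 2x2 partition of the unit square; each cell carries its own
# affine map (x, y) -> (lx*x + ax, ly*y + ay). The coefficients are invented:
# a real NDA derives them from a Goedel encoding of a generalized shift.
CELLS = {
    (0, 0): (0.5, 0.5, 0.5, 0.0),
    (0, 1): (0.5, 0.0, 0.5, 0.5),
    (1, 0): (0.5, 0.0, 0.5, 0.0),
    (1, 1): (0.5, 0.5, 0.5, 0.5),
}

def nda_step(p):
    """Apply the piecewise affine-linear map to a point of the unit square."""
    x, y = p
    i, j = min(int(2 * x), 1), min(int(2 * y), 1)   # which cell are we in?
    lx, ax, ly, ay = CELLS[(i, j)]
    return lx * x + ax, ly * y + ay

def rect_step(lo, hi):
    """Push a uniform p.d.f. with rectangular support through the map.
    While the rectangle lies inside a single cell, the affine map sends it to
    another rectangle, so tracking two corners captures the full
    Frobenius-Perron dynamics -- the 'dynamic field automaton' picture."""
    return nda_step(lo), nda_step(hi)

lo, hi = (0.1, 0.1), (0.2, 0.2)   # a rectangle inside cell (0, 0)
for _ in range(3):
    lo, hi = rect_step(lo, hi)
    print(lo, hi)
```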
Optimizing the energy consumption of spiking neural networks for neuromorphic applications
In the last few years, spiking neural networks have been demonstrated to
perform on par with regular convolutional neural networks. Several works have
proposed methods to convert a pre-trained CNN to a spiking CNN without a
significant sacrifice of performance. We first demonstrate that
quantization-aware training of CNNs leads to better accuracy in SNNs. One of
the benefits of converting CNNs to spiking CNNs is to leverage the sparse
computation of SNNs and consequently perform equivalent computation at a lower
energy consumption. Here we propose an efficient optimization strategy to train
spiking networks at lower energy consumption, while maintaining similar
accuracy levels. We demonstrate results on the MNIST-DVS and CIFAR-10 datasets.
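The optimization idea can be caricatured as a regularized training objective. The sketch below, in PyTorch, is our assumption of the general form, not the paper's exact loss: an L1 activation penalty is added to the task loss, and since activation magnitudes become firing rates after CNN-to-SNN conversion, the penalty trades a little accuracy for fewer spikes and hence lower energy.

```python
import torch
import torch.nn.functional as F

def energy_aware_loss(logits, targets, activations, lam=1e-4):
    """Task loss plus an L1 penalty on the CNN's activations (lam is a
    tunable knob). After CNN-to-SNN conversion, activation magnitudes map
    onto firing rates, so the penalty reduces spike counts -- and hence
    energy -- at inference time, at a small cost in accuracy."""
    ce = F.cross_entropy(logits, targets)
    sparsity = sum(a.abs().mean() for a in activations)
    return ce + lam * sparsity
```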
Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking
This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then undertake a verification of these models using the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues concerning cognitive neuroscience and the debate as to whether symbolic processing or connectionism is a suitable representation of cognitive systems. Additionally, the issue of integrating symbolic techniques, such as formal methods, with complex neural networks is discussed. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
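To make the symbolic-verification idea concrete without reproducing the paper's timed-automata/Uppaal models, here is a deliberately tiny analogue of our own invention: a one-weight perceptron learning rule quantized into a finite transition system and explored exhaustively, in the spirit of model checking. The data, weight grid, and checked property are all illustrative.

```python
from collections import deque

DATA = [(-1.0, 0), (2.0, 1)]                 # (input, target) pairs
WEIGHTS = [w / 4 for w in range(-8, 9)]      # quantized weight states
LR = 0.25

def step(w, x, t):
    """One perceptron update, snapped back onto the finite weight grid."""
    y = 1 if w * x > 0 else 0
    w2 = w + LR * (t - y) * x
    return min(WEIGHTS, key=lambda q: abs(q - w2))

def reachable(w0):
    """Exhaustive exploration of the finite state graph, as a model checker
    would do: which weights can the learning dynamics ever reach?"""
    seen, frontier = {w0}, deque([w0])
    while frontier:
        w = frontier.popleft()
        for x, t in DATA:
            w2 = step(w, x, t)
            if w2 not in seen:
                seen.add(w2)
                frontier.append(w2)
    return seen

states = reachable(0.0)
good = {w for w in states
        if all((1 if w * x > 0 else 0) == t for x, t in DATA)}
print(f"{len(states)} reachable states, {len(good)} classify all data correctly")
```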