Matrix Representation of Spiking Neural P Systems
Spiking neural P systems (SN P systems, for short) are a class of distributed parallel computing devices inspired by the way neurons communicate by means of spikes. In this work, a discrete structure representation of SN P systems with extended rules and without delay is proposed. Specifically, matrices are used to represent SN P systems. In order to represent the computations of SN P systems by matrices, configuration vectors are defined to monitor the number of spikes in each neuron at any given configuration; transition net gain vectors are also introduced to quantify the total number of spikes consumed and produced after the chosen rules are applied. Nondeterminism of the systems is assured by a set of spiking transition vectors that could be used at any given time during the computation. With such a matrix representation, it is quite convenient to determine the next configuration from a given configuration, since doing so involves only multiplication and addition of matrices once the spiking transition vector is decided.

Ministerio de Ciencia e Innovación TIN2009-13192; Junta de Andalucía P08-TIC0420
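The next-configuration computation described in this abstract can be sketched in a few lines of NumPy. The three-neuron system, its rules, and all numbers below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Hypothetical 3-neuron SN P system (illustrative, not from the paper).
# Configuration vector: spikes currently stored in each neuron.
C = np.array([2, 1, 1])

# Transition matrix: row i gives the net spike gain/loss in each neuron
# when rule i fires (spikes consumed at the source, produced at targets).
M = np.array([
    [-2,  1,  1],   # rule 1 in neuron 1: consume 2 spikes, send 1 to neurons 2 and 3
    [ 0, -1,  1],   # rule 2 in neuron 2: consume 1 spike, send 1 to neuron 3
    [ 0,  0, -1],   # rule 3 in neuron 3: forget 1 spike
])

# Spiking transition vector: which rules were (nondeterministically) chosen.
s = np.array([1, 1, 0])

# Next configuration = current configuration + transition net gain vector.
C_next = C + s @ M
print(C_next)  # → [0 1 3]
```

Once the spiking transition vector is fixed, the step is a single vector-matrix product and an addition, which is what makes the representation convenient.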
Time After Time: Notes on Delays In Spiking Neural P Systems
Spiking Neural P systems, SNP systems for short, are biologically inspired
computing devices based on how neurons perform computations. SNP systems use
only one type of symbol, the spike, in the computations. Information is encoded
in the time differences of spikes or the multiplicity of spikes produced at
certain times. SNP systems with delays (associated with rules) and those
without delays are two of several Turing-complete SNP system variants in the
literature. In this work we investigate how restricted forms of SNP systems
with delays can be simulated by SNP systems without delays. We show the
simulations for the following spike routing constructs: sequential, iteration,
join, and split.

Comment: 11 pages, 9 figures, 4 lemmas, 1 theorem; preprint of Workshop on Computation: Theory and Practice 2012 at DLSU, Manila, together with UP Diliman, DLSU, Tokyo Institute of Technology, and Osaka University.
Improving Simulations of Spiking Neural P Systems in NVIDIA CUDA GPUs: CuSNP
Spiking neural P systems (in short, SN P systems) are parallel models of computation inspired by the spiking (firing) of biological neurons. In SN P systems, neurons function as spike processors and are placed on nodes of a directed graph. Synapses, the connections between neurons, are represented by arcs or directed edges in the graph. Not only do SN P systems have parallel semantics (i.e. neurons operate in parallel), but their structure as directed graphs allows them to be represented as vectors or matrices. Such representations allow the use of linear algebra operations for simulating the evolution of the system configurations, i.e. computations. In this work, we continue the implementation of SN P systems with delays, i.e. a delay is associated with the sending of a spike from a neuron to its neighbouring neurons. Our implementation is based on a modified vector and matrix representation of SN P systems without delays. We use massively parallel processors known as graphics processing units (in short, GPUs) from NVIDIA. For experimental validation, we use SN P systems implementing generalized sorting networks. We report a speedup, i.e. the ratio of the running time of the sequential simulator to that of the parallel simulator, of up to approximately 51 times for a 512-size input to the sorting network.
Synchronization of coupled neural oscillators with heterogeneous delays
We investigate the effects of heterogeneous delays in the coupling of two
excitable neural systems. Depending upon the coupling strengths and the time
delays in the mutual and self-coupling, the compound system exhibits different
types of synchronized oscillations of variable period. We analyze this
synchronization based on the interplay of the different time delays and support
the numerical results by analytical findings. In addition, we elaborate on
bursting-like dynamics with two competing timescales on the basis of the
autocorrelation function.

Comment: 18 pages, 14 figures.
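As a rough illustration of the analysis tool this abstract relies on, the sketch below estimates a normalized autocorrelation function for a toy signal that mixes two timescales. The signal and all parameters are invented for illustration, not the paper's model:

```python
import numpy as np

def autocorrelation(x):
    """Biased autocorrelation estimate, normalized so acf[0] == 1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # keep non-negative lags
    return acf / acf[0]

t = np.linspace(0.0, 20.0, 2000)
# Toy signal mixing a fast and a slow oscillation (two competing timescales).
signal = np.sin(2 * np.pi * t) + 0.3 * np.sin(2 * np.pi * t / 5.0)
acf = autocorrelation(signal)
```

Peaks of `acf` at two distinct lags would reveal the two competing periods, which is the kind of signature the abstract refers to for its bursting-like dynamics.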
Exploiting Device Mismatch in Neuromorphic VLSI Systems to Implement Axonal Delays
Sheik S, Chicca E, Indiveri G. Exploiting Device Mismatch in Neuromorphic VLSI Systems to Implement Axonal Delays. Presented at the International Joint Conference on Neural Networks (IJCNN), Brisbane, Australia.

Axonal delays are used in neural computation to implement faithful models of biological neural systems, and in spiking neural network models to solve computationally demanding tasks. While there is an increasing number of software simulations of spiking neural networks that make use of axonal delays, only a small fraction of currently existing hardware neuromorphic systems supports them. In this paper we demonstrate a strategy to implement temporal delays in hardware spiking neural networks distributed across multiple Very Large Scale Integration (VLSI) chips. This is achieved by exploiting the inherent device mismatch present in the analog circuits that implement silicon neurons and synapses inside the chips, and the digital communication infrastructure used to configure the network topology and transmit the spikes across chips. We present an example of a recurrent VLSI spiking neural network that employs axonal delays and demonstrate how the proposed strategy efficiently implements them in hardware.
Heterogeneous Delays in Neural Networks
We investigate heterogeneous coupling delays in complex networks of excitable
elements described by the FitzHugh-Nagumo model. The effects of discrete as
well as of uni- and bimodal continuous distributions are studied with a focus
on different topologies, i.e., regular, small-world, and random networks. In
the case of two discrete delay times resonance effects play a major role:
Depending on the ratio of the delay times, various characteristic spiking
scenarios, such as coherent or asynchronous spiking, arise. For continuous
delay distributions different dynamical patterns emerge depending on the width
of the distribution. For small distribution widths, we find highly synchronized
spiking, while for intermediate widths only spiking with a low degree of synchrony persists, which is associated with traveling disruptions, partial amplitude death, or subnetwork synchronization, depending sensitively on the network topology. If the inhomogeneity of the coupling delays becomes too large, global amplitude death is induced.
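A minimal sketch of the kind of simulation behind such studies, assuming Euler integration of two FitzHugh-Nagumo units with a single discrete mutual coupling delay handled via a history buffer; all parameter values are illustrative, not the paper's:

```python
import numpy as np

# eps: timescale separation, a: excitability parameter, C: coupling strength
eps, a, C = 0.01, 1.05, 0.5
dt, tau = 0.01, 3.0
d = int(tau / dt)              # coupling delay in integration steps
steps = 5000

u = np.zeros((steps + 1, 2))   # activator variable of each of the two units
v = np.zeros((steps + 1, 2))   # inhibitor (recovery) variable
u[0] = [0.5, -0.5]             # asymmetric initial condition

for n in range(steps):
    u_del = u[max(n - d, 0), ::-1]          # delayed activator of the *other* unit
    du = u[n] - u[n] ** 3 / 3 - v[n] + C * (u_del - u[n])
    dv = eps * (u[n] + a)
    u[n + 1] = u[n] + dt * du
    v[n + 1] = v[n] + dt * dv
```

Heterogeneous delays would replace the single `tau` with per-link delays (and, for networks, a coupling matrix); the ring-buffer lookup generalizes directly.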
Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware
In recent years, the field of neuromorphic low-power systems that consume
orders of magnitude less power has gained significant momentum. However, their
wider use is still hindered by the lack of algorithms that can harness the
strengths of such architectures. While neuromorphic adaptations of
representation learning algorithms are now emerging, efficient processing of
temporal sequences or variable-length inputs remains difficult. Recurrent neural
networks (RNN) are widely used in machine learning to solve a variety of
sequence learning tasks. In this work we present a train-and-constrain
methodology that enables the mapping of machine learned (Elman) RNNs on a
substrate of spiking neurons, while being compatible with the capabilities of
current and near-future neuromorphic systems. This "train-and-constrain" method
consists of first training RNNs using backpropagation through time, then
discretizing the weights and finally converting them to spiking RNNs by
matching the responses of artificial neurons with those of the spiking neurons.
We demonstrate our approach by mapping a natural language processing task
(question classification), where we demonstrate the entire mapping process of
the recurrent layer of the network on IBM's Neurosynaptic System "TrueNorth", a
spike-based digital neuromorphic hardware architecture. TrueNorth imposes
specific constraints on connectivity, neural and synaptic parameters. To
satisfy these constraints, it was necessary to discretize the synaptic weights
and neural activities to 16 levels, and to limit fan-in to 64 inputs. We find
that short synaptic delays are sufficient to implement the dynamical (temporal)
aspect of the RNN in the question classification task. The hardware-constrained
model achieved 74% accuracy in question classification while using less than
0.025% of the cores on one TrueNorth chip, resulting in an estimated power
consumption of ~17 µW.
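The weight-discretization step described above can be sketched as uniform quantization onto 16 evenly spaced levels. The scheme, the function name, and the matrix shape below are assumptions for illustration, not the exact TrueNorth mapping procedure:

```python
import numpy as np

def discretize(weights, levels=16):
    """Uniform quantization of a weight matrix onto `levels` evenly spaced values."""
    w_min, w_max = weights.min(), weights.max()
    step = (w_max - w_min) / (levels - 1)
    return np.round((weights - w_min) / step) * step + w_min

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))    # stand-in for trained recurrent weights
wq = discretize(w)             # at most 16 distinct synaptic values remain
```

In the train-and-constrain flow, such a quantization pass would sit between backpropagation-through-time training and the response-matching conversion to spiking neurons.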
Integration of continuous-time dynamics in a spiking neural network simulator
Contemporary modeling approaches to the dynamics of neural networks consider
two main classes of models: biologically grounded spiking neurons and
functionally inspired rate-based units. The unified simulation framework
presented here supports the combination of the two for multi-scale modeling
approaches, the quantitative validation of mean-field approaches by spiking
network simulations, and an increase in reliability by usage of the same
simulation code and the same network model specifications for both model
classes. While most efficient spiking simulations rely on the communication of
discrete events, rate models require time-continuous interactions between
neurons. Exploiting the conceptual similarity to the inclusion of gap junctions
in spiking network simulations, we arrive at a reference implementation of
instantaneous and delayed interactions between rate-based models in a spiking
network simulator. The separation of rate dynamics from the general connection
and communication infrastructure ensures flexibility of the framework. We
further demonstrate the broad applicability of the framework by considering
various examples from the literature ranging from random networks to neural
field models. The study provides the prerequisite for interactions between
rate-based and spiking models in a joint simulation