Asynchronous Spiking Neural P Systems with Multiple Channels and Symbols
Spiking neural P systems (SNP systems, for short) are a class of distributed parallel computation systems inspired by the way neurons process and communicate information by means of spikes. This paper investigates a new variant of SNP systems that works in asynchronous mode: asynchronous spiking neural P systems with multiple channels and symbols (ASNP-MCS systems, for short). ASNP-MCS systems have two interesting features: multiple channels and multiple symbols. That is, every neuron has more than one synaptic channel to connect to its subsequent neurons, and every neuron can deal with more than one type of spike. The variant works in asynchronous mode: in every step, each neuron is free to fire or not, even when its rules are applicable. The computational completeness of ASNP-MCS systems is investigated. It is proved that ASNP-MCS systems are Turing universal as number generating and accepting devices. Moreover, we obtain a small universal function computing device that is an ASNP-MCS system with 67 neurons. In particular, a new idea that can solve "block" problems in INPUT modules is proposed.
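The asynchronous firing mode can be pictured with a toy sketch, a minimal illustration rather than the paper's formal construction: the rule shapes, the symbol names "a"/"b", and the channel labels below are all hypothetical. Each neuron stores counts of several spike symbols and, at every step, may apply an enabled rule or stay idle.

```python
import random

random.seed(1)

# Hypothetical rules: (spikes required per symbol, spikes consumed,
#                      (output channel, emitted symbol, emitted count))
rules = [
    ({"a": 2}, {"a": 2}, (0, "a", 1)),  # consume 2 a-spikes, emit 1 a on channel 0
    ({"b": 1}, {"b": 1}, (1, "b", 1)),  # consume 1 b-spike, emit 1 b on channel 1
]

def step(store, channels):
    """One asynchronous step: every enabled rule is free to fire or not (p = 1/2)."""
    for need, cost, (ch, sym, n) in rules:
        enabled = all(store.get(s, 0) >= k for s, k in need.items())
        if enabled and random.random() < 0.5:  # asynchrony: firing is optional
            for s, k in cost.items():
                store[s] -= k
            channels[ch][sym] = channels[ch].get(sym, 0) + n

store = {"a": 4, "b": 1}        # multiple symbol types inside one neuron
channels = {0: {}, 1: {}}       # multiple output channels
for _ in range(10):
    step(store, channels)
print(store, channels)
```

Whichever subset of steps happened to fire, spikes are conserved between the neuron's store and its output channels; the nondeterminism only affects when they move.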
Power-law statistics and universal scaling in the absence of criticality
Critical states are sometimes identified experimentally through power-law
statistics or universal scaling functions. We show here that such features
naturally emerge from networks in self-sustained irregular regimes away from
criticality. In these regimes, the statistical physics theory of large interacting
systems predicts that the nodes have independent and identically
distributed dynamics. We thus investigated the statistics of a system in which
units are replaced by independent stochastic surrogates, and found the same
power-law statistics, indicating that these are not sufficient to establish
criticality. We rather suggest that these are universal features of large-scale
networks when considered macroscopically. These results call for caution in the
interpretation of scaling laws found in nature.
Comment: in press in Phys. Rev.
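The surrogate test described above can be sketched in a few lines (all parameters are illustrative): replace the network by N fully independent stochastic units, threshold the population activity at its mean, and collect "avalanche" sizes from the supra-threshold excursions, exactly the kind of statistic whose heavy tail is often read as a signature of criticality.

```python
import random

random.seed(0)
N, T, p = 200, 5000, 0.02      # independent units, time steps, spike probability

# Population activity of fully independent Bernoulli surrogates (no interactions).
activity = [sum(random.random() < p for _ in range(N)) for _ in range(T)]

theta = sum(activity) / T       # threshold at the mean population rate
sizes, current = [], 0
for a in activity:
    if a > theta:
        current += a            # accumulate spikes within a supra-threshold excursion
    elif current:
        sizes.append(current)   # excursion ended: record its size as an "avalanche"
        current = 0

print(len(sizes), max(sizes))
```

Feeding `sizes` to any distribution-fitting routine then probes whether apparent power laws survive for a system that is, by construction, nowhere near critical.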
Six networks on a universal neuromorphic computing substrate
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks that cover a broad spectrum of both structure and functionality.
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power through the increase of
cores within a digital processor, neuromorphic engineers and scientists can
complement this need by building processor architectures where memory is
distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems that implement
more biologically realistic models of neurons and synapses together with a suite of
adaptation and learning mechanisms analogous to the ones found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Comment: Submitted to Proceedings of the IEEE; a review of recently proposed
neuromorphic computing platforms and systems
Simulating FRSN P Systems with Real Numbers in P-Lingua on sequential and CUDA platforms
Fuzzy Reasoning Spiking Neural P systems (FRSN P systems,
for short) are a variant of Spiking Neural P systems incorporating
fuzzy logic elements, which makes them suitable for modeling fuzzy diagnosis knowledge
and the reasoning required for fault diagnosis applications. In this sense,
several FRSN P system variants have been proposed, dealing with real
numbers, trapezoidal numbers, weights, etc. The model incorporating
real numbers was the first introduced [13], presenting promising applications
in the field of fault diagnosis of electrical systems. For this variant,
a matrix-based algorithm was provided which, when executed on parallel
computing platforms, fully exploits the maximally parallel
capacities of the model. In this paper we introduce a P-Lingua framework extension
to parse and simulate FRSN P systems with real numbers. Two simulators,
implementing a variant of the original matrix-based simulation
algorithm, are provided: a sequential one (written in Java), intended to
run on traditional CPUs, and a parallel one, intended to run on CUDA-enabled
devices.
Ministerio de Economía y Competitividad TIN2012-3743
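As a rough illustration of matrix-style fuzzy reasoning, here is a generic sketch under simplifying assumptions; it is not the exact algorithm of [13], and the connection matrices, the min/max operators, and the confidence scaling are all placeholders. Truth values flow from proposition neurons through rule neurons and back, with min modeling fuzzy AND and max modeling fuzzy OR.

```python
def fuzzy_step(theta, rule_in, rule_out, conf):
    """theta: proposition truth values in [0, 1];
    rule_in[j][i] = 1 if proposition i feeds rule j;
    rule_out[i][j] = 1 if rule j supports proposition i;
    conf[j]: confidence of rule j."""
    # Rule activation: fuzzy AND (min) over each rule's input propositions.
    delta = [min(theta[i] for i in range(len(theta)) if rule_in[j][i])
             for j in range(len(rule_in))]
    # Updated truth values: fuzzy OR (max) over supporting rules, confidence-scaled.
    new = []
    for i in range(len(theta)):
        support = [conf[j] * delta[j] for j in range(len(delta)) if rule_out[i][j]]
        new.append(max([theta[i]] + support))
    return new

theta = [0.9, 0.8, 0.0]        # p1, p2 known; p3 initially unknown
rule_in = [[1, 1, 0]]          # rule r1: p1 AND p2 -> p3
rule_out = [[0], [0], [1]]     # r1 concludes p3
conf = [0.75]                  # confidence of r1
print(fuzzy_step(theta, rule_in, rule_out, conf))  # p3 raised to 0.75 * min(0.9, 0.8)
```

Because each step is a handful of matrix-shaped sweeps over all rules at once, this style of formulation maps naturally onto the parallel (e.g. CUDA) execution the paper targets.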
Heterogeneous Delays in Neural Networks
We investigate heterogeneous coupling delays in complex networks of excitable
elements described by the FitzHugh-Nagumo model. The effects of discrete as
well as of uni- and bimodal continuous distributions are studied with a focus
on different topologies, i.e., regular, small-world, and random networks. In
the case of two discrete delay times resonance effects play a major role:
Depending on the ratio of the delay times, various characteristic spiking
scenarios, such as coherent or asynchronous spiking, arise. For continuous
delay distributions different dynamical patterns emerge depending on the width
of the distribution. For small distribution widths, we find highly synchronized
spiking, while for intermediate widths only spiking with low degree of
synchrony persists, which is associated with traveling disruptions, partial
amplitude death, or subnetwork synchronization, depending sensitively on the
network topology. If the inhomogeneity of the coupling delays becomes too
large, global amplitude death is induced.
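A minimal sketch of this setup, reduced to two FitzHugh-Nagumo units with two different coupling delays and forward-Euler integration (parameter values and initial conditions are illustrative, not those of the study):

```python
# Two delay-coupled FitzHugh-Nagumo units; heterogeneous delays tau[0] != tau[1].
eps, beta, c = 0.08, 0.7, 0.1   # timescale separation, offset, coupling strength
dt = 0.05                        # Euler time step
tau = [100, 250]                 # delay (in steps) of the input arriving at each unit
T = 8000

u = [0.5, -0.5]                  # activator variables
v = [0.0, 0.0]                   # recovery variables
buf = [[u[0]] * (max(tau) + 1),  # history of past u-values, one buffer per unit
       [u[1]] * (max(tau) + 1)]

for _ in range(T):
    for i in (0, 1):
        j = 1 - i
        u_delayed = buf[j][-(tau[i] + 1)]            # partner's activator tau[i] steps ago
        du = u[i] - u[i] ** 3 / 3.0 - v[i] + c * (u_delayed - u[i])
        dv = eps * (u[i] + beta)
        u[i] += dt * du
        v[i] += dt * dv
    for i in (0, 1):
        buf[i].append(u[i])                          # push current state, drop oldest
        buf[i].pop(0)

print(u, v)
```

The plain list stands in for a delay-line ring buffer; for a full regular, small-world, or random network one would keep one such history per node and index it with the delay of each incoming edge.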
Dynamic Adaptive Computation: Tuning network states to task requirements
Neural circuits are able to perform computations under very diverse
conditions and requirements. The required computations impose clear constraints
on their fine-tuning: a rapid and maximally informative response to stimuli in
general requires decorrelated baseline neural activity. Such network dynamics
is known as asynchronous-irregular. In contrast, spatio-temporal integration of
information requires maintenance and transfer of stimulus information over
extended time periods. This can be realized at criticality, a phase transition
where correlations, sensitivity and integration time diverge. Being able to
flexibly switch, or even combine the above properties in a task-dependent
manner would present a clear functional advantage. We propose that cortex
operates in a "reverberating regime" because it is particularly favorable for
ready adaptation of computational properties to context and task. This
reverberating regime enables cortical networks to interpolate between the
asynchronous-irregular and the critical state by small changes in effective
synaptic strength or excitation-inhibition ratio. These changes directly adapt
computational properties, including sensitivity, amplification, integration
time and correlation length within the local network. We review recent
converging evidence that cortex in vivo operates in the reverberating regime,
and that various cortical areas have adapted their integration times to
processing requirements. In addition, we propose that neuromodulation enables a
fine-tuning of the network, so that local circuits can either decorrelate or
integrate, and quench or maintain their input depending on task. We argue that
this task-dependent tuning, which we call "dynamic adaptive computation",
represents a central organizing principle of cortical networks, and discuss
first experimental evidence.
Comment: 6 pages + references, 2 figures
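The interpolation between the asynchronous-irregular and critical states can be caricatured by a branching process with branching ratio m (an illustrative reduction; all numbers are hypothetical): m near 0 gives input-driven, quickly decorrelating activity, m near 1 approaches criticality, and the intrinsic timescale grows as tau = -dt / ln m.

```python
import math, random

def poisson(lam, rng):
    """Knuth's method; adequate for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate(m, h, T, rng):
    """Branching process A_{t+1} ~ Poisson(m*A_t + h); returns the activity trace."""
    A, trace = h, []
    for _ in range(T):
        A = poisson(m * A + h, rng)
        trace.append(A)
    return trace

# Intrinsic timescale diverges as the branching ratio m approaches 1:
for m in (0.5, 0.9, 0.99):
    print(f"m={m}: tau = {-1.0 / math.log(m):.1f} steps")

# Stationary mean activity is h / (1 - m): reverberating amplification of input.
trace = simulate(0.98, 1.0, 20000, random.Random(42))
print(sum(trace) / len(trace))   # close to 1 / (1 - 0.98) = 50
```

Small changes in m (the stand-in for effective synaptic strength or excitation-inhibition ratio) thus retune integration time and amplification continuously, which is the adaptability the abstract argues for.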
Homeostatic plasticity and external input shape neural network dynamics
In vitro and in vivo spiking activity clearly differ. Whereas networks in
vitro develop strong bursts separated by periods of very little spiking
activity, in vivo cortical networks show continuous activity. This is puzzling
considering that both networks presumably share similar single-neuron dynamics
and plasticity rules. We propose that the defining difference between in vitro
and in vivo dynamics is the strength of external input. In vitro, networks are
virtually isolated, whereas in vivo every brain area receives continuous input.
We analyze a model of spiking neurons in which the input strength, mediated by
spike rate homeostasis, determines the characteristics of the dynamical state.
In more detail, our analytical and numerical results on various network
topologies show consistently that under increasing input, homeostatic
plasticity generates distinct dynamic states, from bursting, to
close-to-critical, reverberating and irregular states. This implies that the
dynamic state of a neural network is not fixed but can readily adapt to the
input strength. Indeed, our results match experimental spike recordings in
vitro and in vivo: the in vitro bursting behavior is consistent with a state
generated by very low network input (< 0.1%), whereas in vivo activity suggests
that on the order of 1% recorded spikes are input-driven, resulting in
reverberating dynamics. Importantly, this predicts that one can abolish the
ubiquitous bursts of in vitro preparations, and instead impose dynamics
comparable to in vivo activity by exposing the system to weak long-term
stimulation, thereby opening new paths to establish an in vivo-like assay in
vitro for basic as well as neurological studies.
Comment: 14 pages, 8 figures, accepted at Phys. Rev.
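The proposed mechanism can be sketched as a one-population toy model; this is a deliberately crude caricature, and the rate equation, the Gaussian noise stand-in for spiking, and all parameter values are assumptions rather than the paper's network model. Activity is driven by external input of strength h plus recurrent drive w*A, and a slow homeostatic rule moves w until the mean rate matches a target: strong input settles at weak recurrence (irregular, input-driven dynamics), while weak input forces strong recurrence close to criticality (bursty, reverberating dynamics).

```python
import math, random

def run(h, target=1.0, eta=1e-4, T=60000, seed=1):
    """Evolve the recurrent weight w under rate homeostasis; return the final w."""
    rng = random.Random(seed)
    w, A = 0.0, h
    for _ in range(T):
        lam = max(0.0, w * A + h)                            # expected activity
        A = max(0.0, lam + rng.gauss(0.0, math.sqrt(lam)))   # noisy spiking activity
        w += eta * (target - A)                              # slow homeostatic rule
        w = min(max(w, 0.0), 0.999)                          # keep the sketch subcritical
    return w

# Mean activity is h / (1 - w), so homeostasis settles near w = 1 - h / target:
print(run(h=0.9))   # strong input  -> weak recurrence, near 0.1 (irregular)
print(run(h=0.1))   # weak input    -> strong recurrence, near 0.9 (reverberating)
```

The same homeostatic rule thus produces bursting-like, near-critical states for nearly isolated (in vitro-like) input and irregular, input-driven states for strong (in vivo-like) input, matching the transition the abstract describes.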