Configuring spiking neural network training algorithms
Spiking neural networks, based on biologically plausible neurons with temporal information
coding, are provably more powerful than the widely used artificial neural networks
(ANNs) based on sigmoid neurons. However, training them is more challenging than
training ANNs. Several methods have been proposed in the literature (SpikeProp,
NSEBP, ReSuMe, etc.), each with its limitations, and setting the numerous parameters of
spiking networks to obtain good accuracy has remained largely ad hoc.
In this work, we used automated algorithm configuration tools to determine optimal
parameter combinations for ANNs, for ANNs augmented with components
simulating glial cells (astrocytes), and for spiking neural networks trained with the
SpikeProp learning algorithm. This allowed us to achieve better accuracy on standard
datasets (Iris and Wisconsin Breast Cancer) and showed that, even after optimization,
augmenting an artificial neural network with glia improves performance.
Guided by the experimental results, we developed methods for determining the
values of several parameters of spiking neural networks, in particular the weight and output
ranges. These methods have been incorporated into a SpikeProp implementation.
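The abstract does not spell out the configuration procedure, so here is a hedged sketch of automated parameter configuration as plain random search. The parameter names, ranges, and the stand-in `evaluate` objective are illustrative assumptions, not the actual configurator or values used in this work.

```python
import random

# Illustrative sketch of automated algorithm configuration via random search.
# Parameter names and ranges are hypothetical, not taken from the paper.
SPACE = {
    "learning_rate": (0.001, 0.1),
    "weight_min": (-1.0, 0.0),     # lower end of the initial weight range
    "weight_max": (0.0, 1.0),      # upper end of the initial weight range
    "output_range": (1.0, 20.0),   # e.g. spike-time coding interval (ms)
}

def sample(space, rng):
    """Draw one candidate configuration uniformly from the search space."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}

def evaluate(cfg):
    """Stand-in objective. In practice this would train SpikeProp with
    `cfg` on a dataset such as Iris and return validation accuracy."""
    return -(cfg["learning_rate"] - 0.02) ** 2 \
           - 1e-3 * (cfg["output_range"] - 10.0) ** 2

def configure(space, trials=200, seed=0):
    """Keep the best configuration seen within a fixed trial budget."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = sample(space, rng)
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Real configuration tools use model-based search rather than uniform sampling, but the interface is the same: a search space plus a black-box objective.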
Simulation of networks of spiking neurons: A review of tools and strategies
We review different aspects of the simulation of spiking neural networks. We
start by reviewing the different types of simulation strategies and algorithms
that are currently implemented. We next review the precision of those
simulation strategies, in particular in cases where plasticity depends on the
exact timing of the spikes. We overview different simulators and simulation
environments presently available (restricted to those freely available, open
source and documented). For each simulation tool, its advantages and pitfalls
are reviewed, with an aim to allow the reader to identify which simulator is
appropriate for a given task. Finally, we provide a series of benchmark
simulations of different types of networks of spiking neurons, including
Hodgkin-Huxley-type and integrate-and-fire models, interacting through current-based
or conductance-based synapses, using clock-driven or event-driven integration
strategies. The same set of models is implemented on the different simulators,
and the code is made available. The ultimate goal of this review is to
provide a resource to facilitate identifying the appropriate integration
strategy and simulation tool to use for a given modeling problem related to
spiking neural networks.
Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007).
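As a minimal illustration of the clock-driven strategy this review compares, a fixed-step forward-Euler update of a leaky integrate-and-fire neuron with current-based input might look as follows; all constants are illustrative, not from the benchmarks.

```python
def lif_clock_driven(i_input, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Clock-driven (fixed-step, forward-Euler) integration of a leaky
    integrate-and-fire neuron with a current-based input:
        dv/dt = (-v + i(t)) / tau,  spike and reset when v >= v_th.
    Constants are illustrative placeholders."""
    v = v_reset
    spike_times = []
    for step, i in enumerate(i_input):
        v += dt * (-v + i) / tau        # one fixed time step
        if v >= v_th:
            spike_times.append(step * dt)
            v = v_reset                 # hard reset after the spike
    return spike_times
```

An event-driven integrator would instead solve analytically for the next threshold crossing and jump straight to it, trading per-step cost for per-event cost.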
Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition
A neuromorphic chip that combines CMOS analog spiking neurons and memristive
synapses offers a promising solution to brain-inspired computing, as it can
provide massive neural network parallelism and density. Previous hybrid analog
CMOS-memristor approaches required extensive CMOS circuitry for training, and
thus eliminated most of the density advantages gained by the adoption of
memristor synapses. Further, they used different waveforms for pre- and
post-synaptic spikes that added undesirable circuit overhead. Here we describe
a hardware architecture that can feature a large number of memristor synapses
to learn real-world patterns. We present a versatile CMOS neuron that exhibits
integrate-and-fire behavior, drives passive memristors, implements
competitive learning in a compact circuit module, and enables in-situ
plasticity in the memristor synapses. We demonstrate handwritten-digit
recognition with the proposed architecture using transistor-level circuit
simulations. As the described neuromorphic architecture is homogeneous, it
realizes a fundamental building block for large-scale energy-efficient
brain-inspired silicon chips that could lead to next-generation cognitive
computing.
Comment: This is a preprint of an article accepted for publication in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 5, no. 2, June 201
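The in-situ plasticity described above can be sketched behaviorally: the memristor's conductance changes only when the voltage across it exceeds a switching threshold in either direction. This is not the paper's circuit; the thresholds, step size, and bounds below are invented for illustration.

```python
def update_conductance(g, v_pre, v_post, v_set=1.0, v_reset=-1.0,
                       dg=0.01, g_min=0.0, g_max=1.0):
    """Behavioral sketch of in-situ memristive plasticity. The conductance
    g moves only when the voltage across the device passes a switching
    threshold; all device parameters here are hypothetical."""
    v = v_pre - v_post                  # voltage across the memristor
    if v >= v_set:
        g = min(g_max, g + dg)          # potentiation (SET direction)
    elif v <= v_reset:
        g = max(g_min, g - dg)          # depression (RESET direction)
    return g
```

Using the same waveform for pre- and post-synaptic spikes, as the paper advocates, means plasticity emerges purely from the timing overlap that determines `v_pre - v_post`.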
Unsupervised SFQ-Based Spiking Neural Network
Single Flux Quantum (SFQ) technology represents a groundbreaking advancement
in computational efficiency and ultra-high-speed neuromorphic processing. The
key features of SFQ technology, particularly data representation, transmission,
and processing through SFQ pulses, closely mirror fundamental aspects of
biological neural structures. Consequently, SFQ-based circuits emerge as
ideal candidates for realizing Spiking Neural Networks (SNNs). This study
presents a proof-of-concept demonstration of an SFQ-based SNN architecture,
showcasing its capacity for ultra-fast switching at remarkably low energy
consumption per output activity. Notably, our work makes the following
contributions: (i) We introduce a novel spike-timing-dependent plasticity
mechanism to update synapses and to trace spike activity by incorporating a
leaky non-destructive readout circuit. (ii) We propose a novel method to
dynamically regulate the threshold behavior of leaky integrate-and-fire
superconductor neurons, enhancing the adaptability of our SNN architecture.
(iii) Our research incorporates a novel winner-take-all mechanism, aligning
with practical strategies for SNN development and enabling effective
decision-making processes. The effectiveness of these proposed structural
enhancements is evaluated by integrating high-level models into the BindsNET
framework. By leveraging BindsNET, we model the online training of an SNN,
integrating the novel structures into the learning process. To ensure the
robustness and functionality of our circuits, we employ JoSIM for circuit
parameter extraction and functional verification through simulation.
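The three mechanisms named in this abstract can be sketched at a high level in Python. This is not the BindsNET or JoSIM implementation; all time constants and step sizes below are illustrative.

```python
import math

def decay_trace(trace, dt, tau=20.0, spike=False):
    """Leaky trace of recent spike activity, of the kind used by
    trace-based spike-timing-dependent plasticity."""
    trace *= math.exp(-dt / tau)
    if spike:
        trace += 1.0
    return trace

def adapt_threshold(theta, dt, spiked, tau_theta=1e4, theta_plus=0.05):
    """Adaptive threshold: relaxes slowly downward and grows after each
    spike, dynamically regulating the neuron's excitability."""
    theta *= math.exp(-dt / tau_theta)
    if spiked:
        theta += theta_plus
    return theta

def winner_take_all(potentials, thresholds):
    """Only the neuron farthest above its threshold spikes this step;
    every other neuron is suppressed."""
    margins = [v - t for v, t in zip(potentials, thresholds)]
    best = max(range(len(margins)), key=margins.__getitem__)
    return [i == best and margins[i] >= 0 for i in range(len(margins))]
```

In the SFQ setting the leaky trace corresponds to the leaky non-destructive readout circuit, and the winner-take-all step to the inhibition across competing output neurons.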
Stochastic resonance and finite resolution in a network of leaky integrate-and-fire neurons.
This thesis is a study of stochastic resonance (SR) in a discrete implementation of a leaky integrate-and-fire (LIF) neuron network. The aim was to determine if SR can be realised in limited precision discrete systems implemented on digital hardware.
How neuronal modelling connects with SR is discussed. Analysis techniques for noisy spike trains are described, ranging from rate coding, statistical measures, and signal processing measures like power spectrum and signal-to-noise ratio (SNR). The main problem in computing spike train power spectra is how to get equi-spaced sample amplitudes given the short duration of spikes relative to their frequency. Three different methods of computing the SNR of a spike train given its power spectrum are described. The main problem is how to separate the power at the frequencies of interest from the noise power as the spike train encodes both noise and the signal of interest.
Two models of the LIF neuron were developed, one continuous and one discrete, and the results compared. The discrete model allowed variation of the precision of the simulation values allowing investigation of the effect of precision limitation on SR. The main difference between the two models lies in the evolution of the membrane potential. When both models are allowed to decay from a high start value in the absence of input, the discrete model does not completely discharge while the continuous model discharges to almost zero.
The results of simulating the discrete model on an FPGA and the continuous model on a PC showed that SR can be realised in discrete low-resolution digital systems. SR was found to be sensitive to the precision of the values in the simulations. For a single neuron, we find that SR increases between 10-bit and 12-bit resolution, after which it saturates. For a feed-forward network with multiple input neurons and one output neuron, SR is stronger with more than 6 input neurons and it saturates at a higher resolution. We conclude that stochastic resonance can manifest in discrete systems, though to a lesser extent than in continuous systems.
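The discharge difference between the two models can be illustrated with a toy fixed-point decay: the floating-point membrane decays essentially to zero, while a quantized version stalls once the per-step decrement rounds away. The bit width and decay factor below are illustrative, not the thesis's values.

```python
def discharge(v0=1.0, alpha=0.99, steps=2000, frac_bits=None):
    """Decay a membrane potential toward zero with per-step factor alpha.
    With frac_bits set, the value is kept in fixed point with that many
    fractional bits; rounding makes the decay stall at a nonzero floor."""
    if frac_bits is None:
        v = v0
        for _ in range(steps):
            v *= alpha                  # continuous (floating-point) decay
        return v
    scale = 1 << frac_bits
    v = round(v0 * scale)               # fixed-point representation
    for _ in range(steps):
        v = round(v * alpha)            # quantized decay: once the decrement
                                        # rounds to zero, v stops changing
    return v / scale
```

The continuous case ends up vanishingly small, while the 8-bit fixed-point case stalls at a visible residual, mirroring the incomplete discharge reported for the discrete model.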
Reconfigurable cascaded thermal neuristors for neuromorphic computing
While the complementary metal-oxide semiconductor (CMOS) technology is the
mainstream for the hardware implementation of neural networks, we explore an
alternative route based on a new class of spiking oscillators we call thermal
neuristors, which operate and interact solely via thermal processes. Utilizing
the insulator-to-metal transition in vanadium dioxide, we demonstrate a wide
variety of reconfigurable electrical dynamics mirroring biological neurons.
Notably, inhibitory functionality is achieved in just a single oxide device,
and cascaded information flow is realized exclusively through thermal
interactions. To elucidate the underlying mechanisms of the neuristors, a
detailed theoretical model is developed, which accurately reflects the
experimental results. This study establishes the foundation for scalable and
energy-efficient thermal neural networks, fostering progress in brain-inspired
computing.
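Cascaded information flow through purely thermal interaction can be caricatured with a toy model: each unit's temperature integrates heating and leaks toward ambient, and crossing an insulator-to-metal transition threshold emits a spike and transfers a heat packet to its neighbor. All constants below are invented for illustration and are not VO2 device parameters.

```python
def simulate(steps=500, dt=1.0, heat_in=0.6, leak=0.05,
             t_imt=10.0, couple=9.0):
    """Toy pair of thermally coupled spiking units. Unit 1 is driven by a
    constant heat input; unit 2 receives heat only from unit 1's spikes."""
    t1 = t2 = 0.0
    spikes = ([], [])
    for n in range(steps):
        t1 += dt * (heat_in - leak * t1)   # driven unit heats up and leaks
        t2 += dt * (-leak * t2)            # downstream unit only leaks
        if t1 >= t_imt:                    # insulator-to-metal transition
            spikes[0].append(n)
            t2 += couple                   # heat packet to the neighbor
            t1 = 0.0                       # transition resets the cycle
        if t2 >= t_imt:
            spikes[1].append(n)
            t2 = 0.0
    return spikes
```

With these numbers the downstream unit needs more than one incoming heat packet to fire, so it spikes at a lower rate than the driven unit, a crude analogue of cascaded thermal signaling.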
Neuromorphic silicon neuron circuits
23 pages, 21 figures, 2 tables.
Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems and bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic, conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate-and-fire models. We compare the design methodologies used for each silicon neuron described, and demonstrate their features with experimental results measured from a wide range of fabricated VLSI chips.
This work was supported by the EU ERC grant 257219 (neuroP), the EU ICT FP7 grants 231467 (eMorph), 216777 (NABAB), 231168 (SCANDLE), 15879 (FACETS), by the Swiss National Science Foundation grant 119973 (SoundRec), by the UK EPSRC grant no. EP/C010841/1, by the Spanish grants (with support from the European Regional Development Fund) TEC2006-11730-C03-01 (SAMANTA2) and TEC2009-10639-C04-01 (VULCANO), the Andalusian grant no. P06TIC01417 (Brain System), and by the Australian Research Council grants no. DP0343654 and no. DP0881219.
Peer Reviewed
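The bi-dimensional adaptive integrate-and-fire models mentioned in this abstract pair the membrane potential with a slow adaptation variable. A minimal software sketch of that model class follows; the parameters are illustrative, not values from any of the chips surveyed.

```python
def adaptive_if(i_input, dt=0.1, tau_v=10.0, tau_w=100.0,
                a=0.1, b=0.5, v_th=1.0, v_reset=0.0):
    """Two-variable (bi-dimensional) adaptive integrate-and-fire sketch:
    membrane potential v plus an adaptation variable w that tracks v and
    jumps by b at each spike, producing spike-frequency adaptation."""
    v, w = v_reset, 0.0
    spike_times = []
    for step, i in enumerate(i_input):
        v += dt * (-v - w + i) / tau_v   # leaky integration minus adaptation
        w += dt * (a * v - w) / tau_w    # slow adaptation dynamics
        if v >= v_th:
            spike_times.append(step * dt)
            v = v_reset
            w += b                       # spike-triggered adaptation jump
    return spike_times
```

With a constant input, the adaptive model fires fewer spikes than the same model with adaptation disabled (a = b = 0), since w effectively subtracts from the drive; silicon implementations realize the same two-variable dynamics in analog circuitry.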