Multi-layered Spiking Neural Network with Target Timestamp Threshold Adaptation and STDP
Spiking neural networks (SNNs) are good candidates for producing
ultra-energy-efficient hardware. However, the performance of these models
currently lags behind that of traditional methods. Introducing multi-layered
SNNs is a promising way to reduce this gap. In this paper, we propose a new
threshold adaptation system which uses a target timestamp at which neurons
should fire. We show that our method leads to state-of-the-art classification rates on
the MNIST dataset (98.60%) and the Faces/Motorbikes dataset (99.46%) with an
unsupervised SNN followed by a linear SVM. We also investigate the sparsity
level of the network by testing different inhibition policies and STDP rules.
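The target-timestamp idea can be sketched as a simple adjustment rule: if a neuron fires before its target time, its threshold was too low and is raised; if it fires after the target (or not at all), the threshold is lowered. The function below is a hypothetical illustration of this principle, not the paper's actual update rule; the function name and the learning rate `lr` are assumptions.

```python
def adapt_threshold(threshold, spike_time, target_time, lr=0.05):
    """Hypothetical target-timestamp threshold rule (a sketch, not the
    paper's exact update): nudge the firing threshold so the neuron's
    spike time drifts toward the target timestamp.

    Firing earlier than the target means the threshold was too low, so
    it is raised; firing later than the target (or staying silent)
    lowers it, making the neuron fire earlier next time.
    """
    if spike_time is None:
        # neuron stayed silent: make it easier to fire next time
        return threshold - lr
    return threshold + lr * (target_time - spike_time)
```

Raising the threshold delays the spike and lowering it advances the spike, so repeated application pushes each neuron's firing time toward the shared target timestamp.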
Neuroinspired unsupervised learning and pruning with subquantum CBRAM arrays.
Resistive RAM crossbar arrays offer an attractive solution to minimize off-chip data transfer and parallelize on-chip computations for neural networks. Here, we report a hardware/software co-design approach based on low-energy subquantum conductive bridging RAM (CBRAM®) devices and a network pruning technique to reduce network-level energy consumption. First, we demonstrate low-energy subquantum CBRAM devices exhibiting gradual switching characteristics important for implementing weight updates in hardware during unsupervised learning. Then we develop a network pruning algorithm that can be employed during training, unlike previous network pruning approaches, which apply to inference only. Using a 512 kbit subquantum CBRAM array, we experimentally demonstrate high recognition accuracy on the MNIST dataset for a digital implementation of unsupervised learning. Our hardware/software co-design approach can pave the way towards resistive-memory-based neuro-inspired systems that can autonomously learn and process information in power-limited settings.
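The train-time pruning idea, removing weak synapses while learning is still in progress rather than after it, can be illustrated with plain magnitude pruning in NumPy. This is a generic sketch under assumed names and parameters; the paper's actual pruning criterion and the CBRAM-specific details are not reproduced here.

```python
import numpy as np

def prune_during_training(weights, frac=0.2):
    """Zero out the lowest-magnitude fraction of weights and return the
    pruned weights plus a binary mask (generic magnitude-pruning sketch,
    not the paper's criterion). Re-applying the mask after every weight
    update keeps pruned synapses dead for the rest of training."""
    flat = np.abs(weights).ravel()
    k = int(frac * flat.size)
    if k == 0:
        return weights, np.ones_like(weights, dtype=bool)
    # magnitude of the k-th smallest weight becomes the pruning cutoff
    cutoff = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > cutoff
    return weights * mask, mask
```

In a training loop, the mask would be applied after each weight update so that pruned connections cannot recover, which is what distinguishes train-time pruning from one-shot pruning before inference.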
Demonstrating Advantages of Neuromorphic Computation: A Pilot Study
Neuromorphic devices represent an attempt to mimic aspects of the brain's
architecture and dynamics with the aim of replicating its hallmark functional
capabilities in terms of computational power, robust learning and energy
efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic
system to implement a proof-of-concept demonstration of reward-modulated
spike-timing-dependent plasticity in a spiking network that learns to play the
Pong video game by smooth pursuit. This system combines an electronic
mixed-signal substrate for emulating neuron and synapse dynamics with an
embedded digital processor for on-chip learning, which in this work also serves
to simulate the virtual environment and learning agent. The analog emulation of
neuronal membrane dynamics enables a 1000-fold acceleration with respect to
biological real-time, with the entire chip operating on a power budget of 57 mW.
Compared to an equivalent simulation using state-of-the-art software, the
on-chip emulation is at least one order of magnitude faster and three orders of
magnitude more energy-efficient. We demonstrate how on-chip learning can
mitigate the effects of fixed-pattern noise, which is unavoidable in analog
substrates, while making use of temporal variability for action exploration.
Learning compensates for imperfections of the physical substrate, as manifested
in neuronal parameter variability, by adapting synaptic weights to match the
respective excitability of individual neurons.
Comment: Added measurements with noise in NEST simulation; added notice about
journal publication. Frontiers in Neuromorphic Engineering (2019)
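Reward-modulated STDP combines a pair-based STDP term with a scalar reward signal, typically through an eligibility trace that stores recent spike-pairing activity until the reward arrives. The sketch below shows the generic textbook form of this rule, not the BrainScaleS-2 implementation; all parameter values and function names are assumptions.

```python
import math

def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Classic exponential STDP window: potentiation when the pre-synaptic
    spike precedes the post-synaptic one (dt = t_post - t_pre > 0),
    depression otherwise."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def r_stdp_update(weight, eligibility, dt, reward, lr=0.1, decay=0.9):
    """Reward-modulated STDP sketch: pair-based STDP contributions are
    accumulated in a decaying eligibility trace, and the actual weight
    change is gated by a scalar reward signal."""
    eligibility = decay * eligibility + stdp_window(dt)
    weight += lr * reward * eligibility
    return weight, eligibility
```

With this gating, the same spike pairing strengthens the synapse under positive reward and weakens it under negative reward, which is what lets the network learn the Pong policy from a scalar performance signal.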
Unsupervised Visual Feature Learning with Spike-timing-dependent Plasticity: How Far are we from Traditional Feature Learning Approaches?
Spiking neural networks (SNNs) equipped with latency coding and spike-timing
dependent plasticity rules offer an alternative to solve the data and energy
bottlenecks of standard computer vision approaches: they can learn visual
features without supervision and can be implemented by ultra-low power hardware
architectures. However, their performance in image classification has never
been evaluated on recent image datasets. In this paper, we compare SNNs to
auto-encoders on three visual recognition datasets, and extend the use of SNNs
to color images. The analysis of the results helps us identify some bottlenecks
of SNNs: the limits of on-center/off-center coding, especially for color
images, and the ineffectiveness of current inhibition mechanisms. These issues
should be addressed to build effective SNNs for image recognition.
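On-center/off-center coding, one of the bottlenecks identified above, is commonly implemented as difference-of-Gaussians (DoG) filtering that splits an image into ON and OFF channels before spike conversion. A minimal NumPy sketch, assuming generic filter sizes rather than the paper's exact preprocessing:

```python
import numpy as np

def on_off_center_coding(image, sigma_c=1.0, sigma_s=2.0, size=7):
    """Split a grayscale image into ON-center and OFF-center channels
    using a difference-of-Gaussians filter (illustrative parameters,
    not the paper's exact preprocessing)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    gauss = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2))
    center, surround = gauss(sigma_c), gauss(sigma_s)
    # normalized center minus normalized surround: zero mean response
    dog = center / center.sum() - surround / surround.sum()
    h, w = image.shape
    out = np.zeros((h - size + 1, w - size + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + size, j:j + size] * dog)
    # positive responses drive ON cells, negative responses OFF cells
    return np.maximum(out, 0.0), np.maximum(-out, 0.0)
```

Because the filter has zero mean, uniform regions produce no response in either channel, which is also why a per-channel DoG generalizes poorly to color images, one of the limits the abstract points out.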
Learning to Recognize Actions from Limited Training Examples Using a Recurrent Spiking Neural Model
A fundamental challenge in machine learning today is to build a model that
can learn from few examples. Here, we describe a reservoir based spiking neural
model for learning to recognize actions with a limited number of labeled
videos. First, we propose a novel encoding, inspired by how microsaccades
influence visual perception, to extract spike information from raw video data
while preserving the temporal correlation across different frames. Using this
encoding, we show that the reservoir generalizes its rich dynamical activity
toward signature actions/movements, enabling it to learn from few training
examples. We evaluate our approach on the UCF-101 dataset. Our experiments
demonstrate that our proposed reservoir achieves 81.3%/87% Top-1/Top-5
accuracy, respectively, on the 101-class data while requiring just 8 video
examples per class for training. Our results establish a new benchmark for
action recognition from limited video examples for spiking neural models while
yielding competitive accuracy with respect to state-of-the-art non-spiking
neural models.
Comment: 13 figures (includes supplementary information)
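The reservoir idea, a fixed random recurrent network whose rich dynamics are read out by a small trained classifier, can be sketched in a few lines. The following is a generic rate-based, echo-state-style sketch under assumed parameters, not the spiking reservoir or encoding of the paper:

```python
import numpy as np

def run_reservoir(inputs, n_res=50, leak=0.9, seed=0):
    """Drive a fixed random recurrent reservoir with an input sequence
    and return the leaky state trajectory. Only a readout trained on
    these states would see the labels; the reservoir weights stay fixed
    (generic sketch, not the paper's liquid-state-machine parameters)."""
    rng = np.random.default_rng(seed)
    w_in = rng.normal(0.0, 0.5, (n_res, inputs.shape[1]))
    w_rec = rng.normal(0.0, 1.0 / np.sqrt(n_res), (n_res, n_res))
    x = np.zeros(n_res)
    states = []
    for u in inputs:  # one timestep per input frame
        x = leak * x + np.tanh(w_in @ u + w_rec @ x)
        states.append(x.copy())
    return np.array(states)
```

Because only the readout is trained, few labeled examples per class can suffice, which is the property the abstract exploits for action recognition from limited videos.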
VIOLA - A multi-purpose and web-based visualization tool for neuronal-network simulation output
Neuronal network models and corresponding computer simulations are invaluable
tools to aid the interpretation of the relationship between neuron properties,
connectivity and measured activity in cortical tissue. Spatiotemporal patterns
of activity propagating across the cortical surface as observed experimentally
can for example be described by neuronal network models with layered geometry
and distance-dependent connectivity. The interpretation of the resulting stream
of multi-modal and multi-dimensional simulation data calls for integrating
interactive visualization steps into existing simulation-analysis workflows.
Here, we present a set of interactive visualization concepts called views for
the visual analysis of activity data in topological network models, and a
corresponding reference implementation VIOLA (VIsualization Of Layer Activity).
The software is a lightweight, open-source, web-based and platform-independent
application combining and adapting modern interactive visualization paradigms,
such as coordinated multiple views, for massively parallel neurophysiological
data. For a use-case demonstration we consider spiking activity data of a
two-population, layered point-neuron network model subject to a spatially
confined excitation originating from an external population. With the multiple
coordinated views, an explorative and qualitative assessment of the
spatiotemporal features of neuronal activity can be performed upfront of a
detailed quantitative data analysis of specific aspects of the data.
Furthermore, ongoing efforts including the European Human Brain Project aim at
providing online user portals for integrated model development, simulation,
analysis and provenance tracking, wherein interactive visual analysis tools are
one component. Browser-compatible, web-technology based solutions are therefore
required. Within this scope, with VIOLA we provide a first prototype.
Comment: 38 pages, 10 figures, 3 tables