Demonstration of Programmable Brain-Inspired Optoelectronic Neuron in Photonic Spiking Neural Network with Neural Heterogeneity
Photonic Spiking Neural Networks (PSNNs) composed of co-integrated CMOS
and photonic elements can offer low-loss, low-power, highly parallel, and
high-throughput computing for brain-inspired neuromorphic systems. In addition,
heterogeneity of neuron dynamics can also bring greater diversity and
expressivity to brain-inspired networks, potentially allowing for the
implementation of complex functions with fewer neurons. In this paper, we
design, fabricate, and experimentally demonstrate an optoelectronic spiking
neuron that can simultaneously achieve high programmability for heterogeneous
biological neural networks and maintain high-speed computing. We demonstrate
that our neuron can be programmed to tune four essential parameters of neuron
dynamics under 1 GSpike/s input spiking patterns. A single neuron circuit
can be tuned to output three spiking patterns, including chattering behavior.
The PSNN consisting of the optoelectronic spiking neuron and a Mach-Zehnder
interferometer (MZI) mesh synaptic network achieves 89.3% accuracy on the Iris
dataset. Our neuron's power consumption is 1.18 pJ per output spike, mainly
limited by the power efficiency of the vertical-cavity lasers, the optical
coupling efficiency, and the 45 nm CMOS platform used in this experiment; it is
predicted to reach 36.84 fJ per output spike on a 7 nm CMOS platform (e.g.,
ASAP7) integrated with silicon photonics containing on-chip micron-scale
lasers.
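The four-parameter programmability described above can be illustrated in software with the Izhikevich model, which likewise selects among spiking patterns (including chattering) through four parameters (a, b, c, d). This is only a minimal numerical sketch, not the paper's optoelectronic circuit; the parameter sets are the standard ones for the Izhikevich model, not values measured in this work.

```python
def izhikevich(a, b, c, d, I=10.0, T=500.0, dt=0.5):
    """Euler simulation of an Izhikevich neuron; the four parameters
    (a, b, c, d) select the qualitative spiking pattern."""
    v, u = -65.0, b * -65.0          # membrane potential and recovery variable
    spike_times = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike threshold reached
            spike_times.append(step * dt)
            v, u = c, u + d          # reset potential, bump recovery
    return spike_times

# Standard parameter sets yielding three qualitatively different patterns:
regular = izhikevich(0.02, 0.2, -65.0, 8.0)   # regular spiking
chatter = izhikevich(0.02, 0.2, -50.0, 2.0)   # chattering (bursting)
fast    = izhikevich(0.10, 0.2, -65.0, 2.0)   # fast spiking
```

Tuning only these four scalars switches the output pattern, which is the same kind of knob the programmable hardware neuron exposes.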
A CMOS Spiking Neuron for Dense Memristor-Synapse Connectivity for Brain-Inspired Computing
Neuromorphic systems that densely integrate CMOS spiking neurons and
nano-scale memristor synapses open a new avenue of brain-inspired computing.
Existing silicon neurons have modeled neural biophysical dynamics but are
incompatible with memristor synapses, require extra training circuitry that
eliminates much of the density advantage gained by using memristors, or are
energy-inefficient. Here we describe a novel CMOS spiking leaky
integrate-and-fire neuron circuit. Building on a reconfigurable architecture
with a single opamp, the described neuron accommodates a large number of
memristor synapses, and enables online spike timing dependent plasticity (STDP)
learning with optimized power consumption. Simulation results of a 180 nm CMOS
design showed a 97% power-efficiency metric when realizing STDP learning in
10,000 memristor synapses with a nominal 1 MΩ memristance, and only
13 µA of current consumption when integrating input spikes. Therefore, the
described CMOS neuron contributes a generalized building block for large-scale
brain-inspired neuromorphic systems.
Comment: This is a preprint of an article accepted for publication in the
International Joint Conference on Neural Networks (IJCNN) 201
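The leaky integrate-and-fire dynamics with pair-based STDP mentioned in this abstract can be sketched generically as follows. This is a standard software model using exponential spike traces, assuming illustrative constants (tau_m, a_plus, a_minus, etc.) that are not taken from the described circuit.

```python
import math

def lif_with_stdp(pre_spikes, w=0.6, T=200, dt=1.0, tau_m=20.0, v_th=1.0,
                  a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
    """Leaky integrate-and-fire neuron driven by one plastic synapse,
    with pair-based STDP tracked via exponential spike traces."""
    v = 0.0
    x_pre = 0.0               # presynaptic trace (drives potentiation)
    y_post = 0.0              # postsynaptic trace (drives depression)
    post_spikes = []
    pre = set(pre_spikes)
    for t in range(int(T / dt)):
        decay = math.exp(-dt / tau_stdp)
        x_pre *= decay
        y_post *= decay
        v *= math.exp(-dt / tau_m)    # membrane leak
        if t in pre:                  # presynaptic spike arrives
            v += w
            x_pre += 1.0
            w -= a_minus * y_post     # depress: post fired before pre
        if v >= v_th:                 # postsynaptic spike
            post_spikes.append(t)
            v = 0.0
            y_post += 1.0
            w += a_plus * x_pre       # potentiate: pre fired before post
    return post_spikes, w

post, w_final = lif_with_stdp(list(range(5, 200, 5)))
```

The hardware neuron implements the same integrate-leak-fire loop and online weight update with an opamp-based circuit rather than explicit arithmetic.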
Implementation Costs of Spiking versus Rate-Based ANNs
Artificial neural networks are an effective machine learning technique for a variety of data sets and domains, but exploiting the inherent parallelism in neural networks requires specialized hardware. Typically, computing the output of each neuron requires many multiplications, evaluation of a transcendental activation function, and transfer of its output to a large number of other neurons. These operations become more expensive when internal values are represented with increasingly higher data precision. A spiking neural network eliminates the limitations of typical rate-based neural networks by reducing neuron outputs and synapse weights to one-bit values, eliminating hardware multipliers, and simplifying the activation function. However, a spiking neural network requires a larger number of neurons than a comparable rate-based network. To determine whether the benefits of spiking neural networks outweigh the costs, we designed the smallest spiking neural network and rate-based artificial neural network that achieved 90% or comparable testing accuracy on the MNIST data set. After estimating the FPGA storage requirements for the synapse values of each network, we concluded that rate-based neural networks need significantly fewer bits than spiking neural networks.
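The storage comparison in this abstract reduces to simple arithmetic over layer sizes and weight precision. The sketch below uses hypothetical fully connected layer sizes chosen only to illustrate the trade-off (a small rate-based network with multi-bit weights versus a much wider spiking network with 1-bit weights); they are not the networks from the paper.

```python
def synapse_storage_bits(layer_sizes, bits_per_weight):
    """Total bits needed to store the synapse weights of a fully
    connected feed-forward network."""
    return sum(n_in * n_out * bits_per_weight
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical layer sizes, for illustration only: a rate-based net
# with 8-bit weights vs. a spiking net with 1-bit weights that needs
# many more hidden neurons to reach comparable accuracy.
rate_bits  = synapse_storage_bits([784, 30, 10], bits_per_weight=8)
spike_bits = synapse_storage_bits([784, 2000, 10], bits_per_weight=1)
```

With these illustrative sizes the 1-bit saving per synapse is outweighed by the quadratic growth in synapse count, matching the paper's conclusion that the rate-based network needs fewer total bits.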
A geographically distributed bio-hybrid neural network with memristive plasticity
Throughout evolution the brain has mastered the art of processing real-world
inputs through networks of interlinked spiking neurons. Synapses have emerged
as key elements that, owing to their plasticity, are merging neuron-to-neuron
signalling with memory storage and computation. Electronics has made important
steps in emulating neurons through neuromorphic circuits and synapses with
nanoscale memristors, yet novel applications that interlink them in
heterogeneous bio-inspired and bio-hybrid architectures are just beginning to
materialise. The uses of memristive technologies in brain-inspired
architectures for computing, or for sensing the spiking activity of biological
neurons, are only recent examples; however, interlinking brain and electronic
neurons through plasticity-driven synaptic elements has so far remained in the
realm of the imagination. Here, we demonstrate a bio-hybrid neural network
(bNN) where
memristors work as "synaptors" between rat neural circuits and VLSI neurons.
The two fundamental synaptors, from artificial-to-biological (ABsyn) and from
biological-to-artificial (BAsyn), are interconnected over the Internet. The
bNN extends across Europe, collapsing spatial boundaries existing in natural
brain networks and laying the foundations of a new geographically distributed
and evolving architecture: the Internet of Neuro-electronics (IoN).
Comment: 16 pages, 10 figures
Community detection with spiking neural networks for neuromorphic hardware
We present results related to the performance of an algorithm for community
detection which incorporates event-driven computation. We define a mapping
which takes a graph G to a system of spiking neurons. Using a fully connected
spiking neuron system, with both inhibitory and excitatory synaptic
connections, the firing patterns of neurons within the same community can be
distinguished from firing patterns of neurons in different communities. On a
random graph with 128 vertices and known community structure we show that by
using binary decoding and a Hamming-distance based metric, individual
communities can be identified from spike train similarities. Using bipolar
decoding and finite rate thresholding, we verify that inhibitory connections
prevent the spread of spiking patterns.
Comment: Conference paper presented at the ORNL Neuromorphic Workshop 2017,
7 pages, 6 figures
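The binary-decoding and Hamming-distance comparison described in this abstract can be sketched as follows. The spike trains and bin width here are hypothetical, chosen only to show how neurons in the same community (which fire in similar patterns) score a smaller distance than neurons in different communities.

```python
import numpy as np

def binary_decode(spike_times, T, bin_width):
    """Binary decoding: 1 if the neuron fired within a time bin, else 0."""
    bins = np.zeros(int(T / bin_width), dtype=int)
    for t in spike_times:
        bins[int(t // bin_width)] = 1
    return bins

def hamming(a, b):
    """Hamming distance between two binary spike-train codes."""
    return int(np.sum(a != b))

# Hypothetical spike trains: neurons 0 and 1 fire in lockstep
# (same community); neuron 2 follows a different pattern.
n0 = binary_decode([1, 5, 9, 13], T=16, bin_width=2)
n1 = binary_decode([1, 5, 9, 13], T=16, bin_width=2)
n2 = binary_decode([3, 11], T=16, bin_width=2)
```

Thresholding these pairwise distances is one way to group neurons, and hence the vertices they represent, into communities.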