Spatio-temporal Learning with Arrays of Analog Nanosynapses
Emerging nanodevices such as resistive memories are being considered for
hardware realizations of a variety of artificial neural networks (ANNs),
including highly promising online variants of the learning approaches known as
reservoir computing (RC) and the extreme learning machine (ELM). We propose an
RC/ELM inspired learning system built with nanosynapses that performs both
on-chip projection and regression operations. To address time-dynamic tasks,
the hidden neurons of our system perform spatio-temporal integration and can be
further enhanced with variable sampling or multiple activation windows. We
detail the system and show its use in conjunction with a highly analog
nanosynapse device on a standard task with intrinsic timing dynamics: the TI-46
battery of spoken digits. The system achieves nearly perfect (99%) accuracy at
sufficient hidden layer size, which compares favorably with software results.
In addition, the model is extended to a larger dataset, the MNIST database of
handwritten digits. By translating the database into the time domain and using
variable integration windows, up to 95% classification accuracy is achieved. In
addition to an intrinsically low-power programming style, the proposed
architecture learns very quickly and can easily be converted into a spiking
system with negligible loss in performance: all features that confer
significant energy efficiency.
Comment: 6 pages, 3 figures. Presented at the 2017 IEEE/ACM Symposium on
Nanoscale Architectures (NANOARCH).
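The projection-plus-regression scheme the abstract describes can be sketched in a few lines of software. Everything below (dimensions, leak rate, random sequences in place of TI-46 utterances) is a hypothetical stand-in for the paper's nanosynapse hardware; only the structure (fixed random projection, leaky spatio-temporal integration in the hidden layer, least-squares readout) follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the paper's experiments use TI-46 / MNIST-scale data.
n_in, n_hidden, n_out, n_steps = 16, 64, 10, 20

# Fixed random input projection (standing in for the nanosynapse array).
W_in = rng.standard_normal((n_hidden, n_in))

def hidden_state(x_seq, leak=0.3):
    """Hidden neurons perform leaky spatio-temporal integration."""
    h = np.zeros(n_hidden)
    for x in x_seq:
        h = (1 - leak) * h + leak * np.tanh(W_in @ x)
    return h

# Random sequences and labels; the readout is trained by least-squares
# regression on the integrated hidden states (the ELM-style step).
X = rng.standard_normal((100, n_steps, n_in))
y = rng.integers(0, n_out, size=100)
H = np.array([hidden_state(s) for s in X])
T = np.eye(n_out)[y]                          # one-hot targets
W_out, *_ = np.linalg.lstsq(H, T, rcond=None)
pred = (H @ W_out).argmax(axis=1)
```

In hardware, the regression step would be replaced by in-situ conductance updates of the nanosynapse crossbar; the sketch only mirrors the dataflow.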
Overcoming device unreliability with continuous learning in a population coding based computing system
The brain, which uses redundancy and continuous learning to overcome the
unreliability of its components, provides a promising path to building
computing systems that are robust to the unreliability of their constituent
nanodevices. In this work, we illustrate this path with a computing system based
on population coding with magnetic tunnel junctions that implement both neurons
and synaptic weights. We show that equipping such a system with continuous
learning enables it to recover from the loss of neurons and makes it possible
to use unreliable synaptic weights (i.e. low energy barrier magnetic memories).
There is a tradeoff between power consumption and precision because low energy
barrier memories consume less energy than high barrier ones. For a given
precision, there is an optimal number of neurons and an optimal energy barrier
for the weights that lead to minimum power consumption.
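The recovery mechanism described above can be illustrated with a minimal software sketch. The Gaussian tuning curves, the delta-rule readout, and the target function are all hypothetical stand-ins for the magnetic-tunnel-junction hardware; what the sketch shows is only the claim that continuous learning lets a population code recover after neurons are lost:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 50

# Population code: each neuron has a Gaussian tuning curve over the input.
centers = np.linspace(-1, 1, n_neurons)
alive = np.ones(n_neurons, dtype=bool)        # which neurons still work

def encode(x, width=0.2):
    return np.exp(-(x - centers) ** 2 / (2 * width ** 2)) * alive

# Continuous learning: a delta rule keeps adapting the readout weights.
w = np.zeros(n_neurons)

def train(steps, lr=0.05):
    global w
    for _ in range(steps):
        x = rng.uniform(-1, 1)
        a = encode(x)
        w = w + lr * (x ** 2 - a @ w) * a     # learn the target y = x^2

def mse():
    xs = np.linspace(-1, 1, 201)
    return np.mean([(x ** 2 - encode(x) @ w) ** 2 for x in xs])

train(3000)
err_healthy = mse()
alive[rng.choice(n_neurons, 10, replace=False)] = False   # neurons die
err_damaged = mse()
train(3000)                                   # learning never stopped
err_recovered = mse()
```

Because learning continues after the damage, the surviving neurons' weights re-tune to cover the gaps, and the readout error drops back toward its healthy level.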
Resonate and Fire Neuron with Fixed Magnetic Skyrmions
In the brain, the membrane potentials of many neurons oscillate in a
subthreshold, damped fashion, and these neurons fire when excited by an input
frequency that nearly equals their eigenfrequency. In this work, we investigate theoretically
the artificial implementation of such "resonate-and-fire" neurons by utilizing
the magnetization dynamics of a fixed magnetic skyrmion in the free layer of a
magnetic tunnel junction (MTJ). To realize firing of this nanomagnetic
implementation of an artificial neuron, we propose to employ voltage control of
magnetic anisotropy or voltage generated strain as an input (spike or
sinusoidal) signal, which modulates the perpendicular magnetic anisotropy
(PMA). This results in continual expansion and shrinking (i.e. breathing) of a
skyrmion core that mimics the subthreshold oscillation. Any subsequent input
pulse having an interval close to the breathing period or a sinusoidal input
close to the eigenfrequency drives the magnetization dynamics of the fixed
skyrmion in a resonant manner. The time varying electrical resistance of the
MTJ layer due to this resonant oscillation of the skyrmion core is used to
drive a Complementary Metal Oxide Semiconductor (CMOS) buffer circuit, which
produces spike outputs. By rigorous micromagnetic simulation, we investigate
the interspike timing dependence and response to different excitatory and
inhibitory incoming input pulses. Finally, we show that such resonate-and-fire
neurons have potential applications in coupled nanomagnetic-oscillator-based
associative memory arrays.
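As a toy illustration (not the micromagnetic model used in the paper), the skyrmion breathing mode can be caricatured as a damped harmonic oscillator with a firing threshold. All parameters below are invented; the sketch only demonstrates the resonate-and-fire behavior: pulses spaced near the eigenperiod accumulate into a spike, off-resonant pulses do not:

```python
import numpy as np

# Caricature of the breathing mode as a damped harmonic oscillator with a
# firing threshold; all parameters are hypothetical.
dt, omega, gamma, threshold = 0.01, 2 * np.pi, 0.3, 0.6

def simulate(input_times, t_end=10.0):
    x = v = 0.0                        # core deviation and its velocity
    spikes = []
    kicks = {round(t / dt) for t in input_times}
    for i in range(int(t_end / dt)):
        drive = 200.0 if i in kicks else 0.0     # brief input pulse
        v += (-2 * gamma * v - omega ** 2 * x + drive) * dt
        x += v * dt
        if x > threshold:              # buffer emits a spike; state resets
            spikes.append(i * dt)
            x = v = 0.0
    return spikes

# Pulses spaced at the eigenperiod (~1.0 here) resonate and trigger a spike;
# the same pulses at an off-resonant spacing stay subthreshold.
resonant = simulate([1.0, 2.0, 3.0, 4.0])
off_beat = simulate([1.0, 1.4, 1.8, 2.2])
```

The resonant train crosses the threshold after a few pulses, while the off-beat train partially cancels itself and never fires, mirroring the interspike-timing dependence studied in the abstract.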
Nano-oscillator-based classification with a machine learning-compatible architecture
Pattern classification architectures leveraging the physics of coupled
nano-oscillators have been demonstrated as promising alternative computing
approaches, but lack effective learning algorithms. In this work, we propose a
nano-oscillator based classification architecture where the natural frequencies
of the oscillators are learned linear combinations of the inputs, and define an
offline learning algorithm based on gradient back-propagation. Our results show
significant classification improvements over a related approach with online
learning. We also compare our architecture with a standard neural network on a
simple machine learning case, which suggests that our approach is economical in
terms of the number of adjustable parameters. The introduced architecture is also
compatible with existing nano-technologies: the architecture does not require
changes in the coupling between nano-oscillators, and it is tolerant to
oscillator phase noise.
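One way to picture the learning scheme in software is with a heavily simplified surrogate: the oscillator dynamics are replaced by a resonance-shaped score around a reference frequency, the data are synthetic, and nothing here reproduces the paper's actual model. What it does show is the abstract's core idea: natural frequencies are learned linear combinations of the inputs, trained offline by gradient back-propagation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 2-class data (hypothetical; the paper's benchmarks differ).
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Each class owns one oscillator whose natural frequency is a learned
# linear combination of the inputs; the oscillator whose frequency lands
# closest to a reference f0 "wins" (a crude surrogate for synchronization).
n_cls, f0, lr = 2, 1.0, 0.1
W = rng.standard_normal((n_cls, 2)) * 0.1
b = np.zeros(n_cls)

def forward(X):
    F = X @ W.T + b                        # natural frequencies per class
    S = -(F - f0) ** 2                     # resonance score
    E = np.exp(S - S.max(axis=1, keepdims=True))
    return F, E / E.sum(axis=1, keepdims=True)

def loss(P):
    return -np.mean(np.log(P[np.arange(len(y)), y]))

loss_start = loss(forward(X)[1])
for _ in range(500):                       # offline gradient back-propagation
    F, P = forward(X)
    G = (P - np.eye(n_cls)[y]) / len(X)    # dL/dS (softmax cross-entropy)
    dF = G * (-2 * (F - f0))               # chain rule through the score
    W -= lr * dF.T @ X
    b -= lr * dF.sum(axis=0)

loss_end = loss(forward(X)[1])
acc = (forward(X)[1].argmax(axis=1) == y).mean()
```

In the paper's setting the forward pass would be the physical frequency-locking dynamics of coupled nano-oscillators; only the outer gradient loop corresponds to the proposed offline learning algorithm.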