Spatio-temporal Learning with Arrays of Analog Nanosynapses
Emerging nanodevices such as resistive memories are being considered for
hardware realizations of a variety of artificial neural networks (ANNs),
including highly promising online variants of the learning approaches known as
reservoir computing (RC) and the extreme learning machine (ELM). We propose an
RC/ELM inspired learning system built with nanosynapses that performs both
on-chip projection and regression operations. To address time-dynamic tasks,
the hidden neurons of our system perform spatio-temporal integration and can be
further enhanced with variable sampling or multiple activation windows. We
detail the system and show its use in conjunction with a highly analog
nanosynapse device on a standard task with intrinsic timing dynamics: the TI-46
battery of spoken digits. The system achieves nearly perfect (99%) accuracy at
sufficient hidden layer size, which compares favorably with software results.
In addition, the model is extended to a larger dataset, the MNIST database of
handwritten digits. By translating the database into the time domain and using
variable integration windows, up to 95% classification accuracy is achieved. In
addition to an intrinsically low-power programming style, the proposed
architecture learns very quickly and can easily be converted into a spiking
system with negligible loss in performance, all features that confer
significant energy efficiency.
Comment: 6 pages, 3 figures. Presented at the 2017 IEEE/ACM Symposium on Nanoscale Architectures (NANOARCH).
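The spatio-temporal integration attributed to the hidden neurons above can be pictured as a leaky integration of a fixed random input projection. The following is a minimal illustrative sketch; the leak-rate form, layer size, and all names are assumptions for illustration, not details taken from the paper:

```python
import math
import random

def integrate(inputs, n_hidden=8, leak=0.3, seed=0):
    # fixed random projection (never trained), as in RC/ELM approaches
    rng = random.Random(seed)
    w_in = [[rng.uniform(-1, 1) for _ in range(len(inputs[0]))]
            for _ in range(n_hidden)]
    h = [0.0] * n_hidden
    for x in inputs:
        for i in range(n_hidden):
            proj = sum(w * xj for w, xj in zip(w_in[i], x))
            # leaky spatio-temporal integration of the projected input
            h[i] = (1 - leak) * h[i] + leak * math.tanh(proj)
    return h  # final integrated state, fed to a trained linear readout

state = integrate([[0.1, 0.5], [0.9, -0.2], [0.4, 0.4]])
```

Only the linear readout applied to such a state would be trained, which is what makes this family of systems learn quickly.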
Neuromorphic, Digital and Quantum Computation with Memory Circuit Elements
Memory effects are ubiquitous in nature and the class of memory circuit
elements - which includes memristors, memcapacitors and meminductors - shows
great potential to understand and simulate the associated fundamental physical
processes. Here, we show that such elements can also be used in electronic
schemes mimicking biologically-inspired computer architectures, performing
digital logic and arithmetic operations, and can expand the capabilities of
certain quantum computation schemes. In particular, we discuss a few
examples where the concept of memory elements is relevant to the realization of
associative memory in neuronal circuits, spike-timing-dependent plasticity of
synapses, and digital and field-programmable quantum computing.
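The memristor behaviour underlying these schemes is often illustrated with a linear dopant-drift model: the resistance interpolates between two extremes via an internal state that drifts with the charge passed through the device. A minimal sketch, with parameter values that are illustrative assumptions rather than values from the text:

```python
def memristance(w, r_on=100.0, r_off=16000.0):
    # resistance interpolates between R_on (w = 1) and R_off (w = 0)
    return w * r_on + (1.0 - w) * r_off

def drift(w, current, dt, mobility=10.0):
    # internal state moves in proportion to the charge passed,
    # clipped to the physical range [0, 1]
    return min(max(w + mobility * current * dt, 0.0), 1.0)

w = 0.1
for _ in range(100):
    w = drift(w, current=1e-3, dt=1.0)   # sustained positive current
low_r = memristance(w)                   # device driven toward R_on
```

The same state variable is what lets a memristor act as an analog memory in the associative-memory and plasticity circuits discussed above.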
Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses
Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low-cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, rapidly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, which combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability, and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations.
Magnetic materials : fundamental synthesis of two-dimensional magnets and applications to neuromorphic computing
Two-dimensional magnetic materials hold the promise of helping to achieve beyond-CMOS computing tasks. 2D magnetic materials can be used in fabricating magnetic tunnel junctions with higher tunnel magnetoresistance, which can then be applied to making new neuromorphic computing architectures primarily geared towards artificial intelligence and machine learning applications. In this work I summarize my synthesis and investigation of the properties of Cr₂C, which belongs to the group of 2D transition metal carbides or nitrides called MXenes. Cr₂C has been predicted to have intrinsic half-metallic ferromagnetic behavior. This magnetic behavior can be tuned based on the level of functionalization of the surface of the material. I show different parameters, such as etchant, reaction temperature, and molar concentration, that I have tuned in order to optimally derive Cr₂C from its parent MAX phase Cr₂AlC by removing the Al layer with a fluoride salt and hydrochloric acid. I also show how magnetic tunnel junctions (MTJs), which are two ferromagnetic layers with a tunnel barrier in between, can be used to make a synapse, a neuromorphic computing primitive. The synapse circuit that I have proposed displays spike-timing-dependent plasticity, which is an integral component of learning and memory in the brain. I show how different delay conditions between the presynaptic signal and the postsynaptic signal lead to currents of different magnitudes flowing through the ferromagnetic layer of the magnetic tunnel junction synapse. I also show how these currents move the domain wall, both in micromagnetic simulation and using a domain wall MTJ SPICE model that has been developed. I went on to wire four of these synapses together to observe the temporal dynamics of the system. My results show that the lower the delay between the presynaptic pulse and the postsynaptic pulse, the higher the current through the MTJ synapse and hence the larger the domain wall displacement.
These studies pave the way for empirical understanding of the Cr₂C MXene, including its potential magnetic properties, as well as performing online machine learning classification tasks with arrays of magnetic synapses.
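The delay-dependent behaviour described, where smaller pre/post spike delays produce larger changes, follows the classic spike-timing-dependent plasticity window. A hedged sketch with a standard exponential window; the time constants and amplitudes are illustrative assumptions, not values from the thesis:

```python
import math

def stdp_dw(dt_ms, a_plus=1.0, a_minus=-0.8, tau_ms=20.0):
    # dt_ms = t_post - t_pre: post firing after pre -> potentiation,
    # post firing before pre -> depression; magnitude decays with |dt|
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return a_minus * math.exp(dt_ms / tau_ms)

small = stdp_dw(5.0)    # short delay -> large weight change
large = stdp_dw(40.0)   # long delay  -> small weight change
```

In the MTJ synapse, the "weight change" is realized physically as domain wall displacement driven by the delay-dependent current.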
Applications of memristors in conventional analogue electronics
This dissertation presents the steps employed to activate and utilise analogue memristive devices in conventional analogue circuits and beyond.
TiO2 memristors are mainly utilised in this study, and their large variability in operation in between similar devices is identified.
A specialised memristor characterisation instrument is designed and built to mitigate this issue and to allow access to large numbers of devices at a time.
Its performance is quantified against linear resistors, crossbars of linear resistors, stand-alone memristive elements and crossbars of memristors.
This platform allows for a wide range of different pulsing algorithms to be applied on individual devices, or on crossbars of memristive elements, and is used throughout this dissertation.
Different ways of achieving analogue resistive switching from any device state are presented.
Results of these are used to devise a state-of-the-art biasing parameter finder which automatically extracts pulsing parameters that induce repeatable analogue resistive switching.
I-V measurements taken during analogue resistive switching are then used to model the internal atomic structure of two devices, via fits to the Simmons tunnelling barrier model.
These reveal that voltage pulses modulate a nano-tunnelling gap along a conical shape.
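The reason the gap is recoverable from I-V fits is that tunnelling current falls off roughly exponentially with barrier width. A simplified low-bias sketch of that dependence; the prefactor and decay constant are arbitrary assumptions, not the full Simmons expressions used in the dissertation:

```python
import math

def tunnel_current(voltage, gap_nm, k=10.0, g0=1e-3):
    # low-bias approximation: ohmic in voltage, exponentially
    # suppressed by the tunnelling gap width
    return g0 * voltage * math.exp(-k * gap_nm)

i_narrow = tunnel_current(0.1, gap_nm=0.5)
i_wide = tunnel_current(0.1, gap_nm=1.0)   # wider gap, much less current
```

This strong sensitivity is also why small voltage-pulse-induced changes in the gap translate into large, measurable resistance changes.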
Further retention measurements are performed which reveal that under certain conditions, TiO2 memristors become volatile at short time scales.
This volatile behaviour is then implemented into a novel SPICE volatile memristor model.
These characterisation methods of solid-state devices allowed for inclusion of TiO2 memristors in practical electronic circuits.
Firstly, in the context of large analogue resistive crossbars, a crosspoint reading method is analysed and improved via a 3-step technique.
Its scaling performance is then quantified via SPICE simulations.
Next, the observed volatile dynamics of memristors are exploited in two separate sequence detectors, with applications in neuromorphic engineering.
Finally, the memristor as a programmable resistive weight is exploited to synthesise a memristive programmable gain amplifier and a practical memristive automatic gain control circuit.
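The programmable-gain idea can be sketched with a standard inverting op-amp stage whose feedback element is a memristor: the closed-loop gain -R_mem/R_in tracks whichever resistance state has been programmed. The resistance values below are illustrative assumptions, not device data from the dissertation:

```python
def inverting_gain(r_mem_ohm, r_in_ohm=10_000.0):
    # ideal inverting amplifier: gain set by the feedback/input ratio
    return -r_mem_ohm / r_in_ohm

states = [5_000.0, 10_000.0, 20_000.0]       # programmed memristor states
gains = [inverting_gain(r) for r in states]  # [-0.5, -1.0, -2.0]
```

Reprogramming the memristor thus changes the amplifier's gain without any topology change, which is the basis of the automatic gain control circuit as well.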
Spintronics-based Architectures for non-von Neumann Computing
The scaling of transistor technology in the last few decades has significantly impacted our lives. It has given birth to different kinds of computational workloads which are becoming increasingly relevant. Some of the most prominent examples are Machine Learning based tasks such as image classification and pattern recognition, which use Deep Neural Networks that are highly computation- and memory-intensive. The traditional, general-purpose architectures that we use today typically exhibit high energy and latency on such computations. This, together with the apparent end of Moore's law of scaling, has led researchers to look for devices beyond CMOS and for non-conventional computational paradigms. In this dissertation, we focus on a spintronic device, the Magnetic Tunnel Junction (MTJ), which has demonstrated potential as cache and embedded memory. We look into how the MTJ can be used beyond memory and deployed in various non-conventional, non-von Neumann architectures to accelerate computations or make them energy efficient.
First, we investigate Stochastic Computing (SC) and show how MTJs can be used to build energy-efficient Neural Network (NN) hardware in this domain. SC is primarily bit-serial computing which requires only simple logic gates for arithmetic operations. We explore the use of MTJs as Stochastic Number Generators (SNGs) by exploiting their probabilistic switching characteristics and propose an energy-efficient MTJ-SNG. It is deployed as part of an NN hardware implemented in the SC domain. Its characteristics allow for achieving further energy efficiency through NN weight approximation, towards which we develop an optimization problem.
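The reason SC needs only simple gates can be seen in its multiplication: two bitstreams whose '1'-probabilities encode values in [0, 1] are ANDed bitwise, and the output probability approximates the product. In the proposed hardware an MTJ-based SNG would generate the bits via probabilistic switching; in this hedged sketch a seeded software generator stands in for it:

```python
import random

def bitstream(p, n, rng):
    # stand-in for an MTJ SNG: emit 1 with probability p
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(pa, pb, n=10_000, seed=0):
    rng = random.Random(seed)
    a = bitstream(pa, n, rng)
    b = bitstream(pb, n, rng)
    # a single AND gate per bit implements the multiply
    return sum(x & y for x, y in zip(a, b)) / n

approx = sc_multiply(0.5, 0.4)   # close to 0.5 * 0.4 = 0.2
```

Accuracy grows with stream length, which is the usual SC trade-off between latency and precision.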
Next, we turn our attention to analog computing and propose a method for training analog Neural Network hardware. We consider a resistive MTJ crossbar architecture for representing an NN layer, since it is capable of in-memory computing and performs matrix-vector multiplications with O(1) time complexity. We propose on-chip training of the NN crossbar since, first, it can leverage the parallelism in the crossbar to perform weight updates; second, it can take device variations into account; and third, it avoids the large sneak currents in transistor-less crossbars which can cause undesired weight changes.
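The O(1) matrix-vector product comes from physics: applied row voltages multiply cell conductances (Ohm's law), and each column wire sums the resulting currents (Kirchhoff's current law), so every dot product happens simultaneously. A software sketch of what the analog array computes, with illustrative values:

```python
def crossbar_mvm(conductances, voltages):
    # conductances[i][j]: cell at row i, column j (siemens)
    # voltages[i]: voltage driven onto row i
    # each column current is the sum of G[i][j] * V[i] over rows
    n_cols = len(conductances[0])
    return [sum(conductances[i][j] * voltages[i]
                for i in range(len(voltages)))
            for j in range(n_cols)]

g = [[1.0, 2.0],
     [3.0, 4.0]]
i_out = crossbar_mvm(g, [1.0, 0.5])   # column currents [2.5, 4.0]
```

In hardware all columns settle at once, which is why the multiply is constant-time regardless of layer size.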
Lastly, we propose an MTJ-based non-von Neumann hardware platform for solving combinatorial optimization problems, which are NP-hard. We adopt the Ising model for encoding such problems and solve them with simulated annealing. We let MTJs represent Ising units, design a scalable circuit capable of performing Ising computations, and develop a reconfigurable architecture to which any NP-hard problem can be mapped. We also suggest methods to take into account the non-idealities present in the proposed hardware.
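The Ising/simulated-annealing flow described above can be sketched in software: spins (software variables here, MTJ units in the proposed hardware) flip under a temperature-dependent acceptance rule so the system settles into a low-energy configuration of E = -Σ J_ij s_i s_j. The couplings and annealing schedule below are illustrative assumptions:

```python
import math
import random

def ising_energy(j, s):
    # E = -sum over pairs of J_ij * s_i * s_j
    return -sum(j[a][b] * s[a] * s[b]
                for a in range(len(s)) for b in range(a + 1, len(s)))

def anneal(j, steps=2000, t_start=5.0, t_end=0.01, seed=1):
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(len(j))]
    e = ising_energy(j, s)
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)  # cooling schedule
        i = rng.randrange(len(s))
        s[i] = -s[i]                      # propose a single-spin flip
        e_new = ising_energy(j, s)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                     # accept the flip
        else:
            s[i] = -s[i]                  # reject: undo the flip
    return s, e

# ferromagnetic couplings favour aligned spins (ground energy -3)
j = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
spins, energy = anneal(j)
```

Mapping an NP-hard instance then amounts to choosing the couplings J so that its optima coincide with the Ising ground states.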