Neuro-memristive Circuits for Edge Computing: A review
The volume, veracity, variability, and velocity of data produced by the
ever-increasing network of sensors connected to the Internet pose
challenges for the power management, scalability, and sustainability of
cloud computing infrastructure. Increasing the data processing
capability of edge computing devices at lower power requirements can
reduce several overheads for cloud computing solutions. This paper
provides a review of neuromorphic CMOS-memristive architectures that can
be integrated into edge computing devices. We discuss why neuromorphic
architectures are useful for edge devices, and show the advantages,
drawbacks, and open problems in the field of neuro-memristive circuits
for edge computing.
Versatile stochastic dot product circuits based on nonvolatile memories for high performance neurocomputing and neurooptimization
The key operation in stochastic neural networks, which have become the state-of-the-art approach for solving problems in machine learning, information theory, and statistics, is the stochastic dot product. While there have been many demonstrations of dot-product circuits and, separately, of stochastic neurons, an efficient hardware implementation combining both functionalities is still missing. Here we report compact, fast, energy-efficient, and scalable stochastic dot-product circuits based on either passively integrated metal-oxide memristors or embedded floating-gate memories. The circuits' high performance is due to their mixed-signal implementation, while efficient stochastic operation is achieved by utilizing the circuit's noise, intrinsic and/or extrinsic to the memory cell array. The dynamic scaling of weights, enabled by analog memory devices, allows for efficient realization of different annealing approaches to improve functionality. The proposed approach is experimentally verified for two representative applications, namely a neural network solving a four-node graph-partitioning problem and a Boltzmann machine with 10 input and 8 hidden neurons.
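As a concrete illustration of the stochastic dot-product operation described in this abstract, here is a minimal behavioral sketch in Python/NumPy. The analog dot product and the circuit noise of the hardware are both emulated numerically; the sigmoidal firing probability, the `temperature` parameter, and the weight-scaling annealing schedule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_dot_product(w, x, temperature=1.0):
    """Behavioral stand-in for a stochastic dot-product stage.

    In the hardware described above, the dot product is computed in
    analog (memristor or floating-gate arrays) and the randomness comes
    from circuit noise; here both are emulated numerically, with
    `temperature` standing in for the effective noise magnitude.
    """
    activation = np.dot(w, x)
    p_fire = 1.0 / (1.0 + np.exp(-activation / temperature))  # sigmoidal
    return rng.random() < p_fire  # stochastic binary output

# Annealing via dynamic weight scaling: multiplying all weights by a
# gain g is equivalent to dividing the temperature by g, which is one
# way analog memory devices can realize an annealing schedule.
w = np.array([0.5, -0.3, 0.8])
x = np.array([1.0, 1.0, -1.0])
for gain in (0.5, 1.0, 2.0, 4.0):  # increasing gain = "cooling"
    fires = [stochastic_dot_product(gain * w, x) for _ in range(1000)]
    print(f"gain={gain:.1f}  P(fire)~{np.mean(fires):.2f}")
```

As the gain rises, the output distribution sharpens toward a deterministic threshold, which is the behavior an annealing schedule exploits.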
A neuromorphic systems approach to in-memory computing with non-ideal memristive devices: From mitigation to exploitation
Memristive devices represent a promising technology for building neuromorphic
electronic systems. In addition to their compactness and non-volatility
features, they are characterized by computationally relevant physical
properties, such as state-dependence, non-linear conductance changes, and
intrinsic variability in both their switching threshold and conductance values,
that make them ideal devices for emulating the bio-physics of real synapses. In
this paper we present a spiking neural network architecture that supports the
use of memristive devices as synaptic elements, and propose mixed-signal
analog-digital interfacing circuits which mitigate the effect of variability in
their conductance values and exploit the variability in their switching
threshold to implement stochastic learning. The effect of device
variability is mitigated by using pairs of memristive devices configured in a
complementary push-pull mechanism and interfaced to a current-mode normalizer
circuit. The stochastic learning mechanism is obtained by mapping the desired
change in synaptic weight into a corresponding switching probability that is
derived from the intrinsic stochastic behavior of memristive devices. We
demonstrate the features of the CMOS circuits and apply the proposed
architecture to a standard hand-written digit classification benchmark
based on the MNIST data-set. We evaluate the performance of the proposed
approach on this benchmark using behavioral-level spiking neural network
simulations, showing both the reduction in conductance variability
produced by the current-mode normalizer circuit and the increase in
performance as a function of the number of memristive devices used in
each synapse.
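A minimal behavioral sketch (Python/NumPy) of the two mechanisms this abstract describes: a complementary push-pull memristor pair read out through a normalizing ratio, and stochastic learning that maps a desired weight change onto a switching probability. The binary conductance values, the normalization formula, and the `p_max` scaling are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
G_ON, G_OFF = 1.0, 0.1        # assumed binary conductance states (a.u.)

class PushPullSynapse:
    """Illustrative model of one differential memristor pair.

    The effective weight is the normalized difference of the two
    conductances, mimicking the current-mode normalizer circuit;
    common-mode conductance variability largely cancels in this ratio.
    """
    def __init__(self):
        self.g_plus = rng.choice([G_ON, G_OFF])
        self.g_minus = rng.choice([G_ON, G_OFF])

    @property
    def weight(self):
        return (self.g_plus - self.g_minus) / (self.g_plus + self.g_minus)

    def stochastic_update(self, delta_w, p_max=0.5):
        """Map a desired weight change onto a switching probability.

        Instead of an analog increment, each programming pulse switches
        the pair fully with probability proportional to |delta_w|,
        exploiting the intrinsic switching stochasticity.
        """
        p_switch = min(abs(delta_w) * p_max, 1.0)
        if rng.random() < p_switch:
            if delta_w > 0:
                self.g_plus, self.g_minus = G_ON, G_OFF   # potentiate
            else:
                self.g_plus, self.g_minus = G_OFF, G_ON   # depress

syn = PushPullSynapse()
for dw in (0.8, -0.2, -0.9):
    syn.stochastic_update(dw)
    print(f"requested dw={dw:+.1f}  ->  weight={syn.weight:+.2f}")
```

Averaged over many presentations, the expected weight change tracks the requested one, which is why the stochastic rule can stand in for analog updates.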
Spin-Based Neuron Model with Domain Wall Magnets as Synapse
We present an artificial neural network design using spin devices that
achieves ultra-low-voltage operation, low power consumption, high speed,
and high integration density. We employ spin-torque-switched
nano-magnets to model neurons, and domain wall magnets as compact,
programmable synapses. The spin-based neuron-synapse units operate
locally at an ultra-low supply voltage of 30 mV, resulting in low
computation power. CMOS-based inter-neuron communication is employed to
realize network-level functionality. We corroborate circuit operation
with physics-based models developed for the spin devices. Simulation
results for character recognition as a benchmark application show 95%
lower power consumption compared to a 45 nm CMOS design.
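A toy behavioral model (Python) of the architecture this abstract sketches: the domain wall position along a magnet strip acts as a multi-level programmable synaptic weight, and a spin-torque neuron "fires" when the weighted input exceeds a critical switching current. All numerical values (number of levels, threshold) are assumptions for illustration, not device parameters from the paper, and the CMOS inter-neuron communication layer is not modeled.

```python
import numpy as np

class DomainWallSynapse:
    """Toy model: the domain wall position sets the fraction of the
    strip in the low-resistance state, giving a multi-level weight.
    The number of levels is an assumed, illustrative value."""
    def __init__(self, levels=16):
        self.levels = levels
        self.position = levels // 2          # wall starts mid-strip

    def program(self, step):
        # Current pulses nudge the domain wall by discrete notches.
        self.position = int(np.clip(self.position + step, 0, self.levels))

    @property
    def weight(self):
        return self.position / self.levels   # normalized conductance

class SpinTorqueNeuron:
    """Toy model: the nanomagnet's free layer flips (the neuron fires)
    when the summed synaptic current exceeds a critical switching
    current; the threshold here is an assumed parameter."""
    def __init__(self, critical_current=0.5):
        self.critical_current = critical_current

    def fire(self, synapses, inputs):
        total = sum(s.weight * x for s, x in zip(synapses, inputs))
        return total > self.critical_current

synapses = [DomainWallSynapse() for _ in range(3)]
synapses[0].program(+6)                      # strengthen one synapse
neuron = SpinTorqueNeuron()
print(neuron.fire(synapses, inputs=[1.0, 0.2, 0.1]))
```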
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory
and processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power through the increase of
cores within a digital processor, neuromorphic engineers and scientists can
complement this need by building processor architectures where memory is
distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems that
implement more biologically realistic models of neurons and synapses,
together with a suite of adaptation and learning mechanisms analogous to
those found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Current-Mode Techniques for the Implementation of Continuous- and Discrete-Time Cellular Neural Networks
This paper presents a unified, comprehensive approach
to the design of continuous-time (CT) and discrete-time
(DT) cellular neural networks (CNN) using CMOS current-mode
analog techniques. The net input signals are currents, instead of
voltages as in previous approaches, thus avoiding the need for
dedicated current-to-voltage interfaces in image processing tasks with
photosensor devices. Outputs may be either
currents or voltages. Cell design relies on exploitation of current
mirror properties for the efficient implementation of both linear
and nonlinear analog operators. These cells are simpler and
easier to design than those found in previously reported CT
and DT-CNN devices. Basic design issues are covered, together with
discussions of the influence of nonidealities, advanced circuit design
issues, and design-for-manufacturability considerations based on
statistical analysis. Three prototypes have been designed for 1.6-μm
n-well CMOS technologies.
One is discrete-time and can be reconfigured via local logic for
noise removal, feature extraction (borders and edges), shadow
detection, hole filling, and connected component detection (CCD)
on a rectangular grid with unity neighborhood radius. The other
two prototypes are continuous-time and fixed-template: one for CCD and
the other for noise removal. Experimental results are given
illustrating the performance of these prototypes.
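For readers unfamiliar with CNN dynamics, the following Python/SciPy sketch iterates a generic discrete-time cellular neural network update with the standard Chua-Yang saturation output, y = 0.5(|x+1| - |x-1|). The templates shown (self-feedback only, a Laplacian-like control template B, negative bias) are illustrative edge-extraction-style values, not the templates implemented in the paper's prototypes.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_output(x):
    """Chua-Yang piecewise-linear output: y = 0.5 * (|x + 1| - |x - 1|)."""
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def dt_cnn(u, A, B, z, steps=30):
    """Iterate the discrete-time CNN state update
    x[k+1] = A * y[k] + B * u + z, where * is 2-D convolution over the
    unity-radius neighborhood and u is the (constant) input image."""
    x = np.zeros_like(u, dtype=float)
    feedforward = convolve2d(u, B, mode="same") + z  # computed once
    for _ in range(steps):
        x = convolve2d(cnn_output(x), A, mode="same") + feedforward
    return cnn_output(x)

# Illustrative edge-extraction-style templates (assumed values, not the
# templates of the paper's prototypes): self-feedback only, a
# Laplacian-like control template B, and a bias toward the -1 state.
A = np.zeros((3, 3))
A[1, 1] = 1.0
B = np.array([[-1.0, -1.0, -1.0],
              [-1.0,  8.0, -1.0],
              [-1.0, -1.0, -1.0]])
u = -np.ones((8, 8))      # white (-1) background...
u[2:6, 2:6] = 1.0         # ...with a black (+1) square
y = dt_cnn(u, A, B, z=-1.0)
print((y > 0).astype(int))  # only the border of the square remains black
```

Because B sums to zero over uniform regions, interior and background pixels settle to the white state while pixels whose neighborhood straddles the contrast boundary stay black, which is the edge-extraction behavior the templates target.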