Threshold Logic in a Flash
This paper describes a novel design of a threshold logic gate (a binary
perceptron) and its implementation as a standard cell. This new cell structure,
referred to as flash threshold logic (FTL), uses floating gate (flash)
transistors to realize the weights associated with a threshold function. The
threshold voltages of the flash transistors serve as a proxy for the weights.
An FTL cell can be equivalently viewed as a multi-input, edge-triggered
flip-flop which computes a threshold function on a clock edge. Consequently, it
can be used in the automatic synthesis of ASICs. The use of flash transistors
in the FTL cell allows programming of the weights after fabrication, thereby
preventing discovery of its function by a foundry or by reverse engineering.
This paper focuses on the design and characteristics of the FTL cell. We
present a novel method for programming the weights of an FTL cell for a
specified threshold function using a modified perceptron learning algorithm.
The algorithm is further extended to select weights to maximize the robustness
of the design in the presence of process variations. The FTL circuit was
designed in a 40 nm technology, and simulations with layout-extracted
parasitics demonstrate significant improvements in area (79.7%), power
(61.1%), and performance (42.5%) when compared to the equivalent
implementations of the same function in conventional static CMOS design. Weight
selection targeting robustness is demonstrated using Monte Carlo simulations.
The paper also shows how FTL cells can be used to fix timing errors after
fabrication.
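The abstract above mentions programming FTL weights with a modified perceptron learning algorithm. As a rough illustration only (the paper's actual algorithm, its robustness extension, and the mapping to flash threshold voltages are not reproduced here), the following sketch shows classical perceptron learning finding integer weights and a threshold that realize a given threshold function, using 3-input majority as an assumed example target:

```python
# Illustrative sketch: classical perceptron learning for a threshold function.
# Not the paper's modified algorithm; the target function and update rule here
# are standard textbook choices chosen for demonstration.
from itertools import product

def majority(x):
    # Assumed example target: 3-input majority, a threshold function
    # (output 1 when at least two inputs are 1).
    return 1 if sum(x) >= 2 else 0

def learn_threshold(target, n, epochs=100):
    """Find weights w and threshold T with f(x) = [sum(w_i*x_i) >= T]."""
    w = [0] * n   # weights (in an FTL cell these would map to flash Vt's)
    T = 0         # threshold
    for _ in range(epochs):
        converged = True
        for x in product([0, 1], repeat=n):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= T else 0
            err = target(x) - y
            if err != 0:
                converged = False
                # Perceptron update: move weights toward the target output;
                # the threshold moves in the opposite direction.
                w = [wi + err * xi for wi, xi in zip(w, x)]
                T -= err
        if converged:
            break
    return w, T

w, T = learn_threshold(majority, 3)
```

Because majority is linearly separable, the loop converges to a valid weight/threshold pair; the paper's extension would additionally bias this choice toward weights that remain correct under process variation.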
A Survey of Neuromorphic Computing and Neural Networks in Hardware
Neuromorphic computing has come to refer to a variety of brain-inspired
computers, devices, and models that contrast the pervasive von Neumann computer
architecture. This biologically inspired approach has created highly connected
synthetic neurons and synapses that can be used to model neuroscience theories
as well as solve challenging machine learning problems. The promise of the
technology is to create a brain-like ability to learn and adapt, but the
technical challenges are significant, starting with an accurate neuroscience
model of how the brain works, to finding materials and engineering
breakthroughs to build devices to support these models, to creating a
programming framework so the systems can learn, to creating applications with
brain-like capabilities. In this work, we provide a comprehensive survey of the
research and motivations for neuromorphic computing over its history. We begin
with a 35-year review of the motivations and drivers of neuromorphic computing,
then look at the major research areas of the field, which we define as
neuro-inspired models, algorithms and learning approaches, hardware and
devices, supporting systems, and finally applications. We conclude with a broad
discussion on the major research topics that need to be addressed in the coming
years to see the promise of neuromorphic computing fulfilled. The goals of this
work are to provide an exhaustive review of the research conducted in
neuromorphic computing since the inception of the term, and to motivate further
work by illuminating gaps in the field where new research is needed.