Redesigning Commercial Floating-Gate Memory for Analog Computing Applications
We have modified a commercial NOR flash memory array to enable high-precision
tuning of individual floating-gate cells for analog computing applications. The
modified array area per cell in a 180 nm process is about 1.5 μm². While this
area is approximately twice the original cell size, it is still at least an
order of magnitude smaller than in the state-of-the-art analog circuit
implementations. The new memory cell arrays have been successfully tested, in
particular confirming that each cell may be automatically tuned, with ~1%
precision, to any desired subthreshold readout current value within an almost
three-orders-of-magnitude dynamic range, even using an unoptimized tuning
algorithm. Preliminary results for a four-quadrant vector-by-matrix multiplier,
implemented with the modified memory array gate-coupled with additional
peripheral floating-gate transistors, show highly linear transfer
characteristics over a broad range of input currents.
Comment: 4 pages, 6 figures
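The closed-loop tuning described above can be sketched in a few lines. The following is a minimal illustrative model, not the authors' (unoptimized) algorithm: SimCell, its exponential current law, and the log-domain corrective pulse are all assumptions standing in for the real device and its program/erase pulses.

```python
import math
import random

class SimCell:
    """Toy floating-gate cell: readout current is exponential in the
    stored charge, mimicking subthreshold behavior. All parameters are
    illustrative assumptions, not the paper's device model."""
    def __init__(self):
        self.q = random.uniform(-1.0, 1.0)    # normalized floating-gate charge

    def read_current(self):
        return 1e-9 * math.exp(3.0 * self.q)  # amps, arbitrary scaling

    def pulse(self, dq):
        self.q += dq                          # tunneling (dq > 0) or injection (dq < 0)

def tune_cell(cell, target_a, tol=0.01, max_steps=500):
    """Program-and-verify loop: tune readout current to within tol (~1%)."""
    for _ in range(max_steps):
        i = cell.read_current()
        if abs(i - target_a) / target_a <= tol:
            return True
        # Corrective pulse proportional to the log-domain error, since
        # the current is exponential in the charge.
        cell.pulse(-0.1 * math.log(i / target_a))
    return False

print(tune_cell(SimCell(), target_a=5e-9))    # expected: True (converged)
```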
Two Transistor Synapse with Spike Timing Dependent Plasticity
We present a novel two-transistor synapse (“2TS”) that exhibits spike-timing-dependent plasticity (“STDP”). Temporal coincidence of synthetic pre- and postsynaptic action potentials across the 2TS induces localized floating-gate injection and tunneling that result in proportional Hebbian synaptic weight updates. In the absence of correlated pre- and postsynaptic activity, no significant weight updates occur. A compact implementation of the 2TS has been designed, simulated, and fabricated in a commercial 0.5 μm process. Suitable synthetic neural waveforms for symmetric STDP have been derived, and circuit and network operation have been modeled and tested. Simulations agree with theory and with preliminary experimental results.
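The STDP behavior described above can be summarized by a weight-update window. The sketch below uses a generic symmetric exponential window, an assumption for illustration; the paper derives its actual update from the injection and tunneling produced by its synthetic waveforms, and the names stdp_dw, a_plus, and tau_ms are hypothetical.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, tau_ms=20.0):
    """Symmetric STDP window: weight change as a function of the
    pre/post spike time difference dt = t_post - t_pre (ms).
    A simple exponential-coincidence form, assumed for illustration."""
    return a_plus * math.exp(-abs(dt_ms) / tau_ms)

# Coincident spikes give the largest (Hebbian) update; uncorrelated
# activity (large |dt|) gives a negligible update, as in the abstract.
for dt in (0.0, 5.0, 20.0, 100.0):
    print(f"dt = {dt:6.1f} ms -> dw = {stdp_dw(dt):.5f}")
```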
Larger bases and mixed analog/digital neural nets
The paper overviews results dealing with the approximation capabilities of neural networks and bounds on the size of threshold gate circuits. Based on an explicit numerical algorithm for Kolmogorov's superpositions, the authors show that minimum-size neural networks for implementing any Boolean function have the identity function as the activation function. The paper ends with conclusions and several comments on the required precision.
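For reference, the superposition result this line of work builds on is the standard statement of Kolmogorov's theorem (given here in its usual textbook form, not the paper's specific constructive variant): every continuous function of n variables is a superposition of single-variable functions and addition.

```latex
% Kolmogorov's superposition theorem (standard form): for any continuous
% f : [0,1]^n \to \mathbb{R} there exist continuous one-variable functions
% \Phi_q and \psi_{q,p} (the inner \psi_{q,p} independent of f) such that
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\Bigl( \sum_{p=1}^{n} \psi_{q,p}(x_p) \Bigr)
```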
Implementing size-optimal discrete neural networks requires analog circuitry
This paper starts by overviewing results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions, the authors show that Boolean functions can be implemented using neurons having an identity transfer function. Because the size of the network is minimized in this case, it follows that size-optimal solutions for implementing Boolean functions can be obtained using analog circuitry. The paper ends with conclusions and several comments on the required precision.
On automatic synthesis of analog/digital circuits
The paper builds on a recent explicit numerical algorithm for Kolmogorov's superpositions and shows that, in order to synthesize minimum-size (i.e., size-optimal) circuits implementing any Boolean function, the nonlinear activation function of the gates has to be the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size, it follows that size-optimal solutions for implementing arbitrary Boolean functions can be obtained using analog (or mixed analog/digital) circuits. The paper ends with conclusions and several comments.
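The exponential-size claim for classical AND-OR (two-level) implementations can be checked concretely on parity: the true minterms of n-bit parity pairwise differ in at least two bits, so no two of them merge, and any sum-of-products form needs all 2^(n-1) of them. A small counting sketch (a standard fact, not the paper's construction):

```python
from itertools import product

def parity_minterms(n):
    """Count the true minterms of n-bit parity (XOR of all inputs).
    A two-level AND-OR realization needs one product term per true
    minterm, since parity minterms cannot be combined."""
    return sum(1 for bits in product((0, 1), repeat=n) if sum(bits) % 2 == 1)

for n in range(2, 8):
    print(n, parity_minterms(n))   # prints 2^(n-1): 2, 4, 8, 16, 32, 64
```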
On Kolmogorov's superpositions and Boolean functions
The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, the authors show that minimum-size neural networks for implementing any Boolean function have the identity function as the activation function of the neurons. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size (in the worst case), it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. The paper ends with conclusions and several comments on the required precision.
2D neural hardware versus 3D biological ones
This paper presents important limitations of hardware neural nets as opposed to biological (i.e., real) neural nets. The author starts by discussing neural structures and their biological inspirations, mentioning the simplifications that lead to artificial neural nets. The focus then shifts to hardware constraints. The author presents recent results for three different alternatives for implementing neural networks: digital, threshold gate, and analog, relating area and delay to the neurons' fan-in and the weights' precision. On this basis, it is shown why hardware implementations cannot match their biological inspiration in computational power: the mapping onto silicon lacks the third dimension of biological nets, which translates into reduced fan-in and, in turn, reduced precision. The main conclusion is that one is faced with the following alternatives: (1) cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; or (2) investigate solutions that would allow use of the third dimension, e.g., optical interconnections. A rough cost model of the fan-in trade-off is sketched below.
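The area/delay versus fan-in relation mentioned above can be made concrete with a first-order model. This is an illustrative assumption, not the paper's exact bounds: weight storage and multiply-accumulate hardware grow linearly with fan-in times weight precision, while a balanced adder tree gives logarithmic delay.

```python
import math

def digital_neuron_cost(fan_in, precision_bits):
    """Very rough first-order model, assumed for illustration only:
    a digital neuron stores one precision_bits-wide weight per input
    (area ~ fan_in * precision_bits) and sums partial products through
    a balanced adder tree (delay ~ log2(fan_in) levels)."""
    area = fan_in * precision_bits          # weight storage + MAC units
    delay = math.ceil(math.log2(fan_in))    # adder-tree depth
    return area, delay

# Biological neurons reach fan-ins of ~10^3 to 10^4; in a 2D silicon
# layout such fan-ins are costly, which is the paper's point.
for fan_in in (16, 256, 4096):
    print(fan_in, digital_neuron_cost(fan_in, precision_bits=8))
```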
A study on hardware design for high performance artificial neural network by using FPGA and NoC
Waseda University doctoral dissertation record: degree system: new; report number: Ko 3421; type of degree: Doctor of Engineering; date conferred: 2011/9/15; Waseda degree number: Shin 574
Optimal neural computations require analog processors
This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, mentioning the simplifications that lead to artificial neural networks. The focus then shifts to hardware-imposed constraints. They present recent results for three different alternatives for parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. Area and delay are related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and the authors suggest the following two alternatives: (1) cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; or (2) investigate solutions that would allow the use of the third dimension (e.g., optical interconnections).