Time-Dependent Variability in RRAM-based Analog Neuromorphic System for Pattern Recognition
For the first time, this work investigates time-dependent variability (TDV) in RRAMs and its interaction with RRAM-based analog neuromorphic circuits for pattern recognition. It is found that even when the circuits are well trained, TDV can introduce a non-negligible drop in recognition accuracy during operation. The impact of TDV grows when higher resistances are used for the circuit implementation, posing a challenge for future low-power operation. In addition, the impact of TDV cannot be suppressed either by scaling up to more synapses or by increasing the response time, and it thus threatens both real-time and general-purpose applications with high accuracy requirements. A further study of different circuit configurations, operating conditions, and training algorithms provides guidelines for practical hardware implementation.
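The accuracy-drop mechanism described above can be illustrated with a toy model: trained weights are mapped to RRAM conductances, and a multiplicative read-to-read fluctuation stands in for TDV. All values (weight range, conductance scale, fluctuation levels) are assumptions for illustration, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained synaptic weights mapped to RRAM conductances (siemens).
weights = rng.uniform(0.5, 1.0, size=(4, 8))   # normalized weights, assumed
g_max = 1e-4                                   # 10 kOhm on-state, assumed
conductance = weights * g_max

def read_with_tdv(g, sigma_rel):
    """One inference read with multiplicative time-dependent variability."""
    return g * rng.normal(1.0, sigma_rel, size=g.shape)

x = rng.uniform(0.0, 0.2, size=8)              # input voltages (V)
ideal = conductance @ x                        # ideal column currents

for sigma in (0.01, 0.05, 0.10):
    noisy = read_with_tdv(conductance, sigma) @ x
    err = np.max(np.abs(noisy - ideal) / np.abs(ideal))
    print(f"sigma={sigma:.2f}  worst-case current error = {err:.1%}")
```

Even though the stored weights never change, each read produces slightly different column currents, which is how a well-trained network can still misclassify at inference time.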
Ultra-High-density 3D vertical RRAM with stacked JunctionLess nanowires for In-Memory-Computing applications
The von Neumann bottleneck is a clear limitation for data-intensive applications, bringing in-memory computing (IMC) solutions to the fore. Since large data sets are usually stored in nonvolatile memory (NVM), various solutions have been proposed based on emerging memories, such as OxRAM, that rely mainly on the area-hungry one-transistor (1T), one-OxRAM (1R) bit-cell. To tackle this area issue while keeping the programming control provided by the 1T1R bit-cell, we propose to combine gate-all-around stacked junctionless nanowires (1JL) with OxRAM (1R) technology to create a 3D memory pillar with ultrahigh density. Nanowire junctionless transistors have been fabricated, characterized, and simulated to define current conditions for the whole pillar. Finally, based on Simulation Program with Integrated Circuit Emphasis (SPICE) simulations, we successfully demonstrated scouting logic operations across up to three pillar layers, with one operand per layer.
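Scouting logic, as used above, reads several cells on a shared bitline in parallel and compares the summed current against a reference to realize Boolean functions. A behavioral sketch, with assumed resistance and voltage values (not the paper's SPICE figures), for three layers sharing one bitline:

```python
# Illustrative model of scouting logic on a bitline shared by three
# stacked pillar layers (one operand per layer). Values are assumed.
R_LRS, R_HRS = 1e4, 1e6      # low/high resistive states (ohm)
V_READ = 0.2                 # read voltage (V)

def bitline_current(operands):
    """Summed cell currents when all selected layers are read in parallel."""
    return sum(V_READ / (R_LRS if bit else R_HRS) for bit in operands)

I_LRS = V_READ / R_LRS       # current of one low-resistance cell

def scouting_or(ops):
    # reference set between zero and one LRS-cell current
    return bitline_current(ops) > 0.5 * I_LRS

def scouting_and(ops):
    # reference set between (n-1) and n LRS-cell currents
    return bitline_current(ops) > (len(ops) - 0.5) * I_LRS

for ops in [(0, 0, 0), (1, 0, 0), (1, 1, 1)]:
    print(ops, "OR:", scouting_or(ops), "AND:", scouting_and(ops))
```

The choice of sense-amplifier reference current is what selects the logic function, which is why the same array can compute OR and AND without extra cells.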
Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient
Recently, backpropagation-through-time-inspired learning algorithms have been widely introduced into SNNs to improve performance, which opens the possibility of attacking these models accurately given spatio-temporal gradient maps. We propose two approaches to address the challenges of gradient-input incompatibility and gradient vanishing. Specifically, we design a gradient-to-spike converter that converts continuous gradients into ternary ones compatible with spike inputs. Then, we design a gradient trigger that constructs ternary gradients that can randomly flip the spike inputs with a controllable turnover rate when all-zero gradients are encountered. Putting these methods together, we build an adversarial attack methodology for SNNs trained by supervised algorithms. Moreover, we analyze the influence of the training loss function and the firing threshold of the penultimate layer, which reveals a "trap" region under the cross-entropy loss that can be escaped by threshold tuning. Extensive experiments are conducted to validate the effectiveness of our solution. Besides the quantitative analysis of the influencing factors, we show evidence that SNNs are more robust against adversarial attack than ANNs. This work can help reveal what happens during an SNN attack and may stimulate more research on the security of SNN models and neuromorphic devices.
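The two components described above can be sketched as follows. The sign-based quantization rule and the specific flip semantics are illustrative assumptions, not the paper's exact scheme; the turnover rate is the controllable parameter mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def gradient_to_spike(grad):
    """Quantize continuous gradients to ternary values {-1, 0, +1} so the
    perturbation stays compatible with binary spike inputs (illustrative)."""
    return np.sign(grad).astype(np.int8)

def gradient_trigger(shape, turnover_rate, rng):
    """When gradients vanish (all zeros), build a random ternary gradient
    that flips a controllable fraction of spike positions."""
    mask = rng.random(shape) < turnover_rate
    signs = rng.choice([-1, 1], size=shape).astype(np.int8)
    return np.where(mask, signs, 0).astype(np.int8)

spikes = rng.integers(0, 2, size=(4, 100)).astype(np.int8)  # binary input
grad = np.zeros_like(spikes, dtype=float)                   # vanished grads

g = gradient_to_spike(grad)
if not g.any():                                             # all-zero case
    g = gradient_trigger(spikes.shape, turnover_rate=0.05, rng=rng)

adversarial = np.where(g != 0, 1 - spikes, spikes)          # flip spikes
print("fraction of positions flipped:", (adversarial != spikes).mean())
```

Because the perturbation is itself a pattern of spike flips, the adversarial input remains a valid binary spike train rather than a continuous tensor.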
Performance Enhancement of Large Crossbar Resistive Memories With Complementary and 1D1R-1R1D RRAM Structures
This paper proposes novel solutions to improve the signal and thermal integrity of crossbar arrays of resistive random-access memories (RRAMs), which are among the most promising technologies for 3D monolithic integration. These structures suffer from electrothermal issues due to the heat generated by power dissipation during the write process. The paper explores novel solutions, based on new architectures and materials, for managing the issues related to the voltage drop along the interconnects and to thermal crosstalk between memory cells. The analyzed memristor is the one-diode, one-resistor (1D1R) memory. The two architectural solutions are a reverse architecture and a complementary resistive switching one. Compared to conventional architectures, both also reduce the number of layers to which the bias is applied. The electrothermal performance of these new structures is compared to that of the reference one for a case study given by a 4 × 4 × 4 array. To this end, a full-3D numerical multiphysics model is implemented and successfully compared against other models in the literature. The possibility of changing the interconnect materials is also analyzed. The results of this performance analysis clearly show the benefits of moving to these novel architectures, together with the choice of new materials.
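The interconnect voltage-drop issue mentioned above can be illustrated with a one-dimensional resistive-ladder model of a single word line: each cell is a resistor to ground, separated by segments of wire resistance, and nodal analysis gives the voltage actually delivered to each cell. All component values are assumptions for illustration, not the paper's 4 × 4 × 4 multiphysics model.

```python
import numpy as np

# Illustrative word-line IR-drop model: n_cells cells along one line, each a
# resistor r_cell to ground, with wire resistance r_wire per segment.
def cell_voltages(n_cells, v_drive, r_wire, r_cell):
    """Nodal analysis (G v = i) of the resistive ladder formed by the line."""
    g_w, g_c = 1.0 / r_wire, 1.0 / r_cell
    G = np.zeros((n_cells, n_cells))
    i = np.zeros(n_cells)
    G[0, 0] += g_w                  # segment from the driver to node 0
    i[0] = g_w * v_drive
    for k in range(n_cells):
        G[k, k] += g_c              # cell conductance to ground
    for k in range(1, n_cells):     # segments between neighbouring nodes
        G[k, k] += g_w
        G[k - 1, k - 1] += g_w
        G[k, k - 1] -= g_w
        G[k - 1, k] -= g_w
    return np.linalg.solve(G, i)

v = cell_voltages(32, v_drive=1.0, r_wire=2.0, r_cell=1e4)
print(f"far-end voltage drop: {v[0] - v[-1]:.4f} V")

# A lower-resistivity interconnect material eases the drop at the far end.
v_low = cell_voltages(32, v_drive=1.0, r_wire=0.5, r_cell=1e4)
print(f"with 4x lower wire resistance: {v_low[0] - v_low[-1]:.4f} V")
```

The delivered voltage decreases monotonically away from the driver, which is why far-end cells see degraded write margins and why interconnect-material choice matters in the analysis above.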
Choose your tools carefully: a comparative evaluation of deterministic vs. stochastic and binary vs. analog neuron models for implementing emerging computing paradigms
Neuromorphic computing, commonly understood as a computing approach built upon neurons, synapses, and their dynamics, as opposed to Boolean gates, is gaining large mindshare due to its direct application to current and future computing problems such as smart sensing, smart devices, self-hosted and self-contained devices, and artificial intelligence (AI) applications. In a largely software-defined implementation of neuromorphic computing, it is possible to throw enormous computational power at a problem or to optimize models and networks depending on the specific nature of the computational task. A hardware-based approach, however, requires the identification of well-suited neuronal and synaptic models to obtain high functional and energy efficiency, which is a prime concern in size, weight, and power (SWaP) constrained environments. In this work, we study the characteristics of hardware neuron models (namely, inference errors, generalizability and robustness, practical implementability, and memory capacity) that have been proposed and demonstrated using a plethora of emerging nanomaterials-technology-based physical devices, in order to quantify the performance of such neurons on classes of problems that are of great importance in real-time signal processing, such as tasks in the context of reservoir computing. We find that the answer to which neuron to use for which application depends on the particulars of the application's requirements and constraints: we need not only a hammer but all sorts of tools in our tool chest for high-efficiency, high-quality neuromorphic computing.
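The binary-versus-analog trade-off in a reservoir-computing setting can be illustrated with a minimal echo-state-style reservoir driven by the same input through two neuron models: a graded tanh activation and a 1-bit threshold activation. The reservoir size, scaling, and input signal are all illustrative assumptions, not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 50                                               # reservoir size, assumed
W_in = rng.normal(0, 0.5, size=N)                    # input weights
W = rng.normal(0, 1.0, size=(N, N))                  # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # set spectral radius

def run_reservoir(u, neuron):
    """Drive the reservoir with input sequence u under a given activation."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = neuron(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

analog = lambda a: np.tanh(a)                        # graded activation
binary = lambda a: (a > 0).astype(float)             # 1-bit activation

u = np.sin(np.linspace(0, 8 * np.pi, 200))           # toy input signal
S_analog = run_reservoir(u, analog)
S_binary = run_reservoir(u, binary)
print("distinct analog states:", len({tuple(np.round(s, 6)) for s in S_analog}))
print("distinct binary states:", len({tuple(s) for s in S_binary}))
```

The analog neuron yields a richer continuum of reservoir states, while the binary neuron is far cheaper to implement in hardware; which matters more depends on the application's accuracy and SWaP constraints, echoing the paper's conclusion.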