5 research outputs found

    Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array

    Recent advances in neuroscience together with nanoscale electronic device technology have generated great interest in realizing brain-like computing hardware using emerging nanoscale memory devices as synaptic elements. Although experimental work has demonstrated the operation of nanoscale synaptic elements at the single-device level, network-level studies have been limited to simulations. In this work, we experimentally demonstrate array-level associative learning using phase change synaptic devices connected in a grid-like configuration similar to the organization of the biological brain. Implementing Hebbian learning with phase change memory cells, the synaptic grid was able to store presented patterns and recall missing patterns in an associative, brain-like fashion. We found that the system is robust to device variations and that large variations in cell resistance states can be accommodated by increasing the number of training epochs. We illustrate the trade-off between the variation tolerance of the network and its overall energy consumption, and find that energy consumption decreases significantly for lower variation tolerance.
    Comment: Original article can be found here: http://journal.frontiersin.org/Journal/10.3389/fnins.2014.00205/abstrac
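    The Hebbian store-and-recall behaviour the abstract describes can be illustrated with a minimal software sketch. This is a Hopfield-style bipolar-pattern model; the pattern size, recall rule, and function names are illustrative assumptions, not the paper's experimental protocol:

    ```python
    import numpy as np

    def train_hebbian(patterns):
        """Accumulate Hebbian updates W += p p^T for each bipolar (+1/-1) pattern."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)  # no self-connections
        return W

    def recall(W, probe, steps=5):
        """Iteratively complete a partial or corrupted bipolar pattern."""
        s = probe.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1  # break ties toward +1
        return s

    pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
    W = train_hebbian(pattern[None, :])
    probe = pattern.copy()
    probe[:2] = -probe[:2]          # flip two bits to mimic a missing portion
    restored = recall(W, probe)
    print(np.array_equal(restored, pattern))  # -> True
    ```

    The same outer-product update maps naturally onto a crossbar, where each weight is a device conductance at a row-column intersection.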

    A compute-in-memory chip based on resistive random-access memory.

    Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency in edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM) [1] promises to meet this demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory units [2-5]. Although recent studies have demonstrated in-memory matrix-vector multiplication on fully integrated RRAM-CIM hardware [6-17], it remains a goal for an RRAM-CIM chip to simultaneously deliver high energy efficiency, the versatility to support diverse models, and software-comparable accuracy. Although efficiency, versatility and accuracy are all indispensable for broad adoption of the technology, the inter-related trade-offs among them cannot be addressed by isolated improvements at any single abstraction level of the design. Here, by co-optimizing across all hierarchies of the design, from algorithms and architecture to circuits and devices, we present NeuRRAM, an RRAM-based CIM chip that simultaneously delivers versatility in reconfiguring CIM cores for diverse model architectures, energy efficiency twice that of previous state-of-the-art RRAM-CIM chips across various computational bit-precisions, and inference accuracy comparable to software models quantized to four-bit weights across various AI tasks, including 99.0 percent accuracy on MNIST [18] and 85.7 percent on CIFAR-10 [19] image classification, 84.7 percent accuracy on Google speech command recognition [20], and a 70 percent reduction in image-reconstruction error on a Bayesian image-recovery task.
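    The in-memory matrix-vector multiplication at the heart of such chips follows from Ohm's and Kirchhoff's laws: inputs are applied as row voltages, weights are stored as device conductances, and each column current sums V_i * G_ij. The sketch below models this numerically; the differential (G+ - G-) encoding of signed weights and the conductance range are common-practice assumptions, not NeuRRAM's exact circuit:

    ```python
    import numpy as np

    def weights_to_conductances(W, g_max=1e-4):
        """Encode signed weights as a differential pair of non-negative conductances."""
        scale = g_max / np.abs(W).max()
        g_pos = np.clip(W, 0, None) * scale    # positive part on the G+ device
        g_neg = np.clip(-W, 0, None) * scale   # magnitude of negative part on G-
        return g_pos, g_neg, scale

    def crossbar_mvm(v_in, g_pos, g_neg, scale):
        """Column currents I = V @ (G+ - G-), rescaled back to weight units."""
        i_out = v_in @ (g_pos - g_neg)
        return i_out / scale

    W = np.array([[0.5, -1.0],
                  [2.0,  0.25]])
    x = np.array([1.0, -0.5])
    y = crossbar_mvm(x, *weights_to_conductances(W))
    print(np.allclose(y, x @ W))  # -> True
    ```

    Real arrays add non-idealities (wire resistance, device variation, ADC quantization) on top of this ideal model, which is where the accuracy-efficiency trade-offs discussed in the abstract arise.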

    Memristor devices for neural networks

    Neural network technologies have taken center stage owing to their powerful computing capability for supporting deep learning in artificial intelligence. However, conventional synaptic devices such as SRAM and DRAM are not satisfactory solutions for neural networks. Recently, several types of memristor devices have become popular alternatives because of outstanding characteristics such as scalability, high performance, and non-volatility. To understand the characteristics of memristors, we compare them in terms of both maturity and performance. Among the proposed memristors, magneto-resistive random access memory, phase-change random access memory, and resistive random access memory are good candidates as synaptic devices for the weight storage and matrix-vector multiplication required in artificial neural networks (ANNs). Moreover, these devices play key roles as synaptic devices in research on bio-plausible spiking neural networks (SNNs), because their distinctive switching properties are well matched to emulating the synaptic and neuron functions of biological neural networks. In this paper, we review the motivation, advantages, technologies, and applications of memristor devices for neural networks, from practical ANN approaches to futuristic SNN research, considering the current status of memristor technology.
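    One synaptic function that memristive devices are used to emulate in SNN research is spike-timing-dependent plasticity (STDP), where the weight change depends on the relative timing of pre- and post-synaptic spikes. A toy sketch of the standard pair-based rule follows; the time constants and amplitudes are illustrative assumptions, not measured device parameters:

    ```python
    import numpy as np

    def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
        """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
        if dt > 0:                              # pre fires before post -> potentiation
            return a_plus * np.exp(-dt / tau)
        return -a_minus * np.exp(dt / tau)      # post fires before pre -> depression

    print(stdp_dw(10.0) > 0, stdp_dw(-10.0) < 0)  # -> True True
    ```

    In device terms, potentiation corresponds to gradually increasing a cell's conductance and depression to decreasing it, which is why the analogue, incremental switching of memristors maps well onto this rule.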