45 research outputs found

    Designing Neuromorphic Computing Systems with Memristor Devices

    Get PDF
    Deep Neural Networks (DNNs) have demonstrated impressive performance in many real-world applications, achieving near human-level accuracy in computer vision, natural video prediction, and other domains. However, DNNs consume considerable processing power, especially when realized on general-purpose GPUs or CPUs, which makes them unsuitable for low-power applications. Neuromorphic computing systems, on the other hand, are being heavily investigated as a potential substitute for traditional von Neumann systems in high-speed, low-power applications. One way to implement neuromorphic systems is with memristor crossbar arrays, owing to their small size, low power consumption, synapse-like behavior, and scalability. However, these systems are still in their early stages of development, and many challenges must be solved before commercialization. In this dissertation, we investigate the design of neuromorphic computing systems targeting classification and generation applications. Specifically, we introduce three novel neuromorphic computing systems. The first implements a multi-layer feed-forward neural network in which memristor crossbar arrays realize a novel hybrid spiking-based multi-layered self-learning system. This system is capable of on-chip training, whereas most previously published systems train off-chip. Evaluated on three different datasets, it reduces the average failure error by 42% compared with previously published systems and shows strong immunity to process variations. The second system implements an Echo State Network (ESN), a special type of recurrent neural network, using a novel memristor double-crossbar architecture. Trained for sample generation on the Mackey-Glass dataset, it generates accurate samples within a 75% window size of the training dataset. Finally, we introduce a novel neuromorphic computing system for real-time cardiac arrhythmia classification. Raw ECG data is fed directly to the system without any feature extraction, reducing classification time and power consumption. The proposed system achieves an overall accuracy of 96.17% and requires only 34 ms to test one ECG beat, outperforming most of its counterparts. As future work, we introduce a preliminary neuromorphic system implementing a deep Generative Adversarial Network (GAN) based on ESNs; the system is called ESN-GAN and targets natural video generation applications.
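
    A minimal sketch of the core operation behind such crossbar-based networks, not the dissertation's actual design: weights are stored as device conductances, inputs are applied as voltages, and Kirchhoff's current law performs the multiply-accumulate in a single step. The conductance range and the differential-pair mapping for signed weights are illustrative assumptions.

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed device conductance range (siemens)

def weights_to_conductances(W):
    """Map signed weights onto a differential pair of conductance arrays."""
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_plus = G_MIN + np.clip(W, 0, None) * scale    # positive parts
    G_minus = G_MIN + np.clip(-W, 0, None) * scale  # negative parts
    return G_plus, G_minus, scale

def crossbar_layer(v_in, G_plus, G_minus, scale):
    """Differential column currents recover the signed vector-matrix product."""
    i_out = v_in @ G_plus - v_in @ G_minus  # Ohm's law + current summation
    return i_out / scale                    # rescale back to weight units

W = np.random.randn(4, 3) * 0.5             # a small 4-input, 3-output layer
v = np.array([0.1, -0.2, 0.3, 0.05])        # input voltages
Gp, Gm, s = weights_to_conductances(W)
print(crossbar_layer(v, Gp, Gm, s))         # crossbar result
print(v @ W)                                # ideal result for comparison
```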

    Memristors for the Curious Outsiders

    Full text link
    We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a two-terminal passive component whose dynamic resistance depends on an internal parameter. We provide a brief historical introduction as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide non-practitioners in the field of memristive circuits and their connection to machine learning and neural computation.
    Comment: Perspective paper for MDPI Technologies; 43 pages.
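
    To make the definition concrete, here is a minimal sketch of the well-known linear ion-drift memristor model (Strukov et al., 2008): a two-terminal element whose resistance depends on an internal state variable driven by the current through it. The parameter values are illustrative, not fitted to any real device.

```python
import numpy as np

R_ON, R_OFF = 100.0, 16e3   # limiting resistances (ohm)
D = 10e-9                   # device thickness (m)
MU_V = 1e-14                # dopant mobility (m^2 s^-1 V^-1)

def simulate(v_of_t, dt=1e-5, x0=0.1):
    """Integrate the internal state x = w/D in [0, 1] under applied voltage."""
    x, currents = x0, []
    for v in v_of_t:
        r = R_ON * x + R_OFF * (1.0 - x)   # state-dependent resistance
        i = v / r
        x = np.clip(x + dt * MU_V * R_ON / D**2 * i, 0.0, 1.0)  # drift
        currents.append(i)
    return np.array(currents)

t = np.arange(0, 0.02, 1e-5)
v = np.sin(2 * np.pi * 50 * t)   # 50 Hz sinusoidal drive
i = simulate(v)                  # plotting i against v traces the pinched
                                 # hysteresis loop characteristic of memristors
```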

    Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing

    Full text link
    Machine learning, particularly in the form of deep learning, has driven most of the recent fundamental developments in artificial intelligence. Deep learning is based on computational models that are, to a certain extent, bio-inspired, as they rely on networks of connected simple computing units operating in parallel. Deep learning has been successfully applied in areas such as object/pattern recognition, speech and natural language processing, self-driving vehicles, intelligent self-diagnostics tools, autonomous robots, knowledgeable personal assistants, and monitoring. These successes have been supported mostly by three factors: the availability of vast amounts of data, continuous growth in computing power, and algorithmic innovations. The approaching demise of Moore's law, and the consequently modest improvements in computing power expected from scaling, raise the question of whether the described progress will be slowed or halted due to hardware limitations. This paper reviews the case for a novel beyond-CMOS hardware technology, memristors, as a potential solution for the implementation of power-efficient in-memory computing, deep learning accelerators, and spiking neural networks. Central themes are the reliance on non-von-Neumann computing architectures and the need to develop tailored learning and inference algorithms. To argue that lessons from biology can be useful in providing directions for further progress in artificial intelligence, we briefly discuss an example based on reservoir computing. We conclude the review by speculating on the big-picture view of future neuromorphic and brain-inspired computing systems.
    Comment: Keywords: memristor, neuromorphic, AI, deep learning, spiking neural networks, in-memory computing.
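
    A minimal echo state network sketch to make the reservoir-computing idea concrete: a fixed random recurrent "reservoir" is driven by the input, and only a linear readout is trained, here by ridge regression. The network sizes, spectral-radius value, and toy prediction task are illustrative choices, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 200, 1000
W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, size=N)

u = np.sin(np.linspace(0, 20 * np.pi, steps + 1))  # toy input signal
target = u[1:]                                     # predict the next sample

X = np.zeros((steps, N))
x = np.zeros(N)
for t in range(steps):
    x = np.tanh(W @ x + W_in * u[t])  # reservoir update, never trained
    X[t] = x

ridge = 1e-6  # regularization for the linear readout
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)
print("train MSE:", np.mean((X @ W_out - target) ** 2))
```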

    Deep Liquid State Machines with Neural Plasticity and On-Device Learning

    Get PDF
    The Liquid State Machine (LSM) is a recurrent spiking neural network designed for efficient processing of spatio-temporal streams of information. LSMs have several built-in features such as robustness, fast training and inference, generalizability, continual learning (no catastrophic forgetting), and energy efficiency. These features make LSMs ideal networks for deploying intelligence on-device. In general, single LSMs are unable to solve complex real-world tasks. Recent literature has shown the emergence of hierarchical architectures that support temporal information processing over different time scales. However, these approaches typically do not investigate the optimal topology for communication between layers in the hierarchical network, or they assume prior knowledge about the target problem and are therefore not generalizable. In this thesis, a deep Liquid State Machine (deep-LSM) network architecture is proposed. The deep-LSM uses staggered reservoirs to process temporal information on multiple timescales. A key feature of this network is that neural plasticity and attention are embedded in the topology to bolster its performance on complex spatio-temporal tasks. An advantage of the deep-LSM is that it exploits both the random projection native to the LSM and local plasticity mechanisms to optimize the data transfer between sequential layers; both are ideal for on-device learning due to their low computational complexity and the absence of backpropagated error. The deep-LSM is deployed on a custom learning architecture with memristors to study the feasibility of on-device learning, and its performance is demonstrated on speech recognition and seizure detection applications.
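
    A minimal sketch of the basic LSM idea, not the thesis's deep-LSM: a random, fixed network of leaky integrate-and-fire neurons turns an input spike stream into a high-dimensional spatio-temporal "liquid" state that a simple trained readout can classify. All constants, the connectivity density, and the input encoding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 500
W = rng.standard_normal((N, N)) * 0.04 * (rng.random((N, N)) < 0.1)  # sparse recurrence
W_in = (rng.random(N) < 0.3) * 0.8                                   # sparse input projection

v = np.zeros(N)                # membrane potentials
tau, v_th = 20.0, 1.0          # leak time constant, firing threshold
liquid_trace = np.zeros(N)     # low-pass filtered spike activity

in_spikes = rng.random(T) < 0.2   # Poisson-like input spike train
last_spikes = np.zeros(N)
for t in range(T):
    v += (-v / tau) + W @ last_spikes + W_in * in_spikes[t]  # integrate
    last_spikes = (v >= v_th).astype(float)                  # fire
    v[last_spikes > 0] = 0.0                                 # reset
    liquid_trace = 0.95 * liquid_trace + last_spikes

# liquid_trace is the feature vector a readout layer would be trained on
print(liquid_trace[:10])
```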

    Pruning random resistive memory for optimizing analogue AI

    Full text link
    The rapid advancement of artificial intelligence (AI) has been marked by large language models exhibiting human-like intelligence. However, these models also pose unprecedented challenges to energy consumption and environmental sustainability. One promising solution is to revisit analogue computing, a technique that predates digital computing and exploits emerging analogue electronic devices such as resistive memory, which feature in-memory computing, high scalability, and nonvolatility. However, analogue computing still faces its long-standing challenges: programming nonidealities and expensive programming due to the underlying device physics. Here, we report a universal solution: software-hardware co-design using structural-plasticity-inspired edge pruning to optimize the topology of a randomly weighted analogue resistive memory neural network. Software-wise, the topology of a randomly weighted neural network is optimized by pruning connections rather than by precisely tuning resistive memory weights. Hardware-wise, we reveal the physical origin of the programming stochasticity using transmission electron microscopy, and leverage it for large-scale, low-cost implementation of an overparameterized random neural network containing high-performance sub-networks. We implemented the co-design on a 40 nm 256K resistive memory macro, observing 17.3% and 19.9% accuracy improvements in image and audio classification on the FashionMNIST and Spoken Digits datasets, respectively, as well as a 9.8% (2%) improvement in PR (ROC) for image segmentation on the DRIVE dataset. This is accompanied by 82.1%, 51.2%, and 99.8% improvements in energy efficiency thanks to analogue in-memory computing. By embracing intrinsic stochasticity and in-memory computing, this work may remove the biggest obstacle facing analogue computing systems and thus unleash their immense potential for next-generation AI hardware.
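
    A minimal sketch of the central idea, topology optimization by pruning: the random weights are frozen (as imprecisely programmed analogue devices would be) and only a binary connection mask is searched. A simple greedy hill-climb on a toy regression stands in for the paper's actual training procedure; all sizes and the target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_samples = 8, 64, 256
X = rng.standard_normal((n_samples, n_in))
y = np.sin(X @ rng.standard_normal(n_in))   # toy target function

W1 = rng.standard_normal((n_in, n_hid))     # frozen random weights
w2 = rng.standard_normal(n_hid)             # frozen random readout
mask = np.ones((n_in, n_hid))               # 1 = connection kept

def loss(mask):
    h = np.tanh(X @ (W1 * mask))            # only the mask changes
    return np.mean((h @ w2 - y) ** 2)

best = loss(mask)
for _ in range(2000):
    i, j = rng.integers(n_in), rng.integers(n_hid)
    mask[i, j] = 1.0 - mask[i, j]           # try toggling one edge
    trial = loss(mask)
    if trial < best:
        best = trial                        # keep the change
    else:
        mask[i, j] = 1.0 - mask[i, j]       # revert
print("final MSE:", best, "edges kept:", int(mask.sum()))
```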

    Design of Robust Memristor-Based Neuromorphic Circuits and Systems with Online Learning

    Get PDF
    Computing systems capable of performing human-like cognitive tasks have been an area of active research in the recent past. However, due to the bottleneck faced by the traditionally adopted von Neumann computing architecture, the bio-inspired neural-network style of computing has seen a spike in research interest. Physical implementations of this computing paradigm are known as neuromorphic systems. In recent years, memristor-based neuromorphic systems have gained increased attention from the research community due to the advantages memristors offer, such as their nanoscale size, nonvolatile nature, and power-efficient programming capability. However, these devices also suffer from a variety of non-ideal behaviors, such as switching-speed and threshold asymmetry and limited resolution and endurance, that can have a detrimental impact on the operation of the systems employing them. This work aims to develop device-aware circuits that are robust in the face of such non-ideal properties. A bi-memristor synapse is first presented whose spike-timing-dependent plasticity (STDP) behavior can be precisely controlled on-chip, and it is shown to be robust. Next, a mixed-mode neuron is introduced that can be used with a range of memristors without needing custom redesign. These circuits are then used together to construct a memristive-crossbar system with supervised STDP learning for a pattern recognition application. Learning in the crossbar system is shown to be robust to device-level issues owing to the robustness of the proposed circuits. Lastly, the proposed circuits are applied to build a liquid state machine based reservoir computing system. The reservoir is a spiking recurrent neural network generated with an evolutionary optimization algorithm, and the readout layer is built with the crossbar system presented earlier, with STDP-based online learning. A generalized framework for the hardware implementation of this system is proposed, and the liquid state machine is shown to be robust against device-level switching issues that would otherwise have impacted learning in the readout layer. It is thereby demonstrated that the proposed circuits, together with their learning techniques, can be used to build robust memristor-based neuromorphic systems with online learning.
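
    For reference, a minimal sketch of the pair-based STDP rule that such synapse circuits implement in hardware: the weight change depends exponentially on the time difference between pre- and post-synaptic spikes. The amplitudes and time constants are illustrative textbook values, not the dissertation's.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight update for one pre/post spike pair; times in ms."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: causal pairing, potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post before pre: anti-causal pairing, depress
        return -A_MINUS * np.exp(dt / TAU_MINUS)

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+} ms -> dw = {stdp_dw(0, dt):+.5f}")
```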