
    Online Training of Spiking Recurrent Neural Networks with Phase-Change Memory Synapses

    Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, owing to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware remains an open challenge, mainly because of the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to memristive devices for in-memory computing to resolve the von Neumann bottleneck, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. To address these challenges and enable online learning in memristive neuromorphic RNNs, we present a simulation framework of differential-architecture crossbar arrays based on an accurate and comprehensive Phase-Change Memory (PCM) device model. We train a spiking RNN whose weights are emulated in the presented simulation framework, using the recently proposed e-prop learning rule. Although e-prop locally approximates the ideal synaptic updates, the updates are difficult to implement on the memristive substrate because of substantial PCM non-idealities. We compare several widely adopted weight update schemes that primarily aim to cope with these device non-idealities, and demonstrate that accumulating gradients can enable online and efficient training of spiking RNNs on memristive substrates.
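    The gradient-accumulation idea in this abstract can be sketched in a few lines. The code below is an illustrative toy, not the paper's simulation framework: the step size `DELTA`, the learning rate, and the update scheme are all assumptions standing in for a device whose conductance can only change in coarse programmable steps.

```python
import numpy as np

# Hypothetical update granularity of a coarse memristive synapse:
# the device conductance can only change in steps of DELTA.
DELTA = 0.1

def accumulated_update(w, grads, lr=0.05):
    """Accumulate ideal gradient updates in a side variable and program
    the device only when the accumulated change amounts to at least one
    whole step DELTA; the unapplied remainder is carried forward."""
    acc = 0.0
    for g in grads:
        acc += -lr * g                    # accumulate the ideal update
        n_steps = np.trunc(acc / DELTA)   # whole device steps available
        if n_steps != 0:
            w += n_steps * DELTA          # program the device coarsely
            acc -= n_steps * DELTA        # keep the sub-step remainder
    return w, acc
```

Without accumulation, every individual update here (magnitude 0.03 per gradient in the example below) would round to zero device steps and the weight would never move; with accumulation, small updates eventually add up to a programmable step.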

    Bio-inspired Hardware Architectures for Memory, Image Processing, and Control Applications

    Emerging technologies are expected to partially replace and enhance CMOS systems as the end of transistor scaling approaches. A particular class of emerging technology of interest is variable resistance devices, owing to their scalability, non-volatile nature, and CMOS process compatibility. The goal of this dissertation is to present circuit- and system-level applications of CMOS and variable resistance devices, with bio-inspired computation paradigms as the main focus. The summary of the results offered per chapter is as follows: In the first chapter of this thesis, an introduction to the work presented in the rest of this thesis and the model for the variable resistance device are provided. In the second chapter of this thesis, a crossbar memory architecture that utilizes a reduced-constraint read-monitored-write scheme is presented. Variable resistance based crossbar memories are prime candidates to succeed Flash as the mainstream non-volatile memory due to their density, scalability, and write endurance. The proposed scheme supports multi-bit storage per cell and utilizes reduced hardware, aiming to decrease the feedback complexity and latency while still operating with CMOS-compatible voltages. Additionally, a read technique that can successfully distinguish resistive states in the presence of resistance drift due to read/write disturbances in the array is presented. Derivations of analytical relations are provided to set forth a design methodology for selecting peripheral device parameters. In the third chapter of this thesis, an analog programmable resistive-grid-based architecture mimicking the cellular connections of a biological retina at the most basic level, capable of performing various real-time image processing tasks such as edge and line detection, is presented. Resistive-grid-based analog structures have been shown to offer compact area, noise immunity, and lower power consumption compared to their digital counterparts.
However, these are static structures that can only perform one type of image processing task. The proposed unit cell structure employs 3-D confined resonant tunneling diodes called quantum dots for signal amplification and latching, and these dots are interconnected between neighboring cells through non-volatile, continuously variable resistive elements. A method to program connections is introduced and verified through circuit simulations. Various diffusion characteristics, edge detection, and line detection tasks have been demonstrated through simulations using a 2-D array of the proposed cell structure, and analytical models have been provided. In the fourth chapter of this thesis, a bio-inspired hardware design aimed at solving the optimal control problem for general systems is presented. Adaptive Dynamic Programming algorithms provide means to approximate optimal control actions for linear and non-linear systems. An Action-Critic Networks based approach is an efficient way to approximately evaluate the cost function and the optimal control actions. However, due to its computational intensity, this approach is usually implemented in high-level programming languages run on general-purpose processors. The presented hardware design is aimed at reducing the computation time and the hardware overhead by using the Heuristic Dynamic Programming algorithm, which is a form of Adaptive Dynamic Programming. The proposed hardware, operating at a mere 10 MHz, yields a 237-times-faster learning rate in comparison to conventional software implementations running on fast processors such as a 1.2 GHz Intel Xeon.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136972/1/yalciny_1.pd
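    The resistive-grid retina idea above has a simple software analogue: each cell relaxes toward the average of its four neighbours (a discrete diffusion), and subtracting the diffused image from the input highlights edges. The sketch below is an illustrative numerical model under assumed parameters (conductance `g`, step count, periodic boundaries), not the dissertation's circuit.

```python
import numpy as np

def resistive_grid_diffuse(img, g=0.2, steps=10):
    """Toy resistive-grid model: each cell relaxes toward the mean of
    its 4 neighbours with coupling strength g (a discrete heat
    equation with periodic boundaries via np.roll)."""
    img = img.astype(float)
    for _ in range(steps):
        up    = np.roll(img,  1, axis=0)
        down  = np.roll(img, -1, axis=0)
        left  = np.roll(img,  1, axis=1)
        right = np.roll(img, -1, axis=1)
        img = img + g * (up + down + left + right - 4 * img)
    return img

def edge_map(img, g=0.2, steps=10):
    """Input minus its diffused version: large near intensity steps,
    near zero in flat regions (a difference-of-diffusion detector)."""
    return img - resistive_grid_diffuse(img, g, steps)
```

Because the diffusion only exchanges value between neighbours, the total intensity is conserved, so the edge map sums to zero; its magnitude peaks at intensity boundaries.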

    Towards Efficient In-memory Computing Hardware for Quantized Neural Networks: State-of-the-art, Open Challenges and Perspectives

    The amount of data processed in the cloud, the development of Internet-of-Things (IoT) applications, and growing data privacy concerns force the transition from cloud-based to edge-based processing. Limited energy and computational resources at the edge push the transition from traditional von Neumann architectures to In-memory Computing (IMC), especially for machine learning and neural network applications. Network compression techniques are applied to implement a neural network on limited hardware resources. Quantization is one of the most efficient network compression techniques, reducing the memory footprint, latency, and energy consumption. This paper provides a comprehensive review of IMC-based Quantized Neural Networks (QNN) and links software-based quantization approaches to IMC hardware implementation. Moreover, open challenges, QNN design requirements, recommendations, and perspectives, along with an IMC-based QNN hardware roadmap, are provided.
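    As a concrete illustration of the quantization the abstract refers to, the following sketch shows one common scheme, symmetric per-tensor int8 quantization with a single scale factor. The scheme choice and function names are illustrative assumptions, not from the paper.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: map float weights to int8
    codes in [-127, 127] using one scale factor for the whole tensor."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(float) * scale
```

The round-trip error of this scheme is bounded by half a quantization step (scale / 2), which is why the memory footprint drops by 4x relative to float32 at a small, controlled accuracy cost.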

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms by RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution.
    Comment: 51 pages, 19 figures, IEEE Access
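    The RC recipe described above (fixed random recurrent weights, trained linear readout) fits in a short sketch. This is a minimal echo state network under assumed choices (reservoir size, spectral radius 0.9, ridge regularization, a one-step-delay recall task); none of these specifics come from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: only the linear readout is trained.
N_IN, N_RES = 1, 100
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius 0.9

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect the
    high-dimensional states x_t = tanh(W_in u_t + W_res x_{t-1})."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W_res @ x)
        states.append(x)
    return np.array(states)

# Task: recall the input delayed by one step (short-term memory).
u = rng.uniform(-1.0, 1.0, 300)
X, y = run_reservoir(u)[1:], u[:-1]

# Train only the readout, via ridge regression on the states.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)
pred = X @ W_out
```

The recurrent weights are never updated; the reservoir's fading memory of past inputs is what lets a purely linear readout recover the delayed signal.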

    Neuro-inspired electronic skin for robots

    Touch is a complex sensing modality owing to the large number of receptors (mechanical, thermal, pain) non-uniformly embedded in the soft skin all over the body. These receptors gather and encode a large amount of tactile data, allowing us to feel and perceive the real world. This efficient somatosensation far outperforms the touch-sensing capability of most state-of-the-art robots today and suggests the need for neural-like hardware for electronic skin (e-skin). This could be attained either through innovative schemes for developing distributed electronics or by repurposing the neuromorphic circuits developed for other sensory modalities such as vision and audio. This Review highlights the hardware implementations of various computational building blocks for e-skin and the ways they can be integrated to potentially realize human skin-like or peripheral nervous system-like functionalities. Neural-like sensing and data processing are discussed along with various algorithms and hardware architectures. The integration of ultrathin neuromorphic chips for local computation and the printed electronics on soft substrates used for the development of e-skin over large areas are expected to advance robotic interaction as well as open new avenues for research in medical instrumentation, wearables, electronics, and neuroprosthetics.