
    Stochastic resonance effect in binary STDP performed by RRAM devices

    The beneficial role of noise in the binary spike-timing-dependent plasticity (STDP) learning rule, when implemented with memristors, is experimentally analyzed. The two memristor conductance states, which emulate the neural synapse in neuromorphic architectures, can be better distinguished if Gaussian noise is added to the bias. The added noise makes it possible to reach memristor conductances that are proportional to the overlap between the pre- and post-synaptic pulses.

    This research was funded by the Spanish MCIN/AEI/10.13039/501100011033, Projects PID2019-103869RB and TEC2017-90969-EXP. The Spanish MicroNanoFab ICTS is acknowledged for sample fabrication.
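    As a loose illustration of this stochastic-resonance effect (a toy model, not the paper's experimental setup; the threshold, noise level, and overlap-to-bias mapping are all assumptions), one can simulate a binary threshold device whose switching probability becomes graded in the pulse overlap once Gaussian noise is added to a sub-threshold bias:

```python
# Toy model of stochastic resonance in a binary switching device
# (illustrative only; all parameter values below are assumptions).
import numpy as np

rng = np.random.default_rng(0)
V_th = 1.0                              # assumed switching threshold
overlaps = np.linspace(0.0, 1.0, 11)    # normalized pre/post pulse overlap
V_bias = 0.8 * overlaps                 # sub-threshold bias from the overlap

def switch_fraction(V, sigma, trials=10_000):
    """Fraction of trials in which bias + Gaussian noise crosses threshold."""
    noise = rng.normal(0.0, sigma, size=trials)
    return np.mean(V + noise > V_th)

for ov, V in zip(overlaps, V_bias):
    p_clean = switch_fraction(V, sigma=1e-6)  # effectively no noise: always 0
    p_noisy = switch_fraction(V, sigma=0.3)   # with noise: graded in overlap
    print(f"overlap={ov:.1f}  no-noise={p_clean:.2f}  with-noise={p_noisy:.2f}")

# Without noise the bias never crosses V_th, so the device never switches.
# With noise the mean switching probability grows with the overlap, so the
# average conductance tracks the pre/post pulse overlap, as described above.
```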

    MOCAST 2021

    The 10th International Conference on Modern Circuits and Systems Technologies on Electronics and Communications (MOCAST 2021) will take place in Thessaloniki, Greece, from July 5th to July 7th, 2021. The MOCAST technical program covers all aspects of circuit and system technologies, from modeling to design, verification, implementation, and application. This Special Issue presents extended versions of top-ranking papers from the conference. The topics of MOCAST include:
    - Analog/RF and mixed-signal circuits;
    - Digital circuits and systems design;
    - Nonlinear circuits and systems;
    - Device and circuit modeling;
    - High-performance embedded systems;
    - Systems and applications;
    - Sensors and systems;
    - Machine learning and AI applications;
    - Communication and network systems;
    - Power management;
    - Imagers, MEMS, medical, and displays;
    - Radiation front ends (nuclear and space applications);
    - Education in circuits, systems, and communications.

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own view of the current state and the future challenges of each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
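    A quick back-of-the-envelope check of those figures (a rough calculation for orientation, not a number taken from the roadmap itself): dividing the quoted power draw by the exascale operation rate gives the implied energy budget per operation.

```python
# Rough energy-per-operation estimate for an exascale von Neumann machine,
# using the figures quoted above.
ops_per_second = 1e18   # exascale: 10^18 calculations per second
power_watts = 20e6      # lower end of the quoted 20-30 MW range

joules_per_op = power_watts / ops_per_second
print(f"Implied energy per operation: {joules_per_op * 1e12:.0f} pJ")  # ~20 pJ

# For comparison, the human brain is commonly estimated to run on ~20 W in
# total, a millionfold less power for the whole organ, which is the kind of
# gap motivating neuromorphic approaches.
```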

    Memristor-based design solutions for mitigating parametric variations in IoT applications

    PhD Thesis. Rapid advancement of the Internet of Things (IoT) is predicated on two important factors of electronic technology, namely device size and energy efficiency. With smaller size comes the problem of process, voltage, and temperature (PVT) variation in delays, which are key operational parameters of devices. Parametric variability is also an obstacle to allowing devices to work in systems with unpredictable power sources, such as those powered by energy harvesters. Designers tackle these problems holistically by developing new techniques such as asynchronous logic, where mechanisms such as matched delays are widely used to adapt to delay variations. To mitigate energy-efficiency and power-interruption issues, the matched delays ideally need to be retained in non-volatile storage. Meanwhile, a resistive memory called the memristor has become a promising component for power-restricted applications owing to its inherent non-volatility.

    While providing non-volatility, the use of memristors in delay matching incurs some power overheads. This creates the first challenge on the way to introducing memristors into IoT devices for delay matching. Another important factor affecting the use of memristors in IoT devices is the dependence of memristance on temperature. For example, a memristance decoder used in memristor-based components must be able to correct the read data without incurring significant overheads on the overall system. This creates the second challenge: overcoming the temperature effect in the memristance decoding process.

    In this research, we propose methods for improving the PVT tolerance and energy characteristics of IoT devices from the perspective of the above two challenges: (i) utilising a memristor to enhance the energy efficiency of the delay element (DE), and (ii) improving the temperature awareness and energy robustness of the memristance decoder. For the memristor-based delay element (MemDE), we place a memristor between two inverters to vary the path resistance, which determines the RC delay. This saves power owing to the low number of switching components and the absence of external delay storage. We also investigate a solution for avoiding unintended tuning (UT) and a timing model to estimate the proper pulse width for memristance tuning. Simulation results based on UMC 180 nm technology and the VTEAM model show the MemDE can provide delays between 0.55 ns and 1.44 ns, comparable to a 4-bit multiplexer-based delay element (MuxDE) in the same technology, while consuming thirteen times less power. The key contribution within (i) is the development of a low-power MemDE to mitigate the timing mismatch caused by PVT variations.

    To estimate the temperature effect on memristance, we develop an empirical temperature model which fits both titanium dioxide and silver chalcogenide memristors. The temperature experiments are conducted using the latter device, and the results confirm the validity of the proposed model with accuracy R-squared > 88%. The memristance decoder is designed to deliver two key advantages. Firstly, the temperature model is integrated into the VTEAM model to enable temperature compensation. Secondly, it supports resolution scalability to match the energy budget. Simulation results for the 2-bit decoder based on UMC 65 nm technology show the energy can be varied between 49 fJ and 98 fJ. This is the second major contribution, addressing challenge (ii).

    This thesis also gives future research directions for an in-depth study of memristive electronics as a variation-robust, energy-efficient design paradigm and its impact on developing future IoT applications. Sponsored by the Royal Thai Government.
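    As a rough illustration of the RC-delay idea behind the MemDE (a first-order sketch, not the thesis's timing model), the memristor can be treated as the variable resistance of an RC stage between the two inverters. The memristance range and load capacitance below are assumed values, chosen only so that the result lands near the quoted delay range.

```python
# First-order sketch of a memristor-tuned RC delay stage. The memristance
# range and load capacitance are assumptions, not values from the thesis.
# Delay of an RC stage to the ~50% switching threshold: t_d = ln(2) * R * C.
import math

C_load = 20e-15            # assumed inverter load capacitance: 20 fF
R_range = (40e3, 105e3)    # assumed programmable memristance range (ohms)

for R in R_range:
    t_d = math.log(2) * R * C_load
    print(f"R = {R / 1e3:6.1f} kOhm -> delay = {t_d * 1e9:.2f} ns")

# With these assumed values the delay spans roughly 0.55-1.46 ns, the same
# order as the 0.55-1.44 ns MemDE range reported above: programming the
# memristance directly programs the delay, with no external delay storage.
```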

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware and to exploit device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case and the precision of conventional computation in the latter.
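    For readers unfamiliar with the NEF's "compile a differential equation onto spiking neurons" workflow, a minimal Nengo sketch of a neural integrator, dx/dt = u(t), is shown below. This is the standard introductory NEF example, not code from the thesis; the neuron count and time constant are arbitrary illustrative choices.

```python
# Minimal NEF-style neural integrator in Nengo: dx/dt = u(t).
# NEF Principle 3 maps this onto a recurrent transform of 1 and an input
# transform of tau, where tau is the synaptic time constant.
import nengo

tau = 0.1  # synaptic time constant (s), an arbitrary illustrative choice

with nengo.Network() as model:
    stim = nengo.Node(lambda t: 1.0 if t < 0.5 else 0.0)   # brief input pulse
    x = nengo.Ensemble(n_neurons=200, dimensions=1)        # represents x(t)
    nengo.Connection(stim, x, transform=tau, synapse=tau)  # feed in tau * u(t)
    nengo.Connection(x, x, transform=1.0, synapse=tau)     # recurrence holds x
    probe = nengo.Probe(x, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# sim.data[probe] ramps up while the pulse is on and then holds its value:
# the spiking network integrates its input.
```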