65 research outputs found

    A caloritronics-based Mott neuristor

    Machine learning imitates the basic features of biological neural networks to efficiently perform tasks such as pattern recognition. This has mostly been achieved at the software level, and a strong effort is currently being made to mimic neurons and synapses with hardware components, an approach known as neuromorphic computing. CMOS-based circuits have been used for this purpose, but they scale poorly, limiting device density and motivating the search for neuromorphic materials. While recent advances in resistive switching have provided a path to emulate synapses at the 10 nm scale, a scalable neuron analogue is yet to be found. Here, we show how heat transfer can be used to mimic neuron functionalities in Mott nanodevices. We use the Joule heating created by current spikes to trigger the insulator-to-metal transition in a biased VO2 nanogap. We show that the thermal dynamics allow the implementation of the basic neuron functionalities: activity, leaky integrate-and-fire, volatility and rate coding. By using the local temperature as the internal variable, we avoid the need for external capacitors, which reduces the neuristor size by several orders of magnitude. This approach could enable neuromorphic hardware to take full advantage of the rapid advances in memristive synapses, allowing for much denser and more complex neural networks. More generally, we show that heat dissipation is not always an undesirable effect: it can perform computing tasks if properly engineered.
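    To make the mechanism concrete, here is a minimal lumped-element sketch of the thermal leaky integrate-and-fire dynamics described above, assuming illustrative values for the thermal capacitance, the thermal resistance to the substrate and the VO2 transition temperature (none of these device constants are given in the abstract):

    # Thermal leaky integrate-and-fire sketch: the local temperature T is the
    # neuron's internal state. Current spikes deposit Joule heat, heat leaks
    # to the substrate (leaky integration), and the device "fires" when T
    # crosses the insulator-to-metal transition of VO2 (~340 K), then cools
    # back down (volatility). All parameter values are assumptions.
    import numpy as np

    C_TH = 1e-12    # thermal capacitance [J/K] (assumed)
    R_TH = 1e6      # thermal resistance to the substrate [K/W] (assumed)
    T_AMB = 300.0   # substrate temperature [K]
    T_IMT = 340.0   # VO2 insulator-to-metal transition temperature [K]
    DT = 1e-9       # integration time step [s]

    def simulate(joule_power):
        """dT/dt = P/C_TH - (T - T_AMB)/(R_TH*C_TH); return firing times [s]."""
        T, fires = T_AMB, []
        for i, p in enumerate(joule_power):
            T += DT * (p / C_TH - (T - T_AMB) / (R_TH * C_TH))
            if T >= T_IMT:       # transition triggered: the neuristor fires
                fires.append(i * DT)
                T = T_AMB        # the nanogap cools back: state is volatile
        return fires

    # A sustained burst of heating pulses integrates up to threshold, while
    # sparse pulses leak away before reaching it (rate coding).
    drive = np.zeros(2000)
    drive[100:600] = 200e-6      # Joule heating during the burst [W] (assumed)
    print(simulate(drive))

    Because the integration variable is the device temperature itself, no external capacitor appears anywhere in the model, which is the scaling argument the abstract makes.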

    Stochastic resonance effect in binary STDP performed by RRAM devices

    The beneficial role of noise in the binary spike-timing-dependent plasticity (STDP) learning rule, when implemented with memristors, is experimentally analyzed. The two memristor conductance states, which emulate the neuron synapse in neuromorphic architectures, can be better distinguished if Gaussian noise is added to the bias. The addition of noise makes it possible to reach memristor conductances that are proportional to the overlap between pre- and post-synaptic pulses. This research was funded by the Spanish MCIN/AEI/10.13039/501100011033, Projects PID2019-103869RB and TEC2017-90969-EXP. The Spanish MicroNanoFab ICTS is acknowledged for sample fabrication.
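    As a rough illustration of the stochastic resonance effect, the sketch below models an array of binary memristors whose switching threshold sits just above the bias produced by overlapping pre- and post-synaptic pulses; Gaussian noise on the bias then makes the fraction of switched devices grow with the pulse overlap. The threshold, pulse amplitude and noise level are assumed values, not the paper's measured ones:

    # Binary STDP with bias noise: each device switches only if the voltage
    # across it exceeds V_SET. With noiseless pulses the overlap bias
    # 2*V_PULSE stays below V_SET and nothing ever switches; with Gaussian
    # noise, the probability of switching grows with how long the pre- and
    # post-synaptic pulses overlap, giving a graded, STDP-like response.
    import numpy as np

    rng = np.random.default_rng(0)
    V_SET = 1.0     # switching threshold [V] (assumed)
    V_PULSE = 0.45  # single-pulse amplitude [V] (assumed)
    SIGMA = 0.15    # std dev of the Gaussian bias noise [V] (assumed)

    def switched_fraction(overlap_slots, n_devices=10000):
        """Fraction of devices set during an overlap of `overlap_slots` steps."""
        switched = np.zeros(n_devices, dtype=bool)
        for _ in range(overlap_slots):
            noisy_bias = 2 * V_PULSE + rng.normal(0.0, SIGMA, n_devices)
            switched |= noisy_bias > V_SET
        return switched.mean()   # behaves like an analogue conductance level

    for slots in (0, 1, 2, 4, 8):
        print(slots, round(switched_fraction(slots), 3))

    With SIGMA set to zero the printed fractions are all 0.0; the noise is what turns the binary device into an effectively analogue synapse.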

    Integrated Architecture for Neural Networks and Security Primitives using RRAM Crossbar

    This paper proposes an architecture that integrates neural networks (NNs) and hardware security modules using a single resistive random access memory (RRAM) crossbar. The proposed architecture enables a single crossbar to implement NN, true random number generator (TRNG), and physical unclonable function (PUF) applications, while exploiting the multi-state storage characteristic of the RRAM crossbar for the vector-matrix multiplication operation required to implement the NN. The TRNG is implemented by exploiting the variation in the crossbar's device switching thresholds to generate random bits. The PUF is implemented using the same crossbar initialized as an entropy source for the TRNG. Additionally, a weight-locking concept is introduced to enhance the security of NNs by preventing unauthorized access to the NN weights. The proposed architecture provides the flexibility to configure the RRAM devices in multiple modes to suit different applications, and it shows promise for a more efficient and compact hardware implementation of NNs and security primitives.
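    The sketch below illustrates the three roles one crossbar can play, modelling it simply as a matrix of conductances with per-device switching thresholds. Device-to-device threshold variation (fixed at fabrication) supplies the PUF bits, cycle-to-cycle variation supplies the TRNG bits, and Ohm's and Kirchhoff's laws give the vector-matrix multiplication; all distributions and pulse amplitudes are assumptions for illustration:

    # One RRAM crossbar, three applications (illustrative model):
    #   VMM  - output currents I = G^T · V for NN inference
    #   TRNG - cycle-to-cycle threshold noise decides marginal switching
    #   PUF  - stable device-to-device threshold ordering gives chip-unique bits
    import numpy as np

    rng = np.random.default_rng(7)
    ROWS, COLS = 64, 32

    # Per-device mean switching threshold, fixed at fabrication (PUF entropy).
    v_th = rng.normal(1.0, 0.05, (ROWS, COLS))

    def vmm(G, v_in):
        """NN inference: column currents from Ohm's and Kirchhoff's laws."""
        return G.T @ v_in

    def trng_bit(i, j, v_pulse=1.0, sigma_c2c=0.05):
        """Pulse one cell at its nominal threshold; noise decides the bit."""
        return int(v_pulse > v_th[i, j] + rng.normal(0.0, sigma_c2c))

    def puf_response(challenge):
        """Each challenge picks two cells; comparing thresholds yields a bit."""
        return [int(v_th[a] > v_th[b]) for a, b in challenge]

    G = rng.uniform(1e-6, 1e-4, (ROWS, COLS))       # programmed weights [S]
    print(vmm(G, rng.uniform(0.0, 0.2, ROWS))[:4])  # first 4 output currents
    print([trng_bit(0, 0) for _ in range(8)])       # TRNG bit stream
    print(puf_response([((0, 1), (2, 3)), ((4, 5), (6, 7))]))  # PUF bits

    Real designs add post-processing (e.g. debiasing for the TRNG and error correction for the PUF); the point here is only that the same physical array can supply all three primitives.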

    Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses

    Spiking neural networks (SNNs) are artificial computational models inspired by the brain's ability to naturally encode and process information in the time domain. The added temporal dimension is believed to make them more computationally efficient than conventional artificial neural networks, though their full computational capabilities are yet to be explored. Recently, computational memory architectures based on non-volatile memory crossbar arrays have shown great promise for implementing parallel computations in artificial and spiking neural networks. In this work, we experimentally demonstrate, for the first time, the feasibility of realizing high-performance event-driven in-situ supervised learning systems using nanoscale and stochastic phase-change synapses. Our SNN is trained to recognize audio signals of alphabets encoded as spikes in the time domain and to generate spike trains at precise time instances to represent the pixel intensities of the corresponding images. Moreover, with a statistical model capturing the experimental behavior of the devices, we investigate architectural and system-level solutions for improving the training and inference performance of our computational memory-based system. By combining the computational potential of supervised SNNs with the parallel compute power of computational memory, this work paves the way for the next generation of efficient brain-inspired systems.
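    The abstract does not spell out the learning rule, so the sketch below uses a generic event-driven supervised rule in the spirit of ReSuMe / the spike-based delta rule, with PCM-like stochastic, clipped weight updates; it should be read as an illustration of the approach, not as the paper's algorithm. All constants are assumptions:

    # A LIF neuron learns to fire at desired times. Whenever the desired and
    # actual outputs disagree at a time step, each synapse is nudged by an
    # amount gated by its presynaptic trace, and the update is applied with
    # PCM-like stochasticity (only a random subset of devices programs) and
    # clipping to the devices' conductance range. Constants are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    T, N_IN = 500, 20                       # time steps, input synapses
    DT, TAU_M, TAU_TR = 1.0, 20.0, 10.0     # step, membrane and trace constants
    V_TH, ETA, P_PROG = 1.0, 0.05, 0.5      # threshold, learn rate, program prob.

    pre = rng.random((T, N_IN)) < 0.05      # Poisson-like input spike raster
    desired = np.zeros(T, dtype=bool)
    desired[[120, 300, 420]] = True         # target output spike times
    w = rng.uniform(0.0, 0.2, N_IN)

    for epoch in range(30):
        v, trace = 0.0, np.zeros(N_IN)
        for t in range(T):
            trace += -trace * DT / TAU_TR + pre[t]   # presynaptic traces
            v += -v * DT / TAU_M + w @ pre[t]        # leaky integration
            out = v >= V_TH
            if out:
                v = 0.0                              # reset after a spike
            err = int(desired[t]) - int(out)         # +1: add spike, -1: remove
            if err:
                mask = rng.random(N_IN) < P_PROG     # stochastic programming
                w = np.clip(w + ETA * err * trace * mask, 0.0, 1.0)

    # Check which spike times the trained neuron actually produces.
    v, got = 0.0, []
    for t in range(T):
        v += -v * DT / TAU_M + w @ pre[t]
        if v >= V_TH:
            got.append(t)
            v = 0.0
    print("desired:", np.flatnonzero(desired).tolist(), "got:", got)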