184 research outputs found

    Building Reservoir Computing Hardware Using Low Energy-Barrier Magnetics

    Biologically inspired recurrent neural networks such as reservoir computers are of interest for designing spatio-temporal data processors in hardware, owing to their simple learning scheme and deep connections to Kalman filters. In this work, we use in-depth simulation studies to show how hardware reservoir computers can be constructed from an analog stochastic neuron cell built with a low-energy-barrier-magnet magnetic tunnel junction and a few transistors. This allows us to implement a physical embodiment of the mathematical model of reservoir computers. Implementing reservoir computers with such devices may enable compact, energy-efficient signal processors for standalone or in-situ machine cognition in edge devices.
    Comment: To be presented at the International Conference on Neuromorphic Systems 202
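    As a rough illustration of the idea (not the authors' design), the Python sketch below simulates a reservoir whose nodes are stochastic binary units that switch with an input-dependent probability, loosely mimicking the telegraph-like fluctuations of a low-energy-barrier magnetic tunnel junction; the sizes, weights, and input signal are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 1, 200                      # assumed sizes, for illustration only
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W_res = rng.normal(0.0, 1.0 / np.sqrt(N_RES), (N_RES, N_RES))

def step(state, u):
    # Each node fires (1) or stays silent (0) with a sigmoid probability of its
    # synaptic drive, emulating telegraph-like switching of a low-barrier magnet.
    drive = W_res @ state + W_in @ u
    p_fire = 1.0 / (1.0 + np.exp(-drive))
    return (rng.random(N_RES) < p_fire).astype(float)

# Drive the reservoir with a toy signal and collect states for a linear readout.
T = 500
u_seq = np.sin(0.2 * np.arange(T)).reshape(-1, 1)
states = np.zeros((T, N_RES))
x = np.zeros(N_RES)
for t in range(T):
    x = step(x, u_seq[t])
    states[t] = x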

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which the neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models and then survey the state-of-the-art models and their applications. We further introduce studies that model the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution.
    Comment: 51 pages, 19 figures, IEEE Access
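    The fixed-random-reservoir-plus-trained-linear-readout recipe described above is the classic echo state network; the short sketch below implements it with a ridge-regression readout on a toy one-step-ahead prediction task. Hyperparameters and the task are illustrative assumptions, not taken from the survey.

import numpy as np

rng = np.random.default_rng(1)
N_RES, LEAK, RIDGE = 300, 0.3, 1e-6       # illustrative hyperparameters

W_in = rng.uniform(-1.0, 1.0, (N_RES, 1))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def collect_states(u_seq):
    # Leaky-tanh reservoir update; the recurrent weights are never trained.
    x = np.zeros(N_RES)
    states = []
    for u in u_seq:
        x = (1.0 - LEAK) * x + LEAK * np.tanh(W @ x + W_in @ [u])
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.1 * np.arange(2000))
X, y = collect_states(u)[:-1], u[1:]
W_out = np.linalg.solve(X.T @ X + RIDGE * np.eye(N_RES), X.T @ y)  # ridge readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))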

    High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural Network

    Neuromorphic computing systems overcome the limitations of traditional von Neumann computing architectures. These systems can be improved further by using emerging technologies that are more efficient than CMOS for neural computation. Recent research has demonstrated that memristors and spintronic devices boost efficiency and speed in various neural network designs. This paper presents a biologically inspired, fully spintronic neuron used in a fully spintronic Hopfield RNN. The network is used to solve tasks, and the results are compared against those of current Hopfield neuromorphic architectures that use emerging technologies.
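    For readers unfamiliar with the underlying model, the sketch below shows the behaviour an asynchronous Hopfield RNN is expected to reproduce: Hebbian storage of patterns and recall from a corrupted cue. It is a plain software illustration with made-up patterns and does not model the spintronic devices themselves.

import numpy as np

rng = np.random.default_rng(2)

# Two illustrative 8-bit patterns stored with a Hebbian rule.
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1,  1, -1, -1, -1, -1]])
N = patterns.shape[1]
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(state, sweeps=10):
    # Asynchronous recall: one randomly chosen neuron is updated at a time.
    s = state.copy()
    for _ in range(sweeps * N):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0].copy()
noisy[:2] *= -1                          # corrupt two bits
print(recall(noisy))                     # converges back to the stored pattern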

    Pattern recognition using spiking antiferromagnetic neurons

    Spintronic devices offer a promising avenue for the development of nanoscale, energy-efficient artificial neurons for neuromorphic computing. It has previously been shown that ultra-fast spiking artificial neurons mimicking many unique features of biological neurons can be built from antiferromagnetic (AFM) oscillators. In this work, we train an artificial neural network of AFM neurons to perform pattern recognition. A simple machine learning algorithm called spike pattern association neuron (SPAN), which relies on the temporal position of neuron spikes, is used during training. In under a microsecond of physical time, the AFM neural network is trained to recognize symbols composed on a grid by producing a spike within a specified time window. We further achieve multi-symbol recognition with the addition of an output layer that suppresses undesirable spikes. By combining AFM neurons with the SPAN algorithm, we create a neural network capable of high-accuracy recognition with overall power consumption on the order of picojoules.
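    A minimal sketch of a SPAN-style weight update follows: input, desired, and actual spike trains are convolved with an alpha kernel, and the weight change for a synapse is the integral of the convolved input multiplied by the error between the desired and actual convolved outputs. Kernel shape, time step, and learning rate are assumptions for illustration; the AFM neuron dynamics are not modelled here.

import numpy as np

DT, T_MAX, TAU, LR = 0.1, 100.0, 1.0, 0.01    # assumed kernel/learning parameters
time = np.arange(0.0, T_MAX, DT)

def kernel_trace(spike_times):
    # Convolve a spike train with an alpha kernel (t/tau) * exp(1 - t/tau).
    trace = np.zeros_like(time)
    for s in spike_times:
        t = time - s
        m = t >= 0
        trace[m] += (t[m] / TAU) * np.exp(1.0 - t[m] / TAU)
    return trace

def span_delta_w(input_spikes, desired_spikes, actual_spikes):
    # SPAN-style update for one synapse: dw ~ integral of x~(t) * (yd~(t) - ya~(t)).
    x = kernel_trace(input_spikes)
    err = kernel_trace(desired_spikes) - kernel_trace(actual_spikes)
    return LR * np.sum(x * err) * DT

# Example: the neuron spiked late (t = 60) when the target was t = 50, so the
# update for an input that fired shortly before the target comes out positive.
print(span_delta_w(input_spikes=[10.0, 30.0, 45.0],
                   desired_spikes=[50.0],
                   actual_spikes=[60.0]))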

    Optimising network interactions through device agnostic models

    Physically implemented neural networks hold the potential to achieve the performance of deep learning models by exploiting the innate physical properties of devices as computational tools. Exploiting physical processes for computation also requires considering their intrinsic dynamics, which can serve as valuable resources for processing information. However, existing computational methods are unable to extend the success of deep learning techniques to parameters that influence device dynamics, which often lack a precise mathematical description. In this work, we formulate a universal framework to optimise interactions with dynamic physical systems in a fully data-driven fashion. The framework adopts neural stochastic differential equations as differentiable digital twins, effectively capturing both the deterministic and the stochastic behaviour of devices. Differentiating through the trained models provides the mathematical estimates needed to optimise a physical neural network, harnessing the intrinsic temporal computation abilities of its physical nodes. To accurately model the behaviour of real devices, we formulate neural-SDE variants that can operate under a variety of experimental settings. Our work demonstrates the framework's applicability through simulations and physical implementations of interacting dynamic devices, while highlighting the importance of accurately capturing system stochasticity for the successful deployment of a physically defined neural network.
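    A conceptual sketch of such a digital twin is shown below, assuming a simple architecture rather than the paper's actual code: small networks parameterise the drift and diffusion of a neural SDE, an Euler-Maruyama rollout simulates the device, and gradients of a downstream loss flow back to a control input through the simulated dynamics.

import torch
import torch.nn as nn

class NeuralSDETwin(nn.Module):
    # Drift and diffusion are small neural networks conditioned on the state and
    # on a tunable control input u (a stand-in for a physical device parameter).
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim))
        self.diffusion = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                       nn.Linear(hidden, dim), nn.Softplus())

    def forward(self, x0, u, dt=0.01, steps=100):
        # Euler-Maruyama rollout; every operation is differentiable, so gradients
        # of a downstream loss reach the control input u.
        x = x0
        for _ in range(steps):
            z = torch.cat([x, u], dim=-1)
            dW = torch.randn_like(x) * dt ** 0.5
            x = x + self.drift(z) * dt + self.diffusion(z) * dW
        return x

# After fitting the twin to measured device trajectories, a control parameter u
# could be optimised by plain gradient descent through the simulated dynamics.
twin = NeuralSDETwin()
u = torch.zeros(1, 1, requires_grad=True)
x0, target = torch.zeros(1, 2), torch.ones(1, 2)
loss = ((twin(x0, u) - target) ** 2).mean()
loss.backward()
print(u.grad)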