    MEMSORN: Self-organization of an inhomogeneous memristive hardware for sequence learning

    Learning is a fundamental component of creating intelligent machines. Biological intelligence orchestrates synaptic and neuronal learning at multiple time scales to self-organize populations of neurons for solving complex tasks. Inspired by this, we design and experimentally demonstrate an adaptive hardware architecture, the Memristive Self-organizing Spiking Recurrent Neural Network (MEMSORN). MEMSORN incorporates resistive memory (RRAM) in its synapses and neurons, which configure their state based on Hebbian and homeostatic plasticity, respectively. For the first time, we derive these plasticity rules directly from the statistical measurements of our fabricated RRAM-based neurons and synapses. These "technologically plausible" learning rules exploit the intrinsic variability of the devices and improve the accuracy of the network on a sequence learning task by 30%. Finally, we compare the performance of MEMSORN to a fully randomly set-up spiking recurrent network on the same task, showing that self-organization improves the accuracy by more than 15%. This work demonstrates the importance of the device-circuit-algorithm co-design approach for implementing brain-inspired computing hardware.
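
    The abstract does not give the derived plasticity rules themselves; purely as an illustration of how Hebbian synaptic updates and homeostatic threshold adaptation interact in a spiking recurrent network, a minimal sketch might look as follows (all constants, variable names, and rule forms are hypothetical, not the measured RRAM-derived rules):

    import numpy as np

    # Illustrative sketch: Hebbian weight updates plus homeostatic threshold
    # adaptation in a binary spiking recurrent network.
    rng = np.random.default_rng(0)
    N = 100                            # number of neurons
    W = rng.normal(0, 0.1, (N, N))     # recurrent weights ("RRAM synapses")
    theta = np.full(N, 0.5)            # firing thresholds ("RRAM neurons")
    eta_hebb, eta_homeo, target_rate = 1e-3, 1e-3, 0.1

    x = np.zeros(N)                    # spikes from the previous time step
    for t in range(1000):
        drive = W @ x + rng.normal(0, 0.2, N)      # recurrent + external input
        y = (drive > theta).astype(float)          # spike if drive exceeds threshold
        W += eta_hebb * np.outer(y, x)             # Hebbian: co-active pre/post pairs strengthen
        theta += eta_homeo * (y - target_rate)     # homeostatic: push firing rates toward target
        x = y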

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that the complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on understanding brain mechanisms that also exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms by RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. Comment: 51 pages, 19 figures, IEEE Access.
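
    As a point of reference for readers new to RC, the fixed random reservoir plus trained linear readout described above can be sketched in a few lines (an echo-state-style toy example; the task, sizes, and hyperparameters are illustrative):

    import numpy as np

    # Toy echo-state reservoir: random, fixed recurrent weights; only the
    # linear readout is trained (here by ridge regression).
    rng = np.random.default_rng(1)
    N, T = 200, 500
    W_in = rng.uniform(-0.5, 0.5, N)                   # fixed input weights
    W = rng.normal(0, 1, (N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep spectral radius below 1

    u = np.sin(0.1 * np.arange(T))                     # input time series
    target = np.roll(u, -1)                            # task: one-step-ahead prediction

    states = np.zeros((T, N))
    x = np.zeros(N)
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])               # nonlinear map into a high-dimensional state
        states[t] = x

    ridge = 1e-6                                       # train only the linear readout
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
    prediction = states @ W_out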

    Realizing In-Memory Baseband Processing for Ultra-Fast and Energy-Efficient 6G

    To support emerging applications ranging from holographic communications to extended reality, next-generation mobile wireless communication systems require ultra-fast and energy-efficient baseband processors. Traditional complementary metal-oxide-semiconductor (CMOS)-based baseband processors face two challenges: transistor scaling and the von Neumann bottleneck. To address these challenges, in-memory computing-based baseband processors using resistive random-access memory (RRAM) present an attractive solution. In this paper, we propose and demonstrate RRAM-implemented in-memory baseband processing for the widely adopted multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) air interface. Its key feature is to execute the key operations, including the discrete Fourier transform (DFT) and MIMO detection using linear minimum mean square error (L-MMSE) and zero forcing (ZF), in one step. In addition, an RRAM-based channel estimation module is proposed and discussed. Through prototyping and large-scale simulations, we demonstrate the feasibility of a full-fledged RRAM-based communication system in hardware and show that it can outperform state-of-the-art baseband processors with a gain of 91.2× in latency and 671× in energy efficiency. Our results pave a potential pathway for RRAM-based in-memory computing to be implemented in the era of sixth-generation (6G) mobile communications. Comment: arXiv admin note: text overlap with arXiv:2205.0356
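
    The detectors named above reduce to fixed matrix operations, which is what makes one-step analog (in-memory) evaluation conceivable; for reference, their standard digital forms for a received vector y = Hx + n are shown below (textbook expressions, not the paper's RRAM crossbar mapping):

    import numpy as np

    # Standard zero-forcing (ZF) and linear MMSE detection for y = H x + n.
    rng = np.random.default_rng(2)
    Nr, Nt, sigma2 = 8, 4, 0.01
    H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
    x = rng.choice([-1, 1], Nt) + 1j * rng.choice([-1, 1], Nt)      # QPSK symbols
    n = np.sqrt(sigma2 / 2) * (rng.normal(size=Nr) + 1j * rng.normal(size=Nr))
    y = H @ x + n

    x_zf = np.linalg.pinv(H) @ y                                    # zero forcing: pseudo-inverse
    G = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(Nt)) @ H.conj().T
    x_mmse = G @ y                                                  # linear MMSE equalizer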

    Gaussian states provide universal and versatile quantum reservoir computing

    We establish the potential of continuous-variable Gaussian states in performing reservoir computing with linear dynamical systems in classical and quantum regimes. Reservoir computing is a machine learning approach to time series processing. It exploits the computational power, high-dimensional state space and memory of generic complex systems to achieve its goal, giving it considerable engineering freedom compared to conventional computing or recurrent neural networks. We prove that universal reservoir computing can be achieved without nonlinear terms in the Hamiltonian or non-Gaussian resources. We find that encoding the input time series into Gaussian states is both a source and a means to tune the nonlinearity of the overall input-output map. We further show that reservoir computing can in principle be powered by quantum fluctuations, such as squeezed vacuum, instead of classical intense fields. Our results introduce a new research paradigm for quantum reservoir computing and the engineering of Gaussian quantum states, pushing both fields in a new direction. Comment: 13 pages, 4 figures.
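
    A Gaussian state is fully specified by its quadrature mean vector and covariance matrix, and linear (Gaussian) dynamics transform them as r -> S r, V -> S V S^T for a symplectic matrix S. The sketch below illustrates that moment-propagation picture and how encoding the input into the state (here, as squeezing) makes the readout features nonlinear in the input; the network, encoding, and feature choice are placeholders, not the paper's construction:

    import numpy as np

    # Quadrature ordering r = (x1..xn, p1..pn): a beam-splitter network built
    # from an orthogonal matrix O has the symplectic form S = diag(O, O).
    rng = np.random.default_rng(3)
    n = 4
    r, V = np.zeros(2 * n), np.eye(2 * n)              # start from the vacuum state
    O, _ = np.linalg.qr(rng.normal(size=(n, n)))       # random orthogonal mode mixer
    S = np.kron(np.eye(2), O)                          # passive Gaussian "reservoir"

    u = np.sin(0.2 * np.arange(200))                   # classical input time series
    features = []
    for u_t in u:
        z = 0.2 * u_t                                  # encode input as squeezing of mode 1
        Sq = np.eye(2 * n)
        Sq[0, 0], Sq[n, n] = np.exp(-z), np.exp(z)     # single-mode squeezer (symplectic)
        r, V = S @ (Sq @ r), S @ (Sq @ V @ Sq.T) @ S.T
        features.append(V[np.triu_indices(2 * n)])     # second moments, fed to a linear readout
    features = np.array(features)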

    Optical Axons for Electro-Optical Neural Networks

    Recently, neuromorphic sensors, which convert analogue signals to spiking frequencies, have been reported for neurorobotics. In bio-inspired systems, these sensors are connected to the main neural unit to perform post-processing of the sensor data. The performance of spiking neural networks has been improved using optical synapses, which offer parallel communication between distant neural areas but are sensitive to intensity variations of the optical signal. For systems with several neuromorphic sensors connected optically to the main unit, the use of optical synapses is not an advantage. To address this, in this paper we propose and experimentally verify optical axons with synapses activated optically using digital signals. The synaptic weights are encoded by the energy of the stimuli, which are then optically transmitted independently. We show that optical intensity fluctuations and link misalignment result in a delay in the activation of the synapses. For the proposed optical axon, we have demonstrated line-of-sight transmission over a maximum link length of 190 cm with a delay of 8 μs. Furthermore, we show the axon delay as a function of illuminance using a fitted model for which the root mean square error (RMS) similarity is 0.95.
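
    The reported dependence of activation delay on illuminance is consistent with a simple energy-accumulation picture: if a synapse activates once the received optical stimulus has delivered a fixed energy, then a dimmer or misaligned link takes proportionally longer to reach that energy. A purely illustrative sketch of this reasoning (the threshold energy, powers, and the inverse-power relation are hypothetical, not the paper's fitted model):

    # Illustrative only: activation delay t = E_th / P for a fixed activation
    # energy E_th and received optical power P (proportional to illuminance).
    E_th = 4e-6                                    # hypothetical activation energy, joules
    for P_uW in (0.5, 1.0, 2.0, 4.0):              # received optical power, microwatts
        delay_us = E_th / (P_uW * 1e-6) * 1e6      # delay in microseconds
        print(f"P = {P_uW:.1f} uW -> delay ~ {delay_us:.1f} us")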