MEMSORN: Self-organization of an inhomogeneous memristive hardware for sequence learning
Learning is a fundamental component of creating intelligent machines. Biological intelligence orchestrates synaptic and neuronal learning at multiple time scales to self-organize populations of neurons for solving complex tasks. Inspired by this, we design and experimentally demonstrate an adaptive hardware architecture, the Memristive Self-organizing Spiking Recurrent Neural Network (MEMSORN). MEMSORN incorporates resistive memory (RRAM) in its synapses and neurons, which configure their states based on Hebbian and homeostatic plasticity, respectively. For the first time, we derive these plasticity rules directly from statistical measurements of our fabricated RRAM-based neurons and synapses. These "technologically plausible" learning rules exploit the intrinsic variability of the devices and improve the accuracy of the network on a sequence learning task by 30%. Finally, we compare the performance of MEMSORN to a fully randomly set-up spiking recurrent network on the same task, showing that self-organization improves the accuracy by more than 15%. This work demonstrates the importance of the device-circuit-algorithm co-design approach for implementing brain-inspired computing hardware.
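The abstract above pairs a Hebbian rule (strengthen synapses between co-active neurons) with a homeostatic rule (adjust each neuron's excitability toward a target firing rate). A toy NumPy sketch of that interplay follows; the network size, thresholds, and learning rates are invented for illustration and are not MEMSORN's RRAM-derived rules:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
W = rng.normal(0.0, 0.1, (n, n))         # recurrent synaptic weights
np.fill_diagonal(W, 0.0)                 # no self-connections
theta = np.full(n, 0.2)                  # per-neuron firing thresholds
eta_hebb, eta_homeo = 0.01, 0.001        # learning rates (arbitrary)
target_rate = 0.1                        # desired mean firing rate

x = (rng.random(n) < 0.2).astype(float)  # initial spike pattern
for _ in range(200):
    y = (W @ x > theta).astype(float)        # a neuron spikes when its drive exceeds its threshold
    W += eta_hebb * np.outer(y, x)           # Hebbian: strengthen synapses between co-active neurons
    theta += eta_homeo * (y - target_rate)   # homeostatic: push each neuron toward the target rate
    x = y

print(x.mean())                          # fraction of neurons active after self-organization
```

The two rules act at different time scales (fast synaptic change, slow threshold regulation), which is the interplay the hardware exploits.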
Finite-Time State Estimation for Delayed Neural Networks with Redundant Delayed Channels
Funding: National Natural Science Foundation of China (Grants 61703245 and 61873148); Taishan Scholar Project of Shandong Province of China; China Postdoctoral Science Foundation (Grant 2016M600547); Qingdao Postdoctoral Applied Research Project (Grant 2016117); Postdoctoral Special Innovation Foundation of Shandong (Grant 201701015); Royal Society of the U.K.; Alexander von Humboldt Foundation of Germany.
Non-fragile state estimation for discrete Markovian jumping neural networks
In this paper, the non-fragile state estimation problem is investigated for a class of discrete-time neural networks subject to Markovian jumping parameters and time delays. In terms of a Markov chain, the mode switching phenomenon at different times is considered in both the parameters and the discrete delays of the neural networks. To account for the possible gain variations occurring in the implementation, the gain of the estimator is assumed to be perturbed by multiplicative norm-bounded uncertainties. We aim to design a non-fragile state estimator such that, in the presence of all admissible gain variations, the estimation error converges to zero exponentially. By adopting the Lyapunov–Krasovskii functional and stochastic analysis theory, sufficient conditions are established to ensure the existence of the desired state estimator that guarantees the stability of the overall estimation error dynamics. The explicit expression of such estimators is parameterized by solving a convex optimization problem via the semi-definite programming method. A numerical simulation example is provided to verify the usefulness of the proposed methods.
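The estimator described above is a Luenberger-type observer whose gain is corrupted by norm-bounded perturbations. A toy numerical sketch of the idea, ignoring the Markovian jumps and time delays treated in the paper (the plant, nominal gain, and perturbation bound below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.5, 0.1],
              [0.0, 0.6]])          # stable plant matrix (stand-in for the neural network model)
C = np.array([[1.0, 0.0]])          # measurement matrix
L = np.array([[0.4],
              [0.1]])               # nominal estimator gain (assumed, not computed via LMIs)

x = np.array([1.0, -1.0])           # true state
xh = np.zeros(2)                    # state estimate

for _ in range(100):
    dL = 0.05 * rng.uniform(-1, 1, L.shape)           # norm-bounded gain variation
    y = C @ x                                         # measurement
    xh = A @ xh + ((L + dL) @ (y - C @ xh)).ravel()   # non-fragile observer update
    x = A @ x                                         # plant update

print(np.linalg.norm(x - xh))       # estimation error after 100 steps
```

With this (hand-picked) gain the error dynamics stay stable for every admissible perturbation, so the error decays to zero; the paper's contribution is computing such a gain systematically via semi-definite programming.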
On State Estimation for Discrete Time-Delayed Memristive Neural Networks Under the WTOD Protocol: A Resilient Set-Membership Approach
In this article, a resilient set-membership approach is put forward to deal with the state estimation problem for a class of discrete-time memristive neural networks (DMNNs) with hybrid time delays under the weighted try-once-discard protocol (WTODP). The WTODP is utilized to mitigate unnecessary network congestion occurring in the channel between the DMNNs and the state estimator. In order to ensure resilience against possible realization errors, the estimator gain is permitted to undergo some norm-bounded parameter drifts. Our objective is to design a resilient set-membership estimator (RSME) that is capable of resisting gain variations and unknown-but-bounded noises by confining the estimation error to certain ellipsoidal regions. By resorting to the recursive matrix inequality technique, sufficient conditions are acquired for the existence of the expected RSME and, subsequently, an optimization problem is formalized by minimizing the constraint ellipsoid (with respect to the estimation error) under the WTODP. Finally, a numerical simulation is carried out to validate the usefulness of the RSME.
Funding: National Natural Science Foundation of China (Grants 61873058, 61873148, and 61933007); AHPU Youth Top-Notch Talent Support Program of China (Grant 2018BJRC009); Natural Science Foundation of Universities in Anhui Province of China (Grant gxyqZD2019053); Heilongjiang Postdoctoral Sustentation Fund of China (Grant LBH-Z19048); Royal Society of the U.K.; Alexander von Humboldt Foundation of Germany.
A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning
Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that the complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on understanding brain mechanisms that also exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms by RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution.
Comment: 51 pages, 19 figures, IEEE Access
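The fixed random reservoir plus trained linear readout described above can be sketched as a minimal echo state network; the reservoir size, spectral radius, and sine-prediction task here are arbitrary illustrative choices, not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(42)

n_res = 200
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))        # fixed random input weights
W = rng.normal(0, 1, (n_res, n_res))             # fixed random recurrent weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # scale spectral radius below 1 (echo state property)

u = np.sin(0.2 * np.arange(1000)).reshape(-1, 1)  # toy task: one-step-ahead sine prediction
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u)):
    x = np.tanh(W @ x + W_in @ u[t])             # reservoir update; these weights are never trained
    states[t] = x

washout = 100                                    # discard the initial transient
X, y = states[washout:-1], u[washout + 1:]
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)    # only the linear readout is trained
mse = float(np.mean((X @ W_out - y) ** 2))
print(mse)
```

Because training touches only `W_out`, the same recipe transfers to physical reservoirs where the "weights" are fixed device dynamics and only the readout is fit.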
Realizing In-Memory Baseband Processing for Ultra-Fast and Energy-Efficient 6G
To support emerging applications ranging from holographic communications to extended reality, next-generation mobile wireless communication systems require ultra-fast and energy-efficient baseband processors. Traditional complementary metal-oxide-semiconductor (CMOS)-based baseband processors face two challenges: transistor scaling and the von Neumann bottleneck. To address these challenges, in-memory computing-based baseband processors using resistive random-access memory (RRAM) present an attractive solution. In this paper, we propose and demonstrate RRAM-implemented in-memory baseband processing for the widely adopted multiple-input-multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) air interface. Its key feature is to execute the key operations, including the discrete Fourier transform (DFT) and MIMO detection using linear minimum mean square error (L-MMSE) and zero forcing (ZF), in one step. In addition, an RRAM-based channel estimation module is proposed and discussed. Through prototyping and simulations, we demonstrate the feasibility of a full-fledged RRAM-based communication system in hardware, and large-scale simulations reveal that it can outperform state-of-the-art baseband processors with gains of 91.2 in latency and 671 in energy efficiency. Our results pave a potential pathway for RRAM-based in-memory computing to be implemented in the era of sixth-generation (6G) mobile communications.
Comment: arXiv admin note: text overlap with arXiv:2205.0356
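The ZF and L-MMSE detectors named above reduce to a (regularized) matrix inverse applied to the received vector, which is the kind of one-shot linear algebra an analog crossbar can carry out. A digital NumPy sketch of the two detectors; the channel size, QPSK modulation, and noise level are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(7)

n_tx, n_rx = 4, 8
# Random Rayleigh-fading channel matrix
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
s = rng.choice([-1.0, 1.0], n_tx) + 1j * rng.choice([-1.0, 1.0], n_tx)  # QPSK symbols
sigma2 = 0.01                                                           # noise variance
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
y = H @ s + noise                                                       # received vector

s_zf = np.linalg.pinv(H) @ y            # zero forcing: pseudo-inverse of the channel
Hh = H.conj().T
s_mmse = np.linalg.solve(Hh @ H + sigma2 * np.eye(n_tx), Hh @ y)  # L-MMSE: noise-regularized inverse

def detect(x):
    """Hard QPSK decision on each estimated symbol."""
    return np.sign(x.real) + 1j * np.sign(x.imag)

print(np.array_equal(detect(s_zf), s), np.array_equal(detect(s_mmse), s))
```

The MMSE regularizer `sigma2 * I` keeps the inverse well-conditioned when the channel is near-singular, which is why L-MMSE degrades more gracefully than ZF at low SNR.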
Gaussian states provide universal and versatile quantum reservoir computing
We establish the potential of continuous-variable Gaussian states in performing reservoir computing with linear dynamical systems in classical and quantum regimes. Reservoir computing is a machine learning approach to time series processing. It exploits the computational power, high-dimensional state space, and memory of generic complex systems to achieve its goal, giving it considerable engineering freedom compared to conventional computing or recurrent neural networks. We prove that universal reservoir computing can be achieved without nonlinear terms in the Hamiltonian or non-Gaussian resources. We find that encoding the input time series into Gaussian states is both a source and a means to tune the nonlinearity of the overall input-output map. We further show that reservoir computing can in principle be powered by quantum fluctuations, such as squeezed vacuum, instead of classical intense fields. Our results introduce a new research paradigm for quantum reservoir computing and the engineering of Gaussian quantum states, pushing both fields in a new direction.
Comment: 13 pages, 4 figures
Optical Axons for Electro-Optical Neural Networks
Recently, neuromorphic sensors, which convert analogue signals to spiking frequencies, have been reported for neurorobotics. In bio-inspired systems these sensors are connected to the main neural unit to perform post-processing of the sensor data. The performance of spiking neural networks has been improved using optical synapses, which offer parallel communication between distant neural areas but are sensitive to intensity variations of the optical signal. For systems with several neuromorphic sensors connected optically to the main unit, the use of such optical synapses is not an advantage. To address this, in this paper we propose and experimentally verify optical axons with synapses activated optically using digital signals. The synaptic weights are encoded by the energy of the stimuli, which are then optically transmitted independently. We show that optical intensity fluctuations and misalignment of the link result in a delay in the activation of the synapses. For the proposed optical axon, we have demonstrated line-of-sight transmission over a maximum link length of 190 cm with a delay of 8 μs. Furthermore, we show the axon delay as a function of the illuminance using a fitted model for which the root-mean-square (RMS) error similarity is 0.95.