10 research outputs found
Digital multiplier-less implementation of high-precision SDSP and synaptic strength-based STDP
Spiking neural networks (SNNs) can achieve lower latency and higher efficiency than traditional neural networks when implemented in dedicated neuromorphic hardware. In both biological and artificial spiking neuronal systems, synaptic modification is the main mechanism for learning. Plastic synapses are thus the core component of neuromorphic hardware with on-chip learning capability. Recently, several research groups have designed hardware architectures for modeling plasticity in SNNs for various applications. Following these research efforts, this paper proposes multiplier-less digital neuromorphic circuits for two plasticity learning rules: spike-driven synaptic plasticity (SDSP) and synaptic strength-based spike timing-dependent plasticity (SSSTDP). The proposed architectures increase the precision of the plastic synaptic weights and are suitable for SNN architectures that require more precise calculations. The proposed models are validated in MATLAB simulations and in physical implementations on a field-programmable gate array (FPGA).
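To make the idea of a multiplier-less plasticity circuit concrete, the following is a minimal sketch of how an STDP-style weight update can be built from additions, subtractions, and bit-shifts only. The fixed-point weight range and shift amounts are assumptions chosen for illustration, not values taken from the article.

```python
# Illustrative sketch only: a pair-based, multiplier-less STDP update in which
# exponential decays and learning-rate scalings are approximated by right
# bit-shifts, so the datapath needs adders and shifters but no multipliers.
# All constants (shift amounts, 8-bit weight range) are assumptions.

def shift_decay(trace, shift=4):
    """Approximate exponential decay of a spike trace: trace -= trace >> shift."""
    return trace - (trace >> shift)

def stdp_update(weight, pre_trace, post_trace, pre_spike, post_spike, w_max=255):
    """One synapse update using only add, subtract, shift, and clamp."""
    if post_spike:                      # potentiation scaled by the presynaptic trace
        weight += pre_trace >> 2
    if pre_spike:                       # depression scaled by the postsynaptic trace
        weight -= post_trace >> 2
    return max(0, min(w_max, weight))   # keep the weight inside the fixed-point range
```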
A Digital Multiplier-less Neuromorphic Model for Learning a Context-Dependent Task
The highly efficient performance-to-resources trade-off of the biological brain is a motivation for research on neuromorphic computing. Neuromorphic engineers develop event-based spiking neural networks (SNNs) in hardware. Learning in SNNs is a challenging topic of current research. Reinforcement learning (RL) is a particularly promising learning paradigm, important for developing autonomous agents. In this paper, we propose a digital multiplier-less hardware implementation of an SNN with RL capability. The network is able to learn stimulus-response associations in a context-dependent learning task. Validated in a robotic experiment, the proposed model replicates the behavior observed both in animal experiments and in the corresponding computational model.
Digital Multiplier-Less Spiking Neural Network Architecture of Reinforcement Learning in a Context-Dependent Task
Neuromorphic engineers develop event-based spiking neural networks (SNNs) in hardware. These SNNs resemble the dynamics of biological neurons more closely than conventional artificial neural networks and achieve higher efficiency thanks to the event-based, asynchronous nature of their processing. Learning in hardware SNNs, however, is a more challenging task. Conventional supervised learning methods cannot be applied directly to SNNs because of the non-differentiable, event-based nature of their activation. For this reason, learning in SNNs is currently an active research topic. Reinforcement learning (RL) is a particularly promising learning method for neuromorphic implementation, especially in the field of autonomous agent control. An SNN realization of a bio-inspired RL model is the focus of this work. In particular, in this article we propose a new digital multiplier-less hardware implementation of an SNN with RL capability. We show how the proposed network can learn stimulus-response associations in a context-dependent task. The task is inspired by biological experiments that study RL in animals. The architecture is described using the standard digital design flow and uses power- and space-efficient cores. The proposed hardware SNN model is compared both to data from animal experiments and to a computational model. We also perform a comparison to the behavioral experiments using a robot, to show the learning capability of the hardware in a closed sensory-motor loop.
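As a rough illustration of how reward can gate spike-based plasticity in bio-inspired RL models of this kind, the sketch below shows a generic reward-modulated STDP rule with an eligibility trace. Parameter names and values are assumptions for illustration, not the architecture described in the article.

```python
# Generic reward-modulated STDP sketch: the raw STDP contribution is stored in
# an eligibility trace and only written into the weight when a reward arrives.
# This illustrates the mechanism class; it is not the article's circuit.

def r_stdp_step(weight, eligibility, stdp_delta, reward,
                trace_decay=0.9, lr=0.01, w_max=1.0):
    """One synapse update: decay the eligibility trace, add the new STDP
    contribution, then apply it to the weight gated by the reward signal."""
    eligibility = trace_decay * eligibility + stdp_delta
    weight = max(0.0, min(w_max, weight + lr * reward * eligibility))
    return weight, eligibility

# Example: the synapse only changes when the reward is non-zero.
w, e = 0.5, 0.0
w, e = r_stdp_step(w, e, stdp_delta=0.2, reward=0.0)   # no reward -> trace builds up
w, e = r_stdp_step(w, e, stdp_delta=0.0, reward=1.0)   # reward -> trace committed
```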
Low-Energy and Fast Spiking Neural Network For Context-Dependent Learning on FPGA
Supervised, unsupervised, and reinforcement learning (RL) are known as the most powerful learning paradigms empowering neuromorphic systems. These systems typically take advantage of unsupervised learning because it allows them to learn the distribution of sensory information. However, to perform a task it is important to have not only sensory information but also information about the context in which the system is operating. In this sense, reinforcement learning is very powerful for interacting with the environment while performing a context-dependent task. The predominant motivation for this brief is to present a digital architecture for a spiking neural network (SNN) model with RL capability suitable for learning a context-dependent task. The proposed architecture is composed of hardware-friendly leaky integrate-and-fire (LIF) neurons and spike timing-dependent plasticity (STDP)-based synapses implemented on a field-programmable gate array (FPGA). Hardware synthesis and physical implementations show that the resulting circuits can faithfully reproduce the outcome of a learning task previously performed both in animal experimentation and in computational modeling. Compared to state-of-the-art neuromorphic FPGA circuits with context-dependent learning capability, our circuit fires 10.7 times fewer spikes, which accelerates learning 15 times, while requiring 16 times less energy. This is a significant step toward achieving fast and low-energy SNNs with context-dependent learning ability on FPGAs.
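For reference, a hardware-friendly LIF neuron of the kind the abstract mentions can be written so that the membrane leak is a bit-shift rather than a multiplication, which maps directly onto FPGA adders. The sketch below uses illustrative fixed-point parameters that are assumptions, not the published design.

```python
# Minimal fixed-point LIF sketch: the membrane leak is v >> leak_shift, so the
# update needs no multiplier. Threshold, reset, and shift values are assumed.

def lif_step(v, input_current, leak_shift=3, v_thresh=1000, v_reset=0):
    """One LIF update; returns (new membrane value, spike flag)."""
    v = v - (v >> leak_shift) + input_current   # leak via shift, then integrate
    if v >= v_thresh:                           # threshold crossing emits a spike
        return v_reset, True
    return v, False

# Example: a constant input current drives periodic spiking.
v, spike_count = 0, 0
for _ in range(100):
    v, fired = lif_step(v, input_current=150)
    spike_count += fired
```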
Efficient levels of spatial pyramid representation for local binary patterns
Local binary patterns (LBPs) are a well-known operator for rotation- and scale-invariant texture classification. A recent extension of this operator is the pyramid transform domain approach on LBPs (PLBP). Obtaining more accuracy by using more pyramid representations is an important result of PLBP; however, this increases not only the feature dimensionality but also the classification computational time (CT). This study illustrates that more pyramid image representations will not improve the performance of the PLBP. The authors evaluate efficient levels of representation for the PLBP descriptor. In addition, the authors propose feature selection approaches, such as the multi-level and multi-resolution (ML + MR) approach and the multi-level, multi-resolution and multi-band (ML + MR + MB) approach, and discuss their efficiency and CT. Experimental results show that the proposed feature selection approaches improve the accuracy of texture classification with fewer pyramid image representations. In addition, replacing the Chi-2 similarity measurement with the Czekanowski measure improves the accuracy of texture classification.
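For readers unfamiliar with the base operator, the following is a minimal sketch of the plain 8-neighbour LBP code. The pyramid/PLBP extension and the proposed feature-selection schemes are not reproduced here, and the neighbour ordering is an assumption.

```python
# Basic 3x3 LBP code: threshold the eight neighbours against the centre pixel
# and pack the resulting bits into one byte. Pyramid levels (PLBP) would apply
# this to successively downsampled images; that part is omitted here.

import numpy as np

def lbp_code(patch):
    """LBP code of a 3x3 patch (neighbour ordering is an illustrative choice)."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

# Example: the brighter right-hand column sets three bits of the code.
patch = np.array([[10, 10, 90],
                  [10, 50, 90],
                  [10, 10, 90]])
print(lbp_code(patch))   # -> 28
```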