
    FPGA Spiking Neural Processors With Supervised and Unsupervised Spike Timing Dependent Plasticity

    Energy-efficient architectures for brain-inspired computing have been an active area of research, driven by recent advances in neuroscience. Spiking neural networks (SNNs) are a class of artificial neural networks in which information is encoded in discrete spike events, closely resembling the biological brain. The Liquid State Machine (LSM) is a computational model developed in theoretical neuroscience to describe information processing in recurrent neural circuits and can be used to model recurrent SNNs. An LSM is composed of an input layer, a reservoir, and an output (readout) layer. A major challenge in SNNs is training the network with discrete spiking events, for which traditional loss functions and optimization techniques cannot be applied directly. Spike Timing Dependent Plasticity (STDP) is an unsupervised learning algorithm that updates synaptic weights based on the time difference between the spikes of pre-synaptic and post-synaptic neurons. STDP is a localized learning rule and induces self-organizing behavior resulting in sparse network structures, making it a suitable choice for low-cost hardware implementation. SNNs are also hardware friendly because the presence or absence of a spike can be encoded with a single binary digit. In this research, an SNN processor with an energy-efficient architecture is developed and implemented on the Xilinx Zynq ZC706 FPGA platform. Hardware-friendly learning rules based on STDP are proposed, and the reservoir and readout layers are trained with these learning algorithms. To achieve energy efficiency, a sparsification algorithm based on the STDP rule is proposed and implemented. On-chip training and inference are carried out, and it is shown that with the proposed unsupervised STDP for reservoir training and supervised STDP for readout training, a classification accuracy of 95% is achieved on the TI corpus speech data set. The classification performance, hardware overhead, and power consumption of the processor under different learning schemes are reported.
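    For readers unfamiliar with STDP, the following is a minimal sketch of the classic pair-based rule that the abstract describes (potentiation when a pre-synaptic spike precedes a post-synaptic one, depression otherwise). The amplitudes, time constants, and weight bounds are illustrative assumptions, and this is a textbook variant, not the hardware-friendly rules proposed in the work.

        import numpy as np

        def stdp_update(w, t_pre, t_post,
                        a_plus=0.01, a_minus=0.012,
                        tau_plus=20.0, tau_minus=20.0,
                        w_min=0.0, w_max=1.0):
            """Pair-based STDP: potentiate when the pre-synaptic spike
            precedes the post-synaptic spike, depress otherwise.
            Spike times and time constants are in milliseconds;
            all constants here are illustrative, not from the paper."""
            dt = t_post - t_pre
            if dt > 0:    # pre before post -> long-term potentiation
                w += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:  # post before pre -> long-term depression
                w -= a_minus * np.exp(dt / tau_minus)
            return float(np.clip(w, w_min, w_max))  # keep weight bounded

        # Example: a pre-synaptic spike at 10 ms followed by a
        # post-synaptic spike at 15 ms strengthens the synapse.
        print(stdp_update(0.5, t_pre=10.0, t_post=15.0))

    Because the update depends only on the spike times of the two neurons a synapse connects, it is local in exactly the sense the abstract exploits for low-cost hardware.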

    Information-Theoretic Intrinsic Plasticity for Online Unsupervised Learning in Spiking Neural Networks

    As a self-adaptive mechanism, intrinsic plasticity (IP) plays an essential role in maintaining homeostasis and shaping the dynamics of neural circuits. From a computational point of view, IP has the potential to enable promising non-Hebbian learning in artificial neural networks. While IP-based learning has been attempted for spiking neuron models, the existing IP rules are ad hoc in nature, and their practical success has not been demonstrated, particularly on real-life learning tasks. This work aims to address the theoretical and practical limitations of existing work by proposing a new IP rule named SpiKL-IP. SpiKL-IP is developed from a rigorous information-theoretic approach in which the target of IP tuning is to maximize the entropy of the output firing rate distribution of each spiking neuron. This goal is achieved by tuning the output firing rate distribution toward a targeted optimal exponential distribution. Operating on a proposed firing-rate transfer function, SpiKL-IP adapts the intrinsic parameters of a spiking neuron while minimizing the KL-divergence from the targeted exponential distribution to the actual output firing rate distribution. SpiKL-IP can robustly operate in an online manner under complex inputs and network settings. Simulation studies demonstrate that applying SpiKL-IP to individual neurons, in isolation or as part of a larger spiking neural network, robustly produces the desired exponential distribution. Evaluation on real-world speech and image classification tasks shows that SpiKL-IP noticeably outperforms two existing IP rules and can boost recognition accuracy by margins of more than 16%.
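    To make the objective concrete, the following LaTeX sketch writes out the divergence described above. The symbol \mu (the target mean firing rate) is an assumed parameter, and since the abstract's phrasing leaves the direction of the divergence ambiguous, the direction shown here is the one that makes the entropy-maximization connection explicit.

        % Target: exponential firing-rate distribution with assumed mean \mu
        %   p^{*}(y) = \tfrac{1}{\mu}\, e^{-y/\mu}, \qquad y \ge 0 .
        %
        % Divergence between the actual firing-rate distribution p(y)
        % and the target:
        %   D_{\mathrm{KL}}\!\left(p \,\middle\|\, p^{*}\right)
        %     = \int_{0}^{\infty} p(y)\, \log \frac{p(y)}{p^{*}(y)}\, dy
        %     = -H(p) + \frac{\mathbb{E}_{p}[y]}{\mu} + \log \mu .
        %
        % With the mean firing rate held near \mu, minimizing this
        % divergence is equivalent to maximizing the output entropy H(p):
        % the exponential is the maximum-entropy distribution on
        % [0, \infty) for a fixed mean.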

    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors.

    Combined behavioral and neural investigations of pup retrieval

    The ability to adequately adapt to a dramatically changing environment is crucial for an animal’s survival. When female mice give birth, their environment changes drastically and they immediately need to care for their offspring, thereby ensuring the offspring’s wellbeing. Pups completely transform the environment around the mother and trigger a number of new behaviors, as they provide a slew of new sensory inputs: tactile and olfactory, but also auditory. Pups emit ultrasonic vocalizations (USVs) when isolated outside the nest, triggering retrieval behavior in mothers (MTs). After the pups have been returned to the nest and are cared for, USV emission ceases. Interestingly, not only MTs but also virgin mice can perform pup retrieval, provided that they either have experience with pups in their home cage or are repeatedly exposed to pups in a pup retrieval task. These two groups are referred to as experienced virgins (EVs) and naive virgins (NVs). Studies have shown that excitatory neurons in the auditory cortex of MTs and EVs respond more strongly to pup calls over time. However, these studies were performed under unnatural, head-restrained conditions. Here, we provide a framework in which MTs, EVs, and NVs retrieve pups in a semi-natural, freely behaving setting. During the experiment, each animal carries a head-mounted miniscope that allows imaging the activity of many neurons in the auditory cortex. The entire multisensory scenery is therefore accessible to the mice, which has been shown to impact auditory responses to pup calls. We show differences in the behavioral performance of the three groups: MTs display the most skilled and fine-tuned pup retrieval behavior, already highly effective during the final stage of pregnancy, while EVs show slightly reduced pup retrieval abilities, still superior to those of NVs, which retrieve pups effectively only after a few days. Additionally, we discovered that not only the pups emitted USVs but the adult mice vocalized as well; intriguingly, they vocalized significantly more when pups were present in the behavioral arena than when they were alone. Neurons in the auditory cortex with clear responses to pup calls were scarce in all groups. Nevertheless, the neuronal population as a whole showed significant responses to pup calls, most clearly in MTs, less so in EVs, and least pronounced in NVs. Strikingly, other, more global and behaviorally relevant events, such as pup retrievals and nest entries and exits, showed a distinct neural signature. Despite the scarcity of clear single-cell responses to pup calls, the population of auditory cortex neurons carried information about pup call presence throughout all sessions in all groups, as measured by a decoding analysis. This population code can be described as sparse and dynamic, with a few highly informative neurons, i.e. high-weight neurons, carrying most of the decoding weight in a given session. This sparsity was most pronounced in MTs and least so in NVs. Moreover, these high-weight neurons were largely non-overlapping with the high-weight neurons for other, non-pup-call-related event types. When relating single-trial pup call decoding accuracies to the behavioral performance in the corresponding trial, we identified a significant relationship in EVs that was absent in MTs and NVs, suggesting that improved single-trial decoding accuracy was linked to improved pup retrieval ability.
    Altogether, this study shows how different pup exposure regimes affect the learning of an essential offspring-caring behavior, and how these different types of learning differentially enhance the neural representations of the associated sensory cues.
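    As a rough illustration of the kind of decoding analysis described above, the following is a minimal sketch: a logistic-regression decoder is trained to detect pup call presence from population activity, and the magnitudes of its coefficients are used to flag "high-weight" neurons. The data shapes, cross-validation scheme, and top-5 cutoff are illustrative assumptions, not the thesis's actual pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Assumed toy data: a trials x neurons activity matrix and
        # binary labels (1 = pup call present, 0 = absent). With random
        # data like this, decoding accuracy will sit near chance (~0.5).
        n_trials, n_neurons = 200, 50
        X = rng.normal(size=(n_trials, n_neurons))
        y = rng.integers(0, 2, size=n_trials)

        # Population decoder: can pup call presence be read out from
        # the joint activity of all recorded neurons?
        decoder = LogisticRegression(max_iter=1000)
        accuracy = cross_val_score(decoder, X, y, cv=5).mean()

        # "High-weight" neurons: the few cells whose coefficients carry
        # most of the decoding weight in this session.
        decoder.fit(X, y)
        weights = np.abs(decoder.coef_.ravel())
        high_weight = np.argsort(weights)[::-1][:5]  # top 5, arbitrary cutoff

        print(f"decoding accuracy: {accuracy:.2f}")
        print(f"high-weight neurons: {high_weight}")

    A sparse population code in this framing is one in which a small set of such high-weight neurons dominates the decoder while the remaining coefficients stay near zero.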

    Brain Computations and Connectivity [2nd edition]

    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and is offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. To understand this, it is essential to know what is computed by different brain systems and how those computations are performed. The aim of this book is to elucidate what is computed in different brain systems and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better, in health and in disease. This understanding can potentially be applied to the treatment of brain disease and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. The book is pioneering in taking this approach to brain function: considering both what is computed by many of our brain systems and how it is computed. It updates the earlier book, Rolls (2021) Brain Computations: What and How, Oxford University Press, with much new evidence, including the connectivity of the human brain. Brain Computations and Connectivity will be of interest to all scientists interested in how the brain works, whether they come from neuroscience, from the medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.