
    Predicting voluntary movements from motor cortical activity with neuromorphic hardware

    This document is the Accepted Manuscript version of the following article: A. Lungu, A. Riehle, M. P. Nawrot, and M. Schmuker, "Predicting voluntary movements from motor cortical activity with neuromorphic hardware," IBM Journal of Research and Development, vol. 61, no. 2/3, pp. 5:1-5:12, March-May 2017. The version of record is available online at doi: 10.1147/JRD.2017.2656063. © 2017 International Business Machines Corporation.
    Neurons in the mammalian motor cortices encode physical parameters of voluntary movements during planning and execution of a motor task. Brain-machine interfaces can decode limb movements from the activity of these neurons in real time. The future goal is to control prosthetic devices in severely paralyzed patients or to restore communication if the ability to speak or make gestures is lost. Here, we implemented a spiking neural network that decodes movement intentions from individual neuronal activity recorded in the motor cortex of a monkey. The network runs on neuromorphic hardware and performs its computations in a purely spike-based fashion. It incorporates an insect-brain-inspired, three-layer architecture with 176 neurons. Cortical signals are filtered using lateral inhibition, and the network is trained in a supervised fashion to predict two opposing directions of the monkey's arm-reaching movement before the movement is carried out. Our network operates on the actual spikes emitted by motor cortical neurons, without the need to construct intermediate non-spiking representations. Using a pseudo-population of 12 manually selected neurons, it reliably predicts the movement direction with an accuracy of 89.32% on unseen data after only 100 training trials. Our results provide a proof of concept for the first-time use of a neuromorphic device for decoding movement intentions. (Peer reviewed)
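    The sketch below illustrates, in simulation, the kind of pipeline the abstract describes: a small pseudo-population of input spike trains, a lateral-inhibition-filtered hidden layer of leaky integrate-and-fire units, and two output units trained in a supervised fashion to report movement direction from spike counts. It is not the authors' implementation and does not run on neuromorphic hardware; all layer sizes, constants, the learning rule, and the synthetic spike trains are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 12, 24, 2   # pseudo-population size, hidden units, two directions
T, DT = 200, 1.0                 # time steps per trial, step size in ms
TAU, V_TH = 20.0, 1.0            # membrane time constant (ms) and firing threshold


def lif_layer(spikes_in, w):
    """Run a layer of leaky integrate-and-fire units on binary input spikes."""
    v = np.zeros(w.shape[1])
    out = np.zeros((spikes_in.shape[0], w.shape[1]))
    for t in range(spikes_in.shape[0]):
        v += DT / TAU * (-v) + spikes_in[t] @ w   # leak plus synaptic input
        fired = v >= V_TH
        out[t] = fired
        v[fired] = 0.0                            # reset after a spike
    return out


def make_trial(direction):
    """Synthetic 'cortical' spike trains: half the inputs prefer each direction."""
    rates = np.full(N_IN, 0.02)
    if direction == 0:
        rates[:N_IN // 2] += 0.06
    else:
        rates[N_IN // 2:] += 0.06
    return (rng.random((T, N_IN)) < rates).astype(float)


# Fixed random input-to-hidden weights with a crude stand-in for lateral
# inhibition: each input's drive to a hidden unit is reduced by half of its
# average drive to all hidden units, sharpening contrast across the layer.
w_ih = rng.random((N_IN, N_HID)) * 0.6
w_ih -= 0.5 * w_ih.mean(axis=1, keepdims=True)

# Supervised training of hidden-to-output weights from hidden spike counts
# (a perceptron-style rule standing in for the on-chip learning procedure).
w_ho = np.zeros((N_HID, N_OUT))
for _ in range(100):                              # 100 training trials
    d = int(rng.integers(2))
    counts = lif_layer(make_trial(d), w_ih).sum(axis=0)
    pred = int(np.argmax(counts @ w_ho))
    if pred != d:
        w_ho[:, d] += 0.01 * counts
        w_ho[:, pred] -= 0.01 * counts

# Evaluate the trained readout on unseen trials.
correct = 0
for _ in range(200):
    d = int(rng.integers(2))
    counts = lif_layer(make_trial(d), w_ih).sum(axis=0)
    correct += int(np.argmax(counts @ w_ho) == d)
print(f"held-out accuracy: {correct / 200:.2f}")
```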

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that the complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that also exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. Comment: 51 pages, 19 figures, IEEE Access.
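    As a concrete illustration of the basic RC recipe described above (fixed random recurrent weights, only a linear readout trained), here is a minimal echo state network in NumPy applied to a toy one-step-ahead prediction task. Reservoir size, spectral radius, the ridge penalty, and the synthetic signal are assumptions made for the sketch; they are not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)
N_RES = 300                  # reservoir size
SPECTRAL_RADIUS = 0.9        # keeps the reservoir dynamics stable
RIDGE = 1e-6                 # ridge penalty for the linear readout

# Fixed random weights: initialized once and never trained.
w_in = rng.uniform(-0.5, 0.5, size=N_RES)
w_res = rng.uniform(-0.5, 0.5, size=(N_RES, N_RES))
w_res *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(w_res)))


def run_reservoir(u):
    """Map a 1-D input sequence into the reservoir's high-dimensional state space."""
    x = np.zeros(N_RES)
    states = np.zeros((len(u), N_RES))
    for t, u_t in enumerate(u):
        x = np.tanh(w_in * u_t + w_res @ x)
        states[t] = x
    return states


# Toy temporal task: one-step-ahead prediction of a quasi-periodic signal.
steps = np.arange(2000)
signal = np.sin(0.07 * steps) + 0.5 * np.sin(0.23 * steps)
states, targets = run_reservoir(signal[:-1]), signal[1:]

washout, split = 100, 1600       # drop the initial transient, then split train/test
X_tr, y_tr = states[washout:split], targets[washout:split]
X_te, y_te = states[split:], targets[split:]

# Train only the readout, in closed form (ridge regression).
w_out = np.linalg.solve(X_tr.T @ X_tr + RIDGE * np.eye(N_RES), X_tr.T @ y_tr)

rmse = np.sqrt(np.mean((X_te @ w_out - y_te) ** 2))
print(f"one-step prediction RMSE on held-out data: {rmse:.4f}")
```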

    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology, with applications to compelling problems. This collection aims to further support Frontiers' strong community by recognizing highly deserving authors.

    Neuromorphic robotic platform with visual input, processor and actuator, based on spiking neural networks

    This paper describes the design and mode of operation of a neuromorphic robotic platform based on SpiNNaker, and its implementation on the goalkeeper task. The robotic system uses an address event representation (AER) camera, a dynamic vision sensor (DVS), to capture features of a moving ball, and a servo motor to position the goalkeeper to intercept the incoming ball. At the backbone of the system is a microcontroller (Arduino Due), which facilitates communication and control between the different robot parts. A spiking neural network (SNN) running on SpiNNaker predicts the arrival location of the moving ball and decides where to place the goalkeeper. In our setup, the maximum data transmission speed of the closed-loop system is approximately 3000 packets per second for both uplink and downlink, and the robot can intercept balls travelling at up to 1 m/s from a distance of about 0.8 m. The interception accuracy is up to 85%, the response latency is 6.5 ms, and the maximum power consumption is 7.15 W. This outperforms previous PC-based implementations. A simplified SNN has been developed here for the 'interception of a moving object' task in order to demonstrate the platform; designing a generalised SNN for this problem remains nontrivial. A demo video of the robot goalie is available on YouTube.
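    To make the interception-prediction step concrete, the sketch below uses a conventional, non-spiking stand-in for the SpiNNaker SNN: DVS-style events are binned in time, the ball position per bin is taken as the event centroid, and a straight-line fit extrapolates where the ball will cross the goal line. The event format, coordinates, and synthetic data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
GOAL_X = 0.0          # goal line position along the approach axis (metres)
BIN_MS = 10           # time-bin width used to group events (milliseconds)


def predict_crossing(events):
    """events: array of (t_ms, x, y) rows; returns the predicted y at GOAL_X."""
    events = np.asarray(events, dtype=float)
    bins = (events[:, 0] // BIN_MS).astype(int)
    # Ball position per time bin = centroid of that bin's events.
    centroids = np.array([events[bins == b, 1:].mean(axis=0) for b in np.unique(bins)])
    # Fit y as a linear function of x and extrapolate to the goal line.
    slope, intercept = np.polyfit(centroids[:, 0], centroids[:, 1], deg=1)
    return slope * GOAL_X + intercept


# Synthetic event stream: a ball approaching at roughly 1 m/s from 0.8 m away,
# drifting sideways, with per-event sensor noise added to both coordinates.
t = np.arange(0, 700, 5.0)                       # event timestamps in ms
x = 0.8 - 0.001 * t                              # distance to the goal line
y = 0.10 + 0.15 * (0.8 - x) / 0.8                # lateral drift; crosses at y = 0.25
events = np.stack([t,
                   x + rng.normal(0.0, 0.01, len(t)),
                   y + rng.normal(0.0, 0.01, len(t))], axis=1)

# Predict from only the early part of the trajectory, as the goalkeeper must.
y_hat = predict_crossing(events[:60])
print(f"predicted crossing at y = {y_hat:.3f} m (true value: 0.250 m)")
```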

    Emergence of associative learning in a neuromorphic inference network

    OBJECTIVE: In the theoretical framework of predictive coding and active inference, the brain can be viewed as instantiating a rich generative model of the world that predicts incoming sensory data while continuously updating its parameters via minimization of prediction errors. While this theory has been successfully applied to cognitive processes, by modelling the activity of functional neural networks at a mesoscopic scale, the validity of the approach when modelling neurons as an ensemble of inferring agents, in a biologically plausible architecture, remained to be explored. APPROACH: We modelled a simplified cerebellar circuit with individual neurons acting as Bayesian agents to simulate the classical delayed eyeblink conditioning protocol. Neurons and synapses adjusted their activity to minimize their prediction error, which was used as the network cost function. This cerebellar network was then implemented in hardware by replicating digital neuronal elements via a low-power microcontroller. MAIN RESULTS: Persistent changes of synaptic strength that mirrored neurophysiological observations emerged via local (neurocentric) prediction-error minimization, leading to the expression of associative learning. The same paradigm was effectively emulated in low-power hardware, showing remarkably efficient performance compared to conventional neuromorphic architectures. SIGNIFICANCE: These findings show that (i) an ensemble of free-energy-minimizing neurons organized in a biologically plausible architecture can recapitulate the functional self-organization observed in nature, such as associative plasticity, and (ii) a neuromorphic network of inference units can learn unsupervised tasks without embedding predefined learning rules in the circuit, thus providing a potential avenue to a novel form of brain-inspired artificial intelligence.
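    The sketch below is a minimal illustration of the core principle only, not the authors' cerebellar model or its microcontroller implementation: a single unit predicts the unconditioned stimulus (US) from the conditioned stimulus (CS), and its synaptic weight descends the gradient of the squared prediction error (equivalent to a delta/Rescorla-Wagner rule), so an anticipatory conditioned response builds up over paired trials and extinguishes when the pairing stops. The learning rate and trial counts are illustrative assumptions.

```python
ETA = 0.1                      # learning rate on the local prediction error
N_PAIRED, N_CS_ONLY = 60, 40   # acquisition trials, then extinction (CS-alone) trials

w = 0.0                        # CS -> US prediction weight (the "synapse")
history = []

for trial in range(N_PAIRED + N_CS_ONLY):
    cs = 1.0                                   # the CS is presented on every trial
    us = 1.0 if trial < N_PAIRED else 0.0      # the US follows the CS only during acquisition
    prediction = w * cs                        # the unit's prediction of the US
    error = us - prediction                    # local prediction error
    w += ETA * error * cs                      # gradient step on the squared error
    history.append(prediction)

print(f"conditioned response at the end of acquisition: {history[N_PAIRED - 1]:.2f}")
print(f"conditioned response at the end of extinction:  {history[-1]:.2f}")
```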