
    Adaptive motor control and learning in a spiking neural network realised on a mixed-signal neuromorphic processor

    Neuromorphic computing is a new paradigm for the design of both computing hardware and algorithms, inspired by biological neural networks. Its event-based nature and inherent parallelism make neuromorphic computing a promising paradigm for building efficient neural-network-based architectures for the control of fast and agile robots. In this paper, we present a spiking neural network architecture that uses sensory feedback to control the rotational velocity of a robotic vehicle. When the velocity reaches the target value, the mapping from the target velocity of the vehicle to the correct motor command, both represented in the spiking neural network on the neuromorphic device, is autonomously stored on the device using on-chip plastic synaptic weights. We validate the controller using the wheel motor of a miniature mobile vehicle, with an inertial measurement unit providing the sensory feedback, and demonstrate online learning of a simple 'inverse model' in a two-layer spiking neural network on the neuromorphic chip. The prototype neuromorphic device, which features 256 spiking neurons, allows us to realise a simple proof-of-concept architecture for purely neuromorphic motor control and learning. The architecture can easily be scaled up if a larger neuromorphic device is available.
    Comment: 6+1 pages, 4 figures; will appear in a robotics conference
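    The learning scheme described above can be illustrated with a much-simplified, rate-based sketch: a population code stands in for the input layer, a single set of plastic weights maps the encoded target velocity to a motor command, and the measured velocity drives the weight update once the vehicle has settled. The plant model, population code, and learning rule below are illustrative assumptions, not the authors' on-chip spiking implementation.

```python
import numpy as np

# Hypothetical first-order plant: rotational velocity follows the motor command
# with an illustrative gain and time constant (not taken from the paper).
def plant_step(velocity, command, gain=2.0, tau=0.2, dt=0.01):
    return velocity + (dt / tau) * (gain * command - velocity)

# Population code for the target velocity, standing in for the input layer
# of the two-layer spiking network.
centers = np.linspace(-1.0, 1.0, 32)

def encode(v, width=0.15):
    return np.exp(-((v - centers) ** 2) / (2.0 * width ** 2))

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.01, size=centers.size)  # plastic "synaptic" weights
eta = 0.05                                    # learning rate

for trial in range(500):
    target = rng.uniform(-1.0, 1.0)
    x = encode(target)
    command = float(w @ x)
    velocity = 0.0
    for _ in range(200):                      # let the vehicle settle
        velocity = plant_step(velocity, command)
    # Sensory feedback (the measured velocity) adjusts the plastic weights,
    # storing the mapping from target velocity to motor command (delta rule).
    w += eta * (target - velocity) * x
```

    After a few hundred trials the stored weights should produce commands whose steady-state velocity tracks new targets, mirroring the on-chip 'inverse model' at a much coarser level of abstraction.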

    Modular Acquisition and Stimulation System for Timestamp-Driven Neuroscience Experiments

    Dedicated systems are fundamental for neuroscience experimental protocols that require timing determinism and synchronous stimulus generation. We developed a data acquisition and stimulus generation system for neuroscience research, optimized for recording timestamps from up to 6 spiking neurons and entirely specified in a high-level Hardware Description Language (HDL). Despite the logic-complexity penalty of synthesizing from such a language, it was possible to implement our design in a low-cost, small reconfigurable device. Within a modular framework, we explored two different memory arbitration schemes for our system, evaluating both their logic-element usage and their resilience to input activity bursts. One was designed in a decoupled, latency-insensitive style, allowing easier code reuse, while the other adopted a centralized scheme constructed specifically for our application. The use of a high-level HDL allowed straightforward, stepwise code modifications to transform one architecture into the other. The achieved modularity is very useful for rapidly prototyping novel electronic instrumentation systems tailored to scientific research.
    Comment: Preprint submitted to ARC 2015. Extended: 16 pages, 10 figures. The final publication is available at link.springer.co
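    As a rough software-level illustration of the arbitration problem that the paper explores in hardware, the sketch below merges per-channel timestamp FIFOs into a single shared memory using a round-robin grant, one write per cycle. The six channels match the paper's target of up to 6 spiking neurons, but the FIFO depth, event format, and arbitration policy are illustrative assumptions rather than the actual HDL design.

```python
from collections import deque

# Per-channel FIFOs buffering (channel, timestamp) events.
NUM_CHANNELS, FIFO_DEPTH = 6, 4
fifos = [deque() for _ in range(NUM_CHANNELS)]
memory = []  # stand-in for the single shared memory write port

def record_event(channel, timestamp):
    """Called when a spike is detected on a channel."""
    if len(fifos[channel]) < FIFO_DEPTH:
        fifos[channel].append((channel, timestamp))
    # else: the event is dropped, mimicking overflow during an activity burst

def arbiter_cycle(next_channel):
    """One round-robin arbitration cycle: grant at most one FIFO a write to
    the shared memory per cycle, then rotate priority past the winner."""
    for i in range(NUM_CHANNELS):
        ch = (next_channel + i) % NUM_CHANNELS
        if fifos[ch]:
            memory.append(fifos[ch].popleft())
            return (ch + 1) % NUM_CHANNELS
    return next_channel

# A burst on channels 0-2 at tick 100 is drained over the next few cycles.
for ch in range(3):
    record_event(ch, 100)
grant = 0
for _ in range(4):
    grant = arbiter_cycle(grant)
print(memory)  # [(0, 100), (1, 100), (2, 100)]
```

    A centralized scheme of the kind the paper also evaluates would instead place all buffering and priority decisions in one dedicated controller; the trade-off the authors measure is logic-element usage versus resilience to such bursts.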

    An Energy Efficient non-volatile FPGA Digital Processor for Brain Neuromodulation

    PhD Thesis. Brain stimulation technologies have the potential to provide considerable clinical benefits for people with a range of neurological disorders. Recent neuroscience studies have shown that considerable information about brain states is contained in low-frequency local field potential (lf-LFP; below 5 Hz) recordings, with application in real-time closed-loop neurostimulation for treating neurological disorders. Given that these signals can be sampled at a low rate and hence provide sparse data streams, there is an opportunity to design implantable neuroprostheses with long battery life cycles that still provide enough processing power to implement long-term, real-time closed-loop control algorithms. In this thesis, a closed-loop embedded digital processor has been created for use in rodent neuroscience experiments. The first contribution of this work is a mathematical, analytical design approach for a feedback controller that suppresses high-amplitude epileptic activity in a neural mass model, giving a better understanding of how closed-loop stimulation can be performed to control seizures. The second and third contributions together present an exploratory, energy-efficient digital processor architecture built from commercial off-the-shelf non-volatile FPGAs and a microcontroller for the sparse data processing required in brain neuromodulation. A digital hardware design of an exemplar PID control algorithm has been implemented on this architecture. A new power-computation scheme for this time-driven approach significantly reduces power consumption, suggesting that a combined control system of non-volatile FPGAs and a microcontroller outperforms one in which the FPGA is replaced by a second microcontroller, in both computing time and energy consumption, given that one microcontroller is always required. Taken together, this energy-efficient digital processor architecture gives important insights and viewpoints for the further advancement of neuroprostheses for brain neurostimulation towards lower power consumption at sparse sampling rates.
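    The exemplar PID controller mentioned above can be summarized with a minimal discrete-time sketch operating on sparse, low-rate samples. The gains, sampling period, biomarker model, and clamping range below are illustrative assumptions; the thesis implements the controller in non-volatile FPGA hardware rather than in software.

```python
class PID:
    """Minimal discrete PID controller; gains and limits are illustrative."""
    def __init__(self, kp, ki, kd, dt, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(u, self.u_min), self.u_max)  # clamp the stimulation command

# Sparse, low-rate samples (e.g. an lf-LFP biomarker below 5 Hz) mean the
# controller only needs to run once per sample, with a long period dt.
pid = PID(kp=0.8, ki=0.4, kd=0.05, dt=0.5)
target, biomarker = 0.2, 1.0
for _ in range(40):
    # Reverse-acting loop: stimulation should rise when the biomarker sits
    # above its target, so the error is (measurement - setpoint).
    stimulation = pid.step(biomarker - target)
    # Hypothetical first-order response: stimulation suppresses the biomarker.
    biomarker += (pid.dt / 2.0) * ((1.0 - stimulation) - biomarker)
print(round(biomarker, 3))  # should settle close to the 0.2 target
```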

    The Strathclyde Brain Computer Interface (S-BCI): the road to clinical translation

    In this paper, we summarise the state of development of the Strathclyde Brain Computer Interface (S-BCI) and what has been achieved so far. We also briefly discuss our next steps towards translation to spinal cord injured patients, the challenges we envisage in this process, and how we plan to address some of them. Projections for the S-BCI project over the coming few years are also presented.

    Muscle synergies in neuroscience and robotics: from input-space to task-space perspectives

    In this paper we review the work on muscle synergies that has been carried out in neuroscience and control engineering. In particular, we refer to the hypothesis that the central nervous system (CNS) generates desired muscle contractions by combining a small number of predefined modules, called muscle synergies. We provide an overview of the methods that have been employed to test the validity of this scheme, and we show how the concept of muscle synergy has been generalized for the control of artificial agents. The comparison between these two lines of research, in particular their different goals and approaches, is instrumental in explaining the computational implications of the hypothesized modular organization. Moreover, it clarifies the importance of assessing the functional role of muscle synergies: although these basic modules are defined at the level of muscle activations (input-space), they should result in the effective accomplishment of the desired task. This requirement is not always explicitly considered in experimental neuroscience, as muscle synergies are often estimated solely by analyzing recorded muscle activities. We suggest that synergy extraction methods should explicitly take task execution variables into account, thus moving from a perspective based purely on input-space to one grounded in task-space as well.
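    For concreteness, the sketch below shows a typical input-space extraction pipeline of the kind the review critiques: non-negative matrix factorization (NMF) applied to muscle-activity envelopes alone, with no task variables involved. NMF is one commonly used extraction method; the synthetic data, the synergy count, and the use of scikit-learn are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic EMG envelopes: 8 muscles x 1000 samples generated from 3 known
# synergies, plus a small amount of noise.
rng = np.random.default_rng(0)
true_synergies = rng.random((8, 3))        # muscles x synergies
activations = rng.random((3, 1000))        # synergies x time
emg = true_synergies @ activations + 0.01 * rng.random((8, 1000))

model = NMF(n_components=3, init="nndsvda", max_iter=500)
W = model.fit_transform(emg)               # extracted synergies (muscles x synergies)
H = model.components_                      # activation coefficients (synergies x time)

# Variance accounted for (VAF), a common goodness-of-fit criterion; note that
# no task-space variable appears anywhere in this pipeline.
vaf = 1.0 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)
print(f"VAF with 3 synergies: {vaf:.3f}")
```

    Because only the EMG matrix enters the factorization, nothing ties the extracted synergies to successful task execution, which is precisely the limitation the authors highlight when arguing for task-space-aware extraction methods.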

    Contrastive Hebbian Learning with Random Feedback Weights

    Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: the forward (or free) phase, where the data are fed to the network, and a backward (or clamped) phase, where the target signals are clamped to the output layer of the network and the feedback signals are transformed through the transposed synaptic weight matrices. This implies symmetry at the synaptic level, for which there is no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetry. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order non-linear differential equations. The algorithm is experimentally verified by solving a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. The article also shows how the parameters affect learning, especially the random matrices, and we use pseudospectra analysis to further investigate how these random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models of learning.
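    A minimal discrete-time sketch of the random-feedback idea follows: a small network settles to a fixed point in a free phase and again in a clamped phase, with a fixed random matrix (rather than the transpose of the forward weights) carrying the feedback to the hidden layer, and the weights are updated from the difference between the two phases' correlations. The toy XOR task, the simple fixed-point iteration used in place of the paper's differential equations, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network (2 inputs + constant bias input, 8 hidden units, 1 output)
# trained on XOR as a toy Boolean task; all sizes are illustrative.
n_in, n_hid, n_out = 3, 8, 1
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
R = rng.normal(0.0, 0.5, (n_hid, n_out))   # fixed random feedback matrix
gamma, eta, settle_steps = 0.2, 0.05, 25

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def settle(x, y_clamped=None):
    """Relax hidden/output activities toward a fixed point; feedback to the
    hidden layer goes through the random matrix R, never through W2.T."""
    h = np.zeros(n_hid)
    y = np.zeros(n_out) if y_clamped is None else y_clamped
    for _ in range(settle_steps):
        h = sigmoid(W1 @ x + gamma * (R @ y))
        if y_clamped is None:
            y = sigmoid(W2 @ h)
    return h, y

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

for epoch in range(4000):
    for x, t in zip(X, T):
        h_free, y_free = settle(x)                  # free (forward) phase
        h_clamp, y_clamp = settle(x, y_clamped=t)   # clamped (backward) phase
        # Contrastive Hebbian update: clamped-phase minus free-phase correlations.
        W2 += eta * (np.outer(y_clamp, h_clamp) - np.outer(y_free, h_free))
        W1 += (eta / gamma) * (np.outer(h_clamp, x) - np.outer(h_free, x))

# Outputs after training; with suitable hyperparameters these approach [0, 1, 1, 0].
print([round(float(settle(x)[1][0]), 2) for x in X])
```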