92 research outputs found

    A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks

    Get PDF
    Biological intelligence processes information using impulses, or spikes, which enables living creatures to perceive and act in the real world exceptionally well and to outperform state-of-the-art robots in almost every aspect of life. To close this gap, emerging hardware technologies and software advances in neuroscience, electronics, and computer science have made it possible to design biologically realistic robots controlled by spiking neural networks (SNNs), inspired by the mechanisms of the brain. However, a comprehensive review of robot control based on SNNs is still missing. In this paper, we survey developments of the past decade in the field of spiking neural networks for control tasks, with a particular focus on fast-emerging robotics-related applications. We first highlight the primary impetuses of SNN-based robotics tasks in terms of speed, energy efficiency, and computational capability. We then classify SNN-based robotic applications according to different learning rules and explain those learning rules together with their corresponding robotic applications. We also briefly present existing platforms that offer an interface between SNNs and robotics simulations for exploration and exploitation. Finally, we conclude the survey with a forecast of future challenges and associated potential research topics in controlling robots based on SNNs.
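    The survey above is descriptive and contains no code. As a minimal sketch of the spiking dynamics that SNN-based controllers build on, the following simulates a single leaky integrate-and-fire (LIF) neuron driven by an input current; the function name, parameter values, and the idea of mapping spike rate to a motor command are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron (illustrative parameters).

    input_current : one input value per time step.
    Returns the membrane trace and the indices of time steps with spikes.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, pushed by the input current.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset            # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# Example: a constant drive produces a regular spike train whose rate
# could, for instance, be mapped to a motor command in a robot controller.
trace, spikes = simulate_lif(np.full(500, 1.5))
print(f"{len(spikes)} spikes in 500 steps")
```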

    2022 roadmap on neuromorphic computing and engineering

    Full text link
    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource by providing a concise yet comprehensive introduction to readers outside this field, for those who are just entering the field, as well as providing future perspectives for those who are well established in the neuromorphic computing community.

    Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites

    Full text link
    This paper presents a spike-based model which employs neurons with functionally distinct dendritic compartments for classifying high dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron a capacity to perform a large number of input-output mappings. The model utilizes sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned using a simple, hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while utilizing 10% to 50% fewer computational resources than the other reported techniques. Comment: Accepted for publication in Neural Computation.
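    As a rough illustration of the neuron model summarized above (not the authors' implementation), the sketch below assumes that each dendritic branch sums its binary synaptic inputs, applies a nonlinearity, and that the branch outputs are linearly integrated at the soma; the connectivity sizes and the squaring nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_branches, syn_per_branch, dim = 10, 8, 128

# Sparse binary connectivity: each synaptic slot on a branch selects one
# of the `dim` binary input lines (the learned "connection pattern").
conn = rng.integers(0, dim, size=(n_branches, syn_per_branch))

def neuron_response(x, conn):
    """x: binary input pattern of length `dim`.
    Each branch sums its synaptic inputs and applies a nonlinearity;
    the branch outputs are then linearly integrated at the soma."""
    branch_drive = x[conn].sum(axis=1)   # per-branch synaptic drive
    branch_out = branch_drive ** 2       # illustrative supralinear branch nonlinearity
    return branch_out.sum()              # linear integration at the soma

x = rng.integers(0, 2, size=dim)
print("soma activation:", neuron_response(x, conn))
# A classifier would threshold this value; structural-plasticity learning
# would swap synapses between branches to group correlated inputs.
```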

    Learning Mechanisms to account for the Speed, Selectivity and Invariance of Responses in the visual Cortex

    Get PDF
    In this thesis I propose various activity-driven synaptic plasticity mechanisms that could account for the speed, selectivity and invariance of neuronal responses in the visual cortex, and discuss their biological plausibility. I also present the results of a relevant psychophysical experiment demonstrating that familiarity can accelerate visual processing. Beyond these results on the visual system, the studies presented here also support the hypothesis that the brain uses spike times to encode, decode, and process information, a theory referred to as 'temporal coding'. In such a framework, Spike Timing Dependent Plasticity (STDP) may play a key role by detecting repeating spike patterns and generating faster and faster responses to those patterns.
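    The abstract refers to Spike Timing Dependent Plasticity without stating the rule explicitly. The sketch below implements the standard pair-based STDP update, in which a presynaptic spike that precedes the postsynaptic spike potentiates the synapse and the reverse order depresses it; the amplitudes and time constants are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=0.02, tau_minus=0.02, w_max=1.0):
    """Pair-based STDP: dt = t_post - t_pre (seconds).

    Pre-before-post (dt > 0) potentiates the synapse, post-before-pre
    depresses it; the change decays exponentially with |dt|.
    """
    if dt > 0:
        w += a_plus * np.exp(-dt / tau_plus)
    else:
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, 0.0, w_max))

# A synapse whose presynaptic spike repeatedly precedes the postsynaptic
# spike by 5 ms is strengthened, so the neuron responds earlier and earlier
# to a repeating input pattern.
w = 0.5
for _ in range(100):
    w = stdp_update(w, dt=0.005)
print("weight after repeated pairings:", w)
```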

    A Neuromorphic Machine Learning Framework based on the Growth Transform Dynamical System

    Get PDF
    As computation increasingly moves from the cloud to the source of data collection, there is a growing demand for specialized machine learning algorithms that can perform learning and inference at the edge in energy- and resource-constrained environments. In this regard, we can take inspiration from small biological systems like insect brains that exhibit high energy efficiency within a small form factor, and show superior cognitive performance using fewer, coarser neural operations (action potentials or spikes) than the high-precision floating-point operations used in deep learning platforms. Attempts at bridging this gap using neuromorphic hardware have produced silicon brains that are orders of magnitude less efficient in energy dissipation as well as in performance. This is because neuromorphic machine learning (ML) algorithms are traditionally built bottom-up, starting with neuron models that mimic the response of biological neurons and connecting them together to form a network. Neural responses and weight parameters are therefore not optimized with respect to any system objective, and it is not evident how individual spikes and the associated population dynamics are related to a network objective. On the other hand, conventional ML algorithms follow a top-down synthesis approach, starting from a system objective (that usually only models task efficiency) and reducing the problem to the model of a non-spiking neuron with non-local updates and little or no control over the population dynamics. I propose that a reconciliation of the two approaches may be key to designing scalable spiking neural networks that optimize for both energy and task efficiency under realistic physical constraints, while enabling spike-based encoding and learning based on local updates in an energy-based framework like traditional ML models. To this end, I first present a neuron model implementing a mapping based on polynomial growth transforms, which allows for independent control over spike forms and transient firing statistics. I show how spike responses are generated as a result of constraint violation while minimizing a physically plausible energy functional involving a continuous-valued neural variable that represents the local power dissipation in a neuron. I then show how the framework can be extended to coupled neurons in a network by remapping the synaptic interactions of a standard spiking network. I show how the network can be designed to perform a limited amount of learning in an energy-efficient manner even without synaptic adaptation, by appropriate choices of network structure and parameters: through spiking SVMs that learn to allocate switching energy to neurons that are more important for classification, and through spiking associative memory networks that learn to modulate their responses based on global activity. Lastly, I describe a backpropagation-less learning framework for synaptic adaptation in which weight parameters are optimized with respect to a network-level loss function that represents spiking activity across the network, but which produces updates that are local. I show how the approach can be used for unsupervised and supervised learning such that minimizing the training error is equivalent to minimizing the network-level spiking activity. I build upon this framework to introduce end-to-end spiking neural network (SNN) architectures and demonstrate their applicability for energy- and resource-efficient learning using a benchmark dataset.
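    The neuron model above is built on polynomial growth transforms. The sketch below shows only the classical growth-transform update for increasing a polynomial with nonnegative coefficients over the probability simplex (the Baum-Eagon form), not the spiking neuron model or the learning framework of the thesis; the example polynomial is an arbitrary illustration.

```python
import numpy as np

def growth_transform_step(x, grad):
    """One Baum-Eagon growth-transform update on the probability simplex.

    x    : current point, nonnegative and summing to one.
    grad : gradient of the polynomial objective at x (assumed nonnegative
           and not all zero here).
    The update multiplies each coordinate by its gradient and renormalizes,
    which does not decrease a polynomial with nonnegative coefficients.
    """
    y = x * grad
    return y / y.sum()

# Illustrative objective: P(x) = x0*x1 + x1*x2 (nonnegative coefficients).
def grad_P(x):
    return np.array([x[1], x[0] + x[2], x[1]])

x = np.array([0.2, 0.3, 0.5])
for _ in range(50):
    x = growth_transform_step(x, grad_P(x))
print("x after growth-transform iterations:", x)
```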

    Structure, Dynamics and Self-Organization in Recurrent Neural Networks: From Machine Learning to Theoretical Neuroscience

    Get PDF
    At a first glance, artificial neural networks, with engineered learning algorithms and carefully chosen nonlinearities, are nothing like the complicated self-organized spiking neural networks studied by theoretical neuroscientists. Yet both adapt to their inputs, keep information from the past in their state space, and are capable of learning, implying that some information processing principles should be common to both. In this thesis we study those principles by incorporating notions from systems theory, statistical physics and graph theory into artificial neural networks and theoretical neuroscience models. The starting point for this thesis is reservoir computing (RC), a learning paradigm used both in machine learning [jaeger2004harnessing] and in theoretical neuroscience [maass2002real]. A neural network in RC consists of two parts: a reservoir, a directed and weighted network of neurons that projects the input time series onto a high-dimensional space, and a readout, which is trained to read the state of the neurons in the reservoir and combine them linearly to give the desired output. In classical RC, the reservoir is randomly initialized and left untrained, which alleviates the training costs in comparison to other recurrent neural networks. However, this lack of training implies that reservoirs are not adapted to specific tasks, and thus their performance is often lower than that of other neural networks. Our contribution has been to show how knowledge about a task can be integrated into the reservoir architecture, so that reservoirs can be tailored to specific problems without training. We do this by identifying two features of the reservoir dynamics that are useful for machine learning: its memory and its power spectrum. First, we show that the correlations between neurons limit the capacity of the reservoir to retain traces of previous inputs, and demonstrate that those correlations are controlled by the moduli of the eigenvalues of the adjacency matrix of the reservoir. Second, we prove that when the reservoir resonates at the frequencies that are present in the desired output signal, the performance of the readout increases. Knowing which features of the reservoir dynamics we need, the next question is how to impose them. The simplest way to design a network that resonates at a certain frequency is by adding cycles, which act as feedback loops, but this also induces correlations and hence modifies the memory. To disentangle the design of frequencies from the design of memory, we studied how the addition of cycles modifies the eigenvalues of the adjacency matrix of the network. Surprisingly, the shape of the resulting eigenvalue distribution is quite beautiful [aceituno2019universal] and can be characterized using tools from random matrix theory. Combining this knowledge with our result relating eigenvalues and correlations, we designed a heuristic that tailors reservoirs to specific tasks and showed that it improves upon state-of-the-art RC in three different machine learning tasks. Although this idea works in the machine learning version of RC, there is one fundamental problem when we try to translate it to the world of theoretical neuroscience: the proposed frequency adaptation requires prior knowledge of the task, which might not be plausible in a biological neural network.
The questions that follow are therefore whether those resonances can emerge through unsupervised learning, and what kind of learning rules would be required. Remarkably, these resonances can be induced by the well-known Spike Timing-Dependent Plasticity (STDP) combined with homeostatic mechanisms. We show this by deriving two self-consistent equations: one in which the activity of every neuron can be calculated from its synaptic weights and its external inputs, and a second in which the synaptic weights can be obtained from the neural activity. By considering spatio-temporal symmetries in our inputs, we obtained two families of solutions to those equations in which a periodic input is enhanced by the neural network after STDP. This approach shows that periodic and quasiperiodic inputs can induce resonances that agree with the aforementioned RC theory. Those results, although rigorous, are expressed in the language of statistical physics and cannot easily be tested or verified on real, scarce data. To make them more accessible to the neuroscience community, we showed that latency reduction, a well-known effect of STDP [song2000competitive] that has been observed experimentally [mehta2000experience], generates neural codes that agree with the self-consistency equations and their solutions. In particular, this analysis shows that metabolic efficiency, synchronization and prediction can emerge from that same phenomenon of latency reduction, thus closing the loop with our original machine learning problem. To summarize, this thesis exposes principles of learning in recurrent neural networks that are consistent with adaptation in the nervous system and that also improve current machine learning methods. This is done by leveraging features of the dynamics of recurrent neural networks, such as resonances and correlations, in machine learning problems, and then imposing the required dynamics on reservoir computing through control-theoretic notions such as feedback loops and spectral analysis. We then assessed the plausibility of such adaptation in biological networks, deriving solutions from self-organizing processes that are biologically plausible and that align with the machine learning prescriptions. Finally, we relate those processes to learning rules in biological neurons, showing how small local adaptations of spike times can lead to neural codes that are efficient and can be interpreted in machine learning terms.
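    As a minimal sketch of the classical reservoir computing setup described above, assuming a standard echo state network with a tanh reservoir and a ridge-regression readout: the reservoir here is random rather than tailored as the thesis proposes, and the sizes, spectral radius and delay task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1

# Random input and recurrent weights; rescale the reservoir to a chosen
# spectral radius so that it has fading memory (echo state property).
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with the input series u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by 5 steps (a memory task).
u = rng.uniform(-1, 1, size=2000)
y = np.roll(u, 5)
X = run_reservoir(u)[100:]        # drop the initial transient
Y = y[100:]

# Ridge-regression readout (the only trained part in classical RC).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
print("train MSE:", np.mean((X @ W_out - Y) ** 2))
```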

    Parameter optimization of evolving spiking neural network with dynamic population particle swarm optimization

    Get PDF
    Evolving Spiking Neural Network (ESNN) is widely used in classification problems. However, like other neural networks, ESNN cannot find its own optimum parameter values, which are crucial for classification accuracy. Thus, in this study, ESNN is integrated with an improved Particle Swarm Optimization (PSO), known as Dynamic Population Particle Swarm Optimization (DPPSO), to optimize the ESNN parameters: the modulation factor (Mod), the similarity factor (Sim) and the threshold factor (C). To find the optimum ESNN parameter values, DPPSO uses a dynamic population that removes the particle with the lowest fitness value at every pre-defined iteration. The integration of ESNN and DPPSO facilitates the search for optimal ESNN parameters during the training stage. Performance is measured by classification accuracy and compared with the existing method. Five datasets obtained from the University of California Irvine (UCI) Machine Learning Repository are used in this study. The experimental results show better accuracy than the existing technique and thus improve the ESNN method in optimizing its parameter values.
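    A rough sketch of the DPPSO idea described above, under the assumption that removing "the particle with the lowest fitness value" means dropping the worst particle at fixed intervals: the fitness function below is a placeholder for training and evaluating an ESNN with the candidate (Mod, Sim, C) values, and the bounds and PSO coefficients are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative search ranges for the three ESNN parameters.
bounds = np.array([[0.5, 1.0],    # Mod: modulation factor
                   [0.0, 1.0],    # Sim: similarity factor
                   [0.0, 1.0]])   # C:   threshold factor

def fitness(params):
    """Placeholder for the ESNN classification accuracy obtained with
    these parameter values (training and evaluating an ESNN goes here)."""
    return -np.sum((params - np.array([0.9, 0.6, 0.7])) ** 2)

n_particles, n_iter, drop_every = 20, 60, 10
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for it in range(1, n_iter + 1):
    # Standard PSO velocity and position updates, clipped to the bounds.
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmax(pbest_val)].copy()
    # Dynamic population: periodically remove the worst particle.
    if it % drop_every == 0 and len(pos) > 2:
        keep = np.arange(len(pos)) != np.argmin(vals)
        pos, vel = pos[keep], vel[keep]
        pbest, pbest_val = pbest[keep], pbest_val[keep]

print("best (Mod, Sim, C):", np.round(gbest, 3))
```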

    On microelectronic self-learning cognitive chip systems

    Get PDF
    After a brief review of machine learning techniques and applications, this Ph.D. thesis examines several approaches to implementing machine learning architectures and algorithms in hardware within our laboratory. Building on this interdisciplinary background, we motivate the novel approaches we intend to pursue: innovative hardware implementations of dynamically self-reconfigurable logic for self-adaptive, self-(re)organizing and eventually self-assembling machine learning systems, while developing this new area of research. After reviewing relevant background on robotic control methods and the most recent advanced cognitive controllers, the thesis argues that, amongst the many well-known ways of designing operational technologies, the design methodologies for leading-edge devices such as cognitive chips that may lead to intelligent machines exhibiting conscious phenomena should be restricted to extremely well-defined constraints. Roboticists also need such constraints as specifications to help decide up front on otherwise unbounded hardware/software design details. In addition, and most importantly, we propose these specifications as methodological guidelines tightly related to ethics and to the now well-identified workings of the human body and of its psyche.