    Human activity recognition: suitability of a neuromorphic approach for on-edge AIoT applications

    Human activity recognition (HAR) is a classification problem involving time-dependent signals produced by body monitoring, and its application domain covers all aspects of human life, from healthcare to sport, from safety to smart environments. As such, it is naturally well suited for on-edge deployment of personalized point-of-care (POC) analyses or other services tailored to the user. However, typical smart and wearable devices suffer from significant limitations regarding energy consumption, and this considerably hinders the successful employment of edge computing for tasks like HAR. In this paper, we investigate how this problem can be mitigated by adopting a neuromorphic approach. By comparing optimized classifiers based on traditional deep neural network (DNN) architectures as well as on recent alternatives like the Legendre Memory Unit (LMU), we show how spiking neural networks (SNNs) can effectively deal with the temporal signals typical of HAR, providing high performance at a low energy cost. By carrying out an application-oriented hyperparameter optimization, we also propose a methodology that can be flexibly extended to different domains, enlarging the range of neuro-inspired classifiers suitable for on-edge artificial intelligence of things (AIoT) applications.
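
    As an illustration of the kind of spiking classifier the abstract refers to, the sketch below runs a windowed inertial signal through a single leaky integrate-and-fire (LIF) layer and reads out accumulated spike counts. It is a minimal toy, not the paper's architecture; all shapes, constants, and the random input are assumptions.

```python
# Minimal sketch (not the paper's model): a leaky integrate-and-fire (LIF) layer
# processing a windowed inertial signal for HAR-style classification.
# All shapes, constants, and the random input are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

T, n_in, n_hidden, n_classes = 100, 6, 64, 5   # 6-axis IMU window, 5 activities (assumed)
x = rng.normal(size=(T, n_in))                  # stand-in for a real sensor window

W_in = rng.normal(scale=0.3, size=(n_in, n_hidden))
W_out = rng.normal(scale=0.3, size=(n_hidden, n_classes))

tau, v_th = 20.0, 1.0                           # membrane time constant (steps), threshold
v = np.zeros(n_hidden)
spike_count = np.zeros(n_hidden)

for t in range(T):
    v += (-v + x[t] @ W_in) / tau               # leaky integration of the input current
    spikes = (v >= v_th).astype(float)          # emit a spike where threshold is crossed
    v[spikes > 0] = 0.0                         # reset spiking neurons
    spike_count += spikes

logits = spike_count @ W_out                    # rate-based readout of accumulated spikes
print("predicted class:", int(np.argmax(logits)))
```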

    On-line learning applied to spiking neural network for antilock braking systems

    Computationally replicating the behaviour of the cerebral cortex to perform the everyday control tasks of a human being remains a challenge. First, … Finally, a suitable learning model that allows the neural network response to adapt to changing conditions in the environment is also required. Spiking Neural Networks (SNN) are currently the closest approximation to biological neural networks. SNNs make use of temporal spike trains to deal with inputs and outputs, thus allowing faster and more complex computation. In this paper, a controller based on an SNN is proposed to perform the control of an anti-lock braking system (ABS) in vehicles. To this end, two neural networks are used to regulate the braking force. The first one is devoted to estimating the optimal slip, while the second one is in charge of setting the optimal braking pressure. The latter resembles biological reflex arcs to ensure stability during operation. This neural structure is used to control the fast regulation cycles that occur during ABS operation. Furthermore, an algorithm has been developed to train the network while driving. On-line learning is proposed to update the response of the controller. Hence, to cope with real conditions, a control algorithm based on neural networks that learn through neural plasticity, similar to what occurs in biological systems, has been implemented. Neural connections are modulated using Spike-Timing-Dependent Plasticity (STDP) by means of a supervised learning structure that uses the slip error as input. Road-type detection has been included in the same neural structure. To validate and evaluate the performance of the proposed algorithm, simulations as well as experiments in a real vehicle were carried out. The algorithm proved able to adapt rapidly to changes in adhesion conditions. This way, the capability of spiking neural networks to perform the full control logic of the ABS has been verified. Funding for open access charge: Universidad de Málaga / CBUA. This work was partly supported by the Ministry of Science and Innovation under grant PID2019-105572RB-I00, partly by the Regional Government of Andalusia under grant UMA18-FEDERJA-109, and partly by the University of Malaga, as well as the KTH Royal Institute of Technology and its initiative, TRENoP.
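
    The rule below is a minimal sketch of the idea of modulating STDP with a supervision signal such as the slip error; the trace formulation, the constants, and the stdp_step helper are illustrative assumptions rather than the paper's exact algorithm.

```python
# Minimal sketch of error-modulated, pair-based STDP, loosely following the idea of
# using the slip error to supervise the weight update. Constants, traces, and the
# helper name are illustrative assumptions, not the paper's exact rule.
import numpy as np

n_pre, n_post = 10, 4
rng = np.random.default_rng(1)
w = rng.uniform(0.0, 0.5, size=(n_pre, n_post))

tau_trace = 20.0                 # decay constant of the eligibility traces (time steps)
a_plus, a_minus = 0.01, 0.012    # potentiation / depression amplitudes
pre_trace = np.zeros(n_pre)
post_trace = np.zeros(n_post)

def stdp_step(pre_spikes, post_spikes, slip_error, dt=1.0):
    """One supervised STDP update; slip_error scales and signs the weight change."""
    global w, pre_trace, post_trace
    pre_trace += -pre_trace * dt / tau_trace + pre_spikes
    post_trace += -post_trace * dt / tau_trace + post_spikes
    # Potentiation when a post spike follows recent pre activity, depression when a
    # pre spike follows recent post activity, both gated by the slip error.
    dw = a_plus * np.outer(pre_trace, post_spikes) \
       - a_minus * np.outer(pre_spikes, post_trace)
    w = np.clip(w + slip_error * dw, 0.0, 1.0)

# Toy usage with random spike trains and a decaying error signal.
for step in range(100):
    pre = (rng.random(n_pre) < 0.2).astype(float)
    post = (rng.random(n_post) < 0.1).astype(float)
    stdp_step(pre, post, slip_error=np.exp(-step / 50.0))
```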

    Spike-based local synaptic plasticity: a survey of computational models and neuromorphic circuits

    Understanding how biological neural networks carry out learning using spike-based local plasticity mechanisms can lead to the development of real-time, energy-efficient, and adaptive neuromorphic processing systems. A large number of spike-based learning models have recently been proposed following different approaches. However, it is difficult to assess whether these models can be easily implemented in neuromorphic hardware, and to compare their features and ease of implementation. To this end, in this survey, we provide an overview of representative brain-inspired synaptic plasticity models and mixed-signal complementary metal–oxide–semiconductor (CMOS) neuromorphic circuits within a unified framework. We review historical, experimental, and theoretical approaches to modeling synaptic plasticity, and we identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules. We provide a common definition of a locality principle based on pre- and postsynaptic neural signals, which we propose as an important requirement for physical implementations of synaptic plasticity circuits. Based on this principle, we compare the properties of these models within the same framework, and describe a set of mixed-signal electronic circuits that can be used to implement their computing principles and to build efficient on-chip and online learning in neuromorphic processing systems.
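
    The snippet below sketches the locality principle the survey discusses: a synapse updates its weight using only quantities available at its own pre- and post-synaptic sides (spikes, traces, membrane voltage) plus the weight itself. The specific voltage-gated Hebbian rule and the LocalSignals container are illustrative assumptions, not a rule taken from the survey.

```python
# Minimal sketch of a "local" plasticity rule: no global error signal, only
# variables physically available at the synapse. The rule itself is illustrative.
from dataclasses import dataclass

@dataclass
class LocalSignals:
    pre_spike: float      # 0/1 spike of the presynaptic neuron
    pre_trace: float      # low-pass filtered presynaptic activity
    post_spike: float     # 0/1 spike of the postsynaptic neuron
    post_voltage: float   # postsynaptic membrane potential

def local_update(w: float, s: LocalSignals,
                 lr: float = 1e-3, v_theta: float = 0.5) -> float:
    """Weight change computed from purely local quantities."""
    hebbian = s.pre_trace * s.post_spike * (s.post_voltage - v_theta)
    decay = 1e-4 * w * s.pre_spike          # mild weight decay on presynaptic activity
    return min(max(w + lr * hebbian - decay, 0.0), 1.0)

# Toy usage: one synapse, one update.
w = 0.3
w = local_update(w, LocalSignals(pre_spike=1.0, pre_trace=0.8,
                                 post_spike=1.0, post_voltage=0.9))
print(w)
```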

    Implementation of bioinspired algorithms on the neuromorphic VLSI system SpiNNaker 2

    It is believed that neuromorphic hardware will accelerate neuroscience research and enable the next generation of edge AI, while brain-inspired algorithms are expected to run efficiently on neuromorphic hardware. Neither, however, happens automatically: bringing hardware and algorithms together efficiently requires optimizations informed by an understanding of both sides. In this work, software frameworks and optimizations for the efficient implementation of neural network-based algorithms on SpiNNaker 2 are proposed, resulting in reduced power consumption, memory footprint, and computation time. First, a software framework including power management strategies is proposed to apply dynamic voltage and frequency scaling (DVFS) to the simulation of spiking neural networks; it is also the first software framework to run a neural network on SpiNNaker 2. The results show that power consumption is reduced by 60.7% in the synfire chain benchmark. Second, numerical and data structure optimizations lead to an efficient implementation of reward-based synaptic sampling, one of the most complex plasticity algorithms ever implemented on neuromorphic hardware. The results show a two-fold reduction in computation time and a 62% reduction in energy consumption. Third, software optimizations are proposed that effectively exploit the efficiency of the multiply-accumulate array and the flexibility of the ARM core, resulting in, compared with Loihi, 3 times faster inference and 5 times lower energy consumption in a keyword spotting benchmark, and faster inference with lower energy consumption in an adaptive control benchmark in high-dimensional cases. The results of this work demonstrate the potential of SpiNNaker 2, explore its range of applications, and provide feedback for the design of the next generation of neuromorphic hardware.
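
    The fragment below only illustrates the DVFS idea mentioned in the abstract: pick the lowest voltage/frequency setting whose processing capacity still covers the expected workload of the next simulation time step. The levels, capacities, and power figures are made-up placeholders, and the code does not use any SpiNNaker 2 API.

```python
# Illustrative DVFS policy: scale up only when the workload would not fit at the
# lower setting. All numbers are placeholders, not SpiNNaker 2 specifications.
from typing import NamedTuple

class PerfLevel(NamedTuple):
    name: str
    max_events_per_step: int   # spike events a core can process within one time step
    power_mw: float            # rough active power at this voltage/frequency

LEVELS = [
    PerfLevel("0.5V/150MHz", 2_000, 4.0),
    PerfLevel("0.6V/300MHz", 5_000, 9.0),
    PerfLevel("0.8V/500MHz", 10_000, 20.0),
]

def choose_level(expected_events: int) -> PerfLevel:
    """Return the lowest-power level that still fits the expected workload."""
    for level in LEVELS:
        if expected_events <= level.max_events_per_step:
            return level
    return LEVELS[-1]          # saturate at the highest level if overloaded

# Toy usage: workload of 3,500 spike events in the next time step.
print(choose_level(3_500))
```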