30 research outputs found

    Temporal Variability Analysis in sEMG Hand Grasp Recognition using Temporal Convolutional Networks

    Hand movement recognition via surface electromyographic (sEMG) signal is a promising approach for advancing Human-Computer Interaction. However, this field has to deal with two main issues: (1) the long-term reliability of sEMG-based control is limited by the variability affecting the sEMG signal (especially variability over time); (2) the classification algorithms need to be suitable for implementation on embedded devices, which have strict constraints in terms of power budget and computational resources. Current solutions exhibit a performance drop over time that makes them unsuitable for reliable gesture controller design. In this paper, we address the temporal variability of sEMG-based grasp recognition, proposing a new approach based on Temporal Convolutional Networks, a class of deep learning algorithms particularly suited for time series analysis and temporal pattern recognition. Our approach improves by 7.6% the best results achieved in the literature on NinaPro DB6, a reference dataset for temporal variability analysis of sEMG. Moreover, when targeting the much more challenging inter-session accuracy objective, our method achieves an accuracy drop of just 4.8% between intra- and inter-session validation. This proves the suitability of our setup for a robust, reliable long-term implementation. Furthermore, we distill the network using deep network quantization and pruning techniques, demonstrating that our approach can reach a memory footprint up to 120× lower than the initial network and 4× lower than a baseline Support Vector Machine, with an inter-session accuracy degradation of only 2.5%, proving that the solution is suitable for embedded resource-constrained implementations.
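    The building block behind a Temporal Convolutional Network is the causal dilated 1-D convolution, which can be sketched as follows. This is a minimal NumPy illustration of the generic technique, not the paper's actual network; the function names and the receptive-field formula are ours, and the formula assumes dilations doubling per layer (1, 2, 4, ...).

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution: the output at time t depends only on
    x[t], x[t-d], x[t-2d], ... - never on future samples."""
    k = len(w)
    pad = (k - 1) * dilation  # left-pad so output length equals input length
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

# Stacking L layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially: with kernel size k it covers 1 + (k - 1) * (2**L - 1) samples.
def receptive_field(k, L):
    return 1 + (k - 1) * (2 ** L - 1)
```

This exponential growth is what lets a small network see a long sEMG window, which is why TCNs suit time-series pattern recognition at a low parameter count.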

    A Microcontroller is All You Need: Enabling Transformer Execution on Low-Power IoT Endnodes

    Transformer networks have become state-of-the-art for many tasks such as NLP and are closing the gap on other tasks like image recognition. Similarly, Transformers and Attention methods are starting to attract attention on smaller-scale tasks, which fit the typical memory envelope of MCUs. In this work, we propose a new set of execution kernels tuned for efficient execution on MCU-class RISC-V and ARM Cortex-M cores. We focus on minimizing memory movements while maximizing data reuse in the Attention layers. With our library, we obtain 3.4×, 1.8×, and 2.1× lower latency and energy on 8-bit Attention layers, compared to previous state-of-the-art (SoA) linear and matrix multiplication kernels in the CMSIS-NN and PULP-NN libraries on the STM32H7 (Cortex M7), STM32L4 (Cortex M4), and GAP8 (RISC-V IMC-Xpulp) platforms, respectively. As a use case for our TinyTransformer library, we also demonstrate that we can fit a 263 kB Transformer on the GAP8 platform, outperforming the previous SoA convolutional architecture on the TinyRadarNN dataset, with a latency of 9.24 ms and 0.47 mJ energy consumption and an accuracy improvement of 3.5%.
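    The memory-movement idea can be illustrated with a toy 8-bit attention-score computation: operands stay int8, accumulation is int32, and dequantization to float happens once at the end rather than per multiply-accumulate. This is a NumPy sketch under our own naming, not the actual TinyTransformer or CMSIS-NN kernels.

```python
import numpy as np

def quantize_sym(x, n_bits=8):
    """Symmetric linear quantization: float tensor -> int8 codes plus a scale."""
    scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1)
    return np.round(x / scale).astype(np.int8), scale

def int8_attention_scores(q_int, k_int, scale_q, scale_k):
    """Q @ K^T attention logits with int8 operands and an int32 accumulator;
    the single final rescale keeps all inner-loop traffic in 8-bit."""
    acc = q_int.astype(np.int32) @ k_int.astype(np.int32).T
    return acc * (scale_q * scale_k)
```

Keeping the inner loop in int8 is what makes the kernel fit the load/store bandwidth and SIMD width of Cortex-M and RISC-V MCU cores.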

    Event-based Low-Power and Low-Latency Regression Method for Hand Kinematics from Surface EMG

    Human-Machine Interfaces (HMIs) are a rapidly progressing field, and gesture recognition is a promising method in industrial, consumer, and health use cases. Surface electromyography (sEMG) is a State-of-the-Art (SoA) pathway for human-to-machine communication. Currently, the research goal is a more intuitive and fluid control, moving from signal classification of discrete positions to continuous control based on regression. sEMG-based regression is still scarcely explored in research, since most approaches have addressed classification. In this work, we propose the first event-based EMG encoding applied to the regression of hand kinematics, suitable for streaming operation on a low-power microcontroller (STM32 F401, mounting an ARM Cortex-M4). The motivation for event-based encoding is to exploit upcoming neuromorphic hardware to benefit from reduced latency and power consumption. We achieve a Mean Absolute Error of 8.8 ± 2.3 degrees on 5 degrees of actuation on the public dataset NinaPro DB8, comparable with the SoA Deep Neural Network (DNN). We use 9× less memory and 13× less energy per inference, with 10× shorter latency per inference compared to the SoA deep net, proving suitable for resource-constrained embedded platforms.
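    The event-based idea, emitting data only when the signal changes enough, can be sketched with a simple level-crossing (delta) encoder. This is an illustrative pure-Python sketch of the generic technique, with names of our own choosing; the encoding used in the paper may differ in its details.

```python
def delta_encode(signal, threshold):
    """Level-crossing (delta) encoder: emit an event (t, +1/-1) each time the
    signal moves more than `threshold` past the last emitted level. Quiet
    stretches of the signal produce no events, hence no data to process."""
    events, level = [], signal[0]
    for t, s in enumerate(signal):
        while s - level >= threshold:
            level += threshold
            events.append((t, +1))
        while level - s >= threshold:
            level -= threshold
            events.append((t, -1))
    return events
```

Because downstream computation scales with the event count rather than the sample rate, an idle EMG channel costs essentially nothing, which is the source of the latency and energy savings on neuromorphic-style hardware.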

    Tackling Time-Variability in sEMG-based Gesture Recognition with On-Device Incremental Learning and Temporal Convolutional Networks

    Human-machine interaction is showing promising results for robotic prosthesis control and rehabilitation. In these fields, hand movement recognition via surface electromyographic (sEMG) signals is one of the most promising approaches. However, it still suffers from the issue of the sEMG signal's variability over time, which negatively impacts classification robustness. In particular, the non-stationarity of input signals and the surface electrodes' shift can cause up to 30% degradation in gesture recognition accuracy. This work addresses the temporal variability of sEMG-based gesture recognition by proposing to train a Temporal Convolutional Network (TCN) incrementally over multiple gesture training sessions. Using incremental learning, we re-train our model on stored latent data spanning multiple sessions. We validate our approach on the UniBo-20-Session dataset, which includes 8 hand gestures from 3 subjects. Our incremental learning framework obtains 18.9% higher accuracy compared to a baseline with a standard single training session. Deploying our TCN on a Parallel, Ultra-Low Power (PULP) microcontroller unit (MCU), GAP8, we achieve an inference latency and energy of 12.9 ms and 0.66 mJ, respectively, with a weight memory footprint of 427 kB and a data memory footprint of 0.5-32 MB.
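    Replay-based incremental learning over stored latents can be sketched as follows. This is a generic illustration, not the paper's training pipeline: the class name and the nearest-centroid classifier are our simplifications, and the TCN feature extractor is assumed frozen so that only latent vectors need to be stored and replayed.

```python
import numpy as np

class ReplayIncrementalClassifier:
    """After each session, keep a bounded subset of latent feature vectors per
    class, then re-fit a nearest-centroid classifier on the stored latents from
    all sessions seen so far plus the new one."""

    def __init__(self, per_class_budget=20):
        self.budget = per_class_budget
        self.buffer_x, self.buffer_y = [], []
        self.centroids = {}

    def add_session(self, latents, labels):
        for c in np.unique(labels):
            xc = latents[labels == c][: self.budget]  # bounded replay buffer
            self.buffer_x.append(xc)
            self.buffer_y.append(np.full(len(xc), c))
        x = np.concatenate(self.buffer_x)
        y = np.concatenate(self.buffer_y)
        self.centroids = {c: x[y == c].mean(axis=0) for c in np.unique(y)}

    def predict(self, latents):
        classes = sorted(self.centroids)
        d = np.stack([np.linalg.norm(latents - self.centroids[c], axis=1)
                      for c in classes])
        return np.array(classes)[d.argmin(axis=0)]
```

The design point is that replaying compact latents rather than raw sEMG keeps the on-device memory cost bounded while letting the model absorb session-to-session drift such as electrode shift.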

    Low-latency detection of epileptic seizures from IEEG with temporal convolutional networks on a low-power parallel MCU

    Epilepsy is a severe neurological disorder that affects about 1% of the world population, and one-third of cases are drug-resistant. Apart from surgery, drug-resistant patients can benefit from closed-loop brain stimulation, eliminating or mitigating the epileptic symptoms. For the closed loop to be accurate and safe, it is paramount to couple stimulation with a detection system able to recognize seizure onset with high sensitivity and specificity and short latency, while meeting the strict computation and energy constraints of always-on real-time monitoring platforms. We propose a novel setup for iEEG-based epilepsy detection, exploiting a Temporal Convolutional Network (TCN) optimized for deployability on low-power edge devices for real-time monitoring. We test our approach on the Short-Term SWEC-ETHZ iEEG Database, containing a total of 100 epileptic seizures from 16 patients (from 2 to 14 per patient), comparing it with the state-of-the-art (SoA) approach, represented by Hyperdimensional Computing (HD). Our TCN attains a detection delay which is 10 s better than SoA, without a performance drop in sensitivity and specificity. Contrary to previous literature, we also enforce a time-consistent setup, where training seizures always precede testing seizures chronologically. When deployed on a commercial low-power parallel microcontroller unit (MCU), each inference with our model has a latency of only 5.68 ms and an energy cost of only 124.5 µJ if executed on 1 core, and a latency of 1.46 ms and an energy cost of 51.2 µJ if parallelized on 8 cores. This latency and energy consumption, both lower than the current SoA, demonstrate the suitability of our solution for real-time long-term embedded epilepsy monitoring.
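    The time-consistent setup, with training seizures always preceding test seizures, corresponds to an expanding-window evaluation over seizures sorted by onset time. The helper below is our own illustrative sketch of that splitting scheme, not the paper's code.

```python
def chronological_folds(seizure_onsets, min_train=2):
    """Expanding-window folds over seizures sorted by onset time: each fold
    trains on every seizure seen so far and tests on the next one, so no test
    seizure ever precedes a training seizure chronologically."""
    ordered = sorted(seizure_onsets)
    for i in range(min_train, len(ordered)):
        yield ordered[:i], ordered[i]
```

Unlike a random shuffle, this split cannot leak post-onset signal statistics from the future into training, which is why it is the stricter (and more realistic) evaluation for a deployed detector.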

    Robust Real-Time Embedded EMG Recognition Framework Using Temporal Convolutional Networks on a Multicore IoT Processor

    Hand movement classification via surface electromyographic (sEMG) signal is a well-established approach for advanced Human-Computer Interaction. However, sEMG movement recognition has to deal with the long-term reliability of sEMG-based control, limited by the variability affecting the sEMG signal. Embedded solutions are affected by a recognition accuracy drop over time that makes them unsuitable for reliable gesture controller design. In this paper, we present a complete wearable-class embedded system for robust sEMG-based gesture recognition, based on Temporal Convolutional Networks (TCNs). Firstly, we developed a novel TCN topology (TEMPONet), and we tested our solution on a benchmark dataset (Ninapro), achieving 49.6% average accuracy, 7.8% better than the current State-of-the-Art (SoA). Moreover, we designed an energy-efficient embedded platform based on GAP8, a novel 8-core IoT processor. Using our embedded platform, we collected a second 20-session dataset to validate the system on a setup which is representative of the final deployment. We obtain 93.7% average accuracy with the TCN, comparable with a SoA SVM approach (91.1%). Finally, we profiled the performance of the network implemented on GAP8 by using an 8-bit quantization strategy to fit the memory constraint of the processor. We reach a 4× lower memory footprint (460 kB) with a performance degradation of only 3% accuracy. We detailed the execution on the GAP8 platform, showing that the quantized network executes a single classification in 12.84 ms at an energy cost of 0.9 mJ, making it suitable for a long-lifetime wearable deployment.
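    The 4× memory reduction follows directly from storing each weight as int8 (1 byte) instead of float32 (4 bytes); a per-tensor symmetric quantizer makes the trade-off concrete. This is a generic NumPy sketch of 8-bit quantization, not the exact scheme used for the GAP8 deployment.

```python
import numpy as np

def quantize_weights_int8(w):
    """Per-tensor symmetric 8-bit quantization: w ~= scale * w_int8.
    Each weight shrinks from 4 bytes to 1 byte (a 4x footprint cut), and the
    reconstruction error per weight is bounded by scale / 2."""
    scale = np.abs(w).max() / 127.0
    w_int = np.round(w / scale).astype(np.int8)
    return w_int, scale

w = np.random.default_rng(0).normal(size=(256, 64)).astype(np.float32)
w_int, scale = quantize_weights_int8(w)
print(w.nbytes // w_int.nbytes)  # prints 4: the float32 vs int8 footprint ratio
```

In practice the per-weight rounding error translates into the few-percent accuracy degradation the paper reports, which is the cost paid for fitting the processor's memory budget.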