
    Decoding HD-EMG Signals for Myoelectric Control-How Small Can the Analysis Window Size be?


    sEMG-based hand gesture recognition with deep learning

    Hand gesture recognition based on surface electromyographic (sEMG) signals is a promising approach for the development of Human-Machine Interfaces (HMIs) with natural control, such as intuitive robot interfaces or poly-articulated prostheses. However, real-world applications are limited by reliability problems due to motion artifacts, postural and temporal variability, and sensor re-positioning. This master thesis is the first application of deep learning to the Unibo-INAIL dataset, the first public sEMG dataset exploring variability between subjects, sessions and arm postures, collected over 8 sessions for each of 7 able-bodied subjects executing 6 hand gestures in 4 arm postures. In the most recent studies, this variability is addressed with training strategies based on training-set composition, which improve inter-posture and inter-day generalization of classical (i.e. non-deep) machine learning classifiers, among which the RBF-kernel SVM yields the highest accuracy. The deep architecture realized in this work is a 1d-CNN implemented in PyTorch, inspired by a 2d-CNN reported to perform well on other public benchmark databases. Various training strategies based on training-set composition were implemented and tested on this 1d-CNN. Multi-session training yields higher inter-session validation accuracies than single-session training. Two-posture training proves to be the best postural training strategy (confirming the benefit of training on more than one posture) and yields 81.2% inter-posture test accuracy. Five-day training proves to be the best multi-day training strategy and yields 75.9% inter-day test accuracy. All results are close to the baseline. Moreover, the results of multi-day training highlight the phenomenon of user adaptation, indicating that training should also prioritize recent data. Though not better than the baseline, the achieved classification accuracies rightfully place the 1d-CNN among the candidates for further research.
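As a rough illustration of the kind of architecture the abstract describes, a minimal 1d-CNN for windowed multi-channel sEMG could look like the PyTorch sketch below. The channel count, window length, number of classes and layer sizes are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal 1d-CNN sketch for sEMG window classification (hypothetical shapes:
# 4 electrode channels, 150-sample windows, 6 gesture classes).
import torch
import torch.nn as nn

class EmgCnn1d(nn.Module):
    def __init__(self, n_channels: int = 4, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one value
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = EmgCnn1d()
logits = model(torch.randn(8, 4, 150))  # a batch of 8 sEMG windows
```

Training on differently composed session/posture subsets, as in the thesis, would then only change which windows are fed to this model, not the architecture itself.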

    Surface EMG-Based Inter-Session/Inter-Subject Gesture Recognition by Leveraging Lightweight All-ConvNet and Transfer Learning

    Gesture recognition using low-resolution instantaneous HD-sEMG images opens up new avenues for the development of more fluid and natural muscle-computer interfaces. However, data variability between inter-session and inter-subject scenarios presents a great challenge. Existing approaches employ very large and complex deep ConvNets or 2SRNN-based domain adaptation methods to approximate the distribution shift caused by this inter-session and inter-subject data variability. These methods therefore require learning millions of training parameters and large pre-training and target-domain datasets in both the pre-training and adaptation stages, which makes them resource-bound and computationally too expensive for deployment in real-time applications. To overcome this problem, we propose All-ConvNet+TL, a model that leverages a lightweight All-ConvNet and transfer learning (TL) to enhance inter-session and inter-subject gesture recognition performance. The All-ConvNet+TL model consists solely of convolutional layers, a simple yet efficient framework for learning invariant and discriminative representations that address the distribution shifts caused by inter-session and inter-subject data variability. Experiments on four datasets demonstrate that the proposed method outperforms the most complex existing approaches by a large margin, achieves state-of-the-art results in inter-session and inter-subject scenarios, and performs on par or competitively on intra-session gesture recognition. These performance gaps increase even further when only a tiny amount of data (e.g., a single trial) is available in the target domain for adaptation. These experimental results provide evidence that current state-of-the-art models may be overparameterized for sEMG-based inter-session and inter-subject gesture recognition tasks.
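The transfer-learning step described above can be sketched as follows: pre-train a conv-only backbone on source subjects, then freeze it and fine-tune only a small head on a tiny target-subject set. The layer sizes, image shape and class count are hypothetical stand-ins, not the paper's actual All-ConvNet configuration.

```python
# Hedged sketch of conv-only backbone + transfer learning (illustrative sizes).
import torch
import torch.nn as nn

backbone = nn.Sequential(          # stands in for the All-ConvNet body
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, 8)            # 8 hypothetical gesture classes

for p in backbone.parameters():    # freeze source-domain features
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
x = torch.randn(4, 1, 8, 16)       # e.g. a single-trial batch of HD-sEMG images
logits = head(backbone(x))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1, 2, 3]))
loss.backward()                    # gradients flow only into the head
optimizer.step()
```

Only the head's parameters are updated during adaptation, which is what keeps the target-domain data requirement small.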

    Machine Learning-Based Hand Gesture Recognition via EMG Data

    Electromyography (EMG) data give information about the electrical activity of muscles. EMG data obtained from the arm through sensors help to understand hand gestures. In this work, hand gesture data taken from the UCI2019 EMG dataset, recorded with the Myo Thalmic armband, were classified with six different machine learning algorithms. Artificial Neural Network (ANN), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Naive Bayes (NB), Decision Tree (DT) and Random Forest (RF) methods were compared on several performance metrics: accuracy, precision, sensitivity, specificity, classification error, kappa, root mean squared error (RMSE) and correlation. The data cover seven hand gestures; 700 samples from 7 classes (100 samples per class) were used in the experiments. The splitting ratio was 0.8-0.2, i.e. 80% of the samples were used for training and 20% for testing. NB was found to be the best of these methods, with the highest accuracy (96.43%) and sensitivity (96.43%) and the lowest RMSE (0.189). Considering the performance results, it can be said that this study recognizes and classifies seven hand gestures successfully in comparison with the literature.
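The evaluation protocol above (700 samples, 7 classes, stratified 80/20 split, several classifiers scored side by side) can be sketched with scikit-learn. The synthetic features below are a stand-in for the real armband data, and only three of the six classifiers are shown.

```python
# Sketch of the 80/20 comparison protocol on synthetic stand-in features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 8))       # 700 samples, 8 hypothetical features
y = np.repeat(np.arange(7), 100)    # 7 gesture classes, 100 samples each
X += y[:, None] * 0.5               # inject separable synthetic structure

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

for name, clf in [("NB", GaussianNB()),
                  ("k-NN", KNeighborsClassifier()),
                  ("SVM", SVC())]:
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {acc:.3f}")
```

The paper's remaining metrics (sensitivity, specificity, kappa, RMSE) could be computed from the same predictions with `sklearn.metrics`.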

    Development and comparison of dataglove and sEMG signal-based algorithms for the improvement of a hand gestures recognition system.

    Hand gesture recognition is a topic widely discussed in the literature, where several techniques are analyzed in terms of both input signal types and algorithms. The main bottleneck of the field is the generalization ability of the classifier, which becomes harder to achieve as the number of gestures to classify increases. This project has two purposes: first, it aims to develop a reliable, high-generalizability classifier, evaluating the difference in performance between dataglove and sEMG signals; second, it offers considerations on the difficulties and advantages of developing an sEMG signal-based hand gesture recognition system, with the objective of providing indications for its improvement. To design the algorithms, data from a publicly available dataset were considered; the data refer to 40 healthy (non-amputee) subjects, each performing 6 repetitions of each of the 17 gestures considered. Both conventional machine learning and deep learning approaches were used and their efficiency compared. The results showed better performance for the dataglove-based classifier, highlighting the informative power of that signal, while the sEMG signal could not provide high generalization. Interestingly, the latter gives better performance when analyzed with classical machine learning approaches, which, through feature selection, allowed identifying both the most significant features and the most informative channels. This study confirmed the intrinsic difficulties in using the sEMG signal, but it provides hints for improving sEMG signal-based hand gesture recognition systems through reduced computational cost and optimized electrode positioning.
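The feature-selection step mentioned above, ranking per-channel sEMG features to find the most informative channels, could look like this univariate-test sketch. The channel count, class count and which channels carry signal are all synthetic assumptions for illustration.

```python
# Illustrative feature selection over per-channel sEMG features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(240, 12))    # 12 hypothetical channel features
y = np.repeat(np.arange(6), 40)   # 6 gesture classes
X[:, 3] += y                      # make channel 3 strongly informative
X[:, 7] += 0.5 * y                # make channel 7 weakly informative

selector = SelectKBest(f_classif, k=2).fit(X, y)
informative = np.flatnonzero(selector.get_support())
print(informative)                # expected to recover channels 3 and 7
```

Dropping the unselected channels is one way the abstract's "reduced computational cost and optimized electrode positioning" could be pursued.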

    The Effect of Space-filling Curves on the Efficiency of Hand Gesture Recognition Based on sEMG Signals

    Over the past few years, deep learning (DL) has revolutionized the field of data analysis. Not only have algorithmic paradigms changed, but performance in various classification and prediction tasks has also improved significantly with respect to the previous state of the art, especially in the area of computer vision. The progress made in computer vision has spilled over into many other domains, such as biomedical engineering. Some recent works are directed towards surface electromyography (sEMG) based hand gesture recognition, often addressed as an image classification problem and solved using tools such as Convolutional Neural Networks (CNNs). This paper extends our previous work on the application of the Hilbert space-filling curve for generating image representations from multi-electrode sEMG signals by investigating how the Hilbert curve compares to the Peano and Z-order space-filling curves. The proposed space-filling mapping methods are evaluated on a variety of network architectures and in some cases yield a classification improvement of at least 3% when used to structure the inputs before feeding them into the original network architectures.
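One member of the space-filling-curve family compared above, the Z-order (Morton) curve, is easy to sketch: de-interleave the bits of a 1-d index to get 2-d coordinates, then lay the flat electrode-sample vector along the curve to form an image. The 8x8 image side is an illustrative assumption, not the paper's configuration.

```python
# Z-order (Morton) mapping of a flat electrode vector onto a 2^n x 2^n image.
import numpy as np

def z_order_xy(d: int) -> tuple:
    """De-interleave the bits of curve index d into (x, y) coordinates."""
    x = y = bit = 0
    while d:
        x |= (d & 1) << bit
        d >>= 1
        y |= (d & 1) << bit
        d >>= 1
        bit += 1
    return x, y

def to_image(samples: np.ndarray, side: int = 8) -> np.ndarray:
    """Place a flat vector of electrode samples along the Z-order curve."""
    img = np.zeros((side, side))
    for d, v in enumerate(samples):
        x, y = z_order_xy(d)
        img[y, x] = v
    return img

img = to_image(np.arange(64.0))  # curve visits (0,0),(1,0),(0,1),(1,1),...
```

Swapping `z_order_xy` for a Hilbert or Peano index-to-coordinate function yields the other mappings the paper evaluates; the locality properties of the chosen curve are what differ.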

    Walking Activity Recognition with sEMG Sensor Array on Thigh Circumference using Convolutional Neural Network

    In the recognition of walking gait modes using surface electromyography (sEMG), an sEMG sensor array provides sensor redundancy and requires less rigorous identification of electrode placements than the conventional placement right in the middle of muscle bellies. However, the potentially less discriminative and noisier signals from the sensor array make it challenging to develop an accurate and robust machine learning classifier for walking activity recognition. In this paper, we explore the use of a convolutional neural network (CNN) classifier with a frequency gradient feature derived from the EMG signal spectrogram for detecting different walking activities using an sEMG sensor array on the thigh circumference. An EMG dataset from five healthy subjects and an amputee, covering five walking activities, namely walking at slow, normal and fast speed, ramp ascending and ramp descending, is used to train and test the CNN-based classifier. Our preliminary findings suggest that the frequency gradient feature can improve the CNN-based classifier's performance for walking activity recognition using an EMG sensor array on the thigh circumference.
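A plausible reading of the feature above is a per-channel spectrogram whose gradient is taken along the frequency axis; the sketch below shows that computation with SciPy. The sampling rate, window length and signal are assumptions, not the paper's actual settings.

```python
# Spectrogram + frequency-axis gradient feature for one sEMG channel (sketch).
import numpy as np
from scipy.signal import spectrogram

fs = 1000                                 # assumed sampling rate in Hz
rng = np.random.default_rng(2)
emg = rng.normal(size=2000)               # 2 s stand-in for one array channel

f, t, Sxx = spectrogram(emg, fs=fs, nperseg=128, noverlap=64)
freq_grad = np.gradient(Sxx, axis=0)      # gradient along the frequency bins
```

Stacking `freq_grad` across the array's channels would give the image-like input a CNN classifier expects.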

    Transfer learning in hand movement intention detection based on surface electromyography signals

    Over the past several years, electromyography (EMG) signals have been used as a natural interface to interact with computers and machines. Recently, deep learning algorithms such as Convolutional Neural Networks (CNNs) have gained interest for decoding hand movement intention from EMG signals. However, deep networks require a large dataset to train properly, and creating such a database for a single subject can be very time-consuming. In this study, we addressed this issue from two perspectives: (i) we proposed a subject-transfer framework that uses the knowledge learned from other subjects to compensate for a target subject's limited data; (ii) we proposed a task-transfer framework in which the knowledge learned from a set of basic hand movements is used to classify more complex movements that combine those basic movements. We introduced two CNN-based architectures for hand movement intention detection and a subject-transfer learning approach. The classifiers are tested on the Nearlab dataset, an sEMG hand/wrist movement dataset covering 8 movements and their combinations across 11 subjects, and on the open-source hand sEMG dataset NinaPro DataBase 2 (DB2). On the Nearlab dataset, the subject-transfer learning approach improved the average classification accuracy of the proposed deep classifier from 92.60 to 93.30% when the classifier utilized 10 other subjects' data via the proposed framework. For NinaPro DB2 exercise B (17 hand movement classes), the improvement was from 81.43 to 82.87%. Moreover, three stages of analysis in the task-transfer approach showed that it is possible to classify combined hand movements using the knowledge learned from a set of basic hand movements with zero or few samples and only a few seconds of data from the target movement classes. The first stage takes advantage of shared muscle synergies to classify combined movements, while the second and third stages use few-shot learning and fine-tuning on samples from the target domain to further train the classifier trained on the source database. The use of information learned from basic hand movements improved the classification accuracy of combined hand movements by 10%.
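The zero-shot first stage can be illustrated with a toy scoring rule: if a combined movement shares synergies with its constituent basic movements, it can be scored from the basic-movement classifier's outputs alone, with no target-domain data. The movement names, combination table and summing rule below are hypothetical illustrations, not the paper's actual method.

```python
# Toy zero-shot scoring of combined movements from basic-movement logits.
import numpy as np

basic = ["wrist_flex", "wrist_ext", "hand_close", "hand_open"]  # hypothetical
combos = {"flex+close": (0, 2), "ext+open": (1, 3)}             # hypothetical

def combo_scores(basic_logits: np.ndarray) -> dict:
    """Score each combined movement as the sum of its constituents' logits."""
    return {name: float(basic_logits[list(idx)].sum())
            for name, idx in combos.items()}

logits = np.array([2.1, -0.5, 1.8, -1.0])  # classifier output for one window
scores = combo_scores(logits)
best = max(scores, key=scores.get)
```

The second and third stages would then refine this by fine-tuning on the few available target-domain samples instead of relying on the summing rule alone.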