Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey
Deep Learning (DL) has recently been employed to build smart systems that perform remarkably well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. In several fields, considerable improvements in computing hardware and the increasing need for big-data analytics have boosted DL work. In recent years, physiological signal processing has strongly benefited from deep learning, and the number of studies concerning the processing of electromyographic (EMG) signals with DL methods is growing exponentially. This phenomenon is mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g. Ninapro. This growing trend inspired us to seek out and review recent papers focusing on processing EMG signals with DL methods. A systematic literature search of the Scopus database for papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after full-text analysis. The bibliometric research revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification, and Other Applications. The review process also confirmed the increasing trend in published papers: the number of papers published in 2018 is four times the number published the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, thus supporting our hypothesis. Finally, it is worth reporting that the convolutional neural network (CNN) is the most used topology among the DL architectures involved; approximately sixty percent of the reviewed articles consider a CNN.
Thumbs up, thumbs down: non-verbal human-robot interaction through real-time EMG classification via inductive and supervised transductive transfer learning
In this study, we present a transfer learning method for gesture classification via an inductive and supervised transductive approach with an electromyographic dataset gathered via the Myo armband. A ternary gesture classification problem is presented by the states "thumbs up", "thumbs down", and "relax", in order to communicate in the affirmative or negative in a non-verbal fashion to a machine. Of the nine statistical learning paradigms benchmarked over 10-fold cross-validation (with three methods of feature selection), an ensemble of Random Forest and Support Vector Machine through voting achieves the best score of 91.74% with a rule-based feature selection method. When new subjects are considered, this machine learning approach fails to generalise to new data, and thus the processes of Inductive and Supervised Transductive Transfer Learning are introduced with a short calibration exercise (15 s). Without transfer, 5 s of data per class proves strongest for classification (versus one through seven seconds), but only at an accuracy of 55%; when a short 5 s per-class calibration task is introduced via the suggested transfer method, a Random Forest can then classify unseen data from the calibrated subject at an accuracy of around 97%, outperforming the 83% accuracy boasted by the proprietary Myo system. Finally, a preliminary application is presented through social interaction with a humanoid Pepper robot, where the use of our approach and a most-common-class metaclassifier achieves 100% accuracy for all trials of a "20 Questions" game.
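The supervised transductive step described above amounts to retraining the voting ensemble after appending a short labelled calibration set from the new subject to the source data. The sketch below illustrates this with scikit-learn; the feature dimensions, sample counts, and synthetic data are illustrative stand-ins, not the paper's actual Myo feature pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 8-dimensional statistical features from source
# subjects, plus a short labelled calibration set from a new subject.
# Classes 0/1/2 play the roles of "thumbs up"/"thumbs down"/"relax".
X_source, y_source = rng.normal(size=(300, 8)), rng.integers(0, 3, 300)
X_calib, y_calib = rng.normal(size=(30, 8)), rng.integers(0, 3, 30)

# Voting ensemble of Random Forest and SVM, as benchmarked in the study.
ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
], voting="soft")

# Supervised transductive transfer: augment the source data with the
# labelled calibration samples, then retrain before classifying the
# calibrated subject's data.
X_train = np.vstack([X_source, X_calib])
y_train = np.concatenate([y_source, y_calib])
ensemble.fit(X_train, y_train)
preds = ensemble.predict(X_calib[:5])
```

On random data the predictions are of course meaningless; the point is the training-set composition, not the accuracy figure.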
sEMG-based hand gesture recognition with deep learning
Hand gesture recognition based on surface electromyographic (sEMG) signals is a promising approach for the development of Human-Machine Interfaces (HMIs) with a natural control, such as intuitive robot interfaces or poly-articulated prostheses. However, real-world applications are limited by reliability problems due to motion artifacts, postural and temporal variability, and sensor re-positioning.
This master's thesis is the first application of deep learning to the Unibo-INAIL dataset, the first public sEMG dataset exploring variability between subjects, sessions and arm postures, collected over 8 sessions for each of 7 able-bodied subjects executing 6 hand gestures in 4 arm postures. In the most recent studies, the variability is addressed with training strategies based on training set composition, which improve inter-posture and inter-day generalization of classical (i.e. non-deep) machine learning classifiers, among which the RBF-kernel SVM yields the highest accuracy.
The deep architecture realized in this work is a 1d-CNN implemented in PyTorch, inspired by a 2d-CNN reported to perform well on other public benchmark databases. On this 1d-CNN, various training strategies based on training set composition were implemented and tested.
Multi-session training proves to yield higher inter-session validation accuracies than single-session training. Two-posture training proves to be the best postural training (proving the benefit of training on more than one posture), and yields 81.2% inter-posture test accuracy. Five-day training proves to be the best multi-day training, and yields 75.9% inter-day test accuracy. All results are close to the baseline. Moreover, the results of multi-day trainings highlight the phenomenon of user adaptation, indicating that training should also prioritize recent data.
Though not better than the baseline, the achieved classification accuracies rightfully place the 1d-CNN among the candidates for further research.
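The distinguishing step of a 1d-CNN on sEMG is a one-dimensional convolution along the time axis, shared across channels, rather than the 2-D convolution used on image-like EMG maps. A minimal NumPy sketch of that forward step follows; the channel count, window length, and filter sizes are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid-mode 1-D convolution over the time axis.
    x: (in_channels, time); kernels: (out_channels, in_channels, width)."""
    out_ch, in_ch, width = kernels.shape
    ch, T = x.shape
    assert ch == in_ch, "channel counts must match"
    out_len = (T - width) // stride + 1
    out = np.zeros((out_ch, out_len))
    for o in range(out_ch):
        for t in range(out_len):
            window = x[:, t * stride:t * stride + width]
            out[o, t] = np.sum(window * kernels[o])
    return out

# Hypothetical sEMG window: 4 electrode channels, 100 time samples.
rng = np.random.default_rng(0)
emg = rng.normal(size=(4, 100))
filters = rng.normal(size=(8, 4, 5))              # 8 temporal filters, width 5
features = np.maximum(conv1d(emg, filters), 0.0)  # ReLU activation
print(features.shape)  # → (8, 96)
```

In a framework such as PyTorch this whole function corresponds to a single `Conv1d` layer; the loop form is only meant to make the time-axis sliding window explicit.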
Interpreting Deep Learning Features for Myoelectric Control: A Comparison with Handcrafted Features
The research in myoelectric control systems primarily focuses on extracting discriminative representations from the electromyographic (EMG) signal by designing handcrafted features. Recently, deep learning techniques have been applied to the challenging task of EMG-based gesture recognition. The adoption of these techniques slowly shifts the focus from feature engineering to feature learning. However, the black-box nature of deep learning makes it hard to understand the type of information learned by the network and how it relates to handcrafted features. Additionally, due to the high variability in EMG recordings between participants, deep features tend to generalize poorly across subjects using standard training methods. Consequently, this work introduces a new multi-domain learning algorithm, named ADANN, which significantly enhances (p=0.00004) inter-subject classification accuracy by an average of 19.40% compared to standard training. Using ADANN-generated features, the main contribution of this work is to provide the first topological data analysis of EMG-based gesture recognition for the characterisation of the information encoded within a deep network, using handcrafted features as landmarks. This analysis reveals that handcrafted features and the learned features (in the earlier layers) both try to discriminate between all gestures, but do not encode the same information to do so. Furthermore, convolutional network visualization techniques reveal that learned features tend to ignore the most activated channel during gesture contraction, which is in stark contrast with the prevalence of handcrafted features designed to capture amplitude information. Overall, this work paves the way for hybrid feature sets by providing a clear guideline of complementary information encoded within learned and handcrafted features.
Comment: The first two authors shared first authorship. The last three authors shared senior authorship. 32 pages.
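The handcrafted features used as landmarks in this line of work are typically classic time-domain descriptors; the amplitude orientation the abstract contrasts against learned features is visible directly in their definitions. A sketch of three standard ones (mean absolute value, waveform length, zero crossings) is shown below; this is a generic illustration, not the paper's exact feature set.

```python
import numpy as np

def handcrafted_features(window, zc_threshold=0.0):
    """Three classic time-domain features for one EMG channel window."""
    mav = np.mean(np.abs(window))         # mean absolute value: pure amplitude
    wl = np.sum(np.abs(np.diff(window)))  # waveform length: cumulative variation
    signs = np.sign(window)
    # zero crossings: sign changes whose step exceeds a noise threshold
    zc = int(np.sum((signs[:-1] * signs[1:] < 0) &
                    (np.abs(np.diff(window)) > zc_threshold)))
    return mav, wl, zc

# Hypothetical 200-sample EMG window (synthetic noise for illustration).
rng = np.random.default_rng(0)
window = rng.normal(size=200)
mav, wl, zc = handcrafted_features(window)
```

MAV and waveform length grow with contraction amplitude, which is exactly the kind of channel-amplitude information the visualization analysis found the learned features tend to ignore.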
Fall risk detection mechanism in the elderly, based on electromyographic signals, through the use of artificial intelligence
Introduction: The tests used to classify older adults at risk of falls are questioned in the literature. Tools from the field of artificial intelligence are an alternative for classifying older adults more precisely. Objective: To identify the risk of falls in the elderly through electromyographic signals of the lower limb, using tools from the field of artificial intelligence. Methods: A descriptive study design was used. The unit of analysis was made up of 32 older adults (16 with and 16 without risk of falls). The electrical activity of the lower limb muscles was recorded during the functional walking gesture. The gait cycles obtained were divided into training and validation sets; attributes were then selected from the amplitude variable using the Weka software. Finally, a Support Vector Machine (SVM) classifier was implemented. Results: A two-class classifier (older adults with and without risk of falls) based on SVM was built, whose performance was: Kappa index 0.97 (almost perfect agreement strength), sensitivity 97%, specificity 100%. Conclusions: The SVM artificial intelligence technique applied to the analysis of lower limb electromyographic signals during walking can be considered a precision tool for the diagnosis, monitoring and follow-up of older adults with and without risk of falls.
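The evaluation pipeline described in the Methods and Results (SVM on amplitude attributes, reported via Kappa, sensitivity, and specificity) can be sketched with scikit-learn as follows; the synthetic features, class means, and split sizes are assumptions for illustration, not the study's data or Weka-selected attributes.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Hypothetical amplitude features per gait cycle:
# class 0 = no fall risk, class 1 = fall risk (separated synthetic clusters).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (80, 6)),
               rng.normal(2.0, 1.0, (80, 6))])
y = np.array([0] * 80 + [1] * 80)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=1)

# Two-class SVM classifier, then the three reported performance measures.
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
pred = clf.predict(X_te)

kappa = cohen_kappa_score(y_te, pred)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
```

On well-separated synthetic clusters all three measures come out high; with real EMG amplitude attributes the same three lines of metric code yield figures comparable to those reported (Kappa 0.97, sensitivity 97%, specificity 100%).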