Signals to Spikes for Neuromorphic Regulated Reservoir Computing and EMG Hand Gesture Recognition
Surface electromyogram (sEMG) signals result from muscle movement and hence
they are an ideal candidate for benchmarking event-driven sensing and
computing. We propose a simple yet novel approach for optimizing the spike
encoding algorithm's hyper-parameters inspired by the readout layer concept in
reservoir computing. Using a simple machine learning algorithm after spike
encoding, we report performance higher than the state-of-the-art spiking neural
networks on two open-source datasets for hand gesture recognition. The spike
encoded data is processed through a spiking reservoir with a biologically
inspired topology and neuron model. When trained with the unsupervised activity
regulation CRITICAL algorithm to operate at the edge of chaos, the reservoir
yields better performance than state-of-the-art convolutional neural networks.
The reservoir performance with regulated activity was found to be 89.72% for
the Roshambo EMG dataset and 70.6% for the EMG subset of sensor fusion dataset.
Therefore, the biologically-inspired computing paradigm, which is known for
being power efficient, also shows great potential when compared with
conventional AI algorithms.
Comment: Accepted to the International Conference on Neuromorphic Systems (ICONS 2021).
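As a rough illustration of threshold-based spike encoding of an analog signal (a hedged sketch only: the paper's actual encoder, its readout-driven hyper-parameter search, and the CRITICAL regulation are not reproduced here, and the signal and thresholds below are synthetic):

```python
import numpy as np

def delta_spike_encode(signal, threshold):
    """Delta-modulation encoding: emit an UP/DOWN spike each time the
    signal moves more than `threshold` away from the tracked level."""
    level = signal[0]
    up, down = [], []
    for t, x in enumerate(signal):
        while x - level > threshold:   # signal rose past the band
            level += threshold
            up.append(t)
        while level - x > threshold:   # signal fell past the band
            level -= threshold
            down.append(t)
    return np.array(up), np.array(down)

# Sweeping the threshold hyper-parameter, which the paper tunes via a
# readout-layer criterion; here we only show how spike counts vary.
t = np.linspace(0, 1, 200)
semg = np.sin(2 * np.pi * 5 * t)          # toy stand-in for an sEMG burst
for thr in (0.05, 0.1, 0.2):
    up, down = delta_spike_encode(semg, thr)
    print(thr, len(up) + len(down))
```

A smaller threshold yields denser spike trains, which is exactly the trade-off (temporal fidelity vs. spike-rate cost) the encoding hyper-parameters control.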
Estimation and Early Prediction of Grip Force Based on sEMG Signals and Deep Recurrent Neural Networks
Hands are used for communicating with the surrounding environment and have a
complex structure that enables them to perform various tasks with their
multiple degrees of freedom. Hand amputation can prevent a person from
performing their daily activities. In that event, finding a suitable, fast, and
reliable alternative for the missing limb can affect the lives of people who
suffer from such conditions. As the most important use of the hands is to grasp
objects, the purpose of this study is to accurately predict gripping force from
surface electromyography (sEMG) signals during a pinch-type grip. In that
regard, gripping force and sEMG signals are derived from 10 healthy subjects.
Results show that for this task, recurrent networks outperform nonrecurrent
ones, such as a fully connected multilayer perceptron (MLP) network. Gated
recurrent unit (GRU) and long short-term memory (LSTM) networks can predict the
gripping force with R-squared values of 0.994 and 0.992, respectively, and a
prediction rate of over 1300 predictions per second. The predominant advantage
of using such frameworks is that the gripping force can be predicted straight
from preprocessed sEMG signals without any form of feature extraction, not to
mention the ability to predict future force values using larger prediction
horizons adequately. The methods presented in this study can be used in the
myoelectric control of prosthetic hands or robotic grippers.
Comment: 9 pages, accepted for publication in the Journal of the Brazilian Society of Mechanical Sciences and Engineering.
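The "larger prediction horizons" idea above amounts to pairing each sEMG window with a force target several steps in the future. A minimal windowing sketch (synthetic data; the study's actual GRU/LSTM training is not shown, and all sizes here are illustrative):

```python
import numpy as np

def make_sequences(emg, force, win, horizon):
    """Slice multichannel sEMG into overlapping windows, each paired with
    the grip-force value `horizon` steps ahead (early prediction)."""
    X, y = [], []
    for start in range(len(emg) - win - horizon + 1):
        X.append(emg[start:start + win])
        y.append(force[start + win + horizon - 1])
    return np.stack(X), np.array(y)

# Toy data: 1000 samples, 8 sEMG channels, synthetic force trace.
rng = np.random.default_rng(0)
emg = rng.standard_normal((1000, 8))
force = np.convolve(np.abs(emg).mean(axis=1), np.ones(50) / 50, mode="same")

X, y = make_sequences(emg, force, win=200, horizon=25)
print(X.shape, y.shape)   # (776, 200, 8) (776,)
```

Each `(win, channels)` tensor in `X` can feed a recurrent network directly, which is what lets the approach skip hand-crafted feature extraction.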
Machine Learning-Based Hand Gesture Recognition via EMG Data
Electromyography (EMG) data gives information about the electrical activity related to muscles. EMG data obtained from the arm through sensors helps to understand hand gestures. For this work, hand gesture data taken from the UCI2019 EMG dataset, obtained from a MYO Thalmic armband, were classified with six different machine learning algorithms. Artificial Neural Network (ANN), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Naive Bayes (NB), Decision Tree (DT) and Random Forest (RF) methods were compared based on several performance metrics: accuracy, precision, sensitivity, specificity, classification error, kappa, root mean squared error (RMSE) and correlation. The data belong to seven hand gestures; 700 samples from 7 classes (100 samples per group) were used in the experiments. The splitting ratio was 0.8-0.2, i.e. 80% of the samples were used in training and 20% in the testing phase of the classifier. NB was found to be the best among the methods because of its high accuracy (96.43%) and sensitivity (96.43%) and the lowest RMSE (0.189). Considering the results of the performance parameters, it can be said that this study recognizes and classifies seven hand gestures successfully in comparison with the literature.
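To make the winning method concrete, here is a from-scratch Gaussian Naive Bayes on a synthetic 7-class, 100-samples-per-class problem with the same 80/20 split (a sketch only: the features, separability, and resulting accuracy are illustrative, not the UCI2019 data):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self
    def predict(self, X):
        # log p(x|c) + log p(c), features assumed independent given class
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

# Toy 7-class problem, 100 samples per class, 80/20 train/test split.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(2 * c, 1.0, (100, 8)) for c in range(7)])
y = np.repeat(np.arange(7), 100)
idx = rng.permutation(700)
tr, te = idx[:560], idx[560:]
model = GaussianNB().fit(X[tr], y[tr])
acc = float(np.mean(model.predict(X[te]) == y[te]))
print(round(acc, 3))
```

The per-class independence assumption is what makes NB so cheap to fit, which helps explain its strong showing on modestly sized EMG feature sets.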
A Transferable Adaptive Domain Adversarial Neural Network for Virtual Reality Augmented EMG-Based Gesture Recognition
Within the field of electromyography-based (EMG) gesture recognition,
disparities exist between the offline accuracy reported in the literature and
the real-time usability of a classifier. This gap mainly stems from two
factors: 1) The absence of a controller, making the data collected dissimilar
to actual control. 2) The difficulty of including the four main dynamic factors
(gesture intensity, limb position, electrode shift, and transient changes in
the signal), as including their permutations drastically increases the amount
of data to be recorded. Contrarily, online datasets are limited to the exact
EMG-based controller used to record them, necessitating the recording of a new
dataset for each control method or variant to be tested. Consequently, this
paper proposes a new type of dataset to serve as an intermediate between
offline and online datasets, by recording the data using a real-time
experimental protocol. The protocol, performed in virtual reality, includes the
four main dynamic factors and uses an EMG-independent controller to guide
movements. This EMG-independent feedback ensures that the user is in-the-loop
during recording, while enabling the resulting dynamic dataset to be used as an
EMG-based benchmark. The dataset comprises 20 able-bodied participants
completing three to four sessions over a period of 14 to 21 days. The ability
of the dynamic dataset to serve as a benchmark is leveraged to evaluate the
impact of different recalibration techniques for long-term (across-day) gesture
recognition, including a novel algorithm, named TADANN. TADANN consistently and
significantly (p<0.05) outperforms using fine-tuning as the recalibration
technique.
Comment: 10 pages. The last three authors shared senior authorship.
Surface Electromyography and Artificial Intelligence for Human Activity Recognition - A Systematic Review on Methods, Emerging Trends, Applications, Challenges, and Future Implementation
Human activity recognition (HAR) has become increasingly popular in recent years due to its potential to meet the growing needs of various industries. Electromyography (EMG) is essential in various clinical and biological settings. It is a metric that helps doctors diagnose conditions that affect muscle activation patterns and monitor patients’ progress in rehabilitation, disease diagnosis, motion intention recognition, etc. This review summarizes the various research papers based on HAR with EMG. Over recent years, the integration of Artificial Intelligence (AI) has catalyzed remarkable advancements in the classification of biomedical signals, with a particular focus on EMG data. Firstly, this review meticulously curates a wide array of research papers that have contributed significantly to the evolution of EMG-based activity recognition. By surveying the existing literature, we provide an insightful overview of the key findings and innovations that have propelled this field forward. It explores the various approaches utilized for preprocessing EMG signals, including noise reduction, baseline correction, filtering, and normalization, which ensure that the EMG data is suitably prepared for subsequent analysis. In addition, we unravel the multitude of techniques employed to extract meaningful features from raw EMG data, encompassing both time-domain and frequency-domain features. These techniques are fundamental to achieving a comprehensive characterization of muscle activity patterns. Furthermore, we provide an extensive overview of both Machine Learning (ML) and Deep Learning (DL) classification methods, showcasing their respective strengths, limitations, and real-world applications in recognizing diverse human activities from EMG signals. In examining the hardware infrastructure for HAR with EMG, the synergy between hardware and software is underscored as paramount for enabling real-time monitoring.
Finally, we also identify open issues and future research directions that may point to new lines of inquiry for ongoing research toward EMG-based detection.
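The preprocessing chain the review describes (baseline correction, rectification, smoothing, normalization) can be sketched in a few lines of NumPy. This is a generic illustration under assumed parameters (sampling rate, window length), not any specific pipeline from the reviewed papers:

```python
import numpy as np

def preprocess_emg(raw, fs=1000, win_ms=100):
    """Common sEMG conditioning: remove DC offset, full-wave rectify,
    smooth with a moving-RMS envelope, then z-score normalize."""
    x = raw - raw.mean()                    # baseline (DC) correction
    rect = np.abs(x)                        # full-wave rectification
    n = max(1, int(fs * win_ms / 1000))
    kernel = np.ones(n) / n                 # moving-average of squared signal
    envelope = np.sqrt(np.convolve(rect ** 2, kernel, mode="same"))
    return (envelope - envelope.mean()) / (envelope.std() + 1e-9)

# Synthetic raw sEMG: noise whose amplitude is modulated by "muscle bursts".
rng = np.random.default_rng(0)
raw = rng.standard_normal(5000) * (1 + np.sin(np.linspace(0, 6 * np.pi, 5000)) ** 2)
feat = preprocess_emg(raw)
print(feat.shape)
```

The resulting envelope is the kind of conditioned signal that time-domain feature extraction (MAV, RMS, zero crossings) or a classifier would then consume.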
ViT-MDHGR: Cross-day Reliability and Agility in Dynamic Hand Gesture Prediction via HD-sEMG Signal Decoding
Surface electromyography (sEMG) and high-density sEMG (HD-sEMG) biosignals
have been extensively investigated for myoelectric control of prosthetic
devices, neurorobotics, and more recently human-computer interfaces because of
their capability for hand gesture recognition/prediction in a wearable and
non-invasive manner. High intraday (same-day) performance has been reported.
However, the interday performance (separating training and testing days) is
substantially degraded due to the poor generalizability of conventional
approaches over time, hindering the application of such techniques in real-life
practices. There are limited recent studies on the feasibility of multi-day
hand gesture recognition. The existing studies face a major challenge: the need
for long sEMG epochs makes the corresponding neural interfaces impractical due
to the induced delay in myoelectric control. This paper proposes a compact
ViT-based network for multi-day dynamic hand gesture prediction. We tackle the
main challenge as the proposed model only relies on very short HD-sEMG signal
windows (i.e., 50 ms, accounting for only one-sixth of the convention for
real-time myoelectric implementation), boosting agility and responsiveness. Our
proposed model can predict 11 dynamic gestures for 20 subjects with an average
accuracy of over 71% on the testing day, 3-25 days after training. Moreover,
when calibrated on just a small portion of data from the testing day, the
proposed model can achieve over 92% accuracy by retraining less than 10% of the
parameters for computational efficiency.
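Retraining only a small fraction of parameters for calibration can be pictured as freezing a backbone and re-fitting just a linear readout on new-day data. The sketch below uses a fixed random projection as a stand-in for a pretrained backbone (everything here, model and data alike, is illustrative, not the ViT-MDHGR architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "feature extractor": a fixed random projection standing in for a
# pretrained backbone; only the readout W is re-fit during calibration.
P = rng.standard_normal((64, 256))           # frozen backbone weights

def features(x):
    return np.tanh(x @ P)                    # fixed, never retrained

# Small calibration set from the "new day": 11 gesture classes.
Xc = rng.standard_normal((50, 64))
yc = rng.integers(0, 11, 50)
Y = np.eye(11)[yc]                           # one-hot targets

# Re-fit only the readout via ridge-regularized least squares.
F = features(Xc)
W = np.linalg.solve(F.T @ F + 1e-2 * np.eye(256), F.T @ Y)

pred = np.argmax(features(Xc) @ W, axis=1)
print(float((pred == yc).mean()))
```

Because only `W` changes, calibration is fast and cheap, which is the point of retraining a small fraction of the parameters across days.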
Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey
Deep Learning (DL) has recently been employed to build smart systems that perform incredibly well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. In several fields, considerable improvements in computing hardware and the increasing need for big data analytics have boosted DL work. In recent years, physiological signal processing has strongly benefited from deep learning; in particular, there is an exponential increase in the number of studies concerning the processing of electromyographic (EMG) signals using DL methods. This phenomenon is mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g. Ninapro. Such a growing trend has inspired us to seek and review recent papers focusing on processing EMG signals using DL methods. Referring to the Scopus database, a systematic literature search of papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after a full-text analysis. The bibliometric research revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification, and Other Applications. The review process also confirmed the increasing trend in terms of published papers: the number of papers published in 2018 is four times the amount published the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, thus supporting our hypothesis. Finally, it is worth reporting that the convolutional neural network (CNN) is the most used topology among the several DL architectures involved; approximately sixty percent of the reviewed articles consider a CNN.
Machine Learning for Hand Gesture Classification from Surface Electromyography Signals
Classifying hand gestures from Surface Electromyography (sEMG) is a process which has applications in human-machine interaction, rehabilitation and prosthetic control. Reduction in the cost and increase in the availability of necessary hardware over recent years has made sEMG a more viable solution for hand gesture classification. The research challenge is the development of processes to robustly and accurately predict the current gesture based on incoming sEMG data.
This thesis presents a set of methods, techniques and designs that improve upon evaluation of, and performance on, the classification problem as a whole. These are brought together to set a new baseline for the potential classification. Evaluation is improved by careful choice of metrics and design of cross-validation techniques that account for data bias caused by common experimental techniques. A landmark study is re-evaluated with these improved techniques, and it is shown that data augmentation can be used to significantly improve upon the performance using conventional classification methods.
A novel neural network architecture and supporting improvements are presented that further improve performance and is refined such that the network can achieve similar performance with many fewer parameters than competing designs. Supporting techniques such as subject adaptation and smoothing algorithms are then explored to improve overall performance and also provide more nuanced trade-offs with various aspects of performance, such as incurred latency and prediction smoothness.
A new study is presented which compares the performance potential of medical-grade electrodes and a low-cost commercial alternative, showing that for a modest-sized gesture set they can compete. The data is also used to explore data labelling in experimental design and to evaluate the numerous aspects of performance that must be traded off.
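The data augmentation mentioned above typically means generating perturbed copies of training windows. A minimal sketch of two cheap sEMG-style augmentations (the thesis's actual augmentation scheme is not specified here; noise level and shift range are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(window, noise_std=0.05, max_shift=10):
    """Two cheap augmentations for a training window: additive Gaussian
    noise plus a small circular shift along the time axis."""
    noisy = window + rng.normal(0, noise_std, window.shape)
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(noisy, shift, axis=0)

window = rng.standard_normal((200, 8))   # one 200-sample, 8-channel window
aug = augment(window)
print(aug.shape)
```

Feeding such perturbed copies alongside the originals exposes the classifier to the kinds of variation (sensor noise, timing jitter) that hurt cross-session performance.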