119 research outputs found

    Surface Electromyography and Artificial Intelligence for Human Activity Recognition - A Systematic Review on Methods, Emerging Trends, Applications, Challenges, and Future Implementation

    Human activity recognition (HAR) has become increasingly popular in recent years due to its potential to meet the growing needs of various industries. Electromyography (EMG) is essential in various clinical and biological settings: it helps doctors diagnose conditions that affect muscle activation patterns and supports applications such as rehabilitation monitoring, disease diagnosis, and motion intention recognition. This review summarizes research on HAR with EMG. Over recent years, the integration of Artificial Intelligence (AI) has catalyzed remarkable advances in the classification of biomedical signals, with a particular focus on EMG data. Firstly, this review curates a wide array of research papers that have contributed significantly to the evolution of EMG-based activity recognition. By surveying the existing literature, we provide an insightful overview of the key findings and innovations that have propelled this field forward. We then explore the various approaches used for preprocessing EMG signals, including noise reduction, baseline correction, filtering, and normalization, which ensure that the EMG data are suitably prepared for subsequent analysis. In addition, we unravel the multitude of techniques employed to extract meaningful features from raw EMG data, encompassing both time-domain and frequency-domain features. These techniques are fundamental to a comprehensive characterization of muscle activity patterns. Furthermore, we provide an extensive overview of both Machine Learning (ML) and Deep Learning (DL) classification methods, showcasing their respective strengths, limitations, and real-world applications in recognizing diverse human activities from EMG signals. In examining the hardware infrastructure for HAR with EMG, the synergy between hardware and software is underscored as paramount for enabling real-time monitoring. Finally, we identify open issues and future research directions that may point to new lines of inquiry for ongoing research toward EMG-based detection.
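
    To make the surveyed pipeline concrete, the sketch below illustrates typical sEMG preprocessing (baseline correction, band-pass and notch filtering, normalization) and a handful of standard time- and frequency-domain features, using NumPy and SciPy. The sampling rate, cut-off frequencies, and feature set are illustrative assumptions, not prescriptions from the review.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch

FS = 1000  # assumed sampling rate in Hz; real sEMG setups vary (e.g. 200 Hz armbands, 2 kHz lab systems)

def preprocess_emg(raw, fs=FS):
    """Typical sEMG conditioning: baseline removal, band-pass, notch, normalization."""
    x = raw - np.mean(raw)                      # baseline (DC offset) correction
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x)                       # keep the ~20-450 Hz sEMG band
    b, a = iirnotch(50, Q=30, fs=fs)            # suppress power-line interference (50/60 Hz)
    x = filtfilt(b, a, x)
    return x / (np.max(np.abs(x)) + 1e-12)      # amplitude normalization

def time_domain_features(x):
    """Classic time-domain features."""
    mav = np.mean(np.abs(x))                    # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))              # root mean square
    wl = np.sum(np.abs(np.diff(x)))             # waveform length
    zc = np.sum(np.diff(np.sign(x)) != 0)       # zero crossings
    return np.array([mav, rms, wl, zc])

def frequency_domain_features(x, fs=FS):
    """Spectral features from a Welch power spectrum."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    mnf = np.sum(f * pxx) / np.sum(pxx)         # mean frequency
    cdf = np.cumsum(pxx) / np.sum(pxx)
    mdf = f[np.searchsorted(cdf, 0.5)]          # median frequency
    return np.array([mnf, mdf])
```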

    Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey

    Deep Learning (DL) has recently been employed to build smart systems that perform incredibly well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. In several fields, considerable improvements in computing hardware and the increasing need for big-data analytics have boosted DL work. In recent years, physiological signal processing has strongly benefited from deep learning, and there has been an exponential increase in the number of studies concerning the processing of electromyographic (EMG) signals using DL methods. This phenomenon is mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g. Ninapro. Such a growing trend has inspired us to seek out and review recent papers focusing on processing EMG signals using DL methods. Referring to the Scopus database, a systematic literature search of papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after a full-text analysis. The bibliometric research revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification, and Other Applications. The review process also confirmed the increasing trend in published papers: the number of papers published in 2018 is four times that published the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, thus supporting our hypothesis. Finally, it is worth reporting that the convolutional neural network (CNN) is the most used topology among the DL architectures involved; approximately sixty percent of the reviewed articles consider a CNN.
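
    As a point of reference for the dominant topology, below is a minimal PyTorch sketch of a 1-D CNN classifying windows of multi-channel sEMG. The channel count, window length, class count, and layer sizes are placeholders; no specific reviewed paper is reproduced here.

```python
import torch
import torch.nn as nn

class EMGConvNet(nn.Module):
    """Minimal 1-D CNN over a window of multi-channel sEMG (channels x samples)."""
    def __init__(self, n_channels=8, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, padding=4),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4),
            nn.BatchNorm1d(64), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, n_channels, window_samples)
        return self.classifier(self.features(x).squeeze(-1))

model = EMGConvNet()
logits = model(torch.randn(4, 8, 200))        # e.g. 4 windows of 200 samples from 8 electrodes
```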

    A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics

    Muscle force and joint kinematics estimation from surface electromyography (sEMG) is essential for real-time biomechanical analysis of the dynamic interplay among neural muscle stimulation, muscle dynamics, and kinetics. Recent advances in deep neural networks (DNNs) have shown the potential to improve biomechanical analysis in a fully automated and reproducible manner. However, the small-sample nature of biomechanical data and the need for physical interpretability limit the application of DNNs. This paper presents a novel physics-informed low-shot learning method for sEMG-based estimation of muscle force and joint kinematics. The method seamlessly integrates Lagrange's equation of motion and an inverse dynamic muscle model into a generative adversarial network (GAN) framework for structured feature decoding and extrapolated estimation from small-sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to constrain the structured decoding of high-level features so that it follows the laws of physics, and a physics-informed policy gradient is designed to improve adversarial learning efficiency by rewarding consistency between the extrapolated estimations and the physical references. Experimental validations are conducted in two scenarios (i.e., walking trials and wrist motion trials). Results indicate that the estimated muscle forces and joint kinematics are unbiased relative to physics-based inverse dynamics, and that the method outperforms the selected benchmarks, including a physics-informed convolutional neural network (PI-CNN), a vanilla generative adversarial network (GAN), and a multi-layer extreme learning machine (ML-ELM).
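
    The following sketch illustrates the general idea of a physics-informed generator loss: an adversarial term plus a penalty on the residual of an equation of motion. It uses a deliberately simplified linear model M·q̈ + C·q̇ + K·q = τ as a stand-in; the paper's actual Lagrangian formulation and inverse dynamic muscle model are considerably richer, and all names and parameters here are hypothetical.

```python
import torch

def physics_residual(q, tau_pred, mass, damping, stiffness, dt):
    """Residual of a simplified equation of motion M*q'' + C*q' + K*q = tau.

    A linear stand-in for the paper's Lagrangian/inverse-dynamics constraint;
    mass, damping, and stiffness are hypothetical scalars, dt the sample period.
    """
    qd = torch.gradient(q, spacing=dt, dim=-1)[0]    # joint velocity via finite differences
    qdd = torch.gradient(qd, spacing=dt, dim=-1)[0]  # joint acceleration
    return mass * qdd + damping * qd + stiffness * q - tau_pred

def generator_loss(d_fake, q_pred, tau_pred, lam=1.0, **phys):
    """Adversarial term plus a physics penalty rewarding estimates that are
    consistent with the assumed equation of motion."""
    adv = -torch.mean(torch.log(d_fake + 1e-8))      # standard non-saturating GAN term
    phy = torch.mean(physics_residual(q_pred, tau_pred, **phys) ** 2)
    return adv + lam * phy
```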

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.

    EMG-BASED HAND GESTURE RECOGNITION USING DEEP LEARNING AND SIGNAL-TO-IMAGE CONVERSION TOOLS

    In this paper, deep learning-based hand gesture recognition using surface EMG signals is presented. We use Principal Component Analysis (PCA) to reduce the dimensionality of the dataset, and a threshold-based approach is proposed to select the principal components (PCs). The Continuous Wavelet Transform (CWT) is then applied to produce time-frequency images, which serve as input to the classifier. A very deep convolutional neural network (CNN) is proposed as the gesture classifier. The classifier is trained under a 10-fold cross-validation framework and achieves an average recognition accuracy of 99.44%, sensitivity of 97.78%, and specificity of 99.68%.
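
    One plausible reading of this pipeline is sketched below: PCs retained up to a cumulative explained-variance threshold, followed by a CWT scalogram computed with PyWavelets. The 0.95 threshold, Morlet wavelet, and sampling rate are assumptions; the abstract does not specify them.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def select_pcs(X, var_threshold=0.95):
    """Keep the smallest number of principal components whose cumulative
    explained variance exceeds the threshold (one plausible reading of the
    paper's threshold-based PC selection)."""
    pca = PCA().fit(X)
    k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), var_threshold)) + 1
    return PCA(n_components=k).fit_transform(X)

def scalogram(signal, fs=1000, n_scales=64, wavelet="morl"):
    """Continuous wavelet transform of one sEMG segment -> 2-D time-frequency image."""
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    return np.abs(coeffs)                    # magnitude image fed to the CNN
```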

    A Multimodal Interface for Interactive Collaboration with Quadruped Robots

    A variety of approaches for hand gesture recognition have been proposed, with most recent interest directed towards different deep learning methods. The modalities on which these approaches are based most commonly range from imaging sensors to inertial measurement units (IMUs) and electromyography (EMG) sensors. EMG and IMUs allow detection of gestures without being affected by line of sight or lighting conditions. The detection algorithms are fairly well established, but their application to real-world use cases is limited, apart from prostheses and exoskeletons. In this thesis, a multimodal interface for human-robot interaction (HRI) is developed for quadruped robots. The interface is based on a combination of two detection algorithms: one for detecting gestures based on surface electromyography (sEMG) and IMU signals, and the other for detecting the operator using visible-light and depth cameras. Multiple architectures for gesture detection are compared; the best regression performance with offline multi-user data was achieved by a hybrid of a convolutional neural network (CNN) and a long short-term memory (LSTM) network, with a mean squared error (MSE) of 4.7 · 10⁻³ on the normalised gestures. A person-following behaviour is implemented for a quadruped robot, which is controlled using the predefined gestures. The complete interface is evaluated online by one expert user two days after recording the last samples of the training data. The gesture detection system achieved an F-score of 0.95 for the gestures alone, and 0.90 when unrecognised attempts due to other technological aspects, such as disturbances in Bluetooth data transmission, are included. The system reached online performance levels comparable to those reported for offline sessions and online sessions with real-time visual feedback. While the current interface was successfully deployed to the robot, further work should aim at improving inter-subject performance and the reliability of wireless communication between the devices.
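
    A minimal PyTorch sketch of such a CNN+LSTM hybrid regressor is given below: convolutional layers extract local temporal features from the sEMG/IMU channels, and an LSTM regresses the normalised gesture signal per time step. Channel counts and layer sizes are placeholders, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch of a CNN+LSTM hybrid: per-timestep convolutional features
    feed a recurrent layer that regresses the normalised gesture signal."""
    def __init__(self, n_inputs=12, hidden=64, n_outputs=1):
        super().__init__()
        self.conv = nn.Sequential(                     # local temporal features
            nn.Conv1d(n_inputs, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                              # x: (batch, n_inputs, time)
        h = self.conv(x).transpose(1, 2)               # -> (batch, time, 64)
        out, _ = self.lstm(h)
        return self.head(out)                          # per-timestep regression

criterion = nn.MSELoss()   # trained against MSE, matching the metric reported above
```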

    Force-Aware Interface via Electromyography for Natural VR/AR Interaction

    While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users’ muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users’ forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real time with a 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects’ physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings pushing research forward towards more realistic physicality in future VR/AR. (ACM Transactions on Graphics, SIGGRAPH Asia 2022.)
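
    A hedged sketch of what such real-time decoding might look like follows: incoming forearm sEMG samples fill a sliding window, and a trained network (an unspecified placeholder here) maps each full window to five finger-force estimates. Window size, channel count, and output convention are illustrative assumptions, not the paper's implementation.

```python
import collections
import numpy as np
import torch

WINDOW, N_CH, N_FINGERS = 200, 8, 5    # illustrative window length, electrode and output counts
buf = collections.deque(maxlen=WINDOW)

def on_new_samples(samples, model):
    """Sliding-window inference over streaming forearm sEMG.

    `samples` is an iterable of per-timestep arrays of shape (N_CH,);
    `model` is a trained network (hypothetical) mapping one window to
    one force estimate per finger.
    """
    buf.extend(samples)
    if len(buf) < WINDOW:
        return None                                    # not enough history yet
    window = torch.tensor(np.stack(buf).T[None], dtype=torch.float32)  # (1, N_CH, WINDOW)
    with torch.no_grad():
        forces = model(window)                         # (1, N_FINGERS), thumb..little finger
    return forces.squeeze(0).numpy()
```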

    Hybrid Wearable Signal Processing/Learning via Deep Neural Networks

    Wearable technologies have gained considerable attention in recent years as a potential post-smartphone platform with several applications of significant engineering importance. Wearable technologies are expected to become more prevalent in a variety of areas, including modern healthcare practices, robotic prosthesis control, Augmented Reality (AR) and Virtual Reality (VR) applications, Human Machine Interface/Interaction (HMI), and remote support for patients, including the chronically ill, at home. The emergence of wearable technologies can be attributed to the advancement of flexible electronic materials; the availability of advanced cloud and wireless communication systems; and the Internet of Things (IoT), coupled with high demand from the tech-savvy and elderly populations for healthcare management. Wearable devices in the healthcare realm gather various biological signals from the human body, among which the Electrocardiogram (ECG), Photoplethysmogram (PPG), and surface Electromyogram (sEMG) are the most widely monitored non-intrusive signals. Utilizing these signals, the primary emphasis of this dissertation is on the development of advanced Machine Learning (ML), in particular Deep Learning (DL), algorithms to increase the accuracy of wearable devices in specific tasks. In the first part, using ECG and PPG bio-signals, we focus on the development of accurate, subject-specific solutions for continuous, cuff-less Blood Pressure (BP) monitoring. More precisely, a deep learning-based framework known as BP-Net is proposed for predicting the continuous upper and lower bounds of blood pressure, known as Systolic BP (SBP) and Diastolic BP (DBP), respectively. Furthermore, because the datasets used in the recent literature are not unified and properly defined, a unified dataset is constructed from the MIMIC-I and MIMIC-III databases obtained from PhysioNet. In the second part, we focus on hand gesture recognition utilizing sEMG signals, which have the potential to be used in myoelectric prosthesis control systems or to decode Myo Armband data to interpret human intent in AR/VR environments. Capitalizing on recent advances in hybrid architectures and Transformers across different applications, we aim to enhance the accuracy of sEMG-based hand gesture recognition by introducing a hybrid architecture based on Transformers, referred to as the Transformer for Hand Gesture Recognition (TraHGR). In particular, the TraHGR architecture consists of two parallel paths followed by a linear layer that acts as a fusion center to integrate the advantages of each module. The ultimate goal of this work is to increase the accuracy of gesture classification, which could be a major step towards the development of more advanced HMI systems that improve the quality of life for people with disabilities or enhance the user experience in AR/VR applications. Besides improving accuracy, decreasing the number of parameters in Deep Neural Network (DNN) architectures plays an important role in wearable devices: to achieve the highest possible accuracy, complicated and heavyweight DNNs are typically developed, which restricts their practical application in low-power and resource-constrained wearable systems.
Therefore, in our next attempt, we propose a lightweight hybrid architecture based on a Convolutional Neural Network (CNN) and an attention mechanism, referred to as Hierarchical Depth-wise Convolution along with the Attention Mechanism (HDCAM), to effectively extract local and global representations of the input. The key objective behind the design of HDCAM was to ensure resource efficiency while maintaining performance comparable to or better than that of current state-of-the-art methods.
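
    A minimal sketch of the two-parallel-paths-plus-linear-fusion layout described above is given below, using two small Transformer encoders as the paths. The internal path designs, pooling, and all dimensions are assumptions for illustration; the actual TraHGR modules differ.

```python
import torch
import torch.nn as nn

class TwoPathFusion(nn.Module):
    """Sketch of a TraHGR-style layout: two parallel paths whose pooled
    outputs are fused by a linear layer acting as the fusion center."""
    def __init__(self, d_model=64, n_classes=17, n_channels=8):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)
        enc_a = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        enc_b = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.path_a = nn.TransformerEncoder(enc_a, num_layers=2)   # first parallel path
        self.path_b = nn.TransformerEncoder(enc_b, num_layers=2)   # second parallel path
        self.fusion = nn.Linear(2 * d_model, n_classes)            # linear fusion center

    def forward(self, x):                      # x: (batch, window_samples, n_channels)
        h = self.embed(x)
        a = self.path_a(h).mean(dim=1)         # pool each path over time
        b = self.path_b(h).mean(dim=1)
        return self.fusion(torch.cat([a, b], dim=-1))

model = TwoPathFusion()
logits = model(torch.randn(4, 200, 8))         # 4 windows, 200 samples, 8 electrodes
```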

    A myoelectric digital twin for fast and realistic modelling in deep learning

    Muscle electrophysiology has emerged as a powerful tool to drive human-machine interfaces, with many recent applications outside the traditional clinical domains, such as robotics and virtual reality. However, more sophisticated, functional, and robust decoding algorithms are required to meet the fine control requirements of these applications. Deep learning has shown high potential in meeting these demands, but requires a large amount of high-quality annotated data, which is expensive and time-consuming to acquire. Data augmentation using simulations, a strategy applied in other deep learning applications, has never been attempted in electromyography due to the absence of computationally efficient models. We introduce the concept of a Myoelectric Digital Twin: a highly realistic and fast computational model tailored to the training of deep learning algorithms. It enables the simulation of arbitrarily large, perfectly annotated datasets of realistic electromyography signals, allowing new approaches to muscular signal decoding and accelerating the development of human-machine interfaces.
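
    As a sketch of what simulation-based augmentation enables, the snippet below simply trains on the union of a small real recording set and a much larger synthetic, perfectly annotated set; the random tensors stand in for real recordings and digital-twin output, and all shapes are hypothetical.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins: a small real recorded set and a much larger simulated set that a
# digital-twin-style generator could produce with perfect labels.
real_emg, real_labels = torch.randn(500, 8, 200), torch.randint(0, 10, (500,))
sim_emg, sim_labels = torch.randn(5000, 8, 200), torch.randint(0, 10, (5000,))

# Simulation-based augmentation: train on the union of real and synthetic windows.
train_set = ConcatDataset([TensorDataset(real_emg, real_labels),
                           TensorDataset(sim_emg, sim_labels)])
loader = DataLoader(train_set, batch_size=64, shuffle=True)
```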