42 research outputs found

    A transferable adaptive domain adversarial neural network for virtual reality augmented EMG-based gesture recognition

    Within the field of electromyography-based (EMG) gesture recognition, disparities exist between the offline accuracy reported in the literature and the real-time usability of a classifier. This gap mainly stems from two factors: 1) the absence of a controller, which makes the collected data dissimilar to actual control; and 2) the difficulty of including the four main dynamic factors (gesture intensity, limb position, electrode shift, and transient changes in the signal), as including their permutations drastically increases the amount of data to be recorded. Conversely, online datasets are limited to the exact EMG-based controller used to record them, necessitating the recording of a new dataset for each control method or variant to be tested. Consequently, this paper proposes a new type of dataset to serve as an intermediate between offline and online datasets, recorded using a real-time experimental protocol. The protocol, performed in virtual reality, includes the four main dynamic factors and uses an EMG-independent controller to guide movements. This EMG-independent feedback ensures that the user is in the loop during recording, while enabling the resulting dynamic dataset to be used as an EMG-based benchmark. The dataset comprises 20 able-bodied participants completing three to four sessions over a period of 14 to 21 days. The ability of the dynamic dataset to serve as a benchmark is leveraged to evaluate the impact of different recalibration techniques for long-term (across-day) gesture recognition, including a novel algorithm named TADANN. TADANN consistently and significantly (p < 0.05) outperforms fine-tuning as the recalibration technique.
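    As an illustration of the fine-tuning baseline that TADANN is compared against, the sketch below freezes a convolutional feature extractor and updates only the classification head on a small amount of calibration data from the new day. It is a minimal PyTorch sketch under assumed names (EmgNet, features, head) and layer sizes; it is not the authors' implementation of TADANN or of their fine-tuning procedure.

        # Hypothetical fine-tuning recalibration for across-day sEMG gesture recognition:
        # the feature extractor is frozen and only the classification head is updated
        # on the new day's calibration data.
        import torch
        import torch.nn as nn

        class EmgNet(nn.Module):
            def __init__(self, n_channels=10, n_gestures=11):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(),
                    nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten())
                self.head = nn.Linear(64, n_gestures)

            def forward(self, x):                      # x: (batch, channels, time)
                return self.head(self.features(x))

        def recalibrate(model, calib_loader, epochs=10, lr=1e-3):
            for p in model.features.parameters():      # freeze the feature extractor
                p.requires_grad = False
            optimizer = torch.optim.Adam(model.head.parameters(), lr=lr)
            loss_fn = nn.CrossEntropyLoss()
            model.train()
            for _ in range(epochs):
                for emg, label in calib_loader:        # small same-day calibration set
                    optimizer.zero_grad()
                    loss = loss_fn(model(emg), label)
                    loss.backward()
                    optimizer.step()
            return model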

    Guidage non-intrusif d'un bras robotique à l'aide d'un bracelet myoélectrique à électrode sèche

    For several years, robotics has been seen as a key solution to improve the quality of life of people living with upper-limb disabilities. To create new, smart prostheses that can easily be integrated into everyday life, these devices must be non-intrusive, reliable and inexpensive. Surface electromyography (sEMG) provides an intuitive, non-intrusive interface based on a user's muscle activity for interacting with robots. However, despite extensive research in the field of sEMG signal classification, current classifiers still lack reliability because they are not robust to short-term noise (e.g. small electrode displacement, muscle fatigue) or long-term changes (e.g. variation in muscle mass and adipose tissue). In practice, this means that to remain useful, the classifier needs to be periodically recalibrated, a time-consuming process. The goal of my research project is to propose a human-robot myoelectric interface based on transfer learning and domain adaptation algorithms to increase the long-term reliability of the system, while reducing the intrusiveness (in terms of hardware and preparation time) of this kind of system. The non-intrusive aspect is achieved with a dry-electrode armband featuring ten channels. This armband, named the 3DC Armband, was conceived by us (Dr. Gabriel Gagnon-Turcotte, my co-directors and myself) and realized during my doctorate. At the time of writing, the 3DC Armband offers the best performance among currently available dry-electrode surface electromyographic armbands. Unlike gel-based electrodes, which require intrusive skin preparation (i.e. shaving, cleaning the skin and applying conductive gel), the 3DC Armband can simply be placed on the forearm without any preparation. However, this ease of use results in a decrease in signal quality, because the signal recorded by dry electrodes is inherently noisier than that of gel-based ones. In addition, other systems use invasive methods (intramuscular electromyography) to capture a cleaner signal and reduce sources of noise (e.g. electrode shift). To remedy this degradation of information resulting from the non-intrusiveness of the armband, this research project relies on deep learning, and more specifically on convolutional networks. The research project was divided into three phases. The first is the design of a classifier allowing the recognition of hand gestures in real time. The second is the implementation of a transfer learning algorithm to take advantage of data recorded across multiple users, thereby improving the system's accuracy for a new individual while decreasing the preparation time required to use the system. The third phase is the development and implementation of domain adaptation and self-supervised learning algorithms to enhance the classifier's robustness to long-term changes.
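    One widely used unsupervised baseline for this kind of long-term domain adaptation is to re-estimate batch-normalisation statistics on unlabeled data from a new session or user (AdaBN-style adaptation). The sketch below shows the idea in PyTorch; it is an illustrative baseline under an assumed model structure, not the specific algorithm developed in this thesis.

        # Illustrative AdaBN-style adaptation: re-estimate BatchNorm running statistics
        # on unlabeled sEMG from a new user or session; learned weights stay untouched.
        import torch
        import torch.nn as nn

        @torch.no_grad()
        def adapt_batchnorm(model: nn.Module, unlabeled_loader):
            for m in model.modules():
                if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
                    m.reset_running_stats()            # forget source-domain statistics
                    m.momentum = None                  # use a cumulative moving average
            model.train()                              # BN layers update stats in train mode
            for batch in unlabeled_loader:             # labels, if any, are ignored
                emg = batch[0] if isinstance(batch, (list, tuple)) else batch
                model(emg)
            model.eval()
            return model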

    ViT-MDHGR: Cross-day Reliability and Agility in Dynamic Hand Gesture Prediction via HD-sEMG Signal Decoding

    Surface electromyography (sEMG) and high-density sEMG (HD-sEMG) biosignals have been extensively investigated for myoelectric control of prosthetic devices, neurorobotics, and more recently human-computer interfaces because of their capability for hand gesture recognition/prediction in a wearable and non-invasive manner. High intraday (same-day) performance has been reported. However, interday performance (with training and testing on separate days) is substantially degraded due to the poor generalizability of conventional approaches over time, hindering the application of such techniques in real-life practice. There are limited recent studies on the feasibility of multi-day hand gesture recognition, and the existing studies face a major challenge: the need for long sEMG epochs makes the corresponding neural interfaces impractical due to the induced delay in myoelectric control. This paper proposes a compact ViT-based network for multi-day dynamic hand gesture prediction. We tackle the main challenge by relying only on very short HD-sEMG signal windows (i.e., 50 ms, one-sixth of the conventional window length for real-time myoelectric implementation), boosting agility and responsiveness. The proposed model can predict 11 dynamic gestures for 20 subjects with an average accuracy of over 71% on the testing day, 3 to 25 days after training. Moreover, when calibrated on just a small portion of data from the testing day, the proposed model can achieve over 92% accuracy by retraining less than 10% of its parameters, for computational efficiency.
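    The calibration scheme described above retrains only a small fraction of the network's parameters. The sketch below shows one generic way to do this in PyTorch: freeze every parameter except those under a designated prefix (e.g. a small head) and report the trainable share. The prefix name "head" and the surrounding model structure are assumptions for illustration, not the paper's ViT-MDHGR implementation.

        # Illustrative partial recalibration: freeze everything except parameters whose
        # names start with a given prefix (e.g. a small classification head) and report
        # the fraction of parameters that will be retrained.
        import torch.nn as nn

        def freeze_for_calibration(model: nn.Module, trainable_prefixes=("head",)):
            for name, param in model.named_parameters():
                param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)
            n_total = sum(p.numel() for p in model.parameters())
            n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
            print(f"retraining {100.0 * n_train / n_total:.1f}% of the parameters")
            return model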

    Surface Electromyography and Artificial Intelligence for Human Activity Recognition - A Systematic Review on Methods, Emerging Trends, Applications, Challenges, and Future Implementation

    Human activity recognition (HAR) has become increasingly popular in recent years due to its potential to meet the growing needs of various industries. Electromyography (EMG) is essential in various clinical and biological settings. It is a metric that helps doctors diagnose conditions that affect muscle activation patterns and monitor patients' progress in rehabilitation, disease diagnosis, motion intention recognition, etc. This review summarizes research papers on HAR with EMG. Over recent years, the integration of Artificial Intelligence (AI) has catalyzed remarkable advancements in the classification of biomedical signals, with a particular focus on EMG data. Firstly, this review meticulously curates a wide array of research papers that have contributed significantly to the evolution of EMG-based activity recognition. By surveying the existing literature, we provide an insightful overview of the key findings and innovations that have propelled this field forward. It explores the various approaches utilized for preprocessing EMG signals, including noise reduction, baseline correction, filtering, and normalization, which ensure that the EMG data are suitably prepared for subsequent analysis. In addition, we unravel the multitude of techniques employed to extract meaningful features from raw EMG data, encompassing both time-domain and frequency-domain features. These techniques are fundamental to achieving a comprehensive characterization of muscle activity patterns. Furthermore, we provide an extensive overview of both Machine Learning (ML) and Deep Learning (DL) classification methods, showcasing their respective strengths, limitations, and real-world applications in recognizing diverse human activities from EMG signals. In examining the hardware infrastructure for HAR with EMG, the synergy between hardware and software is underscored as paramount for enabling real-time monitoring. Finally, we also identify open issues and future research directions that may point to new lines of inquiry for ongoing research toward EMG-based detection.
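    To make the preprocessing and feature-extraction steps surveyed above concrete, the sketch below band-pass filters a multi-channel sEMG window with SciPy and computes four classic time-domain features (mean absolute value, root mean square, waveform length, and zero crossings). The 20-450 Hz cut-offs and the zero-crossing threshold are typical illustrative values, not recommendations from the review.

        # Band-pass filtering and classic time-domain features for one sEMG window.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def bandpass(emg, fs, low=20.0, high=450.0, order=4):
            # emg: array of shape (channels, samples); fs: sampling rate in Hz
            b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
            return filtfilt(b, a, emg, axis=-1)

        def time_domain_features(window, zc_threshold=1e-3):
            # window: filtered sEMG of shape (channels, samples)
            mav = np.mean(np.abs(window), axis=-1)                    # mean absolute value
            rms = np.sqrt(np.mean(window ** 2, axis=-1))              # root mean square
            wl = np.sum(np.abs(np.diff(window, axis=-1)), axis=-1)    # waveform length
            sign_change = np.diff(np.sign(window), axis=-1) != 0
            zc = np.sum(sign_change & (np.abs(window[..., :-1]) > zc_threshold), axis=-1)
            return np.stack([mav, rms, wl, zc], axis=-1)              # (channels, 4)

        filtered = bandpass(np.random.randn(8, 200), fs=1000)
        features = time_domain_features(filtered)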

    Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey

    Deep Learning (DL) has recently been employed to build smart systems that perform incredibly well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. In several fields, considerable improvements in computing hardware and the increasing need for big data analytics have boosted DL work. In recent years, physiological signal processing has strongly benefited from deep learning, and there is an exponential increase in the number of studies concerning the processing of electromyographic (EMG) signals using DL methods. This phenomenon is mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g. Ninapro. Such a growing trend has inspired us to seek and review recent papers focusing on processing EMG signals using DL methods. Referring to the Scopus database, a systematic literature search of papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after a full-text analysis. The bibliometric research revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification, and Other Applications. The review process also confirmed the increasing trend in published papers: the number of papers published in 2018 is four times that of the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, thus supporting our hypothesis. Finally, it is worth reporting that the convolutional neural network (CNN) is the most used topology among the DL architectures involved; approximately sixty percent of the reviewed articles consider a CNN.

    Deep Learning Based Upper-limb Motion Estimation Using Surface Electromyography

    To advance human-machine interfaces (HMI) that can help disabled people reconstruct lost functions of the upper limbs, machine learning (ML) techniques, particularly classification-based pattern recognition (PR), have been extensively implemented to decode human movement intentions from surface electromyography (sEMG) signals. However, the performance of ML can be substantially affected, or even limited, by feature engineering, which requires expertise in both domain knowledge and experimental experience. To overcome this limitation, researchers are now focusing on deep learning (DL) techniques to derive informative, representative, and transferable features from raw data automatically. Despite some progress reported in recent literature, it is still very challenging to achieve reliable and robust interpretation of user intentions in practical scenarios. This is mainly because of the high complexity of upper-limb motions and the non-stationary characteristics of sEMG signals. Besides, the PR scheme only identifies discrete states of motion. To complete coordinated tasks such as grasping, users have to rely on sequential on/off control of each individual function, which is inherently different from the simultaneous and proportional control (SPC) strategy adopted by the natural motions of the upper limbs. The aim of this thesis is to develop and advance several DL techniques for the estimation of upper-limb motions from sEMG, and the work is centred on three themes: 1) improving the reliability of gesture recognition by rejecting uncertain classification outcomes; 2) building regression frameworks for joint kinematics estimation that enable SPC; and 3) reducing the degradation of estimation performance when a DL model is applied to a new individual. To achieve these objectives, the following efforts were made: 1) a confidence model was designed to predict the probability of correctness for each classification made by a convolutional neural network (CNN), such that uncertain recognitions can be identified and rejected; 2) a hybrid framework using a CNN for deep feature extraction and a long short-term memory (LSTM) neural network was constructed to conduct sequence regression, which could simultaneously exploit the temporal and spatial information in sEMG data; 3) the hybrid framework was further extended by integrating a Kalman filter with the LSTM units in the recursive learning process, obtaining a deep Kalman filter network (DKFN) to perform kinematics estimation more effectively; and 4) a novel regression scheme was proposed for supervised domain adaptation (SDA), based on which model generalisation among subjects can be substantially enhanced.
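    A minimal sketch of a CNN-LSTM sequence-regression model of the kind described in the second effort above: a CNN embeds each sEMG window and an LSTM models the dynamics across windows to output joint kinematics, enabling simultaneous and proportional estimates. Layer sizes, channel counts and names are placeholder assumptions, not the thesis architecture, and the Kalman-filter extension is not shown.

        # Illustrative CNN + LSTM hybrid for sequence regression of joint kinematics
        # from windowed sEMG.
        import torch
        import torch.nn as nn

        class CnnLstmRegressor(nn.Module):
            def __init__(self, n_channels=8, n_joints=5, hidden=128):
                super().__init__()
                self.cnn = nn.Sequential(               # per-window feature extractor
                    nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(),
                    nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten())
                self.lstm = nn.LSTM(64, hidden, batch_first=True)  # across-window dynamics
                self.out = nn.Linear(hidden, n_joints)             # joint kinematics output

            def forward(self, x):                       # x: (batch, seq, channels, time)
                b, s, c, t = x.shape
                feats = self.cnn(x.reshape(b * s, c, t)).reshape(b, s, -1)
                seq, _ = self.lstm(feats)
                return self.out(seq)                    # (batch, seq, n_joints)

        model = CnnLstmRegressor()
        kinematics = model(torch.randn(2, 10, 8, 200))  # 2 sequences of 10 windows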

    Multimodaalinen kÀyttöliittymÀ interaktiivista yhteistyötÀ varten nelijalkaisten robottien kanssa

    A variety of approaches for hand gesture recognition have been proposed, with most interest recently directed towards different deep learning methods. The modalities on which these approaches are based most commonly range from different imaging sensors to inertial measurement units (IMU) and electromyography (EMG) sensors. EMG and IMUs allow detection of gestures without being affected by line of sight or lighting conditions. The detection algorithms are fairly well established, but their application to real-world use cases is limited, apart from prostheses and exoskeletons. In this thesis, a multimodal interface for human-robot interaction (HRI) is developed for quadruped robots. The interface is based on a combination of two detection algorithms: one for detecting gestures based on surface electromyography (sEMG) and IMU signals, and the other for detecting the operator using visible-light and depth cameras. Multiple architectures for gesture detection are compared; the best regression performance with offline multi-user data was achieved by a hybrid of a convolutional neural network (CNN) and a long short-term memory (LSTM) network, with a mean squared error (MSE) of 4.7 · 10⁻³ on the normalised gestures. A person-following behaviour is implemented for a quadruped robot, which is controlled using the predefined gestures. The complete interface is evaluated online by one expert user two days after recording the last samples of the training data. The gesture detection system achieved an F-score of 0.95 for the gestures alone, and 0.90 when unrecognised attempts due to other technological aspects, such as disturbances in Bluetooth data transmission, are included. The system reached online performance levels comparable to those reported for offline sessions and online sessions with real-time visual feedback. While the current interface was successfully deployed to the robot, further advances should be aimed at improving inter-subject performance and the reliability of wireless communication between the devices.
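    The two F-scores reported above differ only in whether attempts lost to other factors (e.g. Bluetooth dropouts) are counted as errors. The sketch below shows that bookkeeping with scikit-learn on invented placeholder labels; it is purely illustrative of the computation, not the thesis's evaluation code.

        # Illustrative F-score computation for online gesture detection.
        from sklearn.metrics import f1_score

        true_labels      = ["wave", "fist", "point", "wave", "fist"]
        predicted_labels = ["wave", "fist", "point", "wave", "rest"]
        print(f1_score(true_labels, predicted_labels, average="micro"))   # gestures alone

        # Counting attempts that never produced a prediction as misclassifications:
        true_with_drops = true_labels + ["point"]
        pred_with_drops = predicted_labels + ["__dropped__"]
        print(f1_score(true_with_drops, pred_with_drops, average="micro"))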

    Machine Learning for Hand Gesture Classification from Surface Electromyography Signals

    Classifying hand gestures from surface electromyography (sEMG) is a process with applications in human-machine interaction, rehabilitation and prosthetic control. Reductions in cost and increases in the availability of the necessary hardware over recent years have made sEMG a more viable solution for hand gesture classification. The research challenge is the development of processes to robustly and accurately predict the current gesture from incoming sEMG data. This thesis presents a set of methods, techniques and designs that improve both the evaluation of, and performance on, the classification problem as a whole. These are brought together to set a new baseline for classification performance. Evaluation is improved by careful choice of metrics and the design of cross-validation techniques that account for the data bias caused by common experimental techniques. A landmark study is re-evaluated with these improved techniques, and it is shown that data augmentation can be used to significantly improve performance with conventional classification methods. A novel neural network architecture and supporting improvements are presented that further improve performance; the architecture is refined such that the network can achieve similar performance with many fewer parameters than competing designs. Supporting techniques such as subject adaptation and smoothing algorithms are then explored to improve overall performance and to provide more nuanced trade-offs between various aspects of performance, such as incurred latency and prediction smoothness. A new study is presented that compares the performance potential of medical-grade electrodes and a low-cost commercial alternative, showing that for a modest-sized gesture set they can compete. The data are also used to explore data labelling in experimental design and to evaluate the numerous aspects of performance that must be traded off.
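    As an example of the smoothing algorithms mentioned above, the sketch below implements a simple majority-vote post-processor: the reported gesture is the most frequent prediction over the last k windows, trading a small amount of latency for steadier output. The window length k and the gesture labels are arbitrary illustrative choices, not values from the thesis.

        # Majority-vote smoothing of per-window gesture predictions.
        from collections import Counter, deque

        class MajorityVoteSmoother:
            def __init__(self, k=5):
                self.history = deque(maxlen=k)

            def update(self, prediction):
                self.history.append(prediction)
                return Counter(self.history).most_common(1)[0][0]

        smoother = MajorityVoteSmoother(k=5)
        for raw in ["rest", "fist", "fist", "rest", "fist", "fist"]:
            print(raw, "->", smoother.update(raw))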

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of the relevant motor neurons and, by extension, of the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control is apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for the automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
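    Paper I concerns multi-label movement decoding, where several movement classes can be active simultaneously. The sketch below shows the generic form of such a decoder head in PyTorch: independent sigmoid outputs trained with binary cross-entropy. Dimensions, threshold and data are placeholder assumptions, not the paper's model.

        # Generic multi-label decoding head: several movements may be active at once.
        import torch
        import torch.nn as nn

        n_features, n_movements = 64, 6
        head = nn.Linear(n_features, n_movements)
        loss_fn = nn.BCEWithLogitsLoss()

        features = torch.randn(32, n_features)                       # e.g. CNN embeddings of HD-sEMG
        targets = torch.randint(0, 2, (32, n_movements)).float()     # several labels may be 1
        logits = head(features)
        loss = loss_fn(logits, targets)
        active = torch.sigmoid(logits) > 0.5                         # predicted set of concurrent movements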