27 research outputs found

    Guidage non-intrusif d'un bras robotique à l'aide d'un bracelet myoélectrique à électrode sèche

    Get PDF
    For several years, robotics has been seen as a key solution to improve the quality of life of people living with upper-limb disabilities. To create new, smart prostheses that can easily be integrated into everyday life, they must be non-intrusive, reliable and inexpensive. Surface electromyography (sEMG) provides an intuitive, non-intrusive interface based on a user's muscle activity for interacting with robots. However, despite extensive research in the field of sEMG signal classification, current classifiers still lack reliability because they are not robust to short-term noise (e.g. small electrode displacements, muscle fatigue) or long-term noise (e.g. changes in muscle mass and adipose tissue). In practice, this means that to remain useful, a classifier must be periodically re-calibrated, a time-consuming process. The goal of my research project is to propose a human-robot myoelectric interface based on transfer learning and domain adaptation algorithms to increase the long-term reliability of the system, while reducing the intrusiveness (in terms of hardware and preparation time) of this kind of system. The non-intrusive aspect is achieved with a dry-electrode armband featuring ten channels. This armband, named the 3DC Armband, was conceived by us (Dr. Gabriel Gagnon-Turcotte, my co-directors and myself) and built during my doctorate. At the time of writing, the 3DC Armband offers the best performance among currently available dry-electrode surface electromyographic armbands. Unlike gel-based electrodes, which require intrusive skin preparation (i.e. shaving, cleaning the skin and applying conductive gel), the 3DC Armband can simply be placed on the forearm without any preparation. However, this ease of use comes at the cost of signal quality, as dry electrodes inherently record a noisier signal than gel-based ones. In addition, other systems use invasive methods (intramuscular electromyography) to capture a cleaner signal and reduce sources of noise such as electrode shift. To remedy this degradation of information resulting from the non-intrusiveness of the armband, this research project relies on deep learning, and more specifically on convolutional networks. The research project was divided into three phases. The first is the design of a classifier for recognizing hand gestures in real time. The second is the implementation of a transfer learning algorithm to take advantage of data recorded across multiple users, thereby improving the system's accuracy while decreasing the time required to set it up. The third phase is the development and implementation of domain adaptation and self-supervised learning algorithms to enhance the classifier's robustness to long-term changes.
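    The real-time classification stage described above operates on short, overlapping windows of the multichannel armband signal. Below is a minimal sketch of that windowing step, assuming hypothetical parameters (10 channels, 1000 Hz sampling, 250 ms windows); the function name and values are illustrative, not taken from the thesis.

```python
import numpy as np

def sliding_windows(emg, window_len, step):
    """Segment a (channels, samples) sEMG recording into overlapping
    windows of shape (n_windows, channels, window_len)."""
    channels, samples = emg.shape
    starts = range(0, samples - window_len + 1, step)
    return np.stack([emg[:, s:s + window_len] for s in starts])

# Hypothetical example: 10-channel armband, 1 s of signal at 1000 Hz,
# 250 ms windows advanced by 62 samples.
emg = np.random.randn(10, 1000)
windows = sliding_windows(emg, window_len=250, step=62)
print(windows.shape)  # (13, 10, 250)
```

Each window would then be fed to the convolutional classifier; shorter windows reduce control latency at the cost of less signal context per decision.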

    Robust myoelectric pattern recognition methods for reducing users’ calibration burden: challenges and future

    Get PDF
    Myoelectric pattern recognition (MPR) has evolved into a sophisticated technology widely employed in controlling myoelectric interface (MI) devices such as prosthetic and orthotic robots. Current MIs not only enable multi-degree-of-freedom control of prosthetic limbs but also demonstrate substantial potential in consumer electronics. However, the non-stationary, random characteristics of myoelectric signals pose challenges, leading to performance degradation in practical scenarios such as electrode shift and switching to new users. Conventional MIs often necessitate meticulous calibration, imposing a significant burden on users. To address user frustration during the calibration process, researchers have focused on MPR methods that alleviate this burden. This article categorizes the common scenarios that incur calibration burdens into those caused by data distribution shift and those caused by dynamic data categories, and then surveys the popular robust MPR algorithms used to reduce the user's calibration burden. We group these algorithms into data manipulation, feature manipulation and model structure approaches, and describe the scenarios to which each method applies and the conditions it requires for calibration. Finally, the review concludes with the advantages of robust MPR and the remaining challenges and future opportunities.

    Decoding HD-EMG Signals for Myoelectric Control-How Small Can the Analysis Window Size be?

    Get PDF

    Multikernel convolutional neural network for sEMG based hand gesture classification

    Get PDF
    Hand gesture recognition is a widely discussed topic in the literature, where different techniques are analyzed in terms of both input signal types and algorithms. Among the most widely used are surface electromyographic (sEMG) signals, which are already widely exploited in human-machine interaction (HMI) applications. Determining how to decode the information contained in EMG signals robustly and accurately is a key problem for which a solution is urgently needed. Recently, many EMG pattern recognition tasks have been addressed using deep learning methods. Despite their high performance, their generalization capabilities are often limited by the high heterogeneity among subjects, skin impedance, sensor placement, etc. In addition, because this project is focused on the real-time operation of prostheses, there are strict constraints on system response times that limit the complexity of the models. In this thesis, a multi-kernel convolutional neural network was tested on several public datasets to verify its generalizability. In addition, the model's ability to overcome inter-subject and inter-session limits across different days, while respecting the constraints associated with an embedded system, was analyzed. The results confirm the difficulties encountered in extracting information from EMG signals; however, they demonstrate the possibility of achieving good performance for robust use of prosthetic hands. Moreover, better performance can be achieved by personalizing the model with transfer learning and domain-adaptation techniques.
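    The core idea of a multi-kernel convolutional network is to convolve the same signal with filters of several widths in parallel, capturing features at multiple temporal scales, and then concatenate the pooled responses. The toy numpy sketch below illustrates this for a single channel with random (untrained) kernels; the function name and kernel widths are illustrative assumptions, not the thesis architecture.

```python
import numpy as np

def multikernel_features(signal, kernel_sizes=(3, 5, 7), rng=None):
    """Toy multi-kernel feature extractor: convolve one sEMG channel
    with random kernels of several widths, apply ReLU and global max
    pooling, and concatenate the responses into one feature vector."""
    rng = np.random.default_rng(rng)
    feats = []
    for k in kernel_sizes:
        kernel = rng.standard_normal(k)            # stand-in for a learned filter
        response = np.convolve(signal, kernel, mode="valid")
        feats.append(np.maximum(response, 0.0).max())  # ReLU + global max-pool
    return np.array(feats)

signal = np.sin(np.linspace(0, 8 * np.pi, 400))  # stand-in for one EMG channel
features = multikernel_features(signal, rng=0)
print(features.shape)  # (3,) — one pooled feature per kernel width
```

In a trained network the kernels are learned and stacked across channels and layers; the parallel-width structure is what distinguishes the multi-kernel design from a single-kernel CNN.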

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Get PDF
    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer—a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry.This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper Iv introduce a dataset of HD-sEMG signals for use with learning algorithms. 
Paper v applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper vI introduces a Transformer model for myoelectric interfacing that do not need additional training data to function with previously unseen users. Paper vII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, paper vIII describes a framework for synthesizing EMG from multi-articulate gestures intended to reduce training burden

    Deep Learning Based Upper-limb Motion Estimation Using Surface Electromyography

    Get PDF
    To advance human-machine interfaces (HMI) that can help disabled people reconstruct lost functions of upper limbs, machine learning (ML) techniques, particularly classification-based pattern recognition (PR), have been extensively implemented to decode human movement intentions from surface electromyography (sEMG) signals. However, the performance of ML can be substantially affected, or even limited, by feature engineering that requires expertise in both domain knowledge and experimental experience. To overcome this limitation, researchers are now focusing on deep learning (DL) techniques to derive informative, representative, and transferable features from raw data automatically. Despite some progress reported in recent literature, it is still very challenging to achieve reliable and robust interpretation of user intentions in practical scenarios. This is mainly because of the high complexity of upper-limb motions and the non-stable characteristics of sEMG signals. Moreover, the PR scheme only identifies discrete states of motion. To complete coordinated tasks such as grasping, users have to rely on a sequential on/off control of each individual function, which is inherently different from the simultaneous and proportional control (SPC) strategy adopted by the natural motions of upper limbs. The aim of this thesis is to develop and advance several DL techniques for the estimation of upper-limb motions from sEMG, and the work is centred on three themes: 1) to improve the reliability of gesture recognition by rejecting uncertain classification outcomes; 2) to build regression frameworks for joint kinematics estimation that enable SPC; and 3) to reduce the degradation of estimation performance when a DL model is applied to a new individual.
    In order to achieve these objectives, the following efforts were made: 1) a confidence model was designed to predict the probability of correctness of each classification made by a convolutional neural network (CNN), such that uncertain recognitions can be identified and rejected; 2) a hybrid framework using a CNN for deep feature extraction and a long short-term memory (LSTM) network was constructed to conduct sequence regression, which could simultaneously exploit the temporal and spatial information in sEMG data; 3) the hybrid framework was further extended by integrating a Kalman filter with LSTM units in the recursive learning process, obtaining a deep Kalman filter network (DKFN) to perform kinematics estimation more effectively; and 4) a novel regression scheme was proposed for supervised domain adaptation (SDA), based on which model generalisation among subjects can be substantially enhanced.
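    The rejection idea in theme 1) can be illustrated with a much simpler stand-in than the thesis's learned confidence model: threshold the softmax confidence of the classifier and reject low-confidence windows instead of emitting a possibly wrong command. The threshold and function name below are illustrative assumptions.

```python
import numpy as np

def classify_with_rejection(logits, threshold=0.8):
    """Reject uncertain predictions: return the argmax class when the
    softmax confidence exceeds `threshold`, otherwise -1 (rejected).
    The threshold value is illustrative, not taken from the thesis."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    return np.where(conf >= threshold, labels, -1)

logits = np.array([[4.0, 0.5, 0.1],   # confident -> accepted as class 0
                   [1.0, 0.9, 0.8]])  # ambiguous -> rejected
print(classify_with_rejection(logits))  # [ 0 -1]
```

For a prosthesis, a rejected window simply keeps the previous command active, trading a little responsiveness for fewer spurious activations.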

    Toward Long-Term FMG Model-Based Estimation of Applied Hand Force in Dynamic Motion During Human–Robot Interactions

    Get PDF
    Physical human-robot interaction (pHRI) is reliant on human actions and can be addressed by studying human upper-limb motions during interactions. Force myography (FMG) signals, which detect muscle contractions, can be useful in developing machine learning algorithms for control. In this paper, a novel long-term calibrated FMG-based trained model is presented to estimate applied force in dynamic motion during real-time interactions between a human and a linear robot. The proposed FMG-based pHRI framework was investigated in new, unseen, real-time scenarios for the first time. Initially, a long-term reference dataset (multiple source distributions) of upper-limb FMG data was generated as five participants interacted with the robot, applying force in five different dynamic motions. Ten other participants interacted with the robot in two intended motions to evaluate the out-of-distribution (OOD) target data (new, unlearned), which was different from the population data. Two practical scenarios were considered for assessment: i) a participant applied force in a new, unlearned motion (scenario 1), and ii) a new, unlearned participant applied force in an intended motion (scenario 2). In each scenario, a few long-term FMG-based models were trained using a baseline dataset [reference dataset (scenarios 1, 2) and/or a learnt participant dataset (scenario 1)] and a calibration dataset (collected during evaluation). Real-time evaluation showed that the proposed long-term calibrated FMG-based models (LCFMG) could achieve estimation accuracies of 80%-94% in all scenarios. These results are useful towards integrating and generalizing human activity data in a robot control scheme by avoiding an extensive HRI training phase in regular applications.
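    The combination of a large multi-user reference dataset with a small per-session calibration set can be sketched with a linear stand-in for the paper's trained models: pool both datasets and up-weight the calibration samples so the fit adapts to the current user. All names and the weighting value below are illustrative assumptions.

```python
import numpy as np

def calibrated_force_model(X_ref, y_ref, X_cal, y_cal, cal_weight=5.0):
    """Fit a linear force estimator on pooled reference (multi-user) and
    calibration (current-session) FMG features, up-weighting the small
    calibration set. A linear stand-in for the models in the paper;
    `cal_weight` is an illustrative hyperparameter."""
    X = np.vstack([X_ref, X_cal])
    y = np.concatenate([y_ref, y_cal])
    w = np.concatenate([np.ones(len(y_ref)),
                        np.full(len(y_cal), cal_weight)])
    # Weighted least squares: scale rows by sqrt(weight).
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(X * sw, y * sw.ravel(), rcond=None)
    return coef

rng = np.random.default_rng(0)
X_ref, X_cal = rng.standard_normal((100, 4)), rng.standard_normal((10, 4))
true_coef = np.array([1.0, -2.0, 0.5, 3.0])
y_ref, y_cal = X_ref @ true_coef, X_cal @ true_coef  # noiseless toy targets
coef = calibrated_force_model(X_ref, y_ref, X_cal, y_cal)
print(np.allclose(coef, true_coef))  # True
```

Up-weighting calibration data (rather than training on it alone) keeps the broad coverage of the reference distribution while adapting to the out-of-distribution user or motion, which mirrors the paper's motivation for combining baseline and calibration datasets.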