Non-intrusive guidance of a robotic arm using a dry-electrode myoelectric armband
For several years, robotics has been seen as a key solution to improve the quality of life of people living with upper-limb disabilities. To create new, smart prostheses that can easily be integrated into everyday life, they must be non-intrusive, reliable and inexpensive. Surface electromyography provides an intuitive interface based on a user's muscle activity to interact with robots.
However, despite extensive research in the field of sEMG signal classification, current classifiers still lack reliability because they are not robust to short-term noise (e.g. small electrode displacements, muscle fatigue) or long-term noise (e.g. changes in muscle mass and adipose tissue). In practice, this means that, to remain useful, a classifier needs to be periodically recalibrated, a time-consuming process. The goal of my research project is to propose a human-robot myoelectric interface based on transfer learning and domain adaptation algorithms that increases the long-term reliability of the system while reducing its intrusiveness (in terms of hardware and preparation time). The non-intrusive aspect is achieved with a dry-electrode armband featuring ten channels. This armband, named the 3DC Armband, was conceived by Dr. Gabriel Gagnon-Turcotte, my co-directors and myself, and was built during my doctorate. At the time of writing, the 3DC Armband offers the best performance among currently available dry-electrode surface electromyographic armbands. Unlike gel-based electrodes, which require intrusive skin preparation (i.e. shaving, cleaning the skin and applying conductive gel), the 3DC Armband can simply be placed on the forearm without any preparation. However, this ease of use comes at the cost of signal quality: the signal recorded by dry electrodes is inherently noisier than that of gel-based ones. In addition, other systems use invasive methods (intramuscular electromyography) to capture a cleaner signal and reduce sources of noise (e.g. electrode shift). To remedy this degradation of information resulting from the non-intrusiveness of the armband, this research project relies on deep learning, and more specifically on convolutional networks. The research project was divided into three phases.
The first is the design of a classifier allowing the recognition of hand gestures in real time. The second is the implementation of a transfer learning algorithm that takes advantage of data recorded across multiple users, thereby improving the system's accuracy for a new individual while decreasing the preparation time required to use the system. The third phase is the development and implementation of domain adaptation and self-supervised learning algorithms to enhance the classifier's robustness to long-term changes.
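The first phase, real-time hand gesture classification, starts by slicing the continuous ten-channel armband signal into short overlapping windows, each of which becomes one input example for the convolutional network. A minimal sketch of that windowing step is below; the sampling rate, window length and overlap are illustrative assumptions, not the project's actual settings.

```python
import numpy as np

def segment_emg(stream, fs=1000, window_ms=250, overlap_ms=190):
    """Slice a (channels, samples) sEMG stream into overlapping windows.

    Each window becomes one example for the classifier; the stride
    (window minus overlap) sets how often a real-time prediction is made.
    """
    win = int(fs * window_ms / 1000)
    step = int(fs * (window_ms - overlap_ms) / 1000)
    n = (stream.shape[1] - win) // step + 1
    return np.stack([stream[:, i * step : i * step + win] for i in range(n)])

# Example: 2 s of simulated 10-channel sEMG sampled at 1 kHz
rng = np.random.default_rng(0)
stream = rng.standard_normal((10, 2000))
windows = segment_emg(stream)
print(windows.shape)  # (30, 10, 250)
```

With a 60 ms stride, a prediction can be issued roughly every 60 ms, which is what makes the control feel real-time to the user.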
Biosignal-based human-machine interfaces for assistance and rehabilitation: a survey
By definition, a Human-Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application, considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade. In contrast, studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.
A Transferable Adaptive Domain Adversarial Neural Network for Virtual Reality Augmented EMG-Based Gesture Recognition
Within the field of electromyography-based (EMG) gesture recognition, disparities exist between the offline accuracy reported in the literature and the real-time usability of a classifier. This gap mainly stems from two factors: 1) the absence of a controller, making the collected data dissimilar to actual control; 2) the difficulty of including the four main dynamic factors (gesture intensity, limb position, electrode shift, and transient changes in the signal), as including their permutations drastically increases the amount of data to be recorded. Conversely, online datasets are limited to the exact EMG-based controller used to record them, necessitating the recording of a new dataset for each control method or variant to be tested. Consequently, this paper proposes a new type of dataset to serve as an intermediate between offline and online datasets, by recording the data using a real-time experimental protocol. The protocol, performed in virtual reality, includes the four main dynamic factors and uses an EMG-independent controller to guide movements. This EMG-independent feedback ensures that the user is in the loop during recording, while enabling the resulting dynamic dataset to be used as an EMG-based benchmark. The dataset comprises 20 able-bodied participants completing three to four sessions over a period of 14 to 21 days. The ability of the dynamic dataset to serve as a benchmark is leveraged to evaluate the impact of different recalibration techniques for long-term (across-day) gesture recognition, including a novel algorithm named TADANN. TADANN consistently and significantly (p<0.05) outperforms fine-tuning as the recalibration technique.
Hand Gestures Recognition for Human-Machine Interfaces: A Low-Power Bio-Inspired Armband
Hand gesture recognition has recently grown in popularity as a Human-Machine Interface (HMI) in the biomedical field. Indeed, it can be performed with many different non-invasive techniques, e.g., surface ElectroMyoGraphy (sEMG) or PhotoPlethysmoGraphy (PPG). In the last few years, the interest demonstrated by both academia and industry has led to a continuous stream of commercial and custom wearable devices, which try to address different challenges in many application fields, from tele-rehabilitation to sign language recognition. In this work, we propose a novel 7-channel sEMG armband, which can be employed as an HMI for both serious gaming control and rehabilitation support. In particular, we designed the prototype around the capability of our device to compute the Average Threshold Crossing (ATC) parameter, which is evaluated by counting how many times the sEMG signal crosses a threshold during a fixed time window (i.e., 130 ms), directly on the wearable device. Exploiting the event-driven nature of the ATC, our armband is able to perform on-board prediction of common hand gestures while requiring less power than state-of-the-art devices. At the end of an acquisition campaign involving 26 participants, we obtained an average classifier accuracy of 91.9% when recognizing, in real time, 8 active hand gestures plus the idle state. Furthermore, with 2.92 mA of current absorption during active functioning and a 1.34 ms prediction latency, this prototype confirmed our expectations and can be an appealing solution for long-term (up to 60 h) medical and consumer applications.
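The ATC parameter described above reduces each 130 ms window to a single crossing count, which is what makes it cheap enough for on-board computation. A minimal sketch of that counting step, assuming a rising-edge definition of a crossing and an arbitrary threshold (the paper's actual threshold and edge convention may differ):

```python
import numpy as np

def atc(window, threshold):
    """Count threshold crossings of an sEMG window (the ATC parameter).

    A crossing is counted each time the signal passes the threshold
    going upward, which keeps the computation event-driven and cheap.
    """
    above = window >= threshold
    # rising edges: sample i-1 below the threshold, sample i at/above it
    return int(np.count_nonzero(~above[:-1] & above[1:]))

# Example: a 130 ms window of a 50 Hz sinusoid sampled at 1 kHz
fs, win_ms = 1000, 130
t = np.arange(int(fs * win_ms / 1000)) / fs
signal = np.sin(2 * np.pi * 50 * t)
print(atc(signal, threshold=0.5))  # → 7
```

Because the output is a small integer per channel per window, the downstream classifier only has to handle a handful of counts instead of raw samples.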
ViT-MDHGR: Cross-day Reliability and Agility in Dynamic Hand Gesture Prediction via HD-sEMG Signal Decoding
Surface electromyography (sEMG) and high-density sEMG (HD-sEMG) biosignals have been extensively investigated for myoelectric control of prosthetic devices, neurorobotics, and more recently human-computer interfaces, because of their capability for hand gesture recognition/prediction in a wearable and non-invasive manner. High intraday (same-day) performance has been reported. However, interday performance (separating training and testing days) is substantially degraded due to the poor generalizability of conventional approaches over time, hindering the application of such techniques in real-life practice. There are limited recent studies on the feasibility of multi-day hand gesture recognition. The existing studies face a major challenge: the need for long sEMG epochs makes the corresponding neural interfaces impractical due to the induced delay in myoelectric control. This paper proposes a compact ViT-based network for multi-day dynamic hand gesture prediction. We tackle the main challenge as the proposed model relies on very short HD-sEMG signal windows (i.e., 50 ms, only one-sixth of the conventional window length for real-time myoelectric implementation), boosting agility and responsiveness. The proposed model can predict 11 dynamic gestures for 20 subjects with an average accuracy of over 71% on the testing day, 3-25 days after training. Moreover, when calibrated on just a small portion of data from the testing day, the proposed model can achieve over 92% accuracy by retraining less than 10% of the parameters, for computational efficiency.
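The calibration strategy above (retraining under 10% of the parameters on a little same-day data) amounts to freezing the backbone and fitting only a small head. The sketch below illustrates the idea with toy numpy stand-ins: a frozen random projection plays the role of the pretrained ViT backbone, and a logistic-regression head is the only part retrained. Everything here is an illustrative assumption, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "backbone": a frozen random projection (in the paper this
# would be the pretrained ViT; the projection here is a toy assumption).
W_backbone = rng.standard_normal((16, 1000)) / 40  # 16-dim features

def features(x):
    return np.tanh(W_backbone @ x)  # frozen: never updated

# Toy calibration set from the "testing day": two gesture templates + noise
templates = rng.standard_normal((2, 1000))
X = np.stack([templates[i % 2] + 0.1 * rng.standard_normal(1000)
              for i in range(40)])
y = np.array([i % 2 for i in range(40)])

# Retrain only a small linear head (logistic regression) on the features
F = np.array([features(x) for x in X])
w, b = np.zeros(16), 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(F @ w + b)))                # sigmoid
    grad_w = F.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

acc = np.mean(((F @ w + b) > 0).astype(int) == y)
frac = (w.size + 1) / (W_backbone.size + w.size + 1)
print(f"calibration accuracy={acc:.2f}, retrained fraction={frac:.4f}")
```

The retrained fraction here is around 0.1% of the total parameters, far below the paper's 10% budget, which is what keeps same-day recalibration computationally cheap.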
The Relationship between Anthropometric Variables and Features of Electromyography Signal for Human-Computer Interface
http://doi.org/10.4018/978-1-4666-6090-8 (ISBN-13: 9781466660908, EISBN-13: 9781466660915). Muscle-computer interfaces (MCIs) based on surface electromyography (EMG) pattern recognition have been developed from two consecutive components: feature extraction and classification algorithms. Many features and classifiers have been proposed and evaluated, yielding high classification accuracy and a high number of discriminated motions under single-session experimental conditions. However, there are many limitations to using MCIs in real-world contexts, such as robustness over time, noise, or low-level EMG activities. Although selecting suitable robust features can mitigate such problems, an EMG pattern recognition system still has to be designed and trained for a particular individual user to reach high accuracy. Because body composition differs across users, the feasibility of using anthropometric variables to calibrate the EMG recognition system automatically or semi-automatically is proposed. This chapter presents the relationships between robust features extracted from surface EMG signals associated with different actions and twelve related anthropometric variables. The strong and significant associations presented in this chapter could benefit further design of MCIs based on EMG pattern recognition.
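The kind of association the chapter studies can be sketched as computing a robust EMG feature per participant and correlating it with an anthropometric measurement across participants. The feature (mean absolute value), the variable (forearm circumference), and the injected linear link below are all illustrative assumptions, not results from the chapter.

```python
import numpy as np

def mav(emg):
    """Mean Absolute Value, a common robust sEMG amplitude feature."""
    return np.mean(np.abs(emg))

# Simulated study: one MAV value and one anthropometric measurement
# (e.g., forearm circumference in cm) per participant. The linear link
# injected below is purely illustrative, not a result from the chapter.
rng = np.random.default_rng(1)
n_subjects = 30
circumference = rng.uniform(22, 32, n_subjects)            # cm
signals = [0.01 * c * rng.standard_normal(2000) for c in circumference]
mav_values = np.array([mav(s) for s in signals])

# Pearson correlation between the feature and the anthropometric variable
r = np.corrcoef(mav_values, circumference)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A strong correlation of this kind is what would let anthropometric variables stand in for part of the per-user calibration the chapter aims to reduce.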
- âŠ