122 research outputs found

    Novel Muscle Monitoring by Radiomyography(RMG) and Application to Hand Gesture Recognition

    Full text link
    Conventional electromyography (EMG) measures the continuous neural activity during muscle contraction but lacks explicit quantification of the actual contraction. Mechanomyography (MMG) and accelerometers only measure body-surface motion, while ultrasound, CT scans and MRI are restricted to in-clinic snapshots. Here we propose radiomyography (RMG), a novel wearable and touchless modality for continuous muscle-actuation sensing that captures both superficial and deep muscle groups. We verified RMG experimentally with a forearm wearable sensor for detailed hand gesture recognition. We first converted the radio sensing outputs to time-frequency spectrograms and then employed a vision transformer (ViT) deep learning network as the classification model, which recognizes 23 gestures with an average accuracy of up to 99% across 8 subjects. With transfer learning, high adaptivity to user differences and sensor variation was achieved, at an average accuracy of up to 97%. We further demonstrated RMG on eye and leg muscles and achieved high accuracy for eye-movement and body-posture tracking. RMG can be used with synchronous EMG to derive stimulation-actuation waveforms for many future applications in kinesiology, physiotherapy, rehabilitation, and human-machine interfaces.
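    The spectrogram front end described above can be sketched in a few lines; this is a minimal illustration using a synthetic chirp as a stand-in for the RMG sensor stream (the sampling rate and STFT parameters below are assumptions, not the paper's values):

```python
import numpy as np
from scipy import signal

# Hypothetical sketch of the paper's front end: convert a continuous radio
# sensing output into a time-frequency spectrogram for a ViT classifier.
fs = 2000                                    # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rmg_stream = signal.chirp(t, f0=10, f1=200, t1=2.0)  # synthetic stand-in signal

# Short-time Fourier analysis: each column is one time slice of the spectrum.
freqs, times, Sxx = signal.spectrogram(rmg_stream, fs=fs, nperseg=256,
                                       noverlap=128)

# A ViT-style classifier would consume Sxx as a 2-D "image"; here we only
# confirm the (frequency bins, time slices) layout.
print(Sxx.shape)
```

    In the paper's pipeline this 2-D array would then be patchified and fed to the ViT, but that stage depends on model details the abstract does not give.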

    Transfer learning in hand movement intention detection based on surface electromyography signals

    Get PDF
    Over the past several years, electromyography (EMG) signals have been used as a natural interface to interact with computers and machines. Recently, deep learning algorithms such as Convolutional Neural Networks (CNNs) have gained interest for decoding hand movement intention from EMG signals. However, deep networks require a large dataset to train appropriately, and creating such a database for a single subject can be very time-consuming. In this study, we addressed this issue from two perspectives: (i) we proposed a subject-transfer framework that uses the knowledge learned from other subjects to compensate for a target subject's limited data; (ii) we proposed a task-transfer framework in which the knowledge learned from a set of basic hand movements is used to classify more complex movements that are combinations of those basic movements. We introduced two CNN-based architectures for hand movement intention detection and a subject-transfer learning approach. Classifiers are tested on the Nearlab dataset, an sEMG hand/wrist movement dataset including 8 movements and their combinations from 11 subjects, and on the open-source hand sEMG dataset NinaPro DataBase 2 (DB2). For the Nearlab database, the subject-transfer learning approach improved the average classification accuracy of the proposed deep classifier from 92.60% to 93.30% when the classifier utilized 10 other subjects' data via our proposed framework. For NinaPro DB2 exercise B (17 hand movement classes), this improvement was from 81.43% to 82.87%. Moreover, three stages of analysis in the task-transfer approach showed that it is possible to classify combined hand movements using the knowledge learned from a set of basic hand movements with zero or few samples and only a few seconds of data from the target movement classes. The first stage takes advantage of shared muscle synergies to classify combined movements, while the second and third stages use few-shot learning and fine-tuning on samples from the target domain to further train the classifier trained on the source database. The use of information learned from basic hand movements improved the classification accuracy of combined hand movements by 10%.
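    The few-shot idea can be illustrated with a nearest-prototype classifier. This is a generic sketch of few-shot classification in a learned feature space, not the paper's exact algorithm; the feature vectors are random stand-ins for CNN embeddings of sEMG windows:

```python
import numpy as np

# Illustrative few-shot sketch: a handful of labeled "support" samples per
# new movement class define class prototypes; queries are assigned to the
# nearest prototype. Embeddings here are synthetic 2-D points.
rng = np.random.default_rng(0)

def make_class(center, n):
    # Draw n noisy embeddings around a class center.
    return center + 0.1 * rng.standard_normal((n, 2))

# Five labeled examples ("shots") from each of two target movement classes.
support_a = make_class(np.array([0.0, 0.0]), 5)
support_b = make_class(np.array([1.0, 1.0]), 5)
prototypes = np.stack([support_a.mean(axis=0), support_b.mean(axis=0)])

def classify(x):
    # Assign to the nearest class prototype (Euclidean distance).
    return int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))

query = np.array([0.9, 1.1])    # unseen sample near class "b"
print(classify(query))           # expected: 1
```

    The paper's second and third stages additionally fine-tune the source-trained network on these few target samples, which this sketch omits.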

    Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning

    Get PDF
    In recent years, deep learning algorithms have become increasingly prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. However, within the field of electromyography-based gesture recognition, deep learning algorithms are seldom employed, as they require an unreasonable amount of effort from a single person to generate tens of thousands of examples. This work's hypothesis is that general, informative features can be learned from the large amounts of data generated by aggregating the signals of multiple users, thus reducing the recording burden while enhancing gesture recognition. Consequently, this paper proposes applying transfer learning on data aggregated from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets. Two datasets comprised of 19 and 17 able-bodied participants, respectively (the first one is employed for pre-training), were recorded for this work using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and is comprised of 10 able-bodied participants. Three deep learning networks, each employing a different input modality (raw EMG, spectrograms and the Continuous Wavelet Transform (CWT)), are tested on the second and third datasets. The proposed transfer learning scheme is shown to systematically and significantly enhance the performance of all three networks on the two datasets, achieving an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw-EMG-based ConvNet. Finally, a use-case study employing eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy, which reduces the degradation in accuracy normally experienced over time. Comment: Source code and datasets available: https://github.com/Giguelingueling/MyoArmbandDatase
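    The CWT input modality mentioned above can be sketched directly with NumPy (scipy's `signal.cwt` helper has been removed in recent SciPy releases). The wavelet choice, widths, and synthetic signal below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Hedged sketch: compute a real-valued continuous wavelet transform of a
# raw EMG window using a Ricker (Mexican-hat) wavelet at several scales.
def ricker(points, a):
    # Standard Ricker wavelet of width parameter a, sampled on `points` samples.
    x = np.arange(points) - (points - 1) / 2
    norm = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return norm * (1 - (x / a) ** 2) * np.exp(-0.5 * (x / a) ** 2)

def cwt(sig, widths):
    # One row of coefficients per wavelet width (scale).
    out = np.empty((len(widths), len(sig)))
    for i, w in enumerate(widths):
        wavelet = ricker(min(10 * int(w), len(sig)), w)
        out[i] = np.convolve(sig, wavelet, mode="same")
    return out

fs = 200                                    # assumed EMG sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
emg = np.sin(2 * np.pi * 20 * t)            # synthetic stand-in for raw EMG
coeffs = cwt(emg, widths=np.arange(1, 31))
print(coeffs.shape)                          # (30, 200): scales x time samples
```

    The resulting scales-by-time array is what a CWT-based ConvNet would consume as its 2-D input.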

    Upper Limb Movement Recognition utilising EEG and EMG Signals for Rehabilitative Robotics

    Full text link
    Upper limb movement classification, which maps input signals to target activities, is a key building block in the control of rehabilitative robotics. Classifiers are trained so that the rehabilitative system can comprehend the intentions of a patient whose upper limbs do not function properly. Electromyography (EMG) and electroencephalography (EEG) signals are widely used for upper limb movement classification. By analysing the classification results of real-time EEG and EMG signals, the system can understand the intention of the user, predict the actions the user would like to carry out, and accordingly provide external help. However, noise in the real-time EEG and EMG data collection process contaminates the data and undermines classification performance. Moreover, not all patients produce strong EMG signals, due to muscle damage and neuromuscular disorders. To address these issues, this paper explores different feature extraction techniques and machine learning and deep learning models for EEG and EMG signal classification, and proposes a novel decision-level multisensor fusion technique to integrate EEG and EMG signals. This system retrieves effective information from both sources to understand and predict the user's intent and thus provide aid. By testing the proposed technique on the publicly available WAY-EEG-GAL dataset, which contains simultaneously recorded EEG and EMG signals, we demonstrate the feasibility and effectiveness of the novel system. Comment: 20 pages, 11 figures, 2 tables; thesis for Undergraduate Research Project in Computing, NUS; accepted by the Future of Information and Communication Conference 2023, San Francisco
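    Decision-level fusion of this kind can be sketched minimally: each modality's classifier emits class probabilities, and the fused decision combines them. The weighted-average rule and the weights below are assumptions for illustration, not the paper's exact fusion scheme:

```python
import numpy as np

# Minimal decision-level fusion sketch: average the per-class probability
# vectors from the EEG and EMG classifiers, weighted by modality reliability.
def fuse(p_eeg, p_emg, w_eeg=0.5):
    p = w_eeg * np.asarray(p_eeg) + (1 - w_eeg) * np.asarray(p_emg)
    return p / p.sum(), int(np.argmax(p))

# Hypothetical case: the patient's EMG is weak, so EEG carries more weight.
p_eeg = [0.7, 0.2, 0.1]    # EEG classifier favours class 0
p_emg = [0.3, 0.4, 0.3]    # noisy EMG classifier is nearly ambivalent
probs, decision = fuse(p_eeg, p_emg, w_eeg=0.8)
print(decision)             # fused decision: class 0
```

    Weighting by per-modality confidence is one common way to let EEG compensate when EMG is degraded, which is the motivation the abstract gives.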

    Non-intrusive guidance of a robotic arm using a dry-electrode myoelectric armband

    Get PDF
    For several years, robotics has been seen as a key solution to improve the quality of life of people living with upper-limb disabilities. To create new, smart prostheses that can easily be integrated into everyday life, they must be non-intrusive, reliable and inexpensive. Surface electromyography (sEMG) provides an intuitive, non-intrusive interface based on a user's muscle activity to interact with robots. However, despite extensive research in the field of sEMG signal classification, current classifiers still lack reliability due to their lack of robustness to short-term (e.g. small electrode displacement, muscle fatigue) or long-term (e.g. change in muscle mass and adipose tissue) noise. In practice, this means that to be useful, the classifier needs to be periodically re-calibrated, a time-consuming process. The goal of my research project is to propose a human-robot myoelectric interface based on transfer learning and domain adaptation algorithms to increase the long-term reliability of the system, while reducing the intrusiveness (in terms of hardware and preparation time) of this kind of system. The non-intrusive aspect is achieved with a ten-channel dry-electrode armband. This armband, named the 3DC Armband, was conceived by Dr. Gabriel Gagnon-Turcotte, my co-directors and myself, and was realized during my doctorate. At the time of writing, the 3DC Armband is the best-performing wireless dry-electrode sEMG armband available. Unlike gel-based electrodes, which require intrusive skin preparation (i.e. shaving, cleaning the skin and applying conductive gel), the 3DC Armband can simply be placed on the forearm without any preparation. However, this ease of use results in a decrease in signal quality, as the signal recorded by dry electrodes is inherently noisier than that of gel-based ones. In addition, other systems use invasive methods (intramuscular electromyography) to capture a cleaner signal and reduce sources of noise (e.g. electrode shift). To remedy this degradation of information resulting from the non-intrusiveness of the armband, this research project relies on deep learning, and more specifically on convolutional networks. The research project was divided into three phases. The first is the design of a classifier allowing the recognition of hand gestures in real time. The second is the implementation of a transfer learning algorithm to take advantage of data recorded across multiple users, thereby improving the system's accuracy while decreasing the time required to use it. The third phase is the development and implementation of domain adaptation and self-supervised learning algorithms to make the classifier robust to long-term changes.

    Wheelchair control using EEG signal classification

    Get PDF
    This diploma thesis presents the concept of a mind-controlled electric wheelchair designed for people who are unable to operate an electric wheelchair through conventional interfaces such as a hand joystick. Four main components of the concept are described: the electroencephalograph, the brain-computer interface, the shared-control system and the electric wheelchair itself. The methodology used is described and the results of the conducted experiments are presented. In conclusion, suggestions for future development are outlined.
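    The shared-control component can be illustrated with a simple command-blending rule. This is a generic sketch of shared control, not the thesis's actual controller; the confidence-weighted blend and the turn values are assumptions:

```python
# Illustrative shared-control sketch: blend the BCI-decoded steering command
# with a safety controller's obstacle-avoidance command, weighting by how
# confident the BCI decoder is in the user's intent.
def shared_control(user_turn, safety_turn, confidence):
    # confidence in [0, 1]: 1.0 fully trusts the user, 0.0 fully trusts safety.
    return confidence * user_turn + (1 - confidence) * safety_turn

# User steers right (+1) but an obstacle on the right pushes the safety
# layer left (-1); a low-confidence decode yields a cautious blend.
cmd = shared_control(user_turn=1.0, safety_turn=-1.0, confidence=0.3)
print(cmd)   # about -0.4: the safety layer dominates
```

    Linear blending like this is one of the simplest shared-control policies; real systems typically add hysteresis and hard safety overrides.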

    A fully-wearable non-invasive SSVEP-based BCI system enabled by AR techniques for daily use in real environment.

    Get PDF
    This thesis aims to explore the design and implementation of Brain-Computer Interfaces (BCIs) specifically for non-medical scenarios, and therefore to propose a solution that overcomes the typical drawbacks of existing systems, such as long and uncomfortable setup, limited or nonexistent mobility, and poor real-time performance. The research starts from the design and implementation of a plug-and-play, wearable, low-power BCI that is capable of decoding up to eight commands displayed on an LCD screen, with about 2 seconds of latency. The thesis also addresses the issues that emerge from using the BCI during a walk in a real environment while tracking the subject via an indoor positioning system. Furthermore, the BCI is then enhanced with a smart-glasses device that projects the BCI's visual interface with augmented reality (AR) techniques, unbinding the system's usage from the need for infrastructure in the surrounding environment.
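    The core of an SSVEP decoder of this kind can be sketched with a frequency-detection step: each on-screen command flickers at a distinct rate, and the decoder picks the candidate frequency that dominates the EEG spectrum. The sampling rate, flicker frequencies and synthetic signal below are illustrative assumptions, not the thesis's parameters:

```python
import numpy as np

# Hedged SSVEP decoding sketch: detect which candidate flicker frequency
# dominates a short EEG window, using a simple FFT peak-picking rule.
fs = 250                                     # assumed EEG sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic EEG: a 12 Hz steady-state response buried in noise.
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)

candidates = [8.0, 10.0, 12.0, 15.0]         # one flicker rate per command
spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Pick the candidate with the largest spectral magnitude at its own bin.
powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
command = int(np.argmax(powers))
print(candidates[command])                    # 12.0
```

    Practical SSVEP systems usually replace raw peak-picking with canonical correlation analysis and include harmonics, but the frequency-tagging principle is the same.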