122 research outputs found
Novel Muscle Monitoring by Radiomyography (RMG) and Application to Hand Gesture Recognition
Conventional electromyography (EMG) measures the continuous neural activity
during muscle contraction, but lacks explicit quantification of the actual
contraction. Mechanomyography (MMG) and accelerometers only measure body
surface motion, while ultrasound, CT-scan and MRI are restricted to in-clinic
snapshots. Here we propose a novel radiomyography (RMG) for continuous muscle
actuation sensing that can be wearable and touchless, capturing both
superficial and deep muscle groups. We verified RMG experimentally with a
forearm-wearable sensor for detailed hand gesture recognition. We first converted the
radio sensing outputs to the time-frequency spectrogram, and then employed the
vision transformer (ViT) deep learning network as the classification model,
which recognized 23 gestures with an average accuracy of up to 99% across 8
subjects. By transfer learning, high adaptivity to user differences and sensor
variation was achieved, with an average accuracy of up to 97%. We further
demonstrated RMG for monitoring eye and leg muscles and achieved high accuracy
in eye-movement and body-posture tracking. RMG can be used with synchronous EMG
to derive stimulation-actuation waveforms for many future applications in
kinesiology, physiotherapy, rehabilitation, and human-machine interfaces.
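The preprocessing step described above, converting a 1-D sensor stream into a time-frequency spectrogram for an image classifier, can be sketched as follows. This is a minimal illustration with assumed parameters (sampling rate, window length, and a synthetic chirp standing in for the radio sensing output), not the authors' pipeline:

```python
import numpy as np

def stft_spectrogram(x, nperseg=256, step=128):
    """Power spectrogram via a Hann-windowed short-time Fourier transform."""
    win = np.hanning(nperseg)
    frames = np.array([x[i:i + nperseg] * win
                       for i in range(0, len(x) - nperseg + 1, step)])
    return (np.abs(np.fft.rfft(frames, axis=1)) ** 2).T  # (freq bins, frames)

# Hypothetical 1-D radio sensing output: 2 s sampled at 1 kHz, with a
# frequency sweep standing in for a time-varying muscle actuation signature.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (5 + 10 * t) * t)

S = stft_spectrogram(x)
log_S = 10 * np.log10(S + 1e-12)  # log power, a common input normalization
print(log_S.shape)                # (frequency bins, time frames)
```

The resulting 2-D log-power map is the kind of image-like input a ViT classifier consumes.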
Transfer learning in hand movement intention detection based on surface electromyography signals
Over the past several years, electromyography (EMG) signals have been used as a natural interface to interact with computers and machines. Recently, deep learning algorithms such as Convolutional Neural Networks (CNNs) have gained interest for decoding hand movement intention from EMG signals. However, deep networks require a large dataset to train appropriately, and creating such a database for a single subject can be very time-consuming. In this study, we addressed this issue from two perspectives: (i) we proposed a subject-transfer framework that uses the knowledge learned from other subjects to compensate for a target subject's limited data; (ii) we proposed a task-transfer framework in which the knowledge learned from a set of basic hand movements is used to classify more complex movements composed of combinations of those basic movements. We introduced two CNN-based architectures for hand movement intention detection and a subject-transfer learning approach. Classifiers are tested on the Nearlab dataset, a sEMG hand/wrist movement dataset including 8 movements and 11 subjects, along with their combinations, and on the open-source hand sEMG dataset "NinaPro DataBase 2 (DB2)". For the Nearlab database, the subject-transfer learning approach improved the average classification accuracy of the proposed deep classifier from 92.60 to 93.30% when the classifier utilized 10 other subjects' data via our proposed framework. For NinaPro DB2 exercise B (17 hand movement classes), this improvement was from 81.43 to 82.87%. Moreover, three stages of analysis in the task-transfer approach proved that it is possible to classify combination hand movements using the knowledge learned from a set of basic hand movements with zero or few samples and a few seconds of data from the target movement classes.
The first stage takes advantage of shared muscle synergies to classify combined movements, while the second and third stages use novel algorithms based on few-shot learning and fine-tuning, drawing samples from the target domain to further train the classifier trained on the source database. The use of information learned from basic hand movements improved the classification accuracy of combined hand movements by 10%.
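The subject-transfer idea, pre-train on pooled data from other subjects and then fine-tune on the target subject's limited data, can be sketched with a toy linear classifier. Everything below (synthetic features, the two-class setup, learning rates) is assumed for illustration; the paper itself uses CNNs on sEMG:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, W=None, epochs=300, lr=0.2):
    """Multinomial logistic regression by gradient descent.
    Passing an existing W warm-starts training -- the transfer step."""
    n_classes = int(y.max()) + 1
    if W is None:
        W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)
    return W

def make_subject(n, shift):
    """Toy sEMG feature generator: 2 gesture classes, per-subject offset."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 4)) + shift + y[:, None] * 2.0
    return np.hstack([X, np.ones((n, 1))]), y  # append a bias feature

X_src, y_src = make_subject(400, 0.0)  # data pooled from other subjects
X_tgt, y_tgt = make_subject(20, 0.7)   # few samples from the target subject
X_test, y_test = make_subject(200, 0.7)

W_src = train(X_src, y_src)                            # pre-train on source
W_ft = train(X_tgt, y_tgt, W=W_src.copy(), epochs=100)  # fine-tune on target

acc = (softmax(X_test @ W_ft).argmax(1) == y_test).mean()
print(f"target-subject accuracy after transfer: {acc:.2f}")
```

The warm start lets the few target samples correct for the subject-specific shift instead of learning the classes from scratch.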
Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning
In recent years, deep learning algorithms have become increasingly more
prominent for their unparalleled ability to automatically learn discriminant
features from large amounts of data. However, within the field of
electromyography-based gesture recognition, deep learning algorithms are seldom
employed, as they require an unreasonable amount of effort from a single person
to generate tens of thousands of examples.
This work's hypothesis is that general, informative features can be learned
from the large amounts of data generated by aggregating the signals of multiple
users, thus reducing the recording burden while enhancing gesture recognition.
Consequently, this paper proposes applying transfer learning on aggregated data
from multiple users, while leveraging the capacity of deep learning algorithms
to learn discriminant features from large datasets. Two datasets comprised of
19 and 17 able-bodied participants respectively (the first one is employed for
pre-training) were recorded for this work, using the Myo Armband. A third Myo
Armband dataset was taken from the NinaPro database and is comprised of 10
able-bodied participants. Three different deep learning networks employing
three different modalities as input (raw EMG, Spectrograms and Continuous
Wavelet Transform (CWT)) are tested on the second and third datasets. The
proposed transfer learning scheme is shown to systematically and significantly
enhance the performance for all three networks on the two datasets, achieving
an offline accuracy of 98.31% for 7 gestures over 17 participants for the
CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw
EMG-based ConvNet. Finally, a use-case study employing eight able-bodied
participants suggests that real-time feedback allows users to adapt their
muscle activation strategy which reduces the degradation in accuracy normally
experienced over time. Comment: Source code and datasets available:
https://github.com/Giguelingueling/MyoArmbandDatase
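Of the three input modalities, the CWT is perhaps the least familiar; a minimal Morlet-based version can be written directly with NumPy. The signal, wavelet length, and scale range below are assumptions, not the paper's settings:

```python
import numpy as np

def morlet(length, scale, w0=5.0):
    """Complex Morlet wavelet sampled at `length` points for a given scale."""
    t = (np.arange(length) - length // 2) / scale
    return np.exp(1j * w0 * t - t ** 2 / 2) / np.sqrt(scale)

def cwt_magnitude(x, scales, length=256):
    """CWT magnitude by direct convolution; rows = scales, columns = time."""
    return np.array([np.abs(np.convolve(x, morlet(length, s), mode="same"))
                     for s in scales])

# Hypothetical raw-EMG-like trace at 1 kHz: noise plus a 50 Hz burst.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
x = 0.1 * np.random.default_rng(1).normal(size=t.size)
x[400:600] += np.sin(2 * np.pi * 50.0 * t[400:600])

scales = np.arange(2, 32)         # small scales correspond to high frequencies
tfmap = cwt_magnitude(x, scales)  # a (scales x time) image, ConvNet-ready
```

The burst lights up around the scale matching 50 Hz, giving the ConvNet a localized time-frequency pattern to learn from.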
Efficiency evaluation of external environments control using bio-signals
There are many types of bio-signals with various prospective control applications. This dissertation concerns a possible application domain of the electroencephalographic (EEG) signal. The use of EEG signals as a source of information for controlling external devices has recently become a growing area of interest in the scientific world. The application of EEG signals in Brain-Computer Interfaces (BCIs) (a variant of Human-Computer Interfaces (HCIs)), as an instrument enabling direct and fast communication between the human brain and an external device, has recently become very popular.
BCI solutions currently available on the market require complex signal-processing methodology, which results in the need for expensive equipment with high computing power.
In this work, a study on using various types of EEG equipment was conducted in order to select the most appropriate one. The analysis of EEG signals is very complex due to the presence of various internal and external artifacts. The signals are also sensitive to disturbances and non-stationary, which makes the analysis a complicated task. The research was performed on customised equipment (built by the author of this dissertation), on a professional medical device, and on an Emotiv EPOC headset.
This work concentrated on the application of an inexpensive, easy-to-use Emotiv EPOC headset as a tool for acquiring EEG signals. The project also involved an embedded system platform, the TS-7260. That choice limited the selection of an appropriate signal-processing method, as embedded platforms are characterised by limited efficiency and low computing power. This aspect was the most challenging part of the whole work.
Implementing the embedded platform makes it possible to extend the future applications of the proposed BCI. It also gives more flexibility, as the platform is able to simulate various environments.
The study did not involve the use of traditional statistical or complex signal-processing methods. The novelty of the solution relied on the implementation of basic mathematical operations. The efficiency of this method is also presented in this dissertation. Another important aspect of the conducted study is that the research was carried out not only in a laboratory, but also in an environment reflecting real-life conditions.
The results proved the efficiency and suitability of the proposed solution in real-life environments. Further study will focus on improving the signal-processing method and on the application of other bio-signals, in order to extend the possible applicability and improve its effectiveness.
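The dissertation does not list the exact operations used, but a detector built solely from basic arithmetic, e.g. rectification, a moving average, and a threshold, illustrates the kind of low-cost processing an embedded platform such as the TS-7260 can afford. The signal and parameters below are hypothetical:

```python
import numpy as np

def detect_events(x, win=32, k=3.0):
    """Toy low-cost detector: rectify, moving-average, then threshold.
    Only basic operations, suitable for a low-power embedded platform."""
    kernel = np.ones(win) / win
    smooth = np.convolve(np.abs(x), kernel, mode="same")
    thr = k * np.median(smooth)  # adaptive threshold from the baseline level
    return smooth > thr

# Synthetic EEG-like trace: background noise with two large "blink" deflections
# that could serve as deliberate control commands.
rng = np.random.default_rng(2)
x = rng.normal(0, 1, 2000)
x[500:550] += 12.0
x[1400:1450] -= 12.0

events = detect_events(x)
```

A boolean event mask like this can be mapped directly to device commands without any heavyweight signal processing.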
Upper Limb Movement Recognition utilising EEG and EMG Signals for Rehabilitative Robotics
Upper limb movement classification, which maps input signals to the target
activities, is a key building block in the control of rehabilitative robotics.
Classifiers are trained for the rehabilitative system to comprehend the desires
of the patient whose upper limbs do not function properly. Electromyography
(EMG) signals and Electroencephalography (EEG) signals are used widely for
upper limb movement classification. By analysing the classification results of
the real-time EEG and EMG signals, the system can understand the intention of
the user and predict the events that one would like to carry out. Accordingly,
it will provide external help to the user. However, the noise in the real-time
EEG and EMG data collection process contaminates the effectiveness of the data,
which undermines classification performance. Moreover, not all patients process
strong EMG signals due to muscle damage and neuromuscular disorder. To address
these issues, this paper explores different feature extraction techniques and
machine learning and deep learning models for EEG and EMG signals
classification and proposes a novel decision-level multisensor fusion technique
to integrate EEG signals with EMG signals. This system retrieves effective
information from both sources to understand and predict the desire of the user,
and thus aid them. By testing the proposed technique on the publicly available
WAY-EEG-GAL dataset, which contains EEG and EMG signals that were recorded
simultaneously, we demonstrate the feasibility and effectiveness of the
novel system. Comment: 20 pages, 11 figures, 2 tables; Thesis for Undergraduate Research
Project in Computing, NUS; Accepted by Future of Information and
Communication Conference 2023, San Francisc
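The paper's decision-level fusion is described only at a high level; one common realization is a weighted average of the two classifiers' per-class probabilities, sketched here with made-up softmax outputs:

```python
import numpy as np

def fuse(p_eeg, p_emg, w_eeg=0.5):
    """Decision-level fusion: weighted average of per-class probabilities
    from the EEG and EMG classifiers, then argmax over classes."""
    p = w_eeg * p_eeg + (1 - w_eeg) * p_emg
    return p.argmax(axis=1)

# Hypothetical softmax outputs for 3 trials x 4 movement classes.
p_eeg = np.array([[0.6, 0.2, 0.1, 0.1],   # EEG confident in class 0
                  [0.3, 0.3, 0.2, 0.2],   # EEG ambiguous
                  [0.1, 0.1, 0.2, 0.6]])
p_emg = np.array([[0.5, 0.3, 0.1, 0.1],
                  [0.1, 0.7, 0.1, 0.1],   # EMG resolves the ambiguity
                  [0.2, 0.2, 0.3, 0.3]])

print(fuse(p_eeg, p_emg))  # -> [0 1 3]
```

The weight `w_eeg` can be lowered for patients with weak EMG, which is precisely the scenario that motivates combining the two modalities.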
Non-Intrusive Guidance of a Robotic Arm Using a Dry-Electrode Myoelectric Armband
For several years, robotics has been seen as a key solution to improve the quality of life of people living with upper-limb disabilities. To create new, smart prostheses that can easily be integrated into everyday life, they must be non-intrusive, reliable and inexpensive. Surface electromyography provides an intuitive interface based on a user's muscle activity to interact with robots.
However, despite extensive research in the field of sEMG signal classification, current classifiers still lack reliability due to their lack of robustness to short-term (e.g. small electrode displacement, muscle fatigue) or long-term (e.g. change in muscle mass and adipose tissue) noise. In practice, this means that to be useful, the classifier needs to be periodically re-calibrated, a time-consuming process. The goal of my research project is to propose a human-robot myoelectric interface based on transfer learning and domain adaptation algorithms to increase the reliability of the system in the long term, while at the same time reducing the intrusiveness (in terms of hardware and preparation time) of this kind of system. The non-intrusive aspect is achieved with a dry-electrode armband featuring ten channels. This armband, named the 3DC Armband, is of our (Dr. Gabriel Gagnon-Turcotte, my co-directors and myself) conception and was realized during my doctorate. At the time of writing, the 3DC Armband offers the best performance among currently available dry-electrode surface-electromyographic armbands. Unlike gel-based electrodes, which require intrusive skin preparation (i.e. shaving, cleaning the skin and applying conductive gel), the 3DC Armband can simply be placed on the forearm without any preparation. However, this ease of use results in a decrease in the quality of information, because the signal recorded by dry electrodes is inherently noisier than that of gel-based ones. In addition, other systems use invasive methods (intramuscular electromyography) to capture a cleaner signal and reduce sources of noise (e.g. electrode shift). To remedy this degradation of information resulting from the non-intrusiveness of the armband, this research project relies on deep learning, and more specifically on convolutional networks. The research project was divided into three phases.
The first is the design of a classifier allowing the recognition of hand gestures in real time. The second is the implementation of a transfer learning algorithm to take advantage of the data recorded across multiple users, thereby improving the system's accuracy while decreasing the time required to use the system. The third phase is the development and implementation of domain adaptation and self-supervised learning algorithms to enhance the classifier's robustness to long-term changes.
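For the third phase, one standard self-supervised recalibration recipe (not necessarily this thesis's exact algorithm) is to let the current classifier pseudo-label unlabeled data from a new session and update its parameters accordingly, sketched here with a nearest-centroid classifier and synthetic drifted features:

```python
import numpy as np

rng = np.random.default_rng(3)

def predict(X, centroids):
    """Nearest-centroid classification: squared distance to each class mean."""
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return d.argmin(1)

# Two gesture classes learned in an earlier session (centroids in feature space).
centroids = np.array([[0.0, 0.0], [3.0, 3.0]])

# Later session: same gestures, drifted by e.g. electrode shift (+0.8 offset).
y_new = rng.integers(0, 2, 300)
X_new = rng.normal(scale=0.5, size=(300, 2)) + 0.8 + y_new[:, None] * 3.0

# Self-supervised recalibration: trust the current predictions as pseudo-labels
# and move each centroid to the mean of the samples assigned to it.
pseudo = predict(X_new, centroids)
for c in (0, 1):
    centroids[c] = X_new[pseudo == c].mean(0)

acc = (predict(X_new, centroids) == y_new).mean()
print(f"accuracy after unlabeled recalibration: {acc:.2f}")
```

No labels from the new session are needed, which is what makes such schemes attractive for countering long-term drift without periodic re-calibration sessions.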
Wheelchair control using EEG signal classification
This diploma thesis presents the concept of a mind-controlled electric wheelchair designed for people who are unable to use other interfaces, such as a hand joystick. The four main components of the concept are described: electroencephalography, the brain-computer interface, shared control, and the electric wheelchair itself. The methodology used is described and the results of the conducted experiments are presented. In conclusion, suggestions for future development are outlined.
A fully-wearable non-invasive SSVEP-based BCI system enabled by AR techniques for daily use in real environment.
This thesis aims to explore the design and implementation of Brain-Computer Interfaces (BCIs) specifically for non-medical scenarios, and therefore to propose a solution that overcomes typical drawbacks of existing systems, such as long and uncomfortable setup times, scarce or nonexistent mobility, and poor real-time performance. The research starts from the design and implementation of a plug-and-play wearable low-power BCI that is capable of decoding up to eight commands displayed on an LCD screen, with about 2 seconds of latency. The thesis also addresses the issues emerging from using the BCI during a walk in a real environment while tracking the subject via an indoor positioning system. Furthermore, the BCI is then enhanced with a smart-glasses device that projects the BCI visual interface with augmented reality (AR) techniques, freeing system usage from the need for infrastructure in the surrounding environment.
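SSVEP decoding, assigning an epoch to the flicker frequency with the strongest cortical response, can be sketched with a simple FFT power comparison. The sampling rate, stimulus frequencies, and synthetic epoch below are assumptions; practical systems often use CCA-based decoders instead:

```python
import numpy as np

fs = 250                               # Hz, an assumed consumer-EEG sampling rate
stim_freqs = [8.0, 10.0, 12.0, 15.0]   # hypothetical flicker frequencies

def classify_ssvep(x, fs, freqs):
    """Pick the stimulus frequency with the largest spectral power
    (fundamental + 2nd harmonic); a minimal alternative to CCA decoders."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    bins = np.fft.rfftfreq(len(x), 1 / fs)
    def power_at(f):
        return spec[np.argmin(np.abs(bins - f))]
    scores = [power_at(f) + power_at(2 * f) for f in freqs]
    return int(np.argmax(scores))

# Synthetic 2 s epoch: the user attends the 12 Hz target, plus noise.
rng = np.random.default_rng(4)
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.normal(size=t.size)

print(stim_freqs[classify_ssvep(x, fs, stim_freqs)])  # -> 12.0
```

With 2 s epochs this kind of decoder is consistent with the roughly 2-second command latency reported above.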