
    Challenges and Trends of Machine Learning in the Myoelectric Control System for Upper Limb Exoskeletons and Exosuits

    Myoelectric control systems, as emerging control strategies for upper limb wearable robots, have shown their efficacy and applicability in providing motion assistance and/or restoring motor functions in people with impairments or disabilities, as well as in augmenting physical performance in able-bodied individuals. In myoelectric control, electromyographic (EMG) signals from muscles are utilized, improving adaptability and human-robot interaction during various motion tasks. Machine learning has been widely applied in myoelectric control systems due to its advantages in detecting and classifying various human motions and motion intentions. This chapter illustrates the challenges and trends in recent machine learning algorithms implemented in myoelectric control systems designed for upper limb wearable robots, and highlights the key focus areas for future research. Different modalities of recent machine learning-based myoelectric control systems are described in detail, and their advantages and disadvantages are summarized. Furthermore, key design aspects and the types of experiments conducted to validate the efficacy of the proposed myoelectric controllers are explained. Finally, the challenges and limitations of current myoelectric control systems using machine learning algorithms are analyzed, from which future research directions are suggested.

    Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey

    Deep Learning (DL) has recently been employed to build smart systems that perform incredibly well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. In several fields, considerable improvements in computing hardware and the increasing need for big data analytics have boosted DL work. In recent years, physiological signal processing has strongly benefited from deep learning. In particular, there has been an exponential increase in the number of studies concerning the processing of electromyographic (EMG) signals using DL methods. This phenomenon is mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g., Ninapro. Such a growing trend has inspired us to seek out and review recent papers focusing on processing EMG signals using DL methods. Referring to the Scopus database, a systematic literature search of papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after a full-text analysis. The bibliometric research revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification, and Other Applications. The review process also confirmed the increasing trend in published papers; the number of papers published in 2018 is indeed four times that of the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, thus supporting our hypothesis. Finally, it is worth reporting that the convolutional neural network (CNN) is the most used topology among the DL architectures involved; approximately sixty percent of the reviewed articles consider a CNN.

    Variational Autoencoder and Sensor Fusion for Robust Myoelectric Controls

    Myoelectric control schemes aim to utilize surface electromyography (EMG) signals, the electric potentials measured directly from skeletal muscles, to control wearable robots such as exoskeletons and prostheses. The main challenge of myoelectric control is to increase and preserve signal quality by minimizing the effect of confounding factors such as muscle fatigue or electrode shift. Current myoelectric control schemes are developed to work in ideal laboratory conditions, but there is a persistent need to make these control schemes more robust so that they work in real-world environments. Following the manifold hypothesis, complexity in the world can be broken down from a high-dimensional space to a lower-dimensional form or representation that can explain how the higher-dimensional real world operates. From this premise, once the learned representation or manifold is discovered, biological actions and their relevant multimodal signals can be compressed into a form that remains pertinent whether they are performed in laboratory or non-laboratory settings. This thesis outlines a method that incorporates a contrastive variational autoencoder with an integrated classifier on multimodal sensor data to create a compressed latent-space representation that can be used in future myoelectric control schemes.
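
    A minimal sketch of the general idea described above, assuming a simple fully connected VAE whose latent code also feeds a classifier; the contrastive term and the actual multimodal sensor dimensions used in the thesis are not reproduced here, and all layer sizes are assumptions.

```python
# Sketch: variational autoencoder with an integrated classifier on the latent code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEWithClassifier(nn.Module):
    def __init__(self, in_dim=64, latent_dim=8, n_classes=6):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))
        self.cls = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), self.cls(mu), mu, logvar

def loss_fn(x, y, model, beta=1.0, gamma=1.0):
    recon, logits, mu, logvar = model(x)
    rec = F.mse_loss(recon, x)                                       # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # KL divergence to N(0, I)
    ce = F.cross_entropy(logits, y)                                  # integrated classifier term
    return rec + beta * kld + gamma * ce

x = torch.randn(16, 64)            # e.g. 16 windows of fused multimodal features (assumed dim)
y = torch.randint(0, 6, (16,))
print(loss_fn(x, y, VAEWithClassifier()))
```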

    Applications of Neural Networks in Classifying Trained and Novel Gestures Using Surface Electromyography

    Current prosthetic control systems explored in the literature that use pattern recognition can perform a limited number of pre-assigned functions, as they must be trained using muscle signals for every movement the user wants to perform. The goal of this study was to explore the development of a prosthetic control system that can classify both trained and novel gestures, for applications in commercial prosthetic arms. The first objective of this study was to evaluate the feasibility of three different algorithms in classifying raw sEMG data for trained isometric gestures and for novel isometric gestures that were not included in the training data set. The algorithms used were: a feedforward multi-layer perceptron (FFMLP), a stacked sparse autoencoder (SSAE), and a convolutional neural network (CNN). The second objective was to evaluate the algorithms' abilities to classify novel isometric gestures that were not included in the training data set, and to determine the effect of different gesture combinations on classification accuracy. The third objective was to predict the binary (flexed/extended) digit positions without training the network using kinematic data from the participant's hand. A g-tec USB Biosignal Amplifier was used to collect data from eight differential sEMG channels from 10 able-bodied participants. These participants performed 14 gestures, including rest, that involved a variety of discrete finger flexion/extension tasks. Forty seconds of data were collected for each gesture at 1200 Hz from the eight bipolar sEMG channels. These 14 gestures were then organized into 20 unique gesture combinations, where each combination consisted of one sub-set of gestures used for training and another sub-set used as the novel gestures, which were only used to test the algorithms' predictive capabilities. Participants were asked to perform the gestures in such a way that each digit was either fully flexed or fully extended to the best of their abilities. In this way, the digit positions for each gesture could be labelled with a value of zero or one according to their binary positions. The algorithms could therefore be provided with both input data (sEMG) and output labels without needing to record joint kinematics. The post-processing analysis of the outputs of each algorithm was conducted using two different methods: all-or-nothing gesture classification (ANGC) and weighted digit gesture classification (WDGC). All 20 combinations were tested with the FFMLP, SSAE, and CNN in MATLAB. For both analysis methods, the CNN outperformed the FFMLP and SSAE. Statistical analysis was not provided for the performance on novel gestures under the ANGC method, as the data were highly skewed and did not follow a normal distribution due to the large number of zero-valued classification results for most of the novel gestures. The FFMLP and SSAE showed no significant difference from one another for the trained ANGC results, but the FFMLP showed statistically higher performance than the SSAE for the trained and novel WDGC results. The results indicate that the CNN was able to classify most digits with reasonable accuracy, and that performance varied between participants. The results also indicate that, for some participants, this approach may be suitable for prosthetic control applications. The FFMLP and SSAE were largely unable to classify novel digit positions and obtained significantly lower accuracies for novel gestures under both analysis methods when compared to the CNN. Therefore, the FFMLP and SSAE algorithms do not seem to be suitable for prosthetic control applications using the proposed raw data input and output architecture.
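
    A small sketch of the two post-processing scores described above, assuming binary per-digit labels (1 = flexed, 0 = extended); the exact weighting used in the thesis may differ, so this only illustrates the distinction between the two methods.

```python
# Sketch: all-or-nothing vs. weighted-digit gesture scoring on binary digit predictions.
import numpy as np

def angc_accuracy(pred, true):
    """All-or-nothing: a gesture counts as correct only if every digit is correct."""
    return np.mean(np.all(pred == true, axis=1))

def wdgc_accuracy(pred, true):
    """Weighted digits: partial credit for each correctly classified digit."""
    return np.mean(pred == true)

pred = np.array([[1, 1, 0, 0, 0],    # predicted digit states (thumb .. little finger)
                 [1, 0, 0, 0, 1]])
true = np.array([[1, 1, 0, 0, 0],
                 [1, 1, 0, 0, 1]])
print(angc_accuracy(pred, true))      # 0.5 (second gesture has one wrong digit)
print(wdgc_accuracy(pred, true))      # 0.9 (9 of 10 digits correct)
```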

    How Many Muscles? Optimal Muscles Set Search for Optimizing Myocontrol Performance

    In myocontrol, due to computational and setup constraints, measuring a large number of muscles is not always possible: the choice of the muscle set to use in a myocontrol strategy depends on the desired application scope, yet a search for a reduced muscle set tailored to the application has never been performed. The identification of such a set would involve finding the minimum set of muscles whose difference in intention detection performance, compared with the original set, is not statistically significant. Also, given the intrinsic sensitivity of muscle synergies to variations of the EMG signal matrix, the reduced set should not alter the synergies that come from the initial input, since they provide physiological information on motor coordination. The advantages of such a reduced set, in a rehabilitation context, would be a reduction of the input processing time, a reduction of the setup bulk, and a higher sensitivity to synergy changes after training, which can eventually lead to modifications of the ongoing therapy. In this work, the existence of a minimum muscle set, called the optimal set, that preserves the performance of motor activity prediction and the physiological meaning of synergies has been investigated for an upper-limb myoelectric application. Analyzing isometric contractions during planar reaching tasks, two types of optimal muscle sets were examined: a subject-specific one and a global one. The former relies on the subject-specific movement strategy; the latter is composed of the most recurrent muscles among the subject-specific optimal sets and is shared by all subjects. Results confirmed that the muscle set can be reduced while achieving comparable hand force estimation performance. Moreover, two types of muscle synergies, namely "Pose-Shared" (extracted from a single multi-arm-pose dataset) and "Pose-Related" (obtained by clustering pose-specific synergies), extracted from the global optimal muscle set showed a significant similarity with those from the full set, indicating a high consistency of the motor primitives. Pearson correlation coefficients were used to assess the similarity of each synergy. The discovery of dominant muscles by optimizing both muscle set size and force estimation error may offer a clue to the link between synergistic patterns and the force task.
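
    A simplified sketch of the reduced-muscle-set idea, assuming a greedy backward elimination driven by linear force-estimation error; the paper's actual criterion involves statistical testing and synergy similarity, and all names, data, and thresholds below are illustrative.

```python
# Sketch: greedily drop the EMG channel whose removal hurts force estimation least,
# stopping when the error grows beyond a tolerance relative to the full-set baseline.
import numpy as np

def force_error(emg, force):
    """RMSE of a linear EMG-to-force map fitted by least squares."""
    w, *_ = np.linalg.lstsq(emg, force, rcond=None)
    return np.sqrt(np.mean((emg @ w - force) ** 2))

def reduce_muscle_set(emg, force, tol=1.05):
    """emg: (samples, muscles); keep removing channels while RMSE stays within tol x baseline."""
    keep = list(range(emg.shape[1]))
    baseline = force_error(emg, force)
    while len(keep) > 1:
        errs = [(force_error(emg[:, [m for m in keep if m != c]], force), c) for c in keep]
        best_err, drop_channel = min(errs)        # channel whose removal costs least
        if best_err > tol * baseline:
            break
        keep.remove(drop_channel)
    return keep

rng = np.random.default_rng(0)
emg = rng.random((500, 10))                                            # 10 simulated muscles
force = emg[:, :4] @ rng.random((4, 2)) + 0.05 * rng.standard_normal((500, 2))
print(reduce_muscle_set(emg, force))   # expected to retain (roughly) the informative channels
```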

    Synergy-Based Human Grasp Representations and Semi-Autonomous Control of Prosthetic Hands

    Secure and stable grasping with humanoid robot hands is a major challenge. This dissertation therefore addresses the derivation of grasping strategies for robot hands from the observation of human grasping. The focus is on the grasping process as a whole: on the one hand, the hand and finger trajectories during the grasp and, on the other hand, the contact points and the force profile between hand and object from first contact up to a statically stable grip. Nonlinear postural synergies and force synergies of human grasps are presented, which allow the generation of human-like grasp postures and grasp forces. Furthermore, synergy primitives are developed as an adaptable representation of human grasping motions. The described grasping strategies learned from humans are applied to the control of robotic prosthetic hands. Within a semi-autonomous control scheme, human-like grasping motions are proposed according to the situation and supervised by the prosthesis user.
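
    A minimal sketch of the classic linear postural-synergy baseline (PCA over recorded joint angles); the dissertation itself introduces nonlinear synergies and force synergies, so this is only a simplified stand-in with assumed joint and grasp counts.

```python
# Sketch: extract linear postural synergies from recorded grasps and synthesise a new posture.
import numpy as np

def posture_synergies(joint_angles, n_synergies=2):
    """joint_angles: (grasps, joints). Returns mean posture and principal synergy directions."""
    mean = joint_angles.mean(axis=0)
    centered = joint_angles - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_synergies]               # each row is one synergy (a joint-space direction)

def synthesize_grasp(mean, synergies, weights):
    """Generate a hand posture by mixing a few synergies around the mean posture."""
    return mean + np.asarray(weights) @ synergies

rng = np.random.default_rng(1)
grasps = rng.random((50, 20))                   # 50 recorded grasps, 20 finger joint angles (assumed)
mean, syn = posture_synergies(grasps)
print(synthesize_grasp(mean, syn, [0.5, -0.2]).shape)   # (20,)
```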

    Inter-subject Domain Adaptation for CNN-based Wrist Kinematics Estimation using sEMG


    Deep Learning Based Upper-limb Motion Estimation Using Surface Electromyography

    To advance human-machine interfaces (HMI) that can help disabled people reconstruct lost upper-limb functions, machine learning (ML) techniques, particularly classification-based pattern recognition (PR), have been extensively implemented to decode human movement intentions from surface electromyography (sEMG) signals. However, the performance of ML can be substantially affected, or even limited, by feature engineering, which requires expertise in both domain knowledge and experimental experience. To overcome this limitation, researchers are now focusing on deep learning (DL) techniques to derive informative, representative, and transferable features from raw data automatically. Despite some progress reported in the recent literature, it is still very challenging to achieve reliable and robust interpretation of user intentions in practical scenarios, mainly because of the high complexity of upper-limb motions and the non-stationary characteristics of sEMG signals. Besides, the PR scheme identifies only discrete states of motion; to complete coordinated tasks such as grasping, users have to rely on a sequential on/off control of each individual function, which is inherently different from the simultaneous and proportional control (SPC) strategy adopted by the natural motions of upper limbs. The aim of this thesis is to develop and advance several DL techniques for the estimation of upper-limb motions from sEMG, and the work is centred on three themes: 1) to improve the reliability of gesture recognition by rejecting uncertain classification outcomes; 2) to build regression frameworks for joint kinematics estimation that enable SPC; and 3) to reduce the degradation of estimation performance when a DL model is applied to a new individual. In order to achieve these objectives, the following efforts were made: 1) a confidence model was designed to predict the probability of correctness for each classification of a convolutional neural network (CNN), such that uncertain recognitions can be identified and rejected; 2) a hybrid framework using a CNN for deep feature extraction and a long short-term memory (LSTM) neural network was constructed to conduct sequence regression, which could simultaneously exploit the temporal and spatial information in sEMG data; 3) the hybrid framework was further extended by integrating a Kalman filter with the LSTM units in the recursive learning process, yielding a deep Kalman filter network (DKFN) that performs kinematics estimation more effectively; and 4) a novel regression scheme was proposed for supervised domain adaptation (SDA), based on which model generalisation among subjects can be substantially enhanced.
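
    A hedged sketch of the CNN-plus-LSTM regression idea outlined above (per-window spatial features followed by recurrent temporal modelling and joint-angle outputs); the confidence model, the Kalman-filter stage of the DKFN, and the domain-adaptation scheme are not reproduced, and all layer sizes, window lengths, and joint counts are assumptions.

```python
# Sketch: CNN feature extractor per sEMG window, LSTM over the window sequence,
# linear head producing joint-angle estimates for simultaneous and proportional control.
import torch
import torch.nn as nn

class CNNLSTMRegressor(nn.Module):
    def __init__(self, n_channels=8, n_joints=3, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                  # one feature vector per window
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_joints)

    def forward(self, x):                             # x: (batch, windows, channels, samples)
        b, t, c, s = x.shape
        feats = self.cnn(x.reshape(b * t, c, s)).squeeze(-1).reshape(b, t, -1)
        h, _ = self.lstm(feats)                       # temporal modelling across windows
        return self.out(h)                            # (batch, windows, joints) joint-angle estimates

pred = CNNLSTMRegressor()(torch.randn(2, 10, 8, 200))
print(pred.shape)                                     # torch.Size([2, 10, 3])
```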

    Hybrid Wearable Signal Processing/Learning via Deep Neural Networks

    Wearable technologies have gained considerable attention in recent years as a potential post-smartphone platform with several applications of significant engineering importance. Wearable technologies are expected to become more prevalent in a variety of areas, including modern healthcare practices, robotic prosthesis control, Augmented Reality (AR) and Virtual Reality (VR) applications, Human Machine Interface/Interaction (HMI), and remote support for patients and chronically ill people at home. The emergence of wearable technologies can be attributed to the advancement of flexible electronic materials, the availability of advanced cloud and wireless communication systems, and the Internet of Things (IoT), coupled with high demand for healthcare management from the tech-savvy and elderly populations. Wearable devices in the healthcare realm gather various biological signals from the human body, among which the Electrocardiogram (ECG), Photoplethysmogram (PPG), and surface Electromyogram (sEMG) are the most widely monitored non-intrusive signals. Utilizing these widely used non-intrusive signals, the primary emphasis of this dissertation is on the development of advanced Machine Learning (ML), in particular Deep Learning (DL), algorithms to increase the accuracy of wearable devices in specific tasks. In this context, the first part uses ECG and PPG bio-signals and focuses on the development of accurate, subject-specific solutions for continuous and cuff-less Blood Pressure (BP) monitoring. More precisely, a deep learning-based framework known as BP-Net is proposed for predicting the continuous upper and lower bounds of blood pressure, known respectively as Systolic BP (SBP) and Diastolic BP (DBP). Furthermore, capitalizing on the fact that the datasets used in the recent literature are neither unified nor properly defined, a unified dataset is constructed from the MIMIC-I and MIMIC-III databases obtained from PhysioNet. In the second part, we focus on hand gesture recognition utilizing sEMG signals, which have the potential to be used in myoelectric prosthesis control systems or to decode Myo Armband data for interpreting human intent in AR/VR environments. Capitalizing on recent advances in hybrid architectures and Transformers in different applications, we aim to enhance the accuracy of sEMG-based hand gesture recognition by introducing a hybrid architecture based on Transformers, referred to as the Transformer for Hand Gesture Recognition (TraHGR). In particular, the TraHGR architecture consists of two parallel paths followed by a linear layer that acts as a fusion center to integrate the advantages of each module. The ultimate goal of this work is to increase the accuracy of gesture classification, which could be a major step towards the development of more advanced HMI systems that improve the quality of life for people with disabilities or enhance the user experience in AR/VR applications. Besides improving accuracy, decreasing the number of parameters in Deep Neural Network (DNN) architectures plays an important role in wearable devices: to achieve the highest possible accuracy, complicated and heavyweight DNNs are typically developed, which restricts their practical application in low-power and resource-constrained wearable systems. Therefore, in our next attempt, we propose a lightweight hybrid architecture based on a Convolutional Neural Network (CNN) and an attention mechanism, referred to as Hierarchical Depth-wise Convolution along with the Attention Mechanism (HDCAM), to effectively extract local and global representations of the input. The key objective behind the design of HDCAM was to ensure its resource efficiency while maintaining comparable or better performance than the current state-of-the-art methods.
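
    A rough sketch of the two-parallel-path-plus-linear-fusion pattern attributed to TraHGR above; the placeholder branches below are simple feed-forward layers rather than the published Transformer modules, and the input dimension and class count are assumptions.

```python
# Sketch: two parallel feature paths whose outputs are concatenated and combined
# by a linear layer acting as the fusion centre for gesture classification.
import torch
import torch.nn as nn

class TwoPathFusionClassifier(nn.Module):
    def __init__(self, in_dim=64, n_classes=17):
        super().__init__()
        self.path_a = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())  # placeholder for path 1
        self.path_b = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())  # placeholder for path 2
        self.fusion = nn.Linear(256, n_classes)                         # linear fusion centre

    def forward(self, x):
        return self.fusion(torch.cat([self.path_a(x), self.path_b(x)], dim=-1))

print(TwoPathFusionClassifier()(torch.randn(4, 64)).shape)   # torch.Size([4, 17])
```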

    Research on the Extraction, Interpretation, and Application of Muscle Synergies for Proportional Myoelectric Control

    Transfer of human intentions into myoelectric hand prostheses is generally achieved by learning a mapping directly from sEMG signals to kinematics using linear or nonlinear regression approaches. Due to the highly random and nonlinear nature of sEMG signals, such approaches are not able to fully exploit the functions of modern prostheses. Inspired by the muscle synergy hypothesis in the motor control community, some past studies have shown that better estimation accuracies can be achieved by learning a mapping to the kinematics space from synergistic features extracted from sEMG. However, mainly linear algorithms such as Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NNMF) were employed to extract synergistic features separately from EMG and kinematics data, and these do not consider the nonlinearity and the strong correlation that exist between finger kinematics and muscles. To exploit the relationship between EMG and finger kinematics for myoelectric control, we propose the use of the Manifold Relevance Determination (MRD) model (multi-view learning) to find the correspondence between muscle activity and kinematics by learning a shared low-dimensional representation. In the first part of the study, we present the multi-view learning approach, the interpretation of nonlinear muscle synergies extracted from the joint study of sEMG and finger kinematics, and their use in estimating finger kinematics for an upper-limb prosthesis. The applicability of the proposed approach is then demonstrated by comparing the kinematics estimation accuracies against linear synergies and direct mapping. In the second part of the study, we propose a new approach to extract nonlinear muscle synergies from sEMG using multi-view learning, which addresses two main drawbacks of established algorithms (1. inconsistent synergistic patterns upon the addition of sEMG signals from more muscles; 2. a weak metric for assessing the quality and quantity of muscle synergies), and we discuss the potential of the proposed approach for reducing the number of electrodes with negligible degradation in predicted kinematics.
    Kyushu Institute of Technology doctoral dissertation, degree number 生工博甲第372号, conferred March 25, 2020. Contents: 1 Introduction | 2 Related Work | 3 Extraction of nonlinear synergies for proportional and simultaneous estimation of finger kinematics | 4 An Approach to Extract Nonlinear Muscle Synergies from sEMG through Multi-Model Learning | 5 Conclusion and Future Work.
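
    A hedged sketch of the linear baseline this work compares against: NNMF muscle synergies extracted from non-negative sEMG envelopes, followed by a linear map from synergy activations to finger kinematics. The channel counts, number of synergies, and the ridge regressor are assumptions, and this is not the MRD model itself.

```python
# Sketch: linear muscle synergies (NNMF) plus a linear synergy-to-kinematics map.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
emg = np.abs(rng.standard_normal((1000, 12)))      # simulated non-negative EMG envelopes, 12 muscles
kin = rng.standard_normal((1000, 5))               # simulated angles of 5 finger joints

nmf = NMF(n_components=4, init='random', random_state=0, max_iter=500)
activations = nmf.fit_transform(emg)               # (samples, synergies): synergy activations
synergies = nmf.components_                        # (synergies, muscles): muscle weightings

mapper = Ridge(alpha=1.0).fit(activations, kin)    # linear map: synergy activations -> kinematics
print(mapper.predict(activations).shape)           # (1000, 5)
```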