
    Real-time EMG based pattern recognition control for hand prostheses: a review on existing methods, challenges and future implementation

    Upper limb amputation significantly restricts amputees from performing their daily activities. The myoelectric prosthesis, driven by signals from residual stump muscles, aims to restore the function of the lost limb seamlessly. Unfortunately, acquiring and using such myosignals is cumbersome and complicated, and once acquired, the signals usually require substantial computational power to be turned into a user control signal. The transition to a practical prosthesis solution is still challenged by various factors, particularly the fact that each amputee has different mobility, muscle contraction forces, limb positional variation and electrode placement. Thus, a solution that can adapt or otherwise tailor itself to each individual is required for maximum utility across amputees. Modified machine learning schemes for pattern recognition have the potential to significantly reduce the factors (user movement and muscle contraction) affecting traditional electromyography (EMG) pattern recognition methods. Although recent intelligent pattern recognition techniques can discriminate multiple degrees of freedom with high accuracy, their efficiency has rarely been demonstrated in real-world (amputee) applications. This review paper examined the suitability of upper limb prosthesis (ULP) inventions in the healthcare sector from a technical control perspective, with particular focus on real-world applications and the use of pattern recognition control by amputees. We first reviewed the overall structure of pattern recognition schemes for myoelectric prosthetic systems and then discussed their real-time use on amputee upper limbs. Finally, we concluded the paper with a discussion of the existing challenges and future research recommendations.
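
    The pattern recognition schemes reviewed here generally share a windowing, feature extraction, classification structure. As a minimal illustrative sketch (not taken from the review itself), the snippet below computes classic Hudgins-style time-domain features on sliding windows and trains an LDA classifier; the window length, overlap, channel count and synthetic data are assumptions made only for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(window):
    """Hudgins-style time-domain features for one window (samples x channels)."""
    mav = np.mean(np.abs(window), axis=0)                     # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)      # waveform length
    sign = np.signbit(window).astype(int)
    zc = np.sum(np.diff(sign, axis=0) != 0, axis=0)           # zero crossings
    slope_sign = np.signbit(np.diff(window, axis=0)).astype(int)
    ssc = np.sum(np.diff(slope_sign, axis=0) != 0, axis=0)    # slope sign changes
    return np.concatenate([mav, wl, zc, ssc])

def sliding_windows(emg, win=200, step=50):
    """Split a continuous recording (samples x channels) into overlapping analysis windows."""
    return [emg[i:i + win] for i in range(0, len(emg) - win + 1, step)]

# Hypothetical data: one 8-channel recording per gesture label.
rng = np.random.default_rng(0)
recordings = {label: rng.standard_normal((5000, 8)) for label in range(4)}

X, y = [], []
for label, emg in recordings.items():
    for w in sliding_windows(emg):
        X.append(td_features(w))
        y.append(label)

clf = LinearDiscriminantAnalysis().fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```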

    Towards electrodeless EMG linear envelope signal recording for myo-activated prostheses control

    After amputation, the residual muscles of the limb may continue to function normally, enabling the electromyogram (EMG) signals recorded from them to be used to drive a replacement limb. These replacement limbs are called myoelectric prostheses, and prostheses that use EMG have always been the first choice for both clinicians and engineers. Unfortunately, due to the many drawbacks of EMG (e.g. skin preparation, electromagnetic interference, high sample rate, etc.), researchers have sought suitable alternatives. This work proposes a dry-contact, low-cost sensor based on a force-sensitive resistor (FSR) as a valid alternative which, instead of detecting electrical events, detects mechanical events of the muscle. The FSR sensor is placed on the skin through a hard, circular base to sense muscle contraction and acquire the signal. To reduce the output drift (resistance) caused by FSR edges (creep) and to maintain the FSR sensitivity over a wide input force range, signal conditioning (voltage output proportional to force) is implemented. The signal acquired by the FSR can be used directly to replace the EMG linear envelope (an important control signal in prosthetics applications). To find the best FSR position(s) to replace a single EMG lead, simultaneous recording of EMG and FSR output is performed. Three FSRs are placed directly over the EMG electrodes, in the middle of the targeted muscle, and both the individual sensors (FSR1, FSR2 and FSR3) and combinations of sensors (e.g. FSR1+FSR2, FSR2-FSR3) are evaluated. The experiment is performed on a small sample of five volunteer subjects. The results show a high correlation (up to 0.94) between the FSR output and the EMG linear envelope. Consequently, using the best FSR sensor position demonstrates the ability of the electrodeless FSR linear envelope (FSR-LE) to proportionally control a prosthesis (3-D claw). Furthermore, the FSR can be used to develop a universal programmable muscle signal sensor suitable for controlling myo-activated prostheses.
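
    The comparison between the FSR output and the EMG linear envelope rests on two standard steps: computing the envelope (full-wave rectification followed by low-pass filtering) and correlating it with the conditioned FSR voltage. The sketch below illustrates this with SciPy on synthetic signals; the sampling rate, cut-off frequency and surrogate data are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # assumed sampling rate in Hz

def linear_envelope(emg, fs=FS, cutoff=3.0, order=4):
    """Full-wave rectify the EMG and low-pass filter it to obtain the linear envelope."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, np.abs(emg))

# Hypothetical simultaneous recordings of one EMG lead and one FSR channel.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
activation = (np.sin(2 * np.pi * 0.5 * t) > 0).astype(float)     # bursts of contraction
emg = activation * rng.standard_normal(t.size)                   # crude surrogate EMG
fsr = linear_envelope(emg) + 0.05 * rng.standard_normal(t.size)  # surrogate FSR voltage

# Pearson correlation between the FSR output and the EMG linear envelope.
env = linear_envelope(emg)
r = np.corrcoef(fsr, env)[0, 1]
print(f"correlation between FSR and EMG envelope: {r:.2f}")
```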

    A transferable adaptive domain adversarial neural network for virtual reality augmented EMG-Based gesture recognition

    Within the field of electromyography-based (EMG) gesture recognition, disparities exist between the offline accuracy reported in the literature and the real-time usability of a classifier. This gap mainly stems from two factors: 1) the absence of a controller, making the data collected dissimilar to actual control; 2) the difficulty of including the four main dynamic factors (gesture intensity, limb position, electrode shift, and transient changes in the signal), as including their permutations drastically increases the amount of data to be recorded. Conversely, online datasets are limited to the exact EMG-based controller used to record them, necessitating the recording of a new dataset for each control method or variant to be tested. Consequently, this paper proposes a new type of dataset to serve as an intermediate between offline and online datasets, by recording the data using a real-time experimental protocol. The protocol, performed in virtual reality, includes the four main dynamic factors and uses an EMG-independent controller to guide movements. This EMG-independent feedback ensures that the user is in the loop during recording, while enabling the resulting dynamic dataset to be used as an EMG-based benchmark. The dataset comprises 20 able-bodied participants completing three to four sessions over a period of 14 to 21 days. The ability of the dynamic dataset to serve as a benchmark is leveraged to evaluate the impact of different recalibration techniques for long-term (across-day) gesture recognition, including a novel algorithm named TADANN. TADANN consistently and significantly (p < 0.05) outperforms fine-tuning as the recalibration technique.
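
    TADANN builds on the domain-adversarial training idea, in which a gradient reversal layer lets a domain discriminator push the feature extractor toward session-invariant representations. The PyTorch sketch below shows only that core mechanism under assumed layer sizes; it illustrates the general approach rather than reproducing the TADANN architecture.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialEMGNet(nn.Module):
    def __init__(self, n_features=64, n_gestures=7, n_domains=4, lam=0.1):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU())
        self.gesture_head = nn.Linear(128, n_gestures)  # trained on labelled gesture data
        self.domain_head = nn.Linear(128, n_domains)    # trained to identify the recording session

    def forward(self, x):
        z = self.encoder(x)
        gesture_logits = self.gesture_head(z)
        domain_logits = self.domain_head(GradReverse.apply(z, self.lam))
        return gesture_logits, domain_logits

# The total loss is the gesture loss plus the domain loss; the reversed gradient
# drives the encoder toward features the domain head cannot separate by session.
model = AdversarialEMGNet()
gesture_logits, domain_logits = model(torch.randn(8, 64))
```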

    Should Hands Be Restricted When Measuring Able-Bodied Participants To Evaluate Machine Learning Controlled Prosthetic Hands?

    OBJECTIVE: When evaluating methods for machine learning controlled prosthetic hands, able-bodied participants are often recruited, for practical reasons, instead of participants with upper limb absence (ULA). However, able-bodied participants have been shown to often perform myoelectric control tasks better than participants with ULA. It has been suggested that this performance difference can be reduced by restricting the wrist and hand movements of able-bodied participants. However, the effect of such restrictions on the consistency and separability of the electromyogram (EMG) features remains unknown. The present work investigates whether the EMG separability and consistency between unaffected and affected arms differ and whether they change after restricting the unaffected limb in persons with ULA. METHODS: Both arms of participants with unilateral ULA were compared in two conditions: with the unaffected hand and wrist restricted or not. Furthermore, it was tested whether the effect of arm and restriction is influenced by arm posture (arm down, arm in front, or arm up). RESULTS: Fourteen participants (two women, age = 53.4 ± 4.05) with acquired transradial limb loss were recruited. We found that the unaffected limb generated more separable EMG than the affected limb. Furthermore, restricting the unaffected hand and wrist lowered the separability of the EMG when the arm was held down. CONCLUSION: Limb restriction is a viable method to make the EMG of able-bodied participants more similar to that of participants with ULA. SIGNIFICANCE: Future research that evaluates methods for machine learning controlled hands in able-bodied participants should restrict the participants' hand and wrist.
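
    Separability of EMG features between gesture classes is typically quantified with a scatter-based index, for example pairwise Mahalanobis distances between class means relative to the within-class scatter. The NumPy sketch below computes such an index on hypothetical feature data; it is a generic illustration of the concept, not the exact metric used in this study.

```python
import numpy as np

def separability_index(features, labels):
    """Mean pairwise Mahalanobis distance between class means, using the pooled within-class covariance."""
    classes = np.unique(labels)
    means = np.array([features[labels == c].mean(axis=0) for c in classes])
    pooled = sum(np.cov(features[labels == c], rowvar=False) * (np.sum(labels == c) - 1)
                 for c in classes) / (len(labels) - len(classes))
    inv_pooled = np.linalg.pinv(pooled)
    dists = [np.sqrt((means[i] - means[j]) @ inv_pooled @ (means[i] - means[j]))
             for i in range(len(classes)) for j in range(i + 1, len(classes))]
    return np.mean(dists)

# Hypothetical feature matrix: 300 windows x 16 features, 3 gesture classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 16)) + np.repeat(np.arange(3)[:, None], 100, axis=0)
y = np.repeat(np.arange(3), 100)
print("separability:", separability_index(X, y))
```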

    Spatial Information Enhances Myoelectric Control Performance with Only Two Channels

    Automatic gesture recognition (AGR) is investigated as an effortless human-machine interaction method with potential applications in many industrial sectors. When using the surface electromyogram (sEMG) for AGR, i.e. myoelectric control, a minimum of four EMG channels is typically required. In practical applications, however, fewer electrodes are always preferred, particularly for mobile and wearable applications. No published research has focused on how to improve the performance of a myoelectric system with only two sEMG channels. In this study, we presented a systematic investigation to fill this gap. Specifically, we demonstrated that, through spatial filtering and electrode position optimization, the myoelectric control performance was significantly improved (p < 0.05) and became similar to that obtained with four electrodes. Further, we found a significant correlation between offline and online performance metrics in the two-channel system, indicating that offline performance was transferable to online performance, which is highly relevant for algorithm development in sEMG-based AGR applications.
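
    Spatial filtering in this context means combining monopolar electrode signals into derivations (e.g. bipolar or double-differential) that emphasise local muscle activity before feature extraction. The sketch below applies simple differential spatial filters to a hypothetical monopolar array; the electrode count and filter choices are assumptions for illustration only.

```python
import numpy as np

def bipolar(monopolar):
    """Single differential: difference of adjacent monopolar channels (samples x channels)."""
    return np.diff(monopolar, axis=1)

def double_differential(monopolar):
    """Double differential: second spatial difference along the electrode array."""
    return np.diff(monopolar, n=2, axis=1)

# Hypothetical monopolar recording: 2000 samples from a linear array of 4 electrodes.
rng = np.random.default_rng(0)
mono = rng.standard_normal((2000, 4))

sd = bipolar(mono)              # 3 bipolar channels
dd = double_differential(mono)  # 2 double-differential channels
print(sd.shape, dd.shape)       # (2000, 3) (2000, 2)
```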

    Myoelectric Control for Active Prostheses via Deep Neural Networks and Domain Adaptation

    Recent advances in Biological Signal Processing (BSP) and Machine Learning (ML), in particular Deep Neural Networks (DNNs), have paved the way for the development of advanced Human-Machine Interface (HMI) systems for decoding human intent and controlling artificial limbs. Myoelectric control, as a subcategory of HMI systems, deals with detecting, extracting, processing, and ultimately learning from Electromyogram (EMG) signals to command external devices, such as hand prostheses. In this context, hand gesture recognition/classification via Surface Electromyography (sEMG) signals has attracted a great deal of interest from many researchers. Despite extensive progress in the field of myoelectric prostheses, however, there are still limitations that should be addressed to achieve a more intuitive upper limb prosthesis. In this Ph.D. thesis, we first perform a literature review of recent research on pattern classification approaches for myoelectric prosthesis control to identify challenges and potential opportunities for improvement. Then, we aim to enhance the accuracy of myoelectric systems, which can be used to realize an accurate and efficient HMI for myocontrol of neurorobotic systems. Besides improving accuracy, decreasing the number of parameters in DNNs plays an important role in a Hand Gesture Recognition (HGR) system. More specifically, a key factor in achieving a more intuitive upper limb prosthesis is the feasibility of embedding DNN-based models into prosthesis controllers. On the other hand, transformers are powerful DNN models that have revolutionized the Natural Language Processing (NLP) field and shown great potential to dramatically improve various computer vision tasks. Therefore, we propose a Transformer-based neural network architecture to classify and recognize upper-limb hand gestures. Finally, another goal of this thesis is to design a modern DNN-based gesture detection model that relies on minimal training data while providing high accuracy. Although DNNs have shown superior accuracy compared to conventional methods when large amounts of data are available for training, their performance decreases substantially when data are limited. Collecting large datasets for training may be feasible in research laboratories, but it is not a practical approach for real-life applications. We propose to solve this problem by designing a framework which utilizes a combination of temporal convolutions and attention mechanisms.
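
    The final framework described above combines temporal convolutions with attention mechanisms over sEMG windows. The PyTorch sketch below shows one minimal way such a model can be wired together; the layer sizes, window length and head counts are assumed for illustration and do not reproduce the thesis architecture.

```python
import torch
from torch import nn

class TemporalConvAttention(nn.Module):
    """1-D convolutions over time followed by self-attention and a gesture classifier."""
    def __init__(self, n_channels=8, n_gestures=7, d_model=64, n_heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_gestures)

    def forward(self, x):                      # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2))       # (batch, d_model, time)
        h = h.transpose(1, 2)                  # (batch, time, d_model)
        h, _ = self.attn(h, h, h)              # self-attention over time steps
        return self.classifier(h.mean(dim=1))  # average pooling over time

model = TemporalConvAttention()
logits = model(torch.randn(4, 200, 8))  # 4 windows of 200 samples x 8 channels
print(logits.shape)                     # torch.Size([4, 7])
```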

    Information Centric Updating Scheme Using EASRC For Upper-Limb Myoelectric Control

    In this thesis the idea of an information-centric updating scheme for improved upper-limb prosthesis control is explored. The basis for this updating framework is the EASRC classifier, a hybrid classifier that takes advantage of an Extreme Learning Machine (ELM) and Sparse Representation Classification (SRC). Due to its hybrid nature, EASRC performs classification depending on a confidence threshold. If the system is confident, EASRC uses boundaries defined for each class to provide a prediction. Otherwise, it uses the output of the ELM as a filter to obtain a smaller subset of potential classes, which allows SRC to generate a prediction by input reconstruction. The dependency on input reconstruction places an emphasis on the vector subspace occupied by each class: the class that contributes the most to the reconstruction is the class predicted by SRC. The speed of the ELM and the accuracy of SRC allow EASRC to be a robust classifier in the EMG problem space. EASRC, while robust, is still prone to performance degradation due to forces that modify the EMG signal characteristics (nonstationarities). To get around this issue, the classifier can be augmented with an updating system to prevent this degradation over time. To optimize the feature vectors included in the classifier, a simple updating scheme performs K-means on buffered input data to sample the most representative inputs. These representative inputs are incorporated into the class sub-dictionary after compressing the class sub-dictionary by K members. This method of replacement allows the system to adapt to changing signals caused by nonstationarities. In this thesis, we explore the updating system's classification performance under the limb-position effect when participants are asked to perform certain hand grips in static locations away from the original training site. From the online experiment, we observe a statistically significant improvement relative to the control classifier (p < 0.0001). We suggest this updating system for EASRC may show potential for amputees challenged by their temporally changing EMG data.
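
    The key mechanism is the hand-off from a fast ELM to reconstruction-based classification when the ELM confidence is low. The NumPy sketch below mirrors that flow under simplifying assumptions: per-class least-squares reconstruction stands in for the true L1 sparse coding of SRC, and the confidence measure and threshold are illustrative choices rather than those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=100):
    """Extreme Learning Machine: random hidden layer, least-squares output weights."""
    n_classes = y.max() + 1
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    T = np.eye(n_classes)[y]          # one-hot targets
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def hybrid_predict(x, W, b, beta, dictionaries, threshold=0.3, top_k=3):
    scores = np.tanh(x @ W + b) @ beta
    order = np.argsort(scores)[::-1]
    confidence = scores[order[0]] - scores[order[1]]  # margin between the top two classes
    if confidence > threshold:
        return order[0]                               # confident: use the ELM decision
    # Otherwise reconstruct x from the sub-dictionaries of the top-k candidate classes.
    residuals = {}
    for c in order[:top_k]:
        D = dictionaries[c]                           # columns are training exemplars of class c
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals[c] = np.linalg.norm(x - D @ coef)
    return min(residuals, key=residuals.get)          # class whose exemplars best reconstruct x

# Hypothetical training data: 4 classes, 20-dimensional feature vectors.
X = rng.standard_normal((200, 20)) + np.repeat(np.arange(4)[:, None], 50, axis=0)
y = np.repeat(np.arange(4), 50)
W, b, beta = elm_train(X, y)
dictionaries = {c: X[y == c].T for c in range(4)}
print(hybrid_predict(X[0], W, b, beta, dictionaries))
```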

    Robust and reliable hand gesture recognition for myoelectric control

    Surface Electromyography (sEMG) is a physiological signal that records the electrical activity of muscles via electrodes applied to the skin. In the context of Muscle Computer Interaction (MCI), systems are controlled by transforming myoelectric signals into interaction commands that convey the intent of user movement, mostly for rehabilitation purposes. Taking myoelectric hand prosthetic control as an example, using sEMG recorded from the remaining muscles of the stump can be considered the most natural way for amputees who have lost their limbs to perform activities of daily living with the aid of prostheses. Although the earliest myoelectric control research dates back to the 1950s, considerable challenges remain in closing the significant gap between academic research and industrial applications. Most recently, pattern recognition-based control has been developing rapidly to improve the dexterity of myoelectric prosthetic devices, owing to recent advances in machine learning and deep learning techniques. It is clear that the performance of Hand Gesture Recognition (HGR) plays an essential role in pattern recognition-based control systems. However, in reality, the tremendous success in achieving very high sEMG-based HGR accuracy (≥ 90%) reported in scientific articles has produced only limited clinical or commercial impact. As many have reported, real-time performance tends to degrade significantly as a result of many confounding factors, such as electrode shift, sweating, fatigue, and day-to-day variation. The main interest of the present thesis is, therefore, to improve the robustness of sEMG-based HGR by taking advantage of the most recent advanced deep learning techniques to address several practical concerns. Furthermore, the challenge of this research problem is reinforced by considering only raw sparse multichannel sEMG signals as input. Firstly, a framework for designing an uncertainty-aware sEMG-based hand gesture classifier is proposed. Applying it allows us to quickly build a model able to accompany its inferences with explainable, quantified, multidimensional uncertainties, which directly addresses the black-box concern of the HGR process. Secondly, to fill the gap left by the lack of consensus on the definition of model reliability in this field, a proper definition of model reliability is proposed. Based on it, reliability analysis can be performed as a new dimension of evaluation to help select the best model without relying only on classification accuracy. Our extensive experimental results show the efficiency of the proposed reliability analysis, which encourages researchers to use it as a supplementary tool for model evaluation. Next, an uncertainty-aware model is designed based on the proposed framework to address the low robustness of hand grasp recognition. This offers an opportunity to investigate whether reliable models can achieve robust performance. The results show that the proposed model can improve the long-term robustness of hand grasp recognition by rejecting highly uncertain predictions. Finally, a simple but effective normalisation approach is proposed to improve the robustness of inter-subject HGR, thus addressing the clinical challenge of having only a limited amount of data from any individual. The comparison results show that it outperforms a state-of-the-art (SoA) transfer learning method when only one training cycle is available.
In summary, this study presents promising methods for pursuing an accurate, robust, and reliable classifier, which is the overarching goal for sEMG-based HGR. The direction for future work would be the inclusion of these methods in real-time myoelectric control applications.
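
Rejecting highly uncertain predictions typically means computing an uncertainty score, such as the entropy of the predictive distribution, and withholding the decision when it exceeds a threshold. The sketch below demonstrates this pattern on hypothetical softmax outputs; the threshold value and data are illustrative, not results from the thesis.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of each predictive distribution (rows are softmax outputs)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def predict_with_rejection(probs, threshold=0.9):
    """Return the predicted class per sample, or -1 when the prediction is too uncertain."""
    preds = probs.argmax(axis=1)
    uncertain = predictive_entropy(probs) > threshold
    preds[uncertain] = -1
    return preds

# Hypothetical softmax outputs for 4 windows and 3 gesture classes.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],    # ambiguous: likely rejected
                  [0.70, 0.20, 0.10],
                  [0.34, 0.33, 0.33]])   # ambiguous: likely rejected
print(predict_with_rejection(probs))
```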

    Enhancing Upper Limb Prostheses Through Neuromorphic Sensory Feedback

    Upper limb prostheses are rapidly improving in terms of both control and sensory feedback, giving rise to lifelike robotic devices that aim to restore function to amputees. Recent progress in forward control has enabled prosthesis users to make complicated grip patterns with a prosthetic hand, and nerve stimulation has enabled sensations of touch in the missing hand of an amputee. A brief overview of the motivation behind the work in this thesis is given in Chapter 1, followed by a general overview of the field and state-of-the-art research (Chapter 2). Chapters 3 and 4 look at the use of closed-loop tactile feedback for improving prosthesis grasping functionality. This entails the development of two algorithms for improving object manipulation (Chapter 3) and the first real-time implementation of neuromorphic tactile signals being used as feedback to a prosthesis controller for improved grasping (Chapter 4). The second half of the thesis (Chapters 5 to 7) details how sensory information can be conveyed back to an amputee and how tactile sensations can be utilized to create a more lifelike prosthesis. Noninvasive electrical nerve stimulation was shown to provide sensations in multiple regions of the phantom hand of amputees both with and without targeted sensory reinnervation surgery (Chapter 5). A multilayered electronic dermis (e-dermis) was developed to mimic the behavior of receptors in the skin and provide, for the first time, sensations of both touch and pain back to an amputee and the prosthesis (Chapter 6). Finally, the first demonstration of sensory feedback as a key component of phantom hand movement for myoelectric pattern recognition shows that enhanced perceptions of the phantom hand can lead to improved prosthesis control (Chapter 7). This work provides the first demonstration of how amputees can perceive multiple tactile sensations through a neuromorphic stimulation paradigm. Furthermore, it describes the unique role that nerve stimulation and phantom hand activation play in the sensorimotor loop of upper limb amputees.
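
    Neuromorphic tactile feedback encodes continuous sensor readings (e.g. fingertip pressure from the e-dermis) as spike trains resembling the output of skin mechanoreceptors, which can then drive nerve stimulation. The sketch below uses a simple leaky integrate-and-fire neuron for that encoding; the neuron model and parameters are generic illustrative choices, not the specific spiking model used in the thesis.

```python
import numpy as np

def lif_encode(pressure, dt=1e-3, tau=0.02, threshold=1.0, gain=100.0):
    """Encode a pressure time series as spike times with a leaky integrate-and-fire neuron."""
    v, spikes = 0.0, []
    for i, p in enumerate(pressure):
        v += dt * (-v / tau + gain * p)  # leaky integration of the input current
        if v >= threshold:               # threshold crossing emits a spike
            spikes.append(i * dt)
            v = 0.0                      # reset the membrane potential
    return np.array(spikes)

# Hypothetical grasp: pressure ramps up, holds, then releases over 2 seconds.
t = np.arange(0, 2, 1e-3)
pressure = np.interp(t, [0, 0.5, 1.5, 2.0], [0, 1, 1, 0])
spike_times = lif_encode(pressure)
print(f"{spike_times.size} spikes; firing rate scales with the applied pressure")
```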

    Machine learning-based dexterous control of hand prostheses

    Upper-limb myoelectric prostheses are controlled by muscle activity information recorded on the skin surface using electromyography (EMG). Intuitive prosthetic control can be achieved by deploying statistical and machine learning (ML) tools to decipher the user's movement intent from EMG signals. This thesis proposes various means of advancing the capabilities of non-invasive, ML-based control of myoelectric hand prostheses. Two main directions are explored, namely classification-based hand grip selection and proportional finger position control using regression methods. Several practical aspects are considered with the aim of maximising the clinical impact of the proposed methodologies, which are evaluated with offline analyses as well as real-time experiments involving both able-bodied and transradial amputee participants. It has been generally accepted that the EMG signal may not always be a reliable source of control information for prostheses, mainly due to its stochastic and non-stationary properties. One particular issue associated with the use of surface EMG signals for upper-extremity myoelectric control is the limb position effect, which relates to the lack of decoding generalisation under novel arm postures. To address this challenge, it is proposed to make concurrent use of EMG sensors and inertial measurement units (IMUs). It is demonstrated that this can lead to a significant improvement in both classification accuracy (CA) and real-time prosthetic control performance. Additionally, the relationship between surface EMG and inertial measurements is investigated and it is found that these modalities are partially related, as they reflect different manifestations of the same underlying phenomenon, that is, muscular activity. In the field of upper-limb myoelectric control, the linear discriminant analysis (LDA) classifier has arguably been the most popular choice for movement intent decoding. This is mainly attributable to its ease of implementation, low computational requirements, and acceptable decoding performance. Nevertheless, this particular method makes a strong fundamental assumption, namely that data observations from different classes share a common covariance structure. Although this assumption may often be violated in practice, it has been found that the performance of the method is comparable to that of more sophisticated algorithms. In this thesis, it is proposed to remove this assumption by making use of general class-conditional Gaussian models and appropriate regularisation to avoid overfitting issues. By performing an exhaustive analysis on benchmark datasets, it is demonstrated that the proposed approach based on regularised discriminant analysis (RDA) can offer an impressive increase in decoding accuracy. By combining RDA classification with a novel confidence-based rejection policy that aims to minimise the rate of unintended hand motions, it is shown that it is feasible to attain robust myoelectric grip control of a prosthetic hand using a single pair of surface EMG-IMU sensors. Most present-day commercial prosthetic hands offer the mechanical ability to support individual digit control; however, classification-based methods can only produce pre-defined grip patterns, a feature which results in prosthesis under-actuation. Although classification-based grip control can provide a great advantage over conventional strategies, it is far from being intuitive and natural to the user.
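
    Regularised discriminant analysis replaces LDA's shared-covariance assumption with class-conditional Gaussians whose covariances are shrunk toward the pooled covariance (and optionally toward a scaled identity) to avoid overfitting. The NumPy sketch below illustrates that blending; the regularisation parameters, equal-prior assumption and data are made up for illustration and do not reproduce the thesis implementation.

```python
import numpy as np

class SimpleRDA:
    """Class-conditional Gaussian classifier with covariances shrunk toward the pooled estimate."""
    def __init__(self, lam=0.5, gamma=0.1):
        self.lam = lam      # blend between class covariance (0) and pooled covariance (1)
        self.gamma = gamma  # additional shrinkage toward a scaled identity matrix

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        covs = {c: np.cov(X[y == c], rowvar=False) for c in self.classes_}
        pooled = np.mean([covs[c] for c in self.classes_], axis=0)
        self.covs_ = {}
        for c in self.classes_:
            cov = (1 - self.lam) * covs[c] + self.lam * pooled
            cov = (1 - self.gamma) * cov + self.gamma * np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])
            self.covs_[c] = cov
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:  # Gaussian log-likelihood per class, assuming equal priors
            diff = X - self.means_[c]
            inv = np.linalg.inv(self.covs_[c])
            _, logdet = np.linalg.slogdet(self.covs_[c])
            scores.append(-0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff) - 0.5 * logdet)
        return self.classes_[np.argmax(scores, axis=0)]

# Hypothetical EMG-IMU feature windows for 3 grip classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10)) + np.repeat(np.arange(3)[:, None], 100, axis=0)
y = np.repeat(np.arange(3), 100)
clf = SimpleRDA().fit(X, y)
print("training accuracy:", np.mean(clf.predict(X) == y))
```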
A potential way of approaching the level of dexterity enjoyed by the human hand is via continuous and individual control of multiple joints. To this end, an exhaustive analysis is performed on the feasibility of reconstructing multidimensional hand joint angles from surface EMG signals. A supervised method based on the eigenvalue formulation of multiple linear regression (MLR) is then proposed to simultaneously reduce the dimensionality of the input and output variables, and its performance is compared to that of typically used unsupervised methods, which may produce suboptimal results in this context. An experimental paradigm is finally designed to evaluate the efficacy of the proposed finger position control scheme during real-time prosthesis use. This thesis provides insight into the capacity of deploying a range of computational methods for non-invasive myoelectric control. It contributes towards developing intuitive interfaces for dexterous control of multi-articulated prosthetic hands by transradial amputees.
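
Proportional finger position control is cast as a regression problem: a mapping from EMG (and possibly IMU) features to continuous joint angles. As a minimal hedged sketch of that idea, the snippet below fits a multi-output ridge regression with scikit-learn; the feature dimensions, number of joints and synthetic data are hypothetical, and the thesis's eigenvalue-based dimensionality reduction is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: 1000 windows of 32 EMG features mapped to 5 finger joint angles.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 32))
true_map = rng.standard_normal((32, 5))
Y = X @ true_map + 0.1 * rng.standard_normal((1000, 5))  # noisy linear ground truth

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Multi-output ridge regression: one linear model predicting all joint angles at once.
model = Ridge(alpha=1.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)
print("average R^2 across joints:", model.score(X_test, Y_test))
```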