223 research outputs found

    Transradial Amputee Gesture Classification Using an Optimal Number of sEMG Sensors: An Approach Using ICA Clustering

    Surface electromyography (sEMG)-based pattern recognition has been widely used to improve the classification accuracy of upper limb gestures. Information extracted from multiple sensors at the sEMG recording sites can be used as inputs to control powered upper limb prostheses. However, using multiple EMG sensors on a prosthetic hand is not practical: it makes operation difficult for amputees because of electrode shift/movement, and amputees often find wearing a sEMG sensor array uncomfortable. Using fewer sensors would greatly improve the controllability of prosthetic devices and add dexterity and flexibility to their operation. In this paper, we propose a novel myoelectric control technique for identifying various gestures using a minimum number of sensors, based on independent component analysis (ICA) and Icasso clustering. The proposed method is a model-based approach in which a combination of source separation and Icasso clustering is used to improve the classification of independent finger movements for transradial amputee subjects. Two sEMG sensor combinations were investigated based on muscle morphology and Icasso clustering, and compared to Sequential Forward Selection (SFS) and a greedy search algorithm. The proposed method was validated with five transradial amputees and achieved classification accuracy above 95%. The outcome of this study encourages extension of the proposed approach to real-time prosthetic applications.
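
    The abstract combines independent component analysis with Icasso-style clustering to find stable muscle sources from a handful of sEMG channels. Below is a minimal sketch of that idea, assuming synthetic amplitude-modulated sources, scikit-learn's FastICA, and SciPy hierarchical clustering; it illustrates the general technique, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
fs, n_ch, n_src = 1000, 8, 3                       # sampling rate (Hz), sensors, sources
t = np.arange(5 * fs) / fs
# Toy "muscle" sources: amplitude-modulated noise bursts at different rhythms.
sources = np.vstack([np.abs(np.sin(2 * np.pi * f * t)) * rng.standard_normal(t.size)
                     for f in (0.5, 1.0, 1.5)])
X = (rng.standard_normal((n_ch, n_src)) @ sources).T   # mixed sEMG, shape (samples, channels)

# Run FastICA several times with different initialisations and pool the sources.
runs = []
for seed in range(10):
    ica = FastICA(n_components=n_src, random_state=seed, max_iter=1000)
    runs.append(ica.fit_transform(X).T)            # each run: (n_src, samples)
pool = np.vstack(runs)

# Cluster the pooled components on absolute correlation; tight, repeatable
# clusters indicate stable independent components (the Icasso idea).
corr = np.abs(np.corrcoef(pool))
dist = 1.0 - corr
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
labels = fcluster(Z, t=n_src, criterion="maxclust")
print("components per cluster:", np.bincount(labels)[1:])
```

    Components that fall into tight, repeatable clusters across runs are the ones worth keeping, which is how the number of reliable sources, and hence sensors, can be kept small.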

    Shoulder muscle activation pattern recognition based on sEMG and machine learning algorithms

    BACKGROUND AND OBJECTIVE: Surface electromyography (sEMG) has been used in rehabilitation robotics for volitional control of hand prostheses and elbow exoskeletons; however, using sEMG for volitional control of an upper limb exoskeleton has not been fully developed. The long-term goal of our study is to process shoulder muscle bio-electrical signals for motion control of rehabilitative robotic assistive devices. The purposes of this study were: 1) to test the feasibility of machine learning algorithms for shoulder motion pattern recognition using sEMG signals from shoulder and upper limb muscles, and 2) to investigate the influence of motion speed, individual variability, EMG recording device, and the number of EMG datasets on shoulder motion pattern recognition accuracy. METHODS: A novel convolutional neural network (CNN) structure was constructed to process EMG signals from 12 muscles for the pattern recognition of upper arm motions including resting, drinking, backward-forward motion, and abduction. The accuracy of the CNN models in pattern recognition across motion speeds, individuals, and EMG recording devices was statistically analyzed using ANOVA, GLM univariate analysis, and Chi-square tests. The influence of the number of EMG datasets used for CNN model training on recognition accuracy was studied by gradually increasing the number of datasets until the highest accuracy was obtained. RESULTS: The accuracy of the normal-speed CNN model in motion pattern recognition was 97.57% for normal-speed motions and 97.07% for fast-speed motions. The accuracy of the cross-subject CNN model was 79.64%. The accuracy of the cross-device CNN model was 88.93% for normal-speed motion and 80.87% for mixed speed. There was a statistically significant difference in pattern recognition accuracy between the different CNN models. CONCLUSION: EMG signals of shoulder and upper arm muscles can be processed with CNN algorithms to recognize upper limb motions including drinking, forward/backward motion, abduction, and resting. A simple CNN model trained on EMG datasets of a designated motion speed accurately detected motion patterns at the same speed, yielding higher accuracy than mixed CNN models trained on various motion speeds. Increasing the number of EMG datasets used for CNN model training improved pattern recognition accuracy.
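
    A compact way to picture the pattern-recognition stage described above is a small 1-D convolutional network over windows of the 12 muscle channels. The sketch below uses an assumed 200-sample window and arbitrary layer sizes (the paper's exact CNN structure is not given here), mapping each window to the four motion classes named in the abstract.

```python
import torch
import torch.nn as nn

N_CHANNELS, WIN_LEN, N_CLASSES = 12, 200, 4        # 12 muscles, assumed 200-sample window

class ShoulderEMGCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (WIN_LEN // 4), 128), nn.ReLU(),
            nn.Linear(128, N_CLASSES),             # resting, drinking, backward-forward, abduction
        )

    def forward(self, x):                          # x: (batch, channels, samples)
        return self.classifier(self.features(x))

model = ShoulderEMGCNN()
dummy = torch.randn(8, N_CHANNELS, WIN_LEN)        # a batch of 8 random sEMG windows
print(model(dummy).shape)                          # torch.Size([8, 4])
```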

    A non-invasive human-machine interfacing framework for investigating dexterous control of hand muscles

    The recent rapid development of virtual reality and robotic assistive devices makes it possible to augment the capabilities of able-bodied individuals and to compensate for the missing motor functions of neurologically impaired or amputee individuals. To control these devices, movement intentions can be captured from biological structures involved in motor planning and execution, such as the central nervous system (CNS), the peripheral nervous system (in particular the spinal motor neurons), and the musculoskeletal system. Human-machine interfaces (HMI) thus enable the transfer of neural information from the neuromuscular system to machines. To avoid the risks associated with surgical operations or tissue damage in implementing these HMIs, a non-invasive approach is proposed in this thesis. In the last five decades, surface electromyography (sEMG) has been extensively explored as a non-invasive source of neural information. EMG signals consist of the mixed electrical activity of several recruited motor units, the fundamental components of muscle contraction. High-density sEMG (HD-sEMG), combined with blind source separation methods, has made it possible to identify the discharge patterns of many of these active motor units. From these decomposed discharge patterns, the net common synaptic input (CSI) to the corresponding spinal motor neurons has been quantified with cross-correlation in the time and frequency domains or with principal component analysis (PCA) on one or a few muscles. It has been hypothesised that this CSI results from the contribution of spinal descending commands sent by supra-spinal structures and afferences integrated by spinal interneurons. Another motor strategy involving the integration of descending commands at the spinal level concerns the coordination of many muscles to control a large number of articular joints. This neurophysiological mechanism has been investigated by measuring a single EMG amplitude per muscle, thus without HD-sEMG and decomposition. In this way, time-invariant patterns of muscle coordination, i.e. muscle synergies, were found in animals and humans from the EMG amplitudes of many muscles; these synergies are modulated by time-varying commands and combined to fulfil complex movements. In this thesis, for the first time, we present a non-invasive framework for human-machine interfaces based on both the spinal motor neuron recruitment strategy and synergistic muscle control, unifying the understanding of these two motor control strategies and producing control signals correlated with biomechanical quantities. This requires recording from many muscles and using HD-sEMG for each muscle. We investigated 14 muscles of the hand, 6 extrinsic and 8 intrinsic. The first two studies (Chapters 2 and 3) present the framework for CSI quantification by PCA and the extraction of the synergistic organisation of the spinal motor neurons innervating the 14 investigated muscles. For the latter analysis, in Chapter 3, we proposed the existence of what we named motor neuron synergies, extracted with non-negative matrix factorisation (NMF) from the identified motor neurons. In these first two studies, we considered 7 subjects and 7 grip types involving the four fingers in different combinations in opposition with the thumb.
In the first study, we found that the variance explained by the CSI among all motor neuron spike trains was (53.0 ± 10.9)% and its cross-correlation with force was 0.67 ± 0.10, remarkably high with respect to previous findings. In the second study, 4 motor neuron synergies were identified, each associated with the actuation of one finger in opposition with the thumb, with even higher correlations with force (over 0.8) between each synergy and the actuation of the corresponding finger. In Chapter 4, we extended the set of analysed movements to a vast repertoire of gestures and repeated the analysis of Chapter 3, finding a different synergistic organisation during the execution of tens of tasks. We divided the contribution between extrinsic and intrinsic muscles and found that intrinsic muscles better enable single-finger spatial discrimination, while no difference between the two muscle groups was found in the regression of joint angles. Finally, in Chapter 5 we applied the techniques of the previous chapters to cases of impairment due to amputation and stroke. We analysed pre- and post-rehabilitation sessions of a trans-humeral amputee, the case of a post-stroke trans-radial amputee, and three cases of acute stroke, i.e. less than one month from the stroke event. We present future perspectives (Chapter 6) aimed at designing and implementing a platform for both rehabilitation monitoring and myoelectric control. This thesis thus provides a bridge between two extensively studied motor control mechanisms, i.e. motor neuron recruitment and muscle synergies, and proposes the framework as suitable for rehabilitation monitoring and control of assistive devices.
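
    The motor neuron synergies mentioned above come from factorising the smoothed discharge rates of the identified motor neurons with NMF and relating each synergy activation to force. Below is a minimal sketch with synthetic firing rates and scikit-learn's NMF; the data, the smoothing, and the correlation measure are assumptions for illustration, not the thesis' pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_neurons, n_samples, n_syn = 60, 2000, 4
W_true = rng.random((n_neurons, n_syn))                       # neuron-to-synergy weights
H_true = np.abs(np.vstack([np.sin(np.linspace(0, (k + 2) * np.pi, n_samples))
                           for k in range(n_syn)]))           # time-varying activations
rates = W_true @ H_true + 0.05 * rng.random((n_neurons, n_samples))  # smoothed discharge rates

nmf = NMF(n_components=n_syn, init="nndsvda", max_iter=500)
W = nmf.fit_transform(rates)                                  # (neurons, synergies)
H = nmf.components_                                           # (synergies, time)

force = H_true.sum(axis=0)                                    # stand-in for a measured force signal
for k in range(n_syn):
    r = np.corrcoef(H[k], force)[0, 1]
    print(f"synergy {k}: correlation with force = {r:.2f}")
```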

    Current state of digital signal processing in myoelectric interfaces and related applications

    This review discusses the critical issues and recommended practices from the perspective of myoelectric interfaces. The major benefits and challenges of myoelectric interfaces are evaluated. The article aims to fill gaps left by previous reviews and identify avenues for future research. Recommendations are given, for example, for electrode placement, sampling rate, segmentation, and classifiers. Four groups of applications where myoelectric interfaces have been adopted are identified: assistive technology, rehabilitation technology, input devices, and silent speech interfaces. The state-of-the-art applications in each of these groups are presented.
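
    As a concrete illustration of two of the practices the review covers, segmentation and feature extraction ahead of classification, the sketch below windows a multichannel EMG stream and computes classic time-domain features. The window length, overlap, and thresholds are generic placeholder values, not the review's specific recommendations.

```python
import numpy as np

def segment(emg, win=200, step=50):
    """emg: (samples, channels) -> (n_windows, win, channels), overlapping windows."""
    idx = range(0, emg.shape[0] - win + 1, step)
    return np.stack([emg[i:i + win] for i in idx])

def td_features(windows, zc_thresh=0.01):
    """Mean absolute value, RMS, waveform length, and zero crossings per channel."""
    mav = np.mean(np.abs(windows), axis=1)
    rms = np.sqrt(np.mean(windows ** 2, axis=1))
    wl = np.sum(np.abs(np.diff(windows, axis=1)), axis=1)
    zc = np.sum((windows[:, 1:] * windows[:, :-1] < 0)
                & (np.abs(np.diff(windows, axis=1)) > zc_thresh), axis=1)
    return np.concatenate([mav, rms, wl, zc], axis=1)          # (n_windows, 4 * channels)

emg = np.random.randn(5000, 8)     # stand-in for an 8-channel recording at ~1 kHz
X = td_features(segment(emg))
print(X.shape)                     # (97, 32): ready for a classifier such as LDA
```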

    Neuromorphic decoding of spinal motor neuron behaviour during natural hand movements for a new generation of wearable neural interfaces

    We propose a neuromorphic framework to process the activity of human spinal motor neurons for movement intention recognition. This framework is integrated into a non-invasive interface that decodes the activity of motor neurons innervating intrinsic and extrinsic hand muscles. One of the main limitations of current neural interfaces is that machine learning models cannot exploit the efficiency of the spike encoding operated by the nervous system. Spiking-based pattern recognition would detect the spatio-temporal sparse activity of a neuronal pool and lead to adaptive and compact implementations, eventually running locally in embedded systems. Emerging spiking neural networks (SNNs) have not yet been used for processing the activity of in-vivo human neurons. Here we developed a convolutional SNN to process a total of 467 spinal motor neurons whose activity was identified in 5 participants executing 10 hand movements. The classification accuracy approached 0.95 ± 0.14 for both isometric and non-isometric contractions. These results show for the first time the potential of highly accurate motion intent detection obtained by combining non-invasive neural interfaces and SNNs.
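
    The pipeline described above starts from decomposed motor-neuron discharge times and feeds their sparse spatio-temporal activity to a spiking network. The sketch below illustrates the two basic ingredients with toy data: binning spike times into a binary neuron-by-time array and passing it through a hand-rolled leaky integrate-and-fire layer. The paper's convolutional SNN is more elaborate; the weights, time constants, and thresholds here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, duration_s, dt = 20, 2.0, 0.001
n_bins = int(duration_s / dt)

# Toy discharge times (s) for each decomposed motor neuron.
spike_times = [np.sort(rng.uniform(0, duration_s, rng.integers(10, 40)))
               for _ in range(n_neurons)]
spikes = np.zeros((n_neurons, n_bins))
for i, times in enumerate(spike_times):
    spikes[i, (times / dt).astype(int)] = 1.0      # binary neuron-by-time input

# One leaky integrate-and-fire layer: the membrane potential leaks, integrates
# weighted input spikes, and emits an output spike when it crosses threshold.
n_out, tau, v_th = 8, 0.02, 1.0
w = rng.normal(0.0, 0.5, (n_out, n_neurons))
v = np.zeros(n_out)
out_spikes = np.zeros((n_out, n_bins))
for t in range(n_bins):
    v = v * (1 - dt / tau) + w @ spikes[:, t]
    fired = v >= v_th
    out_spikes[fired, t] = 1.0
    v[fired] = 0.0                                 # reset after firing

print("output spike counts:", out_spikes.sum(axis=1))
```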

    Applications of Neural Networks in Classifying Trained and Novel Gestures Using Surface Electromyography

    Current prosthetic control systems that use pattern recognition, as explored in the literature, can perform only a limited number of pre-assigned functions, since they must be trained on muscle signals for every movement the user wants to perform. The goal of this study was to explore the development of a prosthetic control system that can classify both trained and novel gestures, for applications in commercial prosthetic arms. The first objective was to evaluate the feasibility of three different algorithms in classifying raw sEMG data for trained isometric gestures. The algorithms used were: a feedforward multi-layer perceptron (FFMLP), a stacked sparse autoencoder (SSAE), and a convolutional neural network (CNN). The second objective was to evaluate the algorithms' ability to classify novel isometric gestures that were not included in the training data set, and to determine the effect of different gesture combinations on classification accuracy. The third objective was to predict the binary (flexed/extended) digit positions without training the network on kinematic data from the participant's hand. A g-tec USB Biosignal Amplifier was used to collect data from eight differential sEMG channels from 10 able-bodied participants. These participants performed 14 gestures, including rest, involving a variety of discrete finger flexion/extension tasks. Forty seconds of data were collected for each gesture at 1200 Hz from eight bipolar sEMG channels. The 14 gestures were then organized into 20 unique gesture combinations, where each combination consisted of one sub-set of gestures used for training and another sub-set used as the novel gestures, reserved for testing the algorithms' predictive capabilities. Participants were asked to perform the gestures such that each digit was either fully flexed or fully extended to the best of their abilities. In this way, the digit positions for each gesture could be labelled with a value of zero or one according to their binary positions, so the algorithms could be provided with both input data (sEMG) and output labels without needing to record joint kinematics. The post-processing analysis of the outputs of each algorithm was conducted using two methods: all-or-nothing gesture classification (ANGC) and weighted digit gesture classification (WDGC). All 20 combinations were tested with the FFMLP, SSAE, and CNN in Matlab. For both analysis methods, the CNN outperformed the FFMLP and SSAE. Statistical analysis was not provided for the performance on novel gestures with the ANGC method, as the data were highly skewed and did not follow a normal distribution owing to the large number of zero-valued classification results for most novel gestures. The FFMLP and SSAE showed no significant difference from one another for the trained ANGC results, but the FFMLP showed statistically higher performance than the SSAE for trained and novel WDGC results. The results indicate that the CNN was able to classify most digits with reasonable accuracy, although performance varied between participants, and that for some participants this approach may be suitable for prosthetic control applications. The FFMLP and SSAE were largely unable to classify novel digit positions and obtained significantly lower accuracies than the CNN for novel gestures under both analysis methods.
Therefore, the FFMLP and SSAE algorithms do not appear suitable for prosthetic control applications using the proposed raw data input and output architecture.
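
    The key idea enabling novel-gesture classification above is labelling every gesture by the binary position of each digit, so the network predicts five independent digit states rather than a closed set of gesture classes. The sketch below shows that encoding and a plain multi-label model with a per-digit sigmoid/BCE objective; the gesture-to-digit mapping, feature dimensionality, and network are illustrative assumptions, not the thesis' FFMLP/SSAE/CNN.

```python
import torch
import torch.nn as nn

# Illustrative gesture-to-digit mapping (thumb, index, middle, ring, little; 1 = flexed).
gesture_digits = {
    "rest":      [0, 0, 0, 0, 0],
    "fist":      [1, 1, 1, 1, 1],
    "point":     [1, 0, 1, 1, 1],
    "thumbs_up": [0, 1, 1, 1, 1],
}

model = nn.Sequential(                 # 8 sEMG channels -> 5 per-digit logits
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 5),
)
loss_fn = nn.BCEWithLogitsLoss()       # each digit is an independent binary decision

x = torch.randn(16, 8)                 # a batch of 8-channel sEMG feature vectors
y = torch.tensor([gesture_digits["fist"]] * 16, dtype=torch.float32)
loss = loss_fn(model(x), y)
loss.backward()
print(float(loss))
```

    Because the output is a digit-state vector rather than a gesture label, a combination of digit states never seen during training can still be predicted, which is what allows novel gestures to be scored at all.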

    Electromyography Based Human-Robot Interfaces for the Control of Artificial Hands and Wearable Devices

    The design of robotic systems is currently turning to human-inspired solutions as a way to replicate human ability and flexibility in performing motor tasks. Especially for control and teleoperation purposes, the human-in-the-loop approach is a key element within the framework known as the Human-Robot Interface. This thesis reports the research activity carried out on the design of Human-Robot Interfaces based on the detection of human motion intentions from surface electromyography. The main goal was to investigate intuitive and natural control solutions for the teleoperation of robotic hands during grasping tasks and of wearable devices during elbow assistance. The design solutions are based on human motor control principles and the interpretation of surface electromyography, which are reviewed with emphasis on the concept of synergies. Electromyography-based control strategies for robotic hand grasping and wearable device assistance are also reviewed. The contribution of this research to the control of artificial hands relies on the integration of different levels of the synergistic organization of motor control, and on the combination of proportional control and machine learning approaches, guided by user-centred intuitiveness in the Human-Robot Interface design specifications. On the wearable device side, the thesis addresses the control of a novel upper limb assistive device based on the Twisted String Actuation concept. The contribution concerns assistance of the elbow during load-lifting tasks, exploring a simplified use of surface electromyography within the design of the Human-Robot Interface. The aim is to avoid the complex, subject-dependent calibrations required by joint torque estimation methods.
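
    The simplification mentioned above, avoiding subject-specific joint-torque calibration, is commonly realised with proportional myoelectric control: the rectified, low-pass-filtered sEMG envelope is mapped directly to an assistance command. The sketch below illustrates that general scheme for an agonist/antagonist pair with made-up gains and cut-off frequency; it is not the thesis' controller.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                     # assumed sampling rate (Hz)
b, a = butter(2, 3.0 / (fs / 2), btype="low")   # 3 Hz envelope filter (assumed cut-off)

def envelope(emg):
    """Rectify and low-pass filter a raw sEMG trace."""
    return filtfilt(b, a, np.abs(emg))

def elbow_command(biceps, triceps, gain=5.0, max_torque=10.0):
    """Map the flexor/extensor envelope difference to a bounded torque command (N·m)."""
    u = gain * (envelope(biceps) - envelope(triceps))
    return np.clip(u, -max_torque, max_torque)

t = np.arange(0, 2, 1 / fs)
biceps = np.random.randn(t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t))
triceps = np.random.randn(t.size) * 0.2
print(elbow_command(biceps, triceps)[:5])
```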

    Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey

    Deep learning (DL) has recently been employed to build smart systems that perform remarkably well in a wide range of tasks, such as image recognition, machine translation, and autonomous driving. In several fields, considerable improvements in computing hardware and the growing need for big data analytics have boosted DL work, and in recent years physiological signal processing has benefited strongly from it. In particular, there has been an exponential increase in the number of studies on processing electromyographic (EMG) signals with DL methods. This trend is mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g. Ninapro, and it has inspired us to seek out and review recent papers on processing EMG signals with DL methods. Using the Scopus database, a systematic literature search of papers published between January 2014 and March 2019 was carried out, and sixty-five papers were selected for review after full-text analysis. The bibliometric analysis revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification, and Other Applications. The review also confirmed the growing publication trend: the number of papers published in 2018 is four times that of the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, supporting our hypothesis. Finally, it is worth noting that the convolutional neural network (CNN) is the most used topology among the DL architectures involved; approximately sixty percent of the reviewed articles use a CNN.