104 research outputs found

    Decoding HD-EMG Signals for Myoelectric Control-How Small Can the Analysis Window Size be?


    Real-time visual and EMG signals recognition to control dexterous prosthetic hand based on deep learning and machine learning

    Recent advances in prosthetic hands have enabled a new generation of prostheses that incorporate artificial intelligence to control a dexterous hand. Producing suitable gripping and grasping actions for objects of different shapes remains a challenging task in prosthetic hand design, and most artificial hands are driven by electromyography (EMG) signals. This work proposes a novel approach that uses deep learning classification to sort items into seven gripping patterns based on EMG and image recognition, and it proceeds in two scenarios. In the first scenario, EMG signals are recorded from five healthy participants for six basic hand movements (cylindrical, tip, spherical, lateral, palmar, and hook); standard deviation, mean absolute value, and principal component analysis are then used to extract features from the EMG signals, and an SVM classifies the movements with an accuracy of 89%. In the second scenario, 723 RGB images of 24 items are collected and sorted into seven classes: cylindrical, tip, spherical, lateral, palmar, hook, and full hand. A GoogLeNet network with 144 layers, comprising convolutional layers, ReLU activation layers, max-pooling layers, drop-out layers, and a softmax layer, is trained on these images and reaches a training accuracy of 99%. Finally, the complete system is tested, and the experiments show that the proposed vision-based myoelectric control method (Vision-EMG) achieves a recognition accuracy of 95%.
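
    As a rough illustration of the EMG branch described above (time-domain features plus principal component analysis feeding an SVM), the sketch below shows one way such a pipeline could be assembled with scikit-learn. The window length, channel count, and all names are illustrative assumptions and not taken from the paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(emg_windows):
    """emg_windows: (n_windows, n_samples, n_channels) raw EMG."""
    mav = np.mean(np.abs(emg_windows), axis=1)  # mean absolute value per channel
    std = np.std(emg_windows, axis=1)           # standard deviation per channel
    return np.concatenate([mav, std], axis=1)   # (n_windows, 2 * n_channels)

# Hypothetical data: 500 windows of 200 samples from 4 EMG channels,
# each labelled with one of the six basic grasps.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((500, 200, 4))
y = rng.integers(0, 6, size=500)

# Features -> PCA -> SVM, mirroring the pipeline outlined in the abstract.
clf = make_pipeline(StandardScaler(), PCA(n_components=6), SVC(kernel="rbf"))
clf.fit(window_features(X_raw), y)
print("training accuracy:", clf.score(window_features(X_raw), y))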

    Shoulder muscle activation pattern recognition based on sEMG and machine learning algorithms

    BACKGROUND AND OBJECTIVE: Surface electromyography (sEMG) has been used in robotic rehabilitation engineering for volitional control of hand prostheses and elbow exoskeletons; however, using sEMG for volitional control of an upper-limb exoskeleton has not yet been fully developed. The long-term goal of our study is to process shoulder muscle bio-electrical signals for motion control of rehabilitative robotic assistive devices. The purposes of this study were: 1) to test the feasibility of machine learning algorithms for shoulder motion pattern recognition using sEMG signals from shoulder and upper-limb muscles, and 2) to investigate the influence of motion speed, individual variability, EMG recording device, and the number of EMG datasets on shoulder motion pattern recognition accuracy. METHODS: A novel convolutional neural network (CNN) structure was constructed to process EMG signals from 12 muscles for pattern recognition of upper-arm motions including resting, drinking, backward-forward motion, and abduction. The accuracy of the CNN models for pattern recognition under different motion speeds, among individuals, and across EMG recording devices was statistically analyzed using ANOVA, GLM univariate analysis, and Chi-square tests. The influence of the number of EMG datasets used for CNN model training on recognition accuracy was studied by gradually increasing the number of datasets until the highest accuracy was obtained. RESULTS: The accuracy of the normal-speed CNN model in motion pattern recognition was 97.57% for normal-speed motions and 97.07% for fast-speed motions. The accuracy of the cross-subject CNN model was 79.64%. The accuracy of the cross-device CNN model was 88.93% for normal-speed motion and 80.87% for mixed speed. There were statistically significant differences in pattern recognition accuracy between the CNN models. CONCLUSION: EMG signals of the shoulder and upper-arm muscles can be processed with CNN algorithms to recognize upper-limb motions including drinking, forward/backward motion, abduction, and resting. A simple CNN model trained on EMG datasets of a designated motion speed accurately detected motion patterns at that same speed, yielding the highest accuracy compared with mixed CNN models for various motion speeds. Increasing the number of EMG datasets used for CNN model training improved pattern recognition accuracy.
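
    The abstract does not detail the CNN layout, so the following is only a minimal sketch of a 1-D CNN that maps 12-channel sEMG windows to the four motion classes (resting, drinking, backward-forward, abduction); the layer sizes, window length, and names are assumptions.

import torch
import torch.nn as nn

class ShoulderEMGCNN(nn.Module):
    def __init__(self, n_channels=12, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)  # resting, drinking, fwd/bwd, abduction

    def forward(self, x):
        # x: (batch, 12 channels, time samples)
        return self.classifier(self.features(x).squeeze(-1))

# Hypothetical batch: 8 windows of 400 samples from 12 muscles.
logits = ShoulderEMGCNN()(torch.randn(8, 12, 400))
print(logits.shape)  # torch.Size([8, 4])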

    A Prosthetic Limb Managed by Sensors-Based Electronic System: Experimental Results on Amputees

    Taking advantage of smart, high-performance electronic devices, a transradial prosthesis for upper-limb amputees was developed and tested. It is equipped with sensing devices and actuators that enable hand movements; myoelectric signals are detected by a Myo armband with 8 electromyographic (EMG) electrodes, a 9-axis Inertial Measurement Unit (IMU), and a Bluetooth Low Energy (BLE) module. All data are received through an HM-11 BLE transceiver by an Arduino board, which processes them and drives the actuators. A Raspberry Pi board controls a touchscreen display, providing the user with feedback on prosthesis operation, and sends the EMG and IMU data gathered by the armband to a cloud platform, allowing the orthopedist to monitor the user's improvements in real time during the rehabilitation period. A GUI application integrating a machine learning algorithm was implemented to recognize flexion, extension, and rest gestures of the user's fingers. The algorithm's performance was tested on 9 male subjects (8 able-bodied and 1 subject affected by upper-limb amelia), demonstrating high accuracy and fast response.
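
    As a sketch of the gesture-recognition step only (flexion, extension, rest from 8-channel armband EMG), the code below trains a simple classifier on per-channel mean absolute value features; the BLE transport, Arduino, and Raspberry Pi parts are out of scope. The classifier choice, window size, and names are assumptions, not the system's actual implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

GESTURES = ["flexion", "extension", "rest"]
WINDOW = 40       # samples per decision window (the Myo streams EMG at 200 Hz)
N_CHANNELS = 8

def features(window):
    """Mean absolute value per channel for one (WINDOW, 8) block."""
    return np.mean(np.abs(window), axis=0)

# Hypothetical pre-recorded training windows and labels.
rng = np.random.default_rng(1)
train_windows = rng.standard_normal((300, WINDOW, N_CHANNELS))
train_labels = rng.integers(0, len(GESTURES), size=300)
clf = LogisticRegression(max_iter=1000).fit(
    np.array([features(w) for w in train_windows]), train_labels)

# At run time, each incoming window from the armband is classified the same way.
incoming = rng.standard_normal((WINDOW, N_CHANNELS))
print("predicted gesture:", GESTURES[clf.predict(features(incoming)[None, :])[0]])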

    A temporal-to-spatial neural network for classification of hand movements from electromyography data

    Deep convolutional neural networks (CNNs) are appealing for the classification of hand movements from surface electromyography (sEMG) data because they can perform automated, person-specific feature extraction from raw data. In this paper, we make the novel contribution of proposing and evaluating a design for the early processing layers of a deep CNN for multichannel sEMG. Specifically, we propose a novel temporal-to-spatial (TtS) CNN architecture, where the first layer performs convolution separately on each sEMG channel to extract temporal features. This is motivated by the idea that sEMG signals in each channel are mediated by one or a small subset of muscles, whose temporal activation patterns are associated with the signature features of a gesture. The temporal layer captures these signature features for each channel separately, which are then spatially mixed in successive layers to recognise a specific gesture. A practical advantage is that this approach also makes the CNN simple to design for different sample rates. We use NinaPro database 1 (27 subjects and 52 movements + rest), sampled at 100 Hz, and database 2 (40 subjects and 40 movements + rest), sampled at 2 kHz, to evaluate our proposed CNN design. We benchmark against a feature-based support vector machine (SVM) classifier, two CNNs from the literature, and an additional standard CNN design. We find that our novel TtS CNN design achieves 66.6% per-class accuracy on database 1 and 67.8% on database 2, and that the TtS CNN outperforms all other compared classifiers under a statistical hypothesis test at the 2% significance level.
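
    A minimal sketch of the temporal-to-spatial idea is given below: the first layer convolves each sEMG channel on its own (temporal features) and a later 1x1 convolution mixes information across channels (spatial features). Filter counts, kernel sizes, and names are illustrative assumptions, not the exact design evaluated in the paper.

import torch
import torch.nn as nn

class TemporalToSpatialCNN(nn.Module):
    def __init__(self, n_channels=10, n_classes=53, filters_per_channel=4):
        super().__init__()
        f = n_channels * filters_per_channel
        # groups=n_channels keeps each channel's temporal convolution separate.
        self.temporal = nn.Conv1d(n_channels, f, kernel_size=9, padding=4,
                                  groups=n_channels)
        # A 1x1 convolution then mixes the per-channel temporal features spatially.
        self.spatial = nn.Conv1d(f, 64, kernel_size=1)
        self.head = nn.Sequential(nn.ReLU(), nn.AdaptiveAvgPool1d(1),
                                  nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):
        # x: (batch, channels, time)
        return self.head(self.spatial(torch.relu(self.temporal(x))))

# Hypothetical window shaped like NinaPro DB1 data: 10 electrodes, 100 samples (1 s at 100 Hz).
print(TemporalToSpatialCNN()(torch.randn(4, 10, 100)).shape)  # torch.Size([4, 53])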

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of the relevant motor neurons and, by extension, of the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control is apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding, together with possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce the training burden.

    Deep learning and feature engineering techniques applied to the myoelectric signal for accurate prediction of movements

    Pattern recognition techniques for the myoelectric (EMG) signal are employed in the development of robotic prostheses and draw on several artificial intelligence (AI) approaches. This thesis addresses the EMG pattern recognition problem by adopting deep learning techniques in an optimized way. The research developed an approach that performs feature extraction a priori in order to feed classifiers that supposedly do not require this step. The study integrated the BioPatRec platform (for advanced prosthesis research and development) with two classification algorithms (Convolutional Neural Network and Long Short-Term Memory) in a hybrid way, where the input supplied to the network already contains features that describe the movement (level of muscle activation, magnitude, amplitude, power, and others). The signal is thus treated as a time series rather than an image, which makes it possible to discard points that are irrelevant to the classifier while keeping the information expressive. The methodology then produced software that implements this concept on a Graphics Processing Unit (GPU) in parallel; this improvement allowed the classification model to combine high accuracy with a training time of under 1 second. The parallelized model, named BioPatRec-Py, employed feature engineering techniques that made the network input more homogeneous, reducing variability and noise and standardizing the distribution. The research obtained satisfactory results and outperformed the other classification algorithms in most of the evaluated experiments. The work also performed a statistical analysis of the results and fine-tuned the hyperparameters of each network. Ultimately, BioPatRec-Py provided a generic model: the network was trained across individuals, enabling a global approach with an average accuracy of 97.83%.
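
    A minimal sketch of the hybrid idea follows: hand-crafted features are computed first and the resulting feature sequence, rather than the raw signal or an image, is fed to an LSTM classifier. Window sizes, feature choices, and names are assumptions; this is not the BioPatRec-Py implementation itself.

import torch
import torch.nn as nn

def feature_sequence(emg, sub_window=50):
    """emg: (batch, time, channels) -> (batch, n_subwindows, 2 * channels)."""
    b, t, c = emg.shape
    blocks = emg[:, : (t // sub_window) * sub_window].reshape(b, -1, sub_window, c)
    mav = blocks.abs().mean(dim=2)          # mean absolute value per sub-window
    rms = blocks.pow(2).mean(dim=2).sqrt()  # power-like feature per sub-window
    return torch.cat([mav, rms], dim=2)

class FeatureLSTM(nn.Module):
    def __init__(self, n_channels=8, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(2 * n_channels, 64, batch_first=True)
        self.out = nn.Linear(64, n_classes)

    def forward(self, emg):
        _, (h, _) = self.lstm(feature_sequence(emg))
        return self.out(h[-1])              # classify from the last hidden state

# Hypothetical batch: 4 recordings of 400 samples from 8 channels.
print(FeatureLSTM()(torch.randn(4, 400, 8)).shape)  # torch.Size([4, 10])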