
    EMG SIGNALS FOR FINGER MOVEMENT CLASSIFICATION BASED ON SHORT-TERM FOURIER TRANSFORM AND DEEP LEARNING

    An interface based on electromyographic (EMG) signals is considered one of the central fields in human-machine interface (HMI) research, with broad practical use. This paper presents the recognition of 13 individual finger movements based on a time-frequency representation of EMG signals via spectrograms. A deep learning algorithm, namely a convolutional neural network (CNN), is used to extract features and classify them. Two aspects of the EMG data representation are investigated: different window segmentation lengths and a reduction of the number of measured channels. The overall highest classification accuracy reaches 95.5% for a segment length of 300 ms, and the average accuracy remains above 90% when the number of channels is reduced from four to three.
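    As a rough illustration of the pipeline described above (spectrograms fed to a CNN), the sketch below computes per-channel STFT magnitudes for a 300 ms window and scores them with a small CNN. The sampling rate, STFT parameters, and layer sizes are assumptions for illustration, not the paper's settings.

```python
# A minimal sketch of the spectrogram-based pipeline described above.
# Assumptions (not from the paper): 1 kHz sampling, 4 EMG channels,
# 300 ms windows, and a small PyTorch CNN with illustrative layer sizes.
import numpy as np
from scipy.signal import stft
import torch
import torch.nn as nn

FS = 1000          # assumed sampling rate (Hz)
SEGMENT = 300      # 300 ms segment, the best-performing length reported
N_CHANNELS = 4
N_CLASSES = 13     # individual finger movements

def emg_to_spectrogram(segment):
    """segment: (N_CHANNELS, SEGMENT) raw EMG -> (N_CHANNELS, freq, time) magnitudes."""
    _, _, Z = stft(segment, fs=FS, nperseg=64, noverlap=48, axis=-1)
    return np.abs(Z).astype(np.float32)

class SpectrogramCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_CHANNELS, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, N_CLASSES)

    def forward(self, x):                 # x: (batch, channels, freq, time)
        return self.classifier(self.features(x).flatten(1))

# Example: score one random 300 ms segment.
seg = np.random.randn(N_CHANNELS, SEGMENT)
spec = torch.from_numpy(emg_to_spectrogram(seg)).unsqueeze(0)
logits = SpectrogramCNN()(spec)          # (1, 13) class scores
```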

    Computational Intelligence in Electromyography Analysis

    Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG may be used clinically for the diagnosis of neuromuscular problems and for assessing biomechanical and motor control deficits and other functional disorders. Furthermore, it can be used as a control signal for interfacing with orthotic and/or prosthetic devices or other rehabilitation aids. This book presents an updated overview of signal processing applications and recent developments in EMG from a number of diverse perspectives, covering various applications in clinical and experimental research. It provides readers with a detailed introduction to EMG signal processing techniques and applications, while presenting several new results and explanations of existing algorithms. The book is organized into 18 chapters, covering the current theoretical and practical approaches of EMG research.

    Deep learning-based artificial vision for grasp classification in myoelectric hands

    Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple yet efficient computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regard to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN was first tuned and tested offline and then in real time with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects, reflecting the generalisability of the grasp classification. We then implemented the proposed framework in real time on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a Motion Control™ prosthetic wrist, augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success rate of up to 88%. In addition, we show that with training, subjects' performance improved in terms of the time required to accomplish a block of 24 trials, despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep learning-based computer vision systems can considerably enhance the grip functionality of myoelectric hands.
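    The paper trains its own CNN on object images; the sketch below illustrates the same idea of mapping a single RGB view to one of the four grasp classes, using an off-the-shelf torchvision ResNet-18 as a stand-in backbone. The backbone choice and preprocessing are assumptions, not the authors' architecture.

```python
# A minimal sketch of grasp-class prediction from a single object image.
# The paper's own CNN is not reproduced here; as an illustrative stand-in
# (assumption), a torchvision ResNet-18 gets a 4-way head for the grasp classes.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

GRASPS = ["pinch", "tripod", "palmar wrist neutral", "palmar wrist pronated"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(GRASPS))  # 4 grasp classes
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def predict_grasp(image_path):
    """Map a webcam frame of an object to one of the four grasp patterns."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return GRASPS[int(logits.argmax(dim=1))]
```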

    Biceps brachii synergy and its contribution to target reaching tasks within a virtual cube

    In recent years, important work has been done in the development of prosthetic control to help upper-limb amputees improve their quality of life on a daily basis. Some modern, commercially available upper-limb myoelectric prostheses have many degrees of freedom and require many control signals to perform several tasks commonly used in everyday life. To obtain several control signals, many muscles are required, but for people with an upper-limb amputation, the number of available muscles is reduced to a degree that depends on the level of amputation. To increase the number of control signals, we focused on the biceps brachii, since it is anatomically composed of two heads and the presence of compartments has been observed on its internal face. Physiologically, it has been found that the motor units of the biceps are activated at different places in the muscle during the production of various functional tasks. In addition, it appears that the central nervous system can use muscle synergy to easily produce multiple movements. In this research, muscle synergy was first identified in the biceps of normal subjects, and it was shown that the characteristics of this synergy allowed the identification of the static posture of the hand at the time the biceps signals were recorded. In a second investigation, we demonstrated that it was possible, within a virtual cube presented on a screen, to control online the position of a sphere to reach various targets by using the muscle synergy of the biceps. Classification techniques were used to improve the classification of muscle synergy features, and these techniques can be integrated with a control algorithm that produces dynamic movement of myoelectric prostheses to facilitate the training of prosthetic control.
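    The abstract does not name the synergy extraction method; non-negative matrix factorization (NMF) is a common choice in muscle synergy analysis, so the sketch below uses it purely for illustration. The electrode count and envelope preprocessing are assumptions.

```python
# Illustrative muscle synergy extraction via NMF (an assumption: the thesis's
# exact method is not stated in the abstract). Rows of W weight each electrode,
# rows of H are synergy activations over time, with emg_envelopes ≈ W @ H.
import numpy as np
from sklearn.decomposition import NMF

def extract_synergies(emg_envelopes, n_synergies=2):
    """emg_envelopes: (n_channels, n_samples) rectified, low-pass-filtered EMG.
    Returns W (n_channels, n_synergies) and H (n_synergies, n_samples)."""
    model = NMF(n_components=n_synergies, init="nndsvd", max_iter=500)
    W = model.fit_transform(emg_envelopes)   # synergy weights per channel
    H = model.components_                    # synergy activation over time
    return W, H

# Example: two synergies from 8 electrodes over the biceps (placeholder data;
# np.abs keeps the input non-negative, as NMF requires).
envelopes = np.abs(np.random.randn(8, 2000))
W, H = extract_synergies(envelopes, n_synergies=2)
```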

    Machine Learning-Based Hand Gesture Recognition via EMG Data

    Electromyography (EMG) data give information about the electrical activity of muscles. EMG data obtained from the arm through sensors help in understanding hand gestures. In this work, hand gesture data taken from the UCI2019 EMG dataset, obtained from a Myo Thalmic armband, were classified with six different machine learning algorithms. Artificial Neural Network (ANN), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Naive Bayes (NB), Decision Tree (DT) and Random Forest (RF) methods were compared on several performance metrics: accuracy, precision, sensitivity, specificity, classification error, kappa, root mean squared error (RMSE) and correlation. The data belong to seven hand gestures; 700 samples from 7 classes (100 samples per group) were used in the experiments. The splitting ratio was 0.8-0.2, i.e. 80% of the samples were used for training and 20% for testing. NB was found to be the best of the methods, with the highest accuracy (96.43%) and sensitivity (96.43%) and the lowest RMSE (0.189). Considering the performance results, it can be said that this study recognizes and classifies the seven hand gestures successfully in comparison with the literature.
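    A minimal sketch of this comparison: the six classifiers evaluated on an 80/20 split with accuracy and Cohen's kappa. The feature matrix is a placeholder; feature extraction from the raw armband windows is assumed to have been done beforehand, and the scikit-learn defaults stand in for the study's actual hyperparameters.

```python
# Sketch of the six-classifier comparison on a stratified 80/20 split.
# X and y are placeholders for the extracted features and gesture labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X = np.random.randn(700, 32)          # placeholder: 700 samples, 32 features
y = np.repeat(np.arange(7), 100)      # 7 gesture classes, 100 samples each

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)

classifiers = {
    "ANN": MLPClassifier(max_iter=1000),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
}
for name, clf in classifiers.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f} "
          f"kappa={cohen_kappa_score(y_te, y_pred):.3f}")
```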

    Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey

    Deep Learning (DL) has recently been employed to build smart systems that perform incredibly well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. In several fields, considerable improvements in computing hardware and the increasing need for big-data analytics have boosted DL work. In recent years, physiological signal processing has strongly benefited from deep learning, and there has been an exponential increase in the number of studies concerning the processing of electromyographic (EMG) signals using DL methods. This phenomenon is mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g. Ninapro. This growing trend inspired us to seek out and review recent papers focusing on processing EMG signals using DL methods. Referring to the Scopus database, a systematic literature search of papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after a full-text analysis. The bibliometric research revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification, and Other Applications. The review process also confirmed the increasing trend in published papers: the number of papers published in 2018 is four times the number published the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, supporting our hypothesis. Finally, it is worth reporting that the convolutional neural network (CNN) is the most used topology among the DL architectures involved; approximately sixty percent of the reviewed articles use a CNN.

    Proceedings of the first workshop on Peripheral Machine Interfaces: going beyond traditional surface electromyography

    One of the hottest topics in rehabilitation robotics is the proper control of prosthetic devices. Despite decades of research, the state of the art is dramatically behind expectations. To shed light on this issue, in June 2013 the first international workshop on the Present and Future of Non-invasive Peripheral Nervous System (PNS)–Machine Interfaces (PMI) was convened, hosted by the International Conference on Rehabilitation Robotics. The keyword PMI was selected to denote human-machine interfaces targeted at the limb-deficient, mainly upper-limb amputees, dealing with signals gathered from the PNS in a non-invasive way, that is, from the surface of the residuum. The workshop was intended to provide an overview of the state of the art and future perspectives of such interfaces; this paper is a collection of the opinions expressed by each researcher/group involved.

    Recognition and Estimation of Human Finger Pointing with an RGB Camera for Robot Directive

    In communication between humans, gestures are often preferred over, or complementary to, verbal expression, since they offer better spatial reference. A finger-pointing gesture conveys vital information regarding a point of interest in the environment. In human-robot interaction, it lets a user easily direct a robot to a target location, for example in search and rescue or factory assistance. State-of-the-art approaches for visual pointing estimation often rely on depth cameras, are limited to indoor environments, and provide discrete predictions between a limited set of targets. In this paper, we explore the learning of models that let robots understand pointing directives in various indoor and outdoor environments based solely on a single RGB camera. A novel framework is proposed which includes a designated model termed PointingNet. PointingNet recognizes the occurrence of pointing and then approximates the position and direction of the index finger. The model relies on a novel segmentation model for masking any lifted arm. While state-of-the-art human pose estimation models provide a poor pointing angle estimation accuracy of 28°, PointingNet exhibits a mean accuracy of better than 2°. With the pointing information, the target is computed, followed by planning and motion of the robot. The framework is evaluated on two robotic systems, yielding accurate target reaching.
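    The final step, computing the target from the estimated finger position and direction, can be sketched as a ray-plane intersection. The flat ground plane and the variable names below are assumptions for illustration; the paper does not specify this geometry.

```python
# Given PointingNet's index-finger position and direction, find the pointed-at
# target by intersecting the pointing ray with a horizontal ground plane.
# (Assumed geometry, sketched for illustration only.)
import numpy as np

def pointing_target(finger_pos, finger_dir, ground_z=0.0):
    """Intersect the pointing ray with the plane z = ground_z.
    finger_pos, finger_dir: 3-vectors in the robot/world frame.
    Returns the target point, or None if the ray never reaches the plane."""
    p = np.asarray(finger_pos, dtype=float)
    d = np.asarray(finger_dir, dtype=float)
    if abs(d[2]) < 1e-9:                  # ray parallel to the ground
        return None
    t = (ground_z - p[2]) / d[2]
    if t <= 0:                            # plane lies behind the finger
        return None
    return p + t * d

# Example: finger at 1.4 m height pointing forward and downward.
target = pointing_target([0.0, 0.0, 1.4], [1.0, 0.0, -0.5])
print(target)                             # -> [2.8  0.  0.]
```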