18 research outputs found

    EMG-based decoding of grasp gestures in reaching-to-grasping motions

    Predicting the grasping function during reach-to-grasp motions is essential for controlling a prosthetic hand or a robotic assistive device. An early, accurate prediction increases the usability and comfort of a prosthetic device. This work proposes an electromyography-based learning approach that decodes the grasping intention at an early stage of the reach-to-grasp motion, i.e. before the final grasp/hand pre-shape takes place. Surface electrodes and a Cyberglove were used to record arm muscle activity and finger joint angles during reach-to-grasp motions. Our results showed 90% accuracy for detection of the final grasp about 0.5 s after motion onset. This paper also examines the effect of different object distances and motion speeds on the detection time and accuracy of the classifier. The use of our learning approach to control a 16-degree-of-freedom robotic hand confirmed the usability of our approach for the real-time control of robotic devices.
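    For readers who want a concrete starting point, the sketch below shows one way such early decoding can be set up: windowed surface-EMG features fed to an off-the-shelf classifier. It is not the authors' implementation; the feature set, window length, classifier (scikit-learn's linear discriminant analysis), and the synthetic placeholder data are all assumptions.

```python
# Minimal sketch of early grasp-type decoding from windowed surface EMG.
# Not the authors' implementation: features, window length, classifier,
# and the placeholder data below are assumptions for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_features(emg, win=200, step=50):
    """Mean-absolute-value and root-mean-square per channel for sliding
    windows over a (samples, channels) EMG array."""
    feats = []
    for start in range(0, emg.shape[0] - win + 1, step):
        w = emg[start:start + win]
        feats.append(np.concatenate([np.mean(np.abs(w), axis=0),
                                     np.sqrt(np.mean(w ** 2, axis=0))]))
    return np.asarray(feats)

rng = np.random.default_rng(0)
# Placeholder data: 40 reach-to-grasp trials, 8 EMG channels, 3 grasp types.
emg_trials = [rng.standard_normal((1000, 8)) for _ in range(40)]
grasp_labels = rng.integers(0, 3, size=40)

feats = [window_features(t) for t in emg_trials]
X = np.vstack(feats)
y = np.concatenate([[g] * len(f) for f, g in zip(feats, grasp_labels)])

clf = LinearDiscriminantAnalysis().fit(X, y)
# At run time each incoming window is classified; an "early" decision is
# taken once a few consecutive windows agree on the same grasp class.
```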

    Learning Arm/Hand Coordination with an Altered Visual Input

    The focus of this study was to test a novel tool for the analysis of motor coordination under altered visual input. The altered visual input was created using special glasses that presented the view recorded by a video camera placed at various positions around the subject. The camera was positioned at a frontal (F), lateral (L), or top (T) position with respect to the subject. We studied the differences between the arm-end (wrist) trajectories while grasping an object under altered vision (F, L, and T conditions) and under normal vision (N) in ten subjects. The outcome measures of the analysis were the trajectory errors, the movement parameters, and the execution time. We found substantial trajectory errors and an increased execution time at the baseline of the study. We also found that after three days of practice with altered vision, in the F condition only and for 20 minutes per day, trajectory errors decreased in all conditions, suggesting that recalibration of the visual system occurred relatively quickly. These results indicate that this recalibration occurs via movement training in an altered condition. The results also suggest that recalibration is more difficult to achieve for altered vision in the F and L conditions than in the T condition. This study has direct implications for the design of new rehabilitation systems.
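    The abstract's outcome measures can be illustrated with a simple, hedged sketch: one plausible trajectory-error metric is the mean point-wise distance between arc-length-normalized wrist paths under altered and normal vision. The metric, resampling scheme, and variable names below are assumptions, not the paper's exact definitions.

```python
# Illustrative trajectory-error measure: mean point-wise distance between the
# wrist path under altered vision and the normal-vision reference path.
# This is an assumed metric, not the paper's exact outcome measure.
import numpy as np

def resample_path(path, n=100):
    """Resample a (samples, 3) wrist trajectory to n points by arc length."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(path, axis=0), axis=1))]
    s = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(s, d, path[:, k]) for k in range(path.shape[1])])

def trajectory_error(altered, normal, n=100):
    """Mean distance between arc-length-normalized trajectories."""
    a, b = resample_path(altered, n), resample_path(normal, n)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def execution_time(timestamps):
    """Movement duration from the trial's time stamps."""
    return float(timestamps[-1] - timestamps[0])

# Placeholder usage with synthetic paths of different lengths.
rng = np.random.default_rng(1)
print(trajectory_error(rng.random((120, 3)), rng.random((150, 3))))
```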

    Sensor Fusion for Closed-loop Control of Upper-limb Prostheses


    Implementation of Supervised Machine Learning on Embedded Raspberry Pi System to Recognize Hand Motion as Preliminary Study for Smart Prosthetic Hand

    EMG signals have random, non-linear, and non-stationary characteristics that require selecting suitable feature extraction and classification methods for prosthetic hands based on EMG pattern recognition. This research aims to implement EMG pattern recognition on an embedded Raspberry Pi system to recognize hand motion as a preliminary study for a smart prosthetic hand. The contribution of this research is that the time-domain feature extraction model and the classifier can be implemented on the Raspberry Pi embedded system. In addition, the machine learning training and evaluation process is carried out online on the Raspberry Pi. The online training process integrates the EMG data acquisition hardware, time-domain features, classifiers, and motor control on embedded machine learning using Python programming. This study involved ten respondents in good health. EMG signals were collected with two leads placed over the flexor carpi radialis and extensor digitorum muscles. The time-domain features (TDF) mean absolute value (MAV), root mean square (RMS), and variance (VAR) were extracted using a window length of 100 ms. The supervised machine learning classifiers decision tree (DT), support vector machine (SVM), and k-nearest neighbor (KNN) were chosen because they have simple algorithm structures and low computational cost. Finally, the TDF and classifier were embedded in the Raspberry Pi 3 Model B+ microcomputer. Experimental results show that the highest accuracy, 97.03%, is obtained for the open class. Furthermore, the additional datasets show a significant difference in accuracy (p-value < 0.05). Based on the evaluation results obtained, the embedded system can be implemented for prosthetic hands based on EMG pattern recognition.
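    The feature extraction and classifiers named in this abstract map naturally onto scikit-learn, so a hedged desktop sketch is given below. The 100 ms window and the MAV/RMS/VAR features follow the abstract; the sampling rate, windowing scheme, classifier hyperparameters, and placeholder data are assumptions, and the authors' embedded Raspberry Pi implementation is not reproduced.

```python
# Sketch of the time-domain features (MAV, RMS, VAR) and the three classifiers
# named in the abstract, using scikit-learn. Sampling rate, hyperparameters,
# and the synthetic data are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

FS = 1000                  # assumed sampling rate (Hz)
WIN = int(0.100 * FS)      # 100 ms window length, as in the abstract

def tdf(window):
    """MAV, RMS, and VAR per channel for one (samples, channels) window."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    var = np.var(window, axis=0)
    return np.concatenate([mav, rms, var])

def extract(emg, labels):
    """Slide a non-overlapping 100 ms window over a 2-channel recording."""
    X, y = [], []
    for start in range(0, emg.shape[0] - WIN + 1, WIN):
        X.append(tdf(emg[start:start + WIN]))
        y.append(labels[start])          # motion class active at window start
    return np.asarray(X), np.asarray(y)

rng = np.random.default_rng(0)
emg = rng.standard_normal((20000, 2))            # placeholder 2-lead recording
motion_labels = rng.integers(0, 4, size=20000)   # placeholder per-sample class

X, y = extract(emg, motion_labels)
for name, clf in [("DT", DecisionTreeClassifier()),
                  ("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```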

    Toward a Full Prehension Decoding from Dorsomedial Area V6A

    Neural prosthetics represent a promising approach to restoring movement in patients affected by spinal cord lesions. To drive a fully capable, brain-controlled prosthetic arm, the reaching and grasping components of prehension have to be accurately reconstructed from neural activity. Neurons in the dorsomedial area V6A of the macaque are sensitive to reaching direction, accounting also for the depth dimension, thus encoding positions in the entire 3D space. Moreover, many neurons are sensitive to grip type and wrist orientation. To assess whether these signals are adequate to drive a fully capable neural prosthetic arm, we recorded the spiking activity of neurons in area V6A and used spike counts to train machine learning algorithms to reconstruct reaching and grasping. In a first study, two Macaca fascicularis monkeys were trained to perform an instructed-delay reach-to-grasp task in the dark and in the light toward objects of different shapes. The activity of 89 neurons was used to train and validate a Bayes classifier for decoding objects and grip types. Recognition rates were well above chance level for all the epochs analyzed in this study. In a second study, monkeys were trained to perform reaches to targets located at various depths and directions, and the classifier was tested on whether it could correctly predict the reach goal position from V6A signals. The reach goal location was reliably decoded, with accuracy close to optimal (>90%) throughout the task. Together, these results show reliable decoding of hand grips and of the spatial location of reaching goals in the same area, suggesting that V6A is a suitable site for decoding the entire prehension action, with obvious advantages in terms of implant invasiveness. This new posterior parietal cortex (PPC) site, useful for decoding both reaching and grasping, opens new perspectives in the development of human brain-computer interfaces.
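    As a rough illustration of the decoding step described here, the sketch below trains a naive Bayes classifier on trial-wise spike counts and cross-validates grip-type predictions. The authors' exact Bayes formulation, epoch windows, and validation scheme are not reproduced; the data are synthetic placeholders with the 89-neuron dimensionality mentioned in the abstract.

```python
# Hedged sketch of grip-type decoding from trial-wise spike counts with a
# naive Bayes classifier. Synthetic placeholder data; not the authors' code.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spike_counts = rng.poisson(5.0, size=(200, 89))   # 200 trials x 89 neurons (placeholder)
grip_labels = rng.integers(0, 5, size=200)        # 5 hypothetical grip types

decoder = GaussianNB()
scores = cross_val_score(decoder, spike_counts, grip_labels, cv=10)
print("mean decoding accuracy:", scores.mean())   # chance level here is 1/5
```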

    End-to-End Learning of Speech 2D Feature-Trajectory for Prosthetic Hands

    Speech is one of the most common forms of communication in humans. Speech commands are an essential part of the multimodal control of prosthetic hands. In past decades, researchers used automatic speech recognition systems to control prosthetic hands with speech commands. Automatic speech recognition systems learn how to map human speech to text; natural language processing or a look-up table then maps the estimated text to a trajectory. However, the performance of conventional speech-controlled prosthetic hands is still unsatisfactory. Recent advancements in general-purpose graphics processing units (GPGPUs) enable intelligent devices to run deep neural networks in real time. Thus, architectures of intelligent systems have rapidly shifted from the paradigm of optimizing composite subsystems to the paradigm of end-to-end optimization. In this paper, we propose an end-to-end convolutional neural network (CNN) that maps speech 2D features directly to trajectories for prosthetic hands. The proposed convolutional neural network is lightweight, so it runs in real time on an embedded GPGPU. The proposed method can use any type of speech 2D feature that has local correlations in each dimension, such as a spectrogram, MFCC, or PNCC. We omit the speech-to-text step in controlling the prosthetic hand. The network is written in Python with the Keras library on a TensorFlow backend. We optimized the CNN for the NVIDIA Jetson TX2 developer kit. Our experiments with this CNN demonstrate a root-mean-square error of 0.119 and a 20 ms running time to produce trajectory outputs from the voice input data. To achieve a lower error in real time, a similar CNN can be optimized for a more powerful embedded GPGPU such as the NVIDIA AGX Xavier.
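    Since the abstract names the toolchain (Python, Keras, TensorFlow) and the reported metric (root-mean-square error), a minimal hedged sketch of a lightweight CNN regressing a 2D speech feature onto trajectory values is given below. The layer sizes, input patch shape, and output dimensionality are assumptions rather than the paper's architecture.

```python
# Minimal Keras sketch of a lightweight CNN mapping a 2D speech feature
# (e.g. an MFCC patch) to a hand-trajectory vector. Layer sizes, input shape,
# and output dimensionality are assumptions, not the paper's architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

N_FRAMES, N_COEFFS = 98, 40   # assumed MFCC patch size (time frames x coefficients)
TRAJ_DIM = 10                 # assumed number of trajectory outputs (e.g. joint set-points)

model = models.Sequential([
    layers.Input(shape=(N_FRAMES, N_COEFFS, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(TRAJ_DIM, activation="linear"),   # regression: trajectory values
])
model.compile(optimizer="adam",
              loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])  # RMSE, as reported
model.summary()
```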