77 research outputs found

    Gaze, visual, myoelectric, and inertial data of grasps for intelligent prosthetics

    A hand amputation is a highly disabling event, with severe physical and psychological repercussions on a person’s life. Despite extensive efforts devoted to restoring the missing functionality via dexterous myoelectric hand prostheses, natural and robust control usable in everyday life remains challenging. Novel techniques have been proposed to overcome the current limitations, among them the fusion of surface electromyography with other sources of contextual information. We present a dataset for investigating the inclusion of eye tracking and first-person video to provide more stable intent recognition for prosthetic control. This multimodal dataset contains surface electromyography and accelerometry of the forearm, together with gaze, first-person video, and inertial measurements of the head, recorded from 15 transradial amputees and 30 able-bodied subjects performing grasping tasks. Besides the intended application for upper-limb prosthetics, we also foresee uses of this dataset for studying eye-hand coordination in the context of psychophysics, neuroscience, and assistive robotics.
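
    As a rough illustration of how such multimodal streams might be aligned before intent recognition, the sketch below resamples a hypothetical sEMG stream and a gaze stream onto a common timeline. The sampling rates, channel counts, and function names are assumptions for illustration and do not reflect the dataset's actual file format.

```python
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical illustration: the dataset's actual file layout is not shown here.
# Assume sEMG sampled at ~1 kHz and gaze at ~100 Hz, each with its own timestamps.
def resample_to_common_clock(emg_t, emg, gaze_t, gaze, fs=100.0):
    """Linearly interpolate both streams onto a shared timeline at fs Hz."""
    t0, t1 = max(emg_t[0], gaze_t[0]), min(emg_t[-1], gaze_t[-1])
    t = np.arange(t0, t1, 1.0 / fs)
    emg_rs = interp1d(emg_t, emg, axis=0)(t)
    gaze_rs = interp1d(gaze_t, gaze, axis=0)(t)
    return t, emg_rs, gaze_rs

# Example with synthetic data (12 sEMG channels, 2-D gaze coordinates).
emg_t = np.arange(0, 10, 1e-3)
gaze_t = np.arange(0, 10, 1e-2)
emg = np.random.randn(emg_t.size, 12)
gaze = np.random.rand(gaze_t.size, 2)
t, emg_rs, gaze_rs = resample_to_common_clock(emg_t, emg, gaze_t, gaze)
```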

    Virtual sensor of surface electromyography in a new extensive fault-tolerant classification system

    Few prosthetic control systems in the scientific literature employ pattern recognition algorithms that adapt to the changes the myoelectric signal undergoes over time, and such systems are frequently neither natural nor intuitive. These are some of the main challenges facing myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measurements from other available ones, is already used in other fields of research. Applied to surface electromyography, the virtual sensor technique can help minimize these problems, which typically stem from degradation of the myoelectric signal and lead to a decrease in the accuracy with which intelligent computational systems classify movements. This paper presents a virtual sensor within a new extensive fault-tolerant classification system designed to maintain classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power-line interference, and saturation. Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to determine the most robust model for the virtual sensor. Movement classification results are reported comparing the usual classification techniques with the proposed method of replacing the degraded signal and retraining the classifier. The experiments covered the five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered 4% to 38% of mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. Across all contaminants and channel combinations evaluated, the best mean classification was obtained with the retraining method in which the degraded channel is replaced by the TVARMA virtual sensor; it recovered classification accuracy to an average of 5.7% below that of the clean (uncontaminated) signal. Moreover, the proposed intelligent technique minimizes the impact on motion classification of signal contamination caused by degrading events over time. The virtual sensor model and the algorithm optimization still require further development before broader clinical application of myoelectric prostheses, but the approach already yields robust results that enable research with virtual sensors on biological signals with stochastic behavior.
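
    The following sketch illustrates the virtual-sensor idea in its simplest form: a degraded sEMG channel is re-estimated from the remaining channels. The paper compares TVARMA and TVK models; here an ordinary least-squares regressor stands in for them, and the channel count, degradation type, and function names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Simplified stand-in for the virtual sensor: predict one sEMG channel from the
# others. The paper uses time-varying ARMA/Kalman models; plain linear
# regression is used here only to show the replacement mechanism.
def fit_virtual_sensor(clean_emg, target_ch):
    """Learn to predict one channel from all the others on clean data."""
    others = np.delete(clean_emg, target_ch, axis=1)
    return LinearRegression().fit(others, clean_emg[:, target_ch])

def replace_degraded_channel(emg, target_ch, model):
    """Overwrite the degraded channel with the virtual-sensor estimate."""
    others = np.delete(emg, target_ch, axis=1)
    emg = emg.copy()
    emg[:, target_ch] = model.predict(others)
    return emg

# Synthetic example: 16 channels, channel 3 saturates at test time.
rng = np.random.default_rng(0)
train = rng.standard_normal((5000, 16))
test = rng.standard_normal((1000, 16))
test[:, 3] = np.clip(test[:, 3] * 50, -1, 1)   # simulated saturation
vs = fit_virtual_sensor(train, target_ch=3)
test_repaired = replace_degraded_channel(test, 3, vs)
```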

    Subject-Independent Frameworks for Robotic Devices: Applying Robot Learning to EMG Signals

    The capability of having humans and robots cooperate has increased interest in the control of robotic devices by means of physiological human signals. In order to achieve this goal it is crucial to capture the human intention of movement and translate it into a coherent robot action. Up to now, the classical approach when considering physiological signals, and in particular EMG signals, has been to focus on the specific subject performing the task, given the great complexity of these signals. This thesis aims to expand the state of the art by proposing a general subject-independent framework able to extract the common constraints of human movement by looking at several demonstrations from many different subjects. The variability introduced in the system by multiple demonstrations from many different subjects allows the construction of a robust model of human movement, able to cope with small variations and signal deterioration. Furthermore, the obtained framework can be used by any subject with no need for long training sessions. The signals undergo an accurate preprocessing phase to remove noise and artefacts, after which significant information can be extracted for use in online processes. The human movement can be estimated using well-established statistical methods from Robot Programming by Demonstration; in particular, the input can be modelled with a Gaussian Mixture Model (GMM). The performed movement can be continuously estimated with Gaussian Mixture Regression (GMR), or it can be identified among a set of possible movements with a Gaussian Mixture Classification (GMC) approach. We improved the results by incorporating prior information into the model in order to enrich the knowledge of the system; in particular, we considered the hierarchical information provided by a quantitative taxonomy of hand grasps. To this end, we developed the first quantitative taxonomy of hand grasps that considers both muscular and kinematic information from 40 subjects. The results proved the feasibility of a subject-independent framework, even when considering physiological signals such as EMG from a large number of participants. The proposed solution has been used in two different kinds of applications: (I) the control of prosthetic devices, and (II) an Industry 4.0 facility in which humans and robots work alongside each other or cooperate. Indeed, a crucial aspect of making humans and robots work together is their mutual knowledge and anticipation of each other's tasks, and physiological signals can provide information even before a movement has started. This thesis also presents an application of Robot Programming by Demonstration in a real industrial facility, aimed at optimizing the production of electric motor coils. The task was part of the European Robotic Challenge (EuRoC), and the goal was divided into phases of increasing complexity. The solution exploits machine learning algorithms such as GMM, and its robustness was ensured by considering demonstrations of the task from many subjects. We were thus able to apply an advanced research topic in a real factory, achieving promising results.
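
    A minimal sketch of GMM-based movement classification (GMC) along these lines is shown below: one Gaussian mixture per movement class is fitted to EMG feature vectors pooled across subjects, and new samples are labelled by maximum log-likelihood. The thesis' preprocessing, feature extraction, GMR regression stage, and grasp-taxonomy prior are not reproduced; the class names, feature dimensionality, and mixture sizes are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# One GMM per movement class; classification by maximum log-likelihood.
def train_gmc(features_by_class, n_components=3):
    return {label: GaussianMixture(n_components, covariance_type="full",
                                   random_state=0).fit(X)
            for label, X in features_by_class.items()}

def classify(models, X):
    scores = np.column_stack([m.score_samples(X) for m in models.values()])
    labels = list(models.keys())
    return [labels[i] for i in scores.argmax(axis=1)]

# Synthetic example: two grasp classes, 8-dimensional EMG feature vectors
# pooled across (hypothetical) subjects.
rng = np.random.default_rng(1)
data = {"power": rng.normal(0.0, 1.0, (300, 8)),
        "pinch": rng.normal(2.0, 1.0, (300, 8))}
models = train_gmc(data)
print(classify(models, rng.normal(2.0, 1.0, (5, 8))))  # mostly "pinch"
```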

    Kernel density estimation of electromyographic signals and ensemble learning for highly accurate classification of a large set of hand/wrist motions

    The performance of myoelectric control depends strongly on the features extracted from surface electromyographic (sEMG) signals. We propose three new sEMG features based on kernel density estimation: the trimmed mean of density (TMD), the entropy of density, and the trimmed mean absolute value of the derivative of density, each computed per sEMG channel. These features were tested for the classification of single tasks as well as of two tasks performed concurrently. For single tasks, correlation-based feature selection was used, and the features were then classified using linear discriminant analysis (LDA), non-linear support vector machines, and a multi-layer perceptron. The eXtreme gradient boosting (XGBoost) classifier was used for the classification of two movements performed simultaneously. The second and third versions of the Ninapro dataset (conventional control) and Ameri’s movement dataset (simultaneous control) were used to test the proposed features. For the Ninapro dataset, the overall accuracy of LDA using the TMD feature was 98.99 ± 1.36% and 92.25 ± 9.48% for able-bodied and amputee subjects, respectively. Using ensemble learning of the three classifiers, the average macro and micro F-score, macro recall, and precision on the validation sets were 98.23 ± 2.02, 98.32 ± 1.93, 98.32 ± 1.93, and 98.88 ± 1.31%, respectively, for the intact subjects. The movement misclassification percentage was 1.75 ± 1.73% and 3.44 ± 2.23% for the intact subjects and amputees. The proposed features were significantly correlated with the movement classes [Generalized Linear Model (GLM); P-value < 0.05]. An accurate online implementation of the proposed algorithm is also presented. For simultaneous control, the overall accuracy was 99.71 ± 0.08% and 97.85 ± 0.10% for the XGBoost and LDA classifiers, respectively. The proposed features are thus promising for both conventional and simultaneous myoelectric control.
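
    The sketch below gives one plausible reading of a kernel-density feature of this kind: the trimmed mean of the estimated density values within a window, computed per sEMG channel and fed to an LDA classifier. The paper's exact feature definitions, window lengths, and channel counts may differ; the synthetic data and parameter values here are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde, trim_mean
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Approximate TMD-style feature: kernel density estimated per channel, evaluated
# at the window's own samples, then summarized by a trimmed mean.
def tmd_feature(window, trim=0.1):
    """window: (n_samples, n_channels) sEMG segment -> one feature per channel."""
    feats = []
    for ch in range(window.shape[1]):
        x = window[:, ch]
        density = gaussian_kde(x)(x)          # density value at each sample
        feats.append(trim_mean(density, trim))
    return np.array(feats)

# Synthetic example: 40 windows of 200 samples x 12 channels, two classes,
# classified with LDA as in the single-task experiments.
rng = np.random.default_rng(2)
X = np.array([tmd_feature(rng.normal(0, 1 + (i % 2), (200, 12))) for i in range(40)])
y = np.array([i % 2 for i in range(40)])
lda = LinearDiscriminantAnalysis().fit(X[:30], y[:30])
print(lda.score(X[30:], y[30:]))
```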

    Forked Recurrent Neural Network for Hand Gesture Classification Using Inertial Measurement Data

    For many applications of hand gesture recognition, a delay-free, affordable, and mobile system relying on body signals is mandatory. We therefore propose an approach to hand gesture classification from inertial measurement unit (IMU) signals that works with extremely short windows to avoid delays. With a simple recurrent neural network, the suitability of each IMU sensor modality (accelerometer, gyroscope, magnetometer) is evaluated by providing data of that modality alone. For the multi-modal data, a second network with mid-level fusion is proposed. Its forked architecture allows data of each modality to be processed individually before a joint analysis is carried out for classification. Experiments on three databases reveal that even when relying on a single modality our proposed system significantly outperforms state-of-the-art systems. With the forked network, classification accuracy can be improved further, by over 10% absolute compared to the best reported system, while causing only a fraction of the delay.
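
    A minimal sketch of the forked, mid-level-fusion idea is given below: each IMU modality is processed by its own recurrent branch, and the resulting hidden states are concatenated before a joint classification layer. The hidden size, number of gesture classes, window length, and use of GRU cells are assumptions for illustration rather than the paper's configuration.

```python
import torch
import torch.nn as nn

# Forked recurrent network with mid-level fusion: one GRU branch per IMU
# modality (accelerometer, gyroscope, magnetometer), hidden states concatenated
# before the joint classification layer. Sizes are placeholders.
class ForkedGRU(nn.Module):
    def __init__(self, n_classes=10, hidden=64):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
             for _ in range(3)])                       # acc, gyro, mag (3 axes each)
        self.head = nn.Linear(3 * hidden, n_classes)   # joint analysis after fusion

    def forward(self, acc, gyro, mag):
        feats = []
        for branch, x in zip(self.branches, (acc, gyro, mag)):
            _, h = branch(x)                           # h: (1, batch, hidden)
            feats.append(h.squeeze(0))
        return self.head(torch.cat(feats, dim=1))

# Example with a very short window (20 time steps) and a batch of 8.
model = ForkedGRU()
acc, gyro, mag = (torch.randn(8, 20, 3) for _ in range(3))
logits = model(acc, gyro, mag)                         # shape: (8, 10)
```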