2,516 research outputs found

    Deep Residual Shrinkage Networks for EMG-based Gesture Identification

    Full text link
This work introduces a method for high-accuracy EMG-based gesture identification. A recently developed deep learning method, the deep residual shrinkage network (DRSN), is applied to perform gesture identification. Based on the characteristics of the EMG signals produced by gestures, optimizations are made to improve identification accuracy. Finally, three other algorithms are compared against the DRSN on EMG signal recognition accuracy. The results show that the DRSN outperforms traditional neural networks in terms of EMG recognition accuracy. This paper provides a reliable way to classify EMG signals and explores possible applications of the DRSN.
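The defining operation of a residual shrinkage network is soft thresholding inside the residual branch, which suppresses noise-level activations. A minimal sketch of that step, with an illustrative fixed threshold (in the DRSN itself, the threshold is learned per channel by a small sub-network):

```python
def soft_threshold(x, tau):
    """Shrink |x| toward zero by tau; values inside [-tau, tau] become 0."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

def shrinkage_block(features, tau):
    """Residual shrinkage: identity path plus soft-thresholded features."""
    return [f + soft_threshold(f, tau) for f in features]
```

Small activations (likely noise in a raw EMG feature map) pass through only via the identity path, while strong activations are kept and reinforced.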

    Interpreting Deep Learning Features for Myoelectric Control: A Comparison with Handcrafted Features

    Get PDF
The research in myoelectric control systems primarily focuses on extracting discriminative representations from the electromyographic (EMG) signal by designing handcrafted features. Recently, deep learning techniques have been applied to the challenging task of EMG-based gesture recognition. The adoption of these techniques slowly shifts the focus from feature engineering to feature learning. However, the black-box nature of deep learning makes it hard to understand the type of information learned by the network and how it relates to handcrafted features. Additionally, due to the high variability in EMG recordings between participants, deep features tend to generalize poorly across subjects using standard training methods. Consequently, this work introduces a new multi-domain learning algorithm, named ADANN, which significantly enhances (p=0.00004) inter-subject classification accuracy by an average of 19.40% compared to standard training. Using ADANN-generated features, the main contribution of this work is to provide the first topological data analysis of EMG-based gesture recognition for the characterisation of the information encoded within a deep network, using handcrafted features as landmarks. This analysis reveals that handcrafted features and the learned features (in the earlier layers) both try to discriminate between all gestures, but do not encode the same information to do so. Furthermore, the use of convolutional network visualization techniques reveals that learned features tend to ignore the most activated channel during gesture contraction, which is in stark contrast with the prevalence of handcrafted features designed to capture amplitude information. Overall, this work paves the way for hybrid feature sets by providing a clear guideline of complementary information encoded within learned and handcrafted features. Comment: The first two authors shared first authorship. The last three authors shared senior authorship. 32 pages
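For context on the "handcrafted features" the abstract uses as landmarks, three classic time-domain EMG features (mean absolute value, waveform length, zero crossings) can be sketched as follows; the function names are illustrative, not taken from the paper:

```python
def mav(window):
    """Mean absolute value: the classic EMG amplitude feature."""
    return sum(abs(x) for x in window) / len(window)

def waveform_length(window):
    """Cumulative length of the waveform, a complexity/amplitude measure."""
    return sum(abs(window[i] - window[i - 1]) for i in range(1, len(window)))

def zero_crossings(window, eps=0.0):
    """Count sign changes whose jump exceeds eps (noise guard)."""
    return sum(1 for i in range(1, len(window))
               if window[i - 1] * window[i] < 0
               and abs(window[i] - window[i - 1]) > eps)
```

Note that the first two are amplitude-driven, which is exactly the kind of information the paper's visualizations suggest the learned features tend to ignore.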

    Intersected EMG heatmaps and deep learning based gesture recognition

    Get PDF
Hand gesture recognition in myoelectric-based prosthetic devices is a key challenge in offering effective solutions to hand/lower-arm amputees. A novel hand gesture recognition methodology is presented that employs the difference of EMG energy heatmaps as the input of a specifically designed deep learning neural network. Experimental results using data from real amputees indicate that the proposed design achieves an average accuracy of 94.31%, with a best accuracy of 98.96%. A comparison of experimental results between the proposed hand gesture recognition methodology and other similar approaches indicates the superior effectiveness of the new design.
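The network input described here is a difference of per-channel energy heatmaps. A minimal sketch of that preprocessing, under the assumption that each frame is a list of per-channel sample windows (the exact windowing and normalization in the paper may differ):

```python
def energy_heatmap(frames):
    """Mean signal energy per channel per frame.

    frames: list of frames; each frame is a list of channels;
    each channel is a list of raw EMG samples.
    Returns rows = frames, cols = channels.
    """
    return [[sum(s * s for s in ch) / len(ch) for ch in frame]
            for frame in frames]

def heatmap_difference(h1, h2):
    """Element-wise difference of two energy heatmaps (the network input)."""
    return [[a - b for a, b in zip(r1, r2)]
            for r1, r2 in zip(h1, h2)]
```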

    Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning

    Get PDF
In recent years, deep learning algorithms have become increasingly prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. However, within the field of electromyography-based gesture recognition, deep learning algorithms are seldom employed as they require an unreasonable amount of effort from a single person to generate tens of thousands of examples. This work's hypothesis is that general, informative features can be learned from the large amounts of data generated by aggregating the signals of multiple users, thus reducing the recording burden while enhancing gesture recognition. Consequently, this paper proposes applying transfer learning on aggregated data from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets. Two datasets comprised of 19 and 17 able-bodied participants respectively (the first one is employed for pre-training) were recorded for this work, using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and is comprised of 10 able-bodied participants. Three different deep learning networks employing three different modalities as input (raw EMG, spectrograms and the Continuous Wavelet Transform (CWT)) are tested on the second and third datasets. The proposed transfer learning scheme is shown to systematically and significantly enhance the performance of all three networks on the two datasets, achieving an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw EMG-based ConvNet. Finally, a use-case study employing eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy, which reduces the degradation in accuracy normally experienced over time. Comment: Source code and datasets available: https://github.com/Giguelingueling/MyoArmbandDatase
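One of the input modalities above is the continuous wavelet transform, which turns a 1-D EMG window into a 2-D time-scale image. A naive Morlet-wavelet sketch of one row of such an image (not the authors' implementation, which would use an optimized library transform):

```python
import cmath
import math

def morlet(t, w0=5.0):
    """Complex Morlet mother wavelet at time t."""
    return cmath.exp(1j * w0 * t) * math.exp(-t * t / 2.0)

def cwt_row(signal, scale):
    """|CWT| of `signal` at one scale (naive O(N^2) direct convolution)."""
    n = len(signal)
    row = []
    for b in range(n):  # translation index
        acc = sum(signal[t] * morlet((t - b) / scale).conjugate()
                  for t in range(n)) / math.sqrt(scale)
        row.append(abs(acc))
    return row
```

Stacking `cwt_row` over a range of scales yields the time-frequency image fed to the CWT-based ConvNet.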

EMG-BASED HAND GESTURE RECOGNITION USING DEEP LEARNING AND SIGNAL-TO-IMAGE CONVERSION TOOLS

    Get PDF
In this paper, deep learning-based hand gesture recognition using surface EMG signals is presented. We use principal component analysis (PCA) to reduce the dimensionality of the data set, and a threshold-based approach is also proposed to select the principal components (PCs). The continuous wavelet transform (CWT) is then carried out to prepare the time-frequency images used as the input of the classifier. A very deep convolutional neural network (CNN) is proposed as the gesture classifier. The classifier is trained in a 10-fold cross-validation framework, achieving an average recognition accuracy of 99.44%, a sensitivity of 97.78%, and a specificity of 99.68%.
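A common way to realize a threshold-based PC selection, sketched here as an assumption since the abstract does not specify the criterion, is to keep the smallest number of components whose cumulative explained variance reaches a threshold:

```python
def select_components(eigenvalues, threshold=0.95):
    """Smallest k such that the top-k eigenvalues explain at least
    `threshold` of the total variance (eigenvalues sorted descending)."""
    total = sum(eigenvalues)
    cumulative = 0.0
    for k, ev in enumerate(eigenvalues, start=1):
        cumulative += ev / total
        if cumulative >= threshold:
            return k
    return len(eigenvalues)
```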

    Distributionally Robust Semi-Supervised Learning for People-Centric Sensing

    Full text link
Semi-supervised learning is crucial for alleviating labelling burdens in people-centric sensing. However, human-generated data inherently suffer from distribution shift in semi-supervised learning due to the diverse biological conditions and behavior patterns of humans. To address this problem, we propose a generic distributionally robust model for semi-supervised learning on distributionally shifted data. Considering both the discrepancy and the consistency between the labeled data and the unlabeled data, we learn latent features that reduce person-specific discrepancy and preserve task-specific consistency. We evaluate our model on a variety of people-centric recognition tasks on real-world datasets, including intention recognition, activity recognition, muscular movement recognition and gesture recognition. The experimental results demonstrate that the proposed model outperforms the state-of-the-art methods. Comment: 8 pages, accepted by AAAI201
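The "person-specific discrepancy" the model reduces can be made concrete with a simple stand-in measure: the distance between the mean latent feature vectors of two groups (e.g. labeled vs. unlabeled users). This is only an illustrative proxy, not the paper's actual objective:

```python
def mean_discrepancy(feats_a, feats_b):
    """L2 distance between the mean feature vectors of two groups.

    feats_a, feats_b: lists of equal-length feature vectors.
    """
    mean_a = [sum(col) / len(feats_a) for col in zip(*feats_a)]
    mean_b = [sum(col) / len(feats_b) for col in zip(*feats_b)]
    return sum((a - b) ** 2 for a, b in zip(mean_a, mean_b)) ** 0.5
```

Driving such a discrepancy toward zero while keeping task labels separable is the intuition behind learning features that transfer across people.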