
    Predicting Continuous Locomotion Modes via Multidimensional Feature Learning from sEMG

    Walking-assistive devices require adaptive control methods to ensure smooth transitions between various modes of locomotion. For this purpose, detecting human locomotion modes (e.g., level walking or stair ascent) in advance is crucial for improving the intelligence and transparency of such robotic systems. This study proposes Deep-STF, a unified end-to-end deep learning model designed for integrated feature extraction in the spatial, temporal, and frequency dimensions from surface electromyography (sEMG) signals. The model enables accurate and robust continuous prediction of nine locomotion modes and 15 transitions at prediction time intervals ranging from 100 to 500 ms. In addition, the concept of 'stable prediction time' is introduced as a distinct metric to quantify prediction efficiency: the duration during which consistent and accurate predictions of mode transitions are made, measured from the time of the fifth correct prediction to the occurrence of the critical event leading to the task transition. This distinction between stable prediction time and prediction time is important because it underscores the focus on the precision and reliability of mode transition predictions. Experimental results demonstrate Deep-STF's state-of-the-art prediction performance across diverse locomotion modes and transitions, relying solely on sEMG data. When forecasting 100 ms ahead, Deep-STF surpassed CNN and other machine learning techniques, achieving an average prediction accuracy of 96.48%. Even with an extended 500 ms prediction horizon, accuracy decreased only marginally, to 93.00%. The average stable prediction times for detecting upcoming transitions ranged from 28.15 to 372.21 ms across the 100-500 ms prediction horizons. Comment: 10 pages, 7 figures.
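
    A minimal PyTorch sketch of the kind of spatial-temporal-frequency feature fusion described above is given below; the branch layout, layer sizes, 12-channel/200-sample window, and 24 output classes (nine modes plus 15 transitions) are illustrative assumptions, not the authors' exact architecture.

        import torch
        import torch.nn as nn

        class STFNet(nn.Module):
            """Illustrative spatial-temporal-frequency fusion for sEMG windows."""
            def __init__(self, n_channels=12, n_classes=24):
                super().__init__()
                # Temporal branch: 1-D convolution along the time axis.
                self.temporal = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
                    nn.ReLU(), nn.AdaptiveAvgPool1d(1))
                # Spatial branch: mixes information across electrodes.
                self.spatial = nn.Sequential(
                    nn.Conv2d(1, 32, kernel_size=(n_channels, 1)),
                    nn.ReLU(), nn.AdaptiveAvgPool2d(1))
                # Frequency branch: 2-D convolution over per-channel STFT magnitudes.
                self.frequency = nn.Sequential(
                    nn.Conv2d(n_channels, 32, kernel_size=3, padding=1),
                    nn.ReLU(), nn.AdaptiveAvgPool2d(1))
                self.classifier = nn.Linear(32 * 3, n_classes)

            def forward(self, x):                      # x: (batch, channels, samples)
                t = self.temporal(x).flatten(1)
                s = self.spatial(x.unsqueeze(1)).flatten(1)
                spec = torch.stft(x.flatten(0, 1), n_fft=64, hop_length=16,
                                  return_complex=True).abs()
                spec = spec.view(x.size(0), x.size(1), *spec.shape[-2:])
                f = self.frequency(spec).flatten(1)
                return self.classifier(torch.cat([t, s, f], dim=1))

        model = STFNet()
        logits = model(torch.randn(8, 12, 200))        # 8 windows of 12-channel sEMG
        print(logits.shape)                            # torch.Size([8, 24])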

    The Effect of Space-filling Curves on the Efficiency of Hand Gesture Recognition Based on sEMG Signals

    Over the past few years, deep learning (DL) has revolutionized the field of data analysis. Not only have the algorithmic paradigms changed, but the performance in various classification and prediction tasks has also improved significantly with respect to the state of the art, especially in the area of computer vision. The progress made in computer vision has spilled over into many other domains, such as biomedical engineering. Some recent works address surface electromyography (sEMG)-based hand gesture recognition, often framed as an image classification problem and solved using tools such as convolutional neural networks (CNNs). This paper extends our previous work on the application of the Hilbert space-filling curve for the generation of image representations from multi-electrode sEMG signals by investigating how the Hilbert curve compares to the Peano and Z-order space-filling curves. The proposed space-filling mapping methods are evaluated on a variety of network architectures and in some cases yield a classification improvement of at least 3% when used to structure the inputs before feeding them into the original network architectures.
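
    As a rough illustration of the space-filling idea, the sketch below lays a single-channel sEMG window onto a 2-D image by following a Hilbert curve, so that samples adjacent in time remain adjacent in the image; the order-4 curve and 256-sample window are assumptions for the example, and the paper's actual preprocessing may differ.

        import numpy as np

        def hilbert_d2xy(order, d):
            """Convert distance d along a Hilbert curve of the given order to (x, y)."""
            x = y = 0
            t = d
            s = 1
            while s < 2 ** order:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                 # rotate the quadrant when needed
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y

        def window_to_image(window, order=4):
            """Lay a 1-D window of length 4**order onto a 2**order x 2**order image."""
            side = 2 ** order
            image = np.zeros((side, side), dtype=window.dtype)
            for d, value in enumerate(window):
                x, y = hilbert_d2xy(order, d)
                image[y, x] = value
            return image

        window = np.random.randn(256).astype(np.float32)   # one sEMG channel, 256 samples
        print(window_to_image(window).shape)               # (16, 16)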

    Multikernel convolutional neural network for sEMG based hand gesture classification

    Hand gesture recognition is a widely discussed topic in the literature, where different techniques are analyzed in terms of both input signal types and algorithms. Among the most widely used are surface electromyographic (sEMG) signals, which are already widely exploited in human-machine interaction (HMI) applications. Determining how to decode the information contained in EMG signals robustly and accurately is a key open problem. Recently, many EMG pattern recognition tasks have been addressed using deep learning methods. Despite their high performance, their generalization capabilities are often limited by the high heterogeneity among subjects, skin impedance, sensor placement, etc. In addition, because this project is focused on the real-time application of prostheses, there are stricter constraints on system response times, which limit model complexity. In this thesis, a multi-kernel convolutional neural network was tested on several public datasets to verify its generalizability. In addition, the model's ability to overcome inter-subject and inter-session variability across different days, while preserving the constraints associated with an embedded system, was analyzed. The results confirm the difficulties encountered in extracting information from sEMG signals; however, they demonstrate the possibility of achieving good performance for robust control of prosthetic hands. Furthermore, better performance can be achieved by personalizing the model with transfer learning and domain adaptation techniques.
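
    A minimal sketch of the multi-kernel idea, i.e. parallel convolutional branches with different temporal kernel sizes whose pooled outputs are concatenated before classification; the kernel sizes, channel counts, and input shape below are illustrative assumptions, not the thesis' exact model.

        import torch
        import torch.nn as nn

        class MultiKernelCNN(nn.Module):
            """Parallel temporal convolutions with different kernel sizes."""
            def __init__(self, n_channels=8, n_classes=12, kernel_sizes=(3, 7, 15)):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Sequential(
                        nn.Conv1d(n_channels, 16, kernel_size=k, padding=k // 2),
                        nn.BatchNorm1d(16),
                        nn.ReLU(),
                        nn.AdaptiveAvgPool1d(1))
                    for k in kernel_sizes])
                self.classifier = nn.Linear(16 * len(kernel_sizes), n_classes)

            def forward(self, x):                       # x: (batch, channels, samples)
                feats = [branch(x).flatten(1) for branch in self.branches]
                return self.classifier(torch.cat(feats, dim=1))

        model = MultiKernelCNN()
        print(model(torch.randn(4, 8, 400)).shape)      # torch.Size([4, 12])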

    Interpreting Deep Learning Features for Myoelectric Control: A Comparison with Handcrafted Features

    Research on myoelectric control systems has primarily focused on extracting discriminative representations from the electromyographic (EMG) signal by designing handcrafted features. Recently, deep learning techniques have been applied to the challenging task of EMG-based gesture recognition. The adoption of these techniques slowly shifts the focus from feature engineering to feature learning. However, the black-box nature of deep learning makes it hard to understand what type of information the network learns and how it relates to handcrafted features. Additionally, due to the high variability in EMG recordings between participants, deep features tend to generalize poorly across subjects when standard training methods are used. Consequently, this work introduces a new multi-domain learning algorithm, named ADANN, which significantly enhances (p=0.00004) inter-subject classification accuracy by an average of 19.40% compared to standard training. Using ADANN-generated features, the main contribution of this work is to provide the first topological data analysis of EMG-based gesture recognition for the characterisation of the information encoded within a deep network, using handcrafted features as landmarks. This analysis reveals that handcrafted features and the learned features (in the earlier layers) both try to discriminate between all gestures, but do not encode the same information to do so. Furthermore, convolutional network visualization techniques reveal that learned features tend to ignore the most activated channel during gesture contraction, which stands in stark contrast with the prevalence of handcrafted features designed to capture amplitude information. Overall, this work paves the way for hybrid feature sets by providing a clear guideline on the complementary information encoded within learned and handcrafted features. Comment: The first two authors shared first authorship. The last three authors shared senior authorship. 32 pages.
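
    For context, the handcrafted features used as landmarks in such comparisons are typically the classic time-domain set (mean absolute value, zero crossings, slope sign changes, waveform length). A minimal sketch for one sEMG channel follows; the amplitude threshold is an assumed value.

        import numpy as np

        def time_domain_features(window, threshold=0.01):
            """Classic time-domain features for one sEMG channel window."""
            diff = np.diff(window)
            mav = np.mean(np.abs(window))                 # mean absolute value
            wl = np.sum(np.abs(diff))                     # waveform length
            # Zero crossings: sign changes whose amplitude step exceeds the threshold.
            zc = np.sum((window[:-1] * window[1:] < 0) & (np.abs(diff) > threshold))
            # Slope sign changes: local extrema with a sufficiently large step.
            ssc = np.sum((diff[:-1] * diff[1:] < 0) &
                         ((np.abs(diff[:-1]) > threshold) | (np.abs(diff[1:]) > threshold)))
            return np.array([mav, wl, zc, ssc], dtype=np.float64)

        window = 0.1 * np.random.randn(200)               # one 200-sample sEMG channel
        print(time_domain_features(window))               # [MAV, WL, ZC, SSC]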

    Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning

    In recent years, deep learning algorithms have become increasingly prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. However, within the field of electromyography-based gesture recognition, deep learning algorithms are seldom employed, as they require an unreasonable amount of effort from a single person to generate tens of thousands of examples. This work's hypothesis is that general, informative features can be learned from the large amounts of data generated by aggregating the signals of multiple users, thus reducing the recording burden while enhancing gesture recognition. Consequently, this paper proposes applying transfer learning on aggregated data from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets. Two datasets, comprising 19 and 17 able-bodied participants respectively (the first employed for pre-training), were recorded for this work using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and comprises 10 able-bodied participants. Three deep learning networks employing three different input modalities (raw EMG, spectrograms, and the Continuous Wavelet Transform (CWT)) are tested on the second and third datasets. The proposed transfer learning scheme is shown to systematically and significantly enhance the performance of all three networks on the two datasets, achieving an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw EMG-based ConvNet. Finally, a use-case study employing eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy, which reduces the degradation in accuracy normally experienced over time. Comment: Source code and datasets available: https://github.com/Giguelingueling/MyoArmbandDatase
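
    A minimal sketch of the general pre-train-then-fine-tune pattern described above: a small network is first trained on aggregated multi-user data, then its convolutional base is frozen and only the classification head is re-trained on a new user's recordings. The network, data shapes, and training loop below are placeholder assumptions, not the paper's actual ConvNets or training scheme.

        import torch
        import torch.nn as nn

        class GestureNet(nn.Module):
            def __init__(self, n_channels=8, n_classes=7):
                super().__init__()
                self.features = nn.Sequential(              # convolutional base
                    nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten())
                self.head = nn.Linear(32, n_classes)         # user-specific classifier

            def forward(self, x):
                return self.head(self.features(x))

        loss_fn = nn.CrossEntropyLoss()

        # 1) Pre-train on aggregated multi-user data (random placeholders here).
        model = GestureNet()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x_pre, y_pre = torch.randn(256, 8, 52), torch.randint(0, 7, (256,))
        for _ in range(5):
            opt.zero_grad()
            loss_fn(model(x_pre), y_pre).backward()
            opt.step()

        # 2) Transfer: freeze the base, fine-tune only the head on the target user.
        for p in model.features.parameters():
            p.requires_grad = False
        opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
        x_user, y_user = torch.randn(32, 8, 52), torch.randint(0, 7, (32,))
        for _ in range(10):
            opt.zero_grad()
            loss_fn(model(x_user), y_user).backward()
            opt.step()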

    Longitudinal tracking of physiological state with electromyographic signals.

    Electrophysiological measurements have historically been used to classify instantaneous physiological configurations, e.g., hand gestures. This work investigates the feasibility of working with changes in physiological configurations over time (i.e., longitudinally) using a variety of algorithms from the machine learning domain. We demonstrate a high degree of classification accuracy for a binary classification problem derived from electromyography measurements before and after a 35-day bedrest. The problem difficulty is then increased with a more dynamic experiment testing for changes in astronaut sensorimotor performance, in which electromyography and force plate measurements are taken before, during, and after a jump from a small platform. LASSO regularization is performed to observe changes in the relationship between electromyography features and force plate outcomes. SVM classifiers are employed to correctly identify the times at which these experiments were performed, which is important as these indicate a trajectory of adaptation.
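
    A minimal scikit-learn sketch of the two analysis steps mentioned above (a LASSO regression relating EMG features to a force-plate outcome, and an SVM classifying which session a trial belongs to); the feature matrices are random placeholders standing in for the real measurements.

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 40))            # 120 trials x 40 EMG features (placeholder)
        force = X @ rng.normal(size=40) + rng.normal(scale=0.5, size=120)
        session = rng.integers(0, 2, size=120)    # 0 = pre, 1 = post

        # LASSO: which EMG features relate to the force-plate outcome?
        lasso = make_pipeline(StandardScaler(), Lasso(alpha=0.1)).fit(X, force)
        print("features with non-zero LASSO weight:", np.flatnonzero(lasso[-1].coef_))

        # SVM: can the session (pre vs. post) be identified from the EMG features?
        svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        print("session accuracy:", cross_val_score(svm, X, session, cv=5).mean())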

    Development of an EMG-based Muscle Health Model for Elbow Trauma Patients

    Musculoskeletal (MSK) conditions are a leading cause of pain and disability worldwide. Rehabilitation is critical for recovery from these conditions and for the prevention of long-term disability. Robot-assisted therapy has been demonstrated to improve stroke rehabilitation in terms of efficiency and patient adherence. However, there are no wearable robot-assisted solutions for patients with MSK injuries. One of the limiting factors is the lack of appropriate models that allow the use of biosignals as an interface input. Furthermore, there are no models to discern the health of MSK patients as they progress through their therapy. This thesis describes the design, data collection, analysis, and validation of a novel muscle health model for elbow trauma patients. Surface electromyography (sEMG) data sets were collected from the injured arms of elbow trauma patients performing 10 upper-limb motions. The data were assessed and compared to sEMG data collected from the patients' contralateral healthy limbs. A statistical analysis was conducted to identify trends relating the sEMG signals to muscle health, and sEMG-based classification models for muscle health were developed. Relevant sEMG features were identified and combined into feature sets for the classification models. The classifiers were used to distinguish between two levels of health: healthy and injured (50% baseline accuracy rate). Classification models based on individual motions achieved cross-validation accuracies of 48.2-79.6%. Following feature selection and optimization of the models, cross-validation accuracies of up to 82.1% were achieved. This work suggests that an EMG-based model of muscle health could be implemented in a rehabilitative elbow brace to assess patients recovering from MSK elbow trauma. However, more research is necessary to improve the accuracy and specificity of the classification models.
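
    A minimal sketch of the kind of classification pipeline this implies: per-motion sEMG feature vectors feeding a binary healthy-vs-injured classifier with feature selection and cross-validation. The feature count, classifier choice, and placeholder data are assumptions, not the thesis' actual feature sets or models.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.normal(size=(80, 24))             # 80 motion repetitions x 24 sEMG features
        y = rng.integers(0, 2, size=80)           # 0 = healthy limb, 1 = injured limb

        clf = make_pipeline(
            StandardScaler(),
            SelectKBest(f_classif, k=8),          # keep the 8 most discriminative features
            LinearDiscriminantAnalysis())
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"cross-validation accuracy: {scores.mean():.1%} +/- {scores.std():.1%}")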

    Recent Advances in Motion Analysis

    The advances in technology and methodology for human movement capture and analysis over the last decade have been remarkable. Besides established laboratory approaches for kinematic, dynamic, and electromyographic (EMG) analysis, more recently developed devices, such as wearables, inertial measurement units, ambient sensors, and cameras or depth sensors, have been adopted on a wide scale. Furthermore, computational intelligence (CI) methods, such as artificial neural networks, have recently emerged as promising tools for the development and application of intelligent systems in motion analysis. Thus, the synergy of classic instrumentation with novel smart devices and techniques has created unique capabilities for the continuous monitoring of motor behaviors in fields such as clinics, sports, and ergonomics. However, real-time sensing, signal processing, human activity recognition, and the characterization and interpretation of motion metrics and behaviors from sensor data still represent challenging problems, not only in the laboratory but also at home and in the community. This book addresses open research issues related to the improvement of classic approaches and the development of novel technologies and techniques in the domain of motion analysis, across its various fields of application.