4 research outputs found

    Effectiveness of surface electromyography in pattern classification for upper limb amputees

    This study explored 18 time domain (TD) and time-frequency domain (TFD) feature configurations to determine the most discriminative feature sets for classification. Features were extracted from the surface electromyography (sEMG) signal of 17 hand and wrist movements and used to perform a series of classification trials with the random forest classifier. Movement datasets for 11 intact subjects and 9 amputees from the NinaPro online database repository were used. The aim was to identify any optimum configurations that combined features from both domains, and whether any standout features were consistent across subject types. This work built on our previous research by incorporating the TFD, using a discrete wavelet transform with a Daubechies wavelet. Findings report that configurations combining the same features from both domains perform best across subject types (TD: root mean square (RMS), waveform length, and slope sign changes; TFD: RMS, standard deviation, and energy). These mixed-domain configurations can yield optimal performance (intact subjects: 90.98%; amputee subjects: 75.16%), but offer only limited improvement over single-domain configurations. This suggests there is little scope for building a single, absolute feature configuration, and that more focus should be put on enhancing the classification methodology for adaptivity and robustness under actual operating conditions.
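    As an illustration of the mixed-domain extraction described above, the following minimal Python sketch computes the standout TD features (RMS, waveform length, slope sign changes) and TFD features (RMS, standard deviation, energy) from the coefficients of a discrete wavelet transform. The 'db7' wavelet, decomposition level, and window size are illustrative assumptions, not the study's exact configuration.

```python
# A minimal sketch of mixed-domain sEMG feature extraction, assuming a
# single-channel window as a NumPy array. Wavelet choice and level are
# illustrative assumptions.
import numpy as np
import pywt

def td_features(window):
    """Time domain features: RMS, waveform length, slope sign changes."""
    rms = np.sqrt(np.mean(window ** 2))
    wl = np.sum(np.abs(np.diff(window)))                  # waveform length
    ssc = np.sum(np.diff(np.sign(np.diff(window))) != 0)  # slope sign changes
    return [rms, wl, ssc]

def tfd_features(window, wavelet="db7", level=4):
    """TFD features from DWT coefficients: RMS, standard deviation, energy."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    feats = []
    for c in coeffs:  # approximation + detail coefficients at each level
        feats += [np.sqrt(np.mean(c ** 2)), np.std(c), np.sum(c ** 2)]
    return feats

window = np.random.randn(400)  # e.g. a 200 ms window sampled at 2 kHz
feature_vector = td_features(window) + tfd_features(window)
```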

    Pattern classification of hand movements using time domain features of electromyography

    Myoelectric control of prostheses is a long-established technique, using surface electromyography (sEMG) to detect the electrical signals of muscle activity and perform corresponding mechanical actions. Despite several decades of research, robust, responsive, and intuitive control schemes remain elusive. Current commercial hardware offers a variety of movements, but the control systems are unnatural, relying on sequential switching methods triggered by specific sEMG signals. However, recent research on pattern recognition and simultaneous and proportional control shows good promise for natural myoelectric control. This paper investigates several sEMG time domain features, using a series of hand movements performed by 11 subjects taken from a benchmark database, to determine whether optimal classification accuracy depends on feature set size. The features were extracted from the data using a sliding window process and applied to five machine learning classifiers, of which Random Forest consistently performed best. Results suggest that a few simple features, such as root mean square and waveform length, achieve performance comparable to the entire feature set when identifying the hand movements, although further work is required for feature optimisation.
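    A minimal sketch of the sliding-window extraction and Random Forest classification pipeline described above, using scikit-learn. The window length, overlap, feature subset, and synthetic data are illustrative assumptions.

```python
# A minimal sketch of sliding-window sEMG feature extraction followed by
# Random Forest classification. Window/step sizes and the random stand-in
# data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def sliding_windows(signal, win=400, step=100):
    """Yield overlapping windows over a (samples, channels) sEMG array."""
    for start in range(0, len(signal) - win + 1, step):
        yield signal[start:start + win]

def extract_features(window):
    """Per-channel RMS and waveform length, concatenated."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([rms, wl])

# Stand-in data: emg is (samples, channels), labels give a movement id per sample.
emg = np.random.randn(10000, 12)
labels = np.random.randint(0, 17, size=10000)

X = np.array([extract_features(w) for w in sliding_windows(emg)])
y = np.array([np.bincount(labels[s:s + 400]).argmax()  # majority label per window
              for s in range(0, len(emg) - 400 + 1, 100)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```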

    Adaptive frameworks for robust myoelectric hand gesture prediction using machine learning and deep learning

    No full text
    Natural, dependable prosthesis operation using a myoelectric interface is an extremely difficult and challenging problem. The technique utilises surface electromyography (sEMG) signals, detected as electrical activity representing muscle contraction, and converts them to corresponding prosthetic actions. The research conducted herein focuses on supplying reliable operational performance and movement dexterity via myoelectric control using machine learning (ML) and deep learning (DL) strategies. The intention is to investigate the possibility of supplying an upper-limb amputee with the capability of accurately completing fine-grained hand gestures. To achieve this aim, three criteria are established as performance measures for the ML and DL solutions employed: robustness, adaptability, and continuous-simultaneous (C&S) control.

    The work first investigated the classical ML methodology, in particular the feature engineering process, to ascertain whether feature-set size was important and whether a unique, robust feature set could be established. Combinations of time domain (TD) features were created using a series of 17 hand gestures performed by 11 subjects, taken from Database 2 of the Ninapro benchmark repository. Features were extracted using a sliding window process and applied to five ML classifiers, of which Random Forest (RF) performed best. Results suggested that a configuration of a few simple features, root mean square (RMS), waveform length (WL), and slope sign changes (SSC), achieved comparable performance (90.53% classification accuracy) to larger, state-of-the-art feature sets (90.57%).

    This was built upon by research exploring the addition of time-frequency domain (TFD) features based on wavelet transforms, and their effectiveness on both intact and amputee subjects. Features were extracted from the coefficients of a discrete wavelet transform (DWT) performed on the same 17 hand gestures of Ninapro Database 2 (intact) and Database 3 (amputee), creating 18 feature configurations. The aim was to identify any optimum configurations that combined features from both domains, and whether any standout features were consistent across subject types. Findings reported that a five-feature, combined-domain configuration of TD-based RMS, WL, and SSC and TFD-based RMS and standard deviation performed best for intact and amputee subjects (90.98% and 75.16%, respectively). The minimal accuracy improvement suggested there was limited scope for adding the computationally heavy DWT and building a single, absolute feature configuration; more focus should therefore be applied to enhancing the classification method for robust, adaptable operation.

    The next research component explored the requirements for accurately predicting user intention using DL when performing fine-grained hand movements. This signified a switch from classification to a regression approach, enabling investigation into C&S control of multiple joints of the hand. The focus was on combining a feature engineering process with the capability of DL to further identify salient biological characteristics. The established three-feature TD configuration was used, taken from 17 hand gestures of 40 subjects from Ninapro Database 2. The sEMG feature data were mapped to six sensors of a CyberGlove II, located at wrist, finger, and thumb joints of interest, representing the associated hand kinematic data.
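    The mapping from windowed sEMG features to glove sensor targets can be sketched as a sequence-to-one regression dataset, as below. The sequence length and array shapes are illustrative assumptions, not the thesis's exact configuration.

```python
# A minimal sketch of pairing per-window sEMG feature vectors with aligned
# glove sensor readings for sequence-to-one regression. Shapes and the
# 20-step history length are illustrative assumptions.
import numpy as np

def make_sequences(features, glove, seq_len=20):
    """features: (T, n_feats) per-window sEMG features;
    glove: (T, 6) aligned sensor readings.
    Each sequence's target is the glove reading at its final step."""
    X, y = [], []
    for t in range(seq_len, len(features)):
        X.append(features[t - seq_len:t])  # feature history window
        y.append(glove[t])                 # current joint configuration
    return np.stack(X), np.stack(y)

feats = np.random.randn(5000, 36)  # e.g. 12 channels x 3 TD features
glove = np.random.randn(5000, 6)   # 6 CyberGlove sensor readings
X_seq, y_seq = make_sequences(feats, glove)
print(X_seq.shape, y_seq.shape)    # (4980, 20, 36) (4980, 6)
```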
    A two-layer bidirectional gated recurrent unit (Bidi-GRU) model proved most consistent, with a root mean square error (RMSE) of 3.50 and an R² score of 98.04% during prediction tests.

    For the final element, a study of a DL-only solution to robust, adaptable C&S control of multiple hand joints was undertaken. The feature learning capabilities of four DL models using both raw sEMG and feature data were compared against the established feature-based Bidi-GRU, along with three classical ML algorithms. A novel database (LeapMyo) was created using low-cost wearable sensor hardware to acquire sEMG and joint angle data from 14 joints of the hand, for 12 subjects performing 12 hand gestures. Data augmentation of the feature input was also employed to explore any enhancement of DL model performance. Finally, a mapping framework was developed using a partial least squares (PLS) method to directly predict hand kinematic data from sEMG. It was found that combining a feature engineering input with a temporal DL model (FEAT-BGRU) gave the best prediction performance (1.92° RMSE) when compared to feature learning and classical ML strategies, even with data augmentation applied. This advocates a continued place for human expertise, despite the growth of more automated DL approaches. Interestingly, adding batch normalisation (BN) layers to this model improved performance further (1.78° RMSE and 99.37% R²), which we explain as BN providing focus to the layer output, giving a stronger data range from which the fully connected layer can make a more accurate joint angle prediction. The PLS framework performed less well (13.80° RMSE and 69.38% R²), indicating the importance of utilising kinematic history data as model input alongside sEMG features.
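    A minimal PyTorch sketch in the spirit of the FEAT-BGRU model described above: a two-layer bidirectional GRU with batch normalisation before the fully connected output layer. The layer sizes, 14-joint output, and sequence shape are illustrative assumptions, not the thesis's exact architecture.

```python
# A minimal sketch of a two-layer bidirectional GRU regressor with batch
# normalisation before the output layer. All sizes are illustrative
# assumptions.
import torch
import torch.nn as nn

class BidiGRURegressor(nn.Module):
    def __init__(self, n_feats=36, hidden=128, n_joints=14):
        super().__init__()
        self.gru = nn.GRU(n_feats, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.bn = nn.BatchNorm1d(2 * hidden)  # normalise the final GRU output
        self.fc = nn.Linear(2 * hidden, n_joints)

    def forward(self, x):              # x: (batch, seq_len, n_feats)
        out, _ = self.gru(x)
        last = out[:, -1, :]           # final time step, both directions
        return self.fc(self.bn(last))  # (batch, n_joints) joint angles

model = BidiGRURegressor()
x = torch.randn(32, 20, 36)            # a batch of feature sequences
pred = model(x)
loss = nn.functional.mse_loss(pred, torch.randn(32, 14))
```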

    A deep adaptive framework for robust myoelectric hand movement prediction

    No full text
    This work explored the requirements of accurately and reliably predicting user intention using a deep learning methodology when performing fine-grained movements of the human hand. The focus was on combining a feature engineering process with the effective capability of deep learning to further identify salient characteristics from a biological input signal. Three time domain features (root mean square, waveform length, and slope sign changes) were extracted from the surface electromyography (sEMG) signal of 17 hand and wrist movements performed by 40 subjects. The feature data were mapped to six sensor bend resistance readings from a CyberGlove II system, representing the associated hand kinematic data. These sensors were located at specific joints of interest on the human hand (the thumb’s metacarpophalangeal joint, the proximal interphalangeal joint of each finger, and the radiocarpal joint of the wrist). All datasets were taken from Database 2 of the NinaPro online database repository. A 3-layer long short-term memory model with dropout was developed to predict the six glove sensor readings from a corresponding sEMG feature vector input. Initial trials using test data from the 40 subjects produced an average mean squared error of 0.176. This indicates a viable pathway for this method of predicting hand movement data, although further work is needed to optimise the model and to analyse the data with a more detailed set of metrics.
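    A minimal PyTorch sketch of the kind of 3-layer LSTM with dropout described above, predicting the six glove sensor readings from a sequence of sEMG feature vectors. The hidden size, dropout rate, and sequence length are illustrative assumptions.

```python
# A minimal sketch of a 3-layer LSTM with dropout for predicting 6 glove
# sensor readings from sEMG feature sequences. Sizes are illustrative
# assumptions.
import torch
import torch.nn as nn

class GloveLSTM(nn.Module):
    def __init__(self, n_feats=36, hidden=64, n_sensors=6):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, num_layers=3,
                            dropout=0.3, batch_first=True)
        self.fc = nn.Linear(hidden, n_sensors)

    def forward(self, x):               # x: (batch, seq_len, n_feats)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])   # sensor readings at the final step

model = GloveLSTM()
pred = model(torch.randn(16, 20, 36))
mse = nn.functional.mse_loss(pred, torch.randn(16, 6))
```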