A nonlinear inverse distance weighting spatial interpolation approach applied to the surface electromyography signal
Spatial interpolation of a surface electromyography (sEMG) signal from a set of signals recorded by a multi-electrode array is a challenge in biomedical signal processing. Such interpolation can effectively increase the electrode density, estimating the activity of skeletal-muscle motor units at locations not covered by physical electrodes. This paper applied two spatial interpolation methods for the estimation: inverse distance weighting (IDW) and Kriging. Furthermore, a new technique is proposed using a modified nonlinear formula based on IDW. A set of EMG signals recorded with a noninvasive multi-electrode grid from subjects of different sex and age, and from different muscles, was studied while the muscles were under regular tension activity. A goodness-of-fit measure (R²) is used to evaluate the proposed technique. The interpolated signals are compared with the actual signals; the goodness-of-fit value is almost 99%, with a processing time of 100 ms. The resulting technique shows high accuracy and close matching of the spatially interpolated signals to the actual signals compared with the IDW and Kriging techniques.
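As a rough illustration (not the authors' exact formulation), standard inverse distance weighting estimates the signal at an unsampled point as a weighted average of neighboring electrode samples, with weights decaying as a power of the distance; the power parameter `p` is where a nonlinear modification of the formula would enter. The grid coordinates and values below are invented for the example:

```python
import numpy as np

def idw_interpolate(coords, values, query, p=2.0, eps=1e-12):
    """Standard inverse distance weighting (IDW).

    coords: (n, 2) electrode positions; values: (n,) samples at one
    time instant; query: (2,) position to estimate. `p` controls the
    nonlinearity of the distance decay (p = 2 is the classic choice).
    """
    d = np.linalg.norm(coords - query, axis=1)
    if np.any(d < eps):                # query coincides with an electrode
        return float(values[np.argmin(d)])
    w = 1.0 / d**p
    return float(np.sum(w * values) / np.sum(w))

# 2x2 electrode grid; estimating at the center averages all four channels
coords = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
values = np.array([1.0, 2.0, 3.0, 4.0])
center = idw_interpolate(coords, values, np.array([0.5, 0.5]))  # -> 2.5
```

In practice the function would be applied per time sample across the grid, which is consistent with the sub-second processing times reported above.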
Interpreting Deep Learning Features for Myoelectric Control: A Comparison with Handcrafted Features
The research in myoelectric control systems primarily focuses on extracting
discriminative representations from the electromyographic (EMG) signal by
designing handcrafted features. Recently, deep learning techniques have been
applied to the challenging task of EMG-based gesture recognition. The adoption
of these techniques slowly shifts the focus from feature engineering to feature
learning. However, the black-box nature of deep learning makes it hard to
understand the type of information learned by the network and how it relates to
handcrafted features. Additionally, due to the high variability in EMG
recordings between participants, deep features tend to generalize poorly across
subjects using standard training methods. Consequently, this work introduces a
new multi-domain learning algorithm, named ADANN, which significantly enhances
(p=0.00004) inter-subject classification accuracy by an average of 19.40%
compared to standard training. Using ADANN-generated features, the main
contribution of this work is to provide the first topological data analysis of
EMG-based gesture recognition for the characterisation of the information
encoded within a deep network, using handcrafted features as landmarks. This
analysis reveals that handcrafted features and the learned features (in the
earlier layers) both try to discriminate between all gestures, but do not
encode the same information to do so. Furthermore, convolutional network
visualization techniques reveal that learned features tend to ignore the most
activated channel during gesture contraction, which is in stark contrast with
the prevalence of handcrafted features designed to capture amplitude
information. Overall, this work paves the way for hybrid feature sets by
providing a clear guideline of complementary information encoded within learned
and handcrafted features.
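For context, the handcrafted amplitude features this comparison refers to are typically simple time-domain statistics computed over a windowed EMG channel. A minimal sketch of a few classic ones (an illustrative set, not the paper's exact feature list):

```python
import numpy as np

def amplitude_features(window):
    """Classic handcrafted time-domain EMG features that capture
    amplitude information (the kind the learned features were
    observed to largely ignore)."""
    x = np.asarray(window, float)
    mav = np.mean(np.abs(x))             # mean absolute value
    rms = np.sqrt(np.mean(x**2))         # root mean square
    wl = np.sum(np.abs(np.diff(x)))      # waveform length
    return {"MAV": mav, "RMS": rms, "WL": wl}

feats = amplitude_features([0.0, 1.0, -1.0, 0.5])
# -> {'MAV': 0.625, 'RMS': 0.75, 'WL': 4.5}
```

Feature vectors like this, computed per channel and concatenated, are the "landmarks" against which the learned representations were compared.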
Brain-Computer Interface Based on Unsupervised Methods to Recognize Motor Intention for Command of a Robotic Exoskeleton
Stroke and road traffic injuries may severely affect lower-limb movement in humans, and
consequently locomotion, which plays an important role in daily activities and in quality of life.
Robotic exoskeletons are an emerging alternative that may be used on patients with
motor deficits in the lower extremities to provide motor rehabilitation and gait assistance. However,
the effectiveness of robotic exoskeletons may be reduced by the autonomous ability of the
robot to complete the movement without patient involvement. Electroencephalography (EEG)
signals have therefore been used to design brain-computer interfaces (BCIs) that
provide a communication pathway through which patients can directly control the exoskeleton using
their motor intention, increasing their participation during rehabilitation. In particular,
activations related to motor planning may help to improve the closed loop between user and
exoskeleton, enhancing cortical neuroplasticity. Motor planning begins before movement
onset, so the training stage of a BCI may be affected by an imprecise labeling process:
reference signals, such as a goniometer or footswitch, cannot be used to select the time periods
truly related to motor planning. Recognizing gait planning is therefore a challenge
due to the high uncertainty of the selected patterns; nevertheless, few BCIs based on unsupervised
methods to recognize gait planning/stopping have been explored.
This Doctoral Thesis presents unsupervised methods to improve the performance of BCIs for
gait planning/stopping recognition. In this context, an adaptive spatial filter for online
processing, based on the Concordance Correlation Coefficient (CCC), was developed to preserve
the useful information in the EEG signals while rejecting neighbor electrodes around the electrode
of interest. Two methods for electrode selection were proposed. In the first, both the standard
deviation and the CCC between target electrodes and their corresponding neighbor electrodes are
analyzed on sliding windows to select those neighbors that are highly correlated. In the second,
a Z-score analysis is performed to reject those neighbor electrodes whose amplitude values
present a significant difference with respect to the other neighbors.
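The CCC between a target electrode and a neighbor can be computed per window; a minimal sketch using Lin's standard formula (the thesis's exact windowing and thresholds are not reproduced here):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two signals:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Equals 1 only for perfect agreement (identity line)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))   # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# identical signals agree perfectly; a constant offset lowers agreement
sig = np.array([0.1, -0.3, 0.5, 0.2])
perfect = ccc(sig, sig)         # -> 1.0
offset = ccc(sig, sig + 1.0)    # < 1.0 despite a Pearson correlation of 1
```

Unlike the Pearson correlation, the CCC penalizes amplitude and offset differences, which makes it a stricter agreement measure for deciding which neighbors to keep.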
Furthermore, another method, using the representation entropy and the maximal information
compression index (MICI), was proposed for feature selection; it may be robust for
selecting patterns, as it depends only on the cluster distribution. In addition, a statistical
analysis was introduced to adjust, in the training stage of the BCIs, regularized classifiers
such as the support vector machine (SVM) and regularized discriminant analysis (RDA).
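The MICI of Mitra et al. is commonly defined as the smallest eigenvalue of the 2x2 covariance matrix of a feature pair; it is zero exactly when the two features are linearly dependent, so low-MICI pairs are redundant. A sketch under that assumption (the thesis's exact use of it is not reproduced):

```python
import numpy as np

def mici(x, y):
    """Maximal information compression index: the smallest eigenvalue
    of the 2x2 covariance matrix of the feature pair (x, y).
    Zero iff the features are linearly dependent (fully redundant)."""
    cov = np.cov(np.vstack([x, y]))
    return float(np.linalg.eigvalsh(cov)[0])  # eigvalsh sorts ascending

x = np.array([1.0, 2.0, 3.0, 4.0])
lin = mici(x, 2 * x + 1)                          # linearly dependent -> ~0
noisy = mici(x, np.array([1.2, 1.9, 3.3, 3.8]))   # > 0
```

Ranking feature pairs by MICI and discarding one feature of each near-zero pair yields an unsupervised selection criterion that needs no class labels, which is the property the thesis exploits.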
Six subjects were recruited to evaluate the performance of the different BCIs based on the
proposed methods during gait planning/stopping recognition.
The unsupervised approach for feature selection showed performance similar to other methods
based on linear discriminant analysis (LDA) when applied in a BCI based on the traditional
Weighted Average Reference (WAR) to recognize gait planning. Additionally, the proposed adaptive
filter improved the performance of BCIs based on traditional spatial filters, such as the Local
Average Reference (LAR) and WAR, as well as of other BCIs based on powerful methods, such as
Common Spatial Patterns (CSP), Filter Bank Common Spatial Patterns (FBCSP) and the Riemannian
kernel (RK). RK presented the best performance in comparison to CSP and FBCSP, which
agrees with the hypothesis that unsupervised methods may be more appropriate for analyzing
clusters of high uncertainty, such as those formed by motor planning.
BCIs using the adaptive filter based on Z-score analysis, together with the unsupervised approach
for feature selection and RDA, showed promising results for recognizing both gait planning and
gait stopping, achieving, for three subjects, good true positive rates (>70%) and false positive
rates (<16%). Thus, the proposed methods may be used to obtain an optimized BCI that preserves
the useful information, enhancing gait planning/stopping recognition. In addition, the method
for feature selection has a low computational cost, which may make it suitable for applications
that demand short training times, such as clinical applications.
Multi-sensor fusion based on multiple classifier systems for human activity identification
Multimodal sensors in healthcare applications have been increasingly researched because they facilitate automatic and comprehensive monitoring of human behaviors, high-intensity sports management, energy expenditure estimation, and postural detection. Recent studies have shown the importance of multi-sensor fusion to achieve robustness and high-performance generalization, provide diversity, and tackle challenging issues that may be difficult with single-sensor values. The aim of this study is to propose an innovative multi-sensor fusion framework that improves human activity detection performance and reduces the misrecognition rate. The study proposes a multi-view ensemble algorithm to integrate the predicted values of different motion sensors. To this end, computationally efficient classification algorithms such as decision trees, logistic regression and k-nearest neighbors were used to implement diverse, flexible and dynamic human activity detection systems. To provide a compact feature vector representation, we studied a hybrid bio-inspired evolutionary search algorithm and a correlation-based feature selection method, and evaluated their impact on the feature vectors extracted from each individual sensor modality. Furthermore, we utilized the Synthetic Minority Over-sampling Technique (SMOTE) to reduce the impact of class imbalance and improve the performance results. With the above methods, this paper provides a unified framework to resolve major challenges in human activity identification. The performance results obtained using two publicly available datasets showed significant improvement over baseline methods in the detection of specific activity details and a reduced error rate. Our evaluation showed a 3% to 24% improvement in accuracy, recall, precision, F-measure and detection ability (AUC) compared to single sensors and feature-level fusion.
The benefit of the proposed multi-sensor fusion is the ability to utilize the distinct feature characteristics of individual sensors and multiple classifier systems to improve recognition accuracy. In addition, the study suggests the promising potential of the hybrid feature selection approach and diversity-based multiple classifier systems to improve mobile and wearable sensor-based human activity detection and health monitoring systems.
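The decision-level fusion idea can be illustrated with a simple majority vote over per-sensor classifier outputs (a toy sketch; the paper's multi-view ensemble is more elaborate, and the sensor names below are only examples):

```python
import numpy as np

def majority_vote(predictions):
    """Fuse per-sensor predicted labels by majority vote.

    predictions: (n_sensors, n_samples) integer class labels, one row
    per sensor-specific classifier. Ties resolve to the smallest label
    because np.argmax returns the first maximum.
    """
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    fused = np.empty(preds.shape[1], dtype=int)
    for i in range(preds.shape[1]):
        counts = np.bincount(preds[:, i], minlength=n_classes)
        fused[i] = int(np.argmax(counts))
    return fused

# e.g. accelerometer, gyroscope and magnetometer classifiers vote per sample
votes = np.array([[0, 1, 2],
                  [0, 1, 1],
                  [1, 1, 2]])
fused = majority_vote(votes)   # -> [0, 1, 2]
```

The appeal of such decision-level fusion is that a sensor that misrecognizes one activity can be outvoted by the others, which is how the fused system reduces the misrecognition rate relative to any single sensor.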
Simple and computationally efficient movement classification approach for EMG-controlled prosthetic hand: ANFIS vs. artificial neural network
The aim of this paper is to propose an exploratory study of a simple, accurate and computationally efficient movement classification technique for prosthetic hand applications. Surface myoelectric signals were acquired from two muscles, the flexor carpi ulnaris and the extensor carpi radialis, of four normal-limb subjects. These signals were segmented and features were extracted using a new combined time-domain method of feature extraction. The fuzzy C-means clustering method and scatter plots were used to evaluate the performance of the proposed multi-feature set against other accurate multi-feature sets. Finally, the movements were classified using an adaptive neuro-fuzzy inference system (ANFIS) and an artificial neural network. The comparison results indicate that ANFIS not only displays higher classification accuracy (88.90%) than the artificial neural network, but also significantly reduces computation time.
Development of an EMG-based Muscle Health Model for Elbow Trauma Patients
Musculoskeletal (MSK) conditions are a leading cause of pain and disability worldwide. Rehabilitation is critical for recovery from these conditions and for the prevention of long-term disability. Robot-assisted therapy has been demonstrated to provide improvements to stroke rehabilitation in terms of efficiency and patient adherence. However, there are no wearable robot-assisted solutions for patients with MSK injuries. One of the limiting factors is the lack of appropriate models that allow the use of biosignals as an interface input. Furthermore, there are no models to discern the health of MSK patients as they progress through their therapy.
This thesis describes the design, data collection, analysis, and validation of a novel muscle health model for elbow trauma patients. Surface electromyography (sEMG) data sets were collected from the injured arms of elbow trauma patients performing 10 upper-limb motions. The data were assessed and compared to sEMG data collected from the patients' contralateral healthy limbs. A statistical analysis was conducted to identify trends relating the sEMG signals to muscle health.
sEMG-based classification models for muscle health were developed. Relevant sEMG features were identified and combined into feature sets for the classification models. The classifiers were used to distinguish between two levels of health: healthy and injured (50% baseline accuracy rate). Classification models based on individual motions achieved cross-validation accuracies of 48.2% to 79.6%. Following feature selection and optimization of the models, cross-validation accuracies of up to 82.1% were achieved.
This work suggests that there is potential for implementing an EMG-based model of muscle health in a rehabilitative elbow brace to assess patients recovering from MSK elbow trauma. However, more research is necessary to improve the accuracy and specificity of the classification models.
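The cross-validation accuracies quoted above are judged against the 50% two-class chance level; the evaluation scheme can be sketched with a toy leave-one-out loop (hypothetical feature data and a nearest-centroid stand-in for the thesis's actual classifiers):

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out cross-validation accuracy of a nearest-centroid
    classifier for two classes (0 = healthy, 1 = injured).
    A useful model must beat the 0.5 chance-level baseline."""
    X, y = np.asarray(X, float), np.asarray(y)
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # hold sample i out
        c0 = X[mask & (y == 0)].mean(axis=0)   # healthy centroid
        c1 = X[mask & (y == 1)].mean(axis=0)   # injured centroid
        pred = 0 if np.linalg.norm(X[i] - c0) <= np.linalg.norm(X[i] - c1) else 1
        correct += int(pred == y[i])
    return correct / len(y)

# invented, well-separated feature vectors for the two health levels
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.1],
              [1.0, 1.1], [1.2, 0.9], [0.9, 1.0]])
y = np.array([0, 0, 0, 1, 1, 1])
acc = loo_accuracy(X, y)   # -> 1.0 on this separable toy set
```

Real sEMG feature sets are far less separable, which is why the per-motion models above land between near-chance (48.2%) and 82.1%.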
Embedded machine learning using microcontrollers in wearable and ambulatory systems for health and care applications: a review
The use of machine learning in medical and assistive applications is receiving significant attention thanks to the unique potential it offers to solve complex healthcare problems for which no other solutions have been found. Particularly promising in this field is the combination of machine learning with novel wearable devices. Machine learning models, however, are computationally demanding, which has typically meant that the acquired data must be transmitted to remote cloud servers for inference. This is not ideal from the point of view of system requirements. Recently, efforts to replace cloud servers with an alternative inference device closer to the sensing platform have given rise to a new area of research: Tiny Machine Learning (TinyML). In this work, we investigate the challenges and specification trade-offs associated with existing hardware options, as well as recently developed software tools, when trying to use microcontroller units (MCUs) as inference devices for health and care applications. The paper also reviews existing wearable systems incorporating MCUs for monitoring and management in the context of different intended health and care uses. Overall, this work addresses the gap in the literature on the use of MCUs as edge inference devices for healthcare wearables. It can thus serve as a starting point for embedding machine learning models on MCUs, with a focus on healthcare wearables.