GR-29 Wrist Intent Recognition for Stroke Rehabilitation
Abstract The Hand Mentor robotic device is beneficial for stroke patients. It is a rehabilitation technique used in stroke therapy that strengthens the limb and improves its range of motion, ultimately improving the quality of life for severely impaired stroke patients. It is easy to use without assistance and, most importantly, stroke survivors are able to use it independently. However, hourly usage of the Hand Mentor device is quite expensive for stroke patients. Developing an efficient deep learning algorithm for its sensor data is the motivation to cut down the cost and make the device more readily usable for stroke patients. The EMG signal is recorded using relevant sensors and provides useful information for inferring muscle movement. In this study, we utilized publicly available EMG signal datasets recorded from the upper limbs of human subjects to develop a neural-network-based model for the prediction of wrist motion intention. Research Question or Motivation The motivation of this study is to train a simple neural network model to accurately predict three basic wrist motions (extension, flexion, and no motion) using an optimum number of EMG sensors. This model can be further deployed to augment the capabilities of commercially available robotic-assistive rehabilitation devices. Materials and Methods Sensor-based continuous hand gesture recognition requires profound knowledge about gesture activities drawn from multitudes of low-level sensor readings. There are two ways to provide a solution: either use handcrafted features from the sensor data or use deep learning techniques. The advantage of deep learning is automatic high-level feature extraction with outstanding performance. However, sensor data requires signal pre- or post-processing such as feature selection, dimension reduction, denoising, etc.
Based on a literature review of many research papers, we found that the 1D Convolutional Neural Network (1D CNN) has recently become the state-of-the-art technique for crucial signal processing applications. A 1D CNN is very effective when we aim to extract features from fixed-length segments of the overall dataset and when the location of a feature within the segment is not of high relevance. In addition, real-time and low-cost hardware implementation is feasible using a 1D CNN. After a successful literature review on the 1D CNN and its advantages for signal data, we decided to apply a 1D CNN to raw EMG signal data. Preliminary results: Since this is an application-based project, we planned to work in phases to achieve the long-term goal of benefiting stroke patients using deep learning techniques. In this initial phase of the study, we utilized a publicly available EMG dataset for hand gestures from the UCI Machine Learning Repository to test the performance of the 1D CNN algorithm on gesture classification. We used only 3 labels (hand at rest, wrist flexion, wrist extension) out of the 8 labels in the dataset, per our particular application requirement. This dataset contains 8 EMG channels collected from the commercial MYO Thalmic bracelet device. We first performed an initial analysis to investigate the optimum number of sensors/channels based on the highest gesture classification accuracy using the KNN, Decision Tree, and Naïve Bayes algorithms. As a result of this analysis, we obtained the optimum channel combination (Ch1, Ch4, Ch5, Ch8), which generates the best classification accuracy. We used these 4 sensor channels to train a 1D CNN with a 78/22 train/test split. The dataset contains 36 subjects in total: data from subjects numbered 28 or lower form the training set, and data from subjects numbered above 28 form the test set.
We also performed an optimization study to find the optimum time-signal window and overlap sizes, which were 100 ms and 50 ms, respectively. We achieved a test accuracy of 97% for the classification of the 3 gestures (hand at rest, wrist flexion, wrist extension). Advisor(s): Dr. Coskun Tekes, Email: [email protected]. Subject(s): Artificial Intelligence. CS799
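The preprocessing pipeline summarized above (a subject-based train/test split followed by fixed-length windowing with 50% overlap) can be sketched as follows. The 200 Hz sampling rate, the record layout, and the helper names are illustrative assumptions, not details taken from the study.

```python
# Sketch of the windowing and subject-based split described above.
# The 200 Hz sampling rate and the (subject_id, channels, label)
# record layout are assumptions for illustration.

def segment(signal, fs=200, window_ms=100, overlap_ms=50):
    """Split one channel's samples into fixed-length overlapping windows."""
    win = int(fs * window_ms / 1000)                   # samples per 100 ms window
    step = int(fs * (window_ms - overlap_ms) / 1000)   # hop of 50 ms
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

def split_by_subject(records, cutoff=28):
    """records: list of (subject_id, channels, label) tuples.
    Subjects numbered <= cutoff form the training set, the rest the test set."""
    train = [r for r in records if r[0] <= cutoff]
    test = [r for r in records if r[0] > cutoff]
    return train, test
```

Splitting by subject rather than by sample keeps all data from a given person on one side of the split, which is what makes the reported test accuracy subject-independent.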
Distributionally Robust Semi-Supervised Learning for People-Centric Sensing
Semi-supervised learning is crucial for alleviating labelling burdens in
people-centric sensing. However, human-generated data inherently suffer from
distribution shift in semi-supervised learning due to the diverse biological
conditions and behavior patterns of humans. To address this problem, we propose
a generic distributionally robust model for semi-supervised learning on
distributionally shifted data. Considering both the discrepancy and the
consistency between the labeled data and the unlabeled data, we learn the
latent features that reduce person-specific discrepancy and preserve
task-specific consistency. We evaluate our model in a variety of people-centric
recognition tasks on real-world datasets, including intention recognition,
activity recognition, muscular movement recognition and gesture recognition.
The experimental results demonstrate that the proposed model outperforms the
state-of-the-art methods. Comment: 8 pages, accepted by AAAI201
PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform
Computing with high-dimensional (HD) vectors, also referred to as
hypervectors, is a brain-inspired alternative to computing with
scalars. Key properties of HD computing include a well-defined set of
arithmetic operations on hypervectors, generality, scalability, robustness,
fast learning, and ubiquitous parallel operations. HD computing is about
manipulating and comparing large patterns (binary hypervectors with 10,000
dimensions), making its efficient realization on minimalistic ultra-low-power
platforms challenging. This paper describes HD computing's acceleration and its
optimization of memory accesses and operations on a silicon prototype of the
PULPv3 4-core platform (1.5 mm², 2 mW), surpassing the state-of-the-art
classification accuracy (on average 92.4%) with a simultaneous 3.7×
end-to-end speed-up and 2× energy saving compared to its single-core
execution. We further explore the scalability of our accelerator by increasing
the number of inputs and classification window on a new generation of the PULP
architecture featuring bit-manipulation instruction extensions and larger
number of 8 cores. These together enable a near-ideal speed-up of 18.4×
compared to the single-core PULPv3.
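The hypervector operations the abstract enumerates (well-defined arithmetic on binary 10,000-dimensional vectors) can be illustrated with a minimal generic sketch; this is not the PULP implementation, and the function names and choice of XOR binding with majority-vote bundling are ours, though they are standard in binary HD computing.

```python
import random

DIM = 10_000  # hypervector dimensionality, as in the abstract

def rand_hv():
    """Draw a random binary hypervector."""
    return [random.randint(0, 1) for _ in range(DIM)]

def bind(a, b):
    """Bind two hypervectors with element-wise XOR (associative pairing)."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(hvs):
    """Bundle hypervectors by element-wise majority vote (superposition)."""
    return [1 if sum(bits) * 2 > len(hvs) else 0 for bits in zip(*hvs)]

def hamming(a, b):
    """Normalised Hamming distance; about 0.5 for unrelated hypervectors."""
    return sum(x != y for x, y in zip(a, b)) / DIM
```

With 10,000 dimensions the distance between two random hypervectors concentrates tightly around 0.5, which is the robustness property the abstract refers to: a class prototype bundled from noisy examples remains measurably closer to its members than to anything else.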
Intersected EMG heatmaps and deep learning based gesture recognition
Hand gesture recognition in myoelectric-based prosthetic devices is a key challenge in offering effective solutions to hand/lower-arm amputees. A novel hand gesture recognition methodology that employs the difference of EMG energy heatmaps as the input of a specifically designed deep learning neural network is presented. Experimental results using data from real amputees indicate that the proposed design achieves an average accuracy of 94.31%, with a best accuracy of 98.96%. A comparison of experimental results between the proposed hand gesture recognition methodology and other similar approaches indicates the superior effectiveness of the new design.
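The input representation described above, the difference of EMG energy heatmaps, can be sketched generically as follows. The mean-square energy definition and the channels-by-time-bins grid layout are assumptions for illustration; the abstract does not specify them.

```python
# Hedged sketch of an EMG energy heatmap and its difference, assuming
# mean-square energy per (channel, time-bin) cell; the paper's exact
# heatmap construction may differ.

def energy_heatmap(binned):
    """binned: per channel, a list of time bins (each a list of samples).
    Returns a channels x time-bins grid of mean-square EMG energy."""
    return [[sum(s * s for s in b) / len(b) for b in ch] for ch in binned]

def heatmap_diff(h1, h2):
    """Element-wise difference of two equally shaped energy heatmaps,
    used here as the network input."""
    return [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(h1, h2)]
```

Feeding a difference of heatmaps rather than a raw heatmap emphasizes the change in muscle activation between two states, which is plausibly why it helps the downstream network discriminate gestures.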
Transradial Amputee Gesture Classification Using an Optimal Number of sEMG Sensors: An Approach Using ICA Clustering
Surface electromyography (sEMG)-based pattern recognition studies have been widely used to improve the classification accuracy of upper-limb gestures. Information extracted from multiple sensors at the sEMG recording sites can be used as inputs to control powered upper-limb prostheses. However, using multiple EMG sensors on the prosthetic hand is not practical: it creates difficulties for amputees due to electrode shift/movement, and amputees often feel discomfort wearing an sEMG sensor array. Instead, using fewer sensors would greatly improve the controllability of prosthetic devices and would add dexterity and flexibility to their operation. In this paper, we propose a novel myoelectric control technique for the identification of various gestures using the minimum number of sensors, based on independent component analysis (ICA) and Icasso clustering. The proposed method is a model-based approach in which a combination of source separation and Icasso clustering was utilized to improve the classification performance of independent finger movements for transradial amputee subjects. Two sEMG sensor combinations were investigated based on muscle morphology and Icasso clustering and compared to the Sequential Forward Selection (SFS) and greedy search algorithms. The performance of the proposed method has been validated with five transradial amputees, with a reported classification accuracy above 95%. The outcome of this study encourages possible extension of the proposed approach to real-time prosthetic applications.
putEMG -- a surface electromyography hand gesture recognition dataset
In this paper, we present the putEMG dataset, intended for the evaluation of hand gesture recognition methods based on the sEMG signal. The dataset was acquired from 44 able-bodied subjects and includes 8 gestures (3 full-hand gestures, 4 pinches, and idle). It consists of uninterrupted recordings of 24 sEMG channels from the subject's forearm, an RGB video stream, and depth camera images used for hand motion tracking. Moreover, exemplary processing scripts are also published. The putEMG dataset is available under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license at: https://www.biolab.put.poznan.pl/putemg-dataset/. The dataset was validated regarding sEMG amplitudes and gesture recognition performance. The classification was performed using state-of-the-art classifiers and feature sets. An accuracy of 90% was achieved for an SVM classifier utilising the RMS feature and for an LDA classifier using Hudgins' and Du's feature sets. Analysis of performance for particular gestures showed that the LDA/Du combination has significantly higher accuracy for full-hand gestures, while SVM/RMS performs better for pinch gestures. The presented dataset can be used as a benchmark for various classification methods, for the evaluation of electrode localisation concepts, or for the development of classification methods invariant to user-specific features or electrode displacement.
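The RMS feature reported for the SVM baseline above is the standard root-mean-square amplitude, computed per channel per window; a minimal sketch (window segmentation assumed to be done elsewhere):

```python
import math

def rms(window):
    """Root-mean-square amplitude of one sEMG window."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def rms_features(channels):
    """One RMS value per channel, concatenated into a feature vector
    that a classifier such as SVM or LDA can consume."""
    return [rms(ch) for ch in channels]
```

For the 24-channel putEMG recordings this yields a 24-dimensional feature vector per window, which is what makes RMS attractive as a cheap baseline feature.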
Implementation of User-Independent Hand Gesture Recognition Classification Models Using IMU and EMG-based Sensor Fusion Techniques
According to the World Health Organization, stroke is the third leading cause of disability. A common consequence of stroke is hemiparesis, which leads to the impairment of one side of the body and affects the performance of activities of daily living. It has been proven that targeting the motor impairments as early as possible, using wearable mechatronic devices as robot-assisted therapy, and letting the patient be in control of the robotic system can improve rehabilitation outcomes. However, despite increased progress on control methods for wearable mechatronic devices, the need for a more natural interface that allows for better control remains. This work presents a user-independent gesture classification method based on a sensor fusion technique that combines surface electromyography (EMG) and an inertial measurement unit (IMU). The Myo Armband was used to measure muscle activity and motion data from healthy subjects. Participants were asked to perform 10 types of gestures in 4 different arm positions while wearing the Myo on their dominant limb. Data obtained from 22 participants were used to classify the gestures using 4 different classification methods. Finally, for each classification method, 5-fold cross-validation was used to test the efficacy of the classification algorithms. Overall classification accuracies in the range of 33.11%-72.1% were obtained. However, following the optimization of the gesture datasets, the overall classification accuracies increased to the range of 45.5%-84.5%. These results suggest that by using the proposed sensor fusion approach, it is possible to achieve a more natural human-machine interface that allows better control of wearable mechatronic devices during robot-assisted therapies.
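The 5-fold cross-validation evaluation mentioned above can be sketched with a simple index splitter. This is a generic sketch using plain contiguous folds; the abstract does not state how folds were formed, and a user-independent evaluation could instead group folds by participant.

```python
def kfold_indices(n, k=5):
    """Yield (train, test) index lists for k-fold cross-validation;
    the last fold absorbs any remainder when n is not divisible by k."""
    size = n // k
    idx = list(range(n))
    for f in range(k):
        test = idx[f * size:(f + 1) * size] if f < k - 1 else idx[f * size:]
        held = set(test)
        train = [i for i in idx if i not in held]
        yield train, test
```

Each of the 4 classifiers would be trained and scored once per fold, and the 5 fold accuracies averaged into the figures the abstract reports.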