
    Effectiveness of surface electromyography in pattern classification for upper limb amputees

    This study was undertaken to explore 18 time domain (TD) and time-frequency domain (TFD) feature configurations to determine the most discriminative feature sets for classification. Features were extracted from the surface electromyography (sEMG) signal of 17 hand and wrist movements and used to perform a series of classification trials with the random forest classifier. Movement datasets for 11 intact subjects and 9 amputees from the NinaPro online database repository were used. The aim was to identify any optimum configurations that combined features from both domains, and whether there was consistency across subject types for any standout features. This work built on our previous research by incorporating the TFD, using a Discrete Wavelet Transform with a Daubechies wavelet. Findings show that configurations combining the same features from both domains perform best across subject types (TD: root mean square (RMS), waveform length, and slope sign changes; TFD: RMS, standard deviation, and energy). These mixed-domain configurations can yield optimal performance (intact subjects: 90.98%; amputee subjects: 75.16%), but offer only limited improvement over single-domain configurations. This suggests there is limited scope in attempting to build a single absolute feature configuration, and that more focus should be put on enhancing the classification methodology for adaptivity and robustness under actual operating conditions.
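
    As a rough illustration of the feature types named above (a minimal sketch, not code from the study; the window length, the "db4" wavelet and the decomposition level are assumptions), the following Python snippet computes the time-domain RMS, waveform length and slope sign changes, plus wavelet sub-band RMS, standard deviation and energy, for a single sEMG window:

    import numpy as np
    import pywt  # PyWavelets, used here for the Discrete Wavelet Transform

    def td_features(window, ssc_threshold=0.01):
        # Time-domain features for one single-channel sEMG window.
        rms = np.sqrt(np.mean(window ** 2))
        wl = np.sum(np.abs(np.diff(window)))  # waveform length
        d1 = window[1:-1] - window[:-2]
        d2 = window[1:-1] - window[2:]
        ssc = np.sum((d1 * d2 > 0) &          # slope sign changes above a noise threshold
                     ((np.abs(d1) > ssc_threshold) | (np.abs(d2) > ssc_threshold)))
        return np.array([rms, wl, ssc])

    def tfd_features(window, wavelet="db4", level=3):
        # Wavelet-domain features: RMS, standard deviation and energy per sub-band.
        feats = []
        for coeffs in pywt.wavedec(window, wavelet, level=level):
            feats += [np.sqrt(np.mean(coeffs ** 2)), np.std(coeffs), np.sum(coeffs ** 2)]
        return np.array(feats)

    # Example: one 200-sample window of synthetic sEMG
    window = np.random.randn(200)
    feature_vector = np.concatenate([td_features(window), tfd_features(window)])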

    A temporal-to-spatial neural network for classification of hand movements from electromyography data

    Deep convolutional neural networks (CNNs) are appealing for the purpose of classification of hand movements from surface electromyography (sEMG) data because they have the ability to perform automated person-specific feature extraction from raw data. In this paper, we make the novel contribution of proposing and evaluating a design for the early processing layers in the deep CNN for multichannel sEMG. Specifically, we propose a novel temporal-to-spatial (TtS) CNN architecture, where the first layer performs convolution separately on each sEMG channel to extract temporal features. This is motivated by the idea that sEMG signals in each channel are mediated by one or a small subset of muscles, whose temporal activation patterns are associated with the signature features of a gesture. The temporal layer captures these signature features for each channel separately, which are then spatially mixed in successive layers to recognise a specific gesture. A practical advantage is that this approach also makes the CNN simple to design for different sample rates. We use NinaPro database 1 (27 subjects and 52 movements + rest), sampled at 100 Hz, and database 2 (40 subjects and 40 movements + rest), sampled at 2 kHz, to evaluate our proposed CNN design. We benchmark against a feature-based support vector machine (SVM) classifier, two CNNs from the literature, and an additional standard design of CNN. We find that our novel TtS CNN design achieves 66.6% per-class accuracy on database 1, and 67.8% on database 2, and that the TtS CNN outperforms all other compared classifiers using a statistical hypothesis test at the 2% significance level.
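
    To make the per-channel temporal convolution followed by spatial mixing more concrete, here is a minimal PyTorch sketch in the spirit of the TtS idea (the layer widths, kernel sizes and choice of framework are assumptions, not the authors' implementation):

    import torch
    import torch.nn as nn

    class TtSSketch(nn.Module):
        # Input shape: (batch, 1, n_channels, n_samples); illustrative sizes only.
        def __init__(self, n_channels=10, n_samples=100, n_classes=53):
            super().__init__()
            # Temporal stage: 1-D convolution along time, applied to each
            # sEMG channel separately (the kernel never spans channels).
            self.temporal = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=(1, 9), padding=(0, 4)),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(1, 4)),
            )
            # Spatial stage: mixes the per-channel temporal features across channels.
            self.spatial = nn.Sequential(
                nn.Conv2d(32, 64, kernel_size=(n_channels, 1)),
                nn.ReLU(),
            )
            self.classifier = nn.Linear(64 * (n_samples // 4), n_classes)

        def forward(self, x):
            x = self.temporal(x)   # (batch, 32, n_channels, n_samples // 4)
            x = self.spatial(x)    # (batch, 64, 1, n_samples // 4)
            return self.classifier(x.flatten(1))

    # Example: a batch of 8 windows, 10 electrodes, 100 samples (1 s at 100 Hz)
    logits = TtSSketch()(torch.randn(8, 1, 10, 100))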

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans can adapt to uncertainties easily, most of the time the skill-based performances that relate to their tacit knowledge cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some of those motions are unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, and then transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from a human teacher to a robot, using probabilistic encoding approaches to model observations and state-transition uncertainties. In close- or actual-contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising means of recording state-action examples without restricting the human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. The data fusion method was able to overcome the occlusion and frame-flipping problems of the two-camera Vicon setup and the drifting problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy by 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method of indicating contact feedback during manual manipulations. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater-winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimation and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state-recognition method to encode and generalise the spatial and temporal information of the skilled executions. This allows a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy.
    To simplify the validation procedure, instead of using the robot, additional demonstrations from the teacher were used to verify the reproduction performance of the policy, under the assumption that the human teacher and the robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where skilled human performance is required to cope robustly with various uncertainties during task execution.
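
    As an illustration of the muscle-force modelling step (a minimal sketch with assumed layer sizes and delay length; the thesis code is not reproduced here), a time-delayed network can be approximated by feeding a tapped-delay window of sEMG samples into a small feed-forward regressor:

    import torch
    import torch.nn as nn

    class TDNNForceSketch(nn.Module):
        # Maps a short history of sEMG samples to an estimated contact force (N).
        def __init__(self, n_emg_channels=8, n_delays=20):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_emg_channels * n_delays, 64),  # tapped-delay input line
                nn.Tanh(),
                nn.Linear(64, 1),
            )

        def forward(self, emg_window):
            # emg_window: (batch, n_emg_channels, n_delays)
            return self.net(emg_window.flatten(1))

    # Training step sketch: regress against the measured contact force
    model = TDNNForceSketch()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    emg = torch.randn(32, 8, 20)    # synthetic sEMG history windows
    force = torch.randn(32, 1)      # synthetic force targets
    loss = nn.functional.mse_loss(model(emg), force)
    loss.backward()
    optimiser.step()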

    Development of ECG and EMG platform with IMU to eliminate the motion artifacts found in measurements

    The long-term measurement and analysis of electrophysiological parameters is crucial for the diagnosis of chronic diseases and for monitoring critical health parameters. It is also very important for monitoring the physical fitness improvement, or degradation, of people whose work depends critically on physical fitness, and of more vulnerable members of society such as senior citizens and the sick. State-of-the-art technological developments are leading to the use of artificial intelligence in the continuous monitoring and identification of life-threatening events in the daily life of ordinary people. However, these ambulatory measurements of electrophysiological parameters suffer from drastic motion artifacts caused by the test subject's movements. There is therefore a pressing need to develop both hardware and software solutions to address this challenge. The scope of this thesis is to develop a hardware platform, using off-the-shelf discrete and IC electronic components, to measure two electrophysiological parameters, the electrocardiogram (ECG) and the electromyogram (EMG), together with a nine-degree-of-freedom inertial measurement unit (IMU) motion sensor. The ECG, EMG and IMU data will be collected with the developed measurement platform during various predefined day-to-day routine activities. A Bluetooth interface will be developed to transmit the data wirelessly and record it on a laptop for further real-time processing. The resources of the electrical workshop and measurement lab at Aalto University will be used for the development, assembly, testing and, finally, research of the measurement platform. The second aspect of the study is to prepare, process and analyze the recorded ECG and EMG data using MATLAB. Various filtering, denoising, processing and analysis algorithms will be developed and executed to extract the features of the ECG and EMG waveform structures. Finally, graphical representations will be made of the resulting outputs of the aforementioned techniques.
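
    The thesis plans to implement the filtering in MATLAB; as a hedged illustration of the kind of pre-processing chain described, the Python/SciPy sketch below applies a typical band-pass and mains-notch combination (the cut-off frequencies, filter orders and 1 kHz sampling rate are assumptions):

    import numpy as np
    from scipy import signal

    FS = 1000.0  # assumed sampling rate in Hz

    def clean_ecg(ecg, fs=FS):
        # Band-pass 0.5-40 Hz plus a 50 Hz notch, a common ECG pre-processing chain.
        b, a = signal.butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
        ecg = signal.filtfilt(b, a, ecg)               # zero-phase band-pass
        bn, an = signal.iirnotch(50.0, Q=30.0, fs=fs)  # mains-interference notch
        return signal.filtfilt(bn, an, ecg)

    def clean_emg(emg, fs=FS):
        # Band-pass 20-450 Hz, the usual surface EMG band.
        b, a = signal.butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
        return signal.filtfilt(b, a, emg)

    # Example on a synthetic signal contaminated with 50 Hz interference
    t = np.arange(0, 5, 1 / FS)
    noisy = np.sin(2 * np.pi * 1.0 * t) + 0.2 * np.sin(2 * np.pi * 50.0 * t)
    filtered = clean_ecg(noisy)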

    Novel Time Domain Based Upper-Limb Prosthesis Control using Incremental Learning Approach

    The upper limb is vital for a wide range of daily human activities, and its complete or partial loss has a significant impact on an amputee's daily life. EMG carries important information about the human physique that helps to decode the various functions of the human arm, and EMG-signal-based bionics and prostheses have gained considerable research attention over the past decade. Conventional EMG pattern-recognition (EMG-PR) based prostheses struggle to deliver accurate performance because of their reliance on offline training and their inability to compensate for electrode position shift and changes in arm position. This work proposes an online-training and incremental-learning based system for upper-limb prosthetic applications. The system consists of an ADS1298 analog front end (AFE) and a 32-bit ARM Cortex-M4 processor for digital signal processing (DSP), and has been tested on both intact and amputee subjects. Time-derivative moment based features have been implemented and utilized for effective pattern classification. Initially, the system was trained for four classes using the online training process; the number of classes was then incremented on user demand up to eleven, and system performance was evaluated. The system yielded a completion rate of 100% for intact and amputee subjects when four motions were considered, and completion rates of 94.33% (intact) and 92% (amputee) when the number of classes was increased to eleven. A motion efficacy test was also carried out for all subjects, with the highest efficacy rates of 91.23% and 88.64% observed for intact and amputee subjects respectively. (15 pages, 8 figures; submitted to the IEEE for possible publication.)
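
    The on-demand growth of the class set can be sketched with a toy incremental classifier that keeps a running mean feature vector per motion class (an illustration only; the embedded Cortex-M4 implementation and its time-derivative moment features are not reproduced here):

    import numpy as np

    class IncrementalCentroidClassifier:
        # Toy online classifier: per-class running mean of feature vectors.
        # New motion classes can be added at any time, mimicking on-demand growth.
        def __init__(self):
            self.means, self.counts = {}, {}

        def partial_fit(self, X, y):
            for x, label in zip(X, y):
                if label not in self.means:
                    self.means[label] = np.zeros_like(x, dtype=float)
                    self.counts[label] = 0
                self.counts[label] += 1
                self.means[label] += (x - self.means[label]) / self.counts[label]

        def predict(self, X):
            labels = list(self.means)
            centroids = np.stack([self.means[l] for l in labels])
            dists = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
            return [labels[i] for i in dists.argmin(axis=1)]

    # Start with four motion classes, then add a fifth on user demand
    clf = IncrementalCentroidClassifier()
    clf.partial_fit(np.random.randn(40, 6), np.repeat(["rest", "open", "close", "point"], 10))
    clf.partial_fit(np.random.randn(10, 6), ["pinch"] * 10)
    predictions = clf.predict(np.random.randn(3, 6))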

    Real-time EMG based pattern recognition control for hand prostheses: a review on existing methods, challenges and future implementation

    Upper limb amputation is a condition that significantly restricts amputees from performing their daily activities. The myoelectric prosthesis, driven by signals from residual stump muscles, aims to restore the function of such lost limbs seamlessly. Unfortunately, the acquisition and use of such myosignals are cumbersome and complicated, and once acquired they usually require heavy computational power to be turned into a user control signal. The transition to a practical prosthesis solution is still challenged by various factors, particularly the fact that each amputee has different mobility, muscle contraction forces, limb positional variations and electrode placements. Thus, a solution that can adapt or otherwise tailor itself to each individual is required for maximum utility across amputees. Modified machine learning schemes for pattern recognition have the potential to significantly reduce the factors (user movement and muscle contraction) affecting traditional electromyography (EMG) pattern recognition methods. Although recent developments in intelligent pattern recognition techniques can discriminate multiple degrees of freedom with high accuracy, their efficiency has rarely been demonstrated or reported in real-world (amputee) applications. This review paper examined the suitability of upper limb prosthesis (ULP) inventions in the healthcare sector from their technical control perspective. More focus was given to the review of real-world applications and the use of pattern recognition control with amputees. We first reviewed the overall structure of pattern recognition schemes for myo-control prosthetic systems and then discussed their real-time use on amputee upper limbs. Finally, we concluded the paper with a discussion of the existing challenges and future research recommendations.
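
    Most of the reviewed controllers share the same sliding-window structure (segment the signal, extract features, classify, drive the prosthesis); the sketch below illustrates that generic pipeline in Python (the window and step lengths and the mean-absolute-value feature are illustrative choices, not taken from any single reviewed system):

    import numpy as np

    def sliding_windows(emg, window=200, step=50):
        # Yield overlapping analysis windows from a (n_channels, n_samples) buffer.
        for start in range(0, emg.shape[1] - window + 1, step):
            yield emg[:, start:start + window]

    def mean_absolute_value(window):
        # Per-channel mean absolute value, a common time-domain EMG feature.
        return np.mean(np.abs(window), axis=1)

    def control_loop(emg_buffer, classifier):
        # Generic windowing -> feature extraction -> classification loop;
        # `classifier` is any object with a scikit-learn style predict method.
        for window in sliding_windows(emg_buffer):
            features = mean_absolute_value(window).reshape(1, -1)
            yield classifier.predict(features)[0]  # predicted motion class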