120 research outputs found

    Simultaneous prediction of wrist/hand motion via wearable ultrasound sensing

    Get PDF

    From Unimodal to Multimodal: improving the sEMG-Based Pattern Recognition via deep generative models

    Full text link
    Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy than unimodal ones. However, acquiring multimodal gesture data typically requires users to wear additional sensors, thereby increasing hardware costs. This paper proposes a novel generative approach to improve Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals. Specifically, we first trained a deep generative model, based on the intrinsic correlation between forearm sEMG signals and forearm IMU signals, to generate virtual forearm IMU signals from the input forearm sEMG signals. Subsequently, the sEMG signals and virtual IMU signals were fed into a multimodal Convolutional Neural Network (CNN) model for gesture recognition. To evaluate the performance of the proposed approach, we conducted experiments on six databases: five publicly available databases and our own collected database of 28 subjects performing 38 gestures, containing both sEMG and IMU data. The results show that our proposed approach outperforms the sEMG-based unimodal HGR method, with accuracy increases of 2.15%-13.10%. This demonstrates that incorporating virtual IMU signals, generated by deep generative models, can significantly enhance the accuracy of sEMG-based HGR. The proposed approach represents a successful attempt to transition from unimodal to multimodal HGR without additional sensor hardware.
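    As a rough illustration of the pipeline this abstract describes, the sketch below (PyTorch) wires an sEMG-to-IMU generator into a two-branch multimodal CNN. The channel counts, window length, and layer sizes are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the virtual-IMU pipeline (illustrative only):
# an sEMG-to-IMU generator followed by a two-branch multimodal CNN.
# Channel counts, window length, and layer sizes are assumptions.
import torch
import torch.nn as nn

class VirtualIMUGenerator(nn.Module):
    """Maps an sEMG window (8 ch) to a virtual IMU window (6 ch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(8, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 6, kernel_size=5, padding=2),
        )
    def forward(self, semg):
        return self.net(semg)

class MultimodalCNN(nn.Module):
    """Fuses sEMG and (virtual) IMU branches for gesture classification."""
    def __init__(self, n_gestures=38):
        super().__init__()
        self.semg_branch = nn.Sequential(
            nn.Conv1d(8, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.imu_branch = nn.Sequential(
            nn.Conv1d(6, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(32, n_gestures)
    def forward(self, semg, imu):
        feats = torch.cat([self.semg_branch(semg).flatten(1),
                           self.imu_branch(imu).flatten(1)], dim=1)
        return self.classifier(feats)

gen, clf = VirtualIMUGenerator(), MultimodalCNN()
semg = torch.randn(4, 8, 200)      # batch of 200-sample sEMG windows
logits = clf(semg, gen(semg))      # classify using generated virtual IMU
print(logits.shape)                # torch.Size([4, 38])
```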

    Predicting Continuous Locomotion Modes via Multidimensional Feature Learning from sEMG

    Full text link
    Walking-assistive devices require adaptive control methods to ensure smooth transitions between various modes of locomotion. For this purpose, detecting human locomotion modes (e.g., level walking or stair ascent) in advance is crucial for improving the intelligence and transparency of such robotic systems. This study proposes Deep-STF, a unified end-to-end deep learning model designed for integrated feature extraction in the spatial, temporal, and frequency dimensions from surface electromyography (sEMG) signals. Our model enables accurate and robust continuous prediction of nine locomotion modes and 15 transitions at varying prediction time intervals, ranging from 100 to 500 ms. In addition, we introduce the concept of 'stable prediction time' as a distinct metric to quantify prediction efficiency. This term refers to the duration during which consistent and accurate predictions of mode transitions are made, measured from the time of the fifth correct prediction to the occurrence of the critical event leading to the task transition. The distinction between stable prediction time and prediction time is vital, as it underscores our focus on the precision and reliability of mode transition predictions. Experimental results showcased Deep-STF's cutting-edge prediction performance across diverse locomotion modes and transitions, relying solely on sEMG data. When forecasting 100 ms ahead, Deep-STF surpassed CNN and other machine learning techniques, achieving an outstanding average prediction accuracy of 96.48%. Even with an extended 500 ms prediction horizon, accuracy decreased only marginally, to 93.00%. The average stable prediction times for detecting upcoming transitions spanned 28.15 to 372.21 ms across the 100-500 ms prediction horizons.
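    The 'stable prediction time' metric is the abstract's most concrete definition, so here is a minimal NumPy sketch of it. Whether the five correct predictions must be consecutive, and the 10 ms sampling period, are my assumptions; the paper's exact protocol may differ.

```python
# Hedged sketch of the 'stable prediction time' metric: time from the
# fifth correct transition prediction to the critical event. The
# consecutive-run requirement and sampling period are assumptions.
import numpy as np

def stable_prediction_time(pred, true_label, event_idx, dt_ms=10.0):
    """pred: per-window predicted labels before the transition event.
    Returns ms from the 5th correct prediction (in a run of correct
    predictions leading up to the event) to the event, or None."""
    correct = (np.asarray(pred[:event_idx]) == true_label)
    run = 0
    for i, ok in enumerate(correct):
        run = run + 1 if ok else 0
        if run == 5:                    # fifth correct prediction
            return (event_idx - i) * dt_ms
    return None

preds = [0, 0, 1, 1, 1, 1, 1, 1, 1, 1]   # model settles on mode 1
print(stable_prediction_time(preds, true_label=1, event_idx=10))  # 40.0 ms
```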

    Human skill capturing and modelling using wearable devices

    Get PDF
    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans can adapt to uncertainties easily, most of the time the skill-based performances that relate to their tacit knowledge cannot be easily articulated. Even though the automation solution may not fully imitate human motion, since some motions are unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, allowing it to be transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from a human teacher to a robot, using probability encoding approaches to model observations and state-transition uncertainties. In close- or actual-contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human's senses and activities. Therefore, wearable sensors are investigated as promising devices for recording state-action examples without restricting human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. The data fusion method was able to overcome the occlusion and frame-flipping problems in the two-camera Vicon setup and the drifting problem associated with the IMUs; the results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated using the IMU measurements, and that the proposed method improves the Mean Square Error (MSE) tracking accuracy by 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interaction with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect means of indicating contact feedback during manual manipulation. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater-winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimates and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions, allowing a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy. To simplify the validation procedure, additional demonstrations from the teacher, rather than the robot, were used to verify the reproduction performance of the policy, under the assumption that the human teacher and robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where skilled human performance is required to cope robustly with various uncertainties during task execution.
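    As a rough sketch of the muscle-force model described above, the PyTorch snippet below implements a small Time Delayed Neural Network (TDNN) that maps a window of delayed sEMG samples to a scalar force estimate. The delay length, channel count, and layer sizes are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal TDNN sketch: a window of delayed sEMG samples is flattened
# and mapped to a scalar contact-force estimate. Sizes are assumptions.
import torch
import torch.nn as nn

N_CH, DELAYS = 8, 10       # 8 sEMG channels, 10 delayed samples per input

class TDNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(N_CH * DELAYS, 64), nn.Tanh(), nn.Linear(64, 1))
    def forward(self, x):               # x: (batch, N_CH, DELAYS)
        return self.mlp(x.flatten(1))   # scalar force estimate per window

model = TDNN()
semg_window = torch.randn(1, N_CH, DELAYS)   # most recent 10 samples
force = model(semg_window)                    # estimated contact force (N)
print(force.shape)                            # torch.Size([1, 1])
```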

    Worker Activity Recognition in Smart Manufacturing Using IMU and sEMG Signals with Convolutional Neural Networks

    Get PDF
    In a smart manufacturing system involving workers, recognition of a worker's activity can be used to quantify and evaluate the worker's performance, as well as to provide on-site instructions with augmented reality. In this paper, we propose a method for activity recognition using Inertial Measurement Unit (IMU) and surface electromyography (sEMG) signals obtained from a Myo armband. The raw 10-channel IMU signals are stacked to form a signal image. This image is transformed into an activity image by applying the Discrete Fourier Transform (DFT) and then fed into a Convolutional Neural Network (CNN) for feature extraction, resulting in a high-level feature vector. Another feature vector, representing the level of muscle activation, is computed from the raw 8-channel sEMG signals. These two vectors are then concatenated and used for worker activity classification. A worker activity dataset is established, which at present contains six common activities in assembly tasks, i.e., grab tool/part, hammer nail, use power screwdriver, rest arm, turn screwdriver, and use wrench. The developed CNN model is evaluated on this dataset and achieves 98% and 87% recognition accuracy in the half-half and leave-one-out experiments, respectively.
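    A minimal NumPy sketch of the activity-image construction described above: the stacked 10-channel IMU window is treated as a signal image and transformed with a DFT. The abstract does not state whether the DFT is applied per channel or in 2-D, nor how the image is normalised; this sketch assumes a 2-D DFT with max-normalisation.

```python
# Hedged sketch of the signal-image -> activity-image step.
# 2-D DFT and max-normalisation are assumptions, not the paper's spec.
import numpy as np

def activity_image(imu_window):
    """imu_window: (10, T) stacked IMU channels -> DFT-magnitude image."""
    img = np.abs(np.fft.fft2(imu_window))    # 2-D DFT of the signal image
    return img / (img.max() + 1e-8)          # normalise before the CNN

window = np.random.randn(10, 128)    # 10-channel IMU window, 128 samples
print(activity_image(window).shape)  # (10, 128)
```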

    Deep Learning Based Abnormal Gait Classification System Study with Heterogeneous Sensor Network

    Get PDF
    Gait is one of the important biological characteristics of the human body. Abnormal gait is mostly related to the lesion site and has been demonstrated to play a guiding role in clinical research such as medical diagnosis and disease prevention. To promote research on automatic gait pattern recognition, this paper reviews the state of the art in abnormal gait recognition and systematically analyses common gait recognition technologies. On this basis, two gait information extraction methods, sensor-based and vision-based, are studied, covering wearable system design and deep neural network-based algorithm design. In the sensor-based study, we proposed a lower-limb data acquisition system, and an experiment was designed to collect acceleration signals and sEMG signals under normal and pathological gaits. Specifically, wearable hardware based on the MSP430 and host-computer software based on LabVIEW were designed. The hardware system consists of an EMG foot ring, a high-precision IMU and a pressure-sensitive intelligent insole. Walking data were collected from 15 healthy subjects and 15 hemiplegic patients. Gait classification based on sEMG reached an average accuracy of 92.8% with a CNN. For the IMU signals, five kinds of abnormal gait were classified using three models: BPNN, LSTM, and CNN. The experimental results show that the system, combined with the neural network models, can classify different pathological gaits well, and the average accuracy of the six-class task reaches 93%. In the vision-based research, we use human keypoint detection technology: precise keypoint locations are obtained by fusing heatmaps and offsets, from which the spatio-temporal information of the keypoints is extracted. However, the results show that even the state-of-the-art keypoint detection is not yet good enough to replace IMUs in gait analysis and classification. Encouragingly, the gait rhythm can be observed within 2 m, which shows that the extracted spatio-temporal keypoint information is highly correlated with the acceleration information collected by the IMU, paving the way for vision-based abnormal gait classification algorithms.
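    As an illustration of the IMU-based classifiers mentioned above, here is a minimal PyTorch sketch of an LSTM gait classifier for the six-class task. The input dimensionality, hidden size, and window length are assumptions, not the thesis's actual configuration.

```python
# Illustrative LSTM gait classifier over windows of 6-axis IMU data.
# Input size, hidden size, and window length are assumptions.
import torch
import torch.nn as nn

class GaitLSTM(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=6, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)
    def forward(self, x):               # x: (batch, time, 6 accel/gyro ch)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])      # classify from final hidden state

model = GaitLSTM()
print(model(torch.randn(2, 100, 6)).shape)   # torch.Size([2, 6])
```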

    Subject-Independent Frameworks for Robotic Devices: Applying Robot Learning to EMG Signals

    Get PDF
    The capability of having humans and robots cooperate has increased interest in the control of robotic devices by means of physiological human signals. To achieve this goal, it is crucial to capture the human's intention of movement and to translate it into a coherent robot action. Until now, the classical approach with physiological signals, and EMG signals in particular, has been to focus on the specific subject performing the task, owing to the great complexity of these signals. This thesis aims to expand the state of the art by proposing a general subject-independent framework, able to extract the common constraints of human movement by looking at several demonstrations by many different subjects. The variability introduced into the system by multiple demonstrations from many different subjects allows the construction of a robust model of human movement, able to cope with small variations and signal deterioration. Furthermore, the obtained framework can be used by any subject with no need for long training sessions. The signals undergo an accurate preprocessing phase to remove noise and artefacts. Following this procedure, we are able to extract significant information to be used in online processes. Human movement can be estimated using well-established statistical methods from Robot Programming by Demonstration applications: in particular, the input can be modelled with a Gaussian Mixture Model (GMM). The performed movement can then be continuously estimated with a Gaussian Mixture Regression (GMR) technique, or identified among a set of possible movements with a Gaussian Mixture Classification (GMC) approach. We improved the results by incorporating prior information in the model, in order to enrich the knowledge of the system. In particular, we considered the hierarchical information provided by a quantitative taxonomy of hand grasps. To this end, we developed the first quantitative taxonomy of hand grasps considering both muscular and kinematic information from 40 subjects. The results proved the feasibility of a subject-independent framework, even when considering physiological signals, like EMG, from a wide number of participants. The proposed solution has been used in two different kinds of applications: (I) the control of prosthetic devices, and (II) an Industry 4.0 facility, allowing humans and robots to work alongside each other or to cooperate. Indeed, a crucial aspect of making humans and robots work together is their mutual knowledge and anticipation of each other's tasks, and physiological signals can provide information even before a movement starts. This thesis also presents an application of Robot Programming by Demonstration in a real industrial facility, to optimise the production of electric motor coils. The task was part of the European Robotic Challenge (EuRoC), and the goal was divided into phases of increasing complexity. The solution exploits machine learning algorithms, like GMM, and robustness was ensured by considering demonstrations of the task from many subjects. We were able to apply an advanced research topic in a real factory, achieving promising results.
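    Since GMM/GMR is central to this thesis, the sketch below shows Gaussian Mixture Regression on toy trajectory data: scikit-learn fits a GMM over joint (time, position) samples, and the conditional expectation is computed manually. The component count and data are illustrative, and this is generic GMR, not the thesis's modified variant.

```python
# Hedged GMR sketch: fit a GMM on joint (time, position) data, then
# condition on time to regress the expected position.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmr(gmm, x, d_in=1):
    """Condition a fitted GMM on the first d_in dims; return E[y | x]."""
    w, y = [], []
    for pi, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        Sxx, Sxy = S[:d_in, :d_in], S[:d_in, d_in:]
        w.append(pi * multivariate_normal.pdf(x, mu[:d_in], Sxx))
        y.append(mu[d_in:] + Sxy.T @ np.linalg.solve(Sxx, x - mu[:d_in]))
    w = np.array(w) / np.sum(w)                   # responsibilities
    return sum(wk * yk for wk, yk in zip(w, y))   # weighted conditional mean

# Toy demonstrations: five noisy 1-D trajectories indexed by time.
np.random.seed(0)
t = np.tile(np.linspace(0, 1, 100), 5)
pos = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)
gmm = GaussianMixture(n_components=5, random_state=0).fit(
    np.column_stack([t, pos]))
print(gmr(gmm, np.array([0.25])))   # expected position near sin(pi/2) = 1
```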