
    Sit-to-Stand Phases Detection by Inertial Sensors

    The Sit-to-Stand (STS) movement is defined as the transition from the sitting to the standing position. It is commonly assessed in clinical practice because musculoskeletal or neurological degenerative disorders, as well as natural ageing, make rising from a seated position increasingly difficult. This study aimed to detect the STS phases using data from inertial sensors. Because of the high variability of this movement, and consequently the difficulty of defining events by thresholds, we used machine learning. We collected data from 27 participants (13 females, 24.37 ± 3.32 years old). They wore 10 inertial sensors placed on the trunk, the back (L4-L5), and the left and right thighs, tibias, and ankles. The participants were asked to stand up from a height-adjustable chair 10 times, and each STS exercise was recorded separately. The starting and ending points of each phase were identified by key events. Pre-processing included splitting the phases into epochs. The features extracted on the epochs for each sensor were: mean, standard deviation, RMS, maximum and minimum, coefficient of variation (COV), and first derivative. To identify the best-fitting classifier, two algorithms, K-Nearest Neighbours (KNN) and Support Vector Machine (SVM), were trained. From the recorded data, four datasets were created by varying the epoch duration and the number of sensors. As validation models, we compared classifiers trained with K-fold and Leave-One-Subject-Out (LOSO) cross-validation. Classifier performance was evaluated with confusion matrices and F1 scores. The classifiers validated with LOSO showed higher predictive accuracy than those validated with K-fold. The predictive accuracies of KNN and SVM were:
    • K-fold
      – KNN: mean overall predictive accuracy 0.75; F1 scores: REST 0.86, TRUNK LEANING 0.35, STANDING 0.60, BALANCE 0.54, SITTING 0.55
      – SVM: mean overall predictive accuracy 0.75; F1 scores: REST 0.89, TRUNK LEANING 0.48, STANDING 0.48, BALANCE 0.59, SITTING 0.62
    • LOSO
      – KNN: mean overall predictive accuracy 0.93; F1 scores: REST 0.96, TRUNK LEANING 0.79, STANDING 0.89, BALANCE 0.95, SITTING 0.88
      – SVM: mean overall predictive accuracy 0.95; F1 scores: REST 0.98, TRUNK LEANING 0.86, STANDING 0.91, BALANCE 0.98, SITTING 0.90
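
    The pipeline above (per-epoch features, KNN/SVM, LOSO validation) can be summarised in a short sketch. The following Python example, assuming scikit-learn, is illustrative only: the feature function, data shapes, and hyperparameters (n_neighbors, RBF kernel) are assumptions rather than the authors' implementation.

```python
# Minimal sketch of epoch-based feature extraction and subject-wise (LOSO)
# validation, mirroring the features listed in the abstract.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def epoch_features(epoch):
    """Per-epoch features for one sensor channel: mean, SD, RMS, max, min, COV, first derivative."""
    return np.array([
        epoch.mean(),
        epoch.std(),
        np.sqrt(np.mean(epoch ** 2)),
        epoch.max(),
        epoch.min(),
        epoch.std() / (abs(epoch.mean()) + 1e-12),   # coefficient of variation
        np.mean(np.diff(epoch)),                     # first derivative (mean slope)
    ])

def evaluate_loso(X, y, groups):
    """X: one feature row per epoch, y: phase label, groups: subject ID (one subject left out per fold)."""
    logo = LeaveOneGroupOut()
    for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                      ("SVM", SVC(kernel="rbf"))]:
        acc = cross_val_score(clf, X, y, cv=logo, groups=groups)
        print(f"{name}: mean LOSO accuracy = {acc.mean():.2f}")
```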

    Predicting Continuous Locomotion Modes via Multidimensional Feature Learning from sEMG

    Walking-assistive devices require adaptive control methods to ensure smooth transitions between various modes of locomotion. For this purpose, detecting human locomotion modes (e.g., level walking or stair ascent) in advance is crucial for improving the intelligence and transparency of such robotic systems. This study proposes Deep-STF, a unified end-to-end deep learning model designed for integrated feature extraction in the spatial, temporal, and frequency dimensions from surface electromyography (sEMG) signals. Our model enables accurate and robust continuous prediction of nine locomotion modes and 15 transitions at varying prediction time intervals, ranging from 100 to 500 ms. In addition, we introduce the concept of 'stable prediction time' as a distinct metric to quantify prediction efficiency. This term refers to the duration during which consistent and accurate predictions of mode transitions are made, measured from the time of the fifth correct prediction to the occurrence of the critical event leading to the task transition. The distinction between stable prediction time and prediction time is vital, as it underscores our focus on the precision and reliability of mode transition predictions. Experimental results showcased Deep-STF's cutting-edge prediction performance across diverse locomotion modes and transitions, relying solely on sEMG data. When forecasting 100 ms ahead, Deep-STF surpassed CNN and other machine learning techniques, achieving an outstanding average prediction accuracy of 96.48%. Even with an extended 500 ms prediction horizon, accuracy decreased only marginally, to 93.00%. The averaged stable prediction times for detecting upcoming transitions spanned 28.15 to 372.21 ms across the 100-500 ms prediction horizons.
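
    Because 'stable prediction time' is defined above in procedural terms, a small sketch can make the metric concrete. This Python example reflects an assumed counting rule (fifth correct prediction of the upcoming mode, measured against the event time) and is not the authors' code.

```python
# Illustrative computation of the stable prediction time: the interval from
# the fifth correct prediction of the upcoming mode to the critical event
# that triggers the transition. Counting details are assumed.
import numpy as np

def stable_prediction_time(pred_modes, timestamps_ms, true_next_mode, event_time_ms):
    """Return event_time_ms minus the timestamp of the fifth correct prediction,
    or None if fewer than five correct predictions occur before the event."""
    correct_times = [t for p, t in zip(pred_modes, timestamps_ms)
                     if p == true_next_mode and t <= event_time_ms]
    if len(correct_times) < 5:
        return None
    return event_time_ms - correct_times[4]

# Example: one prediction every 10 ms, transition event at 300 ms.
preds = ["walk"] * 10 + ["stair_ascent"] * 20
times = list(np.arange(0, 300, 10))
print(stable_prediction_time(preds, times, "stair_ascent", 300.0))  # 160.0
```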

    Sensor-Based Adaptive Control and Optimization of Lower-Limb Prosthesis.

    Recent developments in prosthetics have enabled the development of powered prosthetic ankles (PPA). The advent of such technologies has drastically improved impaired gait, increasing balance and reducing metabolic energy consumption by providing net positive power. However, control challenges limit the performance and feasibility of today's devices. With the addition of sensors and motors, PPA systems must continuously make control decisions and adapt by manipulating the control parameters of the prosthesis. There are multiple challenges in the optimization and control of PPAs. A prominent one is objectively setting up the system and calibrating its parameters to fit each subject. Another is whether changes in intention and terrain can be detected in advance and how the system should react and adapt to them. In the first part of this study, a model for energy expenditure was proposed using electromyogram (EMG) signals from the residual lower limbs of PPA users. The proposed model was optimized to minimize energy expenditure. Optimization was performed using a modified Nelder-Mead approach with Latin Hypercube sampling. Results of the proposed method were compared to expert-tuned values, and it was shown to be a feasible alternative that tunes the device in less time. In the second part of the study, the control challenges arising from the lack of adaptivity in PPAs were investigated. The PPA system used here is enhanced with impedance-control parameters that allow it to provide different levels of assistance. However, current systems are set to fixed values and fail to account for the varying terrain and intentions encountered throughout the day. In this study, a pseudo-real-time adaptive control system was proposed to predict changes in gait and provide a smoother gait. The proposed control system fused physiological, kinetic, and kinematic data to predict the change, using machine learning-based methods. Results showed an accuracy of up to 89.7 percent for predicting the change across four different cases.
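
    As a rough illustration of the first part of the study (Latin Hypercube seeding plus Nelder-Mead search over prosthesis control parameters), the sketch below uses SciPy; the objective function, parameter bounds, and number of starting points are placeholders, not the subject-specific energy-expenditure model described above.

```python
# Sketch: Latin Hypercube starting points + Nelder-Mead to minimize an
# energy-expenditure objective over impedance-control parameters.
# `energy_cost`, the bounds, and the dimensionality are illustrative placeholders.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def energy_cost(params):
    # Placeholder standing in for the EMG-based energy-expenditure model.
    return float(np.sum((params - np.array([0.4, 0.6, 0.2])) ** 2))

bounds = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])   # assumed parameter ranges
sampler = qmc.LatinHypercube(d=len(bounds), seed=0)
starts = qmc.scale(sampler.random(n=5), bounds[:, 0], bounds[:, 1])

best = min((minimize(energy_cost, x0, method="Nelder-Mead") for x0 in starts),
           key=lambda res: res.fun)
print(best.x, best.fun)
```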

    Electromyography-Based Control of Lower Limb Prostheses: A Systematic Review

    Most amputations occur in the lower limbs, and despite improvements in prosthetic technology, no commercially available prosthetic leg uses electromyography (EMG) information as an input for control. Efforts to integrate EMG signals as part of the control strategy have increased in the last decade. In this systematic review, we summarize the research in the field of lower limb prosthetic control using EMG. Four online databases were searched until June 2022: Web of Science, Scopus, PubMed, and Science Direct. We included articles that reported systems for controlling a prosthetic leg (with an ankle and/or knee actuator) by decoding gait intent using EMG signals alone or in combination with other sensors. A total of 1,331 papers were initially assessed and 121 were finally included in this systematic review. The literature showed that, despite the burgeoning research interest, controlling a leg prosthesis using EMG signals remains challenging, specifically with regard to EMG signal quality and stability, electrode placement, prosthetic hardware, and control algorithms, all of which need to be more robust for everyday use. In the studies investigated, large variations were found between the control methodologies, types of research participants, recording protocols, assessments, and prosthetic hardware.

    A review on locomotion mode recognition and prediction when using active orthoses and exoskeletons

    Understanding how to seamlessly adapt the assistance of lower-limb wearable assistive devices (active orthoses (AOs) and exoskeletons) to human locomotion modes (LMs) is challenging. Several algorithms and sensors have been explored to recognize and predict users' LMs. Nevertheless, it is not yet clear which sensor and classifier configurations are the most used and effective in AOs/exoskeletons, nor how these devices' control is adapted according to the decoded LMs. To explore these aspects, we performed a systematic review by electronic search of the Scopus and Web of Science databases, including studies published from 1 January 2010 to 31 August 2022. Sixteen studies were included and scored at 84.7 ± 8.7% quality. Decoding focused on level-ground walking along with stair ascent/descent tasks performed by healthy subjects. Time-domain raw data from inertial measurement unit sensors were the most used data. Different classifiers were employed depending on the LMs to decode (accuracy above 90% for all tasks). Five studies adapted the assistance of AOs/exoskeletons according to the decoded LM, and only one of these predicted the new LM before its occurrence. Future research is encouraged to develop decoding tools considering data from people with lower-limb impairments walking at self-selected speeds while performing daily LMs with AOs/exoskeletons. This work was funded in part by the Fundação para a Ciência e Tecnologia (FCT) with the Reference Scholarship under grant 2020.05711.BD and under the Stimulus of Scientific Employment grant 2020.03393.CEECIND, in part by the FEDER Funds through the COMPETE 2020 Programa Operacional Competitividade e Internacionalização (POCI) and P2020 with the Reference Project SmartOs Grant POCI-01-0247-FEDER-039868, and by FCT national funds, under the national support to R&D units grant, through the reference projects UIDB/04436/2020 and UIDP/04436/2020.

    Human lower limb activity recognition techniques, databases, challenges and its applications using sEMG signal: an overview

    Human lower limb activity recognition (HLLAR) has grown in popularity over the last decade, mainly due to its applications in the identification and control of neuromuscular disorders, security, robotics, and prosthetics. Surface electromyography (sEMG) sensors provide various advantages over other wearable or visual sensors for HLLAR applications, including quick response, pervasiveness, no need for medical monitoring, and negligible infection risk. Recognizing lower limb activity from sEMG signals is nevertheless challenging owing to the noise in the sEMG signal. Pre-processing of sEMG signals is highly desirable before classification because it allows a more consistent and precise evaluation in the above applications. This article provides a segment-by-segment overview of: (1) techniques for eliminating artifacts from lower limb sEMG signals; (2) a survey of existing datasets of lower limb sEMG; (3) a concise description of the various techniques for processing and classifying sEMG data for various applications involving lower limb activity. Finally, an open discussion is presented, which may result in the identification of a variety of future research possibilities for human lower limb activity recognition. It is therefore possible to anticipate that the framework presented in this study can aid in the advancement of sEMG-based recognition of human lower limb activity.
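
    As a concrete illustration of item (1) above, the sketch below shows a typical sEMG conditioning chain (band-pass plus powerline-notch filtering) in Python with SciPy; the cutoff frequencies, notch frequency, and sampling rate are common assumptions, not values taken from the article.

```python
# Typical sEMG conditioning: 20-450 Hz band-pass to suppress motion artifact
# and high-frequency noise, plus a 50 Hz notch for powerline interference.
# Filter settings and sampling rate are assumptions.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess_semg(raw, fs=1000.0):
    b_bp, a_bp = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    x = filtfilt(b_bp, a_bp, raw)
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=fs)   # use 60 Hz where applicable
    return filtfilt(b_n, a_n, x)

emg = np.random.randn(5000)   # stand-in for one raw sEMG channel sampled at 1 kHz
clean = preprocess_semg(emg)
```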

    Surface Electromyography and Artificial Intelligence for Human Activity Recognition - A Systematic Review on Methods, Emerging Trends, Applications, Challenges, and Future Implementation

    Human activity recognition (HAR) has become increasingly popular in recent years due to its potential to meet the growing needs of various industries. Electromyography (EMG) is essential in various clinical and biological settings. It is a metric that helps doctors diagnose conditions affecting muscle activation patterns and monitor patients' progress in rehabilitation, disease diagnosis, motion intention recognition, etc. This review summarizes the various research papers based on HAR with EMG. Over recent years, the integration of Artificial Intelligence (AI) has catalyzed remarkable advancements in the classification of biomedical signals, with a particular focus on EMG data. Firstly, this review meticulously curates a wide array of research papers that have contributed significantly to the evolution of EMG-based activity recognition. By surveying the existing literature, we provide an insightful overview of the key findings and innovations that have propelled this field forward. The review then explores the various approaches utilized for preprocessing EMG signals, including noise reduction, baseline correction, filtering, and normalization, which ensure that the EMG data are suitably prepared for subsequent analysis. In addition, we unravel the multitude of techniques employed to extract meaningful features from raw EMG data, encompassing both time-domain and frequency-domain features. These techniques are fundamental to achieving a comprehensive characterization of muscle activity patterns. Furthermore, we provide an extensive overview of both Machine Learning (ML) and Deep Learning (DL) classification methods, showcasing their respective strengths, limitations, and real-world applications in recognizing diverse human activities from EMG signals. In examining the hardware infrastructure for HAR with EMG, the synergy between hardware and software is underscored as paramount for enabling real-time monitoring. Finally, we also identify open issues and future research directions that may point to new lines of inquiry for ongoing research toward EMG-based detection.
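
    To ground the feature-extraction discussion above, the sketch below computes a few of the time-domain and frequency-domain EMG features most commonly cited in this literature (MAV, RMS, zero crossings, waveform length, mean frequency); the exact feature set and window handling are illustrative assumptions, not taken from the review.

```python
# Common time- and frequency-domain EMG features for one analysis window.
# Feature choices and the 256-sample window are illustrative assumptions.
import numpy as np

def emg_features(window, fs=1000.0):
    feats = {
        "mav": float(np.mean(np.abs(window))),             # mean absolute value
        "rms": float(np.sqrt(np.mean(window ** 2))),        # root mean square
        "zc": int(np.sum(window[:-1] * window[1:] < 0)),    # zero crossings
        "wl": float(np.sum(np.abs(np.diff(window)))),       # waveform length
    }
    spectrum = np.abs(np.fft.rfft(window)) ** 2             # power spectrum
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    feats["mean_freq"] = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return feats

print(emg_features(np.random.randn(256)))
```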