
    EMG-driven control in lower limb prostheses: a topic-based systematic review

    Background The inability of users to directly and intuitively control their state-of-the-art commercial prostheses contributes to a low device acceptance rate. Since electromyography (EMG)-based control has the potential to address these limitations, research has flourished on its incorporation into microprocessor-controlled lower limb prostheses (MLLPs). However, despite the proposed benefits, there is no clear explanation for the absence of a commercial product, in contrast to their upper limb counterparts. Objective and methodologies This manuscript aims to provide a comparative overview of EMG-driven control methods for MLLPs, to identify their prospects and limitations, and to formulate suggestions for future research and development. This is done by systematically reviewing academic studies on EMG MLLPs. In particular, this review is structured around four major topics: (1) type of neuro-control, which discusses methods that allow the nervous system to control prosthetic devices through the muscles; (2) type of EMG-driven controllers, which defines the different classes of EMG controllers proposed in the literature; (3) type of neural input and processing, which describes how EMG-driven controllers are implemented; (4) type of performance assessment, which reports the performance of current state-of-the-art controllers. Results and conclusions The results show that the lack of quantitative and standardized measures makes it difficult to analytically compare the performance of different EMG-driven controllers. As a consequence, the real efficacy of EMG-driven controllers for MLLPs has yet to be validated. Nevertheless, pending the development of a standardized approach for validating EMG MLLPs, the literature suggests that combining multiple neuro-controller types has the potential to yield more seamless and reliable EMG-driven control.
This solution promises to retain the high performance of the currently employed non-EMG-driven controllers for rhythmic activities such as walking, whilst improving performance in volitional activities such as task switching or non-repetitive movements. Although EMG-driven controllers suffer from many drawbacks, such as high sensitivity to noise, recent progress in invasive neural interfaces for prosthetic control (bionics) will make it possible to build a more reliable connection between the user and the MLLP. Therefore, advancements in powered MLLPs with integrated EMG-driven control have the potential to strongly reduce the effects of the psychosomatic conditions and musculoskeletal degenerative pathologies that currently affect lower limb amputees.

    Electromyography-Based Control of Lower Limb Prostheses: A Systematic Review

    Most amputations occur in the lower limbs, and despite improvements in prosthetic technology, no commercially available prosthetic leg uses electromyography (EMG) information as an input for control. Efforts to integrate EMG signals as part of the control strategy have increased in the last decade. In this systematic review, we summarize the research in the field of lower limb prosthetic control using EMG. Four online databases were searched until June 2022: Web of Science, Scopus, PubMed, and Science Direct. We included articles that reported systems for controlling a prosthetic leg (with an ankle and/or knee actuator) by decoding gait intent using EMG signals alone or in combination with other sensors. A total of 1,331 papers were initially assessed and 121 were finally included in this systematic review. The literature showed that despite the burgeoning interest in research, controlling a leg prosthesis using EMG signals remains challenging, specifically with regard to EMG signal quality and stability, electrode placement, prosthetic hardware, and control algorithms, all of which need to be more robust for everyday use. In the studies that were investigated, large variations were found between the control methodologies, types of research participant, recording protocols, assessments, and prosthetic hardware.

    Daily locomotion recognition and prediction: A kinematic data-based machine learning approach

    More versatile, user-independent tools for recognizing and predicting locomotion modes (LMs) and LM transitions (LMTs) in natural gaits are still needed. This study tackles these challenges by proposing an automatic, user-independent recognition and prediction tool that uses easily wearable kinematic motion sensors to classify several LMs (walking direction, level-ground walking, stair ascent and descent, and ramp ascent and descent) and the respective LMTs. We compared diverse state-of-the-art feature processing and dimensionality reduction methods and machine-learning classifiers to find an effective tool for recognition and prediction of LMs and LMTs. The comparison included kinematic patterns from 10 able-bodied subjects. The most accurate tool was achieved using min-max scaling to the [-1, 1] interval and the 'mRMR plus forward selection' algorithm for feature normalization and dimensionality reduction, respectively, together with a Gaussian support vector machine classifier. The developed tool was accurate in the recognition (accuracy >99% and >96%) and prediction (accuracy >99% and >93%) of daily LMs and LMTs, respectively, using exclusively kinematic data. The use of kinematic data yielded an effective recognition and prediction tool, predicting the LMs and LMTs one step ahead. This timely prediction is relevant for assistive devices providing personalized assistance in daily scenarios.
The kinematic data-based machine learning tool innovatively addresses several LMs and LMTs while allowing the user to self-select the leading limb to perform LMTs, ensuring a natural gait. This work was supported in part by the Fundação para a Ciência e Tecnologia (FCT) with the Reference Scholarship under Grants SFRH/BD/108309/2015 and SFRH/BD/147878/2019, by the FEDER Funds through the Programa Operacional Regional do Norte and national funds from FCT with the project SmartOs under Grant NORTE-01-0145-FEDER-030386, and through the COMPETE 2020—Programa Operacional Competitividade e Internacionalização (POCI)—with the Reference Project under Grant POCI-01-0145-FEDER-006941.
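The processing chain described in this abstract (min-max scaling to [-1, 1], 'mRMR plus forward selection', Gaussian SVM) can be sketched as a scikit-learn pipeline. This is a minimal illustration on synthetic data, not the authors' implementation: the data, class count, and feature counts are invented, and mutual-information-based selection is used here as a stand-in because scikit-learn ships no mRMR implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for windowed kinematic features:
# 200 windows, 30 features, 4 hypothetical locomotion modes
X = rng.normal(size=(200, 30))
y = rng.integers(0, 4, size=200)
X[:, 0] += y  # make one feature informative so the toy problem is learnable

pipe = make_pipeline(
    MinMaxScaler(feature_range=(-1, 1)),     # min-max scaling to [-1, 1]
    SelectKBest(mutual_info_classif, k=10),  # stand-in for mRMR + forward selection
    SVC(kernel="rbf"),                       # Gaussian (RBF) support vector machine
)
pipe.fit(X, y)
acc = pipe.score(X, y)
```

In a real replication, the feature-selection step would be replaced by an actual mRMR ranking followed by wrapper-style forward selection, and accuracy would be measured on held-out subjects rather than on the training data as here.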

    An adaptive hybrid control architecture for an active transfemoral prosthesis

    The daily usage of a prosthesis by people with an amputation consists of phases of intermittent and continuous walking patterns. Based on this observation, this paper introduces a novel hybrid architecture to control a transfemoral prosthesis, where separate algorithms are used depending on these two different types of movement. For intermittent walking, an interpolation-based algorithm generates control signals for the ankle and knee joints, whereas, for continuous walking, the control signals are generated using an adaptive frequency oscillator. A switching strategy that allows for smooth transitioning from one controller to another is also presented in the design of the architecture. The individual algorithms for the generation of the joint angle references, along with the switching strategy, were experimentally validated in a pilot test with a healthy subject wearing an able-bodied adapter and a designed transfemoral prosthesis. The results demonstrate the capability of the individual algorithms to generate the required control signals while undergoing smooth transitions when required. Through a combination of interpolation and adaptive frequency oscillator-based methods, the controller also demonstrates its ability to adapt its response to various walking speeds.
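To make the continuous-walking branch concrete, the sketch below shows a textbook phase-oscillator form of an adaptive frequency oscillator (in the style of Righetti and Ijspeert), which locks onto a periodic gait signal and adapts its intrinsic frequency toward the gait frequency. The coupling gain, time step, and input signal are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def afo_track(signal, dt, k=10.0, omega0=3.0):
    """Adaptive frequency oscillator (phase form): the phase phi
    synchronizes with the periodic input u, while the intrinsic
    frequency omega (rad/s) adapts toward the input frequency."""
    phi, omega = 0.0, omega0
    history = np.empty(len(signal))
    for i, u in enumerate(signal):
        coupling = -k * u * np.sin(phi)  # perturbation from the input
        phi += dt * (omega + coupling)   # phase dynamics
        omega += dt * coupling           # frequency adaptation
        history[i] = omega
    return history

dt = 0.001
t = np.arange(0.0, 30.0, dt)
gait = np.cos(2.0 * np.pi * 1.0 * t)  # 1 Hz "continuous walking" signal
omega_hist = afo_track(gait, dt)      # omega drifts toward 2*pi rad/s
```

Once the oscillator is locked, its phase can index a joint-angle trajectory table, which is what makes this family of controllers naturally speed-adaptive.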


    Predicting Continuous Locomotion Modes via Multidimensional Feature Learning from sEMG

    Walking-assistive devices require adaptive control methods to ensure smooth transitions between various modes of locomotion. For this purpose, detecting human locomotion modes (e.g., level walking or stair ascent) in advance is crucial for improving the intelligence and transparency of such robotic systems. This study proposes Deep-STF, a unified end-to-end deep learning model designed for integrated feature extraction in the spatial, temporal, and frequency dimensions from surface electromyography (sEMG) signals. Our model enables accurate and robust continuous prediction of nine locomotion modes and 15 transitions at varying prediction time intervals, ranging from 100 to 500 ms. In addition, we introduce the concept of 'stable prediction time' as a distinct metric to quantify prediction efficiency. This term refers to the duration during which consistent and accurate predictions of mode transitions are made, measured from the time of the fifth correct prediction to the occurrence of the critical event leading to the task transition. This distinction between stable prediction time and prediction time is vital, as it underscores our focus on the precision and reliability of mode transition predictions. Experimental results showcased Deep-STF's cutting-edge prediction performance across diverse locomotion modes and transitions, relying solely on sEMG data. When forecasting 100 ms ahead, Deep-STF surpassed CNN and other machine learning techniques, achieving an outstanding average prediction accuracy of 96.48%. Even with an extended 500 ms prediction horizon, accuracy only marginally decreased to 93.00%. The averaged stable prediction times for detecting upcoming transitions spanned from 28.15 to 372.21 ms across the 100-500 ms time advances.
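The 'stable prediction time' metric defined in this abstract (time from the fifth correct prediction of the upcoming mode to the critical event) lends itself to a short sketch. The prediction stream, sampling period, and labels below are hypothetical; only the counting rule follows the abstract's description.

```python
def stable_prediction_time(preds, target_mode, event_idx, dt_ms, n_required=5):
    """Milliseconds from the n_required-th correct prediction of the
    upcoming mode to the critical event at index event_idx.
    Returns 0.0 if n_required correct predictions never occur."""
    correct = 0
    for i, p in enumerate(preds[:event_idx]):
        if p == target_mode:
            correct += 1
            if correct == n_required:
                return (event_idx - i) * dt_ms
    return 0.0

# Hypothetical classifier output sampled every 10 ms; the transition
# event occurs at index 8. Five correct "stair" calls land at index 6.
stream = ["walk", "stair", "stair", "walk", "stair", "stair", "stair", "stair"]
spt = stable_prediction_time(stream, "stair", event_idx=8, dt_ms=10.0)
# spt == 20.0 ms for this toy stream
```

The point of anchoring the metric at the fifth correct call, rather than the first, is that an isolated early hit does not count as a stable, usable decision for switching a prosthesis controller.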

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator as the supervisor with a machine as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). 
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMU to derive body kinematics in real-time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm that are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference; this thesis established that arm pose also changes the measured signal. A new method of fusing IMU and MMG is introduced to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed. 
These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and trigger this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; however, mechanomyography sensors are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
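The classification stage described in this thesis abstract (Linear Discriminant Analysis over fused MMG and IMU features) can be sketched as follows. The feature layout, gesture count, and synthetic data here are invented for illustration; the abstract reports the method and accuracies but not the feature vectors themselves.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 5, size=n)        # 5 hypothetical hand gestures
X = rng.normal(size=(n, 10))          # columns 0-5: MMG band features (e.g. RMS);
                                      # columns 6-9: IMU pose/orientation features
X[:, :6] += 0.8 * y[:, None]          # MMG channels carry the gesture information

# Including the IMU pose features in the same feature vector is one way
# to let the classifier compensate for pose-dependent shifts in the MMG
# signal (the interference source the thesis identifies).
clf = LinearDiscriminantAnalysis().fit(X, y)
acc = clf.score(X, y)
```

A faithful reproduction would extract windowed MMG features from the six-sensor forearm band and evaluate on held-out recordings, since training-set accuracy overstates real-time performance.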

    ESTIMATION OF MULTI-DIRECTIONAL ANKLE IMPEDANCE AS A FUNCTION OF LOWER EXTREMITY MUSCLE ACTIVATION

    The purpose of this research is to investigate the relationship between the mechanical impedance of the human ankle and the corresponding lower extremity muscle activity. Three experimental studies were performed to measure the ankle impedance about multiple degrees of freedom (DOF), while the ankle was subjected to different loading conditions and different levels of muscle activity. The first study determined the non-loaded ankle impedance in the sagittal, frontal, and transverse anatomical planes while the ankle was suspended above the ground. The subjects actively co-contracted their agonist and antagonist muscles to various levels, measured using electromyography (EMG). An Artificial Neural Network (ANN) was implemented to characterize the relationship between the EMG and non-loaded ankle impedance in 3-DOF. The next two studies determined the ankle impedance and muscle activity during standing, while the foot and ankle were subjected to ground perturbations in the sagittal and frontal planes. These studies investigated the performance of subject-dependent models and aggregated models, and the feasibility of a generic, subject-independent model to predict ankle impedance based on the muscle activity of any person. Several regression models (Least Squares, Support Vector Machine, Gaussian Process Regression, and ANN) and EMG feature extraction techniques were explored. The resulting subject-dependent and aggregated models were able to predict ankle impedance with reasonable accuracy. Furthermore, preliminary efforts toward a subject-independent model showed promising results for the design of an EMG-impedance model that can predict ankle impedance for new subjects. This work contributes to understanding the relationship between the lower extremity muscles and the mechanical impedance of the ankle in multiple DOF. Applications of this work could improve user intent recognition for the control of active ankle-foot prostheses.
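One of the regression models named in this abstract, Gaussian Process Regression, can be sketched for the EMG-to-impedance mapping as below. The data are synthetic: the muscle count, feature choice (normalized EMG amplitude), and the toy linear stiffness relation are assumptions made for illustration, not measurements from the studies.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
# Hypothetical inputs: normalized EMG amplitude of 4 ankle muscles, 80 trials
emg = rng.uniform(0.0, 1.0, size=(80, 4))
# Toy target: ankle stiffness (Nm/rad) rising with co-contraction, plus noise
stiffness = 20.0 + 60.0 * emg.mean(axis=1) + rng.normal(0.0, 1.0, size=80)

# RBF kernel models the smooth EMG-impedance relation; WhiteKernel
# absorbs measurement noise in the perturbation-based impedance estimates
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(emg, stiffness)
pred, std = gpr.predict(emg, return_std=True)
```

A practical advantage of GPR in this setting is the per-prediction uncertainty (`std`), which could flag EMG patterns far from the training data of a subject-independent model.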

    The use of surface electromyography to assess transfemoral amputees : methodological and functional perspective

    Aim: Surface electromyography (sEMG) has been established as a safe, non-invasive method to investigate neuromuscular function. However, the use of this instrument to assess the lower limbs of transfemoral amputees still lacks standardization in its methods of signal acquisition and processing. The aim of this study was to review the current state of sEMG utilization to assess transfemoral amputees, the procedures adopted for acquisition, and the functional findings. Methods: This is a literature review. Five electronic databases were searched to find the studies. All relevant information from each study was extracted and registered. Methodological quality was evaluated using a customized checklist. Results: Eight studies met the inclusion criteria and were included in this paper. Four studies did not reach more than 80% on the quality checklist, and few studies fully described the methodology applied. The muscles assessed were similar across studies, but electrode placement was determined by different criteria. Conclusion: This paper demonstrates that few studies have used this method to assess this population, and that the most variable aspect is the placement of the electrodes. More research is needed to better understand the neuromuscular behavior of amputees using sEMG and to support the development of more reproducible and reliable studies.

    A Sonomyography-based Muscle Computer Interface for Individuals with Spinal Cord Injury

    Impairment of hand function in individuals with spinal cord injury (SCI) severely disrupts activities of daily living. Recent advances have enabled rehabilitation assisted by robotic devices to augment the residual function of the muscles. Traditionally, non-invasive electromyography-based peripheral neural interfaces have been utilized to sense volitional motor intent to drive robotic assistive devices. However, the dexterity and fidelity of control that can be achieved with electromyography-based control have been limited due to inherent limitations in signal quality. We have developed and tested a muscle-computer interface (MCI) utilizing sonomyography to provide control of a virtual cursor for individuals with motor-incomplete spinal cord injury. We demonstrate that individuals with SCI successfully gained control of a virtual cursor by utilizing contractions of muscles of the wrist joint. The sonomyography-based interface enabled control of the cursor at multiple graded levels, demonstrating the ability to achieve accurate and stable endpoint control. Our sonomyography-based muscle-computer interface can enable dexterous control of upper-extremity assistive devices for individuals with motor-incomplete SCI.