
    Design of Decision Tree Structure with Improved BPNN Nodes for High-Accuracy Locomotion Mode Recognition Using a Single IMU

    Smart wearable robotic systems, such as exoskeleton assistive devices and powered lower-limb prostheses, can realize rapid and accurate human–machine interaction through a locomotion mode recognition system. However, previous locomotion mode recognition studies have usually relied on additional sensors to reach higher accuracy and on sophisticated intelligent algorithms to recognize multiple locomotion modes simultaneously. To reduce the sensor burden on users while recognizing more locomotion modes, we design a novel decision tree structure (DTS) that uses an improved backpropagation neural network (IBPNN) as its judgment nodes, named IBPNN-DTS. After analyzing the experimental locomotion mode data (raw values in a 200-ms time window from a single inertial measurement unit), the IBPNN-DTS hierarchically identifies nine common locomotion modes (level walking at three speeds, ramp ascent/descent, stair ascent/descent, sitting, and standing). In addition, we reduce the number of parameters in the IBPNN for structure optimization and adopt the artificial bee colony (ABC) algorithm to perform a global search for initial weights and threshold values, eliminating system uncertainty, because randomly generated initial values tend to fail to converge or to fall into local optima. Experimental results demonstrate that the recognition accuracy of the IBPNN-DTS with ABC optimization (ABC-IBPNN-DTS) reached 96.71% (97.29% for the IBPNN-DTS). Compared to the IBPNN-DTS without optimization, the number of parameters in the ABC-IBPNN-DTS shrank by 66% with only a 0.58% reduction in accuracy, while the classification model remained highly robust.
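    To make the idea of a decision tree with neural-network judgment nodes concrete, the sketch below builds a minimal two-level hierarchy for the modes listed above: a root node separates static from dynamic modes, and small leaf networks resolve the specific mode. The grouping at each node and the use of scikit-learn's MLPClassifier as a stand-in for the paper's IBPNN (without ABC initialization) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a decision tree whose judgment nodes are small neural
# networks. A root node separates static modes (sit/stand) from dynamic ones;
# leaf networks then resolve the specific locomotion mode.
import numpy as np
from sklearn.neural_network import MLPClassifier

STATIC_MODES = {"sit", "stand"}

class JudgmentNode:
    """Binary node backed by a small neural network (IBPNN stand-in)."""
    def __init__(self, positive_labels):
        self.positive = set(positive_labels)
        self.net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)

    def fit(self, X, y):
        # Train the node to answer: does this window belong to the positive group?
        self.net.fit(X, [label in self.positive for label in y])
        return self

    def decide(self, x):
        return bool(self.net.predict(x.reshape(1, -1))[0])

class NeuralDecisionTree:
    """Root: static vs. dynamic; leaf networks resolve the exact mode."""
    def __init__(self):
        self.root = JudgmentNode(STATIC_MODES)
        self.static_leaf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000)
        self.dynamic_leaf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)

    def fit(self, X, y):
        # X: (n_windows, n_features) feature vectors from 200-ms IMU windows,
        # y: mode labels such as "stair_ascent" (feature extraction not shown).
        y = np.asarray(y)
        static = np.isin(y, list(STATIC_MODES))
        self.root.fit(X, y)
        self.static_leaf.fit(X[static], y[static])
        self.dynamic_leaf.fit(X[~static], y[~static])
        return self

    def predict_one(self, x):
        leaf = self.static_leaf if self.root.decide(x) else self.dynamic_leaf
        return leaf.predict(x.reshape(1, -1))[0]
```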

    Continuous Myoelectric Prediction of Future Ankle Angle and Moment Across Ambulation Conditions and Their Transitions

    A hallmark of human locomotion is that it continuously adapts to changes in the environment and predictively adjusts to changes in the terrain, both of which are major challenges to lower limb amputees due to the limitations of prostheses and control algorithms. Here, the ability of a single-network nonlinear autoregressive model to continuously predict future ankle kinematics and kinetics simultaneously across ambulation conditions using lower limb surface electromyography (EMG) signals was examined. Ankle plantarflexor and dorsiflexor EMG from ten healthy young adults were mapped to normal ranges of ankle angle and ankle moment during level overground walking, stair ascent, and stair descent, including transitions between terrains (i.e., transitions to/from the staircase). Prediction performance was characterized as a function of the time between current EMG/angle/moment inputs and future angle/moment model predictions (prediction interval), the number of past EMG/angle/moment input values over time (sampling window), and the number of units in the network hidden layer that minimized error between experimentally measured values (targets) and model predictions of ankle angle and moment. Ankle angle and moment predictions were robust across ambulation conditions, with root mean squared errors less than 1° and 0.04 Nm/kg, respectively, and cross-correlations (R2) greater than 0.99 for prediction intervals of 58 ms. Model predictions at critical points of trip-related fall risk fell within the variability of the ankle angle and moment targets (Benjamini-Hochberg adjusted p > 0.065). EMG contribution to ankle angle and moment predictions occurred consistently across ambulation conditions and model outputs. EMG signals had the greatest impact on noncyclic regions of gait such as double limb support, transitions between terrains, and around plantarflexion and moment peaks. The use of natural muscle activation patterns to continuously predict variations in normal gait, together with the model's predictive capability to counteract inherent electromechanical delays, suggests that this approach could provide robust and intuitive user-driven real-time control of a wide variety of lower limb robotic devices, including active powered ankle-foot prostheses.
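    The lag-and-predict structure of such a single-network autoregressive model can be sketched as follows: past EMG/angle/moment samples over a sampling window are mapped to the ankle angle and moment a fixed prediction interval ahead. The window length, prediction horizon, synthetic signals, and use of scikit-learn's MLPRegressor are illustrative assumptions, not the study's network or parameters.

```python
# Illustrative sketch of an autoregressive EMG-to-kinematics/kinetics mapping.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged_dataset(emg, angle, moment, window=20, horizon=30):
    """Stack `window` past samples of every signal as inputs; targets are the
    ankle angle and moment `horizon` samples into the future."""
    signals = np.column_stack([emg, angle, moment])          # shape (T, channels)
    X, Y = [], []
    for t in range(window, len(signals) - horizon):
        X.append(signals[t - window:t].ravel())              # flattened history
        Y.append([angle[t + horizon], moment[t + horizon]])  # future targets
    return np.asarray(X), np.asarray(Y)

# Synthetic stand-ins for recorded gait signals (not real data).
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 2000)
angle = 10 * np.sin(2 * np.pi * t)                 # pretend ankle angle (deg)
moment = 0.5 * np.sin(2 * np.pi * t + 0.3)         # pretend ankle moment (Nm/kg)
emg = np.abs(np.sin(2 * np.pi * t)) + 0.05 * rng.standard_normal(t.size)

X, Y = make_lagged_dataset(emg, angle, moment)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, Y)
future_angle, future_moment = model.predict(X[:1])[0]   # one-step prediction
```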

    A review on locomotion mode recognition and prediction when using active orthoses and exoskeletons

    Understanding how to seamlessly adapt the assistance of lower-limb wearable assistive devices (active orthoses (AOs) and exoskeletons) to human locomotion modes (LMs) is challenging. Several algorithms and sensors have been explored to recognize and predict users' LMs. Nevertheless, it is not yet clear which sensor and classifier configurations are the most used and effective in AOs/exoskeletons, nor how these devices' control is adapted according to the decoded LMs. To explore these aspects, we performed a systematic review by electronic search in the Scopus and Web of Science databases, including studies published from 1 January 2010 to 31 August 2022. Sixteen studies were included and scored 84.7 ± 8.7% for quality. Decoding focused on level-ground walking along with stair ascent/descent tasks performed by healthy subjects. Time-domain raw data from inertial measurement unit sensors were the most used data. Different classifiers were employed depending on the LMs to decode (accuracy above 90% for all tasks). Five studies adapted the assistance of AOs/exoskeletons according to the decoded LM, and only one of these predicted the new LM before its occurrence. Future research is encouraged to develop decoding tools considering data from people with lower-limb impairments walking at self-selected speeds while performing daily LMs with AOs/exoskeletons. This work was funded in part by the Fundação para a Ciência e Tecnologia (FCT) with the Reference Scholarship under grant 2020.05711.BD, under the Stimulus of Scientific Employment with the grant 2020.03393.CEECIND, in part by the FEDER Funds through the COMPETE 2020 - Programa Operacional Competitividade e Internacionalização (POCI) and P2020 with the Reference Project SmartOs Grant POCI-01-0247-FEDER-039868, and by FCT national funds, under the national support to R&D units grant, through the reference projects UIDB/04436/2020 and UIDP/04436/2020.
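    As a concrete illustration of the data handling pattern most often reported in the review (raw time-domain IMU samples segmented into fixed-length windows that a classifier then labels with a locomotion mode), the short sketch below cuts a multi-channel IMU recording into overlapping windows. The window size, overlap, and channel count are assumptions for illustration, not values drawn from the reviewed studies.

```python
# Illustrative sliding-window segmentation of raw time-domain IMU data.
import numpy as np

def sliding_windows(imu, window=100, step=50):
    """imu: array of shape (samples, channels), e.g. 3-axis accel + 3-axis gyro.
    Returns an array of shape (n_windows, window, channels)."""
    starts = range(0, imu.shape[0] - window + 1, step)
    return np.stack([imu[s:s + window] for s in starts])

imu = np.random.randn(1000, 6)            # stand-in for one 6-channel recording
windows = sliding_windows(imu)            # each window is one classifier input
flat = windows.reshape(len(windows), -1)  # flattened form for a conventional classifier
```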

    Application of Artificial Intelligence (AI) in Prosthetic and Orthotic Rehabilitation

    The technological integration of Artificial Intelligence (AI) and machine learning into the prosthetic and orthotic industry and the field of assistive technology has become a boon for persons with disabilities. The concept of neural networks has been used by leading manufacturers of rehabilitation aids to simulate various anatomical and biomechanical functions of lost parts of the human body. The involvement of human interaction with various agents, i.e., electronic circuitry, software, robotics, etc., has made a revolutionary impact in the rehabilitation field, leading to devices such as the bionic leg, mind- or thought-controlled prostheses, and exoskeletons. The application of AI and robotics technology has had a huge impact on achieving independent mobility and enhances the quality of life of persons with disabilities (PwDs).

    Use of stance control knee-ankle-foot orthoses : a review of the literature

    The use of stance control orthotic knee joints is becoming increasingly popular because, unlike locked knee-ankle-foot orthoses, these joints allow the limb to swing freely in swing phase while providing stance phase stability, thus aiming to promote a more physiological and energy-efficient gait. It is of paramount importance that all aspects of this technology are monitored and evaluated as the demand for evidence-based practice and cost-effective rehabilitation increases. A robust and thorough literature review was conducted to retrieve all articles which evaluated the use of stance control orthotic knee joints. All relevant databases were searched, including The Knowledge Network, ProQuest, Web of Knowledge, RECAL Legacy, PubMed and Engineering Village. Papers were selected for review if they addressed the use and effectiveness of commercially available stance control orthotic knee joints and included participant(s) trialling the SCKAFO. A total of 11 publications were reviewed and the following questions were developed and answered according to the best available evidence:
    1. The effect SCKAFO (stance control knee-ankle-foot orthosis) systems have on kinetic and kinematic gait parameters
    2. The effect SCKAFO systems have on the temporal and spatial parameters of gait
    3. The effect SCKAFO systems have on the cardiopulmonary and metabolic cost of walking
    4. The effect SCKAFO systems have on muscle power/generation
    5. Patients' perceptions of, and compliance with, SCKAFO systems
    Although current research is limited and lacks methodological quality, the available evidence does, on the whole, indicate a positive benefit of SCKAFO use. This is with respect to increased knee flexion during swing phase resulting in sufficient ground clearance, decreased compensatory movements to facilitate swing phase clearance, and improved temporal and spatial gait parameters. With the right methodological approach, the benefits of using a SCKAFO system can be evidenced and the research more effectively translated into clinical practice.

    The effect of prefabricated wrist-hand orthoses on performing activities of daily living

    Wrist-hand orthoses (WHOs) are commonly prescribed to manage the functional deficit associated with the wrist as a result of rheumatoid changes. The common presentation of the wrist is one of flexion and radial deviation, with ulnar deviation of the fingers. This wrist position results in altered biomechanics, compromising hand function during activities of daily living (ADL). A paucity of evidence exists which suggests that improvements in ADL with WHO use are very task specific. Using normal subjects, and thus in the absence of pain as a limiting factor, the impact of ten WHOs on performing five ADL tasks was investigated. The tasks were selected to represent common grip patterns, and tests were performed with and without WHOs by right-handed females aged 20-50 years over a ten-week period. The time taken to complete each task was recorded, and a wrist goniometer, an elbow goniometer and a forearm torsiometer were used to measure joint motion. Results show that, although orthoses may restrict the motion required to perform a task, participants do not use the full range of motion which the orthoses permit. The altered wrist position measured may be attributable to a modified method of performing the task or to a necessary change in grip pattern, resulting in an increased time in task performance. The effect of WHO use on ADL is task specific and may initially impede function. This could affect WHO compliance if there appear to be no immediate benefits. This orthotic effect may be related to restriction of wrist motion or an inability to achieve the necessary grip patterns due to the designs of the orthoses.

    The effect of prefabricated wrist-hand orthoses on grip strength

    Prefabricated wrist-hand orthoses (WHOs) are commonly prescribed to manage the functional deficit and compromised grip strength that result from rheumatoid changes. It is thought that an orthosis which improves wrist extension, reduces synovitis and increases the mechanical advantage of the flexor muscles will improve hand function. Previous studies report an initial reduction in grip strength with WHO use which may increase following prolonged use. Using normal subjects, and thus in the absence of pain as a limiting factor, the impact of ten WHOs on grip strength was measured using a Jamar dynamometer. Tests were performed with and without WHOs by right-handed female subjects aged 20-50 years over a ten-week period. During each test, a wrist goniometer and a forearm torsiometer were used to measure wrist joint position when maximum grip strength was achieved. The majority of participants achieved maximum grip strength with no orthosis at 30° extension. All the orthoses reduced initial grip strength, but surprisingly the restriction of wrist extension did not appear to contribute to this in a significant way. The reduction in grip must therefore also be attributable to WHO design characteristics or the quality of fit. The authors recognize the need for research into the long-term effect of WHOs on grip strength. However, if grip is initially adversely affected, patients may be unlikely to persevere with treatment, thereby negating any therapeutic benefits. In studies investigating patient opinions on WHO use, it was a stable wrist, rather than a stronger grip, that was reported to have facilitated task performance. This may explain why orthoses that interfere with maximum grip strength can still improve functional task performance. Therefore, while it is important to measure grip strength, it is only one factor to be considered when evaluating the efficacy of WHOs.

    Biomechatronics: Harmonizing Mechatronic Systems with Human Beings

    This eBook provides a comprehensive treatise on modern biomechatronic systems centred around human applications. Particular emphasis is given to exoskeleton designs for assistance and training with advanced interfaces in human-machine interaction. Some of these designs are validated with experimental results which the reader will find very informative as building blocks for designing such systems. This eBook is ideally suited to those researching the biomechatronic area with bio-feedback applications or those involved in high-end research on man-machine interfaces. It may also serve as a textbook for biomechatronic design at postgraduate level.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved for 5-gesture classification.
It has previously been noted that MMG sensors are susceptible to motion-induced interference. The thesis also established that arm pose changes the measured signal. This thesis therefore introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; however, mechanomyography sensors are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
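    A minimal sketch of the gesture-decoding step described above, assuming per-channel RMS features from a band of six MMG channels and scikit-learn's LDA and SVM implementations; the feature choice and the synthetic data are illustrative, not the thesis pipeline.

```python
# Illustrative MMG gesture classification with LDA and an SVM.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def rms_features(windows):
    """windows: (n_windows, samples, 6 MMG channels) -> per-channel RMS."""
    return np.sqrt(np.mean(windows ** 2, axis=1))

# Synthetic stand-in data: 12 gesture classes, 40 windows each.
rng = np.random.default_rng(0)
n_per_class, n_gestures = 40, 12
windows = rng.standard_normal((n_per_class * n_gestures, 200, 6))
labels = np.repeat(np.arange(n_gestures), n_per_class)

X = rms_features(windows)
for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    scores = cross_val_score(clf, X, labels, cv=5)   # 5-fold accuracy estimate
    print(type(clf).__name__, scores.mean())
```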

    Proceedings of the 3rd International Mobile Brain/Body Imaging Conference : Berlin, July 12th to July 14th 2018

    The 3rd International Mobile Brain/Body Imaging (MoBI) conference in Berlin in 2018 brought together researchers from various disciplines interested in understanding the human brain in its natural environment and during active behavior. MoBI is a new imaging modality that employs mobile brain imaging methods such as electroencephalography (EEG) or near-infrared spectroscopy (NIRS), synchronized to motion capture and other data streams, to investigate brain activity while participants actively move in and interact with their environment. Mobile Brain/Body Imaging allows the investigation of brain dynamics accompanying more natural cognitive and affective processes, as it lets humans interact with their environment without restriction of physical movement. By overcoming the movement restrictions of established imaging modalities such as functional magnetic resonance imaging (fMRI), MoBI can provide new insights into human brain function in mobile participants. This imaging approach will lead to new insights into the brain functions underlying active behavior and the impact of behavior on brain dynamics and vice versa; it can also be used for the development of more robust human-machine interfaces as well as state assessment in mobile humans. DFG, GR2627/10-1, 3rd International MoBI Conference 2018
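    A core technical ingredient of MoBI is synchronizing data streams that run at different sampling rates onto a common time base. The sketch below resamples a motion-capture channel onto the EEG clock by linear interpolation; the sampling rates and the signal are illustrative assumptions, not values from the proceedings.

```python
# Illustrative alignment of a motion-capture stream to the EEG time base.
import numpy as np

eeg_t = np.arange(0, 10, 1 / 500.0)                # EEG timestamps at 500 Hz
mocap_t = np.arange(0, 10, 1 / 120.0)              # motion capture at 120 Hz
mocap_signal = np.sin(2 * np.pi * 0.5 * mocap_t)   # e.g. one joint-angle channel

# Resample the slower stream onto the EEG clock so both share one time base.
mocap_on_eeg_clock = np.interp(eeg_t, mocap_t, mocap_signal)
```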