532 research outputs found

    Multiple sensor outputs and computational intelligence towards estimating state and speed for control of lower limb prostheses

    For as long as people have been able to survive limb-threatening injuries, prostheses have been created. Modern lower limb prostheses are primarily controlled by adjusting the amount of damping in the knee so that it bends in a suitable manner for walking and running. Often the choice between the walking and running states has to be made manually by pressing a button. This paper examines how this control could be improved using sensors attached to the limbs of two volunteers. Features were extracted from the sensor signals and passed through a computational intelligence system, which was used to determine whether the volunteer was walking or running, and at what speed. Two new features are presented which identify the movement states of standing, walking and running, and the movement speed of the volunteer. The results suggest that the control of the prosthetic limb could be improved.

    Using Artificial Intelligence To Improve The Control Of Prosthetic Legs

    For as long as people have been able to survive limb-threatening injuries, prostheses have been created. Modern lower limb prostheses are primarily controlled by adjusting the amount of damping in the knee so that it bends in a suitable manner for walking and running. Often the choice between the walking and running states has to be made manually by pressing a button. While this simple tuning strategy can work for many users, it is limiting: controlling the leg is not intuitive, and the wearer has to learn how to use it. This thesis examines how this control can be improved using Artificial Intelligence (AI) to allow the system to be tuned for each individual. A wearable gait lab was developed, consisting of a number of sensors attached to the limbs of eight volunteers. The signals from the sensors were analysed and features were extracted from them, which were then passed through two separate Artificial Neural Networks (ANNs). One network attempted to classify whether the wearer was standing still, walking or running; the other attempted to estimate the wearer's movement speed. A Genetic Algorithm (GA) was used to tune the ANNs' parameters for each individual. The results showed that each individual needed different parameters to tune the features presented to the ANN, and that different features were needed for each of the two problems. Two new features are presented which identify the movement states of standing, walking and running, and the movement speed of the volunteer. The results suggest that the control of the prosthetic limb can be improved.
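The pipeline this abstract describes (sensor signals → extracted features → state classifier) can be sketched in miniature. The two features, signal parameters and nearest-centroid classifier below are illustrative stand-ins, not the thesis's two novel features or its GA-tuned ANNs:

```python
import numpy as np

def extract_features(signal, fs=100.0):
    """RMS amplitude and dominant frequency: two generic gait features,
    used here as stand-ins for the thesis's (unspecified) novel features."""
    rms = np.sqrt(np.mean(signal ** 2))
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([rms, freqs[np.argmax(spectrum)]])

def nearest_centroid(features, centroids, labels):
    """Classify by distance to per-state feature centroids."""
    d = [np.linalg.norm(features - c) for c in centroids]
    return labels[int(np.argmin(d))]

# Synthetic accelerometer traces: standing (noise), walking (~1 Hz), running (~3 Hz)
fs = 100.0
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
standing = 0.05 * rng.standard_normal(t.size)
walking = 1.0 * np.sin(2 * np.pi * 1.0 * t) + 0.05 * rng.standard_normal(t.size)
running = 2.5 * np.sin(2 * np.pi * 3.0 * t) + 0.05 * rng.standard_normal(t.size)

labels = ["standing", "walking", "running"]
centroids = [extract_features(s, fs) for s in (standing, walking, running)]

# An unseen trace close to the running pattern is classified as running
test = 2.4 * np.sin(2 * np.pi * 2.9 * t) + 0.05 * rng.standard_normal(t.size)
print(nearest_centroid(extract_features(test, fs), centroids, labels))
```

A per-subject GA, as in the thesis, would tune parameters such as the window length and feature scaling rather than the classifier itself.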

    Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices

    The field of neural prosthetics aims to develop prosthetic limbs with a brain-computer interface (BCI) through which neural activity is decoded into movements. A natural extension of current research is the incorporation of neural activity from multiple modalities to more accurately estimate the user's intent. The challenge remains how to appropriately combine this information in real time for a neural prosthetic device, i.e., fusing predictions from several single-modality decoders to produce a more accurate device state estimate. We examine two algorithms for continuous variable decision fusion: the Kalman filter and artificial neural networks (ANNs). Using simulated cortical neural spike signals, we implemented several successful individual neural decoding algorithms, and tested the capabilities of each fusion method in the context of decoding 2-dimensional endpoint trajectories of a neural prosthetic arm. Extensively testing these methods on random trajectories, we find that on average both the Kalman filter and ANNs successfully fuse the individual decoder estimates to produce more accurate predictions. Our results reveal that a fusion-based approach has the potential to improve prediction accuracy over individual decoders of varying quality, and we hope that this work will encourage multimodal neural prosthetics experiments in the future.
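The fusion step can be illustrated with a minimal Kalman filter that treats two single-modality decoder outputs as a stacked observation of the same 2-D endpoint. The random-walk dynamics, noise levels and trajectory are simulated assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
# True 2-D endpoint trajectory (smooth random walk)
vel = rng.standard_normal((T, 2)).cumsum(axis=0) * 0.01
true = vel.cumsum(axis=0)

# Two simulated single-modality decoders = truth + independent noise
dec1 = true + rng.normal(0, 0.5, true.shape)
dec2 = true + rng.normal(0, 0.8, true.shape)

# Kalman fusion: random-walk state model, both decoder outputs observed at once
x, P = np.zeros(2), np.eye(2)
Q = 0.05 * np.eye(2)                       # process noise (assumed)
H = np.vstack([np.eye(2), np.eye(2)])      # each decoder observes the position
R = np.diag([0.5**2, 0.5**2, 0.8**2, 0.8**2])
fused = []
for z in np.hstack([dec1, dec2]):
    P = P + Q                              # predict (identity dynamics)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)                # update with stacked observation
    P = (np.eye(2) - K @ H) @ P
    fused.append(x.copy())
fused = np.array(fused)

mse = lambda est: np.mean((est - true) ** 2)
print(mse(fused) < mse(dec1) < mse(dec2))
```

Weighting each decoder by its noise covariance in R is what lets the fused estimate beat either decoder alone.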

    Prediction and control in human neuromusculoskeletal models

    Computational neuromusculoskeletal modelling enables the generation and testing of hypotheses about human movement on a large scale, in silico. Humanoid models, which increasingly aim to replicate the full complexity of the human nervous and musculoskeletal systems, are built on extensive prior knowledge, extracted from anatomical imaging, kinematic and kinetic measurement, and codified as model description. Where inverse dynamic analysis is applied, its basis is in Newton's laws of motion, and in solving for muscular redundancy it is necessary to invoke knowledge of central nervous motor strategy. This epistemological approach contrasts strongly with the models of machine learning, which are generally over-parameterised and largely data-driven. Even as spectacular performance has been delivered by the application of these models in a number of discrete domains of artificial intelligence, work towards general human-level intelligence has faltered, leading many to wonder if the data-driven approach is fundamentally limited, and spurring efforts to combine machine learning with knowledge-based modelling. Through a series of five studies, this thesis explores the combination of neuromusculoskeletal modelling with machine learning in order to enhance the core tasks of prediction and control. Several principles for the development of clinically useful artificially intelligent systems emerge: stability, computational efficiency and incorporation of prior knowledge. The first study concerns the use of neural network function approximators for the prediction of internal forces during human movement, an important task with many clinical applications, but one for which the standard tools of modelling are slow and cumbersome. By training on a large dataset of motions and their corresponding forces, state-of-the-art performance is demonstrated, with many-fold increases in inference speed enabling the deployment of trained models for use in a real-time biofeedback system. 
Neural networks trained in this way, to imitate some optimal controller, encode a mapping from high-level movement descriptors to actuator commands, and may thus be deployed in simulation as "policies" to control the actions of humanoid models. Unfortunately, the high complexity of realistic simulation makes stable control a challenging task, beyond the capabilities of such naively trained models. The objective of the second study was to improve the performance and stability of policy-based controllers for humanoid models in simulation. A novel technique was developed, borrowing from established unsupervised adversarial methods in computer vision. This technique enabled significant gains in performance relative to a neural network baseline, without the need for additional access to the optimal controller. For the third study, increases in the capabilities of these policy-based controllers were sought. Reinforcement learning is widely considered the most powerful means of optimising such policies, but it is computationally inefficient, and this inefficiency limits its clinical utility. To mitigate this problem, a novel framework was developed, making use of domain-specific knowledge present in motion data and in an inverse model of the biomechanical system. Training on simple desktop hardware, this framework enabled rapid initialisation of humanoid models that were able to move naturally through a 3-dimensional simulated environment, with 900-fold improvements in sample efficiency relative to a related technique based on pure reinforcement learning. After training with subject-specific anatomical parameters and motion data, learned policies represent personalised models of motor control that may be further interrogated to test hypotheses about movement. 
For the fourth study, subject-specific controllers were taken and used as the substrate for transfer learning, by removing kinematic constraints and optimising with respect to the magnitude of the medial knee joint reaction force, an important biomechanical variable in osteoarthritis of the knee. Models learned new kinematic strategies for the reduction of this biomarker, which were subsequently validated by their use, in the real world, to construct subject-specific routines for real-time gait retraining. Six out of eight subjects were able to reduce medial knee joint loading by pursuing the personalised kinematic targets found in simulation. Personalisation of assistive devices, such as limb prostheses, is another area of growing interest, and one for which computational frameworks promise cost-effective solutions. Reinforcement learning provides powerful techniques for this task, but the expansion of the scope of optimisation, to include previously static elements of a prosthesis, is problematic because of its complexity and resulting sample inefficiency. The fifth and final study demonstrates a new algorithm that leverages the methods described in the previous studies, and additional techniques for variance control, to surmount this problem, improving sample efficiency and simultaneously, through the use of prior knowledge encoded in motion data, providing a rational means of determining optimality in the prosthesis. Trained models were able to jointly optimise motor control and prosthesis design to enable improved performance in a walking task, and optimised designs were robust to both random seed and reward specification. This algorithm could be used to speed the design and production of real personalised prostheses, representing a potent realisation of the potential benefits of combined reinforcement learning and realistic neuromusculoskeletal modelling.
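The second study's starting point, a policy trained to imitate an optimal controller, can be sketched on a toy linear plant. The linear least-squares "policy" stands in for the thesis's neural networks, and every number here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy linear plant x' = A x + B u (a stand-in for the humanoid simulation)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K_expert = np.array([[3.0, 2.5]])   # hand-picked stabilising gain ("optimal controller")

# 1) Collect expert state-action pairs (the imitation dataset)
X, U = [], []
for _ in range(50):
    x = rng.uniform(-1, 1, size=2)
    for _ in range(30):
        u = -K_expert @ x
        X.append(x)
        U.append(u)
        x = A @ x + B @ u
X, U = np.array(X), np.array(U)

# 2) "Policy" = least-squares map from states to expert actions
W, *_ = np.linalg.lstsq(X, U, rcond=None)

# 3) Deploy the cloned policy in closed loop: the state is driven to zero
x = np.array([1.0, -0.5])
for _ in range(100):
    x = A @ x + B @ (x @ W)
print(np.linalg.norm(x) < 1e-3)
```

Because the expert here is exactly linear, imitation recovers it perfectly; the thesis's point is precisely that this breaks down for complex humanoid dynamics, motivating its adversarial and inverse-model extensions.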

    Volitional Control of Lower-limb Prosthesis with Vision-assisted Environmental Awareness

    Early and reliable prediction of the user's intention to change locomotion mode or speed is critical for a smooth and natural lower limb prosthesis. Meanwhile, incorporation of explicit environmental feedback can facilitate a context-aware intelligent prosthesis that allows seamless operation across a variety of gait demands. This dissertation introduces environmental awareness through computer vision and enables early and accurate prediction of the intention to start, stop or change speeds while walking. Electromyography (EMG), Electroencephalography (EEG), Inertial Measurement Unit (IMU), and Ground Reaction Force (GRF) sensors were used to predict the intention to start, stop or increase walking speed. Furthermore, it was investigated whether external emotional music stimuli could enhance the predictive capability of the intention prediction methodologies. Application of advanced machine learning and signal processing techniques to pre-movement EEG resulted in an intention prediction system with low latency, high sensitivity and a low false-positive rate. Affective analysis of EEG suggested that happy music stimuli significantly (

    Nonlinear control strategy for a cost effective myoelectric prosthetic hand

    The loss of a limb tremendously impacts the life of the affected individual. In the past decades, researchers have been developing artificial limbs that may return some of the missing functions and cosmetics. However, the development of dexterous mechanisms capable of mimicking the function of the human hand is a complex venture. Even though myoelectric prostheses have advanced, several issues remain to be solved before an artificial limb may be comparable to its human counterpart. Moreover, the high cost of advanced limbs prevents their widespread use among the low-income population. This dissertation presents a strategy for the low-level control of a cost-effective robotic hand for prosthetic applications. The main purpose of this work is to reduce the high cost associated with limb replacement. The presented strategy uses an electromyographic signal classifier, which detects user intent by classifying four different wrist movements. This information is supplied as four different pre-shapes of the robotic hand to the low-level control for safely and effectively performing the grasping tasks. Two proof-of-concept prototypes were implemented, consisting of five-finger underactuated hands driven by inexpensive DC motors and equipped with low-cost sensors. To overcome the limitations and nonlinearities of inexpensive components, a multi-stage control methodology was designed for modulating the grasping force based on slippage detection and nonlinear force control. The two main stages of the control strategy are the force control stage and the detection stage. The control strategy uses the force control stage to maintain a constant level of force over the object. 
The results of the experiments performed at this stage showed a rise time of less than 1 second, a force overshoot of less than 1 N and a steady-state error of less than 0.15 N. The detection stage is used to monitor any sliding of the object from the hand. The experiments performed at this stage demonstrated a delay in the slip detection process of less than 200 milliseconds. The initial force, and the amount by which the force is incremented once sliding is detected, were adjusted to reduce object displacement. Experiments were then performed to test the control strategy in situations often encountered in activities of daily living (ADL). The results showed that the control strategy was able to detect dynamic changes in the mass of the object and to successfully adjust the grasping force to prevent the object from dropping. The evaluation of the proposed control strategy suggests that this methodology can overcome the limitations of inexpensive sensors and actuators. Therefore, this control strategy may reduce the cost of current myoelectric prostheses. We believe that the work presented here is a major step towards the development of a cost-effective myoelectric prosthetic hand.
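The two-stage idea (hold a target force; on slip detection, increment it) can be sketched as a simple control loop. The friction coefficient, initial force and increment below are illustrative values, not those measured in the dissertation:

```python
def grasp_controller(load_profile, mu=0.5, f_init=1.0, f_step=0.5):
    """Two-stage grasp control sketch: the force stage holds a target grip
    force; the detection stage raises it whenever slip occurs (i.e. when
    the friction cone is violated). mu, f_init and f_step are illustrative."""
    f_target = f_init
    history = []
    for load in load_profile:
        f_required = load / mu          # normal force needed to prevent slip
        slipping = f_target < f_required
        if slipping:                    # detection stage: slip -> increment force
            f_target += f_step
        history.append((f_target, slipping))
    return history

# Object load suddenly increases mid-grasp (e.g. pouring water into a held cup)
loads = [0.4] * 5 + [0.9] * 5           # tangential load in newtons
hist = grasp_controller(loads)
print([round(f, 1) for f, _ in hist])   # grip force rises only after slip
```

The grip stays at 1.0 N until the load change, then steps up to 2.0 N, after which slip stops: the force increments only when needed, which is what limits object displacement.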

    Electronic systems for the restoration of the sense of touch in upper limb prosthetics

    In the last few years, research on active prostheses for upper limbs has focused on improving human functionality and control. New methods have been proposed for measuring the user's muscle activity and translating it into prosthesis control commands. Developing the feed-forward interface so that the prosthesis better follows the intention of the user is an important step towards improving the quality of life of people with limb amputation. However, prosthesis users can neither feel whether something or someone is touching them through the prosthesis, nor perceive the temperature or roughness of objects. Prosthesis users are helped by looking at an object, but they cannot detect anything otherwise; their sight gives them most of their information. Therefore, to foster prosthesis embodiment and utility, it is necessary to have a prosthetic system that not only responds to the control signals provided by the user, but also transmits back to the user information about the current state of the prosthesis. This thesis presents an electronic skin system to close the loop in prostheses towards the restoration of the sense of touch in prosthesis users. The proposed electronic skin system includes advanced distributed sensing (electronic skin), a system for (i) signal conditioning, (ii) data acquisition, and (iii) data processing, and a stimulation system. The idea is to integrate all these components into a myoelectric prosthesis. Embedding the electronic system and the sensing materials is a critical issue in the development of new prostheses. In particular, processing the data originating from the electronic skin into low- or high-level information is the key issue to be addressed by the embedded electronic system. Recently, it has been shown that machine learning is a promising approach to processing tactile sensor information. 
Many studies have shown the effectiveness of machine learning in the classification of input touch modalities. More specifically, this thesis is focused on the stimulation system, allowing the communication of a mechanical interaction from the electronic skin to prosthesis users, and on the dedicated implementation of algorithms for processing tactile data originating from the electronic skin. At the system level, the thesis provides the design of the experimental setup, the experimental protocol, and the algorithms to process tactile data. At the architectural level, the thesis proposes a design flow for the implementation of digital circuits for both FPGAs and integrated circuits, and techniques for the power management of embedded systems running machine learning algorithms.

    Energy Regeneration and Environment Sensing for Robotic Leg Prostheses and Exoskeletons

    Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and/or persons with physical disabilities. However, limitations in automated control and energy-efficient actuation have impeded their transition from research laboratories to real-world environments. With regard to control, the current automated locomotion mode recognition systems being developed rely on mechanical, inertial, and/or neuromuscular sensors, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, here a multi-generation environment sensing and classification system powered by computer vision and deep learning was developed to predict oncoming walking environments prior to physical interaction, therein allowing for more accurate and robust high-level control decisions. To support this initiative, the "ExoNet" database was developed – the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a novel hierarchical labelling architecture. Over a dozen state-of-the-art deep convolutional neural networks were trained and tested on ExoNet for large-scale image classification and automatic feature engineering. The benchmarked CNN architectures and their environment classification predictions were then quantitatively evaluated and compared using an operational metric called "NetScore", which balances the classification accuracy with the architectural and computational complexities (i.e., important for onboard real-time inference with mobile computing devices). Of the benchmarked CNN architectures, the EfficientNetB0 network achieved the highest test accuracy; VGG16 the fastest inference time; and MobileNetV2 the best NetScore. 
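NetScore is a published metric (Wong, 2018) that trades classification accuracy against parameter count and compute cost. A sketch with its commonly quoted default exponents, applied to two hypothetical networks whose statistics are made up for illustration (they are not the thesis's benchmark numbers):

```python
import math

def netscore(acc_percent, params_millions, macs_billions,
             kappa=2.0, beta=0.5, gamma=0.5):
    """NetScore: rewards accuracy, penalises parameter and compute cost.
    Exponents and units follow the commonly quoted defaults."""
    return 20 * math.log10(acc_percent ** kappa /
                           (params_millions ** beta * macs_billions ** gamma))

# Illustrative numbers: a small efficient network vs a large heavy one
small = netscore(acc_percent=73.0, params_millions=5.3, macs_billions=0.4)
large = netscore(acc_percent=71.0, params_millions=138.0, macs_billions=15.5)
print(small > large)  # the lighter network wins despite similar accuracy
```

This is why, in the abstract above, the most accurate network and the best-NetScore network need not coincide: accuracy gains are discounted by architectural and computational cost.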
These comparative results can inform the optimal architecture design or selection depending on the desired performance of an environment classification system. With regard to energetics, backdriveable actuators with energy regeneration can improve the energy efficiency and extend battery-powered operating durations by converting some of the otherwise dissipated energy during negative mechanical work into electrical energy. However, the evaluation and control of these regenerative actuators has focused on steady-state level-ground walking. To encompass real-world community mobility more broadly, here an energy regeneration system, featuring mathematical and computational models of human and wearable robotic systems, was developed to simulate energy regeneration and storage during other locomotor activities of daily living, specifically stand-to-sit movements. Parameter identification and inverse dynamic simulations of subject-specific optimized biomechanical models were used to calculate the negative joint mechanical work and power while sitting down (i.e., the mechanical energy theoretically available for electrical energy regeneration). These joint mechanical energetics were then used to simulate a robotic exoskeleton being backdriven and regenerating energy. An empirical characterization of an exoskeleton was carried out using a joint dynamometer system and an electromechanical motor model to calculate the actuator efficiency and to simulate energy regeneration and storage with the exoskeleton parameters. The performance calculations showed that regenerating electrical energy during stand-to-sit movements provides small improvements in energy efficiency and battery-powered operating durations. 
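The energy-accounting step, converting negative joint mechanical work into an estimate of recoverable electrical energy, can be sketched as follows. The power profile and the 60% conversion efficiency are illustrative assumptions, not measured values from the thesis:

```python
import numpy as np

def regenerated_energy(joint_power_w, dt_s, efficiency=0.6):
    """Electrical energy recoverable from negative joint mechanical work.
    Only negative-power phases (eccentric work, e.g. sitting down) count;
    the 60% mechanical-to-electrical efficiency is an assumption."""
    negative = np.clip(joint_power_w, None, 0.0)
    mech_work_j = -np.sum(negative) * dt_s     # dissipated mechanical energy, J
    return efficiency * mech_work_j

# Toy knee-power profile for a 2 s stand-to-sit movement, sampled at 100 Hz:
# purely eccentric, peaking at 80 W mid-descent
t = np.arange(0, 2, 0.01)
power = -80.0 * np.sin(np.pi * t / 2) ** 2
print(round(regenerated_energy(power, 0.01), 1), "J")
```

The small absolute numbers such a calculation yields (tens of joules per sit-down) are consistent with the abstract's conclusion that the efficiency gains are modest.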
In summary, this research involved the development and evaluation of environment classification and energy regeneration systems to improve the automated control and energy-efficient actuation of next-generation robotic leg prostheses and exoskeletons for real-world locomotor assistance.

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer—a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. 
Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
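The common core of these papers, decoding motor intent from windowed EMG features, can be sketched with a logistic-regression decoder on synthetic signals. The channel count, activation patterns and classifier are illustrative stand-ins for the dissertation's deep ANN models:

```python
import numpy as np

rng = np.random.default_rng(3)

def emg_window_features(window):
    """Per-channel RMS over a window: a classic sEMG feature."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def make_windows(pattern, n, length=200, channels=8):
    """Synthetic sEMG: Gaussian noise whose per-channel amplitude follows a
    gesture-specific activation pattern (a crude stand-in for real signals)."""
    return [rng.standard_normal((length, channels)) * pattern for _ in range(n)]

# Two hypothetical gestures with different 8-channel activation patterns
open_hand = np.array([1.0, 0.2, 0.2, 1.0, 0.2, 0.2, 1.0, 0.2])
close_hand = np.array([0.2, 1.0, 1.0, 0.2, 1.0, 1.0, 0.2, 1.0])

X = np.array([emg_window_features(w)
              for p in (open_hand, close_hand) for w in make_windows(p, 30)])
y = np.array([0] * 30 + [1] * 30)

# Logistic-regression decoder trained by gradient descent (a simplified
# stand-in for the CNN/RNN/Transformer decoders in Papers I-VIII)
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

pred = ((X @ w + b) > 0).astype(int)
print((pred == y).mean())   # training accuracy
```

Real inter-user and inter-session variability is what makes this problem hard in practice, and is the motivation behind the user-independent and data-synthesis papers listed above.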