
    Temporal-Difference Learning to Assist Human Decision Making during the Control of an Artificial Limb

    In this work we explore the use of reinforcement learning (RL) to help with human decision making, combining state-of-the-art RL algorithms with an application to prosthetics. Managing human-machine interaction is a problem of considerable scope, and the simplification of human-robot interfaces is especially important in the domains of biomedical technology and rehabilitation medicine. For example, amputees who control artificial limbs are often required to quickly switch between a number of control actions or modes of operation in order to operate their devices. We suggest that by learning to anticipate (predict) a user's behaviour, artificial limbs could take on an active role in a human's control decisions so as to reduce the burden on their users. Recently, we showed that RL in the form of general value functions (GVFs) could be used to accurately detect a user's control intent prior to their explicit control choices. In the present work, we explore the use of temporal-difference learning and GVFs to predict when users will switch their control influence between the different motor functions of a robot arm. Experiments were performed using a multi-function robot arm that was controlled by muscle signals from a user's body (similar to conventional artificial limb control). Our approach was able to acquire and maintain forecasts about a user's switching decisions in real time. It also provides an intuitive and reward-free way for users to correct or reinforce the decisions made by the machine learning system. We expect that when a system is certain enough about its predictions, it can begin to take over switching decisions from the user to streamline control and potentially decrease the time and effort needed to complete tasks. This preliminary study therefore suggests a way to naturally integrate human- and machine-based decision making systems.
    Comment: 5 pages, 4 figures. This version to appear at The 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making, Princeton, NJ, USA, Oct. 25-27, 2013.
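
    The abstract above describes learning general value functions (GVFs) with temporal-difference methods to forecast when a user will switch control modes. The Python sketch below shows the general shape of such a learner: a linear GVF updated with TD(lambda), whose cumulant is 1 on time steps where the user switches motor functions and 0 otherwise. The feature representation, step size, discount, and trace decay are illustrative assumptions, not the authors' settings.

```python
import numpy as np

# Minimal sketch of a general value function (GVF) learned with TD(lambda).
# The cumulant is 1 on time steps where the user switches control modes and
# 0 otherwise, so the learned value is a discounted forecast of upcoming
# switches. Feature construction and all parameter values are assumptions.

class SwitchGVF:
    def __init__(self, n_features, alpha=0.1, gamma=0.97, lam=0.9):
        self.w = np.zeros(n_features)   # learned weight vector
        self.z = np.zeros(n_features)   # eligibility trace
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def predict(self, x):
        """Discounted forecast of future switch events from feature vector x."""
        return float(self.w @ x)

    def update(self, x, cumulant, x_next):
        """One TD(lambda) step; cumulant is 1.0 if the user just switched."""
        delta = cumulant + self.gamma * (self.w @ x_next) - (self.w @ x)
        self.z = self.gamma * self.lam * self.z + x   # accumulating traces
        self.w += self.alpha * delta * self.z
```

    On each time step one would call update(x, 1.0 if the user just switched else 0.0, x_next); predict(x) then rises ahead of likely switching points, which is the kind of real-time forecast the paper proposes handing over to the device once its confidence is high enough.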

    Annotated Bibliography: Anticipation


    Proceedings of the first workshop on Peripheral Machine Interfaces: going beyond traditional surface electromyography

    One of the hottest topics in rehabilitation robotics is the proper control of prosthetic devices. Despite decades of research, the state of the art falls dramatically short of expectations. To shed light on this issue, the first international workshop on the Present and Future of non-invasive Peripheral Nervous System (PNS)–Machine Interfaces (MI; PMI) was convened in June 2013, hosted by the International Conference on Rehabilitation Robotics. The keyword PMI was selected to denote human–machine interfaces targeted at the limb-deficient, mainly upper-limb amputees, dealing with signals gathered from the PNS in a non-invasive way, that is, from the surface of the residuum. The workshop was intended to provide an overview of the state of the art and future perspectives of such interfaces; this paper is a collection of opinions expressed by each researcher/group involved in it.

    Sensory Feedback for Upper-Limb Prostheses: Opportunities and Barriers

    The addition of sensory feedback to upper-limb prostheses has been shown to improve control, increase embodiment, and reduce phantom limb pain. However, most commercial prostheses do not incorporate sensory feedback, due to several factors. This paper focuses on three major challenges: a limited understanding of user needs; the unavailability of tailored, realistic outcome measures; and the segregation of research on control from research on sensory feedback. The use of methods such as the Person-Based Approach and co-creation can improve the design and testing process. Stronger collaboration between researchers can integrate the different areas of prosthesis research and accelerate the translation process.

    Algorithms for Neural Prosthetic Applications

    In the last 15 years, there has been a significant increase in the number of motor neural prostheses used for restoring limb function lost to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis using their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have shown the possibility of controlling a prosthesis to accomplish varied tasks such as self-feeding, typing, reaching, grasping, and performing fine dexterous movements. A neural decoding system comprises three main components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper-limb kinematics, and (iii) a prosthetic arm actuated by the control signals generated by the algorithm. Machine learning algorithms that map input neural activity to output kinematics (such as finger trajectories) form the core of the neural decoding system, and the choice of algorithm is mainly determined by the neural signal of interest and the output parameter being decoded. The main elements of such a system are therefore the neural data, feature extraction, feature selection, and the machine learning algorithm. Although there have been significant advances in neural prosthetic applications, challenges remain in translating a neural prosthesis from a laboratory setting to a clinical environment; to achieve a fully functional prosthetic device with maximum user compliance and acceptance, these factors need to be addressed. This work addresses three challenges in developing robust neural decoding systems: neural variability in the peripheral nervous system during dexterous finger movements, feature selection methods based on clinically relevant metrics, and a novel ensemble-based method for decoding dexterous finger movements. (Doctoral dissertation, Bioengineering.)
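
    To make the pipeline described above concrete, the following Python sketch pairs windowed features extracted from multichannel recordings with an ensemble regressor that maps them to finger kinematics. The specific features (mean absolute value and waveform length per channel) and the random-forest decoder are assumptions chosen for illustration; the dissertation's actual feature-selection methods and ensemble decoder are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_features(window):
    """window: (n_samples, n_channels) array of raw neural/EMG samples."""
    mav = np.mean(np.abs(window), axis=0)                  # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)   # waveform length
    return np.concatenate([mav, wl])

def build_dataset(recording, kinematics, win=200, step=50):
    """Slide a window over the recording and pair each window's features
    with the finger kinematics at the end of that window."""
    X, y = [], []
    for start in range(0, len(recording) - win, step):
        X.append(extract_features(recording[start:start + win]))
        y.append(kinematics[start + win])
    return np.array(X), np.array(y)

# Hypothetical usage with arrays `recording` (samples x channels) and
# `kinematics` (samples x finger joints):
# X, y = build_dataset(recording, kinematics)
# decoder = RandomForestRegressor(n_estimators=100).fit(X, y)
# predicted_trajectory = decoder.predict(X_new)
```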

    Analysis of the human interaction with a wearable lower-limb exoskeleton

    The design of a wearable robotic exoskeleton needs to consider the interaction, whether physical or cognitive, between the human user and the robotic device. This paper presents a method to analyse the interaction between the human user and a unilateral, wearable lower-limb exoskeleton whose function is to compensate for muscle weakness around the knee joint. It is shown that the cognitive interaction is bidirectional: on the one hand, the robot gathered information from its sensors in order to detect human actions, such as the gait phases; on the other, the subjects modified their gait patterns to obtain the desired responses from the exoskeleton. Results are presented from a two-phase learning evaluation with healthy subjects and from experiments with a patient case, with the interaction assessed in terms of kinematics, kinetics and/or muscle recruitment. The human-driven response of the exoskeleton after training revealed improvements in the use of the device, and particular modifications of motion patterns were observed in the healthy subjects. Endurance (mechanical) tests also provided the criteria for performing experiments with one post-polio patient. The results with the post-polio patient demonstrate the feasibility of providing gait compensation by means of the presented wearable exoskeleton, designed with a testing procedure that involves the human users in assessing the human-robot interaction.
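
    The bidirectional interaction described above hinges on the exoskeleton detecting gait phases from its sensors and then applying knee support. The Python sketch below illustrates that loop with a generic two-phase (stance/swing) detector driven by foot-load sensors and a simple stance-phase knee stiffness; the sensor names, thresholds, and control rule are illustrative assumptions rather than the paper's actual detection and compensation scheme.

```python
# Generic sketch of gait-phase detection plus knee support, under assumed
# normalized foot-load sensors; not the paper's actual controller.
STANCE, SWING = "stance", "swing"

def detect_phase(heel_load, toe_load, load_threshold=0.1):
    """Classify the gait phase from normalized foot-load sensor readings."""
    in_contact = heel_load > load_threshold or toe_load > load_threshold
    return STANCE if in_contact else SWING

def knee_command(phase, knee_angle, stiffness=5.0):
    """Support torque for a weak knee: stiffen in stance, free in swing."""
    if phase == STANCE:
        return -stiffness * knee_angle   # resist knee flexion during stance
    return 0.0

# Hypothetical control loop over a stream of sensor samples:
# for heel_load, toe_load, knee_angle in sensor_stream:
#     phase = detect_phase(heel_load, toe_load)
#     torque = knee_command(phase, knee_angle)
#     exoskeleton.apply_knee_torque(torque)
```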