
    Temporal-Difference Learning to Assist Human Decision Making during the Control of an Artificial Limb

    In this work we explore the use of reinforcement learning (RL) to help with human decision making, combining state-of-the-art RL algorithms with an application to prosthetics. Managing human-machine interaction is a problem of considerable scope, and the simplification of human-robot interfaces is especially important in the domains of biomedical technology and rehabilitation medicine. For example, amputees who control artificial limbs are often required to quickly switch between a number of control actions or modes of operation in order to operate their devices. We suggest that by learning to anticipate (predict) a user's behaviour, artificial limbs could take on an active role in a human's control decisions so as to reduce the burden on their users. Recently, we showed that RL in the form of general value functions (GVFs) could be used to accurately detect a user's control intent prior to their explicit control choices. In the present work, we explore the use of temporal-difference learning and GVFs to predict when users will switch their control influence between the different motor functions of a robot arm. Experiments were performed using a multi-function robot arm that was controlled by muscle signals from a user's body (similar to conventional artificial limb control). Our approach was able to acquire and maintain forecasts about a user's switching decisions in real time. It also provides an intuitive and reward-free way for users to correct or reinforce the decisions made by the machine learning system. We expect that when a system is certain enough about its predictions, it can begin to take over switching decisions from the user, streamlining control and potentially decreasing the time and effort needed to complete tasks. This preliminary study therefore suggests a way to naturally integrate human- and machine-based decision making systems.

    Comment: 5 pages, 4 figures. This version to appear at The 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making, Princeton, NJ, USA, Oct. 25-27, 2013.
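    The switching forecasts described in this abstract can be illustrated with a small sketch. The Python example below shows how a single GVF learned by linear TD(lambda) can maintain a running forecast of an upcoming switch event; the feature size, step size, discount, and the binary switch cumulant are illustrative assumptions, not the parameters or implementation used in the paper.

    import numpy as np

    # Minimal TD(lambda) sketch of a general value function (GVF) that forecasts
    # an upcoming "switch" event from a stream of control features. All constants
    # below are illustrative assumptions, not values from the paper.

    class SwitchGVF:
        def __init__(self, n_features, alpha=0.1, gamma=0.97, lam=0.9):
            self.w = np.zeros(n_features)   # learned prediction weights
            self.z = np.zeros(n_features)   # accumulating eligibility trace
            self.alpha, self.gamma, self.lam = alpha, gamma, lam

        def predict(self, x):
            # Forecast of the discounted sum of future switch events.
            return float(self.w @ x)

        def update(self, x, cumulant, x_next):
            # One step of linear TD(lambda).
            delta = cumulant + self.gamma * (self.w @ x_next) - (self.w @ x)
            self.z = self.gamma * self.lam * self.z + x
            self.w += self.alpha * delta * self.z

    rng = np.random.default_rng(0)
    gvf = SwitchGVF(n_features=8)
    x = rng.random(8)
    for t in range(1000):
        x_next = rng.random(8)                # stand-in for EMG-derived features
        switch = 1.0 if t % 50 == 0 else 0.0  # stand-in for an observed mode switch
        gvf.update(x, switch, x_next)
        x = x_next
    print("current switch forecast:", gvf.predict(x))

    In the setting described above, the cumulant would be driven by the user's actual switching behaviour, which is also what provides the reward-free channel for correcting or reinforcing the system's forecasts.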

    First Steps Towards an Intelligent Laser Welding Architecture Using Deep Neural Networks and Reinforcement Learning

    To address control difficulties in laser welding, we propose the idea of a self-learning and self-improving laser welding system that combines three modern machine learning techniques. We first show the ability of a deep neural network to extract meaningful, low-dimensional features from high-dimensional laser-welding camera data. These features are then used by a temporal-difference learning algorithm to predict and anticipate important aspects of the system's sensor data. The third part of our proposed architecture suggests using these features and predictions to learn to deliver situation-appropriate welding power; preliminary control results are demonstrated using a laser-welding simulator. The intelligent laser-welding architecture introduced in this work has the capacity to improve its performance without further human assistance and therefore addresses key requirements of modern industry. To our knowledge, it is the first demonstrated combination of deep learning and Nexting with general value functions, and also the first use of deep learning for laser welding specifically and for production engineering in general. This work also provides a unique example of how predictions can be explicitly learned using reinforcement learning to support laser welding. We believe that it would be straightforward to adapt our approach to other production engineering applications.
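    As a rough illustration of the first two parts of this pipeline, the sketch below pairs a stand-in feature extractor (a fixed random projection in place of the deep network) with a small bank of Nexting-style TD(lambda) predictors operating at several timescales. The dimensions, signals, and parameters are assumptions made for the example, not the architecture reported in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    N_PIXELS, N_FEATURES = 1024, 16
    # Stand-in for the deep feature extractor: a fixed random projection.
    ENCODER = rng.normal(scale=1.0 / np.sqrt(N_PIXELS), size=(N_FEATURES, N_PIXELS))

    def encode(frame):
        return np.tanh(ENCODER @ frame)

    class NextingPredictor:
        """Linear TD(lambda) predictor of a sensor signal at one timescale."""
        def __init__(self, n_features, gamma, alpha=0.05, lam=0.9):
            self.w = np.zeros(n_features)
            self.z = np.zeros(n_features)
            self.gamma, self.alpha, self.lam = gamma, alpha, lam

        def step(self, x, signal, x_next):
            delta = signal + self.gamma * (self.w @ x_next) - (self.w @ x)
            self.z = self.gamma * self.lam * self.z + x
            self.w += self.alpha * delta * self.z
            return float(self.w @ x_next)            # updated forecast

    # One predictor per discount; gamma corresponds to looking roughly
    # 1 / (1 - gamma) time steps into the future.
    predictors = [NextingPredictor(N_FEATURES, g) for g in (0.5, 0.9, 0.99)]

    x = encode(rng.random(N_PIXELS))
    for t in range(500):
        x_next = encode(rng.random(N_PIXELS))        # stand-in for the next camera frame
        sensor = np.sin(t / 20.0)                    # stand-in for a welding sensor signal
        forecasts = [p.step(x, sensor, x_next) for p in predictors]
        x = x_next
    print("multi-timescale forecasts:", forecasts)

    In the full architecture, forecasts of this kind would feed the third, control-learning component that selects situation-appropriate welding power; that step is beyond the scope of this sketch.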