4,299 research outputs found


    Temporal-Difference Learning to Assist Human Decision Making during the Control of an Artificial Limb

    In this work we explore the use of reinforcement learning (RL) to help with human decision making, combining state-of-the-art RL algorithms with an application to prosthetics. Managing human-machine interaction is a problem of considerable scope, and the simplification of human-robot interfaces is especially important in the domains of biomedical technology and rehabilitation medicine. For example, amputees who control artificial limbs are often required to quickly switch between a number of control actions or modes of operation in order to operate their devices. We suggest that, by learning to anticipate (predict) a user's behaviour, artificial limbs could take on an active role in the user's control decisions so as to reduce the burden on them. Recently, we showed that RL in the form of general value functions (GVFs) could be used to accurately detect a user's control intent prior to their explicit control choices. In the present work, we explore the use of temporal-difference learning and GVFs to predict when users will switch their control influence between the different motor functions of a robot arm. Experiments were performed using a multi-function robot arm that was controlled by muscle signals from a user's body (similar to conventional artificial limb control). Our approach was able to acquire and maintain forecasts about a user's switching decisions in real time. It also provides an intuitive and reward-free way for users to correct or reinforce the decisions made by the machine learning system. We expect that when a system is certain enough about its predictions, it can begin to take over switching decisions from the user to streamline control and potentially decrease the time and effort needed to complete tasks. This preliminary study therefore suggests a way to naturally integrate human- and machine-based decision-making systems.
    Comment: 5 pages, 4 figures. This version to appear at The 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making, Princeton, NJ, USA, Oct. 25-27, 201
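    The kind of prediction described above can be illustrated with a small sketch. The Python snippet below shows a generic TD(λ)-learned general value function over linear features, using a binary cumulant that is 1 on time steps where the user switches functions; the class name, feature dimension, and step-size/discount/trace parameters are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

# Minimal sketch of a general value function (GVF) learned with TD(lambda)
# and linear function approximation. All names and parameter values here are
# illustrative assumptions, not the paper's actual setup.

class SwitchPredictionGVF:
    def __init__(self, n_features, alpha=0.1, gamma=0.97, lam=0.9):
        self.w = np.zeros(n_features)   # learned prediction weights
        self.z = np.zeros(n_features)   # eligibility trace
        self.alpha = alpha              # step size
        self.gamma = gamma              # discount on future switching events
        self.lam = lam                  # trace-decay parameter

    def predict(self, x):
        """Discounted forecast of upcoming switching for feature vector x."""
        return float(np.dot(self.w, x))

    def update(self, x, cumulant, x_next):
        """One TD(lambda) step; cumulant is 1 when the user switched, else 0."""
        delta = cumulant + self.gamma * self.predict(x_next) - self.predict(x)
        self.z = self.gamma * self.lam * self.z + x
        self.w += self.alpha * delta * self.z
```

    At each control time step one would build a feature vector from the myoelectric and robot-arm signals, call update with the observed switching cumulant, and read predict as the learned forecast of upcoming switching.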

    IEEE ACCESS SPECIAL SECTION EDITORIAL: REAL-TIME MACHINE LEARNING APPLICATIONS IN MOBILE ROBOTICS

    In the last ten years, advances in machine learning methods have brought tremendous developments to the field of robotics. Performance in many robotic applications, such as robotic grasping, locomotion, human–robot interaction, perception and control of robotic systems, navigation, planning, mapping, and localization, has improved markedly since the appearance of these methods. In particular, deep learning methods have brought significant improvements to a broad range of robot applications, including drones, mobile robots, robotic manipulators, bipedal robots, and self-driving cars. The availability of big data and more powerful computational resources, such as graphics processing units (GPUs), has made feasible numerous robotic applications that were not previously possible.

    Urban Swarms: A new approach for autonomous waste management

    Modern cities are growing ecosystems that face new challenges due to increasing population demands. One of the many problems they face nowadays is waste management, which has become a pressing issue requiring new solutions. Swarm robotics systems have been attracting increasing attention in recent years, and they are expected to become one of the main driving factors for innovation in the field of robotics. The research presented in this paper explores the feasibility of a swarm robotics system in an urban environment. By using bio-inspired foraging methods such as multi-place foraging and stigmergy-based navigation, a swarm of robots is able to improve the efficiency and autonomy of the urban waste management system in a realistic scenario. To achieve this, a diverse set of simulation experiments was conducted using real-world GIS data and implementing different garbage collection scenarios driven by robot swarms. The results show that the proposed system outperforms current approaches. Moreover, the results not only show the efficiency of our solution, but also give insights into how to design and customize these systems.
    Comment: Manuscript accepted for publication in IEEE ICRA 201
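    As a rough illustration of stigmergy-based navigation in this setting, the toy Python sketch below maintains a grid "pheromone" field that evaporates over time, is reinforced where garbage was collected, and is followed greedily by each robot. The grid representation, rates, and greedy rule are assumptions made for illustration; they are not the model or GIS-based simulation used in the paper.

```python
import numpy as np

# Toy sketch of stigmergy-based navigation on a grid: the pheromone field
# evaporates each step, is reinforced at cells where garbage was collected,
# and each robot moves toward the strongest neighbouring cell.
# Rates and the greedy movement rule are illustrative assumptions.

EVAPORATION_RATE = 0.02   # fraction of pheromone lost per step
DEPOSIT_AMOUNT = 1.0      # pheromone added where garbage was found

def step_pheromone(pheromone, pickup_cells):
    """Evaporate the field, then deposit pheromone at collection sites."""
    pheromone *= (1.0 - EVAPORATION_RATE)
    for r, c in pickup_cells:
        pheromone[r, c] += DEPOSIT_AMOUNT
    return pheromone

def next_cell(pheromone, position):
    """Move to the 4-neighbour with the most pheromone (stay put on ties)."""
    r, c = position
    best, best_value = position, pheromone[r, c]
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < pheromone.shape[0] and 0 <= nc < pheromone.shape[1]:
            if pheromone[nr, nc] > best_value:
                best, best_value = (nr, nc), pheromone[nr, nc]
    return best
```

    In a multi-place foraging setup, several such fields (one per collection depot) could be combined, but that extension is omitted here.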

    Constraining the Size Growth of the Task Space with Socially Guided Intrinsic Motivation using Demonstrations

    This paper presents an algorithm for learning a highly redundant inverse model in continuous and non-preset environments. Our Socially Guided Intrinsic Motivation by Demonstrations (SGIM-D) algorithm combines the advantages of both social learning and intrinsic motivation to specialise in a wide range of skills while lessening its dependence on the teacher. SGIM-D is evaluated on a fishing-skill learning experiment.
    Comment: IJCAI Workshop on Agents Learning Interactively from Human Teachers (ALIHT), Barcelona, Spain (2011
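    To give a flavour of how social guidance and intrinsic motivation can be combined, the Python sketch below lets a learner pick, per episode, either autonomous exploration or imitation of a teacher demonstration, biased toward whichever strategy has recently yielded more competence progress. This bookkeeping is a simplified, hypothetical interpretation for illustration only, not the published SGIM-D procedure.

```python
import random

# Simplified sketch of arbitrating between intrinsically motivated exploration
# and socially guided imitation. The strategy names, moving-average update,
# and epsilon floor are illustrative assumptions, not the SGIM-D algorithm.

class StrategySelector:
    def __init__(self, epsilon=0.1):
        self.progress = {"explore": 1e-3, "imitate": 1e-3}  # recent competence gains
        self.epsilon = epsilon                              # floor on random choice

    def choose(self):
        """Pick a strategy proportionally to its recent competence progress."""
        if random.random() < self.epsilon:
            return random.choice(list(self.progress))
        total = sum(self.progress.values())
        threshold = random.uniform(0.0, total)
        running = 0.0
        for name, value in self.progress.items():
            running += value
            if threshold <= running:
                return name
        return "imitate"

    def report(self, strategy, competence_gain):
        """Update the chosen strategy's progress with an exponential moving average."""
        self.progress[strategy] = (
            0.8 * self.progress[strategy] + 0.2 * max(competence_gain, 0.0)
        )
```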

    AWARE: Platform for Autonomous self-deploying and operation of Wireless sensor-actuator networks cooperating with unmanned AeRial vehiclEs

    This paper presents the AWARE platform, which seeks to enable the cooperation of autonomous aerial vehicles with ground wireless sensor-actuator networks comprising both static and mobile nodes carried by vehicles or people. In particular, the paper presents the middleware, the wireless sensor network, node deployment by means of an autonomous helicopter, and the surveillance and tracking functionalities of the platform. Furthermore, the paper presents the first general experiments of the AWARE project, which took place in March 2007 with the assistance of the Seville fire brigades.