
    Planning and control for simulated robotic Sandia hand for the DARPA Robotic Challenge

    Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 32-33).
    The DARPA Robotic Challenge (DRC) required the development of user interface, perception, and planning and control modules for a robotic humanoid. This thesis focuses on the planning and control component for the manipulation qualification task of the virtual section of the DRC. Nonlinear algorithms were employed for the planning systems, such as the grasp optimization system and the robot state trajectory computation system, while closed-loop control used a linear proportional-derivative (PD) joint position controller. The nonlinear planning algorithms leave room for improvement, but in their current form they suffice to complete the manipulation qualification task. Likewise, although PD controllers appear adequate for the closed-loop control, properly tuned PID controllers might yield higher accuracy. In conclusion, a linear controller appears sufficient for controlling the highly nonlinear ATLAS humanoid robot and Sandia hand, as long as accurate optimization and planning systems complement that control.
    by Cecilia G. Cantu. S.B.
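    The control law described in the abstract, a linear PD joint position controller tracking a planned joint trajectory, is a standard construction. Below is a minimal sketch of such a controller; the gain values, joint dimensions, and function names are hypothetical illustrations, not taken from the thesis.

```python
import numpy as np

def pd_torque(q, qd, q_des, qd_des, kp, kd):
    """Linear PD joint position control: tau = Kp*(q_des - q) + Kd*(qd_des - qd).

    q, qd         -- measured joint positions/velocities (rad, rad/s)
    q_des, qd_des -- desired positions/velocities from the planned trajectory
    kp, kd        -- proportional/derivative gains (assumed values below)
    """
    return kp * (q_des - q) + kd * (qd_des - qd)

# Hypothetical example for a 3-joint finger of a dexterous hand.
kp = np.array([20.0, 15.0, 10.0])     # assumed gains, not from the thesis
kd = np.array([1.0, 0.8, 0.5])
q = np.array([0.10, 0.25, 0.05])      # current measured joint angles
qd = np.zeros(3)                      # current measured joint velocities
q_des = np.array([0.30, 0.40, 0.20])  # next waypoint from the planned trajectory
qd_des = np.zeros(3)

tau = pd_torque(q, qd, q_des, qd_des, kp, kd)
print(tau)  # joint torque commands sent to the simulated hand
```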

    Reset-free Trial-and-Error Learning for Robot Damage Recovery

    The high probability of hardware failures prevents many advanced robots (e.g., legged robots) from being confidently deployed in real-world situations (e.g., post-disaster rescue). Instead of attempting to diagnose the failures, robots could adapt by trial-and-error in order to be able to complete their tasks. In this situation, damage recovery can be seen as a Reinforcement Learning (RL) problem. However, the best RL algorithms for robotics require the robot and the environment to be reset to an initial state after each episode, that is, the robot is not learning autonomously. In addition, most of the RL methods for robotics do not scale well with complex robots (e.g., walking robots) and either cannot be used at all or take too long to converge to a solution (e.g., hours of learning). In this paper, we introduce a novel learning algorithm called "Reset-free Trial-and-Error" (RTE) that (1) breaks the complexity by pre-generating hundreds of possible behaviors with a dynamics simulator of the intact robot, and (2) allows complex robots to quickly recover from damage while completing their tasks and taking the environment into account. We evaluate our algorithm on a simulated wheeled robot, a simulated six-legged robot, and a real six-legged walking robot that are damaged in several ways (e.g., a missing leg, a shortened leg, a faulty motor) and whose objective is to reach a sequence of targets in an arena. Our experiments show that the robots can recover most of their locomotion abilities in an environment with obstacles, and without any human intervention.
    Comment: 18 pages, 16 figures, 3 tables, 6 pseudocodes/algorithms, video at https://youtu.be/IqtyHFrb3BU, code at https://github.com/resibots/chatzilygeroudis_2018_rt
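    The core loop the abstract describes, pick the most promising pre-generated behavior, execute it on the damaged robot without resetting, and update a model of how reality deviates from the intact-robot simulation, can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' implementation (their code is at the linked repository): the paper uses a Gaussian process to learn the outcome correction and a planner to select behaviors, whereas this sketch uses a running mean and greedy selection, and every name and value here is hypothetical.

```python
import numpy as np

# Hypothetical behavior repertoire: each pre-generated behavior (here just an id)
# maps to the (dx, dy) displacement predicted by the intact-robot simulator.
# In the paper, hundreds of such behaviors are pre-generated offline.
repertoire = {
    "gait_a": np.array([0.30, 0.00]),
    "gait_b": np.array([0.20, 0.15]),
    "gait_c": np.array([0.00, 0.25]),
}

# Learned correction per behavior: observed outcome minus simulated prediction.
# The paper learns this with Gaussian process regression; a running mean is
# used here purely to keep the sketch short.
corrections = {k: np.zeros(2) for k in repertoire}
counts = {k: 0 for k in repertoire}

def execute_on_robot(behavior):
    """Placeholder for running the behavior on the (damaged) physical robot and
    measuring the actual displacement, e.g. with motion capture. The damage
    factor below is an assumption for the sake of the example."""
    damage = np.array([0.5, 1.0])  # assumed: damage halves forward motion
    return repertoire[behavior] * damage

def rte_step(position, target):
    """One reset-free trial: pick the behavior whose corrected prediction lands
    closest to the target, run it, and update that behavior's correction."""
    best = min(repertoire, key=lambda b: np.linalg.norm(
        position + repertoire[b] + corrections[b] - target))
    observed = execute_on_robot(best)
    counts[best] += 1
    # Incremental mean of (observed - simulated) for the chosen behavior.
    corrections[best] += (observed - repertoire[best] - corrections[best]) / counts[best]
    return position + observed  # no reset: the robot continues from here

pos, target = np.zeros(2), np.array([1.0, 1.0])
for _ in range(10):
    pos = rte_step(pos, target)
print(pos)  # position after ten reset-free trials
```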