
    Free-Swinging Failure Tolerance for Robotic Manipulators

    Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure, to discovering post-failure capabilities, to establishing efficient methods for realizing those capabilities. The developed algorithms were verified in computer-based dynamic simulations and further validated through hardware experiments at Johnson Space Center.
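
    To make the failure mode concrete, below is a minimal single-joint Python sketch of the "free-swinging failure" defined in the abstract: at a chosen failure time the actuator torque drops to zero and the joint swings passively under the arm's dynamics. The point-mass pendulum model, PD gains, and failure time are hypothetical choices for illustration; this is not the fellowship's failure-tolerance software.

```python
# Minimal sketch (not the fellowship's actual algorithms): it only illustrates
# the failure mode the abstract defines -- a "free-swinging failure" in which a
# joint loses all actuator torque and moves passively. Model parameters, PD
# gains, and the failure time are hypothetical.
import math

M, L, G, DAMPING = 1.0, 0.5, 9.81, 0.05   # point mass, link length, gravity, viscous friction
KP, KD = 20.0, 4.0                        # PD gains for the healthy actuator
DT, T_FAIL = 0.001, 2.0                   # integration step, time the actuator fails

def simulate(t_end=5.0, target=math.pi / 4):
    theta, omega, t = 0.0, 0.0, 0.0
    while t < t_end:
        if t < T_FAIL:
            tau = KP * (target - theta) - KD * omega   # healthy joint tracks the setpoint
        else:
            tau = 0.0                                  # free-swinging failure: zero actuator torque
        # Point-mass pendulum dynamics: I * alpha = tau - m*g*l*sin(theta) - b*omega
        alpha = (tau - M * G * L * math.sin(theta) - DAMPING * omega) / (M * L * L)
        omega += alpha * DT
        theta += omega * DT
        t += DT
    return theta, omega

if __name__ == "__main__":
    print("joint state after failure:", simulate())
```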

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: this map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
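
    The abstract describes two phases: building a map of high-performing behaviors before deployment, then using that map to guide trial-and-error after damage. The sketch below illustrates that two-phase structure under toy assumptions; the one-parameter "controllers", the intact and damaged simulators, and the stopping threshold are hypothetical, and the paper's actual MAP-Elites map construction and Bayesian-optimization search are not reproduced here.

```python
# Minimal sketch of the two phases the abstract describes, not the paper's
# implementation. The toy simulators, behaviour descriptor, and threshold
# are hypothetical stand-ins.
import random

random.seed(0)

def intact_performance(controller):
    # Hypothetical simulator of the undamaged robot.
    return 1.0 - abs(controller - 0.6)

def damaged_performance(controller):
    # Hypothetical damaged robot: the best controller has shifted.
    return 1.0 - abs(controller - 0.2)

def build_map(cells=20, samples=2000):
    """Phase 1 (before deployment): map behaviours to expected performance on the intact model."""
    behaviour_map = {}
    for _ in range(samples):
        controller = random.random()
        cell = int(controller * cells)           # crude behaviour descriptor
        perf = intact_performance(controller)
        if cell not in behaviour_map or perf > behaviour_map[cell][1]:
            behaviour_map[cell] = (controller, perf)
    return behaviour_map

def adapt(behaviour_map, stop_at=0.9, max_trials=20):
    """Phase 2 (after damage): map-guided trial-and-error on the damaged robot."""
    expected = {cell: perf for cell, (_, perf) in behaviour_map.items()}
    for trial in range(1, max_trials + 1):
        cell = max(expected, key=expected.get)   # try the behaviour expected to perform best
        controller, _ = behaviour_map[cell]
        measured = damaged_performance(controller)
        if measured >= stop_at:
            return controller, measured, trial
        expected[cell] = measured - 1.0          # mark this cell as tried and disappointing
    return None, 0.0, max_trials

if __name__ == "__main__":
    ctrl, perf, trials = adapt(build_map())
    print(f"compensatory behaviour {ctrl} reached performance {perf:.2f} after {trials} trials")
```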

    Fast Object Learning and Dual-arm Coordination for Cluttered Stowing, Picking, and Packing

    Robotic picking from cluttered bins is a demanding task, for which Amazon Robotics holds challenges. The 2017 Amazon Robotics Challenge (ARC) required stowing items into a storage system, picking specific items, and packing them into boxes. In this paper, we describe the entry of team NimbRo Picking. Our deep object perception pipeline can be quickly and efficiently adapted to new items using a custom turntable capture system and transfer learning. It produces high-quality item segments, on which grasp poses are found. A planning component coordinates manipulation actions between two robot arms, minimizing execution time. The system was demonstrated successfully at the ARC, where our team placed second in both the picking task and the final stow-and-pick task. We also evaluate the individual components.
    Comment: In: Proceedings of the International Conference on Robotics and Automation (ICRA) 201
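
    As one way to picture the dual-arm coordination step, the sketch below splits a set of picks between two arms so that the estimated parallel execution time is minimized, using a simple longest-processing-time heuristic. The item names and time estimates are hypothetical, and this is an illustration of the coordination idea, not team NimbRo Picking's actual planning component.

```python
# Greedy longest-processing-time split of picks across two arms; a toy
# illustration of minimising execution time, not the team's planner.
from typing import Dict, List, Tuple

def assign_picks(pick_times: Dict[str, float]) -> Tuple[List[str], List[str], float]:
    """Assign each pick to one of two arms, balancing their estimated workloads."""
    arms: List[List[str]] = [[], []]
    loads = [0.0, 0.0]
    # Schedule the longest picks first, always onto the currently less-loaded arm.
    for item, duration in sorted(pick_times.items(), key=lambda kv: -kv[1]):
        arm = 0 if loads[0] <= loads[1] else 1
        arms[arm].append(item)
        loads[arm] += duration
    makespan = max(loads)  # arms work in parallel, so the slower one bounds execution time
    return arms[0], arms[1], makespan

if __name__ == "__main__":
    estimates = {"bottle": 6.0, "sponge": 3.5, "tape": 4.0, "book": 5.5, "marker": 2.0}
    left, right, t = assign_picks(estimates)
    print("left arm: ", left)
    print("right arm:", right)
    print(f"estimated execution time: {t:.1f} s")
```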

    Learning in behavioural robotics

    The research described in this thesis examines how machine learning mechanisms can be used in an assembly robot system to improve the reliability of the system and reduce the development workload, without reducing the flexibility of the system. The justification for this is that for a robot to perform effectively it is frequently necessary to have gained experience of its performance under a particular configuration before that configuration can be altered to produce a performance improvement. Machine learning mechanisms can automate this activity of testing, evaluating and then changing.
    From studying how other researchers have developed working robot systems, the activities which require most effort and experimentation are:
    • the selection of the optimal parameter settings;
    • the establishment of the action-sensor couplings which are necessary for the effective handling of uncertainty;
    • choosing which way to achieve a goal.
    One way to implement the first two kinds of learning is to specify a model of the coupling, or of the interaction of parameters and results, and from that model derive an appropriate learning mechanism that will find a parametrisation of that model that enables good performance. From this starting point it has been possible to show how equal or better performance can be obtained by using learning mechanisms which are neither derived from nor require a model of the task being learned. Instead, by combining iteration with a task-specific profit function, it is possible to use a generic behavioural module based on a learning mechanism to achieve the task.
    Iteration and a task-specific profit function can also be used to learn which behavioural module, from a pool of equally competent modules, is the best one to use at any one time to achieve a particular goal. Like the other two kinds of learning, this successfully automates an otherwise difficult test-and-evaluation process that would have to be performed by a developer. In doing so, it shows that, rather than being a peripheral issue to be introduced into a working system, learning, carried out in the right way, can be instrumental in the production of that working system.
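
    The third kind of learning described above, choosing the best behavioural module from a pool using iteration and a task-specific profit function, can be pictured with the sketch below. The epsilon-greedy selection rule, the toy modules, and their profit values are hypothetical stand-ins rather than the thesis's actual mechanism.

```python
# Iterative, profit-driven selection among behavioural modules; a toy
# epsilon-greedy illustration, not the thesis's mechanism.
import random

random.seed(1)

# A pool of equally competent modules; their true (unknown) average profit differs.
TRUE_PROFIT = {"module_a": 0.55, "module_b": 0.70, "module_c": 0.60}

def run_module(name: str) -> float:
    """Execute a module on the task and return the task-specific profit (noisy)."""
    return TRUE_PROFIT[name] + random.uniform(-0.1, 0.1)

def learn_best_module(trials: int = 200, epsilon: float = 0.1) -> str:
    totals = {name: 0.0 for name in TRUE_PROFIT}
    counts = {name: 0 for name in TRUE_PROFIT}
    for _ in range(trials):
        if random.random() < epsilon or not all(counts.values()):
            choice = random.choice(list(TRUE_PROFIT))                     # keep exploring occasionally
        else:
            choice = max(totals, key=lambda n: totals[n] / counts[n])     # exploit best average profit
        totals[choice] += run_module(choice)
        counts[choice] += 1
    return max(totals, key=lambda n: totals[n] / max(counts[n], 1))

if __name__ == "__main__":
    print("selected module:", learn_best_module())
```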