
    Feasibility of Manual Teach-and-Replay and Continuous Impedance Shaping for Robotic Locomotor Training Following Spinal Cord Injury

    Robotic gait training is an emerging technique for retraining walking ability following spinal cord injury (SCI). A key challenge in this training is determining an appropriate stepping trajectory and level of assistance for each patient, since patients vary widely in size and impairment level. Here, we demonstrate how a lightweight yet powerful robot can record subject-specific, trainer-induced leg trajectories during manually assisted stepping, then immediately replay those trajectories. Replay of the subject-specific trajectories reduced the effort required by the trainer during manual assistance, yet still generated similar patterns of muscle activation for six subjects with a chronic SCI. We also demonstrate how the impedance of the robot can be adjusted on a step-by-step basis with an error-based learning law. This impedance-shaping algorithm adapted the robot's impedance so that the robot assisted only in the regions of the step trajectory where the subject consistently exhibited errors. As a result, the subjects stepped with greater variability while still maintaining a physiologic gait pattern. These results are further steps toward tailoring robotic gait training to the needs of individual patients.
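    The error-based learning law described above suggests a per-step update of the robot's impedance profile. Below is a minimal Python sketch assuming a forgetting-plus-error-gain update over discretized gait-phase bins; the bin count, gain values, and names are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

# Minimal sketch of an error-based impedance-shaping law. All constants
# and names here are illustrative assumptions, not the authors' values.

N_BINS = 50     # gait cycle discretized into phase bins
FORGET = 0.9    # forgetting factor: assistance decays where errors stop recurring
GAIN = 2.0      # error gain: assistance grows where errors persist

class ImpedanceShaper:
    def __init__(self):
        # Robot stiffness profile, one value per gait-phase bin.
        self.stiffness = np.zeros(N_BINS)

    def update(self, tracking_error):
        """Apply the learning law after one step.

        tracking_error: array of shape (N_BINS,), the |desired - actual|
        leg-position error averaged within each phase bin for that step.
        """
        # Forget old assistance, then add assistance proportional to the
        # error just observed. Only consistently repeated errors survive
        # the forgetting, so stiffness concentrates in regions where the
        # subject reliably deviates from the recorded trajectory.
        self.stiffness = FORGET * self.stiffness + GAIN * tracking_error
        return self.stiffness
```

    The forgetting factor is what makes assistance selective: wherever the subject stops making errors, stiffness decays toward zero, which is consistent with the assist-only-where-needed behavior the abstract describes.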

    Comparison of Selection Methods in On-line Distributed Evolutionary Robotics

    In this paper, we study the impact of selection methods in the context of on-line, on-board distributed evolutionary algorithms. We propose a variant of the mEDEA algorithm in which we add a selection operator, and we apply it in a task-driven scenario. We evaluate four selection methods that induce different intensities of selection pressure on two tasks: multi-robot navigation with obstacle avoidance, and collective foraging. Experiments show that a small intensity of selection pressure is sufficient to rapidly obtain good performance on the tasks at hand. We introduce different measures to compare the selection methods, and show that the higher the selection pressure, the better the resulting performance, especially for the more challenging food-foraging task.
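    To make the notion of selection-pressure intensity concrete, here is a hedged Python sketch of four selection operators of increasing pressure, applied to a robot's local pool of received genomes as in mEDEA-style distributed evolution. The abstract does not name the four methods, so the specific operators (random, rank-proportional, tournament, elitist) and the pool representation are assumptions for illustration.

```python
import random

# Each robot holds a local pool of (genome, fitness) pairs received
# from neighbors; selection picks the genome seeding its next controller.

def random_selection(pool):
    # No selection pressure: ignore fitness entirely (vanilla mEDEA behavior).
    return random.choice(pool)[0]

def rank_proportional_selection(pool):
    # Mild pressure: probability proportional to fitness rank, not raw fitness.
    ranked = sorted(pool, key=lambda gf: gf[1])
    weights = range(1, len(ranked) + 1)
    return random.choices(ranked, weights=weights, k=1)[0][0]

def tournament_selection(pool, k=2):
    # Intermediate pressure: best of k randomly drawn candidates.
    return max(random.sample(pool, min(k, len(pool))), key=lambda gf: gf[1])[0]

def best_selection(pool):
    # Maximal pressure: always keep the locally best genome (elitist).
    return max(pool, key=lambda gf: gf[1])[0]

# Example: a robot's local pool of genomes (weight vectors) with fitnesses.
pool = [([0.1, -0.3], 2.5), ([0.7, 0.2], 4.1), ([-0.5, 0.9], 1.2)]
print(best_selection(pool))  # -> [0.7, 0.2]
```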

    Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation

    Imitation learning is an effective approach for autonomous systems to acquire control policies when an explicit reward function is unavailable, using supervision provided as demonstrations from an expert, typically a human operator. However, standard imitation learning methods assume that the agent receives examples of observation-action tuples that could be provided, for instance, to a supervised learning algorithm. This stands in contrast to how humans and animals imitate: we observe another person performing some behavior and then figure out which actions will realize that behavior, compensating for changes in viewpoint, surroundings, object positions and types, and other factors. We term this kind of imitation learning "imitation-from-observation," and propose an imitation learning method based on video prediction with context translation and deep reinforcement learning. This lifts the assumption in imitation learning that the demonstration must consist of observations in the same environment configuration, and enables a variety of interesting applications, including learning robotic skills that involve tool use simply by observing videos of human tool use. Our experimental results show the effectiveness of our approach in learning a wide range of real-world robotic tasks modeled after common household chores from videos of a human demonstrator, including sweeping, ladling almonds, and pushing objects, as well as a number of tasks in simulation.
    Comment: Accepted at ICRA 2018, Brisbane. YuXuan Liu and Abhishek Gupta contributed equally.
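    The core idea, translating demonstration frames into the learner's context and rewarding the learner for matching them, can be sketched as below. This is a minimal illustration assuming a squared feature-distance reward; translate() and features() are identity/flatten placeholders standing in for the trained deep translation and encoder networks, and all names here are hypothetical.

```python
import numpy as np

def translate(demo_frame, context_frame):
    """Placeholder for the learned context-translation model, which maps
    a demonstrator observation into the learner's viewpoint and scene,
    conditioned on a first frame from the target context. Identity here."""
    del context_frame  # the placeholder ignores the conditioning frame
    return demo_frame

def features(frame):
    """Placeholder feature encoder (a trained conv net in the real method)."""
    return frame.reshape(-1)

def imitation_reward(robot_frame, demo_frame, context_frame):
    # Reward the learner for matching the translated demonstration:
    # negative squared distance in feature space, usable as the
    # per-timestep reward for a standard deep RL algorithm.
    target = features(translate(demo_frame, context_frame))
    return -np.sum((features(robot_frame) - target) ** 2)
```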

    Deep Visual Foresight for Planning Robot Motion

    A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach requires neither a calibrated camera, an instrumented training setup, nor precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation -- pushing objects -- and can handle novel objects not seen during training.
    Comment: ICRA 2017. Supplementary video: https://sites.google.com/site/robotforesight
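    The combination of an action-conditioned predictor with model-predictive control can be sketched as a random-shooting planner: sample candidate action sequences, score each by where the model predicts a designated object pixel will end up relative to a goal, and execute the first action of the best sequence. The predictor below is a toy placeholder for the trained deep video prediction model, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

HORIZON = 5         # planning horizon (actions per candidate sequence)
N_CANDIDATES = 100  # random-shooting sample size
ACTION_DIM = 2      # e.g., planar pusher displacement

def predict_pixel_motion(frame, pixel, actions):
    """Toy stand-in for the learned predictor: assumes the tracked pixel
    translates with the summed pushing actions. The real model predicts
    pixel motion from the image and the candidate action sequence."""
    del frame  # the placeholder ignores the image
    return np.asarray(pixel, dtype=float) + actions.sum(axis=0)

def plan_action(frame, pixel, goal_pixel):
    # Sample random action sequences and keep the one whose predicted
    # final pixel position lands closest to the goal; replanning at
    # every step makes this closed-loop MPC.
    candidates = np.random.uniform(-1, 1, (N_CANDIDATES, HORIZON, ACTION_DIM))
    costs = [np.linalg.norm(predict_pixel_motion(frame, pixel, a) - goal_pixel)
             for a in candidates]
    return candidates[int(np.argmin(costs))][0]
```

    Because only the first action of the winning sequence is executed before replanning, prediction errors far into the horizon matter less than near-term accuracy, one reason this style of planner tolerates imperfect learned models.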