41 research outputs found

    Trajectories and Keyframes for Kinesthetic Teaching: A Human-Robot Interaction Perspective

    Presented at the 7th ACM/IEEE International Conference on Human-Robot Interaction, March 5-8, 2012, Boston, Massachusetts, USA. Kinesthetic teaching is an approach to providing demonstrations to a robot in Learning from Demonstration whereby a human physically guides a robot to perform a skill. In the common usage of kinesthetic teaching, the robot's trajectory during a demonstration is recorded from start to end. In this paper we consider an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive keyframes that can be connected to perform the skill. We present a user study (n = 34) comparing the two approaches and highlighting their complementary nature. The study also tests and shows the potential benefits of iterative and adaptive versions of keyframe demonstrations. Finally, we introduce a hybrid method that combines trajectories and keyframes in a single demonstration.
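The contrast between the two demonstration types can be sketched in code. This is only an illustration of the recording idea, not the paper's implementation; `read_pose` and the teacher-trigger iterable are hypothetical stand-ins for the robot's pose sensor and the teacher's keyframe signal (e.g. a button press).

```python
import time

def record_trajectory(read_pose, duration_s=5.0, rate_hz=50):
    """Trajectory demonstration: densely sample the robot pose
    at a fixed rate while the human guides the arm."""
    samples = []
    for _ in range(int(duration_s * rate_hz)):
        samples.append(read_pose())
        time.sleep(1.0 / rate_hz)
    return samples

def record_keyframes(read_pose, teacher_triggers):
    """Keyframe demonstration: store a pose only when the teacher
    signals that the current pose is a keyframe."""
    keyframes = []
    for trigger in teacher_triggers:
        if trigger:
            keyframes.append(read_pose())
    return keyframes
```

A trajectory thus captures everything, including the teacher's hesitations and corrections, while keyframes capture only the sparse points the teacher deems important.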

    Robots learning actions and goals from everyday people

    Robots are destined to move beyond caged factory floors toward domains where they will interact closely with humans. They will encounter highly varied environments, scenarios, and user demands. As a result, programming robots after deployment will be an important requirement. To address this challenge, the field of Learning from Demonstration (LfD) emerged with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. LfD within robotics has been around for more than 30 years and is still an actively researched field. However, very little research has examined the implications of having a non-robotics expert as a teacher. This thesis aims to bridge this gap by developing learning from demonstration algorithms and interaction paradigms that allow non-expert people to teach robots new skills. The first step of the thesis was to evaluate how non-expert teachers provide demonstrations to robots. Keyframe demonstrations are introduced to the field of LfD to help people teach skills to robots and are compared with traditional trajectory demonstrations. The utility of keyframes is validated by a series of experiments with more than 80 participants. Based on these experiments, a hybrid of trajectory and keyframe demonstrations is proposed to take advantage of both, and a method is developed to learn from trajectories, keyframes, and hybrid demonstrations in a unified way. A key insight from these user experiments was that teachers are goal oriented: they concentrated on achieving the goal of the demonstrated skill rather than on providing good-quality demonstrations. Based on this observation, the thesis introduces a method that can learn actions and goals from the same set of demonstrations. The action models are used to execute the skill and the goal models to monitor this execution.
A user study with eight participants and two skills showed that successful goal models can be learned from non-expert teacher data even if the resulting action models are not as successful. Following these results, the thesis further develops a self-improvement algorithm that uses the goal-monitoring output to improve the action models without further user input. This approach is validated with an expert user and two skills. Finally, the thesis builds an interactive LfD system that incorporates both goal learning and self-improvement and evaluates it with 12 naive users and three skills. The results suggest that teacher feedback during experiments increases skill execution and monitoring success. Moreover, non-expert data can be used as a seed for self-improvement to fix unsuccessful action models. (Ph.D. thesis)
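The self-improvement idea described in the abstract — execute with the action model, check the outcome with the goal model, and refine the action model when the goal is not achieved — can be sketched as a generic loop. This is a minimal illustration of the control flow only, not the thesis algorithm; `execute_skill`, `goal_achieved`, and `perturb` are hypothetical callables standing in for the learned models and the model-update step.

```python
def self_improve(execute_skill, goal_achieved, perturb, action_model,
                 max_iters=20):
    """Goal-monitored self-improvement loop (a sketch of the idea):
    run the action model, let the goal model judge the outcome, and
    perturb the action model until the goal model reports success."""
    for _ in range(max_iters):
        outcome = execute_skill(action_model)
        if goal_achieved(outcome):        # goal model monitors execution
            return action_model, True
        action_model = perturb(action_model)  # refine without user input
    return action_model, False
```

The key property, reflected in the thesis results, is that the loop needs no further demonstrations: the goal model alone supplies the success signal.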

    Sampling Heuristics for Optimal Motion Planning in High Dimensions

    ©2011 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Presented at the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 25-30 September 2011, San Francisco, CA. DOI: 10.1109/IROS.2011.6095077. We present a sampling-based motion planner that improves the performance of the probabilistically optimal RRT* planning algorithm. Experiments demonstrate that our planner finds a fast initial path and decreases the cost of this path iteratively. We identify and address the limitations of RRT* in high-dimensional configuration spaces. We introduce a sampling bias to facilitate and accelerate cost decrease in these spaces and a simple node-rejection criterion to increase efficiency. Finally, we incorporate an existing bidirectional search approach, which decreases the time to find an initial path. We analyze our planner on a simple 2D navigation problem in detail to show its properties and test it on a difficult 7D manipulation problem to show its effectiveness. Our results consistently demonstrate improved performance over RRT*.
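The general flavor of a sample-rejection heuristic for RRT*-style planners can be illustrated in a few lines. This is a generic sketch of the idea, not the paper's method: once a solution of cost `best_cost` exists, any sample whose straight-line cost through start and goal already exceeds that cost cannot improve the path and can be rejected. The 2D workspace and uniform sampler here are assumptions for the example.

```python
import math
import random

def informed_sample(start, goal, best_cost, bounds, rng=random.random):
    """Rejection sampling in a 2D box (xmin, xmax, ymin, ymax): keep
    only samples that could lie on a path cheaper than best_cost.
    A sketch of a node-rejection heuristic, not the paper's algorithm."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    xmin, xmax, ymin, ymax = bounds
    while True:
        x = (xmin + rng() * (xmax - xmin),
             ymin + rng() * (ymax - ymin))
        # lower bound on the cost of any path through x; prune if hopeless
        if dist(start, x) + dist(x, goal) <= best_cost:
            return x
```

Geometrically, the accepted region is the ellipse with the start and goal as foci, which shrinks as the best-known path cost decreases, concentrating samples where improvement is still possible.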

    Possible reflex pathway between medial meniscus and semimembranosus muscle: an experimental study in rabbits

    The meniscus is a well-innervated tissue with four types of receptors. These receptors are mainly concentrated at the anterior and posterior horns. Although they are thought to take part in a reflex arc, this function has not been thoroughly evaluated. We hypothesized that electrical stimulation of the normal meniscus would elicit electromyographic activity of the hamstring muscle via the reflex arc. Five adult domestic male rabbits were used in this study. Under general anesthesia, knee arthrotomy and thigh dissection were performed to expose the medial meniscus and semimembranosus muscle. Menisci were stimulated by Teflon-coated bipolar needle electrodes. Needles were placed in the posterior horn of the medial menisci. Two Teflon-coated monopolar needle electrodes were placed in the semimembranosus muscle. A four-channel electromyograph was used for recording. Two different potentials were recorded from the target muscle. The first response had a very short distal latency, and its amplitude changed in accordance with the strength of the stimulus, suggesting that this response was elicited by direct muscle stimulation. A second, delayed response with smaller amplitude also appeared in some traces. The latency and the amplitude of this second response were fairly stable, indicating that the delayed response was generated by a reflex pathway; it was seen in all subjects.

    Learning Tasks and Skills Together From a Human Teacher

    We are interested in developing Learning from Demonstration (LfD) systems that are tailored to be used by everyday people. We highlight and tackle the issues of skill learning, task learning, and interaction in the context of LfD. As part of the AAAI 2011 LfD Challenge, we will demonstrate some of our most recent Socially Guided Machine Learning work, in which the PR2 robot learns both low-level skills and high-level tasks through an ongoing social dialog with a human partner.

    Keyframe-based Learning from Demonstration Method and Evaluation

    The original publication is available at www.springerlink.com. DOI: 10.1007/s12369-012-0160-0. We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a human-robot interaction perspective. Our approach, Keyframe-based Learning from Demonstration (KLfD), takes demonstrations that consist of keyframes: a sparse set of points in the state space that produces the intended skill when visited in sequence. The conventional type of trajectory demonstrations, or a hybrid of the two, is also handled by KLfD through a conversion to keyframes. Our method produces a skill model that consists of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D, and scooping, pouring, and placing skills on a humanoid robot. KLfD has performance similar to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when the demonstration type suits the skill.
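The reproduction pipeline the abstract describes — group corresponding keyframes across demonstrations into ordered clusters, then interpolate through them — can be sketched as follows. This is a heavily simplified illustration, not the paper's method: clustering is reduced to averaging keyframes that share an index across demonstrations, and linear interpolation stands in for the spline; the 2D point representation is an assumption.

```python
def cluster_means(demos):
    """demos: list of keyframe demonstrations, each an equal-length
    list of (x, y) points. Keyframes at the same index are treated as
    one cluster; return the mean pose of each cluster, in order."""
    n = len(demos[0])
    return [
        (sum(d[i][0] for d in demos) / len(demos),
         sum(d[i][1] for d in demos) / len(demos))
        for i in range(n)
    ]

def reproduce(means, steps_per_segment=10):
    """Reproduce the skill by interpolating through the ordered cluster
    means (linear interpolation standing in for the spline)."""
    path = []
    for (x0, y0), (x1, y1) in zip(means, means[1:]):
        for k in range(steps_per_segment):
            t = k / steps_per_segment
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    path.append(means[-1])
    return path
```

In the paper's terms, each cluster mean (with its spread) plays the role of a Sequential Pose Distribution, and the interpolated path is the executed skill.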
