
    Tactile Guidance for Policy Refinement and Reuse

    Demonstration learning is a powerful and practical technique for developing robot behaviors. Even so, development remains a challenge, and limitations in the demonstrations can degrade policy performance. This work presents an approach for policy improvement and adaptation through a tactile interface located on the body of a robot. We introduce the Tactile Policy Correction (TPC) algorithm, which employs tactile feedback both to refine a demonstrated policy and to reuse it for the development of other policies. We validate TPC on a humanoid robot performing grasp-positioning tasks. The performance of the demonstrated policy is found to improve with tactile corrections. Tactile guidance is also shown to enable the development of policies able to successfully execute novel, undemonstrated tasks.
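    As a concrete illustration of the idea, the following is a minimal sketch of a tactile-correction refinement loop: executed actions are shifted by the teacher's tactile offsets, and the corrected state-action pairs are stored so later predictions reflect the correction. The policy class, the offset-in-action-space assumption, and all names are hypothetical; this is not the authors' implementation of TPC.

```python
# Minimal sketch of tactile-feedback policy refinement (hypothetical, not TPC itself).
import numpy as np

class NearestNeighborPolicy:
    """Toy policy: return the stored action of the closest stored state."""
    def __init__(self, states, actions):
        self.states = np.asarray(states, dtype=float)
        self.actions = np.asarray(actions, dtype=float)

    def predict(self, state):
        idx = np.argmin(np.linalg.norm(self.states - np.asarray(state, dtype=float), axis=1))
        return self.actions[idx]

    def add(self, state, action):
        self.states = np.vstack([self.states, state])
        self.actions = np.vstack([self.actions, action])

def refine_with_tactile(policy, episode_states, tactile_offsets):
    """Replay an episode; wherever the teacher touches the robot, shift the
    executed action by the tactile offset and store the corrected pair."""
    for state, offset in zip(episode_states, tactile_offsets):
        action = policy.predict(state)
        offset = np.asarray(offset, dtype=float)
        if np.linalg.norm(offset) > 0.0:        # touch detected on the robot body
            policy.add(state, action + offset)  # correction applied in action space
    return policy

# Example: one demonstrated pair, refined by a single touch correction.
policy = NearestNeighborPolicy([[0.0, 0.0]], [[0.10, 0.00]])
refine_with_tactile(policy, [[0.0, 0.0]], [[0.02, 0.00]])
print(len(policy.states))  # 2: the corrected state-action pair has been added
```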

    Tactile Correction and Multiple Training Data Sources for Robot Motion Control

    This work considers our approach to robot motion control learning from the standpoint of multiple data sources. Our paradigm derives data from human teachers who provide task demonstrations and tactile corrections for policy refinement and reuse. We contribute a novel formalization for this data and identify future directions for the algorithm to reason explicitly about differences between data sources.
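    To make the two-source view concrete, here is a hedged sketch of how training data might be tagged by its origin so a learner can reason explicitly about source; the field names and the weighting scheme are assumptions for illustration, not the paper's formalization.

```python
# Hypothetical tagging of training data by source (demonstration vs. tactile correction).
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple

class Source(Enum):
    DEMONSTRATION = "teleoperated demonstration"
    TACTILE_CORRECTION = "tactile correction"

@dataclass
class DataPoint:
    state: Tuple[float, ...]
    action: Tuple[float, ...]
    source: Source

def source_weights(data: List[DataPoint],
                   w_demo: float = 1.0,
                   w_tactile: float = 2.0) -> List[float]:
    """One way to reason about data source: corrections, which fix observed
    errors, could be trusted more heavily than raw demonstrations."""
    return [w_tactile if d.source is Source.TACTILE_CORRECTION else w_demo
            for d in data]

data = [DataPoint((0.0,), (0.1,), Source.DEMONSTRATION),
        DataPoint((0.0,), (0.12,), Source.TACTILE_CORRECTION)]
print(source_weights(data))  # [1.0, 2.0]
```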

    Trajectory-Based Skill Learning Using Generalized Cylinders

    In this article, we introduce Trajectory Learning using Generalized Cylinders (TLGC), a novel trajectory-based approach for learning skills from human demonstrations. To model a demonstrated skill, TLGC uses a Generalized Cylinder, a geometric representation composed of an arbitrary space curve called the spine and a surface with smoothly varying cross-sections. Our approach is the first application of Generalized Cylinders to manipulation, and its geometric representation offers several key features: it identifies and extracts the implicit characteristics and boundaries of the skill by encoding the demonstration space; it supports the generation of multiple skill reproductions that maintain those characteristics; the constructed model can generalize the skill to unforeseen situations through trajectory-editing techniques; and it allows for obstacle avoidance and interactive human refinement of the resulting model through kinesthetic correction. We validate our approach through a set of real-world experiments with both a Jaco 6-DOF and a Sawyer 7-DOF robotic arm.
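    The sketch below illustrates the core geometric idea under simplifying assumptions (a circular cross-section and a fixed offset direction): a spine curve plus varying cross-section radii define the skill boundary, points can be tested against that boundary, and reproductions are generated inside it. Class and method names are hypothetical; the actual TLGC construction from multiple demonstrations is considerably richer.

```python
# Illustrative Generalized Cylinder for skill encoding (simplified, circular cross-sections).
import numpy as np

class GeneralizedCylinder:
    def __init__(self, spine, radii):
        self.spine = np.asarray(spine, dtype=float)   # (T, 3) space curve (the spine)
        self.radii = np.asarray(radii, dtype=float)   # (T,) cross-section radii

    def contains(self, point):
        """A point lies inside the model if it falls within the cross-section
        radius of its nearest spine sample (skill boundary check)."""
        d = np.linalg.norm(self.spine - point, axis=1)
        i = np.argmin(d)
        return d[i] <= self.radii[i]

    def reproduce(self, blend=0.5):
        """Generate one reproduction: a curve offset from the spine but kept
        inside the boundary, preserving the demonstrated characteristics."""
        direction = np.tile([0.0, 0.0, 1.0], (len(self.spine), 1))
        return self.spine + blend * self.radii[:, None] * direction

# Example: a straight spine with a widening boundary.
spine = np.stack([np.linspace(0, 1, 50), np.zeros(50), np.zeros(50)], axis=1)
gc = GeneralizedCylinder(spine, radii=np.linspace(0.05, 0.15, 50))
print(gc.contains(np.array([0.5, 0.0, 0.05])))  # True: within the skill boundary
```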

    Tactile Guidance for Policy Adaptation

    Demonstration learning is a powerful and practical technique for developing robot behaviors. Even so, development remains a challenge, and limitations in the demonstrations, for example correspondence issues between the robot and the demonstrator, can degrade policy performance. This work presents an approach for policy improvement through a tactile interface located on the body of the robot. We introduce the Tactile Policy Correction (TPC) algorithm, which employs tactile feedback both to refine a demonstrated policy and to reuse it for the development of other policies. The TPC algorithm is validated on a humanoid robot performing grasp-positioning tasks. The performance of the demonstrated policy is found to improve with tactile corrections. Tactile guidance is also shown to enable the development of policies able to successfully execute novel, undemonstrated tasks. We further show that different modalities, namely teleoperation and tactile control, provide information about allowable variability in the target behavior in different areas of the state space.
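    The observation that teleoperation and tactile control reveal allowable variability in different regions of the state space can be illustrated with a small sketch: bin the collected data by state region and measure the per-modality spread of actions. The binning scheme and all names are illustrative assumptions, not the authors' analysis.

```python
# Hypothetical per-region, per-modality variability estimate.
import numpy as np
from collections import defaultdict

def variability_by_region(states, actions, modalities, bin_width=0.1):
    """Return, per (region, modality), the standard deviation of demonstrated
    actions; a large spread suggests the behavior tolerates variation there."""
    spread = defaultdict(list)
    for s, a, m in zip(states, actions, modalities):
        region = tuple(np.floor(np.asarray(s) / bin_width).astype(int))
        spread[(region, m)].append(a)
    return {key: float(np.std(vals)) for key, vals in spread.items()}

# Example: tactile corrections tightly clustered, teleoperation more varied.
states = [[0.05], [0.06], [0.05], [0.07]]
actions = [0.50, 0.51, 0.30, 0.70]
modalities = ["tactile", "tactile", "teleop", "teleop"]
print(variability_by_region(states, actions, modalities))
```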

    A Survey of Tactile Human-Robot Interactions

    Robots come into physical contact with humans in both experimental and operational settings. Many potential factors motivate the detection of human contact, ranging from safe robot operation around humans to robot behaviors that depend on human guidance. This article presents a review of current research within the field of Tactile Human–Robot Interactions (Tactile HRI), where physical contact from a human is detected by a robot during the execution or development of robot behaviors. Approaches are presented from two viewpoints: the types of physical interactions that occur between the human and robot, and the types of sensors used to detect these interactions. We contribute a structure for the categorization of Tactile HRI research within each viewpoint. Tactile sensing techniques are grouped into three categories, according to what covers the sensors: (i) a hard shell, (ii) a flexible substrate or (iii) no covering. Three categories of physical HRI likewise are identified, consisting of contact that (i) interferes with robot behavior execution, (ii) contributes to behavior execution and (iii) contributes to behavior development. We populate each category with the current literature, and furthermore identify the state of the art within categories and promising areas for future research.
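    As a compact restatement of the survey's two-viewpoint structure, the following sketch encodes the sensor-covering and contact-role categories as enumerations with which a system or paper could be tagged; it is purely illustrative.

```python
# The survey's taxonomy expressed as two enumerations (illustrative only).
from enum import Enum, auto

class SensorCovering(Enum):
    HARD_SHELL = auto()          # sensors beneath a rigid shell
    FLEXIBLE_SUBSTRATE = auto()  # sensors in or under a flexible "skin"
    NO_COVERING = auto()         # bare sensors, e.g., exposed force/torque sensing

class ContactRole(Enum):
    INTERFERES_WITH_EXECUTION = auto()   # e.g., collision triggering a safety stop
    CONTRIBUTES_TO_EXECUTION = auto()    # e.g., guiding an ongoing behavior
    CONTRIBUTES_TO_DEVELOPMENT = auto()  # e.g., kinesthetic teaching or correction

# Example tag for a kinesthetic-teaching system using an artificial skin.
example = (SensorCovering.FLEXIBLE_SUBSTRATE, ContactRole.CONTRIBUTES_TO_DEVELOPMENT)
print(example)
```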

    A Probabilistic Model of Robot Action in Physical Interaction with a Human

    This doctoral thesis develops a probabilistic model with which a robot makes decisions about its actions through physical interaction with a human. By classifying tactile stimuli on the basis of a capacitive sensor, force and spatial position, the elements and meaning of the interaction are discerned. To give the model a degree of autonomy and the ability to move through space, the problem of spatial motion is addressed as part of the research. A multi-criteria interpretation of the workspace is defined that distinguishes between objects in the environment, the human, goals, the robot itself and the robot's trajectories. The interaction model is formulated as a sequence of actions executed by the robot, which ultimately results in robot behavior. The probability variables of the model are defined from the interaction with the human. Learned patterns represent long-term knowledge on the basis of which robot action is shaped in accordance with the current state of the environment. Through temporal weighting, more recent events are assigned a significantly larger influence factor, while those further in the past receive a much smaller one. Experiments were carried out under laboratory conditions on a real system consisting of a robotic arm with integrated torque sensors and a control unit, a computer, and an "artificial skin" capable of distinguishing human touch and the immediate proximity of primarily biological material. The experiments identified the limitations of applying autonomous robot action.
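    A hedged sketch of the temporal-weighting idea (recent interaction events receive a much larger influence factor than older ones) is given below; the exponential-decay form and the half-life value are assumptions, not the scheme used in the thesis.

```python
# Illustrative recency weighting of interaction events (assumed exponential decay).
import math

def event_weights(event_times, now, half_life=30.0):
    """Weight each past event by how recently it occurred (times in seconds)."""
    decay = math.log(2.0) / half_life
    return [math.exp(-decay * (now - t)) for t in event_times]

# Example: an event 5 s ago dominates one from 2 minutes ago.
print(event_weights([115.0, 0.0], now=120.0))  # approx. [0.89, 0.06]
```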

    Spatial representation for planning and executing robot behaviors in complex environments

    Robots are already improving our well-being and productivity in applications such as industry, health care and indoor services. However, we are still far from developing (and releasing) a fully functional robotic agent that can autonomously survive in tasks requiring human-level cognitive capabilities. Robotic systems on the market, in fact, are designed to address specific applications and can only run pre-defined behaviors to robustly repeat a few tasks (e.g., assembling object parts, vacuum cleaning). Their internal representation of the world is usually constrained to the task they are performing and does not allow for generalization to other scenarios. Unfortunately, such a paradigm only applies to a very limited set of domains, where the environment can be assumed to be static and its dynamics can be handled before deployment. Additionally, robots configured in this way will eventually fail if their "handcrafted" representation of the environment does not match the external world. Hence, to enable more sophisticated cognitive skills, we investigate how to design robots that properly represent the environment and behave accordingly. To this end, we formalize a representation of the environment that enhances the robot's spatial knowledge to explicitly include a representation of its own actions. Spatial knowledge constitutes the core of the robot's understanding of the environment; however, it is not sufficient to represent what the robot is capable of doing in it. To overcome this limitation, we formalize SK4R, a spatial knowledge representation for robots that enhances spatial knowledge with a novel, "functional" point of view that explicitly models robot actions. To this end, we exploit the concept of affordances, introduced to express the opportunities (actions) that objects offer to an agent. To encode affordances within SK4R, we define the "affordance semantics" of actions, which is used to annotate an environment and to represent the extent to which robot actions support goal-oriented behaviors. We demonstrate the benefits of a functional representation of the environment in multiple robotic scenarios that traverse and contribute to different research topics: robot knowledge representations, social robotics, multi-robot systems, and robot learning and planning. We show how a domain-specific representation that explicitly encodes affordance semantics provides the robot with a more concrete understanding of the environment and of the effects that its actions have on it. The goal of our work is to design an agent that no longer executes an action as a mere pre-defined routine, but rather because it "knows" that the resulting state leads one step closer to success in its task.
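    The affordance-semantics annotation can be illustrated with a small sketch in which each mapped spatial element stores, per action, a score for how well that action is supported there; the names, scores and selection rule are assumptions rather than SK4R's actual representation.

```python
# Hypothetical affordance-annotated spatial knowledge and action-support lookup.
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class SpatialElement:
    label: str                                   # e.g., "table", "doorway"
    position: Tuple[float, float, float]
    affordances: Dict[str, float] = field(default_factory=dict)  # action -> support score

def best_element_for(action: str, elements):
    """Pick the mapped element whose affordance annotation best supports the
    requested action (goal-oriented behavior selection)."""
    scored = [(e.affordances.get(action, 0.0), e) for e in elements]
    return max(scored, key=lambda pair: pair[0])[1]

world = [
    SpatialElement("table", (1.0, 0.0, 0.7), {"place_object": 0.9, "traverse": 0.0}),
    SpatialElement("doorway", (3.0, 2.0, 0.0), {"place_object": 0.0, "traverse": 0.8}),
]
print(best_element_for("traverse", world).label)  # doorway
```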

    Intuitive Human-Robot Interaction by Intention Recognition
