    Grasp Transfer based on Self-Aligning Implicit Representations of Local Surfaces

    Objects we interact with and manipulate often share similar parts, such as handles, that allow us to transfer our actions flexibly due to their shared functionality. This work addresses the problem of transferring a grasp experience or a demonstration to a novel object that shares shape similarities with objects the robot has previously encountered. Existing approaches for solving this problem are typically restricted to a specific object category or a parametric shape. Our approach, however, can transfer grasps associated with implicit models of local surfaces shared across object categories. Specifically, we employ a single expert grasp demonstration to learn an implicit local surface representation model from a small dataset of object meshes. At inference time, this model is used to transfer grasps to novel objects by identifying the most geometrically similar surfaces to the one on which the expert grasp was demonstrated. Our model is trained entirely in simulation and is evaluated on simulated and real-world objects that are not seen during training. Evaluations indicate that grasp transfer to unseen object categories using this approach can be successfully performed in both simulation and real-world experiments. The simulation results also show that the proposed approach achieves better spatial precision and grasp accuracy than a baseline approach.

    Comment: Accepted by IEEE RAL. 8 pages, 6 figures, 3 tables.
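
    The abstract outlines a two-stage pipeline: learn latent codes for local surfaces from an expert demonstration, then, at inference time, match the demonstrated surface to the most geometrically similar local surface on a novel object and re-anchor the grasp there. The sketch below is a minimal, hypothetical illustration of that matching-and-transfer step; encode_patch() is a crude moment descriptor standing in for the paper's learned implicit representation, and all names and frames are assumptions rather than the authors' API.

```python
import numpy as np

def encode_patch(points: np.ndarray) -> np.ndarray:
    """Placeholder encoder: map a local surface patch (N x 3 points)
    to a fixed-size shape code. A stand-in for the learned implicit
    local surface representation, NOT the paper's actual model."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)  # coarse local geometry
    return cov[np.triu_indices(3)]             # 6-D descriptor

def transfer_grasp(grasp_in_demo_frame: np.ndarray,
                   demo_patch: np.ndarray,
                   candidate_patches: list,
                   candidate_frames: list) -> np.ndarray:
    """Pick the candidate local surface most similar to the demonstrated
    one and express the demonstrated grasp in that patch's frame."""
    z_demo = encode_patch(demo_patch)
    dists = [np.linalg.norm(encode_patch(p) - z_demo)
             for p in candidate_patches]
    best = int(np.argmin(dists))
    # 4x4 homogeneous transforms: the grasp, stored relative to the demo
    # patch frame, is mapped into the matched patch frame on the new object.
    return candidate_frames[best] @ grasp_in_demo_frame
```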

    Affordance Transfer based on Self-Aligning Implicit Representations of Local Surfaces

    Objects we interact with and manipulate often share similar parts, e.g., handles, that allow us to transfer our actions flexibly due to their shared functionality. This corresponds to affordances, i.e., the set of action possibilities offered by the environment [1]. In this work, we propose to learn affordances associated with implicit models of local shapes shared across object categories. Our approach takes an expert grasp demonstration on a given object, extracts the local geometry, and uses it as an anchor to align corresponding parts of objects from the same category. We show that the proposed implicit representation method can align objects within the same category under random pose perturbations. In addition, our general approach can align the local geometry to find grasp poses similar to the one demonstrated on the reference local shape. Finally, we show that we can identify the shared local geometry on novel objects from a different object category for affordance transfer.
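
    The alignment step described here, which uses the local geometry around the demonstrated grasp as an anchor, can be pictured with a much simpler stand-in: if point-to-point correspondences between the reference patch and the target patch were known, a closed-form Kabsch fit would recover the rigid transform that carries the demonstrated grasp onto the corresponding part. The sketch below works only under that strong assumption and is not the paper's self-aligning implicit method.

```python
import numpy as np

def kabsch_align(src: np.ndarray, dst: np.ndarray):
    """Best-fit rotation R and translation t such that dst ~ src @ R.T + t,
    given corresponding N x 3 point sets (a simplifying assumption)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

    The recovered (R, t) maps the reference local frame onto the matching part of the target object, so a grasp pose expressed in the reference frame can be carried along by the same transform.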