6 research outputs found

    Optimizing Robot-to-Human Object Handovers using Vision-based Affordance Information

    No full text
    Robotic handovers of objects to humans require selecting appropriate grasp poses and orientations so that the receiver can use the object efficiently afterwards. We present two methods that compute suitable handover orientations based solely on object affordances, rather than on object categories or predefined object-specific rules. The first uses human demonstration data to learn average handover orientations per object directly from affordances. The second is a rule-based method that orients graspable affordances towards the receiver. We integrated both approaches into a robotic system that performs task-oriented grasping and handovers based on affordance segmentation. A user study indicates that the rule-based method produces handover orientations that are as comfortable and natural as those learned from demonstration, while being simpler to implement. Further experiments demonstrate the robot's ability to successfully hand over objects with proper orientations. This is the first prototype to derive handover orientations solely from affordances treated as pixel-wise semantic segmentation, providing a practical approach that requires no per-object datasets.
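
    As a minimal illustration of the rule-based method, the Python sketch below back-projects the pixels segmented as "graspable" to 3D and computes a rotation that turns that region toward the receiver. This is only one plausible reading of "orients graspable affordances towards the receiver": the function name, the affordance label ID, and the assumption that an aligned depth image and camera intrinsics are available are all hypothetical, not the paper's actual interface.

```python
import numpy as np

GRASP = 1  # hypothetical label ID for the "graspable" affordance class

def rule_based_handover_rotation(affordance_mask, depth, K, receiver_dir):
    """Rotate the object so its graspable affordance region faces the receiver.

    affordance_mask: (H, W) integer array of per-pixel affordance labels
    depth:           (H, W) depth image in metres, aligned with the mask
    K:               (3, 3) camera intrinsic matrix
    receiver_dir:    3-vector pointing from the object toward the receiver
    Returns a (3, 3) rotation matrix to apply to the grasped object.
    """
    K_inv = np.linalg.inv(K)

    def centroid_3d(pixel_mask):
        # Back-project the masked pixels into 3D camera coordinates
        # and return their mean position.
        ys, xs = np.nonzero(pixel_mask)
        z = depth[ys, xs]
        pts = K_inv @ np.vstack([xs * z, ys * z, z])
        return pts.mean(axis=1)

    if not np.any(affordance_mask == GRASP):
        return np.eye(3)  # no graspable region segmented; keep current pose

    grasp_c = centroid_3d(affordance_mask == GRASP)   # graspable part
    object_c = centroid_3d(affordance_mask > 0)       # whole object

    # Direction the graspable part currently points, and the target direction.
    v = grasp_c - object_c
    n = np.linalg.norm(v)
    if n < 1e-9:
        return np.eye(3)  # degenerate: graspable region at the object centre
    v /= n
    r = np.asarray(receiver_dir, dtype=float)
    r /= np.linalg.norm(r)

    # Minimal rotation taking v onto r (Rodrigues' rotation formula).
    axis = np.cross(v, r)
    s, c = np.linalg.norm(axis), float(np.dot(v, r))
    if s < 1e-9:
        if c > 0:
            return np.eye(3)  # already facing the receiver
        # Opposite directions: rotate 180 deg about any axis perpendicular to v.
        perp = np.cross(v, [1.0, 0.0, 0.0])
        if np.linalg.norm(perp) < 1e-9:
            perp = np.cross(v, [0.0, 1.0, 0.0])
        perp /= np.linalg.norm(perp)
        return 2.0 * np.outer(perp, perp) - np.eye(3)
    kmat = np.array([[0.0, -axis[2], axis[1]],
                     [axis[2], 0.0, -axis[0]],
                     [-axis[1], axis[0], 0.0]]) / s
    return np.eye(3) + s * kmat + (1.0 - c) * (kmat @ kmat)
```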

    Learning to Segment Object Affordances on Synthetic Data for Task-oriented Robotic Handovers

    No full text