74 research outputs found

    Hydrodynamic behaviour of low crested slopes

    Video dataset of human demonstrations of folding clothing for robotic folding

    General-purpose clothes-folding robots do not yet exist because the deformable nature of textiles makes it hard to engineer manipulation pipelines or to learn the task. To accelerate research on learning robotic clothes folding, we introduce a video dataset of human folding demonstrations. In total, we provide 8.5 hours of demonstrations from multiple perspectives, yielding 1,000 folding samples of different types of textiles. The demonstrations were recorded in multiple public places, under different conditions, and with a diverse set of people. Our dataset consists of anonymized RGB images, depth frames, skeleton keypoint trajectories, and object labels. In this article, we describe our recording setup, the data format, and utility scripts, which can be accessed at https://adverley.github.io/folding-demonstrations

    Automatic end tool alignment through plane detection with a RANSAC-algorithm for robotic grasping

    Camera-based grasping algorithms enable the handling of unknown objects without a complete CAD model. In some scenarios, the information captured from a single view is insufficient, or no grasp is possible. In these cases, precise realignment of the gripper is difficult because a suitable rotation is only one of infinitely many solutions. In this paper, we propose a framework that automatically identifies correct rotations from point clouds to adjust the gripper. We validate our approach in a virtual environment for a parallel-jaw gripper with multiple isolated and grouped industrial objects
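    The abstract above centers on RANSAC plane detection in point clouds. A minimal sketch of the general RANSAC plane-fitting technique is shown below; the function name, iteration count, and inlier threshold are illustrative assumptions, not the paper's actual pipeline or parameters.

    ```python
    import numpy as np

    def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
        """Fit a plane to an (N, 3) point cloud with RANSAC.

        Returns (normal, d) for the plane normal . x + d = 0 that
        has the most inliers within `threshold` distance.
        """
        rng = np.random.default_rng(rng)
        best_count, best_plane = 0, None
        for _ in range(n_iters):
            # Sample 3 distinct points and compute the plane they span.
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:  # degenerate (collinear) sample, skip
                continue
            normal /= norm
            d = -normal.dot(p0)
            # Count points within `threshold` of the candidate plane.
            inliers = np.abs(points @ normal + d) < threshold
            if inliers.sum() > best_count:
                best_count, best_plane = int(inliers.sum()), (normal, d)
        return best_plane
    ```

    The fitted plane normal could then serve as the target direction for realigning the end tool, as the abstract describes.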

    Projected Task-Specific Layers for Multi-Task Reinforcement Learning

    Multi-task reinforcement learning could enable robots to scale across a wide variety of manipulation tasks in homes and workplaces. However, generalizing from one task to another and mitigating negative task interference remain challenges. Addressing this challenge by successfully sharing information across tasks depends on how well the structure underlying the tasks is captured. In this work, we introduce a new architecture, Projected Task-Specific Layers (PTSL), which leverages a common policy with dense task-specific corrections through task-specific layers to better express shared and variable task information. We then show that our model outperforms the state of the art on the MT10 and MT50 benchmarks of Meta-World, which consist of 10 and 50 goal-conditioned tasks for a Sawyer arm
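    The core idea of a shared policy with per-task corrections can be sketched as a shared weight matrix plus a low-rank, task-indexed adjustment. The class name, shapes, and the low-rank form are illustrative assumptions for exposition, not the PTSL paper's exact architecture.

    ```python
    import numpy as np

    class ProjectedTaskLayer:
        """Sketch: a shared linear layer plus a low-rank per-task
        correction, so tasks share most parameters while retaining
        dense task-specific adjustments."""

        def __init__(self, in_dim, out_dim, n_tasks, rank=4, rng=None):
            rng = np.random.default_rng(rng)
            self.W = rng.normal(0, 0.1, (out_dim, in_dim))       # shared weights
            # Per-task low-rank factors: correction_t = U[t] @ V[t]
            self.U = rng.normal(0, 0.1, (n_tasks, out_dim, rank))
            self.V = rng.normal(0, 0.1, (n_tasks, rank, in_dim))

        def forward(self, x, task_id):
            # Effective weights = shared part + task-specific correction.
            W_task = self.W + self.U[task_id] @ self.V[task_id]
            return np.tanh(W_task @ x)
    ```

    With this structure, most capacity lives in the shared matrix, so information transfers across tasks, while the small per-task factors absorb task-specific variation instead of interfering with the common policy.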

    GAMMA: Graspability-Aware Mobile MAnipulation Policy Learning based on Online Grasping Pose Fusion

    Mobile manipulation is a fundamental task for robotic assistants and garners significant attention within the robotics community. A critical challenge in mobile manipulation is effectively observing the target while approaching it for grasping. In this work, we propose a graspability-aware mobile manipulation approach powered by an online grasping-pose fusion framework that enables temporally consistent grasping observations. Specifically, predicted grasping poses are organized online to eliminate redundant and outlier poses, and the result can be encoded as a grasping-pose observation state for reinforcement learning. Moreover, fusing the grasping poses on the fly enables a direct assessment of graspability, encompassing both the quantity and quality of grasping poses