
    Folding Knots Using a Team of Aerial Robots

    Since ancient times, humans have used cables and ropes to tie, carry, and manipulate objects by folding knots. Automating knot folding is challenging, however, because it requires the dexterity to move a cable over and under itself. In this paper, we propose a method to fold knots in midair using a team of aerial vehicles. We take advantage of the fact that the vehicles can fly between cable segments without any re-grasping: the team grasps the cable from the floor and releases it once the knot is folded. Based on a composition of catenary curves, we reduce the complexity of dealing with the infinite-dimensional configuration space of the cable and formally propose a new knot representation. This representation allows us to design a trajectory that folds knots using a leader-follower approach. We show in simulation that our method works for different types of knots, and that our solution is computationally efficient and can be executed in real time. Comment: International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto, Japan, Oct. 23-27, 2022
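    To picture the catenary building block such a representation composes, here is a minimal sketch (not the authors' code; the function name, bisection bracket, and iteration count are illustrative assumptions) that samples the catenary hanging between two grasp points given the cable length:

```python
import numpy as np

def catenary_points(p1, p2, length, n=50):
    """Sample n points along a catenary of arc length `length` hanging
    between 3-D grasp points p1 and p2. Assumes a nonzero horizontal
    span and length > ||p2 - p1||."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    h = np.linalg.norm(d[:2])                 # horizontal span
    v = d[2]                                  # vertical offset
    rhs = np.sqrt(length**2 - v**2)
    lo, hi = 1e-6, 1e6                        # bracket the catenary parameter a
    for _ in range(80):                       # f(a) = 2a*sinh(h/2a) decreases in a
        a = 0.5 * (lo + hi)
        lo, hi = (a, hi) if 2*a*np.sinh(h/(2*a)) > rhs else (lo, a)
    x0 = h/2 - a*np.arctanh(v/length)         # abscissa of the lowest point
    x = np.linspace(0.0, h, n)
    z = a*(np.cosh((x - x0)/a) - np.cosh(x0/a))   # sag, with z(0)=0, z(h)=v
    u = np.append(d[:2]/h, 0.0)               # horizontal unit direction
    return p1 + np.outer(x, u) + np.outer(z, [0.0, 0.0, 1.0])
```

    Chaining segments like this between consecutive vehicles gives a piecewise-catenary cable model with only a handful of parameters per segment.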

    Survey on model-based manipulation planning of deformable objects

    A systematic overview of model-based manipulation planning of deformable objects is presented. Existing modelling techniques for volumetric, planar, and linear deformable objects are described, emphasizing the different types of deformation. Planning strategies are categorized according to the type of manipulation goal: path planning, folding/unfolding, topology modifications, and assembly. Most current contributions fit naturally into these categories, and the presented algorithms thus constitute an adequate basis for future developments.

    HANDLOOM: Learned Tracing of One-Dimensional Objects for Inspection and Manipulation

    Tracing, i.e., estimating the spatial state of long deformable linear objects such as cables, threads, hoses, or ropes, is useful for a broad range of tasks in homes, retail, factories, construction, transportation, and healthcare. For long deformable linear objects (DLOs, or simply cables) with many (over 25) crossings, we present HANDLOOM (Heterogeneous Autoregressive Learned Deformable Linear Object Observation and Manipulation), a learning-based algorithm that fits a trace to a greyscale image of cables. We evaluate HANDLOOM on semi-planar DLO configurations, where each crossing involves at most 2 segments. HANDLOOM uses neural networks trained with 30,000 simulated examples and 568 real examples to autoregressively estimate traces of cables and classify crossings. Experiments find that in settings with multiple identical cables, HANDLOOM can trace each cable with 80% accuracy. In single-cable images, HANDLOOM can trace and identify knots with 77% accuracy. When HANDLOOM is incorporated into a bimanual robot system, it enables state-based imitation of knot tying with 80% accuracy, and it successfully untangles 64% of cable configurations across 3 levels of difficulty. Additionally, HANDLOOM generalizes, with 85% accuracy, to knot types and materials (rubber, cloth rope) not present in the training dataset. Supplementary material, including all code and an annotated dataset of RGB-D images of cables along with ground-truth traces, is at https://sites.google.com/view/cable-tracing
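    The autoregressive estimation the abstract describes can be pictured as a simple loop; the sketch below is illustrative only (the `model` callable, window size, and stopping rule are assumptions, not HANDLOOM's actual architecture):

```python
import numpy as np

def trace_cable(image, start_xy, model, step_px=16, max_steps=500):
    """Return an (N, 2) array of trace points on a greyscale image.
    `model` is a hypothetical learned predictor standing in for the
    trained network: it sees a local crop plus recent trace context and
    returns a unit direction to the next point and a stop probability."""
    trace = [np.asarray(start_xy, float)]
    for _ in range(max_steps):
        head = trace[-1]
        # Crop a local window centred on the current trace head.
        r, c = int(head[1]), int(head[0])
        crop = image[max(r - 32, 0):r + 32, max(c - 32, 0):c + 32]
        direction, p_stop = model(crop, np.stack(trace[-5:]))
        if p_stop > 0.5:                  # model signals end of the cable
            break
        trace.append(head + step_px * direction)
    return np.stack(trace)
```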

    A representation of cloth states based on a derivative of the Gauss linking integral

    Robotic manipulation of cloth is a complex task because of the infinite-dimensional shape-state space of textiles, which makes their state estimation very difficult. In this paper we introduce the dGLI Cloth Coordinates, a finite, low-dimensional representation of cloth states that allows us to efficiently distinguish a large variety of different folded states, opening the door to efficient learning methods for cloth manipulation planning and control. Our representation is based on a directional derivative of the Gauss Linking Integral and represents spatial as well as planar folded configurations in a consistent and unified way. The proposed dGLI Cloth Coordinates are shown to be more accurate in representing cloth states, and significantly more sensitive to changes in grasping affordances, than other classic shape-distance methods. Finally, we apply our representation to real images of a cloth, showing that we can identify the different states using a distance-based classifier. This work was developed under the project CLOTHILDE, which has received funding from the European Research Council (ERC) under the EU Horizon 2020 research and innovation programme (grant agreement No. 741930). M. Alberich-Carramiñana is also with the Barcelona Graduate School of Mathematics (BGSMath) and the Institut de Matemàtiques de la UPC-BarcelonaTech (IMTech); she and J. Amorós are partially supported by the Spanish State Research Agency AEI/10.13039/501100011033 grant PID2019-103849GB-I00 and by the AGAUR project 2021 SGR 00603 "Geometry of Manifolds and Applications" (GEOMVAP). J. Borràs is supported by the Spanish State Research Agency MCIN/AEI/10.13039/501100011033 grant PID2020-118649RB-I00 (CHLOE-GRAPH project).
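    For context, the classical Gauss Linking Integral of two curves, whose directional derivative the dGLI coordinates build on (the specific derivative and its discretization over cloth edges are defined in the paper), is:

```latex
\operatorname{GLI}(\gamma_1,\gamma_2)
  = \frac{1}{4\pi}\int_{\gamma_1}\int_{\gamma_2}
    \frac{\det\bigl(\dot\gamma_1(s),\,\dot\gamma_2(t),\,\gamma_1(s)-\gamma_2(t)\bigr)}
         {\lVert \gamma_1(s)-\gamma_2(t)\rVert^{3}}\,\mathrm{d}s\,\mathrm{d}t
```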

    Co-manipulation of soft-materials estimating deformation from depth images

    Human-robot co-manipulation of soft materials, such as fabrics, composites, and sheets of paper or cardboard, is a challenging operation with several relevant industrial applications. Estimating the deformation state of the co-manipulated material is one of the main challenges; viable methods provide an indirect measure by calculating the human-robot relative distance. In this paper, we develop a data-driven model that estimates the deformation state of the material from a depth image through a Convolutional Neural Network (CNN). First, we define the deformation state of the material as the relative roto-translation between the current robot pose and a human grasping position. The model estimates the current deformation state through a CNN, specifically a DenseNet-121 pretrained on ImageNet. The delta between the current and the desired deformation state is fed to the robot controller, which outputs twist commands. The paper describes the developed approach to acquire and preprocess the dataset and to train the model. The model is compared with the current state-of-the-art method based on a camera skeletal tracker. Results show that our approach achieves better performance and avoids the various drawbacks caused by using a skeletal tracker. Finally, we also study the model's performance across different architectures and dataset sizes in order to minimize the time required for dataset acquisition. Comment: Pre-print, submitted to Journal of Intelligent Manufacturing
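    A minimal sketch of such a regression model (not the authors' code; the 6-DoF output parameterization and the channel-replication trick for depth input are assumptions for illustration):

```python
import torch
import torch.nn as nn
from torchvision import models

class DeformationStateNet(nn.Module):
    """DenseNet-121 backbone pretrained on ImageNet, with the classifier
    replaced to regress a deformation state (here assumed to be a 6-DoF
    roto-translation: 3 translation + 3 axis-angle rotation values)."""
    def __init__(self):
        super().__init__()
        self.backbone = models.densenet121(weights="IMAGENET1K_V1")
        self.backbone.classifier = nn.Linear(
            self.backbone.classifier.in_features, 6)

    def forward(self, depth):                 # depth: (B, 1, H, W)
        # Repeat the single depth channel to match the RGB-pretrained stem.
        return self.backbone(depth.repeat(1, 3, 1, 1))

model = DeformationStateNet()
state = model(torch.randn(2, 1, 224, 224))    # (2, 6) deformation states
```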

    Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks

    Rearranging and manipulating deformable objects such as cables, fabrics, and bags is a long-standing challenge in robotic manipulation. The complex dynamics and high-dimensional configuration spaces of deformables, compared to rigid objects, make manipulation difficult not only for multi-step planning, but even for goal specification. Goals cannot be as easily specified as rigid object poses, and may involve complex relative spatial relations such as "place the item inside the bag". In this work, we develop a suite of simulated benchmarks with 1D, 2D, and 3D deformable structures, including tasks that involve image-based goal-conditioning and multi-step deformable manipulation. We propose embedding goal-conditioning into Transporter Networks, a recently proposed model architecture for learning robotic manipulation that rearranges deep features to infer displacements that can represent pick-and-place actions. We demonstrate that goal-conditioned Transporter Networks enable agents to manipulate deformable structures into flexibly specified configurations without test-time visual anchors for target locations. We also significantly extend prior results using Transporter Networks for manipulating deformable objects by testing on tasks with 2D and 3D deformables. Supplementary material is available at https://berkeleyautomation.github.io/bags/. Comment: See https://berkeleyautomation.github.io/bags/ for the project website and code; v2 corrects some BibTeX entries, v3 is the ICRA 2021 version (minor revisions)
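    The "rearranging deep features to infer displacements" step can be sketched as a cross-correlation over goal-conditioned features; everything below (the fusion by element-wise product, shapes, crop size, and function name) is an illustrative assumption rather than the paper's exact architecture:

```python
import torch
import torch.nn.functional as F

def place_heatmap(obs_feat, goal_feat, pick_rc, crop=32):
    """Score every candidate place location given a chosen pick point.
    obs_feat, goal_feat: (C, H, W) feature maps of the observation and
    goal images; pick_rc: (row, col) of the pick, away from the border."""
    # Condition observation features on the goal (one simple fusion choice).
    query = obs_feat * goal_feat
    r, c = pick_rc
    kernel = query[:, r - crop//2:r + crop//2, c - crop//2:c + crop//2]
    # Cross-correlate the pick-centred crop over the full feature map:
    # a high response means placing there best reproduces the goal relation.
    scores = F.conv2d(query.unsqueeze(0), kernel.unsqueeze(0),
                      padding=crop//2)
    return scores[0, 0]                       # (H+1, W+1) place score map
```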

    Single-Shot Clothing Category Recognition in Free-Configurations with Application to Autonomous Clothes Sorting

    This paper proposes a single-shot approach for recognising clothing categories from 2.5D features. We propose two visual features for this task: BSP (B-Spline Patch) and TSD (Topology Spatial Distances). The local BSP features are encoded by LLC (Locality-constrained Linear Coding) and fused with three different global features. Our visual feature is robust to deformable shapes, and our approach is able to recognise the category of unknown clothing in unconstrained and random configurations. We integrate the category recognition pipeline with a stereo vision system, clothing instance detection, and dual-arm manipulators to achieve an autonomous sorting system. To verify the performance of the proposed method, we build a high-resolution RGBD clothing dataset of 50 clothing items from 5 categories sampled in random configurations (a total of 2,100 clothing samples). Experimental results show that our approach reaches 83.2% accuracy when classifying clothing items previously unseen during training, advancing beyond the previous state of the art by 36.2%. Finally, we evaluate the proposed approach in an autonomous robot sorting system, in which the robot recognises a clothing item from an unconstrained pile, grasps it, and sorts it into a box according to its category. Our proposed sorting system achieves reasonable sorting success rates with single-shot perception. Comment: 9 pages, accepted by IROS 2017
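    LLC, the descriptor encoding named above, has a well-known approximated closed form (Wang et al., 2010); a minimal sketch, with the function name and regularization constant as illustrative choices:

```python
import numpy as np

def llc_encode(x, codebook, k=5, reg=1e-4):
    """Encode descriptor x (D,) over a codebook (M, D); returns an (M,)
    code that is nonzero only on the k nearest codebook atoms."""
    # Locality constraint: keep only the k nearest atoms.
    idx = np.argsort(np.linalg.norm(codebook - x, axis=1))[:k]
    z = codebook[idx] - x                     # shift atoms to the origin
    C = z @ z.T                               # local covariance
    C += reg * np.trace(C) * np.eye(k)        # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                              # enforce the sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w
    return code
```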