
    Data-driven robotic manipulation of cloth-like deformable objects : the present, challenges and future prospects

    Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that do not show a detectable level of compression strength when two points on the object are pushed towards each other; they include ropes (1D), fabrics (2D) and bags (3D). In general, CDOs’ many degrees of freedom (DoF) introduce severe self-occlusion and complex state–action dynamics, which are significant obstacles for perception and manipulation systems. These challenges exacerbate existing issues with modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on the application details of data-driven control methods for four major task families in this domain: cloth shaping, knot tying/untying, dressing and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms.

    Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks

    Rearranging and manipulating deformable objects such as cables, fabrics, and bags is a long-standing challenge in robotic manipulation. The complex dynamics and high-dimensional configuration spaces of deformables, compared to rigid objects, make manipulation difficult not only for multi-step planning, but even for goal specification. Goals cannot be as easily specified as rigid object poses, and may involve complex relative spatial relations such as "place the item inside the bag". In this work, we develop a suite of simulated benchmarks with 1D, 2D, and 3D deformable structures, including tasks that involve image-based goal-conditioning and multi-step deformable manipulation. We propose embedding goal-conditioning into Transporter Networks, a recently proposed model architecture for learning robotic manipulation that rearranges deep features to infer displacements that can represent pick and place actions. We demonstrate that goal-conditioned Transporter Networks enable agents to manipulate deformable structures into flexibly specified configurations without test-time visual anchors for target locations. We also significantly extend prior results using Transporter Networks for manipulating deformable objects by testing on tasks with 2D and 3D deformables. Supplementary material is available at https://berkeleyautomation.github.io/bags/. Comment: See https://berkeleyautomation.github.io/bags/ for the project website and code; v2 corrects some BibTeX entries, v3 is the ICRA 2021 version (minor revisions).
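    The place step described above, which rearranges deep features and correlates them to score candidate placements, can be caricatured in NumPy: concatenate scene and goal feature maps channel-wise, then slide a pick-crop kernel over the result and take the highest-scoring location. This is a toy sketch, not the actual Transporter Networks architecture; the function name `best_placement`, the feature shapes, and the brute-force loop are all assumptions for illustration.

```python
import numpy as np

def best_placement(scene_feat, goal_feat, kernel):
    """Slide a pick-crop kernel over goal-conditioned scene features and
    return the (row, col) with the highest correlation score.

    scene_feat, goal_feat: (H, W, C) feature maps; kernel: (k, k, 2C).
    The goal map is concatenated channel-wise, so the score depends on
    both the current scene and the desired configuration.
    """
    feat = np.concatenate([scene_feat, goal_feat], axis=-1)  # (H, W, 2C)
    H, W, _ = feat.shape
    k = kernel.shape[0]
    scores = np.full((H - k + 1, W - k + 1), -np.inf)
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            # Dot product between the kernel and the local feature patch.
            scores[i, j] = np.sum(feat[i:i + k, j:j + k, :] * kernel)
    return np.unravel_index(np.argmax(scores), scores.shape)
```

    In the real model both the kernel (from the pick crop) and the feature maps come from learned convolutional encoders, and the correlation is computed densely over rotations as well.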

    Learning to bag with a simulation-free reinforcement learning framework for robots

    Bagging is an essential skill that humans perform in their daily activities. However, deformable objects, such as bags, are complex for robots to manipulate. This paper presents an efficient learning-based framework that enables robots to learn bagging. The novelty of this framework is its ability to perform bagging without relying on simulations. The learning process is accomplished through a reinforcement learning algorithm introduced in this work, designed to find the best grasping points of the bag based on a set of compact state representations. The framework utilizes a set of primitive actions and represents the task in five states. In our experiments, the framework reaches success rates of 60% and 80% after around three hours of real-world training when starting the bagging task from folded and unfolded configurations, respectively. Finally, we test the trained model with two more bags of different sizes to evaluate its generalizability. Comment: IET Cyber-Systems and Robotics.
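    A compact discrete state space with primitive actions, as described above, is the setting where tabular Q-learning applies. The following is a hedged sketch of that general recipe, not the paper's algorithm: the state/action counts, the gains, and the names `q_update` and `select_action` are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes only: five task states and a few primitive actions.
N_STATES, N_ACTIONS = 5, 4
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2  # assumed hyperparameters

def q_update(Q, s, a, r, s_next):
    """One temporal-difference update of the Q-table."""
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
    return Q

def select_action(Q, s, rng):
    """Epsilon-greedy action selection over the current state's row."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[s]))
```

    Because the state representation is compact, a few hours of real-world rollouts can populate such a table, which is what makes simulation-free training plausible in this setting.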

    Cable Manipulation with a Tactile-Reactive Gripper

    Cables are complex, high-dimensional, and dynamic objects. Standard approaches to manipulating them often rely on conservative strategies that involve long series of very slow and incremental deformations, or on mechanical fixtures such as clamps, pins or rings. We are interested in manipulating freely moving cables, in real time, with a pair of robotic grippers, and with no added mechanical constraints. The main contribution of this paper is a perception and control framework that moves in that direction and uses real-time tactile feedback to accomplish the task of following a dangling cable. The approach relies on a vision-based tactile sensor, GelSight, that estimates the pose of the cable in the grip and the friction forces during cable sliding. We achieve the behavior by combining two tactile-based controllers: 1) a cable grip controller, where a PD controller combined with a leaky integrator regulates the gripping force to keep the frictional sliding forces close to a suitable value; and 2) a cable pose controller, where an LQR controller based on a learned linear model of the cable sliding dynamics keeps the cable centered and aligned on the fingertips to prevent it from falling out of the grip. This behavior is made possible by a reactive gripper fitted with GelSight-based high-resolution tactile sensors. The robot can follow one meter of cable in random configurations within 2-3 hand regrasps, adapting to cables of different materials and thicknesses. We demonstrate a robot grasping a headphone cable, sliding the fingers to the jack connector, and inserting it. To the best of our knowledge, this is the first implementation of real-time cable following without the aid of mechanical fixtures. Comment: Accepted to RSS 2020.
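    The grip controller described above combines a PD term with a leaky integrator to drive the measured friction force toward a setpoint. The sketch below shows only that generic structure; the class name `GripForceController`, the gains, and the time step are made-up placeholders, and the paper's actual gains, units, and signal processing differ.

```python
class GripForceController:
    """PD control plus a leaky integrator on the friction-force error.

    step() returns an adjustment to the commanded gripping force so that
    the measured frictional sliding force tracks a target value.
    """

    def __init__(self, kp=0.8, kd=0.05, ki=0.2, leak=0.9, dt=0.01):
        self.kp, self.kd, self.ki = kp, kd, ki
        self.leak, self.dt = leak, dt
        self.prev_err = 0.0
        self.integ = 0.0

    def step(self, friction_meas, friction_target):
        err = friction_target - friction_meas
        deriv = (err - self.prev_err) / self.dt
        # Leaky integration: past accumulation decays geometrically,
        # which bounds the integral term and avoids windup.
        self.integ = self.leak * self.integ + self.ki * err * self.dt
        self.prev_err = err
        return self.kp * err + self.kd * deriv + self.integ
```

    The leak factor is the interesting design choice: a pure integrator would keep squeezing after transient friction spikes, while the decaying term lets the grip relax back toward the sliding regime.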

    ShakingBot: Dynamic Manipulation for Bagging

    Bag manipulation by robots is complex and challenging due to the deformability of the bag. Based on a dynamic manipulation strategy, we propose a new framework, ShakingBot, for bagging tasks. ShakingBot utilizes a perception module to identify the key region of the plastic bag from arbitrary initial configurations. Based on the segmentation, ShakingBot iteratively executes a novel set of actions, including Bag Adjustment, Dual-arm Shaking, and One-arm Holding, to open the bag. The dynamic action, Dual-arm Shaking, can effectively open the bag without the need to account for the crumpled configuration. Then, we insert the items and lift the bag for transport. We perform our method on a dual-arm robot and achieve a success rate of 21/33 for inserting at least one item across various initial bag configurations. In this work, we demonstrate the performance of dynamic shaking actions compared to quasi-static manipulation in the bagging task. We also show that our method generalizes across variations in the bag's size, pattern, and color.

    Robot-Assisted Minimally Invasive Surgical Skill Assessment—Manual and Automated Platforms

    The practice of Robot-Assisted Minimally Invasive Surgery (RAMIS) requires extensive skill from human surgeons due to its specialized input device control, such as moving the surgical instruments and using buttons, knobs, foot pedals and so on. The global popularity of RAMIS has created the need to objectively assess surgical skills, not just for quality assurance but for training feedback as well. Nowadays, there is still no routine surgical skill assessment during RAMIS training and education in clinical practice. In this paper, a review of manual and automated RAMIS skill assessment techniques is provided, focusing on their general applicability, robustness and clinical relevance.

    Robocatch: Design and Making of a Hand-Held Spillage-Free Specimen Retrieval Robot for Laparoscopic Surgery

    Specimen retrieval is an important step in laparoscopy, a minimally invasive surgical procedure performed to diagnose and treat a myriad of medical pathologies in fields ranging from gynecology to oncology. Specimen retrieval bags (SRBs) are used to facilitate this task while minimizing contamination of neighboring tissues and port-sites in the abdominal cavity. This manual surgical procedure requires the use of multiple ports, creating traffic from the simultaneous operation of multiple instruments in a limited shared workspace. The skill-demanding nature of this procedure makes it time-consuming, leading to surgeon fatigue and operational inefficiency. This thesis presents the design and making of RoboCatch, a novel hand-held robot that aids a surgeon in performing spillage-free retrieval of operative specimens in laparoscopic surgery. The proposed design significantly modifies and extends the conventional instruments currently used by surgeons for the retrieval task: the core instrumentation of RoboCatch comprises a webbed three-fingered grasper and atraumatic forceps that are concentrically situated in a folded configuration inside a trocar. The specimen retrieval task is achieved in six stages: 1) the trocar is introduced into the surgical site through an instrument port, 2) the three webbed fingers slide out of the tube and simultaneously unfold in an umbrella-like fashion, 3) the forceps slide toward, and grasp, the excised specimen, 4) the forceps retract the grasped specimen into the center of the surrounding grasper, 5) the grasper closes to achieve secure containment of the specimen, and 6) the grasper, along with the contained specimen, is manually removed from the abdominal cavity. The resulting reduction in the number of active ports reduces obstruction of the port-site and increases the procedure's efficiency.
The design process was initiated by acquiring crucial parameters from surgeons and creating a design table, which informed the CAD modeling of the robot structure and the selection of actuation units and fabrication material. The robot prototype was first examined in CAD simulation and then fabricated using an Objet30 Prime 3D printer. Physical validation experiments were conducted to verify the functionality of the robot's different mechanisms. Further, specimen retrieval experiments were conducted with porcine meat samples to test the feasibility of the proposed design. Experimental results revealed that the robot was capable of retrieving specimen masses ranging from 1 to 50 grams. The making of RoboCatch represents a significant step toward advancing the frontiers of hand-held robots for performing specimen retrieval tasks in minimally invasive surgery.

    Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware

    Fine manipulation tasks, such as threading cable ties or slotting a battery, are notoriously difficult for robots because they require precision, careful coordination of contact forces, and closed-loop visual feedback. Performing these tasks typically requires high-end robots, accurate sensors, or careful calibration, which can be expensive and difficult to set up. Can learning enable low-cost and imprecise hardware to perform these fine manipulation tasks? We present a low-cost system that performs end-to-end imitation learning directly from real demonstrations, collected with a custom teleoperation interface. Imitation learning, however, presents its own challenges, particularly in high-precision domains: errors in the policy can compound over time, and human demonstrations can be non-stationary. To address these challenges, we develop a simple yet novel algorithm, Action Chunking with Transformers (ACT), which learns a generative model over action sequences. ACT allows the robot to learn 6 difficult tasks in the real world, such as opening a translucent condiment cup and slotting a battery with 80-90% success, with only 10 minutes' worth of demonstrations. Project website: https://tonyzhaozh.github.io/aloha
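    With action chunking, the policy queried at every step predicts a whole sequence of future actions, so at any given timestep several overlapping predictions exist; one way to combine them at execution time is to average them with exponential weights (temporal ensembling). The NumPy sketch below illustrates only that combination step; the function name `temporal_ensemble`, the weighting convention, and the data layout are assumptions, not the paper's implementation.

```python
import numpy as np

def temporal_ensemble(chunks, t, m=0.1):
    """Average overlapping chunk predictions for timestep t.

    chunks: dict mapping query time t0 -> array (chunk_size, act_dim)
            of actions predicted at t0 for steps t0, t0+1, ...
    Weights decay as exp(-m * age), where age = t - t0 is how far into
    its chunk each prediction sits.
    """
    preds, weights = [], []
    for t0, chunk in chunks.items():
        i = t - t0
        if 0 <= i < len(chunk):          # this chunk covers timestep t
            preds.append(chunk[i])
            weights.append(np.exp(-m * i))
    w = np.array(weights) / np.sum(weights)
    return np.sum(np.array(preds) * w[:, None], axis=0)
```

    Averaging over chunks smooths out the non-stationarity of individual predictions while the chunking itself shortens the effective horizon over which policy errors can compound.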