4,612 research outputs found

    Intelligent learning for deformable object manipulation

    ©1999 IEEE. Presented at the 1999 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Monterey Bay, CA, November 1999. DOI: 10.1109/CIRA.1999.809935.
    The majority of manipulation systems are designed under the assumption that the objects being handled are rigid and do not deform when grasped. This paper addresses the problem of robotic grasping and manipulation of 3-D deformable objects, such as rubber balls or bags filled with sand. Specifically, we have developed a generalized learning algorithm for handling 3-D deformable objects that requires no prior knowledge of object attributes and can therefore be applied to a large class of object types. Our methodology relies on two main tasks. The first task is to calculate deformation characteristics for a non-rigid object represented by a physically-based model; using nonlinear partial differential equations, we model the particle motion of the deformable object in order to calculate these characteristics. The second task is to calculate the minimum force required to successfully lift the deformable object, which can be learned using a technique called 'iterative lifting'. Once the deformation characteristics and the associated lifting force are determined, they are used to train a neural network that predicts the minimum force required for subsequent deformable object manipulation tasks. The developed algorithm is validated with two sets of experiments: the first implements the algorithm in a simulated environment, and the second is a physical implementation whose outcome is compared with the simulation results to test the real-world validity of the developed methodology.
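    Below is a minimal Python sketch of the two tasks the abstract describes: an 'iterative lifting' loop that raises the applied force until the simulated object lifts, and a regressor from deformation characteristics to the minimum lifting force. The simulator hook, the feature vector, and the least-squares model (standing in for the paper's neural network) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: (1) "iterative lifting" to find the minimum force that
# lifts a simulated deformable object, (2) fitting a regressor from
# deformation features to that force. Names and step sizes are assumptions.
import numpy as np


def iterative_lifting(simulate_lift, f_start=0.0, f_step=0.05, f_max=50.0):
    """Increase the applied force until the simulated object is lifted.

    `simulate_lift(force) -> bool` is assumed to run the physically-based
    (particle/PDE) model and report whether the object left the surface.
    """
    force = f_start
    while force <= f_max:
        if simulate_lift(force):
            return force          # minimum lifting force found
        force += f_step
    raise RuntimeError("object could not be lifted within the force budget")


def train_force_model(features, min_forces):
    """Least-squares stand-in for the paper's neural network: map deformation
    characteristics (N x D) to the minimum lifting force (N,)."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, min_forces, rcond=None)
    return lambda f: np.hstack([f, 1.0]) @ w


# Usage with a toy stand-in for the deformable-object simulator:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.uniform(0.1, 1.0, size=(20, 3))      # e.g. stiffness-like terms
    true_force = 2.0 * feats[:, 0] + 5.0 * feats[:, 2]
    forces = [iterative_lifting(lambda f, t=t: f >= t) for t in true_force]
    predict = train_force_model(feats, np.array(forces))
    print(predict(feats[0]))
```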

    Manipulating Highly Deformable Materials Using a Visual Feedback Dictionary

    The complex physical properties of highly deformable materials such as clothes pose significant challenges for autonomous robotic manipulation systems. We present a novel visual feedback dictionary-based method for manipulating deformable objects towards a desired configuration. Our approach is based on visual servoing, and we use an efficient technique to extract key features from the RGB sensor stream in the form of a histogram of deformable model features. These histogram features serve as high-level representations of the state of the deformable material. Next, we collect manipulation data and use a visual feedback dictionary that maps the velocity in the high-dimensional feature space to the velocity of the robotic end-effectors for manipulation. We have evaluated our approach on a set of complex manipulation tasks and human-robot manipulation tasks on different cloth pieces with varying material characteristics.
    Comment: The video is available at goo.gl/mDSC4
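    A minimal sketch of a visual feedback dictionary in the spirit of the abstract: feature-space velocities collected during manipulation are paired with end-effector velocities and queried by nearest neighbour at run time. The histogram feature extractor, the distance metric, and the servoing gain are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch of a visual feedback dictionary (illustrative assumptions).
import numpy as np


def histogram_features(image_patch, bins=16):
    """Stand-in for the 'histogram of deformable model features':
    a normalized intensity histogram over the observed cloth region."""
    hist, _ = np.histogram(image_patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)


class VisualFeedbackDictionary:
    def __init__(self):
        self.feature_vels = []   # velocities in feature space (keys)
        self.effector_vels = []  # corresponding end-effector velocities

    def add(self, d_feature, d_effector):
        self.feature_vels.append(np.asarray(d_feature, float))
        self.effector_vels.append(np.asarray(d_effector, float))

    def query(self, desired_d_feature):
        """Return the stored end-effector velocity whose feature-space
        velocity is closest to the desired change in features."""
        keys = np.stack(self.feature_vels)
        idx = np.argmin(np.linalg.norm(keys - desired_d_feature, axis=1))
        return self.effector_vels[idx]


def servo_step(dictionary, current_feat, goal_feat, gain=0.5):
    """One visual-servoing step: drive features toward the goal state."""
    return dictionary.query(gain * (goal_feat - current_feat))
```

    The nearest-neighbour lookup is the simplest choice for such a dictionary; a learned regression over the collected pairs would be a natural alternative.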

    Multiform Adaptive Robot Skill Learning from Humans

    Object manipulation is a basic element of everyday human life. Robotic manipulation has progressed from maneuvering single-rigid-body objects with firm grasping to maneuvering soft objects and handling contact-rich actions. Meanwhile, technologies such as robot learning from demonstration have enabled humans to intuitively train robots. This paper discusses a new level of robotic learning-based manipulation. In contrast to the single form of learning from demonstration, we propose a multiform learning approach that integrates additional forms of skill acquisition, including adaptive learning from definition and evaluation. Moreover, going beyond state-of-the-art technologies for handling purely rigid or soft objects in a pseudo-static manner, our work allows robots to learn to handle partly rigid, partly soft objects with time-critical skills and sophisticated contact control. Such capability of robotic manipulation offers a variety of new possibilities in human-robot interaction.
    Comment: Accepted to 2017 Dynamic Systems and Control Conference (DSCC), Tysons Corner, VA, October 11-1
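    A loose sketch of the multiform idea, assuming a skill represented by a plain parameter vector: parameters can be initialized from demonstration, selectively overridden by an explicit definition, and refined from evaluation feedback. The representation and the random-search refinement are illustrative assumptions; the paper's actual skill models are not specified here.

```python
# Loose sketch of combining demonstration, definition, and evaluation
# (illustrative assumptions, not the paper's skill representation).
import numpy as np


class MultiformSkill:
    def __init__(self, dim):
        self.params = np.zeros(dim)

    def learn_from_demonstration(self, demo_parameterizations):
        # e.g. average the parameterizations extracted from demonstrations
        self.params = np.mean(np.stack(demo_parameterizations), axis=0)

    def learn_from_definition(self, indices, values):
        # directly set components a human defines explicitly
        self.params[np.asarray(indices)] = values

    def learn_from_evaluation(self, score_fn, step=0.05, iters=50, seed=0):
        # simple random-search refinement driven by an evaluation score
        rng = np.random.default_rng(seed)
        best = score_fn(self.params)
        for _ in range(iters):
            candidate = self.params + step * rng.standard_normal(self.params.size)
            score = score_fn(candidate)
            if score > best:
                self.params, best = candidate, score
```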

    Feedback-based Fabric Strip Folding

    Accurate manipulation of a deformable body such as a piece of fabric is difficult because of its many degrees of freedom and the unobservable properties affecting its dynamics. To alleviate these challenges, we propose the application of feedback-based control to robotic fabric strip folding. The feedback is computed from a low-dimensional state extracted from a camera image. We trained the controller using reinforcement learning in a simulation that was calibrated to cover the behaviors of the real fabric strip. The proposed feedback-based folding was experimentally compared to two state-of-the-art folding methods, and our method outperformed both of them in terms of accuracy.
    Comment: Submitted to IEEE/RSJ IROS201
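    A hedged sketch of the feedback loop the abstract outlines: a low-dimensional state is extracted from a camera image and fed to a learned policy that outputs the next gripper motion. The particular state (centroid and tip of a segmented strip) and the linear policy are illustrative stand-ins for the trained reinforcement-learning controller.

```python
# Sketch of image-based feedback folding (state definition and policy are
# illustrative assumptions, not the paper's trained controller).
import numpy as np


def strip_state_from_mask(mask):
    """Low-dimensional state from a binary fabric mask (H x W):
    normalized strip centroid plus the rightmost strip point."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    centroid = np.array([xs.mean() / w, ys.mean() / h])
    tip = np.array([xs.max() / w, ys[np.argmax(xs)] / h])
    return np.concatenate([centroid, tip])          # 4-D state


def policy(state, weights, bias):
    """Stand-in for the learned controller: a linear map from the 4-D
    state to a planar gripper velocity command."""
    return weights @ state + bias


def folding_step(camera_mask, weights, bias, send_velocity):
    """One closed-loop step: observe, compute state, command the gripper."""
    state = strip_state_from_mask(camera_mask)
    send_velocity(policy(state, weights, bias))
```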

    Learning to Navigate Cloth using Haptics

    We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation through the use of haptic information. The main challenge for such a controller is to avoid getting tangled in, tearing, or punching through the deforming cloth. Our controller aggregates force information from a number of haptic-sensing spheres placed along the manipulator for guidance. Based on the haptic forces, each individual sphere updates its target location, and the conflicts that arise among this set of desired positions are resolved by solving an inverse kinematics problem with constraints. Reinforcement learning is used to train the controller for a single haptic-sensing sphere, where a training run is terminated (and thus penalized) when large forces are detected due to contact between the sphere and a simplified model of the cloth. In simulation, we demonstrate successful navigation of a robotic arm through a variety of garments, including an isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two baseline controllers: one without haptics and another trained on large forces between the sphere and cloth, but without early termination.
    Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A. Related publications: http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
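    An illustrative sketch of the per-sphere haptic rule and the conflict-resolution step described above: each haptic-sensing sphere shifts its target away from large contact forces, and a single damped least-squares solve over the stacked sphere Jacobians yields one joint update. The damped least-squares formulation is a common stand-in for the constrained inverse kinematics solver mentioned in the abstract and is an assumption here.

```python
# Sketch of haptic-guided sphere targets plus a stacked IK resolution step
# (damped least squares used as an assumed substitute for constrained IK).
import numpy as np


def sphere_targets(sphere_positions, contact_forces, retreat_gain=0.01):
    """Move each sphere's desired position opposite to its sensed force."""
    return [p - retreat_gain * f for p, f in zip(sphere_positions, contact_forces)]


def resolve_joint_update(jacobians, sphere_positions, targets, damping=1e-2):
    """Stack per-sphere position errors and Jacobians, then solve one damped
    least-squares problem for a single joint-velocity command."""
    err = np.concatenate([t - p for t, p in zip(targets, sphere_positions)])
    J = np.vstack(jacobians)                       # (3*S) x n_joints
    JT = J.T
    return JT @ np.linalg.solve(J @ JT + damping * np.eye(J.shape[0]), err)


def large_force(contact_forces, threshold=5.0):
    """Early-termination test used during training: any excessive contact."""
    return any(np.linalg.norm(f) > threshold for f in contact_forces)
```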