412 research outputs found

    Tactile control based on Gaussian images and its application in bi-manual manipulation of deformable objects

    Get PDF
    The field of in-hand robot manipulation of deformable objects is an open and key issue for the next generation of robots. Developing an adaptable and agile framework for tasks in which a robot grasps and manipulates different kinds of deformable objects is a main goal in the literature. Many works control the manipulation task using a model of the manipulated object. Although these techniques model the deformations precisely, they are time-consuming, and using them in real environments is nearly impossible because of the large number of objects the robot may encounter. In this paper, we propose a model-independent framework to control the finger movements of the hands while the robot executes manipulation tasks with deformable objects. The technique is based on tactile images, which serve as a common interface for different tactile sensors, and uses servo-tactile control to stabilize the grasping points, avoid sliding, and adapt the contact configuration with respect to the position and magnitude of the applied force. Tactile images are obtained using a combination of dynamic Gaussians, which creates a common representation for tactile data provided by sensors with different technologies and resolutions. The framework was tested on different manipulation tasks in which the objects are deformed, without using a model of them. Research supported by the Spanish Ministry of Economy, European FEDER funds, Valencia Regional Government and University of Alicante through the projects DPI2015-68087-R, PROMETEO/2013/085 and GRE 15-05. This work has also been supported by the French Government Research Program Investissements d’avenir, through the RobotEx Equipment of Excellence (ANR-10-EQPX-44) and the IMobS3 Laboratory of Excellence (ANR-10-LABX-16-01)
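
    The Gaussian-based tactile-image idea lends itself to a compact illustration. Below is a minimal sketch, assuming a planar tactile array, of how sparse taxel readings could be rendered into a fixed-resolution image by summing Gaussians centred at each contact; the function name, grid size and sigma are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tactile_image(taxel_xy, pressures, grid_shape=(64, 64), sigma=2.0):
    """Render sparse taxel readings into a common-resolution tactile image
    by summing 2D Gaussians centred on each taxel (illustrative sketch)."""
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]
    image = np.zeros(grid_shape)
    for (cx, cy), p in zip(taxel_xy, pressures):
        # Each contact contributes a Gaussian whose amplitude scales
        # with the pressure measured at that taxel.
        image += p * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return image

# Example: three taxels from a low-resolution sensor mapped onto a 64x64 image
img = tactile_image([(10, 12), (32, 30), (50, 48)], [0.8, 0.4, 1.0])
```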

    Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation

    Get PDF
    Interacting with the environment using hands is one of the distinctive abilities of humans with respect to other species. This aptitude reflects the crucial role played by object manipulation in the world we have shaped for ourselves. With a view to bringing robots outside industries to support people during everyday life, the ability to manipulate objects autonomously and in unstructured environments is therefore one of the basic skills they need. Autonomous manipulation is characterized by great complexity, especially regarding the processing of sensor information to perceive the surrounding environment. Humans rely on vision for wide-ranging three-dimensional information, proprioception for awareness of the relative position of their own body in space, and the sense of touch for local information when physical interaction with objects happens. The study of autonomous manipulation in robotics aims at transferring similar perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve similar performance in manipulating objects. The great complexity of this task makes autonomous manipulation one of the open problems in robotics, one that has been drawing increasing research attention in recent years. In this Thesis, we propose possible solutions to some key components of autonomous manipulation, focusing in particular on the perception problem and testing the developed approaches on the humanoid robotic platform iCub. When available, vision is the first source of information to be processed for inferring how to interact with objects. The object modeling and grasping pipeline based on superquadric functions we designed meets this need, since it reconstructs the object's 3D model from a partial point cloud and computes a suitable hand pose for grasping the object. Retrieving object information with touch sensors only is a relevant skill that becomes crucial when vision is occluded, as happens for instance during physical interaction with the object. We addressed this problem with the design of a novel tactile localization algorithm, named Memory Unscented Particle Filter, capable of localizing and recognizing objects relying solely on 3D contact points collected on the object surface. Another key point of autonomous manipulation we report on in this Thesis is bi-manual coordination. The execution of more advanced manipulation tasks might in fact require the use and coordination of two arms. Tool usage, for instance, often requires a proper in-hand object pose that can be obtained via dual-arm re-grasping. In pick-and-place tasks the initial and target positions of the object sometimes do not belong to the same arm's workspace, requiring the robot to use one hand for lifting the object and the other for placing it in the new position. In this regard, we implemented a pipeline for executing the handover task, i.e. the sequence of actions for autonomously passing an object from one robot hand to the other. The contributions described thus far address specific subproblems of the more complex task of autonomous manipulation. This actually differs from what humans do, in that humans develop their manipulation skills by learning through experience and a trial-and-error strategy. A proper mathematical formulation for encoding this learning approach is given by Deep Reinforcement Learning, which has recently proved successful in many robotics applications. For this reason, in this Thesis we also report on the six-month experience carried out at the Berkeley Artificial Intelligence Research laboratory with the goal of studying Deep Reinforcement Learning and its application to autonomous manipulation
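
    The superquadric-based object modeling step mentioned above can be illustrated compactly. The sketch below fits an axis-aligned superquadric to a partial point cloud by least squares on the inside-outside function; the initial guess, bounds, and the omission of pose estimation are illustrative simplifications, not the thesis pipeline itself.

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_residuals(params, points):
    """Inside-outside residuals of an axis-aligned superquadric.
    params = (a1, a2, a3, e1, e2); pose estimation is omitted in this sketch."""
    a1, a2, a3, e1, e2 = params
    x, y, z = points.T
    f = ((np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
         + np.abs(z / a3) ** (2 / e1))
    # Points lying on the superquadric surface satisfy f == 1
    return f - 1.0

def fit_superquadric(points):
    # Initial guess: a 5 cm rounded box; bounds keep shape parameters sane
    x0 = np.array([0.05, 0.05, 0.05, 1.0, 1.0])
    bounds = ([1e-3] * 3 + [0.1, 0.1], [0.5] * 3 + [2.0, 2.0])
    return least_squares(superquadric_residuals, x0, args=(points,), bounds=bounds).x
```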

    Graph learning in robotics: a survey

    Full text link
    Deep neural networks for graphs have emerged as a powerful tool for learning on complex non-Euclidean data, which is becoming increasingly common across a variety of applications. Yet, although their potential has been widely recognised in the machine learning community, graph learning remains largely unexplored for downstream tasks such as robotics applications. Hence, to fully unlock their potential, we propose a review of graph neural architectures from a robotics perspective. The paper covers the fundamentals of graph-based models, including their architecture, training procedures, and applications. It also discusses recent advancements and challenges that arise in applied settings, related for example to the integration of perception, decision-making, and control. Finally, the paper provides an extensive review of various robotic applications that benefit from learning on graph structures, such as bodies and contacts modelling, robotic manipulation, action recognition, fleet motion planning, and many more. This survey aims to provide readers with a thorough understanding of the capabilities and limitations of graph neural architectures in robotics, and to highlight potential avenues for future research
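
    The graph-based models the survey covers build on message passing over node features. As a minimal sketch (in the style of a standard graph-convolution layer, not any specific architecture from the survey), one layer aggregates neighbour features through a normalised adjacency and then applies a learned projection:

```python
import numpy as np

def gcn_layer(node_feats, adjacency, weight):
    """One graph-convolution step: aggregate neighbour features through the
    self-loop-augmented, degree-normalised adjacency, project them, and apply
    a ReLU. Shapes: node_feats (N, F), adjacency (N, N), weight (F, F_out)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt              # symmetric normalisation
    return np.maximum(a_norm @ node_feats @ weight, 0.0)  # ReLU nonlinearity
```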

    Planning Framework for Robotic Pizza Dough Stretching with a Rolling Pin

    Get PDF
    Stretching pizza dough with a rolling pin is a nonprehensile manipulation: since the object is deformable, force closure cannot be established, and the manipulation is carried out in a nonprehensile way. The framework for this pizza dough stretching application explained in this chapter consists of four sub-procedures: (i) recognition of the pizza dough on a plate, (ii) planning the necessary steps to shape the pizza dough into the desired form, (iii) path generation for a rolling pin to execute the output of the pizza dough planner, and (iv) inverse kinematics for the bi-manual robot to grasp and control the rolling pin properly. Using the deformable object model described in Chap. 3, each sub-procedure of the proposed framework is explained sequentially
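
    As a rough illustration of the path-generation sub-procedure (iii), the sketch below produces straight rolling strokes radiating from the dough centroid towards a desired circular outline; the stroke count, offsets and radii are hypothetical parameters, not the planner described in the chapter.

```python
import numpy as np

def rolling_strokes(centroid, desired_radius, n_strokes=8, start_offset=0.01):
    """Generate straight rolling-pin strokes from near the dough centroid
    towards a target circular outline (illustrative path generator)."""
    strokes = []
    for k in range(n_strokes):
        theta = 2.0 * np.pi * k / n_strokes
        direction = np.array([np.cos(theta), np.sin(theta)])
        start = centroid + start_offset * direction   # begin just off the centre
        end = centroid + desired_radius * direction   # end at the desired outline
        strokes.append((start, end))
    return strokes

# Eight strokes from the dough centre towards a 15 cm target radius
paths = rolling_strokes(np.array([0.0, 0.0]), 0.15)
```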

    Simultaneous Tactile Exploration and Grasp Refinement for Unknown Objects

    Full text link
    This paper addresses the problem of simultaneously exploring an unknown object to model its shape, using tactile sensors on robotic fingers, while also improving finger placement to optimise grasp stability. In many situations, a robot will have only a partial camera view of the near side of an observed object, for which the far side remains occluded. We show how an initial grasp attempt, based on an initial guess of the overall object shape, yields tactile glances of the far side of the object which enable the shape estimate, and consequently the successive grasps, to be improved. We propose a grasp exploration approach using a probabilistic representation of shape, based on Gaussian Process Implicit Surfaces. This representation enables initial partial vision data to be augmented with additional data from successive tactile glances. This is combined with a probabilistic estimate of grasp quality to refine grasp configurations. When choosing the next set of finger placements, a bi-objective optimisation method is used to mutually maximise grasp quality and improve shape representation during successive grasp attempts. Experimental results show that the proposed approach yields stable grasp configurations more efficiently than a baseline method, while also yielding an improved shape estimate of the grasped object. Comment: IEEE Robotics and Automation Letters. Preprint Version. Accepted February, 202
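
    A Gaussian Process Implicit Surface of the kind referenced above regresses a signed implicit value (zero on the surface, negative inside, positive outside) from vision and tactile contact points. Below is a minimal sketch assuming an RBF kernel and a unit prior variance; the kernel length scale and noise level are illustrative, not the paper's settings.

```python
import numpy as np

def rbf(a, b, length=0.05):
    """Squared-exponential kernel between two point sets (Na, D) and (Nb, D)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gpis_predict(train_pts, train_vals, query_pts, noise=1e-4):
    """GP regression of the implicit surface value at query locations,
    returning the predictive mean and variance (illustrative sketch)."""
    K = rbf(train_pts, train_pts) + noise * np.eye(len(train_pts))
    K_inv = np.linalg.inv(K)
    k_star = rbf(query_pts, train_pts)
    mean = k_star @ K_inv @ train_vals
    # Predictive variance: prior variance (1.0) minus the explained part
    var = 1.0 - np.einsum('ij,jk,ik->i', k_star, K_inv, k_star)
    return mean, var
```

    High predictive variance marks regions of the object that remain poorly observed, which is the kind of signal a tactile-exploration strategy can exploit when choosing where to touch next.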

    Deformable Objects for Virtual Environments

    Get PDF

    Autonomous clothes manipulation using a hierarchical vision architecture

    Get PDF
    This paper presents a novel robot vision architecture for perceiving generic 3-D clothes configurations. Our architecture is hierarchically structured, starting from low-level curvature features to mid-level geometric shapes and topology descriptions, and finally, high-level semantic surface descriptions. We demonstrate our robot vision architecture on a customized dual-arm industrial robot with our in-house developed stereo vision system, carrying out autonomous grasping and dual-arm flattening. The experimental results show the effectiveness of the proposed dual-arm flattening using the stereo vision system compared with single-arm flattening using the widely cited Kinect-like sensor as the baseline. In addition, the proposed grasping approach achieves satisfactory performance when grasping various kinds of garments, verifying the capability of the proposed visual perception architecture to be adapted to more than one clothes manipulation task
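
    The low-level curvature features at the base of such a hierarchy can be computed directly from a depth map. The sketch below estimates Gaussian and mean curvature via finite differences on a dense depth image; it is a generic illustration under that assumption, not the paper's implementation.

```python
import numpy as np

def surface_curvatures(depth):
    """Gaussian (K) and mean (H) curvature of a depth map z(x, y),
    estimated with finite differences (illustrative sketch)."""
    zy, zx = np.gradient(depth)       # first derivatives along rows, columns
    zxy, zxx = np.gradient(zx)        # second derivatives of zx
    zyy, _ = np.gradient(zy)          # second derivative of zy along rows
    g = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / g ** 2                     # Gaussian curvature
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy
         + (1 + zy ** 2) * zxx) / (2.0 * g ** 1.5)          # mean curvature
    return K, H
```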

    Data-driven robotic manipulation of cloth-like deformable objects : the present, challenges and future prospects

    Get PDF
    Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that do not show a detectable level of compression strength when two points on the object are pushed towards each other, and include objects such as ropes (1D), fabrics (2D) and bags (3D). In general, CDOs’ many degrees of freedom (DoF) introduce severe self-occlusion and complex state–action dynamics as significant obstacles to perception and manipulation systems. These challenges exacerbate existing issues of modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on the application details of data-driven control methods on four major task families in this domain: cloth shaping, knot tying/untying, dressing and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms

    GRASP News, Volume 6, Number 1

    Get PDF
    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory, edited by Gregory Long and Alok Gupta
