
    Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation

    Interacting with the environment using hands is one of the abilities that distinguishes humans from other species. This aptitude is reflected in the crucial role that object manipulation plays in the world we have shaped for ourselves. With a view to bringing robots out of factories to support people in everyday life, the ability to manipulate objects autonomously in unstructured environments is therefore one of the basic skills they need. Autonomous manipulation is characterized by great complexity, especially regarding the processing of sensor information to perceive the surrounding environment. Humans rely on vision for wide-ranging three-dimensional information, on proprioception for awareness of the relative position of their own body in space, and on the sense of touch for local information when physical interaction with objects occurs. The study of autonomous manipulation in robotics aims at transferring similar perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve comparable performance in manipulating objects. The great complexity of this task makes autonomous manipulation one of the open problems in robotics, and one that has drawn increasing research attention in recent years. In this Thesis, we propose solutions to some key components of autonomous manipulation, focusing in particular on the perception problem, and test the developed approaches on the humanoid robotic platform iCub. When available, vision is the first source of information to be processed for inferring how to interact with objects. The object modeling and grasping pipeline based on superquadric functions we designed meets this need: it reconstructs the object's 3D model from a partial point cloud and computes a suitable hand pose for grasping the object.
Retrieving object information with touch sensors alone is a relevant skill that becomes crucial when vision is occluded, as happens, for instance, during physical interaction with the object. We addressed this problem by designing a novel tactile localization algorithm, named the Memory Unscented Particle Filter, capable of localizing and recognizing objects relying solely on 3D contact points collected on the object's surface. Another key aspect of autonomous manipulation we report on in this Thesis is bi-manual coordination. The execution of more advanced manipulation tasks may in fact require the use and coordination of two arms. Tool use, for instance, often requires a proper in-hand object pose that can be obtained via dual-arm re-grasping. In pick-and-place tasks, the initial and target positions of the object sometimes do not belong to the same arm's workspace, thus requiring one hand for lifting the object and the other for placing it in the new position. In this regard, we implemented a pipeline for executing the handover task, i.e. the sequence of actions for autonomously passing an object from one robot hand to the other. The contributions described thus far address specific subproblems of the more complex task of autonomous manipulation. This differs from what humans do, in that humans develop their manipulation skills by learning through experience and trial and error. A proper mathematical formulation for encoding this learning approach is given by Deep Reinforcement Learning, which has recently proved successful in many robotics applications. For this reason, in this Thesis we also report on the six-month experience carried out at the Berkeley Artificial Intelligence Research laboratory with the goal of studying Deep Reinforcement Learning and its application to autonomous manipulation.
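The superquadric-based pipeline the abstract mentions can be illustrated with a minimal sketch: fit the five parameters of an axis-aligned, origin-centred superquadric (three scales, two shape exponents) to a partial point cloud by least squares on the inside-outside function. This is an assumed toy reconstruction, not the thesis implementation; the pose of the object and the grasp-pose computation are omitted, and all names and optimizer choices are illustrative.

```python
# Hedged sketch: fit an axis-aligned, origin-centred superquadric
# (a1, a2, a3, e1, e2) to a point cloud.  Illustrative only.
import numpy as np
from scipy.optimize import least_squares

def inside_outside(params, pts):
    """Superquadric inside-outside function F; F = 1 on the surface."""
    a1, a2, a3, e1, e2 = params
    x, y, z = np.abs(pts).T
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)

def fit_superquadric(pts):
    """Least-squares fit of the 5 superquadric parameters."""
    def residuals(p):
        a1, a2, a3, e1, e2 = p
        # sqrt(a1*a2*a3) weighting favours the smallest enclosing shape
        return np.sqrt(a1 * a2 * a3) * (inside_outside(p, pts) ** e1 - 1.0)
    x0 = np.r_[np.abs(pts).max(axis=0), 1.0, 1.0]      # rough initial guess
    bounds = ([1e-3] * 3 + [0.3, 0.3], [np.inf] * 3 + [2.0, 2.0])
    return least_squares(residuals, x0, bounds=bounds).x

# toy usage: points sampled on a 0.04 x 0.02 x 0.06 box-like surface
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (500, 3)) * [0.04, 0.02, 0.06]
pts /= np.maximum(np.abs(pts / [0.04, 0.02, 0.06]).max(axis=1, keepdims=True), 1e-9)
a = fit_superquadric(pts)                              # recovered parameters
```

With the exponents bounded away from zero the fit returns a rounded-box shape whose scales approximate the object's half-extents, which is what the grasp planner downstream would consume.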

    Grasp planning under task-specific contact constraints

    Several aspects have to be addressed before realizing the dream of a robotic hand-arm system with human-like capabilities, ranging from the consolidation of a proper mechatronic design, to the development of precise, lightweight sensors and actuators, to the efficient planning and control of the articular forces and motions required for interaction with the environment. This thesis provides solution algorithms for a central problem within the latter aspect, known as the grasp planning problem: given a robotic system formed by a multifinger hand attached to an arm, and an object to be grasped, both with known geometry and location in 3-space, determine how the hand-arm system should be moved, without colliding with itself or with the environment, in order to firmly grasp the object in a suitable way. Central to our algorithms is the explicit consideration of a given set of hand-object contact constraints to be satisfied in the final grasp configuration, imposed by the particular manipulation task to be performed with the object. This is a distinguishing feature with respect to other grasp planning algorithms in the literature, which usually provide no means of ensuring precise hand-object contact locations in the resulting grasp. These conventional algorithms are fast and well suited to planning grasps for pick-and-place operations, but not to planning grasps required for a specific manipulation of the object, like those necessary for holding a pen, a pair of scissors, or a jeweler's screwdriver when writing, cutting paper, or turning a screw, respectively.
To be able to generate such highly selective grasps, we assume that a number of surface regions on the hand are to be placed in contact with corresponding regions on the object, and enforce the fulfilment of these constraints on the obtained solutions from the very beginning, in addition to the usual constraints of grasp restrainability, manipulability, and collision avoidance. The proposed algorithms can be applied to robotic hands of arbitrary structure, possibly considering compliance in the joints and the contacts if desired, and they can accommodate general patch-patch contact constraints, instead of the more restrictive contact types occasionally considered in the literature. It is worth noting also that, while common force-closure or manipulability indices are used to assess the quality of grasps, no particular assumption is made on the mathematical properties of the quality index, so that any quality criterion can be accommodated in principle. The algorithms have been tested and validated in numerous situations involving real mechanical hands and typical objects, and find applications in classical or emerging contexts such as service robotics, telemedicine, space exploration, prosthetics, manipulation in hazardous environments, and human-robot interaction in general.
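One of the quality indices the abstract alludes to, the smallest singular value of the grasp matrix, can be sketched in a few lines. This is a generic textbook formulation for hard point contacts, not the thesis's algorithm; the contact points and frames below are illustrative assumptions.

```python
# Hedged sketch: grasp quality as sigma_min of the grasp matrix G
# for hard-finger point contacts.  Illustrative contacts only.
import numpy as np

def grasp_matrix(contacts):
    """Stack the 6x3 partial grasp matrices of point contacts.

    contacts: list of (p, R) with p the contact point (3,) and R a 3x3
    rotation whose columns span the contact frame."""
    def skew(p):
        return np.array([[0, -p[2], p[1]],
                         [p[2], 0, -p[0]],
                         [-p[1], p[0], 0]])
    blocks = [np.vstack([R, skew(p) @ R]) for p, R in contacts]
    return np.hstack(blocks)                          # 6 x 3n

def quality(contacts):
    """sigma_min(G): zero iff some object wrench cannot be resisted."""
    return np.linalg.svd(grasp_matrix(contacts), compute_uv=False).min()

I = np.eye(3)
# two antipodal contacts on a unit sphere: torque about the contact
# axis is unresisted, so the index is (numerically) zero
g2 = quality([(np.array([1.0, 0.0, 0.0]), I),
              (np.array([-1.0, 0.0, 0.0]), I)])
# a third orthogonal contact removes that degenerate direction
g3 = quality([(np.array([1.0, 0.0, 0.0]), I),
              (np.array([-1.0, 0.0, 0.0]), I),
              (np.array([0.0, 1.0, 0.0]), I)])
```

The point of the sketch is the abstract's claim: the planner only evaluates the index as a black box, so swapping `quality` for any other criterion leaves the algorithm unchanged.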

    Vision-Based Autonomous Control in Robotic Surgery

    Robotic surgery has completely changed surgical procedures. Enhanced dexterity, ergonomics, motion scaling, and tremor filtering are well-known advantages introduced with respect to classical laparoscopy. In the past decade, robotics has played a fundamental role in Minimally Invasive Surgery (MIS), in which the da Vinci robotic system (Intuitive Surgical Inc., Sunnyvale, CA) is the most widely used system for robot-assisted laparoscopic procedures. Robots also have great potential in microsurgical applications, where human limits become critical and sub-millimetric surgical gestures could benefit enormously from motion scaling and tremor compensation. However, surgical robots still lack advanced assistive control methods that could notably support the surgeon's activity and perform surgical tasks autonomously with high quality of intervention. In this scenario, images are the main feedback the surgeon can use to operate correctly in the surgical site. Therefore, in view of increasing autonomy in surgical robotics, vision-based techniques play an important role and can be developed by extending computer vision algorithms to surgical scenarios. Moreover, many surgical tasks could benefit from the application of advanced control techniques, allowing the surgeon to work under less stressful conditions and to perform surgical procedures with greater accuracy and safety. The thesis starts from these topics, providing surgical robots with the ability to perform complex tasks and helping the surgeon to skillfully manipulate the robotic system so as to meet the above requirements. An increase in safety and a reduction in mental workload are achieved through the introduction of active constraints, which can prevent the surgical tool from crossing a forbidden region and, similarly, generate constrained motion to guide the surgeon along a specific path or to accomplish autonomous robotic tasks.
This leads to the development of a vision-based method for robot-aided dissection procedures that allows the control algorithm to autonomously adapt to environmental changes during the surgical intervention using stereo image processing. Computer vision is exploited to define a surgical-tool collision avoidance method that uses Forbidden Region Virtual Fixtures, rendering a repulsive force to the surgeon. Advanced control techniques based on an optimization approach are developed, allowing multiple-task execution with the task definitions encoded through Control Barrier Functions (CBFs), and enhancing the haptic-guided teleoperation system during suturing procedures. The proposed methods are tested on different robotic platforms, including the da Vinci Research Kit robot (dVRK) and a new microsurgical robotic platform. Finally, the integration of new sensors and instruments into surgical robots is considered, including a multi-functional tool for dexterous tissue manipulation and different visual sensing technologies.
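The CBF-based active constraint described above can be sketched for the simplest case: a spherical forbidden region and a velocity-controlled tool tip. The surgeon's commanded velocity is minimally modified so the barrier condition dh/dt >= -alpha*h holds; with a single linear constraint the safety QP has a closed form. All geometry, gains, and names are illustrative assumptions, not the thesis's controller.

```python
# Hedged sketch: CBF safety filter for a Forbidden Region Virtual
# Fixture -- a sphere the tool tip must not enter.  Illustrative only.
import numpy as np

def cbf_filter(x, u_des, center, radius, alpha=5.0):
    """min ||u - u_des||^2  s.t.  dh/dt >= -alpha * h(x),
    with h(x) = ||x - center||^2 - radius^2 (safe set: h >= 0)."""
    h = np.dot(x - center, x - center) - radius ** 2
    a = 2.0 * (x - center)              # gradient of h; dh/dt = a . u
    b = -alpha * h
    if a @ u_des >= b:                  # command already safe: pass through
        return u_des
    # single-constraint QP: project u_des onto the half-space a.u >= b
    return u_des + (b - a @ u_des) / (a @ a) * a

# usage: tool 4 cm from the centre of a 3 cm forbidden sphere,
# commanded straight into it -- the filter brakes the approach
x = np.array([0.04, 0.0, 0.0])
u_des = np.array([-1.0, 0.0, 0.0])      # commanded velocity (m/s)
u = cbf_filter(x, u_des, center=np.zeros(3), radius=0.03)
```

The filtered command keeps moving toward the sphere but slows as h shrinks, reaching zero approach speed at the boundary; far from the fixture the surgeon's command passes through untouched, which is the "minimally invasive" property that makes CBF filters attractive for shared control.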

    Realtime State Estimation with Tactile and Visual Sensing. Application to Planar Manipulation

    Accurate and robust object state estimation enables successful object manipulation. Visual sensing is widely used to estimate object poses. However, in a cluttered scene or in a tight workspace, the robot's end-effector often occludes the object from the visual sensor. The robot then loses visual feedback and must fall back on open-loop execution. In this paper, we integrate both tactile and visual input using a framework for solving the SLAM problem, incremental smoothing and mapping (iSAM), to provide a fast and flexible solution. Visual sensing provides global pose information but is noisy in general, whereas contact sensing is local, but its measurements are more accurate relative to the end-effector. By combining them, we aim to exploit their advantages and overcome their limitations. We explore the technique in the context of a pusher-slider system. We adapt iSAM's measurement cost and motion cost to the pushing scenario, and use an instrumented setup to evaluate the estimation quality with different object shapes, on different surface materials, and under different contact modes.
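The complementarity of the two sensing modalities can be illustrated with a toy one-step version of the smoothing problem: for a single planar-pose variable with one visual factor and one tactile factor, the least-squares objective iSAM optimizes reduces to information-weighted Gaussian fusion. This is not iSAM itself (which incrementally updates a full factor graph); the covariances below are assumed for the sketch.

```python
# Toy illustration of visuo-tactile fusion: MAP estimate of one planar
# pose (x, y, theta) under a noisy global visual factor and a tight
# local tactile factor.  Covariances are illustrative assumptions.
import numpy as np

def fuse(z_vis, cov_vis, z_tac, cov_tac):
    """argmin_x (x-z_vis)' Svis^-1 (x-z_vis) + (x-z_tac)' Stac^-1 (x-z_tac)."""
    L_vis = np.linalg.inv(cov_vis)          # information matrices
    L_tac = np.linalg.inv(cov_tac)
    cov = np.linalg.inv(L_vis + L_tac)      # fused covariance
    x = cov @ (L_vis @ z_vis + L_tac @ z_tac)
    return x, cov

z_vis = np.array([0.10, 0.05, 0.30])        # global but noisy
z_tac = np.array([0.12, 0.05, 0.28])        # local but accurate
cov_vis = np.diag([1e-2, 1e-2, 1e-2])
cov_tac = np.diag([1e-4, 1e-4, 1e-3])
x, cov = fuse(z_vis, cov_vis, z_tac, cov_tac)
# the fused estimate sits close to the tactile measurement, with a
# covariance tighter than either factor alone
```

In the full system the same weighting happens jointly over the whole trajectory, which is why the fused estimator degrades gracefully when the end-effector occludes the camera: the visual factors simply stop arriving while the tactile ones keep constraining the pose.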