    Autonomous vision-guided bi-manual grasping and manipulation

    This paper describes the implementation, demonstration and evaluation of a variety of autonomous, vision-guided manipulation capabilities on a dual-arm Baxter robot. Initially, symmetric coordinated bi-manual manipulation based on a kinematic tracking algorithm was implemented on the robot to enable a master-slave manipulation system. We demonstrate the efficacy of this approach with a human-robot collaboration experiment, in which a human operator moves the master arm along arbitrary trajectories and the slave arm automatically follows while maintaining a constant relative pose between the two end-effectors. This concept was then extended to perform dual-arm manipulation without human intervention. To this end, an image-based visual servoing scheme was developed to control the motion of the arms and position them at the desired grasp locations. We then combined this scheme with a dynamic position controller to move the grasped object with both arms along a prescribed trajectory. The presented approach was validated by performing numerous symmetric and asymmetric bi-manual manipulations under different conditions. Our experiments demonstrated an 80% success rate in symmetric dual-arm manipulation tasks and a 73% success rate in asymmetric dual-arm manipulation tasks.
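    Since the slave arm must hold a constant relative pose with respect to the master, the core of such a master-slave scheme can be sketched with homogeneous transforms: capture the offset once, then re-apply it every control cycle. The sketch below is ours, not the paper's implementation; the function names and the use of plain 4x4 matrices are illustrative assumptions.

```python
# Minimal sketch of the constant-relative-pose idea described above,
# assuming plain 4x4 homogeneous transforms; the paper's kinematic
# tracking implementation on the Baxter robot is not reproduced here.
import numpy as np

def relative_transform(T_master: np.ndarray, T_slave: np.ndarray) -> np.ndarray:
    # Constant offset T_rel such that T_slave = T_master @ T_rel.
    return np.linalg.inv(T_master) @ T_slave

def slave_target(T_master_now: np.ndarray, T_rel: np.ndarray) -> np.ndarray:
    # Slave end-effector pose that keeps the relative pose constant.
    return T_master_now @ T_rel

# Capture the offset once, then track it every control cycle.
T_master0 = np.eye(4)                        # master pose at start
T_slave0 = np.eye(4); T_slave0[0, 3] = 0.3   # slave 30 cm away along x
T_rel = relative_transform(T_master0, T_slave0)

T_master_now = np.eye(4); T_master_now[2, 3] = 0.1  # operator lifts master 10 cm
print(slave_target(T_master_now, T_rel))            # slave follows, offset preserved
```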

    Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot

    Mobile manipulation tasks are among the key challenges in the field of search and rescue (SAR) robotics, requiring robots with flexible locomotion and manipulation abilities. Since the tasks are mostly unknown in advance, the robot has to adapt to a wide variety of terrains and workspaces during a mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and an anthropomorphic upper body to carry out complex tasks in environments too dangerous for humans. Due to its high number of degrees of freedom, controlling the robot with direct teleoperation approaches is challenging and exhausting. Supervised autonomy approaches are promising for increasing the quality and speed of control while keeping the flexibility to solve unknown tasks. We developed a set of operator assistance functionalities with different levels of autonomy to control the robot for challenging locomotion and manipulation tasks. The integrated system was evaluated in disaster response scenarios and showed promising performance.
    Comment: In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018.

    Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation

    Interacting with the environment using hands is one of the distinctive abilities of humans with respect to other species. This aptitude is reflected in the crucial role played by object manipulation in the world that we have shaped for ourselves. With a view to bringing robots out of industrial settings to support people in everyday life, the ability to manipulate objects autonomously and in unstructured environments is therefore one of the basic skills they need. Autonomous manipulation is characterized by great complexity, especially regarding the processing of sensor information to perceive the surrounding environment. Humans rely on vision for wide-ranging three-dimensional information, on proprioception for awareness of the relative position of their own body in space, and on the sense of touch for local information when physical interaction with objects occurs. The study of autonomous manipulation in robotics aims at transferring similar perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve similar performance in manipulating objects. The great complexity of this task makes autonomous manipulation one of the open problems in robotics, and it has drawn increasing research attention in recent years. In this Thesis, we propose possible solutions to some key components of autonomous manipulation, focusing in particular on the perception problem and testing the developed approaches on the humanoid robotic platform iCub. When available, vision is the first source of information to be processed for inferring how to interact with objects. The object modeling and grasping pipeline we designed, based on superquadric functions, meets this need: it reconstructs the object's 3D model from a partial point cloud and computes a suitable hand pose for grasping the object. Retrieving object information with touch sensors alone is a relevant skill that becomes crucial when vision is occluded, as happens for instance during physical interaction with the object. We addressed this problem with the design of a novel tactile localization algorithm, named Memory Unscented Particle Filter, capable of localizing and recognizing objects relying solely on 3D contact points collected on the object surface. Another key aspect of autonomous manipulation we report on in this Thesis is bi-manual coordination, since the execution of more advanced manipulation tasks may require the use and coordination of two arms. Tool use, for instance, often requires a proper in-hand object pose that can be obtained via dual-arm re-grasping. In pick-and-place tasks, the initial and target positions of the object sometimes do not belong to the same arm's workspace, requiring one hand to lift the object and the other to place it in the new position. In this regard, we implemented a pipeline for executing the handover task, i.e. the sequence of actions for autonomously passing an object from one robot hand to the other. The contributions described thus far address specific subproblems of the more complex task of autonomous manipulation. This differs from what humans do, in that humans develop their manipulation skills by learning through experience and trial-and-error strategies. A proper mathematical formulation for encoding this learning approach is given by Deep Reinforcement Learning, which has recently proved successful in many robotics applications.
    For this reason, in this Thesis we also report on the six-month experience carried out at the Berkeley Artificial Intelligence Research laboratory with the goal of studying Deep Reinforcement Learning and its application to autonomous manipulation.
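    As a concrete illustration of the superquadric model underlying the grasping pipeline, the standard inside-outside function below scores whether a point lies inside (F < 1), on (F = 1), or outside (F > 1) the surface. The fitting procedure and grasp-pose computation are the thesis's contributions and are not reproduced here; parameter names and example values are ours.

```python
# Hedged sketch of the standard superquadric inside-outside function;
# the thesis's fitting and grasp-pose steps are omitted, and the
# parameter values below are illustrative only.
import numpy as np

def superquadric_F(points, a1, a2, a3, e1, e2):
    # F < 1 inside, F = 1 on the surface, F > 1 outside.
    # points: (N, 3) array expressed in the superquadric's own frame;
    # a1, a2, a3: semi-axis lengths; e1, e2: shape exponents.
    x, y, z = np.abs(points).T  # absolute values keep fractional powers real
    xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
    return xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

# A fit would minimize a residual such as (F**e1 - 1)**2 over the partial
# point cloud (e.g. with scipy.optimize.least_squares); not shown here.
pts = np.array([[0.05, 0.0, 0.0],   # on the surface of this superquadric
                [0.0, 0.0, 0.2]])   # well outside it
print(superquadric_F(pts, a1=0.05, a2=0.05, a3=0.10, e1=0.5, e2=0.5))
```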

    Toward Image-Guided Automated Suture Grasping Under Complex Environments: A Learning-Enabled and Optimization-Based Holistic Framework

    To realize higher-level autonomy in surgical knot tying for minimally invasive surgery (MIS), automated suture grasping, which bridges the suture stitching and looping procedures, is an important yet challenging task that needs to be achieved. This paper presents a holistic framework with image-guided and automation techniques to robotize this operation even under complex environments. The whole task is initialized by suture segmentation, for which we propose a novel semi-supervised learning architecture featuring a suture-aware loss that learns the suture's slender structure from both annotated and unannotated data. With successful segmentation in both stereo camera views, we develop a Sampling-based Sliding Pairing (SSP) algorithm to optimize the suture's 3D shape online. By jointly studying the robotic configuration and the suture's spatial characteristics, a target function is introduced to find the optimal grasping pose of the surgical tool under Remote Center of Motion (RCM) constraints. To compensate for inherent errors and practical uncertainties, a unified grasping strategy with a novel vision-based mechanism is introduced to autonomously accomplish the grasping task. Our framework is extensively evaluated on learning-based segmentation, 3D reconstruction, and image-guided grasping on the da Vinci Research Kit (dVRK) platform, where we achieve high performance and success rates in perception and robotic manipulation. These results prove the feasibility of our approach in automating the suture grasping task; this work fills the gap between automated surgical stitching and looping, stepping towards a higher level of task autonomy in surgical knot tying.
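    Recovering the suture's 3D shape from the two segmented views rests on stereo triangulation. The sketch below shows plain linear (DLT) triangulation of one left/right pixel correspondence as a generic stand-in; the paper's SSP algorithm for pairing and refining points along the curve is not reproduced, and the projection matrices and values here are synthetic assumptions.

```python
# Generic linear (DLT) triangulation of one matched suture pixel from the
# left/right cameras; the paper's SSP pairing/refinement along the curve
# is NOT reproduced. P_left/P_right are assumed 3x4 projection matrices
# from stereo calibration; the values below are synthetic.
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    # Solve A X = 0 for the homogeneous 3D point via SVD.
    A = np.vstack([
        uv_left[0] * P_left[2] - P_left[0],
        uv_left[1] * P_left[2] - P_left[1],
        uv_right[0] * P_right[2] - P_right[0],
        uv_right[1] * P_right[2] - P_right[1],
    ])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]  # dehomogenize

P_left = np.hstack([np.eye(3), np.zeros((3, 1))])           # reference camera
P_right = np.hstack([np.eye(3), [[-0.05], [0.0], [0.0]]])   # 5 cm baseline
X_true = np.array([0.01, 0.02, 0.5, 1.0])                   # synthetic point
uvL = (P_left @ X_true)[:2] / (P_left @ X_true)[2]
uvR = (P_right @ X_true)[:2] / (P_right @ X_true)[2]
print(triangulate(P_left, P_right, uvL, uvR))  # ~ [0.01, 0.02, 0.5]
```

    Matched pixels along the segmented suture would be triangulated point by point, giving an initial 3D polyline for the subsequent shape optimization.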

    Autonomous clothes manipulation using a hierarchical vision architecture

    This paper presents a novel robot vision architecture for perceiving generic 3-D clothes configurations. Our architecture is hierarchically structured, starting from low-level curvature features, through mid-level geometric shapes and topology descriptions, and finally to high-level semantic surface descriptions. We demonstrate our robot vision architecture on a customized dual-arm industrial robot with our in-house developed stereo vision system, carrying out autonomous grasping and dual-arm flattening. The experimental results show the effectiveness of the proposed dual-arm flattening using the stereo vision system compared with single-arm flattening using the widely cited Kinect-like sensor as the baseline. In addition, the proposed grasping approach achieves satisfactory performance when grasping various kinds of garments, verifying that the proposed visual perception architecture can be adapted to more than one clothing manipulation task.
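    The lowest layer of such a hierarchy computes curvature features from range data. As an illustration, the sketch below estimates the mean curvature of a depth map z(row, col) with finite differences; the paper's actual feature definitions and its higher geometric and semantic layers are not reproduced, and all names are ours.

```python
# Illustrative low-level feature: approximate mean curvature of a depth
# map via finite differences. This is a generic stand-in, not the paper's
# feature set; wrinkles and folds show up as curvature extrema.
import numpy as np

def mean_curvature(depth: np.ndarray) -> np.ndarray:
    zy, zx = np.gradient(depth)   # first derivatives (rows = y, cols = x)
    zxy, zxx = np.gradient(zx)    # second derivatives
    zyy = np.gradient(zy)[0]
    num = (1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx
    den = 2 * (1 + zx**2 + zy**2) ** 1.5
    return num / den

# Synthetic fold: a parabolic valley running along the columns.
depth = np.fromfunction(lambda i, j: (i - 32.0) ** 2 / 512.0, (64, 64))
H = mean_curvature(depth)
print(H.shape, float(H.max()))
```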

    On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state of the art will encourage and guide these communities to continue to interact and make progress in these important areas.

    Jointly structuring triadic spaces of meaning and action: book sharing from 3 months on

    This study explores the emergence of triadic interactions through the example of book sharing. As part of a naturalistic study, 10 infants were visited in their homes from 3 to 12 months of age. We report that (1) book sharing as a form of infant-caregiver-object interaction occurred from as early as 3 months. Using qualitative micro-level video analysis, adapting methodologies from conversation and interaction analysis, we demonstrate that caregivers and infants practiced book sharing in a highly coordinated way, with caregivers carving out interaction units and shaping actions into action arcs, and infants actively participating and coordinating their attention between mother and object from the beginning. We also (2) sketch a developmental trajectory of book sharing over the first year and show that the quality and dynamics of book sharing interactions underwent considerable change as the ecological situation was transformed in parallel with the infants' development of attention and motor skills. Social book sharing interactions reached an early peak at 6 months, with the infants becoming more active in the coordination of attention between caregiver and book. From 7-9 months, the infants shifted their interest largely to solitary object exploration, in parallel with newly emerging postural and object manipulation skills, disrupting the social coordination and the cultural frame of book sharing. In the period from 9-12 months, social book interactions resurfaced as infants began to effectively integrate object actions within the socially shared activity. In conclusion, to fully understand the development and qualities of triadic cultural activities such as book sharing, we need to look especially at the hitherto overlooked early period from 4-6 months and investigate how shared spaces of meaning and action are structured together in and through interaction, creating the substrate for continuing cooperation and cultural learning.