    Visually-Guided Manipulation Techniques for Robotic Autonomous Underwater Panel Interventions

    The long-term goal of this ongoing research is to increase the level of autonomy of underwater intervention missions. With the specific mission being an intervention on a panel, this paper presents results from different development stages obtained with the real mechatronics and a panel mockup. It also highlights two methodologies implemented for the required visually guided manipulation algorithms, along with a roadmap describing the testbeds used for experimental validation in order of increasing complexity. These results build on previously generated know-how covering both the complete mechatronics developed for the intervention autonomous underwater vehicle and the required 3D simulation tool. In summary, the implemented approach enables the intervention system to autonomously control how the gripper approaches and manipulates the two panel devices (i.e. a valve and a connector), and results in different scenarios demonstrate the reliability and feasibility of this autonomous intervention system in water tank and pool conditions. This work was partly supported by the Spanish Ministry of Research and Innovation under DPI2011-27977-C03 (TRITON Project) and DPI2014-57746-C3 (MERBOTS Project), by Foundation Caixa Castelló-Bancaixa and Universitat Jaume I grant PID2010-12, by Universitat Jaume I PhD grants PREDOC/2012/47 and PREDOC/2013/46, and by Generalitat Valenciana PhD grant ACIF/2014/298. We would also like to acknowledge the support of our partners in the Spanish Coordinated Projects TRITON and MERBOTS: Universitat de les Illes Balears, UIB (subprojects VISUAL2 and SUPERION), and Universitat de Girona, UdG (subprojects COMAROB and ARCHROV)

    Robotic execution for everyday tasks by means of external vision/force control

    In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e. the robot head). Task-oriented grasping algorithms [1] are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach [2], based on external control, is used first to guide the robot hand towards the grasp position and then to perform the task while taking external forces into account. The coupling of these two complementary sensor modalities makes the robot robust to uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with the object being manipulated, independently of the camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented
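
    The camera-independence claimed in this abstract follows from expressing the servoing error between the hand and the object rather than between the hand and the camera. Below is a minimal sketch of such a position-based visual servoing law, assuming homogeneous-transform inputs; the function name, frame names, and gain are illustrative, not the authors' implementation.

```python
# Minimal PBVS sketch: regulate the hand pose relative to the observed object.
# Because both poses come from the same external camera, the camera frame
# cancels out and the control law does not depend on where the camera is.
import numpy as np
from scipy.spatial.transform import Rotation


def pbvs_twist(T_cam_hand, T_cam_obj, T_obj_hand_des, lam=0.5):
    """Return a 6D twist (v, w) driving the hand toward the desired pose
    expressed in the object frame. All inputs are 4x4 homogeneous matrices."""
    # Current hand pose in the object frame (camera pose cancels out here).
    T_obj_hand = np.linalg.inv(T_cam_obj) @ T_cam_hand
    # Pose error: transform from the current to the desired hand pose.
    T_err = np.linalg.inv(T_obj_hand) @ T_obj_hand_des
    v = lam * T_err[:3, 3]                                     # translation error
    w = lam * Rotation.from_matrix(T_err[:3, :3]).as_rotvec()  # rotation error
    return np.hstack([v, w])
```

    In a full system this twist would be fed to the arm's velocity controller each frame, with the force loop superimposed once contact is made.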

    Effect of Terminal Haptic Feedback on the Sensorimotor Control of Visually and Tactile-Guided Grasping

    When grasping a physical object, the sensorimotor system is able to specify grip aperture via absolute sensory information. In contrast, grasping to a location previously occupied by (no-target pantomime-grasp) or adjacent to (spatially dissociated pantomime-grasp) an object results in the specification of grip aperture via relative sensory information. It is important to recognize that grasping a physical object and pantomime-grasping differ not only in terms of their spatial properties but also with respect to the availability of haptic feedback. Thus, the objective of this dissertation was to investigate how terminal haptic feedback influences the underlying mechanisms that support goal-directed grasping in visual- and tactile-based settings. In Chapter Two I sought to determine whether absolute haptic feedback influences tactile-based cues supporting grasps performed to the location previously occupied by an object. Results demonstrated that when haptic feedback was presented at the end of the response, absolute haptic signals were incorporated in grasp production. Such a finding indicates that haptic feedback supports the absolute calibration between a tactilely defined object and the required motor output. In Chapter Three I examined whether haptic feedback influences the information supporting visually guided no-target pantomime-grasps in a manner similar to tactile-guided grasping. Results showed that haptic sensory signals support no-target pantomime-grasping when provided at the end of the response. Accordingly, my findings demonstrated that a visuo-haptic calibration supports the absolute specification of object size and highlights the role of multisensory integration in no-target pantomime-grasping. Importantly, however, Chapter Four demonstrated that a priori knowledge of haptic feedback is necessary to support the aforementioned calibration process. In Chapter Five I demonstrate that, unlike no-target pantomime-grasps, spatially dissociated pantomime-grasps preclude a visuo-haptic calibration. Accordingly, I propose that the top-down demands of decoupling stimulus-response relations in spatially dissociated pantomime-grasping render aperture shaping via a visual percept that is immutable to the integration of haptic feedback. In turn, the decreased top-down demands of no-target pantomime-grasps allow haptic feedback to serve as a reliable sensory resource supporting an absolute visuo-haptic calibration

    Autonomous vision-guided bi-manual grasping and manipulation

    This paper describes the implementation, demonstration and evaluation of a variety of autonomous, vision-guided manipulation capabilities using a dual-arm Baxter robot. Initially, symmetric coordinated bi-manual manipulation based on a kinematic tracking algorithm was implemented on the robot to create a master-slave manipulation system. We demonstrate the efficacy of this approach with a human-robot collaboration experiment in which a human operator moves the master arm along arbitrary trajectories and the slave arm automatically follows while maintaining a constant relative pose between the two end-effectors. This concept was then extended to perform dual-arm manipulation without human intervention. To this end, an image-based visual servoing scheme was developed to control the motion of the arms and position them at the desired grasp locations. We then combine this with a dynamic position controller to move the grasped object along a prescribed trajectory using both arms. The presented approach has been validated by performing numerous symmetric and asymmetric bi-manual manipulations under different conditions. Our experiments demonstrated an 80% success rate in the symmetric dual-arm manipulation tasks and a 73% success rate in the asymmetric dual-arm manipulation tasks
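
    The master-slave coordination described here amounts to freezing the relative transform between the two end-effectors and recomputing the slave target from the master pose at every control cycle. The following is a minimal sketch under that assumption; names and the homogeneous-transform interface are illustrative rather than the paper's actual code.

```python
# Minimal master-slave coordination sketch: record the relative pose once,
# then keep regenerating the slave end-effector target from the master pose
# so the relative pose between the two end-effectors stays constant.
import numpy as np


def record_relative_pose(T_base_master, T_base_slave):
    """Fixed transform from the master end-effector to the slave end-effector."""
    return np.linalg.inv(T_base_master) @ T_base_slave


def slave_target(T_base_master_now, T_master_slave_fixed):
    """Slave end-effector pose that preserves the recorded relative pose."""
    return T_base_master_now @ T_master_slave_fixed
```

    In use, `record_relative_pose` would be called once at the start of the task and `slave_target` at every cycle of the kinematic tracking loop, with the result sent to the slave arm's inverse kinematics.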

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? In order to answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD thesis is to establish this sensor-model coupling

    Impaired peripheral reaching and on-line corrections in patient DF: optic ataxia with visual form agnosia

    An influential model of vision suggests the presence of two visual streams within the brain: a dorsal occipito-parietal stream which mediates action and a ventral occipito-temporal stream which mediates perception. One of the cornerstones of this model is DF, a patient with visual form agnosia following bilateral ventral stream lesions. Despite her inability to identify and distinguish visual stimuli, DF can still use visual information to control her hand actions towards these stimuli. These observations have been widely interpreted as demonstrating a double dissociation from optic ataxia, a condition observed after bilateral dorsal stream damage in which patients are unable to act towards objects that they can recognize. In Experiment 1, we investigated how patient DF performed on the classical diagnostic task for optic ataxia, reaching in central and peripheral vision. We replicated recent findings that DF is remarkably inaccurate when reaching to peripheral targets, but not when reaching in free vision. In addition, we present new evidence that her peripheral reaching errors follow the optic ataxia pattern, increasing with target eccentricity and being biased towards fixation. In Experiments 2 and 3, for the first time we examined DF's on-line control of reaching using a double-step paradigm in fixation-controlled and free-vision versions of the task. DF was impaired when performing fast on-line corrections in all conditions tested, similarly to optic ataxia patients. Our findings question the long-standing assumption that DF's dorsal visual stream is functionally intact and that her on-line visuomotor control is spared. In contrast, in addition to visual form agnosia, DF also has visuomotor symptoms of optic ataxia which are most likely explained by bilateral damage to the superior parietal occipital cortex. We thus conclude that patient DF can no longer be considered an appropriate single-case model for testing the neural basis of perception and action dissociations

    Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks

    Underwater object detection and recognition using computer vision are challenging tasks due to the poor lighting conditions of submerged environments. For intervention missions requiring grasping and manipulation of submerged objects, a vision system must provide an Autonomous Underwater Vehicle (AUV) with object detection, localization and tracking capabilities. In this paper, we describe the integration of a vision system into the MARIS intervention AUV and its configuration for detecting cylindrical pipes, a typical artifact of interest in underwater operations. Pipe edges are tracked using an alpha-beta filter to achieve robustness and to return a reliable pose estimate even in the case of partial pipe visibility. Experiments in an outdoor water pool under different lighting conditions show that the adopted algorithmic approach allows detection of the target pipes and provides a sufficiently accurate estimate of their pose even when they become partially visible, thereby supporting the AUV in several successful pipe grasping operations
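
    The alpha-beta filter mentioned in this abstract is a fixed-gain predictor-corrector: it propagates each tracked edge parameter with a constant-velocity model and blends in new detections when available, which is what lets the pose estimate coast through frames where the pipe is only partially visible. A minimal scalar-state sketch follows; the gains, sample time, and class interface are illustrative assumptions, not the MARIS implementation.

```python
# Minimal alpha-beta filter sketch for tracking one pipe-edge parameter
# (e.g. a line offset or orientation) across frames.
class AlphaBetaFilter:
    def __init__(self, x0, alpha=0.85, beta=0.005, dt=1.0 / 15.0):
        self.x = x0      # tracked edge parameter
        self.v = 0.0     # estimated rate of change of the parameter
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def update(self, measurement=None):
        # Predict with a constant-velocity model.
        x_pred = self.x + self.v * self.dt
        if measurement is None:          # edge not detected in this frame
            self.x = x_pred              # coast on the prediction
            return self.x
        r = measurement - x_pred         # innovation (measurement residual)
        self.x = x_pred + self.alpha * r
        self.v = self.v + (self.beta / self.dt) * r
        return self.x
```

    One filter per edge parameter, updated every frame (with `measurement=None` when the detector misses), yields the smoothed edge estimates from which the pipe pose can still be reconstructed under partial visibility.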