
    Optimizing plane-to-plane positioning tasks by image-based visual servoing and structured light

    This paper considers the problem of positioning an eye-in-hand system so that it becomes parallel to a planar object. Our approach is based on attaching to the camera a structured light emitter designed to produce a suitable set of visual features. The aim of using structured light is not only to simplify the image processing and to allow low-textured objects to be considered, but also to produce a control scheme with desirable properties such as decoupling, convergence and an adequate camera trajectory. This paper focuses on an image-based approach that achieves decoupling in the whole workspace and whose global convergence is ensured under perfect conditions. The behavior of the image-based approach is shown to be partially equivalent to a 3D visual servoing scheme, but with better robustness with respect to image noise. The robustness of the approach against calibration errors is demonstrated both analytically and experimentally.
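
    The paper's structured-light features are designed to yield a decoupled control law; the specific feature set cannot be reproduced from the abstract. As a point of reference, here is a minimal Python sketch (assuming NumPy) of the generic image-based law v = -λ L⁺(s - s*) on point features, which is the scheme such feature designs refine:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_velocity(s, s_star, depths, gain=0.5):
    """Camera velocity screw v = -gain * L^+ (s - s*), stacking one
    2x6 interaction matrix per feature point."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s, depths)])
    error = (np.asarray(s) - np.asarray(s_star)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: drive four observed points toward a centred square.
s      = [(0.12, 0.10), (-0.08, 0.11), (-0.09, -0.12), (0.10, -0.09)]
s_star = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
print(ibvs_velocity(s, s_star, depths=[0.5] * 4))
```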

    Automated pick-up of suturing needles for robotic surgical assistance

    Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate cancer that involves complete or nerve-sparing removal of the prostate tissue that contains cancer. After removal, the bladder neck is sutured directly to the urethra. This procedure, called urethrovesical anastomosis, is one of the most dexterity-demanding tasks during RALP. Two suturing instruments and a pair of needles are used in combination to perform a running stitch during urethrovesical anastomosis. While robotic instruments provide enhanced dexterity to perform the anastomosis, it is still highly challenging and difficult to learn. In this paper, we present a vision-guided needle grasping method for automatically grasping the needle that has been inserted into the patient prior to anastomosis. We aim to grasp the suturing needle in a position that avoids hand-offs and immediately enables the start of suturing. The full grasping process can be broken down into: a needle detection algorithm; an approach phase, where the surgical tool moves closer to the needle based on visual feedback; and a grasping phase through path planning based on observed surgical practice. Our experimental results show examples of successful autonomous grasping that has the potential to simplify RALP and decrease its operating time by automating a small component of urethrovesical anastomosis.
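
    The abstract decomposes grasping into detection, a visually guided approach, and a planned grasp. Below is a minimal sketch of that three-phase structure; all callbacks (detect_needle, tool_pose, move_tool, plan_grasp) are hypothetical stand-ins for the paper's components, not its actual interfaces:

```python
from enum import Enum, auto
import numpy as np

class Phase(Enum):
    DETECT = auto()
    APPROACH = auto()
    GRASP = auto()
    DONE = auto()

def grasp_needle(detect_needle, tool_pose, move_tool, plan_grasp, tol=1e-3):
    """Three-phase grasping loop; every callback is a hypothetical stand-in."""
    phase, target = Phase.DETECT, None
    while phase is not Phase.DONE:
        if phase is Phase.DETECT:
            target = detect_needle()              # needle pose from the vision system
            phase = Phase.APPROACH if target is not None else Phase.DETECT
        elif phase is Phase.APPROACH:
            error = target - tool_pose()          # visual feedback on the tool-needle offset
            if np.linalg.norm(error) < tol:
                phase = Phase.GRASP
            else:
                move_tool(0.5 * error)            # proportional approach step
        elif phase is Phase.GRASP:
            for waypoint in plan_grasp(target):   # grasp path based on observed practice
                move_tool(waypoint - tool_pose())
            phase = Phase.DONE
```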

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers that process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these controllers, both mono-sensor schemes and multi-sensor schemes that combine several sensors.

    Visual servoing of an autonomous helicopter in urban areas using feature tracking

    We present the design and implementation of a vision-based feature tracking system for an autonomous helicopter. Visual sensing is used to estimate the position and velocity of features in the image plane (urban features such as windows) in order to generate velocity references for the flight control. These vision-based references are then combined with GPS positioning references to navigate towards the features and then track them. We present results from experimental flight trials, performed on two UAV systems and under different conditions, that show the feasibility and robustness of our approach.
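
    The abstract says the image-based velocity references are combined with GPS references but does not give the combination rule. One plausible reading, sketched below, is a weighted blend of the two references in a common navigation frame; the gains k_img and k_gps and the weight w_visual are illustrative assumptions, not the paper's values:

```python
import numpy as np

def blended_velocity_reference(feat_offset, feat_velocity, gps_pos, waypoint,
                               k_img=0.8, k_gps=0.4, w_visual=0.7):
    """Blend a feature-tracking velocity reference with a GPS waypoint
    reference. Both references are assumed to be expressed in the same
    navigation frame; all gains are illustrative placeholders."""
    # Track the feature: cancel its offset, feed forward its motion.
    v_visual = -k_img * np.asarray(feat_offset) + np.asarray(feat_velocity)
    # Head toward the GPS waypoint.
    v_gps = k_gps * (np.asarray(waypoint) - np.asarray(gps_pos))
    return w_visual * v_visual + (1.0 - w_visual) * v_gps
```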

    Improving detection of surface discontinuities in visual-force control systems

    In this paper, a new approach to detecting surface discontinuities in a visual–force control task is described. The task consists of tracking a surface using combined visual and force information. In order to reposition the robot tool with respect to the surface, it is necessary to detect the surface discontinuities. This paper describes a new method to detect them using sensory information obtained from a force sensor, a camera and structured light. The method has proved to be more robust than previous systems, even in situations where high friction occurs.
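
    The abstract fuses force and structured-light cues to flag discontinuities but gives no detection rule. A minimal sketch of the fusion idea: declare a discontinuity when either the contact force or the projected stripe position in the image changes abruptly. The thresholds f_jump and s_jump are illustrative placeholders, not the paper's values:

```python
import numpy as np

def detect_discontinuity(force_history, stripe_history, f_jump=2.0, s_jump=5.0):
    """Flag a surface discontinuity when either sensor sees an abrupt change.

    force_history: recent normal-force samples (N).
    stripe_history: recent structured-light stripe positions (pixels).
    Thresholds are illustrative, not taken from the paper."""
    f = np.asarray(force_history, dtype=float)
    s = np.asarray(stripe_history, dtype=float)
    force_step = np.abs(np.diff(f)).max(initial=0.0)
    stripe_step = np.abs(np.diff(s)).max(initial=0.0)
    return force_step > f_jump or stripe_step > s_jump

# Example: a 3 N force jump between consecutive samples trips the detector.
print(detect_discontinuity([1.0, 1.1, 4.2, 4.3], [120, 121, 122, 123]))
```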

    Modelling the Xbox 360 Kinect for visual servo control applications

    A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, August 2016.
    There has been much interest in using the Microsoft Xbox 360 Kinect camera for visual servo control applications. It is a relatively cheap device with expected shortcomings. This work contributes to the practical considerations of using the Kinect for visual servo control applications. A comprehensive characterisation of the Kinect is synthesised from the existing literature and from the results of a nonlinear calibration procedure. The Kinect reduces the computational overhead of image processing stages such as pose estimation or depth estimation. It is limited by its practical depth range of 0.8 m to 3.5 m and by a depth resolution that degrades quadratically from 1.8 mm to 35 mm over that range. Since the Kinect uses an infra-red (IR) projector (a class 1 laser), it should not be used outdoors, due to IR saturation, and objects with non-IR-friendly surfaces should be avoided, due to IR refraction, absorption or specular reflection. Problems of task stability caused by invalid depth measurements in Kinect depth maps and by the practical depth range limitations can be reduced by preprocessing the depth maps and by activating classical visual servoing techniques when Kinect-based approaches are near task failure.
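
    The report's mitigation, depth map preprocessing plus a fallback to classical visual servoing, can be illustrated in a few lines. The sketch below masks out-of-range pixels (the Kinect reports 0 where it has no measurement) using the 0.8-3.5 m range quoted above; the 50% valid-pixel fallback trigger is an assumption, not a figure from the report:

```python
import numpy as np

def preprocess_kinect_depth(depth_m, near=0.8, far=3.5, min_valid=0.5):
    """Mask Kinect depth pixels outside the practical range and signal when
    to fall back to a classical, non-depth visual servoing law.
    min_valid is an assumed threshold, not taken from the report."""
    valid = (depth_m > near) & (depth_m < far)
    cleaned = np.where(valid, depth_m, np.nan)   # NaN marks unusable pixels
    fallback = valid.mean() < min_valid
    return cleaned, fallback

# Example: a synthetic 4x4 depth map with dropouts (zeros).
depth = np.array([[0.0, 1.2, 2.5, 4.0],
                  [1.0, 0.0, 2.2, 3.0],
                  [0.9, 1.5, 0.0, 2.8],
                  [1.1, 1.3, 2.0, 0.0]])
cleaned, fallback = preprocess_kinect_depth(depth)
print(fallback)   # False: most pixels are within range
```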

    Visual Control System for Robotic Welding


    Robotic assembly of complex planar parts: An experimental evaluation

    In this paper we present an experimental evaluation of automatic robotic assembly of complex planar parts. The torque-controlled DLR light-weight robot, equipped with an on-board camera (eye-in-hand configuration), is tasked with looking for given parts on a table, picking them up, and inserting them into the corresponding holes on a movable plate. Visual servoing techniques are used for fine positioning over the selected part/hole, while insertion is based on active compliance control of the robot and robust assembly planning in order to align the parts automatically with the hole. Execution of the complete task is validated through extensive experiments, and the performance of humans and of the robot is compared in terms of overall execution time.
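
    The abstract's division of labour, visual servoing for fine positioning followed by compliant insertion, suggests a two-stage loop like the hypothetical sketch below; servo_step, insertion_force, push and done stand in for the robot's actual controllers, and the thresholds are illustrative:

```python
import numpy as np

def assemble_part(servo_step, insertion_force, push, done,
                  pos_tol=1e-3, f_max=5.0):
    """Illustrative two-stage loop: visual servoing for fine positioning
    over the hole, then force-limited compliant insertion. All callbacks
    and thresholds are hypothetical stand-ins, not the paper's interfaces."""
    # Stage 1: image-based fine positioning over the selected part/hole.
    while True:
        error = servo_step()          # one servoing iteration, returns residual
        if np.linalg.norm(error) < pos_tol:
            break
    # Stage 2: compliant insertion; back off when contact force saturates,
    # letting the compliance realign the part with the hole.
    while not done():
        if np.linalg.norm(insertion_force()) > f_max:
            push(back_off=True)
        else:
            push(back_off=False)
```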

    Implementation of a Visual Servo Control in a Bi-Manual Collaborative Robot

    This project presents the application of visual servo control to an industrial human-like robot, both in a simulation environment and on a real platform. In a visual servo scheme, the control loop is closed by a vision sensor, usually a camera (or more than one, in the stereo approach). The camera acquires the image of a defined target, a control algorithm calculates the relative robot-target pose, and commands are continuously sent to the robot in order to position it as required. The pose calculation and controller algorithms have been written in C++. The work has been carried out through a sequence of stages that are presented in this document. The first part covers the basic theoretical ideas that support the design of the visual servo. It is composed of three main areas: computer vision, which deals mostly with the implementation of the vision sensor; robot kinematics, which makes it possible to define the equations describing the time evolution of the robot position, orientation, speed and joint values; and finally the merging of both areas, the visual servo itself, which makes up the control loop. The next section explains how the different tools and frameworks have been used to implement the control loop. Some of these tools are manufacturer proprietary programs; others are open source. There is a detailed description of how the simulation environment is set up, of the content of each of the blocks in the control loop, and a basic explanation of the manufacturer's program. The results show how the robot (simulated and real) converges to the relative set-point pose and is also able to track changes in the position and orientation of the target.
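
    The loop described above (measure the relative pose, command the robot toward it) is the classic pose-based visual servoing scheme. The project's controllers are in C++; as a language-agnostic reference, here is a minimal Python sketch of one common textbook formulation of the PBVS step, mapping a measured pose error to a velocity screw:

```python
import numpy as np

def pbvs_step(t_err, R_err, gain=0.5):
    """One pose-based visual servoing step: map the measured relative pose
    error (translation t_err, rotation matrix R_err) to a velocity screw.
    Sign conventions depend on how the error frame is defined; this follows
    one common textbook formulation."""
    # Extract the axis-angle vector theta*u from the rotation error matrix
    # (standard formula; not valid exactly at theta = pi).
    theta = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        theta_u = np.zeros(3)
    else:
        theta_u = theta / (2.0 * np.sin(theta)) * np.array([
            R_err[2, 1] - R_err[1, 2],
            R_err[0, 2] - R_err[2, 0],
            R_err[1, 0] - R_err[0, 1],
        ])
    # Exponential decrease of both translation and rotation errors.
    return np.concatenate([-gain * np.asarray(t_err), -gain * theta_u])

# Example: 5 cm offset along x, 10-degree rotation about z.
a = np.deg2rad(10.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
print(pbvs_step(np.array([0.05, 0.0, 0.0]), R))
```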