    Automated pick-up of suturing needles for robotic surgical assistance

    Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate cancer that involves complete or nerve-sparing removal of the prostate tissue that contains cancer. After removal, the bladder neck is sutured directly to the urethra. This procedure, called urethrovesical anastomosis, is one of the most dexterity-demanding tasks during RALP. Two suturing instruments and a pair of needles are used in combination to perform a running stitch during urethrovesical anastomosis. While robotic instruments provide enhanced dexterity to perform the anastomosis, it is still highly challenging and difficult to learn. In this paper, we present a vision-guided needle grasping method for automatically grasping a needle that has been inserted into the patient prior to anastomosis. We aim to automatically grasp the suturing needle in a position that avoids hand-offs and immediately enables the start of suturing. The full grasping process can be broken down into: a needle detection algorithm; an approach phase, in which the surgical tool moves closer to the needle based on visual feedback; and a grasping phase, with path planning based on observed surgical practice. Our experimental results show examples of successful autonomous grasping with the potential to simplify and decrease the operational time in RALP by assisting a small component of urethrovesical anastomosis.
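    As a rough illustration of the approach phase described above, the sketch below shows a minimal proportional image-feedback loop in Python. It is not the paper's implementation: the detection and robot-interface callables (get_needle_px, get_tool_px, send_image_velocity) are hypothetical placeholders for the needle detector and the tool controller.

        import numpy as np

        def approach_needle(get_needle_px, get_tool_px, send_image_velocity,
                            gain=0.5, tol_px=2.0, max_iters=200):
            """Approach phase: drive the tool toward the detected needle using
            image-space visual feedback, stopping once the pixel error is small
            enough to hand over to the grasping (path-planning) phase."""
            for _ in range(max_iters):
                error = get_needle_px() - get_tool_px()  # pixel error vector
                if np.linalg.norm(error) < tol_px:
                    return True                          # ready to grasp
                send_image_velocity(gain * error)        # proportional feedback
            return False                                 # did not converge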

    Vision Guided Force Control in Robotics

    One way to increase the flexibility of industrial robots in manipulation tasks is to integrate additional sensors into the control systems. Cameras are an example of such sensors, and in recent years there has been increased interest in vision-based control. However, it is clear that most manipulation tasks cannot be solved using position control alone, because of the risk of excessive contact forces. Therefore, it is attractive to combine vision-based position control with force feedback. In this thesis, we present a method for combining direct force control and visual servoing in the presence of unknown planar surfaces. The control algorithm involves a force feedback control loop and a vision-based reference trajectory as a feed-forward signal. The vision system is based on a constrained image-based visual servoing algorithm, using an explicit 3D reconstruction of the planar constraint surface. We show how calibration data calculated by a simple but efficient camera calibration method can be used in combination with force and position data to improve the reconstruction and reference trajectories. The chosen task involves force-controlled drawing on an unknown surface. The robot grasps a pen using visual servoing and uses the pen to draw lines between a number of points on a whiteboard. The force control keeps the contact force constant during the drawing. The method is validated through experiments carried out on a 6-degree-of-freedom ABB Irb-2000 industrial robot.
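    A minimal sketch of how force feedback and a vision-based feed-forward reference can be composed on a planar surface, assuming the plane's unit normal has already been reconstructed from the camera data; the function name and gain are illustrative, not taken from the thesis:

        import numpy as np

        def hybrid_step(f_measured, f_desired, v_feedforward, n_hat, kf=0.002):
            """One control step: regulate the contact force along the surface
            normal n_hat (unit vector) while following the vision-based
            reference velocity v_feedforward in the tangent plane."""
            v_force = kf * (f_desired - f_measured) * n_hat       # force loop
            v_tangent = v_feedforward - np.dot(v_feedforward, n_hat) * n_hat
            return v_force + v_tangent                            # commanded velocity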

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably. There is increasing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although there has been significant development in visual servoing, some challenges remain before it is fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto the 2D image, which occurs in the camera, creates one source of uncertainty in the system. Another source of uncertainty lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing the control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots, in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method can increase the system response speed and improve the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where the image features leave the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm can not only improve the system response speed and tracking performance of IBVS, but also ensure the success of servoing in the case of feature loss. Next, in order to deal with external disturbances and uncertainties due to the depth of the features, a third new control method is designed that combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, while the SMC deals with external disturbances and depth uncertainties. In the last stage of the thesis, a fourth new semi-off-line trajectory planning method is developed to perform IBVS tasks on a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles. The parameters of the velocity profile are then determined such that the profile takes the robot to its desired position, by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation caused by the camera's FOV. All the algorithms developed in the thesis are validated through tests on a 6-DOF Denso robot in an eye-in-hand configuration.
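    For context, the textbook IBVS control law that these methods build on maps the stacked interaction matrices of the point features to a camera twist, v = -lambda * L^+ (s - s*). The sketch below shows this classic formulation, not the thesis's adaptive, switching, or SMC variants:

        import numpy as np

        def interaction_matrix(x, y, Z):
            """Interaction matrix of a normalized image point (x, y) at
            estimated depth Z, relating feature motion to the camera twist."""
            return np.array([
                [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
                [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
            ])

        def ibvs_velocity(features, desired, depths, gain=0.5):
            """Classic IBVS law: v = -gain * pinv(L) @ (s - s*)."""
            L = np.vstack([interaction_matrix(x, y, Z)
                           for (x, y), Z in zip(features, depths)])
            error = (np.asarray(features) - np.asarray(desired)).ravel()
            return -gain * np.linalg.pinv(L) @ error  # 6-vector camera twist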

    Hybrid visual servoing with hierarchical task composition for aerial manipulation

    In this paper, a hybrid visual servoing scheme with a hierarchical task-composition control framework is described for aerial manipulation, i.e. for the control of an aerial vehicle endowed with a robot arm. The proposed approach suitably combines into a unique hybrid-control framework the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle is explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.
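    The hierarchical task composition mentioned here is commonly realized by projecting lower-priority tasks into the null space of higher-priority ones. A minimal two-task sketch of that classic prioritized scheme follows; the paper's dynamic smooth activation mechanism and underactuation handling are not reproduced:

        import numpy as np

        def task_priority_velocity(J1, e1, J2, e2, gain=1.0):
            """Joint velocities for two prioritized tasks: the secondary task
            (J2, e2) acts only in the null space of the primary Jacobian J1."""
            J1_pinv = np.linalg.pinv(J1)
            q1 = J1_pinv @ (gain * e1)                  # primary task velocity
            N1 = np.eye(J1.shape[1]) - J1_pinv @ J1     # null-space projector of J1
            q2 = np.linalg.pinv(J2 @ N1) @ (gain * e2 - J2 @ q1)
            return q1 + q2                              # q2 already lies in null(J1)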

    Near-Minimum Time Visual Servo Control Of An Underactuated Robotic Arm

    In industrial robotics, grasping an object is required to happen fast, which is possible when the position and orientation of the object are known a priori. However, if such information is unavailable and objects are spread randomly on a conveyor, it may be challenging to maintain the dexterity and speed with which the task is carried out. Nowadays, vision sensors are used to compute the position and orientation of an object and to reposition the robotic system accordingly. This technology has indirectly introduced a disparity in execution time that varies according to the nature of the control technique.

    Model-Based Visual Servoing Grasping of Objects Moving by Newtonian Dynamics

    Robot control systems are traditionally closed systems. With the aid of vision, visual feedback is used to guide the robot manipulator to the target in a similar manner as humans do. This hand-to-target task is fairly easy if the target is static in Cartesian space. However, if the target is in motion, a model of its dynamic behaviour is required in order for the robot to predict or track the target trajectory and intercept the target successfully. Once the necessary modeling is done, the framework becomes one of automatic control.

    In this master thesis, we present model-based visual servoing of a six degree-of-freedom (DOF) industrial robot through computer simulation. The objective of this thesis is to manoeuvre the robot to grasp a ball moving by Newtonian dynamics in an unattended and less structured three-dimensional environment.

    Two digital cameras are used cooperatively to capture images of the ball, from which the computer vision system generates the visual information. The accuracy of the visual information is essential to the robotic servoing control. The computer vision system detects the ball in image space, segments it from the background, and computes its image-space position as visual information. This visual information is used for 3D reconstruction of the ball's position in Cartesian space. The trajectory of the thrown ball is then modeled and predicted. Several candidate grasp positions in Cartesian space are predicted as the thrown ball travels towards the robot. At the same time, the inverse kinematics of the robot is computed to steer the robot to track the predicted grasp positions and grasp the ball when the error is small. In addition, the performance and robustness of this model-based prediction of the ball trajectory are verified through graphical analysis.
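    A minimal sketch of the trajectory modeling and prediction step, assuming triangulated 3D ball positions with timestamps are already available from the stereo vision system: a least-squares fit of projectile motion under gravity recovers the launch state, from which future grasp positions can be predicted. Names and details are illustrative, not taken from the thesis:

        import numpy as np

        G = np.array([0.0, 0.0, -9.81])  # gravity in the world frame (m/s^2)

        def fit_ballistic(times, positions):
            """Least-squares fit of p(t) = p0 + v0*t + 0.5*g*t^2 from observed
            (time, position) samples; returns initial position and velocity."""
            t = np.asarray(times, float)
            P = np.asarray(positions, float) - 0.5 * np.outer(t ** 2, G)
            A = np.column_stack([np.ones_like(t), t])   # unknowns: p0, v0
            coef, *_ = np.linalg.lstsq(A, P, rcond=None)
            return coef[0], coef[1]

        def predict_position(p0, v0, t):
            """Predicted ball position at time t under projectile motion."""
            return p0 + v0 * t + 0.5 * G * t ** 2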

    Automation of tissue piercing using circular needles and vision guidance for computer aided laparoscopic surgery

    Despite the fact that minimally invasive robotic surgery provides many advantages for patients, such as reduced tissue trauma and shorter hospitalization, complex tasks (e.g. tissue piercing or knot-tying) are still time-consuming, error-prone, and lead to quicker fatigue of the surgeon. Automating these recurrent tasks could greatly reduce total surgery time for patients and relieve the surgeon, who can then focus on higher-level challenges. This work tackles the problem of autonomous tissue piercing in robot-assisted laparoscopic surgery with a circular needle and general-purpose surgical instruments. To command the instruments to an incision point, the surgeon uses a laser pointer to indicate the stitching area. Precise positioning of the needle is obtained by means of a switching visual servoing approach, and the subsequent stitch is performed in a circular motion. Index Terms—robot surgery, minimally invasive surgery, tissue piercing, visual servoing
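    A minimal sketch of generating the circular stitch motion, assuming the incision point, needle radius, and rotation axis have been determined by the visual servoing stage; the parameterization and names are illustrative, not the authors' implementation:

        import numpy as np

        def circular_stitch_waypoints(center, radius, axis, start_angle, sweep, n=50):
            """Needle-tip waypoints on a circular arc of the given radius about
            `axis` through `center`, sweeping `sweep` radians from `start_angle`."""
            center = np.asarray(center, float)
            axis = np.asarray(axis, float)
            axis /= np.linalg.norm(axis)
            u = np.cross(axis, [1.0, 0.0, 0.0])          # in-plane basis vector
            if np.linalg.norm(u) < 1e-6:                 # axis parallel to x: use y
                u = np.cross(axis, [0.0, 1.0, 0.0])
            u /= np.linalg.norm(u)
            w = np.cross(axis, u)                        # completes the basis
            angles = start_angle + np.linspace(0.0, sweep, n)
            return [center + radius * (np.cos(a) * u + np.sin(a) * w)
                    for a in angles]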