26 research outputs found

    Automation of tissue piercing using circular needles and vision guidance for computer aided laparoscopic surgery

    Although minimally invasive robotic surgery offers many advantages for patients, such as reduced tissue trauma and shorter hospitalization, complex tasks (e.g., tissue piercing or knot-tying) remain time-consuming, error-prone, and fatiguing for the surgeon. Automating these recurrent tasks could greatly reduce total surgery time and free the surgeon to focus on higher-level challenges. This work tackles the problem of autonomous tissue piercing in robot-assisted laparoscopic surgery with a circular needle and general-purpose surgical instruments. To command the instruments to an incision point, the surgeon uses a laser pointer to indicate the stitching area. Precise positioning of the needle is obtained by means of a switching visual servoing approach, and the subsequent stitch is performed in a circular motion. Index terms: robot surgery, minimally invasive surgery, tissue piercing, visual servoing.
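
    As a rough illustration of the switching idea (not the paper's implementation), the sketch below alternates between a coarse position-based stage and a fine image-based visual servoing (IBVS) stage depending on the remaining feature error. The gain, switching threshold, and interfaces are assumptions.

```python
import numpy as np

LAMBDA = 0.5           # proportional servo gain (assumed)
SWITCH_PIXEL_ERR = 40  # hand off to fine IBVS below this error (px, assumed)

def ibvs_velocity(features, targets, L):
    """Fine stage, classic IBVS law: v = -lambda * L^+ (s - s*)."""
    error = (features - targets).ravel()
    return -LAMBDA * np.linalg.pinv(L) @ error, np.linalg.norm(error)

def pbvs_velocity(tool_pose, goal_pose):
    """Coarse stage: Cartesian proportional motion toward the goal pose."""
    return -LAMBDA * (tool_pose - goal_pose)

def servo_step(features, targets, L, tool_pose, goal_pose):
    """Switch between the two controllers based on the image error."""
    v_fine, err = ibvs_velocity(features, targets, L)
    if err > SWITCH_PIXEL_ERR:
        return pbvs_velocity(tool_pose, goal_pose)  # far: coarse positioning
    return v_fine                                   # near: fine refinement
```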

    Automated pick-up of suturing needles for robotic surgical assistance

    Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate cancer that involves complete or nerve-sparing removal of the prostate tissue that contains cancer. After removal, the bladder neck is subsequently sutured directly to the urethra. This procedure, called urethrovesical anastomosis, is one of the most dexterity-demanding tasks during RALP. Two suturing instruments and a pair of needles are used in combination to perform a running stitch during urethrovesical anastomosis. While robotic instruments provide enhanced dexterity to perform the anastomosis, it is still highly challenging and difficult to learn. In this paper, we present a vision-guided needle grasping method for automatically grasping the needle that has been inserted into the patient prior to anastomosis. We aim to automatically grasp the suturing needle in a position that avoids hand-offs and immediately enables the start of suturing. The full grasping process can be broken down into: a needle detection algorithm; an approach phase, where the surgical tool moves closer to the needle based on visual feedback; and a grasping phase, executed through path planning based on observed surgical practice. Our experimental results show examples of successful autonomous grasping that has the potential to simplify RALP and decrease its operational time by assisting a small component of urethrovesical anastomosis.
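
    As a rough sketch of this three-phase decomposition (and not the authors' actual pipeline), the following outlines the detect-approach-grasp loop; the detect_needle, tool, and plan_grasp_path interfaces are hypothetical placeholders, as is the 2 mm approach tolerance.

```python
import numpy as np

APPROACH_TOL = 0.002  # assumed 2 mm approach tolerance

def grasp_needle(detect_needle, tool, plan_grasp_path, max_iters=500):
    """Three-phase grasp: detect -> visual approach -> planned grasp.

    detect_needle() -> (6,) needle pose estimate; tool exposes position(),
    pose(), move_toward(), move_to(), close_gripper(); plan_grasp_path()
    yields waypoints. All three are hypothetical interfaces.
    """
    needle_pose = detect_needle()                       # phase 1: detection
    for _ in range(max_iters):                          # phase 2: approach
        if np.linalg.norm(tool.position() - needle_pose[:3]) <= APPROACH_TOL:
            break
        needle_pose = detect_needle()                   # closed-loop re-detect
        tool.move_toward(needle_pose)
    for wp in plan_grasp_path(needle_pose, tool.pose()):  # phase 3: grasp
        tool.move_to(wp)
    tool.close_gripper()
```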

    A Vision-guided Dual Arm Sewing System for Stent Graft Manufacturing

    This paper presents an intelligent sewing system for personalized stent graft manufacturing, a challenging sewing task that is currently performed manually. Inspired by medical suturing robots, we adopted a single-sided sewing technique using a curved needle to perform the task of sewing stents onto fabric. A motorized surgical needle driver was attached to a 7-DoF robot arm to manipulate the needle, with a second robot controlling the position of the mandrel. A learning-from-demonstration approach was used to program the robot to sew stents onto fabric. The demonstrated sewing skill was segmented into several phases, each of which was encoded with a Gaussian Mixture Model. Generalized sewing movements were then generated from these models and used for task execution. During execution, a stereo vision system guided the robots to adjust the learnt movements according to the needle pose. Two experiments with this system are presented here, and the results show that it can robustly perform the sewing task and adapt to various needle poses. The accuracy of the sewing system was within 2 mm.
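
    A minimal sketch of the encoding and reproduction steps, assuming time-indexed position samples: each phase's demonstrations are fitted with a Gaussian Mixture Model, and a generalized movement is recovered with Gaussian Mixture Regression by conditioning on time. The component count and data shapes are assumptions, not the paper's values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def encode_phase(demos, n_components=5):
    """demos: list of (T, D) position arrays; fits a GMM over [t, x]."""
    data = np.vstack([
        np.column_stack([np.linspace(0, 1, len(d)), d]) for d in demos
    ])
    return GaussianMixture(n_components, covariance_type='full').fit(data)

def reproduce(gmm, n_steps=200):
    """Gaussian Mixture Regression: the generalized movement E[x | t]."""
    D = gmm.means_.shape[1] - 1
    out = np.zeros((n_steps, D))
    for i, ti in enumerate(np.linspace(0, 1, n_steps)):
        # responsibility of each component for this time step
        h = np.array([
            w * np.exp(-0.5 * (ti - m[0])**2 / c[0, 0]) / np.sqrt(c[0, 0])
            for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)
        ])
        h /= h.sum()
        # blend the per-component conditional means mu_x + S_xt/S_tt (t - mu_t)
        for h_k, m, c in zip(h, gmm.means_, gmm.covariances_):
            out[i] += h_k * (m[1:] + c[1:, 0] / c[0, 0] * (ti - m[0]))
    return out
```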

    A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts

    This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design comprising a (personalized) mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of the patient, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module coordinates the multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially for flexible production of customized products where bimanual or multi-robot cooperation is required. Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial Informatics. Keywords: modularity, medical device customization, multi-robot system, robot learning, visual servoing, robot sewing.
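
    As an illustrative sketch only, the following shows how a vision module might gate which robot executes each learned motion segment during a stitch cycle; every interface here is a hypothetical placeholder rather than the paper's architecture.

```python
from enum import Enum

class Holder(Enum):
    SEWING_ARM = 0
    MANDREL_ARM = 1

def sewing_cycle(vision, sewing_arm, mandrel_arm, segments):
    """Run one stitch cycle. `segments` are motion segments reproduced
    from the learned statistical model, each tagged with an owning arm."""
    for segment in segments:
        needle_pose = vision.estimate_needle_pose()  # stereo estimate
        adapted = segment.adapt_to(needle_pose)      # adjust learned motion
        if segment.owner is Holder.SEWING_ARM:
            sewing_arm.execute(adapted)
            mandrel_arm.hold_still()                 # keep workspace clear
        else:
            mandrel_arm.execute(adapted)             # reposition the mandrel
            sewing_arm.hold_still()
```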

    Vision-Based Autonomous Control in Robotic Surgery

    Robotic surgery has completely changed surgical procedures. Enhanced dexterity, ergonomics, motion scaling, and tremor filtering are well-known advantages introduced with respect to classical laparoscopy. In the past decade, robotics has played a fundamental role in Minimally Invasive Surgery (MIS), in which the da Vinci robotic system (Intuitive Surgical Inc., Sunnyvale, CA) is the most widely used system for robot-assisted laparoscopic procedures. Robots also have great potential in microsurgical applications, where human limits are critical and sub-millimetric surgical gestures could benefit enormously from motion scaling and tremor compensation. However, surgical robots still lack advanced assistive control methods that could notably support the surgeon's activity and perform surgical tasks autonomously for a high quality of intervention. In this scenario, images are the main feedback the surgeon can use to operate correctly in the surgical site. Therefore, in view of the increasing autonomy in surgical robotics, vision-based techniques play an important role and can arise by extending computer vision algorithms to surgical scenarios. Moreover, many surgical tasks could benefit from the application of advanced control techniques, allowing the surgeon to work under less stressful conditions and to perform surgical procedures with more accuracy and safety. The thesis starts from these topics, providing surgical robots with the ability to perform complex tasks and helping the surgeon skillfully manipulate the robotic system to meet the above requirements. An increase in safety and a reduction in mental workload are achieved through the introduction of active constraints, which can prevent the surgical tool from crossing a forbidden region and, similarly, generate constrained motion to guide the surgeon along a specific path or to accomplish autonomous robotic tasks. This leads to the development of a vision-based method for robot-aided dissection that allows the control algorithm to autonomously adapt to environmental changes during the surgical intervention using stereo image processing. Computer vision is exploited to define a surgical-tool collision avoidance method that uses Forbidden Region Virtual Fixtures by rendering a repulsive force to the surgeon. Advanced control techniques based on an optimization approach are developed, allowing multiple-task execution with task definitions encoded through Control Barrier Functions (CBFs) and enhancing a haptic-guided teleoperation system during suturing procedures. The proposed methods are tested on different robotic platforms, including the da Vinci Research Kit (dVRK) and a new microsurgical robotic platform. Finally, the integration of new sensors and instruments into surgical robots is considered, including a multi-functional tool for dexterous tissue manipulation and different visual sensing technologies.
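
    As a hedged illustration of one of these ideas, the sketch below implements a minimal control-barrier-function safety filter for a spherical forbidden-region virtual fixture: the commanded tool velocity is minimally modified whenever it would violate the barrier condition h_dot + alpha*h >= 0. The sphere model, gain, and closed-form single-constraint solution are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def cbf_filter(x, u_des, center, radius, alpha=5.0):
    """Min-norm CBF filter for one spherical constraint.

    Barrier h(x) = ||x - c||^2 - r^2 >= 0 keeps the tool tip outside the
    forbidden sphere; the single-constraint QP has a closed-form solution.
    """
    d = x - center
    h = d @ d - radius**2          # barrier value (>= 0 means safe)
    grad_h = 2.0 * d               # dh/dx
    residual = grad_h @ u_des + alpha * h
    if residual >= 0.0:            # desired velocity is already safe
        return u_des
    # project out just enough of u_des to satisfy the constraint
    return u_des - residual * grad_h / (grad_h @ grad_h)

# Example: a tool tip heading straight for the forbidden region is deflected.
x = np.array([0.10, 0.0, 0.0])
u_safe = cbf_filter(x, np.array([-1.0, 0.0, 0.0]),
                    center=np.zeros(3), radius=0.05)
```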

    Dynamic Gesture Recognition Using a Smart Glove in Hand-Assisted Laparoscopic Surgery

    This paper presents a methodology for movement recognition in hand-assisted laparoscopic surgery using a textile-based sensing glove. The aim is to recognize the commands given by the surgeon's hand inside the patient's abdominal cavity in order to guide a collaborative robot. The glove, which incorporates piezoresistive sensors, continuously captures the degree of flexion of the surgeon's fingers. These data are analyzed throughout the surgical operation using an algorithm that detects and recognizes a set of defined movements as commands for the collaborative robot. Hand movement recognition is not an easy task, however, because of the high variability in the motion patterns of different people and situations. The data captured by the sensing glove are analyzed using the following methodology. First, the patterns of the selected movements are defined. Then, the movement parameters for each person are extracted. The parameters concerning bending speed and execution time are modeled in a pre-phase, in which all the information needed for subsequent detection during motion execution is extracted. The results obtained with 10 volunteers show a high degree of precision and recall.
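
    A minimal sketch of the per-user recognition idea, assuming each command is stored as a fixed-length flexion template extracted for that user in the pre-phase; the normalized-distance matching and threshold below are illustrative assumptions, not the paper's classifier.

```python
import numpy as np

def detect_command(window, templates, threshold=0.15):
    """window: (T, n_fingers) flexion samples from the glove; templates:
    dict mapping command name -> (T, n_fingers) per-user reference pattern
    (both resampled to a common length T). Returns the best-matching
    command, or None if nothing matches well enough."""
    best, best_dist = None, np.inf
    for name, ref in templates.items():
        # z-normalize both patterns to remove per-user amplitude offsets
        w = (window - window.mean(0)) / (window.std(0) + 1e-9)
        r = (ref - ref.mean(0)) / (ref.std(0) + 1e-9)
        dist = np.mean((w - r) ** 2)
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist < threshold else None  # None = no command
```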