73 research outputs found

    Visual servoing of a robotic endoscope holder based on surgical instrument tracking

    No full text
    We propose an image-based control for a robotic endoscope holder during laparoscopic surgery. Our aim is to provide more comfort to the practitioner during surgery by automatically positioning the endoscope at his request. To do so, we propose to maintain one or more instruments roughly at the center of the laparoscopic image through different command modes. The originality of this method relies on the direct use of the endoscopic image and the absence of artificial markers added to the instruments. The application is validated on a test bench with a commercial robotic endoscope holder.
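
    A minimal sketch of the centering behaviour this abstract describes, assuming a calibrated camera, a rough depth estimate and an instrument-tip detector provided elsewhere; it is an illustrative image-based proportional law, not the authors' implementation.

```python
# Illustrative image-based visual-servoing law: drive the detected instrument-tip
# pixel towards the image centre. Detector and robot velocity interface are assumed.
import numpy as np

def centering_velocity(tip_px, image_size, focal_px, depth_m, gain=0.5):
    """Return a (vx, vy) camera translational velocity that recentres the tip.

    tip_px     : (u, v) pixel coordinates of the tracked instrument tip
    image_size : (width, height) of the endoscopic image in pixels
    focal_px   : focal length in pixels (assumed from calibration)
    depth_m    : rough tip depth along the optical axis, in metres
    gain       : proportional gain of the control law (1/s)
    """
    centre = np.array(image_size, dtype=float) / 2.0
    # Normalised image-plane error between the tip and the image centre
    error = (np.asarray(tip_px, dtype=float) - centre) / focal_px
    # For a pure camera translation, a point feature moves approximately as
    # de/dt = -(1/Z) * v, so v = gain * Z * error recentres the feature.
    return gain * depth_m * error

# Example: tip detected at (520, 210) in a 640x480 image
v_xy = centering_velocity((520, 210), (640, 480), focal_px=800.0, depth_m=0.08)
print(v_xy)  # lateral camera velocity (m/s) to send to the endoscope holder
```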

    Computer- and robot-assisted Medical Intervention

    Full text link
    Medical robotics includes assistive devices used by the physician in order to make his/her diagnostic or therapeutic practices easier and more efficient. This chapter focuses on such systems. It introduces the general field of Computer-Assisted Medical Interventions, its aims, its different components and describes the place of robots in that context. The evolutions in terms of general design and control paradigms in the development of medical robots are presented and issues specific to that application domain are discussed. A view of existing systems, on-going developments and future trends is given. A case-study is detailed. Other types of robotic help in the medical environment (such as for assisting a handicapped person, for rehabilitation of a patient or for replacement of some damaged/suppressed limbs or organs) are out of the scope of this chapter. Comment: Handbook of Automation, Shimon Nof (Ed.) (2009) 000-00

    Towards Exoscope Automation in Neurosurgery: A Markerless Visual-Servoing Approach

    Get PDF
    Exoscopes are a promising tool for neurosurgeons, offering improved visualisation and ergonomics compared with traditional surgical microscopes. They consist of an external scope that projects the surgical field onto a 2D or 3D monitor, providing a wider field of view (FoV) and better access to the surgical site. Despite these advantages, exoscopes present some limitations, such as the need for manual or foot-joystick repositioning, which can disrupt the flow of the procedure and increase the risk of user error. In this study, a markerless visual-servoing approach for autonomous exoscope control is proposed to address these limitations, enhance ergonomics and reduce the physical and cognitive load compared with traditional joystick control. The system uses visual information from the operating field to control the exoscope, eliminating the need for markers or additional tracking devices. The proposed approach was validated using a 7-DOF robotic manipulator with a stereo camera in an eye-in-hand configuration. Results showed that the system achieved 89% accuracy in detecting the target and tracked its movement with an error ranging from 0.50 ± 0.17 cm for low-speed movements to 1.38 ± 0.73 cm for high-speed movements. The proposed system also demonstrated improved efficiency, with a shorter execution time of 72.07 ± 19.36 s compared with 106.52 ± 18.50 s for the foot-joystick control. Additionally, the time out of the FoV was significantly higher in the joystick control mode, and the frequency of appearance of the instrument in the centre of the image was higher when using the proposed system. The NASA TLX results indicated a lower physical and cognitive load compared with the joystick-based control modality.
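
    A rough sketch of how a stereo, eye-in-hand setup of this kind could localise the target and steer the exoscope; the focal length, baseline and stand-off distance below are illustrative assumptions, not values from the paper.

```python
# Triangulate a detected target from a calibrated, rectified stereo pair and
# compute a proportional Cartesian velocity that keeps it on the optical axis.
# The target detector and the robot interface are placeholders.
import numpy as np

def triangulate(u_left, u_right, v, fx, baseline_m, cx, cy):
    """Depth from horizontal disparity of a rectified stereo pair (assuming fx ~ fy)."""
    disparity = float(u_left - u_right)
    z = fx * baseline_m / disparity          # depth along the optical axis
    x = (u_left - cx) * z / fx               # lateral offset
    y = (v - cy) * z / fx
    return np.array([x, y, z])

def exoscope_velocity(target_cam, standoff_m=0.30, gain=0.4):
    """Proportional velocity: centre the target and hold an assumed stand-off."""
    desired = np.array([0.0, 0.0, standoff_m])
    return gain * (target_cam - desired)

target = triangulate(u_left=700, u_right=640, v=400, fx=1000.0,
                     baseline_m=0.05, cx=640, cy=360)
print(exoscope_velocity(target))  # (vx, vy, vz) command for the 7-DOF manipulator
```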

    Automation of tissue piercing using circular needles and vision guidance for computer aided laparoscopic surgery

    Full text link
    Despite the fact that minimally invasive robotic surgery provides many advantages for patients, such as reduced tissue trauma and shorter hospitalization, complex tasks (e.g. tissue piercing or knot-tying) are still time-consuming, error-prone and lead to quicker fatigue of the surgeon. Automating these recurrent tasks could greatly reduce total surgery time for patients and relieve the surgeon, who can then focus on higher-level challenges. This work tackles the problem of autonomous tissue piercing in robot-assisted laparoscopic surgery with a circular needle and general-purpose surgical instruments. To command the instruments to an incision point, the surgeon uses a laser pointer to indicate the stitching area. Precise positioning of the needle is obtained by means of a switching visual servoing approach, and the subsequent stitch is performed in a circular motion. Index Terms: robot surgery, minimally invasive surgery, tissue piercing, visual servoing
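
    An illustrative sketch of the circular stitching motion mentioned above: the needle tip is driven along an arc of the needle's own circle about the incision point. The frame conventions and needle radius are assumptions, not the paper's parameters.

```python
# Generate needle-tip waypoints along a circular arc about a rotation axis,
# as a simple stand-in for the circular piercing motion.
import numpy as np

def circular_stitch_waypoints(centre, axis, radius_m, start_angle, sweep, n=20):
    """Tip positions along a circular arc around `axis` through `centre`.

    centre      : 3D centre of the needle circle (near the incision point)
    axis        : rotation axis, normal to the needle plane
    radius_m    : needle radius, e.g. 0.012 for an assumed 12 mm circular needle
    start_angle : angle where the tip first meets the tissue (rad)
    sweep       : total rotation driving the needle through the tissue (rad)
    """
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    # Build an orthonormal basis (u, w) spanning the needle plane
    tmp = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, tmp); u /= np.linalg.norm(u)
    w = np.cross(axis, u)
    angles = start_angle + np.linspace(0.0, sweep, n)
    return [np.asarray(centre, float) + radius_m * (np.cos(a) * u + np.sin(a) * w)
            for a in angles]

# Example: half-circle pass of a 12 mm needle about an axis parallel to the tissue
wps = circular_stitch_waypoints(centre=[0.0, 0.0, 0.10], axis=[0.0, 1.0, 0.0],
                                radius_m=0.012, start_angle=0.0, sweep=np.pi)
```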

    Visual servoing-based camera control for the da Vinci Surgical System

    Get PDF
    Minimally Invasive Surgery (MIS)—which is a very beneficial technique to the patient but can be challenging to the surgeon—includes endoscopic camera handling by an assistant (traditional MIS) or a robotic arm under the control of the operator (Robot-Assisted MIS, RAMIS). Since in the case of RAMIS the endoscopic image is the sole sensory input, it is essential to keep the surgical tools in the field-of-view of the camera for patient safety reasons. Based on the endoscopic images, the movement of the endoscope holder arm can be automated by visual servoing techniques, which can reduce the risk of medical error. In this paper, we propose a marker-based visual servoing technique for automated camera positioning in the case of RAMIS. The method was validated on the research-enhanced da Vinci Surgical System. The implemented method is available at: https://github.com/ABC-iRobotics/irob-saf/tree/visual servoin
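
    A small sketch of the camera-centering idea behind such a system, assuming marker detection is provided elsewhere; the dead-band and gain are illustrative and not taken from the cited implementation.

```python
# Keep the detected instrument markers in view by steering the endoscope so the
# midpoint of the marker positions stays near the image centre, with a dead-band
# to avoid continuous camera motion.
import numpy as np

def camera_correction(marker_px, image_size, deadband_frac=0.15, gain=0.3):
    """Return a normalised pan/tilt correction for the camera-holder arm.

    marker_px     : list of (u, v) pixel positions, one per detected marker
    image_size    : (width, height) of the endoscopic image
    deadband_frac : no correction while the midpoint stays within this fraction
                    of the image around the centre
    """
    size = np.array(image_size, float)
    midpoint = np.mean(np.asarray(marker_px, float), axis=0)
    error = (midpoint - size / 2.0) / size      # normalised to roughly [-0.5, 0.5]
    if np.all(np.abs(error) < deadband_frac):
        return np.zeros(2)                      # tools already well framed
    return -gain * error                        # correction that re-centres the tools

print(camera_correction([(900, 200), (1100, 260)], (1280, 720)))
```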

    Robot Autonomy for Surgery

    Full text link
    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. There are several important advantages of automation in surgery, which include increasing precision of care due to sub-millimeter robot control, real-time utilization of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities in interventions that are too difficult or go beyond the skills of a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Medical image computing and computer-aided medical interventions applied to soft tissues. Work in progress in urology

    Full text link
    Until recently, Computer-Aided Medical Interventions (CAMI) and Medical Robotics have focused on rigid, non-deformable anatomical structures. Nowadays, special attention is paid to soft tissues, which raise complex issues due to their mobility and deformation. Minimally invasive digestive surgery was probably one of the first fields where soft tissues were handled, through the development of simulators, tracking of anatomical structures and specific assistance robots. However, other clinical domains, for instance urology, are also concerned. Indeed, laparoscopic surgery, new tumour destruction techniques (e.g. HIFU, radiofrequency or cryoablation), increasingly early detection of cancer, and the use of interventional and diagnostic imaging modalities have recently opened new challenges for the urologist and for scientists involved in CAMI. Over the last five years, this has resulted in a very significant increase in research and development of computer-aided urology systems. In this paper, we describe the main problems related to computer-aided diagnosis and therapy of soft tissues and give a survey of the different types of assistance offered to the urologist: robotization, image fusion, and surgical navigation. Both research projects and operational industrial systems are discussed.

    Autonomous pick-and-place using the dVRK.

    Get PDF
    PURPOSE: Robotic-assisted partial nephrectomy (RAPN) is a tissue-preserving approach to treating renal cancer, where ultrasound (US) imaging is used for intra-operative identification of tumour margins and localisation of blood vessels. With the da Vinci Surgical System (Sunnyvale, CA), the US probe is inserted through an auxiliary access port, grasped by the robotic tool and moved over the surface of the kidney. Images from the US probe are displayed separately from the surgical-site video within the surgical console, leaving the surgeon to interpret and co-register the information, which is challenging and complicates the procedural workflow. METHODS: We introduce a novel software architecture to support a hardware soft robotic rail designed to automate intra-operative US acquisition. As a preliminary step towards complete task automation, we automatically grasp the rail and position it on the tissue surface so that the surgeon is then able to manually manipulate the US probe along it. RESULTS: A preliminary clinical study, involving five surgeons, was carried out to evaluate the potential performance of the system. Results indicate that the proposed semi-autonomous approach reduced the time needed to complete a US scan compared to manual tele-operation. CONCLUSION: Procedural automation can be an important workflow-enhancement functionality in future robotic surgery systems. We have presented a preliminary study on semi-autonomous US imaging, which could support more efficient data acquisition.
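
    A generic sketch of the grasp-and-position step described in METHODS, with assumed clearances and waypoints rather than the actual dVRK trajectories.

```python
# Approach the rail from above, grasp it, lift, and place it on the tissue
# surface; each waypoint pairs a position with a gripper-open flag.
import numpy as np

APPROACH_CLEARANCE_M = 0.02      # assumed vertical clearance above grasp/place points

def grasp_and_place_waypoints(rail_grasp_point, tissue_place_point):
    """Ordered (position, gripper_open) waypoints for a simple pick-and-place."""
    g = np.asarray(rail_grasp_point, float)
    p = np.asarray(tissue_place_point, float)
    up = np.array([0.0, 0.0, APPROACH_CLEARANCE_M])
    return [
        (g + up, True),    # pre-grasp above the rail, gripper open
        (g,      True),    # descend to the rail
        (g,      False),   # close the gripper on the rail
        (g + up, False),   # lift
        (p + up, False),   # move above the placement point on the kidney surface
        (p,      False),   # lower onto the tissue
        (p,      True),    # release so the surgeon can slide the US probe along it
    ]
```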

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Get PDF
    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
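
    A minimal sketch of one way the gaze-selection step could work, using dwell-time confirmation over screen regions; the regions and timing below are hypothetical, not the thesis implementation.

```python
# Confirm a selection when the surgeon's gaze dwells on an instrument's screen
# region long enough; the robot scrub nurse would then deliver that instrument.
import time

REGIONS = {                      # hypothetical screen-space regions for selectable items
    "scalpel":  (0.00, 0.00, 0.25, 0.20),   # (x_min, y_min, x_max, y_max), normalised
    "forceps":  (0.25, 0.00, 0.50, 0.20),
    "scissors": (0.50, 0.00, 0.75, 0.20),
}
DWELL_S = 1.0                    # assumed dwell time needed to confirm a selection

class DwellSelector:
    def __init__(self):
        self._candidate, self._since = None, None

    def update(self, gaze_xy, now=None):
        """Feed one gaze sample (normalised screen coords); return a selection or None."""
        now = time.monotonic() if now is None else now
        hit = next((name for name, (x0, y0, x1, y1) in REGIONS.items()
                    if x0 <= gaze_xy[0] <= x1 and y0 <= gaze_xy[1] <= y1), None)
        if hit != self._candidate:               # gaze moved to a different region
            self._candidate, self._since = hit, now
            return None
        if hit is not None and now - self._since >= DWELL_S:
            self._candidate, self._since = None, None
            return hit                           # confirmed: request this instrument
        return None
```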

    Smart Camera Robotic Assistant for Laparoscopic Surgery

    Get PDF
    In the last decades, laparoscopic surgery has become a daily practice in operating rooms worldwide, and its evolution is tending towards less invasive techniques. In this scenario, robotics has found a wide field of application, from slave robotic systems that replicate the movements of the surgeon to autonomous robots able to assist the surgeon in certain maneuvers or to perform autonomous surgical tasks. However, these systems require the direct supervision of the surgeon, and their capacity for making decisions and adapting to dynamic environments is very limited. This PhD dissertation presents the design and implementation of a smart camera robotic assistant to collaborate with the surgeon in a real surgical environment. First, it presents the design of a novel camera robotic assistant able to augment the capacities of current vision systems. This robotic assistant is based on an intra-abdominal camera robot, which is completely inserted into the patient’s abdomen and can be freely moved along the abdominal cavity by means of magnetic interaction with an external magnet. To provide the camera with autonomy of motion, the external magnet is coupled to the end effector of a robotic arm, which controls the shift of the camera robot along the abdominal wall. This way, the robotic assistant proposed in this dissertation has six degrees of freedom, which provide a wider field of view compared to traditional vision systems as well as different perspectives of the operating area. On the other hand, the intelligence of the system is based on a cognitive architecture specially designed for autonomous collaboration with the surgeon in real surgical environments. The proposed architecture simulates the behavior of a human assistant, with a natural and intuitive human-robot interface for the communication between the robot and the surgeon. The cognitive architecture also includes learning mechanisms to adapt the behavior of the robot to the different ways of working of surgeons, and to improve the robot behavior through experience, in a similar way as a human assistant would do. The theoretical concepts of this dissertation have been validated both through in-vitro experimentation in the labs of medical robotics of the University of Malaga and through in-vivo experimentation with pigs in the IACE Center (Instituto Andaluz de Cirugía Experimental), performed by expert surgeons.
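
    An illustrative sketch of the magnetic-guidance geometry described above: the internal camera follows the external magnet, so a new viewpoint is reached by placing the robot-held magnet at an offset outward from the target point on the abdominal wall. The wall thickness and clearance values are assumptions, not the dissertation's parameters.

```python
# Compute a target pose for the end effector holding the external magnet so that
# the magnetically coupled intra-abdominal camera is dragged to a new viewpoint.
import numpy as np

WALL_THICKNESS_M = 0.02          # assumed abdominal wall thickness
COUPLING_MARGIN_M = 0.005        # assumed extra clearance for the external magnet

def external_magnet_target(camera_target_on_wall, wall_normal_outward):
    """Pose (position, pointing axis) for the robot-held external magnet.

    camera_target_on_wall : desired 3D point for the camera on the inner wall
    wall_normal_outward   : normal of the abdominal wall, pointing outwards
    """
    n = np.asarray(wall_normal_outward, float)
    n = n / np.linalg.norm(n)
    offset = WALL_THICKNESS_M + COUPLING_MARGIN_M
    magnet_position = np.asarray(camera_target_on_wall, float) + offset * n
    # The magnet's axis points back through the wall towards the camera
    return magnet_position, -n

pos, axis = external_magnet_target([0.05, 0.10, 0.00], [0.0, 0.0, 1.0])
print(pos, axis)
```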