
    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes, to facilitate the capture of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning that is able to deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can be easily extended to other probe-based imaging modalities.
    Comment: 7 pages, 5 figures, ICRA 202
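
    As an illustration of the trajectory-update step described in this abstract, the sketch below shows one common way such a projective-geometry update can be realised: matching features between the reference frame and the current frame, estimating a homography, and re-projecting the scan path. This is a minimal illustration assuming an OpenCV pipeline, not the authors' implementation; all function and variable names are hypothetical.

```python
# Hypothetical sketch (not the paper's code): re-project a scan trajectory
# defined on a reference frame into the current frame via a homography.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def update_trajectory(ref_img, cur_img, ref_traj_px):
    """ref_traj_px: (N, 2) trajectory points, in pixels, on the reference frame.
    Returns the trajectory mapped into the current frame."""
    kp1, des1 = orb.detectAndCompute(ref_img, None)
    kp2, des2 = orb.detectAndCompute(cur_img, None)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects feature matches that do not fit a single plane-to-plane map.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    pts = ref_traj_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```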

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. There are several important advantages of automation in surgery, which include increased precision of care due to sub-millimeter robot control, real-time utilization of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities for interventions that are too difficult for, or go beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Robot Assisted Object Manipulation for Minimally Invasive Surgery

    Robotic systems have an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems with decision-making capabilities are not yet available. In 2017, Yang et al. proposed a structure to classify research efforts toward the autonomy achievable with surgical robots. Six different levels were identified: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy. All the commercially available platforms in robot-assisted surgery are still at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could potentially introduce multiple benefits, such as decreasing surgeons’ workload and fatigue and pursuing a consistent quality of procedures. Ultimately, allowing surgeons to interpret the ample and intelligent information from the system will enhance the surgical outcome and reflect positively on both patients and society. Three main aspects are required to introduce automation into surgery: the surgical robot must move with high precision, have motion-planning capabilities, and understand the surgical scene. Besides these main factors, depending on the type of surgery, other aspects might play a fundamental role, such as compliance and stiffness. This thesis addresses three technological challenges encountered when trying to achieve the aforementioned goals, in the specific case of robot-object interaction: first, how to overcome the inaccuracy of cable-driven systems when executing fine and precise movements; second, how to plan different tasks in dynamically changing environments; and lastly, how the understanding of a surgical scene can be used to solve more than one manipulation task. To address the first challenge, a control scheme relying on accurate calibration is implemented to execute the pick-up of a surgical needle. Regarding the planning of surgical tasks, two approaches are explored: one is learning from demonstration to pick and place a surgical object, and the second is using a gradient-based approach to trigger a smoother object-repositioning phase during intraoperative procedures. Finally, to improve scene understanding, this thesis focuses on developing a simulation environment where multiple tasks can be learned based on the surgical scene and then transferred to the real robot. Experiments proved that automation of the pick-and-place task for different surgical objects is possible: the robot was able to autonomously pick up a suturing needle, position a surgical device for intraoperative ultrasound scanning, and manipulate soft tissue for intraoperative organ retraction. Although automation of surgical subtasks has been demonstrated in this work, several challenges remain open, such as the ability of the resulting algorithms to generalise across different environmental conditions and different patients.
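
    The first challenge above, compensating the inaccuracy of cable-driven systems, is often approached by fitting a correction between commanded and externally measured joint values. The sketch below is a generic, hypothetical illustration of that idea (a per-joint polynomial correction), not the calibration scheme used in the thesis; all names are assumptions.

```python
# Hypothetical sketch (not the thesis' scheme): a per-joint polynomial
# correction fitted between commanded and externally measured joint values.
import numpy as np

class JointCalibration:
    def __init__(self, degree=2):
        self.degree = degree
        self.coeffs = None  # one error polynomial per joint

    def fit(self, commanded, measured):
        """commanded, measured: (num_samples, num_joints) joint angles,
        e.g. with 'measured' obtained from an external tracker."""
        self.coeffs = [
            np.polyfit(commanded[:, j], measured[:, j] - commanded[:, j], self.degree)
            for j in range(commanded.shape[1])
        ]

    def correct(self, q_desired):
        """First-order correction: command q_desired minus the predicted error,
        assuming the error varies slowly around the target."""
        err = np.array([np.polyval(c, q) for c, q in zip(self.coeffs, q_desired)])
        return q_desired - err
```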

    Implementation of Camera Arm Control by an Oculus Rift on a da Vinci Surgical System Simulation

    Camera control methods play a significant role in remote surgery. Two methods have been developed to control the camera arm of the da Vinci Surgical System: a standard clutch-based method for manual movement of the camera, and an autonomous camera (auto-camera) method. In the standard method, the surgeon positions the camera manually using a pair of hand controllers; this happens frequently during surgery and can be a distraction during surgical procedures. The auto-camera method was developed to address this issue by enabling the system to move the camera autonomously: the camera is moved with respect to the center of the surgical tool arms, with automatic zoom control. There are still many issues with automatically moving a camera, so we show the feasibility of an intermediate solution using an Oculus Rift head-mounted stereo display. Achieving the optimal camera viewpoint with simple control methods is of utmost importance for remote surgical systems. We propose a new method to move the camera arm based on sensors within the Oculus Rift: can a surgeon put on the Oculus Rift (a virtual reality headset), get a stereoscopic view, and control the camera with simple head gestures? In this case, the surgeon is able to see the 3D camera view of the scope inside the Oculus Rift and move the viewpoint with his or her head orientation. The position and orientation of the Oculus Rift are measured by an inertial measurement unit and optical tracking sensors within the Oculus platform, and these data can be used to control the position and orientation of the camera arm. In this thesis, a complete system is created based on the Robot Operating System (ROS) and a 3D simulation of the da Vinci robot in RViz. In addition, a usability study is conducted to analyze system accuracy: headset orientation is compared to the corresponding orientation of the camera in simulation, and we also check whether subjects can use the system comfortably during a simple operation. It is anticipated that the headset movement will match its corresponding simulation in RViz and that the results will demonstrate the feasibility of this method for camera control. We propose next steps for testing this system on the da Vinci hardware, leading towards a system for the operating room of the future.
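
    A minimal sketch of the control flow this abstract describes, head orientation in, camera-arm command out, might look as follows in ROS. The topic names, message types, and direct pass-through mapping are assumptions for illustration, not the thesis implementation.

```python
# Hypothetical ROS sketch: pass the Oculus Rift head orientation through to a
# camera-arm orientation command. Topic names and the pass-through mapping
# are assumptions for illustration.
import rospy
from sensor_msgs.msg import Imu
from geometry_msgs.msg import PoseStamped

class HeadToCameraTeleop:
    def __init__(self):
        self.pub = rospy.Publisher('/camera_arm/set_orientation',
                                   PoseStamped, queue_size=1)
        rospy.Subscriber('/oculus/imu', Imu, self.on_imu)

    def on_imu(self, msg):
        cmd = PoseStamped()
        cmd.header.stamp = rospy.Time.now()
        cmd.header.frame_id = 'camera_base'
        cmd.pose.orientation = msg.orientation  # head orientation -> camera
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('oculus_camera_teleop')
    HeadToCameraTeleop()
    rospy.spin()
```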

    Smart Camera Robotic Assistant for Laparoscopic Surgery

    In the last decades, laparoscopic surgery has become daily practice in operating rooms worldwide, and its evolution is tending towards even less invasive techniques. In this scenario, robotics has found a wide field of application, from slave robotic systems that replicate the movements of the surgeon to autonomous robots able to assist the surgeon in certain maneuvers or to perform autonomous surgical tasks. However, these systems require the direct supervision of the surgeon, and their capacity for making decisions and adapting to dynamic environments is very limited. This PhD dissertation presents the design and implementation of a smart camera robotic assistant to collaborate with the surgeon in a real surgical environment. First, it presents the design of a novel camera robotic assistant able to augment the capacities of current vision systems. This robotic assistant is based on an intra-abdominal camera robot, which is completely inserted into the patient’s abdomen and can be freely moved along the abdominal cavity by means of magnetic interaction with an external magnet. To provide the camera with autonomy of motion, the external magnet is coupled to the end effector of a robotic arm, which controls the shift of the camera robot along the abdominal wall. This way, the proposed robotic assistant has six degrees of freedom, which allow it to provide a wider field of view than traditional vision systems and to offer different perspectives of the operating area. On the other hand, the intelligence of the system is based on a cognitive architecture specially designed for autonomous collaboration with the surgeon in real surgical environments. The proposed architecture simulates the behavior of a human assistant, with a natural and intuitive human-robot interface for communication between the robot and the surgeon. The cognitive architecture also includes learning mechanisms to adapt the behavior of the robot to the different ways of working of surgeons, and to improve the robot’s behavior through experience, in a similar way as a human assistant would do. The theoretical concepts of this dissertation have been validated both through in-vitro experimentation in the medical robotics labs of the University of Malaga and through in-vivo experimentation with pigs at the IACE Center (Instituto Andaluz de Cirugía Experimental), performed by expert surgeons.
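
    The magnetic guidance described above can be pictured with a simple geometric rule: hold the external magnet on the abdominal wall directly above the desired camera position. The sketch below illustrates this under the simplifying, assumed approximation of a planar abdominal wall; the function, its names, and the clearance parameter are all hypothetical.

```python
# Hypothetical sketch: place the external magnet on the abdominal wall directly
# above the desired camera position, approximating the wall as a plane.
import numpy as np

def magnet_position(target_cam_pos, wall_point, wall_normal, clearance=0.01):
    """target_cam_pos: desired camera position inside the abdomen (metres).
    wall_point, wall_normal: plane approximating the abdominal wall, with the
    normal pointing outward. Returns where the robot arm should hold the magnet."""
    n = wall_normal / np.linalg.norm(wall_normal)
    # Project the target onto the wall plane, then step outward by `clearance`.
    signed_dist = np.dot(target_cam_pos - wall_point, n)
    on_wall = target_cam_pos - signed_dist * n
    return on_wall + clearance * n
```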

    The 3rd AAU Workshop on Robotics: Proceedings


    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. When gathering information from the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The huge amount of collected data allows one to consider either methods exploiting the totality of the data (dense approaches) or a reduced set obtained from feature-extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., to detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. On the other hand, sparse visual data are extracted in terms of geometric primitives in order to implement a visual servoing control scheme satisfying proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant also in other contexts. In the field of surgical robotics, having reliable data about unmeasurable quantities is of great importance and critical at the same time. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. This method has also been validated with a novel simulator designed for the da Vinci robotic platform, with the purpose of easing interfacing and use in ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose system inputs used to produce the estimation are theoretically arbitrary; this offers no possibility of actively adapting input trajectories to optimize specific requirements on the performance of the estimation. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimation of a scene observed by a moving camera, while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
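
    The passive Kalman-based observer mentioned above follows the classical predict/update structure. The sketch below shows that generic structure only; the motion and measurement models (F, H, and the noise covariances) are placeholders, not those of the needle-pose observer in the manuscript.

```python
# Generic predict/update cycle of a (passive) linear Kalman observer; the
# model matrices here are placeholders, not the manuscript's needle-pose model.
import numpy as np

def kalman_step(x, P, F, Q, z, H, R):
    """x, P: state estimate and covariance; F, Q: motion model and its noise;
    z, H, R: measurement, measurement model and its noise."""
    # Predict: propagate the state and grow its uncertainty.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the (e.g. image-projected) measurement z.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```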

    Safety Critical Java for Robotics Programming
