Deep Homography Prediction for Endoscopic Camera Motion Imitation Learning
In this work, we investigate laparoscopic camera motion automation through
imitation learning from retrospective videos of laparoscopic interventions. A
novel method is introduced that learns to augment a surgeon's behavior in image
space through object motion invariant image registration via homographies.
Contrary to existing approaches, no geometric assumptions are made and no depth
information is necessary, enabling immediate translation to a robotic setup.
Deviating from the dominant approach in the literature, which consists of
following a surgical tool, we do not handcraft the objective, and no priors are
imposed on the surgical scene, allowing the method to discover unbiased
policies. In this new research field, significant improvements are demonstrated
over two baselines on the Cholec80 and HeiChole datasets, showcasing an
improvement of 47% over camera motion continuation. The method is further shown
to correctly predict camera motion on the public motion classification labels
of the AutoLaparo dataset. All code is made accessible on GitHub.
Comment: Early accepted at MICCAI 202
Sim2Real Transfer of Reinforcement Learning for Concentric Tube Robots
Concentric Tube Robots (CTRs) are promising for minimally invasive interventions due to their miniature diameter, high dexterity, and compliance with soft tissue. CTRs comprise individual pre-curved tubes, usually composed of NiTi, arranged concentrically. As each tube is rotated and translated relative to the others, the backbone elongates, twists, and bends with a dexterity that is advantageous in confined spaces. Tube interactions, unmodelled phenomena, and inaccurate tube parameter estimation make physical modeling of CTRs challenging, which in turn complicates kinematics and control. Deep reinforcement learning (RL) has been investigated as a solution. However, hardware validation has remained a challenge due to differences between the simulation and hardware domains. In this work, domain randomization is proposed as a strategy for transferring a policy trained purely in simulation to hardware, with no additionally acquired physical training data. The differences in forward kinematics accuracy and precision between simulation and hardware are characterized by errors of 14.74±8.87 mm, or 26.61±17.00% of robot length. We show that the proposed domain randomization approach reduces mean errors by 56% compared to no domain randomization. Furthermore, we demonstrate path-following capability in hardware on a line path, with resulting errors of 4.37±2.39 mm, or 5.61±3.11% of robot length.
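The domain randomization strategy can be sketched generically: at every episode reset, the simulator's tube parameters are resampled from ranges around their nominal values, so the policy trains across a family of models rather than one inaccurate nominal model. The parameter names and ranges below are illustrative assumptions, not the paper's values:

```python
import random

# Hypothetical per-tube parameter ranges (illustrative only).
PARAM_RANGES = {
    "curvature_per_m": (5.0, 15.0),
    "tube_length_mm": (90.0, 110.0),
    "youngs_modulus_gpa": (40.0, 60.0),
}

def randomize_parameters(rng=random):
    """Sample a fresh parameter set, uniformly within each range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

class RandomizedCTREnv:
    """Toy environment wrapper: re-randomizes model parameters on reset,
    so no single simulated kinematics model is ever overfit."""

    def __init__(self):
        self.params = None

    def reset(self):
        self.params = randomize_parameters()
        return self.params  # a real env would return an observation instead

env = RandomizedCTREnv()
params = env.reset()
```

A policy trained against such a randomized simulator must be robust to the whole parameter family, which is what enables transfer to hardware whose true parameters are unknown.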
Semiautonomous Robotic Manipulator for Minimally Invasive Aortic Valve Replacement
Aortic valve surgery is the preferred procedure for replacing a damaged valve with an artificial one. The ValveTech robotic platform comprises a flexible articulated manipulator and surgical interface supporting the effective delivery of an artificial valve by teleoperation and endoscopic vision. This article presents our recent work on force-perceptive, safe, semiautonomous navigation of the ValveTech platform prior to valve implantation. First, we present a force observer that transfers forces from the manipulator body and tip to a haptic interface. Second, we demonstrate how hybrid forward/inverse mechanics, together with endoscopic visual servoing, lead to autonomous valve positioning. Benchtop experiments and an artificial phantom quantify the performance of the developed robot controller and navigator. Valves can be autonomously delivered with a 2.0±0.5 mm position error and a minimal misalignment of 3.4±0.9°. The hybrid force/shape observer (FSO) algorithm was able to predict distributed external forces on the articulated manipulator body with an average error of 0.09 N. FSO can also estimate loads on the tip with an average accuracy of 3.3%. The presented system can lead to better patient care, delivery outcome, and surgeon comfort during aortic valve surgery, without requiring sensorization of the robot tip, and therefore obviating miniaturization constraints.
- …