Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing
In the last decade, various robotic platforms have been introduced that could
support delicate retinal surgeries. Concurrently, to provide semantic
understanding of the surgical area, recent advances have enabled
microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with
high-resolution 3D imaging at near video rate. The combination of robotics and
semantic understanding enables task autonomy in robotic retinal surgery, such
as for subretinal injection. This procedure requires precise needle insertion
for best treatment outcomes. However, merging robotic systems with iOCT
introduces new challenges. These include, but are not limited to, high demands
on data processing rates and dynamic registration of these systems during the
procedure. In this work, we propose a framework for autonomous robotic
navigation for subretinal injection, based on intelligent real-time processing
of iOCT volumes. Our method consists of an instrument pose estimation method,
an online registration between the robotic and the iOCT system, and trajectory
planning tailored for navigation to an injection target. We also introduce
intelligent virtual B-scans, a volume slicing approach for rapid instrument
pose estimation, which is enabled by Convolutional Neural Networks (CNNs). Our
experiments on ex-vivo porcine eyes demonstrate the precision and repeatability
of the method. Finally, we discuss identified challenges in this work and
suggest potential solutions to further the development of such systems.
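The virtual B-scan idea above amounts to resampling an arbitrary oblique plane out of the 3D iOCT volume. The following is a minimal illustrative sketch of such a slicing step using trilinear interpolation; the function name, parameters, and coordinate conventions are assumptions for illustration, not the paper's implementation (which couples slicing with CNN-based pose estimation).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def virtual_bscan(volume, center, normal, up, height, width, spacing=1.0):
    """Sample a 2D slice (a 'virtual B-scan') from a 3D volume.

    volume  : (Z, Y, X) array of intensities
    center  : 3-vector, slice center in voxel coordinates (z, y, x)
    normal  : 3-vector, normal of the slicing plane
    up      : 3-vector roughly indicating the in-plane vertical axis
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    u = np.asarray(up, dtype=float)
    u = u - n * (u @ n)                      # project 'up' into the plane
    u /= np.linalg.norm(u)
    v = np.cross(n, u)                       # second in-plane axis
    rows = (np.arange(height) - height / 2) * spacing
    cols = (np.arange(width) - width / 2) * spacing
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    # world coordinates of every pixel of the slice, shape (3, H, W)
    pts = (np.asarray(center, dtype=float)[:, None, None]
           + u[:, None, None] * rr + v[:, None, None] * cc)
    return map_coordinates(volume, pts, order=1, mode="nearest")
```

In a pipeline like the one described, such slices would be oriented along the estimated instrument axis so that the needle cross-section stays visible to the downstream CNN.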
EyeLS: Shadow-Guided Instrument Landing System for Intraocular Target Approaching in Robotic Eye Surgery
Robotic ophthalmic surgery is an emerging technology for facilitating
high-precision interventions, such as retinal penetration in subretinal
injection and removal of floating tissues in retinal detachment, guided by
imaging modalities such as microscopy and intraoperative OCT (iOCT). Although
iOCT has been explored for locating the needle tip within its range-limited
region of interest (ROI), it remains difficult to coordinate the iOCT's motion
with the needle, especially at the
initial target-approaching stage. Meanwhile, due to 2D perspective projection
and thus the loss of depth information, current image-based methods cannot
effectively estimate the needle tip's trajectory towards both retinal and
floating targets. To address this limitation, we propose to use the shadow
positions of the target and the instrument tip to estimate their relative depth
position and accordingly optimize the instrument tip's insertion trajectory
until the tip approaches the target within iOCT's scanning area. Our method
successfully approaches targets on a retina model, achieving average depth
errors of 0.0127 mm and 0.3473 mm for floating and retinal targets,
respectively, in the surgical simulator without damaging the retina.
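The shadow cue described above can be sketched as simple projective geometry: under an oblique, roughly parallel light source, a point at height h above the retinal surface casts a shadow displaced by d = h / tan(θ) in the image, where θ is the light's elevation angle. The function below is an illustrative sketch under that assumption; the names, the calibration parameters, and the parallel-light model itself are assumptions, not the paper's actual estimator.

```python
import numpy as np

def relative_depth_from_shadows(tip_px, tip_shadow_px,
                                target_px, target_shadow_px,
                                mm_per_px, light_elevation_deg):
    """Estimate the tip-to-target height difference (mm) above the retina.

    Assumes parallel illumination at elevation angle theta, so a point at
    height h casts a shadow offset by d = h / tan(theta); hence
    h = d * tan(theta), with d measured in the 2D image.
    """
    t = np.tan(np.radians(light_elevation_deg))
    tip_h = np.linalg.norm(np.subtract(tip_px, tip_shadow_px)) * mm_per_px * t
    tgt_h = np.linalg.norm(np.subtract(target_px, target_shadow_px)) * mm_per_px * t
    # > 0: tip is above the target; a controller would drive this toward zero
    return tip_h - tgt_h
```

A retinal target lies on the surface (its shadow coincides with it, height 0), while a floating target has a nonzero shadow offset, which is why both cases reduce to the same tip-versus-target comparison.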
Unsupervised out-of-distribution detection for safer robotically-guided retinal microsurgery
Purpose: A fundamental problem in designing safe machine learning systems is
identifying when samples presented to a deployed model differ from those
observed at training time. Detecting so-called out-of-distribution (OoD)
samples is crucial in safety-critical applications such as robotically-guided
retinal microsurgery, where distances between the instrument and the retina are
derived from sequences of 1D images that are acquired by an
instrument-integrated optical coherence tomography (iiOCT) probe.
Methods: This work investigates the feasibility of using an OoD detector to
identify when images from the iiOCT probe are inappropriate for subsequent
machine learning-based distance estimation. We show how a simple OoD detector
based on the Mahalanobis distance (MahaAD) can successfully reject corrupted
samples coming from real-world ex-vivo porcine eyes.
Results: Our results demonstrate that the proposed approach can successfully
detect OoD samples and help maintain the performance of the downstream task
within reasonable levels. MahaAD outperformed a supervised approach trained on
the same kind of corruptions and achieved the best performance in detecting OoD
cases from a collection of iiOCT samples with real-world corruptions.
Conclusion: The results indicate that detecting corrupted iiOCT data through
OoD detection is feasible and does not need prior knowledge of possible
corruptions. Consequently, MahaAD could aid in ensuring patient safety during
robotically-guided microsurgery by preventing deployed prediction models from
estimating distances that put the patient at risk.
Comment: Accepted at IPCAI 202
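The core of a Mahalanobis-distance OoD detector is fitting a mean and covariance to in-distribution features and scoring test samples by their distance to that Gaussian. The class below is a minimal sketch of that idea; the class name, the feature representation, and the regularization constant are assumptions for illustration (MahaAD as described also involves specific feature extractors and multi-scale aggregation not shown here).

```python
import numpy as np

class MahalanobisOoD:
    """Score samples by Mahalanobis distance to the training feature distribution."""

    def fit(self, feats):
        """feats: (N, D) array of in-distribution feature vectors."""
        self.mu = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False)
        # small ridge term for numerical stability, then invert once
        self.prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return self

    def score(self, x):
        """Larger score means more out-of-distribution."""
        d = x - self.mu
        return float(np.sqrt(d @ self.prec @ d))
```

At deployment, scores above a threshold chosen on validation data would flag a sample as corrupted, and the downstream distance estimate would be withheld rather than trusted.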
Autonomous Needle Navigation in Retinal Microsurgery: Evaluation in ex vivo Porcine Eyes
Important challenges in retinal microsurgery include prolonged operating
time, inadequate force feedback, and poor depth perception due to a constrained
top-down view of the surgery. The introduction of robot-assisted technology
could potentially deal with such challenges and improve the surgeon's
performance. Motivated by such challenges, this work develops a strategy for
autonomous needle navigation in retinal microsurgery aiming to achieve precise
manipulation, reduced end-to-end surgery time, and enhanced safety. This is
accomplished through real-time geometry estimation and chance-constrained Model
Predictive Control (MPC) resulting in high positional accuracy while keeping
scleral forces within a safe level. The robotic system is validated using both
open-sky and intact (with lens and partial vitreous removal) ex vivo porcine
eyes. The experimental results demonstrate that the generation of safe control
trajectories is robust to small motions associated with head drift. The mean
navigation time and scleral force for the MPC navigation experiments are
7.208 s and 11.97 mN, respectively, which can be considered efficient and well
within acceptable safe
limits. The resulting mean errors along lateral directions of the retina are
below 0.06 mm, which is below the typical hand tremor amplitude in retinal
microsurgery.
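The receding-horizon principle behind MPC navigation can be illustrated with a toy 1D needle-position model: at each step, optimize a short control sequence against a tracking cost subject to an actuation bound, apply only the first control, and re-plan. The sketch below uses exhaustive search over a small discrete control set purely for illustration; the model, costs, and bounds are assumptions, and the paper's controller is a chance-constrained MPC handling scleral-force limits, not this simplification.

```python
import numpy as np
from itertools import product

def mpc_step(x0, target, horizon=5, u_max=0.05, dt=0.1):
    """One receding-horizon step for a toy model x_{k+1} = x_k + u_k * dt.

    Enumerates a small discrete control set over the horizon and returns the
    first control of the lowest-cost sequence; the exhaustive search stands in
    for the constrained solver used in a real MPC.
    """
    controls = np.linspace(-u_max, u_max, 5)   # bounded actuation
    best_u, best_cost = 0.0, float("inf")
    for seq in product(controls, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = x + u * dt
            cost += (x - target) ** 2 + 0.01 * u ** 2   # tracking + effort
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u
```

Because only the first control is executed before re-planning, the scheme remains robust to small disturbances such as the head-drift motions mentioned above.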