Computer-assisted polyp matching between optical colonoscopy and CT colonography: a phantom study
Potentially precancerous polyps detected with CT colonography (CTC) need to
be removed subsequently, using an optical colonoscope (OC). Due to large
colonic deformations induced by the colonoscope, even very experienced
colonoscopists find it difficult to pinpoint the exact location of the
colonoscope tip in relation to polyps reported on CTC. This can cause unduly
prolonged OC examinations that are stressful for the patient, colonoscopist and
supporting staff.
We developed a method, based on monocular 3D reconstruction from OC images,
that automatically matches polyps observed in OC with polyps reported on prior
CTC. A matching cost is computed, using rigid point-based registration between
surface point clouds extracted from both modalities. A 3D printed and painted
phantom of a 25 cm long transverse colon segment was used to validate the
method on two medium-sized polyps. Results indicate that the matching cost is
lowest at the correct corresponding polyp between OC and CTC: the cost at the
incorrect polyp is 3.9 times higher than at the correct match. Furthermore, we
evaluate the matching of the reconstructed polyp from OC against other colonic
endoluminal surface structures, such as haustral folds, and show that the cost
reaches its minimum at the correct polyp from CTC.
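The matching cost above rests on rigid point-based registration between surface point clouds from the two modalities. As a minimal sketch of that idea, the function below aligns two corresponding point clouds with the Kabsch algorithm and returns the residual root-mean-square distance as the cost; the function name, the assumption of known point correspondences, and the use of RMSD are illustrative, not the authors' implementation.

```python
import numpy as np

def rigid_registration_cost(src, dst):
    """Illustrative matching cost: RMSD after optimal rigid alignment
    of two (N, 3) point clouds with known correspondences (Kabsch)."""
    # Center both clouds so only rotation remains to be solved
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance matrix
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = Vt.T @ D @ U.T
    aligned = src_c @ R.T
    # Residual RMS distance serves as the matching cost
    return float(np.sqrt(np.mean(np.sum((aligned - dst_c) ** 2, axis=1))))
```

A low cost indicates the OC reconstruction and a CTC candidate describe the same structure; in practice correspondences are not known in advance, so an iterative scheme such as ICP would wrap a step like this.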
Automated matching between polyps observed at OC and prior CTC would
facilitate the biopsy or removal of true-positive pathology or exclusion of
false-positive CTC findings, and would reduce colonoscopy false-negative
(missed) polyps. Ultimately, such a method might reduce healthcare costs,
patient inconvenience, and discomfort.
Comment: This paper was presented at the SPIE Medical Imaging 2014 conference.
Mobility, Navigation and Localization Towards Robotic Endoscopy
With significant progress being made towards improving endoscope technology, such as capsule endoscopy and robotic endoscopy, the development of advanced strategies for manipulating, controlling and, more generally, easing physicians' access to these devices is an important next step. This work presents the development of several robotic platforms for experimentally testing navigation and localization strategies in robotic endoscopy, followed by the development and testing of navigation strategies using these devices. Finally, visual and visual-inertial localization and mapping are explored on two of these robotic systems.

We first present a detailed description of the state of the art in minimally invasive robotic surgery, followed by an in-depth description of our design and validation of two important systems for exploring the challenges of robotic endoscopy: the Robotic Endoscope Platform (REP) and the Modular Endoscopy Simulation Apparatus (MESA). Following these descriptions, we present a technique for autonomous navigation of the REP within the MESA, as well as an attempt at applying Simultaneous Localization and Mapping (SLAM) to allow real-time localization of this system. Finally, we transition these techniques to the Endoculus, a complete robotic endoscope suitable for in vivo testing, and demonstrate both autonomous navigation for this device and the implementation of three different SLAM systems for real-time localization and mapping of the Endoculus.

Throughout these experiments we demonstrate the potential for advanced computer-vision methods, along with other sensing techniques, to substantially benefit endoscopy, enabling ever greater autonomy of these systems and furthering the case for robotic endoscopy as a whole.