    Computer-assisted polyp matching between optical colonoscopy and CT colonography: a phantom study

    Potentially precancerous polyps detected with CT colonography (CTC) need to be removed subsequently using an optical colonoscope (OC). Because of the large colonic deformations induced by the colonoscope, even very experienced colonoscopists find it difficult to pinpoint the exact location of the colonoscope tip in relation to polyps reported on CTC. This can unduly prolong OC examinations, which is stressful for the patient, colonoscopist and supporting staff. We developed a method, based on monocular 3D reconstruction from OC images, that automatically matches polyps observed in OC with polyps reported on prior CTC. A matching cost is computed using rigid point-based registration between surface point clouds extracted from both modalities. A 3D-printed and painted phantom of a 25 cm long transverse colon segment was used to validate the method on two medium-sized polyps. Results indicate that the matching cost is smaller at the correct corresponding polyp between OC and CTC: the cost at the incorrect polyp is 3.9 times higher than at the correct match. Furthermore, we evaluate the matching of the reconstructed polyp from OC against other colonic endoluminal surface structures, such as haustral folds, and show that the cost has its minimum at the correct polyp from CTC. Automated matching between polyps observed at OC and on prior CTC would facilitate the biopsy or removal of true-positive pathology or the exclusion of false-positive CTC findings, and would reduce colonoscopy false-negative (missed) polyps. Ultimately, such a method might reduce healthcare costs, patient inconvenience and discomfort.

    Comment: This paper was presented at the SPIE Medical Imaging 2014 conference.
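
    The registration step at the core of this method lends itself to a short illustration. Below is a minimal sketch of such a matching cost: a basic ICP loop with a Kabsch update, returning the residual RMS distance after rigid registration. NumPy and SciPy are assumed, and all names (`matching_cost`, `oc_points`, `ctc_points`) are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping paired points src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

def matching_cost(oc_points, ctc_points, iters=30):
    """Rigid ICP between the OC-reconstructed cloud and a candidate CTC patch;
    the final RMS residual serves as the matching cost (lower = better match)."""
    tree = cKDTree(ctc_points)
    src = oc_points.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                        # closest CTC point per OC point
        R, t = kabsch(src, ctc_points[idx])             # rigid update from current pairs
        src = src @ R.T + t
    dist, _ = tree.query(src)
    return np.sqrt(np.mean(dist ** 2))
```

    In use, the cost would be evaluated for each candidate CTC polyp patch and the candidate with the smallest residual taken as the match, mirroring the minimum observed at the correct polyp in the phantom experiments.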

    A Non-Rigid Map Fusion-Based RGB-Depth SLAM Method for Endoscopic Capsule Robots

    In the gastrointestinal (GI) tract endoscopy field, ingestible wireless capsule endoscopy is considered a minimally invasive, novel diagnostic technology to inspect the entire GI tract and to diagnose various diseases and pathologies. Since the development of this technology, medical device companies and many research groups have made significant progress in turning such passive capsule endoscopes into robotic, active capsule endoscopes that achieve almost all the functions of current active flexible endoscopes. However, robotic capsule endoscopy still faces challenges. One such challenge is the precise localization of the active device in the 3D world, which is essential for a precise three-dimensional (3D) mapping of the inner organ. A reliable 3D map of the explored inner organ could assist doctors in making more intuitive and correct diagnoses. In this paper, we propose, to our knowledge for the first time in the literature, a visual simultaneous localization and mapping (SLAM) method specifically developed for endoscopic capsule robots. The proposed RGB-Depth SLAM method captures comprehensive, dense, globally consistent surfel-based maps of the inner organs explored by an endoscopic capsule robot in real time. This is achieved by using dense frame-to-model camera tracking and windowed surfel-based fusion, coupled with frequent model refinement through non-rigid surface deformations.
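
    As a rough illustration of the dense frame-to-model camera tracking that such a pipeline rests on, the sketch below performs a single point-to-plane ICP update between matched frame and model points. Correspondences and normals are assumed to be given, NumPy is assumed, and the names are illustrative; the authors' surfel-based system is considerably more involved.

```python
import numpy as np

def point_to_plane_step(frame_pts, model_pts, model_normals):
    """One Gauss-Newton step for the 6-DoF pose (rx, ry, rz, tx, ty, tz)
    minimising sum(((R p + t - q) . n)^2) over matched point pairs."""
    A = np.hstack([np.cross(frame_pts, model_normals), model_normals])  # Nx6 Jacobian
    b = np.einsum('ij,ij->i', model_pts - frame_pts, model_normals)     # point-to-plane residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    rx, ry, rz, tx, ty, tz = x
    # Small-angle approximation of the rotation; re-orthonormalise in practice.
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return R, np.array([tx, ty, tz])
```

    Iterating such a step against the fused model, rather than against the previous frame, is what gives frame-to-model tracking its resistance to drift.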

    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes, to facilitate the capture of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning that can deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can easily be extended to other probe-based imaging modalities.

    Comment: 7 pages, 5 figures, ICRA 2020.
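
    The projective-geometry update of the scanning trajectory can be sketched compactly: features tracked from the reference frame give a homography, which warps the user-defined waypoints into the current frame. OpenCV is assumed below, `ref_pts` is expected as an Nx1x2 float32 array (e.g. from cv2.goodFeaturesToTrack), and the names are illustrative, not the authors' code.

```python
import cv2
import numpy as np

def update_trajectory(ref_gray, cur_gray, ref_pts, trajectory):
    """trajectory: Nx2 waypoints drawn on the reference frame; returns the
    waypoints warped into the current frame so they follow the tissue."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, cur_gray, ref_pts, None)
    good = status.ravel() == 1                          # keep successfully tracked features
    H, _ = cv2.findHomography(ref_pts[good], cur_pts[good], cv2.RANSAC, 3.0)
    pts = trajectory.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```

    The warped waypoints would then serve as the servoing reference for the robot arm. A single homography treats the tissue patch as locally planar; for strongly curved or heavily deforming surfaces, a piecewise or surface-based mapping would be the natural refinement.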

    Towards automated visual flexible endoscope navigation

    Background: The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The non-intuitive and non-ergonomic steering mechanism now forms a barrier to extending flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
    Methods: A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed; ultimately, 26 were included.
    Results: Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
    Conclusions: Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
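
    Of the two techniques, lumen centralization is simple enough to sketch: the lumen usually appears as the darkest region of the image, so the steering command can push the endoscope tip toward the centroid of the darkest pixels. OpenCV and NumPy are assumed; the percentile threshold and the names are illustrative.

```python
import cv2
import numpy as np

def lumen_steering_offset(bgr_frame, dark_percentile=5):
    """Pixel offset from the image centre to the (assumed) lumen centroid;
    steering toward this offset keeps the lumen centred in the view."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (15, 15), 0)        # suppress specular highlights and noise
    thresh = np.percentile(gray, dark_percentile)     # the darkest pixels approximate the lumen
    ys, xs = np.nonzero(gray <= thresh)
    h, w = gray.shape
    return xs.mean() - w / 2.0, ys.mean() - h / 2.0
```

    Exactly the artifacts named above (bubbles, motion blur, specular reflections) break this naive picture in practice, which is why robust steering algorithms remain the stated research priority.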

    Dense Vision in Image-guided Surgery

    Image-guided surgery needs an efficient and effective camera-tracking system in order to perform augmented reality, overlaying preoperative models or labelling cancerous tissues on the 2D video images of the surgical scene. Tracking in endoscopic/laparoscopic scenes, however, is an extremely difficult task, primarily due to tissue deformation, instruments entering the surgical scene, and the presence of specular highlights. State-of-the-art feature-based SLAM systems such as PTAM fail to track such scenes because the number of good features to track is very limited; smoke and instrument motion cause feature-based tracking to fail immediately. The work of this thesis provides a systematic approach to this problem using dense vision. We initially attempted to register a 3D preoperative model with multiple 2D endoscopic/laparoscopic images using a dense method, but this approach did not perform well. We subsequently proposed stereo reconstruction to directly obtain the 3D structure of the scene. By using the dense reconstructed model together with robust estimation, we demonstrate that dense stereo tracking can be remarkably robust even in extremely challenging endoscopic/laparoscopic scenes. Several validation experiments were conducted in this thesis. The proposed stereo reconstruction algorithm achieves state-of-the-art results on several publicly available ground-truth datasets. Furthermore, the proposed robust dense stereo tracking algorithm has proven highly accurate in a synthetic environment (< 0.1 mm RMSE) and, qualitatively, extremely robust when applied to real scenes from robot-assisted laparoscopic prostatectomy (RALP) surgery. This is an important step toward achieving accurate image-guided laparoscopic surgery.
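
    For flavour, a minimal dense stereo reconstruction can be sketched with OpenCV's semi-global block matcher; this is a generic stand-in, not the thesis' own state-of-the-art algorithm, and the camera parameters (`focal_px`, `baseline_m`, principal point) are assumed to come from a prior stereo calibration of a rectified pair.

```python
import cv2
import numpy as np

def dense_reconstruction(left_gray, right_gray, focal_px, baseline_m, cx, cy):
    """Dense 3D map (H x W x 3, metres) from a rectified stereo pair."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0  # SGBM is fixed-point x16
    valid = disparity > 0
    Z = np.where(valid, focal_px * baseline_m / np.maximum(disparity, 1e-6), 0.0)
    ys, xs = np.indices(disparity.shape)
    X = (xs - cx) * Z / focal_px                        # back-project through the pinhole model
    Y = (ys - cy) * Z / focal_px
    return np.dstack([X, Y, Z]), valid
```

    The reconstructed model then serves as the reference surface that dense tracking aligns each new frame against, with robust estimation down-weighting smoke, instruments, and specular highlights.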

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.