    A Non-Rigid Map Fusion-Based RGB-Depth SLAM Method for Endoscopic Capsule Robots

    In the field of gastrointestinal (GI) tract endoscopy, ingestible wireless capsule endoscopy is considered a minimally invasive, novel diagnostic technology for inspecting the entire GI tract and diagnosing various diseases and pathologies. Since the development of this technology, medical device companies and many research groups have made significant progress in turning such passive capsule endoscopes into robotic, actively controlled capsule endoscopes that achieve almost all the functions of current flexible endoscopes. However, robotic capsule endoscopy still faces several challenges. One such challenge is the precise localization of these active devices in the 3D world, which is essential for precise three-dimensional (3D) mapping of the inner organ. A reliable 3D map of the explored organ could assist doctors in making more intuitive and accurate diagnoses. In this paper, we propose, to our knowledge for the first time in the literature, a visual simultaneous localization and mapping (SLAM) method specifically developed for endoscopic capsule robots. The proposed RGB-Depth SLAM method captures comprehensive, dense, globally consistent surfel-based maps of the inner organs explored by an endoscopic capsule robot in real time. This is achieved by dense frame-to-model camera tracking and windowed surfel-based fusion, coupled with frequent model refinement through non-rigid surface deformations.
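    The sketch below is a minimal illustration, not the authors' implementation, of the surfel-fusion idea in this abstract: each map surfel keeps a position, a normal and a confidence weight, measurements associated with an existing surfel are merged by confidence-weighted averaging, and unmatched points spawn new surfels. The class layout and names are assumptions made for illustration (Python/NumPy).

import numpy as np

class SurfelMap:
    """Toy surfel map: positions, normals and confidence weights (illustrative only)."""

    def __init__(self):
        self.positions = np.empty((0, 3))
        self.normals = np.empty((0, 3))
        self.weights = np.empty((0,))

    def fuse(self, points, normals, assoc, w_new=1.0):
        """Fuse measured points; assoc is an int array, assoc[i] = matched surfel index or -1."""
        matched = assoc >= 0
        idx = assoc[matched]
        w_old = self.weights[idx][:, None]
        # Confidence-weighted running average of surfel position and normal.
        self.positions[idx] = (w_old * self.positions[idx] + w_new * points[matched]) / (w_old + w_new)
        n = w_old * self.normals[idx] + w_new * normals[matched]
        self.normals[idx] = n / np.linalg.norm(n, axis=1, keepdims=True)
        self.weights[idx] += w_new
        # Unmatched measurements become new surfels.
        self.positions = np.vstack([self.positions, points[~matched]])
        self.normals = np.vstack([self.normals, normals[~matched]])
        self.weights = np.concatenate([self.weights, np.full((~matched).sum(), w_new)])

    In the full method this per-frame fusion is combined with dense frame-to-model camera tracking and periodic non-rigid deformation of the map; only the weighted-average fusion step is sketched here.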

    Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots

    Reliable, real-time 3D reconstruction and localization is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots, an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications, which combines magnetic and vision-based localization with non-rigid, deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex-vivo porcine stomach models. Across trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors range from 1.58 to 2.17 cm.
    Comment: submitted to IROS 201
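    As a rough illustration of how a root-mean-square surface reconstruction error such as the 1.58 to 2.17 cm figures above can be computed, the snippet below takes the RMS of nearest-neighbour distances from the reconstructed map points to a ground-truth scan of the stomach model. The evaluation protocol shown here is an assumption, not the paper's code.

import numpy as np
from scipy.spatial import cKDTree

def surface_rmse(reconstructed_pts, ground_truth_pts):
    """RMS nearest-neighbour distance from reconstruction to ground truth (same units as inputs)."""
    tree = cKDTree(ground_truth_pts)          # index the ground-truth point cloud
    dists, _ = tree.query(reconstructed_pts)  # closest ground-truth point for each map point
    return float(np.sqrt(np.mean(dists ** 2)))

    The result inherits the units of the input point clouds; reporting it in centimetres, as above, requires both clouds to be expressed in the same metric frame.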

    A Testbed for Design and Performance Evaluation of Visual Localization Technique inside the Small Intestine

    Wireless video capsule endoscopy (VCE) plays an increasingly important role in assisting clinical diagnoses of gastrointestinal (GI) diseases. It provides a non-invasive way to examine the entire small intestine, which other conventional endoscopic instruments can barely reach. Existing VCE examination systems cannot track the location of the endoscopic capsule, which prevents the physician from identifying the exact location of a disease. During the eight-hour examination, the capsule continuously captures images at a frame rate of up to six frames per second, so it is possible to extract motion information from the content of the image sequence. Many attempts have been made to develop computer vision algorithms that detect the motion of the capsule from the small changes between consecutive video frames and then trace the capsule's location. However, validating those algorithms is challenging because conducting experiments on the human body is extremely difficult due to individual differences and legal issues. In this thesis, two validation approaches for motion tracking of the VCE are presented in detail. One approach is to build a physical testbed with a plastic pipe and an endoscopy camera; the other is to build a virtual testbed by creating a three-dimensional virtual small intestine model and simulating the motion of the capsule. Based on the virtual testbed, a physiological factor, intestinal contraction, is studied in terms of its influence on vision-based localization algorithms, and a geometric model for measuring the amount of contraction is proposed and validated via the virtual testbed. The empirical results support the performance evaluation of other research on vision-based localization algorithms for the VCE.
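    The following sketch shows the kind of frame-to-frame motion cue such vision-based localization algorithms rely on: sparse features are tracked between consecutive capsule images with Lucas-Kanade optical flow, and the median pixel displacement is taken as the motion estimate. It is an illustrative example (OpenCV/Python), not the testbed's actual algorithm, and the function name is an assumption.

import cv2
import numpy as np

def frame_displacement(prev_bgr, curr_bgr):
    """Median pixel displacement of tracked features between two consecutive frames."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return None
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    return np.median(flow, axis=0)  # (dx, dy) in pixels

    Converting this pixel displacement into a physical capsule displacement additionally requires a camera model and assumptions about the distance to the intestinal wall, which is precisely what the physical and virtual testbeds described above are built to evaluate.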

    On Simultaneous Localization and Mapping inside the Human Body (Body-SLAM)

    Wireless capsule endoscopy (WCE) offers a patient-friendly, non-invasive and painless investigation of the entire small intestine, which other conventional wired endoscopic instruments can barely reach. As a critical component of the capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to identify the position of an intestinal disease after it is detected in the video. To define the position of the endoscopic capsule, we need a map of the inside of the human body. However, since the shape of the small intestine is extremely complex and the RF signal propagates differently through non-homogeneous body tissues, accurate mapping and localization inside the small intestine is very challenging. In this dissertation, we present an in-body simultaneous localization and mapping technique (Body-SLAM) to enhance the positioning accuracy of the WCE inside the small intestine and reconstruct the trajectory the capsule has traveled. In this way, the positions of intestinal diseases can be accurately located on the map of the inside of the human body, facilitating follow-up therapeutic operations. The proposed approach takes advantage of data fusion from two sources that come with the WCE: the image sequence captured by the WCE's embedded camera and the RF signal emitted by the capsule. The approach estimates the speed and orientation of the endoscopic capsule by analyzing displacements of feature points between consecutive images. It then integrates this motion information with the RF measurements using a Kalman filter to smooth the localization results and generate the route that the WCE has traveled. The performance of the proposed motion tracking algorithm is validated using empirical data from patients, and this motion model is later imported into a virtual testbed to evaluate alternative Body-SLAM algorithms. Experimental results show that the proposed Body-SLAM technique provides accurate tracking of the WCE with an average error of less than 2.3 cm.
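    Below is a minimal sketch, under simplifying assumptions, of the kind of Kalman-filter fusion this abstract describes: a two-state filter (distance traveled along the intestine and speed) is propagated with a constant-velocity model and corrected with both the vision-based speed estimate and the RF position fix. All matrices, noise values and names are illustrative assumptions, not the dissertation's parameters.

import numpy as np

dt = 0.5                        # assumed time between frames (s)
F = np.array([[1.0, dt],        # constant-velocity motion model
              [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])       # process noise (assumed)
H_RF  = np.array([[1.0, 0.0]])  # RF fix observes position along the path
H_VIS = np.array([[0.0, 1.0]])  # image-based estimate observes speed
R_RF  = np.array([[4e-2]])      # RF measurement noise, ~20 cm std (assumed)
R_VIS = np.array([[1e-4]])      # visual speed measurement noise (assumed)

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

def kf_step(x, P, visual_speed, rf_position):
    """One predict step followed by fusion of both measurement sources."""
    x, P = F @ x, F @ P @ F.T + Q
    x, P = kf_update(x, P, np.array([[visual_speed]]), H_VIS, R_VIS)
    x, P = kf_update(x, P, np.array([[rf_position]]), H_RF, R_RF)
    return x, P

# state: [distance traveled along the intestine (m), speed (m/s)]
x, P = np.zeros((2, 1)), np.eye(2)

    In the dissertation's full approach the vision-based estimate also includes orientation; the one-dimensional state here only illustrates how the two measurement streams are combined.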