205 research outputs found

    A Non-Rigid Map Fusion-Based RGB-Depth SLAM Method for Endoscopic Capsule Robots

    In the field of gastrointestinal (GI) tract endoscopy, ingestible wireless capsule endoscopy is considered a novel, minimally invasive diagnostic technology for inspecting the entire GI tract and diagnosing various diseases and pathologies. Since the development of this technology, medical device companies and many research groups have made significant progress toward turning such passive capsule endoscopes into robotic, actively controlled capsule endoscopes that achieve almost all functions of current flexible endoscopes. However, robotic capsule endoscopy still faces several challenges. One such challenge is the precise localization of these active devices in three-dimensional (3D) space, which is essential for precise 3D mapping of the inner organ. A reliable 3D map of the explored organ could assist doctors in making a more intuitive and correct diagnosis. In this paper, we propose, to our knowledge for the first time in the literature, a visual simultaneous localization and mapping (SLAM) method specifically developed for endoscopic capsule robots. The proposed RGB-Depth SLAM method is capable of capturing comprehensive, dense, globally consistent surfel-based maps of the inner organs explored by an endoscopic capsule robot in real time. This is achieved by using dense frame-to-model camera tracking and windowed surfel-based fusion, coupled with frequent model refinement through non-rigid surface deformations.
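    The surfel-fusion step mentioned above can be illustrated with a minimal sketch: each surfel carries a position, a normal, and a confidence weight, and a new depth measurement is folded in by confidence-weighted averaging. The data layout, function name, and update rule below are hypothetical simplifications for illustration, not the paper's implementation.

```python
import numpy as np

def fuse_surfel(surfel, measurement, weight=1.0):
    """Fold one new depth measurement into an existing surfel by
    confidence-weighted averaging. The (position, normal, confidence)
    layout and this update rule are a hypothetical simplification of
    windowed surfel-based fusion, not the paper's method."""
    pos, normal, conf = surfel
    m_pos, m_normal = measurement
    new_conf = conf + weight
    new_pos = (conf * pos + weight * m_pos) / new_conf
    blended = conf * normal + weight * m_normal
    new_normal = blended / np.linalg.norm(blended)  # keep unit length
    return new_pos, new_normal, new_conf

# Example: one surfel observed a second time from a slightly shifted view.
s = (np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]), 1.0)
s = fuse_surfel(s, (np.array([0.0, 0.0, 1.02]), np.array([0.0, 0.1, 0.995])))
```

    With equal weights the fused position is the simple average of the two observations, and the confidence grows with every fusion, so later outliers perturb a well-observed surfel less.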

    Unleashing the Power of Depth and Pose Estimation Neural Networks by Designing Compatible Endoscopic Images

    Depth and pose estimation frameworks trained on unannotated datasets have emerged as an effective pathway to success in endoscopic navigation. Most current techniques are dedicated to developing more advanced neural networks to improve accuracy. However, existing methods ignore the special properties of endoscopic images, resulting in an inability to fully unleash the power of neural networks. In this study, we conduct a detailed analysis of the properties of endoscopic images and improve the compatibility of images and neural networks, to unleash the power of current neural networks. First, we introduce the Mask Image Modelling (MIM) module, which inputs partial image information instead of complete image information, allowing the network to recover global information from partial pixel information. This enhances the network's ability to perceive global information and alleviates the local overfitting that local artifacts cause in convolutional neural networks. Second, we propose a lightweight neural network to enhance endoscopic images, explicitly improving the compatibility between images and neural networks. Extensive experiments are conducted on three public datasets and one in-house dataset, and the proposed modules improve baselines by a large margin. Furthermore, the enhanced images we propose, which have higher network compatibility, can serve as an effective data augmentation method: they yield more stable feature points in traditional feature-point matching tasks and achieve outstanding performance.
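    The MIM idea of feeding the network partial rather than complete image information can be sketched as random patch masking. The patch size and mask ratio below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mask_patches(image, patch=16, mask_ratio=0.6, seed=0):
    """Zero out a random subset of non-overlapping patches so the network
    only sees partial pixel information. Patch size and mask ratio are
    illustrative assumptions; assumes the image height and width are
    divisible by `patch`."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    grid_h, grid_w = h // patch, w // patch
    n_patches = grid_h * grid_w
    masked = image.copy()
    hidden = rng.choice(n_patches, size=int(n_patches * mask_ratio),
                        replace=False)
    for i in hidden:
        r, c = divmod(int(i), grid_w)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return masked

img = np.ones((64, 64), dtype=np.float32)  # stand-in for an endoscopic frame
out = mask_patches(img)
```

    A reconstruction network trained on such inputs must infer the hidden patches from surrounding context, which is what pushes it toward global rather than purely local features.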

    Transfer Learning Based Deep Neural Network for Detecting Artefacts in Endoscopic Images

    Endoscopy is typically used to visualize various parts of the digestive tract. The technique is well suited to detecting abnormalities like cancer or polyps, taking a tissue sample (biopsy), or cauterizing a bleeding vessel. During the procedure, video and images are generated; they can be affected by eight different artefact types: saturation, specularity, blood, blur, bubbles, contrast, instrument, and miscellaneous artefacts such as floating debris and chromatic aberration. Frames affected by artefacts are mostly discarded, as the clinician can extract no valuable information from them, which also hampers post-processing steps. Based on the transfer learning approach, three state-of-the-art deep learning models, namely YOLOv3, YOLOv4 and Faster R-CNN, were trained with images from the EAD public datasets and a custom dataset of endoscopic images of Indian patients annotated for the artefacts mentioned above. The training images were data-augmented and used to train all three artefact detectors. The predictions of the artefact detectors are combined into an ensemble model whose results outperform existing works in the literature, obtaining a mAP score of 0.561 and an IoU score of 0.682. An inference time of 80.4 ms was recorded, which is the best reported in the literature.
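    The IoU score reported above measures the overlap between predicted and ground-truth boxes; a minimal sketch of the standard computation:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    # Corners of the intersection rectangle (empty if boxes are disjoint).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

    A perfect detection scores 1.0, disjoint boxes score 0.0, and a prediction shifted by half its width lands well below 0.5, which is why IoU thresholds are also used to decide whether a detection counts as a true positive in mAP.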

    Sistema de Simulación de la Iluminación Abdominal Basado en Mini Robots (Abdominal Illumination Simulation System Based on Mini Robots)

    Introduction: This document presents a system that simulates the illumination of the abdominal scene in laparoscopic operations using mini robots. The mini robots would be magnetically attached to the abdominal cavity and manipulated by an external robot arm. Two algorithms are tested in this system: one that moves the mini robot according to the movement of the endoscope, and another that moves it based on an analysis of the image captured from the scene. Objective: To contribute to the illumination of the surgical scene by means of mini robots magnetically attached to the abdominal cavity. Methodology: A software tool was developed using Unity3D, which simulates the interior of the abdomen in laparoscopic operations, adding a new light source: a light-carrying mini robot magnetically anchored to the abdominal wall. The mini robot has two different movements to illuminate the scene: one depends on the movement of the endoscope and the other on image analysis. Results: Tests were performed with a representation of the real environment and compared with tests in the built tool, obtaining similar results and showing the potential of a mini robot to provide additional lighting to the surgeon when necessary. Conclusions: The designed algorithm allows a mini robot magnetically anchored to the abdominal wall to move to low-light areas following two options: a geometric relationship or movement driven by image analysis.
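    The image-analysis option can be sketched as picking the dimmest cell of a coarse brightness grid as the next aiming point for the light robot. The grid size, function name, and synthetic frame below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def darkest_region_center(image, grid=4):
    """Split a grayscale view into a coarse grid and return the pixel
    centre of the dimmest cell, i.e. where a light-carrying mini robot
    could aim next. The grid size is an illustrative assumption."""
    h, w = image.shape
    cell_h, cell_w = h // grid, w // grid
    # Mean brightness of every grid cell.
    means = np.array([[image[r * cell_h:(r + 1) * cell_h,
                             c * cell_w:(c + 1) * cell_w].mean()
                       for c in range(grid)] for r in range(grid)])
    r, c = np.unravel_index(np.argmin(means), means.shape)
    return (int(r) * cell_h + cell_h // 2, int(c) * cell_w + cell_w // 2)

# Example: a bright frame whose lower-right corner is poorly lit.
frame = np.full((64, 64), 200.0)
frame[48:, 48:] = 10.0
```

    On this synthetic frame the dimmest cell is the lower-right one, so the returned target is its centre, (56, 56); mapping that pixel target to a robot pose would then use the geometric relationship between camera and abdominal wall.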

    Laparoscopic Image Recovery and Stereo Matching

    Laparoscopic imaging plays a significant role in minimally invasive surgical procedures. However, laparoscopic images often suffer from insufficient and irregular light sources, specular highlight surfaces, and a lack of depth information. These problems can negatively influence surgeons during surgery and lead to erroneous visual tracking and potential surgical risks. Thus, developing effective image-processing algorithms for laparoscopic vision recovery and stereo matching is of significant importance. Most related algorithms are effective on natural images but less effective on laparoscopic images. The first purpose of this thesis is to restore low-light laparoscopic vision, for which an effective image enhancement method is proposed by identifying different illumination regions and designing enhancement criteria for the desired image quality. This method can enhance the low-light region while reducing noise amplification during the enhancement process. In addition, this thesis also proposes a simplified Retinex optimization method for non-uniform illumination enhancement. By integrating prior information on the illumination and reflectance into the optimization process, this method can significantly enhance the dark region while preserving naturalness, texture details, and image structure. Moreover, because the total variation term is replaced with two ℓ2-norm terms, the proposed algorithm has a significant computational advantage. Second, a global optimization method for specular highlight removal from a single laparoscopic image is proposed. This method consists of a modified dichromatic reflection model and a novel diffuse chromaticity estimation technique. By exploiting the limited color variation of the laparoscopic image, the estimated diffuse chromaticity can approximate the true diffuse chromaticity, which allows the specular highlight to be removed effectively while preserving texture detail.
Third, a robust edge-preserving stereo matching method is proposed, based on sparse feature matching, left-right illumination equalization, and refined disparity optimization. The sparse feature matching and illumination equalization techniques provide a good disparity map initialization so that the refined disparity optimization can quickly obtain an accurate disparity map. This approach is particularly promising on surgical tool edges, smooth soft tissues, and surfaces with strong specular highlights.
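    As a point of reference for the stereo-matching component, a textbook sum-of-absolute-differences (SAD) block matcher along scanlines can be sketched as follows. This is only a naive baseline of the kind such initialization-plus-refinement methods improve upon, not the thesis's algorithm.

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Naive sum-of-absolute-differences block matching along scanlines:
    for each left-image block, search leftward in the right image for the
    best-matching block and record the horizontal shift (disparity)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # Clamp the search range so candidate windows stay in-bounds.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

    On textureless soft tissue or specular highlights many candidate shifts have nearly identical SAD cost, which is exactly where such a naive matcher fails and where feature-based initialization and illumination equalization help.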