6 research outputs found

    Robust endoscopic image mosaicking via fusion of multimodal estimation

    Get PDF
    We propose an endoscopic image mosaicking algorithm that is robust to lighting condition changes, specular reflections, and feature-less scenes. These conditions are especially common in minimally invasive surgery, where the light source moves with the camera to dynamically illuminate close-range scenes. This makes it difficult for a single image registration method to robustly track camera motion and generate consistent mosaics of the expanded surgical scene across different and heterogeneous environments. Instead of relying on one specialised feature extractor or image registration method, we propose to fuse different image registration algorithms according to their uncertainties, formulating the problem as affine pose graph optimisation. This allows combining landmarks, dense intensity registration, and learning-based approaches in a single framework. To demonstrate our application we consider deep learning-based optical flow, hand-crafted features, and intensity-based registration; however, the framework is general and could take as input other sources of motion estimation, including other sensor modalities. We validate the performance of our approach on three datasets with very different characteristics to highlight its generalisability, demonstrating the advantages of our proposed fusion framework. While each individual registration algorithm eventually fails drastically on certain surgical scenes, the fusion approach flexibly determines which algorithms to use, and in which proportion, to more robustly obtain consistent mosaics.
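    As a rough illustration of the fusion idea (a simplified stand-in for the paper's affine pose graph optimisation, not its actual solver), the simplest uncertainty-weighted combination of several affine motion estimates is a precision-weighted average. All method names, parameter values, and covariances below are hypothetical:

    ```python
    import numpy as np

    def fuse_affine_estimates(estimates, covariances):
        """Fuse several 6-parameter affine motion estimates by
        precision (inverse-covariance) weighting: the more certain
        a registration method is, the more it contributes."""
        precisions = [np.linalg.inv(c) for c in covariances]
        total = sum(precisions, np.zeros((6, 6)))
        weighted = sum(p @ e for p, e in zip(precisions, estimates))
        return np.linalg.solve(total, weighted)

    # Hypothetical estimates from three registration methods
    # (optical flow, features, intensity), each with its own uncertainty.
    flow = np.array([1.0, 0.0, 2.0, 0.0, 1.0, 3.0])
    feat = np.array([1.1, 0.0, 1.8, 0.0, 0.9, 3.2])
    inten = np.array([0.9, 0.1, 2.2, -0.1, 1.1, 2.8])
    covs = [np.eye(6) * s for s in (0.1, 0.5, 1.0)]  # flow most confident here
    fused = fuse_affine_estimates([flow, feat, inten], covs)
    ```

    With isotropic covariances this reduces to a weighted mean; the pose graph formulation in the paper additionally enforces consistency across many frames at once.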

    EnViSoRS: Enhanced Vision System for Robotic Surgery. A User-Defined Safety Volume Tracking to Minimize the Risk of Intraoperative Bleeding

    Get PDF
    In abdominal surgery, intra-operative bleeding is one of the major complications that affect the outcome of minimally invasive surgical procedures. One of the causes is attributed to accidental damage to arteries or veins, and one of the possible risk factors falls on the surgeon's skills. This paper presents the development and application of an Enhanced Vision System for Robotic Surgery (EnViSoRS), based on a user-defined Safety Volume (SV) tracking to minimise the risk of intra-operative bleeding. It aims at enhancing the surgeon's capabilities by providing Augmented Reality (AR) assistance towards the protection of vessels from injury during the execution of surgical procedures with a robot. The core of the framework consists of: (i) a hybrid tracking algorithm (LT-SAT tracker) that robustly follows a user-defined Safety Area (SA) over the long term; (ii) a dense soft tissue 3D reconstruction algorithm, necessary for the computation of the SV; (iii) AR features for visualisation of the SV to be protected and of a graphical gauge indicating the current distance between the instruments and the reconstructed surface. EnViSoRS was integrated with a commercial robotic surgery system (the dVRK system) for testing and validation. The experiments aimed at demonstrating the accuracy, robustness, performance, and usability of EnViSoRS during the execution of a simulated surgical task on a liver phantom. Results show an overall accuracy in accordance with surgical requirements (< 5 mm), and high robustness in the computation of the SV in terms of precision and recall of its identification. The optimisation strategy implemented to speed up the computational time is also described and evaluated, providing an AR feature update rate of up to 4 fps without impacting the real-time visualisation of the stereo endoscopic video.
    Finally, qualitative results regarding system usability indicate that the proposed system integrates well with the commercial surgical robot and shows real potential to offer useful assistance during real surgeries.
    Penza, Veronica; De Momi, Elena; Enayati, Nima; Chupin, Thibaud; Ortiz, Jesús; Mattos, Leonardo S.
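    At its core, the distance gauge described above reduces to a nearest-point query between the tracked instrument tip and the reconstructed surface. The following is a minimal sketch of that computation, not the EnViSoRS implementation; the brute-force query and all coordinates (in mm) are illustrative assumptions:

    ```python
    import numpy as np

    def gauge_distance(tip, surface_points):
        """Minimum Euclidean distance from an instrument tip to a
        reconstructed surface point cloud -- the quantity a graphical
        distance gauge could display."""
        d = np.linalg.norm(surface_points - tip, axis=1)
        return float(d.min())

    # Hypothetical reconstructed surface: a flat patch at z = 50 mm.
    xs, ys = np.meshgrid(np.linspace(-20, 20, 41), np.linspace(-20, 20, 41))
    surface = np.stack([xs.ravel(), ys.ravel(), np.full(xs.size, 50.0)], axis=1)
    tip = np.array([0.0, 0.0, 45.0])  # instrument tip 5 mm above the patch
    dist = gauge_distance(tip, surface)  # 5.0 mm
    ```

    A real system would use a spatial index (e.g. a k-d tree) rather than a brute-force scan to sustain interactive update rates on dense reconstructions.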

    Medical Image Analysis using Deep Relational Learning

    Full text link
    In the past ten years, with the help of deep learning, especially the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively use the relational information between various tissues or organs in medical images is still a very challenging problem, and it has not been fully studied. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multi Modal Brain Tumor Segmentation 2017 (BraTS2017) and Multi Modal Brain Tumor Segmentation 2018 (BraTS2018) data sets. Subsequently, we propose a new hierarchical homography estimation network to achieve accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. We use the UCL Fetoscopy Placenta dataset to conduct experiments, and our hierarchical homography estimation network outperforms the other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames.
    Comment: arXiv admin note: substantial text overlap with arXiv:2007.0778
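    Hierarchical estimation aside, stitching frames into a mosaic from pairwise homographies comes down to chaining them into a common reference frame. A minimal sketch of that chaining step, under the illustrative convention that each pairwise homography maps frame i into frame i+1 (conventions vary, and this is not the thesis's network):

    ```python
    import numpy as np

    def compose_to_reference(pairwise_H):
        """Chain pairwise homographies H_i (frame i -> frame i+1) into
        homographies mapping every frame into the first frame's plane,
        the basic accumulation step of homography-based mosaicking."""
        H = np.eye(3)
        to_ref = [H.copy()]
        for Hi in pairwise_H:
            H = H @ np.linalg.inv(Hi)   # frame i+1 -> frame 0
            to_ref.append(H / H[2, 2])  # keep the scale normalised
        return to_ref

    # Toy example: two pure 10-pixel translations between adjacent frames.
    T = lambda t: np.array([[1.0, 0.0, t], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    maps = compose_to_reference([T(10.0), T(10.0)])
    ```

    Each frame would then be warped into the reference plane with its accumulated homography and blended; drift accumulates along the chain, which is one reason learned, frame-aware estimators help.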

    Visual tracking under extreme illumination changes using the sum of conditional variance

    Get PDF
    Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Computer Science, Florianópolis, 2014.
    Abstract: Direct visual tracking is currently solved mainly with gradient descent optimization. The speed of convergence of these techniques allows the use of transformation models with many degrees of freedom. The most popular similarity measure for direct tracking is the Sum of Squared Differences, even though this measure is not robust to illumination changes in the scene. These changes, when left uncompensated, can lead to instabilities in the convergence of the algorithms. One technique for compensating illumination changes uses a parametric illumination model, which increases the number of parameters to be computed. Since most applications that use direct visual tracking need their results delivered in real time, the addition of the illumination model can hinder their performance. A novel direct visual tracking approach is presented in this work, able to cope with extreme illumination conditions. Using the Sum of Conditional Variance as a base, the proposed method uses sub-images to compensate for extreme illumination configurations. The proposed method reduces the computational burden when compared to similar approaches in the literature. Experimental results show that the method is 57.5% faster on average when dealing with color sequences.
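    The Sum of Conditional Variance mentioned above adapts the template's intensities to the current image via a conditional expectation before taking a sum of squared differences, which makes it insensitive to global intensity remappings. A minimal sketch under the assumption of grayscale images with values in [0, 1] (bin count and images are illustrative, and this omits the dissertation's sub-image scheme):

    ```python
    import numpy as np

    def scv(template, image, bins=32):
        """Sum of Conditional Variance between two equal-sized grayscale
        images in [0, 1]: replace each template pixel by the conditional
        expectation E[image intensity | template bin], then take the SSD
        of the image against that adapted template."""
        t_idx = np.clip((template * bins).astype(int), 0, bins - 1)
        i_idx = np.clip((image * bins).astype(int), 0, bins - 1)
        joint = np.zeros((bins, bins))
        np.add.at(joint, (t_idx.ravel(), i_idx.ravel()), 1)  # joint histogram
        centers = (np.arange(bins) + 0.5) / bins
        counts = joint.sum(axis=1)
        expected = np.where(counts > 0,
                            joint @ centers / np.maximum(counts, 1), 0.0)
        adapted = expected[t_idx]  # template remapped to the image's intensities
        return float(((image - adapted) ** 2).sum())

    # Toy check: an inverted copy simulates an extreme illumination change,
    # yet the SCV stays at zero because the expectation absorbs the remap.
    tpl = np.array([[0.25, 0.75], [0.25, 0.75]])
    inv = 1.0 - tpl
    ```

    Because the adapted template is recomputed from the joint histogram at each evaluation, no explicit illumination parameters need to be optimised, which is the computational advantage the dissertation builds on.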

    Aldo von Wangenheim

    Get PDF