
    Polyp Segmentation in Colonoscopy Images with Convolutional Neural Networks

    The thesis looks at approaches to the segmentation of polyps in colonoscopy images. The aim was to investigate and develop methods that are robust, accurate and computationally efficient, and which can compete with the current state of the art in polyp segmentation. Colorectal cancer is one of the leading causes of cancer deaths worldwide. To decrease mortality, an assessment of polyp malignancy is performed during colonoscopy examination so that polyps can be removed at an early stage. In current routine clinical practice, polyps are detected and delineated manually in colonoscopy images by highly trained clinicians. To automate these processes, machine learning and computer vision techniques have been utilised. They have been shown to improve polyp detectability and segmentation objectivity. However, polyp segmentation is a very challenging task due to the inherent variability of polyp morphology and colonoscopy image appearance. This research considers a range of approaches to polyp segmentation, seeking out those that offer the best compromise between accuracy and computational complexity. Based on an analysis of existing machine learning and polyp image segmentation techniques, a novel hybrid deep learning segmentation method is proposed to alleviate the impact of the challenges stated above on polyp segmentation. The method consists of two fully convolutional networks. The first proposed network is based on a compact architecture with large receptive fields and multiple classification paths. The method performs well on most images, accurately segmenting polyps of diverse morphology and appearance. However, this network is prone to misdetection of very small polyps. To solve this problem, a second network is proposed, which primarily aims to improve sensitivity to small polyp details by emphasising low-level image features. In order to fully utilise the information contained in the available training dataset, comprehensive data augmentation techniques are adopted. To further improve the performance of the proposed segmentation methods, test-time data augmentation is also implemented. A comprehensive multi-criterion analysis of the proposed methods is provided. The results demonstrate that the new methodology has better accuracy and robustness than the current state of the art, as evidenced by its outstanding performance at the 2017 and 2018 GIANA polyp segmentation challenges.
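    To illustrate the test-time data augmentation mentioned above, here is a minimal sketch, assuming a trained segmentation network exposed as a `model` callable that maps an HxWx3 image to an HxW probability map; the callable name and the flip set are illustrative assumptions, not the thesis's implementation:

        import numpy as np

        def predict_with_tta(model, image):
            """Average segmentation predictions over flipped copies of the input,
            undoing each flip on the output probability map before averaging."""
            flips = [
                (lambda x: x,             lambda y: y),             # identity
                (lambda x: x[:, ::-1],    lambda y: y[:, ::-1]),    # horizontal flip
                (lambda x: x[::-1, :],    lambda y: y[::-1, :]),    # vertical flip
                (lambda x: x[::-1, ::-1], lambda y: y[::-1, ::-1]), # both axes
            ]
            probs = [undo(model(apply(image))) for apply, undo in flips]
            return np.mean(probs, axis=0)

        # usage with a hypothetical trained network:
        # mask = predict_with_tta(trained_fcn, colonoscopy_frame) > 0.5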

    Remote access computed tomography colonography

    This thesis presents a novel framework for remote access Computed Tomography Colonography (CTC). The proposed framework consists of several integrated components: medical image data delivery, 2D image processing, 3D visualisation, and feedback provision. Medical image data sets are notoriously large and preserving the integrity of the patient data is essential. This makes real-time delivery and visualisation a key challenge. The main contribution of this work is the development of an efficient, lossless compression scheme to minimise the size of the data to be transmitted, thereby alleviating transmission time delays. The scheme utilises prior knowledge of anatomical information to divide the data into specific regions. An optimised compression method for each anatomical region is then applied. An evaluation of this compression technique shows that the proposed ‘divide and conquer’ approach significantly improves upon the level of compression achieved using more traditional global compression schemes. Another contribution of this work resides in the development of an improved volume rendering technique that provides real-time 3D visualisations of regions within CTC data sets. Unlike previous hardware acceleration methods, which rely on dedicated devices, this approach employs a series of software acceleration techniques based on the characteristic properties of CTC data. A quantitative and qualitative evaluation indicates that the proposed method achieves real-time performance on a low-cost PC platform without sacrificing any image quality. Fast data delivery and real-time volume rendering represent the key features that are required for remote access CTC. These features are ultimately combined with other relevant CTC functionality to create a comprehensive, high-performance CTC framework, which makes remote access CTC feasible, even in the case of standard Web clients with low-speed data connections.
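    As a rough illustration of the ‘divide and conquer’ compression idea, here is a minimal sketch under simplifying assumptions: a label map assigning each voxel to an anatomical region is assumed to be available to both sender and receiver, and zlib stands in for the optimised per-region codecs developed in the thesis:

        import zlib
        import numpy as np

        def compress_by_region(volume, labels):
            """Compress each labelled anatomical region of a CT volume separately.
            `volume` is an int16 array and `labels` an integer label map of the
            same shape; the label map itself is assumed to be shared separately."""
            streams = {}
            for region in np.unique(labels):
                voxels = volume[labels == region]   # flattened voxels of one region
                streams[int(region)] = zlib.compress(voxels.tobytes(), 9)
            return streams

        def decompress_region(streams, label, dtype=np.int16):
            """Recover the voxel values of one region without decoding the others;
            their spatial positions are given by the shared label map."""
            return np.frombuffer(zlib.decompress(streams[label]), dtype=dtype)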

    Parallel centerline extraction on the GPU

    Centerline extraction is important in a variety of visualization applications including shape analysis, geometry processing, and virtual endoscopy. Centerlines allow accurate measurements of length along winding tubular structures, assist automatic virtual navigation, and provide a path-planning system to control the movement and orientation of a virtual camera. However, efficiently computing centerlines with the desired accuracy has been a major challenge. Existing centerline methods are either not fast enough or not accurate enough for interactive application to complex 3D shapes. Some methods based on distance mapping are accurate, but these are sequential algorithms which have limited performance when running on the CPU. To our knowledge, there is no accurate parallel centerline algorithm that can take advantage of modern many-core parallel computing resources, such as GPUs, to perform automatic centerline extraction from large data volumes at interactive speed and with high accuracy. In this paper, we present a new parallel centerline extraction algorithm suitable for implementation on a GPU to produce highly accurate, 26-connected, one-voxel-thick centerlines at interactive speed. The resulting centerlines are as accurate as those produced by a state-of-the-art sequential CPU method [40], while being computed hundreds of times faster. Applications to fly-through path planning and virtual endoscopy are discussed. Experimental results demonstrating centeredness, robustness and efficiency are presented.
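    For orientation, the sketch below is a simplified, sequential illustration of the distance-mapping idea behind such methods, not the paper's GPU-parallel algorithm: a distance transform of the segmented object defines a traversal cost that is cheap near the centre, so a cheapest path between two endpoints stays centred in the tubular structure. The 26-neighbourhood and the particular cost weighting are illustrative assumptions:

        import heapq
        import numpy as np
        from scipy import ndimage

        def centerline_path(mask, start, end):
            """Cheapest 26-connected voxel path from `start` to `end` inside the
            binary 3D `mask`, with cost inversely related to the distance from the
            boundary so the path stays centred (assumes `end` is reachable)."""
            dist = ndimage.distance_transform_edt(mask)   # distance to the boundary
            cost = 1.0 / (dist + 1e-3)                    # cheap in the centre
            offsets = [(dz, dy, dx)
                       for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dz, dy, dx) != (0, 0, 0)]
            best, prev, heap = {start: 0.0}, {}, [(0.0, start)]
            while heap:
                c, vox = heapq.heappop(heap)
                if vox == end:
                    break
                if c > best.get(vox, np.inf):
                    continue                              # stale heap entry
                for dz, dy, dx in offsets:
                    n = (vox[0] + dz, vox[1] + dy, vox[2] + dx)
                    if not all(0 <= n[i] < mask.shape[i] for i in range(3)) or not mask[n]:
                        continue
                    nc = c + cost[n]
                    if nc < best.get(n, np.inf):
                        best[n], prev[n] = nc, vox
                        heapq.heappush(heap, (nc, n))
            path, vox = [end], end                        # walk back to the start
            while vox != start:
                vox = prev[vox]
                path.append(vox)
            return path[::-1]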

    Learning-based depth and pose prediction for 3D scene reconstruction in endoscopy

    Colorectal cancer is the third most common cancer worldwide. Early detection and treatment of pre-cancerous tissue during colonoscopy is critical to improving prognosis. However, navigating within the colon and inspecting the endoluminal tissue comprehensively are challenging, and success in both varies based on the endoscopist's skill and experience. Computer-assisted interventions in colonoscopy show much promise in improving navigation and inspection. For instance, 3D reconstruction of the colon during colonoscopy could promote more thorough examinations and increase adenoma detection rates, which are associated with improved survival rates. Given the stakes, this thesis seeks to advance the state of research from feature-based traditional methods closer to a data-driven 3D reconstruction pipeline for colonoscopy. More specifically, this thesis explores different methods that improve subtasks of learning-based 3D reconstruction. The main tasks are depth prediction and camera pose estimation. As training data is unavailable, the author, together with her co-authors, proposes and publishes several synthetic datasets and promotes domain adaptation models to improve applicability to real data. We show, through extensive experiments, that our depth prediction methods produce more robust results than previous work. Our pose estimation network trained on our new synthetic data outperforms self-supervised methods on real sequences. Our box embeddings allow us to interpret the geometric relationship and scale difference between two images of the same surface without the need for feature matches that are often unobtainable in surgical scenes. Together, the methods introduced in this thesis help work towards a complete, data-driven 3D reconstruction pipeline for endoscopy.
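    As a rough illustration of the depth prediction subtask, here is a minimal encoder-decoder sketch in PyTorch; the architecture, tensor shapes and layer sizes are assumptions for illustration only and not the networks proposed in the thesis:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TinyDepthNet(nn.Module):
            """Minimal monocular depth predictor: three stride-2 convolutions
            encode the frame, three transposed convolutions decode it back to a
            one-channel depth map at the input resolution."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                )

            def forward(self, x):                                 # x: (B, 3, H, W) endoscopy frames
                return F.softplus(self.decoder(self.encoder(x)))  # positive (B, 1, H, W) depths

        # usage on synthetic or real frames (hypothetical tensors):
        # depth = TinyDepthNet()(torch.randn(4, 3, 256, 256))     # -> (4, 1, 256, 256)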

    Framework for the detection and classification of colorectal polyps

    In this thesis we propose a framework for the detection and classification of colorectal polyps to assist endoscopists in bowel cancer screening. Such a system will help reduce not only the miss rate of possibly malignant polyps during screening but also the number of unnecessary polypectomies for which histopathologic analysis could be spared. Our polyp detection scheme is based on a cascade filter to pre-process the incoming video frames, select a group of candidate polyp regions and then proceed to algorithmically isolate the most probable polyps based on their geometry. We also tested this system on a number of endoscopic and capsule endoscopy videos collected with the help of our clinical collaborators. Furthermore, we developed and tested a classification system for distinguishing cancerous colorectal polyps from non-cancerous ones. By analyzing the surface vasculature of high-magnification polyp images from two endoscopic platforms, we extracted a number of features based primarily on vessel contrast, orientation and colour. The feature space was then filtered so as to leave only the most relevant subset, and this was subsequently used to train our classifier. In addition, we examined the scenario of splitting up the polyp surface into patches and including only the most feature-rich areas in our classifier instead of the surface as a whole. The stability of our feature space relative to patch size was also examined to ensure reliable and robust classification. In addition, we devised a scale selection strategy to minimize the effect of inconsistencies in magnification and geometric polyp size between samples. Lastly, several techniques were also employed to ensure that our results will generalise well in real-world practice. We believe this to be a solid step in forming a toolbox designed to aid endoscopists not only in the detection but also in the optical biopsy of colorectal polyps during in vivo colonoscopy.
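    The sketch below illustrates the patch-feature-plus-classifier pattern described above under simplifying assumptions: the contrast, orientation and colour statistics are toy stand-ins for the vasculature features developed in the thesis, and a standard scikit-learn SVM replaces the actual classifier:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def patch_features(patch):
            """Toy surface descriptors for one RGB polyp patch: rough contrast,
            spread of gradient orientations, and mean colour per channel."""
            gray = patch.mean(axis=2)
            gy, gx = np.gradient(gray)
            return np.array([
                gray.std(),                       # contrast
                np.arctan2(gy, gx).std(),         # orientation spread
                patch[..., 0].mean(),             # mean red
                patch[..., 1].mean(),             # mean green
                patch[..., 2].mean(),             # mean blue
            ])

        def train_patch_classifier(patches, labels):
            """Fit an RBF SVM on per-patch features; `labels` mark cancerous (1)
            versus non-cancerous (0) patches."""
            X = np.stack([patch_features(p) for p in patches])
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
            clf.fit(X, labels)
            return clf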

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.