
    Enhancing Informative Frame Filtering by Water and Bubble Detection in Colonoscopy Videos

    Colonoscopy has contributed to a marked decline in the number of colorectal cancer related deaths. However, recent data suggest that there is a significant (4-12%) miss-rate for the detection of even large polyps and cancers. To address this, we have been investigating an ‘automated feedback system’ which informs the endoscopist of possible sub-optimal inspection during colonoscopy. A fundamental step of this system is to distinguish non-informative frames from informative ones. Existing methods cannot classify water/bubble frames as non-informative, even though such frames carry no useful visual information about the colon mucosa. In this paper, we propose a novel texture feature based on the accumulation of pixel differences, which detects water and bubble frames with very high accuracy and significantly less processing time. The experimental results show that the proposed feature achieves more than 93% overall accuracy in almost half the processing time of existing methods.
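The core idea of a texture feature built by accumulating pixel differences can be sketched as below; the function name, the normalisation by frame area, and the decision threshold are illustrative assumptions, not the paper's published formulation:

```python
import numpy as np

def pixel_diff_energy(gray, threshold=10.0):
    """Accumulate absolute horizontal and vertical pixel differences.

    Water/bubble frames tend to be smooth, so a low accumulated-difference
    score can flag a frame as non-informative. Threshold is a hypothetical
    placeholder that would need tuning on labelled frames.
    """
    g = gray.astype(np.float64)
    dx = np.abs(np.diff(g, axis=1)).sum()   # horizontal neighbour differences
    dy = np.abs(np.diff(g, axis=0)).sum()   # vertical neighbour differences
    score = (dx + dy) / g.size              # normalise by frame area
    return score, bool(score < threshold)   # (feature value, "non-informative" flag)

# A flat (water-like) frame scores near zero; a noisy frame scores high.
flat = np.full((64, 64), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
```

Because it needs only one pass of subtractions over the frame, this style of feature is cheap, consistent with the paper's claim of roughly halved processing time.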

    Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?

    © 2018 Awan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Spectral imaging has been shown to provide more discriminative information than RGB imaging and has been proposed for a range of problems. Many studies have demonstrated its potential for abnormality detection in histopathology images, but there have also been discrepancies among them. Numerous multispectral methods have been proposed for histopathology images, yet the significance of using the whole multispectral cube versus a subset of bands or a single band remains debatable. We performed a comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for detecting abnormality in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We analysed spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and obtained results comparable to those of the state-of-the-art method and a convolutional neural network based method. Moreover, we explored the relationship between the number of bands and problem complexity, and found that a higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between the infra-red and visual spectra: incorporating the infra-red representation improves classification accuracy by 6%.
We also highlight the importance of how a dataset should be divided into training and test sets when evaluating histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images. This publication was made possible by a grant from the Qatar National Research Fund through National Priority Research Program (NPRP) No. 6-249-1-053. The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of the Qatar National Research Fund or Qatar University.
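The band-subset comparison described above amounts to computing per-band texture descriptors for different slices of the multispectral cube and comparing downstream classifier accuracy. A minimal sketch, where the descriptor choice (mean, standard deviation, gradient energy) is a simplified stand-in for the paper's texture descriptors:

```python
import numpy as np

def band_subset_features(cube, bands):
    """Simple per-band texture summary for an H x W x B multispectral cube.

    `bands` is any subset of band indices (visual or infra-red); varying
    this subset and comparing classification accuracy is how the
    band-count vs. task-complexity relationship can be explored.
    """
    feats = []
    for b in bands:
        img = cube[:, :, b].astype(np.float64)
        grad = np.abs(np.diff(img, axis=0)).mean() + np.abs(np.diff(img, axis=1)).mean()
        feats.extend([img.mean(), img.std(), grad])  # 3 descriptors per band
    return np.array(feats)

# Hypothetical cube: e.g. 7 visual bands plus 3 infra-red bands.
cube = np.random.default_rng(1).random((32, 32, 10))
single_band = band_subset_features(cube, [0])        # 3 features
whole_cube = band_subset_features(cube, range(10))   # 30 features
```

Feeding `single_band` vs. `whole_cube` vectors into the same ensemble classifier isolates the contribution of spectral coverage from that of the descriptor itself.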

    Computer Aided Dysplasia Grading for Barrett’s Oesophagus Virtual Slides

    Dysplasia grading in Barrett’s Oesophagus has been an issue among pathologists worldwide. Despite the increasing number of sufferers every year, especially in Western populations, dysplasia in Barrett’s Oesophagus can only be graded by a trained pathologist through visual examination. We therefore present our work on extracting textural and spatial features from tissue regions. Our first approach extracts only the epithelial layer of the tissue, based on the grading rules used by pathologists. This is carried out by extracting sub-images of a certain window size along the tissue epithelial layer. The textural features of these sub-images were used to grade regions as dysplasia or not-dysplasia, achieving 82.5% AP with a precision of 0.82 and a recall of 0.86. We thereby overcame the ‘boundary-effect’ issue that has usually been avoided by cropping tissue images to exclude the boundary. Secondly, the textural and spatial features of the whole tissue region were investigated. Experiments were carried out using Grey Level Co-occurrence Matrices at the pixel level, with a brute-force experiment, to cluster patches based on their texture similarities. We then developed a texture-mapping technique that translates the spatial arrangement of tissue texture within a tissue region at the patch level. As a result, three binary decision tree models were developed from the texture-mapping image to grade each annotated region as dysplasia Grade 1, Grade 3 or Grade 5, with accuracies of 87.5%, 75.0% and 81.3% and kappa scores of 0.75, 0.5 and 0.63 respectively. A binary decision tree was then applied to the spatial arrangement of tissue texture types with respect to the epithelial layer to help grade the regions, achieving accuracies of 75.0%, 68.8% and 68.8% with kappa values of 0.5, 0.37 and 0.37 for dysplasia Grades 1, 3 and 5 respectively.
Based on these results, we conclude that the spatial information of tissue texture types with respect to the epithelial layer is a weaker predictor than the texture of the whole region. The binary decision tree grading models were then applied to a broader tissue area: the whole virtual pathology slide itself. The consensus grade for each tissue is calculated with a positivity table and scoring method. Finally, we present our own thresholded frequency method to grade virtual slides based on the frequency of grading occurrences, and the results were compared to the pathologist’s grading. A high agreement score with a kappa value of 0.80 was achieved, a marked improvement over simple frequency scoring, which reaches only 0.47.
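The pixel-level GLCM texture features underlying the patch clustering above can be illustrated with a small numpy implementation; the quantisation level, the single (dx, dy) offset, and the contrast statistic are illustrative choices, not the thesis's exact configuration:

```python
import numpy as np

def glcm(patch, levels=8, dx=1, dy=0):
    """Grey Level Co-occurrence Matrix for one pixel offset.

    Quantises an 8-bit patch to `levels` grey levels and counts how often
    each pair of levels co-occurs at the given offset, then normalises the
    counts so the matrix sums to 1.
    """
    q = (patch.astype(np.float64) / 256 * levels).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(m):
    """GLCM contrast: high for patches with abrupt grey-level changes."""
    i, j = np.indices(m.shape)
    return ((i - j) ** 2 * m).sum()

striped = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (8, 1))  # textured patch
uniform = np.zeros((8, 8), dtype=np.uint8)                        # flat patch
```

Statistics such as contrast or homogeneity, computed per patch, give the feature vectors that a clustering step or decision tree can then operate on.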

    Computational Models for Automated Histopathological Assessment of Colorectal Liver Metastasis Progression

    Histopathology imaging is a type of microscopy imaging commonly used for the micro-level clinical examination of a patient’s pathology. Due to the extremely large size of histopathology images, especially whole slide images (WSIs), it is difficult for pathologists to make a quantitative assessment by inspecting the details of a WSI. Hence, a computer-aided system is necessary to provide an objective and consistent assessment of the WSI for personalised treatment decisions. In this thesis, a deep learning framework for the automatic analysis of whole slide histopathology images is presented for the first time, aimed at the challenging task of assessing and grading colorectal liver metastasis (CRLM). Quantitative evaluation of a patient’s condition with CRLM is conducted by quantifying the different tissue components in resected tumorous specimens. This study mimics the visual examination process of human experts by focusing on three levels of information, the tissue level, cell level and pixel level, to achieve step-by-step segmentation of histopathology images. At the tissue level, patches with category information are utilised to analyse the WSIs. Both classification-based and segmentation-based approaches are investigated to locate the metastasis region and quantify different components of the WSI. For the classification-based method, different factors that might affect classification accuracy are explored using state-of-the-art deep convolutional neural networks (DCNNs). Furthermore, a novel network is proposed to merge information from different magnification levels so that contextual information supports the final decision. For the segmentation-based method, edge information from the image is integrated with the proposed fully convolutional neural network to further enhance the segmentation results.
    At the cell level, nuclei-related information is examined to tackle the challenge of inadequate annotations. The problem is approached from two aspects: a weakly supervised nuclei detection and classification method is presented to model the nuclei in CRLM by integrating a traditional image processing method with a variational auto-encoder (VAE), and a novel nuclei instance segmentation framework is proposed to boost the accuracy of nuclei detection and segmentation using transfer learning. Afterwards, a fusion framework is proposed to enhance the tissue-level segmentation results by leveraging the statistical and spatial properties of the cells. At the pixel level, the segmentation problem is tackled by introducing information from immunohistochemistry (IHC) stained images. Firstly, two data augmentation approaches, synthesis-based and transfer-based, are proposed to address the scarcity of pixel-level segmentation annotations. With paired images and masks obtained, an end-to-end model is trained to achieve pixel-level segmentation. Secondly, another novel weakly supervised approach based on a generative adversarial network (GAN) is proposed to explore the feasibility of transforming unpaired haematoxylin and eosin (HE) images into IHC stained images. Extensive experiments reveal that the virtually stained images can also be used for pixel-level segmentation.
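Merging magnification levels, as the tissue-level network does, typically means pairing each high-magnification patch with a co-centred, downsampled context view. A minimal sketch, where the fixed-stride downsampling stands in for reading a lower pyramid level of the WSI (in practice a library such as OpenSlide would be used):

```python
import numpy as np

def centred_context_patch(wsi, cx, cy, size, downsample):
    """Extract a target patch and a co-centred low-magnification context patch.

    Both outputs are `size` x `size`, but the context patch covers a field of
    view `downsample` times wider, supplying the surrounding-tissue context
    that a multi-magnification network can merge with the target patch.
    """
    half = size // 2
    target = wsi[cy - half:cy + half, cx - half:cx + half]
    big = size * downsample // 2
    context = wsi[cy - big:cy + big:downsample, cx - big:cx + big:downsample]
    return target, context

# Hypothetical single-channel WSI array and a central sampling point.
wsi = np.zeros((512, 512), dtype=np.uint8)
target, context = centred_context_patch(wsi, 256, 256, size=64, downsample=4)
```

The two equally sized patches can then be fed to parallel branches of a network and merged before the final decision layer.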