
    An Efficient Approach for Polyps Detection in Endoscopic Videos Based on Faster R-CNN

    Polyps have long been considered one of the major causes of colorectal cancer, a fatal disease worldwide, so early detection and recognition of polyps play a crucial role in clinical routines. Accurate diagnosis of polyps through physician-operated endoscopes is challenging, not only because of the varying expertise of physicians but also because of the inherent nature of endoscopic inspection. To facilitate this process, computer-aided techniques, ranging from conventional image processing to novel machine-learning-based approaches, have been designed for polyp detection in endoscopic videos and images. Among the proposed algorithms, deep learning-based methods take the lead across multiple metrics in evaluations of algorithmic performance. In this work, a highly effective model, the faster region-based convolutional neural network (Faster R-CNN), is implemented for polyp detection. In comparison with the reported results of state-of-the-art approaches to polyp detection, extensive experiments demonstrate that Faster R-CNN achieves highly competitive results and is an efficient approach for clinical practice.
    Comment: 6 pages, 10 figures, 2018 International Conference on Pattern Recognition
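    As a rough illustration only (not the authors' implementation), the sketch below shows how an off-the-shelf Faster R-CNN from torchvision could be adapted to single-class polyp detection; the two-class head (background vs. polyp), the input size and the dummy training step are assumptions.

```python
# Hypothetical sketch: adapting torchvision's Faster R-CNN to single-class
# (polyp vs. background) detection. Not the paper's code; the dataset and
# training loop are placeholders.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_polyp_detector(num_classes: int = 2):  # background + polyp
    # COCO-pretrained weights are downloaded; fine-tune only the box head here.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_polyp_detector()
model.train()
# One illustrative training step on a dummy endoscopic frame.
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 220.0, 260.0]]),
            "labels": torch.tensor([1])}]  # 1 = polyp
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
loss.backward()
```

    In practice the dummy frame and box would be replaced by annotated endoscopic video frames and the step wrapped in a full optimization loop.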

    PraNet: Parallel Reverse Attention Network for Polyp Segmentation

    Colonoscopy is an effective technique for detecting colorectal polyps, which are highly related to colorectal cancer. In clinical practice, segmenting polyps from colonoscopy images is of great importance because it provides valuable information for diagnosis and surgery. However, accurate polyp segmentation is a challenging task for two major reasons: (i) polyps of the same type vary in size, color and texture; and (ii) the boundary between a polyp and its surrounding mucosa is not sharp. To address these challenges, we propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images. Specifically, we first aggregate the features in high-level layers using a parallel partial decoder (PPD). Based on the combined feature, we then generate a global map as the initial guidance area for the following components. In addition, we mine boundary cues using a reverse attention (RA) module, which establishes the relationship between areas and boundary cues. Thanks to the recurrent cooperation mechanism between areas and boundaries, PraNet is capable of calibrating misaligned predictions, improving segmentation accuracy. Quantitative and qualitative evaluations on five challenging datasets across six metrics show that PraNet improves segmentation accuracy significantly and presents a number of advantages in terms of generalizability and real-time segmentation efficiency.
    Comment: Accepted to MICCAI 2020
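    For orientation, the following is a simplified, hypothetical PyTorch sketch of the reverse-attention idea described above; it is not the released PraNet code, and the channel sizes, convolution stack and bilinear upsampling are assumptions.

```python
# Simplified sketch of a reverse-attention (RA) block in the spirit of PraNet.
# Illustrative reimplementation only; architectural details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttention(nn.Module):
    def __init__(self, in_channels: int, mid_channels: int = 64):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, 1, 3, padding=1),
        )

    def forward(self, feat, coarse_map):
        # Upsample the coarse prediction to the feature resolution.
        coarse = F.interpolate(coarse_map, size=feat.shape[2:],
                               mode="bilinear", align_corners=False)
        # Reverse attention: emphasise regions the coarse map misses (boundaries).
        attn = 1.0 - torch.sigmoid(coarse)
        refined = self.convs(feat * attn)
        # Residual refinement of the coarse prediction.
        return refined + coarse

# Usage: refine a 1-channel coarse map with a high-level feature map.
ra = ReverseAttention(in_channels=256)
feat = torch.rand(1, 256, 44, 44)
coarse = torch.rand(1, 1, 11, 11)
refined = ra(feat, coarse)  # shape (1, 1, 44, 44)
```

    In the paper's design, several such RA blocks cooperate with the global map from the parallel partial decoder; the single block above only shows the area-to-boundary refinement step.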

    Decomposition of color wavelet with higher order statistical texture and convolutional neural network features set based classification of colorectal polyps from video endoscopy

    Gastrointestinal cancer is one of the leading causes of death across the world, and gastrointestinal polyps are considered precursors of this malignant cancer. To reduce the probability of cancer, early detection and removal of colorectal polyps is advisable. The most widely used diagnostic modality for colorectal polyps is video endoscopy, but diagnostic accuracy depends heavily on the doctor's experience, which is crucial for detecting polyps in many cases. Computer-aided polyp detection is promising for reducing the polyp miss rate and thus improving the accuracy of diagnosis. The proposed method first distinguishes polyp from non-polyp frames and then classifies colorectal polyps from endoscopic video using color wavelet features, higher-order statistical texture features and Convolutional Neural Network (CNN) features. The Gray Level Run Length Matrix (GLRLM) is used to derive higher-order statistical texture features in four directions (θ = 0°, 45°, 90°, 135°). The combined features are fed into a linear support vector machine (SVM) to train the classifier. The experimental results demonstrate that the proposed approach is promising and effective with a residual network architecture, achieving the best accuracy, sensitivity and specificity of 98.83%, 97.87% and 99.13%, respectively, for classification of colorectal polyps on standard public endoscopic video databases.
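    The sketch below illustrates only the handcrafted branch of such a pipeline, under stated assumptions: single-level color wavelet energies per RGB channel plus a minimal GLRLM computed for the 0° direction only, fed to a linear SVM. The CNN feature branch and the remaining three GLRLM directions are omitted for brevity, and the dummy frames and labels are placeholders rather than data from the paper.

```python
# Hypothetical sketch of a handcrafted-feature + linear SVM branch:
# color wavelet energies (pywt) and minimal GLRLM texture statistics.
import numpy as np
import pywt
from sklearn.svm import LinearSVC

def wavelet_energy_features(channel, wavelet="haar"):
    # Single-level 2-D wavelet decomposition; mean energy of each sub-band.
    cA, (cH, cV, cD) = pywt.dwt2(channel, wavelet)
    return [float(np.mean(np.square(b))) for b in (cA, cH, cV, cD)]

def glrlm_horizontal(gray, levels=8):
    # Minimal gray-level run-length matrix for the 0-degree direction.
    q = np.floor(gray.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    glrlm = np.zeros((levels, q.shape[1]), dtype=np.int64)
    for row in q:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                glrlm[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        glrlm[run_val, run_len - 1] += 1
    return glrlm

def glrlm_stats(glrlm):
    # A few standard run-length statistics: SRE, LRE, run percentage.
    runs = glrlm.sum()
    j = np.arange(1, glrlm.shape[1] + 1)
    sre = float((glrlm.sum(axis=0) / (j ** 2)).sum() / runs)
    lre = float((glrlm.sum(axis=0) * (j ** 2)).sum() / runs)
    rp = float(runs / (glrlm.sum(axis=0) * j).sum())
    return [sre, lre, rp]

def frame_features(rgb):
    feats = []
    for c in range(3):  # color wavelet features per RGB channel
        feats += wavelet_energy_features(rgb[:, :, c])
    feats += glrlm_stats(glrlm_horizontal(rgb.mean(axis=2)))
    return np.array(feats)

# Usage with dummy frames and labels (0 = non-polyp, 1 = polyp).
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(20, 64, 64, 3))
labels = np.array([0, 1] * 10)
X = np.stack([frame_features(f) for f in frames])
clf = LinearSVC().fit(X, labels)
```

    In the full method, CNN features extracted from a residual network would be concatenated with these handcrafted features before the SVM.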

    Application of Artificial Intelligence in Capsule Endoscopy: Where Are We Now?

    Unlike wired endoscopy, capsule endoscopy requires additional time for a clinical specialist to review the procedure and examine the lesions. To reduce this tedious review time and increase the accuracy of medical examinations, various artificial intelligence approaches for computer-aided diagnosis have been reported. Recently, deep learning-based approaches have been applied to many possible areas, showing greatly improved performance, especially for image-based recognition and classification. By reviewing recent deep learning-based approaches for clinical applications, we present the current status and future directions of artificial intelligence for capsule endoscopy.

    Rethinking the transfer learning for FCN based polyp segmentation in colonoscopy

    Beyond the complex nature of colonoscopy frames, with intrinsic frame-formation artefacts such as light reflections and a diversity of polyp types and shapes, the publicly available polyp segmentation training datasets are limited, small and imbalanced. Consequently, automated polyp segmentation with a deep neural network remains an open challenge because training on small datasets leads to overfitting. We propose a simple yet effective polyp segmentation pipeline that couples the segmentation (FCN) and classification (CNN) tasks. We find that interactive weight transfer between the dense and coarse vision tasks effectively mitigates overfitting, which motivates a new training scheme within our segmentation pipeline. Our method is evaluated on the CVC-EndoSceneStill and Kvasir-SEG datasets, achieving 4.34% and 5.70% Polyp-IoU improvements over the state-of-the-art methods on EndoSceneStill and Kvasir-SEG, respectively.
    Comment: 11 pages, 10 figures, submitted version
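    As a minimal sketch of the weight-transfer idea, assuming torchvision's ResNet-50 classifier and FCN-ResNet-50 segmenter as stand-ins for the paper's CNN and FCN (the actual architectures and training scheme are not reproduced here), encoder weights trained for classification can be copied into the segmentation backbone as shown below.

```python
# Illustrative sketch of transferring classification weights into a
# segmentation encoder, in the spirit of coupling CNN classification and
# FCN segmentation. Not the authors' pipeline; model choices are assumptions.
import torch
import torchvision

# 1) A classifier that would be trained (elsewhere) on polyp vs. non-polyp frames.
classifier = torchvision.models.resnet50(weights=None)  # pretrained weights in practice
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)
# ... classification training would happen here ...

# 2) An FCN segmenter whose encoder is initialised from the classifier.
segmenter = torchvision.models.segmentation.fcn_resnet50(
    weights=None, weights_backbone=None, num_classes=1)
missing, unexpected = segmenter.backbone.load_state_dict(
    classifier.state_dict(), strict=False)  # classifier's fc.* keys are ignored
print(f"transferred encoder weights (missing={len(missing)}, unexpected={len(unexpected)})")

# 3) One illustrative segmentation forward pass on a dummy frame.
segmenter.eval()
with torch.no_grad():
    frame = torch.rand(1, 3, 288, 384)
    mask_logits = segmenter(frame)["out"]  # (1, 1, 288, 384) polyp mask logits
```

    Alternating such transfers between the classification and segmentation stages is one way to realise the interactive training scheme the abstract alludes to.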