    An efficient method to classify GI tract images from WCE using visual words

    Digital images captured by Wireless Capsule Endoscopy (WCE) of a patient's gastrointestinal (GI) tract are used to detect abnormalities. The volume of WCE images is so large that reviewing the GI tract of a single patient can take around two hours, which is highly time-consuming and considerably increases healthcare costs. To overcome this problem, we propose a Visual Bag of Features (VBOF) method that combines the Scale Invariant Feature Transform (SIFT), the Center-Symmetric Local Binary Pattern (CS-LBP) and the Auto Color Correlogram (ACC). This combination of features captures interest-point, texture and color information in an image. The features computed for each image form a high-dimensional descriptor. The proposed descriptors are clustered with K-means into visual words, and a Support Vector Machine (SVM) is used to automatically classify multiple disease abnormalities of the GI tract. Finally, a post-processing scheme is applied to the classification results to validate the performance of multi-abnormality frame detection.
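
    As a rough illustration of this kind of pipeline, the sketch below builds a bag of visual words from SIFT descriptors, quantises them with K-means and trains an SVM on the resulting histograms; the CS-LBP and ACC descriptors described above would be concatenated in the same way but are omitted here, and all variable names are illustrative rather than taken from the paper.

```python
# Minimal visual bag-of-features (VBOF) sketch: SIFT descriptors are
# quantised into "visual words" with K-means, each image becomes a word
# histogram, and an SVM classifies the histograms. CS-LBP and ACC features
# (used in the paper) are assumed to be concatenated similarly and omitted.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(image_bgr):
    """Return the SIFT descriptors of one image (N x 128 array)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_vocabulary(all_descriptors, n_words=200):
    """Cluster the pooled descriptors into a visual vocabulary."""
    return KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(all_descriptors))

def bof_histogram(descriptors, vocabulary):
    """L1-normalised histogram of visual-word assignments for one image."""
    hist = np.zeros(vocabulary.n_clusters)
    if len(descriptors):
        for word in vocabulary.predict(descriptors):
            hist[word] += 1
    return hist / max(hist.sum(), 1)

# train_images / train_labels are assumed lists of WCE frames and labels:
# descs = [sift_descriptors(im) for im in train_images]
# vocab = build_vocabulary(descs)
# X = np.array([bof_histogram(d, vocab) for d in descs])
# clf = SVC(kernel="rbf").fit(X, train_labels)
```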

    Time-based self-supervised learning for Wireless Capsule Endoscopy

    State-of-the-art machine learning models, and especially deep learning ones, are significantly data-hungry; they require vast amounts of manually labeled samples to function correctly. However, in most medical imaging fields, obtaining such data can be challenging. Not only is the volume of data a problem, but so is the imbalance among its classes; it is common to have many more images of healthy patients than of those with pathology. Computer-aided diagnostic systems suffer from these issues and are usually over-designed to perform accurately. This work proposes self-supervised learning for wireless capsule endoscopy videos, introducing a custom-tailored method that does not initially need labels or class balance. We show that exploiting the inherent structure our method learns from the temporal axis improves the detection rate on several domain-specific applications, even under severe imbalance. State-of-the-art results are achieved in polyp detection, with 95.00 ± 2.09% Area Under the Curve, and 92.77 ± 1.20% accuracy on the CAD-CAP dataset.
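
    The abstract does not spell out the pretext task, so the sketch below is only an assumed stand-in: a common time-based self-supervised objective in which a network predicts whether two frames were sampled close together in the same capsule video. All class and variable names are hypothetical.

```python
# Illustrative temporal pretext task: learn frame embeddings by classifying
# whether two frames come from a small temporal window of the same video.
# This is an assumed example, not the paper's exact method.
import torch
import torch.nn as nn
import torchvision.models as models

class TemporalProximityNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # 512-d frame embeddings
        self.backbone = backbone
        self.head = nn.Linear(512 * 2, 2)    # near-in-time vs. far-in-time

    def forward(self, frame_a, frame_b):
        za = self.backbone(frame_a)
        zb = self.backbone(frame_b)
        return self.head(torch.cat([za, zb], dim=1))

# Pairs (frame_a, frame_b, label) are sampled from unlabelled videos:
# label = 1 if the frames lie within a small temporal window, else 0.
# After pretraining, the backbone is fine-tuned on the small labelled set
# (e.g. polyp detection on CAD-CAP).
```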

    An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features

    Gastrointestinal polyps are considered precursors of cancer development in most cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most widely used diagnostic modality for gastrointestinal polyps, but because it is an operator-dependent procedure, several human factors can lead to missed polyps. Computer-aided polyp detection can reduce the polyp miss rate and assist doctors in finding the most important regions to focus on. In this paper, an automatic system is proposed to support gastrointestinal polyp detection. The system takes endoscopic video streams as input and outputs the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features are extracted from the video frames and combined to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, achieving an accuracy of 98.65%, a sensitivity of 98.79%, and a specificity of 98.52%.
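
    A minimal sketch of the fusion idea follows, assuming a pretrained CNN for the deep features and per-channel wavelet statistics for the colour-wavelet part; the paper's exact architecture and wavelet features are not specified here, so the choices below (ResNet-18 features, db2 sub-band deviations) are illustrative.

```python
# Sketch of CW + CNN feature fusion: deep features from a pretrained CNN and
# colour-wavelet statistics are concatenated per frame and fed to a linear SVM.
import numpy as np
import pywt
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

cnn = models.resnet18(weights="IMAGENET1K_V1")
cnn.fc = torch.nn.Identity()               # keep the 512-d penultimate features
cnn.eval()

def cnn_features(frame_tensor):            # frame_tensor: (3, H, W), normalised
    with torch.no_grad():
        return cnn(frame_tensor.unsqueeze(0)).squeeze(0).numpy()

def color_wavelet_features(frame_rgb):     # frame_rgb: H x W x 3 uint8
    feats = []
    for ch in range(3):                    # one 2-D DWT per colour channel
        cA, (cH, cV, cD) = pywt.dwt2(frame_rgb[:, :, ch].astype(float), "db2")
        feats += [band.std() for band in (cA, cH, cV, cD)]
    return np.array(feats)

# X = [np.concatenate([cnn_features(t), color_wavelet_features(f)])
#      for t, f in zip(frame_tensors, frames)]
# clf = LinearSVC().fit(np.array(X), labels)
```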

    Kvasir-Capsule, a video capsule endoscopy dataset

    Artificial intelligence (AI) is predicted to have profound effects on the future of video capsule endoscopy (VCE) technology. The potential lies in improving anomaly detection while reducing manual labour. Existing work demonstrates the promising benefits of AI-based computer-assisted diagnosis systems for VCE and also shows great potential for further improvement. However, medical data is often sparse and unavailable to the research community, and qualified medical personnel rarely have time for the tedious labelling work. We present Kvasir-Capsule, a large VCE dataset collected from examinations at a Norwegian hospital. Kvasir-Capsule consists of 117 videos, from which a total of 4,741,504 image frames can be extracted. We have labelled and medically verified 47,238 frames with a bounding box around findings from 14 different classes. In addition to these labelled images, 4,694,266 unlabelled frames are included in the dataset. The Kvasir-Capsule dataset can play a valuable role in developing better algorithms to reach the true potential of VCE technology.
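
    A minimal sketch of how the labelled split might be summarised is shown below; the file path and column names are assumptions about the release layout and may need adjusting to the actual dataset files.

```python
# Summarise the labelled portion of Kvasir-Capsule from a metadata file.
# The path "kvasir-capsule/metadata.csv" and the column "finding_class"
# are assumptions about the release layout, not confirmed field names.
import pandas as pd

metadata = pd.read_csv("kvasir-capsule/metadata.csv")   # hypothetical path
counts = metadata["finding_class"].value_counts()       # labelled frames per class
print(f"{len(metadata)} labelled frames across {counts.size} classes")
print(counts)

# The remaining ~4.69 M unlabelled frames have no entries in this file and
# can be used for self- or semi-supervised pretraining.
```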

    Intelligent recognition of colorectal cancer combining application of computer-assisted diagnosis with deep learning approaches

    Colorectal cancer screening has been shown to reduce both incidence and mortality: the cancer grows relatively slowly and typically develops from detectable premalignant lesions. Nevertheless, many patients do not attend colorectal cancer screening, and for those who do, existing tests and screening methods can establish a diagnosis. The main motivation of this research is to evaluate the data obtained from readily available colorectal cancer screening methods. The data provided to laboratory technologists is important in formulating appropriate recommendations that will reduce colorectal cancer. When standard colon cancer tests can be interpreted early, treatment of colorectal cancer is more effective. Intelligent computer-assisted diagnosis (CAD) is among the most powerful recent techniques for the recognition of colorectal cancer; by reducing the level of human interference, it has contributed considerably to advancing the quality of cancer treatment. To enhance diagnostic accuracy, intelligent CAD remains an active research area, combining deep learning and machine learning approaches with the associated convolutional neural network (CNN) scheme.

    Multichannel Residual Cues for Fine-Grained Classification in Wireless Capsule Endoscopy

    Early diagnosis of gastrointestinal pathologies leads to timely medical intervention and prevents disease progression. Wireless Capsule Endoscopy (WCE) is used as a non-invasive alternative for gastrointestinal examination. WCE can capture images despite the structural complexity of human anatomy and can help in detecting pathologies. However, despite recent progress in fine-grained pathology classification and detection, few works focus on generalization. We propose a multi-channel encoder-decoder network for learning a generalizable fine-grained pathology classifier. Specifically, we propose to use structural residual cues to explicitly drive the network to learn pathology traces. While the residuals are extracted using well-established 2D wavelet decomposition, we also propose to use colour channels to learn discriminative cues in WCE images (such as the red colour of bleeding). With less than 40% of the data (fewer than 2,500 labels) used for training, we demonstrate the effectiveness of our approach in classifying different pathologies on two WCE datasets (different capsule modalities). With a comprehensive benchmark for WCE abnormality and multi-class classification, we illustrate the generalizability of the proposed approach on both datasets, where our results exceed the state-of-the-art with far fewer labels in abnormality sensitivity on several of the nine pathologies, and establish a new benchmark with specificity >97% across classes.
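
    The sketch below illustrates one way to obtain such per-channel structural residuals with a 2D wavelet decomposition, keeping only the detail sub-bands; it is an assumed reading of the residual-extraction step, and the encoder-decoder network itself is not reproduced.

```python
# Per-channel high-frequency residuals via 2-D DWT/IDWT: the approximation
# band is zeroed so only detail coefficients (fine structure) are kept.
import numpy as np
import pywt

def wavelet_residual(channel, wavelet="haar"):
    """High-frequency residual of one image channel."""
    cA, details = pywt.dwt2(channel.astype(float), wavelet)
    # Discard the approximation band, reconstruct from detail bands only.
    return pywt.idwt2((np.zeros_like(cA), details), wavelet)

def multichannel_residuals(frame_rgb):
    """Stack R, G, B residuals, e.g. to expose red bleeding cues."""
    return np.stack([wavelet_residual(frame_rgb[:, :, c]) for c in range(3)],
                    axis=-1)
```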