85 research outputs found

    Pre-processing Technique for Wireless Capsule Endoscopy Image Enhancement

    Wireless capsule endoscopy (WCE) is used to examine the human digestive tract in order to detect abnormal areas. However, detecting abnormal areas such as bleeding is challenging because WCE images are often dark and of poor quality. In this paper, a pre-processing technique is introduced to ease classification of bleeding areas. An anisotropic contrast diffusion method is employed in our pre-processing technique to enhance image contrast. A drawback of the method proposed by B. Li is that the quality of the WCE image degrades as the number of iterations increases. To solve this problem, variance is employed in our proposed method. To further enhance the WCE image, the Discrete Cosine Transform is used together with anisotropic contrast diffusion. Experimental results show that both the proposed contrast enhancement algorithm and the proposed WCE image sharpening algorithm outperform B. Li's algorithm: the SDME and EBCM values remain stable as the number of iterations increases, and the gradient-based sharpness measure and PSNR improve by 31.5% and 20.3%, respectively.
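    As a rough illustration of the kind of iterative, edge-preserving diffusion that contrast-diffusion methods build on, the sketch below implements classic Perona-Malik anisotropic diffusion in Python. It is not the authors' exact algorithm; the variance-based iteration control and the DCT sharpening step described above are omitted, and the parameter values (kappa, gamma, iteration count) are assumptions.

```python
# Minimal Perona-Malik anisotropic diffusion sketch (generic illustration,
# not the paper's exact contrast-diffusion method).
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.2):
    """Iteratively diffuse a 2-D float image while preserving edges."""
    out = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        north = np.roll(out, -1, axis=0) - out
        south = np.roll(out, 1, axis=0) - out
        east = np.roll(out, -1, axis=1) - out
        west = np.roll(out, 1, axis=1) - out
        # Conduction term: small across strong edges, large in flat regions.
        c = lambda g: np.exp(-(g / kappa) ** 2)
        out += gamma * (c(north) * north + c(south) * south +
                        c(east) * east + c(west) * west)
        # The paper additionally monitors image variance across iterations to
        # keep quality stable; that control step is not reproduced here.
    return out
```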

    REGION-COLOR BASED AUTOMATED BLEEDING DETECTION IN CAPSULE ENDOSCOPY VIDEOS

    Capsule Endoscopy (CE) is a unique technique for non-invasive and practical visualization of the entire small intestine, and it has attracted a critical mass of studies aimed at improving it. Among the numerous studies being performed in capsule endoscopy, tremendous effort is being devoted to software algorithms that identify clinically important frames in CE videos. This thesis presents a computer-assisted method that automatically detects CE video frames containing bleeding. Specifically, a methodology is proposed to classify the frames of CE videos into bleeding and non-bleeding frames. It is a supervised method based on a Support Vector Machine (SVM) that classifies frames using color features derived from image regions, where each region is characterized by statistical features. From 15 candidate features, an exhaustive feature selection is performed to obtain the best feature subset, defined as the combination of features with the highest bleeding-discrimination ability according to three performance metrics: accuracy, sensitivity, and specificity. A ground-truth label annotation method is also proposed to partially automate the delineation of bleeding regions for training the classifier. The method produced promising results, with sensitivity and specificity values of up to 94%. All experiments were performed separately for the RGB and HSV color spaces. Experimental results show that the best feature subset in the RGB (Red-Green-Blue) color space is the combination of the mean values of the red and green planes, and the best feature subset in the HSV (Hue-Saturation-Value) color space is the combination of the mean values of all three planes.
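    A minimal sketch of the classification stage, assuming scikit-learn: region-wise mean color features (the red and green means reported above as the best RGB subset) fed to an SVM. The thesis's actual region segmentation, 15-feature candidate pool, and exhaustive selection are not reproduced; extract_region_means and the grid size are hypothetical stand-ins.

```python
# Region-colour feature sketch for bleeding vs. non-bleeding frame
# classification with an SVM (illustrative only).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def extract_region_means(frame_rgb, grid=4):
    """Split an RGB frame into a grid of regions and return the mean red and
    green values of each region as one feature vector."""
    h, w, _ = frame_rgb.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            region = frame_rgb[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            feats.extend([region[..., 0].mean(), region[..., 1].mean()])
    return np.array(feats)

# Assuming `frames` (list of RGB arrays) and `labels` (1 = bleeding) come from
# annotated CE video data:
# X = np.stack([extract_region_means(f) for f in frames])
# X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
# clf = SVC(kernel="rbf").fit(X_tr, y_tr)
# print(classification_report(y_te, clf.predict(X_te)))
```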

    An efficient method to classify GI tract images from WCE using visual words

    Digital images captured by Wireless Capsule Endoscopy (WCE) from a patient's gastrointestinal (GI) tract are used to detect abnormalities. Because of the large amount of information in WCE images, reviewing the GI tract of a single patient for disease can take around two hours, which is highly time consuming and considerably increases healthcare costs. To overcome this problem, we propose a novel method based on a Visual Bag of Features (VBOF) that incorporates the Scale Invariant Feature Transform (SIFT), the Center-Symmetric Local Binary Pattern (CS-LBP) and the Auto Color Correlogram (ACC). This combination of features captures interest-point, texture and color information in an image. The features computed for each image form a high-dimensional descriptor. The proposed feature descriptors are clustered by K-means into visual words, and a Support Vector Machine (SVM) is used to automatically classify multiple disease abnormalities in the GI tract. Finally, a post-processing scheme is applied to the classification results to validate the performance of multi-abnormality frame detection.
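    The sketch below outlines a generic bag-of-visual-words pipeline of the kind described above, assuming OpenCV and scikit-learn: SIFT descriptors are clustered into visual words with K-means and each image is encoded as a word histogram before SVM classification. The CS-LBP and ACC descriptors and the post-processing step are omitted for brevity, and the vocabulary size is an assumption.

```python
# Bag-of-visual-words sketch with SIFT features only (illustrative).
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(gray_images):
    """Compute SIFT descriptors for each grayscale image."""
    sift = cv2.SIFT_create()
    per_image = []
    for img in gray_images:
        _, desc = sift.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 128), np.float32))
    return per_image

def encode_bovw(per_image_desc, n_words=100):
    """Cluster all descriptors into visual words and histogram-encode each image."""
    all_desc = np.vstack([d for d in per_image_desc if len(d)])
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)
    hists = []
    for desc in per_image_desc:
        hist = np.zeros(n_words)
        if len(desc):
            words, counts = np.unique(kmeans.predict(desc), return_counts=True)
            hist[words] = counts
        hists.append(hist / max(hist.sum(), 1))
    return np.array(hists), kmeans

# Assuming `gray_images` and `labels` come from annotated WCE frames:
# X, vocab = encode_bovw(sift_descriptors(gray_images))
# clf = SVC(kernel="rbf").fit(X, labels)
```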

    Generic Feature Learning for Wireless Capsule Endoscopy Analysis

    The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, ultimately, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch, which makes the development of new CAD systems very time consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers built on state-of-the-art handcrafted features. In particular, the system reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase).
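    As a hedged illustration only, the sketch below shows a small convolutional classifier for six motility event classes written in PyTorch; the paper's actual network architecture, input resolution and training procedure are not given here, so all layer sizes and shapes are assumptions.

```python
# Tiny CNN classifier for six intestinal motility event classes (illustrative).
import torch
import torch.nn as nn

class MotilityCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 3, H, W) WCE frames
        return self.classifier(self.features(x).flatten(1))

# logits = MotilityCNN()(torch.randn(4, 3, 128, 128))  # -> shape (4, 6)
```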

    Automatic Small Bowel Tumor Diagnosis by Using Multi-Scale Wavelet-Based Analysis in Wireless Capsule Endoscopy Images

    BACKGROUND: Wireless capsule endoscopy has been introduced as an innovative, non-invasive diagnostic technique for evaluation of the gastrointestinal tract, reaching places that conventional endoscopy cannot. However, the output of this technique is an 8-hour video whose analysis by an expert physician is very time consuming. Thus, a computer-assisted diagnosis tool to help physicians evaluate CE exams faster and more accurately is an important technical challenge and an excellent economic opportunity. METHOD: The set of features proposed in this paper to encode textural information is based on statistical modeling of second-order textural measures extracted from co-occurrence matrices. To cope with both the joint and the marginal non-Gaussianity of these second-order textural measures, higher-order moments are used. The statistical moments are taken from a two-dimensional color-scale feature space in which two different scales are considered: the co-occurrence matrices are computed from images synthesized by the inverse wavelet transform of the wavelet coefficients containing only the selected scales for the three color channels, and second- and higher-order moments of the textural measures are then computed from these matrices. The dimensionality of the data is reduced using Principal Component Analysis. RESULTS: The proposed textural features are then used as the input of a classifier based on artificial neural networks. Classification performances of 93.1% specificity and 93.9% sensitivity are achieved on real data. These promising results open the path towards a deeper study of the applicability of this algorithm in computer-aided diagnosis systems to assist physicians in their clinical practice.
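    A minimal sketch of the feature pipeline described above, assuming PyWavelets, scikit-image and scikit-learn: wavelet decomposition with only selected scales reconstructed, co-occurrence (GLCM) texture measures summarized by higher-order statistical moments, PCA, and a neural-network classifier. The wavelet family, GLCM parameters, selected scales, and network size are assumptions, not the paper's settings.

```python
# Wavelet + co-occurrence texture features with PCA and an MLP (illustrative).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from scipy.stats import skew, kurtosis
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def scale_selected_image(channel, keep_levels=(1, 2)):
    """Wavelet-decompose one colour channel and reconstruct it using only the
    selected detail scales (all other coefficients are zeroed)."""
    coeffs = pywt.wavedec2(channel, "db4", level=3)
    filtered = [np.zeros_like(coeffs[0])]
    for lvl, detail in enumerate(coeffs[1:], start=1):
        keep = lvl in keep_levels
        filtered.append(tuple(d if keep else np.zeros_like(d) for d in detail))
    return pywt.waverec2(filtered, "db4")

def texture_moments(channel):
    """Second-order texture measures from a GLCM, summarised by higher-order
    moments (mean, variance, skewness, kurtosis)."""
    img = np.clip(channel, 0, 255).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256)
    measures = [graycoprops(glcm, p).ravel()
                for p in ("contrast", "homogeneity", "energy", "correlation")]
    flat = np.concatenate(measures)
    return np.array([flat.mean(), flat.var(), skew(flat), kurtosis(flat)])

# Assuming `images` (RGB arrays) and `labels` come from annotated CE frames:
# X = np.stack([np.concatenate([texture_moments(scale_selected_image(img[..., c]))
#                               for c in range(3)]) for img in images])
# X_red = PCA(n_components=8).fit_transform(X)
# clf = MLPClassifier(hidden_layer_sizes=(32,)).fit(X_red, labels)
```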