491 research outputs found

    Automatic Bleeding Frame and Region Detection for GLCM Using Artificial Neural Network

    Wireless capsule endoscopy (WCE) is a device that enables direct, non-invasive visualization of a patient's gastrointestinal tract. Analyzing a WCE video is a time-consuming task, so computer-aided techniques are used to reduce the burden on medical clinicians. This paper proposes a novel color feature extraction method to detect bleeding frames. First, we compute a word-based histogram for rapid bleeding detection in WCE images; classification of bleeding WCE frames is performed by applying GLCM features with an Artificial Neural Network and a K-nearest neighbour method. Second, we propose a two-stage saliency map extraction method: in the first stage, we inspect the bleeding images under different color components to highlight the bleeding regions, and in the second stage, red color in the bleeding frame reveals the affected region. The two saliency stages are then fused by a fusion algorithm to localize the bleeding area. Experimental results show that the proposed method is very efficient in detecting both the bleeding frames and the bleeding regions.
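    A minimal sketch of the frame-classification step described above, assuming standard GLCM texture properties fed to a k-NN classifier; scikit-image and scikit-learn stand in for the authors' implementation, and the chosen properties, distances, and k are illustrative rather than the paper's exact setup:

```python
# Sketch: GLCM texture features + k-NN for bleeding-frame classification.
# Feature choices and k are illustrative assumptions, not the authors' setup.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(gray_frame):
    """Small GLCM descriptor from an 8-bit grayscale WCE frame."""
    glcm = graycomatrix(gray_frame, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# With annotated frames, training and prediction would look roughly like:
# X = np.stack([glcm_features(f) for f in train_frames])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
# is_bleeding = clf.predict(glcm_features(new_frame)[None, :])
```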

    Automatic evaluation of degree of cleanliness in capsule endoscopy based on a novel CNN architecture

    Capsule endoscopy (CE) is a widely used, minimally invasive alternative to traditional endoscopy that allows visualisation of the entire small intestine. Patient preparation can help to obtain a cleaner intestine and thus better visibility in the resulting videos. However, studies on the most effective preparation method are conflicting due to the absence of objective, automatic cleanliness evaluation methods. In this work, we aim to provide such a method capable of presenting results on an intuitive scale, with a relatively light-weight novel convolutional neural network architecture at its core. We trained our model using 5-fold cross-validation on an extensive data set of over 50,000 image patches, collected from 35 different CE procedures, and compared it with state-of-the-art classification methods. From the patch classification results, we developed a method to automatically estimate pixel-level probabilities and deduce cleanliness evaluation scores through automatically learnt thresholds. We then validated our method in a clinical setting on 30 newly collected CE videos, comparing the resulting scores to those independently assigned by human specialists. We obtained the highest classification accuracy for the proposed method (95.23%), with significantly lower average prediction times than for the second-best method. In the validation of our method, we found acceptable agreement with two human specialists compared to interhuman agreement, showing its validity as an objective evaluation method. This work was funded by the European Union's H2020 MSCA ITN programme for the "Wireless In-body Environment Communication - WiBEC" project under grant agreement no. 675353. Additionally, we gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research.
    Noorda, R.; Nevárez, A.; Colomer, A.; Pons Beltrán, V.; Naranjo Ornedo, V. (2020). Automatic evaluation of degree of cleanliness in capsule endoscopy based on a novel CNN architecture. Scientific Reports 10(1):1-13. https://doi.org/10.1038/s41598-020-74668-8
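    A rough sketch of the patch-classification core and score aggregation, assuming a small Keras CNN; the paper's actual architecture, patch size, and automatically learnt thresholds differ, so everything below is illustrative:

```python
# Sketch: light-weight CNN patch classifier plus a per-frame cleanliness
# score. Architecture, patch size and threshold are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_patch_classifier(patch_size=64):
    """Tiny CNN labelling a patch as clean (0) or dirty (1)."""
    return models.Sequential([
        layers.Input(shape=(patch_size, patch_size, 3)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),
    ])

def frame_cleanliness(patch_probs, threshold=0.5):
    """Fraction of a frame's patches judged clean; the paper learns such
    thresholds automatically rather than fixing them as done here."""
    return float(np.mean(np.asarray(patch_probs) < threshold))
```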

    Clinical performance of a new software tool to automatically detect angioectasias in capsule endoscopy

    Background: Video capsule endoscopy (VCE) revolutionized the diagnosis and management of obscure gastrointestinal bleeding, though the rate of detection of small bowel lesions by physicians is still disappointing. Our group developed a novel algorithm (CMEMS-Uminho) to automatically detect angioectasias, which displays greater accuracy on VCE static frames than previously published methods. We aimed to evaluate the algorithm's overall performance and assess its diagnostic yield and usability in clinical practice. Methods: The algorithm's overall performance was determined using 54 full-length VCE recordings. To assess its diagnostic yield and usability in clinical practice, 38 consecutively performed VCE examinations (2017-2018) with a clinical diagnosis of angioectasias were evaluated by three physicians with different levels of experience, and the CMEMS-Uminho algorithm was applied in parallel. The performance of the CMEMS-Uminho algorithm was defined by positive concordance between frames automatically selected by the software and those identified by an independent capsule endoscopist. Results: Overall performance on complete VCE recordings was 77.7%, and diagnostic yield was 94.7%. There were significant differences between physicians in global detection rate (p < 0.001), detection rate per capsule (p < 0.001), diagnostic yield (p = 0.007), true positive rate (p < 0.001), reading time (p < 0.001), and viewing speed (p < 0.001). Applying the CMEMS-Uminho algorithm significantly enhanced all readers' global detection rates (p < 0.001), and the differences between them were no longer observed. Conclusion: The CMEMS-Uminho algorithm showed good overall performance and enhanced physicians' performance, suggesting the potential usability of this tool in clinical practice.
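    Between-reader comparisons of the kind reported above can be reproduced in form with a contingency-table test; a sketch with hypothetical counts, not the study's data:

```python
# Sketch: chi-square test comparing readers' detection rates.
# The counts below are invented for illustration only.
from scipy.stats import chi2_contingency

table = [[120, 30],   # reader A: detected / missed (hypothetical)
         [95, 55],    # reader B
         [70, 80]]    # reader C
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # small p -> detection rates differ
```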

    Semantic Map Guided Synthesis of Wireless Capsule Endoscopy Images using Diffusion Models

    Wireless capsule endoscopy (WCE) is a non-invasive method for visualizing the gastrointestinal (GI) tract, crucial for diagnosing GI tract diseases. However, interpreting WCE results can be time-consuming and tiring. Existing studies have employed deep neural networks (DNNs) for automatic GI tract lesion detection, but acquiring sufficient training examples, particularly due to privacy concerns, remains a challenge, and public WCE databases lack diversity and quantity. To address this, we propose a novel approach leveraging generative models, specifically the diffusion model (DM), to generate diverse WCE images. Our model incorporates semantic maps produced by a visualization scale (VS) engine, enhancing the controllability and diversity of the generated images. We evaluate our approach using visual inspection and visual Turing tests, demonstrating its effectiveness in generating realistic and diverse WCE images.
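    One common way to realise this kind of map-guided generation is to condition the diffusion denoiser on the semantic map by channel-wise concatenation; a minimal PyTorch sketch, where the tiny stand-in network and the DDPM schedule are assumptions rather than the authors' model:

```python
# Sketch: semantic-map-conditioned DDPM sampling. The stand-in denoiser
# and schedule are illustrative; the paper's model is a full diffusion model.
import torch
import torch.nn as nn

class TinyCondDenoiser(nn.Module):
    """Predicts noise from (noisy image || semantic map || timestep plane);
    a real model would be a UNet."""
    def __init__(self, img_ch=3, map_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + map_ch + 1, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, img_ch, 3, padding=1))

    def forward(self, x_t, t, sem_map):
        t_plane = t.float().view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, sem_map, t_plane], dim=1))

@torch.no_grad()
def ddpm_sample(model, sem_map, steps=50):
    """Standard DDPM reverse process, conditioned on sem_map (B, 1, H, W)."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas, alpha_bar = 1.0 - betas, torch.cumprod(1.0 - betas, dim=0)
    x = torch.randn(sem_map.shape[0], 3, *sem_map.shape[2:])
    for t in reversed(range(steps)):
        eps = model(x, torch.full((x.shape[0],), t), sem_map)
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        x = mean + torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else mean
    return x
```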

    Angiodysplasia Detection and Localization Using Deep Convolutional Neural Networks

    Accurate detection and localization of angiodysplasia lesions is an important problem in the early-stage diagnosis of gastrointestinal bleeding and anemia. The gold standard for angiodysplasia detection and localization is wireless capsule endoscopy. This pill-like device produces thousands of sufficiently high-resolution images during one passage through the gastrointestinal tract. In this paper we present our winning solution for the MICCAI 2017 Endoscopic Vision SubChallenge: Angiodysplasia Detection and Localization, and its further improvements over the state-of-the-art results using several novel deep neural network architectures. It addresses the binary segmentation problem, where every pixel in an image is labeled as angiodysplasia lesion or background. We then analyze the connected components of each predicted mask; based on this analysis we developed a classifier that predicts the presence of angiodysplasia lesions (a binary variable) and a detector for their localization (the center of a component). In this setting, our approach outperforms other methods in every task subcategory for angiodysplasia detection and localization, thereby providing state-of-the-art results for these problems. The source code for our solution is publicly available at https://github.com/ternaus/angiodysplasia-segmentation
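    The post-processing the abstract describes (predicted mask, then connected components, then frame label and lesion centres) is straightforward to sketch; the area threshold below is an assumption:

```python
# Sketch: connected-component analysis of a predicted binary mask,
# yielding a frame-level label and lesion centres. min_area is assumed.
import numpy as np
from scipy import ndimage

def detect_lesions(mask, min_area=20):
    """Return (has_lesion, [(row, col), ...]) for one predicted mask."""
    labeled, n = ndimage.label(mask > 0)
    centres = []
    for i in range(1, n + 1):
        component = labeled == i
        if component.sum() >= min_area:      # drop speckle-sized components
            centres.append(ndimage.center_of_mass(component))
    return len(centres) > 0, centres
```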

    A deep learning framework for quality assessment and restoration in video endoscopy

    Endoscopy is a routine imaging technique used for both diagnosis and minimally invasive surgical treatment. Artifacts such as motion blur, bubbles, specular reflections, floating objects and pixel saturation impede the visual interpretation and the automated analysis of endoscopy videos. Given the widespread use of endoscopy in different clinical applications, we contend that the robust and reliable identification of such artifacts and the automated restoration of corrupted video frames is a fundamental medical imaging problem. Existing state-of-the-art methods only deal with the detection and restoration of selected artifacts, yet endoscopy videos typically contain numerous artifacts, which motivates a comprehensive solution. We propose a fully automatic framework that can: 1) detect and classify six different primary artifacts, 2) provide a quality score for each frame, and 3) restore mildly corrupted frames. To detect the different artifacts our framework exploits a fast multi-scale, single-stage convolutional neural network detector. We introduce a quality metric to assess frame quality and predict image restoration success. Generative adversarial networks with carefully chosen regularization are finally used to restore corrupted frames. Our detector yields the highest mean average precision (mAP at 5% threshold) of 49.0 and the lowest computational time of 88 ms, allowing for accurate real-time processing. Our restoration models for blind deblurring, saturation correction and inpainting demonstrate significant improvements over previous methods. On a set of 10 test videos we show that our approach preserves an average of 68.7% of frames, 25% more than are retained from the raw videos.
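    One plausible shape for the per-frame quality score that gates restoration (steps 2 and 3 of the framework) is an area- and confidence-weighted penalty over detected artifacts; the weights and cut-offs below are assumptions, not the paper's metric:

```python
# Sketch: aggregate artifact detections into a quality score that decides
# whether a frame is kept, restored or discarded. All constants are assumed.
def frame_quality(detections, frame_area):
    """detections: list of (artifact_class, confidence, bbox_area) tuples."""
    weights = {"blur": 0.3, "saturation": 0.25, "specularity": 0.15,
               "bubbles": 0.1, "contrast": 0.1, "artifact": 0.1}
    penalty = sum(weights.get(cls, 0.1) * conf * (area / frame_area)
                  for cls, conf, area in detections)
    return max(0.0, 1.0 - penalty)

score = frame_quality([("blur", 0.9, 5000), ("bubbles", 0.6, 2000)], 256 * 256)
action = "keep" if score > 0.8 else "restore" if score > 0.5 else "discard"
```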