Automatic Bleeding Frame and Region Detection for GLCM Using Artificial Neural Network
Wireless capsule endoscopy (WCE) is a device that enables direct, non-invasive visualization of the patient's gastrointestinal tract. Analyzing a WCE video is a time-consuming task, so computer-aided techniques are used to reduce the burden on medical clinicians. This paper proposes a novel color feature extraction method to detect bleeding frames. First, we build a word-based histogram for rapid bleeding detection in WCE images; classification of bleeding WCE frames is performed by applying GLCM features with an artificial neural network and a k-nearest neighbour method. Second, we propose a two-stage saliency map extraction method. In the first stage, we inspect the bleeding images under different color components to highlight the bleeding regions; in the second stage, the red color in the bleeding frame reveals the affected region. The two saliency stages are then fused algorithmically to detect the bleeding area. Experimental results show that the proposed method is very efficient in detecting both the bleeding frames and the regions.
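The GLCM texture features this abstract relies on can be sketched in a few lines. This is a minimal illustration, not the paper's exact configuration: the gray-level count, the single offset, and the choice of contrast and homogeneity as features are all assumptions.

```python
import numpy as np

def glcm_features(img, levels, offset=(0, 1)):
    """Build a normalized gray-level co-occurrence matrix for one pixel
    offset and return two classic texture features (contrast, homogeneity)."""
    dr, dc = offset
    glcm = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                glcm[img[r, c], img[rr, cc]] += 1
    glcm /= glcm.sum()                      # normalize to a joint distribution
    i, j = np.indices(glcm.shape)
    contrast = float((glcm * (i - j) ** 2).sum())
    homogeneity = float((glcm / (1 + np.abs(i - j))).sum())
    return contrast, homogeneity

# Tiny 3-gray-level example frame
frame = np.array([[0, 0, 1],
                  [0, 1, 1],
                  [2, 2, 2]])
contrast, homogeneity = glcm_features(frame, levels=3)
```

Feature vectors of this kind, computed per color channel or per offset, would then be fed to the ANN or k-NN classifier the abstract describes.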
Application of Artificial Intelligence in Capsule Endoscopy: Where Are We Now?
Unlike wired endoscopy, capsule endoscopy requires additional time for a clinical specialist to review the operation and examine the lesions. To reduce the tedious review time and increase the accuracy of medical examinations, various approaches have been reported based on artificial intelligence for computer-aided diagnosis. Recently, deep learning–based approaches have been applied to many possible areas, showing greatly improved performance, especially for image-based recognition and classification. By reviewing recent deep learning–based approaches for clinical applications, we present the current status and future direction of artificial intelligence for capsule endoscopy
Accurate small bowel lesions detection in wireless capsule endoscopy images using deep recurrent attention neural network
Wireless capsule endoscopy (WCE) allows medical doctors to examine the interior of the small intestine with a non-invasive procedure. This methodology is particularly important for Crohn's disease (CD), where an early diagnosis improves treatment outcomes. However, the viewing and evaluation of WCE videos is a time-consuming process for medical experts. In this work, we present a recurrent attention neural network for the detection of small-bowel CD lesions in WCE images. Our classifier reaches 90.85% accuracy on our own dataset, annotated by experts from the Hospital of Nantes. The model has also been tested on a public endoscopic dataset, the CAD-CAP database used for the GIANA competition, and achieves high performance on the detection task with an accuracy of 99.67%. This automatic lesion classifier will greatly reduce the time gastroenterologists spend reviewing WCE videos, which will likely foster the development of this technique and speed up the diagnosis of CD
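The attention mechanism at the heart of such a model can be illustrated with a minimal soft-attention pooling step. The feature vectors and scores below are invented for illustration; in the real model they are learned jointly with the recurrent network.

```python
import math

def attention_pool(features, scores):
    """Weight per-region feature vectors by softmax(scores) and sum them,
    so regions with higher scores dominate the pooled descriptor."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]      # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    pooled = [sum(w * f[k] for w, f in zip(weights, features))
              for k in range(len(features[0]))]
    return pooled, weights

# Two hypothetical region descriptors; the first gets a higher attention score
features = [[1.0, 0.0], [0.0, 1.0]]
pooled, weights = attention_pool(features, scores=[2.0, 0.0])
```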
Machine learning based small bowel video capsule endoscopy analysis: Challenges and opportunities
Video capsule endoscopy (VCE) is a revolutionary technology for the early diagnosis of gastric disorders. However, owing to the high redundancy and subtle manifestation of anomalies among thousands of frames, the manual review of VCE videos requires considerable patience, focus, and time. The automatic analysis of these videos using computational methods is challenging, as the capsule moves uncontrollably and captures frames indiscriminately. Several machine learning (ML) methods, including recent deep convolutional neural network approaches, have been adopted after evaluating their potential for improving VCE analysis. However, the clinical impact of these methods is yet to be investigated. This survey aimed to highlight the gaps between existing ML-based research methodologies and clinically significant rules recently established by gastroenterologists for VCE. A framework for interpreting raw frames into contextually relevant frame-level findings, and subsequently merging these findings with metadata to obtain a disease-level diagnosis, was formulated. Frame-level findings can be more intelligible for discriminative learning when organized in a taxonomical hierarchy. The proposed taxonomical hierarchy, formulated based on pathological and visual similarities, may yield better classification metrics by setting inference classes at a higher level than training classes. Mapping from the frame level to the disease level was structured in the form of a graph based on clinical relevance, inspired by the recent international consensus developed by domain experts. Furthermore, existing methods for VCE summarization, classification, segmentation, detection, and localization were critically evaluated and compared based on aspects deemed significant by clinicians. Numerous studies pertain to single-anomaly detection rather than a pragmatic approach for a clinical setting. The challenges and opportunities associated with VCE analysis were delineated.
A focus on maximizing the discriminative power of features corresponding to various subtle lesions and anomalies may help cope with the diverse and mimicking nature of different VCE frames. Large multicenter datasets must be created to cope with data sparsity, bias, and class imbalance. Explainability, reliability, traceability, and transparency are important for an ML-based diagnostic system in VCE. Existing ethical and legal bindings narrow the scope of possibilities where ML can potentially be leveraged in healthcare. Despite these limitations, ML-based video capsule endoscopy will revolutionize clinical practice, aiding clinicians in rapid and accurate diagnosis.
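The survey's idea of setting inference classes at a higher level of the taxonomy than the training classes can be sketched as simple probability aggregation. The class names and hierarchy below are hypothetical placeholders, not the survey's actual taxonomy.

```python
# Hypothetical fine-to-coarse taxonomy (placeholder class names)
TAXONOMY = {
    "erosion": "inflammatory",
    "ulcer": "inflammatory",
    "polyp": "protruding",
}

def coarse_probs(fine_probs, taxonomy):
    """Aggregate softmax probabilities over fine training classes into
    probabilities over the coarser classes used at inference time."""
    out = {}
    for fine, p in fine_probs.items():
        coarse = taxonomy[fine]
        out[coarse] = out.get(coarse, 0.0) + p
    return out

# A frame whose probability mass is split across two inflammatory subtypes
# still yields a confident coarse-level finding.
preds = coarse_probs({"erosion": 0.3, "ulcer": 0.4, "polyp": 0.3}, TAXONOMY)
```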
Multi-pathology detection and lesion localization in WCE videos by using the instance segmentation approach
The majority of current systems for automatic diagnosis consider the detection of a single, previously known pathology. Considering specifically the diagnosis of lesions in the small bowel using endoscopic capsule images, very few account for the possible existence of more than one pathology, and when they do, they are mainly detection-based systems, and therefore unable to localize the suspected lesions. Such systems do not fully satisfy the medical community, which in fact needs a system that detects any pathology, and possibly more than one when they coexist. In addition, beyond the diagnostic capability of these systems, localizing the lesions in the image has been of great interest to the medical community, mainly for the purpose of training medical personnel. So, nowadays, the inclusion of the lesion location in automatic diagnostic systems is practically mandatory. Multi-pathology detection can be seen as a multi-object detection task, and as each frame can contain different instances of the same lesion, instance segmentation seems appropriate for the purpose. Consequently, we argue that a multi-pathology system benefits from using the instance segmentation approach, since classification and segmentation modules are both required, complementing each other in lesion detection and localization. To the best of our knowledge, such a system does not yet exist for the detection of WCE pathologies. This paper proposes a multi-pathology system that can be applied to WCE images, which uses the Mask Improved RCNN (MI-RCNN), a new mask subnet scheme which has been shown to significantly improve the mask predictions of the high-performing state-of-the-art Mask-RCNN and PANet systems. A novel training strategy based on the second momentum is also proposed for the first time for training Mask-RCNN and PANet based systems. These approaches were tested using the public database KID, and the included pathologies were bleeding, angioectasias, polyps and inflammatory lesions.
Experimental results show significant improvements for the proposed approaches. This work was supported by FCT national funds, under the national support to R&D units grant, through the reference projects UIDB/04436/2020 and UIDP/04436/2020, and through the PhD Grants SFRH/BD/92143/2013 and SFRH/BD/139061/201
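The "second momentum" training strategy is not detailed in the abstract. As a point of reference, a standard second-moment gradient update in the style of Adam looks like the sketch below; the hyperparameters are conventional defaults, not values from the paper.

```python
import math

def second_moment_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style update on parameter p with gradient g, tracking the
    first moment m and the second moment v of the gradient."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g          # the second-momentum term
    m_hat = m / (1 - b1 ** t)              # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v

# First optimization step on a scalar parameter
p, m, v = second_moment_step(p=1.0, g=0.5, m=0.0, v=0.0, t=1)
```

Because the second moment normalizes the step size by the gradient's magnitude, the very first update moves the parameter by approximately the learning rate, regardless of the gradient's scale.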
The Future of Capsule Endoscopy: The Role of Artificial Intelligence and Other Technical Advancements
Capsule endoscopy has revolutionized the management of small-bowel diseases owing to its convenience and noninvasiveness. Capsule endoscopy is a common method for the evaluation of obscure gastrointestinal bleeding, Crohn’s disease, small-bowel tumors, and polyposis syndrome. However, the laborious reading process, oversight of small-bowel lesions, and lack of locomotion are major obstacles to expanding its application. Along with recent advances in artificial intelligence, several studies have reported the promising performance of convolutional neural network systems for the diagnosis of various small-bowel lesions including erosion/ulcers, angioectasias, polyps, and bleeding lesions, which have reduced the time needed for capsule endoscopy interpretation. Furthermore, colon capsule endoscopy and capsule endoscopy locomotion driven by magnetic force have been investigated for clinical application, and various capsule endoscopy prototypes for active locomotion, biopsy, or therapeutic approaches have been introduced. In this review, we will discuss the recent advancements in artificial intelligence in the field of capsule endoscopy, as well as studies on other technological improvements in capsule endoscopy
Time-based self-supervised learning for Wireless Capsule Endoscopy
State-of-the-art machine learning models, and especially deep learning ones, are significantly data-hungry; they require vast amounts of manually labeled samples to function correctly. However, in most medical imaging fields, obtaining such data can be challenging. Not only is the volume of data a problem, but so are the imbalances within its classes; it is common to have many more images of healthy patients than of those with pathology. Computer-aided diagnostic systems suffer from these issues and are usually over-designed to perform accurately. This work proposes using self-supervised learning for wireless capsule endoscopy videos by introducing a custom-tailored method that does not initially need labels or appropriate class balance. We prove that using the inherent structure our method infers from the temporal axis improves the detection rate in several domain-specific applications, even under severe imbalance. State-of-the-art results are achieved in polyp detection, with 95.00 ± 2.09% Area Under the Curve, and 92.77 ± 1.20% accuracy on the CAD-CAP dataset
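A time-based pretext task of this kind can be sketched by labeling frame pairs according to temporal proximity: frames close in time tend to show the same anatomy and can serve as positives without any manual annotation. The window size below is an arbitrary assumption, and the actual method's pretext task may differ.

```python
def temporal_pairs(n_frames, window):
    """Label every ordered frame pair: 1 (positive) if the frames lie within
    `window` steps of each other on the time axis, else 0 (negative)."""
    pairs = []
    for i in range(n_frames):
        for j in range(i + 1, n_frames):
            pairs.append((i, j, 1 if j - i <= window else 0))
    return pairs

# Four frames, adjacent frames treated as positives
pairs = temporal_pairs(n_frames=4, window=1)
```

A contrastive or ranking loss trained on such pairs then shapes the feature space using only the video's own temporal structure, which is what allows downstream detectors to work from far fewer labels.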
Detection of Intestinal Bleeding in Wireless Capsule Endoscopy using Machine Learning Techniques
Gastrointestinal (GI) bleeding is very common in humans and may lead to fatal consequences. GI bleeding can usually be identified using a flexible wired endoscope. In 2001, a newer diagnostic tool, wireless capsule endoscopy (WCE), was introduced. It is a swallowable capsule-shaped device with a camera that captures thousands of color images and wirelessly sends them back to a data recorder. The physicians then analyze those images to identify any GI abnormalities. However, the long screening time may increase the danger to patients in emergency cases. It is therefore necessary to use a real-time detection tool to identify bleeding in the GI tract.
Each material has its own spectral ‘signature’, showing distinct characteristics at specific wavelengths of light [33]. Therefore, by evaluating its optical characteristics, the presence of blood can be detected. In this study, three main hardware designs were presented: one using a two-wavelength optical sensor and two others using six-wavelength spectral sensors, based on the AS7262 and AS7263 chips respectively, to determine the optical characteristics of blood and non-blood samples.
The goal of the research is to develop a machine learning model to differentiate blood samples (BS) and non-blood samples (NBS) by exploring their optical properties. In this experiment, 10 levels of crystallized bovine hemoglobin solutions were used as BS and 5 food colors (red, yellow, orange, tan and pink) with different concentrations totaling 25 non-blood samples were used as NBS. These blood and non-blood samples were also combined with pig’s intestine to mimic in-vivo experimental environment. The collected samples were completely separated into training and testing data.
Different spectral features are analyzed to obtain optical information about the samples. Based on its performance on the most significant spectral-wavelength features, the k-nearest neighbors algorithm (k-NN) was finally chosen for automated bleeding detection. The proposed k-NN classifier model distinguishes BS and NBS with an accuracy of 91.54% using two wavelength features, and around 89% using three combined wavelength features, in the visible and near-infrared spectral regions. The research also indicates that it is possible to deploy tiny optical detectors to detect GI bleeding in a WCE system, which could eliminate the need for time-consuming image post-processing steps
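The k-NN decision on wavelength features can be sketched with a self-contained implementation. The two-dimensional reflectance values below are invented toy data, not the study's measurements, and k=3 is an illustrative choice.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    samples (Euclidean distance in feature space)."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy two-wavelength reflectance features: (feature_vector, label)
train = [((0.20, 0.80), "BS"), ((0.25, 0.75), "BS"), ((0.22, 0.78), "BS"),
         ((0.80, 0.30), "NBS"), ((0.85, 0.25), "NBS"), ((0.78, 0.35), "NBS")]

pred = knn_predict(train, query=(0.23, 0.77))
```

Because the classifier operates on a handful of scalar sensor readings rather than full images, it is light enough to run on the capsule-side hardware the study envisions.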