16 research outputs found

    Artificial intelligence to improve polyp detection and screening time in colon capsule endoscopy

    Get PDF
    Colon Capsule Endoscopy (CCE) is a minimally invasive procedure that is increasingly being used as an alternative to conventional colonoscopy. Videos recorded by the capsule cameras are long and require one or more experts' time to review and identify polyps or other potential intestinal problems that can lead to major health issues. We developed and tested a multi-platform web application, AI-Tool, which embeds a Convolutional Neural Network (CNN) to help CCE reviewers. With the help of artificial intelligence, AI-Tool is able to detect images with a high probability of containing a polyp and prioritize them during the reviewing process. With the collaboration of 3 experts who reviewed 18 videos, we compared the classical linear review method using RAPID Reader Software v9.0 with the new software we present. Applying the new strategy, reviewing time was reduced by a factor of 6 and polyp detection sensitivity increased from 81.08% to 87.80%.
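
    The prioritization strategy described above can be sketched as follows. This is a hypothetical illustration, not AI-Tool's actual implementation: a classifier assigns each frame a polyp probability, and the reviewer is shown frames in descending order of that score instead of linearly.

```python
# Hypothetical sketch of CNN-based frame prioritization for CCE review.
# Frames with a high predicted polyp probability are surfaced first,
# so a reviewer sees likely findings early instead of scanning linearly.

def prioritize_frames(frame_scores):
    """Return frame indices ordered by descending polyp probability."""
    return sorted(range(len(frame_scores)),
                  key=lambda i: frame_scores[i], reverse=True)

# Example: per-frame scores from a (hypothetical) CNN for a 5-frame clip.
scores = [0.02, 0.91, 0.10, 0.75, 0.05]
order = prioritize_frames(scores)  # frames 1 and 3 come first
```

    In a real tool the same ranking would drive the review queue, with low-scoring frames batched or skimmed at the end.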

    Time-based self-supervised learning for Wireless Capsule Endoscopy

    Full text link
    State-of-the-art machine learning models, and especially deep learning ones, are significantly data-hungry; they require vast amounts of manually labeled samples to function correctly. However, in most medical imaging fields, obtaining such data can be challenging. Not only is the volume of data a problem, but so are the imbalances within its classes; it is common to have many more images of healthy patients than of those with pathology. Computer-aided diagnostic systems suffer from these issues, usually over-designing their models to perform accurately. This work proposes using self-supervised learning for wireless endoscopy videos by introducing a custom-tailored method that does not initially need labels or an appropriate class balance. We show that using the inherent structure learned by our method, extracted from the temporal axis, improves the detection rate in several domain-specific applications even under severe imbalance. State-of-the-art results are achieved in polyp detection, with 95.00 ± 2.09% Area Under the Curve and 92.77 ± 1.20% accuracy on the CAD-CAP dataset.
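
    The core idea of time-based self-supervision is that labels can be derived from the video's temporal axis itself, with no manual annotation. One common pretext task of this kind, sketched here as an assumed example (the paper's exact formulation may differ), is to label frame pairs by whether they are temporally close:

```python
# Assumed sketch of a time-based self-supervised pretext task: frame
# pairs close in time are labelled positive, distant pairs negative.
# The labels come from frame indices alone, so no expert annotation
# or class balancing is needed to pretrain a feature extractor.

import random

def make_temporal_pairs(num_frames, window=5, n_pairs=4, seed=0):
    """Generate (i, j, label) tuples: label 1 iff |i - j| <= window."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        i = rng.randrange(num_frames)
        j = rng.randrange(num_frames)
        pairs.append((i, j, int(abs(i - j) <= window)))
    return pairs
```

    A model pretrained to solve this task learns temporal structure that can then be fine-tuned for downstream detection, even on imbalanced data.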

    Clinicians’ Guide to Artificial Intelligence in Colon Capsule Endoscopy—Technology Made Simple

    Get PDF
    Artificial intelligence (AI) applications have become widely popular across the healthcare ecosystem. Colon capsule endoscopy (CCE) was adopted in the NHS England pilot project following the recent COVID pandemic’s impact. It demonstrated its capability to relieve the national backlog in endoscopy. As a result, AI-assisted colon capsule video analysis has become gastroenterology’s most active research area. However, with rapid AI advances, mastering these complex machine learning concepts remains challenging for healthcare professionals. This creates a barrier for clinicians to take on this new technology and embrace the new era of big data. This paper aims to bridge the knowledge gap between the current CCE system and the future, fully integrated AI system. The primary focus is on simplifying the technical terms and concepts in machine learning. This will hopefully address the general “fear of the unknown in AI” by helping healthcare professionals understand the basic principles of machine learning in capsule endoscopy and apply this knowledge in their future interactions with and adaptation to AI technology. It also summarises the evidence of AI in CCE and its impact on diagnostic pathways. Finally, it discusses the unintended consequences of using AI, ethical challenges, potential flaws, and bias within clinical settings.

    Anatomical landmarks localization for capsule endoscopy studies

    Full text link
    Wireless Capsule Endoscopy is a medical procedure that uses a small, wireless camera to capture images of the inside of the digestive tract. Identifying the entrance and exit of the small bowel and of the large intestine is one of the first tasks that must be accomplished to read a video. This paper addresses the design of a clinical decision support tool to detect these anatomical landmarks. We have developed a system based on deep learning that combines images, timestamps, and motion data to achieve state-of-the-art results. Our method not only classifies the images as being inside or outside the studied organs but also identifies the entrance and exit frames. Experiments performed with three different datasets (one public and two private) show that our system is able to approximate the landmarks while achieving high accuracy on the classification problem (inside/outside of the organ). For the entrance and exit of the studied organs, the distance between predicted and real landmarks is reduced by a factor of 1.5 to 10 with respect to previous state-of-the-art methods.
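
    One plausible post-processing step behind such a system, sketched here purely as an assumed illustration (the paper's method also fuses timestamps and motion data), is to recover entrance and exit frames from the per-frame inside/outside classifications:

```python
# Assumed sketch: given binary per-frame "inside organ" predictions,
# take the longest contiguous run of 1s as the organ transit; its first
# and last indices approximate the entrance and exit frames.

def entrance_exit(inside):
    """Return (entrance, exit) indices of the longest run of 1s, or None."""
    best, run_start = None, None
    for i, v in enumerate(inside + [0]):   # sentinel 0 closes a trailing run
        if v and run_start is None:
            run_start = i
        elif not v and run_start is not None:
            if best is None or i - run_start > best[1] - best[0] + 1:
                best = (run_start, i - 1)
            run_start = None
    return best

preds = [0, 0, 1, 1, 1, 1, 0, 1, 0]   # noisy classifier output
landmarks = entrance_exit(preds)       # (2, 5): the spurious 1 is ignored
```

    Taking the longest run makes the estimate robust to isolated misclassified frames, which is why frame-level accuracy alone does not determine landmark quality.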

    Study of capsule endoscopy delivery at scale through enhanced artificial intelligence-enabled analysis (the CESCAIL study)

    Get PDF
    Funding Information: This study is funded by the National Institute for Health and Care Research (NIHR) (funder award NIHR AI_AWARD02440). Peer reviewed.

    Study of capsule endoscopy delivery at scale through enhanced artificial intelligence‐enabled analysis (the CESCAIL study)

    Get PDF
    Aim Lower gastrointestinal (GI) diagnostics have been facing relentless capacity constraints for many years, even before the COVID-19 era. Restrictions from the COVID pandemic have resulted in a significant backlog in lower GI diagnostics. Given recent developments in deep neural networks (DNNs) and the application of artificial intelligence (AI) in endoscopy, automating capsule video analysis is now within reach. Comparable to the efficiency and accuracy of AI applications in small bowel capsule endoscopy, AI in colon capsule analysis will also improve the efficiency of video reading and address the relentless demand on lower GI services. The aim of the CESCAIL study is to determine the feasibility, accuracy and productivity of AI-enabled analysis tools (AiSPEED) for polyp detection compared with the ‘gold standard’: a conventional care pathway with clinician analysis. Method This multi-centre, diagnostic accuracy study aims to recruit 674 participants retrospectively and prospectively from centres conducting colon capsule endoscopy (CCE) as part of their standard care pathway. After the study participants have undergone CCE, the colon capsule videos will be uploaded onto two different pathways: AI-enabled video analysis and the gold standard conventional clinician analysis pathway. The reports generated from both pathways will be compared for accuracy (sensitivity and specificity). The reading time can only be compared in the prospective cohort. In addition to validating the AI tool, this study will also provide observational data concerning its use to assess the pathway execution in real-world performance. Results The study is currently recruiting participants at multiple centres within the United Kingdom and is at the stage of collecting data. Conclusion This standard diagnostic accuracy study carries no additional risk to patients as it does not affect the standard care pathway, and hence patient care remains unaffected.
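
    The sensitivity/specificity comparison described above can be sketched as follows, assuming a simplified report format in which each participant's outcome is reduced to a binary polyp finding and the clinician pathway serves as the reference standard (the study's actual report structure is richer than this):

```python
# Sketch of a per-participant diagnostic accuracy comparison (assumed
# binary findings). The conventional clinician pathway is the reference;
# the AI pathway's findings are scored against it.

def sensitivity_specificity(reference, predicted):
    """reference/predicted: parallel lists of 0/1 polyp findings."""
    tp = sum(r and p for r, p in zip(reference, predicted))
    tn = sum((not r) and (not p) for r, p in zip(reference, predicted))
    fn = sum(r and not p for r, p in zip(reference, predicted))
    fp = sum((not r) and p for r, p in zip(reference, predicted))
    return tp / (tp + fn), tn / (tn + fp)

ref  = [1, 1, 0, 0, 1, 0]   # clinician pathway findings
pred = [1, 0, 0, 1, 1, 0]   # AI pathway findings
sens, spec = sensitivity_specificity(ref, pred)
```

    Note that reading time, unlike these accuracy metrics, can only be measured prospectively, which is why the study separates the two cohorts.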

    Sequential Models for Endoluminal Image Classification

    Full text link
    Wireless Capsule Endoscopy (WCE) is a procedure to examine the human digestive system for potential mucosal polyps, tumours, or bleeding using an encapsulated camera. This work focuses on polyp detection within WCE videos through Machine Learning. When using Machine Learning in the medical field, scarce and unbalanced datasets often make it hard to achieve satisfactory performance. We claim that using Sequential Models to take the temporal nature of the data into account improves the performance of previous approaches. Thus, we present a bidirectional Long Short-Term Memory Network (BLSTM), a sequential network that is particularly designed for temporal data. We find that the BLSTM Network outperforms non-sequential architectures and other previous models, achieving a final Area Under the Curve of 93.83%. Experiments show that our method of extracting spatial and temporal features yields better performance and could be a possible way to decrease the time needed by physicians to analyse the video material.
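
    The bidirectional idea behind a BLSTM can be illustrated with a deliberately simplified recurrence. This is not an LSTM and not the paper's model; it only shows how a per-frame score is refined using context accumulated from both temporal directions, which is what distinguishes a BLSTM from a non-sequential classifier:

```python
# Toy bidirectional recurrence (NOT a real LSTM): each frame's score is
# blended with exponentially decaying context from the past (forward
# pass) and from the future (backward pass), then the two are averaged.

def bidirectional_smooth(scores, alpha=0.5):
    """Refine per-frame scores with context from both temporal directions."""
    n = len(scores)
    fwd, bwd = [0.0] * n, [0.0] * n
    for t in range(n):                       # forward pass: past context
        fwd[t] = alpha * scores[t] + (1 - alpha) * (fwd[t - 1] if t else scores[0])
    for t in reversed(range(n)):             # backward pass: future context
        bwd[t] = alpha * scores[t] + (1 - alpha) * (bwd[t + 1] if t < n - 1 else scores[-1])
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]
```

    In the actual model, learned LSTM gates replace the fixed blending factor, and the inputs are CNN feature vectors rather than scalar scores.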

    Artificial intelligence for the detection of polyps or cancer with colon capsule endoscopy

    No full text
    Colorectal cancer is common and can be devastating, with long-term survival rates vastly improved by early diagnosis. Colon capsule endoscopy (CCE) is increasingly recognised as a reliable option for colonic surveillance, but widespread adoption has been slow for several reasons, including the time-consuming reading process of the CCE recording. Automated image recognition and artificial intelligence (AI) are appealing solutions in CCE. Through a review of the currently available and developmental technologies, we discuss how AI is poised to deliver at the forefront of CCE in the coming years. Current practice for CCE reporting often involves a two-step approach, with a ‘pre-reader’ and ‘validator’. This requires skilled and experienced readers with a significant time commitment. Therefore, CCE is well-positioned to reap the benefits of the ongoing digital innovation. This is likely to initially involve an automated AI check of finished CCE evaluations as a quality control measure. Once deemed reliable, AI could be used in conjunction with a ‘pre-reader’, before adopting more of this role by sending provisional results and abnormal frames to the validator. With time, AI would be able to evaluate the findings more thoroughly and reduce the input required from human readers, ultimately autogenerating a highly accurate report and recommendation of therapy, if required, for any pathology identified. As with many medical fields reliant on image recognition, AI will be a welcome aid in CCE. Initially, this will be as an adjunct to ‘double-check’ that nothing has been missed, but with time will hopefully lead to a faster, more convenient diagnostic service for the screening population.