    Frontiers of robotic endoscopic capsules: a review

    Digestive diseases are a major burden for society and healthcare systems, and with an aging population, the importance of their effective management will become critical. Healthcare systems worldwide already struggle to ensure the quality and affordability of healthcare delivery, and this will be a significant challenge in the mid-term future. Wireless capsule endoscopy (WCE), introduced in 2000 by Given Imaging Ltd., is an example of disruptive technology and represents an attractive alternative to traditional diagnostic techniques. WCE overcomes the limitations of conventional endoscopy by enabling inspection of the digestive system without discomfort or the need for sedation. Thus, it has the advantage of encouraging patients to undergo gastrointestinal (GI) tract examinations and of facilitating mass screening programmes. With the integration of further capabilities based on microrobotics, e.g. active locomotion and embedded therapeutic modules, WCE could become the key technology for GI diagnosis and treatment. This review presents a research update on WCE and describes the state of the art of current endoscopic devices, with a focus on research-oriented robotic capsule endoscopes enabled by microsystem technologies. The article also presents a visionary perspective on the potential of WCE for screening, diagnostic and therapeutic endoscopic procedures.

    Eye-tracking the moving medical image: Development and investigation of a novel investigational tool for CT Colonography

    Colorectal cancer remains the third most common cancer in the UK but the second leading cause of cancer death, with >16,000 deaths per year. Many advances have been made in recent years in all areas of investigation for colorectal cancer, one of the more notable being the widespread introduction of CT Colonography (CTC). CTC has rapidly established itself as a cornerstone of diagnosis for colonic neoplasia, and much work has been done to standardise and assure quality in practice in both the acquisition and interpretation of the technique. A novel feature of CTC is the presentation of imaging in both the traditional 2D and the ‘virtual’ 3D endoluminal formats. This thesis looks at expanding our understanding of, and improving our performance in, utilising the endoluminal 3D view. We present and develop novel metrics applicable to eye-tracking the moving image, so that the complex dynamic nature of 3D endoluminal fly-through interpretation can be captured. These metrics are then applied to assess the effect of important elements of image interpretation, namely reader experience, the use of Computer Aided Detection (CAD) and the influence of the expected prevalence of abnormality. We review our findings with reference to the literature on eye tracking within medical imaging. In the co-registration section we apply our validated computer-assisted registration algorithm to the matching of 3D endoluminal colonic locations between temporally separate datasets, assessing its accuracy as an aid to colonic polyp surveillance with CTC.
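
    To illustrate the accuracy assessment mentioned for the co-registration work, the following is a minimal, hypothetical Python sketch that scores matched endoluminal locations by their distance along the colonic centreline against a tolerance. The millimetre representation, the 20 mm tolerance and all function names are assumptions for illustration, not details taken from the thesis.

    # Hypothetical sketch: scoring endoluminal location matches between two
    # temporally separate CTC acquisitions. Locations are assumed to be given
    # as distances (mm) along the colonic centreline.

    def registration_errors(predicted_mm, reference_mm):
        """Absolute centreline distance error for each matched location."""
        return [abs(p - r) for p, r in zip(predicted_mm, reference_mm)]

    def success_rate(errors_mm, tolerance_mm=20.0):
        """Fraction of matches falling within a distance tolerance."""
        if not errors_mm:
            return 0.0
        return sum(e <= tolerance_mm for e in errors_mm) / len(errors_mm)

    if __name__ == "__main__":
        errs = registration_errors([105.0, 421.5, 880.0], [100.0, 430.0, 905.0])
        print(errs)               # [5.0, 8.5, 25.0]
        print(success_rate(errs)) # 2 of 3 matches within 20 mm -> 0.67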

    Toward a Human-Centered AI-assisted Colonoscopy System

    AI-assisted colonoscopy has received considerable attention in the last decade. Several randomised clinical trials in the previous two years have shown exciting improvements in polyp detection rates. However, current commercial AI-assisted colonoscopy systems focus on providing visual assistance for detecting polyps during colonoscopy, and there is a lack of understanding of the needs of gastroenterologists and of the usability issues of these systems. This paper aims to introduce the recent development and deployment of commercial AI-assisted colonoscopy systems to the HCI community, identify gaps between the expectations of clinicians and the capabilities of the commercial systems, and highlight some unique challenges in Australia.

    Supervised CNN strategies for optical image segmentation and classification in interventional medicine

    The analysis of interventional images is a topic of high interest for the medical-image analysis community. Such analysis may provide interventional-medicine professionals with both decision support and context awareness, with the final goal of improving patient safety. The aim of this chapter is to give an overview of some of the most recent approaches (up to 2018) in the field, with a focus on Convolutional Neural Networks (CNNs) for both segmentation and classification tasks. For each approach, summary tables report the dataset used, the anatomical region involved and the performance achieved. Benefits and disadvantages of each approach are highlighted and discussed. Available datasets for algorithm training and testing, and commonly used performance metrics, are summarised to offer a source of information for researchers approaching the field of interventional-image analysis. The advancements in deep learning for medical-image analysis increasingly involve the interventional-medicine field. However, these advancements are undeniably slower than in other fields (e.g. preoperative-image analysis), and considerable work still needs to be done to provide clinicians with all possible support during interventional-medicine procedures.
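
    As a companion to the overview above, the following is a minimal sketch (in PyTorch) of the kind of supervised CNN classifier the chapter surveys. The architecture, the two-class output and the 224 x 224 input size are illustrative assumptions and do not reproduce any specific approach reviewed in the chapter.

    # Minimal illustrative CNN classifier for interventional video frames.
    import torch
    import torch.nn as nn

    class FrameClassifier(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            x = self.features(x)       # (B, 32, 1, 1)
            x = torch.flatten(x, 1)    # (B, 32)
            return self.classifier(x)  # (B, num_classes) logits

    if __name__ == "__main__":
        model = FrameClassifier(num_classes=2)
        frames = torch.randn(4, 3, 224, 224)  # batch of RGB frames
        print(model(frames).shape)            # torch.Size([4, 2])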

    Enhancing endoscopic navigation and polyp detection using artificial intelligence

    Colorectal cancer (CRC) is one of the most common and deadly forms of cancer. It has a very high mortality rate if the disease advances to late stages; however, early diagnosis and treatment can be curative, and are hence essential to effective disease management. Colonoscopy is considered the gold standard for CRC screening and early therapeutic treatment. The effectiveness of colonoscopy is highly dependent on the operator’s skill, as a high level of hand-eye coordination is required to control the endoscope and fully examine the colon wall. Because of this, detection rates can vary between gastroenterologists, and technological solutions have been proposed to assist disease detection and standardise detection rates. This thesis focuses on developing artificial intelligence algorithms to assist gastroenterologists during colonoscopy, with the potential to ensure a baseline standard of quality in CRC screening. To achieve such assistance, the technical contributions develop deep learning methods and architectures for automated endoscopic image analysis, addressing both the detection of lesions in the endoscopic image and the 3D mapping of the endoluminal environment. The proposed detection models can run in real time and assist the visualisation of different polyp types. Meanwhile, the 3D reconstruction and mapping models developed are the basis for ensuring that the entire colon has been examined appropriately and for supporting quantitative measurement of polyp sizes from the image during a procedure. Results and validation studies presented within the thesis demonstrate how the developed algorithms perform on both general scenes and clinical data. The feasibility of clinical translation is demonstrated for all of the models on endoscopic data from human participants during CRC screening examinations.
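
    To illustrate the real-time, per-frame use described above, the following is a minimal sketch of a frame-by-frame detection loop with a timing budget. The detector is a placeholder and the 25 frames-per-second target is an assumption; the thesis's actual models and interfaces are not reproduced here.

    # Illustrative per-frame detection loop with a real-time budget check.
    import time
    import numpy as np

    def detect_polyps(frame):
        """Placeholder detector returning (x, y, w, h) bounding boxes."""
        return []  # a trained model's forward pass would go here

    def run_on_stream(frames, target_fps=25.0):
        """Process frames sequentially, reporting whether each stays in budget."""
        budget = 1.0 / target_fps
        for i, frame in enumerate(frames):
            start = time.perf_counter()
            boxes = detect_polyps(frame)
            elapsed = time.perf_counter() - start
            yield i, boxes, elapsed <= budget

    if __name__ == "__main__":
        stream = (np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(5))
        for idx, boxes, in_time in run_on_stream(stream):
            print(f"frame {idx}: {len(boxes)} detections, real-time={in_time}")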

    Colonoscopy polyp detection and classification: Dataset creation and comparative evaluations

    Colorectal cancer (CRC) is one of the most common types of cancer, with a high mortality rate. Colonoscopy is the preferred procedure for CRC screening and has proven to be effective in reducing CRC mortality. Thus, a reliable computer-aided polyp detection and classification system can significantly increase the effectiveness of colonoscopy. In this paper, we create an endoscopic dataset collected from various sources and annotate the ground truth of polyp location and classification results with the help of experienced gastroenterologists. The dataset can serve as a benchmark platform to train and evaluate machine learning models for polyp classification. We have also compared the performance of eight state-of-the-art deep learning-based object detection models. The results demonstrate that deep CNN models are promising in CRC screening. This work can serve as a baseline for future research in polyp detection and classification.
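
    As an illustration of how such a benchmark comparison is typically scored, the following is a minimal sketch that matches predicted boxes against annotated ground truth using intersection-over-union (IoU). The (x1, y1, x2, y2) box format and the 0.5 IoU threshold are common conventions assumed here, not details taken from the paper.

    # Illustrative IoU-based matching of predicted boxes to ground truth.

    def iou(box_a, box_b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def count_true_positives(predictions, ground_truth, threshold=0.5):
        """Greedy one-to-one matching of predictions to ground-truth boxes."""
        unmatched, tp = list(ground_truth), 0
        for pred in predictions:
            best = max(unmatched, key=lambda gt: iou(pred, gt), default=None)
            if best is not None and iou(pred, best) >= threshold:
                tp += 1
                unmatched.remove(best)
        return tp

    if __name__ == "__main__":
        preds = [(10, 10, 50, 50), (100, 100, 140, 150)]
        truth = [(12, 8, 48, 52)]
        print(count_true_positives(preds, truth))  # 1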

    Towards a framework for analysis of eye-tracking studies in the three dimensional environment: a study of visual search by experienced readers of endoluminal CT colonography.

    Objective: Eye tracking in three dimensions is novel, but established descriptors derived from two-dimensional (2D) studies are not transferable. We aimed to develop metrics suitable for statistical comparison of eye-tracking data obtained from readers of three-dimensional (3D) “virtual” medical imaging, using CT colonography (CTC) as a typical example. Methods: Ten experienced radiologists were eye tracked while observing eight 3D endoluminal CTC videos. Subsequently, we developed metrics that described their visual search patterns based on concepts derived from 2D gaze studies. Statistical methods were developed to allow analysis of the metrics. Results: Eye tracking was possible for all readers. Visual dwell on the moving region of interest (ROI) was defined as pursuit of the moving object across multiple frames. Using this concept of pursuit, five categories of metrics were defined that allowed characterization of reader gaze behaviour. These were time to first pursuit, identification and assessment time, pursuit duration, ROI size and pursuit frequency. Additional subcategories allowed us to further characterize visual search between readers in the test population. Conclusion: We propose metrics for the characterization of visual search of 3D moving medical images. These metrics can be used to compare readers’ visual search patterns and provide a reproducible framework for the analysis of gaze tracking in the 3D environment. Advances in knowledge: This article describes a novel set of metrics that can be used to describe gaze behaviour when eye tracking readers during interpretation of 3D medical images. These metrics build on those established for 2D eye tracking and are applicable to increasingly common 3D medical image displays.
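
    To make the pursuit-based metrics concrete, the following is a minimal sketch that derives time to first pursuit, pursuit frequency and total pursuit duration from a per-frame record of whether gaze fell on the moving ROI. The boolean-per-frame representation and the 30 frames-per-second assumption are illustrative; the paper's exact definitions (e.g. any minimum pursuit length) are not reproduced here.

    # Illustrative computation of pursuit-style gaze metrics for one video.

    def pursuit_metrics(on_roi, frame_duration_s=1 / 30):
        """Return (time to first pursuit, pursuit count, total pursuit time)."""
        pursuits, start = [], None
        for i, hit in enumerate(on_roi):
            if hit and start is None:
                start = i
            elif not hit and start is not None:
                pursuits.append((start, i - start))
                start = None
        if start is not None:
            pursuits.append((start, len(on_roi) - start))

        if not pursuits:
            return None, 0, 0.0
        first = pursuits[0][0] * frame_duration_s
        total = sum(length for _, length in pursuits) * frame_duration_s
        return first, len(pursuits), total

    if __name__ == "__main__":
        gaze = [False, False, True, True, True, False, True, True, False]
        print(pursuit_metrics(gaze))  # (~0.067 s, 2 pursuits, ~0.167 s total)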