
    A new audio-visual analysis approach and tools for parsing colonoscopy videos

    Colonoscopy is an important screening tool for colorectal cancer. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. We call videos captured from colonoscopic procedures colonoscopy videos. Because these videos possess unique characteristics, new types of semantic units and parsing techniques are required. In this paper, we introduce a new analysis approach that includes (a) a new definition of a semantic unit, the scene (a segment of visual and audio data that corresponds to an endoscopic segment of the colon), and (b) a novel scene segmentation algorithm that uses audio and visual analysis to recognize scene boundaries. We design a prototype system to implement the proposed approach. The system also provides tools for video/image browsing, which enable users to quickly locate and browse scenes of interest. Experiments on real colonoscopy videos show the effectiveness of our algorithms. The proposed techniques and software are useful (1) for post-procedure reviews, (2) for developing an effective content-based retrieval system for colonoscopy videos to facilitate endoscopic research and education, and (3) for developing a systematic approach to assess endoscopists' procedural skills.
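The abstract above describes combining audio and visual cues to recognize scene boundaries. As a minimal sketch of that idea (not the paper's actual algorithm), the toy function below flags frames where a large visual change coincides with low audio energy, e.g. a pause in the endoscopist's narration; the thresholds and the function name are illustrative assumptions:

```python
import numpy as np

def detect_scene_boundaries(frame_diffs, audio_energy,
                            visual_thresh=0.5, audio_thresh=0.1):
    """Flag frame indices where a large visual change coincides with
    low audio energy, as a crude proxy for scene boundaries.
    Thresholds are illustrative, not tuned values from the paper."""
    boundaries = []
    for i, (dv, ea) in enumerate(zip(frame_diffs, audio_energy)):
        if dv > visual_thresh and ea < audio_thresh:
            boundaries.append(i)
    return boundaries

# Toy signals: a visual jump at index 3 during an audio pause.
diffs = np.array([0.1, 0.2, 0.1, 0.9, 0.2])
energy = np.array([0.5, 0.4, 0.5, 0.05, 0.4])
print(detect_scene_boundaries(diffs, energy))  # [3]
```

A real system would compute `frame_diffs` from histogram or pixel differences between consecutive frames and `audio_energy` from short-time windows of the recorded audio track.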

    Deep Learning for Improved Polyp Detection from Synthetic Narrow-Band Imaging

    To cope with the growing prevalence of colorectal cancer (CRC), screening programs for polyp detection and removal have proven their usefulness. Colonoscopy is considered the best-performing procedure for CRC screening. To ease the examination, deep learning-based methods for automatic polyp detection have been developed for conventional white-light imaging (WLI). Compared with WLI, narrow-band imaging (NBI) can improve polyp classification during colonoscopy but requires special equipment. We propose a CycleGAN-based framework to convert images captured with regular WLI to synthetic NBI (SNBI) as a pre-processing method for improving object detection on WLI when NBI is unavailable. This paper first shows that better results for polyp detection can be achieved on NBI compared to a relatively similar dataset of WLI. Secondly, experimental results demonstrate that our proposed modality translation can achieve improved polyp detection on SNBI images generated from WLI compared to the original WLI. This is because our WLI-to-SNBI translation model can enhance the observation of polyp surface patterns in the generated SNBI images.
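The paper's WLI-to-SNBI translation is performed by a trained CycleGAN generator. As a structural sketch only, the stub below stands in for that learned generator with a hypothetical fixed channel-mixing matrix (NBI illumination uses narrow blue/green bands, so red is suppressed); the matrix values and function name are invented for illustration and are not the paper's model:

```python
import numpy as np

def wli_to_snbi_stub(wli_rgb):
    """Stand-in for the learned CycleGAN generator G: WLI -> SNBI.
    A fixed channel-mixing matrix (hypothetical, NOT the trained
    network) suppresses red and emphasizes the green/blue channels."""
    mix = np.array([[0.1, 0.0, 0.0],   # red largely suppressed
                    [0.0, 0.9, 0.1],   # green mostly kept
                    [0.0, 0.2, 0.9]])  # blue boosted with green
    h, w, _ = wli_rgb.shape
    snbi = wli_rgb.reshape(-1, 3) @ mix.T
    return np.clip(snbi, 0.0, 1.0).reshape(h, w, 3)

# A random 4x4 RGB "frame" with values in [0, 1).
frame = np.random.default_rng(0).random((4, 4, 3))
snbi = wli_to_snbi_stub(frame)
print(snbi.shape)  # (4, 4, 3)
```

In the actual pipeline, the stub would be replaced by the trained generator, and the resulting SNBI frames would be fed to the downstream polyp detector instead of the original WLI frames.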

    Edge cross-section profile for colonoscopic object detection

    Colorectal cancer is the second leading cause of cancer-related deaths, claiming close to 50,000 lives annually in the United States alone. Colonoscopy is an important screening tool that has contributed to a significant decline in colorectal cancer-related deaths. During colonoscopy, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the human colon. The video data are displayed on a monitor for real-time diagnosis by the endoscopist. Despite the success of colonoscopy in lowering cancer-related deaths, a significant miss rate, estimated at around 4-12%, remains for the detection of both large polyps and cancers. As a result, in recent years, many computer-aided object detection techniques have been developed with the ultimate goal of assisting the endoscopist in lowering the polyp miss rate. Automatic object detection in video data recorded during colonoscopy is challenging due to the noisy nature of endoscopic images, caused by camera motion, strong light reflections, the wide-angle lens that cannot be automatically focused, and the variations in location and appearance of objects within the colon. The unique characteristics of colonoscopy video require new image/video analysis techniques. This dissertation presents our investigation of the edge cross-section profile (ECSP), a local appearance model, for colonoscopic object detection. We propose several methods to derive new features on the ECSP from its surrounding region pixels, its first-order derivative profile, and its second-order derivative profile. These ECSP features describe discriminative patterns for different types of objects in colonoscopy. The new algorithms and software using the ECSP features can effectively detect three representative types of objects and extract their corresponding semantic units, in terms of both accuracy and analysis time. The main contributions of the dissertation are summarized as follows.
The dissertation presents 1) a new ECSP calculation method and a feature-based ECSP method that extracts features on the ECSP for object detection, 2) an edgeless ECSP method that calculates the ECSP without using edges, 3) a part-based multi-derivative ECSP algorithm that segments the ECSP and its first-order and second-order derivative functions into parts and models each part using the method best suited to that part, 4) ECSP-based algorithms for detecting three representative types of colonoscopic objects, including appendiceal orifices, endoscopes during retroflexion operations, and polyps, and for extracting videos or segmented shots containing these objects as semantic units, and 5) a software package that implements these techniques and provides meaningful visual feedback of the detected results to the endoscopist. Ideally, we would like the software to provide feedback to the endoscopist before the next video frame becomes available and to process video data at the rate at which the data are captured (typically about 30 frames per second (fps)). This real-time requirement is difficult to achieve using today's affordable off-the-shelf workstations. We aim to achieve near real-time performance, where the analysis and feedback complete at a rate of at least 1 fps. The dissertation has the following broad impacts. Firstly, the performance study shows that our proposed ECSP-based techniques are promising in terms of both detection rate and execution time for detecting the appearance of the three aforementioned types of objects in colonoscopy video. Our ECSP-based techniques can be extended both to detect other types of colonoscopic objects, such as diverticula, the lumen, and vessels, and to analyze other endoscopy procedures, such as laparoscopy, upper gastrointestinal endoscopy, wireless capsule endoscopy, and EGD.
Secondly, to the best of our knowledge, our polyp detection system is the only computer-aided system that can warn the endoscopist of the appearance of polyps in near real time. Our retroflexion detection system is also the first computer-aided system that can detect retroflexion in near real time. Retroflexion is a maneuver used by the endoscopist to inspect areas of the colon that are hard to reach. The use of our system in future clinical trials may contribute to a decline in the polyp miss rate during live colonoscopy. Our system may also serve as a training platform for novice endoscopists. Lastly, the automatic documentation of detected semantic units of colonoscopic objects can help discover unknown patterns of colorectal cancers or new diseases and can be used as an educational resource for endoscopic research.
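The core of the ECSP model described above is the intensity profile sampled across an edge, together with its first- and second-order derivative profiles. As a minimal sketch under simplifying assumptions (nearest-neighbor sampling, a synthetic step edge, and `numpy.gradient` for the derivatives; the function name and parameters are illustrative, not the dissertation's implementation):

```python
import numpy as np

def ecsp(image, edge_point, normal, half_len=5):
    """Sample the gray-level profile across an edge: read intensities
    at points along the edge normal (nearest-neighbor sampling), then
    take the first- and second-order derivative profiles."""
    y0, x0 = edge_point
    ny, nx = normal
    offsets = np.arange(-half_len, half_len + 1)
    ys = np.clip(np.round(y0 + offsets * ny).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(x0 + offsets * nx).astype(int), 0, image.shape[1] - 1)
    profile = image[ys, xs].astype(float)
    d1 = np.gradient(profile)  # first-order derivative profile
    d2 = np.gradient(d1)       # second-order derivative profile
    return profile, d1, d2

# Synthetic step edge: dark left half, bright right half.
img = np.zeros((11, 11))
img[:, 6:] = 1.0
p, d1, d2 = ecsp(img, edge_point=(5, 5), normal=(0.0, 1.0))
print(p)  # profile steps from 0.0 to 1.0 across the edge
```

Features for object detection would then be computed from `p`, `d1`, and `d2`, e.g. by segmenting each profile into parts and describing the shape of each part, in the spirit of the part-based multi-derivative approach summarized above.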