Various artifacts, such as ghost colors, interlacing, and motion blur, hinder
diagnosing colorectal cancer (CRC) from videos acquired during colonoscopy. The
frames containing these artifacts are called uninformative frames and are
present in large proportions in colonoscopy videos. To alleviate the impact of
artifacts, we propose an adversarial network based framework to convert
uninformative frames to clinically relevant frames. We examine the
effectiveness of the proposed approach by evaluating the translated frames for
polyp detection using YOLOv5. Preliminary results show improved detection
performance along with promising qualitative outcomes. We also examine
failure cases to identify directions for future work.

Comment: Student Abstract, Accepted at AAAI 202